[ { "msg_contents": "Hi all,\n\nI'm trying to create an index on a composite key using a DECIMAL type\nbut PostgreSQL raises the following error:\n\n\nCREATE TABLE header (\n year decimal(4) NOT NULL,\n number INTEGER NOT NULL,\n date DATE NOT NULL,\n cod_client CHAR(4) NOT NULL,\n CONSTRAINT k_header PRIMARY KEY (year,number)\n );\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'k_header'\nfor tabl\ne 'header'\nERROR: Can't find a default operator class for type 1700.\n\n-----------------------------------------------------------------------\nPostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc 2.7.2.1\n\nJosé\n\n", "msg_date": "Tue, 10 Aug 1999 10:05:13 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "another DECIMAL problem " }, { "msg_contents": "> I'm trying to create an index on a composite key using a DECIMAL type\n> but PostgreSQL raises the following error:\n> ERROR: Can't find a default operator class for type 1700.\n\nafaik Postgres does not yet implement indices for NUMERIC or DECIMAL.\nDon't know if anyone has plans to do so...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 13:17:36 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] another DECIMAL problem" }, { "msg_contents": "> > I'm trying to create an index on a composite key using a DECIMAL type\n> > but PostgreSQL raises the following error:\n> > ERROR: Can't find a default operator class for type 1700.\n> \n> afaik Postgres does not yet implement indices for NUMERIC or DECIMAL.\n> Don't know if anyone has plans to do so...\n> \n> - Thomas\n\nTODO list has:\n\n\t* Add index on NUMERIC/DECIMAL type\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 12:56:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] another DECIMAL problem" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi all,\n> \n> I'm trying to create an index on a composite key using a DECIMAL type\n> but PostgreSQL raises the following error:\n> \n> \n> CREATE TABLE header (\n> year decimal(4) NOT NULL,\n> number INTEGER NOT NULL,\n> date DATE NOT NULL,\n> cod_client CHAR(4) NOT NULL,\n> CONSTRAINT k_header PRIMARY KEY (year,number)\n> );\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'k_header'\n> for tabl\n> e 'header'\n> ERROR: Can't find a default operator class for type 1700.\n> \n\nWe have the TODO item:\n\n\t* Add index on NUMERIC/DECIMAL type\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:05:29 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] another DECIMAL problem" }, { "msg_contents": "Bruce Momjian wrote:\n\n> > ERROR: Can't find a default operator class for type 1700.\n>\n> We have the TODO item:\n>\n> * Add index on NUMERIC/DECIMAL type\n>\n\n Nbtree operator class for NUMERIC is committed.\n\n Item closed.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Wed, 29 Sep 1999 23:10:03 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] another DECIMAL problem" }, { "msg_contents": "> Bruce Momjian wrote:\n> \n> > > ERROR: Can't find a default operator class for type 1700.\n> >\n> > We have the TODO item:\n> >\n> > * Add index on NUMERIC/DECIMAL type\n> >\n> \n> Nbtree operator class for NUMERIC is committed.\n> \n> Item closed.\n> \n\nTODO item marked as completed:\n\n\t* -Add index on NUMERIC/DECIMAL type\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 29 Sep 1999 17:40:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] another DECIMAL problem" } ]
[ { "msg_contents": "Philip Warner wrote:\n> \n> Did you ever make a decision about parrallel logging etc?\n\nYes. I decided to make first implementation without parallel \nlogging and run tests to see how many log spinlock contentions \nwill be there.\nI finally found parallel logging in Oracle - he uses it\nwith parallel server option (for clusters).\n\nI already began codding and hope to have something in\n~ 3 weeks...\n\nThanks to all!\n\nVadim\n", "msg_date": "Tue, 10 Aug 1999 18:40:59 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] WAL: parallel logging" } ]
[ { "msg_contents": "\nThe poll on linuxdev is over. Here are the results:\n\n\n\n\nPostgreSQL\t\t38.6% - (598 Votes)\nMySQL\t\t\t23.9% - (371 Votes)\nmSQL\t\t\t20.7% - (321 Votes)\nInformix\t\t10.9% - (169 Votes)\nOracle\t\t\t 3.5% - (55 Votes)\nDB2\t\t\t 0.8% - (13 Votes)\nSybase\t\t\t 0.7% - (12 Votes)\nOthers\t\t\t 0.5% - (8 Votes)\nInterbase\t\t 0.1% - (2 Votes)\n\n\nTotal Votes: 1549\n\n\nYep. We Won!!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 10 Aug 1999 08:11:36 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": true, "msg_subject": "linuxdev.net poll results" } ]
[ { "msg_contents": "\n\tGreetings, Is there any way to have an spi function return the result of a query, or \nto connect two tables in a view? Specifically, I have a table of times and locations from an \naerial flight, and a table of images from that flight, and I would like to select the images \nthat occur within a specified area. I had thought to join two tables in a view which I know \nthat you can do with other databases but I cannot seem to determine how to do it in postgres. \nAlternately I had thought to use an spi function but I do not know how or if I can have it \nreturn the result of a select statement of more than one record. The database ins Postgresql \n6.5.1 on a Linux (RH 6) box.\n\tAny help would be appreciated.\n\t\n\tCollin Lynch.\n\n\n", "msg_date": "Tue, 10 Aug 1999 09:25:56 -0400 (EDT)", "msg_from": "\"Collin F. Lynch\" <[email protected]>", "msg_from_op": true, "msg_subject": "[INTERFACES] spi, tuples" }, { "msg_contents": "> Is there any way to ... connect two tables in a view?\n> Specifically, I have a table of times and locations from an\n> aerial flight, and a table of images from that flight,\n> and I would like to select the images\n> that occur within a specified area. I had thought to join \n> two tables in a view which I know\n> that you can do with other databases but I cannot seem to \n> determine how to do it in postgres.\n\nHave you just tried joining the tables on your datetime field? Is\nthere a problem with table joins and your images (presumably large\nobjects)?\n\nIf the simple join doesn't work, then doing the same in a view\nprobably won't either. How about sending the query and the schema?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 10 Aug 1999 13:41:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [INTERFACES] spi, tuples" } ]
[ { "msg_contents": "Sorry for the duplicate mail. Been having problems with outgoing (just to\ngive you an idea about the kinds of problems: I sent this message over 24\nhours ago).\nDEJ\n\n> -----Original Message-----\n> From:\tJackson, DeJuan [SMTP:[email protected]]\n> Sent:\tMonday, August 09, 1999 3:15 PM\n> To:\tPGSQL Hackers\n> Subject:\t[HACKERS] Drop table abort\n> \n> It seem that a drop table while in a transaction keeps the table but not\n> the\n> data. Bug? or undocumented feature?\n> \n> testcase=> select * from t;\n> i\n> -\n> (0 rows)\n> \n> testcase=> insert into t VALUES(1); \n> INSERT 551854 1\n> testcase=> insert into t VALUES(2); \n> INSERT 551855 1\n> testcase=> insert into t VALUES(3); \n> INSERT 551856 1\n> testcase=> select * from t;\n> i\n> -\n> 1\n> 2\n> 3\n> (3 rows)\n> \n> testcase=> begin;\n> BEGIN\n> testcase=> insert into t VALUES(4);\n> INSERT 551857 1\n> testcase=> drop table t;\n> DROP\n> testcase=> abort;\n> ABORT\n> testcase=> select * from t;\n> i\n> -\n> (0 rows)\n> \n> testcase=> select version(); \n> version \n> --------------------------------------------------------------\n> PostgreSQL 6.5.0 on i686-pc-linux-gnu, compiled by gcc 2.7.2.3\n", "msg_date": "Tue, 10 Aug 1999 16:46:57 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Drop table abort" } ]
[ { "msg_contents": "I have been using MicroEmacs for 9 years, and have been looking for a\nnice X editor. I looked at Xemacs(too complex to configure), and some\nothers, but they did not have the required features. I like a powerful\nsearch/replace, tags support, macro support, as-you-type syntax\ncolorization with user-definable languages, keyboard recording/playback,\netc.\n\nI found that the commercial Crisp editor from\nhttp://www.vital.com/crisp.htm does exactly what I want. It has the\nperfect balance between power and lean-ness I am looking for. It is\nonly $75 for non-commerical use until the end of August for PC's, Linux,\n*BSD's. Support is $100/year.\n\nIt is being actively developed by someone in England. I have found a\nfew bugs, and they are working on them now.\n\nThe license manager sounds very strict for an editor. For BSDI, it\nlocks to the BSDI host license id, not the CPU id, which pre-Pentium\nIII's don't have anyway. Not sure how the lock a MS Windows PC or\nLinux.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 17:55:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Crisp text editor" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I have been using MicroEmacs for 9 years, and have been looking for a\n> nice X editor. I looked at Xemacs(too complex to configure), and some\n> others, but they did not have the required features. \n\nHave you checked out CodeCrusader ?\n(at http://www.cco.caltech.edu/~jafl/jcc )\n\n> I like a powerful\n> search/replace, tags support, macro support, as-you-type syntax\n> colorization with user-definable languages,\n\nSeems to still miss python colorization :(\n\n> keyboard recording/playback, etc.\n>\n> I found that the commercial Crisp editor from\n> http://www.vital.com/crisp.htm does exactly what I want. It has the\n> perfect balance between power and lean-ness I am looking for. It is\n> only $75 for non-commerical use until the end of August for PC's, Linux,\n> *BSD's. Support is $100/year.\n> \n> It is being actively developed by someone in England. I have found a\n> few bugs, and they are working on them now.\n> \n> The license manager sounds very strict for an editor. For BSDI, it\n> locks to the BSDI host license id, not the CPU id, which pre-Pentium\n> III's don't have anyway. Not sure how the lock a MS Windows PC or\n> Linux.\n\nDoes the non-commercial version also have a lock against commercial use\n?\n\n--------\nHannu\n", "msg_date": "Wed, 11 Aug 1999 01:37:14 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp text editor (probably OT)" }, { "msg_contents": "> Bruce Momjian wrote:\n> > \n> > I have been using MicroEmacs for 9 years, and have been looking for a\n> > nice X editor. I looked at Xemacs(too complex to configure), and some\n> > others, but they did not have the required features. \n> \n> Have you checked out CodeCrusader ?\n> (at http://www.cco.caltech.edu/~jafl/jcc )\n\nYes, codecrusader has no user-defined language coloring. All hardcoded\nin C++. It seems more like a integrated development environment(IDE),\nthan an editor with macro support and keyboard playback.\n\n> \n> > I like a powerful\n> > search/replace, tags support, macro support, as-you-type syntax\n> > colorization with user-definable languages,\n> \n> Seems to still miss python colorization :(\n\nCrisp has it. A python mode already defined, though you can define your\nown in a few minutes. If you are trying Crisp, go to Options/Buffer,\nand choose python as your colorizer. Colorizers are defined in keyword\nbuilder.\n\n> \n> > keyboard recording/playback, etc.\n> >\n> > I found that the commercial Crisp editor from\n> > http://www.vital.com/crisp.htm does exactly what I want. It has the\n> > perfect balance between power and lean-ness I am looking for. It is\n> > only $75 for non-commerical use until the end of August for PC's, Linux,\n> > *BSD's. Support is $100/year.\n> > \n> > It is being actively developed by someone in England. I have found a\n> > few bugs, and they are working on them now.\n> > \n> > The license manager sounds very strict for an editor. For BSDI, it\n> > locks to the BSDI host license id, not the CPU id, which pre-Pentium\n> > III's don't have anyway. Not sure how the lock a MS Windows PC or\n> > Linux.\n> \n> Does the non-commercial version also have a lock against commercial use\n> ?\n\nNon-commercial is cheaper. That is the only difference. \nCommercial/noncommercial is just what you tell the sales person.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 18:45:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Crisp text editor (probably OT)" }, { "msg_contents": "Have you seen JED ?\nIt's powerful editor with all (afaik) features you need.\n\nhttp://space.mit.edu/~davis/jed.html\n\n\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 11 Aug 1999 09:29:48 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp text editor" }, { "msg_contents": "Bruce Momjian wrote:\n> > Does the non-commercial version also have a lock against commercial use\n> > ?\n> \n> Non-commercial is cheaper. That is the only difference.\n> Commercial/noncommercial is just what you tell the sales person.\n\nDon't they have their own definition of 'non-commercial' ?\n\nI also have a vague idea of what non-commercial means, but it gets \nreally hairy for for things like a a free database with commercial \nsupport - one has to be really careful not to do any for-pay work \nusing a non-commercial tool :)\n", "msg_date": "Wed, 11 Aug 1999 09:31:17 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp text editor (probably OT)" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> I have been using MicroEmacs for 9 years, and have been looking for a\n> nice X editor. I looked at Xemacs(too complex to configure), and some\n> others, but they did not have the required features.\n\nI dunno about Xemacs, but regular GNU Emacs is no big deal to install;\nat least it wasn't last time I did it. (I have notes from installing\n19.34 on HPUX and SunOS, if you want 'em.) Recent versions do menus,\ncut&paste, syntax-driven highlighting, etc.\n\nIf you're accustomed to an emacs-clone you probably won't be happy with\nanything else. (But I've been using various flavors of Emacs since\n'81, so I may be a tad biased...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Aug 1999 10:15:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp text editor [definitely OT]" }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > I have been using MicroEmacs for 9 years, and have been looking for a\n> > nice X editor. I looked at Xemacs(too complex to configure), and some\n> > others, but they did not have the required features.\n> \n> I dunno about Xemacs, but regular GNU Emacs is no big deal to install;\n> at least it wasn't last time I did it. (I have notes from installing\n> 19.34 on HPUX and SunOS, if you want 'em.) Recent versions do menus,\n> cut&paste, syntax-driven highlighting, etc.\n\nIt was easy to compile, and install, just a pain to change anything on\nit. I just couldn't understand how to do it at all. When I asked on\nIRC, someone said change Xresources to change the background color. It\nis a pain to make color changes in Xresources to make all the colors\nlook good together. It just seemed obvious things you would want to\nconfigure in an editor were not there, like tab size.\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Aug 1999 10:19:35 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Crisp text editor [definitely OT]" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> It just seemed obvious things you would want to\n> configure in an editor were not there, like tab size.\n\nM-x set-variable tab-width.\n\nActually, I use the following command to customize Emacs for working\nwith the Postgres sources:\n\n; Cmd to set tab stops &etc for working with PostgreSQL code\n(defun pgsql-mode ()\n \"Set PostgreSQL C indenting conventions in current buffer.\"\n (interactive)\n (c-mode)\t\t\t\t; necessary to make c-set-offset local!\n (setq tab-width 4)\t\t\t; already buffer-local\n ; (setq comment-column 48)\t\t; already buffer-local\n (c-set-style \"bsd\")\n (c-set-offset 'case-label '+)\n)\n\nThis produces a pretty close approximation to the project's standard\nindentation rules. The only thing I've noticed it doesn't get right\nis that it doesn't know to put the left '{' after a foreach(...) at\nthe same indent as the foreach line --- you have to manually\nunindent the '{' one stop before you continue entering code.\nI haven't got round to figuring out how to tell the syntaxer that\nforeach is a loop keyword, although I'm sure it can be done.\n\nI have two or three other such macros for customizing to the indent\nhabits of other projects ... buffer-local settings are nice ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Aug 1999 11:04:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp text editor [definitely OT] " }, { "msg_contents": "Bruce Momjian writes:\n > It was easy to compile, and install, just a pain to change anything on\n > it. I just couldn't understand how to do it at all. When I asked on\n > IRC, someone said change Xresources to change the background color. It\n > is a pain to make color changes in Xresources to make all the colors\n > look good together. It just seemed obvious things you would want to\n > configure in an editor were not there, like tab size.\n\nI believe the guys at #emacs mislead you somewhat on this one.\nYou don't have to escape to the cruel world of resources to change\ncolors or faces in emacs.\n\nThe proper/simplest way of configuring customizable options in (X)emacs is through \nM-x customize , and for faces (fonts used in various contexts) you use \nM-x customize-face .\n\nTo change the background color for all faces, change the background\nfor the face 'Default'.\n\nThis will present you with a browsable user interface to all the\ntweakable knobs in your emacs system.\n\nAs a new citizen of emacs it might seem overwhelming with the\npossibilities of customization and applications available, but after passing the\ninitial treshold of learning the basics you're almost certain to find\nit a rewarding environment to work in.\n\nYes, I'm probably very biased on the subject, but I urge you to give\nit a real try. \n\n/Daniel\n\nPS. \nLooking at the screenshots from the (unfree) Crisp editor, it seems to \noffer the same file/class browsing capabilities as speedbar in emacs.\n \n http://www.umc.se/~daniel/tmp/screen2.gif\n\nIf you want example configurations or have any questions about\nworking in emacs just ask. \n.DS\n\n\n-- \n_______________________________________________________________ /\\__ \n Daniel Lundin - UMC, UNIX Development \\/\n http://www.umc.se/~daniel/\n", "msg_date": "Thu, 12 Aug 1999 16:45:17 +0200 (CEST)", "msg_from": "Daniel Lundin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp text editor [definitely OT]" } ]
[ { "msg_contents": "Correction on Crisp maintenance/upgrades prices per year, for August\nonly:\n\t\n\tNoncommercial Windows 50.00\n\tNoncommercial Linux/FreeBSD/BSDI 50.00\n\tNoncommercial Other Unix 100.00\n\t\n\tCommercial Windows 75.00\n\tCommercial Linux/FreeBSD/BSDI 75.00\n\tCommercial Other Unix 100.00\n\nSo for $125, $75 + $50, you can get the editor and one year\nmaintenance/upgrades, http://www.vital.com/crisp.htm. I had quoted\n$100/year for support.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 10 Aug 1999 22:02:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Crisp pricing" }, { "msg_contents": "On Tue, 10 Aug 1999, Bruce Momjian wrote:\n\n> Correction on Crisp maintenance/upgrades prices per year, for August\n> only:\n> \t\n> \tNoncommercial Windows 50.00\n> \tNoncommercial Linux/FreeBSD/BSDI 50.00\n> \tNoncommercial Other Unix 100.00\n> \t\n> \tCommercial Windows 75.00\n> \tCommercial Linux/FreeBSD/BSDI 75.00\n> \tCommercial Other Unix 100.00\n> \n> So for $125, $75 + $50, you can get the editor and one year\n> maintenance/upgrades, http://www.vital.com/crisp.htm. I had quoted\n> $100/year for support.\n> \n> \n\nHave you looked at Visual Slick Edit? I can't find the URL right now\nbut it seems to be what you're looking for. Me? I'm using xemacs. I\ndon't mind the extra work setting it up, I doubt I'll ever find the one\neditor that's perfect. They all have something non-changable that's\nreally irritating!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 11 Aug 1999 06:51:30 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Crisp pricing" } ]
[ { "msg_contents": "Bruce Momjian writes:\n\n>I have been using MicroEmacs for 9 years, and have been looking for a\n>nice X editor. I looked at Xemacs(too complex to configure), and some\n>others, but they did not have the required features. I like a powerful\n>search/replace, tags support, macro support, as-you-type syntax\n>colorization with user-definable languages, keyboard recording/playback,\n>etc.\n\nSounds like gvim meets all your requirements, and more (e.g., you can \nscript it in Perl, Python, or TCL). The author encourages donations to \nan orphanage in Uganda, but other than that, it's free.\n\n>It is being actively developed by someone in England. I have found a\n>few bugs, and they are working on them now.\n\nSo far, gvim is bug-free for me.\n\n\t-Michael Robinson\n\n", "msg_date": "Wed, 11 Aug 1999 12:03:56 +0800 (CST)", "msg_from": "Michael Robinson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Crisp text editor" } ]
[ { "msg_contents": "Hi,\n\nIn libpgtcl, pg_select an array field is return as the following string:\n\n{\"red\",\"blue\",\"green\"}\n\nand it's rather difficult to process them as a normal tcl list.\nThe same thing for pg_exec, pg_result/tupleArray\n\nI think it would be better to return the string as:\n\n\"red\" \"blue\" \"green\"\n\nand tcl users could directly process the array as an ordinary tcl list.\n\nAs far as I know, arrays are not heavily used by libpgtcl users (thought\nit's an interesting feature) so changing format would not affect too\nmany applications but will encourage array field usage in future.\n\nWhat do you think?\n\nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 11 Aug 1999 06:26:23 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "> I think it would be better to return the string as:\n> \"red\" \"blue\" \"green\"\n> and tcl users could directly process the array as an ordinary tcl list.\n\nDo it!\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 11 Aug 1999 12:23:05 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "> > > I think it would be better to return the string as:\n> > > \"red\" \"blue\" \"green\"\n> > > and tcl users could directly process the array as an ordinary tcl list.\n\nWould it be also possible to use simple lists for arrays on *input* as\nwell as output? 
The implementation would be symmetric and (presumably)\neasier to use...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 11 Aug 1999 12:32:26 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > I think it would be better to return the string as:\n> > \"red\" \"blue\" \"green\"\n> > and tcl users could directly process the array as an ordinary tcl list.\n> \n> Do it!\n\nAll right!\n\nJust waited for an official approval!\n\n:-)\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 11 Aug 1999 12:32:40 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > > > I think it would be better to return the string as:\n> > > > \"red\" \"blue\" \"green\"\n> > > > and tcl users could directly process the array as an ordinary tcl list.\n> \n> Would it be also possible to use simple lists for arrays on *input* as\n> well as output? The implementation would be symmetric and (presumably)\n> easier to use...\n\nI am note sure that I am understanding the problem.\nThere is no *input* format in Tcl.\nThe only way for adding data to a table is:\npg_exec $dbc \"insert into .....\" and that's the PostgreSQL syntax.\nThere's no \"living\" snapshots in Tcl as in JDBC 2 (updatable\nrecordsets).\n\nFor the moment, the current syntax helps PgAccess. 
It returns exactly\nthe same format as it would be used to INSERT INTO queries so if you\nwould try to define a table with an array field of strings for example\nyou are able to add records and update them directly from PgAccess.\n>From that point of view, the new array field return format would give me\nheadaches for Pgaccess in order to restore the {\"..\",\"..\",\"..\"} format\nused for updating records.\n\nAm I missing something about the *input* format?\n\nOn the other hand, I have discovered in the libpgtcl source that there\nis a TCL_ARRAYS that if defined, would return array fields format\nexactly as a tcl list. But it is not defined anywhere. I think that the\nbehaviour of libpgtcl should be consistent so should we define\nTCL_ARRAYS by default in the next release?\n\n-- \nConstantin Teodorescu\nFLEX Consulting Braila, ROMANIA\n", "msg_date": "Wed, 11 Aug 1999 13:06:29 +0000", "msg_from": "Constantin Teodorescu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "> For the moment, the current syntax helps PgAccess. It returns exactly\n> the same format as it would be used to INSERT INTO queries so if you\n> would try to define a table with an array field of strings for example\n> you are able to add records and update them directly from PgAccess.\n> From that point of view, the new array field return format would give me\n> headaches for Pgaccess in order to restore the {\"..\",\"..\",\"..\"} format\n> used for updating records.\n> Am I missing something about the *input* format?\n\nNo, that is the issue I was bringing up. Perhaps we at least need\n\"convert to/from\" functions to help with formatting arrays??\n\n> On the other hand, I have discovered in the libpgtcl source that there\n> is a TCL_ARRAYS that if defined, would return array fields format\n> exactly as a tcl list. But it is not defined anywhere. 
I think that the\n> behaviour of libpgtcl should be consistent so should we define\n> TCL_ARRAYS by default in the next release?\n\nSo this is what you were proposing anyway, right? Or would you have\nother changes to make too?\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 11 Aug 1999 13:19:08 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "> For the moment, the current syntax helps PgAccess. It returns exactly\n> the same format as it would be used to INSERT INTO queries so if you\n> would try to define a table with an array field of strings for example\n> you are able to add records and update them directly from PgAccess.\n> >From that point of view, the new array field return format would give me\n> headaches for Pgaccess in order to restore the {\"..\",\"..\",\"..\"} format\n> used for updating records.\n> \n> Am I missing something about the *input* format?\n> \n> On the other hand, I have discovered in the libpgtcl source that there\n> is a TCL_ARRAYS that if defined, would return array fields format\n> exactly as a tcl list. But it is not defined anywhere. I think that the\n> behaviour of libpgtcl should be consistent so should we define\n> TCL_ARRAYS by default in the next release?\n\nThat define is from Massimo. Let's enable it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 11 Aug 1999 10:14:09 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" }, { "msg_contents": "> > For the moment, the current syntax helps PgAccess. 
It returns exactly\n> > the same format as it would be used to INSERT INTO queries so if you\n> > would try to define a table with an array field of strings for example\n> > you are able to add records and update them directly from PgAccess.\n> > From that point of view, the new array field return format would give me\n> > headaches for Pgaccess in order to restore the {\"..\",\"..\",\"..\"} format\n> > used for updating records.\n> > \n> > Am I missing something about the *input* format?\n> > \n> > On the other hand, I have discovered in the libpgtcl source that there\n> > is a TCL_ARRAYS that if defined, would return array fields format\n> > exactly as a tcl list. But it is not defined anywhere. I think that the\n> > behaviour of libpgtcl should be consistent so should we define\n> > TCL_ARRAYS by default in the next release?\n> \n> That define is from Massimo. Let's enable it.\n> \n\nThat define is not enabled by default because it requires also a change in\nthe array output format in order to distinguish between the scalar and array\nvalues returned by the backend. It requires also my string-io contrib module\nif I remember correctly.\n\nI'm obviously using the TCL_ARRAYS feature from years and it works fine for\nme, but it could break other non tcl frontends which don't expect the new\nstring quoting format, so I won't advise to enable it by default.\n\nThis is a very ancient problem of postgres which has never been resolved.\nI proposed a solution using a C-like syntax for strings and arrays but it\nwasn't accepted. 
I think we should discuss again the string and array\nformats used by pgsql and find a common and non-ambiguous format for all\nthe i/o routines.\n\nRegarding the input I chose to implement a tcl layer which accepts tcl\nvalues, converts them to sql values and formats them into predefined query.\nFor example:\n\n defineQuery update_foo \\\n\t\"update foo set arr_val = %s, date_val = %s where key = %s\" \\\n\t{{string 1} date string}\n\n defineQuery select_foo \\\n\t\"select * from foo where str_val = %s and date_val = %d\" \\\n\t{string date}\n\n execQuery update_foo {\"a1 a2 a3\" \"31/12/1999\" \"x y z\"} -cmd\n set rows [execQuery select_foo {\"x y z\" \"31/12/1999\"} -rows]\n\nThe execQuery formats the tcl values accordingly to the types defined in\nthe defineQuery, submits the sql statement and converts back the result\nto the required format (rows or TclX keyed-lists).\nBesides this the execQuery can also keep a cache of previous query results\nand avoid resubmitting the query if defined as cacheable.\n\nUnfortunately my tcl layer is really a mess and depends on some my other\ntcl code, so I have never submitted it as contributed code.\n\nI think it would be difficult to put the input conversion code into the\nlibpgtcl library because you don't have any indication of the sql format\nrequired by the various values supplied in the query.\n\n-- \nMassimo Dal Zotto\n\n+----------------------------------------------------------------------+\n| Massimo Dal Zotto email: [email protected] |\n| Via Marconi, 141 phone: ++39-0461534251 |\n| 38057 Pergine Valsugana (TN) www: http://www.cs.unitn.it/~dz/ |\n| Italy pgp: finger [email protected] |\n+----------------------------------------------------------------------+\n", "msg_date": "Fri, 13 Aug 1999 13:30:17 +0200 (MEST)", "msg_from": "Massimo Dal Zotto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] libpgtcl and array fields return format - PROPOSAL" } ]
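The {"..","..",".."} literal this thread keeps returning to is easy to get wrong by hand on the client side. As a rough sketch of what a frontend would have to do (the escaping of embedded quotes and backslashes is an assumption here, and format_text_array is a hypothetical helper, not part of libpq or libpgtcl):

```c
#include <stdio.h>
#include <string.h>

/* Append one element to the array literal, double-quoted, escaping
 * embedded quotes and backslashes. The escaping rules are assumed,
 * not taken from the thread. */
static void append_elem(char *out, const char *elem)
{
    strcat(out, "\"");
    for (const char *p = elem; *p; p++) {
        if (*p == '"' || *p == '\\')
            strncat(out, "\\", 1);
        strncat(out, p, 1);
    }
    strcat(out, "\"");
}

/* Build a {"a","b"}-style literal from n C strings. The caller must
 * supply a generously sized buffer; bounds checks are omitted for
 * brevity in this sketch. */
void format_text_array(const char **elems, int n, char *out)
{
    out[0] = '\0';
    strcat(out, "{");
    for (int i = 0; i < n; i++) {
        if (i > 0)
            strcat(out, ",");
        append_elem(out, elems[i]);
    }
    strcat(out, "}");
}
```

Having a pair of "convert to/from" helpers of this shape in the client library is essentially what the proposal above asks for.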
[ { "msg_contents": "I have gotten as far as the tcop, now I need some help as far as\nyacc(bison?) goes. Yacc seems to use an input buffer called YY_BUFFER or\nsomething, which appears to be a fixed length string. Does anybody have any\nidea how I go about changing this so that I can pass a char * which I have\nallocated (apart from reading the manual, I'm looking for the 'technical\nsummary' ;-)?\n\nSorry the front-end changes are taking so long to get out, I have been quite\nbusy. I was also working on some outdated source, and spent a while\npatching and diffing. But they are working and ready to go. Tomorrow...\nmaybe.....\n\nMikeA\n\n", "msg_date": "Wed, 11 Aug 1999 14:42:55 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query length string" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> I have gotten as far as the tcop, now I need some help as far as\n> yacc(bison?) goes. Yacc seems to use an input buffer called YY_BUFFER or\n> something, which appears to be a fixed length string. Does anybody have any\n> idea how I go about changing this so that I can pass a char * which I have\n> allocated (apart from reading the manual, I'm looking for the 'technical\n> summary' ;-)?\n\nI think you will have to hold your nose and read the manual --- but in\nflex it looks like yy_scan_string() is what you want... I dunno whether\nplain lexes have such a thing, but we effectively require flex anyway...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 11 Aug 1999 10:56:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query length string " }, { "msg_contents": "On Wed, Aug 11, 1999 at 02:42:55PM +0200, Ansley, Michael wrote:\n> I have gotten as far as the tcop, now I need some help as far as\n> yacc(bison?) goes. Yacc seems to use an input buffer called YY_BUFFER or\n> something, which appears to be a fixed length string. 
Does anybody have any\n> idea how I go about changing this so that I can pass a char * which I have\n> allocated (apart from reading the manual, I'm looking for the 'technical\n> summary' ;-)?\n\nI presume you're talking about the YY_BUF_SIZE mentioned in\nbackend/parse/Makefile? Earlier in this list, Thomas Lockhart mentioned that\nwe're really flex specific, not general lex (support for an exclusive\nstart state, I think he said.) Tom Lane alludes to that in his reply as well.\nGiven that, this excerpt from the flex docs is useful:\n\n--------------\nNote that yytext can be defined in two different ways: either as\na character pointer or as a character array. You can control which\ndefinition flex uses by including one of the special directives `%pointer'\nor `%array' in the first (definitions) section of your flex input. The\ndefault is `%pointer', unless you use the `-l' lex compatibility option,\nin which case yytext will be an array. The advantage of using `%pointer'\nis substantially faster scanning and no buffer overflow when matching very\n ^^^^^^^^^^^^^^^^^^ \nlarge tokens (unless you run out of dynamic memory). The disadvantage is\nthat you are restricted in how your actions can modify yytext (see the\nnext section), and calls to the `unput()' function destroys the present\ncontents of yytext, which can be a considerable porting headache when\nmoving between different lex versions.\n\nThe advantage of `%array' is that you can then modify yytext to your\nheart's content, and calls to `unput()' do not destroy yytext (see\nbelow). Furthermore, existing lex programs sometimes access yytext\nexternally using declarations of the form:\n\nextern char yytext[];\n\nThis definition is erroneous when used with `%pointer', but correct for\n`%array'.\n\n`%array' defines yytext to be an array of YYLMAX characters, which\ndefaults to a fairly large value. You can change the size by simply\n#define'ing YYLMAX to a different value in the first section of your\nflex input. 
As mentioned above, with `%pointer' yytext grows dynamically\n ^^^^^^^^^^^^^^^^^\nto accommodate large tokens. While this means your `%pointer' scanner\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\ncan accommodate very large tokens (such as matching entire blocks of\ncomments), bear in mind that each time the scanner must resize yytext it\nalso must rescan the entire token from the beginning, so matching such\ntokens can prove slow. yytext presently does not dynamically grow if a\ncall to `unput()' results in too much text being pushed back; instead,\na run-time error results.\n\nAlso note that you cannot use `%array' with C++ scanner classes (the c++\noption; see below).\n\n------------\n\nI've checked, and the parser generated from our parse.l does in fact do\nthe dynamic buffer resize bit, so the token buffer (char *) passed on to\nthe grammar generated by yacc(bison) is already of variable size. However,\nit looks like there are hand coded maximum SQL query length checks in\nthe parser there you'll need to zap.\n\nHTH,\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Wed, 11 Aug 1999 10:54:51 -0500", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query length string" } ]
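The "yytext grows dynamically" behaviour of `%pointer' that the flex excerpt describes boils down to a doubling reallocation of the token buffer. The sketch below illustrates only the pattern; it is not flex's internal code, and error handling for failed allocations is omitted:

```c
#include <stdlib.h>
#include <string.h>

/* Doubling buffer in the spirit of flex's %pointer yytext growth:
 * append bytes, reallocating to twice the capacity whenever the
 * accumulated token outgrows the current allocation. */
typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} TokenBuf;

void tb_init(TokenBuf *tb)
{
    tb->cap = 16;
    tb->len = 0;
    tb->data = malloc(tb->cap);   /* allocation checks omitted */
    tb->data[0] = '\0';
}

void tb_append(TokenBuf *tb, const char *s)
{
    size_t n = strlen(s);

    while (tb->len + n + 1 > tb->cap) {
        tb->cap *= 2;
        tb->data = realloc(tb->data, tb->cap);
    }
    /* copy including the terminating NUL */
    memcpy(tb->data + tb->len, s, n + 1);
    tb->len += n;
}
```

The rescan cost the documentation warns about comes from the lexer having to rematch the token after each such regrowth, not from the realloc itself.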
[ { "msg_contents": "PostgreSQL 6.5.1 RPMS -- lamar owen (lo) series release 3 ready to bang\non. Visit http://www.ramifordistat.net/postgres for more details.\n\nThe short of it:\n\n3lo adds working regression testing to the mix, in package\npostgresql-test-6.5.1-3lo.i386.rpm.\n\n3lo includes the alpha patches that 2lo had.\n\n3lo includes many more examples and tests -- the entire source test tree\nis in package test.\n\n3lo includes the final preview of pgaccess 0.97, packaged as\n/usr/bin/pgaccess97 so as to not displace 0.96, yet.\n\nIf you are already running an RPM install of version 6.5 or later you\nmay use rpm -Uvh to upgrade, otherwise follow the directions on\nhttp://www.ramifordistat.net/postgres\n\nEnjoy!\n\nLamar Owen\nWGCR Internet Radio\n", "msg_date": "Wed, 11 Aug 1999 16:02:23 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": true, "msg_subject": "Testing RPMS -- 6.5.1-3lo ready." } ]
[ { "msg_contents": "We won!! Editor's Choice Award for Best Database Management System\nat the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n\nDetails to follow later.\n\nCongrats to everyone.\n\n - Thomas\n", "msg_date": "Wed, 11 Aug 1999 20:47:11 GMT", "msg_from": "Tom Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "We won!" }, { "msg_contents": "\nOn 11-Aug-99 Tom Lockhart wrote:\n> We won!! Editor's Choice Award for Best Database Management System\n> at the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n> \n> Details to follow later.\n> \n> Congrats to everyone.\n> \n> - Thomas\n\nBreak out the Champagne!!!!!!!!!!!!!!!!!!!!!!!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 11 Aug 1999 16:52:37 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: We won!" }, { "msg_contents": "Tom Lockhart wrote:\n> \n> We won!! Editor's Choice Award for Best Database Management System\n> at the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n\nNice!\n\nBut there is error @ \nhttp://www.linuxworld.com/linuxworld/lw-1999-08/lw-08-penguin_1.html\n\n> it has a range of very advanced features such as r-tree indexing \n> for geographical data and concurrency control at a lower \n> granularity than common row-level locking. \n> The latter feature is unique among enterprise databases, \n ^^^^^^\n> including Oracle, DB2, and Informix. 
\n ^^^^^^\nOracle has MVCC!!!\nAs well as Interbase, Rdb(?)...\n\nVadim\n", "msg_date": "Thu, 12 Aug 1999 10:58:10 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: We won!" }, { "msg_contents": "At 10:58 12/08/99 +0800, Vadim Mikheev wrote:\n>Tom Lockhart wrote:\n>> \n>> We won!! Editor's Choice Award for Best Database Management System\n>> at the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n>\n>Nice!\n>\n>But there is error @ \n>http://www.linuxworld.com/linuxworld/lw-1999-08/lw-08-penguin_1.html\n>\n>> it has a range of very advanced features such as r-tree indexing \n>> for geographical data and concurrency control at a lower \n>> granularity than common row-level locking. \n>> The latter feature is unique among enterprise databases, \n> ^^^^^^\n>> including Oracle, DB2, and Informix. \n> ^^^^^^\n>Oracle has MVCC!!!\n>As well as Interbase, Rdb(?)...\n>\n\nYes, Rdb too...\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 12 Aug 1999 13:25:45 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" }, { "msg_contents": "Vadim Mikheev wrote:\n> > The latter feature is unique among enterprise databases,\n> ^^^^^^\n> > including Oracle, DB2, and Informix.\n> ^^^^^^\n> Oracle has MVCC!!!\n\nThats what it says :-) ie. Oracle, DB2, and Informix and Postgres have MVCC.\n\n--------\nRegards\nTheo\n", "msg_date": "Thu, 12 Aug 1999 08:18:10 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" 
}, { "msg_contents": "Theo Kramer wrote:\n> \n> Vadim Mikheev wrote:\n> > > The latter feature is unique among enterprise databases,\n> > ^^^^^^\n> > > including Oracle, DB2, and Informix.\n> > ^^^^^^\n> > Oracle has MVCC!!!\n> \n> Thats what it says :-) ie. Oracle, DB2, and Informix and Postgres have MVCC.\n\nIn that case, should it not be \"multique\" ;)\n\n-----------\nHannu\n", "msg_date": "Thu, 12 Aug 1999 11:44:16 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" }, { "msg_contents": "Theo Kramer wrote:\n> \n> Vadim Mikheev wrote:\n> > > The latter feature is unique among enterprise databases,\n> > ^^^^^^\n> > > including Oracle, DB2, and Informix.\n> > ^^^^^^\n> > Oracle has MVCC!!!\n> \n> Thats what it says :-) ie. Oracle, DB2, and Informix and Postgres have MVCC.\n\nBut, afaik, Informix hasn't MVCC! -:))\n\nVadim\n", "msg_date": "Fri, 13 Aug 1999 08:34:30 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n\n> Thats what it says :-) ie. Oracle, DB2, and Informix and Postgres have MVCC.\n\nOK, I get it. They wrote \"unique among\" when they meant \"unique to\".\nThat changes the meaning of the sentence radically...\n\n-tih\n-- \nPopularity is the hallmark of mediocrity. --Niles Crane, \"Frasier\"\n", "msg_date": "13 Aug 1999 08:22:56 +0200", "msg_from": "Tom Ivar Helbekkmo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" }, { "msg_contents": "> Tom Lockhart wrote:\n> > \n> > We won!! Editor's Choice Award for Best Database Management System\n> > at the LinuxWorld Expo in San Jose. 
The sorry runnerup was Oracle :)\n> \n> Nice!\n> \n> But there is error @ \n> http://www.linuxworld.com/linuxworld/lw-1999-08/lw-08-penguin_1.html\n\nWe should link this to our web page.\n\n> \n> > it has a range of very advanced features such as r-tree indexing \n> > for geographical data and concurrency control at a lower \n> > granularity than common row-level locking. \n> > The latter feature is unique among enterprise databases, \n> ^^^^^^\n> > including Oracle, DB2, and Informix. \n> ^^^^^^\n> Oracle has MVCC!!!\n> As well as Interbase, Rdb(?)...\n\nYes. Good point.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Aug 1999 11:21:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: We won!" }, { "msg_contents": "On Fri, 13 Aug 1999, Bruce Momjian wrote:\n\n> > Tom Lockhart wrote:\n> > > \n> > > We won!! Editor's Choice Award for Best Database Management System\n> > > at the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n> > \n> > Nice!\n> > \n> > But there is error @ \n> > http://www.linuxworld.com/linuxworld/lw-1999-08/lw-08-penguin_1.html\n> \n> We should link this to our web page.\n\nClick on the logo.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 13 Aug 1999 11:52:45 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: We won!" 
}, { "msg_contents": "On Thu, 12 Aug 1999, Theo Kramer wrote:\n\n> Vadim Mikheev wrote:\n> > > The latter feature is unique among enterprise databases,\n> > ^^^^^^\n> > > including Oracle, DB2, and Informix.\n> > ^^^^^^\n> > Oracle has MVCC!!!\n> \n> Thats what it says :-) ie. Oracle, DB2, and Informix and Postgres have MVCC.\n\nUmmm, I read the above as \"unique\", as in nobody else has it, not even\nOracle, DB2 and Informix .. *shrug*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Sun, 15 Aug 1999 23:27:24 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" } ]
[ { "msg_contents": "Hello all,\n\nCurrently,only the first column of multi-column indices\nis used to find start scan position of Indexscan-s.\n\nTo speed up finding scan start position,I have changed\n_bt_first() to use as many keys as possible.\n\nI'll attach the patch here.\n\nRegards.\n\nHiroshi Inoue\[email protected]", "msg_date": "Thu, 12 Aug 1999 17:51:30 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index access using multi-column indices" }, { "msg_contents": "Hiroshi Inoue wrote:\n> \n> Hello all,\n> \n> Currently,only the first column of multi-column indices\n> is used to find start scan position of Indexscan-s.\n> \n> To speed up finding scan start position,I have changed\n> _bt_first() to use as many keys as possible.\n\nNice!\n\nVadim\n", "msg_date": "Tue, 17 Aug 1999 09:56:56 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Index access using multi-column indices" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Hiroshi Inoue\n> Sent: Thursday, August 12, 1999 5:52 PM\n> To: pgsql-patches\n> Cc: pgsql-hackers\n> Subject: [HACKERS] Index access using multi-column indices\n> \n> \n> Hello all,\n> \n> Currently,only the first column of multi-column indices\n> is used to find start scan position of Indexscan-s.\n> \n> To speed up finding scan start position,I have changed\n> _bt_first() to use as many keys as possible.\n> \n> I'll attach the patch here.\n>\n\nSeems this isn't committed yet.\nIf there's no objection,I will commit this patch to current tree.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Mon, 27 Sep 1999 17:51:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Index access using multi-column indices" }, { "msg_contents": "I will apply it today. 
I am working through my mailbox after a very\nbusy Summer.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Hiroshi Inoue\n> > Sent: Thursday, August 12, 1999 5:52 PM\n> > To: pgsql-patches\n> > Cc: pgsql-hackers\n> > Subject: [HACKERS] Index access using multi-column indices\n> > \n> > \n> > Hello all,\n> > \n> > Currently,only the first column of multi-column indices\n> > is used to find start scan position of Indexscan-s.\n> > \n> > To speed up finding scan start position,I have changed\n> > _bt_first() to use as many keys as possible.\n> > \n> > I'll attach the patch here.\n> >\n> \n> Seems this isn't committed yet.\n> If there's no objection,I will commit this patch to current tree.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> ************\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 11:38:05 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] RE: [HACKERS] Index access using multi-column indices" }, { "msg_contents": "Applied.\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hello all,\n> \n> Currently,only the first column of multi-column indices\n> is used to find start scan position of Indexscan-s.\n> \n> To speed up finding scan start position,I have changed\n> _bt_first() to use as many keys as possible.\n> \n> I'll attach the patch here.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n\n[Attachment, skipping...]\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:18:57 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Index access using multi-column indices" } ]
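The gain from Hiroshi's patch is that the btree descent can position on a boundary built from several key columns at once instead of only the first. The effect can be illustrated with a lower-bound search over sorted (a,b) pairs; this is an illustration of the principle, not the actual _bt_first() code:

```c
#include <stddef.h>

typedef struct {
    int a;
    int b;
} Pair;

/* Compare composite keys (a,b) lexicographically. */
static int pair_cmp(const Pair *x, const Pair *y)
{
    if (x->a != y->a)
        return x->a < y->a ? -1 : 1;
    if (x->b != y->b)
        return x->b < y->b ? -1 : 1;
    return 0;
}

/* First position in a sorted array whose key is not less than *key.
 * Using both columns lands directly on the start of the matching run;
 * a first-column-only search would stop at the first row having the
 * right 'a', possibly far earlier in the index. */
size_t lower_bound_pair(const Pair *arr, size_t n, const Pair *key)
{
    size_t lo = 0, hi = n;

    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;

        if (pair_cmp(&arr[mid], key) < 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}
```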
[ { "msg_contents": "Hi, all\n\nRight, it works. Patches will arrive as soon as I figure out how to get cvs\nto diff the whole lot into one patch. I managed to get a 97kB query\nthrough, where the normal max was 64kB. I have to warn though, it munches\nmemory. Disappearing at a rate of about 6MB a second, my paltry 64MB RAM\nwith 100MB swap looked alarmingly small. Of course, if you're on a decent\nserver, this becomes less of an issue, but it doesn't go away. Still, it's\nbetter than being limited to 64kB.\n\nThe main issue seemed to be scan.l. I had to change myinput() to cope with\nmultiple calls, rather than assuming that flex would call it once only for\nthe entire query. As a matter of interest, the code seems to think that lex\nmight still be used. Is this the case, or can I remove the lex-specific\ncode, i.e.: we assume flex to be used ALWAYS.\n\nMikeA\n\n--------------------------------------------------------------------\nScience is the game we play with God to find out what his rules are.\n--------------------------------------------------------------------\n\n[(LI)U]NIX IS user friendly; it's just picky about who its friends are.\n\n", "msg_date": "Thu, 12 Aug 1999 13:29:14 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query string length" } ]
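The myinput() change Michael describes — coping with flex calling the input routine repeatedly rather than once — follows a simple contract: each call hands the lexer at most max_size bytes of the remaining query, and a return of 0 signals end of input. A standalone sketch of that contract (illustrative names, not the actual scan.l code):

```c
#include <string.h>

/* Illustrative chunked reader in the spirit of scan.l's myinput():
 * flex may call it any number of times, so it must track how much of
 * the query has already been consumed. */
static const char *query_src;   /* the whole query string */
static size_t query_pos;        /* bytes handed out so far */

void my_input_init(const char *query)
{
    query_src = query;
    query_pos = 0;
}

int my_input(char *buf, int max_size)
{
    size_t remaining = strlen(query_src) - query_pos;
    size_t n = remaining < (size_t) max_size ? remaining
                                             : (size_t) max_size;

    memcpy(buf, query_src + query_pos, n);
    query_pos += n;
    return (int) n;             /* 0 tells flex there is no more input */
}
```

With this shape, the query length is bounded only by available memory, which matches the memory-hungry behaviour reported above.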
[ { "msg_contents": "I forgot, tokens are still limited to 16kB ;-)\n\n>> Right, it works. Patches will arrive as soon as I figure out how to get\ncvs \n>> to diff the whole lot into one patch. I managed to get a 97kB query\nthrough, \n>> where the normal max was 64kB. I have to warn though, it munches\nmemory. \n>> Disappearing at a rate of about 6MB a second, my paltry 64MB RAM with \n>> 100MB swap looked alarmingly small. Of course, if you're on a decent\nserver, \n>> this becomes less of an issue, but it doesn't go away. Still, it's\nbetter than \n>> being limited to 64kB.\n\n>> The main issue seemed to be scan.l I had to change myinput() to cope\nwith \n>> multiple calls, rather than assuming that flex would call it once only\nfor the \n>> entire query. As a matter of interest, the code seems to think that lex\nmight \n>> still be used. Is this the case, or can I remove the lex-specific code,\ni.e.: we \n>> assume flex to be used ALWAYS.\n\n> MikeA\n> \n> \n", "msg_date": "Thu, 12 Aug 1999 13:40:07 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Query string length" } ]
[ { "msg_contents": "\tGreetings, I'm having a peculiar problem that I seem unable to work my way around. \nusing pgtclsh I am attempting to load a file into postgres using the copy command specifically;\n\t\n\tcopy SensorDagta from '...../TempFile.txt' using delimiters '|';\n\t\nthe lines of the file themselves are:\n\n4/21/99|859717.0|1.8|63|1|1.0|4.5|75.9|12:27:54|00|1|181.7|8500873.9|715513.1|181.1|8500873.9|71\n5513.1|\"20L\"\n4/21/99|859721.0|1.8|63|1|1.0|4.5|75.9|12:27:54|00|0|181.7|8500873.9|715513.1|181.1|8500873.9|71\n5513.1|\"20L\"\n\n\tetc.\n\t\n\tThe key problem is that whenever this command is executed I receive a \nPOSTGRES_FATAL_ERROR from the database and am informed that pg_atoi failed on the L in 20L. I \nhave already tested replacing the \"\" around 20L with single quotes and nothing, no difference is \nmade. The SensorData table itself defines the last column as type text. Oddly enough whenever \nI attempt to execute the same command from within psql I have no problems. However when I \nattempt to execute psql with a -c command the same error occurs. Is there some way to suppress \nthis? I am attempting to simplify the act of loading data for my colleagues and the extra hoops \nare somewhat undesirable. \n\tThanks in advance.\n\tCollin Lynch.\n", "msg_date": "Thu, 12 Aug 1999 10:38:15 -0400 (EDT)", "msg_from": "\"Collin F. Lynch\" <[email protected]>", "msg_from_op": true, "msg_subject": "\"20L\" not interpreted as text." }, { "msg_contents": "\"Collin F. Lynch\" <[email protected]> writes:\n> \tThe key problem is that whenever this command is executed I\n> receive a POSTGRES_FATAL_ERROR from the database and am informed that\n> pg_atoi failed on the L in 20L.\n\nCan't reproduce that here. What version are you running? What's\nthe *exact* definition of the table?
A self-contained example that\ntriggers the problem for you would be helpful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 12 Aug 1999 18:59:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"20L\" not interpreted as text. " } ]
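The pg_atoi error in this thread is the backend's integer-input routine rejecting a token with trailing characters. A rough stand-in built on strtol (not the real pg_atoi) reproduces the behaviour: "20" parses but "20L" does not, which suggests the "20L" field is reaching an integer-typed column rather than the text column it was meant for:

```c
#include <stdlib.h>
#include <errno.h>

/* Rough stand-in for the backend's integer input: accept the value
 * only if strtol consumed the whole token. Not PostgreSQL's actual
 * pg_atoi implementation. */
int parse_int_strict(const char *s, long *out)
{
    char *end;
    long v;

    errno = 0;
    v = strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0')
        return 0;               /* trailing junk, e.g. the 'L' in "20L" */
    *out = v;
    return 1;
}
```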
[ { "msg_contents": "> We won!! Editor's Choice Award for Best Database Management System\n> at the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n\nCongratulations!\n\nIn my opinion it should make the web page\n\n\n\n", "msg_date": "Thu, 12 Aug 1999 22:09:58 +0200 (CEST)", "msg_from": "Kaare Rasmussen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: We won!" }, { "msg_contents": "\nOn 12-Aug-99 Kaare Rasmussen wrote:\n>> We won!! Editor's Choice Award for Best Database Management System\n>> at the LinuxWorld Expo in San Jose. The sorry runnerup was Oracle :)\n> \n> Congratulations!\n> \n> In my opinion it should make the web page\n\nYou mean www.postgresql.org? It was there within about 5 mins of getting\nTom's announcement!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Thu, 12 Aug 1999 17:24:28 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" }, { "msg_contents": "> > In my opinion it should make the web page\n> \n> You mean www.postgresql.org? It was there within about 5 mins of getting\n> Tom's announcement!\n\nFor some reason Netscape doesn't realize that the page has changed.\nTry \"shift\"-\"reload\" to get the new announcement.\n\nLooks nice Vince. I'll send an account of the show tomorrow. Off to\nsleep now...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 14 Aug 1999 05:33:25 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: We won!" } ]
[ { "msg_contents": "Here's a followup to my first message:\n\n When I just try 'vacuum verbose' the vacuum goes through fine\nwithout aborting the backend. So it appears to be a problem with\nanalyzing the optimizer indexing.\n\n-Tony\n\n\n\n================================================================\n\nFor those that missed my first message:\n\nI've been getting a strange error from the vacuum command. When I try\n\n'vacuum verbose analyze' the vacuum goes through the tables fine until\n\njust after finishing one particular table. Then I get the error:\n\nNOTICE: AbortTransaction and not in in-progress state\n\npqReadData() -- backend closed the channel unexpectedly.\n\n This probably means the backend terminated abnormally\n\n before or while processing the request.\n\nWe have lost the connection to the backend, so further processing is\n\nimpossible. Terminating.\n\nWhen I try to vacuum the tables individually, I get no problems with\n\naborted backends.\n\nDoes anyone know what is going on here?\n\n-Tony\n\n", "msg_date": "Thu, 12 Aug 1999 16:57:13 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Followup to: Aborted Transaction During Vacuum" } ]
[ { "msg_contents": "Hi\n\nDoes postgres support the notion of single row fetch without having to use \ncursors with libpq.\n\nWhat I want to do is something like\n\n myResult = PQexec(myConnection, \"select * from mytable where field >= ''\")\n\n for (int i = 0; i < PQntuples(myResult); i++) {\n PQfetchRow(myResult);\n }\n\nIe. rows are retrieved from the backend only on request. I can then control\nwhen I want to stop retreiving rows.\n\n--------\nRegards\nTheo\n", "msg_date": "Fri, 13 Aug 1999 06:11:48 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Single row fetch from backend" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> Does postgres support the notion of single row fetch without having to use \n> cursors with libpq.\n\nNot unless you can precalculate the number of rows you want and use\nLIMIT. I recommend a cursor ;-).\n\nThere has been some talk of modifying libpq so that rows could be handed\nback to the application a few at a time, rather than accumulating the\nwhole result before PQgetResult lets you have any of it. That wouldn't\nallow you to abort the SELECT early, mind you, but when you're dealing\nwith a really big result it would avoid waste of memory space inside the\nclient app. I'm not sure if that would address your problem or not.\n\nIf you really want the ability to stop the fetch from the backend at\nany random point, a cursor is the only way to do it. I suppose libpq\nmight try to offer some syntactic sugar that would make a cursor\nslightly easier to use, but it'd still be a cursor as far as the backend\nand the FE/BE protocol were concerned. 
ecpg is probably a better answer\nif you want syntactic sugar...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 10:41:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Single row fetch from backend " }, { "msg_contents": "Tom Lane wrote:\n> Not unless you can precalculate the number of rows you want and use\n> LIMIT. I recommend a cursor ;-).\n> \n> There has been some talk of modifying libpq so that rows could be handed\n> back to the application a few at a time, rather than accumulating the\n> whole result before PQgetResult lets you have any of it. That wouldn't\n> allow you to abort the SELECT early, mind you, but when you're dealing\n> with a really big result it would avoid waste of memory space inside the\n> client app. I'm not sure if that would address your problem or not.\n> \n> If you really want the ability to stop the fetch from the backend at\n> any random point, a cursor is the only way to do it. I suppose libpq\n> might try to offer some syntactic sugar that would make a cursor\n> slightly easier to use, but it'd still be a cursor as far as the backend\n> and the FE/BE protocol were concerned. ecpg is probably a better answer\n> if you want syntactic sugar...\n\nHmmm, I've had pretty bad experiences with cursors on Informix Online. When\nmany clients use cursors on large result sets the system (even on big iron)\ngrinds to a halt. Luckily you can fetch a single row at a time on a normal\nselect with Informix so that solved that. 
It does appear, however, that\nPostgres does not create huge overheads for cursors, but I would still like\nto see what happens when many clients do a cursor select...\n--------\nRegards\nTheo\n", "msg_date": "Fri, 13 Aug 1999 17:33:28 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Single row fetch from backend" }, { "msg_contents": "> \n> Hmmm, I've had pretty bad experiences with cursors on Informix Online. When\n> many clients use cursors on large result sets the system (even on big iron)\n> grinds to a halt. Luckily you can fetch a single row at a time on a normal\n> select with Informix so that solved that. It does appear, however, that\n> Postgres does not create huge overheads for cursors, but I would still like\n> to see what happens when many clients do a cursor select...\n\nI believe later Informix ODBC versions cache the query results in the sql\nserver in case they are needed later. Terrible for performance. I have\nclients downgrade to older isql clients.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Aug 1999 13:03:43 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Single row fetch from backend" }, { "msg_contents": "Bruce Momjian wrote:\n> I believe later Informix ODBC versions cache the query results in the sql\n> server in case they are needed later. Terrible for performance. I have\n> clients downgrade to older isql clients.\n\nI had a look at the ODBC interface for Postgres, yet could not get it to work\non my Linux RH5.0 machine. 
When linking with libpsqlodbc.a I get the following\n\ncc -I $PGHOME/include/iodbc testpgodbc.c $PGHOME/lib/libpsqlodbc.a -lm\n\n libpsqlodbc.a(psqlodbc.o): In function `_init':\n psqlodbc.o(.text+0x0): multiple definition of `_init'\n /usr/lib/crti.o(.init+0x0): first defined here\n libpsqlodbc.a(psqlodbc.o): In function `_fini':\n psqlodbc.o(.text+0x30): multiple definition of `_fini'\n /usr/lib/crti.o(.fini+0x0): first defined here\n\nLooks like I am not doing the correct thing, yet don't know what else to do.\n\nRegards\nTheo\n", "msg_date": "Sat, 14 Aug 1999 15:15:28 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Single row fetch from backend" }, { "msg_contents": "\n> I had a look at the ODBC interface for Postgres, yet could not get it to work\n> on my Linux RH5.0 machine. When linking with libpsqlodbc.a I get the following\n> cc -I $PGHOME/include/iodbc testpgodbc.c $PGHOME/lib/libpsqlodbc.a -lm\n> libpsqlodbc.a(psqlodbc.o): In function `_init':\n> psqlodbc.o(.text+0x0): multiple definition of `_init'\n> /usr/lib/crti.o(.init+0x0): first defined here\n> libpsqlodbc.a(psqlodbc.o): In function `_fini':\n> psqlodbc.o(.text+0x30): multiple definition of `_fini'\n> /usr/lib/crti.o(.fini+0x0): first defined here\n> Looks like I am not doing the correct thing, yet don't know what else to do.\n\nAre you building in the Postgres tree using the make system? 
If you\naren't, try either:\n\n1) configure --with-odbc\n cd interfaces/odbc\n make install\n\nor\n\n2) Unpack the \"standalone\" odbc file from ftp://postgresql.org/pub/\nand configure then make it in a separate directory.\n\nI've left out a few steps; read the html or postscript docs in the\nchapter on ODBC for complete details, and let us know what didn't\nwork.\n\nI've built the ODBC interface on RH5.2, and probably had used an\nearlier version of RH when I was working out the port with the other\ndevelopers.\n\nGood luck.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 14 Aug 1999 13:52:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Single row fetch from backend" } ]
[ { "msg_contents": "Hi\n\nDoes anyone know why the following occurs?\n\n coza=> explain select * from accounts where domain >= '%' order by domain;\n NOTICE: QUERY PLAN:\n\n Index Scan using domain_idx on accounts (cost=1434.50 rows=19611 width=106)\n\nand\n\n coza=> explain select * from accounts order by domain;\n NOTICE: QUERY PLAN:\n\n Sort (cost=3068.39 rows=58830 width=106)\n -> Seq Scan on accounts (cost=3068.39 rows=58830 width=106)\n\nSurely both queries give the same result set, yet the second example does not\nuse the index causing unnecessary overhead.\n\nI am running version 6.5 (haven't upgraded to 6.5.1 as yet)\n\n--------\nRegards\nTheo\n", "msg_date": "Fri, 13 Aug 1999 08:32:36 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Index scan?" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> Does anyone know why the following occurs?\n> coza=> explain select * from accounts where domain >= '%' order by domain;\n> NOTICE: QUERY PLAN:\n\n> Index Scan using domain_idx on accounts (cost=1434.50 rows=19611 width=106)\n\n> and\n> coza=> explain select * from accounts order by domain;\n> NOTICE: QUERY PLAN:\n\n> Sort (cost=3068.39 rows=58830 width=106)\n> -> Seq Scan on accounts (cost=3068.39 rows=58830 width=106)\n\n> Surely both queries give the same result set, yet the second example does not\n> use the index causing unnecessary overhead.\n\nYeah, this is a known limitation of the planner: it's only bright enough\nto skip an explicit sort step for an ORDER BY clause when the plan that\n*would be chosen anyway in the absence of ORDER BY* happens to produce\na properly sorted result. In your first example the WHERE clause can\nbe exploited to scan only part of the index (notice the difference in\nestimated output row counts), so an indexscan gets chosen --- and that\njust happens to deliver the sorted result you want. 
In the second\nexample the plan-picker sees no reason to use anything more expensive\nthan a sequential scan :-(\n\nWe need to push awareness of the output ordering requirement down into\nthe code that chooses the basic plan. It's on the TODO list (or should\nbe) but I dunno when someone will get around to it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 10:33:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Index scan? " }, { "msg_contents": "Tom Lane wrote:\n> Yeah, this is a known limitation of the planner: it's only bright enough\n> to skip an explicit sort step for an ORDER BY clause when the plan that\n> *would be chosen anyway in the absence of ORDER BY* happens to produce\n> a properly sorted result. In your first example the WHERE clause can\n> be exploited to scan only part of the index (notice the difference in\n> estimated output row counts), so an indexscan gets chosen --- and that\n> just happens to deliver the sorted result you want. In the second\n> example the plan-picker sees no reason to use anything more expensive\n> than a sequential scan :-(\n> \n> We need to push awareness of the output ordering requirement down into\n> the code that chooses the basic plan. It's on the TODO list (or should\n> be) but I dunno when someone will get around to it.\n\nI can't wait :-)\n--------\nRegards\nTheo\n", "msg_date": "Fri, 13 Aug 1999 17:29:21 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Index scan?" }, { "msg_contents": "> Yeah, this is a known limitation of the planner: it's only bright enough\n> to skip an explicit sort step for an ORDER BY clause when the plan that\n> *would be chosen anyway in the absence of ORDER BY* happens to produce\n> a properly sorted result. 
In your first example the WHERE clause can\n> be exploited to scan only part of the index (notice the difference in\n> estimated output row counts), so an indexscan gets chosen --- and that\n> just happens to deliver the sorted result you want. In the second\n> example the plan-picker sees no reason to use anything more expensive\n> than a sequential scan :-(\n> \n> We need to push awareness of the output ordering requirement down into\n> the code that chooses the basic plan. It's on the TODO list (or should\n> be) but I dunno when someone will get around to it.\n\nAdded to TODO:\n\n\t* Allow optimizer to prefer plans that match ORDER BY\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 13 Aug 1999 12:54:30 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Index scan?" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> Tom Lane wrote:\n>> We need to push awareness of the output ordering requirement down into\n>> the code that chooses the basic plan. It's on the TODO list (or should\n>> be) but I dunno when someone will get around to it.\n\n> I can't wait :-)\n\nI am about to do some major hacking on the planner/optimizer's\nrepresentation of path sort orders (for anyone who cares, PathOrder data\nis going to be merged into the pathkeys structures). After the dust\nsettles, I will see what I can do with this issue --- it might be pretty\neasy once the data structures are cleaned up.\n\nAside from the case with an ORDER BY clause, I believe the planner is\ncurrently too dumb to exploit a pre-sorted path for GROUP BY. 
It\nalways puts in an explicit sort on the GROUP BY keys ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 13:50:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Index scan? " } ]
[ { "msg_contents": "I have mailed patches for this to the patches mailing list. Some testing\nand feedback would be appreciated.\n\nThanks\n\n\nMikeA\n\n", "msg_date": "Fri, 13 Aug 1999 10:28:37 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Unlimited query strings" } ]
[ { "msg_contents": "Hi\n\nI've been doing some big imports using COPY. Problem I have is COPY\naborting if a field could not be parsed. What's the feeling about\nchanging the behaviour so it does not abort, yet writes the offending line\nnumber to the error log and continues with the next line?\n\nI'm happy to do this if the above mentioned behaviour is acceptable.\n--------\nRegards\nTheo\n", "msg_date": "Fri, 13 Aug 1999 11:53:34 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "COPY" }, { "msg_contents": "On Fri, 13 Aug 1999, Theo Kramer wrote:\n\n> Hi\n> \n> I've been doing some big imports using COPY. Problem I have is COPY\n> aborting if a field could not be parsed. What's the feeling about\n> changing the behaviour so it does not abort, yet writes the offending line\n> number to the error log and continues with the next line?\n> \nMy feeling is that the suggested enhancement would save me \nhours of work.\n\nMarc Zuckman\[email protected]\n\n________________________________\n_ Visit The Home and Condo MarketPlace\t\t _\n_ http://www.ClassyAd.com\t\t\t _\n_\t\t\t\t\t\t\t _\n_ FREE basic property listings/advertisements and searches. _\n_\t\t\t\t\t\t\t _\n_ Try our premium, yet inexpensive services for a real\t _\n_ selling or buying edge!\t\t\t\t _\n________________________________\n\n", "msg_date": "Fri, 13 Aug 1999 09:05:24 -0400 (EDT)", "msg_from": "Marc Howard Zuckman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPYz" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> I've been doing some big imports using COPY. Problem I have is COPY\n> aborting if a field could not be parsed. What's the feeling about\n> changing the behaviour so it does not abort, yet writes the offending line\n> number to the error log and continues with the next line?\n\nI can think of situations where you'd want it either way. 
(For example,\nin a pg_dump restore script I'd sure want big red warning flags if there\nwere any problems, not a piddly little message in the postmaster log...)\n\nHow about creating a SET variable that chooses either the above behavior\nor the existing one?\n\nIf that seems acceptable all 'round, we can start arguing about which\nway ought to be the default ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 10:09:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] COPY " }, { "msg_contents": "Tom Lane wrote:\n> I can think of situations where you'd want it either way. (For example,\n> in a pg_dump restore script I'd sure want big red warning flags if there\n> were any problems, not a piddly little message in the postmaster log...)\n> \n> How about creating a SET variable that chooses either the above behavior\n> or the existing one?\n> \n> If that seems acceptable all 'round, we can start arguing about which\n> way ought to be the default ;-)\n\nSounds good to me. I'll start looking at what's involved.\n--------\nRegards\nTheo\n", "msg_date": "Fri, 13 Aug 1999 17:28:30 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] COPY" } ]
[ { "msg_contents": "Hi,\n\nNow that the query strings are effectively unlimited in length, the point\nthat I mentioned in a previous mail about the token length being limited to\n16kB becomes an issue. One of the reasons for wanting a large query string\nlength is to allow people to insert long text strings into a text field.\nHowever, if I understand things right, the token length will limit the text\ngoing into the text field to 16kB. Is this right? Should I have a look at\nhow to increase the token length arbitrarily?\n\n--------------------------------------------------------------------\nScience is the game we play with God to find out what his rules are.\n--------------------------------------------------------------------\n\n[(LI)U]NIX IS user friendly; it's just picky about who its friends are.\n\n", "msg_date": "Fri, 13 Aug 1999 12:21:37 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Token length limit" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Now that the query strings are effectively unlimited in length, the point\n> that I mentioned in a previous mail about the token length being limited to\n> 16kB becomes an issue. One of the reasons for wanting a large query string\n> length is to allow people to insert long text strings into a text field.\n> However, if I understand things right, the token length will limit the text\n> going into the text field to 16kB. Is this right? Should I have a look at\n> how to increase the token length arbitrarily?\n\nYes, and yes. 
It's not a critical issue as long as we have limited\ntuple sizes, but we do need to fix this eventually.\n\n(Actually I think the limit is currently 64k not 16k, because of the\nhack that parser/Makefile applies to scan.c, but the point is there\nshouldn't be any hardwired limit...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 10:13:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Token length limit " } ]
[ { "msg_contents": "Hi,\n\nI've got several complaints about case-nonsensitive search doesn't\nworking with 6.5.1 and locale enabled (koi8-r) under FreeBSD 3.1 release\nEverything works fine if locale was hardcoded in main.c ! \nThey checked that environment is set before running postmaster\nand locale works fine on their machines.\nIt seems environment doens't pass to program (main.c)\n\nI have no right now access to FreeBSD machine but I did test locale setup\nwith 6.5 under FreeBSD 3.1 and had no problem. But because I've got\nseveral complaints probably there is some real problem.\n\nWhen I implement several years ago locale support I was sure, that\nsetlocale(LC_CTYPE,\"\") should get value of LC_CTYPE from an environment\nThis is ok for Linux, Solaris and DUX I just checked setlocale man pages.\nBut something is unclear in FreeBSD 3.1 man page. As far as I remember\nsomething were written about NULL (not \"\") value. i.e.\nsetlocale(LC_CTYPE,NULL) returns current locale setting. What does 'current'\nmeans ,\n\nCould somebody check how locale environment passed to program\nunder FreeBSD 3.1 ? \n\nSomething like this simple program should be fine:\n\n-----------------------------------------------\n#include<locale.h>\n#include<stdlib.h>\n#include<stdio.h>\n\n void main()\n {\n if (setlocale(LC_CTYPE, \"\")){\n printf(\"locale: ok.\\n\\n\");\n }\n else {\n printf(\"Failed to set locale.\\n\\n\");\n return;\n }\n }\n\n----------------------------------------------\nUnder Linux I got:\n\n1. Non-existent locale\n4:47[mira]:~/app/pgsql/locale>setenv LC_CTYPE tt\n14:47[mira]:~/app/pgsql/locale>./locale_test\nFailed to set locale.\n\n2. Locale exists\n14:47[mira]:~/app/pgsql/locale>setenv LC_CTYPE koi8-r\n14:48[mira]:~/app/pgsql/locale>./locale_test\nlocale: ok.\n\n3. 
LC_CTYPE is getting from LANG - bad locale \n14:48[mira]:~/app/pgsql/locale>unsetenv LC_CTYPE\n14:49[mira]:~/app/pgsql/locale>setenv LANG tt\n14:49[mira]:~/app/pgsql/locale>./locale_test\nFailed to set locale.\n\n4. LC_CTYPE is getting from LANG - good locale\n14:49[mira]:~/app/pgsql/locale>setenv LANG koi8-r\n14:50[mira]:~/app/pgsql/locale>./locale_test\nlocale: ok.\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 13 Aug 1999 14:36:30 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "behavior of setlocale under FreeBSD 3.1" } ]
[ { "msg_contents": ">> (Actually I think the limit is currently 64k not 16k, because of the\n>> hack that parser/Makefile applies to scan.c, but the point is there\n>> shouldn't be any hardwired limit...)\nYes, I noticed that hack. \n\n>> Yes, and yes. It's not a critical issue as long as we have limited\n>> tuple sizes, but we do need to fix this eventually.\nThe theory is, of course, that now the query string lengths are unlimited,\nthere is good reason to remove the tuple block-size limit. Somebody...\nsomebody.... anybody.....\n\nAnyway, it has been a good learning experience, and thanks for everybody's\nhelp so far. With some luck, the debugging process will be short and sweet.\n\n\nMikeA\n\n\n", "msg_date": "Fri, 13 Aug 1999 16:21:00 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Token length limit " } ]
[ { "msg_contents": "I'm just getting started with PostgreSQL, and I love it; I hope to\neventually use it in all of my projects. The only thing holding me back is\nthe lack of a good way to perform left outer joins. I scanned all of the\nmailing lists, and it seems that the issue has come up several times, and\npeople have shown interest, but there has been no visible progress. If an\nimplementation is quietly making its way through CVS, please let me know!\n\nWhen I code a one-sided join I'm generally thinking \"for selected objects\nfrom this class, fetch attributes plus related data from other classes\".\nBased on my vague impression that PostgreSQL converts some queries\ninternally into nested loops, I suggest the following new statement\n(partially stolen from InterBase's stored procedure language):\n\nFOR ... [WHERE ...] [GROUP BY ...] [HAVING ...] DO <statement>\n\nThis would convert directly into a nested loop around the <statement>, and\nwould replace any empty SELECT result within <statement> with a single row\nin which \"local\" object attributes are NULL. The current object(s) from the\nFOR ... DO would be accessible inside <statement>. Then one could write the\nSQL92:\n\nSELECT p.name, c.name FROM parents p LEFT JOIN children c ON c.parent = p.id\n\nas\n\nFOR parents p DO SELECT p.name, c.name FROM children c WHERE c.parent = p.id\n\nMore complex constructions could involve nested FOR ... DO's, in which case\nthe inner FOR ... DO's would each invoke their <statement> at least once,\nwith NULL objects if necessary. A list of all widgets, exploded into parts\nand sub-parts if possible, could be written:\n\nFOR widgets w DO\n FOR parts p1, widgets wp1 WHERE p1.widget = w.id and p1.part = wp1.id DO\n SELECT w.name, wp1.name, wp2.name FROM parts p2, widgets wp2\n WHERE p2.widget = p1.part and p2.part = wp2.id\n\nDoes this look more or less complicated to implement and use than the SQL92\nLEFT JOIN? Is it too non-standard to live? Too ambiguous or narrow? 
I'd\nimplement it myself, but I'm light-years away from being able to contribute\nanything but bug reports and ideas right now.\n\nThanks,\nEvan Simpson\n\n\n", "msg_date": "Fri, 13 Aug 1999 09:57:35 -0500", "msg_from": "\"Evan Simpson\" <[email protected]>", "msg_from_op": true, "msg_subject": "PROPOSAL: Statement for one-sided joins" }, { "msg_contents": "\"Evan Simpson\" <[email protected]> writes:\n> I'm just getting started with PostgreSQL, and I love it; I hope to\n> eventually use it in all of my projects. The only thing holding me back is\n> the lack of a good way to perform left outer joins.\n\nI think Thomas Lockhart is working on outer joins ... dunno what sort\nof timetable he has in mind, though.\n\n> [ proposal snipped ]\n\n> Does this look more or less complicated to implement and use than the SQL92\n> LEFT JOIN? Is it too non-standard to live?\n\nIn general I'd rather see us putting effort into features that are in\nthe standard rather than ones that aren't...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 13:58:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] PROPOSAL: Statement for one-sided joins " } ]
[ { "msg_contents": "Tom Lane wrote:\n\n> However, there should have been an \"ERROR\" message if something reported\n> an error. You said you only saw \"NOTICE: AbortTransaction and not in\n> in-progress state\" and not any \"ERROR\" before or after it? The NOTICE\n> presumably came out of AbortTransaction itself, after whatever went\n> wrong went wrong...\n>\n\nYes, I have an ERROR message (either I didn't notice it before or it is\nnew):\n\nNOTICE: Index pkex_ellipse_opto_proc: Pages 138; Tuples 30535. Elapsed 0/0\nsec.\nNOTICE: Index pkex_ellipse_opto_proc: Pages 138; Tuples 30535. Elapsed 0/0\nsec.\nERROR: vacuum: can't destroy lock file!\nNOTICE: AbortTransaction and not in in-progress state\nNOTICE: AbortTransaction and not in in-progress state\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\n\nIt looks like the error is either occuring on table ex_ellipse_opto_proc or\nthe next table in the list ex_ellipse_proc. However, I think the error is\nmore general than that. It appears to occur just before the last table in\nthe database gets vacuumed. 
Here's the list of my tables:\n\nDatabase = db01\n +------------------+----------------------------------+----------+\n | Owner            | Relation                         | Type     |\n +------------------+----------------------------------+----------+\n | postgres         | center_out                       | table    |\n | postgres         | center_out_analog                | table    |\n | postgres         | center_out_analog_proc           | table    |\n | postgres         | center_out_cell                  | table    |\n | postgres         | center_out_cell_proc             | table    |\n | postgres         | center_out_opto                  | table    |\n | postgres         | center_out_opto_proc             | table    |\n | postgres         | center_out_pref_direction        | table    |\n | postgres         | center_out_proc                  | table    |\n | postgres         | electrode                        | table    |\n | postgres         | ellipse                          | table    |\n | postgres         | ellipse_analog                   | table    |\n | dan              | ellipse_analog_proc              | table    |\n | postgres         | ellipse_cell                     | table    |\n | dan              | ellipse_cell_proc                | table    |\n | postgres         | ellipse_opto                     | table    |\n | dan              | ellipse_opto_proc                | table    |\n | dan              | ellipse_proc                     | table    |\n | dan              | ex_ellipse                       | table    |\n | dan              | ex_ellipse_analog_proc           | table    |\n | dan              | ex_ellipse_cell                  | table    |\n | dan              | ex_ellipse_cell_proc             | table    |\n | dan              | ex_ellipse_opto                  | table    |\n | dan              | ex_ellipse_opto_proc             | table    | <---- ERROR occurs somewhere after here\n | dan              | ex_ellipse_proc                  | table    |\n +------------------+----------------------------------+----------+\n\nYesterday, I was adding tables in one by one from a previous pg_dump. The\nerror didn't pop up until after I had about 9 or 10 tables restored. I\ndidn't think about it then, but it may have always occurred after the second\nto last table in the list. But don't hold me to that.\n\nIn any case, I'll try to re-build everything like you've asked to get a\nbetter error message. Maybe if I go through step-by-step again. You'll be\nable to help me find where the error is taking place.\n\nThanks Tom and Oliver. I'll get back to you when I finish the rebuild.\n\n-Tony\n\n\n", "msg_date": "Fri, 13 Aug 1999 13:42:05 -0700", "msg_from": "\"G. 
Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum" }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> ERROR: vacuum: can't destroy lock file!\n> NOTICE: AbortTransaction and not in in-progress state\n> NOTICE: AbortTransaction and not in in-progress state\n> pqReadData() -- backend closed the channel unexpectedly.\n\nAh-hah! Oliver is right, then, at least in part --- that error message\nfrom vacuum suggests that the vc_abort bug *is* biting you. However,\nthere may be more going on, because what Oliver and others observed did\nnot include a coredump (at least not that I heard about).\n\nYou can probably suppress the problem by installing the patch I posted\nto pgsql-patches a few days ago. However, I'd appreciate it if you'd\nfirst try to reproduce the problem again with debug/assert turned on,\nso that we can get some idea whether there is an additional bug that's\nonly biting you and not the previous reporters.\n\nBTW, if vc_abort is involved then the occurrence of the problem probably\ndepends on whether any other backends are using the database and what\nthey are doing. (The vc_abort bug escaped notice up till last week\nbecause it doesn't trigger when vacuum is the only thing running.)\nDo you have other clients running when you do the vacuum? What are\nthey doing?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 17:22:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " } ]
[ { "msg_contents": "I discovered that I could reproduce the coredump Oliver and Tony were\ntalking about by the simple expedient of removing pg_vlock manually\nwhile vacuum is running. Armed with a debugger it didn't take long to\nfind out what's going wrong:\n\n(a) vacuum.c does a CommitTransaction to commit its final table's\n worth of fixes.\n\n(b) CommitTransaction calls EndPortalAllocMode.\n\n(c) vacuum calls vc_shutdown, which tries to remove pg_vlock,\n and reports an error when the unlink() call fails.\n\n(d) during error cleanup, AbortTransaction is called.\n\n(e) AbortTransaction calls EndPortalAllocMode. There has been no\n intervening StartPortalAllocMode, so the portal's context stack\n is empty. EndPortalAllocMode tries to free a nonexistent memory\n context (or, if you have asserts turned on, dies with an assert\n failure). Ka-boom.\n\nIt seems to me that EndPortalAllocMode ought to be a little more\nforgiving of being called when the portal's context stack is empty.\nOtherwise, it is unsafe to call elog() from anywhere except within\na transaction, because any attempt to abort a non-existent transaction\n*will* coredump in this code.\n\nHowever, I'd like confirmation from someone who knows portalmem.c\na little better that this is a good change to make. Is there a\nbetter way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 13 Aug 1999 18:17:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Ah-hah, I see the problem: EndPortalAllocMode()" } ]
[ { "msg_contents": "I need to run psql really quiet - no messages, just returning RC.\npsql -q doesn't works as supposed from man page - \nI'm still getting messages like:\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index .....\nIs it a feature or I need to find some workaround \n\n regards,\n Oleg\n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 14 Aug 1999 03:30:34 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": true, "msg_subject": "How to get 'psql -q' runs really quiet ?" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n> I need to run psql really quiet - no messages, just returning RC.\n> psql -q doesn't works as supposed from man page - \n> I'm still getting messages like:\n> NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index .....\n\nThe way libpq is set up, NOTICE messages *will* appear on stderr\nno matter what, unless the client app overrides the default notice\nmessage processor (which is this hugely complicated routine that\ncalls fprintf(stderr, ...) ;-)).\n\nPerhaps psql ought to plug in a no-op notice message processor\nif -q is specified.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Aug 1999 10:50:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to get 'psql -q' runs really quiet ? 
" }, { "msg_contents": "> Oleg Bartunov <[email protected]> writes:\n> > I need to run psql really quiet - no messages, just returning RC.\n> > psql -q doesn't works as supposed from man page - \n> > I'm still getting messages like:\n> > NOTICE: CREATE TABLE/PRIMARY KEY will create implicit index .....\n> \n> The way libpq is set up, NOTICE messages *will* appear on stderr\n> no matter what, unless the client app overrides the default notice\n> message processor (which is this hugely complicated routine that\n> calls fprintf(stderr, ...) ;-)).\n> \n> Perhaps psql ought to plug in a no-op notice message processor\n> if -q is specified.\n> \n\nBut it is an elog. There is quite, and there is \"Don't report any\nerrors\". We don't have a flag for that. In fact, -q only turns of\ngreeting and goodbye, and -t turns off table headings and row counts.\nCan't the user send these massages to /dev/null when starting psql, or\nis the problem trimming out those notices? Can't grep -v do that for\nthem?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Aug 1999 00:35:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] How to get 'psql -q' runs really quiet ?" } ]
[ { "msg_contents": "Tom and Oliver,\n\n I added the patch and re-built everything. No more problems with the\nvacuum analyze (knock on wood since this is still Friday the 13th).\n\n Tom, does this patch address the problem you found in your\nEndPortalAllocMode comments? I'm just wondering whether another patch\nwill be needed or if this one should cover the problem.\n\nThanks again for the help guys.\n-Tony\n\n\nTom wrote:\n> (a) vacuum.c does a CommitTransaction to commit its final table's\n> worth of fixes.\n\n> (b) CommitTransaction calls EndPortalAllocMode.\n\n> (c) vacuum calls vc_shutdown, which tries to remove pg_vlock,\n and reports an error when the unlink() call fails.\n\n> (d) during error cleanup, AbortTransaction is called.\n\n> (e) AbortTransaction calls EndPortalAllocMode. There has been no\n> intervening StartPortalAllocMode, so the portal's context stack\n> is empty. EndPortalAllocMode tries to free a nonexistent memory\n> context (or, if you have asserts turned on, dies with an assert\n> failure). Ka-boom.\n\n\n\n", "msg_date": "Fri, 13 Aug 1999 21:13:01 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " }, { "msg_contents": "\"G. Anthony Reina\" <[email protected]> writes:\n> Tom, does this patch address the problem you found in your\n> EndPortalAllocMode comments? I'm just wondering whether another patch\n> will be needed or if this one should cover the problem.\n\nI think you need not worry. EndPortalAllocMode has a bug that ought to\nbe fixed IMHO, but it's only triggered if an error is reported outside\nof any transaction context, and there is darn little backend processing\nthat happens outside of any transaction context. This vacuum shutdown\ncheck might be the only such error, in fact. 
In short, it's something\nto clean up for 6.6, but I doubt it's worth issuing a 6.5 patch for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Aug 1999 10:27:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Aborted Transaction During Vacuum " } ]
[ { "msg_contents": "Mark Dalphin <[email protected]> writes:\n> Using the UNIQUE constraint in a TABLE definition no longer does anything.\n\nInteresting. Playing with some variants of your example shows that\nUNIQUE works fine *unless* there is another column marked PRIMARY KEY.\nThen the UNIQUE constraint is ignored. Looks like a simple logic bug in\nthe table-definition expander.\n\nA look at the CVS logs reveals this apparently related entry for\nparser/analyze.c:\n\nrevision 1.102\ndate: 1999/05/12 07:17:18; author: thomas; state: Exp; lines: +68 -24\nFix problem with multiple indices defined if using column- and table-\n constraints. Reported by Tom Lane.\nNow, check for duplicate indices and retain the one which is a primary-key.\n\nThomas, do you recall what that was all about? I don't offhand...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Aug 1999 10:46:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [BUGS] UNIQUE constraint no longer works under 6.5.1 " }, { "msg_contents": "> Interesting. Playing with some variants of your example shows that\n> UNIQUE works fine *unless* there is another column marked PRIMARY KEY.\n> Then the UNIQUE constraint is ignored. Looks like a simple logic bug in\n> the table-definition expander.\n> A look at the CVS logs reveals this apparently related entry for\n> parser/analyze.c:\n> revision 1.102\n> date: 1999/05/12 07:17:18; author: thomas; state: Exp; lines: +68 -24\n> Fix problem with multiple indices defined if using column- and table-\n> constraints. Reported by Tom Lane.\n> Now, check for duplicate indices and retain the one which is a primary-key.\n\nYow! The problem reported earlier (by you, so you share some blame! ;)\nwas that if one specified a primary key *and* a unique constraint, and\nthey both pointed to the same column, then you got two indices\ncreated. 
So I tried to go through the list of indices and drop any\nwhich seemed to be the same as the primary key index.\n\nI apparently hadn't tested for this reported case (obviously :() but\nit should be easy to fix. I'll look at it soon, unless someone already\nhas.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 14 Aug 1999 15:17:59 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] UNIQUE constraint no longer works under 6.5.1" }, { "msg_contents": "> > Interesting. Playing with some variants of your example shows that\n> > UNIQUE works fine *unless* there is another column marked PRIMARY KEY.\n> > Then the UNIQUE constraint is ignored. Looks like a simple logic bug in\n> > the table-definition expander.\n> > A look at the CVS logs reveals this apparently related entry for\n> > parser/analyze.c:\n> > revision 1.102\n> > date: 1999/05/12 07:17:18; author: thomas; state: Exp; lines: +68 -24\n> > Fix problem with multiple indices defined if using column- and table-\n> > constraints. Reported by Tom Lane.\n> > Now, check for duplicate indices and retain the one which is a primary-key.\n> Yow! The problem reported earlier (by you, so you share some blame! ;)\n> was that if one specified a primary key *and* a unique constraint, and\n> they both pointed to the same column, then you got two indices\n> created. So I tried to go through the list of indices and drop any\n> which seemed to be the same as the primary key index.\n\nOK, the immediate problem was due to a cut and paste typo (I was\ncomparing column names to decide if indices were identical, and the\npointer to the name was set to be the same for both index elements).\n\nBut, the code which was in there was always a bit wimpy; it only\nchecked for duplicate indices if they both had only one column. 
I've\nmodified it to (I think) check for any number of columns, so\nconstraints like\n\n create table t1 (i int, j int, unique(i,j), primary key(i,j))\n\nshould also work correctly by swallowing the \"unique\" index.\n\nHere is a patch, to be applied in src/backend/parser/. Let me know if\nit fixes your problem and any other cases you can think of, and I'll\napply it to the tree(s).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California", "msg_date": "Sat, 14 Aug 1999 23:45:27 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BUGS] UNIQUE constraint no longer works under 6.5.1" } ]
[ { "msg_contents": "Mark Dalphin <[email protected]> writes:\n> And this is the error I get when I try to insert anything, regardless\n> of whether the foreign key exists or not:\n\n> ERROR: There is no operator '=$' for types 'int4' and 'int4'\n\nI see it too. Looks like a bug ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Aug 1999 11:10:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] Anyone recognise this error from PL/pgSQL? " }, { "msg_contents": "> > And this is the error I get when I try to insert anything, regardless\n> > of whether the foreign key exists or not:\n> > ERROR: There is no operator '=$' for types 'int4' and 'int4'\n> I see it too. Looks like a bug ...\n\nI seem to have lost this thread. But in case it hasn't been suggested,\ntry putting a space between the equals sign and the dollar sign...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Sat, 14 Aug 1999 15:26:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [SQL] Anyone recognise this error from PL/pgSQL?" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> And this is the error I get when I try to insert anything, regardless\n>>>> of whether the foreign key exists or not:\n>>>> ERROR: There is no operator '=$' for types 'int4' and 'int4'\n>> I see it too. Looks like a bug ...\n\n> I seem to have lost this thread.\n\nSorry, it was over in pgsql-sql (and probably should have been in BUGS).\nI just wanted to draw attention to it in the hackers list.\n\n> But in case it hasn't been suggested,\n> try putting a space between the equals sign and the dollar sign...\n\nThere isn't any '$' anywhere in his function ... 
plpgsql is dropping the\nball somehow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 14 Aug 1999 11:30:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [SQL] Anyone recognise this error from PL/pgSQL? " } ]
[ { "msg_contents": "> Actually I have several cron tasks and they bored me emailing\n> every night output from psql.\n> I need emails only if some problems occur.\n> Ok. I could easily redirect all messages to some file if I would\n> sure psql returns return code in right way. Then I could \n> echo this file if RC != 0\n> grep -v will not works because elog messages are printed to STDERR\n> so I need something like:\n> psql -q test < tt.sql 2>&1 | grep -v '^NOTICE:' \n> but then I will lose return code from psql :-)\n> Having several flags for different kind of messages would be\n> very useful.\n\nOK:\n\n\ttrap \"rm -f /tmp/$$\" 0 1 2 3 15\n\tpsql -q test < tt.sql >/tmp/$$ 2>&1 \n\tif [ \"$?\" -ne 0 ]\n\tthen\techo \"Failure\"\n\tfi\n\tcat /tmp/$$ | grep -v '^NOTICE:'\n\nHaving different psql flags for different elog levels is a bit much. \npsql already has too many flags.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 15 Aug 1999 09:55:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Returned mail: User unknown" }, { "msg_contents": "> Actually I have several cron tasks and they bored me emailing\n> every night output from psql.\n> I need emails only if some problems occur.\n> Ok. I could easily redirect all messages to some file if I would\n> sure psql returns return code in right way. 
Then I could \n> echo this file if RC != 0\n> grep -v will not works because elog messages are printed to STDERR\n> so I need something like:\n> psql -q test < tt.sql 2>&1 | grep -v '^NOTICE:' \n> but then I will lose return code from psql :-)\n> Having several flags for different kind of messages would be\n> very useful.\n\nOK:\n\n\ttrap \"rm -f /tmp/$$\" 0 1 2 3 15\n\tpsql -q test < tt.sql >/tmp/$$ 2>&1 \n\tif [ \"$?\" -ne 0 ]\n\tthen\techo \"Failure\"\n\tfi\n\tcat /tmp/$$ | grep -v '^NOTICE:'\n\nHaving different psql flags for different elog levels is a bit much. \npsql already has too many flags.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n", "msg_date": "Sun, 15 Aug 1999 10:12:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "psql return code for NOTICE" } ]
[ { "msg_contents": "I�m trying to implement Statement Triggers in postgreSQL. The basic idea is\ncheck (in ExcutePlan) if there are statement before call first time to\nExecAppend, ExecDelete or ExecUpdate (or after the last call in case of\nafter statement).\n\nBut if I exec \"Insert into tbtest values (22,'cadena');\" then ExecutePlan is\ncall two times, and ExecX too. Why?\nThe execution trace is:\n Entering ExecutePlan\n Calling ExecAppend\n Entering ExecAppend\n Entering ExecutePlan\n Calling ExecAppend\n Entering ExecAppend\n\nOther Question: SQL3 says\n\" The execution of triggered actionsdepends on the cursosr mode of the\ncurrent SQ-transaction. If the cursor mode is set to cascade off, then the\nexecution of the <triggered SQL statement>s is effectively deferred until\nenacted implicitly by execution of a <commit statement> or a <close\nstatement>. Otherwise, the <triggered SQl statement>s are effectively\nexecuted ...\"\n\nHow apply this to postgre?\n\nThanks everybody.\n\n\nF.J.Cuberos\n\n", "msg_date": "Sun, 15 Aug 1999 16:28:03 +0200", "msg_from": "\"F J Cuberos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Statement Triggers. Ideas & help." }, { "msg_contents": "Forget the problem with Executeplan, I was getting the syslogs and stdour\nmessages in the same screen; SO DOUBLE!!!!.\n\nWith statement triggers (and defined in SQL3 too) will be interesting to\naccess OLD and NEW values of modified tables. How can be implemented? It�s\npossible to use MVCC for this?\n\nThanks\n\nF.J. Cuberos\nSevilla - Spain\n\n", "msg_date": "Mon, 16 Aug 1999 00:41:51 +0200", "msg_from": "\"F J Cuberos\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Statement Triggers. Ideas & help." 
}, { "msg_contents": "F J Cuberos wrote:\n> \n> Forget the problem with Executeplan, I was getting the syslogs and stdour\n> messages in the same screen; SO DOUBLE!!!!.\n> \n> With statement triggers (and defined in SQL3 too) will be interesting to\n> access OLD and NEW values of modified tables. How can be implemented? It�s\n> possible to use MVCC for this?\n\nMVCC uses t_xmin/t_xmax to decide what's visible to transaction.\nFor OLD/NEW you will have to analyze t_cmin/t_cmax as well.\n\nVadim\n", "msg_date": "Tue, 17 Aug 1999 09:55:27 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Statement Triggers. Ideas & help." }, { "msg_contents": "At 09:55 17/08/99 +0800, Vadim Mikheev wrote:\n>F J Cuberos wrote:\n>> \n>> Forget the problem with Executeplan, I was getting the syslogs and stdour\n>> messages in the same screen; SO DOUBLE!!!!.\n>> \n>> With statement triggers (and defined in SQL3 too) will be interesting to\n>> access OLD and NEW values of modified tables. How can be implemented? It�s\n>> possible to use MVCC for this?\n>\n>MVCC uses t_xmin/t_xmax to decide what's visible to transaction.\n>For OLD/NEW you will have to analyze t_cmin/t_cmax as well.\n\nFWIW, Dec (Oracle) Rdb does not allow access to OLD & NEW in statement\ntriggers; if you want the rows, you have to write 'for each row' triggers.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.C.N. 008 659 498) | /(@) ______---_\nTel: +61-03-5367 7422 | _________ \\\nFax: +61-03-5367 7430 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 17 Aug 1999 15:29:47 +1000", "msg_from": "Philip Warner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Statement Triggers. Ideas & help." } ]
[ { "msg_contents": "While poking at the vacuum-induced coredump we were discussing on\nFriday, I noticed that psql did not report \n\tERROR: vacuum: can't destroy lock file!\neven though this message was showing up in the postmaster log.\nEven more interesting, psql *did* report the\n\tNOTICE: AbortTransaction and not in in-progress state \nthat the backend emitted *after* the elog(ERROR) and just before\ncoredumping.\n\nIt turns out that this is a libpq deficiency: it's got the error\nmessage, but because PQexec() was used, it's waiting around for\na 'Z' ReadyForQuery message before it hands the error message\nback to the application. Since the backend crashes, of course\nthe 'Z' never comes ... and when libpq detects closure of the\nconnection, it wipes out the stored error message in its haste\nto report\n\tpqReadData() -- backend closed the channel unexpectedly.\n\t This probably means the backend terminated abnormally\n\t before or while processing the request.\nwhich is all that the user gets to see, unless he thinks to \nlook in the postmaster log. Boo hiss.\n\n(The reason the NOTICE shows up is that it's just dumped to stderr\nimmediately upon receipt, rather than being queued to hand back\nto the application.)\n\nI have a fix in mind for this: concatenate \"backend closed the channel\"\nto the waiting error message, instead of wiping it out. 
But I think\nI will wait till after Michael Ansley's long-query changes have been\ncommitted before I start hacking on libpq again.\n\nAnyway, if you want to know what really happened right before a\nbackend crash, you should look in the postmaster log until this\nis fixed...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 15 Aug 1999 22:56:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "libpq drops error messages received just before backend crash" }, { "msg_contents": "I wrote:\n> It turns out that this is a libpq deficiency: it's got the error\n> message, but because PQexec() was used, it's waiting around for\n> a 'Z' ReadyForQuery message before it hands the error message\n> back to the application. Since the backend crashes, of course\n> the 'Z' never comes ... and when libpq detects closure of the\n> connection, it wipes out the stored error message in its haste\n> to report\n> \tpqReadData() -- backend closed the channel unexpectedly.\n> \t This probably means the backend terminated abnormally\n> \t before or while processing the request.\n> which is all that the user gets to see, unless he thinks to \n> look in the postmaster log. Boo hiss.\n\nAlthough I forgot to mention it in the commit log entry, this problem\nis fixed in the libpq changes I just committed to the current branch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Aug 1999 21:53:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: libpq drops error messages received just before backend crash " } ]
[ { "msg_contents": "I have committed following changes to both the current and the statble\ntree. They should fix some problems when compiling libpq on Windows\nwith MB enabled, according to Hiroki Kataoka, the author of the\npatches.\n---\nTatsuo Ishii\n\n----------------------------------------------------------------\ndiff -rc src.orig/interfaces/libpq/win32.mak src/interfaces/libpq/win32.mak\n*** src.orig/interfaces/libpq/win32.mak\tTue Jun 8 16:00:37 1999\n--- src/interfaces/libpq/win32.mak\tFri Jul 16 00:28:16 1999\n***************\n*** 37,42 ****\n--- 37,48 ----\n \t-@erase \"$(OUTDIR)\\libpq.pch\"\n \t-@erase \"$(OUTDIR)\\libpqdll.exp\"\n \t-@erase \"$(OUTDIR)\\libpqdll.lib\"\n+ !IFDEF MULTIBYTE\n+ \t-@erase \"$(INTDIR)\\common.obj\"\n+ \t-@erase \"$(INTDIR)\\wchar.obj\"\n+ \t-@erase \"$(INTDIR)\\conv.obj\"\n+ \t-@erase \"$(INTDIR)\\big5.obj\"\n+ !ENDIF\n \n \"$(OUTDIR)\" :\n if not exist \"$(OUTDIR)/$(NULL)\" mkdir \"$(OUTDIR)\"\n***************\n*** 67,73 ****\n \t\"$(INTDIR)\\fe-print.obj\"\n \n !IFDEF MULTIBYTE\n! LIB32_OBJS = $(LIB32_OBJS) $(INTDIR)\\common.obj $(INTDIR)\\wchar.obj $(INTDIR)\\conv.obj\n !ENDIF\n \n RSC_PROJ=/l 0x409 /fo\"$(INTDIR)\\libpq.res\"\n--- 73,79 ----\n \t\"$(INTDIR)\\fe-print.obj\"\n \n !IFDEF MULTIBYTE\n! 
LIB32_OBJS = $(LIB32_OBJS) \"$(INTDIR)\\common.obj\" \"$(INTDIR)\\wchar.obj\" \"$(INTDIR)\\conv.obj\" \"$(INTDIR)\\big5.obj\"\n !ENDIF\n \n RSC_PROJ=/l 0x409 /fo\"$(INTDIR)\\libpq.res\"\n***************\n*** 103,110 ****\n--- 109,139 ----\n $(CPP) @<<\n $(CPP_PROJ) ..\\..\\backend\\lib\\dllist.c\n <<\n+ \n \n+ !IFDEF MULTIBYTE\n+ \"$(INTDIR)\\common.obj\" : ..\\..\\backend\\utils\\mb\\common.c\n+ $(CPP) @<<\n+ $(CPP_PROJ) /I \".\" ..\\..\\backend\\utils\\mb\\common.c\n+ <<\n \n+ \"$(INTDIR)\\wchar.obj\" : ..\\..\\backend\\utils\\mb\\wchar.c\n+ $(CPP) @<<\n+ $(CPP_PROJ) /I \".\" ..\\..\\backend\\utils\\mb\\wchar.c\n+ <<\n+ \n+ \"$(INTDIR)\\conv.obj\" : ..\\..\\backend\\utils\\mb\\conv.c\n+ $(CPP) @<<\n+ $(CPP_PROJ) /I \".\" ..\\..\\backend\\utils\\mb\\conv.c\n+ <<\n+ \n+ \"$(INTDIR)\\big5.obj\" : ..\\..\\backend\\utils\\mb\\big5.c\n+ $(CPP) @<<\n+ $(CPP_PROJ) /I \".\" ..\\..\\backend\\utils\\mb\\big5.c\n+ <<\n+ !ENDIF\n+ \n+ \n .c{$(CPP_OBJS)}.obj::\n $(CPP) @<<\n $(CPP_PROJ) $<\n\n\n", "msg_date": "Mon, 16 Aug 1999 13:35:27 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "WIN32 + MB fix" } ]
[ { "msg_contents": "> Other then that we won *woo hoo!!* ...\n> what ever did happen? *raised eyebrow*\n\nOK, I was sort of waiting for an update to be posted on the LinuxWorld\nsite, but I still don't see anything, so you'll have to take my word\nfor it ;)\n\nHere's the full story:\n\nI hadn't heard *anything* from the expo organizers prior to departing\nfor San Jose on Wednesday morning, so was a bit leary of my reception\nupon showing up cold. After an hour to get my rental car (seems some\nconvention had taken up most of the available ones...), and another 20\nminutes to drive to the San Jose convention center, it took a couple\nof minutes of talking, armed with a printout of our invitation, to get\nonto the exhibit floor without paying money (needless to say, I hate\nparting with a dollar, and hey, we *were* invited ;)\n\nAnyway, I got there after 11am, and the awards ceremony was taking\nplace at noon according to our invitation. But since it was a\nLinuxWorld magazine event, and taking place on the show floor, it\nwasn't actually listed in the LinuxWorld Expo program and my strategy\nof random questioning wasn't getting me closer. Thankfully they\nstarted announcing it over the PA by around 11:30, so I found the\nlocation pretty easily after that.\n\nIt would have been easier to find, but there were quite a few\nexhibits, including a large one from Oracle (oh oh, no way we're going\nto win this one, eh?...). It was great seeing a lot of the companies\nin the Linux/OpenSource market actually acting like real companies\nwith booths, \"demo girls\", the whole schmear. FreeBSD had a large\nbooth full of t-shirts, coffee mugs, and CDs; apparently they have a\nstrong Bay area contingent.\n\nWhen I got the the awards area, they were quite nice, and apologized\nfor not responding. Apparently their e-mail wasn't actually working\nduring the week before the show. 
Funny, I forgot to ask what system\nthey actually were running, but assume that it was some M$ corporate\ngarbage. They were using PowerPoint btw for projecting during the\nawards. Should have given them a hard time about it...\n\nThe awards ceremony was pretty low key. The magazine editor (sorry, I\ndon't remember names) presented the awards for \"Show Favorites\" first.\nThese were ones voted on by people actually at the show. Debian\ncleaned up a bunch of awards, btw. Then they got to the \"Editor's\nChoice\" awards. The Database category was pretty far down the list,\nbut it was only 10-15 minutes before they got to us. They'd been\npresenting the \"Finalist\" trophy first, and the \"Winner\" trophy after\nthat, and I was pretty amazed that they called Oracle before\nPostgreSQL! Just walked up to the front, got my picture taken with the\npresenter and the trophy, and that was it! But later, while I was\ngetting a box for the trophy, I talked with the editor a bit and he\nvolunteered that his DB editor was a Postgres fan, and that he was\nstarting to use it too. Did I mention that the guidelines for the\nawards judging included criteria on how much the candidate had\ncontributed to the OpenSource movement? So maybe our win over Oracle\nshouldn't surprise us too much, but it still felt great. Not every day\nthat you get to kick Oracle's butt up and down the street ;)\n\nSo, we've got a nice trophy, consisting of a curved piece of glass\nwith our project name (and Marc F's name; note to us: we should\nprobably ask them to put \"Postgres Development Group\" on it next time)\netched in. Looks nice. Supposedly they were going to post pictures on\ntheir web site, but I haven't found them yet.\n\nWhat should I do with the trophy? Send it straight to Marc, or should\nit travel a more indirect route, perhaps passing through Pennsylvania\nand other places first? 
The box seems fairly sturdy, and if I put it\ninto a bigger box then it should travel OK.\n\nSo, that's the story, and I'm sticking to it. I stayed up in the area\nanother couple of days working, so didn't get home until late Friday\nnight.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Mon, 16 Aug 1999 06:15:40 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [CORE] Re: tomorrow" }, { "msg_contents": "> > Other then that we won *woo hoo!!* ...\n> > what ever did happen? *raised eyebrow*\n> \n> OK, I was sort of waiting for an update to be posted on the LinuxWorld\n> site, but I still don't see anything, so you'll have to take my word\n> for it ;)\n> \n> Here's the full story:\n> \n> I hadn't heard *anything* from the expo organizers prior to departing\n> for San Jose on Wednesday morning, so was a bit leary of my reception\n> upon showing up cold. After an hour to get my rental car (seems some\n> convention had taken up most of the available ones...), and another 20\n> minutes to drive to the San Jose convention center, it took a couple\n> of minutes of talking, armed with a printout of our invitation, to get\n> onto the exhibit floor without paying money (needless to say, I hate\n> parting with a dollar, and hey, we *were* invited ;)\n\nI am heading to bed, but where is the picture? They must have a digital\ncamera at JPL somewhere. It is Nasa/JPL. Take one off that Mars rover\nthingy and post a picture, OK? :-)\n\n(I see you just got home on Friday.)\n\nSecond, my guess is that we are going to be getting one of these every\nyear, so Marc will need to buy a display case, right? :-) If\ncontribution to open source is a criteria, we will win easily next year.\nWe just need a few cool'o features for >= 6.6.\n\n> What should I do with the trophy? 
Send it straight to Marc, or should\n> it travel a more indirect route, perhaps passing through Pennsylvania\n> and other places first? The box seems fairly sturdy, and if I put it\n> into a bigger box then it should travel OK.\n\nSounds like that run they do for the Olympics with the torch. \"The\nPostgreSQL award will be passing through your town on October 12th, at\n...\" We could have Linux user groups doing the running. :-)\n\nThe big question is whether the box can make it to Russia.\n\n> So, that's the story, and I'm sticking to it. I stayed up in the area\n> another couple of days working, so didn't get home until late Friday\n> night.\n\nThanks for going.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 03:28:15 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "Bruce Momjian wrote:\n> The big question is whether the box can make it to Russia.\n\nWhy not get the Russians.. and other contributors to go to the box :-).\n\nPerhaps a donation thingy on the home page to cover costs....?\n--------\nRegards\nTheo\n", "msg_date": "Mon, 16 Aug 1999 10:20:48 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "\nwelp, I missed this thread somewhere...\n\nhuh? what box? what did I miss?? :(\n\n\n\nOn Mon, 16 Aug 1999, Theo Kramer wrote:\n\n> Bruce Momjian wrote:\n> > The big question is whether the box can make it to Russia.\n> \n> Why not get the Russians.. and other contributors to go to the box :-).\n> \n> Perhaps a donation thingy on the home page to cover costs....?\n> --------\n> Regards\n> Theo\n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 09:10:10 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "On Mon, 16 Aug 1999, Thomas Lockhart wrote:\n\n> When I got the the awards area, they were quite nice, and apologized\n> for not responding. Apparently their e-mail wasn't actually working\n> during the week before the show. Funny, I forgot to ask what system\n> they actually were running, but assume that it was some M$ corporate\n> garbage. They were using PowerPoint btw for projecting during the\n> awards. Should have given them a hard time about it...\n\n>From what Jeff was able to tell when he talked to someone there a few\nweeks back, they are \"considering moving their accounting system to\nPostgreSQL\" ... and the 'PowerPoint'...i twouldn't have been StarOffice's\nproduct vs MS, no? :)\n\n> So, we've got a nice trophy, consisting of a curved piece of glass\n> with our project name (and Marc F's name; note to us: we should\n> probably ask them to put \"Postgres Development Group\" on it next time)\n\nNo disagreement here...\n\n> What should I do with the trophy? Send it straight to Marc, or should\n> it travel a more indirect route, perhaps passing through Pennsylvania\n> and other places first? The box seems fairly sturdy, and if I put it\n> into a bigger box then it should travel OK.\n\nSounds cool to me...hrmmm...I'd like to get it, like, VRML'd so that we\ncould put it on the web site and rotated and whatnot :) \"see the back of\nour award\" *grin* And I have a good graphic artist up here, I'll get it\n\"touched up\" to say \"PostgreSQL Global Development Group\", if it can be\ndone :)\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 09:27:09 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [CORE] Re: tomorrow" }, { "msg_contents": "On Mon, 16 Aug 1999, Bruce Momjian wrote:\n\n> Second, my guess is that we are going to be getting one of these every\n> year, so Marc will need to buy a display case, right? :-) If\n> contribution to open source is a criteria, we will win easily next year.\n> We just need a few cool'o features for >= 6.6.\n\n*rofl* that would be most interesting to watch...\"PostgreSQL wins award\n15 years running!!\" *grin*\n\n> > What should I do with the trophy? Send it straight to Marc, or should\n> > it travel a more indirect route, perhaps passing through Pennsylvania\n> > and other places first? The box seems fairly sturdy, and if I put it\n> > into a bigger box then it should travel OK.\n> \n> Sounds like that run they do for the Olympics with the torch. \"The\n> PostgreSQL award will be passing through your town on October 12th, at\n> ...\" We could have Linux user groups doing the running. :-)\n\nLinux users can run? :)\n\n> The big question is whether the box can make it to Russia.\n\nI think we should get a \"duplicate\" made if we are going to have it do a\nworld tour or something like that, no? *rofl* \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 09:29:13 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Mon, 16 Aug 1999, Thomas Lockhart wrote:\n\n> It would have been easier to find, but there were quite a few\n> exhibits, including a large one from Oracle (oh oh, no way we're going\n> to win this one, eh?...). It was great seeing a lot of the companies\n> in the Linux/OpenSource market actually acting like real companies\n> with booths, \"demo girls\", the whole schmear. FreeBSD had a large\n> booth full of t-shirts, coffee mugs, and CDs; apparently they have a\n> strong Bay area contingent.\n\nYa, the way I've heard it (might not have been this one), but the\nFreeBSD'rs have been going to Linux conventions with \"free CDs\" to give\naway, but RedHat has been 'anal' about doing similar...\n\n> So, that's the story, and I'm sticking to it. I stayed up in the area\n> another couple of days working, so didn't get home until late Friday\n> night.\n\nthanks for going :) I'm glad we had a 'human' presence at the awards...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 09:34:35 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n\n> I think we should get a \"duplicate\" made if we are going to have it do a\n> world tour or something like that, no? *rofl* \n\nWhat, like the Stanley Cup? :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 16 Aug 1999 10:36:57 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> Bruce Momjian wrote:\n> > The big question is whether the box can make it to Russia.\n> \n> Why not get the Russians.. and other contributors to go to the box :-).\n> \n> Perhaps a donation thingy on the home page to cover costs....?\n\nThere has been talk that if we write a book, we can use the profits to\nfly people around for \"meetings\".\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 10:56:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "Bruce Momjian wrote:\n> There has been talk that if we write a book, we can use the profits to\n> fly people around for \"meetings\".\n\nI'll be keeping my nose glued to oreilly.com :-)\n--------\nRegards\nTheo\n", "msg_date": "Mon, 16 Aug 1999 17:33:50 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n\n> On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n> \n> > I think we should get a \"duplicate\" made if we are going to have it do a\n> > world tour or something like that, no? *rofl* \n> \n> What, like the Stanley Cup? :)\n\nExactly :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 13:20:15 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> Bruce Momjian wrote:\n> > There has been talk that if we write a book, we can use the profits to\n> > fly people around for \"meetings\".\n> \n> I'll be keeping my nose glued to oreilly.com :-)\n\nThey haven't approached us yet.  Other publishers have.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 12:41:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n> \n> > On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n> > \n> > > I think we should get a \"duplicate\" made if we are going to have it do a\n> > > world tour or something like that, no? *rofl* \n> > \n> > What, like the Stanley Cup? :)\n> \n> Exactly :)\n\nYou know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\nMarc, it will have traveled around the world.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 12:43:44 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Mon, 16 Aug 1999, Bruce Momjian wrote:\n\n> > On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n> > \n> > > On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n> > > \n> > > > I think we should get a \"duplicate\" made if we are going to have it do a\n> > > > world tour or something like that, no? *rofl* \n> > > \n> > > What, like the Stanley Cup? :)\n> > \n> > Exactly :)\n> \n> You know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\n> Marc, it will have traveled around the world.\n> \n> \n\nThen it can be called: The 1999 PostgreSQL World Tour or wait a few\nmonths and call it: The PostgreSQL 2K World Tour or I can shut up now.\n\n:)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 16 Aug 1999 13:08:55 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n\n> On Mon, 16 Aug 1999, Bruce Momjian wrote:\n> \n> > > On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n> > > \n> > > > On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n> > > > \n> > > > > I think we should get a \"duplicate\" made if we are going to have it do a\n> > > > > world tour or something like that, no? *rofl* \n> > > > \n> > > > What, like the Stanley Cup? :)\n> > > \n> > > Exactly :)\n> > \n> > You know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\n> > Marc, it will have traveled around the world.\n> > \n> > \n> \n> Then it can be called: The 1999 PostgreSQL World Tour or wait a few\n> months and call it: The PostgreSQL 2K World Tour or I can shut up now.\n\n*groan* *grin*\n\nThe 1999 one sounds better, IMHO :)\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Mon, 16 Aug 1999 14:38:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "Bruce Momjian wrote:\n> \n> > On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n> >\n> > > On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n> > >\n> > > > I think we should get a \"duplicate\" made if we are going to have it do a\n> > > > world tour or something like that, no? *rofl*\n> > >\n> > > What, like the Stanley Cup? :)\n> >\n> > Exactly :)\n> \n> You know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\n> Marc, it will have traveled around the world.\n\nI like this idea -:))\n\nVadim\n", "msg_date": "Tue, 17 Aug 1999 09:22:56 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": ">Bruce Momjian wrote:\n>> \n>> > On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n>> >\n>> > > On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n>> > >\n>> > > > I think we should get a \"duplicate\" made if we are going to have it do a\n>> > > > world tour or something like that, no? *rofl*\n>> > >\n>> > > What, like the Stanley Cup? :)\n>> >\n>> > Exactly :)\n>> \n>> You know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\n>> Marc, it will have traveled around the world.\n>\n>I like this idea -:))\n>\n>Vadim\n\nI like it too:-)\n--\nTatsuo Ishii\n\n\n", "msg_date": "Tue, 17 Aug 1999 10:33:58 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow " },
{ "msg_contents": "\nOn 17-Aug-99 Tatsuo Ishii wrote:\n>>Bruce Momjian wrote:\n>>> \n>>> > On Mon, 16 Aug 1999, Vince Vielhaber wrote:\n>>> >\n>>> > > On Mon, 16 Aug 1999, The Hermit Hacker wrote:\n>>> > >\n>>> > > > I think we should get a \"duplicate\" made if we are going to have it do a\n>>> > > > world tour or something like that, no? *rofl*\n>>> > >\n>>> > > What, like the Stanley Cup? :)\n>>> >\n>>> > Exactly :)\n>>> \n>>> You know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\n>>> Marc, it will have traveled around the world.\n>>\n>>I like this idea -:))\n>>\n>>Vadim\n> \n> I like it too:-)\n\nHmmmm... Y'know they always send a rep along with the Stanley Cup so as\nto keep it safe........... :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Mon, 16 Aug 1999 21:43:05 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> >>> You know, if we send it to Vadim, then to Tatsuo or Hiroshi, then to\n> >>> Marc, it will have traveled around the world.\n> >>\n> >>I like this idea -:))\n> >>\n> >>Vadim\n> > \n> > I like it too:-)\n> \n> Hmmmm... Y'know they always send a rep along with the Stanley Cup so as\n> to keep it safe........... :)\n> \n\nYou know, my wife and I were just talking today about taking an\naround-the-world cruise someday.  This may be my chance. :-)\n\nI am seeing double-smiles from Vadim, so if people are serious and not\njust joking, I will try and make the arrangements so it is dropped off\nand picked up by reputable carriers to make the trip.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 21:55:36 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "Bruce Momjian wrote:\n> \n> You know, my wife and I were just talking today about taking an\n> around-the-world cruise someday.  This may be my chance. :-)\n> \n> I am seeing double-smiles from Vadim, so if people are serious and not\n> just joking, I will try and make the arrangements so it is dropped off\n> and picked up by reputable carriers to make the trip.\n\nI'm not joking (though cost may be issue).\nFirst step is to define the route.\nSo, who would like to participate in this project?\n\nVadim\n", "msg_date": "Tue, 17 Aug 1999 10:30:31 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> Bruce Momjian wrote:\n> > \n> > You know, my wife and I were just talking today about taking an\n> > around-the-world cruise someday.  This may be my chance. :-)\n> > \n> > I am seeing double-smiles from Vadim, so if people are serious and not\n> > just joking, I will try and make the arrangements so it is dropped off\n> > and picked up by reputable carriers to make the trip.\n> \n> I'm not joking (though cost may be issue).\n> First step is to define the route.\n> So, who would like to participate in this project?\n\nOK, let's get the box size and weight, and I will see what global\ncarriers can handle this.  Seems DHL may be a good choice, though they\nseem to only do fast service.  Let's see who has a door-to-door low\ncost/slow option that covers the globe.  Suggestions?\n\nI just tried Airborne Express, and they don't have presence in all\nareas.  They ship via third party to some places, so the billing can't\nbe done from one location.  DHL looks expensive, and doesn't allow the\nproper billing either, so I can't pay for it all here.\n\nMy idea is to plan the route, and have each person get a price for any\nshipping means they prefer, and I will include checks in the award box,\nfor the proper amounts, to be cashed by each person for use in paying\ntheir part of the shipping.  How does this sound?  That seems like the\nonly way because I can't find a company that will allow for all shipping\nto be paid by one person.  It also allows us to use cheaper carriers. \nI am sure the US Postal Service is cheaper than these premium carriers.\n\nShould England be on our list too?  Don't want to forget people.\n\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup. 
|  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 23:57:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> > I'm not joking (though cost may be issue).\n> > First step is to define the route.\n> > So, who would like to participate in this project?\n> OK, let's get the box size and weight, and I will see what global\n> carriers can handle this.  Seems DHL may be a good choice, though they\n> seem to only do fast service.  Let's see who has a door-to-door low\n> cost/slow option that covers the globe.  Suggestions?\n\nThe weight for the original box is low ~2lbs/1kg. We might want to put\nit inside another box full of peanuts, but the weight would stay low;\nsay under 3kg. I'll help sponsor shipping in and out of Russia, and\nI'll bet DHL goes where you want. FedEx seems to have less coverage in\nsome areas, but those may just be the ones I've run into like China.\n\nI think perhaps some other legs of the trip could be sponsored by the\narea reps? Perhaps we should have a period where folks can propose a\nvisit, and if it coincides with a local event like a club meeting so\nmuch the better.\n\nbtw, I should be able to get a digital photo of the trophy soon, but\nhave been swamped at work and am getting ready to leave for 10 days\nfor vacation. So I may not be much help for a little while :( I hope\nthat doesn't delay a world tour too long...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 17 Aug 1999 06:11:49 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> > > I'm not joking (though cost may be issue).\n> > > First step is to define the route.\n> > > So, who would like to participate in this project?\n> > OK, let's get the box size and weight, and I will see what global\n> > carriers can handle this.  Seems DHL may be a good choice, though they\n> > seem to only do fast service.  Let's see who has a door-to-door low\n> > cost/slow option that covers the globe.  Suggestions?\n> \n> The weight for the original box is low ~2lbs/1kg. We might want to put\n> it inside another box full of peanuts, but the weight would stay low;\n> say under 3kg. I'll help sponsor shipping in and out of Russia, and\n> I'll bet DHL goes where you want. FedEx seems to have less coverage in\n> some areas, but those may just be the ones I've run into like China.\n\nThat's a good weight.  The big problem is getting it _out_ of these\nplaces.  Seems the shipments mostly have to be paid by the sender, so\nit requires the people who have the award to pay shipping.  I would like\nto see someone other than the sender pay, if possible, but if people\nwant to host the cost for their leg of the trip, all the better.  Cost\nfrom USA to Krasnoyarsk for a 2 pound package is $80 via DHL.\n\n> I think perhaps some other legs of the trip could be sponsored by the\n> area reps? Perhaps we should have a period where folks can propose a\n> visit, and if it coincides with a local event like a club meeting so\n> much the better.\n\nOK, we should probably start taking visit requests.  Any major\ndevelopers who want to host it for a while, speak up?  Certainly anyone\non the developers page is welcome.  We already have Thomas, me, Vadim,\nTatsuo, and Marc, in that order across the globe.  Or we can go Thomas,\nTatsuo, Vadim, me, and Marc, though this would mean it had not crossed\nthe USA.  In fact, we can each take a picture of it in our homes, and\nmake a web page of it.  (I can scan in any photos for those without\ndigital cameras.)  Now, that would be nifty.\n\n> btw, I should be able to get a digital photo of the trophy soon, but\n> have been swamped at work and am getting ready to leave for 10 days\n> for vacation. So I may not be much help for a little while :( I hope\n> that doesn't delay a world tour too long...\n\nConsider your time with the award ticking. :-)  You get to play with it\ntoo.\n\nI can imagine the trip taking several months to cross the globe, so\nthere is no rush.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 02:44:21 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "Thomas,\n\nBefore sending the box to world trip could you make a 3D picture and\npublish it on the Web. Take a look at Freedom VR \nhttp://www.honeylocust.com/vr/ - it's free and works fine.\n\nAs for the Russia in my experience DHL was always good. If you'll have a \nproblem with sending the trophy to Vadim I could arrange that from\nMoscow.\n\tRegards,\n\t\tOleg\n\nOn Tue, 17 Aug 1999, Thomas Lockhart wrote:\n\n> Date: Tue, 17 Aug 1999 06:11:49 +0000\n> From: Thomas Lockhart <[email protected]>\n> To: Bruce Momjian <[email protected]>\n> Cc: Vadim Mikheev <[email protected]>,\n>     Postgres Hackers List <[email protected]>\n> Subject: Re: [HACKERS] Re: [CORE] Re: tomorrow\n> \n> > > I'm not joking (though cost may be issue).\n> > > First step is to define the route.\n> > > So, who would like to participate in this project?\n> > OK, let's get the box size and weight, and I will see what global\n> > carriers can handle this.  Seems DHL may be a good choice, though they\n> > seem to only do fast service.  Let's see who has a door-to-door low\n> > cost/slow option that covers the globe.  Suggestions?\n> \n> The weight for the original box is low ~2lbs/1kg. We might want to put\n> it inside another box full of peanuts, but the weight would stay low;\n> say under 3kg. I'll help sponsor shipping in and out of Russia, and\n> I'll bet DHL goes where you want. FedEx seems to have less coverage in\n> some areas, but those may just be the ones I've run into like China.\n> \n> I think perhaps some other legs of the trip could be sponsored by the\n> area reps? Perhaps we should have a period where folks can propose a\n> visit, and if it coincides with a local event like a club meeting so\n> much the better.\n> \n> btw, I should be able to get a digital photo of the trophy soon, but\n> have been swamped at work and am getting ready to leave for 10 days\n> for vacation. So I may not be much help for a little while :( I hope\n> that doesn't delay a world tour too long...\n> \n> - Thomas\n> \n> -- \n> Thomas Lockhart\t\t\t\[email protected]\n> South Pasadena, California\n> \n\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 17 Aug 1999 10:45:20 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> Thomas,\n> \n> Before sending the box to world trip could you make a 3D picture and\n> publish it on the Web. Take a look at Freedom VR \n> http://www.honeylocust.com/vr/ - it's free and works fine.\n\nYes, I think Marc wants to do that.\n\n> As for the Russia in my experience DHL was always good. If you'll have a \n> problem with sending the trophy to Vadim I could arrange that from\n> Moscow.\n\nYes, DHL is good, but they can't do the entire trip with one person\npaying it all.  They have to have the sender pay, so that is why I was\nleaning to including checks in the package for those who want it.  Also,\nDHL-type carriers only offer 1-3 day delivery, which can get expensive\nif we have many stops.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 02:51:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Tue, 17 Aug 1999, Bruce Momjian wrote:\n\n> OK, we should probably start taking visit requests.  Any major\n> developers who want to host it for a while, speak up? 
Certainly anyone\n> on the developers page is welcome.  We already have Thomas, me, Vadim,\n> Tatsuo, and Marc, in that order across the globe.  Or we can go Thomas,\n> Tatsuo, Vadim, me, and Marc, though this would mean it had not crossed\n> the USA.  In fact, we can each take a picture of it in our homes, and\n> make a web page of it.  (I can scan in any photos for those without\n> digital cameras.)  Now, that would be nifty.\n\nWell if the lowly webmaster is eligible count me in.  I can fedex to the\nnext person on the list.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 17 Aug 1999 07:28:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Tue, 17 Aug 1999, Vadim Mikheev wrote:\n\n> Bruce Momjian wrote:\n> > \n> > You know, my wife and I were just talking today about taking an\n> > around-the-world cruise someday.  This may be my chance. :-)\n> > \n> > I am seeing double-smiles from Vadim, so if people are serious and not\n> > just joking, I will try and make the arrangements so it is dropped off\n> > and picked up by reputable carriers to make the trip.\n> \n> I'm not joking (though cost may be issue).\n> First step is to define the route.\n> So, who would like to participate in this project?\n\nI think, if we are going to be doing this, each person along the route has\nto \"add something\" to the package...a postcard from their area or\nsomething like that?  something to \"Prove it was there\"?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 09:21:19 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Tue, 17 Aug 1999, Bruce Momjian wrote:\n\n> > > > I'm not joking (though cost may be issue).\n> > > > First step is to define the route.\n> > > > So, who would like to participate in this project?\n> > > OK, let's get the box size and weight, and I will see what global\n> > > carriers can handle this.  Seems DHL may be a good choice, though they\n> > > seem to only do fast service.  Let's see who has a door-to-door low\n> > > cost/slow option that covers the globe.  Suggestions?\n> > \n> > The weight for the original box is low ~2lbs/1kg. We might want to put\n> > it inside another box full of peanuts, but the weight would stay low;\n> > say under 3kg. I'll help sponsor shipping in and out of Russia, and\n> > I'll bet DHL goes where you want. FedEx seems to have less coverage in\n> > some areas, but those may just be the ones I've run into like China.\n> \n> That's a good weight.  The big problem is getting it _out_ of these\n> places.  Seems the shipments mostly have to be paid by the sender, so\n> it requires the people who have the award to pay shipping.  I would like\n> to see someone other than the sender pay, if possible, but if people\n> want to host the cost for their leg of the trip, all the better.  Cost\n> from USA to Krasnoyarsk for a 2 pound package is $80 via DHL.\n> \n> > I think perhaps some other legs of the trip could be sponsored by the\n> > area reps? Perhaps we should have a period where folks can propose a\n> > visit, and if it coincides with a local event like a club meeting so\n> > much the better.\n> \n> OK, we should probably start taking visit requests.  Any major\n> developers who want to host it for a while, speak up?  Certainly anyone\n> on the developers page is welcome.  We already have Thomas, me, Vadim,\n> Tatsuo, and Marc, in that order across the globe.  Or we can go Thomas,\n> Tatsuo, Vadim, me, and Marc, though this would mean it had not crossed\n> the USA.  In fact, we can each take a picture of it in our homes, and\n> make a web page of it.  (I can scan in any photos for those without\n> digital cameras.)  Now, that would be nifty.\n\nOnce we have a firmer idea of whom it is going to, planning might be\neasier, as well as costs, since we could do it in \"shorter hops\"...ie. If\nPeter Mount wanted it, then we would be able to do \"Thomas, Bruce, Peter,\nVadim, Tatsuo, Me\"...throw in Oleg there, and we can put him between Vadim\nand Tatsuo...Throw Tom Lane in there, and that can go between ... ??\n\nD'Arcy Cain, if he wants, can go between Tatsuo and Me\n(Japan->Toronto->NS), etc...\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 09:27:03 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Tue, 17 Aug 1999, Bruce Momjian wrote:\n\n> > Thomas,\n> > \n> > Before sending the box to world trip could you make a 3D picture and\n> > publish it on the Web. Take a look at Freedom VR \n> > http://www.honeylocust.com/vr/ - it's free and works fine.\n> \n> Yes, I think Marc wants to do that.\n> \n> > As for the Russia in my experience DHL was always good. If you'll have a \n> > problem with sending the trophy to Vadim I could arrange that from\n> > Moscow.\n> \n> Yes, DHL is good, but they can't do the entire trip with one person\n> paying it all.  They have to have the sender pay, so that is why I was\n> leaning to including checks in the package for those who want it.  Also,\n> DHL-type carriers only offer 1-3 day delivery, which can get expensive\n> if we have many stops.\n\nWhat about someone like FedEx, with one account?  Instead of sending\ncheques around?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 09:27:59 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "The Hermit Hacker <[email protected]> writes:\n> Once we have a firmer idea of whom it is going to, planning might be\n> easier, as well as costs, since we could do it in \"shorter hops\"...ie. If\n> Peter Mount wanted it, then we would be able to do \"Thomas, Bruce, Peter,\n> Vadim, Tatsuo, Me\"...throw in Oleg there, and we can put him between Vadim\n> and Tatsuo...Throw Tom Lane in there, and that can go between ... ??\n\nNeed some Europeans and Australians in there.\n\nI don't suppose we have much chance of hitting Antarctica, but it'd be\ncool if the trophy made its way to every inhabited continent...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 09:52:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow " },
{ "msg_contents": "> On Tue, 17 Aug 1999, Bruce Momjian wrote:\n> \n> > OK, we should probably start taking visit requests.  Any major\n> > developers who want to host it for a while, speak up?  Certainly anyone\n> > on the developers page is welcome.  We already have Thomas, me, Vadim,\n> > Tatsuo, and Marc, in that order across the globe.  Or we can go Thomas,\n> > Tatsuo, Vadim, me, and Marc, though this would mean it had not crossed\n> > the USA.  In fact, we can each take a picture of it in our homes, and\n> > make a web page of it.  (I can scan in any photos for those without\n> > digital cameras.)  Now, that would be nifty.\n> \n> Well if the lowly webmaster is eligible count me in.  I can fedex to the\n> next person on the list.\n\nAdded.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 10:40:13 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> On Tue, 17 Aug 1999, Vadim Mikheev wrote:\n> \n> > Bruce Momjian wrote:\n> > > \n> > > You know, my wife and I were just talking today about taking an\n> > > around-the-world cruise someday.  This may be my chance. :-)\n> > > \n> > > I am seeing double-smiles from Vadim, so if people are serious and not\n> > > just joking, I will try and make the arrangements so it is dropped off\n> > > and picked up by reputable carriers to make the trip.\n> > \n> > I'm not joking (though cost may be issue).\n> > First step is to define the route.\n> > So, who would like to participate in this project?\n> \n> I think, if we are going to be doing this, each person along the route has\n> to \"add something\" to the package...a postcard from their area or\n> something like that?  something to \"Prove it was there\"?\n\nYes, and we certainly need one sheet in the package for everyone's name\nand signature that we can scan in and put on the web page.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup. 
|  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 10:42:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> > Yes, DHL is good, but they can't do the entire trip with one person\n> > paying it all.  They have to have the sender pay, so that is why I was\n> > leaning to including checks in the package for those who want it.  Also,\n> > DHL-type carriers only offer 1-3 day delivery, which can get expensive\n> > if we have many stops.\n> \n> What about someone like FedEx, with one account?  Instead of sending\n> cheques around?\n\nMost carriers don't allow \"third-party\" billing, where the shipment is\npaid by neither the sender or receiver, in all global locations.  That\nwas the problem with DHL and Airborne Express.\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 11:00:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "At last something I can help with.\nOne of my customers is a courier company, I'll ask them to find out.\n\n> > > Yes, DHL is good, but they can't do the entire trip with one person\n> > > paying it all.  They have to have the sender pay, so that is why I was\n> > > leaning to including checks in the package for those who want\n> it.  Also,\n> > > DHL-type carriers only offer 1-3 day delivery, which can get expensive\n> > > if we have many stops.\n> >\n> > What about someone like FedEx, with one account?  Instead of sending\n> > cheques around?\n>\n> Most carriers don't allow \"third-party\" billing, where the shipment is\n> paid by neither the sender or receiver, in all global locations.  That\n> was the problem with DHL and Airborne Express.\n>\n> --\n>   Bruce Momjian                        |  http://www.op.net/~candle\n>   [email protected]            |  (610) 853-3000\n>   +  If your life is a hard drive,     |  830 Blythe Avenue\n>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n>\n>\n\n", "msg_date": "Tue, 17 Aug 1999 16:35:27 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "Thus spake The Hermit Hacker\n> D'Arcy Cain, if he wants, can go between Tatsuo and Me\n> (Japan->Toronto->NS), etc...\n\nI was wondering if I should speak up.  I don't know how many people we\nhave here (the map suggests just me and the server) but I was thinking\nthat it would be worthwhile if we could make some sort of event out of\nit.  Anyone else in or around Toronto want to try to put something like\nthat together?  I imagine that we have a few months to do something while\nit globetrots.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net>   |  Democracy is three wolves\nhttp://www.druid.net/darcy/                |  and a sheep voting on\n+1 416 424 2871     (DoD#0082)    (eNTP)   |  what's for dinner.\n", "msg_date": "Tue, 17 Aug 1999 13:40:18 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Tue, 17 Aug 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake The Hermit Hacker\n> > D'Arcy Cain, if he wants, can go between Tatsuo and Me\n> > (Japan->Toronto->NS), etc...\n> \n> I was wondering if I should speak up.  I don't know how many people we\n> have here (the map suggests just me and the server) but I was thinking\n> that it would be worthwhile if we could make some sort of event out of\n> it.  Anyone else in or around Toronto want to try to put something like\n> that together?  I imagine that we have a few months to do something while\n> it globetrots.\n\nReserve the ROM and put it up on display?  *grin* \n\nfor those not in the know: ROM == Royal Ontario Museum\n\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 14:49:23 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> Thus spake The Hermit Hacker\n> > D'Arcy Cain, if he wants, can go between Tatsuo and Me\n> > (Japan->Toronto->NS), etc...\n> \n> I was wondering if I should speak up.  I don't know how many people we\n> have here (the map suggests just me and the server) but I was thinking\n> that it would be worthwhile if we could make some sort of event out of\n> it.  Anyone else in or around Toronto want to try to put something like\n> that together?  I imagine that we have a few months to do something while\n> it globetrots.\n\nI have added you to the list:\n\t\n\tThomas\n\tVince\n\tD'Arcy\n\tBruce\n\tVadim\n\tTatsuo\n\tMarc\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 14:40:55 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "> On Tue, 17 Aug 1999, D'Arcy J.M. Cain wrote:\n> \n> > Thus spake The Hermit Hacker\n> > > D'Arcy Cain, if he wants, can go between Tatsuo and Me\n> > > (Japan->Toronto->NS), etc...\n> > \n> > I was wondering if I should speak up.  I don't know how many people we\n> > have here (the map suggests just me and the server) but I was thinking\n> > that it would be worthwhile if we could make some sort of event out of\n> > it.  Anyone else in or around Toronto want to try to put something like\n> > that together?  I imagine that we have a few months to do something while\n> > it globetrots.\n> \n> Reserve the ROM and put it up on display?  *grin* \n> \n> for those not in the know: ROM == Royal Ontario Museum\n\nIs that going to be big enough? :-)\n\n-- \n  Bruce Momjian                        |  http://www.op.net/~candle\n  [email protected]            |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 14:42:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "On Tue, 17 Aug 1999, Bruce Momjian wrote:\n\n> > On Tue, 17 Aug 1999, D'Arcy J.M. Cain wrote:\n> > \n> > > Thus spake The Hermit Hacker\n> > > > D'Arcy Cain, if he wants, can go between Tatsuo and Me\n> > > > (Japan->Toronto->NS), etc...\n> > > \n> > > I was wondering if I should speak up.  I don't know how many people we\n> > > have here (the map suggests just me and the server) but I was thinking\n> > > that it would be worthwhile if we could make some sort of event out of\n> > > it.  Anyone else in or around Toronto want to try to put something like\n> > > that together?  I imagine that we have a few months to do something while\n> > > it globetrots.\n> > \n> > Reserve the ROM and put it up on display?  *grin* \n> > \n> > for those not in the know: ROM == Royal Ontario Museum\n> \n> Is that going to be big enough? :-)\n\nNot sure...SkyDome might be more appropriate :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 16:22:16 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" },
{ "msg_contents": "\"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> ... 
it would be worthwhile if we could make some sort of event out of\n> it. Anyone else in or around Toronto want to try to put something like\n> that together? I imagine that we have a few months to do something while\n> it globetrots.\n\nHmm. Toronto is not too far away for me, nor for Bruce I imagine...\ndunno if it's in driving distance for Marc, but it's about as central\na spot as we'd be likely to find...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 16:33:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow " }, { "msg_contents": "> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > ... it would be worthwhile if we could make some sort of event out of\n> > it. Anyone else in or around Toronto want to try to put something like\n> > that together? I imagine that we have a few months to do something while\n> > it globetrots.\n> \n> Hmm. Toronto is not too far away for me, nor for Bruce I imagine...\n> dunno if it's in driving distance for Marc, but it's about as central\n> a spot as we'd be likely to find...\n\nMy wife would not mind the trip. We have friends in Toronto. It is 10\nhours. Yikes.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 17:01:53 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "Thus spake The Hermit Hacker\n> > > Reserve the ROM and put it up on display? *grin* \n> > > for those not in the know: ROM == Royal Ontario Museum\n> > Is that going to be big enough. :-)\n> Not sure...SkyDome might be more appropriate :)\n\nYou have been gone from the Big Smoke too long there, young fella. We\nwant the ACC (Air Canada Centre) now. 
:-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 17 Aug 1999 21:37:20 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "Thus spake Tom Lane\n> \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > ... it would be worthwhile if we could make some sort of event out of\n> > it. Anyone else in or around Toronto want to try to put something like\n> > that together? I imagine that we have a few months to do something while\n> > it globetrots.\n> \n> Hmm. Toronto is not too far away for me, nor for Bruce I imagine...\n> dunno if it's in driving distance for Marc, but it's about as central\n> a spot as we'd be likely to find...\n\nSo who's up for a party here? If we let it go around the world first\nand collect all those travel stickers or whatever, it might even be\nsomewhat newsworthy if we find a slow news day.\n\nIf it's a small enough group we can all go down to the cage and toast\nwww.PostgreSQL.org. :-)\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 17 Aug 1999 21:42:38 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "On Tue, 17 Aug 1999, Bruce Momjian wrote:\n\n> > \"D'Arcy\" \"J.M.\" Cain <[email protected]> writes:\n> > > ... it would be worthwhile if we could make some sort of event out of\n> > > it. Anyone else in or around Toronto want to try to put something like\n> > > that together? I imagine that we have a few months to do something while\n> > > it globetrots.\n> > \n> > Hmm. 
Toronto is not too far away for me, nor for Bruce I imagine...\n> > dunno if it's in driving distance for Marc, but it's about as central\n> > a spot as we'd be likely to find...\n> \n> My wife would not mind the trip. We have friends in Toronto. It is 10\n> hours. Yikes.\n\n22 for me, definitely not a \"weekend trip\" :( But, depending on when we\nplanned for it, I could probably take some time off, and then I could just\ndrive it back with me and save some shipping...*shrug*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 22:47:22 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "On Tue, 17 Aug 1999, D'Arcy J.M. Cain wrote:\n\n> Thus spake The Hermit Hacker\n> > > > Reserve the ROM and put it up on display? *grin* \n> > > > for those not in the know: ROM == Royal Ontario Museum\n> > > Is that going to be big enough. :-)\n> > Not sure...SkyDome might be more appropriate :)\n> \n> You have been gone from the Big Smoke too long there, young fella. We\n> want the ACC (Air Canada Centre) now. :-)\n\nOops...the girls wanted to go downtown while we were there, and we ran out\nof time. guess I should have re-acquainted myself with the downtown after\nall :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 22:48:08 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "\n\tIf the award is small enough you might consider a small cooler as the\nshipping box. 
I use coolers (as in plastic clad foam box to keep your \npicnic cool) to ship my underwater cameras all over the world. They\nsurvive many years of airline baggage handlers, are fairly inexpensive\nand give you a surface to duct-tape closed. About the only thing you may need\nto do is put \"contents are not food, no dry-ice\". Those are the two most common\nquestions at customs.\n\n\n-- \n\tStephen N. Kogge\n\[email protected]\n\thttp://www.uimage.com\n\n\n", "msg_date": "Tue, 17 Aug 1999 22:03:21 -0400", "msg_from": "Stephen Kogge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow " }, { "msg_contents": "> On Tue, 17 Aug 1999, D'Arcy J.M. Cain wrote:\n> \n> > Thus spake The Hermit Hacker\n> > > > > Reserve the ROM and put it up on display? *grin* \n> > > > > for those not in the know: ROM == Royal Ontario Museum\n> > > > Is that going to be big enough. :-)\n> > > Not sure...SkyDome might be more appropriate :)\n> > \n> > You have been gone from the Big Smoke too long there, young fella. We\n> > want the ACC (Air Canada Centre) now. :-)\n> \n> Oops...the girls wanted to go downtown while we were there, and we ran out\n> of time. guess I should have re-acquianted myself with the downtown after\n> all :)\n\nAre you guys doing the \"name the stadium after a company\" thing too?\nI don't like it.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 22:37:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "On Tue, 17 Aug 1999, Bruce Momjian wrote:\n\n> > On Tue, 17 Aug 1999, D'Arcy J.M. Cain wrote:\n> > \n> > > Thus spake The Hermit Hacker\n> > > > > > Reserve the ROM and put it up on display? 
*grin* \n> > > > > > for those not in the know: ROM == Royal Ontario Museum\n> > > > > Is that going to be big enough. :-)\n> > > > Not sure...SkyDome might be more appropriate :)\n> > > \n> > > You have been gone from the Big Smoke too long there, young fella. We\n> > > want the ACC (Air Canada Centre) now. :-)\n> > \n> > Oops...the girls wanted to go downtown while we were there, and we ran out\n> > of time. guess I should have re-acquianted myself with the downtown after\n> > all :)\n> \n> Are you guys doing the \"name the staduim after a company\" thing too?\n> I don't like it.\n\nGeez, and I thought \"The PostgreSQL Centre\" had a nice ring to it :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 23:44:55 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "Thus spake Bruce Momjian\n> > > > Not sure...SkyDome might be more appropriate :)\n> > > \n> > > You have been gone from the Big Smoke too long there, young fella. We\n> > > want the ACC (Air Canada Centre) now. :-)\n> > \n> > Oops...the girls wanted to go downtown while we were there, and we ran out\n> > of time. guess I should have re-acquianted myself with the downtown after\n> > all :)\n> \n> Are you guys doing the \"name the staduim after a company\" thing too?\n> I don't like it.\n\nNo, the ACC is a new building (*) while the Skydome has been around for\na few years.\n\n(*) Well, actually it used to be the main post office plant in downtown\nToronto. Part of the requirements they had were to keep the facade for\nhysterical... er, historical reasons.\n\n-- \nD'Arcy J.M. 
Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 424 2871 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Wed, 18 Aug 1999 08:01:10 -0400 (EDT)", "msg_from": "\"D'Arcy\" \"J.M.\" Cain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" } ]
[ { "msg_contents": "Hi,\n\nI think I have fixed the freezing of the postgres backend on Windows NT. Now\nit survives 5 regression test in a cycle with some concurrent connections\nduring running the tests.\n\nWhere the problem was (manual backtrace):\n- InitPostgres() (utils/init/postinit.c)\n- InitProcess() (storage/lmgr/proc.c)\n- IpcSemaphoreCreate() (storage/ipc/ipc.c)\n- semget() (now in libcygipc - sem.c)\n- sem_connect() (sem.c)\n- WaitForSingleObject() (win32 system call)\n- freezing....\n\nIt have looked like a problem with initializing the same semaphore for\nsecond time (they are \"preinitialized\" for performance reasons in\nInitProcGlobal() in storage/lmgr/proc.c)\n\nThe fix (made for v6.5.1) is here:\n-------------------------------\n--- src/backend/storage/lmgr/proc.c.old\tSat Aug 14 16:50:19 1999\n+++ src/backend/storage/lmgr/proc.c\tSat Aug 14 16:50:52 1999\n@@ -160,6 +160,7 @@\n \t\t * Pre-create the semaphores for the first maxBackends\nprocesses,\n \t\t * unless we are running as a standalone backend.\n \t\t */\n+#ifndef __CYGWIN__\n \t\tif (key != PrivateIPCKey)\n \t\t{\n \t\t\tfor (i = 0;\n@@ -180,6 +181,7 @@\n \t\t\t\tProcGlobal->freeSemMap[i] = (1 <<\nPROC_NSEMS_PER_SET);\n \t\t\t}\n \t\t}\n+#endif /* __CYGWIN__ */\n \t}\n }\n------------------------------- \n\n\t\t\t\tDan\n\nPS: I have packed the tree after \"make install\" for 6.5.1 with the patch\nabove, so it is a \"binary distribution\".\n\n----------------------------------------------\nDaniel Horak\nnetwork and system administrator\ne-mail: [email protected]\nprivat e-mail: [email protected] ICQ:36448176\n----------------------------------------------\n", "msg_date": "Mon, 16 Aug 1999 11:35:54 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "backend freezeing on win32 fixed (I hope ;-) )" }, { "msg_contents": "Horak Daniel <[email protected]> writes:\n> I think I have fixed the freezing of the postgres backend on Windows NT. 
Now\n> it survives 5 regression test in a cycle with some concurrent connections\n> during running the tests.\n> It have looked like a problem with initializing the same semaphore for\n> second time (they are \"preinitialized\" for performance reasons in\n> InitProcGlobal() in storage/lmgr/proc.c)\n\nThey should never be \"initialized a second time\". And the preallocation\nis *not* for performance reasons, it is to make sure we can actually get\nenough semaphores (rather than dying under load when we fail to get the\nN+1'st semaphore when starting the N+1'st backend).\n\n> The fix (made for v6.5.1) is here:\n> [ Fix consists of diking out preallocation of semaphores by postmaster ]\n\nI do not like this patch one bit --- I think it is voodoo that doesn't\nreally have anything to do with the true problem. I don't know what\nthe true problem is, mind you, but I don't think this is the way to\nfix it.\n\nIs it possible that the CygWin environment doesn't have a correct\nemulation of IPC semaphores, such that a sema allocated by one process\n(the postmaster) is not available to other procs (the backends)?\nThat would explain preallocation not working --- but if that's it then\nwe have major problems in other places, since the code assumes that a\nsema once allocated will remain available to later backends.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Aug 1999 10:07:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "On Mon, Aug 16, 1999 at 10:07:00AM -0400, Tom Lane wrote:\n>Is it possible that the CygWin environment doesn't have a correct\n>emulation of IPC semaphores, such that a sema allocated by one process\n>(the postmaster) is not available to other procs (the backends)?\n>That would explain preallocation not working --- but if that's it then\n>we have major problems in other places, since the code assumes that a\n>sema once 
allocated will remain available to later backends.\n\nWe don't have correct emulation of IPC semaphores since they are not\nimplemented at all. I assume that if you're relying on persistent\nsemaphores, then some add-on package is being used.\n\ncgf\n", "msg_date": "Mon, 16 Aug 1999 13:09:49 -0400", "msg_from": "Chris Faylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )" } ]
[ { "msg_contents": "> They should never be \"initialized a second time\". And the \n> preallocation\n> is *not* for performance reasons, it is to make sure we can \n> actually get\n> enough semaphores (rather than dying under load when we fail \n> to get the\n> N+1'st semaphore when starting the N+1'st backend).\n> \n> > The fix (made for v6.5.1) is here:\n> > [ Fix consists of diking out preallocation of semaphores by \n> postmaster ]\n> \n> I do not like this patch one bit --- I think it is voodoo that doesn't\n> really have anything to do with the true problem. I don't know what\n> the true problem is, mind you, but I don't think this is the way to\n> fix it.\n\nI know it is not a perfect solution but now it is better than nothing.\n\n> \n> Is it possible that the CygWin environment doesn't have a correct\n> emulation of IPC semaphores, such that a sema allocated by one process\n\nIt seems that there is really a problem in the IPC for cygwin, but I am not\nan expert in Windows programming and internals so it is hard for me to make\na better one. But I will try to correct the IPC library.\n\n> (the postmaster) is not available to other procs (the backends)?\n> That would explain preallocation not working --- but if that's it then\n> we have major problems in other places, since the code assumes that a\n> sema once allocated will remain available to later backends.\n\nDoes this mean that when I have one connection to the server, I end it and\nstart a new one, this new one will use the same semaphores? 
But it seems to\nwork.\n\nCan disabling the semaphore preallocation have some negative effects on the\ncorrect function of the backend?\n\n\t\t\tDan\n", "msg_date": "Mon, 16 Aug 1999 16:35:05 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "Horak Daniel <[email protected]> writes:\n>> I do not like this patch one bit --- I think it is voodoo that doesn't\n>> really have anything to do with the true problem. I don't know what\n>> the true problem is, mind you, but I don't think this is the way to\n>> fix it.\n\n> I know it is not a perfect solution but now it is better than nothing.\n\n> Does this mean that when I have one connection to the server, I end it and\n> start a new one, this new one will use the same semaphores? But it seems to\n> work.\n\nYes. The way the code used to work (pre 6.5) was that the first backend\nto fire up would grab a block of 16 semaphores, which would be used by\nbackends 2-16; when you started a 17'th concurrent backend another block\nof 16 semaphores would be grabbed; etc. The code you diked out simply\nforces preallocation of a bunch of semaphores at postmaster start time,\nrather than having it done by selected backends. That's why it doesn't\nmake any sense to me that removing it would fix anything --- you'll\nstill have backends dependent on semaphores that were created by other\nprocesses, it's just that they were other backends instead of the\npostmaster.\n\nIn any case, when one backend quits and another one is started, the new\none will re-use the semaphore no longer used by the defunct backend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Aug 1999 10:50:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " } ]
[ { "msg_contents": "Hi,\n\nI am using the libpq interface with binary cursors and am using numeric\nfields. There seems to be no conversion routines available in the\nfront end library for numeric types. Am I missing something or do\nI have to roll my own from numeric.c as per the backend?\n\nI also can't find anything in libpq on dates.\n\nShould appropriate conversion routines exist in libpq?\n\n--------\nRegards\nTheo\n", "msg_date": "Mon, 16 Aug 1999 18:09:13 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Binary cursors, numerics and other types" }, { "msg_contents": "Theo Kramer <[email protected]> writes:\n> I am using the libpq interface with binary cursors and am using numeric\n> fields. There seems to be no conversion routines available in the\n> front end library for numeric types. Am I missing something or do\n> I have to roll my own from numeric.c as per the backend?\n> I also can't find anything in libpq on dates.\n> Should appropriate conversion routines exist in libpq?\n\nIt is not libpq's job to try to deal with binary data from the server\n--- for one thing, libpq may be compiled on a different architecture\nwith a different representation than the server is (wrong endianness,\ndifferent floating point format, etc). libpq doesn't even have any\nway of finding out whether a conversion is needed, let alone doing it.\n\nIn the current scheme of things, binary cursors are of very limited use,\nand you are *really* foolish if you try to use them for anything except\nthe most primitive data types like \"int4\". Your code will break without\nwarning whenever Jan feels like changing the internal representation of\nnumeric, as I believe he intends to do soon. 
We have never guaranteed\nthat the internal representation of date/time types is frozen, either\n--- Thomas has been heard muttering about replacing timestamp with\ndatetime, for example.\n\nThere has been some talk of creating a CORBA interface to Postgres,\nwhich would make use of binary representations for the basic data types\nsafer, since I believe CORBA offers facilities for cross-platform\ntransfer of binary integers (floats too? not sure). But I don't think\nthat would extend to nonstandard Postgres datatypes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Aug 1999 13:53:21 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursors, numerics and other types " }, { "msg_contents": "Tom Lane wrote:\n> It is not libpq's job to try to deal with binary data from the server\n> --- for one thing, libpq may be compiled on a different architecture\n> with a different representation than the server is (wrong endianness,\n> different floating point format, etc). libpq doesn't even have any\n> way of finding out whether a conversion is needed, let alone doing it.\n>\n> In the current scheme of things, binary cursors are of very limited use,\n> and you are *really* foolish if you try to use them for anything except\n> the most primitive data types like \"int4\". Your code will break without\n> warning whenever Jan feels like changing the internal representation of\n> numeric, as I believe he intends to do soon. We have never guaranteed\n> that the internal representation of date/time types is frozen, either\n> --- Thomas has been heard muttering about replacing timestamp with\n> datetime, for example.\n\nHmm, a real pity. Always thought that it was the responsibility of the\npersistant store to provide a heterogenous interface. 
Oh well back\nto messy error prone conversions to and from strings.\n\n--------\nRegards\nTheo\n", "msg_date": "Mon, 16 Aug 1999 20:22:44 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Binary cursors, numerics and other types" }, { "msg_contents": "> There has been some talk of creating a CORBA interface to Postgres,\n> which would make use of binary representations for the basic data types\n> safer, since I believe CORBA offers facilities for cross-platform\n> transfer of binary integers (floats too? not sure). But I don't think\n> that would extend to nonstandard Postgres datatypes.\n\nAll Postgres data types could be described easily using Corba\nconstructs. And all of those types would then have complete binary\ncompatibility across platforms. There are also facilities to\ndynamically map declarations from clients to servers and vica versa,\nthough one takes a performance hit to do so.\n\nWe've been using Corba (the ACE/TAO toolset) to build realtime systems\nat work. We're just getting far enough to start deploying some testbed\nsystems in a month or two. And in a lot of ways Corba is really great.\nOne thing to worry about is that afaik there is no single ORB which\nsupports as many platforms as we do. So we'd have to support multiple\nORBS to get at all of our targets, which will be a royal pain.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 17 Aug 1999 06:19:47 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Binary cursors, numerics and other types" } ]
[ { "msg_contents": "> Hi there.\n> \n> I remember someone talking about compiling pgsql so that it could use \n> table names longer than 32 bytes, but that it would require some \n> changes in the source code. Could anyone tell me what changes these \n> are, and how safe it would be to do it (that is, should I assume that \n> I could just compile a newer version making the same changes to the \n> sources, and have anyone experienced anything broken using the longer \n> table names)???\n> \n> Yours faithfully.\n> Finn Kettner.\n> PS. The main reasong for the longer table names is not the tables \n> themself, but the indexes etc. that are constructed automatically \n> using e.g. serial fields.\n\nThat serial table name is fixed in 6.5.*.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 14:01:42 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Long table names" }, { "msg_contents": "On Mon, Aug 16, 1999 at 07:45:47PM +0100, Finn Kettner wrote:\n> Hi there.\n> \n> I remember someone talking about compiling pgsql so that it could use \n> table names longer than 32 bytes, but that it would require some \n> changes in the source code. Could anyone tell me what changes these \n> are, and how safe it would be to do it (that is, should I assume that \n> I could just compile a newer version making the same changes to the \n> sources, and have anyone experienced anything broken using the longer \n> table names)???\n> \n> Yours faithfully.\n> Finn Kettner.\n> PS. The main reasong for the longer table names is not the tables \n> themself, but the indexes etc. that are constructed automatically \n> using e.g. 
serial fields.\n\nFinn - \nThe subsidary problem has been partially fixed in 6.5, at the slight cost\nof making it slightly more difficult to predict the name of the serial \n(or index?). Here's an example from one of my databases:\n\nI've got a table named \"PersonnelAddresses\", with a primary key of serial type,\ncalled \"PerAddrID\", as so:\n\nidas_proto=> \\d \"PersonnelAddresses\" \nTable = PersonnelAddresses\n+----------------------------------+----------------------------------+-------+\n| Field | Type | Length|\n+----------------------------------+----------------------------------+-------+\n| PerAddrID | int4 not null default nextval('\" | 4 |\n| PerIndex | int4 | 4 |\n| InstID | int4 | 4 |\n| Department | text | var |\n| Street | text | var |\n| Street2 | text | var |\n| City | text | var |\n| spID | int4 | 4 |\n| PostalCode | text | var |\n| CountryID | int4 | 4 |\n| Organization | text | var |\n| AddrType | text | var |\n+----------------------------------+----------------------------------+-------+\nIndex: PersonnelAddresses_pkey\n\nThe complete default for \"PerAddrID\" is:\n\nnextval('\"PersonnelAddresse_PerAddrID_seq\"')\n\nAs you can see, the table name has been truncated to make the whole\nthing fit into 32 characters. You'd need to check the source to see the\nexact algorithm: I'm not sure if it starts trimming on the field name,\never. In general, it means long, common prefixes to table names (like\nmy \"Personnel\", above), are bad, because they might lead to ambigous names\nfor auto generated things, like sequences. Some thought has gone into what\nthe Right Thing to do is, but I'm not clear if a consensus has emerged.\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Mon, 16 Aug 1999 13:29:52 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Long table names" }, { "msg_contents": "Hi there.\n\nI remember someone talking about compiling pgsql so that it could use \ntable names longer than 32 bytes, but that it would require some \nchanges in the source code. Could anyone tell me what changes these \nare, and how safe it would be to do it (that is, should I assume that \nI could just compile a newer version making the same changes to the \nsources, and have anyone experienced anything broken using the longer \ntable names)???\n\nYours faithfully.\nFinn Kettner.\nPS. The main reasong for the longer table names is not the tables \nthemself, but the indexes etc. that are constructed automatically \nusing e.g. serial fields.\n", "msg_date": "Mon, 16 Aug 1999 19:45:47 +0100", "msg_from": "\"Finn Kettner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Long table names" }, { "msg_contents": "\"Ross J. Reedstrom\" <[email protected]> writes:\n> As you can see, the table name has been truncated to make the whole\n> thing fit into 32 characters. You'd need to check the source to see the\n> exact algorithm: I'm not sure if it starts trimming on the field name,\n> ever.\n\nRight now, the algorithm is to preferentially truncate the longer name\ncomponent (table name or column name). There was some talk of adding\nquasi-random hash characters to reduce the probability of name\ncollisions, but it's not been done.\n\nAnyway, to answer the question Finn asked,\n\n>> I remember someone talking about compiling pgsql so that it could use \n>> table names longer than 32 bytes, but that it would require some \n>> changes in the source code. Could anyone tell me what changes these \n>> are,\n\nIn theory you should only have to change NAMEDATALEN, rebuild, and\ninitdb. 
I think someone reported a few months ago on actually trying\nthis experiment (probably, around the same time we decided to put in\nthe name-truncation logic); check the pghackers archives for details.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Aug 1999 17:16:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Long table names " } ]
[ { "msg_contents": "Does anyone know why make_noname (in createplan.c) insists on putting\na SeqScan plan node above the Sort or Material node it's generating?\nAs far as I can tell, it's a waste of cycles:\n\n1. planner.c doesn't bother with a SeqScan above the Sorts it makes.\n2. The executor's nodeSeqscan.c just redirects all its calls to the\n outerPlan node, if it has an outerPlan.\n3. Things seem to work fine without it ;-)\n\nHowever, I'm not quite ready to commit this change without consultation.\nDoes anyone know what this was for?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 16 Aug 1999 20:41:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Why does make_noname insert a SeqScan above sort/material node?" }, { "msg_contents": "> Does anyone know why make_noname (in createplan.c) insists on putting\n> a SeqScan plan node above the Sort or Material node it's generating?\n> As far as I can tell, it's a waste of cycles:\n> \n> 1. planner.c doesn't bother with a SeqScan above the Sorts it makes.\n> 2. The executor's nodeSeqscan.c just redirects all its calls to the\n> outerPlan node, if it has an outerPlan.\n> 3. Things seem to work fine without it ;-)\n> \n> However, I'm not quite ready to commit this change without consultation.\n> Does anyone know what this was for?\n\nRemove it. Noname is for an internal temp table and they probably used\nit for some other uses in the past. That code needs cleaning, I bet.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 16 Aug 1999 21:34:26 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Why does make_noname insert a SeqScan above\n\tsort/material node?" } ]
[ { "msg_contents": "> > Thomas,\n> > \n> > Before sending the box to world trip could you make a 3D picture and\n> > publish it on the Web. Take a look at Freedom VR \n> > http://www.honeylocust.com/vr/ - it's free and works fine.\n> \n> Yes, I think Marc wants to do that.\n\nWow, that VR is amazing. It downloads the VR viewer right into your\nbrowser\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 03:02:17 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" } ]
[ { "msg_contents": "> In any case, when one backend quits and another one is \n> started, the new\n> one will re-use the semaphore no longer used by the defunct backend.\n\nI have tested my solution a bit more and I have to say that reusing a\nsemaphore by a new backend works OK. But it is not possible for a newly\ncreated backend to use a semaphore allocated by postmaster (it freezes on\ntest if the semaphore with given key already exists - done with\nsemId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\nstorage/ipc/ipc.c ). Why it is, I don't know, but it seems that my solution\nuses the ipc library in the right way. There are no longer any error\nmessages from the ipc library when running the server. And I can't say that\nthe ipc library is a 100% correct implementation of SysV IPC, it is probably\n(sure ;-) )caused by the Windows internals.\n\n\t\t\tDan\n", "msg_date": "Tue, 17 Aug 1999 14:06:26 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "Horak Daniel <[email protected]> writes:\n> I have tested my solution a bit more and I have to say that reusing a\n> semaphore by a new backend works OK. But it is not possible for a newly\n> created backend to use a semaphore allocated by postmaster (it freezes on\n> test if the semaphore with given key already exists - done with\n> semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> storage/ipc/ipc.c ). Why it is, I don't know, but it seems that my solution\n> uses the ipc library in the right way.\n\nIt seems that you have found a bug in the cygipc library. 
I suggest\nreporting it to the author of same...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 09:21:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > In any case, when one backend quits and another one is \n> > started, the new\n> > one will re-use the semaphore no longer used by the defunct backend.\n> \n> I have tested my solution a bit more and I have to say that reusing a\n> semaphore by a new backend works OK. But it is not possible for a newly\n> created backend to use a semaphore allocated by postmaster (it freezes on\n> test if the semaphore with given key already exists - done with\n> semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> storage/ipc/ipc.c ). Why it is, I don't know, but it seems that my solution\n> uses the ipc library in the right way. There are no longer any error\n> messages from the ipc library when running the server. And I can't say that\n> the ipc library is a 100% correct implementation of SysV IPC, it is probably\n> (sure ;-) )caused by the Windows internals.\n\nSeems we may have to use the patch, or make some other patch for NT-only\nthat works around this NT bug.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 10:41:28 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )" }, { "msg_contents": "> Horak Daniel <[email protected]> writes:\n> > I have tested my solution a bit more and I have to say that reusing a\n> > semaphore by a new backend works OK. 
But it is not possible for a newly\n> > created backend to use a semaphore allocated by postmaster (it freezes on\n> > test if the semaphore with given key already exists - done with\n> > semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> > storage/ipc/ipc.c ). Why it is, I don't know, but it seems that my solution\n> > uses the ipc library in the right way.\n> \n> It seems that you have found a bug in the cygipc library. I suggest\n> reporting it to the author of same...\n\nYes, but can we expect all NT sites to get the patch before using\nPostgreSQL? Is there a workaround we can implement?\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 11:01:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> storage/ipc/ipc.c ). Why it is, I don't know, but it seems that my solution\n>> uses the ipc library in the right way. There are no longer any error\n>> messages from the ipc library when running the server. And I can't say that\n>> the ipc library is a 100% correct implementation of SysV IPC, it is probably\n>> (sure ;-) )caused by the Windows internals.\n\n> Seems we may have to use the patch, or make some other patch for NT-only\n> that works around this NT bug.\n\nI don't have a problem with installing an NT patch (lord knows there\nare plenty of #ifdef __CYGWIN32__'s in the code already). But I have\na problem with *this* patch because I don't believe we understand what\nit is doing, and therefore I have no confidence in it. 
The extent of\nour understanding so far is that one backend can create a semaphore that\ncan be used by a later backend, but the postmaster cannot create a\nsemaphore that can be used by a later backend. I don't really believe\nthat; I think there is something else going on. Until we understand\nwhat the something else is, I don't think we have a trustworthy\nsolution.\n\nThe real reason I feel itchy about this is that I know that interprocess\nsynchronization is a very tricky area, so I'm not confident that the\nlimited amount of testing Dan can do by himself proves that things are\nsolid. As the old saw goes, \"testing cannot prove the absence of bugs\".\nI want to have both clean test results *and* an understanding of what\nwe are doing before I will feel comfortable.\n\nLooking again at the code, it occurs to me that a backend exiting\nnormally will probably leave its semaphore set nonzero, which could\n(given a buggy IPC library) have something to do with whether another\nprocess can attach to the sema or not. The postmaster code is *trying*\nto create the semas with nonzero starting values, but I see that the\nbackend code takes the additional step of doing\n\t\tsemun.val = IpcSemaphoreDefaultStartValue;\n\t\tsemctl(semId, semNum, SETVAL, semun);\nwhereas the postmaster code doesn't. Maybe the create call isn't\ninitializing the semaphores the way it's told to? It'd be worth\ntrying adding a step like this to the postmaster preallocation.\n\nIn any case, I'd really like us to get some feedback from the author of\ncygipc about this issue. 
I don't mind working around a bug once we\nunderstand exactly what the bug is --- but in this particular area,\nI think guessing our way to a workaround isn't good enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 13:30:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "Got it.\n\n\n> Bruce Momjian <[email protected]> writes:\n> >> storage/ipc/ipc.c ). Why it is, I don't know, but it seems that my solution\n> >> uses the ipc library in the right way. There are no longer any error\n> >> messages from the ipc library when running the server. And I can't say that\n> >> the ipc library is a 100% correct implementation of SysV IPC, it is probably\n> >> (sure ;-) )caused by the Windows internals.\n> \n> > Seems we may have to use the patch, or make some other patch for NT-only\n> > that works around this NT bug.\n> \n> I don't have a problem with installing an NT patch (lord knows there\n> are plenty of #ifdef __CYGWIN32__'s in the code already). But I have\n> a problem with *this* patch because I don't believe we understand what\n> it is doing, and therefore I have no confidence in it. The extent of\n> our understanding so far is that one backend can create a semaphore that\n> can be used by a later backend, but the postmaster cannot create a\n> semaphore that can be used by a later backend. I don't really believe\n> that; I think there is something else going on. Until we understand\n> what the something else is, I don't think we have a trustworthy\n> solution.\n> \n> The real reason I feel itchy about this is that I know that interprocess\n> synchronization is a very tricky area, so I'm not confident that the\n> limited amount of testing Dan can do by himself proves that things are\n> solid. 
As the old saw goes, \"testing cannot prove the absence of bugs\".\n> I want to have both clean test results *and* an understanding of what\n> we are doing before I will feel comfortable.\n> \n> Looking again at the code, it occurs to me that a backend exiting\n> normally will probably leave its semaphore set nonzero, which could\n> (given a buggy IPC library) have something to do with whether another\n> process can attach to the sema or not. The postmaster code is *trying*\n> to create the semas with nonzero starting values, but I see that the\n> backend code takes the additional step of doing\n> \t\tsemun.val = IpcSemaphoreDefaultStartValue;\n> \t\tsemctl(semId, semNum, SETVAL, semun);\n> whereas the postmaster code doesn't. Maybe the create call isn't\n> initializing the semaphores the way it's told to? It'd be worth\n> trying adding a step like this to the postmaster preallocation.\n> \n> In any case, I'd really like us to get some feedback from the author of\n> cygipc about this issue. I don't mind working around a bug once we\n> understand exactly what the bug is --- but in this particular area,\n> I think guessing our way to a workaround isn't good enough.\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 13:33:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Horak Daniel\n> Sent: Tuesday, August 17, 1999 9:06 PM\n> To: 'Tom Lane'\n> Cc: '[email protected]'\n> Subject: RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )\n>\n>\n> > In any case, when one backend quits and another one is\n> > started, the new\n> > one will re-use the semaphore no longer used by the defunct backend.\n>\n> I have tested my solution a bit more and I have to say that reusing a\n> semaphore by a new backend works OK. But it is not possible for a newly\n> created backend to use a semaphore allocated by postmaster (it freezes on\n> test if the semaphore with given key already exists - done with\n> semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> storage/ipc/ipc.c ). Why it is, I don't know, but it seems that\n> my solution\n> uses the ipc library in the right way. There are no longer any error\n> messages from the ipc library when running the server. 
And I\n> can't say that\n> the ipc library is a 100% correct implementation of SysV IPC, it\n> is probably\n> (sure ;-) )caused by the Windows internals.\n>\n\nYutaka Tanida [[email protected]] and I have examined IPC\nlibrary.\n\nWe found that postmaster doesn't call exec() after fork() since v6.4.\n\nThe value of static/extern variables which cygipc library holds may\nbe different from their initial values when postmaster fork()s child\nbackend processes.\n\nI made the following patch for cygipc library on trial.\nThis patch was effective for Yutaka's test case.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n*** sem.c.orig\tTue Dec 01 00:16:25 1998\n--- sem.c\tTue Aug 17 13:22:06 1999\n***************\n*** 58,63 ****\n--- 58,78 ----\n static int\t\t GFirstSem\t = 0;\t\t/*PCPC*/\n static int\t\t GFdSem\t ;\t\t/*PCPC*/\n\n+ static pid_t\tGProcessId = 0;\n+\n+ static void\tinit_globals(void)\n+ {\n+ \tpid_t pid;\n+\n+ \tif (pid=getpid(), pid != GProcessId)\n+ \t{\n+ \t\tGFirstSem = 0;\n+ \t\tused_sems = used_semids = max_semid = 0;\n+ \t\tsem_seq = 0;\n+ \t\tGProcessId = pid;\n+ \t}\n+ }\n+\n /************************************************************************/\n /* Demande d'acces a la zone partagee de gestion des semaphores\t\t*/\n /************************************************************************/\n***************\n*** 77,82 ****\n--- 92,98 ----\n {\n int LRet ;\n\n+ \tinit_globals();\n if( GFirstSem == 0 )\n {\n \tif( IsGSemSemExist() )\n*** shm.c.orig\tTue Dec 01 01:04:57 1998\n--- shm.c\tTue Aug 17 13:22:27 1999\n***************\n*** 59,64 ****\n--- 59,81 ----\n static int\t\t GFirstShm\t = 0;\t\t/*PCPC*/\n static int\t\t GFdShm\t ;\t\t/*PCPC*/\n\n+ /*****************************************/\n+ /*\tInitialization of static variables */\n+ /*****************************************/\n+ static pid_t GProcessId = 0;\n+ static void init_globals(void)\n+ {\n+ \tpid_t pid;\n+\n+ \tif (pid=getpid(), pid != GProcessId)\n+ \t{\n+ \t\tGFirstShm = 0;\n+ 
\t\tshm_rss = shm_swp = max_shmid = 0;\n+ \t\tshm_seq = 0;\n+ \t\tGProcessId = pid;\n+ \t}\n+ }\n+\n /************************************************************************/\n /* Demande d'acces a la zone partagee de gestion des shm\t\t*/\n /************************************************************************/\n***************\n*** 82,87 ****\n--- 99,105 ----\n {\n int LRet ;\n\n+ init_globals();\n if( GFirstShm == 0 )\n {\n if( IsGSemShmExist() )\n*** msg.c.orig\tTue Dec 01 00:16:09 1998\n--- msg.c\tTue Aug 17 13:20:04 1999\n***************\n*** 57,62 ****\n--- 57,77 ----\n static int\t\t GFirstMsg\t = 0;\t\t/*PCPC*/\n static int\t\t GFdMsg\t ;\t\t/*PCPC*/\n\n+ /*****************************************/\n+ /*\tInitialization of static variables */\n+ /*****************************************/\n+ static pid_t GProcessId = 0;\n+ static void init_globals(void)\n+ {\n+ \tpid_t pid;\n+\n+ \tif (pid=getpid(), pid != GProcessId)\n+ \t{\n+ \t\tGFirstMsg = 0;\n+ \t\tmsgbytes = msghdrs = msg_seq = used_queues = max_msqid = 0;\n+ \t\tGProcessId = pid;\n+ \t}\n+ }\n /************************************************************************/\n /* Demande d'acces a la zone partagee de gestion des semaphores\t\t*/\n /************************************************************************/\n***************\n*** 79,84 ****\n--- 94,100 ----\n {\n int LRet ;\n\n+ init_globals();\n if( GFirstMsg == 0 )\n {\n if( IsGSemMsgExist() )\n\n", "msg_date": "Wed, 18 Aug 1999 08:45:28 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "\nI have added this to the end of the README.NT file.\n\n\n\n[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]]On Behalf Of Horak Daniel\n> > Sent: Tuesday, August 17, 1999 9:06 PM\n> > To: 'Tom Lane'\n> > Cc: '[email 
protected]'\n> > Subject: RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )\n> >\n> >\n> > > In any case, when one backend quits and another one is\n> > > started, the new\n> > > one will re-use the semaphore no longer used by the defunct backend.\n> >\n> > I have tested my solution a bit more and I have to say that reusing a\n> > semaphore by a new backend works OK. But it is not possible for a newly\n> > created backend to use a semaphore allocated by postmaster (it freezes on\n> > test if the semaphore with given key already exists - done with\n> > semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> > storage/ipc/ipc.c ). Why it is, I don't know, but it seems that\n> > my solution\n> > uses the ipc library in the right way. There are no longer any error\n> > messages from the ipc library when running the server. And I\n> > can't say that\n> > the ipc library is a 100% correct implementation of SysV IPC, it\n> > is probably\n> > (sure ;-) )caused by the Windows internals.\n> >\n> \n> Yutaka Tanida [[email protected]] and I have examined IPC\n> library.\n> \n> We found that postmaster doesn't call exec() after fork() since v6.4.\n> \n> The value of static/extern variables which cygipc library holds may\n> be different from their initial values when postmaster fork()s child\n> backend processes.\n> \n> I made the following patch for cygipc library on trial.\n> This patch was effective for Yutaka's test case.\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n> \n> *** sem.c.orig\tTue Dec 01 00:16:25 1998\n> --- sem.c\tTue Aug 17 13:22:06 1999\n> ***************\n> *** 58,63 ****\n> --- 58,78 ----\n> static int\t\t GFirstSem\t = 0;\t\t/*PCPC*/\n> static int\t\t GFdSem\t ;\t\t/*PCPC*/\n> \n> + static pid_t\tGProcessId = 0;\n> +\n> + static void\tinit_globals(void)\n> + {\n> + \tpid_t pid;\n> +\n> + \tif (pid=getpid(), pid != GProcessId)\n> + \t{\n> + \t\tGFirstSem = 0;\n> + \t\tused_sems = used_semids = max_semid = 0;\n> + \t\tsem_seq 
= 0;\n> + \t\tGProcessId = pid;\n> + \t}\n> + }\n> +\n> /************************************************************************/\n> /* Demande d'acces a la zone partagee de gestion des semaphores\t\t*/\n> /************************************************************************/\n> ***************\n> *** 77,82 ****\n> --- 92,98 ----\n> {\n> int LRet ;\n> \n> + \tinit_globals();\n> if( GFirstSem == 0 )\n> {\n> \tif( IsGSemSemExist() )\n> *** shm.c.orig\tTue Dec 01 01:04:57 1998\n> --- shm.c\tTue Aug 17 13:22:27 1999\n> ***************\n> *** 59,64 ****\n> --- 59,81 ----\n> static int\t\t GFirstShm\t = 0;\t\t/*PCPC*/\n> static int\t\t GFdShm\t ;\t\t/*PCPC*/\n> \n> + /*****************************************/\n> + /*\tInitialization of static variables */\n> + /*****************************************/\n> + static pid_t GProcessId = 0;\n> + static void init_globals(void)\n> + {\n> + \tpid_t pid;\n> +\n> + \tif (pid=getpid(), pid != GProcessId)\n> + \t{\n> + \t\tGFirstShm = 0;\n> + \t\tshm_rss = shm_swp = max_shmid = 0;\n> + \t\tshm_seq = 0;\n> + \t\tGProcessId = pid;\n> + \t}\n> + }\n> +\n> /************************************************************************/\n> /* Demande d'acces a la zone partagee de gestion des shm\t\t*/\n> /************************************************************************/\n> ***************\n> *** 82,87 ****\n> --- 99,105 ----\n> {\n> int LRet ;\n> \n> + init_globals();\n> if( GFirstShm == 0 )\n> {\n> if( IsGSemShmExist() )\n> *** msg.c.orig\tTue Dec 01 00:16:09 1998\n> --- msg.c\tTue Aug 17 13:20:04 1999\n> ***************\n> *** 57,62 ****\n> --- 57,77 ----\n> static int\t\t GFirstMsg\t = 0;\t\t/*PCPC*/\n> static int\t\t GFdMsg\t ;\t\t/*PCPC*/\n> \n> + /*****************************************/\n> + /*\tInitialization of static variables */\n> + /*****************************************/\n> + static pid_t GProcessId = 0;\n> + static void init_globals(void)\n> + {\n> + \tpid_t pid;\n> +\n> + \tif (pid=getpid(), pid != 
GProcessId)\n> + \t{\n> + \t\tGFirstMsg = 0;\n> + \t\tmsgbytes = msghdrs = msg_seq = used_queues = max_msqid = 0;\n> + \t\tGProcessId = pid;\n> + \t}\n> + }\n> /************************************************************************/\n> /* Demande d'acces a la zone partagee de gestion des semaphores\t\t*/\n> /************************************************************************/\n> ***************\n> *** 79,84 ****\n> --- 94,100 ----\n> {\n> int LRet ;\n> \n> + init_globals();\n> if( GFirstMsg == 0 )\n> {\n> if( IsGSemMsgExist() )\n> \n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:32:34 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )" } ]
[ { "msg_contents": "-----------------------------------------------------------------------------------------------------------------------------\nI have posted this mail to psql-general. But i didn't get any answer yet.\n-----------------------------------------------------------------------------------------------------------------------------\n\nHello\n \nWhen i had tried to insert into text field text (length about 4000 chars), the backend have crashed with status 139. This error is happened when the query length ( SQL query) is more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n\nMy questions are:\n1. Is there problem with text field or with length of SQL query?\n2. Would postgresql have any limits for SQL query length?\nI checked the archives but only found references to the 8K limit. Any help would be greatly appreciated.\nThanks for help\n\t\t\t\tNatalya Makushina \n\t\t\t\[email protected]\n", "msg_date": "Tue, 17 Aug 1999 16:21:53 +0400", "msg_from": "\"Natalya S. Makushina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with query length" } ]
[ { "msg_contents": "Hi,\n\nHas anybody received the patches that I sent making query strings\nextensible? I posted them twice, and didn't even get them back from the\npatches list myself. Is there a problem with attachments? There were two\nfiles attached.\n\nMikeA\n\n", "msg_date": "Tue, 17 Aug 1999 14:27:55 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query string lengths" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Has anybody received the patches that I sent making query strings\n> extensible? I posted them twice, and didn't even get them back from the\n> patches list myself. Is there a problem with attachments?\n\nI saw them in pgsql-patches-digest V1 #171.\n\nHaven't got round to doing anything with them yet --- I'm up to my\nelbows in the guts of the optimizer right now, and want to bring that\nwork to some kind of closure before I think about anything else.\nHas anyone else tried Michael's patches yet?\n\nBTW, the digest pretty much ruins MIME-ified attachments --- the message\nI have in my inbox is full of \"=0A=\" and other MIME junk, with no simple\nmeans of stripping it out since the MIME headers are gone. Dunno if\nthere is any way of fixing this. I don't suppose we can or even want to\ndiscourage people from sending patches as MIME attachments, but the\npatches digest is nearly useless when they do...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 09:43:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query string lengths " }, { "msg_contents": "> Has anybody received the patches that I sent making query strings\n> extensible? I posted them twice, and didn't even get them back from the\n> patches list myself. Is there a problem with attachments? 
There were two\n> files attached.\n\nI think we have seen them (I usually don't hang on to them since I'm\nnot the \"patch from the list\" group, but I see something dated Friday,\nAug 13).\n\nSorry for the delay in someone responding; we must be busy with other\nprojects. For your kind of patches, which touch several directories,\nit takes a bit longer for someone to wade through and understand them.\nAnd at this time we are not near a release date, so several of us get\nbusy on big Postgres projects for the next release and don't come up\nfor air ;)\n\nafaik there is no problem with the techniques you used to do this (I\nrecall you discussing it on the list) so the patches *should* make it\ninto the tree in the next bit. Probably for v6.6, not v6.5.x, but I'm\njust guessing...\n\nThanks for doing the work.\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 17 Aug 1999 14:40:45 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query string lengths" }, { "msg_contents": "[Charset iso-8859-1 unsupported, filtering to ASCII...]\n> Hi,\n> \n> Has anybody received the patches that I sent making query strings\n> extensible? I posted them twice, and didn't even get them back from the\n> patches list myself. Is there a problem with attachments? There were two\n> files attached.\n\nI have them. They go in 6.6, not 6.5.2.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 10:43:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Query string lengths" } ]
[ { "msg_contents": "-----------------------------------------------------------------------------------------------------------------------------\nI have posted this mail to psql-general. But i didn't get any answer yet.\n-----------------------------------------------------------------------------------------------------------------------------\n\nWhen i had tried to insert into text field text (length about 4000 chars), the backend have crashed with status 139. This error is happened when the query length ( SQL query) is more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n\nMy questions are:\n1. Is there problem with text field or with length of SQL query?\n2. Would postgresql have any limits for SQL query length?\nI checked the archives but only found references to the 8K limit. Any help would be greatly appreciated.\nThanks for help\n\t\t\t\tNatalya Makushina \n\t\t\t\[email protected]\n\n", "msg_date": "Tue, 17 Aug 1999 16:58:19 +0400", "msg_from": "\"Natalya S. Makushina\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with query length" } ]
[ { "msg_contents": "> Horak Daniel <[email protected]> writes:\n> > I have tested my solution a bit more and I have to say that \n> reusing a\n> > semaphore by a new backend works OK. But it is not possible \n> for a newly\n> > created backend to use a semaphore allocated by postmaster \n> (it freezes on\n> > test if the semaphore with given key already exists - done with\n> > semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> > storage/ipc/ipc.c ). Why it is, I don't know, but it seems \n> that my solution\n> > uses the ipc library in the right way.\n> \n> It seems that you have found a bug in the cygipc library. I suggest\n> reporting it to the author of same...\n\nOr it can be a feature ;-) But before it will be fixed (if it can be fixed)\nI would like to see my patch it the sources. It is very simple, without\nnegative effects... The win32 port will be more stable than it is now. We\nstill can't consider the win32 port to be run in a production environment.\n\n\t\t\tDan\n", "msg_date": "Tue, 17 Aug 1999 15:44:17 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " }, { "msg_contents": "Horak Daniel <[email protected]> writes:\n> Or it can be a feature ;-) But before it will be fixed (if it can be fixed)\n> I would like to see my patch it the sources. It is very simple, without\n> negative effects...\n\nHow do you know it has no negative effects? The problem that it was\nintended to fix only showed up with large numbers of backends (ie, more\nthan the system limit on number of semaphores, which is depressingly\nsmall on many old-line Unixes). 
Perhaps cygipc has no limit on number\nof semaphores, or perhaps it tries to be a faithful imitation of SysV ;-)\nHave you checked?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 10:08:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " } ]
[ { "msg_contents": "No, it appears that the flex module doesn't handle any tokens with a length\nabove this value. I didn't realise that it happened at about 4k in normal\noperation though; I had intentionally set my YY_BUF_SIZE to 4096 to try some\ntesting. I thought that this was causing the problem. \n\nDoes anybody know (scan.l, parse.l) reasonably intimately? Any info\nregarding this would be appreciated? Somebody mentioned this to me in mail\nregarding the query string length \n\n>> -----Original Message-----\n>> From: Natalya S. Makushina [mailto:[email protected]]\n>> Sent: Tuesday, August 17, 1999 3:15 PM\n>> To: '[email protected]'\n>> Subject: [HACKERS] Problem with query length\n>> \n>> \n>> -------------------------------------------------------------\n----------------------------------------------------------------\n>> I have posted this mail to psql-general. But i didn't get \n>> any answer yet.\n>> -------------------------------------------------------------\n----------------------------------------------------------------\n>> \n>> When i had tried to insert into text field text (length \n>> about 4000 chars), the backend have crashed with status 139. \n>> This error is happened when the query length ( SQL query) is \n>> more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n>> \n>> My questions are:\n>> 1. Is there problem with text field or with length of SQL query?\n>> 2. Would postgresql have any limits for SQL query length?\n>> I checked the archives but only found references to the 8K \n>> limit. Any help would be greatly appreciated.\n>> Thanks for help\n>> \t\t\t\tNatalya Makushina \n>> \t\t\t\[email protected]\n>> \n>> \n", "msg_date": "Tue, 17 Aug 1999 15:49:26 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problem with query length" } ]
[ { "msg_contents": "Postgres Developers Tip 10001:\n\n>> > Tom, I just discovered that the web mail archive facility \n>> handles the\n>> > attachments quite well. If you go to the archive, it \n>> gives you a download\n>> > link for each attachment, and you just need to give the \n>> file a decent name.\n>> \n>> So it does. Good tip --- you ought to mention it on the \n>> hackers list.\n>> I was thinking I'd have to subscribe to patches directly rather than\n>> via digest, but this'll save the day.\n", "msg_date": "Tue, 17 Aug 1999 16:15:54 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: Patches " } ]
[ { "msg_contents": "> How do you know it has no negative effects? The problem that it was\n> intended to fix only showed up with large numbers of backends \n> (ie, more\n> than the system limit on number of semaphores, which is depressingly\n> small on many old-line Unixes). Perhaps cygipc has no limit on number\n> of semaphores, or perhaps it tries to be a faithful imitation \n> of SysV ;-)\n> Have you checked?\n\nThere is a static limit on the max number of semaphores, so it can cause the\nsame problems as on Unix.\n\nThis is part of sys/sem.h:\n#define SEMMNI 128 /* ? max # of semaphore identifiers */\n#define SEMMSL 32 /* <= 512 max num of semaphores per id */\n?? should be 32 sems per id (DH)\n#define SEMMNS (SEMMNI*SEMMSL) /* ? max # of semaphores in system */\n#define SEMOPM 32 /* ~ 100 max num of ops per semop call */\n#define SEMVMX 32767 /* semaphore maximum value */\n\nBut what I meant was that there are no negative effects on other ports.\n\n\t\t\tDan\n", "msg_date": "Tue, 17 Aug 1999 16:26:12 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " } ]
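A sanity check of the arithmetic implied by the constants quoted above: with 128 semaphore sets of 32 semaphores each, the system-wide ceiling works out as below. This is plain arithmetic for illustration — it makes no SysV IPC calls and is not cygipc code.

```python
# System-wide semaphore ceiling implied by the sys/sem.h constants
# quoted in the message above. Pure arithmetic; no IPC calls are made.
SEMMNI = 128              # max number of semaphore identifiers (sets)
SEMMSL = 32               # max semaphores per identifier
SEMMNS = SEMMNI * SEMMSL  # max semaphores in the whole system

print(SEMMNS)  # prints 4096
```

So backends that each need a semaphore hit the same kind of wall Tom describes on cygipc as on a traditional Unix, just at a different number.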
[ { "msg_contents": "Thank you for answering all my stupid questions. I did read the User manual\nas well as Programmer and Admin, and what I was able to get out was not\nmuch. I guess I wasn't used to the way that postgres works, coming from a\nMicrosoft background. But wouldn't you agree that the manuals were a little\nbrief and cryptic at times? My question is how do I go about writing some\nmore documentation and examples and submitting it for inclusion into the\nmanual. It's just so that the next person w/ the same background as me would be\nable to understand those functions quickly and be able to use the example\nand apply it to his program w/o asking all those same basic questions which\nI'm sure you guys are tired of answering.\n\nAlso, is it the same thing for submitting documentation as well as functions\ninto postgres? What I really need is the total amount of time between 2\nseparate points in time. If I ask for 'day' then it returns days, 'minute'\nthen it returns minutes. For example:\n\nTimein\t\t\t\tTimeout\nTue Aug 17 15:00:00 1999 CDT\tTue Aug 17 16:00:00 1999 CDT\n\nselect datediff(day, timein, timeout) as totaltime from schedule\n\nwould give me a _number_ 0 since it's the same day, and if I used minute as\nbelow:\n\n\nselect datediff(minute, timein, timeout) as totaltime from schedule\n\nit would give me the number 60, that's it. I don't want any qualifier behind\nthe number since it blew up the stupid Microsoft ADO driver like you\nwouldn't believe.\n\nThank you,\nThinh\n\n\n> -----Original Message-----\n> From: Herouth Maoz [mailto:[email protected]]\n> Sent: Tuesday, August 17, 1999 8:29 AM\n> To: Pham, Thinh; '[email protected]'\n> Subject: RE: [SQL] datediff function\n> \n> \n> At 16:18 +0300 on 17/08/1999, Pham, Thinh wrote:\n> \n> \n> > What happens if I just want to compare using minute only or \n> hour only instead\n> > of day?
Is there a function to do that or is postgres only \n> work in day?\n> \n> No problem, just write 'now'::datetime - '6 hours'::timespan. \n> Or some such.\n> Please read about the datetime and timespan types in the user guide.\n> \n> Herouth\n> \n> --\n> Herouth Maoz, Internet developer.\n> Open University of Israel - Telem project\n> http://telem.openu.ac.il/~herutma\n> \n> \n> \n", "msg_date": "Tue, 17 Aug 1999 09:37:04 -0500", "msg_from": "\"Pham, Thinh\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [SQL] datediff function" }, { "msg_contents": "At 17:37 +0300 on 17/08/1999, Pham, Thinh wrote:\n\n\n> select datediff(minute, timein, timeout) as totaltime from schedule\n>\n> It would give me the number 60, that's it. I don't want any qualifier behind\n> the number since it blew up the stupid microsoft ADO driver like you\n> wouldn't believe.\n\nIf you don't want to write 'now'::datetime you can always write\ndatetime('now'). Same goes for '1 week'::timespan and timespan( '1 week' ).\nI don't think this will blow up your Microsoft product, but then again,\nanything can blow up a Microsoft product, being a Microsoft Product\nincluded...\n\nTo make things clear, here is what Postgres can and cannot do:\n\nIt can give you the interval between two dates. The returned value is an\ninteger representing the number of days between them.\n\nIt can give you the interval between two datetimes. The returned value is a\ntimespan, expressing days, hours, minutes, etc. as needed.\n\nAnother method to get the same thing is using age( datetime1, datetime2 ).\nThis returns a timespan, but expressed in years, months, days, hours and\nminutes. There is a subtle difference here, because a year is not always\n365 days, and a month is 28-31 days, depending...\n\nYou can also truncate datetimes, dates, and other date related types, to\nthe part of your choice. Truncate it to the minute, and it drops the\nseconds, and gives it back to you with 00 in the seconds. 
Truncate it to\ndays and it gives it back to you at 00:00:00. This is done with\ndate_trunc().\n\nAnother useful operation which can be done is taking one part of the\ndatetime (or related type). For example, the minutes, the seconds, the day,\nthe day of week, or the seconds since the epoch.\n\nNow, I'm not sure these functions do exactly what you wanted. It depends on\nwhat you expect from datediff(minute, timein, itmeout) when they are not on\nthe same day. For 13-oct-1999 14:00:00 and 14-oct-1999 14:00:05, do you\nexpect 5 or 24*60 + 5?\n\nIf only 5, then you can do it with\n\nSELECT date_part( 'minute', datetime1 - datetime2 )\n\nIf not, you will have to do the 24*60 calculation in full.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Tue, 17 Aug 1999 18:24:26 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [SQL] datediff function" }, { "msg_contents": "I unfortunately do MS-SQL.\nDatediff in MS-SQL gives you the number of boundaries between two dates.\nDATEDIFF(day, '1/1/99 23:59:00', '1/2/99 00:01:00') gives 1\nDATEDIFF(day, '1/2/99 00:01:00', '1/2/99 00:03:00') gives 0\nThe {PostgreSQL|postgres|pgsql|whatever} way of doing it is much nicer.\n\n> > select datediff(minute, timein, timeout) as totaltime from schedule\n> >\n> > It would give me the number 60, that's it. I don't want any\n> qualifier behind\n> > the number since it blew up the stupid microsoft ADO driver like you\n> > wouldn't believe.\n>\n> If you don't want to write 'now'::datetime you can always write\n> datetime('now'). 
Same goes for '1 week'::timespan and\nespan( \n> '1 week' ).\n> I don't think this will blow up your Microsoft product, but then again,\n> anything can blow up a Microsoft product, being a Microsoft Product\n> included...\n> \n> To make things clear, here is what Postgres can and cannot do:\n> \n> It can give you the interval between two dates. The returned value is an\n> integer representing the number of days between them.\n> \n> It can give you the interval between two datetimes. The returned \n> value is a\n> timespan, expressing days, hours, minutes, etc. as needed.\n> \n> Another method to get the same thing is using age( datetime1, datetime2 ).\n> This returns a timespan, but expressed in years, months, days, hours and\n> minutes. There is a subtle difference here, because a year is not always\n> 365 days, and a month is 28-31 days, depending...\n> \n> You can also truncate datetimes, dates, and other date related types, to\n> the part of your choice. Truncate it to the minute, and it drops the\n> seconds, and gives it back to you with 00 in the seconds. Truncate it to\n> days and it gives it back to you at 00:00:00. This is done with\n> date_trunc().\n> \n> Another useful operation which can be done is taking one part of the\n> datetime (or related type). For example, the minutes, the \n> seconds, the day,\n> the day of week, or the seconds since the epoch.\n> \n> Now, I'm not sure these functions do exactly what you wanted. It \n> depends on\n> what you expect from datediff(minute, timein, itmeout) when they \n> are not on\n> the same day. 
For 13-oct-1999 14:00:00 and 14-oct-1999 14:00:05, do you\n> expect 5 or 24*60 + 5?\n> \n> If only 5, then you can do it with\n> \n> SELECT date_part( 'minute', datetime1 - datetime2 )\n> \n> If not, you will have to do the 24*60 calculation in full.\n> \n> Herouth\n> \n> --\n> Herouth Maoz, Internet developer.\n> Open University of Israel - Telem project\n> http://telem.openu.ac.il/~herutma\n> \n> \n> \n> \n\n", "msg_date": "Tue, 17 Aug 1999 16:45:52 +0100", "msg_from": "\"John Ridout\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [SQL] datediff function" }, { "msg_contents": "Here the SQL/92 expression supported also by PostgreSQL:\nselect extract(day from date '1999-01-02') - extract(day from date\n'1999-01-01');\n\nJos�\n\nDatediff in MS-SQL gives you the number of boundaries between two dates.\nDATEDIFF(day, '1/1/99 23:59:00', '1/2/99 00:01:00') gives 1\nDATEDIFF(day, '1/2/99 00:01:00', '1/2/99 00:03:00') gives 0\nThe {PostgreSQL|postgres|pgsql|whatever} way of doing it is much nicer.\n\n> > select datediff(minute, timein, timeout) as totaltime from schedule\n> >\n> > It would give me the number 60, that's it. I don't want any\n> qualifier behind\n> > the number since it blew up the stupid microsoft ADO driver like you\n> > wouldn't believe.\n>\n> If you don't want to write 'now'::datetime you can always write\n> datetime('now'). Same goes for '1 week'::timespan and\nespan( extract(day from date '1999-01-02') - extract(day from date '1999\n-01-01');\n\n\nJohn Ridout ha scritto:\n\n> I unfortunately do MS-SQL.\n> > '1 week' ).\n> > I don't think this will blow up your Microsoft product, but then again,\n> > anything can blow up a Microsoft product, being a Microsoft Product\n> > included...\n> >\n> > To make things clear, here is what Postgres can and cannot do:\n> >\n> > It can give you the interval between two dates. 
The returned value is an\n> > integer representing the number of days between them.\n> >\n> > It can give you the interval between two datetimes. The returned\n> > value is a\n> > timespan, expressing days, hours, minutes, etc. as needed.\n> >\n> > Another method to get the same thing is using age( datetime1, datetime2 ).\n> > This returns a timespan, but expressed in years, months, days, hours and\n> > minutes. There is a subtle difference here, because a year is not always\n> > 365 days, and a month is 28-31 days, depending...\n> >\n> > You can also truncate datetimes, dates, and other date related types, to\n> > the part of your choice. Truncate it to the minute, and it drops the\n> > seconds, and gives it back to you with 00 in the seconds. Truncate it to\n> > days and it gives it back to you at 00:00:00. This is done with\n> > date_trunc().\n> >\n> > Another useful operation which can be done is taking one part of the\n> > datetime (or related type). For example, the minutes, the\n> > seconds, the day,\n> > the day of week, or the seconds since the epoch.\n> >\n> > Now, I'm not sure these functions do exactly what you wanted. It\n> > depends on\n> > what you expect from datediff(minute, timein, itmeout) when they\n> > are not on\n> > the same day. For 13-oct-1999 14:00:00 and 14-oct-1999 14:00:05, do you\n> > expect 5 or 24*60 + 5?\n> >\n> > If only 5, then you can do it with\n> >\n> > SELECT date_part( 'minute', datetime1 - datetime2 )\n> >\n> > If not, you will have to do the 24*60 calculation in full.\n> >\n> > Herouth\n> >\n> > --\n> > Herouth Maoz, Internet developer.\n> > Open University of Israel - Telem project\n> > http://telem.openu.ac.il/~herutma\n> >\n> >\n> >\n> >\n\n", "msg_date": "Mon, 23 Aug 1999 16:02:38 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] RE: [SQL] datediff function" } ]
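The ambiguity Herouth raises — whether a minute difference means the minute *field* of the interval or the full difference expressed in minutes — can be pinned down outside the database. The sketch below mirrors both readings with Python's standard datetime module; it is an illustration only, not PostgreSQL's implementation, and the timestamps are adjusted to 14:05 so the two answers come out as 5 versus 24*60 + 5.

```python
from datetime import datetime

# Two readings of "difference in minutes" for timestamps one day
# (plus five minutes) apart, as in the thread's example.
t1 = datetime(1999, 10, 13, 14, 0, 0)
t2 = datetime(1999, 10, 14, 14, 5, 0)

delta = t2 - t1

# Reading 1: just the minute field of the interval, analogous to
# date_part('minute', t2 - t1); days and hours are ignored.
minute_field = (delta.seconds // 60) % 60

# Reading 2: the whole interval expressed in minutes, i.e. the
# 24*60 calculation done "in full".
total_minutes = int(delta.total_seconds() // 60)

print(minute_field, total_minutes)  # prints 5 1445
```

The second number, 1445, is exactly 24*60 + 5, which is the calculation Herouth says must be done by hand if the day boundary matters.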
[ { "msg_contents": "> > It seems that you have found a bug in the cygipc library. I suggest\n> > reporting it to the author of same...\n> \n> Yes, but can we expect all NT sites to get the patch before using\n> PostgreSQL? Is there a workaround we can implement?\n\nThe workaround is in my first mail in this thread - it disables the\npreallocation of semaphores (storage/lmgr/proc.c/InitProcGlobal()) in the\ncygwin port. A patch that corrects the behavior of the IPC library is not\navailable yet, and I don't know if one can ever be available (it may be a\ndesign problem in the library or in Windows internals).\n\n\t\t\tDan\n", "msg_date": "Tue, 17 Aug 1999 17:57:51 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) )" } ]
[ { "msg_contents": "Hi All,\n\nTwo weeks ago somebody reported that DROP USER doesn't remove rights from\nthe relacl field of pg_class. This problem is more serious if you delete a group\nfrom pg_group without revoking rights first: it causes the backend to terminate\nabnormally.\n\nMaybe interesting for others!! Could anybody include a DENY sql command in\nthe TODO list?\n\nMy problem is: a group has rights to access some table. I include a new\nuser in this group, but for three months he will not have rights to access\nthis table. So, if the new user has no rights of his own, he will get rights from his\ngroup. I think it would be enough for a DENY command (deny all on sometable from\nnewuser) to include something like \"NEWUSER=\" in the relacl field.\n\nJust one more question: the aclitem type has the following rights: =arwR\n(insert, select, update/delete, create rule, I suppose).\nHow could I grant update but revoke delete permissions on a table?\n\nBest Regards,\n\nRicardo Coelho.\n\nP.S. I'm using Pgsql 6.5 Linux Intel\n\n", "msg_date": "Tue, 17 Aug 1999 13:32:26 -0300", "msg_from": "\"Ricardo Coelho\" <[email protected]>", "msg_from_op": true, "msg_subject": "Drop user problem and DENY command" } ]
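The "=arwR" letters Ricardo lists map one-to-one to privileges. A small decoder, assuming exactly the letter meanings stated in the message (a = insert/append, r = select/read, w = update/delete/write, R = create rule), makes the encoding concrete; the helper and the sample ACL string are invented for illustration and are not PostgreSQL source code.

```python
# Decode an aclitem-style rights string such as "newuser=arw" into
# privilege names, using the letter meanings from the message above.
# Illustrative helper only.
LETTER_TO_PRIV = {
    "a": "INSERT",          # append
    "r": "SELECT",          # read
    "w": "UPDATE/DELETE",   # write
    "R": "RULE",            # create rule
}

def decode_aclitem(item):
    """Split 'user=letters' and expand each letter to a privilege name."""
    user, _, letters = item.partition("=")
    return user, [LETTER_TO_PRIV[ch] for ch in letters]

print(decode_aclitem("newuser=arw"))
# prints ('newuser', ['INSERT', 'SELECT', 'UPDATE/DELETE'])
```

Note that update and delete share the single 'w' bit in this scheme, which would explain why granting update while revoking delete was not possible at the time.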
[ { "msg_contents": "> > > i filed a bug report at one time noting that:\n> > > \"ALTER TABLE tbname RENAME TO tbname_new;\"\n> > > was not renaming all of the extents.\n> > > \n> > > do you know if this has been fixed?\n> > \n> > Yes, in 6.5.*.\n> \n> cool.\n> \n> if i'm annoying you, tell me to go away.\n> \n> do you know why vacuum can consume an enormous amount of core when cleaning\n> a large table?\n> \n> i've actually had to add a gig of swap to our server so that vacuum can\n> actually finish on some of our tables.\n> \n> sometimes the vacuum won't even do that, and i need to:\n> \n> pg_dump -t tb -s db > tb.dmp\n> psql -c \"copy tb to stdout using delimiters ':';\" db | gzip > tb.dat.gz\n> psql -c \"drop table tb;\" db\n> psql -e db < tb.dmp\n> zcat tb.dat.gz | psql -c \"copy tb from stdin using delimiters ':';\" db\n> \n> very painful (taking several hours).\n\nCan someone comment on the high memory usage of vacuum?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 14:38:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Large database" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n>> do you know why vacuum can consume an enormous amount of core when cleaning\n>> a large table?\n\n> Can someone comment on the high memory usage of vacuum?\n\nFirst thing that comes to mind is memory leaks for palloc'd data\ntypes...\n\nWhat exactly is the declaration of the table that's causing the problem?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 16:31:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [GENERAL] Large database " } ]
[ { "msg_contents": "Do we have a list of projects using PostgreSQL?\n\nThis should be on it.\n\nIt's an impressive linking of our DB engine with Gimp,\nthe best free graphics package.\n\nKeith.\n------------- Begin Forwarded Message -------------\n\nDate: Mon, 16 Aug 1999 16:41:57 +0200\nFrom: Alessandro Baldoni <[email protected]>\nX-Accept-Language: it\nMIME-Version: 1.0\nSubject: [HaruspeX] New site - Version 4.0 Released\nContent-Transfer-Encoding: 7bit\n\n\n\nDear HaruspeX Friends,\n HaruspeX is now hosted at http://www.linux.it/ospiti/haruspex. I\nchecked the site this morning and it finally works.\nI'm also proud to announce the release of HaruspeX 4.0 (please start\ndownloading it from tomorrow).\n>From the NEWS file:\n\n HaruspeX 4.0 is a major technological upgrade: thumbnails are no\nlonger stored\n as large objects, but as chunks of base64-encoded data in a separate\ntable.\n This neither increases nor decreases disk usage, but it makes full\ndatabase\n backups easier. Databases created with previous versions are no\nlonger\n compatible with HaruspeX 4.0.\n\nSince I graduated in July, I had to move my home page. You can find me\nand my plugins (the PhotoCD reader and the LoGConv edge detector) for\nThe GIMP at http://www.geocities.com/SiliconValley/Byte/8091.\n\nI will start my new job next week, therefore I won't be able to add new\nfeatures to HaruspeX for a while. 
I will still be able to process your\nbug reports.\n\nThank you for your support\nAlessandro Baldoni\n\n\n\n\n------------- End Forwarded Message -------------\n\n\n", "msg_date": "Tue, 17 Aug 1999 21:01:37 +0100 (BST)", "msg_from": "Keith Parks <[email protected]>", "msg_from_op": true, "msg_subject": "[HaruspeX] New site - Version 4.0 Released" }, { "msg_contents": "\nJeff was working on this, I believe...\n\nOn Tue, 17 Aug 1999, Keith Parks wrote:\n\n> Do we have a list of projects using PostgreSQL?\n> \n> This should be on it.\n> \n> It's an impressive linking of our DB engine with Gimp,\n> the best free graphics package.\n> \n> Keith.\n> ------------- Begin Forwarded Message -------------\n> \n> Date: Mon, 16 Aug 1999 16:41:57 +0200\n> From: Alessandro Baldoni <[email protected]>\n> X-Accept-Language: it\n> MIME-Version: 1.0\n> Subject: [HaruspeX] New site - Version 4.0 Released\n> Content-Transfer-Encoding: 7bit\n> \n> \n> \n> Dear HaruspeX Friends,\n> HaruspeX is now hosted at http://www.linux.it/ospiti/haruspex. I\n> checked the site this morning and it finally works.\n> I'm also proud to announce the release of HaruspeX 4.0 (please start\n> downloading it from tomorrow).\n> >From the NEWS file:\n> \n> HaruspeX 4.0 is a major technological upgrade: thumbnails are no\n> longer stored\n> as large objects, but as chunks of base64-encoded data in a separate\n> table.\n> This neither increases nor decreases disk usage, but it makes full\n> database\n> backups easier. Databases created with previous versions are no\n> longer\n> compatible with HaruspeX 4.0.\n> \n> Since I graduated in July, I had to move my home page. You can find me\n> and my plugins (the PhotoCD reader and the LoGConv edge detector) for\n> The GIMP at http://www.geocities.com/SiliconValley/Byte/8091.\n> \n> I will start my new job next week, therefore I won't be able to add new\n> features to HaruspeX for a while. 
I will still be able to process your\n> bug reports.\n> \n> Thank you for your support\n> Alessandro Baldoni\n> \n> \n> \n> \n> ------------- End Forwarded Message -------------\n> \n> \n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 17 Aug 1999 17:30:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [HaruspeX] New site - Version 4.0 Released" }, { "msg_contents": "> Do we have a list of projects using PostgreSQL?\n> \n> This should be on it.\n> \n> It's an impressive linking of our DB engine with Gimp,\n> the best free graphics package.\n\nThis brings up a good point. Can someone go to www.xshare.com, and\nother sites, get a list of applications that use PostgreSQL with URL\nlocations, and give it to our webmaster for inclusion on the web page. \nI see lots of apps now, and we need a list of them on our web site.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 16:36:38 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [HaruspeX] New site - Version 4.0 Released" } ]
[ { "msg_contents": "I have just checked in a little test script that I've been using for a\nwhile (since before 6.5) to pound on lesser-used paths in the optimizer.\nIt's called src/test/regress/regressplans.sh, and it just runs the\nregular regression tests with different PGOPTIONS settings to force\nvarying plan type selections.\n\nThe reason I bring it up is that recently the thing has been failing\nwith backend messages \"ERROR: out of free buffers: time to abort\" (often\nfollowed by a core dump) at what seem to be random places. Running the\nregression test standalone with the same PGOPTIONS settings does not\nreproduce the error, and in fact it happens to different tests if you\nrun the script over and over.\n\nI have also sometimes seen failures out of mdblindwrt, apparently trying\nto dump a dirty buffer for a no-longer-existing database.\n\nAnyone have any idea how to debug this, or what might be triggering it?\nThe best theory I've come up with is that it's got something to do with\nthe repeated destruction and re-creation of the \"regression\" database.\nBut usually the failure occurs during the later tests within a\nparticular regression set, so you'd think any effects of destroying\nthe previous incarnation of the DB would be long gone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 17 Aug 1999 17:32:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "\"out of free buffers: time to abort\" message?" 
}, { "msg_contents": "> I have just checked in a little test script that I've been using for a\n> while (since before 6.5) to pound on lesser-used paths in the optimizer.\n> It's called src/test/regress/regressplans.sh, and it just runs the\n> regular regression tests with different PGOPTIONS settings to force\n> varying plan type selections.\n> \n> The reason I bring it up is that recently the thing has been failing\n> with backend messages \"ERROR: out of free buffers: time to abort\" (often\n> followed by a core dump) at what seem to be random places. Running the\n> regression test standalone with the same PGOPTIONS settings does not\n> reproduce the error, and in fact it happens to different tests if you\n> run the script over and over.\n> \n> I have also sometimes seen failures out of mdblindwrt, apparently trying\n> to dump a dirty buffer for a no-longer-existing database.\n> \n> Anyone have any idea how to debug this, or what might be triggering it?\n> The best theory I've come up with is that it's got something to do with\n> the repeated destruction and re-creation of the \"regression\" database.\n> But usually the failure occurs during the later tests within a\n> particular regression set, so you'd think any effects of destroying\n> the previous incarnation of the DB would be long gone.\n\nIf you restart the postmaster for every test, does the problem go away?\n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 17:59:52 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] \"out of free buffers: time to abort\" message?" } ]
[ { "msg_contents": "\ni have a table which uses an abstime to store a time/date.\n\nthe data originates as unix time_t, which i convert to a string when inserting\nthe data into the table.\n\ni do select's from the table with WHERE clauses that use the abstime stuff.\n\ni want to get the results of a select as unix time_t, without having to use\nthe expensive mktime()/strptime() unix C calls.\n\nis there a way to get the int4 value that postgres is storing raw for\nabstime?\n\ni'm working in C with libpq.\n\n-- \n[ Jim Mercer Reptilian Research [email protected] +1 416 410-5633 ]\n[ The telephone, for those of you who have forgotten, was a commonly used ]\n[ communications technology in the days before electronic mail. ]\n[ They're still easy to find in most large cities. -- Nathaniel Borenstein ]\n", "msg_date": "Tue, 17 Aug 1999 18:23:29 -0400 (EDT)", "msg_from": "[email protected] (Jim Mercer)", "msg_from_op": true, "msg_subject": "getting at the actual int4 value of an abstime" }, { "msg_contents": "On Tue, Aug 17, 1999 at 06:23:29PM -0400, Jim Mercer wrote:\n> \n> i have a table which uses an abstime to store a time/date.\n> \n> the data originates as unix time_t, which i convert to a string when inserting\n> the data into the table.\n> \n> i do select's from the table with WHERE clauses that use the abstime stuff.\n> \n> i want to get the results of a select as unix time_t, without having to use\n> the expensive mktime()/strptime() unix C calls.\n> \n> is there a way to get the int4 value that postgres is storing raw for\n> abstime?\n\ntest=> create table timetest(timefield abstime);\nCREATE\ntest=> select abstime_finite(timefield) from timetest;\nabstime_finite\n--------------\n(0 rows)\n\ntest=> insert into timetest values (now());\nINSERT 518323 1\ntest=> insert into timetest values (now());\nINSERT 518324 1\ntest=> insert into timetest values (now());\nINSERT 518325 1\ntest=> select abstime_finite(timefield) from 
timetest;\nabstime_finite\n--------------\nt \nt \nt \n(3 rows)\n\ntest=> select timefield from timetest;\ntimefield\n----------------------------\nTue Aug 17 18:13:23 1999 CDT\nTue Aug 17 18:13:24 1999 CDT\nTue Aug 17 18:13:25 1999 CDT\n(3 rows)\n\ntest=> select timefield::int4 from timetest;\n?column? \n----------------------------\nTue Aug 17 18:13:23 1999 CDT\nTue Aug 17 18:13:24 1999 CDT\nTue Aug 17 18:13:25 1999 CDT\n(3 rows)\n\nHmm, this looks like a bug. I'm guessing we're storing and int8, and the\nconversion fails, so falls back to the default text output?\n\ntest=> select timefield::int8 from timetest;\n int8\n---------\n934931603\n934931604\n934931605\n(3 rows)\n\ntest=> select timefield::float from timetest;\n float8\n---------\n934931603\n934931604\n934931605\n(3 rows)\n\ntest=> select timefield::numeric from timetest;\n numeric\n---------\n934931603\n934931604\n934931605\n(3 rows)\n\ntest=> \n\nWhat version of PostgreSQL, BTW? This is 6.5: int8 and numeric support got a\nlot better vs. 6.4\n\nRoss\n\n-- \nRoss J. Reedstrom, Ph.D., <[email protected]> \nNSBRI Research Scientist/Programmer\nComputer and Information Technology Institute\nRice University, 6100 S. Main St., Houston, TX 77005\n", "msg_date": "Tue, 17 Aug 1999 18:24:55 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime" }, { "msg_contents": "> > i have a table which uses an abstime to store a time/date.\n> > the data originates as unix time_t\n> > i want to get the results of a select as unix time_t, without having \n> > to use the expensive mktime()/strptime() unix C calls.\n> > is there a way to get the int4 value that postgres is storing raw \n> > for abstime?\n\npostgres=> select date_part('epoch', timefield) from timetest;\ndate_part\n---------\n934957840\n(1 rows)\n\n> test=> select timefield::int4 from timetest;\n> ?column?\n> ----------------------------\n> Tue Aug 17 18:13:23 1999 CDT\n> Hmm, this looks like a bug. I'm guessing we're storing and int8, and the\n> conversion fails, so falls back to the default text output?\n\nProbably not. Abstime is internally stored as 4 bytes, roughly the\nsame as int4, and so Postgres is swallowing the conversion since it\nthinks they are equivalent. But the output conversion is not\nequivalent.\n\n> test=> select timefield::int8 from timetest;\n> int8\n> ---------\n> 934931603\n> What version of PostgreSQL, BTW? This is 6.5: int8 and numeric support got a\n> lot better vs. 6.4\n\nTrying to force a conversion to some other data type works, since the\nconversion isn't swallowed by Postgres. 
The int4 behavior should count\nas a bug...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 18 Aug 1999 06:34:41 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime" }, { "msg_contents": "> test=> select timefield::int8 from timetest;\n> int8\n> ---------\n> 934931603\n> 934931604\n> 934931605\n> (3 rows)\n\nhmmm, as you did, i tried timefield::int4, and got the same results.\ni hadn't tried timefield::int8.\n\ni suspect this would be more efficient than date_part('epoch', timefield).\n\n> What version of PostgreSQL, BTW? This is 6.5: int8 and numeric support got a\n> lot better vs. 6.4\n\ni am using 6.5, soon gonna upgrade to 6.5.1.\n\nthanx, this will make my code much more efficient.\n\nalso, is there a reverse to this?\n\nie. how does one inject unix time_t data into an abstime field.\n\ni currently pass my raw data through a filter, which converts it\nto 'yyyy-mm-dd HH:MM:SS'.\n\nthen i bring it in using: \"COPY tb USING STDIN;\"\n\nit would be nice if i could do a batch of:\n\"INSERT INTO tb (time_t, data1, date2) VALUES (934931604, 'aa', 'bb');\"\n\n-- \n[ Jim Mercer Reptilian Research [email protected] +1 416 410-5633 ]\n[ The telephone, for those of you who have forgotten, was a commonly used ]\n[ communications technology in the days before electronic mail. ]\n[ They're still easy to find in most large cities. 
-- Nathaniel Borenstein ]\n", "msg_date": "Wed, 18 Aug 1999 09:33:42 -0400 (EDT)", "msg_from": "[email protected] (Jim Mercer)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime" }, { "msg_contents": "At 16:33 +0300 on 18/08/1999, Jim Mercer wrote:\n\n\n> i suspect this would be more efficient than date_part('epoch', timefield).\n\nYes, but if someday someone decides that dates should be represented in\nanother way, this will break, and date_part( 'epoch', timefield ) will\nalways return the seconds since epoch. Data encapsulation thingie.\n\n> also, is there a reverse to this?\n>\n> ie. how does one inject unix time_t data into an abstime field.\n\nInto a datetime, simply use datetime( n ). To an abstime, add an abstime()\naround the former. Don't try abstime( n ) - at least it doesn't work in 6.4.\n\n\n> then i bring it in using: \"COPY tb USING STDIN;\"\n>\n> it would be nice if i could do a batch of:\n> \"INSERT INTO tb (time_t, data1, date2) VALUES (934931604, 'aa', 'bb');\"\n\ncopy is more efficient that a bunch of inserts, mind you.\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Wed, 18 Aug 1999 17:08:11 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Re: [HACKERS] getting at the actual int4 value of\n\tan abstime" }, { "msg_contents": "I feel like such a bone-head asking this question, but I didn't find the\nanswer in the FAQ or the documentation, other than pgaccess is supposed to\nhave some of this functionality...\n\nHow do I import/export comma delimited tables?\n\nI thought a combination of pg_dump and psql might do it, but if so I must\nhave missed it. 
I saw a mention of it for pgaccess, but I'm looking for\nsomething I can put in a shell script.\n\n--\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n", "msg_date": "Wed, 18 Aug 1999 10:14:49 -0400 (EDT)", "msg_from": "Bruce Tong <[email protected]>", "msg_from_op": false, "msg_subject": "CVS Import/Export" }, { "msg_contents": "[email protected] (Jim Mercer) writes:\n> [ concern about speed of converting datetime values to/from text for\n> Postgres ]\n\nFWIW, I used to be really concerned about that too, because my\napplications do lots of storage and retrieval of datetimes.\nThen one day I did some profiling, and found that the datetime\nconversion code was down in the noise. Now I don't worry so much.\n\nIt *would* be nice though if there were some reasonably cheap documented\nconversions between datetime and a standard Unix time_t displayed as a\nnumber. 
Not so much because of speed, as because there are all kinds\nof ways to get the conversion wrong on the client side --- messing up\nthe timezone and not coping with all the Postgres datestyles are two\neasy ways to muff it.\n\nBTW, I believe Thomas is threatening to replace all the datetime-like\ntypes with what is currently called datetime (ie, a float8 measuring\nseconds with epoch 1/1/2000), so relying on the internal representation\nof abstime would be a bad idea...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 1999 10:26:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime " }, { "msg_contents": "\nThere is a COPY command that you can use...there is a man page for it,\nsorry, don't use it myself, so dont know the syntax :( I've never had\nmuch luck with using it, so generally cheat and create a fast perl script\nto do it as normal inserts :(\n\nOn Wed, 18 Aug 1999, Bruce Tong wrote:\n\n> I feel like such a bone-head asking this question, but I didn't find the\n> answer in the FAQ or the documentation, other than pgaccess is supposed to\n> have some of this functionality...\n> \n> How do I import/export comma delimited tables?\n> \n> I thought a combination of pg_dump and psql might do it, but if so I must\n> have missed it. I saw a mention of it for pgaccess, but I'm looking for\n> something I can put in a shell script.\n> \n> --\n> \n> Bruce Tong | Got me an office; I'm there late at night.\n> Systems Programmer | Just send me e-mail, maybe I'll write.\n> Electronic Vision / FITNE | \n> [email protected] | -- Joe Walsh for the 21st Century\n> \n> \n> \n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 18 Aug 1999 11:42:58 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CVS Import/Export" }, { "msg_contents": "On Wed, 18 Aug 1999, Bruce Tong wrote:\n\n> I feel like such a bone-head asking this question, but I didn't find the\n> answer in the FAQ or the documentation, other than pgaccess is supposed to\n> have some of this functionality...\n> \n> How do I import/export comma delimited tables?\n> \n> I thought a combination of pg_dump and psql might do it, but if so I must\n> have missed it. I saw a mention of it for pgaccess, but I'm looking for\n> something I can put in a shell script.\n> \n> --\n> \n> Bruce Tong | Got me an office; I'm there late at night.\n\nIf you're after changing the field separator, psql has a \\f command.\n\nYou could do something like:\n\n$ psql -e <dbname> < out.sql > dump\n\nwhere out.sql looks like:\n\n\\f ,\n\\o \n-- some select statements go here\nSELECT foo FROM bar;\n\n-- EOF\n\n\nA method for importing would be similar.\n\n\n\nSimon.\n-- \n \"Don't anthropomorphise computers - they don't like it.\"\n \n Simon Drabble It's like karma for your brain.\n [email protected]\n\n", "msg_date": "Wed, 18 Aug 1999 10:52:39 -0400 (EDT)", "msg_from": "Simon Drabble <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CVS Import/Export" }, { "msg_contents": "At 17:14 +0300 on 18/08/1999, Bruce Tong wrote:\n\n> How do I import/export comma delimited tables?\n>\n> I thought a combination of pg_dump and psql might do it, but if so I must\n> have missed it. I saw a mention of it for pgaccess, but I'm looking for\n> something I can put in a shell script.\n\nIt has nothing to do with pgaccess. 
The way to import/export any tables is\nusing either the COPY command in PostgreSQL's SQL dialect, or the \\copy\ncommand in psql.\n\nThe difference between them is in where they look for the file to convert\nto/from. The COPY command is executed by the backend, and looks for a file\nin the backend's machine. The \\copy looks on the client machine that runs\nthe psql. Since, more often than not, this is the same machine, the best\nway to remember is that COPY is executed by the backend and therefore the\nfile must be readable to the postgres superuser (or writable for an\nexport), and \\copy runs in the client, so it should be readable/writable to\nthe one who runs the psql.\n\nCOPY has an option to read the standard input instead of a file, which is\nhow clients like psql are able to write things like \\copy. You can use COPY\nFROM STDIN in shell scripts.\n\nCOPY is better that \\copy as it allows you to set a delimiter, which \\copy\ndoes not - it always expects tabs.\n\nAnyway, this imports data from a file named \"stam.txt\" into the table\n\"test5\" of the database \"testing\":\n\npsql -c 'COPY test5 FROM stdin' testing < stam.txt\n\nThe following exports the same table:\n\npsql -qc 'COPY test5 TO stdin' testing > stam.txt\n\nHerouth\n\n--\nHerouth Maoz, Internet developer.\nOpen University of Israel - Telem project\nhttp://telem.openu.ac.il/~herutma\n\n\n", "msg_date": "Wed, 18 Aug 1999 17:56:44 +0300", "msg_from": "Herouth Maoz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CVS Import/Export" }, { "msg_contents": "> There is a COPY command that you can use...there is a man page for it,\n> sorry, don't use it myself, so dont know the syntax.\n\nAhh, COPY. All I really needed was the pointer. I remember skimming\nthat one and concluding it wasn't what I wanted. 
I must have skimmed\ntoo fast as I was certain it wouldn't be in SQL since nothing turned up in\nmy search of \"The Practical SQL Handbook\" index.\n\nThanks to all for the examples.\n\n--\n\nBruce Tong | Got me an office; I'm there late at night.\nSystems Programmer | Just send me e-mail, maybe I'll write.\nElectronic Vision / FITNE | \[email protected] | -- Joe Walsh for the 21st Century\n\n\n", "msg_date": "Wed, 18 Aug 1999 11:04:12 -0400 (EDT)", "msg_from": "Bruce Tong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CVS Import/Export" }, { "msg_contents": ">> There is a COPY command that you can use...there is a man page for it,\n>> sorry, don't use it myself, so dont know the syntax.\n\nThen some bit about usually using Perl because of trouble getting COPY to\nperform exactly right and then having to pay the price with slow inserts\ninstead of fast COPY (sorry, I overhastily deleted it). I'm pretty sure\nMarc posted it (sorry about the cc if it wasn't you Marc)...\n\nYes I usually have a similar problem, especially with a 'buggy' CSV file or\nother delimited files that haven't been rigorously generated or with\nhandling of NULL fields etc.\n\nI clean up the file with Perl but use this code to still use fast COPYs:\n\n#!/usr/local/bin/perl5\n\nmy $database='test';\nopen PGSQL, \"|psql $database\" or die \"hey man, you crazy or what! 
I canny\nopen pipe psql $database!\";\n\nmy $table='test';\n\nprint PGSQL \"COPY $table from stdin;\\n\"; # First COPY\nmy $print_count=0; # Set counter to zero\n\nwhile (<LIST>) { # Where list is a filehandle to your CVS/delimited file\n\n # We go through the file line by line\n # Clean-up each line\n # And put each element in array @values\n # In the order of the fields in the table definition\n # And replacing NULLs with '\\N' (inclusive of quotes)\n\n print PGSQL join(\"\\t\",@values),\"\\n\";\n ++$print_count;\n\n if (!($print_count%50)) { # every fifty print\n print PGSQL \"\\\\.\\n\"; # close that batch of entries\n print PGSQL \"COPY $table from stdin;\\n\"; # start next batch\n };\n\n};\n\nprint PGSQL \"\\\\.\\n\";\n# we've printed a copy so worst that can happen is we copy in nothing!\n# but we must print this at then end to make sure all entries are copied\n\nclose(LIST);\nclose(PGSQL);\n\nI must say that it goes like the proverbial stuff off the shovel.\n\nHTH,\n\nStuart.\n+--------------------------+--------------------------------------+\n| Stuart C. G. Rison | Ludwig Institute for Cancer Research |\n+--------------------------+ 91 Riding House Street |\n| N.B. new phone code!! | London, W1P 8BT |\n| Tel. +44 (0)207 878 4041 | UNITED KINGDOM |\n| Fax. 
+44 (0)207 878 4040 | [email protected] |\n+--------------------------+--------------------------------------+\n", "msg_date": "Wed, 18 Aug 1999 19:16:45 +0100", "msg_from": "Stuart Rison <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] CVS Import/Export" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>>>> i want to get the results of a select as unix time_t, without having \n>>>> to use the expensive mktime()/strptime() unix C calls.\n>>>> is there a way to get the int4 value that postgres is storing raw \n>>>> for abstime?\n\n> postgres=> select date_part('epoch', timefield) from timetest;\n> date_part\n> ---------\n> 934957840\n> (1 rows)\n\nBTW, while rooting around in contrib/ I noticed that contrib/unixdate\nhas an efficient way of going the other direction: just apply the\nconversion from abstime with a type cheat. The coding is obsolete,\nbut updated to 6.5, it works fine:\n\nregression=> CREATE FUNCTION datetime(int4) RETURNS datetime\nregression-> AS 'abstime_datetime' LANGUAGE 'internal';\nCREATE\nregression=> select datetime(935779244);\ndatetime\n----------------------------\nFri Aug 27 14:40:44 1999 EDT\n(1 row)\nregression=> select date_part('epoch',\nregression-> 'Fri Aug 27 14:40:44 1999 EDT'::datetime);\ndate_part\n---------\n935779244\n(1 row)\n\nNifty. I wonder whether we shouldn't move this contrib feature into the\nstandard system for 6.6? Perhaps with a less generic name, such as\nepoch2datetime() --- otherwise the parser will think that it can use the\nfunction as an automatic int4->datetime type conversion, which is probably\nNot a Good Idea. 
But having both conversion directions would sure make\nlife simpler and less error-prone for client apps that need to translate\ndatetimes to and from time_t.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Aug 1999 15:05:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime " }, { "msg_contents": "> BTW, while rooting around in contrib/ I noticed that contrib/unixdate\n> has an efficient way of going the other direction: just apply the\n> conversion from abstime with a type cheat. The coding is obsolete,\n> but updated to 6.5, it works fine:\n\ni saw it there, but couldn't get it to work.\n\nthis looks like what i need.\n\n-- \n[ Jim Mercer Reptilian Research [email protected] +1 416 410-5633 ]\n[ The telephone, for those of you who have forgotten, was a commonly used ]\n[ communications technology in the days before electronic mail. ]\n[ They're still easy to find in most large cities. 
-- Nathaniel Borenstein ]\n", "msg_date": "Fri, 27 Aug 1999 15:11:51 -0400 (EDT)", "msg_from": "[email protected] (Jim Mercer)", "msg_from_op": true, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime" }, { "msg_contents": "You don't need to create such function it works already on v6.5:\n\nprova=> select date_part('epoch', current_date);\ndate_part\n---------\n935964000\n(1 row)\n\nprova=> select datetime(935964000);\ndatetime\n---------------------------\n30/08/1999 00:00:00.00 CEST\n(1 row)\n\nprova=> select date_part('epoch','30/08/1999 00:00:00.00 CEST'::datetime);\ndate_part\n---------\n935964000\n(1 row)\n\nJos�\n\nTom Lane ha scritto:\n\n> Thomas Lockhart <[email protected]> writes:\n> >>>> i want to get the results of a select as unix time_t, without having\n> >>>> to use the expensive mktime()/strptime() unix C calls.\n> >>>> is there a way to get the int4 value that postgres is storing raw\n> >>>> for abstime?\n>\n> > postgres=> select date_part('epoch', timefield) from timetest;\n> > date_part\n> > ---------\n> > 934957840\n> > (1 rows)\n>\n> BTW, while rooting around in contrib/ I noticed that contrib/unixdate\n> has an efficient way of going the other direction: just apply the\n> conversion from abstime with a type cheat. The coding is obsolete,\n> but updated to 6.5, it works fine:\n>\n> regression=> CREATE FUNCTION datetime(int4) RETURNS datetime\n> regression-> AS 'abstime_datetime' LANGUAGE 'internal';\n> CREATE\n> regression=> select datetime(935779244);\n> datetime\n> ----------------------------\n> Fri Aug 27 14:40:44 1999 EDT\n> (1 row)\n> regression=> select date_part('epoch',\n> regression-> 'Fri Aug 27 14:40:44 1999 EDT'::datetime);\n> date_part\n> ---------\n> 935779244\n> (1 row)\n>\n> Nifty. I wonder whether we shouldn't move this contrib feature into the\n> standard system for 6.6? 
Perhaps with a less generic name, such as\n> epoch2datetime() --- otherwise the parser will think that it can use the\n> function as an automatic int4->datetime type conversion, which is probably\n> Not a Good Idea. But having both conversion directions would sure make\n> life simpler and less error-prone for client apps that need to translate\n> datetimes to and from time_t.\n>\n> regards, tom lane\n>\n> ************\n\n", "msg_date": "Mon, 30 Aug 1999 15:19:25 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] getting at the actual int4 value of an abstime" } ]
[ { "msg_contents": "> On Tue, 17 Aug 1999, Bruce Momjian wrote:\n> \n> > > On Tue, 17 Aug 1999, Bruce Momjian wrote:\n> > > \n> > > > > > My wife would not mind the trip. We have friends in Toronto. It is 10\n> > > > > > hours. Yikes.\n> > > > > \n> > > > > 22 for me, definitely not a \"weekend trip\" :( But, depending on when we\n> > > > > planned for it, I could probably take some time off, and then I could just\n> > > > > drive it back with me and save some shipping...*shrug*\n> > > > > \n> > > > \n> > > > 22, double-yikes.\n> > > \n> > > *grin* we drove it straight when we went to Barrie last week...I can\n> > > usually do it straight, alone, as long as I haven't had to work that day,\n> > > but Andrea shared the driving this time, which was *really* helpful :)\n> > \n> > I know I can't do 22 hours straight. I have trouble with 6 to Boston. \n> > Maybe I can go to Boston, and stay there for a day or two, then go to\n> > Toronto.\n> \n> Ummm...Boston is the wrong direction, eh?:)\n\nMan, I see now. I remember Montreal is directly north of me, but I\nforget whether Toronto is East or West of that. Now I see it is West. \n:-)\n\nMan, it is right above Tom Lane. I might as well just pick him up on my\nway. I want to visit Pittsburg too. Tom, how far it Toronto from you.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 17 Aug 1999 23:31:48 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Man, it is right above Tom Lane. I might as well just pick him up on my\n> way. I want to visit Pittsburg too. Tom, how far it Toronto from you.\n\nIt's about 6 or 7 hours from here to Toronto. 
From where you live, I'd\nthink it'd be quicker to buzz up I-80 than to go through Pittsburgh...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 1999 09:58:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow " }, { "msg_contents": "> Bruce Momjian <[email protected]> writes:\n> > Man, it is right above Tom Lane. I might as well just pick him up on my\n> > way. I want to visit Pittsburg too. Tom, how far it Toronto from you.\n> \n> It's about 6 or 7 hours from here to Toronto. From where you live, I'd\n> think it'd be quicker to buzz up I-80 than to go through Pittsburgh...\n\nJust considering all options.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 18 Aug 1999 11:11:45 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [CORE] Re: tomorrow" } ]
[ { "msg_contents": "Hi, all\n\nI have found out what the problem is, although not (yet) the solution. \n\nExecutive summary:\n------------------\nThe scan.l code is not flexing as intended. This means that, for most\nproduction installations, the max token size is around 64kB.\n\nTechnical summary:\n------------------\nThe problem is that scan.l is compiling to scan.c with YY_USES_REJECT being\ndefined. When YY_USES_REJECT is defined, the token buffer is NOT\nexpandable, and the parser will fail if expansion is attempted. However,\nYY_USES_REJECT should not be defined, and I'm trying to work out why it is.\nI have posted to the flex mailing list, and expect a reply within the next\nday or so.\n\nThe bottom line:\n------------------\nThe token limit seems to be effectively the size of YY_BUF_SIZE in scan.l,\nuntil I submit a patch which should make it unlimited. \n\n\nMikeA\n\n>> -----Original Message-----\n>> From: Natalya S. Makushina [mailto:[email protected]]\n>> Sent: Tuesday, August 17, 1999 3:15 PM\n>> To: '[email protected]'\n>> Subject: [HACKERS] Problem with query length\n>> \n>> \n>> -------------------------------------------------------------\n>> ----------------------------------------------------------------\n>> I have posted this mail to psql-general. But i didn't get \n>> any answer yet.\n>> -------------------------------------------------------------\n>> ----------------------------------------------------------------\n>> \n>> When i had tried to insert into text field text (length \n>> about 4000 chars), the backend have crashed with status 139. \n>> This error is happened when the query length ( SQL query) is \n>> more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n>> \n>> My questions are:\n>> 1. Is there problem with text field or with length of SQL query?\n>> 2. Would postgresql have any limits for SQL query length?\n>> I checked the archives but only found references to the 8K \n>> limit. 
Any help would be greatly appreciated.\n>> Thanks for help\n>> \t\t\t\tNatalya Makushina \n>> \t\t\t\[email protected]\n>> \n>> \n", "msg_date": "Wed, 18 Aug 1999 09:55:21 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problem with query length" } ]
[ { "msg_contents": "Hello,\n\nWe are using v6.3.2 with patches, and have many tables which\nuse 'large object'. The main problems with it are:\n\n(1) cannot dump large objects; is this still true in v6.5.1 ?\n    or is there no plan for implementing dumping of blobs in the near future?\n(2) if I want to clone dbs from linux to solaris machine, due to the above\n    lo & pg_dump problem, lots of manual work is needed to dump dbs\n    -> restore to another architecture machine... Is there any utility\n    to ease duplication(backup) of dbs?\n(3) to upgrade v6.3.2 dbs to v6.5.1, including large objects, is there\n    a way to dump/restore?\n\nAnother problem with v6.3.2 is frequent messages (errors?) related to\nthe backend cache invalidation failure -- probably posted many times...\nlike this:\n  NOTICE: SIAssignBackendId: discarding tag 2147430138\n  Connection databese 'request' failed.\n  FATAL 1: Backend cache invalidation initialization failed\n(1) Will increasing max connection # from 32 to 64 in\n    src/include/storage/sinvaladt.h simply fix the above problem?\n(2) If I want to keep v6.3.2, which PATCH will FIX the above problem?\n(3) Is this already fixed in v6.5.1?\n\nBest Regards,\nC.S.Park\n\n", "msg_date": "Wed, 18 Aug 1999 17:43:27 +0900", "msg_from": "\"C.S.Park\" <[email protected]>", "msg_from_op": true, "msg_subject": "[Q] pg_dump with large object & backend cache..." } ]
[ { "msg_contents": "Just for a testing I made a huge table (>2GB and it has 10000000\ntuples). copy 10000000 tuples took 23 minutes. This is not so\nbad. Vacuum analyze took 11 minutes, not too bad. After this I created\nan index on int4 column. It took 9 minutes. Next I deleted 5000000\ntuples to see how long delete took. I found it was 6\nminutes. Good. Then I ran into a problem. After that I did vacuum\nanalyze, and seemed it took forever! (actually took 47 minutes). The\nbiggest problem was postgres's process size. It was 478MB! This is not\nacceptable for me. Any idea?\n\nThis is PostgreSQL 6.5.1 running on RH 6.0.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 18 Aug 1999 17:44:30 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "vacuum process size" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Just for a testing I made a huge table (>2GB and it has 10000000\n> tuples). copy 10000000 tuples took 23 minutes. This is not so\n> bad. Vacuum analyze took 11 minutes, not too bad. After this I created\n> an index on int4 column. It took 9 minutes. Next I deleted 5000000\n> tuples to see how long delete took. I found it was 6\n> minutes. Good. Then I ran into a problem. After that I did vacuum\n> analyze, and seemed it took forever! (actually took 47 minutes). The\n> biggest problem was postgres's process size. It was 478MB! This is not\n> acceptable for me. Any idea?\n\nYeah, I've complained about that before --- it seems that vacuum takes\na really unreasonable amount of time to remove dead tuples from an index.\nIt's been like that at least since 6.3.2, probably longer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 1999 10:02:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": ">Tatsuo Ishii <[email protected]> writes:\n>> Just for a testing I made a huge table (>2GB and it has 10000000\n>> tuples). 
copy 10000000 tuples took 23 minutes. This is not so\n>> bad. Vacuum analyze took 11 minutes, not too bad. After this I created\n>> an index on int4 column. It took 9 minutes. Next I deleted 5000000\n>> tuples to see how long delete took. I found it was 6\n>> minutes. Good. Then I ran into a problem. After that I did vacuum\n>> analyze, and seemed it took forever! (actually took 47 minutes). The\n>> biggest problem was postgres's process size. It was 478MB! This is not\n>> acceptable for me. Any idea?\n>\n>Yeah, I've complained about that before --- it seems that vacuum takes\n>a really unreasonable amount of time to remove dead tuples from an index.\n>It's been like that at least since 6.3.2, probably longer.\n\nHiroshi came up with a workaround for this (see included\npatches). After applying it, the process size shrank from 478MB to\n86MB! (the processing time did not decrease, however). According to\nhim, repalloc seems not very effective with a large number of calls. The\npatches probably decrease the number to 1/10.\n--\nTatsuo Ishii\n\n-------------------------------------------------------------------------\n*** vacuum.c.orig\tSat Jul 3 09:32:40 1999\n--- vacuum.c\tThu Aug 19 17:34:18 1999\n***************\n*** 2519,2530 ****\n static void\n vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n {\n \n \t/* allocate a VPageDescr entry if needed */\n \tif (vpl->vpl_num_pages == 0)\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(100 * sizeof(VPageDescr));\n! \telse if (vpl->vpl_num_pages % 100 == 0)\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + 100) * sizeof(VPageDescr));\n \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n \t(vpl->vpl_num_pages)++;\n \n--- 2519,2531 ----\n static void\n vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n {\n+ #define PG_NPAGEDESC 1000\n \n \t/* allocate a VPageDescr entry if needed */\n \tif (vpl->vpl_num_pages == 0)\n! 
\t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(PG_NPAGEDESC * sizeof(VPageDescr));\n! \telse if (vpl->vpl_num_pages % PG_NPAGEDESC == 0)\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + PG_NPAGEDESC) * sizeof(VPageDescr));\n \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n \t(vpl->vpl_num_pages)++;\n \n", "msg_date": "Thu, 19 Aug 1999 17:54:00 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "Hi all,\n\nI found the following comment in utils/mmgr/aset.c.\nThe high memory usage of big vacuum is probably caused by this\nchange.\nCalling repalloc() many times with its size parameter increasing\nwould need large amount of memory.\n\nShould vacuum call realloc() directly ?\nOr should AllocSet..() be changed ?\n\nComments ?\n\n * NOTE:\n * This is a new (Feb. 05, 1999) implementation of the allocation set\n * routines. AllocSet...() does not use OrderedSet...() any more.\n * Instead it manages allocations in a block pool by itself, combining\n * many small allocations in a few bigger blocks. AllocSetFree() does\n * never free() memory really. It just add's the free'd area to some\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n * list for later reuse by AllocSetAlloc(). All memory blocks are\nfree()'d\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n>\n> >Tatsuo Ishii <[email protected]> writes:\n> >> Just for a testing I made a huge table (>2GB and it has 10000000\n> >> tuples). copy 10000000 tuples took 23 minutes. This is not so\n> >> bad. Vacuum analyze took 11 minutes, not too bad. After this I created\n> >> an index on int4 column. It took 9 minutes. Next I deleted 5000000\n> >> tuples to see how long delete took. I found it was 6\n> >> minutes. Good. Then I ran into a problem. After that I did vacuum\n> >> analyze, and seemed it took forever! (actually took 47 minutes). The\n> >> biggest problem was postgres's process size. 
It was 478MB! This is not\n> >> acceptable for me. Any idea?\n> >\n> >Yeah, I've complained about that before --- it seems that vacuum takes\n> >a really unreasonable amount of time to remove dead tuples from an index.\n> >It's been like that at least since 6.3.2, probably longer.\n>\n> Hiroshi came up with a work around for this(see included\n> patches). After applying it, the process size shrinked from 478MB to\n> 86MB! (the processing time did not descrease, however). According to\n> him, repalloc seems not very effective with large number of calls. The\n> patches probably descreases the number to 1/10.\n> --\n> Tatsuo Ishii\n>\n> -------------------------------------------------------------------------\n> *** vacuum.c.orig\tSat Jul 3 09:32:40 1999\n> --- vacuum.c\tThu Aug 19 17:34:18 1999\n> ***************\n> *** 2519,2530 ****\n> static void\n> vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n> {\n>\n> \t/* allocate a VPageDescr entry if needed */\n> \tif (vpl->vpl_num_pages == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(100 *\n> sizeof(VPageDescr));\n> ! \telse if (vpl->vpl_num_pages % 100 == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *)\n> repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + 100) *\n> sizeof(VPageDescr));\n> \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n> \t(vpl->vpl_num_pages)++;\n>\n> --- 2519,2531 ----\n> static void\n> vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n> {\n> + #define PG_NPAGEDESC 1000\n>\n> \t/* allocate a VPageDescr entry if needed */\n> \tif (vpl->vpl_num_pages == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *)\n> palloc(PG_NPAGEDESC * sizeof(VPageDescr));\n> ! \telse if (vpl->vpl_num_pages % PG_NPAGEDESC == 0)\n> ! 
\t\tvpl->vpl_pagedesc = (VPageDescr *)\n> repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + PG_NPAGEDESC) *\n> sizeof(VPageDescr));\n> \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n> \t(vpl->vpl_num_pages)++;\n>\n\n", "msg_date": "Fri, 20 Aug 1999 09:41:56 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] vacuum process size " }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> I found the following comment in utils/mmgr/aset.c.\n> The high memory usage of big vacuum is probably caused by this\n> change.\n\nAFAIK, there is no \"change\" there. free() doesn't give memory\nback to the kernel either.\n\n> Calling repalloc() many times with its size parameter increasing\n> would need large amount of memory.\n\nGood point, because aset.c doesn't coalesce adjacent free chunks.\nAnd of course, reallocating the block bigger and bigger is exactly\nthe usual behavior with realloc-using code :-(\n\nI don't think it would be a good idea to add coalescing logic to aset.c\n--- that'd defeat the purpose of building a small/simple/fast allocator.\n\nPerhaps for large standalone chunks (those that AllocSetAlloc made an\nentire separate block for), AllocSetFree should free() the block instead\nof putting the chunk on its own freelist. Assuming that malloc/free are\nsmart enough to coalesce adjacent blocks, that would prevent the bad\nbehavior from recurring once the request size gets past\nALLOC_SMALLCHUNK_LIMIT, and for small requests we don't care.\n\nBut it doesn't look like there is any cheap way to detect that a chunk\nbeing freed takes up all of its block. We'd have to mark it specially\nsomehow. A kluge that comes to mind is to set the chunk->size to zero\nwhen it is a standalone allocation.\n\nI believe Jan designed the current aset.c logic. 
Jan, any comments?\n\n> Should vacuum call realloc() directly ?\n\nNot unless you like *permanent* memory leaks instead of transient ones.\nConsider what will happen at elog().\n\nHowever, another possible solution is to redesign the data structure\nin vacuum() so that it can be made up of multiple allocation blocks,\nrather than insisting that all the array entries always be consecutive.\nThen it wouldn't depend on repalloc at all. On the whole I like that\nidea better --- even if repalloc can be fixed not to waste memory, it\nstill implies copying large amounts of data around for no purpose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Aug 1999 09:19:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "Tom, you already handled this, right?\n\n\n> >Tatsuo Ishii <[email protected]> writes:\n> >> Just for a testing I made a huge table (>2GB and it has 10000000\n> >> tuples). copy 10000000 tuples took 23 minutes. This is not so\n> >> bad. Vacuum analyze took 11 minutes, not too bad. After this I created\n> >> an index on int4 column. It took 9 minutes. Next I deleted 5000000\n> >> tuples to see how long delete took. I found it was 6\n> >> minutes. Good. Then I ran into a problem. After that I did vacuum\n> >> analyze, and seemed it took forever! (actually took 47 minutes). The\n> >> biggest problem was postgres's process size. It was 478MB! This is not\n> >> acceptable for me. Any idea?\n> >\n> >Yeah, I've complained about that before --- it seems that vacuum takes\n> >a really unreasonable amount of time to remove dead tuples from an index.\n> >It's been like that at least since 6.3.2, probably longer.\n> \n> Hiroshi came up with a work around for this(see included\n> patches). After applying it, the process size shrinked from 478MB to\n> 86MB! (the processing time did not descrease, however). 
According to\n> him, repalloc seems not very effective with large number of calls. The\n> patches probably descreases the number to 1/10.\n> --\n> Tatsuo Ishii\n> \n> -------------------------------------------------------------------------\n> *** vacuum.c.orig\tSat Jul 3 09:32:40 1999\n> --- vacuum.c\tThu Aug 19 17:34:18 1999\n> ***************\n> *** 2519,2530 ****\n> static void\n> vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n> {\n> \n> \t/* allocate a VPageDescr entry if needed */\n> \tif (vpl->vpl_num_pages == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(100 * sizeof(VPageDescr));\n> ! \telse if (vpl->vpl_num_pages % 100 == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + 100) * sizeof(VPageDescr));\n> \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n> \t(vpl->vpl_num_pages)++;\n> \n> --- 2519,2531 ----\n> static void\n> vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n> {\n> + #define PG_NPAGEDESC 1000\n> \n> \t/* allocate a VPageDescr entry if needed */\n> \tif (vpl->vpl_num_pages == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(PG_NPAGEDESC * sizeof(VPageDescr));\n> ! \telse if (vpl->vpl_num_pages % PG_NPAGEDESC == 0)\n> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + PG_NPAGEDESC) * sizeof(VPageDescr));\n> \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n> \t(vpl->vpl_num_pages)++;\n> \n> \n> ************\n> Check out \"PostgreSQL Wearables\" @ http://www.pgsql.com\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:33:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size" }, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n> Tom, you already handled this, right?\n\nSomeone committed it, not sure if it was me.\n\nI was worried whether vacuum's other expandable lists needed the same\ntreatment, but Hiroshi and/or Tatsuo seemed to think it wasn't worth the\ntrouble to change them. So I guess the item is closed.\n\n\t\t\tregards, tom lane\n\n\n>>>> Tatsuo Ishii <[email protected]> writes:\n>>>>> Just for a testing I made a huge table (>2GB and it has 10000000\n>>>>> tuples). copy 10000000 tuples took 23 minutes. This is not so\n>>>>> bad. Vacuum analyze took 11 minutes, not too bad. After this I created\n>>>>> an index on int4 column. It took 9 minutes. Next I deleted 5000000\n>>>>> tuples to see how long delete took. I found it was 6\n>>>>> minutes. Good. Then I ran into a problem. After that I did vacuum\n>>>>> analyze, and seemed it took forever! (actually took 47 minutes). The\n>>>>> biggest problem was postgres's process size. It was 478MB! This is not\n>>>>> acceptable for me. Any idea?\n>>>> \n>>>> Yeah, I've complained about that before --- it seems that vacuum takes\n>>>> a really unreasonable amount of time to remove dead tuples from an index.\n>>>> It's been like that at least since 6.3.2, probably longer.\n>> \n>> Hiroshi came up with a work around for this(see included\n>> patches). After applying it, the process size shrinked from 478MB to\n>> 86MB! (the processing time did not descrease, however). According to\n>> him, repalloc seems not very effective with large number of calls. 
The\n>> patches probably descreases the number to 1/10.\n>> --\n>> Tatsuo Ishii\n>> \n>> -------------------------------------------------------------------------\n>> *** vacuum.c.orig\tSat Jul 3 09:32:40 1999\n>> --- vacuum.c\tThu Aug 19 17:34:18 1999\n>> ***************\n>> *** 2519,2530 ****\n>> static void\n>> vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n>> {\n>> \n>> /* allocate a VPageDescr entry if needed */\n>> if (vpl->vpl_num_pages == 0)\n>> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(100 * sizeof(VPageDescr));\n>> ! \telse if (vpl->vpl_num_pages % 100 == 0)\n>> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + 100) * sizeof(VPageDescr));\n>> \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n>> \t(vpl->vpl_num_pages)++;\n>> \n>> --- 2519,2531 ----\n>> static void\n>> vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n>> {\n>> + #define PG_NPAGEDESC 1000\n>> \n>> /* allocate a VPageDescr entry if needed */\n>> if (vpl->vpl_num_pages == 0)\n>> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(PG_NPAGEDESC * sizeof(VPageDescr));\n>> ! \telse if (vpl->vpl_num_pages % PG_NPAGEDESC == 0)\n>> ! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + PG_NPAGEDESC) * sizeof(VPageDescr));\n>> \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n>> \t(vpl->vpl_num_pages)++;\n", "msg_date": "Mon, 27 Sep 1999 18:50:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": ">Bruce Momjian <[email protected]> writes:\n>> Tom, you already handled this, right?\n>\n>Someone committed it, not sure if it was me.\n\nI have committed changes to both the stable and the current.\n\n>I was worried whether vacuum's other expandable lists needed the same\n>treatment, but Hiroshi and/or Tatsuo seemed to think it wasn't worth the\n>trouble to change them. 
So I guess the item is closed.\n\nI think so.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 28 Sep 1999 10:21:15 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] vacuum process size " } ]
[ { "msg_contents": "If I understand things right, the postgres process is both a reader and\nwriter. Is this right? If it is, would there be any value in separating\nthe reader and writer portions of the program? This is site specific, but\nmost production environments require far more reading than writing, and this\nwould allow smaller, faster (perhaps) readers to be started, while only\nopening the writers when necessary. In fact, only one writer could be used,\nas a daemon possibly, with perhaps slave writers where viable.\nAlso, this would allow administrators to further optimise the operation of\nthe database, and it would be a step closer to a parallel architecture.\nImagine being able to run two servers with readers only, and one server with\na writer, and auxillary reader, all serving up the same database!\n\nBy the way, is it possible to run two postgres servers using the same\ndatabase shared using NFS or SMB or something? Probably not, but why not?\n\nI know that a good network comms/signalling library would be needed to do\nsome of this stuff. Would it not be worthwhile to try coaxing one of the\nopen source products (perhaps ACE, I don't know of any others: does it have\na C interface) to supporting all the platforms that PG does?\n\nAny thoughts....\n\n\nMikeA\n", "msg_date": "Wed, 18 Aug 1999 11:27:57 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Architecture" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> If I understand things right, the postgres process is both a reader and\n> writer. Is this right? If it is, would there be any value in separating\n> the reader and writer portions of the program?\n\nRight offhand I don't see where there would be any real benefit to be\ngained. It's true that you could make a smaller Postgres executable\nif you stripped out writing (the same goes for a lot of other features\narguably not needed very often...) 
But on modern platforms it's not a\nwin to make multiple variants of an executable file. You're better off\nwith one executable that will be shared as an in-memory image by all the\nprocesses running Postgres.\n\n> By the way, is it possible to run two postgres servers using the same\n> database shared using NFS or SMB or something? Probably not, but why not?\n\nThe main problem is interprocess interlocking. Currently we rely on\nshared memory and SysV-style semaphores, which means all the Postgres\nprocesses need to be on the same Unix system. (Making some of them\nreader-only wouldn't help; they still need to be involved in locking.)\nYou could conceivably have the database files themselves be NFS-mounted\nby that system, but I doubt it'd be a performance win to do so.\n\nSpreading the servers across multiple machines would almost certainly\nbe a big loss because of the increased cost of communication for\nlocking. OTOH, a multi-CPU Unix box could be a real big win.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 1999 10:56:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Architecture " } ]
[ { "msg_contents": "> > > In any case, when one backend quits and another one is\n> > > started, the new\n> > > one will re-use the semaphore no longer used by the \n> defunct backend.\n> >\n> > I have tested my solution a bit more and I have to say that \n> reusing a\n> > semaphore by a new backend works OK. But it is not possible \n> for a newly\n> > created backend to use a semaphore allocated by postmaster \n> (it freezes on\n> > test if the semaphore with given key already exists - done with\n> > semId=semget(semKey, 0, 0) in function IpcSemaphoreCreate() in\n> > storage/ipc/ipc.c ). Why it is, I don't know, but it seems that\n> > my solution\n> > uses the ipc library in the right way. There are no longer any error\n> > messages from the ipc library when running the server. And I\n> > can't say that\n> > the ipc library is a 100% correct implementation of SysV IPC, it\n> > is probably\n> > (sure ;-) )caused by the Windows internals.\n> >\n> \n> Yutaka Tanida [[email protected]] and I have examined IPC\n> library.\n> \n> We found that postmaster doesn't call exec() after fork() since v6.4.\n> \n> The value of static/extern variables which cygipc library holds may\n> be different from their initial values when postmaster fork()s child\n> backend processes.\n> \n> I made the following patch for cygipc library on trial.\n> This patch was effective for Yutaka's test case.\n\nI will try it right now, it looks interesting. It could also explain some\n\"non-deterministic\" behavior of the cygipc library.\n\n\t\t\tDan\n", "msg_date": "Wed, 18 Aug 1999 12:02:55 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " } ]
[ { "msg_contents": "> > I made the following patch for cygipc library on trial.\n> > This patch was effective for Yutaka's test case.\n> \n> I will try it right now, it looks interesting. It could also \n> explain some\n> \"non-deterministic\" behavior of the cygipc library.\n\nAfter first few tests I can say, that the patch for cygipc looks like the\nright solution of the backend-freezing problem. So we can put it into\nsrc/win32 and mention to apply it in the README.NT in part about cygipc. I\nwill send it to author of the cygipc library.\n\n\t\t\tDan\n", "msg_date": "Wed, 18 Aug 1999 13:46:52 +0200", "msg_from": "Horak Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] backend freezeing on win32 fixed (I hope ;-) ) " } ]
[ { "msg_contents": "I need to parse this format:\n\n<Database ftpdatabase [hostname[:port]]>\n [<DatabaseID somebody>]\n [<DatabasePWD mypwd>]\n [<Table ftp_users>\n [<Uname uname>]\n [<CryptedPwd cryptedpwd>]\n [<FtpPath ftppath>]\n </Table>]\n</Database>\n\nThat's all that I currently have.\nSo, which of these would be the best tool (I know little about any of them).\nIf anyone gets the urge to just whip out a quick grammar file for me it\nwould be much appreciated.\nOr at least point me to a beginner level tutorial somewhere.\n\nadvTHANKSance\n\tDEJ\n", "msg_date": "Wed, 18 Aug 1999 13:11:47 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "[OT] flex, yacc, and bison" }, { "msg_contents": "\"Jackson, DeJuan\" <[email protected]> writes:\n> I need to parse this format:\n> <Database ftpdatabase [hostname[:port]]>\n> [<DatabaseID somebody>]\n> [<DatabasePWD mypwd>]\n> [<Table ftp_users>\n> [<Uname uname>]\n> [<CryptedPwd cryptedpwd>]\n> [<FtpPath ftppath>]\n> </Table>]\n> </Database>\n\nThat looks suspiciously like an SGML DTD to me...\n\nRather than doing the whole lex/yacc bit, I'd suggest finding some\nready-made SGML-parsing tools. For instance, if you are handy with\nPerl I think there are some SGML modules in CPAN ... 
certainly there\nare HTML parsers, which'd probably be easy to adapt to the purpose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 18 Aug 1999 16:59:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [OT] flex, yacc, and bison " }, { "msg_contents": "Tom Lane <[email protected]> writes:\n> \"Jackson, DeJuan\" <[email protected]> writes:\n> > I need to parse this format:\n> > <Database ftpdatabase [hostname[:port]]>\n> > [<DatabaseID somebody>]\n> > [<DatabasePWD mypwd>]\n> > [<Table ftp_users>\n> > [<Uname uname>]\n> > [<CryptedPwd cryptedpwd>]\n> > [<FtpPath ftppath>]\n> > </Table>]\n> > </Database>\n> \n> That looks suspiciously like an SGML DTD to me...\n\nWell, it could almost kind of be SGML, but as specified, there's no\nway it could possibly be XML (attributes have to have values, a couple\nof other things), which is unfortunate, since that's where all the\ncool tools are being developed these days.\n\n> Rather than doing the whole lex/yacc bit, I'd suggest finding some\n> ready-made SGML-parsing tools. For instance, if you are handy with\n> Perl I think there are some SGML modules in CPAN ... certainly there\n> are HTML parsers, which'd probably be easy to adapt to the purpose.\n\nI agree with Tom that you try to find existing parsers tuned towards\nthis stuff, with the addition that you do your self a favor (if you\nhave the option to change the format), and change it to be something\nthat can be parsed as XML.\n\nYou don't mention what this is for, but if you're able to move to XML,\nyou can use Perl (which I personally prefer), Python, TCL, or even one\nof several C libraries (expat or rxp or GNOME's libxml) that are\nsuprisingly easy to use, given that text hacking is not something that\nis traditionally easy to do in C. 
The possibilities are much broader.\n\nMike.\n", "msg_date": "18 Aug 1999 17:46:13 -0400", "msg_from": "Michael Alan Dorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [OT] flex, yacc, and bison" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Jackson, DeJuan\" <[email protected]> writes:\n> > I need to parse this format:\n> > <Database ftpdatabase [hostname[:port]]>\n> > [<DatabaseID somebody>]\n> > [<DatabasePWD mypwd>]\n> > [<Table ftp_users>\n> > [<Uname uname>]\n> > [<CryptedPwd cryptedpwd>]\n> > [<FtpPath ftppath>]\n> > </Table>]\n> > </Database>\n> \n> That looks suspiciously like an SGML DTD to me...\n> \n> Rather than doing the whole lex/yacc bit, I'd suggest finding some\n> ready-made SGML-parsing tools. For instance, if you are handy with\n> Perl I think there are some SGML modules in CPAN ... certainly there\n> are HTML parsers, which'd probably be easy to adapt to the purpose.\n\nThat's definitly not an SGML DTD and it isn't either valid SGML. It will\nbe hard to find a Perl Module.\n\n-Egon\n\nPS: a small and quick test if my email address is valid\n", "msg_date": "Thu, 19 Aug 1999 00:02:07 +0200", "msg_from": "Egon Schmid <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [OT] flex, yacc, and bison" } ]
[ { "msg_contents": "I'm trying to write a ncftp_auth deamon that will utilize PostgreSQL. The\nbase SDK is in C. I'm adapting it to C++ because I feel like it, but Perl\nwould be a slightly steeper learning curve (sockets and all that). I'm\ntrying to get a conf file parser such that the login's can be in multiple\ndatabase and/or on different machines.\n\tDEJ\n\n> -----Original Message-----\n> From:\tMichael Alan Dorman [SMTP:[email protected]]\n> Sent:\tWednesday, August 18, 1999 4:46 PM\n> To:\[email protected]\n> Subject:\tRe: [HACKERS] [OT] flex, yacc, and bison\n> \n> Tom Lane <[email protected]> writes:\n> > \"Jackson, DeJuan\" <[email protected]> writes:\n> > > I need to parse this format:\n> > > <Database ftpdatabase [hostname[:port]]>\n> > > [<DatabaseID somebody>]\n> > > [<DatabasePWD mypwd>]\n> > > [<Table ftp_users>\n> > > [<Uname uname>]\n> > > [<CryptedPwd cryptedpwd>]\n> > > [<FtpPath ftppath>]\n> > > </Table>]\n> > > </Database>\n> > \n> > That looks suspiciously like an SGML DTD to me...\n> \n> Well, it could almost kind of be SGML, but as specified, there's no\n> way it could possibly be XML (attributes have to have values, a couple\n> of other things), which is unfortunate, since that's where all the\n> cool tools are being developed these days.\n> \n> > Rather than doing the whole lex/yacc bit, I'd suggest finding some\n> > ready-made SGML-parsing tools. For instance, if you are handy with\n> > Perl I think there are some SGML modules in CPAN ... 
certainly there\n> > are HTML parsers, which'd probably be easy to adapt to the purpose.\n> \n> I agree with Tom that you try to find existing parsers tuned towards\n> this stuff, with the addition that you do your self a favor (if you\n> have the option to change the format), and change it to be something\n> that can be parsed as XML.\n> \n> You don't mention what this is for, but if you're able to move to XML,\n> you can use Perl (which I personally prefer), Python, TCL, or even one\n> of several C libraries (expat or rxp or GNOME's libxml) that are\n> suprisingly easy to use, given that text hacking is not something that\n> is traditionally easy to do in C. The possibilities are much broader.\n> \n> Mike.\n> \n> ************\n> Check out \"PostgreSQL Wearables\" @ http://www.pgsql.com\n", "msg_date": "Wed, 18 Aug 1999 17:59:09 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] [OT] flex, yacc, and bison" }, { "msg_contents": "\nOn 18-Aug-99 Jackson, DeJuan wrote:\n> I'm trying to write a ncftp_auth deamon that will utilize PostgreSQL. The\n> base SDK is in C. I'm adapting it to C++ because I feel like it, but Perl\n> would be a slightly steeper learning curve (sockets and all that). I'm\n> trying to get a conf file parser such that the login's can be in multiple\n> database and/or on different machines.\n\nIf you're into C/C++ (like I am), it's almost trivial to parse that into\nindividual strings. I was under the impression from your first note that\nC/C++ wasn't an option. At the minimum, think strtok(). There's also\nstrsep() but it's not been one of my favorites. 
If you still need actual\ncode let me know and I can send you something.\n\nVince.\n\n\n\n\n> DEJ\n> \n>> -----Original Message-----\n>> From: Michael Alan Dorman [SMTP:[email protected]]\n>> Sent: Wednesday, August 18, 1999 4:46 PM\n>> To: [email protected]\n>> Subject: Re: [HACKERS] [OT] flex, yacc, and bison\n>> \n>> Tom Lane <[email protected]> writes:\n>> > \"Jackson, DeJuan\" <[email protected]> writes:\n>> > > I need to parse this format:\n>> > > <Database ftpdatabase [hostname[:port]]>\n>> > > [<DatabaseID somebody>]\n>> > > [<DatabasePWD mypwd>]\n>> > > [<Table ftp_users>\n>> > > [<Uname uname>]\n>> > > [<CryptedPwd cryptedpwd>]\n>> > > [<FtpPath ftppath>]\n>> > > </Table>]\n>> > > </Database>\n>> > \n>> > That looks suspiciously like an SGML DTD to me...\n>> \n>> Well, it could almost kind of be SGML, but as specified, there's no\n>> way it could possibly be XML (attributes have to have values, a couple\n>> of other things), which is unfortunate, since that's where all the\n>> cool tools are being developed these days.\n>> \n>> > Rather than doing the whole lex/yacc bit, I'd suggest finding some\n>> > ready-made SGML-parsing tools. For instance, if you are handy with\n>> > Perl I think there are some SGML modules in CPAN ... certainly there\n>> > are HTML parsers, which'd probably be easy to adapt to the purpose.\n>> \n>> I agree with Tom that you try to find existing parsers tuned towards\n>> this stuff, with the addition that you do your self a favor (if you\n>> have the option to change the format), and change it to be something\n>> that can be parsed as XML.\n>> \n>> You don't mention what this is for, but if you're able to move to XML,\n>> you can use Perl (which I personally prefer), Python, TCL, or even one\n>> of several C libraries (expat or rxp or GNOME's libxml) that are\n>> suprisingly easy to use, given that text hacking is not something that\n>> is traditionally easy to do in C. 
The possibilities are much broader.\n>> \n>> Mike.\n>> \n>> ************\n>> Check out \"PostgreSQL Wearables\" @ http://www.pgsql.com\n> \n> ************\n> Check out \"PostgreSQL Wearables\" @ http://www.pgsql.com\n> \n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Wed, 18 Aug 1999 19:25:47 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] [OT] flex, yacc, and bison" }, { "msg_contents": "\nOn 18-Aug-99 Vince Vielhaber wrote:\n> \n> On 18-Aug-99 Jackson, DeJuan wrote:\n>> I'm trying to write a ncftp_auth deamon that will utilize PostgreSQL. The\n>> base SDK is in C. I'm adapting it to C++ because I feel like it, but Perl\n>> would be a slightly steeper learning curve (sockets and all that). I'm\n>> trying to get a conf file parser such that the login's can be in multiple\n>> database and/or on different machines.\n> \n> If you're into C/C++ (like I am), it's almost trivial to parse that into\n> individual strings. I was under the impression from your first note that\n> C/C++ wasn't an option. At the minimum, think strtok(). There's also\n> strsep() but it's not been one of my favorites. If you still need actual\n> code let me know and I can send you something.\n\nIt's my bestst:\n\nint split(char delem, char *str, ... )\n{\n char *tmp, *_src, *t, **s;\n int fields = 0;\n\n _src = str;\n\n va_list ap;\n va_start(ap, str);\n\n if (! (tmp=new char[2048]) )\n return -1;\n\n while(_src)\n { t = _src;\n while (*t && ((*t) != delem) ) ++t;\n s = va_arg(ap, char **);\n if (!s || !*t)\n break;\n *s = ( t-_src-1 > 0) ? 
strndup(_src,t-_src-1) : 0 ;\n _src = t+1;\n ++ fields;\n }\n\n return fields;\n}\n\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Thu, 19 Aug 1999 13:43:49 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] [OT] flex, yacc, and bison" } ]
[ { "msg_contents": " Hello \n\nThank's very much for your help.\n\nI have already installed two copies of PostgreSQL DB. One was installed from RPM, another one was compiled without RPM. The copy installed from RPM has problem with query length, another copy haven't this problem! \n\nI decide to compile it from sources and try to use. After compilation all is OK! Query length is 8191 now.\n\n May be error presents in RPM.\n\nFrom :\t\tAnsley, Michael [SMTP:[email protected]]\nDate :\t \t18 August 1999 11:55\nTo :\t\t'Natalya S. Makushina'; '[email protected]'\nSubject :\t\tRE: [HACKERS] Problem with query length\n\nHi, all\n\nI have found out what the problem is, although not (yet) the solution. \n\nExecutive summary:\n------------------\nThe scan.l code is not flexing as intended. This means that, for most\nproduction installations, the max token size is around 64kB.\n\nTechnical summary:\n------------------\nThe problem is that scan.l is compiling to scan.c with YY_USES_REJECT being\ndefined. When YY_USES_REJECT is defined, the token buffer is NOT\nexpandable, and the parser will fail if expansion is attempted. However,\nYY_USES_REJECT should not be defined, and I'm trying to work out why it is.\nI have posted to the flex mailing list, and expect a reply within the next\nday or so.\n\nThe bottom line:\n------------------\nThe token limit seems to be effectively the size of YY_BUF_SIZE in scan.l,\nuntil I submit a patch which should make it unlimited. \n\n\nMikeA\n\n>> -----Original Message-----\n>> From: Natalya S. Makushina [mailto:[email protected]]\n>> Sent: Tuesday, August 17, 1999 3:15 PM\n>> To: '[email protected]'\n>> Subject: [HACKERS] Problem with query length\n>> \n>> \n>> -------------------------------------------------------------\n>> ----------------------------------------------------------------\n>> I have posted this mail to psql-general. 
But i didn't get \n>> any answer yet.\n>> -------------------------------------------------------------\n>> ----------------------------------------------------------------\n>> \n>> When i had tried to insert into text field text (length \n>> about 4000 chars), the backend have crashed with status 139. \n>> This error is happened when the query length ( SQL query) is \n>> more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n>> \n>> My questions are:\n>> 1. Is there problem with text field or with length of SQL query?\n>> 2. Would postgresql have any limits for SQL query length?\n>> I checked the archives but only found references to the 8K \n>> limit. Any help would be greatly appreciated.\n>> Thanks for help\n>> \t\t\t\tNatalya Makushina \n>> \t\t\t\[email protected]\n>> \n>> \n", "msg_date": "Thu, 19 Aug 1999 12:39:27 +0400", "msg_from": "\"Natalya S. Makushina\" <[email protected]>", "msg_from_op": true, "msg_subject": "[HACKERS] Problem with query length" } ]
[ { "msg_contents": "Taking this further:\n\nI have discovered why flex is inserting YY_USES_REJECT into the generated .c\nfile; it's because scan.l uses variable-length trailing context. This is\nmentioned by the flex documentation as a performance degrader of note, as\nwell as not allowing dynamic token resizing.\n\nSo, the question is, can we rewrite scan.l in such a way as to remove the\nvariable-length trailing context? I have mailed the flex mailing list to\nfind out if and how this can be done, so perhaps I'll be able to get rid of\nthe vltc's and consequently the YY_USES_REJECT. Then we'll have the dynamic\ntokens. If anybody can help me on this, I'd really appreciate it, because\nlanguage parsing is not my strong point. If anybody can point me to a\nreference implementation of SQL, either 92, or 93 (preferably with a\nreference lex file) that would probably be a good start.\n\nNatalya,\nI suggest that you compile your own source, and make sure that the parser\nhas its YY_BUF_SIZE set to 64k. This will at least allow you to use fairly\nlarge tokens, until something else can be done. The number that you have to\ncheck is in src/backend/parser/Makefile:\n\nscan.c: scan.l\n $(LEX) $<\n sed -e 's/#define YY_BUF_SIZE .*/#define YY_BUF_SIZE 65536/' \\\n <lex.yy.c >scan.c\n rm -f lex.yy.c\n\nMake sure that the 65535 is there, or adjust it to what you need. It goes\nwithout saying that you need to test this before rolling it into production.\n \n\nCheers...\n\nMikeA\n\n>> -----Original Message-----\n>> From: Natalya S. Makushina [mailto:[email protected]]\n>> Sent: Thursday, August 19, 1999 10:39 AM\n>> To: 'Ansley, Michael'; '[email protected]'\n>> Subject: [HACKERS] Problem with query length\n>> \n>> \n>> Hello \n>> \n>> Thank's very much for your help.\n>> \n>> I have already installed two copies of PostgreSQL DB. One \n>> was installed from RPM, another one was compiled without \n>> RPM. 
The copy installed from RPM has problem with query \n>> length, another copy haven't this problem! \n>> \n>> I decide to compile it from sources and try to use. After \n>> compilation all is OK! Query length is 8191 now.\n>> \n>> May be error presents in RPM.\n>> \n>> From :\t\tAnsley, Michael \n>> [SMTP:[email protected]]\n>> Date :\t \t18 August 1999 11:55\n>> To :\t\t'Natalya S. Makushina'; '[email protected]'\n>> Subject :\t\tRE: [HACKERS] Problem with query length\n>> \n>> Hi, all\n>> \n>> I have found out what the problem is, although not (yet) the \n>> solution. \n>> \n>> Executive summary:\n>> ------------------\n>> The scan.l code is not flexing as intended. This means \n>> that, for most\n>> production installations, the max token size is around 64kB.\n>> \n>> Technical summary:\n>> ------------------\n>> The problem is that scan.l is compiling to scan.c with \n>> YY_USES_REJECT being\n>> defined. When YY_USES_REJECT is defined, the token buffer is NOT\n>> expandable, and the parser will fail if expansion is \n>> attempted. However,\n>> YY_USES_REJECT should not be defined, and I'm trying to work \n>> out why it is.\n>> I have posted to the flex mailing list, and expect a reply \n>> within the next\n>> day or so.\n>> \n>> The bottom line:\n>> ------------------\n>> The token limit seems to be effectively the size of \n>> YY_BUF_SIZE in scan.l,\n>> until I submit a patch which should make it unlimited. \n>> \n>> \n>> MikeA\n>> \n>> >> -----Original Message-----\n>> >> From: Natalya S. Makushina [mailto:[email protected]]\n>> >> Sent: Tuesday, August 17, 1999 3:15 PM\n>> >> To: '[email protected]'\n>> >> Subject: [HACKERS] Problem with query length\n>> >> \n>> >> \n>> >> -------------------------------------------------------------\n>> >> ----------------------------------------------------------------\n>> >> I have posted this mail to psql-general. 
But i didn't get \n>> >> any answer yet.\n>> >> -------------------------------------------------------------\n>> >> ----------------------------------------------------------------\n>> >> \n>> >> When i had tried to insert into text field text (length \n>> >> about 4000 chars), the backend have crashed with status 139. \n>> >> This error is happened when the query length ( SQL query) is \n>> >> more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n>> >> \n>> >> My questions are:\n>> >> 1. Is there problem with text field or with length of SQL query?\n>> >> 2. Would postgresql have any limits for SQL query length?\n>> >> I checked the archives but only found references to the 8K \n>> >> limit. Any help would be greatly appreciated.\n>> >> Thanks for help\n>> >> \t\t\t\tNatalya Makushina \n>> >> \t\t\t\[email protected]\n>> >> \n>> >> \n>> \n", "msg_date": "Thu, 19 Aug 1999 10:51:51 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problem with query length" } ]
[ { "msg_contents": "Please read the Inprise Linux Developers survey. The most interesting thing\nthat I picked out was the db platform question:\n\nWhich local database or database server do you plan to use with your Linux\napplication development? \n\nPG came 4th with 22.0%. This is quite a good indication of where things are\ngoing. Note that first was Oracle, second mySQL, and third Interbase. This\nmeans that 5300 people who answered the survey plan to use PG. That's a lot\nof people.\n\nFor the interfaces list: this is perhaps an opportunity to get Inprise to\nhelp in building BDE components.\n\nhttp://www.borland.com/linux/survey/\n\nMikeA\n\n", "msg_date": "Thu, 19 Aug 1999 11:40:32 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "Survey" }, { "msg_contents": "Hi All\n\nI'm having problems getting MS' ADO control to connect to my \nPostgres server via ODBC. Can anyone help here?\n\nThe problem is that it crashes VB when I attempt the connection.\n\nI'm using VB6 learning edition. I get the same problem when I try \nto connect with VB5's RDO object. I might be switching from VB5 \nto VB6 _IF_ I can test the connection.\n\nRegards\n\nJason Doller\n", "msg_date": "Fri, 20 Aug 1999 09:06:14 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Survey" } ]
[ { "msg_contents": "Dave Page <[email protected]> writes:\n> BTW what's this 'Check out \"PostgreSQL Wearables\" @ http://www.pgsql.com'\n> that seems to be appearing on the bottom of the last 5 or 6 messages on the\n> list - I'm not sure I want that added to my sig.\n\nI don't like it either. Please remove the advertising, Marc.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 19 Aug 1999 09:21:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Mail list attachments" } ]
[ { "msg_contents": "\nThe following is from the web stats for www.PostgreSQL.org,browser report:\n\n 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n\nThe only one on top of that is the ht/dig stuff...\n\nDamn, we get alot of MicroSloth ppl through here, eh? :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 19 Aug 1999 11:52:50 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Tangent ... you know what's scary?" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> The following is from the web stats for www.PostgreSQL.org,browser report:\n> \n> 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> \n> The only one on top of that is the ht/dig stuff...\n> \n> Damn, we get alot of MicroSloth ppl through here, eh? :)\n\nis that just doing a uniq on the agent field? if that's true, then it\nlooks like the linux user agents might vary on kernel version, in which\ncase you might have to add those up. plus there are about 100 version\nof Mozilla (4.0, 4.01, 4.02 .. 4.08, 4.5, 4.51, 4.6, 4.61). 
don't lose\nthe faith just so quickly.\n\nOTOH, didn't you know the browser war was over?\n\nBTW, i always thought it was really cheesy that MSIE uses Mozilla in its\nuser agent string. I would be embarassed to have my company's product\nnot be able to stand up by itself, for better or for worse.\n", "msg_date": "Thu, 19 Aug 1999 10:12:20 -0500", "msg_from": "Jeff Hoffmann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> The following is from the web stats for www.PostgreSQL.org,browser report:\n \n> Damn, we get alot of MicroSloth ppl through here, eh? :)\n\nHEY! I resemble that remark! (although I have enough sense to use\nNetscape on this machine...)\n\nLamar Owen\nWGCR Internet Radio\n[on a Win95 machine due to the lack of broadcast automation software\nunder linux....;-( ]\n", "msg_date": "Thu, 19 Aug 1999 11:22:18 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "On Thu, 19 Aug 1999, Jeff Hoffmann wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > The following is from the web stats for www.PostgreSQL.org,browser report:\n> > \n> > 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> > 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> > 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> > 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> > 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> > 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> > 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> > \n> > The only one on top of that is the ht/dig stuff...\n> > \n> > Damn, we get alot of MicroSloth ppl through here, eh? :)\n> \n> is that just doing a uniq on the agent field? 
if that's true, then it\n\nbased on analog's report...\n\n> looks like the linux user agents might vary on kernel version, in which\n> case you might have to add those up. plus there are about 100 version\n> of Mozilla (4.0, 4.01, 4.02 .. 4.08, 4.5, 4.51, 4.6, 4.61). don't lose\n> the faith just so quickly.\n\nactually, I think its a cool thing...just think, all those MicroSloth\nusers learning about us to get away from SQL Server :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 19 Aug 1999 12:24:31 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "On Thu, 19 Aug 1999, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > \n> > The following is from the web stats for www.PostgreSQL.org,browser report:\n> \n> > Damn, we get alot of MicroSloth ppl through here, eh? :)\n> \n> HEY! I resemble that remark! (although I have enough sense to use\n\nYou look like a Sloth? *grin*\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 19 Aug 1999 12:31:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" 
}, { "msg_contents": "\nOn 19-Aug-99 Lamar Owen wrote:\n> The Hermit Hacker wrote:\n> [on a Win95 machine due to the lack of broadcast automation software\n> under linux....;-( ]\n\nYou may be seeing some soon - but it'll be for FreeBSD.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Thu, 19 Aug 1999 11:48:10 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "The Hermit Hacker wrote:\n> > > Damn, we get alot of MicroSloth ppl through here, eh? :)\n> >\n> > HEY! I resemble that remark! \n> \n> You look like a Sloth? *grin*\n\nROTFL.... Judge for yourself:\nhttp://www.wgcr.org/about_us/who/LAMAR.JPG (sorry about the capitals,\nbut that's the way it's on there...)\n\nIt is sad that even Unix-heads like myself use such an inferior\nproduct.... Sad state indeed. (one reason why I volunteered to tackle\nthe RPM packaging issue head-on.... I had to do SOMETHING to help, even\na little thing such as packaging is a little help.)\n\nLamar Owen\nWGCR Internet Radio \n(who's still waiting for a .wav file of the \"Official\" PostgreSQL\npronunciation to encode to RealAudio and stream over my\nRealServer....;-))\n", "msg_date": "Thu, 19 Aug 1999 12:13:45 -0400", "msg_from": "Lamar Owen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "On Thu, 19 Aug 1999, Lamar Owen wrote:\n\n> The Hermit Hacker wrote:\n> > > > Damn, we get alot of MicroSloth ppl through here, eh? :)\n> > >\n> > > HEY! I resemble that remark! 
\n> > \n> > You look like a Sloth? *grin*\n> \n> ROTFL.... Judge for yourself:\n> http://www.wgcr.org/about_us/who/LAMAR.JPG (sorry about the capitals,\n> but that's the way it's on there...)\n\nWell, sloth might be pushing it... :)\n\n> WGCR Internet Radio \n> (who's still waiting for a .wav file of the \"Official\" PostgreSQL\n> pronunciation to encode to RealAudio and stream over my\n> RealServer....;-))\n\nAck...will see what I can do :(\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 19 Aug 1999 13:20:25 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "\nOn 19-Aug-99 The Hermit Hacker wrote:\n> On Thu, 19 Aug 1999, Lamar Owen wrote:\n> \n>> The Hermit Hacker wrote:\n>> > > > Damn, we get alot of MicroSloth ppl through here, eh? :)\n>> > >\n>> > > HEY! I resemble that remark! \n>> > \n>> > You look like a Sloth? *grin*\n>> \n>> ROTFL.... Judge for yourself:\n>> http://www.wgcr.org/about_us/who/LAMAR.JPG (sorry about the capitals,\n>> but that's the way it's on there...)\n> \n> Well, sloth might be pushing it... :)\n> \n>> WGCR Internet Radio \n>> (who's still waiting for a .wav file of the \"Official\" PostgreSQL\n>> pronunciation to encode to RealAudio and stream over my\n>> RealServer....;-))\n> \n> Ack...will see what I can do :(\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n\nLamar, do you have festival running? If you do just pass it this:\n \nPosgress Q L\n\nand you'll have it.\n\nFor anyone not aware, festival is a text to speech package. 
I'll occasionally\nuse it for leaving messages on people's answering machines/voice mail!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: [email protected] flame-mail: /dev/null\n # include <std/disclaimers.h> TEAM-OS2\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n", "msg_date": "Thu, 19 Aug 1999 15:12:04 -0400 (EDT)", "msg_from": "Vince Vielhaber <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "Marc G. Fournier wrote:\n\n>\n> The following is from the web stats for www.PostgreSQL.org,browser report:\n>\n> 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n>\n> The only one on top of that is the ht/dig stuff...\n>\n> Damn, we get alot of MicroSloth ppl through here, eh? :)\n\n Could be a misinterpretation of log info. Many ppl have\n internet connectivity (and it seems plenty of time to spend\n for surfing too) at work. Others just use their M$ infected\n workstation for surfing while doing their real work on\n systems where real work can be done.\n\n BTW: A bus station is where a bus stops. A train station is\n where a train stops. I have a work station in front of me.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#========================================= [email protected] (Jan Wieck) #\n\n\n", "msg_date": "Thu, 19 Aug 1999 23:56:49 +0200 (MET DST)", "msg_from": "[email protected] (Jan Wieck)", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "On Thu, Aug 19, 1999 at 11:52:50AM -0300, The Hermit Hacker wrote:\n> \n> The following is from the web stats for www.PostgreSQL.org,browser report:\n> \n> 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> \n> The only one on top of that is the ht/dig stuff...\n> \n> Damn, we get alot of MicroSloth ppl through here, eh? :)\n\n\tHint: Microsoft makes the dominant web browser. Why are you\nsurprised?\n\n\n-- \nAdam Haberlach | \"Every day I download and build the latest\[email protected] | source code and check the about box. If I'm\nhttp://www.newsnipple.com | still in it, I stay at work.\" --Me\n", "msg_date": "Sun, 22 Aug 1999 11:48:12 -0700", "msg_from": "Adam Haberlach <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" 
}, { "msg_contents": "On Sun, 22 Aug 1999, Adam Haberlach wrote:\n\n> On Thu, Aug 19, 1999 at 11:52:50AM -0300, The Hermit Hacker wrote:\n> > \n> > The following is from the web stats for www.PostgreSQL.org,browser report:\n> > \n> > 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> > 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> > 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> > 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> > 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> > 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> > 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> > \n> > The only one on top of that is the ht/dig stuff...\n> > \n> > Damn, we get alot of MicroSloth ppl through here, eh? :)\n> \n> \tHint: Microsoft makes the dominant web browser. Why are you\n> surprised?\n\nActually, from tests I did a few months back, this isn't accurate...the\nproblem with the Netscape/Unix browsers is that for every version of Unix\nout there, it will report a different \"browser\"...\n\nBut, it only seperates by 'distinct full name'...here's just a selection\nof the Linux ones reported, and each one of those would be in the above\nreport \"seperately\":\n\nMozilla/4.61 [en] (X11; I; Linux 2.3.2 i686)\nMozilla/4.61 [en] (X11; I; Linux 2.3.9 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.0.30 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.0.34 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.0.34 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.0.35 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.0.35 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.0.36 i386)\nMozilla/4.61 [en] (X11; U; Linux 2.0.36 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.0.36 i586; Nav)\nMozilla/4.61 [en] (X11; U; Linux 2.0.36 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.0.36 i686; Nav)\nMozilla/4.61 [en] (X11; U; Linux 2.2.1 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.10 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.10 i686)\nMozilla/4.61 [en] (X11; U; 
Linux 2.2.10-ac11 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.10-ac3 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-15 i486)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-15 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-15 i586; Nav)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-15 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-15smp i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-22 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-22 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-22smp i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.5-23cl i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.6 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.7 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.7 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.8 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.2.9 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.2.9 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.3.0 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.3.10 i586)\nMozilla/4.61 [en] (X11; U; Linux 2.3.12 i686)\nMozilla/4.61 [en] (X11; U; Linux 2.3.5 i586)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n\n", "msg_date": "Sun, 22 Aug 1999 16:59:25 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "> \n> The following is from the web stats for www.PostgreSQL.org,browser report:\n> \n> 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n\nThat seems totally wrong. 
Are you saying there is only a few Linux\nfolks, and no other Unix stuff, and all the other folks are PC MS\npeople.\n\nSeems strange.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Aug 1999 23:50:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "On Mon, 23 Aug 1999, Bruce Momjian wrote:\n\n> > \n> > The following is from the web stats for www.PostgreSQL.org,browser report:\n> > \n> > 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> > 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> > 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> > 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> > 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> > 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> > 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> \n> That seems totally wrong. Are you saying there is only a few Linux\n> folks, and no other Unix stuff, and all the other folks are PC MS\n> people.\n> \n> Seems strange.\n\nnah, the results are screwed up since MicroSloth doesn't actually have\ndifferent \"versions\"...see my followup posting that provided one gentleman\nwith an example of how something like Linux dispurses because of their\n\"version a week\" release scheduale :)\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Tue, 24 Aug 1999 02:20:52 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Tangent ... you know what's scary?" } ]
[ { "msg_contents": "I work at and Advertising Firm and I recently had the misfortune of learning\nthat MSIE current supplies two-thirds of total user hits. It is painful,\nand I wept with great despair upon the hearing of it.\n\tDEJ\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:[email protected]]\n> Sent:\tThursday, August 19, 1999 9:53 AM\n> To:\[email protected]\n> Subject:\t[HACKERS] Tangent ... you know what's scary?\n> \n> \n> The following is from the web stats for www.PostgreSQL.org,browser report:\n> \n> 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> \n> The only one on top of that is the ht/dig stuff...\n> \n> Damn, we get alot of MicroSloth ppl through here, eh? :)\n> \n> Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> Systems Administrator @ hub.org \n> primary: [email protected] secondary:\n> scrappy@{freebsd|postgresql}.org \n> \n> \n> ************\n", "msg_date": "Thu, 19 Aug 1999 10:27:31 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Tangent ... you know what's scary?" }, { "msg_contents": "On Thu, 19 Aug 1999, Jackson, DeJuan wrote:\n\n> I work at and Advertising Firm and I recently had the misfortune of learning\n> that MSIE current supplies two-thirds of total user hits. 
It is painful,\n> and I wept with great despair upon the hearing of it.\n\nI could see that, but I just think its odd that we're pretty much a Unix\nRDBMS but get a load of hits from the Win world...\n\n> \tDEJ\n> \n> > -----Original Message-----\n> > From:\tThe Hermit Hacker [SMTP:[email protected]]\n> > Sent:\tThursday, August 19, 1999 9:53 AM\n> > To:\[email protected]\n> > Subject:\t[HACKERS] Tangent ... you know what's scary?\n> > \n> > \n> > The following is from the web stats for www.PostgreSQL.org,browser report:\n> > \n> > 133414: Mozilla/4.0 (compatible; MSIE 4.01; Windows 98)\n> > 125351: Mozilla/4.0 (compatible; MSIE 4.01; Windows NT)\n> > 116392: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98; DigExt)\n> > 100206: Mozilla/4.0 (compatible; MSIE 5.0; Windows NT; DigExt)\n> > 77493: Mozilla/4.0 (compatible; MSIE 4.01; Windows 95)\n> > 59355: Mozilla/4.0 (compatible; MSIE 5.0; Windows 98)\n> > 52153: Mozilla/4.51 [en] (X11; I; Linux 2.2.5-15 i686)\n> > \n> > The only one on top of that is the ht/dig stuff...\n> > \n> > Damn, we get alot of MicroSloth ppl through here, eh? :)\n> > \n> > Marc G. Fournier ICQ#7615664 IRC Nick:\n> > Scrappy\n> > Systems Administrator @ hub.org \n> > primary: [email protected] secondary:\n> > scrappy@{freebsd|postgresql}.org \n> > \n> > \n> > ************\n> \n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Thu, 19 Aug 1999 12:39:06 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Tangent ... you know what's scary?" } ]
[ { "msg_contents": "Thanks guys, for all the help. \nThe reason that I was thinking 'grammar parser' was that the spec of this\nformat may change. I was looking for something that would be quick, but\neasily extended as the grammar changed/evolved. Also my next project will\nneed the same functionality if not an extension of the same grammar.\n\tAgain thanks,\n \t-DEJ\n\n> -----Original Message-----\n> \n> On 18-Aug-99 Vince Vielhaber wrote:\n> > \n> > On 18-Aug-99 Jackson, DeJuan wrote:\n> >> I'm trying to write a ncftp_auth deamon that will utilize PostgreSQL.\n> The\n> >> base SDK is in C. I'm adapting it to C++ because I feel like it, but\n> Perl\n> >> would be a slightly steeper learning curve (sockets and all that). I'm\n> >> trying to get a conf file parser such that the login's can be in\n> multiple\n> >> database and/or on different machines.\n> > \n> > If you're into C/C++ (like I am), it's almost trivial to parse that into\n> > individual strings. I was under the impression from your first note\n> that\n> > C/C++ wasn't an option. At the minimum, think strtok(). There's also\n> > strsep() but it's not been one of my favorites. If you still need\n> actual\n> > code let me know and I can send you something.\n> \n> It's my bestst:\n> \n> int split(char delem, char *str, ... )\n> {\n> char *tmp, *_src, *t, **s;\n> int fields = 0;\n> \n> _src = str;\n> \n> va_list ap;\n> va_start(ap, str);\n> \n> if (! (tmp=new char[2048]) )\n> return -1;\n> \n> while(_src)\n> { t = _src;\n> while (*t && ((*t) != delem) ) ++t;\n> s = va_arg(ap, char **);\n> if (!s || !*t)\n> break;\n> *s = ( t-_src-1 > 0) ? 
strndup(_src,t-_src-1) : 0 ;\n> _src = t+1;\n> ++ fields;\n> }\n> \n> return fields;\n> }\n> \n> \n> ---\n> Dmitry Samersoff, [email protected], ICQ:3161705\n> http://devnull.wplus.net\n> * There will come soft rains ...\n", "msg_date": "Thu, 19 Aug 1999 10:33:20 -0500", "msg_from": "\"Jackson, DeJuan\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] [OT] flex, yacc, and bison" } ]
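[Editor's note: the C `split()` shared above has an off-by-one in its length computation — `strndup(_src, t - _src - 1)` drops the last character of each field (it should be `t - _src`), and it also never captures the field after the final delimiter. The sketch below is my own minimal Python rendering of the behavior the helper seems to aim for, not the author's code; the function name and the convention of `None` for empty fields (mirroring the C code's NULL) are assumptions for illustration.]

```python
def split_fields(delim, s, nfields):
    """Return up to nfields delimiter-separated fields of s.

    Mirrors the intent of the quoted C split(): empty fields come back
    as None, the way the C version stores a NULL pointer for them.
    """
    out = []
    for part in s.split(delim)[:nfields]:
        out.append(part or None)  # empty field -> None (C: NULL)
    return out

# With the C code's extra "-1", "user:pass" would come back as "use"/"pas";
# computing the length as t - _src gives the full fields, as here.
print(split_fields(':', 'user:pass:42', 3))  # -> ['user', 'pass', '42']
```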
[ { "msg_contents": "Hi!\n\nI'm currently fooling around with Postgres's parser, and I must admit\nsome things puzzle me completely. Please tell me what these things in\nlexer stand for:\n\n{operator}/-[\\.0-9]\t{\n\t\t\t\t\tyylval.str = pstrdup((char*)yytext);\n\t\t\t\t\treturn Op;\n\t\t\t\t}\nIs it an operator followed by mandatory '-' and (dot or digit) ?\n\nAnd what this stands for:\n\n{identifier}/{space}*-{number}\n\nWhat's the meaning of all these?\n\n-- \nLeon.\n---------\n\"This may seem a bit weird, but that's okay, because it is weird.\" -\nPerl manpage.\n\n", "msg_date": "Fri, 20 Aug 1999 02:15:21 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres' lexer " } ]
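[Editor's note: the `/` in the rules Leon quotes is flex's trailing-context operator — `r/s` matches `r` only when it is immediately followed by `s`, and `s` is left unconsumed for the next token. So `{operator}/-[\.0-9]` matches an operator only when a `-` followed by a digit or dot comes next, leaving the `-` to start a numeric literal. The sketch below shows the same idea with Python regex lookahead; the simplified `operator` and `identifier` character classes are my assumptions for illustration, the real classes in scan.l are broader.]

```python
import re

# flex: {operator}/-[\.0-9] -- an operator, but only when followed by
# "-" and then a digit or dot (the "-3" part is not consumed).
op_rule = re.compile(r'[+\-*/<>=~]+(?=-[.0-9])')

# flex: {identifier}/{space}*-{number} -- an identifier, but only when
# followed by optional whitespace, "-", and a number.
ident_rule = re.compile(r'[A-Za-z_][A-Za-z0-9_]*(?=\s*-[0-9])')

# In "a<-3" the operator token is just "<": the trailing context stops it
# from swallowing the "-", which is then free to begin the literal "-3".
print(op_rule.search('a<-3').group(0))        # -> <
print(ident_rule.search('val -42').group(0))  # -> val
```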
[ { "msg_contents": "[email protected] wrote:\n >Package: postgresql\n >Version: 6.5.1-6\n \nI'm forwarding this to the hackers list, because it presumably is an issue\nto be addressed in the code for rolling back an aborted transaction. I\nsuppose that there is nothing there to handle the case that the index file\nruns out of diskspace and that therefore the index became corrupt.\n\nMind you, I'm just guessing!\n\n >I tried inserting a record into the table 'rolo' described below. Rolo\n >had two indices, also described below. When the disk was full, the \n >insert failed with a NOTICE: transaction abort and not in progress\n >error. Stopping and starting the postmaster, clearing the disk, and\n >repeating then gave, alternately, a 'cannot insert duplicate' error (the\n >insert was not a duplicate) and a 'rolo_pkey is not a btree' error (which\n >it was, to start with). Dropping both indices and rebuilding them restored\n >functionality.\n >\n >Here are the relevant table/index definitions:\n\nThere was nothing at all out of the ordinary about them: I think they are not\nlikely to be relevant to the problem.\n\n >\n >-- System Information\n >Debian Release: 2.1\n >Kernel Version: Linux intech19 2.2.7 #1 SMP Thu May 27 11:53:43 EDT 1999 i68\n >6 unknown\n >\n >Versions of the packages postgresql depends on:\n >ii libc6 2.0.7.19981211 GNU C Library: shared libraries\n >ii libc6 2.0.7.19981211 GNU C Library: shared libraries\n >ii libncurses4 4.2-3 Shared libraries for terminal handling\n >ii libpgsql2 6.5.1-6 Shared library libpq.so.2 for PostgreSQL\n >ii debianutils 1.10 Miscellaneous utilities specific to Debia\n >n.\n \n\n-- \n Vote against SPAM: http://www.politik-digital.de/spam/\n ========================================\nOliver Elphick [email protected]\nIsle of Wight http://www.lfix.co.uk/oliver\n PGP key from public servers; key ID 32B8FAA1\n ========================================\n \"But I would not have you to be ignorant, brethren, \n concerning them 
which are asleep, that ye sorrow not, \n even as others which have no hope. For if we believe \n that Jesus died and rose again, even so them also \n which sleep in Jesus will God bring with him.\" \n I Thessalonians 4:13,14 \n\n\n", "msg_date": "Fri, 20 Aug 1999 06:08:50 +0100", "msg_from": "\"Oliver Elphick\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bug#43221: postgresql: When disk is full, insert corrupts indices" }, { "msg_contents": "Oliver Elphick wrote:\n> \n> [email protected] wrote:\n> >Package: postgresql\n> >Version: 6.5.1-6\n> \n> I'm forwarding this to the hackers list, because it presumably is an issue\n> to be addressed in the code for rolling back an aborted transaction. I\n> suppose that there is nothing there to handle the case that the index file\n> runs out of diskspace and that therefore the index became corrupt.\n\nWhenever index insert fails (for _any_ reason) index may be\ncorrupted. I hope to address this with WAL...\n\nVadim\n", "msg_date": "Fri, 20 Aug 1999 13:37:05 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#43221: postgresql: When disk is full, insert\n\tcorrupts indices" }, { "msg_contents": "Vadim Mikheev <[email protected]> writes:\n> Whenever index insert fails (for _any_ reason) index may be\n ^^^^^^^^^^^^^^^^\n> corrupted. I hope to address this with WAL...\n\nOne certainly hopes that's not true in the case of \"cannot insert\nduplicate key into a unique index\" failures !?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Aug 1999 09:21:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#43221: postgresql: When disk is full,\n\tinsert corrupts indices" }, { "msg_contents": "Tom Lane wrote:\n> \n> Vadim Mikheev <[email protected]> writes:\n> > Whenever index insert fails (for _any_ reason) index may be\n> ^^^^^^^^^^^^^^^^\n> > corrupted. 
I hope to address this with WAL...\n> \n> One certainly hopes that's not true in the case of \"cannot insert\n> duplicate key into a unique index\" failures !?\n\nOh, I meant cases when child btree page is splitted but\nparent page is not updated, sorry.\n\nBTW, duplicate check is made _before_ insertion...\n\nVadim\nP.S. I'm on vacation till Aug 30...\n", "msg_date": "Mon, 23 Aug 1999 19:12:58 +0800", "msg_from": "Vadim Mikheev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Bug#43221: postgresql: When disk is full, insert\n\tcorrupts indices" } ]
[ { "msg_contents": "Hello, in 6.5.1 I noticed this strange behavior:\nI have a database with about 50 tables and the whole amount of data in them\nis approx 30M.\nBut the size of pg_log after two weeks of using the db is 0.5G !!?\nI am using large objects - so the dump and reload is rather painful. I have a\nscript which dumps the db every week, removes all files, and runs initlocation,\ninitdb, createuser ....\nBut that's a rather strange solution.\nI am using the db for fulltext search, so there are some big tables of\nwords and big referencing tables between words and records in other\ntables.\nThese tables are deleted and rebuilt every night.\nIs there any way to tell postgres to shrink the pg_log?\n\nThank you very much.\n\nRichard Bouska\[email protected]\n\nPS: I tried to find some hints on dejanews and in the archives, but found\nnothing promising a solution; vacuum verbose analyze does not help.\n\n\n\n \n\n", "msg_date": "Fri, 20 Aug 1999 09:29:54 +0200 (CEST)", "msg_from": "Richard Bouska <[email protected]>", "msg_from_op": true, "msg_subject": "pg_log >> growing to infinity in 6.5.1 " } ]
[ { "msg_contents": "auth 30105652 subscribe pgsql-hackers [email protected]\n\n", "msg_date": "Fri, 20 Aug 1999 10:07:57 +0200 (CEST)", "msg_from": "Richard Bouska <[email protected]>", "msg_from_op": true, "msg_subject": "auth 30105652 subscribe pgsql-hackers [email protected]" } ]
[ { "msg_contents": "Hello, in 6.5.1 I noticed this strange behavior:\nI have a database with about 50 tables and the whole amount of data in them\nis approx. 30M.\nBut the size of pg_log after two weeks of using the db is 0.5G!?\nI am using large objects - so the dump and reload is rather painful. I have a\nscript which dumps the db every week, removes all files, and runs initlocation,\ninitdb, createuser ....\nBut that's a rather dangerous solution.\nI am using the db for fulltext search, so there are some big tables of\nwords and big referencing tables between words and records in other\ntables.\nThese tables are deleted and rebuilt every night.\nIs there any way to tell Postgres to shrink pg_log?\n\nThank you very much.\n\nRichard Bouska\[email protected]\n\nPS: I tried to find some hints on DejaNews and in the archives, but found\nnothing promising a solution; vacuum verbose analyze does not help.\n\n\n\n \n\n\n", "msg_date": "Fri, 20 Aug 1999 10:21:22 +0200 (CEST)", "msg_from": "Richard Bouska <[email protected]>", "msg_from_op": true, "msg_subject": "pg_log >> growing to infinity in 6.5.1" }, { "msg_contents": "Richard Bouska wrote:\n> \n> Hello, in 6.5.1 I noticed this strange behavior:\n> I have a database with about 50 tables and the whole amount of data in them\n> is approx. 30M.\n> But the size of pg_log after two weeks of using the db is 0.5G!?\n\nI have the same problem with pg_log, i.e. it has become huge. Is there anything\nthat\nI can do about this?\n--------\nRegards\nTheo\n", "msg_date": "Sun, 22 Aug 1999 05:27:38 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_log >> growing to infinity in 6.5.1" }, { "msg_contents": "Theo Kramer wrote:\n> I have the same problem with pg_log, i.e. it has become huge. Is there anything\n> that\n> I can do about this?\n\nIt has become corrupt - I had a disk overflow and can no longer see my\ndatabase tables. 
Help!!!!!!!!\n\nRegards\nTheo\n", "msg_date": "Sun, 22 Aug 1999 07:30:41 +0200 (SAST)", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_log >> growing to infinity in 6.5.1" } ]
[ { "msg_contents": ">> Hi!\nHi, Leon\n\n>> I'm currently fooling around with Postgres's parser, and I must admit\n>> some things puzzle me completely. Please tell me what these things in\n>> lexer stand for:\n>> \n>> {operator}/-[\\.0-9]\t{\n>> \t\t\t\t\tyylval.str = pstrdup((char*)yytext);\n>> \t\t\t\t\treturn Op;\n>> \t\t\t\t}\n>> Is it an operator followed by mandatory '-' and (dot or digit) ?\nI think this is used to recognize an operator followed by a minus or any\nsingle character (the period is escaped, the character can be used to denote\nthe base of the number) or a single digit.\nBut check this, I'm not totally sure.\n\n>> And what this stands for:\n>> \n>> {identifier}/{space}*-{number}\nAn identifier followed by any number of spaces, and then a minus, or a\nnumber. Again, double check this with a reference of some sorts.\n\n>> \n>> What's the meaning of all these?\n>> \nYou really should get a reference that deals with regular expressions. My\nunderstanding is (anybody feel free to comment here) that flex uses normal\nregular expressions to generate scanners.\n\n\nCheers...\n\nMikeA\n", "msg_date": "Fri, 20 Aug 1999 10:56:09 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer " }, { "msg_contents": "Ansley, Michael wrote:\n...\n> >> And what this stands for:\n> >>\n> >> {identifier}/{space}*-{number}\n> An identifier followed by any number of spaces, and then a minus, or a\n> number. Again, double check this with a reference of some sorts.\n\nWell, I studied flex manpage from top to bottom, and almost everything\nin Postgres's lexer makes sense. But these \"followed by spaces and a\nqueer minused number\" do not. 
Can someone tell me what these \nminus-prefixed single-digit numbers stand for?\n\n-- \nLeon.\n---------\n\"This may seem a bit weird, but that's okay, because it is weird.\" -\nPerl manpage.\n\n\n", "msg_date": "Fri, 20 Aug 1999 14:36:20 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" } ]
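For what it's worth, the problem these trailing-context rules work around can be shown with a toy hand-rolled scanner. The assumption here (taken from the thread's discussion, not from scan.l itself) is that numeric literals may carry a leading minus; a greedy lexer would then swallow the '-' in 'a -2' into a negative literal unless it remembers whether an operand came just before, which is roughly what rules like `{identifier}/{space}*-{number}` achieve:

```c
#include <assert.h>
#include <ctype.h>

/* Token kinds for a toy scanner over letters, digits, '-' and spaces. */
typedef enum { TOK_IDENT, TOK_NUMBER, TOK_MINUS, TOK_END } TokKind;

/* Scan one token from *p, advancing the cursor.  A purely greedy
 * scanner that always folds '-' into a following digit run would lex
 * "a -2" as IDENT, NUMBER(-2): two operands with no operator between
 * them, i.e. a syntax error.  Checking what the previous token was
 * (the effect Postgres' trailing-context rules get from flex) keeps
 * the minus separate when it follows an identifier or a number. */
static TokKind next_token(const char **p, TokKind prev)
{
    while (**p == ' ')
        (*p)++;
    if (**p == '\0')
        return TOK_END;
    if (**p == '-') {
        /* Treat '-' as part of a numeric literal only when it cannot
         * be a binary operator, i.e. when no operand precedes it. */
        if (isdigit((unsigned char) (*p)[1]) &&
            prev != TOK_IDENT && prev != TOK_NUMBER) {
            (*p)++;
            while (isdigit((unsigned char) **p))
                (*p)++;
            return TOK_NUMBER;
        }
        (*p)++;
        return TOK_MINUS;
    }
    if (isdigit((unsigned char) **p)) {
        while (isdigit((unsigned char) **p))
            (*p)++;
        return TOK_NUMBER;
    }
    while (isalpha((unsigned char) **p))
        (*p)++;
    return TOK_IDENT;
}
```

A hand lexer can consult the previous token directly; flex rules cannot, which is why scan.l resorts to variable-length trailing context to peek at what follows instead.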
[ { "msg_contents": "To further this thread:\n\nI have downloaded an example implementation of SQL which, thankfully, does\nnot use vltc's. I'm going to see where we have problems, and see if I can\nreduce the scanner rules to something that is not variable-length trailing.\n\nIf anybody with significant scanner/language/parse experience (i.e. more\nthan mine == zero) has some pearls of wisdom to add, please feel free. I'm\na little out of my depth here, and I'm a bit nervous to go changing the\nscanner.\n\n\nMikeA\n\n\n>> -----Original Message-----\n>> From: Natalya S. Makushina [mailto:[email protected]]\n>> Sent: Thursday, August 19, 1999 10:39 AM\n>> To: 'Ansley, Michael'; '[email protected]'\n>> Subject: [HACKERS] Problem with query length\n>> \n>> \n>> Hello \n>> \n>> Thanks very much for your help.\n>> \n>> I have already installed two copies of the PostgreSQL DB. One \n>> was installed from RPM, another one was compiled without \n>> RPM. The copy installed from RPM has the problem with query \n>> length; the other copy doesn't have this problem! \n>> \n>> I decided to compile it from source and try that. After \n>> compilation all is OK! Query length is 8191 now.\n>> \n>> Maybe the error is present in the RPM.\n>> \n>> From :\t\tAnsley, Michael \n>> [SMTP:[email protected]]\n>> Date :\t \t18 August 1999 11:55\n>> To :\t\t'Natalya S. Makushina'; '[email protected]'\n>> Subject :\t\tRE: [HACKERS] Problem with query length\n>> \n>> Hi, all\n>> \n>> I have found out what the problem is, although not (yet) the \n>> solution. \n>> \n>> Executive summary:\n>> ------------------\n>> The scan.l code is not flexing as intended. This means \n>> that, for most\n>> production installations, the max token size is around 64kB.\n>> \n>> Technical summary:\n>> ------------------\n>> The problem is that scan.l is compiling to scan.c with \n>> YY_USES_REJECT being\n>> defined. 
When YY_USES_REJECT is defined, the token buffer is NOT\n>> expandable, and the parser will fail if expansion is \n>> attempted. However,\n>> YY_USES_REJECT should not be defined, and I'm trying to work \n>> out why it is.\n>> I have posted to the flex mailing list, and expect a reply \n>> within the next\n>> day or so.\n>> \n>> The bottom line:\n>> ------------------\n>> The token limit seems to be effectively the size of \n>> YY_BUF_SIZE in scan.l,\n>> until I submit a patch which should make it unlimited. \n>> \n>> \n>> MikeA\n>> \n>> >> -----Original Message-----\n>> >> From: Natalya S. Makushina [mailto:[email protected]]\n>> >> Sent: Tuesday, August 17, 1999 3:15 PM\n>> >> To: '[email protected]'\n>> >> Subject: [HACKERS] Problem with query length\n>> >> \n>> >> \n>> >> -------------------------------------------------------------\n>> >> ----------------------------------------------------------------\n>> >> I have posted this mail to psql-general. But i didn't get \n>> >> any answer yet.\n>> >> -------------------------------------------------------------\n>> >> ----------------------------------------------------------------\n>> >> \n>> >> When i had tried to insert into text field text (length \n>> >> about 4000 chars), the backend have crashed with status 139. \n>> >> This error is happened when the query length ( SQL query) is \n>> >> more than 4095 chars. I am using PostgreSQL 6.4.2 on Linux.\n>> >> \n>> >> My questions are:\n>> >> 1. Is there problem with text field or with length of SQL query?\n>> >> 2. Would postgresql have any limits for SQL query length?\n>> >> I checked the archives but only found references to the 8K \n>> >> limit. 
Any help would be greatly appreciated.\n>> >> Thanks for help\n>> >> \t\t\t\tNatalya Makushina \n>> >> \t\t\t\[email protected]\n>> >> \n>> >> \n>> \n>> ************\n>> Check out \"PostgreSQL Wearables\" @ http://www.pgsql.com\n>> \n", "msg_date": "Fri, 20 Aug 1999 11:02:47 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Problem with query length" } ]
[ { "msg_contents": "I�ve a problem with CVS to get the source, when trying to login I get the\nbash-2.02$ cvs -d :pserver:[email protected]:/usr/local/CVSROOT login\n(Logging in to [email protected])\nCVS password:\ncvs [login aborted]: authorization failed: server postgresql.org rejected\naccess\n\n\nI attach the files modified for Statement Triggers (StmtTrig) in postgreSQL,\nif somebody could make the patch and publish I�ll be in debt.\n\nThanks.\n\nNotes of use:\n\nKeep in mind that when creating a StmtTrig the functions executed get the\ntuples (NEW/OLD if PL, tg_trigtuple and tg_newtuple in C) set to NULL.\n\nIf there are statement and row triggers defined for the same table and the\nsame event:\na) if the event it�s before then it�s executed statement prior to any row\ntrigger\nb) if the event it�s afte then are executed all row prior to statement\ntrigger\n\nTODO triggers list:\n->Include order to triggers following the recomendations of SQL3.\n->Modify PL/SQL to access NEW/OLD table.", "msg_date": "Fri, 20 Aug 1999 11:26:16 +0200", "msg_from": "\"F.J. Cuberos\" <[email protected]>", "msg_from_op": true, "msg_subject": "Statement Triggers Patch" } ]
[ { "msg_contents": ">> >> Is it an operator followed by mandatory '-' and (dot or digit) ?\n>> I think this is used to recognize an operator followed by a minus or any\n>> single character (the period is escaped, the character can be used to\ndenote\n>> the base of the number) or a single digit.\nSorry, make that an operator followed by a minus AND then any single\ncharacter or a single digit.\n\nMikeA\n", "msg_date": "Fri, 20 Aug 1999 11:26:45 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer " } ]
[ { "msg_contents": "At the very least, couldn't vc_vpinsert() double\nvpl->vpl_num_pages whenever vpl->vpl_num_pages\nneeds to be expanded instead of expanding linearly\nby PG_NPAGEDESC, or by the original 100?\n\nMike Mascari\n([email protected])\n\n--- Hiroshi Inoue <[email protected]> wrote:\n> Hi all,\n> \n> I found the following comment in utils/mmgr/aset.c.\n> The high memory usage of big vacuum is probably\n> caused by this\n> change.\n> Calling repalloc() many times with its size\n> parameter increasing\n> would need large amount of memory.\n> \n> Should vacuum call realloc() directly ?\n> Or should AllocSet..() be changed ?\n> \n> Comments ?\n> \n> * NOTE:\n> * This is a new (Feb. 05, 1999) implementation\n> of the allocation set\n> * routines. AllocSet...() does not use\n> OrderedSet...() any more.\n> * Instead it manages allocations in a block\n> pool by itself, combining\n> * many small allocations in a few bigger\n> blocks. AllocSetFree() does\n> * never free() memory really. It just add's\n> the free'd area to some\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> * list for later reuse by AllocSetAlloc(). All\n> memory blocks are\n> free()'d\n> \n> Regards.\n> \n> Hiroshi Inoue\n> [email protected]\n\n> > *** vacuum.c.orig\tSat Jul 3 09:32:40 1999\n> > --- vacuum.c\tThu Aug 19 17:34:18 1999\n> > ***************\n> > *** 2519,2530 ****\n> > static void\n> > vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n> > {\n> >\n> > \t/* allocate a VPageDescr entry if needed */\n> > \tif (vpl->vpl_num_pages == 0)\n> > ! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(100\n> *\n> > sizeof(VPageDescr));\n> > ! \telse if (vpl->vpl_num_pages % 100 == 0)\n> > ! 
\t\tvpl->vpl_pagedesc = (VPageDescr *)\n> > repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages +\n> 100) *\n> > sizeof(VPageDescr));\n> > \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n> > \t(vpl->vpl_num_pages)++;\n> >\n> > --- 2519,2531 ----\n> > static void\n> > vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n> > {\n> > + #define PG_NPAGEDESC 1000\n> >\n> > \t/* allocate a VPageDescr entry if needed */\n> > \tif (vpl->vpl_num_pages == 0)\n> > ! \t\tvpl->vpl_pagedesc = (VPageDescr *)\n> > palloc(PG_NPAGEDESC * sizeof(VPageDescr));\n> > ! \telse if (vpl->vpl_num_pages % PG_NPAGEDESC ==\n> 0)\n> > ! \t\tvpl->vpl_pagedesc = (VPageDescr *)\n> > repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages +\n> PG_NPAGEDESC) *\n> > sizeof(VPageDescr));\n> > \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n> > \t(vpl->vpl_num_pages)++;\n> >\n\n__________________________________________________\nDo You Yahoo!?\nBid and sell for free at http://auctions.yahoo.com\n\n", "msg_date": "Fri, 20 Aug 1999 02:41:19 -0700 (PDT)", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] vacuum process size " }, { "msg_contents": "Mike, \n\n> At the very least, couldn't vc_vpinsert() double\n> vpl->vpl_num_pages whenever vpl->vpl_num_pages\n> needs to be expanded instead of expanding linearly\n> by PG_NPAGEDESC, or by the original 100?\n\nI have tested your idea and found even more improved memory usage\n(86MB vs. 43MB). Standard vacuum consumes as much as 478MB memory with\ndeleting 5000000 tuples that would not be acceptable for most\nconfigurations. I think we should fix this as soon as possible. 
If\nthere's no objection, I will commit included patches to the stable\ntree (seems Tom has more aggressive idea, so I'll leave the current\ntree as it is).\n---\nTatsuo Ishii\n-------------------------------------------------------------------\n*** vacuum.c.orig\tSat Jul 3 09:32:40 1999\n--- vacuum.c\tTue Aug 24 10:08:43 1999\n***************\n*** 2519,2530 ****\n static void\n vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n {\n \n \t/* allocate a VPageDescr entry if needed */\n \tif (vpl->vpl_num_pages == 0)\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(100 * sizeof(VPageDescr));\n! \telse if (vpl->vpl_num_pages % 100 == 0)\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, (vpl->vpl_num_pages + 100) * sizeof(VPageDescr));\n \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n \t(vpl->vpl_num_pages)++;\n \n--- 2519,2538 ----\n static void\n vc_vpinsert(VPageList vpl, VPageDescr vpnew)\n {\n+ #define PG_NPAGEDESC 1024\n+ static uint num_pages;\n \n \t/* allocate a VPageDescr entry if needed */\n \tif (vpl->vpl_num_pages == 0)\n! \t{\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) palloc(PG_NPAGEDESC * sizeof(VPageDescr));\n! \t\tnum_pages = PG_NPAGEDESC;\n! \t}\n! \telse if (vpl->vpl_num_pages >= num_pages)\n! \t{\n! \t\tnum_pages *= 2;\n! \t\tvpl->vpl_pagedesc = (VPageDescr *) repalloc(vpl->vpl_pagedesc, num_pages * sizeof(VPageDescr));\n! 
\t}\n \tvpl->vpl_pagedesc[vpl->vpl_num_pages] = vpnew;\n \t(vpl->vpl_num_pages)++;\n \n", "msg_date": "Tue, 24 Aug 1999 10:12:37 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "> At the very least, couldn't vc_vpinsert() double\n> vpl->vpl_num_pages whenever vpl->vpl_num_pages\n> needs to be expanded instead of expanding linearly\n> by PG_NPAGEDESC, or by the original 100?\n\nThis seems like a good idea.\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 23 Aug 1999 23:59:51 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I have tested your idea and found even more improved memory usage\n> (86MB vs. 43MB). Standard vacuum consumes as much as 478MB memory with\n> deleting 5000000 tuples that would not be acceptable for most\n> configurations. I think we should fix this as soon as possible. If\n> there's no objection, I will commit included patches to the stable\n> tree (seems Tom has more aggressive idea, so I'll leave the current\n> tree as it is).\n\nNo, please make the change in current as well. I was thinking about\ntweaking aset.c to be smarter about releasing large chunks, but in any\ncase having the doubling behavior at the request point will be a big\nimprovement.\n\nI do not like your patch as given, however. 
By using a static variable\nyou are assuming that there is only one active VPageList at a time.\nIt looks to me like there are at least two --- and there is no reason\nto think they'd be the same size.\n\nYou need to add a num_pages field to the VPageList struct, not use\na static.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 1999 11:05:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "I have been looking some more at the vacuum-process-size issue, and\nI am having a hard time understanding why the VPageList data structure\nis the critical one. As far as I can see, there should be at most one\npointer in it for each disk page of the relation. OK, you were\nvacuuming a table with something like a quarter million pages, so\nthe end size of the VPageList would have been something like a megabyte,\nand given the inefficient usage of repalloc() in the original code,\na lot more space than that would have been wasted as the list grew.\nSo doubling the array size at each step is a good change.\n\nBut there are a lot more tuples than pages in most relations.\n\nI see two lists with per-tuple data in vacuum.c, \"vtlinks\" in\nvc_scanheap and \"vtmove\" in vc_rpfheap, that are both being grown with\nessentially the same technique of repalloc() after every N entries.\nI'm not entirely clear on how many tuples get put into each of these\nlists, but it sure seems like in ordinary circumstances they'd be much\nbigger space hogs than any of the three VPageList lists.\n\nI recommend going to a doubling approach for each of these lists as\nwell as for VPageList.\n\nThere is a fourth usage of repalloc with the same method, for \"ioid\"\nin vc_getindices. 
This only gets one entry per index on the current\nrelation, so it's unlikely to be worth changing on its own merit.\nBut it might be worth building a single subroutine that expands a\ngrowable list of entries (taking sizeof() each entry as a parameter)\nand applying it in all four places.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 1999 12:20:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "Then <[email protected]> spoke up and said:\n> So doubling the array size at each step is a good change.\n> \n> But there are a lot more tuples than pages in most relations.\n> \n> I see two lists with per-tuple data in vacuum.c, \"vtlinks\" in\n> vc_scanheap and \"vtmove\" in vc_rpfheap, that are both being grown with\n> essentially the same technique of repalloc() after every N entries.\n> I'm not entirely clear on how many tuples get put into each of these\n> lists, but it sure seems like in ordinary circumstances they'd be much\n> bigger space hogs than any of the three VPageList lists.\n> \n> I recommend going to a doubling approach for each of these lists as\n> well as for VPageList.\n\nQuestion: is there reliable information in pg_statistics (or other\nsystem tables) which can be used to make a reasonable estimate for the\nsizes of these structures before initial allocation? Certainly the\nfile size can be gotten from a stat (some portability issues, sparse\nfile issues).\n\n\n-- \n=====================================================================\n| JAVA must have been developed in the wilds of West Virginia. |\n| After all, why else would it support only single inheritance?? |\n=====================================================================\n| Finger [email protected] for my public key. 
|\n=====================================================================", "msg_date": "24 Aug 1999 13:01:12 -0400", "msg_from": "Brian E Gallew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": ">> If there's no objection, I will commit included patches to the stable\n>> tree (seems Tom has more aggressive idea, so I'll leave the current\n>> tree as it is).\n\n> No, please make the change in current as well. I was thinking about\n> tweaking aset.c to be smarter about releasing large chunks, but in any\n> case having the doubling behavior at the request point will be a big\n> improvement.\n\nI have just committed changes into current (but not REL6_5) to make\naset.c smarter about giving back memory from large requests. Basically,\nfor chunk sizes >= ALLOC_BIGCHUNK_LIMIT, pfree() does an actual free()\nand repalloc() does an actual realloc(). There is no change in behavior\nfor smaller chunk sizes. This should cap the amount of space that can\nbe wasted by aset.c while repalloc'ing a chunk larger and larger.\n\nFor lack of a better idea I set ALLOC_BIGCHUNK_LIMIT to 64K. I don't\nthink it'd pay to make it very small, but I don't really know whether\nthis is a good choice or not.\n\nIt would still be a good idea to fix vacuum.c to double its repalloc\nrequests at each step, but Tatsuo was already working on that part\nso I won't joggle his elbow...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 1999 16:21:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "Brian E Gallew <[email protected]> writes:\n> Question: is there reliable information in pg_statistics (or other\n> system tables) which can be used to make a reasonable estimate for the\n> sizes of these structures before initial allocation? 
Certainly the\n> file size can be gotten from a stat (some portability issues, sparse\n> file issues).\n\npg_statistics would tell you what was found out by the last vacuum on\nthe table, if there ever was one. Dunno how reliable you want to\nconsider that to be. stat() would provide up-to-date info, but the\nproblem with it is that the total file size might be a drastic\noverestimate of the number of pages that vacuum needs to put in these\nlists. There's not really much chance of getting a useful estimate from\nthe last vacuum run, either. AFAICT what we are interested in is the\nnumber of pages containing dead tuples, and by definition all of those\ntuples will have died since the last vacuum...\n\nOn the whole, just fixing the memory management seems like the best bet.\nWe know how to do that, and it may benefit other things besides vacuum.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 1999 16:51:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": ">I have just committed changes into current (but not REL6_5) to make\n\nJust for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in the\nCVS respository. I thought that REL6_5_PATCHES is the Tag for the 6.5\nstatble tree and would eventually become 6.5.2. If so, what is the\nREL6_5 Tag? Or I totally miss the point?\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 25 Aug 1999 09:08:29 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "> >I have just committed changes into current (but not REL6_5) to make\n> \n> Just for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in the\n> CVS respository. I thought that REL6_5_PATCHES is the Tag for the 6.5\n> statble tree and would eventually become 6.5.2. If so, what is the\n> REL6_5 Tag? 
Or I totally miss the point?\n\nREL6_5 was a mistake.\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 24 Aug 1999 20:22:16 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]]\n> Sent: Wednesday, August 25, 1999 1:20 AM\n> To: [email protected]\n> Cc: Mike Mascari; Hiroshi Inoue; [email protected]\n> Subject: Re: [HACKERS] vacuum process size \n> \n> \n> I have been looking some more at the vacuum-process-size issue, and\n> I am having a hard time understanding why the VPageList data structure\n> is the critical one. As far as I can see, there should be at most one\n> pointer in it for each disk page of the relation. OK, you were\n> vacuuming a table with something like a quarter million pages, so\n> the end size of the VPageList would have been something like a megabyte,\n> and given the inefficient usage of repalloc() in the original code,\n> a lot more space than that would have been wasted as the list grew.\n> So doubling the array size at each step is a good change.\n> \n> But there are a lot more tuples than pages in most relations.\n> \n> I see two lists with per-tuple data in vacuum.c, \"vtlinks\" in\n> vc_scanheap and \"vtmove\" in vc_rpfheap, that are both being grown with\n> essentially the same technique of repalloc() after every N entries.\n> I'm not entirely clear on how many tuples get put into each of these\n> lists, but it sure seems like in ordinary circumstances they'd be much\n> bigger space hogs than any of the three VPageList lists.\n>\n\nAFAIK,both vtlinks and vtmove are NULL if vacuum is executed\nwithout concurrent transactions.\nThey won't be so big unless loooong concurrent transactions 
exist.\n \nRegards.\n\nHiroshi Inoue\[email protected]\n", "msg_date": "Wed, 25 Aug 1999 10:11:42 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] vacuum process size " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Just for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in the\n> CVS respository. I thought that REL6_5_PATCHES is the Tag for the 6.5\n> statble tree and would eventually become 6.5.2. If so, what is the\n> REL6_5 Tag? Or I totally miss the point?\n\nRight, REL6_5_PATCHES is the 6.5.* branch. REL6_5 is just a tag ---\nthat is, it's effectively a frozen snapshot of the 6.5 release,\nnot an evolvable branch.\n\nI am not sure if Marc intends to continue this naming convention\nin future, or if it was just a mistake to create REL6_5 as a tag\nnot a branch. I don't see a whole lot of use for the frozen tag\nmyself...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Aug 1999 09:46:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > I have tested your idea and found even more improved memory usage\n> > (86MB vs. 43MB). Standard vacuum consumes as much as 478MB memory with\n> > deleting 5000000 tuples that would not be acceptable for most\n> > configurations. I think we should fix this as soon as possible. If\n> > there's no objection, I will commit included patches to the stable\n> > tree (seems Tom has more aggressive idea, so I'll leave the current\n> > tree as it is).\n> \n> No, please make the change in current as well. I was thinking about\n> tweaking aset.c to be smarter about releasing large chunks, but in any\n> case having the doubling behavior at the request point will be a big\n> improvement.\n> \n> I do not like your patch as given, however. 
By using a static variable\n> you are assuming that there is only one active VPageList at a time.\n> It looks to me like there are at least two --- and there is no reason\n> to think they'd be the same size.\n> \n> You need to add a num_pages field to the VPageList struct, not use\n> a static.\n\nGood point. I have committed new patches that do not use static\nvariables anymore to both REL6_5_PATCHES and current tree.\n\nModified files: backend/commands/vacuum.c and\ninclude/commands/vacuum.h.\n---\nTatsuo Ishii\n", "msg_date": "Wed, 25 Aug 1999 22:50:39 +0900", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " }, { "msg_contents": "On Wed, 25 Aug 1999, Tom Lane wrote:\n\n> Tatsuo Ishii <[email protected]> writes:\n> > Just for a confirmation: I see REL6_5_PATCHES and REL6_5 Tag in the\n> > CVS respository. I thought that REL6_5_PATCHES is the Tag for the 6.5\n> > statble tree and would eventually become 6.5.2. If so, what is the\n> > REL6_5 Tag? Or I totally miss the point?\n> \n> Right, REL6_5_PATCHES is the 6.5.* branch. REL6_5 is just a tag ---\n> that is, it's effectively a frozen snapshot of the 6.5 release,\n> not an evolvable branch.\n> \n> I am not sure if Marc intends to continue this naming convention\n> in future, or if it was just a mistake to create REL6_5 as a tag\n> not a branch. I don't see a whole lot of use for the frozen tag\n> myself...\n\nI like the frozen tag myself, since, in the future, if we need to create a\nquick tar ball of what things looked like at that release (ie.\nv6.5->v6.5.2 patch?), its easy to generate...\n\nActually, come to think of it...am going to try that out now...report back\nin a bit...\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org \nprimary: [email protected] secondary: scrappy@{freebsd|postgresql}.org \n\n", "msg_date": "Wed, 25 Aug 1999 11:05:54 -0300 (ADT)", "msg_from": "The Hermit Hacker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] vacuum process size " } ]
[ { "msg_contents": "Leon, I see that you have been running into the vltc problem ;-) I just run\na flex -p, and went to line 314.\n\nMikeA\n\n>> -----Original Message-----\n>> From: Ansley, Michael [mailto:[email protected]]\n>> Sent: Friday, August 20, 1999 11:27 AM\n>> To: 'Leon'; hackers\n>> Subject: RE: [HACKERS] Postgres' lexer \n>> \n>> \n>> >> >> Is it an operator followed by mandatory '-' and (dot \n>> or digit) ?\n>> >> I think this is used to recognize an operator followed by \n>> a minus or any\n>> >> single character (the period is escaped, the character \n>> can be used to\n>> denote\n>> >> the base of the number) or a single digit.\n>> Sorry, make that an operator followed by a minus AND then any single\n>> character or a single digit.\n>> \n>> MikeA\n>> \n>> ************\n>> \n", "msg_date": "Fri, 20 Aug 1999 12:32:00 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer " }, { "msg_contents": "Ansley, Michael wrote:\n> \n> Leon, I see that you have been running into the vltc problem ;-) I just run\n> a flex -p, and went to line 314.\n\nI got it. It is done to prevent minus from sticking to number in\nexpressions like 'a -2'. Dirty, but it works.\n\n-- \nLeon.\n---------\n\"This may seem a bit weird, but that's okay, because it is weird.\" -\nPerl manpage.\n\n", "msg_date": "Fri, 20 Aug 1999 18:33:37 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" } ]
[ { "msg_contents": "I figured it out. The first thing the code does is set the state to xm\n(BEGIN (xm)). If you look in the comments at the top, Tom Lane put these in\nto deal with numeric strings with embedded minuses. Tom, can you give us a\nrun-down of what the problem was that required this stuff. Perhaps if we\ncan find another way around it, we can reduce the vltc's\n\nThanks...\n\nMikeA\n\n>> -----Original Message-----\n>> From: Leon [mailto:[email protected]]\n>> Sent: Friday, August 20, 1999 11:36 AM\n>> To: hackers\n>> Subject: Re: [HACKERS] Postgres' lexer\n>> \n>> \n>> Ansley, Michael wrote:\n>> ...\n>> > >> And what this stands for:\n>> > >>\n>> > >> {identifier}/{space}*-{number}\n>> > An identifier followed by any number of spaces, and then a \n>> minus, or a\n>> > number. Again, double check this with a reference of some sorts.\n>> \n>> Well, I studied flex manpage from top to bottom, and almost \n>> everything\n>> in Postgres's lexer makes sense. But these \"followed by spaces and a\n>> queer minused number\" do not. Can someone tell me what do these \n>> minused single - digit numbers stand for?\n>> \n>> -- \n>> Leon.\n>> ---------\n>> \"This may seem a bit weird, but that's okay, because it is weird.\" -\n>> Perl manpage.\n>> \n>> \n>> \n>> ************\n>> \n", "msg_date": "Fri, 20 Aug 1999 12:36:53 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer" } ]
[ { "msg_contents": "Sorry, Tom, I saw the tgl initials, and assumed it was you, before realising\nthat there are a couple of people who could be identified by those initials.\ntgl, please stand up ;-)\n\nMikeA\n\n>> -----Original Message-----\n>> From: Ansley, Michael \n>> Sent: Friday, August 20, 1999 12:37 PM\n>> To: 'Leon'; hackers; 'Tom Lane'\n>> Subject: RE: [HACKERS] Postgres' lexer\n>> \n>> \n>> I figured it out.  The first thing the code does is set the \n>> state to xm (BEGIN (xm)).  If you look in the comments at \n>> the top, Tom Lane put these in to deal with numeric strings \n>> with embedded minuses.  Tom, can you give us a run-down of \n>> what the problem was that required this stuff.  Perhaps if \n>> we can find another way around it, we can reduce the vltc's\n>> \n>> Thanks...\n>> \n>> MikeA\n>> \n>> >> -----Original Message-----\n>> >> From: Leon [mailto:[email protected]]\n>> >> Sent: Friday, August 20, 1999 11:36 AM\n>> >> To: hackers\n>> >> Subject: Re: [HACKERS] Postgres' lexer\n>> >> \n>> >> \n>> >> Ansley, Michael wrote:\n>> >> ...\n>> >> > >> And what this stands for:\n>> >> > >>\n>> >> > >> {identifier}/{space}*-{number}\n>> >> > An identifier followed by any number of spaces, and then a \n>> >> minus, or a\n>> >> > number.  Again, double check this with a reference of \n>> some sorts.\n>> >> \n>> >> Well, I studied flex manpage from top to bottom, and almost \n>> >> everything\n>> >> in Postgres's lexer makes sense. But these \"followed by \n>> spaces and a\n>> >> queer minused number\" do not. 
Can someone tell me what do these \n>> >> minused single - digit numbers stand for?\n>> >> \n>> >> -- \n>> >> Leon.\n>> >> ---------\n>> >> \"This may seem a bit weird, but that's okay, because it \n>> is weird.\" -\n>> >> Perl manpage.\n>> >> \n>> >> \n>> >> \n>> >> ************\n>> >> \n>> \n", "msg_date": "Fri, 20 Aug 1999 12:38:37 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer" }, { "msg_contents": "\"Ansley, Michael\" <[email protected]> writes:\n> Sorry, Tom, I saw the tgl initals, and assumed it was you, before realising\n> that there are a couple of people who could be identified by those initials.\n\nAll of those are Lockhart. I recall having done something with the\nstring-constant lexing, but I have no idea what this <xm> is all about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Aug 1999 12:18:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Ansley, Michael\" <[email protected]> writes:\n> > Sorry, Tom, I saw the tgl initals, and assumed it was you, before realising\n> > that there are a couple of people who could be identified by those initials.\n> \n> All of those are Lockhart. I recall having done something with the\n> string-constant lexing, but I have no idea what this <xm> is all about.\n\nBTW, one more stu-u-u-upid question: why unary minus needs high \nprecedence? Seems that all works well without any specified\nprecedence for uminus ;) - it is only a remark.\n\n-- \nLeon.\n---------\n\"This may seem a bit weird, but that's okay, because it is weird.\" -\nPerl manpage.\n\n", "msg_date": "Sat, 21 Aug 1999 02:11:51 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" } ]
[ { "msg_contents": ">> > \n>> > Leon, I see that you have been running into the vltc \n>> problem ;-) I just run\n>> > a flex -p, and went to line 314.\n>> \n>> I got it. It is done to prevent minus from sticking to number in\n>> expressions like 'a -2'. Dirty, but it works.\n\nDirty, but it also breaks the scanner.\n\nMikeA\n", "msg_date": "Fri, 20 Aug 1999 16:02:13 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer" } ]
[ { "msg_contents": "Leon, if you manage to find a replacement for this, please let me know.\nI'll probably only pick it up after the weekend.\n\nI think that we need to find another way to tokenise the minus. First of\nall, though, how is the parser supposed to tell whether this:\na -2\nmeans this:\n(a - 2)\nor this:\na (-2)\n\ni.e.: does the unary - operator take precedence over the binary - operator\nor not? Is there even a difference. If the parser runs into this: 'a -2',\nperhaps we could replace it with 'a + (-2)' instead.\n\nHow does a C compiler tokenize this? Or some other standard SQL parser?\n\nMikeA\n\n>> -----Original Message-----\n>> From: Leon [mailto:[email protected]]\n>> Sent: Friday, August 20, 1999 3:34 PM\n>> To: hackers\n>> Subject: Re: [HACKERS] Postgres' lexer\n>> \n>> \n>> Ansley, Michael wrote:\n>> > \n>> > Leon, I see that you have been running into the vltc \n>> problem ;-) I just run\n>> > a flex -p, and went to line 314.\n>> \n>> I got it. It is done to prevent minus from sticking to number in\n>> expressions like 'a -2'. Dirty, but it works.\n>> \n>> -- \n>> Leon.\n>> ---------\n>> \"This may seem a bit weird, but that's okay, because it is weird.\" -\n>> Perl manpage.\n>> \n>> \n>> ************\n>> \n", "msg_date": "Fri, 20 Aug 1999 16:34:30 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer" }, { "msg_contents": " I think that we need to find another way to tokenise the minus. First of\n all, though, how is the parser supposed to tell whether this:\n a -2\n means this:\n (a - 2)\n or this:\n a (-2)\n\n i.e.: does the unary - operator take precedence over the binary - operator\n or not? Is there even a difference. If the parser runs into this: 'a -2',\n perhaps we could replace it with 'a + (-2)' instead.\n\n How does a C compiler tokenize this? 
Or some other standard SQL parser?\n\nFor the C compiler a -2 can only mean (a - 2); a (-2) must explicitly\nbe a function call and isn't generated by the compiler from a -2.\n\nI think the question for SQL is, does the language allow an ambiguity\nhere? If not, wouldn't it be much smarter to keep the minus sign as\nits own token and deal with the semantics in the parser?\n\nCheers,\nBrook\n", "msg_date": "Fri, 20 Aug 1999 08:57:31 -0600 (MDT)", "msg_from": "Brook Milligan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" }, { "msg_contents": "Brook Milligan <[email protected]> writes:\n> I think the question for SQL is, does the language allow an ambiguity\n> here? If not, wouldn't it be much smarter to keep the minus sign as\n> its own token and deal with the semantics in the parser?\n\nI don't see a good reason to tokenize the '-' as part of the number\neither. I think that someone may have hacked the lexer to try to\nmerge unary minus into numeric constants, so that in an expression like\n\tWHERE field < -2\nthe -2 would be treated as a constant rather than an expression\ninvolving application of unary minus --- which is important because\nthe optimizer is too dumb to optimize the query if it looks like an\nexpression.\n\nHowever, trying to make that happen at lex time is just silly.\nThe lexer doesn't have enough context to handle all the cases\nanyway. We've currently got code in the grammar to do the same\nreduction. (IMHO that's still too early, and it ought to be done\npost-rewriter as part of a general-purpose constant expression\nreducer; will get around to that someday ;-).)\n\nSo it seems to me that we should just rip *all* this cruft out of the\nlexer, and always return '-' as a separate token, never as part of\na number. 
(*) Then we wouldn't need this lookahead feature.\n\nBut it'd be good to get an opinion from the other tgl first ;-).\nI'm just a kibitzer when it comes to the lex/yacc stuff.\n\n\t\t\tregards, tom lane\n\n(*) not counting float exponents, eg \"1.234e-56\" of course.\n", "msg_date": "Fri, 20 Aug 1999 12:31:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer " }, { "msg_contents": "Ansley, Michael wrote:\n> \n> Leon, if you manage to find a replacement for this, please let me know.\n> I'll probably only pick it up after the weekend.\n> \n> I think that we need to find another way to tokenise the minus. First of\n> all, though, how is the parser supposed to tell whether this:\n> a -2\n> means this:\n> (a - 2)\n> or this:\n> a (-2)\n\nI think that the current behavior is ok - it is what we would expect\nfrom expressions like 'a -2'.\n\nI have produced a patch to cleanup the code. It works due to the\nfact that unary minus gets processed in doNegate() in parser anyway,\nand it is by no way lexer's job to do grammatical parsing - i.e.\ndeciding if operator is to be treated as binary or unary. \n\nI ran regression tests, everything seems to be ok. It is my first\ndiff/patch experience in *NIX, so take it with mercy :) But it \nseems to be correct. It is to be applied against 6.5.0 (I have\nnot upgraded to 6.5.1 yet, but hope lexer hasn't changed since\nthen.) The patch mainly contains nuked code. The only thing added\nis my short comment :)\n\nHave I done some right thing? :)\n\n-- \nLeon.\n---------\n\"This may seem a bit weird, but that's okay, because it is weird.\" -\nPerl manpage.", "msg_date": "Fri, 20 Aug 1999 23:28:17 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" }, { "msg_contents": "> But it'd be good to get an opinion from the other tgl first ;-).\n\nSadly, the former \"tgl\" ;)\n\nSorry, I was away on vacation. 
I've waded through ~300 mail messages\nalready, but have ~700 to go, so I apologize if I've missed some more\ndevelopments.\n\nI added the <xm> exclusive state to accommodate the possibility of a\nunary minus. The change was provoked by Vadim's addition of CREATE\nSEQUENCE, which should allow negative numbers for some arguments. But\nthis just uncovered the tip of the general problem...\n\nThere are several cases which need to be handled (I'm doing this from\nmemory, so may miss a few):\n\no Positive and negative numbers as standalone arguments, with and\nwithout spaces between the \"-\" and the digits.\n\no Positive and negative numbers as first arguments to binary\noperators, with and without spaces at all possible places.\n\no Positive and negative numbers as second arguments to binary\noperators, or as arguments to unary operators.\n\no Positive and negative numbers in the presence of operators\ncontaining minus signs, including a trailing minus sign where\npossible.\n\n'taint easy to do it completely right. Perhaps trying to do less in\nthe scanner is the right thing to do, but istm that it may put\nrestrictions on the grammar which are not currently there. Not a good\ntrade for a longer query length...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 31 Aug 1999 04:09:37 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" }, { "msg_contents": "Oh, there isn't really much change. The minus is passed standalone\nas it always was. 
The only thing is that currently in numbers with unary\nminus it gets coerced not in lexer, but in parser in doNegate().\nI wonder why that hasn't been done earlier - especially considering\nthat doNegate() existed long before my, hmm, fiddling.\n\nTo tell the truth, there is some ambiguity in various operators.\nThat ambiguity is stemming from Postgres's type-extension system.\nConsider this: SELECT 3+-2; What would you expect from that? I\npersonally would expect the result of 1. But it produces an error,\nbecause '+-' is treated as some user-defined operator, which is \nnot true. Such innocent expression as SELECT --2 puts Postgres in\ndaze - it (psql) waits for 'completion' of such query (it treats\nsymbols '--' as comment start :-) See? There are more pitfalls\nbeside minus coercing :-) \n\nThis all was done to clean up the code and 'straighten' the parser.\nThere was a performance breaker, officially called AFAIR 'variable\ntrailing context'.\n\nThomas Lockhart wrote:\n\n> \n> There are several cases which need to be handled (I'm doing this from\n> memory, so may miss a few):\n> \n> o Positive and negative numbers as standalone arguments, with and\n> without spaces between the \"-\" and the digits.\n> \n> o Positive and negative numbers as first arguments to binary\n> operators, with and without spaces at all possible places.\n> \n> o Positive and negative numbers as second arguments to binary\n> operators, or as arguments to unary operators.\n> \n> o Positive and negative numbers in the presence of operators\n> containing minus signs, including a trailing minus sign where\n> possible.\n> \n> 'taint easy to do it completely right. Perhaps trying to do less in\n> the scanner is the right thing to do, but istm that it may put\n> restrictions on the grammar which are not currently there. 
Not a good\n> trade for a longer query length...\n\n\n-- \nLeon.\n\n", "msg_date": "Tue, 31 Aug 1999 18:30:38 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n> I added the <xm> exclusive state to accomodate the possibility of a\n> unary minus. The change was provoked by Vadim's addition of CREATE\n> SEQUENCE, which should allow negative numbers for some arguments. But\n> this just uncovered the tip of the general problem...\n\nIt seems awfully hard and dangerous to try to identify unary minus in\nthe lexer. The grammar at least has enough knowledge to recognize that\na minus *is* unary and not binary. Looking into gram.y, I find that the\nCREATE SEQUENCE productions handle collapsing unary minus all by\nthemselves! So in that particular case, there is still no need for the\nlexer to do it. AFAICT in a quick look through gram.y, there are no\nplaces where unary minus is recognized that gram.y won't try to collapse\nit.\n\nIn short, I still think that the whole mess ought to come out of the\nlexer...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 31 Aug 1999 09:57:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer " }, { "msg_contents": "> > I added the <xm> exclusive state to accomodate the possibility of a\n> > unary minus. The change was provoked by Vadim's addition of CREATE\n> > SEQUENCE, which should allow negative numbers for some arguments. But\n> > this just uncovered the tip of the general problem...\n> It seems awfully hard and dangerous to try to identify unary minus in\n> the lexer. The grammar at least has enough knowledge to recognize that\n> a minus *is* unary and not binary. Looking into gram.y, I find that the\n> CREATE SEQUENCE productions handle collapsing unary minus all by\n> themselves! 
So in that particular case, there is still no need for the\n> lexer to do it. AFAICT in a quick look through gram.y, there are no\n> places where unary minus is recognized that gram.y won't try to collapse\n> it.\n> In short, I still think that the whole mess ought to come out of the\n> lexer...\n\nMy recollection of the whole point is that, as you mention, *you can't\nidentify a unary minus in the lexer*. So the minus sign is kept\ndistinct, to be reconciled later as either a unary minus *or* an\noperator *or* whatever. The problem was that before, things like (-2)\nand (- 2) were handled differently just because the spacing was\ndifferent.\n\nAnyway, I'll look at the defacto changes; perhaps they are just fine\nbut I'm worried that we've reverted the behavior...\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Tue, 31 Aug 1999 15:22:43 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" }, { "msg_contents": "> Oh, there isn't really much change. The minus is passed standalone\n> as it always was. The only thing is that currently in numbers with unary\n> minus it gets coerced not in lexer, but in parser in doNegate().\n> I wonder why that hasn't been done earlier - especially considering\n> that doNegate() existed long before my, hmm, fiddling.\n\nFor good reasons. See below :(\n\n> To tell the truth, there is some ambiguity in various operators.\n> That ambiguity is stemming from Postgres's type-extension system.\n\nThere is the possibility for ambiguity. But it is our responsibility\nto minimize that ambiguity and to make a predictable system, in the\npresence of Postgres' unique and valuable features such as type\nextension. imho this is more important than, for example, allowing\ninfinite-length queries.\n\n> Consider this: SELECT 3+-2; What would you expect from that? I\n> personally would expect the result of 1. 
But it produces an error,\n> because '+-' is treated as some user-defined operator, which is\n> not true.\n\nThat is part of my concern here. The current behavior is what you say\nyou would expect! Your patches change that behavior!!\n\npostgres=> select 3+-2;\n?column?\n--------\n 1\n(1 row)\n\n> Such innocent expression as SELECT --2 puts Postgres in\n> daze - it (psql) waits for 'completion' of such query (it treats\n> symbols '--' as comment start :-) See? There are more pitfalls\n> beside minus coercing :-)\n\nThere are some well-defined features of SQL92 which we try hard to\nsupport; the comment convention is one of them. That is a special case\nand shouldn't confuse the issue here; we'll need a different test case\nto make the point for me...\n\n> This all was done to clean up the code and 'straighten' the parser.\n> There was a performance breaker, officially called AFAIR 'variable\n> trailing context'.\n\nSorry, what is the performance penalty for that feature, and how do we\nmeasure that against breakage of expected, predictable behavior?\nPlease quantify.\n\nSo far, I'm not a fan of the proposed change; we're giving up behavior\nthat impacts Postgres' unique type extension features for an\narbitrarily large query buffer (as opposed to a generously large query\nbuffer, which can be accomplished just by changing the fixed size).\n\n - Thomas\n\n-- \nThomas Lockhart\t\t\t\[email protected]\nSouth Pasadena, California\n", "msg_date": "Wed, 01 Sep 1999 02:19:33 +0000", "msg_from": "Thomas Lockhart <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" }, { "msg_contents": "Thomas Lockhart <[email protected]> writes:\n>> Consider this: SELECT 3+-2; What would you expect from that? I\n>> personally would expect the result of 1. But it produces an error,\n>> because '+-' is treated as some user-defined operator, which is\n>> not true.\n\n> That is part of my concern here. 
The current behavior is what you say\n> you would expect! Your patches change that behavor!!\n\n> postgres=> select 3+-2;\n> ?column?\n> --------\n> 1\n> (1 row)\n\nOTOH, with current sources:\n\nregression=> select 3+- 2;\nERROR: Unable to identify an operator '+-' for types 'int4' and 'int4'\n You will have to retype this query using an explicit cast\n\nregression=> select f1+-f1 from int4_tbl;\nERROR: Unable to identify an operator '+-' for types 'int4' and 'int4'\n You will have to retype this query using an explicit cast\n\nTo my mind, without spaces this construction *is* ambiguous, and frankly\nI'd have expected the second interpretation ('+-' is a single operator\nname). Almost every computer language in the world uses \"greedy\"\ntokenization where the next token is the longest series of characters\nthat can validly be a token. I don't regard the above behavior as\npredictable, natural, nor obvious. In fact, I'd say it's a bug that\n\"3+-2\" and \"3+-x\" are not lexed in the same way.\n\nHowever, aside from arguing about whether the current behavior is good\nor bad, these examples seem to indicate that it doesn't take an infinite\namount of lookahead to reproduce the behavior. It looks to me like we\ncould preserve the current behavior by parsing a '-' as a separate token\nif it *immediately* precedes a digit, and otherwise allowing it to be\nfolded into the preceding operator. That could presumably be done\nwithout VLTC.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 01 Sep 1999 09:55:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer " }, { "msg_contents": "Tom Lane wrote:\n\n\n> To my mind, without spaces this construction *is* ambiguous, and frankly\n> I'd have expected the second interpretation ('+-' is a single operator\n> name). 
Almost every computer language in the world uses \"greedy\"\n> tokenization where the next token is the longest series of characters\n> that can validly be a token. I don't regard the above behavior as\n> predictable, natural, nor obvious. In fact, I'd say it's a bug that\n> \"3+-2\" and \"3+-x\" are not lexed in the same way.\n> \n\nCompletely agree with that. This differentiating behavior looks like a bug.\n\n> However, aside from arguing about whether the current behavior is good\n> or bad, these examples seem to indicate that it doesn't take an infinite\n> amount of lookahead to reproduce the behavior. It looks to me like we\n> could preserve the current behavior by parsing a '-' as a separate token\n> if it *immediately* precedes a digit, and otherwise allowing it to be\n> folded into the preceding operator. That could presumably be done\n> without VLTC.\n\nOk. If we *have* to preserve old weird behavior, here is the patch.\nIt is to be applied over all my other patches. Though if I were to\ndecide whether to restore old behavior, I wouldn't do it. Because it\nis inconsistency in grammar, i.e. a bug.\n\n-- \nLeon.", "msg_date": "Thu, 02 Sep 1999 17:25:11 +0500", "msg_from": "Leon <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Postgres' lexer" } ]
[ { "msg_contents": "\nvoip=> select * from billing;\nBackend message type 0x44 arrived while idle\nBackend message type 0x44 arrived while idle\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nselect vendor_ip, sum(duration) as sumdur from billing where( call_type = 'E'\nand (stime >= 935092800 AND stime < 935179200)) group by vendor_ip;\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nWe have lost the connection to the backend, so further processing is\nimpossible. Terminating.\n\nPostgres 6.5.1 release, freebsd 2.2.5\n\n---\nDmitry Samersoff, [email protected], ICQ:3161705\nhttp://devnull.wplus.net\n* There will come soft rains ...\n", "msg_date": "Fri, 20 Aug 1999 20:37:18 +0400 (MSD)", "msg_from": "Dmitry Samersoff <[email protected]>", "msg_from_op": true, "msg_subject": "What does it mean?" } ]
[ { "msg_contents": "Mark,\n\n The error you are experiencing was discussed on the hacker's list\nlast week. Oliver Elphick and Tom Good worked out a patch for SQL to\nsolve the problem. You're right that the pg_vlock is getting created and\nthen getting deleted during the vacuuming of the last table. This ends\nup boinking the vacuum. I've attached the patch that Oliver sent to me.\nThe patch won't go through all of the way so you'll have to go through\nfile by file and just make the changes yourself (just deleting a few\nlines or adding a few, nothing big). That has solved the problem on my\nmachine.\n\n-Tony", "msg_date": "Fri, 20 Aug 1999 10:02:30 -0700", "msg_from": "\"G. Anthony Reina\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [GENERAL] Error during 'vacuum analyze'" }, { "msg_contents": "I assume we addressed this in our current tree, right?\n\n> Mark,\n> \n> The error you are experiencing was discussed on the hacker's list\n> last week. Oliver Elphick and Tom Good worked out a patch for SQL to\n> solve the problem. You're right that the pg_vlock is getting created and\n> then getting deleted during the vacuuming of the last table. This ends\n> up boinking the vacuum. I've attached the patch that Oliver sent to me.\n> The patch won't go through all of the way so you'll have to go through\n> file by file and just make the changes yourself (just deleting a few\n> lines or adding a few, nothing big). 
That has solved the problem on my\n> machine.\n> \n> -Tony\n> \n> \n> \n> \n> \n\n> \n> Index: include/access/nbtree.h\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/include/access/nbtree.h,v\n> retrieving revision 1.27\n> retrieving revision 1.27.2.1\n> diff -c -r1.27 -r1.27.2.1\n> *** include/access/nbtree.h 1999/05/25 22:04:55 1.27\n> --- include/access/nbtree.h 1999/08/08 20:24:09 1.27.2.1\n> ***************\n> *** 255,260 ****\n> --- 255,261 ----\n> extern void _bt_regscan(IndexScanDesc scan);\n> extern void _bt_dropscan(IndexScanDesc scan);\n> extern void _bt_adjscans(Relation rel, ItemPointer tid);\n> + extern void AtEOXact_nbtree(void);\n> \n> /*\n> * prototypes for functions in nbtsearch.c\n> Index: backend/access/nbtree/nbtscan.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/access/nbtree/nbtscan.c,v\n> retrieving revision 1.23.2.1\n> retrieving revision 1.23.2.2\n> diff -c -r1.23.2.1 -r1.23.2.2\n> *** backend/access/nbtree/nbtscan.c 1999/08/02 05:24:41 1.23.2.1\n> --- backend/access/nbtree/nbtscan.c 1999/08/08 20:24:10 1.23.2.2\n> ***************\n> *** 42,47 ****\n> --- 42,69 ----\n> static BTScanList BTScans = (BTScanList) NULL;\n> \n> static void _bt_scandel(IndexScanDesc scan, BlockNumber blkno, OffsetNumber offno);\n> + \n> + /*\n> + * AtEOXact_nbtree() --- clean up nbtree subsystem at xact abort or commit.\n> + *\n> + * This is here because it needs to touch this module's static var BTScans.\n> + */\n> + void\n> + AtEOXact_nbtree(void)\n> + {\n> + /* Note: these actions should only be necessary during xact abort;\n> + * but they can't hurt during a commit.\n> + */\n> + \n> + /* Reset the active-scans list to empty.\n> + * We do not need to free the list elements, because they're all\n> + * palloc()'d, so they'll go away at end of transaction anyway.\n> + */\n> + BTScans = NULL;\n> + \n> + /* If we were 
building a btree, we ain't anymore. */\n> + BuildingBtree = false;\n> + }\n> \n> /*\n> * _bt_regscan() -- register a new scan.\n> Index: backend/access/transam/transam.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/access/transam/transam.c,v\n> retrieving revision 1.27.2.1\n> retrieving revision 1.27.2.2\n> diff -c -r1.27.2.1 -r1.27.2.2\n> *** backend/access/transam/transam.c 1999/08/02 05:56:46 1.27.2.1\n> --- backend/access/transam/transam.c 1999/08/08 20:24:12 1.27.2.2\n> ***************\n> *** 20,26 ****\n> \n> #include \"access/heapam.h\"\n> #include \"catalog/catname.h\"\n> - #include \"commands/vacuum.h\"\n> \n> static int RecoveryCheckingEnabled(void);\n> static void TransRecover(Relation logRelation);\n> --- 20,25 ----\n> ***************\n> *** 83,95 ****\n> */\n> extern int OidGenLockId;\n> \n> - /* ----------------\n> - * globals that must be reset at abort\n> - * ----------------\n> - */\n> - extern bool BuildingBtree;\n> \n> - \n> /* ----------------\n> * recovery checking accessors\n> * ----------------\n> --- 82,88 ----\n> ***************\n> *** 568,578 ****\n> void\n> TransactionIdAbort(TransactionId transactionId)\n> {\n> - BuildingBtree = false;\n> - \n> - if (VacuumRunning)\n> - vc_abort();\n> - \n> if (AMI_OVERRIDE)\n> return;\n> \n> --- 561,566 ----\n> Index: backend/access/transam/xact.c\n> ===================================================================\n> RCS file: /usr/local/cvsroot/pgsql/src/backend/access/transam/xact.c,v\n> retrieving revision 1.42.2.1\n> retrieving revision 1.42.2.2\n> diff -c -r1.42.2.1 -r1.42.2.2\n> *** backend/access/transam/xact.c 1999/08/02 05:56:48 1.42.2.1\n> --- backend/access/transam/xact.c 1999/08/08 20:24:12 1.42.2.2\n> ***************\n> *** 144,152 ****\n> --- 144,154 ----\n> */\n> #include \"postgres.h\"\n> \n> + #include \"access/nbtree.h\"\n> #include \"catalog/heap.h\"\n> #include \"commands/async.h\"\n> #include 
\"commands/sequence.h\"\n> + #include \"commands/vacuum.h\"\n> #include \"libpq/be-fsstubs.h\"\n> #include \"storage/proc.h\"\n> #include \"utils/inval.h\"\n> ***************\n> *** 952,957 ****\n> --- 954,960 ----\n> }\n> \n> RelationPurgeLocalRelation(true);\n> + AtEOXact_nbtree();\n> AtCommit_Cache();\n> AtCommit_Locks();\n> AtCommit_Memory();\n> ***************\n> *** 1013,1021 ****\n> --- 1016,1027 ----\n> AtAbort_Notify();\n> CloseSequences();\n> AtEOXact_portals();\n> + if (VacuumRunning)\n> + vc_abort();\n> RecordTransactionAbort();\n> RelationPurgeLocalRelation(false);\n> DestroyNoNameRels();\n> + AtEOXact_nbtree();\n> AtAbort_Cache();\n> AtAbort_Locks();\n> AtAbort_Memory();\n> \n> \n\n\n-- \n Bruce Momjian | http://www.op.net/~candle\n [email protected] | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 27 Sep 1999 14:52:06 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Error during 'vacuum analyze'" } ]
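The shape of the fix in the patch above — move per-module static-state resets (BTScans, BuildingBtree, the vacuum abort) out of TransactionIdAbort and into hooks that both the commit and abort paths invoke — is a general pattern. A minimal model, with invented Python names standing in for the C code shown in the diff:

```python
# Minimal model of the patch above: a module holding static state exposes
# an end-of-transaction reset, and BOTH commit and abort call it, so an
# aborted VACUUM can no longer leave a stale scan list (BTScans) or a
# stuck BuildingBtree flag behind.  All names here are illustrative.

class BtreeState:
    def __init__(self):
        self.scans = []              # analogue of the static BTScans list
        self.building_btree = False  # analogue of BuildingBtree

    def at_eoxact(self):
        # "should only be necessary during xact abort, but can't hurt
        # during a commit" (per the comment in the patch)
        self.scans = []
        self.building_btree = False

class Xact:
    def __init__(self, eoxact_hooks):
        self.hooks = eoxact_hooks

    def commit(self):
        for hook in self.hooks:
            hook()

    def abort(self):
        for hook in self.hooks:
            hook()
```

The point of running the hook on commit as well as abort is robustness: the reset is idempotent, so it is cheaper to call it unconditionally than to reason about which exit paths can leave state behind.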
[ { "msg_contents": "I have just noticed that the optimizer's indexing code is doing\nsomething that looks pretty bogus for non-btree-type indexes.\nIn optimizer/util/plancat.c, there's a routine index_info() that\npulls the necessary information about an index out of the catalogs.\nIt is picking up whatever operator is listed as \"strategy 1\" for the\nindex opclass of each index. Later on, the optimizer assumes that this\noperator represents the sort order induced by an indexscan over the\ngiven index. That's fine for btree, where strategy operator 1 is \"<\".\nBut for rtree and hash it seems to yield some rather odd choices:\n\n<< |box_left |rtree\n<< |box_left |rtree\n<< |poly_left |rtree\n<< |circle_left|rtree\n= |texteq |hash\n= |int4eq |hash\n= |int2eq |hash\n= |oideq |hash\n= |oid8eq |hash\n= |float4eq |hash\n= |nameeq |hash\n= |chareq |hash\n= |float8eq |hash\n= |datetime_eq|hash\n= |time_eq |hash\n= |timespan_eq|hash\n= |date_eq |hash\n= |int8eq |hash\n= |macaddr_eq |hash\n= |varchareq |hash\n= |network_eq |hash\n= |bpchareq |hash\n= |network_eq |hash\n\nI do not know whether an indexscan of an rtree can be counted on\nto yield the values in \"<<\" order ... but I do think it's pretty\nstrange to consider \"=\" as the sort order of a hash index!\n\nShouldn't we fix this somehow? The cleanest solution that comes\nto mind is to add a column to pg_am, wherein we would put the\nstrategy number of the operator that represents the sort ordering\nof the index, or zero if the index has no useful sort order (like\nhash). Any comments on this idea? Does it work for GIST indexes?\n\nAlso, does anyone know whether \"<<\" really is the sort order of\nan rtree? A couple of cursory tests didn't disprove it, but\nI'm not confident about it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Aug 1999 22:15:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "sorting by << correct for rtrees?" } ]
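Tom's proposal — a pg_am column holding the strategy number of the operator that represents the index's sort order, zero when the access method yields no useful order — could look like this in miniature. This is a Python sketch, not catalog code; the per-AM values are taken from the message above, and rtree's zero is a guess pending the open `<<` question:

```python
# Sketch of the proposed pg_am column: for each access method, the
# strategy number whose operator describes an indexscan's output order,
# or 0 for "no useful sort order".  btree's strategy 1 is '<', a real
# ordering; hash's strategy 1 is '=', which is not an ordering at all;
# whether rtree scans really come back in '<<' order is the open
# question in the message above, so 0 is assumed here.

AM_ORDER_STRATEGY = {"btree": 1, "rtree": 0, "hash": 0}

def index_sort_operator(am_name, opclass_strategies):
    """Return the operator giving the index's sort order, or None."""
    strategy = AM_ORDER_STRATEGY.get(am_name, 0)
    if strategy == 0:
        return None
    return opclass_strategies.get(strategy)
```

With this shape, index_info() would stop blindly fetching strategy 1 and the optimizer would never mistake a hash index's `=` for a sort order.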
[ { "msg_contents": "I have just committed changes that alter the representation of\nSortClause nodes (making them like GroupClause, instead of the\ncrufty way they were done before). This breaks stored rules!\nYou will need to initdb next time you pull current sources...\n\nThe good news is that the optimizer is finally reasonably smart\nabout avoiding a top-level Sort operation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 20 Aug 1999 23:58:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Caution: tonight's commits force initdb" }, { "msg_contents": "Hi \n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]On Behalf Of Tom Lane\n> Sent: Saturday, August 21, 1999 12:58 PM\n> To: [email protected]\n> Subject: [HACKERS] Caution: tonight's commits force initdb\n> \n> \n> I have just committed changes that alter the representation of\n> SortClause nodes (making them like GroupClause, instead of the\n> crufty way they were done before). 
This breaks stored rules!\n> You will need to initdb next time you pull current sources...\n> \n> The good news is that the optimizer is finally reasonably smart\n> about avoiding a top-level Sort operation.\n>\n\nThanks for your good jobs.\n\nAfter applying this change,I tested some cases.\nFor a table t,explain shows\n\nexplain select * from t;\n\n\tNOTICE: QUERY PLAN:\n\n\tSeq Scan on t (cost=1716.32 rows=27131 width=612) \n\nAnd with ORDER BY clause\n\nexplain select * from t order by key;\n\n\tNOTICE: QUERY PLAN:\n\n\tIndex Scan using t_pkey on t (cost=2284.55 rows=27131 width=612)\n\nHmm,Index scan is chosen to select all rows.\nAFAIK,sequential scan + sort is much faster than index scan in\nmost cases.\n\n\tcost of index scan < cost of sequential scan + cost of sort\n\nI have felt that the current cost estimation of index scan is too small,\nthough I have no alternative.\n\nComments ?\n\nHiroshi Inoue\[email protected] \n", "msg_date": "Tue, 24 Aug 1999 08:53:13 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Caution: tonight's commits force initdb" }, { "msg_contents": "\"Hiroshi Inoue\" <[email protected]> writes:\n> Hmm,Index scan is chosen to select all rows.\n> AFAIK,sequential scan + sort is much faster than index scan in\n> most cases.\n> \tcost of index scan < cost of sequential scan + cost of sort\n> I have felt that the current cost estimation of index scan is too small,\n> though I have no alternative.\n\nHmm. Well, it's still a step forward that the system is able to\nconsider this query plan --- if it's choosing a plan that's actually\nslower, then that indicates we have a problem with our cost estimates.\n\nThe current cost estimate for a sort (see optimizer/path/costsize.c) is\nbasically just P log P disk accesses (P being the estimated relation\nsize in pages) plus N log N tuple comparisons (N being the estimated\ntuple count). 
This is fairly bogus --- for one thing it does not\naccount for the fact that sorts smaller than SortMem kilobytes are done\nin-memory without temp files. I doubt that the amount of I/O for a\nlarger sort is quite right either. We need to look at exactly what\nusage psort.c makes of temp files and revise the I/O estimates\naccordingly.\n\nI am also suspicious that indexscan costs are underestimated. The\ncost of reading the index is probably not too far off, but the cost\nof accessing the main table is bogus. Worst case, for a table whose\ntuples are thoroughly scattered, you would have a main-table page fetch\nfor *each* returned tuple. In practice it's probably not anywhere near\nthat bad, since you may have some clustering of tuples (so that the same\npage is hit several times in a row), and also the Postgres buffers and\nthe underlying Unix system's disk cache will save trips to disk if there\nis any locality of reference at all. I have no idea how to estimate\nthat effect --- anyone? 
But cost_index is clearly producing a\nridiculously optimistic estimate at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 1999 10:58:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Sorting costs (was Caution: tonight's commits force initdb)" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <[email protected]> writes:\n> > Hmm,Index scan is chosen to select all rows.\n> > AFAIK,sequential scan + sort is much faster than index scan in\n> > most cases.\n> > cost of index scan < cost of sequential scan + cost of sort\n> > I have felt that the current cost estimation of index scan is too small,\n> > though I have no alternative.\n\ncan the optimizer make use of LIMIT, or some other hint that reaction \ntime is preferred over speed of full query ?\n\nIn web apps the index scan may often be fastre than seq scan + sort as\none \nmay not actually need all the tuples but only a small fraction from near \nthe beginning.\n\nGetting the beginning fast also gives better responsiveness for other \ninteractive uses.\n\n> I am also suspicious that indexscan costs are underestimated. The\n> cost of reading the index is probably not too far off, but the cost\n> of accessing the main table is bogus. Worst case, for a table whose\n> tuples are thoroughly scattered, you would have a main-table page fetch\n> for *each* returned tuple. In practice it's probably not anywhere near\n> that bad, since you may have some clustering of tuples (so that the same\n> page is hit several times in a row), and also the Postgres buffers and\n> the underlying Unix system's disk cache will save trips to disk if there\n> is any locality of reference at all. I have no idea how to estimate\n> that effect --- anyone? 
But cost_index is clearly producing a\n> ridiculously optimistic estimate at the moment.\n\nThe one way to find out would be actual benchmarking - if current \noptimizer prefers index scans it is possible to do a query using \nindex scan, dro the index, somehow flush disk cache and then do the \nsame query using seqscan+sort. \n\nIf the latter is preferred anyway we would have no way to test ...\n\n------------------\nHannu\n", "msg_date": "Tue, 24 Aug 1999 22:10:41 +0300", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Sorting costs (was Caution: tonight's commits force\n\tinitdb)" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> can the optimizer make use of LIMIT, or some other hint that reaction \n> time is preferred over speed of full query ?\n\nThat is on the to-do list. I think the hard part is working out the\ndetails of when a top-level LIMIT applies to the costs of a lower-level\nscan (for example, it does not if there's going to be a Sort in\nbetween), and then figuring out how to transmit that information around.\nThe optimizer does most of its cost estimation bottom-up, so it might\nbe hard to do much in any but the simplest cases.\n\n> The one way to find out would be actual benchmarking - if current \n> optimizer prefers index scans it is possible to do a query using \n> index scan, dro the index, somehow flush disk cache and then do the \n> same query using seqscan+sort. \n\n> If the latter is preferred anyway we would have no way to test ...\n\nYou can benchmark with and without index usage by starting your client\nwith PGOPTIONS=\"-fs\" or PGOPTIONS=\"-fi\" respectively --- that basically\nputs a very heavy thumb on the scales when the optimizer is choosing\nwhich to use ;-). However, I am not sure that I'd trust a small number\nof benchmark results as a general guide to costs. 
I think the critical\nfactor for indexscan costs is how scattered the tuples are with respect\nto the index order, and without some idea about that factor you'd have\nno way to know if a particular benchmark result is high, low, or average.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Aug 1999 17:14:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Sorting costs (was Caution: tonight's commits force\n\tinitdb)" }, { "msg_contents": "> -----Original Message-----\n> From: [email protected] [mailto:[email protected]]On Behalf Of Hannu\n> Krosing\n> Sent: Wednesday, August 25, 1999 4:11 AM\n> To: Tom Lane\n> Cc: Hiroshi Inoue; pgsql-hackers\n> Subject: Re: [HACKERS] Sorting costs (was Caution: tonight's commits\n> force initdb)\n>\n>\n> Tom Lane wrote:\n> >\n> > \"Hiroshi Inoue\" <[email protected]> writes:\n> > > Hmm,Index scan is chosen to select all rows.\n> > > AFAIK,sequential scan + sort is much faster than index scan in\n> > > most cases.\n> > > cost of index scan < cost of sequential scan + cost of sort\n> > > I have felt that the current cost estimation of index scan is\n> too small,\n> > > though I have no alternative.\n>\n> can the optimizer make use of LIMIT, or some other hint that reaction\n> time is preferred over speed of full query ?\n>\n> In web apps the index scan may often be fastre than seq scan + sort as\n> one\n> may not actually need all the tuples but only a small fraction from near\n> the beginning.\n>\n> Getting the beginning fast also gives better responsiveness for other\n> interactive uses.\n>\n\nI kow there are many cases that the response to get first rows is necessary.\nIt's the reason that I provided a patch for descending ORDER BY cases.\nI think that LIMIT is the hint to tell optimizer that the response is\nnecessary.\nWe would be able to use \"LIMIT ALL\" only to tell the hint if LIMIT is taken\ninto account.\n\n> > I am also suspicious that indexscan costs are 
underestimated. The\n> > cost of reading the index is probably not too far off, but the cost\n> > of accessing the main table is bogus. Worst case, for a table whose\n> > tuples are thoroughly scattered, you would have a main-table page fetch\n> > for *each* returned tuple. In practice it's probably not anywhere near\n> > that bad, since you may have some clustering of tuples (so that the same\n> > page is hit several times in a row), and also the Postgres buffers and\n> > the underlying Unix system's disk cache will save trips to disk if there\n> > is any locality of reference at all. I have no idea how to estimate\n> > that effect --- anyone? But cost_index is clearly producing a\n> > ridiculously optimistic estimate at the moment.\n>\n> The one way to find out would be actual benchmarking - if current\n> optimizer prefers index scans it is possible to do a query using\n> index scan, dro the index, somehow flush disk cache and then do the\n> same query using seqscan+sort.\n>\n\nProbably this case was benchmarked by someone(Vadim ?).\n\nI have thought that generally index scan is 5~10 times slower than\nsequential scan to select all rows.\nI know the case that index scan is more than 500 times slower than\nsequential scan. After clustering,the performance was dramatically\nimproved. Unfortunately,I don't know how to estimate such a case.\n\nRegards.\n\nHiroshi Inoue\[email protected]\n\n\n", "msg_date": "Wed, 25 Aug 1999 09:17:03 +0900", "msg_from": "\"Hiroshi Inoue\" <[email protected]>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Sorting costs (was Caution: tonight's commits force\n\tinitdb)" } ]
[ { "msg_contents": "I've just grabbed it now, I'll get back to you Monday.\n\n>> -----Original Message-----\n>> From: Leon [mailto:[email protected]]\n>> Sent: Friday, August 20, 1999 8:28 PM\n>> To: Ansley, Michael\n>> Cc: hackers\n>> Subject: Re: [HACKERS] Postgres' lexer\n>> \n>> \n>> Ansley, Michael wrote:\n>> > \n>> > Leon, if you manage to find a replacement for this, please \n>> let me know.\n>> > I'll probably only pick it up after the weekend.\n>> > \n>> > I think that we need to find another way to tokenise the \n>> minus. First of\n>> > all, though, how is the parser supposed to tell whether this:\n>> > a -2\n>> > means this:\n>> > (a - 2)\n>> > or this:\n>> > a (-2)\n>> \n>> I think that the current behavior is ok - it is what we would expect\n>> from expressions like 'a -2'.\n>> \n>> I have produced a patch to cleanup the code. It works due to the\n>> fact that unary minus gets processed in doNegate() in parser anyway,\n>> and it is by no way lexer's job to do grammatical parsing - i.e.\n>> deciding if operator is to be treated as binary or unary. \n>> \n>> I ran regression tests, everything seems to be ok. It is my first\n>> diff/patch experience in *NIX, so take it with mercy :) But it \n>> seems to be correct. It is to be applied against 6.5.0 (I have\n>> not upgraded to 6.5.1 yet, but hope lexer hasn't changed since\n>> then.) The patch mainly contains nuked code. The only thing added\n>> is my short comment :)\n>> \n>> Have I done some right thing? :)\n>> \n>> -- \n>> Leon.\n>> ---------\n>> \"This may seem a bit weird, but that's okay, because it is weird.\" -\n>> Perl manpage.\n>> \n", "msg_date": "Sat, 21 Aug 1999 10:17:39 +0200", "msg_from": "\"Ansley, Michael\" <[email protected]>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Postgres' lexer" } ]
[ { "msg_contents": "\n Hi,\n\n I'am programming a trigger in 'C' and I need know that a trigger run in\na transaction. How check in a trigger that is transacton set (BEGIN before\ntrigger)? \n\nAnd - exist some check function for table lock? (But _not_ table in current\nrelation (CurrentTriggerData->tg_relation).\n\n\t\t\t\t\t\tZakkr \n\n\n\n\n", "msg_date": "Mon, 23 Aug 1999 13:56:31 +0200 (CEST)", "msg_from": "Zakkr <[email protected]>", "msg_from_op": true, "msg_subject": "trigger: run in transaction " } ]
[ { "msg_contents": "What does it mean?\n\nprova=> select nome from prova group by nome having count(*) > 1;\nnome\nCarlos\nHenrique\nJose\n(3 rows)\n\nprova=> select oid,* from prova where nome in (select nome from prova\ngroup by nome having 1 < count(*));\nERROR: pull_var_clause: Cannot handle node type 108\n\nprova=> select * from prova where nome in (select nome from prova group\nby nome having count(*) > 1);\nERROR: rewrite: aggregate column of view must be at rigth side in qual\n\nJos�\n\n\n", "msg_date": "Mon, 23 Aug 1999 15:03:29 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "ERROR: pull_var_clause: Cannot handle node type 108" }, { "msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> What does it mean?\n> prova=> select nome from prova group by nome having count(*) > 1;\n> [ OK ]\n\n> prova=> select oid,* from prova where nome in (select nome from prova\n> group by nome having 1 < count(*));\n> ERROR: pull_var_clause: Cannot handle node type 108\n\n> prova=> select * from prova where nome in (select nome from prova group\n> by nome having count(*) > 1);\n> ERROR: rewrite: aggregate column of view must be at rigth side in qual\n\nI take it you are using 6.4, because 6.5 generates different failure\nmessages. But it's not any less broken :-(. The rewriter seems to have\na bunch of bugs associated with aggregate functions in HAVING clauses of\nsub-selects. Or maybe it's just several manifestations of the same bug.\nI have notes about this problem but do not understand it well enough to\nfix it. 
Perhaps Jan has a clue about it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Aug 1999 10:21:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: pull_var_clause: Cannot handle node type 108 " }, { "msg_contents": "\n\nTom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > What does it mean?\n> > prova=> select nome from prova group by nome having count(*) > 1;\n> > [ OK ]\n>\n> > prova=> select oid,* from prova where nome in (select nome from prova\n> > group by nome having 1 < count(*));\n> > ERROR: pull_var_clause: Cannot handle node type 108\n>\n> > prova=> select * from prova where nome in (select nome from prova group\n> > by nome having count(*) > 1);\n> > ERROR: rewrite: aggregate column of view must be at rigth side in qual\n>\n> I take it you are using 6.4, because 6.5 generates different failure\n\nThis is my ver:\n\nhygea=> select version();\nversion\n-------------------------------------------------------------------\nPostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc 2.7.2.1\n(1 row)\n\n\n> messages. But it's not any less broken :-(. The rewriter seems to have\n> a bunch of bugs associated with aggregate functions in HAVING clauses of\n> sub-selects. Or maybe it's just several manifestations of the same bug.\n> I have notes about this problem but do not understand it well enough to\n> fix it. 
Perhaps Jan has a clue about it...\n>\n> regards, tom lane\n\n", "msg_date": "Mon, 23 Aug 1999 17:18:05 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ERROR: pull_var_clause: Cannot handle node type 108" }, { "msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n>> I take it you are using 6.4, because 6.5 generates different failure\n\n> This is my ver:\n\n> hygea=> select version();\n> version\n> -------------------------------------------------------------------\n> PostgreSQL 6.5.0 on i586-pc-linux-gnulibc1, compiled by gcc 2.7.2.1\n> (1 row)\n\n6.5 prerelease maybe? I'm fairly sure that 6.5 release does not have\nthe \"pull_var_clause: Cannot handle node type\" message; but it was a\nlate change.\n\n\"select version()\" is just about useless for determining what you\nare dealing with if you use CVS updates or snapshots, because the\nversion number only gets changed at official release times.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Aug 1999 11:31:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: pull_var_clause: Cannot handle node type 108 " }, { "msg_contents": "\n\nTom Lane ha scritto:\n\n> =?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> > What does it mean?\n> > prova=> select nome from prova group by nome having count(*) > 1;\n> > [ OK ]\n>\n> > prova=> select oid,* from prova where nome in (select nome from prova\n> > group by nome having 1 < count(*));\n> > ERROR: pull_var_clause: Cannot handle node type 108\n>\n> > prova=> select * from prova where nome in (select nome from prova group\n> > by nome having count(*) > 1);\n> > ERROR: rewrite: aggregate column of view must be at rigth side in qual\n>\n> I take it you are using 6.4, because 6.5 generates different failure\n> messages. But it's not any less broken :-(. 
The rewriter seems to have\n> a bunch of bugs associated with aggregate functions in HAVING clauses of\n> sub-selects. Or maybe it's just several manifestations of the same bug.\n> I have notes about this problem but do not understand it well enough to\n> fix it. Perhaps Jan has a clue about it...\n>\n> regards, tom lane\n\nYou are right Tom. I installed v6.5.1 and now the message is different, but I\ncan't understand it again:\n\nhygea=> select nome from prova group by nome having 1<count(*);\nnome\n------\ncarlos\njose\n(2 rows)\n\nhygea=> select oid,nome from prova where nome in (select nome from prova\ngroup by nome having 1<count(*));\nERROR: SELECT/HAVING requires aggregates to be valid\n\nJos�\n\n\n", "msg_date": "Fri, 27 Aug 1999 14:35:32 +0200", "msg_from": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] ERROR: pull_var_clause: Cannot handle node type 108" }, { "msg_contents": "=?iso-8859-1?Q?Jos=E9?= Soares <[email protected]> writes:\n> Tom Lane ha scritto:\n>> I take it you are using 6.4, because 6.5 generates different failure\n>> messages. But it's not any less broken :-(. The rewriter seems to have\n>> a bunch of bugs associated with aggregate functions in HAVING clauses of\n>> sub-selects.\n\n> You are right Tom. I installed v6.5.1 and now the message is different, but I\n> can't understand it again:\n\n> hygea=> select oid,nome from prova where nome in (select nome from prova\n> group by nome having 1<count(*));\n> ERROR: SELECT/HAVING requires aggregates to be valid\n\nWell, like I said, it's broken. 
What's actually going on is that the\nrewriter is mistakenly deciding that the count(*) needs to be pushed\ndown into another level of subselect:\n\nselect oid,nome from prova where nome in\n(select nome from prova group by nome having 1 < \n(select count(*) from prova));\n\nwhereupon the optimizer quite rightly complains that there is no\naggregate function visible in the mid-level HAVING clause.\n\nThis pushing-down is probably the right thing for some scenarios\ninvolving aggregate functions introduced by views, but it's surely\ndead wrong in the example as given. I don't currently understand\nthe rewriter well enough to know when it should happen or not happen.\nI might take a swipe at fixing it though if Jan doesn't step up to bat\nsoon --- this class of bugs has been generating complaints for a good\nwhile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Aug 1999 09:58:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] ERROR: pull_var_clause: Cannot handle node type 108 " } ]
[ { "msg_contents": "Hy all!\n\nI posted this mail on the general and sql mailing-list, but I got no\naswer, so I try to post it here;\nif the question is bad-explained, please ask me more details...\n\nI want to make a many-to-many relation on my db, and I would like to use\n\nan array-field of the external keys...\nThis is what I whoul like to create:\n\nCREATE TABLE Groups(\n IDGroup SERIAL, -- Primary key\n ...\n);\n\nCREATE TABLE Customers(\n IDGroups INT4[], -- Multiple foreign keys\n ...\n);\n\nSELECT Customers.* FROM Customers WHERE IDSearchedGroup IN\nCustomers.IDGroups;\n\nIDSearchedGruop is obviously a parameter.\n\nWell, this query doesn't work because in the SELECT ... the operator\n\"IN\" can work only on subselect, not on array... how can I check if an\nelement belongs to an array or not? Performance are not critical,\nbecause I have few groups for each Customer...\n\nYes, I know that the common way to make many-to-many relations is adding\n\na support table... but I don't like the conventional solution to this\nproblem... any ideas???\n\n\nPaolo\n\n\n", "msg_date": "Mon, 23 Aug 1999 15:37:37 +0200", "msg_from": "Alke <[email protected]>", "msg_from_op": true, "msg_subject": "Array-fields and many-to-many relations" }, { "msg_contents": "Alke <[email protected]> writes:\n> Well, this query doesn't work because in the SELECT ... the operator\n> \"IN\" can work only on subselect, not on array... how can I check if an\n> element belongs to an array or not?\n\ncontrib/array has a slightly ugly solution to your problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Aug 1999 10:23:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Array-fields and many-to-many relations " } ]
[ { "msg_contents": "Hy all!\n\nI posted this mail on the general and sql mailing-list, but I got no\naswer, so I try to post it here;\nif the question is bad-explained, please ask me more details...\n\nI would like to use OIDs as a primary key on every table of my db,\nbecause this seem to me a very smart thing to do..... but after some\nexperiment, I've decided to create a SERIAL field on every table, and to\n\nuse it as the primary key, and to abandon OIDs...\n\nThis is an example of what I whould like to write (and I can't):\n\nCREATE FUNCTION firstTable_DelCascade() RETURNS OPAQUE AS '\nBEGIN\n DELETE FROM anotherTable WHERE anotherTable.OIDfromFirstTable =\nOLD.OID;\n RETURN OLD;\nEND;\n' LANGUAGE 'plpgsql';\n\nWell, this can't work because OLD is of type RECORD, and type record\nhave no OID :-(\n(this is what I found in the docs, about RECORD:\n\"Only the user attributes of a table row are accessible in the row, no\nOid or other system attributes (hence the row could be from a view and\nview rows don't have useful system attributes)\" )\n\nThe main question is: can I get the OID of the OLD/NEW record in\ntriggers?\nA more general question is: what is the usefulness of OIDs, if I can't\nuse them in trigger?\n\n\nPaolo\n\n", "msg_date": "Mon, 23 Aug 1999 15:40:58 +0200", "msg_from": "Alke <[email protected]>", "msg_from_op": true, "msg_subject": "OID and PL/pgSQL trigger :-(" } ]