[ { "msg_contents": "We are having terrible performance issues with a production instance of \nPostgreSQL version 7.4.5, but are struggling with which parameters in \nthe postgresql.conf to change.\n\nOur database server is an Apple G5 (2 x 2GHz CPU, 2GB RAM). The \noperating system is Mac OS X 10.3.\n\nThe database seems to fine to start with, but then as the load increases \nit seems to reach a threshold where the number of non-idle queries in \npg_stat_activity grows heavily and we appear to get something similar to \na motorway tail back with up to perhaps 140 queries awaiting processing. \nAt the same time the virtual memory usage (reported by the OS) appears \nto grow heavily too (sometimes up to 50GB). The CPUs do not seems to be \nworking overly hard nor do the disks and the memory monitor reports \nabout 600MB of inactive memory. Once in this situation, the database \nnever catches up with itself and the only way to put it back on an even \nkeel is to stop the application and restart database.\n\nThe system memory settings are:\n\nkern.sysv.shmmax: 536870912\nkern.sysv.shmmin: 1\nkern.sysv.shmmni: 4096\nkern.sysv.shmseg: 4096\nkern.sysv.shmall: 131072\n\nWe have unlimited the number of processes and open files for the user \nrunning PostgreSQL (therefore max 2048 processes and max 12288 open files).\n\nNon default postgresql parameters are:\n\ntcpip_socket = true\nmax_connections = 500\nunix_socket_directory = '/Local/PostgreSQL'\nshared_buffers = 8192 # min 16, at least max_connections*2, \n8KB each\nsort_mem = 2048 # min 64, size in KB\nwal_buffers = 32 # min 4, 8KB each\neffective_cache_size = 100000 # typically 8KB each\nrandom_page_cost = 2 # units are one sequential page fetch cost\nlog_min_error_statement = info # Values in order of increasing severity:\nlog_duration = true\nlog_pid = true\nlog_statement = true\nlog_timestamp = true\nstats_command_string = true\n\nalthough on the last restart I changed the following (since the current \nconfig clearly isn't working):\n\nshared_buffers = 16384 # min 16, at least max_connections*2, \n8KB each\neffective_cache_size = 10000 # typically 8KB each\n\nWe don't know whether these have helped yet - but we should get a good \nidea around 10am tomorrow morning.\n\nWe currently have the application limited to a maximum of 40 concurrent \nconnections to the database.\n\nOur application produces a fairly varied mix of queries, some quite \ncomplex and plenty of them. We seem to average about 400,000 queries per \nhour. At first I thought it might be one or two inefficient queries \nblocking the CPUs but the CPUs don't seem to be very stretched. My guess \nis that we have our postgresql memory settings wrong, however, the is \nlots of conflicting advice about what to set (from 1000 to 100000 shared \nbuffers).\n\nDoes this heavy use of VM and query tail back indicate which memory \nsettings are wrong? Presumably if there are 140 queries in \npg_stat_activity then postgresql will be trying to service all these \nqueries at once? I also presume that if VM usage is high then we are \npaging a vast amount to disk. 
But I am not sure why.\n\nHas anyone seen this behaviour before and can anyone point me in the \nright direction?\n\nRegards,\n\nAlexander Stanier\n\n", "msg_date": "Tue, 05 Jul 2005 17:13:42 +0100", "msg_from": "Alexander Stanier <[email protected]>", "msg_from_op": true, "msg_subject": "Heavy virtual memory usage on production system" }, { "msg_contents": "Alexander Stanier <[email protected]> writes:\n> The database seems to fine to start with, but then as the load increases \n> it seems to reach a threshold where the number of non-idle queries in \n> pg_stat_activity grows heavily and we appear to get something similar to \n> a motorway tail back with up to perhaps 140 queries awaiting processing. \n> At the same time the virtual memory usage (reported by the OS) appears \n> to grow heavily too (sometimes up to 50GB). The CPUs do not seems to be \n> working overly hard nor do the disks and the memory monitor reports \n> about 600MB of inactive memory.\n\nYou shouldn't be putting a lot of credence in the virtual memory usage\nthen, methinks. Some versions of top count the Postgres shared memory\nagainst *each* backend process, leading to a wildly inflated figure for\ntotal memory used. I'd suggest watching the output of \"vmstat 1\" (or\nlocal equivalent) to observe whether there's any significant amount of\nswapping going on; if not, excessive memory usage isn't the problem.\n\nAre you sure that the problem isn't at the level of some query taking an\nexclusive lock and then sitting on it? I would expect either CPU or\ndisk bandwidth or both to be saturated if you were having a conventional\nresource limitation problem. Again, comparing vmstat readings during\nnormal and slow response conditions would be instructive.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 05 Jul 2005 12:55:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heavy virtual memory usage on production system " }, { "msg_contents": "The problem happened again this morning and I took the chance to check \nout the locking situation. The number of locks increased dramatically up \nto over 1000, but they were all \"AccessShareLocks\" and all were granted. \nThe odd \"RowExclusiveLock\" appeared but none persisted. On the basis \nthat nothing seems to be waiting for a lock, I don't think it is a \nlocking problem. I think the vast number of locks is symptom of the fact \nthat the server is trying to service a vast number of requests. \nEventually, the server seemed to catch up with itself - the CPU went up, \nthe VM went down and the number of queries in pg_stat_activity reduced.\n\nThe problem then occurred a second time and there seemed to be a lot of \npageouts and pageins going on, but I was only looking at top so it was \ndifficult to tell. I have now restarted with a statement_timeout of 2 \nmins to protect the server from poorly performing queries (fairly brutal \n- but it does at least stop the downward spiral). I have also reduced \nthe sort_mem to 1024. 
I guess it could be that we simply need more \nmemory in the server.\n\nI have got vmstat (vm_stat on Mac) running and I will watch the \nbehaviour......\n\nRegards, Alex Stanier.\n\nTom Lane wrote:\n\n>Alexander Stanier <[email protected]> writes:\n> \n>\n>>The database seems to fine to start with, but then as the load increases \n>>it seems to reach a threshold where the number of non-idle queries in \n>>pg_stat_activity grows heavily and we appear to get something similar to \n>>a motorway tail back with up to perhaps 140 queries awaiting processing. \n>>At the same time the virtual memory usage (reported by the OS) appears \n>>to grow heavily too (sometimes up to 50GB). The CPUs do not seems to be \n>>working overly hard nor do the disks and the memory monitor reports \n>>about 600MB of inactive memory.\n>> \n>>\n>\n>You shouldn't be putting a lot of credence in the virtual memory usage\n>then, methinks. Some versions of top count the Postgres shared memory\n>against *each* backend process, leading to a wildly inflated figure for\n>total memory used. I'd suggest watching the output of \"vmstat 1\" (or\n>local equivalent) to observe whether there's any significant amount of\n>swapping going on; if not, excessive memory usage isn't the problem.\n>\n>Are you sure that the problem isn't at the level of some query taking an\n>exclusive lock and then sitting on it? I would expect either CPU or\n>disk bandwidth or both to be saturated if you were having a conventional\n>resource limitation problem. Again, comparing vmstat readings during\n>normal and slow response conditions would be instructive.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n", "msg_date": "Wed, 06 Jul 2005 13:03:44 +0100", "msg_from": "Alexander Stanier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Heavy virtual memory usage on production system" }, { "msg_contents": "Alexander Stanier <[email protected]> writes:\n> The problem happened again this morning and I took the chance to check \n> out the locking situation. The number of locks increased dramatically up \n> to over 1000, but they were all \"AccessShareLocks\" and all were granted. \n> The odd \"RowExclusiveLock\" appeared but none persisted. On the basis \n> that nothing seems to be waiting for a lock, I don't think it is a \n> locking problem.\n\nHmm. How many active processes were there, and how many locks per\nprocess? (A quick \"SELECT pid, count(*) GROUP BY pid\" query should give\nyou this info next time.) We just recently got rid of some O(N^2)\nbehavior in the lock manager for cases where a single backend holds many\ndifferent locks. 
So if there's a single query acquiring a whole lot of\nlocks, that could possibly have something to do with this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Jul 2005 10:15:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heavy virtual memory usage on production system " }, { "msg_contents": "Looks as though there are several processes which are acquiring a load \nof locks:\n\npid | count\n------+-------\n 3193 | 2\n 3192 | 9\n 3191 | 7\n 3190 | 3\n 3189 | 2\n 3188 | 3\n 3187 | 3\n 3186 | 3\n 3185 | 3\n 3184 | 3\n 3183 | 3\n 3182 | 13\n 3181 | 3\n 3179 | 10\n 3175 | 13\n 3174 | 2\n 3173 | 10\n 2917 | 3\n 3153 | 8\n 3150 | 8\n 3149 | 8\n 3146 | 9\n 3145 | 8\n 3144 | 8\n 3143 | 9\n 3142 | 3\n 3141 | 10\n 3127 | 8\n 3125 | 13\n 3124 | 13\n 3121 | 8\n 3118 | 8\n 3114 | 8\n 3113 | 8\n 3110 | 8\n 3106 | 8\n 3104 | 9\n 3102 | 8\n 3100 | 13\n 2314 | 2\n(40 rows)\n\nI guess it might be worth us getting this server up to PostgreSQL 8.0.3. \nAt least we can then discount that as a problem.\n\nRegards, Alex Stanier.\n\nTom Lane wrote:\n\n>Alexander Stanier <[email protected]> writes:\n> \n>\n>>The problem happened again this morning and I took the chance to check \n>>out the locking situation. The number of locks increased dramatically up \n>>to over 1000, but they were all \"AccessShareLocks\" and all were granted. \n>>The odd \"RowExclusiveLock\" appeared but none persisted. On the basis \n>>that nothing seems to be waiting for a lock, I don't think it is a \n>>locking problem.\n>> \n>>\n>\n>Hmm. How many active processes were there, and how many locks per\n>process? (A quick \"SELECT pid, count(*) GROUP BY pid\" query should give\n>you this info next time.) We just recently got rid of some O(N^2)\n>behavior in the lock manager for cases where a single backend holds many\n>different locks. So if there's a single query acquiring a whole lot of\n>locks, that could possibly have something to do with this.\n>\n>\t\t\tregards, tom lane\n>\n> \n>\n", "msg_date": "Wed, 06 Jul 2005 15:49:40 +0100", "msg_from": "Alexander Stanier <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Heavy virtual memory usage on production system" }, { "msg_contents": "Alexander Stanier <[email protected]> writes:\n> Looks as though there are several processes which are acquiring a load \n> of locks:\n\n13 locks isn't \"a load\". I was worried about scenarios in which a\nsingle process might take hundreds or thousands of locks; it doesn't\nlook like you have that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 06 Jul 2005 11:06:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Heavy virtual memory usage on production system " } ]
[ { "msg_contents": "Hi all, we have the following setup:\n\n- Sun V250 server\n- 2*1.3GHz Sparc IIIi CPU\n- 8GB RAM\n- 8*73GB SCSI drives\n- Solaris 10\n- Postgres 8\n\nDisks 0 and 1 are mirrored and contain the OS and the various software\npackages, disks 2-7 are configured as a 320GB concatenation mounted on\n/data, which is where load files and Postgres database and log files live.\n\nThe box is used by a small number of developers doing solely\nPostgres-based data warehousing work. There are no end-users on the box,\nand we are aiming for the maximum IO throughput.\n\nQuestions are as follows:\n\n1) Should we have set the page size to 32MB when we compiled Postgres?\n\nWe mainly do bulk loads using 'copy', full-table scans and large joins so\nthis would seem sensible. Tables are typically 10 million rows at present.\n\n2) What are the obvious changes to make to postgresql.conf?\n\nThings like shared_buffers, work_mem, maintenance_work_mem and\ncheckpoint_segments seem like good candidates given our data warehousing\nworkloads.\n\n3) Ditto /etc/system?\n\n4) We moved the pg_xlog files off /data/postgres (disks 2-7) and into\n/opt/pg_xlog (disks 0-1), but it seemed like performance decreased, so we\nmoved them back again.\n\nHas anyone experienced real performance gains by moving the pg_xlog files?\n\nThanks in anticipation,\n\nPaul.\n\n\n", "msg_date": "Wed, 6 Jul 2005 18:15:47 +0100 (BST)", "msg_from": "\"Paul Johnson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Data Warehousing Tuning" }, { "msg_contents": "Paul,\n\n> Has anyone experienced real performance gains by moving the pg_xlog\n> files?\n\nYes. Both for data load and on OLTP workloads, this increased write \nperformance by as much as 15%. However, you need to configure the xlog \ndrive correctly, you can't just move it to a new disk. Make sure the \nother disk is dedicated exclusively to the xlog, set it forcedirectio, and \nincrease your checkpoint_segments to something like 128.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 6 Jul 2005 17:40:12 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Data Warehousing Tuning" } ]
[ { "msg_contents": "Hi,\n\nI'm about to recommend a server model to a client, and I've read a in \nmany places (including in this list) that storing indexes in one disk \nand the rest of the database in other disk might increase the overall \nperformance of the system in about 10%.\n\nMaking this only by a symbolic link is enough, or there are any futher \nsteps? We were thinking (for costs reasons) in use only 2 SCSI disks, \nwithout any level of RAID. Is this enough to achieve performance \nimprovement mentioned above?\n\nBest regards,\nAlvaro\n", "msg_date": "Wed, 06 Jul 2005 14:58:31 -0300", "msg_from": "Alvaro Nunes Melo <[email protected]>", "msg_from_op": true, "msg_subject": "Storing data and indexes in different disks" } ]
[ { "msg_contents": "I'd say it's a little early to worry about a 10% performance increase\nwhen you don't have any redundancy. You might want to consider using\nmore, cheaper SATA disks - with more spindles you may very well get\nbetter performance in addition to redundancy.\n\nAnyway, here's an optimization project I just went through recently: the\nold database was running all on a SAN attached RAID5 partition; moved\nthe indices to a local striped mirror set (faster disks too: 15K rpm),\nmoved the WAL files to a separate local two disk mirror, and spent a lot\nof time tuning the config parameters (the old install was running the\nconservative defaults). All the hardware (apart from the additional\ndisks) is the same. For some simple queries I saw as much as a 150X\nspeedup, though that's certainly not typical of the performance\nimprovement overall. Most of this is likely due to the memory settings,\nbut faster disks certainly play a part.\n\nIn any case, it's hard to say what would improve performance for you\nwithout knowing what kind of applications you are running and what sort\nof load they see.\n\nDmitri\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Alvaro\nNunes Melo\nSent: Wednesday, July 06, 2005 1:59 PM\nTo: PostgreSQL - Performance\nSubject: [PERFORM] Storing data and indexes in different disks\n\n\nHi,\n\nI'm about to recommend a server model to a client, and I've read a in \nmany places (including in this list) that storing indexes in one disk \nand the rest of the database in other disk might increase the overall \nperformance of the system in about 10%.\n\nMaking this only by a symbolic link is enough, or there are any futher \nsteps? We were thinking (for costs reasons) in use only 2 SCSI disks, \nwithout any level of RAID. Is this enough to achieve performance \nimprovement mentioned above?\n\nBest regards,\nAlvaro\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings\nThe information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer\n", "msg_date": "Wed, 6 Jul 2005 14:46:03 -0400", "msg_from": "\"Dmitri Bichko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing data and indexes in different disks" } ]
[ { "msg_contents": "�where is stored the value set by ALTER TABLE table_name ALTER COLUMN\ncolumn_name SET STATISTICS = [1-1000]?\nI've set this to 1000, and I didn't remember in which column (doh!). Is\nthere any table to look? (I did 'grep \"set stat\" $PGDATA/pg_log/*' and found\nit, but may be there is a better way)\n\nI couldn't find it in the docs neithr \"googling\"\n\n\nGreetings\n--------------------------------------\nLong life, little spam and prosperity\n\n", "msg_date": "Wed, 6 Jul 2005 16:49:21 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": true, "msg_subject": "ALTER TABLE tabla ALTER COLUMN columna SET STATISTICS number" }, { "msg_contents": "On Wed, Jul 06, 2005 at 04:49:21PM -0300, Dario wrote:\n> where is stored the value set by ALTER TABLE table_name ALTER COLUMN\n> column_name SET STATISTICS = [1-1000]?\n\npg_attribute.attstattarget\n\nExample query:\n\nSELECT attrelid::regclass, attname, attstattarget\nFROM pg_attribute\nWHERE attstattarget > 0;\n\nSee the \"System Catalogs\" chapter in the documentation for more\ninformation.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Wed, 6 Jul 2005 14:07:52 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ALTER TABLE tabla ALTER COLUMN columna SET STATISTICS number" }, { "msg_contents": "(first at all, sorry for my english)\nHi.\n - Does \"left join\" restrict the order in which the planner must join\ntables? I've read about join, but i'm not sure about left join...\n - If so: Can I avoid this behavior? I mean, make the planner resolve the\nquery, using statistics (uniqueness, data distribution) rather than join\norder.\n\n\tMy query looks like:\n\tSELECT ...\n FROM a, b,\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d on (d.key=a.key)\n WHERE (a.key = b.key) AND (b.column <= 100)\n\n b.column has a lot better selectivity, but planner insist on resolve\nfirst c.key = a.key.\n\n\tOf course, I could rewrite something like:\n\tSELECT ...\n FROM\n (SELECT ...\n FROM a,b\n LEFT JOIN d on (d.key=a.key)\n WHERE (b.column <= 100)\n )\n as aa\n LEFT JOIN c ON (c.key = aa.key)\n\n\tbut this is query is constructed by an application with a \"multicolumn\"\nfilter. It's dynamic.\n It means that a user could choose to look for \"c.column = 1000\". And\nalso, combinations of filters.\n\n\tSo, I need the planner to choose the best plan...\n\nI've already change statistics, I clustered tables with cluster, ran vacuum\nanalyze, changed work_mem, shared_buffers...\n\nGreetings. TIA.\n\n", "msg_date": "Wed, 6 Jul 2005 18:54:02 -0300", "msg_from": "\"Dario Pudlo\" <[email protected]>", "msg_from_op": false, "msg_subject": "join and query planner" }, { "msg_contents": "\n(first at all, sorry for my english)\nHi.\n - Does \"left join\" restrict the order in which the planner must join\ntables? I've read about join, but i'm not sure about left join...\n - If so: Can I avoid this behavior? 
I mean, make the planner resolve the\nquery, using statistics (uniqueness, data distribution) rather than join\norder.\n\n\tMy query looks like:\n\tSELECT ...\n FROM a, b,\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d on (d.key=a.key)\n WHERE (a.key = b.key) AND (b.column <= 100)\n\n b.column has a lot better selectivity, but planner insist on resolve\nfirst c.key = a.key.\n\n\tOf course, I could rewrite something like:\n\tSELECT ...\n FROM\n (SELECT ...\n FROM a,b\n LEFT JOIN d on (d.key=a.key)\n WHERE (b.column <= 100)\n )\n as aa\n LEFT JOIN c ON (c.key = aa.key)\n\n\tbut this is query is constructed by an application with a \"multicolumn\"\nfilter. It's dynamic.\n It means that a user could choose to look for \"c.column = 1000\". And\nalso, combinations of filters.\n\n\tSo, I need the planner to choose the best plan...\n\nI've already change statistics, I clustered tables with cluster, ran vacuum\nanalyze, changed work_mem, shared_buffers...\n\nGreetings. TIA.\n\n", "msg_date": "Wed, 6 Jul 2005 19:04:48 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": true, "msg_subject": "join and query planner" }, { "msg_contents": "On Wed, 6 Jul 2005, Dario wrote:\n\n>\n> (first at all, sorry for my english)\n> Hi.\n> - Does \"left join\" restrict the order in which the planner must join\n> tables? I've read about join, but i'm not sure about left join...\n\nYes. Reordering the outer joins can change the results in some cases which\nwould be invalid. Before we can change the ordering behavior, we really\nneed to know under what conditions it is safe to do the reordering.\n\n", "msg_date": "Wed, 6 Jul 2005 15:34:03 -0700 (PDT)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join and query planner" }, { "msg_contents": "Dario Pudlo wrote:\n> (first at all, sorry for my english)\n> Hi.\n> - Does \"left join\" restrict the order in which the planner must join\n> tables? I've read about join, but i'm not sure about left join...\n> - If so: Can I avoid this behavior? I mean, make the planner resolve the\n> query, using statistics (uniqueness, data distribution) rather than join\n> order.\n>\n> \tMy query looks like:\n> \tSELECT ...\n> FROM a, b,\n> LEFT JOIN c ON (c.key = a.key)\n> LEFT JOIN d on (d.key=a.key)\n> WHERE (a.key = b.key) AND (b.column <= 100)\n>\n> b.column has a lot better selectivity, but planner insist on resolve\n> first c.key = a.key.\n>\n> \tOf course, I could rewrite something like:\n> \tSELECT ...\n> FROM\n> (SELECT ...\n> FROM a,b\n> LEFT JOIN d on (d.key=a.key)\n> WHERE (b.column <= 100)\n> )\n> as aa\n> LEFT JOIN c ON (c.key = aa.key)\n>\n> \tbut this is query is constructed by an application with a \"multicolumn\"\n> filter. It's dynamic.\n> It means that a user could choose to look for \"c.column = 1000\". And\n> also, combinations of filters.\n>\n> \tSo, I need the planner to choose the best plan...\n\nProbably forcing the other join earlier could help:\nSELECT ...\n FROM a JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n...\n\nI think the problem is that postgresql can't break JOIN syntax very\neasily. But you can make the JOIN earlier.\n\nJohn\n=:->\n>\n> I've already change statistics, I clustered tables with cluster, ran vacuum\n> analyze, changed work_mem, shared_buffers...\n>\n> Greetings. 
TIA.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>", "msg_date": "Mon, 11 Jul 2005 18:39:08 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join and query planner" }, { "msg_contents": "On Wed, Jul 06, 2005 at 18:54:02 -0300,\n Dario Pudlo <[email protected]> wrote:\n> (first at all, sorry for my english)\n> Hi.\n> - Does \"left join\" restrict the order in which the planner must join\n> tables? I've read about join, but i'm not sure about left join...\n\nThe left join operator is not associative so in general the planner doesn't\nhave much flexibility to reorder left (or right) joins.\n", "msg_date": "Tue, 12 Jul 2005 09:35:52 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join and query planner" } ]
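A sketch of the rewrite John suggests, using the thread's placeholder table names (b.column renamed to b.filter_col, since COLUMN is a reserved word): spelling the selective inner join out explicitly lets it be applied before the outer joins, whose relative order cannot be changed.

  SELECT a.*
    FROM a
    JOIN b ON (a.key = b.key)            -- selective filter applied here
    LEFT JOIN c ON (c.key = a.key)
    LEFT JOIN d ON (d.key = a.key)
   WHERE b.filter_col <= 100;

Running EXPLAIN on both forms should show whether the b filter is now evaluated ahead of the outer joins.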
[ { "msg_contents": "Hi to all,\n\nI have a performace problem with the following query:\n\n BEGIN;\n DECLARE mycursor BINARY CURSOR FOR\n SELECT\n toponimo,\n wpt\n FROM wpt_comuni_view\n WHERE (\n wpt &&\n setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3D, 4326)\n );\n FETCH ALL IN mycursor;\n END;\n\nI get the results in about 108 seconds (8060 rows).\n\nIf I issue the SELECT alone (without the CURSOR) I get the \nsame results in less than 1 second.\n\nThe wpt_comuni_view is a VIEW of a 3 tables JOIN, and the \"wpt\"\nfield is a PostGIS geometry column. The \"&&\" is the PostGIS\n\"overlaps\" operator.\n\nIf I CURSOR SELECT from a temp table instead of the JOIN VIEW the \nquery time 1 second.\n\nIf I omit the WHERE clause the CURSOR fetches results in 1\nsecond.\n\nCan the CURSOR on JOIN affects so heavly the WHERE clause? I\nsuspect that - with the CURSOR - a sequential scan is performed\non the entire data set for each fetched record...\n\nAny idea?\n\nThis is the definition of the VIEW:\n\n CREATE VIEW wpt_comuni_view AS\n SELECT istat_wpt.oid, istat_wpt.id, istat_wpt.toponimo, \n istat_comuni.residenti, istat_wpt.wpt\n FROM istat_comuni\n JOIN istat_comuni2wpt\n USING (idprovincia, idcomune)\n JOIN istat_wpt\n ON (idwpt = id);\n\nThank you for any hint.\n\n-- \nNiccolo Rigacci\nFirenze - Italy\n\nWar against Iraq? Not in my name!\n", "msg_date": "Wed, 6 Jul 2005 23:19:46 +0200", "msg_from": "Niccolo Rigacci <[email protected]>", "msg_from_op": true, "msg_subject": "CURSOR slowes down a WHERE clause 100 times?" }, { "msg_contents": "Niccolo Rigacci wrote:\n\n>Hi to all,\n>\n>I have a performace problem with the following query:\n>\n> BEGIN;\n> DECLARE mycursor BINARY CURSOR FOR\n> SELECT\n> toponimo,\n> wpt\n> FROM wpt_comuni_view\n> WHERE (\n> wpt &&\n> setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3D, 4326)\n> );\n> FETCH ALL IN mycursor;\n> END;\n>\n>I get the results in about 108 seconds (8060 rows).\n>\n>If I issue the SELECT alone (without the CURSOR) I get the\n>same results in less than 1 second.\n>\n>The wpt_comuni_view is a VIEW of a 3 tables JOIN, and the \"wpt\"\n>field is a PostGIS geometry column. The \"&&\" is the PostGIS\n>\"overlaps\" operator.\n>\n>If I CURSOR SELECT from a temp table instead of the JOIN VIEW the\n>query time 1 second.\n>\n>If I omit the WHERE clause the CURSOR fetches results in 1\n>second.\n>\n>Can the CURSOR on JOIN affects so heavly the WHERE clause? I\n>suspect that - with the CURSOR - a sequential scan is performed\n>on the entire data set for each fetched record...\n>\n>Any idea?\n>\n>\nWhat does it say if you do \"EXPLAIN ANALYZE SELECT...\" both with and\nwithout the cursor?\nIt may not say much for the cursor, but I think you can explain analyze\nthe fetch statements.\n\nIt is my understanding that Cursors generally favor using an\nslow-startup style plan, which usually means using an index, because it\nexpects that you won't actually want all of the data. A seqscan is not\nalways slower, especially if you need to go through most of the data.\n\nWithout an explain analyze it's hard to say what the planner is thinking\nand doing.\n\n>This is the definition of the VIEW:\n>\n> CREATE VIEW wpt_comuni_view AS\n> SELECT istat_wpt.oid, istat_wpt.id, istat_wpt.toponimo,\n> istat_comuni.residenti, istat_wpt.wpt\n> FROM istat_comuni\n> JOIN istat_comuni2wpt\n> USING (idprovincia, idcomune)\n> JOIN istat_wpt\n> ON (idwpt = id);\n>\n>Thank you for any hint.\n>\n>\n>\nYou might also try comparing your CURSOR to a prepared statement. 
There\nare a few rare cases where preparing is worse than issuing the query\ndirectly, depending on your data layout.\n\nJohn\n=:->", "msg_date": "Wed, 06 Jul 2005 16:29:00 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CURSOR slowes down a WHERE clause 100 times?" }, { "msg_contents": "> >Can the CURSOR on JOIN affects so heavly the WHERE clause? I\n> >suspect that - with the CURSOR - a sequential scan is performed\n> >on the entire data set for each fetched record...\n> >\n> >Any idea?\n> >\n> >\n> What does it say if you do \"EXPLAIN ANALYZE SELECT...\" both with and\n> without the cursor?\n> It may not say much for the cursor, but I think you can explain analyze\n> the fetch statements.\n\nHow can I EXPLAIN ANALYZE a cursor like this?\n\n BEGIN;\n DECLARE mycursor BINARY CURSOR FOR\n SELECT ...\n FETCH ALL IN mycursor;\n END;\n\nI tried to put EXPLAIN ANALYZE in front of the SELECT and in \nfront of the FETCH, but I got two \"syntax error\"...\n\nThanks\n\n-- \nNiccolo Rigacci\nFirenze - Italy\n\nWar against Iraq? Not in my name!\n", "msg_date": "Thu, 7 Jul 2005 09:41:47 +0200", "msg_from": "Niccolo Rigacci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CURSOR slowes down a WHERE clause 100 times?" }, { "msg_contents": "On Wed, Jul 06, 2005 at 11:19:46PM +0200, Niccolo Rigacci wrote:\n> \n> I have a performace problem with the following query:\n> \n> BEGIN;\n> DECLARE mycursor BINARY CURSOR FOR\n> SELECT\n> toponimo,\n> wpt\n> FROM wpt_comuni_view\n> WHERE (\n> wpt &&\n> setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3D, 4326)\n> );\n> FETCH ALL IN mycursor;\n> END;\n> \n> I get the results in about 108 seconds (8060 rows).\n> \n> If I issue the SELECT alone (without the CURSOR) I get the \n> same results in less than 1 second.\n\nBy trial and error I discovered that adding an \"ORDER BY \ntoponimo\" clause to the SELECT, boosts the CURSOR performances \nso that they are now equiparable to the SELECT alone.\n\nIs there some documentation on how an ORDER can affect the \nCURSOR in a different way than the SELECT?\n\n-- \nNiccolo Rigacci\nFirenze - Italy\n\nWar against Iraq? Not in my name!\n", "msg_date": "Thu, 7 Jul 2005 10:13:27 +0200", "msg_from": "Niccolo Rigacci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CURSOR slowes down a WHERE clause 100 times?" }, { "msg_contents": "Niccolo Rigacci wrote:\n>>\n>>I get the results in about 108 seconds (8060 rows).\n>>\n>>If I issue the SELECT alone (without the CURSOR) I get the \n>>same results in less than 1 second.\n> \n> \n> By trial and error I discovered that adding an \"ORDER BY \n> toponimo\" clause to the SELECT, boosts the CURSOR performances \n> so that they are now equiparable to the SELECT alone.\n> \n> Is there some documentation on how an ORDER can affect the \n> CURSOR in a different way than the SELECT?\n\nI think you're misunderstanding exactly what's happening here. If you \nask for a cursor, PG assumes you aren't going to want all the results \n(or at least not straight away). After all, most people use them to work \nthrough results in comparatively small chunks, perhaps only ever \nfetching 1% of the total results.\n\nSo - if you ask for a cursor, PG weights things to give you the first \nfew rows as soon as possible, at the expense of fetching *all* rows \nquickly. If you're only going to fetch e.g. the first 20 rows this is \nexactly what you want. 
In your case, since you're immediately issuing \nFETCH ALL, you're not really using the cursor at all, but PG doesn't \nknow that.\n\nSo - the ORDER BY means PG has to sort all the results before returning \nthe first row anyway. That probably means the plans with/without cursor \nare identical.\n\nOf course, all this assumes that your configuration settings are good \nand statistics adequate. To test that, try fetching just the first row \nfrom your cursor with/without the ORDER BY. Without should be quicker.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Jul 2005 10:14:50 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CURSOR slowes down a WHERE clause 100 times?" }, { "msg_contents": "On Thu, Jul 07, 2005 at 10:14:50AM +0100, Richard Huxton wrote:\n> >By trial and error I discovered that adding an \"ORDER BY \n> >toponimo\" clause to the SELECT, boosts the CURSOR performances \n> >so that they are now equiparable to the SELECT alone.\n\n> I think you're misunderstanding exactly what's happening here. If you \n> ask for a cursor, PG assumes you aren't going to want all the results \n> (or at least not straight away). After all, most people use them to work \n> through results in comparatively small chunks, perhaps only ever \n> fetching 1% of the total results.\n\nThis make finally sense!\n\n> In your case, since you're immediately issuing FETCH ALL,\n> you're not really using the cursor at all, but PG doesn't know\n> that.\n\nIn fact, fetching only the first rows from the cursor, is rather\nquick! This demonstrates that the PG planner is smart.\n\nNot so smart are the MapServer and QGIS query builders, which use\na CURSOR to FETCH ALL.\n\nI will investigate in this direction now.\n\nThank you very much, your help was excellent!\n\n-- \nNiccolo Rigacci\nFirenze - Italy\n\nWar against Iraq? Not in my name!\n", "msg_date": "Thu, 7 Jul 2005 12:06:58 +0200", "msg_from": "Niccolo Rigacci <[email protected]>", "msg_from_op": true, "msg_subject": "Re: CURSOR slowes down a WHERE clause 100 times?" }, { "msg_contents": "Niccolo Rigacci <[email protected]> writes:\n> How can I EXPLAIN ANALYZE a cursor like this?\n\n> BEGIN;\n> DECLARE mycursor BINARY CURSOR FOR\n> SELECT ...\n> FETCH ALL IN mycursor;\n> END;\n\n> I tried to put EXPLAIN ANALYZE in front of the SELECT and in \n> front of the FETCH, but I got two \"syntax error\"...\n\nJust FYI, you can't EXPLAIN ANALYZE this, but you can EXPLAIN it:\n\n\tEXPLAIN DECLARE x CURSOR FOR ...\n\nso you can at least find out what the plan is.\n\nIt might be cool to support EXPLAIN ANALYZE FETCH --- not sure what that\nwould take.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Jul 2005 09:59:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: CURSOR slowes down a WHERE clause 100 times? " } ]
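Applying Tom's suggestion to the original query gives a direct way to compare the cursor plan with the plain-SELECT plan (PostGIS operator and view exactly as in the thread):

  BEGIN;
  EXPLAIN DECLARE mycursor BINARY CURSOR FOR
      SELECT toponimo, wpt
        FROM wpt_comuni_view
       WHERE wpt && setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3D, 4326);

  EXPLAIN
      SELECT toponimo, wpt
        FROM wpt_comuni_view
       WHERE wpt && setSRID('BOX3D(4.83 36, 20.16 47.5)'::BOX3D, 4326);
  COMMIT;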
[ { "msg_contents": "\nHi Paul, just some quick thoughts:\n\n \n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Paul Johnson\n> Sent: Wednesday, July 06, 2005 10:16 AM\n> To: [email protected]\n> Subject: [PERFORM] Data Warehousing Tuning\n> \n\n> \n> Questions are as follows:\n> \n> 1) Should we have set the page size to 32MB when we compiled Postgres?\n> \n> We mainly do bulk loads using 'copy', full-table scans and \n> large joins so this would seem sensible. Tables are typically \n> 10 million rows at present.\n\nI would defer changing page size to the \"fine-tuning\" category, our\nexperience with that has not produced substantial gains. Would focus on\nthe other categories you mention below first.\n\nAlso, for heavy use of COPY, you may consider using the latest release\nof Bizgres 0.6, which should speed loads:\nhttp://www.bizgres.org/pages.php?pg=downloads or\nhttp://www.greenplum.com/prod_download.html for compiled version.\n\n\n> \n> 2) What are the obvious changes to make to postgresql.conf?\n> \n> Things like shared_buffers, work_mem, maintenance_work_mem \n> and checkpoint_segments seem like good candidates given our \n> data warehousing workloads.\n\nYou're on the right track, it depends on nature of queries (sorry for\ngiving you the \"consulting\" answer on that one), but here are some\nPostgreSQL configurations to consider:\n\na - Consider using separate disk partitions for transaction log, temp\nspace and WAL. See separate note about WAL and directio in Solaris\ntuning note, link below. May put temp space on a separate partition, in\nanticipation of forthcoming changes which take advantage of this.\n\nb - Sizing temp space? Should be as large as the largest index. Set\nmax speed read/write config: minimal journaling, use write-through cache\non this.\n\nc - Might try increasing checkpoint segments (64?). More logs produces\nsignificant benefit. And turn checkpoint warnings on (Off by default).\n\nd - Sort mem and work mem - What queries are you running? Workmem used\nin aggregation/sorts. How many concurrent reports? For 3 complex\nqueries, might try 256MB at work mem?\n\ne - You probably do this already, but always ANALYZE after loads.\n\nf - Maintenance work mem - used in vacuum, analyze, creating bulk\nindexes, bulk checking for keys. Might consider using 512 or 750?\n\n\n> \n> 3) Ditto /etc/system?\n\nSee http://www.bizgres.org/bb/viewtopic.php?t=6 for Solaris.\n\n\n> \n> 4) We moved the pg_xlog files off /data/postgres (disks 2-7) \n> and into /opt/pg_xlog (disks 0-1), but it seemed like \n> performance decreased, so we moved them back again.\n> \n> Has anyone experienced real performance gains by moving the \n> pg_xlog files?\n> \n\nLikely to help only with COPY.\n\nFeel free to contact me directly if you have any questions on my\nstatements above. There is a Configurator in development which you\nmight find helpful when it is complete:\nhttp://www.bizgres.org/pages.php?pg=developers%7Cprojects%7Cconfigurator\n\n\n\nKind Regards,\nFrank\n\nFrank Wosczyna\nSystems Engineer\n+1 650 224 7374\n \nhttp://www.greenplum.com \n\nGreenPlum, Inc.\n1900 South Norfolk Street, Suite 224\nSan Mateo, California 94403 USA\n\n", "msg_date": "Wed, 6 Jul 2005 20:02:24 -0400", "msg_from": "\"Frank Wosczyna\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data Warehousing Tuning" } ]
[ { "msg_contents": "Hello,\nI was wondering if there is any way to speed up deletes on this table \n(see details below)?\nI am running few of these deletes (could become many more) inside a \ntransaction and each one takes allmost a second to complete.\nIs it because of the foreign key constraint, or is it something else?\n\nThanks!\n\n Table \"public.contacts\"\n Column | Type | \nModifiers\n-------------+------------------------ \n+----------------------------------------------------------\nid | integer | not null default nextval \n('public.contacts_id_seq'::text)\nrecord | integer |\ntype | integer |\nvalue | character varying(128) |\ndescription | character varying(255) |\npriority | integer |\nitescotype | integer |\noriginal | integer |\nIndexes:\n \"contacts_pkey\" PRIMARY KEY, btree (id)\n \"contacts_record_idx\" btree (record)\nForeign-key constraints:\n \"contacts_original_fkey\" FOREIGN KEY (original) REFERENCES \ncontacts(id)\n\ndev=# select count(id) from contacts;\ncount\n--------\n984834\n(1 row)\n\n\ndev=# explain analyze DELETE FROM contacts WHERE id = 985458;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------\nIndex Scan using contacts_pkey on contacts (cost=0.00..3.01 rows=1 \nwidth=6) (actual time=0.043..0.049 rows=1 loops=1)\n Index Cond: (id = 985458)\nTotal runtime: 840.481 ms\n(3 rows)\n\n", "msg_date": "Thu, 7 Jul 2005 13:16:30 +0200", "msg_from": "Bendik Rognlien Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "How to speed up delete" }, { "msg_contents": "On Thu, 07 Jul 2005 13:16:30 +0200, Bendik Rognlien Johansen \n<[email protected]> wrote:\n\n> Hello,\n> I was wondering if there is any way to speed up deletes on this table \n> (see details below)?\n> I am running few of these deletes (could become many more) inside a \n> transaction and each one takes allmost a second to complete.\n> Is it because of the foreign key constraint, or is it something else?\n>\n> Thanks!\n\n\tCheck your references : on delete, pg needs to find which rows to \ncascade-delete, or set null, or restrict, in the tables which reference \nthis one. Also if this table references another I think it will lookup it \ntoo. Do you have indexes for all this ?\n", "msg_date": "Thu, 07 Jul 2005 14:55:28 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up delete" }, { "msg_contents": "Bendik Rognlien Johansen <[email protected]> writes:\n> I am running few of these deletes (could become many more) inside a \n> transaction and each one takes allmost a second to complete.\n> Is it because of the foreign key constraint, or is it something else?\n\nYou need an index on \"original\" to support that FK efficiently. Check\nfor references from other tables to this one, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Jul 2005 10:02:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to speed up delete " }, { "msg_contents": "Thanks!\nThat took care of it.\nOn Jul 7, 2005, at 4:02 PM, Tom Lane wrote:\n\n> Bendik Rognlien Johansen <[email protected]> writes:\n>\n>> I am running few of these deletes (could become many more) inside a\n>> transaction and each one takes allmost a second to complete.\n>> Is it because of the foreign key constraint, or is it something else?\n>>\n>\n> You need an index on \"original\" to support that FK efficiently. 
Check\n> for references from other tables to this one, too.\n>\n> regards, tom lane\n>\n\n", "msg_date": "Thu, 7 Jul 2005 17:03:36 +0200", "msg_from": "Bendik Rognlien Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to speed up delete " } ]
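Concretely, Tom's fix plus a safe way to re-check the plan (the index name is illustrative; note that EXPLAIN ANALYZE really executes the DELETE, hence the rollback):

  CREATE INDEX contacts_original_idx ON contacts (original);
  ANALYZE contacts;

  BEGIN;
  EXPLAIN ANALYZE DELETE FROM contacts WHERE id = 985458;
  ROLLBACK;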
[ { "msg_contents": "Hi,\nThese last two days, I have some troubles with a very strange phenomena:\nI have a 400 Mb database and a stored procedure written in perl which\ncall 14 millions times spi_exec_query (thanks to Tom to fix the memory\nleak ;-) ).\nOn my laptop whith Centrino 1.6 GHz, 512 Mb RAM,\n - it is solved in 1h50' for Linux 2.6\n - it is solved in 1h37' for WXP Professionnal (<troll on> WXP better\ntan Linux ;-) <troll off>)\nOn a Desktop with PIV 2.8 GHz, \n - it is solved in 3h30 for W2K\nOn a Desktop with PIV 1.8 GHz, two disks with data and index's on each disk\n - it is solved in 4h for W2K\n\nI test CPU, memory performance on my laptop and it seems that the\nperformances are not perfect except for one single test: String sort.\n\nSo, it seems that for my application (database in memory, 14 millions\nof very small requests), Centrino (aka Pentium M) has a build-in\nhardware to boost Postgres performance :-)\nAny experience to confirm this fact ?\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Thu, 7 Jul 2005 14:42:08 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "\n> So, it seems that for my application (database in memory, 14 millions\n> of very small requests), Centrino (aka Pentium M) has a build-in\n> hardware to boost Postgres performance :-)\n> Any experience to confirm this fact ?\n\n\tOn my Centrino, Python flies. This might be due to the very large \nprocessor cache. Probably it is the same for perl. With two megabytes of \ncache, sorting things that fit into the cache should be a lot faster too. \nMaybe this explains it.\n\n\tCheck this out :\n\nhttp://www.anandtech.com/linux/showdoc.aspx?i=2308&p=5\n\nhttp://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2129&p=11\n\n\tBonus for Opteron lovers :\n\n\"The Dual Opteron 252's lead by 19% over the Quad Xeon 3.6 GHz 667MHz FSB\"\n\nhttp://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2397&p=12\n", "msg_date": "Thu, 07 Jul 2005 15:03:22 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
[ { "msg_contents": "Hi,\nThese last two days, I have some troubles with a very strange phenomena:\nI have a 400 Mb database and a stored procedure written in perl which\ncall 14 millions times spi_exec_query (thanks to Tom to fix the memory\nleak ;-) ).\nOn my laptop whith Centrino 1.6 GHz, 512 Mb RAM,\n- it is solved in 1h50' for Linux 2.6\n- it is solved in 1h37' for WXP Professionnal (<troll on> WXP better\ntan Linux ;-) <troll off>)\nOn a Desktop with PIV 2.8 GHz,\n- it is solved in 3h30 for W2K\nOn a Desktop with PIV 1.8 GHz, two disks with data and index's on each disk\n- it is solved in 4h for W2K\n\nI test CPU, memory performance on my laptop and it seems that the\nperformances are not perfect except for one single test: String sort.\n\nSo, it seems that for my application (database in memory, 14 millions\nof very small requests), Centrino (aka Pentium M) has a build-in\nhardware to boost Postgres performance :-)\nAny experience to confirm this fact ?\nSome tips to speed up Postgres on non-Centrino processors ?\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Thu, 7 Jul 2005 14:49:05 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Surprizing performances for Postgres on Centrino" }, { "msg_contents": "On 7/7/05, Jean-Max Reymond <[email protected]> wrote:\n> On my laptop whith Centrino 1.6 GHz, 512 Mb RAM,\n> - it is solved in 1h50' for Linux 2.6\n> - it is solved in 1h37' for WXP Professionnal (<troll on> WXP better\n> tan Linux ;-) <troll off>)\n[...]\n> I test CPU, memory performance on my laptop and it seems that the\n> performances are not perfect except for one single test: String sort.\n\nWell, Pentium 4 is not the most efficient processor around (despite\nall the advertisiing and all the advanced hyper features). Sure it\nreaches high GHz rates, but that's not what matters the most.\nThis is why AMD stopped giving GHz ratings and instead uses numbers\nwhich indicate how their processor relate to Pentium 4s. For instance\nAMD Athlon XP 1700+ is running at 1.45 GHz, but competes with\nPentium 4 1.7 GHz.\n\nSame is with Intels Pentium-III line (which evolved into Pentium-M\nCentrino actually). Like AMD Athlon, Pentium-M is more efficient\nabout its clockspeed than Pentium 4. In other words, you shouldn't\ncompare Pentium 4 and Pentium-M clock-by-clock. Pentium 4\njust needs more GHz to do same job as Pentium-M or Athlon.\n\nIf you want to get some better (more technical) information, just\ngoogle around for reviews and articles. There are plenty of them\nrecently since Apple intends to use Pentium-M as their future\nplatform, at least for notebooks. As for technical stuff, for instance\nlook at:\nhttp://www.tomshardware.com/howto/20050621/37watt-pc-02.html\n\nWhat really is interesting is the performance difference between\nWXP and L26... Are you sure they use exactly the same config\nparameters (shared buffers) and have similar statistics (both\nVACUUM ANALYZEd recently)?\n\n Regards,\n Dawid\n", "msg_date": "Thu, 7 Jul 2005 15:48:06 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surprizing performances for Postgres on Centrino" }, { "msg_contents": "On Thu, Jul 07, 2005 at 03:48:06PM +0200, Dawid Kuroczko wrote:\n> This is why AMD stopped giving GHz ratings and instead uses numbers\n> which indicate how their processor relate to Pentium 4s. 
For instance\n> AMD Athlon XP 1700+ is running at 1.45 GHz, but competes with\n> Pentium 4 1.7 GHz.\n\nActually, the XP ratings are _Athlon Thunderbird_ ratings, not P4 ratings. At\nleast they were intended to be that originally :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 7 Jul 2005 16:03:00 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surprizing performances for Postgres on Centrino" }, { "msg_contents": "On Thu, Jul 07, 2005 at 02:49:05PM +0200, Jean-Max Reymond wrote:\n> Hi,\n> These last two days, I have some troubles with a very strange phenomena:\n> I have a 400 Mb database and a stored procedure written in perl which\n> call 14 millions times spi_exec_query (thanks to Tom to fix the memory\n> leak ;-) ).\n> On my laptop whith Centrino 1.6 GHz, 512 Mb RAM,\n> - it is solved in 1h50' for Linux 2.6\n> - it is solved in 1h37' for WXP Professionnal (<troll on> WXP better\n> tan Linux ;-) <troll off>)\n> On a Desktop with PIV 2.8 GHz,\n> - it is solved in 3h30 for W2K\n> On a Desktop with PIV 1.8 GHz, two disks with data and index's on each disk\n> - it is solved in 4h for W2K\n\nDo you have the same locale settings on all of them?\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"We are who we choose to be\", sang the goldfinch\nwhen the sun is high (Sandman)\n", "msg_date": "Thu, 7 Jul 2005 10:09:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surprizing performances for Postgres on Centrino" }, { "msg_contents": "On 7/7/05, Alvaro Herrera <[email protected]> wrote:\n\n> \n> Do you have the same locale settings on all of them?\n> \n\ninterressant:\nUNICODE on the fast laptop\nSQL_ASCII on the slowest desktops.\nis UNICODE database faster than SQL_ASCII ?\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Thu, 7 Jul 2005 16:23:18 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Surprizing performances for Postgres on Centrino" }, { "msg_contents": "Jean-Max Reymond wrote:\n> On 7/7/05, Alvaro Herrera <[email protected]> wrote:\n> \n> \n>>Do you have the same locale settings on all of them?\n>>\n> \n> \n> interressant:\n> UNICODE on the fast laptop\n> SQL_ASCII on the slowest desktops.\n> is UNICODE database faster than SQL_ASCII ?\n\nThat's your encoding (character-set). Locale is something like \"C\" or \n\"en_US\" or \"fr_FR\".\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Jul 2005 15:52:33 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Surprizing performances for Postgres on Centrino" } ]
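For anyone comparing the boxes, locale and encoding are reported separately; a quick check to run on each machine (standard settings and catalog functions, nothing version-specific assumed):

  SHOW lc_collate;     -- the locale, which is what was asked about
  SHOW lc_ctype;
  SELECT datname, pg_encoding_to_char(encoding) AS encoding
    FROM pg_database;  -- the character set, which is what was answered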
[ { "msg_contents": "Hi all,\n\n I hope I am not asking too many questions. :)\n\n I have been trying to solve a performance problem in my program for a \nwhile now and, after getting an index to work which didn't speed things \nup enough, I am stumped. I am hoping someone here might have come across \na similar issue and came up with a creative solution they wouldn't mind \nsharing.\n\n I am not looking for details, I expect to do my homework, I just need \na pointer, suggestion or trick.\n\n The problem I have is that I am using pgSQL as a back end for my \nweb-based *nix backup program. Part of the database stores info on every \nfile and directory per partition. I use this information to build my \ndirectory tree. I have a test partition with ~325,000 files of which \n~30,000 are directories. I have been able to get the performance up to a \nreasonable level for displaying the directory tree including expanding \nand contracting branches (~3-5sec). I do this by loading all the \ndirectory info into an array and a hash once and using them as needed \ninstead of hitting the DB.\n\n The problem comes when the user toggles a directory branch's backup \nflag (a simple check box beside the directory name). If it's a directory \nnear the end of a branch it is fast enough. If they toggle a single file \nit is nearly instant. However if they toggle say the root directory, so \nevery file and directory below it needs to be updated, it can take \n500-600sec to return. Obviously this is no good.\n\n What I need is a scheme for being able to say, essentially:\n\nUPDATE file_info_1 SET file_backup='t' WHERE file_parent_dir~'^/';\n\n Faster. An index isn't enough because it needs to hit every entry anyway.\n\n I use perl to access the DB and generate the web pages. The file \nbrowser portion looks and acts like most file browsers (directory tree \nin the left frame with expanding and contracting directory branches and \na list of files in a given directory on the right). It does not use any \nplug-ins like Java and that is important to me that it stays that way (I \nwant it to be as simple as possible for the user to install).\n\n So far the only suggestion I've received is to keep a secondary \n'delta' table to store just the request. Then on load get the existing \ndata then check it against the delta table before creating the page. The \nbiggest draw back for me with this is that currently I don't need to \nprovide an 'Apply' button because a simple javascript call passes the \nrequest onto the perl script immediately. I really like the Mac-esque \napproach to keeping the UI as simple and straight forward as possible. \nSo, a suggestion that doesn't require something like an 'Apply' button \nwould be much appreciated.\n\n Thanks for any suggestions in advance!\n\nMadison\n\nPS - For what it's worth, this is the last hurdle for me to overcome \nbefore I can finally release my program as 'beta' after over 15 months \nof work! :)\n", "msg_date": "Thu, 07 Jul 2005 09:13:30 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Need suggestion high-level suggestion on how to solve a performance\n\tproblem" }, { "msg_contents": "\n\tHello,\n\tI once upon a time worked in a company doing backup software and I \nremember these problems, we had exactly the same !\n\tThe file tree was all into memory and everytime the user clicked on \nsomething it haaad to update everything. 
Being C++ it was very fast, but \nto backup a million files you needed a gig of RAM, which is... a problem \nlet's say, when you think my linux laptop has about 400k files on it.\n\tSo we rewrote the project entirely with the purpose of doing the million \nfiles thingy with the clunky Pentium 90 with 64 megabytes of RAM, and it \nworked.\n\tWhat I did was this :\n\t- use Berkeley DB\n\tBerkeley DB isn't a database like postgres, it's just a tree, but it's \ncool for managing trees. It's quite fast, uses key compression, etc.\n\tIt has however a few drawbacks :\n\t- files tend to fragment a lot over time and it can't reindex or vacuum \nlike postgres. You have to dump and reload.\n\t- the price of the licence to be able to embed it in your product and \nsell it is expensive, and if you want crash-proof, it's insanely expensive.\n\t- Even though it's a tree it has no idea what a parent is so you have to \nmess with that manually. We used a clever path encoding to keep all the \npaths inside the same directory close in the tree ; and separated database \nfor dirs and files because we wanted the dirs to be in the cache, whereas \nwe almost never touched the files.\n\n\tAnd...\n\tYou can't make it if you update every node everytime the user clicks on \nsomething. You have to update 1 node.\n\tIn your tree you have nodes.\n\tGive each node a state being one of these three : include, exclude, \ninherit\n\tWhen you fetch a node you also fetch all of its parents, and you \npropagate the state to know the state of the final node.\n\tIf a node is in state 'inherit' it is like its parent, etc.\n\tSo you have faster updates but slower selects. However, there is a bonus \n: if you check a directory as \"include\" and one of its subdirectory as \n\"exclude\", and the user adds files all over the place, the files added in \nthe \"included\" directory will be automatically backed up and the ones in \nthe 'ignored' directory will be automatically ignored, you have nothing to \nchange.\n\tAnd it is not that slow because, if you think about it, suppose you have \n/var/www/mysite/blah with 20.000 files in it, in order to inherit the \nstate of the parents on them you only have to fetch /var once, www once, \netc.\n\tSo if you propagate your inherited properties when doing a tree traversal \nit comes at no cost.\n\t\n\tIMHO it's the only solution.\n\n\tIt can be done quite easily also, using ltree types and a little stored \nprocedures, you can even make a view which gives the state of each \nelement, computed by inheritance.\n\n\tHere's the secret : the user will select 100.000 files by clicking on a \ndirectory near root, but the user will NEVER look at 100.000 files. So you \ncan make looking at files 10x slower if you can make including/excluding \ndirectories 100.000 times faster.\n\n\tNow you'll ask me, but how do I calculate the total size of the backup \nwithout looking at all the files ? when I click on a directory I don't \nknow what files are in it and which will inherit and which will not.\n\n\tIt's simple : you precompute it when you scan the disk for changed files. \nThis is the only time you should do a complete tree exploration.\n\n\tOn each directory we put a matrix [M]x[N], M and N being one of the three \nabove state, containing the amount of stuff in the directory which would \nbe in state M if the directory was in state N. This is very easy to \ncompute when you scan for new files. 
Then when a directory changes state, \nyou have to sum a few cells of that matrix to know how much more that adds \nto the backup. And you only look up 1 record.\n\n\tIs that helpful ?\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 08 Jul 2005 00:21:31 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need suggestion high-level suggestion on how to solve a\n\tperformance problem" }, { "msg_contents": "PFC wrote:\n> \n> Hello,\n> I once upon a time worked in a company doing backup software and I \n> remember these problems, we had exactly the same !\n\n Prety neat. :)\n\n> The file tree was all into memory and everytime the user clicked on \n> something it haaad to update everything. Being C++ it was very fast, \n> but to backup a million files you needed a gig of RAM, which is... a \n> problem let's say, when you think my linux laptop has about 400k files \n> on it.\n\n I want this to run on \"average\" systems (I'm developing it primarily \non my modest P3 1GHz Thinkpad w/ 512MB RAM running Debian) so expecting \nthat much free memory is not reasonable. As it is my test DB, with a \nrealistic amount of data, is ~150MB.\n\n> So we rewrote the project entirely with the purpose of doing the \n> million files thingy with the clunky Pentium 90 with 64 megabytes of \n> RAM, and it worked.\n> What I did was this :\n> - use Berkeley DB\n<snip>\n> - the price of the licence to be able to embed it in your product \n> and sell it is expensive, and if you want crash-proof, it's insanely \n> expensive.\n\n This is the kicker right there; my program is released under the GPL \nso it's fee-free. I can't eat anything costly like that. As it is there \nis hundreds and hundreds of hours in this program that I am already \nhoping to recoup one day through support contracts. Adding commercial \nsoftware I am afraid is not an option.\n\n> bonus : if you check a directory as \"include\" and one of its \n> subdirectory as \"exclude\", and the user adds files all over the place, \n> the files added in the \"included\" directory will be automatically \n> backed up and the ones in the 'ignored' directory will be automatically \n> ignored, you have nothing to change.\n<snip>\n> IMHO it's the only solution.\n\n Now *this* is an idea worth looking into. How I will implement it \nwith my system I don't know yet but it's a new line of thinking. Wonderful!\n\n> Now you'll ask me, but how do I calculate the total size of the \n> backup without looking at all the files ? when I click on a directory I \n> don't know what files are in it and which will inherit and which will not.\n> \n> It's simple : you precompute it when you scan the disk for changed \n> files. This is the only time you should do a complete tree exploration.\n\n This is already what I do. When a user selects a partition they want \nto select files to backup or restore the partition is scanned. The scan \nlooks at every file, directory and symlink and records it's size (on \ndisk), it mtime, owner, group, etc. and records it to the database. I've \ngot this scan/update running at ~1,500 files/second on my laptop. That \nwas actually the first performance tuning I started with. :)\n\n With all the data in the DB the backup script can calculate rather \nintelligently where it wants to copy each directory to.\n\n> On each directory we put a matrix [M]x[N], M and N being one of the \n> three above state, containing the amount of stuff in the directory \n> which would be in state M if the directory was in state N. 
This is very \n> easy to compute when you scan for new files. Then when a directory \n> changes state, you have to sum a few cells of that matrix to know how \n> much more that adds to the backup. And you only look up 1 record.\n\n In my case what I do is calculate the size of all the files selected \nfor backup in each directory, sort the directories from all sources by \nthe total size of all their selected files and then start assigning the \ndirectories, largest to smallest to each of my available destination \nmedias. If it runs out of destination space it backs up what it can and \nthen waits a user-definable amount of time and then checks to see if any \nnew destination media has been made available. If so it again tries to \nassign the files/directories that didn't fit. It will loop a \nuser-definable number of times before giving up and warning the user \nthat more destination space is needed for that backup job.\n\n> Is that helpful ?\n\n The three states (inhertied, backup, ignore) has definately caught my \nattention. Thank you very much for your idea and lengthy reply!\n\nMadison\n", "msg_date": "Thu, 07 Jul 2005 19:28:50 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Need suggestion high-level suggestion on how to solve" }, { "msg_contents": "\n\n> This is the kicker right there; my program is released under the GPL \n> so it's fee-free. I can't eat anything costly like that. As it is there \n> is hundreds and hundreds of hours in this program that I am already \n> hoping to recoup one day through support contracts. Adding commercial \n> software I am afraid is not an option.\n\n\tIf you open-source GPL then Berkeley is free for you.\n", "msg_date": "Fri, 08 Jul 2005 01:58:46 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need suggestion high-level suggestion on how to solve a\n\tperformance problem" }, { "msg_contents": "Madison,\n\n>    The problem comes when the user toggles a directory branch's backup\n> flag (a simple check box beside the directory name). If it's a directory\n> near the end of a branch it is fast enough. If they toggle a single file\n> it is nearly instant. However if they toggle say the root directory, so\n> every file and directory below it needs to be updated, it can take\n> 500-600sec to return. Obviously this is no good.\n>\n>    What I need is a scheme for being able to say, essentially:\n>\n> UPDATE file_info_1 SET file_backup='t' WHERE file_parent_dir~'^/';\n\nWell, from the sound of it the problem is not selecting the files to be \nupdated, it's updating them. \n\nWhat I would do, personally, is *not* store an update flag for each file. \nInstead, I would store the update flag for the directory which was \nselected. If users want to exclude certain files and subdirectories, I'd \nalso include a dont_update flag. When it's time to back up, you simply \ncheck the tree for the most immediate update or don't update flag above \neach file.\n\nFor the table itself, I'd consider using ltree for the directory tree \nstructure. It has some nice features which makes it siginifcanly better \nthan using a delimited text field.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 7 Jul 2005 18:04:28 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need suggestion high-level suggestion on how to solve a\n\tperformance problem" } ]
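A minimal SQL sketch of the include/exclude/inherit scheme discussed in the thread above, using the contrib ltree module that Josh Berkus suggests. The table and column names (file_info, path, backup_state) are illustrative only and not taken from Madison's actual schema, and the ltree module must already be installed in the database. Toggling a whole branch touches a single row, and the effective state of any node is decided by its nearest non-'inherit' ancestor (the 'var.www.mysite.blah' path echoes PFC's /var/www/mysite/blah example):

    CREATE TABLE file_info (
        path         ltree PRIMARY KEY,        -- e.g. 'var.www.mysite.blah'
        backup_state text NOT NULL DEFAULT 'inherit'
                     CHECK (backup_state IN ('include', 'exclude', 'inherit'))
    );
    CREATE INDEX file_info_path_gist ON file_info USING gist (path);

    -- Excluding an entire branch is one row, however many files live under it.
    UPDATE file_info SET backup_state = 'exclude' WHERE path = 'var.www.mysite';

    -- Effective state of one node: the deepest ancestor (or the node itself)
    -- whose state is not 'inherit' wins.
    SELECT backup_state
      FROM file_info
     WHERE path @> 'var.www.mysite.blah'      -- ancestors of the node, plus the node
       AND backup_state <> 'inherit'
     ORDER BY nlevel(path) DESC
     LIMIT 1;

The same ancestor walk can be wrapped in a view or a small PL/pgSQL function, which is essentially the ltree-plus-stored-procedure arrangement suggested in the thread.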
[ { "msg_contents": "I'm putting together a road map on how our systems can scale as our load\nincreases. As part of this, I need to look into setting up some fast read\nonly mirrors of our database. We should have more than enough RAM to fit\neverything into memory. I would like to find out if I could expect better\nperformance by mounting the database from a RAM disk, or if I would be\nbetter off keeping that RAM free and increasing the effective_cache_size\nappropriately.\n\nI'd also be interested in knowing if this is dependant on whether I am\nrunning 7.4, 8.0 or 8.1.\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/", "msg_date": "Fri, 08 Jul 2005 11:32:40 +1000", "msg_from": "Stuart Bishop <[email protected]>", "msg_from_op": true, "msg_subject": "Mount database on RAM disk?" }, { "msg_contents": "Stuart Bishop wrote:\n> I'm putting together a road map on how our systems can scale as our load\n> increases. As part of this, I need to look into setting up some fast read\n> only mirrors of our database. We should have more than enough RAM to fit\n> everything into memory. I would like to find out if I could expect better\n> performance by mounting the database from a RAM disk, or if I would be\n> better off keeping that RAM free and increasing the effective_cache_size\n> appropriately.\n\nIn theory yes if you can fit the entire database onto a ram disk then \nyou would see a performance benefit.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> I'd also be interested in knowing if this is dependant on whether I am\n> running 7.4, 8.0 or 8.1.\n> \n> \n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Thu, 07 Jul 2005 20:05:42 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mount database on RAM disk?" }, { "msg_contents": "[email protected] (Stuart Bishop) writes:\n> I'm putting together a road map on how our systems can scale as our\n> load increases. As part of this, I need to look into setting up some\n> fast read only mirrors of our database. We should have more than\n> enough RAM to fit everything into memory. I would like to find out\n> if I could expect better performance by mounting the database from a\n> RAM disk, or if I would be better off keeping that RAM free and\n> increasing the effective_cache_size appropriately.\n\nIf you were willing to take on a not-inconsiderable risk, I'd think\nthat storing WAL files on a RAMDISK would be likely to be the fastest\nimprovement imaginable.\n\nIf I could get and deploy some SSD (Solid State Disk) devices that\nwould make this sort of thing *actually safe,* I'd expect that to be a\npretty fabulous improvement, at least for write-heavy database\nactivity.\n\n> I'd also be interested in knowing if this is dependant on whether I\n> am running 7.4, 8.0 or 8.1.\n\nBehaviour of all three could be somewhat different, as management of\nthe shared cache has been in flux...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. 
\"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Thu, 07 Jul 2005 23:30:09 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mount database on RAM disk?" }, { "msg_contents": "> If I could get and deploy some SSD (Solid State Disk) devices that\n> would make this sort of thing *actually safe,* I'd expect that to be a\n> pretty fabulous improvement, at least for write-heavy database\n> activity.\n\nNot nearly as much as you would expect. For the price of the SSD and a\nSCSI controller capable of keeping up to the SSD along with your regular\nstorage with enough throughput to keep up to structure IO, you can\npurchase a pretty good mid-range SAN which will be just as capable and\nmuch more versatile.\n\n-- \n\n", "msg_date": "Fri, 08 Jul 2005 10:03:48 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mount database on RAM disk?" }, { "msg_contents": "Stuart,\n\n> I'm putting together a road map on how our systems can scale as our load\n> increases. As part of this, I need to look into setting up some fast\n> read only mirrors of our database. We should have more than enough RAM\n> to fit everything into memory. I would like to find out if I could\n> expect better performance by mounting the database from a RAM disk, or\n> if I would be better off keeping that RAM free and increasing the\n> effective_cache_size appropriately.\n\nIf you're accessing a dedicated, read-only system with a database small \nenough to fit in RAM, it'll all be cached there anyway, at least on Linux \nand BSD. You won't be gaining anything by creating a ramdisk.\n\nBTW, effective_cache_size doesn't determine the amount of caching done. It \njust informs the planner about how much db is likely to be cached. The \nactual caching is up to the OS/filesystem.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 8 Jul 2005 10:22:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mount database on RAM disk?" } ]
[ { "msg_contents": "I am beginning to look at Postgres 8, and am particularly\ninterested in cost-based vacuum/analyze. I'm hoping someone\ncan shed some light on the behavior I am seeing.\n\nSuppose there are three threads:\n\nwriter_thread\n every 1/15 second do\n BEGIN TRANSACTION\n COPY table1 FROM stdin\n ...\n COPY tableN FROM stdin\n perform several UPDATEs, DELETEs and INSERTs\n COMMIT\n\nreader_thread\n every 1/15 second do\n BEGIN TRANSACTION\n SELECT FROM table1 ...\n ...\n SELECT FROM tableN ...\n COMMIT\n\nanalyze_thread\n every 5 minutes do\n ANALYZE table1\n ...\n ANALYZE tableN\n\n\nNow, Postgres 8.0.3 out-of-the-box (all default configs) on a\nparticular piece of hardware runs the Postgres connection for\nwriter_thread at about 15% CPU (meaningless, I know, but for\ncomparison) and runs the Postgres connection for reader_thread\nat about 30% CPU. Latency for reader_thread seeing updates\nfrom writer_thread is well under 1/15s. Impact of\nanalyze_thread is negligible.\n\nIf I make the single configuration change of setting\nvacuum_cost_delay=1000, each iteration in analyze_thread takes\nmuch longer, of course. But what I also see is that the CPU\nusage of the connections for writer_thread and reader_thread\nspike up to well over 80% each (this is a dualie) and latency\ndrops to 8-10s, during the ANALYZEs.\n\nI don't understand why this would be. I don't think there\nare any lock issues, and I don't see any obvious I/O issues.\nAm I missing something? Is there any way to get some\ninsight into what those connections are doing?\n\nThanks,\n\n\t--Ian\n\n\n", "msg_date": "Fri, 08 Jul 2005 12:25:02 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": true, "msg_subject": "cost-based vacuum" }, { "msg_contents": "Ian Westmacott <[email protected]> writes:\n> If I make the single configuration change of setting\n> vacuum_cost_delay=1000, each iteration in analyze_thread takes\n> much longer, of course. But what I also see is that the CPU\n> usage of the connections for writer_thread and reader_thread\n> spike up to well over 80% each (this is a dualie) and latency\n> drops to 8-10s, during the ANALYZEs.\n\n[ scratches head... ] That doesn't make any sense at all.\n\n> I don't understand why this would be. I don't think there\n> are any lock issues, and I don't see any obvious I/O issues.\n> Am I missing something? Is there any way to get some\n> insight into what those connections are doing?\n\nProfiling maybe? Can you put together a self-contained test case\nthat replicates this behavior, so other people could look?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Jul 2005 13:48:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum " }, { "msg_contents": "On Fri, 2005-07-08 at 12:25 -0400, Ian Westmacott wrote:\n> I am beginning to look at Postgres 8, and am particularly\n> interested in cost-based vacuum/analyze. 
I'm hoping someone\n> can shed some light on the behavior I am seeing.\n> \n> Suppose there are three threads:\n> \n> writer_thread\n> every 1/15 second do\n> BEGIN TRANSACTION\n> COPY table1 FROM stdin\n> ...\n> COPY tableN FROM stdin\n> perform several UPDATEs, DELETEs and INSERTs\n> COMMIT\n> \n> reader_thread\n> every 1/15 second do\n> BEGIN TRANSACTION\n> SELECT FROM table1 ...\n> ...\n> SELECT FROM tableN ...\n> COMMIT\n> \n> analyze_thread\n> every 5 minutes do\n> ANALYZE table1\n> ...\n> ANALYZE tableN\n> \n> \n> Now, Postgres 8.0.3 out-of-the-box (all default configs) on a\n> particular piece of hardware runs the Postgres connection for\n> writer_thread at about 15% CPU (meaningless, I know, but for\n> comparison) and runs the Postgres connection for reader_thread\n> at about 30% CPU. Latency for reader_thread seeing updates\n> from writer_thread is well under 1/15s. Impact of\n> analyze_thread is negligible.\n> \n> If I make the single configuration change of setting\n> vacuum_cost_delay=1000, each iteration in analyze_thread takes\n> much longer, of course. But what I also see is that the CPU\n> usage of the connections for writer_thread and reader_thread\n> spike up to well over 80% each (this is a dualie) and latency\n> drops to 8-10s, during the ANALYZEs.\n> \n> I don't understand why this would be. I don't think there\n> are any lock issues, and I don't see any obvious I/O issues.\n> Am I missing something? Is there any way to get some\n> insight into what those connections are doing?\n\nThe ANALYZE commands hold read locks on the tables you wish to write to.\nIf you slow them down, you merely slow down your write transactions\nalso, and then the read transactions that wait behind them. Every time\nthe ANALYZE sleeps it wakes up the other transactions, which then\nrealise they can't move because of locks and then wake up the ANALYZEs\nfor another shot. The end result is that you introduce more context-\nswitching, without any chance of doing more useful work while the\nANALYZEs sleep.\n\nDon't use the vacuum_cost_delay in this situation. You might try setting\nit to 0 for the analyze_thread only.\n\nSounds like you could speed things up by splitting everything into two\nsets of tables, with writer_thread1 and writer_thread2 etc. That way\nyour 2 CPUs would be able to independently be able to get through more\nwork without locking each other out.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Mon, 11 Jul 2005 12:31:44 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:\n> The ANALYZE commands hold read locks on the tables you wish to write to.\n> If you slow them down, you merely slow down your write transactions\n> also, and then the read transactions that wait behind them. Every time\n> the ANALYZE sleeps it wakes up the other transactions, which then\n> realise they can't move because of locks and then wake up the ANALYZEs\n> for another shot. The end result is that you introduce more context-\n> switching, without any chance of doing more useful work while the\n> ANALYZEs sleep.\n\nLet me make sure I understand. ANALYZE acquires a read\nlock on the table, that it holds until the operation is\ncomplete (including any sleeps). That read lock blocks\nthe extension of that table via COPY. Is that right?\n\nAccording to the 8.0 docs, ANALYZE acquires an ACCESS SHARE\nlock on the table, and that conflicts only with ACCESS\nEXCLUSIVE. 
Thats why I didn't think I had a lock issue,\nsince I think COPY only needs ROW EXCLUSIVE. Or perhaps\nthe transaction needs something more?\n\nThanks,\n\n\t--Ian\n\n\n", "msg_date": "Mon, 11 Jul 2005 09:07:46 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n>> I don't understand why this would be. I don't think there\n>> are any lock issues, and I don't see any obvious I/O issues.\n\n> The ANALYZE commands hold read locks on the tables you wish to write to.\n\nUnless there were more commands that Ian didn't show us, he's not taking\nany locks that would conflict with ANALYZE. So I don't believe this is\nthe explanation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Jul 2005 09:28:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum " }, { "msg_contents": "On Mon, 2005-07-11 at 09:07 -0400, Ian Westmacott wrote:\n> On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:\n> > The ANALYZE commands hold read locks on the tables you wish to write to.\n> > If you slow them down, you merely slow down your write transactions\n> > also, and then the read transactions that wait behind them. Every time\n> > the ANALYZE sleeps it wakes up the other transactions, which then\n> > realise they can't move because of locks and then wake up the ANALYZEs\n> > for another shot. The end result is that you introduce more context-\n> > switching, without any chance of doing more useful work while the\n> > ANALYZEs sleep.\n> \n> Let me make sure I understand. ANALYZE acquires a read\n> lock on the table, that it holds until the operation is\n> complete (including any sleeps). That read lock blocks\n> the extension of that table via COPY. Is that right?\n> \n> According to the 8.0 docs, ANALYZE acquires an ACCESS SHARE\n> lock on the table, and that conflicts only with ACCESS\n> EXCLUSIVE. Thats why I didn't think I had a lock issue,\n> since I think COPY only needs ROW EXCLUSIVE. Or perhaps\n> the transaction needs something more?\n\nThe docs are correct, but don't show catalog and buffer locks.\n\n...but on further reading of the code there are no catalog locks or\nbuffer locks held across the sleep points. So, my explanation doesn't\nwork as an explanation for the sleep/no sleep difference you have\nobserved.\n\nBest Regards, Simon Riggs\n\n\n\n\n", "msg_date": "Mon, 11 Jul 2005 15:51:28 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "On Mon, 2005-07-11 at 15:51 +0100, Simon Riggs wrote:\n> On Mon, 2005-07-11 at 09:07 -0400, Ian Westmacott wrote:\n> > On Mon, 2005-07-11 at 07:31, Simon Riggs wrote:\n> > > The ANALYZE commands hold read locks on the tables you wish to write to.\n> > > If you slow them down, you merely slow down your write transactions\n> > > also, and then the read transactions that wait behind them. Every time\n> > > the ANALYZE sleeps it wakes up the other transactions, which then\n> > > realise they can't move because of locks and then wake up the ANALYZEs\n> > > for another shot. The end result is that you introduce more context-\n> > > switching, without any chance of doing more useful work while the\n> > > ANALYZEs sleep.\n> > \n> > Let me make sure I understand. ANALYZE acquires a read\n> > lock on the table, that it holds until the operation is\n> > complete (including any sleeps). 
That read lock blocks\n> > the extension of that table via COPY. Is that right?\n> > \n> > According to the 8.0 docs, ANALYZE acquires an ACCESS SHARE\n> > lock on the table, and that conflicts only with ACCESS\n> > EXCLUSIVE. Thats why I didn't think I had a lock issue,\n> > since I think COPY only needs ROW EXCLUSIVE. Or perhaps\n> > the transaction needs something more?\n> \n> The docs are correct, but don't show catalog and buffer locks.\n> \n> ...but on further reading of the code there are no catalog locks or\n> buffer locks held across the sleep points. So, my explanation doesn't\n> work as an explanation for the sleep/no sleep difference you have\n> observed.\n\nI've been through all the code now and can't find any resource that is\nheld across a delay point. Nor any reason to believe that the vacuum\ncost accounting would slow anything down.\n\nSince vacuum_cost_delay is a userset parameter, you should be able to\nSET this solely for the analyze_thread. That way we will know with more\ncertainty that it is the analyze_thread that is interfering.\n\nWhat is your default_statistics_target?\nDo you have other stats targets set?\n\nHow long does ANALYZE take to run, with/without the vacuum_cost_delay?\n\nThanks,\n\nBest Regards, Simon Riggs\n\n\n", "msg_date": "Tue, 12 Jul 2005 08:45:38 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "On Tue, 2005-07-12 at 03:45, Simon Riggs wrote:\n> Since vacuum_cost_delay is a userset parameter, you should be able to\n> SET this solely for the analyze_thread. That way we will know with more\n> certainty that it is the analyze_thread that is interfering.\n\nThat is what I have been doing. In fact, I have eliminated\nthe reader_thread and analyze_thread. I just have the\nwriter_thread running, and a psql connection with which I\nperform ANALYZE, for various vacuum_cost_* parameters.\n(I'm trying to extract a reproducible experiment)\n\nIt appears not to matter whether it is one of the tables\nbeing written to that is ANALYZEd. I can ANALYZE an old,\nquiescent table, or a system table and see this effect.\n\n> What is your default_statistics_target?\n\nAll other configs are default; default_statistics_target=10.\n\n> Do you have other stats targets set?\n\nNo. The only thing slightly out of the ordinary with the\ntables is that they are created WITHOUT OIDS. Some indexes,\nbut no primary keys. All columns NOT NULL.\n\n> How long does ANALYZE take to run, with/without the vacuum_cost_delay?\n\nWell, on one table with about 50K rows, it takes about 1/4s\nto ANALYZE with vacuum_cost_delay=0, and about 15s with\nvacuum_cost_delay=1000.\n\nOther things of note:\n\n- VACUUM has the same effect. If I VACUUM or ANALYZE the\n whole DB, the CPU spikes reset between tables.\n- vmstat reports blocks written drops as the CPU rises.\n Don't know if it is cause or effect yet. On a small test\n system, I'm writing about 1.5MB/s. After about 20s\n of cost-based ANALYZE, this drops under 0.5MB/s.\n- this is a dual Xeon. I have tried both with and without\n hyperthreading. I haven't tried to reproduce it\n elsewhere yet, but will.\n- Looking at oprofile reports for 10-minute runs of a\n database-wide VACUUM with vacuum_cost_delay=0 and 1000,\n shows the latter spending a lot of time in LWLockAcquire\n and LWLockRelease (20% each vs. 
2%).\n\n\nThanks,\n\n\t--Ian\n\n\n", "msg_date": "Tue, 12 Jul 2005 13:50:51 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:\n> It appears not to matter whether it is one of the tables\n> being written to that is ANALYZEd. I can ANALYZE an old,\n> quiescent table, or a system table and see this effect.\n\nCan you confirm that this effect is still seen even when the ANALYZE\ndoesn't touch *any* of the tables being accessed?\n\n> - this is a dual Xeon. \n\nIs that Xeon MP then?\n\n> - Looking at oprofile reports for 10-minute runs of a\n> database-wide VACUUM with vacuum_cost_delay=0 and 1000,\n> shows the latter spending a lot of time in LWLockAcquire\n> and LWLockRelease (20% each vs. 2%).\n\nIs this associated with high context switching also?\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 13 Jul 2005 16:55:50 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:\n> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:\n> > It appears not to matter whether it is one of the tables\n> > being written to that is ANALYZEd. I can ANALYZE an old,\n> > quiescent table, or a system table and see this effect.\n> \n> Can you confirm that this effect is still seen even when the ANALYZE\n> doesn't touch *any* of the tables being accessed?\n\nYes.\n\n> > - this is a dual Xeon. \n> \n> Is that Xeon MP then?\n\nYes.\n\n> > - Looking at oprofile reports for 10-minute runs of a\n> > database-wide VACUUM with vacuum_cost_delay=0 and 1000,\n> > shows the latter spending a lot of time in LWLockAcquire\n> > and LWLockRelease (20% each vs. 2%).\n> \n> Is this associated with high context switching also?\n\nYes, it appears that context switches increase up to 4-5x\nduring cost-based ANALYZE.\n\n\t--Ian\n\n\n", "msg_date": "Wed, 13 Jul 2005 14:40:36 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "Ian Westmacott <[email protected]> writes:\n> On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:\n>> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:\n>>> It appears not to matter whether it is one of the tables\n>>> being written to that is ANALYZEd. I can ANALYZE an old,\n>>> quiescent table, or a system table and see this effect.\n>> \n>> Can you confirm that this effect is still seen even when the ANALYZE\n>> doesn't touch *any* of the tables being accessed?\n\n> Yes.\n\nThis really isn't making any sense at all. I took another look through\nthe vacuum_delay_point() calls, and I can see a couple that are\nquestionably placed:\n\n* the one in count_nondeletable_pages() is done while we are holding\nexclusive lock on the table; we might be better off not to delay there,\nso as not to block non-VACUUM processes longer than we have to.\n\n* the ones in hashbulkdelete and rtbulkdelete are done while holding\nvarious forms of exclusive locks on the index (this was formerly true\nof gistbulkdelete as well). 
Again it might be better not to delay.\n\nHowever, these certainly do not explain Ian's problem, because (a) these\nonly apply to VACUUM, not ANALYZE; (b) they would only lock the table\nbeing VACUUMed, not other ones; (c) if these locks were to block the\nreader or writer thread, it'd manifest as blocking on a semaphore, not\nas a surge in LWLock thrashing.\n\n>> Is that Xeon MP then?\n\n> Yes.\n\nThe LWLock activity is certainly suggestive of prior reports of\nexcessive buffer manager lock contention, but it makes *no* sense that\nthat would be higher with vacuum cost delay than without. I'd have\nexpected the other way around.\n\nI'd really like to see a test case for this...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Jul 2005 14:58:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum " }, { "msg_contents": "On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:\n> Ian Westmacott <[email protected]> writes:\n> > On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:\n> >> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:\n> >>> It appears not to matter whether it is one of the tables\n> >>> being written to that is ANALYZEd. I can ANALYZE an old,\n> >>> quiescent table, or a system table and see this effect.\n> >> \n> >> Can you confirm that this effect is still seen even when the ANALYZE\n> >> doesn't touch *any* of the tables being accessed?\n> \n> > Yes.\n> \n> This really isn't making any sense at all. \n\nAgreed. I think all of this indicates that some wierdness (technical\nterm) is happening at a different level in the computing stack. I think\nall of this points fairly strongly to it *not* being a PostgreSQL\nalgorithm problem, i.e. if the code was executed by an idealised Knuth-\nlike CPU then we would not get this problem. Plus, I have faith that if\nit was a problem in that \"plane\" then you or another would have\nuncovered it by now.\n\n> However, these certainly do not explain Ian's problem, because (a) these\n> only apply to VACUUM, not ANALYZE; (b) they would only lock the table\n> being VACUUMed, not other ones; (c) if these locks were to block the\n> reader or writer thread, it'd manifest as blocking on a semaphore, not\n> as a surge in LWLock thrashing.\n\nI've seen enough circumstantial evidence to connect the time spent\ninside LWLockAcquire/Release as being connected to the Semaphore ops\nwithin them, not the other aspects of the code.\n\nMonths ago we discussed the problem of false sharing on closely packed\narrays of shared variables because of the large cache line size of the\nXeon MP. When last we touched on that thought, I focused on the thought\nthat the LWLock array was too tightly packed for the predefined locks.\nWhat we didn't discuss (because I was too focused on the other array)\nwas the PGPROC shared array is equally tightly packed, which could give\nproblems on the semaphores in LWLock.\n\nIntel says fairly clearly that this would be an issue. \n\n> >> Is that Xeon MP then?\n> \n> > Yes.\n> \n> The LWLock activity is certainly suggestive of prior reports of\n> excessive buffer manager lock contention, but it makes *no* sense that\n> that would be higher with vacuum cost delay than without. 
I'd have\n> expected the other way around.\n> \n> I'd really like to see a test case for this...\n\nMy feeling is that a \"micro-architecture\" test would be more likely to\nreveal some interesting information.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 13 Jul 2005 21:39:48 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: cost-based vacuum" }, { "msg_contents": "I can at least report that the problem does not seem to\noccur with Postgres 8.0.1 running on a dual Opteron.\n\n\t--Ian\n\n\nOn Wed, 2005-07-13 at 16:39, Simon Riggs wrote:\n> On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:\n> > Ian Westmacott <[email protected]> writes:\n> > > On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:\n> > >> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:\n> > >>> It appears not to matter whether it is one of the tables\n> > >>> being written to that is ANALYZEd. I can ANALYZE an old,\n> > >>> quiescent table, or a system table and see this effect.\n> > >> \n> > >> Can you confirm that this effect is still seen even when the ANALYZE\n> > >> doesn't touch *any* of the tables being accessed?\n> > \n> > > Yes.\n> > \n> > This really isn't making any sense at all. \n> \n> Agreed. I think all of this indicates that some wierdness (technical\n> term) is happening at a different level in the computing stack. I think\n> all of this points fairly strongly to it *not* being a PostgreSQL\n> algorithm problem, i.e. if the code was executed by an idealised Knuth-\n> like CPU then we would not get this problem. Plus, I have faith that if\n> it was a problem in that \"plane\" then you or another would have\n> uncovered it by now.\n> \n> > However, these certainly do not explain Ian's problem, because (a) these\n> > only apply to VACUUM, not ANALYZE; (b) they would only lock the table\n> > being VACUUMed, not other ones; (c) if these locks were to block the\n> > reader or writer thread, it'd manifest as blocking on a semaphore, not\n> > as a surge in LWLock thrashing.\n> \n> I've seen enough circumstantial evidence to connect the time spent\n> inside LWLockAcquire/Release as being connected to the Semaphore ops\n> within them, not the other aspects of the code.\n> \n> Months ago we discussed the problem of false sharing on closely packed\n> arrays of shared variables because of the large cache line size of the\n> Xeon MP. When last we touched on that thought, I focused on the thought\n> that the LWLock array was too tightly packed for the predefined locks.\n> What we didn't discuss (because I was too focused on the other array)\n> was the PGPROC shared array is equally tightly packed, which could give\n> problems on the semaphores in LWLock.\n> \n> Intel says fairly clearly that this would be an issue. \n> \n> > >> Is that Xeon MP then?\n> > \n> > > Yes.\n> > \n> > The LWLock activity is certainly suggestive of prior reports of\n> > excessive buffer manager lock contention, but it makes *no* sense that\n> > that would be higher with vacuum cost delay than without. 
I'd have\n> > expected the other way around.\n> > \n> > I'd really like to see a test case for this...\n> \n> My feeling is that a \"micro-architecture\" test would be more likely to\n> reveal some interesting information.\n> \n> Best Regards, Simon Riggs\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Wed, 13 Jul 2005 17:46:10 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": true, "msg_subject": "Re: cost-based vacuum" } ]
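Earlier in the thread Simon notes that vacuum_cost_delay is a userset parameter, so confining the cost delay to the analyze thread alone needs no postgresql.conf change. A sketch, using the placeholder table names from the original description; note also that 1000 ms is far beyond the tens-of-milliseconds range the documentation has in mind for this parameter:

    -- run only in the analyze_thread's own connection
    SET vacuum_cost_delay = 0;     -- disable throttling here, or try a small value such as 10
    ANALYZE table1;
    ANALYZE table2;
    RESET vacuum_cost_delay;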
[ { "msg_contents": "> Stuart,\n> \n> > I'm putting together a road map on how our systems can scale as our\nload\n> > increases. As part of this, I need to look into setting up some fast\n> > read only mirrors of our database. We should have more than enough\nRAM\n> > to fit everything into memory. I would like to find out if I could\n> > expect better performance by mounting the database from a RAM disk,\nor\n> > if I would be better off keeping that RAM free and increasing the\n> > effective_cache_size appropriately.\n> \n> If you're accessing a dedicated, read-only system with a database\nsmall\n> enough to fit in RAM, it'll all be cached there anyway, at least on\nLinux\n> and BSD. You won't be gaining anything by creating a ramdisk.\n\n\n \nditto windows. \n\nFiles cached in memory are slower than reading straight from memory but\nnot nearly enough to justify reserving memory for your use. In other\nwords, your O/S is a machine with years and years of engineering\ndesigned best how to dole memory out to caching and various processes.\nWhy second guess it?\n\nMerlin\n", "msg_date": "Fri, 8 Jul 2005 15:21:21 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mount database on RAM disk?" }, { "msg_contents": "\nOn 8 Jul 2005, at 20:21, Merlin Moncure wrote:\n\n>> Stuart,\n>>\n>>\n>>> I'm putting together a road map on how our systems can scale as our\n>>>\n> load\n>\n>>> increases. As part of this, I need to look into setting up some fast\n>>> read only mirrors of our database. We should have more than enough\n>>>\n> RAM\n>\n>>> to fit everything into memory. I would like to find out if I could\n>>> expect better performance by mounting the database from a RAM disk,\n>>>\n> or\n>\n>>> if I would be better off keeping that RAM free and increasing the\n>>> effective_cache_size appropriately.\n>>>\n>>\n>> If you're accessing a dedicated, read-only system with a database\n>>\n> small\n>\n>> enough to fit in RAM, it'll all be cached there anyway, at least on\n>>\n> Linux\n>\n>> and BSD. You won't be gaining anything by creating a ramdisk.\n>>\n>\n>\n>\n> ditto windows.\n>\n> Files cached in memory are slower than reading straight from memory \n> but\n> not nearly enough to justify reserving memory for your use. In other\n> words, your O/S is a machine with years and years of engineering\n> designed best how to dole memory out to caching and various processes.\n> Why second guess it?\n\nBecause sometimes it gets it wrong. The most brutal method is \noccasionally the most desirable. Even if it not the \"right\" way to do \nit.\n\n> Merlin\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\n", "msg_date": "Sat, 9 Jul 2005 22:48:43 +0100", "msg_from": "Alex Stapleton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mount database on RAM disk?" }, { "msg_contents": "> On 8 Jul 2005, at 20:21, Merlin Moncure wrote:\n>> ditto windows.\n>>\n>> Files cached in memory are slower than reading straight from memory\n>> but not nearly enough to justify reserving memory for your use. In\n>> other words, your O/S is a machine with years and years of\n>> engineering designed best how to dole memory out to caching and\n>> various processes. Why second guess it?\n>\n> Because sometimes it gets it wrong. The most brutal method is\n> occasionally the most desirable. 
Even if it not the \"right\" way to do\n> it.\n\nThe fact that cache allows reads to come from memory means that for\nread-oriented activity, you're generally going to be better off\nleaving RAM as \"plain ordinary system memory\" so that it can\nautomatically be drawn into service as cache.\n\nThus, the main reason to consider using a RAM-disk is the fact that\nupdate times are negligible as there is not the latency of a\nround-trip to the disk.\n\nThat would encourage its use for write-heavy tables, with the STRONG\ncaveat that a power outage could readily destroy the database :-(.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in String.concat \"@\" [name;tld];;\nhttp://cbbrowne.com/info/rdbms.html\nRules of the Evil Overlord #153. \"My Legions of Terror will be an\nequal-opportunity employer. Conversely, when it is prophesied that no\nman can defeat me, I will keep in mind the increasing number of\nnon-traditional gender roles.\" <http://www.eviloverlord.com/>\n", "msg_date": "Sun, 10 Jul 2005 00:54:37 -0400", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mount database on RAM disk?" } ]
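Before dedicating RAM to such a disk, it is worth confirming whether reads are already being served from memory. A rough check, assuming the 8.0-era block-level statistics are enabled (stats_block_level = true): heap_blks_hit counts blocks found in shared buffers, while heap_blks_read counts blocks requested from the kernel, which may still be satisfied by the OS cache rather than the physical disks.

    SELECT relname,
           heap_blks_read,
           heap_blks_hit,
           round(100.0 * heap_blks_hit /
                 nullif(heap_blks_hit + heap_blks_read, 0), 2) AS buffer_hit_pct
      FROM pg_statio_user_tables
     ORDER BY heap_blks_read DESC
     LIMIT 10;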
[ { "msg_contents": "I created a user with a password. That newly created user now have\ntables and indexes. I want to ALTER that user to exclude the password.\nHow is this accomplished without dropping and recreating the users?\n\nLarry Bailey\nSr. Oracle DBA\nFirst American Real Estate Solution\n(714) 701-3347\[email protected] \n**********************************************************************\nThis message contains confidential information intended only for the \nuse of the addressee(s) named above and may contain information that \nis legally privileged. If you are not the addressee, or the person \nresponsible for delivering it to the addressee, you are hereby \nnotified that reading, disseminating, distributing or copying this \nmessage is strictly prohibited. If you have received this message by \nmistake, please immediately notify us by replying to the message and \ndelete the original message immediately thereafter.\n\nThank you. FADLD Tag\n**********************************************************************\n\n", "msg_date": "Fri, 8 Jul 2005 16:56:53 -0700", "msg_from": "\"Bailey, Larry\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to revoke a password" }, { "msg_contents": "Bailey, Larry wrote:\n> I created a user with a password. That newly created user now have\n> tables and indexes. I want to ALTER that user to exclude the password.\n> How is this accomplished without dropping and recreating the users?\n\nNever tried to go backwards before but:\n\nalter user foo with encrypted password '';\n\nBut as I look at pg_shadow there is still a hash...\n\nYou could do:\n\nupdate pg_shadow set passwd = '' where usename = 'foo';\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Larry Bailey\n> Sr. Oracle DBA\n> First American Real Estate Solution\n> (714) 701-3347\n> [email protected] \n> **********************************************************************\n> This message contains confidential information intended only for the \n> use of the addressee(s) named above and may contain information that \n> is legally privileged. If you are not the addressee, or the person \n> responsible for delivering it to the addressee, you are hereby \n> notified that reading, disseminating, distributing or copying this \n> message is strictly prohibited. If you have received this message by \n> mistake, please immediately notify us by replying to the message and \n> delete the original message immediately thereafter.\n> \n> Thank you. FADLD Tag\n> **********************************************************************\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 08 Jul 2005 17:09:48 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to revoke a password" }, { "msg_contents": "On Fri, Jul 08, 2005 at 05:09:48PM -0700, Joshua D. Drake wrote:\n> Bailey, Larry wrote:\n> >I created a user with a password. That newly created user now have\n> >tables and indexes. 
I want to ALTER that user to exclude the password.\n> >How is this accomplished without dropping and recreating the users?\n> \n> Never tried to go backwards before but:\n> \n> alter user foo with encrypted password '';\n\nI think you use NULL as password to ALTER USER.\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"Y eso te lo doy firmado con mis l�grimas\" (Fiebre del Loco)\n", "msg_date": "Fri, 8 Jul 2005 20:17:43 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to revoke a password" } ]
[ { "msg_contents": "Thanks but it is still prompting for a password. \n\n\nLarry Bailey\nSr. Oracle DBA\nFirst American Real Estate Solution\n(714) 701-3347\[email protected] \n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]] \nSent: Friday, July 08, 2005 5:10 PM\nTo: Bailey, Larry\nCc: [email protected]\nSubject: Re: [PERFORM] How to revoke a password\n\nBailey, Larry wrote:\n> I created a user with a password. That newly created user now have \n> tables and indexes. I want to ALTER that user to exclude the password.\n> How is this accomplished without dropping and recreating the users?\n\nNever tried to go backwards before but:\n\nalter user foo with encrypted password '';\n\nBut as I look at pg_shadow there is still a hash...\n\nYou could do:\n\nupdate pg_shadow set passwd = '' where usename = 'foo';\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Larry Bailey\n> Sr. Oracle DBA\n> First American Real Estate Solution\n> (714) 701-3347\n> [email protected]\n> **********************************************************************\n> This message contains confidential information intended only for the \n> use of the addressee(s) named above and may contain information that \n> is legally privileged. If you are not the addressee, or the person \n> responsible for delivering it to the addressee, you are hereby \n> notified that reading, disseminating, distributing or copying this \n> message is strictly prohibited. If you have received this message by \n> mistake, please immediately notify us by replying to the message and \n> delete the original message immediately thereafter.\n> \n> Thank you. FADLD Tag\n> **********************************************************************\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n\n--\nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting Home of\nPostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n\n\n", "msg_date": "Fri, 8 Jul 2005 17:16:27 -0700", "msg_from": "\"Bailey, Larry\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to revoke a password" }, { "msg_contents": "Bailey, Larry wrote:\n> Thanks but it is still prompting for a password. \n> \n\nDoes your pg_hba.conf require a password?\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Larry Bailey\n> Sr. Oracle DBA\n> First American Real Estate Solution\n> (714) 701-3347\n> [email protected] \n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]] \n> Sent: Friday, July 08, 2005 5:10 PM\n> To: Bailey, Larry\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to revoke a password\n> \n> Bailey, Larry wrote:\n> \n>>I created a user with a password. That newly created user now have \n>>tables and indexes. I want to ALTER that user to exclude the password.\n>>How is this accomplished without dropping and recreating the users?\n> \n> \n> Never tried to go backwards before but:\n> \n> alter user foo with encrypted password '';\n> \n> But as I look at pg_shadow there is still a hash...\n> \n> You could do:\n> \n> update pg_shadow set passwd = '' where usename = 'foo';\n> \n> Sincerely,\n> \n> Joshua D. Drake\n> \n> \n> \n>>Larry Bailey\n>>Sr. 
Oracle DBA\n>>First American Real Estate Solution\n>>(714) 701-3347\n>>[email protected]\n>>**********************************************************************\n>>This message contains confidential information intended only for the \n>>use of the addressee(s) named above and may contain information that \n>>is legally privileged. If you are not the addressee, or the person \n>>responsible for delivering it to the addressee, you are hereby \n>>notified that reading, disseminating, distributing or copying this \n>>message is strictly prohibited. If you have received this message by \n>>mistake, please immediately notify us by replying to the message and \n>>delete the original message immediately thereafter.\n>>\n>>Thank you. FADLD Tag\n>>**********************************************************************\n>>\n>>\n>>---------------------------(end of \n>>broadcast)---------------------------\n>>TIP 6: explain analyze is your friend\n> \n> \n> \n> --\n> Your PostgreSQL solutions provider, Command Prompt, Inc.\n> 24x7 support - 1.800.492.2240, programming, and consulting Home of\n> PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\n> http://www.commandprompt.com / http://www.postgresql.org\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n\n\n-- \nYour PostgreSQL solutions provider, Command Prompt, Inc.\n24x7 support - 1.800.492.2240, programming, and consulting\nHome of PostgreSQL Replicator, plPHP, plPerlNG and pgPHPToolkit\nhttp://www.commandprompt.com / http://www.postgresql.org\n", "msg_date": "Fri, 08 Jul 2005 17:56:06 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to revoke a password" }, { "msg_contents": "On Fri, Jul 08, 2005 at 05:16:27PM -0700, Bailey, Larry wrote:\n>\n> Thanks but it is still prompting for a password. \n\nLet's back up a bit: what problem are you trying to solve? Do you\nwant the user to be able to log in without entering a password? If\nso then see \"Client Authentication\" in the documentation:\n\nhttp://www.postgresql.org/docs/8.0/static/client-authentication.html\n\nIf you're trying to do something else then please elaborate, as\nit's not clear what you mean by \"I want to ALTER that user to exclude\nthe password.\"\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Fri, 8 Jul 2005 18:58:15 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to revoke a password" } ]
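As the last reply says, whether a password is demanded at connection time is governed by pg_hba.conf, not by whether the user still has one stored. A hypothetical entry letting an application host in without a password might look like the lines below; the database name, user and address are placeholders, and 'trust' should only be used on a network you fully control. Reload the server (pg_ctl reload) after editing the file.

    # TYPE  DATABASE  USER      CIDR-ADDRESS      METHOD
    host    mydb      appuser   192.168.1.0/24    trust
    local   all       all                         ident sameuser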
[ { "msg_contents": "The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER BY x..\n\nHow can that be possible?\n\nBtw: x and x||t are same ordered \n\nphoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x || t;\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------\n Sort (cost=2282.65..2284.92 rows=907 width=946) (actual\ntime=74.982..79.114 rows=950 loops=1)\n Sort Key: (x || t)\n -> Index Scan using i_i on test (cost=0.00..2238.09 rows=907 width=946)\n(actual time=0.077..51.015 rows=950 loops=1)\n Index Cond: (i < 20)\n Total runtime: 85.944 ms\n(5 rows)\n\nphoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x;\n QUERY PLAN\n----------------------------------------------------------------------------\n---------------------------------------------\n Sort (cost=2280.38..2282.65 rows=907 width=946) (actual\ntime=175.431..179.239 rows=950 loops=1)\n Sort Key: x\n -> Index Scan using i_i on test (cost=0.00..2235.82 rows=907 width=946)\n(actual time=0.024..5.378 rows=950 loops=1)\n Index Cond: (i < 20)\n Total runtime: 183.317 ms\n(5 rows)\n\n\n\n\n\nphoeniks=> \\d+ test\n Table \"public.test\"\n Column | Type | Modifiers | Description\n--------+---------+-----------+-------------\n i | integer | |\n t | text | |\n x | text | |\nIndexes:\n \"i_i\" btree (i)\n \"x_i\" btree (xpath_string(x, 'data'::text))\n \"x_ii\" btree (xpath_string(x, 'movie/characters/character'::text))\nHas OIDs: no\n\n", "msg_date": "Sat, 9 Jul 2005 22:38:40 +0400", "msg_from": "\"jobapply\" <[email protected]>", "msg_from_op": true, "msg_subject": "Sorting on longer key is faster ?" }, { "msg_contents": "jobapply wrote:\n> The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER BY x..\n>\n> How can that be possible?\n>\n> Btw: x and x||t are same ordered\n>\n> phoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x || t;\n> QUERY PLAN\n>\n\nWhat types are x and t, I have the feeling \"x || t\" is actually a\nboolean, so it is only a True/False sort, while ORDER BY x has to do\nsome sort of string comparison (which might actually be a locale\ndepended comparison, and strcoll can be very slow on some locales)\n\nJohn\n=:->\n\n> ----------------------------------------------------------------------------\n> ----------------------------------------------\n> Sort (cost=2282.65..2284.92 rows=907 width=946) (actual\n> time=74.982..79.114 rows=950 loops=1)\n> Sort Key: (x || t)\n> -> Index Scan using i_i on test (cost=0.00..2238.09 rows=907 width=946)\n> (actual time=0.077..51.015 rows=950 loops=1)\n> Index Cond: (i < 20)\n> Total runtime: 85.944 ms\n> (5 rows)\n>\n> phoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x;\n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ---------------------------------------------\n> Sort (cost=2280.38..2282.65 rows=907 width=946) (actual\n> time=175.431..179.239 rows=950 loops=1)\n> Sort Key: x\n> -> Index Scan using i_i on test (cost=0.00..2235.82 rows=907 width=946)\n> (actual time=0.024..5.378 rows=950 loops=1)\n> Index Cond: (i < 20)\n> Total runtime: 183.317 ms\n> (5 rows)\n>\n>\n>\n>\n>\n> phoeniks=> \\d+ test\n> Table \"public.test\"\n> Column | Type | Modifiers | Description\n> --------+---------+-----------+-------------\n> i | integer | |\n> t | text | |\n> x | text | |\n> Indexes:\n> \"i_i\" btree (i)\n> \"x_i\" btree (xpath_string(x, 'data'::text))\n> \"x_ii\" btree 
(xpath_string(x, 'movie/characters/character'::text))\n> Has OIDs: no\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>", "msg_date": "Mon, 11 Jul 2005 18:42:02 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting on longer key is faster ?" }, { "msg_contents": "\"jobapply\" <[email protected]> writes:\n> The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER BY x..\n> How can that be possible?\n\nHmm, how long are the x values? Is it possible many of them are\nTOASTed?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 11 Jul 2005 19:47:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting on longer key is faster ? " }, { "msg_contents": "Chris Travers wrote:\n> John A Meinel wrote:\n>\n>> jobapply wrote:\n>>\n>>\n>>> The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER\n>>> BY x..\n>>>\n>>> How can that be possible?\n>>>\n>>> Btw: x and x||t are same ordered\n>>>\n>>> phoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x\n>>> || t;\n>>> QUERY PLAN\n>>>\n>>>\n>>\n>>\n>> What types are x and t, I have the feeling \"x || t\" is actually a\n>> boolean, so it is only a True/False sort, while ORDER BY x has to do\n>> some sort of string comparison (which might actually be a locale\n>> depended comparison, and strcoll can be very slow on some locales)\n>>\n>>\n>>\n> Am I reading this that wrong? I would think that x || t would mean\n> \"concatenate x and t.\"\n\nSorry, I think you are right. I was getting my operators mixed up.\n>\n> This is interesting. I never through of writing a multicolumn sort this\n> way....\n\nI'm also surprised that the sort is faster with a merge operation. Are\nyou using UNICODE as the database format? I'm just wondering if it is\ndoing something funny like casting it to an easier to sort type.\n\n>\n> Best Wishes,\n> Chris Travers\n> Metatron Technology Consulting\n\nPS> Don't forget to Reply All so that your messages go back to the list.", "msg_date": "Mon, 11 Jul 2005 20:48:44 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting on longer key is faster ?" }, { "msg_contents": "jobapply wrote:\n> The 2 queries are almost same, but ORDER BY x||t is FASTER than ORDER BY x..\n>\n> How can that be possible?\n>\n> Btw: x and x||t are same ordered\n>\n> phoeniks=> explain analyze SELECT * FROM test WHERE i<20 ORDER BY x || t;\n> QUERY PLAN\n\nI also thought of another possibility. Are there a lot of similar\nentries in X? Meaning that the same value is repeated over and over? It\nis possible that the sort code has a weakness when sorting equal values.\n\nFor instance, if it was doing a Hash aggregation, you would have the\nsame hash repeated. (It isn't I'm just mentioning a case where it might\naffect something).\n\nIf it is creating a tree representation, it might cause some sort of\npathological worst-case behavior, where all entries keep adding to the\nsame side of the tree, rather than being more balanced.\n\nI don't know the internals of postgresql sorting, but just some ideas.\n\nJohn\n=:->", "msg_date": "Mon, 11 Jul 2005 20:52:25 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sorting on longer key is faster ?" } ]
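One way to test the locale hypothesis raised earlier in the thread on this installation is to check which collation the cluster was initialised with, then rerun the sort with the text pattern operators, which compare bytes rather than calling strcoll(). The resulting order can differ from ORDER BY x under a non-C locale, so this is a diagnostic rather than a drop-in replacement:

    SHOW lc_collate;

    EXPLAIN ANALYZE
    SELECT * FROM test WHERE i < 20
    ORDER BY x USING ~<~;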
[ { "msg_contents": ">- Sun V250 server\n>- 2*1.3GHz Sparc IIIi CPU\n>- 8GB RAM\n>- 8*73GB SCSI drives\n>- Solaris 10\n>- Postgres 8\n>4) We moved the pg_xlog files off /data/postgres (disks 2-7) and into\n>/opt/pg_xlog (disks 0-1), but it seemed like performance decreased, \n>so we moved them back again.\nYou have saturated SCSI bus.\n1x160GB/s SCSI too small for 8xHDD with 30-70MB/s\nSolutions:\nReplace CD/DVD/tape at top 2x5\" slots on 2xHDD (320 SCSI),\n install PCI 64/66 SCSI 320 controller \n (or simple RAID1 controller for minimize\n saturation of PCI buses)\n and attach to 2xHDD. Move /opt/pg_xlog on this drives.\n\nBest regards,\n Alexander Kirpa\n\n\n", "msg_date": "Mon, 11 Jul 2005 01:07:14 +0300", "msg_from": "\"Alexander Kirpa\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Data Warehousing Tuning" } ]
[ { "msg_contents": "In the past week, one guy of Unix Group in Colombia\nsay: \"Postgrest in production is bat, if the power off\nin any time the datas is lost why this datas is in\nplain files. Postgrest no ssupport data bases with\nmore 1 millon of records\". \nWath tell me in this respect?, is more best Informix\nas say \n\nIng. Alejandro Lemus G.\nRadio Taxi Aeropuerto S.A.\nAvenida de las Am�ricas # 51 - 39 Bogot� - Colombia\nTel: 571-4470694 / 571-4202600 Ext. 260 Fax: 571-2624070\nemail: [email protected]\n\n__________________________________________________\nCorreo Yahoo!\nEspacio para todos tus mensajes, antivirus y antispam �gratis! \nReg�strate ya - http://correo.espanol.yahoo.com/ \n", "msg_date": "Mon, 11 Jul 2005 07:59:51 -0500 (CDT)", "msg_from": "Alejandro Lemus <[email protected]>", "msg_from_op": true, "msg_subject": "Question" }, { "msg_contents": "Perhaps choose a better subject than \"question\" next time?\n\nAlejandro Lemus wrote:\n> In the past week, one guy of Unix Group in Colombia\n> say: \"Postgrest in production is bat, if the power off\n> in any time the datas is lost\n\nWrong. And it's called \"PostgreSQL\".\n\n > why this datas is in\n> plain files. Postgrest no ssupport data bases with\n> more 1 millon of records\". \n\nWrong.\n\n> Wath tell me in this respect?, is more best Informix\n> as say \n\nYour contact in the Unix Group in Columbia obviously talks on subjects \nwhere he knows little. Perhaps re-evaluate anything else you've heard \nfrom him.\n\nYou can find details on PostgreSQL at http://www.postgresql.org/, \nincluding the manuals:\n http://www.postgresql.org/docs/8.0/static/index.html\nThe FAQ:\n http://www.postgresql.org/docs/faq/\nSpanish/Brazilian communities, which might prove useful\n http://www.postgresql.org/community/international\n\nPostgreSQL is licensed under the BSD licence, which means you can freely \ndownload or deploy it in a commercial setting if you desire.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 11 Jul 2005 14:29:07 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Question" } ]
[ { "msg_contents": "> In the past week, one guy of Unix Group in Colombia\n> say: \"Postgrest in production is bat, if the power off in any \n> time the datas is lost why this datas is in plain files. \n> Postgrest no ssupport data bases with more 1 millon of records\". \n> Wath tell me in this respect?, is more best Informix as say \n\nBoth these statements are completely incorrect. \n\nUnlike some other \"database systems\", PostgreSQL *does* survive power\nloss without any major problems. Assuming you use a metadata journailng\nfilesystem, and don't run with non-battery-backed write-cache (but no db\ncan survive that..)\n\nAnd having a million records is no problem at all. You may run into\nconsiderations when you're talking billions, but you can do that as well\n- it just takes a bit more knowledge before you can do it right.\n\n//Magnus\n", "msg_date": "Mon, 11 Jul 2005 15:34:51 +0200", "msg_from": "\"Magnus Hagander\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question" } ]
[ { "msg_contents": "As a sometimes Informix and PostgreSQL DBA, I disagree with the contentions below. We have many tables with 10s of millions of rows in Postgres. We have had (alas) power issues with our lab on more than one occasion and the afflicted servers have recovered like a champ, every time.\n\nThis person may not like postgres (or very much likes Informix), but he shouldn't conjure up spurious reasons to support his/her prejudice.\n\nInformix is an excellent product, but it can be costly for web related applications. PostgeSQL is also an excellent database. Each has differences which may make the decision between the two of them clear. But facts are necessary to have a real discussion.\n\nGreg WIlliamson\nDBA\nGlobeXplorer LLC\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Alejandro\nLemus\nSent: Monday, July 11, 2005 6:00 AM\nTo: [email protected]\nSubject: [PERFORM] Question\n\n\nIn the past week, one guy of Unix Group in Colombia\nsay: \"Postgrest in production is bat, if the power off\nin any time the datas is lost why this datas is in\nplain files. Postgrest no ssupport data bases with\nmore 1 millon of records\". \nWath tell me in this respect?, is more best Informix\nas say \n\nIng. Alejandro Lemus G.\nRadio Taxi Aeropuerto S.A.\nAvenida de las Américas # 51 - 39 Bogotá - Colombia\nTel: 571-4470694 / 571-4202600 Ext. 260 Fax: 571-2624070\nemail: [email protected]\n\n__________________________________________________\nCorreo Yahoo!\nEspacio para todos tus mensajes, antivirus y antispam ¡gratis! \nRegístrate ya - http://correo.espanol.yahoo.com/ \n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n!DSPAM:42d26e2065882109568359!\n\n", "msg_date": "Mon, 11 Jul 2005 16:26:23 -0700", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Question" } ]
[ { "msg_contents": "Help! After recently migrating to Postgres 8, I've\ndiscovered to my horror that I can't determine which\nqueries are poorly performing anymore because the\nlogging has drastically changed and no longer shows\ndurations for anything done through JDBC.\n\nSo I'm desperately trying to do performance tuning on\nmy servers and have no way to sort out which\nstatements are the slowest.\n\nDoes anyone have any suggestions? How do you\ndetermine what queries are behaving badly when you\ncan't get durations out of the logs?\n\nI have a perl script that analyzes the output from\nPostgres 7 logs and it works great! But it relies on\nthe duration being there.\n\nI did some searches on postgresql.org mailing lists\nand have seen a few people discussing this problem,\nbut noone seems to be too worried about it. Is there\na simple work-around?\n\nSincerely,\n\nBrent\n\n\n\t\t\n____________________________________________________\nSell on Yahoo! Auctions � no fees. Bid on great items. \nhttp://auctions.yahoo.com/\n", "msg_date": "Mon, 11 Jul 2005 23:22:47 -0700 (PDT)", "msg_from": "Brent Henry <[email protected]>", "msg_from_op": true, "msg_subject": "General DB Tuning" }, { "msg_contents": "I have this in my postgresql.conf file and it works fine (set the min to \nwhatever you want to log)\nlog_min_duration_statement = 3000 # -1 is disabled, in milliseconds.\n\nAnother setting that might get what you want:\n\n#log_duration = false\n\nuncomment and change to true.\n\n From the docs: \n(http://www.postgresql.org/docs/8.0/interactive/runtime-config.html)\n\n Causes the duration of every completed statement which satisfies \nlog_statement to be logged. When using this option, if you are not using \nsyslog, it is recommended that you log the PID or session ID using \nlog_line_prefix so that you can link the statement to the duration using \nthe process ID or session ID. The default is off. Only superusers can \nchange this setting.\n\nBrent Henry wrote:\n> Help! After recently migrating to Postgres 8, I've\n> discovered to my horror that I can't determine which\n> queries are poorly performing anymore because the\n> logging has drastically changed and no longer shows\n> durations for anything done through JDBC.\n> \n> So I'm desperately trying to do performance tuning on\n> my servers and have no way to sort out which\n> statements are the slowest.\n> \n> Does anyone have any suggestions? How do you\n> determine what queries are behaving badly when you\n> can't get durations out of the logs?\n> \n> I have a perl script that analyzes the output from\n> Postgres 7 logs and it works great! But it relies on\n> the duration being there.\n> \n> I did some searches on postgresql.org mailing lists\n> and have seen a few people discussing this problem,\n> but noone seems to be too worried about it. Is there\n> a simple work-around?\n> \n> Sincerely,\n> \n> Brent\n> \n> \n> \t\t\n> ____________________________________________________\n> Sell on Yahoo! Auctions � no fees. Bid on great items. \n> http://auctions.yahoo.com/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n> \n> \n", "msg_date": "Tue, 12 Jul 2005 14:54:31 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "Yes, that is exactly what I want to use!\n\nUnfortunately, it doesn't work if you access postgres\nthrough a JDBC connection. I don't know why. 
I found\na posting from back in February which talks aobut this\na little:\n\nhttp://archives.postgresql.org/pgsql-admin/2005-02/msg00055.php\n\nBut I can't find anywhere where someone has fixed it. \nAm I the only one accessing postgres through JDBC?\n\n-Brent\n\n\n--- Tom Arthurs <[email protected]> wrote:\n\n> I have this in my postgresql.conf file and it works\n> fine (set the min to \n> whatever you want to log)\n> log_min_duration_statement = 3000 # -1 is disabled,\n> in milliseconds.\n> \n> Another setting that might get what you want:\n> \n> #log_duration = false\n> \n> uncomment and change to true.\n> \n> From the docs: \n>\n(http://www.postgresql.org/docs/8.0/interactive/runtime-config.html)\n> \n> Causes the duration of every completed statement\n> which satisfies \n> log_statement to be logged. When using this option,\n> if you are not using \n> syslog, it is recommended that you log the PID or\n> session ID using \n> log_line_prefix so that you can link the statement\n> to the duration using \n> the process ID or session ID. The default is off.\n> Only superusers can \n> change this setting.\n> \n> Brent Henry wrote:\n> > Help! After recently migrating to Postgres 8,\n> I've\n> > discovered to my horror that I can't determine\n> which\n> > queries are poorly performing anymore because the\n> > logging has drastically changed and no longer\n> shows\n> > durations for anything done through JDBC.\n> > \n> > So I'm desperately trying to do performance tuning\n> on\n> > my servers and have no way to sort out which\n> > statements are the slowest.\n> > \n> > Does anyone have any suggestions? How do you\n> > determine what queries are behaving badly when you\n> > can't get durations out of the logs?\n> > \n> > I have a perl script that analyzes the output from\n> > Postgres 7 logs and it works great! But it relies\n> on\n> > the duration being there.\n> > \n> > I did some searches on postgresql.org mailing\n> lists\n> > and have seen a few people discussing this\n> problem,\n> > but noone seems to be too worried about it. Is\n> there\n> > a simple work-around?\n> > \n> > Sincerely,\n> > \n> > Brent\n> > \n> > \n> > \t\t\n> >\n> ____________________________________________________\n> > Sell on Yahoo! Auctions � no fees. Bid on great\n> items. \n> > http://auctions.yahoo.com/\n> > \n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 5: don't forget to increase your free space\n> map settings\n> > \n> > \n> > \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Tue, 12 Jul 2005 16:32:30 -0700 (PDT)", "msg_from": "Brent Henry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "we are using jdbc -- the \"log_min_duration_statement = 3000 \" statement \nworks fine for me. Looks like there's no other work around for the \nbug(?). Not sure since I have no interest in logging a million \nstatements a day, I only want to see the poorly performing hits.\n\nBrent Henry wrote:\n> Yes, that is exactly what I want to use!\n> \n> Unfortunately, it doesn't work if you access postgres\n> through a JDBC connection. I don't know why. 
I found\n> a posting from back in February which talks aobut this\n> a little:\n> \n> http://archives.postgresql.org/pgsql-admin/2005-02/msg00055.php\n> \n> But I can't find anywhere where someone has fixed it. \n> Am I the only one accessing postgres through JDBC?\n> \n> -Brent\n> \n> \n> --- Tom Arthurs <[email protected]> wrote:\n> \n> \n>>I have this in my postgresql.conf file and it works\n>>fine (set the min to \n>>whatever you want to log)\n>>log_min_duration_statement = 3000 # -1 is disabled,\n>>in milliseconds.\n>>\n>>Another setting that might get what you want:\n>>\n>>#log_duration = false\n>>\n>>uncomment and change to true.\n>>\n>> From the docs: \n>>\n> \n> (http://www.postgresql.org/docs/8.0/interactive/runtime-config.html)\n> \n>> Causes the duration of every completed statement\n>>which satisfies \n>>log_statement to be logged. When using this option,\n>>if you are not using \n>>syslog, it is recommended that you log the PID or\n>>session ID using \n>>log_line_prefix so that you can link the statement\n>>to the duration using \n>>the process ID or session ID. The default is off.\n>>Only superusers can \n>>change this setting.\n>>\n>>Brent Henry wrote:\n>>\n>>>Help! After recently migrating to Postgres 8,\n>>\n>>I've\n>>\n>>>discovered to my horror that I can't determine\n>>\n>>which\n>>\n>>>queries are poorly performing anymore because the\n>>>logging has drastically changed and no longer\n>>\n>>shows\n>>\n>>>durations for anything done through JDBC.\n>>>\n>>>So I'm desperately trying to do performance tuning\n>>\n>>on\n>>\n>>>my servers and have no way to sort out which\n>>>statements are the slowest.\n>>>\n>>>Does anyone have any suggestions? How do you\n>>>determine what queries are behaving badly when you\n>>>can't get durations out of the logs?\n>>>\n>>>I have a perl script that analyzes the output from\n>>>Postgres 7 logs and it works great! But it relies\n>>\n>>on\n>>\n>>>the duration being there.\n>>>\n>>>I did some searches on postgresql.org mailing\n>>\n>>lists\n>>\n>>>and have seen a few people discussing this\n>>\n>>problem,\n>>\n>>>but noone seems to be too worried about it. Is\n>>\n>>there\n>>\n>>>a simple work-around?\n>>>\n>>>Sincerely,\n>>>\n>>>Brent\n>>>\n>>>\n>>>\t\t\n>>>\n>>\n>>____________________________________________________\n>>\n>>>Sell on Yahoo! Auctions � no fees. Bid on great\n>>\n>>items. \n>>\n>>>http://auctions.yahoo.com/\n>>>\n>>>---------------------------(end of\n>>\n>>broadcast)---------------------------\n>>\n>>>TIP 5: don't forget to increase your free space\n>>\n>>map settings\n>>\n>>>\n>>>\n>>---------------------------(end of\n>>broadcast)---------------------------\n>>TIP 2: Don't 'kill -9' the postmaster\n>>\n> \n> \n> \n> __________________________________________________\n> Do You Yahoo!?\n> Tired of spam? Yahoo! Mail has the best spam protection around \n> http://mail.yahoo.com \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n> \n> \n", "msg_date": "Tue, 12 Jul 2005 17:36:45 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "Tom Arthurs wrote:\n\n> we are using jdbc -- the \"log_min_duration_statement = 3000 \" \n> statement works fine for me. Looks like there's no other work around \n> for the bug(?). Not sure since I have no interest in logging a \n> million statements a day, I only want to see the poorly performing hits. 
\n\n\nDoesn't it depend on what jdbc driver you are using?\n\nDennis\n", "msg_date": "Tue, 12 Jul 2005 18:05:13 -0700", "msg_from": "Dennis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": ">> we are using jdbc -- the \"log_min_duration_statement = 3000 \" \n>> statement works fine for me. Looks like there's no other work around \n>> for the bug(?). Not sure since I have no interest in logging a \n>> million statements a day, I only want to see the poorly performing hits. \n> \n> Doesn't it depend on what jdbc driver you are using?\n\nIt depends if he's using new-protocol prepared queries which don't get \nlogged properly. Wasn't that fixed for 8.1 or something?\n\nChris\n\n", "msg_date": "Wed, 13 Jul 2005 09:17:56 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "hmm, yea maybe -- we are using the 7.4 driver with 8.0.x db.\n\nDennis wrote:\n> Tom Arthurs wrote:\n> \n>> we are using jdbc -- the \"log_min_duration_statement = 3000 \" \n>> statement works fine for me. Looks like there's no other work around \n>> for the bug(?). Not sure since I have no interest in logging a \n>> million statements a day, I only want to see the poorly performing hits. \n> \n> \n> \n> Doesn't it depend on what jdbc driver you are using?\n> \n> Dennis\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n> \n> \n", "msg_date": "Tue, 12 Jul 2005 18:30:14 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "We are running Postgres 8.0.2 with the 8.0.2 jdbc\ndriver. And yes we are using prepared statements. \nI've spent hours trying to get the\n'log_min_duration_statement' and 'log_duration'\noptions to work with no luck. I never get any\nduration from the statement. I also never see 'begin'\nor 'commit' in the log so I can't tell how long my\nbatch commands are taking to commit to the DB.\n\nIs there a different kind of 'prepared' statements\nthat we should be using in the driver to get logging\nto work properly? What is the 'new' protocol?\n\nTom, what version are you using? Are you using\nprepared statements in JDBC?\n\n-Brent\n\n\n--- Christopher Kings-Lynne\n<[email protected]> wrote:\n\n> >> we are using jdbc -- the\n> \"log_min_duration_statement = 3000 \" \n> >> statement works fine for me. Looks like there's\n> no other work around \n> >> for the bug(?). Not sure since I have no\n> interest in logging a \n> >> million statements a day, I only want to see the\n> poorly performing hits. \n> > \n> > Doesn't it depend on what jdbc driver you are\n> using?\n> \n> It depends if he's using new-protocol prepared\n> queries which don't get \n> logged properly. Wasn't that fixed for 8.1 or\n> something?\n> \n> Chris\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n\n\t\t\n____________________________________________________\nStart your day with Yahoo! 
- make it your home page\nhttp://www.yahoo.com/r/hs\n \n", "msg_date": "Tue, 12 Jul 2005 18:36:30 -0700 (PDT)", "msg_from": "Brent Henry <[email protected]>", "msg_from_op": true, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "> Is there a different kind of 'prepared' statements\n> that we should be using in the driver to get logging\n> to work properly? What is the 'new' protocol?\n\nThe 8.0.2 jdbc driver uses real prepared statements instead of faked \nones. The problem is the new protocol (that the 8.0.2 driver users) has \na bug where protocol-prepared queries don't get logged properly.\n\nI don't know if it's been fixed...\n\nChris\n\n", "msg_date": "Wed, 13 Jul 2005 09:52:20 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "Here's the answer for you from the jdbc list:\n\n> Alvin Hung wrote:\n> \n> \n>>> Currently, 8.0.2 / JDBC 8.0-310, log_min_duration_statement does not\n>>> work with JDBC. Nothing will get logged. This makes it very\n>>> difficult to tune a java application. Can you tell me when will this\n>>> be fixed? Thanks.\n> \n> \n> This is a server limitation: it does not handle logging of the V3\n> extended query protocol very well. There's gradual progress being made\n> on it; you might want to search the pgsql-hackers and pgsql-patches\n> archives for details.\n==========================================================================================\n\n\nWe are using prepared statements, but we are using the 7.4 driver with \nthe 8.0.3 server. I think it comes down to locally (on the client) \nprepared statements vs using server side prepared statments. I never \ngot past this issue (changing the code is in our todo list, but pretty \nfar down it) so I never noticed the logging issues.)\n\nI had a problem with prepared statements with the 8.x drivers -- here's \nwhat I got from the jdbc list when I asked the question:\n\n>>1. What changed between the driver versions that generate this error?\n> \n> \n> The driver started to use server-side prepared statements for\n> parameterization of queries (i.e. the driver translates ? to $n in the\n> main query string, and sends the actual parameter values out-of-band\n> from the query itself). One sideeffect of this is that parameters are\n> more strongly typed than in the 7.4.x versions where the driver would do\n> literal parameter substitution into the query string before sending it\n> to the backend. Also, you can use parameters in fewer places (they must\n> fit the backend's idea of where parameterizable expressions are allowed)\n> -- e.g. see the recent thread about \"ORDER BY ?\" changing behaviour with\n> the newer driver.\n> \n> \n>>> 2. What is the downside of continuing to use the 7.x version of the\n>>> driver -- or are there better alternatives (patch, new version, etc). I\n>>> am using build 311 of the driver.\n> \n> \n> Most active development happens on the 8.0 version; 7.4.x is maintained\n> for bugfixes but that's about it, you won't get the benefit of any\n> performance improvements or added features that go into 8.0. Also, the\n> 7.4.x driver won't necessarily work with servers >= 8.0.\n> \n> In the longer term, the 7.4.x version will eventually become unmaintained.\n\nSo for the short term, you could downgrade your driver.\n\n\nBrent Henry wrote:\n> We are running Postgres 8.0.2 with the 8.0.2 jdbc\n> driver. And yes we are using prepared statements. 
\n> I've spent hours trying to get the\n> 'log_min_duration_statement' and 'log_duration'\n> options to work with no luck. I never get any\n> duration from the statement. I also never see 'begin'\n> or 'commit' in the log so I can't tell how long my\n> batch commands are taking to commit to the DB.\n> \n> Is there a different kind of 'prepared' statements\n> that we should be using in the driver to get logging\n> to work properly? What is the 'new' protocol?\n> \n> Tom, what version are you using? Are you using\n> prepared statements in JDBC?\n> \n> -Brent\n> \n> \n> --- Christopher Kings-Lynne\n> <[email protected]> wrote:\n> \n> \n>>>>we are using jdbc -- the\n>>\n>>\"log_min_duration_statement = 3000 \" \n>>\n>>>>statement works fine for me. Looks like there's\n>>\n>>no other work around \n>>\n>>>>for the bug(?). Not sure since I have no\n>>\n>>interest in logging a \n>>\n>>>>million statements a day, I only want to see the\n>>\n>>poorly performing hits. \n>>\n>>>Doesn't it depend on what jdbc driver you are\n>>\n>>using?\n>>\n>>It depends if he's using new-protocol prepared\n>>queries which don't get \n>>logged properly. Wasn't that fixed for 8.1 or\n>>something?\n>>\n>>Chris\n>>\n>>\n>>---------------------------(end of\n>>broadcast)---------------------------\n>>TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n> \n> \n> \n> \n> \t\t\n> ____________________________________________________\n> Start your day with Yahoo! - make it your home page\n> http://www.yahoo.com/r/hs\n> \n> \n> \n> \n", "msg_date": "Tue, 12 Jul 2005 18:53:11 -0700", "msg_from": "Tom Arthurs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "On Wed, Jul 13, 2005 at 09:52:20AM +0800, Christopher Kings-Lynne wrote:\n> The 8.0.2 jdbc driver uses real prepared statements instead of faked \n> ones. The problem is the new protocol (that the 8.0.2 driver users) has \n> a bug where protocol-prepared queries don't get logged properly.\n> I don't know if it's been fixed...\n\nIt's not in 8.0.3, but I was having the same problems with DBD::Pg so\nI backported some of it and also changed the code so that it listed the\nvalues of the bind parameters, so you get something like\n\nLOG: statement: SELECT sr.name,sr.seq_region_id, sr.length, 1 FROM seq_region sr WHERE sr.name = $1 AND sr.coord_system_id = $2\nLOG: binding: \"dbdpg_2\" with 2 parameters\nLOG: bind \"dbdpg_2\" $1 = \"20\"\nLOG: bind \"dbdpg_2\" $2 = \"1\"\nLOG: statement: EXECUTE [PREPARE: SELECT sr.name,sr.seq_region_id, sr.length, 1 FROM seq_region sr WHERE sr.name = $1 AND sr.coord_system_id = $2]\nLOG: duration: 0.164 ms\n\nI've attached a patch in case anyone finds it useful.\n\n -Mark", "msg_date": "Wed, 13 Jul 2005 11:07:40 +0100", "msg_from": "Mark Rae <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" }, { "msg_contents": "On Wed, 2005-07-13 at 09:52 +0800, Christopher Kings-Lynne wrote:\n> > Is there a different kind of 'prepared' statements\n> > that we should be using in the driver to get logging\n> > to work properly? What is the 'new' protocol?\n> \n> The 8.0.2 jdbc driver uses real prepared statements instead of faked \n> ones. 
The problem is the new protocol (that the 8.0.2 driver users) has \n> a bug where protocol-prepared queries don't get logged properly.\n> \n> I don't know if it's been fixed...\n\nYes, there is a fix for this in 8.1\n\nBrent has been sent the details.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 13 Jul 2005 17:10:06 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: General DB Tuning" } ]
[ { "msg_contents": "Hi,\n\nWe have a couple of database that are identical (one for each customer).\nThey are all relatively small, ranging from 100k records to 1m records.\nThere's only one main table with some smaller tables, a lot of indexes \nand some functions.\n\nI would like to make an estimation of the performance, the diskspace \nand other related things,\nwhen we have database of for instance 10 million records or 100 million \nrecords.\n\nIs there any math to be done on that ?\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 12 Jul 2005 18:21:57 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Projecting currentdb to more users" }, { "msg_contents": "On 7/12/05, Yves Vindevogel <[email protected]> wrote:\n> Hi,\n> \n> We have a couple of database that are identical (one for each customer).\n> They are all relatively small, ranging from 100k records to 1m records.\n> There's only one main table with some smaller tables, a lot of indexes\n> and some functions.\n> \n> I would like to make an estimation of the performance, the diskspace\n> and other related things,\n> when we have database of for instance 10 million records or 100 million\n> records.\n> \n> Is there any math to be done on that ?\n\nIts pretty easy to make a database run fast with only a few thousand\nrecords, or even a million records, however things start to slow down\nnon-linearly when the database grows too big to fit in RAM.\n\nI'm not a guru, but my attempts to do this have not been very accurate.\n\nMaybe (just maybe) you could get an idea by disabling the OS cache on\nthe file system(s) holding the database and then somehow fragmenting\nthe drive severly (maybe by putting each table in it's own disk\npartition?!?) and measuring performance.\n\nOn the positive side, there are a lot of wise people on this list who\nhave +++ experience optimzing slow queries on big databases. So\nqueries now that run in 20 ms but slow down to 7 seconds when your\ntables grow will likely benefit from optimizing.\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 12 Jul 2005 13:00:49 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Projecting currentdb to more users" } ]
[ { "msg_contents": " From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues. \n\n\"...125. Intel has designed its compiler purposely to degrade performance when a program\nis run on an AMD platform. To achieve this, Intel designed the compiler to compile code\nalong several alternate code paths. Some paths are executed when the program runs on an Intel\nplatform and others are executed when the program is operated on a computer with an AMD\nmicroprocessor. (The choice of code path is determined when the program is started, using a\nfeature known as \"CPUID\" which identifies the computer's microprocessor.) By design, the\ncode paths were not created equally. If the program detects a \"Genuine Intel\" microprocessor,\nit executes a fully optimized code path and operates with the maximum efficiency. However,\nif the program detects an \"Authentic AMD\" microprocessor, it executes a different code path\nthat will degrade the program's performance or cause it to crash...\"\n\n", "msg_date": "Tue, 12 Jul 2005 18:24:52 -0000", "msg_from": "\"Mohan, Ross\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Projecting currentdb to more users" }, { "msg_contents": "On Tue, 2005-07-12 at 13:24, Mohan, Ross wrote:\n> From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues. \n> \n> \"...125. Intel has designed its compiler purposely to degrade performance when a program\n> is run on an AMD platform. To achieve this, Intel designed the compiler to compile code\n> along several alternate code paths. Some paths are executed when the program runs on an Intel\n> platform and others are executed when the program is operated on a computer with an AMD\n> microprocessor. (The choice of code path is determined when the program is started, using a\n> feature known as \"CPUID\" which identifies the computer's microprocessor.) By design, the\n> code paths were not created equally. If the program detects a \"Genuine Intel\" microprocessor,\n> it executes a fully optimized code path and operates with the maximum efficiency. However,\n> if the program detects an \"Authentic AMD\" microprocessor, it executes a different code path\n> that will degrade the program's performance or cause it to crash...\"\n\nWell, this is, right now, just AMD's supposition about Intel's\nbehaviour, I'm not sure one way or the other if Intel IS doing this. \nBeing a big, money hungry company, I wouldn't be surprised if they are,\nbut I don't think it would affect postgresql for most people, since they\nwould be using the gcc compiler.\n", "msg_date": "Tue, 12 Jul 2005 13:41:14 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Projecting currentdb to more users" }, { "msg_contents": "On Tue, Jul 12, 2005 at 01:41:14PM -0500, Scott Marlowe wrote:\n> On Tue, 2005-07-12 at 13:24, Mohan, Ross wrote:\n> > From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues. \n> Well, this is, right now, just AMD's supposition about Intel's\n> behaviour, I'm not sure one way or the other if Intel IS doing this. \n\nI think its more a case of AMD now having solid evidence to back\nup the claims. 
\n\nThis discovery, and that fact that you could get round it by\ntoggling some flags, was being discussed on various HPC mailing \nlists around about the beginning of this year.\n\n -Mark\n\n", "msg_date": "Tue, 12 Jul 2005 21:06:09 +0100", "msg_from": "Mark Rae <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Projecting currentdb to more users" }, { "msg_contents": "On Tue, 2005-07-12 at 15:06, Mark Rae wrote:\n> On Tue, Jul 12, 2005 at 01:41:14PM -0500, Scott Marlowe wrote:\n> > On Tue, 2005-07-12 at 13:24, Mohan, Ross wrote:\n> > > From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues. \n> > Well, this is, right now, just AMD's supposition about Intel's\n> > behaviour, I'm not sure one way or the other if Intel IS doing this. \n> \n> I think its more a case of AMD now having solid evidence to back\n> up the claims. \n> \n> This discovery, and that fact that you could get round it by\n> toggling some flags, was being discussed on various HPC mailing \n> lists around about the beginning of this year.\n\nWow! That's pretty fascinating. So, is the evidence pretty\noverwhelming that this was not simple incompetence, but real malice?\n\nI could see either one being a cause of this issue, and wouldn't really\nbe surprised by either one.\n", "msg_date": "Tue, 12 Jul 2005 15:11:35 -0500", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Projecting currentdb to more users" }, { "msg_contents": "On Tue, Jul 12, 2005 at 03:11:35PM -0500, Scott Marlowe wrote:\n> On Tue, 2005-07-12 at 15:06, Mark Rae wrote:\n> > I think its more a case of AMD now having solid evidence to back\n> > up the claims. \n> \n> Wow! That's pretty fascinating. So, is the evidence pretty\n> overwhelming that this was not simple incompetence, but real malice?\n\n\nI suppose that depends on the exact nature of the 'check'.\n\nAs far as I was aware it was more a case of 'I don't recognise this\nprocessor, so I'll do it the slow but safe way'.\n\nHowever from what AMD are claiming, it seems to be more of a \n'Its an AMD processor so I'll be deliberately slow and buggy'\n\n\nHaving said that, I have tried compiling PG with the intel compiler \nin the past, and haven't noticed any real difference. But in a database\nthere isn't much scope for vectorization and pipelining\ncompared with numerical code, which is where the Intel compiler\nmakes the greatest difference.\n\n -Mark\n", "msg_date": "Tue, 12 Jul 2005 21:32:13 +0100", "msg_from": "Mark Rae <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Projecting currentdb to more users" }, { "msg_contents": "2005/7/12, Mohan, Ross <[email protected]>:\n> From AMD's suit against Intel. Perhaps relevant to some PG/AMD issues.\n\nPostgres is compiled with gnu compiler. Isn't it ?\nI don't know how much can Postgres benefit from an optimized Intel compiler.\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Wed, 13 Jul 2005 07:18:32 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Projecting currentdb to more users" } ]
[ { "msg_contents": "with my application, it seems that size of cache has great effect:\nfrom 512 Kb of L2 cache to 1Mb boost performance with a factor 3 and\n20% again from 1Mb L2 cache to 2Mb L2 cache.\nI don't understand why a 512Kb cache L2 is too small to fit the data's\ndoes it exist a tool to trace processor activity and confirm that\nprocessor is waiting for memory ?\ndoes it exist a tool to snapshot postgres activity and understand\nwhere we spend time and potentialy avoid the bottleneck ?\n\nthanks for your tips.\n\n-- \nJean-Max Reymond\nCKR Solutions Open Source\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Wed, 13 Jul 2005 10:20:51 +0200", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": true, "msg_subject": "size of cache" }, { "msg_contents": "On Wed, 2005-07-13 at 10:20 +0200, Jean-Max Reymond wrote:\n> with my application, it seems that size of cache has great effect:\n> from 512 Kb of L2 cache to 1Mb boost performance with a factor 3 and\n> 20% again from 1Mb L2 cache to 2Mb L2 cache.\n\nMemory request time is the main bottleneck in well tuned database\nsystems, so your results could be reasonable. \n\n> I don't understand why a 512Kb cache L2 is too small to fit the data's\n> does it exist a tool to trace processor activity and confirm that\n> processor is waiting for memory ?\n\nYou have both data and instruction cache on the CPU. It is likely it is\nthe instruction cache that is too small to fit all of the code required\nfor your application's workload mix.\n\nUse Intel VTune or similar to show the results you seek. \n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 13 Jul 2005 17:07:47 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: size of cache" } ]
[ { "msg_contents": "Hello\n\nI have a large database with 4 large tables (each containing at least \n200 000 rows, perhaps even 1 or 2 million) and i ask myself if it's \nbetter to split them into small tables (e.g tables of 2000 rows) to \nspeed the access and the update of those tables (considering that i will \nhave few update but a lot of reading).\n\nDo you think it would be efficient ?\n\nNicolas, wondering if he hadn't be too greedy\n\n-- \n\n-------------------------------------------------------------------------\n� soyez ce que vous voudriez avoir l'air d'�tre � Lewis Caroll\n\n", "msg_date": "Wed, 13 Jul 2005 12:08:54 +0200", "msg_from": "Nicolas Beaume <[email protected]>", "msg_from_op": true, "msg_subject": "large table vs multiple smal tables" }, { "msg_contents": "Nicolas,\n\nThese sizes would not be considered large. I would leave them\nas single tables.\n\nKen\n\nOn Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:\n> Hello\n> \n> I have a large database with 4 large tables (each containing at least \n> 200 000 rows, perhaps even 1 or 2 million) and i ask myself if it's \n> better to split them into small tables (e.g tables of 2000 rows) to \n> speed the access and the update of those tables (considering that i will \n> have few update but a lot of reading).\n> \n> Do you think it would be efficient ?\n> \n> Nicolas, wondering if he hadn't be too greedy\n> \n> -- \n> \n> -------------------------------------------------------------------------\n> ? soyez ce que vous voudriez avoir l'air d'?tre ? Lewis Caroll\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n", "msg_date": "Wed, 13 Jul 2005 07:33:05 -0500", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large table vs multiple smal tables" }, { "msg_contents": "On Wed, Jul 13, 2005 at 12:08:54PM +0200, Nicolas Beaume wrote:\n> Hello\n> \n> I have a large database with 4 large tables (each containing at least \n> 200 000 rows, perhaps even 1 or 2 million) and i ask myself if it's \n> better to split them into small tables (e.g tables of 2000 rows) to \n> speed the access and the update of those tables (considering that i will \n> have few update but a lot of reading).\n\n2 million rows is nothing unless you're on a 486 or something. As for\nyour other question, remember the first rule of performance tuning:\ndon't tune unless you actually need to.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 14 Jul 2005 13:26:52 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: large table vs multiple smal tables" } ]
[ { "msg_contents": ">Nicolas,\n>\n>These sizes would not be considered large. I would leave them\n>as single tables.\n>\n>Ken\n\n\n\nok, i though it was large but i must confess i'm relatively new in the \ndatabase word. thank you for the answer.\n\nJust another question : what is the maximal number of rows that can be \ncontain in a cursor ?\n\nNicolas, having a lot of things to learn\n\n-- \n\n-------------------------------------------------------------------------\n� soyez ce que vous voudriez avoir l'air d'�tre � Lewis Caroll\n\n", "msg_date": "Wed, 13 Jul 2005 15:55:52 +0200", "msg_from": "Nicolas Beaume <[email protected]>", "msg_from_op": true, "msg_subject": "(pas de sujet)" } ]
[ { "msg_contents": "Gurus,\n\nA table in one of my databases has just crossed the 30 million row \nmark and has begun to feel very sluggish for just about anything I do \nwith it. I keep the entire database vacuumed regularly. And, as \nlong as I'm not doing a sequential scan, things seem reasonably quick \nmost of the time. I'm now thinking that my problem is IO because \nanything that involves heavy ( like a seq scan ) IO seems to slow to \na crawl. Even if I am using indexed fields to grab a few thousand \nrows, then going to sequential scans it gets very very slow.\n\nI have also had the occurrence where queries will not finish for days \n( I eventually have to kill them ). I was hoping to provide an \nexplain analyze for them, but if they never finish... even the \nexplain never finishes when I try that.\n\nFor example, as I'm writing this, I am running an UPDATE statement \nthat will affect a small part of the table, and is querying on an \nindexed boolean field.\n\nI have been waiting for over an hour and a half as I write this and \nit still hasn't finished. I'm thinking \"I bet Tom, Simon or Josh \nwouldn't put up with this kind of wait time..\", so I thought I would \nsee if anyone here had some pointers. Maybe I have a really stupid \nsetting in my conf file that is causing this. I really can't believe \nI am at the limits of this hardware, however.\n\n\nThe query:\nupdate eventactivity set ftindex = false where ftindex = true; \n( added the where clause because I don't want to alter where ftindex \nis null )\n\n\n\nThe table:\n Column | Type | Modifiers\n-------------+-----------------------------+-----------\nentrydate | timestamp without time zone |\nincidentid | character varying(40) |\nstatustype | character varying(20) |\nunitid | character varying(20) |\nrecordtext | character varying(255) |\nrecordtext2 | character varying(255) |\ninsertdate | timestamp without time zone |\nftindex | boolean |\nIndexes: eventactivity1 btree (incidentid),\n eventactivity_entrydate_idx btree (entrydate),\n eventactivity_ftindex_idx btree (ftindex),\n eventactivity_oid_idx btree (oid)\n\n\n\n\nThe hardware:\n\n4 x 2.2GHz Opterons\n12 GB of RAM\n4x10k 73GB Ultra320 SCSI drives in RAID 0+1\n1GB hardware cache memory on the RAID controller\n\nThe OS:\nFedora, kernel 2.6.6-1.435.2.3smp ( redhat stock kernel )\nfilesystem is mounted as ext2\n\n#####\n\nvmstat output ( as I am waiting for this to finish ):\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\nr b swpd free buff cache si so bi bo in cs us \nsy id wa\n0 1 5436 2823908 26140 9183704 0 1 2211 540 694 336 \n9 2 76 13\n\n#####\n\niostat output ( as I am waiting for this to finish ):\navg-cpu: %user %nice %sys %iowait %idle\n 9.19 0.00 2.19 13.08 75.53\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\ncciss/c0d0 329.26 17686.03 4317.57 161788630 39496378\n\n\n#####\nThis is a dedicated postgresql server, so maybe some of these \nsettings are more liberal than they should be?\n\nrelevant ( I hope ) postgresql.conf options are:\n\nshared_buffers = 50000\neffective_cache_size = 1348000\nrandom_page_cost = 3\nwork_mem = 512000\nmax_fsm_pages = 80000\nlog_min_duration_statement = 60000\nfsync = true ( not sure if I'm daring enough to run without this )\nwal_buffers = 1000\ncheckpoint_segments = 64\ncheckpoint_timeout = 3000\n\n\n#---- FOR PG_AUTOVACUUM --#\nstats_command_string = true\nstats_row_level = true\n\nThanks in advance,\nDan\n\n\n\n\n\n\n\n", "msg_date": "Wed, 13 Jul 2005 12:54:35 -0600", 
"msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Quad Opteron stuck in the mud" }, { "msg_contents": "So sorry, I forgot to mention I'm running version 8.0.1\n\nThanks\n\n", "msg_date": "Wed, 13 Jul 2005 13:09:37 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "Dan Harris wrote:\n> Gurus,\n>\n\n > even the explain never\n> finishes when I try that.\n\nJust a short bit. If \"EXPLAIN SELECT\" doesn't return, there seems to be\na very serious problem. Because I think EXPLAIN doesn't actually run the\nquery, just has the query planner run. And the query planner shouldn't\never get heavily stuck.\n\nI might be wrong, but there may be something much more substantially\nwrong than slow i/o.\nJohn\n=:->", "msg_date": "Wed, 13 Jul 2005 14:11:46 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "\nOn Jul 13, 2005, at 1:11 PM, John A Meinel wrote:\n\n>\n> I might be wrong, but there may be something much more substantially\n> wrong than slow i/o.\n> John\n>\n\nYes, I'm afraid of that too. I just don't know what tools I should \nuse to figure that out. I have some 20 other databases on this \nsystem, same schema but varying sizes, and the small ones perform \nvery well. It feels like there is an O(n) increase in wait time that \nhas recently become very noticeable on the largest of them.\n\n-Dan\n", "msg_date": "Wed, 13 Jul 2005 13:16:25 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "* Dan Harris ([email protected]) wrote:\n> On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:\n> >I might be wrong, but there may be something much more substantially\n> >wrong than slow i/o.\n> \n> Yes, I'm afraid of that too. I just don't know what tools I should \n> use to figure that out. I have some 20 other databases on this \n> system, same schema but varying sizes, and the small ones perform \n> very well. It feels like there is an O(n) increase in wait time that \n> has recently become very noticeable on the largest of them.\n\nCould you come up w/ a test case that others could reproduce where\nexplain isn't returning? I think that would be very useful towards\nsolving at least that issue...\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 13 Jul 2005 16:17:15 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "On Wed, Jul 13, 2005 at 01:16:25PM -0600, Dan Harris wrote:\n\n> On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:\n> \n> >I might be wrong, but there may be something much more substantially\n> >wrong than slow i/o.\n> \n> Yes, I'm afraid of that too. I just don't know what tools I should \n> use to figure that out. I have some 20 other databases on this \n> system, same schema but varying sizes, and the small ones perform \n> very well. It feels like there is an O(n) increase in wait time that \n> has recently become very noticeable on the largest of them.\n\nI'd guess it's stuck on some lock. Try that EXPLAIN, and when it\nblocks, watch the pg_locks view for locks not granted to the process\nexecuting the EXPLAIN. 
Then check what else is holding the locks.\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"La rebeld�a es la virtud original del hombre\" (Arthur Schopenhauer)\n", "msg_date": "Wed, 13 Jul 2005 16:18:17 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "\nOn Jul 13, 2005, at 2:17 PM, Stephen Frost wrote:\n>\n> Could you come up w/ a test case that others could reproduce where\n> explain isn't returning?\n\nThis was simply due to my n00bness :) I had always been doing \nexplain analyze, instead of just explain. Next time one of these \nqueries comes up, I will be sure to do the explain without analyze.\n\nFYI that update query I mentioned in the initial thread just finished \nafter updating 8.3 million rows.\n\n-Dan\n\n", "msg_date": "Wed, 13 Jul 2005 14:20:24 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "On Jul 13, 2005, at 2:54 PM, Dan Harris wrote:\n\n> 4 x 2.2GHz Opterons\n> 12 GB of RAM\n> 4x10k 73GB Ultra320 SCSI drives in RAID 0+1\n> 1GB hardware cache memory on the RAID controller\n>\n\nif it is taking that long to update about 25% of your table, then you \nmust be I/O bound. check I/o while you're running a big query.\n\nalso, what RAID controller are you running? be sure you have the \nlatest BIOS and drivers for it.\n\non a pair of dual opterons, I can do large operations on tables with \n100 million rows much faster than you seem to be able. I have \nMegaRAID 320-2x controllers with 15kRPM drives.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806", "msg_date": "Wed, 13 Jul 2005 16:49:31 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "On Wed, 2005-07-13 at 12:54 -0600, Dan Harris wrote:\n> For example, as I'm writing this, I am running an UPDATE statement \n> that will affect a small part of the table, and is querying on an \n> indexed boolean field.\n\nAn indexed boolean field?\n\nHopefully, ftindex is false for very few rows of the table?\n\nTry changing the ftindex to be a partial index, so only index the false\nvalues. Or don't index it at all.\n\nSplit the table up into smaller pieces.\n\nDon't use an UPDATE statement. Keep a second table, and insert records\ninto it when you would have updated previously. If a row is not found,\nyou know that it has ftindex=true. That way, you'll never have row\nversions building up in the main table, which you'll still get even if\nyou VACUUM.\n\nBest Regards, Simon Riggs\n\n\n\n", "msg_date": "Wed, 13 Jul 2005 23:51:16 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n\n> I keep the entire database vacuumed regularly.\n\nHow often is \"regularly\"? We get frequent posts from people who think daily or\nevery 4 hours is often enough. If the table is very busy you can need vacuums\nas often as every 15 minutes. 
\n\nAlso, if you've done occasional massive batch updates like you describe here\nyou may need a VACUUM FULL or alternatively a CLUSTER command to compact the\ntable -- vacuum identifies the free space but if you've doubled the size of\nyour table with a large update that's a lot more free space than you want\nhanging around waiting to be used.\n\n> For example, as I'm writing this, I am running an UPDATE statement that will\n> affect a small part of the table, and is querying on an indexed boolean field.\n...\n> update eventactivity set ftindex = false where ftindex = true; ( added the\n> where clause because I don't want to alter where ftindex is null )\n\nIt's definitely worthwhile doing an \"EXPLAIN UPDATE...\" to see if this even\nused the index. It sounds like it did a sequential scan.\n\nSequential scans during updates are especially painful. If there isn't free\nspace lying around in the page where the updated record lies then another page\nhas to be used or a new page added. If you're doing a massive update you can\nexhaust the free space available making the update have to go back and forth\nbetween the page being read and the end of the table where pages are being\nwritten.\n\n> #####\n> \n> vmstat output ( as I am waiting for this to finish ):\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy id wa\n> 0 1 5436 2823908 26140 9183704 0 1 2211 540 694 336 9 2 76 13\n\n[I assume you ran \"vmstat 10\" or some other interval and then waited for at\nleast the second line? The first line outputted from vmstat is mostly\nmeaningless]\n\nUm. That's a pretty meager i/o rate. Just over 2MB/s. The cpu is 76% idle\nwhich sounds fine but that could be one processor pegged at 100% while the\nothers are idle. If this query is the only one running on the system then it\nwould behave just like that.\n\nIs it possible you have some foreign keys referencing these records that\nyou're updating? In which case every record being updated might be causing a\nfull table scan on another table (or multiple other tables). If those tables\nare entirely in cache then it could cause these high cpu low i/o symptoms.\n\nOr are there any triggers on this table?\n\n\n-- \ngreg\n\n", "msg_date": "14 Jul 2005 02:12:30 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "\nOn Jul 14, 2005, at 12:12 AM, Greg Stark wrote:\n\n> Dan Harris <[email protected]> writes:\n>\n>\n>> I keep the entire database vacuumed regularly.\n>>\n>\n> How often is \"regularly\"?\nWell, once every day, but there aren't a ton of inserts or updates \ngoing on a daily basis. Maybe 1,000 total inserts?\n>\n> Also, if you've done occasional massive batch updates like you \n> describe here\n> you may need a VACUUM FULL or alternatively a CLUSTER command to \n> compact the\n> table -- vacuum identifies the free space but if you've doubled the \n> size of\n> your table with a large update that's a lot more free space than \n> you want\n> hanging around waiting to be used.\n>\nI have a feeling I'm going to need to do a cluster soon. 
I have done \nseveral mass deletes and reloads on it.\n\n>\n>> For example, as I'm writing this, I am running an UPDATE \n>> statement that will\n>> affect a small part of the table, and is querying on an indexed \n>> boolean field.\n>>\n> ...\n>\n>> update eventactivity set ftindex = false where ftindex = true; \n>> ( added the\n>> where clause because I don't want to alter where ftindex is null )\n>>\n>\n> It's definitely worthwhile doing an \"EXPLAIN UPDATE...\" to see if \n> this even\n> used the index. It sounds like it did a sequential scan.\n>\n\nI tried that, and indeed it was using an index, although after \nreading Simon's post, I realize that was kind of dumb to have an \nindex on a bool. I have since removed it.\n\n> Sequential scans during updates are especially painful. If there \n> isn't free\n> space lying around in the page where the updated record lies then \n> another page\n> has to be used or a new page added. If you're doing a massive \n> update you can\n> exhaust the free space available making the update have to go back \n> and forth\n> between the page being read and the end of the table where pages \n> are being\n> written.\n\nThis is great info, thanks.\n\n>\n>\n>> #####\n>>\n>> vmstat output ( as I am waiting for this to finish ):\n>> procs -----------memory---------- ---swap-- -----io---- --system--\n>> ----cpu----\n>> r b swpd free buff cache si so bi bo in \n>> cs us sy id wa\n>> 0 1 5436 2823908 26140 9183704 0 1 2211 540 694 \n>> 336 9 2 76 13\n>>\n>\n> [I assume you ran \"vmstat 10\" or some other interval and then \n> waited for at\n> least the second line? The first line outputted from vmstat is mostly\n> meaningless]\n\nYeah, this was at least 10 or so down the list ( the last one before \nctrl-c )\n\n>\n> Um. That's a pretty meager i/o rate. Just over 2MB/s. The cpu is \n> 76% idle\n> which sounds fine but that could be one processor pegged at 100% \n> while the\n> others are idle. If this query is the only one running on the \n> system then it\n> would behave just like that.\nWell, none of my processors had ever reached 100% until I changed to \next2 today ( read below for more info )\n>\n> Is it possible you have some foreign keys referencing these records \n> that\n> you're updating? In which case every record being updated might be \n> causing a\n> full table scan on another table (or multiple other tables). If \n> those tables\n> are entirely in cache then it could cause these high cpu low i/o \n> symptoms.\n>\n\nNo foreign keys or triggers.\n\n\nOk, so I remounted this drive as ext2 shortly before sending my first \nemail today. It wasn't enough time for me to notice the ABSOLUTELY \nHUGE difference in performance change. Ext3 must really be crappy \nfor postgres, or at least is on this box. Now that it's ext2, this \nthing is flying like never before. My CPU utilization has \nskyrocketed, telling me that the disk IO was constraining it immensely.\n\nI always knew that it might be a little faster, but the box feels \nlike it can \"breathe\" again and things that used to be IO intensive \nand run for an hour or more are now running in < 5 minutes. I'm a \nlittle worried about not having a journalized file system, but that \nperformance difference will keep me from switching back ( at least to \next3! ). Maybe someday I will try XFS.\n\nI would be surprised if everyone who ran ext3 had this kind of \nproblem, maybe it's specific to my kernel, raid controller, I don't \nknow. But, this is amazing. 
It's like I have a new server.\n\nThanks to everyone for their valuable input and a big thanks to all \nthe dedicated pg developers on here who make this possible!\n\n-Dan\n\n", "msg_date": "Thu, 14 Jul 2005 00:28:05 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "On Thu, Jul 14, 2005 at 12:28:05AM -0600, Dan Harris wrote:\n\n> Ok, so I remounted this drive as ext2 shortly before sending my first \n> email today. It wasn't enough time for me to notice the ABSOLUTELY \n> HUGE difference in performance change. Ext3 must really be crappy \n> for postgres, or at least is on this box. Now that it's ext2, this \n> thing is flying like never before. My CPU utilization has \n> skyrocketed, telling me that the disk IO was constraining it immensely.\n\nWere you using the default journal settings for ext3?\n\nAn interesting experiment would be to use the other journal options\n(particularly data=writeback). From the mount manpage:\n\n data=journal / data=ordered / data=writeback\n Specifies the journalling mode for file data. Metadata is\n always journaled. To use modes other than ordered on the root\n file system, pass the mode to the kernel as boot parameter, e.g.\n rootflags=data=journal.\n\n journal\n All data is committed into the journal prior to being\n written into the main file system.\n\n ordered\n This is the default mode. All data is forced directly\n out to the main file system prior to its metadata being\n committed to the journal.\n\n writeback\n Data ordering is not preserved - data may be written into\n the main file system after its metadata has been commit-\n ted to the journal. This is rumoured to be the highest-\n throughput option. It guarantees internal file system\n integrity, however it can allow old data to appear in\n files after a crash and journal recovery.\n\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n", "msg_date": "Thu, 14 Jul 2005 11:47:51 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "\nOn Jul 14, 2005, at 9:47 AM, Alvaro Herrera wrote:\n\n> On Thu, Jul 14, 2005 at 12:28:05AM -0600, Dan Harris wrote:\n>\n>> . Ext3 must really be crappy\n>> for postgres, or at least is on this box.\n>\n> Were you using the default journal settings for ext3?\n\nYes, I was. Next time I get a chance to reboot this box, I will try \nwriteback and compare the benchmarks to my previous config. Thanks \nfor the tip.\n\n", "msg_date": "Thu, 14 Jul 2005 10:05:52 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Quad Opteron stuck in the mud" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n\n> Well, once every day, but there aren't a ton of inserts or updates going on a\n> daily basis. Maybe 1,000 total inserts?\n\nIt's actually deletes and updates that matter. not inserts.\n\n> I have a feeling I'm going to need to do a cluster soon. 
I have done several\n> mass deletes and reloads on it.\n\nCLUSTER effectively does a VACUUM FULL but takes a different approach and\nwrites out a whole new table, which if there's lots of free space is faster\nthan moving records around to compact the table.\n\n> I tried that, and indeed it was using an index, although after reading Simon's\n> post, I realize that was kind of dumb to have an index on a bool. I have since\n> removed it.\n\nIf there are very few records (like well under 10%) with that column equal to\nfalse (or very few equal to true) then it's not necessarily useless. But\nprobably more useful is a partial index on some other column.\n\nSomething like \n\nCREATE INDEX ON pk WHERE flag = false;\n\n> No foreign keys or triggers.\n\nNote that I'm talking about foreign keys in *other* tables that refer to\ncolumns in this table. Every update on this table would have to scan those\nother tables looking for records referencing the updated rows.\n\n\n> Ok, so I remounted this drive as ext2 shortly before sending my first email\n> today. It wasn't enough time for me to notice the ABSOLUTELY HUGE difference\n> in performance change. Ext3 must really be crappy for postgres, or at least\n> is on this box. Now that it's ext2, this thing is flying like never before.\n> My CPU utilization has skyrocketed, telling me that the disk IO was\n> constraining it immensely.\n> \n> I always knew that it might be a little faster, but the box feels like it can\n> \"breathe\" again and things that used to be IO intensive and run for an hour or\n> more are now running in < 5 minutes. I'm a little worried about not having a\n> journalized file system, but that performance difference will keep me from\n> switching back ( at least to ext3! ). Maybe someday I will try XFS.\n\n@spock(Fascinating).\n\nI wonder if ext3 might be issuing IDE cache flushes on every fsync (to sync\nthe journal) whereas ext2 might not be issuing any cache flushes at all.\n\nIf the IDE cache is never being flushed then you'll see much better\nperformance but run the risk of data loss in a power failure or hardware\nfailure. (But not in the case of an OS crash, or at least no more than\notherwise.)\n\nYou could also try using the \"-O journal_dev\" option to put the ext3 journal\non a separate device.\n\n-- \ngreg\n\n", "msg_date": "14 Jul 2005 16:47:02 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Quad Opteron stuck in the mud" } ]
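A minimal sketch of the partial-index idea raised in the thread above, spelled out in full. The table and column names (eventactivity, ftindex) come from the thread itself; everything else is illustrative, not the poster's actual schema:

-- The index only contains the (presumably few) rows still awaiting indexing,
-- so it stays small and cheap to maintain:
CREATE INDEX eventactivity_ftindex_false
    ON eventactivity (ftindex)
    WHERE ftindex = false;

-- Queries that filter on the same condition can then use it:
-- SELECT ... FROM eventactivity WHERE ftindex = false;
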
[ { "msg_contents": "Hi,\n\nI've got a java based web application that uses PostgreSQL 8.0.2. \nPostgreSQL runs on its own machine with RHEL 3, ia32e kernel, dual Xeon \nprocessor, 4 Gb ram.\n\nThe web application runs on a seperate machine from the database. The \napplication machine has three tomcat instances configured to use 64 \ndatabase connections each using DBCP for pooling. Most of the data \naccess is via Hibernate. The database itself is about 100 meg in size.\n\nWe're perf testing the application with Loadrunner. At about 500 virtual \nusers hitting the web application, the cpu utilization on the database \nserver is at 100%, PostgreSQL is on its knees. The memory usage isn't \nbad, the I/O isn't bad, only the CPU seems to be maxed out.\n\nchecking the status of connections at this point ( ps -eaf | grep \n\"postgres:\") where the CPU is maxed out I saw this:\n\n127 idle\n12 bind\n38 parse\n34 select\n\nHibernate is used in the application and unfortunately this seems to \ncause queries not to get logged. (see \nhttp://archives.postgresql.org/pgsql-admin/2005-05/msg00241.php)\n\nI know there has been discussion about problems on Xeon MP systems. Is \nthis what we are running into? Or is something else going on? Is there \nother information I can provide that might help determine what is going on?\n\nHere are the postgresql.conf settings:\n\n# The maximum number of connections.\nmax_connections = 256\n\n# Standard performance-related settings.\nshared_buffers = 16384\nmax_fsm_pages = 200000\nmax_fsm_relations = 10000\nfsync = false\nwal_sync_method = fsync\nwal_buffers = 32\ncheckpoint_segments = 6\neffective_cache_size = 38400\nrandom_page_cost = 2\nwork_mem = 16384\nmaintenance_work_mem = 16384\n\n# TODO - need to investigate these.\ncommit_delay = 0\ncommit_siblings = 5\nmax_locks_per_transaction = 512\n\n", "msg_date": "Wed, 13 Jul 2005 12:13:31 -0700", "msg_from": "Dennis <[email protected]>", "msg_from_op": true, "msg_subject": "performance problems ... 100 cpu utilization" }, { "msg_contents": "\n\"Dennis\" <[email protected]> writes\n>\n> checking the status of connections at this point ( ps -eaf | grep\n> \"postgres:\") where the CPU is maxed out I saw this:\n>\n> 127 idle\n> 12 bind\n> 38 parse\n> 34 select\n>\n\nAre you sure 100% CPU usage is solely contributed by Postgresql? Also, from\nthe ps status you list, I can hardly see that's a problem because of problem\nyou mentioned below.\n\n>\n> I know there has been discussion about problems on Xeon MP systems. Is\n> this what we are running into? Or is something else going on? Is there\n> other information I can provide that might help determine what is going\non?\n>\n\nHere is a talk about Xeon-SMP spinlock contention problem:\nhttp://archives.postgresql.org/pgsql-performance/2005-05/msg00441.php\n\n\nRegards,\nQingqing\n\n\n", "msg_date": "Thu, 14 Jul 2005 11:11:29 +0800", "msg_from": "\"Qingqing Zhou\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance problems ... 100 cpu utilization" }, { "msg_contents": "Qingqing Zhou wrote:\n\n>Are you sure 100% CPU usage is solely contributed by Postgresql? Also, from\n>the ps status you list, I can hardly see that's a problem because of problem\n>you mentioned below.\n> \n>\nThe postgreSQL processes are what is taking up all the cpu. There aren't \nany other major applications on the machine. Its a dedicated database \nserver, only for this application.\n\nIt doesn't seem to make sense that PostgreSQL would be maxed out at this \npoint. 
I think given the size of the box, it could do quite a bit \nbetter. So, what is going on? I don't know.\n\nDennis\n", "msg_date": "Wed, 13 Jul 2005 20:28:43 -0700", "msg_from": "Dennis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance problems ... 100 cpu utilization" }, { "msg_contents": "What is the load average on this machine? Do you do many updates? If you \ndo a lot of updates, perhaps you haven't vacuumed recently. We were \nseeing similar symptoms when we started load testing our stuff and it \nturned out we were vacuuming too infrequently.\n\nDavid\n\nDennis wrote:\n> Qingqing Zhou wrote:\n> \n>> Are you sure 100% CPU usage is solely contributed by Postgresql? Also, \n>> from\n>> the ps status you list, I can hardly see that's a problem because of \n>> problem\n>> you mentioned below.\n>> \n>>\n> The postgreSQL processes are what is taking up all the cpu. There aren't \n> any other major applications on the machine. Its a dedicated database \n> server, only for this application.\n> \n> It doesn't seem to make sense that PostgreSQL would be maxed out at this \n> point. I think given the size of the box, it could do quite a bit \n> better. So, what is going on? I don't know.\n> \n> Dennis\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Thu, 14 Jul 2005 17:07:33 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance problems ... 100 cpu utilization" }, { "msg_contents": "David Mitchell wrote:\n\n> What is the load average on this machine? Do you do many updates? If \n> you do a lot of updates, perhaps you haven't vacuumed recently. We \n> were seeing similar symptoms when we started load testing our stuff \n> and it turned out we were vacuuming too infrequently.\n\nThe load average at the 100% utilization point was about 30! A vacuum \nanalyze was done before the test was started. I believe there are many \nmore selects than updates happening at any one time.\n\nDennis\n", "msg_date": "Wed, 13 Jul 2005 23:54:38 -0700", "msg_from": "Dennis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance problems ... 100 cpu utilization" }, { "msg_contents": "If your table has got into this state, then vacuum analyze won't fix it. \nYou will have to do a vacuum full to get it back to normal, then \nregularly vacuum (not full) to keep it in good condition. We vacuum our \ncritical tables every 10 minutes to keep them in good nick.\n\nDavid\n\nDennis wrote:\n> David Mitchell wrote:\n> \n>> What is the load average on this machine? Do you do many updates? If \n>> you do a lot of updates, perhaps you haven't vacuumed recently. We \n>> were seeing similar symptoms when we started load testing our stuff \n>> and it turned out we were vacuuming too infrequently.\n> \n> \n> The load average at the 100% utilization point was about 30! A vacuum \n> analyze was done before the test was started. 
I believe there are many \n> more selects than updates happening at any one time.\n> \n> Dennis\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n> \n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n", "msg_date": "Fri, 15 Jul 2005 12:43:07 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance problems ... 100 cpu utilization" }, { "msg_contents": "David Mitchell wrote:\n\n> If your table has got into this state, then vacuum analyze won't fix \n> it. You will have to do a vacuum full to get it back to normal, then \n> regularly vacuum (not full) to keep it in good condition. We vacuum \n> our critical tables every 10 minutes to keep them in good nick.\n\n\nSo should I have vacuum run during the load test? At what level of \nupdates should it run every ten minutes?\n\nDennis\n", "msg_date": "Fri, 15 Jul 2005 09:03:12 -0700", "msg_from": "Dennis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance problems ... 100 cpu utilization" } ]
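A minimal sketch of the maintenance pattern recommended in the thread above; the table name is a placeholder, not one from the poster's schema:

-- One-off recovery for a table that has already bloated (takes an exclusive lock):
VACUUM FULL ANALYZE my_hot_table;

-- Routine upkeep, run frequently (e.g. every 10 minutes from cron) so plain
-- VACUUM keeps reclaiming dead rows between load-test runs:
VACUUM ANALYZE my_hot_table;
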
[ { "msg_contents": "VACUUM FULL ANALYZE is performed right before tests.\nUPDATE test SET t = xpath_string(x, 'movie/rating'::text); is performed also\nto make selects equal.\nXpath_string is IMMUTABLE.\n\n Table \"public.test\"\n Column | Type | Modifiers | Description\n--------+------------------+-----------+-------------\n i | integer | |\n t | text | |\n x | text | |\n d | double precision | |\nIndexes:\n \"floatind\" btree (d)\n \"i_i\" btree (i) CLUSTER\n \"t_ind\" btree (t)\n \"t_x_ind\" btree (t, xpath_string(x, 'data'::text))\n \"x_i\" btree (xpath_string(x, 'data'::text))\n \"x_ii\" btree (xpath_string(x, 'movie/characters/character'::text))\n \"x_iii\" btree (xpath_string(x, 'movie/rating'::text))\nHas OIDs: no\n \nexplain analyze select count(*) from (\n select * from test order by xpath_string(x, 'movie/rating'::text)\nlimit 1000 offset 10\n) a;\n \n \n \nQUERY PLAN\nAggregate (cost=342.37..342.37 rows=1 width=0) (actual\ntime=403.580..403.584 rows=1 loops=1)\n-> Subquery Scan a (cost=3.27..339.87 rows=1000 width=0) (actual\ntime=4.252..398.261 rows=1000 loops=1)\n-> Limit (cost=3.27..329.87 rows=1000 width=969) (actual\ntime=4.242..389.557 rows=1000 loops=1)\n-> Index Scan using x_iii on test (cost=0.00..3266.00 rows=10000\nwidth=969) (actual time=0.488..381.049 rows=1010 loops=1)\n Total runtime: 403.695 ms\n \n \nexplain analyze select count(*) from (\n select * from test order by t limit 1000 offset 10\n) a;\n \n \nQUERY PLAN\nAggregate (cost=339.84..339.84 rows=1 width=0) (actual time=26.662..26.666\nrows=1 loops=1)\n-> Subquery Scan a (cost=3.24..337.34 rows=1000 width=0) (actual\ntime=0.228..22.416 rows=1000 loops=1)\n-> Limit (cost=3.24..327.34 rows=1000 width=969) (actual\ntime=0.217..14.244 rows=1000 loops=1)\n-> Index Scan using t_ind on test (cost=0.00..3241.00 rows=10000\nwidth=969) (actual time=0.099..6.371 rows=1010 loops=1)\n Total runtime: 26.749 ms\n\n", "msg_date": "Thu, 14 Jul 2005 02:35:10 +0400", "msg_from": "\"jobapply\" <[email protected]>", "msg_from_op": true, "msg_subject": "Functional index is 5 times slower than the basic one" } ]
[ { "msg_contents": "\nThe question appeared because of strange issues with functional indexes.\nIt seems they are recalculated even where it is obviously not needed.\n\n\\d+ test:\n\n i | integer | |\n t | text | |\n x | text | |\n \"i_i\" btree (i)\n \"x_i\" btree (xpath_string(x, 'data'::text))\n \"x_ii\" btree (xpath_string(x, 'movie/characters/character'::text))\n \"x_iii\" btree (xpath_string(x, 'movie/rating'::text))\n\n\n1) \nWhen I run\nVACUUM FULL ANALYZE VERBOSE \nOR\nVACUUM ANALYZE\n\nAfter text\n\nINFO: analyzing \"public.test\"\nINFO: \"test\": scanned 733 of 733 pages, containing 10000 live rows and 0\ndead rows; 3000 rows in sample, 10000 estimated total rows\n\na lot of xpath_string calls occur. \nDoes VACUUM rebuild indexes ? What for to recalculate that all?\nIt makes VACUUMing very slow.\n\nSimple VACUUM call does not lead to such function calls.\n\n2)\nWhen I do \nselect * from test order by xpath_string(x, 'movie/rating'::text) limit 1000\noffset 10;\n\nPlanner uses index x_iii (as it should, ok here): \tLimit -> Index scan.\nBut many of calls to xpath_string occur in execution time. \nWhy ? Index is calculated already and everything is so immutable..\n\n\nPlease answer if you have any ideas.. Functional indexes seemed so great\nfirst, but now I uncover weird issues I can't understand..\n\n\n\n\n\n", "msg_date": "Thu, 14 Jul 2005 03:46:23 +0400", "msg_from": "\"jobapply\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing Function called on VACUUM and sorting ?" } ]
[ { "msg_contents": "Hi,\n\nI'm having a problem with a query that performs a sequential scan on a \ntable when it should be performing an index scan. The interesting thing \nis, when we dumped the database on another server, it performed an index \nscan on that server. The systems are running the same versions of \npostgres (7.4.8) and the problem persists after running an \"ANALYZE \nVERBOSE\" and after a \"REINDEX TABLE sq_ast FORCE\". The only difference \nthat i can see is that the postgresql.conf files differ slightly, and \nthe hardware is different. Note that the system performing the \nsequential scan is a Dual 2.8GHz Xeon, 4GB Ram, 300GB HDD. And the \nsystem performing an index scan is not as powerful.\n\nA copy of the postgresql.conf for the system performing the index scan \ncan be found at http://beta.squiz.net/~mmcintyre/postgresql_squiz_uk.conf\nA copy of the postgresql.conf for the system performing the sequential \nscan can be found at http://beta.squiz.net/~mmcintyre/postgresql_future.conf\n\nThe Query:\n\nSELECT a.assetid, a.short_name, a.type_code, a.status, l.linkid, \nl.link_type, l.sort_order, lt.num_kids, u.url, ap.path,\n CASE u.http\n WHEN '1' THEN 'http'\n WHEN '0' THEN 'https'\n END AS protocol\nFROM ((sq_ast a LEFT JOIN sq_ast_url u ON a.assetid = u.assetid) LEFT \nJOIN sq_ast_path ap ON a.assetid = ap.assetid),sq_ast_lnk l, \nsq_ast_lnk_tree lt WHERE a.assetid = l.minorid AND\n l.linkid = lt.linkid AND l.majorid = '2' AND\n l.link_type <= 2 ORDER BY sort_order\n\n\nThe EXPLAIN ANALYZE from the system performing an sequential scan:\n\nQUERY PLAN\nSort (cost=30079.79..30079.89 rows=42 width=113) (actual time=39889.989..39890.346 rows=260 loops=1)\n Sort Key: l.sort_order\n -> Nested Loop (cost=25638.02..30078.65 rows=42 width=113) (actual time=9056.336..39888.557 rows=260 loops=1)\n -> Merge Join (cost=25638.02..29736.01 rows=25 width=109) (actual time=9056.246..39389.359 rows=260 loops=1)\n Merge Cond: ((\"outer\".assetid)::text = \"inner\".\"?column5?\")\n -> Merge Left Join (cost=25410.50..29132.82 rows=150816 width=97) (actual time=8378.176..38742.111 rows=150567 loops=1)\n Merge Cond: ((\"outer\".assetid)::text = (\"inner\".assetid)::text)\n -> Merge Left Join (cost=25410.50..26165.14 rows=150816 width=83) (actual time=8378.130..9656.413 rows=150489 loops=1)\n Merge Cond: (\"outer\".\"?column5?\" = \"inner\".\"?column4?\")\n -> Sort (cost=25408.17..25785.21 rows=150816 width=48) (actual time=8377.733..8609.218 rows=150486 loops=1)\n Sort Key: (a.assetid)::text\n -> Seq Scan on sq_ast a (cost=0.00..12436.16 rows=150816 width=48) (actual time=0.011..5578.231 rows=151378 loops=1)\n -> Sort (cost=2.33..2.43 rows=37 width=43) (actual time=0.364..0.428 rows=37 loops=1)\n Sort Key: (u.assetid)::text\n -> Seq Scan on sq_ast_url u (cost=0.00..1.37 rows=37 width=43) (actual time=0.023..0.161 rows=37 loops=1)\n -> Index Scan using sq_ast_path_ast on sq_ast_path ap (cost=0.00..2016.98 rows=45893 width=23) (actual time=0.024..14041.571 rows=45812 loops=1)\n -> Sort (cost=227.52..227.58 rows=25 width=21) (actual time=131.838..132.314 rows=260 loops=1)\n Sort Key: (l.minorid)::text\n -> Index Scan using sq_ast_lnk_majorid on sq_ast_lnk l (cost=0.00..226.94 rows=25 width=21) (actual time=0.169..126.201 rows=260 loops=1)\n Index Cond: ((majorid)::text = '2'::text)\n Filter: (link_type <= 2)\n -> Index Scan using sq_ast_lnk_tree_linkid on sq_ast_lnk_tree lt (cost=0.00..13.66 rows=3 width=8) (actual time=1.539..1.900 rows=1 loops=260)\n Index Cond: (\"outer\".linkid = 
lt.linkid)\nTotal runtime: 39930.395 ms\n\n\nThe EXPLAIN ANALYZE from the system performing an index scan scan:\n\n\nSort (cost=16873.64..16873.74 rows=40 width=113) (actual time=2169.905..2169.912 rows=13 loops=1)\n Sort Key: l.sort_order\n -> Nested Loop (cost=251.39..16872.58 rows=40 width=113) (actual time=45.724..2169.780 rows=13 loops=1)\n -> Merge Join (cost=251.39..16506.42 rows=32 width=109) (actual time=45.561..2169.012 rows=13 loops=1)\n Merge Cond: ((\"outer\".assetid)::text = \"inner\".\"?column5?\")\n -> Merge Left Join (cost=2.33..15881.92 rows=149982 width=97) (actual time=0.530..1948.718 rows=138569 loops=1)\n Merge Cond: ((\"outer\".assetid)::text = (\"inner\".assetid)::text)\n -> Merge Left Join (cost=2.33..13056.04 rows=149982 width=83) (actual time=0.406..953.781 rows=138491 loops=1)\n Merge Cond: ((\"outer\".assetid)::text = \"inner\".\"?column4?\")\n -> Index Scan using sq_ast_pkey on sq_ast a (cost=0.00..14952.78 rows=149982 width=48) (actual time=0.154..388.872 rows=138488 loops=1)\n -> Sort (cost=2.33..2.43 rows=37 width=43) (actual time=0.235..0.264 rows=37 loops=1)\n Sort Key: (u.assetid)::text\n -> Seq Scan on sq_ast_url u (cost=0.00..1.37 rows=37 width=43) (actual time=0.036..0.103 rows=37 loops=1)\n -> Index Scan using sq_ast_path_ast on sq_ast_path ap (cost=0.00..1926.18 rows=42071 width=23) (actual time=0.110..105.918 rows=42661 loops=1)\n -> Sort (cost=249.05..249.14 rows=36 width=21) (actual time=0.310..0.324 rows=13 loops=1)\n Sort Key: (l.minorid)::text\n -> Index Scan using sq_ast_lnk_majorid on sq_ast_lnk l (cost=0.00..248.12 rows=36 width=21) (actual time=0.141..0.282 rows=13 loops=1)\n Index Cond: ((majorid)::text = '2'::text)\n Filter: (link_type <= 2)\n -> Index Scan using sq_ast_lnk_tree_linkid on sq_ast_lnk_tree lt (cost=0.00..11.41 rows=2 width=8) (actual time=0.043..0.045 rows=1 loops=13)\n Index Cond: (\"outer\".linkid = lt.linkid)\n Total runtime: 2170.165 ms\n(22 rows)\n\nTHE DESC of the sq_ast table.\n\n\nfuture_v3_schema=# \\d sq_ast\n\n Table \"public.sq_ast\"\n Column | Type | Modifiers\n-----------------------+-----------------------------+---------------------------------------------\n assetid | character varying(15) | not null\n type_code | character varying(100) | not null\n version | character varying(20) | not null default '0.0.0'::character \nvarying\n name | character varying(255) | not null default ''::character varying\n short_name | character varying(255) | not null default ''::character \nvarying\n status | integer | not null default 1\n languages | character varying(50) | not null default ''::character varying\n charset | character varying(50) | not null default ''::character varying\n force_secure | character(1) | not null default '0'::bpchar\n created | timestamp without time zone | not null\n created_userid | character varying(255) | not null\n updated | timestamp without time zone | not null\n updated_userid | character varying(255) | not null\n published | timestamp without time zone |\n published_userid | character varying(255) |\n status_changed | timestamp without time zone |\n status_changed_userid | character varying(255) |\nIndexes:\n \"sq_asset_pkey\" primary key, btree (assetid)\n \"sq_ast_created\" btree (created)\n \"sq_ast_name\" btree (name)\n \"sq_ast_published\" btree (published)\n \"sq_ast_type_code\" btree (type_code)\n \"sq_ast_updated\" btree (updated)\n\n\nAny ideas?\n\n-- \nMarc McIntyre\nMySource Matrix Lead Developer\n\n\n", "msg_date": "Thu, 14 Jul 2005 10:06:34 +1000", "msg_from": "Marc McIntyre 
<[email protected]>", "msg_from_op": true, "msg_subject": "Slow Query" }, { "msg_contents": "On Thu, 2005-07-14 at 10:06 +1000, Marc McIntyre wrote:\n\n> I'm having a problem with a query that performs a sequential scan on a \n> table when it should be performing an index scan. The interesting thing \n> is, when we dumped the database on another server, it performed an index \n> scan on that server.\n...\n> The EXPLAIN ANALYZE from the system performing an sequential scan:\n> \n> QUERY PLAN\n> Sort (cost=30079.79..30079.89 rows=42 width=113) (actual time=39889.989..39890.346 rows=260 loops=1)\n...\n> The EXPLAIN ANALYZE from the system performing an index scan scan:\n> Sort (cost=16873.64..16873.74 rows=40 width=113) (actual time=2169.905..2169.912 rows=13 loops=1)\n\nlooks like the first query is returning 260 rows,\nbut the second one 13\n\nthis may not be your problem, but are you sure you are using the same\nquery on the same data here ?\n\ngnari\n\n\n", "msg_date": "Thu, 14 Jul 2005 11:22:33 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "Pretty sure. When I ran the queries I was in psql. I ran the first \nquery. Then I pressed the up arrow to re-run the same query. I did \nnot notice that about the rows. Might reindexing be in order?\n\nOn Jul 14, 2005, at 7:22 AM, Ragnar Hafsta� wrote:\n\n> On Thu, 2005-07-14 at 10:06 +1000, Marc McIntyre wrote:\n>\n>> I'm having a problem with a query that performs a sequential scan \n>> on a\n>> table when it should be performing an index scan. The interesting \n>> thing\n>> is, when we dumped the database on another server, it performed an \n>> index\n>> scan on that server.\n> ...\n>> The EXPLAIN ANALYZE from the system performing an sequential scan:\n>>\n>> QUERY PLAN\n>> Sort (cost=30079.79..30079.89 rows=42 width=113) (actual \n>> time=39889.989..39890.346 rows=260 loops=1)\n> ...\n>> The EXPLAIN ANALYZE from the system performing an index scan scan:\n>> Sort (cost=16873.64..16873.74 rows=40 width=113) (actual \n>> time=2169.905..2169.912 rows=13 loops=1)\n>\n> looks like the first query is returning 260 rows,\n> but the second one 13\n>\n> this may not be your problem, but are you sure you are using the same\n> query on the same data here ?\n>\n> gnari\n>\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n>\n\n\n", "msg_date": "Tue, 21 Nov 2006 14:52:56 -0500", "msg_from": "Joe Lester <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" } ]
[ { "msg_contents": "I just took delivery of a new system, and used the opportunity to\nbenchmark postgresql 8.0 performance on various filesystems. The system\nin question runs Linux 2.6.12, has one CPU and 1GB of system memory, and\n5 7200RPM SATA disks attached to an Areca hardware RAID controller\nhaving 128MB of cache. The caches are all write-back.\n\nI ran pgbench with a scale factor of 1000 and a total of 100,000\ntransactions per run. I varied the number of clients between 10 and\n100. It appears from my test JFS is much faster than both ext3 and XFS\nfor this workload. JFS and XFS were made with the mkfs defaults. ext3\nwas made with -T largefile4 and -E stride=32. The deadline scheduler\nwas used for all runs (anticipatory scheduler is much worse).\n\nHere's the result, in transactions per second.\n\n ext3 jfs xfs\n-----------------------------\n 10 Clients 55 81 68\n100 Clients 61 100 64\n----------------------------\n\n-jwb\n", "msg_date": "Wed, 13 Jul 2005 17:20:15 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "JFS fastest filesystem for PostgreSQL?" } ]
[ { "msg_contents": "Hi,\n\nOur application requires a number of processes to select and update rows\nfrom a very small (<10 rows) Postgres table on a regular and frequent\nbasis. These processes often run for weeks at a time, but over the\nspace of a few days we find that updates start getting painfully slow.\nWe are running a full vacuum/analyze and reindex on the table every day,\nbut the updates keep getting slower and slower until the processes are\nrestarted. Restarting the processes isn't really a viable option in our\n24/7 production environment, so we're trying to figure out what's\ncausing the slow updates.\n\nThe environment is as follows:\n\nRed Hat 9, kernel 2.4.20-8\nPostgreSQL 7.3.2\necpg 2.10.0\n\nThe processes are all compiled C programs accessing the database using\nECPG.\n\nDoes anyone have any thoughts on what might be happening here?\n\nThanks\nAlison\n\n", "msg_date": "Thu, 14 Jul 2005 15:08:30 +1000", "msg_from": "[email protected] (Alison Winters)", "msg_from_op": true, "msg_subject": "lots of updates on small table" }, { "msg_contents": "On Thu, Jul 14, 2005 at 03:08:30PM +1000, Alison Winters wrote:\n> Hi,\n> \n> Our application requires a number of processes to select and update rows\n> from a very small (<10 rows) Postgres table on a regular and frequent\n> basis. These processes often run for weeks at a time, but over the\n> space of a few days we find that updates start getting painfully slow.\n> We are running a full vacuum/analyze and reindex on the table every day,\n\nFull vacuum, eh? I wonder if what you really need is very frequent\nnon-full vacuum. Say, once in 15 minutes (exact rate depending on dead\ntuple rate.)\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"World domination is proceeding according to plan\" (Andrew Morton)\n", "msg_date": "Thu, 14 Jul 2005 12:21:14 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "On Thu, 2005-07-14 at 15:08 +1000, Alison Winters wrote:\n> Hi,\n> \n> Our application requires a number of processes to select and update rows\n> from a very small (<10 rows) Postgres table on a regular and frequent\n> basis. These processes often run for weeks at a time, but over the\n\nAre these long running transactions or is the process issuing many short\ntransactions?\n\nIf your transaction lasts a week, then a daily vacuum isn't really doing\nanything.\n\nI presume you also run ANALYZE in some shape or form periodically?\n\n> space of a few days we find that updates start getting painfully slow.\n> We are running a full vacuum/analyze and reindex on the table every day,\n\nIf they're short transactions, run vacuum (not vacuum full) every 100 or\nso updates. This might even be once a minute.\n\nAnalyze periodically as well.\n\n-- \n\n", "msg_date": "Thu, 14 Jul 2005 12:37:42 -0400", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "Hi,\n\n> > Our application requires a number of processes to select and update rows\n> > from a very small (<10 rows) Postgres table on a regular and frequent\n> > basis. These processes often run for weeks at a time, but over the\n> > space of a few days we find that updates start getting painfully slow.\n> > We are running a full vacuum/analyze and reindex on the table every day,\n> Full vacuum, eh? I wonder if what you really need is very frequent\n> non-full vacuum. 
Say, once in 15 minutes (exact rate depending on dead\n> tuple rate.)\n>\nIs there a difference between vacuum and vacuum full? Currently we have\na cron job going every hour that does:\n\nVACUUM FULL VERBOSE ANALYZE plc_fldio\nREINDEX TABLE plc_fldio\n\nThe most recent output was this:\n\nINFO: --Relation public.plc_fldio--\nINFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using: Free/Avail. Space 445176/371836; EndEmpty/Avail. Pages 0/256.\n CPU 0.04s/0.14u sec elapsed 0.18 sec.\nINFO: Index plcpage_idx: Pages 315; Tuples 108137: Deleted 4176.\n CPU 0.03s/0.04u sec elapsed 0.14 sec.\nINFO: Rel plc_fldio: Pages: 1221 --> 1221; Tuple(s) moved: 0.\n CPU 0.03s/0.04u sec elapsed 0.36 sec.\nINFO: Analyzing public.plc_fldio\nVACUUM\nREINDEX\n\nWe'll up it to every 15 minutes, but i don't know if that'll help\nbecause even with the current vacuuming the updates are still getting\nslower and slower over the course of several days. What really puzzles\nme is why restarting the processes fixes it. Does PostgreSQL keep some\nkind of backlog of transactions all for one database connection? Isn't\nit normal to have processes that keep a single database connection open\nfor days at a time?\n\nRegarding the question another poster asked: all the transactions are\nvery short. The table is essentially a database replacement for a\nshared memory segment - it contains a few rows of byte values that are\nconstantly updated byte-at-a-time to communicate data between different\nindustrial control processes.\n\nThanks for the thoughts everyone,\n\nAlison\n\n", "msg_date": "Fri, 15 Jul 2005 09:42:12 +1000", "msg_from": "[email protected] (Alison Winters)", "msg_from_op": true, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "Alison Winters wrote:\n> Hi,\n>\n>\n>>>Our application requires a number of processes to select and update rows\n>>>from a very small (<10 rows) Postgres table on a regular and frequent\n>>>basis. These processes often run for weeks at a time, but over the\n>>>space of a few days we find that updates start getting painfully slow.\n>>>We are running a full vacuum/analyze and reindex on the table every day,\n>>\n>>Full vacuum, eh? I wonder if what you really need is very frequent\n>>non-full vacuum. Say, once in 15 minutes (exact rate depending on dead\n>>tuple rate.)\n>>\n>\n> Is there a difference between vacuum and vacuum full? Currently we have\n> a cron job going every hour that does:\n>\n> VACUUM FULL VERBOSE ANALYZE plc_fldio\n> REINDEX TABLE plc_fldio\n\nVACUUM FULL exclusively locks the table (so that nothing else can\nhappen) and the compacts it as much as it can.\nYou almost definitely want to only VACUUM every 15min, maybe VACUUM FULL\n1/day.\n\nVACUUM FULL is more for when you haven't been VACUUMing often enough. Or\nhave major changes to your table.\nBasically VACUUM marks rows as empty and available for reuse, VACUUM\nFULL removes empty space (but requires a full lock, because it is moving\nrows around).\n\nIf anything, I would estimate that VACUUM FULL would be hurting your\nperformance. But it may happen fast enough not to matter.\n\n>\n> The most recent output was this:\n>\n> INFO: --Relation public.plc_fldio--\n> INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using: Free/Avail. Space 445176/371836; EndEmpty/Avail. 
Pages 0/256.\n> CPU 0.04s/0.14u sec elapsed 0.18 sec.\n> INFO: Index plcpage_idx: Pages 315; Tuples 108137: Deleted 4176.\n> CPU 0.03s/0.04u sec elapsed 0.14 sec.\n> INFO: Rel plc_fldio: Pages: 1221 --> 1221; Tuple(s) moved: 0.\n> CPU 0.03s/0.04u sec elapsed 0.36 sec.\n> INFO: Analyzing public.plc_fldio\n> VACUUM\n> REINDEX\n>\n> We'll up it to every 15 minutes, but i don't know if that'll help\n> because even with the current vacuuming the updates are still getting\n> slower and slower over the course of several days. What really puzzles\n> me is why restarting the processes fixes it. Does PostgreSQL keep some\n> kind of backlog of transactions all for one database connection? Isn't\n> it normal to have processes that keep a single database connection open\n> for days at a time?\n\nI believe that certain locks are grabbed per session. Or at least there\nis some command that you can run, which you don't want to run in a\nmaintained connection. (It might be VACUUM FULL, I don't remember which\none it is).\n\nBut the fact that your application works at all seems to be that it\nisn't acquiring any locks.\n\nI know VACUUM cannot clean up any rows that are visible in one of the\ntransactions, I don't know if this includes active connections or not.\n\n>\n> Regarding the question another poster asked: all the transactions are\n> very short. The table is essentially a database replacement for a\n> shared memory segment - it contains a few rows of byte values that are\n> constantly updated byte-at-a-time to communicate data between different\n> industrial control processes.\n>\n> Thanks for the thoughts everyone,\n>\n> Alison\n>\n\nIs it possible to have some sort of timer that would recognize it has\nbeen connected for too long, drop the database connection, and\nreconnect? I don't know that it would solve anything, but it would be\nsomething you could try.\n\nJohn\n=:->", "msg_date": "Thu, 14 Jul 2005 18:52:24 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "[email protected] (Alison Winters) writes:\n>>> Our application requires a number of processes to select and update rows\n>>> from a very small (<10 rows) Postgres table on a regular and frequent\n>>> basis. These processes often run for weeks at a time, but over the\n>>> space of a few days we find that updates start getting painfully slow.\n\nNo wonder, considering that your \"less than 10 rows\" table contains\nsomething upwards of 100000 tuples:\n\n> INFO: --Relation public.plc_fldio--\n> INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using: Free/Avail. Space 445176/371836; EndEmpty/Avail. Pages 0/256.\n> CPU 0.04s/0.14u sec elapsed 0.18 sec.\n\nWhat you need to do is find out why VACUUM is unable to reclaim all\nthose dead row versions. The reason is likely that some process is\nsitting on a open transaction for days at a time.\n\n> Isn't it normal to have processes that keep a single database\n> connection open for days at a time?\n\nDatabase connection, sure. Single transaction, no.\n\n> Regarding the question another poster asked: all the transactions are\n> very short.\n\nSomewhere you have one that isn't. 
Try watching the backends with ps,\nor look at the pg_stat_activity view if your version of PG has it,\nto see which sessions are staying \"idle in transaction\" indefinitely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Jul 2005 19:57:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lots of updates on small table " }, { "msg_contents": "On Fri, Jul 15, 2005 at 09:42:12AM +1000, Alison Winters wrote:\n\n> > > Our application requires a number of processes to select and update rows\n> > > from a very small (<10 rows) Postgres table on a regular and frequent\n> > > basis. These processes often run for weeks at a time, but over the\n> > > space of a few days we find that updates start getting painfully slow.\n> > > We are running a full vacuum/analyze and reindex on the table every day,\n> > Full vacuum, eh? I wonder if what you really need is very frequent\n> > non-full vacuum. Say, once in 15 minutes (exact rate depending on dead\n> > tuple rate.)\n> >\n> Is there a difference between vacuum and vacuum full?\n\nYes. Vacuum full is more aggresive in compacting the table. Though it\nreally works the same in the presence of long-running transactions:\ntuples just can't be removed.\n\n> The most recent output was this:\n> \n> INFO: --Relation public.plc_fldio--\n> INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using: Free/Avail. Space 445176/371836; EndEmpty/Avail. Pages 0/256.\n> CPU 0.04s/0.14u sec elapsed 0.18 sec.\n> INFO: Index plcpage_idx: Pages 315; Tuples 108137: Deleted 4176.\n> CPU 0.03s/0.04u sec elapsed 0.14 sec.\n> INFO: Rel plc_fldio: Pages: 1221 --> 1221; Tuple(s) moved: 0.\n> CPU 0.03s/0.04u sec elapsed 0.36 sec.\n> INFO: Analyzing public.plc_fldio\n\nHmm, so it seems your hourly vacuum is enough. I think the bloat theory\ncan be trashed. Unless I'm reading this output wrong; I don't remember\nthe details of this vacuum output.\n\n> We'll up it to every 15 minutes, but i don't know if that'll help\n> because even with the current vacuuming the updates are still getting\n> slower and slower over the course of several days. What really puzzles\n> me is why restarting the processes fixes it.\n\nI wonder if the problem may be plan caching. I didn't pay full\nattention to the description of your problem, so I don't remember if it\ncould be an issue, but it's something to consider.\n\n> Does PostgreSQL keep some kind of backlog of transactions all for one\n> database connection?\n\nNo. There could be a problem if you had very long transactions, but\napparently this isn't your problem.\n\n> Isn't it normal to have processes that keep a single database\n> connection open for days at a time?\n\nI guess it depends on exactly what you do with it. I know of at least\none case where an app keeps a connection open for months, without a\nproblem. (It's been running for four or five years, and monthly\n\"uptime\" for that particular daemon is not unheard of.)\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"Everybody understands Mickey Mouse. Few understand Hermann Hesse.\nHardly anybody understands Einstein. 
And nobody understands Emperor Norton.\"\n", "msg_date": "Thu, 14 Jul 2005 20:28:24 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "On Thu, Jul 14, 2005 at 08:28:24PM -0400, Alvaro Herrera wrote:\n> On Fri, Jul 15, 2005 at 09:42:12AM +1000, Alison Winters wrote:\n> \n\n> > INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using: Free/Avail. Space 445176/371836; EndEmpty/Avail. Pages 0/256.\n> \n> Hmm, so it seems your hourly vacuum is enough. I think the bloat theory\n> can be trashed. Unless I'm reading this output wrong; I don't remember\n> the details of this vacuum output.\n\nOk, so I was _very_ wrong :-) Sorry.\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\nEssentially, you're proposing Kevlar shoes as a solution for the problem\nthat you want to walk around carrying a loaded gun aimed at your foot.\n(Tom Lane)\n", "msg_date": "Thu, 14 Jul 2005 20:37:30 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "Hi all,\n\n> No wonder, considering that your \"less than 10 rows\" table contains\n> something upwards of 100000 tuples:\n>\n> > INFO: --Relation public.plc_fldio--\n> > INFO: Pages 1221: Changed 3, reaped 256, Empty 0, New 0; Tup 108137: Vac 4176, Keep/VTL 108133/108133, UnUsed 19, MinLen 84, MaxLen 84; Re-using: Free/Avail. Space 445176/371836; EndEmpty/Avail. Pages 0/256.\n> > CPU 0.04s/0.14u sec elapsed 0.18 sec.\n>\n> What you need to do is find out why VACUUM is unable to reclaim all\n> those dead row versions. The reason is likely that some process is\n> sitting on a open transaction for days at a time.\n>\nCheers mate, that was one of our theories but we weren't sure if it'd be\nworth rebuilding everything to check. We've been compiling without the\n-t (autocommit) flag to ecpg, and i believe what's happening is\nsometimes a transaction is begun and then the processes cycle around\ndoing hardware i/o and never commit or only commit way too late. What\nwe're going to try now is remove all the begins and commits from the\ncode and compile with -t to make sure that any updates happen\nimmediately. Hopefully that'll avoid any hanging transactions.\n\nWe'll also set up a 10-minutely vacuum (not full) as per some other\nsuggestions here. I'll let you know how it goes - we'll probably slot\neverything in on Monday so we have a week to follow it.\n\nThanks everyone\nAlison\n\n", "msg_date": "Fri, 15 Jul 2005 12:26:09 +1000", "msg_from": "[email protected] (Alison Winters)", "msg_from_op": true, "msg_subject": "Re: lots of updates on small table" }, { "msg_contents": "Just to dig up an old thread from last month:\n\nIn case anyone was wondering we finally got a free day to put in the new\nversion of the software, and it's greatly improved the performance. The\nsolutions we employed were as follows:\n\n- recompile everything with ecpg -t for auto-commit\n- vacuum run by cron every 15 minutes\n- vacuum full analyze AND a reindex of the table in question run by cron\n once per day\n\nI also went through and double-checked everywhere where we opened a\ncursor to make sure it was always closed when we were done.\n\nThis seems to have fixed up the endlessly growing indexes and slower and\nslower updates. 
Thanks to everyone who helped out.\n\nAlison\n\n", "msg_date": "Fri, 02 Sep 2005 09:23:44 +1000", "msg_from": "[email protected] (Alison Winters)", "msg_from_op": true, "msg_subject": "Re: lots of updates on small table" } ]
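A sketch of the two checks this resolution boils down to. The view and marker text below are as in 7.4/8.0 and require command-string statistics collection to be enabled:

-- Find sessions sitting "idle in transaction", which prevent VACUUM from
-- reclaiming dead rows:
SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query LIKE '%IDLE%in transaction%';

-- Routine upkeep for the small, heavily updated table:
VACUUM ANALYZE plc_fldio;        -- frequent, e.g. every 10-15 minutes
VACUUM FULL ANALYZE plc_fldio;   -- occasional, together with REINDEX TABLE plc_fldio
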
[ { "msg_contents": "Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A \nprofiler is a tool that monitors the database server and outputs a detailed \ntrace of all the transactions/queries that are executed on a database during \na specified period of time. Kindly let me know if any of you knows of such a \ntool for PostgreSQL.\n Agha Asif Raza\n\nIs there any MS-SQL Server like 'Profiler' available for PostgreSQL? A profiler is a tool that monitors the database server and outputs a detailed trace of all the transactions/queries that are executed on a database during a specified period of time. Kindly let me know if any of you knows of such a tool for PostgreSQL.\n\n \nAgha Asif Raza", "msg_date": "Thu, 14 Jul 2005 10:58:26 +0500", "msg_from": "Agha Asif Raza <[email protected]>", "msg_from_op": true, "msg_subject": "Profiler for PostgreSQL" }, { "msg_contents": "Agha Asif Raza wrote:\n> Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A \n> profiler is a tool that monitors the database server and outputs a detailed \n> trace of all the transactions/queries that are executed on a database during \n> a specified period of time. Kindly let me know if any of you knows of such a \n> tool for PostgreSQL.\n> Agha Asif Raza\n\nSure see log_statement in postgresql.conf. There are a lot of settings\nin there to control what is logged.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 14 Jul 2005 02:27:32 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Profiler for PostgreSQL" }, { "msg_contents": "Try turning on query logging and using the 'pqa' utility on pgfoundry.org.\n\nChris\n\nAgha Asif Raza wrote:\n> Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A \n> profiler is a tool that monitors the database server and outputs a \n> detailed trace of all the transactions/queries that are executed on a \n> database during a specified period of time. Kindly let me know if any of \n> you knows of such a tool for PostgreSQL.\n> \n> Agha Asif Raza\n\n", "msg_date": "Thu, 14 Jul 2005 14:29:56 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Profiler for PostgreSQL" }, { "msg_contents": "On Thu, 2005-07-14 at 14:29 +0800, Christopher Kings-Lynne wrote:\n> Try turning on query logging and using the 'pqa' utility on pgfoundry.org.\n\nHave you got that to work for 8 ?\n\npqa 1.5 doesn't even work with its own test file.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Thu, 14 Jul 2005 21:47:46 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Profiler for PostgreSQL" }, { "msg_contents": "Bruce Momjian wrote:\n> Agha Asif Raza wrote:\n> \n>>Is there any MS-SQL Server like 'Profiler' available for PostgreSQL? A \n>>profiler is a tool that monitors the database server and outputs a detailed \n>>trace of all the transactions/queries that are executed on a database during \n>>a specified period of time. Kindly let me know if any of you knows of such a \n>>tool for PostgreSQL.\n>> Agha Asif Raza\n> \n> \n> Sure see log_statement in postgresql.conf. 
There are a lot of settings\n> in there to control what is logged.\n\nThere's nothing really comparable at the moment, but some tasks can be \ndone with log_statement.\nI'm planning to implement a full-blown profiling like MSSQL's, but don't \nexpect this too soon (I'm thinking about this for a year now. So many \nplans, so little time).\n\nRegards,\nAndreas\n\n\n", "msg_date": "Fri, 15 Jul 2005 08:19:08 +0000", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Profiler for PostgreSQL" } ]
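A minimal sketch of the logging route suggested above, using a parameter name from the 7.4/8.0 era; a superuser can try it per-session, or put the same line in postgresql.conf to capture every session:

-- Log any statement slower than 200 ms together with its run time:
SET log_min_duration_statement = 200;
-- (Set it to 0 to log every statement; the resulting log can then be
--  summarized with a tool such as pqa.)
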
[ { "msg_contents": "[reposted due to delivery error -jwb]\n\nI just took delivery of a new system, and used the opportunity to\nbenchmark postgresql 8.0 performance on various filesystems. The system\nin question runs Linux 2.6.12, has one CPU and 1GB of system memory, and\n5 7200RPM SATA disks attached to an Areca hardware RAID controller\nhaving 128MB of cache. The caches are all write-back.\n\nI ran pgbench with a scale factor of 1000 and a total of 100,000\ntransactions per run. I varied the number of clients between 10 and\n100. It appears from my test JFS is much faster than both ext3 and XFS\nfor this workload. JFS and XFS were made with the mkfs defaults. ext3\nwas made with -T largefile4 and -E stride=32. The deadline scheduler\nwas used for all runs (anticipatory scheduler is much worse).\n\nHere's the result, in transactions per second.\n\n ext3 jfs xfs\n-----------------------------\n 10 Clients 55 81 68\n100 Clients 61 100 64\n----------------------------\n\n-jwb\n", "msg_date": "Wed, 13 Jul 2005 23:33:41 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "On 7/14/05, Jeffrey W. Baker <[email protected]> wrote:\n> [reposted due to delivery error -jwb]\n> \n> I just took delivery of a new system, and used the opportunity to\n> benchmark postgresql 8.0 performance on various filesystems. The system\n> in question runs Linux 2.6.12, has one CPU and 1GB of system memory, and\n> 5 7200RPM SATA disks attached to an Areca hardware RAID controller\n> having 128MB of cache. The caches are all write-back.\n> \n> I ran pgbench with a scale factor of 1000 and a total of 100,000\n> transactions per run. I varied the number of clients between 10 and\n> 100. It appears from my test JFS is much faster than both ext3 and XFS\n> for this workload. JFS and XFS were made with the mkfs defaults. ext3\n> was made with -T largefile4 and -E stride=32. The deadline scheduler\n> was used for all runs (anticipatory scheduler is much worse).\n> \n> Here's the result, in transactions per second.\n> \n> ext3 jfs xfs\n> -----------------------------\n> 10 Clients 55 81 68\n> 100 Clients 61 100 64\n> ----------------------------\n\nIf you still have a chance, could you do tests with other journaling\noptions for ext3 (journal=writeback, journal=data)? And could you\ngive figures about performace of other IO elevators? I mean, you\nwrote that anticipatory is much wore -- how much worse? :) Could\nyou give numbers for deadline,anticipatory,cfq elevators? :)\n\nAnd, additionally would it be possible to give numbers for bonnie++\nresults? To see how does pgbench to bonnie++ relate?\n\n Regards,\n Dawid\n", "msg_date": "Thu, 14 Jul 2005 10:03:10 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "\n\nQuoting \"Jeffrey W. Baker\" <[email protected]>:\n\n\n> \n> Here's the result, in transactions per second.\n> \n> ext3 jfs xfs\n> ----------------------\n\n-------\n> 10 Clients 55 81 68\n> 100 Clients 61 100 64\n> ----------------------------\n\nWas fsync true? And have you tried ext2? 
Legend has it that ext2 is the\nfastest thing going for synchronous writes (besides O_DIRECT or raw) because\nthere's no journal.\n\n> \n> -jwb\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n", "msg_date": "Thu, 14 Jul 2005 02:56:33 -0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "Did you seperate the data & the transaction log? I've noticed less than\noptimal performance on xfs if the transaction log is on the xfs data\npartition, and it's silly to put the xlog on a journaled filesystem\nanyway. Try putting xlog on an ext2 for all the tests.\n\nMike Stone\n", "msg_date": "Thu, 14 Jul 2005 06:30:40 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nDawid Kuroczko wrote:\n|\n| If you still have a chance, could you do tests with other journaling\n| options for ext3 (journal=writeback, journal=data)? And could you\n| give figures about performace of other IO elevators? I mean, you\n| wrote that anticipatory is much wore -- how much worse? :) Could\n| you give numbers for deadline,anticipatory,cfq elevators? :)\n|\n| And, additionally would it be possible to give numbers for bonnie++\n| results? To see how does pgbench to bonnie++ relate?\n|\n\nHello, list.\n\nI've been thinking on this one for a while - I'm not sure as to what\nratio pgbench has with regard to stressing CPU vs. I/O. There is one\nthing that's definitely worth mentioning though: in the tests that I've\nbeen doing with bonnie++ and iozone at my former job, while building a\ndistributed indexing engine, jfs was the one filesystem with the least\nstrain on the CPU, which might be one of the deciding factors in making\nit look good for a particular workload.\n\nI'm afraid I don't have any concrete figures to offer as the material\nitself was classified. I can tell though that we've been comparing it\nwith both ext2 and ext3, as well as xfs, and notably, xfs was the worst\nCPU hog of all. The CPU load difference between jfs and xfs was about\n10% in favor of jfs in all random read/write tests, and the interesting\nthing is, jfs managed to shuffle around quite a lot of data: the\nmbps/cpu% ratio in xfs was much worse. 
As expected, there wasn't much\ndifference in block transfer tests, but jfs was slightly winning in the\narea of CPU consumption and slightly lagging in the transfer rate field.\n\nWhat is a little bit concerning though, is the fact that some Linux\ndistributors like SuSE have removed jfs support from their admin tooling\n<quote>due to technical problems with jfs</quote>\n(http://your-local-suse-mirror/.../suse/i386/9.3/docu/RELEASE-NOTES.en.html#14)\n\nI'm curious as to what this means - did they have problems integrating\nit into their toolchain or are there actual problems going on in jfs\ncurrently?\n\nKind regards,\n- --\nGrega Bremec\ngregab at p0f dot net\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.2.4 (GNU/Linux)\n\niD8DBQFC1ld4fu4IwuB3+XoRAqEyAJ0TS9son+brhbQGtV7Cw7T8wa9W2gCfZ02/\ndWm/E/Dc99TyKbxxl2tKaZc=\n=nvv3\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 14 Jul 2005 14:15:52 +0200", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "On Thu, Jul 14, 2005 at 02:15:52PM +0200, Grega Bremec wrote:\n>I'm curious as to what this means - did they have problems integrating\n>it into their toolchain or are there actual problems going on in jfs\n>currently?\n\nI've found jfs to be the least stable linux filesystem and won't allow\nit anywhere near an important system. YMMV. \n\nMike Stone\n", "msg_date": "Thu, 14 Jul 2005 09:20:56 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "On Thu, 2005-07-14 at 10:03 +0200, Dawid Kuroczko wrote:\n> On 7/14/05, Jeffrey W. Baker <[email protected]> wrote:\n> > [reposted due to delivery error -jwb]\n> > \n> > I just took delivery of a new system, and used the opportunity to\n> > benchmark postgresql 8.0 performance on various filesystems. The system\n> > in question runs Linux 2.6.12, has one CPU and 1GB of system memory, and\n> > 5 7200RPM SATA disks attached to an Areca hardware RAID controller\n> > having 128MB of cache. The caches are all write-back.\n> > \n> > I ran pgbench with a scale factor of 1000 and a total of 100,000\n> > transactions per run. I varied the number of clients between 10 and\n> > 100. It appears from my test JFS is much faster than both ext3 and XFS\n> > for this workload. JFS and XFS were made with the mkfs defaults. ext3\n> > was made with -T largefile4 and -E stride=32. The deadline scheduler\n> > was used for all runs (anticipatory scheduler is much worse).\n> > \n> > Here's the result, in transactions per second.\n> > \n> > ext3 jfs xfs\n> > -----------------------------\n> > 10 Clients 55 81 68\n> > 100 Clients 61 100 64\n> > ----------------------------\n> \n> If you still have a chance, could you do tests with other journaling\n> options for ext3 (journal=writeback, journal=data)? And could you\n> give figures about performace of other IO elevators? I mean, you\n> wrote that anticipatory is much wore -- how much worse? :) Could\n> you give numbers for deadline,anticipatory,cfq elevators? :)\n> \n> And, additionally would it be possible to give numbers for bonnie++\n> results? To see how does pgbench to bonnie++ relate?\n\nPhew, that's a lot of permutations. At 20-30 minutes per run, I'm\nthinking 5-8 hours or so. Still, for you dear readers, I'll somehow\naccomplish this tedious feat.\n\nAs for Bonnie, JFS is a good 60-80% faster than ext3. 
See my message to\next3-users yesterday.\n\nUsing bonnie++ with a 10GB fileset, in MB/s:\n\n ext3 jfs xfs\nRead 112 188 141\nWrite 97 157 167\nRewrite 51 71 60\n\n-jwb\n", "msg_date": "Thu, 14 Jul 2005 06:27:24 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "On Wed, Jul 13, 2005 at 11:33:41PM -0700, Jeffrey W. Baker wrote:\n> [reposted due to delivery error -jwb]\n> \n> I just took delivery of a new system, and used the opportunity to\n> benchmark postgresql 8.0 performance on various filesystems. The system\n> in question runs Linux 2.6.12, has one CPU and 1GB of system memory, and\n> 5 7200RPM SATA disks attached to an Areca hardware RAID controller\n> having 128MB of cache. The caches are all write-back.\n> \n> I ran pgbench with a scale factor of 1000 and a total of 100,000\n> transactions per run. I varied the number of clients between 10 and\n> 100. It appears from my test JFS is much faster than both ext3 and XFS\n> for this workload. JFS and XFS were made with the mkfs defaults. ext3\n> was made with -T largefile4 and -E stride=32. The deadline scheduler\n> was used for all runs (anticipatory scheduler is much worse).\n> \n> Here's the result, in transactions per second.\n> \n> ext3 jfs xfs\n> -----------------------------\n> 10 Clients 55 81 68\n> 100 Clients 61 100 64\n> ----------------------------\n\nBTW, it'd be interesting to see how UFS on FreeBSD compared.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 14 Jul 2005 13:29:48 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" }, { "msg_contents": "\n>>I ran pgbench with a scale factor of 1000 and a total of 100,000\n>>transactions per run. I varied the number of clients between 10 and\n>>100. It appears from my test JFS is much faster than both ext3 and XFS\n>>for this workload. JFS and XFS were made with the mkfs defaults. ext3\n>>was made with -T largefile4 and -E stride=32. The deadline scheduler\n>>was used for all runs (anticipatory scheduler is much worse).\n>>\n>>Here's the result, in transactions per second.\n>>\n>> ext3 jfs xfs\n>>-----------------------------\n>> 10 Clients 55 81 68\n>>100 Clients 61 100 64\n>>----------------------------\n\nI would be curious as to what options were passed to jfs and xfs.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> \n> BTW, it'd be interesting to see how UFS on FreeBSD compared.\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Thu, 14 Jul 2005 11:37:10 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" } ]
[ { "msg_contents": "It seems functional indexes are recalculated even where it is obviously not\nneeded.\n\n\\d+ test:\n\n i | integer | |\n t | text | |\n x | text | |\n \"i_i\" btree (i)\n \"x_iii\" btree (xpath_string(x, 'movie/rating'::text))\n\n\n1) \nWhen I run\nVACUUM FULL ANALYZE VERBOSE \nOR\nVACUUM ANALYZE\na lot of xpath_string calls occur. \nDoes VACUUM rebuild indexes ? What for to recalculate that all?\nIt makes VACUUMing very slow.\n\nSimple VACUUM call does not lead to such function calls.\n\n2)\nWhen I do \nselect * from test order by xpath_string(x, 'movie/rating'::text) limit 1000\noffset 10;\n\nPlanner uses index x_iii (as it should, ok here): \tLimit -> Index scan.\nBut many of calls to xpath_string occur in execution time. \nWhy ? Index is calculated already and everything is so immutable..\n\nPlease answer if you have any ideas.. Functional indexes seemed so great\nfirst, but now I uncover weird issues I can't understand..\n\n\n\n\n\n", "msg_date": "Thu, 14 Jul 2005 10:33:53 +0400", "msg_from": "\"jobapply\" <[email protected]>", "msg_from_op": true, "msg_subject": "Indexing Function called on VACUUM and sorting ?" } ]
[ { "msg_contents": "I was wondering - have you had a chance to run the same benchmarks on\nReiserFS (ideally both 3 and 4, with notail)?\n\nI'd be quite interested to see how it performs in this situation since\nit's my fs of choice for most things.\n\nThanks,\nDmitri\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jeffrey W.\nBaker\nSent: Thursday, July 14, 2005 2:34 AM\nTo: [email protected]\nSubject: [PERFORM] JFS fastest filesystem for PostgreSQL?\n\n\n[reposted due to delivery error -jwb]\n\nI just took delivery of a new system, and used the opportunity to\nbenchmark postgresql 8.0 performance on various filesystems. The system\nin question runs Linux 2.6.12, has one CPU and 1GB of system memory, and\n5 7200RPM SATA disks attached to an Areca hardware RAID controller\nhaving 128MB of cache. The caches are all write-back.\n\nI ran pgbench with a scale factor of 1000 and a total of 100,000\ntransactions per run. I varied the number of clients between 10 and\n100. It appears from my test JFS is much faster than both ext3 and XFS\nfor this workload. JFS and XFS were made with the mkfs defaults. ext3\nwas made with -T largefile4 and -E stride=32. The deadline scheduler\nwas used for all runs (anticipatory scheduler is much worse).\n\nHere's the result, in transactions per second.\n\n ext3 jfs xfs\n-----------------------------\n 10 Clients 55 81 68\n100 Clients 61 100 64\n----------------------------\n\n-jwb\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\nThe information transmitted is intended only for the person or entity to which it is addressed and may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon, this information by persons or entities other than the intended recipient is prohibited. If you received this in error, please contact the sender and delete the material from any computer\n", "msg_date": "Thu, 14 Jul 2005 03:08:41 -0400", "msg_from": "\"Dmitri Bichko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: JFS fastest filesystem for PostgreSQL?" } ]
[ { "msg_contents": "Before I ask, I don't want to start a war.\n\nCan someone here give me an honest opinion of how PostgresSQL (PG) is better \nthan Firebird on Windows?\n\nI've just recently started reading the Firebird NG and a poster over there \nhas brought up some serious issues with Firebird, but they seem to not take \nthe issues seriously.\n\nI first wanted to go with Firebird for 2 reasons...\n\nVery easy to configure and very easy to install.\nI assumed that the database worked ok, but I'm not so sure now.\n\nSo, I've decided to give PG a try...I've downloaded it, but haven't \ninstalled it yet.\n\nSo any provable information that you can provide as to why/how PG is \nbetter/faster/easier/reliable than Firebird would be greatly appreciated.\n\nThanks \n\n\n", "msg_date": "Thu, 14 Jul 2005 00:19:34 -0700", "msg_from": "\"Relaxin\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgresSQL vs. Firebird" }, { "msg_contents": "\nOn Thu, 2005-07-14 at 00:19 -0700, Relaxin wrote:\n> Before I ask, I don't want to start a war.\n> \n> Can someone here give me an honest opinion of how PostgresSQL (PG) is better \n> than Firebird on Windows?\n\nA colleague of mine has made some benchmarks using those two:\nhttp://www.1006.org/pg/postgresql_firebird_win_linux.pdf\n\nHe benchmarked inserts done through *his* own Delphi code varying a few\nparameters. The servers run on Windows in all tests. The clients\nwere on Windows or Linux.\n\nThe summary is that PG beats FB performance-wise in all tests except\nwhen you do many small transactions (autocommit on) with fsync on.\n\nBye, Chris.\n\n\n\n\n", "msg_date": "Fri, 15 Jul 2005 11:17:11 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgresSQL vs. Firebird" } ]
[ { "msg_contents": "I'm trying to improve the speed of this query:\n\nexplain select recordtext from eventactivity inner join ( select \nincidentid from k_r where id = 94 ) a using ( incidentid ) inner join \n( select incidentid from k_b where id = 107 ) b using ( incidentid );\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------\nMerge Join (cost=2747.29..4249364.96 rows=11968693 width=35)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Merge Join (cost=1349.56..4230052.73 rows=4413563 width=117)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Index Scan using eventactivity1 on eventactivity \n(cost=0.00..4051200.28 rows=44519781 width=49)\n -> Sort (cost=1349.56..1350.85 rows=517 width=68)\n Sort Key: (k_b.incidentid)::text\n -> Index Scan using k_b_idx on k_b \n(cost=0.00..1326.26 rows=517 width=68)\n Index Cond: (id = 107)\n -> Sort (cost=1397.73..1399.09 rows=542 width=68)\n Sort Key: (k_r.incidentid)::text\n -> Index Scan using k_r_idx on k_r (cost=0.00..1373.12 \nrows=542 width=68)\n Index Cond: (id = 94)\n(13 rows)\n\n\nThere are many millions of rows in eventactivity. There are a few \nten-thousand rows in k_r and k_b. There is an index on 'incidentid' \nin all three tables. There should only be less than 100 rows matched \nin k_r and k_b total. That part on its own is very very fast. But, \nit should have those 100 or so incidentids extracted in under a \nsecond and then go into eventactivity AFTER doing that. At least, \nthat's my intention to make this fast.\n\nRight now, it looks like pg is trying to sort the entire \neventactivity table for the merge join which is taking several \nminutes to do. Can I rephrase this so that it does the searching \nthrough k_r and k_b FIRST and then go into eventactivity using the \nindex on incidentid? It seems like that shouldn't be too hard to \nmake fast but my SQL query skills are only average.\n\nThanks\n-Dan\n", "msg_date": "Thu, 14 Jul 2005 02:05:23 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "slow joining very large table to smaller ones" }, { "msg_contents": "Dan Harris wrote:\n> I'm trying to improve the speed of this query:\n>\n> explain select recordtext from eventactivity inner join ( select\n> incidentid from k_r where id = 94 ) a using ( incidentid ) inner join (\n> select incidentid from k_b where id = 107 ) b using ( incidentid );\n\nYou might try giving it a little bit more freedom with:\n\nEXPLAIN ANALYZE\nSELECT recordtext FROM eventactivity, k_r, k_b\n WHERE eventactivity.incidentid = k_r.incidentid\n AND eventactivity.incidentid = k_b.incidentid\n AND k_r.id = 94\n AND k_b.id = 107\n-- AND k_r.incidentid = k_b.incidentid\n;\n\nI'm pretty sure that would give identical results, just let the planner\nhave a little bit more freedom about how it does it.\nAlso the last line is commented out, because I think it is redundant.\n\nYou might also try:\nEXPLAIN ANALYZE\nSELECT recordtext\n FROM eventactivity JOIN k_r USING (incidentid)\n JOIN k_b USING (incidentid)\n WHERE k_r.id = 94\n AND k_b.id = 107\n;\n\nAlso, if possible give us the EXPLAIN ANALYZE so that we know if the\nplanner is making accurate estimates. 
(You might send an EXPLAIN while\nwaiting for the EXPLAIN ANALYZE to finish)\n\nYou can also try disabling merge joins, and see how that changes things.\n\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> --------------------------------------\n> Merge Join (cost=2747.29..4249364.96 rows=11968693 width=35)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Merge Join (cost=1349.56..4230052.73 rows=4413563 width=117)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Index Scan using eventactivity1 on eventactivity\n> (cost=0.00..4051200.28 rows=44519781 width=49)\n> -> Sort (cost=1349.56..1350.85 rows=517 width=68)\n> Sort Key: (k_b.incidentid)::text\n> -> Index Scan using k_b_idx on k_b (cost=0.00..1326.26\n> rows=517 width=68)\n> Index Cond: (id = 107)\n> -> Sort (cost=1397.73..1399.09 rows=542 width=68)\n> Sort Key: (k_r.incidentid)::text\n> -> Index Scan using k_r_idx on k_r (cost=0.00..1373.12\n> rows=542 width=68)\n> Index Cond: (id = 94)\n> (13 rows)\n>\n>\n> There are many millions of rows in eventactivity. There are a few\n> ten-thousand rows in k_r and k_b. There is an index on 'incidentid' in\n> all three tables. There should only be less than 100 rows matched in\n> k_r and k_b total. That part on its own is very very fast. But, it\n> should have those 100 or so incidentids extracted in under a second and\n> then go into eventactivity AFTER doing that. At least, that's my\n> intention to make this fast.\n\nWell, postgres is estimating around 500 rows each, is that way off? Try\njust doing:\nEXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;\nEXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;\n\nAnd see if postgres estimates the number of rows properly.\n\nI assume you have recently VACUUM ANALYZEd, which means you might need\nto update the statistics target (ALTER TABLE k_b ALTER COLUMN\nincidientid SET STATISTICS 100) default is IIRC 10, ranges from 1-1000,\nhigher is more accurate, but makes ANALYZE slower.\n\n>\n> Right now, it looks like pg is trying to sort the entire eventactivity\n> table for the merge join which is taking several minutes to do. Can I\n> rephrase this so that it does the searching through k_r and k_b FIRST\n> and then go into eventactivity using the index on incidentid? It seems\n> like that shouldn't be too hard to make fast but my SQL query skills\n> are only average.\n\nTo me, it looks like it is doing an index scan (on k_b.id) through k_b\nfirst, sorting the results by incidentid, then merge joining that with\neventactivity.\n\nI'm guessing you actually want it to merge k_b and k_r to get extra\nselectivity before joining against eventactivity.\nI think my alternate forms would let postgres realize this. But if not,\nyou could try:\n\nEXPLAIN ANALYZE\nSELECT recordtext FROM eventactivity\n JOIN (SELECT incidentid FROM k_r JOIN k_b USING (incidentid)\n\tWHERE k_r.id = 94 AND k_b.id = 107)\nUSING (incidentid);\n\nI don't know how selective your keys are, but one of these queries\nshould probably structure it better for the planner. It depends a lot on\nhow selective your query is.\nIf you have 100M rows, the above query looks like it expects k_r to\nrestrict it to 44M rows, and k_r + k_b down to 11M rows, which really\nshould be a seq scan (> 10% of the rows = seq scan). 
But if you are\nsaying the selectivity is mis-estimated it could be different.\n\nJohn\n=:->\n>\n> Thanks\n> -Dan\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>", "msg_date": "Thu, 14 Jul 2005 10:42:29 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "\nOn Jul 14, 2005, at 9:42 AM, John A Meinel wrote:\n>\n>\n> You might try giving it a little bit more freedom with:\n>\n> EXPLAIN ANALYZE\n> SELECT recordtext FROM eventactivity, k_r, k_b\n> WHERE eventactivity.incidentid = k_r.incidentid\n> AND eventactivity.incidentid = k_b.incidentid\n> AND k_r.id = 94\n> AND k_b.id = 107\n> -- AND k_r.incidentid = k_b.incidentid\n> ;\n>\n> I'm pretty sure that would give identical results, just let the \n> planner\n> have a little bit more freedom about how it does it.\n> Also the last line is commented out, because I think it is redundant.\n>\n\nOk, I tried this one. My ssh keeps getting cut off by a router \nsomewhere between me and the server due to inactivity timeouts, so \nall I know is that both the select and explain analyze are taking \nover an hour to run. Here's the explain select for that one, since \nthat's the best I can get.\n\nexplain select recordtext from eventactivity,k_r,k_b where \neventactivity.incidentid = k_r.incidentid and \neventactivity.incidentid = k_b.incidentid and k_r.id = 94 and k_b.id \n= 107;\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------\nMerge Join (cost=9624.61..4679590.52 rows=151009549 width=35)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Merge Join (cost=4766.92..4547684.26 rows=16072733 width=117)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Index Scan using eventactivity1 on eventactivity \n(cost=0.00..4186753.16 rows=46029271 width=49)\n -> Sort (cost=4766.92..4771.47 rows=1821 width=68)\n Sort Key: (k_b.incidentid)::text\n -> Index Scan using k_b_idx on k_b \n(cost=0.00..4668.31 rows=1821 width=68)\n Index Cond: (id = 107)\n -> Sort (cost=4857.69..4862.39 rows=1879 width=68)\n Sort Key: (k_r.incidentid)::text\n -> Index Scan using k_r_idx on k_r (cost=0.00..4755.52 \nrows=1879 width=68)\n Index Cond: (id = 94)\n(13 rows)\n\n\n\n> You might also try:\n> EXPLAIN ANALYZE\n> SELECT recordtext\n> FROM eventactivity JOIN k_r USING (incidentid)\n> JOIN k_b USING (incidentid)\n> WHERE k_r.id = 94\n> AND k_b.id = 107\n> ;\n>\n\nSimilar results here. 
The query is taking at least an hour to finish.\n\nexplain select recordtext from eventactivity join k_r using \n( incidentid ) join k_b using (incidentid ) where k_r.id = 94 and \nk_b.id = 107;\n\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------\nMerge Join (cost=9542.77..4672831.12 rows=148391132 width=35)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Merge Join (cost=4726.61..4542825.87 rows=15930238 width=117)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Index Scan using eventactivity1 on eventactivity \n(cost=0.00..4184145.43 rows=46000104 width=49)\n -> Sort (cost=4726.61..4731.13 rows=1806 width=68)\n Sort Key: (k_b.incidentid)::text\n -> Index Scan using k_b_idx on k_b \n(cost=0.00..4628.92 rows=1806 width=68)\n Index Cond: (id = 107)\n -> Sort (cost=4816.16..4820.82 rows=1863 width=68)\n Sort Key: (k_r.incidentid)::text\n -> Index Scan using k_r_idx on k_r (cost=0.00..4714.97 \nrows=1863 width=68)\n Index Cond: (id = 94)\n(13 rows)\n\n\n\n> Also, if possible give us the EXPLAIN ANALYZE so that we know if the\n> planner is making accurate estimates. (You might send an EXPLAIN while\n> waiting for the EXPLAIN ANALYZE to finish)\n>\n> You can also try disabling merge joins, and see how that changes \n> things.\n>\n\nAre there any negative sideaffects of doing this?\n\n>\n>>\n>>\n>\n> Well, postgres is estimating around 500 rows each, is that way off? \n> Try\n> just doing:\n> EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;\n> EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;\n>\n> And see if postgres estimates the number of rows properly.\n>\n> I assume you have recently VACUUM ANALYZEd, which means you might need\n> to update the statistics target (ALTER TABLE k_b ALTER COLUMN\n> incidientid SET STATISTICS 100) default is IIRC 10, ranges from \n> 1-1000,\n> higher is more accurate, but makes ANALYZE slower.\n>\n>\n>>\n>> Right now, it looks like pg is trying to sort the entire \n>> eventactivity\n>> table for the merge join which is taking several minutes to do. \n>> Can I\n>> rephrase this so that it does the searching through k_r and k_b \n>> FIRST\n>> and then go into eventactivity using the index on incidentid? It \n>> seems\n>> like that shouldn't be too hard to make fast but my SQL query skills\n>> are only average.\n>>\n>\n> To me, it looks like it is doing an index scan (on k_b.id) through k_b\n> first, sorting the results by incidentid, then merge joining that with\n> eventactivity.\n>\n> I'm guessing you actually want it to merge k_b and k_r to get extra\n> selectivity before joining against eventactivity.\n> I think my alternate forms would let postgres realize this. 
But if \n> not,\n> you could try:\n>\n> EXPLAIN ANALYZE\n> SELECT recordtext FROM eventactivity\n> JOIN (SELECT incidentid FROM k_r JOIN k_b USING (incidentid)\n> WHERE k_r.id = 94 AND k_b.id = 107)\n> USING (incidentid);\n>\n\nThis one looks like the same plan as the others:\n\nexplain select recordtext from eventactivity join ( select incidentid \nfrom k_r join k_b using (incidentid) where k_r.id = 94 and k_b.id = \n107 ) a using (incidentid );\n QUERY PLAN\n------------------------------------------------------------------------ \n--------------------------------------\nMerge Join (cost=9793.33..4693149.15 rows=156544758 width=35)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Merge Join (cost=4847.75..4557237.59 rows=16365843 width=117)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n -> Index Scan using eventactivity1 on eventactivity \n(cost=0.00..4191691.79 rows=46084161 width=49)\n -> Sort (cost=4847.75..4852.38 rows=1852 width=68)\n Sort Key: (k_b.incidentid)::text\n -> Index Scan using k_b_idx on k_b \n(cost=0.00..4747.24 rows=1852 width=68)\n Index Cond: (id = 107)\n -> Sort (cost=4945.58..4950.36 rows=1913 width=68)\n Sort Key: (k_r.incidentid)::text\n -> Index Scan using k_r_idx on k_r (cost=0.00..4841.30 \nrows=1913 width=68)\n Index Cond: (id = 94)\n(13 rows)\n\n\n\n> I don't know how selective your keys are, but one of these queries\n> should probably structure it better for the planner. It depends a \n> lot on\n> how selective your query is.\n\neventactivity currently has around 36 million rows in it. There \nshould only be maybe 200-300 incidentids at most that will be matched \nwith the combination of k_b and k_r. That's why I was thinking I \ncould somehow get a list of just the incidentids that matched the id \n= 94 and id = 107 in k_b and k_r first. Then, I would only need to \ngrab a few hundred out of 36 million rows from eventactivity.\n\n> If you have 100M rows, the above query looks like it expects k_r to\n> restrict it to 44M rows, and k_r + k_b down to 11M rows, which really\n> should be a seq scan (> 10% of the rows = seq scan). But if you are\n> saying the selectivity is mis-estimated it could be different.\n>\n\nYeah, if I understand you correctly, I think the previous paragraph \nshows this is a significant misestimate.\n\n\n", "msg_date": "Thu, 14 Jul 2005 16:29:58 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "Dan Harris wrote:\n>\n> On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:\n\n...\nDid you try doing this to see how good the planners selectivity\nestimates are?\n\n>> Well, postgres is estimating around 500 rows each, is that way off? Try\n>> just doing:\n>> EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;\n>> EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;\n\nThese should be fast queries.\n\nJohn\n=:->", "msg_date": "Thu, 14 Jul 2005 17:46:09 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "On Thu, Jul 14, 2005 at 04:29:58PM -0600, Dan Harris wrote:\n>Ok, I tried this one. 
My ssh keeps getting cut off by a router \n>somewhere between me and the server due to inactivity timeouts, so \n>all I know is that both the select and explain analyze are taking \n>over an hour to run.\n\nTry running the query as a script with nohup & redirect the output to a\nfile.\n\nMike Stone\n", "msg_date": "Thu, 14 Jul 2005 18:47:46 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> Here's the explain select for that one, since \n> that's the best I can get.\n\n> explain select recordtext from eventactivity,k_r,k_b where \n> eventactivity.incidentid = k_r.incidentid and \n> eventactivity.incidentid = k_b.incidentid and k_r.id = 94 and k_b.id \n> = 107;\n> QUERY PLAN\n> ------------------------------------------------------------------------ \n> --------------------------------------\n> Merge Join (cost=9624.61..4679590.52 rows=151009549 width=35)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Merge Join (cost=4766.92..4547684.26 rows=16072733 width=117)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Index Scan using eventactivity1 on eventactivity \n> (cost=0.00..4186753.16 rows=46029271 width=49)\n> -> Sort (cost=4766.92..4771.47 rows=1821 width=68)\n> Sort Key: (k_b.incidentid)::text\n> -> Index Scan using k_b_idx on k_b \n> (cost=0.00..4668.31 rows=1821 width=68)\n> Index Cond: (id = 107)\n> -> Sort (cost=4857.69..4862.39 rows=1879 width=68)\n> Sort Key: (k_r.incidentid)::text\n> -> Index Scan using k_r_idx on k_r (cost=0.00..4755.52 \n> rows=1879 width=68)\n> Index Cond: (id = 94)\n> (13 rows)\n\nThere's something awfully fishy here. The 8.0 planner is definitely\ncapable of figuring out that it ought to join the smaller relations\nfirst. As an example, using 8.0.3+ (CVS branch tip) I did\n\nregression=# create table eventactivity(incidentid varchar, recordtext text);\nCREATE TABLE\nregression=# create table k_r(incidentid varchar);\nCREATE TABLE\nregression=# create table k_b(incidentid varchar);\nCREATE TABLE\nregression=# explain select recordtext from eventactivity inner join\n(select incidentid from k_r) a using (incidentid)\ninner join (select incidentid from k_b) b using (incidentid);\n\n(Being too impatient to actually fill the eventactivity table with 36M\nrows of data, I just did some debugger magic to make the planner think\nthat that was the table size...) 
The default plan looks like\n\n Merge Join (cost=16137814.70..36563453.23 rows=1361700000 width=32)\n Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column3?\")\n -> Merge Join (cost=170.85..290.48 rows=7565 width=64)\n Merge Cond: (\"outer\".\"?column2?\" = \"inner\".\"?column2?\")\n -> Sort (cost=85.43..88.50 rows=1230 width=32)\n Sort Key: (k_r.incidentid)::text\n -> Seq Scan on k_r (cost=0.00..22.30 rows=1230 width=32)\n -> Sort (cost=85.43..88.50 rows=1230 width=32)\n Sort Key: (k_b.incidentid)::text\n -> Seq Scan on k_b (cost=0.00..22.30 rows=1230 width=32)\n -> Sort (cost=16137643.84..16227643.84 rows=36000000 width=64)\n Sort Key: (eventactivity.incidentid)::text\n -> Seq Scan on eventactivity (cost=0.00..1080000.00 rows=36000000 width=64)\n\nand if I \"set enable_mergejoin TO 0;\" I get\n\n Hash Join (cost=612.54..83761451.54 rows=1361700000 width=32)\n Hash Cond: ((\"outer\".incidentid)::text = (\"inner\".incidentid)::text)\n -> Seq Scan on eventactivity (cost=0.00..1080000.00 rows=36000000 width=64)\n -> Hash (cost=504.62..504.62 rows=7565 width=64)\n -> Hash Join (cost=25.38..504.62 rows=7565 width=64)\n Hash Cond: ((\"outer\".incidentid)::text = (\"inner\".incidentid)::text)\n -> Seq Scan on k_r (cost=0.00..22.30 rows=1230 width=32)\n -> Hash (cost=22.30..22.30 rows=1230 width=32)\n -> Seq Scan on k_b (cost=0.00..22.30 rows=1230 width=32)\n\nwhich is the plan I would judge Most Likely To Succeed based on what we\nknow about Dan's problem. (The fact that the planner is estimating it\nas twice as expensive as the mergejoin comes from the fact that with no\nstatistics about the join keys, the planner deliberately estimates hash\njoin as expensive, because it can be pretty awful in the presence of\nmany equal keys.)\n\nSo the planner is certainly capable of finding the desired plan, even\nwithout any tweaking of the query text. This means that what we have\nis mainly a statistical problem. Have you ANALYZEd these tables\nrecently? If so, may we see the pg_stats rows for incidentid in all\nthree tables?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Jul 2005 19:08:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones " }, { "msg_contents": "Dan Harris wrote:\n>\n> On Jul 14, 2005, at 9:42 AM, John A Meinel wrote:\n>\n>>\n>>\n>> You might try giving it a little bit more freedom with:\n>>\n>> EXPLAIN ANALYZE\n>> SELECT recordtext FROM eventactivity, k_r, k_b\n>> WHERE eventactivity.incidentid = k_r.incidentid\n>> AND eventactivity.incidentid = k_b.incidentid\n>> AND k_r.id = 94\n>> AND k_b.id = 107\n>> -- AND k_r.incidentid = k_b.incidentid\n>> ;\n>>\n>> I'm pretty sure that would give identical results, just let the planner\n>> have a little bit more freedom about how it does it.\n>> Also the last line is commented out, because I think it is redundant.\n>>\n>\n> Ok, I tried this one. My ssh keeps getting cut off by a router\n> somewhere between me and the server due to inactivity timeouts, so all\n> I know is that both the select and explain analyze are taking over an\n> hour to run. 
Here's the explain select for that one, since that's the\n> best I can get.\n>\n> explain select recordtext from eventactivity,k_r,k_b where\n> eventactivity.incidentid = k_r.incidentid and eventactivity.incidentid\n> = k_b.incidentid and k_r.id = 94 and k_b.id = 107;\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> --------------------------------------\n> Merge Join (cost=9624.61..4679590.52 rows=151009549 width=35)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Merge Join (cost=4766.92..4547684.26 rows=16072733 width=117)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Index Scan using eventactivity1 on eventactivity\n> (cost=0.00..4186753.16 rows=46029271 width=49)\n> -> Sort (cost=4766.92..4771.47 rows=1821 width=68)\n> Sort Key: (k_b.incidentid)::text\n> -> Index Scan using k_b_idx on k_b (cost=0.00..4668.31\n> rows=1821 width=68)\n> Index Cond: (id = 107)\n> -> Sort (cost=4857.69..4862.39 rows=1879 width=68)\n> Sort Key: (k_r.incidentid)::text\n> -> Index Scan using k_r_idx on k_r (cost=0.00..4755.52\n> rows=1879 width=68)\n> Index Cond: (id = 94)\n> (13 rows)\n>\n\nIf anything, the estimations have gotten worse. As now it thinks there\nwill be 1800 rows returned each, whereas you were thinking it would be\nmore around 100.\n\nSince you didn't say, you did VACUUM ANALYZE recently, right?\n\n>\n\n...\n\n>>\n>> You can also try disabling merge joins, and see how that changes things.\n>>\n>\n> Are there any negative sideaffects of doing this?\n\nIf the planner is estimating things correctly, you want to give it the\nmost flexibility of plans to pick from, because sometimes a merge join\nis faster (postgres doesn't pick things because it wants to go slower).\nThe only reason for the disable flags is that sometimes the planner\ndoesn't estimate correctly. Usually disabling a method is not the final\nsolution, but a way to try out different methods, and see what happens\nto the results.\n\nUsing: SET enable_mergejoin TO off;\nYou can disable it just for the current session (not for the entire\ndatabase). Which is the recommended way if you have a query that\npostgres is messing up on. (Usually it is correct elsewhere).\n\n\n>>\n>> Well, postgres is estimating around 500 rows each, is that way off? Try\n>> just doing:\n>> EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;\n>> EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;\n\nOnce again, do this and post the results. We might just need to tweak\nyour settings so that it estimates the number of rows correctly, and we\ndon't need to do anything else.\n\n>>\n>> And see if postgres estimates the number of rows properly.\n>>\n>> I assume you have recently VACUUM ANALYZEd, which means you might need\n>> to update the statistics target (ALTER TABLE k_b ALTER COLUMN\n>> incidientid SET STATISTICS 100) default is IIRC 10, ranges from 1-1000,\n>> higher is more accurate, but makes ANALYZE slower.\n>>\n\n\n...\n\n>> EXPLAIN ANALYZE\n>> SELECT recordtext FROM eventactivity\n>> JOIN (SELECT incidentid FROM k_r JOIN k_b USING (incidentid)\n>> WHERE k_r.id = 94 AND k_b.id = 107)\n>> USING (incidentid);\n>>\n>\n> This one looks like the same plan as the others:\n>\n> explain select recordtext from eventactivity join ( select incidentid\n> from k_r join k_b using (incidentid) where k_r.id = 94 and k_b.id = 107\n> ) a using (incidentid );\n\nWell, the planner is powerful enough to flatten nested selects. 
To make\nit less \"intelligent\" you can do:\nSET join_collapse_limit 1;\nor\nSET join_collapse_limit 0;\nWhich should tell postgres to not try and get tricky with your query.\nAgain, *usually* the planner knows better than you do. So again just do\nit to see what you get.\n\nThe problem is that if you are only using EXPLAIN SELECT, you will\nprobably get something which *looks* worse. Because if it looked better,\nthe planner would have used it. That is why you really need the EXPLAIN\nANALYZE, so that you can see where the planner is incorrect in it's\nestimates.\n\n\n> QUERY PLAN\n> ------------------------------------------------------------------------\n> --------------------------------------\n> Merge Join (cost=9793.33..4693149.15 rows=156544758 width=35)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Merge Join (cost=4847.75..4557237.59 rows=16365843 width=117)\n> Merge Cond: ((\"outer\".incidentid)::text = \"inner\".\"?column2?\")\n> -> Index Scan using eventactivity1 on eventactivity\n> (cost=0.00..4191691.79 rows=46084161 width=49)\n> -> Sort (cost=4847.75..4852.38 rows=1852 width=68)\n> Sort Key: (k_b.incidentid)::text\n> -> Index Scan using k_b_idx on k_b (cost=0.00..4747.24\n> rows=1852 width=68)\n> Index Cond: (id = 107)\n> -> Sort (cost=4945.58..4950.36 rows=1913 width=68)\n> Sort Key: (k_r.incidentid)::text\n> -> Index Scan using k_r_idx on k_r (cost=0.00..4841.30\n> rows=1913 width=68)\n> Index Cond: (id = 94)\n> (13 rows)\n>\n\nWhat I don't understand is that the planner is actually estimating that\njoining against the new table is going to *increase* the number of\nreturned rows.\nBecause the final number of rows here is 156M while the initial join\nshows only 16M.\nAnd it also thinks that it will only grab 46M rows from eventactivity.\n\nIf you have analyzed recently can you do:\nSELECT relname, reltuples FROM pg_class WHERE relname='eventactivity';\n\nIt is a cheaper form than \"SELECT count(*) FROM eventactivity\" to get an\napproximate estimate of the number of rows. But if it isn't too\nexpensive, please also give the value from SELECT count(*) FROM\neventactivity.\n\nAgain, that helps us know if your tables are up-to-date.\n\n\n>\n>\n>> I don't know how selective your keys are, but one of these queries\n>> should probably structure it better for the planner. It depends a lot on\n>> how selective your query is.\n>\n>\n> eventactivity currently has around 36 million rows in it. There should\n> only be maybe 200-300 incidentids at most that will be matched with the\n> combination of k_b and k_r. That's why I was thinking I could somehow\n> get a list of just the incidentids that matched the id = 94 and id =\n> 107 in k_b and k_r first. Then, I would only need to grab a few hundred\n> out of 36 million rows from eventactivity.\n>\n\nWell, you can also try:\nSELECT count(*) FROM k_b JOIN k_r USING (incidentid)\n WHERE k_b.id=?? AND k_r.id=??\n;\n\nThat will tell you how many rows they have in common.\n\n>> If you have 100M rows, the above query looks like it expects k_r to\n>> restrict it to 44M rows, and k_r + k_b down to 11M rows, which really\n>> should be a seq scan (> 10% of the rows = seq scan). 
But if you are\n>> saying the selectivity is mis-estimated it could be different.\n>>\n>\n> Yeah, if I understand you correctly, I think the previous paragraph\n> shows this is a significant misestimate.\n>\n\nWell, if you look at the latest plans, things have gone up from 44M to\n156M, I don't know why it is worse, but it is getting there.\n\nJohn\n=:->", "msg_date": "Thu, 14 Jul 2005 18:12:55 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n> What I don't understand is that the planner is actually estimating that\n> joining against the new table is going to *increase* the number of\n> returned rows.\n\nIt evidently thinks that incidentid in the k_r table is pretty\nnonunique. We really need to look at the statistics data to\nsee what's going on.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 14 Jul 2005 19:30:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones " }, { "msg_contents": "Tom Lane wrote:\n> John A Meinel <[email protected]> writes:\n>\n>>What I don't understand is that the planner is actually estimating that\n>>joining against the new table is going to *increase* the number of\n>>returned rows.\n>\n>\n> It evidently thinks that incidentid in the k_r table is pretty\n> nonunique. We really need to look at the statistics data to\n> see what's going on.\n>\n> \t\t\tregards, tom lane\n>\n\nOkay, sure. What about doing this, then:\n\nEXPLAIN ANALYZE\nSELECT recordtext FROM eventactivity\n JOIN (SELECT DISTINCT incidentid FROM k_r JOIN k_b USING (incidentid)\n\t WHERE k_r.id = ?? AND k_b.id = ??)\n USING (incidentid)\n;\n\nSince I assume that eventactivity is the only table with \"recordtext\",\nand that you don't get any columns from k_r and k_b, meaning it would be\npointless to get duplicate incidentids.\n\nI may be misunderstanding what the query is trying to do, but depending\non what is in k_r and k_b, is it possible to use a UNIQUE INDEX rather\nthan just an index on incidentid?\n\nThere is also the possibility of\nEXPLAIN ANALYZE\nSELECT recordtext FROM eventactivtity\n JOIN (SELECT incidentid FROM k_r WHERE k_r.id = ??\n UNION SELECT incidentid FROM k_b WHERE k_b.id = ??)\n USING (incidentid)\n;\n\nBut both of these would mean that you don't actually want columns from\nk_r or k_b, just a unique list of incident ids.\n\nBut first, I agree, we should make sure the pg_stats values are reasonable.\n\nJohn\n=:->", "msg_date": "Thu, 14 Jul 2005 18:39:58 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "\nOn Jul 14, 2005, at 5:12 PM, John A Meinel wrote:\n\n> Dan Harris wrote:\n>\n>\n>>>\n>>> Well, postgres is estimating around 500 rows each, is that way \n>>> off? Try\n>>> just doing:\n>>> EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;\n>>> EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;\n>>>\n>\n> Once again, do this and post the results. 
We might just need to tweak\n> your settings so that it estimates the number of rows correctly, \n> and we\n> don't need to do anything else.\n>\n\nOk, sorry I missed these the first time through:\n\nexplain analyze select incidentid from k_b where id = 107;\n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------\nIndex Scan using k_b_idx on k_b (cost=0.00..1926.03 rows=675 \nwidth=14) (actual time=0.042..298.394 rows=2493 loops=1)\n Index Cond: (id = 107)\nTotal runtime: 299.103 ms\n\nselect count(*) from k_b;\ncount\n--------\n698350\n\n( sorry! I think I said this one only had tens of thousands in it )\n\n\nexplain analyze select incidentid from k_r where id = \n94; QUERY PLAN\n------------------------------------------------------------------------ \n-------------------------------------------------\nIndex Scan using k_r_idx on k_r (cost=0.00..2137.61 rows=757 \nwidth=14) (actual time=0.092..212.187 rows=10893 loops=1)\n Index Cond: (id = 94)\nTotal runtime: 216.498 ms\n(3 rows)\n\n\nselect count(*) from k_r;\ncount\n--------\n671670\n\n\nThat one is quite a bit slower, yet it's the same table structure and \nsame index as k_b, also it has fewer records.\n\nI did run VACUUM ANALYZE immediately before running these queries. \nIt seems a lot better with the join_collapse set.\n\n>\n> \\\n> Well, the planner is powerful enough to flatten nested selects. To \n> make\n> it less \"intelligent\" you can do:\n> SET join_collapse_limit 1;\n> or\n> SET join_collapse_limit 0;\n> Which should tell postgres to not try and get tricky with your query.\n> Again, *usually* the planner knows better than you do. So again \n> just do\n> it to see what you get.\n>\n\nOk, when join_collapse_limit = 1 I get this now:\n\nexplain analyze select recordtext from eventactivity join ( select \nincidentid from k_r join k_b using (incidentid) where k_r.id = 94 and \nk_b.id = 107 ) a using (incidentid );\n \nQUERY PLAN\n------------------------------------------------------------------------ \n-----------------------------------------------------------------------\nNested Loop (cost=0.00..156509.08 rows=2948 width=35) (actual \ntime=1.555..340.625 rows=24825 loops=1)\n -> Nested Loop (cost=0.00..5361.89 rows=6 width=28) (actual \ntime=1.234..142.078 rows=366 loops=1)\n -> Index Scan using k_b_idx on k_b (cost=0.00..1943.09 \nrows=681 width=14) (actual time=0.423..56.974 rows=2521 loops=1)\n Index Cond: (id = 107)\n -> Index Scan using k_r_idx on k_r (cost=0.00..5.01 \nrows=1 width=14) (actual time=0.031..0.031 rows=0 loops=2521)\n Index Cond: ((k_r.id = 94) AND \n((k_r.incidentid)::text = (\"outer\".incidentid)::text))\n -> Index Scan using eventactivity1 on eventactivity \n(cost=0.00..25079.55 rows=8932 width=49) (actual time=0.107..0.481 \nrows=68 loops=366)\n Index Cond: ((eventactivity.incidentid)::text = \n(\"outer\".incidentid)::text)\nTotal runtime: 347.975 ms\n\nMUCH better! Maybe you can help me understand what I did and if I \nneed to make something permanent to get this behavior from now on?\n\n\n\n>\n>\n>\n> If you have analyzed recently can you do:\n> SELECT relname, reltuples FROM pg_class WHERE relname='eventactivity';\n>\n> It is a cheaper form than \"SELECT count(*) FROM eventactivity\" to \n> get an\n> approximate estimate of the number of rows. 
But if it isn't too\n> expensive, please also give the value from SELECT count(*) FROM\n> eventactivity.\n>\n> Again, that helps us know if your tables are up-to-date.\n>\n\nSure:\n\nselect relname, reltuples from pg_class where relname='eventactivity';\n relname | reltuples\n---------------+-------------\neventactivity | 3.16882e+07\n\nselect count(*) from eventactivity;\n count\n----------\n31871142\n\n\n\n\n\n>\n>\n>>\n>>\n>>\n>>> I don't know how selective your keys are, but one of these queries\n>>> should probably structure it better for the planner. It depends \n>>> a lot on\n>>> how selective your query is.\n>>>\n>>\n>>\n>> eventactivity currently has around 36 million rows in it. There \n>> should\n>> only be maybe 200-300 incidentids at most that will be matched \n>> with the\n>> combination of k_b and k_r. That's why I was thinking I could \n>> somehow\n>> get a list of just the incidentids that matched the id = 94 and id =\n>> 107 in k_b and k_r first. Then, I would only need to grab a few \n>> hundred\n>> out of 36 million rows from eventactivity.\n>>\n>>\n>\n> Well, you can also try:\n> SELECT count(*) FROM k_b JOIN k_r USING (incidentid)\n> WHERE k_b.id=?? AND k_r.id=??\n> ;\n>\n> That will tell you how many rows they have in common.\n\nselect count(*) from k_b join k_r using (incidentid) where k_b.id=107 \nand k_r.id=94;\ncount\n-------\n 373\n\n\n\n>\n> Well, if you look at the latest plans, things have gone up from 44M to\n> 156M, I don't know why it is worse, but it is getting there.\n\nI assume this is because r_k and r_b are growing fairly rapidly right \nnow. The time in between queries contained a lot of inserts. I was \ncareful to vacuum analyze before sending statistics, as I did this \ntime. I'm sorry if this has confused the issue.\n\n\n\n", "msg_date": "Thu, 14 Jul 2005 18:12:07 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "\nOn Jul 14, 2005, at 7:15 PM, John A Meinel wrote:\n>\n>\n> Is the distribution of your rows uneven? Meaning do you have more rows\n> with a later id than an earlier one?\n>\n\nThere are definitely some id's that will have many times more than \nthe others. If I group and count them, the top 10 are fairly \ndominant in the table.\n>>\n>\n> Hmm.. How to do it permanantly? Well you could always issue \"set\n> join_collapse set 1; select * from ....\"\n> But obviously that isn't what you prefer. :)\n>\n> I think there are things you can do to make merge join more expensive\n> than a nested loop, but I'm not sure what they are.\n\nMaybe someone else has some ideas to encourage this behavior for \nfuture work? Setting it on a per-connection basis is doable, but \nwould add some burden to us in code.\n\n>\n> What I really don't understand is that the estimates dropped as well.\n> The actual number of estimate rows drops to 3k instead of > 1M.\n> The real question is why does the planner think it will be so \n> expensive?\n>\n>\n>> select count(*) from k_b join k_r using (incidentid) where k_b.id=107\n>> and k_r.id=94;\n>> count\n>> -------\n>> 373\n>>\n>>\n>\n> Well, this says that they are indeed much more selective.\n> Each one has > 1k rows, but together you end up with only 400.\n>\n\nIs this a bad thing? 
Is this not \"selective enough\" to make it much \nfaster?\n\nOverall, I'm much happier now after seeing the new plan come about, \nif I can find a way to make that join_collapse behavior permanent, I \ncan certainly live with these numbers.\n\nThanks again for your continued efforts.\n\n-Dan\n", "msg_date": "Thu, 14 Jul 2005 21:49:50 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "Dan Harris wrote:\n>\n> On Jul 14, 2005, at 7:15 PM, John A Meinel wrote:\n>\n>>\n>>\n>> Is the distribution of your rows uneven? Meaning do you have more rows\n>> with a later id than an earlier one?\n>>\n>\n> There are definitely some id's that will have many times more than the\n> others. If I group and count them, the top 10 are fairly dominant in\n> the table.\n\nThat usually skews the estimates. Since the estimate is more of an\naverage (unless the statistics are higher).\n\n>\n>>>\n>>\n>> Hmm.. How to do it permanantly? Well you could always issue \"set\n>> join_collapse set 1; select * from ....\"\n>> But obviously that isn't what you prefer. :)\n>>\n>> I think there are things you can do to make merge join more expensive\n>> than a nested loop, but I'm not sure what they are.\n>\n>\n> Maybe someone else has some ideas to encourage this behavior for future\n> work? Setting it on a per-connection basis is doable, but would add\n> some burden to us in code.\n\nMy biggest question is why the planner things the Nested Loop would be\nso expensive.\nHave you tuned any of the parameters? It seems like something is out of\nwhack. (cpu_tuple_cost, random_page_cost, etc...)\n\n>\n>>\n>> What I really don't understand is that the estimates dropped as well.\n>> The actual number of estimate rows drops to 3k instead of > 1M.\n>> The real question is why does the planner think it will be so expensive?\n>>\n>>\n>>> select count(*) from k_b join k_r using (incidentid) where k_b.id=107\n>>> and k_r.id=94;\n>>> count\n>>> -------\n>>> 373\n>>>\n>>>\n>>\n>> Well, this says that they are indeed much more selective.\n>> Each one has > 1k rows, but together you end up with only 400.\n>>\n>\n> Is this a bad thing? Is this not \"selective enough\" to make it much\n> faster?\n\nYes, being more selective is what makes it faster. But the planner\ndoesn't seem to notice it properly.\n\n>\n> Overall, I'm much happier now after seeing the new plan come about, if\n> I can find a way to make that join_collapse behavior permanent, I can\n> certainly live with these numbers.\n>\n\nI'm sure there are pieces to tune, but I've reached my limits of\nparameters to tweak :)\n\n> Thanks again for your continued efforts.\n>\n> -Dan\n>\n\nJohn\n=:->", "msg_date": "Thu, 14 Jul 2005 23:12:15 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "\nOn Jul 14, 2005, at 10:12 PM, John A Meinel wrote:\n>\n> My biggest question is why the planner things the Nested Loop would be\n> so expensive.\n> Have you tuned any of the parameters? It seems like something is \n> out of\n> whack. (cpu_tuple_cost, random_page_cost, etc...)\n>\n\nhere's some of my postgresql.conf. 
Feel free to blast me if I did \nsomething idiotic here.\n\nshared_buffers = 50000\neffective_cache_size = 1348000\nrandom_page_cost = 3\nwork_mem = 512000\nmax_fsm_pages = 80000\nlog_min_duration_statement = 60000\nfsync = true ( not sure if I'm daring enough to run without this )\nwal_buffers = 1000\ncheckpoint_segments = 64\ncheckpoint_timeout = 3000\n\n\n#---- FOR PG_AUTOVACUUM --#\nstats_command_string = true\nstats_row_level = true\n\n", "msg_date": "Fri, 15 Jul 2005 09:09:37 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "\nOn Jul 15, 2005, at 9:09 AM, Dan Harris wrote:\n\n>\n> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:\n>\n>>\n>> My biggest question is why the planner things the Nested Loop \n>> would be\n>> so expensive.\n>> Have you tuned any of the parameters? It seems like something is \n>> out of\n>> whack. (cpu_tuple_cost, random_page_cost, etc...)\n>>\n>>\n>\n> here's some of my postgresql.conf. Feel free to blast me if I did \n> something idiotic here.\n>\n> shared_buffers = 50000\n> effective_cache_size = 1348000\n> random_page_cost = 3\n> work_mem = 512000\n> max_fsm_pages = 80000\n> log_min_duration_statement = 60000\n> fsync = true ( not sure if I'm daring enough to run without this )\n> wal_buffers = 1000\n> checkpoint_segments = 64\n> checkpoint_timeout = 3000\n>\n>\n> #---- FOR PG_AUTOVACUUM --#\n> stats_command_string = true\n> stats_row_level = true\n>\n\nSorry, I forgot to re-post my hardware specs.\n\nHP DL585\n4 x 2.2 GHz Opteron\n12GB RAM\nSmartArray RAID controller, 1GB hardware cache, 4x73GB 10k SCSI in \nRAID 0+1\next2 filesystem\n\nAlso, there are 30 databases on the machine, 27 of them are identical \nschemas. \n \n", "msg_date": "Fri, 15 Jul 2005 09:21:31 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "On Thu, Jul 14, 2005 at 16:29:58 -0600,\n Dan Harris <[email protected]> wrote:\n> \n> Ok, I tried this one. My ssh keeps getting cut off by a router \n> somewhere between me and the server due to inactivity timeouts, so \n> all I know is that both the select and explain analyze are taking \n> over an hour to run. Here's the explain select for that one, since \n> that's the best I can get.\n\nAre you using NAT at home? That's probably where the issue is. If you\nhave control of that box you can probably increase the timeout to a\ncouple of hours.\n", "msg_date": "Fri, 15 Jul 2005 11:44:58 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "\n>> Ok, I tried this one. My ssh keeps getting cut off by a router\n>> somewhere between me and the server due to inactivity timeouts, so\n>> all I know is that both the select and explain analyze are taking\n>> over an hour to run. 
Here's the explain select for that one, since\n>> that's the best I can get.\n\n\tone word : screen !\n\tone of the most useful little command line utilities...\n", "msg_date": "Fri, 15 Jul 2005 19:39:30 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "Dan Harris wrote:\n>\n> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:\n>\n>>\n>> My biggest question is why the planner things the Nested Loop would be\n>> so expensive.\n>> Have you tuned any of the parameters? It seems like something is out of\n>> whack. (cpu_tuple_cost, random_page_cost, etc...)\n>>\n>\n> here's some of my postgresql.conf. Feel free to blast me if I did\n> something idiotic here.\n>\n> shared_buffers = 50000\n> effective_cache_size = 1348000\n> random_page_cost = 3\n> work_mem = 512000\n\nUnless you are the only person connecting to this database, your\nwork_mem is very high. And if you haven't modified maintenance_work_mem\nit is probably very low. work_mem might be causing postgres to think it\ncan fit all of a merge into ram, making it faster, I can't say for sure.\n\n> max_fsm_pages = 80000\n\nThis seems high, but it depends how many updates/deletes you get\nin-between vacuums. It may not be too excessive. VACUUM [FULL] VERBOSE\nreplies with how many free pages are left, if you didn't use that\nalready for tuning. Though it should be tuned based on a steady state\nsituation. Not a one time test.\n\n> log_min_duration_statement = 60000\n> fsync = true ( not sure if I'm daring enough to run without this )\n> wal_buffers = 1000\n> checkpoint_segments = 64\n> checkpoint_timeout = 3000\n>\n\nThese seem fine to me.\n\nCan you include the output of EXPLAIN SELECT both with and without SET\njoin_collapselimit? Since your tables have grown, I can't compare the\nestimated number of rows, and costs very well.\n\nEXPLAIN without ANALYZE is fine, since I am wondering what the planner\nis thinking things cost.\n\nJohn\n=:->\n\n>\n> #---- FOR PG_AUTOVACUUM --#\n> stats_command_string = true\n> stats_row_level = true\n>", "msg_date": "Fri, 15 Jul 2005 19:06:30 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" }, { "msg_contents": "On 7/15/05, Bruno Wolff III <[email protected]> wrote:\n> On Thu, Jul 14, 2005 at 16:29:58 -0600,\n> Dan Harris <[email protected]> wrote:\n> >\n> > Ok, I tried this one. My ssh keeps getting cut off by a router\n> > somewhere between me and the server due to inactivity timeouts, so\n> > all I know is that both the select and explain analyze are taking\n> > over an hour to run. Here's the explain select for that one, since\n> > that's the best I can get.\n> \n> Are you using NAT at home? That's probably where the issue is. If you\n> have control of that box you can probably increase the timeout to a\n> couple of hours.\n\nSome versions of ssh have such a configuration option (in .ssh/config):\n\nHost *\n ServerAliveInterval 600\n\n...it means that ssh will send a \"ping\" packet to a sshd every 10 minutes\nof inactivity. This way NAT will see activity and won't kill the session.\nI'm using OpenSSH_4.1p1 for this...\n\nOh, and it doesn't have anything to do with TCP keep alive, which is\nrather for finding dead connections than keeping connections alive. 
;)\n\n Regards,\n Dawid\n", "msg_date": "Mon, 18 Jul 2005 09:51:41 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow joining very large table to smaller ones" } ]
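Pulling the advice from that thread together: check the planner's per-key estimates first, improve the statistics on the join column and re-ANALYZE, and only then reach for the per-session planner knobs used above. A rough sketch of such a session follows; the statistics target of 100 is just an example value, and the two SETs affect only the current connection.

    -- 1. compare estimated vs. actual row counts for the selective keys
    EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;
    EXPLAIN ANALYZE SELECT incidentid FROM k_r WHERE id = 94;

    -- 2. give the planner more detail on the join column, then refresh the stats
    ALTER TABLE k_b ALTER COLUMN incidentid SET STATISTICS 100;
    ALTER TABLE k_r ALTER COLUMN incidentid SET STATISTICS 100;
    ANALYZE k_b;
    ANALYZE k_r;
    ANALYZE eventactivity;

    -- 3. per-session experiments from the thread
    SET join_collapse_limit = 1;   -- keep the explicit join order, so k_r/k_b are joined first
    SET enable_mergejoin = off;    -- for comparing plans only, not a permanent fix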
[ { "msg_contents": "I am working on a system that uses postgresql 7.4.2 (can't change that\nuntil 8.1 goes stable). Just figured out that there are about 285,000\nconnections created over about 11 hours every day. That averages out to\nabout 7.2 connections per second.\n\nIs that a lot? I've never seen that many.\n\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Fri, 15 Jul 2005 00:00:58 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": true, "msg_subject": "What's a lot of connections?" }, { "msg_contents": "On Fri, 2005-07-15 at 00:00 -0700, Karim Nassar wrote:\n> I am working on a system that uses postgresql 7.4.2 (can't change that\n> until 8.1 goes stable). Just figured out that there are about 285,000\n> connections created over about 11 hours every day. That averages out to\n> about 7.2 connections per second.\n> \n> Is that a lot? I've never seen that many.\n\nI see about 8 million connections per full day. Connecting to postgres\nis cheap.\n\n-jwb\n", "msg_date": "Fri, 15 Jul 2005 00:14:42 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's a lot of connections?" }, { "msg_contents": "\"Jeffrey W. Baker\" <[email protected]> writes:\n> On Fri, 2005-07-15 at 00:00 -0700, Karim Nassar wrote:\n>> I am working on a system that uses postgresql 7.4.2 (can't change that\n>> until 8.1 goes stable). Just figured out that there are about 285,000\n>> connections created over about 11 hours every day. That averages out to\n>> about 7.2 connections per second.\n>> \n>> Is that a lot? I've never seen that many.\n\n> I see about 8 million connections per full day. Connecting to postgres\n> is cheap.\n\nIt's not *that* cheap. I think you'd get materially better performance\nif you managed to pool your connections a bit. By the time a backend\nhas started, initialized itself, joined a database, and populated its\ninternal caches with enough catalog entries to get useful work done,\nyou've got a fair number of cycles invested in it. Dropping the backend\nafter only one or two queries is just not going to be efficient.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 15 Jul 2005 10:16:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What's a lot of connections? " } ]
[ { "msg_contents": "Hello all\n\n I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\nan 3Ware SATA raid. Currently the database is only 16G with about 2\ntables with 500000+ row, one table 200000+ row and a few small\ntables. The larger tables get updated about every two hours. The\nproblem I having with this server (which is in production) is the disk\nIO. On the larger tables I'm getting disk IO wait averages of\n~70-90%. I've been tweaking the linux kernel as specified in the\nPostgreSQL documentations and switched to the deadline\nscheduler. Nothing seems to be fixing this. The queries are as\noptimized as I can get them. fsync is off in an attempt to help\npreformance still nothing. Are there any setting I should be look at\nthe could improve on this???\n\nThanks for and help in advance.\n\nRon\n", "msg_date": "Fri, 15 Jul 2005 14:39:36 -0600", "msg_from": "Ron Wills <[email protected]>", "msg_from_op": true, "msg_subject": "Really bad diskio" }, { "msg_contents": "Ron Wills wrote:\n> Hello all\n> \n> I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> an 3Ware SATA raid. \n\n2 drives?\n4 drives?\n8 drives?\n\nRAID 1? 0? 10? 5?\n\n\nCurrently the database is only 16G with about 2\n> tables with 500000+ row, one table 200000+ row and a few small\n> tables. The larger tables get updated about every two hours. The\n> problem I having with this server (which is in production) is the disk\n> IO. On the larger tables I'm getting disk IO wait averages of\n> ~70-90%. I've been tweaking the linux kernel as specified in the\n> PostgreSQL documentations and switched to the deadline\n> scheduler. Nothing seems to be fixing this. The queries are as\n> optimized as I can get them. fsync is off in an attempt to help\n> preformance still nothing. Are there any setting I should be look at\n> the could improve on this???\n> \n> Thanks for and help in advance.\n> \n> Ron\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Fri, 15 Jul 2005 13:45:07 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "On Fri, 2005-07-15 at 14:39 -0600, Ron Wills wrote:\n> Hello all\n> \n> I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> an 3Ware SATA raid. Currently the database is only 16G with about 2\n> tables with 500000+ row, one table 200000+ row and a few small\n> tables. The larger tables get updated about every two hours. The\n> problem I having with this server (which is in production) is the disk\n> IO. On the larger tables I'm getting disk IO wait averages of\n> ~70-90%. I've been tweaking the linux kernel as specified in the\n> PostgreSQL documentations and switched to the deadline\n> scheduler. Nothing seems to be fixing this. The queries are as\n> optimized as I can get them. fsync is off in an attempt to help\n> preformance still nothing. Are there any setting I should be look at\n> the could improve on this???\n\nCan you please characterize this a bit better? 
Send the output of\nvmstat or iostat over several minutes, or similar diagnostic\ninformation.\n\nAlso please describe your hardware more.\n\nRegards,\nJeff Baker\n", "msg_date": "Fri, 15 Jul 2005 14:00:07 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "At Fri, 15 Jul 2005 13:45:07 -0700,\nJoshua D. Drake wrote:\n> \n> Ron Wills wrote:\n> > Hello all\n> > \n> > I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> > an 3Ware SATA raid. \n> \n> 2 drives?\n> 4 drives?\n> 8 drives?\n\n 3 drives raid 5. I don't believe it's the raid. I've tested this by\nmoving the database to the mirrors software raid where the root is\nfound and onto the the SATA raid. Neither relieved the IO problems.\n I was also was thinking this could be from the transactional\nsubsystem getting overloaded? There are several automated processes\nthat use the DB. Most are just selects, but the data updates and one\nthat updates the smaller tables that are the heavier queries. On\ntheir own they seem to work ok, (still high IO, but fairly quick). But\nif even the simplest select is called during the heavier operation,\nthen everything goes out through the roof. Maybe there's something I'm\nmissing here as well?\n\n> RAID 1? 0? 10? 5?\n> \n> \n> Currently the database is only 16G with about 2\n> > tables with 500000+ row, one table 200000+ row and a few small\n> > tables. The larger tables get updated about every two hours. The\n> > problem I having with this server (which is in production) is the disk\n> > IO. On the larger tables I'm getting disk IO wait averages of\n> > ~70-90%. I've been tweaking the linux kernel as specified in the\n> > PostgreSQL documentations and switched to the deadline\n> > scheduler. Nothing seems to be fixing this. The queries are as\n> > optimized as I can get them. fsync is off in an attempt to help\n> > preformance still nothing. Are there any setting I should be look at\n> > the could improve on this???\n> > \n> > Thanks for and help in advance.\n> > \n> > Ron\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> \n> \n> -- \n> Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\n> PostgreSQL Replication, Consulting, Custom Programming, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Fri, 15 Jul 2005 15:04:35 -0600", "msg_from": "Ron Wills <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "\nOn Jul 15, 2005, at 2:39 PM, Ron Wills wrote:\n\n> Hello all\n>\n> I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> an 3Ware SATA raid.\n\nOperating System? Which file system are you using? I was having a \nsimilar problem just a few days ago and learned that ext3 was the \nculprit.\n\n-Dan\n", "msg_date": "Fri, 15 Jul 2005 15:07:08 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "On Fri, Jul 15, 2005 at 03:04:35PM -0600, Ron Wills wrote:\n>\n> > Ron Wills wrote:\n> > > Hello all\n> > > \n> > > I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> > > an 3Ware SATA raid. 
\n> > \n> 3 drives raid 5. I don't believe it's the raid. I've tested this by\n> moving the database to the mirrors software raid where the root is\n> found and onto the the SATA raid. Neither relieved the IO problems.\n\nWhat filesystem is this?\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\nSi no sabes adonde vas, es muy probable que acabes en otra parte.\n", "msg_date": "Fri, 15 Jul 2005 17:10:26 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "On Fri, 2005-07-15 at 15:04 -0600, Ron Wills wrote:\n> At Fri, 15 Jul 2005 13:45:07 -0700,\n> Joshua D. Drake wrote:\n> > \n> > Ron Wills wrote:\n> > > Hello all\n> > > \n> > > I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> > > an 3Ware SATA raid. \n> > \n> > 2 drives?\n> > 4 drives?\n> > 8 drives?\n> \n> 3 drives raid 5. I don't believe it's the raid. I've tested this by\n> moving the database to the mirrors software raid where the root is\n> found and onto the the SATA raid. Neither relieved the IO problems.\n\nHard or soft RAID? Which controller? Many of the 3Ware controllers\n(85xx and 95xx) have extremely bad RAID 5 performance.\n\nDid you take any pgbench or other benchmark figures before you started\nusing the DB?\n\n-jwb\n", "msg_date": "Fri, 15 Jul 2005 14:17:34 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "At Fri, 15 Jul 2005 14:00:07 -0700,\nJeffrey W. Baker wrote:\n> \n> On Fri, 2005-07-15 at 14:39 -0600, Ron Wills wrote:\n> > Hello all\n> > \n> > I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> > an 3Ware SATA raid. Currently the database is only 16G with about 2\n> > tables with 500000+ row, one table 200000+ row and a few small\n> > tables. The larger tables get updated about every two hours. The\n> > problem I having with this server (which is in production) is the disk\n> > IO. On the larger tables I'm getting disk IO wait averages of\n> > ~70-90%. I've been tweaking the linux kernel as specified in the\n> > PostgreSQL documentations and switched to the deadline\n> > scheduler. Nothing seems to be fixing this. The queries are as\n> > optimized as I can get them. fsync is off in an attempt to help\n> > preformance still nothing. Are there any setting I should be look at\n> > the could improve on this???\n> \n> Can you please characterize this a bit better? 
Send the output of\n> vmstat or iostat over several minutes, or similar diagnostic\n> information.\n> \n> Also please describe your hardware more.\n\nHere's a bit of a dump of the system that should be useful.\n\nProcessors x2:\n\nvendor_id : AuthenticAMD\ncpu family : 6\nmodel : 8\nmodel name : AMD Athlon(tm) MP 2400+\nstepping : 1\ncpu MHz : 2000.474\ncache size : 256 KB\n\nMemTotal: 903804 kB\n\nMandrake 10.0 Linux kernel 2.6.3-19mdk\n\nThe raid controller, which is using the hardware raid configuration:\n\n3ware 9000 Storage Controller device driver for Linux v2.26.02.001.\nscsi0 : 3ware 9000 Storage Controller\n3w-9xxx: scsi0: Found a 3ware 9000 Storage Controller at 0xe8020000, IRQ: 17.\n3w-9xxx: scsi0: Firmware FE9X 2.02.00.011, BIOS BE9X 2.02.01.037, Ports: 4.\n Vendor: 3ware Model: Logical Disk 00 Rev: 1.00\n Type: Direct-Access ANSI SCSI revision: 00\nSCSI device sda: 624955392 512-byte hdwr sectors (319977 MB)\nSCSI device sda: drive cache: write back, no read (daft)\n\nThis is also on a 3.6 reiser filesystem.\n\nHere's the iostat for 10mins every 10secs. I've removed the stats from\nthe idle drives to reduce the size of this email.\n\nLinux 2.6.3-19mdksmp (photo_server) \t07/15/2005\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.85 1.53 2.15 39.52 53.95\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 82.49 4501.73 188.38 1818836580 76110154\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.30 0.00 1.00 96.30 2.40\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 87.80 6159.20 340.00 61592 3400\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.50 0.00 1.45 94.35 1.70\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 89.60 5402.40 320.80 54024 3208\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.00 0.10 1.35 97.55 0.00\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 105.20 5626.40 332.80 56264 3328\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.40 0.00 1.00 87.40 11.20\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 92.61 4484.32 515.48 44888 5160\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.45 0.00 1.00 92.66 5.89\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 89.10 4596.00 225.60 45960 2256\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.30 0.00 0.80 96.30 2.60\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 86.49 3877.48 414.01 38736 4136\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.50 0.00 1.00 98.15 0.35\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 97.10 4710.49 405.19 47152 4056\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.35 0.00 1.00 98.65 0.00\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 93.30 5324.80 186.40 53248 1864\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.40 0.00 1.10 96.70 1.80\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 117.88 5481.72 402.80 54872 4032\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.50 0.00 1.05 98.30 0.15\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 124.00 6081.60 403.20 60816 4032\n\navg-cpu: %user %nice %sys %iowait %idle\n 8.75 0.00 2.55 84.46 4.25\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 125.20 5609.60 228.80 56096 2288\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.25 0.00 1.30 96.00 0.45\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 176.98 6166.17 686.29 61600 6856\n\navg-cpu: %user %nice %sys %iowait %idle\n 5.95 0.00 2.25 88.09 3.70\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 154.55 7879.32 295.70 78872 2960\n\navg-cpu: %user %nice %sys %iowait 
%idle\n 10.29 0.00 3.40 81.97 4.35\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 213.19 11422.18 557.84 114336 5584\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.90 0.10 3.25 94.75 0.00\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 227.80 12330.40 212.80 123304 2128\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.55 0.00 0.85 96.80 1.80\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 96.30 3464.80 568.80 34648 5688\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.70 0.00 1.10 97.25 0.95\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 92.60 4989.60 237.60 49896 2376\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.75 0.00 2.10 93.55 1.60\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 198.40 10031.63 458.86 100216 4584\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.65 0.00 2.40 95.90 1.05\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 250.25 14174.63 231.77 141888 2320\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.60 0.00 2.15 97.20 0.05\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 285.50 12127.20 423.20 121272 4232\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.60 0.00 2.90 95.65 0.85\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 393.70 14383.20 534.40 143832 5344\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.55 0.00 2.15 96.15 1.15\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 252.15 11801.80 246.15 118136 2464\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.75 0.00 3.45 95.15 0.65\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 396.00 19980.80 261.60 199808 2616\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.70 0.00 2.70 95.70 0.90\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 286.20 14182.40 467.20 141824 4672\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.70 0.00 2.70 95.65 0.95\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 344.20 15838.40 473.60 158384 4736\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.75 0.00 1.70 97.50 0.05\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 178.72 7495.70 412.39 75032 4128\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.05 0.05 1.30 97.05 0.55\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 107.89 4334.87 249.35 43392 2496\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.55 0.00 1.30 98.10 0.05\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 107.01 6345.55 321.12 63392 3208\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.65 0.00 1.05 97.55 0.75\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 107.79 3908.89 464.34 39128 4648\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.50 0.00 1.15 97.75 0.60\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 109.21 4162.56 434.83 41584 4344\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.75 0.00 1.15 98.00 0.10\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 104.19 4796.81 211.58 48064 2120\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.70 0.00 1.05 97.85 0.40\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 105.50 4690.40 429.60 46904 4296\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.75 0.00 1.10 98.15 0.00\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 107.51 4525.33 357.96 45208 3576\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.80 0.00 1.65 92.81 2.75\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 123.18 3810.59 512.29 38144 5128\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.60 0.00 1.05 97.10 1.25\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read 
Blk_wrtn\nsda 104.60 3780.00 236.00 37800 2360\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.70 0.00 1.10 95.96 2.25\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 117.08 3817.78 466.73 38216 4672\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.65 0.00 0.90 96.65 1.80\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 117.20 3629.60 477.60 36296 4776\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.80 0.00 1.10 97.50 0.60\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 112.79 4258.94 326.07 42632 3264\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.05 0.15 1.20 97.50 0.10\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 125.83 2592.99 522.12 25904 5216\n\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.60 0.00 0.55 98.20 0.65\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 104.90 823.98 305.29 8248 3056\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.50 0.00 0.65 98.75 0.10\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 109.80 734.40 468.80 7344 4688\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.15 0.00 1.05 97.75 0.05\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 107.70 751.20 463.20 7512 4632\n\navg-cpu: %user %nice %sys %iowait %idle\n 6.50 0.00 1.85 90.25 1.40\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 98.00 739.14 277.08 7384 2768\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.20 0.00 0.40 82.75 16.65\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 83.13 550.90 360.08 5520 3608\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.65 0.30 2.15 82.91 11.99\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 100.00 1136.46 503.50 11376 5040\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.00 6.25 2.15 89.70 0.90\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 170.17 4106.51 388.39 41024 3880\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.75 0.15 1.75 73.70 23.65\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 234.60 5107.20 232.80 51072 2328\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.15 0.00 0.65 49.48 49.73\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 175.52 1431.37 122.28 14328 1224\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.15 0.00 0.55 50.22 49.08\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 173.50 1464.00 119.20 14640 1192\n\navg-cpu: %user %nice %sys %iowait %idle\n 2.00 0.00 0.60 76.18 21.22\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 130.60 1044.80 203.20 10448 2032\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.90 0.10 0.75 97.55 0.70\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 92.09 1024.22 197.80 10232 1976\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.25 0.00 0.40 73.78 25.57\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 92.81 582.83 506.99 5840 5080\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.20 0.00 0.55 98.85 0.40\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 90.80 657.60 383.20 6576 3832\n\navg-cpu: %user %nice %sys %iowait %idle\n 16.46 0.00 4.25 77.09 2.20\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 99.60 1174.83 549.85 11760 5504\n\navg-cpu: %user %nice %sys %iowait %idle\n 8.05 0.00 2.60 56.92 32.43\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 172.30 2063.20 128.00 20632 1280\n\navg-cpu: %user %nice %sys %iowait %idle\n 20.84 0.00 4.75 52.82 21.59\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 174.30 1416.80 484.00 14168 4840\n\navg-cpu: %user %nice %sys %iowait %idle\n 1.30 0.00 1.60 56.93 
40.17\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 181.02 2858.74 418.78 28616 4192\n\navg-cpu: %user %nice %sys %iowait %idle\n 19.17 0.00 4.44 49.78 26.61\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 162.20 1286.40 373.60 12864 3736\n\navg-cpu: %user %nice %sys %iowait %idle\n 0.15 0.00 0.60 50.85 48.40\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 178.08 1436.64 97.70 14352 976\n\n\n> Regards,\n> Jeff Baker\n", "msg_date": "Fri, 15 Jul 2005 15:29:06 -0600", "msg_from": "Ron Wills <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "At Fri, 15 Jul 2005 14:17:34 -0700,\nJeffrey W. Baker wrote:\n> \n> On Fri, 2005-07-15 at 15:04 -0600, Ron Wills wrote:\n> > At Fri, 15 Jul 2005 13:45:07 -0700,\n> > Joshua D. Drake wrote:\n> > > \n> > > Ron Wills wrote:\n> > > > Hello all\n> > > > \n> > > > I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> > > > an 3Ware SATA raid. \n> > > \n> > > 2 drives?\n> > > 4 drives?\n> > > 8 drives?\n> > \n> > 3 drives raid 5. I don't believe it's the raid. I've tested this by\n> > moving the database to the mirrors software raid where the root is\n> > found and onto the the SATA raid. Neither relieved the IO problems.\n> \n> Hard or soft RAID? Which controller? Many of the 3Ware controllers\n> (85xx and 95xx) have extremely bad RAID 5 performance.\n> \n> Did you take any pgbench or other benchmark figures before you started\n> using the DB?\n\n No, unfortunatly, I'm more or less just the developer for the\nautomation systems and admin the system to keep everything going. I\nhave very little say in the hardware used and I don't have any\nphysical access to the machine, it's found a province over :P.\n But, for what the system, this IO seems unreasonable. I run\ndevelopment on a 1.4Ghz Athlon, Gentoo system, with no raid and I\ncan't reproduce this kind of IO :(.\n\n> -jwb\n", "msg_date": "Fri, 15 Jul 2005 15:48:13 -0600", "msg_from": "Ron Wills <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "On Fri, 2005-07-15 at 15:29 -0600, Ron Wills wrote:\n> Here's a bit of a dump of the system that should be useful.\n> \n> Processors x2:\n> \n> vendor_id : AuthenticAMD\n> cpu family : 6\n> model : 8\n> model name : AMD Athlon(tm) MP 2400+\n> stepping : 1\n> cpu MHz : 2000.474\n> cache size : 256 KB\n> \n> MemTotal: 903804 kB\n> \n> Mandrake 10.0 Linux kernel 2.6.3-19mdk\n> \n> The raid controller, which is using the hardware raid configuration:\n> \n> 3ware 9000 Storage Controller device driver for Linux v2.26.02.001.\n> scsi0 : 3ware 9000 Storage Controller\n> 3w-9xxx: scsi0: Found a 3ware 9000 Storage Controller at 0xe8020000, IRQ: 17.\n> 3w-9xxx: scsi0: Firmware FE9X 2.02.00.011, BIOS BE9X 2.02.01.037, Ports: 4.\n> Vendor: 3ware Model: Logical Disk 00 Rev: 1.00\n> Type: Direct-Access ANSI SCSI revision: 00\n> SCSI device sda: 624955392 512-byte hdwr sectors (319977 MB)\n> SCSI device sda: drive cache: write back, no read (daft)\n> \n> This is also on a 3.6 reiser filesystem.\n> \n> Here's the iostat for 10mins every 10secs. 
I've removed the stats from\n> the idle drives to reduce the size of this email.\n> \n> Linux 2.6.3-19mdksmp (photo_server) \t07/15/2005\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 2.85 1.53 2.15 39.52 53.95\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 82.49 4501.73 188.38 1818836580 76110154\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 0.30 0.00 1.00 96.30 2.40\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 87.80 6159.20 340.00 61592 3400\n\nThese I/O numbers are not so horrible, really. 100% iowait is not\nnecessarily a symptom of misconfiguration. It just means you are disk\nlimited. With a database 20 times larger than main memory, this is no\nsurprise.\n\nIf I had to speculate about the best way to improve your performance, I\nwould say:\n\n1a) Get a better RAID controller. The 3ware hardware RAID5 is very bad.\n1b) Get more disks.\n2) Get a (much) newer kernel.\n3) Try XFS or JFS. Reiser3 has never looked good in my pgbench runs\n\nBy the way, are you experiencing bad application performance, or are you\njust unhappy with the iostat figures?\n\nRegards,\njwb\n\n", "msg_date": "Fri, 15 Jul 2005 14:53:26 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "At Fri, 15 Jul 2005 14:53:26 -0700,\nJeffrey W. Baker wrote:\n> \n> On Fri, 2005-07-15 at 15:29 -0600, Ron Wills wrote:\n> > Here's a bit of a dump of the system that should be useful.\n> > \n> > Processors x2:\n> > \n> > vendor_id : AuthenticAMD\n> > cpu family : 6\n> > model : 8\n> > model name : AMD Athlon(tm) MP 2400+\n> > stepping : 1\n> > cpu MHz : 2000.474\n> > cache size : 256 KB\n> > \n> > MemTotal: 903804 kB\n> > \n> > Mandrake 10.0 Linux kernel 2.6.3-19mdk\n> > \n> > The raid controller, which is using the hardware raid configuration:\n> > \n> > 3ware 9000 Storage Controller device driver for Linux v2.26.02.001.\n> > scsi0 : 3ware 9000 Storage Controller\n> > 3w-9xxx: scsi0: Found a 3ware 9000 Storage Controller at 0xe8020000, IRQ: 17.\n> > 3w-9xxx: scsi0: Firmware FE9X 2.02.00.011, BIOS BE9X 2.02.01.037, Ports: 4.\n> > Vendor: 3ware Model: Logical Disk 00 Rev: 1.00\n> > Type: Direct-Access ANSI SCSI revision: 00\n> > SCSI device sda: 624955392 512-byte hdwr sectors (319977 MB)\n> > SCSI device sda: drive cache: write back, no read (daft)\n> > \n> > This is also on a 3.6 reiser filesystem.\n> > \n> > Here's the iostat for 10mins every 10secs. I've removed the stats from\n> > the idle drives to reduce the size of this email.\n> > \n> > Linux 2.6.3-19mdksmp (photo_server) \t07/15/2005\n> > \n> > avg-cpu: %user %nice %sys %iowait %idle\n> > 2.85 1.53 2.15 39.52 53.95\n> > \n> > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> > sda 82.49 4501.73 188.38 1818836580 76110154\n> > \n> > avg-cpu: %user %nice %sys %iowait %idle\n> > 0.30 0.00 1.00 96.30 2.40\n> > \n> > Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> > sda 87.80 6159.20 340.00 61592 3400\n> \n> These I/O numbers are not so horrible, really. 100% iowait is not\n> necessarily a symptom of misconfiguration. It just means you are disk\n> limited. With a database 20 times larger than main memory, this is no\n> surprise.\n> \n> If I had to speculate about the best way to improve your performance, I\n> would say:\n> \n> 1a) Get a better RAID controller. The 3ware hardware RAID5 is very bad.\n> 1b) Get more disks.\n> 2) Get a (much) newer kernel.\n> 3) Try XFS or JFS. 
Reiser3 has never looked good in my pgbench runs\n\nNot good news :(. I can't change the hardware, hopefully a kernel\nupdate and XFS of JFS will make an improvement. I was hoping for\nsoftware raid (always has worked well), but the client didn't feel\nconforable with it :P.\n \n> By the way, are you experiencing bad application performance, or are you\n> just unhappy with the iostat figures?\n\n It's affecting the whole system. It is sending the load averages\nthrough the roof (from 4 to 12) and processes that would take only a\nfew minutes starts going over an hour, until it clears up. Well, I\nguess I'll have to drum up some more programming magic... and I'm\nstarting to run out of tricks... I love my job some day :$\n \n> Regards,\n> jwb\n> \n", "msg_date": "Fri, 15 Jul 2005 16:11:39 -0600", "msg_from": "Ron Wills <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really bad diskio" }, { "msg_contents": "At Fri, 15 Jul 2005 14:39:36 -0600,\nRon Wills wrote:\n\n I just wanted to thank everyone for their help. I believe we found a\nsolution that will help with this problem, with the hardware\nconfiguration and caching the larger tables into smaller data sets. \nA valuable lesson learned from this ;)\n\n> Hello all\n> \n> I'm running a postgres 7.4.5, on a dual 2.4Ghz Athlon, 1Gig RAM and\n> an 3Ware SATA raid. Currently the database is only 16G with about 2\n> tables with 500000+ row, one table 200000+ row and a few small\n> tables. The larger tables get updated about every two hours. The\n> problem I having with this server (which is in production) is the disk\n> IO. On the larger tables I'm getting disk IO wait averages of\n> ~70-90%. I've been tweaking the linux kernel as specified in the\n> PostgreSQL documentations and switched to the deadline\n> scheduler. Nothing seems to be fixing this. The queries are as\n> optimized as I can get them. fsync is off in an attempt to help\n> preformance still nothing. Are there any setting I should be look at\n> the could improve on this???\n> \n> Thanks for and help in advance.\n> \n> Ron\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Sat, 16 Jul 2005 15:14:55 -0600", "msg_from": "Ron Wills <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really bad diskio" } ]
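
The fix Ron describes at the end of the thread, caching the larger tables into smaller data sets, is not shown in the messages, so the sketch below is only a guess at what that can look like: a small, periodically rebuilt cache table that the frequent queries read instead of the big, constantly updated ones. All table and column names are hypothetical.

import psycopg2

# Hypothetical schema: big_table(device_id, sample_time, ...) is the
# large, frequently updated table; big_table_cache is a small summary
# assumed to already exist with columns (device_id, last_sample,
# sample_count). Hot queries read the cache instead of big_table.
REFRESH_SQL = """
DELETE FROM big_table_cache;
INSERT INTO big_table_cache (device_id, last_sample, sample_count)
    SELECT device_id, max(sample_time), count(*)
    FROM big_table
    WHERE sample_time > now() - interval '2 hours'
    GROUP BY device_id;
ANALYZE big_table_cache;
"""

def refresh_cache(dsn="dbname=appdb"):
    # Run the whole refresh in one transaction so readers never see a
    # half-rebuilt cache.
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute(REFRESH_SQL)
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    refresh_cache()

Scheduling a refresh like this every few minutes keeps the working set for the frequent selects small enough to stay cached, which is the effect the thread was after.
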
[ { "msg_contents": "In our last installment, we saw that JFS provides higher pgbench\nperformance than either XFS or ext3. Using a direct-I/O patch stolen\nfrom 8.1, JFS achieved 105 tps with 100 clients.\n\nTo refresh, the machine in question has 5 7200RPM SATA disks, an Areca\nRAID controller with 128MB cache, and 1GB of main memory. pgbench is\nbeing run with a scale factor of 1000 and 100000 total transactions.\n\nAt the suggestion of Andreas Dilger of clusterfs, I tried modulating the\nsize of the ext3 journal, and the mount options (data=journal,\nwriteback, and ordered). I turns out that you can achieve a substantial\nimprovement (almost 50%) by simply mounting the ext3 volume with\ndata=writeback instead of data=ordered (the default). Changing the\njournal size did not seem to make a difference, except that 256MB is for\nsome reason pathological (9% slower than the best time). 128MB, the\ndefault for a large volume, gave the same performance as 400MB (the max)\nor 32MB.\n\nIn the end, the ext3 volume mounted with -o noatime,data=writeback\nyielded 88 tps with 100 clients. This is about 16% off the performance\nof JFS with default options.\n\nAndreas pointed me to experimental patches to ext3's block allocation\ncode and writeback strategy. I will test these, but I expect the\ndatabase community, which seems so attached to its data, will be very\ninterested in code that has not yet entered mainstream use.\n\nAnother frequent suggestion is to put the xlog on a separate device. I\ntried this, and, for a given number of disks, it appears to be\ncounter-productive. A RAID5 of 5 disks holding both logs and data is\nabout 15% faster than a RAID5 of 3 disks with the data, and a mirror of\ntwo disks holding the xlog.\n\nHere are the pgbench results for each permutation of ext3:\n\nJournal Size | Journal Mode | 1 Client | 10 Clients | 100 Clients\n------------------------------------------------------------------\n32 ordered 28 51 57\n32 writeback 34 70 88\n64 ordered 29 52 61\n64 writeback 32 69 87\n128 ordered 32 54 62\n128 writeback 34 70 88\n256 ordered 28 51 60\n256 writeback 29 64 79\n400 ordered 26 49 59\n400 writeback 32 70 87\n\n-jwb\n", "msg_date": "Sat, 16 Jul 2005 01:12:27 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "more filesystem benchmarks" }, { "msg_contents": "On Sat, Jul 16, 2005 at 01:12:27AM -0700, Jeffrey W. Baker wrote:\n>Another frequent suggestion is to put the xlog on a separate device. I\n>tried this, and, for a given number of disks, it appears to be\n>counter-productive. A RAID5 of 5 disks holding both logs and data is\n>about 15% faster than a RAID5 of 3 disks with the data, and a mirror of\n>two disks holding the xlog.\n\nTry simply a seperate partition on the same device with a different\nfilesystem (ext2). \n\nMike Stone\n", "msg_date": "Sat, 16 Jul 2005 07:01:08 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: more filesystem benchmarks" } ]
[ { "msg_contents": "\n\n\n\nPostgres Version:\n7.3.9 and 8.0.1 (different sites use different versions depending on when\nthey first installed Postgres)\n\nMigration Plans:\nAll sites on 8.n within the next 6-9 months.\n\nScenario:\nA temporary table is created via a \"SELECT blah INTO TEMPORARY TABLE blah\nFROM...\". The SELECT query is composed of a number of joins on small\n(thousands of rows) parameter tables. A view is not usable here because\nthe temporary table SELECT query is constructed on the fly in PHP with JOIN\nparameters and WHERE filters that may change from main query set to main\nquery set.\n\nAfter the table is created, the key main query JOIN parameter (device ID)\nis indexed. The resulting temporary table is at most 3000-4000 small (128\nbyte) records.\n\nThe temporary table is then joined in a series of SELECT queries to other\ndata tables in the database that contain information associated with the\nrecords in the temporary table. These secondary tables can have tens of\nmillions of records each. After the queries are executed, the DB\nconnection is closed and the temporary table and index automatically\ndeleted.\n\nAre there any performance issues or considerations associated with using a\ntemporary table in this scenario? Is it worth my trying to develop a\nsolution that just incorporates all the logic used to create the temporary\ntable into each of the main queries? How expensive an operation is\ntemporary table creation and joining?\n\nThanks in advance for your advice,\n--- Steve\n\n", "msg_date": "Sat, 16 Jul 2005 17:42:21 -0400", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": true, "msg_subject": "Questions about temporary tables and performance" }, { "msg_contents": "Steven Rosenstein <[email protected]> writes:\n> Are there any performance issues or considerations associated with using a\n> temporary table in this scenario?\n\nIt's probably worthwhile to ANALYZE the temp table after it's filled,\nbefore you start joining to it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Jul 2005 00:44:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Questions about temporary tables and performance " } ]
[ { "msg_contents": "I have a unique scenerio. My DB is under \"continual load\", meaning\nthat I am constantly using COPY to insert new data into the DB. There\nis no \"quiet period\" for the database, at least not for hours on end. \nNormally, checkpoint_segments can help absorb some of that, but my\nexperience is that if I crank the number up, it simply delays the\nimpact, and when it occurs, it takes a VERY long time (minutes) to\nclear.\n\nThoughts?\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Sun, 17 Jul 2005 13:08:34 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> I have a unique scenerio. My DB is under \"continual load\", meaning\n> that I am constantly using COPY to insert new data into the DB. There\n> is no \"quiet period\" for the database, at least not for hours on end. \n> Normally, checkpoint_segments can help absorb some of that, but my\n> experience is that if I crank the number up, it simply delays the\n> impact, and when it occurs, it takes a VERY long time (minutes) to\n> clear.\n\nIf you are using 8.0, you can probably alleviate the problem by making\nthe bgwriter more aggressive. I don't have any immediate\nrecommendations for specific settings though.\n\nA small checkpoint_segments setting is definitely bad news for\nperformance under heavy load.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 17 Jul 2005 14:15:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "On Jul 17, 2005, at 1:08 PM, Christopher Petrilli wrote:\n\n> Normally, checkpoint_segments can help absorb some of that, but my\n> experience is that if I crank the number up, it simply delays the\n> impact, and when it occurs, it takes a VERY long time (minutes) to\n> clear.\n\nThere comes a point where your only recourse is to throw hardware at \nthe problem. I would suspect that getting faster disks and splitting \nthe checkpoint log to its own RAID partition would help you here. \nAdding more RAM while you're at it always does wonders for me :-)\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806", "msg_date": "Mon, 18 Jul 2005 14:32:14 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/18/05, Vivek Khera <[email protected]> wrote:\n> \n> On Jul 17, 2005, at 1:08 PM, Christopher Petrilli wrote:\n> \n> > Normally, checkpoint_segments can help absorb some of that, but my\n> > experience is that if I crank the number up, it simply delays the\n> > impact, and when it occurs, it takes a VERY long time (minutes) to\n> > clear.\n> \n> There comes a point where your only recourse is to throw hardware at\n> the problem. I would suspect that getting faster disks and splitting\n> the checkpoint log to its own RAID partition would help you here.\n> Adding more RAM while you're at it always does wonders for me :-)\n\nMy concern is less with absolute performance, than with the nosedive\nit goes into. I published some of my earlier findings and comparisons\non my blog, but there's a graph here:\n\nhttp://blog.amber.org/diagrams/comparison_mysql_pgsql.png\n\nNotice the VERY steep drop off. 
I'm still trying to get rid of it,\nbut honestly, am not smart enough to know where it's originating. I\nhave no desire to ever use MySQL, but it is a reference point, and\nsince I don't particularly need transactional integrity, a valid\ncomparison.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Mon, 18 Jul 2005 14:45:35 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> http://blog.amber.org/diagrams/comparison_mysql_pgsql.png\n\n> Notice the VERY steep drop off.\n\nHmm. Whatever that is, it's not checkpoint's fault. I would interpret\nthe regular upticks in the Postgres times (every several hundred\niterations) as being the effects of checkpoints. You could probably\nsmooth out those peaks some with appropriate hacking on bgwriter\nparameters, but that's not the issue at hand (is it?).\n\nI have no idea at all what's causing the sudden falloff in performance\nafter about 10000 iterations. COPY per se ought to be about a\nconstant-time operation, since APPEND is (or should be) constant-time.\nWhat indexes, foreign keys, etc do you have on this table? What else\nwas going on at the time?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 15:32:36 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "On 7/18/05, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > http://blog.amber.org/diagrams/comparison_mysql_pgsql.png\n> \n> > Notice the VERY steep drop off.\n> \n> Hmm. Whatever that is, it's not checkpoint's fault. I would interpret\n> the regular upticks in the Postgres times (every several hundred\n> iterations) as being the effects of checkpoints. You could probably\n> smooth out those peaks some with appropriate hacking on bgwriter\n> parameters, but that's not the issue at hand (is it?).\n\nI tried hacking that, turning it up to be more agressive, it got\nworse. Turned it down, it got worse :-)\n \n> I have no idea at all what's causing the sudden falloff in performance\n> after about 10000 iterations. COPY per se ought to be about a\n> constant-time operation, since APPEND is (or should be) constant-time.\n> What indexes, foreign keys, etc do you have on this table? What else\n> was going on at the time?\n\nThe table has 15 columns, 5 indexes (character, inet and timestamp).\nNo foreign keys. The only other thing running on the machine was the\napplication actually DOING the benchmarking, written in Python\n(psycopg), but it was, according to top, using less than 1% of the\nCPU. It was just talking through a pipe to a psql prompt to do the\nCOPY.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Mon, 18 Jul 2005 15:34:57 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On 7/18/05, Tom Lane <[email protected]> wrote:\n>> I have no idea at all what's causing the sudden falloff in performance\n>> after about 10000 iterations. 
COPY per se ought to be about a\n>> constant-time operation, since APPEND is (or should be) constant-time.\n>> What indexes, foreign keys, etc do you have on this table? What else\n>> was going on at the time?\n\n> The table has 15 columns, 5 indexes (character, inet and timestamp).\n> No foreign keys. The only other thing running on the machine was the\n> application actually DOING the benchmarking, written in Python\n> (psycopg), but it was, according to top, using less than 1% of the\n> CPU. It was just talking through a pipe to a psql prompt to do the\n> COPY.\n\nSounds pretty plain-vanilla all right.\n\nAre you in a position to try the same benchmark against CVS tip?\n(The nightly snapshot tarball would be plenty close enough.) I'm\njust wondering if the old bgwriter behavior of locking down the\nbufmgr while it examined the ARC/2Q data structures is causing this...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 16:32:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "On 7/18/05, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > On 7/18/05, Tom Lane <[email protected]> wrote:\n> >> I have no idea at all what's causing the sudden falloff in performance\n> >> after about 10000 iterations. COPY per se ought to be about a\n> >> constant-time operation, since APPEND is (or should be) constant-time.\n> >> What indexes, foreign keys, etc do you have on this table? What else\n> >> was going on at the time?\n> \n> > The table has 15 columns, 5 indexes (character, inet and timestamp).\n> > No foreign keys. The only other thing running on the machine was the\n> > application actually DOING the benchmarking, written in Python\n> > (psycopg), but it was, according to top, using less than 1% of the\n> > CPU. It was just talking through a pipe to a psql prompt to do the\n> > COPY.\n> \n> Sounds pretty plain-vanilla all right.\n> \n> Are you in a position to try the same benchmark against CVS tip?\n> (The nightly snapshot tarball would be plenty close enough.) I'm\n> just wondering if the old bgwriter behavior of locking down the\n> bufmgr while it examined the ARC/2Q data structures is causing this...\n\nSo here's something odd I noticed:\n\n20735 pgsql 16 0 20640 11m 10m R 48.0 1.2 4:09.65\npostmaster \n20734 petrilli 25 0 8640 2108 1368 R 38.1 0.2 4:25.80 psql\n\nThe 47 and 38.1 are %CPU. Why would psql be burning so much CPU? I've\ngot it attached ,via a pipe to another process that's driving it\n(until I implement the protocol for COPY later). I wouldn't think it\nshould be uing such a huge percentage of the CPU, no?\n\nThe Python script that's actually driving it is about 10% o the CPU,\nwhich is just because it's generating the incoming data on the fly. \nThoughts?\n\nI will give the CVS head a spin soon, but I wanted to formalize my\nbenchmarking more first.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Mon, 18 Jul 2005 19:30:53 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/18/05, Tom Lane <[email protected]> wrote:\n> > The table has 15 columns, 5 indexes (character, inet and timestamp).\n> > No foreign keys. 
The only other thing running on the machine was the\n> > application actually DOING the benchmarking, written in Python\n> > (psycopg), but it was, according to top, using less than 1% of the\n> > CPU. It was just talking through a pipe to a psql prompt to do the\n> > COPY.\n> \n> Sounds pretty plain-vanilla all right.\n> \n> Are you in a position to try the same benchmark against CVS tip?\n> (The nightly snapshot tarball would be plenty close enough.) I'm\n> just wondering if the old bgwriter behavior of locking down the\n> bufmgr while it examined the ARC/2Q data structures is causing this...\n\nTom,\n\nIt looks like the CVS HEAD is definately \"better,\" but not by a huge\namount. The only difference is I wasn't run autovacuum in the\nbackground (default settings), but I don't think this explains it. \nHere's a graph of the differences and density of behavior:\n\nhttp://blog.amber.org/diagrams/pgsql_copy_803_cvs.png\n\nI can provide the raw data. Each COPY was 500 rows. Note that fsync\nis turned off here. Maybe it'd be more stable with it turned on?\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 10:48:42 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> Here's a graph of the differences and density of behavior:\n\n> http://blog.amber.org/diagrams/pgsql_copy_803_cvs.png\n\n> I can provide the raw data.\n\nHow about the complete test case? There's something awfully odd going\non there, and I'd like to find out what.\n\n> Note that fsync is turned off here. Maybe it'd be more stable with it\n> turned on?\n\nHard to say. I was about to ask if you'd experimented with altering\nconfiguration parameters such as shared_buffers or checkpoint_segments\nto see if you can move the point of onset of slowdown. I'm thinking\nthe behavioral change might be associated with running out of free\nbuffers or some such. (Are you running these tests under a freshly-\nstarted postmaster, or one that's been busy for awhile?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 11:04:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "\n\tWhat happens if, say at iteration 6000 (a bit after the mess starts), you \npause it for a few minutes and resume. Will it restart with a plateau like \nat the beginning of the test ? or not ?\n\tWhat if, during this pause, you disconnect and reconnect, or restart the \npostmaster, or vacuum, or analyze ?\n\n\n> On 7/18/05, Tom Lane <[email protected]> wrote:\n>> > The table has 15 columns, 5 indexes (character, inet and timestamp).\n>> > No foreign keys. The only other thing running on the machine was the\n>> > application actually DOING the benchmarking, written in Python\n>> > (psycopg), but it was, according to top, using less than 1% of the\n>> > CPU. It was just talking through a pipe to a psql prompt to do the\n>> > COPY.\n>>\n>> Sounds pretty plain-vanilla all right.\n>>\n>> Are you in a position to try the same benchmark against CVS tip?\n>> (The nightly snapshot tarball would be plenty close enough.) 
I'm\n>> just wondering if the old bgwriter behavior of locking down the\n>> bufmgr while it examined the ARC/2Q data structures is causing this...\n>\n> Tom,\n>\n> It looks like the CVS HEAD is definately \"better,\" but not by a huge\n> amount. The only difference is I wasn't run autovacuum in the\n> background (default settings), but I don't think this explains it.\n> Here's a graph of the differences and density of behavior:\n>\n> http://blog.amber.org/diagrams/pgsql_copy_803_cvs.png\n>\n> I can provide the raw data. Each COPY was 500 rows. Note that fsync\n> is turned off here. Maybe it'd be more stable with it turned on?\n>\n> Chris\n\n\n", "msg_date": "Tue, 19 Jul 2005 17:33:21 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/19/05, PFC <[email protected]> wrote:\n> \n> What happens if, say at iteration 6000 (a bit after the mess starts), you\n> pause it for a few minutes and resume. Will it restart with a plateau like\n> at the beginning of the test ? or not ?\n\nNot sure... my benchmark is designed to represent what the database\nwill do under \"typical\" circumstances, and unfortunately these are\ntypical for the application. However, I can see about adding some\ndelays, though multiple minutes would be absurd in the application. \nPerhaps a 5-10 second day? Would that still be interesting?\n\n> What if, during this pause, you disconnect and reconnect, or restart the\n> postmaster, or vacuum, or analyze ?\n\nWell, I don't have the numbers any more, but restarting the postmaster\nhas no effect, other than the first few hundreds COPYs are worse than\nanything (3-4x slower), but then it goes back to following the trend\nline. The data in the chart for v8.0.3 includes running pg_autovacuum\n(5 minutes).\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 11:44:00 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On 7/19/05, PFC <[email protected]> wrote:\n>> What happens if, say at iteration 6000 (a bit after the mess starts), you\n>> pause it for a few minutes and resume. Will it restart with a plateau like\n>> at the beginning of the test ? or not ?\n\n> Not sure... my benchmark is designed to represent what the database\n> will do under \"typical\" circumstances, and unfortunately these are\n> typical for the application. However, I can see about adding some\n> delays, though multiple minutes would be absurd in the application. \n> Perhaps a 5-10 second day? Would that still be interesting?\n\nI think PFC's question was not directed towards modeling your\napplication, but about helping us understand what is going wrong\n(so we can fix it). It seemed like a good idea to me.\n\n> Well, I don't have the numbers any more, but restarting the postmaster\n> has no effect, other than the first few hundreds COPYs are worse than\n> anything (3-4x slower), but then it goes back to following the trend\n> line. The data in the chart for v8.0.3 includes running pg_autovacuum\n> (5 minutes).\n\nThe startup transient probably corresponds to the extra I/O needed to\nrepopulate shared buffers with a useful subset of your indexes. 
But\njust to be perfectly clear: you tried this, and after the startup\ntransient it returned to the *original* trend line? In particular,\nthe performance goes into the tank after about 5000 total iterations,\nand not 5000 iterations after the postmaster restart?\n\nI'm suddenly wondering if the performance dropoff corresponds to the\npoint where the indexes have grown large enough to not fit in shared\nbuffers anymore. If I understand correctly, the 5000-iterations mark\ncorresponds to 2.5 million total rows in the table; with 5 indexes\nyou'd have 12.5 million index entries or probably a couple hundred MB\ntotal. If the insertion pattern is sufficiently random that the entire\nindex ranges are \"hot\" then you might not have enough RAM.\n\nAgain, experimenting with different values of shared_buffers seems like\na very worthwhile thing to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 11:57:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "\n\n> I think PFC's question was not directed towards modeling your\n> application, but about helping us understand what is going wrong\n> (so we can fix it).\n\n\tExactly, I was wondering if this delay would allow things to get flushed, \nfor instance, which would give information about the problem (if giving it \na few minutes of rest resumed normal operation, it would mean that some \nbuffer somewhere is getting filled faster than it can be flushed).\n\n\tSo, go ahead with a few minutes even if it's unrealistic, that is not the \npoint, you have to tweak it in various possible manners to understand the \ncauses.\n\n\tAnd instead of a pause, why not just set the duration of your test to \n6000 iterations and run it two times without dropping the test table ?\n\n\tI'm going into wild guesses, but first you should want to know if the \nproblem is because the table is big, or if it's something else. So you run \nthe complete test, stopping a bit after it starts to make a mess, then \ninstead of dumping the table and restarting the test anew, you leave it as \nit is, do something, then run a new test, but on this table which already \nhas data.\n\n\t'something' could be one of those :\n\tdisconnect, reconnect (well you'll have to do that if you run the test \ntwice anyway)\n\tjust wait\n\trestart postgres\n\tunmount and remount the volume with the logs/data on it\n\treboot the machine\n\tanalyze\n\tvacuum\n\tvacuum analyze\n\tcluster\n\tvacuum full\n\treindex\n\tdefrag your files on disk (stopping postgres and copying the database \n from your disk to anotherone and back will do)\n\tor even dump'n'reload the whole database\n\n\tI think useful information can be extracted that way. If one of these \nfixes your problem it'l give hints.\n", "msg_date": "Tue, 19 Jul 2005 18:25:48 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "\n\n> total. 
If the insertion pattern is sufficiently random that the entire\n> index ranges are \"hot\" then you might not have enough RAM.\n\n\tTry doing the test dropping some of your indexes and see if it moves the \nnumber of iterations after which it becomes slow.\n", "msg_date": "Tue, 19 Jul 2005 18:27:21 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "On 7/19/05, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes: \n> > Not sure... my benchmark is designed to represent what the database\n> > will do under \"typical\" circumstances, and unfortunately these are\n> > typical for the application. However, I can see about adding some\n> > delays, though multiple minutes would be absurd in the application.\n> > Perhaps a 5-10 second day? Would that still be interesting?\n> \n> I think PFC's question was not directed towards modeling your\n> application, but about helping us understand what is going wrong\n> (so we can fix it). It seemed like a good idea to me.\n\nOK, I can modify the code to do that, and I will post it on the web.\n\n> The startup transient probably corresponds to the extra I/O needed to\n> repopulate shared buffers with a useful subset of your indexes. But\n> just to be perfectly clear: you tried this, and after the startup\n> transient it returned to the *original* trend line? In particular,\n> the performance goes into the tank after about 5000 total iterations,\n> and not 5000 iterations after the postmaster restart?\n\nThis is correct, the TOTAL is what matters, not the specific instance\ncount. I did an earlier run with larger batch sizes, and it hit at a\nsimilar row count, so it's definately row-count/size related.\n \n> I'm suddenly wondering if the performance dropoff corresponds to the\n> point where the indexes have grown large enough to not fit in shared\n> buffers anymore. If I understand correctly, the 5000-iterations mark\n> corresponds to 2.5 million total rows in the table; with 5 indexes\n> you'd have 12.5 million index entries or probably a couple hundred MB\n> total. If the insertion pattern is sufficiently random that the entire\n> index ranges are \"hot\" then you might not have enough RAM.\n\nThis is entirely possible, currently:\n\nshared_buffers = 1000 \nwork_mem = 65535 \nmaintenance_work_mem = 16384 \nmax_stack_depth = 2048 \n\n> Again, experimenting with different values of shared_buffers seems like\n> a very worthwhile thing to do.\n\nI miss-understood shared_buffers then, as I thought work_mem was where\nindexes were kept. If this is where index manipulations happen, then\nI can up it quite a bit. 
The machine this is running on has 2GB of\nRAM.\n\nMy concern isn't absolute performance, as this is not representative\nhardware, but instead is the evenness of behavior.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 12:30:34 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/19/05, PFC <[email protected]> wrote:\n> \n> \n> > I think PFC's question was not directed towards modeling your\n> > application, but about helping us understand what is going wrong\n> > (so we can fix it).\n> \n> Exactly, I was wondering if this delay would allow things to get flushed,\n> for instance, which would give information about the problem (if giving it\n> a few minutes of rest resumed normal operation, it would mean that some\n> buffer somewhere is getting filled faster than it can be flushed).\n> \n> So, go ahead with a few minutes even if it's unrealistic, that is not the\n> point, you have to tweak it in various possible manners to understand the\n> causes.\n\nTotally understand, and appologize if I sounded dismissive. I\ndefinately appreciate the insight and input.\n \n> And instead of a pause, why not just set the duration of your test to\n> 6000 iterations and run it two times without dropping the test table ?\n\nThis I can do. I'll probably set it for 5,000 for the first, and\nthen start the second. In non-benchmark experience, however, this\ndidn't seem to make much difference.\n\n> I'm going into wild guesses, but first you should want to know if the\n> problem is because the table is big, or if it's something else. So you run\n> the complete test, stopping a bit after it starts to make a mess, then\n> instead of dumping the table and restarting the test anew, you leave it as\n> it is, do something, then run a new test, but on this table which already\n> has data.\n> \n> 'something' could be one of those :\n> disconnect, reconnect (well you'll have to do that if you run the test\n> twice anyway)\n> just wait\n> restart postgres\n> unmount and remount the volume with the logs/data on it\n> reboot the machine\n> analyze\n> vacuum\n> vacuum analyze\n> cluster\n> vacuum full\n> reindex\n> defrag your files on disk (stopping postgres and copying the database\n> from your disk to anotherone and back will do)\n> or even dump'n'reload the whole database\n> \n> I think useful information can be extracted that way. If one of these\n> fixes your problem it'l give hints.\n> \n\nThis could take a while :-)\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 12:34:19 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On 7/19/05, Tom Lane <[email protected]> wrote:\n>> I'm suddenly wondering if the performance dropoff corresponds to the\n>> point where the indexes have grown large enough to not fit in shared\n>> buffers anymore. If I understand correctly, the 5000-iterations mark\n>> corresponds to 2.5 million total rows in the table; with 5 indexes\n>> you'd have 12.5 million index entries or probably a couple hundred MB\n>> total. 
If the insertion pattern is sufficiently random that the entire\n>> index ranges are \"hot\" then you might not have enough RAM.\n\n> This is entirely possible, currently:\n\n> shared_buffers = 1000 \n\nAh-hah --- with that setting, you could be seeing shared-buffer\nthrashing even if only a fraction of the total index ranges need to be\ntouched. I'd try some runs with shared_buffers at 10000, 50000, 100000.\n\nYou might also try strace'ing the backend and see if behavior changes\nnoticeably when the performance tanks.\n\nFWIW I have seen similar behavior while playing with MySQL's sql-bench\ntest --- the default 1000 shared_buffers is not large enough to hold\nthe \"hot\" part of the indexes in some of their insertion tests, and so\nperformance tanks --- you can see this happening in strace because the\nkernel request mix goes from almost all writes to a significant part\nreads. On a pure data insertion benchmark you'd like to see nothing\nbut writes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 12:42:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "As I'm doing this, I'm noticing something *VERY* disturbing to me:\n\npostmaster backend: 20.3% CPU\npsql frontend: 61.2% CPU\n\nWTF? The only thing going through the front end is the COPY command,\nand it's sent to the backend to read from a file?\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 12:54:35 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> As I'm doing this, I'm noticing something *VERY* disturbing to me:\n> postmaster backend: 20.3% CPU\n> psql frontend: 61.2% CPU\n\n> WTF? The only thing going through the front end is the COPY command,\n> and it's sent to the backend to read from a file?\n\nAre you sure the backend is reading directly from the file, and not\nthrough psql? (\\copy, or COPY FROM STDIN, would go through psql.)\n\nBut even so that seems awfully high, considering how little work psql\nhas to do compared to the backend. Has anyone ever profiled psql doing\nthis sort of thing? I know I've spent all my time looking at the\nbackend ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 13:05:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "On 7/19/05, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > As I'm doing this, I'm noticing something *VERY* disturbing to me:\n> > postmaster backend: 20.3% CPU\n> > psql frontend: 61.2% CPU\n> \n> > WTF? The only thing going through the front end is the COPY command,\n> > and it's sent to the backend to read from a file?\n> \n> Are you sure the backend is reading directly from the file, and not\n> through psql? (\\copy, or COPY FROM STDIN, would go through psql.)\n\nThe exact command is:\n\nCOPY test (columnlist...) FROM '/tmp/loadfile';\n \n> But even so that seems awfully high, considering how little work psql\n> has to do compared to the backend. Has anyone ever profiled psql doing\n> this sort of thing? 
I know I've spent all my time looking at the\n> backend ...\n\nLinux 2.6, ext3, data=writeback\n\nIt's flipped now (stil lrunning), and it's 48% postmaster, 36% psql,\nbut anything more than 1-2% seems absurd.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 13:13:05 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n>> Are you sure the backend is reading directly from the file, and not\n>> through psql? (\\copy, or COPY FROM STDIN, would go through psql.)\n\n> The exact command is:\n> COPY test (columnlist...) FROM '/tmp/loadfile';\n\nI tried to replicate this by putting a ton of COPY commands like that\ninto a file and doing \"psql -f file ...\". I don't see more than about\n0.3% CPU going to psql. So there's something funny about your test\nconditions. How *exactly* are you invoking psql?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 14:09:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "On 7/19/05, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> >> Are you sure the backend is reading directly from the file, and not\n> >> through psql? (\\copy, or COPY FROM STDIN, would go through psql.)\n> \n> > The exact command is:\n> > COPY test (columnlist...) FROM '/tmp/loadfile';\n> \n> I tried to replicate this by putting a ton of COPY commands like that\n> into a file and doing \"psql -f file ...\". I don't see more than about\n> 0.3% CPU going to psql. So there's something funny about your test\n> conditions. How *exactly* are you invoking psql?\n\nIt is a subprocess of a Python process, driven using a pexpect\ninterchange. I send the COPY command, then wait for the '=#' to come\nback.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 14:48:54 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "Christopher Petrilli <[email protected]> writes:\n> On 7/19/05, Tom Lane <[email protected]> wrote:\n>> How *exactly* are you invoking psql?\n\n> It is a subprocess of a Python process, driven using a pexpect\n> interchange. I send the COPY command, then wait for the '=#' to come\n> back.\n\nSome weird interaction with pexpect maybe? Try adding \"-n\" (disable\nreadline) to the psql command switches.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 14:53:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions " }, { "msg_contents": "\n> It is a subprocess of a Python process, driven using a pexpect\n> interchange. 
I send the COPY command, then wait for the '=#' to come\n> back.\n\n\tdid you try sending the COPY as a normal query through psycopg ?\n", "msg_date": "Tue, 19 Jul 2005 21:05:27 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/19/05, Tom Lane <[email protected]> wrote:\n> Christopher Petrilli <[email protected]> writes:\n> > On 7/19/05, Tom Lane <[email protected]> wrote:\n> >> How *exactly* are you invoking psql?\n> \n> > It is a subprocess of a Python process, driven using a pexpect\n> > interchange. I send the COPY command, then wait for the '=#' to come\n> > back.\n> \n> Some weird interaction with pexpect maybe? Try adding \"-n\" (disable\n> readline) to the psql command switches.\n\nUm... WOW!\n\n==> pgsql_benchmark_803_bigbuffers10000_noreadline.txt <==\n0 0.0319459438324 0.0263829231262\n1 0.0303978919983 0.0263390541077\n2 0.0306499004364 0.0273139476776\n3 0.0306959152222 0.0270659923553\n4 0.0307791233063 0.0278429985046\n5 0.0306351184845 0.0278820991516\n6 0.0307800769806 0.0335869789124\n7 0.0408310890198 0.0370559692383\n8 0.0371310710907 0.0344209671021\n9 0.0372560024261 0.0334041118622\n\n==> pgsql_benchmark_803_bigbuffers10000.txt <==\n0 0.0352520942688 0.149132013321\n1 0.0320160388947 0.146126031876\n2 0.0307128429413 0.139330863953\n3 0.0306718349457 0.139590978622\n4 0.0307030677795 0.140225172043\n5 0.0306420326233 0.140012979507\n6 0.0307261943817 0.139672994614\n7 0.0307750701904 0.140661001205\n8 0.0307800769806 0.141661167145\n9 0.0306720733643 0.141198158264 \n\nFirst column is iteration, second is \"gen time\" to generate the load\nfile, and 3rd is \"load time\".\n\nIt doesn't stay QUITE that low, but it stays lower... quite a bit. \nWe'll see what happens over time.\n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Tue, 19 Jul 2005 15:22:07 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/19/05, Christopher Petrilli <[email protected]> wrote:\n> On 7/19/05, Tom Lane <[email protected]> wrote:\n> > Christopher Petrilli <[email protected]> writes:\n> > > On 7/19/05, Tom Lane <[email protected]> wrote:\n> > >> How *exactly* are you invoking psql?\n> >\n> > > It is a subprocess of a Python process, driven using a pexpect\n> > > interchange. I send the COPY command, then wait for the '=#' to come\n> > > back.\n> >\n> > Some weird interaction with pexpect maybe? Try adding \"-n\" (disable\n> > readline) to the psql command switches.\n> \n> Um... WOW!\n> It doesn't stay QUITE that low, but it stays lower... quite a bit.\n> We'll see what happens over time.\n\nhere's a look at the difference:\n\nhttp://blog.amber.org/diagrams/pgsql_readline_impact.png\n\nI'm running additional comparisons AFTER clustering and analyzing the tables... \n\nChris\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Wed, 20 Jul 2005 11:52:52 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" }, { "msg_contents": "On 7/19/05, Christopher Petrilli <[email protected]> wrote:\n> It looks like the CVS HEAD is definately \"better,\" but not by a huge\n> amount. 
The only difference is I wasn't running autovacuum in the\n> background (default settings), but I don't think this explains it.\n> Here's a graph of the differences and density of behavior:\n> \n> http://blog.amber.org/diagrams/pgsql_copy_803_cvs.png\n> \n> I can provide the raw data. Each COPY was 500 rows. Note that fsync\n> is turned off here. Maybe it'd be more stable with it turned on?\n\nI've updated this with trend-lines.\n\nChris\n\n-- \n| Christopher Petrilli\n| [email protected]\n", "msg_date": "Wed, 20 Jul 2005 12:16:26 -0400", "msg_from": "Christopher Petrilli <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Impact of checkpoint_segments under continual load conditions" } ]
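For anyone replaying the load pattern from this thread, a minimal SQL sketch of the moving parts is below. The table, index, and file names are illustrative assumptions only, not Christopher's actual schema, and SHOW is simply a way to confirm which settings the running server actually sees after postgresql.conf has been edited and reloaded.

    -- Confirm the active settings before a run:
    SHOW shared_buffers;
    SHOW checkpoint_segments;

    -- A table carrying several indexes, bulk-loaded with a server-side COPY,
    -- approximates the insertion pattern discussed above (all names invented):
    CREATE TABLE events (
        id       bigint,
        src_ip   inet,
        dst_ip   inet,
        evt_time timestamp,
        code     integer
    );
    CREATE INDEX events_src_idx  ON events (src_ip);
    CREATE INDEX events_dst_idx  ON events (dst_ip);
    CREATE INDEX events_time_idx ON events (evt_time);
    CREATE INDEX events_code_idx ON events (code);

    COPY events FROM '/tmp/loadfile';   -- path is only an example

Because the file is read server-side, psql carries nothing but the command text here, which fits with the readline overhead above being purely a frontend effect that goes away with psql -n.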
[ { "msg_contents": "Sigh...\n\nI recently upgraded from 7.4.1 to 8.0.3. The application did not change. I'm\nnow running both database concurrently (on different ports, same machine) just\nso I could verify the problem really exists.\n\nThe application is a custom test application for testing mechanical systems. \nThe runs in question (4 at a time) each generate 16 queries at a time of which\nthe results are sent to the mechanical system which processes the request, which\nprocesses them anywhere from 10 to 120 seconds. The system is capable of\ncompleting between 4 and 8 jobs at once. So, once the system is running, at\nmost there will be 8 queries per run simultaneously.\n\nThe entire database fits into RAM (2Gb), as evidenced by no disk activity and\nrelatively small database size. pg_xlog is on different disks from the db.\n\nThe problem is that on version 8.0.3, once I get 3 or more concurrent runs\ngoing, the query times start tanking (>20 seconds). On 7.4.1, the applications\nhum along with queries typically below .2 seconds on over 5 concurrent runs. \nNeedless to say, 7.4.1 behaves as expected... The only change between runs is\nthe port connecting to. Bot DB's are up at the same time.\n\nFor 8.03, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum\nanalyze every 5 minutes.\n\nThe system is Mandrake Linux running 2.4.22 kernel with dual Intel Xenon CPU\nwith HT enabled. On an 803 run, the context switching is up around 60k. On\n7.4.1, it maxes around 23k and averages < 1k.\n\nI've attached four files. 741 has the query and explain analyze. 803 has the\nquery and explain analyze during loaded and unloaded times. I've also attached\nthe conf files for the two versions running. I've gone through them and don't\nsee any explanation for the problem I'm having.\n\nI'm guessing this is the CS problem that reared it's head last year? I had an\ne-mail exchange in April of last year about this. Any reason this would be\nworse with 8.0.3?\n\nThanks,\nRob\n\n-- \n 13:33:43 up 3 days, 17:08, 9 users, load average: 0.16, 0.59, 0.40\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 21:34:16 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Huge performance problem between 7.4.1 and 8.0.3 - CS issue?" }, { "msg_contents": "Robert Creager wrote:\n\n>For 8.03, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum\n>analyze every 5 minutes.\n> \n>\n\nAre you sure that pg_autovacuum is doing it's job? Meaning are you sure \nit's vacuuming as often as needed? Try to run it with -d2 or so and \nmake sure that it is actually doing the vacuuming needed.\n\n", "msg_date": "Sun, 17 Jul 2005 23:48:20 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 -" }, { "msg_contents": "I am, and it is. It's ANALYZING and VACUUM'ing tables every interval (5 minutes\n- 8.0.3). Right now, for that last 4 hours, I'm not VACUUMing the 7.4.1\ndatabase and it's still clicking along at < .2 second queries. Last year\n(7.4.1), I noticed that it took about a week of heavy activity (for this DB)\nbefore I'd really need a vacuum. That's when I put in the 5 min cron.\n\nWhen I first switched over to 8.0.3, I was still running the cron vacuum. I got\ninto big trouble when I had vacuum's backed up for 6 hours. That's when I\nstarted noticing the query problem, and the CS numbers being high. 
7.4.1\nvacuums every 5 minutes always take < 30 seconds (when I'm watching).\n\nCheers,\nRob\n\nWhen grilled further on (Sun, 17 Jul 2005 23:48:20 -0400),\n\"Matthew T. O'Connor\" <[email protected]> confessed:\n\n> Robert Creager wrote:\n> \n> >For 8.03, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum\n> >analyze every 5 minutes.\n> > \n> >\n> \n> Are you sure that pg_autovacuum is doing it's job? Meaning are you sure \n> it's vacuuming as often as needed? Try to run it with -d2 or so and \n> make sure that it is actually doing the vacuuming needed.\n\n\n-- \n 22:04:10 up 4 days, 1:39, 8 users, load average: 0.15, 0.15, 0.12\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 22:10:50 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> I'm guessing this is the CS problem that reared it's head last year?\n\nThe context swap problem was no worse in 8.0 than in prior versions,\nso that hardly seems like a good explanation. Have you tried reverting\nto the cron-based vacuuming method you used in 7.4?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 00:10:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue? " }, { "msg_contents": "Sounds like either someone is holding a lock on your pg8 db, or maybe \nyou need a vacuum full. No amount of normal vacuuming will fix a table \nthat needs a vacuum full. Although if that were the case I'd expect you \nto have slow queries regardless of the number of concurrent connections. \nMaybe you should check who is holding locks.\n\nDavid\n\nRobert Creager wrote:\n> I am, and it is. It's ANALYZING and VACUUM'ing tables every interval (5 minutes\n> - 8.0.3). Right now, for that last 4 hours, I'm not VACUUMing the 7.4.1\n> database and it's still clicking along at < .2 second queries. Last year\n> (7.4.1), I noticed that it took about a week of heavy activity (for this DB)\n> before I'd really need a vacuum. That's when I put in the 5 min cron.\n> \n> When I first switched over to 8.0.3, I was still running the cron vacuum. I got\n> into big trouble when I had vacuum's backed up for 6 hours. That's when I\n> started noticing the query problem, and the CS numbers being high. 7.4.1\n> vacuums every 5 minutes always take < 30 seconds (when I'm watching).\n> \n> Cheers,\n> Rob\n> \n> When grilled further on (Sun, 17 Jul 2005 23:48:20 -0400),\n> \"Matthew T. O'Connor\" <[email protected]> confessed:\n> \n> \n>>Robert Creager wrote:\n>>\n>>\n>>>For 8.03, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum\n>>>analyze every 5 minutes.\n>>> \n>>>\n>>\n>>Are you sure that pg_autovacuum is doing it's job? Meaning are you sure \n>>it's vacuuming as often as needed? Try to run it with -d2 or so and \n>>make sure that it is actually doing the vacuuming needed.\n> \n> \n> \n\n\n-- \nDavid Mitchell\nSoftware Engineer\nTelogis\n\n", "msg_date": "Mon, 18 Jul 2005 16:17:54 +1200", "msg_from": "David Mitchell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 -" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> I am, and it is. It's ANALYZING and VACUUM'ing tables every interval (5 mi=\n> nutes\n> - 8.0.3). 
Right now, for that last 4 hours, I'm not VACUUMing the 7.4.1\n> database and it's still clicking along at < .2 second queries.\n\nHave you compared physical table sizes? If the autovac daemon did let\nthings get out of hand, you'd need a VACUUM FULL or CLUSTER or TRUNCATE\nto get the table size back down --- plain VACUUM is unlikely to fix it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 00:18:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS " }, { "msg_contents": "Ok, it doesn't look like an autovacuum problem. The only other thing I \ncan think of is that some query is doing a seq scan rather than an index \nscan. Have you turned on the query logging to see what queries are \ntaking so long?\n\nMatt\n\n\nRobert Creager wrote:\n\n>I am, and it is. It's ANALYZING and VACUUM'ing tables every interval (5 minutes\n>- 8.0.3). Right now, for that last 4 hours, I'm not VACUUMing the 7.4.1\n>database and it's still clicking along at < .2 second queries. Last year\n>(7.4.1), I noticed that it took about a week of heavy activity (for this DB)\n>before I'd really need a vacuum. That's when I put in the 5 min cron.\n>\n>When I first switched over to 8.0.3, I was still running the cron vacuum. I got\n>into big trouble when I had vacuum's backed up for 6 hours. That's when I\n>started noticing the query problem, and the CS numbers being high. 7.4.1\n>vacuums every 5 minutes always take < 30 seconds (when I'm watching).\n>\n>Cheers,\n>Rob\n>\n>When grilled further on (Sun, 17 Jul 2005 23:48:20 -0400),\n>\"Matthew T. O'Connor\" <[email protected]> confessed:\n>\n> \n>\n>>Robert Creager wrote:\n>>\n>> \n>>\n>>>For 8.03, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum\n>>>analyze every 5 minutes.\n>>> \n>>>\n>>> \n>>>\n>>Are you sure that pg_autovacuum is doing it's job? Meaning are you sure \n>>it's vacuuming as often as needed? Try to run it with -d2 or so and \n>>make sure that it is actually doing the vacuuming needed.\n>> \n>>\n>\n>\n> \n>\n\n", "msg_date": "Mon, 18 Jul 2005 00:18:43 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 -" }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 00:18:43 -0400),\n\"Matthew T. O'Connor\" <[email protected]> confessed:\n\n> Have you turned on the query logging to see what queries are \n> taking so long?\n> \n\nYeah. In the original message is a typical query. One from 741 and the other\non 803. On 803, an explain analyze is done twice. Once during the problem,\nonce when the system is idle. On 741, the query behaves the same no matter\nwhat...\n\nCheers,\nRob\n\n-- \n 22:31:18 up 4 days, 2:06, 8 users, load average: 0.25, 0.18, 0.11\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 22:32:59 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 16:17:54 +1200),\nDavid Mitchell <[email protected]> confessed:\n\n> Maybe you should check who is holding locks.\n\nHmmm... The only difference is how the vacuum is run. 
One by autovacuum, one\nby cron (vacuum analyze every 5 minutes).\n\nCheers,\nRob\n\n-- \n 23:01:44 up 4 days, 2:36, 6 users, load average: 0.27, 0.16, 0.10\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 23:02:08 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "On Sun, 2005-07-17 at 21:34 -0600, Robert Creager wrote:\n> Sigh...\n> \n> I recently upgraded from 7.4.1 to 8.0.3. The application did not change. I'm\n> now running both database concurrently (on different ports, same machine) just\n> so I could verify the problem really exists.\n> \n> The application is a custom test application for testing mechanical systems. \n> The runs in question (4 at a time) each generate 16 queries at a time of which\n> the results are sent to the mechanical system which processes the request, which\n> processes them anywhere from 10 to 120 seconds. The system is capable of\n> completing between 4 and 8 jobs at once. So, once the system is running, at\n> most there will be 8 queries per run simultaneously.\n> \n> The entire database fits into RAM (2Gb), as evidenced by no disk activity and\n> relatively small database size. pg_xlog is on different disks from the db.\n> \n> The problem is that on version 8.0.3, once I get 3 or more concurrent runs\n> going, the query times start tanking (>20 seconds). On 7.4.1, the applications\n> hum along with queries typically below .2 seconds on over 5 concurrent runs. \n> Needless to say, 7.4.1 behaves as expected... The only change between runs is\n> the port connecting to. Bot DB's are up at the same time.\n> \n> For 8.03, pg_autovacuum is running. On 7.4.1, I set up a cron job to vacuum\n> analyze every 5 minutes.\n> \n> The system is Mandrake Linux running 2.4.22 kernel with dual Intel Xenon CPU\n> with HT enabled. On an 803 run, the context switching is up around 60k. On\n> 7.4.1, it maxes around 23k and averages < 1k.\n\nDid you build 8.0.3 yourself, or install it from packages? I've seen in\nthe past where pg would build with the wrong kind of mutexes on some\nmachines, and that would send the CS through the roof. If you did build\nit yourself, check your ./configure logs. If not, try strace.\n\n-jwb\n", "msg_date": "Sun, 17 Jul 2005 22:09:11 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 -" }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 00:10:53 -0400),\nTom Lane <[email protected]> confessed:\n\n> Have you tried reverting\n> to the cron-based vacuuming method you used in 7.4?\n> \n\nI just stopped autovacuum, ran a manual vacuum analyze on 803 (2064 pages\nneeded, 8000000 FSM setting) and re-started the run (with cron vac enabled). \nThe query problem has not showed up yet (1/2 hour). A vacuum on 741 showed 3434\npages needed, 200000 FSM setting.\n\nI'll let it run the night and see if it shows up after a couple of hours. It\nhas run clean for 1 hour prior. 
If this runs 'till morning, I'll re-enable the\nautovacuum, disable the cron and see if it reproduces itself (the slowdown).\n\nCheers,\nRob\n\n-- \n 22:18:40 up 4 days, 1:53, 8 users, load average: 0.10, 0.20, 0.14\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 23:09:12 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "When grilled further on (Sun, 17 Jul 2005 22:09:11 -0700),\n\"Jeffrey W. Baker\" <[email protected]> confessed:\n\n> \n> Did you build 8.0.3 yourself, or install it from packages? I've seen in\n> the past where pg would build with the wrong kind of mutexes on some\n> machines, and that would send the CS through the roof. If you did build\n> it yourself, check your ./configure logs. If not, try strace.\n\nI always build PG from source. I did check the config.log command line\n(./configure) and they were similar enough. The system has not changed between\nbuilding the two versions (if it ain't broke...).\n\nCheers,\nRob\n\n-- \n 23:25:21 up 4 days, 3:00, 6 users, load average: 0.25, 0.15, 0.11\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 23:27:08 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 -\tCS" }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 00:18:30 -0400),\nTom Lane <[email protected]> confessed:\n\n> Robert Creager <[email protected]> writes:\n> > I am, and it is. It's ANALYZING and VACUUM'ing tables every interval (5 mi=\n> > nutes\n> > - 8.0.3). Right now, for that last 4 hours, I'm not VACUUMing the 7.4.1\n> > database and it's still clicking along at < .2 second queries.\n> \n> Have you compared physical table sizes? If the autovac daemon did let\n> things get out of hand, you'd need a VACUUM FULL or CLUSTER or TRUNCATE\n> to get the table size back down --- plain VACUUM is unlikely to fix it.\n\nTable sizes, no. Entire DB size is 45Mb for 803 and 29Mb for 741. Cannot make\na direct comparison between the two as I've run against more machines now with\n803 than 741, so I'd expect it to be larger.\n\nI'm still running relatively clean on 803 with cron vacuum. The CS are jumping\nfrom 100 to 120k, but it's not steady state like it was before, and queries are\nall under 5 seconds (none hitting the logs) and are typically (glancing at test\nruns) still under 1 sec, with some hitting ~2 seconds occasionally.\n\nI've 6 runs going concurrently. Just saw (vmstat 1) a set of 8 seconds where\nthe CS didn't drop below 90k, but right now its at ~300 for over 30 seconds... \nIt's bouncing all over the place, but staying reasonably well behaved overall.\n\nWhoop. Spoke too soon. Just hit the wall. CS at ~80k constant, queries over\n10 seconds and rising (30+ now)... Looking at ps, the vacuum is currently\nrunning. Going back in the logs, the CS and vacuum hit at about the same time.\n\nI'm going to go back to 741 with the same load and see what happens by tomorrow\nmorning... 
I'll change the cron vac to hit the 741 db.\n\nCheers,\nRob\n\n-- \n 23:29:24 up 4 days, 3:04, 6 users, load average: 0.02, 0.07, 0.08\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 23:43:29 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 00:10:53 -0400),\nTom Lane <[email protected]> confessed:\n\n> The context swap problem was no worse in 8.0 than in prior versions,\n> so that hardly seems like a good explanation. Have you tried reverting\n> to the cron-based vacuuming method you used in 7.4?\n> \n\nI've \"vacuum_cost_delay = 10\" in the conf file for 803. hit, miss, dirty and\nlimit are 1, 10, 20 and 200 respectively. Could that be contributing to the\nproblem? I'll know more in an hour or so with 741 running and cron vac and the\nsame load.\n\nCheers,\nRob\n\n-- \n 23:53:53 up 4 days, 3:28, 6 users, load average: 0.11, 0.13, 0.11\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sun, 17 Jul 2005 23:56:30 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "When grilled further on (Sun, 17 Jul 2005 23:43:29 -0600),\nRobert Creager <[email protected]> confessed:\n\n> I've 6 runs going concurrently. Just saw (vmstat 1) a set of 8 seconds where\n> the CS didn't drop below 90k, but right now its at ~300 for over 30 seconds...\n\n> It's bouncing all over the place, but staying reasonably well behaved overall.\n> \n\nAgainst 741 and the same load, CS is steady around 300 with spikes up to 4k, but\nit's only been running for about 15 minutes. All queries are < .2 seconds.\n\nCheers,\nRob\n\n-- \n 00:03:33 up 4 days, 3:38, 6 users, load average: 1.67, 0.98, 0.44\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Mon, 18 Jul 2005 00:07:13 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> Tom Lane <[email protected]> confessed:\n>> The context swap problem was no worse in 8.0 than in prior versions,\n>> so that hardly seems like a good explanation. Have you tried reverting\n>> to the cron-based vacuuming method you used in 7.4?\n\n> I've \"vacuum_cost_delay = 10\" in the conf file for 803.\n\nHmm, did you read this thread?\nhttp://archives.postgresql.org/pgsql-performance/2005-07/msg00088.php\n\nIt's still far from clear what's going on there, but it might be\ninteresting to see if turning off the vacuum delay changes your results\nwith 8.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 09:23:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS " }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 00:10:53 -0400),\nTom Lane <[email protected]> confessed:\n\n> The context swap problem was no worse in 8.0 than in prior versions,\n> so that hardly seems like a good explanation. Have you tried reverting\n> to the cron-based vacuuming method you used in 7.4?\n> \n\nRan 7 hours on 741 with VACUUM ANALYZE every 5 minutes. 
The largest CS I saw\nwas 40k, with an average of 500 (via script which monitors vmstat output).\n\nI've done a VACUUM FULL ANALYZE on 803 and have switched the cron based VACUUM\nANALYZE to 803 also. The tests are now running again.\n\n> Hmm, did you read this thread?\n> http://archives.postgresql.org/pgsql-performance/2005-07/msg00088.php\n\nI just glanced at it. Once I've reproduced (or not) the problem on 803 with the\nVACUUM FULL, I'll turn off the vacuum delay.\n\nCheers,\nRob\n\n-- \n 07:10:06 up 4 days, 10:45, 6 users, load average: 0.28, 0.40, 0.29\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Mon, 18 Jul 2005 07:41:00 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "When grilled further on (Mon, 18 Jul 2005 09:23:11 -0400),\nTom Lane <[email protected]> confessed:\n\n> It's still far from clear what's going on there, but it might be\n> interesting to see if turning off the vacuum delay changes your results\n> with 8.0.\n> \n\nCan that be affected by hupping the server, or do I need a restart?\n\nThanks,\nRob\n\n-- \n 07:46:53 up 4 days, 11:21, 6 users, load average: 0.77, 0.43, 0.27\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Mon, 18 Jul 2005 07:47:44 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> Tom Lane <[email protected]> confessed:\n>> It's still far from clear what's going on there, but it might be\n>> interesting to see if turning off the vacuum delay changes your results\n>> with 8.0.\n\n> Can that be affected by hupping the server, or do I need a restart?\n\nsighup should be fine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 10:03:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS " }, { "msg_contents": "Tom Lane wrote:\n\n>Robert Creager <[email protected]> writes:\n> \n>\n>>I've \"vacuum_cost_delay = 10\" in the conf file for 803.\n>> \n>>\n>\n>Hmm, did you read this thread?\n>http://archives.postgresql.org/pgsql-performance/2005-07/msg00088.php\n>\n>It's still far from clear what's going on there, but it might be\n>interesting to see if turning off the vacuum delay changes your results\n>with 8.0.\n>\n\nWith the contrib autovacuum code if you don't specify vacuum delay \nsettings from the command line, then autovacuum doesn't touch them. \nTherefore (if you aren't specifying them from the command line), on 803, \nthe vacuum delay settings should be the same for a cron issued vacuum \nand an autovacuum issued vacuum. So if the vacuum delay settings are \nthe problem, then it should show up either way.\n\n\nMatt\n\n", "msg_date": "Mon, 18 Jul 2005 10:31:38 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 -" }, { "msg_contents": "\"Matthew T. O'Connor\" <[email protected]> writes:\n> Therefore (if you aren't specifying them from the command line), on 803, \n> the vacuum delay settings should be the same for a cron issued vacuum \n> and an autovacuum issued vacuum. So if the vacuum delay settings are \n> the problem, then it should show up either way.\n\n... 
as indeed it does according to Robert's recent reports. Still\nawaiting the definitive test, but I'm starting to think this is another\ncase of the strange behavior Ian Westmacott exhibited.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 10:53:04 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - " } ]
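Two of the checks suggested in this thread, who is holding locks and whether anything has physically bloated, can be run straight from psql. This is a generic sketch, not tied to Robert's schema; relpages and reltuples are only refreshed by VACUUM or ANALYZE, so treat the sizes as approximate.

    -- Locks currently held or awaited, by backend pid:
    SELECT pid, relation::regclass AS relation, mode, granted
    FROM pg_locks
    ORDER BY granted, pid;

    -- Largest tables and indexes, in 8KB pages, as of the last VACUUM/ANALYZE:
    SELECT relname, relkind, relpages, reltuples
    FROM pg_class
    WHERE relkind IN ('r', 'i')
    ORDER BY relpages DESC
    LIMIT 20;

Comparing the relpages figures between the 7.4.1 and 8.0.3 clusters is a quick way to test the bloat theory before reaching for VACUUM FULL or CLUSTER.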
[ { "msg_contents": "\nIn regards to\nhttp://archives.postgresql.org/pgsql-performance/2005-07/msg00261.php\n\nTom Says:\n> ... as indeed it does according to Robert's recent reports. Still\n> awaiting the definitive test, but I'm starting to think this is another\n> case of the strange behavior Ian Westmacott exhibited.\n\nOk. This morning at around 7:30am I started tests against a freshly VACUUM FULL\nANALYZE 803 database with the vacuum delay on and cron running vacuum analyze\nevery 5 minutes.\n\nAround 8:15 I was starting to receive hits of a few seconds of high CS hits,\nhigher than the previous 7 hour run on 741. I changed the vacuum delay to 0 and\nHUP'ed the server (how can I see the value vacuum_cost_delay run time?). By\n10:30, I had vacuum jobs backed up since 9:20 and the queries were over 75\nseconds.\n\nI'm currently running on 741 as I need to get work done today ;-) I'll restart\nthe 803 db, vacuum full analyze again and next opportunity (maybe tonight),\nstart runs again with cron vacuum and a vacuum_cost_delay of 0, unless I should\ntry something else?\n\nCheers,\nRob\n\n-- \n\nRobert Creager\nAdvisory Software Engineer\nPhone 303.673.2365\nPager 888.912.4458\nFax 303.661.5379\nStorageTek\n", "msg_date": "Mon, 18 Jul 2005 11:49:52 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue?" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> Around 8:15 I was starting to receive hits of a few seconds of high CS hits,\n> higher than the previous 7 hour run on 741. I changed the vacuum delay to 0 and\n> HUP'ed the server (how can I see the value vacuum_cost_delay run\n> time?).\n\nStart a fresh psql session and \"SHOW vacuum_cost_delay\" to verify what\nthe active setting is.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Jul 2005 13:52:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue? " }, { "msg_contents": "On Mon, 18 Jul 2005 13:52:53 -0400\nTom Lane <[email protected]> wrote:\n\n> Start a fresh psql session and \"SHOW vacuum_cost_delay\" to verify what\n> the active setting is.\n\nThanks. It does show 0 for 803 in a session that was up since I thought I had\nHUPed the server with the new value.\n\nThis is leading me to believe that 803 doesn't do very well with VACUUM ANALYZE\nrunning often, at least in my particular application... I will provide a more\ndefinitive statement to that affect, hopefully tonight.\n\nCheers,\nRob\n", "msg_date": "Mon, 18 Jul 2005 12:21:03 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue?" }, { "msg_contents": "On Mon, 18 Jul 2005 13:52:53 -0400\nTom Lane <[email protected]> wrote:\n\n> \n> Start a fresh psql session and \"SHOW vacuum_cost_delay\" to verify what\n> the active setting is.\n> \n\nAlright. Restarted the 803 database. Cron based vacuum analyze is running\nevery 5 minutes. vacuum_cost_delay is 0. The problem showed up after about 1/2\nhour of running. I've got vacuum jobs stacked from the last 35 minutes, with 2\nvacuums running at the same time. CS is around 73k.\n\nWhat do I do now? I can bring the db back to normal and not run any cron based\nvacuum to see if it still happens, but I suspect nothing will happen without the\nvacuum. 
I'll leave it in it's current semi-catatonic state as long as possible\nin case there is something to look at?\n\nCheers,\nRob\n", "msg_date": "Tue, 19 Jul 2005 10:50:14 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue?" }, { "msg_contents": "Robert Creager <[email protected]> writes:\n> Alright. Restarted the 803 database. Cron based vacuum analyze is\n> running every 5 minutes. vacuum_cost_delay is 0. The problem showed\n> up after about 1/2 hour of running. I've got vacuum jobs stacked from\n> the last 35 minutes, with 2 vacuums running at the same time. CS is\n> around 73k.\n\nHmm, I hadn't thought about the possible impact of multiple concurrent\nvacuums. Is the problem caused by that, or has performance already gone\ninto the tank by the time the cron-driven vacuums are taking long enough\nto overlap?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 12:54:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue? " }, { "msg_contents": "On Tue, 19 Jul 2005 12:54:22 -0400\nTom Lane <[email protected]> wrote:\n\n> Hmm, I hadn't thought about the possible impact of multiple concurrent\n> vacuums. Is the problem caused by that, or has performance already gone\n> into the tank by the time the cron-driven vacuums are taking long enough\n> to overlap?\n\nDon't know just yet. When I run the vacuums manually on a healthy system on\n741, they take less than 30 seconds. I've stopped the cron vacuum and canceled\nall the outstanding vacuum processes, but the 803 is still struggling (1/2 hour\nlater).\n\nI'll re-start the database, vacuum full analyze and restart the runs without the\ncron vacuum running.\n\nCheers,\nRob\n", "msg_date": "Tue, 19 Jul 2005 12:09:51 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue?" }, { "msg_contents": "On Tue, 19 Jul 2005 12:54:22 -0400\nTom Lane <[email protected]> wrote:\n\n> Robert Creager <[email protected]> writes:\n> \n> Hmm, I hadn't thought about the possible impact of multiple concurrent\n> vacuums. Is the problem caused by that, or has performance already gone\n> into the tank by the time the cron-driven vacuums are taking long enough\n> to overlap?\n\nAll statements over 5 seconds are logged. Vacuums are running on the 5 minute mark.\n\nLog file shows the first query starts going bad a 9:32:15 (7 seconds), although the second query start before the first . The first vacuum statement logged shows 1148 seconds completing at 9:54:09, so starting at 9:35. Looks like the vacuum is an innocent bystander of the problem.\n\nThe first problem queries are below. 
Additionally, I've attached 5 minutes (bzipped) of logs starting at the first event below.\n\nJul 19 09:32:15 annette postgres[17029]: [2-1] LOG: duration: 7146.168 ms statement:\nJul 19 09:32:15 annette postgres[17029]: [2-2] ^I SELECT location_id, location_type.name AS type, library, rail\nJul 19 09:32:15 annette postgres[17029]: [2-3] ^I FROM location_lock JOIN location USING( location_id )\nJul 19 09:32:15 annette postgres[17029]: [2-4] ^I JOIN location_type USING( location_type_id )\nJul 19 09:32:15 annette postgres[17029]: [2-5] ^I WHERE test_session_id = '5264'\nJul 19 09:32:20 annette postgres[17092]: [2-1] LOG: duration: 13389.730 ms statement:\nJul 19 09:32:20 annette postgres[17092]: [2-2] ^I SELECT location_type.name AS location_type_name,\nJul 19 09:32:20 annette postgres[17092]: [2-3] ^I library, rail, col, side, row, location_id,\nJul 19 09:32:20 annette postgres[17092]: [2-4] ^I hli_lsm, hli_panel, hli_row, hli_col\nJul 19 09:32:20 annette postgres[17092]: [2-5] ^I FROM location JOIN location_type USING( location_type_id )\nJul 19 09:32:20 annette postgres[17092]: [2-6] ^I JOIN complex USING( library_id )\nJul 19 09:32:20 annette postgres[17092]: [2-7] ^I LEFT OUTER JOIN hli_location USING( location_id )\nJul 19 09:32:20 annette postgres[17092]: [2-8] ^I LEFT OUTER JOIN application USING( application_id )\nJul 19 09:32:20 annette postgres[17092]: [2-9] ^I WHERE complex.complex_id = '13'\nJul 19 09:32:20 annette postgres[17092]: [2-10] ^I AND location_id NOT IN\nJul 19 09:32:20 annette postgres[17092]: [2-11] ^I (SELECT location_id\nJul 19 09:32:20 annette postgres[17092]: [2-12] ^I FROM location_lock)\nJul 19 09:32:20 annette postgres[17092]: [2-13] ^I AND location_id NOT IN\nJul 19 09:32:20 annette postgres[17092]: [2-14] ^I (SELECT location_id\nJul 19 09:32:20 annette postgres[17092]: [2-15] ^I FROM cartridge)\nJul 19 09:32:20 annette postgres[17092]: [2-16] ^IAND (location_type.name ~ 'cell' AND application.name ~ 'hli' AND hli_lsm = 1 AND col BETWEEN -2 AND 2)\nJul 19 09:32:20 annette postgres[17092]: [2-17] ^I\nJul 19 09:32:20 annette postgres[17092]: [2-18] ^I ORDER BY location.usage_count, location.rand LIMIT 1\nJul 19 09:32:20 annette postgres[17092]: [2-19] ^I FOR UPDATE OF location\n\nCheers,\nRob", "msg_date": "Tue, 19 Jul 2005 13:34:17 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS issue?" }, { "msg_contents": "When grilled further on (Tue, 19 Jul 2005 12:09:51 -0600),\nRobert Creager <[email protected]> confessed:\n\n> On Tue, 19 Jul 2005 12:54:22 -0400\n> Tom Lane <[email protected]> wrote:\n> \n> > Hmm, I hadn't thought about the possible impact of multiple concurrent\n> > vacuums. Is the problem caused by that, or has performance already gone\n> > into the tank by the time the cron-driven vacuums are taking long enough\n> > to overlap?\n> \n> \n> I'll re-start the database, vacuum full analyze and restart the runs without\nthe\n> cron vacuum running.\n> \n\nIt took a few hours, but the problem did finally occur with no vacuum running on\n803. CS is averaging 72k. I cannot quantitatively say it took longer to\nreproduce than with the vacuums running, but it seemed like it did.\n\nCan any information be gotten out of this? 
Should I try CVS HEAD?\n\nThoughts?\n\nThanks,\nRob\n\n-- \n 22:41:36 up 6 days, 2:16, 6 users, load average: 0.15, 0.21, 0.30\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Tue, 19 Jul 2005 22:49:08 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" }, { "msg_contents": "I've now backed off to version 7.4.1, which doesn't exhibit the problems that\n8.0.3 does. I guess I'll wait 'till the next version and see if any progress\nhas occurred.\n\nRob\n\nWhen grilled further on (Tue, 19 Jul 2005 22:49:08 -0600),\nRobert Creager <[email protected]> confessed:\n\n> When grilled further on (Tue, 19 Jul 2005 12:09:51 -0600),\n> Robert Creager <[email protected]> confessed:\n> \n> > On Tue, 19 Jul 2005 12:54:22 -0400\n> > Tom Lane <[email protected]> wrote:\n> > \n> > > Hmm, I hadn't thought about the possible impact of multiple concurrent\n> > > vacuums. Is the problem caused by that, or has performance already gone\n> > > into the tank by the time the cron-driven vacuums are taking long enough\n> > > to overlap?\n> > \n> > \n> > I'll re-start the database, vacuum full analyze and restart the runs without\n> the\n> > cron vacuum running.\n> > \n> \n> It took a few hours, but the problem did finally occur with no vacuum running\non\n> 803. CS is averaging 72k. I cannot quantitatively say it took longer to\n> reproduce than with the vacuums running, but it seemed like it did.\n> \n> Can any information be gotten out of this? Should I try CVS HEAD?\n> \n> Thoughts?\n> \n> Thanks,\n> Rob\n> \n> -- \n> 22:41:36 up 6 days, 2:16, 6 users, load average: 0.15, 0.21, 0.30\n> Linux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004\n\n\n-- \n 14:35:32 up 9 days, 18:10, 5 users, load average: 2.17, 2.19, 2.15\nLinux 2.6.5-02 #8 SMP Mon Jul 12 21:34:44 MDT 2004", "msg_date": "Sat, 23 Jul 2005 14:36:50 -0600", "msg_from": "Robert Creager <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Huge performance problem between 7.4.1 and 8.0.3 - CS" } ]
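The first statement the duration log flags is small enough to replay by hand. Capturing its plan during a quiet period and again during an episode, in the spirit of the 741/803 plans attached to the original report, shows whether the plan itself changes or only its execution time; the session id below is simply the value from the log excerpt.

    EXPLAIN ANALYZE
    SELECT location_id, location_type.name AS type, library, rail
    FROM location_lock
    JOIN location USING (location_id)
    JOIN location_type USING (location_type_id)
    WHERE test_session_id = '5264';

If the two plans are identical, the slowdown points at contention rather than at the planner.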
[ { "msg_contents": "Just out of curiosity, does it do any better with the following?\n\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100)\n\n\n>>> \"Dario Pudlo\" <[email protected]> 07/06/05 4:54 PM >>>\n(first at all, sorry for my english)\nHi.\n - Does \"left join\" restrict the order in which the planner must join\ntables? I've read about join, but i'm not sure about left join...\n - If so: Can I avoid this behavior? I mean, make the planner resolve\nthe\nquery, using statistics (uniqueness, data distribution) rather than join\norder.\n\n\tMy query looks like:\n\tSELECT ...\n FROM a, b,\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d on (d.key=a.key)\n WHERE (a.key = b.key) AND (b.column <= 100)\n\n b.column has a lot better selectivity, but planner insist on\nresolve\nfirst c.key = a.key.\n\n\tOf course, I could rewrite something like:\n\tSELECT ...\n FROM\n (SELECT ...\n FROM a,b\n LEFT JOIN d on (d.key=a.key)\n WHERE (b.column <= 100)\n )\n as aa\n LEFT JOIN c ON (c.key = aa.key)\n\n\tbut this is query is constructed by an application with a\n\"multicolumn\"\nfilter. It's dynamic.\n It means that a user could choose to look for \"c.column = 1000\".\nAnd\nalso, combinations of filters.\n\n\tSo, I need the planner to choose the best plan...\n\nI've already change statistics, I clustered tables with cluster, ran\nvacuum\nanalyze, changed work_mem, shared_buffers...\n\nGreetings. TIA.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Mon, 18 Jul 2005 12:57:31 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join and query planner" }, { "msg_contents": "Hi.\n\n> Just out of curiosity, does it do any better with the following?\n>\n> SELECT ...\n\nYes, it does.\n\nBut my query could also be\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n/*new*/ , e\n WHERE (b.column <= 100)\n/*new*/ and (e.key = a.key) and (e.field = 'filter')\n\nbecause it's constructed by an application. I needed to know if, somehow,\nsomeway, I can \"unforce\" join order.\nThe only way to solve it so far is changing application. It must build\nsomething like\n\n SELECT ...\n FROM b\n JOIN (a JOIN e ON (e.key = a.key)) ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100) and (e.field = 'filter')\n\nSupossed that e.field has (should have) better selectivity. But now this\nproblem belongs to programmer's group :-)\n\nThe query, in fact, has more tables to join. I wonder if lowering geqo\nthreshold could do the work...\n\nThank you. Greetings. Long life, little spam and prosperity!\n\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Kevin\nGrittner\nEnviado el: lunes, 18 de julio de 2005 14:58\nPara: [email protected]; [email protected]\nAsunto: Re: [PERFORM] join and query planner\n\n\nJust out of curiosity, does it do any better with the following?\n\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100)\n\n\n>>> snipp\n\n", "msg_date": "Mon, 18 Jul 2005 16:24:19 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join and query planner" } ]
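One knob the thread does not mention but that bears directly on "unforcing" join order is the pair from_collapse_limit / join_collapse_limit, both present since 7.4 with a default of 8. Raising them lets the planner reorder the explicitly written inner joins; it does not lift the ordering constraint that the LEFT JOINs themselves impose on planners of this era. The sketch below reuses the thread's placeholder names (b.column is renamed since column is a reserved word).

    SET from_collapse_limit = 12;
    SET join_collapse_limit = 12;

    EXPLAIN
    SELECT a.key
    FROM a
    JOIN b ON (a.key = b.key)
    JOIN e ON (e.key = a.key)
    LEFT JOIN c ON (c.key = a.key)
    LEFT JOIN d ON (d.key = a.key)
    WHERE b.filter_col <= 100
      AND e.field = 'filter';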
[ { "msg_contents": "Hi,\n\nSuppose I have a table with 4 fields (f1, f2, f3, f4)\nI define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)\n\nI have 3 records\nA, B, C, D (this will be inserted)\nA, B, C, E (this will pass u2, but not u1, thus not inserted)\nA, B, F, D (this will pass u1, but not u2, thus not inserted)\n\nNow, for performance ...\n\nI have tables like this with 500.000 records where there's a new upload \nof approx. 20.000 records.\nIt is only now that we say index u2 to be necessary. So, until now, I \ndid something like insert into ... select f1, f2, f2, max(f4) group by \nf1, f2, f3\nThat is ok ... and also logically ok because of the data definition\n\nI cannot do this with 2 group by's. I tried this on paper and I'm not \nsucceeding.\n\nSo, I must use a function that will check against u1 and u2, and then \ninsert if it is ok.\nI know that such a function is way slower that my insert query.\n\nSo, my question ...\nHow can I keep the same performance, but also with the new index in \nmind ???\n\n\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Mon, 18 Jul 2005 21:29:20 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Insert performance (OT?)" }, { "msg_contents": "nobody ?\n\nOn 18 Jul 2005, at 21:29, Yves Vindevogel wrote:\n\n> Hi,\n>\n> Suppose I have a table with 4 fields (f1, f2, f3, f4)\n> I define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)\n>\n> I have 3 records\n> A, B, C, D (this will be inserted)\n> A, B, C, E (this will pass u2, but not u1, thus not inserted)\n> A, B, F, D (this will pass u1, but not u2, thus not inserted)\n>\n> Now, for performance ...\n>\n> I have tables like this with 500.000 records where there's a new \n> upload of approx. 20.000 records.\n> It is only now that we say index u2 to be necessary. So, until now, I \n> did something like insert into ... select f1, f2, f2, max(f4) group by \n> f1, f2, f3\n> That is ok ... and also logically ok because of the data definition\n>\n> I cannot do this with 2 group by's. I tried this on paper and I'm not \n> succeeding.\n>\n> So, I must use a function that will check against u1 and u2, and then \n> insert if it is ok.\n> I know that such a function is way slower that my insert query.\n>\n> So, my question ...\n> How can I keep the same performance, but also with the new index in \n> mind ???\n>\n>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n> <Pasted Graphic 2.tiff>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. \n> Then you win.\n> Mahatma Ghandi.\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. 
Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 19 Jul 2005 10:35:15 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance (OT?)" }, { "msg_contents": "Yves Vindevogel wrote:\n> Hi,\n> \n> Suppose I have a table with 4 fields (f1, f2, f3, f4)\n> I define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)\n> \n> I have 3 records\n> A, B, C, D (this will be inserted)\n> A, B, C, E (this will pass u2, but not u1, thus not inserted)\n> A, B, F, D (this will pass u1, but not u2, thus not inserted)\n\nAre you saying you want to know whether they will be inserted before you \ntry to do so?\n\n> Now, for performance ...\n> \n> I have tables like this with 500.000 records where there's a new upload \n> of approx. 20.000 records.\n> It is only now that we say index u2 to be necessary. So, until now, I \n> did something like insert into ... select f1, f2, f2, max(f4) group by \n> f1, f2, f3\n> That is ok ... and also logically ok because of the data definition\n\nI'm confused here - assuming you meant \"select f1,f2,f3\", then I don't \nsee how you guarantee the row doesn't alredy exist.\n\n> I cannot do this with 2 group by's. I tried this on paper and I'm not \n> succeeding.\n\nI don't see how you can have two group-by's, or what that would mean if \nyou did.\n\n> So, I must use a function that will check against u1 and u2, and then \n> insert if it is ok.\n> I know that such a function is way slower that my insert query.\n\nSo - you have a table, called something like \"upload\" with 20,000 rows \nand you'd like to know whether it is safe to insert them. Well, it's \neasy enough to identify which ones are duplicates.\n\nSELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f3;\nSELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f4;\n\nAre you saying that deleting these rows and then inserting takes too long?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jul 2005 10:39:07 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance (OT?)" }, { "msg_contents": "Yves Vindevogel wrote:\n >>> So, I must use a function that will check against u1 and u2, and then\n>>> insert if it is ok.\n>>> I know that such a function is way slower that my insert query.\n>>\n>> So - you have a table, called something like \"upload\" with 20,000 rows \n>> and you'd like to know whether it is safe to insert them. Well, it's \n>> easy enough to identify which ones are duplicates.\n>>\n>> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f3;\n>> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f4;\n>>\n> That is a good idea. I can delete the ones that would fail my first \n> unique index this way, and then delete the ones that would fail my \n> second unique index and then upload them.\n> Hmm, why did I not think of that myself.\n\nI've spent a lot of time moving data from one system to another, usually \nhaving to clean it in the process. At 9pm on a Friday, you decide that \non the next job you'll find an efficient way to do it :-)\n\n>> Are you saying that deleting these rows and then inserting takes too \n>> long?\n>>\n> This goes very fast, but not with a function that checks each record one \n> by one.\n\nYou could get away with one query if you converted them to left-joins:\nINSERT INTO ...\nSELECT * FROM upload LEFT JOIN ... 
WHERE f3 IS NULL\nUNION\nSELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL\n\nThe UNION will remove duplicates for you, but this might turn out to be \nslower than two separate queries.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jul 2005 11:51:51 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance (OT?)" }, { "msg_contents": "I will use 2 queries. They run within a function fnUpload(), so I'm \ngoing to keep it simple.\n\n\nOn 19 Jul 2005, at 12:51, Richard Huxton wrote:\n\n> Yves Vindevogel wrote:\n> >>> So, I must use a function that will check against u1 and u2, and \n> then\n>>>> insert if it is ok.\n>>>> I know that such a function is way slower that my insert query.\n>>>\n>>> So - you have a table, called something like \"upload\" with 20,000 \n>>> rows and you'd like to know whether it is safe to insert them. Well, \n>>> it's easy enough to identify which ones are duplicates.\n>>>\n>>> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f3;\n>>> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f4;\n>>>\n>> That is a good idea. I can delete the ones that would fail my first \n>> unique index this way, and then delete the ones that would fail my \n>> second unique index and then upload them.\n>> Hmm, why did I not think of that myself.\n>\n> I've spent a lot of time moving data from one system to another, \n> usually having to clean it in the process. At 9pm on a Friday, you \n> decide that on the next job you'll find an efficient way to do it :-)\n>\n>>> Are you saying that deleting these rows and then inserting takes too \n>>> long?\n>>>\n>> This goes very fast, but not with a function that checks each record \n>> one by one.\n>\n> You could get away with one query if you converted them to left-joins:\n> INSERT INTO ...\n> SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL\n> UNION\n> SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL\n>\n> The UNION will remove duplicates for you, but this might turn out to \n> be slower than two separate queries.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 19 Jul 2005 15:38:36 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Insert performance (OT?)" }, { "msg_contents": "On Tue, 19 Jul 2005 11:51:51 +0100, Richard Huxton <[email protected]>\nwrote:\n>You could get away with one query if you converted them to left-joins:\n>INSERT INTO ...\n>SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL\n>UNION\n>SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL\n\nFor the archives: This won't work. Each of the two SELECTs\neliminates rows violating one of the two constraints but includes rows\nviolating the other constraint. 
After the UNION you are back to\nviolating both constraints :-(\n\nServus\n Manfred\n\n", "msg_date": "Wed, 17 Aug 2005 22:25:09 +0200", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance (OT?)" }, { "msg_contents": "Manfred Koizar wrote:\n> On Tue, 19 Jul 2005 11:51:51 +0100, Richard Huxton <[email protected]>\n> wrote:\n>\n>>You could get away with one query if you converted them to left-joins:\n>>INSERT INTO ...\n>>SELECT * FROM upload LEFT JOIN ... WHERE f3 IS NULL\n>>UNION\n>>SELECT * FROM upload LEFT JOIN ... WHERE f4 IS NULL\n>\n>\n> For the archives: This won't work. Each of the two SELECTs\n> eliminates rows violating one of the two constraints but includes rows\n> violating the other constraint. After the UNION you are back to\n> violating both constraints :-(\n\nCouldn't you use \"INTERSECT\" then? To only get the rows that *both*\nqueries return?\nJohn\n=:->\n\n>\n> Servus\n> Manfred\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>", "msg_date": "Wed, 17 Aug 2005 17:02:28 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Insert performance (OT?)" } ]
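As pointed out just above, a UNION of the two left-join selects re-admits rows that fail the other constraint, so the two checks have to be applied together: a candidate row must survive both anti-joins at once. A minimal sketch of that, reusing the thread's hypothetical upload/main_table layout with unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4):

INSERT INTO main_table (f1, f2, f3, f4)
SELECT u.f1, u.f2, u.f3, u.f4
FROM upload u
WHERE NOT EXISTS (SELECT 1 FROM main_table m        -- would violate u1
                  WHERE m.f1 = u.f1 AND m.f2 = u.f2 AND m.f3 = u.f3)
  AND NOT EXISTS (SELECT 1 FROM main_table m        -- would violate u2
                  WHERE m.f1 = u.f1 AND m.f2 = u.f2 AND m.f4 = u.f4);

An INTERSECT of the "passes u1" and "passes u2" row sets, as suggested above, should be equivalent. Either way, conflicting rows inside the upload batch itself still have to be collapsed first (the GROUP BY ... max(f4) step, or a SELECT DISTINCT ON), because two such rows arriving in the same INSERT would still trip the unique index and abort the whole statement.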
[ { "msg_contents": "You might want to set join_collapse_limit high, and use the JOIN\noperators rather than the comma-separated lists. We generate the WHERE\nclause on the fly, based on user input, and this has worked well for us.\n \n-Kevin\n \n \n>>> \"Dario\" <[email protected]> 07/18/05 2:24 PM >>>\nHi.\n\n> Just out of curiosity, does it do any better with the following?\n>\n> SELECT ...\n\nYes, it does.\n\nBut my query could also be\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n/*new*/ , e\n WHERE (b.column <= 100)\n/*new*/ and (e.key = a.key) and (e.field = 'filter')\n\nbecause it's constructed by an application. I needed to know if,\nsomehow,\nsomeway, I can \"unforce\" join order.\nThe only way to solve it so far is changing application. It must build\nsomething like\n\n SELECT ...\n FROM b\n JOIN (a JOIN e ON (e.key = a.key)) ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100) and (e.field = 'filter')\n\nSupossed that e.field has (should have) better selectivity. But now this\nproblem belongs to programmer's group :-)\n\nThe query, in fact, has more tables to join. I wonder if lowering geqo\nthreshold could do the work...\n\nThank you. Greetings. Long life, little spam and prosperity!\n\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Kevin\nGrittner\nEnviado el: lunes, 18 de julio de 2005 14:58\nPara: [email protected]; [email protected]\nAsunto: Re: [PERFORM] join and query planner\n\n\nJust out of curiosity, does it do any better with the following?\n\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100)\n\n\n>>> snipp\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Mon, 18 Jul 2005 15:47:34 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: join and query planner" }, { "msg_contents": "I'll try that.\n\nLet you know as soon as I can take a look.\n\n\nThank you-\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Kevin\nGrittner\nEnviado el: lunes, 18 de julio de 2005 17:48\nPara: [email protected]; [email protected]\nAsunto: Re: [PERFORM] join and query planner\n\n\nYou might want to set join_collapse_limit high, and use the JOIN\noperators rather than the comma-separated lists. We generate the WHERE\nclause on the fly, based on user input, and this has worked well for us.\n \n-Kevin\n \n \n>>> \"Dario\" <[email protected]> 07/18/05 2:24 PM >>>\nHi.\n\n> Just out of curiosity, does it do any better with the following?\n>\n> SELECT ...\n\nYes, it does.\n\nBut my query could also be\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n/*new*/ , e\n WHERE (b.column <= 100)\n/*new*/ and (e.key = a.key) and (e.field = 'filter')\n\nbecause it's constructed by an application. I needed to know if,\nsomehow,\nsomeway, I can \"unforce\" join order.\nThe only way to solve it so far is changing application. 
It must build\nsomething like\n\n SELECT ...\n FROM b\n JOIN (a JOIN e ON (e.key = a.key)) ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100) and (e.field = 'filter')\n\nSupossed that e.field has (should have) better selectivity. But now this\nproblem belongs to programmer's group :-)\n\nThe query, in fact, has more tables to join. I wonder if lowering geqo\nthreshold could do the work...\n\nThank you. Greetings. Long life, little spam and prosperity!\n\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Kevin\nGrittner\nEnviado el: lunes, 18 de julio de 2005 14:58\nPara: [email protected]; [email protected]\nAsunto: Re: [PERFORM] join and query planner\n\n\nJust out of curiosity, does it do any better with the following?\n\n SELECT ...\n FROM a\n JOIN b ON (a.key = b.key)\n LEFT JOIN c ON (c.key = a.key)\n LEFT JOIN d ON (d.key=a.key)\n WHERE (b.column <= 100)\n\n\n>>> snipp\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Tue, 19 Jul 2005 18:41:18 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join and query planner" } ]
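A sketch of what the suggestion above looks like in practice. join_collapse_limit (and its companion from_collapse_limit) can be raised per session; once they exceed the number of tables involved, the planner reorders explicit JOINs as freely as comma-separated FROM lists, so the generated SQL can use JOIN syntax without accidentally forcing a bad order. Table aliases follow the thread, while the selected and filtered column names are made up:

SET join_collapse_limit = 20;
SET from_collapse_limit = 20;

SELECT a.key, b.amount, e.field
FROM a
JOIN b ON b.key = a.key
JOIN e ON e.key = a.key AND e.field = 'filter'
LEFT JOIN c ON c.key = a.key
LEFT JOIN d ON d.key = a.key
WHERE b.amount <= 100;

Conversely, setting join_collapse_limit = 1 makes the written JOIN order binding, which is the way to force a particular order by hand when the planner's estimates are known to be off.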
[ { "msg_contents": "BTW: thank you for the idea\n\nBegin forwarded message:\n\n> From: Yves Vindevogel <[email protected]>\n> Date: Tue 19 Jul 2005 12:20:34 CEST\n> To: Richard Huxton <[email protected]>\n> Subject: Re: [PERFORM] Insert performance (OT?)\n>\n>\n> On 19 Jul 2005, at 11:39, Richard Huxton wrote:\n>\n>> Yves Vindevogel wrote:\n>>> Hi,\n>>> Suppose I have a table with 4 fields (f1, f2, f3, f4)\n>>> I define 2 unique indexes u1 (f1, f2, f3) and u2 (f1, f2, f4)\n>>> I have 3 records\n>>> A, B, C, D (this will be inserted)\n>>> A, B, C, E (this will pass u2, but not u1, thus not inserted)\n>>> A, B, F, D (this will pass u1, but not u2, thus not inserted)\n>>\n>> Are you saying you want to know whether they will be inserted before \n>> you try to do so?\n>>\n> No, that is not an issue. Problem is that when I use a big query with \n> \"insert into .. select\" and one record is wrong (like above) the \n> complete insert query is abandonned.\n> Therefore, I must do it another way. Or I must be able to say, insert \n> them and dump the rest.\n>\n>>> Now, for performance ...\n>>> I have tables like this with 500.000 records where there's a new \n>>> upload of approx. 20.000 records.\n>>> It is only now that we say index u2 to be necessary. So, until now, \n>>> I did something like insert into ... select f1, f2, f2, max(f4) \n>>> group by f1, f2, f3\n>>> That is ok ... and also logically ok because of the data definition\n>>\n>> I'm confused here - assuming you meant \"select f1,f2,f3\", then I \n>> don't see how you guarantee the row doesn't alredy exist.\n>>\n> No, I meant it with max(f4) because my table has 4 fields. And no, I \n> can't guarantee that, that is exactly my problem.\n> But with the unique indexes, I'm certain that it will not get into my \n> database\n>\n>>> I cannot do this with 2 group by's. I tried this on paper and I'm \n>>> not succeeding.\n>>\n>> I don't see how you can have two group-by's, or what that would mean \n>> if you did.\n>>\n> select from ( select from group by) as foo group by\n>\n>>> So, I must use a function that will check against u1 and u2, and \n>>> then insert if it is ok.\n>>> I know that such a function is way slower that my insert query.\n>>\n>> So - you have a table, called something like \"upload\" with 20,000 \n>> rows and you'd like to know whether it is safe to insert them. Well, \n>> it's easy enough to identify which ones are duplicates.\n>>\n>> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f3;\n>> SELECT * FROM upload JOIN main_table ON u1=f1 AND u2=f2 AND u3=f4;\n>>\n> That is a good idea. I can delete the ones that would fail my first \n> unique index this way, and then delete the ones that would fail my \n> second unique index and then upload them.\n> Hmm, why did I not think of that myself.\n>\n>> Are you saying that deleting these rows and then inserting takes too \n>> long?\n>>\n> This goes very fast, but not with a function that checks each record \n> one by one.\n>\n>> --\n>> Richard Huxton\n>> Archonet Ltd\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>>\n> Met vriendelijke groeten,\n> Bien à vous,\n> Kind regards,\n>\n> Yves Vindevogel\n> Implements\n>\n\n>\n>\n> Mail: [email protected] - Mobile: +32 (478) 80 82 91\n>\n> Kempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n>\n> Web: http://www.implements.be\n>\n> First they ignore you. Then they laugh at you. Then they fight you. 
\n> Then you win.\n> Mahatma Ghandi.\n>\nMet vriendelijke groeten,\nBien à vous,\nKind regards,\n\nYves Vindevogel\nImplements\n\nMail: [email protected] - Mobile: +32 (478) 80 82 91\n\nKempische Steenweg 206 - 3500 Hasselt - Tel-Fax: +32 (11) 43 55 76\n\nWeb: http://www.implements.be\n\nFirst they ignore you. Then they laugh at you. Then they fight you. \nThen you win.\nMahatma Ghandi.", "msg_date": "Tue, 19 Jul 2005 12:21:08 +0200", "msg_from": "Yves Vindevogel <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Insert performance (OT?)" } ]
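The delete-then-insert variant discussed in this thread can also stay entirely set-based. A sketch on the same hypothetical upload/main_table layout, written with EXISTS subqueries rather than DELETE ... USING so it runs on the 7.x/8.0 servers of this era:

BEGIN;
-- drop upload rows that would violate u1 (f1, f2, f3)
DELETE FROM upload
WHERE EXISTS (SELECT 1 FROM main_table m
              WHERE m.f1 = upload.f1 AND m.f2 = upload.f2 AND m.f3 = upload.f3);
-- drop upload rows that would violate u2 (f1, f2, f4)
DELETE FROM upload
WHERE EXISTS (SELECT 1 FROM main_table m
              WHERE m.f1 = upload.f1 AND m.f2 = upload.f2 AND m.f4 = upload.f4);
INSERT INTO main_table (f1, f2, f3, f4)
SELECT f1, f2, f3, f4 FROM upload;
COMMIT;

With 20.000 incoming rows against 500.000 existing ones, both DELETEs can probe main_table through the existing unique indexes, so this should stay close to the speed of the original single INSERT ... SELECT.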
[ { "msg_contents": "Hi,\nI'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.\nRunning scripts locally, it takes about 1.5x longer than mysql, and the load \non the server is only about 21%.\nI upped the sort_mem to 8192 (kB), and shared_buffers and \neffective_cache_size to 65536 (512MB), but neither the timing nor the server \nload have changed at all. FYI, I'm going to be working on data sets in the \norder of GB.\n I think I've gone about as far as I can with google.. can anybody give me \nsome advice on how to improve the raw performance before I start looking at \ncode changes?\n Thanks in advance.\n\nHi,\nI'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.\nRunning scripts locally, it takes about 1.5x longer than mysql, and the load on the server is only about 21%.\nI upped the sort_mem to 8192 (kB), and shared_buffers and effective_cache_size to 65536 (512MB), but neither the timing nor the server load have changed at all. FYI, I'm going to be working on data sets in the order of GB.\n\n \nI think I've gone about as far as I can with google.. can anybody give me some advice on how to improve the raw performance before I start looking at code changes?\n \nThanks in advance.", "msg_date": "Tue, 19 Jul 2005 13:42:42 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for tips" }, { "msg_contents": "Oliver Crosby wrote:\n> Hi,\n> I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.\n> Running scripts locally, it takes about 1.5x longer than mysql, and the \n> load on the server is only about 21%.\n\nWhat queries?\nWhat is your structure?\nHave you tried explain analyze?\nHow many rows in the table?\nWhich OS?\nHow are you testing the speed?\nWhat type of RAID?\n\n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Tue, 19 Jul 2005 10:50:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "Oliver Crosby wrote:\n> Hi,\n> I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.\n> Running scripts locally, it takes about 1.5x longer than mysql, and the\n> load on the server is only about 21%.\n> I upped the sort_mem to 8192 (kB), and shared_buffers and\n> effective_cache_size to 65536 (512MB), but neither the timing nor the\n> server load have changed at all. FYI, I'm going to be working on data\n> sets in the order of GB.\n>\n> I think I've gone about as far as I can with google.. can anybody give\n> me some advice on how to improve the raw performance before I start\n> looking at code changes?\n>\n> Thanks in advance.\n\nFirst, try to post in plain-text rather than html, it is easier to read. :)\n\nSecond, if you can determine what queries are running slow, post the\nresult of EXPLAIN ANALYZE on them, and we can try to help you tune\nthem/postgres to better effect.\n\nJust a blanket question like this is hard to answer. Your new\nshared_buffers are probably *way* too high. They should be at most\naround 10% of ram. 
Since this is a dedicated server effective_cache_size\nshould be probably ~75% of ram, or close to 1.2GB.\n\nThere are quite a few things that you can tweak, so the more information\nyou can give, the more we can help.\n\nFor instance, if you are loading a lot of data into a table, if\npossible, you want to use COPY not INSERT.\nIf you have a lot of indexes and are loading a significant portion, it\nis sometimes faster to drop the indexes, COPY the data in, and then\nrebuild the indexes.\n\nFor tables with a lot of inserts/updates, you need to watch out for\nforeign key constraints. (Generally, you will want an index on both\nsides of the foreign key. One is required, the other is recommended for\nfaster update/deletes).\n\nJohn\n=:->", "msg_date": "Tue, 19 Jul 2005 12:58:20 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "Oliver Crosby wrote:\n> Hi,\n> I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.\n> Running scripts locally, it takes about 1.5x longer than mysql, and the load \n> on the server is only about 21%.\n\nWhat scripts? What do they do?\nOh, and 7.4.8 is the latest release - worth upgrading for the fixes.\n\n> I upped the sort_mem to 8192 (kB), and shared_buffers and \n> effective_cache_size to 65536 (512MB), but neither the timing nor the server \n> load have changed at all.\n\nWell, effective_cache_size is the amount of RAM being used by the OS to \ncache your files, so take a look at top/free and set it based on that \n(pick a steady load).\n\nWhat sort_mem should be will obviously depend how much sorting you do.\n\nDrop shared_buffers down to about 10000 - 20000 (at a guess)\n\nYou may find the following useful\n http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php\nRead the Performance Tuning article, there is an updated one for version \n8 at:\n http://www.powerpostgresql.com/PerfList\n\n > FYI, I'm going to be working on data sets in the\n> order of GB.\n\nFair enough.\n\n> I think I've gone about as far as I can with google.. can anybody give me \n> some advice on how to improve the raw performance before I start looking at \n> code changes?\n\nIdentify what the problem is first of all. Some things to consider:\n - Are there particular queries giving you trouble?\n - Is your load mostly reads or mostly writes?\n - Do you have one user or 100?\n - Are you block-loading data efficiently where necessary?\n - Have you indexed both sides of your foreign-keys where sensible?\n - Are your disks being used effectively?\n - Are your statistics accurate/up to date?\n\nBear in mind that MySQL will probably be quicker for simple queries for \none user and always will be. If you have multiple users running a mix of \nmulti-table joins and updates then PG will have a chance to stretch its \nlegs.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 19 Jul 2005 19:10:20 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "I was hoping to start with tuning postgres to match the hardware, but\nin any case..\n\nThe queries are all simple insert or select statements on single tables.\nEg. 
select x from table where y=?; or insert into table (a, b, c)\nvalues (?, ?, ?);\nIn the case of selects where it's a large table, there's an index on\nthe column being searched, so in terms of the example above, x is\neither a pkey column or other related field, and y is a non-pkey\ncolumn.\n\nI'm not sure what you mean by structure.\n\nI tried explain analyse on the individual queries, but I'm not sure\nwhat can be done to manipulate them when they don't do much.\n\nMy test environment has about 100k - 300k rows in each table, and for\nproduction I'm expecting this to be in the order of 1M+.\n\nThe OS is Redhat Enterprise 3.\n\nI'm using a time command when I call the scripts to get a total\nrunning time from start to finish.\n\nI don't know what we have for RAID, but I suspect it's just a single\n10k or 15k rpm hdd.\n------------------------------------------------------------------------------------------------------------------------\nI'll try your recommendations for shared_buffers and\neffective_cache_size. Thanks John!\n\nWe're trying to improve performance on a log processing script to the\npoint where it can be run as close as possible to realtime. A lot of\nwhat gets inserted depends on what's already in the db, and it runs\nitem-by-item... so unfortunately I can't take advantage of copy.\n\nWe tried dropping indices, copying data in, then rebuilding. It works\ngreat for a bulk import, but the processing script went a lot slower\nwithout them. (Each insert is preceeded by a local cache check and\nthen a db search to see if an ID already exists for an item.)\n\nWe have no foreign keys at the moment. Would they help?\n\n\nOn 7/19/05, Joshua D. Drake <[email protected]> wrote:\n> Oliver Crosby wrote:\n> > Hi,\n> > I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs of ram.\n> > Running scripts locally, it takes about 1.5x longer than mysql, and the\n> > load on the server is only about 21%.\n> \n> What queries?\n> What is your structure?\n> Have you tried explain analyze?\n> How many rows in the table?\n> Which OS?\n> How are you testing the speed?\n> What type of RAID?\n> \n> \n> \n> --\n> Your PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\n> PostgreSQL Replication, Consulting, Custom Programming, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n>\n", "msg_date": "Tue, 19 Jul 2005 14:21:22 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "\n\nWhat programming language are these scripts written in ?\n", "msg_date": "Tue, 19 Jul 2005 20:40:56 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "> Identify what the problem is first of all. Some things to consider:\n> - Are there particular queries giving you trouble?\n> - Is your load mostly reads or mostly writes?\n> - Do you have one user or 100?\n> - Are you block-loading data efficiently where necessary?\n> - Have you indexed both sides of your foreign-keys where sensible?\n> - Are your disks being used effectively?\n> - Are your statistics accurate/up to date?\n\nNo queries in particular appear to be a problem. I think it's just the\noverall speed. 
If any of the configuration settings will help make the\nsimple select queries go faster, that would be ideal.\nThe load is about 50/50 read/write.\nAt the moment it's just one user, but the goal is to have a cluster of\nservers (probably less than a dozen) updating to a central db.\nIndices exist for the fields being searched, but we don't have any foreign keys.\nI'm not too familiar with effective disk usage or statistics...\n", "msg_date": "Tue, 19 Jul 2005 14:41:51 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "> What programming language are these scripts written in ?\n\nperl. using the DBD:Pg interface instead of command-lining it through psql\n", "msg_date": "Tue, 19 Jul 2005 14:44:26 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "Oliver Crosby <[email protected]> writes:\n> The queries are all simple insert or select statements on single tables.\n> Eg. select x from table where y=?; or insert into table (a, b, c)\n> values (?, ?, ?);\n> In the case of selects where it's a large table, there's an index on\n> the column being searched, so in terms of the example above, x is\n> either a pkey column or other related field, and y is a non-pkey\n> column.\n\nIf you're running only a single query at a time (no multiple clients),\nthen this is pretty much the definition of a MySQL-friendly workload;\nI'd have to say we are doing really well if we are only 50% slower.\nPostgres doesn't have any performance advantages until you get into\ncomplex queries or a significant amount of concurrency.\n\nYou could possibly get some improvement if you can re-use prepared plans\nfor the queries; but this will require some fooling with the client code\n(I'm not sure if DBD::Pg even has support for it at all).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 15:01:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips " }, { "msg_contents": "> If you're running only a single query at a time (no multiple clients),\n> then this is pretty much the definition of a MySQL-friendly workload;\n> I'd have to say we are doing really well if we are only 50% slower.\n> Postgres doesn't have any performance advantages until you get into\n> complex queries or a significant amount of concurrency.\n\nThe original port was actually twice as slow. It improved quite a bit\nafter I added transactions and trimmed the schema a bit.\n\n> You could possibly get some improvement if you can re-use prepared plans\n> for the queries; but this will require some fooling with the client code\n> (I'm not sure if DBD::Pg even has support for it at all).\n\nAye. We have prepared statements.\n", "msg_date": "Tue, 19 Jul 2005 15:11:15 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "On Tue, Jul 19, 2005 at 03:01:00PM -0400, Tom Lane wrote:\n> You could possibly get some improvement if you can re-use prepared plans\n> for the queries; but this will require some fooling with the client code\n> (I'm not sure if DBD::Pg even has support for it at all).\n\nNewer versions has, when compiled against the 8.0 client libraries and using\nan 8.0 server (AFAIK).\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 19 Jul 2005 21:12:03 +0200", "msg_from": "\"Steinar H. 
Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "\n\tI can't say wether MySQL is faster for very small queries (like \nSELECT'ing one row based on an indexed field).\n\tThat's why I was asking you about the language...\n\n\tI assume you're using a persistent connection.\n\tFor simple queries like this, PG 8.x seemed to be a lot faster than PG \n7.x. Have you tried 8 ?\n\tI was asking you which language, because for such really small queries \nyou have to take into account the library overhead. For instance, in PHP a \nsimple query can be 10 times slower in Postgres than in MySQL and I \nbelieve it is because php's MySQL driver has seen a lot of optimization \nwhereas the postgres driver has not. Interestingly, the situation is \nreversed with Python : its best postgres driver (psycopg 2) is a lot \nfaster than the MySQL adapter, and faster than both php adapters (a lot \nfaster).\n\n\tThe same query can get (this is from the back of my head):\n\nPHP+Postgres\t\t3-5 ms\nPython+MySQL\t1ms\nPHP+MySQL\t\t0.5 ms\nPython+Postgres\t0.15 ms\n\nAnd yes, I had queries executing in 150 microseconds or so, this includes \ntime to convert the results to native python objects ! This was on a loop \nof 10000 times the same query. But psycopg2 is fast. The overhead for \nparsing a simple query and fetching just a row is really small.\n\nThis is on my Centrino 1.6G laptop.\n", "msg_date": "Tue, 19 Jul 2005 21:14:04 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "Oliver Crosby <[email protected]> writes:\n>> You could possibly get some improvement if you can re-use prepared plans\n>> for the queries; but this will require some fooling with the client code\n>> (I'm not sure if DBD::Pg even has support for it at all).\n\n> Aye. We have prepared statements.\n\nAh, but are they really prepared, or is DBD::Pg faking it by inserting\nparameter values into the query text and then sending the assembled\nstring as a fresh query? It wasn't until about 7.4 that we had adequate\nbackend support to let client libraries support prepared queries\nproperly, and I'm unsure that DBD::Pg has been updated to take advantage\nof that support.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 15:16:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips " }, { "msg_contents": "On Tue, Jul 19, 2005 at 03:16:31PM -0400, Tom Lane wrote:\n> Ah, but are they really prepared, or is DBD::Pg faking it by inserting\n> parameter values into the query text and then sending the assembled\n> string as a fresh query? \n\nThey are really prepared.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 19 Jul 2005 21:36:01 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "\nOn Jul 19, 2005, at 3:36 PM, Steinar H. Gunderson wrote:\n\n> On Tue, Jul 19, 2005 at 03:16:31PM -0400, Tom Lane wrote:\n>\n>> Ah, but are they really prepared, or is DBD::Pg faking it by \n>> inserting\n>> parameter values into the query text and then sending the assembled\n>> string as a fresh query?\n>>\n>\n> They are really prepared.\n\nThat depends on what version you are using. 
Older versions did what \nTom mentioned rather than sending PREPARE & EXECUTE.\n\nNot sure what version that changed in.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 19 Jul 2005 15:53:04 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "On 07/19/2005-02:41PM, Oliver Crosby wrote:\n> \n> No queries in particular appear to be a problem. \n\nThat could mean they are ALL a problem. Let see some EXPLAIN ANAYZE\nresults just to rule it out.\n\n> At the moment it's just one user, \n\nWith 1 user PostgreSQL will probobaly never beat MySQL\nbut with hundreds it will.\n\n", "msg_date": "Tue, 19 Jul 2005 16:30:04 -0400", "msg_from": "Christopher Weimann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "On Jul 19, 2005, at 3:01 PM, Tom Lane wrote:\n\n> You could possibly get some improvement if you can re-use prepared \n> plans\n> for the queries; but this will require some fooling with the client \n> code\n> (I'm not sure if DBD::Pg even has support for it at all).\n>\n\nDBD::Pg 1.40+ by default uses server-side prepared statements when \nyou do $dbh->prepare() against an 8.x database server.\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806", "msg_date": "Tue, 26 Jul 2005 11:22:31 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips " } ]
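The "really prepared" question above is easy to try by hand: PREPARE and EXECUTE are plain SQL commands (available since 7.3), so the effect of server-side preparation can be measured from psql regardless of what the driver does. A sketch echoing the thread's select-by-indexed-column pattern, with a made-up table name:

PREPARE find_x (varchar) AS
    SELECT x FROM some_table WHERE y = $1;

EXECUTE find_x('some value');
EXECUTE find_x('another value');

DEALLOCATE find_x;

The plan is built once at PREPARE time and reused by every EXECUTE, which is where the saving on these very small repeated statements comes from; whether DBD::Pg actually sends something like this or just interpolates the values into fresh query text is exactly the driver-version question raised above.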
[ { "msg_contents": "The thread below has the test case that we were able to use to reproduce\nthe issue.\n\n \n\nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n\n \n\nThe last messages on this subject are from April of 2005. Has there\nbeen any successful ways to significantly reduce the impact this has to\nmulti-processing? I haven't been able to find anything showing a\nresolution of some kind.\n\n \n\nWe are seeing this on two of our machines:\n\n \n\nQuad 3.0 GHz XEON with 3GB of memory running PG 7.4.3 with SuSE kernel\n2.4\n\n \n\nDual 2.8 GHz XEON with 2GB of memory running PG 8.0.0 with SuSE kernel\n2.4\n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\nThe thread below has the test case that we were able to use\nto reproduce the issue.\n \nhttp://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n \nThe last messages on this subject are from April of\n2005.  Has there been any successful ways to significantly reduce the\nimpact this has to multi-processing?  I haven’t been able to find anything\nshowing a resolution of some kind.\n \nWe are seeing this on two of our machines:\n \nQuad 3.0 GHz XEON with 3GB of memory running PG 7.4.3 with SuSE\nkernel 2.4\n \nDual 2.8 GHz XEON with 2GB of memory running PG 8.0.0 with SuSE\nkernel 2.4", "msg_date": "Tue, 19 Jul 2005 13:23:25 -0500", "msg_from": "\"Sailer, Denis (YBUSA-CDR)\" <[email protected]>", "msg_from_op": true, "msg_subject": "context-switching issue on Xeon" }, { "msg_contents": "\nFWIW, I'm seeing this with a client at the moment. 40-60k CS per second\non Dual 3.2GHz.\n\nThere are plenty of other issues we're dealing with, but this is \nobviously\ndisconcerting...\n\n\nOn 19 Jul 2005, at 19:23, Sailer, Denis (YBUSA-CDR) wrote:\n\n> The thread below has the test case that we were able to use to \n> reproduce the issue.\n>\n>\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n>\n>\n> The last messages on this subject are from April of 2005. Has \n> there been any successful ways to significantly reduce the impact \n> this has to multi-processing? I haven�t been able to find anything \n> showing a resolution of some kind.\n>\n>\n> We are seeing this on two of our machines:\n>\n>\n> Quad 3.0 GHz XEON with 3GB of memory running PG 7.4.3 with SuSE \n> kernel 2.4\n>\n>\n> Dual 2.8 GHz XEON with 2GB of memory running PG 8.0.0 with SuSE \n> kernel 2.4\n>\n>\n>\n>\n>\n\n", "msg_date": "Tue, 19 Jul 2005 19:38:14 +0100", "msg_from": "David Hodgkinson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: context-switching issue on Xeon" }, { "msg_contents": "\"Sailer, Denis (YBUSA-CDR)\" <[email protected]> writes:\n> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php\n\n> The last messages on this subject are from April of 2005. Has there\n> been any successful ways to significantly reduce the impact this has to\n> multi-processing?\n\nCVS tip should largely fix the problem as far as buffer manager\ncontention goes.\n\n> I haven't been able to find anything showing a\n> resolution of some kind.\n\nLook at the Feb/March threads concerning buffer manager rewrite, clock\nsweep, etc ... eg\nhttp://archives.postgresql.org/pgsql-patches/2005-03/msg00015.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 19 Jul 2005 14:40:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: context-switching issue on Xeon " } ]
[ { "msg_contents": "Hi Oliver,\n\nWe had low resource utilization and poor throughput on inserts of\nthousands of rows within a single database transaction. There were a\nlot of configuration parameters we changed, but the one which helped the\nmost was wal_buffers -- we wound up setting it to 1000. This may be\nhigher than it needs to be, but when we got to something which ran well,\nwe stopped tinkering. The default value clearly caused a bottleneck.\n \nYou might find this page useful:\n \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/annotated_conf_e.html\n \n-Kevin\n \n \n>>> Oliver Crosby <[email protected]> 07/19/05 1:21 PM >>>\nI was hoping to start with tuning postgres to match the hardware, but\nin any case..\n\nThe queries are all simple insert or select statements on single tables.\nEg. select x from table where y=?; or insert into table (a, b, c)\nvalues (?, ?, ?);\nIn the case of selects where it's a large table, there's an index on\nthe column being searched, so in terms of the example above, x is\neither a pkey column or other related field, and y is a non-pkey\ncolumn.\n\nI'm not sure what you mean by structure.\n\nI tried explain analyse on the individual queries, but I'm not sure\nwhat can be done to manipulate them when they don't do much.\n\nMy test environment has about 100k - 300k rows in each table, and for\nproduction I'm expecting this to be in the order of 1M+.\n\nThe OS is Redhat Enterprise 3.\n\nI'm using a time command when I call the scripts to get a total\nrunning time from start to finish.\n\nI don't know what we have for RAID, but I suspect it's just a single\n10k or 15k rpm hdd.\n------------------------------------------------------------------------------------------------------------------------\nI'll try your recommendations for shared_buffers and\neffective_cache_size. Thanks John!\n\nWe're trying to improve performance on a log processing script to the\npoint where it can be run as close as possible to realtime. A lot of\nwhat gets inserted depends on what's already in the db, and it runs\nitem-by-item... so unfortunately I can't take advantage of copy.\n\nWe tried dropping indices, copying data in, then rebuilding. It works\ngreat for a bulk import, but the processing script went a lot slower\nwithout them. (Each insert is preceeded by a local cache check and\nthen a db search to see if an ID already exists for an item.)\n\nWe have no foreign keys at the moment. Would they help?\n\n\nOn 7/19/05, Joshua D. Drake <[email protected]> wrote:\n> Oliver Crosby wrote:\n> > Hi,\n> > I'm running Postgres 7.4.6 on a dedicated server with about 1.5gigs\nof ram.\n> > Running scripts locally, it takes about 1.5x longer than mysql, and\nthe\n> > load on the server is only about 21%.\n> \n> What queries?\n> What is your structure?\n> Have you tried explain analyze?\n> How many rows in the table?\n> Which OS?\n> How are you testing the speed?\n> What type of RAID?\n> \n> \n> \n> --\n> Your PostgreSQL solutions company - Command Prompt, Inc.\n1.800.492.2240\n> PostgreSQL Replication, Consulting, Custom Programming, 24x7 support\n> Managed Services, Shared and Dedicated Hosting\n> Co-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 19 Jul 2005 13:58:20 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" } ]
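For reference, wal_buffers is not a per-session setting: it is read from postgresql.conf at postmaster start, so a restart is needed before a new value takes effect.

SHOW wal_buffers;
-- in postgresql.conf (after editing, restart the server):
-- wal_buffers = 1000

Whether 1000 is the right number is workload-dependent, as noted above; the point is that the default is small enough to become the bottleneck on large single-transaction loads.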
[ { "msg_contents": "> We had low resource utilization and poor throughput on inserts of\n> thousands of rows within a single database transaction. There were a\n> lot of configuration parameters we changed, but the one which helped the\n> most was wal_buffers -- we wound up setting it to 1000. This may be\n> higher than it needs to be, but when we got to something which ran well,\n> we stopped tinkering. The default value clearly caused a bottleneck.\n\nI just tried wal_buffers = 1000, sort_mem at 10% and\neffective_cache_size at 75%.\nThe performance refuses to budge.. I guess that's as good as it'll go?\n", "msg_date": "Tue, 19 Jul 2005 16:08:11 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "On 7/19/05, Oliver Crosby <[email protected]> wrote:\n> > We had low resource utilization and poor throughput on inserts of\n> > thousands of rows within a single database transaction. There were a\n> > lot of configuration parameters we changed, but the one which helped the\n> > most was wal_buffers -- we wound up setting it to 1000. This may be\n> > higher than it needs to be, but when we got to something which ran well,\n> > we stopped tinkering. The default value clearly caused a bottleneck.\n> \n> I just tried wal_buffers = 1000, sort_mem at 10% and\n> effective_cache_size at 75%.\n> The performance refuses to budge.. I guess that's as good as it'll go?\n\nIf it is possible try:\n1) wrapping many inserts into one transaction\n(BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to\nhandle less transactions per second (each your insert is a transaction), it\nmay work faster.\n\n2) If you can do 1, you could go further and use a COPY command which is\nthe fastest way to bulk-load a database.\n\nSometimes I insert data info temporary table, and then do:\nINSERT INTO sometable SELECT * FROM tmp_table;\n(but I do it when I want to do some select, updates, etc on\nthe data before \"commiting\" them to main table; dropping\ntemporary table is much cheaper than vacuuming many-a-row\ntable).\n\n Regards,\n Dawid\n\nPS: Where can I find benchmarks comparing PHP vs Perl vs Python in\nterms of speed of executing prepared statements?\n", "msg_date": "Tue, 19 Jul 2005 22:19:03 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "> If it is possible try:\n> 1) wrapping many inserts into one transaction\n> (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to\n> handle less transactions per second (each your insert is a transaction), it\n> may work faster.\n\nAye, that's what I have it doing right now. The transactions do save a\nHUGE chunk of time. 
(Cuts it down by about 40%).\n\n> 2) If you can do 1, you could go further and use a COPY command which is\n> the fastest way to bulk-load a database.\n\nI don't think I can use COPY in my case because I need to do\nprocessing on a per-line basis, and I need to check if the item I want\nto insert is already there, and if it is, I need to get it's ID so I\ncan use that for further processing.\n", "msg_date": "Tue, 19 Jul 2005 16:28:26 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "\n> PS: Where can I find benchmarks comparing PHP vs Perl vs Python in\n> terms of speed of executing prepared statements?\n\n\tI'm afraid you'll have to do these yourself !\n\n\tAnd, I don't think the Python drivers support real prepared statements \n(the speed of psycopy is really good though).\n\tI don't think PHP either ; they don't even provide a database interface \nto speak of (ie you have to build the query string by hand including \nquoting).\n\n", "msg_date": "Tue, 19 Jul 2005 22:28:44 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "On Tue, 2005-07-19 at 16:28 -0400, Oliver Crosby wrote:\n> > If it is possible try:\n> > 1) wrapping many inserts into one transaction\n> > (BEGIN;INSERT;INSERT;...INSERT;COMMIT;). As PostgreSQL will need to\n> > handle less transactions per second (each your insert is a transaction), it\n> > may work faster.\n> \n> Aye, that's what I have it doing right now. The transactions do save a\n> HUGE chunk of time. (Cuts it down by about 40%).\n> \n> > 2) If you can do 1, you could go further and use a COPY command which is\n> > the fastest way to bulk-load a database.\n> \n> I don't think I can use COPY in my case because I need to do\n> processing on a per-line basis, and I need to check if the item I want\n> to insert is already there, and if it is, I need to get it's ID so I\n> can use that for further processing.\n> \n\nsince triggers work with COPY, you could probably write a trigger that\nlooks for this condition and does the ID processsing you need; you could\nthereby enjoy the enormous speed gain resulting from COPY and maintain\nyour data continuity.\n\nSven\n\n", "msg_date": "Tue, 19 Jul 2005 16:46:01 -0400", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "> since triggers work with COPY, you could probably write a trigger that\n> looks for this condition and does the ID processsing you need; you could\n> thereby enjoy the enormous speed gain resulting from COPY and maintain\n> your data continuity.\n\nSo... (bear with me here.. trying to make sense of this)..\nWith triggers there's a way I can do the parsing I need to on a log\nfile and react to completed events in non-sequential order (you can\nignore that part.. it's just how we piece together different related\nevents) and then have perl/DBD::Pg invoke a copy command (which, from\nwhat I can tell, has to operate on a file...) and the copy command can\nfeed the ID I need back to perl so I can work with it...\nIf that doesn't hurt my brain, then I'm at least kinda confused...\nAnyway. Heading home now. 
I'll think about this more tonight/tomorrow.\n", "msg_date": "Tue, 19 Jul 2005 17:04:04 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "On Tue, 2005-07-19 at 17:04 -0400, Oliver Crosby wrote:\n> > since triggers work with COPY, you could probably write a trigger that\n> > looks for this condition and does the ID processsing you need; you could\n> > thereby enjoy the enormous speed gain resulting from COPY and maintain\n> > your data continuity.\n> \n> So... (bear with me here.. trying to make sense of this)..\n> With triggers there's a way I can do the parsing I need to on a log\n> file and react to completed events in non-sequential order (you can\n> ignore that part.. it's just how we piece together different related\n> events) and then have perl/DBD::Pg invoke a copy command (which, from\n> what I can tell, has to operate on a file...) and the copy command can\n> feed the ID I need back to perl so I can work with it...\n> If that doesn't hurt my brain, then I'm at least kinda confused...\n> Anyway. Heading home now. I'll think about this more tonight/tomorrow.\n> \n\nWell without knowing the specifics of what you are actually trying to\naccomplish I cannot say yes or no to your question. I am not sure from\nwhere this data is coming that you are inserting into the db. However,\nif the scenario is this: a) attempt to insert a row b) if row exists\nalready, grab the ID and do other db selects/inserts/deletes based on\nthat ID, then there is no need to feed this information back to the\nperlscript. Is your perlscript parsing a file and then using the parsed\ninformation to insert rows? If so, how is the ID that is returned used?\nCan you have the trigger use the ID that may be returned to perform\nwhatever it is that your perlscript is trying to accomplish with that\nID?\n\nIt's all kind of vague so my answers may or may not help, but based on\nthe [lack of] specifics you have provided, I fear that is the best\nsuggestion that I can offer at this point.\n\nSven\n\n", "msg_date": "Tue, 19 Jul 2005 17:15:08 -0400", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "\n\tYou could have a program pre-parse your log and put it in a format \nunderstandable by COPY, then load it in a temporary table and write a part \nof your application simply as a plpgsql function, reading from this table \nand doing queries (or a plperl function)...\n\n> So... (bear with me here.. trying to make sense of this)..\n> With triggers there's a way I can do the parsing I need to on a log\n> file and react to completed events in non-sequential order (you can\n> ignore that part.. it's just how we piece together different related\n> events) and then have perl/DBD::Pg invoke a copy command (which, from\n> what I can tell, has to operate on a file...) and the copy command can\n> feed the ID I need back to perl so I can work with it...\n> If that doesn't hurt my brain, then I'm at least kinda confused...\n> Anyway. Heading home now. 
I'll think about this more tonight/tomorrow.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n\n", "msg_date": "Wed, 20 Jul 2005 01:01:56 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Looking for tips" }, { "msg_contents": "Sorry for the lack of specifics...\n\nWe have a file generated as a list of events, one per line. Suppose\nlines 1,2,3,5,7,11,etc were related, then the last one would specify\nthat it's the last event. Gradually this gets assembled by a perl\nscript and when the last event is encountered, it gets inserted into\nthe db. For a given table, let's say it's of the form (a,b,c) where\n'a' is a pkey, 'b' is indexed, and 'c' is other related information.\nThe most common 'b' values are cached locally with the perl script to\nsave us having to query the db. So what we end up having is:\n\nif 'b' exists in cache, use cached 'a' value and continue\nelse if 'b' exists in the db, use the associated 'a' value and continue\nelse add a new line with 'b', return the new 'a' and continue\n\nThe local cache was a huge time saver with mysql. I've tried making a\nplpgsql function that handles everything in one step on the db side,\nbut it didn't show any improvement. Time permitting, I'll try some new\napproaches with changing the scripts and queries, though right now I\nwas just hoping to tune postgresql.conf to work better with the\nhardware available.\n\nThanks to everyone for your help. Very much appreciated.\n", "msg_date": "Tue, 19 Jul 2005 21:50:18 -0400", "msg_from": "Oliver Crosby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" } ]
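The temp-table route suggested above keeps the per-line "does this b already exist" check set-based while still using COPY for the bulk of the work. A rough sketch using the (a, b, c) layout from the thread, where a is the generated key and b is the indexed lookup value; the table name items is made up, and COPY ... FROM STDIN can be used from the client instead of a server-side file:

CREATE TEMP TABLE batch (b text, c text);
COPY batch FROM '/tmp/batch.txt';

-- add only the b values that are genuinely new, collapsing in-batch duplicates
INSERT INTO items (b, c)
SELECT DISTINCT ON (b) b, c
FROM batch
WHERE NOT EXISTS (SELECT 1 FROM items i WHERE i.b = batch.b);

-- hand the ids (old and new) back in one go
SELECT i.a, i.b
FROM items i JOIN batch USING (b);

That last SELECT is what replaces the row-at-a-time "check cache, else query, else insert" round trips; the perl script's local cache can be refreshed from it in a single pass.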
[ { "msg_contents": "I tuned a query last week to obtain acceptable performance.\nHere is my recorded explain analyze results:\n\n-----\nLOG: duration: 826.505 ms statement: explain analyze\n SELECT\n c.id AS contact_id,\n sr.id AS sales_rep_id,\n LTRIM(RTRIM(sr.firstname || ' ' || sr.lastname)) AS sales_rep_name,\n p.id AS partner_id,\n p.company AS partner_company,\n coalesce(LTRIM(RTRIM(c.company)), LTRIM(RTRIM(c.firstname || ' ' || c.lastname)))\n AS contact_company,\n LTRIM(RTRIM(c.city || ' ' || c.state || ' ' || c.postalcode || ' ' || c.country))\n AS contact_location,\n c.phone AS contact_phone,\n c.email AS contact_email,\n co.name AS contact_country,\n TO_CHAR(c.request_status_last_modified, 'mm/dd/yy hh12:mi pm')\n AS request_status_last_modified,\n TO_CHAR(c.request_status_last_modified, 'yyyymmddhh24miss')\n AS rqst_stat_last_mdfd_sortable,\n c.token_id\n FROM\n sales_reps sr\n JOIN partners p ON (sr.id = p.sales_rep_id)\n JOIN contacts c ON (p.id = c.partner_id)\n JOIN countries co ON (LOWER(c.country) = LOWER(co.code))\n JOIN partner_classification pc ON (p.classification_id = pc.id AND pc.classification != 'Sales Rep')\n WHERE\n c.lead_deleted IS NULL\n AND EXISTS\n (\n SELECT\n lr.id\n FROM\n lead_requests lr,\n lead_request_status lrs\n WHERE\n c.id = lr.contact_id AND\n lr.status_id = lrs.id AND\n lrs.is_closed = 0\n )\n ORDER BY\n contact_company, contact_id\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=18266.77..18266.80 rows=11 width=219) (actual time=795.502..795.763 rows=246 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || (c.lastname)::text)))), c.id\n -> Hash Join (cost=18258.48..18266.58 rows=11 width=219) (actual time=747.551..788.095 rows=246 loops=1)\n Hash Cond: (lower((\"outer\".code)::text) = lower((\"inner\".country)::text))\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.040..2.128 rows=242 loops=1)\n -> Hash (cost=18258.45..18258.45 rows=9 width=206) (actual time=746.653..746.653 rows=0 loops=1)\n -> Merge Join (cost=18258.12..18258.45 rows=9 width=206) (actual time=729.412..743.691 rows=246 loops=1)\n Merge Cond: (\"outer\".sales_rep_id = \"inner\".id)\n -> Sort (cost=18255.70..18255.73 rows=9 width=185) (actual time=727.948..728.274 rows=249 loops=1)\n Sort Key: p.sales_rep_id\n -> Merge Join (cost=18255.39..18255.56 rows=9 width=185) (actual time=712.747..723.095 rows=249 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".classification_id)\n -> Sort (cost=1.05..1.05 rows=2 width=10) (actual time=0.192..0.195 rows=2 loops=1)\n Sort Key: pc.id\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.100..0.142 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Sort (cost=18254.35..18254.38 rows=13 width=195) (actual time=712.401..712.675 rows=250 loops=1)\n Sort Key: p.classification_id\n -> Merge Join (cost=0.00..18254.11 rows=13 width=195) (actual time=47.844..705.517 rows=448 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".partner_id)\n -> Index Scan using partners_pkey on partners p (cost=0.00..30.80 rows=395 width=53) (actual time=0.066..5.746 rows=395 loops=1)\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..130358.50 rows=93 width=152) (actual time=0.351..662.576 rows=452 loops=1)\n Filter: ((lead_deleted IS NULL) AND (subplan))\n SubPlan\n -> Nested Loop 
(cost=0.00..6.76 rows=2 width=10) (actual time=0.094..0.094 rows=0 loops=5573)\n Join Filter: (\"outer\".status_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..4.23 rows=2 width=20) (actual time=0.068..0.069 rows=0 loops=5573)\n Index Cond: ($0 = contact_id)\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.030..0.094 rows=4 loops=519)\n Filter: (is_closed = 0::numeric)\n -> Sort (cost=2.42..2.52 rows=39 width=31) (actual time=1.334..1.665 rows=267 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=31) (actual time=0.064..0.533 rows=39 loops=1)\n Total runtime: 798.494 ms\n(34 rows)\n-----\n\nI rebooted the database machine later that night.\nNow, when I run the same query, I get the following\nresults:\n\n-----\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=17415.32..17415.35 rows=11 width=219) (actual time=6880.583..6880.738 rows=246 loops=1)\n Sort Key: COALESCE(ltrim(rtrim((c.company)::text)), ltrim(rtrim((((c.firstname)::text || ' '::text) || (c.lastname)::text)))), c.id\n -> Merge Join (cost=17414.22..17415.13 rows=11 width=219) (actual time=6828.441..6871.894 rows=246 loops=1)\n Merge Cond: (\"outer\".sales_rep_id = \"inner\".id)\n -> Sort (cost=17411.80..17411.83 rows=11 width=198) (actual time=6825.227..6825.652 rows=249 loops=1)\n Sort Key: p.sales_rep_id\n -> Merge Join (cost=17411.42..17411.61 rows=11 width=198) (actual time=6805.894..6818.717 rows=249 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".classification_id)\n -> Sort (cost=1.05..1.05 rows=2 width=10) (actual time=0.788..0.792 rows=2 loops=1)\n Sort Key: pc.id\n -> Seq Scan on partner_classification pc (cost=0.00..1.04 rows=2 width=10) (actual time=0.094..0.554 rows=2 loops=1)\n Filter: ((classification)::text <> 'Sales Rep'::text)\n -> Sort (cost=17410.38..17410.41 rows=15 width=208) (actual time=6804.649..6804.923 rows=250 loops=1)\n Sort Key: p.classification_id\n -> Merge Join (cost=4.42..17410.08 rows=15 width=208) (actual time=62.598..6795.704 rows=448 loops=1)\n Merge Cond: (\"outer\".partner_id = \"inner\".id)\n -> Nested Loop (cost=4.42..130886.19 rows=113 width=165) (actual time=8.807..6712.529 rows=739 loops=1)\n Join Filter: (lower((\"outer\".country)::text) = lower((\"inner\".code)::text))\n -> Index Scan using contacts_partner_id_idx on contacts c (cost=0.00..130206.59 rows=93 width=152) (actual time=0.793..4082.343 rows=739 loops=1)\n Filter: ((lead_deleted IS NULL) AND (subplan))\n SubPlan\n -> Nested Loop (cost=0.00..6.76 rows=2 width=10) (actual time=0.084..0.084 rows=0 loops=37077)\n Join Filter: (\"outer\".status_id = \"inner\".id)\n -> Index Scan using lead_requests_contact_id_idx on lead_requests lr (cost=0.00..4.23 rows=2 width=20) (actual time=0.066..0.066 rows=0 loops=37077)\n Index Cond: ($0 = contact_id)\n -> Seq Scan on lead_request_status lrs (cost=0.00..1.16 rows=8 width=10) (actual time=0.031..0.140 rows=4 loops=1195)\n Filter: (is_closed = 0::numeric)\n -> Materialize (cost=4.42..6.84 rows=242 width=19) (actual time=0.003..0.347 rows=242 loops=739)\n -> Seq Scan on countries co (cost=0.00..4.42 rows=242 width=19) (actual time=0.038..3.162 rows=242 loops=1)\n -> Index Scan using partners_pkey on partners p (cost=0.00..30.80 rows=395 width=53) (actual time=0.062..15.152 rows=787 loops=1)\n -> Sort (cost=2.42..2.52 rows=39 width=31) (actual 
time=1.916..2.723 rows=267 loops=1)\n Sort Key: sr.id\n -> Seq Scan on sales_reps sr (cost=0.00..1.39 rows=39 width=31) (actual time=0.065..0.723 rows=39 loops=1)\n Total runtime: 6886.307 ms\n(34 rows)\n-----\n\nThere is definitely a difference in the query plans.\nI am guessing this difference in the performance decrease.\nHowever, nothing was changed in the postgresql.conf file.\nI may have run something in the psql explain analyze session\na week ago, but I can't figure out what I changed.\n\nSo, the bottom line is this:\n\nWhat do I need to do to get back to the better performance?\nIs it possible to determine what options may have changed from\nthe above query plan differences?\n\nAnd, also,\n\nWhat is the \"Materialize\" query plan item in the second\nquery plan, the slower plan?\n\nIf you need any additional configurations, please let me\nknow.\n\nThank you very much in advance for any pointers you can\nprovide.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Tue, 19 Jul 2005 14:05:57 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "performance decrease after reboot" }, { "msg_contents": "On Tue, 19 Jul 2005, John Mendenhall wrote:\n\n> I tuned a query last week to obtain acceptable performance.\n> Here is my recorded explain analyze results:\n>\n> LOG: duration: 826.505 ms statement: explain analyze\n> [cut for brevity]\n> \n> I rebooted the database machine later that night.\n> Now, when I run the same query, I get the following\n> results:\n> \n> LOG: duration: 6931.701 ms statement: explain analyze\n> [cut for brevity]\n\nI just ran my query again, no changes from yesterday\nand it is back to normal:\n\nLOG: duration: 795.839 ms statement: explain analyze\n\nWhat could have been the problem?\n\nThe major differences in the query plan are as follows:\n\n(1) The one that runs faster uses a Hash Join at the\nvery top of the query plan. It does a Hash Cond on\nthe country and code fields.\n\n(2) The one that runs slower uses a Materialize with\nthe subplan, with no Hash items. The Materialize does\nSeq Scan of the countries table, and above it, a Join\nFilter is run.\n\n(3) The partners_pkey index on the partners table is\nin a different place in the query.\n\nDoes anyone know what would cause the query plan to be\ndifferent like this, for the same server, same query?\nI run vacuum analyze every night. Is this perhaps the\nproblem?\n\nWhat setting do I need to tweak to make sure the faster\nplan is always found?\n\nThanks for any pointers in this dilemma.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Wed, 20 Jul 2005 13:28:32 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "Re: performance decrease after reboot" } ]
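When the same query flips between plans like the two above with no configuration change, the usual suspects are the statistics behind the row estimates (the contacts index scan is estimated at 93 rows but returns several hundred) and, right after a reboot, a cold cache making the first runs look worse than the plan difference alone explains. A hedged diagnostic sketch:

-- refresh the planner's statistics on the tables involved
ANALYZE contacts;
ANALYZE partners;
ANALYZE countries;

-- collect more detail for the column driving the estimate, then re-analyze
ALTER TABLE contacts ALTER COLUMN partner_id SET STATISTICS 100;
ANALYZE contacts;

-- per-session planner toggles make the two plan shapes easy to compare:
SET enable_nestloop = off;
-- EXPLAIN ANALYZE <the query above>;
RESET enable_nestloop;

None of this pins a plan permanently (PostgreSQL has no hints), but it narrows down which estimate is moving between the fast and slow runs.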
[ { "msg_contents": "Hi,\nI have a similar application,\nbut instead of adding new items to the db once at time,\nI retrieve new IDs from a sequence (actually only every 10'000 times) and write a csv file from perl.\nWhen finished, I load all new record in one run with Copy.\n \nhth,\n \nMarc Mamin\n\n________________________________\n\nFrom: [email protected] on behalf of Oliver Crosby\nSent: Wed 7/20/2005 3:50 AM\nTo: PFC\nCc: Sven Willenberger; Dawid Kuroczko; Kevin Grittner; [email protected]; [email protected]\nSubject: Re: [PERFORM] Looking for tips\n\n\n\nSorry for the lack of specifics...\n\nWe have a file generated as a list of events, one per line. Suppose\nlines 1,2,3,5,7,11,etc were related, then the last one would specify\nthat it's the last event. Gradually this gets assembled by a perl\nscript and when the last event is encountered, it gets inserted into\nthe db. For a given table, let's say it's of the form (a,b,c) where\n'a' is a pkey, 'b' is indexed, and 'c' is other related information.\nThe most common 'b' values are cached locally with the perl script to\nsave us having to query the db. So what we end up having is:\n\nif 'b' exists in cache, use cached 'a' value and continue\nelse if 'b' exists in the db, use the associated 'a' value and continue\nelse add a new line with 'b', return the new 'a' and continue\n\nThe local cache was a huge time saver with mysql. I've tried making a\nplpgsql function that handles everything in one step on the db side,\nbut it didn't show any improvement. Time permitting, I'll try some new\napproaches with changing the scripts and queries, though right now I\nwas just hoping to tune postgresql.conf to work better with the\nhardware available.\n\nThanks to everyone for your help. Very much appreciated.\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\n\n\n\n\nRe: [PERFORM] Looking for tips\n\n\n \nHi,\nI have a similar application,\nbut instead of adding new items to the db once at time,\nI retrieve new IDs from a sequence (actually only every 10'000 \ntimes) and write a csv file from perl.\nWhen finished, I load all new record in one run with Copy.\n \nhth,\n \nMarc Mamin\n\n\n\nFrom: \[email protected] on behalf of Oliver \nCrosbySent: Wed 7/20/2005 3:50 AMTo: PFCCc: \nSven Willenberger; Dawid Kuroczko; Kevin Grittner; [email protected]; \[email protected]: Re: [PERFORM] Looking for \ntips\n\nSorry for the lack of specifics...We have a file \ngenerated as a list of events, one per line. Supposelines 1,2,3,5,7,11,etc \nwere related, then the last one would specifythat it's the last event. \nGradually this gets assembled by a perlscript and when the last event is \nencountered, it gets inserted intothe db. For a given table, let's say it's \nof the form (a,b,c) where'a' is a pkey, 'b' is indexed, and 'c' is other \nrelated information.The most common 'b' values are cached locally with the \nperl script tosave us having to query the db. So what we end up having \nis:if 'b' exists in cache, use cached 'a' value and continueelse if \n'b' exists in the db, use the associated 'a' value and continueelse add a \nnew line with 'b', return the new 'a' and continueThe local cache was a \nhuge time saver with mysql. I've tried making aplpgsql function that handles \neverything in one step on the db side,but it didn't show any improvement. 
\nTime permitting, I'll try some newapproaches with changing the scripts and \nqueries, though right now Iwas just hoping to tune postgresql.conf to work \nbetter with thehardware available.Thanks to everyone for your help. \nVery much appreciated.---------------------------(end of \nbroadcast)---------------------------TIP 5: don't forget to increase your \nfree space map settings", "msg_date": "Wed, 20 Jul 2005 11:05:17 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Looking for tips" } ]
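For the record, both halves of that approach can be expressed in plain SQL. The names below (an items table, its sequence items_id_seq, a block of 10,000 ids and a tab-separated file) are invented placeholders, so treat this as a sketch of the pattern rather than the poster's actual code.

-- reserve a block of 10,000 ids in one round trip; the usable range is
-- [returned_value - 9999, returned_value]
SELECT setval('items_id_seq', nextval('items_id_seq') + 9999);

-- after the client has written the rows out, load them in a single pass;
-- COPY ... FROM STDIN is the client-side equivalent of the file form
COPY items (id, payload) FROM '/tmp/new_items.tab';

Loading through one COPY instead of row-at-a-time INSERTs is generally where most of the gain comes from, which matches the experience reported above.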
[ { "msg_contents": "Hi,\n\nI do not under stand the following explain output (pgsql 8.0.3):\n\nexplain analyze\nselect b.e from b, d\nwhere b.r=516081780 and b.c=513652057 and b.e=d.e;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..1220.09 rows=1 width=4) (actual \ntime=0.213..2926.845 rows=324503 loops=1)\n -> Index Scan using b_index on b (cost=0.00..1199.12 rows=1 \nwidth=4) (actual time=0.104..17.418 rows=3293 loops=1)\n Index Cond: (r = 516081780::oid)\n Filter: (c = 513652057::oid)\n -> Index Scan using d_e_index on d (cost=0.00..19.22 rows=140 \nwidth=4) (actual time=0.009..0.380 rows=99 loops=3293)\n Index Cond: (\"outer\".e = d.e)\n Total runtime: 3638.783 ms\n(7 rows)\n\nWhy is the rows estimate for b_index and the nested loop 1? It is \nactually 3293 and 324503.\n\nI did VACUUM ANALYZE before and I also increased the STATISTICS TARGET \non b.e to 500. No change.\n\nHere is the size of the tables:\n\nselect count(oid) from b;\n 3532161\n\nselect count(oid) from b where r=516081780 and c=513652057;\n 3293\n\nselect count(oid) from d;\n 117270\n\n\nRegards,\n\nDirk\n", "msg_date": "Wed, 20 Jul 2005 17:25:09 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizer seems to be way off, why?" }, { "msg_contents": "Dirk Lutzebäck wrote:\n> Hi,\n> \n> I do not under stand the following explain output (pgsql 8.0.3):\n> \n> explain analyze\n> select b.e from b, d\n> where b.r=516081780 and b.c=513652057 and b.e=d.e;\n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------- \n> \n> Nested Loop (cost=0.00..1220.09 rows=1 width=4) (actual \n> time=0.213..2926.845 rows=324503 loops=1)\n> -> Index Scan using b_index on b (cost=0.00..1199.12 rows=1 width=4) \n> (actual time=0.104..17.418 rows=3293 loops=1)\n> Index Cond: (r = 516081780::oid)\n> Filter: (c = 513652057::oid)\n> -> Index Scan using d_e_index on d (cost=0.00..19.22 rows=140 \n> width=4) (actual time=0.009..0.380 rows=99 loops=3293)\n> Index Cond: (\"outer\".e = d.e)\n> Total runtime: 3638.783 ms\n> (7 rows)\n> \n> Why is the rows estimate for b_index and the nested loop 1? It is \n> actually 3293 and 324503.\n\nI'm guessing (and that's all it is) that b.r and b.c have a higher \ncorrelation than the planner is expecting. That is, it expects the \nb.c=... to reduce the number of matching rows much more than it is.\n\nTry a query just on WHERE b.r=516081780 and see if it gets the estimate \nright for that.\n\nIf it's a common query, it might be worth an index on (r,c)\n\n--\n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Wed, 20 Jul 2005 18:01:41 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer seems to be way off, why?" 
}, { "msg_contents": "Richard Huxton wrote:\n> Dirk Lutzeb�ck wrote:\n> \n>> Hi,\n>>\n>> I do not under stand the following explain output (pgsql 8.0.3):\n>>\n>> explain analyze\n>> select b.e from b, d\n>> where b.r=516081780 and b.c=513652057 and b.e=d.e;\n>>\n>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------------- \n>>\n>> Nested Loop (cost=0.00..1220.09 rows=1 width=4) (actual \n>> time=0.213..2926.845 rows=324503 loops=1)\n>> -> Index Scan using b_index on b (cost=0.00..1199.12 rows=1 \n>> width=4) (actual time=0.104..17.418 rows=3293 loops=1)\n>> Index Cond: (r = 516081780::oid)\n>> Filter: (c = 513652057::oid)\n>> -> Index Scan using d_e_index on d (cost=0.00..19.22 rows=140 \n>> width=4) (actual time=0.009..0.380 rows=99 loops=3293)\n>> Index Cond: (\"outer\".e = d.e)\n>> Total runtime: 3638.783 ms\n>> (7 rows)\n>>\n>> Why is the rows estimate for b_index and the nested loop 1? It is \n>> actually 3293 and 324503.\n> \n> \n> I'm guessing (and that's all it is) that b.r and b.c have a higher \n> correlation than the planner is expecting. That is, it expects the \n> b.c=... to reduce the number of matching rows much more than it is.\n> \n> Try a query just on WHERE b.r=516081780 and see if it gets the estimate \n> right for that.\n> \n> If it's a common query, it might be worth an index on (r,c)\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n\nThanks Richard, dropping the join for b.c now gives better estimates (it \nalso uses a different index now) although not accurate (off by factor \n10). This query is embedded in a larger query which now got a 1000 times \nspeed up (!) because I can drop b.c because it is redundant.\n\nThough, why can't the planner see this correlation? I think somebody \nsaid the planner does not know about multiple column correlations, does it?\n\nRegards,\n\nDirk\n\n", "msg_date": "Wed, 20 Jul 2005 21:16:24 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizer seems to be way off, why?" }, { "msg_contents": "Dirk Lutzebäck wrote:\n> Richard Huxton wrote:\n> \n>> Dirk Lutzebäck wrote:\n>>\n>>> Hi,\n>>>\n>>> I do not under stand the following explain output (pgsql 8.0.3):\n>>>\n>>> explain analyze\n>>> select b.e from b, d\n>>> where b.r=516081780 and b.c=513652057 and b.e=d.e;\n>>>\n>>> QUERY PLAN\n>>> ----------------------------------------------------------------------------------------------------------------\n>>>\n>>> Nested Loop (cost=0.00..1220.09 rows=1 width=4) (actual\n>>> time=0.213..2926.845 rows=324503 loops=1)\n>>> -> Index Scan using b_index on b (cost=0.00..1199.12 rows=1\n>>> width=4) (actual time=0.104..17.418 rows=3293 loops=1)\n>>> Index Cond: (r = 516081780::oid)\n>>> Filter: (c = 513652057::oid)\n>>> -> Index Scan using d_e_index on d (cost=0.00..19.22 rows=140\n>>> width=4) (actual time=0.009..0.380 rows=99 loops=3293)\n>>> Index Cond: (\"outer\".e = d.e)\n>>> Total runtime: 3638.783 ms\n>>> (7 rows)\n>>>\n>>> Why is the rows estimate for b_index and the nested loop 1? It is\n>>> actually 3293 and 324503.\n>>\n>>\n>>\n>> I'm guessing (and that's all it is) that b.r and b.c have a higher\n>> correlation than the planner is expecting. That is, it expects the\n>> b.c=... 
to reduce the number of matching rows much more than it is.\n>>\n>> Try a query just on WHERE b.r=516081780 and see if it gets the\n>> estimate right for that.\n>>\n>> If it's a common query, it might be worth an index on (r,c)\n>>\n>> -- \n>> Richard Huxton\n>> Archonet Ltd\n>>\n> \n> Thanks Richard, dropping the join for b.c now gives better estimates (it\n> also uses a different index now) although not accurate (off by factor\n> 10). This query is embedded in a larger query which now got a 1000 times\n> speed up (!) because I can drop b.c because it is redundant.\n\nWell, part of the problem is that the poorly estimated row is not 'b.e'\nbut 'b.r', it expects to only find one row that matches, and instead\nfinds 3293 rows.\n\nNow, that *could* be because it mis-estimates the selectivity of b.r & b.c.\n\nIt actually estimated the join with d approximately correctly. (It\nthought that for each row it would find 140, and it averaged 99).\n\n> \n> Though, why can't the planner see this correlation? I think somebody\n> said the planner does not know about multiple column correlations, does it?\n\nThe planner does not maintain cross-column statistics, so you are\ncorrect. I believe it assumes distributions are independent. So that if\nr=RRRRR is 10% selective, and c=CCCC is 20% selective, the total\nselectivity of r=RRRR AND c=CCCC is 2%. I could be wrong on this, but I\nthink it is approximately correct.\n\nNow if you created the index on b(r,c), then it would have a much better\nidea of how selective that would be. At the very least, it could index\non (r,c) rather than indexing on (r) and filtering by (c).\n\nAlso, if you have very skewed data (where you have 1 value 100k times,\nand 50 values only 10times each), the planner can overestimate the low\nvalues, and underestimate the high one. (It uses random sampling, so it\nkind of depends where the entries are.)\n\nHave you tried increasing the statistics on b.r and or b.c? Do you have\nan index on b.c or just b.r?\n\nTo see what the planner thinks, you might try:\n\nEXPLAIN ANALYZE\nselect count(*) from b where r=516081780;\n\nThat would tell you how selective the planner thinks the r= is.\n> \n> Regards,\n> \n> Dirk\n> \nJohn\n=:->", "msg_date": "Wed, 20 Jul 2005 15:57:51 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer seems to be way off, why?" }, { "msg_contents": "\nJohn A Meinel <[email protected]> writes:\n\n> Now if you created the index on b(r,c), then it would have a much better\n> idea of how selective that would be. At the very least, it could index\n> on (r,c) rather than indexing on (r) and filtering by (c).\n\nThere has been some discussion of adding functionality like this but afaik no\nversion of Postgres actually does this yet.\n\nAdding the index may still help though.\n\n\n\n-- \ngreg\n\n", "msg_date": "21 Jul 2005 12:40:17 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizer seems to be way off, why?" } ]
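Pulling the concrete suggestions from this exchange together, a minimal follow-up, using the poster's own names b, r and c and the literals from the original query, might look like the following. It is a sketch of the advice above, not a tested fix.

-- two-column index, so the planner can use r and c together
CREATE INDEX b_r_c_idx ON b (r, c);

-- higher per-column targets, as was already done for b.e
ALTER TABLE b ALTER COLUMN r SET STATISTICS 500;
ALTER TABLE b ALTER COLUMN c SET STATISTICS 500;
ANALYZE b;

-- the check suggested above: how selective does the planner think each predicate is?
EXPLAIN ANALYZE SELECT count(*) FROM b WHERE r = 516081780;
EXPLAIN ANALYZE SELECT count(*) FROM b WHERE r = 516081780 AND c = 513652057;

Even with accurate single-column statistics the planner still multiplies the two selectivities as if they were independent, so the combined estimate can stay off; the (r, c) index at least lets it fetch exactly the matching rows.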
[ { "msg_contents": "Hello,\nI'm searching for two facts:\nHow much space takes a varchar column if there is no value in it (NULL)?\nHow much space needs a index of an integer column?\nHope I post to the right list and hope anybody can help me.\nThank you\nGreetings\nAchim\n", "msg_date": "Thu, 21 Jul 2005 12:02:18 +0200", "msg_from": "Achim Luber <[email protected]>", "msg_from_op": true, "msg_subject": "Size of empty varchar and size of index" } ]
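No answer is recorded in this thread, so a short note for the archives: a NULL in a varchar column stores no data bytes at all, it is only marked in the row's null bitmap, and a btree index over a single integer column usually works out to a few tens of bytes per row on disk once tuple headers, item pointers and page overhead are included. Rather than trust rules of thumb, the sizes are easy to measure from pg_class; the sketch below uses invented table and index names, and relpages is counted in blocks of 8 KB by default.

-- refresh the page counts first
VACUUM ANALYZE mytable;

SELECT relname, relkind, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('mytable', 'mytable_intcol_idx');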
[ { "msg_contents": "Preferably via JDBC, but by C/C++ if necessary.\n\nStreaming being the operative word.\n\nTips appreciated.\n", "msg_date": "Thu, 21 Jul 2005 11:58:57 -0400", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "What is best way to stream terabytes of data into postgresql?" }, { "msg_contents": "Jeff,\n\n> Streaming being the operative word.\n\nNot sure how much hacking you want to do, but the TelegraphCQ project is \nbased on PostgreSQL:\nhttp://telegraph.cs.berkeley.edu/telegraphcq/v0.2/\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 21 Jul 2005 17:26:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What is best way to stream terabytes of data into postgresql?" } ]
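Neither reply settles on a client API, so purely as a hedge: whatever the driver, the high-throughput path into the server is COPY ... FROM STDIN, which streams rows over the open connection instead of building individual statements. The table and columns below are invented, and at terabyte scale it is usually worth splitting the stream into batches committed separately so one bad row does not roll back everything.

-- the client streams tab-separated rows on the connection after this
-- statement and ends the stream with a line containing only \.
COPY sensor_readings (recorded_at, sensor_id, reading) FROM STDIN;
\.

In psql the same thing is spelled \copy, which reads the file on the client side.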
[ { "msg_contents": " \n\n\n> Subject: [PERFORM] What is best way to stream terabytes of \n> data into postgresql?\n> \n> Preferably via JDBC, but by C/C++ if necessary.\n> \n> Streaming being the operative word.\n> \n> Tips appreciated.\n> \n\nHi,\n\nWe granted our Java Loader to the Bizgres Open Source,\nhttp://www.bizgres.org/assets/BZ_userguide.htm#50413574_pgfId-110126\n\nYou can load from STDIN instead of a file, as long as you prepend the\nstream with the Loader Control file, for example:\n\nfor name in customer orders lineitem partsupp supplier part;do;cat\nTPCH_load_100gb_${name}.ctl /mnt/<remote-host>/TPCH-Data/${name}.tbl.* |\nloader.sh -h localhost -p 10001 -d tpch -t -u mpp; done\n\nYou can also run the loader from a remote host as well, with the \"-h\"\n<host> being the target system with the Postgres database.\n\nIf you have terabytes of data, you might want to set a batch size (-b\nswitch) to commit occasionally.\n\nFeel free to contact me directly if you have questions.\n\nThanks,\n\nFrank\n\nFrank Wosczyna\nSystems Engineer\nGreenplum / Bizgres MPP\nwww.greenplum.com\n\n", "msg_date": "Thu, 21 Jul 2005 14:26:04 -0400", "msg_from": "\"Frank Wosczyna\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: What is best way to stream terabytes of data into" } ]
[ { "msg_contents": "Hello, I have PostgreSQL 8.0.3 running on a \"workstation\" with 768 MB\nof RAM, under FreeBSD. And I have a 47-milion row table:\n\nqnex=# explain select * from log;\n QUERY PLAN \n-----------------------------------------------------------------------\n Seq Scan on log (cost=0.00..1741852.36 rows=47044336 width=180)\n(1 row)\n\n...which is joined with a few smaller ones, like:\n\nqnex=# explain select * from useragents;\n QUERY PLAN \n-------------------------------------------------------------------\n Seq Scan on useragents (cost=0.00..9475.96 rows=364896 width=96)\n(1 row)\n\nshared_buffers = 5000\nrandom_page_cost = 3\nwork_mem = 102400\neffective_cache_size = 60000\n\nNow, if I do a SELECT:\n\nqnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Limit (cost=15912.20..15912.31 rows=1 width=272)\n -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)\n Hash Cond: (\"outer\".useragent_id = \"inner\".useragent_id)\n -> Seq Scan on log (cost=0.00..1741852.36 rows=47044336 width=180)\n -> Hash (cost=9475.96..9475.96 rows=364896 width=96)\n -> Seq Scan on useragents (cost=0.00..9475.96\nrows=364896 width=96)\n(6 rows)\n\nOr:\n\nqnex=# EXPLAIN SELECT * FROM log NATURAL LEFT JOIN useragents LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------------------\n Limit (cost=15912.20..15912.31 rows=1 width=272)\n -> Hash Left Join (cost=15912.20..5328368.96 rows=47044336 width=272)\n Hash Cond: (\"outer\".useragent_id = \"inner\".useragent_id)\n -> Seq Scan on log (cost=0.00..1741852.36 rows=47044336 width=180)\n -> Hash (cost=9475.96..9475.96 rows=364896 width=96)\n -> Seq Scan on useragents (cost=0.00..9475.96\nrows=364896 width=96)\n(6 rows)\n\nTime: 2.688 ms\n\n...the query seems to last forever (its hashing 47 million rows!)\n\nIf I set enable_hashjoin=false:\n\nqnex=# EXPLAIN ANALYZE SELECT * FROM log NATURAL LEFT JOIN useragents LIMIT 1;\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3.07 rows=1 width=272) (actual time=74.214..74.216\nrows=1 loops=1)\n -> Nested Loop Left Join (cost=0.00..144295895.01 rows=47044336\nwidth=272) (actual time=74.204..74.204 rows=1 loops=1)\n -> Seq Scan on log (cost=0.00..1741852.36 rows=47044336\nwidth=180) (actual time=23.270..23.270 rows=1 loops=1)\n -> Index Scan using useragents_pkey on useragents \n(cost=0.00..3.02 rows=1 width=96) (actual time=50.867..50.867 rows=1\nloops=1)\n Index Cond: (\"outer\".useragent_id = useragents.useragent_id)\n Total runtime: 74.483 ms\n\n...which is way faster. 
Of course if I did:\n\nqnex=# EXPLAIN ANALYZE SELECT * FROM log NATURAL LEFT JOIN useragents\nWHERE logid = (SELECT logid FROM log LIMIT 1);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.04..6.09 rows=1 width=272) (actual\ntime=61.403..61.419 rows=1 loops=1)\n InitPlan\n -> Limit (cost=0.00..0.04 rows=1 width=4) (actual\ntime=0.029..0.032 rows=1 loops=1)\n -> Seq Scan on log (cost=0.00..1741852.36 rows=47044336\nwidth=4) (actual time=0.023..0.023 rows=1 loops=1)\n -> Index Scan using log_pkey on log (cost=0.00..3.02 rows=1\nwidth=180) (actual time=61.316..61.319 rows=1 loops=1)\n Index Cond: (logid = $0)\n -> Index Scan using useragents_pkey on useragents \n(cost=0.00..3.02 rows=1 width=96) (actual time=0.036..0.042 rows=1\nloops=1)\n Index Cond: (\"outer\".useragent_id = useragents.useragent_id)\n Total runtime: 61.741 ms\n(9 rows)\n\n...I tried tweaking cpu_*, work_mem, effective_cache and so on, but without\nany luck. 47 milion table is huge compared to useragents (I actually need\nto join the log with 3 similar to useragents tables, and create a view out\nof it). Also tried using LEFT/RIGHT JOINS insead of (inner) JOINs...\nOf course the database is freshly vacuum analyzed, and statistics are\nset at 50...\n\nMy view of the problem is that planner ignores the \"LIMIT\" part. It assumes\nit _needs_ to return all 47 million rows joined with the useragents table, so\nthe hashjoin is the only sane approach. But chances are that unless I'll\nuse LIMIT 200000, the nested loop will be much faster.\n\nAny ideas how to make it work (other than rewriting the query to use\nsubselects, use explicit id-rows, disabling hashjoin completely)?\nOr is this a bug?\n\n Regards,\n Dawid\n", "msg_date": "Fri, 22 Jul 2005 11:10:05 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": true, "msg_subject": "Planner doesn't look at LIMIT?" }, { "msg_contents": "\n\tWhich row do you want ? Do you want 'a row' at random ?\n\tI presume you want the N latest rows ?\n\tIn that case you should use an ORDER BY on an indexed field, the serial \nprimary key will do nicely (ORDER BY id DESC) ; it's indexed so it will \nuse the index and it will fly.\n\n> Any ideas how to make it work (other than rewriting the query to use\n> subselects, use explicit id-rows, disabling hashjoin completely)?\n> Or is this a bug?\n>\n> Regards,\n> Dawid\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n", "msg_date": "Fri, 22 Jul 2005 15:16:57 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT?" }, { "msg_contents": "Dawid Kuroczko <[email protected]> writes:\n> qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;\n\n> Limit (cost=15912.20..15912.31 rows=1 width=272)\n> -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)\n\n> If I set enable_hashjoin=false:\n\n> qnex=# EXPLAIN ANALYZE SELECT * FROM log NATURAL LEFT JOIN useragents LIMIT 1;\n\n> Limit (cost=0.00..3.07 rows=1 width=272) (actual time=74.214..74.216\n> rows=1 loops=1)\n> -> Nested Loop Left Join (cost=0.00..144295895.01 rows=47044336\n> width=272) (actual time=74.204..74.204 rows=1 loops=1)\n\nThis is quite strange. 
The nestloop plan definitely should be preferred\nin the context of the LIMIT, considering that it has far lower estimated\ncost. And it is preferred in simple tests for me. It seems there must\nbe something specific to your installation that's causing the planner to\ngo wrong. Can you develop a self-contained test case that behaves this\nway for you?\n\nI recall we saw a similar complaint a month or two back, but the\ncomplainant never followed up with anything useful for tracking down\nthe problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jul 2005 10:39:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT? " }, { "msg_contents": "Dawid Kuroczko wrote:\n>work_mem = 102400\n\n>...I tried tweaking cpu_*, work_mem, effective_cache and so on, but without\n>any luck.\n\nI'm hoping you didn't tweak it enough! I posted something similar\nthis a while ago, but haven't since got around to figuring out\na useful test case to send to the list.\n\nTry reducing your work_mem down to 1000 or so and things should\nstart doing what you expect.\n\n\n Sam\n", "msg_date": "Fri, 22 Jul 2005 15:51:22 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT?" }, { "msg_contents": "On 7/22/05, Tom Lane <[email protected]> wrote:\n> Dawid Kuroczko <[email protected]> writes:\n> > qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;\n> \n> > Limit (cost=15912.20..15912.31 rows=1 width=272)\n> > -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)\n> \n> This is quite strange. The nestloop plan definitely should be preferred\n> in the context of the LIMIT, considering that it has far lower estimated\n> cost. And it is preferred in simple tests for me. It seems there must\n> be something specific to your installation that's causing the planner to\n> go wrong. Can you develop a self-contained test case that behaves this\n> way for you?\n\nWhy, certainly. I did test it also on Gentoo Linux PostgreSQL 8.0.1 (yeah,\na bit older one), but the behaviour is the same. The test looks like this:\n\n-- First lets make a \"small\" lookup table -- 400000 rows.\nCREATE TABLE lookup (\n lookup_id serial PRIMARY KEY,\n value integer NOT NULL\n);\nINSERT INTO lookup (value) SELECT * FROM generate_series(1, 400000);\nVACUUM ANALYZE lookup;\n-- Then lets make a huge data table...\nCREATE TABLE huge_data (\n huge_data_id serial PRIMARY KEY,\n lookup_id integer NOT NULL\n);\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM lookup;\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM huge_data; -- 800 000\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM huge_data; -- 1 600 000\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM huge_data; -- 3 200 000\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM huge_data; -- 6 400 000\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM huge_data; -- 12 800 000\n-- You may want to put ANALYZE and EXPLAIN between each of these\n-- steps. In my cases, at 12.8 mln rows PostgreSQL seems to go for hashjoin\n-- in each case. 
YMMV, so you may try to push it up to 1024 mln rows.\nINSERT INTO huge_data (lookup_id) SELECT lookup_id FROM huge_data; -- 25 600 000\nANALYZE huge_data;\nEXPLAIN SELECT * FROM huge_data NATURAL JOIN lookup LIMIT 1;\n\nMy EXPLAIN FROM Linux (SMP P-III box), with PostgreSQL 8.0.1, during making\nthis test case:\n\nqnex=# EXPLAIN SELECT * FROM huge_data NATURAL JOIN lookup LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------------------\nLimit (cost=0.00..3.21 rows=1 width=12)\n -> Nested Loop (cost=0.00..19557596.04 rows=6094777 width=12)\n -> Seq Scan on huge_data (cost=0.00..95372.42 rows=6399942 width=8)\n -> Index Scan using lookup_pkey on lookup (cost=0.00..3.02\nrows=1 width=8)\n Index Cond: (\"outer\".lookup_id = lookup.lookup_id)\n(5 rows)\n\nTime: 4,333 ms\nqnex=# INSERT INTO huge_data (lookup_id) SELECT lookup_id FROM\nhuge_data; -- 12 800 000\nINSERT 0 6400000\nTime: 501014,692 ms\nqnex=# ANALYZE huge_data;\nANALYZE\nTime: 4243,453 ms\nqnex=# EXPLAIN SELECT * FROM huge_data NATURAL JOIN lookup LIMIT 1;\n QUERY PLAN\n-------------------------------------------------------------------------------\nLimit (cost=11719.00..11719.09 rows=1 width=12)\n -> Hash Join (cost=11719.00..1212739.73 rows=12800185 width=12)\n Hash Cond: (\"outer\".lookup_id = \"inner\".lookup_id)\n -> Seq Scan on huge_data (cost=0.00..190747.84 rows=12800184 width=8)\n -> Hash (cost=5961.00..5961.00 rows=400000 width=8)\n -> Seq Scan on lookup (cost=0.00..5961.00 rows=400000 width=8)\n(6 rows)\n\n Regards,\n Dawid\n", "msg_date": "Fri, 22 Jul 2005 18:09:37 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner doesn't look at LIMIT?" }, { "msg_contents": "I wrote:\n> Dawid Kuroczko <[email protected]> writes:\n>> qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;\n\n>> Limit (cost=15912.20..15912.31 rows=1 width=272)\n>> -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)\n\n>> If I set enable_hashjoin=false:\n\n>> qnex=# EXPLAIN ANALYZE SELECT * FROM log NATURAL LEFT JOIN useragents LIMIT 1;\n\n>> Limit (cost=0.00..3.07 rows=1 width=272) (actual time=74.214..74.216\n>> rows=1 loops=1)\n>> -> Nested Loop Left Join (cost=0.00..144295895.01 rows=47044336\n>> width=272) (actual time=74.204..74.204 rows=1 loops=1)\n\n> This is quite strange. The nestloop plan definitely should be preferred\n> in the context of the LIMIT, considering that it has far lower estimated\n> cost. And it is preferred in simple tests for me.\n\nAfter a suitable period of contemplating my navel, I figured out\nwhat is going on here: the total costs involved are large enough that\nthe still-fairly-high startup cost of the hash is disregarded by\ncompare_fuzzy_path_costs(), and so the nestloop is discarded as not\nhaving any significant potential advantage in startup time.\n\nI think that this refutes the original scheme of using the same fuzz\nfactor for both startup and total cost comparisons, and therefore\npropose the attached patch.\n\nComments?\n\n\t\t\tregards, tom lane\n\n*** src/backend/optimizer/util/pathnode.c.orig\tFri Jul 15 13:09:25 2005\n--- src/backend/optimizer/util/pathnode.c\tFri Jul 22 12:08:25 2005\n***************\n*** 98,157 ****\n static int\n compare_fuzzy_path_costs(Path *path1, Path *path2, CostSelector criterion)\n {\n- \tCost\t\tfuzz;\n- \n \t/*\n! \t * The fuzz factor is set at one percent of the smaller total_cost,\n! \t * but not less than 0.01 cost units (just in case total cost is\n! 
\t * zero).\n \t *\n \t * XXX does this percentage need to be user-configurable?\n \t */\n- \tfuzz = Min(path1->total_cost, path2->total_cost) * 0.01;\n- \tfuzz = Max(fuzz, 0.01);\n- \n \tif (criterion == STARTUP_COST)\n \t{\n! \t\tif (Abs(path1->startup_cost - path2->startup_cost) > fuzz)\n! \t\t{\n! \t\t\tif (path1->startup_cost < path2->startup_cost)\n! \t\t\t\treturn -1;\n! \t\t\telse\n! \t\t\t\treturn +1;\n! \t\t}\n \n \t\t/*\n \t\t * If paths have the same startup cost (not at all unlikely),\n \t\t * order them by total cost.\n \t\t */\n! \t\tif (Abs(path1->total_cost - path2->total_cost) > fuzz)\n! \t\t{\n! \t\t\tif (path1->total_cost < path2->total_cost)\n! \t\t\t\treturn -1;\n! \t\t\telse\n! \t\t\t\treturn +1;\n! \t\t}\n \t}\n \telse\n \t{\n! \t\tif (Abs(path1->total_cost - path2->total_cost) > fuzz)\n! \t\t{\n! \t\t\tif (path1->total_cost < path2->total_cost)\n! \t\t\t\treturn -1;\n! \t\t\telse\n! \t\t\t\treturn +1;\n! \t\t}\n \n \t\t/*\n \t\t * If paths have the same total cost, order them by startup cost.\n \t\t */\n! \t\tif (Abs(path1->startup_cost - path2->startup_cost) > fuzz)\n! \t\t{\n! \t\t\tif (path1->startup_cost < path2->startup_cost)\n! \t\t\t\treturn -1;\n! \t\t\telse\n! \t\t\t\treturn +1;\n! \t\t}\n \t}\n \treturn 0;\n }\n--- 98,138 ----\n static int\n compare_fuzzy_path_costs(Path *path1, Path *path2, CostSelector criterion)\n {\n \t/*\n! \t * We use a fuzz factor of 1% of the smaller cost.\n \t *\n \t * XXX does this percentage need to be user-configurable?\n \t */\n \tif (criterion == STARTUP_COST)\n \t{\n! \t\tif (path1->startup_cost > path2->startup_cost * 1.01)\n! \t\t\treturn +1;\n! \t\tif (path2->startup_cost > path1->startup_cost * 1.01)\n! \t\t\treturn -1;\n \n \t\t/*\n \t\t * If paths have the same startup cost (not at all unlikely),\n \t\t * order them by total cost.\n \t\t */\n! \t\tif (path1->total_cost > path2->total_cost * 1.01)\n! \t\t\treturn +1;\n! \t\tif (path2->total_cost > path1->total_cost * 1.01)\n! \t\t\treturn -1;\n \t}\n \telse\n \t{\n! \t\tif (path1->total_cost > path2->total_cost * 1.01)\n! \t\t\treturn +1;\n! \t\tif (path2->total_cost > path1->total_cost * 1.01)\n! \t\t\treturn -1;\n \n \t\t/*\n \t\t * If paths have the same total cost, order them by startup cost.\n \t\t */\n! \t\tif (path1->startup_cost > path2->startup_cost * 1.01)\n! \t\t\treturn +1;\n! \t\tif (path2->startup_cost > path1->startup_cost * 1.01)\n! \t\t\treturn -1;\n \t}\n \treturn 0;\n }\n", "msg_date": "Fri, 22 Jul 2005 12:20:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT? " }, { "msg_contents": "On Fri, 2005-07-22 at 12:20 -0400, Tom Lane wrote:\n> I think that this refutes the original scheme of using the same fuzz\n> factor for both startup and total cost comparisons, and therefore\n> propose the attached patch.\n> \n> Comments?\n\nLooks good. I think it explains a few other wierd perf reports also.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Fri, 22 Jul 2005 18:31:58 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner doesn't look at LIMIT?" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Looks good. I think it explains a few other wierd perf reports also.\n\nCould be. I went back to look at Sam Mason's report about three weeks\nago, and it definitely seems to explain his issue. 
The \"fuzzy cost\ncomparison\" logic is new in 8.0 so it hasn't had all that much\ntesting...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 22 Jul 2005 14:06:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner doesn't look at LIMIT? " }, { "msg_contents": "On 7/22/05, Tom Lane <[email protected]> wrote:\n> > This is quite strange. The nestloop plan definitely should be preferred\n> > in the context of the LIMIT, considering that it has far lower estimated\n> > cost. And it is preferred in simple tests for me.\n> \n> After a suitable period of contemplating my navel, I figured out\n> what is going on here: the total costs involved are large enough that\n> the still-fairly-high startup cost of the hash is disregarded by\n> compare_fuzzy_path_costs(), and so the nestloop is discarded as not\n> having any significant potential advantage in startup time.\n> \n> I think that this refutes the original scheme of using the same fuzz\n> factor for both startup and total cost comparisons, and therefore\n> propose the attached patch.\n> \n> Comments?\n\nWorks great!!!\n\nWith LIMIT below 4 000 000 rows (its 47-milion row table) it prefers\nnested loops, then it starts to introduce merge joins.\n\n Regards,\n Dawid\n", "msg_date": "Fri, 22 Jul 2005 22:48:58 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner doesn't look at LIMIT?" }, { "msg_contents": "Tom Lane wrote:\n>Could be. I went back to look at Sam Mason's report about three weeks\n>ago, and it definitely seems to explain his issue.\n\nI've just built a patched version as well and it appears to be doing\nwhat I think is the right thing now. I.e. actually picking the\nplan with the lower cost.\n\nThanks!\n\n\n Sam\n", "msg_date": "Sat, 23 Jul 2005 11:42:35 +0100", "msg_from": "Sam Mason <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Planner doesn't look at LIMIT?" }, { "msg_contents": "I have a case that I though was an example of this issue,\nand that this patch would correct. I applied this patch\nto an 8.0.3 source distribution, but it didn't seem to\nsolve my problem.\n\nIn a nutshell, I have a LIMIT query where the planner\nseems to favor a merge join over a nested loop. 
I've\nsimplified the query as much as possible:\n\n\nitvtrackdata3=> \\d tableA\nTable \"public.tableA\"\n Column | Type | Modifiers \n--------+----------+-----------\n foo | bigint | not null\n bar | smallint | not null\n bap | bigint | not null\n bip | bigint | not null\n bom | bigint | not null\nIndexes:\n \"idx_tableA_bip\" btree (bip) WHERE (bip =\n9000000000000000000::bigint)\n \"idx_tableA_foo\" btree (foo)\n\nitvtrackdata3=> \\d tableB\nTable \"tableB\"\n Column | Type | Modifiers \n---------+----------+-----------\n bim | bigint | not null\n bif | smallint | not null\n baf | smallint | not null\n bof | smallint | not null\n buf | smallint | not null\n foo | bigint | not null\nIndexes:\n \"idx_tableB_bim\" btree (\"bim\", foo)\n\nitvtrackdata3=> set default_statistics_target to 1000;\nSET\nTime: 0.448 ms\nitvtrackdata3=> analyze tableA;\nANALYZE\nTime: 4237.151 ms\nitvtrackdata3=> analyze tableB;\nANALYZE\nTime: 46672.939 ms\nitvtrackdata3=> explain analyze SELECT * FROM tableB NATURAL JOIN tableA\nWHERE bim>=72555896091359 AND bim<72555935412959 AND bim=bap ORDER BY\nbim ASC LIMIT 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=149626.57..252987.71 rows=1 width=50) (actual\ntime=5684.013..5684.013 rows=1 loops=1)\n -> Merge Join (cost=149626.57..252987.71 rows=1 width=50) (actual\ntime=5684.012..5684.012 rows=1 loops=1)\n Merge Cond: ((\"outer\".\"bim\" = \"inner\".\"bap\") AND (\"outer\".foo =\n\"inner\".foo))\n -> Index Scan using idx_tableB_bim on tableB \n(cost=0.00..97391.22 rows=55672 width=24) (actual time=0.017..0.059\nrows=29 loops=1)\n Index Cond: ((\"bim\" >= 72555896091359::bigint) AND (\"bim\"\n< 72555935412959::bigint))\n -> Sort (cost=149626.57..151523.94 rows=758948 width=34)\n(actual time=5099.300..5442.825 rows=560856 loops=1)\n Sort Key: tableA.\"bap\", tableA.foo\n -> Seq Scan on tableA (cost=0.00..47351.48 rows=758948\nwidth=34) (actual time=0.021..1645.204 rows=758948 loops=1)\n Total runtime: 5706.655 ms\n(9 rows)\n\nTime: 5729.984 ms\nitvtrackdata3=> set enable_mergejoin to false;\nSET\nTime: 0.373 ms\nitvtrackdata3=> explain analyze SELECT * FROM tableB NATURAL JOIN tableA\nWHERE bim>=72555896091359 AND bim<72555935412959 AND bim=bap ORDER BY\nbim ASC LIMIT 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..432619.68 rows=1 width=50) (actual\ntime=11.149..11.150 rows=1 loops=1)\n -> Nested Loop (cost=0.00..432619.68 rows=1 width=50) (actual\ntime=11.148..11.148 rows=1 loops=1)\n Join Filter: (\"outer\".\"bim\" = \"inner\".\"bap\")\n -> Index Scan using idx_tableB_bim on tableB \n(cost=0.00..97391.22 rows=55672 width=24) (actual time=0.017..0.062\nrows=29 loops=1)\n Index Cond: ((\"bim\" >= 72555896091359::bigint) AND (\"bim\"\n< 72555935412959::bigint))\n -> Index Scan using idx_tableA_foo on tableA (cost=0.00..6.01\nrows=1 width=34) (actual time=0.007..0.379 rows=1 loops=29)\n Index Cond: (\"outer\".foo = tableA.foo)\n Total runtime: 11.215 ms\n(8 rows)\n\nTime: 32.007 ms\n\n\nHave I just flubbed the patch, or is there something else\ngoing on here?\n\nThanks,\n\n\t--Ian\n\n\nOn Fri, 2005-07-22 at 12:20, Tom Lane wrote:\n> I wrote:\n> > Dawid Kuroczko <[email protected]> writes:\n> >> qnex=# EXPLAIN SELECT * FROM log NATURAL 
JOIN useragents LIMIT 1;\n> \n> >> Limit (cost=15912.20..15912.31 rows=1 width=272)\n> >> -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)\n> \n> >> If I set enable_hashjoin=false:\n> \n> >> qnex=# EXPLAIN ANALYZE SELECT * FROM log NATURAL LEFT JOIN useragents LIMIT 1;\n> \n> >> Limit (cost=0.00..3.07 rows=1 width=272) (actual time=74.214..74.216\n> >> rows=1 loops=1)\n> >> -> Nested Loop Left Join (cost=0.00..144295895.01 rows=47044336\n> >> width=272) (actual time=74.204..74.204 rows=1 loops=1)\n> \n> > This is quite strange. The nestloop plan definitely should be preferred\n> > in the context of the LIMIT, considering that it has far lower estimated\n> > cost. And it is preferred in simple tests for me.\n> \n> After a suitable period of contemplating my navel, I figured out\n> what is going on here: the total costs involved are large enough that\n> the still-fairly-high startup cost of the hash is disregarded by\n> compare_fuzzy_path_costs(), and so the nestloop is discarded as not\n> having any significant potential advantage in startup time.\n> \n> I think that this refutes the original scheme of using the same fuzz\n> factor for both startup and total cost comparisons, and therefore\n> propose the attached patch.\n> \n> Comments?\n> \n> \t\t\tregards, tom lane\n> \n> *** src/backend/optimizer/util/pathnode.c.orig\tFri Jul 15 13:09:25 2005\n> --- src/backend/optimizer/util/pathnode.c\tFri Jul 22 12:08:25 2005\n> ***************\n> *** 98,157 ****\n> static int\n> compare_fuzzy_path_costs(Path *path1, Path *path2, CostSelector criterion)\n> {\n> - \tCost\t\tfuzz;\n> - \n> \t/*\n> ! \t * The fuzz factor is set at one percent of the smaller total_cost,\n> ! \t * but not less than 0.01 cost units (just in case total cost is\n> ! \t * zero).\n> \t *\n> \t * XXX does this percentage need to be user-configurable?\n> \t */\n> - \tfuzz = Min(path1->total_cost, path2->total_cost) * 0.01;\n> - \tfuzz = Max(fuzz, 0.01);\n> - \n> \tif (criterion == STARTUP_COST)\n> \t{\n> ! \t\tif (Abs(path1->startup_cost - path2->startup_cost) > fuzz)\n> ! \t\t{\n> ! \t\t\tif (path1->startup_cost < path2->startup_cost)\n> ! \t\t\t\treturn -1;\n> ! \t\t\telse\n> ! \t\t\t\treturn +1;\n> ! \t\t}\n> \n> \t\t/*\n> \t\t * If paths have the same startup cost (not at all unlikely),\n> \t\t * order them by total cost.\n> \t\t */\n> ! \t\tif (Abs(path1->total_cost - path2->total_cost) > fuzz)\n> ! \t\t{\n> ! \t\t\tif (path1->total_cost < path2->total_cost)\n> ! \t\t\t\treturn -1;\n> ! \t\t\telse\n> ! \t\t\t\treturn +1;\n> ! \t\t}\n> \t}\n> \telse\n> \t{\n> ! \t\tif (Abs(path1->total_cost - path2->total_cost) > fuzz)\n> ! \t\t{\n> ! \t\t\tif (path1->total_cost < path2->total_cost)\n> ! \t\t\t\treturn -1;\n> ! \t\t\telse\n> ! \t\t\t\treturn +1;\n> ! \t\t}\n> \n> \t\t/*\n> \t\t * If paths have the same total cost, order them by startup cost.\n> \t\t */\n> ! \t\tif (Abs(path1->startup_cost - path2->startup_cost) > fuzz)\n> ! \t\t{\n> ! \t\t\tif (path1->startup_cost < path2->startup_cost)\n> ! \t\t\t\treturn -1;\n> ! \t\t\telse\n> ! \t\t\t\treturn +1;\n> ! \t\t}\n> \t}\n> \treturn 0;\n> }\n> --- 98,138 ----\n> static int\n> compare_fuzzy_path_costs(Path *path1, Path *path2, CostSelector criterion)\n> {\n> \t/*\n> ! \t * We use a fuzz factor of 1% of the smaller cost.\n> \t *\n> \t * XXX does this percentage need to be user-configurable?\n> \t */\n> \tif (criterion == STARTUP_COST)\n> \t{\n> ! \t\tif (path1->startup_cost > path2->startup_cost * 1.01)\n> ! \t\t\treturn +1;\n> ! 
\t\tif (path2->startup_cost > path1->startup_cost * 1.01)\n> ! \t\t\treturn -1;\n> \n> \t\t/*\n> \t\t * If paths have the same startup cost (not at all unlikely),\n> \t\t * order them by total cost.\n> \t\t */\n> ! \t\tif (path1->total_cost > path2->total_cost * 1.01)\n> ! \t\t\treturn +1;\n> ! \t\tif (path2->total_cost > path1->total_cost * 1.01)\n> ! \t\t\treturn -1;\n> \t}\n> \telse\n> \t{\n> ! \t\tif (path1->total_cost > path2->total_cost * 1.01)\n> ! \t\t\treturn +1;\n> ! \t\tif (path2->total_cost > path1->total_cost * 1.01)\n> ! \t\t\treturn -1;\n> \n> \t\t/*\n> \t\t * If paths have the same total cost, order them by startup cost.\n> \t\t */\n> ! \t\tif (path1->startup_cost > path2->startup_cost * 1.01)\n> ! \t\t\treturn +1;\n> ! \t\tif (path2->startup_cost > path1->startup_cost * 1.01)\n> ! \t\t\treturn -1;\n> \t}\n> \treturn 0;\n> }\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 10 Aug 2005 17:03:53 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT?" }, { "msg_contents": "Ian Westmacott <[email protected]> writes:\n> In a nutshell, I have a LIMIT query where the planner\n> seems to favor a merge join over a nested loop.\n\nThe planner is already estimating only one row out of the join, and so\nthe LIMIT doesn't affect its cost estimates at all.\n\nIt appears to me that the reason the nestloop plan is fast is just\nchance: a suitable matching row is found very early in the scan of\ntableB, so that the indexscan on it can stop after 29 rows, instead\nof having to go through all 55000 rows in the given range of bim.\nIf it'd have had to go through, say, half of the rows to find a match,\nthe sort/merge plan would show up a lot better.\n\nIf this wasn't chance, but was expected because there are many matching\nrows and not only one, then there's a statistical problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 10 Aug 2005 18:55:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT? " }, { "msg_contents": "On Wed, 2005-08-10 at 18:55, Tom Lane wrote:\n> Ian Westmacott <[email protected]> writes:\n> > In a nutshell, I have a LIMIT query where the planner\n> > seems to favor a merge join over a nested loop.\n> \n> The planner is already estimating only one row out of the join, and so\n> the LIMIT doesn't affect its cost estimates at all.\n>\n> It appears to me that the reason the nestloop plan is fast is just\n> chance: a suitable matching row is found very early in the scan of\n> tableB, so that the indexscan on it can stop after 29 rows, instead\n> of having to go through all 55000 rows in the given range of bim.\n> If it'd have had to go through, say, half of the rows to find a match,\n> the sort/merge plan would show up a lot better.\n\nOh, I see. 
Thanks, that clears up some misconceptions I\nhad about the explain output.\n\n> If this wasn't chance, but was expected because there are many matching\n> rows and not only one, then there's a statistical problem.\n\nWell, there are in fact almost 300 of them in this case.\nSo I guess what I need to do is give the planner more\ninformation to correctly predict that.\n\nThanks,\n\n\t--Ian\n\n\n", "msg_date": "Thu, 11 Aug 2005 09:26:37 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner doesn't look at LIMIT?" } ]
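Since the troublesome estimate comes from the correlated condition bim = bap, which per-column statistics cannot describe, a practical fallback is to contain the workaround already used in the thread so it cannot leak into other queries. The sketch below reuses the obfuscated names from the post: SET LOCAL confines the planner override to a single transaction, and the per-column SET STATISTICS calls make the higher target stick for routine ANALYZE runs instead of depending on a session-level default_statistics_target.

ALTER TABLE tableA ALTER COLUMN bap SET STATISTICS 1000;
ALTER TABLE tableB ALTER COLUMN bim SET STATISTICS 1000;
ANALYZE tableA;
ANALYZE tableB;

BEGIN;
SET LOCAL enable_mergejoin TO off;
SELECT * FROM tableB NATURAL JOIN tableA
 WHERE bim >= 72555896091359 AND bim < 72555935412959 AND bim = bap
 ORDER BY bim ASC LIMIT 1;
COMMIT;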
[ { "msg_contents": "Hi all,\n\n I am trying to do an update on a table but so far I can't seem to \ncome up with a usable index. After my last question/thread the user \n'PFC' recommended I store whether a file was to be backed up as either \n't'(rue), 'f'(alse) or 'i'(nherit) to speed up changing files and sub \ndirectories under a given directory when it was toggled. I've more or \nless finished implementing this and it is certainly a LOT faster but I \nam hoping to make it just a little faster still with an Index.\n\n Tom Lane pointed out to me that I needed 'text_pattern_ops' on my \n'file_parent_dir' column in the index if I wanted to do pattern matching \n(the C locale wasn't set). Now I have added an additional condition and \nI think this might be my problem. Here is a sample query I am trying to \ncreate my index for:\n\n\nUPDATE file_info_2 SET file_backup='i' WHERE file_backup!='i' AND \nfile_parent_dir='/';\n\n This would be an example of someone changing the backup state of the \nroot of a partition. It could also be:\n\n\nUPDATE file_info_2 SET file_backup='i' WHERE file_backup!='i' AND \nfile_parent_dir='/usr';\n\n If, for example, the user was toggling the backup state of the '/usr' \ndirectory.\n\n I suspected that because I was using \"file_backup!='i'\" that maybe I \nwas running into the same problem as before so I tried creating the index:\n\n\ntle-bu=> CREATE INDEX file_info_2_mupdate_idx ON file_info_2 \n(file_backup bpchar_pattern_ops, file_parent_dir text_pattern_ops);\n\ntle-bu=> EXPLAIN ANALYZE UPDATE file_info_2 SET file_backup='i' WHERE \nfile_backup!='i' AND file_parent_dir~'^/'; \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on file_info_2 (cost=0.00..13379.38 rows=1 width=134) \n(actual time=1623.819..1624.087 rows=4 loops=1)\n Filter: ((file_backup <> 'i'::bpchar) AND (file_parent_dir ~ \n'^/'::text))\n Total runtime: 1628.053 ms\n(3 rows)\n\n\n This index wasn't used though, even when I set 'enable_seqscan' to \n'OFF'. The column 'file_backup' is 'char(1)' and the column \n'file_parent_dir' is 'text'.\n\n\ntle-bu=> \\d file_info_2; \\di file_info_2_mupdate_idx; \nTable \"public.file_info_2\"\n Column | Type | Modifiers\n-----------------+--------------+------------------------------\n file_group_name | text |\n file_group_uid | integer | not null\n file_mod_time | bigint | not null\n file_name | text | not null\n file_parent_dir | text | not null\n file_perm | integer | not null\n file_size | bigint | not null\n file_type | character(1) | not null\n file_user_name | text |\n file_user_uid | integer | not null\n file_backup | character(1) | not null default 'i'::bpchar\n file_display | character(1) | not null default 'i'::bpchar\n file_restore | character(1) | not null default 'i'::bpchar\nIndexes:\n \"file_info_2_mupdate_idx\" btree (file_backup bpchar_pattern_ops, \nfile_parent_dir text_pattern_ops)\n \"file_info_2_supdate_idx\" btree (file_parent_dir, file_name, file_type)\n\n List of relations\n Schema | Name | Type | Owner | Table\n--------+-------------------------+-------+---------+-------------\n public | file_info_2_mupdate_idx | index | madison | file_info_2\n(1 row)\n\n Could it be that there needs to be a certain number of \n\"file_backup!='i'\" before the planner will use the index? 
I have also \ntried not defining an op_class on both tables (and one at a time) but I \ncan't seem to figure this out.\n\n As always, thank you!\n\nMadison\n", "msg_date": "Fri, 22 Jul 2005 10:46:42 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Another index question" }, { "msg_contents": "Line noise, sorry...\n\n After posting I went back to reading the pgsql docs and saw the query:\n\n\nSELECT am.amname AS index_method, opc.opcname AS opclass_name, \nopr.oprname AS opclass_operator FROM pg_am am, pg_opclass opc, pg_amop \namop, pg_operator opr WHERE opc.opcamid = am.oid AND amop.amopclaid = \nopc.oid AND amop.amopopr = opr.oid ORDER BY index_method, opclass_name, \nopclass_operator;\n\n Which listed all the op_classes. I noticed none of the \nopclass_operators supported '!=' so I wondered if that was simply an \nunindexable (is that a word?) operator. So I tried creating the index:\n\n\ntle-bu=> CREATE INDEX file_info_2_mupdate_idx ON file_info_2 \n(file_backup, file_parent_dir text_pattern_ops);\n\n And changing my query to:\n\n\ntle-bu=> EXPLAIN ANALYZE UPDATE file_info_2 SET file_backup='i' WHERE \nfile_backup='t' OR file_backup='f' AND file_parent_dir~'^/';\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using file_info_2_mupdate_idx, file_info_2_mupdate_idx on \nfile_info_2 (cost=0.00..10.04 rows=1 width=134) (actual \ntime=0.112..0.718 rows=4 loops=1)\n Index Cond: ((file_backup = 't'::bpchar) OR ((file_backup = \n'f'::bpchar) AND (file_parent_dir ~>=~ '/'::text) AND (file_parent_dir \n~<~ '0'::text)))\n Filter: ((file_backup = 't'::bpchar) OR ((file_backup = 'f'::bpchar) \nAND (file_parent_dir ~ '^/'::text)))\n Total runtime: 60.359 ms\n(4 rows)\n\n Bingo!\n\n Hopefully someone might find this useful in the archives. :p\n\nMadison\n\n\nMadison Kelly wrote:\n> Hi all,\n> \n> I am trying to do an update on a table but so far I can't seem to come \n> up with a usable index. After my last question/thread the user 'PFC' \n> recommended I store whether a file was to be backed up as either \n> 't'(rue), 'f'(alse) or 'i'(nherit) to speed up changing files and sub \n> directories under a given directory when it was toggled. I've more or \n> less finished implementing this and it is certainly a LOT faster but I \n> am hoping to make it just a little faster still with an Index.\n> \n> Tom Lane pointed out to me that I needed 'text_pattern_ops' on my \n> 'file_parent_dir' column in the index if I wanted to do pattern matching \n> (the C locale wasn't set). Now I have added an additional condition and \n> I think this might be my problem. Here is a sample query I am trying to \n> create my index for:\n> \n> \n> UPDATE file_info_2 SET file_backup='i' WHERE file_backup!='i' AND \n> file_parent_dir='/';\n> \n> This would be an example of someone changing the backup state of the \n> root of a partition. 
It could also be:\n> \n> \n> UPDATE file_info_2 SET file_backup='i' WHERE file_backup!='i' AND \n> file_parent_dir='/usr';\n> \n> If, for example, the user was toggling the backup state of the '/usr' \n> directory.\n> \n> I suspected that because I was using \"file_backup!='i'\" that maybe I \n> was running into the same problem as before so I tried creating the index:\n> \n> \n> tle-bu=> CREATE INDEX file_info_2_mupdate_idx ON file_info_2 \n> (file_backup bpchar_pattern_ops, file_parent_dir text_pattern_ops);\n> \n> tle-bu=> EXPLAIN ANALYZE UPDATE file_info_2 SET file_backup='i' WHERE \n> file_backup!='i' AND file_parent_dir~'^/'; QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------- \n> \n> Seq Scan on file_info_2 (cost=0.00..13379.38 rows=1 width=134) (actual \n> time=1623.819..1624.087 rows=4 loops=1)\n> Filter: ((file_backup <> 'i'::bpchar) AND (file_parent_dir ~ \n> '^/'::text))\n> Total runtime: 1628.053 ms\n> (3 rows)\n> \n> \n> This index wasn't used though, even when I set 'enable_seqscan' to \n> 'OFF'. The column 'file_backup' is 'char(1)' and the column \n> 'file_parent_dir' is 'text'.\n> \n> \n> tle-bu=> \\d file_info_2; \\di file_info_2_mupdate_idx; Table \n> \"public.file_info_2\"\n> Column | Type | Modifiers\n> -----------------+--------------+------------------------------\n> file_group_name | text |\n> file_group_uid | integer | not null\n> file_mod_time | bigint | not null\n> file_name | text | not null\n> file_parent_dir | text | not null\n> file_perm | integer | not null\n> file_size | bigint | not null\n> file_type | character(1) | not null\n> file_user_name | text |\n> file_user_uid | integer | not null\n> file_backup | character(1) | not null default 'i'::bpchar\n> file_display | character(1) | not null default 'i'::bpchar\n> file_restore | character(1) | not null default 'i'::bpchar\n> Indexes:\n> \"file_info_2_mupdate_idx\" btree (file_backup bpchar_pattern_ops, \n> file_parent_dir text_pattern_ops)\n> \"file_info_2_supdate_idx\" btree (file_parent_dir, file_name, file_type)\n> \n> List of relations\n> Schema | Name | Type | Owner | Table\n> --------+-------------------------+-------+---------+-------------\n> public | file_info_2_mupdate_idx | index | madison | file_info_2\n> (1 row)\n> \n> Could it be that there needs to be a certain number of \n> \"file_backup!='i'\" before the planner will use the index? I have also \n> tried not defining an op_class on both tables (and one at a time) but I \n> can't seem to figure this out.\n> \n> As always, thank you!\n> \n> Madison\n", "msg_date": "Fri, 22 Jul 2005 11:09:32 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Solved (was: Re: Another index question)" } ]
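A closing note for the archives: because only a handful of rows are ever in a non-'i' state, a partial index built on exactly that condition is another way to let the original "!=" form of the statement stay as written. This is untested against the poster's data and assumes, as the thread suggests, that the non-inherit rows really are a small minority.

CREATE INDEX file_info_2_not_inherit_idx
    ON file_info_2 (file_parent_dir text_pattern_ops)
 WHERE file_backup <> 'i';

EXPLAIN ANALYZE
UPDATE file_info_2 SET file_backup = 'i'
 WHERE file_backup != 'i' AND file_parent_dir ~ '^/usr';

The WHERE clause of the index matches the query's file_backup != 'i' predicate, so the planner can consider the index even though no operator class indexes "not equal" directly.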
[ { "msg_contents": "pgsql performance gurus,\n\nI sent the following message earlier this week. I have continued\nattempting to find something on the net that would explain this\nstrange change of query plans, but nothing seems to apply.\n\nAre there any thoughts, such as possibly tweaking the database\nsomehow to see if I can get this to repeat consistently?\n\nPlease let me know if any of you have any pointers as to \nthe cause of the different query plans.\n\nThank you very much in advance for any pointers you can provide.\n\nJohnM\n\nOn Tue, 19 Jul 2005, John Mendenhall wrote:\n\n> I tuned a query last week to obtain acceptable performance.\n> Here is my recorded explain analyze results:\n>\n> LOG: duration: 826.505 ms statement: explain analyze\n> [cut for brevity]\n> \n> I rebooted the database machine later that night.\n> Now, when I run the same query, I get the following\n> results:\n> \n> LOG: duration: 6931.701 ms statement: explain analyze\n> [cut for brevity]\n\nI just ran my query again, no changes from yesterday\nand it is back to normal:\n\nLOG: duration: 795.839 ms statement: explain analyze\n\nWhat could have been the problem?\n\nThe major differences in the query plan are as follows:\n\n(1) The one that runs faster uses a Hash Join at the\nvery top of the query plan. It does a Hash Cond on\nthe country and code fields.\n\n(2) The one that runs slower uses a Materialize with\nthe subplan, with no Hash items. The Materialize does\nSeq Scan of the countries table, and above it, a Join\nFilter is run.\n\n(3) The partners_pkey index on the partners table is\nin a different place in the query.\n\nDoes anyone know what would cause the query plan to be\ndifferent like this, for the same server, same query?\nI run vacuum analyze every night. Is this perhaps the\nproblem?\n\nWhat setting do I need to tweak to make sure the faster\nplan is always found?\n\nThanks for any pointers in this dilemma.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n", "msg_date": "Sat, 23 Jul 2005 09:50:34 -0700", "msg_from": "John Mendenhall <[email protected]>", "msg_from_op": true, "msg_subject": "re: performance decrease after reboot" } ]
[ { "msg_contents": "It's likely that data is in filesystem (not database) cache the second time you run the query. See if the same thing happens when you stop and restart the postmaster (it likely wont), then do something like this to flush the filesystem cache (read a big file, can't give you a sample cmd because my Treo has no equal sign :-) then run the query again.\n\n- Luke\n\n -----Original Message-----\nFrom: \tJohn Mendenhall [mailto:[email protected]]\nSent:\tSat Jul 23 12:54:18 2005\nTo:\tpgsql-performance list\nSubject:\t[PERFORM] re: performance decrease after reboot\n\npgsql performance gurus,\n\nI sent the following message earlier this week. I have continued\nattempting to find something on the net that would explain this\nstrange change of query plans, but nothing seems to apply.\n\nAre there any thoughts, such as possibly tweaking the database\nsomehow to see if I can get this to repeat consistently?\n\nPlease let me know if any of you have any pointers as to \nthe cause of the different query plans.\n\nThank you very much in advance for any pointers you can provide.\n\nJohnM\n\nOn Tue, 19 Jul 2005, John Mendenhall wrote:\n\n> I tuned a query last week to obtain acceptable performance.\n> Here is my recorded explain analyze results:\n>\n> LOG: duration: 826.505 ms statement: explain analyze\n> [cut for brevity]\n> \n> I rebooted the database machine later that night.\n> Now, when I run the same query, I get the following\n> results:\n> \n> LOG: duration: 6931.701 ms statement: explain analyze\n> [cut for brevity]\n\nI just ran my query again, no changes from yesterday\nand it is back to normal:\n\nLOG: duration: 795.839 ms statement: explain analyze\n\nWhat could have been the problem?\n\nThe major differences in the query plan are as follows:\n\n(1) The one that runs faster uses a Hash Join at the\nvery top of the query plan. It does a Hash Cond on\nthe country and code fields.\n\n(2) The one that runs slower uses a Materialize with\nthe subplan, with no Hash items. The Materialize does\nSeq Scan of the countries table, and above it, a Join\nFilter is run.\n\n(3) The partners_pkey index on the partners table is\nin a different place in the query.\n\nDoes anyone know what would cause the query plan to be\ndifferent like this, for the same server, same query?\nI run vacuum analyze every night. Is this perhaps the\nproblem?\n\nWhat setting do I need to tweak to make sure the faster\nplan is always found?\n\nThanks for any pointers in this dilemma.\n\nJohnM\n\n-- \nJohn Mendenhall\[email protected]\nsurf utopia\ninternet services\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n", "msg_date": "Sat, 23 Jul 2005 13:02:58 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: re: performance decrease after reboot" } ]
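A quick way to confirm that explanation, rather than keep chasing planner settings, is to time the same statement twice around the restart and cache flush described above. This is only a sketch: \timing is a psql convenience, and the SELECT is a stand-in for the real query from the earlier thread.

\timing
EXPLAIN ANALYZE SELECT count(*) FROM contacts;  -- cold run, right after the restart and flush
EXPLAIN ANALYZE SELECT count(*) FROM contacts;  -- warm run, same plan, data now cached

If the second run is dramatically faster with an identical plan, the difference was the filesystem cache and not a change in the plan itself.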
[ { "msg_contents": "Lately, I've been reading a lot about these new Coraid AoE RAID \ndevices ( http://www.coraid.com ). They tout it as being fast and \ncheap and better than iSCSI due to the lack of TCP/IP over the wire. \nIs it likely that a 15-drive RAID 10 Linux software RAID would \noutperform a 4-drive 10k SCSI RAID 0+1 for a heavy-loaded database? \nIf not software RAID, how about their dedicated RAID controller blade?\n\nI'm definitely IO bound right now and starving for spindles. Does \nthis make sense or is it too good to be true?\n\nThanks\n-Dan\n", "msg_date": "Mon, 25 Jul 2005 02:27:31 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Coraid/AoE device experience?" } ]
[ { "msg_contents": "Shashi Kanth Boddula wrote:\n> \n> The customer is using DBmirror tool to mirror the database records of\n> primary to secondary . The customer is complaining that there is one day\n> (24 hours) delay between primary and secondray for database\n> synchronization . They have dedicated line and bandwidth , but still the\n> problems exists. \n\nYou don't say what the nature of the problem with dbmirror is. Are they \nsaturating their bandwidth? Are one or both servers unable to keep pace \nwith the updates?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 25 Jul 2005 14:04:01 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mirroring PostgreSQL database" }, { "msg_contents": "Try Slony: www.slony.info\n\nShashi Kanth Boddula wrote:\n> \n> Hi,\n> I have one customer who is using PostgreSQL 7.4.8 on Linux . He has some \n> problems with database mirroring . The details are follows.\n> The customer is using Linux on which PostgreSQL 7.4.8 along with Jboss \n> 3.2.3 is running . He has 2 servers , one is acting as a live server \n> (primary) and another is acting as a fail-over (secondary) server \n> . Secondary server is placed in remote location . These servers are \n> acting as a Attendence server for daily activities . Nearly 50,000 \n> employees depend on the live server .\n> \n> The customer is using DBmirror tool to mirror the database records of \n> primary to secondary . The customer is complaining that there is one day \n> (24 hours) delay between primary and secondray for database \n> synchronization . They have dedicated line and bandwidth , but still the \n> problems exists.\n> \n> I just want to know , for immediate data mirroring , what is the best \n> way for PostgreSQL . PostgreSQL is offering many mirror tools , but \n> which one is the best ?. Is there any other way to accomplish the task ?\n> \n> Thank you . Waiting for your reply.\n> \n> \n> Thanks & Regards,\n> Shashi Kanth\n> Consultant - Linux\n> RHCE , LPIC-2\n> Onward Novell - Bangalore\n> 9886455567\n> \n> \n", "msg_date": "Mon, 25 Jul 2005 23:20:31 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mirroring PostgreSQL database" }, { "msg_contents": "\n> I just want to know , for immediate data mirroring , what is the best \n> way for PostgreSQL . PostgreSQL is offering many mirror tools , but \n> which one is the best ?. Is there any other way to accomplish the task ?\n\nYou want to take a look at Slony-I or Mammoth Replicator.\n\nhttp://www.slony.info/\nhttp://www.commandprompt.com/\n\n\n> \n> Thank you . Waiting for your reply.\n> \n> \n> Thanks & Regards,\n> Shashi Kanth\n> Consultant - Linux\n> RHCE , LPIC-2\n> Onward Novell - Bangalore\n> 9886455567\n> \n> \n\n\n-- \nYour PostgreSQL solutions company - Command Prompt, Inc. 1.800.492.2240\nPostgreSQL Replication, Consulting, Custom Programming, 24x7 support\nManaged Services, Shared and Dedicated Hosting\nCo-Authors: plPHP, plPerlNG - http://www.commandprompt.com/\n", "msg_date": "Mon, 25 Jul 2005 09:03:37 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mirroring PostgreSQL database" }, { "msg_contents": "Hi, \nI have one customer who is using PostgreSQL 7.4.8 on Linux . He has some\nproblems with database mirroring . The details are follows. \nThe customer is using Linux on which PostgreSQL 7.4.8 along with Jboss\n3.2.3 is running . 
He has 2 servers , one is acting as a live server\n(primary) and another is acting as a fail-over (secondary) server . \nSecondary server is placed in remote location . These servers are acting\nas a Attendence server for daily activities . Nearly 50,000 employees\ndepend on the live server . \n \nThe customer is using DBmirror tool to mirror the database records of\nprimary to secondary . The customer is complaining that there is one day\n(24 hours) delay between primary and secondray for database\nsynchronization . They have dedicated line and bandwidth , but still the\nproblems exists. \n \nI just want to know , for immediate data mirroring , what is the best\nway for PostgreSQL . PostgreSQL is offering many mirror tools , but\nwhich one is the best ?. Is there any other way to accomplish the task ?\n\n \nThank you . Waiting for your reply. \n \n\nThanks & Regards,\nShashi Kanth\nConsultant - Linux\nRHCE , LPIC-2\nOnward Novell - Bangalore\n9886455567\n\n\n\n\n\n\n\n\n\n\nHi,\n \n\n\n \n\nI have one customer who is using PostgreSQL 7.4.8 on Linux . He has some problems with database mirroring . The details are follows.\n \n\n\n \n\n The customer is using Linux on which PostgreSQL 7.4.8 along with Jboss 3.2.3 is running . He has 2 servers , one is acting as a live server (primary) and another is acting as a fail-over (secondary)  server .  Secondary server is placed in remote location . These servers are acting as a Attendence server for daily activities . Nearly 50,000 employees depend on the live server .\n \n\n  \n \n\n The customer is using DBmirror tool to mirror the database records of primary to secondary . The customer is complaining that there is one day (24 hours) delay between primary and secondray for database synchronization . They have dedicated line and bandwidth , but still the problems exists.\n \n\n  \n \n\n I just want to know , for immediate data mirroring , what is the best way for PostgreSQL . PostgreSQL is offering many mirror tools , but which one is the best ?. Is there any other way to accomplish the task ?\n \n\n  \n \n\n Thank you . Waiting for your reply.\n \n\n  \n \nThanks & Regards,Shashi KanthConsultant - LinuxRHCE , LPIC-2Onward Novell - Bangalore9886455567", "msg_date": "Mon, 25 Jul 2005 12:01:52 -0600", "msg_from": "\"Shashi Kanth Boddula\" <[email protected]>", "msg_from_op": false, "msg_subject": "Mirroring PostgreSQL database" }, { "msg_contents": "Shashi Kanth Boddula schrieb:\n\n> Hi,\n> I have one customer who is using PostgreSQL 7.4.8 on Linux . He has\n> some problems with database mirroring . The details are follows.\n> The customer is using Linux on which PostgreSQL 7.4.8 along with Jboss\n> 3.2.3 is running . He has 2 servers , one is acting as a live server\n> (primary) and another is acting as a fail-over (secondary) server\n> . Secondary server is placed in remote location . These servers are\n> acting as a Attendence server for daily activities . Nearly 50,000\n> employees depend on the live server .\n> \n> The customer is using DBmirror tool to mirror the database records of\n> primary to secondary . The customer is complaining that there is one\n> day (24 hours) delay between primary and secondray for database\n> synchronization . They have dedicated line and bandwidth , but still\n> the problems exists.\n> \n> I just want to know , for immediate data mirroring , what is the best\n> way for PostgreSQL . PostgreSQL is offering many mirror tools , but\n> which one is the best ?. 
Is there any other way to accomplish the task ?\n> \n> Thank you . Waiting for your reply.\n> \n>\n> Thanks & Regards,\n> Shashi Kanth\n> Consultant - Linux\n> RHCE , LPIC-2\n> Onward Novell - Bangalore\n> 9886455567\n>\n>\n\nFor a Java-based solution you could also have a look at x-jdbc or xjdbc.\n\nBut first you should find out what the reason for the delay actually is.\nIf the backup server is too slow, it may not matter which\nmirroring tool you use.\n\n\n-- \nBest Regards / Viele Grüße\n\nSebastian Hennebrueder\n\n----\n\nhttp://www.laliluna.de\n\nTutorials for JSP, JavaServer Faces, Struts, Hibernate and EJB \n\nGet support, education and consulting for these technologies - uncomplicated and cheap.\n\n", "msg_date": "Wed, 27 Jul 2005 23:21:39 +0200", "msg_from": "Sebastian Hennebrueder <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mirroring PostgreSQL database" } ]
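Whichever replication tool ends up being used, the delay itself can be quantified with a simple heartbeat row that the primary updates on a schedule and the secondary compares against its own clock. This is only a sketch: the table name is illustrative and not part of DBmirror or Slony, and it assumes the two machines' clocks are reasonably in sync.

    -- on the primary; update the row every minute from cron or the application
    CREATE TABLE replication_heartbeat (id integer PRIMARY KEY, beat timestamptz);
    INSERT INTO replication_heartbeat VALUES (1, now());
    UPDATE replication_heartbeat SET beat = now() WHERE id = 1;

    -- on the secondary, once the row has replicated
    SELECT now() - beat AS replication_lag
      FROM replication_heartbeat
     WHERE id = 1;

Watching that lag over a day should show whether the 24-hour delay is a steady backlog or a burst that never catches up.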
[ { "msg_contents": " \nI have an 8.02 postgresql database with about 180 GB in size, running on\n2.6 RedHat kernel with 32 GB of RAM and 2 CPUs. I'm running the vacuum\nfull analyze command, and has been running for at least two consecutive\ndays with no other processes running (it's an offline loading server). I\ntweaked the maintenanace_mem to its max (2 GB) with work_mem of 8M. I\nhave no issues with my checkpoints. I can still I/O activities against\nthe physical files of the \"property\" table and its two indexes (primary\nkey and r index). The property files are about 128GB and indexes are\nabout 15 GB. I have run the same maintenance job on a different box\n(staging) with identical hardware config (except with 64 GB instead of\n32) and took less than 12 hours. Any clue or tip is really\nappreciated. \n\nAlso read a comment by Tom Lane, that terminating the process should be\ncrash-safe if I had to. \n\nThanks,\n\n\n-- \n Husam \n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n**********************************************************************\nThis message contains confidential information intended only for the \nuse of the addressee(s) named above and may contain information that \nis legally privileged. If you are not the addressee, or the person \nresponsible for delivering it to the addressee, you are hereby \nnotified that reading, disseminating, distributing or copying this \nmessage is strictly prohibited. If you have received this message by \nmistake, please immediately notify us by replying to the message and \ndelete the original message immediately thereafter.\n\nThank you. FADLD Tag\n**********************************************************************\n\n", "msg_date": "Mon, 25 Jul 2005 14:30:24 -0700", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "\"Vacuum Full Analyze\" taking so long" }, { "msg_contents": "I'd say, \"don't do that\". Unless you've deleted a lot of stuff and are\nexpecting the DB to shrink, a full vacuum shouldn't really be needed. On\na DB that big a full vacuum is just going to take a long time. If you\nreally are shrinking, consider structuring things so you can just drop a\ntable instead of vacuuming it (the drop is fairly instantaneous). If you\ncan't do that, consider dropping the indices, vacuuming, and recreating\nthe indices. \n\nMike Stone\n", "msg_date": "Mon, 25 Jul 2005 17:30:58 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" }, { "msg_contents": "Vacuum full takes an exclusive lock on the tables it runs against, so if you\nhave anything else reading the table while you are trying to run it, the\nvacuum full will wait, possibly forever until it can get the lock.\n\nWhat does the system load look like while you are running this? What does\nvmstat 1 show you? Is there load on the system other than the database?\n\nDo you really need to run vacuum full instead of vacuum?\n\n- Luke\n\n\nOn 7/25/05 2:30 PM, \"Tomeh, Husam\" <[email protected]> wrote:\n\n> \n> I have an 8.02 postgresql database with about 180 GB in size, running on\n> 2.6 RedHat kernel with 32 GB of RAM and 2 CPUs. I'm running the vacuum\n> full analyze command, and has been running for at least two consecutive\n> days with no other processes running (it's an offline loading server). 
I\n> tweaked the maintenanace_mem to its max (2 GB) with work_mem of 8M. I\n> have no issues with my checkpoints. I can still I/O activities against\n> the physical files of the \"property\" table and its two indexes (primary\n> key and r index). The property files are about 128GB and indexes are\n> about 15 GB. I have run the same maintenance job on a different box\n> (staging) with identical hardware config (except with 64 GB instead of\n> 32) and took less than 12 hours. Any clue or tip is really\n> appreciated. \n> \n> Also read a comment by Tom Lane, that terminating the process should be\n> crash-safe if I had to.\n> \n> Thanks,\n> \n\n\n", "msg_date": "Mon, 25 Jul 2005 15:48:39 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" } ]
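Whether a stalled VACUUM FULL is actually moving tuples or just waiting on its exclusive lock can be checked from a second session through pg_locks. A sketch against the "property" table mentioned in this thread; any row with granted = false means the vacuum (or something else touching the table) is waiting rather than working.

    SELECT c.relname, l.pid, l.mode, l.granted
      FROM pg_locks l
      JOIN pg_class c ON c.oid = l.relation
     WHERE c.relname = 'property'
     ORDER BY l.granted, l.pid;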
[ { "msg_contents": "I need COPY via libpqxx to insert millions of rows into two tables. One\ntable has roughly have as many rows and requires half the storage. In\nproduction, the largest table will grow by ~30M rows/day. To test the\nCOPY performance I split my transactions into 10,000 rows. I insert\nroughly 5000 rows into table A for every 10,000 rows into table B.\n \nTable A has one unique index:\n \n\"order_main_pk\" UNIQUE, btree (cl_ord_id)\n \nTable B has 1 unique index and 2 non-unique indexes:\n \n\"order_transition_pk\" UNIQUE, btree (collating_seq)\n\"order_transition_ak2\" btree (orig_cl_ord_id)\n\"order_transition_ak3\" btree (exec_id)\n \nMy testing environment is as follows:\n-Postgresql 8.0.1\n-libpqxx 2.5.0\n-Linux 2.6.11.4-21.7-smp x86_64 \n-Dual Opteron 246\n-System disk (postgres data resides on this SCSI disk) - Seagate\n(ST373453LC) - 15K, 73 GB\n(http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081,5\n49,00.html)\n-2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL reside\non this disk\n-NO RAID\n \nPostgreSQL\nHere are the results of copying in 10M rows as fast as possible:\n(10K/transaction)\nTotal Time: 1129.556 s\nRows/sec: 9899.922\nTransaction>1.2s 225\nTransaction>1.5s 77\nTransaction>2.0s 4\nMax Transaction 2.325s\n \nMySQL\nI ran a similar test with MySQL 4.1.10a (InnoDB) which produced these\nresults: (I used MySQL's INSERT INTO x VALUES\n(1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)\nTotal Time: 860.000 s\nRows/sec: 11627.91\nTransaction>1.2s 0\nTransaction>1.5s 0\nTransaction>2.0s 0\nMax Transaction 1.175s\n \nConsidering the configurations shown below, can anyone offer advice to\nclose the 15% gap and the much worse variability I'm experiencing.\nThanks\n \nMy postgresql.conf has the following non-default values:\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\nlisten_addresses = '*' # what IP interface(s) to listen on; \nmax_connections = 100\n#-----------------------------------------------------------------------\n----\n# RESOURCE USAGE (except WAL)\n#-----------------------------------------------------------------------\n----\nshared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\nwork_mem = 2048 # min 64, size in KB\nmaintenance_work_mem = 204800 # min 1024, size in KB\nmax_fsm_pages = 2250000 # min max_fsm_relations*16, 6 bytes each\nbgwriter_delay = 200 # 10-10000 milliseconds between rounds\nbgwriter_percent = 10 # 0-100% of dirty buffers in each round\nbgwriter_maxpages = 1000 # 0-1000 buffers max per round\n#-----------------------------------------------------------------------\n----\n# WRITE AHEAD LOG\n#-----------------------------------------------------------------------\n----\nfsync = false # turns forced synchronization on or off\nwal_buffers = 64 # min 4, 8KB each\ncheckpoint_segments = 40 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 600 # range 30-3600, in seconds\n#-----------------------------------------------------------------------\n----\n# QUERY TUNING\n#-----------------------------------------------------------------------\n----\neffective_cache_size = 65536 # typically 8KB each\nrandom_page_cost = 2 # units are one sequential page fetch cost\n#-----------------------------------------------------------------------\n----\n# ERROR REPORTING AND LOGGING\n#-----------------------------------------------------------------------\n---- \nlog_min_duration_statement = 250 # -1 is disabled, in 
milliseconds.\nlog_connections = true\nlog_disconnections = true\nlog_duration = true\nlog_line_prefix = '<%r%u%p%t%d%%' # e.g. '<%u%%%d> ' \n # %u=user name %d=database name\n # %r=remote host and port\n # %p=PID %t=timestamp %i=command tag\n # %c=session id %l=session line number\n # %s=session start timestamp %x=transaction id\n # %q=stop here in non-session processes\n # %%='%'\nlog_statement = 'none' # none, mod, ddl, all\n#-----------------------------------------------------------------------\n----\n# RUNTIME STATISTICS\n#-----------------------------------------------------------------------\n----\n# - Query/Index Statistics Collector -\nstats_start_collector = true\nstats_command_string = true\nstats_block_level = true\nstats_row_level = true\nstats_reset_on_server_start = true\n \nMy MySQL my.ini has the following non default values:\ninnodb_data_home_dir = /var/lib/mysql/\ninnodb_data_file_path = ibdata1:10M:autoextend\ninnodb_log_group_home_dir = /var/lib/mysql/\ninnodb_log_arch_dir = /var/lib/mysql/\n# You can set .._buffer_pool_size up to 50 - 80 %\n# of RAM but beware of setting memory usage too high\ninnodb_buffer_pool_size = 512M\ninnodb_additional_mem_pool_size = 64M\n# Set .._log_file_size to 25 % of buffer pool size\ninnodb_log_file_size = 128M\ninnodb_log_buffer_size = 64M\ninnodb_flush_log_at_trx_commit = 1\ninnodb_lock_wait_timeout = 50\ninnodb_flush_method = O_DSYNC\nmax_allowed_packet = 16M\n\n", "msg_date": "Mon, 25 Jul 2005 17:32:50 -0500", "msg_from": "\"Chris Isaacson\" <[email protected]>", "msg_from_op": true, "msg_subject": "COPY insert performance" }, { "msg_contents": "Chris Isaacson wrote:\n> I need COPY via libpqxx to insert millions of rows into two tables. 
One\n> table has roughly have as many rows and requires half the storage. In\n> production, the largest table will grow by ~30M rows/day. To test the\n> COPY performance I split my transactions into 10,000 rows. I insert\n> roughly 5000 rows into table A for every 10,000 rows into table B.\n>\n> Table A has one unique index:\n>\n> \"order_main_pk\" UNIQUE, btree (cl_ord_id)\n>\n> Table B has 1 unique index and 2 non-unique indexes:\n>\n> \"order_transition_pk\" UNIQUE, btree (collating_seq)\n> \"order_transition_ak2\" btree (orig_cl_ord_id)\n> \"order_transition_ak3\" btree (exec_id)\n\nDo you have any foreign key references?\nIf you are creating a table for the first time (or loading a large\nfraction of the data), it is common to drop the indexes and foreign keys\nfirst, and then insert/copy, and then drop them again.\n\nIs InnoDB the backend with referential integrity, and true transaction\nsupport? I believe the default backend does not support either (so it is\n\"cheating\" to give you speed, which may be just fine for your needs,\nespecially since you are willing to run fsync=false).\n\nI think moving pg_xlog to a dedicated drive (set of drives) could help\nyour performance. As well as increasing checkpoint_segments.\n\nI don't know if you gain much by changing the bg_writer settings, if you\nare streaming everything in at once, you probably want to have it\nwritten out right away. My understanding is that bg_writer settings are\nfor the case where you have mixed read and writes going on at the same\ntime, and you want to make sure that the reads have time to execute (ie\nthe writes are not saturating your IO).\n\nAlso, is any of this tested under load? Having a separate process issue\nqueries while you are loading in data. Traditionally MySQL is faster\nwith a single process inserting/querying for data, but once you have\nmultiple processes hitting it at the same time, it's performance\ndegrades much faster than postgres.\n\nYou also seem to be giving MySQL 512M of ram to work with, while only\ngiving 2M/200M to postgres. (re)creating indexes uses\nmaintenance_work_mem, but updating indexes could easily use work_mem.\nYou may be RAM starved.\n\nJohn\n=:->\n\n\n>\n> My testing environment is as follows:\n> -Postgresql 8.0.1\n> -libpqxx 2.5.0\n> -Linux 2.6.11.4-21.7-smp x86_64\n> -Dual Opteron 246\n> -System disk (postgres data resides on this SCSI disk) - Seagate\n> (ST373453LC) - 15K, 73 GB\n> (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081,549,00.html)\n> -2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL reside\n> on this disk\n> -NO RAID\n>\n> *PostgreSQL*\n> Here are the results of copying in 10M rows as fast as possible:\n> (10K/transaction)\n> Total Time: 1129.556 s\n> Rows/sec: 9899.922\n> Transaction>1.2s 225\n> Transaction>1.5s 77\n> Transaction>2.0s 4\n> Max Transaction 2.325s\n>\n> **MySQL**\n> **I ran a similar test with MySQL 4.1.10a (InnoDB) which produced these\n> results: (I used MySQL's INSERT INTO x VALUES\n> (1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)\n> Total Time: 860.000 s\n> Rows/sec: 11627.91\n> Transaction>1.2s 0\n> Transaction>1.5s 0\n> Transaction>2.0s 0\n> Max Transaction 1.175s\n>\n> Considering the configurations shown below, can anyone offer advice to\n> close the 15% gap and the much worse variability I'm experiencing. 
Thanks\n>\n> My *postgresql.conf* has the following non-default values:\n> # -----------------------------\n> # PostgreSQL configuration file\n> # -----------------------------\n> listen_addresses = '*' # what IP interface(s) to listen on;\n> max_connections = 100\n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\n> work_mem = 2048 # min 64, size in KB\n> maintenance_work_mem = 204800 # min 1024, size in KB\n> max_fsm_pages = 2250000 # min max_fsm_relations*16, 6 bytes each\n> bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n> bgwriter_percent = 10 # 0-100% of dirty buffers in each round\n> bgwriter_maxpages = 1000 # 0-1000 buffers max per round\n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> fsync = false # turns forced synchronization on or off\n> wal_buffers = 64 # min 4, 8KB each\n> checkpoint_segments = 40 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 600 # range 30-3600, in seconds\n> #---------------------------------------------------------------------------\n> # QUERY TUNING\n> #---------------------------------------------------------------------------\n> effective_cache_size = 65536 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n> #---------------------------------------------------------------------------\n> # ERROR REPORTING AND LOGGING\n> #---------------------------------------------------------------------------\n>\n> log_min_duration_statement = 250 # -1 is disabled, in milliseconds.\n> log_connections = true\n> log_disconnections = true\n> log_duration = true\n> log_line_prefix = '<%r%u%p%t%d%%' # e.g. 
'<%u%%%d> '\n> # %u=user name %d=database name\n> # %r=remote host and port\n> # %p=PID %t=timestamp %i=command tag\n> # %c=session id %l=session line number\n> # %s=session start timestamp %x=transaction id\n> # %q=stop here in non-session processes\n> # %%='%'\n> log_statement = 'none' # none, mod, ddl, all\n> #---------------------------------------------------------------------------\n> # RUNTIME STATISTICS\n> #---------------------------------------------------------------------------\n> # - Query/Index Statistics Collector -\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n>\n> My MySQL *my.ini* has the following non default values:\n> innodb_data_home_dir = /var/lib/mysql/\n> innodb_data_file_path = ibdata1:10M:autoextend\n> innodb_log_group_home_dir = /var/lib/mysql/\n> innodb_log_arch_dir = /var/lib/mysql/\n> # You can set .._buffer_pool_size up to 50 - 80 %\n> # of RAM but beware of setting memory usage too high\n> innodb_buffer_pool_size = 512M\n> innodb_additional_mem_pool_size = 64M\n> # Set .._log_file_size to 25 % of buffer pool size\n> innodb_log_file_size = 128M\n> innodb_log_buffer_size = 64M\n> innodb_flush_log_at_trx_commit = 1\n> innodb_lock_wait_timeout = 50\n> innodb_flush_method = O_DSYNC\n> max_allowed_packet = 16M\n>\n>\n>", "msg_date": "Mon, 25 Jul 2005 18:08:33 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY insert performance" }, { "msg_contents": "Chris,\n\nYou can try the Bizgres distribution of postgres (based on version 8.0.3),\nthe COPY support is 30% faster as reported by OSDL (without indexes). This\nis due to very slow parsing within the COPY command, which is sped up using\nmicro-optimized logic for parsing. There is a patch pending for the\ndevelopment version of Postgres which implements the same code, but you can\nuse Bizgres and get it now instead of waiting for postgres 8.1 to come out.\nAlso, Bizgres is QA tested with the enhanced features.\n\nBizgres is a free / open source distribution of Postgres for Business\nIntelligence / Data Warehousing.\n\nBizgres currently features postgres 8.0.3 plus these patches:\n* Bypass WAL when performing ³CREATE TABLE AS SELECT²\n* COPY is between 30% and 90% faster on machines with fast I/O\n* Enhanced support for data partitioning with partition elimination\noptimization \n* Bitmap Scan support for multiple index use in queries and better low\ncardinality column performance\n* Improved optimization of queries with LIMIT\n\nSee: http://www.bizgres.org for more.\n\n- Luke\n\n\nOn 7/25/05 3:32 PM, \"Chris Isaacson\" <[email protected]> wrote:\n\n> I need COPY via libpqxx to insert millions of rows into two tables. One table\n> has roughly have as many rows and requires half the storage. In production,\n> the largest table will grow by ~30M rows/day. To test the COPY performance I\n> split my transactions into 10,000 rows. 
I insert roughly 5000 rows into table\n> A for every 10,000 rows into table B.\n> \n> Table A has one unique index:\n> \n> \"order_main_pk\" UNIQUE, btree (cl_ord_id)\n> \n> Table B has 1 unique index and 2 non-unique indexes:\n> \n> \"order_transition_pk\" UNIQUE, btree (collating_seq)\n> \"order_transition_ak2\" btree (orig_cl_ord_id)\n> \"order_transition_ak3\" btree (exec_id)\n> \n> My testing environment is as follows:\n> -Postgresql 8.0.1\n> -libpqxx 2.5.0\n> -Linux 2.6.11.4-21.7-smp x86_64\n> -Dual Opteron 246\n> -System disk (postgres data resides on this SCSI disk) - Seagate (ST373453LC)\n> - 15K, 73 GB \n> (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081,549,00.\n> html)\n> -2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL reside on this\n> disk\n> -NO RAID\n> \n> PostgreSQL\n> Here are the results of copying in 10M rows as fast as possible:\n> (10K/transaction)\n> Total Time: 1129.556 s\n> Rows/sec: 9899.922\n> Transaction>1.2s 225\n> Transaction>1.5s 77\n> Transaction>2.0s 4\n> Max Transaction 2.325s\n> \n> MySQL\n> I ran a similar test with MySQL 4.1.10a (InnoDB) which produced these results:\n> (I used MySQL's INSERT INTO x VALUES (1,2,3)(4,5,6)(...,...,...) syntax)\n> (10K/transaction)\n> Total Time: 860.000 s\n> Rows/sec: 11627.91\n> Transaction>1.2s 0\n> Transaction>1.5s 0\n> Transaction>2.0s 0\n> Max Transaction 1.175s\n> \n> Considering the configurations shown below, can anyone offer advice to close\n> the 15% gap and the much worse variability I'm experiencing. Thanks\n> \n> My postgresql.conf has the following non-default values:\n> # -----------------------------\n> # PostgreSQL configuration file\n> # -----------------------------\n> listen_addresses = '*' # what IP interface(s) to listen on;\n> max_connections = 100\n> #---------------------------------------------------------------------------\n> # RESOURCE USAGE (except WAL)\n> #---------------------------------------------------------------------------\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\n> work_mem = 2048 # min 64, size in KB\n> maintenance_work_mem = 204800 # min 1024, size in KB\n> max_fsm_pages = 2250000 # min max_fsm_relations*16, 6 bytes each\n> bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n> bgwriter_percent = 10 # 0-100% of dirty buffers in each round\n> bgwriter_maxpages = 1000 # 0-1000 buffers max per round\n> #---------------------------------------------------------------------------\n> # WRITE AHEAD LOG\n> #---------------------------------------------------------------------------\n> fsync = false # turns forced synchronization on or off\n> wal_buffers = 64 # min 4, 8KB each\n> checkpoint_segments = 40 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 600 # range 30-3600, in seconds\n> #---------------------------------------------------------------------------\n> # QUERY TUNING\n> #---------------------------------------------------------------------------\n> effective_cache_size = 65536 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n> #---------------------------------------------------------------------------\n> # ERROR REPORTING AND LOGGING\n> #---------------------------------------------------------------------------\n> log_min_duration_statement = 250 # -1 is disabled, in milliseconds.\n> log_connections = true\n> log_disconnections = true\n> log_duration = true\n> log_line_prefix = '<%r%u%p%t%d%%' # e.g. 
'<%u%%%d> '\n> # %u=user name %d=database name\n> # %r=remote host and port\n> # %p=PID %t=timestamp %i=command tag\n> # %c=session id %l=session line number\n> # %s=session start timestamp %x=transaction id\n> # %q=stop here in non-session processes\n> # %%='%'\n> log_statement = 'none' # none, mod, ddl, all\n> #---------------------------------------------------------------------------\n> # RUNTIME STATISTICS\n> #---------------------------------------------------------------------------\n> # - Query/Index Statistics Collector -\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n> \n> My MySQL my.ini has the following non default values:\n> innodb_data_home_dir = /var/lib/mysql/\n> innodb_data_file_path = ibdata1:10M:autoextend\n> innodb_log_group_home_dir = /var/lib/mysql/\n> innodb_log_arch_dir = /var/lib/mysql/\n> # You can set .._buffer_pool_size up to 50 - 80 %\n> # of RAM but beware of setting memory usage too high\n> innodb_buffer_pool_size = 512M\n> innodb_additional_mem_pool_size = 64M\n> # Set .._log_file_size to 25 % of buffer pool size\n> innodb_log_file_size = 128M\n> innodb_log_buffer_size = 64M\n> innodb_flush_log_at_trx_commit = 1\n> innodb_lock_wait_timeout = 50\n> innodb_flush_method = O_DSYNC\n> max_allowed_packet = 16M\n> \n> \n", "msg_date": "Mon, 25 Jul 2005 17:31:02 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY insert performance" }, { "msg_contents": "Hi Chris,\n\nHave you considered breaking the data into multiple chunks and COPYing\neach concurrently?\n\nAlso, have you ensured that your table isn't storing OIDs?\n\nOn Mon, 25 Jul 2005, Chris Isaacson wrote:\n\n> #-----------------------------------------------------------------------\n> ----\n> # RESOURCE USAGE (except WAL)\n> #-----------------------------------------------------------------------\n> ----\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\n\nshared_buffers that high has been shown to affect performance. Try 12000.\n\n> wal_buffers = 64 # min 4, 8KB each\n\nIncreasing wal_buffers can also have an effect on performance.\n\nThanks,\n\nGavin\n", "msg_date": "Tue, 26 Jul 2005 22:12:21 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY insert performance" } ]
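If the application could tolerate a short window without the two secondary indexes, the drop-load-recreate pattern suggested in this thread would look roughly like the sketch below. The index definitions are copied from the thread; the table name order_transition is only inferred from the index names, the file path is made up, and the unique primary-key index is deliberately left in place.

    DROP INDEX order_transition_ak2;
    DROP INDEX order_transition_ak3;

    -- illustrative server-side path; COPY FROM a file requires superuser
    COPY order_transition FROM '/data/load/order_transition.dat';

    CREATE INDEX order_transition_ak2 ON order_transition (orig_cl_ord_id);
    CREATE INDEX order_transition_ak3 ON order_transition (exec_id);
    ANALYZE order_transition;

Building the two btrees once over the loaded data is usually cheaper than maintaining them row by row, but whether the query-availability requirement allows it is a business call.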
[ { "msg_contents": " \nNothing was running except the job. The server did not look stressed out\nlooking at top and vmstat. We have seen slower query performance when\nperforming load tests, so I run the re-index on all application indexes\nand then issue a full vacuum. I ran the same thing on a staging server\nand it took less than 12 hours. Is there a possibility the DB pages are\ncorrupted. Is there a command to verify that. (In Oracle, there's a\ndbverify command that checks for corruption on the data files level). \n\nThe other question I have. What would be the proper approach to rebuild\nindexes. I re-indexes and then run vacuum/analyze. Should I not use the\nre-index approach, and instead, drop the indexes, vacuum the tables, and\nthen create the indexes, then run analyze on tables and indexes?? \n\nThanks,\n\n\n-- \n Husam \n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Monday, July 25, 2005 3:49 PM\nTo: Tomeh, Husam; [email protected]\nSubject: Re: [PERFORM] \"Vacuum Full Analyze\" taking so long\n\nVacuum full takes an exclusive lock on the tables it runs against, so if\nyou have anything else reading the table while you are trying to run it,\nthe vacuum full will wait, possibly forever until it can get the lock.\n\nWhat does the system load look like while you are running this? What\ndoes vmstat 1 show you? Is there load on the system other than the\ndatabase?\n\nDo you really need to run vacuum full instead of vacuum?\n\n- Luke\n\n\nOn 7/25/05 2:30 PM, \"Tomeh, Husam\" <[email protected]> wrote:\n\n> \n> I have an 8.02 postgresql database with about 180 GB in size, running \n> on\n> 2.6 RedHat kernel with 32 GB of RAM and 2 CPUs. I'm running the vacuum\n\n> full analyze command, and has been running for at least two \n> consecutive days with no other processes running (it's an offline \n> loading server). I tweaked the maintenanace_mem to its max (2 GB) with\n\n> work_mem of 8M. I have no issues with my checkpoints. I can still I/O \n> activities against the physical files of the \"property\" table and its \n> two indexes (primary key and r index). The property files are about \n> 128GB and indexes are about 15 GB. I have run the same maintenance job\n\n> on a different box\n> (staging) with identical hardware config (except with 64 GB instead of\n> 32) and took less than 12 hours. Any clue or tip is really\n> appreciated. \n> \n> Also read a comment by Tom Lane, that terminating the process should \n> be crash-safe if I had to.\n> \n> Thanks,\n> \n\n\n\n**********************************************************************\nThis message contains confidential information intended only for the \nuse of the addressee(s) named above and may contain information that \nis legally privileged. If you are not the addressee, or the person \nresponsible for delivering it to the addressee, you are hereby \nnotified that reading, disseminating, distributing or copying this \nmessage is strictly prohibited. If you have received this message by \nmistake, please immediately notify us by replying to the message and \ndelete the original message immediately thereafter.\n\nThank you. FADLD Tag\n**********************************************************************\n\n", "msg_date": "Mon, 25 Jul 2005 16:14:16 -0700", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" }, { "msg_contents": "Tomeh, Husam wrote:\n>\n> Nothing was running except the job. 
The server did not look stressed out\n> looking at top and vmstat. We have seen slower query performance when\n> performing load tests, so I run the re-index on all application indexes\n> and then issue a full vacuum. I ran the same thing on a staging server\n> and it took less than 12 hours. Is there a possibility the DB pages are\n> corrupted. Is there a command to verify that. (In Oracle, there's a\n> dbverify command that checks for corruption on the data files level).\n>\n> The other question I have. What would be the proper approach to rebuild\n> indexes. I re-indexes and then run vacuum/analyze. Should I not use the\n> re-index approach, and instead, drop the indexes, vacuum the tables, and\n> then create the indexes, then run analyze on tables and indexes??\n\nI *think* if you are planning on dropping the indexes anyway, just drop\nthem, VACUUM ANALYZE, and then recreate them, I don't think you have to\nre-analyze after you have recreated them.\n\nJohn\n=:->\n\n>\n> Thanks,\n>\n>", "msg_date": "Mon, 25 Jul 2005 18:31:08 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" }, { "msg_contents": "\nHusam,\n\nOn 7/25/05 4:31 PM, \"John A Meinel\" <[email protected]> wrote:\n\n> Tomeh, Husam wrote:\n>> \n>> Nothing was running except the job. The server did not look stressed out\n>> looking at top and vmstat. We have seen slower query performance when\n>> performing load tests, so I run the re-index on all application indexes\n>> and then issue a full vacuum. I ran the same thing on a staging server\n>> and it took less than 12 hours. Is there a possibility the DB pages are\n>> corrupted. Is there a command to verify that. (In Oracle, there's a\n>> dbverify command that checks for corruption on the data files level).\n>> \n>> The other question I have. What would be the proper approach to rebuild\n>> indexes. I re-indexes and then run vacuum/analyze. Should I not use the\n>> re-index approach, and instead, drop the indexes, vacuum the tables, and\n>> then create the indexes, then run analyze on tables and indexes??\n> \n> I *think* if you are planning on dropping the indexes anyway, just drop\n> them, VACUUM ANALYZE, and then recreate them, I don't think you have to\n> re-analyze after you have recreated them.\n\nI agree - and don't run \"VACUUM FULL\", it is quite different from \"VACUUM\".\nAlso - you should only need to vacuum if you've deleted a lot of data. It's\njob is to reclaim space lost to rows marked deleted. So, in fact, you may\nnot even need to run VACUUM.\n\n\"VACUUM FULL\" is like a disk defragmentation operation within the DBMS, and\nis only necessary if there is a slowdown in performance from lots and lots\nof deletes and/or updates and new data isn't finding sequential pages for\nstorage, which is rare. Given the need for locking, it's generally better\nto dump and restore in that case, but again it's a very rare occasion.\n\nI don't know of a command to check for page corruption, but I would think\nthat if you can run VACUUM (not full) you should be OK.\n\n- Luke\n\n\n", "msg_date": "Mon, 25 Jul 2005 17:51:48 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" }, { "msg_contents": "Tomeh, Husam wrote:\n> The other question I have. What would be the proper approach to rebuild\n> indexes. I re-indexes and then run vacuum/analyze. 
Should I not use the\n> re-index approach, and instead, drop the indexes, vacuum the tables, and\n> then create the indexes, then run analyze on tables and indexes?? \n\nIf you just want to rebuild indexes, just drop and recreate.\n\nHowever, you are also running a VACUUM FULL, so I presume you \nhave deleted a significant number of rows and want to recover the \nspace that was in use by them. In that scenario, it is often \nbetter to CLUSTER the table to force a rebuild. While VACUUM FULL \nmoves the tuples around inside the existing file(s), CLUSTER \nsimply creates new file(s), moves all the non-deleted tuples \nthere and then swaps the old and the new files. There can be a \nsignificant performance increase in doing so (but you obviously \nneed to have some free diskspace).\nIf you CLUSTER your table it will be ordered by the index you \nspecify. There can be a performance increase in doing so, but if \nyou don't want to you can also do a no-op ALTER TABLE and change \na column to a datatype that is the same as it already has. This \ntoo will force a rewrite of the table but without ordering the \ntuples.\n\nSo in short my recommendations:\n- to rebuild indexes, just drop and recreate the indexes\n- to rebuild everything because there is space that can \nbepermanently reclaimed, drop indexes, cluster or alter the \ntable, recreate the indexes and anlyze the table\n\nJochem\n", "msg_date": "Tue, 26 Jul 2005 11:58:17 +0200", "msg_from": "Jochem van Dieten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" } ]
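For the 128 GB property table discussed earlier in this thread, the CLUSTER route described above would look roughly like this under 8.0. The index name property_pkey is a guess at the primary-key index, and the operation needs free disk space on the order of the table itself while it rewrites the files.

    -- 8.0 syntax: rewrite the table in the order of the named index,
    -- then refresh planner statistics
    CLUSTER property_pkey ON property;
    ANALYZE property;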
[ { "msg_contents": "I do not have any foreign keys and I need the indexes on during the\ninsert/copy b/c in production a few queries heavily dependent on the\nindexes will be issued. These queries will be infrequent, but must be\nfast when issued.\n\nI am using InnoDB with MySQL which appears to enforce true transaction\nsupport. (http://dev.mysql.com/doc/mysql/en/innodb-overview.html) If\nnot, how is InnoDB \"cheating\"?\n\nSorry for the confusion, but pg_xlog is currently on a dedicated drive\n(10K SCSI, see below). Would I realize further gains if I had a third\ndrive and put the indexes on that drive? \n\nI've played with the checkpoint_segments. I noticed an enormous\nimprovement increasing from the default to 40, but neglible improvement\nthereafter. Do you have a recommendation for a value?\n\nMy bg_writer adjustments were a last ditch effort. I found your advice\ncorrect and realized no gain. I have not tested under a querying load\nwhich is a good next step. I had not thought of the comparative\ndegradation of MySQL vs. PostgreSQL.\n\nThanks for the tip on the RAM usage by indexes. I was under the\nincorrect assumption that shared_buffers would take care of this. I'll\nincrease work_mem to 512MB and rerun my test. I have 1G of RAM, which\nis less than we'll be running in production (likely 2G).\n\n-----Original Message-----\nFrom: John A Meinel [mailto:[email protected]] \nSent: Monday, July 25, 2005 6:09 PM\nTo: Chris Isaacson; Postgresql Performance\nSubject: Re: [PERFORM] COPY insert performance\n\n\nChris Isaacson wrote:\n> I need COPY via libpqxx to insert millions of rows into two tables. \n> One table has roughly have as many rows and requires half the storage.\n\n> In production, the largest table will grow by ~30M rows/day. To test \n> the COPY performance I split my transactions into 10,000 rows. I \n> insert roughly 5000 rows into table A for every 10,000 rows into table\n\n> B.\n>\n> Table A has one unique index:\n>\n> \"order_main_pk\" UNIQUE, btree (cl_ord_id)\n>\n> Table B has 1 unique index and 2 non-unique indexes:\n>\n> \"order_transition_pk\" UNIQUE, btree (collating_seq) \n> \"order_transition_ak2\" btree (orig_cl_ord_id) \"order_transition_ak3\" \n> btree (exec_id)\n\nDo you have any foreign key references?\nIf you are creating a table for the first time (or loading a large\nfraction of the data), it is common to drop the indexes and foreign keys\nfirst, and then insert/copy, and then drop them again.\n\nIs InnoDB the backend with referential integrity, and true transaction\nsupport? I believe the default backend does not support either (so it is\n\"cheating\" to give you speed, which may be just fine for your needs,\nespecially since you are willing to run fsync=false).\n\nI think moving pg_xlog to a dedicated drive (set of drives) could help\nyour performance. As well as increasing checkpoint_segments.\n\nI don't know if you gain much by changing the bg_writer settings, if you\nare streaming everything in at once, you probably want to have it\nwritten out right away. My understanding is that bg_writer settings are\nfor the case where you have mixed read and writes going on at the same\ntime, and you want to make sure that the reads have time to execute (ie\nthe writes are not saturating your IO).\n\nAlso, is any of this tested under load? Having a separate process issue\nqueries while you are loading in data. 
Traditionally MySQL is faster\nwith a single process inserting/querying for data, but once you have\nmultiple processes hitting it at the same time, it's performance\ndegrades much faster than postgres.\n\nYou also seem to be giving MySQL 512M of ram to work with, while only\ngiving 2M/200M to postgres. (re)creating indexes uses\nmaintenance_work_mem, but updating indexes could easily use work_mem.\nYou may be RAM starved.\n\nJohn\n=:->\n\n\n>\n> My testing environment is as follows:\n> -Postgresql 8.0.1\n> -libpqxx 2.5.0\n> -Linux 2.6.11.4-21.7-smp x86_64\n> -Dual Opteron 246\n> -System disk (postgres data resides on this SCSI disk) - Seagate\n> (ST373453LC) - 15K, 73 GB\n> (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081\n> ,549,00.html)\n> -2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL reside\n> on this disk\n> -NO RAID\n>\n> *PostgreSQL*\n> Here are the results of copying in 10M rows as fast as possible:\n> (10K/transaction)\n> Total Time: 1129.556 s\n> Rows/sec: 9899.922\n> Transaction>1.2s 225\n> Transaction>1.5s 77\n> Transaction>2.0s 4\n> Max Transaction 2.325s\n>\n> **MySQL**\n> **I ran a similar test with MySQL 4.1.10a (InnoDB) which produced \n> these\n> results: (I used MySQL's INSERT INTO x VALUES\n> (1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)\n> Total Time: 860.000 s\n> Rows/sec: 11627.91\n> Transaction>1.2s 0\n> Transaction>1.5s 0\n> Transaction>2.0s 0\n> Max Transaction 1.175s\n>\n> Considering the configurations shown below, can anyone offer advice to\n\n> close the 15% gap and the much worse variability I'm experiencing. \n> Thanks\n>\n> My *postgresql.conf* has the following non-default values:\n> # -----------------------------\n> # PostgreSQL configuration file\n> # -----------------------------\n> listen_addresses = '*' # what IP interface(s) to listen on; \n> max_connections = 100\n> #---------------------------------------------------------------------\n> ------\n> # RESOURCE USAGE (except WAL)\n>\n#-----------------------------------------------------------------------\n----\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\n> work_mem = 2048 # min 64, size in KB\n> maintenance_work_mem = 204800 # min 1024, size in KB\n> max_fsm_pages = 2250000 # min max_fsm_relations*16, 6 bytes each\n> bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n> bgwriter_percent = 10 # 0-100% of dirty buffers in each round\n> bgwriter_maxpages = 1000 # 0-1000 buffers max per round\n>\n#-----------------------------------------------------------------------\n----\n> # WRITE AHEAD LOG\n>\n#-----------------------------------------------------------------------\n----\n> fsync = false # turns forced synchronization on or off\n> wal_buffers = 64 # min 4, 8KB each\n> checkpoint_segments = 40 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 600 # range 30-3600, in seconds\n>\n#-----------------------------------------------------------------------\n----\n> # QUERY TUNING\n>\n#-----------------------------------------------------------------------\n----\n> effective_cache_size = 65536 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n>\n#-----------------------------------------------------------------------\n----\n> # ERROR REPORTING AND LOGGING\n>\n#-----------------------------------------------------------------------\n----\n>\n> log_min_duration_statement = 250 # -1 is disabled, in milliseconds.\n> log_connections = true\n> log_disconnections = true\n> 
log_duration = true\n> log_line_prefix = '<%r%u%p%t%d%%' # e.g. '<%u%%%d> '\n> # %u=user name %d=database name\n> # %r=remote host and port\n> # %p=PID %t=timestamp %i=command tag\n> # %c=session id %l=session line number\n> # %s=session start timestamp %x=transaction id\n> # %q=stop here in non-session processes\n> # %%='%'\n> log_statement = 'none' # none, mod, ddl, all\n> #---------------------------------------------------------------------\n> ------\n> # RUNTIME STATISTICS\n>\n#-----------------------------------------------------------------------\n----\n> # - Query/Index Statistics Collector -\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n>\n> My MySQL *my.ini* has the following non default values: \n> innodb_data_home_dir = /var/lib/mysql/ innodb_data_file_path = \n> ibdata1:10M:autoextend innodb_log_group_home_dir = /var/lib/mysql/\n> innodb_log_arch_dir = /var/lib/mysql/\n> # You can set .._buffer_pool_size up to 50 - 80 %\n> # of RAM but beware of setting memory usage too high\n> innodb_buffer_pool_size = 512M\n> innodb_additional_mem_pool_size = 64M\n> # Set .._log_file_size to 25 % of buffer pool size\n> innodb_log_file_size = 128M\n> innodb_log_buffer_size = 64M\n> innodb_flush_log_at_trx_commit = 1\n> innodb_lock_wait_timeout = 50\n> innodb_flush_method = O_DSYNC\n> max_allowed_packet = 16M\n>\n>\n>\n\n", "msg_date": "Tue, 26 Jul 2005 07:15:15 -0500", "msg_from": "\"Chris Isaacson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY insert performance" }, { "msg_contents": "\nOn Jul 26, 2005, at 8:15 AM, Chris Isaacson wrote:\n>\n> I am using InnoDB with MySQL which appears to enforce true transaction\n> support. (http://dev.mysql.com/doc/mysql/en/innodb-overview.html) If\n> not, how is InnoDB \"cheating\"?\n>\n\nare you sure your tables are innodb?\nchances are high unless you explcitly stated \"type = innodb\" when \ncreating that they are myisam.\n\nlook at \"show table status\" output to verify.\n\n>\n> I've played with the checkpoint_segments. I noticed an enormous\n> improvement increasing from the default to 40, but neglible \n> improvement\n> thereafter. Do you have a recommendation for a value?\n\nthere's been a thread on -hackers recently about checkpoint issues.. \nin a nut shell there isn't much to do. But I'd say give bizgres a \ntry if you're going to be continually loading huge amounts of data.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n\n", "msg_date": "Tue, 26 Jul 2005 10:22:57 -0400", "msg_from": "Jeff Trout <[email protected]>", "msg_from_op": false, "msg_subject": "Re: COPY insert performance" } ]
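A minimal sketch of the load-session tuning discussed in this thread, assuming PostgreSQL 8.0.x and the order_transition indexes quoted above. The memory figures and the data-file path are illustrative placeholders rather than values from the thread, and the drop/rebuild variant suits a one-time initial load, not the steady 30M-rows/day case where the indexes must stay available.

-- Per-session memory bump around a bulk COPY (8.0 takes these values in KB):
BEGIN;
SET LOCAL work_mem = 262144;              -- per the advice above on index updates
SET LOCAL maintenance_work_mem = 524288;  -- used by CREATE INDEX / REINDEX, not by COPY itself
COPY order_transition FROM '/path/to/chunk.dat';  -- placeholder; COPY FROM STDIN via libpqxx behaves the same
COMMIT;

-- One-time initial load variant: drop the secondary indexes, load, rebuild, analyze.
DROP INDEX order_transition_ak2;
DROP INDEX order_transition_ak3;
-- ... run all the COPY chunks here ...
CREATE INDEX order_transition_ak2 ON order_transition (orig_cl_ord_id);
CREATE INDEX order_transition_ak3 ON order_transition (exec_id);
ANALYZE order_transition;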
[ { "msg_contents": "John,\n\n(FYI: got a failed to deliver to [email protected])\n\nI do not have any foreign keys and I need the indexes on during the\ninsert/copy b/c in production a few queries heavily dependent on the\nindexes will be issued. These queries will be infrequent, but must be\nfast when issued.\n\nI am using InnoDB with MySQL which appears to enforce true transaction\nsupport. (http://dev.mysql.com/doc/mysql/en/innodb-overview.html) If\nnot, how is InnoDB \"cheating\"?\n\nSorry for the confusion, but pg_xlog is currently on a dedicated drive\n(10K SCSI, see below). Would I realize further gains if I had a third\ndrive and put the indexes on that drive? =20\n\nI've played with the checkpoint_segments. I noticed an enormous\nimprovement increasing from the default to 40, but neglible improvement\nthereafter. Do you have a recommendation for a value?\n\nMy bg_writer adjustments were a last ditch effort. I found your advice\ncorrect and realized no gain. I have not tested under a querying load\nwhich is a good next step. I had not thought of the comparative\ndegradation of MySQL vs. PostgreSQL.\n\nThanks for the tip on the RAM usage by indexes. I was under the\nincorrect assumption that shared_buffers would take care of this. I'll\nincrease work_mem to 512MB and rerun my test. I have 1G of RAM, which\nis less than we'll be running in production (likely 2G).\n\n-Chris\n\n-----Original Message-----\nFrom: John A Meinel [mailto:[email protected]] \nSent: Monday, July 25, 2005 6:09 PM\nTo: Chris Isaacson; Postgresql Performance\nSubject: Re: [PERFORM] COPY insert performance\n\n\nChris Isaacson wrote:\n> I need COPY via libpqxx to insert millions of rows into two tables. \n> One table has roughly have as many rows and requires half the storage.\n\n> In production, the largest table will grow by ~30M rows/day. To test \n> the COPY performance I split my transactions into 10,000 rows. I \n> insert roughly 5000 rows into table A for every 10,000 rows into table\n\n> B.\n>\n> Table A has one unique index:\n>\n> \"order_main_pk\" UNIQUE, btree (cl_ord_id)\n>\n> Table B has 1 unique index and 2 non-unique indexes:\n>\n> \"order_transition_pk\" UNIQUE, btree (collating_seq) \n> \"order_transition_ak2\" btree (orig_cl_ord_id) \"order_transition_ak3\" \n> btree (exec_id)\n\nDo you have any foreign key references?\nIf you are creating a table for the first time (or loading a large\nfraction of the data), it is common to drop the indexes and foreign keys\nfirst, and then insert/copy, and then drop them again.\n\nIs InnoDB the backend with referential integrity, and true transaction\nsupport? I believe the default backend does not support either (so it is\n\"cheating\" to give you speed, which may be just fine for your needs,\nespecially since you are willing to run fsync=false).\n\nI think moving pg_xlog to a dedicated drive (set of drives) could help\nyour performance. As well as increasing checkpoint_segments.\n\nI don't know if you gain much by changing the bg_writer settings, if you\nare streaming everything in at once, you probably want to have it\nwritten out right away. My understanding is that bg_writer settings are\nfor the case where you have mixed read and writes going on at the same\ntime, and you want to make sure that the reads have time to execute (ie\nthe writes are not saturating your IO).\n\nAlso, is any of this tested under load? Having a separate process issue\nqueries while you are loading in data. 
Traditionally MySQL is faster\nwith a single process inserting/querying for data, but once you have\nmultiple processes hitting it at the same time, it's performance\ndegrades much faster than postgres.\n\nYou also seem to be giving MySQL 512M of ram to work with, while only\ngiving 2M/200M to postgres. (re)creating indexes uses\nmaintenance_work_mem, but updating indexes could easily use work_mem.\nYou may be RAM starved.\n\nJohn\n=:->\n\n\n>\n> My testing environment is as follows:\n> -Postgresql 8.0.1\n> -libpqxx 2.5.0\n> -Linux 2.6.11.4-21.7-smp x86_64\n> -Dual Opteron 246\n> -System disk (postgres data resides on this SCSI disk) - Seagate\n> (ST373453LC) - 15K, 73 GB\n> (http://www.seagate.com/cda/products/discsales/marketing/detail/0,1081\n> ,549,00.html)\n> -2nd logical disk - 10K, 36GB IBM SCSI (IC35L036UCDY10-0) - WAL reside\n> on this disk\n> -NO RAID\n>\n> *PostgreSQL*\n> Here are the results of copying in 10M rows as fast as possible:\n> (10K/transaction)\n> Total Time: 1129.556 s\n> Rows/sec: 9899.922\n> Transaction>1.2s 225\n> Transaction>1.5s 77\n> Transaction>2.0s 4\n> Max Transaction 2.325s\n>\n> **MySQL**\n> **I ran a similar test with MySQL 4.1.10a (InnoDB) which produced \n> these\n> results: (I used MySQL's INSERT INTO x VALUES\n> (1,2,3)(4,5,6)(...,...,...) syntax) (10K/transaction)\n> Total Time: 860.000 s\n> Rows/sec: 11627.91\n> Transaction>1.2s 0\n> Transaction>1.5s 0\n> Transaction>2.0s 0\n> Max Transaction 1.175s\n>\n> Considering the configurations shown below, can anyone offer advice to\n\n> close the 15% gap and the much worse variability I'm experiencing. \n> Thanks\n>\n> My *postgresql.conf* has the following non-default values:\n> # -----------------------------\n> # PostgreSQL configuration file\n> # -----------------------------\n> listen_addresses = '*' # what IP interface(s) to listen on; \n> max_connections = 100\n> #---------------------------------------------------------------------\n> ------\n> # RESOURCE USAGE (except WAL)\n>\n#-----------------------------------------------------------------------\n----\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\n> work_mem = 2048 # min 64, size in KB\n> maintenance_work_mem = 204800 # min 1024, size in KB\n> max_fsm_pages = 2250000 # min max_fsm_relations*16, 6 bytes each\n> bgwriter_delay = 200 # 10-10000 milliseconds between rounds\n> bgwriter_percent = 10 # 0-100% of dirty buffers in each round\n> bgwriter_maxpages = 1000 # 0-1000 buffers max per round\n>\n#-----------------------------------------------------------------------\n----\n> # WRITE AHEAD LOG\n>\n#-----------------------------------------------------------------------\n----\n> fsync = false # turns forced synchronization on or off\n> wal_buffers = 64 # min 4, 8KB each\n> checkpoint_segments = 40 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 600 # range 30-3600, in seconds\n>\n#-----------------------------------------------------------------------\n----\n> # QUERY TUNING\n>\n#-----------------------------------------------------------------------\n----\n> effective_cache_size = 65536 # typically 8KB each\n> random_page_cost = 2 # units are one sequential page fetch cost\n>\n#-----------------------------------------------------------------------\n----\n> # ERROR REPORTING AND LOGGING\n>\n#-----------------------------------------------------------------------\n----\n>\n> log_min_duration_statement = 250 # -1 is disabled, in milliseconds.\n> log_connections = true\n> log_disconnections = true\n> 
log_duration = true\n> log_line_prefix = '<%r%u%p%t%d%%' # e.g. '<%u%%%d> '\n> # %u=user name %d=database name\n> # %r=remote host and port\n> # %p=PID %t=timestamp %i=command tag\n> # %c=session id %l=session line number\n> # %s=session start timestamp %x=transaction id\n> # %q=stop here in non-session processes\n> # %%='%'\n> log_statement = 'none' # none, mod, ddl, all\n> #---------------------------------------------------------------------\n> ------\n> # RUNTIME STATISTICS\n>\n#-----------------------------------------------------------------------\n----\n> # - Query/Index Statistics Collector -\n> stats_start_collector = true\n> stats_command_string = true\n> stats_block_level = true\n> stats_row_level = true\n> stats_reset_on_server_start = true\n>\n> My MySQL *my.ini* has the following non default values: \n> innodb_data_home_dir = /var/lib/mysql/ innodb_data_file_path = \n> ibdata1:10M:autoextend innodb_log_group_home_dir = /var/lib/mysql/\n> innodb_log_arch_dir = /var/lib/mysql/\n> # You can set .._buffer_pool_size up to 50 - 80 %\n> # of RAM but beware of setting memory usage too high\n> innodb_buffer_pool_size = 512M\n> innodb_additional_mem_pool_size = 64M\n> # Set .._log_file_size to 25 % of buffer pool size\n> innodb_log_file_size = 128M\n> innodb_log_buffer_size = 64M\n> innodb_flush_log_at_trx_commit = 1\n> innodb_lock_wait_timeout = 50\n> innodb_flush_method = O_DSYNC\n> max_allowed_packet = 16M\n>\n>\n>\n\n", "msg_date": "Tue, 26 Jul 2005 07:23:02 -0500", "msg_from": "\"Chris Isaacson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY insert performance" } ]
[ { "msg_contents": "I need the chunks for each table COPYed within the same transaction\nwhich is why I'm not COPYing concurrently via multiple\nthreads/processes. I will experiment w/o OID's and decreasing the\nshared_buffers and wal_buffers.\n\nThanks,\nChris\n\n-----Original Message-----\nFrom: Gavin Sherry [mailto:[email protected]] \nSent: Tuesday, July 26, 2005 7:12 AM\nTo: Chris Isaacson\nCc: [email protected]\nSubject: Re: [PERFORM] COPY insert performance\n\n\nHi Chris,\n\nHave you considered breaking the data into multiple chunks and COPYing\neach concurrently?\n\nAlso, have you ensured that your table isn't storing OIDs?\n\nOn Mon, 25 Jul 2005, Chris Isaacson wrote:\n\n> #---------------------------------------------------------------------\n> --\n> ----\n> # RESOURCE USAGE (except WAL)\n>\n#-----------------------------------------------------------------------\n> ----\n> shared_buffers = 65536 # min 16, at least max_connections*2, 8KB each\n\nshared_buffers that high has been shown to affect performance. Try\n12000.\n\n> wal_buffers = 64 # min 4, 8KB each\n\nIncreasing wal_buffers can also have an effect on performance.\n\nThanks,\n\nGavin\n", "msg_date": "Tue, 26 Jul 2005 07:27:25 -0500", "msg_from": "\"Chris Isaacson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: COPY insert performance" } ]
[ { "msg_contents": "I saw a review of a relatively inexpensive RAM disk over at \nanandtech.com, the Gigabyte i-RAM\nhttp://www.anandtech.com/storage/showdoc.aspx?i=2480\n\nBasically, it is a PCI card, which takes standard DDR RAM, and has a \nSATA port on it, so that to the system, it looks like a normal SATA drive.\n\nThe card costs about $100-150, and you fill it with your own ram, so for \na 4GB (max size) disk, it costs around $500. Looking for solid state \nstorage devices, the cheapest I found was around $5k for 2GB.\n\nGigabyte claims that the battery backup can last up to 16h, which seems \ndecent, if not really long (the $5k solution has a built-in harddrive so \nthat if the power goes out, it uses the battery power to copy the \nramdisk onto the harddrive for more permanent storage).\n\nAnyway, would something like this be reasonable as a drive for storing \npg_xlog? With 4GB you could have as many as 256 checkpoint segments.\n\nI'm a little leary as it is definitely a version 1.0 product (it is \nstill using an FPGA as the controller, so they were obviously pushing to \nget the card into production).\n\nBut it seems like this might be a decent way to improve insert \nperformance, without setting fsync=false.\n\nProbably it should see some serious testing (as in power spikes/pulled \nplugs, etc). I know the article made some claim that if you actually \npull out the card it goes into \"high consumption mode\" which is somehow \ngreater than if you leave it in the slot with the power off. Which to me \nseems like a lot of bull, and really means the 16h is only under \nbest-case circumstances. But even 1-2h is sufficient to handle a simple \npower outage.\n\nAnd if you had a UPS with detection of power failure, you could always \nsync the ramdisk to a local partition before the power goes out. Though \nyou could do that with a normal in-memory ramdisk (tmpfs) without having \nto buy the card. Though it does give you up-to an extra 4GB of ram, for \nmachines which have already maxed out their slots.\n\nAnyway, I thought I would mention it to the list, to see if anyone else \nhas heard of it, or has any thoughts on the matter. I'm sure there are \nsome people who are using more expensive ram disks, maybe they have some \nideas about what this device is missing. (other than costing about \n1/10th the price)\n\nJohn\n=:->", "msg_date": "Tue, 26 Jul 2005 11:34:37 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Cheap RAM disk?" }, { "msg_contents": "[email protected] (John A Meinel) writes:\n> I saw a review of a relatively inexpensive RAM disk over at\n> anandtech.com, the Gigabyte i-RAM\n> http://www.anandtech.com/storage/showdoc.aspx?i=2480\n\nAnd the review shows that it's not *all* that valuable for many of the\ncases they looked at.\n\n> Basically, it is a PCI card, which takes standard DDR RAM, and has a\n> SATA port on it, so that to the system, it looks like a normal SATA\n> drive.\n>\n> The card costs about $100-150, and you fill it with your own ram, so\n> for a 4GB (max size) disk, it costs around $500. 
Looking for solid\n> state storage devices, the cheapest I found was around $5k for 2GB.\n>\n> Gigabyte claims that the battery backup can last up to 16h, which\n> seems decent, if not really long (the $5k solution has a built-in\n> harddrive so that if the power goes out, it uses the battery power to\n> copy the ramdisk onto the harddrive for more permanent storage).\n>\n> Anyway, would something like this be reasonable as a drive for storing\n> pg_xlog? With 4GB you could have as many as 256 checkpoint segments.\n>\n> I'm a little leary as it is definitely a version 1.0 product (it is\n> still using an FPGA as the controller, so they were obviously pushing\n> to get the card into production).\n\nWhat disappoints me is that nobody has tried the CF/RAM answer; rather\nthan putting a hard drive on the board, you put on some form of flash\ndevice (CompactFlash or such), where if power fails, it pushes data\nonto the CF. That ought to be cheaper (both in terms of hardware cost\nand power consumption) than using a hard disk.\n\n> But it seems like this might be a decent way to improve insert\n> performance, without setting fsync=false.\n\nThat's the case which might prove Ludicrously Quicker than any of the\nsample cases in the review.\n\n> Probably it should see some serious testing (as in power spikes/pulled\n> plugs, etc). I know the article made some claim that if you actually\n> pull out the card it goes into \"high consumption mode\" which is\n> somehow greater than if you leave it in the slot with the power\n> off. Which to me seems like a lot of bull, and really means the 16h is\n> only under best-case circumstances. But even 1-2h is sufficient to\n> handle a simple power outage.\n\nCertainly.\n\n> Anyway, I thought I would mention it to the list, to see if anyone\n> else has heard of it, or has any thoughts on the matter. I'm sure\n> there are some people who are using more expensive ram disks, maybe\n> they have some ideas about what this device is missing. (other than\n> costing about 1/10th the price)\n\nWell, if it hits a \"2.0\" version, it may get interesting...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 26 Jul 2005 12:51:14 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "On Tue, 2005-07-26 at 11:34 -0500, John A Meinel wrote:\n> I saw a review of a relatively inexpensive RAM disk over at \n> anandtech.com, the Gigabyte i-RAM\n> http://www.anandtech.com/storage/showdoc.aspx?i=2480\n> \n> Basically, it is a PCI card, which takes standard DDR RAM, and has a \n> SATA port on it, so that to the system, it looks like a normal SATA drive.\n> \n> The card costs about $100-150, and you fill it with your own ram, so for \n> a 4GB (max size) disk, it costs around $500. 
Looking for solid state \n> storage devices, the cheapest I found was around $5k for 2GB.\n> \n> Gigabyte claims that the battery backup can last up to 16h, which seems \n> decent, if not really long (the $5k solution has a built-in harddrive so \n> that if the power goes out, it uses the battery power to copy the \n> ramdisk onto the harddrive for more permanent storage).\n> \n> Anyway, would something like this be reasonable as a drive for storing \n> pg_xlog? With 4GB you could have as many as 256 checkpoint segments.\n\nI haven't tried this product, but the microbenchmarks seem truly slow.\nI think you would get a similar benefit by simply sticking a 1GB or 2GB\nDIMM -- battery-backed, of course -- in your RAID controller.\n\n-jwb\n", "msg_date": "Tue, 26 Jul 2005 10:42:19 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "On Jul 26, 2005, at 12:34 PM, John A Meinel wrote:\n\n> Basically, it is a PCI card, which takes standard DDR RAM, and has \n> a SATA port on it, so that to the system, it looks like a normal \n> SATA drive.\n>\n> The card costs about $100-150, and you fill it with your own ram, \n> so for a 4GB (max size) disk, it costs around $500. Looking for \n> solid state storage devices, the cheapest I found was around $5k \n> for 2GB.\n>\n\ngotta love /. don't ya?\n\nThis card doesn't accept ECC RAM therefore it is nothing more than a \ntoy. I wouldn't trust it as far as I could throw it.\n\nThere are other vendors of SSD's out there. Some even have *real* \npower fail strategies such as dumping to a physical disk. These are \nnot cheap, but you gets what ya pays for...\n\nVivek Khera, Ph.D.\n+1-301-869-4449 x806", "msg_date": "Tue, 26 Jul 2005 13:49:05 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "\n> I'm a little leary as it is definitely a version 1.0 product (it is\n> still using an FPGA as the controller, so they were obviously pushing to\n> get the card into production).\n\n\tNot necessarily. FPGA's have become a sensible choice now. My RME studio \nsoundcard uses a big FPGA.\n\n\tThe performance in the test doesn't look that good, though, but don't \nforget it was run under windows. For instance they get 77s to copy the \nFirefox source tree on their Athlon 64/raptor ; my Duron / 7200rpm ide \ndrive does it in 30 seconds, but not with windows of course.\n\n\tHowever it doesnt' use ECC so... That's a pity, because they could have \nimplemented ECC in \"software\" inside the chip, and have the benefits of \nerror correction with normal, cheap RAM.\n\n\tWell; wait and see...\n", "msg_date": "Tue, 26 Jul 2005 20:16:59 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "Yup - interesting and very niche product - it seems like it's only obvious\napplication is for the Postgresql WAL problem :-)\n\nThe real differentiator is the battery backup part. Otherwise, the\nfilesystem caching is more effective, so put the RAM on the motherboard.\n\n- Luke\n\n\n", "msg_date": "Tue, 26 Jul 2005 11:23:23 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" 
}, { "msg_contents": "Luke Lonergan wrote:\n> Yup - interesting and very niche product - it seems like it's only obvious\n> application is for the Postgresql WAL problem :-)\n\nWell, you could do it for any journaled system (XFS, JFS, ext3, reiserfs).\n\nBut yes, it seems specifically designed for a battery backed journal. \nThough the article reviews it for very different purposes.\n\nThough it was a Windows review, and I don't know of any way to make NTFS \nuse a separate device for a journal. (Though I expect it is possible \nsomehow).\n\nJohn\n=:->\n\n\n> \n> The real differentiator is the battery backup part. Otherwise, the\n> filesystem caching is more effective, so put the RAM on the motherboard.\n> \n> - Luke\n>", "msg_date": "Tue, 26 Jul 2005 13:28:03 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "On Tue, Jul 26, 2005 at 11:23:23AM -0700, Luke Lonergan wrote:\n>Yup - interesting and very niche product - it seems like it's only obvious\n>application is for the Postgresql WAL problem :-)\n\nOn the contrary--it's not obvious that it is an ideal fit for a WAL. A\nram disk like this is optimized for highly random access applications.\nThe WAL is a single sequential writer. If you're in the kind of market\nthat needs a really high performance WAL you'd be much better served by\nputting a big NVRAM cache in front of a fast disk array than by buying a\ntoy like this. \n\nMike Stone\n", "msg_date": "Tue, 26 Jul 2005 14:33:43 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "Please see:\n\nhttp://www.newegg.com/Product/Product.asp?Item=N82E16820145309\nand\nhttp://www.newegg.com/Product/Product.asp?Item=N82E16820145416\n\nThe price of Reg ECC is not significantly higher than regular ram at\nthis point. Plus if you go with super fast 2-2-2-6 then it's actualy\nmore than good ol 2.5 Reg ECC.\n\nAlex Turner\nNetEconomist\n\nOn 7/26/05, PFC <[email protected]> wrote:\n> \n> > I'm a little leary as it is definitely a version 1.0 product (it is\n> > still using an FPGA as the controller, so they were obviously pushing to\n> > get the card into production).\n> \n> Not necessarily. FPGA's have become a sensible choice now. My RME studio\n> soundcard uses a big FPGA.\n> \n> The performance in the test doesn't look that good, though, but don't\n> forget it was run under windows. For instance they get 77s to copy the\n> Firefox source tree on their Athlon 64/raptor ; my Duron / 7200rpm ide\n> drive does it in 30 seconds, but not with windows of course.\n> \n> However it doesnt' use ECC so... That's a pity, because they could have\n> implemented ECC in \"software\" inside the chip, and have the benefits of\n> error correction with normal, cheap RAM.\n> \n> Well; wait and see...\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n", "msg_date": "Tue, 26 Jul 2005 15:10:11 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "Also seems pretty silly to put it on a regular SATA connection, when\nall that can manage is 150MB/sec. 
If you made it connection directly\nto 66/64-bit PCI then it could actualy _use_ the speed of the RAM, not\nto mention PCI-X.\n\nAlex Turner\nNetEconomist\n\nOn 7/26/05, John A Meinel <[email protected]> wrote:\n> I saw a review of a relatively inexpensive RAM disk over at\n> anandtech.com, the Gigabyte i-RAM\n> http://www.anandtech.com/storage/showdoc.aspx?i=2480\n> \n> Basically, it is a PCI card, which takes standard DDR RAM, and has a\n> SATA port on it, so that to the system, it looks like a normal SATA drive.\n> \n> The card costs about $100-150, and you fill it with your own ram, so for\n> a 4GB (max size) disk, it costs around $500. Looking for solid state\n> storage devices, the cheapest I found was around $5k for 2GB.\n> \n> Gigabyte claims that the battery backup can last up to 16h, which seems\n> decent, if not really long (the $5k solution has a built-in harddrive so\n> that if the power goes out, it uses the battery power to copy the\n> ramdisk onto the harddrive for more permanent storage).\n> \n> Anyway, would something like this be reasonable as a drive for storing\n> pg_xlog? With 4GB you could have as many as 256 checkpoint segments.\n> \n> I'm a little leary as it is definitely a version 1.0 product (it is\n> still using an FPGA as the controller, so they were obviously pushing to\n> get the card into production).\n> \n> But it seems like this might be a decent way to improve insert\n> performance, without setting fsync=false.\n> \n> Probably it should see some serious testing (as in power spikes/pulled\n> plugs, etc). I know the article made some claim that if you actually\n> pull out the card it goes into \"high consumption mode\" which is somehow\n> greater than if you leave it in the slot with the power off. Which to me\n> seems like a lot of bull, and really means the 16h is only under\n> best-case circumstances. But even 1-2h is sufficient to handle a simple\n> power outage.\n> \n> And if you had a UPS with detection of power failure, you could always\n> sync the ramdisk to a local partition before the power goes out. Though\n> you could do that with a normal in-memory ramdisk (tmpfs) without having\n> to buy the card. Though it does give you up-to an extra 4GB of ram, for\n> machines which have already maxed out their slots.\n> \n> Anyway, I thought I would mention it to the list, to see if anyone else\n> has heard of it, or has any thoughts on the matter. I'm sure there are\n> some people who are using more expensive ram disks, maybe they have some\n> ideas about what this device is missing. (other than costing about\n> 1/10th the price)\n> \n> John\n> =:->\n> \n> \n> \n>\n", "msg_date": "Tue, 26 Jul 2005 15:11:42 -0400", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "Alex Turner wrote:\n> Also seems pretty silly to put it on a regular SATA connection, when\n> all that can manage is 150MB/sec. If you made it connection directly\n> to 66/64-bit PCI then it could actualy _use_ the speed of the RAM, not\n> to mention PCI-X.\n> \n> Alex Turner\n> NetEconomist\n> \n\nWell, the whole point is to have it look like a normal SATA drive, even \nto the point that you can boot off of it, without having to load a \nsingle driver.\n\nNow, you could offer that you could recreate a SATA controller on the \ncard, with a SATA bios, etc. 
And then you could get the increased speed, \nand still have bootable functionality.\n\nBut it is a version 1.0 of a product, and I'm sure they tried to make it \nas cheap as possible (and within their own capabilities.)\n\nJohn\n=:->", "msg_date": "Tue, 26 Jul 2005 14:15:27 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "> you'd be much better served by\n> putting a big NVRAM cache in front of a fast disk array\n\nI agree with the point below, but I think price was the issue of the\noriginal discussion. That said, it seems that a single high speed spindle\nwould give this a run for its money in both price and performance, and for\nthe same reasons Mike points out. Maybe a SCSI 160 or 320 at 15k, or maybe\neven something slower.\n\nRick\n\[email protected] wrote on 07/26/2005 01:33:43 PM:\n\n> On Tue, Jul 26, 2005 at 11:23:23AM -0700, Luke Lonergan wrote:\n> >Yup - interesting and very niche product - it seems like it's only\nobvious\n> >application is for the Postgresql WAL problem :-)\n>\n> On the contrary--it's not obvious that it is an ideal fit for a WAL. A\n> ram disk like this is optimized for highly random access applications.\n> The WAL is a single sequential writer. If you're in the kind of market\n> that needs a really high performance WAL you'd be much better served by\n> putting a big NVRAM cache in front of a fast disk array than by buying a\n> toy like this.\n>\n> Mike Stone\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n", "msg_date": "Tue, 26 Jul 2005 15:02:01 -0500", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" }, { "msg_contents": "[email protected] (\"Jeffrey W. Baker\") writes:\n> I haven't tried this product, but the microbenchmarks seem truly\n> slow. I think you would get a similar benefit by simply sticking a\n> 1GB or 2GB DIMM -- battery-backed, of course -- in your RAID\n> controller.\n\nWell, the microbenchmarks were pretty pre-sophomoric, essentially\ntrying to express how the device would be useful to a Windows user\nthat *might* play games...\n\nI'm sure it's hurt by the fact that it's using a SATA (\"version 1\")\ninterface rather than something faster.\n\nMind you, I'd like to see the product succeed, because they might come\nup with a \"version 2\" of it that is what I'd really like...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://www.ntlug.org/~cbbrowne/sap.html\nRules of the Evil Overlord #78. \"I will not tell my Legions of Terror\n\"And he must be taken alive!\" The command will be: ``And try to take\nhim alive if it is reasonably practical.''\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Tue, 26 Jul 2005 18:12:10 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheap RAM disk?" } ]
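On the pg_xlog placement that keeps coming up in this thread: the usual 8.0-era technique is simply a symlink, no special option required. A rough sketch, with the data directory and the fast device's mount point as placeholders; stop the postmaster first, and make sure the target device genuinely honours write flushes before trusting it with the WAL.

pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/fastlog/pg_xlog
ln -s /mnt/fastlog/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start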
[ { "msg_contents": "I am working on a process that will be inserting tens of million rows \nand need this to be as quick as possible.\n\nThe catch is that for each row I could potentially insert, I need to \nlook and see if the relationship is already there to prevent \nmultiple entries. Currently I am doing a SELECT before doing the \nINSERT, but I recognize the speed penalty in doing to operations. I \nwonder if there is some way I can say \"insert this record, only if it \ndoesn't exist already\". To see if it exists, I would need to compare \n3 fields instead of just enforcing a primary key.\n\nEven if this could be a small increase per record, even a few percent \nfaster compounded over the whole load could be a significant reduction.\n\nThanks for any ideas you might have.\n\n-Dan\n", "msg_date": "Tue, 26 Jul 2005 10:50:14 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "faster INSERT with possible pre-existing row?" }, { "msg_contents": "Dan Harris wrote:\n> I am working on a process that will be inserting tens of million rows \n> and need this to be as quick as possible.\n> \n> The catch is that for each row I could potentially insert, I need to \n> look and see if the relationship is already there to prevent multiple \n> entries. Currently I am doing a SELECT before doing the INSERT, but I \n> recognize the speed penalty in doing to operations. I wonder if there \n> is some way I can say \"insert this record, only if it doesn't exist \n> already\". To see if it exists, I would need to compare 3 fields \n> instead of just enforcing a primary key.\n> \n> Even if this could be a small increase per record, even a few percent \n> faster compounded over the whole load could be a significant reduction.\n> \n> Thanks for any ideas you might have.\n> \n> -Dan\n> \n\nYou could insert all of your data into a temporary table, and then do:\n\nINSERT INTO final_table SELECT * FROM temp_table WHERE NOT EXISTS \n(SELECT info FROM final_table WHERE id=id, path=path, y=y);\n\nOr you could load it into the temporary table, and then:\nDELETE FROM temp_table WHERE EXISTS (SELECT FROM final_table WHERE id...);\n\nAnd then do a plain INSERT INTO.\n\nI can't say what the specific performance increases would be, but \ntemp_table could certainly be an actual TEMP table (meaning it only \nexists during the connection), and you could easily do a COPY into that \ntable to load it up quickly, without having to check any constraints.\n\nJust a thought,\nJohn\n=:->", "msg_date": "Tue, 26 Jul 2005 11:56:16 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" 
}, { "msg_contents": "John,\n\nOn 7/26/05 9:56 AM, \"John A Meinel\" <[email protected]> wrote:\n\n> You could insert all of your data into a temporary table, and then do:\n> \n> INSERT INTO final_table SELECT * FROM temp_table WHERE NOT EXISTS\n> (SELECT info FROM final_table WHERE id=id, path=path, y=y);\n> \n> Or you could load it into the temporary table, and then:\n> DELETE FROM temp_table WHERE EXISTS (SELECT FROM final_table WHERE id...);\n> \n> And then do a plain INSERT INTO.\n> \n> I can't say what the specific performance increases would be, but\n> temp_table could certainly be an actual TEMP table (meaning it only\n> exists during the connection), and you could easily do a COPY into that\n> table to load it up quickly, without having to check any constraints.\n\nYah - that's a typical approach, and it would be excellent if the COPY\nbypassed WAL for the temp table load. This is something we discussed in\nbizgres development a while back. I think we should do this for sure -\nwould nearly double the temp table load rate, and the subsequent temp table\ndelete *should* be fast enough (?) Any performance tests you've done on\nthat delete/subselect operation?\n\n- Luke\n\n\n", "msg_date": "Tue, 26 Jul 2005 11:46:33 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" }, { "msg_contents": "On Tue, 2005-07-26 at 10:50 -0600, Dan Harris wrote:\n> I am working on a process that will be inserting tens of million rows \n> and need this to be as quick as possible.\n> \n> The catch is that for each row I could potentially insert, I need to \n> look and see if the relationship is already there to prevent \n> multiple entries. Currently I am doing a SELECT before doing the \n> INSERT, but I recognize the speed penalty in doing to operations. I \n> wonder if there is some way I can say \"insert this record, only if it \n> doesn't exist already\". To see if it exists, I would need to compare \n> 3 fields instead of just enforcing a primary key.\n> \n> Even if this could be a small increase per record, even a few percent \n> faster compounded over the whole load could be a significant reduction.\n> \n> Thanks for any ideas you might have.\n> \n\nPerhaps a trigger:\n\nCREATE FUNCTION verify_unique() RETURNS TRIGGER AS $func$\nBEGIN\n\tPERFORM a,b,c FROM table1 WHERE a = NEW.a and b = NEW.b and c = NEW.c;\n\tIF FOUND THEN \n\t\tRETURN NULL;\n\tEND IF;\n\tRETURN NEW;\nEND;\n$func$ LANGUAGE plpgsql STABLE;\n\nCREATE TRIGGER verify_unique BEFORE INSERT ON table1 FOR EACH ROW\nEXECUTE PROCEDURE verify_unique();\n\nTriggers are fired on COPY commands and if table1 is able to be cached\nand you have an index on table1(a,b,c) the results should be fairly\ndecent. I would be interested in seeing the difference in timing between\nthis approach and the temp table approach.\n\n", "msg_date": "Tue, 26 Jul 2005 15:03:17 -0400", "msg_from": "Sven Willenberger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" }, { "msg_contents": "On 7/26/05, Dan Harris <[email protected]> wrote:\n> I am working on a process that will be inserting tens of million rows\n> and need this to be as quick as possible.\n> \n> The catch is that for each row I could potentially insert, I need to\n> look and see if the relationship is already there to prevent\n> multiple entries. 
Currently I am doing a SELECT before doing the\n> INSERT, but I recognize the speed penalty in doing to operations. I\n> wonder if there is some way I can say \"insert this record, only if it\n> doesn't exist already\". To see if it exists, I would need to compare\n> 3 fields instead of just enforcing a primary key.\n\nI struggled with this for a while. At first I tried stored procedures\nand triggers, but it took very long (over 24 hours for my dataset).\nAfter several iterations of rewritting it, first into C# then into\nPython I got the whole process down to under 30 min.\n\nMy scenario is this:\nI want to normalize log data. For example, for the IP address in a log\nentry, I need to look up the unique id of the IP address, or if the IP\naddress is new, insert it and then return the newly created entry.\nMultiple processes use the data, but only one process, run daily,\nactually changes it. Because this one process knows that the data is\nstatic, it selects the tables into in-memory hash tables (C#) or\nDictionaries (Python) and then does the lookups there. It is *super*\nfast, but it uses a *lot* of ram. ;-)\n\nTo limit the ram, I wrote a version of the python code that uses gdbm\nfiles instead of Dictionaries. This requires a newer version of Python\n(to allow a gdbm db to work just like a dictionary) but makes life\neasier in case someone is using my software on a lower end machine.\nThis doubled the time of the lookups from about 15 minutes to 30,\nbringing the whole process to about 45 minutes.\n\n-- \nMatthew Nuzum\nwww.bearfruit.org\n", "msg_date": "Tue, 26 Jul 2005 14:35:17 -0500", "msg_from": "Matthew Nuzum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" }, { "msg_contents": "Matthew Nuzum wrote:\n> On 7/26/05, Dan Harris <[email protected]> wrote:\n> \n>>I am working on a process that will be inserting tens of million rows\n>>and need this to be as quick as possible.\n>>\n>>The catch is that for each row I could potentially insert, I need to\n>>look and see if the relationship is already there to prevent\n>>multiple entries. Currently I am doing a SELECT before doing the\n>>INSERT, but I recognize the speed penalty in doing to operations. I\n>>wonder if there is some way I can say \"insert this record, only if it\n>>doesn't exist already\". To see if it exists, I would need to compare\n>>3 fields instead of just enforcing a primary key.\n> \n> \n> I struggled with this for a while. At first I tried stored procedures\n> and triggers, but it took very long (over 24 hours for my dataset).\n> After several iterations of rewritting it, first into C# then into\n> Python I got the whole process down to under 30 min.\n> \n> My scenario is this:\n> I want to normalize log data. For example, for the IP address in a log\n> entry, I need to look up the unique id of the IP address, or if the IP\n> address is new, insert it and then return the newly created entry.\n> Multiple processes use the data, but only one process, run daily,\n> actually changes it. Because this one process knows that the data is\n> static, it selects the tables into in-memory hash tables (C#) or\n> Dictionaries (Python) and then does the lookups there. It is *super*\n> fast, but it uses a *lot* of ram. ;-)\n> \n> To limit the ram, I wrote a version of the python code that uses gdbm\n> files instead of Dictionaries. 
This requires a newer version of Python\n> (to allow a gdbm db to work just like a dictionary) but makes life\n> easier in case someone is using my software on a lower end machine.\n> This doubled the time of the lookups from about 15 minutes to 30,\n> bringing the whole process to about 45 minutes.\n> \n\nDid you ever try the temp table approach? You could:\n\nCOPY all records into temp_table, with an empty row for ip_id\n-- Get any entries which already exist\nUPDATE temp_table SET ip_id =\n\t(SELECT ip_id from ipaddress WHERE add=add)\n WHERE EXISTS (SELECT ip_id FROM ipaddress WHERE add=add);\n-- Create new entries\nINSERT INTO ipaddress(add) SELECT add FROM temp_table\n WHERE ip_id IS NULL;\n-- Update the rest\nUPDATE temp_table SET ip_id =\n\t(SELECT ip_id from ipaddress WHERE add=add)\n WHERE ip_id IS NULL AND\n\tEXISTS (SELECT ip_id FROM ipaddress WHERE add=add);\n\nThis would let the database do all of the updating work in bulk on it's \nside, rather than you pulling all the data out and doing it locally.\n\nAn alternative would be something like:\n\nCREATE TEMP TABLE new_ids (address text, ip_id int);\nCOPY all potentially new addresses into that table.\n-- Delete all entries which already exist\nDELETE FROM new_ids WHERE EXISTS\n\t(SELECT ip_id FROM ipaddresses\n \t WHERE add=new_ids.address);\n-- Now create the new entries\nINSERT INTO ipaddresses(add) SELECT address FROM new_ids;\n\n-- At this point you are guaranteed to have all addresses existing in\n-- the database\n\nIf you then insert your full data into the final table, only leave the \nip_id column as null. Then if you have a partial index where ip_id is \nNULL, you could use the command:\n\nUPDATE final_table SET ip_id =\n\t(SELECT ip_id FROM ipaddresses WHERE add=final_table.add)\nWHERE ip_id IS NULL;\n\nYou could also do this in a temporary table, before bulk inserting into \nthe final table.\n\nI don't know what you have tried, but I know that for Dan, he easily has \n > 36M rows. So I don't think he wants to pull that locally and create a \nin-memory hash just to insert 100 rows or so.\n\nAlso, for your situation, if you do keep a local cache, you could \ncertainly save the cache between runs, and use a temp table to determine \nwhat new ids you need to add to it. Then you wouldn't have to pull the \ncomplete set each time. You just pull new values for entries you haven't \nadded yet.\n\nJohn\n=:->", "msg_date": "Tue, 26 Jul 2005 14:51:11 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" }, { "msg_contents": "Easier and faster than doing the custom trigger is to simply define a\nunique index and let the DB enforce the constraint with an index lookup,\nsomething like:\n\ncreate unique index happy_index ON happy_table(col1, col2, col3);\n\nThat should run faster than the custom trigger, but not as fast as the\ntemp table solution suggested elsewhere because it will need to do an\nindex lookup for each row. With this solution, it is important that\nyour shared_buffers are set high enough that the happy_index can be kept\nin memory, otherwise performance will drop precipitously. 
Also, if you\nare increasing the size of the table by a large percentage, you will\nwant to ANALYZE periodically, as an optimal plan for a small table may\nbe a disaster for a large table, and PostgreSQL won't switch plans\nunless you run ANALYZE.\n\n-- Mark\n\nOn Tue, 2005-07-26 at 14:51 -0500, John A Meinel wrote:\n> Matthew Nuzum wrote:\n> > On 7/26/05, Dan Harris <[email protected]> wrote:\n> > \n> >>I am working on a process that will be inserting tens of million rows\n> >>and need this to be as quick as possible.\n> >>\n> >>The catch is that for each row I could potentially insert, I need to\n> >>look and see if the relationship is already there to prevent\n> >>multiple entries. Currently I am doing a SELECT before doing the\n> >>INSERT, but I recognize the speed penalty in doing to operations. I\n> >>wonder if there is some way I can say \"insert this record, only if it\n> >>doesn't exist already\". To see if it exists, I would need to compare\n> >>3 fields instead of just enforcing a primary key.\n> > \n> > \n> > I struggled with this for a while. At first I tried stored procedures\n> > and triggers, but it took very long (over 24 hours for my dataset).\n> > After several iterations of rewritting it, first into C# then into\n> > Python I got the whole process down to under 30 min.\n> > \n> > My scenario is this:\n> > I want to normalize log data. For example, for the IP address in a log\n> > entry, I need to look up the unique id of the IP address, or if the IP\n> > address is new, insert it and then return the newly created entry.\n> > Multiple processes use the data, but only one process, run daily,\n> > actually changes it. Because this one process knows that the data is\n> > static, it selects the tables into in-memory hash tables (C#) or\n> > Dictionaries (Python) and then does the lookups there. It is *super*\n> > fast, but it uses a *lot* of ram. ;-)\n> > \n> > To limit the ram, I wrote a version of the python code that uses gdbm\n> > files instead of Dictionaries. This requires a newer version of Python\n> > (to allow a gdbm db to work just like a dictionary) but makes life\n> > easier in case someone is using my software on a lower end machine.\n> > This doubled the time of the lookups from about 15 minutes to 30,\n> > bringing the whole process to about 45 minutes.\n> > \n> \n> Did you ever try the temp table approach? 
You could:\n> \n> COPY all records into temp_table, with an empty row for ip_id\n> -- Get any entries which already exist\n> UPDATE temp_table SET ip_id =\n> \t(SELECT ip_id from ipaddress WHERE add=add)\n> WHERE EXISTS (SELECT ip_id FROM ipaddress WHERE add=add);\n> -- Create new entries\n> INSERT INTO ipaddress(add) SELECT add FROM temp_table\n> WHERE ip_id IS NULL;\n> -- Update the rest\n> UPDATE temp_table SET ip_id =\n> \t(SELECT ip_id from ipaddress WHERE add=add)\n> WHERE ip_id IS NULL AND\n> \tEXISTS (SELECT ip_id FROM ipaddress WHERE add=add);\n> \n> This would let the database do all of the updating work in bulk on it's \n> side, rather than you pulling all the data out and doing it locally.\n> \n> An alternative would be something like:\n> \n> CREATE TEMP TABLE new_ids (address text, ip_id int);\n> COPY all potentially new addresses into that table.\n> -- Delete all entries which already exist\n> DELETE FROM new_ids WHERE EXISTS\n> \t(SELECT ip_id FROM ipaddresses\n> \t WHERE add=new_ids.address);\n> -- Now create the new entries\n> INSERT INTO ipaddresses(add) SELECT address FROM new_ids;\n> \n> -- At this point you are guaranteed to have all addresses existing in\n> -- the database\n> \n> If you then insert your full data into the final table, only leave the \n> ip_id column as null. Then if you have a partial index where ip_id is \n> NULL, you could use the command:\n> \n> UPDATE final_table SET ip_id =\n> \t(SELECT ip_id FROM ipaddresses WHERE add=final_table.add)\n> WHERE ip_id IS NULL;\n> \n> You could also do this in a temporary table, before bulk inserting into \n> the final table.\n> \n> I don't know what you have tried, but I know that for Dan, he easily has \n> > 36M rows. So I don't think he wants to pull that locally and create a \n> in-memory hash just to insert 100 rows or so.\n> \n> Also, for your situation, if you do keep a local cache, you could \n> certainly save the cache between runs, and use a temp table to determine \n> what new ids you need to add to it. Then you wouldn't have to pull the \n> complete set each time. You just pull new values for entries you haven't \n> added yet.\n> \n> John\n> =:->\n\n", "msg_date": "Tue, 26 Jul 2005 15:29:23 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" }, { "msg_contents": "Insert into a temp table then use INSERT INTO...SELECT FROM to insert \nall rows into the proper table that don't have a relationship.\n\nChris\n\nDan Harris wrote:\n> I am working on a process that will be inserting tens of million rows \n> and need this to be as quick as possible.\n> \n> The catch is that for each row I could potentially insert, I need to \n> look and see if the relationship is already there to prevent multiple \n> entries. Currently I am doing a SELECT before doing the INSERT, but I \n> recognize the speed penalty in doing to operations. I wonder if there \n> is some way I can say \"insert this record, only if it doesn't exist \n> already\". 
To see if it exists, I would need to compare 3 fields \n> instead of just enforcing a primary key.\n> \n> Even if this could be a small increase per record, even a few percent \n> faster compounded over the whole load could be a significant reduction.\n> \n> Thanks for any ideas you might have.\n> \n> -Dan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n", "msg_date": "Wed, 27 Jul 2005 09:55:47 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: faster INSERT with possible pre-existing row?" } ]
[ { "msg_contents": "Hannu,\n\nOn 7/26/05 11:56 AM, \"Hannu Krosing\" <[email protected]> wrote:\n\n> On T, 2005-07-26 at 11:46 -0700, Luke Lonergan wrote:\n> \n>> Yah - that's a typical approach, and it would be excellent if the COPY\n>> bypassed WAL for the temp table load.\n> \n> Don't *all* operations on TEMP tables bypass WAL ?\n\nGood question - do they? We had discussed the bypass as an elective option,\nor an automated one for special conditions (no index on table, empty table)\nor both. I thought that temp tables was one of those special conditions.\n\nWell - now that I test it, it appears you are correct, temp table COPY\nbypasses WAL - thanks for pointing it out!\n\nThe following test is on a load of 200MB of table data from an ASCII file\nwith 1 text column of size 145MB.\n\n- Luke\n\n===================== TEST ===========================\ndgtestdb=# create temporary table temp1 (a text);\nCREATE TABLE\ndgtestdb=# \\timing\nTiming is on.\ndgtestdb=# \\i copy.ctl\nCOPY\nTime: 4549.212 ms\ndgtestdb=# \\i copy.ctl\nCOPY\nTime: 3897.395 ms\n\n-- that's two tests, two loads of 200MB each, averaging 4.2 secs\n\ndgtestdb=# create table temp2 as select * from temp1;\nSELECT\nTime: 5914.803 ms\n\n-- a quick comparison to \"CREATE TABLE AS SELECT\", which bypasses WAL\n-- on bizgres\n\ndgtestdb=# drop table temp1;\nDROP TABLE\nTime: 135.782 ms\ndgtestdb=# drop table temp2;\nDROP TABLE\nTime: 3.707 ms\ndgtestdb=# create table temp1 (a text);\nCREATE TABLE\nTime: 1.667 ms\ndgtestdb=# \\i copy.ctl\nCOPY\nTime: 6034.274 ms\ndgtestdb=# \n\n-- This was a non-temporary table COPY, showing the slower performance of 6\nsecs.\n\n- Luke\n\n\n", "msg_date": "Tue, 26 Jul 2005 12:30:00 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Bizgres-general] Re: faster INSERT with possible" }, { "msg_contents": "Luke,\n\n> Well - now that I test it, it appears you are correct, temp table COPY\n> bypasses WAL - thanks for pointing it out!\n\nRIght. The problem is bypassing WAL for loading new \"scratch\" tables which \naren't TEMPORARY tables. We need to do this for multi-threaded ETL, since:\na) Temp tables can't be shared by several writers, and\nb) you can't index a temp table.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 27 Jul 2005 09:29:09 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bizgres-general] Re: faster INSERT with possible" }, { "msg_contents": "\n\nOn Wed, 27 Jul 2005, Josh Berkus wrote:\n\n> b) you can't index a temp table.\n> \n\njurka# create temp table t (a int);\nCREATE\njurka# create index myi on t(a);\nCREATE\n", "msg_date": "Wed, 27 Jul 2005 13:28:44 -0500 (EST)", "msg_from": "Kris Jurka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bizgres-general] Re: faster INSERT with possible" }, { "msg_contents": "On Wed, 2005-07-27 at 09:29 -0700, Josh Berkus wrote:\n> Luke,\n> \n> > Well - now that I test it, it appears you are correct, temp table COPY\n> > bypasses WAL - thanks for pointing it out!\n> \n> RIght. The problem is bypassing WAL for loading new \"scratch\" tables which \n> aren't TEMPORARY tables. We need to do this for multi-threaded ETL, since:\n> a) Temp tables can't be shared by several writers, and\n> b) you can't index a temp table.\n\nThe description of \"scratch\" tables might need some slight\nclarification. 
It kindof makes it sound like temp tables.\n\nI had in mind the extra tables that an application sometimes needs to\noperate faster. Denormalisations, pre-joined tables, pre-calculated\nresults, aggregated data. These are not temporary tables, just part of\nthe application - multi-user tables that stay across shutdown/restart.\n\nIf you have gallons of GB, you will probably by looking to make use of\nsuch tables.\n\nYou can use such tables for the style of ETL known as ELT, but that is\nnot the only use.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 27 Jul 2005 22:37:47 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bizgres-general] Re: faster INSERT with possible" }, { "msg_contents": "\n\n> I had in mind the extra tables that an application sometimes needs to\n> operate faster. Denormalisations, pre-joined tables, pre-calculated\n> results, aggregated data. These are not temporary tables, just part of\n> the application - multi-user tables that stay across shutdown/restart.\n\n\tYou could also add caching search results for easy pagination without \nredoing always entirely on each page the Big Slow Search Query that every \nwebsite has...\n", "msg_date": "Wed, 27 Jul 2005 23:53:02 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bizgres-general] Re: faster INSERT with possible" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> RIght. The problem is bypassing WAL for loading new \"scratch\" tables which \n> aren't TEMPORARY tables. We need to do this for multi-threaded ETL, since:\n> a) Temp tables can't be shared by several writers, and\n> b) you can't index a temp table.\n\nThis may not matter given point (a), but: point (b) is completely wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jul 2005 20:41:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Bizgres-general] Re: faster INSERT with possible " } ]
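A single-session sketch tying the points above together: COPY into a temporary table (which the test earlier in the thread shows is not WAL-logged), index it (which, as demonstrated above, temporary tables do allow), then materialise the transformed result as an ordinary table. The column name comes from the test above; the file path and result-table name are placeholders.

BEGIN;
CREATE TEMP TABLE load_stage (a text) ON COMMIT DROP;
COPY load_stage FROM '/tmp/chunk.dat';           -- placeholder path
CREATE INDEX load_stage_a_idx ON load_stage (a);
ANALYZE load_stage;
-- transform or aggregate, then publish the result as a regular "scratch" table:
CREATE TABLE scratch_summary AS
    SELECT a, count(*) AS n
    FROM load_stage
    GROUP BY a;
COMMIT;

The temporary table remains private to the one session, so this does not address the multi-writer ETL case raised above.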
[ { "msg_contents": "My application is using Firebird 1.5.2\n\nI have at my database:\n- 150 Doamins\n- 318 tables\n- 141 Views\n- 365 Procedures\n- 407 Triggers\n- 75 generators\n- 161 Exceptions\n- 183 UDFs\n- 1077 Indexes\n\nMy question is:\n\nPostgre SQL will be more faster than Firebird? How much (in percent)?\n\nI need about 20% to 50% more performance at my application.\nCan I get this migratin to postgresql ?\n\n", "msg_date": "Tue, 26 Jul 2005 16:35:19 -0300 (EST)", "msg_from": "\"Roberto Germano Vieweg Neto\" <[email protected]>", "msg_from_op": true, "msg_subject": "[IMPORTANT] - My application performance" }, { "msg_contents": "The number of objects in your system has virtually nothing to do with\nperformance (at least on any decent database...)\n\nWhat is your application doing? What's the bottleneck right now?\n\nOn Tue, Jul 26, 2005 at 04:35:19PM -0300, Roberto Germano Vieweg Neto wrote:\n> My application is using Firebird 1.5.2\n> \n> I have at my database:\n> - 150 Doamins\n> - 318 tables\n> - 141 Views\n> - 365 Procedures\n> - 407 Triggers\n> - 75 generators\n> - 161 Exceptions\n> - 183 UDFs\n> - 1077 Indexes\n> \n> My question is:\n> \n> Postgre SQL will be more faster than Firebird? How much (in percent)?\n> \n> I need about 20% to 50% more performance at my application.\n> Can I get this migratin to postgresql ?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 26 Jul 2005 15:05:00 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [IMPORTANT] - My application performance" }, { "msg_contents": "What's the your platform? Windows or Linux?\nWhat's the data volume (up million records)?\n\n-----Mensagem original-----\nDe: [email protected]\n[mailto:[email protected]] Em nome de Roberto Germano\nVieweg Neto\nEnviada em: terça-feira, 26 de julho de 2005 16:35\nPara: [email protected]\nAssunto: [PERFORM] [IMPORTANT] - My application performance\n\nMy application is using Firebird 1.5.2\n\nI have at my database:\n- 150 Doamins\n- 318 tables\n- 141 Views\n- 365 Procedures\n- 407 Triggers\n- 75 generators\n- 161 Exceptions\n- 183 UDFs\n- 1077 Indexes\n\nMy question is:\n\nPostgre SQL will be more faster than Firebird? 
How much (in percent)?\n\nI need about 20% to 50% more performance at my application.\nCan I get this migratin to postgresql ?\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n", "msg_date": "Tue, 26 Jul 2005 18:00:52 -0300", "msg_from": "\"Alvaro Neto\" <[email protected]>", "msg_from_op": false, "msg_subject": "RES: [IMPORTANT] - My application performance" }, { "msg_contents": "\n\nRoberto Germano Vieweg Neto wrote:\n> My application is using Firebird 1.5.2\n> \n> I have at my database:\n> - 150 Doamins\n> - 318 tables\n> - 141 Views\n> - 365 Procedures\n> - 407 Triggers\n> - 75 generators\n> - 161 Exceptions\n> - 183 UDFs\n> - 1077 Indexes\n> \n> My question is:\n> \n> Postgre SQL will be more faster than Firebird? How much (in percent)?\n\nI think you can probably expect around 10341.426% improvement.\n\n\n\n\n\n\n\nps. Yes, I am joking just in case...\n\n", "msg_date": "Wed, 27 Jul 2005 10:02:21 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [IMPORTANT] - My application performance" }, { "msg_contents": "On 7/26/05, Roberto Germano Vieweg Neto <[email protected]> wrote:\n> My application is using Firebird 1.5.2\n> \n> My question is:\n> \n> Postgre SQL will be more faster than Firebird? How much (in percent)?\n> \n> I need about 20% to 50% more performance at my application.\n> Can I get this migratin to postgresql ?\n\nThe answer is: maybe. There's nothing which stops PostgreSQL from\nbeing faster, and likewise there is nothing that stops it from being\nslower. YMMV.\n\nYour route should be:\n * migrate most speed-demanding part to PostgreSQL\n * benchmark it, trying to emulate real-world load.\n * if it is slower than Firebird, post it here, together with EXPLAIN ANALYZEs\n and ask if there's something you can do to speed it up.\n\n Regards,\n Dawid\n", "msg_date": "Wed, 27 Jul 2005 09:28:00 +0200", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [IMPORTANT] - My application performance" } ]
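A hedged sketch of the benchmarking route suggested above: take one of the application's slowest queries (the query below is only a placeholder), run it on the PostgreSQL side, and capture the plan and timings with EXPLAIN ANALYZE so they can be posted to the list:

    EXPLAIN ANALYZE
    SELECT c.name, sum(i.amount)
    FROM customers c
    JOIN invoices i ON i.customer_id = c.id
    WHERE i.invoice_date >= '2005-01-01'
    GROUP BY c.name;

The actual time and row count figures in that output are what make it possible to say where the time goes, rather than guessing at an overall percentage.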
[ { "msg_contents": "Is there a way to make the query planner consider pulling inner appends \noutside joins?\n\nExample:\nnatural_person inherits from person (obviously)\n\nadmpostgres3=# explain analyze select u.name, p.name from users u, person p \nwhere p.user_id = u.id and u.name = 's_ohl';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=8.01..3350.14 rows=3 width=36) (actual \ntime=107.391..343.657 rows=10 loops=1)\n Hash Cond: (\"outer\".user_id = \"inner\".id)\n -> Append (cost=0.00..2461.34 rows=117434 width=20) (actual \ntime=0.007..264.910 rows=117434 loops=1)\n -> Seq Scan on person p (cost=0.00..575.06 rows=31606 width=20) \n(actual time=0.005..38.911 rows=31606 loops=1)\n -> Seq Scan on natural_person p (cost=0.00..1886.28 rows=85828 \nwidth=19) (actual time=0.003..104.338 rows=85828 loops=1)\n -> Hash (cost=8.01..8.01 rows=2 width=24) (actual time=0.096..0.096 \nrows=0 loops=1)\n -> Index Scan using users_name_idx on users u (cost=0.00..8.01 \nrows=2 width=24) (actual time=0.041..0.081 rows=10 loops=1)\n Index Cond: ((name)::text = 's_ohl'::text)\n Total runtime: 343.786 ms\n(9 rows)\n\nadmpostgres3=# explain analyze select u.name, p.name from users u, only \nperson p where p.user_id = u.id and u.name = 's_ohl' union all select \nu.name, p.name from users u, only natural_person p where p.user_id = u.id \nand u.name = 's_ohl';\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n Append (cost=0.00..28.19 rows=3 width=28) (actual time=0.197..0.366 \nrows=10 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..14.12 rows=1 width=28) \n(actual time=0.159..0.159 rows=0 loops=1)\n -> Nested Loop (cost=0.00..14.11 rows=1 width=28) (actual \ntime=0.157..0.157 rows=0 loops=1)\n -> Index Scan using users_name_idx on users u \n(cost=0.00..8.01 rows=2 width=24) (actual time=0.039..0.075 rows=10 loops=1)\n Index Cond: ((name)::text = 's_ohl'::text)\n -> Index Scan using person_user_idx on person p \n(cost=0.00..3.03 rows=2 width=8) (actual time=0.006..0.006 rows=0 loops=10)\n Index Cond: (p.user_id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..14.08 rows=2 width=28) \n(actual time=0.036..0.193 rows=10 loops=1)\n -> Nested Loop (cost=0.00..14.06 rows=2 width=28) (actual \ntime=0.033..0.171 rows=10 loops=1)\n -> Index Scan using users_name_idx on users u \n(cost=0.00..8.01 rows=2 width=24) (actual time=0.018..0.049 rows=10 loops=1)\n Index Cond: ((name)::text = 's_ohl'::text)\n -> Index Scan using natural_person_user_idx on \nnatural_person p (cost=0.00..3.01 rows=1 width=8) (actual \ntime=0.006..0.007 rows=1 loops=10)\n Index Cond: (p.user_id = \"outer\".id)\n Total runtime: 0.475 ms\n(14 rows)\n\n\nMit freundlichem Gruß\nJens Schicke\n-- \nJens Schicke\t\t [email protected]\nasco GmbH\t\t http://www.asco.de\nMittelweg 7\t\t Tel 0531/3906-127\n38106 Braunschweig\t Fax 0531/3906-400\n", "msg_date": "Wed, 27 Jul 2005 09:56:12 +0200", "msg_from": "Jens-Wolfhard Schicke <[email protected]>", "msg_from_op": true, "msg_subject": "Inherited Table Query Planning (fwd)" }, { "msg_contents": "Jens-Wolfhard Schicke <[email protected]> writes:\n> Is there a way to make the query planner consider pulling inner appends \n> outside joins?\n\nNot at the moment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Jul 2005 10:09:08 -0400", "msg_from": 
"Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inherited Table Query Planning (fwd) " } ]
[ { "msg_contents": "I am having a problem with a view on one of my db's. This view is\ntrying to sequentially can the 2 tables it is accessing. However,\nwhen I explain the view on most of my other db's (all have the same\nschema's), it is using the indexes. Can anyone please help me\nunderstand why postgres is choosing to sequenially scan both tables?\n\nBoth tables in the view have a primary key defined on inv_nbr,\ninv_qfr. Vacuum and analyze have been run on the tables in question\nto try and make sure stats are up to date.\n\nThanks,\n\nChris\nPG - 7.3.4\nRH 2.1\n\n\nHere is the view definition:\nSELECT DISTINCT clmcom1.inv_nbr AS inventory_number, \n clmcom1.inv_qfr AS inventory_qualifier, \n clmcom1.pat_addr_1 AS patient_address_1, \n clmcom1.pat_addr_2 AS patient_address_2, \n clmcom1.pat_city AS patient_city, \n clmcom1.pat_cntry AS patient_country, \n clmcom1.pat_dob AS patient_date_of_birth, \n clmcom1.pat_gender_cd AS patient_gender_code,\n clmcom1.pat_info_pregnancy_ind AS pregnancy_ind,\n clmcom1.pat_state AS patient_state, \n clmcom1.pat_suffix AS patient_suffix, \n clmcom1.pat_zip AS patient_zip_code, \n clmcom1.payto_addr_1 AS payto_address_1, \n clmcom1.payto_addr_2 AS payto_address_2, \n clmcom1.payto_city, \n clmcom1.payto_cntry AS payto_country, \n clmcom1.payto_f_name AS payto_first_name, \n clmcom1.payto_m_name AS payto_middle_name,\n clmcom1.payto_state, \n clmcom1.payto_zip AS payto_zip_code, \n clmcom1.clm_tot_clm_chgs AS total_claim_charge, \n clmcom1.bill_l_name_org AS\nbilling_last_name_or_org,\n clmcom1.clm_delay_rsn_cd AS\nclaim_delay_reason_code,\n clmcom1.clm_submit_rsn_cd AS\nclaim_submit_reason_code,\n clmcom1.payto_l_name_org AS\npayto_last_name_or_org,\n clmcom1.payto_prim_id AS payto_primary_id, \n clmcom1.bill_prim_id AS billing_prov_primary_id,\n clmcom1.clm_tot_ncov_chgs AS total_ncov_charge, \n clmcom2.contract_amt AS contract_amount, \n clmcom2.svc_fac_or_lab_name, \n clmcom2.svc_fac_addr_1 AS svc_fac_address_1, \n clmcom2.svc_fac_addr_2 AS svc_fac_address_2, \n clmcom2.svc_fac_city, \n clmcom2.svc_fac_zip AS svc_fac_zip_code \nFROM (clmcom1 LEFT JOIN clmcom2 ON (((clmcom1.inv_nbr =\nclmcom2.inv_nbr) AND\n \n(clmcom1.inv_qfr = clmcom2.inv_qfr))))\nORDER BY clmcom1.inv_nbr, \n clmcom1.inv_qfr, \n clmcom1.pat_addr_1, \n clmcom1.pat_addr_2, \n clmcom1.pat_city, \n clmcom1.pat_cntry,\n clmcom1.pat_dob, \n clmcom1.pat_gender_cd, \n clmcom1.pat_info_pregnancy_ind, \n clmcom1.pat_state, \n clmcom1.pat_suffix, \n clmcom1.pat_zip, \n clmcom1.payto_addr_1, \n clmcom1.payto_addr_2, \n clmcom1.payto_city, \n clmcom1.payto_cntry,\n clmcom1.payto_f_name, \n clmcom1.payto_m_name, \n clmcom1.payto_state, \n clmcom1.payto_zip, \n clmcom1.clm_tot_clm_chgs, \n clmcom1.bill_l_name_org, \n clmcom1.clm_delay_rsn_cd, \n clmcom1.clm_submit_rsn_cd, \n clmcom1.payto_l_name_org, \n clmcom1.payto_prim_id, \n clmcom1.bill_prim_id, \n clmcom1.clm_tot_ncov_chgs, \n clmcom2.contract_amt, \n clmcom2.svc_fac_or_lab_name, \n clmcom2.svc_fac_addr_1, \n clmcom2.svc_fac_addr_2, \n clmcom2.svc_fac_city, \n clmcom2.svc_fac_zip;\n\n\nHere is the explain analyze from the problem db:\nprob_db=# explain analyze select * from clm_com;\n\n\n \n QUERY 
PLAN\n\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan clm_com (cost=1039824.35..1150697.61 rows=126712\nwidth=367) (actual time=311792.78..405819.03 rows=1266114 loops=1)\n -> Unique (cost=1039824.35..1150697.61 rows=126712 width=367)\n(actual time=311792.74..386313.14 rows=1266114 loops=1)\n -> Sort (cost=1039824.35..1042992.16 rows=1267123\nwidth=367) (actual time=311792.74..338189.48 rows=1266114 loops=1)\n Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\nclmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\nclmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\nclmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\nclmcom1.pat_zip, clmcom1.payto_addr_1,\nclmcom1.payto_addr_2, clmcom1.payto_city, clmcom1.payto_cntry,\nclmcom1.payto_f_name, clmcom1.payto_m_name, clmcom1.payto_state,\nclmcom1.payto_zip, clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\nclmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\nclmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\nclmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\nclmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\nclmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n -> Hash Join (cost=132972.78..548171.70 rows=1267123\nwidth=367) (actual time=16999.32..179359.43 rows=1266114 loops=1)\n Hash Cond: (\"outer\".inv_nbr = \"inner\".inv_nbr)\n Join Filter: (\"outer\".inv_qfr = \"inner\".inv_qfr)\n -> Seq Scan on clmcom1 (cost=0.00..267017.23\nrows=1267123 width=271) (actual time=0.11..84711.83 rows=1266114\nloops=1)\n -> Hash (cost=111200.82..111200.82 rows=1269582\nwidth=96) (actual time=16987.45..16987.45 rows=0 loops=1)\n -> Seq Scan on clmcom2 \n(cost=0.00..111200.82 rows=1269582 width=96) (actual\ntime=0.07..12164.81 rows=1266108 loops=1)\n Total runtime: 407317.47 msec\n(11 rows)\n~\n\n\n\nHere is the explain analyze from a good db (on the same postgres cluster);\ngood_db=# explain analyze select * from clm_com;\n \n \n \n \n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan clm_com 
(cost=78780.59..89498.29 rows=12249 width=359)\n(actual time=73045.36..79974.37 rows=122494 loops=1)\n -> Unique (cost=78780.59..89498.29 rows=12249 width=359) (actual\ntime=73045.28..78031.99 rows=122494 loops=1)\n -> Sort (cost=78780.59..79086.81 rows=122488 width=359)\n(actual time=73045.28..73362.94 rows=122494 loops=1)\n Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\nclmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\nclmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\nclmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\nclmcom1.pat_zip, clmcom1.payto_addr_1, clmcom1.payto_addr_2,\nclmcom1.payto_city, clmcom1.payto_cntry, clmcom1.payto_f_name,\nclmcom1.payto_m_name, clmcom1.payto_state, clmcom1.payto_zip,\nclmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\nclmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\nclmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\nclmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\nclmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\nclmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n -> Merge Join (cost=0.00..56945.12 rows=122488\nwidth=359) (actual time=54.76..71635.65 rows=122494 loops=1)\n Merge Cond: ((\"outer\".inv_nbr = \"inner\".inv_nbr)\nAND (\"outer\".inv_qfr = \"inner\".inv_qfr))\n -> Index Scan using clmcom1_pkey on clmcom1 \n(cost=0.00..38645.61 rows=122488 width=267) (actual\ntime=25.60..49142.16 rows=122494 loops=1)\n -> Index Scan using clmcom2_pkey on clmcom2 \n(cost=0.00..16004.08 rows=122488 width=92) (actual\ntime=29.09..19418.94 rows=122494 loops=1)\n Total runtime: 80162.26 msec\n(9 rows)\n", "msg_date": "Wed, 27 Jul 2005 11:12:27 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Help with view performance problem" }, { "msg_contents": "I did some more testing, and ran the explain analyze on the problem. \nIn my session I did a set enable_hashjoin = false and then ran the\nanalyze. This caused it to use the indexes as I have been expecting\nit to do.\n\nNow, how can I get it to use the indexes w/o manipulating the\nenvironment? 
What make postgresql want to sequentially scan and use a\nhash join?\n\nthanks,\n\nChris\n\nexplain analyze with set_hashjoin=false;\nprob_db=#explain analyze select * from clm_com;\n\n\n \n QUERY PLAN\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan clm_com (cost=1057975.45..1169021.26 rows=126910\nwidth=366) (actual time=142307.99..225997.22 rows=1268649 loops=1)\n -> Unique (cost=1057975.45..1169021.26 rows=126910 width=366)\n(actual time=142307.96..206082.30 rows=1268649 loops=1)\n -> Sort (cost=1057975.45..1061148.19 rows=1269095\nwidth=366) (actual time=142307.95..156019.01 rows=1268649 loops=1)\n Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\nclmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\nclmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\nclmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\nclmcom1.pat_zip, clmcom1.payto_addr_1,\nclmcom1.payto_addr_2, clmcom1.payto_city, clmcom1.payto_cntry,\nclmcom1.payto_f_name, clmcom1.payto_m_name, clmcom1.payto_state,\nclmcom1.payto_zip, clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\nclmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\nclmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\nclmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\nclmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\nclmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n -> Merge Join (cost=0.00..565541.46 rows=1269095\nwidth=366) (actual time=464.89..130638.06 rows=1268649 loops=1)\n Merge Cond: (\"outer\".inv_nbr = \"inner\".inv_nbr)\n Join Filter: (\"outer\".inv_qfr = \"inner\".inv_qfr)\n -> Index Scan using clmcom1_inv_nbr_iview_idx on\nclmcom1 (cost=0.00..380534.32 rows=1269095 width=270) (actual\ntime=0.27..82159.37 rows=1268649 loops=1)\n -> Index Scan using clmcom2_inv_nbr_iview_idx on\nclmcom2 (cost=0.00..159636.25 rows=1271198 width=96) (actual\ntime=464.56..21774.02 rows=1494019 loops=1)\n Total runtime: 227369.39 msec\n(10 rows)\n\n\n\nOn 7/27/05, Chris Hoover <[email protected]> wrote:\n> I am having a problem with a view on one of my db's. This view is\n> trying to sequentially can the 2 tables it is accessing. However,\n> when I explain the view on most of my other db's (all have the same\n> schema's), it is using the indexes. Can anyone please help me\n> understand why postgres is choosing to sequenially scan both tables?\n> \n> Both tables in the view have a primary key defined on inv_nbr,\n> inv_qfr. 
Vacuum and analyze have been run on the tables in question\n> to try and make sure stats are up to date.\n> \n> Thanks,\n> \n> Chris\n> PG - 7.3.4\n> RH 2.1\n> \n> \n> Here is the view definition:\n> SELECT DISTINCT clmcom1.inv_nbr AS inventory_number,\n> clmcom1.inv_qfr AS inventory_qualifier,\n> clmcom1.pat_addr_1 AS patient_address_1,\n> clmcom1.pat_addr_2 AS patient_address_2,\n> clmcom1.pat_city AS patient_city,\n> clmcom1.pat_cntry AS patient_country,\n> clmcom1.pat_dob AS patient_date_of_birth,\n> clmcom1.pat_gender_cd AS patient_gender_code,\n> clmcom1.pat_info_pregnancy_ind AS pregnancy_ind,\n> clmcom1.pat_state AS patient_state,\n> clmcom1.pat_suffix AS patient_suffix,\n> clmcom1.pat_zip AS patient_zip_code,\n> clmcom1.payto_addr_1 AS payto_address_1,\n> clmcom1.payto_addr_2 AS payto_address_2,\n> clmcom1.payto_city,\n> clmcom1.payto_cntry AS payto_country,\n> clmcom1.payto_f_name AS payto_first_name,\n> clmcom1.payto_m_name AS payto_middle_name,\n> clmcom1.payto_state,\n> clmcom1.payto_zip AS payto_zip_code,\n> clmcom1.clm_tot_clm_chgs AS total_claim_charge,\n> clmcom1.bill_l_name_org AS\n> billing_last_name_or_org,\n> clmcom1.clm_delay_rsn_cd AS\n> claim_delay_reason_code,\n> clmcom1.clm_submit_rsn_cd AS\n> claim_submit_reason_code,\n> clmcom1.payto_l_name_org AS\n> payto_last_name_or_org,\n> clmcom1.payto_prim_id AS payto_primary_id,\n> clmcom1.bill_prim_id AS billing_prov_primary_id,\n> clmcom1.clm_tot_ncov_chgs AS total_ncov_charge,\n> clmcom2.contract_amt AS contract_amount,\n> clmcom2.svc_fac_or_lab_name,\n> clmcom2.svc_fac_addr_1 AS svc_fac_address_1,\n> clmcom2.svc_fac_addr_2 AS svc_fac_address_2,\n> clmcom2.svc_fac_city,\n> clmcom2.svc_fac_zip AS svc_fac_zip_code\n> FROM (clmcom1 LEFT JOIN clmcom2 ON (((clmcom1.inv_nbr =\n> clmcom2.inv_nbr) AND\n> \n> (clmcom1.inv_qfr = clmcom2.inv_qfr))))\n> ORDER BY clmcom1.inv_nbr,\n> clmcom1.inv_qfr,\n> clmcom1.pat_addr_1,\n> clmcom1.pat_addr_2,\n> clmcom1.pat_city,\n> clmcom1.pat_cntry,\n> clmcom1.pat_dob,\n> clmcom1.pat_gender_cd,\n> clmcom1.pat_info_pregnancy_ind,\n> clmcom1.pat_state,\n> clmcom1.pat_suffix,\n> clmcom1.pat_zip,\n> clmcom1.payto_addr_1,\n> clmcom1.payto_addr_2,\n> clmcom1.payto_city,\n> clmcom1.payto_cntry,\n> clmcom1.payto_f_name,\n> clmcom1.payto_m_name,\n> clmcom1.payto_state,\n> clmcom1.payto_zip,\n> clmcom1.clm_tot_clm_chgs,\n> clmcom1.bill_l_name_org,\n> clmcom1.clm_delay_rsn_cd,\n> clmcom1.clm_submit_rsn_cd,\n> clmcom1.payto_l_name_org,\n> clmcom1.payto_prim_id,\n> clmcom1.bill_prim_id,\n> clmcom1.clm_tot_ncov_chgs,\n> clmcom2.contract_amt,\n> clmcom2.svc_fac_or_lab_name,\n> clmcom2.svc_fac_addr_1,\n> clmcom2.svc_fac_addr_2,\n> clmcom2.svc_fac_city,\n> clmcom2.svc_fac_zip;\n> \n> \n> Here is the explain analyze from the problem db:\n> prob_db=# explain analyze select * from clm_com;\n> \n> \n> \n> QUERY PLAN\n> \n> \n> \n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Subquery Scan clm_com (cost=1039824.35..1150697.61 rows=126712\n> width=367) (actual time=311792.78..405819.03 rows=1266114 loops=1)\n> -> Unique (cost=1039824.35..1150697.61 rows=126712 width=367)\n> (actual time=311792.74..386313.14 rows=1266114 loops=1)\n> -> Sort (cost=1039824.35..1042992.16 rows=1267123\n> width=367) (actual time=311792.74..338189.48 rows=1266114 loops=1)\n> Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\n> clmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\n> clmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\n> clmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\n> clmcom1.pat_zip, clmcom1.payto_addr_1,\n> clmcom1.payto_addr_2, clmcom1.payto_city, clmcom1.payto_cntry,\n> clmcom1.payto_f_name, clmcom1.payto_m_name, clmcom1.payto_state,\n> clmcom1.payto_zip, clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\n> clmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\n> clmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\n> clmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\n> clmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\n> clmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n> -> Hash Join (cost=132972.78..548171.70 rows=1267123\n> width=367) (actual time=16999.32..179359.43 rows=1266114 loops=1)\n> Hash Cond: (\"outer\".inv_nbr = \"inner\".inv_nbr)\n> Join Filter: (\"outer\".inv_qfr = \"inner\".inv_qfr)\n> -> Seq Scan on clmcom1 (cost=0.00..267017.23\n> rows=1267123 width=271) (actual time=0.11..84711.83 rows=1266114\n> loops=1)\n> -> Hash (cost=111200.82..111200.82 rows=1269582\n> width=96) (actual time=16987.45..16987.45 rows=0 loops=1)\n> -> Seq Scan on clmcom2\n> (cost=0.00..111200.82 rows=1269582 width=96) (actual\n> time=0.07..12164.81 rows=1266108 loops=1)\n> Total runtime: 407317.47 msec\n> (11 rows)\n> ~\n> \n> \n> \n> Here is the explain analyze from a good db (on the same postgres cluster);\n> good_db=# explain analyze select * from clm_com;\n> \n> \n> \n> \n> \n> QUERY PLAN\n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Subquery Scan clm_com (cost=78780.59..89498.29 rows=12249 width=359)\n> (actual time=73045.36..79974.37 rows=122494 loops=1)\n> -> Unique (cost=78780.59..89498.29 rows=12249 width=359) (actual\n> time=73045.28..78031.99 rows=122494 loops=1)\n> -> Sort (cost=78780.59..79086.81 rows=122488 width=359)\n> (actual time=73045.28..73362.94 rows=122494 loops=1)\n> Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\n> clmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\n> clmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\n> clmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\n> clmcom1.pat_zip, clmcom1.payto_addr_1, clmcom1.payto_addr_2,\n> clmcom1.payto_city, clmcom1.payto_cntry, clmcom1.payto_f_name,\n> clmcom1.payto_m_name, clmcom1.payto_state, clmcom1.payto_zip,\n> clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\n> clmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\n> clmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\n> clmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\n> clmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\n> clmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n> -> Merge Join (cost=0.00..56945.12 rows=122488\n> width=359) (actual time=54.76..71635.65 rows=122494 loops=1)\n> Merge Cond: ((\"outer\".inv_nbr = \"inner\".inv_nbr)\n> AND (\"outer\".inv_qfr = \"inner\".inv_qfr))\n> -> Index Scan using clmcom1_pkey on clmcom1\n> (cost=0.00..38645.61 rows=122488 width=267) (actual\n> time=25.60..49142.16 rows=122494 loops=1)\n> -> Index Scan using clmcom2_pkey on clmcom2\n> (cost=0.00..16004.08 rows=122488 width=92) (actual\n> time=29.09..19418.94 rows=122494 loops=1)\n> Total runtime: 80162.26 msec\n> (9 rows)\n>\n", "msg_date": "Wed, 27 Jul 2005 12:29:14 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with view performance problem" }, { "msg_contents": "Does anyone have any suggestions on this? I did not get any response\nfrom the admin list.\n\nThanks,\n\nChris\n\n---------- Forwarded message ----------\nFrom: Chris Hoover <[email protected]>\nDate: Jul 27, 2005 12:29 PM\nSubject: Re: Help with view performance problem\nTo: [email protected]\n\n\nI did some more testing, and ran the explain analyze on the problem.\nIn my session I did a set enable_hashjoin = false and then ran the\nanalyze. This caused it to use the indexes as I have been expecting\nit to do.\n\nNow, how can I get it to use the indexes w/o manipulating the\nenvironment? 
What make postgresql want to sequentially scan and use a\nhash join?\n\nthanks,\n\nChris\n\nexplain analyze with set_hashjoin=false;\nprob_db=#explain analyze select * from clm_com;\n\n\n\n QUERY PLAN\n\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n----------------------------------------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan clm_com (cost=1057975.45..1169021.26 rows=126910\nwidth=366) (actual time=142307.99..225997.22 rows=1268649 loops=1)\n -> Unique (cost=1057975.45..1169021.26 rows=126910 width=366)\n(actual time=142307.96..206082.30 rows=1268649 loops=1)\n -> Sort (cost=1057975.45..1061148.19 rows=1269095\nwidth=366) (actual time=142307.95..156019.01 rows=1268649 loops=1)\n Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\nclmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\nclmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\nclmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\nclmcom1.pat_zip, clmcom1.payto_addr_1,\nclmcom1.payto_addr_2, clmcom1.payto_city, clmcom1.payto_cntry,\nclmcom1.payto_f_name, clmcom1.payto_m_name, clmcom1.payto_state,\nclmcom1.payto_zip, clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\nclmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\nclmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\nclmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\nclmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\nclmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n -> Merge Join (cost=0.00..565541.46 rows=1269095\nwidth=366) (actual time=464.89..130638.06 rows=1268649 loops=1)\n Merge Cond: (\"outer\".inv_nbr = \"inner\".inv_nbr)\n Join Filter: (\"outer\".inv_qfr = \"inner\".inv_qfr)\n -> Index Scan using clmcom1_inv_nbr_iview_idx on\nclmcom1 (cost=0.00..380534.32 rows=1269095 width=270) (actual\ntime=0.27..82159.37 rows=1268649 loops=1)\n -> Index Scan using clmcom2_inv_nbr_iview_idx on\nclmcom2 (cost=0.00..159636.25 rows=1271198 width=96) (actual\ntime=464.56..21774.02 rows=1494019 loops=1)\n Total runtime: 227369.39 msec\n(10 rows)\n\n\n\nOn 7/27/05, Chris Hoover <[email protected]> wrote:\n> I am having a problem with a view on one of my db's. This view is\n> trying to sequentially can the 2 tables it is accessing. However,\n> when I explain the view on most of my other db's (all have the same\n> schema's), it is using the indexes. Can anyone please help me\n> understand why postgres is choosing to sequenially scan both tables?\n>\n> Both tables in the view have a primary key defined on inv_nbr,\n> inv_qfr. 
Vacuum and analyze have been run on the tables in question\n> to try and make sure stats are up to date.\n>\n> Thanks,\n>\n> Chris\n> PG - 7.3.4\n> RH 2.1\n>\n>\n> Here is the view definition:\n> SELECT DISTINCT clmcom1.inv_nbr AS inventory_number,\n> clmcom1.inv_qfr AS inventory_qualifier,\n> clmcom1.pat_addr_1 AS patient_address_1,\n> clmcom1.pat_addr_2 AS patient_address_2,\n> clmcom1.pat_city AS patient_city,\n> clmcom1.pat_cntry AS patient_country,\n> clmcom1.pat_dob AS patient_date_of_birth,\n> clmcom1.pat_gender_cd AS patient_gender_code,\n> clmcom1.pat_info_pregnancy_ind AS pregnancy_ind,\n> clmcom1.pat_state AS patient_state,\n> clmcom1.pat_suffix AS patient_suffix,\n> clmcom1.pat_zip AS patient_zip_code,\n> clmcom1.payto_addr_1 AS payto_address_1,\n> clmcom1.payto_addr_2 AS payto_address_2,\n> clmcom1.payto_city,\n> clmcom1.payto_cntry AS payto_country,\n> clmcom1.payto_f_name AS payto_first_name,\n> clmcom1.payto_m_name AS payto_middle_name,\n> clmcom1.payto_state,\n> clmcom1.payto_zip AS payto_zip_code,\n> clmcom1.clm_tot_clm_chgs AS total_claim_charge,\n> clmcom1.bill_l_name_org AS\n> billing_last_name_or_org,\n> clmcom1.clm_delay_rsn_cd AS\n> claim_delay_reason_code,\n> clmcom1.clm_submit_rsn_cd AS\n> claim_submit_reason_code,\n> clmcom1.payto_l_name_org AS\n> payto_last_name_or_org,\n> clmcom1.payto_prim_id AS payto_primary_id,\n> clmcom1.bill_prim_id AS billing_prov_primary_id,\n> clmcom1.clm_tot_ncov_chgs AS total_ncov_charge,\n> clmcom2.contract_amt AS contract_amount,\n> clmcom2.svc_fac_or_lab_name,\n> clmcom2.svc_fac_addr_1 AS svc_fac_address_1,\n> clmcom2.svc_fac_addr_2 AS svc_fac_address_2,\n> clmcom2.svc_fac_city,\n> clmcom2.svc_fac_zip AS svc_fac_zip_code\n> FROM (clmcom1 LEFT JOIN clmcom2 ON (((clmcom1.inv_nbr =\n> clmcom2.inv_nbr) AND\n>\n> (clmcom1.inv_qfr = clmcom2.inv_qfr))))\n> ORDER BY clmcom1.inv_nbr,\n> clmcom1.inv_qfr,\n> clmcom1.pat_addr_1,\n> clmcom1.pat_addr_2,\n> clmcom1.pat_city,\n> clmcom1.pat_cntry,\n> clmcom1.pat_dob,\n> clmcom1.pat_gender_cd,\n> clmcom1.pat_info_pregnancy_ind,\n> clmcom1.pat_state,\n> clmcom1.pat_suffix,\n> clmcom1.pat_zip,\n> clmcom1.payto_addr_1,\n> clmcom1.payto_addr_2,\n> clmcom1.payto_city,\n> clmcom1.payto_cntry,\n> clmcom1.payto_f_name,\n> clmcom1.payto_m_name,\n> clmcom1.payto_state,\n> clmcom1.payto_zip,\n> clmcom1.clm_tot_clm_chgs,\n> clmcom1.bill_l_name_org,\n> clmcom1.clm_delay_rsn_cd,\n> clmcom1.clm_submit_rsn_cd,\n> clmcom1.payto_l_name_org,\n> clmcom1.payto_prim_id,\n> clmcom1.bill_prim_id,\n> clmcom1.clm_tot_ncov_chgs,\n> clmcom2.contract_amt,\n> clmcom2.svc_fac_or_lab_name,\n> clmcom2.svc_fac_addr_1,\n> clmcom2.svc_fac_addr_2,\n> clmcom2.svc_fac_city,\n> clmcom2.svc_fac_zip;\n>\n>\n> Here is the explain analyze from the problem db:\n> prob_db=# explain analyze select * from clm_com;\n>\n>\n>\n> QUERY PLAN\n>\n>\n>\n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Subquery Scan clm_com (cost=1039824.35..1150697.61 rows=126712\n> width=367) (actual time=311792.78..405819.03 rows=1266114 loops=1)\n> -> Unique (cost=1039824.35..1150697.61 rows=126712 width=367)\n> (actual time=311792.74..386313.14 rows=1266114 loops=1)\n> -> Sort (cost=1039824.35..1042992.16 rows=1267123\n> width=367) (actual time=311792.74..338189.48 rows=1266114 loops=1)\n> Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\n> clmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\n> clmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\n> clmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\n> clmcom1.pat_zip, clmcom1.payto_addr_1,\n> clmcom1.payto_addr_2, clmcom1.payto_city, clmcom1.payto_cntry,\n> clmcom1.payto_f_name, clmcom1.payto_m_name, clmcom1.payto_state,\n> clmcom1.payto_zip, clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\n> clmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\n> clmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\n> clmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\n> clmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\n> clmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n> -> Hash Join (cost=132972.78..548171.70 rows=1267123\n> width=367) (actual time=16999.32..179359.43 rows=1266114 loops=1)\n> Hash Cond: (\"outer\".inv_nbr = \"inner\".inv_nbr)\n> Join Filter: (\"outer\".inv_qfr = \"inner\".inv_qfr)\n> -> Seq Scan on clmcom1 (cost=0.00..267017.23\n> rows=1267123 width=271) (actual time=0.11..84711.83 rows=1266114\n> loops=1)\n> -> Hash (cost=111200.82..111200.82 rows=1269582\n> width=96) (actual time=16987.45..16987.45 rows=0 loops=1)\n> -> Seq Scan on clmcom2\n> (cost=0.00..111200.82 rows=1269582 width=96) (actual\n> time=0.07..12164.81 rows=1266108 loops=1)\n> Total runtime: 407317.47 msec\n> (11 rows)\n> ~\n>\n>\n>\n> Here is the explain analyze from a good db (on the same postgres cluster);\n> good_db=# explain analyze select * from clm_com;\n>\n>\n>\n>\n>\n> QUERY PLAN\n> 
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Subquery Scan clm_com (cost=78780.59..89498.29 rows=12249 width=359)\n> (actual time=73045.36..79974.37 rows=122494 loops=1)\n> -> Unique (cost=78780.59..89498.29 rows=12249 width=359) (actual\n> time=73045.28..78031.99 rows=122494 loops=1)\n> -> Sort (cost=78780.59..79086.81 rows=122488 width=359)\n> (actual time=73045.28..73362.94 rows=122494 loops=1)\n> Sort Key: clmcom1.inv_nbr, clmcom1.inv_qfr,\n> clmcom1.pat_addr_1, clmcom1.pat_addr_2, clmcom1.pat_city,\n> clmcom1.pat_cntry, clmcom1.pat_dob, clmcom1.pat_gender_cd,\n> clmcom1.pat_info_pregnancy_ind, clmcom1.pat_state, clmcom1.pat_suffix,\n> clmcom1.pat_zip, clmcom1.payto_addr_1, clmcom1.payto_addr_2,\n> clmcom1.payto_city, clmcom1.payto_cntry, clmcom1.payto_f_name,\n> clmcom1.payto_m_name, clmcom1.payto_state, clmcom1.payto_zip,\n> clmcom1.clm_tot_clm_chgs, clmcom1.bill_l_name_org,\n> clmcom1.clm_delay_rsn_cd, clmcom1.clm_submit_rsn_cd,\n> clmcom1.payto_l_name_org, clmcom1.payto_prim_id, clmcom1.bill_prim_id,\n> clmcom1.clm_tot_ncov_chgs, clmcom2.contract_amt,\n> clmcom2.svc_fac_or_lab_name, clmcom2.svc_fac_addr_1,\n> clmcom2.svc_fac_addr_2, clmcom2.svc_fac_city, clmcom2.svc_fac_zip\n> -> Merge Join (cost=0.00..56945.12 rows=122488\n> width=359) (actual time=54.76..71635.65 rows=122494 loops=1)\n> Merge Cond: ((\"outer\".inv_nbr = \"inner\".inv_nbr)\n> AND (\"outer\".inv_qfr = \"inner\".inv_qfr))\n> -> Index Scan using clmcom1_pkey on clmcom1\n> (cost=0.00..38645.61 rows=122488 width=267) (actual\n> time=25.60..49142.16 rows=122494 loops=1)\n> -> Index Scan using clmcom2_pkey on clmcom2\n> (cost=0.00..16004.08 rows=122488 width=92) (actual\n> time=29.09..19418.94 rows=122494 loops=1)\n> Total runtime: 80162.26 msec\n> (9 rows)\n>\n", "msg_date": "Thu, 28 Jul 2005 10:38:06 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Help with view performance problem" }, { "msg_contents": "\nOn Jul 28, 2005, at 8:38 AM, Chris Hoover wrote:\n>\n>\n> I did some more testing, and ran the explain analyze on the problem.\n> In my session I did a set enable_hashjoin = false and then ran the\n> analyze. This caused it to use the indexes as I have been expecting\n> it to do.\n>\n> Now, how can I get it to use the indexes w/o manipulating the\n> environment? What make postgresql want to sequentially scan and use a\n> hash join?\n>\n> thanks,\n>\n> Chris\n>\n> explain analyze with set_hashjoin=false;\n> prob_db=#explain analyze select * from clm_com;\n>\n>\n\nI had something similar to this happen recently. The planner was \nchoosing a merge join and seq scan because my 'random_page_cost' was \nset too high. I had it at 3 , and ended up settling at 1.8 to get it \nto correctly use my indices. 
Once that change was in place, the \nplanner did the 'right' thing for me.\n\nNot sure if this will help you, but it sounds similar.\n\n-Dan\n", "msg_date": "Thu, 28 Jul 2005 10:14:32 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: Help with view performance problem" }, { "msg_contents": "I'm alreading running at 1.5. It looks like if I drop the\nrandom_page_cost t0 1.39, it starts using the indexes. Are there any\nunseen issues with dropping the random_page_cost this low?\n\nThanks,\n\nChris\n\nOn 7/28/05, Dan Harris <[email protected]> wrote:\n> \n> On Jul 28, 2005, at 8:38 AM, Chris Hoover wrote:\n> >\n> >\n> > I did some more testing, and ran the explain analyze on the problem.\n> > In my session I did a set enable_hashjoin = false and then ran the\n> > analyze. This caused it to use the indexes as I have been expecting\n> > it to do.\n> >\n> > Now, how can I get it to use the indexes w/o manipulating the\n> > environment? What make postgresql want to sequentially scan and use a\n> > hash join?\n> >\n> > thanks,\n> >\n> > Chris\n> >\n> > explain analyze with set_hashjoin=false;\n> > prob_db=#explain analyze select * from clm_com;\n> >\n> >\n> \n> I had something similar to this happen recently. The planner was\n> choosing a merge join and seq scan because my 'random_page_cost' was\n> set too high. I had it at 3 , and ended up settling at 1.8 to get it\n> to correctly use my indices. Once that change was in place, the\n> planner did the 'right' thing for me.\n> \n> Not sure if this will help you, but it sounds similar.\n> \n> -Dan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Thu, 28 Jul 2005 13:34:54 -0400", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fwd: Help with view performance problem" } ]
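A hedged sketch of trying the settings discussed in this thread on a single session before touching the server-wide configuration (1.8 is simply the value that worked for Dan above; the right number has to be found by testing):

    -- diagnosis only: see what plan is chosen when hash joins are taken off the table
    SET enable_hashjoin = false;
    EXPLAIN ANALYZE SELECT * FROM clm_com;
    RESET enable_hashjoin;

    -- candidate fix: lower the random page cost estimate and re-check the plan
    SET random_page_cost = 1.8;
    EXPLAIN ANALYZE SELECT * FROM clm_com;

Once a value gives the right plans without hurting anything else, it can be made permanent by setting random_page_cost in postgresql.conf and reloading.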
[ { "msg_contents": "Thank you all for your great input. It sure helped. \n\n-- \n Husam \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Jochem van\nDieten\nSent: Tuesday, July 26, 2005 2:58 AM\nTo: [email protected]\nSubject: Re: [PERFORM] \"Vacuum Full Analyze\" taking so long\n\nTomeh, Husam wrote:\n> The other question I have. What would be the proper approach to \n> rebuild indexes. I re-indexes and then run vacuum/analyze. Should I \n> not use the re-index approach, and instead, drop the indexes, vacuum \n> the tables, and then create the indexes, then run analyze on tables\nand indexes??\n\nIf you just want to rebuild indexes, just drop and recreate.\n\nHowever, you are also running a VACUUM FULL, so I presume you have\ndeleted a significant number of rows and want to recover the space that\nwas in use by them. In that scenario, it is often better to CLUSTER the\ntable to force a rebuild. While VACUUM FULL moves the tuples around\ninside the existing file(s), CLUSTER simply creates new file(s), moves\nall the non-deleted tuples there and then swaps the old and the new\nfiles. There can be a significant performance increase in doing so (but\nyou obviously need to have some free diskspace).\nIf you CLUSTER your table it will be ordered by the index you specify.\nThere can be a performance increase in doing so, but if you don't want\nto you can also do a no-op ALTER TABLE and change a column to a datatype\nthat is the same as it already has. This too will force a rewrite of the\ntable but without ordering the tuples.\n\nSo in short my recommendations:\n- to rebuild indexes, just drop and recreate the indexes\n- to rebuild everything because there is space that can bepermanently\nreclaimed, drop indexes, cluster or alter the table, recreate the\nindexes and anlyze the table\n\nJochem\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n**********************************************************************\nThis message contains confidential information intended only for the \nuse of the addressee(s) named above and may contain information that \nis legally privileged. If you are not the addressee, or the person \nresponsible for delivering it to the addressee, you are hereby \nnotified that reading, disseminating, distributing or copying this \nmessage is strictly prohibited. If you have received this message by \nmistake, please immediately notify us by replying to the message and \ndelete the original message immediately thereafter.\n\nThank you. FADLD Tag\n**********************************************************************\n\n", "msg_date": "Wed, 27 Jul 2005 09:19:42 -0700", "msg_from": "\"Tomeh, Husam\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"Vacuum Full Analyze\" taking so long" } ]
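A minimal sketch of the rebuild sequence described above, with placeholder table and index names. On the 8.0-era releases under discussion CLUSTER takes the form shown here, rewrites the table in the order of the named index, and rebuilds the table's other indexes as part of the rewrite:

    CLUSTER mytable_pkey ON mytable;   -- rewrites the table, reclaiming dead space
    ANALYZE mytable;

    -- or, to rebuild a single index without touching the table itself:
    DROP INDEX mytable_col_idx;
    CREATE INDEX mytable_col_idx ON mytable (col);

Enough free disk space to hold a full copy of the table is needed while CLUSTER runs.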
[ { "msg_contents": "Folks,\n\nI ran a wal_buffer test series. It appears that increasing the \nwal_buffers is indeed very important for OLTP applications, potentially \nresulting in as much as a 15% average increase in transaction processing. \nWhat's interesting is that this is not just true for 8.1, it's true for \n8.0.3 as well. \n\nMore importantly, 8.1 performance is somehow back up to above-8.0 levels. \nSomething was broken in June that's got fixed (this test series is based \non July 3 CVS) but I don't know what. Clues?\n\nTest results are here:\nhttp://pgfoundry.org/docman/view.php/1000041/79/wal_buffer_test.pdf\n\nAs always, detailed test results are available from OSDL, just use:\nhttp://khack.osdl.org/stp/#\nwhere # is the test number.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 27 Jul 2005 13:30:01 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "wal_buffer tests in" }, { "msg_contents": "On Wed, 27 Jul 2005 13:30:01 -0700\nJosh Berkus <[email protected]> wrote:\n\n> Folks,\n> \n> I ran a wal_buffer test series. It appears that increasing the \n> wal_buffers is indeed very important for OLTP applications, potentially \n> resulting in as much as a 15% average increase in transaction processing. \n> What's interesting is that this is not just true for 8.1, it's true for \n> 8.0.3 as well. \n> \n> More importantly, 8.1 performance is somehow back up to above-8.0 levels. \n> Something was broken in June that's got fixed (this test series is based \n> on July 3 CVS) but I don't know what. Clues?\n> \n> Test results are here:\n> http://pgfoundry.org/docman/view.php/1000041/79/wal_buffer_test.pdf\n> \n> As always, detailed test results are available from OSDL, just use:\n> http://khack.osdl.org/stp/#\n> where # is the test number.\n\nThe increase could actually be higher than 15% as 1800 notpm is about\nthe max throughput you can have with 150 warehouses with the default\nthinktimes. The rule of thumb is about 12 * warehouses, for the\nthroughput.\n\nMark\n", "msg_date": "Wed, 27 Jul 2005 14:42:01 -0700", "msg_from": "Mark Wong <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_buffer tests in" }, { "msg_contents": "On Wed, 2005-07-27 at 13:30 -0700, Josh Berkus wrote:\n\n> I ran a wal_buffer test series. It appears that increasing the \n> wal_buffers is indeed very important for OLTP applications, potentially \n> resulting in as much as a 15% average increase in transaction processing. \n> What's interesting is that this is not just true for 8.1, it's true for \n> 8.0.3 as well. \n\nThe most important thing about these tests is that for the first time we\nhave eliminated much of the post checkpoint noise-and-delay.\n\nLook at the response time charts between\n\nhttp://khack.osdl.org/stp/302959/results/0/rt.html\n\nand\n\nhttp://khack.osdl.org/stp/302963/results/0/rt.html\n\nThis last set of results is a thing of beauty and I must congratulate\neverybody involved for getting here after much effort.\n\nThe graphs are smooth, which shows a balanced machine. 
I'd like to\nrepeat test 302963 with full_page_writes=false, to see if those response\ntime spikes at checkpoint drop down to normal level.\n\nI think these results are valid for large DW data loads also.\n\nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 27 Jul 2005 23:00:55 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_buffer tests in" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n\n> Folks,\n> \n> I ran a wal_buffer test series. It appears that increasing the \n> wal_buffers is indeed very important for OLTP applications, potentially \n> resulting in as much as a 15% average increase in transaction processing. \n> What's interesting is that this is not just true for 8.1, it's true for \n> 8.0.3 as well. \n\nYou have wal_buffer set to 2048? That's pretty radical compared to the default\nof just 5. Your tests shows you had to go to this large a value to see the\nmaximum effect?\n\n-- \ngreg\n\n", "msg_date": "28 Jul 2005 07:49:29 -0400", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: wal_buffer tests in" }, { "msg_contents": "Greg,\n\n> You have wal_buffer set to 2048? That's pretty radical compared to the\n> default of just 5. Your tests shows you had to go to this large a value\n> to see the maximum effect?\n\nNo, take a look at the graph. It looks like we got the maximum effect \nfrom a wal_buffers somewhere between 64 and 256. On the DBT2 runs, any \nvariation less than 5% is just noise.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 28 Jul 2005 14:57:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: wal_buffer tests in" } ]
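The setting being varied in these runs lives in postgresql.conf and is only read at server start; a hedged example based on the range where the curve in the test report flattens out (the best value is workload-dependent):

    # postgresql.conf (needs a restart, not just a reload)
    wal_buffers = 64        # 8kB pages; the tests above saw most of the gain between 64 and 256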
[ { "msg_contents": "I'm not sure how much this has been discussed on the list, but wasn't\nable to find anything relevant in the archives.\n\nThe new Spamassassin is due out pretty soon. They are currently testing\n3.1.0pre4. One of the things I hope to get out of this release is bayes\nword stats moved to a real RDBMS. They have separated the mysql\nBayesStore module from the PgSQL one so now postgres can use it's own\nqueries.\n\nI loaded all of this stuff up on a test server and am finding that the\nbayes put performance is really not good enough for any real amount of\nmail load.\n\nThe performance problems seems to be when the bayes module is\ninserting/updating. This is now handled by the token_put procedure.\n\nAfter playing with various indexes and what not I simply am unable to\nmake this procedure perform any better. Perhaps someone on the list can\nspot the bottleneck and reveal why this procedure isn't performing that\nwell or ways to make it better.\n\nI put the rest of the schema up at\nhttp://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone\nneeds to see it too.\n\nCREATE OR REPLACE FUNCTION put_token(integer, bytea, integer, integer,\ninteger) RETURNS bool AS '\nDECLARE\n inuserid ALIAS for $1;\n intoken ALIAS for $2;\n inspam_count ALIAS for $3;\n inham_count ALIAS for $4;\n inatime ALIAS for $5;\n got_token record;\n updated_atime_p bool;\nBEGIN\n updated_atime_p := FALSE;\n SELECT INTO got_token spam_count, ham_count, atime\n FROM bayes_token\n WHERE id = inuserid\n AND token = intoken;\n IF NOT FOUND THEN\n -- we do not insert negative counts, just return true\n IF (inspam_count < 0 OR inham_count < 0) THEN\n RETURN TRUE;\n END IF;\n INSERT INTO bayes_token (id, token, spam_count, ham_count, atime)\n VALUES (inuserid, intoken, inspam_count, inham_count, inatime);\n IF NOT FOUND THEN\n RAISE EXCEPTION ''unable to insert into bayes_token'';\n return FALSE;\n END IF;\n UPDATE bayes_vars SET token_count = token_count + 1\n WHERE id = inuserid;\n IF NOT FOUND THEN\n RAISE EXCEPTION ''unable to update token_count in bayes_vars'';\n return FALSE;\n END IF;\n UPDATE bayes_vars SET newest_token_age = inatime\n WHERE id = inuserid AND newest_token_age < inatime;\n IF NOT FOUND THEN\n UPDATE bayes_vars\n SET oldest_token_age = inatime\n WHERE id = inuserid\n AND oldest_token_age > inatime;\n END IF;\n return TRUE;\n ELSE\n IF (inspam_count != 0) THEN\n -- no need to update atime if it is < the existing value\n IF (inatime < got_token.atime) THEN\n UPDATE bayes_token\n SET spam_count = spam_count + inspam_count\n WHERE id = inuserid\n AND token = intoken\n AND spam_count + inspam_count >= 0;\n ELSE\n UPDATE bayes_token\n SET spam_count = spam_count + inspam_count,\n atime = inatime\n WHERE id = inuserid\n AND token = intoken\n AND spam_count + inspam_count >= 0;\n IF FOUND THEN\n updated_atime_p := TRUE;\n END IF;\n END IF;\n END IF;\n IF (inham_count != 0) THEN\n -- no need to update atime is < the existing value or if it was\nalready updated\n IF inatime < got_token.atime OR updated_atime_p THEN\n UPDATE bayes_token\n SET ham_count = ham_count + inham_count\n WHERE id = inuserid\n AND token = intoken\n AND ham_count + inham_count >= 0;\n ELSE\n UPDATE bayes_token\n SET ham_count = ham_count + inham_count,\n atime = inatime\n WHERE id = inuserid\n AND token = intoken\n AND ham_count + inham_count >= 0;\n IF FOUND THEN\n updated_atime_p := TRUE;\n END IF;\n END IF;\n END IF;\n IF updated_atime_p THEN\n UPDATE bayes_vars\n SET oldest_token_age = inatime\n WHERE id = 
inuserid\n AND oldest_token_age > inatime;\n END IF;\n return TRUE;\n END IF;\nEND;\n' LANGUAGE 'plpgsql';\n\n", "msg_date": "Wed, 27 Jul 2005 14:35:20 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems testing with Spamassassin 3.1.0 Bayes module." }, { "msg_contents": "Matt,\n\n> After playing with various indexes and what not I simply am unable to\n> make this procedure perform any better.  Perhaps someone on the list can\n> spot the bottleneck and reveal why this procedure isn't performing that\n> well or ways to make it better.\n\nWell, my first thought is that this is a pretty complicated procedure for \nsomething you want to peform well. Is all this logic really necessary? \nHow does it get done for MySQL?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 27 Jul 2005 17:12:15 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 Bayes\n module." }, { "msg_contents": "Josh Berkus wrote:\n> Matt,\n> \n> \n>>After playing with various indexes and what not I simply am unable to\n>>make this procedure perform any better. Perhaps someone on the list can\n>>spot the bottleneck and reveal why this procedure isn't performing that\n>>well or ways to make it better.\n> \n> \n> Well, my first thought is that this is a pretty complicated procedure for \n> something you want to peform well. Is all this logic really necessary? \n> How does it get done for MySQL?\n> \n\nI'm not sure if it's all needed, in mysql they have this simple schema:\n\n===============================================\nCREATE TABLE bayes_expire (\n id int(11) NOT NULL default '0',\n runtime int(11) NOT NULL default '0',\n KEY bayes_expire_idx1 (id)\n) TYPE=MyISAM;\n\nCREATE TABLE bayes_global_vars (\n variable varchar(30) NOT NULL default '',\n value varchar(200) NOT NULL default '',\n PRIMARY KEY (variable)\n) TYPE=MyISAM;\n\nINSERT INTO bayes_global_vars VALUES ('VERSION','3');\n\nCREATE TABLE bayes_seen (\n id int(11) NOT NULL default '0',\n msgid varchar(200) binary NOT NULL default '',\n flag char(1) NOT NULL default '',\n PRIMARY KEY (id,msgid)\n) TYPE=MyISAM;\n\nCREATE TABLE bayes_token (\n id int(11) NOT NULL default '0',\n token char(5) NOT NULL default '',\n spam_count int(11) NOT NULL default '0',\n ham_count int(11) NOT NULL default '0',\n atime int(11) NOT NULL default '0',\n PRIMARY KEY (id, token),\n INDEX bayes_token_idx1 (token),\n INDEX bayes_token_idx2 (id, atime)\n) TYPE=MyISAM;\n\nCREATE TABLE bayes_vars (\n id int(11) NOT NULL AUTO_INCREMENT,\n username varchar(200) NOT NULL default '',\n spam_count int(11) NOT NULL default '0',\n ham_count int(11) NOT NULL default '0',\n token_count int(11) NOT NULL default '0',\n last_expire int(11) NOT NULL default '0',\n last_atime_delta int(11) NOT NULL default '0',\n last_expire_reduce int(11) NOT NULL default '0',\n oldest_token_age int(11) NOT NULL default '2147483647',\n newest_token_age int(11) NOT NULL default '0',\n PRIMARY KEY (id),\n UNIQUE bayes_vars_idx1 (username)\n) TYPE=MyISAM;\n===============================================\n\nThen they do this to insert the token:\n\nINSERT INTO bayes_token (\n id,\n token,\n spam_count,\n ham_count,\n atime\n) VALUES (\n ?,\n ?,\n ?,\n ?,\n ?\n) ON DUPLICATE KEY\n UPDATE\n spam_count = GREATEST(spam_count + ?, 0),\n ham_count = GREATEST(ham_count + ?, 0),\n atime = GREATEST(atime, ?)\n\nOr update the 
token:\n\nUPDATE bayes_vars SET\n $token_count_update\n newest_token_age = GREATEST(newest_token_age, ?),\n oldest_token_age = LEAST(oldest_token_age, ?)\n WHERE id = ?\n\n\nI think the reason why the procedure was written for postgres was\nbecause of the greatest and least statements performing poorly.\n\nHonestly, I'm not real up on writing procs, I was hoping the problem\nwould be obvious to someone.\n\nschu\n", "msg_date": "Wed, 27 Jul 2005 17:59:37 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matt,\n\n> UPDATE bayes_vars SET\n> $token_count_update\n> newest_token_age = GREATEST(newest_token_age, ?),\n> oldest_token_age = LEAST(oldest_token_age, ?)\n> WHERE id = ?\n>\n>\n> I think the reason why the procedure was written for postgres was\n> because of the greatest and least statements performing poorly.\n\nWell, it might be because we don't have a built-in GREATEST or LEAST prior to \n8.1. However, it's pretty darned easy to construct one.\n\n> Honestly, I'm not real up on writing procs, I was hoping the problem\n> would be obvious to someone.\n\nWell, there's the general performance tuning stuff of course (postgresql.conf) \nwhich if you've not done any of it will pretty dramatically affect your \nthrougput rates. And vacuum, analyze, indexes, etc.\n\nYou should also look at ways to make the SP simpler. For example, you have a \ncycle that looks like:\n\nSELECT\n\tIF NOT FOUND\n\t\tINSERT\n\tELSE\n\t\tUPDATE\n\nWhich could be made shorter as:\n\nUPDATE\n\tIF NOT FOUND\n\t\tINSERT\n\n... saving you one index scan.\n\nAlso, I don't quite follow it, but the procedure seems to be doing at least \ntwo steps that the MySQL version isn't doing at all. If the PG version is \ndoing more things, of course it's going to take longer.\n\nFinally, when you have a proc you're happy with, I suggest having an expert \nre-write it in C, which should double the procedure performance.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 27 Jul 2005 19:12:39 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 Bayes\n module." }, { "msg_contents": "Josh Berkus wrote:\n> Matt,\n> \n\n> Well, it might be because we don't have a built-in GREATEST or LEAST prior to \n> 8.1. However, it's pretty darned easy to construct one.\n\nI was more talking about min() and max() but yea, I think you knew where\nI was going with it...\n\n> \n> Well, there's the general performance tuning stuff of course (postgresql.conf) \n> which if you've not done any of it will pretty dramatically affect your \n> througput rates. And vacuum, analyze, indexes, etc.\n\nI have gone though all that.\n\n> You should also look at ways to make the SP simpler. For example, you have a \n> cycle that looks like:\n> \n> SELECT\n> \tIF NOT FOUND\n> \t\tINSERT\n> \tELSE\n> \t\tUPDATE\n> \n> Which could be made shorter as:\n> \n> UPDATE\n> \tIF NOT FOUND\n> \t\tINSERT\n> \n> ... saving you one index scan.\n> \n> Also, I don't quite follow it, but the procedure seems to be doing at least \n> two steps that the MySQL version isn't doing at all. 
If the PG version is \n> doing more things, of course it's going to take longer.\n> \n> Finally, when you have a proc you're happy with, I suggest having an expert \n> re-write it in C, which should double the procedure performance.\n> \n\nSounds like I need to completely understand what the proc is doing and\nwork on a rewrite. I'll look into writing it in C, I need to do some\nreading about how that works and exactly what it buys you.\n\nThanks for the helpful comments.\n\nschu\n", "msg_date": "Wed, 27 Jul 2005 18:17:05 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher <[email protected]> writes:\n> After playing with various indexes and what not I simply am unable to\n> make this procedure perform any better. Perhaps someone on the list can\n> spot the bottleneck and reveal why this procedure isn't performing that\n> well or ways to make it better.\n\nThere's not anything obviously wrong with that procedure --- all of the\nupdates are on primary keys, so one would expect reasonably efficient\nquery plans to get chosen. Perhaps it'd be worth the trouble to build\nthe server with profiling enabled and get a gprof trace to see where the\ntime is going.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jul 2005 02:19:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 Bayes\n\tmodule." }, { "msg_contents": "On Wed, 2005-07-27 at 14:35 -0800, Matthew Schumacher wrote:\n\n> I put the rest of the schema up at\n> http://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone\n> needs to see it too.\n\nDo you have sample data too?\n\n-- \nKarim Nassar\nCollaborative Computing Lab of NAU\nOffice: (928) 523 5868 -=- Mobile: (928) 699 9221\nhttp://ccl.cens.nau.edu/~kan4\n\n", "msg_date": "Wed, 27 Jul 2005 23:27:03 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Karim Nassar wrote:\n> On Wed, 2005-07-27 at 14:35 -0800, Matthew Schumacher wrote:\n> \n> \n>>I put the rest of the schema up at\n>>http://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone\n>>needs to see it too.\n> \n> \n> Do you have sample data too?\n> \n\nOk, I finally got some test data together so that others can test\nwithout installing SA.\n\nThe schema and test dataset is over at\nhttp://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz\n\nI have a pretty fast machine with a tuned postgres and it takes it about\n2 minutes 30 seconds to load the test data. Since the test data is the\nbayes information on 616 spam messages than comes out to be about 250ms\nper message. 
While that is doable, it does add quite a bit of overhead\nto the email system.\n\nPerhaps this is as fast as I can expect it to go, if that's the case I\nmay have to look at mysql, but I really don't want to do that.\n\nI will be working on some other benchmarks, and reading though exactly\nhow bayes works, but at least there is some data to play with.\n\nschu\n", "msg_date": "Thu, 28 Jul 2005 16:13:19 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "On Thu, 28 Jul 2005, Matthew Schumacher wrote:\n\n> Karim Nassar wrote:\n> > On Wed, 2005-07-27 at 14:35 -0800, Matthew Schumacher wrote:\n> >\n> >\n> >>I put the rest of the schema up at\n> >>http://www.aptalaska.net/~matt.s/bayes/bayes_pg.sql in case someone\n> >>needs to see it too.\n> >\n> >\n> > Do you have sample data too?\n> >\n>\n> Ok, I finally got some test data together so that others can test\n> without installing SA.\n>\n> The schema and test dataset is over at\n> http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz\n>\n> I have a pretty fast machine with a tuned postgres and it takes it about\n> 2 minutes 30 seconds to load the test data. Since the test data is the\n> bayes information on 616 spam messages than comes out to be about 250ms\n> per message. While that is doable, it does add quite a bit of overhead\n> to the email system.\n\nI had a look at your data -- thanks.\n\nI have a question though: put_token() is invoked 120596 times in your\nbenchmark... for 616 messages. That's nearly 200 queries (not even\ncounting the 1-8 (??) inside the function itself) per message. Something\ndoesn't seem right there....\n\nGavin\n", "msg_date": "Fri, 29 Jul 2005 13:57:15 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Gavin Sherry wrote:\n\n> \n> I had a look at your data -- thanks.\n> \n> I have a question though: put_token() is invoked 120596 times in your\n> benchmark... for 616 messages. That's nearly 200 queries (not even\n> counting the 1-8 (??) inside the function itself) per message. Something\n> doesn't seem right there....\n> \n> Gavin\n\nI am pretty sure that's right because it is doing word statistics on\nemail messages.\n\nI need to spend some time studying the code, I just haven't found time yet.\n\nWould it be safe to say that there isn't any glaring performance\npenalties other than the sheer volume of queries?\n\nThanks,\n\nschu\n", "msg_date": "Thu, 28 Jul 2005 21:10:07 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "On Thu, 2005-07-28 at 16:13 -0800, Matthew Schumacher wrote:\n> \n> Ok, I finally got some test data together so that others can test\n> without installing SA.\n> \n> The schema and test dataset is over at\n> http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz\n> \n> I have a pretty fast machine with a tuned postgres and it takes it about\n> 2 minutes 30 seconds to load the test data. Since the test data is the\n> bayes information on 616 spam messages than comes out to be about 250ms\n> per message. 
While that is doable, it does add quite a bit of overhead\n> to the email system.\n\nOn my laptop this takes:\n\nreal 1m33.758s\nuser 0m4.285s\nsys 0m1.181s\n\nOne interesting effect is the data in bayes_vars has a huge number of\nupdates and needs vacuum _frequently_. After the run a vacuum full\ncompacts it down from 461 pages to 1 page.\n\nRegards,\n\t\t\t\t\tAndrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n I don't do it for the money.\n -- Donald Trump, Art of the Deal\n\n-------------------------------------------------------------------------", "msg_date": "Fri, 29 Jul 2005 17:50:11 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "zOn Thu, 28 Jul 2005, Matthew Schumacher wrote:\n\n> Gavin Sherry wrote:\n>\n> >\n> > I had a look at your data -- thanks.\n> >\n> > I have a question though: put_token() is invoked 120596 times in your\n> > benchmark... for 616 messages. That's nearly 200 queries (not even\n> > counting the 1-8 (??) inside the function itself) per message. Something\n> > doesn't seem right there....\n> >\n> > Gavin\n>\n> I am pretty sure that's right because it is doing word statistics on\n> email messages.\n>\n> I need to spend some time studying the code, I just haven't found time yet.\n>\n> Would it be safe to say that there isn't any glaring performance\n> penalties other than the sheer volume of queries?\n\nWell, everything relating to one message should be issued in a transaction\nblock. Secondly, the initial select may be unnecessary -- I haven't looked\nat the logic that closely.\n\nThere is, potentially, some parser overhead. In C, you could get around\nthis with PQprepare() et al.\n\nIt would also be interesting to look at the cost of a C function.\n\nGavin\n", "msg_date": "Fri, 29 Jul 2005 15:58:24 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "On Wed, 27 Jul 2005, Matthew Schumacher wrote:\n\n> Then they do this to insert the token:\n> \n> INSERT INTO bayes_token (\n> id,\n> token,\n> spam_count,\n> ham_count,\n> atime\n> ) VALUES (\n> ?,\n> ?,\n> ?,\n> ?,\n> ?\n> ) ON DUPLICATE KEY\n> UPDATE\n> spam_count = GREATEST(spam_count + ?, 0),\n> ham_count = GREATEST(ham_count + ?, 0),\n> atime = GREATEST(atime, ?)\n> \n> Or update the token:\n> \n> UPDATE bayes_vars SET\n> $token_count_update\n> newest_token_age = GREATEST(newest_token_age, ?),\n> oldest_token_age = LEAST(oldest_token_age, ?)\n> WHERE id = ?\n> \n> \n> I think the reason why the procedure was written for postgres was\n> because of the greatest and least statements performing poorly.\n\nHow can they perform poorly when they are dead simple? 
Here are 2\nfunctions that work for the above cases of greatest:\n\nCREATE FUNCTION greatest_int (integer, integer)\n RETURNS INTEGER\n IMMUTABLE STRICT\n AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'\n LANGUAGE SQL;\n\nCREATE FUNCTION least_int (integer, integer)\n RETURNS INTEGER\n IMMUTABLE STRICT\n AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'\n LANGUAGE SQL;\n\nand these should be inlined by pg and very fast to execute.\n\nI wrote a function that should do what the insert above does. The update \nI've not looked at (I don't know what $token_count_update is) but the \nupdate looks simple enough to just implement the same way in pg as in \nmysql.\n\nFor the insert or replace case you can probably use this function:\n\nCREATE FUNCTION insert_or_update_token (xid INTEGER,\n xtoken BYTEA,\n xspam_count INTEGER,\n xham_count INTEGER,\n xatime INTEGER)\nRETURNS VOID AS\n$$\nBEGIN\n LOOP\n UPDATE bayes_token\n SET spam_count = greatest_int (spam_count + xspam_count, 0),\n ham_count = greatest_int (ham_count + xham_count, 0),\n atime = greatest_int (atime, xatime)\n WHERE id = xid\n AND token = xtoken;\n\n IF found THEN\n \t RETURN;\n END IF;\n\n BEGIN\n INSERT INTO bayes_token VALUES (xid,\n xtoken,\n xspam_count,\n xham_count,\n xatime);\n RETURN;\n EXCEPTION WHEN unique_violation THEN\n -- do nothing\n END;\n END LOOP;\nEND;\n$$\nLANGUAGE plpgsql;\n\nIt's not really tested so I can't tell if it's faster then what you have. \nWhat it does do is mimic the way you insert values in mysql. It only work\non pg 8.0 and later however since the exception handling was added in 8.0.\n\n-- \n/Dennis Bj�rklund\n\n", "msg_date": "Fri, 29 Jul 2005 08:48:48 +0200 (CEST)", "msg_from": "Dennis Bjorklund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Andrew McMillan wrote:\n> On Thu, 2005-07-28 at 16:13 -0800, Matthew Schumacher wrote:\n> \n>>Ok, I finally got some test data together so that others can test\n>>without installing SA.\n>>\n>>The schema and test dataset is over at\n>>http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz\n>>\n>>I have a pretty fast machine with a tuned postgres and it takes it about\n>>2 minutes 30 seconds to load the test data. Since the test data is the\n>>bayes information on 616 spam messages than comes out to be about 250ms\n>>per message. While that is doable, it does add quite a bit of overhead\n>>to the email system.\n> \n> \n> On my laptop this takes:\n> \n> real 1m33.758s\n> user 0m4.285s\n> sys 0m1.181s\n> \n> One interesting effect is the data in bayes_vars has a huge number of\n> updates and needs vacuum _frequently_. After the run a vacuum full\n> compacts it down from 461 pages to 1 page.\n> \n> Regards,\n> \t\t\t\t\tAndrew.\n> \n\nI wonder why your laptop is so much faster. My 2 min 30 sec test was\ndone on a dual xeon with a LSI megaraid with 128MB cache and writeback\ncaching turned on.\n\nHere are my memory settings:\n\nshared_buffers = 16384\nwork_mem = 32768\nmaintenance_work_mem = 65536\n\nI tried higher values before I came back to these but it didn't help my\nperformance any. I should also mention that this is a production\ndatabase server that was servicing other queries when I ran this test.\n\nHow often should this table be vacuumed, every 5 minutes?\n\nAlso, this test goes a bit faster with sync turned off, if mysql isn't\nusing sync that would be why it's so much faster. 
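For reference, the fsync setting in question is server-wide in 7.4/8.0, so it cannot
be switched off for just one database; a quick way to check and change it (a sketch,
assuming access to psql and postgresql.conf) is:

  -- from psql: show the value the server is currently running with
  SHOW fsync;

and then in postgresql.conf, picked up after a reload (pg_ctl reload) or a restart:

  fsync = false   # trades crash safety for write speed; applies to the whole cluster
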
Anyone know what the\ndefault for mysql is?\n\nThanks,\nschu\n\n", "msg_date": "Fri, 29 Jul 2005 09:37:42 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Dennis,\n\n>       EXCEPTION WHEN unique_violation THEN\n\nI seem to remember that catching an exception in a PL/pgSQL procedure was a \nlarge performance cost. It'd be better to do UPDATE ... IF NOT FOUND.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 29 Jul 2005 10:38:04 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Josh Berkus wrote:\n\n>Dennis,\n>\n> \n>\n>> EXCEPTION WHEN unique_violation THEN\n>> \n>>\n>\n>I seem to remember that catching an exception in a PL/pgSQL procedure was a \n>large performance cost. It'd be better to do UPDATE ... IF NOT FOUND.\n>\n> \n>\nActually, he was doing an implicit UPDATE IF NOT FOUND in that he was doing:\n\nUPDATE\n\nIF found THEN return;\n\nINSERT\nEXCEPT\n...\n\nSo really, the exception should never be triggered.\nJohn\n=:->", "msg_date": "Fri, 29 Jul 2005 13:29:46 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Tom,\n\nOn 7/27/05 11:19 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> Matthew Schumacher <[email protected]> writes:\n>> After playing with various indexes and what not I simply am unable to\n>> make this procedure perform any better. Perhaps someone on the list can\n>> spot the bottleneck and reveal why this procedure isn't performing that\n>> well or ways to make it better.\n> \n> There's not anything obviously wrong with that procedure --- all of the\n> updates are on primary keys, so one would expect reasonably efficient\n> query plans to get chosen. Perhaps it'd be worth the trouble to build\n> the server with profiling enabled and get a gprof trace to see where the\n> time is going.\n\nYes - that would be excellent. We've used oprofile recently at Mark Wong's\nsuggestion, which doesn't require rebuilding the source.\n\n- Luke\n\n\n", "msg_date": "Fri, 29 Jul 2005 12:02:30 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "\n\n> Also, this test goes a bit faster with sync turned off, if mysql isn't\n> using sync that would be why it's so much faster. Anyone know what the\n> default for mysql is?\n\n\tFor InnoDB I think it's like Postgres (only slower) ; for MyISAM it's no \nfsync, no transactions, no crash tolerance of any kind, and it's not a \ndefault value (in the sense that you could tweak it) it's just the way \nit's coded.\n", "msg_date": "Fri, 29 Jul 2005 21:24:32 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "On Fri, 2005-07-29 at 09:37 -0800, Matthew Schumacher wrote:\n> > \n> > On my laptop this takes:\n> > \n> > real 1m33.758s\n> > user 0m4.285s\n> > sys 0m1.181s\n> > \n> > One interesting effect is the data in bayes_vars has a huge number of\n> > updates and needs vacuum _frequently_. 
After the run a vacuum full\n> > compacts it down from 461 pages to 1 page.\n> > \n> \n> I wonder why your laptop is so much faster. My 2 min 30 sec test was\n> done on a dual xeon with a LSI megaraid with 128MB cache and writeback\n> caching turned on.\n\nI only do development stuff on my laptop, and all of my databases are\nreconstructable from copies, etc... so I turn off fsync in this case.\n\n\n> How often should this table be vacuumed, every 5 minutes?\n\nI would be tempted to vacuum after each e-mail, in this case.\n\n\n> Also, this test goes a bit faster with sync turned off, if mysql isn't\n> using sync that would be why it's so much faster. Anyone know what the\n> default for mysql is?\n\nIt depends on your table type for MySQL.\n\nFor the data in question (i.e. bayes scoring) it would seem that not\nmuch would be lost if you did have to restore your data from a day old\nbackup, so perhaps fsync=false is OK for this particular application.\n\nRegards,\n\t\t\t\t\tAndrew McMillan.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n What we wish, that we readily believe.\n -- Demosthenes\n-------------------------------------------------------------------------", "msg_date": "Sat, 30 Jul 2005 08:19:31 +1200", "msg_from": "Andrew McMillan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Andrew McMillan wrote:\n> \n> For the data in question (i.e. bayes scoring) it would seem that not\n> much would be lost if you did have to restore your data from a day old\n> backup, so perhaps fsync=false is OK for this particular application.\n> \n> Regards,\n> \t\t\t\t\tAndrew McMillan.\n> \n\nRestoring from the previous days backup is plenty exceptable in this\napplication, so is it possible to turn fsync off, but only for this\ndatabase, or would I need to start another psql instance?\n\nschu\n", "msg_date": "Fri, 29 Jul 2005 12:30:20 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Andrew McMillan <[email protected]> writes:\n> On Fri, 2005-07-29 at 09:37 -0800, Matthew Schumacher wrote:\n>> How often should this table be vacuumed, every 5 minutes?\n\n> I would be tempted to vacuum after each e-mail, in this case.\n\nPerhaps the bulk of the transient states should be done in a temp table,\nand only write into a real table when you're done? 
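A minimal sketch of that temp-table staging approach, reusing the bayes_token columns
from the schema above (the hard-coded user id of 1 and the COPY step are illustrative
only, not SpamAssassin's actual code, and the greatest/least handling from the real
procedure is left out to keep it short):

  BEGIN;
  -- stage one message's tokens; the temp table disappears at commit,
  -- so there is nothing left over to vacuum
  CREATE TEMP TABLE bayes_token_tmp (
      token      bytea,
      spam_count integer,
      ham_count  integer,
      atime      integer
  ) ON COMMIT DROP;

  COPY bayes_token_tmp FROM STDIN;

  -- fold the staged rows into the real table: update the tokens that exist...
  UPDATE bayes_token
     SET spam_count = bayes_token.spam_count + t.spam_count,
         ham_count  = bayes_token.ham_count  + t.ham_count,
         atime      = t.atime
    FROM bayes_token_tmp t
   WHERE bayes_token.id = 1
     AND bayes_token.token = t.token;

  -- ...and insert the ones that are new
  INSERT INTO bayes_token (id, token, spam_count, ham_count, atime)
  SELECT 1, t.token, t.spam_count, t.ham_count, t.atime
    FROM bayes_token_tmp t
   WHERE NOT EXISTS (SELECT 1 FROM bayes_token b
                      WHERE b.id = 1 AND b.token = t.token);
  COMMIT;
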
Dropping a temp\ntable is way quicker than vacuuming it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jul 2005 17:01:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 " }, { "msg_contents": "\nOk, here is where I'm at, I reduced the proc down to this:\n\nCREATE FUNCTION update_token (_id INTEGER,\n _token BYTEA,\n _spam_count INTEGER,\n _ham_count INTEGER,\n _atime INTEGER)\nRETURNS VOID AS\n$$\nBEGIN\n LOOP\n UPDATE bayes_token\n SET spam_count = spam_count + _spam_count,\n ham_count = ham_count + _ham_count,\n atime = _atime\n WHERE id = _id\n AND token = _token;\n\n IF found THEN\n RETURN;\n END IF;\n\n INSERT INTO bayes_token VALUES (_id, _token, _spam_count,\n_ham_count, _atime);\n IF FOUND THEN\n UPDATE bayes_vars SET token_count = token_count + 1 WHERE id = _id;\n IF NOT FOUND THEN\n RAISE EXCEPTION 'unable to update token_count in bayes_vars';\n return FALSE;\n END IF;\n\n RETURN;\n END IF;\n\n RETURN;\n END LOOP;\nEND;\n$$\nLANGUAGE plpgsql;\n\nAll it's doing is trying the update before the insert to get around the\nproblem of not knowing which is needed. With only 2-3 of the queries\nimplemented I'm already back to running about the same speed as the\noriginal SA proc that is going to ship with SA 3.1.0.\n\nAll of the queries are using indexes so at this point I'm pretty\nconvinced that the biggest problem is the sheer number of queries\nrequired to run this proc 200 times for each email (once for each token).\n\nI don't see anything that could be done to make this much faster on the\npostgres end, it's looking like the solution is going to involve cutting\ndown the number of queries some how.\n\nOne thing that is still very puzzling to me is why this runs so much\nslower when I put the data.sql in a transaction. Obviously transactions\nare acting different when you call a proc a zillion times vs an insert\nquery.\n\nAnyway, if anyone else has any ideas I'm all ears, but at this point\nit's looking like raw query speed is needed for this app and while I\ndon't care for mysql as a database, it does have the speed going for it.\n\nschu\n", "msg_date": "Fri, 29 Jul 2005 13:48:00 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher <[email protected]> writes:\n> One thing that is still very puzzling to me is why this runs so much\n> slower when I put the data.sql in a transaction. Obviously transactions\n> are acting different when you call a proc a zillion times vs an insert\n> query.\n\nI looked into this a bit. It seems that the problem when you wrap the\nentire insertion series into one transaction is associated with the fact\nthat the test does so many successive updates of the single row in\nbayes_vars. (VACUUM VERBOSE at the end of the test shows it cleaning up\n49383 dead versions of the one row.) This is bad enough when it's in\nseparate transactions, but when it's in one transaction, none of those\ndead row versions can be marked \"fully dead\" yet --- so for every update\nof the row, the unique-key check has to visit every dead version to make\nsure it's dead in the context of the current transaction. This makes\nthe process O(N^2) in the number of updates per transaction. 
Which is\nbad enough if you just want to do one transaction per message, but it's\nintolerable if you try to wrap the whole bulk-load scenario into one\ntransaction.\n\nI'm not sure that we can do anything to make this a lot smarter, but\nin any case, the real problem is to not do quite so many updates of\nbayes_vars.\n\nHow constrained are you as to the format of the SQL generated by\nSpamAssassin? In particular, could you convert the commands generated\nfor a single message into a single statement? I experimented with\npassing all the tokens for a given message as a single bytea array,\nas in the attached, and got almost a factor of 4 runtime reduction\non your test case.\n\nBTW, it's possible that this is all just a startup-transient problem:\nonce the database has been reasonably well populated, one would expect\nnew tokens to be added infrequently, and so the number of updates to\nbayes_vars ought to drop off.\n\n\t\t\tregards, tom lane\n\n\nRevised insertion procedure:\n\n\nCREATE or replace FUNCTION put_tokens (_id INTEGER,\n _tokens BYTEA[],\n _spam_count INTEGER,\n _ham_count INTEGER,\n _atime INTEGER)\nRETURNS VOID AS\n$$\ndeclare _token bytea;\n new_tokens integer := 0;\nBEGIN\n for i in array_lower(_tokens,1) .. array_upper(_tokens,1)\n LOOP\n _token := _tokens[i];\n UPDATE bayes_token\n SET spam_count = spam_count + _spam_count,\n ham_count = ham_count + _ham_count,\n atime = _atime\n WHERE id = _id\n AND token = _token;\n\n IF not found THEN\n INSERT INTO bayes_token VALUES (_id, _token, _spam_count,\n _ham_count, _atime);\n new_tokens := new_tokens + 1;\n END IF;\n END LOOP;\n if new_tokens > 0 THEN\n UPDATE bayes_vars SET token_count = token_count + new_tokens\n WHERE id = _id;\n IF NOT FOUND THEN\n RAISE EXCEPTION 'unable to update token_count in bayes_vars';\n END IF;\n END IF;\n RETURN;\nEND;\n$$\nLANGUAGE plpgsql;\n\n\nTypical input:\n\n\nselect put_tokens(1,'{\"\\\\125\\\\42\\\\80\\\\167\\\\166\",\"\\\\38\\\\153\\\\220\\\\93\\\\190\",\"\\\\68\\\\7\\\\112\\\\52\\\\224\",\"\\\\51\\\\14\\\\78\\\\155\\\\49\",\"\\\\73\\\\245\\\\15\\\\221\\\\43\",\"\\\\96\\\\179\\\\108\\\\197\\\\121\",\"\\\\123\\\\97\\\\220\\\\173\\\\247\",\"\\\\55\\\\132\\\\243\\\\51\\\\65\",\"\\\\238\\\\36\\\\129\\\\75\\\\181\",\"\\\\145\\\\253\\\\196\\\\106\\\\90\",\"\\\\119\\\\0\\\\51\\\\127\\\\236\",\"\\\\229\\\\35\\\\181\\\\222\\\\3\",\"\\\\163\\\\1\\\\191\\\\220\\\\79\",\"\\\\232\\\\97\\\\152\\\\207\\\\26\",\"\\\\111\\\\146\\\\81\\\\182\\\\250\",\"\\\\47\\\\141\\\\12\\\\76\\\\45\",\"\\\\252\\\\97\\\\168\\\\243\\\\222\",\"\\\\24\\\\157\\\\202\\\\45\\\\24\",\"\\\\230\\\\207\\\\30\\\\46\\\\115\",\"\\\\106\\\\45\\\\182\\\\94\\\\136\",\"\\\\45\\\\66\\\\245\\\\41\\\\103\",\"\\\\108\\\\126\\\\171\\\\154\\\\210\",\"\\\\64\\\\90\\\\1\\\\184\\\\145\",\"\\\\242\\\\78\\\\150\\\\104\\\\213\",\"\\\\214\\\\\\\\134\\\\7\\\\179\\\\150\",\"\\\\249\\\\12\\\\247\\\\164\\\\74\",\"\\\\234\\\\35\\\\93\\\\118\\\\102\",\"\\\\5\\\\152\\\\152\\\\219\\\\188\",\"\\\\99\\\\186\\\\172\\\\56\\\\241\",\"\\\\99\\\\220\\\\62\\\\240\\\\148\",\"\\\\106\\\\12\\\\199\\\\33\\\\177\",\"\\\\34\\\\74\\\\190\\\\192\\\\186\",\"\\\\219\\\\127\\\\145\\\\132\\\\203\",\"\\\\240\\\\113\\\\128\\\\160\\\\46\",\"\\\\83\\\\5\\\\239\\\\206\\\\221\",\"\\\\245\\\\253\\\\219\\\\83\\\\250\",\"\\\\1\\\\53\\\\126\\\\56\\\\129\",\"\\\\206\\\\1!\n 
30\\\\97\\\\246\\\\47\",\"\\\\217\\\\57\\\\185\\\\37\\\\202\",\"\\\\235\\\\10\\\\74\\\\224\\\\150\",\"\\\\80\\\\151\\\\70\\\\52\\\\96\",\"\\\\126\\\\49\\\\156\\\\162\\\\93\",\"\\\\243\\\\120\\\\218\\\\226\\\\49\",\"\\\\251\\\\132\\\\118\\\\47\\\\221\",\"\\\\241\\\\160\\\\120\\\\146\\\\198\",\"\\\\183\\\\32\\\\161\\\\223\\\\178\",\"\\\\80\\\\205\\\\77\\\\57\\\\2\",\"\\\\121\\\\231\\\\13\\\\71\\\\218\",\"\\\\71\\\\143\\\\184\\\\88\\\\185\",\"\\\\163\\\\96\\\\119\\\\211\\\\142\",\"\\\\20\\\\143\\\\90\\\\91\\\\211\",\"\\\\179\\\\228\\\\212\\\\15\\\\22\",\"\\\\243\\\\35\\\\149\\\\9\\\\55\",\"\\\\140\\\\149\\\\99\\\\233\\\\241\",\"\\\\164\\\\246\\\\101\\\\147\\\\107\",\"\\\\202\\\\70\\\\218\\\\40\\\\114\",\"\\\\39\\\\36\\\\186\\\\46\\\\84\",\"\\\\58\\\\116\\\\44\\\\237\\\\2\",\"\\\\80\\\\204\\\\185\\\\47\\\\105\",\"\\\\64\\\\227\\\\29\\\\108\\\\222\",\"\\\\173\\\\115\\\\56\\\\91\\\\52\",\"\\\\102\\\\39\\\\157\\\\252\\\\64\",\"\\\\133\\\\9\\\\89\\\\207\\\\62\",\"\\\\27\\\\2\\\\230\\\\227\\\\201\",\"\\\\163\\\\45\\\\123\\\\160\\\\129\",\"\\\\170\\\\131\\\\168\\\\107\\\\198\",\"\\\\236\\\\253\\\\0\\\\43\\\\228\",\"\\\\44\\\\255\\\\93\\\\197\\\\136\",\"\\\\64\\\\122\\\\42\\\\230\\\\126\",\"\\\\207\\\\222\\\\104\\\\27\\\\239\",\"\\\\26\\\\240\\\\78\\\\73\\\\45\",\"\\\\225\\\\107\\\\181\\\\246\\\\160\",\"\\\\231\\\\72\\\\243\\\\36\\\\159\",\"\\\\248\\\\60\\\\14\\\\67\\\\145\",\"\\\\21\\\\161\\\\247\\\\43\\\\198\",\"\\\\81\\\\243\\\\19!\n 1\\\\168\\\\18\",\"\\\\237\\\\227\\\\23\\\\40\\\\140\",\"\\\\60\\\\90\\\\96\\\\168\\\\201\",\"\\\\211\\\n\\107\\\\181\\\\46\\\\38\",\"\\\\178\\\\129\\\\212\\\\16\\\\254\",\"\\\\85\\\\177\\\\246\\\\29\\\\221\",\"\\\\182\\\\123\\\\178\\\\157\\\\9\",\"\\\\154\\\\159\\\\180\\\\116\\\\89\",\"\\\\80\\\\136\\\\196\\\\242\\\\161\",\"\\\\185\\\\110\\\\90\\\\163\\\\157\",\"\\\\163\\\\191\\\\229\\\\13\\\\42\",\"\\\\11\\\\119\\\\205\\\\160\\\\223\",\"\\\\75\\\\216\\\\70\\\\223\\\\6\",\"\\\\130\\\\48\\\\154\\\\145\\\\51\",\"\\\\62\\\\104\\\\212\\\\72\\\\3\",\"\\\\247\\\\105\\\\51\\\\64\\\\136\",\"\\\\17\\\\96\\\\45\\\\40\\\\77\",\"\\\\52\\\\1\\\\252\\\\53\\\\121\",\"\\\\68\\\\195\\\\58\\\\103\\\\91\",\"\\\\135\\\\131\\\\100\\\\4\\\\0\",\"\\\\131\\\\129\\\\44\\\\193\\\\194\",\"\\\\47\\\\234\\\\101\\\\143\\\\26\",\"\\\\206\\\\\\\\134\\\\32\\\\154\\\\0\",\"\\\\17\\\\41\\\\177\\\\34\\\\178\",\"\\\\145\\\\127\\\\114\\\\231\\\\216\",\"\\\\19\\\\172\\\\6\\\\39\\\\126\",\"\\\\237\\\\233\\\\121\\\\43\\\\119\",\"\\\\201\\\\167\\\\167\\\\67\\\\233\",\"\\\\88\\\\159\\\\102\\\\50\\\\117\",\"\\\\100\\\\133\\\\107\\\\190\\\\133\",\"\\\\169\\\\146\\\\178\\\\120\\\\106\"}',1,0,1088628232);\nselect 
put_tokens(1,'{\"\\\\196\\\\75\\\\30\\\\153\\\\73\",\"\\\\73\\\\245\\\\15\\\\221\\\\43\",\"\\\\14\\\\7\\\\116\\\\254\\\\162\",\"\\\\244\\\\161\\\\139\\\\59\\\\16\",\"\\\\214\\\\226\\\\238\\\\196\\\\30\",\"\\\\209\\\\14\\\\131\\\\231\\\\30\",\"\\\\41\\\\\\\\134\\\\176\\\\195\\\\166\",\"\\\\70\\\\206\\\\48\\\\38\\\\33\",\"\\\\247\\\\131\\\\136\\\\80\\\\31\",\"\\\\4\\\\85\\\\5\\\\167\\\\214\",\"\\\\246\\\\106\\\\225\\\\106\\\\242\",\"\\\\28\\\\0\\\\229\\\\160\\\\90\",\"\\\\127\\\\209\\\\58\\\\120\\\\83\",\"\\\\12\\\\52\\\\52\\\\147\\\\95\",\"\\\\255\\\\115\\\\21\\\\5\\\\68\",\"\\\\244\\\\152\\\\121\\\\76\\\\20\",\"\\\\19\\\\128\\\\183\\\\248\\\\181\",\"\\\\140\\\\91\\\\18\\\\127\\\\208\",\"\\\\93\\\\9\\\\62\\\\196\\\\247\",\"\\\\248\\\\200\\\\31\\\\207\\\\108\",\"\\\\44\\\\216\\\\247\\\\15\\\\195\",\"\\\\59\\\\189\\\\9\\\\237\\\\142\",\"\\\\1\\\\14\\\\10\\\\221\\\\68\",\"\\\\163\\\\155\\\\122\\\\223\\\\104\",\"\\\\97\\\\5\\\\105\\\\55\\\\137\",\"\\\\184\\\\211\\\\162\\\\23\\\\247\",\"\\\\239\\\\249\\\\83\\\\68\\\\54\",\"\\\\67\\\\207\\\\180\\\\186\\\\234\",\"\\\\99\\\\78\\\\237\\\\211\\\\180\",\"\\\\200\\\\11\\\\32\\\\179\\\\50\",\"\\\\95\\\\105\\\\18\\\\60\\\\253\",\"\\\\207\\\\102\\\\227\\\\94\\\\84\",\"\\\\71\\\\143\\\\184\\\\88\\\\185\",\"\\\\13\\\\181\\\\75\\\\24\\\\192\",\"\\\\188\\\\241\\\\141\\\\99\\\\242\",\"\\\\139\\\\124\\\\248\\\\130\\\\4\",\"\\\\25\\\\110\\\\149\\\\63\\\\114\",\"\\\\21\\\\162\\\\199\\\\1!\n 29\\\\199\",\"\\\\164\\\\246\\\\101\\\\147\\\\107\",\"\\\\198\\\\202\\\\223\\\\58\\\\197\",\"\\\\181\\\\10\\\\41\\\\25\\\\130\",\"\\\\71\\\\163\\\\116\\\\239\\\\170\",\"\\\\46\\\\170\\\\238\\\\142\\\\89\",\"\\\\176\\\\120\\\\106\\\\103\\\\228\",\"\\\\39\\\\228\\\\25\\\\38\\\\170\",\"\\\\114\\\\79\\\\121\\\\18\\\\222\",\"\\\\178\\\\105\\\\98\\\\61\\\\39\",\"\\\\90\\\\61\\\\12\\\\23\\\\135\",\"\\\\176\\\\118\\\\81\\\\65\\\\66\",\"\\\\55\\\\104\\\\57\\\\198\\\\150\",\"\\\\206\\\\251\\\\224\\\\128\\\\41\",\"\\\\29\\\\158\\\\68\\\\146\\\\164\",\"\\\\248\\\\60\\\\14\\\\67\\\\145\",\"\\\\210\\\\220\\\\161\\\\10\\\\254\",\"\\\\72\\\\81\\\\151\\\\213\\\\68\",\"\\\\25\\\\236\\\\210\\\\197\\\\128\",\"\\\\72\\\\37\\\\208\\\\227\\\\54\",\"\\\\242\\\\24\\\\6\\\\88\\\\26\",\"\\\\128\\\\197\\\\20\\\\5\\\\211\",\"\\\\98\\\\105\\\\71\\\\42\\\\180\",\"\\\\91\\\\43\\\\72\\\\84\\\\104\",\"\\\\205\\\\254\\\\174\\\\65\\\\141\",\"\\\\222\\\\194\\\\126\\\\204\\\\164\",\"\\\\233\\\\153\\\\37\\\\148\\\\226\",\"\\\\32\\\\195\\\\22\\\\153\\\\87\",\"\\\\194\\\\97\\\\220\\\\251\\\\18\",\"\\\\151\\\\201\\\\148\\\\52\\\\147\",\"\\\\205\\\\55\\\\0\\\\226\\\\58\",\"\\\\172\\\\12\\\\50\\\\0\\\\140\",\"\\\\56\\\\32\\\\43\\\\9\\\\45\",\"\\\\18\\\\174\\\\50\\\\162\\\\126\",\"\\\\138\\\\150\\\\12\\\\72\\\\189\",\"\\\\49\\\\230\\\\150\\\\210\\\\48\",\"\\\\2\\\\140\\\\64\\\\104\\\\32\",\"\\\\14\\\\174\\\\41\\\\196\\\\121\",\"\\\\100\\\\195\\\\116\\\\130\\\\101\",\"\\!\n 
\\222\\\\45\\\\94\\\\39\\\\64\",\"\\\\178\\\\203\\\\221\\\\63\\\\94\",\"\\\\26\\\\188\\\\157\\\\\\\\134\n\\\\52\",\"\\\\119\\\\0\\\\51\\\\127\\\\236\",\"\\\\88\\\\32\\\\224\\\\142\\\\164\",\"\\\\111\\\\146\\\\81\\\\182\\\\250\",\"\\\\12\\\\177\\\\151\\\\83\\\\13\",\"\\\\113\\\\27\\\\173\\\\162\\\\19\",\"\\\\158\\\\216\\\\41\\\\236\\\\226\",\"\\\\16\\\\88\\\\\\\\134\\\\180\\\\112\",\"\\\\43\\\\32\\\\16\\\\77\\\\238\",\"\\\\136\\\\93\\\\210\\\\172\\\\63\",\"\\\\251\\\\214\\\\30\\\\40\\\\146\",\"\\\\156\\\\27\\\\198\\\\60\\\\170\",\"\\\\185\\\\29\\\\172\\\\30\\\\68\",\"\\\\202\\\\83\\\\59\\\\228\\\\252\",\"\\\\219\\\\127\\\\145\\\\132\\\\203\",\"\\\\1\\\\223\\\\97\\\\229\\\\127\",\"\\\\113\\\\83\\\\123\\\\167\\\\140\",\"\\\\99\\\\1\\\\116\\\\56\\\\165\",\"\\\\143\\\\224\\\\239\\\\1\\\\173\",\"\\\\49\\\\186\\\\156\\\\51\\\\92\",\"\\\\246\\\\224\\\\70\\\\245\\\\137\",\"\\\\235\\\\10\\\\74\\\\224\\\\150\",\"\\\\43\\\\88\\\\245\\\\14\\\\103\",\"\\\\88\\\\128\\\\232\\\\142\\\\254\",\"\\\\251\\\\132\\\\118\\\\47\\\\221\",\"\\\\36\\\\7\\\\142\\\\234\\\\98\",\"\\\\130\\\\126\\\\199\\\\170\\\\126\",\"\\\\133\\\\23\\\\51\\\\253\\\\234\",\"\\\\249\\\\89\\\\242\\\\87\\\\86\",\"\\\\102\\\\243\\\\47\\\\193\\\\211\",\"\\\\140\\\\18\\\\140\\\\164\\\\248\",\"\\\\179\\\\228\\\\212\\\\15\\\\22\",\"\\\\168\\\\155\\\\243\\\\169\\\\191\",\"\\\\117\\\\37\\\\139\\\\241\\\\230\",\"\\\\155\\\\11\\\\254\\\\171\\\\200\",\"\\\\196\\\\159\\\\253\\\\223\\\\15\",\"\\\\93\\\\207\\\\154\\\\106\\\\135\",\"\\\\11\\\\255\\\\28\\\\123\\\\125\",\"\\\\239\\\\9\\\\226!\n \\\\59\\\\198\",\"\\\\191\\\\204\\\\230\\\\61\\\\39\",\"\\\\175\\\\204\\\\181\\\\113\\\\\\\\134\",\"\\\\64\\\\227\\\\29\\\\108\\\\222\",\"\\\\169\\\\173\\\\194\\\\83\\\\40\",\"\\\\212\\\\93\\\\170\\\\169\\\\12\",\"\\\\249\\\\55\\\\232\\\\182\\\\208\",\"\\\\75\\\\175\\\\181\\\\248\\\\246\",\"\\\\108\\\\95\\\\114\\\\215\\\\138\",\"\\\\220\\\\37\\\\59\\\\207\\\\197\",\"\\\\45\\\\146\\\\43\\\\76\\\\81\",\"\\\\166\\\\231\\\\20\\\\9\\\\189\",\"\\\\27\\\\126\\\\81\\\\92\\\\75\",\"\\\\66\\\\168\\\\119\\\\100\\\\196\",\"\\\\229\\\\9\\\\196\\\\165\\\\250\",\"\\\\83\\\\186\\\\103\\\\184\\\\46\",\"\\\\85\\\\177\\\\246\\\\29\\\\221\",\"\\\\140\\\\159\\\\53\\\\211\\\\157\",\"\\\\214\\\\193\\\\192\\\\217\\\\109\",\"\\\\10\\\\5\\\\64\\\\97\\\\157\",\"\\\\92\\\\137\\\\120\\\\70\\\\55\",\"\\\\235\\\\45\\\\181\\\\44\\\\98\",\"\\\\150\\\\56\\\\132\\\\207\\\\19\",\"\\\\67\\\\95\\\\161\\\\39\\\\122\",\"\\\\109\\\\65\\\\145\\\\170\\\\79\",\"\\\\\\\\134\\\\28\\\\90\\\\39\\\\33\",\"\\\\226\\\\177\\\\240\\\\202\\\\157\",\"\\\\1\\\\57\\\\50\\\\6\\\\240\",\"\\\\249\\\\240\\\\222\\\\56\\\\161\",\"\\\\110\\\\136\\\\88\\\\85\\\\249\",\"\\\\82\\\\27\\\\239\\\\51\\\\211\",\"\\\\114\\\\223\\\\252\\\\83\\\\189\",\"\\\\129\\\\216\\\\251\\\\218\\\\80\",\"\\\\247\\\\36\\\\101\\\\90\\\\229\",\"\\\\209\\\\73\\\\221\\\\46\\\\11\",\"\\\\242\\\\12\\\\120\\\\117\\\\\\\\134\",\"\\\\146\\\\198\\\\57\\\\177\\\\49\",\"\\\\212\\\\57\\\\9\\\\240\\\\216\",\"\\\\215\\\\151\\\\2!\n 
16\\\\59\\\\75\",\"\\\\47\\\\132\\\\161\\\\165\\\\54\",\"\\\\113\\\\4\\\\77\\\\241\\\\150\",\"\\\\217\\\n\\184\\\\149\\\\53\\\\124\",\"\\\\152\\\\111\\\\25\\\\231\\\\104\",\"\\\\42\\\\185\\\\112\\\\250\\\\156\",\"\\\\39\\\\131\\\\14\\\\140\\\\189\",\"\\\\148\\\\169\\\\158\\\\251\\\\150\",\"\\\\184\\\\142\\\\204\\\\122\\\\179\",\"\\\\19\\\\189\\\\181\\\\105\\\\116\",\"\\\\116\\\\77\\\\22\\\\135\\\\50\",\"\\\\236\\\\231\\\\60\\\\132\\\\229\",\"\\\\200\\\\63\\\\76\\\\232\\\\9\",\"\\\\32\\\\20\\\\168\\\\87\\\\45\",\"\\\\99\\\\129\\\\99\\\\165\\\\29\",\"\\\\2\\\\208\\\\66\\\\228\\\\105\",\"\\\\99\\\\194\\\\194\\\\229\\\\17\",\"\\\\85\\\\250\\\\55\\\\51\\\\114\",\"\\\\200\\\\165\\\\249\\\\77\\\\72\",\"\\\\5\\\\91\\\\178\\\\157\\\\24\",\"\\\\245\\\\253\\\\219\\\\83\\\\250\",\"\\\\166\\\\103\\\\181\\\\196\\\\34\",\"\\\\227\\\\149\\\\148\\\\105\\\\157\",\"\\\\95\\\\44\\\\15\\\\251\\\\98\",\"\\\\183\\\\32\\\\161\\\\223\\\\178\",\"\\\\120\\\\236\\\\145\\\\158\\\\78\",\"\\\\244\\\\4\\\\92\\\\233\\\\112\",\"\\\\189\\\\231\\\\124\\\\92\\\\19\",\"\\\\112\\\\132\\\\8\\\\49\\\\157\",\"\\\\160\\\\243\\\\244\\\\94\\\\104\",\"\\\\150\\\\176\\\\139\\\\251\\\\157\",\"\\\\176\\\\193\\\\155\\\\175\\\\144\",\"\\\\161\\\\208\\\\145\\\\92\\\\92\",\"\\\\77\\\\122\\\\94\\\\69\\\\182\",\"\\\\77\\\\13\\\\131\\\\29\\\\27\",\"\\\\92\\\\9\\\\178\\\\204\\\\254\",\"\\\\177\\\\4\\\\154\\\\211\\\\63\",\"\\\\62\\\\4\\\\242\\\\1\\\\78\",\"\\\\4\\\\129\\\\113\\\\205\\\\164\",\"\\\\168\\\\95\\\\68\\\\89\\\\38\",\"\\\\173\\\\115\\\\56\\\\91\\\\52\",\"\\\\212\\\\161\\\\1!\n 59\\\\148\\\\179\",\"\\\\133\\\\9\\\\89\\\\207\\\\62\",\"\\\\242\\\\51\\\\168\\\\130\\\\86\",\"\\\\154\\\\199\\\\208\\\\84\\\\2\",\"\\\\160\\\\215\\\\250\\\\104\\\\22\",\"\\\\45\\\\252\\\\143\\\\149\\\\204\",\"\\\\48\\\\50\\\\91\\\\39\\\\243\",\"\\\\94\\\\168\\\\48\\\\202\\\\122\",\"\\\\238\\\\38\\\\180\\\\135\\\\142\",\"\\\\234\\\\59\\\\24\\\\148\\\\2\",\"\\\\237\\\\227\\\\23\\\\40\\\\140\",\"\\\\7\\\\114\\\\176\\\\80\\\\123\",\"\\\\204\\\\170\\\\0\\\\60\\\\65\",\"\\\\217\\\\202\\\\249\\\\158\\\\182\",\"\\\\82\\\\170\\\\45\\\\96\\\\86\",\"\\\\118\\\\11\\\\123\\\\51\\\\216\",\"\\\\192\\\\130\\\\153\\\\88\\\\59\",\"\\\\219\\\\53\\\\146\\\\88\\\\198\",\"\\\\203\\\\114\\\\182\\\\147\\\\145\",\"\\\\158\\\\140\\\\239\\\\104\\\\247\",\"\\\\179\\\\86\\\\111\\\\146\\\\65\",\"\\\\192\\\\168\\\\51\\\\125\\\\183\",\"\\\\8\\\\251\\\\77\\\\143\\\\231\",\"\\\\237\\\\229\\\\173\\\\221\\\\29\",\"\\\\69\\\\178\\\\247\\\\196\\\\175\",\"\\\\114\\\\33\\\\237\\\\189\\\\119\",\"\\\\220\\\\44\\\\144\\\\93\\\\98\",\"\\\\241\\\\38\\\\138\\\\127\\\\252\",\"\\\\66\\\\218\\\\237\\\\199\\\\157\",\"\\\\132\\\\240\\\\212\\\\221\\\\48\",\"\\\\180\\\\41\\\\157\\\\84\\\\37\",\"\\\\203\\\\180\\\\58\\\\113\\\\136\",\"\\\\156\\\\39\\\\111\\\\181\\\\34\",\"\\\\16\\\\202\\\\216\\\\183\\\\55\",\"\\\\154\\\\51\\\\122\\\\201\\\\45\",\"\\\\218\\\\112\\\\47\\\\206\\\\142\",\"\\\\189\\\\141\\\\110\\\\230\\\\132\",\"\\\\80\\\\167\\\\61\\\\103\\\\247\",\"\\\\186\\!\n 
\\15\\\\121\\\\27\\\\167\",\"\\\\103\\\\163\\\\217\\\\19\\\\220\",\"\\\\173\\\\116\\\\86\\\\7\\\\249\"\n,\"\\\\25\\\\37\\\\98\\\\35\\\\127\",\"\\\\44\\\\92\\\\200\\\\89\\\\84\",\"\\\\171\\\\129\\\\106\\\\249\\\\38\",\"\\\\24\\\\147\\\\77\\\\\\\\134\\\\62\",\"\\\\254\\\\184\\\\72\\\\159\\\\91\",\"\\\\221\\\\13\\\\18\\\\153\\\\154\",\"\\\\109\\\\232\\\\79\\\\169\\\\176\",\"\\\\152\\\\103\\\\190\\\\50\\\\18\",\"\\\\51\\\\71\\\\217\\\\22\\\\76\",\"\\\\105\\\\109\\\\7\\\\77\\\\198\",\"\\\\250\\\\121\\\\163\\\\49\\\\73\",\"\\\\138\\\\204\\\\\\\\134\\\\247\\\\116\",\"\\\\130\\\\38\\\\156\\\\36\\\\27\",\"\\\\20\\\\83\\\\86\\\\113\\\\124\",\"\\\\40\\\\63\\\\161\\\\157\\\\76\",\"\\\\205\\\\99\\\\150\\\\109\\\\249\",\"\\\\111\\\\174\\\\57\\\\169\\\\238\",\"\\\\106\\\\169\\\\245\\\\170\\\\240\",\"\\\\32\\\\10\\\\53\\\\160\\\\76\",\"\\\\226\\\\0\\\\58\\\\9\\\\22\",\"\\\\63\\\\83\\\\21\\\\3\\\\205\",\"\\\\212\\\\141\\\\249\\\\177\\\\102\",\"\\\\197\\\\226\\\\42\\\\202\\\\130\",\"\\\\70\\\\40\\\\85\\\\176\\\\2\",\"\\\\3\\\\16\\\\133\\\\118\\\\91\",\"\\\\232\\\\48\\\\176\\\\209\\\\77\",\"\\\\20\\\\149\\\\0\\\\2\\\\144\",\"\\\\50\\\\87\\\\138\\\\108\\\\149\",\"\\\\13\\\\78\\\\64\\\\211\\\\245\",\"\\\\15\\\\158\\\\123\\\\62\\\\103\",\"\\\\239\\\\68\\\\210\\\\175\\\\197\",\"\\\\247\\\\216\\\\7\\\\211\\\\5\",\"\\\\112\\\\100\\\\135\\\\210\\\\101\",\"\\\\47\\\\26\\\\118\\\\254\\\\62\",\"\\\\123\\\\7\\\\143\\\\206\\\\114\",\"\\\\184\\\\43\\\\252\\\\56\\\\194\",\"\\\\55\\\\16\\\\219\\\\\\\\134\\\\201\",\"\\\\170\\\\128\\\\224\\\\160\\\\251\",\"\\\\180\\\\10!\n 8\\\\182\\\\255\\\\118\",\"\\\\164\\\\155\\\\151\\\\195\\\\67\",\"\\\\116\\\\56\\\\163\\\\249\\\\92\",\"\\\\250\\\\207\\\\75\\\\244\\\\104\",\"\\\\122\\\\219\\\\25\\\\49\\\\17\",\"\\\\16\\\\61\\\\66\\\\50\\\\32\",\"\\\\15\\\\223\\\\166\\\\\\\\134\\\\188\",\"\\\\16\\\\221\\\\48\\\\159\\\\124\",\"\\\\163\\\\66\\\\245\\\\19\\\\190\",\"\\\\52\\\\177\\\\137\\\\57\\\\104\",\"\\\\137\\\\158\\\\143\\\\12\\\\73\",\"\\\\175\\\\156\\\\252\\\\243\\\\165\",\"\\\\18\\\\119\\\\\\\\134\\\\198\\\\209\",\"\\\\179\\\\60\\\\37\\\\63\\\\136\",\"\\\\68\\\\117\\\\75\\\\163\\\\27\",\"\\\\234\\\\108\\\\150\\\\93\\\\15\",\"\\\\209\\\\159\\\\154\\\\221\\\\138\",\"\\\\70\\\\215\\\\50\\\\36\\\\255\",\"\\\\237\\\\64\\\\91\\\\125\\\\54\",\"\\\\17\\\\41\\\\177\\\\34\\\\178\",\"\\\\19\\\\241\\\\29\\\\12\\\\34\",\"\\\\\\\\134\\\\151\\\\112\\\\6\\\\214\",\"\\\\63\\\\61\\\\146\\\\243\\\\60\"}',1,0,1088649636);\n\n\nFor testing purposes, here is the Perl script I used to make the revised\ninput from the original test script:\n\n\n#!/usr/bin/perl -w\n\n$lastat = 0;\n\nwhile (<>) {\n if (m/^select put_token\\((\\d+),'(.*)',(\\d+),(\\d+),(\\d+)\\);$/) {\n\t$d1 = $1;\n\t$data = $2;\n\t$d3 = $3;\n\t$d4 = $4;\n\t$d5 = $5;\n\t$data =~ s/\\\\/\\\\\\\\/g;\n\tif ($d5 == $lastat) {\n\t $last2 = $last2 . ',\"' . $data . '\"';\n\t} else {\n\t if ($lastat) {\n\t\tprint \"select put_tokens($last1,'{$last2}',$last3,$last4,$lastat);\\n\";\n\t }\n\t $last1 = $d1;\n\t $last2 = '\"' . $data . '\"';\n\t $last3 = $d3;\n\t $last4 = $d4;\n\t $lastat = $d5;\n\t}\n } else {\n\tprint ;\n }\n}\n\nprint \"select put_tokens($last1,'{$last2}',$last3,$last4,$lastat);\\n\";\n\n\nBTW, the data quoting is probably wrong here, but I suppose that can be\nfixed easily.\n", "msg_date": "Sat, 30 Jul 2005 14:28:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 " }, { "msg_contents": "Tom Lane wrote:\n\n> I looked into this a bit. 
It seems that the problem when you wrap the\n> entire insertion series into one transaction is associated with the fact\n> that the test does so many successive updates of the single row in\n> bayes_vars. (VACUUM VERBOSE at the end of the test shows it cleaning up\n> 49383 dead versions of the one row.) This is bad enough when it's in\n> separate transactions, but when it's in one transaction, none of those\n> dead row versions can be marked \"fully dead\" yet --- so for every update\n> of the row, the unique-key check has to visit every dead version to make\n> sure it's dead in the context of the current transaction. This makes\n> the process O(N^2) in the number of updates per transaction. Which is\n> bad enough if you just want to do one transaction per message, but it's\n> intolerable if you try to wrap the whole bulk-load scenario into one\n> transaction.\n> \n> I'm not sure that we can do anything to make this a lot smarter, but\n> in any case, the real problem is to not do quite so many updates of\n> bayes_vars.\n> \n> How constrained are you as to the format of the SQL generated by\n> SpamAssassin? In particular, could you convert the commands generated\n> for a single message into a single statement? I experimented with\n> passing all the tokens for a given message as a single bytea array,\n> as in the attached, and got almost a factor of 4 runtime reduction\n> on your test case.\n> \n> BTW, it's possible that this is all just a startup-transient problem:\n> once the database has been reasonably well populated, one would expect\n> new tokens to be added infrequently, and so the number of updates to\n> bayes_vars ought to drop off.\n> \n> \t\t\tregards, tom lane\n> \n\nThe spamassassins bayes code calls the _put_token method in the storage\nmodule a loop. This means that the storage module isn't called once per\nmessage, but once per token.\n\nI'll look into modifying it to so that the bayes code passes a hash of\ntokens to the storage module where they can loop or in the case of the\npgsql module pass an array of tokens to a procedure where we loop and\nuse temp tables to make this much more efficient.\n\nI don't have much time this weekend to toss at this, but will be looking\nat it on Monday.\n\nThanks,\n\nschu\n", "msg_date": "Sat, 30 Jul 2005 13:06:49 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher wrote:\n\n>All it's doing is trying the update before the insert to get around the\n>problem of not knowing which is needed. With only 2-3 of the queries\n>implemented I'm already back to running about the same speed as the\n>original SA proc that is going to ship with SA 3.1.0.\n>\n>All of the queries are using indexes so at this point I'm pretty\n>convinced that the biggest problem is the sheer number of queries\n>required to run this proc 200 times for each email (once for each token).\n>\n>I don't see anything that could be done to make this much faster on the\n>postgres end, it's looking like the solution is going to involve cutting\n>down the number of queries some how.\n>\n>One thing that is still very puzzling to me is why this runs so much\n>slower when I put the data.sql in a transaction. Obviously transactions\n>are acting different when you call a proc a zillion times vs an insert\n>query.\n> \n>\nWell, I played with adding a COMMIT;BEGIN; statement to your exact test\nevery 1000 lines. 
And this is what I got:\n\nUnmodified:\nreal 17m53.587s\nuser 0m6.204s\nsys 0m3.556s\n\nWith BEGIN/COMMIT:\nreal 1m53.466s\nuser 0m5.203s\nsys 0m3.211s\n\nSo I see the potential for improvement almost 10 fold by switching to\ntransactions. I played with the perl script (and re-implemented it in\npython), and for the same data as the perl script, using COPY instead of\nINSERT INTO means 5s instead of 33s.\n\nI also played around with adding VACUUM ANALYZE every 10 COMMITS, which\nbrings the speed to:\n\nreal 1m41.258s\nuser 0m5.394s\nsys 0m3.212s\n\nAnd doing VACUUM ANALYZE every 5 COMMITS makes it:\nreal 1m46.403s\nuser 0m5.597s\nsys 0m3.244s\n\nI'm assuming the slowdown is because of the extra time spent vacuuming.\nOverall performance might still be improving, since you wouldn't\nactually be inserting all 100k rows at once.\n\n\nJust to complete the reference, the perl version runs as:\n10:44:02 -- START\n10:44:35 -- AFTER TEMP LOAD : loaded 120596 records\n10:44:39 -- AFTER bayes_token INSERT : inserted 49359 new records into\nbayes_token\n10:44:41 -- AFTER bayes_vars UPDATE : updated 1 records\n10:46:42 -- AFTER bayes_token UPDATE : updated 47537 records\nDONE\n\nMy python version runs as:\n00:22:54 -- START\n00:23:00 -- AFTER TEMP LOAD : loaded 120596 records\n00:23:03 -- AFTER bayes_token INSERT : inserted 49359 new records into\nbayes_token\n00:23:06 -- AFTER bayes_vars UPDATE : updated 1 records\n00:25:04 -- AFTER bayes_token UPDATE : updated 47537 records\nDONE\n\nThe python is effectively just a port of the perl code (with many lines\nvirtually unchanged), and really the only performance difference is that\nthe initial data load is much faster with a COPY.\n\nThis is all run on Ubuntu, with postgres 7.4.7, and a completely\nunchanged postgresql.conf. (But the machine is a dual P4 2.4GHz, with\n3GB of RAM).\n\nJohn\n=:->\n\n>Anyway, if anyone else has any ideas I'm all ears, but at this point\n>it's looking like raw query speed is needed for this app and while I\n>don't care for mysql as a database, it does have the speed going for it.\n>\n>schu\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n> \n>", "msg_date": "Sun, 31 Jul 2005 00:27:06 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher wrote:\n\n>Tom Lane wrote:\n>\n> \n>\n>>I looked into this a bit. It seems that the problem when you wrap the\n>>entire insertion series into one transaction is associated with the fact\n>>that the test does so many successive updates of the single row in\n>>bayes_vars. (VACUUM VERBOSE at the end of the test shows it cleaning up\n>>49383 dead versions of the one row.) This is bad enough when it's in\n>>separate transactions, but when it's in one transaction, none of those\n>>dead row versions can be marked \"fully dead\" yet --- so for every update\n>>of the row, the unique-key check has to visit every dead version to make\n>>sure it's dead in the context of the current transaction. This makes\n>>the process O(N^2) in the number of updates per transaction. 
Which is\n>>bad enough if you just want to do one transaction per message, but it's\n>>intolerable if you try to wrap the whole bulk-load scenario into one\n>>transaction.\n>>\n>>I'm not sure that we can do anything to make this a lot smarter, but\n>>in any case, the real problem is to not do quite so many updates of\n>>bayes_vars.\n>>\n>>How constrained are you as to the format of the SQL generated by\n>>SpamAssassin? In particular, could you convert the commands generated\n>>for a single message into a single statement? I experimented with\n>>passing all the tokens for a given message as a single bytea array,\n>>as in the attached, and got almost a factor of 4 runtime reduction\n>>on your test case.\n>>\n>>BTW, it's possible that this is all just a startup-transient problem:\n>>once the database has been reasonably well populated, one would expect\n>>new tokens to be added infrequently, and so the number of updates to\n>>bayes_vars ought to drop off.\n>>\n>>\t\t\tregards, tom lane\n>>\n>> \n>>\n>\n>The spamassassins bayes code calls the _put_token method in the storage\n>module a loop. This means that the storage module isn't called once per\n>message, but once per token.\n> \n>\nWell, putting everything into a transaction per email might make your\npain go away.\nIf you saw the email I just sent, I modified your data.sql file to add a\n\"COMMIT;BEGIN\" every 1000 selects, and I saw a performance jump from 18\nminutes down to less than 2 minutes. Heck, on my machine, the advanced\nperl version takes more than 2 minutes to run. It is actually slower\nthan the data.sql with commit statements.\n\n>I'll look into modifying it to so that the bayes code passes a hash of\n>tokens to the storage module where they can loop or in the case of the\n>pgsql module pass an array of tokens to a procedure where we loop and\n>use temp tables to make this much more efficient.\n> \n>\nWell, you could do that. Or you could just have the bayes code issue\n\"BEGIN;\" when it starts processing an email, and a \"COMMIT;\" when it\nfinishes. From my testing, you will see an enormous speed improvement.\n(And you might consider including a fairly frequent VACUUM ANALYZE)\n\n>I don't have much time this weekend to toss at this, but will be looking\n>at it on Monday.\n> \n>\nGood luck,\nJohn\n=:->\n\n>Thanks,\n>\n>schu\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: don't forget to increase your free space map settings\n>\n> \n>", "msg_date": "Sun, 31 Jul 2005 00:31:54 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "John Arbash Meinel wrote:\n\n>Matthew Schumacher wrote:\n>\n> \n>\n>>All it's doing is trying the update before the insert to get around the\n>>problem of not knowing which is needed. 
With only 2-3 of the queries\n>>implemented I'm already back to running about the same speed as the\n>>original SA proc that is going to ship with SA 3.1.0.\n>>\n>>All of the queries are using indexes so at this point I'm pretty\n>>convinced that the biggest problem is the sheer number of queries\n>>required to run this proc 200 times for each email (once for each token).\n>>\n>>I don't see anything that could be done to make this much faster on the\n>>postgres end, it's looking like the solution is going to involve cutting\n>>down the number of queries some how.\n>>\n>>One thing that is still very puzzling to me is why this runs so much\n>>slower when I put the data.sql in a transaction. Obviously transactions\n>>are acting different when you call a proc a zillion times vs an insert\n>>query.\n>> \n>>\n>> \n>>\n>Well, I played with adding a COMMIT;BEGIN; statement to your exact test\n>every 1000 lines. And this is what I got:\n> \n>\nJust for reference, I also tested this on my old server, which is a dual\nCeleron 450 with 256M ram. FC4 and Postgres 8.0.3\nUnmodified:\nreal 54m15.557s\nuser 0m24.328s\nsys 0m14.200s\n\nWith Transactions every 1000 selects, and vacuum every 5000:\nreal 8m36.528s\nuser 0m16.585s\nsys 0m12.569s\n\nWith Transactions every 1000 selects, and vacuum every 10000:\nreal 7m50.748s\nuser 0m16.183s\nsys 0m12.489s\n\nOn this machine vacuum is more expensive, since it doesn't have as much ram.\n\nAnyway, on this machine, I see approx 7x improvement. Which I think is\nprobably going to satisfy your spamassassin needs.\nJohn\n=:->\n\nPS> Looking forward to having a spamassassin that can utilize my\nfavorite db. Right now, I'm not using a db backend because it wasn't\nworth setting up mysql.\n\n>Unmodified:\n>real 17m53.587s\n>user 0m6.204s\n>sys 0m3.556s\n>\n>With BEGIN/COMMIT:\n>real 1m53.466s\n>user 0m5.203s\n>sys 0m3.211s\n>\n>So I see the potential for improvement almost 10 fold by switching to\n>transactions. I played with the perl script (and re-implemented it in\n>python), and for the same data as the perl script, using COPY instead of\n>INSERT INTO means 5s instead of 33s.\n>\n>I also played around with adding VACUUM ANALYZE every 10 COMMITS, which\n>brings the speed to:\n>\n>real 1m41.258s\n>user 0m5.394s\n>sys 0m3.212s\n>\n>And doing VACUUM ANALYZE every 5 COMMITS makes it:\n>real 1m46.403s\n>user 0m5.597s\n>sys 0m3.244s\n>\n>I'm assuming the slowdown is because of the extra time spent vacuuming.\n>Overall performance might still be improving, since you wouldn't\n>actually be inserting all 100k rows at once.\n> \n>\n...\n\n>This is all run on Ubuntu, with postgres 7.4.7, and a completely\n>unchanged postgresql.conf. (But the machine is a dual P4 2.4GHz, with\n>3GB of RAM).\n>\n>John\n>=:->\n> \n>", "msg_date": "Sun, 31 Jul 2005 10:52:14 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Ok, here is the current plan.\n\nChange the spamassassin API to pass a hash of tokens into the storage\nmodule, pass the tokens to the proc as an array, start a transaction,\nload the tokens into a temp table using copy, select the tokens distinct\ninto the token table for new tokens, update the token table for known\ntokens, then commit.\n\nThis solves the following problems:\n\n1. Each email is a transaction instead of each token.\n2. The update statement is only called when we really need an update\nwhich avoids all of those searches.\n3. 
The looping work is done inside the proc instead of perl calling a\nmethod a zillion times per email.\n\nI'm not sure how vacuuming will be done yet, if we vacuum once per email\nthat may be too often, so I may do that every 5 mins in cron.\n\nschu\n", "msg_date": "Sun, 31 Jul 2005 08:51:06 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "On Sun, Jul 31, 2005 at 08:51:06AM -0800, Matthew Schumacher wrote:\n> Ok, here is the current plan.\n> \n> Change the spamassassin API to pass a hash of tokens into the storage\n> module, pass the tokens to the proc as an array, start a transaction,\n> load the tokens into a temp table using copy, select the tokens distinct\n> into the token table for new tokens, update the token table for known\n> tokens, then commit.\n\nYou might consider:\nUPDATE tokens\n FROM temp_table (this updates existing records)\n\nINSERT INTO tokens\n SELECT ...\n FROM temp_table\n WHERE NOT IN (SELECT ... FROM tokens)\n\nThis way you don't do an update to newly inserted tokens, which helps\nkeep vacuuming needs in check.\n\n> This solves the following problems:\n> \n> 1. Each email is a transaction instead of each token.\n> 2. The update statement is only called when we really need an update\n> which avoids all of those searches.\n> 3. The looping work is done inside the proc instead of perl calling a\n> method a zillion times per email.\n> \n> I'm not sure how vacuuming will be done yet, if we vacuum once per email\n> that may be too often, so I may do that every 5 mins in cron.\n\nI would suggest leaving an option to have SA vacuum every n emails,\nsince some people may not want to mess with cron, etc. I suspect that\npg_autovacuum would be able to keep up with things pretty well, though.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Sun, 31 Jul 2005 12:10:12 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Sun, Jul 31, 2005 at 08:51:06AM -0800, Matthew Schumacher wrote:\n> \n>>Ok, here is the current plan.\n>>\n>>Change the spamassassin API to pass a hash of tokens into the storage\n>>module, pass the tokens to the proc as an array, start a transaction,\n>>load the tokens into a temp table using copy, select the tokens distinct\n>>into the token table for new tokens, update the token table for known\n>>tokens, then commit.\n> \n> \n> You might consider:\n> UPDATE tokens\n> FROM temp_table (this updates existing records)\n> \n> INSERT INTO tokens\n> SELECT ...\n> FROM temp_table\n> WHERE NOT IN (SELECT ... 
FROM tokens)\n> \n> This way you don't do an update to newly inserted tokens, which helps\n> keep vacuuming needs in check.\n\nThe subselect might be quite a big set, so avoiding a full table scan \nand materialization by\n\nDELETE temp_table\n WHERE key IN (select key FROM tokens JOIN temp_table);\nINSERT INTO TOKENS SELECT * FROM temp_table;\n\nor\n\nINSERT INTO TOKENS\nSELECT temp_table.* FROM temp_table LEFT JOIN tokens USING (key)\nWHERE tokens.key IS NULL\n\nmight be an additional win, assuming that only a small fraction of \ntokens is inserted and updated.\n\nRegards,\nAndreas\n", "msg_date": "Sun, 31 Jul 2005 20:19:11 +0200", "msg_from": "Andreas Pflug <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Hi All,\n\nAs a SpamAssassin developer, who by my own admission has real problem\ngetting PostgreSQL to work well, I must thank everyone for their\nfeedback on this issue. Believe me when I say what is in the tree now\nis a far cry from what used to be there, orders of magnitude faster\nfor sure. I think there are several good ideas that have come out of\nthis thread and I've set about attempting to implement them.\n\nHere is a version of the stored procedure, based in large part by the\none written by Tom Lane, that accepts and array of tokens and loops\nover them to either update or insert them into the database (I'm not\nincluding the greatest_int/least_int procedures but you've all seen\nthem before):\n\nCREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,\n intokenary BYTEA[],\n inspam_count INTEGER,\n inham_count INTEGER,\n inatime INTEGER)\nRETURNS VOID AS ' \nDECLARE\n _token BYTEA;\n new_tokens INTEGER := 0;\nBEGIN\n for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)\n LOOP\n _token := intokenary[i];\n UPDATE bayes_token\n SET spam_count = greatest_int(spam_count + inspam_count, 0),\n ham_count = greatest_int(ham_count + inham_count, 0),\n atime = greatest_int(atime, inatime)\n WHERE id = inuserid \n AND token = _token; \n IF NOT FOUND THEN \n -- we do not insert negative counts, just return true\n IF NOT (inspam_count < 0 OR inham_count < 0) THEN\n INSERT INTO bayes_token (id, token, spam_count,\n ham_count, atime) \n VALUES (inuserid, _token, inspam_count, inham_count, inatime); \n IF FOUND THEN\n new_tokens := new_tokens + 1;\n END IF;\n END IF;\n END IF;\n END LOOP;\n\n UPDATE bayes_vars\n SET token_count = token_count + new_tokens,\n newest_token_age = greatest_int(newest_token_age, inatime),\n oldest_token_age = least_int(oldest_token_age, inatime)\n WHERE id = inuserid;\n RETURN;\nEND; \n' LANGUAGE 'plpgsql'; \n\nThis version is about 32x faster than the old version, with the\ndefault fsync value and autovacuum running in the background.\n\nThe next hurdle, and I've just posted to the DBD::Pg list, is\nescaping/quoting the token strings. They are true binary strings,\nsubstrings of SHA1 hashes, I don't think the original data set\nprovided puts them in the right context. They have proved to be\ntricky. 
I'm unable to call the stored procedure from perl because I\nkeep getting a malformed array litteral error.\n\nHere is some example code that shows the issue:\n#!/usr/bin/perl -w\n\n# from a new db, do this first\n# INSERT INTO bayes_vars VALUES (1,'nobody',0,0,0,0,0,0,2147483647,0);\n\nuse strict;\nuse DBI;\nuse DBD::Pg qw(:pg_types);\nuse Digest::SHA1 qw(sha1);\n\nmy $dbh = DBI->connect(\"DBI:Pg:dbname=spamassassin\",\"postgres\") || die;\n\nmy @dataary;\n\n# Input is just a list of words (ie /usr/share/dict/words) stop after 150\nwhile(<>) {\n chomp;\n push(@dataary, substr(sha1($_), -5));\n# to see it work with normal string comment out above and uncomment below\n# push(@dataary, $_);\n last if scalar(@dataary) >= 150;\n}\n\nmy $datastring = join(\",\", map { '\"' . bytea_esc($_) . '\"' }\n@dataary);\nmy $sql = \"select put_tokens(1, '{$datastring}', 1, 1, 10000)\";\nmy $sth = $dbh->prepare($sql);\nmy $rc = $sth->execute();\nunless ($rc) {\n print \"Error: \" . $dbh->errstr() . \"\\n\";\n}\n$sth->finish();\n\nsub bytea_esc {\n my ($str) = @_;\n my $buf = \"\";\n foreach my $char (split(//,$str)) {\n if (ord($char) == 0) { $buf .= \"\\\\\\\\000\"; }\n elsif (ord($char) == 39) { $buf .= \"\\\\\\\\047\"; }\n elsif (ord($char) == 92) { $buf .= \"\\\\\\\\134\"; }\n else { $buf .= $char; }\n }\n return $buf;\n}\n\nAny ideas? or thoughts on the revised procedure? I'd greatly\nappriciate them.\n\nSorry for the length, but hopefully it give a good enough example.\n\nThanks\nMichael Parker", "msg_date": "Sun, 31 Jul 2005 23:04:49 -0500", "msg_from": "Michael Parker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Michael Parker <[email protected]> writes:\n> The next hurdle, and I've just posted to the DBD::Pg list, is\n> escaping/quoting the token strings.\n\nIf you're trying to write a bytea[] literal, I think the most reliable\nway to write the individual bytes is\n\t\\\\\\\\nnn\nwhere nnn is *octal*. The idea here is:\n\t* string literal parser takes off one level of backslashing,\n\t leaving \\\\nnn\n\t* array input parser takes off another level, leaving \\nnn\n\t* bytea input parser knows about backslashed octal values\n\nNote it has to be 3 octal digits every time, no abbreviations.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Aug 2005 00:42:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 " }, { "msg_contents": "Michael Parker <[email protected]> writes:\n> sub bytea_esc {\n> my ($str) = @_;\n> my $buf = \"\";\n> foreach my $char (split(//,$str)) {\n> if (ord($char) == 0) { $buf .= \"\\\\\\\\000\"; }\n> elsif (ord($char) == 39) { $buf .= \"\\\\\\\\047\"; }\n> elsif (ord($char) == 92) { $buf .= \"\\\\\\\\134\"; }\n> else { $buf .= $char; }\n> }\n> return $buf;\n> }\n\nOh, I see the problem: you forgot to convert \" to a backslash sequence.\n\nIt would probably also be wise to convert anything >= 128 to a backslash\nsequence, so as to avoid any possible problems with multibyte character\nencodings. 
You wouldn't see this issue in a SQL_ASCII database, but I\nsuspect it would rise up to bite you with other encoding settings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Aug 2005 00:53:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 " }, { "msg_contents": "Tom Lane wrote:\n> Michael Parker <[email protected]> writes:\n> \n>>sub bytea_esc {\n>> my ($str) = @_;\n>> my $buf = \"\";\n>> foreach my $char (split(//,$str)) {\n>> if (ord($char) == 0) { $buf .= \"\\\\\\\\000\"; }\n>> elsif (ord($char) == 39) { $buf .= \"\\\\\\\\047\"; }\n>> elsif (ord($char) == 92) { $buf .= \"\\\\\\\\134\"; }\n>> else { $buf .= $char; }\n>> }\n>> return $buf;\n>>}\n> \n> \n> Oh, I see the problem: you forgot to convert \" to a backslash sequence.\n> \n> It would probably also be wise to convert anything >= 128 to a backslash\n> sequence, so as to avoid any possible problems with multibyte character\n> encodings. You wouldn't see this issue in a SQL_ASCII database, but I\n> suspect it would rise up to bite you with other encoding settings.\n> \n> \t\t\tregards, tom lane\n\nHere is some code that applies Toms Suggestions:\n\n38c39,41\n< if (ord($char) == 0) { $buf .= \"\\\\\\\\000\"; }\n---\n> if (ord($char) >= 128) { $buf .= \"\\\\\\\\\" . sprintf (\"%lo\",\nord($char)); }\n> elsif (ord($char) == 0) { $buf .= \"\\\\\\\\000\"; }\n> elsif (ord($char) == 34) { $buf .= \"\\\\\\\\042\"; }\n\nBut this begs the question, why not escape everything?\n\nschu\n", "msg_date": "Mon, 01 Aug 2005 09:08:48 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Tom Lane wrote:\n> \n> Revised insertion procedure:\n> \n> \n> CREATE or replace FUNCTION put_tokens (_id INTEGER,\n> _tokens BYTEA[],\n> _spam_count INTEGER,\n> _ham_count INTEGER,\n> _atime INTEGER)\n> RETURNS VOID AS\n> $$\n> declare _token bytea;\n> new_tokens integer := 0;\n> BEGIN\n> for i in array_lower(_tokens,1) .. array_upper(_tokens,1)\n> LOOP\n> _token := _tokens[i];\n> UPDATE bayes_token\n> SET spam_count = spam_count + _spam_count,\n> ham_count = ham_count + _ham_count,\n> atime = _atime\n> WHERE id = _id\n> AND token = _token;\n> \n> IF not found THEN\n> INSERT INTO bayes_token VALUES (_id, _token, _spam_count,\n> _ham_count, _atime);\n> new_tokens := new_tokens + 1;\n> END IF;\n> END LOOP;\n> if new_tokens > 0 THEN\n> UPDATE bayes_vars SET token_count = token_count + new_tokens\n> WHERE id = _id;\n> IF NOT FOUND THEN\n> RAISE EXCEPTION 'unable to update token_count in bayes_vars';\n> END IF;\n> END IF;\n> RETURN;\n> END;\n> $$\n> LANGUAGE plpgsql;\n> \n\nTom, thanks for all your help on this, I think we are fairly close to\nhaving this done in a proc. The biggest problem we are running into now\nis that the data gets inserted as an int. Even though your proc defines\n_token as byeta, I get numbers in the table. 
For example:\n\nselect put_tokens2(1, '{\"\\\\246\\\\323\\\\061\\\\332\\\\277\"}', 1, 1, 10000);\n\nproduces this:\n\n id | token | spam_count | ham_count | atime\n----+-----------------+------------+-----------+-------\n 1 | 246323061332277 | 1 | 1 | 10000\n\nI'm not sure why this is happening, perhaps the problem is obvious to you?\n\nThanks,\nschu\n", "msg_date": "Mon, 01 Aug 2005 12:04:19 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "\n\n> select put_tokens2(1, '{\"\\\\246\\\\323\\\\061\\\\332\\\\277\"}', 1, 1, 10000);\n\n\tTry adding more backslashes until it works (seems that you need \\\\\\\\ or \nsomething).\n\tDon't DBI convert the language types to postgres quoted forms on its own ?\n\n", "msg_date": "Mon, 01 Aug 2005 23:18:23 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "PFC wrote:\n> \n> \n>> select put_tokens2(1, '{\"\\\\246\\\\323\\\\061\\\\332\\\\277\"}', 1, 1, 10000);\n> \n> \n> Try adding more backslashes until it works (seems that you need \\\\\\\\\n> or something).\n> Don't DBI convert the language types to postgres quoted forms on its\n> own ?\n> \n\nYour right.... I am finding that the proc is not the problem as I\nsuspected, it works correctly when I am not calling it from perl,\nsomething isn't escaped correctly....\n\nschu\n", "msg_date": "Mon, 01 Aug 2005 13:28:24 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "On Mon, Aug 01, 2005 at 01:28:24PM -0800, Matthew Schumacher wrote:\n> PFC wrote:\n> > \n> > \n> >> select put_tokens2(1, '{\"\\\\246\\\\323\\\\061\\\\332\\\\277\"}', 1, 1, 10000);\n> > \n> > \n> > Try adding more backslashes until it works (seems that you need \\\\\\\\\n> > or something).\n> > Don't DBI convert the language types to postgres quoted forms on its\n> > own ?\n> > \n> \n> Your right.... I am finding that the proc is not the problem as I\n> suspected, it works correctly when I am not calling it from perl,\n> something isn't escaped correctly....\n\nI'm not sure who's responsible for DBI::Pg (Josh?), but would it make\nsense to add better support for bytea to DBI::Pg? ISTM there should be a\nbetter way of doing this than adding gobs of \\'s.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 1 Aug 2005 16:49:31 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Jim C. Nasby wrote:\n\n>I'm not sure who's responsible for DBI::Pg (Josh?), but would it make\n>sense to add better support for bytea to DBI::Pg? ISTM there should be a\n>better way of doing this than adding gobs of \\'s.\n> \n>\nIt has support for binding a bytea parameter, but in this case we're\ntrying to build up an array and pass that into a stored procedure. The\n$dbh->quote() method for DBD::Pg lacks the ability to quote bytea\ntypes. There is actually a TODO note in the code about adding support\nfor quoting Pg specific types. 
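For reference, the layered unescaping Tom described can be sanity-checked straight from psql, independent of the driver. A minimal sketch, assuming the backslash-processing string literals of 7.4/8.0; the \\\\101-style bytes are illustrative values rather than real tokens:

SELECT '{"\\\\101\\\\102\\\\103"}'::bytea[];
-- expected: {ABC}. Each \\\\nnn loses one level of backslashing in the
-- string-literal parser, another in the array parser, and the bytea input
-- routine decodes the remaining \nnn octal escape into a single byte.
-- Getting {101102103} back instead means a level of backslashes was lost
-- before the text reached the server, which is the same failure mode as the
-- integer-looking token shown earlier in the thread.

Whatever the client code ends up doing, it has to hand the server that four-backslash form for every byte it escapes.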
Presumabliy the difficulties we are\nhaving with this would be solved by that, once it has been implemented. \nIn the meantime, I believe it's just a matter of getting the right\nescapes happen so that the procedure is inserting values that we can\nlater get via a select and using bind_param() with the PG_BYTEA type.\n\nMichael", "msg_date": "Mon, 01 Aug 2005 17:09:07 -0500", "msg_from": "Michael Parker <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Okay,\n\nHere is the status of the SA updates and a question:\n\nMichael got SA changed to pass an array of tokens to the proc so right\nthere we gained a ton of performance due to connections and transactions\nbeing grouped into one per email instead of one per token.\n\nNow I am working on making the proc even faster. Since we have all of\nthe tokens coming in as an array, it should be possible to get this down\nto just a couple of queries.\n\nI have the proc using IN and NOT IN statements to update everything at\nonce from a temp table, but it progressively gets slower because the\ntemp table is growing between vacuums. At this point it's slightly\nslower than the old update or else insert on every token.\n\nWhat I really want to do is have the token array available as a record\nso that I can query against it, but not have it take up the resources of\na real table. If I could copy from an array into a record then I can\neven get rid of the loop. Anyone have any thoughts on how to do this?\n\n\nCREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,\n intokenary BYTEA[],\n inspam_count INTEGER,\n inham_count INTEGER,\n inatime INTEGER)\nRETURNS VOID AS '\nDECLARE\n _token BYTEA;\nBEGIN\n\n for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)\n LOOP\n _token := intokenary[i];\n INSERT INTO bayes_token_tmp VALUES (_token);\n END LOOP;\n\n UPDATE\n bayes_token\n SET\n spam_count = greatest_int(spam_count + inspam_count, 0),\n ham_count = greatest_int(ham_count + inham_count , 0),\n atime = greatest_int(atime, 1000)\n WHERE\n id = inuserid\n AND\n (token) IN (SELECT intoken FROM bayes_token_tmp);\n\n UPDATE\n bayes_vars\n SET\n token_count = token_count + (SELECT count(intoken) FROM\nbayes_token_tmp WHERE intoken NOT IN (SELECT token FROM bayes_token)),\n newest_token_age = greatest_int(newest_token_age, inatime),\n oldest_token_age = least_int(oldest_token_age, inatime)\n WHERE\n id = inuserid;\n\n INSERT INTO\n bayes_token\n SELECT\n inuserid,\n intoken,\n inspam_count,\n inham_count,\n inatime\n FROM\n bayes_token_tmp\n WHERE\n (inspam_count > 0 OR inham_count > 0)\n AND\n (intoken) NOT IN (SELECT token FROM bayes_token);\n\n delete from bayes_token_tmp;\n\n RETURN;\nEND;\n' LANGUAGE 'plpgsql';\n\nCREATE OR REPLACE FUNCTION greatest_int (integer, integer)\n RETURNS INTEGER\n IMMUTABLE STRICT\n AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'\n LANGUAGE SQL;\n\nCREATE OR REPLACE FUNCTION least_int (integer, integer)\n RETURNS INTEGER\n IMMUTABLE STRICT\n AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'\n LANGUAGE SQL;\n", "msg_date": "Thu, 04 Aug 2005 00:13:55 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "\n> What I really want to do is have the token array available as a record\n> so that I can query against it, but not have it take up the resources of\n> a real table. 
If I could copy from an array into a record then I can\n> even get rid of the loop. Anyone have any thoughts on how to do this?\n\n\tYou could make a set-returning-function (about 3 lines) which RETURNs \nNEXT every element in the array ; then you can use this SRF just like a \ntable and SELECT from it, join it with your other tables, etc.\n", "msg_date": "Thu, 04 Aug 2005 10:45:40 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher wrote:\n> Okay,\n>\n> Here is the status of the SA updates and a question:\n>\n> Michael got SA changed to pass an array of tokens to the proc so right\n> there we gained a ton of performance due to connections and transactions\n> being grouped into one per email instead of one per token.\n>\n> Now I am working on making the proc even faster. Since we have all of\n> the tokens coming in as an array, it should be possible to get this down\n> to just a couple of queries.\n>\n> I have the proc using IN and NOT IN statements to update everything at\n> once from a temp table, but it progressively gets slower because the\n> temp table is growing between vacuums. At this point it's slightly\n> slower than the old update or else insert on every token.\n\nI recommend that you drop and re-create the temp table. There is no\nreason to have it around, considering you delete and re-add everything.\nThat means you never have to vacuum it, since it always only contains\nthe latest rows.\n\n>\n> What I really want to do is have the token array available as a record\n> so that I can query against it, but not have it take up the resources of\n> a real table. If I could copy from an array into a record then I can\n> even get rid of the loop. Anyone have any thoughts on how to do this?\n>\n\nMy one question here, is the inspam_count and inham_count *always* the\nsame for all tokens? I would have thought each token has it's own count.\nAnyway, there are a few lines I would change:\n\n>\n> CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,\n> intokenary BYTEA[],\n> inspam_count INTEGER,\n> inham_count INTEGER,\n> inatime INTEGER)\n> RETURNS VOID AS '\n> DECLARE\n> _token BYTEA;\n> BEGIN\n>\n\n -- create the table at the start of the procedure\n CREATE TEMP TABLE bayes_token_tmp (intoken bytea);\n -- You might also add primary key if you are going to be adding\n -- *lots* of entries, but it sounds like you are going to have\n -- less than 1 page, so it doesn't matter\n\n> for i in array_lower(intokenary, 1) .. 
array_upper(intokenary, 1)\n> LOOP\n> _token := intokenary[i];\n> INSERT INTO bayes_token_tmp VALUES (_token);\n> END LOOP;\n>\n> UPDATE\n> bayes_token\n> SET\n> spam_count = greatest_int(spam_count + inspam_count, 0),\n> ham_count = greatest_int(ham_count + inham_count , 0),\n> atime = greatest_int(atime, 1000)\n> WHERE\n> id = inuserid\n> AND\n\n-- (token) IN (SELECT intoken FROM bayes_token_tmp);\n EXISTS (SELECT token FROM bayes_token_tmp\n\t WHERE intoken=token LIMIT 1);\n\n-- I would also avoid your intoken (NOT) IN (SELECT token FROM\n-- bayes_token) There are a few possibilities, but to me\n-- as your bayes_token table becomes big, this will start\n-- to be the slow point\n\n-- Rather than doing 2 NOT IN queries, it *might* be faster to do\n DELETE FROM bayes_token_tmp\n\tWHERE NOT EXISTS (SELECT token FROM bayes_token\n\t\t\t WHERE token=intoken);\n\n\n>\n> UPDATE\n> bayes_vars\n> SET\n\n-- token_count = token_count + (SELECT count(intoken) FROM\n-- bayes_token_tmp WHERE intoken NOT IN (SELECT token FROM bayes_token)),\n token_count = token_count + (SELECT count(intoken)\n\t\t\t\t FROM bayes_token_tmp)\n\n-- You don't need the where NOT IN, since we already removed those rows\n\n> newest_token_age = greatest_int(newest_token_age, inatime),\n> oldest_token_age = least_int(oldest_token_age, inatime)\n> WHERE\n> id = inuserid;\n>\n> INSERT INTO\n> bayes_token\n> SELECT\n> inuserid,\n> intoken,\n> inspam_count,\n> inham_count,\n> inatime\n> FROM\n> bayes_token_tmp\n> WHERE\n> (inspam_count > 0 OR inham_count > 0)\n\n-- AND\n-- (intoken) NOT IN (SELECT token FROM bayes_token);\n\n-- You don't need either of those lines, again because we already\n-- filtered\n\n-- delete from bayes_token_tmp;\n-- And rather than deleting all of the entries just\n DROP TABLE bayes_token_tmp;\n\n>\n> RETURN;\n> END;\n> ' LANGUAGE 'plpgsql';\n>\n> CREATE OR REPLACE FUNCTION greatest_int (integer, integer)\n> RETURNS INTEGER\n> IMMUTABLE STRICT\n> AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'\n> LANGUAGE SQL;\n>\n> CREATE OR REPLACE FUNCTION least_int (integer, integer)\n> RETURNS INTEGER\n> IMMUTABLE STRICT\n> AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'\n> LANGUAGE SQL;\n>\n\nSo to clarify, here is my finished function:\n------------------------------------\nCREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,\n intokenary BYTEA[],\n inspam_count INTEGER,\n inham_count INTEGER,\n inatime INTEGER)\nRETURNS VOID AS '\nDECLARE\n _token BYTEA;\nBEGIN\n\n CREATE TEMP TABLE bayes_token_tmp (intoken bytea);\n for i in array_lower(intokenary, 1) .. 
array_upper(intokenary, 1)\n LOOP\n _token := intokenary[i];\n INSERT INTO bayes_token_tmp VALUES (_token);\n END LOOP;\n\n UPDATE\n bayes_token\n SET\n spam_count = greatest_int(spam_count + inspam_count, 0),\n ham_count = greatest_int(ham_count + inham_count , 0),\n atime = greatest_int(atime, 1000)\n WHERE\n id = inuserid\n AND\n EXISTS (SELECT token FROM bayes_token_tmp\n \t WHERE intoken=token LIMIT 1);\n\n DELETE FROM bayes_token_tmp\n\tWHERE NOT EXISTS (SELECT token FROM bayes_token\n\t\t\t WHERE token=intoken);\n\n UPDATE\n bayes_vars\n SET\n token_count = token_count + (SELECT count(intoken)\n\t\t\t\t FROM bayes_token_tmp),\n newest_token_age = greatest_int(newest_token_age, inatime),\n oldest_token_age = least_int(oldest_token_age, inatime)\n WHERE\n id = inuserid;\n\n INSERT INTO\n bayes_token\n SELECT\n inuserid,\n intoken,\n inspam_count,\n inham_count,\n inatime\n FROM\n bayes_token_tmp\n WHERE\n (inspam_count > 0 OR inham_count > 0)\n\n DROP TABLE bayes_token_tmp;\n\n RETURN;\nEND;\n' LANGUAGE 'plpgsql';", "msg_date": "Thu, 04 Aug 2005 09:10:03 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher <[email protected]> writes:\n> for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)\n> LOOP\n> _token := intokenary[i];\n> INSERT INTO bayes_token_tmp VALUES (_token);\n> END LOOP;\n\n> UPDATE\n> bayes_token\n> SET\n> spam_count = greatest_int(spam_count + inspam_count, 0),\n> ham_count = greatest_int(ham_count + inham_count , 0),\n> atime = greatest_int(atime, 1000)\n> WHERE\n> id = inuserid\n> AND\n> (token) IN (SELECT intoken FROM bayes_token_tmp);\n\nI don't really see why you think that this path is going to lead to\nbetter performance than where you were before. Manipulation of the\ntemp table is never going to be free, and IN (sub-select) is always\ninherently not fast, and NOT IN (sub-select) is always inherently\nawful. Throwing a pile of simple queries at the problem is not\nnecessarily the wrong way ... especially when you are doing it in\nplpgsql, because you've already eliminated the overhead of network\nround trips and repeated planning of the queries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Aug 2005 10:37:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 " }, { "msg_contents": "Tom Lane wrote:\n> Matthew Schumacher <[email protected]> writes:\n>\n>> for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)\n>> LOOP\n>> _token := intokenary[i];\n>> INSERT INTO bayes_token_tmp VALUES (_token);\n>> END LOOP;\n>\n>\n>> UPDATE\n>> bayes_token\n>> SET\n>> spam_count = greatest_int(spam_count + inspam_count, 0),\n>> ham_count = greatest_int(ham_count + inham_count , 0),\n>> atime = greatest_int(atime, 1000)\n>> WHERE\n>> id = inuserid\n>> AND\n>> (token) IN (SELECT intoken FROM bayes_token_tmp);\n>\n>\n> I don't really see why you think that this path is going to lead to\n> better performance than where you were before. Manipulation of the\n> temp table is never going to be free, and IN (sub-select) is always\n> inherently not fast, and NOT IN (sub-select) is always inherently\n> awful. Throwing a pile of simple queries at the problem is not\n> necessarily the wrong way ... 
especially when you are doing it in\n> plpgsql, because you've already eliminated the overhead of network\n> round trips and repeated planning of the queries.\n\nSo for an IN (sub-select), does it actually pull all of the rows from\nthe other table, or is the planner smart enough to stop once it finds\nsomething?\n\nIs IN (sub-select) about the same as EXISTS (sub-select WHERE x=y)?\n\nWhat about NOT IN (sub-select) versus NOT EXISTS (sub-select WHERE x=y)\n\nI would guess that the EXISTS/NOT EXISTS would be faster, though it\nprobably would necessitate using a nested loop (at least that seems to\nbe the way the query is written).\n\nI did some tests on a database with 800k rows, versus a temp table with\n2k rows. I did one sequential test (1-2000, with 66 rows missing), and\none sparse test (1-200, 100000-100200, 200000-200200, ... with 658 rows\nmissing).\n\nIf found that NOT IN did indeed have to load the whole table. IN was\nsmart enough to do a nested loop.\nEXISTS and NOT EXISTS did a sequential scan on my temp table, with a\nSubPlan filter (which looks a whole lot like a Nested Loop).\n\nWhat I found was that IN performed about the same as EXISTS (since they\nare both effectively doing a nested loop), but that NOT IN took 4,000ms\nwhile NOT EXISTS was the same speed as EXISTS at around 166ms.\n\nAnyway, so it does seem like NOT IN is not a good choice, but IN seems\nto be equivalent to EXISTS, and NOT EXISTS is also very fast.\n\nIs this generally true, or did I just get lucky on my data?\n\nJohn\n=:->\n\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>", "msg_date": "Thu, 04 Aug 2005 10:08:09 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "John A Meinel wrote:\n> Matthew Schumacher wrote:\n> \n > I recommend that you drop and re-create the temp table. There is no\n> reason to have it around, considering you delete and re-add everything.\n> That means you never have to vacuum it, since it always only contains\n> the latest rows.\n\nWhenever I have a create temp and drop statement I get these errors:\n\nselect put_tokens(1, '{\"\\\\\\\\000\"}', 1, 1, 1000);\nERROR: relation with OID 582248 does not exist\nCONTEXT: SQL statement \"INSERT INTO bayes_token_tmp VALUES ( $1 )\"\nPL/pgSQL function \"put_tokens\" line 12 at SQL statement\n\n\n> \n> \n> \n> My one question here, is the inspam_count and inham_count *always* the\n> same for all tokens? I would have thought each token has it's own count.\n> Anyway, there are a few lines I would change:\n\nNo, we get the userid, inspam, inham, and atime, and they are the same\nfor each token. 
If we have a different user we call the proc again.\n\n> -- create the table at the start of the procedure\n> CREATE TEMP TABLE bayes_token_tmp (intoken bytea);\n> -- You might also add primary key if you are going to be adding\n> -- *lots* of entries, but it sounds like you are going to have\n> -- less than 1 page, so it doesn't matter\n\nThis causes errors, see above....\n\n> -- (token) IN (SELECT intoken FROM bayes_token_tmp);\n> EXISTS (SELECT token FROM bayes_token_tmp\n> \t WHERE intoken=token LIMIT 1);\n> \n> -- I would also avoid your intoken (NOT) IN (SELECT token FROM\n> -- bayes_token) There are a few possibilities, but to me\n> -- as your bayes_token table becomes big, this will start\n> -- to be the slow point\n> \n> -- Rather than doing 2 NOT IN queries, it *might* be faster to do\n> DELETE FROM bayes_token_tmp\n> \tWHERE NOT EXISTS (SELECT token FROM bayes_token\n> \t\t\t WHERE token=intoken);\n> \n>\n\nI'll look into this.\n\n\nthanks,\n\nschu\n", "msg_date": "Thu, 04 Aug 2005 08:13:35 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "John A Meinel <[email protected]> writes:\n> Tom Lane wrote:\n>> I don't really see why you think that this path is going to lead to\n>> better performance than where you were before.\n\n> So for an IN (sub-select), does it actually pull all of the rows from\n> the other table, or is the planner smart enough to stop once it finds\n> something?\n\nIt stops when it finds something --- but it's still a join operation\nin essence. I don't see that putting the values one by one into a table\nand then joining is going to be a win compared to just processing the\nvalues one at a time against the main table.\n\n> Is IN (sub-select) about the same as EXISTS (sub-select WHERE x=y)?\n> What about NOT IN (sub-select) versus NOT EXISTS (sub-select WHERE x=y)\n\nThe EXISTS variants are actually worse, because we've not spent as much\ntime teaching the planner how to optimize them. There's effectively\nonly one decent plan for an EXISTS, which is that the subselect's \"x\" is\nindexed and we do an indexscan probe using the outer \"y\" for each outer\nrow. IN and NOT IN can do that, or several alternative plans that might\nbe better depending on data statistics.\n\nHowever, that's cold comfort for Matthew's application -- the only way\nhe'd get any benefit from all those planner smarts is if he ANALYZEs\nthe temp table after loading it and then EXECUTEs the main query (so\nthat it gets re-planned every time). Plus, at least some of those\nalternative plans would require an index on the temp table, which is\nunlikely to be worth the cost of setting up. And finally, this\nformulation requires separate IN and NOT IN tests that are necessarily\ngoing to do a lot of redundant work.\n\nThere's enough overhead here that I find it highly doubtful that it'll\nbe a win compared to the original approach of retail queries against the\nmain table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 04 Aug 2005 12:16:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 " }, { "msg_contents": "Tom Lane wrote:\n\n> I don't really see why you think that this path is going to lead to\n> better performance than where you were before. 
Manipulation of the\n> temp table is never going to be free, and IN (sub-select) is always\n> inherently not fast, and NOT IN (sub-select) is always inherently\n> awful. Throwing a pile of simple queries at the problem is not\n> necessarily the wrong way ... especially when you are doing it in\n> plpgsql, because you've already eliminated the overhead of network\n> round trips and repeated planning of the queries.\n> \n> \t\t\tregards, tom lane\n\nThe reason why I think this may be faster is because I would avoid\nrunning an update on data that needs to be inserted which saves\nsearching though the table for a matching token.\n\nPerhaps I should do the insert first, then drop those tokens from the\ntemp table, then do my updates in a loop.\n\nI'll have to do some benchmarking...\n\nschu\n", "msg_date": "Thu, 04 Aug 2005 08:17:58 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher wrote:\n> Tom Lane wrote:\n> \n> \n>>I don't really see why you think that this path is going to lead to\n>>better performance than where you were before. Manipulation of the\n>>temp table is never going to be free, and IN (sub-select) is always\n>>inherently not fast, and NOT IN (sub-select) is always inherently\n>>awful. Throwing a pile of simple queries at the problem is not\n>>necessarily the wrong way ... especially when you are doing it in\n>>plpgsql, because you've already eliminated the overhead of network\n>>round trips and repeated planning of the queries.\n>>\n>>\t\t\tregards, tom lane\n> \n> \n> The reason why I think this may be faster is because I would avoid\n> running an update on data that needs to be inserted which saves\n> searching though the table for a matching token.\n> \n> Perhaps I should do the insert first, then drop those tokens from the\n> temp table, then do my updates in a loop.\n> \n> I'll have to do some benchmarking...\n> \n> schu\n\nTom, I think your right, whenever I do a NOT IN it does a full table\nscan against bayes_token and since that table is going to get very big\ndoing the simple query in a loop that uses an index seems a bit faster.\n\nJohn, thanks for your help, it was worth a try, but it looks like the\nlooping is just faster.\n\nHere is what I have so far in case anyone else has ideas before I\nabandon it:\n\nCREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,\n intokenary BYTEA[],\n inspam_count INTEGER,\n inham_count INTEGER,\n inatime INTEGER)\nRETURNS VOID AS '\nDECLARE\n _token BYTEA;\nBEGIN\n\n UPDATE\n bayes_token\n SET\n spam_count = greatest_int(spam_count + inspam_count, 0),\n ham_count = greatest_int(ham_count + inham_count , 0),\n atime = greatest_int(atime, inatime)\n WHERE\n id = inuserid\n AND\n (token) IN (SELECT bayes_token_tmp FROM bayes_token_tmp(intokenary));\n\n UPDATE\n bayes_vars\n SET\n token_count = token_count + (\n SELECT\n count(bayes_token_tmp)\n FROM\n bayes_token_tmp(intokenary)\n WHERE\n bayes_token_tmp NOT IN (SELECT token FROM bayes_token)),\n newest_token_age = greatest_int(newest_token_age, inatime),\n oldest_token_age = least_int(oldest_token_age, inatime)\n WHERE\n id = inuserid;\n\n INSERT INTO\n bayes_token\n SELECT\n inuserid,\n bayes_token_tmp,\n inspam_count,\n inham_count,\n inatime\n FROM\n bayes_token_tmp(intokenary)\n WHERE\n (inspam_count > 0 OR inham_count > 0)\n AND\n (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);\n\n RETURN;\nEND;\n' LANGUAGE 
'plpgsql';\n\nCREATE OR REPLACE FUNCTION bayes_token_tmp(intokenary BYTEA[]) RETURNS\nSETOF bytea AS\n'\nBEGIN\n for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)\n LOOP\n return next intokenary[i];\n END LOOP;\n RETURN;\nend\n'\nlanguage 'plpgsql';\n\nCREATE OR REPLACE FUNCTION greatest_int (integer, integer)\n RETURNS INTEGER\n IMMUTABLE STRICT\n AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'\n LANGUAGE SQL;\n\nCREATE OR REPLACE FUNCTION least_int (integer, integer)\n RETURNS INTEGER\n IMMUTABLE STRICT\n AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'\n LANGUAGE SQL;\n", "msg_date": "Thu, 04 Aug 2005 09:35:44 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher wrote:\n> Matthew Schumacher wrote:\n>\n>>Tom Lane wrote:\n>>\n>>\n>>\n>>>I don't really see why you think that this path is going to lead to\n>>>better performance than where you were before. Manipulation of the\n>>>temp table is never going to be free, and IN (sub-select) is always\n>>>inherently not fast, and NOT IN (sub-select) is always inherently\n>>>awful. Throwing a pile of simple queries at the problem is not\n>>>necessarily the wrong way ... especially when you are doing it in\n>>>plpgsql, because you've already eliminated the overhead of network\n>>>round trips and repeated planning of the queries.\n>>>\n>>>\t\t\tregards, tom lane\n>>\n>>\n>>The reason why I think this may be faster is because I would avoid\n>>running an update on data that needs to be inserted which saves\n>>searching though the table for a matching token.\n>>\n>>Perhaps I should do the insert first, then drop those tokens from the\n>>temp table, then do my updates in a loop.\n>>\n>>I'll have to do some benchmarking...\n>>\n>>schu\n>\n>\n> Tom, I think your right, whenever I do a NOT IN it does a full table\n> scan against bayes_token and since that table is going to get very big\n> doing the simple query in a loop that uses an index seems a bit faster.\n>\n> John, thanks for your help, it was worth a try, but it looks like the\n> looping is just faster.\n>\n> Here is what I have so far in case anyone else has ideas before I\n> abandon it:\n\nSurely this isn't what you have. You have *no* loop here, and you have\nstuff like:\n AND\n (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);\n\nI'm guessing this isn't your last version of the function.\n\nAs far as putting the CREATE TEMP TABLE inside the function, I think the\nproblem is that the first time it runs, it compiles the function, and\nwhen it gets to the UPDATE/INSERT with the temporary table name, at\ncompile time it hard-codes that table id.\n\nI tried getting around it by using \"EXECUTE\" which worked, but it made\nthe function horribly slow. 
So I don't recommend it.\n\nAnyway, if you want us to evaluate it, you really need to send us the\nreal final function.\n\nJohn\n=:->\n\n\n>\n> CREATE OR REPLACE FUNCTION put_tokens(inuserid INTEGER,\n> intokenary BYTEA[],\n> inspam_count INTEGER,\n> inham_count INTEGER,\n> inatime INTEGER)\n> RETURNS VOID AS '\n> DECLARE\n> _token BYTEA;\n> BEGIN\n>\n> UPDATE\n> bayes_token\n> SET\n> spam_count = greatest_int(spam_count + inspam_count, 0),\n> ham_count = greatest_int(ham_count + inham_count , 0),\n> atime = greatest_int(atime, inatime)\n> WHERE\n> id = inuserid\n> AND\n> (token) IN (SELECT bayes_token_tmp FROM bayes_token_tmp(intokenary));\n>\n> UPDATE\n> bayes_vars\n> SET\n> token_count = token_count + (\n> SELECT\n> count(bayes_token_tmp)\n> FROM\n> bayes_token_tmp(intokenary)\n> WHERE\n> bayes_token_tmp NOT IN (SELECT token FROM bayes_token)),\n> newest_token_age = greatest_int(newest_token_age, inatime),\n> oldest_token_age = least_int(oldest_token_age, inatime)\n> WHERE\n> id = inuserid;\n>\n> INSERT INTO\n> bayes_token\n> SELECT\n> inuserid,\n> bayes_token_tmp,\n> inspam_count,\n> inham_count,\n> inatime\n> FROM\n> bayes_token_tmp(intokenary)\n> WHERE\n> (inspam_count > 0 OR inham_count > 0)\n> AND\n> (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);\n>\n> RETURN;\n> END;\n> ' LANGUAGE 'plpgsql';\n>\n> CREATE OR REPLACE FUNCTION bayes_token_tmp(intokenary BYTEA[]) RETURNS\n> SETOF bytea AS\n> '\n> BEGIN\n> for i in array_lower(intokenary, 1) .. array_upper(intokenary, 1)\n> LOOP\n> return next intokenary[i];\n> END LOOP;\n> RETURN;\n> end\n> '\n> language 'plpgsql';\n>\n> CREATE OR REPLACE FUNCTION greatest_int (integer, integer)\n> RETURNS INTEGER\n> IMMUTABLE STRICT\n> AS 'SELECT CASE WHEN $1 < $2 THEN $2 ELSE $1 END;'\n> LANGUAGE SQL;\n>\n> CREATE OR REPLACE FUNCTION least_int (integer, integer)\n> RETURNS INTEGER\n> IMMUTABLE STRICT\n> AS 'SELECT CASE WHEN $1 < $2 THEN $1 ELSE $2 END;'\n> LANGUAGE SQL;\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>", "msg_date": "Thu, 04 Aug 2005 14:36:12 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "John A Meinel wrote:\n\n> Surely this isn't what you have. You have *no* loop here, and you have\n> stuff like:\n> AND\n> (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);\n> \n> I'm guessing this isn't your last version of the function.\n> \n> As far as putting the CREATE TEMP TABLE inside the function, I think the\n> problem is that the first time it runs, it compiles the function, and\n> when it gets to the UPDATE/INSERT with the temporary table name, at\n> compile time it hard-codes that table id.\n> \n> I tried getting around it by using \"EXECUTE\" which worked, but it made\n> the function horribly slow. So I don't recommend it.\n> \n> Anyway, if you want us to evaluate it, you really need to send us the\n> real final function.\n> \n> John\n> =:->\n\nIt is the final function. It doesn't need a loop because of the\nbayes_token_tmp function I added. The array is passed to it and it\nreturns a record set so I can work off of it like it's a table. 
So the\nfunction works the same way it before, but instead of using SELECT\nintoken from TEMPTABLE, you use SELECT bayes_token_tmp from\nbayes_token_tmp(intokenary).\n\nI think this is more efficient than the create table overhead,\nespecially because the incoming record set won't be to big.\n\nThanks,\n\nschu\n\n", "msg_date": "Thu, 04 Aug 2005 14:37:41 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" }, { "msg_contents": "Matthew Schumacher wrote:\n> John A Meinel wrote:\n>\n>\n>>Surely this isn't what you have. You have *no* loop here, and you have\n>>stuff like:\n>> AND\n>> (bayes_token_tmp) NOT IN (SELECT token FROM bayes_token);\n>>\n>>I'm guessing this isn't your last version of the function.\n>>\n>>As far as putting the CREATE TEMP TABLE inside the function, I think the\n>>problem is that the first time it runs, it compiles the function, and\n>>when it gets to the UPDATE/INSERT with the temporary table name, at\n>>compile time it hard-codes that table id.\n>>\n>>I tried getting around it by using \"EXECUTE\" which worked, but it made\n>>the function horribly slow. So I don't recommend it.\n>>\n>>Anyway, if you want us to evaluate it, you really need to send us the\n>>real final function.\n>>\n>>John\n>>=:->\n>\n>\n> It is the final function. It doesn't need a loop because of the\n> bayes_token_tmp function I added. The array is passed to it and it\n> returns a record set so I can work off of it like it's a table. So the\n> function works the same way it before, but instead of using SELECT\n> intoken from TEMPTABLE, you use SELECT bayes_token_tmp from\n> bayes_token_tmp(intokenary).\n>\n> I think this is more efficient than the create table overhead,\n> especially because the incoming record set won't be to big.\n>\n> Thanks,\n>\n> schu\n>\n>\n\nWell, I would at least recommend that you change the \"WHERE\nbayes_token_tmp NOT IN (SELECT token FROM bayes_token)\"\nwith a\n\"WHERE NOT EXISTS (SELECT toke FROM bayes_token WHERE\ntoken=bayes_token_tmp)\"\n\nYou might try experimenting with the differences, but on my system the\nNOT IN has to do a full sequential scan on bayes_token and load all\nentries into a list, while NOT EXISTS can do effectively a nested loop.\n\nThe nested loop requires that there is an index on bayes_token(token),\nbut I'm pretty sure there is anyway.\n\nAgain, in my testing, it was a difference of 4200ms versus 180ms. (800k\nrows in my big table, 2k in the temp one)\n\nJohn\n=:->", "msg_date": "Thu, 04 Aug 2005 17:42:52 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0" } ]
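To make John's suggestion concrete, here is a minimal sketch of how the two NOT IN predicates in Matthew's final put_tokens function could be rewritten with NOT EXISTS. It assumes the bayes_token, bayes_vars, bayes_token_tmp(), greatest_int() and least_int() definitions posted earlier in the thread, plus an index on bayes_token(token) so each probe can be an index lookup; only the two affected statements are shown:

  -- count the genuinely new tokens without materializing the whole token
  -- list the way a NOT IN subselect would
  UPDATE bayes_vars
     SET token_count = token_count + (
           SELECT count(*)
             FROM bayes_token_tmp(intokenary) AS t(tok)
            WHERE NOT EXISTS (SELECT 1 FROM bayes_token b
                               WHERE b.token = t.tok)),
         newest_token_age = greatest_int(newest_token_age, inatime),
         oldest_token_age = least_int(oldest_token_age, inatime)
   WHERE id = inuserid;

  -- insert only those new tokens, again probing bayes_token once per candidate
  INSERT INTO bayes_token
  SELECT inuserid, t.tok, inspam_count, inham_count, inatime
    FROM bayes_token_tmp(intokenary) AS t(tok)
   WHERE (inspam_count > 0 OR inham_count > 0)
     AND NOT EXISTS (SELECT 1 FROM bayes_token b WHERE b.token = t.tok);

In John's 800k-row test the same substitution took the anti-join from roughly 4200ms with NOT IN to about 180ms with NOT EXISTS, and on later releases (8.4 and up) the bayes_token_tmp() helper itself could be replaced by the built-in unnest(intokenary).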
[ { "msg_contents": "Hi all;\n\nI have a customer who currently uses an application which had become \nslow. After doing some digging, I found the slow query:\n\nSELECT c.accno, c.description, c.link, c.category, ac.project_id, \np.projectnumber,\n a.department_id, d.description AS department\nFROM chart c JOIN acc_trans ac ON (ac.chart_id = c.id)\n JOIN ar a ON (a.id = ac.trans_id)\n LEFT JOIN project p ON (ac.project_id = p.id)\n LEFT JOIN department d ON (d.id = a.department_id)\nWHERE a.customer_id = 11373 AND a.id IN (\n SELECT max(id) FROM ar WHERE customer_id = 11373\n);\n\n(reformatted for readability)\n\nThis is taking 10 seconds to run.\n\nInterestingly, both the project and department tables are blank, and if \nI omit them, the query becomes:\nSELECT c.accno, c.description, c.link, c.category, ac.project_id\nFROM chart c JOIN acc_trans ac ON (ac.chart_id = c.id)\n JOIN ar a ON (a.id = ac.trans_id)\nWHERE a.customer_id = 11373 AND a.id IN (\n SELECT max(id) FROM ar WHERE customer_id = 11373\n);\n\nThis takes 139ms. 1% of the previous query.\n\nThe plan for the long query is:\n\n \nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=87337.25..106344.93 rows=41 width=118) (actual \ntime=7615.843..9850.209 rows=10 loops=1)\n Hash Cond: (\"outer\".trans_id = \"inner\".max)\n -> Merge Right Join (cost=86620.57..100889.85 rows=947598 \nwidth=126) (actual time=7408.830..9200.435 rows=177769 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".department_id)\n -> Index Scan using department_id_key on department d \n(cost=0.00..52.66\nrows=1060 width=36) (actual time=0.090..0.090 rows=0 loops=1)\n -> Sort (cost=86620.57..87067.55 rows=178792 width=94) \n(actual time=7408.709..7925.843 rows=177769 loops=1)\n Sort Key: a.department_id\n -> Merge Right Join (cost=45871.18..46952.83 \nrows=178792 width=94) (actual time=4962.122..6671.319 rows=177769 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".project_id)\n -> Index Scan using project_id_key on project p \n(cost=0.00..49.80 rows=800 width=36) (actual time=0.007..0.007 rows=0 \nloops=1)\n -> Sort (cost=45871.18..46318.16 rows=178792 \nwidth=62) (actual time=4962.084..5475.636 rows=177769 loops=1)\n Sort Key: ac.project_id\n -> Hash Join (cost=821.20..13193.43 \nrows=178792 width=62) (actual time=174.905..4295.685 rows=177769 loops=1)\n Hash Cond: (\"outer\".chart_id = \"inner\".id)\n -> Hash Join (cost=817.66..10508.02 \nrows=178791\nwidth=20) (actual time=173.952..2840.824 rows=177769 loops=1)\n Hash Cond: (\"outer\".trans_id = \n\"inner\".id)\n -> Seq Scan on acc_trans ac \n(cost=0.00..3304.38 rows=181538 width=12) (actual time=0.062..537.753 \nrows=181322 loops=1)\n -> Hash (cost=659.55..659.55 \nrows=22844 width=8) (actual time=173.625..173.625 rows=0 loops=1)\n -> Seq Scan on ar a \n(cost=0.00..659.55 rows=22844 width=8) (actual time=0.022..101.828 \nrows=22844 loops=1)\n Filter: (customer_id \n= 11373)\n -> Hash (cost=3.23..3.23 rows=123 \nwidth=50) (actual time=0.915..0.915 rows=0 loops=1)\n -> Seq Scan on chart c \n(cost=0.00..3.23 rows=123 width=50) (actual time=0.013..0.528 rows=123 \nloops=1)\n -> Hash (cost=716.67..716.67 rows=1 width=4) (actual \ntime=129.037..129.037 rows=0 loops=1)\n -> Subquery Scan \"IN_subquery\" (cost=716.66..716.67 rows=1 \nwidth=4) (actual time=129.017..129.025 rows=1 loops=1)\n -> Aggregate (cost=716.66..716.66 rows=1 width=4) \n(actual time=129.008..129.011 rows=1 
loops=1)\n -> Seq Scan on ar (cost=0.00..659.55 rows=22844 \nwidth=4) (actual time=0.020..73.266 rows=22844 loops=1)\n Filter: (customer_id = 11373)\n Total runtime: 9954.133 ms\n(28 rows)\n \nThe shorter query's plan is:\n\n \nQUERY PLAN\n \n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=728.42..732.96 rows=8 width=50) (actual \ntime=130.908..131.593 rows=10 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".chart_id)\n -> Seq Scan on chart c (cost=0.00..3.23 rows=123 width=50) (actual \ntime=0.006..0.361 rows=123 loops=1)\n -> Hash (cost=728.40..728.40 rows=8 width=8) (actual \ntime=130.841..130.841 rows=0 loops=1)\n -> Nested Loop (cost=716.67..728.40 rows=8 width=8) (actual \ntime=130.692..130.805 rows=10 loops=1)\n -> Nested Loop (cost=716.67..720.89 rows=1 width=8) \n(actual time=130.626..130.639 rows=1 loops=1)\n -> HashAggregate (cost=716.67..716.67 rows=1 \nwidth=4) (actual time=130.484..130.487 rows=1 loops=1)\n -> Subquery Scan \"IN_subquery\" \n(cost=716.66..716.67 rows=1 width=4) (actual time=130.455..130.464 \nrows=1 loops=1)\n -> Aggregate (cost=716.66..716.66 \nrows=1 width=4) (actual time=130.445..130.448 rows=1 loops=1)\n -> Seq Scan on ar \n(cost=0.00..659.55 rows=22844 width=4) (actual time=0.020..74.174 \nrows=22844 loops=1)\n Filter: (customer_id = 11373)\n -> Index Scan using ar_id_key on ar a \n(cost=0.00..4.20 rows=1 width=4) (actual time=0.122..0.125 rows=1 loops=1)\n Index Cond: (a.id = \"outer\".max)\n Filter: (customer_id = 11373)\n -> Index Scan using acc_trans_trans_id_key on acc_trans \nac (cost=0.00..7.41 rows=8 width=12) (actual time=0.051..0.097 rows=10 \nloops=1)\n Index Cond: (\"outer\".max = ac.trans_id)\n Total runtime: 131.879 ms\n(17 rows)\n \nI am not sure if I want to remove support for the other two tables \nyet. However, I wanted to submit this here as a (possibly corner-) \ncase where the plan seems to be far slower than it needs to be.\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting\n", "msg_date": "Wed, 27 Jul 2005 22:15:47 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Left joining against two empty tables makes a query SLOW" }, { "msg_contents": "On 7/28/05, Chris Travers <[email protected]> wrote:\n> \n> Hi all;\n> \n> I have a customer who currently uses an application which had become\n> slow. 
After doing some digging, I found the slow query:\n> \n> SELECT c.accno, c.description, c.link, c.category, ac.project_id,\n> p.projectnumber,\n> a.department_id, d.description AS department\n> FROM chart c JOIN acc_trans ac ON (ac.chart_id = c.id <http://c.id>)\n> JOIN ar a ON (a.id <http://a.id> = ac.trans_id)\n> LEFT JOIN project p ON (ac.project_id = p.id <http://p.id>)\n> LEFT JOIN department d ON (d.id <http://d.id> = a.department_id)\n> WHERE a.customer_id = 11373 AND a.id <http://a.id> IN (\n> SELECT max(id) FROM ar WHERE customer_id = 11373\n> );\n> \n> (reformatted for readability)\n> \n> This is taking 10 seconds to run.\n> \n> Interestingly, both the project and department tables are blank, and if\n> I omit them, the query becomes:\n> SELECT c.accno, c.description, c.link, c.category, ac.project_id\n> FROM chart c JOIN acc_trans ac ON (ac.chart_id = c.id <http://c.id>)\n> JOIN ar a ON (a.id <http://a.id> = ac.trans_id)\n> WHERE a.customer_id = 11373 AND a.id <http://a.id> IN (\n> SELECT max(id) FROM ar WHERE customer_id = 11373\n> );\n> \n> This takes 139ms. 1% of the previous query.\n> \n> The plan for the long query is:\n> \n> \n> QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash IN Join (cost=87337.25..106344.93 rows=41 width=118) (actual\n> time=7615.843..9850.209 rows=10 loops=1)\n> Hash Cond: (\"outer\".trans_id = \"inner\".max)\n> -> Merge Right Join (cost=86620.57..100889.85 rows=947598\n> width=126) (actual time=7408.830..9200.435 rows=177769 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".department_id)\n> -> Index Scan using department_id_key on department d\n> (cost=0.00..52.66\n> rows=1060 width=36) (actual time=0.090..0.090 rows=0 loops=1)\n\n\nvacuum & reindex the department and project table as the planner expects \nthere are 1060 rows but actually returning nothing.\n\n-> Sort (cost=86620.57..87067.55 rows=178792 width=94)\n> (actual time=7408.709..7925.843 rows=177769 loops=1)\n> Sort Key: a.department_id\n> -> Merge Right Join (cost=45871.18..46952.83\n> rows=178792 width=94) (actual time=4962.122..6671.319 rows=177769 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".project_id)\n> -> Index Scan using project_id_key on project p\n> (cost=0.00..49.80 rows=800 width=36) (actual time=0.007..0.007 rows=0\n> loops=1)\n> -> Sort (cost=45871.18..46318.16 rows=178792\n> width=62) (actual time=4962.084..5475.636 rows=177769 loops=1)\n> Sort Key: ac.project_id\n> -> Hash Join (cost=821.20..13193.43\n> rows=178792 width=62) (actual time=174.905..4295.685 rows=177769 loops=1)\n> Hash Cond: (\"outer\".chart_id = \"inner\".id)\n> -> Hash Join (cost=817.66..10508.02\n> rows=178791\n> width=20) (actual time=173.952..2840.824 rows=177769 loops=1)\n> Hash Cond: (\"outer\".trans_id =\n> \"inner\".id)\n> -> Seq Scan on acc_trans ac\n> (cost=0.00..3304.38 rows=181538 width=12) (actual time=0.062..537.753\n> rows=181322 loops=1)\n> -> Hash (cost=659.55..659.55\n> rows=22844 width=8) (actual time=173.625..173.625 rows=0 loops=1)\n> -> Seq Scan on ar a\n> (cost=0.00..659.55 rows=22844 width=8) (actual time=0.022..101.828\n> rows=22844 loops=1)\n> Filter: (customer_id\n> = 11373)\n> -> Hash (cost=3.23..3.23 rows=123\n> width=50) (actual time=0.915..0.915 rows=0 loops=1)\n> -> Seq Scan on chart c\n> (cost=0.00..3.23 rows=123 width=50) (actual time=0.013..0.528 rows=123\n> loops=1)\n> -> Hash (cost=716.67..716.67 rows=1 width=4) (actual\n> 
time=129.037..129.037 rows=0 loops=1)\n> -> Subquery Scan \"IN_subquery\" (cost=716.66..716.67 rows=1\n> width=4) (actual time=129.017..129.025 rows=1 loops=1)\n> -> Aggregate (cost=716.66..716.66 rows=1 width=4)\n> (actual time=129.008..129.011 rows=1 loops=1)\n> -> Seq Scan on ar (cost=0.00..659.55 rows=22844\n> width=4) (actual time=0.020..73.266 rows=22844 loops=1)\n> Filter: (customer_id = 11373)\n> Total runtime: 9954.133 ms\n> (28 rows)\n> \n> The shorter query's plan is:\n> \n> \n> QUERY PLAN\n> \n> \n> ---------------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=728.42..732.96 rows=8 width=50) (actual\n> time=130.908..131.593 rows=10 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".chart_id)\n> -> Seq Scan on chart c (cost=0.00..3.23 rows=123 width=50) (actual\n> time=0.006..0.361 rows=123 loops=1)\n> -> Hash (cost=728.40..728.40 rows=8 width=8) (actual\n> time=130.841..130.841 rows=0 loops=1)\n> -> Nested Loop (cost=716.67..728.40 rows=8 width=8) (actual\n> time=130.692..130.805 rows=10 loops=1)\n> -> Nested Loop (cost=716.67..720.89 rows=1 width=8)\n> (actual time=130.626..130.639 rows=1 loops=1)\n> -> HashAggregate (cost=716.67..716.67 rows=1\n> width=4) (actual time=130.484..130.487 rows=1 loops=1)\n> -> Subquery Scan \"IN_subquery\"\n> (cost=716.66..716.67 rows=1 width=4) (actual time=130.455..130.464\n> rows=1 loops=1)\n> -> Aggregate (cost=716.66..716.66\n> rows=1 width=4) (actual time=130.445..130.448 rows=1 loops=1)\n> -> Seq Scan on ar\n> (cost=0.00..659.55 rows=22844 width=4) (actual time=0.020..74.174\n> rows=22844 loops=1)\n> Filter: (customer_id = 11373)\n> -> Index Scan using ar_id_key on ar a\n> (cost=0.00..4.20 rows=1 width=4) (actual time=0.122..0.125 rows=1 loops=1)\n> Index Cond: (a.id <http://a.id> = \"outer\".max)\n> Filter: (customer_id = 11373)\n> -> Index Scan using acc_trans_trans_id_key on acc_trans\n> ac (cost=0.00..7.41 rows=8 width=12) (actual time=0.051..0.097 rows=10\n> loops=1)\n> Index Cond: (\"outer\".max = ac.trans_id)\n> Total runtime: 131.879 ms\n> (17 rows)\n> \n> I am not sure if I want to remove support for the other two tables\n> yet. However, I wanted to submit this here as a (possibly corner-)\n> case where the plan seems to be far slower than it needs to be.\n> \n> Best Wishes,\n> Chris Travers\n> Metatron Technology Consulting\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n\n-- \nwith regards,\nS.Gnanavel\nSatyam Computer Services Ltd.\n\nOn 7/28/05, Chris Travers <[email protected]> wrote:\nHi all;I have a customer who currently uses an application which had becomeslow.  
However, I wanted to submit this here as a (possibly corner-)\ncase where the plan seems to be far slower than it needs to be.Best Wishes,Chris TraversMetatron Technology Consulting---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend-- with regards,S.GnanavelSatyam Computer Services Ltd.", "msg_date": "Thu, 28 Jul 2005 15:25:57 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Left joining against two empty tables makes a query SLOW" }, { "msg_contents": "Gnanavel S wrote:\n\n>\n>\n> vacuum & reindex the department and project table as the planner \n> expects there are 1060 rows but actually returning nothing.\n\nI guess I should have mentioned that I have been vacuuming and \nreindexing at least once a week, and I did so just before running this test.\nNormally I do: \nvacuum analyze;\nreindex database ....;\n\nSecondly, the project table has *never* had anything in it. So where \nare these numbers coming from?\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting\n", "msg_date": "Thu, 28 Jul 2005 09:55:53 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Left joining against two empty tables makes a query" }, { "msg_contents": "Chris Travers <[email protected]> writes:\n> Secondly, the project table has *never* had anything in it. So where \n> are these numbers coming from?\n\nThe planner is designed to assume a certain minimum size (10 pages) when\nit sees that a table is of zero physical length. The reason for this is\nthat there are lots of scenarios where a plan created just after a table\nis first created will continue to be used while the table is filled, and\nif we optimized on the assumption of zero size we would produce plans\nthat seriously suck once the table gets big. Assuming a few thousand\nrows keeps us out of the worst problems of this type.\n\n(If we had an infrastructure for regenerating cached plans then we could\nfix this more directly, by replanning whenever the table size changes\n\"too much\". We don't yet but I hope there will be something by 8.2.)\n\nYou might try going ahead and actually putting a row or two into\nprojects; vacuuming that will change the state to where the planner\nwill believe the small size. (If you aren't ever planning to have\nanything in projects, why have the table at all?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jul 2005 14:13:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Left joining against two empty tables makes a query " }, { "msg_contents": "On 7/28/05, Chris Travers <[email protected]> wrote:\n> \n> Gnanavel S wrote:\n> \n> >\n> >\n> > vacuum & reindex the department and project table as the planner\n> > expects there are 1060 rows but actually returning nothing.\n> \n> I guess I should have mentioned that I have been vacuuming and\n> reindexing at least once a week, and I did so just before running this \n> test.\n> Normally I do:\n> vacuum analyze;\n> reindex database ....;\n\n\nreindex the tables separately.\n\nSecondly, the project table has *never* had anything in it. 
So where\n> are these numbers coming from?\n\n\npg_statistics \n\nBest Wishes,\n> Chris Travers\n> Metatron Technology Consulting\n> \n\n\n\n-- \nwith regards,\nS.Gnanavel\nSatyam Computer Services Ltd.\n\n", "msg_date": "Fri, 29 Jul 2005 09:35:40 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Left joining against two empty tables makes a query SLOW" }, { "msg_contents": "Gnanavel S wrote:\n> reindex the tables separately.\n\nReindexing should not affect this problem, anyway.\n\n-Neil\n", "msg_date": "Fri, 29 Jul 2005 14:26:29 +1000", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Left joining against two empty tables makes a query" }, { "msg_contents": "\n>\n> Secondly, the project table has *never* had anything in it. So where\n> are these numbers coming from? \n>\n>\n> pg_statistics\n\nI very much doubt that. I was unable to locate any rows in pg_statistic \nwhere the pg_class.oid for either table matched any row's starelid.\n\nTom's argument that this is behavior by design makes sense. I assumed \nthat something like that had to be going on, otherwise there would be \nnowhere for the numbers to come from. I.e. if there never were any rows \nin the table, then if pg_statistic is showing 1060 rows, we have bigger \nproblems than a bad query plan. I hope however that eventually tables \nwhich are truly empty can be treated intelligently sometime in the \nfuture in Left Joins. Otherwise this limits the usefulness of out of \nthe box solutions which may have functionality that we don't use. Such \nsolutions can then kill the database performance quite easily.\n\nChris Travers\nMetatron Technology Consulting\n", "msg_date": "Thu, 28 Jul 2005 22:23:15 -0700", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Left joining against two empty tables makes a query" }, { "msg_contents": "Sorry for my english.\nMay I ask? (i'm still learning postgresql). Isn't outer join forcing \"join\norder\"?\nThe planner will resolve a, then ac in order to resolve left join previously\nand will not be able to choose the customer_id filter (more selective)...\nAFAIK (not too far :-)) this will be the join order, even if projects and\ndeparments are not empty, no matter how much statistical info you (the\nengine) have (has).\n\nWorkaround:\nYou should probably try to use a subquery to allow planner to choose join\norder (as long as you can modify source code :-O ).
You know project and\ndepartment are empty now so...\n\n\nSELECT aa.accno, aa.description, aa.link, aa.category, aa.project_id,\naa.department, p.projectnumber, d.description from (\n\t\tSELECT c.accno, c.description, c.link, c.category, ac.project_id,\n\t\t\ta.department_id AS department\n\t\t\tFROM chart c JOIN acc_trans ac ON (ac.chart_id = c.id)\n\t\t\tJOIN ar a ON (a.id = ac.trans_id)\n\t\t\tWHERE a.customer_id = 11373 AND a.id IN (\n\t\t\t\t SELECT max(id) FROM ar WHERE customer_id = 11373)\n\t\t) aa\n\n\t\tLEFT JOIN project p ON (aa.project_id = p.id)\n\t\tLEFT JOIN department d ON (d.id = aa.department)\n\nDoubt of it. I rewrite it at first sight.\n\nLong life, little spam and prosperity.\n\n-----Mensaje original-----\nDe: [email protected]\n[mailto:[email protected]]En nombre de Chris\nTravers\nEnviado el: viernes, 29 de julio de 2005 2:23\nPara: Gnanavel S\nCC: Chris Travers; [email protected]\nAsunto: Re: [PERFORM] Left joining against two empty tables makes a\nquery\n\n\n\n>\n> Secondly, the project table has *never* had anything in it. So where\n> are these numbers coming from?\n>\n>\n> pg_statistics\n\nI very much doubt that. I was unable to locate any rows in pg_statistic\nwhere the pg_class.oid for either table matched any row's starelid.\n\nTom's argument that this is behavior by design makes sense. I assumed\nthat something like that had to be going on, otherwise there would be\nnowhere for the numbers to come from. I.e. if there never were any rows\nin the table, then if pg_statistic is showing 1060 rows, we have bigger\nproblems than a bad query plan. I hope however that eventually tables\nwhich are truly empty can be treated intelligently sometime in the\nfuture in Left Joins. Otherwise this limits the usefulness of out of\nthe box solutions which may have functionality that we don't use. Such\nsolutions can then kill the database performance quite easily.\n\nChris Travers\nMetatron Technology Consulting\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Fri, 29 Jul 2005 20:18:56 -0300", "msg_from": "\"Dario\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Left joining against two empty tables makes a query" } ]
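A minimal SQL sketch of the workaround described above -- put a placeholder row in the empty table and vacuum it so the planner stops assuming a default size. The single-column INSERT is an assumption about the project table's layout; adjust it to the real column list:

SELECT relname, relpages, reltuples
  FROM pg_class
 WHERE relname IN ('project', 'department');   -- what the planner currently believes

INSERT INTO project (id) VALUES (0);           -- hypothetical placeholder row
VACUUM ANALYZE project;
VACUUM ANALYZE department;

-- relpages/reltuples now reflect the true (tiny) size, so the left joins
-- above get planned with realistic row estimates instead of the default
-- assumption of a few thousand rows.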
[ { "msg_contents": "Hello,\n\nwe recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4 \nopterons, 16GB of memory and MegaRAID with enough disks. OS is Debian \nSarge amd64, PostgreSQL is 8.0.3. Size of database is something like 80GB \nand our website performs about 600 selects and several updates/inserts a \nsecond.\n\nv40z performs somewhat better than our old Dell but mostly due to \nincreased amount of memory. The problem is.. there seems to by plenty of \nfree CPU available and almost no IO-wait but CPU bound queries seem to \nlinger for some reason. Problem appears very clearly during checkpointing. \nQueries accumulate and when checkpointing is over, there can be something \nlike 400 queries running but over 50% of cpu is just idling.\n\nprocs -----------memory------------ ---swap-- -----io---- --system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 3 1 0 494008 159492 14107180 0 0 919 3164 3176 13031 29 12 52 8\n 5 3 0 477508 159508 14118452 0 0 1071 4479 3474 13298 27 13 47 13\n 0 0 0 463604 159532 14128832 0 0 922 2903 3352 12627 29 11 52 8\n 3 1 0 442260 159616 14141668 0 0 1208 3153 3357 13163 28 12 52 9\n\nAn example of a lingering query (there's usually several of these or similar):\n\nSELECT u.uid, u.nick, u.name, u.showname, i.status, i.stamp, i.image_id, \ni.info, i.t_width, i.t_height FROM users u INNER JOIN image i ON i.uid = \nu.uid INNER JOIN user_online uo ON u.uid = uo.uid WHERE u.city_id = 5 AND \ni.status = 'd' AND u.status = 'a' ORDER BY city_id, upper(u.nick) LIMIT \n(40 + 1) OFFSET 320\n\nTables involved contain no more than 4 million rows. Those are constantly \naccessed and should fit nicely to cache. But database is just slow because \nof some unknown reason. Any ideas?\n\n----------------->8 Relevant rows from postgresql.conf 8<-----------------\n\nshared_buffers = 15000 # min 16, at least max_connections*2, 8KB each\nwork_mem = 1536 # min 64, size in KB\nmaintenance_work_mem = 32768 # min 1024, size in KB\n\nmax_fsm_pages = 1000000 # min max_fsm_relations*16, 6 bytes each\nmax_fsm_relations = 5000 # min 100, ~50 bytes each\n\nvacuum_cost_delay = 15 # 0-1000 milliseconds\nvacuum_cost_limit = 120 # 0-10000 credits\n\nbgwriter_percent = 2 # 0-100% of dirty buffers in each round\n\nfsync = true # turns forced synchronization on or off\n # fsync, fdatasync, open_sync, or open_datasync\nwal_buffers = 128 # min 4, 8KB each\ncommit_delay = 80000 # range 0-100000, in microseconds\ncommit_siblings = 10 # range 1-1000\n\ncheckpoint_segments = 200 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 1800 # range 30-3600, in seconds\n\neffective_cache_size = 1000000 # typically 8KB each\nrandom_page_cost = 1.8 # units are one sequential page fetch cost\n\ndefault_statistics_target = 150 # range 1-1000\n\nstats_start_collector = true\nstats_command_string = true\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n", "msg_date": "Thu, 28 Jul 2005 12:21:12 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": true, "msg_subject": "Finding bottleneck" }, { "msg_contents": "Hi,\n\nOn Thu, 28 Jul 2005, Kari Lavikka wrote:\n\n> ----------------->8 Relevant rows from postgresql.conf 8<-----------------\n>\n> shared_buffers = 15000 # min 16, at least max_connections*2, 8KB each\n> work_mem = 1536 # min 64, size in KB\n\nAs an aside, I'd increase work_mem -- but it doesn't sound like that is\nyour problem.\n\n> maintenance_work_mem = 
32768 # min 1024, size in KB\n>\n> max_fsm_pages = 1000000 # min max_fsm_relations*16, 6 bytes each\n> max_fsm_relations = 5000 # min 100, ~50 bytes each\n>\n> vacuum_cost_delay = 15 # 0-1000 milliseconds\n> vacuum_cost_limit = 120 # 0-10000 credits\n>\n> bgwriter_percent = 2 # 0-100% of dirty buffers in each round\n>\n> fsync = true # turns forced synchronization on or off\n> # fsync, fdatasync, open_sync, or open_datasync\n> wal_buffers = 128 # min 4, 8KB each\n\nSome benchmarking results out today suggest that wal_buffers = 1024 or\neven 2048 could greatly assist you.\n\n> commit_delay = 80000 # range 0-100000, in microseconds\n> commit_siblings = 10 # range 1-1000\n\nThis may explain the fact that you've got backed up queries and idle CPU\n-- I'm not certain though. What does disabling commit_delay do to your\nsituation?\n\nGavin\n", "msg_date": "Thu, 28 Jul 2005 19:34:50 +1000 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck" }, { "msg_contents": "> effective_cache_size = 1000000 # typically 8KB each\n\nI have this setting on postgresql 7.4.8 on FreeBSD with 4 GB RAM:\n\neffective_cache_size = 27462\n\nSo eventhough your machine runs Debian and you have four times as much\nRAM as mine your effective_cache_size is 36 times larger. You could\ntry lowering this setting.\n\nregards\nClaus\n", "msg_date": "Thu, 28 Jul 2005 13:52:03 +0200", "msg_from": "Claus Guttesen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck" }, { "msg_contents": "On 7/28/05 2:21 AM, \"Kari Lavikka\" <[email protected]> wrote:\n\nThere's a new profiling tool called oprofile:\n\n http://oprofile.sourceforge.net/download/\n\nthat can be run without instrumenting the binaries beforehand. To actually\nfind out what the code is doing during these stalls, oprofile can show you\nin which routines the CPU is spending time when you start/stop the\nprofiling.\n\nAs an alternative to the \"guess->change parameters->repeat\" approach, this\nis the most direct way to find the exact nature of the problem.\n\n- Luke\n\n\n", "msg_date": "Thu, 28 Jul 2005 11:27:36 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck" }, { "msg_contents": "\nHi!\n\nOprofile looks quite interesting. I'm not very familiar with postgresql \ninternals, but here's some report output:\n\nCPU: AMD64 processors, speed 2190.23 MHz (estimated)\nCounted CPU_CLK_UNHALTED events (Cycles outside of halt state) with a unit \nmask of 0x00 (No unit mask) count 100000\nsamples % symbol name\n13513390 16.0074 AtEOXact_CatCache\n4492257 5.3213 StrategyGetBuffer\n2279285 2.6999 AllocSetAlloc\n2121509 2.5130 LWLockAcquire\n2023574 2.3970 hash_seq_search\n1971358 2.3352 nocachegetattr\n1837168 2.1762 GetSnapshotData\n1793693 2.1247 SearchCatCache\n1777385 2.1054 hash_search\n1460804 1.7304 ExecMakeFunctionResultNoSets\n1360930 1.6121 _bt_compare\n1344604 1.5928 yyparse\n1318407 1.5617 LWLockRelease\n1290814 1.5290 FunctionCall2\n1137544 1.3475 ExecEvalVar\n1102236 1.3057 hash_any\n912677 1.0811 OpernameGetCandidates\n877993 1.0400 ReadBufferInternal\n783908 0.9286 TransactionIdPrecedes\n772886 0.9155 MemoryContextAllocZeroAligned\n679768 0.8052 StrategyBufferLookup\n609339 0.7218 equal\n600584 0.7114 PGSemaphoreLock\n\nAnd btw, I tried to strace lingering queries under different loads. 
When \nnumber of concurrent queries increases, lseek and read syscalls stay \nwithin quite constant limits but number of semop calls quadruples.\n\nAre there some buffer locking issues?\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\nOn Thu, 28 Jul 2005, Luke Lonergan wrote:\n\n> On 7/28/05 2:21 AM, \"Kari Lavikka\" <[email protected]> wrote:\n>\n> There's a new profiling tool called oprofile:\n>\n> http://oprofile.sourceforge.net/download/\n>\n> that can be run without instrumenting the binaries beforehand. To actually\n> find out what the code is doing during these stalls, oprofile can show you\n> in which routines the CPU is spending time when you start/stop the\n> profiling.\n>\n> As an alternative to the \"guess->change parameters->repeat\" approach, this\n> is the most direct way to find the exact nature of the problem.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n", "msg_date": "Mon, 8 Aug 2005 15:03:21 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck" }, { "msg_contents": "Kari Lavikka <[email protected]> writes:\n> samples % symbol name\n> 13513390 16.0074 AtEOXact_CatCache\n\nThat seems quite odd --- I'm not used to seeing that function at the top\nof a profile. What is the workload being profiled, exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Aug 2005 10:39:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Finding bottleneck " } ]
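A quick way to double-check the parameters discussed above from a psql session before editing postgresql.conf (the values shown are only the ones suggested in the thread, not general recommendations):

SELECT name, setting, context
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'wal_buffers', 'work_mem',
                'commit_delay', 'commit_siblings', 'effective_cache_size');

-- postgresql.conf changes to test, per the suggestions above:
--   wal_buffers  = 1024      -- requires a server restart
--   commit_delay = 0         -- can also be tried per session: SET commit_delay = 0;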
[ { "msg_contents": "> I'm not sure how much this has been discussed on the list, but wasn't\n> able to find anything relevant in the archives.\n> \n> The new Spamassassin is due out pretty soon. They are currently\ntesting\n> 3.1.0pre4. One of the things I hope to get out of this release is\nbayes\n> word stats moved to a real RDBMS. They have separated the mysql\n> BayesStore module from the PgSQL one so now postgres can use it's own\n> queries.\n> \n> I loaded all of this stuff up on a test server and am finding that the\n> bayes put performance is really not good enough for any real amount of\n> mail load.\n> \n> The performance problems seems to be when the bayes module is\n> inserting/updating. This is now handled by the token_put procedure.\n\n1. you need high performance client side timing (sub 1 millisecond). on\nwin32 use QueryPerformanceCounter\n\n2. one by one, convert queries inside your routine into dynamic\nversions. That is, use execute 'query string'\n\n3. Identify the problem. Something somewhere is not using the index.\nBecause of the way the planner works you have to do this sometimes.\n\nMerlin\n", "msg_date": "Thu, 28 Jul 2005 08:20:35 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin 3.1.0 Bayes\n module." } ]
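For step 2 above, a hedged sketch of what converting a statement inside a PL/pgSQL routine into a dynamic version can look like. The table and column names are only stand-ins for whatever token_put really touches, and the dollar-quoted body assumes PostgreSQL 8.0:

CREATE OR REPLACE FUNCTION put_token_example(in_token text) RETURNS void AS $$
BEGIN
    -- static form: planned once, with the parameter value unknown
    -- UPDATE bayes_token SET spam_count = spam_count + 1 WHERE token = in_token;

    -- dynamic form: re-planned on every call, so the planner sees the
    -- actual literal value and its statistics
    EXECUTE 'UPDATE bayes_token SET spam_count = spam_count + 1 WHERE token = '
            || quote_literal(in_token);
    RETURN;
END;
$$ LANGUAGE plpgsql;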
[ { "msg_contents": "Kari Lavikka wrote:\n> shared_buffers = 15000 \nyou can play around with this one but in my experience it doesn't make\nmuch difference anymore (it used to).\n\n> work_mem = 1536 # min 64, size in KB\nthis seems low. are you sure you are not getting sorts swapped to disk?\n\n> fsync = true # turns forced synchronization on or\noff\ndoes turning this to off make a difference? This would help narrow down\nwhere the problem is.\n\n> commit_delay = 80000 # range 0-100000, in microseconds\nhm! how did you arrive at this number? try setting to zero and\ncomparing.\n\n> stats_start_collector = true\n> stats_command_string = true\nwith a high query load you may want to consider turning this off. On\nwin32, I've had some problem with stat's collector under high load\nconditions. Not un unix, but it's something to look at. Just turn off\nstats for a while and see if it helps.\n\ngood luck! your hardware should be more than adequate.\n\nMerlin\n", "msg_date": "Thu, 28 Jul 2005 09:02:18 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Finding bottleneck" } ]
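One of the questions above -- whether work_mem = 1536 is forcing sorts to spill to disk -- can be probed per session without touching the config file; the query here is only a stand-in for one of the real sort-heavy statements:

SET work_mem = 16384;
EXPLAIN ANALYZE SELECT relname FROM pg_class ORDER BY relname;
SET work_mem = 1536;
EXPLAIN ANALYZE SELECT relname FROM pg_class ORDER BY relname;

If the larger setting shrinks the sort step's runtime noticeably, the sorts were probably spilling.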
[ { "msg_contents": "\n\n\n\nPostgres V7.3.9-2.\n\nWhile executing a query in psql, the following error was generated:\n\nvsa=# select * from vsa.dtbl_logged_event_20050318 where id=2689472;\nPANIC: open of /vsa/db/pg_clog/0FC0 failed: No such file or directory\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n!#\n\nI checked in the /vsa/db/pg_clog directory, and the files have monotonically increasing filenames starting with 0000. The most recent names are:\n\n-rw------- 1 postgres postgres 262144 Jul 25 21:39 04CA\n-rw------- 1 postgres postgres 262144 Jul 26 01:10 04CB\n-rw------- 1 postgres postgres 262144 Jul 26 05:39 04CC\n-rw------- 1 postgres postgres 262144 Jul 28 00:01 04CD\n-rw------- 1 postgres postgres 237568 Jul 28 11:31 04CE\n\nAny idea why Postgres would be looking for a clog file name 0FC0 when the most recent filename is 04CE?\n\nAny help and suggestions for recovery are appreciated.\n\n--- Steve\n___________________________________________________________________________________\n\nSteven Rosenstein\nIT Architect/Developer | IBM Virtual Server Administration\nVoice/FAX: 845-689-2064 | Cell: 646-345-6978 | Tieline: 930-6001\nText Messaging: 6463456978 @ mobile.mycingular.com\nEmail: srosenst @ us.ibm.com\n\n\"Learn from the mistakes of others because you can't live long enough to\nmake them all yourself.\" -- Eleanor Roosevelt\n\n", "msg_date": "Thu, 28 Jul 2005 13:00:05 -0400", "msg_from": "Steven Rosenstein <[email protected]>", "msg_from_op": true, "msg_subject": "Unable to explain DB error" }, { "msg_contents": "Steven Rosenstein <[email protected]> writes:\n> Any idea why Postgres would be looking for a clog file name 0FC0 when the most recent filename is 04CE?\n\nCorrupt data --- specifically a bad transaction number in a tuple\nheader. (In practice, this is the first field looked at in which\nwe can readily detect an error, so you tend to see this symptom for\nany serious data corruption situation. The actual fault may well\nbe something like a corrupt page header causing the code to follow\n\"tuple pointers\" that point to garbage.)\n\nSee the PG list archives for past discussions of dealing with corrupt\ndata. pgsql-performance is pretty off-topic for this.\n\nBTW, PG 7.4 and up handle this sort of thing much more gracefully ...\nthey can't resurrect corrupt data of course, but they tend not to\npanic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Jul 2005 14:19:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unable to explain DB error " } ]
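Not a recovery recipe, but one common first step in the list discussions referred to above is to narrow down where the damage is by querying around the failing row until the error reproduces, then deciding what can be salvaged; a rough sketch against the table named above:

SELECT count(*) FROM vsa.dtbl_logged_event_20050318 WHERE id < 2689472;
SELECT count(*) FROM vsa.dtbl_logged_event_20050318 WHERE id > 2689472;
SELECT * FROM vsa.dtbl_logged_event_20050318 WHERE id BETWEEN 2689400 AND 2689471;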
[ { "msg_contents": "I ran into a situation today maintaining someone else's code where the\nsum time running 2 queries seems to be faster than 1. The original code\nwas split into two queries. I thought about joining them, but\nconsidering the intelligence of my predecessor, I wanted to test it. \n\nThe question is, which technique is really faster? Is there some hidden\nsetup cost I don't see with explain analyze?\n\nPostgres 7.4.7, Redhat AES 3\n\nEach query individually:\n\ntest=> explain analyze\ntest-> select * from order WHERE ord_batch='343B' AND ord_id='12-645';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Index Scan using order_pkey on order (cost=0.00..6.02 rows=1 width=486) (actual time=0.063..0.066 rows=1 loops=1)\n Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))\n Total runtime: 0.172 ms\n(3 rows)\n\n\ntest=> explain analyze\ntest-> select cli_name from client where cli_code='1837';\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------\n Index Scan using client_pkey on client (cost=0.00..5.98 rows=2 width=39) (actual time=0.043..0.047 rows=1 loops=1)\n Index Cond: (cli_code = '1837'::bpchar)\n Total runtime: 0.112 ms\n(3 rows)\n\nJoined:\n\ntest=> explain analyze\ntest-> SELECT cli_name,order.*\ntest-> FROM order\ntest-> JOIN client ON (ord_client = cli_code)\ntest-> WHERE ord_batch='343B' AND ord_id='12-645';\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..12.00 rows=2 width=525) (actual time=0.120..0.128 rows=1 loops=1)\n -> Index Scan using order_pkey on order (cost=0.00..6.02 rows=1 width=486) (actual time=0.064..0.066 rows=1 loops=1)\n Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))\n -> Index Scan using client_pkey on client (cost=0.00..5.98 rows=1 width=51) (actual time=0.023..0.026 rows=1 loops=1)\n Index Cond: (\"outer\".ord_client = client.cli_code)\n Total runtime: 0.328 ms\n(6 rows)\n\n\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Thu, 28 Jul 2005 16:04:25 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": true, "msg_subject": "Two queries are better than one?" }, { "msg_contents": "On Thu, Jul 28, 2005 at 04:04:25PM -0700, Karim Nassar wrote:\n> I ran into a situation today maintaining someone else's code where the\n> sum time running 2 queries seems to be faster than 1. The original code\n> was split into two queries. I thought about joining them, but\n> considering the intelligence of my predecessor, I wanted to test it. \n> \n> The question is, which technique is really faster? Is there some hidden\n> setup cost I don't see with explain analyze?\n\nTo see which technique will be faster in your application, time the\napplication code. The queries you show are taking fractions of a\nmillisecond; the communications overhead of executing two queries\nmight make that technique significantly slower than just the server\nexecution time that EXPLAIN ANALYZE shows.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Thu, 28 Jul 2005 19:53:22 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two queries are better than one?" 
}, { "msg_contents": "Karim Nassar wrote:\n> I ran into a situation today maintaining someone else's code where the\n> sum time running 2 queries seems to be faster than 1. The original code\n> was split into two queries. I thought about joining them, but\n> considering the intelligence of my predecessor, I wanted to test it.\n>\n> The question is, which technique is really faster? Is there some hidden\n> setup cost I don't see with explain analyze?\n\nYes, the time it takes your user code to parse the result, and create\nthe new query. :)\n\nIt does seem like you are taking an extra 0.1ms for the combined query,\nbut that means you don't have another round trip to the database. So\nthat would mean one less context switch, and you don't need to know what\nthe cli_code is before you can get the cli_name.\n\nI would guess the overhead is the time for postgres to parse out the\ntext, place another index query, and then combine the rows. It seems\nlike this shouldn't take 0.1ms, but then again, that isn't very long.\n\nAlso, did you run it *lots* of times to make sure that this isn't just\nnoise?\n\nJohn\n=:->\n\n>\n> Postgres 7.4.7, Redhat AES 3\n>\n> Each query individually:\n>\n> test=> explain analyze\n> test-> select * from order WHERE ord_batch='343B' AND ord_id='12-645';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Index Scan using order_pkey on order (cost=0.00..6.02 rows=1 width=486) (actual time=0.063..0.066 rows=1 loops=1)\n> Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))\n> Total runtime: 0.172 ms\n> (3 rows)\n>\n>\n> test=> explain analyze\n> test-> select cli_name from client where cli_code='1837';\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------\n> Index Scan using client_pkey on client (cost=0.00..5.98 rows=2 width=39) (actual time=0.043..0.047 rows=1 loops=1)\n> Index Cond: (cli_code = '1837'::bpchar)\n> Total runtime: 0.112 ms\n> (3 rows)\n>\n> Joined:\n>\n> test=> explain analyze\n> test-> SELECT cli_name,order.*\n> test-> FROM order\n> test-> JOIN client ON (ord_client = cli_code)\n> test-> WHERE ord_batch='343B' AND ord_id='12-645';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..12.00 rows=2 width=525) (actual time=0.120..0.128 rows=1 loops=1)\n> -> Index Scan using order_pkey on order (cost=0.00..6.02 rows=1 width=486) (actual time=0.064..0.066 rows=1 loops=1)\n> Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))\n> -> Index Scan using client_pkey on client (cost=0.00..5.98 rows=1 width=51) (actual time=0.023..0.026 rows=1 loops=1)\n> Index Cond: (\"outer\".ord_client = client.cli_code)\n> Total runtime: 0.328 ms\n> (6 rows)\n>\n>", "msg_date": "Thu, 28 Jul 2005 21:01:01 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two queries are better than one?" }, { "msg_contents": "On Thu, 2005-07-28 at 21:01 -0500, John A Meinel wrote:\n>\n> Also, did you run it *lots* of times to make sure that this isn't just\n> noise?\n\nIf a dozen is lots, yes. 
:-)\n\nIt was very consistent as I repeatedly ran it.\n\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Thu, 28 Jul 2005 19:02:13 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two queries are better than one?" }, { "msg_contents": "On 7/29/05, Karim Nassar <[email protected]> wrote:\n> \n> I ran into a situation today maintaining someone else's code where the\n> sum time running 2 queries seems to be faster than 1. The original code\n> was split into two queries. I thought about joining them, but\n> considering the intelligence of my predecessor, I wanted to test it.\n> \n> The question is, which technique is really faster? Is there some hidden\n> setup cost I don't see with explain analyze?\n> \n> Postgres 7.4.7, Redhat AES 3\n> \n> Each query individually:\n> \n> test=> explain analyze\n> test-> select * from order WHERE ord_batch='343B' AND ord_id='12-645';\n> QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------------------------\n> Index Scan using order_pkey on order (cost=0.00..6.02 rows=1 width=486) \n> (actual time=0.063..0.066 rows=1 loops=1)\n> Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))\n> Total runtime: 0.172 ms\n> (3 rows)\n> \n> \n> test=> explain analyze\n> test-> select cli_name from client where cli_code='1837';\n> QUERY PLAN\n> \n> ---------------------------------------------------------------------------------------------------------------------\n> Index Scan using client_pkey on client (cost=0.00..5.98 rows=2 width=39) \n> (actual time=0.043..0.047 rows=1 loops=1)\n> Index Cond: (cli_code = '1837'::bpchar)\n> Total runtime: 0.112 ms\n> (3 rows)\n> \n> Joined:\n> \n> test=> explain analyze\n> test-> SELECT cli_name,order.*\n> test-> FROM order\n> test-> JOIN client ON (ord_client = cli_code)\n> test-> WHERE ord_batch='343B' AND ord_id='12-645';\n\n\nwhere is the cli_code condition in the above query?\n\nQUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop (cost=0.00..12.00 rows=2 width=525) (actual time=0.120..0.128rows=1 loops=1)\n> -> Index Scan using order_pkey on order (cost=0.00..6.02 rows=1 width=486) \n> (actual time=0.064..0.066 rows=1 loops=1)\n> Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))\n> -> Index Scan using client_pkey on client (cost=0.00..5.98 rows=1 \n> width=51) (actual time=0.023..0.026 rows=1 loops=1)\n> Index Cond: (\"outer\".ord_client = client.cli_code)\n> Total runtime: 0.328 ms\n> (6 rows)\n> \n> \n> --\n> Karim Nassar <[email protected]>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n\n-- \nwith regards,\nS.Gnanavel\nSatyam Computer Services Ltd.\n\nOn 7/29/05, Karim Nassar <[email protected]> wrote:\nI ran into a situation today maintaining someone else's code where thesum time running 2 queries seems to be faster than 1. The original codewas split into two queries. I thought about joining them, butconsidering the intelligence of my predecessor, I wanted to test it.\nThe question is, which technique is really faster? 
Is there some hiddensetup cost I don't see with explain analyze?Postgres 7.4.7, Redhat AES 3Each query individually:test=> explain analyze\ntest-> select * from order  WHERE ord_batch='343B' AND ord_id='12-645';                                                        \nQUERY PLAN---------------------------------------------------------------------------------------------------------------------------- Index Scan using order_pkey on order  (cost=0.00..6.02 rows=1 width=486) (actual time=\n0.063..0.066 rows=1 loops=1)   Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar)) Total runtime: 0.172 ms(3 rows)test=> explain analyzetest->     select cli_name from client where cli_code='1837';\n                                                    \nQUERY PLAN--------------------------------------------------------------------------------------------------------------------- Index Scan using client_pkey on client  (cost=0.00..5.98 rows=2 width=39) (actual time=\n0.043..0.047 rows=1 loops=1)   Index Cond: (cli_code = '1837'::bpchar) Total runtime: 0.112 ms(3 rows)Joined:test=> explain analyzetest->    SELECT cli_name,order.*test->               FROM order\ntest->              \nJOIN client ON (ord_client = cli_code)test->              WHERE\nord_batch='343B' AND ord_id='12-645';\nwhere is the cli_code condition in the above query?\n                                                             QUERY\nPLAN---------------------------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..12.00 rows=2 width=525) (actual time=0.120..0.128 rows=1 loops=1)\n  \n->  Index Scan using order_pkey on\norder  (cost=0.00..6.02 rows=1 width=486) (actual\ntime=0.064..0.066 rows=1 loops=1)         Index Cond: ((ord_batch = '343B'::bpchar) AND (ord_id = '12-645'::bpchar))  \n->  Index Scan using client_pkey on\nclient  (cost=0.00..5.98 rows=1 width=51) (actual\ntime=0.023..0.026 rows=1 loops=1)         Index Cond: (\"outer\".ord_client = client.cli_code) Total runtime: 0.328 ms(6 rows)--Karim Nassar <[email protected]\n>---------------------------(end of broadcast)---------------------------TIP 6: explain analyze is your friend-- with regards,S.GnanavelSatyam Computer Services Ltd.", "msg_date": "Fri, 29 Jul 2005 09:41:31 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two queries are better than one?" }, { "msg_contents": "On Fri, 2005-07-29 at 09:41 +0530, Gnanavel S wrote:\n\n> \n> Joined:\n> \n> test=> explain analyze\n> test-> SELECT cli_name,order.*\n> test-> FROM order \n> test-> JOIN client ON (ord_client = cli_code)\n> test-> WHERE ord_batch='343B' AND\n> ord_id='12-645';\n> \n> where is the cli_code condition in the above query?\n\nI don't understand the question. ord_client is the client code, and\ncli_code is the client code, for their respective tables. batch/id is\nunique, so there is only one record from order, and only one client to\nassociate.\n\nClearer?\n\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Thu, 28 Jul 2005 21:21:38 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two queries are better than one?" 
}, { "msg_contents": "On 7/29/05, Karim Nassar <[email protected]> wrote:\n> \n> On Fri, 2005-07-29 at 09:41 +0530, Gnanavel S wrote:\n> \n> >\n> > Joined:\n> >\n> > test=> explain analyze\n> > test-> SELECT cli_name,order.*\n> > test-> FROM order\n> > test-> JOIN client ON (ord_client = cli_code)\n> > test-> WHERE ord_batch='343B' AND\n> > ord_id='12-645';\n> >\n> > where is the cli_code condition in the above query?\n> \n> I don't understand the question. ord_client is the client code, and\n> cli_code is the client code, for their respective tables. batch/id is\n> unique, so there is only one record from order, and only one client to\n> associate.\n> \n> Clearer?\n\n\nok.\n\nReason might be comparing with a literal value (previous case) is cheaper \nthan comparing with column(as it has to be evaluated). But with the previous \ncase getting and assigning the cli_code in the application and executing in \ndb will be time consuming as it includes IPC cost.\n\n--\n> Karim Nassar <[email protected]>\n> \n> \n\n\n-- \nwith regards,\nS.Gnanavel\nSatyam Computer Services Ltd.\n\nOn 7/29/05, Karim Nassar <[email protected]> wrote:\nOn Fri, 2005-07-29 at 09:41 +0530, Gnanavel S wrote:>>         Joined:>>         test=> explain analyze>         test->    SELECT cli_name,order.*>        \ntest->              \nFROM order>        \ntest->              \nJOIN client ON (ord_client = cli_code)>        \ntest->              WHERE\nord_batch='343B' AND>         ord_id='12-645';>> where is the cli_code condition in the above query?I don't understand the question. ord_client is the client code, andcli_code is the client code, for their respective tables. batch/id is\nunique, so there is only one record from order, and only one client toassociate.Clearer?\nok.\n\n Reason might be comparing with a literal value (previous case) is\ncheaper than comparing with column(as it has to be evaluated). \nBut with the previous case getting and assigning the cli_code in the\napplication and executing in db will be time consuming as it includes\nIPC cost.\n\n--Karim Nassar <[email protected]\n>-- with regards,S.GnanavelSatyam Computer Services Ltd.", "msg_date": "Fri, 29 Jul 2005 10:08:49 +0530", "msg_from": "Gnanavel S <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two queries are better than one?" } ]
[ { "msg_contents": "work_mem = 131072 # min 64, size in KB\nshared_buffers = 16000 # min 16, at least max_connections*2, 8KB each\ncheckpoint_segments = 128 # in logfile segments, min 1, 16MB each\neffective_cache_size = 750000 # typically 8KB each\nfsync=false # turns forced synchronization on or off\n \n------------------------------------------\nOn Bizgres (0_7_2) running on a 2GHz Opteron:\n------------------------------------------\n[llonergan@stinger4 bayesBenchmark]$ ./test.sh\n\nreal 0m38.348s\nuser 0m1.422s\nsys 0m1.870s\n \n------------------------------------------\nOn a 2.4GHz AMD64:\n------------------------------------------\n[llonergan@kite15 bayesBenchmark]$ ./test.sh\n\nreal 0m35.497s\nuser 0m2.250s\nsys 0m0.470s\n \nNow we turn fsync=true:\n \n------------------------------------------\nOn a 2.4GHz AMD64:\n------------------------------------------\n[llonergan@kite15 bayesBenchmark]$ ./test.sh\n\nreal 2m7.368s\nuser 0m2.560s\nsys 0m0.750s\n \nI guess we see the real culprit here. Anyone surprised it's the WAL?\n \n- Luke\n\n________________________________\n\nFrom: [email protected] on behalf of Andrew McMillan\nSent: Thu 7/28/2005 10:50 PM\nTo: Matthew Schumacher\nCc: [email protected]\nSubject: Re: [PERFORM] Performance problems testing with Spamassassin 3.1.0\n\n\n\nOn Thu, 2005-07-28 at 16:13 -0800, Matthew Schumacher wrote:\n>\n> Ok, I finally got some test data together so that others can test\n> without installing SA.\n>\n> The schema and test dataset is over at\n> http://www.aptalaska.net/~matt.s/bayes/bayesBenchmark.tar.gz\n>\n> I have a pretty fast machine with a tuned postgres and it takes it about\n> 2 minutes 30 seconds to load the test data. Since the test data is the\n> bayes information on 616 spam messages than comes out to be about 250ms\n> per message. While that is doable, it does add quite a bit of overhead\n> to the email system.\n\nOn my laptop this takes:\n\nreal 1m33.758s\nuser 0m4.285s\nsys 0m1.181s\n\nOne interesting effect is the data in bayes_vars has a huge number of\nupdates and needs vacuum _frequently_. After the run a vacuum full\ncompacts it down from 461 pages to 1 page.\n\nRegards,\n Andrew.\n\n-------------------------------------------------------------------------\nAndrew @ Catalyst .Net .NZ Ltd, PO Box 11-053, Manners St, Wellington\nWEB: http://catalyst.net.nz/ PHYS: Level 2, 150-154 Willis St\nDDI: +64(4)803-2201 MOB: +64(272)DEBIAN OFFICE: +64(4)499-2267\n I don't do it for the money.\n -- Donald Trump, Art of the Deal\n\n-------------------------------------------------------------------------\n\n\n\n\n", "msg_date": "Fri, 29 Jul 2005 03:01:07 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "On Fri, Jul 29, 2005 at 03:01:07AM -0400, Luke Lonergan wrote:\n\n> I guess we see the real culprit here. Anyone surprised it's the WAL?\n\nSo what? Are you planning to suggest people to turn fsync=false?\n\nI just had a person lose 3 days of data on some tables because of that,\neven when checkpoints were 5 minutes apart. With fsync off, there's no\nwork _at all_ going on, not just the WAL -- heap/index file fsync at\ncheckpoint is also skipped. 
This is no good.\n\n-- \nAlvaro Herrera (<alvherre[a]alvh.no-ip.org>)\n\"In a specialized industrial society, it would be a disaster\nto have kids running around loose.\" (Paul Graham)\n", "msg_date": "Fri, 29 Jul 2005 09:23:19 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "\"Luke Lonergan\" <[email protected]> writes:\n> I guess we see the real culprit here. Anyone surprised it's the WAL?\n\nYou have not proved that at all.\n\nI haven't had time to look at Matthew's problem, but someone upthread\nimplied that it was doing a separate transaction for each word. If so,\ncollapsing that to something more reasonable (say one xact per message)\nwould probably help a great deal.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 29 Jul 2005 10:12:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin " }, { "msg_contents": "Luke,\n\n> work_mem = 131072 # min 64, size in KB\n\nIncidentally, this is much too high for an OLTP application, although I don't \nthink this would have affected the test.\n\n> shared_buffers = 16000 # min 16, at least max_connections*2, 8KB\n> each checkpoint_segments = 128 # in logfile segments, min 1, 16MB\n> each effective_cache_size = 750000 # typically 8KB each\n> fsync=false # turns forced synchronization on or off\n\nTry changing:\nwal_buffers = 256\n\nand try Bruce's stop full_page_writes patch.\n\n> I guess we see the real culprit here. Anyone surprised it's the WAL?\n\nNope. On high-end OLTP stuff, it's crucial that the WAL have its own \ndedicated disk resource.\n\nAlso, running a complex stored procedure for each and every word in each \ne-mail is rather deadly ... with the e-mail traffic our server at Globix \nreceives, for example, that would amount to running it about 1,000 times a \nminute. It would be far better to batch this, somehow, maybe using temp \ntables.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 29 Jul 2005 09:47:03 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "Alvaro,\n\nOn 7/29/05 6:23 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n\n> On Fri, Jul 29, 2005 at 03:01:07AM -0400, Luke Lonergan wrote:\n> \n>> I guess we see the real culprit here. Anyone surprised it's the WAL?\n> \n> So what? Are you planning to suggest people to turn fsync=false?\n\nThat's not the conclusion I made, no. I was pointing out that fsync has a\nHUGE impact on his problem, which implies something to do with the I/O sync\noperations. Black box bottleneck hunt approach #12.\n \n> With fsync off, there's no\n> work _at all_ going on, not just the WAL -- heap/index file fsync at\n> checkpoint is also skipped. 
This is no good.\n\nOK - so that's what Tom is pointing out, that fsync impacts more than WAL.\n\nHowever, finding out that fsync/no fsync makes a 400% difference in speed\nfor this problem is interesting and relevant, no?\n\n- Luke\n\n\n", "msg_date": "Fri, 29 Jul 2005 10:11:10 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "Tom,\n\nOn 7/29/05 7:12 AM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Luke Lonergan\" <[email protected]> writes:\n>> I guess we see the real culprit here. Anyone surprised it's the WAL?\n> \n> You have not proved that at all.\n\nAs Alvaro pointed out, fsync has impact on more than WAL, so good point.\nInteresting that fsync has such a huge impact on this situation though.\n\n- Luke\n\n\n", "msg_date": "Fri, 29 Jul 2005 10:13:41 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "Ok,\n\nHere is something new, when I take my data.sql file and add a begin and\ncommit at the top and bottom, the benchmark is a LOT slower?\n\nMy understanding is that it should be much faster because fsync isn't\ncalled until the commit instead of on every sql command.\n\nI must be missing something here.\n\nschu\n", "msg_date": "Fri, 29 Jul 2005 11:16:52 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "On Fri, 2005-07-29 at 09:47 -0700, Josh Berkus wrote:\n> Try changing:\n> wal_buffers = 256\n> \n> and try Bruce's stop full_page_writes patch.\n> \n> > I guess we see the real culprit here. Anyone surprised it's the WAL?\n> \n> Nope. On high-end OLTP stuff, it's crucial that the WAL have its own \n> dedicated disk resource.\n> \n> Also, running a complex stored procedure for each and every word in each \n> e-mail is rather deadly ... with the e-mail traffic our server at Globix \n> receives, for example, that would amount to running it about 1,000 times a \n> minute. \n\nIs this a real-world fix? Seems to me that Spam Assassin runs on a\nplethora of mail servers, and optimizing his/her/my/your pg config\ndoesn't solve the root problem: there are thousands of (seemingly)\nhigh-overhead function calls being executed. \n\n\n> It would be far better to batch this, somehow, maybe using temp \n> tables.\n\nAgreed. On my G4 laptop running the default configured Ubuntu Linux\npostgresql 7.4.7 package, it took 43 minutes for Matthew's script to run\n(I ran it twice just to be sure). In my spare time over the last day, I\ncreated a brute force perl script that took under 6 minutes. 
Am I on to\nsomething, or did I just optimize for *my* system?\n\nhttp://ccl.cens.nau.edu/~kan4/files/k-bayesBenchmark.tar.gz\n\nkan4@slap-happy:~/k-bayesBenchmark$ time ./test.pl\n<-- snip db creation stuff -->\n17:18:44 -- START\n17:19:37 -- AFTER TEMP LOAD : loaded 120596 records\n17:19:46 -- AFTER bayes_token INSERT : inserted 49359 new records into bayes_token\n17:19:50 -- AFTER bayes_vars UPDATE : updated 1 records\n17:23:37 -- AFTER bayes_token UPDATE : updated 47537 records\nDONE\n\nreal 5m4.551s\nuser 0m29.442s\nsys 0m3.925s\n\n\nI am sure someone smarter could optimize further.\n\nAnyone with a super-spifty machine wanna see if there is an improvement\nhere?\n\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Fri, 29 Jul 2005 17:39:41 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "Karim Nassar wrote:\n> \n> kan4@slap-happy:~/k-bayesBenchmark$ time ./test.pl\n> <-- snip db creation stuff -->\n> 17:18:44 -- START\n> 17:19:37 -- AFTER TEMP LOAD : loaded 120596 records\n> 17:19:46 -- AFTER bayes_token INSERT : inserted 49359 new records into bayes_token\n> 17:19:50 -- AFTER bayes_vars UPDATE : updated 1 records\n> 17:23:37 -- AFTER bayes_token UPDATE : updated 47537 records\n> DONE\n> \n> real 5m4.551s\n> user 0m29.442s\n> sys 0m3.925s\n> \n> \n> I am sure someone smarter could optimize further.\n> \n> Anyone with a super-spifty machine wanna see if there is an improvement\n> here?\n> \n\nThere is a great improvement in loading the data. While I didn't load\nit on my server, my test box shows significant gains.\n\nIt seems that the only thing your script does different is separate the\nupdates from inserts so that an expensive update isn't called when we\nwant to insert. 
The other major difference is the 'IN' and 'MOT IN'\nsyntax which looks to be much faster than trying everything as an update\nbefore inserting.\n\nWhile these optimizations seem to make a huge difference in loading the\ntoken data, the real life scenario is a little different.\n\nYou see, the database keeps track of the number of times each token was\nfound in ham or spam, so that when we see a new message we can parse it\ninto tokens then compare with the database to see how likely the\nmessages is spam based on the statistics of tokens we have already\nlearned on.\n\nSince we would want to commit this data after each message, the number\nof tokens processed at one time would probably only be a few hundred,\nmost of which are probably updates after we have trained on a few\nthousand emails.\n\nI apologize if my crude benchmark was misleading, it was meant to\nsimulate the sheer number of inserts/updates the database may go though\nin an environment that didn't require people to load spamassassin and\nstart training on spam.\n\nI'll do some more testing on Monday, perhaps grouping even 200 tokens at\na time using your method will yield significant gains, but probably not\nas dramatic as it does using my loading benchmark.\n\nI post more when I have a chance to look at this in more depth.\n\nThanks,\nschu\n", "msg_date": "Sat, 30 Jul 2005 00:46:27 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" }, { "msg_contents": "On Sat, 2005-07-30 at 00:46 -0800, Matthew Schumacher wrote:\n\n> I'll do some more testing on Monday, perhaps grouping even 200 tokens at\n> a time using your method will yield significant gains, but probably not\n> as dramatic as it does using my loading benchmark.\n\nIn that case, some of the clauses could be simplified further since we\nknow that we are dealing with only one user. I don't know what that will\nget us, since postgres is so damn clever.\n\nI suspect that the aggregate functions will be more efficient when you\ndo this, since the temp table will be much smaller, but I am only\nguessing at this point. \n\nIf you need to support a massive initial data load, further time savings\nare to be had by doing COPY instead of 126,000 inserts.\n\nPlease do keep us updated. \n\nThanking all the gods and/or developers for spamassassin,\n-- \nKarim Nassar <[email protected]>\n\n", "msg_date": "Sat, 30 Jul 2005 03:34:00 -0700", "msg_from": "Karim Nassar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems testing with Spamassassin" } ]
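A rough SQL sketch of the batch pattern being discussed above, assuming a bayes_token layout like the one in the benchmark schema (a user id plus token and spam/ham counters); the column names and the literal user id 1 are illustrative only:

BEGIN;
CREATE TEMP TABLE bayes_token_tmp (token text, spam_count int, ham_count int);
-- bulk-load the few hundred tokens parsed from one message (COPY would also work)
INSERT INTO bayes_token_tmp VALUES ('token_a', 1, 0);
INSERT INTO bayes_token_tmp VALUES ('token_b', 0, 1);

-- update only the tokens we have already seen for this user
UPDATE bayes_token
   SET spam_count = bayes_token.spam_count + t.spam_count,
       ham_count  = bayes_token.ham_count  + t.ham_count
  FROM bayes_token_tmp t
 WHERE bayes_token.id = 1
   AND bayes_token.token = t.token;

-- insert only the tokens that are new for this user
INSERT INTO bayes_token (id, token, spam_count, ham_count)
SELECT 1, t.token, t.spam_count, t.ham_count
  FROM bayes_token_tmp t
 WHERE t.token NOT IN (SELECT token FROM bayes_token WHERE id = 1);

DROP TABLE bayes_token_tmp;
COMMIT;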
[ { "msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 1797\nLogged by: Magno Leite\nEmail address: [email protected]\nPostgreSQL version: 8.0\nOperating system: Windows XP Professional Edition\nDescription: Problem using Limit in a function, seqscan\nDetails: \n\nI looked for about this problem in BUG REPORT but I can't find. This is my\nproblem, when I try to use limit in a function, the Postgre doesn't use my\nindex, then it use sequencial scan. What is the problem ?\n", "msg_date": "Fri, 29 Jul 2005 13:52:45 +0100 (BST)", "msg_from": "\"Magno Leite\" <[email protected]>", "msg_from_op": true, "msg_subject": "BUG #1797: Problem using Limit in a function, seqscan" }, { "msg_contents": "On Fri, Jul 29, 2005 at 01:52:45PM +0100, Magno Leite wrote:\n> I looked for about this problem in BUG REPORT but I can't find. This is my\n> problem, when I try to use limit in a function, the Postgre doesn't use my\n> index, then it use sequencial scan. What is the problem ?\n\nWithout more information we can only guess, but if you're using\nPL/pgSQL then a cached query plan might be responsible. Here's an\nexcerpt from the PREPARE documentation:\n\n In some situations, the query plan produced for a prepared\n statement will be inferior to the query plan that would have\n been chosen if the statement had been submitted and executed\n normally. This is because when the statement is planned and\n the planner attempts to determine the optimal query plan, the\n actual values of any parameters specified in the statement are\n unavailable. PostgreSQL collects statistics on the distribution\n of data in the table, and can use constant values in a statement\n to make guesses about the likely result of executing the\n statement. Since this data is unavailable when planning prepared\n statements with parameters, the chosen plan may be suboptimal.\n\nIf you'd like us to take a closer look, then please post a self-\ncontained example, i.e., all SQL statements that somebody could\nload into an empty database to reproduce the behavior you're seeing.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Fri, 29 Jul 2005 08:00:38 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1797: Problem using Limit in a function, seqscan" }, { "msg_contents": "On Fri, Jul 29, 2005 at 13:52:45 +0100,\n Magno Leite <[email protected]> wrote:\n> \n> Description: Problem using Limit in a function, seqscan\n> \n> I looked for about this problem in BUG REPORT but I can't find. This is my\n> problem, when I try to use limit in a function, the Postgre doesn't use my\n> index, then it use sequencial scan. What is the problem ?\n\nYou haven't described the problem well enough to allow us to help you and\nyou posted it to the wrong list. This should be discussed on the performance\nlist, not the bug list.\n\nIt would help if you showed us the query you are running and run it outside\nof the function with EXPLAIN ANALYSE and show us that output. Depending\non what that output shows, we may ask you other questions.\n", "msg_date": "Fri, 29 Jul 2005 09:06:42 -0500", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: BUG #1797: Problem using Limit in a function, seqscan" } ]
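The effect described in the reply above can be seen without writing a function by comparing a prepared statement against the same query with a literal; the table and column names below are illustrative only:

PREPARE q(integer) AS
    SELECT * FROM some_table WHERE some_col = $1 ORDER BY some_col LIMIT 40;
EXPLAIN EXECUTE q(123);                                   -- planned without knowing $1
EXPLAIN SELECT * FROM some_table
         WHERE some_col = 123 ORDER BY some_col LIMIT 40; -- planned with the literal
DEALLOCATE q;

Inside a PL/pgSQL function the first, cached kind of plan is what you get, which is why building the statement with EXECUTE (dynamic SQL) often brings the index scan back.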
[ { "msg_contents": "Hi,\n\ndoes anybody have expierence with this machine (4x 875 dual core Opteron \nCPUs)? We run RHEL 3.0, 32bit and under high load it is a drag. We \nmostly run memory demanding queries. Context switches are pretty much \naround 20.000 on the average, no cs spikes when we run many processes in \nparallel. Actually we only see two processes in running state! When \nthere are only a few processes running context switches go much higher. \nAt the moment we are much slower that with a 4way XEON box (DL580).\n\nWe are running 8.0.3 compiled with -mathlon flags.\n\nRegards,\n\nDirk\n", "msg_date": "Fri, 29 Jul 2005 18:45:12 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems on 4/8way Opteron (dualcore) HP DL585" }, { "msg_contents": "Dirk,\n\n> does anybody have expierence with this machine (4x 875 dual core Opteron\n> CPUs)?\n\nNope. I suspect that you may be the first person to report in on \ndual-cores. There may be special compile issues with dual-cores that \nwe've not yet encountered.\n\n> We run RHEL 3.0, 32bit and under high load it is a drag. We \n> mostly run memory demanding queries. Context switches are pretty much\n> around 20.000 on the average, no cs spikes when we run many processes in\n> parallel. Actually we only see two processes in running state! When\n> there are only a few processes running context switches go much higher.\n> At the moment we are much slower that with a 4way XEON box (DL580).\n\nUm, that was a bit incoherent. Are you seeing a CS storm or aren't you?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 29 Jul 2005 10:46:10 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore) HP DL585" }, { "msg_contents": "On Fri, 2005-07-29 at 10:46 -0700, Josh Berkus wrote:\n> Dirk,\n> \n> > does anybody have expierence with this machine (4x 875 dual core Opteron\n> > CPUs)?\n\nI'm using dual 275s without problems.\n\n> Nope. I suspect that you may be the first person to report in on \n> dual-cores. There may be special compile issues with dual-cores that \n> we've not yet encountered.\n\nDoubtful. However you could see improvements using recent Linux kernel\ncode. There have been some patches for optimizing scheduling and memory\nallocations.\n\nHowever, if you are running this machine in 32-bit mode, why did you\nbother paying $14,000 for your CPUs? You will get FAR better\nperformance in 64-bit mode. 64-bit mode will give you 30-50% better\nperformance on PostgreSQL loads, in my experience. Also, if I remember\ncorrectly, the 32-bit x86 kernel doesn't understand Opteron NUMA\ntopology, so you may be seeing poor memory allocation decisions.\n\n-jwb\n\n> > We run RHEL 3.0, 32bit and under high load it is a drag. We \n> > mostly run memory demanding queries. Context switches are pretty much\n> > around 20.000 on the average, no cs spikes when we run many processes in\n> > parallel. Actually we only see two processes in running state! When\n> > there are only a few processes running context switches go much higher.\n> > At the moment we are much slower that with a 4way XEON box (DL580).\n> \n> Um, that was a bit incoherent. Are you seeing a CS storm or aren't you?\n> \n", "msg_date": "Fri, 29 Jul 2005 11:55:42 -0700", "msg_from": "\"Jeffrey W. 
Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore) HP" }, { "msg_contents": "On 7/29/05 10:46 AM, \"Josh Berkus\" <[email protected]> wrote:\n>> does anybody have expierence with this machine (4x 875 dual core Opteron\n>> CPUs)?\n> \n> Nope. I suspect that you may be the first person to report in on\n> dual-cores. There may be special compile issues with dual-cores that\n> we've not yet encountered.\n\n\nThere was recently a discussion of similar types of problems on a couple of\nthe supercomputing lists, regarding surprisingly substandard performance\nfrom large dual-core opteron installations.\n\nThe problem as I remember it boiled down to the Linux kernel handling\nmemory/process management very badly on large dual core systems --\npathological NUMA behavior. However, this problem has apparently been fixed\nin Linux v2.6.12+, and using the more recent kernel on large dual core\nsystems generated *massive* performance improvements on these systems for\nthe individuals with this issue. Using the patched kernel, one gets the\nperformance most people were expecting.\n\nThe v2.6.12+ kernels are a bit new, but they contain a very important\nperformance patch for systems like the one above. It would definitely be\nworth testing if possible.\n\n\nJ. Andrew Rogers\n\n\n", "msg_date": "Fri, 29 Jul 2005 12:21:53 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore)" }, { "msg_contents": "I've been running 2x265's on FC4 64-bit (2.6.11-1+) and it's been \nrunning perfect. With NUMA enabled, it runs incrementally faster than \nNUMA off. Performance is definitely better than the 2x244s they replaced \n-- how much faster, I can't measure since I don't have the transaction \nvolume to compare to previous benchmarks. I do see more consistently low \nresponse times though, can run apache also on the server for faster HTML \ngeneration times and top seems to show in general twice as much CPU \npower idle on average (25% per 265 core versus 50% per 244.)\n\nI haven't investigated the 2.6.12+ kernel updates yet -- I probably will \ndo our development servers first to give it a test.\n\n\n> The problem as I remember it boiled down to the Linux kernel handling\n> memory/process management very badly on large dual core systems --\n> pathological NUMA behavior. However, this problem has apparently been fixed\n> in Linux v2.6.12+, and using the more recent kernel on large dual core\n> systems generated *massive* performance improvements on these systems for\n> the individuals with this issue. Using the patched kernel, one gets the\n> performance most people were expecting.\n", "msg_date": "Sat, 30 Jul 2005 00:57:38 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore)" }, { "msg_contents": "Hi Jeff,\n\nwhich box are you running precisely and which OS/kernel?\n\nWe need to run 32bit because we need failover to 32 bit XEON system \n(DL580). If this does not work out we probably need to switch to 64 bit \n(dump/restore) and run a nother 64bit failover box too.\n\nRegards,\n\nDirk\n\n\n\nJeffrey W. Baker wrote:\n> On Fri, 2005-07-29 at 10:46 -0700, Josh Berkus wrote:\n> \n>>Dirk,\n>>\n>>\n>>>does anybody have expierence with this machine (4x 875 dual core Opteron\n>>>CPUs)?\n> \n> \n> I'm using dual 275s without problems.\n> \n> \n>>Nope. 
I suspect that you may be the first person to report in on \n>>dual-cores. There may be special compile issues with dual-cores that \n>>we've not yet encountered.\n> \n> \n> Doubtful. However you could see improvements using recent Linux kernel\n> code. There have been some patches for optimizing scheduling and memory\n> allocations.\n> \n> However, if you are running this machine in 32-bit mode, why did you\n> bother paying $14,000 for your CPUs? You will get FAR better\n> performance in 64-bit mode. 64-bit mode will give you 30-50% better\n> performance on PostgreSQL loads, in my experience. Also, if I remember\n> correctly, the 32-bit x86 kernel doesn't understand Opteron NUMA\n> topology, so you may be seeing poor memory allocation decisions.\n> \n> -jwb\n> \n> \n>>>We run RHEL 3.0, 32bit and under high load it is a drag. We \n>>>mostly run memory demanding queries. Context switches are pretty much\n>>>around 20.000 on the average, no cs spikes when we run many processes in\n>>>parallel. Actually we only see two processes in running state! When\n>>>there are only a few processes running context switches go much higher.\n>>>At the moment we are much slower that with a 4way XEON box (DL580).\n>>\n>>Um, that was a bit incoherent. Are you seeing a CS storm or aren't you?\n>>\n\n", "msg_date": "Sun, 31 Jul 2005 12:11:02 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore) HP" }, { "msg_contents": "Anybody knows if RedHat is already supporting this patch on an \nenterprise version?\n\nRegards,\n\nDirk\n\n\n\nJ. Andrew Rogers wrote:\n> On 7/29/05 10:46 AM, \"Josh Berkus\" <[email protected]> wrote:\n> \n>>>does anybody have expierence with this machine (4x 875 dual core Opteron\n>>>CPUs)?\n>>\n>>Nope. I suspect that you may be the first person to report in on\n>>dual-cores. There may be special compile issues with dual-cores that\n>>we've not yet encountered.\n> \n> \n> \n> There was recently a discussion of similar types of problems on a couple of\n> the supercomputing lists, regarding surprisingly substandard performance\n> from large dual-core opteron installations.\n> \n> The problem as I remember it boiled down to the Linux kernel handling\n> memory/process management very badly on large dual core systems --\n> pathological NUMA behavior. However, this problem has apparently been fixed\n> in Linux v2.6.12+, and using the more recent kernel on large dual core\n> systems generated *massive* performance improvements on these systems for\n> the individuals with this issue. Using the patched kernel, one gets the\n> performance most people were expecting.\n> \n> The v2.6.12+ kernels are a bit new, but they contain a very important\n> performance patch for systems like the one above. It would definitely be\n> worth testing if possible.\n> \n> \n> J. Andrew Rogers\n> \n> \n\n", "msg_date": "Sun, 31 Jul 2005 12:12:32 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore) HP" }, { "msg_contents": "On 7/30/05 12:57 AM, \"William Yu\" <[email protected]> wrote:\n> I haven't investigated the 2.6.12+ kernel updates yet -- I probably will\n> do our development servers first to give it a test.\n\n\nThe kernel updates make the NUMA code dual-core aware, which apparently\nmakes a big difference in some cases but not in others. 
It makes some\nsense, since multi-processor multi-core machines will have two different\ntypes of non-locality instead of just one that need to be managed. Prior to\nthe v2.6.12 patches, a dual-core dual-proc machine was viewed as a quad-proc\nmachine.\n\nThe closest thing to a supported v2.6.12 kernel that I know of is FC4, which\nis not really supported in the enterprise sense of course.\n\n\nJ. Andrew Rogers\n\n\n", "msg_date": "Sun, 31 Jul 2005 08:29:15 -0700", "msg_from": "\"J. Andrew Rogers\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore)" }, { "msg_contents": "A 4xDC would be far more sensitive to poor NUMA code than 2xDC so I'm \nnot surprised I don't see performance issues on our 2xDC w/ < 2.6.12.\n\n\nJ. Andrew Rogers wrote:\n> On 7/30/05 12:57 AM, \"William Yu\" <[email protected]> wrote:\n> \n>>I haven't investigated the 2.6.12+ kernel updates yet -- I probably will\n>>do our development servers first to give it a test.\n> \n> \n> \n> The kernel updates make the NUMA code dual-core aware, which apparently\n> makes a big difference in some cases but not in others. It makes some\n> sense, since multi-processor multi-core machines will have two different\n> types of non-locality instead of just one that need to be managed. Prior to\n> the v2.6.12 patches, a dual-core dual-proc machine was viewed as a quad-proc\n> machine.\n> \n> The closest thing to a supported v2.6.12 kernel that I know of is FC4, which\n> is not really supported in the enterprise sense of course.\n> \n> \n> J. Andrew Rogers\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n", "msg_date": "Sun, 31 Jul 2005 14:11:58 -0700", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4/8way Opteron (dualcore)" } ]
[ { "msg_contents": "Hi,\n\nI'm happily using ltree since a long time, but I'm recently having \ntroubles because of ltree <@ operator selectivity that is causing very \nbad planner choices.\n\nAn example of slow query is:\n\nSELECT\n batch_id,\n b.t_stamp AS t_stamp,\n objects,\n CASE WHEN sent IS NULL THEN gw_batch_sent(b.batch_id) ELSE sent END \nAS sent\nFROM\n gw_users u JOIN gw_batches b USING (u_id)\nWHERE\n u.tree <@ '1041' AND\n b.t_stamp >= 'today'::date - '7 days'::interval AND\n b.t_stamp < 'today'\nORDER BY\n t_stamp DESC;\n\nI've posted the EXPLAIN ANALYZE output here for better readability: \nhttp://rafb.net/paste/results/NrCDMs50.html\n\nAs you may see, disabling nested loops makes the query lightning fast.\n\n\nThe problem is caused by the fact that most of the records of gw_users \nmatch the \"u.tree <@ '1041'\" condition:\n\nSELECT COUNT(*) FROM gw_users;\n count\n-------\n 5012\n\nSELECT COUNT(*) FROM gw_users WHERE tree <@ '1041';\n count\n-------\n 4684\n\nIs there anything I can do apart from disabling nested loops?\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n", "msg_date": "Fri, 29 Jul 2005 18:50:44 +0200", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": true, "msg_subject": "ltree <@ operator selectivity causes very slow plan" } ]
[ { "msg_contents": "Tom,\n\n>> I've attached it here, sorry to the list owner for the patch inclusion /\n>> off-topic.\n> \n> This patch appears to reverse out the most recent committed changes in\n> copy.c.\n\nWhich changes do you refer to? I thought I accommodated all the recent\nchanges (I recall some changes to the tupletable/tupleslot interface, HEADER\nin cvs, and hex escapes and maybe one or 2 more). What did I miss?\n\nThanks.\nAlon.\n\n\n", "msg_date": "Mon, 01 Aug 2005 15:48:35 -0400", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] COPY FROM performance improvements" }, { "msg_contents": "\"Alon Goldshuv\" <[email protected]> writes:\n>> This patch appears to reverse out the most recent committed changes in\n>> copy.c.\n\n> Which changes do you refer to? I thought I accommodated all the recent\n> changes (I recall some changes to the tupletable/tupleslot interface, HEADER\n> in cvs, and hex escapes and maybe one or 2 more). What did I miss?\n\nThe latest touch of copy.c, namely this patch:\n\n2005-07-10 17:13 tgl\n\n\t* doc/src/sgml/ref/create_type.sgml, src/backend/commands/copy.c,\n\tsrc/backend/commands/typecmds.c, src/backend/tcop/fastpath.c,\n\tsrc/backend/tcop/postgres.c, src/backend/utils/adt/arrayfuncs.c,\n\tsrc/backend/utils/adt/date.c, src/backend/utils/adt/numeric.c,\n\tsrc/backend/utils/adt/rowtypes.c,\n\tsrc/backend/utils/adt/timestamp.c, src/backend/utils/adt/varbit.c,\n\tsrc/backend/utils/adt/varchar.c, src/backend/utils/adt/varlena.c,\n\tsrc/backend/utils/mb/mbutils.c, src/include/catalog/catversion.h,\n\tsrc/include/catalog/pg_proc.h,\n\tsrc/test/regress/expected/type_sanity.out,\n\tsrc/test/regress/sql/type_sanity.sql: Change typreceive function\n\tAPI so that receive functions get the same optional arguments as\n\ttext input functions, ie, typioparam OID and atttypmod. Make all\n\tthe datatypes that use typmod enforce it the same way in typreceive\n\tas they do in typinput. This fixes a problem with failure to\n\tenforce length restrictions during COPY FROM BINARY.\n\nIt was rather obvious, given that the first chunk of the patch backed up\nthe file's CVS version stamp from 1.247 to 1.246 :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Aug 2005 19:51:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] COPY FROM performance improvements " }, { "msg_contents": "Tom,\n\nThanks for pointing it out. I made the small required modifications to match\ncopy.c version 1.247 and sent it to -patches list. New patch is V16.\n\nAlon.\n\n\nOn 8/1/05 7:51 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> \"Alon Goldshuv\" <[email protected]> writes:\n>>> This patch appears to reverse out the most recent committed changes in\n>>> copy.c.\n> \n>> Which changes do you refer to? I thought I accommodated all the recent\n>> changes (I recall some changes to the tupletable/tupleslot interface, HEADER\n>> in cvs, and hex escapes and maybe one or 2 more). 
What did I miss?\n> \n> The latest touch of copy.c, namely this patch:\n> \n> 2005-07-10 17:13 tgl\n> \n> * doc/src/sgml/ref/create_type.sgml, src/backend/commands/copy.c,\n> src/backend/commands/typecmds.c, src/backend/tcop/fastpath.c,\n> src/backend/tcop/postgres.c, src/backend/utils/adt/arrayfuncs.c,\n> src/backend/utils/adt/date.c, src/backend/utils/adt/numeric.c,\n> src/backend/utils/adt/rowtypes.c,\n> src/backend/utils/adt/timestamp.c, src/backend/utils/adt/varbit.c,\n> src/backend/utils/adt/varchar.c, src/backend/utils/adt/varlena.c,\n> src/backend/utils/mb/mbutils.c, src/include/catalog/catversion.h,\n> src/include/catalog/pg_proc.h,\n> src/test/regress/expected/type_sanity.out,\n> src/test/regress/sql/type_sanity.sql: Change typreceive function\n> API so that receive functions get the same optional arguments as\n> text input functions, ie, typioparam OID and atttypmod. Make all\n> the datatypes that use typmod enforce it the same way in typreceive\n> as they do in typinput. This fixes a problem with failure to\n> enforce length restrictions during COPY FROM BINARY.\n> \n> It was rather obvious, given that the first chunk of the patch backed up\n> the file's CVS version stamp from 1.247 to 1.246 :-(\n> \n> regards, tom lane\n> \n\n\n", "msg_date": "Tue, 02 Aug 2005 11:03:44 -0400", "msg_from": "\"Alon Goldshuv\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] COPY FROM performance improvements" } ]
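For context, the bulk-load path this patch is aimed at is normally exercised with COPY rather than individual INSERTs; a rough sketch of both the server-side and client-side forms follows. The table name, column list and file paths are placeholders, and CSV is just one of the supported formats.

-- Server-side COPY: the file is read by the backend, so the path must be
-- visible on the server (and, in these releases, requires superuser rights).
COPY tokens (token, spam_count, ham_count)
    FROM '/path/on/server/tokens.csv' WITH CSV;

-- Client-side equivalent from psql, reading a file local to the client:
-- \copy tokens (token, spam_count, ham_count) from 'tokens.csv' with csv

Timing the same data load with \timing in psql, once as a stream of INSERTs and once as a single COPY, is usually the quickest way to see the difference this kind of patch is chasing.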
[ { "msg_contents": "Hi all,\n\nWe're using 8.0.3 and we're seeing a problem where the planner is choosing a \nseq scan and hash join over an index scan. If I set enable_hashjoin to off, \nthen I get the plan I'm expecting and the query runs a lot faster. I've also \ntried lowering the random page cost (even to 1) but the planner still \nchooses to use the hash join.\n\nDoes anyone have any thoughts/suggestions? I saw that there was a thread \nrecently in which the planner wasn't correctly estimating the cost for \nqueries using LIMIT. Is it possible that something similar is happening here \n(perhaps because of the sort) and that the patch Tom proposed would fix it?\n\nThanks. Here are the various queries and plans:\n\nNormal settings\n------------------------\nexplain analyze\nselect\nc.sourceId,\nc.targetId,\nabs(c.tr <http://c.tr> - c.sr <http://c.sr>) as xmy,\n(c.sr <http://c.sr> - s.ar <http://s.ar>) * (c.tr <http://c.tr> -\nt.ar<http://t.ar>)\nas xy,\n(c.sr <http://c.sr> - s.ar <http://s.ar>) * (c.sr <http://c.sr> -\ns.ar<http://s.ar>)\nas x2,\n(c.tr <http://c.tr> - t.ar <http://t.ar>) * (c.tr <http://c.tr> -\nt.ar<http://t.ar>)\nas y2\nfrom\ncandidates617004 c,\nlte_user s,\nlte_user t\nwhere\nc.sourceId = s.user_id\nand c.targetId = t.user_id\norder by\nc.sourceId,\nc.targetId;\n\nQUERY PLAN\nSort (cost=13430.57..13439.24 rows=3467 width=48) (actual time=\n1390.000..1390.000 rows=3467 loops=1)\nSort Key: c.sourceid, c.targetid\n-> Merge Join (cost=9912.07..13226.72 rows=3467 width=48) (actual time=\n1344.000..1375.000 rows=3467 loops=1)\nMerge Cond: (\"outer\".user_id = \"inner\".sourceid)\n-> Index Scan using lte_user_pkey on lte_user s\n(cost=0.00..16837.71rows=279395 width=16) (actual time=\n0.000..95.000 rows=50034 loops=1)\n-> Sort (cost=9912.07..9920.73 rows=3467 width=40) (actual time=\n1156.000..1156.000 rows=3467 loops=1)\nSort Key: c.sourceid\n-> Hash Join (cost=8710.44..9708.21 rows=3467 width=40) (actual time=\n1125.000..1156.000 rows=3467 loops=1)\nHash Cond: (\"outer\".targetid = \"inner\".user_id)\n-> Seq Scan on candidates617004 c (cost=0.00..67.67 rows=3467 width=32) \n(actual time=0.000..0.000 rows=3467 loops=1)\n-> Hash (cost=8011.95..8011.95 rows=279395 width=16) (actual time=\n1125.000..1125.000 rows=0 loops=1)\n-> Seq Scan on lte_user t (cost=0.00..8011.95 rows=279395 width=16) (actual \ntime=0.000..670.000 rows=279395 loops=1)\nTotal runtime: 1406.000 ms\n\nenable_hashjoin disabled\n----------------------------------------\nQUERY PLAN\nSort (cost=14355.37..14364.03 rows=3467 width=48) (actual time=\n391.000..391.000 rows=3467 loops=1)\nSort Key: c.sourceid, c.targetid\n-> Nested Loop (cost=271.52..14151.51 rows=3467 width=48) (actual time=\n203.000..359.000 rows=3467 loops=1)\n-> Merge Join (cost=271.52..3490.83 rows=3467 width=40) (actual time=\n203.000..218.000 rows=3467 loops=1)\nMerge Cond: (\"outer\".user_id = \"inner\".sourceid)\n-> Index Scan using lte_user_pkey on lte_user s\n(cost=0.00..16837.71rows=279395 width=16) (actual time=\n0.000..126.000 rows=50034 loops=1)\n-> Sort (cost=271.52..280.19 rows=3467 width=32) (actual\ntime=15.000..30.000rows=3467 loops=1)\nSort Key: c.sourceid\n-> Seq Scan on candidates617004 c (cost=0.00..67.67 rows=3467 width=32) \n(actual time=0.000..0.000 rows=3467 loops=1)\n-> Index Scan using lte_user_pkey on lte_user t (cost=0.00..3.03 rows=1 \nwidth=16) (actual time=0.031..0.036 rows=1 loops=3467)\nIndex Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 406.000 ms\n\nrandom_page_cost set to 
1.5\n----------------------------------------------\nQUERY PLAN\nSort (cost=12702.62..12711.29 rows=3467 width=48) (actual time=\n1407.000..1407.000 rows=3467 loops=1)\nSort Key: c.sourceid, c.targetid\n-> Merge Join (cost=9912.07..12498.77 rows=3467 width=48) (actual time=\n1391.000..1407.000 rows=3467 loops=1)\nMerge Cond: (\"outer\".user_id = \"inner\".sourceid)\n-> Index Scan using lte_user_pkey on lte_user s\n(cost=0.00..12807.34rows=279395 width=16) (actual time=\n0.000..46.000 rows=50034 loops=1)\n-> Sort (cost=9912.07..9920.73 rows=3467 width=40) (actual time=\n1188.000..1188.000 rows=3467 loops=1)\nSort Key: c.sourceid\n-> Hash Join (cost=8710.44..9708.21 rows=3467 width=40) (actual time=\n1157.000..1188.000 rows=3467 loops=1)\nHash Cond: (\"outer\".targetid = \"inner\".user_id)\n-> Seq Scan on candidates617004 c (cost=0.00..67.67 rows=3467 width=32) \n(actual time=0.000..15.000 rows=3467 loops=1)\n-> Hash (cost=8011.95..8011.95 rows=279395 width=16) (actual time=\n1157.000..1157.000 rows=0 loops=1)\n-> Seq Scan on lte_user t (cost=0.00..8011.95 rows=279395 width=16) (actual \ntime=0.000..750.000 rows=279395 loops=1)\nTotal runtime: 1422.000 ms\n\nrandom_page_cost set to 1.5 and enable_hashjoin set to false\n--------------------------------------------------------------------------------------------------\nQUERY PLAN\nSort (cost=13565.58..13574.25 rows=3467 width=48) (actual time=\n390.000..390.000 rows=3467 loops=1)\nSort Key: c.sourceid, c.targetid\n-> Nested Loop (cost=271.52..13361.73 rows=3467 width=48) (actual time=\n203.000..360.000 rows=3467 loops=1)\n-> Merge Join (cost=271.52..2762.88 rows=3467 width=40) (actual time=\n203.000..250.000 rows=3467 loops=1)\nMerge Cond: (\"outer\".user_id = \"inner\".sourceid)\n-> Index Scan using lte_user_pkey on lte_user s\n(cost=0.00..12807.34rows=279395 width=16) (actual time=\n0.000..48.000 rows=50034 loops=1)\n-> Sort (cost=271.52..280.19 rows=3467 width=32) (actual\ntime=15.000..31.000rows=3467 loops=1)\nSort Key: c.sourceid\n-> Seq Scan on candidates617004 c (cost=0.00..67.67 rows=3467 width=32) \n(actual time=0.000..15.000 rows=3467 loops=1)\n-> Index Scan using lte_user_pkey on lte_user t (cost=0.00..3.02 rows=1 \nwidth=16) (actual time=0.023..0.023 rows=1 loops=3467)\nIndex Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 406.000 ms\n\nThanks,\nMeetesh\n\nHi all,\n\nWe're using 8.0.3 and we're seeing a problem where the planner is\nchoosing a seq scan and hash join over an index scan.  If I set\nenable_hashjoin to off, then I get the plan I'm expecting and the query\nruns a lot faster.  I've also tried lowering the random page cost\n(even to 1) but the planner still chooses to use the hash join.\n\nDoes anyone have any thoughts/suggestions?  I saw that there was a\nthread recently in which the planner wasn't correctly estimating the\ncost for queries using LIMIT.  Is it possible that something\nsimilar is happening here (perhaps because of the sort) and that the\npatch Tom proposed would fix it?\n\nThanks.  
Here are the various queries and plans:\n\nNormal settings\n------------------------\nexplain analyze\n    select\n        c.sourceId,\n        c.targetId,\n        abs(c.tr - c.sr) as xmy,\n        (c.sr - s.ar) * (c.tr - t.ar) as xy,\n        (c.sr - s.ar) * (c.sr - s.ar) as x2,\n        (c.tr - t.ar) * (c.tr - t.ar) as y2\n    from\n        candidates617004 c,\n        lte_user s,\n        lte_user t\n    where\n        c.sourceId = s.user_id\n        and c.targetId = t.user_id\n    order by\n        c.sourceId,\n        c.targetId;\n\nQUERY PLAN\nSort  (cost=13430.57..13439.24 rows=3467 width=48) (actual time=1390.000..1390.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Merge Join  (cost=9912.07..13226.72 rows=3467\nwidth=48) (actual time=1344.000..1375.000 rows=3467 loops=1)\n        Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n        ->  Index Scan using\nlte_user_pkey on lte_user s  (cost=0.00..16837.71 rows=279395\nwidth=16) (actual time=0.000..95.000 rows=50034 loops=1)\n        ->  Sort \n(cost=9912.07..9920.73 rows=3467 width=40) (actual\ntime=1156.000..1156.000 rows=3467 loops=1)\n              Sort Key: c.sourceid\n             \n->  Hash Join  (cost=8710.44..9708.21 rows=3467 width=40)\n(actual time=1125.000..1156.000 rows=3467 loops=1)\n                   \nHash Cond: (\"outer\".targetid = \"inner\".user_id)\n                   \n->  Seq Scan on candidates617004 c  (cost=0.00..67.67\nrows=3467 width=32) (actual time=0.000..0.000 rows=3467 loops=1)\n                   \n->  Hash  (cost=8011.95..8011.95 rows=279395 width=16)\n(actual time=1125.000..1125.000 rows=0 loops=1)\n                         \n->  Seq Scan on lte_user t  (cost=0.00..8011.95\nrows=279395 width=16) (actual time=0.000..670.000 rows=279395 loops=1)\nTotal runtime: 1406.000 ms\n\nenable_hashjoin disabled\n----------------------------------------\nQUERY PLAN\nSort  (cost=14355.37..14364.03 rows=3467 width=48) (actual time=391.000..391.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Nested Loop  (cost=271.52..14151.51 rows=3467 width=48) (actual time=203.000..359.000 rows=3467 loops=1)\n        ->  Merge Join \n(cost=271.52..3490.83 rows=3467 width=40) (actual time=203.000..218.000\nrows=3467 loops=1)\n              Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n             \n->  Index Scan using lte_user_pkey on lte_user s \n(cost=0.00..16837.71 rows=279395 width=16) (actual time=0.000..126.000\nrows=50034 loops=1)\n             \n->  Sort  (cost=271.52..280.19 rows=3467 width=32) (actual\ntime=15.000..30.000 rows=3467 loops=1)\n                   \nSort Key: c.sourceid\n                   \n->  Seq Scan on candidates617004 c  (cost=0.00..67.67\nrows=3467 width=32) (actual time=0.000..0.000 rows=3467 loops=1)\n        ->  Index Scan using\nlte_user_pkey on lte_user t  (cost=0.00..3.03 rows=1 width=16)\n(actual time=0.031..0.036 rows=1 loops=3467)\n              Index Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 406.000 ms\n\nrandom_page_cost set to 1.5\n----------------------------------------------\nQUERY PLAN\nSort  (cost=12702.62..12711.29 rows=3467 width=48) (actual time=1407.000..1407.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Merge Join  (cost=9912.07..12498.77 rows=3467\nwidth=48) (actual time=1391.000..1407.000 rows=3467 loops=1)\n        Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n        ->  Index Scan using\nlte_user_pkey on lte_user s  (cost=0.00..12807.34 rows=279395\nwidth=16) (actual 
time=0.000..46.000 rows=50034 loops=1)\n        ->  Sort \n(cost=9912.07..9920.73 rows=3467 width=40) (actual\ntime=1188.000..1188.000 rows=3467 loops=1)\n              Sort Key: c.sourceid\n             \n->  Hash Join  (cost=8710.44..9708.21 rows=3467 width=40)\n(actual time=1157.000..1188.000 rows=3467 loops=1)\n                   \nHash Cond: (\"outer\".targetid = \"inner\".user_id)\n                   \n->  Seq Scan on candidates617004 c  (cost=0.00..67.67\nrows=3467 width=32) (actual time=0.000..15.000 rows=3467 loops=1)\n                   \n->  Hash  (cost=8011.95..8011.95 rows=279395 width=16)\n(actual time=1157.000..1157.000 rows=0 loops=1)\n                         \n->  Seq Scan on lte_user t  (cost=0.00..8011.95\nrows=279395 width=16) (actual time=0.000..750.000 rows=279395 loops=1)\nTotal runtime: 1422.000 ms\n\nrandom_page_cost set to 1.5 and enable_hashjoin set to false\n--------------------------------------------------------------------------------------------------\nQUERY PLAN\nSort  (cost=13565.58..13574.25 rows=3467 width=48) (actual time=390.000..390.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Nested Loop  (cost=271.52..13361.73 rows=3467 width=48) (actual time=203.000..360.000 rows=3467 loops=1)\n        ->  Merge Join \n(cost=271.52..2762.88 rows=3467 width=40) (actual time=203.000..250.000\nrows=3467 loops=1)\n              Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n             \n->  Index Scan using lte_user_pkey on lte_user s \n(cost=0.00..12807.34 rows=279395 width=16) (actual time=0.000..48.000\nrows=50034 loops=1)\n             \n->  Sort  (cost=271.52..280.19 rows=3467 width=32) (actual\ntime=15.000..31.000 rows=3467 loops=1)\n                   \nSort Key: c.sourceid\n                   \n->  Seq Scan on candidates617004 c  (cost=0.00..67.67\nrows=3467 width=32) (actual time=0.000..15.000 rows=3467 loops=1)\n        ->  Index Scan using\nlte_user_pkey on lte_user t  (cost=0.00..3.02 rows=1 width=16)\n(actual time=0.023..0.023 rows=1 loops=3467)\n              Index Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 406.000 ms\n\nThanks,\nMeetesh", "msg_date": "Tue, 2 Aug 2005 00:19:27 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": true, "msg_subject": "Planner incorrectly choosing seq scan over index scan" }, { "msg_contents": "[Meetesh Karia - Tue at 12:19:27AM +0200]\n> We're using 8.0.3 and we're seeing a problem where the planner is choosing a \n> seq scan and hash join over an index scan. If I set enable_hashjoin to off, \n> then I get the plan I'm expecting and the query runs a lot faster. I've also \n> tried lowering the random page cost (even to 1) but the planner still \n> chooses to use the hash join.\n\nHave you tried increasing the statistics collection?\n\n-- \nTobias Brox, +47-91700050\nNordicbet, IT dept\n", "msg_date": "Tue, 2 Aug 2005 00:37:08 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan" }, { "msg_contents": "Meetesh Karia wrote:\n> Hi all,\n> \n> We're using 8.0.3 and we're seeing a problem where the planner is\n> choosing a seq scan and hash join over an index scan. If I set\n> enable_hashjoin to off, then I get the plan I'm expecting and the query\n> runs a lot faster. I've also tried lowering the random page cost (even\n> to 1) but the planner still chooses to use the hash join.\n> \n> Does anyone have any thoughts/suggestions? 
I saw that there was a\n> thread recently in which the planner wasn't correctly estimating the\n> cost for queries using LIMIT. Is it possible that something similar is\n> happening here (perhaps because of the sort) and that the patch Tom\n> proposed would fix it?\n> \n> Thanks. Here are the various queries and plans:\n> \n> Normal settings\n\n...\n\n> QUERY PLAN\n> Sort (cost=13430.57..13439.24 rows=3467 width=48) (actual\n> time=1390.000..1390.000 rows=3467 loops=1)\n> Sort Key: c.sourceid, c.targetid\n> -> Merge Join (cost=9912.07..13226.72 rows=3467 width=48) (actual\n> time=1344.000..1375.000 rows=3467 loops=1)\n> Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> -> Index Scan using lte_user_pkey on lte_user s \n> (cost=0.00..16837.71 rows=279395 width=16) (actual time=0.000..95.000\n> rows=50034 loops=1)\n\nThis is where the planner is messing up, and mis-estimating the\nselectivity. It is expecting to get 280k rows, but only needs to get 50k.\nI assume lte_user is the bigger table, and that candidates617004 has\nsome subset.\n\nHas lte_user and candidates617004 been recently ANALYZEd? All estimates,\nexcept for the expected number of rows from lte_user seem to be okay.\n\nIs user_id the primary key for lte_user?\nI'm trying to figure out how you can get 50k rows, by searching a\nprimary key, against a 3.5k rows. Is user_id only part of the primary\nkey for lte_user?\n\nCan you give us the output of:\n\\d lte_user\n\\d candidates617004\n\nSo that we have the description of the tables, and what indexes you have\ndefined?\n\nAlso, if you could describe the table layouts, that would help.\n\nJohn\n=:->\n\n\n> -> Sort (cost=9912.07..9920.73 rows=3467 width=40) (actual\n> time=1156.000..1156.000 rows=3467 loops=1)\n> Sort Key: c.sourceid\n> -> Hash Join (cost=8710.44..9708.21 rows=3467 width=40)\n> (actual time=1125.000..1156.000 rows=3467 loops=1)\n> Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n> -> Seq Scan on candidates617004 c \n> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..0.000\n> rows=3467 loops=1)\n> -> Hash (cost=8011.95..8011.95 rows=279395\n> width=16) (actual time=1125.000..1125.000 rows=0 loops=1)\n> -> Seq Scan on lte_user t \n> (cost=0.00..8011.95 rows=279395 width=16) (actual time=0.000..670.000\n> rows=279395 loops=1)\n> Total runtime: 1406.000 ms\n> \n> enable_hashjoin disabled\n> ----------------------------------------\n> QUERY PLAN\n> Sort (cost=14355.37..14364.03 rows=3467 width=48) (actual\n> time=391.000..391.000 rows=3467 loops=1)\n> Sort Key: c.sourceid, c.targetid\n> -> Nested Loop (cost=271.52..14151.51 rows=3467 width=48) (actual\n> time=203.000..359.000 rows=3467 loops=1)\n> -> Merge Join (cost=271.52..3490.83 rows=3467 width=40)\n> (actual time=203.000..218.000 rows=3467 loops=1)\n> Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> -> Index Scan using lte_user_pkey on lte_user s \n> (cost=0.00..16837.71 rows=279395 width=16) (actual time=0.000..126.000\n> rows=50034 loops=1)\n> -> Sort (cost=271.52..280.19 rows=3467 width=32) (actual\n> time=15.000..30.000 rows=3467 loops=1)\n> Sort Key: c.sourceid\n> -> Seq Scan on candidates617004 c \n> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..0.000\n> rows=3467 loops=1)\n> -> Index Scan using lte_user_pkey on lte_user t \n> (cost=0.00..3.03 rows=1 width=16) (actual time=0.031..0.036 rows=1\n> loops=3467)\n> Index Cond: (\"outer\".targetid = t.user_id)\n> Total runtime: 406.000 ms\n> \n> random_page_cost set to 1.5\n> ----------------------------------------------\n> 
QUERY PLAN\n> Sort (cost=12702.62..12711.29 rows=3467 width=48) (actual\n> time=1407.000..1407.000 rows=3467 loops=1)\n> Sort Key: c.sourceid, c.targetid\n> -> Merge Join (cost=9912.07..12498.77 rows=3467 width=48) (actual\n> time=1391.000..1407.000 rows=3467 loops=1)\n> Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> -> Index Scan using lte_user_pkey on lte_user s \n> (cost=0.00..12807.34 rows=279395 width=16) (actual time=0.000..46.000\n> rows=50034 loops=1)\n> -> Sort (cost=9912.07..9920.73 rows=3467 width=40) (actual\n> time=1188.000..1188.000 rows=3467 loops=1)\n> Sort Key: c.sourceid\n> -> Hash Join (cost=8710.44..9708.21 rows=3467 width=40)\n> (actual time=1157.000..1188.000 rows=3467 loops=1)\n> Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n> -> Seq Scan on candidates617004 c \n> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..15.000\n> rows=3467 loops=1)\n> -> Hash (cost=8011.95..8011.95 rows=279395\n> width=16) (actual time=1157.000..1157.000 rows=0 loops=1)\n> -> Seq Scan on lte_user t \n> (cost=0.00..8011.95 rows=279395 width=16) (actual time=0.000..750.000\n> rows=279395 loops=1)\n> Total runtime: 1422.000 ms\n> \n> random_page_cost set to 1.5 and enable_hashjoin set to false\n> --------------------------------------------------------------------------------------------------\n> QUERY PLAN\n> Sort (cost=13565.58..13574.25 rows=3467 width=48) (actual\n> time=390.000..390.000 rows=3467 loops=1)\n> Sort Key: c.sourceid, c.targetid\n> -> Nested Loop (cost=271.52..13361.73 rows=3467 width=48) (actual\n> time=203.000..360.000 rows=3467 loops=1)\n> -> Merge Join (cost=271.52..2762.88 rows=3467 width=40)\n> (actual time=203.000..250.000 rows=3467 loops=1)\n> Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> -> Index Scan using lte_user_pkey on lte_user s \n> (cost=0.00..12807.34 rows=279395 width=16) (actual time=0.000..48.000\n> rows=50034 loops=1)\n> -> Sort (cost=271.52..280.19 rows=3467 width=32) (actual\n> time=15.000..31.000 rows=3467 loops=1)\n> Sort Key: c.sourceid\n> -> Seq Scan on candidates617004 c \n> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..15.000\n> rows=3467 loops=1)\n> -> Index Scan using lte_user_pkey on lte_user t \n> (cost=0.00..3.02 rows=1 width=16) (actual time=0.023..0.023 rows=1\n> loops=3467)\n> Index Cond: (\"outer\".targetid = t.user_id)\n> Total runtime: 406.000 ms\n> \n> Thanks,\n> Meetesh", "msg_date": "Mon, 01 Aug 2005 18:16:27 -0500", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan" }, { "msg_contents": "Are you referring to the statistics gathering target for ANALYZE? Based on \nyour email, I just tried the following and then re-ran the explain analyze \nbut got the same \"incorrect\" plan:\n\nalter table candidates617004\nalter column sourceId set statistics 1000,\nalter column targetId set statistics 1000;\nanalyze candidates617004;\n\nalter table lte_user\nalter column user_id set statistics 1000;\nanalyze lte_user;\n\nThanks for your suggestion,\nMeetesh\n\nOn 8/2/05, Tobias Brox <[email protected]> wrote:\n> \n> [Meetesh Karia - Tue at 12:19:27AM +0200]\n> > We're using 8.0.3 and we're seeing a problem where the planner is \n> choosing a\n> > seq scan and hash join over an index scan. If I set enable_hashjoin to \n> off,\n> > then I get the plan I'm expecting and the query runs a lot faster. 
I've \n> also\n> > tried lowering the random page cost (even to 1) but the planner still\n> > chooses to use the hash join.\n> \n> Have you tried increasing the statistics collection?\n> \n> --\n> Tobias Brox, +47-91700050\n> Nordicbet, IT dept\n>\n\nAre you referring to the statistics gathering target for ANALYZE? \nBased on your email, I just tried the following and then re-ran the\nexplain analyze but got the same \"incorrect\" plan:\n\nalter table candidates617004\n    alter column sourceId set statistics 1000,\n    alter column targetId set statistics 1000;\nanalyze candidates617004;\n\nalter table lte_user\n    alter column user_id set statistics 1000;\nanalyze lte_user;\nThanks for your suggestion,\nMeetesh\nOn 8/2/05, Tobias Brox <[email protected]> wrote:\n[Meetesh Karia - Tue at 12:19:27AM +0200]> We're using 8.0.3 and we're seeing a problem where the planner is choosing a> seq scan and hash join over an index scan. If I set enable_hashjoin to off,> then I get the plan I'm expecting and the query runs a lot faster. I've also\n> tried lowering the random page cost (even to 1) but the planner still> chooses to use the hash join.Have you tried increasing the statistics collection?--Tobias Brox, +47-91700050Nordicbet, IT dept", "msg_date": "Tue, 2 Aug 2005 01:30:26 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan" }, { "msg_contents": "Thanks John. I've answered your questions below:\n\nHas lte_user and candidates617004 been recently ANALYZEd? All estimates,\n> except for the expected number of rows from lte_user seem to be okay.\n\n\nI ANALYZEd both tables just before putting together my first email. And, \nunfortunately, modifying the statistics target didn't help either.\n\nIs user_id the primary key for lte_user?\n\n\nYes \n\nI'm trying to figure out how you can get 50k rows, by searching a\n> primary key, against a 3.5k rows. Is user_id only part of the primary\n> key for lte_user?\n\n\nHmmm ... I missed that before. But, that surprises me too. Especially since \nsourceId in the candidates table has only 1 value. 
Also, user_id is the \ncomplete primary key for lte_user.\n\nCan you give us the output of:\n> \\d lte_user\n> \\d candidates617004\n\n\nSure, here they are:\n\nlte=# \\d lte_user\nTable \"public.lte_user\"\nColumn | Type | Modifiers\n---------------+-----------------------------+-----------\nuser_id | bigint | not null\nfirstname | character varying(255) |\nlastname | character varying(255) |\naddress1 | character varying(255) |\naddress2 | character varying(255) |\ncity | character varying(255) |\nstate | character varying(255) |\nzip | character varying(255) |\nphone1 | character varying(255) |\nphone2 | character varying(255) |\nusername | character varying(255) |\npassword | character varying(255) |\ndeleted | boolean | not null\next_cust_id | character varying(255) |\naboutme | character varying(255) |\nbirthday | timestamp without time zone |\nfm_id | bigint |\nar | double precision |\nIndexes:\n\"lte_user_pkey\" PRIMARY KEY, btree (user_id)\n\"idx_user_extid\" btree (ext_cust_id)\n\"idx_user_username\" btree (username)\nForeign-key constraints:\n\"fk_user_fm\" FOREIGN KEY (fm_id) REFERENCES fm(fm_id)\n\nlte=# \\d candidates617004\nTable \"public.candidates617004\"\nColumn | Type | Modifiers\n--------------+------------------+-----------\nfmid | bigint |\nsourceid | bigint |\nsr | double precision |\ntargetid | bigint |\ntr | double precision | \n\nAlso, if you could describe the table layouts, that would help.\n\n\nSure. The lte_user table is just a collection of users. user_id is assigned \nuniquely using a sequence. During some processing, we create a candidates \ntable (candidates617004 in our case). This table is usually a temp table. \nsourceid is a user_id (in this case it is always 617004) and targetid is \nalso a user_id (2860 distinct values out of 3467). 
The rest of the \ninformation is either only used in the select clause or not used at all \nduring this processing.\n\nDid I miss something in the table layout description that would be helpful?\n\nThanks for your help!\nMeetesh\n\n> -> Sort (cost=9912.07..9920.73 rows=3467 width=40) (actual\n> > time=1156.000..1156.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid\n> > -> Hash Join (cost=8710.44..9708.21 rows=3467 width=40)\n> > (actual time=1125.000..1156.000 rows=3467 loops=1)\n> > Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n> > -> Seq Scan on candidates617004 c\n> > (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..0.000\n> > rows=3467 loops=1)\n> > -> Hash (cost=8011.95..8011.95 rows=279395\n> > width=16) (actual time=1125.000..1125.000 rows=0 loops=1)\n> > -> Seq Scan on lte_user t\n> > (cost=0.00..8011.95 rows=279395 width=16) (actual time=0.000..670.000\n> > rows=279395 loops=1)\n> > Total runtime: 1406.000 ms\n> >\n> > enable_hashjoin disabled\n> > ----------------------------------------\n> > QUERY PLAN\n> > Sort (cost=14355.37..14364.03 rows=3467 width=48) (actual\n> > time=391.000..391.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid, c.targetid\n> > -> Nested Loop (cost=271.52..14151.51 rows=3467 width=48) (actual\n> > time=203.000..359.000 rows=3467 loops=1)\n> > -> Merge Join (cost=271.52..3490.83 rows=3467 width=40)\n> > (actual time=203.000..218.000 rows=3467 loops=1)\n> > Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> > -> Index Scan using lte_user_pkey on lte_user s\n> > (cost=0.00..16837.71 rows=279395 width=16) (actual time=0.000..126.000\n> > rows=50034 loops=1)\n> > -> Sort (cost=271.52..280.19 rows=3467 width=32) (actual\n> > time=15.000..30.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid\n> > -> Seq Scan on candidates617004 c\n> > (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..0.000\n> > rows=3467 loops=1)\n> > -> Index Scan using lte_user_pkey on lte_user t\n> > (cost=0.00..3.03 rows=1 width=16) (actual time=0.031..0.036 rows=1\n> > loops=3467)\n> > Index Cond: (\"outer\".targetid = t.user_id)\n> > Total runtime: 406.000 ms\n> >\n> > random_page_cost set to 1.5\n> > ----------------------------------------------\n> > QUERY PLAN\n> > Sort (cost=12702.62..12711.29 rows=3467 width=48) (actual\n> > time=1407.000..1407.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid, c.targetid\n> > -> Merge Join (cost=9912.07..12498.77 rows=3467 width=48) (actual\n> > time=1391.000..1407.000 rows=3467 loops=1)\n> > Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> > -> Index Scan using lte_user_pkey on lte_user s\n> > (cost=0.00..12807.34 rows=279395 width=16) (actual time=0.000..46.000\n> > rows=50034 loops=1)\n> > -> Sort (cost=9912.07..9920.73 rows=3467 width=40) (actual\n> > time=1188.000..1188.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid\n> > -> Hash Join (cost=8710.44..9708.21 rows=3467 width=40)\n> > (actual time=1157.000..1188.000 rows=3467 loops=1)\n> > Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n> > -> Seq Scan on candidates617004 c\n> > (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..15.000\n> > rows=3467 loops=1)\n> > -> Hash (cost=8011.95..8011.95 rows=279395\n> > width=16) (actual time=1157.000..1157.000 rows=0 loops=1)\n> > -> Seq Scan on lte_user t\n> > (cost=0.00..8011.95 rows=279395 width=16) (actual time=0.000..750.000\n> > rows=279395 loops=1)\n> > Total runtime: 1422.000 ms\n> >\n> > random_page_cost set to 1.5 and enable_hashjoin set to false\n> > \n> 
--------------------------------------------------------------------------------------------------\n> > QUERY PLAN\n> > Sort (cost=13565.58..13574.25 rows=3467 width=48) (actual\n> > time=390.000..390.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid, c.targetid\n> > -> Nested Loop (cost=271.52..13361.73 rows=3467 width=48) (actual\n> > time=203.000..360.000 rows=3467 loops=1)\n> > -> Merge Join (cost=271.52..2762.88 rows=3467 width=40)\n> > (actual time=203.000..250.000 rows=3467 loops=1)\n> > Merge Cond: (\"outer\".user_id = \"inner\".sourceid)\n> > -> Index Scan using lte_user_pkey on lte_user s\n> > (cost=0.00..12807.34 rows=279395 width=16) (actual time=0.000..48.000\n> > rows=50034 loops=1)\n> > -> Sort (cost=271.52..280.19 rows=3467 width=32) (actual\n> > time=15.000..31.000 rows=3467 loops=1)\n> > Sort Key: c.sourceid\n> > -> Seq Scan on candidates617004 c\n> > (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..15.000\n> > rows=3467 loops=1)\n> > -> Index Scan using lte_user_pkey on lte_user t\n> > (cost=0.00..3.02 rows=1 width=16) (actual time=0.023..0.023 rows=1\n> > loops=3467)\n> > Index Cond: (\"outer\".targetid = t.user_id)\n> > Total runtime: 406.000 ms\n> >\n> > Thanks,\n> > Meetesh\n> \n> \n> \n>\n\nThanks John.  I've answered your questions below:\nHas lte_user and candidates617004 been recently ANALYZEd? All estimates,except for the expected number of rows from lte_user seem to be okay.\nI ANALYZEd both tables just before putting together my first\nemail.  And, unfortunately, modifying the statistics target didn't\nhelp either.\nIs user_id the primary key for lte_user?\nYes \nI'm trying to figure out how you can get 50k rows, by searching aprimary key, against a \n3.5k rows. Is user_id only part of the primarykey for lte_user?\nHmmm ... I missed that before.  But, that surprises me too. \nEspecially since sourceId in the candidates table has only 1\nvalue.  
Also, user_id is the complete primary key for lte_user.\nCan you give us the output of:\\d lte_user\\d candidates617004\n\nSure, here they are:\n\nlte=# \\d lte_user\n                 Table \"public.lte_user\"\n    Column    \n|           \nType            \n| Modifiers\n---------------+-----------------------------+-----------\n user_id       |\nbigint                     \n| not null\n firstname     | character varying(255)      |\n lastname      | character varying(255)      |\n address1      | character varying(255)      |\n address2      | character varying(255)      |\n city          | character varying(255)      |\n state         | character varying(255)      |\n zip           | character varying(255)      |\n phone1        | character varying(255)      |\n phone2        | character varying(255)      |\n username      | character varying(255)      |\n password      | character varying(255)      |\n deleted       |\nboolean                    \n| not null\n ext_cust_id   | character varying(255)      |\n aboutme       | character varying(255)      |\n birthday      | timestamp without time zone |\n fm_id       |\nbigint                     \n|\n ar            | double\nprecision           \n|\nIndexes:\n    \"lte_user_pkey\" PRIMARY KEY, btree (user_id)\n    \"idx_user_extid\" btree (ext_cust_id)\n    \"idx_user_username\" btree (username)\nForeign-key constraints:\n    \"fk_user_fm\" FOREIGN KEY (fm_id) REFERENCES fm(fm_id)\n\nlte=# \\d candidates617004\n       Table \"public.candidates617004\"\n    Column   \n|      \nType       | Modifiers\n--------------+------------------+-----------\n fmid       | bigint           |\n sourceid     | bigint           |\n sr            | double precision |\n targetid     | bigint           |\n tr           | double precision | \nAlso, if you could describe the table layouts, that would help.\n\nSure.  The lte_user table is just a collection of users. \nuser_id is assigned uniquely using a sequence.  During some\nprocessing, we create a candidates table (candidates617004 in our\ncase).  This table is usually a temp table.  sourceid is a\nuser_id (in this case it is always 617004) and targetid is also a\nuser_id (2860 distinct values out of 3467).  
The rest of the\ninformation is either only used in the select clause or not used at all\nduring this processing.\n\nDid I miss something in the table layout description that would be helpful?\n\nThanks for your help!\nMeetesh\n>        \n->  Sort  (cost=9912.07..9920.73 rows=3467\nwidth=40) (actual> time=1156.000..1156.000 rows=3467 loops=1)>               Sort Key: c.sourceid>              \n->  Hash Join  (cost=8710.44..9708.21 rows=3467\nwidth=40)> (actual time=1125.000..1156.000 rows=3467 loops=1)>                    \nHash Cond: (\"outer\".targetid = \"inner\".user_id)>                    \n->  Seq Scan on candidates617004 c> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..0.000> rows=3467 loops=1)>                    \n->  Hash  (cost=8011.95..8011.95 rows=279395> width=16) (actual time=1125.000..1125.000 rows=0 loops=1)>                          \n->  Seq Scan on lte_user t> (cost=0.00..8011.95 rows=279395 width=16) (actual time=0.000..670.000> rows=279395 loops=1)> Total runtime: 1406.000 ms>> enable_hashjoin disabled> ----------------------------------------\n> QUERY PLAN> Sort  (cost=14355.37..14364.03 rows=3467 width=48) (actual> time=391.000..391.000 rows=3467 loops=1)>   Sort Key: c.sourceid, c.targetid>   ->  Nested Loop  (cost=271.52..14151.51\n rows=3467 width=48) (actual> time=203.000..359.000 rows=3467 loops=1)>        \n->  Merge Join  (cost=271.52..3490.83 rows=3467\nwidth=40)> (actual time=203.000..218.000 rows=3467 loops=1)>              \nMerge Cond: (\"outer\".user_id = \"inner\".sourceid)>              \n->  Index Scan using lte_user_pkey on lte_user s> (cost=0.00..16837.71 rows=279395 width=16) (actual time=0.000..126.000> rows=50034 loops=1)>              \n->  Sort  (cost=271.52..280.19 rows=3467\nwidth=32) (actual> time=15.000..30.000 rows=3467 loops=1)>                    \nSort Key: c.sourceid>                    \n->  Seq Scan on candidates617004 c> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..0.000> rows=3467 loops=1)>         ->  Index Scan using lte_user_pkey on lte_user t> (cost=0.00..3.03\n rows=1 width=16) (actual time=0.031..0.036 rows=1> loops=3467)>              \nIndex Cond: (\"outer\".targetid = t.user_id)> Total runtime: 406.000 ms>> random_page_cost set to 1.5> ----------------------------------------------> QUERY PLAN> Sort  (cost=\n12702.62..12711.29 rows=3467 width=48) (actual> time=1407.000..1407.000 rows=3467 loops=1)>   Sort Key: c.sourceid, c.targetid>   ->  Merge Join  (cost=9912.07..12498.77 rows=3467 width=48) (actual\n> time=1391.000..1407.000 rows=3467 loops=1)>         Merge Cond: (\"outer\".user_id = \"inner\".sourceid)>         ->  Index Scan using lte_user_pkey on lte_user s> (cost=0.00..12807.34\n rows=279395 width=16) (actual time=0.000..46.000> rows=50034 loops=1)>        \n->  Sort  (cost=9912.07..9920.73 rows=3467\nwidth=40) (actual> time=1188.000..1188.000 rows=3467 loops=1)>               Sort Key: c.sourceid>              \n->  Hash Join  (cost=8710.44..9708.21 rows=3467\nwidth=40)> (actual time=1157.000..1188.000 rows=3467 loops=1)>                    \nHash Cond: (\"outer\".targetid = \"inner\".user_id)>                    \n->  Seq Scan on candidates617004 c> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..15.000> rows=3467 loops=1)>                    \n->  Hash  (cost=8011.95..8011.95 rows=279395> width=16) (actual time=1157.000..1157.000 rows=0 loops=1)>                          \n->  Seq Scan on lte_user t> (cost=0.00..8011.95 rows=279395 width=16) (actual time=0.000..750.000> 
rows=279395 loops=1)> Total runtime: 1422.000 ms>> random_page_cost set to 1.5 and enable_hashjoin set to false\n> --------------------------------------------------------------------------------------------------> QUERY PLAN> Sort  (cost=13565.58..13574.25 rows=3467 width=48) (actual> time=390.000..390.000\n rows=3467 loops=1)>   Sort Key: c.sourceid, c.targetid>   ->  Nested Loop  (cost=271.52..13361.73 rows=3467 width=48) (actual> time=203.000..360.000 rows=3467 loops=1)>        \n->  Merge Join  (cost=271.52..2762.88 rows=3467\nwidth=40)> (actual time=203.000..250.000 rows=3467 loops=1)>              \nMerge Cond: (\"outer\".user_id = \"inner\".sourceid)>              \n->  Index Scan using lte_user_pkey on lte_user s> (cost=0.00..12807.34 rows=279395 width=16) (actual time=0.000..48.000> rows=50034 loops=1)>              \n->  Sort  (cost=271.52..280.19 rows=3467\nwidth=32) (actual> time=15.000..31.000 rows=3467 loops=1)>                    \nSort Key: c.sourceid>                    \n->  Seq Scan on candidates617004 c> (cost=0.00..67.67 rows=3467 width=32) (actual time=0.000..15.000> rows=3467 loops=1)>         ->  Index Scan using lte_user_pkey on lte_user t> (cost=\n0.00..3.02 rows=1 width=16) (actual time=0.023..0.023 rows=1> loops=3467)>              \nIndex Cond: (\"outer\".targetid = t.user_id)> Total runtime: 406.000 ms>> Thanks,> Meetesh", "msg_date": "Tue, 2 Aug 2005 01:56:13 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan" }, { "msg_contents": "Meetesh Karia <[email protected]> writes:\n> Sure. The lte_user table is just a collection of users. user_id is assigned=\n> uniquely using a sequence. During some processing, we create a candidates=\n> table (candidates617004 in our case). This table is usually a temp table.=\n> sourceid is a user_id (in this case it is always 617004) and targetid is=20\n> also a user_id (2860 distinct values out of 3467). The rest of the=20\n> information is either only used in the select clause or not used at all=20\n> during this processing.\n\nIf you know that sourceid has only a single value, it'd probably be\nhelpful to call out that value in the query, ie,\n\twhere ... AND c.sourceId = 617004 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 01 Aug 2005 20:15:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan " }, { "msg_contents": "Thanks Tom,\n\nThat modifies the query plan slightly, but the planner still decides to do a \nhash join for the lte_user table aliased 't'. 
Though, if I make this change \nand set enable_hashjoin to off, the query plan (and execution time) gets \neven better.\n\nenable_hashjoin = on\n----------------------------------\nQUERY PLAN\nSort (cost=10113.35..10122.02 rows=3467 width=48) (actual time=\n1203.000..1203.000 rows=3467 loops=1)\nSort Key: c.sourceid, c.targetid\n-> Nested Loop (cost=8711.19..9909.50 rows=3467 width=48) (actual time=\n1156.000..1203.000 rows=3467 loops=1)\n-> Index Scan using lte_user_pkey on lte_user s (cost=0.00..3.02 rows=1 \nwidth=16) (actual time=0.000..0.000 rows=1 loops=1)\nIndex Cond: (617004 = user_id)\n-> Hash Join (cost=8711.19..9776.46 rows=3467 width=40) (actual time=\n1156.000..1187.000 rows=3467 loops=1)\nHash Cond: (\"outer\".targetid = \"inner\".user_id)\n-> Seq Scan on candidates617004 c (cost=0.00..76.34 rows=3467 width=32) \n(actual time=0.000..16.000 rows=3467 loops=1)\nFilter: (sourceid = 617004)\n-> Hash (cost=8012.55..8012.55 rows=279455 width=16) (actual time=\n1141.000..1141.000 rows=0 loops=1)\n-> Seq Scan on lte_user t (cost=0.00..8012.55 rows=279455 width=16) (actual \ntime=0.000..720.000 rows=279395 loops=1)\nTotal runtime: 1218.000 ms\n\nenable_hashjoin = off\n-----------------------------------\nQUERY PLAN\nSort (cost=10942.56..10951.22 rows=3467 width=48) (actual time=\n188.000..188.000 rows=3467 loops=1)\nSort Key: c.sourceid, c.targetid\n-> Nested Loop (cost=0.00..10738.71 rows=3467 width=48) (actual time=\n0.000..188.000 rows=3467 loops=1)\n-> Index Scan using lte_user_pkey on lte_user s (cost=0.00..3.02 rows=1 \nwidth=16) (actual time=0.000..0.000 rows=1 loops=1)\nIndex Cond: (617004 = user_id)\n-> Nested Loop (cost=0.00..10605.67 rows=3467 width=40) (actual time=\n0.000..157.000 rows=3467 loops=1)\n-> Seq Scan on candidates617004 c (cost=0.00..76.34 rows=3467 width=32) \n(actual time=0.000..15.000 rows=3467 loops=1)\nFilter: (sourceid = 617004)\n-> Index Scan using lte_user_pkey on lte_user t (cost=0.00..3.02 rows=1 \nwidth=16) (actual time=0.028..0.037 rows=1 loops=3467)\nIndex Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 188.000 ms\n\nThanks,\nMeetesh\n\nOn 8/2/05, Tom Lane <[email protected]> wrote:\n> \n> Meetesh Karia <[email protected]> writes:\n> > Sure. The lte_user table is just a collection of users. user_id is \n> assigned=\n> > uniquely using a sequence. During some processing, we create a \n> candidates=\n> > table (candidates617004 in our case). This table is usually a temp \n> table.=\n> > sourceid is a user_id (in this case it is always 617004) and targetid \n> is=20\n> > also a user_id (2860 distinct values out of 3467). The rest of the=20\n> > information is either only used in the select clause or not used at \n> all=20\n> > during this processing.\n> \n> If you know that sourceid has only a single value, it'd probably be\n> helpful to call out that value in the query, ie,\n> where ... AND c.sourceId = 617004 ...\n> \n> regards, tom lane\n>\n\nThanks Tom,\n\nThat modifies the query plan slightly, but the planner still decides to\ndo a hash join for the lte_user table aliased 't'.  
Though, if I\nmake this change and set enable_hashjoin to off, the query plan (and\nexecution time) gets even better.\n\nenable_hashjoin = on\n----------------------------------\nQUERY PLAN\nSort  (cost=10113.35..10122.02 rows=3467 width=48) (actual time=1203.000..1203.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Nested Loop  (cost=8711.19..9909.50 rows=3467\nwidth=48) (actual time=1156.000..1203.000 rows=3467 loops=1)\n        ->  Index Scan using\nlte_user_pkey on lte_user s  (cost=0.00..3.02 rows=1 width=16)\n(actual time=0.000..0.000 rows=1 loops=1)\n              Index Cond: (617004 = user_id)\n        ->  Hash Join \n(cost=8711.19..9776.46 rows=3467 width=40) (actual\ntime=1156.000..1187.000 rows=3467 loops=1)\n              Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n             \n->  Seq Scan on candidates617004 c  (cost=0.00..76.34\nrows=3467 width=32) (actual time=0.000..16.000 rows=3467 loops=1)\n                   \nFilter: (sourceid = 617004)\n             \n->  Hash  (cost=8012.55..8012.55 rows=279455 width=16)\n(actual time=1141.000..1141.000 rows=0 loops=1)\n                   \n->  Seq Scan on lte_user t  (cost=0.00..8012.55\nrows=279455 width=16) (actual time=0.000..720.000 rows=279395 loops=1)\nTotal runtime: 1218.000 ms\n\nenable_hashjoin = off\n-----------------------------------\nQUERY PLAN\nSort  (cost=10942.56..10951.22 rows=3467 width=48) (actual time=188.000..188.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Nested Loop  (cost=0.00..10738.71 rows=3467 width=48) (actual time=0.000..188.000 rows=3467 loops=1)\n        ->  Index Scan using\nlte_user_pkey on lte_user s  (cost=0.00..3.02 rows=1 width=16)\n(actual time=0.000..0.000 rows=1 loops=1)\n              Index Cond: (617004 = user_id)\n        ->  Nested\nLoop  (cost=0.00..10605.67 rows=3467 width=40) (actual\ntime=0.000..157.000 rows=3467 loops=1)\n             \n->  Seq Scan on candidates617004 c  (cost=0.00..76.34\nrows=3467 width=32) (actual time=0.000..15.000 rows=3467 loops=1)\n                   \nFilter: (sourceid = 617004)\n             \n->  Index Scan using lte_user_pkey on lte_user t \n(cost=0.00..3.02 rows=1 width=16) (actual time=0.028..0.037 rows=1\nloops=3467)\n                   \nIndex Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 188.000 ms\n\nThanks,\nMeeteshOn 8/2/05, Tom Lane <[email protected]> wrote:\nMeetesh Karia <[email protected]> writes:> Sure. The lte_user table is just a collection of users. user_id is assigned=> uniquely using a sequence. During some processing, we create a candidates=\n> table (candidates617004 in our case). This table is usually a temp table.=> sourceid is a user_id (in this case it is always 617004) and targetid is=20> also a user_id (2860 distinct values out of 3467). The rest of the=20\n> information is either only used in the select clause or not used at all=20> during this processing.If you know that sourceid has only a single value, it'd probably behelpful to call out that value in the query, ie,\n        where ... AND c.sourceId = 617004 ...                        regards,\ntom lane", "msg_date": "Tue, 2 Aug 2005 09:05:50 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan" }, { "msg_contents": "Btw - I tried playing around with some of the other planner cost constants \nbut I wasn't able to get the planner to choose the index scan. 
It seems like \nthe issue is that the estimated cost for fetching one row from the index (\n3.02) is a little high in my case. Is there any way that I can adjust that \ncost estimate? Are there any side effects of doing that? Or is my best \nsolution to simple set enable_hashjoin to off for this query?\n\nThanks,\nMeetesh\n\nOn 8/2/05, Meetesh Karia <[email protected]> wrote:\n> \n> Thanks Tom,\n> \n> That modifies the query plan slightly, but the planner still decides to do \n> a hash join for the lte_user table aliased 't'. Though, if I make this \n> change and set enable_hashjoin to off, the query plan (and execution time) \n> gets even better.\n> \n> enable_hashjoin = on\n> ----------------------------------\n> QUERY PLAN\n> Sort (cost=10113.35..10122.02 rows=3467 width=48) (actual time=\n> 1203.000..1203.000 rows=3467 loops=1)\n> Sort Key: c.sourceid, c.targetid\n> -> Nested Loop (cost=8711.19..9909.50 rows=3467 width=48) (actual time=\n> 1156.000..1203.000 rows=3467 loops=1)\n> -> Index Scan using lte_user_pkey on lte_user s (cost=0.00..3.02 rows=1 \n> width=16) (actual time=0.000..0.000 rows=1 loops=1)\n> Index Cond: (617004 = user_id)\n> -> Hash Join (cost=8711.19..9776.46 rows=3467 width=40) (actual time=\n> 1156.000..1187.000 rows=3467 loops=1)\n> Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n> -> Seq Scan on candidates617004 c (cost=0.00..76.34 rows=3467 width=32) \n> (actual time=0.000..16.000 rows=3467 loops=1)\n> Filter: (sourceid = 617004)\n> -> Hash (cost=8012.55..8012.55 rows=279455 width=16) (actual time=\n> 1141.000..1141.000 rows=0 loops=1)\n> -> Seq Scan on lte_user t (cost=0.00..8012.55 rows=279455 width=16) \n> (actual time=0.000..720.000 rows=279395 loops=1)\n> Total runtime: 1218.000 ms\n> \n> enable_hashjoin = off\n> -----------------------------------\n> QUERY PLAN\n> Sort (cost=10942.56..10951.22 rows=3467 width=48) (actual time=\n> 188.000..188.000 rows=3467 loops=1)\n> Sort Key: c.sourceid, c.targetid\n> -> Nested Loop (cost=0.00..10738.71 rows=3467 width=48) (actual time=\n> 0.000..188.000 rows=3467 loops=1)\n> -> Index Scan using lte_user_pkey on lte_user s (cost=0.00..3.02 rows=1 \n> width=16) (actual time=0.000..0.000 rows=1 loops=1)\n> Index Cond: (617004 = user_id)\n> -> Nested Loop (cost=0.00..10605.67 rows=3467 width=40) (actual time=\n> 0.000..157.000 rows=3467 loops=1)\n> -> Seq Scan on candidates617004 c (cost=0.00..76.34 rows=3467 width=32) \n> (actual time=0.000..15.000 rows=3467 loops=1)\n> Filter: (sourceid = 617004)\n> -> Index Scan using lte_user_pkey on lte_user t (cost=0.00..3.02 rows=1 \n> width=16) (actual time=0.028..0.037 rows=1 loops=3467)\n> Index Cond: (\"outer\".targetid = t.user_id)\n> Total runtime: 188.000 ms\n> \n> Thanks,\n> Meetesh\n> \n> On 8/2/05, Tom Lane <[email protected]> wrote:\n> > \n> > Meetesh Karia <[email protected]> writes:\n> > > Sure. The lte_user table is just a collection of users. user_id is \n> > assigned=\n> > > uniquely using a sequence. During some processing, we create a \n> > candidates= \n> > > table (candidates617004 in our case). This table is usually a temp \n> > table.=\n> > > sourceid is a user_id (in this case it is always 617004) and targetid \n> > is=20\n> > > also a user_id (2860 distinct values out of 3467). 
The rest of the=20 \n> > > information is either only used in the select clause or not used at \n> > all=20\n> > > during this processing.\n> > \n> > If you know that sourceid has only a single value, it'd probably be\n> > helpful to call out that value in the query, ie, \n> > where ... AND c.sourceId = 617004 ...\n> > \n> > regards, tom lane\n> > \n> \n>\n\nBtw - I tried playing around with some of the other planner cost\nconstants but I wasn't able to get the planner to choose the index\nscan.  It seems like the issue is that the estimated cost for\nfetching one row from the index (3.02) is a little high in my\ncase.  Is there any way that I can adjust that cost\nestimate?  Are there any side effects of doing that?  Or is\nmy best solution to simple set enable_hashjoin to off for this query?\n\nThanks,\nMeeteshOn 8/2/05, Meetesh Karia <[email protected]> wrote:\nThanks Tom,\n\nThat modifies the query plan slightly, but the planner still decides to\ndo a hash join for the lte_user table aliased 't'.  Though, if I\nmake this change and set enable_hashjoin to off, the query plan (and\nexecution time) gets even better.\n\nenable_hashjoin = on\n----------------------------------\nQUERY PLAN\nSort  (cost=10113.35..10122.02 rows=3467 width=48) (actual time=1203.000..1203.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Nested Loop  (cost=8711.19..9909.50 rows=3467\nwidth=48) (actual time=1156.000..1203.000 rows=3467 loops=1)\n        ->  Index Scan using\nlte_user_pkey on lte_user s  (cost=0.00..3.02 rows=1 width=16)\n(actual time=0.000..0.000 rows=1 loops=1)\n              Index Cond: (617004 = user_id)\n        ->  Hash Join \n(cost=8711.19..9776.46 rows=3467 width=40) (actual\ntime=1156.000..1187.000 rows=3467 loops=1)\n              Hash Cond: (\"outer\".targetid = \"inner\".user_id)\n             \n->  Seq Scan on candidates617004 c  (cost=0.00..76.34\nrows=3467 width=32) (actual time=0.000..16.000 rows=3467 loops=1)\n                   \nFilter: (sourceid = 617004)\n             \n->  Hash  (cost=8012.55..8012.55 rows=279455 width=16)\n(actual time=1141.000..1141.000 rows=0 loops=1)\n                   \n->  Seq Scan on lte_user t  (cost=0.00..8012.55\nrows=279455 width=16) (actual time=0.000..720.000 rows=279395 loops=1)\nTotal runtime: 1218.000 ms\n\nenable_hashjoin = off\n-----------------------------------\nQUERY PLAN\nSort  (cost=10942.56..10951.22 rows=3467 width=48) (actual time=188.000..188.000 rows=3467 loops=1)\n  Sort Key: c.sourceid, c.targetid\n  ->  Nested Loop  (cost=0.00..10738.71 rows=3467 width=48) (actual time=0.000..188.000 rows=3467 loops=1)\n        ->  Index Scan using\nlte_user_pkey on lte_user s  (cost=0.00..3.02 rows=1 width=16)\n(actual time=0.000..0.000 rows=1 loops=1)\n              Index Cond: (617004 = user_id)\n        ->  Nested\nLoop  (cost=0.00..10605.67 rows=3467 width=40) (actual\ntime=0.000..157.000 rows=3467 loops=1)\n             \n->  Seq Scan on candidates617004 c  (cost=0.00..76.34\nrows=3467 width=32) (actual time=0.000..15.000 rows=3467 loops=1)\n                   \nFilter: (sourceid = 617004)\n             \n->  Index Scan using lte_user_pkey on lte_user t \n(cost=0.00..3.02 rows=1 width=16) (actual time=0.028..0.037 rows=1\nloops=3467)\n                   \nIndex Cond: (\"outer\".targetid = t.user_id)\nTotal runtime: 188.000 ms\n\nThanks,\nMeeteshOn 8/2/05, Tom Lane <\[email protected]> wrote:\nMeetesh Karia <[email protected]> writes:> Sure. The lte_user table is just a collection of users. 
user_id is assigned=\n> uniquely using a sequence. During some processing, we create a candidates=\n> table (candidates617004 in our case). This table is usually a temp table.=> sourceid is a user_id (in this case it is always 617004) and targetid is=20> also a user_id (2860 distinct values out of 3467). The rest of the=20\n> information is either only used in the select clause or not used at all=20> during this processing.If you know that sourceid has only a single value, it'd probably behelpful to call out that value in the query, ie,\n        where ... AND c.sourceId = 617004 ...                        regards,\ntom lane", "msg_date": "Wed, 3 Aug 2005 14:48:38 +0200", "msg_from": "Meetesh Karia <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner incorrectly choosing seq scan over index scan" } ]
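The per-query workaround discussed in this thread can be scoped to a single transaction instead of the whole server. A minimal sketch, assuming a reasonably recent server where SET LOCAL reverts at commit; the table, column and constant names are the ones from the thread, but the select list is abbreviated and would need to match the real query:

BEGIN;
SET LOCAL enable_hashjoin = off;    -- only affects this transaction
SET LOCAL random_page_cost = 1.5;   -- optional: the planner nudge tried earlier in the thread
SELECT c.sourceid, c.targetid, s.user_id, t.user_id
  FROM candidates617004 c
  JOIN lte_user s ON s.user_id = c.sourceid
  JOIN lte_user t ON t.user_id = c.targetid
 WHERE c.sourceid = 617004
 ORDER BY c.sourceid, c.targetid;
COMMIT;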
[ { "msg_contents": "The short question: \n\nIs there any ways to give postgresql a hint that a\nparticular SQL call should be run at lower priority? Since every db\nconnection has a pid, I can manually run \"renice\" to scheduele it by the OS\n- but of course I can't do it manually all the time.\n\nThe long story:\n\nWe have a constantly growing database, and hence also a constantly growing\nload on the database server. A hardware upgrade has for different reasons\nbeen postponed, and it's still beeing postponed.\n\nWe were hitting the first capacity problems in June, though so far I've\nmanaged to keep the situation in check by tuning the configuration, adding\nindices, optimizing queries, doing cacheing in the application, and at one\npoint in the code I'm even asking the database for \"explain plan\", grepping\nout the estimated cost number, and referring the user to take contact with\nthe IT-dept if he really needs the report. But I digress.\n\nStill there are lots of CPU power available - normally the server runs with\n50-80% of the CPUs idle, it's just the spikes that kills us.\n\nWe basically have two kind of queries that are significant - an ever-ongoing\n\"critical\" rush of simple queries, both reading and writing to the database,\nplus some few heavy \"non-critical\" read-only queries that may cause\nsignificant iowait. The problem comes when we are so unlucky that two or\nthree heavy queries are run simultaneously; we get congestion problems -\ninstead of the applications just running a bit slower, they run _much_\nslower.\n\nIdeally, if it was trivial to give priorities, it should be possible to keep\nthe CPUs running at 100% for hours without causing critical problems...?\n\n-- \nTobias Brox, +47-91700050\nTromso, Norway\n", "msg_date": "Tue, 2 Aug 2005 18:04:34 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "\"nice\"/low priority Query" }, { "msg_contents": "Tobias Brox <[email protected]> writes:\n> Is there any ways to give postgresql a hint that a\n> particular SQL call should be run at lower priority? Since every db\n> connection has a pid, I can manually run \"renice\" to scheduele it by the OS\n> - but of course I can't do it manually all the time.\n\nAnd it won't help you anyway, because renice only affects CPU priority\nnot I/O scheduling ... which, by your description, is the real problem.\n\nI think the only thing that's likely to help much is trying to arrange\nthat the \"simple\" queries only need to touch pages that are already in\nmemory. Some playing around with shared_buffer sizing might help.\nAlso, if you're not on PG 8.0.*, an update might help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 02 Aug 2005 12:19:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"nice\"/low priority Query " }, { "msg_contents": "Tom Lane wrote:\n> Tobias Brox <[email protected]> writes:\n> \n>>Is there any ways to give postgresql a hint that a\n>>particular SQL call should be run at lower priority? Since every db\n>>connection has a pid, I can manually run \"renice\" to scheduele it by the OS\n>>- but of course I can't do it manually all the time.\n> \n> And it won't help you anyway, because renice only affects CPU priority\n> not I/O scheduling ... which, by your description, is the real problem.\n> \n> I think the only thing that's likely to help much is trying to arrange\n> that the \"simple\" queries only need to touch pages that are already in\n> memory. 
Some playing around with shared_buffer sizing might help.\n> Also, if you're not on PG 8.0.*, an update might help.\n\nWould it be useful to be able to re-use the vacuum_cost_xxx settings in \n8.0 for this sort of thing? I'm thinking a long-running report query \nisn't that different from a vacuum.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 02 Aug 2005 17:54:52 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"nice\"/low priority Query" }, { "msg_contents": "On Tue, Aug 02, 2005 at 12:19:30PM -0400, Tom Lane wrote:\n> Tobias Brox <[email protected]> writes:\n> > Is there any ways to give postgresql a hint that a\n> > particular SQL call should be run at lower priority? Since every db\n> > connection has a pid, I can manually run \"renice\" to scheduele it by the OS\n> > - but of course I can't do it manually all the time.\n> \n> And it won't help you anyway, because renice only affects CPU priority\n> not I/O scheduling ... which, by your description, is the real problem.\n\nActually, from what I've read 4.2BSD actually took priority into account\nwhen scheduling I/O. I don't know if this behavior is still present in\nFreeBSD or the like, though. So depending on the OS, priority could play\na role in determining I/O scheduling.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Tue, 2 Aug 2005 12:25:50 -0500", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"nice\"/low priority Query" }, { "msg_contents": "[Tobias Brox - Tue at 06:04:34PM +0200]\n> (...) and at one\n> point in the code I'm even asking the database for \"explain plan\", grepping\n> out the estimated cost number, and referring the user to take contact with\n> the IT-dept if he really needs the report. But I digress.\n\nI just came to think about some more \"dirty\" tricks I can do. I have turned\non stats collection in the configuration; now, if I do:\n\n select count(*) from pg_stat_activity where not current_query like '<IDLE>%';\n \nor, eventually:\n\n select count(*) from pg_stat_activity \n where not current_query like '<IDLE>%' and query_start+'1 second'<now();\n\nit will give a hint about how busy the database server is, thus I can\neventually let the application sleep and retry if there are any other heavy\nqueries in progress.\n\n-- \nTobias Brox, +47-91700050\nNordicbet, IT dept\n", "msg_date": "Tue, 2 Aug 2005 21:59:15 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"nice\"/low priority Query" }, { "msg_contents": "Tobias Brox wrote:\n> [Tobias Brox - Tue at 06:04:34PM +0200]\n> \n>>(...) and at one\n>>point in the code I'm even asking the database for \"explain plan\", grepping\n>>out the estimated cost number, and referring the user to take contact with\n>>the IT-dept if he really needs the report. But I digress.\n> \n> \n> I just came to think about some more \"dirty\" tricks I can do. 
I have turned\n> on stats collection in the configuration; now, if I do:\n> \n> select count(*) from pg_stat_activity where not current_query like '<IDLE>%';\n> \n> or, eventually:\n> \n> select count(*) from pg_stat_activity \n> where not current_query like '<IDLE>%' and query_start+'1 second'<now();\n> \n> it will give a hint about how busy the database server is, thus I can\n> eventually let the application sleep and retry if there are any other heavy\n> queries in progress.\n\nOr - create a table with an estimated_cost column, when you start a new \n\"heavy\" query, insert that query's cost, then sleep \nSUM(estimated_cost)/100 secs or something. When the query ends, delete \nthe cost-row.\n\nHmm - actually rather than dividing by 100, perhaps make it a tunable value.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 03 Aug 2005 09:53:29 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"nice\"/low priority Query" }, { "msg_contents": "Jim C. Nasby wrote:\n> Actually, from what I've read 4.2BSD actually took priority into account\n> when scheduling I/O.\n\nFWIW, you can set I/O priority in recent versions of the Linux kernel \nusing ionice, which is part of RML's schedutils package (which was \nrecently merged into util-linux).\n\n-Neil\n", "msg_date": "Thu, 04 Aug 2005 13:32:39 -0400", "msg_from": "Neil Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"nice\"/low priority Query" } ]
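A rough sketch combining the two ideas above (the pg_stat_activity busy check and Richard's estimated-cost bookkeeping table). The heavy_queries table and its columns are invented for illustration; the pg_stat_activity columns are the same ones used in the thread's own queries:

-- hypothetical bookkeeping table for in-flight heavy reports
CREATE TABLE heavy_queries (
    procpid        integer PRIMARY KEY,
    estimated_cost numeric NOT NULL,
    started_at     timestamptz NOT NULL DEFAULT now()
);

-- how much estimated work is already running?
SELECT coalesce(sum(estimated_cost), 0) AS cost_in_flight
  FROM heavy_queries;

-- how many backends have been busy for more than a second?
SELECT count(*)
  FROM pg_stat_activity
 WHERE current_query NOT LIKE '<IDLE>%'
   AND query_start + interval '1 second' < now();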
[ { "msg_contents": "Dumping a database which contains a table with a bytea column takes\napproximately 25 hours and 45 minutes. The database has 26 tables in\nit. The other 25 tables take less than 5 minutes to dump so almost all\ntime is spent dumping the bytea table.\n\nprd1=# \\d ybnet.ebook_master;\n Table \"ybnet.ebook_master\"\n Column | Type | Modifiers\n--------------+---------+-----------\n region_key | integer | not null\n book_key | integer | not null\n pub_sequence | integer | not null\n section_code | integer | not null\n pagenbr | integer | not null\n pdffile | bytea |\nIndexes:\n \"ebook_master_pkey\" PRIMARY KEY, btree (book_key, pub_sequence,\nsection_code, pagenbr, region_key)\nForeign-key constraints:\n \"FK1_book_year\" FOREIGN KEY (book_key, pub_sequence, region_key)\nREFERENCES ybnet.book_year(book_key, pub_sequence, region_key)\n \"FK1_ebook_section\" FOREIGN KEY (section_code) REFERENCES\nybnet.ebook_section(sectioncode)\nTablespace: \"ebook\"\n\nThe tablespace ebook is 65504295 bytes in size and the ebook_master\ntable has 61-1GB files associated to it.\n\nThe command to dump the database is:\n\npg_dump --file=$DUMP_FILE --format=c --data-only --verbose\n--host=ybcdrdbp01 $DATABASE\n\nI also perform a hot backup of this database using pg_start_backup(),\ntar, and pg_stop_backup(). It takes only 20 minutes to create a tar\nball of the entire 62GB. I like the speed of this method, but it does\nnot allow me to restore 1 table at a time.\n\nThe version of postgres is PostgreSQL 8.0.0 on i686-pc-linux-gnu,\ncompiled by GCC gcc (GCC) 3.2.2\n\nThe machine has 4 Xeon 3.00 GHz processors with hyper-threading on and\n4GB of memory. Postgres is supported by two file systems connected to\nan EMC SAN disk array. One 2 GB one for the log files and a second 500\nGB one for the data and indexes. All output files for the backup files\nare placed onto the 500 GB volume group and then backed up to an\nexternal storage manager.\n\nPortions of the config file are:\n\nshared_buffers = 16384\nwork_mem = 8192\nmaintenance_work_mem = 16384\n\nmax_fsm_pages = 512000\nmax_fsm_relations = 1000\nfsync = true\n\n# - Checkpoints -\ncheckpoint_segments = 20\n\n# - Planner Cost Constants -\neffective_cache_size = 262144\nrandom_page_cost = 3\n\nI am looking for ideas for making the backup of the above table much\nfaster.\n", "msg_date": "Tue, 2 Aug 2005 13:57:50 -0500", "msg_from": "\"Sailer, Denis (YBUSA-CDR)\" <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump for table with bytea takes a long time" } ]
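When a single bytea table dominates pg_dump time, one thing sometimes tried is to dump that table separately with COPY ... BINARY (which skips the text escaping of the bytea values) and let pg_dump handle everything else. Only a sketch: the output path is made up, a server-side COPY TO a file needs superuser rights and writes as the server user, and binary COPY output is less portable across versions and architectures than pg_dump's format:

-- server-side binary copy of just the big table (path is illustrative)
COPY ybnet.ebook_master TO '/backup/ebook_master.copy' WITH BINARY;

-- reload later with:
COPY ybnet.ebook_master FROM '/backup/ebook_master.copy' WITH BINARY;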
[ { "msg_contents": "I have in my possession some performance tuning documents authored by Bruce\nMomjian, Josh Berkus, and others. They give good information on utilities to\nuse (like ipcs, sar, vmstat, etc) to evaluate disk, memory, etc. performance\non Unix-based systems.\n\nProblem is, I have applications running on Windows 2003, and have worked\nmostly on Unix before. Was wondering if anyone knows where there might be a\nWindows performance document that tells what to use / where to look in\nWindows for some of this data. I am thinking that I may not be seeing what I\nneed in perfmon or the Windows task manager.\n\nWant to answer questions like:\n How much memory is being used for disk buffer cache?\n How do I lock shared memory for PostgreSQL (if possible at all)?\n How do I determine if SWAP (esp. page-in) activity is hurting me?\n Does Windows use a 'unified buffer cache' or not?\n How do I determine how much space is required to do most of my sorts in\nRAM?\n\n", "msg_date": "Wed, 3 Aug 2005 09:15:34 -0400", "msg_from": "\"Lane Van Ingen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Is There A Windows Version of Performance Tuning Documents?" }, { "msg_contents": "Lane Van Ingen wrote:\n> I have in my possession some performance tuning documents authored by Bruce\n> Momjian, Josh Berkus, and others. They give good information on utilities to\n> use (like ipcs, sar, vmstat, etc) to evaluate disk, memory, etc. performance\n> on Unix-based systems.\n>\n> Problem is, I have applications running on Windows 2003, and have worked\n> mostly on Unix before. Was wondering if anyone knows where there might be a\n> Windows performance document that tells what to use / where to look in\n> Windows for some of this data. I am thinking that I may not be seeing what I\n> need in perfmon or the Windows task manager.\n>\n> Want to answer questions like:\n> How much memory is being used for disk buffer cache?\n> How do I lock shared memory for PostgreSQL (if possible at all)?\n> How do I determine if SWAP (esp. page-in) activity is hurting me?\n> Does Windows use a 'unified buffer cache' or not?\n> How do I determine how much space is required to do most of my sorts in\nRAM?\n>\n\nI don't know of any specific documentation. I would mention the\nTask Manager as the first place I would look (Ctrl+Shift+Esc, or right\nclick on the task bar).\nYou can customize the columns that it shows in the process view, so you\ncan get an idea if something is paging, how much I/O it is using, etc.\n\nI'm sure there are other better tools, but this one is pretty easy to\nget to, and shows quite a bit.\n\nJohn\n=:->", "msg_date": "Wed, 03 Aug 2005 10:07:47 -0500", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is There A Windows Version of Performance Tuning Documents?" } ]
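Perfmon and Task Manager only show the operating-system side; a couple of server-side checks can complement them and are not Windows-specific. A sketch: SHOW reports the memory settings the questions above refer to, and the buffer statistics view only returns useful numbers if stats_block_level = true in postgresql.conf:

SHOW shared_buffers;
SHOW work_mem;   -- called sort_mem on 7.x releases

-- rough cache hit versus read counts as PostgreSQL itself sees them
SELECT sum(heap_blks_hit)  AS blocks_hit,
       sum(heap_blks_read) AS blocks_read
  FROM pg_statio_user_tables;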
[ { "msg_contents": "Does postgres support indexed views/materialised views that some of the \nother databases support?\n\nThanks\nPrasanna S\n", "msg_date": "Thu, 4 Aug 2005 13:54:32 +0530", "msg_from": "prasanna s <[email protected]>", "msg_from_op": true, "msg_subject": "Indexed views." }, { "msg_contents": "No, unless you use some custom triggers.\n\nprasanna s wrote:\n> Does postgres support indexed views/materialised views that some of the \n> other databases support?\n> \n> Thanks\n> Prasanna S\n\n", "msg_date": "Thu, 04 Aug 2005 16:37:14 +0800", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexed views." }, { "msg_contents": "prasanna s wrote:\n\n> Does postgres support indexed views/materialised views that some of \n> the other databases support?\n>\n> Thanks\n> Prasanna S\n\nHi!\n\nIt is not supported, but perhaps this will help you: \nhttp://www.varlena.com/varlena/GeneralBits/Tidbits/matviews.html\n\n", "msg_date": "Thu, 04 Aug 2005 10:52:09 +0200", "msg_from": "Laszlo Hornyak <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexed views." } ]
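A minimal, hand-rolled example of the trigger-maintained approach mentioned in the replies (in the spirit of the varlena write-up linked above). The orders/order_totals tables and the function name are invented for illustration, only INSERT is handled (updates and deletes would need their own triggers), and plpgsql must already be installed in the database:

-- assumes a base table orders(customer_id integer, amount numeric, ...)
CREATE TABLE order_totals (
    customer_id integer PRIMARY KEY,
    total       numeric NOT NULL DEFAULT 0
);

CREATE FUNCTION order_totals_sync() RETURNS trigger AS '
BEGIN
    UPDATE order_totals SET total = total + NEW.amount
     WHERE customer_id = NEW.customer_id;
    IF NOT FOUND THEN
        INSERT INTO order_totals (customer_id, total)
        VALUES (NEW.customer_id, NEW.amount);
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER order_totals_sync
    AFTER INSERT ON orders
    FOR EACH ROW EXECUTE PROCEDURE order_totals_sync();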
[ { "msg_contents": "[[I'm posting this on behalf of my co-worker who cannot post to this \nlist at the moment]]\n\nHi,\n\nI had installed PostgreSQL on a 4-way AMD Opteron 875 (dual core) and \nthe performance isn't on the expected level.\n\nDetails:\nThe \"old\" server is a 4-way XEON MP 3.0 GHz with 4MB L3 cache, 32 GB RAM \n(PC1600) and local FC-RAID 10. Hyper-Threading is off. (DL580)\nThe \"old\" server is using Red Hat Enterprise Linux 3 Update 5.\nThe \"new\" server is a 4-way Opteron 875 with 1 MB L2 cache, 32 GB RAM \n(PC3200) and the same local FC-RAID 10. (HP DL585)\nThe \"new\" server is using Red Hat Enterprise Linux 4 (with the latest \nx86_64 kernel from Red Hat - 2.6.9-11.ELsmp #1 SMP Fri May 20 18:25:30 \nEDT 2005 x86_64)\nI use PostgreSQL version 8.0.3.\n\nThe issue is that the Opteron is slower as the XEON MP under high load. \nI have created a test with parallel queries which are typical for my \napplication. The queries are in a range of small queries (0.1 seconds) \nand larger queries using join (15 seconds).\nThe test starts parallel clients. Each clients runs the queries in a \nrandom order. The test takes care that a client use always the same \nrandom order to get valid results.\n\nHere are the number of queries which the server has finished in a fix \nperiod of time.\nI used PostgreSQL 8.1 snapshot from last week compiled as 64bit binary \nfor DL585-64bit.\nI used PostgreSQL 8.0.3 compiled as 32bit binary for DL585-32bit and DL580.\nDuring the tests everything which is needed is in the file cache. I \ndidn't have read activity.\nContext switch spikes are over 50000 during the test on both server. My \nfeeling is that the XEON has a tick more context switches.\n\n\n\nPostgreSQL params:\nmax_locks_per_transaction = 256\nshared_buffers = 40000\neffective_cache_size = 3840000\nwork_mem = 300000\nmaintenance_work_mem = 512000\nwal_buffers = 32\ncheckpoint_segments = 24\n\n\nI was expecting two times more queries on the DL585. The DL585 with \nPostgreSQL 8.0.3 32bit does meltdown earlier as the XEON in production \nuse. Please compare 4 clients and 8 clients. With 4 clients the Opteron \nis in front and with 8 clients the XEON doesn't meltdown that much as \nthe Opteron.\n\nI don't have any idea what cause this. Benchmarks like SAP's SD 2-tier \nshowing that the DL585 can handle nearly three times more load as the \nDL580 with XEON 3.0. We choose the 4-way Opteron 875 based on such \nbenchmark to replace the 4-way XEON MP.\n\nDoes anyone have comments or ideas on which I have to focus my work?\n\nI guess, the shared buffer cause the meltdown when to many clients are \naccessing the same data.\nI didn't understand why the 4-way XEON MP 3.0 can deal with this better \nas the 4-way Opteron 875.\nThe system load on the Opteron is never over 3.0. 
The XEON MP has a load \nup to 4.0.\n\nShould I try other settings for PostgreSQL in postgresql.conf?\nShould I try other setting for the compilation?\n\nI will compile the latest PostgreSQL 8.1 snapshot for 32bit to evaluate \nthe new shared buffer code from Tom.\nI think, the 64bit is slow because my queries are CPU intensive.\n\nCan someone provide a commercial support contact for this issue?\n\nSven.\n\n\n\n\n\n\n\n\n[[I'm\nposting this on behalf of my co-worker who cannot post to this list at\nthe moment]]\n\nHi,\n\n\nI had installed PostgreSQL on a 4-way AMD Opteron 875 (dual core) and\nthe performance isn't on the expected level.\n\n\nDetails:\n\nThe \"old\" server is a 4-way XEON MP 3.0 GHz with 4MB L3 cache, 32 GB\nRAM (PC1600)  and local FC-RAID 10. Hyper-Threading is off. (DL580)\n\nThe \"old\" server is using Red Hat Enterprise Linux 3 Update 5.\n\nThe \"new\" server is a 4-way Opteron 875 with 1 MB L2 cache, 32 GB RAM\n(PC3200) and the same local FC-RAID 10. (HP DL585)\n\nThe \"new\" server is using Red Hat Enterprise Linux 4 (with the latest\nx86_64 kernel from Red Hat - 2.6.9-11.ELsmp #1 SMP Fri May 20 18:25:30\nEDT 2005 x86_64)\n\nI use PostgreSQL version 8.0.3.\n\n\nThe issue is that the Opteron is slower as the XEON MP under high load.\nI have created a test with parallel queries which are typical for my\napplication. The queries are in a range of small queries (0.1 seconds)\nand larger queries using join (15 seconds).\n\nThe test starts parallel clients. Each clients runs the queries in a\nrandom order. The test takes care that a client use always the same\nrandom order to get valid results.\n\n\nHere are the number of queries which the server has finished in a fix\nperiod of time.\n\nI used PostgreSQL 8.1 snapshot from last week compiled as 64bit binary\nfor DL585-64bit.\n\nI used PostgreSQL 8.0.3 compiled as 32bit binary for DL585-32bit and\nDL580.\n\nDuring the tests everything which is needed is in the file cache. I\ndidn't have read activity.\n\nContext switch  spikes are over 50000 during the test on both server.\nMy feeling is that the XEON has a tick more context switches.\n\n\n\n\n\nPostgreSQL params:\n\nmax_locks_per_transaction = 256\n\nshared_buffers = 40000\n\neffective_cache_size = 3840000\n\nwork_mem = 300000\n\nmaintenance_work_mem = 512000\n\nwal_buffers = 32\n\ncheckpoint_segments = 24\n\n\n\nI was expecting two times more queries on the DL585. The DL585 with\nPostgreSQL 8.0.3 32bit does meltdown earlier as the XEON in production\nuse. Please compare 4 clients and 8 clients. With 4 clients the Opteron\nis in front and with 8 clients the XEON doesn't meltdown that much as\nthe Opteron.\n\n\nI don't have any idea what cause this. Benchmarks like SAP's SD 2-tier\nshowing that the DL585 can handle nearly three times more load as the\nDL580 with XEON 3.0. We choose the 4-way Opteron 875 based on such\nbenchmark to replace the 4-way XEON MP.\n\n\nDoes anyone have comments or ideas on which I have to focus my work?\n\n\nI guess, the shared buffer cause the meltdown when to many clients are\naccessing the same data.\n\nI didn't understand why the 4-way XEON MP 3.0 can deal with this better\nas the 4-way Opteron 875.\n\nThe system load on the Opteron is never over 3.0. 
The XEON MP has a\nload up to 4.0.\n\n\nShould I try other settings for PostgreSQL in postgresql.conf?\n\nShould I try other setting for the compilation?\n\n\nI will compile the latest PostgreSQL 8.1 snapshot for 32bit to evaluate\nthe new shared buffer code from Tom.\n\nI think, the 64bit is slow because my queries are CPU intensive.\n\n\nCan someone provide a commercial support contact for this issue?\n\n\nSven.", "msg_date": "Fri, 05 Aug 2005 13:11:31 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems on 4-way AMD Opteron 875 (dual core)" }, { "msg_contents": "On Fri, Aug 05, 2005 at 01:11:31PM +0200, Dirk Lutzeb�ck wrote:\n>I will compile the latest PostgreSQL 8.1 snapshot for 32bit to evaluate \n>the new shared buffer code from Tom.\n>I think, the 64bit is slow because my queries are CPU intensive.\n\nHave you actually tried it or are you guessing? If you're guessing, then\ncompile it as a 64 bit binary and benchmark that.\n\nMike Stone\n", "msg_date": "Fri, 05 Aug 2005 08:27:19 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4-way AMD Opteron 875 (dual core)" }, { "msg_contents": "\nMichael Stone wrote:\n\n> On Fri, Aug 05, 2005 at 01:11:31PM +0200, Dirk Lutzeb�ck wrote:\n>\n>> I will compile the latest PostgreSQL 8.1 snapshot for 32bit to \n>> evaluate the new shared buffer code from Tom.\n>> I think, the 64bit is slow because my queries are CPU intensive.\n>\n>\n> Have you actually tried it or are you guessing? If you're guessing, then\n> compile it as a 64 bit binary and benchmark that.\n>\n> Mike Stone\n\nWe tried it. 64bit 8.1dev was slower than 32bit 8.0.3.\n\nDirk\n", "msg_date": "Fri, 05 Aug 2005 14:56:23 +0200", "msg_from": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems on 4-way AMD Opteron 875 (dual" }, { "msg_contents": "=?ISO-8859-1?Q?Dirk_Lutzeb=E4ck?= <[email protected]> writes:\n> Here are the number of queries which the server has finished in a fix \n> period of time.\n\nUh, you never actually supplied any numbers (or much of any other\nspecifics about what was tested, either).\n\nMy first reaction is \"don't vary more than one experimental parameter at\na time\". There is no way to tell whether the discrepancy is due to the\ndifferent hardware, different Postgres version, or 32-bit vs 64-bit\nbuild.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 05 Aug 2005 09:53:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems on 4-way AMD Opteron 875 (dual core) " } ]
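Tom's reply asks for concrete numbers with only one variable changed at a time. A small psql snippet that can be run identically on both boxes before posting results, so the build and the compared settings are pinned down; the setting list simply mirrors the ones quoted above:

SELECT version();

SELECT name, setting
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                'wal_buffers', 'checkpoint_segments',
                'effective_cache_size', 'max_locks_per_transaction');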
[ { "msg_contents": "I've got similiar queries that I think should be evaluated (as\ndisplayed through 'explain') the same, but don't.\nHopefully this is the rigth place to send such a question and one of\nyou can help explain this to me.\n\nThe Tables:\n Connection - 1.2 million entries, about 60 megs, 3 integer fields\nthat link two tables together (i.e. an identifier and two foreign\nkeys).\n has an index on the identifier and either of\nthe foreign keys.\n rtmessagestate - very small, 5 entries\n rtmessage - pretty big, 80,000 entries\n\n\nThe Queries:\n \nselect rtmessagestate.* from rtmessagestate, connection where\nconnection_registry_id = 40105 and obj1 = 73582 and obj2 =\nrtmessagestate.id;\n returns 1 in 13.7 ms\n \nselect rtmessage.id, subject from rtmessage, connection where\nconnection_registry_id = 40003 and obj1 = 4666 and obj2 =\nrtmessage.id;\n returns 12 in 2 ms\n\nSome more possibly important details:\n entries in Connection with connection_registry_id = 40105: 30,000\n entries with this id and obj1 = 73582: 1\n entries in Connection with connection_registry_id = 40003: 6,000\n entries with this id and obj1 = 4666: 20\n \nbut as I said before, there is an btree index on (connection_registry_id, obj1)\n\nExplain:\nThe first query, breaks down as:\nHash Join (cost=5.96..7.04 rows=1 width=14)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n -> Hash (cost=5.96..5.96 rows=1 width=4)\n -> Index Scan using connection_regid_obj1_index on\nconnection (cost=0.00..5.96 rows=1 width=4)\n Index Cond: ((connection_registry_id = 40105) AND\n(obj1 = 73582))(6 rows)\n\n\nWhile the second query is:\n Nested Loop (cost=0.00..11.62 rows=2 width=38)\n -> Index Scan using connection_regid_obj1_index on connection \n(cost=0.00..5.96 rows=1 width=4)\n Index Cond: ((connection_registry_id = 40003) AND (obj1 = 4666))\n -> Index Scan using rtmessage_pkey on rtmessage \n(cost=0.00..5.65 rows=1 width=38)\n Index Cond: (\"outer\".obj2 = rtmessage.id)\n (5 rows)\n\nActually running these queries shows that the second one (nested loop)\nis much faster than the hash join, presumably because of hash startup\ncosts. Any ideas how I can make them both use the nested loop. I\nassume that this would be the fastest for both.\n\nOddly enough, running the 1st query (rtmessagestate) as two queries or\nwith a sub query is way faster than doing the join.\n\nAnd yes, I realize this schema may not be the most efficient for these\nexamples, but it seems to be the most flexible. I'm working on some\nschema improvements also\nbut if I could understand why this is slow that woudl probably help also.\n\nThanks for you help,\n\nRhett\n", "msg_date": "Fri, 5 Aug 2005 11:35:03 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Why hash join instead of nested loop?" }, { "msg_contents": "Rhett,\n\nPlease post the explain analyze for both queries. 
From that we can see the \npredicted and the actual costs of them.\n\nRegards,\nOtto\n\n\n----- Original Message ----- \nFrom: \"Rhett Garber\" <[email protected]>\nTo: <[email protected]>\nSent: Friday, August 05, 2005 8:35 PM\nSubject: [PERFORM] Why hash join instead of nested loop?\n\n\nI've got similiar queries that I think should be evaluated (as\ndisplayed through 'explain') the same, but don't.\nHopefully this is the rigth place to send such a question and one of\nyou can help explain this to me.\n\nThe Tables:\n Connection - 1.2 million entries, about 60 megs, 3 integer fields\nthat link two tables together (i.e. an identifier and two foreign\nkeys).\n has an index on the identifier and either of\nthe foreign keys.\n rtmessagestate - very small, 5 entries\n rtmessage - pretty big, 80,000 entries\n\n\nThe Queries:\n\nselect rtmessagestate.* from rtmessagestate, connection where\nconnection_registry_id = 40105 and obj1 = 73582 and obj2 =\nrtmessagestate.id;\n returns 1 in 13.7 ms\n\nselect rtmessage.id, subject from rtmessage, connection where\nconnection_registry_id = 40003 and obj1 = 4666 and obj2 =\nrtmessage.id;\n returns 12 in 2 ms\n\nSome more possibly important details:\n entries in Connection with connection_registry_id = 40105: 30,000\n entries with this id and obj1 = 73582: 1\n entries in Connection with connection_registry_id = 40003: 6,000\n entries with this id and obj1 = 4666: 20\n\nbut as I said before, there is an btree index on (connection_registry_id, \nobj1)\n\nExplain:\nThe first query, breaks down as:\nHash Join (cost=5.96..7.04 rows=1 width=14)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n -> Hash (cost=5.96..5.96 rows=1 width=4)\n -> Index Scan using connection_regid_obj1_index on\nconnection (cost=0.00..5.96 rows=1 width=4)\n Index Cond: ((connection_registry_id = 40105) AND\n(obj1 = 73582))(6 rows)\n\n\nWhile the second query is:\n Nested Loop (cost=0.00..11.62 rows=2 width=38)\n -> Index Scan using connection_regid_obj1_index on connection\n(cost=0.00..5.96 rows=1 width=4)\n Index Cond: ((connection_registry_id = 40003) AND (obj1 = \n4666))\n -> Index Scan using rtmessage_pkey on rtmessage\n(cost=0.00..5.65 rows=1 width=38)\n Index Cond: (\"outer\".obj2 = rtmessage.id)\n (5 rows)\n\nActually running these queries shows that the second one (nested loop)\nis much faster than the hash join, presumably because of hash startup\ncosts. Any ideas how I can make them both use the nested loop. I\nassume that this would be the fastest for both.\n\nOddly enough, running the 1st query (rtmessagestate) as two queries or\nwith a sub query is way faster than doing the join.\n\nAnd yes, I realize this schema may not be the most efficient for these\nexamples, but it seems to be the most flexible. I'm working on some\nschema improvements also\nbut if I could understand why this is slow that woudl probably help also.\n\nThanks for you help,\n\nRhett\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\n", "msg_date": "Sat, 6 Aug 2005 00:13:49 +0200", "msg_from": "=?ISO-8859-1?Q?Havasv=F6lgyi_Ott=F3?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "On 8/5/05, Havasvölgyi Ottó <[email protected]> wrote:\n\n> Please post the explain analyze for both queries. 
From that we can see the\n> predicted and the actual costs of them.\n\n\n> select rtmessagestate.* from rtmessagestate, connection where\n> connection_registry_id = 40105 and obj1 = 73582 and obj2 =\n> rtmessagestate.id;\n\nHash Join (cost=5.96..7.04 rows=1 width=14) (actual\ntime=10.591..10.609 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n(actual time=0.011..0.022 rows=5 loops=1)\n -> Hash (cost=5.96..5.96 rows=1 width=4) (actual\ntime=0.109..0.109 rows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on\nconnection (cost=0.00..5.96 rows=1 width=4) (actual time=0.070..0.076\nrows=1 loops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1\n= 73582)) Total runtime: 11.536 ms\n(7 rows)\n\n> select rtmessage.id, subject from rtmessage, connection where\n> connection_registry_id = 40003 and obj1 = 4666 and obj2 =\n> rtmessage.id;\n\nNested Loop (cost=0.00..11.62 rows=2 width=38) (actual\ntime=0.186..0.970 rows=12 loops=1)\n -> Index Scan using connection_regid_obj1_index on connection \n(cost=0.00..5.96 rows=1 width=4) (actual time=0.109..0.308 rows=12\nloops=1)\n Index Cond: ((connection_registry_id = 40003) AND (obj1 = 4666))\n -> Index Scan using rtmessage_pkey on rtmessage (cost=0.00..5.65\nrows=1 width=38) (actual time=0.032..0.039 rows=1 loops=12)\n Index Cond: (\"outer\".obj2 = rtmessage.id)\n Total runtime: 1.183 ms\n(6 rows)\n\nRhett\n", "msg_date": "Fri, 5 Aug 2005 16:16:51 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "Rhett Garber <[email protected]> writes:\n> Hash Join (cost=5.96..7.04 rows=1 width=14) (actual\n> time=10.591..10.609 rows=1 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".obj2)\n> -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n> (actual time=0.011..0.022 rows=5 loops=1)\n> -> Hash (cost=5.96..5.96 rows=1 width=4) (actual\n> time=0.109..0.109 rows=0 loops=1)\n> -> Index Scan using connection_regid_obj1_index on\n> connection (cost=0.00..5.96 rows=1 width=4) (actual time=0.070..0.076\n> rows=1 loops=1)\n> Index Cond: ((connection_registry_id = 40105) AND (obj1\n> = 73582)) Total runtime: 11.536 ms\n> (7 rows)\n\n[ scratches head... ] If the hash table build takes only 0.109 msec\nand loads only one row into the hash table, and the scan of\nrtmessagestate takes only 0.022 msec and produces only 5 rows, it is\nreal hard to see how the join takes 10.609 msec overall. Unless the id\nand obj2 columns are of a datatype with an incredibly slow equality\nfunction. What is the datatype involved here, anyway? And what PG\nversion are we speaking of?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 06 Aug 2005 01:03:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop? 
" }, { "msg_contents": "This is postgres 7.4.1\n\nAll the rows involved are integers.\n\nThanks,\n\nRhett\n\nOn 8/5/05, Tom Lane <[email protected]> wrote:\n> Rhett Garber <[email protected]> writes:\n> > Hash Join (cost=5.96..7.04 rows=1 width=14) (actual\n> > time=10.591..10.609 rows=1 loops=1)\n> > Hash Cond: (\"outer\".id = \"inner\".obj2)\n> > -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n> > (actual time=0.011..0.022 rows=5 loops=1)\n> > -> Hash (cost=5.96..5.96 rows=1 width=4) (actual\n> > time=0.109..0.109 rows=0 loops=1)\n> > -> Index Scan using connection_regid_obj1_index on\n> > connection (cost=0.00..5.96 rows=1 width=4) (actual time=0.070..0.076\n> > rows=1 loops=1)\n> > Index Cond: ((connection_registry_id = 40105) AND (obj1\n> > = 73582)) Total runtime: 11.536 ms\n> > (7 rows)\n> \n> [ scratches head... ] If the hash table build takes only 0.109 msec\n> and loads only one row into the hash table, and the scan of\n> rtmessagestate takes only 0.022 msec and produces only 5 rows, it is\n> real hard to see how the join takes 10.609 msec overall. Unless the id\n> and obj2 columns are of a datatype with an incredibly slow equality\n> function. What is the datatype involved here, anyway? And what PG\n> version are we speaking of?\n> \n> regards, tom lane\n>\n", "msg_date": "Mon, 8 Aug 2005 10:00:08 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "Rhett Garber <[email protected]> writes:\n> This is postgres 7.4.1\n> All the rows involved are integers.\n\nHmph. There is something really strange going on here. I tried to\nduplicate your problem in 7.4.*, thus:\n\nregression=# create table rtmessagestate(id int, f1 char(6));\nCREATE TABLE\nregression=# insert into rtmessagestate values(1,'z');\nINSERT 559399 1\nregression=# insert into rtmessagestate values(2,'z');\nINSERT 559400 1\nregression=# insert into rtmessagestate values(3,'z');\nINSERT 559401 1\nregression=# insert into rtmessagestate values(4,'z');\nINSERT 559402 1\nregression=# insert into rtmessagestate values(5,'z');\nINSERT 559403 1\nregression=# vacuum analyze rtmessagestate;\nVACUUM\nregression=# create table connection(connection_registry_id int, obj1 int, obj2 int);\nCREATE TABLE\nregression=# create index connection_regid_obj1_index on connection(connection_registry_id,obj1);\nCREATE INDEX\nregression=# insert into connection values(40105,73582,3);\nINSERT 559407 1\nregression=# explain analyze select rtmessagestate.* from rtmessagestate,connection where (connection_registry_id = 40105) AND (obj1 = 73582) and id = obj2;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=0.498..0.544 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14) (actual time=0.030..0.072 rows=5 loops=1)\n -> Hash (cost=4.83..4.83 rows=1 width=4) (actual time=0.305..0.305 rows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on connection (cost=0.00..4.83 rows=1 width=4) (actual time=0.236..0.264 rows=1 loops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1 = 73582))\n Total runtime: 1.119 ms\n(7 rows)\n\nThis duplicates your example as to plan and row counts:\n\n> Hash Join (cost=5.96..7.04 rows=1 width=14) (actual\n> time=10.591..10.609 rows=1 
loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".obj2)\n> -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n> (actual time=0.011..0.022 rows=5 loops=1)\n> -> Hash (cost=5.96..5.96 rows=1 width=4) (actual\n> time=0.109..0.109 rows=0 loops=1)\n> -> Index Scan using connection_regid_obj1_index on\n> connection (cost=0.00..5.96 rows=1 width=4) (actual time=0.070..0.076\n> rows=1 loops=1)\n> Index Cond: ((connection_registry_id = 40105) AND (obj1\n> = 73582)) Total runtime: 11.536 ms\n> (7 rows)\n\nMy machine is considerably slower than yours, to judge by the actual\nelapsed times in the scan nodes ... so why is it beating the pants\noff yours in the join step?\n\nCan you try the above script verbatim in a scratch database and see\nwhat you get? (Note it's worth trying the explain two or three\ntimes to be sure the values have settled out.)\n\nI'm testing a fairly recent 7.4-branch build (7.4.8 plus), so that's one\npossible reason for the discrepancy between my results and yours, but I\ndo not see anything in the 7.4 CVS logs that looks like it's related to\nhashjoin performance.\n\nI'd be interested to see results from other people using 7.4.* too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 08 Aug 2005 20:58:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop? " }, { "msg_contents": "On Mon, Aug 08, 2005 at 08:58:26PM -0400, Tom Lane wrote:\n> Hmph. There is something really strange going on here. I tried to\n> duplicate your problem in 7.4.*, thus:\n\nPostgreSQL 7.4.7 (Debian sarge):\n\n<create table and stuff, exactly the same as you>\n\nregression=# explain analyze select rtmessagestate.* from rtmessagestate,connection where (connection_registry_id = 40105) AND (obj1 = 73582) and id = obj2;\n\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=0.155..0.159 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14) (actual time=0.003..0.006 rows=5 loops=1)\n -> Hash (cost=4.83..4.83 rows=1 width=4) (actual time=0.026..0.026 rows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on connection (cost=0.00..4.83 rows=1 width=4) (actual time=0.011..0.012 rows=1 loops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1 = 73582))\n Total runtime: 0.215 ms\n(7 rows)\n\nThis is an Opteron (in 32-bit mode), though.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 9 Aug 2005 03:08:50 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop?" 
}, { "msg_contents": "On Mon, Aug 08, 2005 at 08:58:26PM -0400, Tom Lane wrote:\n> I'd be interested to see results from other people using 7.4.* too.\n\nI just built 7.4.1 on FreeBSD 4.11-STABLE and ran your test:\n\ntest=# explain analyze select rtmessagestate.* from rtmessagestate,connection where (connection_registry_id = 40105) AND (obj1 = 73582) and id = obj2;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=0.220..0.264 rows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14) (actual time=0.015..0.050 rows=5 loops=1)\n -> Hash (cost=4.83..4.83 rows=1 width=4) (actual time=0.103..0.103 rows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on connection (cost=0.00..4.83 rows=1 width=4) (actual time=0.070..0.081 rows=1 loops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1 = 73582))\n Total runtime: 0.495 ms\n(7 rows)\n\n-- \nMichael Fuhr\n", "msg_date": "Mon, 8 Aug 2005 23:11:48 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "On Mon, 2005-08-08 at 20:58, Tom Lane wrote:\n> I'd be interested to see results from other people using 7.4.* too.\n\n7.4.8:\n \nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=0.122..0.126\nrows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n(actual time=0.003..0.006 rows=5 loops=1)\n -> Hash (cost=4.83..4.83 rows=1 width=4) (actual time=0.021..0.021\nrows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on connection \n(cost=0.00..4.83 rows=1 width=4) (actual time=0.013..0.015 rows=1\nloops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1 =\n73582))\n Total runtime: 0.198 ms\n\n7.4.2:\n\n \nQUERY\nPLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=0.577..0.600\nrows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n(actual time=0.006..0.023 rows=5 loops=1)\n -> Hash (cost=4.83..4.83 rows=1 width=4) (actual time=0.032..0.032\nrows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on connection \n(cost=0.00..4.83 rows=1 width=4) (actual time=0.016..0.020 rows=1\nloops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1 =\n73582))\n Total runtime: 0.697 ms\n\n\n\t--Ian\n\n\n", "msg_date": "Tue, 09 Aug 2005 08:15:56 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "Ian Westmacott <[email protected]> writes:\n> On Mon, 2005-08-08 at 20:58, Tom Lane wrote:\n>> I'd be interested to see results from other people using 7.4.* too.\n\n> 7.4.8:\n> Total runtime: 0.198 ms\n\n> 7.4.2:\n> Total runtime: 0.697 ms\n\nJust to be clear: those are two different machines of different speeds,\nright? 
I don't believe we put any factor-of-three speedups into 7.4.*\nafter release ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Aug 2005 10:33:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop? " }, { "msg_contents": "Yes, sorry, two totally different machines. The 7.4.8\nrun was on a dual P4 3.2GHz, and the 7.4.2 run was on\na dual hyperthreaded Xeon 2.4GHz.\n\n\t--Ian\n\n\nOn Tue, 2005-08-09 at 10:33, Tom Lane wrote:\n> Ian Westmacott <[email protected]> writes:\n> > On Mon, 2005-08-08 at 20:58, Tom Lane wrote:\n> >> I'd be interested to see results from other people using 7.4.* too.\n> \n> > 7.4.8:\n> > Total runtime: 0.198 ms\n> \n> > 7.4.2:\n> > Total runtime: 0.697 ms\n> \n> Just to be clear: those are two different machines of different speeds,\n> right? I don't believe we put any factor-of-three speedups into 7.4.*\n> after release ;-)\n> \n> \t\t\tregards, tom lane\n\n", "msg_date": "Tue, 09 Aug 2005 10:43:33 -0400", "msg_from": "Ian Westmacott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "Duplicated your setup in a separate DB.\n\nAt least its reproducable for me.....\n\nI tested this on a Xeon 2 Ghz, 1 Gig Ram. Its running on some shared\nstorage array that I'm not sure the details of.\n\nMy production example also shows up on our production machine that is\nalmost the same hardware but has dual zeon and 6 gigs of ram.\n\nRhett\n\nHash Join (cost=4.83..5.91 rows=1 width=14) (actual time=7.148..7.159\nrows=1 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".obj2)\n -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n(actual time=0.007..0.015 rows=5 loops=1)\n -> Hash (cost=4.83..4.83 rows=1 width=4) (actual\ntime=0.055..0.055 rows=0 loops=1)\n -> Index Scan using connection_regid_obj1_index on\nconnection (cost=0.00..4.83 rows=1 width=4) (actual time=0.028..0.032\nrows=1 loops=1)\n Index Cond: ((connection_registry_id = 40105) AND (obj1\n= 73582)) Total runtime: 7.693 ms\n(7 rows)\n\n\nOn 8/8/05, Tom Lane <[email protected]> wrote:\n> Rhett Garber <[email protected]> writes:\n> > This is postgres 7.4.1\n> > All the rows involved are integers.\n> \n> Hmph. There is something really strange going on here. 
I tried to\n> duplicate your problem in 7.4.*, thus:\n> \n> regression=# create table rtmessagestate(id int, f1 char(6));\n> CREATE TABLE\n> regression=# insert into rtmessagestate values(1,'z');\n> INSERT 559399 1\n> regression=# insert into rtmessagestate values(2,'z');\n> INSERT 559400 1\n> regression=# insert into rtmessagestate values(3,'z');\n> INSERT 559401 1\n> regression=# insert into rtmessagestate values(4,'z');\n> INSERT 559402 1\n> regression=# insert into rtmessagestate values(5,'z');\n> INSERT 559403 1\n> regression=# vacuum analyze rtmessagestate;\n> VACUUM\n> regression=# create table connection(connection_registry_id int, obj1 int, obj2 int);\n> CREATE TABLE\n> regression=# create index connection_regid_obj1_index on connection(connection_registry_id,obj1);\n> CREATE INDEX\n> regression=# insert into connection values(40105,73582,3);\n> INSERT 559407 1\n> regression=# explain analyze select rtmessagestate.* from rtmessagestate,connection where (connection_registry_id = 40105) AND (obj1 = 73582) and id = obj2;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------\n> Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=0.498..0.544 rows=1 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".obj2)\n> -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14) (actual time=0.030..0.072 rows=5 loops=1)\n> -> Hash (cost=4.83..4.83 rows=1 width=4) (actual time=0.305..0.305 rows=0 loops=1)\n> -> Index Scan using connection_regid_obj1_index on connection (cost=0.00..4.83 rows=1 width=4) (actual time=0.236..0.264 rows=1 loops=1)\n> Index Cond: ((connection_registry_id = 40105) AND (obj1 = 73582))\n> Total runtime: 1.119 ms\n> (7 rows)\n> \n> This duplicates your example as to plan and row counts:\n> \n> > Hash Join (cost=5.96..7.04 rows=1 width=14) (actual\n> > time=10.591..10.609 rows=1 loops=1)\n> > Hash Cond: (\"outer\".id = \"inner\".obj2)\n> > -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n> > (actual time=0.011..0.022 rows=5 loops=1)\n> > -> Hash (cost=5.96..5.96 rows=1 width=4) (actual\n> > time=0.109..0.109 rows=0 loops=1)\n> > -> Index Scan using connection_regid_obj1_index on\n> > connection (cost=0.00..5.96 rows=1 width=4) (actual time=0.070..0.076\n> > rows=1 loops=1)\n> > Index Cond: ((connection_registry_id = 40105) AND (obj1\n> > = 73582)) Total runtime: 11.536 ms\n> > (7 rows)\n> \n> My machine is considerably slower than yours, to judge by the actual\n> elapsed times in the scan nodes ... so why is it beating the pants\n> off yours in the join step?\n> \n> Can you try the above script verbatim in a scratch database and see\n> what you get? (Note it's worth trying the explain two or three\n> times to be sure the values have settled out.)\n> \n> I'm testing a fairly recent 7.4-branch build (7.4.8 plus), so that's one\n> possible reason for the discrepancy between my results and yours, but I\n> do not see anything in the 7.4 CVS logs that looks like it's related to\n> hashjoin performance.\n> \n> I'd be interested to see results from other people using 7.4.* too.\n> \n> regards, tom lane\n>\n", "msg_date": "Tue, 9 Aug 2005 09:51:51 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "Rhett Garber <[email protected]> writes:\n> Duplicated your setup in a separate DB.\n> At least its reproducable for me.....\n\nHmm. 
Well, we now have several data points but they seem to be on\nwildly varying hardware. To try to normalize the results a little,\nI computed the total actual time for the hash plan divided by the sum\nof the actual times for the two scan nodes. Thus, for your example:\n\n> Hash Join (cost=4.83..5.91 rows=1 width=14) (actual time=7.148..7.159\n> rows=1 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".obj2)\n> -> Seq Scan on rtmessagestate (cost=0.00..1.05 rows=5 width=14)\n> (actual time=0.007..0.015 rows=5 loops=1)\n> -> Hash (cost=4.83..4.83 rows=1 width=4) (actual\n> time=0.055..0.055 rows=0 loops=1)\n> -> Index Scan using connection_regid_obj1_index on\n> connection (cost=0.00..4.83 rows=1 width=4) (actual time=0.028..0.032\n> rows=1 loops=1)\n> Index Cond: ((connection_registry_id = 40105) AND (obj1\n> = 73582)) Total runtime: 7.693 ms\n> (7 rows)\n\nthis would be 7.159 / (0.015 + 0.032). This is probably not an\nenormously robust statistic but it at least focuses attention in the\nright place. Here's what I get (rounded off to 4 digits which is surely\nas much precision as we have in the numbers):\n\n Tom 7.4.8+\t 1.619\n Ian 7.4.8\t 6.000\n Ian 7.4.2\t 13.95\n Steinar 7.4.7\t 8.833\n Rhett orig\t108.3\n Rhett test\t152.3\n Michael 7.4.1\t 2.015\n\nMy number seems to be a bit of an outlier to the low side, but yours are\nway the heck to the high side. And Michael's test seems to rule out the\nidea that it's something broken in 7.4.1 in particular.\n\nI'm now thinking you've got either a platform- or compiler-specific\nproblem. Exactly what is the hardware (the CPU not the disks)? How did\nyou build or come by the Postgres executables (compiler, configure\noptions, etc)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Aug 2005 13:12:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop? " }, { "msg_contents": "> I'm now thinking you've got either a platform- or compiler-specific\n> problem. Exactly what is the hardware (the CPU not the disks)? How did\n> you build or come by the Postgres executables (compiler, configure\n> options, etc)?\n\nI've tried it on two of our machines, both HP Proliant DL580:\nProduction: Intel(R) Xeon(TM) MP CPU 2.80GHz (I think there are 2\nphysical CPUs with Hyperthreading, shows up as 4)\n 6 gigs RAM\nDevelopment: Intel(R) XEON(TM) MP CPU 2.00GHz (I have vague\nrecollection of disabling hyperthreading on this chip because of some\nother kernel issue)\n 1 gig RAM\n\nThey are both running SuSE 8, 2.4.21-128-smp kernel\n\nCompile instructions (I didn't do it myself) indicate we built from\nsource with nothing fancy:\n\ntar xpvf postgresql-7.4.1.tar.bz2\ncd postgresql-7.4.1\n./configure --prefix=/usr/local/postgresql-7.4.1\nmake\nmake install\nmake install-all-headers\n\nIf i run 'file' on /usr/local/postgresql-7.4.1/bin/postgres :\npostgres: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV),\ndynamically linked (uses shared libs), not stripped\n\nThanks for all your help guys,\n\nRhett\n", "msg_date": "Tue, 9 Aug 2005 11:10:33 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why hash join instead of nested loop?"
}, { "msg_contents": "Rhett Garber <[email protected]> writes:\n> They are both running SuSE 8, 2.4.21-128-smp kernel\n\n> Compile instructions (I didn't do it myself) indicate we built from\n> source with nothing fancy:\n\nYou could double-check the configure options by running pg_config.\nBut probably the more interesting question is whether any nondefault\nCFLAGS were used, and I don't think pg_config records that.\n(Hmm, maybe it should.)\n\nIn any case, there's no smoking gun there. I'm now wondering if maybe\nthere's something unusual about your runtime parameters. AFAIR you\ndidn't show us your postgresql.conf settings --- could we see any\nnondefault entries there?\n\n(I looked quickly at the 7.4 hashjoin code, and I see that it uses a\nhash table sized according to sort_mem even when the input is predicted\nto be very small ... so an enormous sort_mem setting would account for\nsome plan startup overhead to initialize the table ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Aug 2005 14:37:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop? " }, { "msg_contents": "Well that could be an issue, is this abnormally large:\n\n#shared_buffers = 1536 # min 16, at least max_connections*2, 8KB each\nshared_buffers = 206440\n#sort_mem = 131072 # min 64, size in KB\nsort_mem = 524288 # min 64, size in KB\nvacuum_mem = 131072 # min 1024, size in K\n\nI actually had a lot of trouble finding example values for these... no\none wants to give real numbers in any postgres performance tuning\narticles I saw. What would be appropriate for machines with 1 or 6\ngigs of RAM and wanting to maximize performance.\n\nRhett\n\nOn 8/9/05, Tom Lane <[email protected]> wrote:\n> Rhett Garber <[email protected]> writes:\n> > They are both running SuSE 8, 2.4.21-128-smp kernel\n> \n> > Compile instructions (I didn't do it myself) indicate we built from\n> > source with nothing fancy:\n> \n> You could double-check the configure options by running pg_config.\n> But probably the more interesting question is whether any nondefault\n> CFLAGS were used, and I don't think pg_config records that.\n> (Hmm, maybe it should.)\n> \n> In any case, there's no smoking gun there. I'm now wondering if maybe\n> there's something unusual about your runtime parameters. AFAIR you\n> didn't show us your postgresql.conf settings --- could we see any\n> nondefault entries there?\n> \n> (I looked quickly at the 7.4 hashjoin code, and I see that it uses a\n> hash table sized according to sort_mem even when the input is predicted\n> to be very small ... so an enormous sort_mem setting would account for\n> some plan startup overhead to initialize the table ...)\n> \n> regards, tom lane\n>\n", "msg_date": "Tue, 9 Aug 2005 11:51:30 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why hash join instead of nested loop?" }, { "msg_contents": "Rhett Garber <[email protected]> writes:\n> Well that could be an issue, is this abnormally large:\n> #shared_buffers = 1536 # min 16, at least max_connections*2, 8KB each\n> shared_buffers = 206440\n> #sort_mem = 131072 # min 64, size in KB\n> sort_mem = 524288 # min 64, size in KB\n> vacuum_mem = 131072 # min 1024, size in K\n\nThe vacuum_mem number is OK I think, but both of the others seem\nunreasonably large. 
Conventional wisdom about shared_buffers is that\nthe sweet spot is maybe 10000 or so buffers, rarely more than 50000.\n(Particularly in pre-8.0 releases, there are code paths that grovel\nthrough all the buffers linearly, so there is a significant cost to\nmaking it too large.) Don't worry about it being too small to make\neffective use of RAM --- we rely on the kernel's disk cache to do that.\n\nsort_mem is *per sort*, and so half a gig in a machine with only a\ncouple of gig is far too much except when you know you have only one\nquery running. A couple dozen backends each trying to use half a gig\nwill drive you into the ground in no time. Conventional wisdom here\nis that the global setting should be conservatively small (perhaps\n10Mb to 100Mb depending on how many concurrent backends you expect to\nhave), and then you can explicitly increase it locally with SET for\nspecific queries that need it.\n\nIn terms of the problem at hand, try the test case with a few different\nvalues of sort_mem (use SET to adjust it, you don't need to touch the\nconfig file) and see what happens. I think the cost you're seeing is\njust startup overhead to zero a hash table of a few hundred meg ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 09 Aug 2005 15:00:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why hash join instead of nested loop? " }, { "msg_contents": "Bingo, the smaller the sort_mem, the faster that query is.\n\nThanks a lot to everybody that helped, i'll tweak with these values\nmore when I get a chance now that I have some guidelines that make\nsense.\n\nRhett\n\nOn 8/9/05, Tom Lane <[email protected]> wrote:\n> Rhett Garber <[email protected]> writes:\n> > Well that could be an issue, is this abnormally large:\n> > #shared_buffers = 1536 # min 16, at least max_connections*2, 8KB each\n> > shared_buffers = 206440\n> > #sort_mem = 131072 # min 64, size in KB\n> > sort_mem = 524288 # min 64, size in KB\n> > vacuum_mem = 131072 # min 1024, size in K\n> \n> The vacuum_mem number is OK I think, but both of the others seem\n> unreasonably large. Conventional wisdom about shared_buffers is that\n> the sweet spot is maybe 10000 or so buffers, rarely more than 50000.\n> (Particularly in pre-8.0 releases, there are code paths that grovel\n> through all the buffers linearly, so there is a significant cost to\n> making it too large.) Don't worry about it being too small to make\n> effective use of RAM --- we rely on the kernel's disk cache to do that.\n> \n> sort_mem is *per sort*, and so half a gig in a machine with only a\n> couple of gig is far too much except when you know you have only one\n> query running. A couple dozen backends each trying to use half a gig\n> will drive you into the ground in no time. Conventional wisdom here\n> is that the global setting should be conservatively small (perhaps\n> 10Mb to 100Mb depending on how many concurrent backends you expect to\n> have), and then you can explicitly increase it locally with SET for\n> specific queries that need it.\n> \n> In terms of the problem at hand, try the test case with a few different\n> values of sort_mem (use SET to adjust it, you don't need to touch the\n> config file) and see what happens.
I think the cost you're seeing is\n> just startup overhead to zero a hash table of a few hundred meg ...\n> \n> regards, tom lane\n>\n", "msg_date": "Tue, 9 Aug 2005 12:11:52 -0700", "msg_from": "Rhett Garber <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why hash join instead of nested loop?" } ]
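The upshot of the thread reduces to Tom Lane's recipe: keep the global sort_mem conservative and raise it per session only for the statements that need it. Below is a minimal psql sketch of that workflow, reusing the thread's test query; the 8192 KB and 65536 KB values are illustrative assumptions rather than figures anyone in the thread prescribes.

-- Check the value the session inherited from postgresql.conf.
SHOW sort_mem;

-- Per-session override, in KB (7.4 syntax); no config edit or restart required.
SET sort_mem = 8192;

-- Re-run the thread's test query: with a modest sort_mem the hash node no longer
-- allocates and zeroes an enormous table, so the plan startup overhead should drop.
EXPLAIN ANALYZE
SELECT rtmessagestate.*
FROM rtmessagestate, connection
WHERE connection_registry_id = 40105
  AND obj1 = 73582
  AND id = obj2;

-- For an individual query that genuinely needs a large in-memory sort or hash,
-- raise the setting just before running it and put it back afterwards.
SET sort_mem = 65536;
-- ... run the heavyweight query here ...
RESET sort_mem;

Because the 7.4 hash join sizes its hash table from sort_mem even for a one-row inner input, the EXPLAIN ANALYZE total should track the setting closely, which is consistent with Rhett's observation that the query speeds up as sort_mem shrinks.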