[ { "msg_contents": "Thankyou Alexander,\n\n\tThat has worked and appears to have fixed the issue with syslog.\n\nTheo\n\n-----Original Message-----\nFrom: Alexander Borkowski [mailto:[email protected]] \nSent: Tuesday, 21 December 2004 10:09 AM\nTo: Theo Galanakis\nCc: '[email protected]'\nSubject: Re: [PERFORM] PG Logging is Slow\n\n\nTheo,\n\n > \tI tried the -/var/log/postgresql.log option however I noticed no\n > performance improvement. May be the fact that we use redhad linux and >\nsyslog, I'm no sys-admin, so I'm not sure if there is a difference \nbetween\n > syslogd and syslog.\n\nDid you restart syslogd (that's the server process implementing the \nsyslog (= system log) service) after you changed its configuration?\n\nIn order to do so, try running\n\n/etc/init.d/syslog restart\n\nas root from a commandline.\n\nHTH\n\nAlex\n\n\n______________________________________________________________________\nThis email, including attachments, is intended only for the addressee\nand may be confidential, privileged and subject to copyright. If you\nhave received this email in error, please advise the sender and delete\nit. If you are not the intended recipient of this email, you must not\nuse, copy or disclose its content to anyone. You must not copy or \ncommunicate to others content that is confidential or subject to \ncopyright, unless you have the consent of the content owner.\n\n\n\n\nRE: [PERFORM] PG Logging is Slow\n\n\nThankyou Alexander,\n\n        That has worked and appears to have fixed the issue with syslog.\n\nTheo\n\n-----Original Message-----\nFrom: Alexander Borkowski [mailto:[email protected]] \nSent: Tuesday, 21 December 2004 10:09 AM\nTo: Theo Galanakis\nCc: '[email protected]'\nSubject: Re: [PERFORM] PG Logging is Slow\n\n\nTheo,\n\n >      I tried the -/var/log/postgresql.log option however I noticed no\n > performance improvement. May be the fact that we use redhad linux and  > syslog, I'm no sys-admin, so I'm not sure if there is a difference \nbetween\n > syslogd and syslog.\n\nDid you restart syslogd (that's the server process implementing the \nsyslog (= system log) service) after you changed its configuration?\n\nIn order to do so, try running\n\n/etc/init.d/syslog restart\n\nas root from a commandline.\n\nHTH\n\nAlex", "msg_date": "Tue, 21 Dec 2004 16:53:51 +1100", "msg_from": "Theo Galanakis <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG Logging is Slow" } ]
[ { "msg_contents": "I used postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram\nof 4 Gb. Since 1 1/2 yr. when I started to use the database server after\noptimizing the postgresql.conf everything went fine until a couple of\nweeks ago , my database grew up to 3.5 Gb and there were more than 140\nconcurent connections.\nThe server seemed to be slower in the rush hour peroid than before . There\nis some swap process too. My top and meminfo are shown here below:\n14:52:13 up 13 days, 2:50, 2 users, load average: 5.58, 5.97, 6.11\n218 processes: 210 sleeping, 1 running, 0 zombie, 7 stopped\nCPU0 states: 7.2% user 55.2% system 0.0% nice 0.0% iowait 36.4% idle\nCPU1 states: 8.3% user 56.1% system 0.0% nice 0.0% iowait 34.4% idle\nCPU2 states: 10.0% user 57.0% system 0.0% nice 0.0% iowait 32.4% idle\nCPU3 states: 6.2% user 55.3% system 0.0% nice 0.0% iowait 37.3% idle\nMem: 4124720k av, 4105916k used, 18804k free, 0k shrd, 10152k buff\n 2900720k actv, 219908k in_d, 167468k in_c\nSwap: 20370412k av, 390372k used, 19980040k free 2781256k\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n 14 root 18 0 0 0 0 SW 54.5 0.0 766:10 1\nkscand/HighMem\n13304 postgres 17 0 280M 280M 276M D 52.5 6.9 0:10 2 postmaster\n12035 postgres 16 0 175M 174M 169M D 33.0 4.3 0:26 3 postmaster\n13193 postgres 16 0 128M 127M 124M S 28.4 3.1 0:05 3 postmaster\n12137 postgres 16 0 498M 497M 431M D 27.2 12.3 0:34 1 postmaster\n 11 root 15 0 0 0 0 SW 13.9 0.0 363:00 2 kswapd\n13241 postgres 16 0 318M 318M 314M D 7.3 7.9 0:09 2 postmaster\n 13 root 16 0 0 0 0 SW 6.9 0.0 82:17 0\nkscand/Normal\n13367 postgres 15 0 196M 196M 193M D 6.5 4.8 0:02 2 postmaster\n11984 postgres 15 0 305M 305M 301M S 4.9 7.5 2:55 1 postmaster\n13331 postgres 16 0 970M 970M 966M S 4.9 24.0 0:22 1 postmaster\n12388 postgres 15 0 293M 292M 289M S 3.9 7.2 2:42 3 postmaster\n13328 postgres 15 0 276M 276M 272M S 2.7 6.8 0:22 0 postmaster\n 26 root 16 0 0 0 0 SW 2.3 0.0 10:12 1 kjournald\n11831 postgres 15 0 634M 634M 630M S 1.5 15.7 1:33 3 postmaster\n12127 postgres 15 0 117M 116M 114M S 1.1 2.8 0:20 1 postmaster\n12002 postgres 15 0 429M 429M 426M S 0.9 10.6 0:24 1 postmaster\n12991 postgres 15 0 143M 143M 139M S 0.7 3.5 0:29 1 postmaster\n13234 postgres 15 0 288M 288M 284M S 0.7 7.1 0:17 0 postmaster\n13337 postgres 15 0 172M 171M 168M S 0.3 4.2 0:06 0 postmaster\n13413 root 15 0 1276 1276 856 R 0.3 0.0 0:00 0 top\n11937 postgres 15 0 379M 379M 375M S 0.1 9.4 2:59 2 postmaster\n\nShared kernel mem:\n[root@data3 root]# cat < /proc/sys/kernel/shmmax\n4000000000\n[root@data3 root]# cat < /proc/sys/kernel/shmall\n300000000\n\nmeminfo :\n total: used: free: shared: buffers: cached:\nMem: 4223713280 4200480768 23232512 0 11497472 3555827712\nSwap: 20859301888 303460352 20555841536\nMemTotal: 4124720 kB\nMemFree: 22688 kB\nMemShared: 0 kB\nBuffers: 11228 kB\nCached: 3367688 kB\nSwapCached: 104800 kB\nActive: 3141224 kB\nActiveAnon: 684960 kB\nActiveCache: 2456264 kB\nInact_dirty: 220504 kB\nInact_laundry: 166844 kB\nInact_clean: 94252 kB\nInact_target: 724564 kB\nHighTotal: 3276736 kB\nHighFree: 3832 kB\nLowTotal: 847984 kB\nLowFree: 18856 kB\nSwapTotal: 20370412 kB\nSwapFree: 20074064 kB\n\nPostgresql.conf :\n# Connection Parameters\n#\ntcpip_socket = true\n#ssl = false\n\n#max_connections = 32\nmax_connections = 180\n#superuser_reserved_connections = 2\n\n#port = 5432\n#hostname_lookup = false\n#show_source_port = false\n\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # 
octal\n\n#virtual_host = ''\n\n#krb_server_keyfile = ''\n\n\n#\n# Shared Memory Size\n#\n#shared_buffers = 64 # min max_connections*2 or 16, 8KB each\nshared_buffers = 250000\n#max_fsm_relations = 1000 # min 10, fsm is free space map, ~40 bytes\n#max_fsm_pages = 10000 # min 1000, fsm is free space map, ~6 bytes\n#max_locks_per_transaction = 64 # min 10\n#wal_buffers = 8 # min 4, typically 8KB each\n\n#\n# Non-shared Memory Sizes\n#\n#sort_mem = 1024 # min 64, size in KB\nsort_mem = 60000\n#vacuum_mem = 8192 # min 1024, size in KB\nvacuum_mem = 20072\n\n# Write-ahead log (WAL)\n#\n#checkpoint_segments = 3 # in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 300 # range 30-3600, in seconds\n#\n#commit_delay = 0 # range 0-100000, in microseconds\ncommit_delay = 10\n#commit_siblings = 5 # range 1-1000\n#\n#fsync = true\nfsync = false\n#wal_sync_method = fsync # the default varies across platforms:\n# # fsync, fdatasync, open_sync, or\nopen_datasync\n#wal_debug = 0 # range 0-16\n\n\n#\n# Optimizer Parameters\n#\n#enable_seqscan = true\n#enable_indexscan = true\n#enable_tidscan = true\n#enable_sort = true\n#enable_nestloop = true\n#enable_mergejoin = true\n#enable_hashjoin = true\n\n#effective_cache_size = 1000 # typically 8KB each\neffective_cache_size = 5000\n#random_page_cost = 4 # units are one sequential page fetch cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n#default_statistics_target = 10 # range 1-1000\n\n#\n# GEQO Optimizer Parameters\n#\n#geqo = true\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n#geqo_threshold = 11\n#geqo_pool_size = 0 # default based on tables in statement,\n # range 128-1024\n#geqo_effort = 1\n#geqo_generations = 0\n\n\n\nPlease give me any comment about adjustment my mechine.\nAmrit Angsusingh\nnakornsawan , Thailand\n\n\n", "msg_date": "Tue, 21 Dec 2004 16:31:49 +0700 (ICT)", "msg_from": "\"Amrit Angsusingh\" <[email protected]>", "msg_from_op": true, "msg_subject": "Howto Increased performace ?" }, { "msg_contents": "On Tue, 2004-12-21 at 16:31 +0700, Amrit Angsusingh wrote:\n> I used postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram\n> of 4 Gb. Since 1 1/2 yr. when I started to use the database server after\n> optimizing the postgresql.conf everything went fine until a couple of\n> weeks ago , my database grew up to 3.5 Gb and there were more than 140\n> concurent connections.\n...\n> shared_buffers = 250000\nthis is much higher than usually adviced on this list.\ntry to reduce this to 25000\n \n> effective_cache_size = 5000\nand increase this instead, to say, 50000\n\n\ngnari\n\n\n", "msg_date": "Fri, 24 Dec 2004 15:39:32 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" }, { "msg_contents": "Hi,\n\n> #sort_mem = 1024 # min 64, size in KB\n> sort_mem = 60000\n\nI think this might be too much. You are using 60000KB _per connection_ here \n= 10GB for your maximum of 180 connections.\n\nBy comparison, I am specifiying 4096 (subject to adjustment) for a machine \nwith a similar spec to yours.\n\nregards\nIain\n\n\n\n\n", "msg_date": "Mon, 27 Dec 2004 10:29:54 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" 
}, { "msg_contents": "Hi,\n\nThese are some settings that I am planning to start with for a 4GB RAM dual \nopteron system with a maximum of 100 connections:\n\n\nshared_buffers 8192 (=67MB RAM)\nsort_mem 4096 (=400MB RAM for 100 connections)\neffective_cache_size 380000(@8KB =3.04GB RAM)\nvacuum_mem 32768 KB\nwal_buffers 64\ncheckpoint_segments 8\n\nIn theory, effective cache size is the amount of memory left over for the OS \nto cache the filesystem after running all programs and having 100 users \nconnected, plus a little slack.\n\nregards\nIain\n----- Original Message ----- \nFrom: \"Amrit Angsusingh\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, December 27, 2004 6:21 PM\nSubject: Re: [PERFORM] Howto Increased performace ?\n\n\n>\n>\n>>> #sort_mem = 1024 # min 64, size in KB\n>>> sort_mem = 60000\n>\n>> I think this might be too much. You are using 60000KB _per connection_\n>> here\n>> = 10GB for your maximum of 180 connections.\n>>\n>> By comparison, I am specifiying 4096 (subject to adjustment) for a \n>> machine\n>> with a similar spec to yours.\n>>\n>> regards\n>> Iain\n>\n> I reduced it to\n> sort_mem = 8192\n> If I increase it higher , what will be result I could expect.\n>\n> and I also reduce the\n> max connection to 160\n> and\n> shared buffer to shared_buffers = 27853\n> effective_cache_size = 81920 [what does it for?]\n>\n> do you think is it still too much especialy effective cache ?\n>\n> Thanks\n> Amrit\n>\n> Amrit Angsusingh\n> Nakornsawan,Thailand \n\n", "msg_date": "Mon, 27 Dec 2004 18:34:45 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" }, { "msg_contents": "Iain wrote:\n\n> sort_mem 4096 (=400MB RAM for 100 connections)\n\nIf I understand correctly, memory usage related to `sort_mem'\nis per connection *and* per sort.\nIf every client runs a query with 3 sorts in its plan, you are\ngoing to need (in theory) 100 connections * 4Mb * 3 sorts,\nwhich is 1.2 Gb.\n\nPlease correct me if I'm wrong...\n\n-- \nCosimo\n\n", "msg_date": "Mon, 27 Dec 2004 12:36:50 +0100", "msg_from": "Cosimo Streppone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" }, { "msg_contents": "On Mon, 2004-12-27 at 22:31 +0700, Amrit Angsusingh wrote:\n> [ [email protected] ]\n> >\n> > These are some settings that I am planning to start with for a 4GB RAM\n> > dual\n> > opteron system with a maximum of 100 connections:\n> >\n> >\n> > shared_buffers 8192 (=67MB RAM)\n> > sort_mem 4096 (=400MB RAM for 100 connections)\n> > effective_cache_size 380000(@8KB =3.04GB RAM)\n> > vacuum_mem 32768 KB\n> > wal_buffers 64\n> > checkpoint_segments 8\n> >\n> > In theory, effective cache size is the amount of memory left over for the\n> > OS\n> > to cache the filesystem after running all programs and having 100 users\n> > connected, plus a little slack.\n\n> I reduced the connection to 160 and configured as below there is some\n> improvement in speed .\n> shared_buffers = 27853 [Should I reduce it to nearly as you do and what\n> will happen?]\n\nat some point, more shared buffers will do less good than leaving the\nmemory to the OS to use as disk buffers. 
you might want to experiment\na bit with different values to find what suits your real-life conditions\n\n> sort_mem = 8192\n> vacuum_mem = 16384\n> effective_cache_size = 81920 [Should I increase it to more than 200000 ?]\nas Iain wrote, this value is an indication of how much memory will be\navailable to the OS for disk cache.\nwhen all other settings have been made, try to see how much memory your\nOS has left under normal conditions, and adjust your setting\naccordingly, if it differs significantly.\nI have seen cases where an incorrect value (too low) influenced the\nplanner to use sequential scans instead of better indexscans,\npresumably because of a higher ratio of estimated cache hits.\n\n> Thanks for any comment again.\n> \n> NB. There is a huge diaster in my country \"Tsunamies\" and all the people\n> over the country include me felt into deep sorrow.\n\nmy condolescences.\n\n> Amrit Angsusingh\n> Thailand\n\ngnari\n\n\n", "msg_date": "Mon, 27 Dec 2004 18:08:39 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" }, { "msg_contents": "Ho Cosimo,\n\nI had read that before, so you are right. The amount of memory being used \ncould run much higher than I wrote.\n\nIn my case, I know that not all the connections are not busy all the time \n(this isn't a web application with thousands of users connecting to a pool) \nso not all active connections will be doing sorts all the time. As far as I \ncan tell, sort memory is allocated as needed, so my estimate of 400MB should \nstill be reasonable, and I have plenty of unaccounted for memory outside the \neffective cache so it shouldn't be a problem.\n\nPresumably, that memory isn't needed after the result set is built.\n\nIf I understand correctly, there isn't any way to limit the amount of memory \nallocated for sorting, which means that you can't specifiy generous sort_mem \nvalues to help out when there is spare capacity (few connections) because in \nthe worst case it could cause swapping when the system is busy. In the the \nnot so bad case, the effective cache size estimate will just be completely \nwrong.\n\nMaybe a global sort memory limit would be a good idea, I don't know.\n\nregards\nIain\n\n\n> Iain wrote:\n>\n>> sort_mem 4096 (=400MB RAM for 100 connections)\n>\n> If I understand correctly, memory usage related to `sort_mem'\n> is per connection *and* per sort.\n> If every client runs a query with 3 sorts in its plan, you are\n> going to need (in theory) 100 connections * 4Mb * 3 sorts,\n> which is 1.2 Gb.\n>\n> Please correct me if I'm wrong...\n>\n> -- \n> Cosimo\n\n\n", "msg_date": "Tue, 28 Dec 2004 11:23:41 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" }, { "msg_contents": "Hi Amrit,\n\nI'm sorry to hear about the disaster in Thailand. I live in a tsunami prone \narea myself :-(\n\nI think that you have enough information to solve your problem now, but it \nwill just take some time and testing. When you have eliminated the excessive \nswapping and tuned your system as best you can, then you can decide if that \nis fast enough for you. More memory might help, but I can't say for sure. \nThere are many other things to consider. 
I suggest that you spend some time \nreading through the performance and maybe the admin lists.\n\nregards\nIain\n\n----- Original Message ----- \nFrom: \"Amrit Angsusingh\" <[email protected]>\nTo: \"Iain\" <[email protected]>\nCc: <[email protected]>\nSent: Tuesday, December 28, 2004 1:48 AM\nSubject: Re: [PERFORM] Howto Increased performace ?\n\n\n>> Hi,\n>>\n>> These are some settings that I am planning to start with for a 4GB RAM\n>> dual\n>> opteron system with a maximum of 100 connections:\n>>\n>>\n>> shared_buffers 8192 (=67MB RAM)\n>> sort_mem 4096 (=400MB RAM for 100 connections)\n>> effective_cache_size 380000(@8KB =3.04GB RAM)\n>> vacuum_mem 32768 KB\n>> wal_buffers 64\n>> checkpoint_segments 8\n>>\n>> In theory, effective cache size is the amount of memory left over for the\n>> OS\n>> to cache the filesystem after running all programs and having 100 users\n>> connected, plus a little slack.\n>>\n>> regards\n>> Iain\n>\n>\n> I'm not sure if I put more RAM on my mechine ie: 6 GB . The performance\n> would increase for more than 20 % ?\n> Any comment please,\n>\n> Amrit Angsusingh\n> Comcenter\n> Sawanpracharuck Hospital\n> Thailand\n>\n> \n\n", "msg_date": "Tue, 28 Dec 2004 11:31:55 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" }, { "msg_contents": "Hi,\n\nThese are the /etc/sysctl.conf settings that I am planning to use. \nCoincidentally, these are the settings recommended by Oracle. If anything \nthey would be generous, I think.\n\nfile-max 65536 (for 2.2 and 2.4 kernels)\nkernel.shmall 134217728 (=128MB)\nkernel.shmmax 268435456\nfs.file-max 65536\n\nBy the way, when you tested your changes, was that with a busy system? I \nthink that a configuration that gives the best performance (at the client \nend) on a machine with just a few connections might not be the configuration \nthat will give you the best throughput when the system is stressed.\n\nI'm certainly no expert on tuning Linux systems, or even Postgres but I'd \nsuggest that you become knowlegable in the use of the various system \nmonitoring tools that Linux has and keep a record of their output so you can \ncompare as you change your configuration. In the end though, I think your \naim is to reduce swapping by tuning your memory usage for busy times.\n\nAlso, I heard that (most?what versions?) 32 bit linux kernals are slow at \nhandling more than 2GB memory so a kernal upgrade might be worth \nconsidering.\n\nregards\nIain \n\n", "msg_date": "Tue, 28 Dec 2004 13:56:42 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Howto Increased performace ?" } ]
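Pulling the thread's final numbers together: the values below are the ballpark the posters converged on for this 4 GB machine, and the two commands afterwards are the kind of monitoring gnari and Iain suggest for confirming that the box has stopped swapping under load. Treat every number as an illustration to be tested, not a prescription:

# postgresql.conf values discussed above (7.3-era parameter names)
#   shared_buffers       = 27853    # roughly 218 MB, far below the original 250000
#   sort_mem             = 8192     # allocated per sort, per backend, so keep it modest
#   vacuum_mem           = 16384
#   effective_cache_size = 81920    # about 640 MB; raise it if the OS cache is really larger

# after restarting the postmaster, watch memory and swap while the system is busy
free -m
vmstat 5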
[ { "msg_contents": "> > A demo I've set up for sales seems to be spending much of it's time in\n> > disk wait states.\n> >\n> >\n> > The particular system I'm working with is:\n> > Ext3 on Debian inside Microsoft VirtualPC on NTFS\n> > on WindowsXP on laptops of our sales team.\n> \n> As this is only for demo purposes, you might consider turning fsync off,\n> although I have no idea if it would have any effect on your setup.\n\nTry removing VirtualPC from the equation. You can run the win32 native port or dual boot your laptop for example.\n\nMerlin\n", "msg_date": "Tue, 21 Dec 2004 13:36:12 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tips for a system with _extremely_ slow IO?" } ]
[ { "msg_contents": "Hello, i have a problem between V7.4.3 Cygwin and\nV8.0RC2 W2K. I have 2 systems:\n\n1. Production Machine\n - Dual P4 3000MHz\n - 2 GB RAM\n - W2K\n - PostgreSQL 7.4.3 under Cygwin\n - i connect to it over a DSL Line\n2. Develop Machine\n - P4 1800MHz\n - 760 MB RAM\n - PostgreSQL Native Windows\n - local connection 100MB/FD\n\nBoth systems use the default postgresql.conf. Now the problem.\nI have an (unoptimized, dynamic) query wich was execute on the\nproduction machine over DSL in 2 seconds and on my develop\nmachine, connected over local LAN, in 119 seconds!\n\nWhats this? I can not post the query details here public, its a commercial\nproject. Any first idea? I execute on both machine the same query with\nthe same database design!\n---------------------------------------------\nThomas Wegner\nCabrioMeter - The Weather Plugin for Trillian\nhttp://www.wegner24.de/cabriometer\n\n\n", "msg_date": "Wed, 22 Dec 2004 00:03:18 +0100", "msg_from": "\"Thomas Wegner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Speed in V8.0" }, { "msg_contents": "2. Develop Machine\n - P4 1800MHz\n - 760 MB RAM\n - W2K\n - PostgreSQL 8.0 RC2 Native Windows\n - local connection 100MB/FD\n---------------------------------------------\nThomas Wegner\nCabrioMeter - The Weather Plugin for Trillian\nhttp://www.wegner24.de/cabriometer\n\n\"Thomas Wegner\" <[email protected]> schrieb im Newsbeitrag \nnews:[email protected]...\n> Hello, i have a problem between V7.4.3 Cygwin and\n> V8.0RC2 W2K. I have 2 systems:\n>\n> 1. Production Machine\n> - Dual P4 3000MHz\n> - 2 GB RAM\n> - W2K\n> - PostgreSQL 7.4.3 under Cygwin\n> - i connect to it over a DSL Line\n> 2. Develop Machine\n> - P4 1800MHz\n> - 760 MB RAM\n> - PostgreSQL Native Windows\n> - local connection 100MB/FD\n>\n> Both systems use the default postgresql.conf. Now the problem.\n> I have an (unoptimized, dynamic) query wich was execute on the\n> production machine over DSL in 2 seconds and on my develop\n> machine, connected over local LAN, in 119 seconds!\n>\n> Whats this? I can not post the query details here public, its a commercial\n> project. Any first idea? I execute on both machine the same query with\n> the same database design!\n> ---------------------------------------------\n> Thomas Wegner\n> CabrioMeter - The Weather Plugin for Trillian\n> http://www.wegner24.de/cabriometer\n>\n> \n\n\n", "msg_date": "Wed, 22 Dec 2004 01:25:45 +0100", "msg_from": "\"Thomas Wegner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed in V8.0" }, { "msg_contents": "Another man working to the bitter end this Christmas!\n\nThere could be many reasons, but maybe first you should look at the amount\nof RAM available? If the tables fit in RAM on the production server but not\non the dev server, then that will easily defeat the improvement due to using\nthe native DB version.\n\nWhy don't you install cygwin on the dev box and do the comparison using the\nsame hardware?\n\nM\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Thomas Wegner\n> Sent: 21 December 2004 23:03\n> To: [email protected]\n> Subject: [PERFORM] Speed in V8.0\n> \n> \n> Hello, i have a problem between V7.4.3 Cygwin and\n> V8.0RC2 W2K. I have 2 systems:\n> \n> 1. Production Machine\n> - Dual P4 3000MHz\n> - 2 GB RAM\n> - W2K\n> - PostgreSQL 7.4.3 under Cygwin\n> - i connect to it over a DSL Line\n> 2. 
Develop Machine\n> - P4 1800MHz\n> - 760 MB RAM\n> - PostgreSQL Native Windows\n> - local connection 100MB/FD\n> \n> Both systems use the default postgresql.conf. Now the \n> problem. I have an (unoptimized, dynamic) query wich was \n> execute on the production machine over DSL in 2 seconds and \n> on my develop machine, connected over local LAN, in 119 seconds!\n> \n> Whats this? I can not post the query details here public, its \n> a commercial project. Any first idea? I execute on both \n> machine the same query with the same database design!\n> ---------------------------------------------\n> Thomas Wegner\n> CabrioMeter - The Weather Plugin for Trillian \n> http://www.wegner24.de/cabriometer\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Fri, 24 Dec 2004 14:52:11 -0000", "msg_from": "\"Matt Clark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed in V8.0" }, { "msg_contents": "On Wed, 2004-12-22 at 00:03 +0100, Thomas Wegner wrote:\n> Hello, i have a problem between V7.4.3 Cygwin and\n> V8.0RC2 W2K. I have 2 systems:\n> \n> 1. Production Machine\n> - Dual P4 3000MHz\n> - 2 GB RAM\n> - W2K\n> - PostgreSQL 7.4.3 under Cygwin\n> - i connect to it over a DSL Line\n> 2. Develop Machine\n> - P4 1800MHz\n> - 760 MB RAM\n> - PostgreSQL Native Windows\n> - local connection 100MB/FD\n> \n> Both systems use the default postgresql.conf. Now the problem.\n> I have an (unoptimized, dynamic) query wich was execute on the\n> production machine over DSL in 2 seconds and on my develop\n> machine, connected over local LAN, in 119 seconds!\n\nhas the development database been ANALYZED ?\n \ngnari\n\n\n", "msg_date": "Fri, 24 Dec 2004 15:45:10 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Speed in V8.0" }, { "msg_contents": "Ok, i installed the 7.4.3 on the dev machine under\nCygwin and the was 4 times slower than the V8.\nThey need 394 seconds. Whats wrong with my dev\nmachine. There was enough free memory available.\n---------------------------------------------\nThomas Wegner\nCabrioMeter - The Weather Plugin for Trillian\nhttp://www.wegner24.de/cabriometer\n\n\"\"Matt Clark\"\" <[email protected]> schrieb im Newsbeitrag \nnews:014001c4e9c8$266ec0a0$8300a8c0@solent...\n> Another man working to the bitter end this Christmas!\n>\n> There could be many reasons, but maybe first you should look at the amount\n> of RAM available? If the tables fit in RAM on the production server but \n> not\n> on the dev server, then that will easily defeat the improvement due to \n> using\n> the native DB version.\n>\n> Why don't you install cygwin on the dev box and do the comparison using \n> the\n> same hardware?\n>\n> M\n>\n>> -----Original Message-----\n>> From: [email protected]\n>> [mailto:[email protected]] On Behalf Of\n>> Thomas Wegner\n>> Sent: 21 December 2004 23:03\n>> To: [email protected]\n>> Subject: [PERFORM] Speed in V8.0\n>>\n>>\n>> Hello, i have a problem between V7.4.3 Cygwin and\n>> V8.0RC2 W2K. I have 2 systems:\n>>\n>> 1. Production Machine\n>> - Dual P4 3000MHz\n>> - 2 GB RAM\n>> - W2K\n>> - PostgreSQL 7.4.3 under Cygwin\n>> - i connect to it over a DSL Line\n>> 2. Develop Machine\n>> - P4 1800MHz\n>> - 760 MB RAM\n>> - PostgreSQL Native Windows\n>> - local connection 100MB/FD\n>>\n>> Both systems use the default postgresql.conf. Now the\n>> problem. 
I have an (unoptimized, dynamic) query wich was\n>> execute on the production machine over DSL in 2 seconds and\n>> on my develop machine, connected over local LAN, in 119 seconds!\n>>\n>> Whats this? I can not post the query details here public, its\n>> a commercial project. Any first idea? I execute on both\n>> machine the same query with the same database design!\n>> ---------------------------------------------\n>> Thomas Wegner\n>> CabrioMeter - The Weather Plugin for Trillian\n>> http://www.wegner24.de/cabriometer\n>>\n>>\n>>\n>> ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n\n", "msg_date": "Sat, 25 Dec 2004 11:21:39 +0100", "msg_from": "\"Thomas Wegner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed in V8.0" }, { "msg_contents": "Thats was it. Now the speed was ok. Thank you.\n---------------------------------------------\nThomas Wegner\nCabrioMeter - The Weather Plugin for Trillian\nhttp://www.wegner24.de/cabriometer\n\n\"Ragnar \"Hafsta�\"\" <[email protected]> schrieb im Newsbeitrag \nnews:[email protected]...\n> On Wed, 2004-12-22 at 00:03 +0100, Thomas Wegner wrote:\n>> Hello, i have a problem between V7.4.3 Cygwin and\n>> V8.0RC2 W2K. I have 2 systems:\n>>\n>> 1. Production Machine\n>> - Dual P4 3000MHz\n>> - 2 GB RAM\n>> - W2K\n>> - PostgreSQL 7.4.3 under Cygwin\n>> - i connect to it over a DSL Line\n>> 2. Develop Machine\n>> - P4 1800MHz\n>> - 760 MB RAM\n>> - PostgreSQL Native Windows\n>> - local connection 100MB/FD\n>>\n>> Both systems use the default postgresql.conf. Now the problem.\n>> I have an (unoptimized, dynamic) query wich was execute on the\n>> production machine over DSL in 2 seconds and on my develop\n>> machine, connected over local LAN, in 119 seconds!\n>\n> has the development database been ANALYZED ?\n>\n> gnari\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n> \n\n\n", "msg_date": "Sat, 25 Dec 2004 14:28:41 +0100", "msg_from": "\"Thomas Wegner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Speed in V8.0" } ]
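The resolution above (running ANALYZE on the development database) is worth spelling out, because it is the first thing to check when the same query behaves very differently on two servers. A sketch using psql; the database name and the query are placeholders for your own:

# refresh the planner's statistics on the slow server
psql -d devdb -c 'VACUUM ANALYZE;'
# then compare the plans and timings on both machines
psql -d devdb -c 'EXPLAIN ANALYZE SELECT ...;'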
[ { "msg_contents": "Hi,\n\nA small test with 8rc2 and BLCKSZ of 8k and 32k.\nIt seems there is a 10% increase in the number of transactions by \nsecond.\nDoes someone plan to carefully test the impact of BLCKSZ ?\n\nCordialement,\nJean-Gérard Pailloncy\n\nwith 8k:\n > /test/bin/pgbench -c 10 -t 300 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 300\nnumber of transactions actually processed: 3000/3000\n...\ntps = 26.662146 (excluding connections establishing)\ntps = 23.742071 (excluding connections establishing)\ntps = 28.323828 (excluding connections establishing)\ntps = 27.944931 (excluding connections establishing)\ntps = 25.898393 (excluding connections establishing)\ntps = 26.727316 (excluding connections establishing)\ntps = 27.499692 (excluding connections establishing)\ntps = 25.430853 (excluding connections establishing)\n\nwith 32k:\n > /test/bin/pgbench -c 10 -t 300 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 300\nnumber of transactions actually processed: 3000/3000\n...\ntps = 28.609049 (excluding connections establishing)\ntps = 29.978503 (excluding connections establishing)\ntps = 30.502606 (excluding connections establishing)\ntps = 33.406386 (excluding connections establishing)\ntps = 30.422134 (excluding connections establishing)\ntps = 26.878762 (excluding connections establishing)\ntps = 31.461116 (excluding connections establishing)\n\n", "msg_date": "Wed, 22 Dec 2004 17:31:50 +0100", "msg_from": "Pailloncy Jean-Gerard <[email protected]>", "msg_from_op": true, "msg_subject": "8rc2 & BLCKSZ" }, { "msg_contents": ">>>>> \"PJ\" == Pailloncy Jean-Gerard <[email protected]> writes:\n\nPJ> Hi,\nPJ> A small test with 8rc2 and BLCKSZ of 8k and 32k.\nPJ> It seems there is a 10% increase in the number of transactions by\nPJ> second.\nPJ> Does someone plan to carefully test the impact of BLCKSZ ?\n\nOne of the suggestions handed to me a long time ago for speeding up PG\non FreeBSD was to double the default blocksize in PG. I tried it, but\nfound not a significant enough speed up to make it worth the trouble\nto remember to patch every version of Pg during the upgrade path (ie,\n7.4.0 -> 7.4.2 etc.) Forgetting to do that would be disastrous!\n\n\n-- \n=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=\nVivek Khera, Ph.D. Khera Communications, Inc.\nInternet: [email protected] Rockville, MD +1-301-869-4449 x806\nAIM: vivekkhera Y!: vivek_khera http://www.khera.org/~vivek/\n", "msg_date": "Wed, 22 Dec 2004 15:51:02 -0500", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8rc2 & BLCKSZ" }, { "msg_contents": "Vivek Khera <[email protected]> writes:\n> One of the suggestions handed to me a long time ago for speeding up PG\n> on FreeBSD was to double the default blocksize in PG. I tried it, but\n> found not a significant enough speed up to make it worth the trouble\n> to remember to patch every version of Pg during the upgrade path (ie,\n> 7.4.0 -> 7.4.2 etc.) Forgetting to do that would be disastrous!\n\nNot really --- the postmaster will refuse to start if the BLCKSZ shown\nin pg_control doesn't match what is compiled in. I concur though that\nthere may be no significant performance gain. 
For some workloads there\nmay well be a performance loss from increasing BLCKSZ.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 22 Dec 2004 16:04:50 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8rc2 & BLCKSZ " }, { "msg_contents": "Am Mittwoch, 22. Dezember 2004 22:04 schrieb Tom Lane:\n> Vivek Khera <[email protected]> writes:\n> > One of the suggestions handed to me a long time ago for speeding up PG\n> > on FreeBSD was to double the default blocksize in PG. I tried it, but\n> > found not a significant enough speed up to make it worth the trouble\n> > to remember to patch every version of Pg during the upgrade path (ie,\n> > 7.4.0 -> 7.4.2 etc.) Forgetting to do that would be disastrous!\n>\n> Not really --- the postmaster will refuse to start if the BLCKSZ shown\n> in pg_control doesn't match what is compiled in. I concur though that\n> there may be no significant performance gain. For some workloads there\n> may well be a performance loss from increasing BLCKSZ.\n\nI've several databases of the same version 7.2 with rowsizes from 8k and 32k \nwith the same workload (a content management system), and the performance of \nthe 32k variants is slightly better for a few queries, overall responsivness \nseems to better with 8k (maybe because the 8k variant has 4x more buffers).\n\nRegards,\n Mario Weilguni\n", "msg_date": "Thu, 23 Dec 2004 08:00:41 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8rc2 & BLCKSZ" } ]
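For anyone who wants to reproduce this comparison, the pgbench database has to be initialized at the same scaling factor before the timed runs, and BLCKSZ itself is a compile-time constant, so each block size means a separate server build. A sketch of the benchmark side, assuming the same install prefix used in the post:

# create and populate the pgbench tables at scaling factor 10, as in the runs above
/test/bin/pgbench -i -s 10 test
# ten clients, 300 transactions each
/test/bin/pgbench -c 10 -t 300 test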
[ { "msg_contents": "Merry Xmas!\n\nI have a query. It sometimes runs OK and sometimes\nhorrible. Here is result from explain analyze:\n\nexplain analyze\nSELECT module, sum(c1) + sum(c2) + sum(c3) + sum(c4)\n+ sum(c5) AS \"count\"\nFROM xxx\nWHERE created >= ('now'::timestamptz - '1\nday'::interval) AND customer_id='158'\n AND domain='xyz.com'\nGROUP BY module;\n\nThere is an index:\nIndexes: xxx_idx btree (customer_id, created,\n\"domain\")\n\nTable are regularlly \"vacuum full\" and reindex and\nit has 3 million rows.\n\n \n QUERY PLAN \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=139.53..141.65 rows=12 width=30)\n(actual time=17623.65..17623.65 rows=0 loops=1)\n -> Group (cost=139.53..140.14 rows=121 width=30)\n(actual time=17623.64..17623.64 rows=0 loops=1)\n -> Sort (cost=139.53..139.83 rows=121\nwidth=30) (actual time=17623.63..17623.63 rows=0\nloops=1)\n Sort Key: module\n -> Index Scan using xxx_idx on xxx \n(cost=0.00..135.33 rows=121 width=30) (actual\ntime=17622.95..17622.95 rows=0 loops=1)\n Index Cond: ((customer_id = 158)\nAND (created >= '2004-12-02\n11:26:22.596656-05'::timestamp with time zone) AND\n(\"domain\" = 'xyz.com'::character varying))\n Total runtime: 17624.05 msec\n(7 rows)\n \n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=142.05..144.21 rows=12 width=30)\n(actual time=1314931.09..1314931.09 rows=0 loops=1)\n -> Group (cost=142.05..142.66 rows=124 width=30)\n(actual time=1314931.08..1314931.08 rows=0 loops=1)\n -> Sort (cost=142.05..142.36 rows=124\nwidth=30) (actual time=1314931.08..1314931.08 rows=0\nloops=1)\n Sort Key: module\n -> Index Scan using xxx_idx on xxx \n(cost=0.00..137.74 rows=124 width=30) (actual\ntime=1314930.72..1314930.72 rows=0 loops=1)\n Index Cond: ((customer_id = 158)\nAND (created >= '2004-12-01\n15:21:51.785526-05'::timestamp with time zone) AND\n(\"domain\" = 'xyz.com'::character varying))\n Total runtime: 1314933.16 msec\n(7 rows)\n\nWhat can I try?\n\nThanks,\n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nDress up your holiday email, Hollywood style. Learn more. \nhttp://celebrity.mail.yahoo.com\n", "msg_date": "Wed, 22 Dec 2004 12:09:04 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Why so much time difference with a same query/plan?" }, { "msg_contents": "Does the order of columns in the index matter since\nmore than 50% customer_id = 158?\n\nI think it does not in Oracle.\n\nWill the performance be better if I change index\nxxx_idx to (\"domain\", customer_id, created)?\n\nI will test myself when possible.\n\nThanks,\n\n--- Litao Wu <[email protected]> wrote:\n\n> Merry Xmas!\n> \n> I have a query. It sometimes runs OK and sometimes\n> horrible. 
Here is result from explain analyze:\n> \n> explain analyze\n> SELECT module, sum(c1) + sum(c2) + sum(c3) +\n> sum(c4)\n> + sum(c5) AS \"count\"\n> FROM xxx\n> WHERE created >= ('now'::timestamptz - '1\n> day'::interval) AND customer_id='158'\n> AND domain='xyz.com'\n> GROUP BY module;\n> \n> There is an index:\n> Indexes: xxx_idx btree (customer_id, created,\n> \"domain\")\n> \n> Table are regularlly \"vacuum full\" and reindex and\n> it has 3 million rows.\n> \n> \n> \n> QUERY PLAN \n> \n> \n>\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=139.53..141.65 rows=12 width=30)\n> (actual time=17623.65..17623.65 rows=0 loops=1)\n> -> Group (cost=139.53..140.14 rows=121\n> width=30)\n> (actual time=17623.64..17623.64 rows=0 loops=1)\n> -> Sort (cost=139.53..139.83 rows=121\n> width=30) (actual time=17623.63..17623.63 rows=0\n> loops=1)\n> Sort Key: module\n> -> Index Scan using xxx_idx on xxx \n> (cost=0.00..135.33 rows=121 width=30) (actual\n> time=17622.95..17622.95 rows=0 loops=1)\n> Index Cond: ((customer_id =\n> 158)\n> AND (created >= '2004-12-02\n> 11:26:22.596656-05'::timestamp with time zone) AND\n> (\"domain\" = 'xyz.com'::character varying))\n> Total runtime: 17624.05 msec\n> (7 rows)\n> \n> \n> QUERY PLAN \n> \n> \n>\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=142.05..144.21 rows=12 width=30)\n> (actual time=1314931.09..1314931.09 rows=0 loops=1)\n> -> Group (cost=142.05..142.66 rows=124\n> width=30)\n> (actual time=1314931.08..1314931.08 rows=0 loops=1)\n> -> Sort (cost=142.05..142.36 rows=124\n> width=30) (actual time=1314931.08..1314931.08 rows=0\n> loops=1)\n> Sort Key: module\n> -> Index Scan using xxx_idx on xxx \n> (cost=0.00..137.74 rows=124 width=30) (actual\n> time=1314930.72..1314930.72 rows=0 loops=1)\n> Index Cond: ((customer_id =\n> 158)\n> AND (created >= '2004-12-01\n> 15:21:51.785526-05'::timestamp with time zone) AND\n> (\"domain\" = 'xyz.com'::character varying))\n> Total runtime: 1314933.16 msec\n> (7 rows)\n> \n> What can I try?\n> \n> Thanks,\n> \n> \n> \n> \t\t\n> __________________________________ \n> Do you Yahoo!? \n> Dress up your holiday email, Hollywood style. Learn\n> more. \n> http://celebrity.mail.yahoo.com\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the\n> unregister command\n> (send \"unregister YourEmailAddressHere\" to\n> [email protected])\n> \n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Mail - 250MB free storage. Do more. Manage less. \nhttp://info.mail.yahoo.com/mail_250\n", "msg_date": "Wed, 22 Dec 2004 13:52:40 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Why so much time difference with a same query/plan?" }, { "msg_contents": "Hi,\n\nOn Wed, Dec 22, 2004 at 01:52:40PM -0800, Litao Wu wrote:\n> Does the order of columns in the index matter since\n> more than 50% customer_id = 158?\n> \n> I think it does not in Oracle.\n> \n> Will the performance be better if I change index\n> xxx_idx to (\"domain\", customer_id, created)?\n\nWell, in Oracle this would of cause matter. 
Oracle calculates index\nusage by being able to fill all index's attributes from the left to the\nright. If any one attribute within is missing Oracle would not test if\nit is only one attribute missing, or if all other attributes are missing\nwithin the query's where clause. \nNormaly you'd create an index using the most frequently parametrized\nattributes first, then the second ones and so on. If the usage isn't\nthat different, you would use the most granule attribute in foremost\nfollowed by the second and so on.\n\nRegards,\nYann\n", "msg_date": "Thu, 23 Dec 2004 08:04:53 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so much time difference with a same query/plan?" }, { "msg_contents": "Yann Michel <[email protected]> writes:\n\n> On Wed, Dec 22, 2004 at 01:52:40PM -0800, Litao Wu wrote:\n>> Does the order of columns in the index matter since\n>> more than 50% customer_id = 158?\n>> \n>> I think it does not in Oracle.\n>> \n>> Will the performance be better if I change index\n>> xxx_idx to (\"domain\", customer_id, created)?\n>\n> Well, in Oracle this would of cause matter. Oracle calculates index\n> usage by being able to fill all index's attributes from the left to the\n> right. If any one attribute within is missing Oracle would not test if\n> it is only one attribute missing, or if all other attributes are missing\n> within the query's where clause. \n\nThis depends on the version of Oracle you're using. Oracle 9i \nintroduced Index Skip Scans:\n\n http://www.oracle.com/technology//products/oracle9i/daily/apr22.html\n\nI don't know whether pg has something similar?\n", "msg_date": "Sun, 26 Dec 2004 13:30:15 +0100", "msg_from": "Karl Vogel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so much time difference with a same query/plan?" }, { "msg_contents": "On Sun, Dec 26, 2004 at 13:30:15 +0100,\n Karl Vogel <[email protected]> wrote:\n> \n> This depends on the version of Oracle you're using. Oracle 9i \n> introduced Index Skip Scans:\n> \n> http://www.oracle.com/technology//products/oracle9i/daily/apr22.html\n> \n> I don't know whether pg has something similar?\n\nPostgres doesn't currently do this. There was some discussion about this\nnot too long ago, but I don't think anyone indicated that they were going to\nwork on it for 8.1.\n\nPostgres can use the leading part of a multikey index to start a scan,\nbut it will just do a normal index scan with a filter.\n", "msg_date": "Fri, 31 Dec 2004 00:10:42 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why so much time difference with a same query/plan?" } ]
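Since the column-order question never got a direct test in the thread: the proposed reordering can be tried side by side with the existing index, without dropping anything, by building the new index and comparing plans for the same query. A sketch using the thread's table and query; the database name and the new index name are made up for illustration:

psql -d proddb <<'SQL'
-- candidate index with the equality columns leading, as Litao proposes
CREATE INDEX xxx_dom_cust_created_idx ON xxx ("domain", customer_id, created);
ANALYZE xxx;
-- rerun the thread's query and compare against the plan that used xxx_idx
EXPLAIN ANALYZE
SELECT module, sum(c1) + sum(c2) + sum(c3) + sum(c4) + sum(c5) AS "count"
FROM xxx
WHERE created >= ('now'::timestamptz - '1 day'::interval)
  AND customer_id = '158' AND domain = 'xyz.com'
GROUP BY module;
SQL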
[ { "msg_contents": "Hi,\ni recently run pgbench against different servers and got some results I \ndont quite understand.\n\nA) EV1: Dual Xenon, 2GHz, 1GB Memory, SCSI 10Krpm, RHE3\nB) Dual Pentium3 1.4ghz (Blade), SCSI Disk 10Krmp, 1GB Memory, Redhat 8\nC) P4 3.2GHz, IDE 7.2Krpm, 1GBMem, Fedora Core2\n\nAll did run only postgres 7.4.6\n\npgconf settings:\nmax_connections = 100\nshared_buffers = 8192\nsort_mem = 8192\nvacuum_mem = 32768\nmax_fsm_pages = 200000\nmax_fsm_relations = 10000\nwal_sync_method = fsync \nwal_buffers = 64 \ncheckpoint_segments = 10 \neffective_cache_size = 65536\nrandom_page_cost = 1.4\n\n/etc/sysctl.conf\nshmall and shmmax set to 768mb\n\n\nRunnig PGbench reported\nA) 220 tps\nB) 240 tps\nC) 510 tps\n\nRunning hdparm reported\nA) 920mb/s (SCSI 10k)\nB) 270mb/s (SCSI 10k)\nC) 1750mb/s (IDE 7.2k)\n\nWhat I dont quite understand is why a P3.2 is twice as fast as a Dual \nXenon with SCSI disks, A dual Xenon 2GHz is not faster than a dual P3 \n1.4Ghz, and the hdparm results also dont make much sense.\n\nHas anybody an explanation for that? Is there something I can do to get \nmore performance out of the SCSI disks?\n\nThanks for any advise\nAlex\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 24 Dec 2004 01:27:15 +1100", "msg_from": "Alex <[email protected]>", "msg_from_op": true, "msg_subject": "Some Performance Advice Needed" }, { "msg_contents": "\nOn Dec 23, 2004, at 9:27 AM, Alex wrote:\n\n\n> Running hdparm reported\n> A) 920mb/s (SCSI 10k)\n> B) 270mb/s (SCSI 10k)\n> C) 1750mb/s (IDE 7.2k)\n\n\nIDE disks lie about write completion (This can be disabled on some \ndrives) whereas SCSI drives wait for the data to actually be written \nbefore they report success. It is quite\neasy to corrupt a PG (Or most any db really) on an IDE drive. Check \nthe archives for more info.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Thu, 23 Dec 2004 09:44:31 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some Performance Advice Needed" }, { "msg_contents": "Alex wrote:\n\n> Hi,\n> i recently run pgbench against different servers and got some results I \n> dont quite understand.\n> \n> A) EV1: Dual Xenon, 2GHz, 1GB Memory, SCSI 10Krpm, RHE3\n> B) Dual Pentium3 1.4ghz (Blade), SCSI Disk 10Krmp, 1GB Memory, Redhat 8\n> C) P4 3.2GHz, IDE 7.2Krpm, 1GBMem, Fedora Core2\n >\n> Runnig PGbench reported\n> A) 220 tps\n> B) 240 tps\n> C) 510 tps\n> \n> Running hdparm reported\n> A) 920mb/s (SCSI 10k)\n> B) 270mb/s (SCSI 10k)\n> C) 1750mb/s (IDE 7.2k)\n> \n> What I dont quite understand is why a P3.2 is twice as fast as a Dual \n> Xenon with SCSI disks, A dual Xenon 2GHz is not faster than a dual P3 \n> 1.4Ghz, and the hdparm results also dont make much sense.\n\nA few things to clear up about the P3/P4/Xeons.\n\nXeons are P4s. Hence, a P4 2ghz will run the same speed as a Xeon 2ghz \nassuming all other variables are the same. Of course they aren't because \nyour P4 is probably running unregistered memory, uses either a 533mhz or \n800mhz FSB compared to the Xeon's shared 400mhz amongs 2 CPUs, running a \nfaster non-smp kernel. Add all those variables up and it's definitely \npossible for a P4 3.2ghz to run twice as fast as a Dual Xeon 2ghz on a \nsingle-thread benchmark. (The corollary here is that in a multi-thread \nbenchmark, the 2X Xeon can only hope to equal your P4 3.2.)\n\nP3s are faster than P4s at the same clock rate. By a lot. 
It's not \nreally that surprising that a P3 1.4 is faster than a P4/Xeon 2.0. I've \nseen results like this many times over a wide range of applications.\n\nThe only variable that is throwing off your comparisons are the hard \ndrives. IDE drives have write caching on by default -- SCSI drives have \nit off. Use: hdparm -W0 /dev/hda to turn it off on the P4 system and \nrerun the tests then.\n", "msg_date": "Thu, 23 Dec 2004 12:49:09 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some Performance Advice Needed" }, { "msg_contents": "Jeff wrote:\n\n>\n> On Dec 23, 2004, at 9:27 AM, Alex wrote:\n>\n>\n>> Running hdparm reported\n>> A) 920mb/s (SCSI 10k)\n>> B) 270mb/s (SCSI 10k)\n>> C) 1750mb/s (IDE 7.2k)\n>\n>\n>\n> IDE disks lie about write completion (This can be disabled on some \n> drives) whereas SCSI drives wait for the data to actually be written \n> before they report success. It is quite\n> easy to corrupt a PG (Or most any db really) on an IDE drive. Check \n> the archives for more info.\n\nDo we have any real info on this? Specifically which drives? Is SATA the \nsame way? What about SATA-II?\nI am not saying it isn't true (I know it is) but this is a blanket \nstatement that may or may not be\ntrue with newer tech.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n>\n> -- \n> Jeff Trout <[email protected]>\n> http://www.jefftrout.com/\n> http://www.stuarthamm.net/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 23 Dec 2004 13:27:12 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some Performance Advice Needed" }, { "msg_contents": ">> IDE disks lie about write completion (This can be disabled on some \n>> drives) whereas SCSI drives wait for the data to actually be written \n>> before they report success. It is quite\n>> easy to corrupt a PG (Or most any db really) on an IDE drive. Check \n>> the archives for more info.\n> \n> \n> Do we have any real info on this? Specifically which drives? Is SATA the \n> same way? What about SATA-II?\n> I am not saying it isn't true (I know it is) but this is a blanket \n> statement that may or may not be\n> true with newer tech.\n\n From my experience with SATA controllers, write caching is controlled \nvia the BIOS.\n", "msg_date": "Thu, 23 Dec 2004 13:45:57 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some Performance Advice Needed" }, { "msg_contents": "\nOn Dec 23, 2004, at 4:27 PM, Joshua D. Drake wrote:\n>>\n>> IDE disks lie about write completion (This can be disabled on some \n>> drives) whereas SCSI drives wait for the data to actually be written \n>> before they report success. It is quite\n>> easy to corrupt a PG (Or most any db really) on an IDE drive. Check \n>> the archives for more info.\n>\n> Do we have any real info on this? Specifically which drives? Is SATA \n> the same way? What about SATA-II?\n> I am not saying it isn't true (I know it is) but this is a blanket \n> statement that may or may not be\n> true with newer tech.\n\nScott Marlowe did some tests a while ago on it. 
They are likely in the \narchives.\nMaybe we can get him to pipe up :)\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Mon, 27 Dec 2004 08:46:28 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some Performance Advice Needed" }, { "msg_contents": "\"[email protected] (\"Joshua D. Drake\")\" wrote in pgsql.performance:\n\n> Jeff wrote:\n> \n>>\n>> On Dec 23, 2004, at 9:27 AM, Alex wrote:\n>>\n>>\n>>> Running hdparm reported\n>>> A) 920mb/s (SCSI 10k)\n>>> B) 270mb/s (SCSI 10k)\n>>> C) 1750mb/s (IDE 7.2k)\n>>\n>>\n>>\n>> IDE disks lie about write completion (This can be disabled on some \n>> drives) whereas SCSI drives wait for the data to actually be written \n>> before they report success. It is quite\n>> easy to corrupt a PG (Or most any db really) on an IDE drive. Check \n>> the archives for more info.\n> \n> Do we have any real info on this? Specifically which drives? Is SATA the \n> same way? What about SATA-II?\n> I am not saying it isn't true (I know it is) but this is a blanket \n> statement that may or may not be\n> true with newer tech.\n\n \tThe name hasn't changed, but don't let that give you the wrong \nimpression because SCSI continues to improve. I only use SCSI drives in \nall my servers, and that's because they always seem to outperform SATA and \nIDE when there's a multi-user[1] requirement (of course, the choice of OS\n[2] is an important factor here too).\n\n \tDisk fragmentation also plays a role, but can actually become a \nhinderance when in a multi-user environment. I find that the caching \nalgorithm in the OS that I usually choose [2] actually performs extremely \nwell when more users are accessing data on volumes where the data is \nfragmented. I'm told that this is very similar in the Unix environment as \nwell. Defragmentation makes more sense in a single-user environment \nbecause there are generally a very small number of files being loaded at \none time, and so a user can benefit hugely from defragmentation.\n\n \tHere's an interesting article (it comes complete with anonymous non-\nlogical emotion-based reader comments too):\n\n \t \tSCSI vs. IDE: Which is really faster?\n \t \n\thttp://hardware.devchannel.org/hardwarechannel/03/10/20/1953249.shtml?\ntid=20&tid=38&tid=49\n\n[1] A somewhat busy web and/or eMail server certainly counts as a multi-\nuser requirement. Put a database on it where the data isn't being accessed \nsequentially, and that can certainly meet the requirements too.\n[2] Nearly all my servers run Novell NetWare.\n\n-- \nRandolf Richardson, pro-active spam fighter - [email protected]\nVancouver, British Columbia, Canada\n\nSending eMail to other SMTP servers is a privilege.\n", "msg_date": "Thu, 6 Jan 2005 19:14:45 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some Performance Advice Needed" } ]
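To rerun the original comparison on equal footing, the IDE drive's write cache can be switched off first, as William suggests above; SCSI drives normally ship with it off already. A sketch, with /dev/hda standing in for whatever device actually holds the data:

# disable the on-drive write cache before benchmarking
hdparm -W0 /dev/hda
# ... rerun pgbench ...
# re-enable it afterwards if the disk does not carry database data
hdparm -W1 /dev/hda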
[ { "msg_contents": "Hi All,\n \nI have a database running on Postgres 7.3.2. I am dumping the database schema from postgres 7.4.6 to restore it on the new Postgres version. The two postgres versions are running on different machines. I did the dump and tried restoring it. I got an error message saying type \"lo\" is not defined yet. I reordered the list and moved the type definition and the functions using the type \"lo\" to the top, using pg_restore and tried restoring it again.\n \nThese are the corresponding functions/types defined using the type \"lo\":\n \nSET SESSION AUTHORIZATION 'user';--\n-- TOC entry 5 (OID 19114)\n-- Name: lo; Type: TYPE; Schema: public; Owner: user\n-- Data Pos: 0\n--\nCREATE TYPE lo (\n INTERNALLENGTH = 4,\n INPUT = lo_in,\n OUTPUT = lo_out,\n DEFAULT = '-',\n ALIGNMENT = int4,\n STORAGE = plain\n);\n\n\nSET SESSION AUTHORIZATION 'postgres';\n--\n-- TOC entry 851 (OID 19115)\n-- Name: lo_in(cstring); Type: FUNCTION; Schema: public; Owner: postgres\n-- Data Pos: 0\n--\nCREATE FUNCTION lo_in(cstring) RETURNS lo\n AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_in'\n LANGUAGE c;\n\n--\n-- TOC entry 852 (OID 19116)\n-- Name: lo_out(lo); Type: FUNCTION; Schema: public; Owner: postgres\n-- Data Pos: 0\n--\nCREATE FUNCTION lo_out(lo) RETURNS cstring\n AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_out'\n LANGUAGE c;\n\n--\n-- TOC entry 853 (OID 19117)\n-- Name: lo_manage(); Type: FUNCTION; Schema: public; Owner: postgres\n-- Data Pos: 0\n--\nCREATE FUNCTION lo_manage() RETURNS \"trigger\"\n AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_manage'\n LANGUAGE c;\n\n--\n-- TOC entry 854 (OID 19129)\n-- Name: lo_oid(lo); Type: FUNCTION; Schema: public; Owner: postgres\n-- Data Pos: 0\n--\nCREATE FUNCTION lo_oid(lo) RETURNS oid\n AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_oid'\n LANGUAGE c;\n \n--\n-- TOC entry 855 (OID 19130)\n-- Name: oid(lo); Type: FUNCTION; Schema: public; Owner: postgres\n-- Data Pos: 0\n--\nCREATE FUNCTION oid(lo) RETURNS oid\n AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_oid'\n LANGUAGE c;\n \nSET SESSION AUTHORIZATION 'user';\n--\n-- TOC entry 278 (OID 19119)\n-- Name: session; Type: TABLE; Schema: public; Owner: user\n-- Data Pos: 0\n--\nCREATE TABLE \"session\" (\n session_id text NOT NULL,\n pid_owner integer DEFAULT 0,\n pid_pending integer DEFAULT 0,\n created timestamp with time zone DEFAULT now(),\n accessed timestamp with time zone DEFAULT now(),\n modified timestamp with time zone DEFAULT now(),\n uid integer,\n ip inet,\n browser character varying(200),\n params character varying(200),\n content lo\n);\n \n I still get the following errors:\n \n\npsql:trialdump1:4364: NOTICE: type \"lo\" is not yet defined\n\nDETAIL: Creating a shell type definition.\n\npsql:trialdump1:4364: ERROR: could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\n\npsql:trialdump1:4374: ERROR: type lo does not exist\n\npsql:trialdump1:4391: ERROR: function lo_in(cstring) does not exist\n\npsql:trialdump1:4403: ERROR: could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\n\npsql:trialdump1:4425: ERROR: type \"lo\" does not exist\n\npsql:trialdump1:4437: ERROR: type lo does not exist\n\npsql:trialdump1:4447: ERROR: type lo does not exist\n\npsql:trialdump1:4460: ERROR: type \"lo\" does not exist\n\npsql:trialdump1:4472: ERROR: could not access file \"/usr/lib/test_funcs.so\": No such file or directory\n\npsql:trialdump1:7606: ERROR: relation \"session\" does not exist\n\npsql:trialdump1:10868: ERROR: 
relation \"session\" does not exist\n\npsql:trialdump1:13155: ERROR: relation \"session\" does not exist\n\n \n\nThe session table uses type \"lo\" for one of it's columns and hence it does not get created.\n\nWhat could the problem be? Is it some sort of access rights problem with respect to the files it is not able to access? \n\n \n\nWhen I restored the dump after commenting out all tables/functions using the type \"lo\", everything works fine. \n\n \n\nIt will be great if someone could throw light on this problem.\n\n \n\nThanks,\n\nSaranya\n\n \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nHi All,\n \nI have a database running on Postgres 7.3.2. I am dumping the database schema from postgres 7.4.6 to restore it on the new Postgres version. The two postgres versions are running on different machines. I did the dump and tried restoring it. I got an error message saying type \"lo\" is not defined yet. I reordered the list and moved the type definition and the functions using the type \"lo\" to the top, using pg_restore and tried restoring it again.\n \nThese are the corresponding functions/types defined using the type \"lo\":\n \nSET SESSION AUTHORIZATION 'user';\n---- TOC entry 5 (OID 19114)-- Name: lo; Type: TYPE; Schema: public; Owner: user-- Data Pos: 0--\nCREATE TYPE lo (    INTERNALLENGTH = 4,    INPUT = lo_in,    OUTPUT = lo_out,    DEFAULT = '-',    ALIGNMENT = int4,    STORAGE = plain);\nSET SESSION AUTHORIZATION 'postgres';\n---- TOC entry 851 (OID 19115)-- Name: lo_in(cstring); Type: FUNCTION; Schema: public; Owner: postgres-- Data Pos: 0--\nCREATE FUNCTION lo_in(cstring) RETURNS lo    AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_in'    LANGUAGE c;\n---- TOC entry 852 (OID 19116)-- Name: lo_out(lo); Type: FUNCTION; Schema: public; Owner: postgres-- Data Pos: 0--\nCREATE FUNCTION lo_out(lo) RETURNS cstring    AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_out'    LANGUAGE c;\n---- TOC entry 853 (OID 19117)-- Name: lo_manage(); Type: FUNCTION; Schema: public; Owner: postgres-- Data Pos: 0--\nCREATE FUNCTION lo_manage() RETURNS \"trigger\"    AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_manage'    LANGUAGE c;\n---- TOC entry 854 (OID 19129)-- Name: lo_oid(lo); Type: FUNCTION; Schema: public; Owner: postgres-- Data Pos: 0--\nCREATE FUNCTION lo_oid(lo) RETURNS oid    AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_oid'    LANGUAGE c;\n \n---- TOC entry 855 (OID 19130)-- Name: oid(lo); Type: FUNCTION; Schema: public; Owner: postgres-- Data Pos: 0--\nCREATE FUNCTION oid(lo) RETURNS oid    AS '/usr/local/pgsql/lib/contrib/lo.so', 'lo_oid'    LANGUAGE c;\n \nSET SESSION AUTHORIZATION 'user';\n---- TOC entry 278 (OID 19119)-- Name: session; Type: TABLE; Schema: public; Owner: user-- Data Pos: 0--\nCREATE TABLE \"session\" (    session_id text NOT NULL,    pid_owner integer DEFAULT 0,    pid_pending integer DEFAULT 0,    created timestamp with time zone DEFAULT now(),    accessed timestamp with time zone DEFAULT now(),    modified timestamp with time zone DEFAULT now(),    uid integer,    ip inet,    browser character varying(200),    params character varying(200),    content lo);\n \n I still get the following errors:\n \n\npsql:trialdump1:4364: NOTICE:  type \"lo\" is not yet defined\nDETAIL:  Creating a shell type definition.\npsql:trialdump1:4364: ERROR:  could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\npsql:trialdump1:4374: ERROR:  
type lo does not exist\npsql:trialdump1:4391: ERROR:  function lo_in(cstring) does not exist\npsql:trialdump1:4403: ERROR:  could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\npsql:trialdump1:4425: ERROR:  type \"lo\" does not exist\npsql:trialdump1:4437: ERROR:  type lo does not exist\npsql:trialdump1:4447: ERROR:  type lo does not exist\npsql:trialdump1:4460: ERROR:  type \"lo\" does not exist\npsql:trialdump1:4472: ERROR:  could not access file \"/usr/lib/test_funcs.so\": No such file or directory\npsql:trialdump1:7606: ERROR:  relation \"session\" does not exist\npsql:trialdump1:10868: ERROR:  relation \"session\" does not exist\npsql:trialdump1:13155: ERROR:  relation \"session\" does not exist\n \nThe session table uses type \"lo\" for one of it's columns and hence it does not get created.\nWhat could the problem be? Is it some sort of access rights problem with respect to the files it is not able to access? \n \nWhen I restored the dump after commenting out all tables/functions using the type \"lo\", everything works fine. \n \nIt will be great if someone could throw light on this problem.\n \nThanks,\nSaranya\n __________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Mon, 27 Dec 2004 09:32:33 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "user defined data type problem while dumping?" }, { "msg_contents": "sarlav kumar <[email protected]> writes:\n> I still get the following errors:\n\n> psql:trialdump1:4364: ERROR: could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\n\nLooks like you forgot to build the datatype's shared library on the new\ninstallation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Dec 2004 12:57:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: user defined data type problem while dumping? " }, { "msg_contents": "Hi Saranya,\n\n> psql:trialdump1:4364: ERROR: could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\n> psql:trialdump1:4403: ERROR: could not access file \"/usr/local/pgsql/lib/contrib/lo.so\": No such file or directory\n\nIt looks like you need to install the lo library on the machine you are \ntrying to restore to. It contains the implementation for the type you \nare missing and it is not installed by default. You can find it in the \ncontrib section of the PostgreSQL source tree.\n\n> psql:trialdump1:4472: ERROR: could not access file \"/usr/lib/test_funcs.so\": No such file or directory\n\nDon't know about this one, but this may be a similar problem (i.e. file \nexists on the machine you dumped from but not on the one you try to \nrestore to).\n\n> What could the problem be? Is it some sort of access rights problem with respect to the files it is not able to access? \n\nIf this was an access permission problem you would probably get \n'Permission denied' instead of 'No such file or directory' in the error \nmessages.\n\nHTH,\n\nAlex\n", "msg_date": "Wed, 29 Dec 2004 10:58:06 +1100", "msg_from": "Alexander Borkowski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] user defined data type problem while dumping?" } ]
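A minimal sketch of the fix suggested in the thread above: build and install the contrib "lo" module on the machine being restored to, so the shared library exists before the dump is replayed. The source path, target database name, and symlink step below are assumptions based on the thread, not verified details of this installation.

# On the new (7.4.6) server, assuming it was built from source under
# /usr/local/src/postgresql-7.4.6 (path is an assumption):
cd /usr/local/src/postgresql-7.4.6/contrib/lo
make
make install                       # installs lo.so under the PostgreSQL lib directory

# The dump hard-codes /usr/local/pgsql/lib/contrib/lo.so; if make install
# placed the file elsewhere, either edit those paths in the dump or add a
# symlink, e.g.:
#   mkdir -p /usr/local/pgsql/lib/contrib
#   ln -s /usr/local/pgsql/lib/lo.so /usr/local/pgsql/lib/contrib/lo.so

# Then replay the schema dump again:
psql -d targetdb -f trialdump1

On a packaged installation the equivalent step would be installing the matching contrib package for that PostgreSQL version rather than building from source.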
[ { "msg_contents": "Hi Everybody.\n\n I have a table in my production database which gets updated \nregularly and the stats on this table in pg_class are totally wrong. I \nused to run vacuumdb on the whole database daily once and when i posted \nthe same problem of wrong stats in the pg_class most of them from this \nlist and also from postgres docs suggested me to run the \"vacuum \nanalyze\" more frequently on this table.\n\nI had a setup a cronjob couple of weeks ago to run vacuum analyze every \n3 hours on this table and still my stats are totally wrong. This is \naffecting the performance of the queries running on this table very badly. \n\nHow can i fix this problem ? or is this the standard postgres behaviour ?\n\nHere are the stats from the problem table on my production database\n\n relpages | reltuples\n----------+-------------\n 168730 | 2.19598e+06\n\nIf i rebuild the same table on dev db and check the stats they are \ntotally different, I was hoping that there would be some difference in \nthe stats from the production db stats but not at this extent, as you \ncan see below there is a huge difference in the stats.\n\n relpages | reltuples\n----------+-----------\n 25230 | 341155\n\n\nThanks!\nPallav\n\n\n", "msg_date": "Mon, 27 Dec 2004 13:52:27 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong Stats and Poor Performance" }, { "msg_contents": "Pallav Kalva wrote:\n\n> Hi Everybody.\n>\n> I have a table in my production database which gets updated \n> regularly and the stats on this table in pg_class are totally wrong. \n> I used to run vacuumdb on the whole database daily once and when i \n> posted the same problem of wrong stats in the pg_class most of them \n> from this list and also from postgres docs suggested me to run the \n> \"vacuum analyze\" more frequently on this table.\n>\n> I had a setup a cronjob couple of weeks ago to run vacuum analyze \n> every 3 hours on this table and still my stats are totally wrong. This \n> is affecting the performance of the queries running on this table very \n> badly.\n> How can i fix this problem ? or is this the standard postgres \n> behaviour ?\n>\n> Here are the stats from the problem table on my production database\n>\n> relpages | reltuples\n> ----------+-------------\n> 168730 | 2.19598e+06\n>\n> If i rebuild the same table on dev db and check the stats they are \n> totally different, I was hoping that there would be some difference in \n> the stats from the production db stats but not at this extent, as you \n> can see below there is a huge difference in the stats.\n>\n> relpages | reltuples\n> ----------+-----------\n> 25230 | 341155\n>\n>\n> Thanks!\n> Pallav\n>\n\nWhat version of the database? As I recall, there are versions which \nsuffer from index bloat if there is a large amount of turnover on the \ntable. I believe VACUUM FULL ANALYZE helps with this. 
As does increasing \nthe max_fsm_pages (after a vacuum full verbose the last couple of lines \ncan give you an indication of how big max_fsm_pages might need to be.)\n\nVacuum full does some locking, which means you don't want to do it all \nthe time, but if you can do it on the weekend, or maybe evenings or \nsomething it might fix the problem.\n\nI don't know if you can recover without a vacuum full, but there might \nalso be something about rebuild index, or maybe dropping and re-creating \nthe index.\nJohn\n=:->", "msg_date": "Mon, 27 Dec 2004 13:27:31 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong Stats and Poor Performance" }, { "msg_contents": "John A Meinel wrote:\n\n> Pallav Kalva wrote:\n>\n>> Hi Everybody.\n>>\n>> I have a table in my production database which gets updated \n>> regularly and the stats on this table in pg_class are totally \n>> wrong. I used to run vacuumdb on the whole database daily once and \n>> when i posted the same problem of wrong stats in the pg_class most of \n>> them from this list and also from postgres docs suggested me to run \n>> the \"vacuum analyze\" more frequently on this table.\n>>\n>> I had a setup a cronjob couple of weeks ago to run vacuum analyze \n>> every 3 hours on this table and still my stats are totally wrong. \n>> This is affecting the performance of the queries running on this \n>> table very badly.\n>> How can i fix this problem ? or is this the standard postgres \n>> behaviour ?\n>>\n>> Here are the stats from the problem table on my production database\n>>\n>> relpages | reltuples\n>> ----------+-------------\n>> 168730 | 2.19598e+06\n>>\n>> If i rebuild the same table on dev db and check the stats they are \n>> totally different, I was hoping that there would be some difference \n>> in the stats from the production db stats but not at this extent, as \n>> you can see below there is a huge difference in the stats.\n>>\n>> relpages | reltuples\n>> ----------+-----------\n>> 25230 | 341155\n>>\n>>\n>> Thanks!\n>> Pallav\n>>\n>\n> What version of the database? As I recall, there are versions which \n> suffer from index bloat if there is a large amount of turnover on the \n> table. I believe VACUUM FULL ANALYZE helps with this. As does \n> increasing the max_fsm_pages (after a vacuum full verbose the last \n> couple of lines can give you an indication of how big max_fsm_pages \n> might need to be.)\n>\n> Vacuum full does some locking, which means you don't want to do it all \n> the time, but if you can do it on the weekend, or maybe evenings or \n> something it might fix the problem.\n>\n> I don't know if you can recover without a vacuum full, but there might \n> also be something about rebuild index, or maybe dropping and \n> re-creating the index.\n> John\n> =:->\n\nHi John,\n\n Thanks! for the reply, My postgres version is 7.4.2. since this \nis on a production database and one of critical table in our system I \ncant run the vacuum full analyze on this table because of the locks. I \nrecently rebuilt this table from the scratch and recreated all the \nindexes and after 2-3 weeks the same problem again. My max_fsm_pages are \nset to the default value due think it might be the problem ? i would \nlike to change it but that involves restarting the postgres database \nwhich i cant do at this moment . What is index bloat ? 
do you think \nrebuilding the indexes again might help some extent ?\n\nPallav\n\n", "msg_date": "Mon, 27 Dec 2004 14:51:21 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong Stats and Poor Performance" }, { "msg_contents": "Pallav Kalva wrote:\n\n> John A Meinel wrote:\n>\n>> Pallav Kalva wrote:\n>>\n>>> Hi Everybody.\n>>>\n>>> I have a table in my production database which gets updated \n>>> regularly and the stats on this table in pg_class are totally \n>>> wrong. I used to run vacuumdb on the whole database daily once and \n>>> when i posted the same problem of wrong stats in the pg_class most \n>>> of them from this list and also from postgres docs suggested me to \n>>> run the \"vacuum analyze\" more frequently on this table.\n>>>\n>>> I had a setup a cronjob couple of weeks ago to run vacuum analyze \n>>> every 3 hours on this table and still my stats are totally wrong. \n>>> This is affecting the performance of the queries running on this \n>>> table very badly.\n>>> How can i fix this problem ? or is this the standard postgres \n>>> behaviour ?\n>>>\n>>> Here are the stats from the problem table on my production database\n>>>\n>>> relpages | reltuples\n>>> ----------+-------------\n>>> 168730 | 2.19598e+06\n>>>\n>>> If i rebuild the same table on dev db and check the stats they are \n>>> totally different, I was hoping that there would be some difference \n>>> in the stats from the production db stats but not at this extent, as \n>>> you can see below there is a huge difference in the stats.\n>>>\n>>> relpages | reltuples\n>>> ----------+-----------\n>>> 25230 | 341155\n>>>\n>>>\n>>> Thanks!\n>>> Pallav\n>>>\n>>\n>> What version of the database? As I recall, there are versions which \n>> suffer from index bloat if there is a large amount of turnover on the \n>> table. I believe VACUUM FULL ANALYZE helps with this. As does \n>> increasing the max_fsm_pages (after a vacuum full verbose the last \n>> couple of lines can give you an indication of how big max_fsm_pages \n>> might need to be.)\n>>\n>> Vacuum full does some locking, which means you don't want to do it \n>> all the time, but if you can do it on the weekend, or maybe evenings \n>> or something it might fix the problem.\n>>\n>> I don't know if you can recover without a vacuum full, but there \n>> might also be something about rebuild index, or maybe dropping and \n>> re-creating the index.\n>> John\n>> =:->\n>\n>\n> Hi John,\n>\n> Thanks! for the reply, My postgres version is 7.4.2. since this \n> is on a production database and one of critical table in our system I \n> cant run the vacuum full analyze on this table because of the locks. I \n> recently rebuilt this table from the scratch and recreated all the \n> indexes and after 2-3 weeks the same problem again. My max_fsm_pages \n> are set to the default value due think it might be the problem ? i \n> would like to change it but that involves restarting the postgres \n> database which i cant do at this moment . What is index bloat ? do \n> you think rebuilding the indexes again might help some extent ?\n>\n> Pallav\n>\n\nI'm going off of what I remember reading from the mailing lists, so \nplease search them to find more information. But basically, there are \nbugs in older version of postgres that don't clean up indexes properly. \nSo if you add and delete a lot of entries, my understanding is that the \nindex still contains entries for the deleted items. 
Which means that if \nyou have a lot of turnover your index keeps growing in size.\n\n From what I'm hearing you do need to increase max_fsm_pages, but \nwithout the vacuum full analyze verbose, I don't have any feelings for \nwhat it needs to be. Probably doing a search through the mailing lists \nfor \"increase max_fsm_relations max_fsm_pages\" (I forgot about the first \none earlier), should help.\n\nAt the end of a \"vacuum full analyze verbose\" (vfav) it prints out \nsomething like:\nINFO: free space map: 104 relations, 64 pages stored; 1664 total pages \nneeded\nDETAIL: Allocated FSM size: 1000 relations + 20000 pages = 178 kB \nshared memory.\n\nThat can be used to understand what you need to set max_fsm_relations \nand max_fsm_pages to. As I understand it, you should run under normal \nload for a while, run \"vfav\" and look at the pages. Move your max number \nto something closer (you shouldn't jump the whole way). Then run for a \nwhile again, and repeat. I believe the idea is that when you increase \nthe number, you allow a normal vacuum analyze to keep up with the load. \nSo the vacuum full doesn't have as much to do. So the requirement is less.\n\nObviously my example is a toy database, your numbers should be much higher.\n\nJohn\n=:->", "msg_date": "Mon, 27 Dec 2004 14:33:40 -0600", "msg_from": "John A Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong Stats and Poor Performance" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n\n> >> I had a setup a cronjob couple of weeks ago to run vacuum analyze every 3\n> >> hours on this table and still my stats are totally wrong. This is affecting\n> >> the performance of the queries running on this table very badly.\n> >> How can i fix this problem ? or is this the standard postgres behaviour ?\n\nIf you need it there's nothing wrong with running vacuum even more often than\nthis. As often as every 5 minutes isn't unheard of.\n\nYou should also look at raising the fsm settings. You need to run vacuum often\nenough that on average not more tuples are updated in the intervening time\nthan can be kept track of in the fsm settings. So raising the fsm settings\nallow you to run vacuum less often without having things bloat.\n\nThere's a way to use the output vacuum verbose gives you to find out what fsm\nsettings you need. But I don't remember which number you should be looking at\nthere offhand.\n\n-- \ngreg\n\n", "msg_date": "27 Dec 2004 15:51:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong Stats and Poor Performance" } ]
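A sketch of how the advice in this thread could be applied without taking the exclusive locks of VACUUM FULL; the database and table names below are placeholders.

# A database-wide verbose vacuum prints a free space map summary at the end:
vacuumdb --analyze --verbose proddb 2>&1 | tail -n 5
#   INFO:  free space map: ... relations, ... pages stored; NNNNN total pages needed
#   DETAIL:  Allocated FSM size: 1000 relations + 20000 pages = ...
# If "total pages needed" is larger than the allocated max_fsm_pages, raise
# max_fsm_pages (and max_fsm_relations if the relation count is near its
# limit) in postgresql.conf and restart.
# The heavily updated table can also be vacuumed far more often than every
# three hours, e.g. every 15 minutes from cron:
#   */15 * * * *  postgres  vacuumdb --analyze --table problem_table --dbname proddb

Space already lost to bloated indexes generally has to be reclaimed with a REINDEX (or by dropping and recreating the index) during a quiet window.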
[ { "msg_contents": "I took advantage of the holidays to update a production server (dual\nOpteron on win2k) from an 11/16 build (about beta5 or so) to the latest\nrelease candidate. No configuration changes were made, just a binary\nswap and a server stop/start. \n\nI was shocked to see that statement latency dropped by a fairly large\nmargin. Here is a log snippet taken as measured from the client\napplication:\n\n0.000278866 sec: data1_read_key_item_vendor_file_0 params: $1=005988\n$2=002255 \n0.00032731 sec: data1_read_key_item_link_file_1 params: $1=005988 \n0.000327063 sec: data1_read_key_bm_header_file_0 params: $1=008704 \n0.000304915 sec: data1_read_key_item_vendor_file_0 params: $1=008704\n$2=000117 \n0.00029838 sec: data1_read_key_item_link_file_1 params: $1=008704 \n0.0003252 sec: data1_read_key_bm_header_file_0 params: $1=000268 \n0.000274747 sec: data1_read_key_item_vendor_file_0 params: $1=000268\n$2=000117 \n0.000324275 sec: data1_read_key_item_link_file_1 params: $1=000268\n\nThese are statements that are run (AFIK) the fastest possible way, which\nis using prepared statements over parse/bind. The previous latencies\nusually varied between .0005 and .0007 sec, but never below .5 ms for a\nindex read. Now, as demonstated by the log, I'm getting times less than\nhalf that figure. I benchmarked a transversal over a bill of materials\n(several thousand statements) and noticed about a 40% reduction in time\nto complete the operation.\n\nI wonder exactly what and when this happened, has anybody else noticed a\nsimilar change?\n\nMerlin\n", "msg_date": "Thu, 30 Dec 2004 17:05:54 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "sudden drop in statement turnaround latency -- yay!." }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> I took advantage of the holidays to update a production server (dual\n> Opteron on win2k) from an 11/16 build (about beta5 or so) to the latest\n> release candidate. No configuration changes were made, just a binary\n> swap and a server stop/start. \n\n> I was shocked to see that statement latency dropped by a fairly large\n> margin.\n\nHmm ... I trawled through the CVS logs since 11/16, and did not see very\nmany changes that looked like they might improve performance (list\nattached) --- and even of those, hardly any looked like the change would\nbe significant. Do you know whether the query plans changed? Are you\nrunning few enough queries per connection that backend startup overhead\nmight be an issue?\n\n\t\t\tregards, tom lane\n\n\n2004-12-15 14:16 tgl\n\n\t* src/backend/access/nbtree/nbtutils.c: Calculation of\n\tkeys_are_unique flag was wrong for cases involving redundant\n\tcross-datatype comparisons. Per example from Merlin Moncure.\n\n2004-12-02 10:32 momjian\n\n\t* configure, configure.in, doc/src/sgml/libpq.sgml,\n\tdoc/src/sgml/ref/copy.sgml, src/interfaces/libpq/fe-connect.c,\n\tsrc/interfaces/libpq/fe-print.c, src/interfaces/libpq/fe-secure.c,\n\tsrc/interfaces/libpq/libpq-fe.h, src/interfaces/libpq/libpq-int.h:\n\tRework libpq threaded SIGPIPE handling to avoid interference with\n\tcalling applications. 
This is done by blocking sigpipe in the\n\tlibpq thread and using sigpending/sigwait to possibily discard any\n\tsigpipe we generated.\n\n2004-12-01 20:34 tgl\n\n\t* src/: backend/optimizer/path/costsize.c,\n\tbackend/optimizer/util/plancat.c,\n\ttest/regress/expected/geometry.out,\n\ttest/regress/expected/geometry_1.out,\n\ttest/regress/expected/geometry_2.out,\n\ttest/regress/expected/inherit.out, test/regress/expected/join.out,\n\ttest/regress/sql/inherit.sql, test/regress/sql/join.sql: Make some\n\tadjustments to reduce platform dependencies in plan selection.\tIn\n\tparticular, there was a mathematical tie between the two possible\n\tnestloop-with-materialized-inner-scan plans for a join (ie, we\n\tcomputed the same cost with either input on the inside), resulting\n\tin a roundoff error driven choice, if the relations were both small\n\tenough to fit in sort_mem. Add a small cost factor to ensure we\n\tprefer materializing the smaller input. This changes several\n\tregression test plans, but with any luck we will now have more\n\tstability across platforms.\n\n2004-12-01 14:00 tgl\n\n\t* doc/src/sgml/catalogs.sgml, doc/src/sgml/diskusage.sgml,\n\tdoc/src/sgml/perform.sgml, doc/src/sgml/release.sgml,\n\tsrc/backend/access/nbtree/nbtree.c, src/backend/catalog/heap.c,\n\tsrc/backend/catalog/index.c, src/backend/commands/vacuum.c,\n\tsrc/backend/commands/vacuumlazy.c,\n\tsrc/backend/optimizer/util/plancat.c,\n\tsrc/backend/optimizer/util/relnode.c, src/include/access/genam.h,\n\tsrc/include/nodes/relation.h, src/test/regress/expected/case.out,\n\tsrc/test/regress/expected/inherit.out,\n\tsrc/test/regress/expected/join.out,\n\tsrc/test/regress/expected/join_1.out,\n\tsrc/test/regress/expected/polymorphism.out: Change planner to use\n\tthe current true disk file size as its estimate of a relation's\n\tnumber of blocks, rather than the possibly-obsolete value in\n\tpg_class.relpages. Scale the value in pg_class.reltuples\n\tcorrespondingly to arrive at a hopefully more accurate number of\n\trows. When pg_class contains 0/0, estimate a tuple width from the\n\tcolumn datatypes and divide that into current file size to estimate\n\tnumber of rows. This improved methodology allows us to jettison\n\tthe ancient hacks that put bogus default values into pg_class when\n\ta table is first created. Also, per a suggestion from Simon, make\n\tVACUUM (but not VACUUM FULL or ANALYZE) adjust the value it puts\n\tinto pg_class.reltuples to try to represent the mean tuple density\n\tinstead of the minimal density that actually prevails just after\n\tVACUUM. These changes alter the plans selected for certain\n\tregression tests, so update the expected files accordingly. (I\n\tremoved join_1.out because it's not clear if it still applies; we\n\tcan add back any variant versions as they are shown to be needed.)\n\n2004-11-21 17:57 tgl\n\n\t* src/backend/utils/hash/dynahash.c: Fix rounding problem in\n\tdynahash.c's decision about when the target fill factor has been\n\texceeded. We usually run with ffactor == 1, but the way the test\n\twas coded, it wouldn't split a bucket until the actual fill factor\n\treached 2.0, because of use of integer division. Change from > to\n\t>= so that it will split more aggressively when the table starts to\n\tget full.\n\n2004-11-21 17:48 tgl\n\n\t* src/backend/utils/mmgr/portalmem.c: Reduce the default size of\n\tthe PortalHashTable in order to save a few cycles during\n\ttransaction exit. 
A typical session probably wouldn't have as many\n\tas half a dozen portals open at once, so the original value of 64\n\tseems far larger than needed.\n\n2004-11-20 15:19 tgl\n\n\t* src/backend/utils/cache/relcache.c: Avoid scanning the relcache\n\tduring AtEOSubXact_RelationCache when there is nothing to do, which\n\tis most of the time. This is another simple improvement to cut\n\tsubtransaction entry/exit overhead.\n\n2004-11-20 15:16 tgl\n\n\t* src/backend/storage/lmgr/lock.c: Reduce the default size of the\n\tlocal lock hash table.\tThere's usually no need for it to be nearly\n\tas big as the global hash table, and since it's not in shared\n\tmemory it can grow if it does need to be bigger. By reducing the\n\tsize, we speed up hash_seq_search(), which saves a significant\n\tfraction of subtransaction entry/exit overhead.\n\n2004-11-19 19:48 tgl\n\n\t* src/backend/tcop/postgres.c: Move pgstat_report_tabstat() call so\n\tthat stats are not reported to the collector until the transaction\n\tcommits. Per recent discussion, this should avoid confusing\n\tautovacuum when an updating transaction runs for a long time.\n\n2004-11-16 22:13 neilc\n\n\t* src/backend/access/: hash/hash.c, nbtree/nbtree.c:\n\tMicro-optimization of markpos() and restrpos() in btree and hash\n\tindexes. Rather than using ReadBuffer() to increment the reference\n\tcount on an already-pinned buffer, we should use\n\tIncrBufferRefCount() as it is faster and does not require acquiring\n\tthe BufMgrLock.\n\n2004-11-16 19:14 tgl\n\n\t* src/: backend/main/main.c, backend/port/win32/signal.c,\n\tbackend/postmaster/pgstat.c, backend/postmaster/postmaster.c,\n\tinclude/port/win32.h: Fix Win32 problems with signals and sockets,\n\tby making the forkexec code even uglier than it was already :-(. \n\tAlso, on Windows only, use temporary shared memory segments instead\n\tof ordinary files to pass over critical variable values from\n\tpostmaster to child processes.\tMagnus Hagander\n", "msg_date": "Fri, 31 Dec 2004 00:30:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden drop in statement turnaround latency -- yay!. " } ]
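For anyone wanting to reproduce this kind of measurement, a rough stand-in for the application's libpq parse/bind calls is a SQL-level prepared statement timed from the shell; the table, column, and parameter values below are invented from the log excerpt above and are not the application's actual schema.

time psql -d data1 -c "
  PREPARE read_key (text, text) AS
    SELECT * FROM item_vendor_file WHERE item_no = \$1 AND vendor_no = \$2;
  EXECUTE read_key('005988', '002255');
"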
[ { "msg_contents": "Dear all,\n\nWhat would be the best configure line that would suite for optimization\n\nAs I understand by eliminating unwanted modules, I would make the DB lighter\nand faster.\n\nLets say the module needed are only english module with LC_collate C\nmodule type.\n\nHow could we eliminate the unwanted modules.\n\n\n-- \nWith Best Regards,\nVishal Kashyap.\nhttp://vishalkashyap.tk\n", "msg_date": "Fri, 31 Dec 2004 11:18:02 +0530", "msg_from": "\"Vishal Kashyap @ [SaiHertz]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimization while compiling" } ]
[ { "msg_contents": "I try to adjust my server for a couple of weeks with some sucess but it still\nslow when the server has stress in the moring from many connection . I used\npostgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram of 4 Gb.\nSince 1 1/2 yr. when I started to use the database server after optimizing the\npostgresql.conf everything went fine until a couple of weeks ago , my database\ngrew up to 3.5 Gb and there were more than 160 concurent connections.\nThe server seemed to be slower in the rush hour peroid than before . There\nis some swap process too. My top and meminfo are shown here below:\n\n207 processes: 203 sleeping, 4 running, 0 zombie, 0 stopped\nCPU0 states: 15.0% user 12.1% system 0.0% nice 0.0% iowait 72.2% idle\nCPU1 states: 11.0% user 11.1% system 0.0% nice 0.0% iowait 77.2% idle\nCPU2 states: 22.3% user 27.3% system 0.0% nice 0.0% iowait 49.3% idle\nCPU3 states: 15.4% user 13.0% system 0.0% nice 0.0% iowait 70.4% idle\nMem: 4124720k av, 4085724k used, 38996k free, 0k shrd, 59012k buff\n 3141420k actv, 48684k in_d, 76596k in_c\nSwap: 20370412k av, 46556k used, 20323856k free 3493136k\ncached\n\n PID USER PRI NI SIZE RSS SHARE STAT %CPU %MEM TIME CPU COMMAND\n16708 postgres 15 0 264M 264M 261M S 14.7 6.5 0:18 2 postmaster\n16685 postgres 15 0 264M 264M 261M S 14.5 6.5 1:22 0 postmaster\n16690 postgres 15 0 264M 264M 261M S 13.7 6.5 1:35 3 postmaster\n16692 postgres 15 0 264M 264M 261M S 13.3 6.5 0:49 1 postmaster\n16323 postgres 16 0 264M 264M 261M R 11.1 6.5 1:48 2 postmaster\n16555 postgres 15 0 264M 264M 261M S 9.7 6.5 1:52 3 postmaster\n16669 postgres 15 0 264M 264M 261M S 8.7 6.5 1:58 3 postmaster\n16735 postgres 15 0 264M 264M 261M S 7.7 6.5 0:15 0 postmaster\n16774 postgres 16 0 256M 256M 254M R 7.5 6.3 0:09 0 postmaster\n16247 postgres 15 0 263M 263M 261M S 7.1 6.5 0:46 0 postmaster\n16696 postgres 15 0 263M 263M 261M S 6.7 6.5 0:24 1 postmaster\n16682 postgres 15 0 264M 264M 261M S 4.3 6.5 1:19 3 postmaster\n16726 postgres 15 0 263M 263M 261M S 1.5 6.5 0:21 3 postmaster\n 14 root 15 0 0 0 0 RW 1.3 0.0 126:42 1 kscand/HighMem\n16766 postgres 15 0 134M 134M 132M S 1.1 3.3 0:01 2 postmaster\n16772 postgres 15 0 258M 258M 256M S 1.1 6.4 0:04 1 postmaster\n16835 root 15 0 1252 1252 856 R 0.9 0.0 0:00 3 top\n 2624 root 24 0 13920 7396 1572 S 0.5 0.1 6:25 1 java\n16771 postgres 15 0 263M 263M 261M S 0.5 6.5 0:06 0 postmaster\n 26 root 15 0 0 0 0 SW 0.3 0.0 3:24 1 kjournald\n 2114 root 15 0 276 268 216 S 0.1 0.0 2:48 2 irqbalance\n 1 root 15 0 108 76 56 S 0.0 0.0 0:07 3 init\n 2 root RT 0 0 0 0 SW 0.0 0.0 0:00 0 migration/0\n 3 root RT 0 0 0 0 SW 0.0 0.0 0:00 1 migration/1\n 4 root RT 0 0 0 0 SW 0.0 0.0 0:00 2 migration/2\n 5 root RT 0 0 0 0 SW 0.0 0.0 0:00 3 migration/3\n 6 root 15 0 0 0 0 SW 0.0 0.0 0:03 1 keventd\n\n[root@data3 root]# cat < /proc/meminfo\n total: used: free: shared: buffers: cached:\nMem: 4223713280 4203782144 19931136 0 37982208 3684573184\nSwap: 20859301888 65757184 20793544704\nMemTotal: 4124720 kB\nMemFree: 19464 kB\nMemShared: 0 kB\nBuffers: 37092 kB\nCached: 3570800 kB\nSwapCached: 27416 kB\nActive: 3215984 kB\nActiveAnon: 245576 kB\nActiveCache: 2970408 kB\nInact_dirty: 330796 kB\nInact_laundry: 164256 kB\nInact_clean: 160968 kB\nInact_target: 774400 kB\nHighTotal: 3276736 kB\nHighFree: 1024 kB\nLowTotal: 847984 kB\nLowFree: 18440 kB\nSwapTotal: 20370412 kB\nSwapFree: 20306196 kB\n\n[root@data3 root]# cat < /proc/sys/kernel/shmmax\n4000000000[root@data3 root]# cat < /proc/sys/kernel/shmall\n134217728\n\nmax_connections = 
165\nshared_buffers = 32768\nsort_mem = 20480\nvacuum_mem = 16384\neffective_cache_size = 256900\n\nI still in doubt whether this figture is optimized and putting more ram will\nhelp the system throughtput.\n\nAny idea please . My organization is one oof the big hospital in Thailand\nThanks\nAmrit\nThailand\n\n", "msg_date": "Sun, 2 Jan 2005 09:54:32 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Low Performance for big hospital server .." }, { "msg_contents": "[email protected] wrote:\n\n>I try to adjust my server for a couple of weeks with some sucess but it still\n>slow when the server has stress in the moring from many connection . I used\n>postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram of 4 Gb.\n>Since 1 1/2 yr. when I started to use the database server after optimizing the\n>postgresql.conf everything went fine until a couple of weeks ago , my database\n>grew up to 3.5 Gb and there were more than 160 concurent connections.\n>The server seemed to be slower in the rush hour peroid than before . There\n>is some swap process too. My top and meminfo are shown here below:\n> \n>\nYou might just be running low on ram - your sort_mem setting means that\n160 connections need about 3.1G. Add to that the 256M for your\nshared_buffers and there may not be much left for the os to use\neffectively (this could explain the fact that some swap is being used).\n\nIs reducing sort_mem an option ?\n\nregards\n\nMark\n\n", "msg_date": "Sun, 02 Jan 2005 22:23:45 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "On Sun, Jan 02, 2005 at 09:54:32AM +0700, [email protected] wrote:\n> postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram of 4 Gb.\n\nYou may want to try disabling hyperthreading, if you don't mind\nrebooting. \n\n> grew up to 3.5 Gb and there were more than 160 concurent connections.\n\nLooks like your growing dataset won't fit in your OS disk cache any\nlonger. Isolate your most problematic queries and check out their\nquery plans. I bet you have some sequential scans that used to read\nfrom cache but now need to read the disk. An index may help you. \n\nMore RAM wouldn't hurt. =)\n\n -Mike Adler\n", "msg_date": "Sun, 2 Jan 2005 09:08:28 -0500", "msg_from": "Michael Adler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "> > postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram of 4\n> Gb.\n>\n> You may want to try disabling hyperthreading, if you don't mind\n> rebooting.\n\nCan you give me an idea why should I use the SMP kernel instead of Bigmen kernel\n[turn off the hyperthreading]? Will it be better to turn off ?\n\n> > grew up to 3.5 Gb and there were more than 160 concurent connections.\n>\n> Looks like your growing dataset won't fit in your OS disk cache any\n> longer. Isolate your most problematic queries and check out their\n> query plans. I bet you have some sequential scans that used to read\n> from cache but now need to read the disk. An index may help you.\n>\n> More RAM wouldn't hurt. =)\n\nI think so that there may be some query load on our programe and I try to locate\nit.\nBut if I reduce the config to :\nmax_connections = 160\nshared_buffers = 2048\t [Total = 2.5 Gb.]\nsort_mem = 8192 [Total = 1280 Mb.]\nvacuum_mem = 16384\neffective_cache_size = 128897 [= 1007 Mb. = 1 Gb. 
]\nWill it be more suitable for my server than before?\n\nThanks for all comment.\nAmrit\nThailand\n\n", "msg_date": "Sun, 2 Jan 2005 23:28:13 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "The common wisdom of shared buffers is around 6-10% of available memory. \nYour proposal below is about 50% of memory.\n\nI'm not sure what the original numbers actually meant, they are quite large.\n\nalso effective cache is the sum of kernel buffers + shared_buffers so it \nshould be bigger than shared buffers.\n\nAlso turning hyperthreading off may help, it is unlikely it is doing any \ngood unless you are running a relatively new (2.6.x) kernel.\n\nI presume you are vacuuming on a regular basis?\n\[email protected] wrote:\n\n>>>postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram of 4\n>>> \n>>>\n>>Gb.\n>>\n>>You may want to try disabling hyperthreading, if you don't mind\n>>rebooting.\n>> \n>>\n>\n>Can you give me an idea why should I use the SMP kernel instead of Bigmen kernel\n>[turn off the hyperthreading]? Will it be better to turn off ?\n>\n> \n>\n>>>grew up to 3.5 Gb and there were more than 160 concurent connections.\n>>> \n>>>\n>>Looks like your growing dataset won't fit in your OS disk cache any\n>>longer. Isolate your most problematic queries and check out their\n>>query plans. I bet you have some sequential scans that used to read\n>>from cache but now need to read the disk. An index may help you.\n>>\n>>More RAM wouldn't hurt. =)\n>> \n>>\n>\n>I think so that there may be some query load on our programe and I try to locate\n>it.\n>But if I reduce the config to :\n>max_connections = 160\n>shared_buffers = 2048\t [Total = 2.5 Gb.]\n>sort_mem = 8192 [Total = 1280 Mb.]\n>vacuum_mem = 16384\n>effective_cache_size = 128897 [= 1007 Mb. = 1 Gb. ]\n>Will it be more suitable for my server than before?\n>\n>Thanks for all comment.\n>Amrit\n>Thailand\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Sun, 02 Jan 2005 11:56:33 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." 
}, { "msg_contents": "> The common wisdom of shared buffers is around 6-10% of available memory.\n> Your proposal below is about 50% of memory.\n>\n> I'm not sure what the original numbers actually meant, they are quite large.\n>\nI will try to reduce shared buffer to 1536 [1.87 Mb].\n\n> also effective cache is the sum of kernel buffers + shared_buffers so it\n> should be bigger than shared buffers.\nalso make the effective cache to 2097152 [2 Gb].\nI will give you the result , because tomorrow [4/12/05] will be the official day\nof my hospital [which have more than 1700 OPD patient/day].\n\n\n> Also turning hyperthreading off may help, it is unlikely it is doing any\n> good unless you are running a relatively new (2.6.x) kernel.\nWhy , could you give me the reason?\n\n> I presume you are vacuuming on a regular basis?\nYes , vacuumdb daily.\n\n\n", "msg_date": "Mon, 3 Jan 2005 08:54:03 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "[email protected] wrote:\n\n>\n>max_connections = 160\n>shared_buffers = 2048\t [Total = 2.5 Gb.]\n>sort_mem = 8192 [Total = 1280 Mb.]\n>vacuum_mem = 16384\n>effective_cache_size = 128897 [= 1007 Mb. = 1 Gb. ]\n>Will it be more suitable for my server than before?\n>\n>\n> \n>\nI would keep shared_buffers in the 10000->20000 range, as this is\nallocated *once* into shared memory, so only uses 80->160 Mb in *total*.\n\nThe lower sort_mem will help reduce memory pressure (as this is\nallocated for every backend connection) and this will help performance -\n*unless* you have lots of queries that need to sort large datasets. If\nso, then these will hammer your i/o subsystem, possibly canceling any\ngain from freeing up more memory. So there is a need to understand what\nsort of workload you have!\n\nbest wishes\n\nMark\n\n", "msg_date": "Mon, 03 Jan 2005 15:26:10 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "> >max_connections = 160\n> >shared_buffers = 2048\t [Total = 2.5 Gb.]\n> >sort_mem = 8192 [Total = 1280 Mb.]\n> >vacuum_mem = 16384\n> >effective_cache_size = 128897 [= 1007 Mb. = 1 Gb. ]\n> >Will it be more suitable for my server than before?\n> >\n> >\n> >\n> >\n> I would keep shared_buffers in the 10000->20000 range, as this is\n> allocated *once* into shared memory, so only uses 80->160 Mb in *total*.\n\nYou mean that if I increase the share buffer to arround 12000 [160 comnnections\n] , this will not affect the mem. usage ?\n\n> The lower sort_mem will help reduce memory pressure (as this is\n> allocated for every backend connection) and this will help performance -\n> *unless* you have lots of queries that need to sort large datasets. If\n> so, then these will hammer your i/o subsystem, possibly canceling any\n> gain from freeing up more memory. So there is a need to understand what\n> sort of workload you have!\n\nWill the increasing in effective cache size to arround 200000 make a little bit\nimprovement ? Do you think so?\n\nAny comment please , thanks.\nAmrit\nThailand\n\n", "msg_date": "Mon, 3 Jan 2005 11:54:10 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." 
}, { "msg_contents": "[email protected] wrote:\n\n>>>max_connections = 160\n>>>shared_buffers = 2048\t [Total = 2.5 Gb.]\n>>>sort_mem = 8192 [Total = 1280 Mb.]\n>>>vacuum_mem = 16384\n>>>effective_cache_size = 128897 [= 1007 Mb. = 1 Gb. ]\n>>>Will it be more suitable for my server than before?\n>>>\n>>>\n>>>\n>>>\n>>> \n>>>\n>>I would keep shared_buffers in the 10000->20000 range, as this is\n>>allocated *once* into shared memory, so only uses 80->160 Mb in *total*.\n>> \n>>\n>\n>You mean that if I increase the share buffer to arround 12000 [160 comnnections\n>] , this will not affect the mem. usage ?\n>\n> \n>\nshared_buffers = 12000 will use 12000*8192 bytes (i.e about 96Mb). It is\nshared, so no matter how many connections you have it will only use 96M.\n\n\n>>The lower sort_mem will help reduce memory pressure (as this is\n>>allocated for every backend connection) and this will help performance -\n>>*unless* you have lots of queries that need to sort large datasets. If\n>>so, then these will hammer your i/o subsystem, possibly canceling any\n>>gain from freeing up more memory. So there is a need to understand what\n>>sort of workload you have!\n>> \n>>\n>\n>Will the increasing in effective cache size to arround 200000 make a little bit\n>improvement ? Do you think so?\n>\n> \n>\nI would leave it at the figure you proposed (128897), and monitor your\nperformance.\n(you can always increase it later and see what the effect is).\n\nregards\n\nMark\n\n", "msg_date": "Mon, 03 Jan 2005 19:19:50 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "> shared_buffers = 12000 will use 12000*8192 bytes (i.e about 96Mb). It is\n> shared, so no matter how many connections you have it will only use 96M.\n\nNow I use the figure of 27853\n\n> >\n> >Will the increasing in effective cache size to arround 200000 make a little\n> bit\n> >improvement ? Do you think so?\n> >\nDecrease the sort mem too much [8196] make the performance much slower so I use\nsort_mem = 16384\nand leave effective cache to the same value , the result is quite better but I\nshould wait for tomorrow morning [official hour] to see the end result.\n\n> >\n> I would leave it at the figure you proposed (128897), and monitor your\n> performance.\n> (you can always increase it later and see what the effect is).\nYes , I use this figure.\n\nIf the result still poor , putting more ram \"6-8Gb\" [also putting more money\ntoo] will solve the problem ?\nThanks ,\nAmrit\nThailand\n\n", "msg_date": "Mon, 3 Jan 2005 15:18:56 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "[email protected] wrote:\n> I will try to reduce shared buffer to 1536 [1.87 Mb].\n\n1536 is probaby too low. 
I've tested a bunch of different settings on my \n 8GB Opteron server and 10K seems to be the best setting.\n\n\n>>also effective cache is the sum of kernel buffers + shared_buffers so it\n>>should be bigger than shared buffers.\n> \n> also make the effective cache to 2097152 [2 Gb].\n> I will give you the result , because tomorrow [4/12/05] will be the official day\n> of my hospital [which have more than 1700 OPD patient/day].\n\nTo figure out your effective cache size, run top and add free+cached.\n\n\n>>Also turning hyperthreading off may help, it is unlikely it is doing any\n>>good unless you are running a relatively new (2.6.x) kernel.\n> \n> Why , could you give me the reason?\n\nPre 2.6, the kernel does not know the difference between logical and \nphysical CPUs. Hence, in a dual processor system with hyperthreading, it \nactually sees 4 CPUs. And when assigning processes to CPUs, it may \nassign to 2 logical CPUs in the same physical CPU.\n\n\n> \n> \n>>I presume you are vacuuming on a regular basis?\n> \n> Yes , vacuumdb daily.\n\nDo you vacuum table by table or the entire DB? I find over time, the \nsystem tables can get very bloated and cause a lot of slowdowns just due \nto schema queries/updates. You might want to try a VACUUM FULL ANALYZE \njust on the system tables.\n", "msg_date": "Mon, 03 Jan 2005 00:32:10 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "William Yu wrote:\n> [email protected] wrote:\n>> Yes , vacuumdb daily.\n> \n> Do you vacuum table by table or the entire DB? I find over time, the \n> system tables can get very bloated and cause a lot of slowdowns just due \n> to schema queries/updates. You might want to try a VACUUM FULL ANALYZE \n> just on the system tables.\n\nA REINDEX of the system tables in stand-alone mode might also be in \norder, even for a 7.4.x database:\n\nhttp://www.postgresql.org/docs/7.4/interactive/sql-reindex.html\n\nIf a dump-reload-analyze cycle yields significant performance \nimprovements then we know it's due to dead-tuple bloat - either heap \ntuples or index tuples.\n\nMike Mascari\n\n", "msg_date": "Mon, 03 Jan 2005 03:51:42 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "\n\n> Decrease the sort mem too much [8196] make the performance much slower \n> so I use\n> sort_mem = 16384\n> and leave effective cache to the same value , the result is quite better \n> but I\n> should wait for tomorrow morning [official hour] to see the end result.\n\n\tYou could also profile your queries to see where those big sorts come \nfrom, and maybe add some indexes to try to replace sorts by \nindex-scans-in-order, which use no temporary memory. Can you give an \nexample of your queries which make use of big sorts like this ?\n", "msg_date": "Mon, 03 Jan 2005 11:08:32 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "\n\nWilliam Yu wrote:\n\n> [email protected] wrote:\n>\n>> I will try to reduce shared buffer to 1536 [1.87 Mb].\n>\n>\n> 1536 is probaby too low. I've tested a bunch of different settings on \n> my 8GB Opteron server and 10K seems to be the best setting.\n\nBe careful here, he is not using opterons which can access physical \nmemory above 4G efficiently. 
Also he only has 4G the 6-10% rule still \napplies\n\n>\n>\n>>> also effective cache is the sum of kernel buffers + shared_buffers \n>>> so it\n>>> should be bigger than shared buffers.\n>>\n>>\n>> also make the effective cache to 2097152 [2 Gb].\n>> I will give you the result , because tomorrow [4/12/05] will be the \n>> official day\n>> of my hospital [which have more than 1700 OPD patient/day].\n>\n>\n> To figure out your effective cache size, run top and add free+cached.\n\nMy understanding is that effective cache is the sum of shared buffers, \nplus kernel buffers, not sure what free + cached gives you?\n\n>\n>\n>>> Also turning hyperthreading off may help, it is unlikely it is doing \n>>> any\n>>> good unless you are running a relatively new (2.6.x) kernel.\n>>\n>>\n>> Why , could you give me the reason?\n>\n>\n> Pre 2.6, the kernel does not know the difference between logical and \n> physical CPUs. Hence, in a dual processor system with hyperthreading, \n> it actually sees 4 CPUs. And when assigning processes to CPUs, it may \n> assign to 2 logical CPUs in the same physical CPU.\n\nRight, the pre 2.6 kernels don't really know how to handle hyperthreaded \nCPU's\n\n>\n>\n>>\n>>\n>>> I presume you are vacuuming on a regular basis?\n>>\n>>\n>> Yes , vacuumdb daily.\n>\n>\n> Do you vacuum table by table or the entire DB? I find over time, the \n> system tables can get very bloated and cause a lot of slowdowns just \n> due to schema queries/updates. You might want to try a VACUUM FULL \n> ANALYZE just on the system tables.\n\nYou may want to try this but regular vacuum analyze should work fine as \nlong as you have the free space map settings correct. Also be aware that \npre-7.4.x the free space map is not populated on startup so you should \ndo a vacuum analyze right after startup.\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 03 Jan 2005 09:01:38 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." 
}, { "msg_contents": "Amrit,\n\nI realize you may be stuck with 7.3.x but you should be aware that 7.4 \nis considerably faster, and 8.0 appears to be even faster yet.\n\nI would seriously consider upgrading, if at all possible.\n\nA few more hints.\n\nRandom page cost is quite conservative if you have reasonably fast disks.\nSpeaking of fast disks, not all disks are created equal, some RAID \ndrives are quite slow (Bonnie++ is your friend here)\n\nSort memory can be set on a per query basis, I'd consider lowering it \nquite low and only increasing it when necessary.\n\nWhich brings us to how to find out when it is necessary.\nTurn logging on and turn on log_pid, and log_duration, then you will \nneed to sort through the logs to find the slow queries.\n\nThere are some special cases where postgresql can be quite slow, and \nminor adjustments to the query can improve it significantly\n\nFor instance pre-8.0 select * from foo where id = '1'; where id is a \nint8 will never use an index even if it exists.\n\n\nRegards,\n\nDave\n\n\[email protected] wrote:\n\n>>The common wisdom of shared buffers is around 6-10% of available memory.\n>>Your proposal below is about 50% of memory.\n>>\n>>I'm not sure what the original numbers actually meant, they are quite large.\n>>\n>> \n>>\n>I will try to reduce shared buffer to 1536 [1.87 Mb].\n>\n> \n>\n>>also effective cache is the sum of kernel buffers + shared_buffers so it\n>>should be bigger than shared buffers.\n>> \n>>\n>also make the effective cache to 2097152 [2 Gb].\n>I will give you the result , because tomorrow [4/12/05] will be the official day\n>of my hospital [which have more than 1700 OPD patient/day].\n>\n>\n> \n>\n>>Also turning hyperthreading off may help, it is unlikely it is doing any\n>>good unless you are running a relatively new (2.6.x) kernel.\n>> \n>>\n>Why , could you give me the reason?\n>\n> \n>\n>>I presume you are vacuuming on a regular basis?\n>> \n>>\n>Yes , vacuumdb daily.\n>\n>\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nAmrit,\n\nI realize you may be stuck with 7.3.x but you should be aware that 7.4\nis considerably faster, and 8.0 appears to be even faster yet.\n\nI would seriously consider upgrading, if at all possible.\n\nA few more hints. 
\n\nRandom page cost is quite conservative if you have reasonably fast\ndisks.\nSpeaking of fast disks, not all disks are created equal, some RAID\ndrives are quite slow (Bonnie++ is your friend here)\n\nSort memory can be set on a per query basis, I'd consider lowering it\nquite low and only increasing it when necessary.\n\nWhich brings us to how to find out when it is necessary.\nTurn logging on and turn on log_pid, and log_duration, then you will\nneed to sort through the logs to find the slow queries.\n\nThere are some special cases where postgresql can be quite slow, and\nminor adjustments to the query can improve it significantly\n\nFor instance pre-8.0 select * from foo where id = '1'; where id is a\nint8 will never use an index even if it exists.\n\n\nRegards,\n\nDave\n\n\[email protected] wrote:\n\n\nThe common wisdom of shared buffers is around 6-10% of available memory.\nYour proposal below is about 50% of memory.\n\nI'm not sure what the original numbers actually meant, they are quite large.\n\n \n\nI will try to reduce shared buffer to 1536 [1.87 Mb].\n\n \n\nalso effective cache is the sum of kernel buffers + shared_buffers so it\nshould be bigger than shared buffers.\n \n\nalso make the effective cache to 2097152 [2 Gb].\nI will give you the result , because tomorrow [4/12/05] will be the official day\nof my hospital [which have more than 1700 OPD patient/day].\n\n\n \n\nAlso turning hyperthreading off may help, it is unlikely it is doing any\ngood unless you are running a relatively new (2.6.x) kernel.\n \n\nWhy , could you give me the reason?\n\n \n\nI presume you are vacuuming on a regular basis?\n \n\nYes , vacuumdb daily.\n\n\n\n\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Mon, 03 Jan 2005 09:10:37 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "> I realize you may be stuck with 7.3.x but you should be aware that 7.4\n> is considerably faster, and 8.0 appears to be even faster yet.\n\nThere are a little bit incompatibility between 7.3 -8 , so rather difficult to\nchange.\n\n> I would seriously consider upgrading, if at all possible.\n>\n> A few more hints.\n>\n> Random page cost is quite conservative if you have reasonably fast disks.\n> Speaking of fast disks, not all disks are created equal, some RAID\n> drives are quite slow (Bonnie++ is your friend here)\n>\n> Sort memory can be set on a per query basis, I'd consider lowering it\n> quite low and only increasing it when necessary.\n>\n> Which brings us to how to find out when it is necessary.\n> Turn logging on and turn on log_pid, and log_duration, then you will\n> need to sort through the logs to find the slow queries.\n\nIn standard RH 9.0 , if I enable both of the log [pid , duration] , where could\nI look for the result of the log, and would it make the system to be slower?\n\n\nAmrit\nThailand\n\n", "msg_date": "Mon, 3 Jan 2005 22:40:05 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." 
}, { "msg_contents": "On Monday 03 January 2005 10:40, [email protected] wrote:\n> > I realize you may be stuck with 7.3.x but you should be aware that 7.4\n> > is considerably faster, and 8.0 appears to be even faster yet.\n>\n> There are a little bit incompatibility between 7.3 -8 , so rather difficult\n> to change.\n>\n\nSure, but even moving to 7.4 would be a bonus, especially if you use a lot of \nselect * from tab where id in (select ... ) type queries, and the \nincompataibility is less as well. \n\n> > I would seriously consider upgrading, if at all possible.\n> >\n> > A few more hints.\n> >\n\nOne thing I didn't see mentioned that should have been was to watch for index \nbloat, which was a real problem on 7.3 machines. You can determine which \nindexes are bloated by studying vacuum output or by comparing index size on \ndisk to table size on disk. \n\nAnother thing I didn't see mentioned was to your free space map settings. \nMake sure these are large enough to hold your data... max_fsm_relations \nshould be larger then the total # of tables you have in your system (check \nthe archives for the exact query needed) and max_fsm_pages needs to be big \nenough to hold all of the pages you use in a day... this is hard to calculate \nin 7.3, but if you look at your vacuum output and add the number of pages \ncleaned up for all tables, this could give you a good number to work with. It \nwould certainly tell you if your setting is too small. \n\n-- \nRobert Treat\nBuild A Brighter Lamp :: Linux Apache {middleware} PostgreSQL\n", "msg_date": "Mon, 3 Jan 2005 13:51:55 -0500", "msg_from": "Robert Treat <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "Dave Cramer wrote:\n\n> \n> \n> William Yu wrote:\n> \n>> [email protected] wrote:\n>>\n>>> I will try to reduce shared buffer to 1536 [1.87 Mb].\n>>\n>>\n>>\n>> 1536 is probaby too low. I've tested a bunch of different settings on \n>> my 8GB Opteron server and 10K seems to be the best setting.\n> \n> \n> Be careful here, he is not using opterons which can access physical \n> memory above 4G efficiently. Also he only has 4G the 6-10% rule still \n> applies\n\n10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10% rule.\n\n\n>> To figure out your effective cache size, run top and add free+cached.\n> \n> \n> My understanding is that effective cache is the sum of shared buffers, \n> plus kernel buffers, not sure what free + cached gives you?\n\nNot true. Effective cache size is the free memory available that the OS \ncan use for caching for Postgres. In a system that runs nothing but \nPostgres, it's free + cached.\n", "msg_date": "Mon, 03 Jan 2005 11:57:50 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." 
}, { "msg_contents": "\n\[email protected] wrote:\n\n>>I realize you may be stuck with 7.3.x but you should be aware that 7.4\n>>is considerably faster, and 8.0 appears to be even faster yet.\n>> \n>>\n>\n>There are a little bit incompatibility between 7.3 -8 , so rather difficult to\n>change.\n>\n> \n>\n>>I would seriously consider upgrading, if at all possible.\n>>\n>>A few more hints.\n>>\n>>Random page cost is quite conservative if you have reasonably fast disks.\n>>Speaking of fast disks, not all disks are created equal, some RAID\n>>drives are quite slow (Bonnie++ is your friend here)\n>>\n>>Sort memory can be set on a per query basis, I'd consider lowering it\n>>quite low and only increasing it when necessary.\n>>\n>>Which brings us to how to find out when it is necessary.\n>>Turn logging on and turn on log_pid, and log_duration, then you will\n>>need to sort through the logs to find the slow queries.\n>> \n>>\n>\n>In standard RH 9.0 , if I enable both of the log [pid , duration] , where could\n>I look for the result of the log, and would it make the system to be slower?\n> \n>\nOn a redhat system logging is more or less disabled if you used the rpm\n\nyou can set syslog=2 in the postgresql.conf and then you will get the \nlogs in messages.log\nYes, it will make it slower, but you have to find out which queries are \nslow.\n\nDave\n\n>\n>Amrit\n>Thailand\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 03 Jan 2005 19:57:20 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "\n\nWilliam Yu wrote:\n\n> Dave Cramer wrote:\n>\n>>\n>>\n>> William Yu wrote:\n>>\n>>> [email protected] wrote:\n>>>\n>>>> I will try to reduce shared buffer to 1536 [1.87 Mb].\n>>>\n>>>\n>>>\n>>>\n>>> 1536 is probaby too low. I've tested a bunch of different settings \n>>> on my 8GB Opteron server and 10K seems to be the best setting.\n>>\n>>\n>>\n>> Be careful here, he is not using opterons which can access physical \n>> memory above 4G efficiently. Also he only has 4G the 6-10% rule still \n>> applies\n>\n>\n> 10% of 4GB is 400MB. 10K buffers is 80MB. Easily less than the 6-10% \n> rule.\n>\nCorrect, I didn't actually do the math, I refrain from giving actual \nnumbers as every system is different.\n\n>\n>>> To figure out your effective cache size, run top and add free+cached.\n>>\n>>\n>>\n>> My understanding is that effective cache is the sum of shared \n>> buffers, plus kernel buffers, not sure what free + cached gives you?\n>\n>\n> Not true. Effective cache size is the free memory available that the \n> OS can use for caching for Postgres. In a system that runs nothing but \n> Postgres, it's free + cached.\n\nYou still need to add in the shared buffers as they are part of the \n\"effective cache\"\n\nDave\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 03 Jan 2005 19:58:44 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." } ]
[ { "msg_contents": "Amrit --\n\n>-----Original Message-----\n>From:\[email protected] [mailto:[email protected]]\n>Sent:\tMon 1/3/2005 12:18 AM\n>To:\tMark Kirkwood\n>Cc:\tPGsql-performance\n>Subject:\tRe: [PERFORM] Low Performance for big hospital server ..\n>> shared_buffers = 12000 will use 12000*8192 bytes (i.e about 96Mb). It is\n>> shared, so no matter how many connections you have it will only use 96M.\n>\n>Now I use the figure of 27853\n>\n>> >\n>> >Will the increasing in effective cache size to arround 200000 make a >little\n>> bit\n>> >improvement ? Do you think so?\n>> >\n>Decrease the sort mem too much [8196] make the performance much slower so I >use\n>sort_mem = 16384\n>and leave effective cache to the same value , the result is quite better but >I\n>should wait for tomorrow morning [official hour] to see the end result.\n>\n>> >\n>> I would leave it at the figure you proposed (128897), and monitor your\n>> performance.\n>> (you can always increase it later and see what the effect is).\n>Yes , I use this figure.\n>\n>If the result still poor , putting more ram \"6-8Gb\" [also putting more money\n>too] will solve the problem ?\n\nAdding RAM will almost always help, at least for a while. Our small runitme servers have 2 gigs of RAM; the larger ones have 4 gigs; I do anticipate the need to add RAM as we add users.\n\nIf you have evaluated the queries that are running and verified that they are using indexes properly, etc., and tuned the other parameters for your system and its disks, adding memory helps because it increases the chance that data is already in memory, thus saving the time to fetch it from disk. Studying performance under load with top, vmstat, etc. and detailed analysis of queries can often trade some human time for the money that extra hardware would cost. Sometimes easier to do than getting downtime for a critical server, as well.\n\nIf you don't have a reliable way of reproducing real loads on a test system, it is best to change things cautiously, and observe the system under load; if you change too many things (ideally only 1 at a time but often that is not possible) you mau actually defeat a good change with a bad one; at the least,m you may not know which change was the most important one if you make several at once.\n\nBest of luck,\n\nGreg Williamson\nDBA\nGlobeXplorer LLC\n>Thanks ,\n>Amrit\n>Thailand\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n", "msg_date": "Mon, 3 Jan 2005 01:32:45 -0800", "msg_from": "\"Gregory S. Williamson\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." } ]
[ { "msg_contents": "amrit wrote:\n> I try to adjust my server for a couple of weeks with some sucess but\nit\n> still\n> slow when the server has stress in the moring from many connection . I\n> used\n> postgresql 7.3.2-1 with RH 9 on a mechine of 2 Xeon 3.0 Ghz and ram of\n4\n> Gb.\n> Since 1 1/2 yr. when I started to use the database server after\noptimizing\n> the\n> postgresql.conf everything went fine until a couple of weeks ago , my\n> database\n> grew up to 3.5 Gb and there were more than 160 concurent connections.\n> The server seemed to be slower in the rush hour peroid than before .\nThere\n> is some swap process too. My top and meminfo are shown here below:\n\nwell, you've hit the 'wall'...your system seems to be more or less at\nthe limit of what 32 bit technology can deliver. If upgrade to Opteron\nand 64 bit is out of the question, here are a couple of new tactics you\ncan try. Optimizing postgresql.conf can help, but only so much. \n\nOptimize queries:\nOne big often looked performance gainer is to use functional indexes to\naccess data from a table. This can save space by making the index\nsmaller and more efficient. This wins on cache and speed at the price\nof some flexibility. \n\nOptimize datums: replace numeric(4) with int2, numeric(6) with int4,\netc. This will save a little space on the tuple which will ease up on\nthe cache a bit. Use constraints where necessary to preserve data\nintegrity.\n\nMaterialized views: These can provide an enormous win if you can deal\nincorporate them into your application. With normal views, multiple\nbackends can share a query plan. With mat-views, backends can share\nboth the plan and its execution.\n\nMerlin\n\n", "msg_date": "Mon, 3 Jan 2005 09:55:06 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." } ]
[ { "msg_contents": "Hi,\n\nare there any plans for rewriting queries to preexisting materialized\nviews? I mean, rewrite a query (within the optimizer) to use a\nmaterialized view instead of the originating table?\n\nRegards,\nYann\n", "msg_date": "Mon, 3 Jan 2005 17:55:32 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": true, "msg_subject": "query rewrite using materialized views" }, { "msg_contents": "Yann,\n\n> are there any plans for rewriting queries to preexisting materialized\n> views? I mean, rewrite a query (within the optimizer) to use a\n> materialized view instead of the originating table?\n\nAutomatically, and by default, no. Using the RULES system? Yes, you can \nalready do this and the folks on the MattView project on pgFoundry are \nworking to make it easier.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 4 Jan 2005 10:06:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query rewrite using materialized views" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 04, 2005 at 10:06:18AM -0800, Josh Berkus wrote:\n> > are there any plans for rewriting queries to preexisting materialized\n> > views? I mean, rewrite a query (within the optimizer) to use a\n> > materialized view instead of the originating table?\n> \n> Automatically, and by default, no. Using the RULES system? Yes, you can \n> already do this and the folks on the MattView project on pgFoundry are \n> working to make it easier.\n\nI was just wondering if this might be on schedule for 8.x due to I read\nthe thread about materialized views some days ago. If materialized views\nare someday implemented one should kepp this requested feature in mind\ndue to I know from Oracle to let it improve query execution plans...\n\nRegards,\nYann\n", "msg_date": "Wed, 5 Jan 2005 09:11:41 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query rewrite using materialized views" } ]
[ { "msg_contents": "Tom Lane wrote:\n> \"Merlin Moncure\" <[email protected]> writes:\n> > I took advantage of the holidays to update a production server (dual\n> > Opteron on win2k) from an 11/16 build (about beta5 or so) to the\nlatest\n> > release candidate. No configuration changes were made, just a\nbinary\n> > swap and a server stop/start.\n> \n> > I was shocked to see that statement latency dropped by a fairly\nlarge\n> > margin.\n> \n> Hmm ... I trawled through the CVS logs since 11/16, and did not see\nvery\n> many changes that looked like they might improve performance (list\n> attached) --- and even of those, hardly any looked like the change\nwould\n> be significant. Do you know whether the query plans changed? Are you\n> running few enough queries per connection that backend startup\noverhead\n> might be an issue?\n\nNo, everything is run over persistent connections and prepared\nstatements. All queries boil down to an index scan of some sort, so the\nplanner is not really a factor. It's all strictly execution times, and\ndata is almost always read right off of the cache. The performance of\nthe ISAM driver is driven by 3 factors (in order).\n1. network latency (including o/s overhead context switches, etc.)\n2. i/o factors (data read from cache, disk, etc).\n3. overhead for pg to execute trivial transaction.\n\n#1 & #2 are well understood problems. It's #3 that improved\nsubstantially and without warning. See my comments below:\n\n> \t\t\tregards, tom lane\n> \n> \n> 2004-12-15 14:16 tgl\n> \n> \t* src/backend/access/nbtree/nbtutils.c: Calculation of\n> \tkeys_are_unique flag was wrong for cases involving redundant\n> \tcross-datatype comparisons. Per example from Merlin Moncure.\n\nNot likely to have a performance benefit.\n \n> 2004-12-02 10:32 momjian\n> \n> \t* configure, configure.in, doc/src/sgml/libpq.sgml,\n> \tdoc/src/sgml/ref/copy.sgml, src/interfaces/libpq/fe-connect.c,\n> \tsrc/interfaces/libpq/fe-print.c,\nsrc/interfaces/libpq/fe-secure.c,\n> \tsrc/interfaces/libpq/libpq-fe.h,\nsrc/interfaces/libpq/libpq-int.h:\n> \tRework libpq threaded SIGPIPE handling to avoid interference\nwith\n> \tcalling applications. This is done by blocking sigpipe in the\n> \tlibpq thread and using sigpending/sigwait to possibily discard\nany\n> \tsigpipe we generated.\n\nDoubtful.\n \n> 2004-12-01 20:34 tgl\n> \n> \t* src/: backend/optimizer/path/costsize.c,\n> \tbackend/optimizer/util/plancat.c,\n> \ttest/regress/expected/geometry.out,\n> \ttest/regress/expected/geometry_1.out,\n> \ttest/regress/expected/geometry_2.out,\n> \ttest/regress/expected/inherit.out,\ntest/regress/expected/join.out,\n> \ttest/regress/sql/inherit.sql, test/regress/sql/join.sql: Make\nsome\n> \tadjustments to reduce platform dependencies in plan selection.\nIn\n> \tparticular, there was a mathematical tie between the two\npossible\n> \tnestloop-with-materialized-inner-scan plans for a join (ie, we\n> \tcomputed the same cost with either input on the inside),\nresulting\n> \tin a roundoff error driven choice, if the relations were both\nsmall\n> \tenough to fit in sort_mem. Add a small cost factor to ensure we\n> \tprefer materializing the smaller input. This changes several\n> \tregression test plans, but with any luck we will now have more\n> \tstability across platforms.\n\nNo. 
The planner is not a factor.\n \n> 2004-12-01 14:00 tgl\n> \n> \t* doc/src/sgml/catalogs.sgml, doc/src/sgml/diskusage.sgml,\n> \tdoc/src/sgml/perform.sgml, doc/src/sgml/release.sgml,\n> \tsrc/backend/access/nbtree/nbtree.c, src/backend/catalog/heap.c,\n> \tsrc/backend/catalog/index.c, src/backend/commands/vacuum.c,\n> \tsrc/backend/commands/vacuumlazy.c,\n> \tsrc/backend/optimizer/util/plancat.c,\n> \tsrc/backend/optimizer/util/relnode.c,\nsrc/include/access/genam.h,\n> \tsrc/include/nodes/relation.h,\nsrc/test/regress/expected/case.out,\n> \tsrc/test/regress/expected/inherit.out,\n> \tsrc/test/regress/expected/join.out,\n> \tsrc/test/regress/expected/join_1.out,\n> \tsrc/test/regress/expected/polymorphism.out: Change planner to\nuse\n> \tthe current true disk file size as its estimate of a relation's\n> \tnumber of blocks, rather than the possibly-obsolete value in\n> \tpg_class.relpages. Scale the value in pg_class.reltuples\n> \tcorrespondingly to arrive at a hopefully more accurate number of\n> \trows. When pg_class contains 0/0, estimate a tuple width from\nthe\n> \tcolumn datatypes and divide that into current file size to\nestimate\n> \tnumber of rows. This improved methodology allows us to jettison\n> \tthe ancient hacks that put bogus default values into pg_class\nwhen\n> \ta table is first created. Also, per a suggestion from Simon,\nmake\n> \tVACUUM (but not VACUUM FULL or ANALYZE) adjust the value it puts\n> \tinto pg_class.reltuples to try to represent the mean tuple\ndensity\n> \tinstead of the minimal density that actually prevails just after\n> \tVACUUM. These changes alter the plans selected for certain\n> \tregression tests, so update the expected files accordingly. (I\n> \tremoved join_1.out because it's not clear if it still applies;\nwe\n> \tcan add back any variant versions as they are shown to be\nneeded.)\n\ndoesn't seem like this would apply.\n \n> 2004-11-21 17:57 tgl\n> \n> \t* src/backend/utils/hash/dynahash.c: Fix rounding problem in\n> \tdynahash.c's decision about when the target fill factor has been\n> \texceeded. We usually run with ffactor == 1, but the way the\ntest\n> \twas coded, it wouldn't split a bucket until the actual fill\nfactor\n> \treached 2.0, because of use of integer division. Change from >\nto\n> \t>= so that it will split more aggressively when the table starts\nto\n> \tget full.\n\nHmm. Not likely.\n \n> 2004-11-21 17:48 tgl\n> \n> \t* src/backend/utils/mmgr/portalmem.c: Reduce the default size of\n> \tthe PortalHashTable in order to save a few cycles during\n> \ttransaction exit. A typical session probably wouldn't have as\nmany\n> \tas half a dozen portals open at once, so the original value of\n64\n> \tseems far larger than needed.\n\nStrong possibility...'few cycles' seems pretty small tho :).\n \n> 2004-11-20 15:19 tgl\n> \n> \t* src/backend/utils/cache/relcache.c: Avoid scanning the\nrelcache\n> \tduring AtEOSubXact_RelationCache when there is nothing to do,\nwhich\n> \tis most of the time. This is another simple improvement to cut\n> \tsubtransaction entry/exit overhead.\n\nNot clear from the comments: does this apply to every transaction, or\nonly ones with savepoints? If all transactions, it's a contender.\n \n> 2004-11-20 15:16 tgl\n> \n> \t* src/backend/storage/lmgr/lock.c: Reduce the default size of\nthe\n> \tlocal lock hash table.\tThere's usually no need for it to be\nnearly\n> \tas big as the global hash table, and since it's not in shared\n> \tmemory it can grow if it does need to be bigger. 
By reducing\nthe\n> \tsize, we speed up hash_seq_search(), which saves a significant\n> \tfraction of subtransaction entry/exit overhead.\n\nSame comments as above.\n\n \n> 2004-11-19 19:48 tgl\n> \n> \t* src/backend/tcop/postgres.c: Move pgstat_report_tabstat() call\nso\n> \tthat stats are not reported to the collector until the\ntransaction\n> \tcommits. Per recent discussion, this should avoid confusing\n> \tautovacuum when an updating transaction runs for a long time.\n\nNot likely.\n \n> 2004-11-16 22:13 neilc\n> \n> \t* src/backend/access/: hash/hash.c, nbtree/nbtree.c:\n> \tMicro-optimization of markpos() and restrpos() in btree and hash\n> \tindexes. Rather than using ReadBuffer() to increment the\nreference\n> \tcount on an already-pinned buffer, we should use\n> \tIncrBufferRefCount() as it is faster and does not require\nacquiring\n> \tthe BufMgrLock.\n\nAnother contender...maybe the cost of acquiring the lock is higher on\nsome platforms than others.\n \n> 2004-11-16 19:14 tgl\n> \n> \t* src/: backend/main/main.c, backend/port/win32/signal.c,\n> \tbackend/postmaster/pgstat.c, backend/postmaster/postmaster.c,\n> \tinclude/port/win32.h: Fix Win32 problems with signals and\nsockets,\n> \tby making the forkexec code even uglier than it was already :-(.\n> \tAlso, on Windows only, use temporary shared memory segments\ninstead\n> \tof ordinary files to pass over critical variable values from\n> \tpostmaster to child processes.\tMagnus Hagander\n\nAs I understand it, this only affects backend startup time, so, no.\nI'll benchmark some more until I get a better answer.\n\nMerlin\n", "msg_date": "Mon, 3 Jan 2005 16:11:44 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sudden drop in statement turnaround latency -- yay!. " }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> Tom Lane wrote:\n>> Add a small cost factor to ensure we\n>> prefer materializing the smaller input. This changes several\n>> regression test plans, but with any luck we will now have more\n>> stability across platforms.\n\n> No. The planner is not a factor.\n\nYou are missing the point: the possible change in a generated plan could\nbe a factor.\n\n>> Change planner to use\n>> the current true disk file size as its estimate of a relation's\n>> number of blocks, rather than the possibly-obsolete value in\n>> pg_class.relpages.\n\n> doesn't seem like this would apply.\n\nSame point. Unless you have done EXPLAINs to verify that the same plans\nwere used before and after, you can't dismiss this.\n\n>> * src/backend/utils/cache/relcache.c: Avoid scanning the\n>> relcache\n>> during AtEOSubXact_RelationCache when there is nothing to do,\n>> which\n>> is most of the time. This is another simple improvement to cut\n>> subtransaction entry/exit overhead.\n\n> Not clear from the comments: does this apply to every transaction, or\n> only ones with savepoints? If all transactions, it's a contender.\n\nIt only applies to subtransactions, ie something involving savepoints.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Jan 2005 19:48:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sudden drop in statement turnaround latency -- yay!. " } ]
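Tom's point can be checked mechanically: capture the plan for one of the driver's index-scan statements on each build and compare the plans before comparing the timings. The table and column below are placeholders, since the real ISAM schema is not shown.

    -- run on the 11/16 build and again on the release candidate,
    -- then diff the plan text before attributing the change to execution
    EXPLAIN ANALYZE SELECT * FROM isam_data WHERE rec_key = 42;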
[ { "msg_contents": "I've got a table using a data type that I have created as the type for its primary key. I (hope) I have the type set up properly - it seems okay, and does not have any problem creating a b-tree index for the type. The problem I am having is that the index seems to never be chosen for use. I can force the use of the index by setting enable_seqscan to off. The table has about 1.2 million rows. I have also analyzed the table - and immediately afterwards there is no affect on the index's behaviour.\n\nAny thoughts?\n\n-Adam\n\n\n\n\n\n\nI've got a table using a data type that I have \ncreated as the type for its primary key.  I (hope) I have the type set up \nproperly - it seems okay, and does not have any problem creating a b-tree index \nfor the type.  The problem I am having is that the index seems to never be \nchosen for use.  I can force the use of the index by setting enable_seqscan \nto off.  The table has about 1.2 million rows.  I have also analyzed \nthe table - and immediately afterwards there is no affect on the index's \nbehaviour.\n \nAny thoughts?\n \n-Adam", "msg_date": "Mon, 3 Jan 2005 13:44:27 -0800", "msg_from": "\"Adam Palmblad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Bad Index Choices with user defined data type" }, { "msg_contents": "On Mon, Jan 03, 2005 at 01:44:27PM -0800, Adam Palmblad wrote:\n\n> I've got a table using a data type that I have created as the type for\n> its primary key. I (hope) I have the type set up properly - it seems\n> okay, and does not have any problem creating a b-tree index for the\n> type. The problem I am having is that the index seems to never be\n> chosen for use. I can force the use of the index by setting\n> enable_seqscan to off. The table has about 1.2 million rows. I have\n> also analyzed the table - and immediately afterwards there is no affect\n> on the index's behaviour.\n\nPlease post the query and the EXPLAIN ANALYZE output for both cases:\none query with enable_seqscan on and one with it off. It might\nalso be useful to see the column's statistics from pg_stats, and\nperhaps the SQL statements that create the table, the type, the\ntype's operators, etc.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Mon, 3 Jan 2005 22:06:21 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad Index Choices with user defined data type" } ]
[ { "msg_contents": "Hi ,\n\n I am experiencing a very bad performance on my production database \nlately , all my queries are slowing down. Our application is a webbased \nsystem with lot of selects and updates. I am running \"vacuumdb\" daily on \nall the databases, are the below postgres configuration parameters are \nset properly ? can anyone take a look. Let me know if you need anymore \ninformation.\n\n\nPostgres Version: 7.4\nOperating System: Linux Red Hat 9\nCpus: 2 Hyperthreaded\nRAM: 4 gb\nPostgres Settings:\nmax_fsm_pages | 20000\nmax_fsm_relations | 1000\nshared_buffers | 65536\nsort_mem | 16384\nvacuum_mem | 32768\nwal_buffers | 64\neffective_cache_size | 393216\n\nThanks!\nPallav\n\n", "msg_date": "Mon, 03 Jan 2005 17:19:15 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Very Bad Performance." }, { "msg_contents": "Well, it's not quite that simple\n\nthe rule of thumb is 6-10% of available memory before postgres loads is \nallocated to shared_buffers.\nthen effective cache is set to the SUM of shared_buffers + kernel buffers\n\nThen you have to look at individual slow queries to determine why they \nare slow, fortunately you are running 7.4 so you can set \nlog_min_duration to some number like 1000ms and then\ntry to analyze why those queries are slow.\n\nAlso hyperthreading may not be helping you..\n\nDave\n\nPallav Kalva wrote:\n\n> Hi ,\n>\n> I am experiencing a very bad performance on my production database \n> lately , all my queries are slowing down. Our application is a \n> webbased system with lot of selects and updates. I am running \n> \"vacuumdb\" daily on all the databases, are the below postgres \n> configuration parameters are set properly ? can anyone take a look. \n> Let me know if you need anymore information.\n>\n>\n> Postgres Version: 7.4\n> Operating System: Linux Red Hat 9\n> Cpus: 2 Hyperthreaded\n> RAM: 4 gb\n> Postgres Settings:\n> max_fsm_pages | 20000\n> max_fsm_relations | 1000\n> shared_buffers | 65536\n> sort_mem | 16384\n> vacuum_mem | 32768\n> wal_buffers | 64\n> effective_cache_size | 393216\n>\n> Thanks!\n> Pallav\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 03 Jan 2005 18:44:01 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Bad Performance." }, { "msg_contents": "Dave Cramer wrote:\n\n> Well, it's not quite that simple\n>\n> the rule of thumb is 6-10% of available memory before postgres loads \n> is allocated to shared_buffers.\n> then effective cache is set to the SUM of shared_buffers + kernel buffers\n>\n> Then you have to look at individual slow queries to determine why they \n> are slow, fortunately you are running 7.4 so you can set \n> log_min_duration to some number like 1000ms and then\n> try to analyze why those queries are slow. \n\n I had that already set on my database , and when i look at the log \nfor all the problem queries, most of the queries are slow from one of \nthe table. when i look at the stats on that table they are really wrong, \nnot sure how to fix them. i run vacuumdb and analyze daily.\n\n>\n>\n> Also hyperthreading may not be helping you.. 
\n\n does it do any harm to the system if it is hyperthreaded ?\n\n>\n>\n> Dave\n>\n> Pallav Kalva wrote:\n>\n>> Hi ,\n>>\n>> I am experiencing a very bad performance on my production \n>> database lately , all my queries are slowing down. Our application is \n>> a webbased system with lot of selects and updates. I am running \n>> \"vacuumdb\" daily on all the databases, are the below postgres \n>> configuration parameters are set properly ? can anyone take a look. \n>> Let me know if you need anymore information.\n>>\n>>\n>> Postgres Version: 7.4\n>> Operating System: Linux Red Hat 9\n>> Cpus: 2 Hyperthreaded\n>> RAM: 4 gb\n>> Postgres Settings:\n>> max_fsm_pages | 20000\n>> max_fsm_relations | 1000\n>> shared_buffers | 65536\n>> sort_mem | 16384\n>> vacuum_mem | 32768\n>> wal_buffers | 64\n>> effective_cache_size | 393216\n>>\n>> Thanks!\n>> Pallav\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 4: Don't 'kill -9' the postmaster\n>>\n>>\n>\n\n\n", "msg_date": "Tue, 04 Jan 2005 09:38:04 -0500", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very Bad Performance." }, { "msg_contents": "Martha Stewart called it a Good Thing when [email protected] (Pallav Kalva) wrote:\n>> Then you have to look at individual slow queries to determine why\n>> they are slow, fortunately you are running 7.4 so you can set\n>> log_min_duration to some number like 1000ms and then\n>> try to analyze why those queries are slow.\n>\n> I had that already set on my database , and when i look at the log\n> for all the problem queries, most of the queries are slow from one of\n> the table. when i look at the stats on that table they are really\n> wrong, not sure how to fix them. i run vacuumdb and analyze daily.\n\nWell, it's at least good news to be able to focus attention on one\ntable, rather than being unfocused.\n\nIf the problem is that stats on one table are bad, then the next\nquestion is \"Why is that?\"\n\nA sensible answer might be that the table is fairly large, but has\nsome fields (that are relevant to indexing) that have a small number\nof values where some are real common and others aren't.\n\nFor instance, you might have a customer/supplier ID where there are\nmaybe a few hundred unique values, but where the table is dominated by\na handful of them.\n\nThe default in PostgreSQL is to collect a histogram of statistics\nbased on having 10 \"bins,\" filling them using 300 samples. If you\nhave a pretty skewed distribution on some of the fields, that won't be\ngood enough.\n\nI would suggest looking for columns where things are likely to be\n\"skewed\" (customer/supplier IDs are really good candidates for this),\nand bump them up to collect more stats.\n\nThus, something like:\n\n alter table my_table alter column something_id set statistics 100;\n\nThen ANALYZE MY_TABLE, which will collect 100 bins worth of stats for\nthe 'offending' column, based on 3000 sampled records, and see if that\nhelps.\n\n>> Also hyperthreading may not be helping you..\n>\n> does it do any harm to the system if it is hyperthreaded ?\n\nYes. 
If you have multiple \"hyperthreads\" running on one CPU, that'll\nwind up causing extra memory contention of one sort or another.\n-- \nlet name=\"cbbrowne\" and tld=\"linuxfinances.info\" in name ^ \"@\" ^ tld;;\nhttp://www.ntlug.org/~cbbrowne/sgml.html\n\"People who don't use computers are more sociable, reasonable, and ...\nless twisted\" -- Arthur Norman\n", "msg_date": "Tue, 04 Jan 2005 14:58:04 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very Bad Performance." } ]
[ { "msg_contents": "Besides the tables pg_stat_xxx, are there any stronger tools for\nPostgreSQL as the counterpart of Oracle's Statspack? Is it possible at\nall to trace and log the cpu and io cost for each committed\ntransaction?\nThanks a lot! -Stan\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Mon, 3 Jan 2005 16:13:45 -0800 (PST)", "msg_from": "Stan Y <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL's Statspack?" } ]
[ { "msg_contents": "Today is the first official day of this weeks and the system run better in\nserveral points but there are still some points that need to be corrected. Some\nqueries or some tables are very slow. I think the queries inside the programe\nneed to be rewrite.\nNow I put the sort mem to a little bit bigger:\nsort mem = 16384 increase the sort mem makes no effect on the slow point\neventhough there is little connnection.\nshared_buffers = 27853\neffective cache = 120000\n\nI will put more ram but someone said RH 9.0 had poor recognition on the Ram\nabove 4 Gb?\nShould I close the hyperthreading ? Would it make any differnce between open and\nclose the hyperthreading?\nThanks for any comment\nAmrit\nThailand\n", "msg_date": "Tue, 4 Jan 2005 16:31:47 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "On Tue, 4 Jan 2005 [email protected] wrote:\n\n> Today is the first official day of this weeks and the system run better in\n> serveral points but there are still some points that need to be corrected. Some\n> queries or some tables are very slow. I think the queries inside the programe\n> need to be rewrite.\n> Now I put the sort mem to a little bit bigger:\n> sort mem = 16384 increase the sort mem makes no effect on the slow point\n> eventhough there is little connnection.\n> shared_buffers = 27853\n> effective cache = 120000\n\nEven though others have said otherwise, I've had good results from setting\nsort_mem higher -- even if that is per query.\n\n>\n> I will put more ram but someone said RH 9.0 had poor recognition on the Ram\n> above 4 Gb?\n\nI think they were refering to 32 bit architectures, not distributions as\nsuch.\n\n> Should I close the hyperthreading ? Would it make any differnce between open and\n> close the hyperthreading?\n> Thanks for any comment\n\nIn my experience, the largest performance increases come from intensive\nanalysis and optimisation of queries. Look at the output of EXPLAIN\nANALYZE for the queries your application is generating and see if they can\nbe tuned in anyway. More often than not, they can.\n\nFeel free to ask for assistence on irc at irc.freenode.net #postgresql.\nPeople there help optimise queries all day ;-).\n\n> Amrit\n> Thailand\n\nGavin\n", "msg_date": "Tue, 4 Jan 2005 22:13:12 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "> > I will put more ram but someone said RH 9.0 had poor recognition on the Ram\n> > above 4 Gb?\n>\n> I think they were refering to 32 bit architectures, not distributions as\n> such.\n\nSorry for wrong reason , then should I increase more RAM than 4 Gb. on 32 bit\nArche.?\n\n> > Should I close the hyperthreading ? Would it make any differnce between\n> open and\n> > close the hyperthreading?\n> > Thanks for any comment\n>\n> In my experience, the largest performance increases come from intensive\n> analysis and optimisation of queries. Look at the output of EXPLAIN\n> ANALYZE for the queries your application is generating and see if they can\n> be tuned in anyway. 
More often than not, they can.\n\nSo what you mean is that the result is the same whether close or open\nhyperthreading ?\nWill it be any harm if I open it ?\nThe main point shiuld be adjustment the query , right.\n\n> Feel free to ask for assistence on irc at irc.freenode.net #postgresql.\n> People there help optimise queries all day ;-).\n\nHow could I contact with those people ;=> which url ?\nThanks again.\nAmrit\nThailand\n", "msg_date": "Wed, 5 Jan 2005 12:07:13 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "\n> > Today is the first official day of this weeks and the system run\n> > better in serveral points but there are still some points that need to\n> > be corrected. Some queries or some tables are very slow. I think the\n> > queries inside the programe need to be rewrite.\n> > Now I put the sort mem to a little bit bigger:\n> > sort mem = 16384 increase the sort mem makes no effect on the\n> > slow point eventhough there is little connnection.\n> > shared_buffers = 27853\n> > effective cache = 120000\n\n> If I were you I would upgrade from RH 9 to Fedora Core 2 or 3 after\n> some initial testing. You'll see a huge improvement of speed on the\n> system as a whole. I would try turning hyperthreading off also.\n\n\nNow I turn hyperthreading off and readjust the conf . I found the bulb query\nthat was :\nupdate one flag of the table [8 million records which I think not too much]\n.When I turned this query off everything went fine.\nI don't know whether update the data is much slower than insert [Postgresql\n7.3.2] and how could we improve the update method?\nThanks for many helps.\nAmrit\nThailand\n\nNB. I would like to give my appreciation to all of the volunteers from many\ncountries who combat with big disaster [Tsunamies] in my country [Thailand].\n", "msg_date": "Wed, 5 Jan 2005 22:35:42 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "Amrit,\n\ncan you post\n\nexplain <your slow update query>\n\nso we can see what it does ?\n\nDave\n\[email protected] wrote:\n\n>>>Today is the first official day of this weeks and the system run\n>>>better in serveral points but there are still some points that need to\n>>>be corrected. Some queries or some tables are very slow. I think the\n>>>queries inside the programe need to be rewrite.\n>>>Now I put the sort mem to a little bit bigger:\n>>>sort mem = 16384 increase the sort mem makes no effect on the\n>>>slow point eventhough there is little connnection.\n>>>shared_buffers = 27853\n>>>effective cache = 120000\n>>> \n>>>\n>\n> \n>\n>> If I were you I would upgrade from RH 9 to Fedora Core 2 or 3 after\n>> some initial testing. You'll see a huge improvement of speed on the\n>> system as a whole. I would try turning hyperthreading off also.\n>> \n>>\n>\n>\n>Now I turn hyperthreading off and readjust the conf . I found the bulb query\n>that was :\n>update one flag of the table [8 million records which I think not too much]\n>.When I turned this query off everything went fine.\n>I don't know whether update the data is much slower than insert [Postgresql\n>7.3.2] and how could we improve the update method?\n>Thanks for many helps.\n>Amrit\n>Thailand\n>\n>NB. 
I would like to give my appreciation to all of the volunteers from many\n>countries who combat with big disaster [Tsunamies] in my country [Thailand].\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nAmrit,\n\ncan you post\n\nexplain <your slow update query> \n\nso we can see what it does ?\n\nDave\n\[email protected] wrote:\n\n\n\nToday is the first official day of this weeks and the system run\nbetter in serveral points but there are still some points that need to\nbe corrected. Some queries or some tables are very slow. I think the\nqueries inside the programe need to be rewrite.\nNow I put the sort mem to a little bit bigger:\nsort mem = 16384 increase the sort mem makes no effect on the\nslow point eventhough there is little connnection.\nshared_buffers = 27853\neffective cache = 120000\n \n\n\n\n \n\n If I were you I would upgrade from RH 9 to Fedora Core 2 or 3 after\n some initial testing. You'll see a huge improvement of speed on the\n system as a whole. I would try turning hyperthreading off also.\n \n\n\n\nNow I turn hyperthreading off and readjust the conf . I found the bulb query\nthat was :\nupdate one flag of the table [8 million records which I think not too much]\n.When I turned this query off everything went fine.\nI don't know whether update the data is much slower than insert [Postgresql\n7.3.2] and how could we improve the update method?\nThanks for many helps.\nAmrit\nThailand\n\nNB. I would like to give my appreciation to all of the volunteers from many\ncountries who combat with big disaster [Tsunamies] in my country [Thailand].\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend\n\n\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Wed, 05 Jan 2005 13:21:59 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "[email protected] wrote:\n> Now I turn hyperthreading off and readjust the conf . I found the bulb query\n> that was :\n> update one flag of the table [8 million records which I think not too much]\n> .When I turned this query off everything went fine.\n> I don't know whether update the data is much slower than insert [Postgresql\n> 7.3.2] and how could we improve the update method?\n\nUPDATE is expensive. Under a MVCC setup, it's roughtly the equivalent of \nDELETE + INSERT new record (ie, old record deprecated, new version of \nrecord. Updating 8 million records would be very I/O intensive and \nprobably flushes your OS cache so all other queries hit disk versus \nsuperfast memory. And if this operation is run multiple times during the \nday, you may end up with a lot of dead tuples in the table which makes \nquerying it deadly slow.\n\nIf it's a dead tuples issue, you probably have to increase your \nfreespace map and vacuum analyze that specific table more often. If it's \nan I/O hit issue, a lazy updating procedure would help if the operation \nis not time critical (eg. load the record keys that need updating and \nloop through the records with a time delay.)\n", "msg_date": "Wed, 05 Jan 2005 19:31:10 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." 
}, { "msg_contents": "On Wed, 5 Jan 2005 22:35:42 +0700, [email protected]\n<[email protected]> wrote:\n> Now I turn hyperthreading off and readjust the conf . I found the bulb query\n> that was :\n> update one flag of the table [8 million records which I think not too much]\n\nAhh, the huge update. Below are my \"hints\" I've\nfound while trying to optimize such updates.\n\nFirst of all, does this update really changes this 'flag'?\nSay, you have update:\nUPDATE foo SET flag = 4 WHERE [blah];\nare you sure, that flag always is different than 4?\nIf not, then add:\nUPDATE foo SET flag = 4 WHERE flag <> 4 AND [blah];\nThis makes sure only tuples which actually need the change will\nreceive it. [ IIRC mySQL does this, while PgSQL will always perform\nUPDATE, regardless if it changes or not ];\n\nDivide the update, if possible. This way query uses\nless memory and you may call VACUUM inbetween\nupdates. To do this, first SELECT INTO TEMPORARY\ntable the list of rows to update (their ids or something),\nand then loop through it to update the values.\n\nI guess the problem with huge updates is that\nuntil the update is finished, the new tuples are\nnot visible, so the old cannot be freed...\n\n Regards,\n Dawid\n", "msg_date": "Thu, 6 Jan 2005 13:15:33 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "\n> Ahh, the huge update. Below are my \"hints\" I've\n> found while trying to optimize such updates.\n>\n> First of all, does this update really changes this 'flag'?\n> Say, you have update:\n> UPDATE foo SET flag = 4 WHERE [blah];\n> are you sure, that flag always is different than 4?\n> If not, then add:\n> UPDATE foo SET flag = 4 WHERE flag <> 4 AND [blah];\n> This makes sure only tuples which actually need the change will\n> receive it. [ IIRC mySQL does this, while PgSQL will always perform\n> UPDATE, regardless if it changes or not ];\n>\n> Divide the update, if possible. This way query uses\n> less memory and you may call VACUUM inbetween\n> updates. To do this, first SELECT INTO TEMPORARY\n> table the list of rows to update (their ids or something),\n> and then loop through it to update the values.\n>\n> I guess the problem with huge updates is that\n> until the update is finished, the new tuples are\n> not visible, so the old cannot be freed...\n\nYes, very good point I must try this and I will give you the result , thanks a\nlot.\nAmrit\nThailand\n\n", "msg_date": "Thu, 6 Jan 2005 23:34:43 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "Dawid,\n\n> Ahh, the huge update. Below are my \"hints\" I've\n> found while trying to optimize such updates.\n> Divide the update, if possible. This way query uses\n> less memory and you may call VACUUM inbetween\n> updates. To do this, first SELECT INTO TEMPORARY\n> table the list of rows to update (their ids or something),\n> and then loop through it to update the values.\n\nThere are other ways to deal as well -- one by normalizing the database. \nOften, I find that massive updates like this are caused by a denormalized \ndatabase.\n\nFor example, Lyris stores its \"mailing numbers\" only as repeated numbers in \nthe recipients table. When a mailing is complete, Lyris updates all of the \nrecipients .... up to 750,000 rows in the case of my client ... 
to indicate \nthe completion of the mailing (it's actually a little more complicated than \nthat, but the essential problem is the example)\n\nIt would be far better for Lyris to use a seperate mailings table, with a \nstatus in that table ... which would then require only *one* update row to \nindicate completion, instead of 750,000. \n\nI can't tell you how many times I've seen this sort of thing. And the \ndevelopers always tell me \"Well, we denormalized for performance reasons ... \n\"\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 6 Jan 2005 09:06:55 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "On Thu, 6 Jan 2005 09:06:55 -0800\nJosh Berkus <[email protected]> wrote:\n\n> I can't tell you how many times I've seen this sort of thing. And\n> the developers always tell me \"Well, we denormalized for performance\n> reasons ... \"\n\n Now that's rich. I don't think I've ever seen a database perform\n worse after it was normalized. In fact, I can't even think of a\n situation where it could! \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 6 Jan 2005 11:12:07 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "Reading can be worse for a normalized db, which is likely what the \ndevelopers were concerned about.\n\nOne always have to be careful to measure the right thing.\n\nDave\n\nFrank Wiles wrote:\n\n>On Thu, 6 Jan 2005 09:06:55 -0800\n>Josh Berkus <[email protected]> wrote:\n>\n> \n>\n>>I can't tell you how many times I've seen this sort of thing. And\n>>the developers always tell me \"Well, we denormalized for performance\n>>reasons ... \"\n>> \n>>\n>\n> Now that's rich. I don't think I've ever seen a database perform\n> worse after it was normalized. In fact, I can't even think of a\n> situation where it could! \n>\n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\t\nICQ#14675561\n\n", "msg_date": "Thu, 06 Jan 2005 12:35:33 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "Frank,\n\n> Now that's rich. I don't think I've ever seen a database perform\n> worse after it was normalized. In fact, I can't even think of a\n> situation where it could!\n\nOh, there are some. For example, Primer's issues around his dating \ndatabase; it turned out that a fully normalized design resulted in very bad \nselect performance because of the number of joins involved. Of course, the \nmethod that did perform well was *not* a simple denormalization, either.\n\nThe issue with denormalization is, I think, that a lot of developers cut their \nteeth on the likes of MS Access, Sybase 2 or Informix 1.0, where a \npoor-performing join often didn't complete at all. 
As a result, they got \ninto the habit of \"preemptive tuning\"; that is, doing things \"for performance \nreasons\" when the system was still in the design phase, before they even know \nwhat the performance issues *were*. \n\nNot that this was a good practice even then, but the average software project \nallocates grossly inadequate time for testing, so you can see how it became a \nbad habit. And most younger DBAs learn their skills on the job from the \nolder DBAs, so the misinformation gets passed down.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 6 Jan 2005 09:38:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Denormalization WAS: Low Performance for big hospital server .." }, { "msg_contents": "On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:\n> Reading can be worse for a normalized db, which is likely what the \n> developers were concerned about.\n\nTo a point. Once you have enough data that you start running out of\nspace in memory then normalization starts to rapidly gain ground again\nbecause it's often smaller in size and won't hit the disk as much.\n\nMoral of the story is don't tune with a smaller database than you expect\nto have.\n\n> Frank Wiles wrote:\n> \n> >On Thu, 6 Jan 2005 09:06:55 -0800\n> >Josh Berkus <[email protected]> wrote:\n> >\n> > \n> >\n> >>I can't tell you how many times I've seen this sort of thing. And\n> >>the developers always tell me \"Well, we denormalized for performance\n> >>reasons ... \"\n> >> \n> >>\n> >\n> > Now that's rich. I don't think I've ever seen a database perform\n> > worse after it was normalized. In fact, I can't even think of a\n> > situation where it could! \n> >\n> > ---------------------------------\n> > Frank Wiles <[email protected]>\n> > http://www.wiles.org\n> > ---------------------------------\n> >\n> >\n> >---------------------------(end of broadcast)---------------------------\n> >TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n> >\n> > \n> >\n> \n-- \n\n", "msg_date": "Thu, 06 Jan 2005 12:51:14 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "On Thu, 6 Jan 2005 09:38:45 -0800\nJosh Berkus <[email protected]> wrote:\n\n> Frank,\n> \n> > Now that's rich. I don't think I've ever seen a database perform\n> > worse after it was normalized. In fact, I can't even think of a\n> > situation where it could!\n> \n> Oh, there are some. For example, Primer's issues around his dating \n> database; it turned out that a fully normalized design resulted in\n> very bad select performance because of the number of joins involved. \n> Of course, the method that did perform well was *not* a simple\n> denormalization, either.\n> \n> The issue with denormalization is, I think, that a lot of developers\n> cut their teeth on the likes of MS Access, Sybase 2 or Informix 1.0,\n> where a poor-performing join often didn't complete at all. As a\n> result, they got into the habit of \"preemptive tuning\"; that is, doing\n> things \"for performance reasons\" when the system was still in the\n> design phase, before they even know what the performance issues\n> *were*. 
\n> \n> Not that this was a good practice even then, but the average software\n> project allocates grossly inadequate time for testing, so you can see\n> how it became a bad habit. And most younger DBAs learn their skills\n> on the job from the older DBAs, so the misinformation gets passed\n> down.\n\n Yeah the more I thought about it I had a fraud detection system I\n built for a phone company years ago that when completely normalized \n couldn't get the sub-second response the users wanted. It was Oracle\n and we didn't have the best DBA in the world. \n\n I ended up having to push about 20% of the deep call details into\n flat files and surprisingly enough it was faster to grep the flat\n files than use the database, because as was previously mentioned\n all of the joins. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 6 Jan 2005 12:24:40 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Denormalization WAS: Low Performance for big hospital" }, { "msg_contents": "Hi \n\nOn Thu, Jan 06, 2005 at 12:51:14PM -0500, Rod Taylor wrote:\n> On Thu, 2005-01-06 at 12:35 -0500, Dave Cramer wrote:\n> > Reading can be worse for a normalized db, which is likely what the \n> > developers were concerned about.\n> \n> To a point. Once you have enough data that you start running out of\n> space in memory then normalization starts to rapidly gain ground again\n> because it's often smaller in size and won't hit the disk as much.\n\nWell, in datawarehousing applications you'll often denormalize your\nentities due to most of the time the access method is a (more or less)\nsimple select. \n\nRegards,\nYann\n", "msg_date": "Thu, 6 Jan 2005 20:09:57 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "\nFrank Wiles <[email protected]> writes:\n\n> Now that's rich. I don't think I've ever seen a database perform\n> worse after it was normalized. In fact, I can't even think of a\n> situation where it could! \n\nJust remember. All generalisations are false.\n\n-- \ngreg\n\n", "msg_date": "06 Jan 2005 14:47:22 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." }, { "msg_contents": "Greg Stark wrote:\n> Frank Wiles <[email protected]> writes:\n> \n> \n>> Now that's rich. I don't think I've ever seen a database perform\n>> worse after it was normalized. In fact, I can't even think of a\n>> situation where it could! \n> \n> \n> Just remember. All generalisations are false.\n\nIn general, I would agree.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n\n\n-- \nCommand Prompt, Inc., home of PostgreSQL Replication, and plPHP.\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nMammoth PostgreSQL Replicator. Integrated Replication for PostgreSQL", "msg_date": "Thu, 06 Jan 2005 12:21:19 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low Performance for big hospital server .." } ]
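Dawid's two hints, spelled out against his example table foo; the id column and the range boundaries are assumptions added for illustration.

    -- only touch rows whose flag actually changes
    UPDATE foo SET flag = 4 WHERE flag <> 4;

    -- or walk the table in key ranges so each slice stays small and the
    -- dead tuples can be reclaimed between slices
    UPDATE foo SET flag = 4 WHERE flag <> 4 AND id BETWEEN 1 AND 1000000;
    VACUUM foo;
    UPDATE foo SET flag = 4 WHERE flag <> 4 AND id BETWEEN 1000001 AND 2000000;
    VACUUM foo;
    -- ...continue through the remaining ranges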
[ { "msg_contents": "All,\n I am currently working on a project for my company that entails\nDatabasing upwards of 300 million specific parameters. In the current\nDB Design, these parameters are mapped against two lookup tables (2\nmillion, and 1.5 million respectively) and I am having extreme issues\ngetting PG to scale to a working level. Here are my issues:\n 1)the 250 million records are currently whipped and reinserted as a\n\"daily snapshot\" and the fastest way I have found \"COPY\" to do this from\na file is no where near fast enough to do this. SQL*Loader from Oracle\ndoes some things that I need, ie Direct Path to the db files access\n(skipping the RDBMS), inherently ignoring indexing rules and saving a\nton of time (Dropping the index, COPY'ing 250 million records, then\nRecreating the index just takes way too long).\n 2)Finding a way to keep this many records in a fashion that can be\neasily queried. I even tried breaking it up into almost 2800 separate\ntables, basically views of the data pre-broken down, if this is a\nworking method it can be done this way, but when I tried it, VACUUM, and\nthe COPY's all seemed to slow down extremely.\n If there is anyone that can give me some tweak parameters or design\nhelp on this, it would be ridiculously appreciated. I have already\ncreated this in Oracle and it works, but we don't want to have to pay\nthe monster if something as wonderful as Postgres can handle it.\n\n\nRyan Wager\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Josh Berkus\nSent: Tuesday, January 04, 2005 12:06 PM\nTo: [email protected]\nCc: Yann Michel\nSubject: Re: [PERFORM] query rewrite using materialized views\n\nYann,\n\n> are there any plans for rewriting queries to preexisting materialized\n> views? I mean, rewrite a query (within the optimizer) to use a\n> materialized view instead of the originating table?\n\nAutomatically, and by default, no. Using the RULES system? Yes, you\ncan \nalready do this and the folks on the MattView project on pgFoundry are \nworking to make it easier.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n", "msg_date": "Tue, 4 Jan 2005 12:41:45 -0600", "msg_from": "\"Wager, Ryan D [NTK]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query rewrite using materialized views" }, { "msg_contents": "> 1)the 250 million records are currently whipped and reinserted as a\n> \"daily snapshot\" and the fastest way I have found \"COPY\" to do this from\n> a file is no where near fast enough to do this. SQL*Loader from Oracle\n> does some things that I need, ie Direct Path to the db files access\n> (skipping the RDBMS), inherently ignoring indexing rules and saving a\n> ton of time (Dropping the index, COPY'ing 250 million records, then\n> Recreating the index just takes way too long).\n\nIf you have the hardware for it, instead of doing 1 copy, do 1 copy\ncommand per CPU (until your IO is maxed out anyway) and divide the work\namongst them. I can push through 100MB/sec using methods like this --\nwhich makes loading 100GB of data much faster.\n\nDitto for indexes. Don't create a single index on one CPU and wait --\nsend off one index creation command per CPU.\n\n> 2)Finding a way to keep this many records in a fashion that can be\n> easily queried. 
I even tried breaking it up into almost 2800 separate\n> tables, basically views of the data pre-broken down, if this is a\n> working method it can be done this way, but when I tried it, VACUUM, and\n> the COPY's all seemed to slow down extremely.\n\nCan you send us EXPLAIN ANALYSE output for the slow selects and a little\ninsight into what your doing? A basic table structure, and indexes\ninvolved would be handy. You may change column and table names if you\nlike.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Josh Berkus\n> Sent: Tuesday, January 04, 2005 12:06 PM\n> To: [email protected]\n> Cc: Yann Michel\n> Subject: Re: [PERFORM] query rewrite using materialized views\n> \n> Yann,\n> \n> > are there any plans for rewriting queries to preexisting materialized\n> > views? I mean, rewrite a query (within the optimizer) to use a\n> > materialized view instead of the originating table?\n> \n> Automatically, and by default, no. Using the RULES system? Yes, you\n> can \n> already do this and the folks on the MattView project on pgFoundry are \n> working to make it easier.\n> \n-- \n\n", "msg_date": "Tue, 04 Jan 2005 14:02:18 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query rewrite using materialized views" }, { "msg_contents": "Wagner,\n\n> If there is anyone that can give me some tweak parameters or design\n> help on this, it would be ridiculously appreciated. I have already\n> created this in Oracle and it works, but we don't want to have to pay\n> the monster if something as wonderful as Postgres can handle it.\n\nIn addition to Rod's advice, please increase your checkpoint_segments and \ncheckpoint_timeout parameters and make sure that the pg_xlog is on a seperate \ndisk resource from the database.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 4 Jan 2005 11:16:49 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query rewrite using materialized views" }, { "msg_contents": "On Tue, 2005-01-04 at 14:02 -0500, Rod Taylor wrote:\n> > 1)the 250 million records are currently whipped and reinserted as a\n> > \"daily snapshot\" and the fastest way I have found \"COPY\" to do this from\n> > a file is no where near fast enough to do this. SQL*Loader from Oracle\n> > does some things that I need, ie Direct Path to the db files access\n> > (skipping the RDBMS), inherently ignoring indexing rules and saving a\n> > ton of time (Dropping the index, COPY'ing 250 million records, then\n> > Recreating the index just takes way too long).\n> \n> If you have the hardware for it, instead of doing 1 copy, do 1 copy\n> command per CPU (until your IO is maxed out anyway) and divide the work\n> amongst them. I can push through 100MB/sec using methods like this --\n> which makes loading 100GB of data much faster.\n> \n> Ditto for indexes. Don't create a single index on one CPU and wait --\n> send off one index creation command per CPU.\n\nNot sure what you mean by \"whipped\". If you mean select and re-insert\nthen perhaps using a pipe would produce better performance, since no\ndisk access for the data file would be involved.\n\nIn 8.0 COPY and CREATE INDEX is optimised to not use WAL at all if\narchive_command is not set. 8 is great...\n\n> > 2)Finding a way to keep this many records in a fashion that can be\n> > easily queried. 
I even tried breaking it up into almost 2800 separate\n> > tables, basically views of the data pre-broken down, if this is a\n> > working method it can be done this way, but when I tried it, VACUUM, and\n> > the COPY's all seemed to slow down extremely.\n> \n> Can you send us EXPLAIN ANALYSE output for the slow selects and a little\n> insight into what your doing? A basic table structure, and indexes\n> involved would be handy. You may change column and table names if you\n> like.\n\nThere's a known issue using UNION ALL views in 8.0 that makes them\nslightly more inefficient than using a single table. Perhaps that would\nexplain your results.\n\nThere shouldn't be any need to do the 2800 table approach in this\ninstance.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Tue, 04 Jan 2005 23:20:10 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query rewrite using materialized views" } ]
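A minimal shell sketch of the bulk-load pattern discussed in this thread: one COPY per CPU against pre-split chunk files, the index dropped before loading and the indexes rebuilt in parallel afterwards. The database, table, index and chunk-file names are invented, and server-side COPY FROM 'file' assumes superuser access; otherwise psql's \copy does the same from the client side.

# split the daily snapshot into one chunk per CPU beforehand, e.g. with split(1)
psql -d mydb -c "DROP INDEX params_key_idx"            # drop before loading

for f in /data/params.chunk.*; do
    psql -d mydb -c "COPY params FROM '$f'" &          # one backend (and one CPU) per chunk
done
wait

# rebuild indexes, one CREATE INDEX per CPU
psql -d mydb -c "CREATE INDEX params_key_idx ON params (param_key)" &
psql -d mydb -c "CREATE INDEX params_lookup_idx ON params (lookup_id)" &
wait

Raising checkpoint_segments (and checkpoint_timeout) for the duration of the load, as suggested above, helps as well.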
[ { "msg_contents": "Rod,\n I do this, PG gets forked many times, it is tough to find the max\nnumber of times I can do this, but I have a Proc::Queue Manager Perl\ndriver that handles all of the copy calls. I have a quad CPU machine.\nEach COPY only hits ones CPU for like 2.1% but anything over about 5\nkicks the load avg up.\n\n Ill get some explain analysis and table structures out there pronto.\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:[email protected]] \nSent: Tuesday, January 04, 2005 1:02 PM\nTo: Wager, Ryan D [NTK]\nCc: Postgresql Performance\nSubject: Re: [PERFORM] query rewrite using materialized views\n\n> 1)the 250 million records are currently whipped and reinserted as a\n> \"daily snapshot\" and the fastest way I have found \"COPY\" to do this\nfrom\n> a file is no where near fast enough to do this. SQL*Loader from\nOracle\n> does some things that I need, ie Direct Path to the db files access\n> (skipping the RDBMS), inherently ignoring indexing rules and saving a\n> ton of time (Dropping the index, COPY'ing 250 million records, then\n> Recreating the index just takes way too long).\n\nIf you have the hardware for it, instead of doing 1 copy, do 1 copy\ncommand per CPU (until your IO is maxed out anyway) and divide the work\namongst them. I can push through 100MB/sec using methods like this --\nwhich makes loading 100GB of data much faster.\n\nDitto for indexes. Don't create a single index on one CPU and wait --\nsend off one index creation command per CPU.\n\n> 2)Finding a way to keep this many records in a fashion that can be\n> easily queried. I even tried breaking it up into almost 2800 separate\n> tables, basically views of the data pre-broken down, if this is a\n> working method it can be done this way, but when I tried it, VACUUM,\nand\n> the COPY's all seemed to slow down extremely.\n\nCan you send us EXPLAIN ANALYSE output for the slow selects and a little\ninsight into what your doing? A basic table structure, and indexes\ninvolved would be handy. You may change column and table names if you\nlike.\n\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Josh\nBerkus\n> Sent: Tuesday, January 04, 2005 12:06 PM\n> To: [email protected]\n> Cc: Yann Michel\n> Subject: Re: [PERFORM] query rewrite using materialized views\n> \n> Yann,\n> \n> > are there any plans for rewriting queries to preexisting\nmaterialized\n> > views? I mean, rewrite a query (within the optimizer) to use a\n> > materialized view instead of the originating table?\n> \n> Automatically, and by default, no. Using the RULES system? Yes, you\n> can \n> already do this and the folks on the MattView project on pgFoundry are\n\n> working to make it easier.\n> \n-- \n\n", "msg_date": "Tue, 4 Jan 2005 13:26:36 -0600", "msg_from": "\"Wager, Ryan D [NTK]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query rewrite using materialized views" }, { "msg_contents": "On Tue, 2005-01-04 at 13:26 -0600, Wager, Ryan D [NTK] wrote:\n> Rod,\n> I do this, PG gets forked many times, it is tough to find the max\n> number of times I can do this, but I have a Proc::Queue Manager Perl\n> driver that handles all of the copy calls. 
I have a quad CPU machine.\n> Each COPY only hits ones CPU for like 2.1% but anything over about 5\n> kicks the load avg up.\n\nSounds like disk IO is slowing down the copy then.\n\n> Ill get some explain analysis and table structures out there pronto.\n> \n> -----Original Message-----\n> From: Rod Taylor [mailto:[email protected]] \n> Sent: Tuesday, January 04, 2005 1:02 PM\n> To: Wager, Ryan D [NTK]\n> Cc: Postgresql Performance\n> Subject: Re: [PERFORM] query rewrite using materialized views\n> \n> > 1)the 250 million records are currently whipped and reinserted as a\n> > \"daily snapshot\" and the fastest way I have found \"COPY\" to do this\n> from\n> > a file is no where near fast enough to do this. SQL*Loader from\n> Oracle\n> > does some things that I need, ie Direct Path to the db files access\n> > (skipping the RDBMS), inherently ignoring indexing rules and saving a\n> > ton of time (Dropping the index, COPY'ing 250 million records, then\n> > Recreating the index just takes way too long).\n> \n> If you have the hardware for it, instead of doing 1 copy, do 1 copy\n> command per CPU (until your IO is maxed out anyway) and divide the work\n> amongst them. I can push through 100MB/sec using methods like this --\n> which makes loading 100GB of data much faster.\n> \n> Ditto for indexes. Don't create a single index on one CPU and wait --\n> send off one index creation command per CPU.\n> \n> > 2)Finding a way to keep this many records in a fashion that can be\n> > easily queried. I even tried breaking it up into almost 2800 separate\n> > tables, basically views of the data pre-broken down, if this is a\n> > working method it can be done this way, but when I tried it, VACUUM,\n> and\n> > the COPY's all seemed to slow down extremely.\n> \n> Can you send us EXPLAIN ANALYSE output for the slow selects and a little\n> insight into what your doing? A basic table structure, and indexes\n> involved would be handy. You may change column and table names if you\n> like.\n> \n> > -----Original Message-----\n> > From: [email protected]\n> > [mailto:[email protected]] On Behalf Of Josh\n> Berkus\n> > Sent: Tuesday, January 04, 2005 12:06 PM\n> > To: [email protected]\n> > Cc: Yann Michel\n> > Subject: Re: [PERFORM] query rewrite using materialized views\n> > \n> > Yann,\n> > \n> > > are there any plans for rewriting queries to preexisting\n> materialized\n> > > views? I mean, rewrite a query (within the optimizer) to use a\n> > > materialized view instead of the originating table?\n> > \n> > Automatically, and by default, no. Using the RULES system? Yes, you\n> > can \n> > already do this and the folks on the MattView project on pgFoundry are\n> \n> > working to make it easier.\n> > \n-- \n\n", "msg_date": "Tue, 04 Jan 2005 14:54:22 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query rewrite using materialized views" }, { "msg_contents": "Ryan,\n\n> > I do this, PG gets forked many times, it is tough to find the max\n> > number of times I can do this, but I have a Proc::Queue Manager Perl\n> > driver that handles all of the copy calls. I have a quad CPU machine.\n> > Each COPY only hits ones CPU for like 2.1% but anything over about 5\n> > kicks the load avg up.\n\nThat's consistent with Xeon problems we've seen elsewhere. 
Keep the # of \nprocesses at or below the # of processors.\n\nMoving pg_xlog is accomplished through:\n1) in 8.0, changes to postgresql.conf\n\t(in 8.0 you'd also want to explore using multiple arrays with tablespaces to \nmake things even faster)\n2) in other versions:\n\ta) mount a seperate disk on PGDATA/pg_xlog, or\n\tb) symlink PGDATA/pg_xlog to another location\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 4 Jan 2005 13:37:27 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query rewrite using materialized views" } ]
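For the symlink route (option 2b above) on a pre-8.0 server, the steps might look like this; the data directory and the mount point for the second disk are only examples, and the cluster must be shut down cleanly first.

pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/waldisk/pg_xlog      # /mnt/waldisk = separate spindle or array
ln -s /mnt/waldisk/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start

On 8.0, option 1 above avoids the manual symlink.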
[ { "msg_contents": "I have an integer column that is not needed for some rows in the table\n(whether it is necessary is a factor of other row attributes, and it \nisn't worth putting in a separate table).\n\nWhat are the performance tradeoffs (storage space, query speed) of using \nNULL versus a sentinel integer value?\n\nNot that it matters, but in the event where the column values matter,\nthe numberic value is a foreign key. Advice on that welcome too.\n\nThanks!\n", "msg_date": "Wed, 05 Jan 2005 14:09:11 -0500", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Null integer columns" } ]
[ { "msg_contents": "Has anyone seen a benchmark on the speed difference between:\n\nSELECT * FROM item WHERE id=123;\nand\nSELECT * FROM vendor WHERE id=515;\n\nversus:\n\nSELECT * FROM item LEFT JOIN vendor ON item.vendor_id=vendor.id WHERE\nitem.id=123;\n\n\n\nI only have a laptop here so I can't really benchmark properly.\nI'm hoping maybe someone else has, or just knows which would be faster\nunder high traffic/quantity.\n\nThanks!\n", "msg_date": "Wed, 5 Jan 2005 18:31:49 -0800", "msg_from": "Miles Keaton <[email protected]>", "msg_from_op": true, "msg_subject": "Benchmark two separate SELECTs versus one LEFT JOIN" }, { "msg_contents": "Miles,\n\n> I only have a laptop here so I can't really benchmark properly.\n> I'm hoping maybe someone else has, or just knows which would be faster\n> under high traffic/quantity.\n\nWell, it's really a difference between round-trip time vs. the time required \nto compute the join. If your database is setup correctly, the 2nd should be \nfaster.\n\nHowever, it should be very easy to test ....\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 6 Jan 2005 08:57:56 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Benchmark two separate SELECTs versus one LEFT JOIN" } ]
[ { "msg_contents": "I'm still relatively new to Postgres. I usually just do SQL programming \nbut have found my self having to administer the DB now. I have I have \na problem on my website that when there is high amounts of traffic \ncoming from one computer to my web server. I suspect it is because of a \nvirus. But what when I notice this, my processor drops to 0.0% idle \nwith postmaster being my highest CPU user. Under normal circumstances \nthe processor runs >90% idle or <10% used. I have tried tuning postgres \nbut it doesn't seem to make a difference, unless I am doing something \nwrong. If I would like to find a solution other than rewriting all of \nmy SQL statements and creating them to take the least amount of time to \nprocess. \n\n", "msg_date": "Thu, 6 Jan 2005 12:02:49 +0400", "msg_from": "Ben Bostow <[email protected]>", "msg_from_op": true, "msg_subject": "Problems with high traffic" }, { "msg_contents": "Ben\n\nWell, we need more information\n\npg version, hardware, memory, etc\n\nyou may want to turn on log_duration to see exactly which statement is \ncauseing the problem. I'm assuming since it is taking a lot of CPU it \nwill take some time to complete( this may not be true)\n\nOn your last point, that is where you will get the most optimization, \nbut I'd still use log_duration to make sure optimizing the statement \nwill actually help.\n\ndave\n\nBen Bostow wrote:\n\n> I'm still relatively new to Postgres. I usually just do SQL \n> programming but have found my self having to administer the DB now. I \n> have I have a problem on my website that when there is high amounts of \n> traffic coming from one computer to my web server. I suspect it is \n> because of a virus. But what when I notice this, my processor drops to \n> 0.0% idle with postmaster being my highest CPU user. Under normal \n> circumstances the processor runs >90% idle or <10% used. I have tried \n> tuning postgres but it doesn't seem to make a difference, unless I am \n> doing something wrong. If I would like to find a solution other than \n> rewriting all of my SQL statements and creating them to take the least \n> amount of time to process.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 06 Jan 2005 08:06:51 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with high traffic" }, { "msg_contents": "I am running postgresql 7.2.4-5.73, Dual P4, 1GB Ram. The big problem \nis that I redirect all internal port 80 traffic to my web server so I \nsee all traffic whether it is a virus or not and intended for my server \nor not. I originally had a problem with running out of memory but I \nfound a bug in my software that kept the DB connection open so the next \ntime a new connection was made on top of that. As soon as I removed \nthat I started getting the processor problem. 
I am working on patching \nmy kernel to have the string matching and other new iptables features \nto limit the virus traffic but I would like to figure the Processor \nproblem out as I am working on moving everything to the 2.6 kernel when \nRedHat finalizes their release.\n\nI am not familular with many of the logging features of postgres just \nthe outputing the output to a file instead of /dev/null.\n\nBenjamin\n\nOn Jan 6, 2005, at 5:06 PM, Dave Cramer wrote:\n\n> Ben\n>\n> Well, we need more information\n>\n> pg version, hardware, memory, etc\n>\n> you may want to turn on log_duration to see exactly which statement is \n> causeing the problem. I'm assuming since it is taking a lot of CPU it \n> will take some time to complete( this may not be true)\n>\n> On your last point, that is where you will get the most optimization, \n> but I'd still use log_duration to make sure optimizing the statement \n> will actually help.\n>\n> dave\n>\n> Ben Bostow wrote:\n>\n>> I'm still relatively new to Postgres. I usually just do SQL \n>> programming but have found my self having to administer the DB now. \n>> I have I have a problem on my website that when there is high amounts \n>> of traffic coming from one computer to my web server. I suspect it is \n>> because of a virus. But what when I notice this, my processor drops \n>> to 0.0% idle with postmaster being my highest CPU user. Under normal \n>> circumstances the processor runs >90% idle or <10% used. I have tried \n>> tuning postgres but it doesn't seem to make a difference, unless I am \n>> doing something wrong. If I would like to find a solution other than \n>> rewriting all of my SQL statements and creating them to take the \n>> least amount of time to process.\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>>\n>>\n>\n> -- \n> Dave Cramer\n> http://www.postgresintl.com\n> 519 939 0336\n> ICQ#14675561\n>\n\n", "msg_date": "Thu, 6 Jan 2005 17:18:43 +0400", "msg_from": "Ben Bostow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with high traffic" }, { "msg_contents": "Ben,\n\nHmmm... ok 7.2.4 is quite old now and log_duration doesn't exist in the \nlogging. You will see an immediate performance benefit just by moving to \n7.4.x, but I'll bet that's not a reasonable path for you.\n\nin postgresql.conf you can change the logging to:\n\nlog_pid=true\nlog_duration=true\nlog_statement=true\n\nsyslog=2 ; to log to syslog\n\nThen in syslogd.conf\n\nadd local0.none to the /var/log/messages line to stop logging to messages\nredirect local0.* to /var/log/postgres ; this step isn't really \nnecesssary but will keep postgres logs separate\n\nHUP syslogd\n\nrestart postgres\n\nThen you should be able to see which statements are taking the longest.\n\nWhy do random hits to your web server cause postgres activity? Is your \nsite dynamically created from the database ?\n\nDave\n\nBen Bostow wrote:\n\n> I am running postgresql 7.2.4-5.73, Dual P4, 1GB Ram. The big problem \n> is that I redirect all internal port 80 traffic to my web server so I \n> see all traffic whether it is a virus or not and intended for my \n> server or not. 
I originally had a problem with running out of memory \n> but I found a bug in my software that kept the DB connection open so \n> the next time a new connection was made on top of that. As soon as I \n> removed that I started getting the processor problem. I am working on \n> patching my kernel to have the string matching and other new iptables \n> features to limit the virus traffic but I would like to figure the \n> Processor problem out as I am working on moving everything to the 2.6 \n> kernel when RedHat finalizes their release.\n>\n> I am not familular with many of the logging features of postgres just \n> the outputing the output to a file instead of /dev/null.\n>\n> Benjamin\n>\n> On Jan 6, 2005, at 5:06 PM, Dave Cramer wrote:\n>\n>> Ben\n>>\n>> Well, we need more information\n>>\n>> pg version, hardware, memory, etc\n>>\n>> you may want to turn on log_duration to see exactly which statement \n>> is causeing the problem. I'm assuming since it is taking a lot of CPU \n>> it will take some time to complete( this may not be true)\n>>\n>> On your last point, that is where you will get the most optimization, \n>> but I'd still use log_duration to make sure optimizing the statement \n>> will actually help.\n>>\n>> dave\n>>\n>> Ben Bostow wrote:\n>>\n>>> I'm still relatively new to Postgres. I usually just do SQL \n>>> programming but have found my self having to administer the DB now. \n>>> I have I have a problem on my website that when there is high \n>>> amounts of traffic coming from one computer to my web server. I \n>>> suspect it is because of a virus. But what when I notice this, my \n>>> processor drops to 0.0% idle with postmaster being my highest CPU \n>>> user. Under normal circumstances the processor runs >90% idle or \n>>> <10% used. I have tried tuning postgres but it doesn't seem to make \n>>> a difference, unless I am doing something wrong. If I would like to \n>>> find a solution other than rewriting all of my SQL statements and \n>>> creating them to take the least amount of time to process.\n>>>\n>>> ---------------------------(end of \n>>> broadcast)---------------------------\n>>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>>> subscribe-nomail command to [email protected] so that your\n>>> message can get through to the mailing list cleanly\n>>>\n>>>\n>>\n>> -- \n>> Dave Cramer\n>> http://www.postgresintl.com\n>> 519 939 0336\n>> ICQ#14675561\n>>\n>\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 06 Jan 2005 08:32:41 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with high traffic" }, { "msg_contents": "I know 7.2 is old I'm trying to fix this in the mean time moving \neverything to the latest Linux software when RedHat releases the \nenterprise with 2.6. Postgres complains about log_duration and \nlog_statement are they a different name under 7.2? Is there \ndocumentation on the type of logging the postgres can do? I can't seem \nto find it in the 7.2 docs. If you know of any good resources for \nPostgres in administering and tuning I would like to know.\n\nBenjamin\n\nOn Jan 6, 2005, at 5:32 PM, Dave Cramer wrote:\n\n> Ben,\n>\n> Hmmm... ok 7.2.4 is quite old now and log_duration doesn't exist in \n> the logging. 
You will see an immediate performance benefit just by \n> moving to 7.4.x, but I'll bet that's not a reasonable path for you.\n>\n> in postgresql.conf you can change the logging to:\n>\n> log_pid=true\n> log_duration=true\n> log_statement=true\n>\n> syslog=2 ; to log to syslog\n>\n> Then in syslogd.conf\n>\n> add local0.none to the /var/log/messages line to stop logging to \n> messages\n> redirect local0.* to /var/log/postgres ; this step isn't really \n> necesssary but will keep postgres logs separate\n>\n> HUP syslogd\n>\n> restart postgres\n>\n> Then you should be able to see which statements are taking the longest.\n>\n> Why do random hits to your web server cause postgres activity? Is your \n> site dynamically created from the database ?\n>\n> Dave\n>\n> Ben Bostow wrote:\n>\n>> I am running postgresql 7.2.4-5.73, Dual P4, 1GB Ram. The big problem \n>> is that I redirect all internal port 80 traffic to my web server so I \n>> see all traffic whether it is a virus or not and intended for my \n>> server or not. I originally had a problem with running out of memory \n>> but I found a bug in my software that kept the DB connection open so \n>> the next time a new connection was made on top of that. As soon as I \n>> removed that I started getting the processor problem. I am working on \n>> patching my kernel to have the string matching and other new iptables \n>> features to limit the virus traffic but I would like to figure the \n>> Processor problem out as I am working on moving everything to the 2.6 \n>> kernel when RedHat finalizes their release.\n>>\n>> I am not familular with many of the logging features of postgres just \n>> the outputing the output to a file instead of /dev/null.\n>>\n>> Benjamin\n>>\n>> On Jan 6, 2005, at 5:06 PM, Dave Cramer wrote:\n>>\n>>> Ben\n>>>\n>>> Well, we need more information\n>>>\n>>> pg version, hardware, memory, etc\n>>>\n>>> you may want to turn on log_duration to see exactly which statement \n>>> is causeing the problem. I'm assuming since it is taking a lot of \n>>> CPU it will take some time to complete( this may not be true)\n>>>\n>>> On your last point, that is where you will get the most \n>>> optimization, but I'd still use log_duration to make sure optimizing \n>>> the statement will actually help.\n>>>\n>>> dave\n>>>\n>>> Ben Bostow wrote:\n>>>\n>>>> I'm still relatively new to Postgres. I usually just do SQL \n>>>> programming but have found my self having to administer the DB now. \n>>>> I have I have a problem on my website that when there is high \n>>>> amounts of traffic coming from one computer to my web server. I \n>>>> suspect it is because of a virus. But what when I notice this, my \n>>>> processor drops to 0.0% idle with postmaster being my highest CPU \n>>>> user. Under normal circumstances the processor runs >90% idle or \n>>>> <10% used. I have tried tuning postgres but it doesn't seem to make \n>>>> a difference, unless I am doing something wrong. 
If I would like to \n>>>> find a solution other than rewriting all of my SQL statements and \n>>>> creating them to take the least amount of time to process.\n>>>>\n>>>> ---------------------------(end of \n>>>> broadcast)---------------------------\n>>>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>>>> subscribe-nomail command to [email protected] so that \n>>>> your\n>>>> message can get through to the mailing list cleanly\n>>>>\n>>>>\n>>>\n>>> -- \n>>> Dave Cramer\n>>> http://www.postgresintl.com\n>>> 519 939 0336\n>>> ICQ#14675561\n>>>\n>>\n>>\n>>\n>\n> -- \n> Dave Cramer\n> http://www.postgresintl.com\n> 519 939 0336\n> ICQ#14675561\n>\n\n", "msg_date": "Thu, 6 Jan 2005 17:57:38 +0400", "msg_from": "Ben Bostow <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problems with high traffic" }, { "msg_contents": "Ben,\n\nIt turns out that 7.2 has neither of those options you will have to set \nthe debug_level to something higher than 0 and less than 4 to get \ninformation out. I'm afraid I'm not sure which value will give you what \nyou are looking for.\n\nThe link below explains what is available, and it isn't much :(\n\nhttp://www.postgresql.org/docs/7.2/static/runtime-config.html#LOGGING\n\nDave\n\nBen Bostow wrote:\n\n> I know 7.2 is old I'm trying to fix this in the mean time moving \n> everything to the latest Linux software when RedHat releases the \n> enterprise with 2.6. Postgres complains about log_duration and \n> log_statement are they a different name under 7.2? Is there \n> documentation on the type of logging the postgres can do? I can't seem \n> to find it in the 7.2 docs. If you know of any good resources for \n> Postgres in administering and tuning I would like to know.\n>\n> Benjamin\n>\n> On Jan 6, 2005, at 5:32 PM, Dave Cramer wrote:\n>\n>> Ben,\n>>\n>> Hmmm... ok 7.2.4 is quite old now and log_duration doesn't exist in \n>> the logging. You will see an immediate performance benefit just by \n>> moving to 7.4.x, but I'll bet that's not a reasonable path for you.\n>>\n>> in postgresql.conf you can change the logging to:\n>>\n>> log_pid=true\n>> log_duration=true\n>> log_statement=true\n>>\n>> syslog=2 ; to log to syslog\n>>\n>> Then in syslogd.conf\n>>\n>> add local0.none to the /var/log/messages line to stop logging to \n>> messages\n>> redirect local0.* to /var/log/postgres ; this step isn't really \n>> necesssary but will keep postgres logs separate\n>>\n>> HUP syslogd\n>>\n>> restart postgres\n>>\n>> Then you should be able to see which statements are taking the longest.\n>>\n>> Why do random hits to your web server cause postgres activity? Is \n>> your site dynamically created from the database ?\n>>\n>> Dave\n>>\n>> Ben Bostow wrote:\n>>\n>>> I am running postgresql 7.2.4-5.73, Dual P4, 1GB Ram. The big \n>>> problem is that I redirect all internal port 80 traffic to my web \n>>> server so I see all traffic whether it is a virus or not and \n>>> intended for my server or not. I originally had a problem with \n>>> running out of memory but I found a bug in my software that kept the \n>>> DB connection open so the next time a new connection was made on top \n>>> of that. As soon as I removed that I started getting the processor \n>>> problem. 
I am working on patching my kernel to have the string \n>>> matching and other new iptables features to limit the virus traffic \n>>> but I would like to figure the Processor problem out as I am working \n>>> on moving everything to the 2.6 kernel when RedHat finalizes their \n>>> release.\n>>>\n>>> I am not familular with many of the logging features of postgres \n>>> just the outputing the output to a file instead of /dev/null.\n>>>\n>>> Benjamin\n>>>\n>>> On Jan 6, 2005, at 5:06 PM, Dave Cramer wrote:\n>>>\n>>>> Ben\n>>>>\n>>>> Well, we need more information\n>>>>\n>>>> pg version, hardware, memory, etc\n>>>>\n>>>> you may want to turn on log_duration to see exactly which statement \n>>>> is causeing the problem. I'm assuming since it is taking a lot of \n>>>> CPU it will take some time to complete( this may not be true)\n>>>>\n>>>> On your last point, that is where you will get the most \n>>>> optimization, but I'd still use log_duration to make sure \n>>>> optimizing the statement will actually help.\n>>>>\n>>>> dave\n>>>>\n>>>> Ben Bostow wrote:\n>>>>\n>>>>> I'm still relatively new to Postgres. I usually just do SQL \n>>>>> programming but have found my self having to administer the DB \n>>>>> now. I have I have a problem on my website that when there is \n>>>>> high amounts of traffic coming from one computer to my web server. \n>>>>> I suspect it is because of a virus. But what when I notice this, \n>>>>> my processor drops to 0.0% idle with postmaster being my highest \n>>>>> CPU user. Under normal circumstances the processor runs >90% idle \n>>>>> or <10% used. I have tried tuning postgres but it doesn't seem to \n>>>>> make a difference, unless I am doing something wrong. If I would \n>>>>> like to find a solution other than rewriting all of my SQL \n>>>>> statements and creating them to take the least amount of time to \n>>>>> process.\n>>>>>\n>>>>> ---------------------------(end of \n>>>>> broadcast)---------------------------\n>>>>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>>>>> subscribe-nomail command to [email protected] so that \n>>>>> your\n>>>>> message can get through to the mailing list cleanly\n>>>>>\n>>>>>\n>>>>\n>>>> -- \n>>>> Dave Cramer\n>>>> http://www.postgresintl.com\n>>>> 519 939 0336\n>>>> ICQ#14675561\n>>>>\n>>>\n>>>\n>>>\n>>\n>> -- \n>> Dave Cramer\n>> http://www.postgresintl.com\n>> 519 939 0336\n>> ICQ#14675561\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Thu, 06 Jan 2005 10:16:40 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problems with high traffic" } ]
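Putting the thread's suggestions together for a 7.2 server, a workable logging setup might look roughly like this (paths and the debug level are examples; on 7.2 the statement text comes from debug_level/debug_print_query, since log_statement and log_duration only arrived in later releases):

# postgresql.conf additions (7.2-era parameter names)
cat >> /var/lib/pgsql/data/postgresql.conf <<'EOF'
syslog = 2            # send server output to syslog only
log_pid = true
log_timestamp = true
debug_level = 2       # 1-4; higher values log query text and more detail
EOF

# /etc/syslog.conf: keep postgres out of /var/log/messages and give it its own file
#   *.info;mail.none;authpriv.none;local0.none    /var/log/messages
#   local0.*                                      /var/log/postgres

/etc/init.d/syslog restart
/etc/init.d/postgresql restart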
[ { "msg_contents": "Hi there! I'm doing my first tunning on my postgreSQL, my server is for \na small app, largest table shall never exceed 10k rows, and less than 1k \ntransactions/day. So I don't think I should run out of resources. The \nmachine is a Fedora Core 3.0 with 1gb ran and kernel 2.6. I'm thinking \nin having 50 connections limit, so besides semaphores should I do \nanything special on kernel parameters. The app is so small that during \nlate night time almost no one will access so, I'm thinking in full \nvacuuming it every day at 1:00AM.\n\nAny tips are very very welcome :D\n\nThanks all\n", "msg_date": "Thu, 06 Jan 2005 11:19:51 -0200", "msg_from": "Vinicius Caldeira Carvalho <[email protected]>", "msg_from_op": true, "msg_subject": "first postgrreSQL tunning" }, { "msg_contents": "On Thu, 06 Jan 2005 11:19:51 -0200\nVinicius Caldeira Carvalho <[email protected]> wrote:\n\n> Hi there! I'm doing my first tunning on my postgreSQL, my server is\n> for a small app, largest table shall never exceed 10k rows, and less\n> than 1k transactions/day. So I don't think I should run out of\n> resources. The machine is a Fedora Core 3.0 with 1gb ran and kernel\n> 2.6. I'm thinking in having 50 connections limit, so besides\n> semaphores should I do anything special on kernel parameters. The app\n> is so small that during late night time almost no one will access so,\n> I'm thinking in full vacuuming it every day at 1:00AM.\n> \n> Any tips are very very welcome :D\n\n You'll want to tune shared_buffers and sort_mem. Possibly also \n effective_cache_size. It really depends on your system.\n\n Having tuned a ton of apps to work with PostgreSQL what I usually\n do is write a small script that does the major resource intensive\n queries on the database and time it. Tweak a PostgreSQL parameter\n and re-run, wash, rinse, repeat until I get what I believe is the\n best performance I can. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Thu, 6 Jan 2005 09:52:19 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: first postgrreSQL tunning" } ]
[ { "msg_contents": "\n> Hi there! I'm doing my first tunning on my postgreSQL, my server is\nfor\n> a small app, largest table shall never exceed 10k rows, and less than\n1k\n> transactions/day. So I don't think I should run out of resources. The\n> machine is a Fedora Core 3.0 with 1gb ran and kernel 2.6. I'm thinking\n> in having 50 connections limit, so besides semaphores should I do\n> anything special on kernel parameters. The app is so small that during\n> late night time almost no one will access so, I'm thinking in full\n> vacuuming it every day at 1:00AM.\n\nThe biggest danger with small databases is it's easy to become\noverconfident...writing poor queries and such and not properly indexing.\n50 users can hit your system pretty hard if they all decide to do\nsomething at once. Aside from that, just remember to bump up work_mem a\nbit for fast joining.\n\nYour application may be small, but if it is written well and works, it\nwell inevitably become larger and more complex, so plan for the future\n:)\n\nMerlin\n\n", "msg_date": "Thu, 6 Jan 2005 08:34:23 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: first postgrreSQL tunning" }, { "msg_contents": "Merlin Moncure wrote:\n\n>>Hi there! I'm doing my first tunning on my postgreSQL, my server is\n>> \n>>\n>for\n> \n>\n>>a small app, largest table shall never exceed 10k rows, and less than\n>> \n>>\n>1k\n> \n>\n>>transactions/day. So I don't think I should run out of resources. The\n>>machine is a Fedora Core 3.0 with 1gb ran and kernel 2.6. I'm thinking\n>>in having 50 connections limit, so besides semaphores should I do\n>>anything special on kernel parameters. The app is so small that during\n>>late night time almost no one will access so, I'm thinking in full\n>>vacuuming it every day at 1:00AM.\n>> \n>>\n>\n>The biggest danger with small databases is it's easy to become\n>overconfident...writing poor queries and such and not properly indexing.\n>50 users can hit your system pretty hard if they all decide to do\n>something at once. Aside from that, just remember to bump up work_mem a\n>bit for fast joining.\n>\n>Your application may be small, but if it is written well and works, it\n>well inevitably become larger and more complex, so plan for the future\n>:)\n>\n>Merlin\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n> \n>\nThanks Merlin, besides what I said of tunning anything else I should \ncare looking at?\n", "msg_date": "Thu, 06 Jan 2005 11:46:37 -0200", "msg_from": "Vinicius Caldeira Carvalho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: first postgrreSQL tunning" } ]
[ { "msg_contents": "In my younger days I denormalized a database for performance reasons and\nhave been paid for it dearly with increased maintenance costs. Adding\nenhanced capabilities and new functionality will render denormalization\nworse than useless quickly. --Rick\n\n\n \n Frank Wiles \n <[email protected]> To: Josh Berkus <[email protected]> \n Sent by: cc: [email protected] \n pgsql-performance-owner@pos Subject: Re: [PERFORM] Low Performance for big hospital server .. \n tgresql.org \n \n \n 01/06/2005 12:12 PM \n \n \n\n\n\n\nOn Thu, 6 Jan 2005 09:06:55 -0800\nJosh Berkus <[email protected]> wrote:\n\n> I can't tell you how many times I've seen this sort of thing. And\n> the developers always tell me \"Well, we denormalized for performance\n> reasons ... \"\n\n Now that's rich. I don't think I've ever seen a database perform\n worse after it was normalized. In fact, I can't even think of a\n situation where it could!\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n", "msg_date": "Thu, 6 Jan 2005 13:33:00 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low Performance for big hospital server .." } ]
[ { "msg_contents": " \tI'm looking for recent performance statistics on PostgreSQL vs. Oracle \nvs. Microsoft SQL Server. Recently someone has been trying to convince my \nclient to switch from SyBASE to Microsoft SQL Server (they originally wanted \nto go with Oracle but have since fallen in love with Microsoft). All this \ntime I've been recommending PostgreSQL for cost and stability (my own testing \nhas shown it to be better at handling abnormal shutdowns and using fewer \nsystem resources) in addition to true cross-platform compatibility.\n\n \tIf I can show my client some statistics that PostgreSQL outperforms \nthese (I'm more concerned about it beating Oracle because I know that \nMicrosoft's stuff is always slower, but I need the information anyway to \nprotect my client from falling victim to a 'sales job'), then PostgreSQL will \nbe the solution of choice as the client has always believed that they need a \nhigh-performance solution.\n\n \tI've already convinced them on the usual price, cross-platform \ncompatibility, open source, long history, etc. points, and I've been assured \nthat if the performance is the same or better than Oracle's and Microsoft's \nsolutions that PostgreSQL is what they'll choose.\n\n \tThanks in advance.\n", "msg_date": "Thu, 6 Jan 2005 19:01:38 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "On Thu, 6 Jan 2005 19:01:38 +0000 (UTC)\nRandolf Richardson <[email protected]> wrote:\n\n> \tI'm looking for recent performance statistics on PostgreSQL vs.\n> \tOracle \n> vs. Microsoft SQL Server. Recently someone has been trying to\n> convince my client to switch from SyBASE to Microsoft SQL Server (they\n> originally wanted to go with Oracle but have since fallen in love with\n> Microsoft). All this time I've been recommending PostgreSQL for cost\n> and stability (my own testing has shown it to be better at handling\n> abnormal shutdowns and using fewer system resources) in addition to\n> true cross-platform compatibility.\n> \n> \tIf I can show my client some statistics that PostgreSQL\n> \toutperforms \n> these (I'm more concerned about it beating Oracle because I know that \n> Microsoft's stuff is always slower, but I need the information anyway\n> to protect my client from falling victim to a 'sales job'), then\n> PostgreSQL will be the solution of choice as the client has always\n> believed that they need a high-performance solution.\n> \n> \tI've already convinced them on the usual price, cross-platform \n> compatibility, open source, long history, etc. points, and I've been\n> assured that if the performance is the same or better than Oracle's\n> and Microsoft's solutions that PostgreSQL is what they'll choose.\n\n While this doesn't exactly answer your question, I use this little\n tidbit of information when \"selling\" people on PostgreSQL. PostgreSQL\n was chosen over Oracle as the database to handle all of the .org TLDs\n information. While I don't believe the company that won was chosen \n solely because they used PostgreSQL vs Oracle ( vs anything else ),\n it does go to show that PostgreSQL can be used in a large scale\n environment. \n\n Another tidbit you can use in this particular case: I was involved\n in moving www.ljworld.com, www.lawrence.com, and www.kusports.com from\n a Sybase backend to a PostgreSQL backend back in 2000-2001. 
We got\n roughly a 200% speed improvement at that time and PostgreSQL has only\n improved since then. I would be more than happy to elaborate on this\n migration off list if you would like. kusports.com gets a TON of \n hits especially during \"March Madness\" and PostgreSQL has never been\n an issue in the performance of the site. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Sun, 9 Jan 2005 18:04:52 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Quoting Randolf Richardson <[email protected]>:\n\n> \tI'm looking for recent performance statistics on PostgreSQL vs. Oracle\n> \n> vs. Microsoft SQL Server. Recently someone has been trying to convince my \n\nI don't know anything about your customer's requirements other than that they\nhave a DB currently and somebody(ies) is(are) trying to get them to switch to\nanother.\n\nI don't think you'll find meaningful numbers unless you do your own benchmarks.\n\n DB performance is very largely determined by how the application functions,\nhardware, OS and the DBA's familiarity with the platform. I would suspect that\nfor any given workload on relatively similar hardware that just about any of the\nDB's you mention would perform similarly if tuned appropriately.\n\n> client to switch from SyBASE to Microsoft SQL Server (they originally wanted\n> \n> to go with Oracle but have since fallen in love with Microsoft). All this \n> time I've been recommending PostgreSQL for cost and stability (my own testing\n> \n> has shown it to be better at handling abnormal shutdowns and using fewer \n> system resources) in addition to true cross-platform compatibility.\n\nRight for the customer? How about \"Don't fix it if it ain't broke\"? Replacing\na DB backend isn't always trivial (understatement). I suppose if their\napplication is very simple and uses few if any proprietary features of Sybase\nthen changing the DB would be simple. That depends heavily on the application.\nIn general, though, you probably shouldn't rip and replace DB platforms unless\nthere's a very good strategic reason.\n\nI don't know about MSSQL, but I know that, if managed properly, Sybase and\nOracle can be pretty rock-solid and high performing. If *you* have found FooDB\nto be the most stable and highest performing, then that probably means that\nFooDB is the one you're most familiar with rather than FooDB being the best in\nall circumstances. PostgreSQL is great. I love it. In the right hands and\nunder the right circumstances, it is the best DB. So is Sybase. And Oracle. \nAnd MSSQL.\n\n> \n> \tIf I can show my client some statistics that PostgreSQL outperforms \n> these (I'm more concerned about it beating Oracle because I know that \n> Microsoft's stuff is always slower, but I need the information anyway to \n> protect my client from falling victim to a 'sales job'), then PostgreSQL will\n> \n> be the solution of choice as the client has always believed that they need a\n> \n> high-performance solution.\n> \n\nUnless there's a really compelling reason to switch, optimizing what they\nalready have is probably the best thing for them. They've already paid for it.\n They've already written their own application and have some familiarity with\nmanaging the DB. According to Sybase, Sybase is the fastest thing going. 
:-)\nWhich is probably pretty close to the truth if the application and DB are tuned\nappropriately.\n\n> \tI've already convinced them on the usual price, cross-platform \n> compatibility, open source, long history, etc. points, and I've been assured\n> \n> that if the performance is the same or better than Oracle's and Microsoft's\n> \n> solutions that PostgreSQL is what they'll choose.\n\nAre you telling me that they're willing to pay $40K per CPU for Oracle if it\nperforms 1% better than PostgreSQL, which is $0? Not to mention throw away\nSybase, which is a highly scalable platform in and of itself.\n\nThe best DB platform is what they currently have, regardless of what they have,\nunless there is a very compelling reason to switch.\n\n> \n> \tThanks in advance.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n\n\n", "msg_date": "Sun, 9 Jan 2005 21:04:26 -0800", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Randolf Richardson wrote:\n> \tI'm looking for recent performance statistics on PostgreSQL vs. Oracle \n> vs. Microsoft SQL Server. Recently someone has been trying to convince my \n> client to switch from SyBASE to Microsoft SQL Server (they originally wanted \n> to go with Oracle but have since fallen in love with Microsoft). All this \n> time I've been recommending PostgreSQL for cost and stability (my own testing \n> has shown it to be better at handling abnormal shutdowns and using fewer \n> system resources) in addition to true cross-platform compatibility.\n> \n\nI'm not sure that you are going to get a simple answer to this one. It\nreally depends on what you are trying to do. The only way you will know\nfor sure what the performance of PostgreSQL is is to try it with samples\nof your common queries, updates etc.\n\nI have recently ported a moderately complex database from MS SQLServer\nto Postgres with reasonable success. 70% selects, 20% updates, 10%\ninsert/deletes. I had to do a fair bit of work to get the best\nperformance out of Postgres, but most of the SQL has as good or better\nperformance then SQLServer. There are still areas where SQLServer\noutperforms Postgres. For me these tend to be the larger SQL Statements\nwith correlated subqueries. SQLServer tends to optimise them better a\nlot of the time. Updates tend to be a fair bit faster on SQLServer too,\nthis may be MS taking advantage of Windows specific optimisations in the\nfilesystem.\n\nI did give Oracle a try out of curiosity. I never considered it\nseriously because of the cost. The majority of my SQL was *slower* under\nOracle than SQLServer. I spent some time with it and did get good\nperformance, but it took a *lot* of work tuning to Oracle specific ways\nof doing things.\n\nMy Summary:\n\nSQLServer: A good all round database, fast, stable. Moderately expensive\nto buy, cheap and easy to work with and program for (on Windows)\n\nPostgreSQL: A good all rounder, fast most of the time, stable. Free to\nacquire, more expensive to work with and program for. Client drivers may\nbe problematic depending on platform and programming language. Needs\nmore work than SQLServer to get the best out of it. Improving all the\ntime and worth serious consideration.\n\nOracle: A bit of a monstrosity. Can be very fast with a lot of work,\ncan't comment on stability but I guess it's pretty good. Very expensive\nto acquire and work with. 
Well supported server and clients.\n\nCheers,\nGary.\n\n", "msg_date": "Mon, 10 Jan 2005 07:30:12 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Quick reply on this - I have worked with Oracle, MSSQL and Postgresql,\nthe first and last extensively.\n\nOracle is not that expensive - standard one can be got for $149/user\nor $5k/CPU, and for most applications, the features in standard one\nare fine.\n\nOracle is a beast to manage. It does alot more logging that most\nother RDBMses, which is where you start needed more disk partitions\nfor it to be effective (System, Redo, Archive Redo, Undo, Table\n(posibly Index)). The biggest cost for Oracle is hiring someone who\nknows how to set it up and maintain it properly, and it can be quite a\nfeat.\n\nMS-SQL _is_ expensive for what you get. MS-SQL lacks many features\nthat both Postgresql and oracle. I have particularly noticed\naggregate queries and grouping operations aren't as advanced. \nTransact-SQL is also big pain in the ass.\n\nNeither Oracle nor MS-SQL have the range of stored procedure langauges\nthat Postgresql supports. Postgresql is certainly the easiest to set\nup and maintain and get good performance. For small to medium\ndatabase sizes on systems with limited drive partitions, I would\nexpect postgresql to outperform Oracle in most tests. If you have\n$25k to spend on a DB server, and over $100k/year for an Oracle DBA,\nand you need 60x60x24x7x365 uptime with recoverability, realtime\nreplication and clustering - Oracle might be your best bet, otherwise\n- pick Postgresql ;)\n\nAlex Turner\nNetEconoimst\n\n\nOn Mon, 10 Jan 2005 07:30:12 +0000, Gary Doades <[email protected]> wrote:\n> Randolf Richardson wrote:\n> > I'm looking for recent performance statistics on PostgreSQL vs. Oracle\n> > vs. Microsoft SQL Server. Recently someone has been trying to convince my\n> > client to switch from SyBASE to Microsoft SQL Server (they originally wanted\n> > to go with Oracle but have since fallen in love with Microsoft). All this\n> > time I've been recommending PostgreSQL for cost and stability (my own testing\n> > has shown it to be better at handling abnormal shutdowns and using fewer\n> > system resources) in addition to true cross-platform compatibility.\n> >\n> \n> I'm not sure that you are going to get a simple answer to this one. It\n> really depends on what you are trying to do. The only way you will know\n> for sure what the performance of PostgreSQL is is to try it with samples\n> of your common queries, updates etc.\n> \n> I have recently ported a moderately complex database from MS SQLServer\n> to Postgres with reasonable success. 70% selects, 20% updates, 10%\n> insert/deletes. I had to do a fair bit of work to get the best\n> performance out of Postgres, but most of the SQL has as good or better\n> performance then SQLServer. There are still areas where SQLServer\n> outperforms Postgres. For me these tend to be the larger SQL Statements\n> with correlated subqueries. SQLServer tends to optimise them better a\n> lot of the time. Updates tend to be a fair bit faster on SQLServer too,\n> this may be MS taking advantage of Windows specific optimisations in the\n> filesystem.\n> \n> I did give Oracle a try out of curiosity. I never considered it\n> seriously because of the cost. The majority of my SQL was *slower* under\n> Oracle than SQLServer. 
I spent some time with it and did get good\n> performance, but it took a *lot* of work tuning to Oracle specific ways\n> of doing things.\n> \n> My Summary:\n> \n> SQLServer: A good all round database, fast, stable. Moderately expensive\n> to buy, cheap and easy to work with and program for (on Windows)\n> \n> PostgreSQL: A good all rounder, fast most of the time, stable. Free to\n> acquire, more expensive to work with and program for. Client drivers may\n> be problematic depending on platform and programming language. Needs\n> more work than SQLServer to get the best out of it. Improving all the\n> time and worth serious consideration.\n> \n> Oracle: A bit of a monstrosity. Can be very fast with a lot of work,\n> can't comment on stability but I guess it's pretty good. Very expensive\n> to acquire and work with. Well supported server and clients.\n> \n> Cheers,\n> Gary.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Mon, 10 Jan 2005 11:07:55 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 10, 2005 at 11:07:55AM -0500, Alex Turner wrote:\n> Neither Oracle nor MS-SQL have the range of stored procedure langauges\n> that Postgresql supports. \n\nThat is not true. Oracle uses PL/SQL for its stored procedures and\nM$-SQL does have a stored procedural language.\n\n\nRegards,\nYann - OCA ;-)\n", "msg_date": "Mon, 10 Jan 2005 18:33:07 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "On Mon, 10 Jan 2005 18:33:07 +0100\nYann Michel <[email protected]> wrote:\n\n> Hi,\n> \n> On Mon, Jan 10, 2005 at 11:07:55AM -0500, Alex Turner wrote:\n> > Neither Oracle nor MS-SQL have the range of stored procedure\n> > langauges that Postgresql supports. \n> \n> That is not true. Oracle uses PL/SQL for its stored procedures and\n> M$-SQL does have a stored procedural language.\n\n By \"range\" I believe he meant number of stored procedure languages. \n He wasn't saying they didn't have a stored procedure langauge or\n support, just that PostgreSQL had more languages to choose from. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Mon, 10 Jan 2005 11:42:00 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n\nAlex Turner\nNetEconomist\n\n\nOn Mon, 10 Jan 2005 11:42:00 -0600, Frank Wiles <[email protected]> wrote:\n> On Mon, 10 Jan 2005 18:33:07 +0100\n> Yann Michel <[email protected]> wrote:\n> \n> > Hi,\n> >\n> > On Mon, Jan 10, 2005 at 11:07:55AM -0500, Alex Turner wrote:\n> > > Neither Oracle nor MS-SQL have the range of stored procedure\n> > > langauges that Postgresql supports.\n> >\n> > That is not true. 
Oracle uses PL/SQL for its stored procedures and\n> > M$-SQL does have a stored procedural language.\n> \n> By \"range\" I believe he meant number of stored procedure languages.\n> He wasn't saying they didn't have a stored procedure langauge or\n> support, just that PostgreSQL had more languages to choose from.\n> \n> ---------------------------------\n> Frank Wiles <[email protected]>\n> http://www.wiles.org\n> ---------------------------------\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n", "msg_date": "Mon, 10 Jan 2005 12:46:01 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "On Mon, 10 Jan 2005 12:46:01 -0500, Alex Turner <[email protected]> wrote:\n\n> You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n\n\tCan you benefit from the luminous power of Visual Basic as a pl in MSSQL ?\n", "msg_date": "Mon, 10 Jan 2005 21:08:37 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Pierre-Frᅵdᅵric Caillaud wrote:\n> On Mon, 10 Jan 2005 12:46:01 -0500, Alex Turner <[email protected]> wrote:\n> \n>> You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n> \n> \n> Can you benefit from the luminous power of Visual Basic as a pl in \n> MSSQL ?\n> \n\nThe .NET Runtime will be a part of the next MS SQLServer engine. You \nwill be able to have C# as a pl in the database engine with the next \nversion of MSSQL. That certainly will be something to think about.\n\nCheers,\nGary.\n\n\n", "msg_date": "Mon, 10 Jan 2005 20:12:13 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Currently there are two java pl's available for postgres.\n\nDave\n\nGary Doades wrote:\n\n> Pierre-Frᅵdᅵric Caillaud wrote:\n>\n>> On Mon, 10 Jan 2005 12:46:01 -0500, Alex Turner <[email protected]> \n>> wrote:\n>>\n>>> You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n>>\n>>\n>>\n>> Can you benefit from the luminous power of Visual Basic as a pl \n>> in MSSQL ?\n>>\n>\n> The .NET Runtime will be a part of the next MS SQLServer engine. You \n> will be able to have C# as a pl in the database engine with the next \n> version of MSSQL. That certainly will be something to think about.\n>\n> Cheers,\n> Gary.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 10 Jan 2005 15:30:02 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "> The .NET Runtime will be a part of the next MS SQLServer engine. You \n> will be able to have C# as a pl in the database engine with the next \n> version of MSSQL. 
That certainly will be something to think about.\n\n\tAh, well, if it's C# (or even VB.NET) then it's serious !\n\tI thought postgres had pl/java ?\n", "msg_date": "Mon, 10 Jan 2005 21:34:04 +0100", "msg_from": "=?iso-8859-15?Q?Pierre-Fr=E9d=E9ric_Caillaud?=\n\t<[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "while you weren't looking, Gary Doades wrote:\n\n> The .NET Runtime will be a part of the next MS SQLServer engine.\n\nIt won't be long before someone writes a procedural language binding\nto PostgreSQL for Parrot [1]. That should offer us a handful or six\nmore languages that can be used, including BASIC, Ruby and Scheme,\nPerl (5 and 6), Python and TCL for more or less free, and ... wait for\nit, BrainF***.\n\nIIRC, people have talked about porting C# to Parrot, as well.\n\n/rls\n\n[1] The new VM for Perl 6, &c: http://www.parrotcode.org\n\n-- \n:wq\n", "msg_date": "Mon, 10 Jan 2005 14:34:31 -0600", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Rosser Schwarz wrote:\n> while you weren't looking, Gary Doades wrote:\n> \n> \n>>The .NET Runtime will be a part of the next MS SQLServer engine.\n> \n> \n> It won't be long before someone writes a procedural language binding\n> to PostgreSQL for Parrot [1]. That should offer us a handful or six\n> more languages that can be used, including BASIC, Ruby and Scheme,\n> Perl (5 and 6), Python and TCL for more or less free, and ... wait for\n> it, BrainF***.\n> \n> IIRC, people have talked about porting C# to Parrot, as well.\n> \n\nOr perhaps get the mono engine in there somewhere to pick up another \ndozen or so languages supported by .NET and mono......\n\nCheers,\nGary.\n", "msg_date": "Mon, 10 Jan 2005 20:44:47 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "I'm curious, why do you think that's serious ? What do you really expect \nto do in the stored procedure ? Anything of consequence will seriously \ndegrade performance if you select it in say a million rows.\n\nPierre-Frᅵdᅵric Caillaud wrote:\n\n>> The .NET Runtime will be a part of the next MS SQLServer engine. You \n>> will be able to have C# as a pl in the database engine with the next \n>> version of MSSQL. That certainly will be something to think about.\n>\n>\n> Ah, well, if it's C# (or even VB.NET) then it's serious !\n> I thought postgres had pl/java ?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 10 Jan 2005 16:31:40 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Dave Cramer wrote:\n> I'm curious, why do you think that's serious ? What do you really expect \n> to do in the stored procedure ? Anything of consequence will seriously \n> degrade performance if you select it in say a million rows.\n> \n\nI'm not sure what you mean by \"select it in a million rows\". I would \nexpect to write a procedure within the database engine to select a \nmillion rows, process them and return the result to the client. 
Very \nefficient.\n\nCheers,\nGary.\n", "msg_date": "Mon, 10 Jan 2005 21:55:57 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "On Mon, Jan 10, 2005 at 12:46:01PM -0500, Alex Turner wrote:\n> You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n \nOn the other hand, PL/SQL is incredibly powerful, especially combined\nwith all the tools/utilities that come with Oracle. I think you'd be\nhard-pressed to find too many real-world examples where you could do\nsomething with a PostgreSQL procedural language that you couldn't do\nwith PL/SQL.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 10 Jan 2005 17:29:52 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Oops! [email protected] (Pierre-Frᅵdᅵric Caillaud) was seen spray-painting on a wall:\n>> The .NET Runtime will be a part of the next MS SQLServer engine. You\n>> will be able to have C# as a pl in the database engine with the next\n>> version of MSSQL. That certainly will be something to think about.\n>\n> \tAh, well, if it's C# (or even VB.NET) then it's serious !\n> \tI thought postgres had pl/java ?\n\nSomeone's working on pl/Mono...\n\n <http://gborg.postgresql.org/project/plmono/projdisplay.php>\n-- \n\"cbbrowne\",\"@\",\"gmail.com\"\nhttp://cbbrowne.com/info/slony.html\n\"... the open research model is justified. There is a passage in the\nBible (John 8:32, and on a plaque in CIA HQ), \"And ye shall know the\ntruth, and the truth shall set ye free.\" -- Dave Dittrich \n", "msg_date": "Mon, 10 Jan 2005 18:57:25 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Ok, so one use case is to select a large number of rows and do some \nnon-trivial operation on them.\nI can see where getting the rows inside the server process ( ie some \nprocedural language ) thereby reducing the round trip overhead would be \nbeneficial. However how do you deal with the lack of control ? For \ninstance what happens if you run out of memory while doing this ? I'm \nnot sure about other DB'S but if you crash the procedural language \ninside postgres you will bring the server down.\n\nIt would seem to me that any non-trivial operation would be better \nhandled outside the server process, even if it costs you the round trip.\n\nDave\n\n\n\nGary Doades wrote:\n\n> Dave Cramer wrote:\n>\n>> I'm curious, why do you think that's serious ? What do you really \n>> expect to do in the stored procedure ? Anything of consequence will \n>> seriously degrade performance if you select it in say a million rows.\n>>\n>\n> I'm not sure what you mean by \"select it in a million rows\". I would \n> expect to write a procedure within the database engine to select a \n> million rows, process them and return the result to the client. 
Very \n> efficient.\n>\n> Cheers,\n> Gary.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Mon, 10 Jan 2005 19:04:37 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "\n> I'm curious, why do you think that's serious ? What do you really expect\n\n\tSimply because I don't like VB non .NET, but C# is a much much better \nlanguage, and even VB.NET is decent.\n\n> to do in the stored procedure ? Anything of consequence will seriously \n> degrade performance if you select it in say a million rows.\n\n\tWell, if such a thing needed to be done, like processing a lot of rows to \nyield a small result set, it certainly should be done inside the server, \nbut as another poster said, being really careful about memory usage.\n\n\tBut, that was not my original idea ; I find that even for small functions \nplsql is a bit ugly compared to the usual suspects like Python and others \n; unfortunately I think there is overhead in converting the native \npostgres datatype to their other language counterparts, which is why I did \nnot try them (yet).\n", "msg_date": "Tue, 11 Jan 2005 02:24:47 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Dave Cramer wrote:\n> Ok, so one use case is to select a large number of rows and do some \n> non-trivial operation on them.\n> I can see where getting the rows inside the server process ( ie some \n> procedural language ) thereby reducing the round trip overhead would be \n> beneficial. However how do you deal with the lack of control ? For \n> instance what happens if you run out of memory while doing this ? I'm \n> not sure about other DB'S but if you crash the procedural language \n> inside postgres you will bring the server down.\n> \n> It would seem to me that any non-trivial operation would be better \n> handled outside the server process, even if it costs you the round trip.\n\nSince a .NET language is operating effectively inside a VM it is pretty \nmuch impossible to bring down the server that way. Only a bug in the \n.NET runtime itself will do that. The C# try/catch/finally with .NET \nglobal execption last chance handlers will ensure the server and your \ncode is well protected.\n\nCheers,\nGary.\n", "msg_date": "Tue, 11 Jan 2005 07:37:21 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "I understand that but I have seen VM's crash.\n\nThis does bring up another point. Since postgresql is not threaded a \n.NET pl would require a separate VM for each connection (unless you can \nshare the vm ?). One of the java pl's (pl-j) for postgres has dealt \nwith this issue.\nFor a hundred connections that's a hundred .NET vm's or java vm's.\n\nIs the .NET VM shareable ?\n\nDave\n\nGary Doades wrote:\n\n> Dave Cramer wrote:\n>\n>> Ok, so one use case is to select a large number of rows and do some \n>> non-trivial operation on them.\n>> I can see where getting the rows inside the server process ( ie some \n>> procedural language ) thereby reducing the round trip overhead would \n>> be beneficial. However how do you deal with the lack of control ? 
For \n>> instance what happens if you run out of memory while doing this ? I'm \n>> not sure about other DB'S but if you crash the procedural language \n>> inside postgres you will bring the server down.\n>>\n>> It would seem to me that any non-trivial operation would be better \n>> handled outside the server process, even if it costs you the round trip.\n>\n>\n> Since a .NET language is operating effectively inside a VM it is \n> pretty much impossible to bring down the server that way. Only a bug \n> in the .NET runtime itself will do that. The C# try/catch/finally with \n> .NET global execption last chance handlers will ensure the server and \n> your code is well protected.\n>\n> Cheers,\n> Gary.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Tue, 11 Jan 2005 08:41:42 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Connect to an external data system using a socket and propagate data\nchanges using a trigger... I've had to do this, and it sucks to be\nstuck in Oracle!\n\nAlex Turner\nNetEconomist\n\n\nOn Mon, 10 Jan 2005 17:29:52 -0600, Jim C. Nasby <[email protected]> wrote:\n> On Mon, Jan 10, 2005 at 12:46:01PM -0500, Alex Turner wrote:\n> > You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n> \n> On the other hand, PL/SQL is incredibly powerful, especially combined\n> with all the tools/utilities that come with Oracle. I think you'd be\n> hard-pressed to find too many real-world examples where you could do\n> something with a PostgreSQL procedural language that you couldn't do\n> with PL/SQL.\n> --\n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n> \n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n", "msg_date": "Tue, 11 Jan 2005 09:23:38 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Dave Cramer wrote:\n> I understand that but I have seen VM's crash.\n> \n> This does bring up another point. Since postgresql is not threaded a \n> .NET pl would require a separate VM for each connection (unless you can \n> share the vm ?). One of the java pl's (pl-j) for postgres has dealt \n> with this issue.\n> For a hundred connections that's a hundred .NET vm's or java vm's.\n> \n> Is the .NET VM shareable ?\n> \nIn Windows, most certainly. Not sure about mono.\n\nCheers,\nGary.\n", "msg_date": "Tue, 11 Jan 2005 18:02:21 +0000", "msg_from": "Gary Doades <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n \n \n> Oracle is not that expensive - standard one can be got for $149/user\n> or $5k/CPU, and for most applications, the features in standard one\n> are fine.\n \nDon't forget your support contract cost, as well as licenses for each\nof your servers: development, testing, QA, etc.\n \nIs it really as \"cheap\" as 5K? 
I've heard that for any fairly modern\nsystem, it's much more, but that may be wrong.\n \n- --\nGreg Sabino Mullane [email protected]\nPGP Key: 0x14964AC8 200501122029\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n \n-----BEGIN PGP SIGNATURE-----\n \niD8DBQFB5c8gvJuQZxSWSsgRAhRzAKDeWZ9LE2etLspiAiFCG8OeeEGoHwCgoLhb\ncrxreFQ2LNVjAp24beDMK5g=\n=C59m\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Thu, 13 Jan 2005 01:28:32 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Greg Sabino Mullane wrote:\n> Don't forget your support contract cost, as well as licenses for each\n> of your servers: development, testing, QA, etc.\n> \n> Is it really as \"cheap\" as 5K? I've heard that for any fairly modern\n> system, it's much more, but that may be wrong.\n> \n\nSort of -- see:\nhttp://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=15105\n \"It is available on single server systems supporting up to a maximum\n of 2 CPUs\"\n\nAlso note that most industrial strength features (like table \npartitioning, RAC, OLAP, Enterprise Manager plugins, etc, etc) are high \npriced options (mostly $10K to $20K per CPU) and they can only be used \nwith the Enterprise edition (which is $40K/CPU *not* $2.5K/CPU).\nhttp://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=10103\n\nAnd you are correct, they expect to be paid for each dev, test, and QA \nmachine too.\n\nThe $5K edition is just there to get you hooked ;-) By the time you add \nup what you really want/need, figure you'll spend a couple of orders of \nmagnatude higher, and then > 20% per year for ongoing \nmaintenance/upgrades/support.\n\nJoe\n", "msg_date": "Wed, 12 Jan 2005 22:51:24 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "This is somewhat correct, and somewhat unfair - bear in mind that\nPostgresql doesn't have the equivalent features of Oracle enterprise\nedition including RAC and Enterprise Manager.\n\nYou can use Oracle Personal edition for development, or pay a per\nhead cost of $149/user for your dev group for standard one, which if\nyou have a small team isn't really that much.\n\nIf you want commercial support for Postgresql, you must also pay for that too.\n\nIt's $5k/CPU for standard one edition, so $10k for a dual CPU box.\n\nUpgrades are free - once you have an Oracle license it is pretty much\ngood for any version on your platform with your number of CPUs.\n\nI'm not advocating that people switch to Oracle at all, It's still\nmuch more expensive than Postgresql, and for most small and medium\napplications Postgresql is much easier to manage and maintain. I\nwould just like to make sure people get their facts straight. I\nworked for a company that selected MS SQL Server because it was\n'cheaper' than Oracle, when infact with the correct Oracle pricing,\nOracle was cheaper, and had superior features. I would have prefered\nthat they use Postgresql, which for the project in question would have\nbeen more appropriate and cost much less in hardware and software\nrequirements, but they had to have 'Industry Standard'. 
Oracle ended\nup costing <$10k with licenses at $149 ea for 25 users, and the\nsupport contract wasn't that much of a bear - I can't remember exactly\nhow much, I think it was around $1800/yr.\n\n\nAlex Turner\nNetEconomist\n--\nRemember, what most consider 'convential wisdom' is neither wise nor\nthe convention. Don't speculate, educate.\n\nOn Wed, 12 Jan 2005 22:51:24 -0800, Joe Conway <[email protected]> wrote:\n> Greg Sabino Mullane wrote:\n> > Don't forget your support contract cost, as well as licenses for each\n> > of your servers: development, testing, QA, etc.\n> >\n> > Is it really as \"cheap\" as 5K? I've heard that for any fairly modern\n> > system, it's much more, but that may be wrong.\n> >\n> \n> Sort of -- see:\n> http://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=15105\n> \"It is available on single server systems supporting up to a maximum\n> of 2 CPUs\"\n> \n> Also note that most industrial strength features (like table\n> partitioning, RAC, OLAP, Enterprise Manager plugins, etc, etc) are high\n> priced options (mostly $10K to $20K per CPU) and they can only be used\n> with the Enterprise edition (which is $40K/CPU *not* $2.5K/CPU).\n> http://oraclestore.oracle.com/OA_HTML/ibeCCtpSctDspRte.jsp?section=10103\n> \n> And you are correct, they expect to be paid for each dev, test, and QA\n> machine too.\n> \n> The $5K edition is just there to get you hooked ;-) By the time you add\n> up what you really want/need, figure you'll spend a couple of orders of\n> magnatude higher, and then > 20% per year for ongoing\n> maintenance/upgrades/support.\n> \n> Joe\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n", "msg_date": "Thu, 13 Jan 2005 09:35:29 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Alex Turner wrote:\n> I'm not advocating that people switch to Oracle at all, It's still\n> much more expensive than Postgresql, and for most small and medium\n> applications Postgresql is much easier to manage and maintain. I\n> would just like to make sure people get their facts straight. I\n> worked for a company that selected MS SQL Server because it was\n> 'cheaper' than Oracle, when infact with the correct Oracle pricing,\n> Oracle was cheaper, and had superior features. I would have prefered\n> that they use Postgresql, which for the project in question would have\n> been more appropriate and cost much less in hardware and software\n> requirements, but they had to have 'Industry Standard'. Oracle ended\n> up costing <$10k with licenses at $149 ea for 25 users, and the\n> support contract wasn't that much of a bear - I can't remember exactly\n> how much, I think it was around $1800/yr.\n\nMy facts were straight, and they come from firsthand experience. The \npoint is, it is easy to get trapped into thinking to yourself, \"great, I \ncan get a dual CPU oracle server for ~$10K, that's not too bad...\". But \nthen later you figure out you really need table partitioning or RAC, and \nsuddenly you have to jump directly to multiple 6 figures. The entry \nlevel Oracle pricing is mainly a marketing gimmick -- it is intended to \nget you hooked.\n\nAlso note that the per named user license scheme is subject to per CPU \nminimums that guarantee you'll never spend less than half the per CPU \nprice. 
Oracle's licensing is so complex that there are businesses out \nthere that subsist solely on helping companies figure it out to save \nmoney, and they take a cut of the savings. Oracle's own account reps had \na hard time answering this question -- does a hyperthreaded Intel CPU \ncount as 1 or 2 CPUs from a licensing standpoint? We were eventually \ntold 1, but that the decision was \"subject to change in the future\".\n\nJoe\n", "msg_date": "Thu, 13 Jan 2005 06:56:52 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Joe,\nI appreciate your information, but it's not valid. Most people don't\nneed RAC or table partitioning. Many of the features in Oracle EE are\njust not available in Postgresql at all, and many aren't available in\nany version of SQL Server (table partitioning, bitmap indexes and\nothers). If you want all the wiz-bang features, you have to pay the\nwiz-bang price. Just because Oracle reps are a little clueless\nsometimes doesn't mean that the product pricing sucks.\nThe minimum user requirement for standard one is 5 users. 5*149=$745,\nmuch less than half the price of a dual or single CPU config.\n\nI'm sorry that you had a bad experience with Oracle, but Oracle is a\nfine product, that is available for not alot of $$ if you are willing\nto use a bit of elbow grease to learn how it works and don't need\nenterprise features, which many other database product simply don't\nhave, or work very poorly.\n\nAlex Turner\nNetEconomist\n\n\nOn Thu, 13 Jan 2005 06:56:52 -0800, Joe Conway <[email protected]> wrote:\n> Alex Turner wrote:\n> > I'm not advocating that people switch to Oracle at all, It's still\n> > much more expensive than Postgresql, and for most small and medium\n> > applications Postgresql is much easier to manage and maintain. I\n> > would just like to make sure people get their facts straight. I\n> > worked for a company that selected MS SQL Server because it was\n> > 'cheaper' than Oracle, when infact with the correct Oracle pricing,\n> > Oracle was cheaper, and had superior features. I would have prefered\n> > that they use Postgresql, which for the project in question would have\n> > been more appropriate and cost much less in hardware and software\n> > requirements, but they had to have 'Industry Standard'. Oracle ended\n> > up costing <$10k with licenses at $149 ea for 25 users, and the\n> > support contract wasn't that much of a bear - I can't remember exactly\n> > how much, I think it was around $1800/yr.\n> \n> My facts were straight, and they come from firsthand experience. The\n> point is, it is easy to get trapped into thinking to yourself, \"great, I\n> can get a dual CPU oracle server for ~$10K, that's not too bad...\". But\n> then later you figure out you really need table partitioning or RAC, and\n> suddenly you have to jump directly to multiple 6 figures. The entry\n> level Oracle pricing is mainly a marketing gimmick -- it is intended to\n> get you hooked.\n> \n> Also note that the per named user license scheme is subject to per CPU\n> minimums that guarantee you'll never spend less than half the per CPU\n> price. Oracle's licensing is so complex that there are businesses out\n> there that subsist solely on helping companies figure it out to save\n> money, and they take a cut of the savings. Oracle's own account reps had\n> a hard time answering this question -- does a hyperthreaded Intel CPU\n> count as 1 or 2 CPUs from a licensing standpoint? 
We were eventually\n> told 1, but that the decision was \"subject to change in the future\".\n> \n> Joe\n> \n>\n", "msg_date": "Thu, 13 Jan 2005 13:43:30 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Alex Turner wrote:\n> I appreciate your information, but it's not valid. Most people don't\n> need RAC or table partitioning.\n\n From a small company perspective, maybe, but not in the least invalid \nfor larger companies.\n\n> Many of the features in Oracle EE are just not available in Postgresql at all, and many aren't available in\n> any version of SQL Server (table partitioning, bitmap indexes and\n> others).\n\nI never claimed otherwise. I said the low end product gets you hooked. \nOnce you're hooked, you'll start to wish for all the wiz-bang features \n-- after all, that's why you picked Oracle in the first place.\n\n> Just because Oracle reps are a little clueless\n> sometimes doesn't mean that the product pricing sucks.\n> The minimum user requirement for standard one is 5 users. 5*149=$745,\n> much less than half the price of a dual or single CPU config.\n\nAnd what happens once you need a quad server?\n\n> I'm sorry that you had a bad experience with Oracle, but Oracle is a\n> fine product, that is available for not alot of $$ if you are willing\n> to use a bit of elbow grease to learn how it works and don't need\n> enterprise features, which many other database product simply don't\n> have, or work very poorly.\n\nI never said I had a \"bad experience\" with Oracle. I pointed out the \ngotchas. We have several large Oracle boxes running, several MSSQL, and \nseveral Postgres -- they all have their strengths and weaknesses.\n\nNuff said -- this thread is way off topic now...\n\nJoe\n", "msg_date": "Thu, 13 Jan 2005 12:04:46 -0800", "msg_from": "Joe Conway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "\"[email protected] (Frank Wiles)\" wrote in pgsql.performance:\n\n> On Thu, 6 Jan 2005 19:01:38 +0000 (UTC)\n> Randolf Richardson <[email protected]> wrote:\n> \n>> I'm looking for recent performance statistics on PostgreSQL vs.\n>> Oracle \n>> vs. Microsoft SQL Server. Recently someone has been trying to\n>> convince my client to switch from SyBASE to Microsoft SQL Server (they\n>> originally wanted to go with Oracle but have since fallen in love with\n>> Microsoft). All this time I've been recommending PostgreSQL for cost\n>> and stability (my own testing has shown it to be better at handling\n>> abnormal shutdowns and using fewer system resources) in addition to\n>> true cross-platform compatibility.\n>> \n>> If I can show my client some statistics that PostgreSQL\n>> outperforms \n>> these (I'm more concerned about it beating Oracle because I know that \n>> Microsoft's stuff is always slower, but I need the information anyway\n>> to protect my client from falling victim to a 'sales job'), then\n>> PostgreSQL will be the solution of choice as the client has always\n>> believed that they need a high-performance solution.\n>> \n>> I've already convinced them on the usual price, cross-platform \n>> compatibility, open source, long history, etc. 
points, and I've been\n>> assured that if the performance is the same or better than Oracle's\n>> and Microsoft's solutions that PostgreSQL is what they'll choose.\n> \n> While this doesn't exactly answer your question, I use this little\n> tidbit of information when \"selling\" people on PostgreSQL. PostgreSQL\n> was chosen over Oracle as the database to handle all of the .org TLDs\n> information. While I don't believe the company that won was chosen \n> solely because they used PostgreSQL vs Oracle ( vs anything else ),\n> it does go to show that PostgreSQL can be used in a large scale\n> environment. \n\n \tDo you have a link for that information? I've told a few people about \nthis and one PostgreSQL advocate (thanks to me -- they were going to be a \nMicrosoft shop before that) is asking.\n\n> Another tidbit you can use in this particular case: I was involved\n> in moving www.ljworld.com, www.lawrence.com, and www.kusports.com from\n> a Sybase backend to a PostgreSQL backend back in 2000-2001. We got\n> roughly a 200% speed improvement at that time and PostgreSQL has only\n> improved since then. I would be more than happy to elaborate on this\n> migration off list if you would like. kusports.com gets a TON of \n> hits especially during \"March Madness\" and PostgreSQL has never been\n> an issue in the performance of the site. \n\n \tSyBase is better suited to the small projects in my opinion. I have a \nnumber of customers in the legal industry who have to use it because the \nproducts they use have a proprietary requirement for it. Fortunately it's \nquite stable, and uses very little in the way of system resources, but \nthere is a license fee -- I'm not complaining at all, it has always been \nworking well for my clients.\n", "msg_date": "Thu, 20 Jan 2005 16:49:55 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "\"[email protected]\" wrote in pgsql.performance:\n\n> Quoting Randolf Richardson <[email protected]>:\n> \n>> I'm looking for recent performance statistics on PostgreSQL\n>> vs. Oracle \n>> \n>> vs. Microsoft SQL Server. Recently someone has been trying to convince\n>> my \n> \n> I don't know anything about your customer's requirements other than that\n> they have a DB currently and somebody(ies) is(are) trying to get them to\n> switch to another.\n> \n> I don't think you'll find meaningful numbers unless you do your own\n> benchmarks. \n> \n> DB performance is very largely determined by how the application\n> functions, \n> hardware, OS and the DBA's familiarity with the platform. I would\n> suspect that for any given workload on relatively similar hardware that\n> just about any of the DB's you mention would perform similarly if tuned\n> appropriately. \n> \n>> client to switch from SyBASE to Microsoft SQL Server (they originally\n>> wanted \n>> \n>> to go with Oracle but have since fallen in love with Microsoft). All\n>> this time I've been recommending PostgreSQL for cost and stability (my\n>> own testing \n>> \n>> has shown it to be better at handling abnormal shutdowns and using\n>> fewer system resources) in addition to true cross-platform\n>> compatibility. \n> \n> Right for the customer? How about \"Don't fix it if it ain't broke\"? \n> Replacing a DB backend isn't always trivial (understatement). I suppose\n> if their application is very simple and uses few if any proprietary\n> features of Sybase then changing the DB would be simple. 
That depends\n> heavily on the application. In general, though, you probably shouldn't\n> rip and replace DB platforms unless there's a very good strategic\n> reason. \n> \n> I don't know about MSSQL, but I know that, if managed properly, Sybase\n> and Oracle can be pretty rock-solid and high performing. If *you* have\n> found FooDB to be the most stable and highest performing, then that\n> probably means that FooDB is the one you're most familiar with rather\n> than FooDB being the best in all circumstances. PostgreSQL is great. I\n> love it. In the right hands and under the right circumstances, it is\n> the best DB. So is Sybase. And Oracle. And MSSQL.\n\n \tThat's an objective answer. Unfortunately the issue I'm stuck with is \na Microsoft-crazy sales droid who's arguing that \"MS-SQL is so easy to \nmanage, like all Microsoft products, that a novice can make it outperform \nother high-end systems like Oracle even when tuned by an expert.\" This \ncrap makes me want to throw up, but in order to keep the client I'm doing \nmy best to hold it down (I bet many of you are shaking your heads).\n\n \tThe client is leaning away from the sales droid, however, and this is \npartly due to the help I've recieved here in these newsgroups -- thanks \neveryone.\n\n>> If I can show my client some statistics that PostgreSQL\n>> outperforms \n>> these (I'm more concerned about it beating Oracle because I know that \n>> Microsoft's stuff is always slower, but I need the information anyway\n>> to protect my client from falling victim to a 'sales job'), then\n>> PostgreSQL will \n>> \n>> be the solution of choice as the client has always believed that they\n>> need a \n>> \n>> high-performance solution.\n> \n> Unless there's a really compelling reason to switch, optimizing what\n> they already have is probably the best thing for them. They've already\n> paid for it. \n> They've already written their own application and have some familiarity\n> with \n> managing the DB. According to Sybase, Sybase is the fastest thing\n> going. :-) Which is probably pretty close to the truth if the\n> application and DB are tuned appropriately.\n\n \tI agree with you completely. However, the client's looking at getting \nthe application completely re-programmed. The current developer didn't \nplan it properly, and has been slapping code together as if it's a bowl of \nspaghetti. In short, there are many problems with the existing system, and \nI'm talking about proper testing procedures that begin even at the design \nstage (before any coding begins).\n\n>> I've already convinced them on the usual price, cross-platform\n>> compatibility, open source, long history, etc. points, and I've been\n>> assured that if the performance is the same or better than Oracle's and\n>> Microsoft's solutions that PostgreSQL is what they'll choose.\n> \n> Are you telling me that they're willing to pay $40K per CPU for Oracle\n> if it performs 1% better than PostgreSQL, which is $0? 
Not to mention\n> throw away Sybase, which is a highly scalable platform in and of itself.\n> \n> The best DB platform is what they currently have, regardless of what\n> they have, unless there is a very compelling reason to switch.\n[sNip]\n\n \tHave you heard the saying \"Nobody ever got fired for picking IBM?\" It \nis one of those situations where if they don't spend the money in their \nbudget, then they lose it the next time around (no suggestions are needed \non this issue, but thanks anyway).\n", "msg_date": "Thu, 20 Jan 2005 17:00:51 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Randolf Richardson wrote:\n\n>> While this doesn't exactly answer your question, I use this little\n>> tidbit of information when \"selling\" people on PostgreSQL. PostgreSQL\n>> was chosen over Oracle as the database to handle all of the .org TLDs\n>> information. ...\n> \n> \tDo you have a link for that information? I've told a few people about \n> this and one PostgreSQL advocate (thanks to me -- they were going to be a \n> Microsoft shop before that) is asking.\n\nOf course you could read their application when they were competing\nwith a bunch of other companies using databases from different vendors.\n\nI believe this is the link to their response to the database\n questions...\n \nhttp://www.icann.org/tlds/org/questions-to-applicants-13.htm#Response13TheInternetSocietyISOC\n", "msg_date": "Thu, 20 Jan 2005 10:03:17 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "\"Ron Mayer <[email protected]>\" wrote in pgsql.performance:\n> Randolf Richardson wrote:\n> \n>>> While this doesn't exactly answer your question, I use this little\n>>> tidbit of information when \"selling\" people on PostgreSQL. \n>>> PostgreSQL was chosen over Oracle as the database to handle all of\n>>> the .org TLDs information. ...\n>> \n>> Do you have a link for that information? I've told a few\n>> people about \n>> this and one PostgreSQL advocate (thanks to me -- they were going to be\n>> a Microsoft shop before that) is asking.\n> \n> Of course you could read their application when they were competing\n> with a bunch of other companies using databases from different vendors.\n> \n> I believe this is the link to their response to the database\n> questions...\n> \n> http://www.icann.org/tlds/org/questions-to-applicants-13.htm#Response13Th\n> eInternetSocietyISOC \n\n \tThat's perfect. Thanks!\n", "msg_date": "Thu, 20 Jan 2005 21:45:39 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "I sometimes also think it's fun to point out that Postgresql\nbigger companies supporting it's software - like this one:\n\nhttp://www.fastware.com.au/docs/FujitsuSupportedPostgreSQLWhitePaper.pdf\n\nwith $43 billion revenue -- instead of those little companies\nlike Mysql AB or Oracle.\n\n :)\n", "msg_date": "Thu, 20 Jan 2005 15:39:46 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. 
Microsoft" }, { "msg_contents": "Randolf Richardson <[email protected]> writes:\n> \"Ron Mayer <[email protected]>\" wrote in pgsql.performance:\n>> Randolf Richardson wrote:\n>>> While this doesn't exactly answer your question, I use this little\n>>> tidbit of information when \"selling\" people on PostgreSQL. \n>>> PostgreSQL was chosen over Oracle as the database to handle all of\n>>> the .org TLDs information. ...\n> \n> Do you have a link for that information?\n>> \n>> http://www.icann.org/tlds/org/questions-to-applicants-13.htm#Response13TheInternetSocietyISOC \n\n> \tThat's perfect. Thanks!\n\nThis is rather old news, actually, as Afilias (the outfit actually\nrunning the registry for ISOC) has been running the .info TLD on\nPostgres since 2001. They have the contract for the new .mobi TLD.\nAnd they are currently one of not many bidders to take over the .net\nregistry when Verisign's contract expires this June. Now *that* will\nbe a hard TLD to ignore ;-)\n\nI am actually sitting in a Toronto hotel room right now because I'm\nattending a meeting sponsored by Afilias for the purpose of initial\ndesign of the Slony-II replication system for Postgres (see Slony-I).\nAccording to the Afilias guys I've been having dinners with, they\ngot absolutely zero flak about their use of Postgres in connection\nwith the .mobi bid, after having endured very substantial bombardment\n(cf above link) --- and a concerted disinformation campaign by Oracle\n--- in connection with the .org and .info bids. As far as the ICANN\ncommunity is concerned, this is established technology.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jan 2005 02:00:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft " }, { "msg_contents": "\"Ron Mayer <[email protected]>\" wrote in pgsql.performance:\n\n> I sometimes also think it's fun to point out that Postgresql\n> bigger companies supporting it's software - like this one:\n> \n> http://www.fastware.com.au/docs/FujitsuSupportedPostgreSQLWhitePaper.pdf\n> \n> with $43 billion revenue -- instead of those little companies\n> like Mysql AB or Oracle.\n> \n> :)\n\n \tHeheh. That is a good point indeed. When the illogical \"everyone else \nis doing it\" argument comes along (as typically does whenever someone is \npushing for a Microsoft solution), then this will be very helpful.\n", "msg_date": "Fri, 21 Jan 2005 16:30:56 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "\"[email protected] (Tom Lane)\" wrote in pgsql.performance:\n> Randolf Richardson <[email protected]> writes:\n>> \"Ron Mayer <[email protected]>\" wrote in pgsql.performance:\n>>> Randolf Richardson wrote:\n>>> \n>>>> While this doesn't exactly answer your question, I use this little\n>>>> tidbit of information when \"selling\" people on PostgreSQL. \n>>>> PostgreSQL was chosen over Oracle as the database to handle all of\n>>>> the .org TLDs information. ...\n>> \n>> Do you have a link for that information?\n>> \n>>> http://www.icann.org/tlds/org/questions-to-applicants-13.htm#Response13\n>>> TheInternetSocietyISOC \n>> \n>> That's perfect. Thanks!\n> \n> This is rather old news, actually, as Afilias (the outfit actually\n> running the registry for ISOC) has been running the .info TLD on\n> Postgres since 2001. 
They have the contract for the new .mobi TLD.\n\n \tPerhaps it's old, but it's new to me because I don't follow that area \nof the internet very closely.\n\n> And they are currently one of not many bidders to take over the .net\n> registry when Verisign's contract expires this June. Now *that* will\n> be a hard TLD to ignore ;-)\n\n \tYes, indeed, that will be. My feeling is that Network Solutions \nactually manages the .NET and .COM registries far better than anyone else \ndoes, and when .ORG was switched away I didn't like the lack of flexibility \nthat I have always enjoyed with .NET and .COM -- the problem is that I have \nto create a separate account and password for each .ORG internet domain \nname now and can't just use one master account and password for all of \nthem, and if the same folks are going to be running .NET then I'm going to \nwind up having more management to do for that one as well (and I'm not \ntalking about just a mere handlful of internet domain names either).\n\n> I am actually sitting in a Toronto hotel room right now because I'm\n> attending a meeting sponsored by Afilias for the purpose of initial\n> design of the Slony-II replication system for Postgres (see Slony-I).\n> According to the Afilias guys I've been having dinners with, they\n> got absolutely zero flak about their use of Postgres in connection\n> with the .mobi bid, after having endured very substantial bombardment\n> (cf above link) --- and a concerted disinformation campaign by Oracle\n> --- in connection with the .org and .info bids. As far as the ICANN\n> community is concerned, this is established technology.\n\n \tPerhaps you could mention this problem I've noticed to them if you \nhappen to be talking with them. It's obviously not a difficult problem to \nsolve when it comes to good database management and would really make life \na lot easier for those of us who are responsible for large numbers of \ninternet domain names.\n", "msg_date": "Fri, 21 Jan 2005 16:35:38 +0000 (UTC)", "msg_from": "Randolf Richardson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Randolf Richardson wrote:\n> > The best DB platform is what they currently have, regardless of what\n> > they have, unless there is a very compelling reason to switch.\n> [sNip]\n> \n> \tHave you heard the saying \"Nobody ever got fired for picking IBM?\" It \n> is one of those situations where if they don't spend the money in their \n> budget, then they lose it the next time around (no suggestions are needed \n> on this issue, but thanks anyway).\n\nIf that's their situation, then they're almost certainly better off\nthrowing the additional money at beefier hardware than at a more\nexpensive database engine, because the amount of incremental\nperformance they'll get is almost certainly going to be greater with\nbetter hardware than with a different database engine. In particular,\nthey're probably best off throwing the money at the highest\nperformance disk subsystem they can afford. But that, like anything\nelse, depends on what they're going to be doing. If it's likely to be\na small database with lots of processor-intensive analysis, then a\nbeefier CPU setup would be in order. 
But in my (limited) experience,\nthe disk subsystem is likely to be a bottleneck long before the CPU is\nin the general case, especially these days as disk subsystems haven't\nimproved in performance nearly as quickly as CPUs have.\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 21 Jan 2005 15:23:30 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Randolf Richardson <[email protected]> writes:\n> ... the problem is that I have \n> to create a separate account and password for each .ORG internet domain \n> name now and can't just use one master account and password for all of \n> them,\n\nThis is a registrar issue; if you don't like the user-interface your\nregistrar provides, choose another one. It's got nothing to do with\nthe backend registry, which is merely a database of the publicly visible\n(WHOIS) info about your domain.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2005 12:56:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft " }, { "msg_contents": "On Fri, Jan 21, 2005 at 04:35:38PM +0000, Randolf Richardson wrote:\n\n> \tYes, indeed, that will be. My feeling is that Network Solutions \n> actually manages the .NET and .COM registries far better than anyone else \n> does, and when .ORG was switched away I didn't like the lack of flexibility \n> that I have always enjoyed with .NET and .COM -- the problem is that I have \n> to create a separate account and password for each .ORG internet domain \n> name now and can't just use one master account and password for all of \n> them, and if the same folks are going to be running .NET then I'm going to \n> wind up having more management to do for that one as well (and I'm not \n> talking about just a mere handlful of internet domain names either).\n\nWildly off-topic, but that's registrar driven, not registry driven.\nI have a range of domains (.com, .net, .org and others) all accessed\nfrom a single login through a single registrar. You need to use a\nbetter registrar.\n\nAs a bit of obPostgresql, though... While the registry for .org is\nrun on Postgresql, the actual DNS is run on Oracle. That choice was\ndriven by the availability of multi-master replication.\n\nLike many of the cases where the problem looks like it needs\nmulti-master replication, though, it doesn't really need it. A single\nmaster at any one time, but with the ability to dub any of the slaves\na new master at any time would be adequate. If that were available for\nPostgresql I'd choose it over Oracle were I doing a big distributed\ndatabase backed system again.\n\nCheers,\n Steve\n", "msg_date": "Tue, 25 Jan 2005 09:59:20 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "Centuries ago, Nostradamus foresaw when [email protected] (Steve Atkins) would write:\n> As a bit of obPostgresql, though... While the registry for .org is\n> run on Postgresql, the actual DNS is run on Oracle. That choice was\n> driven by the availability of multi-master replication.\n>\n> Like many of the cases where the problem looks like it needs\n> multi-master replication, though, it doesn't really need it. A\n> single master at any one time, but with the ability to dub any of\n> the slaves a new master at any time would be adequate. 
If that were\n> available for Postgresql I'd choose it over Oracle were I doing a\n> big distributed database backed system again.\n\nWell, this is something that actually _IS_ available for PostgreSQL in\nthe form of Slony-I. Between \"MOVE SET\" (that does controlled\ntakeover) and \"FAILOVER\" (that recovers from the situation where a\n'master' node craters), this has indeed become available.\n\nAutomating activation of the failover process isn't quite there yet,\nthough that's mostly a matter that the methodology would involve\nconsiderable tuning of recovery scripts to system behaviour.\n-- \nselect 'cbbrowne' || '@' || 'ntlug.org';\nhttp://cbbrowne.com/info/slony.html\nPay no attention to the PDP-11 behind the front panel.\n-- PGS, in reference to OZ\n", "msg_date": "Tue, 25 Jan 2005 16:46:48 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "On Fri, Jan 21, 2005 at 02:00:03AM -0500, Tom Lane wrote:\n> got absolutely zero flak about their use of Postgres in connection\n> with the .mobi bid, after having endured very substantial bombardment\n\nWell, \"absolutely zero\" is probably overstating it, but Tom is right\nthat PostgreSQL is not the sort of major, gee-it's-strange technology\nit once was. PostgreSQL is indeed established technology in the\nICANN world now, and I don't think anyone has an argument that it\ncan't run a registry without trouble. I certainly believe that\nPostgreSQL is a fine technology for this. And it scales just fine;\nwe added a million domains to .info over a couple days in September,\nand the effect on performance was unmeasurable (we'd have added them\nfaster, but the bottleneck was actually the client). A domain add in\nour case is on the order of 10 database write operations; that isn't\na huge load, of course, compared to large real-time manufacturing\ndata collection or other such applications. (Compared to those kinds\nof applications, the entire set of Internet registry systems,\nincluding all the registrars, is not that big.)\n\nIncidentally, someone in this thread was concerned about having to\nmaintain a separate password for each .org domain. It's true that\nthat is a registrar, rather than a registry, issue; but it may also\nbe a case where the back end is getting exposed. The .org registry\nuses a new protocol, EPP, to manage objects. One of the features of\nEPP is that it gives a kind of password (it's called authInfo) to\neach domain. The idea is that the registrant knows this authInfo,\nand also the currently-sponsoring registrar. If the registrant wants\nto switch to another registrar, s/he can give the authInfo to the new\nregistrar, who can then use the authInfo in validating a transfer\nrequest. This is intended to prevent the practice (relatively\nwidespread, alas, under the old protocol) where an unscrupulous party\nrequests transfers for a (substantial number of) domain(s) without\nany authorization. (This very thing has happened recently to a\nsomewhat famous domain on the Internet. I'll leave it to the gentle\nreader to do the required googling. The word \"panix\" might be of\nassistance.) So the additional passwords actually do have a purpose;\nbut different registrars handle this feature differently. My\nsuggestion is either to talk to your registrar or change registrars\n(or both) to get the behaviour you like. 
There are hundreds of\nregistrars for both .info and .org, so finding one which acts the way\nyou want shouldn't be too tricky.\n\nAnyway, this is pretty far off topic. But in answer to the original\nquestion, Afilias does indeed use PostgreSQL for this, and is happy\nto talk on the record about it.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Thu, 27 Jan 2005 13:26:28 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" }, { "msg_contents": "On Fri, Jan 21, 2005 at 03:23:30PM -0800, Kevin Brown wrote:\n\n> beefier CPU setup would be in order. But in my (limited) experience,\n> the disk subsystem is likely to be a bottleneck long before the CPU is\n> in the general case, especially these days as disk subsystems haven't\n> improved in performance nearly as quickly as CPUs have.\n\nIndeed. And you can go through an awful lot of budget buying solid\nstate storage ;-)\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nI remember when computers were frustrating because they *did* exactly what \nyou told them to. That actually seems sort of quaint now.\n\t\t--J.D. Baldwin\n", "msg_date": "Thu, 27 Jan 2005 13:27:31 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" } ]
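The point debated a few posts up in this thread, that a stored procedure can scan a large number of rows inside the backend and hand the client only the small derived result (saving all the per-row round trips), is easy to sketch in PL/pgSQL. The table and column names below (big_ticket_table, amount, storeid) are hypothetical rather than taken from any poster's schema, and the sketch assumes the plpgsql language is already installed in the database (createlang plpgsql); it illustrates the pattern only, not a tuned implementation.

-- All of the scanning and per-row work stays inside the server process;
-- only the final value crosses the wire to the client.
CREATE OR REPLACE FUNCTION ticket_revenue_for_store(integer)
RETURNS numeric AS '
DECLARE
    p_store ALIAS FOR $1;   -- 7.x-era style: positional parameter aliased by hand
    r       RECORD;
    total   numeric := 0;
BEGIN
    FOR r IN SELECT amount FROM big_ticket_table WHERE storeid = p_store LOOP
        -- any non-trivial per-row logic would go here
        total := total + r.amount;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';

-- One round trip from the client, however many rows the loop touches:
SELECT ticket_revenue_for_store(1);

For arithmetic this simple a plain SUM() in one statement would of course do the same job; the procedural form only earns its keep when the per-row logic is too involved for a single SQL query, which is the scenario being argued about above, and the caveats raised in the thread about resource use inside the backend still apply.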
[ { "msg_contents": "Josh Berkus wrote:\n\n>Amrit,\n>\n> \n>\n>>I use RH 9.0 with postgresql 7.3.2 and 4 Gb ram [server spec. Dual Xeon\n>>3.0] and someone mention that the hyperthreading will not help but if I let\n>>it there will it be any harm to the system?\n>>Any comment please.\n>> \n>>\n>\n>Sometimes. Run a test and take a look at your CS (context switch) levels on \n>VMSTAT. If they're high, turn HT off.\n>\n>If it's a dedicated PG system, though, just turn HT off. We can't use it.\n>\n>Also, upgrade PostgreSQL to 7.3.8 at least. 7.3.2 is known-buggy.\n>\n> \n>\nSorry for the \"dumb\" question, but what would be considered high \nregarding CS levels? We just upgraded our server's to dual 2.8Ghz Xeon \nCPUs from dual Xeon 1.8Ghz which unfortunately HT built-in. We also \nupgraded our database from version 7.3.4 to 7.4.2\n\nThanks.\n\nSteve Poe\n\n\n\n", "msg_date": "Fri, 07 Jan 2005 18:48:52 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Does \"HYPERTHREADING\" do any harm if we use with RH9.0" }, { "msg_contents": "I use RH 9.0 with postgresql 7.3.2 and 4 Gb ram [server spec. Dual Xeon 3.0]\nand someone mention that the hyperthreading will not help but if I let it there\nwill it be any harm to the system?\nAny comment please.\nAmrit\nThailand\n\n", "msg_date": "Sat, 8 Jan 2005 07:29:18 +0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Does \"HYPERTHREADING\" do any harm if we use with RH9.0 and\n postgresql?" }, { "msg_contents": "Amrit,\n\n> I use RH 9.0 with postgresql 7.3.2 and 4 Gb ram [server spec. Dual Xeon\n> 3.0] and someone mention that the hyperthreading will not help but if I let\n> it there will it be any harm to the system?\n> Any comment please.\n\nSometimes. Run a test and take a look at your CS (context switch) levels on \nVMSTAT. If they're high, turn HT off.\n\nIf it's a dedicated PG system, though, just turn HT off. We can't use it.\n\nAlso, upgrade PostgreSQL to 7.3.8 at least. 7.3.2 is known-buggy.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 7 Jan 2005 17:20:08 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does \"HYPERTHREADING\" do any harm if we use with RH9.0 and\n\tpostgresql?" } ]
[ { "msg_contents": "Summary: Doing a two or three table join for a date range performs\nworse than doing the same query individually for each date in the\nrange.\n\nWhat works: Doing a query just on a single date or a date range\n(against just one table) runs quick; 'explain' says it uses an index\nscan. Doing a query on a single date for one store or for one market\nuses all index scans, and runs quick as well.\n\nThe problem: Doing a query for a date range on a particular store or\nmarket, though, for a date range of more than a few days does a\nsequential scan of sales_tickets, and performs worse than doing one\nsingle date query for each date. My 'explain' for one such query is\nbelow.\n\nBackground: I have two or three tables involved in a query. One table\nis holds stores (7 rows at present), one holds sales tickets (about 5\nmillion) and one holds line items (about 10 million). It's test data\nthat I've generated and loaded using '\\copy from'. Each has a primary\nkey, and line items have two dates, written and delivered, that are\nindexed individually. Store has a market id; a market contains\nmultiple stores (in my case, 2 or 3). Each sales ticket has 1-3 line\nitems.\n\nIs there a way to tell postgres to use an index scan on sales_tickets? \n\nCuriously, in response to recent postings in the \"Low Performance for\nbig hospital server\" thread, when I flatten the tables by putting\nstoreid into line_items, it runs somewhat faster in all cases, and\nmuch faster in some; (I have times, if anyone is interested).\n\nThanks,\nDave\n\n\nmydb=> explain select * from line_items t, sales_tickets s where\nwrittenDate >= '12/01/2002' and writtenDate <= '12/31/2002' and\nt.ticketId = s.ticketId and s.storeId = 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------\n Hash Join (cost=93865.46..114054.74 rows=19898 width=28)\n Hash Cond: (\"outer\".ticketId = \"inner\".ticketId)\n -> Index Scan using line_items_written on line_items t \n(cost=0.00..3823.11 rows=158757 width=16)\n Index Cond: ((writtendate >= '2002-12-01'::date) AND\n(writtendate <= '2002-12-31'::date))\n -> Hash (cost=89543.50..89543.50 rows=626783 width=12)\n -> Seq Scan on sales_tickets s (cost=0.00..89543.50\nrows=626783 width=12)\n Filter: (storeid = 1)\n(7 rows)\n\nmydb=> explain select * from line_items t, sales_tickets s where\nwrittenDate = '12/01/2002' and t.ticketId = s.ticketId and s.storeid =\n1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..16942.25 rows=697 width=28)\n -> Index Scan using line_items_written on soldtrx t \n(cost=0.00..121.97 rows=5554 width=16)\n Index Cond: (writtendate = '2002-12-01'::date)\n -> Index Scan using sales_tickets_pkey on sales_tickets s \n(cost=0.00..3.02 rows=1 width=12)\n Index Cond: (\"outer\".ticketId = s.ticketId)\n Filter: (storeid = 1)\n(6 rows)\n\n\nThe tables:\n\ncreate table stores -- 7 rows\n(\n\tstoreId integer not null,\n\tmarketId integer not null\n);\n\ncreate table sales_tickets -- 500,000 rows\n(\n\tticketId integer primary key,\n\tstoreId integer not null,\n\tcustId integer not null\n);\n\ncreate table line_items -- 1,000,000 rows\n(\n\tlineItemId integer primary key,\n\tticketId integer references sales_tickets,\n\twrittenDate date not null,\n\tdeliveredDate date not null\n);\n\ncreate index line_items_written on line_items (writtenDate);\ncreate index line_items_delivered on line_items (deliveredDate);\n", 
"msg_date": "Fri, 7 Jan 2005 14:17:31 -0500", "msg_from": "David Jaquay <[email protected]>", "msg_from_op": true, "msg_subject": "Query across a date range" }, { "msg_contents": "David Jaquay <[email protected]> writes:\n> Summary: Doing a two or three table join for a date range performs\n> worse than doing the same query individually for each date in the\n> range.\n\nCould we see EXPLAIN ANALYZE, not just EXPLAIN, results?\n\nAlso, have you ANALYZEd lately? If the estimated row counts are at all\naccurate, I doubt that forcing a nestloop indexscan would improve the\nsituation.\n\nAlso, what PG version is this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Jan 2005 14:35:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query across a date range " }, { "msg_contents": "David,\n\n> The problem: Doing a query for a date range on a particular store or\n> market, though, for a date range of more than a few days does a\n> sequential scan of sales_tickets, and performs worse than doing one\n> single date query for each date.  My 'explain' for one such query is\n> below.\n\nCan you run EXPLAIN ANALYZE instead of just EXPLAIN? That will show you the \ndiscrepancy between estimated and actual costs, and probably show you what \nneeds fixing.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 7 Jan 2005 11:35:11 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query across a date range" }, { "msg_contents": "'explain analyze' output is below. I have done analyze recently, and\nam using pg 7.4.2 on SuSE 9.1. I'd be curious to know how to \"a\nnestloop indexscan\" to try it out.\n\nThanks,\nDave\n\nmydb=> explain analyze select * from line_items t, sales_tickets s\nwhere writtenDate >= '12/01/2002' and writtenDate <= '12/31/2002' and\nt.ticketid = s.ticketId and s.storeId = 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=93865.46..114054.74 rows=19898 width=28) (actual\ntime=25419.088..32140.217 rows=23914 loops=1)\n Hash Cond: (\"outer\".ticketid = \"inner\".ticketid)\n -> Index Scan using line_items_written on line_items t \n(cost=0.00..3823.11 rows=158757 width=16) (actual\ntime=100.621..3354.818 rows=169770 loops=1)\n Index Cond: ((writtendate >= '2002-12-01'::date) AND\n(writtendate <= '2002-12-31'::date))\n -> Hash (cost=89543.50..89543.50 rows=626783 width=12) (actual\ntime=22844.146..22844.146 rows=0 loops=1)\n -> Seq Scan on sales_tickets s (cost=0.00..89543.50\nrows=626783 width=12) (actual time=38.017..19387.447 rows=713846\nloops=1)\n Filter: (storeid = 1)\n Total runtime: 32164.948 ms\n(8 rows)\n\n\n\n\nOn Fri, 7 Jan 2005 11:35:11 -0800, Josh Berkus <[email protected]> wrote:\n> Can you run EXPLAIN ANALYZE instead of just EXPLAIN? That will show you the\n> discrepancy between estimated and actual costs, and probably show you what\n> needs fixing.\n\nAlso, Tom Lane wrote:\n> Could we see EXPLAIN ANALYZE, not just EXPLAIN, results?\n> \n> Also, have you ANALYZEd lately? 
If the estimated row counts are at all\n> accurate, I doubt that forcing a nestloop indexscan would improve the\n> situation.\n> \n> Also, what PG version is this?\n", "msg_date": "Fri, 7 Jan 2005 15:04:28 -0500", "msg_from": "David Jaquay <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query across a date range" }, { "msg_contents": "Dave,\n\nAh ....\n\n> -> Seq Scan on sales_tickets s (cost=0.00..89543.50\n> rows=626783 width=12) (actual time=38.017..19387.447 rows=713846\n> loops=1)\n\nThis is just more than 1/2 the time of your query. The issue is that you're \npulling 713,000 rows (PG estimates 626,000 which is in the right ballpark) \nand PG thinks that this is enough rows where a seq scan is faster. It could \nbe right.\n\nYou can test that, force an indexscan by doing:\nSET enable_seqscan = FALSE;\n\nAlso, please remember to run each query 3 times and report the time of the \n*last* run to us. We don't want differences in caching to throw off your \nevaulation.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 7 Jan 2005 12:15:27 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query across a date range" } ]
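A minimal sketch of the test Josh suggests above, using the table and column names from David's query. It is a diagnostic rather than a fix, since the planner may well be right that the seq scan is cheaper for roughly 700k matching rows:

SET enable_seqscan = FALSE;
EXPLAIN ANALYZE
SELECT *
FROM line_items t, sales_tickets s
WHERE t.writtenDate >= '12/01/2002'
  AND t.writtenDate <= '12/31/2002'
  AND t.ticketId = s.ticketId
  AND s.storeId = 1;
RESET enable_seqscan;

As Josh notes, run it three times and compare the last run's timing against the hash-join plan shown earlier in the thread.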
[ { "msg_contents": "Do large TEXT or VARCHAR entries in postgresql cause any performance\ndegradation when a query is being executed to search for data in a table\nwhere the TEXT/VARCHAR fields aren't being searched themselves?\n\nSince, according to the postgresql docs, theirs no performance\ndifference between VARCHAR and TEXT, I'm assuming VARCHAR is identical\nto TEXT entries with a restriction set on the length. And since TEXT\ncan be of any possible size, then they must be stored independently of\nthe rest of the table which is probably all stored in a fixed size rows\nsince all or nearly all of the other types have a specific size\nincluding CHAR. Therefore TEXT entries must be in some other hash table\nthat only needs to be looked up when that column is referenced. If this\nis the case then all other row data will need to be read in for an\nunindexed query, but the TEXT columns will only be read if their being\nsearched though or queried. And if they're only being queried, then only\nthe rows that matched will need the TEXT columns read in which should\nhave minimal impact on performance even if they contain kilobytes of\ninformation.\n\n-- \nI sense much NT in you.\nNT leads to Bluescreen.\nBluescreen leads to downtime.\nDowntime leads to suffering.\nNT is the path to the darkside.\nPowerful Unix is.\n\nPublic Key: ftp://ftp.tallye.com/pub/lorenl_pubkey.asc\nFingerprint: B3B9 D669 69C9 09EC 1BCD 835A FAF3 7A46 E4A3 280C\n \n", "msg_date": "Fri, 7 Jan 2005 19:36:47 -0800", "msg_from": "\"Loren M. Lang\" <[email protected]>", "msg_from_op": true, "msg_subject": "TEXT field and Postgresql Perfomance" }, { "msg_contents": "On Fri, Jan 07, 2005 at 19:36:47 -0800,\n \"Loren M. Lang\" <[email protected]> wrote:\n> Do large TEXT or VARCHAR entries in postgresql cause any performance\n> degradation when a query is being executed to search for data in a table\n> where the TEXT/VARCHAR fields aren't being searched themselves?\n\nYes in that the data is more spread out because of the wider rows and that\nresults in more disk blocks being looked at to get the desired data.\n\n> Since, according to the postgresql docs, theirs no performance\n> difference between VARCHAR and TEXT, I'm assuming VARCHAR is identical\n> to TEXT entries with a restriction set on the length. And since TEXT\n> can be of any possible size, then they must be stored independently of\n\nNo.\n\n> the rest of the table which is probably all stored in a fixed size rows\n\nNo, Postgres uses variable length records.\n", "msg_date": "Fri, 7 Jan 2005 22:03:23 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT field and Postgresql Perfomance" }, { "msg_contents": "I guess my question that would follow is, when does it work best to\nstart using BLOBs/CLOBs (I forget if pg has CLOBs) instead of\ntextfields because your table is going to balloon in disk blocks if\nyou have large amounts of data, and all fields you want to search on\nwould have to be indexed, increasing insert time substantialy.\n\nDoes it ever pay to use text and not CLOB unless your text is going to\nbe short, in which case why not just varchar, leading to the thought\nthat the text datatype is just bad?\n\nAlex Turner\nNetEconomist\n\n\nOn Fri, 7 Jan 2005 22:03:23 -0600, Bruno Wolff III <[email protected]> wrote:\n> On Fri, Jan 07, 2005 at 19:36:47 -0800,\n> \"Loren M. 
Lang\" <[email protected]> wrote:\n> > Do large TEXT or VARCHAR entries in postgresql cause any performance\n> > degradation when a query is being executed to search for data in a table\n> > where the TEXT/VARCHAR fields aren't being searched themselves?\n> \n> Yes in that the data is more spread out because of the wider rows and that\n> results in more disk blocks being looked at to get the desired data.\n> \n> > Since, according to the postgresql docs, theirs no performance\n> > difference between VARCHAR and TEXT, I'm assuming VARCHAR is identical\n> > to TEXT entries with a restriction set on the length. And since TEXT\n> > can be of any possible size, then they must be stored independently of\n> \n> No.\n> \n> > the rest of the table which is probably all stored in a fixed size rows\n> \n> No, Postgres uses variable length records.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n", "msg_date": "Sat, 8 Jan 2005 00:02:54 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT field and Postgresql Perfomance" }, { "msg_contents": "On Fri, Jan 07, 2005 at 10:03:23PM -0600, Bruno Wolff III wrote:\n> On Fri, Jan 07, 2005 at 19:36:47 -0800, \"Loren M. Lang\" <[email protected]> wrote:\n> \n> > Since, according to the postgresql docs, theirs no performance\n> > difference between VARCHAR and TEXT, I'm assuming VARCHAR is identical\n> > to TEXT entries with a restriction set on the length. And since TEXT\n> > can be of any possible size, then they must be stored independently of\n> \n> No.\n> \n> > the rest of the table which is probably all stored in a fixed size rows\n> \n> No, Postgres uses variable length records.\n\nA discussion of TOAST and ALTER TABLE SET STORAGE might be appropriate\nhere, but I'll defer that to somebody who understands such things\nbetter than me.\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Fri, 7 Jan 2005 22:23:13 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT field and Postgresql Perfomance" }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> On Fri, Jan 07, 2005 at 19:36:47 -0800,\n> \"Loren M. Lang\" <[email protected]> wrote:\n>> Do large TEXT or VARCHAR entries in postgresql cause any performance\n>> degradation when a query is being executed to search for data in a table\n>> where the TEXT/VARCHAR fields aren't being searched themselves?\n\n> Yes in that the data is more spread out because of the wider rows and that\n> results in more disk blocks being looked at to get the desired data.\n\nYou are overlooking the effects of TOAST. Fields wider than a kilobyte\nor two will be pushed out-of-line and will thereby not impose a penalty\non queries that only access the other fields in the table.\n\n(If Loren's notion of \"large\" is \"a hundred bytes\" then there may be a\nmeasurable impact. If it's \"a hundred K\" then there won't be.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2005 00:49:07 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT field and Postgresql Perfomance " } ]
[ { "msg_contents": "I have an integer column that is not needed for some rows in the table\n(whether it is necessary is a factor of other row attributes, and it\nisn't worth putting in a separate table).\n\nWhat are the performance tradeoffs (storage space, query speed) of using \na NULL enabled column versus a NOT-NULL column with a sentinel integer \nvalue?\n\nNot that it matters, but in the event where the column values matter,\nthe numberic value is a foreign key. Advice on that welcome too.\n\nThanks!\n\n", "msg_date": "Sat, 08 Jan 2005 16:07:00 -0500", "msg_from": "Jeffrey Tenny <[email protected]>", "msg_from_op": true, "msg_subject": "Null integer columns" }, { "msg_contents": "Jeffrey Tenny <[email protected]> writes:\n> What are the performance tradeoffs (storage space, query speed) of using \n> a NULL enabled column versus a NOT-NULL column with a sentinel integer \n> value?\n> Not that it matters, but in the event where the column values matter,\n> the numberic value is a foreign key. Advice on that welcome too.\n\nIn that case you want to use NULL, because the foreign key mechanism\nwill understand that there's no reference implied. With a sentinel\nvalue you'd have to have a dummy row in the master table --- which will\ncause you enough semantic headaches that you don't want to go there.\n\nThe performance difference could go either way depending on a lot of\nother details, but it will be insignificant in any case. Don't screw up\nyour database semantics for it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Jan 2005 16:55:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Null integer columns " } ]
[ { "msg_contents": "Hi,\n \nI am a recent convert to Postgresql, and am trying to tune a very slow query\nacross ten tables all with only a few rows at this stage (<20), and was\nlooking for some help to get me out of a dead-end.\n \nIt runs very slowly both on a hosted Postgresql 7.3.4 server running on\nFreeBSD UNIX box, and also on a Postgresql 8.0.0.0-rc2 server running on a\nWin XP box.\n \nOn the latter, the EXPLAIN ANALYZE returned what I thought was a strange\nresult - here is the excerpt ...\n\n(Start)\n \nSQL: Query Results\nQUERY PLAN\nUnique (cost=7.16..7.32 rows=3 width=188) (actual time=51.000..51.000\nrows=16 loops=1)\n -> Sort (cost=7.16..7.16 rows=3 width=188) (actual time=51.000..51.000\nrows=16 loops=1)\n Sort Key: am.id_assessment, c.id_claim, c.nm_claim, p.id_provider,\np.nm_title, p.nm_first, p.nm_last, ad.id_address, ad.nm_address_1,\nad.nm_address_2, ad.nm_address_3, ad.nm_suburb, ad.nm_city,\ns.nm_state_short, ad.nm_postcode, am.dt_assessment, am.dt_booking,\nast.nm_assessmentstatus, ast.b_offer_report, asn.id_assessmentstatus,\nasn.nm_assessmentstatus\n -> Merge Join (cost=4.60..7.13 rows=3 width=188) (actual\ntime=41.000..51.000 rows=16 loops=1)\n Merge Cond: (\"outer\".id_datastatus = \"inner\".id_datastatus)\n Join Filter: ((\"inner\".id_claim = \"outer\".id_claim) AND\n(\"inner\".id_assessment = \"outer\".id_assessment))\n\n:\n:\n:\n\n -> Index Scan using address_pkey on\naddress ad (cost=0.00..14.14 rows=376 width=76) (actual time=10.000..10.000\nrows=82 loops=1)\n -> Sort (cost=1.05..1.06 rows=3\nwidth=36) (actual time=0.000..0.000 rows=3 loops=1)\n Sort Key: am.id_address\n -> Seq Scan on assessment am\n(cost=0.00..1.03 rows=3 width=36) (actual time=0.000..0.000 rows=3 loops=1)\nTotal runtime: 51.000 ms\n\n44 row(s)\n\nTotal runtime: 11,452.979 ms\n\n(End)\n\nIt's the bit at the bottom that throws me - I can't work out why one Total\nruntime says 51ms, and yet the next Total runtime would be 11,452ms. (I'm\nassuming that the clue to getting the query time down is to solve this\npuzzle.)\n\nI've done vacuum analyze on all tables, but that didn't help. This query\nstands out among others as being very slow.\n\nAny ideas or suggestions? \n\nThanks in advance,\n\nMartin\n\n\n", "msg_date": "Sun, 9 Jan 2005 15:44:30 +1100", "msg_from": "\"Guenzl, Martin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Help with EXPLAIN ANALYZE runtimes" }, { "msg_contents": "\"Guenzl, Martin\" <[email protected]> writes:\n> On the latter, the EXPLAIN ANALYZE returned what I thought was a strange\n> result - here is the excerpt ...\n\nDo you think we are psychics who can guess at your problem when you've\nshown us none of the table definitions, none of the query, and only a\nsmall part of the EXPLAIN output?\n\nDonning my Karnak headgear, I will guess that this is actually not a\nSELECT query but some kind of update operation, and that the time\nsink is in the updating part and not in the data extraction part.\n(Inefficient foreign-key operations would be a likely cause, as would\npoorly written user-defined triggers.) But that's strictly a guess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 09 Jan 2005 00:23:42 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with EXPLAIN ANALYZE runtimes " }, { "msg_contents": "LOL ... Excuse my ignorance but what's Karnak headear?\n\nIt's a SELECT statement. There are no foreign-keys, just primary keys and\nindexes (some clustered). 
All joins are through integers / big integers\n(since anything beginning with id_ is either an integer or big integer).\n\nThe intention of showing an excerpt was to keep the focus of my question on\nthe two different runtimes - what these two runtimes mean (in contrast to\neach other), and what causes them to be so different, so that I could tackle\nthe optimisation of the query. This obviously backfired :-(\n\nBelow are the EXPLAIN ANALYZE and queries in full. What has got me\nbamboozled is how the query plan seems to report 51ms but it then reports a\nfinal figure of over 11 seconds - why the huge jump?\n\nThanks and regards\nMartin\n________________________________\n\nStart of EXPLAIN ANALYZE ...\n\nSQL: Query Results\nQUERY PLAN\nUnique (cost=7.16..7.32 rows=3 width=188) (actual time=51.000..51.000\nrows=16 loops=1)\n -> Sort (cost=7.16..7.16 rows=3 width=188) (actual time=51.000..51.000\nrows=16 loops=1)\n Sort Key: am.id_assessment, c.id_claim, c.nm_claim, p.id_provider,\np.nm_title, p.nm_first, p.nm_last, ad.id_address, ad.nm_address_1,\nad.nm_address_2, ad.nm_address_3, ad.nm_suburb, ad.nm_city,\ns.nm_state_short, ad.nm_postcode, am.dt_assessment, am.dt_booking,\nast.nm_assessmentstatus, ast.b_offer_report, asn.id_assessmentstatus,\nasn.nm_assessmentstatus\n -> Merge Join (cost=4.60..7.13 rows=3 width=188) (actual\ntime=41.000..51.000 rows=16 loops=1)\n Merge Cond: (\"outer\".id_datastatus = \"inner\".id_datastatus)\n Join Filter: ((\"inner\".id_claim = \"outer\".id_claim) AND\n(\"inner\".id_assessment = \"outer\".id_assessment))\n -> Nested Loop (cost=0.00..19.31 rows=8 width=97) (actual\ntime=0.000..0.000 rows=48 loops=1)\n Join Filter: (\"inner\".id_datastatus =\n\"outer\".id_datastatus)\n -> Nested Loop (cost=0.00..16.09 rows=3 width=74)\n(actual time=0.000..0.000 rows=16 loops=1)\n Join Filter: ((\"inner\".id_previous =\n\"outer\".id_assessmentstatus) AND (\"inner\".id_datastatus =\n\"outer\".id_datastatus))\n -> Nested Loop (cost=0.00..8.23 rows=1 width=53)\n(actual time=0.000..0.000 rows=2 loops=1)\n Join Filter: ((\"outer\".id_assessmentstatus =\n\"inner\".id_assessmentstatus) AND (\"inner\".id_datastatus =\n\"outer\".id_datastatus))\n -> Nested Loop (cost=0.00..6.98 rows=1\nwidth=20) (actual time=0.000..0.000 rows=2 loops=1)\n Join Filter: (\"inner\".id_datastatus =\n\"outer\".id_datastatus)\n -> Index Scan using datastatus_pkey\non datastatus ds (cost=0.00..5.93 rows=1 width=8) (actual time=0.000..0.000\nrows=1 loops=1)\n Filter: (b_active <> 0)\n -> Seq Scan on assessmentworkflow aw\n(cost=0.00..1.02 rows=2 width=12) (actual time=0.000..0.000 rows=2 loops=1)\n -> Seq Scan on assessmentstatus ast\n(cost=0.00..1.10 rows=10 width=33) (actual time=0.000..0.000 rows=10\nloops=2)\n -> Merge Join (cost=0.00..7.23 rows=42 width=37)\n(actual time=0.000..0.000 rows=42 loops=2)\n Merge Cond: (\"outer\".id_assessmentstatus =\n\"inner\".id_assessmentstatus)\n Join Filter: (\"outer\".id_datastatus =\n\"inner\".id_datastatus)\n -> Index Scan using assessmentstatus_pkey\non assessmentstatus asn (cost=0.00..3.11 rows=10 width=29) (actual\ntime=0.000..0.000 rows=10 loops=2)\n -> Index Scan using\nidx_assessmenttransition_1 on assessmenttransition \"at\" (cost=0.00..3.46\nrows=42 width=12) (actual time=0.000..0.000 rows=42 loops=2)\n -> Seq Scan on claim c (cost=0.00..1.04 rows=3\nwidth=23) (actual time=0.000..0.000 rows=3 loops=16)\n Filter: (id_user = 1)\n -> Sort (cost=4.60..4.60 rows=3 width=143) (actual\ntime=41.000..41.000 rows=97 loops=1)\n Sort Key: p.id_datastatus\n -> Merge Join 
(cost=3.94..4.57 rows=3 width=143)\n(actual time=10.000..41.000 rows=3 loops=1)\n Merge Cond: (\"outer\".id_provider =\n\"inner\".id_provider)\n Join Filter: ((\"inner\".id_state =\n\"outer\".id_state) AND (\"outer\".id_datastatus = \"inner\".id_datastatus))\n -> Nested Loop (cost=0.00..508.65 rows=3336\nwidth=51) (actual time=0.000..20.000 rows=2153 loops=1)\n Join Filter: (\"outer\".id_datastatus =\n\"inner\".id_datastatus)\n -> Index Scan using provider_pkey on\nprovider p (cost=0.00..16.59 rows=417 width=33) (actual time=0.000..0.000\nrows=270 loops=1)\n -> Seq Scan on state s (cost=0.00..1.08\nrows=8 width=18) (actual time=0.000..0.000 rows=8 loops=270)\n -> Sort (cost=3.94..3.94 rows=3 width=108)\n(actual time=10.000..10.000 rows=17 loops=1)\n Sort Key: am.id_provider\n -> Merge Join (cost=1.05..3.91 rows=3\nwidth=108) (actual time=10.000..10.000 rows=3 loops=1)\n Merge Cond: (\"outer\".id_address =\n\"inner\".id_address)\n Join Filter: (\"outer\".id_datastatus =\n\"inner\".id_datastatus)\n -> Index Scan using address_pkey on\naddress ad (cost=0.00..14.14 rows=376 width=76) (actual time=10.000..10.000\nrows=82 loops=1)\n -> Sort (cost=1.05..1.06 rows=3\nwidth=36) (actual time=0.000..0.000 rows=3 loops=1)\n Sort Key: am.id_address\n -> Seq Scan on assessment am\n(cost=0.00..1.03 rows=3 width=36) (actual time=0.000..0.000 rows=3 loops=1)\nTotal runtime: 51.000 ms\n\n44 row(s)\n\nTotal runtime: 11,452.979 ms\n\n... End of EXPLAIN ANALYZE\n\nStart of query ...\n\nSELECT DISTINCT am.id_assessment, \n c.id_claim, \n c.nm_claim, \n p.id_provider, \n p.nm_title, \n p.nm_first, \n p.nm_last, \n ad.id_address, \n ad.nm_address_1, \n ad.nm_address_2, \n ad.nm_address_3, \n ad.nm_suburb, \n ad.nm_city, \n s.nm_state_short, \n ad.nm_postcode, \n am.dt_assessment, \n am.dt_booking, \n ast.nm_assessmentstatus, \n ast.b_offer_report, \n asn.id_assessmentstatus, \n asn.nm_assessmentstatus \nFROM assessment am, \n address ad, \n assessmentworkflow aw, \n assessmenttransition at, \n assessmentstatus ast, \n assessmentstatus asn, \n claim c, \n state s, \n provider p, \n datastatus ds \nWHERE am.id_claim = c.id_claim \nAND am.id_assessment = aw.id_assessment \nAND aw.id_assessmentstatus = ast.id_assessmentstatus \nAND am.id_provider = p.id_provider \nAND c.id_user = 1 \nAND at.id_previous = aw.id_assessmentstatus \nAND asn.id_assessmentstatus = at.id_assessmentstatus \nAND am.id_address = ad.id_address \nAND ad.id_state = s.id_state\nAND am.id_datastatus = ds.id_datastatus\nAND ad.id_datastatus = ds.id_datastatus\nAND aw.id_datastatus = ds.id_datastatus\nAND at.id_datastatus = ds.id_datastatus\nAND ast.id_datastatus = ds.id_datastatus \nAND asn.id_datastatus = ds.id_datastatus \nAND c.id_datastatus = ds.id_datastatus \nAND s.id_datastatus = ds.id_datastatus \nAND p.id_datastatus = ds.id_datastatus \nAND ds.b_active <> 0\n\n... End of query.\n\n\n\n", "msg_date": "Sun, 9 Jan 2005 16:45:18 +1100", "msg_from": "\"Guenzl, Martin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with EXPLAIN ANALYZE runtimes " }, { "msg_contents": "On Sun, Jan 09, 2005 at 16:45:18 +1100,\n \"Guenzl, Martin\" <[email protected]> wrote:\n> LOL ... 
Excuse my ignorance but what's Karnak headear?\n\nJonny Carson used to do sketches on the Tonight show where he was Karnak\nand would give answers to questions in sealed envelopes which would later\nbe read by Ed McMahon.\n", "msg_date": "Sun, 9 Jan 2005 01:06:41 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Help with EXPLAIN ANALYZE runtimes" }, { "msg_contents": "In case anyone is interested, I finally found what I believe to be the cause\n... Or at least, I found the solution, and think I understand why.\n\nHaving read \"Section 10.3 Controlling the Planner with Explicit JOIN\nClauses\" (http://postgresql.org/docs/7.3/interactive/explicit-joins.html), I\nmodified the query to use INNER JOINS with the table datastatus, instead of\nthe implicit cross joins.\n\nThe INNER JOINS now seem to reduce the choices the planner has to make. The\nclue was the high number of tables involved, and the repeated reference to\nthe same table.\n\nAll's well that ends well ... with or without the Karnak headgear.\n\nMartin\n\n-----Original Message-----\nFrom: Guenzl, Martin [mailto:[email protected]] \nSent: Sunday, 9 January 2005 3:45 PM\nTo: [email protected]\nSubject: [PERFORM] Help with EXPLAIN ANALYZE runtimes\n\nHi,\n \nI am a recent convert to Postgresql, and am trying to tune a very slow query\nacross ten tables all with only a few rows at this stage (<20), and was\nlooking for some help to get me out of a dead-end.\n \nIt runs very slowly both on a hosted Postgresql 7.3.4 server running on\nFreeBSD UNIX box, and also on a Postgresql 8.0.0.0-rc2 server running on a\nWin XP box.\n \nOn the latter, the EXPLAIN ANALYZE returned what I thought was a strange\nresult - here is the excerpt ...\n\n(Start)\n \nSQL: Query Results\nQUERY PLAN\nUnique (cost=7.16..7.32 rows=3 width=188) (actual time=51.000..51.000\nrows=16 loops=1)\n -> Sort (cost=7.16..7.16 rows=3 width=188) (actual time=51.000..51.000\nrows=16 loops=1)\n Sort Key: am.id_assessment, c.id_claim, c.nm_claim, p.id_provider,\np.nm_title, p.nm_first, p.nm_last, ad.id_address, ad.nm_address_1,\nad.nm_address_2, ad.nm_address_3, ad.nm_suburb, ad.nm_city,\ns.nm_state_short, ad.nm_postcode, am.dt_assessment, am.dt_booking,\nast.nm_assessmentstatus, ast.b_offer_report, asn.id_assessmentstatus,\nasn.nm_assessmentstatus\n -> Merge Join (cost=4.60..7.13 rows=3 width=188) (actual\ntime=41.000..51.000 rows=16 loops=1)\n Merge Cond: (\"outer\".id_datastatus = \"inner\".id_datastatus)\n Join Filter: ((\"inner\".id_claim = \"outer\".id_claim) AND\n(\"inner\".id_assessment = \"outer\".id_assessment))\n\n:\n:\n:\n\n -> Index Scan using address_pkey on\naddress ad (cost=0.00..14.14 rows=376 width=76) (actual time=10.000..10.000\nrows=82 loops=1)\n -> Sort (cost=1.05..1.06 rows=3\nwidth=36) (actual time=0.000..0.000 rows=3 loops=1)\n Sort Key: am.id_address\n -> Seq Scan on assessment am\n(cost=0.00..1.03 rows=3 width=36) (actual time=0.000..0.000 rows=3 loops=1)\nTotal runtime: 51.000 ms\n\n44 row(s)\n\nTotal runtime: 11,452.979 ms\n\n(End)\n\nIt's the bit at the bottom that throws me - I can't work out why one Total\nruntime says 51ms, and yet the next Total runtime would be 11,452ms. (I'm\nassuming that the clue to getting the query time down is to solve this\npuzzle.)\n\nI've done vacuum analyze on all tables, but that didn't help. This query\nstands out among others as being very slow.\n\nAny ideas or suggestions? 
\n\nThanks in advance,\n\nMartin\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: the planner will ignore your desire to choose an index scan if your\n joining column's datatypes do not match\n\n\n\n", "msg_date": "Sun, 9 Jan 2005 19:30:23 +1100", "msg_from": "\"Guenzl, Martin\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Help with EXPLAIN ANALYZE runtimes" } ]
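A cut-down fragment showing the style of rewrite Martin describes, using three of the tables from his query; the real query would chain the remaining tables on with further INNER JOIN clauses in the same way. Per the documentation section he cites, the explicit join order constrains the planner instead of leaving it to search all ten-table orderings:

SELECT am.id_assessment, c.id_claim, c.nm_claim
FROM datastatus ds
  INNER JOIN claim c
          ON c.id_datastatus = ds.id_datastatus
  INNER JOIN assessment am
          ON am.id_claim = c.id_claim
         AND am.id_datastatus = ds.id_datastatus
WHERE ds.b_active <> 0
  AND c.id_user = 1;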
[ { "msg_contents": "I'm sorry if there's a URL out there answering this, but I couldn't find it.\n\nFor those of us that need the best performance possible out of a\ndedicated dual-CPU PostgreSQL server, what is recommended?\n\nAMD64/Opteron or i386/Xeon?\n\nLinux or FreeBSD or _?_\n\nI'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk performance.\n\nAny hardware-comparison benchmarks out there showing the results for\ndifferent PostgreSQL setups?\n\nThanks!\n", "msg_date": "Mon, 10 Jan 2005 18:42:13 -0800", "msg_from": "Miles Keaton <[email protected]>", "msg_from_op": true, "msg_subject": "which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Miles Keaton wrote:\n\n>I'm sorry if there's a URL out there answering this, but I couldn't find it.\n>\n>For those of us that need the best performance possible out of a\n>dedicated dual-CPU PostgreSQL server, what is recommended?\n>\n>AMD64/Opteron or i386/Xeon?\n> \n>\nAMD64/Opteron\n\n>Linux or FreeBSD or _?_\n> \n>\n\nThis is a religious question :)\n\n>I'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk performance.\n> \n>\nAnd many, many disks -- yes.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>Any hardware-comparison benchmarks out there showing the results for\n>different PostgreSQL setups?\n>\n>Thanks!\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Mon, 10 Jan 2005 19:44:34 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Quoth [email protected] (Miles Keaton):\n> I'm sorry if there's a URL out there answering this, but I couldn't\n> find it.\n>\n> For those of us that need the best performance possible out of a\n> dedicated dual-CPU PostgreSQL server, what is recommended?\n>\n> AMD64/Opteron or i386/Xeon?\n\nXeon sux pretty bad...\n\n> Linux or FreeBSD or _?_\n\nThe killer question won't be of what OS is \"faster,\" but rather of\nwhat OS better supports the fastest hardware you can get your hands\non. \n\nWe tried doing some FreeBSD benchmarking on a quad-Opteron box, only\nto discover that the fibrechannel controller worked in what amounted\nto a \"PAE-like\" mode where it only talked DMA in a 32 bit manner. We\nmight have found a more suitable controller, given time that was not\navailable.\n\nA while back I tried to do some FreeBSD benchmarking on a quad-Xeon\nbox with 8GB of RAM. 
I couldn't find _any_ RAID controller compatible\nwith that configuration, so FreeBSD wasn't usable on that hardware\nunless I told the box to ignore half the RAM.\n\nThere lies the rub of the problem: you need to make sure all the vital\ncomponents are able to run \"full blast\" in order to maximize\nperformance.\n\nThe really high end SCSI controllers may only have supported drivers\nfor some specific set of OSes, and it seems to be pretty easy to put\ntogether boxes where one or another component leaps into the \"That\nDoesn't Work!\" category.\n\n> I'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk\n> performance.\n\nRAID controllers tend to use i960 or StrongARM CPUs that run at speeds\nthat _aren't_ all that impressive. With software RAID, you can take\nadvantage of the _enormous_ increases in the speed of the main CPU.\n\nI don't know so much about FreeBSD's handling of this, but on Linux,\nthere's pretty strong indication that _SOFTWARE_ RAID is faster than\nhardware RAID.\n\nIt has the further merit that you're not dependent on some disk\nformatting scheme that is only compatible with the model of RAID\ncontroller that you've got, where if the controller breaks down, you\nlikely have to rebuild the whole array from scratch and your data is\ntoast.\n\nThe assumptions change if you're looking at really high end disk\narrays, but that's certainly another story.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://linuxfinances.info/info/finances.html\nReal Programmers are surprised when the odometers in their cars don't\nturn from 99999 to A0000.\n", "msg_date": "Mon, 10 Jan 2005 23:04:20 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Xeon sux pretty bad...\n\n> Linux or FreeBSD or _?_\n\nThe killer question won't be of what OS is \"faster,\" but rather of\nwhat OS better supports the fastest hardware you can get your hands\non. \n\nWe tried doing some FreeBSD benchmarking on a quad-Opteron box, only\nto discover that the fibrechannel controller worked in what amounted\nto a \"PAE-like\" mode where it only talked DMA in a 32 bit manner. We\nmight have found a more suitable controller, given time that was not\navailable.\n\nA while back I tried to do some FreeBSD benchmarking on a quad-Xeon\nbox with 8GB of RAM. I couldn't find _any_ RAID controller compatible\nwith that configuration, so FreeBSD wasn't usable on that hardware\nunless I told the box to ignore half the RAM.\n\nThere lies the rub of the problem: you need to make sure all the vital\ncomponents are able to run \"full blast\" in order to maximize\nperformance.\n\nThe really high end SCSI controllers may only have supported drivers\nfor some specific set of OSes, and it seems to be pretty easy to put\ntogether boxes where one or another component leaps into the \"That\nDoesn't Work!\" category.\n\n> I'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk\n> performance.\n\nRAID controllers tend to use i960 or StrongARM CPUs that run at speeds\nthat _aren't_ all that impressive. 
With software RAID, you can take\nadvantage of the _enormous_ increases in the speed of the main CPU.\n\nI don't know so much about FreeBSD's handling of this, but on Linux,\nthere's pretty strong indication that _SOFTWARE_ RAID is faster than\nhardware RAID.\n\nIt has the further merit that you're not dependent on some disk\nformatting scheme that is only compatible with the model of RAID\ncontroller that you've got, where if the controller breaks down, you\nlikely have to rebuild the whole array from scratch and your data is\ntoast.\n\nThe assumptions change if you're looking at really high end disk\narrays, but that's certainly another story.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"acm.org\")\nhttp://linuxfinances.info/info/finances.html\nReal Programmers are surprised when the odometers in their cars don't\nturn from 99999 to A0000.\n", "msg_date": "11 Jan 2005 04:25:04 GMT", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": ">\n>RAID controllers tend to use i960 or StrongARM CPUs that run at speeds\n>that _aren't_ all that impressive. With software RAID, you can take\n>advantage of the _enormous_ increases in the speed of the main CPU.\n>\n>I don't know so much about FreeBSD's handling of this, but on Linux,\n>there's pretty strong indication that _SOFTWARE_ RAID is faster than\n>hardware RAID.\n> \n>\nUnless something has changed though, you can't run raid 10\nwith linux software raid and raid 5 sucks for heavy writes.\n\nJ\n\n\n\n>It has the further merit that you're not dependent on some disk\n>formatting scheme that is only compatible with the model of RAID\n>controller that you've got, where if the controller breaks down, you\n>likely have to rebuild the whole array from scratch and your data is\n>toast.\n>\n>The assumptions change if you're looking at really high end disk\n>arrays, but that's certainly another story.\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Mon, 10 Jan 2005 20:31:22 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Chris,\n\n> I don't know so much about FreeBSD's handling of this, but on Linux,\n> there's pretty strong indication that _SOFTWARE_ RAID is faster than\n> hardware RAID.\n\nCertainly better than an Adaptec. But not necessarily better than a \nmedium-end RAID card, like an LSI. It really depends on the quality of the \ncontroller.\n\nAlso, expected concurrent activity should influence you. On a dedicated \ndatabase server, you'll seldom max out the CPU but will often max of the \ndisk, so the CPU required by software RAID is \"free\". However, if you have \na Web/PG/E-mail box which frequently hits 100% CPU, then even a lower-end \nRAID card can be beneficial simply by taking load off the CPU.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 10 Jan 2005 22:35:18 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "On Mon, Jan 10, 2005 at 08:31:22PM -0800, Joshua D. 
Drake wrote:\n> Unless something has changed though, you can't run raid 10\n> with linux software raid\n\nHm, why not? What stops you from making two RAID-0 devices and mirroring\nthose? (Or the other way round, I can never remember :-) )\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 11 Jan 2005 11:14:13 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Joshua D. Drake wrote:\n>>RAID controllers tend to use i960 or StrongARM CPUs that run at speeds\n>>that _aren't_ all that impressive. With software RAID, you can take\n>>advantage of the _enormous_ increases in the speed of the main CPU.\n>>\n>>I don't know so much about FreeBSD's handling of this, but on Linux,\n>>there's pretty strong indication that _SOFTWARE_ RAID is faster than\n>>hardware RAID.\n>> \n>>\n> \n> Unless something has changed though, you can't run raid 10\n> with linux software raid and raid 5 sucks for heavy writes.\n\nYou could always do raid 1 over raid 0, with newer kernels (2.6ish)\nthere is even a dedicated raid10 driver.\n\nJan\n", "msg_date": "Tue, 11 Jan 2005 11:16:32 +0100", "msg_from": "Jan Dittmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "...and on Mon, Jan 10, 2005 at 08:31:22PM -0800, Joshua D. Drake used the keyboard:\n> \n> >\n> >RAID controllers tend to use i960 or StrongARM CPUs that run at speeds\n> >that _aren't_ all that impressive. With software RAID, you can take\n> >advantage of the _enormous_ increases in the speed of the main CPU.\n> >\n> >I don't know so much about FreeBSD's handling of this, but on Linux,\n> >there's pretty strong indication that _SOFTWARE_ RAID is faster than\n> >hardware RAID.\n> > \n> >\n> Unless something has changed though, you can't run raid 10\n> with linux software raid and raid 5 sucks for heavy writes.\n> \n> J\n\nHello, Joshua.\n\nThings have changed. :)\n\nFrom 2.6.10's drivers/md/Kconfig:\n\nconfig MD_RAID10\n tristate \"RAID-10 (mirrored striping) mode (EXPERIMENTAL)\"\n depends on BLK_DEV_MD && EXPERIMENTAL\n ---help---\n RAID-10 provides a combination of striping (RAID-0) and\n mirroring (RAID-1) with easier configuration and more flexable\n layout.\n Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to\n be the same size (or atleast, only as much as the smallest device\n will be used).\n RAID-10 provides a variety of layouts that provide different levels\n of redundancy and performance.\n\n RAID-10 requires mdadm-1.7.0 or later, available at:\n\n ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/\n\nThere is a problem, however, that may render software RAID non-viable\nthough. According to one of my benchmarks, it makes up for an up to\n10% increase in system time consumed under full loads, so if the original\nposter's application is going to be CPU-bound, which might be the case,\nas he is looking for a machine that's strong on the CPU side, that may\nbe the \"too much\" bit.\n\nOf course, if Opteron is being chosen for the increase in the amount of\nmemory it can address, this is not the issue.\n\nHTH,\n-- \n Grega Bremec\n gregab at p0f dot net", "msg_date": "Tue, 11 Jan 2005 11:29:20 +0100", "msg_from": "Grega Bremec <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" 
}, { "msg_contents": "On 11 Jan 2005 04:25:04 GMT\nChristopher Browne <[email protected]> wrote:\n> Xeon sux pretty bad...\n> \n> > Linux or FreeBSD or _?_\n> \n> The killer question won't be of what OS is \"faster,\" but rather of\n> what OS better supports the fastest hardware you can get your hands\n> on. \n\nWell, if multiple OSs work on the hardware you like, there is nothing\nwrong with selecting the fastest among them of course. As for Linux or\nFreeBSD, you may also want to consider NetBSD. It seems that with the\nlatest releases of both, NetBSD outperforms FreeBSD in at least one\nbenchmark.\n\nhttp://www.feyrer.de/NetBSD/gmcgarry/\n\nThe benchmarks were run on a single processor but you can always run the\nbenchmark on whatever hardware you select - assuming that it runs both.\n\nIsn't there also a PostgreSQL specific benchmark available?\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 11 Jan 2005 06:46:36 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Steinar H. Gunderson wrote:\n\n>On Mon, Jan 10, 2005 at 08:31:22PM -0800, Joshua D. Drake wrote:\n> \n>\n>>Unless something has changed though, you can't run raid 10\n>>with linux software raid\n>> \n>>\n>\n>Hm, why not? What stops you from making two RAID-0 devices and mirroring\n>those? (Or the other way round, I can never remember :-) )\n> \n>\n\nO.k. that seems totally wrong ;) but yes your correct you could\nprobably do it.\n\nSincerely,\n\nJosuha D. Drake\n\n\n\n>/* Steinar */\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Tue, 11 Jan 2005 08:01:00 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Hi,\n\n From what I've been reading on the list for the last few months, adaptec\nisn't that good when it comes to RAID controllers, but LSI keeps popping up.\nIs there any particual models that are recommended as I'm in the market for\ntwo new servers both with RAID controllers. The server specs I'm thinking\nare as follows:\n\nBox 1 \nFedora 64bit core 3\n4 GB RAM (2GB per CPU)\n2 x Opteron CPU ???\nTyan K8S\nLSI® 53C1030 U320 SCSI controller Dual-channel \n\nBox 2\nFedora 64bit core 3\n2 GB RAM (1GB per CPU)\n2 x Opteron CPU ???\nTyan K8S\nLSI® 53C1030 U320 SCSI controller Dual-channel \n\nThis motherboard has can \"Connects to PCI-X Bridge A, LSI® ZCR (Zero Channel\nRAID) support (SCSI Interface Steering Logic)\". I believe this means I can\nget a LSI MegaRAID 320-0 which a few have mentioned on the list\n(http://www.lsilogic.com/products/megaraid/scsi_320_0.html). It supports\nRAID 10 and supports battery backed cache. Anyone had any experience with\nthis? \n\nAny other particular controller that people recommend? From what I've been\nreading RAID 10 and battery backed cache sound like things I need. 
:)\n\nThanks,\n\nBenjamin Wragg\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Josh Berkus\nSent: Tuesday, 11 January 2005 5:35 PM\nTo: [email protected]\nCc: Christopher Browne\nSubject: Re: [PERFORM] which dual-CPU hardware/OS is fastest for PostgreSQL?\n\nChris,\n\n> I don't know so much about FreeBSD's handling of this, but on Linux, \n> there's pretty strong indication that _SOFTWARE_ RAID is faster than \n> hardware RAID.\n\nCertainly better than an Adaptec. But not necessarily better than a\nmedium-end RAID card, like an LSI. It really depends on the quality of the\ncontroller.\n\nAlso, expected concurrent activity should influence you. On a dedicated \ndatabase server, you'll seldom max out the CPU but will often max of the \ndisk, so the CPU required by software RAID is \"free\". However, if you have\n\na Web/PG/E-mail box which frequently hits 100% CPU, then even a lower-end\nRAID card can be beneficial simply by taking load off the CPU.\n\n--\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n--\nNo virus found in this incoming message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.6.10 - Release Date: 10/01/2005\n \n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.6.11 - Release Date: 12/01/2005\n \n\n", "msg_date": "Fri, 14 Jan 2005 16:52:42 +1100", "msg_from": "\"Benjamin Wragg\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Without starting too much controvesy I hope, I would seriously\nrecommend you evaluate the AMCC Escalade 9500S SATA controller. It\nhas many of the features of a SCSI controler, but works with cheaper\ndrives, and for half the price or many SCSI controlers (9500S-8MI goes\nfor abour $500). See http://plexq.com/~aturner/3ware.pdf for their 4\nway, 8 way and 12 way RAID benchmarks including RAID 0, RAID 5 and\nRAID 10. If others have similar data, I would be very interested to\nsee how it stacks up against other RAID controllers.\n\nAlex Turner\nNetEconomist\n\n\nOn Fri, 14 Jan 2005 16:52:42 +1100, Benjamin Wragg <[email protected]> wrote:\n> Hi,\n> \n> From what I've been reading on the list for the last few months, adaptec\n> isn't that good when it comes to RAID controllers, but LSI keeps popping up.\n> Is there any particual models that are recommended as I'm in the market for\n> two new servers both with RAID controllers. The server specs I'm thinking\n> are as follows:\n> \n> Box 1\n> Fedora 64bit core 3\n> 4 GB RAM (2GB per CPU)\n> 2 x Opteron CPU ???\n> Tyan K8S\n> LSI® 53C1030 U320 SCSI controller Dual-channel\n> \n> Box 2\n> Fedora 64bit core 3\n> 2 GB RAM (1GB per CPU)\n> 2 x Opteron CPU ???\n> Tyan K8S\n> LSI® 53C1030 U320 SCSI controller Dual-channel\n> \n> This motherboard has can \"Connects to PCI-X Bridge A, LSI® ZCR (Zero Channel\n> RAID) support (SCSI Interface Steering Logic)\". I believe this means I can\n> get a LSI MegaRAID 320-0 which a few have mentioned on the list\n> (http://www.lsilogic.com/products/megaraid/scsi_320_0.html). It supports\n> RAID 10 and supports battery backed cache. Anyone had any experience with\n> this?\n> \n> Any other particular controller that people recommend? 
From what I've been\n> reading RAID 10 and battery backed cache sound like things I need. :)\n> \n> Thanks,\n> \n> Benjamin Wragg\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Josh Berkus\n> Sent: Tuesday, 11 January 2005 5:35 PM\n> To: [email protected]\n> Cc: Christopher Browne\n> Subject: Re: [PERFORM] which dual-CPU hardware/OS is fastest for PostgreSQL?\n> \n> Chris,\n> \n> > I don't know so much about FreeBSD's handling of this, but on Linux,\n> > there's pretty strong indication that _SOFTWARE_ RAID is faster than\n> > hardware RAID.\n> \n> Certainly better than an Adaptec. But not necessarily better than a\n> medium-end RAID card, like an LSI. It really depends on the quality of the\n> controller.\n> \n> Also, expected concurrent activity should influence you. On a dedicated\n> database server, you'll seldom max out the CPU but will often max of the\n> disk, so the CPU required by software RAID is \"free\". However, if you have\n> \n> a Web/PG/E-mail box which frequently hits 100% CPU, then even a lower-end\n> RAID card can be beneficial simply by taking load off the CPU.\n> \n> --\n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n> --\n> No virus found in this incoming message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.300 / Virus Database: 265.6.10 - Release Date: 10/01/2005\n> \n> --\n> No virus found in this outgoing message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.300 / Virus Database: 265.6.11 - Release Date: 12/01/2005\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n", "msg_date": "Fri, 14 Jan 2005 10:58:30 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Jan Dittmer <[email protected]> writes:\n\n> You could always do raid 1 over raid 0, with newer kernels (2.6ish)\n> there is even a dedicated raid10 driver.\n\nAren't you much better off doing raid 0 over raid 1? \n\nWith raid 1 over raid 0 you're mirroring two stripe sets. That means if any\ndrive from the first stripe set goes you lose the whole side of the mirror. If\nany drive of the second stripe set goes you lost your array. Even if they're\nnot the same position in the array.\n\nIf you do raid 0 over raid 1 then you're striping a series of mirrored drives.\nSo if any drive fails you only lose that drive from the stripe set. If another\ndrive fails then you're ok as long as it isn't the specific drive that was\npaired with the first failed drive.\n\n-- \ngreg\n\n", "msg_date": "14 Jan 2005 11:29:20 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Greg Stark wrote:\n> Jan Dittmer <[email protected]> writes:\n> \n> \n>>You could always do raid 1 over raid 0, with newer kernels (2.6ish)\n>>there is even a dedicated raid10 driver.\n> \n> \n> Aren't you much better off doing raid 0 over raid 1? \n> \n> With raid 1 over raid 0 you're mirroring two stripe sets. That means if any\n> drive from the first stripe set goes you lose the whole side of the mirror. If\n> any drive of the second stripe set goes you lost your array. 
Even if they're\n> not the same position in the array.\n> \n> If you do raid 0 over raid 1 then you're striping a series of mirrored drives.\n> So if any drive fails you only lose that drive from the stripe set. If another\n> drive fails then you're ok as long as it isn't the specific drive that was\n> paired with the first failed drive.\n\n\nEver heart of Murphy? :-) But of course you're right - I tend to mix up\nthe raid levels...\n\nJan\n", "msg_date": "Sat, 15 Jan 2005 00:32:55 +0100", "msg_from": "Jan Dittmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" } ]
[ { "msg_contents": "Greetings to one and all,\n\n I've been trying to find some information on selecting an optimal \nfilesystem setup for a volume that will only contain a PostgreSQL Database \nCluster under Linux. Searching through the mailing list archive showed some \npromising statistics on the various filesystems available to Linux, ranging \nfrom ext2 through reiserfs and xfs.\n\n I have come to understand that PostgreSQLs Write Ahead Logging (WAL) \nperforms a lot of the journal functionality provided by the majoirty of \ncontemporary filesystems and that having both WAL and filesystem journalling \ncan degrade performance.\n\n Could anyone point me in the right direction so that I can read up some \nmore on this issue to discern which filesystem to choose and how to tune \nboth the FS and PostgreSQL so that they can compliment each other? I've \nattempted to find this information via the FAQ, Google and the mailing list \narchives but have lucked out for the moment.\n\n Regards,\n\n Pete de Zwart. \n\n\n", "msg_date": "Tue, 11 Jan 2005 20:23:25 +1100", "msg_from": "\"Pete de Zwart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Best filesystem for PostgreSQL Database Cluster under Linux" }, { "msg_contents": "After a long battle with technology, \"Pete de Zwart\" <[email protected]>, an earthling, wrote:\n> Greetings to one and all,\n>\n> I've been trying to find some information on selecting an optimal \n> filesystem setup for a volume that will only contain a PostgreSQL Database \n> Cluster under Linux. Searching through the mailing list archive showed some \n> promising statistics on the various filesystems available to Linux, ranging \n> from ext2 through reiserfs and xfs.\n>\n> I have come to understand that PostgreSQLs Write Ahead Logging\n> (WAL) performs a lot of the journal functionality provided by the\n> majoirty of contemporary filesystems and that having both WAL and\n> filesystem journalling can degrade performance.\n>\n> Could anyone point me in the right direction so that I can read\n> up some more on this issue to discern which filesystem to choose and\n> how to tune both the FS and PostgreSQL so that they can compliment\n> each other? I've attempted to find this information via the FAQ,\n> Google and the mailing list archives but have lucked out for the\n> moment.\n\nYour understanding of the impact of filesystem journalling isn't\nentirely correct. In the cases of interest, journalling is done on\nmetadata, not on the contents of files, with the result that there\nisn't really that much overlap between the two forms of \"journalling\"\nthat are taking place.\n\nI did some benchmarking last year that compared, on a write-heavy\nload, ext3, XFS, and JFS.\n\nI found that ext3 was materially (if memory serves, 15%) slower than\nthe others, and that there was a persistent _slight_ (a couple\npercent) advantage to JFS over XFS.\n\nThis _isn't_ highly material, particularly considering that I was\nworking with a 100% Write load, whereas \"real world\" work is likely to\nhave more of a mixture.\n\nIf you have reason to consider one filesystem or another better\nsupported by your distribution vendor, THAT is a much more important\nreason to pick a particular filesystem than 'raw speed.'\n-- \noutput = (\"cbbrowne\" \"@\" \"cbbrowne.com\")\nhttp://cbbrowne.com/info/fs.html\nRules of the Evil Overlord #138. \"The passageways to and within my\ndomain will be well-lit with fluorescent lighting. 
Regrettably, the\nspooky atmosphere will be lost, but my security patrols will be more\neffective.\" <http://www.eviloverlord.com/>\n", "msg_date": "Tue, 11 Jan 2005 08:15:10 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best filesystem for PostgreSQL Database Cluster under Linux" }, { "msg_contents": "Thanks for the info.\n\nI managed to pull out some archived posts to this list from the PostgreSQL \nweb site about this issue which have helped a bit.\n\nUnfortunatly, the FS has been chosen before considering the impact of it on \nI/O for PostgreSQL. As the Cluster is sitting on it's on 200GB IDE drive for \nthe moment and the system is partially live, it's not feasable to change the \nunderlying file system without great pain and suffering.\n\nIn the great fsync debates that I've seen, the pervasive opinion about \njournalling file systems under Linux and PostgreSQL is to have the \nfilesystem mount option data=writeback, assuming that fsync in PostgreSQL \nwill handle coherency of the file data and the FS will handle metadata.\n\nThis is all academic to a point, as tuning the FS will get a small \nimprovement on I/O compared to the improvement potential of moving to \nSCSI/FCAL, that and getting more memory.\n\n Regards,\n\n Pete de Zwart.\n\n\"Christopher Browne\" <[email protected]> wrote in message \nnews:[email protected]...\n> Your understanding of the impact of filesystem journalling isn't\n> entirely correct. In the cases of interest, journalling is done on\n> metadata, not on the contents of files, with the result that there\n> isn't really that much overlap between the two forms of \"journalling\"\n> that are taking place.\n>\n> I did some benchmarking last year that compared, on a write-heavy\n> load, ext3, XFS, and JFS.\n>\n> I found that ext3 was materially (if memory serves, 15%) slower than\n> the others, and that there was a persistent _slight_ (a couple\n> percent) advantage to JFS over XFS.\n>\n> This _isn't_ highly material, particularly considering that I was\n> working with a 100% Write load, whereas \"real world\" work is likely to\n> have more of a mixture.\n>\n> If you have reason to consider one filesystem or another better\n> supported by your distribution vendor, THAT is a much more important\n> reason to pick a particular filesystem than 'raw speed.'\n\n\n", "msg_date": "Wed, 12 Jan 2005 07:25:43 +1100", "msg_from": "\"Pete de Zwart\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best filesystem for PostgreSQL Database Cluster under Linux" }, { "msg_contents": "On Wed, 12 Jan 2005 07:25:43 +1100, Pete de Zwart <[email protected]> wrote:\n[snip]\n> improvement on I/O compared to the improvement potential of moving to\n> SCSI/FCAL, that and getting more memory.\n> \n\nI would like to ask the question that continues to loom large over all\nDBAs. SCSI, FCAL and SATA, which works best.\n\nMost FCAL loops have a speed limit of either 1Gbps or 2Gbps. This is\nonly 100MB/sec or 200MB/sec. U320 SCSI can handle 320MB/sec and the\nAMCC (formerly 3Ware) SATA Raid cards show throughput over 400MB/sec\nwith good IOs/sec on PCI-X.\n\nI am not prepared to stand by whilst someone makes a sideways claim\nthat SCSI or FCAL is implicitly going to give better performance than\nanything else. It will depend on your data set, and how you configure\nyour drives, and how good your controller is. 
We have a Compaq Smart\nArray controler with a 3 drive RAID 5 than can't break 10MB/sec write\non a Bonnie++ benchmark. This is virtualy the slowest system in our\ndatacenter, but has a modern controler and 10k disks, whilst our PATA\nsystems manage much better throughput. (Yes I know that MB/sec is not\nthe only speed measure, it also does badly on IO/sec).\n\nAlex Turner\nNetEconomist\n", "msg_date": "Fri, 14 Jan 2005 10:51:50 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best filesystem for PostgreSQL Database Cluster under Linux" } ]
[ { "msg_contents": "> Subject: [PERFORM] which dual-CPU hardware/OS is fastest for\nPostgreSQL?\n> \n> I'm sorry if there's a URL out there answering this, but I couldn't\nfind\n> it.\n> \n> For those of us that need the best performance possible out of a\n> dedicated dual-CPU PostgreSQL server, what is recommended?\n> \n> AMD64/Opteron or i386/Xeon?\n> \n> Linux or FreeBSD or _?_\n> \n> I'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk\n> performance.\n> \n> Any hardware-comparison benchmarks out there showing the results for\n> different PostgreSQL setups?\n\nMy recommendation would be:\n2 way or 4 way Opteron depending on needs (looking on a price for 4-way?\nGo here: http://www.swt.com/qo3.html). Go no less than Opteron 246.\nTyan motherboard\nSerial ATA controller by 3ware (their latest escalade series size for\nyour needs) (if money is no object, go scsi). Make sure you pick up the\nbbu.\nRedhat Linux FC3 x86-64\nGood memory (DDR400 registered, at least)...lots of it.\n\nYou can get a two way rackmount for under 4000$. You can get a 4-way\nfor under 10k$. Make sure you pick up a rackmount case that has a\nserial ATA backplane that supports led status light for disk drives, and\nmake sure you get the right riser, heh. \n\nMerlin\n", "msg_date": "Tue, 11 Jan 2005 08:33:09 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "while you weren't looking, Merlin Moncure wrote:\n\n> 2 way or 4 way Opteron depending on needs (looking on a price for 4-way?\n> Go here: http://www.swt.com/qo3.html). \n\nTry also the Appro 1U 4-way Opteron server, at:\nhttp://www.appro.com/product/server_1142h.asp\n\nI specced a 4-way 842 (1.6 GHz: little to none of our db work is CPU\nbound; there's just a lot of it going on at once) with 32G core for\nwithin delta of what SWT wants /just/ for the 32G -- the price of the\nbox itself and anything else atop that. Stepping up to a faster CPU\nshould increase the cost directly in line with the retail price for\nthe silicon.\n\nWe haven't yet ordered the machine (and the quote was from early last\nmonth, so their prices will have fluctuated) and consequently, I can't\ncomment on their quality. Their default warranty is three years,\n\"rapid exchange\", though, and they offer on-site service for only\nnominally more, IIRC. Some slightly more than cursory googling hasn't\nturned up anything overly negative, either.\n\nAs a 1U, the box has no appreciable storage of its own but we're\nshopping for a competent, non bank-breaking fibre setup right now, so\nthat's not an issue for our situation. While on the subject, anyone\nhere have anything to say about JMR fibre raid cabinets? \n(Fibre-to-fibre, not fibre-to-SATA or the like.)\n\n/rls\n\n-- \n:wq\n", "msg_date": "Tue, 11 Jan 2005 07:58:34 -0600", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Rosser Schwarz <[email protected]> writes:\n\n> Try also the Appro 1U 4-way Opteron server, at:\n> http://www.appro.com/product/server_1142h.asp\n\nBack in the day, we used to have problems with our 1U dual pentiums. We\nattributed it to heat accelerating failure. 
I would fear four opterons in 1U\nwould be damned hard to cool effectively, no?\n\n-- \ngreg\n\n", "msg_date": "11 Jan 2005 09:30:53 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "$4000 is not going to get you much disk - If you buy components from\nthe cheapest source I know (newegg.com) you end up around $5k with\n14x36gig Raptor SATA drives and a 4U chasis with a 14xSATA built in\nback plane packing 2x9500S AMCC Escalade RAID cards, which are\nsupported in Linux, 4Gig RAM and 2xOpteron 242. If you are not CPU\nbound, there isn't much point going to 246. If you want SCSI, then\nyou will be paying more. Check out rackmountmart.com for Chasises,\nthey have a nice 5U that has a 24xSATA backplane (We will be acquiring\nthis in the next few weeks). If you really want to go nuts, they have\nan 8U with 40xSATA backplane.\n\nAlex Turner\nNetEconomist\n\n\nOn Tue, 11 Jan 2005 08:33:09 -0500, Merlin Moncure\n<[email protected]> wrote:\n> > Subject: [PERFORM] which dual-CPU hardware/OS is fastest for\n> PostgreSQL?\n> >\n> > I'm sorry if there's a URL out there answering this, but I couldn't\n> find\n> > it.\n> >\n> > For those of us that need the best performance possible out of a\n> > dedicated dual-CPU PostgreSQL server, what is recommended?\n> >\n> > AMD64/Opteron or i386/Xeon?\n> > \n> > Linux or FreeBSD or _?_\n> >\n> > I'm assuming hardware RAID 10 on 15k SCSI drives is fastest disk\n> > performance.\n> >\n> > Any hardware-comparison benchmarks out there showing the results for\n> > different PostgreSQL setups?\n> \n> My recommendation would be:\n> 2 way or 4 way Opteron depending on needs (looking on a price for 4-way?\n> Go here: http://www.swt.com/qo3.html). Go no less than Opteron 246.\n> Tyan motherboard\n> Serial ATA controller by 3ware (their latest escalade series size for\n> your needs) (if money is no object, go scsi). Make sure you pick up the\n> bbu.\n> Redhat Linux FC3 x86-64\n> Good memory (DDR400 registered, at least)...lots of it.\n> \n> You can get a two way rackmount for under 4000$. You can get a 4-way\n> for under 10k$. Make sure you pick up a rackmount case that has a\n> serial ATA backplane that supports led status light for disk drives, and\n> make sure you get the right riser, heh.\n> \n> Merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n", "msg_date": "Tue, 11 Jan 2005 09:34:15 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "while you weren't looking, Greg Stark wrote:\n\n> Back in the day, we used to have problems with our 1U dual pentiums. We\n> attributed it to heat accelerating failure. I would fear four opterons in 1U\n> would be damned hard to cool effectively, no?\n\nOpterons actually run pretty coolly, comparatively. 
If it's a big\nconcern, you can always drop a few more clams for the low-voltage\nversions -- available in 1.4 and 2.0 GHz flavors, and of which I've\nheard several accounts of their being run successfully /without/\nactive cooling -- or punt until later this year, when they ship\nWinchester core Opterons (90nm SOI -- the current, uniprocessor\nsilicon fabbed with that process has some 3W heat dissipation idle,\n~30W under full load; as a point of contrast, current 90nm P4s have\n34W idle dissipation, and some 100W peak).\n\nWe have a number of 1U machines (P4s, I believe), and a Dell blade\nserver (six or seven P3 machines in a 3U cabinet) as our webservers,\nand none of them seem to have any trouble with heat. That's actually\na bigger deal than it might first seem, given how frighteningly\ncrammed with crap our machine room is.\n\n/rls\n\n-- \n:wq\n", "msg_date": "Tue, 11 Jan 2005 08:52:10 -0600", "msg_from": "Rosser Schwarz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" } ]
[ { "msg_contents": "> $4000 is not going to get you much disk - If you buy components from\n> the cheapest source I know (newegg.com) you end up around $5k with\n> 14x36gig Raptor SATA drives and a 4U chasis with a 14xSATA built in\n> back plane packing 2x9500S AMCC Escalade RAID cards, which are\n> supported in Linux, 4Gig RAM and 2xOpteron 242. If you are not CPU\n> bound, there isn't much point going to 246. If you want SCSI, then\n> you will be paying more. Check out rackmountmart.com for Chasises,\n> they have a nice 5U that has a 24xSATA backplane (We will be acquiring\n> this in the next few weeks). If you really want to go nuts, they have\n> an 8U with 40xSATA backplane.\n> \n> Alex Turner\n> NetEconomist\n\nheh, our apps do tend to be CPU bound. Generally, I think the extra CPU\nhorsepower is worth the investment until you get to the really high end\ncpus.\n\nI definitely agree with all your hardware choices though...seems like\nyou've hit the 'magic formula'.\n\nMerlin\n\n", "msg_date": "Tue, 11 Jan 2005 09:44:21 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "\n\"Merlin Moncure\" <[email protected]> writes:\n\n> heh, our apps do tend to be CPU bound. Generally, I think the extra CPU\n> horsepower is worth the investment until you get to the really high end\n> cpus.\n\nI find that while most applications I work with shouldn't be cpu intensive\nthey do seem end up being cpu bound quite frequently. What happens is that 90%\nof the workload has a working set that fits in RAM. So the system ends up\nbeing bound by the memory bus speed. That appears exactly the same as\ncpu-bound, though I'm unclear whether increasing the cpu clock will help.\n\nIt's quite possible to have this situation at the same time as other queries\nare i/o bound. It's quite common to have 95% of your workload be frequently\nexecuted fast queries on commonly accessed data and 5% be bigger data\nwarehouse style queries that need to do large sequential reads.\n\nIncidentally, the same was true for Oracle on Solaris. If we found excessive\ncpu use typically meant some frequently executed query was using a sequential\nscan on a small table. Small enough to fit in RAM but large enough to consume\nlots of cycles reading it.\n\n-- \ngreg\n\n", "msg_date": "11 Jan 2005 10:39:01 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Infact the cache hit ratio that Oracle suggests is the minimum good\nvalue is 95%. Anything below that is bad news. The reason is pretty\nobvious - RAM transfer speed is around 3.2G/sec these days, whilst\neven the best array isn't going to give more than 400MB/sec, and\nthat's not even starting to talk about seek time. anything below 90%\nis not going to keep even the best disc hardware saturated. I know\nthat our dataset is 99% cached, and therefore better CPUs/better RAM\nhas a huge impact on overall performance.\n\nAlex Turner\nNetEconomist\n\nOn 11 Jan 2005 10:39:01 -0500, Greg Stark <[email protected]> wrote:\n> \n> \"Merlin Moncure\" <[email protected]> writes:\n> \n> > heh, our apps do tend to be CPU bound. 
Generally, I think the extra CPU\n> > horsepower is worth the investment until you get to the really high end\n> > cpus.\n> \n> I find that while most applications I work with shouldn't be cpu intensive\n> they do seem end up being cpu bound quite frequently. What happens is that 90%\n> of the workload has a working set that fits in RAM. So the system ends up\n> being bound by the memory bus speed. That appears exactly the same as\n> cpu-bound, though I'm unclear whether increasing the cpu clock will help.\n> \n> It's quite possible to have this situation at the same time as other queries\n> are i/o bound. It's quite common to have 95% of your workload be frequently\n> executed fast queries on commonly accessed data and 5% be bigger data\n> warehouse style queries that need to do large sequential reads.\n> \n> Incidentally, the same was true for Oracle on Solaris. If we found excessive\n> cpu use typically meant some frequently executed query was using a sequential\n> scan on a small table. Small enough to fit in RAM but large enough to consume\n> lots of cycles reading it.\n> \n> --\n> greg\n> \n>\n", "msg_date": "Wed, 12 Jan 2005 11:57:47 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "\nAlex Turner <[email protected]> writes:\n\n> Infact the cache hit ratio that Oracle suggests is the minimum good\n> value is 95%. Anything below that is bad news. \n\nWell that seems very workload dependent. No amount of cache is going to be\nable to achieve that for a DSS system chugging sequentially through terabytes\nof data. Whereas for OLTP systems I would wouldn't be surprised to see upwards\nof 99% hit rate.\n\nNote that a high cache hit rate can also be a sign of a problem. After all, it\nmeans the same data is being accessed repeatedly which implicitly means\nsomething is being done inefficiently. For an SQL database it could mean the\nquery plans are suboptimal.\n\nOn several occasions we found Oracle behaving poorly despite excellent cache\nhit rates because it was doing a sequential scan of a moderately sized table\ninstead of an index lookup. The table was small enough to fit in RAM but large\nenough to consume a significant amount of cpu, especially with the query being\nrun thousands of times per minute.\n\n-- \ngreg\n\n", "msg_date": "12 Jan 2005 12:25:23 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "No - I agree - Analysis cache hit rate as a single indicator is\ndangerous. You can easily increase cache hit rate by de-optimizing a\ngood query so it uses more CPU cylces, and therefore has a higher\ncache hit rate. All information has to be taken as a whole when\nperforming optimization on a system. Cache hit rate is just one\nfactor. For data warehousing, it's obviously that you are going to\nhave a lower cache hit rate because you are often performing scans\nacross large data sets that will never fit in memory. But for most\nsystem, not necesarily just OLTP, a high cache hit ratio is\nacheivable. Cache hit ratio is just one small indication of\nperformance.\n\nRelating to that - How to extract this kind of information from\npostgresql? 
Is there a way to get the cache hti ratio, or determine\nthe worst 10 queries in a database?\n\nAlex Turner\nNetEconomist\n\n\nOn 12 Jan 2005 12:25:23 -0500, Greg Stark <[email protected]> wrote:\n> \n> Alex Turner <[email protected]> writes:\n> \n> > Infact the cache hit ratio that Oracle suggests is the minimum good\n> > value is 95%. Anything below that is bad news.\n> \n> Well that seems very workload dependent. No amount of cache is going to be\n> able to achieve that for a DSS system chugging sequentially through terabytes\n> of data. Whereas for OLTP systems I would wouldn't be surprised to see upwards\n> of 99% hit rate.\n> \n> Note that a high cache hit rate can also be a sign of a problem. After all, it\n> means the same data is being accessed repeatedly which implicitly means\n> something is being done inefficiently. For an SQL database it could mean the\n> query plans are suboptimal.\n> \n> On several occasions we found Oracle behaving poorly despite excellent cache\n> hit rates because it was doing a sequential scan of a moderately sized table\n> instead of an index lookup. The table was small enough to fit in RAM but large\n> enough to consume a significant amount of cpu, especially with the query being\n> run thousands of times per minute.\n> \n> --\n> greg\n> \n>\n", "msg_date": "Wed, 12 Jan 2005 12:36:45 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" } ]
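For the question just above, the statistics collector already exposes enough to get a buffer-cache hit ratio and to spot tables that are being sequentially scanned over and over. A sketch, assuming stats_block_level and stats_row_level are enabled in postgresql.conf on these releases, and with 'yourdb' as a placeholder database name; note the ratio only covers PostgreSQL's shared buffers, not the OS cache sitting underneath:

SELECT datname, blks_hit, blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
WHERE datname = 'yourdb';

-- small tables read repeatedly by sequential scans show up near the top here
SELECT relname, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables
ORDER BY seq_tup_read DESC
LIMIT 10;

There is no built-in view of the worst queries in these releases; the usual approach is to set log_min_duration_statement (7.4 and up) and mine the logs.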
[ { "msg_contents": "Jim wrote: you'd be hard-pressed to find too many real-world examples where\nyou could do\nsomething with a PostgreSQL procedural language that you couldn't do\nwith PL/SQL.\n\nRick mumbled: You can't get it for nothing! %)\n\n\n \n \"Jim C. Nasby\" \n <[email protected]> To: [email protected] \n Sent by: cc: Frank Wiles <[email protected]>, Yann Michel <[email protected]>, \n pgsql-performance-owner@pos [email protected], [email protected] \n tgresql.org Subject: Re: [PERFORM] PostgreSQL vs. Oracle vs. Microsoft \n \n \n 01/10/2005 06:29 PM \n \n \n\n\n\n\nOn Mon, Jan 10, 2005 at 12:46:01PM -0500, Alex Turner wrote:\n> You sir are correct! You can't use perl in MS-SQL or Oracle ;).\n\nOn the other hand, PL/SQL is incredibly powerful, especially combined\nwith all the tools/utilities that come with Oracle. I think you'd be\nhard-pressed to find too many real-world examples where you could do\nsomething with a PostgreSQL procedural language that you couldn't do\nwith PL/SQL.\n--\nJim C. Nasby, Database Consultant [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n\n", "msg_date": "Tue, 11 Jan 2005 09:54:37 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" } ]
[ { "msg_contents": "\nAll of these recent threads about fastest hardware and \"who's better than \nwho\" has inspired me to create a new website:\n\nhttp://www.dbtuning.org\n\nI snipped a few bits from recent posts to get some pages started - hope \nthe innocent don't mind. It's a bit postgres biased at the moment, since \nwell, so am I (though FireBird is now mounting a strong showing...) This \nsite uses a wiki so anyone interested can make contributions. We are all \nshort on time, so I would love any help. I haven't entered any hardware \ninfo yet.\n\nI'll also take a minute to plug a postgres saavy open-source project used \nfor this site - http://www.tikipro.org - It's a very flexible web \nframework with a very powerful and extendible CMS engine. It just hit \nAlpha 4, and we hope to go beta very soon. If you have feedback (or bugs), \nplease send me a note. (and of course dbtuning is running on postgres ;-)\n\n\n[ \\ /\n[ >X< Christian Fowler | spider AT viovio.com\n[ / \\ http://www.viovio.com | http://www.tikipro.org\n", "msg_date": "Tue, 11 Jan 2005 13:05:04 -0500 (EST)", "msg_from": "Christian Fowler <[email protected]>", "msg_from_op": true, "msg_subject": "Assimilation of these \"versus\" and hardware threads" }, { "msg_contents": "People:\n\n> All of these recent threads about fastest hardware and \"who's better than\n> who\" has inspired me to create a new website:\n>\n> http://www.dbtuning.org\n\nWell, time to plug my web site, too, I guess:\nhttp://www.powerpostgresql.com\n\nI've got a configuration primer up there, and the 8.0 Annotated .Conf file \nwill be coming this week.\n\nThat web site runs on Framewerk, a PostgreSQL-based CMS developed by our own \nGavin Roy.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 11 Jan 2005 11:36:44 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Assimilation of these \"versus\" and hardware threads" }, { "msg_contents": "Matt,\n\n> I had one comment on the pg_autovacuum section. Near the bottom it\n> lists some of it's limitations, and I want to clarify the 1st one: \"Does\n> not reset the transaction counter\". I assume this is talking about the\n> xid wraparound problem? If so, then that bullet can be removed.\n> pg_autovacuum does check for xid wraparound and perform a database wide\n> vacuum analyze when it's needed.\n\nKeen. That's an 8.0 fix?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 11 Jan 2005 12:52:29 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Assimilation of these \"versus\" and hardware threads" }, { "msg_contents": "Josh Berkus wrote:\n\n>Matt,\n> \n>\n>>I had one comment on the pg_autovacuum section. Near the bottom it\n>>lists some of it's limitations, and I want to clarify the 1st one: \"Does\n>>not reset the transaction counter\". I assume this is talking about the\n>>xid wraparound problem? If so, then that bullet can be removed.\n>>pg_autovacuum does check for xid wraparound and perform a database wide\n>>vacuum analyze when it's needed.\n>> \n>>\n>\n>Keen. That's an 8.0 fix?\n>\n\nNope, been there since before 7.4 was released.\n\n", "msg_date": "Tue, 11 Jan 2005 16:44:55 -0500", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Assimilation of these \"versus\" and hardware" } ]
[ { "msg_contents": "I wonder if I would like to increase more RAM from 4 Gb. to 6 Gb. [which I hope\nto increase more performance ] and I now I used RH 9 and Pgsql 7.3.2 ON DUAL\nXeon 3.0 server thay has the limtation of 4 Gb. ram, I should use which OS\nbetween FC 2-3 or redhat EL 3 [which was claimed to support 64 Gb.ram] .May I\nuse FC 2 [which is freely downloaded] with 6 Gb. and PGsql 7.4 ?\nAmrit\nThailand\n", "msg_date": "Wed, 12 Jan 2005 10:49:28 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "Is that 4GB limit a hardware limitation? If it is, then there is not\nmuch you can do except upgrading the server. If the server is capable\nof handling more than 4GB of ram then you can just upgrade the kernel\nand enable high memory support (up to 64GB of memory in kernel 2.6.9).\nThere is no need to migrate your distro, but if you do I recommend\nupgrading your Pgsql too.\n\nMartin\n\nOn Wed, 12 Jan 2005 10:49:28 +0700, [email protected]\n<[email protected]> wrote:\n> I wonder if I would like to increase more RAM from 4 Gb. to 6 Gb. [which I hope\n> to increase more performance ] and I now I used RH 9 and Pgsql 7.3.2 ON DUAL\n> Xeon 3.0 server thay has the limtation of 4 Gb. ram, I should use which OS\n> between FC 2-3 or redhat EL 3 [which was claimed to support 64 Gb.ram] .May I\n> use FC 2 [which is freely downloaded] with 6 Gb. and PGsql 7.4 ?\n> Amrit\n> Thailand\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Wed, 12 Jan 2005 12:02:00 +0100", "msg_from": "Martin Tedjawardhana <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "On Wed, 12 Jan 2005 [email protected] wrote:\n\n> I wonder if I would like to increase more RAM from 4 Gb. to 6 Gb. [which I hope\n> to increase more performance ] and I now I used RH 9 and Pgsql 7.3.2 ON DUAL\n> Xeon 3.0 server thay has the limtation of 4 Gb. ram, I should use which OS\n> between FC 2-3 or redhat EL 3 [which was claimed to support 64 Gb.ram] .May I\n> use FC 2 [which is freely downloaded] with 6 Gb. and PGsql 7.4 ?\n> Amrit\n> Thailand\n\nTry 7.4 before the memory upgrade. If you still have performance issues,\ntry optimising your queries. As I mentioned before, you can join the\n#postgresql channel on irc.freenode.net and we can assist.\n\nGavin\n\n", "msg_date": "Wed, 12 Jan 2005 23:40:03 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "On Wed, 12 Jan 2005 [email protected] wrote:\n\n> I wonder if I would like to increase more RAM from 4 Gb. to 6 Gb. [which I hope\n> to increase more performance ] and I now I used RH 9 and Pgsql 7.3.2 ON DUAL\n> Xeon 3.0 server thay has the limtation of 4 Gb. ram, I should use which OS\n> between FC 2-3 or redhat EL 3 [which was claimed to support 64 Gb.ram] .May I\n> use FC 2 [which is freely downloaded] with 6 Gb. and PGsql 7.4 ?\n\nThere is no problem with free Linux distros handling > 4 GB of memory. 
The\nproblem is that 32 hardware must make use of some less than efficient\nmechanisms to be able to address the memory.\n\nSo, try 7.4 before the memory upgrade. If you still have performance issues,\ntry optimising your queries. As I mentioned before, you can join the\n#postgresql channel on irc.freenode.net and we can assist.\n\nGavin\n\n\n> Amrit\n> Thailand\n\nGavin\n", "msg_date": "Wed, 12 Jan 2005 23:44:55 +1100 (EST)", "msg_from": "Gavin Sherry <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "> There is no problem with free Linux distros handling > 4 GB of memory. The\n> problem is that 32 hardware must make use of some less than efficient\n> mechanisms to be able to address the memory.\n>\n> So, try 7.4 before the memory upgrade. If you still have performance issues,\n> try optimising your queries. As I mentioned before, you can join the\n> #postgresql channel on irc.freenode.net and we can assist.\n\nYes , of course I must try to upgrade PGsql to 7.4 and may be OS to FC 2-3 too.\nMy server products are intel based [mainboard , CPU ,Case , Power supply ,RAID\nNetwork card] dual Xeon 32 bit 3.0 Ghz which I consulted Thai intel supervisor\nand they told me that increasing the ram for more than 4 Gb. may be possible\ndepending on the OS.\nI ask the programmer who wrote that huge query and they told me that it was the\nquery generated by Delphi 6.0 component and not written by themselve.\n\nAmrit\nThailand\n", "msg_date": "Wed, 12 Jan 2005 21:48:33 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "> Yes , of course I must try to upgrade PGsql to 7.4 and may be OS to FC 2-3 too.\n> My server products are intel based [mainboard , CPU ,Case , Power supply ,RAID\n> Network card] dual Xeon 32 bit 3.0 Ghz which I consulted Thai intel supervisor\n> and they told me that increasing the ram for more than 4 Gb. may be possible\n> depending on the OS.\n\nI never tried FC before, but I recommend using Debian (with custom\nkernel) or if you have the patience: Gentoo. Those are \"strictly\nbusiness\" distros, no unnecesary stuffs running after installation.\nProviding a good base for you to focus on performance tweaks. Others\nmay have different opinions, though...\n", "msg_date": "Wed, 12 Jan 2005 16:19:06 +0100", "msg_from": "Martin Tedjawardhana <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "Gavin Sherry wrote:\n> There is no problem with free Linux distros handling > 4 GB of memory. The\n> problem is that 32 hardware must make use of some less than efficient\n> mechanisms to be able to address the memory.\n\nThe theshold for using PAE is actually far lower than 4GB. 4GB is the \ntotal memory address space -- split that in half for 2GB for userspace, \n2GB for kernel. The OS cache resides in kernel space -- after you take \nalway the memory allocation for devices, you're left with a window of \nroughly 900MB.\n\nSince the optimal state is to allocate a small amount of memory to \nPostgres and leave a huge chunk to the OS cache, this means you are \nalready hitting the PAE penalty at 1.5GB of memory.\n", "msg_date": "Thu, 13 Jan 2005 08:32:49 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. 
using postgresql" }, { "msg_contents": "\n> The theshold for using PAE is actually far lower than 4GB. 4GB is the\n> total memory address space -- split that in half for 2GB for userspace,\n> 2GB for kernel. The OS cache resides in kernel space -- after you take\n> alway the memory allocation for devices, you're left with a window of\n> roughly 900MB.\n\nI set shammax =\n[root@data3 /]# cat < /proc/sys/kernel/shmmax\n4000000000\nshmall =\n[root@data3 /]# cat < /proc/sys/kernel/shmall\n134217728\nIs that ok for 4 Gb. mechine?\n\n> Since the optimal state is to allocate a small amount of memory to\n> Postgres and leave a huge chunk to the OS cache, this means you are\n> already hitting the PAE penalty at 1.5GB of memory.\n>\nHow could I chang this hitting?\nThanks\nAmrit\nThailand\n", "msg_date": "Sun, 16 Jan 2005 08:02:46 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "William,\n\n> The theshold for using PAE is actually far lower than 4GB. 4GB is the\n> total memory address space -- split that in half for 2GB for userspace,\n> 2GB for kernel. The OS cache resides in kernel space -- after you take\n> alway the memory allocation for devices, you're left with a window of\n> roughly 900MB.\n\nI'm curious, how do you get 1.1GB for memory allocation for devices?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 16 Jan 2005 09:44:47 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "I inferred this from reading up on the compressed vm project. It can be \nhigher or lower depending on what devices you have in your system -- \nhowever, I've read messages from kernel hackers saying Linux is very \naggressive in reserving memory space for devices because it must be \nallocated at boottime.\n\n\n\nJosh Berkus wrote:\n> William,\n> \n> \n>>The theshold for using PAE is actually far lower than 4GB. 4GB is the\n>>total memory address space -- split that in half for 2GB for userspace,\n>>2GB for kernel. The OS cache resides in kernel space -- after you take\n>>alway the memory allocation for devices, you're left with a window of\n>>roughly 900MB.\n> \n> \n> I'm curious, how do you get 1.1GB for memory allocation for devices?\n> \n", "msg_date": "Mon, 17 Jan 2005 09:43:10 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "[email protected] wrote:\n>>Since the optimal state is to allocate a small amount of memory to\n>>Postgres and leave a huge chunk to the OS cache, this means you are\n>>already hitting the PAE penalty at 1.5GB of memory.\n>>\n> \n> How could I chang this hitting?\n\nUpgrade to 64-bit processors + 64-bit linux.\n", "msg_date": "Mon, 17 Jan 2005 09:43:51 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "> >>Since the optimal state is to allocate a small amount of memory to\n> >>Postgres and leave a huge chunk to the OS cache, this means you are\n> >>already hitting the PAE penalty at 1.5GB of memory.\n> >>\n> >\n> > How could I change this hitting?\n>\n> Upgrade to 64-bit processors + 64-bit linux.\n\nDoes the PAE help linux in handling the memory of more than 4 Gb limit in 32 bit\narchetech ? 
My intel server board could handle the mem of 12 Gb [according to\nintel spec.] and if I use Fedora C2 with PAE , will it useless for mem of more\nthan >4Gb.?\n\nAny comment please?\nAmrit\nThailand\n\n", "msg_date": "Tue, 18 Jan 2005 06:17:29 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "Amrit,\n\nIt's not useless, it's just not optimal.\n\nAll operating systems, FC2, FC3, .... will have the same problem with\ngreater than 4G of memory on a 32 bit processor.\n\nThe *only* way to avoid this is to go to a 64 bit processor (opteron) \nand then\nfor greater performance use a linux distribution compiled for a 64bit \nprocessor.\n\nHave you identified and optimized the queries, are you sure you need \nmore memory?\n\nDave\n\[email protected] wrote:\n\n>>>>Since the optimal state is to allocate a small amount of memory to\n>>>>Postgres and leave a huge chunk to the OS cache, this means you are\n>>>>already hitting the PAE penalty at 1.5GB of memory.\n>>>>\n>>>> \n>>>>\n>>>How could I change this hitting?\n>>> \n>>>\n>>Upgrade to 64-bit processors + 64-bit linux.\n>> \n>>\n>\n>Does the PAE help linux in handling the memory of more than 4 Gb limit in 32 bit\n>archetech ? My intel server board could handle the mem of 12 Gb [according to\n>intel spec.] and if I use Fedora C2 with PAE , will it useless for mem of more\n>than >4Gb.?\n>\n>Any comment please?\n>Amrit\n>Thailand\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n>\n>\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nAmrit, \n\nIt's not useless, it's just not optimal.\n\nAll operating systems, FC2, FC3, .... will have the same problem with \ngreater than 4G of memory on a 32 bit processor.\n\nThe *only* way to avoid this is to go to a 64 bit processor (opteron)\nand then\nfor greater performance use a linux distribution compiled for a 64bit\nprocessor.\n\nHave you identified and optimized the queries, are you sure you need\nmore memory?\n\nDave\n\[email protected] wrote:\n\n\n\n\nSince the optimal state is to allocate a small amount of memory to\nPostgres and leave a huge chunk to the OS cache, this means you are\nalready hitting the PAE penalty at 1.5GB of memory.\n\n \n\nHow could I change this hitting?\n \n\nUpgrade to 64-bit processors + 64-bit linux.\n \n\n\nDoes the PAE help linux in handling the memory of more than 4 Gb limit in 32 bit\narchetech ? My intel server board could handle the mem of 12 Gb [according to\nintel spec.] and if I use Fedora C2 with PAE , will it useless for mem of more\nthan >4Gb.?\n\nAny comment please?\nAmrit\nThailand\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Mon, 17 Jan 2005 18:36:35 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "[email protected] wrote:\n\n> \n> Does the PAE help linux in handling the memory of more than 4 Gb limit in 32 bit\n> archetech ? 
My intel server board could handle the mem of 12 Gb [according to\n> intel spec.] and if I use Fedora C2 with PAE , will it useless for mem of more\n> than >4Gb.?\n> \n> Any comment please?\n>\nI understand that the 2.6.* kernels are much better at large memory\nsupport (with respect to performance issues), so unless you have a\n64-bit machine lying around - this is probably worth a try.\n\nYou may need to build a new kernel with the various parameters :\n\nCONFIG_NOHIGHMEM\nCONFIG_HIGHMEM4G\nCONFIG_HIGHMEM64G\n\nset appropriately (or even upgrade to the latest 2.6.10). I would expect\nthat some research and experimentation will be required to get the best\nout of it - (e.g. the 'bounce buffers' issue).\n\nregards\n\nMark\n\n\n", "msg_date": "Tue, 18 Jan 2005 13:16:47 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "> I understand that the 2.6.* kernels are much better at large memory\n> support (with respect to performance issues), so unless you have a\n> 64-bit machine lying around - this is probably worth a try.\n>\n> You may need to build a new kernel with the various parameters :\n>\n> CONFIG_NOHIGHMEM\n> CONFIG_HIGHMEM4G\n> CONFIG_HIGHMEM64G\n>\n> set appropriately (or even upgrade to the latest 2.6.10). I would expect\n> that some research and experimentation will be required to get the best\n> out of it - (e.g. the 'bounce buffers' issue).\n\nIn the standard rpm FC 2-3 with a newly install server , would it automatically\ndetect and config it if I use the mechine with > 4 Gb [6Gb.] or should I\nmanually config it?\nAmrit\nThailand\n", "msg_date": "Tue, 18 Jan 2005 09:35:12 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "[email protected] wrote:\n\n> \n> In the standard rpm FC 2-3 with a newly install server , would it automatically\n> detect and config it if I use the mechine with > 4 Gb [6Gb.] or should I\n> manually config it?\n> Amrit\n> Thailand\n\nGood question. I dont have FC2-3 here to check. I recommend firing off a\nquestion to [email protected] (you need to subscribe first):\n\nhttp://www.redhat.com/mailman/listinfo/fedora-list\n\nbest wishes\n\nMark\n\n", "msg_date": "Tue, 18 Jan 2005 15:42:38 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "My experience is RH9 auto detected machines >= 2GB of RAM and installs \nthe PAE bigmem kernel by default. I'm pretty sure the FC2/3 installer \nwill do the same.\n\n\n\[email protected] wrote:\n>>I understand that the 2.6.* kernels are much better at large memory\n>>support (with respect to performance issues), so unless you have a\n>>64-bit machine lying around - this is probably worth a try.\n>>\n>>You may need to build a new kernel with the various parameters :\n>>\n>>CONFIG_NOHIGHMEM\n>>CONFIG_HIGHMEM4G\n>>CONFIG_HIGHMEM64G\n>>\n>>set appropriately (or even upgrade to the latest 2.6.10). I would expect\n>>that some research and experimentation will be required to get the best\n>>out of it - (e.g. the 'bounce buffers' issue).\n> \n> \n> In the standard rpm FC 2-3 with a newly install server , would it automatically\n> detect and config it if I use the mechine with > 4 Gb [6Gb.] 
or should I\n> manually config it?\n> Amrit\n> Thailand\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n", "msg_date": "Mon, 17 Jan 2005 19:03:18 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "On Mon, 17 Jan 2005 18:36:35 -0500\nDave Cramer <[email protected]> wrote:\n> The *only* way to avoid this is to go to a 64 bit processor (opteron) \n> and then\n> for greater performance use a linux distribution compiled for a 64bit \n> processor.\n\nOr NetBSD (http://www.NetBSD.org/) which has been 64 bit clean since\n1995 and has had the Opteron port integrated in its main tree (not as\npatches to or a separate tree) since April 2003.\n\n-- \nD'Arcy J.M. Cain <[email protected]>\nhttp://www.NetBSD.org/\n", "msg_date": "Tue, 18 Jan 2005 05:38:42 -0500", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "Why dont you just grab the latest stable kernel from kernel.org,\ncustomize it, compile it and the see what happens?\n\n\nOn Tue, 18 Jan 2005 09:35:12 +0700, [email protected]\n<[email protected]> wrote:\n> > I understand that the 2.6.* kernels are much better at large memory\n> > support (with respect to performance issues), so unless you have a\n> > 64-bit machine lying around - this is probably worth a try.\n> >\n> > You may need to build a new kernel with the various parameters :\n> >\n> > CONFIG_NOHIGHMEM\n> > CONFIG_HIGHMEM4G\n> > CONFIG_HIGHMEM64G\n> >\n> > set appropriately (or even upgrade to the latest 2.6.10). I would expect\n> > that some research and experimentation will be required to get the best\n> > out of it - (e.g. the 'bounce buffers' issue).\n> \n> In the standard rpm FC 2-3 with a newly install server , would it automatically\n> detect and config it if I use the mechine with > 4 Gb [6Gb.] or should I\n> manually config it?\n> Amrit\n> Thailand\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Tue, 18 Jan 2005 13:00:33 +0100", "msg_from": "Martin Tedjawardhana <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "This must be a linux'ism because to my knowledge FreeBSD does not keep the \nos-cache mapped into the kernel address space unless it have active objects \nassociated with the data.\n\nAnd FreeBSD also have a default split of 3GB userspace and 1GB. kernelspace \nwhen running with a default configuration. Linux people might want to try \nother os'es to compare the performance.\n\nBest regards,\nNicolai Petri\n\nPs. Sorry for my lame MS mailer - quoting is not something it knows how to \ndo. :)\n----- Original Message ----- \nFrom: \"William Yu\" <[email protected]>\n\n\n>I inferred this from reading up on the compressed vm project. 
It can be \n>higher or lower depending on what devices you have in your system -- \n> however, I've read messages from kernel hackers saying Linux is very \n> aggressive in reserving memory space for devices because it must be \n> allocated at boottime.\n>\n>\n>\n> Josh Berkus wrote:\n>> William,\n>>\n>>\n>>>The theshold for using PAE is actually far lower than 4GB. 4GB is the\n>>>total memory address space -- split that in half for 2GB for userspace,\n>>>2GB for kernel. The OS cache resides in kernel space -- after you take\n>>>alway the memory allocation for devices, you're left with a window of\n>>>roughly 900MB.\n>>\n>>\n>> I'm curious, how do you get 1.1GB for memory allocation for devices?\n>>\n\n\n", "msg_date": "Tue, 18 Jan 2005 14:04:45 +0100", "msg_from": "\"Nicolai Petri (lists)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "I would like to upgrade both OS kernel and PGsql version , so in my opinion the\nbest way to handle it is to *backup* the data in .tar and use a newly install\n2.6 OS linux [ from 2.4.9] with build in PGsql 7.4.6 rpm[ from 7.3.2] and may\nup the ram to 6 GB. and *restore* the data again.\nI wonder whether the PAE [physical address ext.] will be put in place and could\nI use the RAM for more than 4 Gb. Does any one have different idea ? Since\nupgrade to Operon 64 Bit needs a lot of money , I may postpone it for a couple\nwhile.\nAny comment ,please.\n\nAmrit\nThailand\n", "msg_date": "Tue, 18 Jan 2005 21:38:26 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "[email protected] wrote:\n> I would like to upgrade both OS kernel and PGsql version , so in my opinion the\n> best way to handle it is to *backup* the data in .tar\n\nJust remember if you're going from 7.3.2 => 7.4.x or 8.0 then you'll \nneed to use pg_dump not just tar up the directories. If you do use tar, \nremember to backup *all* the directories.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 19 Jan 2005 11:12:17 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" }, { "msg_contents": "You can *not* go from any major release to another major release using \nany kind of file backup. Please use pg_dump.\n\nAdditionally there are known issues dumping and restoring from 7.3 -> \n7.4 if you use the default copy command. Use the pg_dump --inserts option.\n\nI would still tar the directory just in case you *have* to fall back to \n7.3 for some reason (Better safe than sorry )\n\nDave\n\nRichard Huxton wrote:\n\n> [email protected] wrote:\n>\n>> I would like to upgrade both OS kernel and PGsql version , so in my \n>> opinion the\n>> best way to handle it is to *backup* the data in .tar\n>\n>\n> Just remember if you're going from 7.3.2 => 7.4.x or 8.0 then you'll \n> need to use pg_dump not just tar up the directories. 
If you do use \n> tar, remember to backup *all* the directories.\n>\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: the planner will ignore your desire to choose an index scan if \n> your\n> joining column's datatypes do not match\n>\n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n", "msg_date": "Wed, 19 Jan 2005 08:41:29 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Increasing RAM for more than 4 Gb. using postgresql" } ]
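Once the extra memory is usable, the two settings this thread keeps circling around can at least be sanity-checked from psql. The figures in the comments are only illustrative for a machine in the 4 to 6 GB class, not values taken from the thread, and on 7.3/7.4 both parameters are counted in 8 kB buffers rather than bytes:

SHOW shared_buffers;          -- keep this modest; the OS cache does the heavy lifting
SHOW effective_cache_size;    -- tells the planner roughly how much the OS is caching

-- illustrative postgresql.conf lines (assumptions only):
-- shared_buffers = 20000            # about 160 MB
-- effective_cache_size = 400000     # about 3 GB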
[ { "msg_contents": "Hi All,\n\nHere is my test comparison between Postgres (7.3.2)\noptimizer vs Oracle (10g) optimizer. \n\nIt seems to me that Postgres optimizer is not smart \nenough.\n\nDid I miss anything?\n\nThanks,\n\nIn Postgres:\n============\ndrop table test;\ncreate table test (\n module character varying(50),\n action_deny integer,\n created timestamp with time zone,\n customer_id integer,\n domain character varying(255));\ncreate or replace function insert_rows () returns\ninteger as '\nBEGIN\n for i in 1 .. 500000 loop\n insert into test values (i, 2, now(), 100, i);\n end loop;\n return 1;\nEND;\n' LANGUAGE 'plpgsql';\n\nselect insert_rows();\n\ncreate index test_id1 on test (customer_id, created,\ndomain);\n\nanalyze test;\n\nexplain analyze\nSELECT module, sum(action_deny)\nFROM test\nWHERE created >= ('now'::timestamptz - '1\nday'::interval) AND customer_id='100'\n AND domain='100'\nGROUP BY module;\n\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3.12..3.13 rows=1 width=9) (actual\ntime=91.05..91.05 rows=1 loops=1)\n -> Group (cost=3.12..3.12 rows=1 width=9) (actual\ntime=91.04..91.04 rows=1 loops=1)\n -> Sort (cost=3.12..3.12 rows=1 width=9)\n(actual time=91.03..91.03 rows=1 loops=1)\n Sort Key: module\n -> Index Scan using test_id1 on test \n(cost=0.00..3.11 rows=1 width=9) (actual\ntime=0.03..91.00 rows=1 loops=1)\n Index Cond: ((customer_id = 100)\nAND (created >= '2005-01-11\n14:48:44.832552-07'::timestamp with time zone) AND\n(\"domain\" = '100'::character varying))\n Total runtime: 91.13 msec\n(7 rows)\n\ncreate index test_id2 on test(domain);\nanalyze test;\n\nexplain analyze\nSELECT module, sum(action_deny)\nFROM test\nWHERE created >= ('now'::timestamptz - '1\nday'::interval) AND customer_id='100'\n AND domain='100'\nGROUP BY module;\n\n \n QUERY PLAN \n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3.12..3.13 rows=1 width=9) (actual\ntime=90.30..90.30 rows=1 loops=1)\n -> Group (cost=3.12..3.12 rows=1 width=9) (actual\ntime=90.29..90.30 rows=1 loops=1)\n -> Sort (cost=3.12..3.12 rows=1 width=9)\n(actual time=90.29..90.29 rows=1 loops=1)\n Sort Key: module\n -> Index Scan using test_id1 on test \n(cost=0.00..3.11 rows=1 width=9) (actual\ntime=0.03..90.25 rows=1 loops=1)\n Index Cond: ((customer_id = 100)\nAND (created >= '2005-01-11\n14:51:09.555974-07'::timestamp with time zone) AND\n(\"domain\" = '100'::character varying))\n Total runtime: 90.38 msec\n(7 rows)\n\nWHY PG STILL CHOOSE INDEX test_id1???\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nBECAUSE QUERY WILL RUN MUCH FASTER USING test_id2!!!\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\ndrop index test_id1;\nexplain analyze\nSELECT module, sum(action_deny)\nFROM test\nWHERE created >= ('now'::timestamptz - '1\nday'::interval) AND customer_id='100'\n AND domain='100'\nGROUP BY module;\n \n QUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=3.12..3.13 rows=1 width=9) (actual\ntime=0.08..0.08 rows=1 loops=1)\n -> Group (cost=3.12..3.13 rows=1 width=9) (actual\ntime=0.08..0.08 rows=1 loops=1)\n -> Sort (cost=3.12..3.13 rows=1 width=9)\n(actual time=0.07..0.07 rows=1 loops=1)\n Sort Key: 
module\n -> Index Scan using test_id2 on test \n(cost=0.00..3.11 rows=1 width=9) (actual\ntime=0.04..0.05 rows=1 loops=1)\n Index Cond: (\"domain\" =\n'100'::character varying)\n Filter: ((created >= '2005-01-11\n14:53:58.806364-07'::timestamp with time zone) AND\n(customer_id = 100))\n Total runtime: 0.14 msec\n(8 rows)\n\nIn Oracle:\n==========\ndrop table test;\ncreate table test (\n module character varying(50),\n action_deny integer,\n created timestamp with time zone,\n customer_id integer,\n domain character varying(255));\n\nbegin\n for i in 1..500000 loop\n insert into test values (i, 2, current_timestamp,\n100, i);\n end loop;\nend;\n/\n\ncreate index test_id1 on test (customer_id, created,\ndomain);\n\nanalyze table test compute statistics;\n\nset autot on\nset timing on\n\nSELECT module, sum(action_deny)\nFROM test\nWHERE created >= (current_timestamp - interval '1'\nday) AND customer_id=100\n AND domain='100'\nGROUP BY module\n/\n\nMODULE \nSUM(ACTION_DENY)\n--------------------------------------------------\n----------------\n100 \n 2\n\nElapsed: 00:00:00.67\n\nExecution Plan\n----------------------------------------------------------\n 0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=25\nCard=1 Bytes=29\n )\n\n 1 0 SORT (GROUP BY) (Cost=25 Card=1 Bytes=29)\n 2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TEST'\n(TABLE) (Cost=24\n Card=1 Bytes=29)\n\n 3 2 INDEX (RANGE SCAN) OF 'TEST_ID1'\n(INDEX) (Cost=23 Card\n =4500)\n\n\n\n\n\nStatistics\n----------------------------------------------------------\n 1 recursive calls\n 0 db block gets\n 2292 consistent gets\n 2291 physical reads\n 0 redo size\n 461 bytes sent via SQL*Net to client\n 508 bytes received via SQL*Net from client\n 2 SQL*Net roundtrips to/from client\n 1 sorts (memory)\n 0 sorts (disk)\n 1 rows processed\n\ncreate index test_id2 on test (domain);\n\nSELECT module, sum(action_deny)\nFROM test\nWHERE created >= (current_timestamp - interval '1'\nday) AND customer_id=100\n AND domain='100'\nGROUP BY module\n/\n\nMODULE \nSUM(ACTION_DENY)\n--------------------------------------------------\n----------------\n100 \n 2\n\nElapsed: 00:00:00.03\n\nExecution Plan\n----------------------------------------------------------\n 0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5\nCard=1 Bytes=29)\n 1 0 SORT (GROUP BY) (Cost=5 Card=1 Bytes=29)\n 2 1 TABLE ACCESS (BY INDEX ROWID) OF 'TEST'\n(TABLE) (Cost=4\n Card=1 Bytes=29)\n\n 3 2 INDEX (RANGE SCAN) OF 'TEST_ID2'\n(INDEX) (Cost=3 Card=\n 1)\n\n\n\n\n\nStatistics\n----------------------------------------------------------\n 0 recursive calls\n 0 db block gets\n 4 consistent gets\n 0 physical reads\n 0 redo size\n 461 bytes sent via SQL*Net to client\n 508 bytes received via SQL*Net from client\n 2 SQL*Net roundtrips to/from client\n 1 sorts (memory)\n 0 sorts (disk)\n 1 rows processed\n\n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nAll your favorites on one personal page ��� Try My Yahoo!\nhttp://my.yahoo.com \n", "msg_date": "Wed, 12 Jan 2005 14:25:06 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres Optimizer is not smart enough?" }, { "msg_contents": "Litao Wu wrote:\n> Hi All,\n> \n> Here is my test comparison between Postgres (7.3.2)\n> optimizer vs Oracle (10g) optimizer. \n> \n> It seems to me that Postgres optimizer is not smart \n> enough.\n> \n> Did I miss anything?\n\nYeah, 7.4.\n\n7.3.2 is *ancient*. 
Here's output from 7.4:\n\n[test@ferrari] explain analyze\ntest-# SELECT module, sum(action_deny)\ntest-# FROM test\ntest-# WHERE created >= ('now'::timestamptz - '1\ntest'# day'::interval) AND customer_id='100'\ntest-# AND domain='100'\ntest-# GROUP BY module;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=5.69..5.69 rows=1 width=13) (actual \ntime=715.058..715.060 rows=1 loops=1)\n -> Index Scan using test_id1 on test (cost=0.00..5.68 rows=1 \nwidth=13) (actual time=0.688..690.459 rows=1 loops=1)\n Index Cond: ((customer_id = 100) AND (created >= '2005-01-11 \n17:52:22.364145-05'::timestamp with time zone) AND ((\"domain\")::text = \n'100'::text))\n Total runtime: 717.546 ms\n(4 rows)\n\n[test@ferrari] create index test_id2 on test(domain);\nCREATE INDEX\n[test@ferrari] analyze test;\nANALYZE\n[test@ferrari]\n[test@ferrari] explain analyze\ntest-# SELECT module, sum(action_deny)\ntest-# FROM test\ntest-# WHERE created >= ('now'::timestamptz - '1\ntest'# day'::interval) AND customer_id='100'\ntest-# AND domain='100'\ntest-# GROUP BY module;\n QUERY PLAN \n\n------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=5.68..5.69 rows=1 width=13) (actual \ntime=10.778..10.780 rows=1 loops=1)\n -> Index Scan using test_id2 on test (cost=0.00..5.68 rows=1 \nwidth=13) (actual time=10.702..10.721 rows=1 loops=1)\n Index Cond: ((\"domain\")::text = '100'::text)\n Filter: ((created >= '2005-01-11 \n17:53:16.720749-05'::timestamp with time zone) AND (customer_id = 100))\n Total runtime: 11.039 ms\n(5 rows)\n\n[test@ferrari] select version();\n PostgreSQL 7.4.5 on i686-pc-linux-gnu, compiled by GCC \ni686-pc-linux-gnu-gcc (GCC) 3.4.0 20040204 (prerelease)\n(1 row)\n\nHope that helps,\n\nMike Mascari\n", "msg_date": "Wed, 12 Jan 2005 17:55:39 -0500", "msg_from": "Mike Mascari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough?" }, { "msg_contents": "Litao Wu Wrote:\n> explain analyze\n> SELECT module, sum(action_deny)\n> FROM test\n> WHERE created >= ('now'::timestamptz - '1\n> day'::interval) AND customer_id='100'\n> AND domain='100'\n> GROUP BY module;\n\nHere is my output for this query:\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=3.03..3.03 rows=1 width=13) (actual\ntime=0.132..0.135 rows=1 loops=1)\n -> Index Scan using test_id2 on test (cost=0.00..3.02 rows=1\nwidth=13) (actual time=0.085..0.096 rows=1 loops=1)\n Index Cond: ((\"domain\")::text = '100'::text)\n Filter: ((created >= ('2005-01-13\n11:57:34.673833+13'::timestamp with time zone - '1 day'::interval)) AND\n(customer_id = 100))\n Total runtime: 0.337 ms\n(5 rows)\n\nTime: 8.424 ms\n\n\nThe version is:\nPostgreSQL 8.0.0rc5 on i386-unknown-freebsd5.3, compiled by GCC gcc\n(GCC) 3.4.2 [FreeBSD] 20040728\n\n\nI have random_page_cost = 0.8 in my postgresql.conf. Setting it back to\nthe default (4) results in a plan using test_id1. 
A little\nexperimentation showed that for my system random_page_cost=1 was where\nit changed from using test_id1 to test_id2.\n\nSo changing this parameter may be helpful.\n\nI happen to have some debugging code enabled for the optimizer, and the\nissue appears to be that the costs of paths using these indexes are\nquite similar, so are quite sensitive to (some) parameter values.\n\nregards\n\nMark\n\nP.s : 7.3.2 is quite old.\n\n", "msg_date": "Thu, 13 Jan 2005 12:14:07 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough?" }, { "msg_contents": "On Thu, 2005-01-13 at 12:14 +1300, Mark Kirkwood wrote:\n\n[snip some explains]\n\n> \n> I have random_page_cost = 0.8 in my postgresql.conf. Setting it back to\n> the default (4) results in a plan using test_id1.\n\nit is not rational to have random_page_cost < 1.\n\nif you see improvement with such a setting, it is as likely that \nsomething else is wrong, such as higher statistic targets needed,\nor a much too low effective_cache setting. \n\ngnari\n\n\n", "msg_date": "Thu, 13 Jan 2005 00:50:16 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough?" }, { "msg_contents": "Ragnar Hafsta� wrote:\n> \n> \n> \n> it is not rational to have random_page_cost < 1.\n>\nI agree, in theory one should never *need* to set it < 1. However in\ncases when the optimizers understanding of things is a little off,\ncompensation may be required to achieve better plans (e.g. encouraging\nindex scans on data with funny distributions or collelations).\n\n> if you see improvement with such a setting, it is as likely that \n> something else is wrong, such as higher statistic targets needed,\n> or a much too low effective_cache setting. \n> \nAltho this is good advice, it is not always sufficient. For instance I\nhave my effective_cache_size=20000. Now the machine has 512Mb ram and\nright now cache+buf+free is about 100M, and shared_buffers=2000. So in\nfact I probably have it a bit high :-).\n\nIncreasing stats target will either make the situation better or worse -\na better sample of data is obtained for analysis, but this is not\n*guaranteed* to lead to a faster execution plan, even if in\ngeneral/usually it does.\n\ncheers\n\nMark\n\n", "msg_date": "Thu, 13 Jan 2005 15:11:12 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough?" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> I happen to have some debugging code enabled for the optimizer, and the\n> issue appears to be that the costs of paths using these indexes are\n> quite similar, so are quite sensitive to (some) parameter values.\n\nThey'll be exactly the same, actually, as long as the thing predicts\nexactly one row retrieved. So it's quasi-random which plan you get.\n\nbtcostestimate needs to be improved to understand that in multicolumn\nindex searches with inequality conditions, we may have to scan through\ntuples that don't meet all the qualifications. It's not accounting for\nthat cost at the moment, which is why the estimates are the same.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Jan 2005 21:39:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough? 
" }, { "msg_contents": "Tom Lane wrote:\n> Mark Kirkwood <[email protected]> writes:\n> the costs of paths using these indexes are\n>>quite similar, so are quite sensitive to (some) parameter values.\n> \n> \n> They'll be exactly the same, actually, as long as the thing predicts\n> exactly one row retrieved. So it's quasi-random which plan you get.\n> \n> btcostestimate needs to be improved to understand that in multicolumn\n> index searches with inequality conditions, we may have to scan through\n> tuples that don't meet all the qualifications. It's not accounting for\n> that cost at the moment, which is why the estimates are the same.\n> \nI see some small differences in the numbers - I am thinking that these\nare due to the calculations etc in cost_index(). e.g:\n\ncreate_index_paths : index oid 12616389 (test_id2)\ncost_index : cost=2.839112 (startup_cost=0.000000 run_cost=2.839112)\n : tuples=1.000000 cpu_per_tuple=0.017500\n : selectivity=0.000002\n : run_index_tot_cost=2.003500 run_io_cost=0.818112)\n\ncreate_index_paths : index oid 12616388 (test_id1)\ncost_index : cost=2.933462 (startup_cost=0.002500 run_cost=2.930962)\n : tuples=1.000000 cpu_per_tuple=0.010000\n : selectivity=0.000002\n : run_index_tot_cost=2.008500 run_io_cost=0.912462\n\n\nWhere:\n\nrun_index_tot_cost=indexTotalCost - indexStartupCost;\nrun_io_cost=max_IO_cost + csquared * (min_IO_cost - max_IO_cost)\nselectivity=indexSelectivity\n\nHmmm ... so it's only the selectivity that is the same (sourced from\nindex->amcostestimate which I am guessing points to btcostestimate), is\nthat correct?\n\ncheers\n\nMark\n\n\n", "msg_date": "Thu, 13 Jan 2005 22:02:15 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough?" }, { "msg_contents": "Mark Kirkwood <[email protected]> writes:\n> Hmmm ... so it's only the selectivity that is the same (sourced from\n> index->amcostestimate which I am guessing points to btcostestimate), is\n> that correct?\n\nNo, the point is that btcostestimate will compute not only the same\nselectivities but the identical index access cost values, because it\nthinks that only one index entry will be fetched in both cases. It\nneeds to account for the fact that the inequality condition will cause a\nscan over a larger range of the index than is actually returned. See\n_bt_preprocess_keys() and _bt_checkkeys().\n\nThe small differences you are showing have to do with different\nassumptions about where the now() function will get evaluated (once per\nrow or once at scan start). That's not the effect that I'm worried\nabout.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2005 10:24:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres Optimizer is not smart enough? " } ]
[ { "msg_contents": "Hi,\n\njust want to share with all of you a wierd thing that i found when i \ntested it.\n\ni was doing a query that will call a function long2ip to convert bigint \nto ips.\n\nso the query looks something like this.\n\nselect id, long2ip(srcip), long2ip(dstip) from sometable\nwhere timestamp between timestamp '01-10-2005' and timestamp '01-10-2005 \n23:59' order by id limit 30;\n\nfor your info, there are about 300k rows for that timeframe.\n\nit cost me about 57+ secs to get the list.\n\nwhich is about the same if i query\nselect id, long2ip(srcip), long2ip(dstip) from sometable\nwhere timestamp between timestamp '01-10-2005' and timestamp '01-10-2005 \n23:59'\n\nit will cost me about 57+ secs also.\n\nNow if i did this\nselect id,long2ip(srcip), long2ip(dstip) from (\n* from sometable\nwhere timestamp between timestamp '01-10-2005' and timestamp '01-10-2005 \n23:59' order by id limit 30) as t;\n\nit will cost me about 3+ secs\n\nAnyone knows why this is the case?\n\nHasnul\n\n\n\n\n", "msg_date": "Thu, 13 Jan 2005 16:34:28 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Performance delay" }, { "msg_contents": "Hasnul Fadhly bin Hasan wrote:\n> Hi,\n> \n> just want to share with all of you a wierd thing that i found when i \n> tested it.\n> \n> i was doing a query that will call a function long2ip to convert bigint \n> to ips.\n> \n> so the query looks something like this.\n> \n> select id, long2ip(srcip), long2ip(dstip) from sometable\n> where timestamp between timestamp '01-10-2005' and timestamp '01-10-2005 \n> 23:59' order by id limit 30;\n> \n> for your info, there are about 300k rows for that timeframe.\n> \n> it cost me about 57+ secs to get the list.\n> \n> which is about the same if i query\n> select id, long2ip(srcip), long2ip(dstip) from sometable\n> where timestamp between timestamp '01-10-2005' and timestamp '01-10-2005 \n> 23:59'\n> \n> it will cost me about 57+ secs also.\n> \n> Now if i did this\n> select id,long2ip(srcip), long2ip(dstip) from (\n> * from sometable\n> where timestamp between timestamp '01-10-2005' and timestamp '01-10-2005 \n> 23:59' order by id limit 30) as t;\n> \n> it will cost me about 3+ secs\n\nThe difference will be that in the final case you only make 30 calls to \nlong2ip() whereas in the first two you call it 300,000 times and then \nthrow away most of them.\nTry running EXPLAIN ANALYSE ... for both - that will show how PG is \nplanning the query.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 13 Jan 2005 11:02:04 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance delay" }, { "msg_contents": "Hi Richard,\n\nThanks for the reply.. is that the case? 
i thought it would comply to \nthe where condition first..\nand after that it will format the output to what we want..\n\nHasnul\n\nRichard Huxton wrote:\n\n> Hasnul Fadhly bin Hasan wrote:\n>\n>> Hi,\n>>\n>> just want to share with all of you a wierd thing that i found when i \n>> tested it.\n>>\n>> i was doing a query that will call a function long2ip to convert \n>> bigint to ips.\n>>\n>> so the query looks something like this.\n>>\n>> select id, long2ip(srcip), long2ip(dstip) from sometable\n>> where timestamp between timestamp '01-10-2005' and timestamp \n>> '01-10-2005 23:59' order by id limit 30;\n>>\n>> for your info, there are about 300k rows for that timeframe.\n>>\n>> it cost me about 57+ secs to get the list.\n>>\n>> which is about the same if i query\n>> select id, long2ip(srcip), long2ip(dstip) from sometable\n>> where timestamp between timestamp '01-10-2005' and timestamp \n>> '01-10-2005 23:59'\n>>\n>> it will cost me about 57+ secs also.\n>>\n>> Now if i did this\n>> select id,long2ip(srcip), long2ip(dstip) from (\n>> * from sometable\n>> where timestamp between timestamp '01-10-2005' and timestamp \n>> '01-10-2005 23:59' order by id limit 30) as t;\n>>\n>> it will cost me about 3+ secs\n>\n>\n> The difference will be that in the final case you only make 30 calls \n> to long2ip() whereas in the first two you call it 300,000 times and \n> then throw away most of them.\n> Try running EXPLAIN ANALYSE ... for both - that will show how PG is \n> planning the query.\n> -- \n> Richard Huxton\n> Archonet Ltd\n>\n>\n\n", "msg_date": "Thu, 13 Jan 2005 19:14:10 +0800", "msg_from": "Hasnul Fadhly bin Hasan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance delay" }, { "msg_contents": "\n\tHello,\n\n\tHere I'm implementing a session management, which has a connections table \npartitioned between active and archived connections. A connection \nrepresents a connection between a user and a chatroom.\n\n\tI use partitioning for performance reasons.\n\n\tThe active table contains all the data for the active session : user_id, \nchatroom_id, session start time, and other information.\n\tThe archive table contains just the user_id, chatroom_id, session start \nand end time, for logging purposes, and for displaying on the site, which \nuser was logged to which chatroom and from when to when.\n\n\tThus, when a user disconnects from a chatroom, I must move one row from \nthe active to the archive table. This poses no problem as there is a \nUNIQUE index (iser_id,chatroom_id) so I select the row FOR UPDATE, insert \nit in the archive table, then delete it.\n\n\tNow, when a user logs out from the site, or when his session is purged by \nthe auto-expiration cron job, I must also expire ALL his open chatroom \nconnections.\n\tINSERT INTO archive (...) SELECT ... FROM active WHERE user_id = ...;\n\tDELETE FROM active WHERE user_id = ...;\n\n\tNow, if the user inserts a connection between the two queries above, the \nthing will fail (the connection will just be deleted). 
I know that there \nare many ways to do it right :\n\t- LOCK the table in exclusive mode\n\t- use an additional primary key on the active table which is not related \nto the user_id and the chatroom_id, select the id's of the sessions to \nexpire in a temporary table, and use that\n\t- use an extra field in the table to mark that the rows are being \nprocessed\n\t- use transaction isolation level SERIALIZABLE\n\n\tHowever, all these methods somehow don't feel right, and as this is an \noften encountered problem, I'd really like to have a sql command, say \nMOVE, or SELECT AND DELETE, whatever, which acts like a SELECT, returning \nthe rows, but deleting them as well. Then I'd just do INSERT INTO archive \n(...) SELECT ... AND DELETE FROM active WHERE user_id = ...;\n\n\twhich would have the following advantages :\n\t- No worries about locks :\n\t\t- less chance of bugs\n\t\t- higher performance because locks have to be waited on, by definition\n\t- No need to do the request twice (so, it is twice as fast !)\n\t- Simplicity and elegance\n\n\tThere would be an hidden bonus, that if you acquire locks, you better \nCOMMIT the transaction as soon as possible to release them, whereas here, \nyou can happily continue in the transaction.\n\n\tI think this command would make a nice cousin to the also very popular \nINSERT... OR UPDATE which tries to insert a row, and if it exists, UPDATES \nit instead of inserting it !\n\n\tWhat do you think ?\n\n\n\n\n\n", "msg_date": "Thu, 13 Jan 2005 13:16:19 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "MOVE command" }, { "msg_contents": "On Thu, Jan 13, 2005 at 07:14:10PM +0800, Hasnul Fadhly bin Hasan wrote:\n> Hi Richard,\n> \n> Thanks for the reply.. is that the case? i thought it would comply to \n> the where condition first..\n> and after that it will format the output to what we want..\n\nThat is in fact exactly what it's doing. The second query is faster not\nbecause of the where clause, but because of the limit clause. 
The first\nquery builds a list of id, long2ip(srcip), long2ip(dstip) for the\ntimestamp range, then it orders that list and gives you the first 30.\nThe second query builds a list of everything from sometable for the\ntimestamp range, orders it, keeps the first 30, THEN in calculates\nlong2ip based on that list of 30 items.\n\n> Hasnul\n> \n> Richard Huxton wrote:\n> \n> >Hasnul Fadhly bin Hasan wrote:\n> >\n> >>Hi,\n> >>\n> >>just want to share with all of you a wierd thing that i found when i \n> >>tested it.\n> >>\n> >>i was doing a query that will call a function long2ip to convert \n> >>bigint to ips.\n> >>\n> >>so the query looks something like this.\n> >>\n> >>select id, long2ip(srcip), long2ip(dstip) from sometable\n> >>where timestamp between timestamp '01-10-2005' and timestamp \n> >>'01-10-2005 23:59' order by id limit 30;\n> >>\n> >>for your info, there are about 300k rows for that timeframe.\n> >>\n> >>it cost me about 57+ secs to get the list.\n> >>\n> >>which is about the same if i query\n> >>select id, long2ip(srcip), long2ip(dstip) from sometable\n> >>where timestamp between timestamp '01-10-2005' and timestamp \n> >>'01-10-2005 23:59'\n> >>\n> >>it will cost me about 57+ secs also.\n> >>\n> >>Now if i did this\n> >>select id,long2ip(srcip), long2ip(dstip) from (\n> >>* from sometable\n> >>where timestamp between timestamp '01-10-2005' and timestamp \n> >>'01-10-2005 23:59' order by id limit 30) as t;\n> >>\n> >>it will cost me about 3+ secs\n> >\n> >\n> >The difference will be that in the final case you only make 30 calls \n> >to long2ip() whereas in the first two you call it 300,000 times and \n> >then throw away most of them.\n> >Try running EXPLAIN ANALYSE ... for both - that will show how PG is \n> >planning the query.\n> >-- \n> > Richard Huxton\n> > Archonet Ltd\n> >\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 13 Jan 2005 07:45:09 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance delay" } ]
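Two usable patterns fall out of this thread: Jim's point about evaluating expensive functions only after ORDER BY ... LIMIT has discarded most rows, and PFC's wished-for "select and delete", which later PostgreSQL releases cover with DELETE ... RETURNING (added in 8.2) and data-modifying CTEs (added in 9.1). The sketch below assumes the session column names from PFC's description rather than a real schema, and fills in the inner SELECT that the original fast query omitted:

    -- Evaluate long2ip() only on the 30 surviving rows, as in the fast
    -- variant discussed above:
    SELECT id, long2ip(srcip), long2ip(dstip)
    FROM (
        SELECT id, srcip, dstip
        FROM sometable
        WHERE "timestamp" BETWEEN timestamp '2005-01-10'
                              AND timestamp '2005-01-10 23:59'
        ORDER BY id
        LIMIT 30
    ) AS t;

    -- PFC's archive-and-delete in one statement, on a modern server;
    -- table and column names are assumptions based on his description:
    WITH moved AS (
        DELETE FROM active
        WHERE user_id = 123
        RETURNING user_id, chatroom_id, session_start
    )
    INSERT INTO archive (user_id, chatroom_id, session_start, session_end)
    SELECT user_id, chatroom_id, session_start, now()
    FROM moved;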
[ { "msg_contents": "Hi all,\n\n Is there a fast(er) way to get the sum of all integer values for a \ncertain condition over many thousands of rows? What I am currently doing \nis this (which takes ~5-10sec.):\n\nSELECT SUM (a.file_size) FROM file_info_1 a, file_set_1 b WHERE \na.file_name=b.fs_name AND a.file_parent_dir=b.fs_parent_dir AND \na.file_type=b.fs_type AND b.fs_backup='t';\n\n I need to keep parts of the data in two tables. I currently use \n'file_name/fs_name', 'file_parent_dir/fs_parent_dir' and \n'file_type/fs_type' to match the entries in the two tables. The \n'file_info_#' table is frequently dropped and re-created so this was the \nonly way I could think to match the data.\n\n I am hoping that maybe there is something I can do differently that \nwill return this value a lot faster (ideally within a second). I know \nthat this is heavily dependant on the system underneath but the program \nis designed for Joe/Jane User so I am trying to do what I can in the \nscript and within my DB calls to make this as efficient as possible. I \nrealise that my goal may not be viable.\n\n Here are the schemas, in case they help:\n\ntle-bu=> \\d file_info_1 Table \"public.file_info_1\"\n Column | Type | Modifiers\n-----------------+---------+----------------------------\n file_acc_time | bigint | not null\n file_group_name | text | not null\n file_group_uid | integer | not null\n file_mod_time | bigint | not null\n file_name | text | not null\n file_parent_dir | text | not null\n file_perm | text | not null\n file_size | bigint | not null\n file_type | text | not null default 'f'::text\n file_user_name | text | not null\n file_user_uid | integer | not null\nIndexes:\n \"file_info_1_display_idx\" btree (file_parent_dir, file_name, file_type)\n \"file_info_1_search_idx\" btree (file_parent_dir, file_name, file_type)\n\ntle-bu=> \\d file_set_1 Table \"public.file_set_1\"\n Column | Type | Modifiers\n---------------+---------+----------------------------\n fs_backup | boolean | not null default true\n fs_display | boolean | not null default false\n fs_name | text | not null\n fs_parent_dir | text | not null\n fs_restore | boolean | not null default false\n fs_type | text | not null default 'f'::text\nIndexes:\n \"file_set_1_sync_idx\" btree (fs_parent_dir, fs_name, fs_type)\n\n Thanks all!\n\nMadison\n", "msg_date": "Thu, 13 Jan 2005 22:31:12 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "sum of all values" }, { "msg_contents": "Madison Kelly wrote:\n> Hi all,\n> \n> Is there a fast(er) way to get the sum of all integer values for a \n> certain condition over many thousands of rows? What I am currently doing \n> is this (which takes ~5-10sec.):\n\nOK, I'm assuming you've configured PG to your satisfaction and this is \nthe only query giving you problems.\n\n> SELECT SUM (a.file_size) FROM file_info_1 a, file_set_1 b WHERE \n> a.file_name=b.fs_name AND a.file_parent_dir=b.fs_parent_dir AND \n> a.file_type=b.fs_type AND b.fs_backup='t';\n\nYou'll want to run EXPLAIN ANALYSE SELECT SUM... 
and post the output of \nthat, although the query looks straightforward enough.\n\n> Here are the schemas, in case they help:\n> \n> tle-bu=> \\d file_info_1 Table \"public.file_info_1\"\n> Column | Type | Modifiers\n> -----------------+---------+----------------------------\n> file_acc_time | bigint | not null\n> file_group_name | text | not null\n> file_group_uid | integer | not null\n> file_mod_time | bigint | not null\n> file_name | text | not null\n> file_parent_dir | text | not null\n> file_perm | text | not null\n> file_size | bigint | not null\n> file_type | text | not null default 'f'::text\n> file_user_name | text | not null\n> file_user_uid | integer | not null\n> Indexes:\n> \"file_info_1_display_idx\" btree (file_parent_dir, file_name, file_type)\n> \"file_info_1_search_idx\" btree (file_parent_dir, file_name, file_type)\n> \n> tle-bu=> \\d file_set_1 Table \"public.file_set_1\"\n> Column | Type | Modifiers\n> ---------------+---------+----------------------------\n> fs_backup | boolean | not null default true\n> fs_display | boolean | not null default false\n> fs_name | text | not null\n> fs_parent_dir | text | not null\n> fs_restore | boolean | not null default false\n> fs_type | text | not null default 'f'::text\n> Indexes:\n> \"file_set_1_sync_idx\" btree (fs_parent_dir, fs_name, fs_type)\n\n1. WHERE ARE YOUR PRIMARY KEYS???\n2. Why do you have two identical indexes on file_info_1\n3. WHERE ARE YOUR PRIMARY KEYS???\n4. Am I right in thinking that always, file_name==fs_name (i.e. they \nrepresent the same piece of information) and if so, why are you storing \nit twice? Same for _parent_dir too\n5. file_type/fs_type are being held as unbounded text? Not an index into \nsome lookup table or a varchar(N)?\n\nCan you explain what you're trying to do here - it might be you want to \nalter your database design.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 14 Jan 2005 09:39:26 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sum of all values" }, { "msg_contents": "Richard Huxton wrote:\n> Madison Kelly wrote:\n> \n>> Hi all,\n>>\n>> Is there a fast(er) way to get the sum of all integer values for a \n>> certain condition over many thousands of rows? What I am currently \n>> doing is this (which takes ~5-10sec.):\n> \n> \n> OK, I'm assuming you've configured PG to your satisfaction and this is \n> the only query giving you problems.\n\n This is a program for general consumption (hopefully... \neventually...) so I want to leave the psql config alone. Once I am \nhappier with the program I will try different tuning options and write a \nfaq though I expect 9 out of 10 users won't read it.\n\n>> SELECT SUM (a.file_size) FROM file_info_1 a, file_set_1 b WHERE \n>> a.file_name=b.fs_name AND a.file_parent_dir=b.fs_parent_dir AND \n>> a.file_type=b.fs_type AND b.fs_backup='t';\n> \n> \n> You'll want to run EXPLAIN ANALYSE SELECT SUM... 
and post the output of \n> that, although the query looks straightforward enough.\n\ntle-bu=> EXPLAIN ANALYZE SELECT SUM (a.file_size) FROM file_info_1 a, \nfile_set_1 b WHERE a.file_name=b.fs_name AND \na.file_parent_dir=b.fs_parent_dir AND a.file_type=b.fs_type AND \nb.fs_backup='t';\n \n QUERY PLAN\n----------------------------------------------------------------\n Aggregate (cost=2202.54..2202.54 rows=1 width=8) (actual \ntime=5078.744..5078.748 rows=1 loops=1)\n -> Merge Join (cost=724.94..2202.51 rows=11 width=8) (actual \ntime=3281.677..4969.719 rows=12828 loops=1)\n Merge Cond: ((\"outer\".file_parent_dir = \"inner\".fs_parent_dir) \nAND (\"outer\".file_name = \"inner\".fs_name) AND (\"outer\".file_type = \n\"inner\".fs_type))\n -> Index Scan using file_info_1_search_idx on file_info_1 a \n(cost=0.00..1317.11 rows=12828 width=104) (actual time=0.042..116.825 \nrows=12828 loops=1)\n -> Sort (cost=724.94..740.97 rows=6414 width=96) (actual \ntime=3281.516..3350.640 rows=12828 loops=1)\n Sort Key: b.fs_parent_dir, b.fs_name, b.fs_type\n -> Seq Scan on file_set_1 b (cost=0.00..319.35 \nrows=6414 width=96) (actual time=0.029..129.129 rows=12828 loops=1)\n Filter: (fs_backup = true)\n Total runtime: 5080.729 ms\n(9 rows)\n\n>> Here are the schemas, in case they help:\n>>\n>> tle-bu=> \\d file_info_1 Table \"public.file_info_1\"\n>> Column | Type | Modifiers\n>> -----------------+---------+----------------------------\n>> file_acc_time | bigint | not null\n>> file_group_name | text | not null\n>> file_group_uid | integer | not null\n>> file_mod_time | bigint | not null\n>> file_name | text | not null\n>> file_parent_dir | text | not null\n>> file_perm | text | not null\n>> file_size | bigint | not null\n>> file_type | text | not null default 'f'::text\n>> file_user_name | text | not null\n>> file_user_uid | integer | not null\n>> Indexes:\n>> \"file_info_1_display_idx\" btree (file_parent_dir, file_name, \n>> file_type)\n>> \"file_info_1_search_idx\" btree (file_parent_dir, file_name, \n>> file_type)\n>>\n>> tle-bu=> \\d file_set_1 Table \"public.file_set_1\"\n>> Column | Type | Modifiers\n>> ---------------+---------+----------------------------\n>> fs_backup | boolean | not null default true\n>> fs_display | boolean | not null default false\n>> fs_name | text | not null\n>> fs_parent_dir | text | not null\n>> fs_restore | boolean | not null default false\n>> fs_type | text | not null default 'f'::text\n>> Indexes:\n>> \"file_set_1_sync_idx\" btree (fs_parent_dir, fs_name, fs_type)\n> \n> \n> 1. WHERE ARE YOUR PRIMARY KEYS???\n> 2. Why do you have two identical indexes on file_info_1\n> 3. WHERE ARE YOUR PRIMARY KEYS???\n> 4. Am I right in thinking that always, file_name==fs_name (i.e. they \n> represent the same piece of information) and if so, why are you storing \n> it twice? Same for _parent_dir too\n> 5. file_type/fs_type are being held as unbounded text? Not an index into \n> some lookup table or a varchar(N)?\n> \n> Can you explain what you're trying to do here - it might be you want to \n> alter your database design.\n> -- \n> Richard Huxton\n> Archonet Ltd\n\n This is where I have to admit my novice level of knowledge. Until now \nI have been more concerned with \"making it work\". It is only now that I \nhave finished (more or less) the program that I have started going back \nand trying to find ways to speed it up. I have not used postgres (or \nperl or anything) before this program. I hope my questions aren't too \nbasic. 
^.^;\n\n I keep hearing about Primary Keys but I can't say that I know what \nthey are or how they are used. If I do understand, it is a way to \nreference another table's entry (using a foreign key)? The two matching \nindexes is a typo in my program that I hadn't noticed, I'll fix that asap.\n\n Here is what the database is used for:\n\n This is a backup program and I use the DB to store extended \ninformation on all selected files and directories on a partition. Each \npartition has it's own 'file_info_#' and 'file_set_#' tables where '#' \nmatches the ID stored for that partition in the DB in another table.\n\n The 'file_info_#' table stored the data that can change such as file \nsize, last modified/accessed, owing user and group and so forth. The \n'file_set_#' table stores the flags that say to include or exclude it \nfrom a backup/restore job and whether it has been selected for display \nin the file browser.\n\n In the first iteration I -used- to have the data in a single table \nand I identified the partition with a column called 'file_in_id' (or \nsomething similar). As I looked at each file on the system I would do a \ndb call to see if the entry existed and if so, update it and if not, \ninsert it. This was horribly slow though so I decided to break out into \nthe schema above.\n\n With the schema above what I do now is just drop the 'file_info_#' \ntable, recreate the table and matching indexes and then do a mass 'COPY' \nof all the file info on the partition. After this is done I read in the \nnew data from the reloaded 'file_info_#' table and sync the data in \n'file_set_#' which removes entries no longer in 'file_info_#', adds new \nones matching the parent's values and leaves the existing entries alone.\n\n I found droping the table and re-creating it a lot faster than a \n'DELETE FROM' call and it also seems to have made 'VACUUM FULL' a lot \nfaster.\n\n Thank you very much for your feedback! I hope I haven't done \nsomething -too- foolish. :p If I have, I will change it.\n\nMadison\n", "msg_date": "Fri, 14 Jan 2005 10:37:38 -0500", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sum of all values" }, { "msg_contents": "Madison Kelly wrote:\n> Richard Huxton wrote:\n> \n>> Madison Kelly wrote:\n>>\n>>> Hi all,\n>>>\n>>> Is there a fast(er) way to get the sum of all integer values for a \n>>> certain condition over many thousands of rows? What I am currently \n>>> doing is this (which takes ~5-10sec.):\n>>\n>> OK, I'm assuming you've configured PG to your satisfaction and this is \n>> the only query giving you problems.\n> \n> This is a program for general consumption (hopefully... eventually...) \n> so I want to leave the psql config alone. Once I am happier with the \n> program I will try different tuning options and write a faq though I \n> expect 9 out of 10 users won't read it.\n\nPostgreSQL is not FireFox, and you can't expect it to work efficiently \nwithout doing at least some configuration. The settings to support 100 \nsimultaneous connections on a dual-Opteron with 8GB RAM are not the same \nas on a single-user laptop.\nTake half an hour to read through the performance-tuning guide here:\n http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html\n\n>>> SELECT SUM (a.file_size) FROM file_info_1 a, file_set_1 b WHERE \n>>> a.file_name=b.fs_name AND a.file_parent_dir=b.fs_parent_dir AND \n>>> a.file_type=b.fs_type AND b.fs_backup='t';\n>>\n>> You'll want to run EXPLAIN ANALYSE SELECT SUM... 
and post the output \n>> of that, although the query looks straightforward enough.\n> \n> tle-bu=> EXPLAIN ANALYZE SELECT SUM (a.file_size) FROM file_info_1 a, \n> file_set_1 b WHERE a.file_name=b.fs_name AND \n> a.file_parent_dir=b.fs_parent_dir AND a.file_type=b.fs_type AND \n> b.fs_backup='t';\n> \n> QUERY PLAN\n> ----------------------------------------------------------------\n> Aggregate (cost=2202.54..2202.54 rows=1 width=8) (actual \n> time=5078.744..5078.748 rows=1 loops=1)\n> -> Merge Join (cost=724.94..2202.51 rows=11 width=8) (actual \n> time=3281.677..4969.719 rows=12828 loops=1)\n> Merge Cond: ((\"outer\".file_parent_dir = \"inner\".fs_parent_dir) \n> AND (\"outer\".file_name = \"inner\".fs_name) AND (\"outer\".file_type = \n> \"inner\".fs_type))\n> -> Index Scan using file_info_1_search_idx on file_info_1 a \n> (cost=0.00..1317.11 rows=12828 width=104) (actual time=0.042..116.825 \n> rows=12828 loops=1)\n> -> Sort (cost=724.94..740.97 rows=6414 width=96) (actual \n> time=3281.516..3350.640 rows=12828 loops=1)\n> Sort Key: b.fs_parent_dir, b.fs_name, b.fs_type\n> -> Seq Scan on file_set_1 b (cost=0.00..319.35 \n> rows=6414 width=96) (actual time=0.029..129.129 rows=12828 loops=1)\n> Filter: (fs_backup = true)\n> Total runtime: 5080.729 ms\n\nWell, it's slow, but that's probably your settings. Run VACUUM ANALYSE \non the tables though, it looks like you've got default statistics (It's \nexpecting exactly 1/2 the fs_backup values to be true - 6414 out of 12828).\n\n>>> Here are the schemas, in case they help:\n>>>\n>>> tle-bu=> \\d file_info_1 Table \"public.file_info_1\"\n>>> Column | Type | Modifiers\n>>> -----------------+---------+----------------------------\n>>> file_acc_time | bigint | not null\n>>> file_group_name | text | not null\n>>> file_group_uid | integer | not null\n>>> file_mod_time | bigint | not null\n>>> file_name | text | not null\n>>> file_parent_dir | text | not null\n>>> file_perm | text | not null\n>>> file_size | bigint | not null\n>>> file_type | text | not null default 'f'::text\n>>> file_user_name | text | not null\n>>> file_user_uid | integer | not null\n>>> Indexes:\n>>> \"file_info_1_display_idx\" btree (file_parent_dir, file_name, \n>>> file_type)\n>>> \"file_info_1_search_idx\" btree (file_parent_dir, file_name, \n>>> file_type)\n>>>\n>>> tle-bu=> \\d file_set_1 Table \"public.file_set_1\"\n>>> Column | Type | Modifiers\n>>> ---------------+---------+----------------------------\n>>> fs_backup | boolean | not null default true\n>>> fs_display | boolean | not null default false\n>>> fs_name | text | not null\n>>> fs_parent_dir | text | not null\n>>> fs_restore | boolean | not null default false\n>>> fs_type | text | not null default 'f'::text\n>>> Indexes:\n>>> \"file_set_1_sync_idx\" btree (fs_parent_dir, fs_name, fs_type)\n>>\n>>\n>>\n>> 1. WHERE ARE YOUR PRIMARY KEYS???\n>> 2. Why do you have two identical indexes on file_info_1\n>> 3. WHERE ARE YOUR PRIMARY KEYS???\n>> 4. Am I right in thinking that always, file_name==fs_name (i.e. they \n>> represent the same piece of information) and if so, why are you \n>> storing it twice? Same for _parent_dir too\n>> 5. file_type/fs_type are being held as unbounded text? Not an index \n>> into some lookup table or a varchar(N)?\n>>\n>> Can you explain what you're trying to do here - it might be you want \n>> to alter your database design.\n>> -- \n>> Richard Huxton\n>> Archonet Ltd\n> \n> This is where I have to admit my novice level of knowledge. 
Until now \n> I have been more concerned with \"making it work\". It is only now that I \n> have finished (more or less) the program that I have started going back \n> and trying to find ways to speed it up. I have not used postgres (or \n> perl or anything) before this program. I hope my questions aren't too \n> basic. ^.^;\n\nThere's a rule of thumb about throwing the first version of anything \naway. This could well be the time to apply that. I'd recommend getting \nbook, \"An Introduction to Database Systems\" by \"C.J.Date\". It's not an \nSQL or \"Learn X in 24 hours\" but there's plenty of those about and \nyou've managed to pick up SQL/Perl already. It will explain relational \ntheory and why it's useful to you.\n\n> I keep hearing about Primary Keys but I can't say that I know what \n> they are or how they are used. If I do understand, it is a way to \n> reference another table's entry (using a foreign key)? The two matching \n> indexes is a typo in my program that I hadn't noticed, I'll fix that asap.\n\nOK - here are a few rules-of-thumb you might find useful until you've \nread the book.\n\n1. Every piece of information should be represented explicitly.\nIf there is an order for your data, it should be based on values already \npresent, or introduce an explicit \"sort_order\" column.\n2. Every piece of information (row) should be uniquely identifiable. The \ncolumn value(s) that uniquely identify a row are known as a key. If \nthere are several keys pick one - that is your \"primary key\".\n3. Every non-key column in a table should depend on the key and nothing \nbut the key.\n4. Avoid repeating information - you can do this by following points 2,3.\n5. Avoid inconsistencies - again 2,3 will help here.\n\nLooking at file_info_1, you have no primary key. This means you can have \ntwo rows with the same (file_parent_dir, file_name) - probably not what \nyou want. Since these uniquely identify a file on a partition (afaik), \nyou could make them your primary key.\n\nAlso, you have two columns user_uid, user_name. If user_uid is the \nfile's owner and user_name is their name then user_name doesn't depend \non the primary key, but on user_uid. If one file has uid=123 and \nname=\"Fred\" then *all* files with uid=123 will have an owner with name \n\"Fred\".\n\nSo - this goes into a separate table:\nCREATE TABLE user_details (\n uid int4 NOT NULL UNIQUE,\n name text,\n PRIMARY KEY (uid)\n);\n\nThen, in file_info_1 you remove user_name, and make user_uid reference \nuser_details.uid so that you can't enter an invalid user number. If you \nneed the name, just join the two tables on user_uid=uid. The term \n\"foreign key\" is used because you're referencing the key of a \"foreign\" \ntable.\n\n> Here is what the database is used for:\n> \n> This is a backup program and I use the DB to store extended \n> information on all selected files and directories on a partition. Each \n> partition has it's own 'file_info_#' and 'file_set_#' tables where '#' \n> matches the ID stored for that partition in the DB in another table.\n> \n> The 'file_info_#' table stored the data that can change such as file \n> size, last modified/accessed, owing user and group and so forth. 
The \n> 'file_set_#' table stores the flags that say to include or exclude it \n> from a backup/restore job and whether it has been selected for display \n> in the file browser.\n\nI don't see how you'd flag a whole directory for backup with what you've \ngot, but maybe I'm missing something.\nI'd separate the information into three tables:\n file_core (id, path, name)\n file_details (id, size, last_mod, etc)\n file_backup (id, backup_flag, display_flag, etc)\nDefine file_core.id as a SERIAL (auto-generated number) and make it the \nprimary key. Define a unique constraint on file_core.(path,name). This \nlets you have a simple number referencing file_core from the other two \ntables.\nNow, if I file gets updated you only alter file_details, and if the user \ndecides to flag more/less files then you only change file_backup.\n\n> In the first iteration I -used- to have the data in a single table and \n> I identified the partition with a column called 'file_in_id' (or \n> something similar). As I looked at each file on the system I would do a \n> db call to see if the entry existed and if so, update it and if not, \n> insert it. This was horribly slow though so I decided to break out into \n> the schema above.\n\nProbably the wrong choice. Keep your design clean and simple for as long \nas you can, only mangle it once you know you've hit the limitations of \nthe database server. It might be you hit that, but since you haven't \ndone any tuning, probably not.\n\n> With the schema above what I do now is just drop the 'file_info_#' \n> table, recreate the table and matching indexes and then do a mass 'COPY' \n> of all the file info on the partition. After this is done I read in the \n> new data from the reloaded 'file_info_#' table and sync the data in \n> 'file_set_#' which removes entries no longer in 'file_info_#', adds new \n> ones matching the parent's values and leaves the existing entries alone.\n> \n> I found droping the table and re-creating it a lot faster than a \n> 'DELETE FROM' call and it also seems to have made 'VACUUM FULL' a lot \n> faster.\n\nThe VACUUM FULL is faster because it's not doing anything - the new data \nis in a brand new table. Make sure you ANALYSE the new table though.\n\n> Thank you very much for your feedback! I hope I haven't done something \n> -too- foolish. :p If I have, I will change it.\n\nNo foolishness, just inexperience. Go forth and get some books that \ncover relational theory. A day spent on the principles will save you a \nweek of work later.\n\nGood Luck!\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 14 Jan 2005 18:34:14 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sum of all values" } ]
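Richard's three-table outline translates almost directly into DDL. The sketch below is one way to apply it; the table and column names follow his outline, and the exact column lists are an assumption rather than a finished design. With a single generated key, the report query joins on one indexed integer instead of three text columns:

    CREATE TABLE file_core (
        id    serial PRIMARY KEY,
        path  text NOT NULL,
        name  text NOT NULL,
        UNIQUE (path, name)
    );

    CREATE TABLE file_details (
        id        integer PRIMARY KEY REFERENCES file_core (id) ON DELETE CASCADE,
        file_size bigint  NOT NULL,
        last_mod  bigint  NOT NULL,
        user_uid  integer NOT NULL
    );

    CREATE TABLE file_backup (
        id           integer PRIMARY KEY REFERENCES file_core (id) ON DELETE CASCADE,
        backup_flag  boolean NOT NULL DEFAULT true,
        display_flag boolean NOT NULL DEFAULT false,
        restore_flag boolean NOT NULL DEFAULT false
    );

    -- The report query then joins on one indexed integer key:
    SELECT SUM(d.file_size)
    FROM file_details d
    JOIN file_backup b ON b.id = d.id
    WHERE b.backup_flag;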
[ { "msg_contents": "Hi All,\n \nI have the following query to generate a report grouped by \"states\".\n \nSELECT distinct upper(cd.state) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d left JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country) = 'us' AND date_part('year',d.time)= 2004 GROUP BY myst\nate ORDER BY mystate;\n\n mystate | total_amount | total_fee \n---------+--------------+-----------\n | 3695 | 0\n AR | 3000 | 0\n AZ | 1399 | 0\n CA | 113100 | 6242\n FL | 121191 | 9796\n GA | 34826876 | 478888\n GEORGIA | 57990 | 3500\n IEIE | 114000 | 4849\n MD | 20000 | 1158\n MI | 906447 | 0\n NY | 8000 | 600\n PA | 6200 | 375\n SC | 25000 | 600\n TN | 1443681 | 1124\n | 13300 | 0\n(15 rows)\n\nIf you notice, my problem in this query is that the records for GA, GEORGIA appear separately. But what I want to do is to have them combined to a single entry with their values summed up . Initially we had accepted both formats as input for the state field. Also, there are some invalid entries for the state field (like the \"IEIE\" and null values), which appear because the input for state was not validated initially. These entries have to be eliminated from the report.This query did not take a long time to complete, but did not meet the needs for the report. \n \nSo, the query was rewritten to the following query which takes nearly 7-8 mins to complete on our test database:\n \nSELECT (SELECT DISTINCT pc.state FROM postalcode pc WHERE UPPER(cd.state) IN (pc.state, pc.state_code)) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country) = 'us' AND date_part('year', d.time) = 2004 GROUP BY mystate ORDER BY mystate;\n mystate | total_amount | total_fee \n----------------+--------------+-----------\n ARIZONA | 1399 | 0\n ARKANSAS | 3000 | 0\n CALIFORNIA | 113100 | 6242\n FLORIDA | 121191 | 9796\n GEORGIA | 34884866 | 482388\n MARYLAND | 20000 | 1158\n MICHIGAN | 906447 | 0\n NEW YORK | 8000 | 600\n PENNSYLVANIA | 6200 | 375\n SOUTH CAROLINA | 25000 | 600\n TENNESSEE | 1443681 | 1124\n | 130995 | 4849\n\n \nHere is the explain analyze of this query:\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1226.57..1226.58 rows=1 width=38) (actual time=362355.58..362372.09 rows=12 loops=1)\n -> Group (cost=1226.57..1226.57 rows=1 width=38) (actual time=362355.54..362367.73 rows=2197 loops=1)\n -> Sort (cost=1226.57..1226.57 rows=1 width=38) (actual time=362355.53..362356.96 rows=2197 loops=1)\n Sort Key: (subplan)\n -> Nested Loop (cost=0.00..1226.56 rows=1 width=38) (actual time=166.11..362321.46 rows=2197 loops=1)\n -> Nested Loop (cost=0.00..1220.53 rows=1 width=26) (actual time=1.68..361.32 rows=2115 loops=1)\n -> Seq Scan on customerdata cd (cost=0.00..274.32 rows=31 width=10) (actual time=0.04..29.87 rows=3303 loops=1)\n Filter: (lower((country)::text) = 'us'::text)\n -> Index Scan using data_uid_idx on data d (cost=0.00..30.08 rows=1 width=16) (actual time=0.04..0.09 rows=1 loops=3303)\n Index Cond: (d.uid = 
\"outer\".uid)\n Filter: (((what = 26) OR (what = 0) OR (what = 15)) AND ((flags = 1) OR (flags = 9) OR (flags = 10) OR (flags = 12)) AND (date_part('year'::text, \"time\") = 2004::double precision))\n -> Index Scan using merchant_purchase_data_idx on merchant_purchase mp (cost=0.00..6.01 rows=1 width=12) (actual time=0.05..0.05 rows=1 loops=2115)\n Index Cond: (\"outer\".id = mp.data_id)\n SubPlan\n -> Unique (cost=2237.12..2243.22 rows=122 width=13) (actual time=161.25..164.68 rows=1 loops=2197)\n -> Sort (cost=2237.12..2240.17 rows=1220 width=13) (actual time=161.21..161.88 rows=1033 loops=2197)\n Sort Key: state\n -> Seq Scan on postalcode pc (cost=0.00..2174.56 rows=1220 width=13) (actual time=35.79..148.33 rows=1033 loops=2197)\n Filter: ((upper(($0)::text) = (state)::text) OR (upper(($0)::text) = (state_code)::text))\n Total runtime: 362372.57 msec\n \n \nThe postalcode table is used in the query to validate the states and to combine the entries like GA and GEORGIA.\n\n\\d postalcode\n Table \"public.postalcode\"\n Column | Type | Modifiers \n------------+-----------------------+------------------------------------------------------------\n id | integer | not null default nextval('public.postalcode_id_seq'::text)\n country | character(2) | \n state | character varying(30) | \n zipcode | character varying(20) | \n city | character varying(50) | \n city_alias | character varying(20) | \n state_code | character varying(2) | \nIndexes: postalcode_country_key unique btree (country, state_code, zipcode),\n postalcode_state_code_idx btree (state_code),\n postalcode_state_idx btree (state)\n \nThe postalcode table has 70328 rows! \n \nCan some one please help me optimize this query? \n \nThanks,\nSaranya\n\n\n\t\t\n---------------------------------\nDo you Yahoo!?\n Yahoo! Mail - You care about security. So do we.\nHi All,\n \nI have the following query to generate a report grouped by \"states\".\n \nSELECT distinct upper(cd.state) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d left JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country) = 'us' AND date_part('year',d.time)= 2004 GROUP BY mystate ORDER BY mystate;\n mystate | total_amount | total_fee ---------+--------------+-----------         |         3695 |         0 AR      |         3000 |         0 AZ      |         1399 |         0 CA      |       113100 |      6242 FL      |       121191 |      9796 GA      |     34826876 |    478888 GEORGIA |        57990 |   &nbs\n p; \n 3500 IEIE    |       114000 |      4849 MD      |        20000 |      1158 MI      |       906447 |         0 NY      |         8000 |       600 PA      |         6200 |       375 SC      |        25000 |       600 TN      |      1443681 |      1124        \n |        13300 |         0(15 rows)\nIf you notice, my problem in this query is that the records for GA, GEORGIA appear separately. But what I want to do is  to have them combined to a single entry with their values summed up . Initially we had accepted both formats as input for the state field. Also, there are some invalid entries for the state field (like the \"IEIE\" and null values), which appear because the input for state was not validated initially. These entries have to be eliminated from the report.This query did not take a long time to complete, but did not meet the needs for the report. 
\n \nSo, the query was rewritten to the following query which takes nearly 7-8 mins to complete on our test database:\n \nSELECT (SELECT DISTINCT pc.state FROM postalcode pc WHERE UPPER(cd.state) IN (pc.state, pc.state_code)) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country) = 'us' AND date_part('year', d.time) = 2004 GROUP BY mystate ORDER BY mystate;    mystate     | total_amount | total_fee ----------------+--------------+----------- ARIZONA        |         1399 |         0 ARKANSAS       |         3000\n |         0 CALIFORNIA     |       113100 |      6242 FLORIDA        |       121191 |      9796 GEORGIA        |     34884866 |    482388 MARYLAND       |        20000 |      1158 MICHIGAN       |       906447 |         0 NEW YORK       |         8000 |       600 PENNSYLVANIA   |         6200\n |       375 SOUTH CAROLINA |        25000 |       600 TENNESSEE      |      1443681 |      1124                |       130995 |      4849\n \nHere is the explain analyze of this query:\n  QUERY PLAN                                                                                                      ---------------------------------------------------------------------------------------------------------------------------------------- Aggregate  (cost=1226.57..1226.58 rows=1 width=38) (actual time=362355.58..362372.09 rows=12 loops=1)   ->  Group  (cost=1226.57..1226.57 rows=1 width=38) (actual\n time=362355.54..362367.73 rows=2197 loops=1)         ->  Sort  (cost=1226.57..1226.57 rows=1 width=38) (actual time=362355.53..362356.96 rows=2197 loops=1)               Sort Key: (subplan)               ->  Nested Loop  (cost=0.00..1226.56 rows=1 width=38) (actual time=166.11..362321.46 rows=2197 loops=1)                     ->  Nested Loop  (cost=0.00..1220.53 rows=1 width=26) (actual time=1.68..361.32 rows=2115 loops=1)                           ->  Seq Scan on customerdata cd  (cost=0.00..274.32 ro\n ws=31\n width=10) (actual time=0.04..29.87 rows=3303 loops=1)                                 Filter: (lower((country)::text) = 'us'::text)                           ->  Index Scan using data_uid_idx on data d  (cost=0.00..30.08 rows=1 width=16) (actual time=0.04..0.09 rows=1 loops=3303)                                 Index Cond: (d.uid =\n \"outer\".uid)                                 Filter: (((what = 26) OR (what = 0) OR (what = 15)) AND ((flags = 1) OR (flags = 9) OR (flags = 10) OR (flags = 12)) AND (date_part('year'::text, \"time\") = 2004::double precision))                     ->  Index Scan using merchant_purchase_data_idx on merchant_purchase mp  (cost=0.00..6.01 rows=1 width=12) (actual time=0.05..0.05 rows=1 loops=2115)                           Index Cond: (\"outer\".id =\n mp.data_id)                     SubPlan                       ->  Unique  (cost=2237.12..2243.22 rows=122 width=13) (actual time=161.25..164.68 rows=1 loops=2197)                             ->  Sort  (cost=2237.12..2240.17 rows=1220 width=13) (actual time=161.21..161.88 rows=1033 loops=2197)                                   Sort Key:\n state                                   ->  Seq Scan on postalcode pc  (cost=0.00..2174.56 rows=1220 width=13) (actual time=35.79..148.33 rows=1033 loops=2197)                                         Filter: 
((upper(($0)::text) = (state)::text) OR (upper(($0)::text) = (state_code)::text)) Total runtime: 362372.57 msec  \n \nThe postalcode table is used in the query to validate the states and to combine the entries like GA and GEORGIA.\n\n\\d postalcode                                    Table \"public.postalcode\"   Column   |         Type          |                         Modifiers                          ------------+-----------------------+------------------------------------------------------------ id         | integer               | not null default\n nextval('public.postalcode_id_seq'::text) country    | character(2)          |  state      | character varying(30) |  zipcode    | character varying(20) |  city       | character varying(50) |  city_alias | character varying(20) |  state_code | character varying(2)  | Indexes: postalcode_country_key unique btree (country, state_code, zipcode),         postalcode_state_code_idx btree (state_code),         postalcode_state_idx btree (state)\n \nThe postalcode table has 70328 rows! \n \nCan some one please help me optimize this query? \n \nThanks,\nSaranya\nDo you Yahoo!?\nYahoo! Mail - You care about security. So do we.", "msg_date": "Fri, 14 Jan 2005 06:39:30 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": true, "msg_subject": "query optimization help" } ]
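Separately from the state-name problem, the plan above shows date_part('year', d.time) = 2004 being applied as a per-row filter, and written that way it can never use an index on d.time. Below is a sketch of the same report with an equivalent, index-friendly range predicate (assuming d.time is a timestamp column, as the plan output suggests); the state normalization itself is taken up in the follow-up thread that comes next:

    SELECT upper(cd.state) AS mystate,
           SUM(d.amount) AS total_amount,
           SUM(COALESCE(d.fee, 0) + COALESCE(mp.seller_fee, 0)
               + COALESCE(mp.buyer_fee, 0)) AS total_fee
    FROM data d
    JOIN customerdata cd ON d.uid = cd.uid
    LEFT JOIN merchant_purchase mp ON d.id = mp.data_id
    WHERE d.what IN (26, 0, 15)
      AND d.flags IN (1, 9, 10, 12)
      AND lower(cd.country) = 'us'
      AND d.time >= timestamp '2004-01-01'
      AND d.time <  timestamp '2005-01-01'
    GROUP BY mystate
    ORDER BY mystate;

    -- An index on data."time" (followed by ANALYZE) lets the planner use
    -- the range directly instead of filtering every matching row:
    CREATE INDEX data_time_idx ON data ("time");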
[ { "msg_contents": "Please post in plaintext, not html where possible.\nYour group by clause was 'myst'...was that supposed to be mystate?\n\nHer is something you might try...use the original query form and create a function which resolves the state code from the input data...you are already doing that with upper.\n\nSo,\n\ncreate function get_state_code(text) returns char(2) as \n$$\n\tselect case when len($1) = 2 \n\t\tthen upper($1)\n\t\telse lookup_state_code($1)\n\t\tend;\n$$\nlanguage sql stable;\n\nlookup_state_code is a similar function which is boils down to a select from a lookup table. Or, you could make a giant cast statement (when GEORGIA then GA, etc). and now your function becomes IMMUTABLE and should execute very fast. Just make sure all the states are spelled correct in the original table via domain constraint.\n\nMerlin\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of sarlav kumar\nSent: Friday, January 14, 2005 9:40 AM\nTo: pgsqlnovice; pgsqlperform\nSubject: [PERFORM] query optimization help\n\nHi All,\n \nI have the following query to generate a report grouped by \"states\".\n \nSELECT distinct upper(cd.state) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d left JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country) = 'us' AND date_part('year',d.time)= 2004 GROUP BY myst\nate ORDER BY mystate;\n\n", "msg_date": "Fri, 14 Jan 2005 10:04:49 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query optimization help" }, { "msg_contents": "Hi,\n \nThanks for the help. I actually got around with it by doing the following.\nI created a temporary table:\n \ncreate table statesnew as select distinct state,state_code from postalcode where lower(country)='us';\n \nAnd then changed the query to :\n \nSELECT (SELECT sn.state FROM statesnew sn WHERE UPPER(cd.state) IN (sn.state, sn.state_code)) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country\n) = 'us' AND date_part('year', d.time) = 2004 GROUP BY mystate ORDER BY mystate;\n \nThis worked well, as it reduced the number of entries it had to search from.\n \nI am not sure how to use the function you have written. Can you give me pointers on that?\n \nThanks,\nSaranya\n \n\n\nMerlin Moncure <[email protected]> wrote:\n\nPlease post in plaintext, not html where possible.\nYour group by clause was 'myst'...was that supposed to be mystate?\n\nYes, It is mystate. It continues on the next line:)\n\n\nHer is something you might try...use the original query form and create a function which resolves the state code from the input data...you are already doing that with upper.\n\nSo,\n\ncreate function get_state_code(text) returns char(2) as \n$$\nselect case when len($1) = 2 \nthen upper($1)\nelse lookup_state_code($1)\nend;\n$$\nlanguage sql stable;\n\nlookup_state_code is a similar function which is boils down to a select from a lookup table. Or, you could make a giant cast statement (when GEORGIA then GA, etc). 
and now your function becomes IMMUTABLE and should execute very fast. Just make sure all the states are spelled correct in the original table via domain constraint.\n\nMerlin\n\n\n\n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \nHi,\n \nThanks for the help. I actually got around with it by doing the following.\nI created a temporary table:\n \ncreate table statesnew as select distinct state,state_code from postalcode where lower(country)='us';\n \nAnd then changed the query to :\n \nSELECT (SELECT sn.state FROM statesnew sn WHERE UPPER(cd.state) IN (sn.state, sn.state_code)) as mystate, SUM(d.amount) as total_amount, SUM(COALESCE(d.fee,0) + COALESCE(mp.seller_fee, 0) + COALESCE(mp.buyer_fee,0)) as total_fee FROM data d JOIN customerdata cd ON d.uid = cd.uid LEFT JOIN merchant_purchase mp ON d.id = mp.data_id WHERE d.what IN (26,0, 15) AND d.flags IN (1,9,10,12 ) AND lower(cd.country) = 'us' AND date_part('year', d.time) = 2004 GROUP BY mystate ORDER BY mystate;\n \nThis worked well, as it reduced the number of entries it had to search from.\n \nI am not sure how to use the function you have written. Can you give me pointers on that?\n \nThanks,\nSaranya\n \nMerlin Moncure <[email protected]> wrote:\n\nPlease post in plaintext, not html where possible.Your group by clause was 'myst'...was that supposed to be mystate?\nYes, It is mystate. It continues on the next line:)\nHer is something you might try...use the original query form and create a function which resolves the state code from the input data...you are already doing that with upper.So,create function get_state_code(text) returns char(2) as $$select case when len($1) = 2 then upper($1)else lookup_state_code($1)end;$$language sql stable;lookup_state_code is a similar function which is boils down to a select from a lookup table. Or, you could make a giant cast statement (when GEORGIA then GA, etc). and now your function becomes IMMUTABLE and should execute very fast. Just make sure all the states are spelled correct in the original table via domain constraint.Merlin__________________________________________________Do You Yahoo!?Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com", "msg_date": "Fri, 14 Jan 2005 07:27:06 -0800 (PST)", "msg_from": "sarlav kumar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] query optimization help" } ]
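Merlin's sketch needs two small adjustments to run as written: PostgreSQL's length function is length(), not len(), and the dollar-quoted body requires 8.0 or later. The version below also supplies a hypothetical lookup table and lookup_state_code helper, and keeps everything as text to avoid char(2) cast surprises; it is a sketch of the approach, not the poster's actual code:

    CREATE TABLE state_codes (
        state_code text PRIMARY KEY,     -- e.g. 'GA'
        state_name text NOT NULL UNIQUE  -- e.g. 'GEORGIA'
    );

    CREATE FUNCTION lookup_state_code(text) RETURNS text AS $$
        SELECT state_code FROM state_codes WHERE state_name = upper($1);
    $$ LANGUAGE sql STABLE;

    CREATE FUNCTION get_state_code(text) RETURNS text AS $$
        SELECT CASE WHEN length($1) = 2
                    THEN upper($1)
                    ELSE lookup_state_code($1)
               END;
    $$ LANGUAGE sql STABLE;

    -- The original, fast query form then just groups on the function:
    --   SELECT get_state_code(cd.state) AS mystate, ...
    --   GROUP BY mystate ORDER BY mystate;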
[ { "msg_contents": "Alex wrote:\n> Without starting too much controvesy I hope, I would seriously\n> recommend you evaluate the AMCC Escalade 9500S SATA controller. It\n> has many of the features of a SCSI controler, but works with cheaper\n> drives, and for half the price or many SCSI controlers (9500S-8MI goes\n> for abour $500). See http://plexq.com/~aturner/3ware.pdf for their 4\n> way, 8 way and 12 way RAID benchmarks including RAID 0, RAID 5 and\n> RAID 10. If others have similar data, I would be very interested to\n> see how it stacks up against other RAID controllers.\n\nAt the risk of shaming myself with another 'me too' post, I'd like to\nsay that my experiences back this up 100%. The Escalade controllers are\nexcellent and the Raptor drives are fast and reliable (so far). With\nthe money saved from going SCSI, instead of a RAID 5 a 10 could be built\nfor roughly the same price and capacity, guess which array is going to\nbe faster?\n\nI think the danger about SATA is that many SATA components are not\nserver quality, so you have to be more careful about what you buy. For\nexample, you can't just assume your SATA backplane has hot swap lights\n(got bit by this one myself, heh). \n\nMerlin\n\n", "msg_date": "Fri, 14 Jan 2005 12:22:50 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Merlin,\n\n> I think the danger about SATA is that many SATA components are not\n> server quality, so you have to be more careful about what you buy. For\n> example, you can't just assume your SATA backplane has hot swap lights\n> (got bit by this one myself, heh).\n\nYeah, that's my big problem with anything IDE. My personal experience of \nfailure rates for IDE drives, for example, is about 1 out of 10 fails in \nservice before it's a year old; SCSI has been more like 1 out of 50. \n\nAlso, while I've seen benchmarks like Escalade's, my real-world experience has \nbeen that the full bi-directional r/w of SCSI means that it takes 2 SATA \ndrives to equal one SCSI drive in a heavy r/w application. However, ODSL is \nall SCSI so I don't have any numbers to back that up.\n\nBut one of my clients needs a new docs server, so maybe I can give an Escalade \na spin.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 14 Jan 2005 09:36:08 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n\n> Merlin,\n> \n> > I think the danger about SATA is that many SATA components are not\n> > server quality, so you have to be more careful about what you buy. For\n> > example, you can't just assume your SATA backplane has hot swap lights\n> > (got bit by this one myself, heh).\n> \n> Yeah, that's my big problem with anything IDE. My personal experience of \n> failure rates for IDE drives, for example, is about 1 out of 10 fails in \n> service before it's a year old; SCSI has been more like 1 out of 50. \n\nUm. I'm pretty sure the actual hardware is just the same stuff. It's just the\ninterface electronics that change.\n\n> Also, while I've seen benchmarks like Escalade's, my real-world experience has \n> been that the full bi-directional r/w of SCSI means that it takes 2 SATA \n> drives to equal one SCSI drive in a heavy r/w application. 
However, ODSL is \n> all SCSI so I don't have any numbers to back that up.\n\nDo we know that these SATA/IDE controllers and drives don't \"lie\" about fsync\nthe way most IDE drives do? Does the controller just automatically disable the\nwrite caching entirely?\n\nI don't recall, did someone have a program that tested the write latency of a\ndrive to test this?\n\n-- \ngreg\n\n", "msg_date": "14 Jan 2005 13:47:34 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Greg Stark wrote:\n\n>\"Merlin Moncure\" <[email protected]> writes:\n>\n> \n>\n>>Alex wrote:\n>> \n>>\n>>>Without starting too much controvesy I hope, I would seriously\n>>>recommend you evaluate the AMCC Escalade 9500S SATA controller. \n>>> \n>>>\n>.\n> \n>\n>>At the risk of shaming myself with another 'me too' post, I'd like to\n>>say that my experiences back this up 100%. The Escalade controllers are\n>>excellent and the Raptor drives are fast and reliable (so far). \n>> \n>>\n>.\n>\n>I assume AMCC == 3ware now?\n>\n>Has anyone verified that fsync is safe on these controllers? Ie, that they\n>aren't caching writes and \"lying\" about the write completing like IDE\n>drives oft\n> \n>\n\nFor those who speak highly of the Escalade controllers and/Raptor SATA \ndrives, how is the database being utilized, OLTP or primarily read \naccess? This is good information I am learning, but I also see the need \nto understand the context of how the hardware is being used.\n\nSteve Poe\n\n", "msg_date": "Mon, 28 Mar 2005 12:11:59 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "\n\"Merlin Moncure\" <[email protected]> writes:\n\n> Alex wrote:\n> > Without starting too much controvesy I hope, I would seriously\n> > recommend you evaluate the AMCC Escalade 9500S SATA controller. \n...\n> At the risk of shaming myself with another 'me too' post, I'd like to\n> say that my experiences back this up 100%. The Escalade controllers are\n> excellent and the Raptor drives are fast and reliable (so far). \n...\n\nI assume AMCC == 3ware now?\n\nHas anyone verified that fsync is safe on these controllers? Ie, that they\naren't caching writes and \"lying\" about the write completing like IDE\ndrives often do by default?\n\n-- \ngreg\n\n", "msg_date": "28 Mar 2005 15:09:29 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "\n> I assume AMCC == 3ware now?\n> \n> Has anyone verified that fsync is safe on these controllers? Ie, that they\n> aren't caching writes and \"lying\" about the write completing like IDE\n> drives often do by default?\n\nThe higher end AMCC/3ware controllers actually warn you about using\nwrite-cache. You have to explicitly turn it on within the controller\nbios.\n\nThey also have optional battery backed cache.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n> \n-- \nCommand Prompt, Inc., Your PostgreSQL solutions company. 503-667-4564\nCustom programming, 24x7 support, managed services, and hosting\nOpen Source Authors: plPHP, pgManage, Co-Authors: plPerlNG\nReliable replication, Mammoth Replicator - http://www.commandprompt.com/\n\n", "msg_date": "Mon, 28 Mar 2005 12:57:08 -0800", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n\n> > I assume AMCC == 3ware now?\n> > \n> > Has anyone verified that fsync is safe on these controllers? Ie, that they\n> > aren't caching writes and \"lying\" about the write completing like IDE\n> > drives often do by default?\n> \n> The higher end AMCC/3ware controllers actually warn you about using\n> write-cache. You have to explicitly turn it on within the controller\n> bios.\n\nWell that's a good sign.\n\nBut if they're using SATA drives my concern is that the drives themselves may\nbe doing some caching on their own. Has anyone verified that the controllers\nare disabling the drive cache or issuing flushes or doing something else to be\nsure to block the drives from caching writes?\n\n-- \ngreg\n\n", "msg_date": "28 Mar 2005 16:36:06 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Greg Stark wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n> \n> > > I assume AMCC == 3ware now?\n> > > \n> > > Has anyone verified that fsync is safe on these controllers? Ie, that they\n> > > aren't caching writes and \"lying\" about the write completing like IDE\n> > > drives often do by default?\n> > \n> > The higher end AMCC/3ware controllers actually warn you about using\n> > write-cache. You have to explicitly turn it on within the controller\n> > bios.\n> \n> Well that's a good sign.\n> \n> But if they're using SATA drives my concern is that the drives themselves may\n> be doing some caching on their own. Has anyone verified that the controllers\n> are disabling the drive cache or issuing flushes or doing something else to be\n> sure to block the drives from caching writes?\n\nI asked 3ware this at the Linuxworld Boston show and they said their\ncontroller keeps the information in cache until they are sure it is on\nthe platters and not just in the disk cache, but that is far from a 100%\nreliable report.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 28 Mar 2005 18:37:54 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" }, { "msg_contents": "Anyone using power5 platform? something like an ibm eserver p5 520\nrunning red hat linux.\n(http://www-1.ibm.com/servers/eserver/pseries/hardware/entry/520.html)?\n\nklint.\n\n+---------------------------------------+-----------------+\n: Klint Gore : \"Non rhyming :\n: EMail : [email protected] : slang - the :\n: Snail : A.B.R.I. : possibilities :\n: Mail University of New England : are useless\" :\n: Armidale NSW 2351 Australia : L.J.J. :\n: Fax : +61 2 6772 5376 : :\n+---------------------------------------+-----------------+\n", "msg_date": "Tue, 29 Mar 2005 10:40:45 +1000", "msg_from": "Klint Gore <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" 
}, { "msg_contents": "Bruce Momjian <[email protected]> writes:\n\n> I asked 3ware this at the Linuxworld Boston show and they said their\n> controller keeps the information in cache until they are sure it is on\n> the platters and not just in the disk cache, but that is far from a 100%\n> reliable report.\n\nHm. Well, keeping it in cache is one thing. But what it needs to do is not\nconfirm the write to the host OS. Unless they want to sell their battery\nbacked unit which is an expensive add-on...\n\n-- \ngreg\n\n", "msg_date": "28 Mar 2005 19:51:01 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" } ]
[ { "msg_contents": "If I have this table, function and index in Postgres 7.3.6 ...\n\n\"\"\"\nCREATE TABLE news_stories (\n id serial primary key NOT NULL,\n pub_date timestamp with time zone NOT NULL,\n ...\n)\nCREATE OR REPLACE FUNCTION get_year_trunc(timestamp with time zone) returns \ntimestamp with time zone AS 'SELECT date_trunc(\\'year\\',$1);' LANGUAGE 'SQL' \nIMMUTABLE;\nCREATE INDEX news_stories_pub_date_year_trunc ON \nnews_stories( get_year_trunc(pub_date) );\n\"\"\"\n \n...why does this query not use the index?\n \ndb=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM news_stories;\n QUERY PLAN\n---------------------------------------------------------------------------------\n Unique (cost=59597.31..61311.13 rows=3768 width=8)\n -> Sort (cost=59597.31..60454.22 rows=342764 width=8)\n Sort Key: date_trunc('year'::text, pub_date)\n -> Seq Scan on news_stories (cost=0.00..23390.55 rows=342764 \nwidth=8)\n(4 rows)\n\nThe query is noticably slow (2 seconds) on a database with 150,000+ records. \nHow can I speed it up?\n\nThanks,\nAdrian\n", "msg_date": "Fri, 14 Jan 2005 12:32:12 -0600", "msg_from": "Adrian Holovaty <[email protected]>", "msg_from_op": true, "msg_subject": "Index on a function and SELECT DISTINCT" }, { "msg_contents": "On Fri, 14 Jan 2005 12:32:12 -0600\nAdrian Holovaty <[email protected]> wrote:\n\n> If I have this table, function and index in Postgres 7.3.6 ...\n> \n> \"\"\"\n> CREATE TABLE news_stories (\n> id serial primary key NOT NULL,\n> pub_date timestamp with time zone NOT NULL,\n> ...\n> )\n> CREATE OR REPLACE FUNCTION get_year_trunc(timestamp with time zone)\n> returns timestamp with time zone AS 'SELECT date_trunc(\\'year\\',$1);'\n> LANGUAGE 'SQL' IMMUTABLE;\n> CREATE INDEX news_stories_pub_date_year_trunc ON \n> news_stories( get_year_trunc(pub_date) );\n> \"\"\"\n> \n> ...why does this query not use the index?\n> \n> db=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM\n> news_stories;\n> QUERY PLAN\n> ---------------------------------------------------------------------\n> ------------\n> Unique (cost=59597.31..61311.13 rows=3768 width=8)\n> -> Sort (cost=59597.31..60454.22 rows=342764 width=8)\n> Sort Key: date_trunc('year'::text, pub_date)\n> -> Seq Scan on news_stories (cost=0.00..23390.55\n> rows=342764 \n> width=8)\n> (4 rows)\n> \n> The query is noticably slow (2 seconds) on a database with 150,000+\n> records. How can I speed it up?\n\n It's doing a sequence scan because you're not limiting the query in\n the FROM clause. No point in using an index when you're asking for\n the entire table. 
:) \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Mon, 17 Jan 2005 10:09:42 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index on a function and SELECT DISTINCT" }, { "msg_contents": "Frank Wiles wrote:\n> Adrian Holovaty <[email protected]> wrote:\n> > If I have this table, function and index in Postgres 7.3.6 ...\n> >\n> > \"\"\"\n> > CREATE TABLE news_stories (\n> > id serial primary key NOT NULL,\n> > pub_date timestamp with time zone NOT NULL,\n> > ...\n> > )\n> > CREATE OR REPLACE FUNCTION get_year_trunc(timestamp with time zone)\n> > returns timestamp with time zone AS 'SELECT date_trunc(\\'year\\',$1);'\n> > LANGUAGE 'SQL' IMMUTABLE;\n> > CREATE INDEX news_stories_pub_date_year_trunc ON\n> > news_stories( get_year_trunc(pub_date) );\n> > \"\"\"\n> >\n> > ...why does this query not use the index?\n> >\n> > db=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM\n> > news_stories;\n> > QUERY PLAN\n> > ---------------------------------------------------------------------\n> > ------------\n> > Unique (cost=59597.31..61311.13 rows=3768 width=8)\n> > -> Sort (cost=59597.31..60454.22 rows=342764 width=8)\n> > Sort Key: date_trunc('year'::text, pub_date)\n> > -> Seq Scan on news_stories (cost=0.00..23390.55\n> > rows=342764\n> > width=8)\n> > (4 rows)\n> >\n> > The query is noticably slow (2 seconds) on a database with 150,000+\n> > records. How can I speed it up?\n>\n> It's doing a sequence scan because you're not limiting the query in\n> the FROM clause. No point in using an index when you're asking for\n> the entire table. :)\n\nAh, that makes sense. So is there a way to optimize SELECT DISTINCT queries \nthat have no WHERE clause?\n\nAdrian\n", "msg_date": "Mon, 17 Jan 2005 11:59:24 -0600", "msg_from": "Adrian Holovaty <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index on a function and SELECT DISTINCT" }, { "msg_contents": "\n\n\tTry :\n\nEXPLAIN SELECT get_year_trunc(pub_date) as foo FROM ... GROUP BY foo\n\n\tApart from that, you could use a materialized view...\n\n>> > db=# EXPLAIN SELECT DISTINCT get_year_trunc(pub_date) FROM\n\n> Ah, that makes sense. So is there a way to optimize SELECT DISTINCT \n> queries\n> that have no WHERE clause?\n>\n> Adrian\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faqs/FAQ.html\n>\n\n\n", "msg_date": "Mon, 17 Jan 2005 19:17:57 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index on a function and SELECT DISTINCT" } ]
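Written out against the table above, PFC's GROUP BY form, plus an example of the kind of restricted query where the expression index does earn its keep (the literal year is only an illustration):

SELECT get_year_trunc(pub_date) AS pub_year
FROM news_stories
GROUP BY pub_year;

-- the index only helps once there is something to look up with it,
-- e.g. pulling a single year's stories:
SELECT id, pub_date
FROM news_stories
WHERE get_year_trunc(pub_date) = '2004-01-01 00:00:00+00';

One caveat: hash aggregation only arrived in 7.4, so on the 7.3.6 server in question the GROUP BY will still be planned as a sort and the gain over DISTINCT may be modest; the bigger wins are upgrading, or precomputing the year list in a small summary table as PFC's materialized-view suggestion implies.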
[ { "msg_contents": "Greg wrote:\n> Josh Berkus <[email protected]> writes:\n> \n> > Merlin,\n> >\n> > > I think the danger about SATA is that many SATA components are not\n> > > server quality, so you have to be more careful about what you buy.\n> For\n> > > example, you can't just assume your SATA backplane has hot swap\nlights\n> > > (got bit by this one myself, heh).\n> >\n> > Yeah, that's my big problem with anything IDE. My personal\nexperience\n> of\n> > failure rates for IDE drives, for example, is about 1 out of 10\nfails in\n> > service before it's a year old; SCSI has been more like 1 out of 50.\n> \n> Um. I'm pretty sure the actual hardware is just the same stuff. It's\njust\n> the\n> interface electronics that change.\n> \n> > Also, while I've seen benchmarks like Escalade's, my real-world\n> experience has\n> > been that the full bi-directional r/w of SCSI means that it takes 2\nSATA\n> > drives to equal one SCSI drive in a heavy r/w application.\nHowever,\n> ODSL is\n> > all SCSI so I don't have any numbers to back that up.\n> \n> Do we know that these SATA/IDE controllers and drives don't \"lie\"\nabout\n> fsync\n> the way most IDE drives do? Does the controller just automatically\ndisable\n> the\n> write caching entirely?\n> \n> I don't recall, did someone have a program that tested the write\nlatency\n> of a\n> drive to test this?\n> \n> --\n> greg\n\nThe Escalades, at least, work the way they are supposed to. The raid\ncontroller supports write back/write through. Thus, you can leave fsync\non in pg with decent performance (not as good as fsync=off, though) and\ncount on the bbu to cover you in the event of a power failure. Our\ninternal testing here confirmed the controller and the disks sync when\nyou tell them to (namely escalade/raptor).\n\nMerlin\n", "msg_date": "Fri, 14 Jan 2005 14:46:17 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: which dual-CPU hardware/OS is fastest for PostgreSQL?" } ]
[ { "msg_contents": "Tom,\n\nHmmm ... I'm seeing an issue with IN() optimization -- or rather the lack of \nit -- in 8.0rc5. It seems to me that this worked better in 7.4, although \nI've not been able to load this particular database and test\n\ndm=# explain\ndm-# SELECT personid FROM mr.person_attributes_old\ndm-# WHERE personid NOT IN (SELECT \npersonid FROM mr.person_attributes);\n QUERY PLAN\n-----------------------------------------------------------------------------------\n Seq Scan on person_attributes_old (cost=0.00..3226144059.85 rows=235732 \nwidth=4)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on person_attributes (cost=0.00..12671.07 rows=405807 \nwidth=4)\n(4 rows)\n\ndm=# explain select pao.personid from mr.person_attributes_old pao\ndm-# left outer join mr.person_attributes p on pao.personid = p.personid\ndm-# where p.personid is null;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------\n Merge Left Join (cost=0.00..34281.83 rows=471464 width=4)\n Merge Cond: (\"outer\".personid = \"inner\".personid)\n Filter: (\"inner\".personid IS NULL)\n -> Index Scan using idx_opa_person on person_attributes_old pao \n(cost=0.00..13789.29 rows=471464 width=4)\n -> Index Scan using idx_pa_person on person_attributes p \n(cost=0.00..14968.25 rows=405807 width=4)\n(5 rows)\n\nIt seems like the planner ought to recognize that the first form of the query \nis optimizable into the 2nd form, and that I've seen it do so in 7.4. \nHowever, *no* amount of manipulation of query parameters I did on the 1st \nform of the query were successful in getting the planner to recognize that it \ncould use indexes for the IN() form of the query.\n\nThoughts?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 15 Jan 2005 12:23:10 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "IN() Optimization issue in 8.0rc5" }, { "msg_contents": "Josh Berkus <[email protected]> writes:\n> dm=# explain\n> dm-# SELECT personid FROM mr.person_attributes_old\n> dm-# WHERE personid NOT IN (SELECT \n> personid FROM mr.person_attributes);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------\n> Seq Scan on person_attributes_old (cost=0.00..3226144059.85 rows=235732 \n> width=4)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on person_attributes (cost=0.00..12671.07 rows=405807 \n> width=4)\n> (4 rows)\n\nHmm. What you want for a NOT IN is for it to say\n Filter: (NOT (hashed subplan))\nwhich you are not getting. What's the datatypes of the two personid\ncolumns? Is the 400k-row estimate for person_attributes reasonable?\nMaybe you need to increase work_mem (nee sort_mem) to allow a\n400k-row hash table?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 15 Jan 2005 15:53:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: IN() Optimization issue in 8.0rc5 " }, { "msg_contents": "Tom,\n\n> Hmm. What you want for a NOT IN is for it to say\n> Filter: (NOT (hashed subplan))\n> which you are not getting. What's the datatypes of the two personid\n> columns? \n\nINT\n\n> Is the 400k-row estimate for person_attributes reasonable? \n\nYes, the estimates are completely accurate.\n\n> Maybe you need to increase work_mem (nee sort_mem) to allow a\n> 400k-row hash table?\n\nAha, that's it. I thought I'd already set that, but apparently it was a \ndifferent session. Fixed. 
Thanks!\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 15 Jan 2005 13:32:27 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: IN() Optimization issue in 8.0rc5" } ]
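Spelling out the fix for anyone who hits the same plan: the setting only needs to be raised in the session that runs the query, and the value is in kilobytes (work_mem in 8.0, sort_mem in 7.4 and earlier). The 32 MB figure below is just a guess, sized generously for a hash of roughly 400k integers:

SET work_mem = 32768;    -- on 7.4: SET sort_mem = 32768;

EXPLAIN
SELECT personid FROM mr.person_attributes_old
WHERE personid NOT IN (SELECT personid FROM mr.person_attributes);

Once the hash fits, the plan's filter line should read NOT (hashed subplan) and the absurd cost estimate disappears; the LEFT JOIN ... IS NULL rewrite remains a good fallback when the inner side is too large to hash in memory.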
[ { "msg_contents": "Hi,\n\nI have the following problem. A week ago we've migrated from PGv7.2.3 to \n7.4.6. There were a lot of things in the apps to chenge but we made \nthem. But one query doesn't want to run. In the old PGv7.2.3 it passes \nfor 10 min. In the new one it gaves:\n \nDBD::Pg::st execute failed: ERROR: out of memory\n\nSo the Server was not upgrated or preconfigured, so I suppose that the \nproblem is somewhere in the configuration of the Postgres. Here I post \nthe query and the explain. I can't post the explain analyze, \nbecause:))... the query can't execute:)\nI also post the result of SHOW ALL to give a view of the server \nconfiguration.\n\nThanks in advance for all sugestions.\n\nKaloyan Iliev\n\nSHOW ALL\n\n\nname \tsetting\nadd_missing_from \ton\naustralian_timezones \toff\nauthentication_timeout \t60\ncheck_function_bodies \ton\ncheckpoint_segments \t16\ncheckpoint_timeout \t300\ncheckpoint_warning \t30\nclient_encoding \tSQL_ASCII\nclient_min_messages \tnotice\ncommit_delay \t0\ncommit_siblings \t5\ncpu_index_tuple_cost \t0.001\ncpu_operator_cost \t0.0025\ncpu_tuple_cost \t0.01\nDateStyle \tISO, DMY\ndb_user_namespace \toff\ndeadlock_timeout \t1000\ndebug_pretty_print \toff\ndebug_print_parse \toff\ndebug_print_plan \toff\ndebug_print_rewritten \toff\ndefault_statistics_target \t10\ndefault_transaction_isolation \tread committed\ndefault_transaction_read_only \toff\ndynamic_library_path \t$libdir\neffective_cache_size \t13000\nenable_hashagg \ton\nenable_hashjoin \ton\nenable_indexscan \ton\nenable_mergejoin \ton\nenable_nestloop \ton\nenable_seqscan \ton\nenable_sort \ton\nenable_tidscan \ton\nexplain_pretty_print \ton\nextra_float_digits \t0\nfrom_collapse_limit \t8\nfsync \ton\ngeqo \ton\ngeqo_effort \t1\ngeqo_generations \t0\ngeqo_pool_size \t0\ngeqo_selection_bias \t2\ngeqo_threshold \t11\njoin_collapse_limit \t8\nkrb_server_keyfile \tunset\nlc_collate \tC\nlc_ctype \tCP1251\nlc_messages \tC\nlc_monetary \tC\nlc_numeric \tC\nlc_time \tC\nlog_connections \toff\nlog_duration \toff\nlog_error_verbosity \tdefault\nlog_executor_stats \toff\nlog_hostname \toff\nlog_min_duration_statement \t-1\nlog_min_error_statement \tpanic\nlog_min_messages \tnotice\nlog_parser_stats \toff\nlog_pid \toff\nlog_planner_stats \toff\nlog_source_port \toff\nlog_statement \toff\nlog_statement_stats \toff\nlog_timestamp \ton\nmax_connections \t256\nmax_expr_depth \t10000\nmax_files_per_process \t1000\nmax_fsm_pages \t20000\nmax_fsm_relations \t1000\nmax_locks_per_transaction \t64\npassword_encryption \ton\nport \t5432\npre_auth_delay \t0\npreload_libraries \tunset\nrandom_page_cost \t4\nregex_flavor \tadvanced\nrendezvous_name \tunset\nsearch_path \t$user,public\nserver_encoding \tSQL_ASCII\nserver_version \t7.4.6\nshared_buffers \t1000\nsilent_mode \toff\nsort_mem \t1024\nsql_inheritance \toff\nssl \toff\nstatement_timeout \t0\nstats_block_level \ton\nstats_command_string \ton\nstats_reset_on_server_start \toff\nstats_row_level \ton\nstats_start_collector \ton\nsuperuser_reserved_connections \t2\nsyslog \t0\nsyslog_facility \tLOCAL0\nsyslog_ident \tpostgres\ntcpip_socket \ton\nTimeZone \tunknown\ntrace_notify \toff\ntransaction_isolation \tread committed\ntransaction_read_only \toff\ntransform_null_equals \toff\nunix_socket_directory \tunset\nunix_socket_group \tunset\nunix_socket_permissions \t511\nvacuum_mem \t8192\nvirtual_host \tunset\nwal_buffers \t8\nwal_debug \t0\nwal_sync_method \tfsync\nzero_damaged_pages \toff\n\n\n(113 rows)\n\nAnd now the query:\n\nexplain select 
UNPAID.ino,\n I.idate,\n round(UNPAID.saldo - \n ( select round(coalesce(sum(total),0),5)\n from invoices I1 \n where I1.iino = I.ino AND\n I1.istatus = 0 AND\n I1.itype = 2 )\n ,2) AS saldo,\n C.name AS client_name,\n SC.branch AS client_branch,\n I.total,\n I.nomenclature_no AS nom,\n I.subnom_no AS subnom,\n OF.description AS office, \n coalesce((select 1.2 * sum(AD.bgl_amount)::float / AC.amount\n from acc_clients AC, \n\t\t\t\t config C,\n\t\t\t\t acc_debts AD, \n\t\t\t\t debts_desc D\n\t\t\t\t\t\t where \n\t\t\t\t\t\t C.office = OF.officeid AND\n\t\t\t\t\t\t not AC.credit AND\n\t\t\t\t\t\t AC.ino = I.ino AND\n\t\t\t\t\t\t AC.transact_no = AD.transact_no AND\n\t\t\t\t\t\t AD.credit AND\n\t\t\t\t\t\t AD.debtid = D.debtid AND\n\t\t\t\t\t\t C.confid = D.refid AND\n C.oid = (select max(oid) \n from config \n\t\t\t\t\t\t\t where confid=D.refid )\n group by AC.amount ),0) AS perc, \n 1\n from invoices I,\n offices OF, \n\n (\n select nomenclature_no, \n subnom_no, \n ino, \n sum(saldo) as saldo\n from\n (\n select nomenclature_no, \n subnom_no, \n ino, \n round(sum(saldo_sign(not credit)*amount),5) AS saldo\n from acc_clients\n group by ino, nomenclature_no, subnom_no\n UNION ALL\n select c.nomenclature_no, \n c.subnom_no, \n c.ino, \n round(COALESCE(sum(p.bgl_amount), 0),5) AS saldo\n from acc_clients c, acc_payments p \n where c.transact_no = p.transact_no AND \n p.fisc_status = 4\n group by c.ino, c.nomenclature_no, c.subnom_no\n ) TTUNPAID\n group by ino, nomenclature_no, subnom_no\n ) UNPAID,\n\n clients C,\n subnom SC\n where \n I.idate >= '01-01-2004' AND I.idate <= '01-01-2005' AND \n UNPAID.ino = I.ino AND\n I.istatus = 0 AND\n I.itype <> 2 AND\n I.nomenclature_no = C.nomenclature_no AND\n I.nomenclature_no = SC.nomenclature_no AND\n \n I.subnom_no = SC.subnom_no\n union all\n select UNPAID.ino,\n I.idate,\n round(UNPAID.saldo - \n ( select round(coalesce(sum(total),0),5)\n from invoices I1 \n where I1.iino = I.ino AND\n I1.istatus = 0 AND\n I1.itype = 2 )\n ,2) AS saldo,\n C.name AS client_name,\n SC.branch AS client_branch,\n I.total,\n I.nomenclature_no AS nom,\n I.subnom_no AS subnom,\n '����������' AS office,\n coalesce((select 1.2 * sum(AD.bgl_amount)::float / AC.amount\n from acc_clients AC, \n\t\t\t\t acc_debts AD, \n\t\t\t\t debts_desc D\n\t\t\t\t\t\t where \n\t\t\t\t\t\t not AC.credit AND\n\t\t\t\t\t\t AC.ino = I.ino AND\n\t\t\t\t\t\t AC.transact_no = AD.transact_no AND\n\t\t\t\t\t\t AD.credit AND\n\t\t\t\t\t\t AD.debtid = D.debtid AND\n\t\t\t\t\t\t D.refid is null\n group by AC.amount ),0) AS perc, \n 1\n from invoices I,\n ( select nomenclature_no, \n subnom_no, \n ino, \n round(sum(saldo_sign(not credit)*amount),5) AS saldo\n from acc_clients\n group by ino, nomenclature_no, subnom_no ) UNPAID,\n clients C,\n subnom SC\n where \n I.idate >= '01-01-2004' AND I.idate <= '01-01-2005' AND \n UNPAID.ino = I.ino AND\n I.istatus = 0 AND\n I.itype <> 2 AND\n I.nomenclature_no = C.nomenclature_no AND\n I.nomenclature_no = SC.nomenclature_no AND\n exists (select 1\n from acc_clients AC,\n acc_debts AD,\n debts_desc DD\n where AC.ino = I.ino AND\n AD.transact_no = AC.transact_no AND\n AD.debtid = DD.debtid AND\n DD.refid is null ) AND \n I.subnom_no = SC.subnom_no order by office, ino DESC\n\n\nQUERY PLAN\nSort (cost=453579405.72..453585516.16 rows=2444177 width=108)\nSort Key: office, ino\n-> Append (cost=93725.37..452807307.33 rows=2444177 width=108)\n-> Subquery Scan \"*SELECT* 1\" (cost=93725.37..447433349.67 rows=2418773 \nwidth=108)\n-> Nested Loop 
(cost=93725.37..447409161.94 rows=2418773 width=108)\n-> Merge Join (cost=93723.86..101789.54 rows=50867 width=94)\nMerge Cond: (\"outer\".ino = \"inner\".ino)\n-> Subquery Scan unpaid (cost=82961.98..89647.68 rows=267428 width=36)\n-> GroupAggregate (cost=82961.98..86973.40 rows=267428 width=44)\n-> Sort (cost=82961.98..83630.55 rows=267428 width=44)\nSort Key: ino, nomenclature_no, subnom_no\n-> Subquery Scan ttunpaid (cost=35143.93..49845.48 rows=267428 width=44)\n-> Append (cost=35143.93..47171.20 rows=267428 width=21)\n-> Subquery Scan \"*SELECT* 1\" (cost=35143.93..44492.88 rows=267113 \nwidth=21)\n-> GroupAggregate (cost=35143.93..41821.75 rows=267113 width=21)\n-> Sort (cost=35143.93..35811.71 rows=267113 width=21)\nSort Key: ino, nomenclature_no, subnom_no\n-> Seq Scan on acc_clients (cost=0.00..4758.13 rows=267113 width=21)\n-> Subquery Scan \"*SELECT* 2\" (cost=2672.80..2678.32 rows=315 width=20)\n-> HashAggregate (cost=2672.80..2675.17 rows=315 width=20)\n-> Nested Loop (cost=0.00..2669.65 rows=315 width=20)\n-> Index Scan using acc_payments_fisc_status_idx on acc_payments p \n(cost=0.00..892.52 rows=315 width=12)\nIndex Cond: (fisc_status = 4)\n-> Index Scan using acc_clients_transact_no_uidx on acc_clients c \n(cost=0.00..5.63 rows=1 width=16)\nIndex Cond: (c.transact_no = \"outer\".transact_no)\n-> Sort (cost=10761.89..10817.21 rows=22128 width=58)\nSort Key: i.ino\n-> Hash Join (cost=1774.86..8710.88 rows=22128 width=58)\nHash Cond: ((\"outer\".nomenclature_no = \"inner\".nomenclature_no) AND \n(\"outer\".subnom_no = \"inner\".subnom_no))\n-> Seq Scan on invoices i (cost=0.00..5556.52 rows=22292 width=24)\nFilter: ((idate >= '2004-01-01'::date) AND (idate <= '2005-01-01'::date) \nAND (istatus = 0) AND (itype <> 2))\n-> Hash (cost=1592.90..1592.90 rows=13193 width=46)\n-> Hash Join (cost=577.25..1592.90 rows=13193 width=46)\nHash Cond: (\"outer\".nomenclature_no = \"inner\".nomenclature_no)\n-> Seq Scan on subnom sc (cost=0.00..393.93 rows=13193 width=19)\n-> Hash (cost=463.20..463.20 rows=12820 width=27)\n-> Seq Scan on clients c (cost=0.00..463.20 rows=12820 width=27)\n-> Materialize (cost=1.51..2.02 rows=51 width=14)\n-> Seq Scan on offices \"of\" (cost=0.00..1.51 rows=51 width=14)\nSubPlan\n-> HashAggregate (cost=179.30..179.31 rows=1 width=19)\n-> Nested Loop (cost=0.00..179.30 rows=1 width=19)\nJoin Filter: (\"inner\".oid = (subplan))\n-> Nested Loop (cost=0.00..77.57 rows=2 width=23)\n-> Nested Loop (cost=0.00..66.58 rows=2 width=23)\n-> Index Scan using acc_clients_ino on acc_clients ac (cost=0.00..25.47 \nrows=4 width=12)\nIndex Cond: (ino = $0)\nFilter: (NOT credit)\n-> Index Scan using acc_debts_transact_no_idx on acc_debts ad \n(cost=0.00..9.71 rows=45 width=19)\nIndex Cond: (\"outer\".transact_no = ad.transact_no)\nFilter: credit\n-> Index Scan using debts_desc_pkey on debts_desc d (cost=0.00..5.48 \nrows=1 width=8)\nIndex Cond: (\"outer\".debtid = d.debtid)\n-> Index Scan using config_confid_idx on config c (cost=0.00..25.42 \nrows=1 width=8)\nIndex Cond: (c.confid = \"outer\".refid)\nFilter: (office = $2)\nSubPlan\n-> Aggregate (cost=25.43..25.43 rows=1 width=4)\n-> Index Scan using config_confid_idx on config (cost=0.00..25.40 rows=9 \nwidth=4)\nIndex Cond: (confid = $1)\n-> Aggregate (cost=5.59..5.59 rows=1 width=8)\n-> Index Scan using invoices_iino_idx on invoices i1 (cost=0.00..5.58 \nrows=1 width=8)\nIndex Cond: (iino = $0)\nFilter: ((istatus = 0) AND (itype = 2))\n-> Subquery Scan \"*SELECT* 2\" (cost=3250111.65..5373957.66 rows=25404 \nwidth=94)\n-> 
Merge Join (cost=3250111.65..5373703.62 rows=25404 width=94)\nMerge Cond: (\"outer\".ino = \"inner\".ino)\n-> Subquery Scan unpaid (cost=35143.93..44492.88 rows=267113 width=36)\n-> GroupAggregate (cost=35143.93..41821.75 rows=267113 width=21)\n-> Sort (cost=35143.93..35811.71 rows=267113 width=21)\nSort Key: ino, nomenclature_no, subnom_no\n-> Seq Scan on acc_clients (cost=0.00..4758.13 rows=267113 width=21)\n-> Sort (cost=3214967.73..3214995.39 rows=11064 width=58)\nSort Key: i.ino\n-> Hash Join (cost=3212283.98..3214224.58 rows=11064 width=58)\nHash Cond: (\"outer\".nomenclature_no = \"inner\".nomenclature_no)\n-> Merge Join (cost=3211706.73..3213082.65 rows=11867 width=39)\nMerge Cond: (\"outer\".nomenclature_no = \"inner\".nomenclature_no)\nJoin Filter: (\"inner\".subnom_no = \"outer\".subnom_no)\n-> Index Scan using subnom_nom_idx on subnom sc (cost=0.00..1135.01 \nrows=13193 width=19)\n-> Sort (cost=3211706.73..3211734.59 rows=11146 width=24)\nSort Key: i.nomenclature_no\n-> Index Scan using invoices_idate_idx on invoices i \n(cost=0.00..3210957.48 rows=11146 width=24)\nIndex Cond: ((idate >= '2004-01-01'::date) AND (idate <= \n'2005-01-01'::date))\nFilter: ((istatus = 0) AND (itype <> 2) AND (subplan))\nSubPlan\n-> Nested Loop (cost=0.00..140.00 rows=1 width=0)\n-> Nested Loop (cost=0.00..101.54 rows=7 width=4)\n-> Index Scan using acc_clients_ino on acc_clients ac (cost=0.00..25.47 \nrows=7 width=4)\nIndex Cond: (ino = $0)\n-> Index Scan using acc_debts_transact_no_idx on acc_debts ad \n(cost=0.00..9.71 rows=93 width=8)\nIndex Cond: (ad.transact_no = \"outer\".transact_no)\n-> Index Scan using debts_desc_pkey on debts_desc dd (cost=0.00..5.48 \nrows=1 width=4)\nIndex Cond: (\"outer\".debtid = dd.debtid)\nFilter: (refid IS NULL)\n-> Hash (cost=463.20..463.20 rows=12820 width=27)\n-> Seq Scan on clients c (cost=0.00..463.20 rows=12820 width=27)\nSubPlan\n-> HashAggregate (cost=77.58..77.59 rows=1 width=19)\n-> Nested Loop (cost=0.00..77.57 rows=1 width=19)\n-> Nested Loop (cost=0.00..66.58 rows=2 width=23)\n-> Index Scan using acc_clients_ino on acc_clients ac (cost=0.00..25.47 \nrows=4 width=12)\nIndex Cond: (ino = $0)\nFilter: (NOT credit)\n-> Index Scan using acc_debts_transact_no_idx on acc_debts ad \n(cost=0.00..9.71 rows=45 width=19)\nIndex Cond: (\"outer\".transact_no = ad.transact_no)\nFilter: credit\n-> Index Scan using debts_desc_pkey on debts_desc d (cost=0.00..5.48 \nrows=1 width=4)\nIndex Cond: (\"outer\".debtid = d.debtid)\nFilter: (refid IS NULL)\n-> Aggregate (cost=5.59..5.59 rows=1 width=8)\n-> Index Scan using invoices_iino_idx on invoices i1 (cost=0.00..5.58 \nrows=1 width=8)\nIndex Cond: (iino = $0)\nFilter: ((istatus = 0) AND (itype = 2))\n\n\n(114 rows)\n\n\n", "msg_date": "Mon, 17 Jan 2005 17:37:31 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem from migrating between versions!" }, { "msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> I have the following problem. A week ago we've migrated from PGv7.2.3 to \n> 7.4.6. There were a lot of things in the apps to chenge but we made \n> them. But one query doesn't want to run. In the old PGv7.2.3 it passes \n> for 10 min. 
In the new one it gaves:\n> DBD::Pg::st execute failed: ERROR: out of memory\n\nDoes setting enable_hashagg to OFF fix it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Jan 2005 11:13:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem from migrating between versions! " }, { "msg_contents": "Thanks,\n\nIt worked. I have read in the docs what this \"enable_hashagg\" do, but I \ncouldn't understand it. What does it change?\n\n From the Doc:\n-------\nenable_hashagg (boolean)\n\n Enables or disables the query planner's use of hashed aggregation\n plan types. The default is on. This is used for debugging the query\n planner. \n\n--------\n\nHow it is used to debug the query planner? And why it lower the mem usage?\n\nThank you in advance.\n\nKaloyan Iliev\n\n\n\n\nTom Lane wrote:\n\n>Kaloyan Iliev Iliev <[email protected]> writes:\n> \n>\n>>I have the following problem. A week ago we've migrated from PGv7.2.3 to \n>>7.4.6. There were a lot of things in the apps to chenge but we made \n>>them. But one query doesn't want to run. In the old PGv7.2.3 it passes \n>>for 10 min. In the new one it gaves:\n>>DBD::Pg::st execute failed: ERROR: out of memory\n>> \n>>\n>\n>Does setting enable_hashagg to OFF fix it?\n>\n>\t\t\tregards, tom lane\n>\n>\n> \n>\n", "msg_date": "Mon, 17 Jan 2005 20:02:39 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem from migrating between versions!" }, { "msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> It worked. I have read in the docs what this \"enable_hashagg\" do, but I \n> couldn't understand it. What does it change?\n\nYour original 7.4 query plan has several HashAgg steps in it, which are\ndoing aggregate/GROUP BY operations. The planner thinks that they will\nuse only nominal amounts of memory because there are only a few distinct\ngroups in each case. Evidently that is wrong and at least one of them\nis dealing with so many groups as to run out of memory. So the next\nquestion is have you ANALYZEd all of these tables recently?\n\nI wouldn't recommend turning off hashagg as a permanent solution, it\nwas just a quickie to verify my suspicion of where the memory was going.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Jan 2005 13:07:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem from migrating between versions! " }, { "msg_contents": "Hi,\n\nI am asking the prev. question because there is no change in the query \nplan (as far as I see) but the mem usage decreases from 258M to 16M.\n\nKaloyan Iliev\n\nTom Lane wrote:\n\n>Kaloyan Iliev Iliev <[email protected]> writes:\n> \n>\n>>I have the following problem. A week ago we've migrated from PGv7.2.3 to \n>>7.4.6. There were a lot of things in the apps to chenge but we made \n>>them. But one query doesn't want to run. In the old PGv7.2.3 it passes \n>>for 10 min. In the new one it gaves:\n>>DBD::Pg::st execute failed: ERROR: out of memory\n>> \n>>\n>\n>Does setting enable_hashagg to OFF fix it?\n>\n>\t\t\tregards, tom lane\n>\n>\n> \n>\n\n\n\n\n\n\n\nHi,\n\nI am asking the prev. question because there is no change in the query\nplan (as far as I see) but the mem usage decreases from 258M to 16M.\n\nKaloyan Iliev\n\nTom Lane wrote:\n\nKaloyan Iliev Iliev <[email protected]> writes:\n \n\nI have the following problem. A week ago we've migrated from PGv7.2.3 to \n7.4.6. 
There were a lot of things in the apps to chenge but we made \nthem. But one query doesn't want to run. In the old PGv7.2.3 it passes \nfor 10 min. In the new one it gaves:\nDBD::Pg::st execute failed: ERROR: out of memory\n \n\n\nDoes setting enable_hashagg to OFF fix it?\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 17 Jan 2005 20:15:31 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem from migrating between versions!" }, { "msg_contents": "Tom Lane wrote:\n\n>I wouldn't recommend turning off hashagg as a permanent solution, it\n>was just a quickie to verify my suspicion of where the memory was going.\n>\n> \n>\n\nHi,\n\nHow to understant the upper sentence? I shouldn't turn \"hashagg\" off \npermanently for this query or for the entire database. For now I turn it \noff for this query, so it can work. If I shouldn't, then what should I \ndo? Will ANALYZE resove this?\n\nKaloyan Iliev\n", "msg_date": "Mon, 17 Jan 2005 20:57:10 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem from migrating between versions!" }, { "msg_contents": "Kaloyan Iliev Iliev <[email protected]> writes:\n> Will ANALYZE resove this?\n\nTry it and find out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Jan 2005 14:23:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem from migrating between versions! " }, { "msg_contents": "Hi,\n\nI try it and it doesn't resolve the problem:(\nSo, now what? To leave it that way for this query or .... There must be \npermanent solution because if other queries behave like that?\n\nKaloyan Iliev\n\n\nTom Lane wrote:\n\n>Kaloyan Iliev Iliev <[email protected]> writes:\n> \n>\n>>Will ANALYZE resove this?\n>> \n>>\n>\n>Try it and find out.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 8: explain analyze is your friend\n>\n>\n> \n>\n\n\n\n\n\n\n\nHi,\n\nI try it and it doesn't resolve the problem:(\nSo, now what? To leave it that way for this query or .... There must be\npermanent solution because if other queries behave like that?\n\nKaloyan Iliev\n\n\nTom Lane wrote:\n\nKaloyan Iliev Iliev <[email protected]> writes:\n \n\nWill ANALYZE resove this?\n \n\n\nTry it and find out.\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 8: explain analyze is your friend", "msg_date": "Tue, 18 Jan 2005 15:43:11 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem from migrating between versions!" } ]
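A way to keep the workaround from leaking into the rest of the workload is to scope it to the transaction that runs the report; SET LOCAL reverts automatically at commit or rollback. Keeping statistics current also gives the planner a fair chance at estimating the number of groups, which is what the hash sizing depends on. A sketch:

BEGIN;
SET LOCAL enable_hashagg = off;
-- run the big UNION ALL report here
COMMIT;

ANALYZE acc_clients;
ANALYZE report;
ANALYZE invoices;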
[ { "msg_contents": "Hi to all, \n\nI have a query which counts how many elements I have in the database.\n\nSELECT count(o.id) FROM orders o\n INNER JOIN report r ON o.id=r.id_order\n INNER JOIN status s ON o.id_status=s.id\n INNER JOIN contact c ON o.id_ag=c.id\n INNER JOIN endkunde e ON o.id_endkunde=e.id\n INNER JOIN zufriden z ON r.id_zufriden=z.id\n INNER JOIN plannung v ON v.id=o.id_plannung\n INNER JOIN mpsworker w ON v.id_worker=w.id\n INNER JOIN person p ON p.id = w.id_person\n WHERE o.id_status>3 \n\nIn the tables are not quite so many rows (~ 100000).\n\nI keep the joins because in the where clause there can be also other search elemens which are searched in the other tables. \nNow the id_status from the orders table (>3) can be 4 or 6. The id_status=6 has the most bigger percentage (4 = 10%, 6 = 70% and the rest are other statuses < 4). I think this is why the planner uses \n\nI'm asking how can I improve the execution time of this query, because these tables are always increasing. And this count sometimes takes more than 10 secs and I need to run this count very offen.\n\nBest regards, \nAndy.\n\n\nThe explain:\nAggregate (cost=37931.33..37931.33 rows=1 width=4)\n -> Hash Join (cost=27277.86..37828.45 rows=41154 width=4)\n Hash Cond: (\"outer\".id_person = \"inner\".id)\n -> Hash Join (cost=27269.79..37100.18 rows=41153 width=8)\n Hash Cond: (\"outer\".id_worker = \"inner\".id)\n -> Hash Join (cost=27268.28..36378.50 rows=41152 width=8)\n Hash Cond: (\"outer\".id_endkunde = \"inner\".id)\n -> Hash Join (cost=25759.54..33326.98 rows=41151 width=12)\n Hash Cond: (\"outer\".id_ag = \"inner\".id)\n -> Hash Join (cost=25587.07..32331.51 rows=41150 width=16)\n Hash Cond: (\"outer\".id_status = \"inner\".id)\n -> Hash Join (cost=25586.00..31713.18 rows=41150 width=20)\n Hash Cond: (\"outer\".id_zufriden = \"inner\".id)\n -> Hash Join (cost=25584.85..31094.78 rows=41150 width=24)\n Hash Cond: (\"outer\".id_plannung = \"inner\".id)\n -> Hash Join (cost=24135.60..27869.53 rows=41149 width=24)\n Hash Cond: (\"outer\".id = \"inner\".id_order)\n -> Seq Scan on orders o (cost=0.00..2058.54 rows=42527 width=20)\n Filter: (id_status > 3)\n -> Hash (cost=23860.48..23860.48 rows=42848 width=8)\n -> Seq Scan on report r (cost=0.00..23860.48 rows=42848 width=8)\n -> Hash (cost=1050.80..1050.80 rows=62180 width=8)\n -> Seq Scan on plannung v (cost=0.00..1050.80 rows=62180 width=8)\n -> Hash (cost=1.12..1.12 rows=12 width=4)\n -> Seq Scan on zufriden z (cost=0.00..1.12 rows=12 width=4)\n -> Hash (cost=1.06..1.06 rows=6 width=4)\n -> Seq Scan on status s (cost=0.00..1.06 rows=6 width=4)\n -> Hash (cost=161.57..161.57 rows=4357 width=4)\n -> Seq Scan on contact c (cost=0.00..161.57 rows=4357 width=4)\n -> Hash (cost=1245.99..1245.99 rows=44299 width=4)\n -> Seq Scan on endkunde e (cost=0.00..1245.99 rows=44299 width=4)\n -> Hash (cost=1.41..1.41 rows=41 width=8)\n -> Seq Scan on mpsworker w (cost=0.00..1.41 rows=41 width=8)\n -> Hash (cost=7.66..7.66 rows=166 width=4)\n -> Seq Scan on person p (cost=0.00..7.66 rows=166 width=4)\n\n\n\n\n\n\nHi to \nall, \n \nI have a query which counts how many elements I have in the \ndatabase.\n \nSELECT count(o.id) FROM orders \no      INNER JOIN report r ON \no.id=r.id_order      INNER JOIN status s ON \no.id_status=s.id      INNER JOIN contact c ON \no.id_ag=c.id      INNER JOIN endkunde e ON \no.id_endkunde=e.id      INNER JOIN zufriden z \nON r.id_zufriden=z.id      INNER JOIN plannung \nv ON v.id=o.id_plannung      INNER JOIN \nmpsworker w ON 
v.id_worker=w.id      INNER \nJOIN person p ON p.id = w.id_person      WHERE \no.id_status>3 \n \nIn the tables are not quite so many rows (~ \n100000).\n \nI keep the joins because in the where clause there \ncan be also other search elemens which are searched in the other tables. \n\nNow the id_status from the orders table (>3) can \nbe 4 or 6. The id_status=6 has the most bigger percentage (4 = 10%, 6 = 70% \nand the rest are other statuses < 4). I think this is why the planner uses \n\n \nI'm asking how can I improve the execution time of \nthis query, because these tables are always increasing. And this count sometimes \ntakes more than 10 secs and I need to run this count very offen.\n \nBest regards, \nAndy.\n \n \nThe explain:\nAggregate  (cost=37931.33..37931.33 rows=1 \nwidth=4)  ->  Hash Join  (cost=27277.86..37828.45 \nrows=41154 width=4)        Hash Cond: \n(\"outer\".id_person = \"inner\".id)        \n->  Hash Join  (cost=27269.79..37100.18 rows=41153 \nwidth=8)              \nHash Cond: (\"outer\".id_worker = \n\"inner\".id)              \n->  Hash Join  (cost=27268.28..36378.50 rows=41152 \nwidth=8)                    \nHash Cond: (\"outer\".id_endkunde = \n\"inner\".id)                    \n->  Hash Join  (cost=25759.54..33326.98 rows=41151 \nwidth=12)                          \nHash Cond: (\"outer\".id_ag = \n\"inner\".id)                          \n->  Hash Join  (cost=25587.07..32331.51 rows=41150 \nwidth=16)                                \nHash Cond: (\"outer\".id_status = \n\"inner\".id)                                \n->  Hash Join  (cost=25586.00..31713.18 rows=41150 \nwidth=20)                                      \nHash Cond: (\"outer\".id_zufriden = \n\"inner\".id)                                      \n->  Hash Join  (cost=25584.85..31094.78 rows=41150 \nwidth=24)                                            \nHash Cond: (\"outer\".id_plannung = \n\"inner\".id)                                            \n->  Hash Join  (cost=24135.60..27869.53 rows=41149 \nwidth=24)                                                  \nHash Cond: (\"outer\".id = \n\"inner\".id_order)                                                  \n->  Seq Scan on orders o  (cost=0.00..2058.54 rows=42527 \nwidth=20)                                                        \nFilter: (id_status > \n3)                                                  \n->  Hash  (cost=23860.48..23860.48 rows=42848 \nwidth=8)                                                        \n->  Seq Scan on report r  (cost=0.00..23860.48 rows=42848 \nwidth=8)                                            \n->  Hash  (cost=1050.80..1050.80 rows=62180 \nwidth=8)                                                  \n->  Seq Scan on plannung v  (cost=0.00..1050.80 rows=62180 \nwidth=8)                                      \n->  Hash  (cost=1.12..1.12 rows=12 \nwidth=4)                                            \n->  Seq Scan on zufriden z  (cost=0.00..1.12 rows=12 \nwidth=4)                                \n->  Hash  (cost=1.06..1.06 rows=6 \nwidth=4)                                      \n->  Seq Scan on status s  (cost=0.00..1.06 rows=6 \nwidth=4)                          \n->  Hash  (cost=161.57..161.57 rows=4357 \nwidth=4)                                \n->  Seq Scan on contact c  (cost=0.00..161.57 rows=4357 \nwidth=4)                    \n->  Hash  (cost=1245.99..1245.99 rows=44299 \nwidth=4)                          \n->  Seq Scan on endkunde e  (cost=0.00..1245.99 rows=44299 \nwidth=4)              \n->  Hash  (cost=1.41..1.41 
rows=41 \nwidth=8)                    \n->  Seq Scan on mpsworker w  (cost=0.00..1.41 rows=41 \nwidth=8)        ->  Hash  \n(cost=7.66..7.66 rows=166 \nwidth=4)              \n->  Seq Scan on person p  (cost=0.00..7.66 rows=166 \nwidth=4)", "msg_date": "Mon, 17 Jan 2005 18:58:09 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing this count query" }, { "msg_contents": "\"Andrei Bintintan\" <[email protected]> writes:\n> SELECT count(o.id) FROM orders o\n> INNER JOIN report r ON o.id=r.id_order\n> INNER JOIN status s ON o.id_status=s.id\n> INNER JOIN contact c ON o.id_ag=c.id\n> INNER JOIN endkunde e ON o.id_endkunde=e.id\n> INNER JOIN zufriden z ON r.id_zufriden=z.id\n> INNER JOIN plannung v ON v.id=o.id_plannung\n> INNER JOIN mpsworker w ON v.id_worker=w.id\n> INNER JOIN person p ON p.id = w.id_person\n> WHERE o.id_status>3\n\n> I'm asking how can I improve the execution time of this query, because =\n> these tables are always increasing. And this count sometimes takes more =\n> than 10 secs and I need to run this count very offen.\n\nUnless you've increased the default value of join_collapse_limit, this\nconstruction will be forcing the join order; see\nhttp://www.postgresql.org/docs/7.4/static/explicit-joins.html\n\nI'm not sure if you can improve the join order at all --- since you only\nshowed EXPLAIN and not EXPLAIN ANALYZE, it's hard to be sure whether any\nof the steps are producing large intermediate results. But it's\nsomething to look into.\n\nYou should also ask yourself if you need to be joining so many tables at\nall. The planner seems to think that only the o/r join is really going\nto affect the result row count. I can't tell if it's right or not, but\nif this is a star schema and the other seven tables are just detail\ntables, you don't need them in order to obtain a count.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Jan 2005 12:55:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing this count query " }, { "msg_contents": "I have to do all the joins because in the where cause I can also have other \nconditions that are related to the other tables.\nFor example:\n....WHERE o.id_status>3 AND o.id_ag=72 AND v.id_worker=5 AND z.id=10.\n\nNow if these search functions are IN then the query runs faster.\n\nOne thing I could do at this point is not to make the join if that table is \nnot needed in the where clause.\n\nThis is the explain analize for the first query.\nAggregate (cost=37182.56..37182.56 rows=1 width=4) (actual \ntime=3032.126..3032.126 rows=1 loops=1)\n -> Hash Join (cost=27279.22..37079.68 rows=41154 width=4) (actual \ntime=662.600..2999.845 rows=42835 loops=1)\n Hash Cond: (\"outer\".id_endkunde = \"inner\".id)\n -> Hash Join (cost=25770.48..34068.10 rows=41153 width=8) (actual \ntime=561.112..2444.574 rows=42835 loops=1)\n Hash Cond: (\"outer\".id_worker = \"inner\".id)\n -> Hash Join (cost=25759.54..33326.98 rows=41151 width=12) \n(actual time=560.514..2361.776 rows=42835 loops=1)\n Hash Cond: (\"outer\".id_ag = \"inner\".id)\n -> Hash Join (cost=25587.07..32331.51 rows=41150 \nwidth=16) (actual time=551.505..2240.217 rows=42835 loops=1)\n Hash Cond: (\"outer\".id_status = \"inner\".id)\n -> Hash Join (cost=25586.00..31713.18 rows=41150 \nwidth=20) (actual time=551.418..2150.224 rows=42835 loops=1)\n Hash Cond: (\"outer\".id_zufriden = \n\"inner\".id)\n -> Hash Join (cost=25584.85..31094.78 \nrows=41150 width=24) (actual 
time=551.341..2057.142 rows=42835 loops=1)\n Hash Cond: (\"outer\".id_plannung = \n\"inner\".id)\n -> Hash Join \n(cost=24135.60..27869.53 rows=41149 width=24) (actual time=415.189..1162.429 \nrows=42835 loops=1)\n Hash Cond: (\"outer\".id = \n\"inner\".id_order)\n -> Seq Scan on orders o \n(cost=0.00..2058.54 rows=42527 width=20) (actual time=0.046..93.692 \nrows=42835 loops=1)\n Filter: (id_status > 3)\n -> Hash \n(cost=23860.48..23860.48 rows=42848 width=8) (actual time=414.923..414.923 \nrows=0 loops=1)\n -> Seq Scan on report r \n(cost=0.00..23860.48 rows=42848 width=8) (actual time=282.905..371.401 \nrows=42848 loops=1)\n -> Hash (cost=1050.80..1050.80 \nrows=62180 width=8) (actual time=133.505..133.505 rows=0 loops=1)\n -> Seq Scan on plannung v \n(cost=0.00..1050.80 rows=62180 width=8) (actual time=0.034..73.048 \nrows=62180 loops=1)\n -> Hash (cost=1.12..1.12 rows=12 width=4) \n(actual time=0.048..0.048 rows=0 loops=1)\n -> Seq Scan on zufriden z \n(cost=0.00..1.12 rows=12 width=4) (actual time=0.027..0.040 rows=12 loops=1)\n -> Hash (cost=1.06..1.06 rows=6 width=4) (actual \ntime=0.045..0.045 rows=0 loops=1)\n -> Seq Scan on status s (cost=0.00..1.06 \nrows=6 width=4) (actual time=0.032..0.037 rows=6 loops=1)\n -> Hash (cost=161.57..161.57 rows=4357 width=4) \n(actual time=8.973..8.973 rows=0 loops=1)\n -> Seq Scan on contact c (cost=0.00..161.57 \nrows=4357 width=4) (actual time=0.032..5.902 rows=4357 loops=1)\n -> Hash (cost=10.84..10.84 rows=42 width=4) (actual \ntime=0.557..0.557 rows=0 loops=1)\n -> Hash Join (cost=1.51..10.84 rows=42 width=4) \n(actual time=0.182..0.523 rows=41 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".id_person)\n -> Seq Scan on person p (cost=0.00..7.66 \nrows=166 width=4) (actual time=0.027..0.216 rows=166 loops=1)\n -> Hash (cost=1.41..1.41 rows=41 width=8) \n(actual time=0.125..0.125 rows=0 loops=1)\n -> Seq Scan on mpsworker w \n(cost=0.00..1.41 rows=41 width=8) (actual time=0.038..0.086 rows=41 loops=1)\n -> Hash (cost=1245.99..1245.99 rows=44299 width=4) (actual \ntime=101.257..101.257 rows=0 loops=1)\n -> Seq Scan on endkunde e (cost=0.00..1245.99 rows=44299 \nwidth=4) (actual time=0.050..59.641 rows=44301 loops=1)\nTotal runtime: 3033.230 ms\n\nThanks for help.\nAndy.\n\n----- Original Message ----- \nFrom: \"Tom Lane\" <[email protected]>\nTo: \"Andrei Bintintan\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, January 17, 2005 7:55 PM\nSubject: Re: [PERFORM] Optimizing this count query\n\n\n> \"Andrei Bintintan\" <[email protected]> writes:\n>> SELECT count(o.id) FROM orders o\n>> INNER JOIN report r ON o.id=r.id_order\n>> INNER JOIN status s ON o.id_status=s.id\n>> INNER JOIN contact c ON o.id_ag=c.id\n>> INNER JOIN endkunde e ON o.id_endkunde=e.id\n>> INNER JOIN zufriden z ON r.id_zufriden=z.id\n>> INNER JOIN plannung v ON v.id=o.id_plannung\n>> INNER JOIN mpsworker w ON v.id_worker=w.id\n>> INNER JOIN person p ON p.id = w.id_person\n>> WHERE o.id_status>3\n>\n>> I'm asking how can I improve the execution time of this query, because =\n>> these tables are always increasing. 
And this count sometimes takes more =\n>> than 10 secs and I need to run this count very offen.\n>\n> Unless you've increased the default value of join_collapse_limit, this\n> construction will be forcing the join order; see\n> http://www.postgresql.org/docs/7.4/static/explicit-joins.html\n>\n> I'm not sure if you can improve the join order at all --- since you only\n> showed EXPLAIN and not EXPLAIN ANALYZE, it's hard to be sure whether any\n> of the steps are producing large intermediate results. But it's\n> something to look into.\n>\n> You should also ask yourself if you need to be joining so many tables at\n> all. The planner seems to think that only the o/r join is really going\n> to affect the result row count. I can't tell if it's right or not, but\n> if this is a star schema and the other seven tables are just detail\n> tables, you don't need them in order to obtain a count.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n\n", "msg_date": "Tue, 18 Jan 2005 09:48:47 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing this count query " } ]
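Two concrete follow-ups to Tom's reply. First, if every foreign key in the chain is NOT NULL and actually valid, none of the detail joins can change the number of rows, so the count-only variant can be cut down to the one join that matters, keeping the full join list for the variants that filter on those tables. Second, when the full list has to stay, raising join_collapse_limit above the number of joined tables lets the planner choose the join order instead of following the textual order. A sketch, under exactly those assumptions:

-- count-only form, assuming the FK columns are NOT NULL and enforced:
SELECT count(o.id)
FROM orders o
JOIN report r ON o.id = r.id_order
WHERE o.id_status > 3;

-- nine tables are joined in the full query, so the default limit of 8
-- pins the order; raising it frees the planner to reorder:
SET join_collapse_limit = 12;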
[ { "msg_contents": "Hi,\n \nI have the go ahead of a customer to do some testing on Postgresql in a\ncouple of weeks as a replacement for Oracle.\nThe reason for the test is that the number of users of the warehouse is\ngoing to increase and this will have a serious impact on licencing costs. (I\nbet that sounds familiar)\n \nWe're running a medium sized data warehouse on a Solaris box (4CPU, 8Gb RAM)\non Oracle.\nBasically we have 2 large fact tables to deal with: one going for 400M rows,\nthe other will be hitting 1B rows soon.\n(around 250Gb of data)\n \nMy questions to the list are: has this sort of thing been attempted before?\nIf so, what where the results?\nI've been reading up on partitioned tabes on pgsql, will the performance\nbenefit will be comparable to Oracle partitioned tables?\nWhat are the gotchas? Should I be testing on 8 or the 7 version?\n \nThanks in advance for any help you may have, I'll do my best to keep\npgsql-performance up to date on the results.\n \nBest regards,\n \nMatt\n___________________________________________\nMatt Casters\ni-Bridge bvba, http://www.kettle.be <http://www.kettle.be/> \nFonteinstraat 70, 9400 OKEGEM, Belgium\nTel. 054/25.01.37\nGSM 0486/97.29.37\n \n \n\n\n\n\n\nHi,\n \nI have the go ahead \nof a customer to do some testing on Postgresql in a couple of weeks as a \nreplacement for Oracle.\n\nThe reason for the \ntest is that the number of users of the warehouse is going to increase and this \nwill have a serious impact on licencing costs. (I bet that sounds \nfamiliar)\n \nWe're running a medium sized data \nwarehouse on a Solaris box (4CPU, 8Gb RAM) on Oracle.\nBasically we have 2 large fact tables \nto deal with: one going for 400M rows, the other will be \nhitting 1B rows soon.\n(around 250Gb of \ndata)\n \nMy questions to the \nlist are: has this sort of thing been attempted before? If so, what where \nthe results?\nI've been reading up \non partitioned tabes on pgsql, will the performance benefit will be comparable \nto Oracle partitioned tables?\nWhat are the \ngotchas?  Should I be testing on 8 or the 7 version?\n \nThanks in \nadvance for any help you may have, I'll do my best to keep \npgsql-performance up to date on the results.\n \nBest \nregards,\n \nMatt\n___________________________________________\nMatt Casters\ni-Bridge bvba, http://www.kettle.be\nFonteinstraat 70, 9400 OKEGEM, Belgium\nTel. 054/25.01.37\nGSM 0486/97.29.37", "msg_date": "Tue, 18 Jan 2005 22:32:14 +0100", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": true, "msg_subject": "DWH on Postgresql" }, { "msg_contents": "Cross-posting to GENERAL for additional comment.\n\nMatt Casters wrote:\n\n> Hi,\n> \n> I have the go ahead of a customer to do some testing on Postgresql in \n> a couple of weeks as a replacement for Oracle.\n> The reason for the test is that the number of users of the warehouse \n> is going to increase and this will have a serious impact on licencing \n> costs. (I bet that sounds familiar)\n> \n> We're running a medium sized data warehouse on a Solaris box (4CPU, \n> 8Gb RAM) on Oracle.\n> Basically we have 2 large fact tables to deal with: one going for 400M \n> rows, the other will be hitting 1B rows soon.\n> (around 250Gb of data)\n\nI have heard of databases larger than 1TB on PostgreSQL. Don't have \nmuch experience with them. but here are thoughts that come to mind.\n\n> \n> My questions to the list are: has this sort of thing been attempted \n> before? 
If so, what where the results?\n\nIf you search the archives (of the General list, I think) and you will \nbe able to find people talking about databases much larger than this. \nMore \"look what PostgreSQL can do\" rather than \"I need help.\"\n\n> I've been reading up on partitioned tabes on pgsql, will the \n> performance benefit will be comparable to Oracle partitioned tables?\n\nI am not aware of any data to base such a comparison on.\n\n> What are the gotchas? \n\nA few I can think of: Cross-table indexes don't really work for \nconstraing purposes, so you need to assume that only one table will be \nactively getting inserts/updates. Secondly, you will probably need to \nconsider the level of transparency you need. If you need more \ntransparency, you can do it with views, rules, etc. (or simply having on \ninsert rules on your base table and inheriting new tables from it \nregularly).\n\nAlso, I have seen posts in the past regarding performance issues \nspecific to Solaris. You may want to research this too.\n\n> Should I be testing on 8 or the 7 version?\n>\n8. Has better cache management, meaning will likely perform better.\n\nHope this helps. It is not a typical question on the list, but if you \nstart running into issues, this is a good list to ask question on :-)\n\nBest Wishes,\nChris Travers\nMetatron Technology Consulting", "msg_date": "Sat, 22 Jan 2005 10:41:52 -0800", "msg_from": "Chris Travers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] DWH on Postgresql" } ]
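For reference, the inheritance-plus-rule arrangement Chris alludes to looks roughly like the sketch below; the fact table, columns and date ranges are invented for illustration. In 8.0 a query against the parent still scans every child (pruning by CHECK constraint comes in a later release), so the immediate payoff is maintenance: a quarter can be bulk-loaded, indexed or dropped as a table of its own.

-- hypothetical fact table split by quarter:
CREATE TABLE fact_orders (
    order_date  date    NOT NULL,
    customer_id integer NOT NULL,
    amount      numeric
);

CREATE TABLE fact_orders_2005q1 (
    CHECK (order_date >= DATE '2005-01-01' AND order_date < DATE '2005-04-01')
) INHERITS (fact_orders);

CREATE RULE fact_orders_ins_2005q1 AS
    ON INSERT TO fact_orders
    WHERE NEW.order_date >= DATE '2005-01-01' AND NEW.order_date < DATE '2005-04-01'
    DO INSTEAD
    INSERT INTO fact_orders_2005q1
    VALUES (NEW.order_date, NEW.customer_id, NEW.amount);

Queries written against fact_orders see all quarters through inheritance; bulk loads can also target the child tables directly and bypass the rule overhead.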
[ { "msg_contents": "I just wanted to bounce off the list the best way to configure disks for a\npostgresql server. My gut feeling is as follows:\n \nKeep the OS and postgresql install on seperate disks to the postgresql /data\ndirectory?\nIs a single hard disk drive acceptable for the OS and postgresql program, or\nwill this create a bottle neck? Would a multi disk array be more\nappropriate?\n \nCheers,\n \nBenjamin Wragg\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n \n\n\n\n\n\nI just wanted to \nbounce off the list the best way to configure disks for a postgresql server. \nMy gut \nfeeling is as follows:\n \nKeep the OS and \npostgresql install on seperate disks to the postgresql /data \ndirectory?\nIs a single hard \ndisk drive acceptable for the OS and postgresql program, or will this create a \nbottle neck? Would a multi disk array be more appropriate?\n \nCheers,\n \nBenjamin \nWragg", "msg_date": "Wed, 19 Jan 2005 09:03:44 +1100", "msg_from": "\"Benjamin Wragg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Disk configuration" }, { "msg_contents": "Benjamin,\n\n> I just wanted to bounce off the list the best way to configure disks for a\n> postgresql server. My gut feeling is as follows:\n>\n> Keep the OS and postgresql install on seperate disks to the postgresql\n> /data directory?\n> Is a single hard disk drive acceptable for the OS and postgresql program,\n> or will this create a bottle neck? Would a multi disk array be more\n> appropriate?\n\nAll of this depends heavily on your database size, read/write balance, and \ntransaction volume. For example, the PostgreSQL Press list runs fine on my \nsingle-drive IDE laptop (1 user, < 2mb database) but I wouldn't run the DBT2 \n(high-volume OLTP test) on it.\n\nMore info?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 18 Jan 2005 14:49:21 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk configuration" }, { "msg_contents": "The primary goal is to reduce the number of seeks a disk or array has\nto perform. Serial write throughput is much higher than random write\nthroughput. If you are performing very high volume throughput on a\nserver that is doing multiple things, then it maybe advisable to have\none partition for OS, one for postgresql binaries, one for xlog and\none for table data (or multiple if you are PG8.0). This is the\nultimate configuration, but most people don't require this level of\nseperation. If you do need this level of seperation, also bare in\nmind that table data writes are more likely to be random writes so you\nwant an array that can sustain a high levels of IO/sec, so RAID 10\nwith 6 or more drives is ideal. If you want fault tolerance, then\nRAID 1 for OS and postgresql binaries is a minimum, and I believe that\nxlog can also go on a RAID 1 unless you need more MB/sec. Ultimately\nyou will need to benchmark any configuration you build in order to\ndetermine if it's successfull and meets your needs. 
This of course\nsucks, because you don't want to buy too much because it's a waste of\n$$s.\n\nWhat I can tell you is my own experience which is a database running\nwith xlog, software and OS on a RAID 1, with Data partition running on\n3 disk RAID 5 with a database of about 3 million rows total gets an\ninsert speed of about 200 rows/sec on an average size table using a\ncompaq proliant ML370 Dual Pentium 933 w/2G RAM. Most of the DB is in\nRAM, so read times are very good with most queries returning sub\nsecond.\n\nHope this helps at least a little\n\nAlex Turner\nNetEconomist\n\n\nOn Wed, 19 Jan 2005 09:03:44 +1100, Benjamin Wragg <[email protected]> wrote:\n> \n> I just wanted to bounce off the list the best way to configure disks for a\n> postgresql server. My gut feeling is as follows: \n> \n> Keep the OS and postgresql install on seperate disks to the postgresql /data\n> directory? \n> Is a single hard disk drive acceptable for the OS and postgresql program, or\n> will this create a bottle neck? Would a multi disk array be more\n> appropriate? \n> \n> Cheers, \n> \n> Benjamin Wragg \n> \n> \n> --\n> No virus found in this outgoing message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n>\n", "msg_date": "Wed, 19 Jan 2005 10:52:35 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk configuration" }, { "msg_contents": " \nThanks. That sorts out all my questions regarding disk configuration. One\nmore regarding RAID. Is RAID 1+0 and 0+1 essentially the same at a\nperformance level?\n\nThanks,\n\nBenjamin\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Alex Turner\nSent: Thursday, 20 January 2005 2:53 AM\nTo: Benjamin Wragg\nCc: [email protected]\nSubject: Re: [PERFORM] Disk configuration\n\nThe primary goal is to reduce the number of seeks a disk or array has to\nperform. Serial write throughput is much higher than random write\nthroughput. If you are performing very high volume throughput on a server\nthat is doing multiple things, then it maybe advisable to have one partition\nfor OS, one for postgresql binaries, one for xlog and one for table data (or\nmultiple if you are PG8.0). This is the ultimate configuration, but most\npeople don't require this level of seperation. If you do need this level of\nseperation, also bare in mind that table data writes are more likely to be\nrandom writes so you want an array that can sustain a high levels of IO/sec,\nso RAID 10 with 6 or more drives is ideal. If you want fault tolerance,\nthen RAID 1 for OS and postgresql binaries is a minimum, and I believe that\nxlog can also go on a RAID 1 unless you need more MB/sec. Ultimately you\nwill need to benchmark any configuration you build in order to determine if\nit's successfull and meets your needs. This of course sucks, because you\ndon't want to buy too much because it's a waste of $$s.\n\nWhat I can tell you is my own experience which is a database running with\nxlog, software and OS on a RAID 1, with Data partition running on\n3 disk RAID 5 with a database of about 3 million rows total gets an insert\nspeed of about 200 rows/sec on an average size table using a compaq proliant\nML370 Dual Pentium 933 w/2G RAM. 
Most of the DB is in RAM, so read times\nare very good with most queries returning sub second.\n\nHope this helps at least a little\n\nAlex Turner\nNetEconomist\n\n\nOn Wed, 19 Jan 2005 09:03:44 +1100, Benjamin Wragg <[email protected]>\nwrote:\n> \n> I just wanted to bounce off the list the best way to configure disks \n> for a postgresql server. My gut feeling is as follows:\n> \n> Keep the OS and postgresql install on seperate disks to the postgresql \n> /data directory?\n> Is a single hard disk drive acceptable for the OS and postgresql \n> program, or will this create a bottle neck? Would a multi disk array \n> be more appropriate?\n> \n> Cheers,\n> \n> Benjamin Wragg\n> \n> \n> --\n> No virus found in this outgoing message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n--\nNo virus found in this incoming message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n \n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.7.1 - Release Date: 19/01/2005\n \n\n", "msg_date": "Thu, 20 Jan 2005 11:55:37 +1100", "msg_from": "\"Benjamin Wragg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disk configuration" }, { "msg_contents": "I have never seen benchmarks for RAID 0+1. Very few people use it\nbecause it's not very fault tolerant, so I couldn't answer for sure. \nI would imagine that RAID 0+1 could acheive better read throughput\nbecause you could, in theory, read from each half of the mirror\nindependantly. Write would be the same I would imagine because you\nstill have to write all data to all drives. Thats my best guess.\n\nAlex Turner\nNetEconomist\n\n\nOn Thu, 20 Jan 2005 11:55:37 +1100, Benjamin Wragg <[email protected]> wrote:\n> \n> Thanks. That sorts out all my questions regarding disk configuration. One\n> more regarding RAID. Is RAID 1+0 and 0+1 essentially the same at a\n> performance level?\n> \n> Thanks,\n> \n> Benjamin\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Alex Turner\n> Sent: Thursday, 20 January 2005 2:53 AM\n> To: Benjamin Wragg\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Disk configuration\n> \n> The primary goal is to reduce the number of seeks a disk or array has to\n> perform. Serial write throughput is much higher than random write\n> throughput. If you are performing very high volume throughput on a server\n> that is doing multiple things, then it maybe advisable to have one partition\n> for OS, one for postgresql binaries, one for xlog and one for table data (or\n> multiple if you are PG8.0). This is the ultimate configuration, but most\n> people don't require this level of seperation. If you do need this level of\n> seperation, also bare in mind that table data writes are more likely to be\n> random writes so you want an array that can sustain a high levels of IO/sec,\n> so RAID 10 with 6 or more drives is ideal. If you want fault tolerance,\n> then RAID 1 for OS and postgresql binaries is a minimum, and I believe that\n> xlog can also go on a RAID 1 unless you need more MB/sec. Ultimately you\n> will need to benchmark any configuration you build in order to determine if\n> it's successfull and meets your needs. 
This of course sucks, because you\n> don't want to buy too much because it's a waste of $$s.\n> \n> What I can tell you is my own experience which is a database running with\n> xlog, software and OS on a RAID 1, with Data partition running on\n> 3 disk RAID 5 with a database of about 3 million rows total gets an insert\n> speed of about 200 rows/sec on an average size table using a compaq proliant\n> ML370 Dual Pentium 933 w/2G RAM. Most of the DB is in RAM, so read times\n> are very good with most queries returning sub second.\n> \n> Hope this helps at least a little\n> \n> Alex Turner\n> NetEconomist\n> \n> On Wed, 19 Jan 2005 09:03:44 +1100, Benjamin Wragg <[email protected]>\n> wrote:\n> >\n> > I just wanted to bounce off the list the best way to configure disks\n> > for a postgresql server. My gut feeling is as follows:\n> >\n> > Keep the OS and postgresql install on seperate disks to the postgresql\n> > /data directory?\n> > Is a single hard disk drive acceptable for the OS and postgresql\n> > program, or will this create a bottle neck? Would a multi disk array\n> > be more appropriate?\n> >\n> > Cheers,\n> >\n> > Benjamin Wragg\n> >\n> >\n> > --\n> > No virus found in this outgoing message.\n> > Checked by AVG Anti-Virus.\n> > Version: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> --\n> No virus found in this incoming message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n> \n> \n> --\n> No virus found in this outgoing message.\n> Checked by AVG Anti-Virus.\n> Version: 7.0.300 / Virus Database: 265.7.1 - Release Date: 19/01/2005\n> \n>\n", "msg_date": "Thu, 20 Jan 2005 11:05:57 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk configuration" } ]
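
As a rough way to compare disk layouts like the ones discussed in the thread above, a single-connection insert test can be timed before and after moving pg_xlog or changing the RAID arrangement. This is only a sketch: the io_test table is made up, generate_series() assumes an 8.0 server, and real results depend heavily on fsync settings and on whether each row is its own transaction.

CREATE TABLE io_test (id integer, payload text);

-- Bulk load: mostly sequential WAL and heap writes.
INSERT INTO io_test
SELECT g, repeat('x', 100)
FROM generate_series(1, 10000) AS g;

-- Row-at-a-time inserts in autocommit mode are bound by an fsync of pg_xlog
-- at every commit, so they are far more sensitive to where the WAL lives:
INSERT INTO io_test VALUES (0, repeat('x', 100));

Timing either variant with psql's \timing, or with an external clock around a script of single-row inserts, gives a rows-per-second figure of the same kind as the numbers quoted above.
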
[ { "msg_contents": "Hello,\n I'm running PostgreSQL on a Solaris 8 system with 2GB of RAM and I'm \nhaving some difficulty getting PostgreSQL to use the available RAM. My RAM \nsettings in postgresql.conf are\n\nshared_buffers = 8192 # min 16, at least max_connections*2, 8KB each\nsort_mem = 131072 # min 64, size in KB\nvacuum_mem = 131072 # min 1024, size in KB\n\nIgnoring the fact that the sort and vacuum numbers are really high, this is \nwhat Solaris shows me when running top:\n\nMemory: 2048M real, 1376M free, 491M swap in use, 2955M swap free\n\nFor some reason I have 1.25GB of free RAM but PostgreSQL seems compelled to \nswap to the hard drive rather than use that RAM. I have the shared buffers \nset as high as the Solaris kernel will let me. I also know that Solaris \nwill cache frequently used files in RAM, thereby lowering the amount of RAM \navailable to an application, but my understanding is that Solaris will dump \nthat cache if an application or the kernel itself requires it.\n\n The system has about 1,000 active email users using unix mailboxes which \ncould what is keeping the database from exploiting as much RAM as available \nbut my primary concern is to allow PostgreSQL to use as much RAM as it \nrequires without swapping.\n\n What can I do to force the system to allow PostgreSQL to do this?\n\nRegards,\nKevin Schroeder \n\n", "msg_date": "Tue, 18 Jan 2005 16:49:35 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Swapping on Solaris" }, { "msg_contents": "Kevin Schroeder wrote:\n>\n> \n> Ignoring the fact that the sort and vacuum numbers are really high, this \n> is what Solaris shows me when running top:\n> \n> Memory: 2048M real, 1376M free, 491M swap in use, 2955M swap free\n> \nMaybe check the swap usage with 'swap -l' which reports reliably if any\n(device or file) swap is actually used.\n\nI think Solaris 'top' does some strange accounting to calculate the\n'swap in use' value (like including used memory).\n\nIt looks to me like you are using no (device or file) swap at all, and\nhave 1.3G of real memory free, so could in fact give Postgres more of it :-)\n\nregards\n\nMark\n\n", "msg_date": "Wed, 19 Jan 2005 20:40:53 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "\n> Kevin Schroeder wrote:\n> It looks to me like you are using no (device or file) swap at all, and\n> have 1.3G of real memory free, so could in fact give Postgres more of it :-)\n>\n\nIndeed.\nIf you DO run into trouble after giving Postgres more RAM, use the vmstat command.\nYou can use this command like \"vmstat 10\". (ignore the first line)\nKeep an eye on the \"pi\" and \"po\" parameters. 
(kilobytes paged in and out)\n\nHTH,\n\nMatt\n------\nMatt Casters <[email protected]>\ni-Bridge bvba, http://www.kettle.be\nFonteinstraat 70, 9400 Okegem, Belgium\nPhone +32 (0) 486/97.29.37\n\n", "msg_date": "Wed, 19 Jan 2005 10:57:10 +0100 (CET)", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Mark Kirkwood wrote:\n\n> Kevin Schroeder wrote:\n>\n>>\n>>\n>> Ignoring the fact that the sort and vacuum numbers are really high, \n>> this is what Solaris shows me when running top:\n>>\n>> Memory: 2048M real, 1376M free, 491M swap in use, 2955M swap free\n>>\n> Maybe check the swap usage with 'swap -l' which reports reliably if any\n> (device or file) swap is actually used.\n>\n> I think Solaris 'top' does some strange accounting to calculate the\n> 'swap in use' value (like including used memory).\n>\n> It looks to me like you are using no (device or file) swap at all, and\n> have 1.3G of real memory free, so could in fact give Postgres more of \n> it :-)\n\nI suspect that \"free\" memory is in fact being used for the file system \ncache. There were some changes in the meaning of \"free\" in Solaris 8 \nand 9. The memstat command gives a nice picture of memory usage on the \nsystem. I don't think memstat came with Solaris 8, but you can get it \nfrom solarisinternals.com. The Solaris Internals book is an excellent \nread as well; it explains all of this in gory detail. \n\nNote that files in /tmp are usually in a tmpfs file system. These \nfiles may be the usage of swap that you're seeing (as they will be paged \nout on an active system with some memory pressure)\n\nFinally, just as everyone suggests upgrading to newer postgresql \nreleases, you probably want to get to a newer Solaris release. \n\n-- Alan\n", "msg_date": "Wed, 19 Jan 2005 08:51:56 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "I suspect that the memory is being used to cache files as well since the \nemail boxes are using unix mailboxes, for the time being. With people \nchecking their email sometimes once per minute I can see why Solaris would \nwant to cache those files. Perhaps my question would be more appropriate to \na Solaris mailing list since what I really want to do is get Solaris to \nsimply allow PostgreSQL to use more RAM and reduce the amount of RAM used \nfor file caching. I would have thought that Solaris gives some deference to \na running application that's being swapped than for a file cache.\n\nIs there any way to set custom parameters on Solaris' file-caching behavior \nto allow PostgreSQL to use more physical RAM?\n\nI will also check out memstat. 
It's not on my system, but I'll get it from \nthe site you noted.\n\nThanks\nKevin\n\n\n----- Original Message ----- \nFrom: \"Alan Stange\" <[email protected]>\nCc: \"Kevin Schroeder\" <[email protected]>; \n<[email protected]>\nSent: Wednesday, January 19, 2005 7:51 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n> Mark Kirkwood wrote:\n>\n>> Kevin Schroeder wrote:\n>>\n>>>\n>>>\n>>> Ignoring the fact that the sort and vacuum numbers are really high, this \n>>> is what Solaris shows me when running top:\n>>>\n>>> Memory: 2048M real, 1376M free, 491M swap in use, 2955M swap free\n>>>\n>> Maybe check the swap usage with 'swap -l' which reports reliably if any\n>> (device or file) swap is actually used.\n>>\n>> I think Solaris 'top' does some strange accounting to calculate the\n>> 'swap in use' value (like including used memory).\n>>\n>> It looks to me like you are using no (device or file) swap at all, and\n>> have 1.3G of real memory free, so could in fact give Postgres more of it \n>> :-)\n>\n> I suspect that \"free\" memory is in fact being used for the file system \n> cache. There were some changes in the meaning of \"free\" in Solaris 8 and \n> 9. The memstat command gives a nice picture of memory usage on the \n> system. I don't think memstat came with Solaris 8, but you can get it \n> from solarisinternals.com. The Solaris Internals book is an excellent \n> read as well; it explains all of this in gory detail.\n> Note that files in /tmp are usually in a tmpfs file system. These files \n> may be the usage of swap that you're seeing (as they will be paged out on \n> an active system with some memory pressure)\n>\n> Finally, just as everyone suggests upgrading to newer postgresql releases, \n> you probably want to get to a newer Solaris release.\n> -- Alan\n>\n>\n> \n\n", "msg_date": "Wed, 19 Jan 2005 08:31:17 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "po and pi are relatively low, but do pick up when there's an increase in \nactivity. I am seeing a lot of \"minor faults\", though. vmstat -S 5 reports\n\n [9:38am]# vmstat -S 5\n procs memory page disk faults cpu\n r b w swap free si so pi po fr de sr s0 s1 s3 -- in sy cs us sy \nid\n 0 0 0 3235616 1414536 0 0 303 11 10 0 0 6 24 0 0 13 192 461 17 11 \n72\n 1 0 0 3004376 1274912 0 0 0 0 0 0 0 3 16 0 0 494 1147 441 52 25 \n23\n\n494 in faults\n1147 sy faults\n\nGenerally faults are a bad thing. Is that the case here?\n\nKevin\n\n----- Original Message ----- \nFrom: \"Matt Casters\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, January 19, 2005 3:57 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n\n> Kevin Schroeder wrote:\n> It looks to me like you are using no (device or file) swap at all, and\n> have 1.3G of real memory free, so could in fact give Postgres more of it \n> :-)\n>\n\nIndeed.\nIf you DO run into trouble after giving Postgres more RAM, use the vmstat \ncommand.\nYou can use this command like \"vmstat 10\". (ignore the first line)\nKeep an eye on the \"pi\" and \"po\" parameters. 
(kilobytes paged in and out)\n\nHTH,\n\nMatt\n------\nMatt Casters <[email protected]>\ni-Bridge bvba, http://www.kettle.be\nFonteinstraat 70, 9400 Okegem, Belgium\nPhone +32 (0) 486/97.29.37\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n", "msg_date": "Wed, 19 Jan 2005 08:52:28 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "I take that back. There actually is some paging going on. I ran sar -g 5 \n10 and when a request was made (totally about 10 DB queries) my pgout/s \njumped to 5.8 and my ppgout/s jumped to 121.8. pgfree/s also jumped to \n121.80.\n\nKevin\n\n----- Original Message ----- \nFrom: \"Matt Casters\" <[email protected]>\nTo: <[email protected]>\nSent: Wednesday, January 19, 2005 3:57 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n\n> Kevin Schroeder wrote:\n> It looks to me like you are using no (device or file) swap at all, and\n> have 1.3G of real memory free, so could in fact give Postgres more of it \n> :-)\n>\n\nIndeed.\nIf you DO run into trouble after giving Postgres more RAM, use the vmstat \ncommand.\nYou can use this command like \"vmstat 10\". (ignore the first line)\nKeep an eye on the \"pi\" and \"po\" parameters. (kilobytes paged in and out)\n\nHTH,\n\nMatt\n------\nMatt Casters <[email protected]>\ni-Bridge bvba, http://www.kettle.be\nFonteinstraat 70, 9400 Okegem, Belgium\nPhone +32 (0) 486/97.29.37\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n", "msg_date": "Wed, 19 Jan 2005 08:57:23 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Alan Stange wrote:\n> \n> Note that files in /tmp are usually in a tmpfs file system. These \n> files may be the usage of swap that you're seeing (as they will be paged \n> out on an active system with some memory pressure)\n\nYou can do a couple things with /tmp. Create a separate file system\nfor it so it will have zero impact on swap and use the \"noatime\" mount\noption. Alternatively, limit the size of /tmp using the mount option\n\"size=MBm\" replacing \"MB\" with the size you want it to be in MBytes. If\nyour application uses /tmp heavily, be sure to put it on a speedy,\nlocal LUN.\n\n\n> Finally, just as everyone suggests upgrading to newer postgresql \n> releases, you probably want to get to a newer Solaris release.\n\nIf you really want to avoid swapping I'd suggest tuning your database\nfirst with swap turned off and put it under a \"normal\" load while\nwatching both top and vmstat. When you're happy with it, turn swap\nback on for those \"heavy\" load times and move on.\n\nGreg\n\n-- \nGreg Spiegelberg\n Product Development Manager\n Cranel, Incorporated.\n Phone: 614.318.4314\n Fax: 614.431.8388\n Email: [email protected]\nTechnology. Integrity. Focus.\n\n", "msg_date": "Wed, 19 Jan 2005 10:07:45 -0500", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "/tmp doesn't seem to be much of a problem. 
I have about 1k worth of data in \nthere and 72k in /var/tmp.\n\nWould turning swap off help in tuning the database in this regard? top is \nreporting that there's 1.25GB of RAM free on a 2GB system so, in my \nestimation, there's no need for PostgreSQL to be swapped unless that free \nmemory is Solaris caching files in RAM.\n\nKevin\n\n\n----- Original Message ----- \nFrom: \"Greg Spiegelberg\" <[email protected]>\nTo: <[email protected]>\nCc: \"Kevin Schroeder\" <[email protected]>; \n<[email protected]>\nSent: Wednesday, January 19, 2005 9:07 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n> Alan Stange wrote:\n>>\n>> Note that files in /tmp are usually in a tmpfs file system. These files \n>> may be the usage of swap that you're seeing (as they will be paged out on \n>> an active system with some memory pressure)\n>\n> You can do a couple things with /tmp. Create a separate file system\n> for it so it will have zero impact on swap and use the \"noatime\" mount\n> option. Alternatively, limit the size of /tmp using the mount option\n> \"size=MBm\" replacing \"MB\" with the size you want it to be in MBytes. If\n> your application uses /tmp heavily, be sure to put it on a speedy,\n> local LUN.\n>\n>\n>> Finally, just as everyone suggests upgrading to newer postgresql \n>> releases, you probably want to get to a newer Solaris release.\n>\n> If you really want to avoid swapping I'd suggest tuning your database\n> first with swap turned off and put it under a \"normal\" load while\n> watching both top and vmstat. When you're happy with it, turn swap\n> back on for those \"heavy\" load times and move on.\n>\n> Greg\n>\n> -- \n> Greg Spiegelberg\n> Product Development Manager\n> Cranel, Incorporated.\n> Phone: 614.318.4314\n> Fax: 614.431.8388\n> Email: [email protected]\n> Technology. Integrity. Focus.\n>\n>\n>\n> \n\n", "msg_date": "Wed, 19 Jan 2005 09:17:03 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Kevin Schroeder wrote:\n\n> I suspect that the memory is being used to cache files as well since \n> the email boxes are using unix mailboxes, for the time being. With \n> people checking their email sometimes once per minute I can see why \n> Solaris would want to cache those files. Perhaps my question would be \n> more appropriate to a Solaris mailing list since what I really want to \n> do is get Solaris to simply allow PostgreSQL to use more RAM and \n> reduce the amount of RAM used for file caching. I would have thought \n> that Solaris gives some deference to a running application that's \n> being swapped than for a file cache.\n>\n> Is there any way to set custom parameters on Solaris' file-caching \n> behavior to allow PostgreSQL to use more physical RAM?\n\nYour explanation doesn't sound quite correct. If postgresql malloc()'s \nsome memory and uses it, the file cache will be reduced in size and the \nmemory given to postgresql. But if postgresql doesn't ask for or use \nthe memory, then solaris is going to use it for something else. There's \nnothing in Solaris that doesn't \"allow\" postgresql to use more RAM.\n\n-- Alan\n", "msg_date": "Wed, 19 Jan 2005 10:30:33 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "I may be asking the question the wrong way, but when I start up PostgreSQL \nswap is what gets used the most of. I've got 1282MB free RAM right now and \nand 515MB swap in use. 
Granted, swap file usage probably wouldn't be zero, \nbut I would guess that it should be a lot lower so something must be keeping \nPostgreSQL from using the free RAM that my system is reporting. For \nexample, one of my postgres processes is 201M in size but on 72M is resident \nin RAM. That extra 130M is available in RAM, according to top, but postgres \nisn't using it.\n\nKevin\n\n----- Original Message ----- \nFrom: \"Alan Stange\" <[email protected]>\nTo: \"Kevin Schroeder\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, January 19, 2005 9:30 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n> Kevin Schroeder wrote:\n>\n>> I suspect that the memory is being used to cache files as well since the \n>> email boxes are using unix mailboxes, for the time being. With people \n>> checking their email sometimes once per minute I can see why Solaris \n>> would want to cache those files. Perhaps my question would be more \n>> appropriate to a Solaris mailing list since what I really want to do is \n>> get Solaris to simply allow PostgreSQL to use more RAM and reduce the \n>> amount of RAM used for file caching. I would have thought that Solaris \n>> gives some deference to a running application that's being swapped than \n>> for a file cache.\n>>\n>> Is there any way to set custom parameters on Solaris' file-caching \n>> behavior to allow PostgreSQL to use more physical RAM?\n>\n> Your explanation doesn't sound quite correct. If postgresql malloc()'s \n> some memory and uses it, the file cache will be reduced in size and the \n> memory given to postgresql. But if postgresql doesn't ask for or use the \n> memory, then solaris is going to use it for something else. There's \n> nothing in Solaris that doesn't \"allow\" postgresql to use more RAM.\n>\n> -- Alan\n>\n>\n> \n\n", "msg_date": "Wed, 19 Jan 2005 09:40:12 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Kevin Schroeder wrote:\n\n> I take that back. There actually is some paging going on. I ran sar \n> -g 5 10 and when a request was made (totally about 10 DB queries) my \n> pgout/s jumped to 5.8 and my ppgout/s jumped to 121.8. pgfree/s also \n> jumped to 121.80.\n\nI'm fairly sure that the pi and po numbers include file IO in Solaris, \nbecause of the unified VM and file systems.\n\n-- Alan\n", "msg_date": "Wed, 19 Jan 2005 10:42:26 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Maybe, I'm just seeing a problem where none exists. I ran sar -w 3 100 and \nI actually did not see any swap activity despite the fact that I've got \n500+MB of swap file being used.\n\nKevin\n\n----- Original Message ----- \nFrom: \"Alan Stange\" <[email protected]>\nTo: \"Kevin Schroeder\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, January 19, 2005 9:42 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n> Kevin Schroeder wrote:\n>\n>> I take that back. There actually is some paging going on. I ran sar -g \n>> 5 10 and when a request was made (totally about 10 DB queries) my pgout/s \n>> jumped to 5.8 and my ppgout/s jumped to 121.8. 
pgfree/s also jumped to \n>> 121.80.\n>\n> I'm fairly sure that the pi and po numbers include file IO in Solaris, \n> because of the unified VM and file systems.\n>\n> -- Alan\n>\n>\n> \n\n", "msg_date": "Wed, 19 Jan 2005 09:53:58 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "\nOn Jan 19, 2005, at 10:42 AM, Alan Stange wrote:\n\n> Kevin Schroeder wrote:\n>\n>> I take that back. There actually is some paging going on. I ran sar \n>> -g 5 10 and when a request was made (totally about 10 DB queries) my \n>> pgout/s jumped to 5.8 and my ppgout/s jumped to 121.8. pgfree/s also \n>> jumped to 121.80.\n>\n> I'm fairly sure that the pi and po numbers include file IO in Solaris, \n> because of the unified VM and file systems.\n\nCuriously, what are your shared_buffers and sort_mem set too?\nPerhaps they are too high?\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Wed, 19 Jan 2005 11:58:21 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Kevin Schroeder wrote:\n\n> I may be asking the question the wrong way, but when I start up \n> PostgreSQL swap is what gets used the most of. I've got 1282MB free \n> RAM right now and and 515MB swap in use. Granted, swap file usage \n> probably wouldn't be zero, but I would guess that it should be a lot \n> lower so something must be keeping PostgreSQL from using the free RAM \n> that my system is reporting. For example, one of my postgres \n> processes is 201M in size but on 72M is resident in RAM. That extra \n> 130M is available in RAM, according to top, but postgres isn't using it. \n\nThe test you're doing doesn't measure what you think you're measuring.\n\nFirst, what else is running on the machine? Note that some shared \nmemory allocations do reserve backing pages in swap, even though the \npages aren't currently in use. Perhaps this is what you're measuring? \n\"swap -s\" has better numbers than top.\n\nYou'd be better by trying a reboot then starting pgsql and seeing what \nmemory is used.\n\nJust because you start a process and see the swap number increase \ndoesn't mean that the new process is in swap. It means some anonymous \npages had to be evicted to swap to make room for the new process or some \npages had to be reserved in swap for future use. Typically a new \nprocess won't be paged out unless something else is causing enormous \nmemory pressure...\n\n-- Alan\n", "msg_date": "Wed, 19 Jan 2005 12:04:21 -0500", "msg_from": "Alan Stange <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "I think it's probably just reserving them. I can't think of anything else. \nAlso, when I run swap activity with sar I don't see any activity, which also \npoints to reserved swap space, not used swap space.\n\nswap -s reports\n\ntotal: 358336k bytes allocated + 181144k reserved = 539480k used, 2988840k \navailable\n\nKevin\n\n----- Original Message ----- \nFrom: \"Alan Stange\" <[email protected]>\nTo: \"Kevin Schroeder\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, January 19, 2005 11:04 AM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n> Kevin Schroeder wrote:\n>\n>> I may be asking the question the wrong way, but when I start up \n>> PostgreSQL swap is what gets used the most of. 
I've got 1282MB free RAM \n>> right now and and 515MB swap in use. Granted, swap file usage probably \n>> wouldn't be zero, but I would guess that it should be a lot lower so \n>> something must be keeping PostgreSQL from using the free RAM that my \n>> system is reporting. For example, one of my postgres processes is 201M \n>> in size but on 72M is resident in RAM. That extra 130M is available in \n>> RAM, according to top, but postgres isn't using it.\n>\n> The test you're doing doesn't measure what you think you're measuring.\n>\n> First, what else is running on the machine? Note that some shared \n> memory allocations do reserve backing pages in swap, even though the pages \n> aren't currently in use. Perhaps this is what you're measuring? \n> \"swap -s\" has better numbers than top.\n>\n> You'd be better by trying a reboot then starting pgsql and seeing what \n> memory is used.\n>\n> Just because you start a process and see the swap number increase doesn't \n> mean that the new process is in swap. It means some anonymous pages had \n> to be evicted to swap to make room for the new process or some pages had \n> to be reserved in swap for future use. Typically a new process won't be \n> paged out unless something else is causing enormous memory pressure...\n>\n> -- Alan\n>\n>\n> \n\n", "msg_date": "Wed, 19 Jan 2005 11:08:51 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "\nOn Jan 19, 2005, at 10:40 AM, Kevin Schroeder wrote:\n\n> I may be asking the question the wrong way, but when I start up \n> PostgreSQL swap is what gets used the most of. I've got 1282MB free \n> RAM right now and and 515MB swap in use. Granted, swap file usage \n> probably wouldn't be zero, but I would guess that it should be a lot \n> lower so something must be keeping PostgreSQL from using the free RAM \n> that my system is reporting. For example, one of my postgres \n> processes is 201M in size but on 72M is resident in RAM. That extra \n> 130M is available in RAM, according to top, but postgres isn't using \n> it.\n\nCan you please give us your exact shared_buffer and sort_mem settings?\nThis will help greatly. As a general thing, we say don't use more than \n10k shared bufs unless you have done testing and enjoy a benefit. \nManaging all those buffers isn't free.\n\nI'm also not sure how Solaris reports shared memory usage for apps... a \nlot of that could be shared mem.\n\nCan you watch say, vmstat 1 for a minute or two while PG is running and \nsee if you're actually swapping?\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Wed, 19 Jan 2005 12:36:54 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "This page may be of use:\n\nhttp://www.serverworldmagazine.com/monthly/2003/02/solaris.shtml\n\n From personal experience, for god's sake don't think Solaris' VM/swap \nimplementation is easy - it's damn good, but it ain't easy!\n\nMatt\n\nKevin Schroeder wrote:\n\n> I think it's probably just reserving them. I can't think of anything \n> else. 
Also, when I run swap activity with sar I don't see any \n> activity, which also points to reserved swap space, not used swap space.\n>\n> swap -s reports\n>\n> total: 358336k bytes allocated + 181144k reserved = 539480k used, \n> 2988840k available\n>\n> Kevin\n>\n> ----- Original Message ----- From: \"Alan Stange\" <[email protected]>\n> To: \"Kevin Schroeder\" <[email protected]>\n> Cc: <[email protected]>\n> Sent: Wednesday, January 19, 2005 11:04 AM\n> Subject: Re: [PERFORM] Swapping on Solaris\n>\n>\n>> Kevin Schroeder wrote:\n>>\n>>> I may be asking the question the wrong way, but when I start up \n>>> PostgreSQL swap is what gets used the most of. I've got 1282MB free \n>>> RAM right now and and 515MB swap in use. Granted, swap file usage \n>>> probably wouldn't be zero, but I would guess that it should be a lot \n>>> lower so something must be keeping PostgreSQL from using the free \n>>> RAM that my system is reporting. For example, one of my postgres \n>>> processes is 201M in size but on 72M is resident in RAM. That extra \n>>> 130M is available in RAM, according to top, but postgres isn't using \n>>> it.\n>>\n>>\n>> The test you're doing doesn't measure what you think you're measuring.\n>>\n>> First, what else is running on the machine? Note that some shared \n>> memory allocations do reserve backing pages in swap, even though the \n>> pages aren't currently in use. Perhaps this is what you're \n>> measuring? \"swap -s\" has better numbers than top.\n>>\n>> You'd be better by trying a reboot then starting pgsql and seeing \n>> what memory is used.\n>>\n>> Just because you start a process and see the swap number increase \n>> doesn't mean that the new process is in swap. It means some \n>> anonymous pages had to be evicted to swap to make room for the new \n>> process or some pages had to be reserved in swap for future use. \n>> Typically a new process won't be paged out unless something else is \n>> causing enormous memory pressure...\n>>\n>> -- Alan\n>>\n>>\n>>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n", "msg_date": "Wed, 19 Jan 2005 19:01:48 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Well, easy it ain't and I believe it's good. One final question: When I \nrun sar -w I get no swap activity, but the process switch column registers \nbetween 400 and 700 switches per second. Would that be in the normal range \nfor a medium-use system?\n\nThanks\nKevin\n\n----- Original Message ----- \nFrom: \"Matt Clark\" <[email protected]>\nTo: \"Kevin Schroeder\" <[email protected]>\nCc: <[email protected]>\nSent: Wednesday, January 19, 2005 1:01 PM\nSubject: Re: [PERFORM] Swapping on Solaris\n\n\n> This page may be of use:\n>\n> http://www.serverworldmagazine.com/monthly/2003/02/solaris.shtml\n>\n> From personal experience, for god's sake don't think Solaris' VM/swap \n> implementation is easy - it's damn good, but it ain't easy!\n>\n> Matt\n>\n> Kevin Schroeder wrote:\n>\n>> I think it's probably just reserving them. I can't think of anything \n>> else. 
Also, when I run swap activity with sar I don't see any activity, \n>> which also points to reserved swap space, not used swap space.\n>>\n>> swap -s reports\n>>\n>> total: 358336k bytes allocated + 181144k reserved = 539480k used, \n>> 2988840k available\n>>\n>> Kevin\n>>\n>> ----- Original Message ----- From: \"Alan Stange\" <[email protected]>\n>> To: \"Kevin Schroeder\" <[email protected]>\n>> Cc: <[email protected]>\n>> Sent: Wednesday, January 19, 2005 11:04 AM\n>> Subject: Re: [PERFORM] Swapping on Solaris\n>>\n>>\n>>> Kevin Schroeder wrote:\n>>>\n>>>> I may be asking the question the wrong way, but when I start up \n>>>> PostgreSQL swap is what gets used the most of. I've got 1282MB free \n>>>> RAM right now and and 515MB swap in use. Granted, swap file usage \n>>>> probably wouldn't be zero, but I would guess that it should be a lot \n>>>> lower so something must be keeping PostgreSQL from using the free RAM \n>>>> that my system is reporting. For example, one of my postgres processes \n>>>> is 201M in size but on 72M is resident in RAM. That extra 130M is \n>>>> available in RAM, according to top, but postgres isn't using it.\n>>>\n>>>\n>>> The test you're doing doesn't measure what you think you're measuring.\n>>>\n>>> First, what else is running on the machine? Note that some shared \n>>> memory allocations do reserve backing pages in swap, even though the \n>>> pages aren't currently in use. Perhaps this is what you're measuring? \n>>> \"swap -s\" has better numbers than top.\n>>>\n>>> You'd be better by trying a reboot then starting pgsql and seeing what \n>>> memory is used.\n>>>\n>>> Just because you start a process and see the swap number increase \n>>> doesn't mean that the new process is in swap. It means some anonymous \n>>> pages had to be evicted to swap to make room for the new process or some \n>>> pages had to be reserved in swap for future use. Typically a new \n>>> process won't be paged out unless something else is causing enormous \n>>> memory pressure...\n>>>\n>>> -- Alan\n>>>\n>>>\n>>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n>\n>\n>\n> \n\n", "msg_date": "Wed, 19 Jan 2005 13:46:51 -0600", "msg_from": "\"Kevin Schroeder\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "On Wed, 2005-01-19 at 09:40 -0600, Kevin Schroeder wrote:\n> I may be asking the question the wrong way, but when I start up PostgreSQL \n> swap is what gets used the most of. I've got 1282MB free RAM right now and \n> and 515MB swap in use. Granted, swap file usage probably wouldn't be zero, \n> but I would guess that it should be a lot lower so something must be keeping \n> PostgreSQL from using the free RAM that my system is reporting. For \n> example, one of my postgres processes is 201M in size but on 72M is resident \n> in RAM. That extra 130M is available in RAM, according to top, but postgres \n> isn't using it.\n\nYou probably need to look at the way Solaris memory allocation works.\n\nOn Linux 2.6, my understanding is that if a process allocates memory,\nbut doesn't actually use it, then the OS is smart enough to swap the\noverallocated portion out to swap. 
The effect of that is that the\nprogram stays happy because it has all the \"memory\" it thinks it needs,\nwhile the OS is happy because it conserves it valuable physical RAM for\nmemory that is actually being used.\n\nIf what I say is correct, you should actually observe very low swapping\nI/O rates on the system.\n\nAnyway, look at how the algorithms work if you are worried by what you\nsee. But mostly, if the system is performing OK, then no need to worry -\nif your only measure of that is system performance data then you need to\ninstrument your application better, so you can look at the data that\nreally matters.\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Wed, 19 Jan 2005 23:47:04 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "On Wed, Jan 19, 2005 at 10:42:26AM -0500, Alan Stange wrote:\n> \n> I'm fairly sure that the pi and po numbers include file IO in Solaris, \n> because of the unified VM and file systems.\n\nThat's correct.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nWhen my information changes, I alter my conclusions. What do you do sir?\n\t\t--attr. John Maynard Keynes\n", "msg_date": "Thu, 27 Jan 2005 16:36:59 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Wed, Jan 19, 2005 at 10:42:26AM -0500, Alan Stange wrote:\n> > \n> > I'm fairly sure that the pi and po numbers include file IO in Solaris, \n> > because of the unified VM and file systems.\n> \n> That's correct.\n\nI have seen cases on BSDs where 'pi' includes page-faulting in the\nexecutables from the file system, but Solaris actually has 'po' as\nfilesystem I/O. That is a new one to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Feb 2005 10:54:50 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Swapping on Solaris" } ]
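
The PostgreSQL-side memory settings that keep coming up in the thread above can be inspected from any session, and the per-backend ones can be changed per session for experiments. A small sketch, using the 7.4 parameter names (sort_mem and vacuum_mem were renamed work_mem and maintenance_work_mem in 8.0); the values are only illustrative.

SHOW shared_buffers;        -- shared memory buffers, fixed at server start
SHOW sort_mem;              -- kB per sort/hash step, allocated per backend
SHOW effective_cache_size;  -- planner's guess at OS cache, in disk pages (typically 8kB)

-- sort_mem can be changed for one session to see the effect without touching
-- postgresql.conf; shared_buffers cannot, it needs a config edit and restart.
SET sort_mem = 8192;        -- 8MB for this session only

Because sort_mem is allocated per sort step and per backend, a setting in the 128MB range combined with many concurrent connections can by itself create the kind of memory pressure being chased in this thread; that is a possibility worth ruling out rather than a diagnosis.
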
[ { "msg_contents": "Hi,\n \nHas anyone had any experiance with any of the Areca SATA RAID controllers? I\nwas looking at a 3ware one but it won't fit in the 2U case we have so the\nsales guy recommended these.\n \nCheers,\n \nBenjamin Wragg\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.7.0 - Release Date: 17/01/2005\n \n\n\n\n\n\nHi,\n \nHas anyone had any \nexperiance with any of the Areca SATA RAID controllers? I was looking at a 3ware \none but it won't fit in the 2U case we have so the sales guy recommended \nthese.\n \nCheers,\n \nBenjamin \nWragg", "msg_date": "Thu, 20 Jan 2005 09:31:16 +1100", "msg_from": "\"Benjamin Wragg\" <[email protected]>", "msg_from_op": true, "msg_subject": "areca raid controller" } ]
[ { "msg_contents": "Hi folks,\n\nRunning on 7.4.2, recently vacuum analysed the three tables in \nquestion.\n\nThe query plan in question changes dramatically when a WHERE clause \nchanges from ports.broken to ports.deprecated. I don't see why. \nWell, I do see why: a sequential scan of a 130,000 rows. The query \ngoes from 13ms to 1100ms because the of this. The full plans are at \nhttp://rafb.net/paste/results/v8ccvQ54.html\n\nI have tried some tuning by:\n\n set effective_cache_size to 4000, was 1000\n set random_page_cost to 1, was 4\n\nThe resulting plan changes, but no speed improvment, are at \nhttp://rafb.net/paste/results/rV8khJ18.html\n\nAny suggestions please? \n\n-- \nDan Langille : http://www.langille.org/\nBSDCan - The Technical BSD Conference - http://www.bsdcan.org/\n\n", "msg_date": "Wed, 19 Jan 2005 20:37:59 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "index scan of whole table, can't see why" }, { "msg_contents": "On Wed, 2005-01-19 at 20:37 -0500, Dan Langille wrote:\n> Hi folks,\n> \n> Running on 7.4.2, recently vacuum analysed the three tables in \n> question.\n> \n> The query plan in question changes dramatically when a WHERE clause \n> changes from ports.broken to ports.deprecated. I don't see why. \n> Well, I do see why: a sequential scan of a 130,000 rows. The query \n> goes from 13ms to 1100ms because the of this. The full plans are at \n> http://rafb.net/paste/results/v8ccvQ54.html\n> \n> I have tried some tuning by:\n> \n> set effective_cache_size to 4000, was 1000\n> set random_page_cost to 1, was 4\n> \n> The resulting plan changes, but no speed improvment, are at \n> http://rafb.net/paste/results/rV8khJ18.html\n> \n\nthis just confirms that an indexscan is not always better than a\ntablescan. by setting random_page_cost to 1, you deceiving the\nplanner into thinking that the indexscan is almost as effective\nas a tablescan.\n\n> Any suggestions please? \n\ndid you try to increase sort_mem ?\n\ngnari\n\n\n", "msg_date": "Thu, 20 Jan 2005 09:34:29 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On 20 Jan 2005 at 9:34, Ragnar Hafstað wrote:\n\n> On Wed, 2005-01-19 at 20:37 -0500, Dan Langille wrote:\n> > Hi folks,\n> > \n> > Running on 7.4.2, recently vacuum analysed the three tables in \n> > question.\n> > \n> > The query plan in question changes dramatically when a WHERE clause \n> > changes from ports.broken to ports.deprecated. I don't see why. \n> > Well, I do see why: a sequential scan of a 130,000 rows. The query \n> > goes from 13ms to 1100ms because the of this. The full plans are at \n> > http://rafb.net/paste/results/v8ccvQ54.html\n> > \n> > I have tried some tuning by:\n> > \n> > set effective_cache_size to 4000, was 1000\n> > set random_page_cost to 1, was 4\n> > \n> > The resulting plan changes, but no speed improvment, are at \n> > http://rafb.net/paste/results/rV8khJ18.html\n> > \n> \n> this just confirms that an indexscan is not always better than a\n> tablescan. by setting random_page_cost to 1, you deceiving the\n> planner into thinking that the indexscan is almost as effective\n> as a tablescan.\n> \n> > Any suggestions please? \n> \n> did you try to increase sort_mem ?\n\nI tried sort_mem = 4096 and then 16384. This did not make a \ndifference. 
See http://rafb.net/paste/results/AVDqEm55.html\n\nThank you.\n-- \nDan Langille : http://www.langille.org/\nBSDCan - The Technical BSD Conference - http://www.bsdcan.org/\n\n", "msg_date": "Thu, 20 Jan 2005 06:56:20 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On Wed, 19 Jan 2005, Dan Langille wrote:\n\n> Hi folks,\n>\n> Running on 7.4.2, recently vacuum analysed the three tables in\n> question.\n>\n> The query plan in question changes dramatically when a WHERE clause\n> changes from ports.broken to ports.deprecated. I don't see why.\n> Well, I do see why: a sequential scan of a 130,000 rows. The query\n> goes from 13ms to 1100ms because the of this. The full plans are at\n> http://rafb.net/paste/results/v8ccvQ54.html\n>\n> I have tried some tuning by:\n>\n> set effective_cache_size to 4000, was 1000\n> set random_page_cost to 1, was 4\n>\n> The resulting plan changes, but no speed improvment, are at\n> http://rafb.net/paste/results/rV8khJ18.html\n>\n> Any suggestions please?\n\nAs a question, what does it do if enable_hashjoin is false? I'm wondering\nif it'll pick a nested loop for that step for the element/ports join and\nwhat it estimates the cost to be.\n\n", "msg_date": "Thu, 20 Jan 2005 06:14:31 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On 20 Jan 2005 at 6:14, Stephan Szabo wrote:\n\n> On Wed, 19 Jan 2005, Dan Langille wrote:\n> \n> > Hi folks,\n> >\n> > Running on 7.4.2, recently vacuum analysed the three tables in\n> > question.\n> >\n> > The query plan in question changes dramatically when a WHERE clause\n> > changes from ports.broken to ports.deprecated. I don't see why.\n> > Well, I do see why: a sequential scan of a 130,000 rows. The query\n> > goes from 13ms to 1100ms because the of this. The full plans are at\n> > http://rafb.net/paste/results/v8ccvQ54.html\n> >\n> > I have tried some tuning by:\n> >\n> > set effective_cache_size to 4000, was 1000\n> > set random_page_cost to 1, was 4\n> >\n> > The resulting plan changes, but no speed improvment, are at\n> > http://rafb.net/paste/results/rV8khJ18.html\n> >\n> > Any suggestions please?\n> \n> As a question, what does it do if enable_hashjoin is false? I'm wondering\n> if it'll pick a nested loop for that step for the element/ports join and\n> what it estimates the cost to be.\n\nWith enable_hashjoin = false, no speed improvement. Execution plan \nat http://rafb.net/paste/results/qtSFVM72.html\n\nthanks\n-- \nDan Langille : http://www.langille.org/\nBSDCan - The Technical BSD Conference - http://www.bsdcan.org/\n\n", "msg_date": "Thu, 20 Jan 2005 09:40:21 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On Thu, 20 Jan 2005, Dan Langille wrote:\n\n> On 20 Jan 2005 at 6:14, Stephan Szabo wrote:\n>\n> > On Wed, 19 Jan 2005, Dan Langille wrote:\n> >\n> > > Hi folks,\n> > >\n> > > Running on 7.4.2, recently vacuum analysed the three tables in\n> > > question.\n> > >\n> > > The query plan in question changes dramatically when a WHERE clause\n> > > changes from ports.broken to ports.deprecated. I don't see why.\n> > > Well, I do see why: a sequential scan of a 130,000 rows. The query\n> > > goes from 13ms to 1100ms because the of this. 
The full plans are at\n> > > http://rafb.net/paste/results/v8ccvQ54.html\n> > >\n> > > I have tried some tuning by:\n> > >\n> > > set effective_cache_size to 4000, was 1000\n> > > set random_page_cost to 1, was 4\n> > >\n> > > The resulting plan changes, but no speed improvment, are at\n> > > http://rafb.net/paste/results/rV8khJ18.html\n> > >\n> > > Any suggestions please?\n> >\n> > As a question, what does it do if enable_hashjoin is false? I'm wondering\n> > if it'll pick a nested loop for that step for the element/ports join and\n> > what it estimates the cost to be.\n>\n> With enable_hashjoin = false, no speed improvement. Execution plan\n> at http://rafb.net/paste/results/qtSFVM72.html\n\nHonestly I expected it to be slower (which it was), but I figured it's\nworth seeing what alternate plans it'll generate (specifically to see how\nit cost a nested loop on that join to compare to the fast plan).\nUnfortunately, it generated a merge join, so I think it might require both\nenable_hashjoin=false and enable_mergejoin=false to get it which is likely\nto be even slower in practice but still may be useful to see.\n\n", "msg_date": "Thu, 20 Jan 2005 07:26:37 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On 20 Jan 2005 at 7:26, Stephan Szabo wrote:\n\n> On Thu, 20 Jan 2005, Dan Langille wrote:\n> \n> > On 20 Jan 2005 at 6:14, Stephan Szabo wrote:\n> >\n> > > On Wed, 19 Jan 2005, Dan Langille wrote:\n> > >\n> > > > Hi folks,\n> > > >\n> > > > Running on 7.4.2, recently vacuum analysed the three tables in\n> > > > question.\n> > > >\n> > > > The query plan in question changes dramatically when a WHERE clause\n> > > > changes from ports.broken to ports.deprecated. I don't see why.\n> > > > Well, I do see why: a sequential scan of a 130,000 rows. The query\n> > > > goes from 13ms to 1100ms because the of this. The full plans are at\n> > > > http://rafb.net/paste/results/v8ccvQ54.html\n> > > >\n> > > > I have tried some tuning by:\n> > > >\n> > > > set effective_cache_size to 4000, was 1000\n> > > > set random_page_cost to 1, was 4\n> > > >\n> > > > The resulting plan changes, but no speed improvment, are at\n> > > > http://rafb.net/paste/results/rV8khJ18.html\n> > > >\n> > > > Any suggestions please?\n> > >\n> > > As a question, what does it do if enable_hashjoin is false? I'm wondering\n> > > if it'll pick a nested loop for that step for the element/ports join and\n> > > what it estimates the cost to be.\n> >\n> > With enable_hashjoin = false, no speed improvement. Execution plan\n> > at http://rafb.net/paste/results/qtSFVM72.html\n> \n> Honestly I expected it to be slower (which it was), but I figured it's\n> worth seeing what alternate plans it'll generate (specifically to see how\n> it cost a nested loop on that join to compare to the fast plan).\n> Unfortunately, it generated a merge join, so I think it might require both\n> enable_hashjoin=false and enable_mergejoin=false to get it which is likely\n> to be even slower in practice but still may be useful to see.\n\nSetting both to false gives a dramatic performance boost. 
See \nhttp://rafb.net/paste/results/b70KAi42.html\n\nThis gives suitable speed, but why does the plan vary so much with \nsuch a minor change in the WHERE clause?\n-- \nDan Langille : http://www.langille.org/\nBSDCan - The Technical BSD Conference - http://www.bsdcan.org/\n\n", "msg_date": "Thu, 20 Jan 2005 10:36:04 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On Fri, 21 Jan 2005 02:36 am, Dan Langille wrote:\n> On 20 Jan 2005 at 7:26, Stephan Szabo wrote:\n\n[snip]\n> > Honestly I expected it to be slower (which it was), but I figured it's\n> > worth seeing what alternate plans it'll generate (specifically to see how\n> > it cost a nested loop on that join to compare to the fast plan).\n> > Unfortunately, it generated a merge join, so I think it might require both\n> > enable_hashjoin=false and enable_mergejoin=false to get it which is likely\n> > to be even slower in practice but still may be useful to see.\n> \n> Setting both to false gives a dramatic performance boost. See \n> http://rafb.net/paste/results/b70KAi42.html\n> \n -> Materialize (cost=15288.70..15316.36 rows=2766 width=35) (actual time=0.004..0.596 rows=135 loops=92)\n -> Nested Loop (cost=0.00..15288.70 rows=2766 width=35) (actual time=0.060..9.130 rows=135 loops=1)\n\nThe Planner here has a quite inaccurate guess at the number of rows that will match in the join. An alternative to \nturning off join types is to up the statistics on the Element columns because that's where the join is happening. Hopefully the planner will\nget a better idea. However it may not be able too. 2766 rows vs 135 is quite likely to choose different plans. As you can\nsee you have had to turn off two join types to give something you wanted/expected.\n\n> This gives suitable speed, but why does the plan vary so much with \n> such a minor change in the WHERE clause?\nPlan 1 - broken\n -> Nested Loop (cost=0.00..3825.30 rows=495 width=35) (actual time=0.056..16.161 rows=218 loops=1)\n\nPlan 2 - deprecated\n -> Hash Join (cost=3676.78..10144.06 rows=2767 width=35) (actual time=7.638..1158.128 rows=135 loops=1)\n\nThe performance difference is when the where is changed, you have a totally different set of selection options.\nThe Plan 1 and Plan 2 shown from your paste earlier, report that you are out by a factor of 2 for plan 1. But for plan 2\nits a factor of 20. The planner is likely to make the wrong choice when the stats are out by that factor.\n\nBeware what is a small \"typing\" change does not mean they queries are anything alight.\n\nRegards\n\nRussell Smith.\n", "msg_date": "Fri, 21 Jan 2005 08:38:19 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On 21 Jan 2005 at 8:38, Russell Smith wrote:\n\n> On Fri, 21 Jan 2005 02:36 am, Dan Langille wrote:\n> > On 20 Jan 2005 at 7:26, Stephan Szabo wrote:\n> \n> [snip]\n> > > Honestly I expected it to be slower (which it was), but I figured\n> > > it's worth seeing what alternate plans it'll generate\n> > > (specifically to see how it cost a nested loop on that join to\n> > > compare to the fast plan). 
Unfortunately, it generated a merge\n> > > join, so I think it might require both enable_hashjoin=false and\n> > > enable_mergejoin=false to get it which is likely to be even slower\n> > > in practice but still may be useful to see.\n> > \n> > Setting both to false gives a dramatic performance boost. See\n> > http://rafb.net/paste/results/b70KAi42.html\n> > \n> -> Materialize (cost=15288.70..15316.36 rows=2766 width=35)\n> (actual time=0.004..0.596 rows=135 loops=92)\n> -> Nested Loop (cost=0.00..15288.70 rows=2766\n> width=35) (actual time=0.060..9.130 rows=135 loops=1)\n> \n> The Planner here has a quite inaccurate guess at the number of rows\n> that will match in the join. An alternative to turning off join types\n> is to up the statistics on the Element columns because that's where\n> the join is happening. Hopefully the planner will get a better idea. \n> However it may not be able too. 2766 rows vs 135 is quite likely to\n> choose different plans. As you can see you have had to turn off two\n> join types to give something you wanted/expected.\n\nFair comment. However, the statistics on ports.element_id, \nports.deprecated, ports.broken, and element.id are both set to 1000.\n\n> > This gives suitable speed, but why does the plan vary so much with\n> > such a minor change in the WHERE clause?\n> Plan 1 - broken\n> -> Nested Loop (cost=0.00..3825.30 rows=495 width=35) (actual\n> time=0.056..16.161 rows=218 loops=1)\n> \n> Plan 2 - deprecated\n> -> Hash Join (cost=3676.78..10144.06 rows=2767 width=35)\n> (actual time=7.638..1158.128 rows=135 loops=1)\n> \n> The performance difference is when the where is changed, you have a\n> totally different set of selection options. The Plan 1 and Plan 2\n> shown from your paste earlier, report that you are out by a factor of\n> 2 for plan 1. But for plan 2 its a factor of 20. The planner is\n> likely to make the wrong choice when the stats are out by that factor.\n> \n> Beware what is a small \"typing\" change does not mean they queries are\n> anything alight.\n\nAgreed. I just did not expect such a dramatic change which a result \nset that is similar. Actually, they aren't that similar at all.\n\nThank you.\n-- \nDan Langille : http://www.langille.org/\nBSDCan - The Technical BSD Conference - http://www.bsdcan.org/\n\n", "msg_date": "Thu, 20 Jan 2005 19:55:20 -0500", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan of whole table, can't see why" } ]
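The thread above comes down to two knobs: letting the planner show its alternative join methods, and giving it better row estimates for the element/ports join. A minimal sketch of both, assuming the ports/element schema discussed in the thread; the count(*) query shape and the target value of 1000 are only illustrative:

  -- Try the alternative plans for the current session only, then put the settings back
  SET enable_hashjoin = false;
  SET enable_mergejoin = false;
  EXPLAIN ANALYZE
    SELECT count(*)
    FROM ports JOIN element ON element.id = ports.element_id
    WHERE ports.deprecated <> '';
  RESET enable_hashjoin;
  RESET enable_mergejoin;

  -- Raise the per-column statistics targets on the join and filter columns, then refresh them
  ALTER TABLE ports ALTER COLUMN element_id SET STATISTICS 1000;
  ALTER TABLE ports ALTER COLUMN deprecated SET STATISTICS 1000;
  ANALYZE ports;

Turning the enable_* settings off is best treated as a diagnostic to see what the other plans would cost, not as a permanent fix.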
[ { "msg_contents": "Hi,\n \nI'm trying to tune a query that is taking to long to execute. I haven't done\nmuch sql tuning and have only had a little exposure to explain and explain\nanalyze but from what I've read on the list and in books the following is\ngenerally true:\n \nSeq Scans are the almost always evil (except if a table has only a few\nvalues)\nNested Joins are generally evil as every node below it is executed the\nnumber of times the \"loops=\" value says.\nHash Joins are extremely quick. This is because when postgres uses Hash\njoins it creates a copy of the values of the table in memory and then Hashes\n(some type of memory join) to the other table. \n\nIs that correct?\n \nIf so, I'm after some help on the following query which I feel is taking too\nlong. At the outset I want to apologise for the length of this email, I just\nwanted to provide as much info as possible. I just can't seem to make sense\nof it and have been trying for days!\n \nSELECT abs(item.area-userpolygon.area) as area,item.title as\nitem_title,item.id as item_id,item.collection_id as\nitem_collection_id,item.type_id as item_type_id,item.scale as\nitem_scale,publisher.publisher as publisher_publisher,publisher.description\nas publisher_description,language.language as\nlanguage_language,language.description as\nlanguage_description,language.code2 as language_code2,language.code3 as\nlanguage_code3,collection.collection as\ncollection_collection,collection.description as\ncollection_description,item_base_type.type as\nitem_type_combination_type,item_subtype.subtype as\nitem_type_combination_subtype,item_format.format as\nitem_type_combination_format,status.status as\nstatus_status,status.description as status_description,currency.code as\ncurrency_code,currency.description as currency_description,item.subtitle as\nitem_subtitle,item.description as item_description,item.item_number as\nitem_item_number,item.edition as item_edition,item.h_datum as\nitem_h_datum,item.v_datum as item_v_datum,item.projection as\nitem_projection,item.isbn as item_isbn,client_item_field.stock as\nclient_item_field_stock,client_item_field.price as\nclient_item_field_price,client_freight.freight as\nclient_freight_freight,client_freight.description as\nclient_freight_description \nFROM item INNER JOIN (client INNER JOIN client_item ON\n(client.id=client_item.client_id)) ON (client_item.item_id=item.id )\nINNER JOIN publisher ON (item.publisher_id = publisher.id) \nINNER JOIN language ON (item.language_id = language.id) \nLEFT OUTER JOIN collection ON (item.collection_id = collection.id) \nINNER JOIN item_base_type ON (item.type_id = item_base_type.id) \nINNER JOIN item_subtype ON (item.subtype_id = item_subtype.id) \nINNER JOIN item_format ON (item.format_id = item_format.id) \nINNER JOIN status ON (item.status_id = status.id) \nINNER JOIN currency ON (item.publisher_currency_id = currency.id) \nLEFT OUTER JOIN client_item_field ON\n(client_item.client_id=client_item_field.client_id) AND\n(client_item.item_id=client_item_field.item_id) \nLEFT OUTER JOIN client_item_freight ON\n(client_item.client_id=client_item_freight.client_id) AND\n(client_item.item_id=client_item_freight.item_id) \nLEFT OUTER JOIN client_freight ON\n(client_freight.id=client_item_freight.client_freight_id), userpolygon \nWHERE item.the_geom && userpolygon.the_geom AND distance(item.the_geom,\nuserpolygon.the_geom)=0 AND userpolygon.session_id='TestQuery' \nAND client.id=1 ORDER BY area asc\n \nWhen I explain analyze it I get:\n \nQUERY 
PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n------------\n Sort (cost=4793.89..4793.91 rows=7 width=622) (actual\ntime=4066.52..4067.79 rows=4004 loops=1)\n Sort Key: abs((item.area - userpolygon.area))\n -> Nested Loop (cost=533.45..4793.79 rows=7 width=622) (actual\ntime=66.89..4054.01 rows=4004 loops=1)\n Join Filter: ((\"outer\".the_geom && \"inner\".the_geom) AND\n(distance(\"outer\".the_geom, \"inner\".the_geom) = 0::double precision))\n -> Hash Join (cost=533.45..4548.30 rows=14028 width=582) (actual\ntime=63.79..3826.16 rows=14028 loops=1)\n Hash Cond: (\"outer\".client_freight_id = \"inner\".id)\n -> Hash Join (cost=532.38..4437.64 rows=14028 width=540)\n(actual time=63.52..3413.48 rows=14028 loops=1)\n Hash Cond: (\"outer\".item_id = \"inner\".item_id)\n Join Filter: (\"outer\".client_id = \"inner\".client_id)\n -> Hash Join (cost=532.38..4367.49 rows=14028\nwidth=528) (actual time=62.95..2993.37 rows=14028 loops=1)\n Hash Cond: (\"outer\".item_id = \"inner\".item_id)\n Join Filter: (\"outer\".client_id =\n\"inner\".client_id)\n -> Hash Join (cost=532.38..4297.33 rows=14028\nwidth=508) (actual time=62.48..2576.46 rows=14028 loops=1)\n Hash Cond: (\"outer\".publisher_currency_id =\n\"inner\".id)\n -> Hash Join (cost=528.23..4047.69\nrows=14028 width=476) (actual time=61.64..2189.57 rows=14028 loops=1)\n Hash Cond: (\"outer\".status_id =\n\"inner\".id)\n -> Hash Join (cost=527.17..3766.07\nrows=14028 width=430) (actual time=61.30..1846.30 rows=14028 loops=1)\n Hash Cond: (\"outer\".format_id =\n\"inner\".id)\n -> Hash Join\n(cost=526.02..3519.43 rows=14028 width=417) (actual time=60.62..1537.19\nrows=14028 loops=1)\n Hash Cond:\n(\"outer\".subtype_id = \"inner\".id)\n -> Hash Join\n(cost=524.67..3272.59 rows=14028 width=400) (actual time=60.09..1258.45\nrows=14028 loops=1)\n Hash Cond:\n(\"outer\".type_id = \"inner\".id)\n -> Hash Join\n(cost=523.60..2990.96 rows=14028 width=388) (actual time=59.53..1009.52\nrows=14028 loops=1)\n Hash Cond:\n(\"outer\".collection_id = \"inner\".id)\n -> Hash Join\n(cost=522.35..2709.15 rows=14028 width=329) (actual time=59.21..785.50\nrows=14028 loops=1)\n Hash\nCond: (\"outer\".language_id = \"inner\".id)\n ->\nHash Join (cost=513.30..2419.54 rows=14028 width=269) (actual\ntime=57.65..582.34 rows=14028 loops=1)\n \nHash Cond: (\"outer\".publisher_id = \"inner\".id)\n \n-> Hash Join (cost=510.85..2171.60 rows=14028 width=220) (actual\ntime=57.03..414.43 rows=14028 loops=1)\n \nHash Cond: (\"outer\".id = \"inner\".item_id)\n \n-> Seq Scan on item (cost=0.00..924.28 rows=14028 width=208) (actual\ntime=0.03..211.81 rows=14028 loops=1)\n \n-> Hash (cost=475.78..475.78 rows=14028 width=12) (actual\ntime=56.47..56.47 rows=0 loops=1)\n \n-> Nested Loop (cost=0.00..475.78 rows=14028 width=12) (actual\ntime=0.06..43.86 rows=14028 loops=1)\n \n-> Seq Scan on client (cost=0.00..1.05 rows=1 width=4) (actual\ntime=0.01..0.03 rows=1 loops=1)\n \nFilter: (id = 1)\n \n-> Index Scan using client_item_client_id_idx on client_item\n(cost=0.00..299.38 rows=14028 width=8) (actual time=0.03..27.45 rows=14028\nloops=1)\n \nIndex Cond: (\"outer\".id = client_item.client_id)\n \n-> Hash (cost=2.21..2.21 rows=97 width=49) (actual time=0.33..0.33 rows=0\nloops=1)\n \n-> Seq Scan on organisation (cost=0.00..2.21 rows=97 width=49) (actual\ntime=0.02..0.22 rows=97 loops=1)\n \nFilter: 
(type_id = 1)\n ->\nHash (cost=8.04..8.04 rows=404 width=60) (actual time=1.27..1.27 rows=0\nloops=1)\n \n-> Seq Scan on \"language\" (cost=0.00..8.04 rows=404 width=60) (actual\ntime=0.01..0.81 rows=404 loops=1)\n -> Hash\n(cost=1.20..1.20 rows=20 width=59) (actual time=0.06..0.06 rows=0 loops=1)\n -> Seq\nScan on collection (cost=0.00..1.20 rows=20 width=59) (actual\ntime=0.01..0.04 rows=20 loops=1)\n -> Hash\n(cost=1.05..1.05 rows=5 width=12) (actual time=0.02..0.02 rows=0 loops=1)\n -> Seq Scan\non item_base_type (cost=0.00..1.05 rows=5 width=12) (actual time=0.01..0.02\nrows=5 loops=1)\n -> Hash\n(cost=1.28..1.28 rows=28 width=17) (actual time=0.07..0.07 rows=0 loops=1)\n -> Seq Scan on\nitem_subtype (cost=0.00..1.28 rows=28 width=17) (actual time=0.01..0.05\nrows=28 loops=1)\n -> Hash (cost=1.12..1.12\nrows=12 width=13) (actual time=0.05..0.05 rows=0 loops=1)\n -> Seq Scan on\nitem_format (cost=0.00..1.12 rows=12 width=13) (actual time=0.01..0.03\nrows=12 loops=1)\n -> Hash (cost=1.05..1.05 rows=5\nwidth=46) (actual time=0.02..0.02 rows=0 loops=1)\n -> Seq Scan on status\n(cost=0.00..1.05 rows=5 width=46) (actual time=0.01..0.02 rows=5 loops=1)\n -> Hash (cost=3.72..3.72 rows=172\nwidth=32) (actual time=0.45..0.45 rows=0 loops=1)\n -> Seq Scan on currency\n(cost=0.00..3.72 rows=172 width=32) (actual time=0.02..0.28 rows=172\nloops=1)\n -> Hash (cost=0.00..0.00 rows=1 width=20)\n(actual time=0.01..0.01 rows=0 loops=1)\n -> Seq Scan on client_item_field\n(cost=0.00..0.00 rows=1 width=20) (actual time=0.00..0.00 rows=0 loops=1)\n -> Hash (cost=0.00..0.00 rows=1 width=12) (actual\ntime=0.01..0.01 rows=0 loops=1)\n -> Seq Scan on client_item_freight\n(cost=0.00..0.00 rows=1 width=12) (actual time=0.00..0.00 rows=0 loops=1)\n -> Hash (cost=1.05..1.05 rows=5 width=42) (actual\ntime=0.03..0.03 rows=0 loops=1)\n -> Seq Scan on client_freight (cost=0.00..1.05 rows=5\nwidth=42) (actual time=0.01..0.02 rows=5 loops=1)\n -> Seq Scan on userpolygon (cost=0.00..0.00 rows=1 width=40)\n(actual time=0.01..0.01 rows=1 loops=14028)\n Filter: (session_id = 'TestQuery'::character varying)\n Total runtime: 4070.87 msec\n(63 rows)\n\n(if you have trouble reading it I can send it in a formatted txt file)\n\n\nSo from my basic knowledge of explain analyze, am I correct is saying that\npostgres is deciding to attack the query in the following way:\n\t\n1) All the small tables which I join to item should be loaded into a hashes.\n(e.g currency, status, collection, language, etc)?\n\n2) The following indicates that the client table is joined to the\nclient_item table and a hash is created in memory?\n\n -> Hash (cost=475.78..475.78 rows=14028 width=12) (actual\ntime=56.47..56.47 rows=0 loops=1)\n -> Nested Loop (cost=0.00..475.78 rows=14028 width=12) (actual\ntime=0.06..43.86 rows=14028 loops=1)\n -> Seq Scan on client (cost=0.00..1.05 rows=1 width=4) (actual\ntime=0.01..0.03 rows=1 loops=1)\n Filter: (id = 1)\n\t -> Index Scan using client_item_client_id_idx on client_item\n(cost=0.00..299.38 rows=14028 width=8) (actual time=0.03..27.45 rows=14028\nloops=1)\n\t\tIndex Cond: (\"outer\".id = client_item.client_id)\n\t\t\n\t\n3) All the records in the items table are selected with:\n\t-> Seq Scan on item (cost=0.00..924.28 rows=14028 width=208)\n(actual time=0.03..211.81 rows=14028 loops=1). 
\nThis is ok as I am selecting everything from the item table.\n\t\n4) Then the hash in step 2 is joined to the records from step 3:\n\n -> Hash Join (cost=510.85..2171.60 rows=14028 width=220) (actual\ntime=57.03..414.43 rows=14028 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".item_id)\n\t\t\n5) All the hashes created in the first step are joined to the items returned\nin the second step (e.g currency, status, collection, language, etc)?\n\t\n6) A nested loop runs at the end (actually a spatial operation for PostGIS)\nand a sort occurs.\n\nAm I correct???\n\nIf I am correct in the above execution path, doesn't this show that what is\nslowing down the query down is all the hash joins of the small tables???\n\nI say this because at the start of step 4 the time taken so far is 57.03\nmilli secs and at the end of step 4 the time taken is 414.43 millisecs. So\nthat part of the query took 414.43-57.03 but then just before step 6 at the\nlast hash join the time taken is reported as:\n\n -> Hash Join (cost=533.45..4548.30 rows=14028 width=582) (actual\ntime=63.79..3826.16 rows=14028 loops=1)\n\nSo does this mean that at that point 3826.16 milli seconds had past and if\nwe take 414.43 from 3826.16 it shows that all the hash joins took about 3.4\nseconds to do? This doesn't seem right, as I thought that the hash joins\nwere the quickest way to do a join?\n\nMy final question is that on the last Nested loops, as well as on some of\nthe hash joins I see \"actual time\" report as:\n\n -> Nested Loop (cost=533.45..4793.79 rows=7 width=622) (actual\ntime=66.89..4054.01 rows=4004 loops=1)\n\nWhat does the first time, 66.89, represent? It can't be the time taken so\nfar in the query because the node below it reports 3826.16 milli sec had\npassed.\nHow do I interpret this?\n\n\nThanks,\n\nBenjamin Wragg\n\nP.S I have indexes on all the primary and foreign keys and have vacuum\nanalyzed\n\n-- \nNo virus found in this outgoing message.\nChecked by AVG Anti-Virus.\nVersion: 7.0.300 / Virus Database: 265.7.1 - Release Date: 19/01/2005\n \n\n", "msg_date": "Thu, 20 Jan 2005 14:23:51 +1100", "msg_from": "\"Benjamin Wragg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance and understanding explain analzye" } ]
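On the last question in the thread: in every node's "actual time=a..b", a is the time before the node returned its first row and b the time until it returned its last one. Both figures are per pass, so nodes with loops greater than 1 have to be multiplied by the loop count, and both include time spent in the node's children. That is why the top nested loop can report a startup of 66.89 ms while the hash join under it only finishes at 3826.16 ms: rows stream upwards as soon as the child produces its first row (here at 63.79 ms), long before it produces its last. And because the times are cumulative, subtracting a child's total from its parent's total does give roughly the time added by that one step, which is how the stack of hash joins accounts for the gap between 414 ms and 3826 ms. A short, hypothetical simplification of the query above, just to have something small to read these numbers from:

  EXPLAIN ANALYZE
    SELECT item.id
    FROM item
    JOIN client_item ON client_item.item_id = item.id
    WHERE client_item.client_id = 1;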
[ { "msg_contents": "Hi,\n\nAnyone have tips for performance of Postgresql, running on HP-UX 11.11,\nPA-RISC (HP RP3410)? What is better compiler (GCC or HP C/ANSI), flags of\ncompilation, kernel and FS tunning?\n\nI have installed pgsql, and compiled with gcc -O2\n-fexpensive-optimizations flags only.\n\nAnother question: Postgres running well on HP-UX? What is the better:\nHP-UX or Linux on HP RP3410?\n\nThanks!\n\n\nGustavo Franklin N�brega\nInfraestrutura e Banco de Dados\nPlanae Tecnologia da Informa��o\n(+55) 14 3224-3066 Ramal 209\nwww.planae.com.br\n\n", "msg_date": "Thu, 20 Jan 2005 02:05:26 -0200 (BRST)", "msg_from": "Gustavo Franklin =?iso-8859-1?Q?N=F3brega?= <[email protected]>", "msg_from_op": true, "msg_subject": "Tips and tunning for pgsql on HP-UX PA-RISC (RP3410)" }, { "msg_contents": "Well you probably will need to run your own tests to get a conclusive \nanswer. It should be that hard -- compile once with gcc, make a copy of \nthe installed binaries to pgsql.gcc -- then repeat with the HP compiler.\n\nIn general though, gcc works best under x86 computers. Comparisons of \ngcc on x86 versus Itanium versus PPC show binaries compiled for Itanium \nand PPC drastically underperform compared to gcc/x86. I suspect it's \nprobably the same situation for HP-UX.\n\n\nGustavo Franklin N�brega wrote:\n> Hi,\n> \n> Anyone have tips for performance of Postgresql, running on HP-UX 11.11,\n> PA-RISC (HP RP3410)? What is better compiler (GCC or HP C/ANSI), flags of\n> compilation, kernel and FS tunning?\n> \n> I have installed pgsql, and compiled with gcc -O2\n> -fexpensive-optimizations flags only.\n> \n> Another question: Postgres running well on HP-UX? What is the better:\n> HP-UX or Linux on HP RP3410?\n> \n> Thanks!\n> \n> \n> Gustavo Franklin N�brega\n> Infraestrutura e Banco de Dados\n> Planae Tecnologia da Informa��o\n> (+55) 14 3224-3066 Ramal 209\n> www.planae.com.br\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n> \n", "msg_date": "Thu, 20 Jan 2005 01:21:09 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tips and tunning for pgsql on HP-UX PA-RISC (RP3410)" } ]
[ { "msg_contents": "This is a multi-part message in MIME format.\n\n--bound1106197232\nContent-Type: text/plain\nContent-Transfer-Encoding: 7bit\n\nLet's see if I have been paying enough attention to the SQL gurus. The planner is making a different estimate of how many deprecated<>'' versus how many broken <> ''. I would try SET STATISTICS to a larger number on the ports table, and re-analyze.\n\n--bound1106197232--\n", "msg_date": "Wed, 19 Jan 2005 21:00:32 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: index scan of whole table, can't see why" }, { "msg_contents": "On Wed, 2005-01-19 at 21:00 -0800, [email protected] wrote:\n> Let's see if I have been paying enough attention to the SQL gurus. \n> The planner is making a different estimate of how many deprecated<>'' versus how many broken <> ''. \n> I would try SET STATISTICS to a larger number on the ports table, and re-analyze.\n\nthat should not help, as the estimate is accurate, according to the\nexplain analyze.\n\ngnari\n\n\n", "msg_date": "Thu, 20 Jan 2005 09:23:58 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan of whole table, can't see why" } ]
[ { "msg_contents": "\nHi,\n\nI have the go ahead of a customer to do some testing on Postgresql in a couple of weeks as a\nreplacement for Oracle.\nThe reason for the test is that the number of users of the warehouse is going to increase and this\nwill have a serious impact on licencing costs. (I bet that sounds familiar)\n\nWe're running a medium sized data warehouse on a Solaris box (4CPU, 8Gb RAM) on Oracle.\nBasically we have 2 large fact tables to deal with: one going for 400M rows, the other will be\nhitting 1B rows soon.\n(around 250Gb of data)\n\nMy questions to the list are: has this sort of thing been attempted before? If so, what where the\nperformance results compared to Oracle?\nI've been reading up on partitioned tabes on pgsql, will the performance benefit will be\ncomparable to Oracle partitioned tables?\nWhat are the gotchas?\nShould I be testing on 8 or the 7 version?\nWhile I didn't find any documents immediately, are there any fine manuals to read on data\nwarehouse performance tuning on PostgreSQL?\n\nThanks in advance for any help you may have, I'll do my best to keep pgsql-performance up to date\non the results.\n\nBest regards,\n\nMatt\n------\nMatt Casters <[email protected]>\ni-Bridge bvba, http://www.kettle.be\nFonteinstraat 70, 9400 Okegem, Belgium\nPhone +32 (0) 486/97.29.37\n\n\n", "msg_date": "Thu, 20 Jan 2005 10:34:35 +0100 (CET)", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "* Matt Casters ([email protected]) wrote:\n> I have the go ahead of a customer to do some testing on Postgresql in a couple of weeks as a\n> replacement for Oracle.\n> The reason for the test is that the number of users of the warehouse is going to increase and this\n> will have a serious impact on licencing costs. (I bet that sounds familiar)\n\nRather familiar, yes... :)\n\n> We're running a medium sized data warehouse on a Solaris box (4CPU, 8Gb RAM) on Oracle.\n> Basically we have 2 large fact tables to deal with: one going for 400M rows, the other will be\n> hitting 1B rows soon.\n> (around 250Gb of data)\n\nQuite a bit of data. There's one big thing to note here I think-\nPostgres will not take advantage of multiple CPUs for a given query,\nOracle will. So, it depends on your workload as to how that may impact\nyou. Situations where this will be unlikely to affect you:\n\nYour main bottle-neck is IO/disk and not CPU.\nYou run multiple queries in parallel frequently.\nThere are other processes on the system which chew up CPU time anyway.\n\nSituations where you're likely to be affected would be:\n\nYou periodically run one big query.\nYou run a set of queries in sequential order.\n\n> My questions to the list are: has this sort of thing been attempted before? If so, what where the\n> performance results compared to Oracle?\n\nI'm pretty sure it's been attempted before but unfortunately I don't\nhave any numbers on it myself. My data sets aren't that large (couple\nmillion rows) but I've found PostgreSQL at least as fast as Oracle for\nwhat we do, and much easier to work with.\n\n> I've been reading up on partitioned tabes on pgsql, will the performance benefit will be\n> comparable to Oracle partitioned tables?\n\nIn this case I would think so, except that PostgreSQL still won't use\nmultiple CPUs for a given query, even against partitioned tables, aiui.\n\n> What are the gotchas?\n\nSee above? 
:) Other issues are things having to do w/ your specific\nSQL- Oracle's old join syntax isn't supported by PostgreSQL (what is it,\nsomething like select x,y from a,b where x=%y; to do a right-join,\niirc).\n\n> Should I be testing on 8 or the 7 version?\n\nNow that 8.0 is out I'd say probably test with that and just watch for\n8.0.x releases before you go production, if you have time before you\nhave to go into production with the new solution (sounds like you do-\nchanging databases takes time anyway).\n\n> Thanks in advance for any help you may have, I'll do my best to keep pgsql-performance up to date\n> on the results.\n\nHope that helps. Others on here will correct me if I misspoke. :)\n\n\tStephen", "msg_date": "Thu, 20 Jan 2005 09:26:03 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "I am curious - I wasn't aware that postgresql supported partitioned tables,\nCould someone point me to the docs on this.\n\nThanks,\n\nAlex Turner\nNetEconomist\n\n\nOn Thu, 20 Jan 2005 09:26:03 -0500, Stephen Frost <[email protected]> wrote:\n> * Matt Casters ([email protected]) wrote:\n> > I have the go ahead of a customer to do some testing on Postgresql in a couple of weeks as a\n> > replacement for Oracle.\n> > The reason for the test is that the number of users of the warehouse is going to increase and this\n> > will have a serious impact on licencing costs. (I bet that sounds familiar)\n> \n> Rather familiar, yes... :)\n> \n> > We're running a medium sized data warehouse on a Solaris box (4CPU, 8Gb RAM) on Oracle.\n> > Basically we have 2 large fact tables to deal with: one going for 400M rows, the other will be\n> > hitting 1B rows soon.\n> > (around 250Gb of data)\n> \n> Quite a bit of data. There's one big thing to note here I think-\n> Postgres will not take advantage of multiple CPUs for a given query,\n> Oracle will. So, it depends on your workload as to how that may impact\n> you. Situations where this will be unlikely to affect you:\n> \n> Your main bottle-neck is IO/disk and not CPU.\n> You run multiple queries in parallel frequently.\n> There are other processes on the system which chew up CPU time anyway.\n> \n> Situations where you're likely to be affected would be:\n> \n> You periodically run one big query.\n> You run a set of queries in sequential order.\n> \n> > My questions to the list are: has this sort of thing been attempted before? If so, what where the\n> > performance results compared to Oracle?\n> \n> I'm pretty sure it's been attempted before but unfortunately I don't\n> have any numbers on it myself. My data sets aren't that large (couple\n> million rows) but I've found PostgreSQL at least as fast as Oracle for\n> what we do, and much easier to work with.\n> \n> > I've been reading up on partitioned tabes on pgsql, will the performance benefit will be\n> > comparable to Oracle partitioned tables?\n> \n> In this case I would think so, except that PostgreSQL still won't use\n> multiple CPUs for a given query, even against partitioned tables, aiui.\n> \n> > What are the gotchas?\n> \n> See above? 
:) Other issues are things having to do w/ your specific\n> SQL- Oracle's old join syntax isn't supported by PostgreSQL (what is it,\n> something like select x,y from a,b where x=%y; to do a right-join,\n> iirc).\n> \n> > Should I be testing on 8 or the 7 version?\n> \n> Now that 8.0 is out I'd say probably test with that and just watch for\n> 8.0.x releases before you go production, if you have time before you\n> have to go into production with the new solution (sounds like you do-\n> changing databases takes time anyway).\n> \n> > Thanks in advance for any help you may have, I'll do my best to keep pgsql-performance up to date\n> > on the results.\n> \n> Hope that helps. Others on here will correct me if I misspoke. :)\n> \n> Stephen\n> \n> \n>\n", "msg_date": "Thu, 20 Jan 2005 11:31:29 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\n\"Matt Casters\" <[email protected]> writes:\n\n> I've been reading up on partitioned tabes on pgsql, will the performance\n> benefit will be comparable to Oracle partitioned tables?\n\nPostgres doesn't have any built-in support for partitioned tables. You can do\nit the same way people did it on Oracle up until 8.0 which is by creating\nviews of UNIONs or using inherited tables.\n\nThe main advantage of partitioned tables is being able to load and drop data\nin large chunks instantaneously. This avoids having to perform large deletes\nand then having to vacuum huge tables to recover the space.\n\nHowever in Postgres you aren't going to get most of the performance advantage\nof partitions in your query plans. The Oracle planner can prune partitions it\nknows aren't relevant to the query to avoid having to search through them.\n\nThis can let it get the speed of a full table scan without the disadvantage of\nhaving to read irrelevant tuples. Postgres is sometimes going to be forced to\neither do a much slower index scan or read tables that aren't relevant.\n\n-- \ngreg\n\n", "msg_date": "20 Jan 2005 11:31:52 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "No support for partitioned tables? Perhaps in name ... but I use a time-based\n\"partition\" tables that inherit from a base table; new partitions are \"placed\"\n(moved) round-robin on a set of drives. Somewhat manual, but if you really need\na solution now, it works.\n\nQuoting Greg Stark <[email protected]>:\n\n> \n> \"Matt Casters\" <[email protected]> writes:\n> \n> > I've been reading up on partitioned tabes on pgsql, will the performance\n> > benefit will be comparable to Oracle partitioned tables?\n> \n> Postgres doesn't have any built-in support for partitioned tables. You can\n> do\n> it the same way people did it on Oracle up until 8.0 which is by creating\n> views of UNIONs or using inherited tables.\n> \n> The main advantage of partitioned tables is being able to load and drop data\n> in large chunks instantaneously. This avoids having to perform large deletes\n> and then having to vacuum huge tables to recover the space.\n> \n> However in Postgres you aren't going to get most of the performance\n> advantage\n> of partitions in your query plans. The Oracle planner can prune partitions\n> it\n> knows aren't relevant to the query to avoid having to search through them.\n> \n> This can let it get the speed of a full table scan without the disadvantage\n> of\n> having to read irrelevant tuples. 
Postgres is sometimes going to be forced\n> to\n> either do a much slower index scan or read tables that aren't relevant.\n> \n> -- \n> greg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n> \n\n\n-- \n\"Dreams come true, not free.\"\n\n", "msg_date": "Thu, 20 Jan 2005 10:50:55 -0800", "msg_from": "Mischa <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": " \nThanks Stephen,\n\nMy main concern is to get as much read performance on the disks as possible\non this given system. CPU is rarely a problem on a typical data warehouse\nsystem, this one's not any different.\n\nWe basically have 2 RAID5 disk sets (300Gb) and 150Gb) with a third one\ncoming along.(around 350Gb)\nI was kind of hoping that the new PGSQL tablespaces would allow me to create\na storage container spanning multiple file-systems, but unfortunately, that\nseems to be not the case. Is this correct?\n\nThat tells me that I probably need to do a full reconfiguration of the disks\non the Solaris level to get maximum performance out of the system.\nMmmm. This is going to be a though one to crack. Perhaps it will be\npossible to get some extra juice out of placing the indexes on the smaller\ndisks (150G) and the data on the bigger ones?\n\nThanks!\n\nMatt\n\n-----Oorspronkelijk bericht-----\nVan: Stephen Frost [mailto:[email protected]] \nVerzonden: donderdag 20 januari 2005 15:26\nAan: Matt Casters\nCC: [email protected]\nOnderwerp: Re: [PERFORM]\n\n* Matt Casters ([email protected]) wrote:\n> I have the go ahead of a customer to do some testing on Postgresql in \n> a couple of weeks as a replacement for Oracle.\n> The reason for the test is that the number of users of the warehouse \n> is going to increase and this will have a serious impact on licencing \n> costs. (I bet that sounds familiar)\n\nRather familiar, yes... :)\n\n> We're running a medium sized data warehouse on a Solaris box (4CPU, 8Gb\nRAM) on Oracle.\n> Basically we have 2 large fact tables to deal with: one going for 400M \n> rows, the other will be hitting 1B rows soon.\n> (around 250Gb of data)\n\nQuite a bit of data. There's one big thing to note here I think- Postgres\nwill not take advantage of multiple CPUs for a given query, Oracle will.\nSo, it depends on your workload as to how that may impact you. Situations\nwhere this will be unlikely to affect you:\n\nYour main bottle-neck is IO/disk and not CPU.\nYou run multiple queries in parallel frequently.\nThere are other processes on the system which chew up CPU time anyway.\n\nSituations where you're likely to be affected would be:\n\nYou periodically run one big query.\nYou run a set of queries in sequential order.\n\n> My questions to the list are: has this sort of thing been attempted \n> before? If so, what where the performance results compared to Oracle?\n\nI'm pretty sure it's been attempted before but unfortunately I don't have\nany numbers on it myself. My data sets aren't that large (couple million\nrows) but I've found PostgreSQL at least as fast as Oracle for what we do,\nand much easier to work with.\n\n> I've been reading up on partitioned tabes on pgsql, will the \n> performance benefit will be comparable to Oracle partitioned tables?\n\nIn this case I would think so, except that PostgreSQL still won't use\nmultiple CPUs for a given query, even against partitioned tables, aiui.\n\n> What are the gotchas?\n\nSee above? 
:) Other issues are things having to do w/ your specific\nSQL- Oracle's old join syntax isn't supported by PostgreSQL (what is it,\nsomething like select x,y from a,b where x=%y; to do a right-join, iirc).\n\n> Should I be testing on 8 or the 7 version?\n\nNow that 8.0 is out I'd say probably test with that and just watch for 8.0.x\nreleases before you go production, if you have time before you have to go\ninto production with the new solution (sounds like you do- changing\ndatabases takes time anyway).\n\n> Thanks in advance for any help you may have, I'll do my best to keep \n> pgsql-performance up to date on the results.\n\nHope that helps. Others on here will correct me if I misspoke. :)\n\n\tStephen\n\n\n", "msg_date": "Thu, 20 Jan 2005 21:06:07 +0100", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "Matt Casters wrote:\n> \n> Thanks Stephen,\n> \n> My main concern is to get as much read performance on the disks as possible\n> on this given system. CPU is rarely a problem on a typical data warehouse\n> system, this one's not any different.\n> \n> We basically have 2 RAID5 disk sets (300Gb) and 150Gb) with a third one\n> coming along.(around 350Gb)\n\nWhy not run two raid systems. A RAID 1 for your OS and a RAID 10 for \nyour database? Push all of your extra drives into the RAID 10.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n> I was kind of hoping that the new PGSQL tablespaces would allow me to create\n> a storage container spanning multiple file-systems, but unfortunately, that\n> seems to be not the case. Is this correct?\n> \n> That tells me that I probably need to do a full reconfiguration of the disks\n> on the Solaris level to get maximum performance out of the system.\n> Mmmm. This is going to be a though one to crack. Perhaps it will be\n> possible to get some extra juice out of placing the indexes on the smaller\n> disks (150G) and the data on the bigger ones?\n> \n> Thanks!\n> \n> Matt\n> \n> -----Oorspronkelijk bericht-----\n> Van: Stephen Frost [mailto:[email protected]] \n> Verzonden: donderdag 20 januari 2005 15:26\n> Aan: Matt Casters\n> CC: [email protected]\n> Onderwerp: Re: [PERFORM]\n> \n> * Matt Casters ([email protected]) wrote:\n> \n>>I have the go ahead of a customer to do some testing on Postgresql in \n>>a couple of weeks as a replacement for Oracle.\n>>The reason for the test is that the number of users of the warehouse \n>>is going to increase and this will have a serious impact on licencing \n>>costs. (I bet that sounds familiar)\n> \n> \n> Rather familiar, yes... :)\n> \n> \n>>We're running a medium sized data warehouse on a Solaris box (4CPU, 8Gb\n> \n> RAM) on Oracle.\n> \n>>Basically we have 2 large fact tables to deal with: one going for 400M \n>>rows, the other will be hitting 1B rows soon.\n>>(around 250Gb of data)\n> \n> \n> Quite a bit of data. There's one big thing to note here I think- Postgres\n> will not take advantage of multiple CPUs for a given query, Oracle will.\n> So, it depends on your workload as to how that may impact you. 
Situations\n> where this will be unlikely to affect you:\n> \n> Your main bottle-neck is IO/disk and not CPU.\n> You run multiple queries in parallel frequently.\n> There are other processes on the system which chew up CPU time anyway.\n> \n> Situations where you're likely to be affected would be:\n> \n> You periodically run one big query.\n> You run a set of queries in sequential order.\n> \n> \n>>My questions to the list are: has this sort of thing been attempted \n>>before? If so, what where the performance results compared to Oracle?\n> \n> \n> I'm pretty sure it's been attempted before but unfortunately I don't have\n> any numbers on it myself. My data sets aren't that large (couple million\n> rows) but I've found PostgreSQL at least as fast as Oracle for what we do,\n> and much easier to work with.\n> \n> \n>>I've been reading up on partitioned tabes on pgsql, will the \n>>performance benefit will be comparable to Oracle partitioned tables?\n> \n> \n> In this case I would think so, except that PostgreSQL still won't use\n> multiple CPUs for a given query, even against partitioned tables, aiui.\n> \n> \n>>What are the gotchas?\n> \n> \n> See above? :) Other issues are things having to do w/ your specific\n> SQL- Oracle's old join syntax isn't supported by PostgreSQL (what is it,\n> something like select x,y from a,b where x=%y; to do a right-join, iirc).\n> \n> \n>>Should I be testing on 8 or the 7 version?\n> \n> \n> Now that 8.0 is out I'd say probably test with that and just watch for 8.0.x\n> releases before you go production, if you have time before you have to go\n> into production with the new solution (sounds like you do- changing\n> databases takes time anyway).\n> \n> \n>>Thanks in advance for any help you may have, I'll do my best to keep \n>>pgsql-performance up to date on the results.\n> \n> \n> Hope that helps. Others on here will correct me if I misspoke. :)\n> \n> \tStephen\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n-- \nCommand Prompt, Inc., your source for PostgreSQL replication,\nprofessional support, programming, managed services, shared\nand dedicated hosting. Home of the Open Source Projects plPHP,\nplPerlNG, pgManage, and pgPHPtoolkit.\nContact us now at: +1-503-667-4564 - http://www.commandprompt.com", "msg_date": "Thu, 20 Jan 2005 12:26:19 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\nJoshua,\n\nActually that's a great idea!\nI'll have to check if Solaris wants to play ball though.\nWe'll have to see as we don't have the new disks yet, ETA is next week.\n\nCheers,\n\nMatt\n\n-----Oorspronkelijk bericht-----\nVan: Joshua D. Drake [mailto:[email protected]] \nVerzonden: donderdag 20 januari 2005 21:26\nAan: [email protected]\nCC: [email protected]\nOnderwerp: Re: [PERFORM]\n\nMatt Casters wrote:\n> \n> Thanks Stephen,\n> \n> My main concern is to get as much read performance on the disks as \n> possible on this given system. CPU is rarely a problem on a typical \n> data warehouse system, this one's not any different.\n> \n> We basically have 2 RAID5 disk sets (300Gb) and 150Gb) with a third \n> one coming along.(around 350Gb)\n\nWhy not run two raid systems. A RAID 1 for your OS and a RAID 10 for your\ndatabase? Push all of your extra drives into the RAID 10.\n\nSincerely,\n\nJoshua D. 
Drake\n\n\n\n\n> I was kind of hoping that the new PGSQL tablespaces would allow me to \n> create a storage container spanning multiple file-systems, but \n> unfortunately, that seems to be not the case. Is this correct?\n> \n> That tells me that I probably need to do a full reconfiguration of the \n> disks on the Solaris level to get maximum performance out of the system.\n> Mmmm. This is going to be a though one to crack. Perhaps it will be \n> possible to get some extra juice out of placing the indexes on the \n> smaller disks (150G) and the data on the bigger ones?\n> \n> Thanks!\n> \n> Matt\n> \n> -----Oorspronkelijk bericht-----\n> Van: Stephen Frost [mailto:[email protected]]\n> Verzonden: donderdag 20 januari 2005 15:26\n> Aan: Matt Casters\n> CC: [email protected]\n> Onderwerp: Re: [PERFORM]\n> \n> * Matt Casters ([email protected]) wrote:\n> \n>>I have the go ahead of a customer to do some testing on Postgresql in \n>>a couple of weeks as a replacement for Oracle.\n>>The reason for the test is that the number of users of the warehouse \n>>is going to increase and this will have a serious impact on licencing \n>>costs. (I bet that sounds familiar)\n> \n> \n> Rather familiar, yes... :)\n> \n> \n>>We're running a medium sized data warehouse on a Solaris box (4CPU, \n>>8Gb\n> \n> RAM) on Oracle.\n> \n>>Basically we have 2 large fact tables to deal with: one going for 400M \n>>rows, the other will be hitting 1B rows soon.\n>>(around 250Gb of data)\n> \n> \n> Quite a bit of data. There's one big thing to note here I think- \n> Postgres will not take advantage of multiple CPUs for a given query,\nOracle will.\n> So, it depends on your workload as to how that may impact you. \n> Situations where this will be unlikely to affect you:\n> \n> Your main bottle-neck is IO/disk and not CPU.\n> You run multiple queries in parallel frequently.\n> There are other processes on the system which chew up CPU time anyway.\n> \n> Situations where you're likely to be affected would be:\n> \n> You periodically run one big query.\n> You run a set of queries in sequential order.\n> \n> \n>>My questions to the list are: has this sort of thing been attempted \n>>before? If so, what where the performance results compared to Oracle?\n> \n> \n> I'm pretty sure it's been attempted before but unfortunately I don't \n> have any numbers on it myself. My data sets aren't that large (couple \n> million\n> rows) but I've found PostgreSQL at least as fast as Oracle for what we \n> do, and much easier to work with.\n> \n> \n>>I've been reading up on partitioned tabes on pgsql, will the \n>>performance benefit will be comparable to Oracle partitioned tables?\n> \n> \n> In this case I would think so, except that PostgreSQL still won't use \n> multiple CPUs for a given query, even against partitioned tables, aiui.\n> \n> \n>>What are the gotchas?\n> \n> \n> See above? 
:) Other issues are things having to do w/ your specific\n> SQL- Oracle's old join syntax isn't supported by PostgreSQL (what is \n> it, something like select x,y from a,b where x=%y; to do a right-join,\niirc).\n> \n> \n>>Should I be testing on 8 or the 7 version?\n> \n> \n> Now that 8.0 is out I'd say probably test with that and just watch for \n> 8.0.x releases before you go production, if you have time before you \n> have to go into production with the new solution (sounds like you do- \n> changing databases takes time anyway).\n> \n> \n>>Thanks in advance for any help you may have, I'll do my best to keep \n>>pgsql-performance up to date on the results.\n> \n> \n> Hope that helps. Others on here will correct me if I misspoke. :)\n> \n> \tStephen\n> \n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n--\nCommand Prompt, Inc., your source for PostgreSQL replication, professional\nsupport, programming, managed services, shared and dedicated hosting. Home\nof the Open Source Projects plPHP, plPerlNG, pgManage, and pgPHPtoolkit.\nContact us now at: +1-503-667-4564 - http://www.commandprompt.com\n\n\n\n", "msg_date": "Thu, 20 Jan 2005 22:39:02 +0100", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "Matt Casters wrote:\n> Hi,\n> \n> My questions to the list are: has this sort of thing been attempted before? If so, what where the\n> performance results compared to Oracle?\n> I've been reading up on partitioned tabes on pgsql, will the performance benefit will be\n> comparable to Oracle partitioned tables?\n> What are the gotchas?\n> Should I be testing on 8 or the 7 version?\n> While I didn't find any documents immediately, are there any fine manuals to read on data\n> warehouse performance tuning on PostgreSQL?\n> \nSome of the previous postings on this list discuss various methods for\ndoing partitioning (UNION and INHERIT), as well as the use of partial\nindexes - see the thread titled : 'Data Warehouse Reevaluation - MySQL\nvs Postgres -- merge tables'.\n\nUnfortunately none of these work well for a standard 'star' because :\n\ni) all conditions are on the dimension tables, and\nii) the optimizer can eliminate 'partition' tables only on the basis of\n *constant* conditions, and the resulting implied restrictions caused\nby the join to the dimension table(s) are not usable for this.\n\nSo I think to get it to work well some violence to your 'star' may be\nrequired (e.g. adding constant columns to 'fact' tables to aid the\noptimizer, plus rewriting queries to include conditions on the added\ncolumns).\n\n\nOne other gotcha is that Pg cannot do index only access, which can hurt.\nHowever it may be possibly to get good performance using CLUSTER on the\nfact tables (or just loading them in a desirable order) plus using\npartial indexes.\n\n\nregards\n\nMark\n\n", "msg_date": "Fri, 21 Jan 2005 10:44:09 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "On Thu, Jan 20, 2005 at 11:31:29 -0500,\n Alex Turner <[email protected]> wrote:\n> I am curious - I wasn't aware that postgresql supported partitioned tables,\n> Could someone point me to the docs on this.\n\nSome people have been doing it using a union view. 
There isn't actually\na partition feature.\n", "msg_date": "Thu, 20 Jan 2005 18:14:27 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\n> On Thu, Jan 20, 2005 at 11:31:29 -0500,\n> Alex Turner <[email protected]> wrote:\n>> I am curious - I wasn't aware that postgresql supported partitioned tables,\n>> Could someone point me to the docs on this.\n>\n> Some people have been doing it using a union view. There isn't actually\n> a partition feature.\n>\n>\n\nActually, there is. If found this example on pgsql-performance:\n\n>> CREATE TABLE super_foo ( partition NUMERIC, bar NUMERIC );\n>> ANALYZE super_foo ;\n>>\n>> CREATE TABLE sub_foo1 () INHERITS ( super_foo );\n>> INSERT INTO sub_foo1 VALUES ( 1, 1 );\n>> -- repeat insert until sub_foo1 has 1,000,000 rows\n>> CREATE INDEX idx_subfoo1_partition ON sub_foo1 ( partition );\n>> ANALYZE sub_foo1 ;\n>>\n>> CREATE TABLE sub_foo2 () INHERITS ( super_foo );\n>> INSERT INTO sub_foo2 VALUES ( 2, 1 );\n>> -- repeat insert until sub_foo2 has 1,000,000 rows\n>> CREATE INDEX idx_subfoo2_partition ON sub_foo2 ( partition );\n>> ANALYZE sub_foo2 ;\n>>\n\nI think that in certain cases this system even beats Oracle as it stores less information in the\ntable partitions. (and in doing so is causing less disk IO)\nBTW, internally, Oracle sees partitions as tables too. Even the \"Union all\" system that MS SQL\nServer uses works fine as long as the optimiser supports it to prune correctly.\n\nCheers,\n\nMatt\n------\nMatt Casters <[email protected]>\ni-Bridge bvba, http://www.kettle.be\nFonteinstraat 70, 9400 Okegem, Belgium\nPhone +32 (0) 486/97.29.37\n\n\n", "msg_date": "Fri, 21 Jan 2005 09:50:46 +0100 (CET)", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "Hi,\n\nOn Fri, Jan 21, 2005 at 09:50:46AM +0100, Matt Casters wrote:\n> \n> > Some people have been doing it using a union view. There isn't actually\n> > a partition feature.\n> \n> Actually, there is. If found this example on pgsql-performance:\n> \n> >> CREATE TABLE super_foo ( partition NUMERIC, bar NUMERIC );\n> >> ANALYZE super_foo ;\n> >>\n> >> CREATE TABLE sub_foo1 () INHERITS ( super_foo );\n[...]\n> >>\n> >> CREATE TABLE sub_foo2 () INHERITS ( super_foo );\n[...]\n> >>\n\nYes, this could be used instead of a view. But there is one thing\nmissing. You can't just insert into super_foo and aquire the \"correct\npartition\". You will still have to insert into the correct underlying\ntable. \"Real\" partitioning will take care of correct partition\nselection.\n\nRegards,\nYann\n\n", "msg_date": "Fri, 21 Jan 2005 13:30:08 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "\n>> > Some people have been doing it using a union view. There isn't actually\n>> > a partition feature.\n>>\n>> Actually, there is. If found this example on pgsql-performance:\n>>\n>> >> CREATE TABLE super_foo ( partition NUMERIC, bar NUMERIC );\n>> >> ANALYZE super_foo ;\n>> >>\n>> >> CREATE TABLE sub_foo1 () INHERITS ( super_foo );\n> [...]\n>> >>\n>> >> CREATE TABLE sub_foo2 () INHERITS ( super_foo );\n> [...]\n>> >>\n>\n> Yes, this could be used instead of a view. But there is one thing\n> missing. You can't just insert into super_foo and aquire the \"correct\n> partition\". You will still have to insert into the correct underlying\n> table. 
\"Real\" partitioning will take care of correct partition\n> selection.\n\nThis IS bad news. It would mean a serious change in the ETL.\nI think I can solve the other problems, but I don't know about this one...\n\nRegards,\n\nMatt\n\n\n\n\n", "msg_date": "Fri, 21 Jan 2005 13:51:02 +0100 (CET)", "msg_from": "\"Matt Casters\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: " }, { "msg_contents": "Hi,\n\n>>>> CREATE TABLE super_foo ( partition NUMERIC, bar NUMERIC );\n>>>> ANALYZE super_foo ;\n>>>>\n>>>> CREATE TABLE sub_foo1 () INHERITS ( super_foo );\n>>>> CREATE TABLE sub_foo2 () INHERITS ( super_foo );\n> \n> Yes, this could be used instead of a view. But there is one thing\n> missing. You can't just insert into super_foo and aquire the \"correct\n> partition\". You will still have to insert into the correct underlying\n> table. \"Real\" partitioning will take care of correct partition\n> selection.\n\nI've recently used this method for partitioning data. In my setup \ninserts are done inside a pl/pgsql function called at regular intervals, \nso this isn't a problem for me. I didn't test it, but I think some rules \n(or a trigger) could do the trick.\n\n\nBest regards\n--\nMatteo Beccati\nhttp://phpadsnew.com\nhttp://phppgads.com\n\n", "msg_date": "Fri, 21 Jan 2005 15:37:20 +0100", "msg_from": "Matteo Beccati <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " }, { "msg_contents": "Hi,\n\nOn Fri, Jan 21, 2005 at 03:37:20PM +0100, Matteo Beccati wrote:\n> \n> >>>> CREATE TABLE super_foo ( partition NUMERIC, bar NUMERIC );\n> >>>> ANALYZE super_foo ;\n> >>>>\n> >>>> CREATE TABLE sub_foo1 () INHERITS ( super_foo );\n> >>>> CREATE TABLE sub_foo2 () INHERITS ( super_foo );\n> >\n> >Yes, this could be used instead of a view. But there is one thing\n> >missing. You can't just insert into super_foo and aquire the \"correct\n> >partition\". You will still have to insert into the correct underlying\n> >table. \"Real\" partitioning will take care of correct partition\n> >selection.\n> \n> I've recently used this method for partitioning data. In my setup \n> inserts are done inside a pl/pgsql function called at regular intervals, \n> so this isn't a problem for me. I didn't test it, but I think some rules \n> (or a trigger) could do the trick.\n\nYes, a pl/pgsql function or any software solution can solve this\nproblem, but what you normally expect from a partitioning support is\nthat you don't have to care about where to put your data due to the db\nwill take care for that. \nOf cause a trigger could do this as well, but don't forget, that a\ntrigger in dwh environments, where you process thousands of row at once\nduring data loading, is very expensive and therefore no solution for\nproduction use. \n\n\nRegards,\nYann\n", "msg_date": "Fri, 21 Jan 2005 17:05:39 +0100", "msg_from": "Yann Michel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: " } ]
[ { "msg_contents": "Hello everyone,\n\nI'm having a problem with some of my tables and I'm not sure if \npostgres' behaviour is maybe even a bug. I'm (still) using 8.0rc5 at \npresent.\n\nI have a table that contains among other columns one of the sort:\n\tpurge_date timestamp\n\nmost records will have this field set to NULL, at present all of them \nreally. the table has about 100k row right now. in regular intervals \nI'm doing some cleanup on this table using a query like:\n\tdelete from mytable where purge_date is not null and purge_date < \ncurrent_date\n\nAnd I have created these btree indexes:\n\tcreate index on mytable (purge_date);\n\tcreate index on mytable (purge_date) where purge_date is not null;\n\nmy problem is that the planner always chooses a seq scan over an index \nscan. only when I set enable_seqscan to false does it use an index \nscan. The costs of both plans are extremely different, with the index \nscan being 5-10 times more expensive than the seq scan, which is \nobviously not true given that all rows have this column set to NULL.\n\nI wondered why the planner was making such bad assumptions about the \nnumber of rows to find and had a look at pg_stats. and there was the \nsurprise:\nthere is no entry in pg_stats for that column at all!! I can only \nsuspect that this has to do with the column being all null. I tried to \nchange a few records to a not-null value, but re-ANALYZE didn't catch \nthem apparently.\n\nIs this desired behaviour for analyze? Can I change it somehow? If not, \nis there a better way to accomplish what I'm trying? I'm not to keen on \ndisabling seqscan for that query explicitly. It's a simple enough query \nand the planner should be able to find the right plan without help - \nand I'm sure it would if it had stats about it.\n\nAny help appreciated.\n\nBernd\n\n", "msg_date": "Thu, 20 Jan 2005 11:14:28 +0100", "msg_from": "Bernd Heller <[email protected]>", "msg_from_op": true, "msg_subject": "column without pg_stats entry?!" }, { "msg_contents": "On Thu, Jan 20, 2005 at 11:14:28 +0100,\n Bernd Heller <[email protected]> wrote:\n> \n> I wondered why the planner was making such bad assumptions about the \n> number of rows to find and had a look at pg_stats. and there was the \n> surprise:\n> there is no entry in pg_stats for that column at all!! I can only \n> suspect that this has to do with the column being all null. I tried to \n> change a few records to a not-null value, but re-ANALYZE didn't catch \n> them apparently.\n\nSomeone else reported this recently and I think it is going to be fixed.\n\n> Is this desired behaviour for analyze? Can I change it somehow? If not, \n> is there a better way to accomplish what I'm trying? I'm not to keen on \n> disabling seqscan for that query explicitly. It's a simple enough query \n> and the planner should be able to find the right plan without help - \n> and I'm sure it would if it had stats about it.\n\nIn the short run you could add an IS NOT NULL clause to your query.\nThe optimizer doesn't know that < being TRUE implies IS NOT NULL and\nso the partial index won't be used unless you add that clause explicitly.\n", "msg_date": "Thu, 20 Jan 2005 18:26:20 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column without pg_stats entry?!" }, { "msg_contents": "Bruno Wolff III <[email protected]> writes:\n> Bernd Heller <[email protected]> wrote:\n>> there is no entry in pg_stats for that column at all!! 
I can only \n>> suspect that this has to do with the column being all null.\n\n> Someone else reported this recently and I think it is going to be fixed.\n\nYeah, this was griped of a little bit ago, but I felt it was too close\nto 8.0 release to risk fooling with for this cycle.\n\n> In the short run you could add an IS NOT NULL clause to your query.\n> The optimizer doesn't know that < being TRUE implies IS NOT NULL and\n> so the partial index won't be used unless you add that clause explicitly.\n\nActually, as of 8.0 the optimizer *does* know that. I'm a bit surprised\nthat it didn't pick the partial index, since even without any analyze\nstats, the small physical size of the partial index should have clued it\nthat there weren't many such tuples. Could we see EXPLAIN output for\nboth cases (both settings of enable_seqscan)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jan 2005 01:02:59 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: column without pg_stats entry?! " }, { "msg_contents": "Ah no, I think both of you have mistaken me. The problem here is not \nabout partial indexes (not really anyway).\nI do have a partial index with \"WHERE purge_date IS NOT NULL\", and my \nquery does contain \"WHERE purge_date IS NOT NULL\" as well. The problem \nhere is, that all rows (or almost all) have the column purge_date set \nto NULL. The planner expected the query to return 33% of all rows in \nthe table. So it made the seq scan MUCH cheaper, which was right in the \nplanner's way of thinking because it didn't know anything about the \ncolumn from pg_stats.\n\nI had a look at the source code of the analyze command meanwhile:\nthe compute_*_stats functions don't return valid statistics if they \ncan't find any non-null values, and as a result no statistics tuple for \nthat column is created in pg_stats. I think this is wrong. Not finding \nany non-null values IS a very useful information, it means a \nnull-fraction of 100%. I have patched my postgres to return valid \nstatistics even in that case (patch below).\nThe difference now is that the planner doesn't assume anymore it would \nget about 33% of rows back, instead it knows that the null-fraction of \nthat column is approximately 1.0 and it chooses the index scan because \nthat is now the by far cheapest plan.\n\n--- analyze.c Thu Jan 20 11:37:58 2005\n+++ analyze.c.orig Sun Nov 14 03:04:13 2004\n@@ -1704,9 +1704,6 @@\n stats->stavalues[0] = mcv_values;\n stats->numvalues[0] = num_mcv;\n }\n- } else {\n- stats->stats_valid = true;\n- stats->stanullfrac = 1.0;\n }\n\n /* We don't need to bother cleaning up any of our temporary \npalloc's */\n@@ -2164,9 +2161,6 @@\n stats->numnumbers[slot_idx] = 1;\n slot_idx++;\n }\n- } else {\n- stats->stats_valid = true;\n- stats->stanullfrac = 1.0;\n }\n\n /* We don't need to bother cleaning up any of our temporary \npalloc's */\n\n\nOn 21.01.2005, at 7:02 Uhr, Tom Lane wrote:\n\n> Bruno Wolff III <[email protected]> writes:\n>> Bernd Heller <[email protected]> wrote:\n>>> there is no entry in pg_stats for that column at all!! 
I can only\n>>> suspect that this has to do with the column being all null.\n>\n>> Someone else reported this recently and I think it is going to be \n>> fixed.\n>\n> Yeah, this was griped of a little bit ago, but I felt it was too close\n> to 8.0 release to risk fooling with for this cycle.\n>\n>> In the short run you could add an IS NOT NULL clause to your query.\n>> The optimizer doesn't know that < being TRUE implies IS NOT NULL and\n>> so the partial index won't be used unless you add that clause \n>> explicitly.\n>\n> Actually, as of 8.0 the optimizer *does* know that. I'm a bit \n> surprised\n> that it didn't pick the partial index, since even without any analyze\n> stats, the small physical size of the partial index should have clued \n> it\n> that there weren't many such tuples. Could we see EXPLAIN output for\n> both cases (both settings of enable_seqscan)?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Fri, 21 Jan 2005 10:55:00 +0100", "msg_from": "Bernd Heller <[email protected]>", "msg_from_op": true, "msg_subject": "Re: column without pg_stats entry?! " } ]
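A short SQL sketch of the workaround discussed in the thread above. The table and column names (mytable, purge_date) come from the original post; the index name is illustrative. Until ANALYZE records statistics for an all-NULL column, the planner falls back to a default estimate of roughly a third of the table for the range test, so the small partial index together with an explicit IS NOT NULL clause is the practical way to keep the cleanup cheap.

-- Partial index over only the rows the cleanup ever touches; it stays
-- tiny while almost every purge_date is NULL.
CREATE INDEX mytable_purge_date_notnull_idx
    ON mytable (purge_date)
    WHERE purge_date IS NOT NULL;

-- The explicit IS NOT NULL lets releases before 8.0 match the partial
-- index; 8.0 can infer it from the < comparison, but writing it out is
-- harmless.
DELETE FROM mytable
 WHERE purge_date IS NOT NULL
   AND purge_date < current_date;

-- Check whether ANALYZE produced a statistics row for the column.
SELECT null_frac, n_distinct
  FROM pg_stats
 WHERE tablename = 'mytable'
   AND attname = 'purge_date';

With statistics in place (or with the patch above applied), EXPLAIN on the DELETE should show an index scan on the partial index rather than a sequential scan.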
[ { "msg_contents": "Hi to all, \n\nI have the following 2 examples. Now, regarding on the offset if it is small(10) or big(>50000) what is the impact on the performance of the query?? I noticed that if I return more data's(columns) or if I make more joins then the query runs even slower if the OFFSET is bigger. How can I somehow improve the performance on this? \n\nBest regards, \nAndy.\n\nexplain analyze\nSELECT o.id\nFROM report r \nINNER JOIN orders o ON o.id=r.id_order AND o.id_status=6\nORDER BY 1 LIMIT 10 OFFSET 10\n\n\nLimit (cost=44.37..88.75 rows=10 width=4) (actual time=0.160..0.275 rows=10 loops=1)\n -> Merge Join (cost=0.00..182150.17 rows=41049 width=4) (actual time=0.041..0.260 rows=20 loops=1)\n Merge Cond: (\"outer\".id_order = \"inner\".id)\n -> Index Scan using report_id_order_idx on report r (cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 rows=20 loops=1)\n -> Index Scan using orders_pkey on orders o (cost=0.00..24127.04 rows=42501 width=4) (actual time=0.013..0.078 rows=20 loops=1)\n Filter: (id_status = 6)\nTotal runtime: 0.373 ms\n\nexplain analyze\nSELECT o.id\nFROM report r \nINNER JOIN orders o ON o.id=r.id_order AND o.id_status=6\nORDER BY 1 LIMIT 10 OFFSET 1000000\n\nLimit (cost=31216.85..31216.85 rows=1 width=4) (actual time=1168.152..1168.152 rows=0 loops=1)\n -> Sort (cost=31114.23..31216.85 rows=41049 width=4) (actual time=1121.769..1152.246 rows=42693 loops=1)\n Sort Key: o.id\n -> Hash Join (cost=2329.99..27684.03 rows=41049 width=4) (actual time=441.879..925.498 rows=42693 loops=1)\n Hash Cond: (\"outer\".id_order = \"inner\".id)\n -> Seq Scan on report r (cost=0.00..23860.62 rows=42862 width=4) (actual time=38.634..366.035 rows=42864 loops=1)\n -> Hash (cost=2077.74..2077.74 rows=42501 width=4) (actual time=140.200..140.200 rows=0 loops=1)\n -> Seq Scan on orders o (cost=0.00..2077.74 rows=42501 width=4) (actual time=0.059..96.890 rows=42693 loops=1)\n Filter: (id_status = 6)\nTotal runtime: 1170.586 ms\n\n\n\n\n\n\n\n\nHi to all, I have the following 2 examples. Now, \nregarding on the offset if it is small(10) or big(>50000) what is the impact \non the performance of the query?? I noticed that if I return more \ndata's(columns) or if I make more joins then the query runs even \nslower if the OFFSET is bigger. How can I somehow improve the performance on \nthis? 
\nBest regards, Andy.\nexplain analyzeSELECT \no.idFROM \nreport r INNER JOIN orders o ON \no.id=r.id_order AND o.id_status=6ORDER BY 1 LIMIT 10 OFFSET 10\n \nLimit  (cost=44.37..88.75 rows=10 width=4) \n(actual time=0.160..0.275 rows=10 loops=1)  ->  Merge \nJoin  (cost=0.00..182150.17 rows=41049 width=4) (actual time=0.041..0.260 \nrows=20 loops=1)        Merge Cond: \n(\"outer\".id_order = \"inner\".id)        \n->  Index Scan using report_id_order_idx on report r  \n(cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 rows=20 \nloops=1)        ->  Index Scan \nusing orders_pkey on orders o  (cost=0.00..24127.04 rows=42501 width=4) \n(actual time=0.013..0.078 rows=20 \nloops=1)              \nFilter: (id_status = 6)Total runtime: 0.373 ms\n\nexplain analyzeSELECT \no.idFROM \nreport r INNER JOIN orders o ON \no.id=r.id_order AND o.id_status=6ORDER BY 1 LIMIT 10 OFFSET 1000000Limit  (cost=31216.85..31216.85 rows=1 width=4) (actual \ntime=1168.152..1168.152 rows=0 loops=1)  ->  Sort  \n(cost=31114.23..31216.85 rows=41049 width=4) (actual time=1121.769..1152.246 \nrows=42693 loops=1)        Sort Key: \no.id        ->  Hash Join  \n(cost=2329.99..27684.03 rows=41049 width=4) (actual time=441.879..925.498 \nrows=42693 \nloops=1)              \nHash Cond: (\"outer\".id_order = \n\"inner\".id)              \n->  Seq Scan on report r  (cost=0.00..23860.62 rows=42862 width=4) \n(actual time=38.634..366.035 rows=42864 \nloops=1)              \n->  Hash  (cost=2077.74..2077.74 rows=42501 width=4) (actual \ntime=140.200..140.200 rows=0 \nloops=1)                    \n->  Seq Scan on orders o  (cost=0.00..2077.74 rows=42501 width=4) \n(actual time=0.059..96.890 rows=42693 \nloops=1)                          \nFilter: (id_status = 6)Total runtime: 1170.586 \nms", "msg_date": "Thu, 20 Jan 2005 13:13:44 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "OFFSET impact on Performance???" }, { "msg_contents": "Andrei Bintintan wrote:\n> Hi to all,\n> \n> I have the following 2 examples. Now, regarding on the offset if it\n> is small(10) or big(>50000) what is the impact on the performance of\n> the query?? I noticed that if I return more data's(columns) or if I\n> make more joins then the query runs even slower if the OFFSET is\n> bigger. How can I somehow improve the performance on this?\n\nThere's really only one way to do an offset of 1000 and that's to fetch \n1000 rows and then some and discard the first 1000.\n\nIf you're using this to provide \"pages\" of results, could you use a cursor?\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Jan 2005 12:10:59 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "> If you're using this to provide \"pages\" of results, could you use a \n> cursor?\nWhat do you mean by that? Cursor?\n\nYes I'm using this to provide \"pages\", but If I jump to the last pages it \ngoes very slow.\n\nAndy.\n\n----- Original Message ----- \nFrom: \"Richard Huxton\" <[email protected]>\nTo: \"Andrei Bintintan\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Thursday, January 20, 2005 2:10 PM\nSubject: Re: [SQL] OFFSET impact on Performance???\n\n\n> Andrei Bintintan wrote:\n>> Hi to all,\n>>\n>> I have the following 2 examples. Now, regarding on the offset if it\n>> is small(10) or big(>50000) what is the impact on the performance of\n>> the query?? 
I noticed that if I return more data's(columns) or if I\n>> make more joins then the query runs even slower if the OFFSET is\n>> bigger. How can I somehow improve the performance on this?\n>\n> There's really only one way to do an offset of 1000 and that's to fetch \n> 1000 rows and then some and discard the first 1000.\n>\n> If you're using this to provide \"pages\" of results, could you use a \n> cursor?\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n> \n\n", "msg_date": "Thu, 20 Jan 2005 15:45:47 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Andrei Bintintan wrote:\n>> If you're using this to provide \"pages\" of results, could you use a \n>> cursor?\n> \n> What do you mean by that? Cursor?\n> \n> Yes I'm using this to provide \"pages\", but If I jump to the last pages \n> it goes very slow.\n\nDECLARE mycursor CURSOR FOR SELECT * FROM ...\nFETCH FORWARD 10 IN mycursor;\nCLOSE mycursor;\n\nRepeated FETCHes would let you step through your results. That won't \nwork if you have a web-app making repeated connections.\n\nIf you've got a web-application then you'll probably want to insert the \nresults into a cache table for later use.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Jan 2005 15:20:59 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "I am also very interesting in this very question.. Is there any way to\ndeclare a persistant cursor that remains open between pg sessions? \nThis would be better than a temp table because you would not have to\ndo the initial select and insert into a fresh table and incur those IO\ncosts, which are often very heavy, and the reason why one would want\nto use a cursor.\n\nAlex Turner\nNetEconomist\n\n\nOn Thu, 20 Jan 2005 15:20:59 +0000, Richard Huxton <[email protected]> wrote:\n> Andrei Bintintan wrote:\n> >> If you're using this to provide \"pages\" of results, could you use a\n> >> cursor?\n> >\n> > What do you mean by that? Cursor?\n> >\n> > Yes I'm using this to provide \"pages\", but If I jump to the last pages\n> > it goes very slow.\n> \n> DECLARE mycursor CURSOR FOR SELECT * FROM ...\n> FETCH FORWARD 10 IN mycursor;\n> CLOSE mycursor;\n> \n> Repeated FETCHes would let you step through your results. That won't\n> work if you have a web-app making repeated connections.\n> \n> If you've got a web-application then you'll probably want to insert the\n> results into a cache table for later use.\n> \n> --\n> Richard Huxton\n> Archonet Ltd\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Thu, 20 Jan 2005 11:39:16 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" 
}, { "msg_contents": "Richard Huxton wrote:\n> \n> If you've got a web-application then you'll probably want to insert the \n> results into a cache table for later use.\n> \n\nIf I have quite a bit of activity like this (people selecting 10000 out\nof a few million rows and paging through them in a web browser), would\nit be good to have a single table with a userid column shared by all\nusers, or a separate table for each user that can be truncated/dropped?\n\nI started out with one table; but with people doing 10s of thousand\nof inserts and deletes per session, I had a pretty hard time figuring\nout a reasonable vacuum strategy.\n\nEventually I started doing a whole bunch of create table tmp_XXXX\ntables where XXXX is a userid; and a script to drop these tables - but\nthat's quite ugly in a different way.\n\nWith 8.0 I guess I'll try the single table again - perhaps what I\nwant may be to always have a I/O throttled vacuum running... hmm.\n\nAny suggestions?\n", "msg_date": "Thu, 20 Jan 2005 08:49:39 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Alex Turner wrote:\n> I am also very interesting in this very question.. Is there any way\n> to declare a persistant cursor that remains open between pg sessions?\n\nNot sure how this would work. What do you do with multiple connections? \nOnly one can access the cursor, so which should it be?\n\n> This would be better than a temp table because you would not have to\n> do the initial select and insert into a fresh table and incur those\n> IO costs, which are often very heavy, and the reason why one would\n> want to use a cursor.\n\nI'm pretty sure two things mean there's less difference than you might \nexpect:\n1. Temp tables don't fsync\n2. A cursor will spill to disk beyond a certain size\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Jan 2005 16:53:14 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\"Andrei Bintintan\" <[email protected]> writes:\n\n> > If you're using this to provide \"pages\" of results, could you use a cursor?\n> What do you mean by that? Cursor?\n> \n> Yes I'm using this to provide \"pages\", but If I jump to the last pages it goes\n> very slow.\n\nThe best way to do pages for is not to use offset or cursors but to use an\nindex. This only works if you can enumerate all the sort orders the\napplication might be using and can have an index on each of them.\n\nTo do this the query would look something like:\n\nSELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\n\nThen you take note of the last value used on a given page and if the user\nselects \"next\" you pass that as the starting point for the next page.\n\nThis query takes the same amount of time no matter how many records are in the\ntable and no matter what page of the result set the user is on. It should\nactually be instantaneous even if the user is on the hundredth page of\nmillions of records because it uses an index both for the finding the right\npoint to start and for the ordering.\n\nIt also has the advantage that it works even if the list of items changes as\nthe user navigates. If you use OFFSET and someone inserts a record in the\ntable then the \"next\" page will overlap the current page. 
Worse, if someone\ndeletes a record then \"next\" will skip a record.\n\nThe disadvantages of this are a) it's hard (but not impossible) to go\nbackwards. And b) it's impossible to give the user a list of pages and let\nthem skip around willy nilly.\n\n\n(If this is for a web page then specifically don't recommend cursors. It will\nmean you'll have to have some complex session management system that\nguarantees the user will always come to the same postgres session and has some\ngarbage collection if the user disappears. And it means the URL is only good\nfor a limited amount of time. If they bookmark it it'll break if they come\nback the next day.)\n\n-- \ngreg\n\n", "msg_date": "20 Jan 2005 11:59:34 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Ron Mayer wrote:\n> Richard Huxton wrote:\n> \n>>\n>> If you've got a web-application then you'll probably want to insert \n>> the results into a cache table for later use.\n>>\n> \n> If I have quite a bit of activity like this (people selecting 10000 out\n> of a few million rows and paging through them in a web browser), would\n> it be good to have a single table with a userid column shared by all\n> users, or a separate table for each user that can be truncated/dropped?\n> \n> I started out with one table; but with people doing 10s of thousand\n> of inserts and deletes per session, I had a pretty hard time figuring\n> out a reasonable vacuum strategy.\n\nAs often as you can, and make sure your config allocates enough \nfree-space-map for them. Unless, of course, you end up I/O saturated.\n\n> Eventually I started doing a whole bunch of create table tmp_XXXX\n> tables where XXXX is a userid; and a script to drop these tables - but\n> that's quite ugly in a different way.\n> \n> With 8.0 I guess I'll try the single table again - perhaps what I\n> want may be to always have a I/O throttled vacuum running... hmm.\n\nWell, there have been some tweaks, but I don't know if they'll help in \nthis case.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Jan 2005 17:04:23 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Greg Stark wrote:\n> \"Andrei Bintintan\" <[email protected]> writes:\n> \n> \n>>>If you're using this to provide \"pages\" of results, could you use a cursor?\n>>\n>>What do you mean by that? Cursor?\n>>\n>>Yes I'm using this to provide \"pages\", but If I jump to the last pages it goes\n>>very slow.\n> \n> \n> The best way to do pages for is not to use offset or cursors but to use an\n> index. This only works if you can enumerate all the sort orders the\n> application might be using and can have an index on each of them.\n> \n> To do this the query would look something like:\n> \n> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\n> \n> Then you take note of the last value used on a given page and if the user\n> selects \"next\" you pass that as the starting point for the next page.\n\nGreg's is the most efficient, but you need to make sure you have a \nsuitable key available in the output of your select.\n\nAlso, since you are repeating the query you could get different results \nas people insert/delete rows. This might or might not be what you want.\n\nA similar solution is to partition by date/alphabet or similar, then \npage those results. 
That can reduce your resultset to a manageable size.\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 20 Jan 2005 17:24:36 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:\n\n> The best way to do pages for is not to use offset or cursors but to use an\n> index. This only works if you can enumerate all the sort orders the\n> application might be using and can have an index on each of them.\n> \n> To do this the query would look something like:\n> \n> SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\n> \n> Then you take note of the last value used on a given page and if the user\n> selects \"next\" you pass that as the starting point for the next page.\n\nthis will only work unchanged if the index is unique. imagine , for\nexample if you have more than 50 rows with the same value of col.\n\none way to fix this is to use ORDER BY col,oid\n\ngnari\n\n\n", "msg_date": "Thu, 20 Jan 2005 19:12:06 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "On Thu, 2005-01-20 at 19:12 +0000, Ragnar Hafsta� wrote:\n> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:\n> \n> > The best way to do pages for is not to use offset or cursors but to use an\n> > index. This only works if you can enumerate all the sort orders the\n> > application might be using and can have an index on each of them.\n> > \n> > To do this the query would look something like:\n> > \n> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\n> > \n> > Then you take note of the last value used on a given page and if the user\n> > selects \"next\" you pass that as the starting point for the next page.\n> \n> this will only work unchanged if the index is unique. imagine , for\n> example if you have more than 50 rows with the same value of col.\n> \n> one way to fix this is to use ORDER BY col,oid\n\nand a slightly more complex WHERE clause as well, of course\n\ngnari\n\n\n", "msg_date": "Thu, 20 Jan 2005 19:23:12 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Now I read all the posts and I have some answers.\n\nYes, I have a web aplication.\nI HAVE to know exactly how many pages I have and I have to allow the user to \njump to a specific page(this is where I used limit and offset). We have this \nfeature and I cannot take it out.\n\n\n>> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\nNow this solution looks very fast, but I cannot implement it, because I \ncannot jump from page 1 to page xxxx only to page 2. Because I know with \nthis type where did the page 1 ended. And we have some really complicated \nwhere's and about 10 tables are involved in the sql query.\n\nAbout the CURSOR I have to read more about them because this is my first \ntime when I hear about.\nI don't know if temporary tables are a solution, really I don't think so, \nthere are a lot of users that are working in the same time at the same page.\n\nSo... 
still DIGGING for solutions.\n\nAndy.\n\n----- Original Message ----- \nFrom: \"Ragnar Hafsta�\" <[email protected]>\nTo: <[email protected]>\nCc: \"Andrei Bintintan\" <[email protected]>; <[email protected]>\nSent: Thursday, January 20, 2005 9:23 PM\nSubject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n\n\n> On Thu, 2005-01-20 at 19:12 +0000, Ragnar Hafsta� wrote:\n>> On Thu, 2005-01-20 at 11:59 -0500, Greg Stark wrote:\n>>\n>> > The best way to do pages for is not to use offset or cursors but to use \n>> > an\n>> > index. This only works if you can enumerate all the sort orders the\n>> > application might be using and can have an index on each of them.\n>> >\n>> > To do this the query would look something like:\n>> >\n>> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\n>> >\n>> > Then you take note of the last value used on a given page and if the \n>> > user\n>> > selects \"next\" you pass that as the starting point for the next page.\n>>\n>> this will only work unchanged if the index is unique. imagine , for\n>> example if you have more than 50 rows with the same value of col.\n>>\n>> one way to fix this is to use ORDER BY col,oid\n>\n> and a slightly more complex WHERE clause as well, of course\n>\n> gnari\n>\n>\n> \n\n", "msg_date": "Fri, 21 Jan 2005 11:20:48 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\nAlex Turner <[email protected]> writes:\n\n> I am also very interesting in this very question.. Is there any way to\n> declare a persistant cursor that remains open between pg sessions? \n> This would be better than a temp table because you would not have to\n> do the initial select and insert into a fresh table and incur those IO\n> costs, which are often very heavy, and the reason why one would want\n> to use a cursor.\n\nTANSTAAFL. How would such a persistent cursor be implemented if not by\nbuilding a temporary table somewhere behind the scenes?\n\nThere could be some advantage if the data were stored in a temporary table\nmarked as not having to be WAL logged. Instead it could be automatically\ncleared on every database start.\n\n-- \ngreg\n\n", "msg_date": "25 Jan 2005 13:28:46 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "The problems still stays open.\n\nThe thing is that I have about 20 - 30 clients that are using that SQL query \nwhere the offset and limit are involved. So, I cannot create a temp table, \nbecause that means that I'll have to make a temp table for each session... \nwhich is a very bad ideea. Cursors somehow the same. In my application the \nWhere conditions can be very different for each user(session) apart.\n\nThe only solution that I see in the moment is to work at the query, or to \nwrite a more complex where function to limit the results output. So no \nreplace for Offset/Limit.\n\nBest regards,\nAndy.\n\n\n----- Original Message ----- \nFrom: \"Greg Stark\" <[email protected]>\nTo: <[email protected]>\nCc: \"Richard Huxton\" <[email protected]>; \"Andrei Bintintan\" \n<[email protected]>; <[email protected]>; \n<[email protected]>\nSent: Tuesday, January 25, 2005 8:28 PM\nSubject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n\n\n>\n> Alex Turner <[email protected]> writes:\n>\n>> I am also very interesting in this very question.. 
Is there any way to\n>> declare a persistant cursor that remains open between pg sessions?\n>> This would be better than a temp table because you would not have to\n>> do the initial select and insert into a fresh table and incur those IO\n>> costs, which are often very heavy, and the reason why one would want\n>> to use a cursor.\n>\n> TANSTAAFL. How would such a persistent cursor be implemented if not by\n> building a temporary table somewhere behind the scenes?\n>\n> There could be some advantage if the data were stored in a temporary table\n> marked as not having to be WAL logged. Instead it could be automatically\n> cleared on every database start.\n>\n> -- \n> greg\n>\n> \n\n", "msg_date": "Wed, 26 Jan 2005 12:11:49 +0200", "msg_from": "\"Andrei Bintintan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "As I read the docs, a temp table doesn't solve our problem, as it does\nnot persist between sessions. With a web page there is no guarentee\nthat you will receive the same connection between requests, so a temp\ntable doesn't solve the problem. It looks like you either have to\ncreate a real table (which is undesirable becuase it has to be\nphysicaly synced, and TTFB will be very poor) or create an application\ntier in between the web tier and the database tier to allow data to\npersist between requests tied to a unique session id.\n\nLooks like the solutions to this problem is not RDBMS IMHO.\n\nAlex Turner\nNetEconomist\n\n\nOn Wed, 26 Jan 2005 12:11:49 +0200, Andrei Bintintan <[email protected]> wrote:\n> The problems still stays open.\n> \n> The thing is that I have about 20 - 30 clients that are using that SQL query\n> where the offset and limit are involved. So, I cannot create a temp table,\n> because that means that I'll have to make a temp table for each session...\n> which is a very bad ideea. Cursors somehow the same. In my application the\n> Where conditions can be very different for each user(session) apart.\n> \n> The only solution that I see in the moment is to work at the query, or to\n> write a more complex where function to limit the results output. So no\n> replace for Offset/Limit.\n> \n> Best regards,\n> Andy.\n> \n> \n> ----- Original Message -----\n> From: \"Greg Stark\" <[email protected]>\n> To: <[email protected]>\n> Cc: \"Richard Huxton\" <[email protected]>; \"Andrei Bintintan\"\n> <[email protected]>; <[email protected]>;\n> <[email protected]>\n> Sent: Tuesday, January 25, 2005 8:28 PM\n> Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n> \n> \n> >\n> > Alex Turner <[email protected]> writes:\n> >\n> >> I am also very interesting in this very question.. Is there any way to\n> >> declare a persistant cursor that remains open between pg sessions?\n> >> This would be better than a temp table because you would not have to\n> >> do the initial select and insert into a fresh table and incur those IO\n> >> costs, which are often very heavy, and the reason why one would want\n> >> to use a cursor.\n> >\n> > TANSTAAFL. How would such a persistent cursor be implemented if not by\n> > building a temporary table somewhere behind the scenes?\n> >\n> > There could be some advantage if the data were stored in a temporary table\n> > marked as not having to be WAL logged. 
Instead it could be automatically\n> > cleared on every database start.\n> >\n> > --\n> > greg\n> >\n> >\n> \n>\n", "msg_date": "Wed, 26 Jan 2005 08:47:32 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Alex Turner wrote:\n> As I read the docs, a temp table doesn't solve our problem, as it does\n> not persist between sessions. With a web page there is no guarentee\n> that you will receive the same connection between requests, so a temp\n> table doesn't solve the problem. It looks like you either have to\n> create a real table (which is undesirable becuase it has to be\n> physicaly synced, and TTFB will be very poor) or create an application\n> tier in between the web tier and the database tier to allow data to\n> persist between requests tied to a unique session id.\n> \n> Looks like the solutions to this problem is not RDBMS IMHO.\n\nIt's less the RDBMS than the web application. You're trying to mix a \nstateful setup (the application) with a stateless presentation layer \n(the web). If you're using PHP (which doesn't offer a \"real\" middle \nlayer) you might want to look at memcached.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Jan 2005 13:57:00 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Although larger offsets have some effect, your real problem is the sort \n(of 42693 rows).\n\nTry:\n\nSELECT r.id_order\nFROM report r\nWHERE r.id_order IN\n (SELECT id\n FROM orders\n WHERE id_status = 6\n ORDER BY 1\n LIMIT 10 OFFSET 1000)\nORDER BY 1\n\nThe subquery doesn't *have* to sort because the table is already ordered \non the primary key.\nYou can still add a join to orders outside the subselect without \nsignificant cost.\n\nIncidentally, I don't know how you got the first plan - it should \ninclude a sort as well.\n\nAndrei Bintintan wrote:\n\n > explain analyze\n > SELECT o.id\n > FROM report r\n > INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6\n > ORDER BY 1 LIMIT 10 OFFSET 10\n > \n > Limit (cost=44.37..88.75 rows=10 width=4) (actual time=0.160..0.275 \nrows=10 loops=1)\n > -> Merge Join (cost=0.00..182150.17 rows=41049 width=4) (actual \ntime=0.041..0.260 rows=20 loops=1)\n > Merge Cond: (\"outer\".id_order = \"inner\".id)\n > -> Index Scan using report_id_order_idx on report r \n(cost=0.00..157550.90 rows=42862 width=4) (actual time=0.018..0.075 \nrows=20 loops=1)\n > -> Index Scan using orders_pkey on orders o \n(cost=0.00..24127.04 rows=42501 width=4) (actual time=0.013..0.078 \nrows=20 loops=1)\n > Filter: (id_status = 6)\n > Total runtime: 0.373 ms\n >\n > explain analyze\n > SELECT o.id\n > FROM report r\n > INNER JOIN orders o ON o.id=r.id_order AND o.id_status=6\n > ORDER BY 1 LIMIT 10 OFFSET 1000000\n > Limit (cost=31216.85..31216.85 rows=1 width=4) (actual \ntime=1168.152..1168.152 rows=0 loops=1)\n > -> Sort (cost=31114.23..31216.85 rows=41049 width=4) (actual \ntime=1121.769..1152.246 rows=42693 loops=1)\n > Sort Key: o.id\n > -> Hash Join (cost=2329.99..27684.03 rows=41049 width=4) \n(actual time=441.879..925.498 rows=42693 loops=1)\n > Hash Cond: (\"outer\".id_order = \"inner\".id)\n > -> Seq Scan on report r (cost=0.00..23860.62 \nrows=42862 width=4) (actual time=38.634..366.035 rows=42864 loops=1)\n > -> Hash (cost=2077.74..2077.74 rows=42501 width=4) \n(actual time=140.200..140.200 rows=0 loops=1)\n > -> Seq Scan on 
orders o (cost=0.00..2077.74 \nrows=42501 width=4) (actual time=0.059..96.890 rows=42693 loops=1)\n > Filter: (id_status = 6)\n > Total runtime: 1170.586 ms\n", "msg_date": "Thu, 27 Jan 2005 13:50:25 +1000", "msg_from": "David Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OFFSET impact on Performance???" }, { "msg_contents": "\n> As I read the docs, a temp table doesn't solve our problem, as it does\n> not persist between sessions. With a web page there is no guarentee\n> that you will receive the same connection between requests, so a temp\n> table doesn't solve the problem. It looks like you either have to\n> create a real table (which is undesirable becuase it has to be\n> physicaly synced, and TTFB will be very poor) or create an application\n> tier in between the web tier and the database tier to allow data to\n> persist between requests tied to a unique session id.\n>\n> Looks like the solutions to this problem is not RDBMS IMHO.\n>\n> Alex Turner\n> NetEconomist\n\n\tDid you miss the proposal to store arrays of the found rows id's in a \n\"cache\" table ? Is 4 bytes per result row still too large ?\n\n\tIf it's still too large, you can still implement the same cache in the \nfilesystem !\n\tIf you want to fetch 100.000 rows containing just an integer, in my case \n(psycopy) it's a lot faster to use an array aggregate. Time to get the \ndata in the application (including query) :\n\nselect id from temp\n\t=> 849 ms\nselect int_array_aggregate(id) as ids from temp\n\t=> 300 ms\n\n\tSo you can always fetch the whole wuery results (in the form of an \ninteger per row) and cache it in the filesystem. It won't work if you have \n10 million rows though !\n", "msg_date": "Tue, 01 Feb 2005 10:16:47 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
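A sketch of the keyset (last value seen) paging that Greg and Ragnar describe above, spelled out for a non-unique sort column. The names tab, col and id are placeholders, the composite index is an assumption, and the prepared statement only serves to make the parameters explicit.

-- One index per supported sort order, including the tie-breaker column.
CREATE INDEX tab_col_id_idx ON tab (col, id);

-- First page.
SELECT * FROM tab ORDER BY col, id LIMIT 50;

-- Later pages: resume strictly after the last (col, id) pair shown.
-- The leading col >= $1 keeps the condition index-friendly; the second
-- test drops the rows already delivered on the previous page.
PREPARE next_page(int, int) AS
    SELECT * FROM tab
     WHERE col >= $1
       AND (col > $1 OR id > $2)
     ORDER BY col, id
     LIMIT 50;

EXECUTE next_page(42, 1234);  -- last col and id values seen on the previous page

As noted in the thread, this stays equally fast on any page, but it suits next/previous navigation better than jumping straight to an arbitrary page number.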
[ { "msg_contents": "Andrei:\nHi to all, \n\nI have the following 2 examples. Now, regarding on the offset if it is small(10) or big(>50000) what is the impact on the performance of the query?? I noticed that if I return more data's(columns) or if I make more joins then the query runs even slower if the OFFSET is bigger. How can I \nsomehow improve the performance on this? \n\nMerlin:\nOffset is not suitable for traversal of large data sets. Better not use it at all!\n\nThere are many ways to deal with this problem, the two most direct being the view approach and the cursor approach.\n\ncursor approach:\ndeclare report_order with hold cursor for select * from report r, order o [...]\nRemember to close the cursor when you're done. Now fetch time is proportional to the number of rows fetched, and should be very fast. The major drawback to this approach is that cursors in postgres (currently) are always insensitive, so that record changes after you declare the cursor from other users are not visible to you. If this is a big deal, try the view approach.\n\nview approach:\ncreate view report_order as select * from report r, order o [...]\n\nand this:\nprepare fetch_from_report_order(numeric, numeric, int4) as\n\tselect * from report_order where order_id >= $1 and\n\t\t(order_id > $1 or report_id > $2)\n\t\torder by order_id, report_id limit $3;\n\nfetch next 1000 records from report_order:\nexecute fetch_from_report_order(o, f, 1000); o and f being the last key values you fetched (pass in zeroes to start it off).\n\nThis is not quite as fast as the cursor approach (but it will be when we get a proper row constructor, heh), but it more flexible in that it is sensitive to changes from other users. This is more of a 'permanent' binding whereas cursor is a binding around a particular task.\n\nGood luck!\nMerlin\n\n\n", "msg_date": "Thu, 20 Jan 2005 08:20:21 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: OFFSET impact on Performance???" } ]
[ { "msg_contents": "Dear community,\n\nMy company, which I actually represent, is a fervent user of PostgreSQL.\nWe used to make all our applications using PostgreSQL for more than 5 years.\nWe usually do classical client/server applications under Linux, and Web \ninterface (php, perl, C/C++). We used to manage also public web services with \n10/15 millions records and up to 8 millions pages view by month.\n\nNow we are in front of a new need, but we do not find any good solution with \nPostgreSQL.\nWe need to make a sort of directory of millions of data growing about 4/8 \nmillions per month, and to be able to be used by many users from the web. In \norder to do this, our solution need to be able to run perfectly with many \ninsert and many select access (done before each insert, and done by web site \nvisitors). We will also need to make a search engine for the millions of data \n(140/150 millions records at the immediate beginning) ... No it's not google, \nbut the kind of volume of data stored in the main table is similar.\n\nThen ... we have made some tests, with the actual servers we have here, like a \nBi-Pro Xeon 2.8 Ghz, with 4 Gb of RAM and the result of the cumulative \ninserts, and select access is slowing down the service really quickly ... \n(Load average is going up to 10 really quickly on the database).\n\nWe were at this moment thinking about a Cluster solution ... We saw on the \nInternet many solution talking about Cluster solution using MySQL ... but \nnothing about PostgreSQL ... the idea is to use several servers to make a \nsort of big virtual server using the disk space of each server as one, and \nhaving the ability to use the CPU and RAM of each servers in order to \nmaintain good service performance ...one can imagin it is like a GFS but \ndedicated to postgreSQL...\n\nIs there any solution with PostgreSQL matching these needs ... ?\nDo we have to backport our development to MySQL for this kind of problem ?\nIs there any other solution than a Cluster for our problem ?\n\nLooking for your reply,\n\nRegards,\n-- \nHervᅵ\n", "msg_date": "Thu, 20 Jan 2005 15:03:31 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, 20 Jan 2005 15:03:31 +0100, Hervé Piedvache <[email protected]> wrote:\n\n> We were at this moment thinking about a Cluster solution ... We saw on the\n> Internet many solution talking about Cluster solution using MySQL ... but\n> nothing about PostgreSQL ... 
the idea is to use several servers to make a\n> sort of big virtual server using the disk space of each server as one, and\n> having the ability to use the CPU and RAM of each servers in order to\n> maintain good service performance ...one can imagin it is like a GFS but\n> dedicated to postgreSQL...\n> \n\nforget mysql cluster for now.\nWe have a small database which size is 500 Mb.\nIt is not possible to load these base in a computer with 2 Mb of RAM\nand loading the base in RAM is required.\nSo, we shrink the database and it is ok with 350 Mb to fit in the 2 Gb RAM.\nFirst tests of performance on a basic request: 500x slower, yes 500x.\nThis issue is reported to mysql team but no answer (and correction)\n\nActually, the solution is running with a replication database: 1 node\nfor write request and all the other nodes for read requests and the\nload balancer is made with round robin solution.\n\n\n-- \nJean-Max Reymond\nCKR Solutions\nNice France\nhttp://www.ckr-solutions.com\n", "msg_date": "Thu, 20 Jan 2005 15:23:03 +0100", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Is there any solution with PostgreSQL matching these needs ... ?\n\nYou want: http://www.slony.info/\n\n> Do we have to backport our development to MySQL for this kind of problem ?\n> Is there any other solution than a Cluster for our problem ?\n\nWell, Slony does replication which is basically what you want :)\n\nOnly master->slave though, so you will need to have all inserts go via \nthe master server, but selects can come off any server.\n\nChris\n", "msg_date": "Thu, 20 Jan 2005 14:24:05 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "* Herv? Piedvache ([email protected]) wrote:\n> Is there any solution with PostgreSQL matching these needs ... ?\n\nYou might look into pg_pool. Another possibility would be slony, though\nI'm not sure it's to the point you need it at yet, depends on if you can\nhandle some delay before an insert makes it to the slave select systems.\n\n> Do we have to backport our development to MySQL for this kind of problem ?\n\nWell, hopefully not. :)\n\n> Is there any other solution than a Cluster for our problem ?\n\nBigger server, more CPUs/disks in one box. Try to partition up your\ndata some way such that it can be spread across multiple machines, then\nif you need to combine the data have it be replicated using slony to a\nbig box that has a view which joins all the tables and do your big\nqueries against that.\n\nJust some thoughts.\n\n\tStephen", "msg_date": "Thu, 20 Jan 2005 09:30:45 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 15:24, Christopher Kings-Lynne a ᅵcrit :\n> > Is there any solution with PostgreSQL matching these needs ... ?\n>\n> You want: http://www.slony.info/\n>\n> > Do we have to backport our development to MySQL for this kind of problem\n> > ? Is there any other solution than a Cluster for our problem ?\n>\n> Well, Slony does replication which is basically what you want :)\n>\n> Only master->slave though, so you will need to have all inserts go via\n> the master server, but selects can come off any server.\n\nSorry but I don't agree with this ... Slony is a replication solution ... 
I \ndon't need replication ... what will I do when my database will grow up to 50 \nGb ... I'll need more than 50 Gb of RAM on each server ???\nThis solution is not very realistic for me ...\n\nI need a Cluster solution not a replication one or explain me in details how I \nwill do for managing the scalabilty of my database ...\n\nregards,\n-- \nHervᅵ\n", "msg_date": "Thu, 20 Jan 2005 15:36:08 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Sorry but I don't agree with this ... Slony is a replication solution ... I \n> don't need replication ... what will I do when my database will grow up to 50 \n> Gb ... I'll need more than 50 Gb of RAM on each server ???\n> This solution is not very realistic for me ...\n> \n> I need a Cluster solution not a replication one or explain me in details how I \n> will do for managing the scalabilty of my database ...\n\nBuy Oracle\n", "msg_date": "Thu, 20 Jan 2005 14:38:34 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a ᅵcrit :\n> * Herv? Piedvache ([email protected]) wrote:\n> > Is there any solution with PostgreSQL matching these needs ... ?\n>\n> You might look into pg_pool. Another possibility would be slony, though\n> I'm not sure it's to the point you need it at yet, depends on if you can\n> handle some delay before an insert makes it to the slave select systems.\n\nI think not ... pgpool or slony are replication solutions ... but as I have \nsaid to Christopher Kings-Lynne how I'll manage the scalabilty of the \ndatabase ? I'll need several servers able to load a database growing and \ngrowing to get good speed performance ...\n\n> > Do we have to backport our development to MySQL for this kind of problem\n> > ?\n>\n> Well, hopefully not. :)\n\nI hope so ;o)\n\n> > Is there any other solution than a Cluster for our problem ?\n>\n> Bigger server, more CPUs/disks in one box. Try to partition up your\n> data some way such that it can be spread across multiple machines, then\n> if you need to combine the data have it be replicated using slony to a\n> big box that has a view which joins all the tables and do your big\n> queries against that.\n\nBut I'll arrive to limitation of a box size quickly I thing a 4 processors \nwith 64 Gb of RAM ... and after ?\n\nregards,\n-- \nHervᅵ\n", "msg_date": "Thu, 20 Jan 2005 15:39:49 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 15:38, Christopher Kings-Lynne a ᅵcrit :\n> > Sorry but I don't agree with this ... Slony is a replication solution ...\n> > I don't need replication ... what will I do when my database will grow up\n> > to 50 Gb ... I'll need more than 50 Gb of RAM on each server ???\n> > This solution is not very realistic for me ...\n> >\n> > I need a Cluster solution not a replication one or explain me in details\n> > how I will do for managing the scalabilty of my database ...\n>\n> Buy Oracle\n\nI think this is not my solution ... sorry I'm talking about finding a \nPostgreSQL solution ... \n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 
33-144949902\n", "msg_date": "Thu, 20 Jan 2005 15:42:06 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hervᅵ Piedvache wrote:\n\n>Dear community,\n>\n>My company, which I actually represent, is a fervent user of PostgreSQL.\n>We used to make all our applications using PostgreSQL for more than 5 years.\n>We usually do classical client/server applications under Linux, and Web \n>interface (php, perl, C/C++). We used to manage also public web services with \n>10/15 millions records and up to 8 millions pages view by month.\n> \n>\nDepending on your needs either:\n\nSlony: www.slony.info\n\nor\n\nReplicator: www.commandprompt.com\n\nWill both do what you want. Replicator is easier to setup but\nSlony is free.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>Now we are in front of a new need, but we do not find any good solution with \n>PostgreSQL.\n>We need to make a sort of directory of millions of data growing about 4/8 \n>millions per month, and to be able to be used by many users from the web. In \n>order to do this, our solution need to be able to run perfectly with many \n>insert and many select access (done before each insert, and done by web site \n>visitors). We will also need to make a search engine for the millions of data \n>(140/150 millions records at the immediate beginning) ... No it's not google, \n>but the kind of volume of data stored in the main table is similar.\n>\n>Then ... we have made some tests, with the actual servers we have here, like a \n>Bi-Pro Xeon 2.8 Ghz, with 4 Gb of RAM and the result of the cumulative \n>inserts, and select access is slowing down the service really quickly ... \n>(Load average is going up to 10 really quickly on the database).\n>\n>We were at this moment thinking about a Cluster solution ... We saw on the \n>Internet many solution talking about Cluster solution using MySQL ... but \n>nothing about PostgreSQL ... the idea is to use several servers to make a \n>sort of big virtual server using the disk space of each server as one, and \n>having the ability to use the CPU and RAM of each servers in order to \n>maintain good service performance ...one can imagin it is like a GFS but \n>dedicated to postgreSQL...\n>\n>Is there any solution with PostgreSQL matching these needs ... ?\n>Do we have to backport our development to MySQL for this kind of problem ?\n>Is there any other solution than a Cluster for our problem ?\n>\n>Looking for your reply,\n>\n>Regards,\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 06:44:16 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "* Herv? Piedvache ([email protected]) wrote:\n> Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a écrit :\n> > * Herv? Piedvache ([email protected]) wrote:\n> > > Is there any solution with PostgreSQL matching these needs ... ?\n> >\n> > You might look into pg_pool. 
Another possibility would be slony, though\n> > I'm not sure it's to the point you need it at yet, depends on if you can\n> > handle some delay before an insert makes it to the slave select systems.\n> \n> I think not ... pgpool or slony are replication solutions ... but as I have \n> said to Christopher Kings-Lynne how I'll manage the scalabilty of the \n> database ? I'll need several servers able to load a database growing and \n> growing to get good speed performance ...\n\nThey're both replication solutions, but they also help distribute the\nload. For example:\n\npg_pool will distribute the select queries amoung the servers. They'll\nall get the inserts, so that hurts, but at least the select queries are\ndistributed.\n\nslony is similar, but your application level does the load distribution\nof select statements instead of pg_pool. Your application needs to know\nto send insert statements to the 'main' server, and select from the\nothers.\n\n> > > Is there any other solution than a Cluster for our problem ?\n> >\n> > Bigger server, more CPUs/disks in one box. Try to partition up your\n> > data some way such that it can be spread across multiple machines, then\n> > if you need to combine the data have it be replicated using slony to a\n> > big box that has a view which joins all the tables and do your big\n> > queries against that.\n> \n> But I'll arrive to limitation of a box size quickly I thing a 4 processors \n> with 64 Gb of RAM ... and after ?\n\nGo to non-x86 hardware after if you're going to continue to increase the\nsize of the server. Personally I think your better bet might be to\nfigure out a way to partition up your data (isn't that what google\ndoes anyway?).\n\n\tStephen", "msg_date": "Thu, 20 Jan 2005 09:44:16 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "\nOn Jan 20, 2005, at 9:36 AM, Hervé Piedvache wrote:\n\n> Sorry but I don't agree with this ... Slony is a replication solution \n> ... I\n> don't need replication ... what will I do when my database will grow \n> up to 50\n> Gb ... I'll need more than 50 Gb of RAM on each server ???\n\nSlony doesn't use much ram. The mysql clustering product, ndb I believe \nit is called, requires all data fit in RAM. (At least, it used to). \nWhat you'll need is disk space.\n\nAs for a cluster I think you are thinking of multi-master replication.\n\nYou should look into what others have said about trying to partiition \ndata among several boxes and then join the results together.\n\nOr you could fork over hundreds of thousands of dollars for Oracle's \nRAC.\n\n--\nJeff Trout <[email protected]>\nhttp://www.jefftrout.com/\nhttp://www.stuarthamm.net/\n\n", "msg_date": "Thu, 20 Jan 2005 09:48:07 -0500", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Stephen Frost wrote:\n\n>* Herv? Piedvache ([email protected]) wrote:\n> \n>\n>>Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a �crit :\n>> \n>>\n>>>* Herv? Piedvache ([email protected]) wrote:\n>>> \n>>>\n>>>>Is there any solution with PostgreSQL matching these needs ... ?\n>>>> \n>>>>\n>>>You might look into pg_pool. Another possibility would be slony, though\n>>>I'm not sure it's to the point you need it at yet, depends on if you can\n>>>handle some delay before an insert makes it to the slave select systems.\n>>> \n>>>\n>>I think not ... 
pgpool or slony are replication solutions ... but as I have \n>>said to Christopher Kings-Lynne how I'll manage the scalabilty of the \n>>database ? I'll need several servers able to load a database growing and \n>>growing to get good speed performance ...\n>> \n>>\n>\n>They're both replication solutions, but they also help distribute the\n>load. For example:\n>\n>pg_pool will distribute the select queries amoung the servers. They'll\n>all get the inserts, so that hurts, but at least the select queries are\n>distributed.\n>\n>slony is similar, but your application level does the load distribution\n>of select statements instead of pg_pool. Your application needs to know\n>to send insert statements to the 'main' server, and select from the\n>others.\n> \n>\nYou can put pgpool in front of replicator or slony to get load\nbalancing for reads.\n\n> \n>\n>>>>Is there any other solution than a Cluster for our problem ?\n>>>> \n>>>>\n>>>Bigger server, more CPUs/disks in one box. Try to partition up your\n>>>data some way such that it can be spread across multiple machines, then\n>>>if you need to combine the data have it be replicated using slony to a\n>>>big box that has a view which joins all the tables and do your big\n>>>queries against that.\n>>> \n>>>\n>>But I'll arrive to limitation of a box size quickly I thing a 4 processors \n>>with 64 Gb of RAM ... and after ?\n>> \n>>\nOpteron.\n\n\n>\n>Go to non-x86 hardware after if you're going to continue to increase the\n>size of the server. Personally I think your better bet might be to\n>figure out a way to partition up your data (isn't that what google\n>does anyway?).\n>\n>\tStephen\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 06:49:56 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">>>Sorry but I don't agree with this ... Slony is a replication solution ...\n>>>I don't need replication ... what will I do when my database will grow up\n>>>to 50 Gb ... I'll need more than 50 Gb of RAM on each server ???\n>>>This solution is not very realistic for me ...\n>>>\n>>>I need a Cluster solution not a replication one or explain me in details\n>>>how I will do for managing the scalabilty of my database ...\n>>\n>>Buy Oracle\n> \n> \n> I think this is not my solution ... sorry I'm talking about finding a \n> PostgreSQL solution ... \n\nMy point being is that there is no free solution. There simply isn't. \nI don't know why you insist on keeping all your data in RAM, but the \nmysql cluster requires that ALL data MUST fit in RAM all the time.\n\nPostgreSQL has replication, but not partitioning (which is what you want).\n\nSo, your only option is Oracle or another very expensive commercial \ndatabase.\n\nChris\n", "msg_date": "Thu, 20 Jan 2005 14:51:21 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 15:48, Jeff a ᅵcrit :\n> On Jan 20, 2005, at 9:36 AM, Hervᅵ Piedvache wrote:\n> > Sorry but I don't agree with this ... Slony is a replication solution\n> > ... I\n> > don't need replication ... 
what will I do when my database will grow\n> > up to 50\n> > Gb ... I'll need more than 50 Gb of RAM on each server ???\n>\n> Slony doesn't use much ram. The mysql clustering product, ndb I believe\n> it is called, requires all data fit in RAM. (At least, it used to).\n> What you'll need is disk space.\n\nSlony do not use RAM ... but PostgreSQL will need RAM for accessing a database \nof 50 Gb ... so having two servers with the same configuration replicated by \nslony do not slove the problem of the scalability of the database ...\n\n> As for a cluster I think you are thinking of multi-master replication.\n\nNo I'm really thinking about a Cluster solution ... having several servers \nmaking one big virtual server to have several processors, and many RAM in \nmany boxes ...\n\n> You should look into what others have said about trying to partiition\n> data among several boxes and then join the results together.\n\n??? Who talk about this ?\n\n> Or you could fork over hundreds of thousands of dollars for Oracle's\n> RAC.\n\nNo please do not talk about this again ... I'm looking about a PostgreSQL \nsolution ... I know RAC ... and I'm not able to pay for a RAC certify \nhardware configuration plus a RAC Licence.\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 15:54:23 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">>Or you could fork over hundreds of thousands of dollars for Oracle's\n>>RAC.\n> \n> \n> No please do not talk about this again ... I'm looking about a PostgreSQL \n> solution ... I know RAC ... and I'm not able to pay for a RAC certify \n> hardware configuration plus a RAC Licence.\n\nThere is absolutely zero PostgreSQL solution...\n\nYou may have to split the data yourself onto two independent db servers \nand combine the results somehow in your application.\n\nChris\n", "msg_date": "Thu, 20 Jan 2005 14:58:42 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Joshua,\n\nLe Jeudi 20 Janvier 2005 15:44, Joshua D. Drake a ᅵcrit :\n> Hervᅵ Piedvache wrote:\n> >\n> >My company, which I actually represent, is a fervent user of PostgreSQL.\n> >We used to make all our applications using PostgreSQL for more than 5\n> > years. We usually do classical client/server applications under Linux,\n> > and Web interface (php, perl, C/C++). We used to manage also public web\n> > services with 10/15 millions records and up to 8 millions pages view by\n> > month.\n>\n> Depending on your needs either:\n>\n> Slony: www.slony.info\n>\n> or\n>\n> Replicator: www.commandprompt.com\n>\n> Will both do what you want. Replicator is easier to setup but\n> Slony is free.\n\nNo ... as I have said ... how I'll manage a database getting a table of may be \n250 000 000 records ? I'll need incredible servers ... to get quick access or \nindex reading ... no ?\n\nSo what we would like to get is a pool of small servers able to make one \nvirtual server ... for that is called a Cluster ... no ?\n\nI know they are not using PostgreSQL ... 
but how a company like Google do to \nget an incredible database in size and so quick access ?\n\nregards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 16:00:47 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 15:51, Christopher Kings-Lynne a ᅵcrit :\n> >>>Sorry but I don't agree with this ... Slony is a replication solution\n> >>> ... I don't need replication ... what will I do when my database will\n> >>> grow up to 50 Gb ... I'll need more than 50 Gb of RAM on each server\n> >>> ??? This solution is not very realistic for me ...\n> >>>\n> >>>I need a Cluster solution not a replication one or explain me in details\n> >>>how I will do for managing the scalabilty of my database ...\n> >>\n> >>Buy Oracle\n> >\n> > I think this is not my solution ... sorry I'm talking about finding a\n> > PostgreSQL solution ...\n>\n> My point being is that there is no free solution. There simply isn't.\n> I don't know why you insist on keeping all your data in RAM, but the\n> mysql cluster requires that ALL data MUST fit in RAM all the time.\n\nI don't insist about have data in RAM .... but when you use PostgreSQL with \nbig database you know that for quick access just for reading the index file \nfor example it's better to have many RAM as possible ... I just want to be \nable to get a quick access with a growing and growind database ...\n\n> PostgreSQL has replication, but not partitioning (which is what you want).\n\n:o(\n\n> So, your only option is Oracle or another very expensive commercial\n> database.\n\nThat's not a good news ...\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 16:02:39 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">\n>No please do not talk about this again ... I'm looking about a PostgreSQL \n>solution ... I know RAC ... and I'm not able to pay for a RAC certify \n>hardware configuration plus a RAC Licence.\n> \n>\nWhat you want does not exist for PostgreSQL. You will either\nhave to build it yourself or pay somebody to build it for you.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>Regards,\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 07:03:01 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">So what we would like to get is a pool of small servers able to make one \n>virtual server ... for that is called a Cluster ... no ?\n>\n>I know they are not using PostgreSQL ... 
but how a company like Google do to \n>get an incredible database in size and so quick access ?\n> \n>\nYou could use dblink with multiple servers across data partitions\nwithin PostgreSQL but I don't know how fast that would be.\n\nJ\n\n\n\n>regards,\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 07:04:19 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n>>> Or you could fork over hundreds of thousands of dollars for Oracle's\n>>> RAC.\n>>\n>>\n>>\n>> No please do not talk about this again ... I'm looking about a \n>> PostgreSQL solution ... I know RAC ... and I'm not able to pay for a \n>> RAC certify hardware configuration plus a RAC Licence.\n>\n>\n> There is absolutely zero PostgreSQL solution...\n\n\nI just replied the same thing but then I was thinking. Couldn't he use \nmultiple databases\nover multiple servers with dblink?\n\nIt is not exactly how I would want to do it, but it would provide what \nhe needs I think???\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>\n> You may have to split the data yourself onto two independent db \n> servers and combine the results somehow in your application.\n>\n> Chris\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 07:05:25 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "* Herv? Piedvache ([email protected]) wrote:\n> I know they are not using PostgreSQL ... but how a company like Google do to \n> get an incredible database in size and so quick access ?\n\nThey segment their data across multiple machines and have an algorithm\nwhich tells the application layer which machine to contact for what\ndata.\n\n\tStephen", "msg_date": "Thu, 20 Jan 2005 10:07:37 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 16:05, Joshua D. Drake a ᅵcrit :\n> Christopher Kings-Lynne wrote:\n> >>> Or you could fork over hundreds of thousands of dollars for Oracle's\n> >>> RAC.\n> >>\n> >> No please do not talk about this again ... I'm looking about a\n> >> PostgreSQL solution ... I know RAC ... and I'm not able to pay for a\n> >> RAC certify hardware configuration plus a RAC Licence.\n> >\n> > There is absolutely zero PostgreSQL solution...\n>\n> I just replied the same thing but then I was thinking. Couldn't he use\n> multiple databases\n> over multiple servers with dblink?\n>\n> It is not exactly how I would want to do it, but it would provide what\n> he needs I think???\n\nYes seems to be the only solution ... but I'm a little disapointed about \nthis ... 
could you explain me why there is not this kind of \nfunctionnality ... it seems to be a real need for big applications no ?\n\nThanks all for your answers ...\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 16:07:51 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "* Christopher Kings-Lynne ([email protected]) wrote:\n> PostgreSQL has replication, but not partitioning (which is what you want).\n\nIt doesn't have multi-server partitioning.. It's got partitioning\nwithin a single server (doesn't it? I thought it did, I know it was\ndiscussed w/ the guy from Cox Communications and I thought he was using\nit :).\n\n> So, your only option is Oracle or another very expensive commercial \n> database.\n\nOr partition the data at the application layer.\n\n\tStephen", "msg_date": "Thu, 20 Jan 2005 10:08:47 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">> then I was thinking. Couldn't he use\n>>multiple databases\n>>over multiple servers with dblink?\n>>\n>>It is not exactly how I would want to do it, but it would provide what\n>>he needs I think???\n>> \n>>\n>\n>Yes seems to be the only solution ... but I'm a little disapointed about \n>this ... could you explain me why there is not this kind of \n>functionnality ... it seems to be a real need for big applications no ?\n> \n>\nBecause it is really, really hard to do correctly and hard\nequals expensive.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>Thanks all for your answers ...\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 07:12:42 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hervᅵ Piedvache wrote:\n\n> \n> No ... as I have said ... how I'll manage a database getting a table of may be \n> 250 000 000 records ? I'll need incredible servers ... to get quick access or \n> index reading ... no ?\n> \n> So what we would like to get is a pool of small servers able to make one \n> virtual server ... for that is called a Cluster ... no ?\n> \n> I know they are not using PostgreSQL ... but how a company like Google do to \n> get an incredible database in size and so quick access ?\n\nProbably by carefully partitioning their data. I can't imagine anything\nbeing fast on a single table in 250,000,000 tuple range. Nor can I\nreally imagine any database that efficiently splits a single table\nacross multiple machines (or even inefficiently unless some internal\npartitioning is being done).\n\nSo, you'll have to do some work at your end and not just hope that\na \"magic bullet\" is available.\n\nOnce you've got the data partitioned, the question becomes one of\nhow to inhance performance/scalability. 
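The "do some work at your end" part -- partition the data and recombine in the application -- can be sketched in a few lines. This is a rough illustration only, assuming psycopg2 and invented connection strings and table names:

import psycopg2

PARTITIONS = [
    "host=part1 dbname=app user=app",
    "host=part2 dbname=app user=app",
]

def fan_out(sql, params=()):
    # Run the same query on every partition server and merge the rows.
    rows = []
    for dsn in PARTITIONS:
        conn = psycopg2.connect(dsn)
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            rows.extend(cur.fetchall())
        finally:
            conn.close()
    return rows

def top_urls(day):
    # Each server only holds its slice of a big "hits" table, so the
    # per-partition counts have to be re-combined here.
    totals = {}
    for url, n in fan_out("SELECT url, count(*) FROM hits "
                          "WHERE day = %s GROUP BY url", (day,)):
        totals[url] = totals.get(url, 0) + n
    return totals

The client-side re-aggregation is the price of doing it by hand; dblink could move that step back into one of the databases instead.
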
Have you considered RAIDb?\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Thu, 20 Jan 2005 08:14:28 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 16:14, Steve Wampler a ᅵcrit :\n> Once you've got the data partitioned, the question becomes one of\n> how to inhance performance/scalability. Have you considered RAIDb?\n\nNo but I'll seems to be very interesting ... close to the explanation of \nJoshua ... but automaticly done ...\n\nThanks !\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 16:23:17 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Google uses something called the google filesystem, look it up in \ngoogle. It is a distributed file system.\n\nDave\n\nHervᅵ Piedvache wrote:\n\n>Joshua,\n>\n>Le Jeudi 20 Janvier 2005 15:44, Joshua D. Drake a ᅵcrit :\n> \n>\n>>Hervᅵ Piedvache wrote:\n>> \n>>\n>>>My company, which I actually represent, is a fervent user of PostgreSQL.\n>>>We used to make all our applications using PostgreSQL for more than 5\n>>>years. We usually do classical client/server applications under Linux,\n>>>and Web interface (php, perl, C/C++). We used to manage also public web\n>>>services with 10/15 millions records and up to 8 millions pages view by\n>>>month.\n>>> \n>>>\n>>Depending on your needs either:\n>>\n>>Slony: www.slony.info\n>>\n>>or\n>>\n>>Replicator: www.commandprompt.com\n>>\n>>Will both do what you want. Replicator is easier to setup but\n>>Slony is free.\n>> \n>>\n>\n>No ... as I have said ... how I'll manage a database getting a table of may be \n>250 000 000 records ? I'll need incredible servers ... to get quick access or \n>index reading ... no ?\n>\n>So what we would like to get is a pool of small servers able to make one \n>virtual server ... for that is called a Cluster ... no ?\n>\n>I know they are not using PostgreSQL ... but how a company like Google do to \n>get an incredible database in size and so quick access ?\n>\n>regards,\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nGoogle uses something called the google filesystem, look it up in\ngoogle. It is a distributed file system.\n\nDave\n\nHervᅵ Piedvache wrote:\n\nJoshua,\n\nLe Jeudi 20 Janvier 2005 15:44, Joshua D. Drake a ᅵcrit :\n \n\nHervᅵ Piedvache wrote:\n \n\nMy company, which I actually represent, is a fervent user of PostgreSQL.\nWe used to make all our applications using PostgreSQL for more than 5\nyears. We usually do classical client/server applications under Linux,\nand Web interface (php, perl, C/C++). We used to manage also public web\nservices with 10/15 millions records and up to 8 millions pages view by\nmonth.\n \n\nDepending on your needs either:\n\nSlony: www.slony.info\n\nor\n\nReplicator: www.commandprompt.com\n\nWill both do what you want. Replicator is easier to setup but\nSlony is free.\n \n\n\nNo ... as I have said ... how I'll manage a database getting a table of may be \n250 000 000 records ? I'll need incredible servers ... to get quick access or \nindex reading ... 
no ?\n\nSo what we would like to get is a pool of small servers able to make one \nvirtual server ... for that is called a Cluster ... no ?\n\nI know they are not using PostgreSQL ... but how a company like Google do to \nget an incredible database in size and so quick access ?\n\nregards,\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Thu, 20 Jan 2005 10:23:34 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "I have no experience with pgCluster, but I found:\nPGCluster is a multi-master and synchronous replication system that\nsupports load balancing of PostgreSQL.\nhttp://www.software-facilities.com/databases-software/pgcluster.php\n\nMay be some have some expierience with this tool?\n\n----- Original Message ----- \nFrom: \"Christopher Kings-Lynne\" <[email protected]>\nTo: \"Hervᅵ Piedvache\" <[email protected]>\nCc: \"Jeff\" <[email protected]>; <[email protected]>\nSent: Thursday, January 20, 2005 4:58 PM\nSubject: [spam] Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n\n\n>>>Or you could fork over hundreds of thousands of dollars for Oracle's\n>>>RAC.\n>>\n>>\n>> No please do not talk about this again ... I'm looking about a PostgreSQL \n>> solution ... I know RAC ... and I'm not able to pay for a RAC certify \n>> hardware configuration plus a RAC Licence.\n>\n> There is absolutely zero PostgreSQL solution...\n>\n> You may have to split the data yourself onto two independent db servers \n> and combine the results somehow in your application.\n>\n> Chris\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n", "msg_date": "Thu, 20 Jan 2005 17:24:37 +0200", "msg_from": "\"Edgars Diebelis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 16:23, Dave Cramer a ᅵcrit :\n> Google uses something called the google filesystem, look it up in\n> google. It is a distributed file system.\n\nYes that's another point I'm working on ... make a cluster of server using \nGFS ... and making PostgreSQL running with it ...\n\nBut I have not finished my test ... and may be people could have experience \nwith this ...\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 16:32:27 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hervᅵ Piedvache wrote:\n> Le Jeudi 20 Janvier 2005 16:23, Dave Cramer a ᅵcrit :\n> \n>>Google uses something called the google filesystem, look it up in\n>>google. It is a distributed file system.\n> \n> \n> Yes that's another point I'm working on ... make a cluster of server using \n> GFS ... 
and making PostgreSQL running with it ...\n\nA few years ago I played around with GFS, but not for postgresql.\n\nI don't think it's going to help - logically there's no difference\nbetween putting PG on GFS and putting PG on NFS - in both cases\nthe filesystem doesn't provide any support for distributing the\ntask at hand - and a PG database server isn't written to be\ndistributed across hosts regardless of the distribution of the\ndata across filesystems.\n\n\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Thu, 20 Jan 2005 08:40:04 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Probably by carefully partitioning their data. I can't imagine anything\n> being fast on a single table in 250,000,000 tuple range. Nor can I\n> really imagine any database that efficiently splits a single table\n> across multiple machines (or even inefficiently unless some internal\n> partitioning is being done).\n\nAh, what about partial indexes - those might help. As a kind of \n'semi-partition'.\n\nChris\n", "msg_date": "Thu, 20 Jan 2005 15:57:43 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, 2005-01-20 at 15:36 +0100, Herv� Piedvache wrote:\n> Le Jeudi 20 Janvier 2005 15:24, Christopher Kings-Lynne a �crit :\n> > > Is there any solution with PostgreSQL matching these needs ... ?\n> >\n> > You want: http://www.slony.info/\n> >\n> > > Do we have to backport our development to MySQL for this kind of problem\n> > > ? Is there any other solution than a Cluster for our problem ?\n> >\n> > Well, Slony does replication which is basically what you want :)\n> >\n> > Only master->slave though, so you will need to have all inserts go via\n> > the master server, but selects can come off any server.\n> \n> Sorry but I don't agree with this ... Slony is a replication solution ... I \n> don't need replication ... what will I do when my database will grow up to 50 \n> Gb ... I'll need more than 50 Gb of RAM on each server ???\n> This solution is not very realistic for me ...\n\nSlony has some other issues with databases > 200GB in size as well\n(well, it hates long running transactions -- and pg_dump is a regular\nlong running transaction)\n\nHowever, you don't need RAM one each server for this, you simply need\nenough disk space.\n\nHave a Master which takes writes, a \"replicator\" which you can consider\nto be a hot-backup of the master, have N slaves replicate off of the\notherwise untouched \"replicator\" machine.\n\nFor your next trick, have the application send read requests for Clients\nA-C to slave 1, D-F to slave 2, ...\n\nYou need enough memory to hold the index sections for clients A-C on\nslave 1. The rest of the index can remain on disk. It's available should\nit be required (D-F box crashed, so your application is now feeding\nthose read requests to the A-C machine)...\n\nGo to more slaves and smaller segments as you require. Use the absolute\ncheapest hardware you can find for the slaves that gives reasonable\nperformance. They don't need to be reliable, so RAID 0 on IDE drives is\nperfectly acceptable.\n\nPostgreSQL can do the replication portion quite nicely. 
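As a very rough illustration of that application-side piece (the host names, the client-name split and psycopg2 are all invented assumptions here, not a recipe):

import psycopg2

MASTER = "host=master dbname=app"
SLAVES = {
    ("a", "c"): "host=slave1 dbname=app",   # clients A-C
    ("d", "f"): "host=slave2 dbname=app",   # clients D-F
}

def conn_for(client_name, writing=False):
    # Writes always go to the master; reads go to the slave that
    # serves that client's range, falling back to the master.
    if not writing:
        first = client_name[0].lower()
        for (lo, hi), dsn in SLAVES.items():
            if lo <= first <= hi:
                return psycopg2.connect(dsn)
    return psycopg2.connect(MASTER)

# conn_for("Dupont")                -> slave2 (read)
# conn_for("Dupont", writing=True)  -> master (write)
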
You need to\nimplement the \"cluster\" part in the application side.\n-- \n\n", "msg_date": "Thu, 20 Jan 2005 11:02:58 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n\n>> Probably by carefully partitioning their data. I can't imagine anything\n>> being fast on a single table in 250,000,000 tuple range. Nor can I\n>> really imagine any database that efficiently splits a single table\n>> across multiple machines (or even inefficiently unless some internal\n>> partitioning is being done).\n>\n>\n> Ah, what about partial indexes - those might help. As a kind of \n> 'semi-partition'.\n\nHe could also you schemas to partition out the information within the \nsame database.\n\nJ\n\n>\n> Chris\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 08:04:04 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "What you want is some kind of huge pararell computing , isn't it? I have heard\nfrom many groups of Japanese Pgsql developer did it but they are talking in\njapanese website and of course in Japanese.\nI can name one of them \" Asushi Mitani\" and his website\nhttp://www.csra.co.jp/~mitani/jpug/pgcluster/en/index.html\nand you may directly contact him.\n\nAmrit\nThailand\n", "msg_date": "Thu, 20 Jan 2005 23:10:03 +0700", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Steve Wampler <[email protected]> writes:\n\n> Herv� Piedvache wrote:\n> \n> > No ... as I have said ... how I'll manage a database getting a table of may\n> > be 250 000 000 records ? I'll need incredible servers ... to get quick access\n> > or index reading ... no ?\n> \n> Probably by carefully partitioning their data. I can't imagine anything\n> being fast on a single table in 250,000,000 tuple range. \n\nWhy are you all so psyched out by the size of the table? That's what indexes\nare for.\n\nThe size of the table really isn't relevant here. The important thing is the\nsize of the working set. Ie, How many of those records are required to respond\nto queries.\n\nAs long as you tune your application so every query can be satisfied by\nreading a (very) limited number of those records and have indexes to speed\naccess to those records you can have quick response time even if you have\nterabytes of raw data. \n\nI would start by looking at the plans for the queries you're running and\nseeing if you have any queries that are reading more than hundred records or\nso. If so then you have to optimize them or rethink your application design.\nYou might need to restructure your data so you don't have to scan too many\nrecords for any query.\n\nNo clustering system is going to help you if your application requires reading\nthrough too much data. 
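A cheap way to look at those plans from application code, assuming psycopg2 and placeholder names (and keeping in mind that EXPLAIN ANALYZE actually executes the query):

import psycopg2

def explain(dsn, sql):
    # Print the plan with actual row counts so oversized scans stand out.
    conn = psycopg2.connect(dsn)
    try:
        cur = conn.cursor()
        cur.execute("EXPLAIN ANALYZE " + sql)
        for (line,) in cur.fetchall():
            print(line)
    finally:
        conn.close()

explain("host=master dbname=app",
        "SELECT * FROM hits WHERE user_id = 42")
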
If every query is designed to not have to read more\nthan a hundred or so records then there's no reason you can't have sub-100ms\nresponse time even if you had terabytes of raw data.\n\nIf the problem is just that each individual query is fast but there's too many\ncoming for a single server then something like slony is all you need. It'll\nspread the load over multiple machines. If you spread the load in an\nintelligent way you can even concentrate each server on certain subsets of the\ndata. But that shouldn't even really be necessary, just a nice improvement.\n\n-- \ngreg\n\n", "msg_date": "20 Jan 2005 11:44:20 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, 20 Jan 2005 16:32:27 +0100, Herv� Piedvache wrote:\n\n> Le Jeudi 20 Janvier 2005 16:23, Dave Cramer a �crit :\n>> Google uses something called the google filesystem, look it up in\n>> google. It is a distributed file system.\n> \n> Yes that's another point I'm working on ... make a cluster of server using\n> GFS ... and making PostgreSQL running with it ...\n\nDid you read the GFS whitepaper? It really works differently from other\nfilesystems with regard to latency and consistency. You'll probably have\nbetter success with Lustre (http://www.clusterfs.com/) or RedHat's Global\nFile System (http://www.redhat.com/software/rha/gfs/).\nIf you're looking for a 'cheap, free and easy' solution you can just as\nwell stop right now. :-)\n\n-h\n\n\n", "msg_date": "Thu, 20 Jan 2005 17:55:56 +0100", "msg_from": "\"Holger Hoffstaette\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hervᅵ Piedvache wrote:\n> Sorry but I don't agree with this ... Slony is a replication solution ... I \n> don't need replication ... what will I do when my database will grow up to 50 \n> Gb ... I'll need more than 50 Gb of RAM on each server ???\n> This solution is not very realistic for me ...\n\nHave you confirmed you need a 1:1 RAM:data ratio? Of course more memory \ngets more speed but often at a diminishing rate of return. Unless every \nrecord of your 50GB is used in every query, only the most commonly used \nelements of your DB needs to be in RAM. This is the very idea of caching.\n", "msg_date": "Thu, 20 Jan 2005 09:12:01 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On January 20, 2005 06:49 am, Joshua D. Drake wrote:\n> Stephen Frost wrote:\n> >* Herv? Piedvache ([email protected]) wrote:\n> >>Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a écrit :\n> >>>* Herv? Piedvache ([email protected]) wrote:\n> >>>>Is there any solution with PostgreSQL matching these needs ... ?\n> >>>\n> >>>You might look into pg_pool. Another possibility would be slony, though\n> >>>I'm not sure it's to the point you need it at yet, depends on if you can\n> >>>handle some delay before an insert makes it to the slave select systems.\n> >>\n> >>I think not ... pgpool or slony are replication solutions ... but as I\n> >> have said to Christopher Kings-Lynne how I'll manage the scalabilty of\n> >> the database ? I'll need several servers able to load a database growing\n> >> and growing to get good speed performance ...\n> >\n> >They're both replication solutions, but they also help distribute the\n> >load. 
For example:\n> >\n> >pg_pool will distribute the select queries amoung the servers. They'll\n> >all get the inserts, so that hurts, but at least the select queries are\n> >distributed.\n> >\n> >slony is similar, but your application level does the load distribution\n> >of select statements instead of pg_pool. Your application needs to know\n> >to send insert statements to the 'main' server, and select from the\n> >others.\n>\n> You can put pgpool in front of replicator or slony to get load\n> balancing for reads.\n\nLast time I checked load ballanced reads was only available in pgpool if you \nwere using pgpools's internal replication. Has something changed recently?\n\n>\n> >>>>Is there any other solution than a Cluster for our problem ?\n> >>>\n> >>>Bigger server, more CPUs/disks in one box. Try to partition up your\n> >>>data some way such that it can be spread across multiple machines, then\n> >>>if you need to combine the data have it be replicated using slony to a\n> >>>big box that has a view which joins all the tables and do your big\n> >>>queries against that.\n> >>\n> >>But I'll arrive to limitation of a box size quickly I thing a 4\n> >> processors with 64 Gb of RAM ... and after ?\n>\n> Opteron.\n\nIBM Z-series, or other big iron.\n\n>\n> >Go to non-x86 hardware after if you're going to continue to increase the\n> >size of the server. Personally I think your better bet might be to\n> >figure out a way to partition up your data (isn't that what google\n> >does anyway?).\n> >\n> >\tStephen\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Thu, 20 Jan 2005 09:29:37 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On January 20, 2005 06:51 am, Christopher Kings-Lynne wrote:\n> >>>Sorry but I don't agree with this ... Slony is a replication solution\n> >>> ... I don't need replication ... what will I do when my database will\n> >>> grow up to 50 Gb ... I'll need more than 50 Gb of RAM on each server\n> >>> ??? This solution is not very realistic for me ...\n> >>>\n> >>>I need a Cluster solution not a replication one or explain me in details\n> >>>how I will do for managing the scalabilty of my database ...\n> >>\n> >>Buy Oracle\n> >\n> > I think this is not my solution ... sorry I'm talking about finding a\n> > PostgreSQL solution ...\n>\n> My point being is that there is no free solution. There simply isn't.\n> I don't know why you insist on keeping all your data in RAM, but the\n> mysql cluster requires that ALL data MUST fit in RAM all the time.\n>\n> PostgreSQL has replication, but not partitioning (which is what you want).\n>\n> So, your only option is Oracle or another very expensive commercial\n> database.\n\nAnother Option to consider would be pgmemcache. that way you just build the \nfarm out of lots of large memory, diskless boxes for keeping the whole \ndatabase in memory in the whole cluster. 
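A similar effect can be had one layer up, with a plain memcached between the application and the database. This is a sketch only -- it assumes the python-memcached client and psycopg2, with invented hosts, keys and tables, rather than pgmemcache itself:

import memcache
import psycopg2

mc = memcache.Client(["10.0.0.5:11211"])

def get_profile(user_id):
    # Serve repeat reads from memcached; only fall through to a
    # database slave on a cache miss.
    key = "profile:%d" % user_id
    row = mc.get(key)
    if row is None:
        conn = psycopg2.connect("host=slave1 dbname=app")
        try:
            cur = conn.cursor()
            cur.execute("SELECT name, email FROM users WHERE id = %s",
                        (user_id,))
            row = cur.fetchone()
        finally:
            conn.close()
        mc.set(key, row, time=300)   # cache for five minutes
    return row
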
More information on it can be found \nat: http://people.freebsd.org/~seanc/pgmemcache/\n\n>\n> Chris\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 8: explain analyze is your friend\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Thu, 20 Jan 2005 09:33:42 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen\n<[email protected]> wrote:\n> \n> Another Option to consider would be pgmemcache. that way you just build the\n> farm out of lots of large memory, diskless boxes for keeping the whole\n> database in memory in the whole cluster. More information on it can be found\n> at: http://people.freebsd.org/~seanc/pgmemcache/\n\nWhich brings up another question: why not just cluster at the hardware\nlayer? Get an external fiberchannel array, and cluster a bunch of dual\nOpterons, all sharing that storage. In that sense you would be getting\none big PostgreSQL 'image' running across all of the servers.\n\nOr is that idea too 90's? ;-)\n\n-- Mitch\n", "msg_date": "Thu, 20 Jan 2005 13:42:25 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On January 20, 2005 10:42 am, Mitch Pirtle wrote:\n> On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen\n>\n> <[email protected]> wrote:\n> > Another Option to consider would be pgmemcache. that way you just build\n> > the farm out of lots of large memory, diskless boxes for keeping the\n> > whole database in memory in the whole cluster. More information on it\n> > can be found at: http://people.freebsd.org/~seanc/pgmemcache/\n>\n> Which brings up another question: why not just cluster at the hardware\n> layer? Get an external fiberchannel array, and cluster a bunch of dual\n> Opterons, all sharing that storage. In that sense you would be getting\n> one big PostgreSQL 'image' running across all of the servers.\n\nIt dosn't quite work that way, thanks to shared memory, and kernel disk cache. \n(among other things)\n>\n> Or is that idea too 90's? ;-)\n>\n> -- Mitch\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n\n-- \nDarcy Buskermolen\nWavefire Technologies Corp.\nph: 250.717.0200\nfx: 250.763.1759\nhttp://www.wavefire.com\n", "msg_date": "Thu, 20 Jan 2005 11:07:23 -0800", "msg_from": "Darcy Buskermolen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Mitch Pirtle wrote:\n\n> Which brings up another question: why not just cluster at the hardware\n> layer? Get an external fiberchannel array, and cluster a bunch of dual\n> Opterons, all sharing that storage. In that sense you would be getting\n> one big PostgreSQL 'image' running across all of the servers.\n\nThis isn't as easy as it sounds. Simply sharing the array among\nhosts with a 'standard' file system won't work because of cache\ninconsistencies. So, you need to put a shareable filesystem\n(such as GFS or Lustre) on it.\n\nBut that's not enough, because you're going to be running separate\npostgresql backends on the different hosts, and there are\ndefinitely consistency issues with trying to do that. 
So far as\nI know (right, experts?) postgresql isn't designed with providing\ndistributed consistency in mind (isn't shared memory used for\nconsistency, which restricts all the backends to a single host?).\n\n\n-- \nSteve Wampler -- [email protected]\nThe gods that smiled on your birth are now laughing out loud.\n", "msg_date": "Thu, 20 Jan 2005 12:13:17 -0700", "msg_from": "Steve Wampler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, 20 Jan 2005 12:13:17 -0700, Steve Wampler <[email protected]> wrote:\n> Mitch Pirtle wrote:\n\n> But that's not enough, because you're going to be running separate\n> postgresql backends on the different hosts, and there are\n> definitely consistency issues with trying to do that. So far as\n> I know (right, experts?) postgresql isn't designed with providing\n> distributed consistency in mind (isn't shared memory used for\n> consistency, which restricts all the backends to a single host?).\n\nyes, you're right: you'll need a Distributed Lock Manager and an\napplication to manage it , Postgres ?\n", "msg_date": "Thu, 20 Jan 2005 20:35:44 +0100", "msg_from": "Jean-Max Reymond <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": " \nI was thinking the same! I'd like to know how other databases such as Oracle\ndo it.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mitch Pirtle\nSent: Thursday, January 20, 2005 4:42 PM\nTo: [email protected]\nSubject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n\nOn Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen\n<[email protected]> wrote:\n> \n> Another Option to consider would be pgmemcache. that way you just build\nthe\n> farm out of lots of large memory, diskless boxes for keeping the whole\n> database in memory in the whole cluster. More information on it can be\nfound\n> at: http://people.freebsd.org/~seanc/pgmemcache/\n\nWhich brings up another question: why not just cluster at the hardware\nlayer? Get an external fiberchannel array, and cluster a bunch of dual\nOpterons, all sharing that storage. In that sense you would be getting\none big PostgreSQL 'image' running across all of the servers.\n\nOr is that idea too 90's? ;-)\n\n-- Mitch\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n", "msg_date": "Thu, 20 Jan 2005 22:40:02 -0200", "msg_from": "\"Bruno Almeida do Lago\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "This idea won't work with postgresql only one instance can operate on a \ndatastore at a time.\n\nDave\n\nBruno Almeida do Lago wrote:\n\n> \n>I was thinking the same! I'd like to know how other databases such as Oracle\n>do it.\n>\n>-----Original Message-----\n>From: [email protected]\n>[mailto:[email protected]] On Behalf Of Mitch Pirtle\n>Sent: Thursday, January 20, 2005 4:42 PM\n>To: [email protected]\n>Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n>\n>On Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen\n><[email protected]> wrote:\n> \n>\n>>Another Option to consider would be pgmemcache. that way you just build\n>> \n>>\n>the\n> \n>\n>>farm out of lots of large memory, diskless boxes for keeping the whole\n>>database in memory in the whole cluster. 
More information on it can be\n>> \n>>\n>found\n> \n>\n>>at: http://people.freebsd.org/~seanc/pgmemcache/\n>> \n>>\n>\n>Which brings up another question: why not just cluster at the hardware\n>layer? Get an external fiberchannel array, and cluster a bunch of dual\n>Opterons, all sharing that storage. In that sense you would be getting\n>one big PostgreSQL 'image' running across all of the servers.\n>\n>Or is that idea too 90's? ;-)\n>\n>-- Mitch\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 7: don't forget to increase your free space map settings\n>\n>\n> \n>\n\n\n\n\n\n\n\nThis idea won't work with postgresql only one instance can operate on a\ndatastore at a time.\n\nDave\n\nBruno Almeida do Lago wrote:\n\n \nI was thinking the same! I'd like to know how other databases such as Oracle\ndo it.\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Mitch Pirtle\nSent: Thursday, January 20, 2005 4:42 PM\nTo: [email protected]\nSubject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n\nOn Thu, 20 Jan 2005 09:33:42 -0800, Darcy Buskermolen\n<[email protected]> wrote:\n \n\nAnother Option to consider would be pgmemcache. that way you just build\n \n\nthe\n \n\nfarm out of lots of large memory, diskless boxes for keeping the whole\ndatabase in memory in the whole cluster. More information on it can be\n \n\nfound\n \n\nat: http://people.freebsd.org/~seanc/pgmemcache/\n \n\n\nWhich brings up another question: why not just cluster at the hardware\nlayer? Get an external fiberchannel array, and cluster a bunch of dual\nOpterons, all sharing that storage. In that sense you would be getting\none big PostgreSQL 'image' running across all of the servers.\n\nOr is that idea too 90's? ;-)\n\n-- Mitch\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to [email protected]\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: don't forget to increase your free space map settings", "msg_date": "Thu, 20 Jan 2005 20:04:19 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Bruno,\n\n> Which brings up another question: why not just cluster at the hardware\n> layer? Get an external fiberchannel array, and cluster a bunch of dual\n> Opterons, all sharing that storage. In that sense you would be getting\n> one big PostgreSQL 'image' running across all of the servers.\n>\n> Or is that idea too 90's? ;-)\n\nNo, it just doesn't work. Multiple postmasters can't share one database.\n\nLinuxLabs (as I've gathered) tried to go one better by using a tool that \nallows shared memory to bridge multple networked servers -- in other words, \none postmaster controlling 4 or 5 servers. 
The problem is that IPC via this \nmethod is about 1,000 times slower than IPC on a single machine, wiping out \nall of the scalability gains from having the cluster in the first place.\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 20 Jan 2005 17:25:41 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:\n> \n> I was thinking the same! I'd like to know how other databases such as Oracle\n> do it.\n> \nIn a nutshell, in a clustered environment (which iirc in oracle means\nshared disks), they use a set of files for locking and consistency\nacross machines. So you better have fast access to the drive array, and\nthe array better have caching of some kind.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 20 Jan 2005 19:30:40 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, Jan 20, 2005 at 10:08:47AM -0500, Stephen Frost wrote:\n> * Christopher Kings-Lynne ([email protected]) wrote:\n> > PostgreSQL has replication, but not partitioning (which is what you want).\n> \n> It doesn't have multi-server partitioning.. It's got partitioning\n> within a single server (doesn't it? I thought it did, I know it was\n> discussed w/ the guy from Cox Communications and I thought he was using\n> it :).\n\nNo, PostgreSQL doesn't support any kind of partitioning, unless you\nwrite it yourself. I think there's some work being done in this area,\nthough.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 20 Jan 2005 19:32:04 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, Jan 20, 2005 at 07:12:42AM -0800, Joshua D. Drake wrote:\n> \n> >>then I was thinking. Couldn't he use\n> >>multiple databases\n> >>over multiple servers with dblink?\n> >>\n> >>It is not exactly how I would want to do it, but it would provide what\n> >>he needs I think???\n> >> \n> >>\n> >\n> >Yes seems to be the only solution ... but I'm a little disapointed about \n> >this ... could you explain me why there is not this kind of \n> >functionnality ... it seems to be a real need for big applications no ?\n> > \n> >\n> Because it is really, really hard to do correctly and hard\n> equals expensive.\n\nTo expand on what Josh said, the expense in this case is development\nresources. If you look on the developer site you'll see a huge TODO list\nand a relatively small list of PostgreSQL developers. 
To develop a\ncluster solution similar to RAC would probably take the efforts of the\nentire development team for a year or more, during which time very\nlittle else would be done.\n\nI'm glad to see your persistance in wanting to use PostgreSQL, and there\nmight be some kind of limited clustering scheme that could be\nimplemented without a great amount of effort by the core developers. In\nthat case I think there's a good chance you could find people willing to\nwork on it.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Thu, 20 Jan 2005 19:39:22 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> On January 20, 2005 06:49 am, Joshua D. Drake wrote:\n> > Stephen Frost wrote:\n> > >* Herv? Piedvache ([email protected]) wrote:\n> > >>Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a écrit :\n> > >>>* Herv? Piedvache ([email protected]) wrote:\n> > >>>>Is there any solution with PostgreSQL matching these needs ... ?\n> > >>>\n> > >>>You might look into pg_pool. Another possibility would be slony, though\n> > >>>I'm not sure it's to the point you need it at yet, depends on if you can\n> > >>>handle some delay before an insert makes it to the slave select systems.\n> > >>\n> > >>I think not ... pgpool or slony are replication solutions ... but as I\n> > >> have said to Christopher Kings-Lynne how I'll manage the scalabilty of\n> > >> the database ? I'll need several servers able to load a database growing\n> > >> and growing to get good speed performance ...\n> > >\n> > >They're both replication solutions, but they also help distribute the\n> > >load. For example:\n> > >\n> > >pg_pool will distribute the select queries amoung the servers. They'll\n> > >all get the inserts, so that hurts, but at least the select queries are\n> > >distributed.\n> > >\n> > >slony is similar, but your application level does the load distribution\n> > >of select statements instead of pg_pool. Your application needs to know\n> > >to send insert statements to the 'main' server, and select from the\n> > >others.\n> >\n> > You can put pgpool in front of replicator or slony to get load\n> > balancing for reads.\n> \n> Last time I checked load ballanced reads was only available in pgpool if you \n> were using pgpools's internal replication. Has something changed recently?\n\nYes. However it would be pretty easy to modify pgpool so that it could\ncope with Slony-I. I.e.\n\n1) pgpool does the load balance and sends query to Slony-I's slave and\n master if the query is SELECT.\n\n2) pgpool sends query only to the master if the query is other than\n SELECT.\n\nRemaining problem is that Slony-I is not a sync replication\nsolution. Thus you need to prepare that the load balanced query\nresults might differ among servers.\n\nIf there's enough demand, I would do such that enhancements to pgpool.\n--\nTatsuo Ishii\n\n> > >>>>Is there any other solution than a Cluster for our problem ?\n> > >>>\n> > >>>Bigger server, more CPUs/disks in one box. 
Try to partition up your\n> > >>>data some way such that it can be spread across multiple machines, then\n> > >>>if you need to combine the data have it be replicated using slony to a\n> > >>>big box that has a view which joins all the tables and do your big\n> > >>>queries against that.\n> > >>\n> > >>But I'll arrive to limitation of a box size quickly I thing a 4\n> > >> processors with 64 Gb of RAM ... and after ?\n> >\n> > Opteron.\n> \n> IBM Z-series, or other big iron.\n> \n> >\n> > >Go to non-x86 hardware after if you're going to continue to increase the\n> > >size of the server. Personally I think your better bet might be to\n> > >figure out a way to partition up your data (isn't that what google\n> > >does anyway?).\n> > >\n> > >\tStephen\n> \n> -- \n> Darcy Buskermolen\n> Wavefire Technologies Corp.\n> ph: 250.717.0200\n> fx: 250.763.1759\n> http://www.wavefire.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n> \n", "msg_date": "Fri, 21 Jan 2005 10:40:07 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Oracle's RAC is good, but I think it's best to view it as a step in the high \navailability direction rather than a performance enhancer. While it can help \nyour application scale up, that depends on the usage pattern. Also it's not \n100% transparent to the application for example you can't depend on a \nsequence numbers being allocated uniquely as there can be delays propagating \nthem to all nodes. So in clusters where insert rates are high this means you \nshould explicitly check for unique key violations and try again. Dealing \nwith propagation delays comes with the clustering technology I guess. \nNonetheless, I would love to see this kind of functionality in postgres.\n\nRegards\nIain\n\n----- Original Message ----- \nFrom: \"Jim C. Nasby\" <[email protected]>\nTo: \"Bruno Almeida do Lago\" <[email protected]>\nCc: \"'Mitch Pirtle'\" <[email protected]>; \n<[email protected]>\nSent: Friday, January 21, 2005 10:30 AM\nSubject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n\n\n> On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:\n>>\n>> I was thinking the same! I'd like to know how other databases such as \n>> Oracle\n>> do it.\n>>\n> In a nutshell, in a clustered environment (which iirc in oracle means\n> shared disks), they use a set of files for locking and consistency\n> across machines. So you better have fast access to the drive array, and\n> the array better have caching of some kind.\n> -- \n> Jim C. Nasby, Database Consultant [email protected]\n> Give your computer some brain candy! 
www.distributed.net Team #1828\n>\n> Windows: \"Where do you want to go today?\"\n> Linux: \"Where do you want to go tomorrow?\"\n> FreeBSD: \"Are you guys coming, or what?\"\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly \n\n", "msg_date": "Fri, 21 Jan 2005 11:14:59 +0900", "msg_from": "\"Iain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">1) pgpool does the load balance and sends query to Slony-I's slave and\n> master if the query is SELECT.\n>\n>2) pgpool sends query only to the master if the query is other than\n> SELECT.\n>\n>Remaining problem is that Slony-I is not a sync replication\n>solution. Thus you need to prepare that the load balanced query\n>results might differ among servers.\n>\n>If there's enough demand, I would do such that enhancements to pgpool.\n> \n>\nWell I know that Replicator could also use this functionality.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>--\n>Tatsuo Ishii\n>\n> \n>\n>>>>>>>Is there any other solution than a Cluster for our problem ?\n>>>>>>> \n>>>>>>>\n>>>>>>Bigger server, more CPUs/disks in one box. Try to partition up your\n>>>>>>data some way such that it can be spread across multiple machines, then\n>>>>>>if you need to combine the data have it be replicated using slony to a\n>>>>>>big box that has a view which joins all the tables and do your big\n>>>>>>queries against that.\n>>>>>> \n>>>>>>\n>>>>>But I'll arrive to limitation of a box size quickly I thing a 4\n>>>>>processors with 64 Gb of RAM ... and after ?\n>>>>> \n>>>>>\n>>>Opteron.\n>>> \n>>>\n>>IBM Z-series, or other big iron.\n>>\n>> \n>>\n>>>>Go to non-x86 hardware after if you're going to continue to increase the\n>>>>size of the server. Personally I think your better bet might be to\n>>>>figure out a way to partition up your data (isn't that what google\n>>>>does anyway?).\n>>>>\n>>>>\tStephen\n>>>> \n>>>>\n>>-- \n>>Darcy Buskermolen\n>>Wavefire Technologies Corp.\n>>ph: 250.717.0200\n>>fx: 250.763.1759\n>>http://www.wavefire.com\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>>\n>> \n>>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n>\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Thu, 20 Jan 2005 19:16:15 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Tatsuo,\n\n> Yes. However it would be pretty easy to modify pgpool so that it could\n> cope with Slony-I. I.e.\n>\n> 1) pgpool does the load balance and sends query to Slony-I's slave and\n> master if the query is SELECT.\n>\n> 2) pgpool sends query only to the master if the query is other than\n> SELECT.\n>\n> Remaining problem is that Slony-I is not a sync replication\n> solution. 
Thus you need to prepare that the load balanced query\n> results might differ among servers.\n\nYes, please, some of us are already doing the above ad-hoc.\n\nThe simple workaround to replication lag is to calculate the longest likely \nlag (<3 seconds if Slony is tuned right) and have the dispatcher (pgpool) \nsend all requests from that connection to the master for that period. Then \nit switches back to \"pool\" mode where the slaves may be used.\n\nOf course, all of the above is only useful if you're doing a web app where 96% \nof query activity is selects. For additional scalability, put all of your \nsession maintenance in memcached, so that you're not doing database writes \nevery time a page loads.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 20 Jan 2005 19:49:24 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Tatsuo,\n> \n> > Yes. However it would be pretty easy to modify pgpool so that it could\n> > cope with Slony-I. I.e.\n> >\n> > 1) pgpool does the load balance and sends query to Slony-I's slave and\n> > master if the query is SELECT.\n> >\n> > 2) pgpool sends query only to the master if the query is other than\n> > SELECT.\n> >\n> > Remaining problem is that Slony-I is not a sync replication\n> > solution. Thus you need to prepare that the load balanced query\n> > results might differ among servers.\n> \n> Yes, please, some of us are already doing the above ad-hoc.\n> \n> The simple workaround to replication lag is to calculate the longest likely \n> lag (<3 seconds if Slony is tuned right) and have the dispatcher (pgpool) \n> send all requests from that connection to the master for that period. Then \n> it switches back to \"pool\" mode where the slaves may be used.\n\nCan I ask a question?\n\nSuppose table A gets updated on the master at time 00:00. Until 00:03\npgpool needs to send all queries regarding A to the master only. My\nquestion is, how can pgpool know a query is related to A?\n--\nTatsuo Ishii\n\n> Of course, all of the above is only useful if you're doing a web app where 96% \n> of query activity is selects. For additional scalability, put all of your \n> session maintenance in memcached, so that you're not doing database writes \n> every time a page loads.\n> \n> -- \n> Josh Berkus\n> Aglio Database Solutions\n> San Francisco\n> \n", "msg_date": "Fri, 21 Jan 2005 17:07:31 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Presumably it can't _ever_ know without being explicitly told, because \neven for a plain SELECT there might be triggers involved that update \ntables, or it might be a select of a stored proc, etc. So in the \ngeneral case, you can't assume that a select doesn't cause an update, \nand you can't be sure that the table list in an update is a complete \nlist of the tables that might be updated.\n\n\n\nTatsuo Ishii wrote:\n\n>Can I ask a question?\n>\n>Suppose table A gets updated on the master at time 00:00. Until 00:03\n>pgpool needs to send all queries regarding A to the master only. 
My\n>question is, how can pgpool know a query is related to A?\n>--\n>Tatsuo Ishii\n>\n> \n>\n", "msg_date": "Fri, 21 Jan 2005 09:16:08 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Matt Clark wrote:\n\n> Presumably it can't _ever_ know without being explicitly told, because \n> even for a plain SELECT there might be triggers involved that update \n> tables, or it might be a select of a stored proc, etc. So in the \n> general case, you can't assume that a select doesn't cause an update, \n> and you can't be sure that the table list in an update is a complete \n> list of the tables that might be updated.\n\nUhmmm no :) There is no such thing as a select trigger. The closest you \nwould get\nis a function that is called via select which could be detected by \nmaking sure\nyou are prepending with a BEGIN or START Transaction. Thus yes pgPool \ncan be made\nto do this.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n>\n>\n>\n> Tatsuo Ishii wrote:\n>\n>> Can I ask a question?\n>>\n>> Suppose table A gets updated on the master at time 00:00. Until 00:03\n>> pgpool needs to send all queries regarding A to the master only. My\n>> question is, how can pgpool know a query is related to A?\n>> -- \n>> Tatsuo Ishii\n>>\n>> \n>>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n\n\n\n-- \nCommand Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC\nPostgresql support, programming shared hosting and dedicated hosting.\n+1-503-667-4564 - [email protected] - http://www.commandprompt.com\nPostgreSQL Replicator -- production quality replication for PostgreSQL", "msg_date": "Fri, 21 Jan 2005 07:40:57 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Joshua D. Drake wrote:\n> Matt Clark wrote:\n> \n>> Presumably it can't _ever_ know without being explicitly told, because \n>> even for a plain SELECT there might be triggers involved that update \n>> tables, or it might be a select of a stored proc, etc. So in the \n>> general case, you can't assume that a select doesn't cause an update, \n>> and you can't be sure that the table list in an update is a complete \n>> list of the tables that might be updated.\n> \n> \n> Uhmmm no :) There is no such thing as a select trigger. The closest you \n> would get\n> is a function that is called via select which could be detected by \n> making sure\n> you are prepending with a BEGIN or START Transaction. 
Thus yes pgPool \n> can be made\n> to do this.\n\nSELECT SETVAL() is another case.\n\nI'd really love to see pgpool do this.\n\nI am also curious about Slony-II development, Tom mentioned a first \nmeeting about it :)\n\nRegards,\nBjoern\n", "msg_date": "Fri, 21 Jan 2005 17:04:56 +0100", "msg_from": "Bjoern Metzdorf <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Yes, I wasn't really choosing my examples particularly carefully, but I \nthink the conclusion stands: pgpool (or anyone/thing except for the \nserver) cannot in general tell from the SQL it is handed by the client \nwhether an update will occur, nor which tables might be affected.\n\nThat's not to say that pgpool couldn't make a good guess in the majority \nof cases!\n\nM\n\n\nJoshua D. Drake wrote:\n\n> Matt Clark wrote:\n>\n>> Presumably it can't _ever_ know without being explicitly told, \n>> because even for a plain SELECT there might be triggers involved that \n>> update tables, or it might be a select of a stored proc, etc. So in \n>> the general case, you can't assume that a select doesn't cause an \n>> update, and you can't be sure that the table list in an update is a \n>> complete list of the tables that might be updated.\n>\n>\n> Uhmmm no :) There is no such thing as a select trigger. The closest \n> you would get\n> is a function that is called via select which could be detected by \n> making sure\n> you are prepending with a BEGIN or START Transaction. Thus yes pgPool \n> can be made\n> to do this.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n>\n", "msg_date": "Fri, 21 Jan 2005 17:35:13 +0000", "msg_from": "Matt Clark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Tatsuo,\n\n> Suppose table A gets updated on the master at time 00:00. Until 00:03\n> pgpool needs to send all queries regarding A to the master only. My\n> question is, how can pgpool know a query is related to A?\n\nWell, I'm a little late to head off tangental discussion about this, but ....\n\nThe systems where I've implemented something similar are for web applications. \nIn the case of the web app, you don't care if a most users see data which is \n2 seconds out of date; with caching and whatnot, it's often much more than \nthat!\n\nThe one case where it's not permissable for a user to see \"old\" data is the \ncase where the user is updating the data. Namely:\n\n(1) 00:00 User A updates \"My Profile\"\n(2) 00:01 \"My Profile\" UPDATE finishes executing.\n(3) 00:02 User A sees \"My Profile\" re-displayed\n(6) 00:04 \"My Profile\":UserA cascades to the last Slave server\n\nSo in an application like the above, it would be a real problem if User A were \nto get switched over to a slave server immediately after the update; she \nwould see the old data, assume that her update was not saved, and update \nagain. Or send angry e-mails to webmaster@. 
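As a rough illustration of that per-connection switching, a client-side
dispatcher might look something like the sketch below. This is Python just
for illustration, not pgpool's actual code; the psycopg2 driver, the
connection strings, the 3-second lag window and the "/*side effect*/" hint
comment are all assumptions. The SELECT-prefix test and the hint comment
follow the detection scheme Tatsuo describes elsewhere in this thread.

    import time
    import psycopg2  # assumed DB-API driver; any PostgreSQL driver would do

    MASTER_DSN  = "dbname=app host=master-host"    # hypothetical DSNs
    REPLICA_DSN = "dbname=app host=replica-host"
    LAG_WINDOW  = 3.0   # assumed worst-case Slony-I propagation delay, seconds

    class QueryRouter:
        """One router per client connection: reads go to the replica unless
        this client has written within the replication-lag window."""

        def __init__(self):
            self.master  = psycopg2.connect(MASTER_DSN)
            self.replica = psycopg2.connect(REPLICA_DSN)
            self.last_write = 0.0

        def _is_plain_select(self, sql):
            s = sql.lstrip()
            # Honour a hint for SELECTs with side effects, e.g.
            # "/*side effect*/ SELECT setval('seq', 42)".
            if s.startswith("/*side effect*/"):
                return False
            first = s.split(None, 1)[0].upper() if s else ""
            # Explicit transactions and anything that is not a SELECT
            # must go to the master.
            if first in ("BEGIN", "START"):
                return False
            return first == "SELECT"

        def execute(self, sql, params=None):
            read_only  = self._is_plain_select(sql)
            within_lag = (time.time() - self.last_write) < LAG_WINDOW
            conn = self.replica if (read_only and not within_lag) else self.master
            if not read_only:
                # Pin this client to the master until the slaves catch up.
                self.last_write = time.time()
            cur = conn.cursor()
            cur.execute(sql, params)
            return cur

Of course, this only helps while the same user keeps hitting the same
database connection; with one-process-per-request web servers the "pinned
until" timestamp would have to live in shared session state (memcached,
for instance) rather than in the connection object, which is exactly the
limitation discussed further down the thread.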
\n\nHowever, it makes no difference what User B sees:\n\n(1) 00:00 User A updates \"My Profile\"v1\t\t\tMaster\n(2) 00:01 \"My Profile\" UPDATE finishes executing.\tMaster\n(3) 00:02 User A sees \"My Profile\"v2 displayed\t\tMaster\n(4) 00:02 User B requests \"MyProfile\":UserA\t\tSlave2\n(5) 00:03 User B sees \"My Profile\"v1\t\t\t\tSlave2\n(6) 00:04 \"My Profile\"v2 cascades to the last Slave server Slave2\n\nIf the web application is structured properly, the fact that UserB is seeing \nUserA's information which is 2 seconds old is not a problem (though it might \nbe for web auctions, where it could result in race conditions. Consider \nmemcached as a helper). This means that pgPool only needs to monitor \n\"update switching\" by *connection* not by *table*.\n\nMake sense?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 21 Jan 2005 09:47:53 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Tatsuo,\n\tWhat would happen with SELECT queries that, through a function or some\nother mechanism, updates data in the database? Would those need to be\npassed to pgpool in some special way?\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Tatsuo Ishii\nSent: Thursday, January 20, 2005 5:40 PM\nTo: [email protected]\nCc: [email protected]; [email protected]; [email protected];\[email protected]\nSubject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n\n\n> On January 20, 2005 06:49 am, Joshua D. Drake wrote:\n> > Stephen Frost wrote:\n> > >* Herv? Piedvache ([email protected]) wrote:\n> > >>Le Jeudi 20 Janvier 2005 15:30, Stephen Frost a �crit :\n> > >>>* Herv? Piedvache ([email protected]) wrote:\n> > >>>>Is there any solution with PostgreSQL matching these needs ... ?\n> > >>>\n> > >>>You might look into pg_pool. Another possibility would be slony,\nthough\n> > >>>I'm not sure it's to the point you need it at yet, depends on if you\ncan\n> > >>>handle some delay before an insert makes it to the slave select\nsystems.\n> > >>\n> > >>I think not ... pgpool or slony are replication solutions ... but as I\n> > >> have said to Christopher Kings-Lynne how I'll manage the scalabilty\nof\n> > >> the database ? I'll need several servers able to load a database\ngrowing\n> > >> and growing to get good speed performance ...\n> > >\n> > >They're both replication solutions, but they also help distribute the\n> > >load. For example:\n> > >\n> > >pg_pool will distribute the select queries amoung the servers. They'll\n> > >all get the inserts, so that hurts, but at least the select queries are\n> > >distributed.\n> > >\n> > >slony is similar, but your application level does the load distribution\n> > >of select statements instead of pg_pool. Your application needs to\nknow\n> > >to send insert statements to the 'main' server, and select from the\n> > >others.\n> >\n> > You can put pgpool in front of replicator or slony to get load\n> > balancing for reads.\n>\n> Last time I checked load ballanced reads was only available in pgpool if\nyou\n> were using pgpools's internal replication. Has something changed\nrecently?\n\nYes. However it would be pretty easy to modify pgpool so that it could\ncope with Slony-I. 
I.e.\n\n1) pgpool does the load balance and sends query to Slony-I's slave and\n master if the query is SELECT.\n\n2) pgpool sends query only to the master if the query is other than\n SELECT.\n\nRemaining problem is that Slony-I is not a sync replication\nsolution. Thus you need to prepare that the load balanced query\nresults might differ among servers.\n\nIf there's enough demand, I would do such that enhancements to pgpool.\n--\nTatsuo Ishii\n\n> > >>>>Is there any other solution than a Cluster for our problem ?\n> > >>>\n> > >>>Bigger server, more CPUs/disks in one box. Try to partition up your\n> > >>>data some way such that it can be spread across multiple machines,\nthen\n> > >>>if you need to combine the data have it be replicated using slony to\na\n> > >>>big box that has a view which joins all the tables and do your big\n> > >>>queries against that.\n> > >>\n> > >>But I'll arrive to limitation of a box size quickly I thing a 4\n> > >> processors with 64 Gb of RAM ... and after ?\n> >\n> > Opteron.\n>\n> IBM Z-series, or other big iron.\n>\n> >\n> > >Go to non-x86 hardware after if you're going to continue to increase\nthe\n> > >size of the server. Personally I think your better bet might be to\n> > >figure out a way to partition up your data (isn't that what google\n> > >does anyway?).\n> > >\n> > >\tStephen\n>\n> --\n> Darcy Buskermolen\n> Wavefire Technologies Corp.\n> ph: 250.717.0200\n> fx: 250.763.1759\n> http://www.wavefire.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Fri, 21 Jan 2005 14:34:40 -0800", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Peter, Tatsuo:\n\nwould happen with SELECT queries that, through a function or some\n> other mechanism, updates data in the database? Would those need to be\n> passed to pgpool in some special way?\n\nOh, yes, that reminds me. It would be helpful if pgPool accepted a control \nstring ... perhaps one in a SQL comment ... which indicated that the \nstatement to follow was, despite appearances, an update. For example:\n--STATEMENT_IS_UPDATE\\n\n\nThe alternative is, of course, that pgPool direct all explicit transactions to \nthe master ... which is a good idea anyway. So you could do:\n\nBEGIN;\nSELECT some_update_function();\nCOMMIT;\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Fri, 21 Jan 2005 16:34:39 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Tatsuo,\n> \n> > Suppose table A gets updated on the master at time 00:00. Until 00:03\n> > pgpool needs to send all queries regarding A to the master only. My\n> > question is, how can pgpool know a query is related to A?\n> \n> Well, I'm a little late to head off tangental discussion about this, but ....\n> \n> The systems where I've implemented something similar are for web applications. 
\n> In the case of the web app, you don't care if a most users see data which is \n> 2 seconds out of date; with caching and whatnot, it's often much more than \n> that!\n> \n> The one case where it's not permissable for a user to see \"old\" data is the \n> case where the user is updating the data. Namely:\n> \n> (1) 00:00 User A updates \"My Profile\"\n> (2) 00:01 \"My Profile\" UPDATE finishes executing.\n> (3) 00:02 User A sees \"My Profile\" re-displayed\n> (6) 00:04 \"My Profile\":UserA cascades to the last Slave server\n> \n> So in an application like the above, it would be a real problem if User A were \n> to get switched over to a slave server immediately after the update; she \n> would see the old data, assume that her update was not saved, and update \n> again. Or send angry e-mails to webmaster@. \n> \n> However, it makes no difference what User B sees:\n> \n> (1) 00:00 User A updates \"My Profile\"v1\t\t\tMaster\n> (2) 00:01 \"My Profile\" UPDATE finishes executing.\tMaster\n> (3) 00:02 User A sees \"My Profile\"v2 displayed\t\tMaster\n> (4) 00:02 User B requests \"MyProfile\":UserA\t\tSlave2\n> (5) 00:03 User B sees \"My Profile\"v1\t\t\t\tSlave2\n> (6) 00:04 \"My Profile\"v2 cascades to the last Slave server Slave2\n> \n> If the web application is structured properly, the fact that UserB is seeing \n> UserA's information which is 2 seconds old is not a problem (though it might \n> be for web auctions, where it could result in race conditions. Consider \n> memcached as a helper). This means that pgPool only needs to monitor \n> \"update switching\" by *connection* not by *table*.\n> \n> Make sense?\n\nI'm not clear what \"pgPool only needs to monitor \"update switching\" by\n*connection* not by *table*\" means. In your example:\n\n> (1) 00:00 User A updates \"My Profile\"\n> (2) 00:01 \"My Profile\" UPDATE finishes executing.\n> (3) 00:02 User A sees \"My Profile\" re-displayed\n> (6) 00:04 \"My Profile\":UserA cascades to the last Slave server\n\nI think (2) and (3) are on different connections, thus pgpool cannot\njudge if SELECT in (3) should go only to the master or not.\n\nTo solve the problem you need to make pgpool understand \"web sessions\"\nnot \"database connections\" and it seems impossible for pgpool to\nunderstand \"sessions\".\n--\nTatsuo Ishii\n", "msg_date": "Sat, 22 Jan 2005 12:01:28 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Peter, Tatsuo:\n> \n> would happen with SELECT queries that, through a function or some\n> > other mechanism, updates data in the database? Would those need to be\n> > passed to pgpool in some special way?\n> \n> Oh, yes, that reminds me. It would be helpful if pgPool accepted a control \n> string ... perhaps one in a SQL comment ... which indicated that the \n> statement to follow was, despite appearances, an update. For example:\n> --STATEMENT_IS_UPDATE\\n\n\nActually the way judging if it's a \"pure\" SELECT or not in pgpool is\nvery simple. pgpool just checkes if the SQL statement exactly begins\nwith \"SELECT\" (case insensitive, of course). So, for example, you\ncould insert an SQL comment something like \"/*this SELECT has side\neffect*/ at the beginning of line to indicate that pgpool should not\nsend this query to the slave.\n\n> The alternative is, of course, that pgPool direct all explicit transactions to \n> the master ... which is a good idea anyway. 
So you could do:\n> \n> BEGIN;\n> SELECT some_update_function();\n> COMMIT;\n\nYes. pgpool has already done this in load balancing. Expanding this\nfor Slony-I is pretty easy.\n--\nTatsuo Ishii\n", "msg_date": "Sat, 22 Jan 2005 12:39:28 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (Hervᅵ Piedvache) transmitted:\n> Le Jeudi 20 Janvier 2005 15:24, Christopher Kings-Lynne a ᅵcrit :\n>> > Is there any solution with PostgreSQL matching these needs ... ?\n>>\n>> You want: http://www.slony.info/\n>>\n>> > Do we have to backport our development to MySQL for this kind of problem\n>> > ? Is there any other solution than a Cluster for our problem ?\n>>\n>> Well, Slony does replication which is basically what you want :)\n>>\n>> Only master->slave though, so you will need to have all inserts go via\n>> the master server, but selects can come off any server.\n>\n> Sorry but I don't agree with this ... Slony is a replication\n> solution ... I don't need replication ... what will I do when my\n> database will grow up to 50 Gb ... I'll need more than 50 Gb of RAM\n> on each server ??? This solution is not very realistic for me ...\n\nHuh? Why on earth do you imagine that Slony-I requires a lot of\nmemory?\n\nIt doesn't. A fairly _large_ Slony-I process is about 10MB. There\nwill be some demand for memory on the DB servers, but you don't need\nan enormous quantity of extra memory to run it.\n\nThere is a MySQL \"replicating/clustering\" system that uses an\nin-memory database which means that if your DB is 50GB in size, you\nneed something like 200GB of RAM. If you're thinking of that, that's\nnot relevant to PostgreSQL or Slony-I...\n\n> I need a Cluster solution not a replication one or explain me in\n> details how I will do for managing the scalabilty of my database ...\n\nI'm not sure you understand clustering if you imagine it doesn't\ninvolve replication.\n\nThere are numerous models for clustering, much as there are numerous\nRAID models.\n\nBut the only sorts of clustering cases where you get to NOT do\nreplication are the cases where all you're looking for from clustering\nis improved speed, and you're willing for any breakage on any host to\npotentially destroy your cluster.\n\nPerhaps you need to describe what you _think_ you mean by a \"cluster\nsolution.\" It may be that it'll take further thought to determine\nwhat you actually need...\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://www3.sympatico.ca/cbbrowne/postgresql.html\n\"Not me, guy. I read the Bash man page each day like a Jehovah's\nWitness reads the Bible. No wait, the Bash man page IS the bible.\nExcuse me...\" (More on confusing aliases, taken from\ncomp.os.linux.misc)\n", "msg_date": "Sun, 23 Jan 2005 00:41:22 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "In the last exciting episode, [email protected] (Hervᅵ Piedvache) wrote:\n> Le Jeudi 20 Janvier 2005 16:05, Joshua D. Drake a ᅵcrit :\n>> Christopher Kings-Lynne wrote:\n>> >>> Or you could fork over hundreds of thousands of dollars for Oracle's\n>> >>> RAC.\n>> >>\n>> >> No please do not talk about this again ... I'm looking about a\n>> >> PostgreSQL solution ... I know RAC ... 
and I'm not able to pay for a\n>> >> RAC certify hardware configuration plus a RAC Licence.\n>> >\n>> > There is absolutely zero PostgreSQL solution...\n>>\n>> I just replied the same thing but then I was thinking. Couldn't he use\n>> multiple databases\n>> over multiple servers with dblink?\n>>\n>> It is not exactly how I would want to do it, but it would provide what\n>> he needs I think???\n>\n> Yes seems to be the only solution ... but I'm a little disapointed about \n> this ... could you explain me why there is not this kind of \n> functionnality ... it seems to be a real need for big applications no ?\n\nIf this is what you actually need, well, it's something that lots of\npeople would sort of like to have, but it's really DIFFICULT to\nimplement it.\n\nPartitioning data onto different servers appears like it ought to be a\ngood idea. Unfortunately, getting _exactly_ the right semantics is\nhard. \n\nIf the data is all truly independent, then it's no big deal; just have\none server for one set of data, and another for the other.\n\nBut reality normally is that if you _think_ you need a cluster, that's\nbecause some of the data needs to be _shared_, which means you need to\neither:\n\n a) Have queries that run across two databases, or\n\n b) Replicate the shared data between the systems.\n\nWe're likely back to the need for replication.\n-- \nIf this was helpful, <http://svcs.affero.net/rm.php?r=cbbrowne> rate me\nhttp://www3.sympatico.ca/cbbrowne/rdbms.html\n\"It is the user who should parameterize procedures, not their\ncreators.\" -- Alan Perlis\n", "msg_date": "Sun, 23 Jan 2005 00:46:51 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "After a long battle with technology, [email protected] (Hervᅵ Piedvache), an earthling, wrote:\n> Joshua,\n>\n> Le Jeudi 20 Janvier 2005 15:44, Joshua D. Drake a ᅵcrit :\n>> Hervᅵ Piedvache wrote:\n>> >\n>> >My company, which I actually represent, is a fervent user of PostgreSQL.\n>> >We used to make all our applications using PostgreSQL for more than 5\n>> > years. We usually do classical client/server applications under Linux,\n>> > and Web interface (php, perl, C/C++). We used to manage also public web\n>> > services with 10/15 millions records and up to 8 millions pages view by\n>> > month.\n>>\n>> Depending on your needs either:\n>>\n>> Slony: www.slony.info\n>>\n>> or\n>>\n>> Replicator: www.commandprompt.com\n>>\n>> Will both do what you want. Replicator is easier to setup but\n>> Slony is free.\n>\n> No ... as I have said ... how I'll manage a database getting a table\n> of may be 250 000 000 records ? I'll need incredible servers ... to\n> get quick access or index reading ... no ?\n>\n> So what we would like to get is a pool of small servers able to make\n> one virtual server ... for that is called a Cluster ... no ?\n\nThe term \"cluster\" simply indicates the use of multiple servers.\n\nThere are numerous _DIFFERENT_ forms of \"clusters,\" so that for\nsomeone to say \"I want a cluster\" commonly implies that since they\ndidn't realize the need to specify things further, they really don't\nknow what they want in a usefully identifiable way.\n\n> I know they are not using PostgreSQL ... 
but how a company like\n> Google do to get an incredible database in size and so quick access\n> ?\n\nGoogle has built a specialized application that evidently falls into\nthe category known as \"embarrassingly parallel.\"\n<http://c2.com/cgi/wiki?EmbarrassinglyParallel>\n\nThere are classes of applications that are amenable to\nparallelization.\n\nThose tend to be applications completely different from those\nimplemented atop transactional data stores like PostgreSQL.\n\nIf your problem is \"embarrassingly parallel,\" then I'd bet lunch that\nPostgreSQL (and all other SQL databases) are exactly the _wrong_ tool\nfor implementing its data store.\n\nIf your problem is _not_ \"embarrassingly parallel,\" then you'll almost\ncertainly discover that the cheapest way to make it fast involves\nfitting all the data onto _one_ computer so that you do not have to\npay the costs of transmitting data over slow inter-computer\ncommunications links.\n-- \nlet name=\"cbbrowne\" and tld=\"gmail.com\" in String.concat \"@\" [name;tld];;\nhttp://www.ntlug.org/~cbbrowne/\nIt isn't that physicists enjoy physics more than they enjoy sex, its\nthat they enjoy sex more when they are thinking of physics.\n", "msg_date": "Sun, 23 Jan 2005 00:58:28 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Tatsuo,\n\n> I'm not clear what \"pgPool only needs to monitor \"update switching\" by\n>\n> *connection* not by *table*\" means. In your example:\n> > (1) 00:00 User A updates \"My Profile\"\n> > (2) 00:01 \"My Profile\" UPDATE finishes executing.\n> > (3) 00:02 User A sees \"My Profile\" re-displayed\n> > (6) 00:04 \"My Profile\":UserA cascades to the last Slave server\n>\n> I think (2) and (3) are on different connections, thus pgpool cannot\n> judge if SELECT in (3) should go only to the master or not.\n>\n> To solve the problem you need to make pgpool understand \"web sessions\"\n> not \"database connections\" and it seems impossible for pgpool to\n> understand \"sessions\".\n\nDepends on your connection pooling software, I suppose. Most connection \npooling software only returns connections to the pool after a user has been \ninactive for some period ... generally more than 3 seconds. So connection \ncontinuity could be trusted.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sun, 23 Jan 2005 14:42:52 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Ühel kenal päeval (neljapäev, 20. jaanuar 2005, 11:02-0500), kirjutas\nRod Taylor:\n\n\n> Slony has some other issues with databases > 200GB in size as well\n> (well, it hates long running transactions -- and pg_dump is a regular\n> long running transaction)\n\nIIRC it hates pg_dump mainly on master. If you are able to run pg_dump\nfrom slave, it should be ok.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 24 Jan 2005 01:28:29 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Ühel kenal päeval (neljapäev, 20. jaanuar 2005, 16:00+0100), kirjutas\nHervé Piedvache:\n\n> > Will both do what you want. Replicator is easier to setup but\n> > Slony is free.\n> \n> No ... as I have said ... how I'll manage a database getting a table of may be \n> 250 000 000 records ? 
I'll need incredible servers ... to get quick access or \n> index reading ... no ?\n> \n> So what we would like to get is a pool of small servers able to make one \n> virtual server ... for that is called a Cluster ... no ?\n> \n> I know they are not using PostgreSQL ... but how a company like Google do to \n> get an incredible database in size and so quick access ?\n\nThey use lots of boxes and lots custom software to implement a very\nspecific kind of cluster.\n\n> regards,\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 24 Jan 2005 01:44:39 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Tatsuo,\n> \n> > I'm not clear what \"pgPool only needs to monitor \"update switching\" by\n> >\n> > *connection* not by *table*\" means. In your example:\n> > > (1) 00:00 User A updates \"My Profile\"\n> > > (2) 00:01 \"My Profile\" UPDATE finishes executing.\n> > > (3) 00:02 User A sees \"My Profile\" re-displayed\n> > > (6) 00:04 \"My Profile\":UserA cascades to the last Slave server\n> >\n> > I think (2) and (3) are on different connections, thus pgpool cannot\n> > judge if SELECT in (3) should go only to the master or not.\n> >\n> > To solve the problem you need to make pgpool understand \"web sessions\"\n> > not \"database connections\" and it seems impossible for pgpool to\n> > understand \"sessions\".\n> \n> Depends on your connection pooling software, I suppose. Most connection \n> pooling software only returns connections to the pool after a user has been \n> inactive for some period ... generally more than 3 seconds. So connection \n> continuity could be trusted.\n\nNot sure what you mean by \"most connection pooling software\", but I'm\nsure that pgpool behaves differently.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 24 Jan 2005 10:30:33 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Tatsuo,\n\n> > Depends on your connection pooling software, I suppose. Most connection\n> > pooling software only returns connections to the pool after a user has\n> > been inactive for some period ... generally more than 3 seconds. So\n> > connection continuity could be trusted.\n>\n> Not sure what you mean by \"most connection pooling software\", but I'm\n> sure that pgpool behaves differently.\n\nAh, clarity problem here. I'm talking about connection pooling tools from \nthe client (webserver) side, such as Apache::DBI, PHP's pg_pconnect, \nJakarta's connection pools, etc. Not pooling on the database server side, \nwhich is what pgPool provides.\n\nMost of these tools allocate a database connection to an HTTP/middleware \nclient, and only release it after a specific period of inactivity. This \nmeans that you *could* count on \"web-user==connection\" for purposes of \nswitching back and forth to the master -- as long as the connection-recycling \ntimeout were set higher than the pgPool switch-off period.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Jan 2005 09:52:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Mon, 2005-01-24 at 09:52 -0800, Josh Berkus wrote:\n> [about keeping connections open in web context]\n> Ah, clarity problem here. 
I'm talking about connection pooling tools from \n> the client (webserver) side, such as Apache::DBI, PHP's pg_pconnect, \n> Jakarta's connection pools, etc. Not pooling on the database server side, \n> which is what pgPool provides.\n\nnote that these sometimes do not provide connection pooling as such,\njust persistent connections (Apache::DBI)\n\n> Most of these tools allocate a database connection to an HTTP/middleware \n> client, and only release it after a specific period of inactivity. This \n> means that you *could* count on \"web-user==connection\" for purposes of \n> switching back and forth to the master -- as long as the connection-recycling \n> timeout were set higher than the pgPool switch-off period.\n\nno. you can only count on web-server-process==connection, but not\nweb-user==connection, unless you can garantee that the same user\nclient always connects to same web-server process.\n\nam i missing something ?\n\ngnari\n\n\n", "msg_date": "Mon, 24 Jan 2005 18:24:05 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "Ragnar,\n\n> note that these sometimes do not provide connection pooling as such,\n> just persistent connections (Apache::DBI)\n\nYes, right.\n\n> no. you can only count on web-server-process==connection, but not\n> web-user==connection, unless you can garantee that the same user\n> client always connects to same web-server process.\n\nAre there ones that you use which might use several different connections to \nsend a series of queries from a single web-user, less than 5 seconds apart?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Mon, 24 Jan 2005 15:45:51 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "> On Mon, 2005-01-24 at 09:52 -0800, Josh Berkus wrote:\n> > [about keeping connections open in web context]\n> > Ah, clarity problem here. I'm talking about connection pooling tools from \n> > the client (webserver) side, such as Apache::DBI, PHP's pg_pconnect, \n> > Jakarta's connection pools, etc. Not pooling on the database server side, \n> > which is what pgPool provides.\n> \n> note that these sometimes do not provide connection pooling as such,\n> just persistent connections (Apache::DBI)\n\nRight. Same thing can be said to pg_pconnect.\n\n> > Most of these tools allocate a database connection to an HTTP/middleware \n> > client, and only release it after a specific period of inactivity. This \n> > means that you *could* count on \"web-user==connection\" for purposes of \n> > switching back and forth to the master -- as long as the connection-recycling \n> > timeout were set higher than the pgPool switch-off period.\n> \n> no. you can only count on web-server-process==connection, but not\n> web-user==connection, unless you can garantee that the same user\n> client always connects to same web-server process.\n\nI have same opinion.\n\n> am i missing something ?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 25 Jan 2005 09:21:09 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "On Mon, 2005-01-24 at 15:45 -0800, Josh Berkus wrote:\n\n> [about keeping open DB connections between web-client connections]\n\n> [I wrote:]\n> > no. 
you can only count on web-server-process==connection, but not\n> > web-user==connection, unless you can garantee that the same user\n> > client always connects to same web-server process.\n> \n> Are there ones that you use which might use several different connections to \n> send a series of queries from a single web-user, less than 5 seconds apart?\n\nactually, it had never occurred to me to test all browsers in this\nreguard, but i can think of LWP::UserAgent.\n\ngnari\n\n\n", "msg_date": "Tue, 25 Jan 2005 09:52:06 +0000", "msg_from": "Ragnar =?ISO-8859-1?Q?Hafsta=F0?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "Josh,\n\n\tPlease excuse how my client quotes things...\n\n> Are there ones that you use which might use several different connections to \n> send a series of queries from a single web-user, less than 5 seconds apart?\n\n\tUsing Apache/Perl I often have a situation where we're sending several queries from the same user (web client) within seconds, or even simultaneously, that use different connections.\n\n\tWhen someone logs in to our system they get a frameset that has 5 windows, each of which is filled with data from queries. Since the pages in the frames are requested separately by the client the system doesn't insure that they go to the same process, and subsequently, that they're not served by the same db connection.\n\n\tSession information is stored in the database (so it's easily persistent across server processes), so it would be bad if a request for a page was served by a db server that didn't yet have information about the user (such as that they're logged in, etc.).\n\n\tIf we ever have enough traffic to warrant it, we're going to go to a load balancer that passes requests to different identical web servers, at which point we won't even be getting requests from the same machine, much less the same connection.\n\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]On Behalf Of Josh Berkus\nSent: Monday, January 24, 2005 3:46 PM\nTo: Ragnar Hafstað\nCc: [email protected]; Tatsuo Ishii\nSubject: Re: [PERFORM] PgPool changes WAS: PostgreSQL clustering VS\nMySQL\n\n\nRagnar,\n\n> note that these sometimes do not provide connection pooling as such,\n> just persistent connections (Apache::DBI)\n\nYes, right.\n\n> no. 
you can only count on web-server-process==connection, but not\n> web-user==connection, unless you can garantee that the same user\n> client always connects to same web-server process.\n\nAre there ones that you use which might use several different connections to \nsend a series of queries from a single web-user, less than 5 seconds apart?\n\n-- \n--Josh\n\nJosh Berkus\nAglio Database Solutions\nSan Francisco\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to [email protected])\n\n", "msg_date": "Tue, 25 Jan 2005 08:49:58 -0800", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "Peter, Ragnar,\n\n> > Are there ones that you use which might use several different connections\n> > to send a series of queries from a single web-user, less than 5 seconds\n> > apart?\n>\n> \tUsing Apache/Perl I often have a situation where we're sending several\n> queries from the same user (web client) within seconds, or even\n> simultaneously, that use different connections.\n\nSo from the sound of it, the connection methods I've been using are the \nexception rather than the rule. Darn, it worked well for us. :-(\n\nWhat this would point to is NOT being able to use Slony-I for database server \npooling for most web applications. Yes? Users should look to pgCluster and \nC-JDBC instead.\n\nBTW, Tatsuo, what's the code relationship between pgPool and pgCluster, if \nany?\n\n--Josh\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 25 Jan 2005 08:58:37 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "> Peter, Ragnar,\n> \n> > > Are there ones that you use which might use several different connections\n> > > to send a series of queries from a single web-user, less than 5 seconds\n> > > apart?\n> >\n> > \tUsing Apache/Perl I often have a situation where we're sending several\n> > queries from the same user (web client) within seconds, or even\n> > simultaneously, that use different connections.\n> \n> So from the sound of it, the connection methods I've been using are the \n> exception rather than the rule. Darn, it worked well for us. :-(\n> \n> What this would point to is NOT being able to use Slony-I for database server \n> pooling for most web applications. Yes? Users should look to pgCluster and \n> C-JDBC instead.\n\nYup. That's the limitaion of async replication solutions.\n\n> BTW, Tatsuo, what's the code relationship between pgPool and pgCluster, if \n> any?\n\nPGCluster consists of three kind of servers, \"load balance server\",\n\"cluster server\"(modified PostgreSQL backend) and \"replication\nserver\". I believe some of codes of pgpool are used in the load\nbalance server to avoid \"re-invent a wheel\". This is a beauty of open\nsource software project.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 26 Jan 2005 10:09:04 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PgPool changes WAS: PostgreSQL clustering VS MySQL" }, { "msg_contents": "On Thu, Jan 20, 2005 at 04:02:39PM +0100, Herv� Piedvache wrote:\n> \n> I don't insist about have data in RAM .... 
but when you use PostgreSQL with \n> big database you know that for quick access just for reading the index file \n> for example it's better to have many RAM as possible ... I just want to be \n> able to get a quick access with a growing and growind database ...\n\nWell, in any case, you need much better hardware than you're looking\nat. I mean, dual Xeon with 2 Gig isn't hardly big iron. Why don't\nyou try benchmarking on a honking big box -- IBM P690 or a big Sun\n(I'd counsel against that, though) or something like that? Or even\nsome Opterons. Dual Xeon is probablt your very worst choice at the\nmoment.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nInformation security isn't a technological problem. It's an economics\nproblem.\n\t\t--Bruce Schneier\n", "msg_date": "Fri, 28 Jan 2005 10:29:58 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:\n> \n> I was thinking the same! I'd like to know how other databases such as Oracle\n> do it.\n\nYou mean \"how Oracle does it\". They're the only ones in the market\nthat really have this technology.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThis work was visionary and imaginative, and goes to show that visionary\nand imaginative work need not end up well. \n\t\t--Dennis Ritchie\n", "msg_date": "Fri, 28 Jan 2005 10:31:38 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, Jan 20, 2005 at 03:54:23PM +0100, Herv� Piedvache wrote:\n> Slony do not use RAM ... but PostgreSQL will need RAM for accessing a database \n> of 50 Gb ... so having two servers with the same configuration replicated by \n> slony do not slove the problem of the scalability of the database ...\n\nYou could use SSD for your storage. That'd make it go rather quickly\neven if it had to seek on disk.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe plural of anecdote is not data.\n\t\t--Roger Brinner\n", "msg_date": "Fri, 28 Jan 2005 10:33:03 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Thu, Jan 20, 2005 at 04:07:51PM +0100, Herv� Piedvache wrote:\n> Yes seems to be the only solution ... but I'm a little disapointed about \n> this ... could you explain me why there is not this kind of \n> functionnality ... it seems to be a real need for big applications no ?\n\nI hate to be snarky, but the reason there isn't this kind of system\njust hanging around is that it's a Very Hard Problem. I spent 2 days\nlast week in a room with some of the smartest people I know, and\nthere was widespread agreement that what you want is a very tough\nproblem.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nIn the future this spectacle of the middle classes shocking the avant-\ngarde will probably become the textbook definition of Postmodernism. \n --Brad Holland\n", "msg_date": "Fri, 28 Jan 2005 10:34:25 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:\n> \n> IIRC it hates pg_dump mainly on master. 
If you are able to run pg_dump\n> from slave, it should be ok.\n\nFor the sake of the archives, that's not really a good idea. There\nis some work afoot to solve it, but at the moment dumping from a\nslave gives you a useless database dump.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\nThe fact that technology doesn't work is no bar to success in the marketplace.\n\t\t--Philip Greenspun\n", "msg_date": "Fri, 28 Jan 2005 10:36:20 -0500", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "At this point I will interject a couple of benchmark numbers based on\na new system we just configured as food for thought.\n\nSystem A (old system):\nCompaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID\n1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original\nPrice: $6500\n\nSystem B (new system):\nSelf Built Dual Opteron 242 with 2x3ware 9500S-8MI SATA, one RAID 1\n(OS), one 4 drive RAID 10 (pg_xlog), one 6 drive RAID 10 (data) on 10k\nRPM Raptors, 4GB PC3200 RAM. Current price $7200\n\nSystem A for our large insert job: 125 minutes\nSystem B for our large insert job: 10 minutes.\n\nThere is no logical way there should be a 12x performance difference\nbetween these two systems, maybe 2x or even 4x, but not 12x\n\nBad controler cards/configuration will seriously ruin your day. 3ware\nescalade cards are very well supported on linux, and work excellently.\n Compaq smart array cards are not. Bonnie++ benchmarks show a 9MB/sec\nwrite, 29MB/sec read on the RAID 5, but a 172MB/sec write on the\n6xRAID 10, and 66MB/sec write on the RAID 1 on the 3ware.\n\nWith the right configuration you can get very serious throughput. The\nnew system is processing over 2500 insert transactions per second. We\ndon't need more RAM with this config. The disks are fast enough. \n2500 transaction/second is pretty damn fast.\n\nAlex Turner\n\nOn Fri, 28 Jan 2005 10:31:38 -0500, Andrew Sullivan <[email protected]> wrote:\n> On Thu, Jan 20, 2005 at 10:40:02PM -0200, Bruno Almeida do Lago wrote:\n> >\n> > I was thinking the same! I'd like to know how other databases such as Oracle\n> > do it.\n> \n> You mean \"how Oracle does it\". They're the only ones in the market\n> that really have this technology.\n> \n> A\n> \n> --\n> Andrew Sullivan | [email protected]\n> This work was visionary and imaginative, and goes to show that visionary\n> and imaginative work need not end up well.\n> --Dennis Ritchie\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n", "msg_date": "Fri, 28 Jan 2005 10:59:58 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Fri, 28 Jan 2005 10:59:58 -0500\nAlex Turner <[email protected]> wrote:\n\n> At this point I will interject a couple of benchmark numbers based on\n> a new system we just configured as food for thought.\n> \n> System A (old system):\n> Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID\n> 1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original\n> Price: $6500\n> \n> System B (new system):\n> Self Built Dual Opteron 242 with 2x3ware 9500S-8MI SATA, one RAID 1\n> (OS), one 4 drive RAID 10 (pg_xlog), one 6 drive RAID 10 (data) on 10k\n> RPM Raptors, 4GB PC3200 RAM. 
Current price $7200\n> \n> System A for our large insert job: 125 minutes\n> System B for our large insert job: 10 minutes.\n> \n> There is no logical way there should be a 12x performance difference\n> between these two systems, maybe 2x or even 4x, but not 12x\n> \n> Bad controler cards/configuration will seriously ruin your day. 3ware\n> escalade cards are very well supported on linux, and work excellently.\n> Compaq smart array cards are not. Bonnie++ benchmarks show a 9MB/sec\n> write, 29MB/sec read on the RAID 5, but a 172MB/sec write on the\n> 6xRAID 10, and 66MB/sec write on the RAID 1 on the 3ware.\n> \n> With the right configuration you can get very serious throughput. The\n> new system is processing over 2500 insert transactions per second. We\n> don't need more RAM with this config. The disks are fast enough. \n> 2500 transaction/second is pretty damn fast.\n\n I agree that badly supported or configured cards can ruin your\n performance. \n\n However, don't you think moving pg_xlog onto a separate RAID and\n increasing your number of spindles from 3 to 6 on the data RAID would\n also have a significant impact on performance, no matter what card was\n used? \n\n I'm not sure you can give all the credit to the card on this one. \n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n", "msg_date": "Fri, 28 Jan 2005 10:17:24 -0600", "msg_from": "Frank Wiles <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On 01/28/2005-10:59AM, Alex Turner wrote:\n> At this point I will interject a couple of benchmark numbers based on\n> a new system we just configured as food for thought.\n> \n> System A (old system):\n> Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID\n> 1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original\n> Price: $6500\n> \n> System B (new system):\n> Self Built Dual Opteron 242 with 2x3ware 9500S-8MI SATA, one RAID 1\n> (OS), one 4 drive RAID 10 (pg_xlog), one 6 drive RAID 10 (data) on 10k\n> RPM Raptors, 4GB PC3200 RAM. Current price $7200\n> \n> System A for our large insert job: 125 minutes\n> System B for our large insert job: 10 minutes.\n> \n> There is no logical way there should be a 12x performance difference\n> between these two systems, maybe 2x or even 4x, but not 12x\n> \n\nYour system A has the absolute worst case Raid 5, 3 drives. The more\ndrives you add to Raid 5 the better it gets but it will never beat Raid\n10. On top of it being the worst case, pg_xlog is not on a separate\nspindle.\n\nYour system B has a MUCH better config. Raid 10 is faster than Raid 5 to\nbegin with but on top of that you have more drives involved plus pg_xlog\nis on a separate spindle.\n\nI'd say I am not surprised by your performance difference.\n\n> Bad controler cards/configuration will seriously ruin your day. 3ware\n> escalade cards are very well supported on linux, and work excellently.\n> Compaq smart array cards are not. Bonnie++ benchmarks show a 9MB/sec\n> write, 29MB/sec read on the RAID 5, but a 172MB/sec write on the\n> 6xRAID 10, and 66MB/sec write on the RAID 1 on the 3ware.\n> \n\nWhat does bonnie say about the Raid 1 on the Compaq? Comparing the two\nRaid 1s is really the only valid comparison that can be made between\nthese two machines. 
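If bonnie++ isn't handy on the old box, even a crude sequential-throughput
check gives a ballpark figure for that like-for-like comparison. A minimal
sketch in Python follows; the file path and size are assumptions, and the
read figure only means anything if the file is larger than RAM (or the page
cache is dropped between the two runs).

    import os
    import time

    PATH    = "/tmp/seqtest.dat"   # put this on the array you want to measure
    SIZE_MB = 4096                 # use more than physical RAM to defeat the cache
    CHUNK   = b"\0" * (1 << 20)    # 1 MiB blocks

    def seq_write():
        start = time.time()
        f = open(PATH, "wb")
        for _ in range(SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())       # make sure the data really hit the disks
        f.close()
        return SIZE_MB / (time.time() - start)

    def seq_read():
        start = time.time()
        f = open(PATH, "rb")
        while f.read(1 << 20):
            pass
        f.close()
        return SIZE_MB / (time.time() - start)

    if __name__ == "__main__":
        print("sequential write: %.1f MB/sec" % seq_write())
        print("sequential read:  %.1f MB/sec" % seq_read())
        os.remove(PATH)

Bonnie++ measures a lot more than this (seeks, per-character I/O, file
creation), so treat numbers like these as a sanity check only.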
Other than that you are comparing apples to \nsnow shovels.\n\n", "msg_date": "Fri, 28 Jan 2005 11:54:57 -0500", "msg_from": "Christopher Weimann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "[email protected] (Andrew Sullivan) writes:\n> On Mon, Jan 24, 2005 at 01:28:29AM +0200, Hannu Krosing wrote:\n>> \n>> IIRC it hates pg_dump mainly on master. If you are able to run pg_dump\n>> from slave, it should be ok.\n>\n> For the sake of the archives, that's not really a good idea. There\n> is some work afoot to solve it, but at the moment dumping from a\n> slave gives you a useless database dump.\n\nThat overstates things a tad; I think it's worth elaborating on a bit.\n\nThere's a problem with the results of dumping the _schema_ from a\nSlony-I 'subscriber' node; you want to get the schema from the origin\nnode. The problem has to do with triggers; Slony-I suppresses RI\ntriggers and such like on subscriber nodes in a fashion that leaves\nthe dumped schema a bit broken with regard to triggers.\n\nBut there's nothing wrong with the idea of using \"pg_dump --data-only\"\nagainst a subscriber node to get you the data without putting a load\non the origin. And then pulling the schema from the origin, which\noughtn't be terribly expensive there.\n-- \n\"cbbrowne\",\"@\",\"ca.afilias.info\"\n<http://dev6.int.libertyrms.com/>\nChristopher Browne\n(416) 673-4124 (land)\n", "msg_date": "Fri, 28 Jan 2005 14:49:29 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hervᅵ Piedvache wrote:\n>>My point being is that there is no free solution. There simply isn't.\n>>I don't know why you insist on keeping all your data in RAM, but the\n>>mysql cluster requires that ALL data MUST fit in RAM all the time.\n> \n> \n> I don't insist about have data in RAM .... but when you use PostgreSQL with \n> big database you know that for quick access just for reading the index file \n> for example it's better to have many RAM as possible ... I just want to be \n> able to get a quick access with a growing and growind database ...\n\nIf it's an issue of RAM and not CPU power, think about this scenario. \nLet's just say you *COULD* partition your DB over multiple servers. What \n are your plans then? Are you going to buy 4 Dual Xeon servers? Ok, \nlet's price that out.\n\nFor a full-blown rackmount server w/ RAID, 6+ SCSI drives and so on, you \nare looking at roughly $4000 per machine. So now you have 4 machines -- \ntotal of 16GB of RAM over the 4 machines.\n\nOn the otherhand, let's say you spent that money on a Quad Opteron \ninstead. 4x850 will cost you roughly $8000. 16GB of RAM using 1GB DIMMs \nis $3000. If you went with 2GB DIMMs, you could stuff 32GB of RAM onto \nthat machine for $7500.\n\nLet's review the math:\n\n4X server cluster, total 16GB RAM = $16K\n1 beefy server w/ 16GB RAM = $11K\n1 beefy server w/ 32GB RAM = $16K\n\nI know what I would choose. I'd get the mega server w/ a ton of RAM and \nskip all the trickyness of partitioning a DB over multiple servers. Yes \nyour data will grow to a point where even the XXGB can't cache \neverything. On the otherhand, memory prices drop just as fast. 
By that \ntime, you can ebay your original 16/32GB and get 64/128GB.\n", "msg_date": "Fri, 28 Jan 2005 14:04:24 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Fri, 28 Jan 2005 11:54:57 -0500, Christopher Weimann\n<[email protected]> wrote:\n> On 01/28/2005-10:59AM, Alex Turner wrote:\n> > At this point I will interject a couple of benchmark numbers based on\n> > a new system we just configured as food for thought.\n> >\n> > System A (old system):\n> > Compaq Proliant Dual Pentium III 933 with Smart Array 5300, one RAID\n> > 1, one 3 Disk RAID 5 on 10k RPM drives, 2GB PC133 RAM. Original\n> > Price: $6500\n> >\n> > System B (new system):\n> > Self Built Dual Opteron 242 with 2x3ware 9500S-8MI SATA, one RAID 1\n> > (OS), one 4 drive RAID 10 (pg_xlog), one 6 drive RAID 10 (data) on 10k\n> > RPM Raptors, 4GB PC3200 RAM. Current price $7200\n> >\n> > System A for our large insert job: 125 minutes\n> > System B for our large insert job: 10 minutes.\n> >\n> > There is no logical way there should be a 12x performance difference\n> > between these two systems, maybe 2x or even 4x, but not 12x\n> >\n> \n> Your system A has the absolute worst case Raid 5, 3 drives. The more\n> drives you add to Raid 5 the better it gets but it will never beat Raid\n> 10. On top of it being the worst case, pg_xlog is not on a separate\n> spindle.\n> \n\nTrue for writes, but not for reads.\n\n> Your system B has a MUCH better config. Raid 10 is faster than Raid 5 to\n> begin with but on top of that you have more drives involved plus pg_xlog\n> is on a separate spindle.\n\nI absolutely agree, it is a much better config, thats why we bought it\n;).. In system A, the xlog was actualy on the RAID 1, so it was\ninfact on a seperate spindle set.\n\n> \n> I'd say I am not surprised by your performance difference.\n> \n\nI'm not surprised at all that the new system outperformed the old,\nit's more the factor of improvement. 12x is a _VERY_ big performance\njump.\n\n> > Bad controler cards/configuration will seriously ruin your day. 3ware\n> > escalade cards are very well supported on linux, and work excellently.\n> > Compaq smart array cards are not. Bonnie++ benchmarks show a 9MB/sec\n> > write, 29MB/sec read on the RAID 5, but a 172MB/sec write on the\n> > 6xRAID 10, and 66MB/sec write on the RAID 1 on the 3ware.\n> >\n> \n> What does bonnie say about the Raid 1 on the Compaq? Comparing the two\n> Raid 1s is really the only valid comparison that can be made between\n> these two machines. Other than that you are comparing apples to\n> snow shovels.\n> \n> \n\n\nMy main point is that you can spend $7k on a server and believe you\nhave a fast system. The person who bought the original system was\nunder the delusion that it would make a good DB server. For the same\n$7k a different configuration can yield a vastly different performance\noutput. This means that it's not quite apples to snow shovels. \nPeople who _believe_ they have an adequate config are often sorely\nmistaken, and ask misguided questions about needed 20GB of RAM because\nthe system can't page to disk fast enough, when what they really need\nis a good RAID 10 with a high quality controler. 
A six drive RAID 10\nis going to run a bit less than 20G of SSD.\n\nAlex Turner\nNetEconomist\n", "msg_date": "Fri, 28 Jan 2005 17:57:11 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On 01/28/2005-05:57PM, Alex Turner wrote:\n> > \n> > Your system A has the absolute worst case Raid 5, 3 drives. The more\n> > drives you add to Raid 5 the better it gets but it will never beat Raid\n> > 10. On top of it being the worst case, pg_xlog is not on a separate\n> > spindle.\n> > \n> \n> True for writes, but not for reads.\n> \n\nGood point.\n\n> \n> My main point is that you can spend $7k on a server and believe you\n> have a fast system. The person who bought the original system was\n> under the delusion that it would make a good DB server. For the same\n> $7k a different configuration can yield a vastly different performance\n> output. This means that it's not quite apples to snow shovels. \n\nThat point is definitely made. I primarily wanted to point out that the\ncontrollers involved were not the only difference. \n\nIn my experience with SQL servers of various flavors, fast disks and \ngetting things onto separate spindles are more important than just\nabout anything else. Depending on the size of your 'hot' dataset\nRAM could be more important and CPU never is. \n\n", "msg_date": "Fri, 28 Jan 2005 19:48:37 -0500", "msg_from": "Christopher Weimann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "\nWilliam Yu <[email protected]> writes:\n\n> 1 beefy server w/ 32GB RAM = $16K\n> \n> I know what I would choose. 
I'd get the mega server w/ a ton of RAM and skip\n>>all the trickyness of partitioning a DB over multiple servers. Yes your data\n>>will grow to a point where even the XXGB can't cache everything. On the\n>>otherhand, memory prices drop just as fast. By that time, you can ebay your\n>>original 16/32GB and get 64/128GB.\n> \n> \n> a) What do you do when your calculations show you need 256G of ram? [Yes such\n> machines exist but you're not longer in the realm of simply \"add more RAM\".\n> Administering such machines is nigh as complex as clustering]\n\nIf you need that much memory, you've got enough customers paying you \ncash to pay for anything. :) Technology always increase -- 8X Opterons \nwould double your memory capacity, higher capacity DIMMs, etc.\n\n> b) What do you do when you find you need multiple machines anyways to divide\n> the CPU or I/O or network load up. Now you need n big beefy servers when n\n> servers 1/nth as large would really have sufficed. This is a big difference\n> when you're talking about the difference between colocating 16 1U boxen with\n> 4G of ram vs 16 4U opterons with 64G of RAM...\n> \n> All that said, yes, speaking as a user I think the path of least resistance is\n> to build n complete slaves using Slony and then just divide the workload.\n> That's how I'm picturing going when I get to that point.\n\nReplication is good for uptime and high read systems. The problem is \nthat if your system has a high volume of writes and you need near \nrealtime data syncing, clusters don't get you anything. A write on one \nserver means a write on every server. Spreading out the damage over \nmultiple machines doesn't help a bit.\n\nPlus the fact that we don't have multi-master replication yet is quite a \nbugaboo. That requires writing quite extensive code if you can't afford \nto have 1 server be your single point of failure. We wrote our own \nmulti-master replication code at the client app level and it's quite a \nchore making sure the replication act logically. Every table needs to \nhave separate logic to parse situations like \"voucher was posted on \nserver 1 but voided after on server 2, what's the correct action here?\" \nSo I've got a slew of complicated if-then-else statements that not only \nhave to take into account type of update being made but the sequence.\n\nAnd yes, I tried doing realtime locks over a VPN link over our servers \nin SF and VA. Ugh...latency was absolutely horrible and made \ntransactions run 1000X slower.\n", "msg_date": "Sat, 29 Jan 2005 00:22:14 -0800", "msg_from": "William Yu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On 1/20/2005 9:23 AM, Jean-Max Reymond wrote:\n\n> On Thu, 20 Jan 2005 15:03:31 +0100, Herv� Piedvache <[email protected]> wrote:\n> \n>> We were at this moment thinking about a Cluster solution ... We saw on the\n>> Internet many solution talking about Cluster solution using MySQL ... but\n>> nothing about PostgreSQL ... the idea is to use several servers to make a\n>> sort of big virtual server using the disk space of each server as one, and\n>> having the ability to use the CPU and RAM of each servers in order to\n>> maintain good service performance ...one can imagin it is like a GFS but\n>> dedicated to postgreSQL...\n>> \n> \n> forget mysql cluster for now.\n\nSorry for the late reply.\n\nI'd second that. 
I was just on the Solutions Linux in Paris and spoke \nwith MySQL people.\n\nThere were some questions I had around the new NDB cluster tables and I \nstopped by at their booth. My question if there are any plans to add \nforeign key support to NDB cluster tables got answered with \"it will \ndefinitely be in the next version, which is the one containing NDB \ncluster, so yes, it will support foreign key from the start\".\n\nBack home I found some more time to investigate and found this forum \narticle http://lists.mysql.com/cluster/1442 posted by a MySQL AB senior \nsoftware architect, where he says exactly the opposite.\n\nI don't know about your application, but trust me that maintaining \nproper referential integrity on the application level against a \nmultimaster clustered database isn't that easy. So this is in fact a \nvery important question.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Sun, 06 Feb 2005 11:42:41 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On 1/28/2005 2:49 PM, Christopher Browne wrote:\n\n> But there's nothing wrong with the idea of using \"pg_dump --data-only\"\n> against a subscriber node to get you the data without putting a load\n> on the origin. And then pulling the schema from the origin, which\n> oughtn't be terribly expensive there.\n\nAnd there is a script in the current CVS head that extracts the schema \nfrom the origin in a clean, slony-traces-removed state.\n\n\nJan\n\n-- \n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== [email protected] #\n", "msg_date": "Sun, 06 Feb 2005 18:06:03 -0500", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Josh Berkus wrote:\n> Tatsuo,\n> \n> \n>>Yes. However it would be pretty easy to modify pgpool so that it could\n>>cope with Slony-I. I.e.\n>>\n>>1) pgpool does the load balance and sends query to Slony-I's slave and\n>> master if the query is SELECT.\n>>\n>>2) pgpool sends query only to the master if the query is other than\n>> SELECT.\n\nDon't you think that this is unsafe ?\n\n\nSELECT foo(id), id\nFROM bar;\n\n\nwhere foo have side effect.\n\nIs pgpool able to detect it and perform this select on the master ?\n\n\nRegards\nGaetano Mendola\n\n\n\n", "msg_date": "Sat, 19 Feb 2005 00:20:08 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Jim C. Nasby wrote:\n> On Thu, Jan 20, 2005 at 10:08:47AM -0500, Stephen Frost wrote:\n> \n>>* Christopher Kings-Lynne ([email protected]) wrote:\n>>\n>>>PostgreSQL has replication, but not partitioning (which is what you want).\n>>\n>>It doesn't have multi-server partitioning.. It's got partitioning\n>>within a single server (doesn't it? 
I thought it did, I know it was\n>>discussed w/ the guy from Cox Communications and I thought he was using\n>>it :).\n> \n> \n> No, PostgreSQL doesn't support any kind of partitioning, unless you\n> write it yourself. I think there's some work being done in this area,\n> though.\n\nSeen my last attempts to perform an horizontal partition I have to say\nthat postgres do not support it even if you try to write it yourself\n(see my post \"horizontal partion\" ).\n\n\nRegards\nGaetano Mendola\n\n\n\n", "msg_date": "Sat, 19 Feb 2005 00:27:07 +0100", "msg_from": "Gaetano Mendola <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
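To make the side-effect hazard above concrete, here is a minimal, self-contained version of Gaetano's fragment. The table, column and function names are only illustrative, and it assumes plpgsql is installed; the point is that the statement looks like a plain read to anything that routes on the leading SELECT keyword, yet every execution writes.

CREATE TABLE bar (id integer PRIMARY KEY, hits integer NOT NULL DEFAULT 0);
INSERT INTO bar (id) VALUES (1);

-- assumes the plpgsql language has been installed (createlang plpgsql)
CREATE FUNCTION foo(integer) RETURNS integer AS '
BEGIN
    -- the side effect hidden behind a SELECT
    UPDATE bar SET hits = hits + 1 WHERE id = $1;
    RETURN $1;
END;
' LANGUAGE plpgsql;

-- A statement-level balancer that classifies this as read-only will send it
-- to a slave, where the UPDATE either fails or diverges from the master:
SELECT foo(id), id FROM bar;

So a router either has to know which functions have side effects, or fall back to sending anything it is unsure about to the master.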
[ { "msg_contents": "> No please do not talk about this again ... I'm looking about a PostgreSQL\n> solution ... I know RAC ... and I'm not able to pay for a RAC certify\n> hardware configuration plus a RAC Licence.\n\nAre you totally certain you can't solve your problem with a single server solution?\n\nHow about:\nPrice out a 4 way Opteron 4u rackmount server with 64 bit linux, stuffed with hard drives (like 40) set up in a complex raid configuration (multiple raid controllers) allowing you (with tablespaces) to divide up your database.\n\nYou can drop in dual core opterons at some later point for an easy upgrade. Let's say this server costs 20k$...are you sure this will not be enough to handle your load?\n\nMerlin\n", "msg_date": "Thu, 20 Jan 2005 10:16:21 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 16:16, Merlin Moncure a ᅵcrit :\n> > No please do not talk about this again ... I'm looking about a PostgreSQL\n> > solution ... I know RAC ... and I'm not able to pay for a RAC certify\n> > hardware configuration plus a RAC Licence.\n>\n> Are you totally certain you can't solve your problem with a single server\n> solution?\n>\n> How about:\n> Price out a 4 way Opteron 4u rackmount server with 64 bit linux, stuffed\n> with hard drives (like 40) set up in a complex raid configuration (multiple\n> raid controllers) allowing you (with tablespaces) to divide up your\n> database.\n>\n> You can drop in dual core opterons at some later point for an easy upgrade.\n> Let's say this server costs 20k$...are you sure this will not be enough to\n> handle your load?\n\nI'm not as I said ibn my mail I want to do a Cluster of servers ... :o)\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 16:31:06 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Could you explain us what do you have in mind for that solution? I mean,\nforget the PostgreSQL (or any other database) restrictions and explain us\nhow this hardware would be. Where the data would be stored?\n\nI've something in mind for you, but first I need to understand your needs!\n\n\nC ya.\nBruno Almeida do Lago\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Hervé Piedvache\nSent: Thursday, January 20, 2005 1:31 PM\nTo: Merlin Moncure\nCc: [email protected]\nSubject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering\n\nLe Jeudi 20 Janvier 2005 16:16, Merlin Moncure a écrit :\n> > No please do not talk about this again ... I'm looking about a\nPostgreSQL\n> > solution ... I know RAC ... 
and I'm not able to pay for a RAC certify\n> > hardware configuration plus a RAC Licence.\n>\n> Are you totally certain you can't solve your problem with a single server\n> solution?\n>\n> How about:\n> Price out a 4 way Opteron 4u rackmount server with 64 bit linux, stuffed\n> with hard drives (like 40) set up in a complex raid configuration\n(multiple\n> raid controllers) allowing you (with tablespaces) to divide up your\n> database.\n>\n> You can drop in dual core opterons at some later point for an easy\nupgrade.\n> Let's say this server costs 20k$...are you sure this will not be enough\nto\n> handle your load?\n\nI'm not as I said ibn my mail I want to do a Cluster of servers ... :o)\n-- \nHervé Piedvache\n\nElma Ingénierie Informatique\n6 rue du Faubourg Saint-Honoré\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n\n", "msg_date": "Thu, 20 Jan 2005 16:09:57 -0200", "msg_from": "\"Bruno Almeida do Lago\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a ᅵcrit :\n> Could you explain us what do you have in mind for that solution? I mean,\n> forget the PostgreSQL (or any other database) restrictions and explain us\n> how this hardware would be. Where the data would be stored?\n>\n> I've something in mind for you, but first I need to understand your needs!\n\nI just want to make a big database as explained in my first mail ... At the \nbeginning we will have aprox. 150 000 000 records ... each month we will add \nabout 4/8 millions new rows in constant flow during the day ... and in same \ntime web users will access to the database in order to read those data.\nStored data are quite close to data stored by google ... (we are not making a \ngoogle clone ... just a lot of data many small values and some big ones ... \nthat's why I'm comparing with google for data storage).\nThen we will have a search engine searching into those data ...\n\nDealing about the hardware, for the moment we have only a bi-pentium Xeon \n2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results ... so \nwe are thinking about a new solution with maybe several servers (server \ndesign may vary from one to other) ... to get a kind of cluster to get better \nperformance ...\n\nAm I clear ?\n\nRegards,\n-- \nHervᅵ Piedvache\n\nElma Ingᅵnierie Informatique\n6 rue du Faubourg Saint-Honorᅵ\nF-75008 - Paris - France\nPho. 33-144949901\nFax. 33-144949902\n", "msg_date": "Thu, 20 Jan 2005 20:00:03 +0100", "msg_from": "=?iso-8859-15?q?Herv=E9_Piedvache?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Herv� Piedvache <[email protected]> writes:\n\n> Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a �crit :\n> > Could you explain us what do you have in mind for that solution? I mean,\n> > forget the PostgreSQL (or any other database) restrictions and explain us\n> > how this hardware would be. Where the data would be stored?\n> >\n> > I've something in mind for you, but first I need to understand your needs!\n> \n> I just want to make a big database as explained in my first mail ... At the \n> beginning we will have aprox. 150 000 000 records ... each month we will add \n> about 4/8 millions new rows in constant flow during the day ... 
and in same \n> time web users will access to the database in order to read those data.\n> Stored data are quite close to data stored by google ... (we are not making a \n> google clone ... just a lot of data many small values and some big ones ... \n> that's why I'm comparing with google for data storage).\n> Then we will have a search engine searching into those data ...\n\nYou're concentrating on the data within the database. That's only half the\npicture. What are you going to *do* with the data in the database? You need to\nanalyze what \"we will have a search engine searching into those data\" means in\nmore detail.\n\nPostgres is more than capable of storing 150Gb of data. There are people with\nterabyte databases on this list. You need to define what types of queries you\nneed to perform, how many data they need to manipulate, and what your\nperformance requirements are for those queries.\n\n-- \ngreg\n\n", "msg_date": "20 Jan 2005 15:00:17 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Two way xeon's are as fast as a single opteron, 150M rows isn't a big deal.\nClustering isn't really the solution, I fail to see how clustering \nactually helps since it has to slow down file access.\n\nDave\n\nHervᅵ Piedvache wrote:\n\n>Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a ᅵcrit :\n> \n>\n>>Could you explain us what do you have in mind for that solution? I mean,\n>>forget the PostgreSQL (or any other database) restrictions and explain us\n>>how this hardware would be. Where the data would be stored?\n>>\n>>I've something in mind for you, but first I need to understand your needs!\n>> \n>>\n>\n>I just want to make a big database as explained in my first mail ... At the \n>beginning we will have aprox. 150 000 000 records ... each month we will add \n>about 4/8 millions new rows in constant flow during the day ... and in same \n>time web users will access to the database in order to read those data.\n>Stored data are quite close to data stored by google ... (we are not making a \n>google clone ... just a lot of data many small values and some big ones ... \n>that's why I'm comparing with google for data storage).\n>Then we will have a search engine searching into those data ...\n>\n>Dealing about the hardware, for the moment we have only a bi-pentium Xeon \n>2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results ... so \n>we are thinking about a new solution with maybe several servers (server \n>design may vary from one to other) ... to get a kind of cluster to get better \n>performance ...\n>\n>Am I clear ?\n>\n>Regards,\n> \n>\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561\n\n\n\n\n\n\n\n\nTwo way xeon's are as fast as a single opteron, 150M rows isn't a big\ndeal.\nClustering isn't really the solution, I fail to see how clustering\nactually helps since it has to slow down file access.\n\nDave\n\nHervᅵ Piedvache wrote:\n\nLe Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a ᅵcrit :\n \n\nCould you explain us what do you have in mind for that solution? I mean,\nforget the PostgreSQL (or any other database) restrictions and explain us\nhow this hardware would be. Where the data would be stored?\n\nI've something in mind for you, but first I need to understand your needs!\n \n\n\nI just want to make a big database as explained in my first mail ... At the \nbeginning we will have aprox. 150 000 000 records ... 
each month we will add \nabout 4/8 millions new rows in constant flow during the day ... and in same \ntime web users will access to the database in order to read those data.\nStored data are quite close to data stored by google ... (we are not making a \ngoogle clone ... just a lot of data many small values and some big ones ... \nthat's why I'm comparing with google for data storage).\nThen we will have a search engine searching into those data ...\n\nDealing about the hardware, for the moment we have only a bi-pentium Xeon \n2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results ... so \nwe are thinking about a new solution with maybe several servers (server \ndesign may vary from one to other) ... to get a kind of cluster to get better \nperformance ...\n\nAm I clear ?\n\nRegards,\n \n\n\n-- \nDave Cramer\nhttp://www.postgresintl.com\n519 939 0336\nICQ#14675561", "msg_date": "Thu, 20 Jan 2005 15:03:41 -0500", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Herv� Piedvache wrote:\n> \n> \n> Dealing about the hardware, for the moment we have only a bi-pentium Xeon \n> 2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results ... so \n> we are thinking about a new solution with maybe several servers (server \n> design may vary from one to other) ... to get a kind of cluster to get better \n> performance ...\n>\nThe poor performance may not necessarily be:\n\ni) attributable to the hardware or,\nii) solved by clustering.\n\nI would recommend determining *why* you got the slowdown. A few possible\nreasons are:\n\ni) not vacuuming often enough, freespacemap settings too small.\nii) postgresql.conf setting very non optimal.\niii) index and/or data design not optimal for PG.\n\nMy suspicions would start at iii).\n\nOther posters have pointed out that 250000000 records in itself is not\nnecessarily a problem, so this sort of data size is manageable.\n\nregards\n\nMark\n\n\n\n", "msg_date": "Fri, 21 Jan 2005 10:05:47 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
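Before concluding that new hardware or a cluster is needed, it is cheap to check Mark's point iii) directly. A sketch with invented table and column names, since the real schema is not shown in the thread:

VACUUM ANALYZE;   -- give the planner fresh statistics first

EXPLAIN ANALYZE
SELECT *
FROM   documents
WHERE  keyword = 'foo'
ORDER  BY added DESC
LIMIT  50;

-- If the plan shows a sequential scan over the whole 150M-row table, an index
-- matching the filter (and ideally the sort) is the first thing to try:
CREATE INDEX documents_keyword_added_idx ON documents (keyword, added);
ANALYZE documents;

If the plans already look sensible, then the discussion about memory, disks and partitioning becomes the relevant one.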
[ { "msg_contents": "I think maybe a SAN in conjunction with tablespaces might be the answer.\nStill need one honking server.\n\nRick\n\n\n \n Stephen Frost \n <[email protected]> To: Christopher Kings-Lynne <[email protected]> \n Sent by: cc: Hervé Piedvache <[email protected]>, [email protected] \n pgsql-performance-owner@pos Subject: Re: [PERFORM] PostgreSQL clustering VS MySQL clustering \n tgresql.org \n \n \n 01/20/2005 10:08 AM \n \n \n\n\n\n\n* Christopher Kings-Lynne ([email protected]) wrote:\n> PostgreSQL has replication, but not partitioning (which is what you\nwant).\n\nIt doesn't have multi-server partitioning.. It's got partitioning\nwithin a single server (doesn't it? I thought it did, I know it was\ndiscussed w/ the guy from Cox Communications and I thought he was using\nit :).\n\n> So, your only option is Oracle or another very expensive commercial\n> database.\n\nOr partition the data at the application layer.\n\n Stephen\n(See attached file: signature.asc)", "msg_date": "Thu, 20 Jan 2005 10:42:27 -0500", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "* [email protected] ([email protected]) wrote:\n> I think maybe a SAN in conjunction with tablespaces might be the answer.\n> Still need one honking server.\n\nThat's interesting- can a PostgreSQL partition be acress multiple\ntablespaces?\n\n\tStephen", "msg_date": "Thu, 20 Jan 2005 11:13:25 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "The problem is very large ammounts of data that needs to be both read\nand updated. If you replicate a system, you will need to\nintelligently route the reads to the server that has the data in RAM\nor you will always be hitting DIsk which is slow. This kind of routing\nAFAIK is not possible with current database technology, and you are\nstill stuck for writes.\n\nWrites are always going to be the bane of any cluster. Clustering can\ngive better parallel read performance i.e. large no. of clients\naccessing data simultaneously, but your write performance is always\ngoing to be bound by the underlying disk infrastructure, not even\nOracle RAC can get around this (It uses multiple read nodes accessing\nthe same set of database files underneath)\n\nGoogle solved the problem by building this intelligence into the\nmiddle tier, and using a distributed file system. Java Entity Beans\nare supposed to solve this problem somewhat by distributing the data\nacross multiple servers in a cluster and allowing you to defer write\nsyncing, but it really doesn't work all that well.\n\nThe only way I know to solve this at the RDBMS layer is to configure a\nvery powerfull disk layer, which is basicaly going to a SAN mesh with\nmultiple cards on a single system with multiple IO boards, or an OS\nthat clusters at the base level, thinking HP Superdome or z900. Even\nOpteron w/PCI-X cards has a limit of about 400MB/sec throughput on a\nsingle IO channel, and there are only two independent channels on any\nboards I know about.\n\nThe other solution is to do what google did. Implement your own\nmiddle tier that knows how to route queries to the appropriate place. \nEach node can then have it's own independant database with it's own\nindependant disk subsystem, and your throughput is only limited by\nyour network interconnects, and your internet pipe. 
This kind of\nmiddle tier is really not that hard to if your data can easily be\nsegmented. Each node runs it's own query sort and filter\nindependantly, and supplies the result to the central data broker,\nwhich then collates the results and supplies them back to the user. \nUpdated work in a similar fasion. The update comes into the central\nbroker that decides which nodes it will affect, and then issues\nupdates to those nodes.\n\nI've built this kind of architecture, if you want to do it, don't use\nJava unless you want to pay top dollar for your programmers, because\nit's hard to make it work well in Java (most JMS implementations suck,\nlook at MQueue or a custom queue impl, forget XML it's too slow to\nserialize and deserialize requests).\n\nAlex Turner\nNetEconomist\n\n\nOn Thu, 20 Jan 2005 11:13:25 -0500, Stephen Frost <[email protected]> wrote:\n> * [email protected] ([email protected]) wrote:\n> > I think maybe a SAN in conjunction with tablespaces might be the answer.\n> > Still need one honking server.\n> \n> That's interesting- can a PostgreSQL partition be acress multiple\n> tablespaces?\n> \n> Stephen\n> \n> \n>\n", "msg_date": "Thu, 20 Jan 2005 12:23:12 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
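To make the tablespace idea at the top of this thread concrete: 8.0 lets you place individual tables and indexes on different arrays by hand. The paths and names below are invented, and the directories have to exist and be owned by the postgres user.

CREATE TABLESPACE fastarray LOCATION '/mnt/raid10_a/pgdata';
CREATE TABLESPACE bigarray  LOCATION '/mnt/raid10_b/pgdata';

CREATE TABLE hits (
    id      bigserial PRIMARY KEY,
    ts      timestamp NOT NULL,
    payload text
) TABLESPACE bigarray;

CREATE INDEX hits_ts_idx ON hits (ts) TABLESPACE fastarray;

A single table still lives in exactly one tablespace, though; spreading one table's rows across several tablespaces means splitting the table itself, which in 8.0 is an application-level exercise.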
[ { "msg_contents": "I'm dealing with big database [3.8 Gb] and records of 3 millions . Some of the\nquery seems to be slow eventhough just a few users in the night. I would like\nto know which parameter list below is most effective in rising the speed of\nthese queries?\n\nShmmax = 32384*8192 =265289728\nShare buffer = 32384\nsort_mem = 34025 <===== I guess increase this one is most effective but too\nhigh cause reading the swap , is that right?\neffective cache = 153204\n\nMy server has 4 Gb. ram and ~ 140 clients in rush hours.\n\nAmrit\nThailand\n", "msg_date": "Thu, 20 Jan 2005 22:45:05 +0700", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Which PARAMETER is most important for load query??" }, { "msg_contents": "\n\[email protected] wrote:\n> I'm dealing with big database [3.8 Gb] and records of 3 millions . Some of the\n> query seems to be slow eventhough just a few users in the night. I would like\n> to know which parameter list below is most effective in rising the speed of\n> these queries?\n> \n> Shmmax = 32384*8192 =265289728\n> Share buffer = 32384\n\nThat's the one you want to increase...\n\n> sort_mem = 34025 <===== I guess increase this one is most effective but too\n\nYou should reduce this. This is memory PER SORT. You could have 10 \nsorts in one query and that query being run 10 times at once, using 100x \nthat sort_mem in total - causing lots of swapping. So something like \n8192 would probably be better, even lower at 4096 perhaps.\n\n> effective cache = 153204\n\nThat's probably about right.\n\nChris\n", "msg_date": "Thu, 20 Jan 2005 16:00:16 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which PARAMETER is most important for load query??" } ]
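A worked version of the advice above, using the 7.3 parameter names the poster already has. These are starting points to measure against rather than magic numbers.

-- postgresql.conf (shared_buffers and effective_cache_size in 8KB pages, sort_mem in KB):
--   shared_buffers       = 32768    # about 256MB of the 4GB of RAM
--   sort_mem             = 4096     # 4MB per sort per backend; with ~140 clients
--                                   # this is what keeps the box out of swap
--   effective_cache_size = 350000   # about 2.7GB, roughly what the OS cache holds

-- sort_mem can still be raised for a single known-heavy session, for example a
-- nightly report, without changing the global setting:
SET sort_mem = 65536;
-- ... run the big sorting query here ...
RESET sort_mem;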
[ { "msg_contents": "> I am also very interesting in this very question.. Is there any way to\n> declare a persistant cursor that remains open between pg sessions?\n> This would be better than a temp table because you would not have to\n> do the initial select and insert into a fresh table and incur those IO\n> costs, which are often very heavy, and the reason why one would want\n> to use a cursor.\n\nYes, it's called a 'view' :-)\n\nEverything you can do with cursors you can do with a view, including\nselecting records in blocks in a reasonably efficient way. As long as\nyour # records fetched is not real small (> 10) and your query is not\nsuper complex, you can slide your view just like a cursor with zero real\nimpact on performance.\n\nIf the query in question does not scale in time complexity with the\namount of data returned (there is a fix processing step which can't be\navoided), then it's materialized view time, such that they can be done\nin PostgreSQL.\n\nNow, cursors can be passed around in pl/pgsql functions which makes them\nvery useful in that context. However, for normal data processing via\nqueries, they have some limitations that makes them hard to use in a\ngeneral sense.\n\nMerlin\n", "msg_date": "Thu, 20 Jan 2005 12:00:06 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "How do you create a temporary view that has only a small subset of the\ndata from the DB init? (Links to docs are fine - I can read ;). My\nquery isn't all that complex, and my number of records might be from\n10 to 2k depending on how I implement it.\n\nAlex Turner\nNetEconomist\n\n\nOn Thu, 20 Jan 2005 12:00:06 -0500, Merlin Moncure\n<[email protected]> wrote:\n> > I am also very interesting in this very question.. Is there any way to\n> > declare a persistant cursor that remains open between pg sessions?\n> > This would be better than a temp table because you would not have to\n> > do the initial select and insert into a fresh table and incur those IO\n> > costs, which are often very heavy, and the reason why one would want\n> > to use a cursor.\n> \n> Yes, it's called a 'view' :-)\n> \n> Everything you can do with cursors you can do with a view, including\n> selecting records in blocks in a reasonably efficient way. As long as\n> your # records fetched is not real small (> 10) and your query is not\n> super complex, you can slide your view just like a cursor with zero real\n> impact on performance.\n> \n> If the query in question does not scale in time complexity with the\n> amount of data returned (there is a fix processing step which can't be\n> avoided), then it's materialized view time, such that they can be done\n> in PostgreSQL.\n> \n> Now, cursors can be passed around in pl/pgsql functions which makes them\n> very useful in that context. However, for normal data processing via\n> queries, they have some limitations that makes them hard to use in a\n> general sense.\n> \n> Merlin\n> \n>\n", "msg_date": "Thu, 20 Jan 2005 16:35:12 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
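One way to read the suggestion above, with invented table names: the view holds the whole query, and the small subset comes from sliding a WHERE plus LIMIT window along an indexed key instead of using OFFSET.

CREATE VIEW order_listing AS
    SELECT o.id, o.placed, c.name
    FROM   orders o
    JOIN   customers c ON c.id = o.customer_id;

-- first block
SELECT * FROM order_listing ORDER BY id LIMIT 50;

-- later blocks: remember the last id that was shown and continue from it, so
-- the planner can walk the index on orders.id instead of recomputing and
-- discarding everything before the offset
SELECT * FROM order_listing WHERE id > 1250 ORDER BY id LIMIT 50;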
[ { "msg_contents": "Isn't this a prime example of when to use a servlet or something similar\nin function? It will create the cursor, maintain it, and fetch against\nit for a particular page.\n\nGreg\n\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]]\nSent: Thursday, January 20, 2005 10:21 AM\nTo: Andrei Bintintan\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n\n\nAndrei Bintintan wrote:\n>> If you're using this to provide \"pages\" of results, could you use a \n>> cursor?\n> \n> What do you mean by that? Cursor?\n> \n> Yes I'm using this to provide \"pages\", but If I jump to the last pages \n> it goes very slow.\n\nDECLARE mycursor CURSOR FOR SELECT * FROM ...\nFETCH FORWARD 10 IN mycursor;\nCLOSE mycursor;\n\nRepeated FETCHes would let you step through your results. That won't \nwork if you have a web-app making repeated connections.\n\nIf you've got a web-application then you'll probably want to insert the \nresults into a cache table for later use.\n\n--\n Richard Huxton\n Archonet Ltd\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n", "msg_date": "Thu, 20 Jan 2005 13:04:02 -0500", "msg_from": "\"Spiegelberg, Greg\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
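One detail worth adding to the DECLARE/FETCH sketch above: a cursor normally lives only as long as its transaction, so whatever holds it (a servlet keeping the connection open, for instance) must keep that transaction open, or declare the cursor WITH HOLD (7.4 and later). Roughly, with an invented table:

BEGIN;
DECLARE mycursor CURSOR FOR
    SELECT * FROM search_results ORDER BY rank DESC;

FETCH FORWARD 10 FROM mycursor;   -- page 1
FETCH FORWARD 10 FROM mycursor;   -- page 2
MOVE FORWARD 80 IN mycursor;      -- jump ahead without shipping rows to the client
FETCH FORWARD 10 FROM mycursor;   -- page 11

CLOSE mycursor;
COMMIT;

-- Alternative that survives COMMIT while the connection stays open, at the cost
-- of the result set being materialized when the transaction commits:
-- DECLARE mycursor CURSOR WITH HOLD FOR SELECT * FROM search_results ORDER BY rank DESC;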
[ { "msg_contents": "> this will only work unchanged if the index is unique. imagine , for\n> example if you have more than 50 rows with the same value of col.\n> \n> one way to fix this is to use ORDER BY col,oid\n\nnope! oid is\n1. deprecated\n2. not guaranteed to be unique even inside a (large) table.\n\nUse a sequence instead. \n\ncreate view a_b as\n\tselect nextval('some_sequence') as k, a.*, b.* from a, b [...]\n\t\n\nselect * from a_b where k > k1 order by k limit 1000\n*or*\nexecute fetch_a_b(k1, 1000) <-- pass limit into prepared statement for extra flexibility.\n\n", "msg_date": "Thu, 20 Jan 2005 14:29:34 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
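The underlying idea in both of the last two posts, giving the paging key a unique tiebreaker, also works without oids or an extra sequence if the table already has a primary key: order by the non-unique column plus the key, and resume strictly after the last pair shown. Names here are illustrative.

CREATE TABLE t (
    id  serial PRIMARY KEY,
    col integer NOT NULL
);
CREATE INDEX t_col_id_idx ON t (col, id);

-- first page
SELECT * FROM t ORDER BY col, id LIMIT 50;

-- next page, resuming after the last row already displayed,
-- say (col, id) = (17, 1234):
SELECT * FROM t
WHERE  col > 17 OR (col = 17 AND id > 1234)
ORDER  BY col, id
LIMIT  50;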
[ { "msg_contents": "> Dealing about the hardware, for the moment we have only a bi-pentium Xeon\n> 2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results ...\n> so\n> we are thinking about a new solution with maybe several servers (server\n> design may vary from one to other) ... to get a kind of cluster to get\n> better\n> performance ...\n> \n> Am I clear ?\n\nyes. Clustering is not the answer to your problem. You need to build a bigger, faster box with lots of storage.\n\nClustering is \nA: a headache\nB: will cost you more, not less\nC: not designed for what you are trying to do.\n\nGoing the x86 route, for about 20k$ you can get quad Opteron with 1-2 terabytes of storage (SATA), depending on how you configure your raid. This is the best bang for the buck you are going to get, period. Replicate for redundancy, not performance.\n\nIf you are doing fair amount of writes, you will not be able to make a faster system than this for similar amount of cash. You can drop the price a bit by pushing optional upgrades out to the future...\n\nIf this is not good enough for you, it's time to start thinking about a mid range server.\n\nMerlin\n", "msg_date": "Thu, 20 Jan 2005 15:21:18 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Merlin Moncure wrote:\n> ...You need to build a bigger, faster box with lots of storage...\n> Clustering ... \n> B: will cost you more, not less\n\n\nIs this still true when you get to 5-way or 17-way systems?\n\nMy (somewhat outdated) impression is that up to about 4-way systems\nthey're price competitive; but beyond that, I thought multiple cheap\nservers scales much more afordably than large servers. Certainly\nat the point of a 129-CPU system I bet you're better off with a\nnetwork of cheap servers.\n\n > A: a headache\n\nAgreed if you mean clustering as-in making it look like one single \ndatabase to the end user. However in my experience a few years ago, if \n you can partition the data in a way managed by the application, it'll \nnot only be less of a headache, but probably provide a more flexable \nsolution. Currently I'm working on a pretty big GIS database, that \nwe're looking to partition our data in a manner similar to the microsoft \nwhitepaper on scaling terraserver that can be found here:\nhttp://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2002-53\n\nI think this paper is a very nice analysis of many aspects of \nlarger-server&SAN vs. application-partitioned-clusters, including \nlooking at cost, reliability, managability, etc. 
After reading that \npaper, we started very seriously looking into application-level \npartitioning.\n", "msg_date": "Thu, 20 Jan 2005 13:46:06 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Ron Mayer wrote:\n\n> http://research.microsoft.com/research/pubs/view.aspx?msr_tr_id=MSR-TR-2002-53 \n\nWrong link...\n\nhttp://research.microsoft.com/research/pubs/view.aspx?type=Technical%20Report&id=812\n\nThis is the one that discusses scalability, price, performance, \nfailover, power consumption, hardware components, etc.\n\nBottom line was that the large server with SAN had $1877K hardware costs \nwhile the application-partitioned cluster had $110K hardware costs -- \nbut it's apples-to-oranges since they were deployed in different years.\n\nStill a big advantage for the small systems.\n", "msg_date": "Thu, 20 Jan 2005 13:49:53 -0800", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Quoth Ron Mayer <[email protected]>:\n> Merlin Moncure wrote:\n>> ...You need to build a bigger, faster box with lots of storage...\n>> Clustering ... B: will cost you more, not less\n>\n>\n> Is this still true when you get to 5-way or 17-way systems?\n>\n> My (somewhat outdated) impression is that up to about 4-way systems\n> they're price competitive; but beyond that, I thought multiple cheap\n> servers scales much more afordably than large servers. Certainly\n> at the point of a 129-CPU system I bet you're better off with a\n> network of cheap servers.\n\nNot necessarily.\n\nIf you have 129 boxes that you're trying to keep synced, it is likely\nthat the cost of syncing them will be greater than the other write\nload.\n\nIf the problem being addressed is that a 4-way box won't handle the\ntransaction load, it is unlikely that building a cluster of _smaller_\nmachines will help terribly much.\n\nThe reason to \"cluster\" in the context of a transactional system is\nthat you need improved _reliability_. \n\nSince communications between servers is _thousands_ of times slower\nthan communicating with local memory, you have to be willing to live\nwith an ENORMOUS degradation of performance when hosts are\nsynchronized.\n\nAnd if \"real estate\" has a cost, where you have to pay for rack space,\nhaving _fewer_ machines is preferable to having more.\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://www.ntlug.org/~cbbrowne/postgresql.html\nIf con is the opposite of pro, is Congress the opposite of progress?\n", "msg_date": "Sun, 23 Jan 2005 01:08:26 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
[ { "msg_contents": "Wondering ...\n\n>From a performance standpoint, is it a bad idea to use inheritance\nsimply as a tool for easy database building. That is for creating\ntables that share the same columns but otherwise are unrelated.\n\nFor example, let's say I have the following set of columns that are\ncommon to many of my tables.\n\nobjectid int,\ncreatedby varchar(32),\ncreateddate timestamp\n\n... and let's say I create a table with these columns just so that I can\nthen create other tables that inherit this table so that I have these\ncolumns in it without having to respecify them over and over again\nseparately for each table that contains them.\n\n>From my understanding, all the data for these columns in all the child\ntables will be stored in this one parent table and that, furthermore,\nthere is a \"hidden\" column in the parent table called tableoid that\nallows postgres to determine which row is stored in which child table.\n\nGiven that, is there a performance hit for queries on the child tables\nbecause postgres has to effectively put a condition on every query based\non the tableoid of the given child table?\n\nIn other words, if say child table A has 10 million rows in it and child\nB has 2 rows in it. Will a query on child table B be slowed down by the\nfact that it inherits from the same table as A. I'm sure the answer is\nabsolutely yes, and so I guess I'm just looking for corroboration.\n\nMaybe I'll be surprised!\n\nThanks a bunch,\n\nKen\n\n\n\n", "msg_date": "Fri, 21 Jan 2005 00:19:05 -0800", "msg_from": "ken <[email protected]>", "msg_from_op": true, "msg_subject": "inheritance performance" }, { "msg_contents": "\nken <[email protected]> writes:\n\n> >From my understanding, all the data for these columns in all the child\n> tables will be stored in this one parent table \n\nNo, all the data is stored in the child table.\n\n> and that, furthermore, there is a \"hidden\" column in the parent table called\n> tableoid that allows postgres to determine which row is stored in which\n> child table.\n\nThat's true.\n\n> Given that, is there a performance hit for queries on the child tables\n> because postgres has to effectively put a condition on every query based on\n> the tableoid of the given child table?\n\nThere's a performance hit for the extra space required to store the tableoid.\nThis means slightly fewer records will fit on a page and i/o requirements will\nbe slightly higher. This will probably only be noticeable on narrow tables,\nand even then probably only on large sequential scans.\n\nThere's also a slight performance hit because there's an optimization that the\nplanner does normally for simple queries that isn't currently done for either\nUNION ALL or inherited tables. I think it's planned to fix that soon.\n\n> In other words, if say child table A has 10 million rows in it and child\n> B has 2 rows in it. Will a query on child table B be slowed down by the\n> fact that it inherits from the same table as A. I'm sure the answer is\n> absolutely yes, and so I guess I'm just looking for corroboration.\n\nNo, it isn't slowed down by the records in A. 
It's slightly slower because it\nis an inherited table, but that impact is the same regardless of what other\ntables inherit from the same parent and how many records are in them.\n\n-- \ngreg\n\n", "msg_date": "21 Jan 2005 11:14:12 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance performance" }, { "msg_contents": "On Fri, 2005-01-21 at 08:14, Greg Stark wrote:\n> ken <[email protected]> writes:\n> \n> > >From my understanding, all the data for these columns in all the child\n> > tables will be stored in this one parent table \n> \n> No, all the data is stored in the child table.\n\nSo if you perform a \"select * from parent\" then does postgres internally\ncreate a union between all the child tables and return you the results\nof that?\n\nken\n\n\n", "msg_date": "Fri, 21 Jan 2005 09:58:39 -0800", "msg_from": "ken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: inheritance performance" }, { "msg_contents": "> So if you perform a \"select * from parent\" then does postgres internally\n> create a union between all the child tables and return you the results\n> of that?\n\nBasically, yes. Kind of.\n\nChris\n", "msg_date": "Fri, 21 Jan 2005 18:15:16 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance performance" }, { "msg_contents": "ken <[email protected]> writes:\n\n> On Fri, 2005-01-21 at 08:14, Greg Stark wrote:\n> > ken <[email protected]> writes:\n> > \n> > > >From my understanding, all the data for these columns in all the child\n> > > tables will be stored in this one parent table \n> > \n> > No, all the data is stored in the child table.\n> \n> So if you perform a \"select * from parent\" then does postgres internally\n> create a union between all the child tables and return you the results\n> of that?\n\nEssentially, yes.\n\n-- \ngreg\n\n", "msg_date": "21 Jan 2005 13:16:09 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance performance" }, { "msg_contents": "\n\nOn Fri, 21 Jan 2005, Greg Stark wrote:\n\n> There's also a slight performance hit because there's an optimization that the\n> planner does normally for simple queries that isn't currently done for either\n> UNION ALL or inherited tables. I think it's planned to fix that soon.\n\nCan you explain me in more details what kind of optimization is missing in\nthat case?\n", "msg_date": "Sat, 22 Jan 2005 03:09:28 +0200 (EET)", "msg_from": "Ioannis Theoharis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance performance" }, { "msg_contents": "Ioannis Theoharis <[email protected]> writes:\n\n> Can you explain me in more details what kind of optimization is missing in\n> that case?\n\nUh, no I can't really. It was mentioned on the mailing list with regards to\nUNION ALL specifically. I think it applied to inherited tables as well but I\nwouldn't know for sure. You could search the mailing list archives for recent\ndiscussions of partitioned tables.\n\nIn any acse it was a purely technical detail. Some step in the processing of\nthe data that could be skipped if there weren't any actual changes to the data\nbeing done or something like that. 
It made a small but noticeable difference\nin the runtime but nothing that made the technique infeasible.\n\n-- \ngreg\n\n", "msg_date": "21 Jan 2005 23:22:13 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance performance" }, { "msg_contents": "Greg Stark <[email protected]> writes:\n> ken <[email protected]> writes:\n>> From my understanding, all the data for these columns in all the child\n>> tables will be stored in this one parent table \n\n> No, all the data is stored in the child table.\n\nCorrect ...\n\n>> and that, furthermore, there is a \"hidden\" column in the parent table called\n>> tableoid that allows postgres to determine which row is stored in which\n>> child table.\n\n> That's true.\n> There's a performance hit for the extra space required to store the tableoid.\n\nBzzzt ...\n\ntableoid isn't actually stored anywhere on disk. It's a pseudo-column\nthat is generated during row fetch. (It works for all tables, not only\ninheritance children.)\n\n>> Given that, is there a performance hit for queries on the child tables\n>> because postgres has to effectively put a condition on every query based on\n>> the tableoid of the given child table?\n\nAFAIR, a query directed specifically to a child table is *completely*\nunaware of the fact that that table is a child. Only queries directed\nto a parent table, which have to implicitly UNION in the children, pay\nany price for inheritance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Jan 2005 15:22:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance performance " } ]
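A small, runnable version of what this thread describes, reusing Ken's column names and adding invented child tables: a query against a child touches only that child, a query against the parent implicitly unions in every child, and tableoid reports where each row physically lives.

CREATE TABLE objects (
    objectid    integer,
    createdby   varchar(32),
    createddate timestamp DEFAULT now()
);

CREATE TABLE invoices (amount numeric) INHERITS (objects);
CREATE TABLE payments (method text)    INHERITS (objects);

INSERT INTO invoices (objectid, createdby, amount) VALUES (1, 'ken', 10.00);
INSERT INTO payments (objectid, createdby, method) VALUES (2, 'ken', 'cash');

-- scans only payments, no matter how large invoices grows:
SELECT * FROM payments WHERE objectid = 2;

-- scans the parent plus all children:
SELECT tableoid::regclass AS source, objectid, createdby FROM objects;

-- parent's own rows only, children excluded:
SELECT * FROM ONLY objects;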
[ { "msg_contents": "> Now I read all the posts and I have some answers.\n> \n> Yes, I have a web aplication.\n> I HAVE to know exactly how many pages I have and I have to allow the\nuser\n> to\n> jump to a specific page(this is where I used limit and offset). We\nhave\n> this\n> feature and I cannot take it out.\n\nIf your working set is small, say a couple hundred records at the most\n(web form or such), limit/offset may be ok. However you are already\npaying double because you are extracting the # of records matching your\nwhere clause, yes? Also, this # can change while the user is browsing,\nheh.\n\nIOW, your application code is writing expensive checks that the database\nhas to cash.\n\n> >> > SELECT * FROM tab WHERE col > ? ORDER BY col LIMIT 50\n> Now this solution looks very fast, but I cannot implement it, because\nI\n> cannot jump from page 1 to page xxxx only to page 2. Because I know\nwith\n> this type where did the page 1 ended. And we have some really\ncomplicated\n> where's and about 10 tables are involved in the sql query.\n> About the CURSOR I have to read more about them because this is my\nfirst\n> time when I hear about.\n> I don't know if temporary tables are a solution, really I don't think\nso,\n> there are a lot of users that are working in the same time at the same\n> page.\n\nCursors held by a connection. If your web app keeps persistent\nconnection, you can use them. In this case, pass the where clause to a\nplpgsql function which returns a composite object containing a refcursor\nobject and the number of rows (read the docs!). If/When pg gets shared\ncursors, this may be the way to go...but in this case you may have to\nworry about closing them.\n\nWithout a connection, you need some type of persistence on the database.\nThis is complex but it can be done...but it will not be faster than\nlimit offset for browsing relatively small sets.\n\nMerlin\n", "msg_date": "Fri, 21 Jan 2005 08:33:24 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\n> > Now I read all the posts and I have some answers.\n> > \n> > Yes, I have a web aplication. I HAVE to know exactly how many pages I have\n> > and I have to allow the user to jump to a specific page(this is where I\n> > used limit and offset). We have this feature and I cannot take it out.\n\nI'm afraid you have a problem then. The only way postgres can know exactly how\nmany pages and allow users to jump to a specific point for an arbitrary query\nis by doing what OFFSET and LIMIT does. \n\nThere are ways to optimize this but they'll be lots of work. And they'll only\namount to moving around when the work is done. The work of gathering all the\nrecords from the query will still have to be done sometime.\n\nIf the queries are relatively static you could preprocess the data so you have\nall the results in a table with a sequential id. Then you can get the maximum\nand jump around in the table using an index all you want.\n\nOtherwise you could consider performing the queries on demand and storing them\nin a temporary table. Then fetch the actual records for the page from the\ntemporary table again using an index on a sequential id to jump around. 
This\nmight make the actual performing of the initial query much slower though since\nyou have to wait for the entire query to be performed and the records stored.\nYou'll also have to deal with vacuuming this table aggressively.\n\n\n-- \ngreg\n\n", "msg_date": "21 Jan 2005 11:22:56 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\nSupposing your searches display results which are rows coming from one \nspecific table, you could create a cache table :\n\nsearch_id\tserial primary key\nindex_n\tposition of this result in the global result set\nresult_id\tid of the resulting row.\n\nThen, making a search with 50k results would INSERT INTO cache ... SELECT \n FROM search query, with a way to set the index_n column, which can be a \ntemporary sequence...\n\nThen to display your pages, SELECT from your table with index_n BETWEEN so \nand so, and join to the data table.\n\nIf you're worried that it might take up too much space : store an integer \narray of result_id instead of just a result_id ; this way you insert fewer \nrows and save on disk space. Generate it with a custom aggregate... then \njust grab a row from this table, it contains all the id's of the rows to \ndisplay.\n\n\n", "msg_date": "Wed, 26 Jan 2005 13:58:18 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "The problem with this approach is TTFB (Time to first Byte). The\ninitial query is very slow, but additional requests are fast. In most\nsituations we do not want the user to have to wait a disproportionate\namount of time for the initial query. If this is the first time using\nthe system this will be the impression that will stick with them. I\nguess we could experiment and see how much extra time creating a cache\ntable will take...\n\nAlex Turner\nNetEconomist\n\n\nOn Wed, 26 Jan 2005 13:58:18 +0100, PFC <[email protected]> wrote:\n> \n> Supposing your searches display results which are rows coming from one\n> specific table, you could create a cache table :\n> \n> search_id serial primary key\n> index_n position of this result in the global result set\n> result_id id of the resulting row.\n> \n> Then, making a search with 50k results would INSERT INTO cache ... SELECT\n> FROM search query, with a way to set the index_n column, which can be a\n> temporary sequence...\n> \n> Then to display your pages, SELECT from your table with index_n BETWEEN so\n> and so, and join to the data table.\n> \n> If you're worried that it might take up too much space : store an integer\n> array of result_id instead of just a result_id ; this way you insert fewer\n> rows and save on disk space. Generate it with a custom aggregate... then\n> just grab a row from this table, it contains all the id's of the rows to\n> display.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n>\n", "msg_date": "Wed, 26 Jan 2005 13:35:27 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\nAlex Turner <[email protected]> writes:\n\n> The problem with this approach is TTFB (Time to first Byte). The\n> initial query is very slow, but additional requests are fast. 
In most\n> situations we do not want the user to have to wait a disproportionate\n> amount of time for the initial query. If this is the first time using\n> the system this will be the impression that will stick with them. I\n> guess we could experiment and see how much extra time creating a cache\n> table will take...\n\nYou could cheat and do queries with an offset of 0 directly but also start up\na background job to fetch the complete results and cache them. queries with a\nnon-zero offset would have to wait until the complete cache is built. You have\nto be careful about people arriving from bookmarks to non-zero offsets and\npeople hitting reload before the cache is finished being built.\n\nAs someone else suggested you could look into other systems for storing the\ncache. If you don't need to join against other database tables and you don't\nneed the reliability of a database then there are faster solutions like\nmemcached for example. (The problem of joining against database tables is even\nsolvable, look up pgmemcached. No idea how it performs though.)\n\nBut I think you're running into a fundamental tension here. The feature you're\nlooking for: being able to jump around in an arbitrary non-indexed query\nresult set which can be arbitrarily large, requires a lot of work. All you can\ndo is shift around *when* that work is done. There's not going to be any way\nto avoid doing the work entirely.\n\n-- \ngreg\n\n", "msg_date": "26 Jan 2005 14:48:27 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "> The problem with this approach is TTFB (Time to first Byte). The\n> initial query is very slow, but additional requests are fast. In most\n> situations we do not want the user to have to wait a disproportionate\n> amount of time for the initial query. If this is the first time using\n> the system this will be the impression that will stick with them. I\n> guess we could experiment and see how much extra time creating a cache\n> table will take...\n\n\tDo it on the second page then ;)\n\n\tSeriously :\n\t- If you want to display the result count and page count, you'll need to \ndo the whole query anyway, so you might as well save the results.\n\t- inserting the result id's in a temp table one by one will be slow, but \nyou can do this :\n\nselect array_accum(id) from temp group by id/20 limit 3;\n array_accum\n---------------------------------------------------------------\n {1,2,4,8,16,17,9,18,19,5,10,11,3,6,12,13,7,14,15}\n {32,33,34,35,36,37,38,39,20,21,22,23,24,25,26,27,28,29,30,31}\n {40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59}\n\n\t- a really big search of 131072 results :\ncreate table cache (id serial primary key, value integer[]);\nexplain analyze insert into cache (value) select array_accum(id) from temp \ngroup by id/100;\n Subquery Scan \"*SELECT*\" (cost=14382.02..17986.50 rows=131072 width=32) \n(actual time=961.746..1446.630 rows=1311 loops=1)\n -> GroupAggregate (cost=14382.02..16020.42 rows=131072 width=4) \n(actual time=961.607..1423.803 rows=1311 loops=1)\n -> Sort (cost=14382.02..14709.70 rows=131072 width=4) (actual \ntime=961.181..1077.662 rows=131072 loops=1)\n Sort Key: (id / 100)\n -> Seq Scan on \"temp\" (cost=0.00..2216.40 rows=131072 \nwidth=4) (actual time=0.032..291.652 rows=131072 loops=1)\n Total runtime: 1493.304 ms\n\n\tNote that the \"SELECT...\" part takes 1400 ms, and the INSERT part takes \nthe rest, which is really small. 
It's the sort which takes most of the \ntime, but you'll be doing it anyway to get your results in order, so it \ncomes free to you. This will generate 1000 pages with 100 results on each. \nIf your searches yield say 1000 results it'll be perfectly fine and can \ntarget times in the sub-100 ms for caching the results (not counting the \ntotal query time of course !)\n\n\tUsing arrays is the key here, because inserting all the results as \nindividual rows in the table is gonna be a whole lot slower !\n\n\n\n\n", "msg_date": "Wed, 26 Jan 2005 22:24:34 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Thats a really good idea, just store a list of the sorted ids in the\ntemp table - small amount of data for insert... I like it!\n\nAlex Turner\nNetEconomist\n\n\nOn Wed, 26 Jan 2005 22:24:34 +0100, PFC <[email protected]> wrote:\n> > The problem with this approach is TTFB (Time to first Byte). The\n> > initial query is very slow, but additional requests are fast. In most\n> > situations we do not want the user to have to wait a disproportionate\n> > amount of time for the initial query. If this is the first time using\n> > the system this will be the impression that will stick with them. I\n> > guess we could experiment and see how much extra time creating a cache\n> > table will take...\n> \n> Do it on the second page then ;)\n> \n> Seriously :\n> - If you want to display the result count and page count, you'll need to\n> do the whole query anyway, so you might as well save the results.\n> - inserting the result id's in a temp table one by one will be slow, but\n> you can do this :\n> \n> select array_accum(id) from temp group by id/20 limit 3;\n> array_accum\n> ---------------------------------------------------------------\n> {1,2,4,8,16,17,9,18,19,5,10,11,3,6,12,13,7,14,15}\n> {32,33,34,35,36,37,38,39,20,21,22,23,24,25,26,27,28,29,30,31}\n> {40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59}\n> \n> - a really big search of 131072 results :\n> create table cache (id serial primary key, value integer[]);\n> explain analyze insert into cache (value) select array_accum(id) from temp\n> group by id/100;\n> Subquery Scan \"*SELECT*\" (cost=14382.02..17986.50 rows=131072 width=32)\n> (actual time=961.746..1446.630 rows=1311 loops=1)\n> -> GroupAggregate (cost=14382.02..16020.42 rows=131072 width=4)\n> (actual time=961.607..1423.803 rows=1311 loops=1)\n> -> Sort (cost=14382.02..14709.70 rows=131072 width=4) (actual\n> time=961.181..1077.662 rows=131072 loops=1)\n> Sort Key: (id / 100)\n> -> Seq Scan on \"temp\" (cost=0.00..2216.40 rows=131072\n> width=4) (actual time=0.032..291.652 rows=131072 loops=1)\n> Total runtime: 1493.304 ms\n> \n> Note that the \"SELECT...\" part takes 1400 ms, and the INSERT part takes\n> the rest, which is really small. It's the sort which takes most of the\n> time, but you'll be doing it anyway to get your results in order, so it\n> comes free to you. 
This will generate 1000 pages with 100 results on each.\n> If your searches yield say 1000 results it'll be perfectly fine and can\n> target times in the sub-100 ms for caching the results (not counting the\n> total query time of course !)\n> \n> Using arrays is the key here, because inserting all the results as\n> individual rows in the table is gonna be a whole lot slower !\n> \n>\n", "msg_date": "Wed, 26 Jan 2005 23:42:21 -0500", "msg_from": "Alex Turner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\n> Thats a really good idea, just store a list of the sorted ids in the\n> temp table - small amount of data for insert... I like it!\n>\n> Alex Turner\n> NetEconomist\n\n\tThe best part is that you can skip the LIMIT/OFFSET entirely if you put \npage numbers in your cache table while inserting into it, via a temporary \nsequence or something. Retrieving the results will then be very fast, but \nbeware that SELECT * FROM table WHERE id =ANY( array ) won't use an index, \nso you'll have to trick the thing by generating a query with IN, or \njoining against a SRF returning the elements of the array one by one, \nwhich might be better.\n", "msg_date": "Thu, 27 Jan 2005 10:54:25 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "On Thu, 27 Jan 2005, PFC wrote:\n\n>\n>> Thats a really good idea, just store a list of the sorted ids in the\n>> temp table - small amount of data for insert... I like it!\n>> \n>> Alex Turner\n>> NetEconomist\n>\n> \tThe best part is that you can skip the LIMIT/OFFSET entirely if you \n> put page numbers in your cache table while inserting into it, via a temporary \n> sequence or something. Retrieving the results will then be very fast, but \n> beware that SELECT * FROM table WHERE id =ANY( array ) won't use an index, so\n\ncontrib/intarray provides index access to such queries.\n\n> you'll have to trick the thing by generating a query with IN, or joining \n> against a SRF returning the elements of the array one by one, which might be \n> better.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 27 Jan 2005 13:33:00 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\n>> \tThe best part is that you can skip the LIMIT/OFFSET entirely if you \n>> put page numbers in your cache table while inserting into it, via a \n>> temporary sequence or something. Retrieving the results will then be \n>> very fast, but beware that SELECT * FROM table WHERE id =ANY( array ) \n>> won't use an index, so\n>\n> contrib/intarray provides index access to such queries.\n\n\tCan you provide an example of such a query ? I've looked at the operators \nfor intarray without finding it.\n\tThanks.\n", "msg_date": "Thu, 27 Jan 2005 11:52:50 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" 
}, { "msg_contents": "On Thu, 27 Jan 2005, PFC wrote:\n\n>\n>>> \tThe best part is that you can skip the LIMIT/OFFSET entirely if you \n>>> put page numbers in your cache table while inserting into it, via a \n>>> temporary sequence or something. Retrieving the results will then be very \n>>> fast, but beware that SELECT * FROM table WHERE id =ANY( array ) won't use \n>>> an index, so\n>> \n>> contrib/intarray provides index access to such queries.\n>\n> \tCan you provide an example of such a query ? I've looked at the \n> operators for intarray without finding it.\n\nfor example, \nhttp://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray\nsee OPERATIONS and EXAMPLE USAGE:\n\nSELECT * FROM table WHERE id && int[]\n\n\n> \tThanks.\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 27 Jan 2005 14:19:35 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "Oleg Bartunov <[email protected]> writes:\n\n> On Thu, 27 Jan 2005, PFC wrote:\n> \n> >\n> > > > beware that SELECT * FROM table WHERE id =ANY( array ) won't use an index,\n\n> > > contrib/intarray provides index access to such queries.\n> >\n> > Can you provide an example of such a query ? I've looked at the operators\n> > for intarray without finding it.\n> \n> for example,\n> http://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray\n> see OPERATIONS and EXAMPLE USAGE:\n> \n> SELECT * FROM table WHERE id && int[]\n\nI don't think that helps him. He wants the join to the *other* table to use an\nindex. It would be nice if the IN plan used an index for =ANY(array) just like\nit does for =ANY(subquery) but I'm not sure the statistics are there. It might\nnot be a bad plan to just assume arrays are never going to be millions of\nelements long though. \n\nThere is a way to achieve this using \"int_array_enum\" from another contrib\nmodule, \"intagg\". My current project uses something similar to this except the\narrays are precomputed. When I went to 7.4 the new array support obsoleted\neverything else I was using from the \"intagg\" and \"array\" contrib moduels\nexcept for this one instance where intagg is still necessary.\n\nIt is a bit awkward but it works:\n\nslo=> EXPLAIN \n SELECT * \n FROM foo \n JOIN (SELECT int_array_enum(foo_ids) AS foo_id \n FROM cache \n WHERE cache_id = 1) AS x\n USING (foo_id) ;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..6.40 rows=1 width=726)\n -> Subquery Scan x (cost=0.00..3.18 rows=1 width=4)\n -> Index Scan using idx_cache on cache (cost=0.00..3.17 rows=1 width=30)\n Index Cond: (cache_id = 1)\n -> Index Scan using foo_pkey on foo (cost=0.00..3.21 rows=1 width=726)\n Index Cond: (foo.foo_id = \"outer\".foo_id)\n(6 rows)\n\n\n(query and plan edited for clarity and for paranoia purposes)\n\n\n-- \ngreg\n\n", "msg_date": "27 Jan 2005 07:57:15 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" 
}, { "msg_contents": "\n\n> for example, \n> http://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray\n> see OPERATIONS and EXAMPLE USAGE:\n>\n\n\tThanks, I already know this documentation and have used intarray before \n(I find it absolutely fabulous in the right application, it has a great \npotential for getting out of tight situations which would involve huge \nunmanageable pivot or attributes tables). Its only drawback is that the \ngist index creation time is slow and sometimes just... takes forever until \nthe disk is full.\n\tHowever, it seems that integer && integer[] does not exist :\n\n> SELECT * FROM table WHERE id && int[]\n\nexplain analyze select * from temp t where id && \n( '{1,2,3,4,5,6,7,8,9,10,11,12}'::integer[] );\nERREUR: L'operateur n'existe pas : integer && integer[]\nASTUCE : Aucun operateur correspond au nom donne et aux types d'arguments. \nVous devez ajouter des conversions explicites de type.\n\n\tI have already used this type of intarray indexes, but you have to create \na special gist index with the gist__int_ops on the column, and the column \nhas to be an array. In my case the column is just a SERIAL PRIMARY KEY, \nand should stay this way, and I don't want to create a functional index in \narray[id] just for this feature ; so I guess I can't use the && operator. \nAm I mistaken ? My index is the standard btree here.\n\t\n\tIt would be nice if the =ANY() could use the index just like IN does ; \nbesides at planning time the length of the array is known which makes it \nbehave quite just like IN().\n\n\tSo I'll use either an EXECUTE'd plpgsql-generated query (IN (....)) , \nwhich I don't like because it's a kludge ; or this other solution which I \nfind more elegant :\n\nCREATE OR REPLACE FUNCTION tools.array_srf( INTEGER[] )\n RETURNS SETOF INTEGER RETURNS NULL ON NULL INPUT \nLANGUAGE plpgsql AS\n$$\nDECLARE\n\t_data\tALIAS FOR $1;\n\t_i\t\tINTEGER;\nBEGIN\n\tFOR _i IN 1..icount(_data) LOOP\n\t\tRETURN NEXT _data[_i];\n\tEND LOOP;\n\tRETURN;\nEND;\n$$;\n\n-----------------------------------------------------------------------------------\nexplain analyze select * from temp t where id \n=ANY( '{1,2,3,4,5,6,7,8,9,10,11,12}' );\n Seq Scan on \"temp\" t (cost=0.00..5165.52 rows=65536 width=8) (actual \ntime=0.030..173.319 rows=12 loops=1)\n Filter: (id = ANY ('{1,2,3,4,5,6,7,8,9,10,11,12}'::integer[]))\n Total runtime: 173.391 ms\n\n-----------------------------------------------------------------------------------\nexplain analyze select * from temp t where id \nIN( 1,2,3,4,5,6,7,8,9,10,11,12 );\n Index Scan using temp_pkey, temp_pkey, temp_pkey, temp_pkey, temp_pkey, \ntemp_pkey, temp_pkey, temp_pkey, temp_pkey, temp_pkey, temp_pkey,\ntemp_pkey on \"temp\" t (cost=0.00..36.49 rows=12 width=8) (actual \ntime=0.046..0.137 rows=12 loops=1)\n Index Cond: ((id = 1) OR (id = 2) OR (id = 3) OR (id = 4) OR (id = 5) \nOR (id = 6) OR (id = 7) OR (id = 8) OR (id = 9) OR (id = 10) OR (id = 11) \nOR (id = 12))\n Total runtime: 0.292 ms\n\n-----------------------------------------------------------------------------------\nexplain analyze select * from temp t where id in (select * from \ntools.array_srf('{1,2,3,4,5,6,7,8,9,10,11,12}'));\n Nested Loop (cost=15.00..620.20 rows=200 width=8) (actual \ntime=0.211..0.368 rows=12 loops=1)\n -> HashAggregate (cost=15.00..15.00 rows=200 width=4) (actual \ntime=0.160..0.173 rows=12 loops=1)\n -> Function Scan on array_srf (cost=0.00..12.50 rows=1000 \nwidth=4) (actual time=0.127..0.139 rows=12 loops=1)\n -> Index Scan 
using temp_pkey on \"temp\" t (cost=0.00..3.01 rows=1 \nwidth=8) (actual time=0.010..0.012 rows=1 loops=12)\n Index Cond: (t.id = \"outer\".array_srf)\n Total runtime: 0.494 ms\n\n-----------------------------------------------------------------------------------\nexplain analyze select * from temp t, (select * from \ntools.array_srf('{1,2,3,4,5,6,7,8,9,10,11,12}')) foo where foo.array_srf = \nt.id;\n\n Merge Join (cost=62.33..2824.80 rows=1000 width=12) (actual \ntime=0.215..0.286 rows=12 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".array_srf)\n -> Index Scan using temp_pkey on \"temp\" t (cost=0.00..2419.79 \nrows=131072 width=8) (actual time=0.032..0.056 rows=13 loops=1)\n -> Sort (cost=62.33..64.83 rows=1000 width=4) (actual \ntime=0.169..0.173 rows=12 loops=1)\n Sort Key: array_srf.array_srf\n -> Function Scan on array_srf (cost=0.00..12.50 rows=1000 \nwidth=4) (actual time=0.127..0.139 rows=12 loops=1)\n Total runtime: 0.391 ms\n\n-----------------------------------------------------------------------------------\nNote that the meaning is different ; the IN removes duplicates in the \narray but the join does not.\n\n\n", "msg_date": "Thu, 27 Jan 2005 14:11:13 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "On Thu, 27 Jan 2005, PFC wrote:\n\n>\n>\n>> for example, \n>> http://www.sai.msu.su/~megera/postgres/gist/code/7.3/README.intarray\n>> see OPERATIONS and EXAMPLE USAGE:\n>> \n>\n> \tThanks, I already know this documentation and have used intarray \n> before (I find it absolutely fabulous in the right application, it has a \n> great potential for getting out of tight situations which would involve huge \n> unmanageable pivot or attributes tables). Its only drawback is that the gist \n> index creation time is slow and sometimes just... takes forever until the \n> disk is full.\n> \tHowever, it seems that integer && integer[] does not exist :\n\nTry intset(id) && int[]. intset is an undocumented function :)\nI'm going to add intset() to README.\n\n>\n>> SELECT * FROM table WHERE id && int[]\n>\n> explain analyze select * from temp t where id && ( \n> '{1,2,3,4,5,6,7,8,9,10,11,12}'::integer[] );\n> ERREUR: L'operateur n'existe pas : integer && integer[]\n> ASTUCE : Aucun operateur correspond au nom donne et aux types d'arguments. \n> Vous devez ajouter des conversions explicites de type.\n>\n> \tI have already used this type of intarray indexes, but you have to \n> create a special gist index with the gist__int_ops on the column, and the \n> column has to be an array. In my case the column is just a SERIAL PRIMARY \n> KEY, and should stay this way, and I don't want to create a functional index \n> in array[id] just for this feature ; so I guess I can't use the && operator. \n> Am I mistaken ? 
My index is the standard btree here.\n> \t\tIt would be nice if the =ANY() could use the index just like \n> IN does ; besides at planning time the length of the array is known which \n> makes it behave quite just like IN().\n>\n> \tSo I'll use either an EXECUTE'd plpgsql-generated query (IN (....)) , \n> which I don't like because it's a kludge ; or this other solution which I \n> find more elegant :\n>\n> CREATE OR REPLACE FUNCTION tools.array_srf( INTEGER[] )\n> RETURNS SETOF INTEGER RETURNS NULL ON NULL INPUT \n> LANGUAGE plpgsql AS\n> $$\n> DECLARE\n> \t_data\tALIAS FOR $1;\n> \t_i\t\tINTEGER;\n> BEGIN\n> \tFOR _i IN 1..icount(_data) LOOP\n> \t\tRETURN NEXT _data[_i];\n> \tEND LOOP;\n> \tRETURN;\n> END;\n> $$;\n>\n> -----------------------------------------------------------------------------------\n> explain analyze select * from temp t where id =ANY( \n> '{1,2,3,4,5,6,7,8,9,10,11,12}' );\n> Seq Scan on \"temp\" t (cost=0.00..5165.52 rows=65536 width=8) (actual \n> time=0.030..173.319 rows=12 loops=1)\n> Filter: (id = ANY ('{1,2,3,4,5,6,7,8,9,10,11,12}'::integer[]))\n> Total runtime: 173.391 ms\n>\n> -----------------------------------------------------------------------------------\n> explain analyze select * from temp t where id IN( 1,2,3,4,5,6,7,8,9,10,11,12 \n> );\n> Index Scan using temp_pkey, temp_pkey, temp_pkey, temp_pkey, temp_pkey, \n> temp_pkey, temp_pkey, temp_pkey, temp_pkey, temp_pkey, temp_pkey,\n> temp_pkey on \"temp\" t (cost=0.00..36.49 rows=12 width=8) (actual \n> time=0.046..0.137 rows=12 loops=1)\n> Index Cond: ((id = 1) OR (id = 2) OR (id = 3) OR (id = 4) OR (id = 5) OR \n> (id = 6) OR (id = 7) OR (id = 8) OR (id = 9) OR (id = 10) OR (id = 11) OR (id \n> = 12))\n> Total runtime: 0.292 ms\n>\n> -----------------------------------------------------------------------------------\n> explain analyze select * from temp t where id in (select * from \n> tools.array_srf('{1,2,3,4,5,6,7,8,9,10,11,12}'));\n> Nested Loop (cost=15.00..620.20 rows=200 width=8) (actual time=0.211..0.368 \n> rows=12 loops=1)\n> -> HashAggregate (cost=15.00..15.00 rows=200 width=4) (actual \n> time=0.160..0.173 rows=12 loops=1)\n> -> Function Scan on array_srf (cost=0.00..12.50 rows=1000 width=4) \n> (actual time=0.127..0.139 rows=12 loops=1)\n> -> Index Scan using temp_pkey on \"temp\" t (cost=0.00..3.01 rows=1 \n> width=8) (actual time=0.010..0.012 rows=1 loops=12)\n> Index Cond: (t.id = \"outer\".array_srf)\n> Total runtime: 0.494 ms\n>\n> -----------------------------------------------------------------------------------\n> explain analyze select * from temp t, (select * from \n> tools.array_srf('{1,2,3,4,5,6,7,8,9,10,11,12}')) foo where foo.array_srf = \n> t.id;\n>\n> Merge Join (cost=62.33..2824.80 rows=1000 width=12) (actual \n> time=0.215..0.286 rows=12 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".array_srf)\n> -> Index Scan using temp_pkey on \"temp\" t (cost=0.00..2419.79 \n> rows=131072 width=8) (actual time=0.032..0.056 rows=13 loops=1)\n> -> Sort (cost=62.33..64.83 rows=1000 width=4) (actual time=0.169..0.173 \n> rows=12 loops=1)\n> Sort Key: array_srf.array_srf\n> -> Function Scan on array_srf (cost=0.00..12.50 rows=1000 width=4) \n> (actual time=0.127..0.139 rows=12 loops=1)\n> Total runtime: 0.391 ms\n>\n> -----------------------------------------------------------------------------------\n> Note that the meaning is different ; the IN removes duplicates in the array \n> but the join does not.\n>\n>\n\n \tRegards,\n 
\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n", "msg_date": "Thu, 27 Jan 2005 16:44:04 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\n>> \tHowever, it seems that integer && integer[] does not exist :\n>\n> Try intset(id) && int[]. intset is an undocumented function :)\n> I'm going to add intset() to README.\n>\n>>\n>>> SELECT * FROM table WHERE id && int[]\n\n\tMm.\n\tintset(x) seems to be like array[x] ?\n\tActually what I want is the opposite. I have a btree index on an integer \ncolumn ; I wanted to use this index and not create a functional index... \nwhich is why I wanted to use =ANY(). If I had a gist index on an integer \narray column, I would of course use what you suggest, but this is not the \ncase...\n\n\tAnyway I think the SRF function solution works well, I like it.\n\n\tNote that int_agg_final_array() crashes my postgres, see my message in \npsql/general\n\n\tRegards,\n\tPierre\n", "msg_date": "Thu, 27 Jan 2005 17:44:15 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\nPFC <[email protected]> writes:\n\n> \tintset(x) seems to be like array[x] ?\n> \tActually what I want is the opposite. \n\nWhat you want is called UNNEST. It didn't get done in time for 8.0. But if\nwhat you have is an array of integers the int_array_enum() function I quoted\nin the other post is basically that.\n\n> Note that int_agg_final_array() crashes my postgres, see my message in\n> psql/general\n\nYou don't really need the int_array_aggregate function any more. You can write\nyour own aggregate using the new array operators:\n\ntest=> create or replace function array_push (anyarray, anyelement) returns anyarray as 'select $1 || $2' language sql immutable strict;\nCREATE FUNCTION\ntest=> create aggregate array_aggregate (basetype=anyelement, sfunc=array_push, stype=anyarray, initcond = '{}');\nCREATE AGGREGATE\n\nOf course it's about 50x slower than the C implementation though:\n\ntest=> select icount(array_aggregate (foo_id)) from foo;\n icount \n--------\n 15127\n(1 row)\n\nTime: 688.419 ms\n\ntest=> select icount(int_array_aggregate (foo_id)) from foo;\n icount \n--------\n 15127\n(1 row)\n\nTime: 13.680 ms\n\n(And no, that's not a caching artifact; the whole table is cached for both\ntrials)\n\n-- \ngreg\n\n", "msg_date": "27 Jan 2005 13:33:33 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "> What you want is called UNNEST. It didn't get done in time for 8.0. But \n> if\n> what you have is an array of integers the int_array_enum() function I \n> quoted\n> in the other post is basically that.\n\n\tYes, I used it, thanks. That's what I wanted. The query plans are good.\n\n> You don't really need the int_array_aggregate function any more. You can \n> write\n> your own aggregate using the new array operators:\n> Of course it's about 50x slower than the C implementation though:\n\n\tHeh. 
I'll keep using int_array_aggregate ;)\n\n\tHave a nice day.\n", "msg_date": "Thu, 27 Jan 2005 20:11:55 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "PFC wrote:\n> \n> Supposing your searches display results which are rows coming from one \n> specific table, you could create a cache table :\n> \n> search_id\tserial primary key\n> index_n\tposition of this result in the global result set\n> result_id\tid of the resulting row.\n> \n> Then, making a search with 50k results would INSERT INTO cache ... SELECT \n> FROM search query, with a way to set the index_n column, which can be a \n> temporary sequence...\n> \n> Then to display your pages, SELECT from your table with index_n BETWEEN so \n> and so, and join to the data table.\n\nThis is a nice way of doing a fast materialized view. But it looked\nto me like one of the requirements of the original poster is that the\nresult set being displayed has to be \"current\" as of the page display\ntime. If inserts to the original table have been committed between\nthe time the current page was displayed and \"now\", the next page view\nis supposed to show them. That basically means rerunning the query\nthat was used to build the cache table.\n\nBut perhaps the original poster is willing to live with the idea that\nnew rows won't show up in the result set, as long as updates show up\n(because the cache table is just a fancy index) and deletes \"work\"\n(because the join against the data table will show only rows that are\ncommon between both).\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Thu, 27 Jan 2005 19:59:39 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
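Pulling the pieces of this thread together, a minimal sketch of the array-based result cache might look like the following. It assumes an array_accum aggregate (or contrib/intagg's int_array_aggregate) and contrib/intagg's int_array_enum are installed; the search_cache and items tables, their columns, and the WHERE/ORDER BY clauses are illustrative names only, not anyone's actual schema.

-- one row per (search, page), each holding the ids for that page
CREATE TABLE search_cache (
    search_id integer,
    page_no   integer,
    ids       integer[],
    PRIMARY KEY (search_id, page_no)
);

CREATE SEQUENCE rownum_seq;

-- run the expensive search once and store it 100 ids per page;
-- the numbering relies on the sorted subquery feeding rows out in order
-- (the usual pre-window-function idiom), and the order inside each array is
-- only approximate, so store explicit positions if exact in-page order matters
INSERT INTO search_cache (search_id, page_no, ids)
SELECT 1, n / 100, array_accum(item_id)
FROM (SELECT nextval('rownum_seq') - 1 AS n, item_id
      FROM (SELECT item_id FROM items
            WHERE some_condition
            ORDER BY some_column) AS sorted) AS numbered
GROUP BY n / 100;

-- fetching page 5 is then one small cache row plus an index-driven join
SELECT i.*
FROM (SELECT int_array_enum(ids) AS item_id
      FROM search_cache
      WHERE search_id = 1 AND page_no = 5) AS page
JOIN items i USING (item_id);

As discussed above, this trades one expensive initial query (or a background job that builds the cache) for very cheap subsequent pages, at the cost of the cached result set going stale until it is rebuilt or expired.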
[ { "msg_contents": "> Uhmmm no :) There is no such thing as a select trigger. The closest\nyou\n> would get\n> is a function that is called via select which could be detected by\n> making sure\n> you are prepending with a BEGIN or START Transaction. Thus yes pgPool\n> can be made\n> to do this.\n\nTechnically, you can also set up a rule to do things on a select with DO\nALSO. However putting update statements in there would be considered (at\nleast by me) very bad form. Note that this is not a trigger because it\ndoes not operate at the row level [I know you knew that already :-)].\n\nMerlin\n", "msg_date": "Fri, 21 Jan 2005 11:08:41 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
[ { "msg_contents": "How do I profile a user-defined function so that I know which parts of the\nfunction are the ones that are taking the biggest chunk of time?\n\nWhen I run EXPLAIN on the queries within the function none of them show up\nas onerous burdens to the performance. But when they are all operating\ntogether within the function and within the functional logic they become\nreally expensive. Obviously I've made a mistake somewhere but it isn't\nobvious (otherwise it would be fixed already) and I'd prefer having a\nprofile report telling me what is taking so long rather than guessing and\npossibly making things worse.\n\nSo is there any way to get a line-by-line timing profile of a user-defined\nfunction?\n\nThanks!\n\nrjsjr\n", "msg_date": "Fri, 21 Jan 2005 10:57:19 -0600", "msg_from": "Robert Sanford <[email protected]>", "msg_from_op": true, "msg_subject": "Profiling a function..." }, { "msg_contents": "Robert Sanford wrote:\n> How do I profile a user-defined function so that I know which parts of the\n> function are the ones that are taking the biggest chunk of time?\n> \n> When I run EXPLAIN on the queries within the function none of them show up\n> as onerous burdens to the performance. But when they are all operating\n> together within the function and within the functional logic they become\n> really expensive. Obviously I've made a mistake somewhere but it isn't\n> obvious (otherwise it would be fixed already) and I'd prefer having a\n> profile report telling me what is taking so long rather than guessing and\n> possibly making things worse.\n> \n> So is there any way to get a line-by-line timing profile of a user-defined\n> function?\n\nNot really. What you can do is simulate the queries in functions by \nusing PREPARE. You're probably seeing a difference because when PG plans \n the queries for functions/prepared queries it doesn't know the actual \nvalues.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 21 Jan 2005 17:20:55 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Profiling a function..." } ]
[ { "msg_contents": "This is probably a lot easier than you would think. You say that your \nDB will have lots of data, lots of updates and lots of reads.\n\nVery likely the disk bottleneck is mostly index reads and writes, with \nsome critical WAL fsync() calls. In the grand scheme of things, the \nactual data is likely not accessed very often.\n\nThe indexes can be put on a RAM disk tablespace and that's the end of \nindex problems -- just make sure you have enough memory available. Also \nmake sure that the machine can restart correctly after a crash: the \ntablespace is dropped and recreated, along with the indexes. This will \ncause a machine restart to take some time.\n\nAfter that, if the WAL fsync() calls are becoming a problem, put the WAL \nfiles on a fast RAID array, etiher a card or external enclosure, that \nhas a good amount of battery-backed write cache. This way, the WAL \nfsync() calls will flush quickly to the RAM and Pg can move on while the \nRAID controller worries about putting the data to disk. With WAL, low \naccess time is usually more important than total throughput.\n\nThe truth is that you could have this running for not much money.\n\nGood Luck,\nMarty\n\n> Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a écrit :\n> > Could you explain us what do you have in mind for that solution? I mean,\n> > forget the PostgreSQL (or any other database) restrictions and \n> explain us\n> > how this hardware would be. Where the data would be stored?\n> >\n> > I've something in mind for you, but first I need to understand your \n> needs!\n> \n> I just want to make a big database as explained in my first mail ... At the\n> beginning we will have aprox. 150 000 000 records ... each month we will \n> add\n> about 4/8 millions new rows in constant flow during the day ... and in same\n> time web users will access to the database in order to read those data.\n> Stored data are quite close to data stored by google ... (we are not \n> making a\n> google clone ... just a lot of data many small values and some big ones ...\n> that's why I'm comparing with google for data storage).\n> Then we will have a search engine searching into those data ...\n> \n> Dealing about the hardware, for the moment we have only a bi-pentium Xeon\n> 2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results \n> ... so\n> we are thinking about a new solution with maybe several servers (server\n> design may vary from one to other) ... to get a kind of cluster to get \n> better\n> performance ...\n> \n> Am I clear ?\n> \n> Regards,\n\n\n\n", "msg_date": "Fri, 21 Jan 2005 11:18:00 -0700", "msg_from": "Marty Scholes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "IMO the bottle neck is not WAL but table/index bloat. Lots of updates\non large tables will produce lots of dead tuples. Problem is, There'\nis no effective way to reuse these dead tuples since VACUUM on huge\ntables takes longer time. 8.0 adds new vacuum delay\nparamters. Unfortunately this does not help. It just make the\nexecution time of VACUUM longer, that means more and more dead tuples\nare being made while updating.\n\nProbably VACUUM works well for small to medium size tables, but not\nfor huge ones. I'm considering about to implement \"on the spot\nsalvaging dead tuples\".\n--\nTatsuo Ishii\n\n> This is probably a lot easier than you would think. 
You say that your \n> DB will have lots of data, lots of updates and lots of reads.\n> \n> Very likely the disk bottleneck is mostly index reads and writes, with \n> some critical WAL fsync() calls. In the grand scheme of things, the \n> actual data is likely not accessed very often.\n> \n> The indexes can be put on a RAM disk tablespace and that's the end of \n> index problems -- just make sure you have enough memory available. Also \n> make sure that the machine can restart correctly after a crash: the \n> tablespace is dropped and recreated, along with the indexes. This will \n> cause a machine restart to take some time.\n> \n> After that, if the WAL fsync() calls are becoming a problem, put the WAL \n> files on a fast RAID array, etiher a card or external enclosure, that \n> has a good amount of battery-backed write cache. This way, the WAL \n> fsync() calls will flush quickly to the RAM and Pg can move on while the \n> RAID controller worries about putting the data to disk. With WAL, low \n> access time is usually more important than total throughput.\n> \n> The truth is that you could have this running for not much money.\n> \n> Good Luck,\n> Marty\n> \n> > Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a écrit :\n> > > Could you explain us what do you have in mind for that solution? I mean,\n> > > forget the PostgreSQL (or any other database) restrictions and \n> > explain us\n> > > how this hardware would be. Where the data would be stored?\n> > >\n> > > I've something in mind for you, but first I need to understand your \n> > needs!\n> > \n> > I just want to make a big database as explained in my first mail ... At the\n> > beginning we will have aprox. 150 000 000 records ... each month we will \n> > add\n> > about 4/8 millions new rows in constant flow during the day ... and in same\n> > time web users will access to the database in order to read those data.\n> > Stored data are quite close to data stored by google ... (we are not \n> > making a\n> > google clone ... just a lot of data many small values and some big ones ...\n> > that's why I'm comparing with google for data storage).\n> > Then we will have a search engine searching into those data ...\n> > \n> > Dealing about the hardware, for the moment we have only a bi-pentium Xeon\n> > 2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results \n> > ... so\n> > we are thinking about a new solution with maybe several servers (server\n> > design may vary from one to other) ... to get a kind of cluster to get \n> > better\n> > performance ...\n> > \n> > Am I clear ?\n> > \n> > Regards,\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to [email protected])\n> \n", "msg_date": "Sat, 22 Jan 2005 12:13:00 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Sat, 22 Jan 2005 12:13:00 +0900 (JST), Tatsuo Ishii\n<[email protected]> wrote:\n> IMO the bottle neck is not WAL but table/index bloat. Lots of updates\n> on large tables will produce lots of dead tuples. Problem is, There'\n> is no effective way to reuse these dead tuples since VACUUM on huge\n> tables takes longer time. 8.0 adds new vacuum delay\n> paramters. Unfortunately this does not help. 
It just make the\n> execution time of VACUUM longer, that means more and more dead tuples\n> are being made while updating.\n>\n> Probably VACUUM works well for small to medium size tables, but not\n> for huge ones. I'm considering about to implement \"on the spot\n> salvaging dead tuples\".\n\nQuick thought -- would it be to possible to implement a 'partial VACUUM'\nper analogiam to partial indexes?\n\nIt would be then posiible to do:\nVACUUM footable WHERE footime < current_date - 60;\nafter a statement to DELETE all/some rows older than 60 days.\n\nThe VACUUM would check visibility of columns which are mentioned\nin an index (in this case: footable_footime_index ;)).\n\nOf course it is not a great solution, but could be great for doing\nhousecleaning after large update/delete in a known range.\n\n...and should be relatively simple to implement, I guess\n(maybe without 'ANALYZE' part).\n\n Regards,\n Dawid\n", "msg_date": "Sat, 22 Jan 2005 14:18:27 +0100", "msg_from": "Dawid Kuroczko <[email protected]>", "msg_from_op": false, "msg_subject": "PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Dawid Kuroczko <[email protected]> writes:\n\n> Quick thought -- would it be to possible to implement a 'partial VACUUM'\n> per analogiam to partial indexes?\n\nNo.\n\nBut it gave me another idea. Perhaps equally infeasible, but I don't see why.\n\nWhat if there were a map of modified pages. So every time any tuple was marked\ndeleted it could be marked in the map as modified. VACUUM would only have to\nlook at these pages. And if it could mark as free every tuple that was marked\nas deleted then it could unmark the page.\n\nThe only downside I see is that this could be a source of contention on\nmulti-processor machines running lots of concurrent update/deletes.\n\n-- \ngreg\n\n", "msg_date": "22 Jan 2005 12:20:53 -0500", "msg_from": "Greg Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Sat, Jan 22, 2005 at 12:13:00 +0900,\n Tatsuo Ishii <[email protected]> wrote:\n> \n> Probably VACUUM works well for small to medium size tables, but not\n> for huge ones. I'm considering about to implement \"on the spot\n> salvaging dead tuples\".\n\nYou are probably vacuuming too often. You want to wait until a significant\nfraction of a large table is dead tuples before doing a vacuum. If you are\nscanning a large table and only marking a few tuples as deleted, you aren't\ngetting much bang for your buck.\n", "msg_date": "Sat, 22 Jan 2005 12:41:24 -0600", "msg_from": "Bruno Wolff III <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Sat, 2005-01-22 at 12:41 -0600, Bruno Wolff III wrote:\n> On Sat, Jan 22, 2005 at 12:13:00 +0900,\n> Tatsuo Ishii <[email protected]> wrote:\n> > \n> > Probably VACUUM works well for small to medium size tables, but not\n> > for huge ones. I'm considering about to implement \"on the spot\n> > salvaging dead tuples\".\n> \n> You are probably vacuuming too often. You want to wait until a significant\n> fraction of a large table is dead tuples before doing a vacuum. If you are\n> scanning a large table and only marking a few tuples as deleted, you aren't\n> getting much bang for your buck.\n\nThe big problem occurs when you have a small set of hot tuples within a\nlarge table. 
In the time it takes to vacuum a table with 200M tuples\none can update a small subset of that table many many times.\n\nSome special purpose vacuum which can target hot spots would be great,\nbut I've always assumed this would come in the form of table\npartitioning and the ability to vacuum different partitions\nindependently of each-other.\n\n-- \n\n", "msg_date": "Sat, 22 Jan 2005 14:00:40 -0500", "msg_from": "Rod Taylor <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": ">From http://developer.postgresql.org/todo.php:\n\nMaintain a map of recently-expired rows\n\nThis allows vacuum to reclaim free space without requiring a sequential\nscan \n\nOn Sat, Jan 22, 2005 at 12:20:53PM -0500, Greg Stark wrote:\n> Dawid Kuroczko <[email protected]> writes:\n> \n> > Quick thought -- would it be to possible to implement a 'partial VACUUM'\n> > per analogiam to partial indexes?\n> \n> No.\n> \n> But it gave me another idea. Perhaps equally infeasible, but I don't see why.\n> \n> What if there were a map of modified pages. So every time any tuple was marked\n> deleted it could be marked in the map as modified. VACUUM would only have to\n> look at these pages. And if it could mark as free every tuple that was marked\n> as deleted then it could unmark the page.\n> \n> The only downside I see is that this could be a source of contention on\n> multi-processor machines running lots of concurrent update/deletes.\n> \n> -- \n> greg\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Sat, 22 Jan 2005 14:10:49 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> Probably VACUUM works well for small to medium size tables, but not\n> for huge ones. I'm considering about to implement \"on the spot\n> salvaging dead tuples\".\n\nThat's impossible on its face, except for the special case where the\nsame transaction inserts and deletes a tuple. In all other cases, the\ntransaction deleting a tuple cannot know whether it will commit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Jan 2005 16:10:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering " }, { "msg_contents": "A long time ago, in a galaxy far, far away, [email protected] (Greg Stark) wrote:\n> Dawid Kuroczko <[email protected]> writes:\n>\n>> Quick thought -- would it be to possible to implement a 'partial VACUUM'\n>> per analogiam to partial indexes?\n>\n> No.\n>\n> But it gave me another idea. Perhaps equally infeasible, but I don't see why.\n>\n> What if there were a map of modified pages. So every time any tuple\n> was marked deleted it could be marked in the map as modified. VACUUM\n> would only have to look at these pages. 
And if it could mark as free\n> every tuple that was marked as deleted then it could unmark the\n> page.\n>\n> The only downside I see is that this could be a source of contention\n> on multi-processor machines running lots of concurrent\n> update/deletes.\n\nI was thinking the same thing after hearing fairly extensive\n\"pooh-poohing\" of the notion of vacuuming based on all the pages in\nthe shared cache.\n\nThis \"hot list page table\" would probably need to be a hash table. It\nrather parallels the FSM, including the way that it would need to be\nlimited in size.\n-- \nwm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').\nhttp://cbbrowne.com/info/lsf.html\nRules of the Evil Overlord #57. \"Before employing any captured\nartifacts or machinery, I will carefully read the owner's manual.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Sun, 23 Jan 2005 01:16:20 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Cheaper VACUUMing" }, { "msg_contents": "On Sat, 2005-01-22 at 16:10 -0500, Tom Lane wrote:\n> Tatsuo Ishii <[email protected]> writes:\n> > Probably VACUUM works well for small to medium size tables, but not\n> > for huge ones. I'm considering about to implement \"on the spot\n> > salvaging dead tuples\".\n> \n> That's impossible on its face, except for the special case where the\n> same transaction inserts and deletes a tuple. In all other cases, the\n> transaction deleting a tuple cannot know whether it will commit.\n\nPerhaps Tatsuo has an idea...\n\nAs Tom says, if you have only a single row version and then you update\nthat row to create a second version, then we must not remove the first\nversion, since it is effectively the Undo copy.\n\nHowever, if there were already 2+ row versions, then as Tatsuo suggests,\nit might be possible to use on the spot salvaging of dead tuples. It\nmight be worth checking the Xid of the earlier row version(s), to see if\nthey are now expired and could be removed immediately.\n\nHowever, if you had a high number of concurrent updaters, this extra\neffort would not be that useful, since the other row versions might\nstill be transaction-in-progress versions. That would mean implementing\nthis idea would be useful often, but not in the case of repeatedly\nupdated rows.\n\nChanging the idea slightly might be better: if a row update would cause\na block split, then if there is more than one row version then we vacuum\nthe whole block first, then re-attempt the update. That way we wouldn't\ndo the row every time, just when it becomes a problem.\n\nI'm suggesting putting a call to vacuum_page() into heap_update(),\nimmediately before any call to RelationGetBufferForTuple().\n\nWe already know that page splitting is an expensive operation, so doing\nsome work to try to avoid that could frequently pay off. This would be\nisolated to updating. 
\n\nThis wouldn't remove the need for vacuuming, but it would act to prevent\nsevere performance degradation caused by frequent re-updating.\n\nWhat do you think?\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Sun, 23 Jan 2005 20:15:52 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> Changing the idea slightly might be better: if a row update would cause\n> a block split, then if there is more than one row version then we vacuum\n> the whole block first, then re-attempt the update.\n\n\"Block split\"? I think you are confusing tables with indexes.\n\nChasing down prior versions of the same row is not very practical\nanyway, since there is no direct way to find them.\n\nOne possibility is, if you tried to insert a row on a given page but\nthere's not room, to look through the other rows on the same page to see\nif any are deletable (xmax below the GlobalXmin event horizon). This\nstrikes me as a fairly expensive operation though, especially when you\ntake into account the need to get rid of their index entries first.\nMoreover, the check would often be unproductive.\n\nThe real issue with any such scheme is that you are putting maintenance\ncosts into the critical paths of foreground processes that are executing\nuser queries. I think that one of the primary advantages of the\nPostgres storage design is that we keep that work outside the critical\npath and delegate it to maintenance processes that can run in the\nbackground. We shouldn't lightly toss away that advantage.\n\nThere was some discussion in Toronto this week about storing bitmaps\nthat would tell VACUUM whether or not there was any need to visit\nindividual pages of each table. Getting rid of useless scans through\nnot-recently-changed areas of large tables would make for a significant\nreduction in the cost of VACUUM.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jan 2005 15:40:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering " }, { "msg_contents": "For reference, here's the discussion about this that took place on\nhackers: http://lnk.nu/archives.postgresql.org/142.php \n\nOn Sun, Jan 23, 2005 at 01:16:20AM -0500, Christopher Browne wrote:\n> A long time ago, in a galaxy far, far away, [email protected] (Greg Stark) wrote:\n> > Dawid Kuroczko <[email protected]> writes:\n> >\n> >> Quick thought -- would it be to possible to implement a 'partial VACUUM'\n> >> per analogiam to partial indexes?\n> >\n> > No.\n> >\n> > But it gave me another idea. Perhaps equally infeasible, but I don't see why.\n> >\n> > What if there were a map of modified pages. So every time any tuple\n> > was marked deleted it could be marked in the map as modified. VACUUM\n> > would only have to look at these pages. And if it could mark as free\n> > every tuple that was marked as deleted then it could unmark the\n> > page.\n> >\n> > The only downside I see is that this could be a source of contention\n> > on multi-processor machines running lots of concurrent\n> > update/deletes.\n> \n> I was thinking the same thing after hearing fairly extensive\n> \"pooh-poohing\" of the notion of vacuuming based on all the pages in\n> the shared cache.\n> \n> This \"hot list page table\" would probably need to be a hash table. 
It\n> rather parallels the FSM, including the way that it would need to be\n> limited in size.\n> -- \n> wm(X,Y):-write(X),write('@'),write(Y). wm('cbbrowne','gmail.com').\n> http://cbbrowne.com/info/lsf.html\n> Rules of the Evil Overlord #57. \"Before employing any captured\n> artifacts or machinery, I will carefully read the owner's manual.\"\n> <http://www.eviloverlord.com/>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Sun, 23 Jan 2005 16:18:38 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cheaper VACUUMing" }, { "msg_contents": "On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:\n> There was some discussion in Toronto this week about storing bitmaps\n> that would tell VACUUM whether or not there was any need to visit\n> individual pages of each table. Getting rid of useless scans through\n> not-recently-changed areas of large tables would make for a significant\n> reduction in the cost of VACUUM.\nFWIW, that's already on the TODO. See also\nhttp://lnk.nu/archives.postgresql.org/142.php.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Sun, 23 Jan 2005 16:21:34 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> The real issue with any such scheme is that you are putting maintenance\n> costs into the critical paths of foreground processes that are executing\n> user queries. I think that one of the primary advantages of the\n> Postgres storage design is that we keep that work outside the critical\n> path and delegate it to maintenance processes that can run in the\n> background. We shouldn't lightly toss away that advantage.\n\nAs a rather naive user, I'd consider modifying the FSM so that it has pages\nwith 'possibly freeable' space on them, as well as those with free space.\n\nThis way when the pages of actually free space is depleted, the list of\n'possibly freeable' pages could be vacuumed (as a batch for that relation)\nthen placed on the actually-free list like vacuum currently does\n\nSince there is concern about critical path performance, there could be an\nextra backend process that would wake up perodically (or on a signal) and\nvacuum the pages, so theyre not processed inline with some transaction. Then\ngrabbing a page with free space is the same as it is currently.\n\nActually I was hoping to find some time to investigate this myself, but my\nemployer is keeping me busy with other tasks ;/. 
Our particular data\nmanagement problems could be mitigated much better with a data partitioning\napproach, anyway.\n\nOn another note, is anybody investigating backing up the FSM with disk files\nso when the FSM size exceeds memory allocated, the appropriate data is\nswapped to disk? At least since 7.4 you no longer need a VACUUM when\npostgres starts, to learn about free space ;)\n\n- Guy Thornley\n", "msg_date": "Mon, 24 Jan 2005 13:21:50 +1300", "msg_from": "Guy Thornley <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Ühel kenal päeval (pühapäev, 23. jaanuar 2005, 15:40-0500), kirjutas Tom\nLane:\n> Simon Riggs <[email protected]> writes:\n> > Changing the idea slightly might be better: if a row update would cause\n> > a block split, then if there is more than one row version then we vacuum\n> > the whole block first, then re-attempt the update.\n> \n> \"Block split\"? I think you are confusing tables with indexes.\n> \n> Chasing down prior versions of the same row is not very practical\n> anyway, since there is no direct way to find them.\n> \n> One possibility is, if you tried to insert a row on a given page but\n> there's not room, to look through the other rows on the same page to see\n> if any are deletable (xmax below the GlobalXmin event horizon). This\n> strikes me as a fairly expensive operation though, especially when you\n> take into account the need to get rid of their index entries first.\n\nWhy is removing index entries essential ?\n\nIn pg yuo always have to visit data page, so finding the wrong tuple\nthere could just produce the same result as deleted tuple (which in this\ncase it actually is). The cleaning of index entries could be left to the\nreal vacuum.\n\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Mon, 24 Jan 2005 03:56:25 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> Tatsuo Ishii <[email protected]> writes:\n> > Probably VACUUM works well for small to medium size tables, but not\n> > for huge ones. I'm considering about to implement \"on the spot\n> > salvaging dead tuples\".\n> \n> That's impossible on its face, except for the special case where the\n> same transaction inserts and deletes a tuple. In all other cases, the\n> transaction deleting a tuple cannot know whether it will commit.\n\nOf course. We need to keep a list of such that tuples until commit or\nabort.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 24 Jan 2005 11:52:44 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering " }, { "msg_contents": "On Sun, 2005-01-23 at 15:40 -0500, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > Changing the idea slightly might be better: if a row update would cause\n> > a block split, then if there is more than one row version then we vacuum\n> > the whole block first, then re-attempt the update.\n> \n> \"Block split\"? I think you are confusing tables with indexes.\n\nTerminologically loose, as ever. 
:(\nI meant both tables and indexes and was referring to the part of the\nalgorithm that is entered when we have a block-full situation.\n\n> Chasing down prior versions of the same row is not very practical\n> anyway, since there is no direct way to find them.\n> \n> One possibility is, if you tried to insert a row on a given page but\n> there's not room, to look through the other rows on the same page to see\n> if any are deletable (xmax below the GlobalXmin event horizon). This\n> strikes me as a fairly expensive operation though, especially when you\n> take into account the need to get rid of their index entries first.\n\nThats what I was suggesting, vac the whole page, not just those rows.\n\nDoing it immediately greatly increases the chance that the index blocks\nwould be in cache also.\n\n> Moreover, the check would often be unproductive.\n> The real issue with any such scheme is that you are putting maintenance\n> costs into the critical paths of foreground processes that are executing\n> user queries. I think that one of the primary advantages of the\n> Postgres storage design is that we keep that work outside the critical\n> path and delegate it to maintenance processes that can run in the\n> background. We shouldn't lightly toss away that advantage.\n\nCompletely agree. ...which is why I was trying to find a place for such\nan operation in-front-of another expensive operation which is also\ncurrently on the critical path. That way there might be benefit rather\nthan just additional overhead.\n\n> There was some discussion in Toronto this week about storing bitmaps\n> that would tell VACUUM whether or not there was any need to visit\n> individual pages of each table. Getting rid of useless scans through\n> not-recently-changed areas of large tables would make for a significant\n> reduction in the cost of VACUUM.\n\nISTM there are two issues here, which are only somewhat related:\n- speed of VACUUM on large tables\n- ability to run VACUUM very frequently on very frequently updated\ntables\n\nThe needs-maintenance bitmap idea hits both, whilst the on-the-spot idea\nonly hits the second one, even if it does it +/- better. Gut feel says\nwe would implement only one idea...so...\n\nOn balance that indicates the need-maintenance bitmap is a better idea,\nand one for which we already have existing code.\nA few questions...\n- wouldn't we need a bitmap per relation?\n- wouldn't all the extra bitmaps need to be cached in shared_buffers,\nwhich could use up a good proportion of buffer cache space\n- maybe we should use a smaller block size and a different cache for it\n- how would we update the bitmap without creating a new LWlock that\nneeds to be acquired for every block write and so reducing scalability?\n- would this be implemented as an option for each table, so that we\ncould avoid the implementation overhead? (Or perhaps don't have a bitmap\nif table is less than 16 blocks?)\n\n-- \nBest Regards, Simon Riggs\n\n", "msg_date": "Mon, 24 Jan 2005 08:41:37 +0000", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Faster and more frequent VACUUM (was PostgreSQL clustering VS\n\tMySQL clustering)" }, { "msg_contents": "Tatsuo,\n\nI agree completely that vacuum falls apart on huge tables. 
We could \nprobably do the math and figure out what the ratio of updated rows per \ntotal rows is each day, but on a constantly growing table, that ratio \ngets smaller and smaller, making the impact of dead tuples in the table \nproportionately less and less.\n\nIf multi-version indexes are handled the same way as table rows, then \nthe indexes will also suffer the same fate, if not worse. For huge \ntables, the b-tree depth can get fairly large. When a b-tree is of \ndepth X and the machine holds the first Y levels of the b-tree in \nmemory, then each table row selected requires a MINIMUM of (X-Y) disk \naccess *before* the table row is accessed. Substitute any numbers you \nwant for X and Y, but you will find that huge tables require many index \nreads.\n\nIndex updates are even worse. A table row update requires only a copy \nof the row. An index update requires at least a copy of the leaf node, \nand possibly more nodes if nodes must be split or collapsed. These \nsplits and collapses can cascade, causing many nodes to be affected.\n\nThis whole process takes place for each and every index affected by the \nchange, which is every index on the table when a row is added or \ndeleted. All of this monkeying around takes place above and beyond the \nsimple change of the row data. Further, each and every affected index \npage is dumped to WAL.\n\nAssuming the indexes have the same MVCC proprties of row data, then the \nindexes would get dead tuples at a rate far higher than that of the \ntable data.\n\nSo yes, vacuuming is a problem on large tables. It is a bigger problem \nfor indexes. On large tables, index I/O comprises most of the I/O mix.\n\nDon't take my word for it. Run a benchmark on Pg. Then, soft-link the \nindex files and the WAL directories to a RAM disk. Rerun the benchmark \nand you will find that Pg far faster, much faster than if only the data \nwere on the RAM disk.\n\nMarty\n\nTatsuo Ishii wrote:\n> IMO the bottle neck is not WAL but table/index bloat. Lots of updates\n> on large tables will produce lots of dead tuples. Problem is, There'\n> is no effective way to reuse these dead tuples since VACUUM on huge\n> tables takes longer time. 8.0 adds new vacuum delay\n> paramters. Unfortunately this does not help. It just make the\n> execution time of VACUUM longer, that means more and more dead tuples\n> are being made while updating.\n> \n> Probably VACUUM works well for small to medium size tables, but not\n> for huge ones. I'm considering about to implement \"on the spot\n> salvaging dead tuples\".\n> --\n> Tatsuo Ishii\n> \n> \n>>This is probably a lot easier than you would think. You say that your \n>>DB will have lots of data, lots of updates and lots of reads.\n>>\n>>Very likely the disk bottleneck is mostly index reads and writes, with \n>>some critical WAL fsync() calls. In the grand scheme of things, the \n>>actual data is likely not accessed very often.\n>>\n>>The indexes can be put on a RAM disk tablespace and that's the end of \n>>index problems -- just make sure you have enough memory available. Also \n>>make sure that the machine can restart correctly after a crash: the \n>>tablespace is dropped and recreated, along with the indexes. This will \n>>cause a machine restart to take some time.\n>>\n>>After that, if the WAL fsync() calls are becoming a problem, put the WAL \n>>files on a fast RAID array, etiher a card or external enclosure, that \n>>has a good amount of battery-backed write cache. 
This way, the WAL \n>>fsync() calls will flush quickly to the RAM and Pg can move on while the \n>>RAID controller worries about putting the data to disk. With WAL, low \n>>access time is usually more important than total throughput.\n>>\n>>The truth is that you could have this running for not much money.\n>>\n>>Good Luck,\n>>Marty\n>>\n>>\n>>>Le Jeudi 20 Janvier 2005 19:09, Bruno Almeida do Lago a écrit :\n>>> > Could you explain us what do you have in mind for that solution? I mean,\n>>> > forget the PostgreSQL (or any other database) restrictions and \n>>>explain us\n>>> > how this hardware would be. Where the data would be stored?\n>>> >\n>>> > I've something in mind for you, but first I need to understand your \n>>>needs!\n>>>\n>>>I just want to make a big database as explained in my first mail ... At the\n>>>beginning we will have aprox. 150 000 000 records ... each month we will \n>>>add\n>>>about 4/8 millions new rows in constant flow during the day ... and in same\n>>>time web users will access to the database in order to read those data.\n>>>Stored data are quite close to data stored by google ... (we are not \n>>>making a\n>>>google clone ... just a lot of data many small values and some big ones ...\n>>>that's why I'm comparing with google for data storage).\n>>>Then we will have a search engine searching into those data ...\n>>>\n>>>Dealing about the hardware, for the moment we have only a bi-pentium Xeon\n>>>2.8Ghz with 4 Gb of RAM ... and we saw we had bad performance results \n>>>... so\n>>>we are thinking about a new solution with maybe several servers (server\n>>>design may vary from one to other) ... to get a kind of cluster to get \n>>>better\n>>>performance ...\n>>>\n>>>Am I clear ?\n>>>\n>>>Regards,\n>>\n>>\n>>\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 2: you can get off all lists at once with the unregister command\n>> (send \"unregister YourEmailAddressHere\" to [email protected])\n>>\n> \n\n\n\n", "msg_date": "Mon, 24 Jan 2005 08:45:57 -0700", "msg_from": "Marty Scholes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "On Sun, Jan 23, 2005 at 03:40:03PM -0500, Tom Lane wrote:\n> The real issue with any such scheme is that you are putting maintenance\n> costs into the critical paths of foreground processes that are executing\n> user queries. I think that one of the primary advantages of the\n> Postgres storage design is that we keep that work outside the critical\n> path and delegate it to maintenance processes that can run in the\n> background. We shouldn't lightly toss away that advantage.\n\nTo pull out the oft-used \"show me the numbers\" card... has anyone done a\nstudy to see if keeping this stuff out of the 'critical path' actually\nhelps overall system performance? While the current scheme initially\nspeeds up transactions, eventually you have to run vacuum, which puts a\nbig load on the system. If you can put off vacuuming until off-hours\n(assuming your system has off-hours), then this doesn't matter, but more\nand more we're seeing systems where vacuum is a big performance issue\n(hence recent work with the delay in vacuum so as not to swamp the IO\nsystem).\n\nIf you vacuum as part of the transaction it's going to be more efficient\nof resources, because you have more of what you need right there (ie:\nodds are that you're on the same page as the old tuple). 
In cases like\nthat it very likely makes a lot of sense to take a small hit in your\ntransaction time up-front, instead of a larger hit doing a vacuum down\nthe road.\n\nOf course, without numbers this is a bunch of hand-waving, but I don't\nthink it's valid to assume that minimizing the amount of work you do in\na transaction means better throughput without considering what it will\ncost to do the work you're putting off until later.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Mon, 24 Jan 2005 21:11:04 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Ühel kenal päeval (esmaspäev, 24. jaanuar 2005, 11:52+0900), kirjutas\nTatsuo Ishii:\n> > Tatsuo Ishii <[email protected]> writes:\n> > > Probably VACUUM works well for small to medium size tables, but not\n> > > for huge ones. I'm considering about to implement \"on the spot\n> > > salvaging dead tuples\".\n> > \n> > That's impossible on its face, except for the special case where the\n> > same transaction inserts and deletes a tuple. In all other cases, the\n> > transaction deleting a tuple cannot know whether it will commit.\n> \n> Of course. We need to keep a list of such that tuples until commit or\n> abort.\n\nwhat about other transactions, which may have started before current one\nand be still running when current one commites ?\n\n\nI once proposed an extra parameter added to VACUUM FULL which determines\nhow much free space to leave in each page vacuumed. If there were room\nthe new tuple could be placed near the old one in most cases and thus\navoid lots of disk head movement when updating huge tables in one go.\n\n------------\n\nHannu Krosing <[email protected]>\n", "msg_date": "Tue, 25 Jan 2005 12:42:47 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "> > > Tatsuo Ishii <[email protected]> writes:\n> > > > Probably VACUUM works well for small to medium size tables, but not\n> > > > for huge ones. I'm considering about to implement \"on the spot\n> > > > salvaging dead tuples\".\n> > > \n> > > That's impossible on its face, except for the special case where the\n> > > same transaction inserts and deletes a tuple. In all other cases, the\n> > > transaction deleting a tuple cannot know whether it will commit.\n> > \n> > Of course. We need to keep a list of such that tuples until commit or\n> > abort.\n> \n> what about other transactions, which may have started before current one\n> and be still running when current one commites ?\n\nThen dead tuples should be left. Perhaps in this case we could\nregister them in FSM or whatever for later processing.\n--\nTatsuo Ishii\n\n> I once proposed an extra parameter added to VACUUM FULL which determines\n> how much free space to leave in each page vacuumed. 
If there were room\n> the new tuple could be placed near the old one in most cases and thus\n> avoid lots of disk head movement when updating huge tables in one go.\n> \n> ------------\n> \n> Hannu Krosing <[email protected]>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n", "msg_date": "Tue, 25 Jan 2005 23:19:17 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> Why is removing index entries essential ?\n\nBecause once you re-use the tuple slot, any leftover index entries would\nbe pointing to the wrong rows.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2005 10:41:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering " }, { "msg_contents": "Ühel kenal päeval (teisipäev, 25. jaanuar 2005, 10:41-0500), kirjutas\nTom Lane:\n> Hannu Krosing <[email protected]> writes:\n> > Why is removing index entries essential ?\n> \n> Because once you re-use the tuple slot, any leftover index entries would\n> be pointing to the wrong rows.\n\nThat much I understood ;)\n\nBut can't clearing up the index be left for \"later\" ? \n\nIndexscan has to check the data tuple anyway, at least for visibility.\nwould adding the check for field sameness in index and data tuples be\ntoo big performance hit ?\n\n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n-- \nHannu Krosing <[email protected]>\n", "msg_date": "Wed, 26 Jan 2005 11:41:18 +0200", "msg_from": "Hannu Krosing <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Hannu Krosing <[email protected]> writes:\n> But can't clearing up the index be left for \"later\" ? \n\nBased on what? Are you going to store the information about what has to\nbe cleaned up somewhere else, and if so where?\n\n> Indexscan has to check the data tuple anyway, at least for visibility.\n> would adding the check for field sameness in index and data tuples be\n> too big performance hit ?\n\nIt does pretty much suck, especially when you think about functional\nindexes on expensive functions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Jan 2005 10:17:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering " }, { "msg_contents": "\nhttp://borg.postgresql.org/docs/8.0/interactive/storage-page-layout.html\n\n\n> If you vacuum as part of the transaction it's going to be more efficient\n> of resources, because you have more of what you need right there (ie:\n> odds are that you're on the same page as the old tuple). In cases like\n> that it very likely makes a lot of sense to take a small hit in your\n> transaction time up-front, instead of a larger hit doing a vacuum down\n> the road.\n\n\tSome pros would be that you're going to make a disk write anyway because \nthe page is modified, so why not vacuum that page while it's there. 
If the \nmachine is CPU bound you lose, if it's IO bound you save some IO, but the \ncost of index updates has to be taken into account...\n\n\tIt prompted a few questions :\n\nNote : temp contains 128k (131072) values generated from a sequence.\n\ncreate table test (id serial primary key, a integer, z integer, e integer, \nr integer, t integer, y integer ) without oids;\ninsert into test (id,a,z,e,r,t,y) select id,0,0,0,0,0,0 from temp;\n INSERT 0 131072\n\n\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..2226.84 rows=126284 width=30) (ac Seq Scan \non test (cost=0.00..2274.72 rows=131072 width=30) (actual \ntime=0.046..964.590 rows=131072 loops=1)\n Total runtime: 15628.143 ms\ntual time=0.047..617.553 rows=131072 loops=1)\n Total runtime: 4432.509 ms\n\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..4453.68 rows=252568 width=30) (actual \ntime=52.198..611.594 rows=131072 loops=1)\n Total runtime: 5739.064 ms\n\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..6680.52 rows=378852 width=30) (actual \ntime=127.301..848.762 rows=131072 loops=1)\n Total runtime: 6548.206 ms\n\nGets slower as more and more dead tuples accumulate... normal as this is a \nseq scan. Note the row estimations getting bigger with the table size...\n\n\nvacuum full test;\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..2274.72 rows=131072 width=30) (actual \ntime=0.019..779.864 rows=131072 loops=1)\n Total runtime: 5600.311 ms\n\nvacuum full test;\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..2274.72 rows=131072 width=30) (actual \ntime=0.039..1021.847 rows=131072 loops=1)\n Total runtime: 5126.590 ms\n\n-> Seems vacuum full does its job....\n\nvacuum test;\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..3894.08 rows=196608 width=30) (actual \ntime=36.491..860.135 rows=131072 loops=1)\n Total runtime: 7293.698 ms\n\nvacuum test;\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..3894.08 rows=196608 width=30) (actual \ntime=0.044..657.125 rows=131072 loops=1)\n Total runtime: 5934.141 ms\n\nvacuum analyze test;\nexplain analyze update test set y=1;\n Seq Scan on test (cost=0.00..3894.08 rows=196608 width=30) (actual \ntime=0.018..871.132 rows=131072 loops=1)\n Total runtime: 5548.053 ms\n\n-> here vacuum is about as slow as vacuum full (which is normal as the \nwhole table is updated) however the row estimation is still off even after \nANALYZE.\n\n\n Let's create a few indices :\n\nvacuum full test;\ncreate index testa on test(a);\ncreate index testz on test(z);\ncreate index teste on test(e);\ncreate index testr on test(r);\ncreate index testt on test(t);\n-- we don't create an index on y\n\n\nvacuum full test;\nexplain analyze update test set a=id;\n Seq Scan on test (cost=0.00..2274.72 rows=131072 width=30) (actual \ntime=0.044..846.102 rows=131072 loops=1)\n Total runtime: 14998.307 ms\n\nWe see that the index updating time has made this query a lot slower. This \nis normal, but :\n\nvacuum full test;\nexplain analyze update test set a=id;\n Seq Scan on test (cost=0.00..2274.72 rows=131072 width=30) (actual \ntime=0.045..1387.626 rows=131072 loops=1)\n Total runtime: 17644.368 ms\n\nNow, we updated ALL rows but didn't actually change a single value. \nHowever it took about the same time as the first one. 
I guess the updates \nall really took place, even if all it did was copy the rows with new \ntransaction ID's.\nNow, let's update a column which is not indexed :\n\nvacuum full test;\nexplain analyze update test set y=id;\n Seq Scan on test (cost=0.00..2274.72 rows=131072 width=30) (actual \ntime=0.046..964.590 rows=131072 loops=1)\n Total runtime: 15628.143 ms\n\nTakes 'bout the same time : the indexes still have to be updated to \nreference the new rows after all.\n\nSo, here is something annoying with the current approach : Updating rows \nin a table bloats ALL indices, not just those whose indexed values have \nbeen actually updated. So if you have a table with many indexed fields and \nyou often update some obscure timestamp field, all the indices will bloat, \nwhich will of course be corrected by VACUUM, but vacuum will have extra \nwork to do.\n\n\tI don't have suggestions, just questions :\n\n\tIs there a way that an update to the indices can be avoided if the \nindexed values do not change ?\n\tWould it depend if an updated tuple can be stored on the same page it was \nbefore (along with the old version) ?\n\tIf the answer is Yes :\n\t\t- would saving the cost of updating the indexes pay off over vacuuming \nthe page on the run to try to squeeze the new tuple version in ?\n\t\t- would it be interesting to specify for each table a target % of free \nspace ('air holes') in pages for vacuum to try to achieve, in order to be \nable to insert updated row versions on the same page they were before, and \nsave index updates ?\n\n\tRegards...\n\t\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 26 Jan 2005 20:46:49 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "PFC wrote:\n> So, here is something annoying with the current approach : Updating rows \n> in a table bloats ALL indices, not just those whose indexed values have \n> been actually updated. So if you have a table with many indexed fields and \n> you often update some obscure timestamp field, all the indices will bloat, \n> which will of course be corrected by VACUUM, but vacuum will have extra \n> work to do.\n\nThe MVCC approach probably doesn't leave you with many choices here.\nThe index entries point directly to the rows in the table, and since\nan update creates a new row (it's the equivalent of doing an insert\nthen a delete), all indexes have to be updated to reflect the location\nof the new row.\n\nUnless my understanding of how this works is completely off...\n\n\n\n\n-- \nKevin Brown\t\t\t\t\t [email protected]\n", "msg_date": "Fri, 28 Jan 2005 12:46:10 -0800", "msg_from": "Kevin Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": "Le Vendredi 21 Janvier 2005 19:18, Marty Scholes a écrit :\n> The indexes can be put on a RAM disk tablespace and that's the end of\n> index problems -- just make sure you have enough memory available. Also\n> make sure that the machine can restart correctly after a crash: the\n> tablespace is dropped and recreated, along with the indexes. This will\n> cause a machine restart to take some time.\nTell me if I am wrong but it sounds to me like like an endless problem....This \nsolution may work with small indexes (less than 4GB) but what appends when \nthe indexes grow ? You would add more memory to your server ? 
But there will \nbe a moment were you can not add more so what's next ?\n", "msg_date": "Mon, 31 Jan 2005 16:16:07 +0100", "msg_from": "Olivier Sirven <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" }, { "msg_contents": " > Tell me if I am wrong but it sounds to me like like\n > an endless problem....\n\nAgreed. Such it is with caching. After doing some informal \nbenchmarking with 8.0 under Solaris, I am convinced that our major choke \npoint is WAL synchronization, at least for applications with a high \ncommit rate.\n\nWe have noticed a substantial improvement in performance with 8.0 vs \n7.4.6. All of the update/insert problems seem to have gone away, save \nWAL syncing.\n\nI may have to take back what I said about indexes.\n\n\nOlivier Sirven wrote:\n> Le Vendredi 21 Janvier 2005 19:18, Marty Scholes a écrit :\n> \n>>The indexes can be put on a RAM disk tablespace and that's the end of\n>>index problems -- just make sure you have enough memory available. Also\n>>make sure that the machine can restart correctly after a crash: the\n>>tablespace is dropped and recreated, along with the indexes. This will\n>>cause a machine restart to take some time.\n> \n> Tell me if I am wrong but it sounds to me like like an endless problem....This \n> solution may work with small indexes (less than 4GB) but what appends when \n> the indexes grow ? You would add more memory to your server ? But there will \n> be a moment were you can not add more so what's next ?\n\n\n\n", "msg_date": "Mon, 31 Jan 2005 08:24:55 -0700", "msg_from": "Marty Scholes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
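The update-bloat behaviour discussed above is easy to reproduce on a scratch database. The sketch below is illustrative only (the table name, row count and number of repeated updates are invented): every full-table UPDATE writes a complete new version of each row, and the dead versions stay in the heap and in every index until VACUUM reclaims them.

    CREATE TABLE bloat_demo (id serial PRIMARY KEY, payload integer);
    INSERT INTO bloat_demo (payload) SELECT 0 FROM generate_series(1, 100000);

    -- note the growing cost estimates and runtimes as dead tuples pile up
    EXPLAIN ANALYZE UPDATE bloat_demo SET payload = payload + 1;
    EXPLAIN ANALYZE UPDATE bloat_demo SET payload = payload + 1;

    VACUUM VERBOSE bloat_demo;   -- reports how many dead row versions were removed
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname IN ('bloat_demo', 'bloat_demo_pkey');

On 8.0 the vacuum delay settings mentioned in the thread (vacuum_cost_delay and friends) can throttle the I/O impact of running such a VACUUM during production hours, at the price of a longer VACUUM run.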
[ { "msg_contents": "> >Technically, you can also set up a rule to do things on a select with\nDO\n> >ALSO. However putting update statements in there would be considered\n(at\n> >least by me) very bad form. Note that this is not a trigger because\nit\n> >does not operate at the row level [I know you knew that already :-)].\n> >\n> >\n> >\n> Unfortunately, you can't. Select operations only allow a single rule,\n> and it must be a DO INSTEAD rule, unless this has changed in 8.0 and I\n> missed it in the docs. However, you can do this in a view by calling\na\n> function either in the row definition or in the where clause.\n\nYou're right...forgot about that. Heh, the do instead rule could be a\nset returning function which could (besides returning the set) do almost\nanything! So in theory it makes no difference...diclaimer: never tried\ndoing this!\n\nMerlin\n\n", "msg_date": "Fri, 21 Jan 2005 14:23:24 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL clustering VS MySQL clustering" } ]
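As a rough illustration of the view-calling-a-function idea mentioned above (an untested sketch; all object names are invented), a volatile function with a side effect can be placed in the view's select list. Note that the planner decides how often the function is actually evaluated — typically once per row returned, and not at all for rows that are never fetched.

    CREATE TABLE orders (id integer, total numeric);
    CREATE TABLE access_log (tag text, at timestamptz);

    -- volatile helper with a side effect: records that the view was read
    CREATE FUNCTION log_access(text) RETURNS boolean AS '
        INSERT INTO access_log VALUES ($1, now());
        SELECT true;
    ' LANGUAGE sql;

    -- each row fetched from the view also fires log_access()
    CREATE VIEW orders_logged AS
        SELECT o.*, log_access('orders') AS logged
        FROM orders o;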
[ { "msg_contents": "Randolf,\n\nYou probably won't want to hear this, but this decision likely has \nnothing to do with brands, models, performance or applications.\n\nYou are up against a pro salesman who is likely very good at what he \ndoes. Instead spewing all sorts of \"facts\" and statistics to your \nclient, the salesman is probably trying to figure out what is driving \nyour client. Do you know what is driving your client? Why does he want \nto switch? Why now? Why not next quarter? Why not last quarter? Why \ndoes he want to do the application at all?\n\nForget the expected answers, e.g., \"We need this application to enhance \nour competitiveness in the marketplace and increase the blah blah blah.\"\n\nWhy does YOUR CLIENT actually care about any of this? Is he trying to \nimpress his boss? Build his career? Demonstrate that he can manage a \nsignificant project? Is he trying to get rid of old code from an \nex-coworker that he hated? Is it spite? Pride? Is he angling for a \nbigger budget next year? Is there someone who will be assigned to this \nproject that your client wants to lord over?\n\nThe list goes on and on, and there is no way that your client is going \nto admit the truth and say something like, \"The real reason I want to do \nthis right now is that my childhood rival at XYZ corp just did a project \nlike this. I need to boost my ego, so I *MUST* do a bigger project, \nright now.\"\n\nYou gotta read between the lines. How important is this and why? How \nurgent and why? Who all is behind this project? What are each \nindividual's personal motivations? Does anyone resent a leader on the \nteam and secretly wish for this project to fail?\n\nOnce you know what is actually going on in people's heads, you can begin \nto build rapport and influence them. You can establish your own \ncredibility and safety with your solution, while planting seeds of doubt \nabout another solution.\n\nAt its core, this decision is (very likely) not at all about RDBMS \nperformance or anything else related to computing.\n\nHave you asked yourself why you care about one solution over another? \nWhat's driving you to push Pg over MS? Why? You might want to start \nanswering those questions before you even talk to your client.\n\nGood Luck,\nMarty\n\nRandolf Richardson wrote:\n> I'm looking for recent performance statistics on PostgreSQL vs. Oracle\n> vs. Microsoft SQL Server. Recently someone has been trying to convince my\n> client to switch from SyBASE to Microsoft SQL Server (they originally \n> wanted\n> to go with Oracle but have since fallen in love with Microsoft). All this\n> time I've been recommending PostgreSQL for cost and stability (my own \n> testing\n> has shown it to be better at handling abnormal shutdowns and using fewer\n> system resources) in addition to true cross-platform compatibility.\n> \n> If I can show my client some statistics that PostgreSQL outperforms\n> these (I'm more concerned about it beating Oracle because I know that\n> Microsoft's stuff is always slower, but I need the information anyway to\n> protect my client from falling victim to a 'sales job'), then PostgreSQL \n> will\n> be the solution of choice as the client has always believed that they \n> need a\n> high-performance solution.\n> \n> I've already convinced them on the usual price, cross-platform\n> compatibility, open source, long history, etc. 
points, and I've been \n> assured\n> that if the performance is the same or better than Oracle's and Microsoft's\n> solutions that PostgreSQL is what they'll choose.\n> \n> Thanks in advance.\n> \n> \n\n\n\n", "msg_date": "Fri, 21 Jan 2005 12:32:12 -0700", "msg_from": "Marty Scholes <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL vs. Oracle vs. Microsoft" } ]
[ { "msg_contents": "I have a query that thinks it's going to generate a huge number of rows,\nwhen in fact it won't:\n\nINSERT INTO page_log.rrs\n ( bucket_id, page_id,project_id,other, hits,min_hits,max_hits,total_duration,min_duration,max_duration )\n SELECT a.rrs_bucket_id, page_id,project_id,other\n , count(*),count(*),count(*),sum(duration),min(duration),max(duration)\n FROM\n (SELECT b.bucket_id AS rrs_bucket_id, s.*\n FROM rrs.bucket b\n JOIN page_log.log s\n ON (\n b.prev_end_time < log_time\n AND b.end_time >= log_time )\n WHERE b.rrs_id = '1'\n AND b.end_time <= '2005-01-21 20:23:00+00'\n AND b.end_time > '1970-01-01 00:00:00+00'\n ) a\n GROUP BY rrs_bucket_id, page_id,project_id,other;\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan \"*SELECT*\" (cost=170461360504.98..183419912556.69 rows=91175544 width=77)\n -> GroupAggregate (cost=170461360504.98..183418316984.67 rows=91175544 width=29)\n -> Sort (cost=170461360504.98..171639141309.21 rows=471112321692 width=29)\n Sort Key: b.bucket_id, s.page_id, s.project_id, s.other\n -> Nested Loop (cost=0.00..17287707964.10 rows=471112321692 width=29)\n -> Seq Scan on bucket b (cost=0.00..9275.84 rows=281406 width=20)\n Filter: ((rrs_id = 1) AND (end_time <= '2005-01-21 20:23:00+00'::timestamp with time zone) AND (end_time > '1970-01-01 00:00:00+00'::timestamp with time zone))\n -> Index Scan using log__log_time on log s (cost=0.00..36321.24 rows=1674137 width=33)\n Index Cond: ((\"outer\".prev_end_time < s.log_time) AND (\"outer\".end_time >= s.log_time))\n\nThe final rowcount after the aggregate will actually be about 1.9M\nrows:\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan \"*SELECT*\" (cost=170461360504.98..183419912556.69 rows=91175544 width=77) (actual time=156777.374..234613.843 rows=1945123 loops=1)\n -> GroupAggregate (cost=170461360504.98..183418316984.67 rows=91175544 width=29) (actual time=156777.345..214246.751 rows=1945123 loops=1)\n -> Sort (cost=170461360504.98..171639141309.21 rows=471112321692 width=29) (actual time=156777.296..177517.663 rows=4915567 loops=1)\n Sort Key: b.bucket_id, s.page_id, s.project_id, s.other\n -> Nested Loop (cost=0.00..17287707964.10 rows=471112321692 width=29) (actual time=0.662..90702.755 rows=4915567 loops=1)\n -> Seq Scan on bucket b (cost=0.00..9275.84 rows=281406 width=20) (actual time=0.063..1591.591 rows=265122 loops=1)\n Filter: ((rrs_id = 1) AND (end_time <= '2005-01-21 20:23:00+00'::timestamp with time zone) AND (end_time > '1970-01-01 00:00:00+00'::timestamp with time zone))\n -> Index Scan using log__log_time on log s (cost=0.00..36321.24 rows=1674137 width=33) (actual time=0.014..0.174 rows=19 loops=265122)\n Index Cond: ((\"outer\".prev_end_time < s.log_time) AND (\"outer\".end_time >= s.log_time))\n Total runtime: 299623.954 ms\n\nEverything is analyzed, and the statistics target is set to 1000.\nBasically, it seems that it doesn't understand that each row in log will\nmatch up with at most one row in bucket. There is a unique index on\nbucket(rrs_id, end_time), so it should be able to tell this.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! 
www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Fri, 21 Jan 2005 14:38:27 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Odd number of rows expected" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> (SELECT b.bucket_id AS rrs_bucket_id, s.*\n> FROM rrs.bucket b\n> JOIN page_log.log s\n> ON (\n> b.prev_end_time < log_time\n> AND b.end_time >= log_time )\n> WHERE b.rrs_id = '1'\n> AND b.end_time <= '2005-01-21 20:23:00+00'\n> AND b.end_time > '1970-01-01 00:00:00+00'\n> ) a\n\n> Basically, it seems that it doesn't understand that each row in log will\n> match up with at most one row in bucket. There is a unique index on\n> bucket(rrs_id, end_time), so it should be able to tell this.\n\nWhy should it be able to tell that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Jan 2005 22:18:00 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd number of rows expected " }, { "msg_contents": "On Sat, Jan 22, 2005 at 10:18:00PM -0500, Tom Lane wrote:\n> \"Jim C. Nasby\" <[email protected]> writes:\n> > (SELECT b.bucket_id AS rrs_bucket_id, s.*\n> > FROM rrs.bucket b\n> > JOIN page_log.log s\n> > ON (\n> > b.prev_end_time < log_time\n> > AND b.end_time >= log_time )\n> > WHERE b.rrs_id = '1'\n> > AND b.end_time <= '2005-01-21 20:23:00+00'\n> > AND b.end_time > '1970-01-01 00:00:00+00'\n> > ) a\n> \n> > Basically, it seems that it doesn't understand that each row in log will\n> > match up with at most one row in bucket. There is a unique index on\n> > bucket(rrs_id, end_time), so it should be able to tell this.\n> \n> Why should it be able to tell that?\n\nIndexes:\n \"rrs_bucket__rrs_id__end_time\" unique, btree (rrs_id, end_time)\n\nErr, crap, I guess that wouldn't work, because of prev_end_time not\nbeing in there...\n\nIn english, each bucket defines a specific time period, and no two\nbuckets can over-lap (though there's no constraints defined to actually\nprevent that). So reality is that each row in page_log.log will in fact\nonly match one row in bucket (at least for each value of rrs_id).\n\nGiven that, would the optimizer make a better choice if it knew that\n(since it means a much smaller result set). Is there any way to tell the\noptimizer this is the case?\n\nMaybe what I ultimately need is a timestamp with interval datatype, that\nspecifies an interval that's fixed in time.\n-- \nJim C. Nasby, Database Consultant [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828\n\nWindows: \"Where do you want to go today?\"\nLinux: \"Where do you want to go tomorrow?\"\nFreeBSD: \"Are you guys coming, or what?\"\n", "msg_date": "Sun, 23 Jan 2005 16:29:42 -0600", "msg_from": "\"Jim C. Nasby\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Odd number of rows expected" }, { "msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> In english, each bucket defines a specific time period, and no two\n> buckets can over-lap (though there's no constraints defined to actually\n> prevent that). 
So reality is that each row in page_log.log will in fact\n> only match one row in bucket (at least for each value of rrs_id).\n\n> Given that, would the optimizer make a better choice if it knew that\n> (since it means a much smaller result set).\n\nGiven that the join condition is not an equality, there's no hope of\nusing hash or merge join; so the join itself is about as good as you're\ngonna get. With a more accurate rows estimate for the join result, it\nmight have decided to use HashAggregate instead of Sort/GroupAggregate,\nbut AFAICS that would not have made a huge difference ... at best maybe\n25% of the total query time.\n\n> Is there any way to tell the\n> optimizer this is the case?\n\nNope. This gets back to the old problem of not having any cross-column\n(cross-table in this case) statistics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Jan 2005 17:39:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Odd number of rows expected " } ]
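A cut-down version of the schema in this thread shows where the estimate goes wrong (names shortened and columns trimmed; the real tables have more fields). Because the join condition is a pair of inequalities, the planner has no way to know that the buckets do not overlap, so it assumes each log row can match many buckets and the estimated join size balloons far past the real result.

    CREATE TABLE bucket (bucket_id serial PRIMARY KEY,
                         prev_end_time timestamptz,
                         end_time timestamptz);
    CREATE TABLE log (log_time timestamptz, page_id integer);

    -- range join: each log row really falls into exactly one bucket,
    -- but nothing in the catalogs tells the planner that
    EXPLAIN SELECT b.bucket_id, count(*)
    FROM bucket b
    JOIN log s ON (b.prev_end_time < s.log_time AND b.end_time >= s.log_time)
    GROUP BY b.bucket_id;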
[ { "msg_contents": "Hello everyone,\n\nFirst time poster to the mailing list here. \n\nWe have been running pgsql for about a year now at a pretty basic level (I guess) as a backend for custom web (intranet) application software. Our database so far is a \"huge\" (note sarcasm) 10 Mb containing of about 6 or so principle tables. \n\nOur 'test' screen we've been using loads a 600kb HTML document which is basically a summary of our client's orders. It took originally 11.5 seconds to load in internet explorer (all 10.99 seconds were pretty much taken up by postgres processes on a freebsd server). \n\nI then re-wrote the page to use a single select query to call all the information needed by PHP to draw the screen. That managed to shave it down to 3.5 seconds... but this so far is as fast as I can get the page to load. Have tried vacuuming and creating indexes but to no avail. (increasing shared mem buffers yet to be done)\n\nNow heres the funny bit ... \n\nEvery time I tested an idea to speed it up, I got exactly the same loading time on a Athlon 1800+, 256Mb RAM, 20Gb PATA computer as compared to a Dual Opteron 246, 1Gb RAM, 70Gb WD Raptor SATA server. Now, why a dual opteron machine can't perform any faster than a lowly 1800+ athlon in numerous tests is completely beyond me ... increased memory and RAID 0 disc configurations so far have not resulted in any significant performance gain in the opteron server.\n\nDo these facts sound right? If postgres is meant to be a 200Gb industrial strength database, should it really be taking this long pulling 600kb worth of info from a 10Mb database? And why no performance difference between two vastly different hardware spec'd computers??? Am I missing some vital postgres.conf setting??\n\nAny advice welcome.\n\nThanks,\nDave\[email protected]\n\n\n\n\n\n\n\nHello everyone,First time poster to the \nmailing list here. \n \nWe have been running pgsql for about a year now at \na pretty basic level (I guess) as a backend for custom \nweb (intranet) application software. Our database so far is a \n\"huge\" (note sarcasm) 10 Mb containing of about 6 or so principle \ntables. \n \nOur 'test' screen we've been using loads a 600kb \nHTML document which is basically a summary of our client's orders. It took \noriginally 11.5 seconds to load in internet explorer (all 10.99 seconds were \npretty much taken up by postgres processes on a freebsd server). \n \nI then re-wrote the page to use a single select \nquery to call all the information needed by PHP to draw the screen. That \nmanaged to shave it down to 3.5 seconds... but this so far is as fast as I can \nget the page to load. Have tried vacuuming and creating indexes but to no avail. \n(increasing shared mem buffers yet to be done)\n \nNow heres the funny bit ... Every time I \ntested an idea to speed it up, I got exactly the same loading time on a Athlon \n1800+, 256Mb RAM, 20Gb PATA computer as compared to a Dual Opteron 246, 1Gb RAM, \n70Gb WD Raptor SATA server. Now, why a dual opteron machine can't perform \nany faster than a lowly 1800+ athlon in numerous tests is completely beyond me \n.. increased memory and RAID 0 disc configurations so far have not resulted in \nany significant performance gain in the opteron server.\n \nDo these facts sound right? If postgres is meant to \nbe a 200Gb industrial strength database, should it really be taking this long \npulling 600kb worth of info from a 10Mb database? And why no performance \ndifference between two vastly different hardware spec'd computers??? 
Am I \nmissing some vital postgres.conf setting??Any advice \nwelcome.Thanks,Dave\[email protected]", "msg_date": "Mon, 24 Jan 2005 15:56:41 +0800", "msg_from": "\"SpaceBallOne\" <[email protected]>", "msg_from_op": true, "msg_subject": "poor performance of db?" }, { "msg_contents": "SpaceBallOne wrote:\n\n> Hello everyone,\n>\n> First time poster to the mailing list here.\n> \n> We have been running pgsql for about a year now at a pretty basic \n> level (I guess) as a backend for custom \n> web (intranet) application software. Our database so far is a \"huge\" \n> (note sarcasm) 10 Mb containing of about 6 or so principle tables. \n> \n> Our 'test' screen we've been using loads a 600kb HTML document which \n> is basically a summary of our client's orders. It took originally 11.5 \n> seconds to load in internet explorer (all 10.99 seconds were pretty \n> much taken up by postgres processes on a freebsd server).\n> \n> I then re-wrote the page to use a single select query to call all the \n> information needed by PHP to draw the screen. That managed to shave it \n> down to 3.5 seconds... but this so far is as fast as I can get the \n> page to load. Have tried vacuuming and creating indexes but to no \n> avail. (increasing shared mem buffers yet to be done)\n> \n> Now heres the funny bit ...\n>\n> Every time I tested an idea to speed it up, I got exactly the same \n> loading time on a Athlon 1800+, 256Mb RAM, 20Gb PATA computer as \n> compared to a Dual Opteron 246, 1Gb RAM, 70Gb WD Raptor SATA server. \n> Now, why a dual opteron machine can't perform any faster than a lowly \n> 1800+ athlon in numerous tests is completely beyond me .. increased \n> memory and RAID 0 disc configurations so far have not resulted in any \n> significant performance gain in the opteron server.\n> \n> Do these facts sound right? If postgres is meant to be a 200Gb \n> industrial strength database, should it really be taking this long \n> pulling 600kb worth of info from a 10Mb database? And why no \n> performance difference between two vastly different hardware spec'd \n> computers??? Am I missing some vital postgres.conf setting??\n>\n> Any advice welcome.\n>\n> Thanks,\n> Dave\n> [email protected] <mailto:[email protected]>\n> \n\nCould you give us a bit more info.\nWhat you are trying to do. EXPLAIN ANALYZE would be great.\nIn my experience first problem with the first db app is no indexes used \nin joining.\n\n-- \n-- Andrei Reinus", "msg_date": "Mon, 24 Jan 2005 11:22:13 +0200", "msg_from": "Andrei Reinus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performance of db?" }, { "msg_contents": "> I then re-wrote the page to use a single select query to call all the\n> information needed by PHP to draw the screen. That managed to shave it\n> down to 3.5 seconds... but this so far is as fast as I can get the\n> page to load. Have tried vacuuming and creating indexes but to no\n> avail. (increasing shared mem buffers yet to be done)\n\nIf you call this select statement directly from psql instead of through\nthe PHP thing, does timing change?\n\n(just to make sure, time is actually spent in the query and not\nsomewhere else)\n\nPS: use \\timing in psql to see timing information\n\nBye, Chris.\n\n\n", "msg_date": "Mon, 24 Jan 2005 11:38:47 +0100", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performance of db?" }, { "msg_contents": "Thanks for the replies guys,\n\nChris -\nvery cool feature timing - didnt know about that one. 
Appears to be taking \nthe following times in pulling up the page:\nweb browser: 1.15 sec\npostgres: 1.52 sec\nother: 0.83 sec\n\nAndrew:\nQuery looks like the following:\n\nexplain analyse SELECT\n\njob.*,\ncustomer.*,\nubd.suburb, location.*,\nstreet.street,\nlocation.designation_no,\na1.initials as surveyor,\na2.initials as draftor,\nprices.*,\nplans.*\n\nFROM\n\njob,\nlogin a1,\nlogin a2,\nprices,\nlocation,\nubd,\nplans\n\nWHERE\n\n(\na1.code = job.surveyor_no AND\na2.code = job.draftor_no AND\njob.customer_no = customer.customer_no AND\njob.location_no = location.location_no AND\nlocation.suburb_no = ubd.suburb_id AND\nlocation.street_no = street.street_no AND\njob.customer_no = customer.customer_no AND\njob.price_id = prices.pricelist_id AND\njob.price_revision = prices.revision AND\nlocation.plan_no = plans.number AND\nlocation.plan_type = plans.plantype AND\n\n( (job.jobbookflag <> 'flagged') AND ( job.status = 'normal' ) ))\n\nORDER BY job_no DESC;\n\n\n\n \n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=566.31..567.06 rows=298 width=2626) (actual \ntime=1378.38..1380.08 rows=353 loops=1)\n Sort Key: job.job_no\n -> Hash Join (cost=232.59..554.06 rows=298 width=2626) (actual \ntime=124.96..1374.12 rows=353 loops=1)\n Hash Cond: (\"outer\".suburb_no = \"inner\".suburb_id)\n -> Hash Join (cost=221.45..519.06 rows=288 width=2606) (actual \ntime=118.60..1187.87 rows=353 loops=1)\n Hash Cond: (\"outer\".street_no = \"inner\".street_no)\n -> Hash Join (cost=204.79..496.64 rows=287 width=2587) \n(actual time=108.16..997.57 rows=353 loops=1)\n Hash Cond: (\"outer\".surveyor_no = \"inner\".code)\n -> Hash Join (cost=203.21..490.05 rows=287 \nwidth=2573) (actual time=106.89..823.47 rows=353 loops=1)\n Hash Cond: (\"outer\".customer_no = \n\"inner\".customer_no)\n -> Hash Join (cost=159.12..440.93 rows=287 \nwidth=2291) (actual time=92.16..654.51 rows=353 loops=1)\n Hash Cond: (\"outer\".draftor_no = \n\"inner\".code)\n -> Hash Join (cost=157.55..434.33 \nrows=287 width=2277) (actual time=90.96..507.34 rows=353 loops=1)\n Hash Cond: (\"outer\".price_id = \n\"inner\".pricelist_id)\n Join Filter: (\"outer\".price_revision \n= \"inner\".revision)\n -> Hash Join (cost=142.95..401.01 \nrows=336 width=2150) (actual time=82.57..377.87 rows=353 loops=1)\n Hash Cond: (\"outer\".plan_no = \n\"inner\".number)\n Join Filter: (\"outer\".plan_type \n= \"inner\".plantype)\n -> Hash Join \n(cost=25.66..272.20 rows=418 width=2110) (actual time=14.58..198.50 rows=353 \nloops=1)\n Hash Cond: \n(\"outer\".location_no = \"inner\".location_no)\n -> Seq Scan on job \n(cost=0.00..238.18 rows=418 width=2029) (actual time=0.31..95.21 rows=353 \nloops=1)\n Filter: \n((jobbookflag <> 'flagged'::character varying) AND (status = \n'normal'::character varying))\n -> Hash \n(cost=23.53..23.53 rows=853 width=81) (actual time=13.91..13.91 rows=0 \nloops=1)\n -> Seq Scan on \n\"location\" (cost=0.00..23.53 rows=853 width=81) (actual time=0.03..8.92 \nrows=853 loops=1)\n -> Hash (cost=103.43..103.43 \nrows=5543 width=40) (actual time=67.55..67.55 rows=0 loops=1)\n -> Seq Scan on plans \n(cost=0.00..103.43 rows=5543 width=40) (actual time=0.01..36.89 rows=5544 \nloops=1)\n -> Hash (cost=13.68..13.68 rows=368 \nwidth=127) (actual time=7.98..7.98 rows=0 loops=1)\n -> Seq Scan on prices \n(cost=0.00..13.68 rows=368 width=127) (actual time=0.03..5.83 rows=368 \nloops=1)\n -> Hash 
(cost=1.46..1.46 rows=46 \nwidth=14) (actual time=0.57..0.57 rows=0 loops=1)\n -> Seq Scan on login a2 \n(cost=0.00..1.46 rows=46 width=14) (actual time=0.02..0.31 rows=46 loops=1)\n -> Hash (cost=42.07..42.07 rows=807 width=282) \n(actual time=14.24..14.24 rows=0 loops=1)\n -> Seq Scan on customer (cost=0.00..42.07 \nrows=807 width=282) (actual time=0.03..9.03 rows=807 loops=1)\n -> Hash (cost=1.46..1.46 rows=46 width=14) (actual \ntime=0.57..0.57 rows=0 loops=1)\n -> Seq Scan on login a1 (cost=0.00..1.46 \nrows=46 width=14) (actual time=0.02..0.31 rows=46 loops=1)\n -> Hash (cost=14.53..14.53 rows=853 width=19) (actual \ntime=9.79..9.79 rows=0 loops=1)\n -> Seq Scan on street (cost=0.00..14.53 rows=853 \nwidth=19) (actual time=0.01..5.12 rows=853 loops=1)\n -> Hash (cost=9.91..9.91 rows=491 width=20) (actual \ntime=5.73..5.73 rows=0 loops=1)\n -> Seq Scan on ubd (cost=0.00..9.91 rows=491 width=20) \n(actual time=0.02..2.98 rows=491 loops=1)\n Total runtime: 1383.99 msec\n(39 rows)\n\nTime: 1445.80 ms\n\n\n\nI tried setting up 10-15 indexes yesterday, but couldn't see they were doing \nanything. I have since deleted them (on the premise that I didn't have a \nclue what I was doing).\n\nI'm not actually running any keys in this database... would that be a \nsimpler way of running my queries? I only learnt postgres / unix from \nscratch a year ago so my db setup and queries is probably pretty messy :)\n\nThanks,\nDave\[email protected]\n\n\n\n\n\n----- Original Message ----- \nFrom: \"Andrei Reinus\" <[email protected]>\nTo: \"SpaceBallOne\" <[email protected]>\nCc: <[email protected]>\nSent: Monday, January 24, 2005 5:22 PM\nSubject: Re: [PERFORM] poor performance of db?\n\n\n> SpaceBallOne wrote:\n>\n>> Hello everyone,\n>>\n>> First time poster to the mailing list here.\n>>\n>> We have been running pgsql for about a year now at a pretty basic\n>> level (I guess) as a backend for custom\n>> web (intranet) application software. Our database so far is a \"huge\"\n>> (note sarcasm) 10 Mb containing of about 6 or so principle tables.\n>>\n>> Our 'test' screen we've been using loads a 600kb HTML document which\n>> is basically a summary of our client's orders. It took originally 11.5\n>> seconds to load in internet explorer (all 10.99 seconds were pretty\n>> much taken up by postgres processes on a freebsd server).\n>>\n>> I then re-wrote the page to use a single select query to call all the\n>> information needed by PHP to draw the screen. That managed to shave it\n>> down to 3.5 seconds... but this so far is as fast as I can get the\n>> page to load. Have tried vacuuming and creating indexes but to no\n>> avail. (increasing shared mem buffers yet to be done)\n>>\n>> Now heres the funny bit ...\n>>\n>> Every time I tested an idea to speed it up, I got exactly the same\n>> loading time on a Athlon 1800+, 256Mb RAM, 20Gb PATA computer as\n>> compared to a Dual Opteron 246, 1Gb RAM, 70Gb WD Raptor SATA server.\n>> Now, why a dual opteron machine can't perform any faster than a lowly\n>> 1800+ athlon in numerous tests is completely beyond me .. increased\n>> memory and RAID 0 disc configurations so far have not resulted in any\n>> significant performance gain in the opteron server.\n>>\n>> Do these facts sound right? If postgres is meant to be a 200Gb\n>> industrial strength database, should it really be taking this long\n>> pulling 600kb worth of info from a 10Mb database? And why no\n>> performance difference between two vastly different hardware spec'd\n>> computers??? 
Am I missing some vital postgres.conf setting??\n>>\n>> Any advice welcome.\n>>\n>> Thanks,\n>> Dave\n>> [email protected] <mailto:[email protected]>\n>>\n>\n> Could you give us a bit more info.\n> What you are trying to do. EXPLAIN ANALYZE would be great.\n> In my experience first problem with the first db app is no indexes used\n> in joining.\n>\n> -- \n> -- Andrei Reinus\n>\n>\n\n\n--------------------------------------------------------------------------------\n\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n> \n", "msg_date": "Tue, 25 Jan 2005 09:22:36 +0800", "msg_from": "\"SpaceBallOne\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor performance of db?" }, { "msg_contents": "Thanks for the reply John,\n\nThere are approximately 800 rows total in our job table (which stays \napproximately the same because 'completed' jobs get moved to a 'job_archive' \ntable).The other jobs not shown by the specific query could be on backorder \nstatus, temporary deleted status, etc etc.\n\nYou are correct in assuming the _id and _no (stands for 'number') fields are \nunique - this was one of the first pages I built when I started learning \npostgres, so not knowing how to set up primary and foriegn keys at the time, \nI did it that way ... it is normalised to a point (probably rather sloppy, \nbut its a juggling act between learning on the fly, what I'd like to have, \nand time constraints of being the only I.T. guy in the company!)...\n\nI think I will definitely focus on converting my database and php pages to \nusing proper primary keys in postgres - especially if they automatically \nindex themselves. I didn't do a vacuum analyse on them so that may explain \nwhy they didn't seem to do much.\n\nThanks,\nDave\[email protected]\n\n\n\n----- Original Message ----- \nFrom: \"John Arbash Meinel\" <[email protected]>\nTo: \"SpaceBallOne\" <[email protected]>\nSent: Tuesday, January 25, 2005 9:56 AM\nSubject: Re: [PERFORM] poor performance of db?\n\nSpaceBallOne wrote:\n\n>\n>\n> I tried setting up 10-15 indexes yesterday, but couldn't see they were\n> doing anything. I have since deleted them (on the premise that I\n> didn't have a clue what I was doing).\n\nDid you VACUUM ANALYZE after you created the indexes? It really depends\non how many rows you need vs how many rows are in the table. If you are\ntrying to show everything in the tables, then it won't help.\n\nI can tell that your query is returning 353 rows. How many rows total\ndo you have? I think the rule is that indexes help when you need < 10%\nof your data.\n\n From what I can see, it looks like all of the *_no columns, and *_id\ncolumns (which are basically your keys), would be helped by having an\nindex on them.\n\n>\n> I'm not actually running any keys in this database... would that be a\n> simpler way of running my queries? I only learnt postgres / unix from\n> scratch a year ago so my db setup and queries is probably pretty\n> messy :)\n>\nI would probably think that you would want a \"primary key\" on every\ntable, and this would be your column for references. This way you can\nget referential integrity, *and* it automatically creates an index.\n\nFor instance, the job table could be:\n\ncreate table job (\n id serial primary key,\n surveyor_id integer references surveyor(id),\n draftor_id integer references draftor(id),\n ...\n);\n\nThen your other tables would also need an id field. 
I can't say much\nmore without looking deeper, but from the looks of it, all of your \"_no\"\nand \"_id\" references should probably be referencing a primary key on the\nother table. Personally, I always name it \"id\" and \"_id\", but if \"_no\"\nmeans something to you, then you certainly could keep it.\n\nIf these entries are not unique, then probably your database isn't\nproperly normalized.\nJohn\n=:->\n\n> Thanks,\n> Dave\n> [email protected]\n>\n\n", "msg_date": "Tue, 25 Jan 2005 10:31:10 +0800", "msg_from": "\"SpaceBallOne\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: poor performance of db?" }, { "msg_contents": "SpaceBallOne wrote:\n\n> Thanks for the reply John,\n>\n> There are approximately 800 rows total in our job table (which stays\n> approximately the same because 'completed' jobs get moved to a\n> 'job_archive' table).The other jobs not shown by the specific query\n> could be on backorder status, temporary deleted status, etc etc.\n>\n> You are correct in assuming the _id and _no (stands for 'number')\n> fields are unique - this was one of the first pages I built when I\n> started learning postgres, so not knowing how to set up primary and\n> foriegn keys at the time, I did it that way ... it is normalised to a\n> point (probably rather sloppy, but its a juggling act between learning\n> on the fly, what I'd like to have, and time constraints of being the\n> only I.T. guy in the company!)...\n>\n> I think I will definitely focus on converting my database and php\n> pages to using proper primary keys in postgres - especially if they\n> automatically index themselves. I didn't do a vacuum analyse on them\n> so that may explain why they didn't seem to do much.\n\n\nYou probably can add them now if you don't want to do a lot of redesign.\nALTER TABLE job ADD PRIMARY KEY (id);\n\nIf they are not unique this will cause problems, but as they should be\nunique, I think it will work.\n\nI'm not sure how much help indexes will be if you only have 800 rows,\nand your queries use 300+ of them.\n\nYou might need re-think the query/table design.\n\nYou might try doing nested queries, or explicit joins, rather than one\nbig query with a WHERE clause.\n\nMeaning do stuff like:\n\nSELECT\n (job JOIN customer ON job.customer_no = customer.customer_no) as jc\n JOIN location on jc.location_no = location.location_no\n...\n\nI also see that the planner seems to mis-estimate the number of rows in\nsome cases. Like here:\n\n> -> Hash (cost=14.53..14.53 rows=853 width=19) (actual\n> time=9.79..9.79 rows=0 loops=1)\n> -> Seq Scan on street (cost=0.00..14.53 rows=853\n> width=19) (actual time=0.01..5.12 rows=853 loops=1)\n> -> Hash (cost=9.91..9.91 rows=491 width=20) (actual\n> time=5.73..5.73 rows=0 loops=1)\n> -> Seq Scan on ubd (cost=0.00..9.91 rows=491 width=20)\n> (actual time=0.02..2.98 rows=491\n\nWhere it thinks the hash will return all of the rows from the sequential\nscan, when in reality it returns none.\n\nI think problems with the planner fall into 3 categories.\n\n 1. You didn't VACUUM ANALYZE.\n 2. You did, but the planner doesn't keep sufficient statistics (ALTER\n TABLE job ALTER COLUMN no SET STATISTICS <a number>)\n 3. You're join needs cross column statistics, which postgres doesn't\n support (yet).\n\nIf you only have 800 rows, I don't think you have to worry about\nstatistics, so that leaves things at 1 or 3. 
If you did do 1, then I\ndon't know what to tell you.\n\nJohn\n=:->\n\nPS> I'm not a guru at this stuff, so some of what I say may be wrong.\nBut hopefully I point you in the right direction.\n\n\n>\n> Thanks,\n> Dave\n> [email protected]\n>", "msg_date": "Mon, 24 Jan 2005 21:15:28 -0600", "msg_from": "John Arbash Meinel <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performance of db?" }, { "msg_contents": "\n\n> Every time I tested an idea to speed it up, I got exactly the same \n> loading time on a Athlon 1800+, 256Mb RAM, 20Gb PATA computer as \n> compared to a Dual Opteron 246, 1Gb RAM, 70Gb WD Raptor SATA server. \n> Now, why a dual opteron machine can't perform any faster than a lowly \n> 1800+ athlon in numerous tests is completely beyond me ... increased \n> memory and RAID 0 disc configurations so far have not resulted in any \n> significant performance gain in the opteron server.\n\n\tHow many rows does the query return ?\n\n\tMaybe a lot of time is spent, hidden in the PHP libraries, converting the \nrows returned by psql into PHP objects.\n\n\tYou should try that :\n\n\tEXPLAIN ANALYZE SELECT your query\n\t-> time is T1\n\n\tCREATE TABLE cache AS SELECT your query\n\tEXPLAIN ANALYZE SELECT * FROM cache\n\t-> time is T2 (probably very small)\n\n\tNow in your PHP script replace SELECT your query by SELECT * FROM cache. \nHow much does the final page time changes ? This will tell you the time \nspend in the postgres engine, not in data transmission and PHPing. It will \ntell wat you can gain optimizing the query.\n", "msg_date": "Wed, 26 Jan 2005 21:02:42 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: poor performance of db?" } ]
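Following the advice in this thread, the tables in the query shown earlier could be given primary keys directly (each ADD PRIMARY KEY also builds a unique index). This is a sketch only: it assumes the listed columns really are unique, and on older releases they may need to be declared NOT NULL first.

    ALTER TABLE job      ADD PRIMARY KEY (job_no);
    ALTER TABLE customer ADD PRIMARY KEY (customer_no);
    ALTER TABLE location ADD PRIMARY KEY (location_no);
    ALTER TABLE street   ADD PRIMARY KEY (street_no);
    ALTER TABLE ubd      ADD PRIMARY KEY (suburb_id);
    ALTER TABLE login    ADD PRIMARY KEY (code);
    ALTER TABLE prices   ADD PRIMARY KEY (pricelist_id, revision);
    ALTER TABLE plans    ADD PRIMARY KEY (number, plantype);

    VACUUM ANALYZE;   -- refresh planner statistics so the new indexes are considered

Re-running the query under EXPLAIN ANALYZE (with \timing enabled in psql) then shows whether the joins have switched from sequential scans to index scans.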
[ { "msg_contents": "Hi,\n I have a query which is executed using ilike. The query values are\nreceived from user and it is executed using PreparedStatement.\nCurrently all queries are executed as it is using iilike irrespective\nof whether it have a pattern matching character or not. Can using =\ninstead of ilike boot performance ?. If creating index can help then\nhow the index should be created on lower case or uppercase ?.\n\nrgds\nAntony Paul\n", "msg_date": "Mon, 24 Jan 2005 14:48:10 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "How to boost performance of ilike queries ?" }, { "msg_contents": "On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:\n> Hi,\n> I have a query which is executed using ilike. The query values are\n> received from user and it is executed using PreparedStatement.\n> Currently all queries are executed as it is using iilike irrespective\n> of whether it have a pattern matching character or not. Can using =\n> instead of ilike boot performance ?. If creating index can help then\n> how the index should be created on lower case or uppercase ?.\n> \nIt depends on the type of queries you are doing.\n\nchanging it to something like lower(column) like lower('text%'), and\ncreating an index on lower(column) will give you much better performance.\n\nIf you have % in the middle of the query, it will still be slow, but I assume that is not\nthe general case.\n\nI am not sure what the effect of it being prepared will be, however I've had much success\nwith the method above without the queries being prepared. Others may be able to offer advice\nabout if prepare will effect it.\n\nRegards\n\nRussell Smith\n", "msg_date": "Mon, 24 Jan 2005 20:58:54 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of ilike queries ?" }, { "msg_contents": "Creating an index and using lower(column) does not change the explain\nplan estimates.\nIt seems that it is not using index for like or ilike queries\nirrespective of whether it have a pattern matching character in it or\nnot. (using PostgreSQL 7.3.3)\n\nOn googling I found this thread \n\nhttp://archives.postgresql.org/pgsql-sql/2004-11/msg00285.php\n\nIt says that index is not used if the search string begins with a % symbol.\n\nrgds\nAntony Paul\n\nOn Mon, 24 Jan 2005 20:58:54 +1100, Russell Smith <[email protected]> wrote:\n> On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:\n> > Hi,\n> > I have a query which is executed using ilike. The query values are\n> > received from user and it is executed using PreparedStatement.\n> > Currently all queries are executed as it is using iilike irrespective\n> > of whether it have a pattern matching character or not. Can using =\n> > instead of ilike boot performance ?. If creating index can help then\n> > how the index should be created on lower case or uppercase ?.\n> > \n> It depends on the type of queries you are doing.\n> \n> changing it to something like lower(column) like lower('text%'), and\n> creating an index on lower(column) will give you much better performance.\n> \n> If you have % in the middle of the query, it will still be slow, but I assume that is not\n> the general case.\n> \n> I am not sure what the effect of it being prepared will be, however I've had much success\n> with the method above without the queries being prepared. 
Others may be able to offer advice\n> about if prepare will effect it.\n> \n> Regards\n> \n> Russell Smith\n>\n", "msg_date": "Tue, 25 Jan 2005 13:53:26 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to boost performance of ilike queries ?" }, { "msg_contents": "On Tue, 25 Jan 2005 07:23 pm, Antony Paul wrote:\n> Creating an index and using lower(column) does not change the explain\n> plan estimates.\n> It seems that it is not using index for like or ilike queries\n> irrespective of whether it have a pattern matching character in it or\n> not. (using PostgreSQL 7.3.3)\n> \n> On googling I found this thread \n> \n> http://archives.postgresql.org/pgsql-sql/2004-11/msg00285.php\n> \n> It says that index is not used if the search string begins with a % symbol.\n\nWhat exactly are the type of like queries you are going? there is a solution\nfor having the % at the start, but you can win everyway.\n\n> \n> rgds\n> Antony Paul\n> \n> On Mon, 24 Jan 2005 20:58:54 +1100, Russell Smith <[email protected]> wrote:\n> > On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:\n> > > Hi,\n> > > I have a query which is executed using ilike. The query values are\n> > > received from user and it is executed using PreparedStatement.\n> > > Currently all queries are executed as it is using iilike irrespective\n> > > of whether it have a pattern matching character or not. Can using =\n> > > instead of ilike boot performance ?. If creating index can help then\n> > > how the index should be created on lower case or uppercase ?.\n> > > \n> > It depends on the type of queries you are doing.\n> > \n> > changing it to something like lower(column) like lower('text%'), and\n> > creating an index on lower(column) will give you much better performance.\n> > \n> > If you have % in the middle of the query, it will still be slow, but I assume that is not\n> > the general case.\n> > \n> > I am not sure what the effect of it being prepared will be, however I've had much success\n> > with the method above without the queries being prepared. Others may be able to offer advice\n> > about if prepare will effect it.\n> > \n> > Regards\n> > \n> > Russell Smith\n> >\n> \n> \n", "msg_date": "Tue, 25 Jan 2005 19:49:12 +1100", "msg_from": "Russell Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of ilike queries ?" }, { "msg_contents": "Actually the query is created like this.\nUser enters the query in a user interface. User can type any character\nin the query criteria. ie. % and _ can be at any place. User have the\nfreedom to choose query columns as well. The query is agianst a single\ntable .\n\nrgds\nAntony Paul\n\n\nOn Tue, 25 Jan 2005 19:49:12 +1100, Russell Smith <[email protected]> wrote:\n> On Tue, 25 Jan 2005 07:23 pm, Antony Paul wrote:\n> > Creating an index and using lower(column) does not change the explain\n> > plan estimates.\n> > It seems that it is not using index for like or ilike queries\n> > irrespective of whether it have a pattern matching character in it or\n> > not. (using PostgreSQL 7.3.3)\n> >\n> > On googling I found this thread\n> >\n> > http://archives.postgresql.org/pgsql-sql/2004-11/msg00285.php\n> >\n> > It says that index is not used if the search string begins with a % symbol.\n> \n> What exactly are the type of like queries you are going? 
there is a solution\n> for having the % at the start, but you can win everyway.\n> \n> >\n> > rgds\n> > Antony Paul\n> >\n> > On Mon, 24 Jan 2005 20:58:54 +1100, Russell Smith <[email protected]> wrote:\n> > > On Mon, 24 Jan 2005 08:18 pm, Antony Paul wrote:\n> > > > Hi,\n> > > > I have a query which is executed using ilike. The query values are\n> > > > received from user and it is executed using PreparedStatement.\n> > > > Currently all queries are executed as it is using iilike irrespective\n> > > > of whether it have a pattern matching character or not. Can using =\n> > > > instead of ilike boot performance ?. If creating index can help then\n> > > > how the index should be created on lower case or uppercase ?.\n> > > >\n> > > It depends on the type of queries you are doing.\n> > >\n> > > changing it to something like lower(column) like lower('text%'), and\n> > > creating an index on lower(column) will give you much better performance.\n> > >\n> > > If you have % in the middle of the query, it will still be slow, but I assume that is not\n> > > the general case.\n> > >\n> > > I am not sure what the effect of it being prepared will be, however I've had much success\n> > > with the method above without the queries being prepared. Others may be able to offer advice\n> > > about if prepare will effect it.\n> > >\n> > > Regards\n> > >\n> > > Russell Smith\n> > >\n> >\n> >\n>\n", "msg_date": "Tue, 25 Jan 2005 15:39:56 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to boost performance of ilike queries ?" }, { "msg_contents": "On Tue, 25 Jan 2005, Antony Paul wrote:\n\n> Creating an index and using lower(column) does not change the explain\n> plan estimates.\n> It seems that it is not using index for like or ilike queries\n> irrespective of whether it have a pattern matching character in it or\n> not. (using PostgreSQL 7.3.3)\n\nI believe in 7.3.x an index is only considered for like in \"C\" locale, I\nthink the *_pattern_op opclasses were added in 7.4 for which you can make\nindexes that are considered for non wildcard starting search strings in\nnon \"C\" locales. And it may have trouble doing estimates before 8.0 on the\nfunctional index because of lack of statistics. You may want to consider\nan upgrade once 8.0 shakes out a bit.\n\n", "msg_date": "Tue, 25 Jan 2005 06:01:25 -0800 (PST)", "msg_from": "Stephan Szabo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of ilike queries ?" } ]
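A sketch of the lower()-plus-index approach recommended in this thread, with a hypothetical customers table; the text_pattern_ops opclass Stephan refers to exists from 7.4 onward (in a non-C locale it is what lets LIKE prefix searches use the index), and a pattern starting with % still cannot use a btree index.

    -- Hypothetical table and column names.
    CREATE INDEX customers_lower_name_idx
        ON customers (lower(name) text_pattern_ops);
    ANALYZE customers;

    -- Case-insensitive prefix search that can use the index:
    SELECT * FROM customers WHERE lower(name) LIKE lower('Smith%');

    -- A leading wildcard defeats the index and falls back to a sequential scan:
    SELECT * FROM customers WHERE lower(name) LIKE '%mith';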
[ { "msg_contents": "Russell wrote:\n> I am not sure what the effect of it being prepared will be, however\nI've\n> had much success\n> with the method above without the queries being prepared. Others may\nbe\n> able to offer advice\n> about if prepare will effect it.\n> \nThere are two general cases I tend to use prepared queries. First case\nis when there is an extremely complex plan generation step that you want\nto skip. IMO, this is fairly rare in the normal course of doing things.\n\nSecond case is when you have a relatively simple query that gets\nexecuted very, very frequently, such as select a,b,c from t where k.\nEven though the query plan is simple, using a prepared query can shave\n5-15% off your query time depending on various factors (on a low latency\nnetwork). If you fire off the statement a lot, this adds up. Not\ngenerally worthwhile to go this route if you are executing over a high\nlatency network like the internet.\n\nIf your application behavior can benefit from the second case, it can\nprobably benefit from using parse/bind as well...use ExecPrepared, etc.\nlibpq interface functions.\n\nThe cumulative savings of using ExecPrepared() vs. using vanilla\nPQExec() (for simple queries over a high latency network) can be 50% or\nbetter. This is both from client's perspective and in server CPU load\n(especially when data is read from cache). This is most interesting to\ndriver and middleware writers who broker data exchange between the\napplication and the data. The performance minded application developer\n(who can make calls to the connection object) can take advantage of this\nhowever.\n\nMerlin\n", "msg_date": "Mon, 24 Jan 2005 09:01:49 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to boost performance of ilike queries ?" }, { "msg_contents": "I used PreparedStatements to avoid SQL injection attack and it is the\nbest way to do in JDBC.\n\nrgds\nAntony Paul\n\n\nOn Mon, 24 Jan 2005 09:01:49 -0500, Merlin Moncure\n<[email protected]> wrote:\n> Russell wrote:\n> > I am not sure what the effect of it being prepared will be, however\n> I've\n> > had much success\n> > with the method above without the queries being prepared. Others may\n> be\n> > able to offer advice\n> > about if prepare will effect it.\n> > \n> There are two general cases I tend to use prepared queries. First case\n> is when there is an extremely complex plan generation step that you want\n> to skip. IMO, this is fairly rare in the normal course of doing things.\n> \n> Second case is when you have a relatively simple query that gets\n> executed very, very frequently, such as select a,b,c from t where k.\n> Even though the query plan is simple, using a prepared query can shave\n> 5-15% off your query time depending on various factors (on a low latency\n> network). If you fire off the statement a lot, this adds up. Not\n> generally worthwhile to go this route if you are executing over a high\n> latency network like the internet.\n> \n> If your application behavior can benefit from the second case, it can\n> probably benefit from using parse/bind as well...use ExecPrepared, etc.\n> libpq interface functions.\n> \n> The cumulative savings of using ExecPrepared() vs. using vanilla\n> PQExec() (for simple queries over a high latency network) can be 50% or\n> better. This is both from client's perspective and in server CPU load\n> (especially when data is read from cache). 
This is most interesting to\n> driver and middleware writers who broker data exchange between the\n> application and the data. The performance minded application developer\n> (who can make calls to the connection object) can take advantage of this\n> however.\n> \n> Merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n>\n", "msg_date": "Mon, 24 Jan 2005 19:44:36 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of ilike queries ?" } ]
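A server-side illustration of Merlin's second case, using SQL-level PREPARE/EXECUTE with his hypothetical "select a, b, c from t where k" query; a JDBC PreparedStatement or libpq's prepared-execution functions achieve the same effect through the wire protocol.

    -- Prepare once per session/connection:
    PREPARE fetch_t (int4) AS
        SELECT a, b, c FROM t WHERE k = $1;

    -- Execute it many times, skipping the parse/plan step on each call:
    EXECUTE fetch_t(42);
    EXECUTE fetch_t(43);

    DEALLOCATE fetch_t;

As the thread notes, this mainly pays off for simple statements fired very frequently over a low-latency connection.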
[ { "msg_contents": "Alex wrote:\n> How do you create a temporary view that has only a small subset of the\n> data from the DB init? (Links to docs are fine - I can read ;). My\n> query isn't all that complex, and my number of records might be from\n> 10 to 2k depending on how I implement it.\n\nWell, you can't. My point was that the traditional query/view approach\nis often more appropriate for these cases. \n\nCursors are really designed to provide an in-transaction working set.\nBecause of this, they provide the luxury of absolute addressing which is\nnormally impossible in SQL. \n\nQueries allow for relative addressing, in other words 'fetch me the next\nc of x based on y'. This is a good thing, because it forces the\napplication developer to consider changes that happen from other users\nwhile browsing a dataset. Applications that don't use transactions\nshould not provide any guarantees about the data in between queries like\nthe number of records matching a certain criteria. This is a trap that\nmany developers fall into, especially when coming from flat file\ndatabases that use to allow this. This puts particularly nasty\nconstraints on web application developers who are unable to hold a\ntransaction between page refreshes. However this just a variant of SQL\ndeveloper trap #2, which is that you are not supposed to hold a\ntransaction open waiting for user input.\n\nIn your particular case IMO what you really need is a materialized view.\nCurrently, it is possible to rig them up in a fashion with plgsql that\nmay or may not meet your requirements. Given some careful thought,\nmat-views can be used to solve all kinds of nasty performance related\nissues (and it all boils down to performance, otherwise we'd all just\nuse limit/offset). \n\nMerlin\n\n", "msg_date": "Mon, 24 Jan 2005 10:30:22 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "In an attempt to throw the authorities off his trail, [email protected] (\"Merlin Moncure\") transmitted:\n> Alex wrote:\n>> How do you create a temporary view that has only a small subset of the\n>> data from the DB init? (Links to docs are fine - I can read ;). My\n>> query isn't all that complex, and my number of records might be from\n>> 10 to 2k depending on how I implement it.\n>\n> Well, you can't. 
My point was that the traditional query/view\n> approach is often more appropriate for these cases.\n\nActually, you can if you assume you can \"temporarily materialize\" that\nview.\n\nYou take the initial query and materialize it into a temporary table\nwhich can then be used to browse \"detail.\"\n\nThus, suppose you've got a case where the selection criteria draw in\n8000 objects/transactions, of which you only want to fit 20/page.\n\nIt's ugly and slow to process the 15th page, and you essentially\nreprocess the whole set from scratch each time:\n\n select [details] from [big table] where [criteria]\n order by [something]\n offset 280 limit 20;\n\nInstead, you might start out by doing:\n\n select [key fields] into temp table my_query\n from [big table] where [criteria];\n\n create index my_query_idx on my_query(interesting fields);\n\nWith 8000 records, the number of pages in the table will correspond\nroughly to the number of bytes per record which is probably pretty\nsmall.\n\nThen, you use a join on my_query to pull the bits you want:\n\n select [big table.details] from [big table], \n [select * from my_query order by [something] offset 280 limit 20]\n where [join criteria between my_query and big table]\n order by [something];\n\nFor this to be fast is predicated on my_query being compact, but that\nshould surely be so.\n\nThe big table may be 20 million records; for the result set to be even\nvaguely browsable means that my_query ought to be relatively small so\nyou can pull subsets reasonably efficiently.\n\nThis actually has a merit over looking at a dynamic, possibly-changing\nbig table that you won't unexpectedly see the result set changing\nsize.\n\nThis strikes me as a pretty slick way to handle \"data warehouse-style\"\nbrowsing...\n-- \noutput = (\"cbbrowne\" \"@\" \"gmail.com\")\nhttp://www.ntlug.org/~cbbrowne/oses.html\nThe first cup of coffee recapitulates phylogeny.\n", "msg_date": "Thu, 27 Jan 2005 00:10:24 -0500", "msg_from": "Christopher Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
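A concrete sketch of the "temporarily materialize" pattern Christopher outlines, with hypothetical table and column names; the key-only temp table stays small, so each page fetch only re-sorts and joins a handful of keys instead of re-running the expensive selection from scratch.

    -- 1. Materialize just the keys of the matching rows (say ~8000 of them):
    SELECT order_id, order_date
    INTO TEMP TABLE my_query
    FROM orders
    WHERE status = 'open' AND region = 'EU';

    CREATE INDEX my_query_idx ON my_query (order_date, order_id);
    ANALYZE my_query;

    -- 2. Fetch one page of detail rows by joining back to the big table:
    SELECT o.*
    FROM orders o
    JOIN (SELECT order_id FROM my_query
          ORDER BY order_date LIMIT 20 OFFSET 280) AS page USING (order_id)
    ORDER BY o.order_date;

    -- The temp table lives until the session ends, so this suits in-session,
    -- warehouse-style browsing rather than stateless web page loads.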
[ { "msg_contents": "Hi all,\n\nWe are developing some application that works with DB over JDBC. We've used\nMSSQL before and trying to migrate to PostgreSQL now. Unfortunately problems\nwith performance are found. MSSQL with default configuration looks like much\nfaster then PostgreSQL on the same hardware (PostgreSQL8 rc5 was used). I've\ntried to increase work_mem significant (work_mem = 262144) but it doesn't\nhelp. \nHere is result of simple benchmark. I have table \nCREATE TABLE elt_tcli_messagelog\n(\n connectionname varchar(64),\n msgseqnum int4,\n connectionmessageid int4,\n logtimestamp varchar(64),\n isfromcounterparty char(1),\n msgtype varchar(64),\n possdupflag char(1),\n isoutofsequence char(1),\n ordtrnid varchar(64),\n ordrqstid varchar(64),\n counterrequestid varchar(64),\n clordid varchar(64),\n origclordid varchar(64),\n execid varchar(64),\n exectranstype varchar(64),\n exectype varchar(64),\n ordstatus varchar(64),\n lastqty float8,\n orderqty float8,\n cumqty float8,\n leavesqty float8,\n sendercompid varchar(64),\n targetcompid varchar(64),\n tradeaccthrchy varchar(64),\n tradeacctid varchar(64),\n routedtransactiondestination varchar(64),\n originatingconnectionname varchar(64),\n originatingconnectionmsgid int4,\n instrument varchar(64),\n portfolio varchar(64),\n prevseqnum int4,\n \"Message\" text,\n nonmetadatafields text\n)\n\nwith about 8000 rows. For this table query:\n\nSELECT MAX(MsgSeqNum),MAX(LogTimestamp) FROM ELT_tcli_MessageLog \nWHERE LogTimestamp >= '0' AND IsFromCounterParty = 'Y' AND\nIsOutOfSequence = 'N' \n AND ConnectionName = 'DB_BENCHMARK' \n AND LogTimestamp IN (SELECT MAX(LogTimestamp) \n FROM ELT_tcli_MessageLog \n WHERE MsgSeqNum > 0 AND IsFromCounterParty = 'Y'\n\n AND IsOutOfSequence = 'N' AND\nConnectionName = 'DB_BENCHMARK')\n\ntakes about 1 second on MSSQL Server and 257 seconds on PostgreSQL one.\n\nDoes anybody have idea about reasons of such results?\n\nThanks,\nAlexander Dolgin.\n\n", "msg_date": "Mon, 24 Jan 2005 20:33:39 +0200", "msg_from": "\"Alexander Dolgin\" <[email protected]>", "msg_from_op": true, "msg_subject": "200 times slower then MSSQL??" }, { "msg_contents": "\n> with about 8000 rows. For this table query:\n> \n> SELECT MAX(MsgSeqNum),MAX(LogTimestamp) FROM ELT_tcli_MessageLog \n> WHERE LogTimestamp >= '0' AND IsFromCounterParty = 'Y' AND\n> IsOutOfSequence = 'N' \n> AND ConnectionName = 'DB_BENCHMARK' \n> AND LogTimestamp IN (SELECT MAX(LogTimestamp) \n> FROM ELT_tcli_MessageLog \n> WHERE MsgSeqNum > 0 AND IsFromCounterParty = 'Y'\n> \n> AND IsOutOfSequence = 'N' AND\n> ConnectionName = 'DB_BENCHMARK')\n> \n> takes about 1 second on MSSQL Server and 257 seconds on PostgreSQL one.\n> \n> Does anybody have idea about reasons of such results?\n\n1. Have you run vaccum analyze recently?\n2. Reply with the output of EXPLAIN ANALYZE SELECT...\n\nChris\n", "msg_date": "Tue, 25 Jan 2005 17:52:28 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 200 times slower then MSSQL??" }, { "msg_contents": "\"Alexander Dolgin\" <[email protected]> writes:\n> Does anybody have idea about reasons of such results?\n\nTry converting the MAX() functions to queries that will use indexes.\nSee FAQ entry 4.7 \"My queries are slow or don't make use of the\nindexes. Why?\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2005 13:02:23 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 200 times slower then MSSQL?? 
" }, { "msg_contents": "Hi,\nFirst it will be good if you supply some EXPLAIN ANALYZE results from \nyour query.\nSecond, do you created the indexes which can be used with WHERE conditions.\nAnd Third AFAK MAX doesn't use index. If you only need max then you can try:\n\nORDER BY .... DESC and LIMIT 1. But you can't use this if you want to \nselect the two max values at once.\nI am not an expert so if I am wrong, please someone to correct me.\n\nKaloyan\n\nAlexander Dolgin wrote:\n\n>Hi all,\n>\n>We are developing some application that works with DB over JDBC. We've used\n>MSSQL before and trying to migrate to PostgreSQL now. Unfortunately problems\n>with performance are found. MSSQL with default configuration looks like much\n>faster then PostgreSQL on the same hardware (PostgreSQL8 rc5 was used). I've\n>tried to increase work_mem significant (work_mem = 262144) but it doesn't\n>help. \n>Here is result of simple benchmark. I have table \n>CREATE TABLE elt_tcli_messagelog\n>(\n> connectionname varchar(64),\n> msgseqnum int4,\n> connectionmessageid int4,\n> logtimestamp varchar(64),\n> isfromcounterparty char(1),\n> msgtype varchar(64),\n> possdupflag char(1),\n> isoutofsequence char(1),\n> ordtrnid varchar(64),\n> ordrqstid varchar(64),\n> counterrequestid varchar(64),\n> clordid varchar(64),\n> origclordid varchar(64),\n> execid varchar(64),\n> exectranstype varchar(64),\n> exectype varchar(64),\n> ordstatus varchar(64),\n> lastqty float8,\n> orderqty float8,\n> cumqty float8,\n> leavesqty float8,\n> sendercompid varchar(64),\n> targetcompid varchar(64),\n> tradeaccthrchy varchar(64),\n> tradeacctid varchar(64),\n> routedtransactiondestination varchar(64),\n> originatingconnectionname varchar(64),\n> originatingconnectionmsgid int4,\n> instrument varchar(64),\n> portfolio varchar(64),\n> prevseqnum int4,\n> \"Message\" text,\n> nonmetadatafields text\n>)\n>\n>with about 8000 rows. For this table query:\n>\n>SELECT MAX(MsgSeqNum),MAX(LogTimestamp) FROM ELT_tcli_MessageLog \n>WHERE LogTimestamp >= '0' AND IsFromCounterParty = 'Y' AND\n>IsOutOfSequence = 'N' \n> AND ConnectionName = 'DB_BENCHMARK' \n> AND LogTimestamp IN (SELECT MAX(LogTimestamp) \n> FROM ELT_tcli_MessageLog \n> WHERE MsgSeqNum > 0 AND IsFromCounterParty = 'Y'\n>\n> AND IsOutOfSequence = 'N' AND\n>ConnectionName = 'DB_BENCHMARK')\n>\n>takes about 1 second on MSSQL Server and 257 seconds on PostgreSQL one.\n>\n>Does anybody have idea about reasons of such results?\n>\n>Thanks,\n>Alexander Dolgin.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to [email protected]\n>\n>\n> \n>\n", "msg_date": "Tue, 25 Jan 2005 20:09:24 +0200", "msg_from": "Kaloyan Iliev Iliev <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 200 times slower then MSSQL??" }, { "msg_contents": "\n> with about 8000 rows. For this table query:\n>\n> SELECT MAX(MsgSeqNum),MAX(LogTimestamp) FROM ELT_tcli_MessageLog\n> WHERE LogTimestamp >= '0' AND IsFromCounterParty = 'Y' AND\n> IsOutOfSequence = 'N'\n> AND ConnectionName = 'DB_BENCHMARK'\n> AND LogTimestamp IN (SELECT MAX(LogTimestamp)\n> FROM ELT_tcli_MessageLog\n> WHERE MsgSeqNum > 0 AND IsFromCounterParty = \n> 'Y'\n>\n> AND IsOutOfSequence = 'N' AND\n> ConnectionName = 'DB_BENCHMARK')\n>\n\n\tCan you explain (with words) what this query is supposed to return ? 
It \nis probably possible to write it in an entirely different way.\n\tBasically your problem is that max() in postgres does not use an index \nthe way you think it should.\n\t\"SELECT max(x) FROM t\" should be written \"SELECT x FROM t ORDER BY x DESC \nLIMIT 1\" to use the index. Depending on additional Where conditions, you \nshould add other columns to your index and also order-by clause.\n", "msg_date": "Wed, 26 Jan 2005 22:10:19 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 200 times slower then MSSQL??" } ]
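A sketch of the rewrite PFC suggests, using the poster's elt_tcli_messagelog columns; the index definition is an assumption about what would cover the WHERE clause, and the same ORDER BY ... DESC LIMIT 1 trick applies to logtimestamp with a suitable index, since MAX() in the versions discussed here cannot use an index on its own.

    -- Assumed supporting index for the repeated filter columns:
    CREATE INDEX messagelog_conn_seq_idx
        ON elt_tcli_messagelog (connectionname, isfromcounterparty,
                                isoutofsequence, msgseqnum);
    ANALYZE elt_tcli_messagelog;

    -- Index-friendly replacement for SELECT MAX(msgseqnum) ... :
    SELECT msgseqnum
    FROM elt_tcli_messagelog
    WHERE connectionname = 'DB_BENCHMARK'
      AND isfromcounterparty = 'Y'
      AND isoutofsequence = 'N'
      AND msgseqnum > 0
    ORDER BY connectionname DESC, isfromcounterparty DESC,
             isoutofsequence DESC, msgseqnum DESC
    LIMIT 1;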
[ { "msg_contents": "Hi,\n\nI noticed that reltuples are way off if\nI vacuum the table and analyze the table.\nAnd the data (296901) after vacuum seems \naccurate while\nthe reltuples (1.90744e+06)\nafter anlayze is too wrong.\n\nMy PG version is 7.3.2 (I know it is old).\n\nAny thought?\n\nThanks,\n\nmy_db=# analyze my_tab;\nANALYZE\nmy_db=# SELECT relname, relpages * 8 as size_kb,\nrelfilenode, reltuples\nmy_db=# FROM pg_class c1\nmy_db=# WHERE relkind = 'r'\nmy_db=# AND relname = 'my_tab';\n relname | size_kb | relfilenode | reltuples\n------------------+---------+-------------+-------------\n my_tab | 394952 | 211002264 | 1.90744e+06\n(1 row)\n\nmy_db=# select count(*) from my_tab;\n count\n--------\n 296694\n(1 row)\n\nmy_db=# vacuum verbose my_tab;\nINFO: --Relation public.my_tab--\nINFO: Index my_tab_pkey: Pages 5909; Tuples 296901:\nDeleted 6921.\n CPU 0.20s/0.19u sec elapsed 4.76 sec.\nINFO: Index my_tab_hid_state_idx: Pages 5835; Tuples\n297808: Deleted 6921.\n CPU 0.17s/0.07u sec elapsed 9.62 sec.\nINFO: Removed 6921 tuples in 310 pages.\n CPU 0.00s/0.01u sec elapsed 0.08 sec.\nINFO: Pages 49369: Changed 12, Empty 0; Tup 296901:\nVac 6921, Keep 0, UnUsed 1431662.\n Total CPU 1.71s/0.47u sec elapsed 28.48 sec.\nVACUUM\nmy_db=# SELECT relname, relpages * 8 as size_kb,\nrelfilenode, reltuples\nmy_db=# FROM pg_class c1\nmy_db=# WHERE relkind = 'r'\nmy_db=# AND relname = 'my_tab';\n relname | size_kb | relfilenode | reltuples\n------------------+---------+-------------+-----------\n my_tab | 394952 | 211002264 | 296901\n(1 row)\n\nmy_db=# analyze my_tab;\nANALYZE\nmy_db=# SELECT relname, relpages * 8 as size_kb,\nrelfilenode, reltuples\nmy_db=# FROM pg_class c1\nmy_db=# WHERE relkind = 'r'\nmy_db=# AND relname = 'my_tab';\n relname | size_kb | relfilenode | reltuples\n------------------+---------+-------------+-------------\n my_tab | 394952 | 211002264 | 1.90744e+06\n(1 row)\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Mail - Helps protect you from nasty viruses. \nhttp://promotions.yahoo.com/new_mail\n", "msg_date": "Mon, 24 Jan 2005 15:02:42 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "reltuples after vacuum and analyze" }, { "msg_contents": "Litao Wu <[email protected]> writes:\n> I noticed that reltuples are way off if\n> I vacuum the table and analyze the table.\n> And the data (296901) after vacuum seems \n> accurate while\n> the reltuples (1.90744e+06)\n> after anlayze is too wrong.\n\nVACUUM derives an exact count because it scans the whole table. ANALYZE\nsamples just a subset of the table and extrapolates. 
It would appear\nthat you've got radically different tuple densities in different parts\nof the table, and that's confusing ANALYZE.\n\n> My PG version is 7.3.2 (I know it is old).\n\n8.0's ANALYZE uses a new sampling method that we think is less prone\nto this error, though of course any sampling method will fail some of\nthe time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Jan 2005 19:26:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reltuples after vacuum and analyze " }, { "msg_contents": "Thanks,\n\nThen how to explain relpages \n(size_kb in result returned)?\n\nSELECT relname, relpages * 8 as size_kb,\nrelfilenode, reltuples\nFROM pg_class c1\nWHERE relkind = 'r'\nAND relname = 'my_tab';\n relname | size_kb | relfilenode | reltuples\n------------------+---------+-------------+-----------\n my_tab | 30088 | 266181583 | 165724\nanalyze my_tab;\n relname | size_kb | relfilenode | reltuples\n------------------+---------+-------------+-------------\n my_tab | 2023024 | 266181583 |\n1.12323e+07\nvacuum my_tab;\nSELECT relname, relpages * 8 as size_kb,\nrelfilenode, reltuples\nFROM pg_class c1\nWHERE relkind = 'r'\nAND relname = 'my_tab';\n relname | size_kb | relfilenode | reltuples\n------------------+---------+-------------+-----------\n my_tab | 2038016 | 266181583 | 189165\n(1 row)\n\n--- Tom Lane <[email protected]> wrote:\n\n> Litao Wu <[email protected]> writes:\n> > I noticed that reltuples are way off if\n> > I vacuum the table and analyze the table.\n> > And the data (296901) after vacuum seems \n> > accurate while\n> > the reltuples (1.90744e+06)\n> > after anlayze is too wrong.\n> \n> VACUUM derives an exact count because it scans the\n> whole table. ANALYZE\n> samples just a subset of the table and extrapolates.\n> It would appear\n> that you've got radically different tuple densities\n> in different parts\n> of the table, and that's confusing ANALYZE.\n> \n> > My PG version is 7.3.2 (I know it is old).\n> \n> 8.0's ANALYZE uses a new sampling method that we\n> think is less prone\n> to this error, though of course any sampling method\n> will fail some of\n> the time.\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nYahoo! Mail - Find what you need with new enhanced search.\nhttp://info.mail.yahoo.com/mail_250\n", "msg_date": "Tue, 25 Jan 2005 10:29:26 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reltuples after vacuum and analyze " }, { "msg_contents": "Litao Wu <[email protected]> writes:\n> Then how to explain relpages \n> (size_kb in result returned)?\n\nrelpages should be accurate in either case, since we get that by asking\nthe kernel (lseek).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2005 13:33:06 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reltuples after vacuum and analyze " }, { "msg_contents": "I know it is accurate.\nMy question is why the table takes \n2023024KB after analyzed?\nAnd why it does not shink to 30088 after vacuumed?\n\nI know \"vacuum full verbose\"\nwill force it shrink to\nreasonable size. 
But I do not understand\nwhy \"analyze\" bloats the table size so \nbig??\n\nPlease note all above commands are done within\nminutes and I truely do not believe the table\nof 189165 rows takes that much space.\n\nFurthermore, I notice last weekly \"vacuum full\"\neven did not reclaim the space back.\n\nThanks,\n\n--- Tom Lane <[email protected]> wrote:\n\n> Litao Wu <[email protected]> writes:\n> > Then how to explain relpages \n> > (size_kb in result returned)?\n> \n> relpages should be accurate in either case, since we\n> get that by asking\n> the kernel (lseek).\n> \n> \t\t\tregards, tom lane\n> \n\n\n__________________________________________________\nDo You Yahoo!?\nTired of spam? Yahoo! Mail has the best spam protection around \nhttp://mail.yahoo.com \n", "msg_date": "Tue, 25 Jan 2005 10:41:16 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reltuples after vacuum and analyze " }, { "msg_contents": "Litao Wu <[email protected]> writes:\n> reasonable size. But I do not understand\n> why \"analyze\" bloats the table size so \n> big??\n\nANALYZE won't bloat anything. I suppose you have other processes\ninserting or updating data in the table meanwhile.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2005 13:43:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: reltuples after vacuum and analyze " }, { "msg_contents": "Believe or not.\nThe above command is my screen snapshot.\n\nI believe it is most possibably a PG bug!\n\n--- Tom Lane <[email protected]> wrote:\n\n> Litao Wu <[email protected]> writes:\n> > reasonable size. But I do not understand\n> > why \"analyze\" bloats the table size so \n> > big??\n> \n> ANALYZE won't bloat anything. I suppose you have\n> other processes\n> inserting or updating data in the table meanwhile.\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n\t\t\n__________________________________ \nDo you Yahoo!? \nAll your favorites on one personal page ��� Try My Yahoo!\nhttp://my.yahoo.com \n", "msg_date": "Tue, 25 Jan 2005 10:48:08 -0800 (PST)", "msg_from": "Litao Wu <[email protected]>", "msg_from_op": true, "msg_subject": "Re: reltuples after vacuum and analyze " } ]
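A small check for the kind of estimate drift shown above, comparing the planner's reltuples/relpages figures against an exact count; the table name is the poster's, and the "* 8" assumes the default 8 kB block size.

    SELECT c.relname,
           c.relpages * 8      AS size_kb,
           c.reltuples::bigint AS estimated_rows,
           (SELECT count(*) FROM my_tab) AS actual_rows
    FROM pg_class c
    WHERE c.relkind = 'r'
      AND c.relname = 'my_tab';

    -- Plain VACUUM recounts tuples exactly but only gives back trailing empty
    -- pages; shrinking a bloated file needs VACUUM FULL (with its stronger locks).
    VACUUM VERBOSE ANALYZE my_tab;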
[ { "msg_contents": "This is a multi-part message in MIME format.\n\n--bound1106633891\nContent-Type: text/plain\nContent-Transfer-Encoding: 7bit\n\nI'm also an autodidact on DB design, although it's well more than a year now. If you are planning to clean up the design, I strongly suggest getting a visual tool. Google for something like \"database design tool\". Some are extremely expensive (e.g. ERwin, which I think is renamed having been bought out). There's a very cheap shareware one that I won't mention by name because it crashed my machine consistently. Right now I'm using \"Case Studio\", which has some very eccentric UI (no one enforced consistency of UI across modules, which is rather ironic in a design tool) but capable and user-extensible. ERwin's manual also had the best explanation of denormalization I've read, short and to the point.\n\nThe ability to make schema revisions quickly lets me concentrate on *better-written queries* and *improved table definition* without having to overcome inertia.\n\n--bound1106633891--\n", "msg_date": "Mon, 24 Jan 2005 22:18:11 -0800", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: poor performance of db?" } ]
[ { "msg_contents": "I help manage an animal hospital of 100-employees Linux servers. I am \nnew to database setup and tuning, I was hoping I could get some \ndirection on a setting up drive array we're considering moving our \ndatabase to.\n\nThey're currently on a two-disk Adaptec RAID1 with Postgresql 7.4.2.\n\nThe drive array is a 7-disk fibre channel on a Qlogic 2100 controller. I \nam currently testing RAID5 (sw).\n\nThe main reason of moving to a drive array is the high level of context \nswitches we get during the day (>30K for 20 mins per hour). The OS and \ndatabase exist on the same disk but seperate parition (which probably \nmakes little difference)\n\n\nadditional info:\n\nOn average, 30-35 vets/doctors are connecting to the database at any \ntime from 7am - 7pm. The database is very active for the small company.\n\nServer Info:\nCentos 3.3 (RHEL 3.x equivelent)\n4GB RAM\nAdaptec 2100S RAID\nQlogic QLA2100 Fibre\n\nAny feedback/suggestions are greatly appreciated.\n\nThanks.\n\nSteve Poe\n\n\n", "msg_date": "Tue, 25 Jan 2005 11:57:24 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "Josh,\n\nThanks for your feedback, I appreciate it.\n\n>Check what I have to say at http://www.powerpostgresql.com/PerfList\n> \n>\nWill do.\n\n>>They're currently on a two-disk Adaptec RAID1 with Postgresql 7.4.2.\n>> \n>>\n>\n>And you've not upgraded to 7.4.6 because .... ?\n>\n> \n>\nBecause the proprietary application running the business has not \ncertified on it. Unfortunately, I am at the mercy of their support in \ncase something goes wrong.\n\n>>The drive array is a 7-disk fibre channel on a Qlogic 2100 controller. I\n>>am currently testing RAID5 (sw).\n>> \n>>\n>\n>In general, RAID 5 is not so great for databases. See the article for more.\n>\n> \n>\nOkay, thanks. Even with 7-disks? I trust that. So, RAID 1+0 (sw) is \nprobably the best option. I've run sw RAID personally for years without \nissue. I am a bit hesitant in doing sw RAID for a production server for \na database --- probably because its not my server. Any thoughts on sw \nRAID for Postgresql?\n\n>>The main reason of moving to a drive array is the high level of context\n>>switches we get during the day (>30K for 20 mins per hour). The OS and\n>>database exist on the same disk but seperate parition (which probably\n>>makes little difference)\n>> \n>>\n>\n>Unfortunately, the context switches are probably due to a known issue in \n>PostgreSQL, and changing the drive array won't help this issue (it may help \n>other issues though). Search the archives of this list, and pgsql-hackers, \n>for \"Context Switch Bug\".\n>\n>For the CS bug, the only workaround right now is to avoid the query structures \n>that trigger it.\n> \n>\nOkay. Darn. While I don't write the queries for the application, I do \ninteract with the company frequently. Their considering moving the \nqueries into the database with PL/pgSQL. Currently their queries are \ndone through ProvIV development using ODBC. Will context switching be \nminimized here by using PL/pgSQL?\n\n> \n>\n>>Server Info:\n>>Centos 3.3 (RHEL 3.x equivelent)\n>>4GB RAM\n>>Adaptec 2100S RAID\n>>Qlogic QLA2100 Fibre\n>> \n>>\n>\n>CPU?\n> \n>\nDual Xeon 2.8 CPUs with HT turned off.\n\n\nThanks again.\n\nSteve Poe\n", "msg_date": "Tue, 25 Jan 2005 23:52:56 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" 
}, { "msg_contents": "Steve,\n\n> I help manage an animal hospital of 100-employees Linux servers. I am\n> new to database setup and tuning, I was hoping I could get some\n> direction on a setting up drive array we're considering moving our\n> database to.\n\nCheck what I have to say at http://www.powerpostgresql.com/PerfList\n\n> They're currently on a two-disk Adaptec RAID1 with Postgresql 7.4.2.\n\nAnd you've not upgraded to 7.4.6 because .... ?\n\n> The drive array is a 7-disk fibre channel on a Qlogic 2100 controller. I\n> am currently testing RAID5 (sw).\n\nIn general, RAID 5 is not so great for databases. See the article for more.\n\n> The main reason of moving to a drive array is the high level of context\n> switches we get during the day (>30K for 20 mins per hour). The OS and\n> database exist on the same disk but seperate parition (which probably\n> makes little difference)\n\nUnfortunately, the context switches are probably due to a known issue in \nPostgreSQL, and changing the drive array won't help this issue (it may help \nother issues though). Search the archives of this list, and pgsql-hackers, \nfor \"Context Switch Bug\".\n\nFor the CS bug, the only workaround right now is to avoid the query structures \nthat trigger it.\n\n> Server Info:\n> Centos 3.3 (RHEL 3.x equivelent)\n> 4GB RAM\n> Adaptec 2100S RAID\n> Qlogic QLA2100 Fibre\n\nCPU?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 25 Jan 2005 19:03:09 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "\n>FWIW, 7.4.6 is a binary, drop-in place upgrade for 7.4.2. And 7.4.2 has known \n>bugs. However, I understand your situation.\n>\n> \n>\nAs soon as we get the go-ahead, I will upgrade. I think the company is \nactually looking towards 8.0 certification.\n\n>>Okay, thanks. Even with 7-disks? I trust that. \n>> \n>>\n>\n>Well, it's less bad with 7 disks than it is with 3, certainly. However,there \n>is an obvious and quick gain to be had by splitting off the WAL logs onto \n>their own disk resource ... up to 14%+ performance in some applications.\n>\n> \n>\nPardon my ignorance, but the WAL logs are comprised of pg_xlog and \npg_clog? Their own disk resource, but not within the same channel of \ndisks the database is on, right?\n\n>>So, RAID 1+0 (sw) is \n>>probably the best option. I've run sw RAID personally for years without\n>>issue. I am a bit hesitant in doing sw RAID for a production server for\n>>a database --- probably because its not my server. Any thoughts on sw\n>>RAID for Postgresql?\n>> \n>>\n>\n>Yes. See my article for one. In generaly, SW RAID on BSD or Linux works \n>well for PostgreSQL ... UNLESS your machine is already CPU-bound, in which \n>case it's a bad idea. If you're hitting the CS bug, it's definitely a bad \n>idea, because the SW RAID will increase context switching.\n>\n>So if your choice, on your system, is between sw RAID 10, and hw RAID 5, and \n>you're having excessive CSes, I'd stick with the HW RAID.\n>\n> \n>\nOkay. InCPU-bound servers, use hw RAID. Any hw raids to avoid?\n\n>>Okay. Darn. While I don't write the queries for the application, I do\n>>interact with the company frequently. Their considering moving the\n>>queries into the database with PL/pgSQL. Currently their queries are\n>>done through ProvIV development using ODBC. Will context switching be\n>>minimized here by using PL/pgSQL?\n>> \n>>\n>\n>Won't make a difference, actually. 
Should improve performance in other ways, \n>though, by reducing round-trip time on procedures. Feel free to recommend \n>the company to this list.\n>\n> \n>\nI think their too busy to monitor/watch this list. Not a put-down to \nthem, but I have to do my own leg work to help decide what we're going \nto do.\n\n>>Dual Xeon 2.8 CPUs with HT turned off.\n>> \n>>\n>\n>Yeah, thought it was a Xeon.\n>\n> \n>\nIf we went with a single CPU, like Athlon/Opertron64, would CS \nstorming go away?\n\n\nThanks.\n\nSteve Poe\n", "msg_date": "Wed, 26 Jan 2005 13:07:28 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "Steve,\n\n> Because the proprietary application running the business has not\n> certified on it. Unfortunately, I am at the mercy of their support in\n> case something goes wrong.\n\nFWIW, 7.4.6 is a binary, drop-in place upgrade for 7.4.2. And 7.4.2 has known \nbugs. However, I understand your situation.\n\n> Okay, thanks. Even with 7-disks? I trust that. \n\nWell, it's less bad with 7 disks than it is with 3, certainly. However,there \nis an obvious and quick gain to be had by splitting off the WAL logs onto \ntheir own disk resource ... up to 14%+ performance in some applications.\n\n> So, RAID 1+0 (sw) is \n> probably the best option. I've run sw RAID personally for years without\n> issue. I am a bit hesitant in doing sw RAID for a production server for\n> a database --- probably because its not my server. Any thoughts on sw\n> RAID for Postgresql?\n\nYes. See my article for one. In generaly, SW RAID on BSD or Linux works \nwell for PostgreSQL ... UNLESS your machine is already CPU-bound, in which \ncase it's a bad idea. If you're hitting the CS bug, it's definitely a bad \nidea, because the SW RAID will increase context switching.\n\nSo if your choice, on your system, is between sw RAID 10, and hw RAID 5, and \nyou're having excessive CSes, I'd stick with the HW RAID.\n\n> Okay. Darn. While I don't write the queries for the application, I do\n> interact with the company frequently. Their considering moving the\n> queries into the database with PL/pgSQL. Currently their queries are\n> done through ProvIV development using ODBC. Will context switching be\n> minimized here by using PL/pgSQL?\n\nWon't make a difference, actually. Should improve performance in other ways, \nthough, by reducing round-trip time on procedures. Feel free to recommend \nthe company to this list.\n\n> Dual Xeon 2.8 CPUs with HT turned off.\n\nYeah, thought it was a Xeon.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 26 Jan 2005 10:03:15 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "Steve Poe <[email protected]> writes:\n>> Well, it's less bad with 7 disks than it is with 3, certainly. However,there \n>> is an obvious and quick gain to be had by splitting off the WAL logs onto \n>> their own disk resource ... up to 14%+ performance in some applications.\n>> \n> Pardon my ignorance, but the WAL logs are comprised of pg_xlog and \n> pg_clog? Their own disk resource, but not within the same channel of \n> disks the database is on, right?\n\nJust pg_xlog. Ideally you don't want any other traffic on the physical\ndisk pg_xlog is on --- the idea is that the write heads need to stay\nover the current xlog file. 
I don't think it hurts too much to share a\ncontroller channel though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Jan 2005 16:39:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4? " }, { "msg_contents": "Josh,\n\nThanks again for the feedback.\n\n>Well, the list of ones which are good is shorter: pretty much LSI and 3Ware \n>(for SATA). You can suffer with Adaptec if you have to.\n>\n> \n>\nGood. We don't plan on using IDE, but I've pondered Firewire.\n\n>>If we went with a single CPU, like Athlon/Opertron64, would CS\n>>storming go away?\n>> \n>>\n>\n>Yes. And then you might be able to use SW Raid. Of course, you may lose \n>performance in other areas with the 1 processor.\n>\n> \n>\nGood to know.\n\nYou mentioned earlier that to get around the CS bug, avoid the query \nstructures which trigger it. Dumb question: How do you isolate this?\n\nIs there a way in a Postgresql query to only look at 1 processor only in \na dual-CPU setup?\n\nFYI:Our company has an near-identical server (SCSI and IDE)for testing \npurposes of the animal hopsital application that is used. If there are \nany test patches to Postgresql to deal with CS storm, we can test it out \nif this is possible.\n\nAny likelyhood this CS storm will be understood in the next couple months?\n\nThanks.\n\nSteve Poe\n", "msg_date": "Wed, 26 Jan 2005 23:49:54 +0000", "msg_from": "Steve Poe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "Steve,\n\n> Okay. InCPU-bound servers, use hw RAID. Any hw raids to avoid?\n\nWell, the list of ones which are good is shorter: pretty much LSI and 3Ware \n(for SATA). You can suffer with Adaptec if you have to.\n\n> If we went with a single CPU, like Athlon/Opertron64, would CS\n> storming go away?\n\nYes. And then you might be able to use SW Raid. Of course, you may lose \nperformance in other areas with the 1 processor.\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Wed, 26 Jan 2005 18:49:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "Steve,\n\n> You mentioned earlier that to get around the CS bug, avoid the query\n> structures which trigger it. Dumb question: How do you isolate this?\n\nIn real terms, it's generally triggered by a query joining against a very \nlarge table requiring a seq scan.\n\nYou can probably find the \"bad queries\" just by using PQA, and looking for \nselect, delete and update queries which last over 60 seconds. \n\n> Is there a way in a Postgresql query to only look at 1 processor only in\n> a dual-CPU setup?\n\nThat would be an OS question. I personally can't see how.\n\n> Any likelyhood this CS storm will be understood in the next couple months?\n\nIt's well understood. See the archives of this list. The problem is that \nimplementing the solution is very, very hard -- 100+ hours from a top-notch \nprogrammer. I'm still hoping to find a corporate sponsor for the issue ...\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Thu, 27 Jan 2005 08:56:03 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "On Thu, Jan 27, 2005 at 08:56:03AM -0800, Josh Berkus wrote:\n> It's well understood. See the archives of this list. 
The problem is that \n> implementing the solution is very, very hard -- 100+ hours from a top-notch \n> programmer. I'm still hoping to find a corporate sponsor for the issue ...\n\nHm, I must have missed something -- all I read earlier (and in the archives)\nindicated that it was _not_ well understood... Care to give URLs giving the\nanswer away?\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Thu, 27 Jan 2005 18:03:04 +0100", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" }, { "msg_contents": "Josh Berkus wrote:\n> Steve,\n> \n> > I help manage an animal hospital of 100-employees Linux servers. I am\n> > new to database setup and tuning, I was hoping I could get some\n> > direction on a setting up drive array we're considering moving our\n> > database to.\n> \n> Check what I have to say at http://www.powerpostgresql.com/PerfList\n> \n\nAdded to our FAQ.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n [email protected] | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 1 Feb 2005 16:11:11 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Ideal disk setup for Postgresql 7.4?" } ]
[ { "msg_contents": "Hi all,\n I am running PostgreSQL 7.3.3 on a RHL 7.0 box with PIII and 512\nMB RAM. Recenlty I upgraded the kernel from 2.2.16 to 2.4.28. Now the\nproblem is Postgres is using only half of the memory now while before\nupgrading the kernel it was using full memory plus swap. If Postgres\nuse the full available memory will it run faster ?. I disabled\nHigmemory support while compiling kernel.\n\n>From Top\n\n total used free shared buffers cached\nMem: 501 209 292 0 2 173\n-/+ buffers/cache: 34 467\nSwap: 501 0 501\n\n\nrgds\nAntony Paul\n", "msg_date": "Tue, 25 Jan 2005 18:31:45 +0530", "msg_from": "Antony Paul <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL not utilising available memory" } ]
[ { "msg_contents": "My db server is running under high load recently and the number of\nconnections during the morning hours is actually very high.\n\nThis morning I found the postgres not running and the following in my log file:\n\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\n2005-01-25 01:38:00 WARNING: terminating connection because of crash\nof another server process\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\n2005-01-25 01:38:05 WARNING: terminating connection because of crash\nof another server process\nDETAIL: The postmaster has commanded this server process to roll back\nthe current transaction and exit, because another server process\nexited abnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\n2005-01-25 01:38:16 LOG: all server processes terminated; reinitializing\n2005-01-25 01:38:22 FATAL: could not create shared memory segment:\nCannot allocate memory\nDETAIL: Failed system call was shmget(key=5432001, size=273383424, 03600).\nHINT: This error usually means that PostgreSQL's request for a shared\nmemory segment exceeded available memory or swap space. To reduce the\nrequest size (currently 273383424 bytes), reduce PostgreSQL's\nshared_buffers parameter (currently 32768) and/or its max_connections\nparameter (currently 40).\n The PostgreSQL documentation contains more information about\nshared memory configuration.\n2005-01-25 08:00:07 LOG: database system was interrupted at\n2005-01-25 00:30:15 CST\n\nI'm confused to as to what is the problem. My shared memory kernel\nsetting are as follows:\n[root@katie data]# tail /etc/sysctl.conf\n\n# Controls whether core dumps will append the PID to the core filename.\n# Useful for debugging multi-threaded applications.\nkernel.core_uses_pid = 1\n\n# For POSTGRESQL -Drake 8/1/04\nkernel.shmall = 2097152\nkernel.shmmax = 1073741824\nkernel.shmmni = 4096\nkernel.sem = 250 32000 100 128\n\n[root@katie data]# cat /proc/sys/kernel/shmall\n2097152\n[root@katie data]# cat /proc/sys/kernel/shmmax\n1073741824\n\nHere's my ipcs output after restarting the server:\n[root@katie data]# ipcs\n\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status \n0x0052e2c1 196608 postgres 600 273383424 11 \n\n------ Semaphore Arrays --------\nkey semid owner perms nsems \n0x0052e2c1 589824 postgres 600 17 \n0x0052e2c2 622593 postgres 600 17 \n0x0052e2c3 655362 postgres 600 17 \n\n------ Message Queues --------\nkey msqid owner perms used-bytes messages \n\nI have 2GB of RAM, is this telling me I need more RAM? 
There are some\nother processes running on this server besides postgres.\n\nThanks.\n\n-Don\n-- \nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\n312-560-1574\n", "msg_date": "Tue, 25 Jan 2005 09:10:50 -0600", "msg_from": "Don Drake <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres stopped running (shmget failed)" }, { "msg_contents": "Don Drake <[email protected]> writes:\n> This morning I found the postgres not running and the following in my log file:\n\n> 2005-01-25 01:38:22 FATAL: could not create shared memory segment:\n> Cannot allocate memory\n> DETAIL: Failed system call was shmget(key=5432001, size=273383424, 03600).\n> HINT: This error usually means that PostgreSQL's request for a shared\n> memory segment exceeded available memory or swap space. To reduce the\n> request size (currently 273383424 bytes), reduce PostgreSQL's\n> shared_buffers parameter (currently 32768) and/or its max_connections\n> parameter (currently 40).\n\nI have seen this happen when the old shmem segment didn't get released\nfor some reason, and your kernel settings are such that it won't allow\ncreation of two shmem segments of that size at once. For robustness\nit's probably a good idea to make sure you *can* create two such\nsegments at once, but for the moment getting rid of the old one with\n\"ipcrm\" should be enough to let you restart the postmaster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2005 11:18:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres stopped running (shmget failed) " }, { "msg_contents": "On Tue, 25 Jan 2005 11:18:05 -0500, Tom Lane <[email protected]> wrote:\n> Don Drake <[email protected]> writes:\n> > This morning I found the postgres not running and the following in my log file:\n> \n> > 2005-01-25 01:38:22 FATAL: could not create shared memory segment:\n> > Cannot allocate memory\n> > DETAIL: Failed system call was shmget(key=5432001, size=273383424, 03600).\n> > HINT: This error usually means that PostgreSQL's request for a shared\n> > memory segment exceeded available memory or swap space. To reduce the\n> > request size (currently 273383424 bytes), reduce PostgreSQL's\n> > shared_buffers parameter (currently 32768) and/or its max_connections\n> > parameter (currently 40).\n> \n> I have seen this happen when the old shmem segment didn't get released\n> for some reason, and your kernel settings are such that it won't allow\n> creation of two shmem segments of that size at once. For robustness\n> it's probably a good idea to make sure you *can* create two such\n> segments at once, but for the moment getting rid of the old one with\n> \"ipcrm\" should be enough to let you restart the postmaster.\n> \n> regards, tom lane\n> \n\nI was able to just restart it, after the server died and before I\nrestarted nothing showed up in the ipcs output.\n\nOn an unrelated note, the value 273MB seems relatively low to me. The\nDB uses over 27GB for data and indexes, I would think it needs more\nshared memory.\n\nThanks.\n\n-Don\n\n-- \nDonald Drake\nPresident\nDrake Consulting\nhttp://www.drakeconsult.com/\n312-560-1574\n", "msg_date": "Tue, 25 Jan 2005 22:24:15 -0600", "msg_from": "Don Drake <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres stopped running (shmget failed)" } ]
[ { "msg_contents": "Hi gang,\n\nI just inherited a FreeBSD box, and it is horribly sick. So we moved\neverything to a new machine (power supply failures) and finally got\nstuff running again.\n\nOk, for two days (rimshot)\n\nHere are the two problems, and for the life of me I cannot find any\ndocumentation on either:\n\n1) freebsd will only let PostgreSQL have 38 connections at a time,\nregardless of kernel settings or postgresql.conf settings. Where\nexactly (and how, exactly) does one remedy that problem?\n\n2) As of this morning, the machine was down again, this time apache\nfired up normally but pg refuses to start - without errors. When I\nstart with postmaster on the CLI I also get no errors, just no\npostmaster. Why am I not seeing the errors, is this a FreeBSD or\nPostgreSQL issue?\n\nFor example:\n\n$ pg_ctl start\npostmaster successfully started\n$ pg_ctl status\npg_ctl: postmaster or postgres not running\n\nOR:\n\n$ postmaster -D /usr/local/pgsql/data\n$\n(no response)\n\nThere are no errors in /var/log/pgsql either, so I have absolutely no\nidea how to troubleshoot :-(\n\n-- Mitch\n", "msg_date": "Tue, 25 Jan 2005 13:19:48 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": true, "msg_subject": "PG versus FreeBSD, startup and connections problems" }, { "msg_contents": "\n\nMitch Pirtle wrote:\n\n>1) freebsd will only let PostgreSQL have 38 connections at a time,\n>regardless of kernel settings or postgresql.conf settings. Where\n>exactly (and how, exactly) does one remedy that problem?\n>\n> \n>\nWhat version of FreeBSD is the box running?\nGenerally you need to change semaphores and shared memory settings, e.g. \nfor 5.x or late 4.x :\n\nin /etc/sysctl.conf :\nkern.ipc.shmmax=100000000\nkern.ipc.shmall=32768\n(can be set online using systcl -w)\n\nSemaphores need to be set in /boot/loader.conf\nkern.ipc.semmni=256\nkern.ipc.semmns=256\n(can typed at the loader prompt using set)\n\nThese settings should let you have ~100 connections and use about 100M \nof shared memory for shared_buffers.\n\nEarly 4.x (I think) and before will need a kernel rebuild, see \nhttp://www.postgresql.org/docs/7.4/static/kernel-resources.html#SYSVIPC\n\n>2) As of this morning, the machine was down again, this time apache\n>fired up normally but pg refuses to start - without errors. When I\n>start with postmaster on the CLI I also get no errors, just no\n>postmaster. Why am I not seeing the errors, is this a FreeBSD or\n>PostgreSQL issue?\n>\n>For example:\n>\n>$ pg_ctl start\n>postmaster successfully started\n>$ pg_ctl status\n>pg_ctl: postmaster or postgres not running\n>\n>OR:\n>\n>$ postmaster -D /usr/local/pgsql/data\n>$\n>(no response)\n>\n> \n>\nBTW - What version of Pg are you using?\nCheck your postgresql.conf to see if all the output is going to syslog, e.g,\nsyslog=2\nIf not, then unfortunately it is possible that whatever is causing the \nmachine to be unstable has corrupted the data directory (so ...err... \nrestore time). Starting the postmaster in debug mode will provide more \noutput:\n$ postmaster -d 5\nAdditionally utilitites like 'strace' will give you a bit more info \nabout where the startup process gets to.\n\nDo you know why the box went down? 
The freebsd-bugs or freebsd-stable \nlist will help with respect to Freebsd crashing, see \nhttp://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/eresources.html#ERESOURCES-MAIL\n\nregards\n\nMark\n\n\n\n", "msg_date": "Wed, 26 Jan 2005 10:08:58 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PG versus FreeBSD, startup and connections problems" }, { "msg_contents": "Just a quick shout-out to Mark, as you provided the winning answer. I\nfound numerous mailing list discussions and web pages, but all were\neither fragmented or out of date.\n\nAgain, many thanks!\n\n-- Mitch\n\nOn Wed, 26 Jan 2005 10:08:58 +1300, Mark Kirkwood <[email protected]> wrote:\n> \n> in /etc/sysctl.conf :\n> kern.ipc.shmmax=100000000\n> kern.ipc.shmall=32768\n> (can be set online using systcl -w)\n> \n> Semaphores need to be set in /boot/loader.conf\n> kern.ipc.semmni=256\n> kern.ipc.semmns=256\n> (can typed at the loader prompt using set)\n> \n> These settings should let you have ~100 connections and use about 100M\n> of shared memory for shared_buffers.\n", "msg_date": "Wed, 26 Jan 2005 14:24:40 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PG versus FreeBSD, startup and connections problems" } ]
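For completeness, a hedged sketch of checking and applying those limits on the FreeBSD box itself (the values are just the ones Mark quoted, not tuned recommendations):

    # Show the current SysV IPC limits the postmaster is constrained by
    sysctl kern.ipc.shmmax kern.ipc.shmall kern.ipc.semmni kern.ipc.semmns

    # The shared memory limits can be raised on the running system
    sysctl -w kern.ipc.shmmax=100000000
    sysctl -w kern.ipc.shmall=32768

    # The semaphore limits are boot-time tunables, so they belong in
    # /boot/loader.conf and only take effect after a reboot.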
[ { "msg_contents": "Folks,\n\n\tI'm using PostgreSQL 7.4.1 on Linux, and I'm trying to figure out weather a\nquery I have is going to be slow when I have more information in my tables.\nboth tables involved will likely have ~500K rows within a year or so.\n\n\tSpecifically I can't tell if I'm causing myself future problems with the\nsubquery, and should maybe re-write the query to use a join. The reason I\nwent with the subquery is that I don't know weather a row in Assignments\nwill have a corresponding row in Assignment_Settings\n\n\tThe query is:\nSELECT User_ID\nFROM Assignments A\nWHERE A.User_ID IS NOT NULL\n\tAND (SELECT Value FROM Assignment_Settings WHERE Setting='Status' AND\nAssignment_ID=A.Assignment_ID) IS NULL\nGROUP BY User_ID;\n\n\tThe tables and an explain analyze of the query are as follows:\n\nneo=# \\d assignments;\n Table \"shopper.assignments\"\n Column | Type |\nModifiers\n---------------+--------------------------------+---------------------------\n----------------------------------------------\n assignment_id | integer | not null default\nnextval('shopper.assignments_assignment_id_seq'::text)\n sample_id | integer | not null\n user_id | integer |\n time | timestamp(0) without time zone | not null default now()\n address_id | integer |\nIndexes:\n \"assignments_pkey\" primary key, btree (assignment_id)\n \"assignments_sample_id\" unique, btree (sample_id)\n \"assignments_address_id\" btree (address_id)\n \"assignments_user_id\" btree (user_id)\nTriggers:\n assignments_check_assignment BEFORE INSERT ON assignments FOR EACH ROW\nEXECUTE PROCEDURE check_assignment()\n\nneo=# \\d assignment_settings\n Table\n\"shopper.assignment_settings\"\n Column | Type |\nModifiers\n-----------------------+------------------------+---------------------------\n--------------------------------------------------------------\n assignment_setting_id | integer | not null default\nnextval('shopper.assignment_settings_assignment_setting_id_seq'::text)\n assignment_id | integer | not null\n setting | character varying(250) | not null\n value | text |\nIndexes:\n \"assignment_settings_pkey\" primary key, btree (assignment_setting_id)\n \"assignment_settings_assignment_id_setting\" unique, btree\n(assignment_id, setting)\n\nneo=# explain analyze SELECT User_ID FROM Assignments A WHERE A.User_ID IS\nNOT NULL AND (SELECT Value FROM Assignment_Settings WHERE Setti\nng='Status' AND Assignment_ID=A.Assignment_ID) IS NULL GROUP BY User_ID;\n QUERY PLAN\n----------------------------------------------------------------------------\n------------------------------------------------\n HashAggregate (cost=1.01..1.01 rows=1 width=4) (actual time=0.057..0.058\nrows=1 loops=1)\n -> Seq Scan on assignments a (cost=0.00..1.01 rows=1 width=4) (actual\ntime=0.033..0.040 rows=2 loops=1)\n Filter: ((user_id IS NOT NULL) AND ((subplan) IS NULL))\n SubPlan\n -> Seq Scan on assignment_settings (cost=0.00..0.00 rows=1\nwidth=13) (actual time=0.001..0.001 rows=0 loops=2)\n Filter: (((setting)::text = 'Status'::text) AND\n(assignment_id = $0))\n Total runtime: 0.159 ms\n(7 rows)\n\n\n\tThanks in advance for any help!\n\nThanks,\nPeter Darley\n\n", "msg_date": "Tue, 25 Jan 2005 16:19:42 -0800", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Possibly slow query" }, { "msg_contents": "Peter Darley wrote:\n> Folks,\n> \n> \tI'm using PostgreSQL 7.4.1 on Linux, and I'm trying to figure out weather a\n> query I have is going to be slow when I have more information in my tables.\n> 
both tables involved will likely have ~500K rows within a year or so.\n> \n> \tSpecifically I can't tell if I'm causing myself future problems with the\n> subquery, and should maybe re-write the query to use a join. The reason I\n> went with the subquery is that I don't know weather a row in Assignments\n> will have a corresponding row in Assignment_Settings\n> \n> \tThe query is:\n> SELECT User_ID\n> FROM Assignments A\n> WHERE A.User_ID IS NOT NULL\n> \tAND (SELECT Value FROM Assignment_Settings WHERE Setting='Status' AND\n> Assignment_ID=A.Assignment_ID) IS NULL\n> GROUP BY User_ID;\n\nYou could always use a LEFT JOIN instead, like you say. I'd personally \nbe tempted to select distinct user_id's then join, but it depends on how \nmany of each.\n\nYou're not going to know for sure whether you'll have problems without \ntesting. Generate 500k rows of plausible looking test-data and give it a \ntry.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Jan 2005 09:36:02 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possibly slow query" }, { "msg_contents": "Peter Darley wrote:\n> Folks,\n> \n> \tI'm using PostgreSQL 7.4.1 on Linux\n\nOh, and move to the latest in the 7.4 series too.\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 26 Jan 2005 09:36:47 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possibly slow query" }, { "msg_contents": "Richard,\n\tI tried a left join, which has to be a little weird, because there may or\nmay not be a corresponding row in Assignment_Settings for each Assignment,\nand they may or may not have Setting='Status', so I came up with:\n\nSELECT User_ID\nFROM Assignments A NATURAL LEFT JOIN (SELECT * FROM Assignment_Settings\nWHERE Setting='Status') ASet\nWHERE A.User_ID IS NOT NULL\n\tAND ASet.Assignment_ID IS NULL\nGROUP BY User_ID;\n\n\tWhich explain analyze is saying takes 0.816 ms as compared to 0.163 ms for\nmy other query. So, I'm not sure that I'm writing the best LEFT JOIN that I\ncan. Also, I suspect that these ratios wouldn't hold as the data got bigger\nand started using indexes, etc. I'll mock up a couple of tables with a\nbunch of data and see how things go. It would be nice to understand WHY I\nget the results I get, which I'm not sure I will.\n\n\tI'm not sure what you mean by selecting a distinct User_ID first. Since\nI'm joining the tables on Assignment_ID, I'm not sure how I'd do a distinct\nbefore the join (because I'd lose Assignment_ID). I was also under the\nimpression that group by was likely to be faster than a distinct, tho I\ncan't really recall where I got that idea from.\n\nThanks for your suggestions!\nPeter Darley\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]]\nSent: Wednesday, January 26, 2005 1:36 AM\nTo: Peter Darley\nCc: Pgsql-Performance\nSubject: Re: [PERFORM] Possibly slow query\n\n\nPeter Darley wrote:\n> Folks,\n>\n> \tI'm using PostgreSQL 7.4.1 on Linux, and I'm trying to figure out weather\na\n> query I have is going to be slow when I have more information in my\ntables.\n> both tables involved will likely have ~500K rows within a year or so.\n>\n> \tSpecifically I can't tell if I'm causing myself future problems with the\n> subquery, and should maybe re-write the query to use a join. 
The reason I\n> went with the subquery is that I don't know weather a row in Assignments\n> will have a corresponding row in Assignment_Settings\n>\n> \tThe query is:\n> SELECT User_ID\n> FROM Assignments A\n> WHERE A.User_ID IS NOT NULL\n> \tAND (SELECT Value FROM Assignment_Settings WHERE Setting='Status' AND\n> Assignment_ID=A.Assignment_ID) IS NULL\n> GROUP BY User_ID;\n\nYou could always use a LEFT JOIN instead, like you say. I'd personally\nbe tempted to select distinct user_id's then join, but it depends on how\nmany of each.\n\nYou're not going to know for sure whether you'll have problems without\ntesting. Generate 500k rows of plausible looking test-data and give it a\ntry.\n\n--\n Richard Huxton\n Archonet Ltd\n\n", "msg_date": "Wed, 26 Jan 2005 07:16:25 -0800", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possibly slow query" }, { "msg_contents": "On Wed, 26 Jan 2005 07:16:25 -0800, \"Peter Darley\"\n<[email protected]> wrote:\n>SELECT User_ID\n>FROM Assignments A NATURAL LEFT JOIN (SELECT * FROM Assignment_Settings\n>WHERE Setting='Status') ASet\n>WHERE A.User_ID IS NOT NULL\n>\tAND ASet.Assignment_ID IS NULL\n>GROUP BY User_ID;\n\n\"ASet.Assignment_ID IS NULL\" and \"value IS NULL\" as you had in your\noriginal post don't necessarily result in the same set of rows.\n\nSELECT DISTINCT a.User_ID\n FROM Assignments a\n LEFT JOIN Assignment_Settings s\n ON (a.Assignment_ID=s.Assignment_ID\n AND s.Setting='Status')\n WHERE a.User_ID IS NOT NULL\n AND s.Value IS NULL;\n\nNote how the join condition can contain subexpressions that only depend\non columns from one table.\n\nBTW,\n|neo=# \\d assignment_settings\n| [...]\n| setting | character varying(250) | not null\n| [...]\n|Indexes:\n| [...]\n| \"assignment_settings_assignment_id_setting\" unique, btree (assignment_id, setting)\n\nstoring the setting names in their own table and referencing them by id\nmight speed up some queries (and slow down others). Certainly worth a\ntry ...\n\nServus\n Manfred\n", "msg_date": "Mon, 31 Jan 2005 12:06:01 +0100", "msg_from": "Manfred Koizar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possibly slow query" }, { "msg_contents": "Manfred,\n\tYeah, that was a typo. 
It should have been ASet.Value IS NULL.\n\tI have considered storing the setting names by key, since I do have a\nseparate table with the names and a key as you suggest, but since my\napplication is only ~75% finished, it's still pretty important to have human\nreadable/editable tables.\nThanks,\nPeter Darley\n\n-----Original Message-----\nFrom: Manfred Koizar [mailto:[email protected]]\nSent: Monday, January 31, 2005 3:06 AM\nTo: Peter Darley\nCc: Richard Huxton; Pgsql-Performance\nSubject: Re: [PERFORM] Possibly slow query\n\n\nOn Wed, 26 Jan 2005 07:16:25 -0800, \"Peter Darley\"\n<[email protected]> wrote:\n>SELECT User_ID\n>FROM Assignments A NATURAL LEFT JOIN (SELECT * FROM Assignment_Settings\n>WHERE Setting='Status') ASet\n>WHERE A.User_ID IS NOT NULL\n>\tAND ASet.Assignment_ID IS NULL\n>GROUP BY User_ID;\n\n\"ASet.Assignment_ID IS NULL\" and \"value IS NULL\" as you had in your\noriginal post don't necessarily result in the same set of rows.\n\nSELECT DISTINCT a.User_ID\n FROM Assignments a\n LEFT JOIN Assignment_Settings s\n ON (a.Assignment_ID=s.Assignment_ID\n AND s.Setting='Status')\n WHERE a.User_ID IS NOT NULL\n AND s.Value IS NULL;\n\nNote how the join condition can contain subexpressions that only depend\non columns from one table.\n\nBTW,\n|neo=# \\d assignment_settings\n| [...]\n| setting | character varying(250) | not null\n| [...]\n|Indexes:\n| [...]\n| \"assignment_settings_assignment_id_setting\" unique, btree\n(assignment_id, setting)\n\nstoring the setting names in their own table and referencing them by id\nmight speed up some queries (and slow down others). Certainly worth a\ntry ...\n\nServus\n Manfred\n\n", "msg_date": "Mon, 31 Jan 2005 07:06:50 -0800", "msg_from": "\"Peter Darley\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possibly slow query" } ]
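Following Richard's advice to test with realistic volume, here is a rough sketch for populating the two tables with ~500K rows. It assumes generate_series() (new in 8.0; on the 7.4.x release used here a helper table or client-side loop is needed instead) and ignores whatever the check_assignment trigger does, so treat the values as purely synthetic:

    -- Synthetic data: 500K assignments spread over 1000 users, with a
    -- 'Status' setting on every second assignment.
    INSERT INTO assignments (sample_id, user_id)
        SELECT g, (g % 1000) + 1
        FROM generate_series(1, 500000) AS g;

    INSERT INTO assignment_settings (assignment_id, setting, value)
        SELECT assignment_id, 'Status', 'whatever'
        FROM assignments
        WHERE assignment_id % 2 = 0;

    ANALYZE assignments;
    ANALYZE assignment_settings;

    -- Compare the subquery form against Manfred's LEFT JOIN form:
    EXPLAIN ANALYZE
    SELECT DISTINCT a.user_id
    FROM assignments a
    LEFT JOIN assignment_settings s
           ON a.assignment_id = s.assignment_id
          AND s.setting = 'Status'
    WHERE a.user_id IS NOT NULL
      AND s.value IS NULL;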
[ { "msg_contents": "Hi,\n\nWhat you could do is create a table containing all the fields from your SELECT, plus a per-session unique ID. Then you can store the query results in there, and use SELECT with OFFSET / LIMIT on that table. The WHERE clause for this temp-results table only needs to contain the per-session unique id.\n\nThis of course gives you a new problem: cleaning stale data out of the temp-results table. And another new problem is that users will not see new data appear on their screen until somehow the query is re-run (... but that might even be desirable, actually, depending on how your users do their work and what their work is).\n\nAnd of course better performance cannot be guaranteed until you try it.\n\n\nWould such a scheme give you any hope of improved performance, or would it be too much of a nightmare?\n\ncheers,\n\n--Tim\n\n\n\n\n-----Original Message-----\nFrom: [email protected] on behalf of Andrei Bintintan\nSent: Wed 1/26/2005 11:11 AM\nTo: [email protected]; Greg Stark\nCc: Richard Huxton; [email protected]; [email protected]\nSubject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n \nThe problems still stays open.\n\nThe thing is that I have about 20 - 30 clients that are using that SQL query \nwhere the offset and limit are involved. So, I cannot create a temp table, \nbecause that means that I'll have to make a temp table for each session... \nwhich is a very bad ideea. Cursors somehow the same. In my application the \nWhere conditions can be very different for each user(session) apart.\n\nThe only solution that I see in the moment is to work at the query, or to \nwrite a more complex where function to limit the results output. So no \nreplace for Offset/Limit.\n\nBest regards,\nAndy.\n\n\n----- Original Message ----- \nFrom: \"Greg Stark\" <[email protected]>\nTo: <[email protected]>\nCc: \"Richard Huxton\" <[email protected]>; \"Andrei Bintintan\" \n<[email protected]>; <[email protected]>; \n<[email protected]>\nSent: Tuesday, January 25, 2005 8:28 PM\nSubject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n\n\n>\n> Alex Turner <[email protected]> writes:\n>\n>> I am also very interesting in this very question.. Is there any way to\n>> declare a persistant cursor that remains open between pg sessions?\n>> This would be better than a temp table because you would not have to\n>> do the initial select and insert into a fresh table and incur those IO\n>> costs, which are often very heavy, and the reason why one would want\n>> to use a cursor.\n>\n> TANSTAAFL. How would such a persistent cursor be implemented if not by\n> building a temporary table somewhere behind the scenes?\n>\n> There could be some advantage if the data were stored in a temporary table\n> marked as not having to be WAL logged. Instead it could be automatically\n> cleared on every database start.\n>\n> -- \n> greg\n>\n> \n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n", "msg_date": "Wed, 26 Jan 2005 11:36:35 +0100", "msg_from": "\"Leeuw van der, Tim\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" }, { "msg_contents": "\nOn Jan 26, 2005, at 5:36 AM, Leeuw van der, Tim wrote:\n\n> Hi,\n>\n> What you could do is create a table containing all the fields from \n> your SELECT, plus a per-session unique ID. Then you can store the \n> query results in there, and use SELECT with OFFSET / LIMIT on that \n> table. 
The WHERE clause for this temp-results table only needs to \n> contain the per-session unique id.\n>\n\nThis is what I do, but I use two columns for indexing the original \nquery, a user_id (not session-id) and an index to the \"query_id\" that \nis unique within user. This \"query_id\" is a foreign key to another \ntable that describes the query (often just a name). I allow the user \nonly a fixed number of \"stored\" queries and recycle after hitting the \nmaximum. You can timestamp your queries so that when you recycle you \ndrop the oldest one first. If you don't need multiple stored query \nresults, then using the user_id is probably adequate (assuming the user \nis not logged on in several locations simultaneously).\n\n> This of course gives you a new problem: cleaning stale data out of the \n> temp-results table. And another new problem is that users will not see \n> new data appear on their screen until somehow the query is re-run (... \n> but that might even be desirable, actually, depending on how your \n> users do their work and what their work is).\n>\n\nSee above. The query refresh issue remains.\n\n> And of course better performance cannot be guaranteed until you try it.\n>\n\nFor the standard operating procedure of perform query===>view results, \nI have found this to be a nice system. The user is accustomed to \nqueries taking a bit of time to perform, but then wants to be able to \nmanipulate and view data rather quickly; this paradigm is pretty well \nserved by making a separate table of results, particularly if the \noriginal query is costly.\n\n\n>\n> Would such a scheme give you any hope of improved performance, or \n> would it be too much of a nightmare?\n>\n\nThis question still applies....\n\nSean\n\n>\n> -----Original Message-----\n> From: [email protected] on behalf of Andrei \n> Bintintan\n> Sent: Wed 1/26/2005 11:11 AM\n> To: [email protected]; Greg Stark\n> Cc: Richard Huxton; [email protected]; \n> [email protected]\n> Subject: Re: [PERFORM] [SQL] OFFSET impact on Performance???\n>\n> The problems still stays open.\n>\n> The thing is that I have about 20 - 30 clients that are using that SQL \n> query\n> where the offset and limit are involved. So, I cannot create a temp \n> table,\n> because that means that I'll have to make a temp table for each \n> session...\n> which is a very bad ideea. Cursors somehow the same. In my application \n> the\n> Where conditions can be very different for each user(session) apart.\n>\n> The only solution that I see in the moment is to work at the query, or \n> to\n> write a more complex where function to limit the results output. So no\n> replace for Offset/Limit.\n>\n> Best regards,\n> Andy.\n\n", "msg_date": "Tue, 1 Feb 2005 06:38:05 -0500", "msg_from": "Sean Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
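A bare-bones sketch of the per-user results table being described, with invented names (query_cache, payload, and the expensive query itself are not from the thread); cleanup of stale rows is left to a periodic DELETE keyed on created_at:

    CREATE TABLE query_cache (
        user_id    integer NOT NULL,
        query_id   integer NOT NULL,   -- which stored query these rows belong to
        row_num    serial,             -- stable ordering for paging
        payload    text,               -- or the real result columns
        created_at timestamp NOT NULL DEFAULT now()
    );
    CREATE INDEX query_cache_idx ON query_cache (user_id, query_id, row_num);

    -- Run the expensive query once per user/stored query ...
    INSERT INTO query_cache (user_id, query_id, payload)
        SELECT 42, 7, t.some_column
        FROM some_expensive_join t;

    -- ... then page through the cached rows cheaply.
    SELECT payload
    FROM query_cache
    WHERE user_id = 42 AND query_id = 7
    ORDER BY row_num
    LIMIT 25 OFFSET 50;

    -- Stale data cleanup, e.g. from cron:
    DELETE FROM query_cache WHERE created_at < now() - interval '1 day';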
[ { "msg_contents": "Does anybody know where I can lay my hands on some guidelines to get best SQL performance\r\nout of PostgreSQL? We are about to get into a project that will be new from the ground up (and\\we are using Postgres for the first time). Would like to share some guidelines with developers on best practices\r\nin Postgres? Thanks for your help.\r\n", "msg_date": "Wed, 26 Jan 2005 11:44:45 -0500", "msg_from": "\"Van Ingen, Lane\" <[email protected]>", "msg_from_op": true, "msg_subject": "SQL Performance Guidelines" } ]
[ { "msg_contents": "Clarification: I am talking about SQL coding practices in Postgres (how to write queries for best \r\nresults), not tuning-related considerations (although that would be welcomed too).\r\n \r\n-----Original Message----- \r\nFrom: [email protected] on behalf of Van Ingen, Lane \r\nSent: Wed 1/26/2005 11:44 AM \r\nTo: [email protected] \r\nCc: \r\nSubject: [PERFORM] SQL Performance Guidelines\r\n\r\nDoes anybody know where I can lay my hands on some guidelines to get best SQL performance\r\nout of PostgreSQL? We are about to get into a project that will be new from the ground up (and\\we are using Postgres for the first time). Would like to share some guidelines with developers on best practices\r\nin Postgres? Thanks for your help.\r\n\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 7: don't forget to increase your free space map settings\r\n\r\n", "msg_date": "Wed, 26 Jan 2005 13:27:08 -0500", "msg_from": "\"Van Ingen, Lane\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SQL Performance Guidelines" }, { "msg_contents": "\nOn Jan 26, 2005, at 10:27, Van Ingen, Lane wrote:\n\n> Clarification: I am talking about SQL coding practices in Postgres \n> (how to write queries for best\n> results), not tuning-related considerations (although that would be \n> welcomed too).\n\n\tYour question is a bit too vague. At this point in your development, \nall that really can be said is to understand relational database \nconcepts in general, and use explain a lot when developing queries. \n(Oh, and don't forget to analyze before asking specific questions).\n\n> -----Original Message-----\n> From: [email protected] on behalf of Van Ingen, \n> Lane\n> Sent: Wed 1/26/2005 11:44 AM\n> To: [email protected]\n> Cc:\n> Subject: [PERFORM] SQL Performance Guidelines\n>\n> Does anybody know where I can lay my hands on some guidelines to get \n> best SQL performance\n> out of PostgreSQL? We are about to get into a project that will be new \n> from the ground up (and\\we are using Postgres for the first time). \n> Would like to share some guidelines with developers on best practices\n> in Postgres? Thanks for your help.\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n>\n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: don't forget to increase your free space map settings\n>\n--\nSPY My girlfriend asked me which one I like better.\npub 1024/3CAE01D5 1994/11/03 Dustin Sallings <[email protected]>\n| Key fingerprint = 87 02 57 08 02 D0 DA D6 C8 0F 3E 65 51 98 D8 BE\nL_______________________ I hope the answer won't upset her. ____________\n\n", "msg_date": "Thu, 27 Jan 2005 00:02:29 -0800", "msg_from": "Dustin Sallings <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Performance Guidelines" }, { "msg_contents": "On Thu, 27 Jan 2005 00:02:29 -0800, Dustin Sallings <[email protected]> wrote:\n> \n> On Jan 26, 2005, at 10:27, Van Ingen, Lane wrote:\n> \n> > Clarification: I am talking about SQL coding practices in Postgres\n> > (how to write queries for best\n> > results), not tuning-related considerations (although that would be\n> > welcomed too).\n> \n> Your question is a bit too vague. 
At this point in your development,\n> all that really can be said is to understand relational database\n> concepts in general, and use explain a lot when developing queries.\n> (Oh, and don't forget to analyze before asking specific questions).\n\nI disagree - there are plenty of tricks that are PostgreSQL only, and\nmany people on this list have that knowledge but it is not documented\nanywhere, or is hidden within thousands of mailing list posts.\n\nFor example, IIRC when joining an integer column with a SERIAL column,\nyou must expicitly cast it as an integer or the planner will not use\nthe indexes, right? (This is a guess, as I remember reading something\nlike this and thinking, \"How in the world is someone supposed to\nfigure that out, even with EXPLAIN?\")\n\nThere is another thread about how a query using a WHERE NOT NULL\nclause is faster than one without.\n\nThese things are PostgreSQL specific, and documenting them would go a\nlong way towards educating the switchover crowd.\n\nThe closest thing I have seen to this is the PostgreSQL Gotchas page:\n\nhttp://sql-info.de/postgresql/postgres-gotchas.html\n\nHTH,\n\n-- Mitch\n", "msg_date": "Thu, 27 Jan 2005 09:50:32 -0500", "msg_from": "Mitch Pirtle <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Performance Guidelines" }, { "msg_contents": "> For example, IIRC when joining an integer column with a SERIAL column,\n> you must expicitly cast it as an integer or the planner will not use\n> the indexes, right? (This is a guess, as I remember reading something\n> like this and thinking, \"How in the world is someone supposed to\n> figure that out, even with EXPLAIN?\")\n\nThat's not true at all. Perhaps you're thinking about BIGSERIAL and \nint8 indexes - something that's been addressed in 8.0.\n\nChris\n", "msg_date": "Thu, 27 Jan 2005 15:19:41 +0000", "msg_from": "Christopher Kings-Lynne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SQL Performance Guidelines" } ]
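For reference, the gotcha Christopher is pointing at looks roughly like this on pre-8.0 servers (table and values are invented):

    CREATE TABLE orders (order_id bigserial PRIMARY KEY, placed timestamp);

    -- Before 8.0 the bare integer literal is typed int4, and the planner
    -- will not consider the int8 index for the cross-type comparison:
    SELECT * FROM orders WHERE order_id = 123;

    -- Quoting or casting the literal lets the index be used:
    SELECT * FROM orders WHERE order_id = 123::int8;
    SELECT * FROM orders WHERE order_id = '123';

    -- 8.0 handles the bare literal correctly, as noted above.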
[ { "msg_contents": "> The problem with this approach is TTFB (Time to first Byte). The\n> initial query is very slow, but additional requests are fast. In most\n> situations we do not want the user to have to wait a disproportionate\n> amount of time for the initial query. If this is the first time using\n> the system this will be the impression that will stick with them. I\n> guess we could experiment and see how much extra time creating a cache\n> table will take...\n\n\nHave you read this?\nhttp://jonathangardner.net/PostgreSQL/materialized_views/matviews.html\n\nDon't know your exact situation, but this is always worth considering in\nthose hard to optimize corner cases. Moving this stuff into the\napplication space or 'middleware' is going to be a lot of pain and\naggravation.\n\n\nMerlin\n\n\n\n", "msg_date": "Wed, 26 Jan 2005 13:49:33 -0500", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [SQL] OFFSET impact on Performance???" } ]
[ { "msg_contents": "Will I have to dump and reload all my databases when migrating from\n7.4.2 to 8.0?\n \n\n____________________________________\n\n \n\nJim Gunzelman\n\nSenior Software Engineer\n\n \n\nphone: 402.361.3078 fax: 402.361.3178\n\ne-mail: [email protected]\n\n \n\nSolutionary, Inc.\n\nwww.Solutionary.com <http://www.solutionary.com/> \n\n \n\nMaking Security Manageable 24x7\n\n_____________________________________\n\n \n\nConfidentiality Notice\n\nThe content of this communication, along with any attachments, is\ncovered by federal and state law governing electronic communications and\nmay contain confidential and legally privileged information. If the\nreader of this message is not the intended recipient, you are hereby\nnotified that any dissemination, distribution, use or copying of the\ninformation contained herein is strictly prohibited. If you have\nreceived this communication in error, please immediately contact us by\ntelephone at (402) 361-3000 or e-mail [email protected]. Thank\nyou.\n\n \n\nCopyright 2000-2005, Solutionary, Inc. All rights reserved.\nActiveGuard, eV3, Solutionary and the Solutionary logo are registered\ntrademarks of Solutionary, Inc.\n\n \n\n \n\n \n\nMessage\n\n\n\nWill I have to dump \nand reload all my databases when migrating from 7.4.2 to \n8.0?\n \n____________________________________\n \nJim Gunzelman\nSenior Software \nEngineer\n \nphone: 402.361.3078   fax: 402.361.3178\ne-mail:  \[email protected]\n \nSolutionary, \nInc.\nwww.Solutionary.com       \n\n \nMaking Security Manageable \n24x7\n_____________________________________\n \nConfidentiality \nNotice\nThe content of this \ncommunication, along with any attachments, is covered by federal and state law \ngoverning electronic communications and may contain confidential and legally \nprivileged information.  If the \nreader of this message is not the intended recipient, you are hereby notified \nthat any dissemination, distribution, use or copying of the information \ncontained herein is strictly prohibited.  \nIf you have received this communication in error, please immediately \ncontact us by telephone at (402) 361-3000 or e-mail \[email protected].  Thank \nyou.\n \nCopyright 2000-2005, Solutionary, \nInc. All rights reserved.  
ActiveGuard, eV3, Solutionary and the \nSolutionary logo are registered trademarks of Solutionary, \nInc.", "msg_date": "Wed, 26 Jan 2005 12:51:14 -0600", "msg_from": "\"James Gunzelman\" <[email protected]>", "msg_from_op": true, "msg_subject": "Upgrading from from 7.4.2 to 8.0" }, { "msg_contents": "\"James Gunzelman\" <[email protected]> writes:\n\n> Will I have to dump and reload all my databases when migrating from\n> 7.4.2 to 8.0?\n\nYes.\n\n-Doug\n\n", "msg_date": "Wed, 26 Jan 2005 14:42:57 -0500", "msg_from": "Doug McNaught <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrading from from 7.4.2 to 8.0" }, { "msg_contents": "On Wed, Jan 26, 2005 at 12:51:14PM -0600, James Gunzelman wrote:\n\n> Will I have to dump and reload all my databases when migrating from\n> 7.4.2 to 8.0?\n\nYes -- the Release Notes mention it under \"Migration to version 8.0\":\n\nhttp://www.postgresql.org/docs/8.0/static/release.html#RELEASE-8-0\n\nThose unfamiliar with doing an upgrade might want to read \"If You\nAre Upgrading\" in the \"Installation Instructions\" chapter of the\ndocumenation, and \"Migration Between Releases\" in the \"Backup and\nRestore\" chapter:\n\nhttp://www.postgresql.org/docs/8.0/static/install-upgrading.html\nhttp://www.postgresql.org/docs/8.0/static/migration.html\n\n(Install or upgrade questions should probably go to pgsql-admin or\npgsql-general instead of pgsql-performance.)\n\n-- \nMichael Fuhr\nhttp://www.fuhr.org/~mfuhr/\n", "msg_date": "Wed, 26 Jan 2005 12:51:25 -0700", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrading from from 7.4.2 to 8.0" }, { "msg_contents": "It should be noted that users who use Slony can create a subscriber \nnode running 8.0 that subscribes to a node running 7.4.x and can \ntransition with only the downtime required for failover.\n\nThis obviates the need for a dump/restore.\n\nSee <http://slony.info/>.\n\n-tfo\n\n--\nThomas F. O'Connell\nCo-Founder, Information Architect\nSitening, LLC\nhttp://www.sitening.com/\n110 30th Avenue North, Suite 6\nNashville, TN 37203-6320\n615-260-0005\n\nOn Jan 26, 2005, at 1:51 PM, Michael Fuhr wrote:\n\n> On Wed, Jan 26, 2005 at 12:51:14PM -0600, James Gunzelman wrote:\n>\n>> Will I have to dump and reload all my databases when migrating from\n>> 7.4.2 to 8.0?\n>\n> Yes -- the Release Notes mention it under \"Migration to version 8.0\":\n>\n> http://www.postgresql.org/docs/8.0/static/release.html#RELEASE-8-0\n>\n> Those unfamiliar with doing an upgrade might want to read \"If You\n> Are Upgrading\" in the \"Installation Instructions\" chapter of the\n> documenation, and \"Migration Between Releases\" in the \"Backup and\n> Restore\" chapter:\n>\n> http://www.postgresql.org/docs/8.0/static/install-upgrading.html\n> http://www.postgresql.org/docs/8.0/static/migration.html\n>\n> (Install or upgrade questions should probably go to pgsql-admin or\n> pgsql-general instead of pgsql-performance.)\n>\n> -- \n> Michael Fuhr\n> http://www.fuhr.org/~mfuhr/\n\n", "msg_date": "Wed, 26 Jan 2005 14:14:35 -0600", "msg_from": "Thomas F.O'Connell <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgrading from from 7.4.2 to 8.0" } ]
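And a sketch of the dump/reload step itself for the 7.4-to-8.0 move (paths and ports are placeholders; the migration pages linked above are the authoritative version):

    # Dump with the *new* 8.0 pg_dumpall while the old 7.4 server is still up,
    # as the migration docs recommend.
    /usr/local/pgsql-8.0/bin/pg_dumpall -p 5432 > /tmp/all.sql

    # initdb the 8.0 cluster, start it (here on port 5433), then restore.
    # 8.0 has no "postgres" database yet, so connect through template1.
    /usr/local/pgsql-8.0/bin/psql -p 5433 -f /tmp/all.sql template1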
[ { "msg_contents": "\nShort summary... the second query runs faster, and I think\nthey should be identical queries. Should the optimizer\nhave found this optimization?\n\n\nI have two identical (or so I believe) queries; one where I \nexplicitly add a \"is not null\" comparison; and one where I \nthink it would implicitly only find not-null columns.\n\nThe queries are \n\n select *\n from rt4, rt5\n where rt4.tigerfile = rt5.tigerfile\n and feat = feat3;\n\nand \n\n select *\n from (select * from rt4 where feat3 is not null) as rt4, rt5\n where rt4.tigerfile = rt5.tigerfile\n and feat = feat3;\n\nI would have thought that the optimizer would see that\nif feat3 is null (which it usually is), it doesn't need\nto keep those rows and sort them -- but it seems (looking\nboth at explain analyze and \"du\" on the tmp directory)\nthat in the first query it is indeed sorting all the\nrows --- even the ones with feat3=null.\n\n \n\nThe tables are the Census Tiger Line data explained in detail here:\n http://www.census.gov/geo/www/tiger/tiger2003/TGR2003.pdf\nI can attach the create statemnts for the tables if people \nthink they'd help. Basically, table rt4 has a column\ncalled feat3 which is usually null, and table rt5 has a\ncolumn called feat which is never null. Both tables have\na few million rows.\n\nNo indexes were used, since I'm joining everything to \neverything, they shouldn't have helped anyway. However\nvacuum analyze was run, and (as seen in the second query)\nthe stats did know that the column feat3 was mostly null.\n\n=====================================================================================================\nfli=# \nfli=# explain analyze \n select * \n from rt4, rt5 \n where rt4.tigerfile = rt5.tigerfile \n and feat = feat3;\nfli-# fli-# fli-# fli-# \n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=1922903.02..1967385.35 rows=117698 width=100) (actual time=179246.872..218920.724 rows=153091 loops=1)\n Merge Cond: ((\"outer\".feat3 = \"inner\".feat) AND (\"outer\".tigerfile = \"inner\".tigerfile))\n -> Sort (cost=876532.10..888964.80 rows=4973079 width=45) (actual time=57213.327..67313.216 rows=4971022 loops=1)\n Sort Key: rt4.feat3, rt4.tigerfile\n -> Seq Scan on rt4 (cost=0.00..94198.79 rows=4973079 width=45) (actual time=0.053..10433.883 rows=4971022 loops=1)\n -> Sort (cost=1046370.92..1060457.95 rows=5634813 width=55) (actual time=122033.463..134037.127 rows=5767675 loops=1)\n Sort Key: rt5.feat, rt5.tigerfile\n -> Seq Scan on rt5 (cost=0.00..127146.13 rows=5634813 width=55) (actual time=0.016..22538.958 rows=5635077 loops=1)\n Total runtime: 219632.580 ms\n(9 rows)\n\nfli=# fli=# fli=# \nfli=# explain analyze \n select * \n from (select * from rt4 where feat3 is not null) as rt4, rt5 \n where rt4.tigerfile = rt5.tigerfile \n and feat = feat3; \n\nfli-# fli-# fli-# fli-# QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Merge Join (cost=1152466.47..1194789.77 rows=3296 width=100) (actual time=125982.562..145927.220 rows=153091 loops=1)\n Merge Cond: ((\"outer\".feat3 = \"inner\".feat) AND (\"outer\".tigerfile = \"inner\".tigerfile))\n -> Sort (cost=106095.56..106443.67 rows=139247 width=45) (actual time=11729.319..11823.006 rows=153091 loops=1)\n Sort Key: tgr.rt4.feat3, tgr.rt4.tigerfile\n -> Seq Scan on rt4 (cost=0.00..94198.79 rows=139247 width=45) (actual 
time=32.404..10893.373 rows=153091 loops=1)\n Filter: (feat3 IS NOT NULL)\n -> Sort (cost=1046370.92..1060457.95 rows=5634813 width=55) (actual time=114253.157..126650.225 rows=5767675 loops=1)\n Sort Key: rt5.feat, rt5.tigerfile\n -> Seq Scan on rt5 (cost=0.00..127146.13 rows=5634813 width=55) (actual time=0.012..19253.431 rows=5635077 loops=1)\n Total runtime: 146480.294 ms\n(10 rows)\n\nfli=# fli=# \nfli=# \n\n", "msg_date": "Wed, 26 Jan 2005 17:27:59 -0800 (PST)", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": true, "msg_subject": "Should the optimizer see this?" }, { "msg_contents": "Ron Mayer <[email protected]> writes:\n> Should the optimizer have found this optimization?\n\nI can't get excited about it. Joining on a column that's mostly nulls\ndoesn't seem like a common thing to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Jan 2005 21:31:20 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should the optimizer see this? " } ]
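If that feat3 IS NOT NULL subset gets queried a lot, a partial index is another way to make it cheap to reach; a sketch against the rt4 table above (it will not change the big sort on rt5, which still dominates the plan):

    CREATE INDEX rt4_feat3_notnull_idx
        ON rt4 (feat3, tigerfile)
        WHERE feat3 IS NOT NULL;

    ANALYZE rt4;

    -- Keeping the explicit filter, as in the faster second query above:
    SELECT *
    FROM (SELECT * FROM rt4 WHERE feat3 IS NOT NULL) AS rt4, rt5
    WHERE rt4.tigerfile = rt5.tigerfile
      AND feat = feat3;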
[ { "msg_contents": "Hi everyone.\n\nI'm new to this forum and was wondering if anyone would be kind enough to help me out with a pretty severe performance issue. I believe the problem to be rather generic, so I'll put it in generic terms. Since I'm at home and not a work (but this is really bugging me), I can't post any specifics. However, I think my explaination will suffice.\n\nI have a 2 tables that are are getting large and will only get larger with time (expoentially as more users sign on to the system). Right the now, a table called 'shipment' contains about 16,000 rows and 'shipment_status' contains about 32,500 rows. These aren't massive rows (I keep reading about tables with millions), but they will definately get into 6 digits by next year and query performance is quite poor.\n\nNow, from what I can understand about tuning, you want to specify good filters, provide good indexes on the driving filter as well as any referencial keys that are used while joining. This has helped me solve performance problems many times in the past (for example, changing a query speed from 2 seconds to 21 milliseconds). \n\nHowever, I am now tuning queries that operate on these two tables and the filters aren't very good (the best is a filter ratio of 0.125) and the number of rows returned is very large (not taking into consideration limits).\n\nFor example, consider something like this query that takes ~1 second to finish:\n\nselect s.*, ss.*\nfrom shipment s, shipment_status ss, release_code r\nwhere s.current_status_id = ss.id\n and ss.release_code_id = r.id\n and r.filtered_column = '5'\norder by ss.date desc\nlimit 100;\n\nRelease code is just a very small table of 8 rows by looking at the production data, hence the 0.125 filter ratio. However, the data distribution is not normal since the filtered column actually pulls out about 54% of the rows in shipment_status when it joins. Postgres seems to be doing a sequencial scan to pull out all of these rows. Next, it joins approx 17550 rows to shipment. Since this query has a limit, it only returns the first 100, which seems like a waste.\n\nNow, for this query, I know I can filter out the date instead to speed it up. For example, I can probably search for all the shipments in the last 3 days instead of limiting it to 100. But since this isn't a real production query, I only wanted to show it as an example since many times I cannot do a filter by the date (and the sort may be date or something else irrelavant).\n\nI'm just stressed out how I can make queries like this more efficient since all I see is a bunch of hash joins and sequencial scans taking all kinds of time.\n\nI guess here are my 2 questions:\n\n1. Should I just change beg to change the requirements so that I can make more specific queries and more screens to access those?\n2. Can you recommend ways so that postgres acts on big tables more efficiently? I'm not really interested in this specific case (I just made it up). I'm more interested in general solutions to this general problem of big table sizes with bad filters and where join orders don't seem to help much.\n\nThank you very much for your help.\n\nBest Regards,\nKen Egervari\n\n\n\n\n\n\nHi everyone.\n \nI'm new to this forum and was wondering if anyone \nwould be kind enough to help me out with a pretty severe performance \nissue.  I believe the problem to be rather generic, so I'll put it in \ngeneric terms.  Since I'm at home and not a work (but this is really \nbugging me), I can't post any specifics.  
However, I think my explaination \nwill suffice.\n \nI have a 2 tables that are are getting large and \nwill only get larger with time (expoentially as more users sign on to the \nsystem).  Right the now, a table called 'shipment' contains about 16,000 \nrows and 'shipment_status' contains about 32,500 rows.  These aren't \nmassive rows (I keep reading about tables with millions), but they will \ndefinately get into 6 digits by next year and query performance is quite \npoor.\n \nNow, from what I can understand about tuning, you \nwant to specify good filters, provide good indexes on the driving filter as well \nas any referencial keys that are used while joining.  This has helped me \nsolve performance problems many times in the past (for example, changing a query \nspeed from 2 seconds to 21 milliseconds).  \n \nHowever, I am now tuning queries that operate on \nthese two tables and the filters aren't very good (the best is a filter ratio of \n0.125) and the number of rows returned is very large (not taking into \nconsideration limits).\n \nFor example, consider something like this \nquery that takes ~1 second to finish:\n \nselect s.*, ss.*\nfrom shipment s, shipment_status ss, release_code \nr\nwhere s.current_status_id = ss.id\n   and ss.release_code_id = \nr.id\n   and r.filtered_column = \n'5'\norder by ss.date desc\nlimit 100;\n \nRelease code is just a very small table of 8 rows \nby looking at the production data, hence the 0.125 filter ratio.  However, \nthe data distribution is not normal since the filtered column actually pulls out \nabout 54% of the rows in shipment_status when it joins.  Postgres seems to \nbe doing a sequencial scan to pull out all of these rows.  Next, it joins \napprox 17550 rows to shipment.  Since this query has a limit, it only \nreturns the first 100, which seems like a waste.\n \nNow, for this query, I know I can filter out the \ndate instead to speed it up.  For example, I can probably search for all \nthe shipments in the last 3 days instead of limiting it to 100.  But since \nthis isn't a real production query, I only wanted to show it as an example since \nmany times I cannot do a filter by the date (and the sort may be date or \nsomething else irrelavant).\n \nI'm just stressed out how I can make queries like \nthis more efficient since all I see is a bunch of hash joins and sequencial \nscans taking all kinds of time.\n \nI guess here are my 2 questions:\n \n1. Should I just change beg to change the \nrequirements so that I can make more specific queries and more screens to access \nthose?\n2. Can you recommend ways so that postgres acts on \nbig tables more efficiently?  I'm not really interested in this specific \ncase (I just made it up).  I'm more interested in general solutions to this \ngeneral problem of big table sizes with bad filters and where join orders don't \nseem to help much.\n \nThank you very much for your \nhelp.\n \nBest Regards,\nKen Egervari", "msg_date": "Wed, 26 Jan 2005 21:17:23 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem with semi-large tables" }, { "msg_contents": "Ken,\n\nActually, your problem isn't that generic, and might be better solved by \ndissecting an EXPLAIN ANALYZE.\n\n> 1. Should I just change beg to change the requirements so that I can make\n> more specific queries and more screens to access those? \n\nThis is always good.\n\n> 2. Can you \n> recommend ways so that postgres acts on big tables more efficiently?  
I'm\n> not really interested in this specific case (I just made it up).  I'm more\n> interested in general solutions to this general problem of big table sizes\n> with bad filters and where join orders don't seem to help much.\n\nWell, you appear to be using ORDER BY ... LIMIT. Is there a corresponding \nindex on the order by criteria?\n\n-- \nJosh Berkus\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Sat, 29 Jan 2005 13:51:10 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": "\n\n> select s.*, ss.*\n> from shipment s, shipment_status ss, release_code r\n> where s.current_status_id = ss.id\n> and ss.release_code_id = r.id\n> and r.filtered_column = '5'\n> order by ss.date desc\n> limit 100;\n\n> Release code is just a very small table of 8 rows by looking at the \n> production data, hence the 0.125 filter ratio. However, the data \n> distribution is not normal since the filtered column actually pulls out \n> about 54% of the rows in shipment_status when it joins. Postgres seems \n> to be doing a sequencial scan to pull out all of these rows. Next, it \n> joins approx 17550 rows to shipment. Since this query has a limit, it \n> only returns the first 100, which seems like a waste.\n\n\tWell, postgres does what you asked. It will be slow, because you have a \nfull table join. LIMIT does not change this because the rows have to be \nsorted first.\n\n\tThe date is in shipment_status so you should first get the \nshipment_status.id that you need and later join to shipment. This will \navoid the big join :\n\n\nSELECT s.*, ss.* FROM\n\t(SELECT * FROM shipment_status WHERE release_code_id IN\n\t\t(SELECT r.id FROM release_code WHERE r.filtered_column = '5')\n\tORDER BY date DESC LIMIT 100\n\t) as ss, shipment s\nWHERE s.current_status_id = ss.id\nORDER BY date DESC LIMIT 100\n\n\tIs this better ?\n\n\n\n\n\n\n\n\n\n", "msg_date": "Sat, 29 Jan 2005 23:08:58 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": "> Well, postgres does what you asked. It will be slow, because you have a \n> full table join. LIMIT does not change this because the rows have to be \n> sorted first.\n\nI am aware that limit doesn't really affect the execution time all that \nmuch. It does speed up ORM though and keeps the rows to a manageable list \nso users don't have to look at thousands, which is good enough for me. My \nintention here is that the date was supposed to be a good filter.\n\n> The date is in shipment_status so you should first get the \n> shipment_status.id that you need and later join to shipment. This will \n> avoid the big join :\n>\n>\n> SELECT s.*, ss.* FROM\n> (SELECT * FROM shipment_status WHERE release_code_id IN\n> (SELECT r.id FROM release_code WHERE r.filtered_column = '5')\n> ORDER BY date DESC LIMIT 100\n> ) as ss, shipment s\n> WHERE s.current_status_id = ss.id\n> ORDER BY date DESC LIMIT 100\n>\n> Is this better ?\n\nThis looks like it might be what I want. It's not that I was not aware of \nthe correct join order. I used Dan Tow's diagram method and learned that \nfiltering on date first is the best approach, then releae code, then finally \nshipment for this particular query. I just didn't know how to tell \nPostgreSQL how to do this.\n\nSo are you suggesting as a general rule then that sub-queries are the way to \nforce a specific join order in postgres? 
If that is the case, I will do \nthis from now on. \n\n", "msg_date": "Sat, 29 Jan 2005 17:44:33 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": "\n> So are you suggesting as a general rule then that sub-queries are the \n> way to force a specific join order in postgres? If that is the case, I \n> will do this from now on.\n\n\tI'll try to explain a bit better...\n\tHere's your original query :\n\n> select s.*, ss.*\n> from shipment s, shipment_status ss, release_code r\n> where s.current_status_id = ss.id\n> and ss.release_code_id = r.id\n> and r.filtered_column = '5'\n> order by ss.date desc\n> limit 100;\n\n\tIf you write something like :\n\nSELECT * FROM shipment_status WHERE release_code_id = constant ORDER BY \nrelease_code_id DESC, date DESC LIMIT 100;\n\n\tIn this case, if you have an index on (release_code_id, date), the \nplanner will use a limited index scan which will yield the rows in index \norder, which will be very fast.\n\n\tHowever, if you just have an index on date, this won't help you.\n\tIn your case, moreover, you don't use release_code_id = constant, but it \ncomes from a join. So there may be several different values for \nrelease_code_id ; thus the planner can't use the optimization, it has to \nfind the rows with the release_code_id first. And it can't use the index \non (release_code_id, date) to get the rows in sorted order precisely \nbecause there could be several different values for the release_code_id. \nAnd then it has to sort by date.\n\n\tI hope this makes it clearer. If you are absolutely sure there is only \none row in release_code with r.filtered_column = '5', then this means \nrelease_code_id is a constant and your query could get a huge speedup by \nwriting it differently.\n", "msg_date": "Sun, 30 Jan 2005 00:28:59 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": "Thanks again for your response. I'll try and clarify some metrics that I \ntook a few days to figure out what would be the best join order.\n\nBy running some count queries on the production database, I noticed there \nwere only 8 rows in release_code. The filtered column is unique, so that \nmeans the filter ratio is 0.125. However, the data distribution is not \nnormal. When the filtered column is the constant '5', Postgres will join to \n54% of the shipment_status rows. Since shipment_status has 32,000+ rows, \nthis join is not a very good one to make.\n\nThe shipment table has 17k rows, but also due to the distribution of data, \nalmost every shipment will join to a shipment_status with a release_code of \n'5'. For your information, this column indicates that a shipment has been \n\"released\", as most shipments will move to this state eventually. The \nactual join ratio from shipment_status to shipment is about 98.5% of the \nrows in the shipment table, which is still basically 17k rows.\n\nI was simply curious how to make something like this faster. You see, it's \nthe table size and the bad filters are really destroying this query example. 
\nI would never make a query to the database like this in practice, but I have \nsimilar queries that I do make that aren't much better (and can't be due to \nbusiness requirements).\n\nFor example, let's add another filter to get all the shipments with release \ncode '5' that are 7 days old or newer.\n\n ss.date >= current_date - 7\n\nBy analyzing the production data, this where clause has a filter ratio of \n0.08, which is far better than the release_code filter both in ratio and in \nthe number of rows that it can avoid joining. However, if I had this filter \ninto the original query, Postgres will not act on it first - and I think it \nreally should before it even touches release_code. However, the planner \n(using EXPLAIN ANALYZE) will actually pick this filter last and will join \n17k rows prematurely to release_code. In this example, I'd like force \npostgres to do the date filter first, join to release_code next, then \nfinally to shipment.\n\nAnother example is filtering by the driver_id, which is a foreign key column \non the shipment table itself to a driver table. This has a filter ratio of \n0.000625 when analyzing the production data. However, PostgreSQL will not \nact on this filter first either. The sad part is that since drivers are \nactually distributed more evenly in the database, it would filter out the \nshipment table from 17k to about 10 shipments on average. In most cases, it \nends up being more than 10, but not more than 60 or 70, which is very good \nsince some drivers don't have any shipments (I question why they are even in \nthe database, but that's another story). As you can see, joining to \nshipment_status at this point (using the primary key index from \nshipment.current_status_id to shipment_status.id) should be extremely \nefficient. Yet, Postgres's planner/optimizer won't make the right call \nuntil might later in the plan.\n\n> SELECT * FROM shipment_status WHERE release_code_id = constant ORDER BY \n> release_code_id DESC, date DESC LIMIT 100;\n>\n> In this case, if you have an index on (release_code_id, date), the \n> planner will use a limited index scan which will yield the rows in index \n> order, which will be very fast.\n\nI have done this in other queries where sorting by both release code and \ndate were important. You are right, it is very fast and I do have this index \nin play. However, most of the time I retreive shipment's when their \nshipment_status all have the same release_code, which makes sorting kind of \nmoot :/ I guess that answers your comment below.\n\n> However, if you just have an index on date, this won't help you.\n> In your case, moreover, you don't use release_code_id = constant, but it \n> comes from a join. So there may be several different values for \n> release_code_id ; thus the planner can't use the optimization, it has to \n> find the rows with the release_code_id first. And it can't use the index \n> on (release_code_id, date) to get the rows in sorted order precisely \n> because there could be several different values for the release_code_id. \n> And then it has to sort by date.\n\nWell, the filtered column is actually unique (but it's not the primary key). \nShould I just make it the primary key? Can't postgres be equally efficient \nwhen using other candidate keys as well? If not, then I will definately \nchange the design of my database. 
I mostly use synthetic keys to make \nHibernate configuration fairly straight-forward and to make it easy so all \nof my entities extend from the same base class.\n\n> I hope this makes it clearer. If you are absolutely sure there is only \n> one row in release_code with r.filtered_column = '5', then this means \n> release_code_id is a constant and your query could get a huge speedup by \n> writing it differently.\n\nYou mean by avoiding the filter on number and avoiding the join? You see, I \nnever thought joining to release_code should be so bad since the table only \nhas 8 rows in it.\n\nAnyway, I hope my comments provide you with better insight to the problem \nI'm having. I really do appreciate your comments because I think you are \nright on target with your direction, discussing things I haven't really \nthought up on my own. I thank you. \n\n", "msg_date": "Sat, 29 Jan 2005 20:21:40 -0500", "msg_from": "\"Ken Egervari\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": ">> SELECT * FROM shipment_status WHERE release_code_id = constant ORDER BY \n>> release_code_id DESC, date DESC LIMIT 100;\n>\n> I have done this in other queries where sorting by both release code and \n> date were important. You are right, it is very fast and I do have this \n> index in play. However, most of the time I retreive shipment's when \n> their shipment_status all have the same release_code, which makes \n> sorting kind of moot :/ I guess that answers your comment below.\n\n\tAh, well in this case, ORDER BY release_code_id DESC seems of course \nuseless because you only have one order_code_id, but it is in fact \nnecessary to make the planner realize it can use the index on \n(release_code_id,date) for the ordering. If you just ORDER BY date, the \nplanner will not use your index.\n\n> Thanks again for your response. I'll try and clarify some metrics that \n> I took a few days to figure out what would be the best join order.\n>\n> By running some count queries on the production database, I noticed \n> there were only 8 rows in release_code. The filtered column is unique,\n\n\tLet's forget the shipments table for now.\n\n\tSo you mean there is an unique, one-to-one relation between \nrelease_code_id and filtered_column ?\n\tThe planner is not able to derermine this ahead of time ; and in your \ncase, it's important that it be unique to be able to use the index to \nretrieve quickly the rows in (date DESC) order.\n\tSo if you'll join only to ONE release_code_id, you can do this :\n\n(SELECT * FROM shipment_status WHERE release_code_id =\n(SELECT r.id FROM release_code WHERE r.filtered_column = '5' LIMIT 1)\nORDER BY release_code_id DESC, date DESC LIMIT 100)\n\n\tWhich is no longer a join and will get your shipment_status_id's very \nquickly.\n\n> so that means the filter ratio is 0.125. However, the data distribution \n> is not normal. When the filtered column is the constant '5', Postgres \n> will join to 54% of the shipment_status rows. Since shipment_status has \n> 32,000+ rows, this join is not a very good one to make.\n\n\tSure !\n\n> The shipment table has 17k rows, but also due to the distribution of \n> data, almost every shipment will join to a shipment_status with a \n> release_code of '5'. For your information, this column indicates that a \n> shipment has been \"released\", as most shipments will move to this state \n> eventually. 
The actual join ratio from shipment_status to shipment is \n> about 98.5% of the rows in the shipment table, which is still basically \n> 17k rows.\n>\n> I was simply curious how to make something like this faster. You see, \n> it's the table size and the bad filters are really destroying this query \n> example. I would never make a query to the database like this in \n> practice, but I have similar queries that I do make that aren't much \n> better (and can't be due to business requirements).\n>\n> For example, let's add another filter to get all the shipments with \n> release code '5' that are 7 days old or newer.\n>\n> ss.date >= current_date - 7\n\n\tIt's the order by + limit which makes the query behaves badly, and which \nforces use of kludges to use the index. If you add another condition like \nthat, it should be a breeze.\n\n> By analyzing the production data, this where clause has a filter ratio \n> of 0.08, which is far better than the release_code filter both in ratio \n> and in the number of rows that it can avoid joining. However, if I had \n> this filter into the original query, Postgres will not act on it first - \n> and I think it really should before it even touches release_code.\n\n\tWell I think too.\n\tWhat with the subqueries I wrote with the LIMIT inside the subquery ? Any \nbetter ?\n\tNormally the planner is able to deconstruct subqueries and change the \norder as it sees fit, but if there are LIMIT's I don't know.\n\n> However, the planner (using EXPLAIN ANALYZE) will actually pick this \n> filter last and will join 17k rows prematurely to release_code. In this \n> example, I'd like force postgres to do the date filter first, join to \n> release_code next, then finally to shipment.\n\n\tYou could use the JOIN keywords to specify the join order youself.\n\n> Another example is filtering by the driver_id, which is a foreign key \n> column on the shipment table itself to a driver table. This has a \n> filter ratio of 0.000625 when analyzing the production data. However, \n> PostgreSQL will not act on this filter first either. The sad part is \n> that since drivers are actually distributed more evenly in the database, \n> it would filter out the shipment table from 17k to about 10 shipments on \n> average. In most cases, it ends up being more than 10, but not more \n> than 60 or 70, which is very good since some drivers don't have any \n> shipments (I question why they are even in the database, but that's \n> another story). As you can see, joining to shipment_status at this \n> point (using the primary key index from shipment.current_status_id to \n> shipment_status.id) should be extremely efficient. Yet, Postgres's \n> planner/optimizer won't make the right call until might later in the \n> plan.\n\n\tAnd if you select on shipment_status where driver_id=something, does it \nuse the index ?\n\n> Well, the filtered column is actually unique (but it's not the primary \n> key). Should I just make it the primary key? Can't postgres be equally\n\n\tIt won't change anything, so probably not. What will make it faster will \nbe changing :\nWHERE release_code_id IN (SELECT r.id\n\tinto :\nWHERE release_code_id = (SELECT r.id\n\n> efficient when using other candidate keys as well? If not, then I will \n> definately change the design of my database. 
I mostly use synthetic \n> keys to make Hibernate configuration fairly straight-forward and to make \n> it easy so all of my entities extend from the same base class.\n>\n\n> You mean by avoiding the filter on number and avoiding the join? You \n> see, I never thought joining to release_code should be so bad since the \n> table only has 8 rows in it.\n\n\tIt's not the join itself that's bad, it's the order by...\n\tWel the planner insisting on joining the two big tables before limiting, \nalso is worrying.\n\n> Anyway, I hope my comments provide you with better insight to the \n> problem I'm having. I really do appreciate your comments because I \n> think you are right on target with your direction, discussing things I \n> haven't really thought up on my own. I thank you.\n\n\tThanks ;)\n\n\n\n\n\n\n\n\n\n", "msg_date": "Sun, 30 Jan 2005 12:10:55 +0100", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with semi-large tables" }, { "msg_contents": "PFC <[email protected]> writes:\n>> For example, let's add another filter to get all the shipments with \n>> release code '5' that are 7 days old or newer.\n>> \n>> ss.date >= current_date - 7\n\n> \tIt's the order by + limit which makes the query behaves badly, and which \n> forces use of kludges to use the index. If you add another condition like \n> that, it should be a breeze.\n\nActually, that date condition has its own problem, namely that the\ncompared-to value isn't a constant. The 8.0 planner is able to realize\nthat this is a pretty selective condition, but prior releases fall back\non a very pessimistic default estimate. I'm sure that has something to\ndo with Ken not being able to get it to use an index on date.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2005 12:50:02 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with semi-large tables " } ]
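A minimal sketch of the rewrite discussed in the thread above, using the table and column names quoted there (shipment_status.release_code_id, shipment_status.date, release_code.id, release_code.filtered_column); the index name and exact DDL are assumptions for illustration, not the poster's actual schema. The idea is a two-column index plus a scalar subquery in place of IN, so the ORDER BY ... LIMIT can walk the index instead of sorting roughly half the table:

CREATE INDEX shipment_status_release_date_idx
    ON shipment_status (release_code_id, date);

SELECT ss.*
  FROM shipment_status ss
 WHERE ss.release_code_id = (SELECT r.id
                               FROM release_code r
                              WHERE r.filtered_column = '5'
                              LIMIT 1)            -- scalar subquery instead of IN
   AND ss.date >= current_date - 7                -- the extra filter from later in the thread
 ORDER BY ss.release_code_id DESC, ss.date DESC   -- both columns listed so the index matches
 LIMIT 100;

The constant release_code_id is kept in the ORDER BY only so the planner can match the (release_code_id, date) index; and, as Tom notes, the non-constant current_date - 7 comparison is estimated far more sensibly by the 8.0 planner than by earlier releases.
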
[ { "msg_contents": "Hello,\n\nif i have the following (simple) table layout:\n\ncreate table a (\n id serial primary key\n);\n\ncreate table b (\n id integer references a,\n test text\n);\n\ncreate view c as\n select a.id,b.test from a\n left join b\n on a.id = b.id;\n\nSo if i do a select * from c i get the following:\n\ntest=# EXPLAIN SELECT * from g;\n QUERY PLAN\n----------------------------------------------------------------\n Hash Left Join (cost=2.45..8.91 rows=8 width=36)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Seq Scan on a (cost=0.00..1.08 rows=8 width=4)\n -> Hash (cost=2.16..2.16 rows=116 width=36)\n -> Seq Scan on b (cost=0.00..2.16 rows=116 width=36)\n\nand a select id from c executes as\n\ntest=# EXPLAIN SELECT id from g;\n QUERY PLAN\n---------------------------------------------------------------\n Hash Left Join (cost=2.45..7.02 rows=8 width=4)\n Hash Cond: (\"outer\".id = \"inner\".id)\n -> Seq Scan on a (cost=0.00..1.08 rows=8 width=4)\n -> Hash (cost=2.16..2.16 rows=116 width=4)\n -> Seq Scan on b (cost=0.00..2.16 rows=116 width=4)\n\nso the only difference is the width estimation.\n\nBut why is the scan on table b performed?\nIf i understand it correctly this is unnecessary because the\nresult contains only rows from table a.\n\nIs there a way to tell postgres not to do the extra work.\nMy aim is to speed up lookup to complex joins.\n\nThanks\n\nSebastian\n", "msg_date": "Thu, 27 Jan 2005 09:19:38 +0100", "msg_from": "=?ISO-8859-1?Q?Sebastian_B=F6ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing Outer Joins" }, { "msg_contents": "Sebastian B�ck wrote:\n> Hello,\n> \n> if i have the following (simple) table layout:\n> \n> create table a (\n> id serial primary key\n> );\n> \n> create table b (\n> id integer references a,\n> test text\n> );\n> \n> create view c as\n> select a.id,b.test from a\n> left join b\n> on a.id = b.id;\n\n> test=# EXPLAIN SELECT * from g;\n\n> test=# EXPLAIN SELECT id from g;\n\n> so the only difference is the width estimation.\n> \n> But why is the scan on table b performed?\n> If i understand it correctly this is unnecessary because the\n> result contains only rows from table a.\n\nIt's only unnecessary in the case where there is a 1:1 correspondence \nbetween a.id and b.id - if you had more than one matching row in \"b\" \nthen there'd be repeated rows from \"a\" in the result. Not sure if PG can \n tell what the situation is regarding references and pkeys, but in your \nexample you don't have one anyway.\n\n--\n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 27 Jan 2005 10:18:35 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing Outer Joins" }, { "msg_contents": "Richard Huxton wrote:\n> Sebastian Böck wrote:\n>> But why is the scan on table b performed?\n>> If i understand it correctly this is unnecessary because the\n>> result contains only rows from table a.\n> \n> \n> It's only unnecessary in the case where there is a 1:1 correspondence \n> between a.id and b.id - if you had more than one matching row in \"b\" \n> then there'd be repeated rows from \"a\" in the result. Not sure if PG can \n> tell what the situation is regarding references and pkeys, but in your \n> example you don't have one anyway.\n\nOk, is there a way to avoid the extra scan if only one row is\nreturned (distinc on for example)?\n\nWhat would be great is if a subselect could work with more than\none column returning. 
Is there a way to achieve this?\n\nThanks Sebastian\n\n", "msg_date": "Fri, 28 Jan 2005 13:30:29 +0100", "msg_from": "=?ISO-8859-1?Q?Sebastian_B=F6ck?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing Outer Joins" } ]
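Two hedged sketches following on from the exchange above, using the toy tables a, b and view c from the original post (the constraint and view names here are made up for illustration). The first simply records the 1:1 correspondence Richard mentions by declaring b.id unique; planners of that era still scan b regardless, since automatic removal of provably-redundant left joins only arrived in much later PostgreSQL releases. The second is a possible workaround for the single-matching-row case: fetching b.test through a scalar subquery instead of a join; it is worth checking with EXPLAIN whether your planner version still touches b when only id is selected.

ALTER TABLE b ADD CONSTRAINT b_id_key UNIQUE (id);  -- at most one b row per a row

CREATE VIEW c2 AS
  SELECT a.id,
         (SELECT b.test FROM b WHERE b.id = a.id) AS test  -- raises an error if b ever has
    FROM a;                                                -- more than one match, enforcing 1:1

SELECT id FROM c2;  -- check the plan: ideally b is not touched at all here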