threads
[
{
"msg_contents": "Hello Everyone,\n\nI'm trying to find out/understand what causes my 'out of memory' error. \nI do not have enough experience with such logs to understand what is \nwrong or how to fix it. So i hope someone can point me in the right \ndirection.\n\nThe 'rpt.rpt_verrichting' table contains about 8.5 million records and \nthe 'rpt.rpt_dbc_traject' table contains 700k records.\nIt's part of a nightly process, so there is only 1 user active.\n\nThe server PostgreSQL 8.1.4 is running on, has 4GB Ram, OS FreeBSD \n6.1-Stable.\n\npostgresql.conf\nshared_buffers = 8192\nwork_mem = 524288\nmaintenance_work_mem = 524288\neffective_cache_size = 104858\n\n\nResource limits (current):\n cputime infinity secs\n filesize infinity kB\n datasize 1048576 kB <---- could this be a problem?\n stacksize 131072 kB <---- could this be a problem?\n coredumpsize infinity kB\n memoryuse infinity kB\n memorylocked infinity kB\n maxprocesses 5547\n openfiles 11095\n sbsize infinity bytes\n vmemoryuse infinity kB\n\n\nThanks in advance.\n\n_*The Query that is causing the out of memory error.*_\nLOG: statement: insert into rpt.rpt_verrichting_dbc\n (\n verrichting_id\n , verrichting_secid\n , dbcnr\n , vc_dbcnr\n )\n select\n t1.verrichting_id\n , t1.verrichting_secid\n , t1.dbcnr\n , max(t1.vc_dbcnr) as vc_dbcnr\n from\n rpt.rpt_verrichting t1\n , rpt.rpt_dbc_traject t00\n where\n t1.vc_patientnr = t00.vc_patientnr\n and\n t1.vc_agb_specialisme_nr_toek = t00.agb_specialisme_nr\n and\n t1.verrichtingsdatum between t00.begindat_dbc and \nCOALESCE(t00.einddat_dbc, t00.begindat_dbc + interval '365 days')\n group by\n t1.verrichting_id\n , t1.verrichting_secid\n , t1.dbcnr\n ;\n\n_*An EXPLAIN for the query:*_\n Subquery Scan \"*SELECT*\" (cost=1837154.04..1839811.72 rows=106307 \nwidth=74)\n -> HashAggregate (cost=1837154.04..1838482.88 rows=106307 width=56)\n -> Merge Join (cost=1668759.55..1836090.97 rows=106307 width=56)\n Merge Cond: (((\"outer\".vc_patientnr)::text = \n\"inner\".\"?column8?\") AND (\"outer\".agb_specialisme_nr = \n\"inner\".vc_agb_specialisme_nr_toek))\n Join Filter: ((\"inner\".verrichtingsdatum >= \n\"outer\".begindat_dbc) AND (\"inner\".verrichtingsdatum <= \nCOALESCE(\"outer\".einddat_dbc, (\"outer\".begindat_dbc + '365 \ndays'::interval))))\n -> Index Scan using rpt_dbc_traject_idx1 on \nrpt_dbc_traject t00 (cost=0.00..84556.01 rows=578274 width=37)\n -> Sort (cost=1668759.55..1689806.46 rows=8418765 width=79)\n Sort Key: (t1.vc_patientnr)::text, \nt1.vc_agb_specialisme_nr_toek\n -> Seq Scan on rpt_verrichting t1 \n(cost=0.00..302720.65 rows=8418765 width=79)\n\n_*Out of memory log.*_\nTopMemoryContext: 16384 total in 2 blocks; 3824 free (4 chunks); 12560 used\nType information cache: 8192 total in 1 blocks; 1864 free (0 chunks); \n6328 used\nOperator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 \nused\nTopTransactionContext: 8192 total in 1 blocks; 7856 free (0 chunks); 336 \nused\nMessageContext: 122880 total in 4 blocks; 64568 free (4 chunks); 58312 used\nsmgr relation table: 8192 total in 1 blocks; 2872 free (0 chunks); 5320 used\nPortal hash: 8192 total in 1 blocks; 3912 free (0 chunks); 4280 used\nPortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used\nPortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used\nExecutorState: 8192 total in 1 blocks; 5304 free (1 chunks); 2888 used\nExecutorState: 562316108 total in 94 blocks; 528452720 free (2593154 \nchunks); 33863388 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 
used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 8192 total in 1 blocks; 8176 free (0 chunks); 16 used\nAggContext: 399499264 total in 58 blocks; 5928 free (110 chunks); \n399493336 used\nTupleHashTable: 109109272 total in 23 blocks; 2468576 free (70 chunks); \n106640696 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used\nRelcache by OID: 8192 total in 1 blocks; 3376 free (0 chunks); 4816 used\nCacheMemoryContext: 516096 total in 6 blocks; 83448 free (0 chunks); \n432648 used\nrpt_dbc_traject_idx1: 1024 total in 1 blocks; 328 free (0 chunks); 696 used\nrpt_dbc_traject_pk: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nrpt_verrichting_idx2: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\nrpt_verrichting_idx1: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_index_indrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\npg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_type_typname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); \n696 used\npg_type_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_statistic_relid_att_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_auth_members_member_role_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_auth_members_role_member_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 256 free (0 \nchunks); 768 used\npg_proc_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 192 free (0 \nchunks); 832 used\npg_operator_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_opclass_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 256 free (0 \nchunks); 768 used\npg_namespace_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 \nused\npg_namespace_nspname_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\npg_language_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_language_name_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 \nused\npg_inherits_relid_seqno_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_index_indexrelid_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\npg_authid_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_authid_rolname_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\npg_database_datname_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\npg_conversion_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\npg_conversion_name_nsp_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_conversion_default_index: 1024 total in 1 blocks; 192 free (0 \nchunks); 832 used\npg_class_relname_nsp_index: 1024 total in 1 blocks; 328 free (0 chunks); \n696 used\npg_class_oid_index: 1024 total in 1 blocks; 392 free (0 chunks); 632 used\npg_cast_source_target_index: 1024 total in 1 
blocks; 328 free (0 \nchunks); 696 used\npg_attribute_relid_attnum_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_attribute_relid_attnam_index: 1024 total in 1 blocks; 328 free (0 \nchunks); 696 used\npg_amproc_opc_proc_index: 1024 total in 1 blocks; 256 free (0 chunks); \n768 used\npg_amop_opr_opc_index: 1024 total in 1 blocks; 328 free (0 chunks); 696 used\npg_amop_opc_strat_index: 1024 total in 1 blocks; 256 free (0 chunks); \n768 used\npg_aggregate_fnoid_index: 1024 total in 1 blocks; 392 free (0 chunks); \n632 used\nMdSmgr: 8192 total in 1 blocks; 7312 free (0 chunks); 880 used\nLockTable (locallock hash): 8192 total in 1 blocks; 3912 free (0 \nchunks); 4280 used\nTimezones: 47592 total in 2 blocks; 5968 free (0 chunks); 41624 used\nErrorContext: 8192 total in 1 blocks; 8176 free (4 chunks); 16 used\nERROR: out of memory\nDETAIL: Failed on request of size 98.\n\n*Indexes*\nCREATE INDEX rpt_verrichting_idx1\n ON rpt.rpt_verrichting\n USING btree\n (dbcnr)\n TABLESPACE rpt_index;\n\nCREATE INDEX rpt_verrichting_idx2\n ON rpt.rpt_verrichting\n USING btree\n (vc_patientnr)\n TABLESPACE rpt_index;\n\nCREATE INDEX rpt_dbc_traject_idx1\n ON rpt.rpt_dbc_traject\n USING btree\n (vc_patientnr, agb_specialisme_nr)\n TABLESPACE rpt_index_all;\n\n",
"msg_date": "Wed, 12 Jul 2006 09:38:30 +0200",
"msg_from": "nicky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Out of Memory Problem. "
}
] |
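The memory-context dump above points at the hashed aggregate: ExecutorState (~536 MB), AggContext (~381 MB) and TupleHashTable (~104 MB) together approach the 1 GB datasize ulimit the poster flagged, and the hash table was sized for the ~106k groups the planner expected while the real data evidently produces far more. In 8.1 a hashed aggregate cannot spill to disk, so one workaround that is sometimes suggested (a sketch, not a verified fix) is to force a sort-based GroupAggregate for this one statement and use a smaller work_mem, which also makes the planner less inclined to choose the hashed plan in the first place:

  BEGIN;
  SET LOCAL enable_hashagg = off;   -- fall back to sort + GroupAggregate; sorts can spill to disk
  SET LOCAL work_mem = 131072;      -- 128 MB (value is in kB on 8.1) instead of the global 512 MB
  INSERT INTO rpt.rpt_verrichting_dbc (verrichting_id, verrichting_secid, dbcnr, vc_dbcnr)
  SELECT t1.verrichting_id, t1.verrichting_secid, t1.dbcnr, max(t1.vc_dbcnr) AS vc_dbcnr
  FROM rpt.rpt_verrichting t1, rpt.rpt_dbc_traject t00
  WHERE t1.vc_patientnr = t00.vc_patientnr
    AND t1.vc_agb_specialisme_nr_toek = t00.agb_specialisme_nr
    AND t1.verrichtingsdatum BETWEEN t00.begindat_dbc
        AND COALESCE(t00.einddat_dbc, t00.begindat_dbc + interval '365 days')
  GROUP BY t1.verrichting_id, t1.verrichting_secid, t1.dbcnr;
  COMMIT;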
[
{
"msg_contents": "Hello!\n\nWe're facing a performance problem here in a Oracle 10g -> PostgreSQL \nenvironment. The Oracle DB accesses the PostgreSQL DB via UnixODBC + \npsqlODBC.\n\nThere are some 100 records on the Oracle DB to be updated with data \nobtained from a view of the PostgreSQL DB. The fetched data accumulates to \na few kB's and the update takes some 5 - 7 hours. During this time the \nPostgreSQL machine is running at approx. 100% CPU usage.\nIf a select for the same data is issued from the Oracle DB the statement is \nexecuted in half a second.\n\nIs this the correct place to issue this problem?\n\nHow can I trace down the cause for this performance problem?\n\nThanx in advance!\n\nRegards, \nThomas Radnetter\n\np.s. Mr. Ludek Finstrle, can you help again? \n\n-- \n\n \"Feel free\" – 10 GB Mailbox, 100 FreeSMS/Monat ...\n Jetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail \n-- \n\n\n\"Feel free\" – 10 GB Mailbox, 100 FreeSMS/Monat ...\nJetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail\n\n\n\n\n\n\n\n Hello!We're facing a performance problem here in a Oracle 10g -> PostgreSQL environment. The Oracle DB accesses the PostgreSQL DB via UnixODBC + psqlODBC.There are some 100 records on the Oracle DB to be updated with data obtained from a view of the PostgreSQL DB. The fetched data accumulates to a few kB's and the update takes some 5 - 7 hours. During this time the PostgreSQL machine is running at approx. 100% CPU usage.If a select for the same data is issued from the Oracle DB the statement is executed in half a second.Is this the correct place to issue this problem?How can I trace down the cause for this performance problem?Thanx in advance!Regards, Thomas Radnetterp.s. Mr. Ludek Finstrle, can you help again?\n -- \"Feel free\" – 10 GB Mailbox, 100 FreeSMS/Monat ... Jetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail \n-- \n\"Feel free\" – 10 GB Mailbox, 100 FreeSMS/Monat ...\nJetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail",
"msg_date": "Wed, 12 Jul 2006 10:33:44 +0200",
"msg_from": "\"Thomas Radnetter\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance Problem between Ora 10g and Psql"
},
{
"msg_contents": "Thomas,\n\nOn 7/12/06, Thomas Radnetter <[email protected]> wrote:\n> Is this the correct place to issue this problem?\n\nIt is if your issue is due to a PostgreSQL performance problem.\n\n> How can I trace down the cause for this performance problem?\n\nThe first thing to do is to determine if it is a problem due to the\nOracle -> ODBC -> PostgreSQL thing or if it is a problem with the\nquery. My advice is to set log_min_duration_statement to 0 in your\npostgresql.conf (and set the logger so that you can see the log output\nsomewhere). Then you'll see if your query is slow.\n\nIf your query is slow, post the output of an explain analyze on the\nlist with all the relevant information (structure of the concerned\ntables, indexes, size...).\n\nIf not, it's probably more an ODBC problem.\n\nRegards,\n\n--\nGuillaume Smet\nOpen Wide\n",
"msg_date": "Wed, 12 Jul 2006 11:47:06 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem between Ora 10g and Psql"
},
{
"msg_contents": "On Wednesday 12 July 2006 01:33, Thomas Radnetter wrote:\n> Hello!\n>\n> We're facing a performance problem here in a Oracle 10g -> PostgreSQL\n> environment. The Oracle DB accesses the PostgreSQL DB via UnixODBC +\n> psqlODBC.\n>\n> There are some 100 records on the Oracle DB to be updated with data\n> obtained from a view of the PostgreSQL DB. The fetched data accumulates to\n> a few kB's and the update takes some 5 - 7 hours. During this time the\n> PostgreSQL machine is running at approx. 100% CPU usage.\n> If a select for the same data is issued from the Oracle DB the statement is\n> executed in half a second.\n>\n> Is this the correct place to issue this problem?\n\nSure but you haven't provided a TON of information that would be needed to \nhelp?\n\nIf you execute the same query that is being executed via ODBC, via psql is the \nperformance problem still there?\n\nIf so, you probably have a postgresql issue, otherwise look at Oracle or ODBC.\n\nIf it is a PostgreSQL issue:\n Do you have indexes applied?\n What is the explain plan?\n When was the last time you analyzed?\n What about vacuum?\n\netc. etc.\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n> How can I trace down the cause for this performance problem?\n>\n> Thanx in advance!\n>\n> Regards,\n> Thomas Radnetter\n>\n> p.s. Mr. Ludek Finstrle, can you help again?\n>\n> --\n>\n> \"Feel free\" – 10 GB Mailbox, 100 FreeSMS/Monat ...\n> Jetzt GMX TopMail testen: http://www.gmx.net/de/go/topmail\n\n-- \n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 12 Jul 2006 07:33:52 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance Problem between Ora 10g and Psql"
}
] |
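A minimal sketch of the check Guillaume describes, assuming superuser access on the PostgreSQL server; some_view is only a placeholder for whatever view the Oracle job actually reads:

  -- in postgresql.conf (then reload):  log_min_duration_statement = 0
  -- afterwards, take the exact statement the ODBC layer logs as slow and inspect its plan:
  EXPLAIN ANALYZE
  SELECT * FROM some_view;   -- placeholder: substitute the logged statement verbatim

If the logged durations are short but the Oracle-side update still takes hours, the time is being spent in the ODBC round trips (for example one query per row) rather than in PostgreSQL itself.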
[
{
"msg_contents": "\nHi,\n\nplease help me with the following problem:\n\nI have noticed a strange performance behaviour using a commit statement on two different machines. On one of the machines the commit is many times faster than on the other machine which has faster hardware. Server and client are running always on the same machine.\n\nServer version (same on both machines): PostgreSQL 8.1.3. (same binaries as well)\n\nPC1:\r\n----\nPentium 4 (2.8 GHz)\n1GB RAM\nIDE-HDD (approx. 50 MB/s rw), fs: ext3\nMandrake Linux: Kernel 2.4.22\n\n\nPC2:\n----\nPentium 4 (3.0 GHz)\n2GB RAM\nSCSI-HDD (approx. 65 MB/s rw), fs: ext3\nMandrake Linux: Kernel 2.4.32\n\n\nBoth installations of the database have the same configuration, different from default are only the following settings on both machines:\n\nshared_buffers = 20000\nlisten_addresses = '*'\nmax_stack_depth = 4096\n\n\npgbench gives me the following results:\nPC1:\n----\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 10/10\ntps = 269.905533 (including connections establishing)\ntps = 293.625393 (excluding connections establishing)\n\nPC2:\n----\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 1\nnumber of transactions per client: 10\nnumber of transactions actually processed: 10/10\ntps = 46.061935 (including connections establishing)\ntps = 46.519634 (excluding connections establishing)\n\n\nMy own performance test sql script which inserts and (auto)commits some data into a simple table produces the following log output in the server log:\n\nPC1:\n----\nLOG: duration: 1.441 ms statement: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\nSTATEMENT: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\n\nPC2:\n----\nLOG: duration: 29.979 ms statement: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\nSTATEMENT: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\n\n\nI created a 'strace' one both machines which is interesting:\n\nOpening the socket:\n-------------------\nPC1: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 10 <0.000021>\nPC2: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 8 <0.000015>\n\nPC1: bind(10, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"0.0.0.0\")}, 16) = 0 <0.000007>\nPC2: bind (8, {sin_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"0.0.0.0\")}}, 16) = 0 <0.000007>\n\nPC1: getsockname(10, {sa_family=AF_INET, sin_port=htons(32820), sin_addr=inet_addr(\"0.0.0.0\")}, [16]) = 0 <0.000005>\nPC2: getsockname( 8, {sin_family=AF_INET, sin_port=htons(36219), sin_addr=inet_addr(\"0.0.0.0\")}}, [16]) = 0 <0.000005>\n\nPC1: connect(10, {sa_family=AF_INET, sin_port=htons(5432), sin_addr=inet_addr(\"127.0.0.1\")}, 16) = 0 <0.000440>\nPC2: connect( 8, {sin_family=AF_INET, sin_port=htons(5432), sin_addr=inet_addr(\"127.0.0.1\")}}, 16) = 0 <0.000394>\n\nPC1: setsockopt(10, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.000006>\nPC2: setsockopt (8, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.000004>\n\n\nInserting and commiting the data: <exec. 
time>\n---------------------------------\nPC1:\n----\nsend(10, \"B\\....\\0<\\0INSERT INTO performance_test VAL\"..., 175, 0) = 175 <0.000015>\nrecv(10, \"2\\....0\\17INSERT 0 1\\0Z\\0\\0\\0\\5T\", 8192, 0) = 53 <0.000007>\nsend(10, \"B\\0\\0\\0\\17\\0S_2\\0\\0\\0\\0\\0\\0\\0E\\0\\0\\0\\t\\0\\0\\0\\0\\1S\\0\\0\\0\\4\", 31, 0) = 31 <0.000011>\nrecv(10, \"2\\0\\0\\0\\4C\\0\\0\\0\\vCOMMIT\\0Z\\0\\0\\0\\5I\", 8192, 0) = 23 <0.000211>\n\nPC2:\n----\nsend(8, \"B\\....\\0<\\0INSERT INTO performance_test VAL\"..., 175, 0) = 175 <0.000014>\nrecv(8, \"2\\....0\\17INSERT 0 1\\0Z\\0\\0\\0\\5T\", 8192, 0) = 53 <0.000005>\nsend(8, \"B\\0\\0\\0\\17\\0S_2\\0\\0\\0\\0\\0\\0\\0E\\0\\0\\0\\t\\0\\0\\0\\0\\1S\\0\\0\\0\\4\", 31, 0) = 31 <0.000009>\nrecv(8, \"2\\0\\0\\0\\4C\\0\\0\\0\\vCOMMIT\\0Z\\0\\0\\0\\5I\", 8192, 0) = 23 <0.0253>\n\nEvery command is a bit faster on PC2 except the last one which is many times slower.\nAny help or hint where to look at would be highly appreciated because I'm running out of ideas ;-).\n\n\nregards,\nChristian\n\n\n******************************************\nThe information contained in, or attached to, this e-mail, may contain confidential information and is intended solely for the use of the individual or entity to whom they are addressed and may be subject to legal privilege. If you have received this e-mail in error you should notify the sender immediately by reply e-mail, delete the message from your system and notify your system manager. Please do not copy it for any purpose, or disclose its contents to any other person. The views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of the company. The recipient should check this e-mail and any attachments for the presence of viruses. The company accepts no liability for any damage caused, directly or indirectly, by any virus transmitted in this email.\n******************************************\n",
"msg_date": "Wed, 12 Jul 2006 10:16:40 -0600",
"msg_from": "\"Koth, Christian (DWBI)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Commit slower on faster PC"
},
{
"msg_contents": "The IDE drive is almost certainly lying about flushing data to the disk.\nLower-end consumer drives often do.\n\nWhat this means is that commits will be a whole lot faster, but the\ndatabase loses its ACID guarantees, because a power failure at the wrong\nmoment could corrupt the whole database.\n\nIf you don't care about your data and want the SCSI drive to perform\nfast just like the IDE drive, you can set fsync = off in your\nconfiguration file.\n\n-- Mark\n\nOn Wed, 2006-07-12 at 10:16 -0600, Koth, Christian (DWBI) wrote:\n> Hi,\n> \n> please help me with the following problem:\n> \n> I have noticed a strange performance behaviour using a commit statement on two different machines. On one of the machines the commit is many times faster than on the other machine which has faster hardware. Server and client are running always on the same machine.\n> \n> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as well)\n> \n> PC1:\n> ----\n> Pentium 4 (2.8 GHz)\n> 1GB RAM\n> IDE-HDD (approx. 50 MB/s rw), fs: ext3\n> Mandrake Linux: Kernel 2.4.22\n> \n> \n> PC2:\n> ----\n> Pentium 4 (3.0 GHz)\n> 2GB RAM\n> SCSI-HDD (approx. 65 MB/s rw), fs: ext3\n> Mandrake Linux: Kernel 2.4.32\n> \n> \n> Both installations of the database have the same configuration, different from default are only the following settings on both machines:\n> \n> shared_buffers = 20000\n> listen_addresses = '*'\n> max_stack_depth = 4096\n> \n> \n> pgbench gives me the following results:\n> PC1:\n> ----\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 1\n> number of transactions per client: 10\n> number of transactions actually processed: 10/10\n> tps = 269.905533 (including connections establishing)\n> tps = 293.625393 (excluding connections establishing)\n> \n> PC2:\n> ----\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 1\n> number of transactions per client: 10\n> number of transactions actually processed: 10/10\n> tps = 46.061935 (including connections establishing)\n> tps = 46.519634 (excluding connections establishing)\n> \n> \n> My own performance test sql script which inserts and (auto)commits some data into a simple table produces the following log output in the server log:\n> \n> PC1:\n> ----\n> LOG: duration: 1.441 ms statement: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\n> STATEMENT: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\n> \n> PC2:\n> ----\n> LOG: duration: 29.979 ms statement: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\n> STATEMENT: INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');\n> \n> \n> I created a 'strace' one both machines which is interesting:\n> \n> Opening the socket:\n> -------------------\n> PC1: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 10 <0.000021>\n> PC2: socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 8 <0.000015>\n> \n> PC1: bind(10, {sa_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"0.0.0.0\")}, 16) = 0 <0.000007>\n> PC2: bind (8, {sin_family=AF_INET, sin_port=htons(0), sin_addr=inet_addr(\"0.0.0.0\")}}, 16) = 0 <0.000007>\n> \n> PC1: getsockname(10, {sa_family=AF_INET, sin_port=htons(32820), sin_addr=inet_addr(\"0.0.0.0\")}, [16]) = 0 <0.000005>\n> PC2: getsockname( 8, {sin_family=AF_INET, sin_port=htons(36219), sin_addr=inet_addr(\"0.0.0.0\")}}, [16]) = 0 <0.000005>\n> \n> PC1: connect(10, {sa_family=AF_INET, sin_port=htons(5432), sin_addr=inet_addr(\"127.0.0.1\")}, 16) = 0 <0.000440>\n> PC2: connect( 8, 
{sin_family=AF_INET, sin_port=htons(5432), sin_addr=inet_addr(\"127.0.0.1\")}}, 16) = 0 <0.000394>\n> \n> PC1: setsockopt(10, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.000006>\n> PC2: setsockopt (8, SOL_TCP, TCP_NODELAY, [1], 4) = 0 <0.000004>\n> \n> \n> Inserting and commiting the data: <exec. time>\n> ---------------------------------\n> PC1:\n> ----\n> send(10, \"B\\....\\0<\\0INSERT INTO performance_test VAL\"..., 175, 0) = 175 <0.000015>\n> recv(10, \"2\\....0\\17INSERT 0 1\\0Z\\0\\0\\0\\5T\", 8192, 0) = 53 <0.000007>\n> send(10, \"B\\0\\0\\0\\17\\0S_2\\0\\0\\0\\0\\0\\0\\0E\\0\\0\\0\\t\\0\\0\\0\\0\\1S\\0\\0\\0\\4\", 31, 0) = 31 <0.000011>\n> recv(10, \"2\\0\\0\\0\\4C\\0\\0\\0\\vCOMMIT\\0Z\\0\\0\\0\\5I\", 8192, 0) = 23 <0.000211>\n> \n> PC2:\n> ----\n> send(8, \"B\\....\\0<\\0INSERT INTO performance_test VAL\"..., 175, 0) = 175 <0.000014>\n> recv(8, \"2\\....0\\17INSERT 0 1\\0Z\\0\\0\\0\\5T\", 8192, 0) = 53 <0.000005>\n> send(8, \"B\\0\\0\\0\\17\\0S_2\\0\\0\\0\\0\\0\\0\\0E\\0\\0\\0\\t\\0\\0\\0\\0\\1S\\0\\0\\0\\4\", 31, 0) = 31 <0.000009>\n> recv(8, \"2\\0\\0\\0\\4C\\0\\0\\0\\vCOMMIT\\0Z\\0\\0\\0\\5I\", 8192, 0) = 23 <0.0253>\n> \n> Every command is a bit faster on PC2 except the last one which is many times slower.\n> Any help or hint where to look at would be highly appreciated because I'm running out of ideas ;-).\n> \n> \n> regards,\n> Christian\n> \n> \n> ******************************************\n> The information contained in, or attached to, this e-mail, may contain confidential information and is intended solely for the use of the individual or entity to whom they are addressed and may be subject to legal privilege. If you have received this e-mail in error you should notify the sender immediately by reply e-mail, delete the message from your system and notify your system manager. Please do not copy it for any purpose, or disclose its contents to any other person. The views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of the company. The recipient should check this e-mail and any attachments for the presence of viruses. The company accepts no liability for any damage caused, directly or indirectly, by any virus transmitted in this email.\n> ******************************************\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n",
"msg_date": "Wed, 12 Jul 2006 10:26:31 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit slower on faster PC"
},
{
"msg_contents": "On Wed, 12 Jul 2006 10:16:40 -0600\n\"Koth, Christian (DWBI)\" <[email protected]> wrote:\n> I have noticed a strange performance behaviour using a commit statement on two different machines. On one of the machines the commit is many times faster than on the other machine which has faster hardware. Server and client are running always on the same machine.\n> \n> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as well)\n> \n> PC1:\n> ----\n> Pentium 4 (2.8 GHz)\n> 1GB RAM\n> IDE-HDD (approx. 50 MB/s rw), fs: ext3\n> Mandrake Linux: Kernel 2.4.22\n> \n> \n> PC2:\n> ----\n> Pentium 4 (3.0 GHz)\n> 2GB RAM\n> SCSI-HDD (approx. 65 MB/s rw), fs: ext3\n> Mandrake Linux: Kernel 2.4.32\n> \n> \n> Both installations of the database have the same configuration, different from default are only the following settings on both machines:\n> \n> shared_buffers = 20000\n> listen_addresses = '*'\n> max_stack_depth = 4096\n> \n> \n> pgbench gives me the following results:\n> PC1:\n> ----\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 1\n> number of transactions per client: 10\n> number of transactions actually processed: 10/10\n> tps = 269.905533 (including connections establishing)\n> tps = 293.625393 (excluding connections establishing)\n> \n> PC2:\n> ----\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> number of clients: 1\n> number of transactions per client: 10\n> number of transactions actually processed: 10/10\n> tps = 46.061935 (including connections establishing)\n> tps = 46.519634 (excluding connections establishing)\n\nI'm not sure 10 transactions is enough of a test. You could just be\nseeing the result of your IDE drive lying to you about actually writing\nyour data. There may be other considerations but I would start with\nchecking with 10,000 or 100,000 transactions to overcome the driver\nbuffering.\n\n-- \nD'Arcy J.M. Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Wed, 12 Jul 2006 13:26:57 -0400",
"msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit slower on faster PC"
},
{
"msg_contents": "On Wednesday 12 July 2006 09:16, Koth, Christian (DWBI) wrote:\n> Hi,\n>\n> please help me with the following problem:\n>\n> I have noticed a strange performance behaviour using a commit statement on\n> two different machines. On one of the machines the commit is many times\n> faster than on the other machine which has faster hardware. Server and\n> client are running always on the same machine.\n>\n> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as\n> well)\n\nHeh, I bet you are being bit by the cache on the IDE drive. What happens if \nyou turn fsync off?\n\nSincerely,\n\nJoshua D. Drake\n-- \n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 12 Jul 2006 10:46:05 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit slower on faster PC"
},
{
"msg_contents": "On Wed, Jul 12, 2006 at 10:16:40 -0600,\n \"Koth, Christian (DWBI)\" <[email protected]> wrote:\n> \n> I have noticed a strange performance behaviour using a commit statement on two different machines. On one of the machines the commit is many times faster than on the other machine which has faster hardware. Server and client are running always on the same machine.\n> \n> Server version (same on both machines): PostgreSQL 8.1.3. (same binaries as well)\n> \n> PC1:\n> ----\n> IDE-HDD (approx. 50 MB/s rw), fs: ext3\n> \n> PC2:\n> ----\n> SCSI-HDD (approx. 65 MB/s rw), fs: ext3\n> \n> Both installations of the database have the same configuration, different from default are only the following settings on both machines:\n> \n> pgbench gives me the following results:\n> PC1:\n> ----\n> tps = 293.625393 (excluding connections establishing)\n> \n> PC2:\n> ----\n> tps = 46.519634 (excluding connections establishing)\n\nHave you checked to see if the ide drive is lying about having written the\ndata to the platters?\n",
"msg_date": "Wed, 12 Jul 2006 14:25:39 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Commit slower on faster PC"
}
] |
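What both replies are getting at: every COMMIT must wait for a WAL flush to stable storage, which an honest SCSI drive does at roughly once per platter rotation (tens of commits per second), while a write-caching IDE drive acknowledges immediately. A small sketch of the same idea without touching fsync, using the performance_test table from the strace excerpts (the literal values are made up, following the pattern in the log):

  -- autocommit: one WAL flush per statement, so the rate is bounded by the drive's real flush speed
  INSERT INTO performance_test VALUES (500938362, 'Xawhefjmd');
  INSERT INTO performance_test VALUES (500938363, 'Xawhefjme');

  -- batched: the same rows, but only one flush, at COMMIT
  BEGIN;
  INSERT INTO performance_test VALUES (500938364, 'Xawhefjmf');
  INSERT INTO performance_test VALUES (500938365, 'Xawhefjmg');
  COMMIT;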
[
{
"msg_contents": "I can't find an address to complain about the mailing list itself, so apologies but I'm posting directly to this list. Every time I post to this group, I get returned mails about OTHER subscribers' invalid accounts, like the one below. What's up? This seems to be a new phenomenon. Should the [email protected] be getting these and discarding them?\n\nThanks,\nCraig\n\n\n-------- Original Message --------\nSubject: Delivery Status Notification (Failure)\nDate: Wed, 12 Jul 2006 13:15:16 -0400\nFrom: [email protected]\nTo: [email protected]\n\nThis is an automatically generated Delivery Status Notification.\n\nDelivery to the following recipients failed.\n\n [email protected]\n\n\n\n\n",
"msg_date": "Wed, 12 Jul 2006 09:39:54 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "[Fwd: Delivery Status Notification (Failure)]"
},
{
"msg_contents": "\nOn Jul 12, 2006, at 11:39 , Craig A. James wrote:\n\n> I can't find an address to complain about the mailing list itself, \n> so apologies but I'm posting directly to this list. Every time I \n> post to this group, I get returned mails about OTHER subscribers' \n> invalid accounts, like the one below.\n\nIs this when you're replying to a post or creating a new post? If the \nformer, and you're using reply-to-all, you'll be sending one message \nto the list and another directly to the poster of the message you're \nresponding to. The directly sent message is outside of the list \nentirely, so any returned mail is also outside of the list. I've seen \nthis happen occasionally myself. Could this be what you're seeing? \nAFAICT, such messages sent to the list *do* get filtered out.\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n",
"msg_date": "Wed, 12 Jul 2006 15:23:06 -0500",
"msg_from": "Michael Glaesemann <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Delivery Status Notification (Failure)]"
},
{
"msg_contents": "> I can't find an address to complain about the mailing list itself, so apologies but I'm posting\n> directly to this list. Every time I post to this group, I get returned mails about OTHER\n> subscribers' invalid accounts, like the one below. What's up? This seems to be a new\n> phenomenon. Should the [email protected] be getting these and discarding them?\n> \n> Thanks,\n> Craig\n\nDoes the message come from postgresql.org or is the bounced email coming from these specific users\nwhen you include them in reply-all?\n\nRegards,\n\nRichard Broersma jr.\n\n\n",
"msg_date": "Wed, 12 Jul 2006 13:38:55 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Delivery Status Notification (Failure)]"
},
{
"msg_contents": "I wrote:\n>> I can't find an address to complain about the mailing list itself, so \n>> apologies but I'm posting directly to this list. Every time I post to \n>> this group, I get returned mails about OTHER subscribers' invalid \n>> accounts, like the one below.\n\nMichael Glaesemann replied:\n> Is this when you're replying to a post or creating a new post? If the \n> former, and you're using reply-to-all, you'll be sending one message to \n> the list and another directly to the poster of the message you're \n> responding to. \n\nAnd Richard Broersma Jr replied:\n> Does the message come from postgresql.org or is the bounced email coming from these specific users\n> when you include them in reply-all?\n\nThanks to both for your answers. But no -- It's for new posts. In fact, when writing the email that started this thread, it was only to [email protected] (I double-checked by using emacs on my Thunderbird \"Sent\" folder), yet I still got another \"undeliverable\" reply along with your message:\n\n> This is an automatically generated Delivery Status Notification.\n> Delivery to the following recipients failed.\n> [email protected]\n\n\nCraig\n\n",
"msg_date": "Wed, 12 Jul 2006 13:47:38 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: Delivery Status Notification (Failure)]"
},
{
"msg_contents": "> > This is an automatically generated Delivery Status Notification.\n> > Delivery to the following recipients failed.\n> > [email protected]\n\nyes, I got the same thing that you did here. only i got when I replied all to your email. Are you\nsure this individual wasn't listed in any of your CC or BCC addresses?\n\nThis is an automatically generated Delivery Status Notification.\nDelivery to the following recipients failed.\n [email protected]\n\nReporting-MTA: dns;enpocket-exch.usaemail.enpocket.com\nReceived-From-MTA: dns;middx.enpocketbureau.com\nArrival-Date: Wed, 12 Jul 2006 18:10:58 -0400\n\nFinal-Recipient: rfc822;[email protected]\nAction: failed\nStatus: 5.1.1\n\n\n",
"msg_date": "Wed, 12 Jul 2006 17:54:16 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Delivery Status Notification (Failure)]"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n> Thanks to both for your answers. But no -- It's for new posts. In fact, when writing the email that started this thread, it was only to [email protected] (I double-checked by using emacs on my Thunderbird \"Sent\" folder), yet I still got another \"undeliverable\" reply along with your message:\n\n>> This is an automatically generated Delivery Status Notification.\n>> Delivery to the following recipients failed.\n>> [email protected]\n\nThis means that usaemail.enpocket.com has seriously misconfigured mail\nsoftware --- it's bouncing undeliverable messages to the From: address\nrather than to the envelope sender (which will be pgsql-performance-owner\nfor a message coming through the pgsql-performance list). This is\ngenerally considered sufficiently unfriendly behavior that proof of it\nis grounds for instant ejection from a mailing list, because the From:\naddress is someone who has no control over where the mailing list tries\nto deliver to. Certainly it's grounds for ejection from any PG list.\nSend the bounce message with full headers to the list admin (Marc\nFournier, scrappy at postgresql.org) and [email protected]\nwill soon be an ex-subscriber.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2006 23:07:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Delivery Status Notification (Failure)] "
}
] |
[
{
"msg_contents": "I notice that non-printables in bytea values are being spit out by pg_dump\nusing escaped octet sequences even when the \"-Fc\" option is present\nspecifying use of the custom binary output format rather than plain text\nformat. This bloats the size of bytea values in the dump file by a factor\nof 3+ typically. When you have alot of large bytea values in your db this\ncan add up very quickly.\n\nShouldn't the custom format be smart and just write the raw bytes to the\noutput file rather than trying to make them ascii readable?\n\nThanks.\n\nSteve McWilliams\nSoftware Engineer\nEmprisa Networks\n703-691-0433x21\[email protected]\n\nThe information contained in this communication is intended only for the\nuse of the recipient named above, and may be legally privileged,\nconfidential and exempt from disclosure under applicable law. If the\nreader of this communication is not the intended recipient, you are hereby\nnotified that any dissemination, distribution or copying of this\ncommunication, or any of its contents, is strictly prohibited. If you have\nreceived this communication in error, please resend this communication to\nthe sender and delete the original communication and any copy of it from\nyour computer system. Thank you.\n\n\n\n",
"msg_date": "Wed, 12 Jul 2006 14:36:27 -0400 (EDT)",
"msg_from": "\"Steve McWilliams\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "size of pg_dump files containing bytea values"
},
{
"msg_contents": "\"Steve McWilliams\" <[email protected]> writes:\n> I notice that non-printables in bytea values are being spit out by pg_dump\n> using escaped octet sequences even when the \"-Fc\" option is present\n> specifying use of the custom binary output format rather than plain text\n> format. This bloats the size of bytea values in the dump file by a factor\n> of 3+ typically.\n\nNo, because the subsequent compression step should buy back most of\nthat.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jul 2006 22:53:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: size of pg_dump files containing bytea values "
},
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> \"Steve McWilliams\" <[email protected]> writes:\n> > I notice that non-printables in bytea values are being spit out by pg_dump\n> > using escaped octet sequences even when the \"-Fc\" option is present\n> > specifying use of the custom binary output format rather than plain text\n> > format. This bloats the size of bytea values in the dump file by a factor\n> > of 3+ typically.\n> \n> No, because the subsequent compression step should buy back most of\n> that.\n\nDidn't byteas used to get printed as hex? Even in psql they're now being\nprinted in the escaped octet sequence. When did this change?\n\n-- \ngreg\n\n",
"msg_date": "13 Jul 2006 12:30:50 -0400",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: size of pg_dump files containing bytea values"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> Didn't byteas used to get printed as hex?\n\nNo, not that I recall. I don't have anything older than 7.0 running,\nbut it behaves the same as now:\n\nplay=> select 'xyz\\\\001'::bytea;\n ?column?\n----------\n xyz\\001\n(1 row)\n\nplay=>\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2006 13:42:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: size of pg_dump files containing bytea values "
},
{
"msg_contents": "* Greg Stark:\n\n> Didn't byteas used to get printed as hex?\n\nNo, they didn't. It would be useful to support hexadecimal BYTEA\nliterals, though. Unfortunately, X'DEADBEEF' has already been taken\nby bit strings.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 14 Jul 2006 09:05:31 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: size of pg_dump files containing bytea values"
}
] |
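For a concrete sense of the inflation being discussed, a small example against an 8.1-era server (backslashes in the literal are doubled because they pass through the string parser before reaching the bytea input function):

  SELECT 'abc\\000\\377'::bytea AS printed_form,          -- prints as abc\000\377 (11 characters)
         length('abc\\000\\377'::bytea) AS stored_bytes;   -- 5 bytes actually stored

Only the text representation is inflated; as Tom notes, the compression that pg_dump applies in -Fc mode recovers most of that.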
[
{
"msg_contents": "I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls\nare extremely slow. I didn't need to preserve any old data so at this\npoint all my tables are empty. Just connecting to a db takes several\nseconds.\n\n \n\nWhen I was accidentally linking my app with the 7.3.4 libs but running\nthe 8.1.4 postmaster everything was fine.\n\n \n\nI know I'm not giving much to go on but I'm stumped. Can anyone suggest\nhow I might track down the cause of this problem?\n\n \n\nMedora Schauer\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nI have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls are extremely slow. I\ndidn’t need to preserve any old data so at this point all my tables are\nempty. Just connecting to a db takes several seconds.\n \nWhen I was accidentally linking my app with the 7.3.4 libs\nbut running the 8.1.4 postmaster everything was fine.\n \nI know I’m not giving much to go on but I’m\nstumped. Can anyone suggest how I might track down the cause of this problem?\n \nMedora Schauer",
"msg_date": "Wed, 12 Jul 2006 15:41:14 -0500",
"msg_from": "\"Medora Schauer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "hyper slow after upgrade to 8.1.4"
},
{
"msg_contents": "On Wednesday 12 July 2006 13:41, Medora Schauer wrote:\n> I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls\n> are extremely slow. I didn't need to preserve any old data so at this\n> point all my tables are empty. Just connecting to a db takes several\n> seconds.\n>\n>\n>\n> When I was accidentally linking my app with the 7.3.4 libs but running\n> the 8.1.4 postmaster everything was fine.\n>\n>\n>\n> I know I'm not giving much to go on but I'm stumped. Can anyone suggest\n> how I might track down the cause of this problem?\n\nanalyze?\n\nSincerely,\n\nJoshua D. Drake\n\n\n>\n>\n>\n> Medora Schauer\n\n-- \n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 12 Jul 2006 15:06:15 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyper slow after upgrade to 8.1.4"
},
{
"msg_contents": "On Wed, Jul 12, 2006 at 15:41:14 -0500,\n Medora Schauer <[email protected]> wrote:\n> I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access calls\n> are extremely slow. I didn't need to preserve any old data so at this\n> point all my tables are empty. Just connecting to a db takes several\n> seconds.\n> \n> I know I'm not giving much to go on but I'm stumped. Can anyone suggest\n> how I might track down the cause of this problem?\n\nThat connections are slow makes me think DNS is worth looking at. It might\nbe that reverse lookups are timing out.\n",
"msg_date": "Wed, 12 Jul 2006 21:53:52 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyper slow after upgrade to 8.1.4"
}
] |
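Joshua's one-word suggestion refers to planner statistics, which a dump/reload or fresh initdb leaves empty; it is a cheap first step after any major-version upgrade, even though it would not by itself explain slow connection setup (Bruno's DNS suspicion addresses that part):

  ANALYZE;           -- rebuild planner statistics for every table in the current database
  VACUUM ANALYZE;    -- or do both passes, also setting hint bits after a bulk reload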
[
{
"msg_contents": "\nHello all,\r\nthanks a lot for your help so far. You all where right about the IDE drive is\r\nsomehow caching the data. See the test results below. I also get different \r\ntps values every time I run pgbench on PC1 (between 300 and 80 tps for 100 transactions).\r\n\nI don't think it's a good idea to disable fsync even if ext3 (doing a sync also\r\nevery x second or so) and a UPS is used.\r\n\nChristian\n\n\n\nPC1 (IDE):\n----------\n\nfsynch on:\n----------\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 213.115558 (including connections establishing)\ntps = 214.710227 (excluding connections establishing)\n\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 10000/10000\ntps = 126.159130 (including connections establishing)\ntps = 126.163172 (excluding connections establishing)\n\n\nfsynch off:\n-----------\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 413.849044 (including connections establishing)\ntps = 419.028942 (excluding connections establishing)\n\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 10000/10000\ntps = 166.057838 (including connections establishing)\ntps = 166.064227 (excluding connections establishing)\n\n\n\nPC2 (SCSI):\n-----------\n\nfsynch on:\n----------\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 44.640785 (including connections establishing)\ntps = 44.684649 (excluding connections establishing)\n\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 10000/10000\ntps = 42.322486 (including connections establishing)\ntps = 42.324096 (excluding connections establishing)\n\n\nfsynch off:\n-----------\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 910.406861 (including connections establishing)\ntps = 925.428936 (excluding connections establishing)\n\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 10000/10000\ntps = 957.376543 (including connections establishing)\ntps = 957.603815 (excluding connections establishing)\n\n******************************************\nThe information contained in, or attached to, this e-mail, may contain confidential information and is intended solely for the use of the individual or entity to whom they are addressed and may be subject to legal privilege. If you have received this e-mail in error you should notify the sender immediately by reply e-mail, delete the message from your system and notify your system manager. Please do not copy it for any purpose, or disclose its contents to any other person. The views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of the company. The recipient should check this e-mail and any attachments for the presence of viruses. The company accepts no liability for any damage caused, directly or indirectly, by any virus transmitted in this email.\n******************************************\n",
"msg_date": "Thu, 13 Jul 2006 02:59:31 -0600",
"msg_from": "\"Koth, Christian (DWBI)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Commit slower on faster PC"
}
] |
[
{
"msg_contents": "> From: Bruno Wolff III [mailto:[email protected]]\n> Sent: Wednesday, July 12, 2006 8:54 PM\n> To: Medora Schauer\n> Cc: postgresql\n> Subject: Re: hyper slow after upgrade to 8.1.4\n> \n> On Wed, Jul 12, 2006 at 15:41:14 -0500,\n> Medora Schauer <[email protected]> wrote:\n> > I have just upgraded from 7.3.4 to 8.1.4 and now *all* db access\ncalls\n> > are extremely slow. I didn't need to preserve any old data so at\nthis\n> > point all my tables are empty. Just connecting to a db takes\nseveral\n> > seconds.\n> >\n> > I know I'm not giving much to go on but I'm stumped. Can anyone\nsuggest\n> > how I might track down the cause of this problem?\n> \n> That connections are slow makes me think DNS is worth looking at. It\nmight\n> be that reverse lookups are timing out.\n\nIt does seem to be network related. Using the 8.1.4 psql on a machine\nother than the db server connecting to a database takes ~11 secs. Using\nit on the db server the connection is virtually instantaneous. Using\nthe 7.3.4 psql (still the 8.1.4 postmaster) the connection is fast\nregardless of what machine I am on.\n\nThe pg.log contains the following at the top:\n\nLOG: could not create IPv6 socket: Address family not supported by\nprotocol\n\nI am using a 2.4.25 linux kernel. For reasons I can't get into I have\nno choice but to use this kernel. The config utility for the kernel\nshows support for the IPv6 protocol as \"experimental\" and is not\nincluded in my current build. I can try building a kernel that includes\nIPv6 but the \"experimental\" caveat is scary.\n\nCan it be that the connection delay is because first an IPv6 socket is\ntrying to be established and when that fails an IPv4 socket is created?\nIf so, is there any way to make 8.1.4 use only IPv4 sockets?\n\nBTW - The slowness seems to be only during db connection. Once the\nconnection is established queries seem to execute in a timely manner.\n\nMedora\n\n\n",
"msg_date": "Thu, 13 Jul 2006 08:22:46 -0500",
"msg_from": "\"Medora Schauer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hyper slow after upgrade to 8.1.4"
},
{
"msg_contents": "On Thu, Jul 13, 2006 at 08:22:46AM -0500, Medora Schauer wrote:\n> Can it be that the connection delay is because first an IPv6 socket is\n> trying to be established and when that fails an IPv4 socket is created?\n\nA sniffer like tcpdump or ethereal might reveal why connecting is\nso slow. The problem might be with DNS queries for AAAA (IPv6)\nrecords prior to queries for A (IPv4) records; see this thread from\nalmost a year ago:\n\nhttp://archives.postgresql.org/pgsql-general/2005-08/msg00216.php\n\n-- \nMichael Fuhr\n",
"msg_date": "Thu, 13 Jul 2006 07:54:21 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyper slow after upgrade to 8.1.4"
},
{
"msg_contents": "Michael Fuhr <[email protected]> writes:\n> On Thu, Jul 13, 2006 at 08:22:46AM -0500, Medora Schauer wrote:\n>> Can it be that the connection delay is because first an IPv6 socket is\n>> trying to be established and when that fails an IPv4 socket is created?\n\n> A sniffer like tcpdump or ethereal might reveal why connecting is\n> so slow.\n\nI'd try strace'ing the client process first --- whatever is slow might\nnot be exposed as TCP traffic. It does sound though that the problem\nis related to userland expecting IPv6 support that the kernel doesn't\nactually have.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jul 2006 13:11:56 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: hyper slow after upgrade to 8.1.4 "
}
] |
[
{
"msg_contents": "> From: Tom Lane [mailto:[email protected]]\n> Sent: Thursday, July 13, 2006 11:12 AM\n> \n> Michael Fuhr <[email protected]> writes:\n> > On Thu, Jul 13, 2006 at 08:22:46AM -0500, Medora Schauer wrote:\n> >> Can it be that the connection delay is because first an IPv6 socket\nis\n> >> trying to be established and when that fails an IPv4 socket is\ncreated?\n> \n> > A sniffer like tcpdump or ethereal might reveal why connecting is\n> > so slow.\n> \n> I'd try strace'ing the client process first --- whatever is slow might\n> not be exposed as TCP traffic. It does sound though that the problem\n> is related to userland expecting IPv6 support that the kernel doesn't\n> actually have.\n> \n> \t\t\tregards, tom lane\n\nGood idea Tom. Strace showed communications with a machine that didn't\nmake sense. Turns out someone had configured DNS on the \"slow\" machine\nbut the DNS server wasn't running. When I use the IP address of the PG\nserver rather than the name with psql, the connection is made quickly.\n\nThanks for all the help everyone,\n\nMedora Schauer\n\n\n",
"msg_date": "Thu, 13 Jul 2006 13:25:03 -0500",
"msg_from": "\"Medora Schauer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: hyper slow after upgrade to 8.1.4 "
}
] |
[
{
"msg_contents": "I'm doing a self join of some shipping data and wanted to get the best\nquery possible. The interesting table is the event table, and it has\nthe following structure:\n\n startnode int,\n endnode int,\n weight int,\n starttime timestamp,\n endtime timestamp\n\nand the query that I would like to run is:\n\nSELECT e1.endnode, count(*), sum(e1.weight) AS weight1, sum(e2.weight)\nAS weight2\nFROM event e1, event e2\nWHERE e1.endnode = e2.startnode AND e1.starttime < e2.starttime AND\ne2.starttime < e1.endtime\nGROUP BY e1.endnode\n\nAssuming that I have indexes on all the columns, should this query be\nable to make use of the indexes on starttime and endtime?\n\nThe \"best\" plan that I could see is a merge join between a sorted\nsequential scan on e2.startnode and an index scan on e1.endnode, which\nI figure takes care of the \"e1.endnode = e2.startnode\". The join\nfilter is then \"e1.starttime < e2.starttime AND e2.starttime <\ne1.endtime\" ... does this use an index? Can the planner to use a\nbitmap index scan to use the indexes on the start/endtimes in the join?\n\nTable is about 3GB.\n\n",
"msg_date": "14 Jul 2006 11:56:53 -0700",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Self-join query and index usage"
},
{
"msg_contents": "[email protected] writes:\n> and the query that I would like to run is:\n\n> SELECT e1.endnode, count(*), sum(e1.weight) AS weight1, sum(e2.weight)\n> AS weight2\n> FROM event e1, event e2\n> WHERE e1.endnode = e2.startnode AND e1.starttime < e2.starttime AND\n> e2.starttime < e1.endtime\n> GROUP BY e1.endnode\n\n> Assuming that I have indexes on all the columns, should this query be\n> able to make use of the indexes on starttime and endtime?\n\nThis is just really poorly suited for btree indexes. What you're\nlooking for is an interval overlap test, which is something that can be\nhandled by rtree or gist indexes, but you'll need to change the form of\nthe query ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jul 2006 15:19:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Self-join query and index usage "
}
] |
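A hedged sketch of the kind of rewrite Tom is pointing at, for an 8.1-era server: represent each row's (starttime, endtime) interval as a degenerate box so that an rtree (or GiST) index can drive the overlap test through the && operator. The span column and event_span_idx index are invented names for the example, and the strict < comparisons are kept as filters so the results match the original query:

  ALTER TABLE event ADD COLUMN span box;
  UPDATE event
     SET span = box(point(extract(epoch FROM starttime), 0),
                    point(extract(epoch FROM endtime),   0));
  CREATE INDEX event_span_idx ON event USING rtree (span);
  ANALYZE event;

  SELECT e1.endnode, count(*), sum(e1.weight) AS weight1, sum(e2.weight) AS weight2
  FROM event e1, event e2
  WHERE e1.endnode = e2.startnode
    AND e1.span && box(point(extract(epoch FROM e2.starttime), 0),
                       point(extract(epoch FROM e2.starttime), 0))   -- index-assisted overlap test
    AND e1.starttime < e2.starttime
    AND e2.starttime < e1.endtime
  GROUP BY e1.endnode;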
[
{
"msg_contents": "Hi all. I have a strange (and serious) problem with an application\nported from postgres 8.0 to 8.1.\n\nThe old installation is postgresql-8.0.4-2.FC4.1 running on a Fedora 4,\nthe new one is postgresql-8.1.4-1.FC5.1 running on a fedora 5.\n\nSome query is now _very_ slow. I've found some deep differences between\nquery plans.\n\nAs example. The query is:\n\nselect count(*) from orario_ap join registrazioni using(treno, data)\njoin personale using(personale_id) join ruoli using(ruolo_id) where\ndata=today_or) where data=today_orario();\n\norario_ap is a view.\n\nOn 8.0 the query runs in 138.146 ms\nOn 8.1 the query runs in 6761.112 ms\nOn 8.1 with nested loops disabled: 63.184 ms\n\nThis is not the only query affected. \n\nTwo notes: please cc answer directly to me, and I'm sorry, my english is\nalpha version.\n\n\nOn a 8.0 the plan is:\n\nrailcomm04=# explain analyze select count(*) from orario_ap join\nregistrazioni using(treno, data) join personale using(personale_id) join\nruoli using(ruolo_id) where data=today_or) where data=today_orario();\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1106.77..1106.77 rows=1 width=0) (actual\ntime=137.786..137.787 rows=1 loops=1)\n -> Merge Join (cost=1088.96..1105.66 rows=444 width=0) (actual\ntime=124.173..137.190 rows=349 loops=1)\n Merge Cond: ((\"outer\".tipo_treno = \"inner\".tipo_treno) AND\n(\"outer\".num_treno = \"inner\".num_treno) AND (\"outer\".orario =\n\"inner\".orario))\n -> Sort (cost=574.10..575.09 rows=395 width=26) (actual\ntime=97.647..98.010 rows=349 loops=1)\n Sort Key: o1.tipo_treno, o1.num_treno, o1.orario\n -> Hash Join (cost=28.45..557.06 rows=395 width=26)\n(actual time=35.326..93.415 rows=349 loops=1)\n Hash Cond: (\"outer\".ruolo_id = \"inner\".ruolo_id)\n -> Hash Join (cost=27.41..550.10 rows=395\nwidth=30) (actual time=12.827..69.411 rows=349 loops=1)\n Hash Cond: (\"outer\".personale_id =\n\"inner\".personale_id)\n -> Hash Join (cost=12.85..529.61 rows=395\nwidth=34) (actual time=10.453..65.365 rows=349 loops=1)\n Hash Cond: (\"outer\".treno =\n\"inner\".treno)\n -> Seq Scan on orario o1\n(cost=0.00..504.38 rows=843 width=33) (actual time=3.691..57.487\nrows=797 loops=1)\n Filter: ((seq_fermata = 1) AND\n(data = date((now() - '02:00:00'::interval))))\n -> Hash (cost=11.98..11.98 rows=349\nwidth=19) (actual time=2.665..2.665 rows=0 loops=1)\n -> Seq Scan on registrazioni\n(cost=0.00..11.98 rows=349 width=19) (actual time=0.029..2.042 rows=349\nloops=1)\n Filter: (date((now() -\n'02:00:00'::interval)) = data)\n -> Hash (cost=12.85..12.85 rows=685\nwidth=4) (actual time=2.350..2.350 rows=0 loops=1)\n -> Seq Scan on personale\n(cost=0.00..12.85 rows=685 width=4) (actual time=0.005..1.350 rows=685\nloops=1)\n -> Hash (cost=1.03..1.03 rows=3 width=4) (actual\ntime=22.479..22.479 rows=0 loops=1)\n -> Seq Scan on ruoli (cost=0.00..1.03\nrows=3 width=4) (actual time=22.461..22.468 rows=3 loops=1)\n -> Sort (cost=514.86..516.94 rows=831 width=26) (actual\ntime=26.493..27.490 rows=949 loops=1)\n Sort Key: o2.tipo_treno, o2.num_treno, o2.orario\n -> Seq Scan on orario o2 (cost=0.00..474.56 rows=831\nwidth=26) (actual time=0.056..17.398 rows=797 loops=1)\n Filter: ((orario_partenza IS NULL) AND (date((now()\n- '02:00:00'::interval)) = data))\n Total runtime: 138.146 ms\n\n\n\nOn a standard 8.1 is:\n\nrailcomm04=# explain analyze select count(*) from orario_ap join\nregistrazioni 
using(treno, data) join personale using(personale_id) join\nruoli using(ruolo_id) where data=today_orario();\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=500.45..500.46 rows=1 width=0) (actual\ntime=6760.876..6760.877 rows=1 loops=1)\n -> Nested Loop (cost=0.00..500.44 rows=1 width=0) (actual\ntime=5.915..6759.550 rows=349 loops=1)\n Join Filter: ((\"outer\".orario = \"inner\".orario) AND\n(\"outer\".num_treno = \"inner\".num_treno) AND (\"outer\".tipo_treno =\n\"inner\".tipo_treno))\n -> Nested Loop (cost=0.00..25.87 rows=1 width=72) (actual\ntime=0.124..42.617 rows=349 loops=1)\n -> Nested Loop (cost=0.00..20.15 rows=1 width=76)\n(actual time=0.106..34.330 rows=349 loops=1)\n -> Nested Loop (cost=0.00..14.12 rows=1 width=40)\n(actual time=0.045..12.037 rows=349 loops=1)\n Join Filter: (\"outer\".ruolo_id =\n\"inner\".ruolo_id)\n -> Seq Scan on registrazioni\n(cost=0.00..11.98 rows=2 width=44) (actual time=0.025..2.315 rows=349\nloops=1)\n Filter: (date((now() -\n'02:00:00'::interval)) = data)\n -> Seq Scan on ruoli (cost=0.00..1.03\nrows=3 width=4) (actual time=0.003..0.009 rows=3 loops=349)\n -> Index Scan using orario_pkey on orario o1\n(cost=0.00..6.02 rows=1 width=104) (actual time=0.053..0.056 rows=1\nloops=349)\n Index Cond: ((o1.treno = \"outer\".treno) AND\n(o1.seq_fermata = 1))\n Filter: (data = date((now() -\n'02:00:00'::interval)))\n -> Index Scan using personale_pkey on personale\n(cost=0.00..5.71 rows=1 width=4) (actual time=0.013..0.017 rows=1\nloops=349)\n Index Cond: (\"outer\".personale_id =\npersonale.personale_id)\n -> Seq Scan on orario o2 (cost=0.00..474.56 rows=1 width=72)\n(actual time=0.030..17.784 rows=797 loops=349)\n Filter: ((orario_partenza IS NULL) AND (date((now() -\n'02:00:00'::interval)) = data))\n Total runtime: 6761.112 ms\n\n\nOn a 8.1 with nested loops disabled:\n\nrailcomm04=# explain analyze select count(*) from orario_ap join\nregistrazioni using(treno, data) join personale using(personale_id) join\nruoli using(ruolo_id) where data=today_orario();\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=802.82..802.83 rows=1 width=0) (actual\ntime=62.309..62.310 rows=1 loops=1)\n -> Hash Join (cost=328.23..802.82 rows=1 width=0) (actual\ntime=44.443..61.867 rows=349 loops=1)\n Hash Cond: ((\"outer\".orario = \"inner\".orario) AND\n(\"outer\".num_treno = \"inner\".num_treno) AND (\"outer\".tipo_treno =\n\"inner\".tipo_treno))\n -> Seq Scan on orario o2 (cost=0.00..474.56 rows=1 width=72)\n(actual time=0.068..16.558 rows=797 loops=1)\n Filter: ((orario_partenza IS NULL) AND (date((now() -\n'02:00:00'::interval)) = data))\n -> Hash (cost=328.22..328.22 rows=1 width=72) (actual\ntime=38.479..38.479 rows=349 loops=1)\n -> Hash Join (cost=29.33..328.22 rows=1 width=72)\n(actual time=6.700..37.530 rows=349 loops=1)\n Hash Cond: (\"outer\".treno = \"inner\".treno)\n -> Index Scan using orario_pkey on orario o1\n(cost=0.00..298.88 rows=1 width=104) (actual time=0.069..29.033 rows=797\nloops=1)\n Index Cond: (seq_fermata = 1)\n Filter: (data = date((now() -\n'02:00:00'::interval)))\n -> Hash (cost=29.32..29.32 rows=1 width=36)\n(actual time=6.595..6.595 rows=349 loops=1)\n -> Hash Join (cost=13.04..29.32 rows=1\nwidth=36) (actual time=3.361..5.887 rows=349 loops=1)\n Hash Cond: 
(\"outer\".personale_id =\n\"inner\".personale_id)\n -> Seq Scan on personale\n(cost=0.00..12.85 rows=685 width=4) (actual time=0.013..1.098 rows=685\nloops=1)\n -> Hash (cost=13.04..13.04 rows=1\nwidth=40) (actual time=3.301..3.301 rows=349 loops=1)\n -> Hash Join (cost=1.04..13.04\nrows=1 width=40) (actual time=0.090..2.602 rows=349 loops=1)\n Hash Cond:\n(\"outer\".ruolo_id = \"inner\".ruolo_id)\n -> Seq Scan on\nregistrazioni (cost=0.00..11.98 rows=2 width=44) (actual\ntime=0.025..1.465 rows=349 loops=1)\n Filter: (date((now()\n- '02:00:00'::interval)) = data)\n -> Hash (cost=1.03..1.03\nrows=3 width=4) (actual time=0.040..0.040 rows=3 loops=1)\n -> Seq Scan on ruoli\n(cost=0.00..1.03 rows=3 width=4) (actual time=0.014..0.025 rows=3\nloops=1)\n Total runtime: 63.184 ms\n\n\nRegards,\nGabriele\n\n",
"msg_date": "Sat, 15 Jul 2006 16:14:11 +0200",
"msg_from": "Gabriele Turchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Big differences in plans between 8.0 and 8.1"
},
{
"msg_contents": "On Sat, Jul 15, 2006 at 04:14:11PM +0200, Gabriele Turchi wrote:\n> Hi all. I have a strange (and serious) problem with an application\n> ported from postgres 8.0 to 8.1.\n> \n> The old installation is postgresql-8.0.4-2.FC4.1 running on a Fedora 4,\n> the new one is postgresql-8.1.4-1.FC5.1 running on a fedora 5.\n> \n> Some query is now _very_ slow. I've found some deep differences between\n> query plans.\n\nHave you run ANALYZE in 8.1? Some of the row count estimates in\nthe 8.1 plan differ significantly from the actual number of rows\nreturned, while in the 8.0 plan the estimates are accurate. For\nexample, in one case the 8.0 plan shows 349 rows estimated, 349\nrows returned:\n\n -> Seq Scan on registrazioni (cost=0.00..11.98 rows=349 width=19) (actual time=0.029..2.042 rows=349 loops=1)\n Filter: (date((now() - '02:00:00'::interval)) = data)\n\nbut the 8.1 plan shows 2 rows estimated, 349 rows returned:\n\n -> Seq Scan on registrazioni (cost=0.00..11.98 rows=2 width=44) (actual time=0.025..2.315 rows=349 loops=1)\n Filter: (date((now() - '02:00:00'::interval)) = data)\n\nThis suggests that the 8.1 statistics are out of date, possibly\nbecause ANALYZE or VACUUM ANALYZE hasn't been run since the data\nwas loaded. Try running ANALYZE in 8.1 and post the new plans if\nthat doesn't help.\n\n-- \nMichael Fuhr\n",
"msg_date": "Sat, 15 Jul 2006 13:02:10 -0600",
"msg_from": "Michael Fuhr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
},
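A minimal sketch of the step suggested above, using the table and filter that appear in the thread's plans; re-running the query under EXPLAIN ANALYZE afterwards shows whether the planner's row estimate for registrazioni has caught up with the real row count:

```sql
-- Refresh statistics for just the suspect table (cheap, even if run often):
ANALYZE registrazioni;

-- Re-check the estimate against the actual row count for the same filter:
EXPLAIN ANALYZE
SELECT count(*)
FROM registrazioni
WHERE data = today_orario();
```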
{
"msg_contents": "Il giorno sab, 15/07/2006 alle 13.02 -0600, Michael Fuhr ha scritto:\n> On Sat, Jul 15, 2006 at 04:14:11PM +0200, Gabriele Turchi wrote:\n> > Hi all. I have a strange (and serious) problem with an application\n> > ported from postgres 8.0 to 8.1.\n> > \n> > The old installation is postgresql-8.0.4-2.FC4.1 running on a Fedora 4,\n> > the new one is postgresql-8.1.4-1.FC5.1 running on a fedora 5.\n> > \n> > Some query is now _very_ slow. I've found some deep differences between\n> > query plans.\n> \n> Have you run ANALYZE in 8.1? Some of the row count estimates in\n> the 8.1 plan differ significantly from the actual number of rows\n> returned, while in the 8.0 plan the estimates are accurate. For\n\nRunning an ANALYZE really change the plan, now it is fast as before\n(8.0).\n\nOn the production system a VACUUM FULL ANALYZE is run every morning\nafter a clean-up, when the \"registrazioni\" table is empty. During the\nday this table fills up (about 500 record any day), and apparently the\nperformances are free-falling very quickly. This behaviour has not\nchanged between the old and the new installation.\t\n\nCan you suggest an easy way to collect and keep up-to-date these\nstatistics in a very low-impact way?\n\nI'm stunned from a so big difference in execution time from a so small\ndifference in the records number...\n\n> example, in one case the 8.0 plan shows 349 rows estimated, 349\n> rows returned:\n> \n> -> Seq Scan on registrazioni (cost=0.00..11.98 rows=349 width=19) (actual time=0.029..2.042 rows=349 loops=1)\n> Filter: (date((now() - '02:00:00'::interval)) = data)\n> \n> but the 8.1 plan shows 2 rows estimated, 349 rows returned:\n> \n> -> Seq Scan on registrazioni (cost=0.00..11.98 rows=2 width=44) (actual time=0.025..2.315 rows=349 loops=1)\n> Filter: (date((now() - '02:00:00'::interval)) = data)\n> \n> This suggests that the 8.1 statistics are out of date, possibly\n> because ANALYZE or VACUUM ANALYZE hasn't been run since the data\n> was loaded. Try running ANALYZE in 8.1 and post the new plans if\n> that doesn't help.\n> \n\nThank you very much,\nGabriele\n\n\n",
"msg_date": "Sat, 15 Jul 2006 21:55:49 +0200",
"msg_from": "Gabriele Turchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
},
{
"msg_contents": "Gabriele Turchi wrote:\n> Running an ANALYZE really change the plan, now it is fast as before\n> (8.0).\n> \n> On the production system a VACUUM FULL ANALYZE is run every morning\n> after a clean-up, when the \"registrazioni\" table is empty. During the\n> day this table fills up (about 500 record any day), and apparently the\n> performances are free-falling very quickly. This behaviour has not\n> changed between the old and the new installation.\t\n> \n> Can you suggest an easy way to collect and keep up-to-date these\n> statistics in a very low-impact way?\n> \n\nWhy not just periodically (once an hour?) run \"ANALYZE registrazioni;\" \nduring the day. This will only update the statistics, and should be very \nlow impact.\n\nHTH,\n\nJoe\n",
"msg_date": "Sat, 15 Jul 2006 13:04:33 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
},
{
"msg_contents": "Il giorno sab, 15/07/2006 alle 13.04 -0700, Joe Conway ha scritto:\n> Gabriele Turchi wrote:\n> > Running an ANALYZE really change the plan, now it is fast as before\n> > (8.0).\n> > \n> > On the production system a VACUUM FULL ANALYZE is run every morning\n> > after a clean-up, when the \"registrazioni\" table is empty. During the\n> > day this table fills up (about 500 record any day), and apparently the\n> > performances are free-falling very quickly. This behaviour has not\n> > changed between the old and the new installation.\t\n> > \n> > Can you suggest an easy way to collect and keep up-to-date these\n> > statistics in a very low-impact way?\n> > \n> \n> Why not just periodically (once an hour?) run \"ANALYZE registrazioni;\" \n> during the day. This will only update the statistics, and should be very \n> low impact.\n> \n\nThis is my \"solution\" too... but: is enough? Or else: there is a better\nway to do this? If the performance in the better case is 50 times faster\nthan the worse case, during an hour (50/100 record inserted in\n\"registrazioni\") how much the performance can fall before the new\n\"ANALYZE\" is run? Otherwise, running ANALYZE more frequently can badly\naffect the overall performance?\n\nA so big difference in postgres performance, can be considered a bug or\na over-optimization in the plan making? Why (at least apparently) the\n8.0 version is not affected?\n\n> HTH,\n> \n> Joe\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\nThank you all very much,\nGabriele\n\n\n",
"msg_date": "Sat, 15 Jul 2006 22:22:50 +0200",
"msg_from": "Gabriele Turchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
},
{
"msg_contents": "Gabriele Turchi wrote:\n> Il giorno sab, 15/07/2006 alle 13.04 -0700, Joe Conway ha scritto:\n>>Why not just periodically (once an hour?) run \"ANALYZE registrazioni;\" \n>>during the day. This will only update the statistics, and should be very \n>>low impact.\n> \n> This is my \"solution\" too... but: is enough? Or else: there is a better\n> way to do this? If the performance in the better case is 50 times faster\n> than the worse case, during an hour (50/100 record inserted in\n> \"registrazioni\") how much the performance can fall before the new\n> \"ANALYZE\" is run? Otherwise, running ANALYZE more frequently can badly\n> affect the overall performance?\n\nOne thing I noticed is that in both plans there is a seq scan on \nregistrazioni. Given that performance degrades so quickly as records are \ninserted into registrazioni, I'm wondering if you're missing an index. \nWhat indexes do you have on registrazioni?\n\nJoe\n",
"msg_date": "Sun, 16 Jul 2006 11:08:07 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
},
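If an index does turn out to help here, a plain btree on the filtered column is the obvious candidate. This is only a sketch (the index name is made up, and on a table of a few hundred rows the planner may still legitimately prefer a sequential scan):

```sql
-- The plans filter on: date((now() - '02:00:00'::interval)) = data,
-- i.e. "data" is compared to a single computed value per query, so a
-- btree index on the column alone is sufficient:
CREATE INDEX registrazioni_data_idx ON registrazioni (data);
ANALYZE registrazioni;
```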
{
"msg_contents": "Just turn on autovacuuming on your 8.1 database. You can tune the vacuum\nand autovacuum parameters to minimize the impact to your system. This is\nthe optimal route to take since PG will maintain the tables for you as\nneeded.\n\nHTH,\n\nChris\n\nOn 7/15/06, Gabriele Turchi <[email protected]> wrote:\n>\n> Il giorno sab, 15/07/2006 alle 13.04 -0700, Joe Conway ha scritto:\n> > Gabriele Turchi wrote:\n> > > Running an ANALYZE really change the plan, now it is fast as before\n> > > (8.0).\n> > >\n> > > On the production system a VACUUM FULL ANALYZE is run every morning\n> > > after a clean-up, when the \"registrazioni\" table is empty. During the\n> > > day this table fills up (about 500 record any day), and apparently the\n> > > performances are free-falling very quickly. This behaviour has not\n> > > changed between the old and the new installation.\n> > >\n> > > Can you suggest an easy way to collect and keep up-to-date these\n> > > statistics in a very low-impact way?\n> > >\n> >\n> > Why not just periodically (once an hour?) run \"ANALYZE registrazioni;\"\n> > during the day. This will only update the statistics, and should be very\n> > low impact.\n> >\n>\n> This is my \"solution\" too... but: is enough? Or else: there is a better\n> way to do this? If the performance in the better case is 50 times faster\n> than the worse case, during an hour (50/100 record inserted in\n> \"registrazioni\") how much the performance can fall before the new\n> \"ANALYZE\" is run? Otherwise, running ANALYZE more frequently can badly\n> affect the overall performance?\n>\n> A so big difference in postgres performance, can be considered a bug or\n> a over-optimization in the plan making? Why (at least apparently) the\n> 8.0 version is not affected?\n>\n> > HTH,\n> >\n> > Joe\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n>\n> Thank you all very much,\n> Gabriele\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nJust turn on autovacuuming on your 8.1 database. You can tune the vacuum and autovacuum parameters to minimize the impact to your system. This is the optimal route to take since PG will maintain the tables for you as needed.\nHTH,ChrisOn 7/15/06, Gabriele Turchi <[email protected]> wrote:\nIl giorno sab, 15/07/2006 alle 13.04 -0700, Joe Conway ha scritto:> Gabriele Turchi wrote:> > Running an ANALYZE really change the plan, now it is fast as before> > (8.0).> >> > On the production system a VACUUM FULL ANALYZE is run every morning\n> > after a clean-up, when the \"registrazioni\" table is empty. During the> > day this table fills up (about 500 record any day), and apparently the> > performances are free-falling very quickly. This behaviour has not\n> > changed between the old and the new installation.> >> > Can you suggest an easy way to collect and keep up-to-date these> > statistics in a very low-impact way?> >\n>> Why not just periodically (once an hour?) run \"ANALYZE registrazioni;\"> during the day. This will only update the statistics, and should be very> low impact.>This is my \"solution\" too... but: is enough? Or else: there is a better\nway to do this? 
If the performance in the better case is 50 times fasterthan the worse case, during an hour (50/100 record inserted in\"registrazioni\") how much the performance can fall before the new\n\"ANALYZE\" is run? Otherwise, running ANALYZE more frequently can badlyaffect the overall performance?A so big difference in postgres performance, can be considered a bug ora over-optimization in the plan making? Why (at least apparently) the\n8.0 version is not affected?> HTH,>> Joe>> ---------------------------(end of broadcast)---------------------------> TIP 6: explain analyze is your friendThank you all very much,\nGabriele---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to \[email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Mon, 17 Jul 2006 16:19:14 -0400",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
},
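For reference, a hedged sketch of what "turn on autovacuuming" looked like in an 8.1-era postgresql.conf; parameter names and defaults should be double-checked against the 8.1 documentation, and the thresholds below are only illustrative for a table that gains a few hundred rows per day:

```
stats_start_collector = on            # required for autovacuum in 8.1
stats_row_level = on                  # required for autovacuum in 8.1
autovacuum = on
autovacuum_naptime = 60               # seconds between checks
autovacuum_analyze_threshold = 250    # illustrative; the 8.1 default is higher
autovacuum_analyze_scale_factor = 0.1
```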
{
"msg_contents": "Il giorno dom, 16/07/2006 alle 11.08 -0700, Joe Conway ha scritto:\n> Gabriele Turchi wrote:\n> > Il giorno sab, 15/07/2006 alle 13.04 -0700, Joe Conway ha scritto:\n> >>Why not just periodically (once an hour?) run \"ANALYZE registrazioni;\" \n> >>during the day. This will only update the statistics, and should be very \n> >>low impact.\n> > \n> > This is my \"solution\" too... but: is enough? Or else: there is a better\n> > way to do this? If the performance in the better case is 50 times faster\n> > than the worse case, during an hour (50/100 record inserted in\n> > \"registrazioni\") how much the performance can fall before the new\n> > \"ANALYZE\" is run? Otherwise, running ANALYZE more frequently can badly\n> > affect the overall performance?\n> \n> One thing I noticed is that in both plans there is a seq scan on \n> registrazioni. Given that performance degrades so quickly as records are \n> inserted into registrazioni, I'm wondering if you're missing an index. \n> What indexes do you have on registrazioni?\n> \n> Joe\n\nNo one. The application was not fine-tuned, because the original\nperformance (under 8.0) was \"more than enough\". I thought that creating\nan index on a table with no more than some hundred of records was not\nuseful...\n\nMy biggest doubt is anyway related to the very big difference between\nthe plans in 8.0 and 8.1 under the same conditions. \n\nThank you,\nGabriele",
"msg_date": "Tue, 18 Jul 2006 09:25:40 +0200",
"msg_from": "Gabriele Turchi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Big differences in plans between 8.0 and 8.1"
}
] |
[
{
"msg_contents": "Hassan,\n\n> 1. I have a function that given two parameter produces an arbitrary id, and\n> text. However arbitrary the id and text are, they are in certain order. i.e. it\n> is imperative that whatever processing I do, the order is preserved.\n\nWhat type of function is this? Did you write it in C? An SQL procedure?\n\nIf the function is written in C, you can create a static local variable which you increment every time you call your function, and which you return along with your other two values. As long as your client is connected to the back-end server, you're guaranteed that it's a single process, and it's not multi-threaded, so this is a safe approach. However, note that if you disconnect and reconnect, your counter will be reset to zero.\n\nIf your function is written in a different language or is a procedure, you might create a sequence that your function can query.\n\nThe trick is that it is the function itself that must return the incremented value, i.e. you must return three, not two, values from your function. That way, you're not relying on any specific features of the planner, so your three values will stick together.\n\nCraig\n",
"msg_date": "Sat, 15 Jul 2006 10:27:20 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: increment Rows in an SQL Result Set postgresql"
},
{
"msg_contents": "HI,\n\n1. I have a function that given two parameter produces an arbitrary id, and\ntext. However arbitrary the id and text are, they are in certain order. i.e. it\nis imperative that whatever processing I do, the order is preserved.\n\n2. An odd thing happens when I perform a join on the result set such that the\norder that I hope to preserved in destroyed. The same result set but different\nordering. I gather this is due to the query planner. Enough said.\n\nI was hoping to insert a counter in the select query of 1. such that when I\nperform the join of 2, I can order by the counter. \n\n\ni.e. \n\n1. select id, astext from function1(1,2) \n2. select id, astext, table2.name from function1(1,2) as tmp, table2 where\ntmp.id = table2.id\n \nwhen I perform 1., I get something of sort\n\nid | astext\n2 | abc\n6 | efg\n3 | fhg\n\nI will like to preserve ordering....\n\nWhen I perform 2, I get somthing of sort\n\nid | astext | table2.name\n6 | efg | joe\n2 | abc | zyi\n3 | fgh | mec\n\nCan someone help such that I get something like \n\nid | astext | table2.name | increment\n6 | efg | joe | 2\n2 | abc | zyi | 1\n3 | fgh | mec | 3\n\nThanks!\n\n",
"msg_date": "Sat, 15 Jul 2006 17:46:40 +0000 (UTC)",
"msg_from": "Hassan Adekoya <[email protected]>",
"msg_from_op": false,
"msg_subject": "increment Rows in an SQL Result Set postgresql"
},
{
"msg_contents": "Sadly I didnt write this function. It was written in C and packaged in a shared module .so. I access it thru postgresql as plpgsql function. I cannot edit the function thus. \n \n I tried this\n \n CREATE TEMPORARY SEQUENCE serial START 1; \n SELECT nextval('serial'), astext(tmp.the_geom), street FROM shortest_path_as_geometry('bklion', 185, 10953) AS tmp LEFT JOIN (SELECT * FROM bklion) AS ss ON ss.the_geom = tmp.the_geom; \n \n I know this is inefficient, and I surely dont know the repercussion of using the temporary sequence in a web application. Do you?\n \n Appreciate any input.\n \n Thanks!\n \n - Hassan Adekoya\n \n \n----- Original Message ----\nFrom: Craig A. James <[email protected]>\nTo: Hassan Adekoya <[email protected]>\nCc: [email protected]\nSent: Saturday, July 15, 2006 1:27:20 PM\nSubject: Re: [PERFORM] increment Rows in an SQL Result Set postgresql\n\nHassan,\n\n> 1. I have a function that given two parameter produces an arbitrary id, and\n> text. However arbitrary the id and text are, they are in certain order. i.e. it\n> is imperative that whatever processing I do, the order is preserved.\n\nWhat type of function is this? Did you write it in C? An SQL procedure?\n\nIf the function is written in C, you can create a static local variable which you increment every time you call your function, and which you return along with your other two values. As long as your client is connected to the back-end server, you're guaranteed that it's a single process, and it's not multi-threaded, so this is a safe approach. However, note that if you disconnect and reconnect, your counter will be reset to zero.\n\nIf your function is written in a different language or is a procedure, you might create a sequence that your function can query.\n\nThe trick is that it is the function itself that must return the incremented value, i.e. you must return three, not two, values from your function. That way, you're not relying on any specific features of the planner, so your three values will stick together.\n\nCraig\n\n\n\n",
"msg_date": "Sat, 15 Jul 2006 11:53:07 -0700 (PDT)",
"msg_from": "Hassan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increment Rows in an SQL Result Set postgresql"
},
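A possible refinement of the query above (an untested sketch using the same object names as in the message, and assuming, as the original query seems to, that street comes from bklion): assign the sequence numbers in a sub-select *before* the join, keep that sub-select from being flattened with OFFSET 0, and then ORDER BY the ordinal after the join, so the join can no longer disturb the numbering. It relies on the sub-select being scanned in the function's output order, which held in practice for a plain scan but is not guaranteed by the SQL standard. One caveat for a pooled web application: a temporary sequence lives for the whole database session, so it should be dropped or restarted per request.

```sql
CREATE TEMPORARY SEQUENCE serial START 1;

SELECT t.ord, astext(t.the_geom), ss.street
FROM (SELECT nextval('serial') AS ord, *
      FROM shortest_path_as_geometry('bklion', 185, 10953)
      OFFSET 0) AS t
LEFT JOIN (SELECT * FROM bklion) AS ss ON ss.the_geom = t.the_geom
ORDER BY t.ord;
```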
{
"msg_contents": "Hassan Adekoya wrote:\n> I will like to preserve ordering....\n\nTables are inherently unordered. If you want a particular order, you \nneed to use the ORDER BY clause. And you will need to have a column to \nsort by. If you don't have one, the generate_series() function may \nhelp.\n\nThis has nothing to do with performance, I gather, so it might be more \nappropriate for the pgsql-sql list.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Sun, 16 Jul 2006 01:53:30 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: increment Rows in an SQL Result Set postgresql"
}
] |
[
{
"msg_contents": "I have finally gotten my hands on the MSA1500 that we ordered some time\nago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) +\n18 (for data). There's only one controller (an emulex), but I hope\nperformance won't suffer too much from that. Raid level is 0+1,\nfilesystem is ext3. \n\nNow to the interesting part: would it make sense to use different stripe\nsizes on the separate disk arrays? In theory, a smaller stripe size\n(8-32K) should increase sequential write throughput at the cost of\ndecreased positioning performance, which sounds good for WAL (assuming\nWAL is never \"searched\" during normal operation). And for disks holding\nthe data, a larger stripe size (>32K) should provide for more concurrent\n(small) reads/writes at the cost of decreased raw throughput. This is\nwith an OLTP type application in mind, so I'd rather have high\ntransaction throughput than high sequential read speed. The interface is\na 2Gb FC so I'm throttled to (theoretically) 192Mb/s, anyway.\n\nSo, does this make sense? Has anyone tried it and seen any performance\ngains from it?\n\nRegards,\nMikael.\n\n\n\n\n\nRAID stripe size question\n\n\n\nI have finally gotten my hands on the MSA1500 that we ordered some time ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) + 18 (for data). There's only one controller (an emulex), but I hope performance won't suffer too much from that. Raid level is 0+1, filesystem is ext3. \nNow to the interesting part: would it make sense to use different stripe sizes on the separate disk arrays? In theory, a smaller stripe size (8-32K) should increase sequential write throughput at the cost of decreased positioning performance, which sounds good for WAL (assuming WAL is never \"searched\" during normal operation). And for disks holding the data, a larger stripe size (>32K) should provide for more concurrent (small) reads/writes at the cost of decreased raw throughput. This is with an OLTP type application in mind, so I'd rather have high transaction throughput than high sequential read speed. The interface is a 2Gb FC so I'm throttled to (theoretically) 192Mb/s, anyway.\nSo, does this make sense? Has anyone tried it and seen any performance gains from it?\n\nRegards,\nMikael.",
"msg_date": "Mon, 17 Jul 2006 00:52:17 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "RAID stripe size question"
},
{
"msg_contents": "On Mon, Jul 17, 2006 at 12:52:17AM +0200, Mikael Carneholm wrote:\n> Now to the interesting part: would it make sense to use different stripe\n> sizes on the separate disk arrays? In theory, a smaller stripe size\n> (8-32K) should increase sequential write throughput at the cost of\n> decreased positioning performance, which sounds good for WAL (assuming\n> WAL is never \"searched\" during normal operation).\n\nFor large writes (ie. sequential write throughput), it doesn't really matter\nwhat the stripe size is; all the disks will have to both seek and write\nanyhow.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 17 Jul 2006 01:10:05 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "On Mon, Jul 17, 2006 at 12:52:17AM +0200, Mikael Carneholm wrote:\n>I have finally gotten my hands on the MSA1500 that we ordered some time\n>ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) +\n>18 (for data). There's only one controller (an emulex), but I hope\n\nYou've got 1.4TB assigned to the WAL, which doesn't normally have more \nthan a couple of gigs?\n\nMike Stone\n",
"msg_date": "Sun, 16 Jul 2006 20:03:40 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
},
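For scale, a rough pg_xlog sizing sketch supports this point (it assumes the 8.1-era rule of thumb that the directory normally holds no more than about 2 × checkpoint_segments + 1 live 16 MB segments, which is worth verifying against the docs): even with a generous checkpoint_segments, the WAL itself can never come close to using 1.4 TB.

\[
(2 \times 64 + 1) \times 16\,\mathrm{MB} \approx 2.1\,\mathrm{GB} \qquad \text{(with } \texttt{checkpoint\_segments} = 64\text{)}
\]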
{
"msg_contents": "With 18 disks dedicated to data, you could make 100/7*9 seeks/second (7ms\nav seeks time, 9 independant units) which is 128seeks/second writing on\naverage 64kb of data, which is 4.1MB/sec throughput worst case, probably 10x\nbest case so 40Mb/sec - you might want to take more disks for your data and\nless for your WAL.\n\nSomeone check my math here...\n\nAnd as always - run benchmarks with your app to verify\n\nAlex.\n\nOn 7/16/06, Mikael Carneholm <[email protected]> wrote:\n>\n> I have finally gotten my hands on the MSA1500 that we ordered some time\n> ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) + 18\n> (for data). There's only one controller (an emulex), but I hope performance\n> won't suffer too much from that. Raid level is 0+1, filesystem is ext3.\n>\n> Now to the interesting part: would it make sense to use different stripe\n> sizes on the separate disk arrays? In theory, a smaller stripe size (8-32K)\n> should increase sequential write throughput at the cost of decreased\n> positioning performance, which sounds good for WAL (assuming WAL is never\n> \"searched\" during normal operation). And for disks holding the data, a\n> larger stripe size (>32K) should provide for more concurrent (small)\n> reads/writes at the cost of decreased raw throughput. This is with an OLTP\n> type application in mind, so I'd rather have high transaction throughput\n> than high sequential read speed. The interface is a 2Gb FC so I'm throttled\n> to (theoretically) 192Mb/s, anyway.\n>\n> So, does this make sense? Has anyone tried it and seen any performance\n> gains from it?\n>\n> Regards,\n> Mikael.\n>\n\nWith 18 disks dedicated to data, you could make 100/7*9 seeks/second (7ms av seeks time, 9 independant units) which is 128seeks/second writing on average 64kb of data, which is 4.1MB/sec throughput worst case, probably 10x best case so 40Mb/sec - you might want to take more disks for your data and less for your WAL.\nSomeone check my math here...And as always - run benchmarks with your app to verifyAlex.On 7/16/06, Mikael Carneholm <\[email protected]> wrote:\n\nI have finally gotten my hands on the MSA1500 that we ordered some time ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) + 18 (for data). There's only one controller (an emulex), but I hope performance won't suffer too much from that. Raid level is 0+1, filesystem is ext3. \n\nNow to the interesting part: would it make sense to use different stripe sizes on the separate disk arrays? In theory, a smaller stripe size (8-32K) should increase sequential write throughput at the cost of decreased positioning performance, which sounds good for WAL (assuming WAL is never \"searched\" during normal operation). And for disks holding the data, a larger stripe size (>32K) should provide for more concurrent (small) reads/writes at the cost of decreased raw throughput. This is with an OLTP type application in mind, so I'd rather have high transaction throughput than high sequential read speed. The interface is a 2Gb FC so I'm throttled to (theoretically) 192Mb/s, anyway.\n\nSo, does this make sense? Has anyone tried it and seen any performance gains from it?\n\nRegards,\nMikael.",
"msg_date": "Mon, 17 Jul 2006 02:13:04 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
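Taking up the "someone check my math" invitation: a rough re-derivation under stated assumptions (about 8 ms per random access for a 10K drive, each mirrored pair counted as one independent unit for writes, queuing and transfer time ignored) lands roughly an order of magnitude above the 4.1 MB/s worst case quoted above, so the figure is worth re-checking before reshuffling the arrays.

\[
\frac{1000\,\mathrm{ms/s}}{\approx 8\,\mathrm{ms/IO}} \approx 125\ \mathrm{IO/s\ per\ pair}, \qquad 9 \times 125 \approx 1125\ \mathrm{IO/s}, \qquad 1125 \times 64\,\mathrm{KB} \approx 70\,\mathrm{MB/s}
\]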
] |
[
{
"msg_contents": "Yeah, it seems to be a waste of disk space (spindles as well?). I was\nunsure how much activity the WAL disks would have compared to the data\ndisks, so I created an array from 10 disks as the application is very\nwrite intense (many spindles / high throughput is crucial). I guess that\na mirror of two disks is enough from a disk space perspective, but from\na throughput perspective it will limit me to ~25Mb/s (roughly\ncalculated). \n\nAn 0+1 array of 4 disks *could* be enough, but I'm still unsure how WAL\nactivity correlates to \"normal data\" activity (is it 1:1, 1:2, 1:4,\n...?) \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Michael\nStone\nSent: den 17 juli 2006 02:04\nTo: [email protected]\nSubject: Re: [PERFORM] RAID stripe size question\n\nOn Mon, Jul 17, 2006 at 12:52:17AM +0200, Mikael Carneholm wrote:\n>I have finally gotten my hands on the MSA1500 that we ordered some time\n\n>ago. It has 28 x 10K 146Gb drives, currently grouped as 10 (for wal) +\n>18 (for data). There's only one controller (an emulex), but I hope\n\nYou've got 1.4TB assigned to the WAL, which doesn't normally have more\nthan a couple of gigs?\n\nMike Stone\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n\n",
"msg_date": "Mon, 17 Jul 2006 10:00:39 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "Hi, Mikael,\n\nMikael Carneholm wrote:\n> An 0+1 array of 4 disks *could* be enough, but I'm still unsure how WAL\n> activity correlates to \"normal data\" activity (is it 1:1, 1:2, 1:4,\n> ...?) \n\nI think the main difference is that the WAL activity is mostly linear,\nwhere the normal data activity is rather random access. Thus, a mirror\nof few disks (or, with good controller hardware, raid6 on 4 disks or so)\nfor WAL should be enough to cope with a large set of data and index\ndisks, who have a lot more time spent in seeking.\n\nBtw, it may make sense to spread different tables or tables and indices\nonto different Raid-Sets, as you seem to have enough spindles.\n\nAnd look into the commit_delay/commit_siblings settings, they allow you\nto deal latency for throughput (means a little more latency per\ntransaction, but much more transactions per second throughput for the\nwhole system.)\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 17 Jul 2006 11:47:05 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
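As a concrete illustration of the commit_delay/commit_siblings trade-off described above, here is a hedged sketch using the values Mikael mentions earlier in this exchange (cd=5000, cs=20); the right numbers depend entirely on the workload and should be re-benchmarked, and both settings can also be placed in postgresql.conf for the whole cluster:

```sql
SET commit_delay = 5000;    -- microseconds to wait before flushing a commit,
                            -- so concurrent commits can share one fsync
SET commit_siblings = 20;   -- only wait if at least this many other
                            -- transactions are currently active
```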
] |
[
{
"msg_contents": ">I think the main difference is that the WAL activity is mostly linear,\nwhere the normal data activity is rather random access. \n\nThat was what I was expecting, and after reading\nhttp://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html I\nfigured that a different stripe size for the WAL set could be worth\ninvestigating. I have now dropped the old sets (10+18) and created two\nnew raid1+0 sets (4 for WAL, 24 for data) instead. Bonnie++ is still\nrunning, but I'll post the numbers as soon as it has finished. I did\nactually use different stripe sizes for the sets as well, 8k for the WAL\ndisks and 64k for the data. It's quite painless to do these things with\nHBAnywhere, so it's no big deal if I have to go back to another\nconfiguration. The battery cache only has 256Mb though and that botheres\nme, I assume a larger (512Mb - 1Gb) cache would make quite a difference.\nOh well.\n\n>Btw, it may make sense to spread different tables or tables and indices\nonto different Raid-Sets, as you seem to have enough spindles.\n\nThis is something I'd also would like to test, as a common best-practice\nthese days is to go for a SAME (stripe all, mirror everything) setup.\n>From a development perspective it's easier to use SAME as the developers\nwon't have to think about physical location for new tables/indices, so\nif there's no performance penalty with SAME I'll gladly keep it that\nway.\n\n>And look into the commit_delay/commit_siblings settings, they allow you\nto deal latency for throughput (means a little more latency per\ntransaction, but much more transactions per second throughput for the\nwhole system.)\n\nIn a previous test, using cd=5000 and cs=20 increased transaction\nthroughput by ~20% so I'll definitely fiddle with that in the coming\ntests as well.\n\nRegards,\nMikael.\n",
"msg_date": "Mon, 17 Jul 2006 13:33:55 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "Hi, Mikael,\n\nMikael Carneholm wrote:\n\n> This is something I'd also would like to test, as a common best-practice\n> these days is to go for a SAME (stripe all, mirror everything) setup.\n> From a development perspective it's easier to use SAME as the developers\n> won't have to think about physical location for new tables/indices, so\n> if there's no performance penalty with SAME I'll gladly keep it that\n> way.\n\nUsually, it's not the developers task to care about that, but the DBAs\nresponsibility.\n\n>> And look into the commit_delay/commit_siblings settings, they allow you\n> to deal latency for throughput (means a little more latency per\n> transaction, but much more transactions per second throughput for the\n> whole system.)\n> \n> In a previous test, using cd=5000 and cs=20 increased transaction\n> throughput by ~20% so I'll definitely fiddle with that in the coming\n> tests as well.\n\nHow many parallel transactions do you have?\n\nMarkus\n\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 17 Jul 2006 13:40:36 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": ">> This is something I'd also would like to test, as a common \n>> best-practice these days is to go for a SAME (stripe all, mirror\neverything) setup.\n>> From a development perspective it's easier to use SAME as the \n>> developers won't have to think about physical location for new \n>> tables/indices, so if there's no performance penalty with SAME I'll \n>> gladly keep it that way.\n\n>Usually, it's not the developers task to care about that, but the DBAs\nresponsibility.\n\nAs we don't have a full-time dedicated DBA (although I'm the one who do\nmost DBA related tasks) I would aim for making physical location as\ntransparent as possible, otherwise I'm afraid I won't be doing anything\nelse than supporting developers with that - and I *do* have other things\nto do as well :)\n\n>> In a previous test, using cd=5000 and cs=20 increased transaction \n>> throughput by ~20% so I'll definitely fiddle with that in the coming \n>> tests as well.\n\n>How many parallel transactions do you have?\n\nThat was when running BenchmarkSQL\n(http://sourceforge.net/projects/benchmarksql) with 100 concurrent users\n(\"terminals\"), which I assume means 100 parallel transactions at most.\nThe target application for this DB has 3-4 times as many concurrent\nconnections so it's possible that one would have to find other cs/cd\nnumbers better suited for that scenario. Tweaking bgwriter is another\ntask I'll look into as well..\n\nBtw, here's the bonnie++ results from two different array sets (10+18,\n4+24) on the MSA1500:\n\nLUN: WAL, 10 disks, stripe size 32K\n------------------------------------\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nsesell01 32G 56139 93 73250 22 16530 3 30488 45 57489 5\n477.3 1\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 2458 90 +++++ +++ +++++ +++ 3121 99 +++++ +++\n10469 98\n\n\nLUN: WAL, 4 disks, stripe size 8K\n----------------------------------\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nsesell01 32G 49170 82 60108 19 13325 2 15778 24 21489 2\n266.4 0\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 2432 86 +++++ +++ +++++ +++ 3106 99 +++++ +++\n10248 98\n\n\nLUN: DATA, 18 disks, stripe size 32K\n-------------------------------------\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nsesell01 32G 59990 97 87341 28 19158 4 30200 46 57556 6\n495.4 1\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 1640 92 +++++ +++ +++++ +++ 1736 99 +++++ +++\n10919 99\n\n\nLUN: DATA, 24 disks, stripe size 64K\n-------------------------------------\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per 
Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nsesell01 32G 59443 97 118515 39 25023 5 30926 49 60835 6\n531.8 1\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 2499 90 +++++ +++ +++++ +++ 2817 99 +++++ +++\n10971 100\n\nRegards,\nMikael\n",
"msg_date": "Mon, 17 Jul 2006 14:52:28 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "On 7/17/06, Mikael Carneholm <[email protected]> wrote:\n>\n> >> This is something I'd also would like to test, as a common\n> >> best-practice these days is to go for a SAME (stripe all, mirror\n> everything) setup.\n> >> From a development perspective it's easier to use SAME as the\n> >> developers won't have to think about physical location for new\n> >> tables/indices, so if there's no performance penalty with SAME I'll\n> >> gladly keep it that way.\n>\n> >Usually, it's not the developers task to care about that, but the DBAs\n> responsibility.\n>\n> As we don't have a full-time dedicated DBA (although I'm the one who do\n> most DBA related tasks) I would aim for making physical location as\n> transparent as possible, otherwise I'm afraid I won't be doing anything\n> else than supporting developers with that - and I *do* have other things\n> to do as well :)\n>\n> >> In a previous test, using cd=5000 and cs=20 increased transaction\n> >> throughput by ~20% so I'll definitely fiddle with that in the coming\n> >> tests as well.\n>\n> >How many parallel transactions do you have?\n>\n> That was when running BenchmarkSQL\n> (http://sourceforge.net/projects/benchmarksql) with 100 concurrent users\n> (\"terminals\"), which I assume means 100 parallel transactions at most.\n> The target application for this DB has 3-4 times as many concurrent\n> connections so it's possible that one would have to find other cs/cd\n> numbers better suited for that scenario. Tweaking bgwriter is another\n> task I'll look into as well..\n>\n> Btw, here's the bonnie++ results from two different array sets (10+18,\n> 4+24) on the MSA1500:\n>\n> LUN: WAL, 10 disks, stripe size 32K\n> ------------------------------------\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> sesell01 32G 56139 93 73250 22 16530 3 30488 45 57489 5\n> 477.3 1\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 2458 90 +++++ +++ +++++ +++ 3121 99 +++++ +++\n> 10469 98\n>\n>\n> LUN: WAL, 4 disks, stripe size 8K\n> ----------------------------------\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> sesell01 32G 49170 82 60108 19 13325 2 15778 24 21489 2\n> 266.4 0\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 2432 86 +++++ +++ +++++ +++ 3106 99 +++++ +++\n> 10248 98\n>\n>\n> LUN: DATA, 18 disks, stripe size 32K\n> -------------------------------------\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> sesell01 32G 59990 97 87341 28 19158 4 30200 46 57556 6\n> 495.4 1\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 1640 92 +++++ +++ +++++ +++ 1736 99 
+++++ +++\n> 10919 99\n>\n>\n> LUN: DATA, 24 disks, stripe size 64K\n> -------------------------------------\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> sesell01 32G 59443 97 118515 39 25023 5 30926 49 60835 6\n> 531.8 1\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 2499 90 +++++ +++ +++++ +++ 2817 99 +++++ +++\n> 10971 100\n\n\n\nThese bonnie++ number are very worrying. Your controller should easily max\nout your FC interface on these tests passing 192MB/sec with ease on anything\nmore than an 6 drive RAID 10 . This is a bad omen if you want high\nperformance... Each mirror pair can do 60-80MB/sec. A 24Disk RAID 10 can\ndo 12*60MB/sec which is 740MB/sec - I have seen this performance, it's not\nunreachable, but time and again, we see these bad perf numbers from FC and\nSCSI systems alike. Consider a different controller, because this one is\nnot up to snuff. A single drive would get better numbers than your 4 disk\nRAID 10, 21MB/sec read speed is really pretty sorry, it should be closer to\n120Mb/sec. If you can't swap out, software RAID may turn out to be your\nfriend. The only saving grace is that this is OLTP, and perhaps, just\nmaybe, the controller will be better at ordering IOs, but I highly doubt it.\n\nPlease people, do the numbers, benchmark before you buy, many many HBAs\nreally suck under Linux/Free BSD, and you may end up paying vast sums of\nmoney for very sub-optimal performance (I'd say sub-standard, but alas, it\nseems that this kind of poor performance is tolerated, even though it's way\noff where it should be). There's no point having a 40disk cab, if your\ncontroller can't handle it.\n\nMaximum theoretical linear throughput can be acheived in a White Box for\nunder $20k, and I have seen this kind of system outperform a server 5 times\nit's price even in OLTP.\n\nAlex",
"msg_date": "Mon, 17 Jul 2006 11:23:23 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "Mikael Carneholm wrote:\n> \n> Btw, here's the bonnie++ results from two different array sets (10+18,\n> 4+24) on the MSA1500:\n> \n>\n> LUN: DATA, 24 disks, stripe size 64K\n> -------------------------------------\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> sesell01 32G 59443 97 118515 39 25023 5 30926 49 60835 6\n> 531.8 1\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 2499 90 +++++ +++ +++++ +++ 2817 99 +++++ +++\n> 10971 100\n> \n\n\nIt might be interesting to see if 128K or 256K stripe size gives better \nsequential throughput, while still leaving the random performance ok. \nHaving said that, the seeks/s figure of 531 not that great - for \ninstance I've seen a 12 disk (15K SCSI) system report about 1400 seeks/s \nin this test.\n\nSorry if you mentioned this already - but what OS and filesystem are you \nusing? (if Linux and ext3, it might be worth experimenting with xfs or jfs).\n\nCheers\n\nMark\n",
"msg_date": "Tue, 18 Jul 2006 12:22:29 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": ">From: Mikael Carneholm <[email protected]>\n>Sent: Jul 16, 2006 6:52 PM\n>To: [email protected]\n>Subject: [PERFORM] RAID stripe size question\n>\n>I have finally gotten my hands on the MSA1500 that we ordered some time\n>ago. It has 28 x 10K 146Gb drives,\n>\nUnless I'm missing something, the only FC or SCSI HDs of ~147GB capacity are 15K, not 10K.\n(unless they are old?)\nI'm not just being pedantic. The correct, let alone optimal, answer to your question depends on your exact HW characteristics as well as your SW config and your usage pattern.\n15Krpm HDs will have average access times of 5-6ms. 10Krpm ones of 7-8ms.\nMost modern HDs in this class will do ~60MB/s inner tracks ~75MB/s avg and ~90MB/s outer tracks.\n\nIf you are doing OLTP-like things, you are more sensitive to latency than most and should use the absolute lowest latency HDs available within you budget. The current latency best case is 15Krpm FC HDs.\n\n\n>currently grouped as 10 (for wal) + 18 (for data). There's only one controller (an emulex), but I hope\n>performance won't suffer too much from that. Raid level is 0+1,\n>filesystem is ext3. \n>\nI strongly suspect having only 1 controller is an I/O choke w/ 28 HDs.\n\n28HDs as above setup as 2 RAID 10's => ~75MBps*5= ~375MB/s, ~75*9= ~675MB/s.\nIf both sets are to run at peak average speed, the Emulex would have to be able to handle ~1050MBps on average.\nIt is doubtful the 1 Emulex can do this.\n\nIn order to handle this level of bandwidth, a RAID controller must aggregate multiple FC, SCSI, or SATA streams as well as down any RAID 5 checksumming etc that is required.\nVery, very few RAID controllers can do >= 1GBps \nOne thing that help greatly with bursty IO patterns is to up your battery backed RAID cache as high as you possibly can. Even multiple GBs of BBC can be worth it. Another reason to have multiple controllers ;-)\n\nThen there is the question of the BW of the bus that the controller is plugged into.\n~800MB/s is the RW max to be gotten from a 64b 133MHz PCI-X channel.\nPCI-E channels are usually good for 1/10 their rated speed in bps as Bps.\nSo a PCI-Ex4 10Gbps bus can be counted on for 1GBps, PCI-Ex8 for 2GBps, etc.\nAt present I know of no RAID controllers that can singlely saturate a PCI-Ex4 or greater bus.\n\n...and we haven't even touched on OS, SW, and usage pattern issues.\n\nBottom line is that the IO chain is only as fast as its slowest component.\n\n\n>Now to the interesting part: would it make sense to use different stripe\n>sizes on the separate disk arrays? \n>\nThe short answer is Yes.\nWAL's are basically appends that are written in bursts of your chosen log chunk size and that are almost never read afterwards. Big DB pages and big RAID stripes makes sense for WALs.\n\nTables with OLTP-like characteristics need smaller DB pages and stripes to minimize latency issues (although locality of reference can make the optimum stripe size larger).\n\nTables with Data Mining like characteristics usually work best with larger DB pages sizes and RAID stripe sizes.\n\nOS and FS overhead can make things more complicated. So can DB layout and access pattern issues.\n\nSide note: a 10 HD RAID 10 seems a bit much for WAL. Do you really need 375MBps IO on average to your WAL more than you need IO capacity for other tables?\nIf WAL IO needs to be very high, I'd suggest getting a SSD or SSD-like device that fits your budget and having said device async mirror to HD. 
\n\nBottom line is to optimize your RAID stripe sizes =after= you optimize your OS, FS, and pg design for best IO for your usage pattern(s).\n\nHope this helps,\nRon\n",
"msg_date": "Mon, 17 Jul 2006 09:40:30 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "On Mon, Jul 17, 2006 at 09:40:30AM -0400, Ron Peacetree wrote:\n> Unless I'm missing something, the only FC or SCSI HDs of ~147GB capacity are 15K, not 10K.\n> (unless they are old?)\n\nThere are still 146GB SCSI 10000rpm disks being sold here, at least.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 17 Jul 2006 16:51:46 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": ">Unless I'm missing something, the only FC or SCSI HDs of ~147GB capacity are 15K, not 10K.\n\nIn the spec we got from HP, they are listed as model 286716-B22 (http://www.dealtime.com/xPF-Compaq_HP_146_8_GB_286716_B22) which seems to run at 10K. Don't know how old those are, but that's what we got from HP anyway.\n\n>15Krpm HDs will have average access times of 5-6ms. 10Krpm ones of 7-8ms.\n\nAverage seek time for that disk is listed as 4.9ms, maybe sounds a bit optimistic?\n\n> 28HDs as above setup as 2 RAID 10's => ~75MBps*5= ~375MB/s, ~75*9= ~675MB/s.\n\nI guess it's still limited by the 2Gbit FC (192Mb/s), right?\n\n>Very, very few RAID controllers can do >= 1GBps One thing that help greatly with bursty IO patterns is to up your battery backed RAID cache as high as you possibly can. Even multiple GBs of BBC can be worth it. Another reason to have multiple controllers ;-)\n\nI use 90% of the raid cache for writes, don't think I could go higher than that. Too bad the emulex only has 256Mb though :/\n\n>Then there is the question of the BW of the bus that the controller is plugged into.\n>~800MB/s is the RW max to be gotten from a 64b 133MHz PCI-X channel.\n>PCI-E channels are usually good for 1/10 their rated speed in bps as Bps.\n>So a PCI-Ex4 10Gbps bus can be counted on for 1GBps, PCI-Ex8 for 2GBps, etc.\n>At present I know of no RAID controllers that can singlely saturate a PCI-Ex4 or greater bus.\n\nThe controller is a FC2143 (http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=450&FamilyId=1449&BaseId=17621&oi=E9CED&BEID=19701&SBLID=), which uses PCI-E. Don't know how it compares to other controllers, haven't had the time to search for / read any reviews yet.\n\n>>Now to the interesting part: would it make sense to use different \n>>stripe sizes on the separate disk arrays?\n>>\n>The short answer is Yes.\n\nOk\n\n>WAL's are basically appends that are written in bursts of your chosen log chunk size and that are almost never read afterwards. Big DB pages and big RAID stripes makes sense for WALs.\n\nAccording to http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html, it seems to be the other way around? (\"As stripe size is decreased, files are broken into smaller and smaller pieces. This increases the number of drives that an average file will use to hold all the blocks containing the data of that file, theoretically increasing transfer performance, but decreasing positioning performance.\")\n\nI guess I'll have to find out which theory that holds by good ol´ trial and error... :)\n\n- Mikael\n",
"msg_date": "Mon, 17 Jul 2006 23:16:51 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "According to http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html, it seems to be the other way around?\n(\"As stripe size is decreased, files are broken into smaller and smaller pieces. This increases the number of drives\nthat an average file will use to hold all the blocks containing the data of that file, \n\n->>>>theoretically increasing transfer performance, but decreasing positioning performance.\")\n\nMikael,\nIn OLTP you utterly need best possible latency. If you decompose the response time if you physical request you will\nsee positioning performance plays the dominant role in the response time (ignore for a moment caches and their effects).\n\nSo, if you need really good response times of your SQL queries, choose 15 rpm disks(and add as much cache as possible\nto magnify the effect ;) )\n\nBest Regards. \nMilen \n\n",
"msg_date": "Tue, 18 Jul 2006 22:01:27 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": "-----Original Message-----\n>From: Mikael Carneholm <[email protected]>\n>Sent: Jul 17, 2006 5:16 PM\n>To: Ron Peacetree <[email protected]>, [email protected]\n>Subject: RE: [PERFORM] RAID stripe size question\n>\n>>15Krpm HDs will have average access times of 5-6ms. 10Krpm ones of 7-8ms.\n>\n>Average seek time for that disk is listed as 4.9ms, maybe sounds a bit optimistic?\n>\nAh, the games vendors play. \"average seek time\" for a 10Krpm HD may very well be 4.9ms. However, what matters to you the user is \"average =access= time\". The 1st is how long it takes to position the heads to the correct track. The 2nd is how long it takes to actually find and get data from a specified HD sector.\n\n>> 28HDs as above setup as 2 RAID 10's => ~75MBps*5= ~375MB/s, ~75*9= ~675MB/s.\n>\n>I guess it's still limited by the 2Gbit FC (192Mb/s), right?\n>\nNo. A decent HBA has multiple IO channels on it. So for instance Areca's ARC-6080 (8/12/16-port 4Gbps Fibre-to-SATA ll Controller) has 2 4Gbps FCs in it (...and can support up to 4GB of BB cache!). Nominally, this card can push 8Gbps= 800MBps. ~600-700MBps is the RW number.\n\nAssuming ~75MBps ASTR per HD, that's ~ enough bandwidth for a 16 HD RAID 10 set per ARC-6080. \n\n>>Very, very few RAID controllers can do >= 1GBps One thing that help greatly with \n>>bursty IO patterns is to up your battery backed RAID cache as high as you possibly\n>>can. Even multiple GBs of BBC can be worth it. \n>>Another reason to have multiple controllers ;-)\n>\n>I use 90% of the raid cache for writes, don't think I could go higher than that. \n>Too bad the emulex only has 256Mb though :/\n>\nIf your RAID cache hit rates are in the 90+% range, you probably would find it profitable to make it greater. I've definitely seen access patterns that benefitted from increased RAID cache for any size I could actually install. For those access patterns, no amount of RAID cache commercially available was enough to find the \"flattening\" point of the cache percentage curve. 256MB of BB RAID cache per HBA is just not that much for many IO patterns.\n\n\n>The controller is a FC2143 (http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=450&FamilyId=1449&BaseId=17621&oi=E9CED&BEID=19701&SBLID=), which uses PCI-E. Don't know how it compares to other controllers, haven't had the time to search for / read any reviews yet.\n>\nThis is a relatively low end HBA with 1 4Gb FC on it. Max sustained IO on it is going to be ~320MBps. Or ~ enough for an 8 HD RAID 10 set made of 75MBps ASTR HD's.\n\n28 such HDs are =definitely= IO choked on this HBA. \n\nThe arithmatic suggests you need a better HBA or more HBAs or both.\n\n\n>>WAL's are basically appends that are written in bursts of your chosen log chunk size and that are almost never read afterwards. Big DB pages and big RAID stripes makes sense for WALs.\n>\n>According to http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html, it seems to be the other way around? (\"As stripe size is decreased, files are broken into smaller and smaller pieces. This increases the number of drives that an average file will use to hold all the blocks containing the data of that file, theoretically increasing transfer performance, but decreasing positioning performance.\")\n>\n>I guess I'll have to find out which theory that holds by good ol� trial and error... :)\n>\nIME, stripe sizes of 64, 128, or 256 are the most common found to be optimal for most access patterns + SW + FS + OS + HW.\n\n",
"msg_date": "Mon, 17 Jul 2006 23:07:55 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "On 7/17/06, Ron Peacetree <[email protected]> wrote:\n>\n> -----Original Message-----\n> >From: Mikael Carneholm <[email protected]>\n> >Sent: Jul 17, 2006 5:16 PM\n> >To: Ron Peacetree <[email protected]>,\n> [email protected]\n> >Subject: RE: [PERFORM] RAID stripe size question\n> >\n> >>15Krpm HDs will have average access times of 5-6ms. 10Krpm ones of\n> 7-8ms.\n> >\n> >Average seek time for that disk is listed as 4.9ms, maybe sounds a bit\n> optimistic?\n> >\n> Ah, the games vendors play. \"average seek time\" for a 10Krpm HD may very\n> well be 4.9ms. However, what matters to you the user is \"average =access=\n> time\". The 1st is how long it takes to position the heads to the correct\n> track. The 2nd is how long it takes to actually find and get data from a\n> specified HD sector.\n>\n> >> 28HDs as above setup as 2 RAID 10's => ~75MBps*5= ~375MB/s, ~75*9=\n> ~675MB/s.\n> >\n> >I guess it's still limited by the 2Gbit FC (192Mb/s), right?\n> >\n> No. A decent HBA has multiple IO channels on it. So for instance Areca's\n> ARC-6080 (8/12/16-port 4Gbps Fibre-to-SATA ll Controller) has 2 4Gbps FCs in\n> it (...and can support up to 4GB of BB cache!). Nominally, this card can\n> push 8Gbps= 800MBps. ~600-700MBps is the RW number.\n>\n> Assuming ~75MBps ASTR per HD, that's ~ enough bandwidth for a 16 HD RAID\n> 10 set per ARC-6080.\n>\n> >>Very, very few RAID controllers can do >= 1GBps One thing that help\n> greatly with\n> >>bursty IO patterns is to up your battery backed RAID cache as high as\n> you possibly\n> >>can. Even multiple GBs of BBC can be worth it.\n> >>Another reason to have multiple controllers ;-)\n> >\n> >I use 90% of the raid cache for writes, don't think I could go higher\n> than that.\n> >Too bad the emulex only has 256Mb though :/\n> >\n> If your RAID cache hit rates are in the 90+% range, you probably would\n> find it profitable to make it greater. I've definitely seen access patterns\n> that benefitted from increased RAID cache for any size I could actually\n> install. For those access patterns, no amount of RAID cache commercially\n> available was enough to find the \"flattening\" point of the cache percentage\n> curve. 256MB of BB RAID cache per HBA is just not that much for many IO\n> patterns.\n\n\n90% as in 90% of the RAM, not 90% hit rate I'm imagining.\n\n>The controller is a FC2143 (\n> http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=450&FamilyId=1449&BaseId=17621&oi=E9CED&BEID=19701&SBLID=),\n> which uses PCI-E. Don't know how it compares to other controllers, haven't\n> had the time to search for / read any reviews yet.\n> >\n> This is a relatively low end HBA with 1 4Gb FC on it. Max sustained IO on\n> it is going to be ~320MBps. Or ~ enough for an 8 HD RAID 10 set made of\n> 75MBps ASTR HD's.\n>\n> 28 such HDs are =definitely= IO choked on this HBA.\n\n\n\nNot they aren't. This is OLTP, not data warehousing. I already posted math\nfor OLTP throughput, which is in the order of 8-80MB/second actual data\nthroughput based on maximum theoretical seeks/second.\n\nThe arithmatic suggests you need a better HBA or more HBAs or both.\n>\n>\n> >>WAL's are basically appends that are written in bursts of your chosen\n> log chunk size and that are almost never read afterwards. 
Big DB pages and\n> big RAID stripes makes sense for WALs.\n\n\nunless of course you are running OLTP, in which case a big stripe isn't\nnecessary, spend the disks on your data parition, because your WAL activity\nis going to be small compared with your random IO.\n\n>\n> >According to\n> http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html, it\n> seems to be the other way around? (\"As stripe size is decreased, files are\n> broken into smaller and smaller pieces. This increases the number of drives\n> that an average file will use to hold all the blocks containing the data of\n> that file, theoretically increasing transfer performance, but decreasing\n> positioning performance.\")\n> >\n> >I guess I'll have to find out which theory that holds by good ol� trial\n> and error... :)\n> >\n> IME, stripe sizes of 64, 128, or 256 are the most common found to be\n> optimal for most access patterns + SW + FS + OS + HW.\n\n\nNew records will be posted at the end of a file, and will only increase the\nfile by the number of blocks in the transactions posted at write time.\nUpdated records are modified in place unless they have grown too big to be\nin place. If you are updated mutiple tables on each transaction, a 64kb\nstripe size or lower is probably going to be best as block sizes are just\n8kb. How much data does your average transaction write? How many xacts per\nsecond, this will help determine how many writes your cache will queue up\nbefore it flushes, and therefore what the optimal stripe size will be. Of\ncourse, the fastest and most accurate way is probably just to try different\nsettings and see how it works. Alas some controllers seem to handle some\nstripe sizes more effeciently in defiance of any logic.\n\nWork out how big your xacts are, how many xacts/second you can post, and you\nwill figure out how fast WAL will be writting. Allocate enough disk for\npeak load plus planned expansion on WAL and then put the rest to\ntablespace. You may well find that a single RAID 1 is enough for WAL (if\nyou acheive theoretical performance levels, which it's clear your controller\nisn't).\n\nFor example, you bonnie++ benchmark shows 538 seeks/second. If on each seek\none writes 8k of data (one block) then your total throughput to disk is\n538*8k=4304k which is just 4MB/second actual throughput for WAL, which is\nabout what I estimated in my calculations earlier. A single RAID 1 will\neasily suffice to handle WAL for this kind of OLTP xact rate. Even if you\nwrite a full stripe on every pass at 64kb, thats still only 538*64k = 34432k\nor around 34Meg, still within the capability of a correctly running RAID 1,\nand even with your low bonnie scores, within the capability of your 4 disk\nRAID 10.\n\nRemember when it comes to OLTP, massive serial throughput is not gonna help\nyou, it's low seek times, which is why people still buy 15k RPM drives, and\nwhy you don't necessarily need a honking SAS/SATA controller which can\nharness the full 1066MB/sec of your PCI-X bus, or more for PCIe. 
Of course,\nonce you have a bunch of OLTP data, people will innevitably want reports on\nthat stuff, and what was mainly an OLTP database suddenly becomes a data\nwarehouse in a matter of months, so don't neglect to consider that problem\nalso.\n\nAlso more RAM on the RAID card will seriously help bolster your transaction\nrate, as your controller can queue up a whole bunch of table writes and\nburst them all at once in a single seek, which will increase your overall\nthroughput by as much as an order of magnitude (and you would have to\nincrease WAL accordingly therefore).\n\nBut finally - if your card/cab isn't performing RMA it. Send the damn thing\nback and get something that actualy can do what it should. Don't tolerate\nmanufacturers BS!!\n\nAlex\n",
"msg_date": "Tue, 18 Jul 2006 00:21:51 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
},
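One way to get the xacts/second number that Alex's WAL arithmetic needs is to sample the statistics collector's commit counter twice under representative load and divide by the interval. This is only a sketch and assumes the statistics collector is enabled; the 60-second window is arbitrary:

select xact_commit from pg_stat_database where datname = current_database();
-- wait ~60 seconds under normal load, then repeat
select xact_commit from pg_stat_database where datname = current_database();
-- (second value - first value) / 60 ~ committed transactions per second

Multiplying that rate by the average amount of WAL written per transaction gives the sustained WAL bandwidth to plan for, in the same spirit as the 538 seeks/second calculation above.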
{
"msg_contents": "On 7/18/06, Alex Turner <[email protected]> wrote:\n> Remember when it comes to OLTP, massive serial throughput is not gonna help\n> you, it's low seek times, which is why people still buy 15k RPM drives, and\n> why you don't necessarily need a honking SAS/SATA controller which can\n> harness the full 1066MB/sec of your PCI-X bus, or more for PCIe. Of course,\n\nhm. i'm starting to look seriously at SAS to take things to the next\nlevel. it's really not all that expensive, cheaper than scsi even,\nand you can mix/match sata/sas drives in the better enclosures. the\nreal wild card here is the raid controller. i still think raptors are\nthe best bang for the buck and SAS gives me everything i like about\nsata and scsi in one package.\n\nmoving a gigabyte around/sec on the server, attached or no, is pretty\nheavy lifting on x86 hardware.\n\nmerlin\n",
"msg_date": "Thu, 3 Aug 2006 00:46:41 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": "Hi,\n\nIt would seem that doing any changes on a temp table forces a copy of \nthe entire contents of the table to be retained in memory/disk. Is \nthis happening due to MVCC? Is there a way to change this behavior? \nIt could be very useful when you have really huge temp tables that \nneed to be updated a few times before they can be dropped.\n\nBelow is an example of the problem. I'll create a temp table, insert \n600 rows (just a bunch of urls, you can use anything really), then \nupdate the table a few times without actually changing anything. Of \ncourse this test case really doesn't show the extent of the problem, \nbecause its such a small amount of data involved. When I have a temp \ntable of about 150 megs and do more then a few updates on it, it \nforces postgresql to use the disk making things really slow. \nOriginally the entire temp table fit into RAM.\n\nI tried using savepoints and releasing them to see if it would make \nany difference and it did not, which isn't unexpected. Could \npg_relation_size() be incorrect in this case?\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\n\ntest=# begin;\nBEGIN\ntest=# create temp table test_urls (u text);\nCREATE TABLE\ntest=# insert into test_urls (u) select url from url limit 600;\nINSERT 0 600\ntest=# select pg_relation_size('test_urls');\npg_relation_size\n------------------\n 73728\n(1 row)\n\ntest=# update test_urls set u = u;\nUPDATE 600\ntest=# select pg_relation_size('test_urls');\npg_relation_size\n------------------\n 147456\n(1 row)\n\ntest=# update test_urls set u = u;\nUPDATE 600\ntest=# select pg_relation_size('test_urls');\npg_relation_size\n------------------\n 212992\n(1 row)\n\ntest=# update test_urls set u = u;\nUPDATE 600\ntest=# select pg_relation_size('test_urls');\npg_relation_size\n------------------\n 286720\n(1 row)\n\ntest=# update test_urls set u = u;\nUPDATE 600\ntest=# select pg_relation_size('test_urls');\npg_relation_size\n------------------\n 352256\n(1 row)\n\ntest=# update test_urls set u = u;\nUPDATE 600\ntest=# select pg_relation_size('test_urls');\npg_relation_size\n------------------\n 425984\n(1 row)\n\n\n",
"msg_date": "Tue, 18 Jul 2006 00:42:58 -0600",
"msg_from": "Rusty Conover <[email protected]>",
"msg_from_op": true,
"msg_subject": "Temporary table retains old contents on update eventually causing\n\tslow temp file usage."
},
{
"msg_contents": "Sorry for replying to my own post.\n\nI forgot to include my version information, I used:\n\nPostgreSQL 8.1.0 on powerpc-apple-darwin8.3.0, compiled by GCC \npowerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 (Apple Computer, Inc. \nbuild 5026)\n\nand\n\nPostgreSQL 8.1.3 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) \n4.0.1 20050727 (Red Hat 4.0.1-5)\n\nOn both the same result happens.\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com\n\n\n\n",
"msg_date": "Tue, 18 Jul 2006 00:56:13 -0600",
"msg_from": "Rusty Conover <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporary table retains old contents on update eventually causing\n\tslow temp file usage."
},
{
"msg_contents": "On Tue, 18 Jul 2006, Rusty Conover wrote:\n\n> Hi,\n>\n> It would seem that doing any changes on a temp table forces a copy of\n> the entire contents of the table to be retained in memory/disk. Is\n> this happening due to MVCC? Is there a way to change this behavior?\n> It could be very useful when you have really huge temp tables that\n> need to be updated a few times before they can be dropped.\n\nThis is caused by our MVCC implementation. It cannot be easily changed. We\nrely on MVCC for two things: concurrency and rolling back of aborted\ncommands. Without the latter, we couldn't support the following trivially:\n\ntemplate1=# create temp table bar (i int);\nCREATE TABLE\ntemplate1=# begin;\nBEGIN\ntemplate1=# insert into bar values(1);\nINSERT 0 1\ntemplate1=# abort;\nROLLBACK\ntemplate1=# select * from bar;\n i\n---\n(0 rows)\n\nIt would be nice if we could special case temp tables because of the fact\nthat concurrency does not come into the equation but I cannot see it\nhappening without a generalised overwriting MVCC system.\n\nThe only alternative in the mean time is to vacuum your temporary table(s)\nas part of your interaction with them.\n\nThanks,\n\nGavin\n",
"msg_date": "Tue, 18 Jul 2006 22:22:57 +1000 (EST)",
"msg_from": "Gavin Sherry <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Temporary table retains old contents on update eventually"
},
{
"msg_contents": "\nOn Jul 18, 2006, at 6:22 AM, Gavin Sherry wrote:\n\n> On Tue, 18 Jul 2006, Rusty Conover wrote:\n>\n>> Hi,\n>>\n>> It would seem that doing any changes on a temp table forces a copy of\n>> the entire contents of the table to be retained in memory/disk. Is\n>> this happening due to MVCC? Is there a way to change this behavior?\n>> It could be very useful when you have really huge temp tables that\n>> need to be updated a few times before they can be dropped.\n>\n> This is caused by our MVCC implementation. It cannot be easily \n> changed. We\n> rely on MVCC for two things: concurrency and rolling back of aborted\n> commands. Without the latter, we couldn't support the following \n> trivially:\n>\n> template1=# create temp table bar (i int);\n> CREATE TABLE\n> template1=# begin;\n> BEGIN\n> template1=# insert into bar values(1);\n> INSERT 0 1\n> template1=# abort;\n> ROLLBACK\n> template1=# select * from bar;\n> i\n> ---\n> (0 rows)\n>\n> It would be nice if we could special case temp tables because of \n> the fact\n> that concurrency does not come into the equation but I cannot see it\n> happening without a generalised overwriting MVCC system.\n>\n> The only alternative in the mean time is to vacuum your temporary \n> table(s)\n> as part of your interaction with them.\n\nI forgot to add in my original post that the temporary tables I'm \ndealing with have the \"on commit drop\" flag, so really persisting \nbeyond the transaction isn't needed. But I don't think that makes \nany difference, because of savepoints' required functionality.\n\nThe problem with vacuuming is that you can't do it by default right \nnow inside of a transaction.\n\nReading vacuum.c though, it leaves the door open:\n\n/*\n * We cannot run VACUUM inside a user transaction block; if we were \ninside\n * a transaction, then our commit- and start-transaction-command calls\n * would not have the intended effect! Furthermore, the forced \ncommit that\n * occurs before truncating the relation's file would have the \neffect of\n * committing the rest of the user's transaction too, which would\n * certainly not be the desired behavior. (This only applies to VACUUM\n * FULL, though. We could in theory run lazy VACUUM inside a \ntransaction\n * block, but we choose to disallow that case because we'd rather \ncommit\n * as soon as possible after finishing the vacuum. This is \nmainly so that\n * we can let go the AccessExclusiveLock that we may be holding.)\n *\n * ANALYZE (without VACUUM) can run either way.\n */\n\nSince we're dealing with a temporary table we shouldn't have any \nproblems with the AccessExclusiveLock. Would lazy vacuuming mark the \npages as free? I assume it wouldn't release them or shrink the size \nof the relation, but could they be reused on later updates in that \nsame transaction?\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nWeb: http://www.infogears.com\n\n\n\n",
"msg_date": "Tue, 18 Jul 2006 11:12:23 -0600",
"msg_from": "Rusty Conover <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Temporary table retains old contents on update eventually causing\n\tslow temp file usage."
}
] |
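Gavin's suggestion above - vacuum the temporary table as part of your interaction with it - can be tried directly against Rusty's test case. The sketch below is only illustrative: it assumes the temp table is created outside an explicit transaction block (plain VACUUM cannot run inside one, as the vacuum.c comment quoted above notes), so the ON COMMIT DROP variant Rusty mentions is left out.

create temp table test_urls (u text);
insert into test_urls (u) select url from url limit 600;
update test_urls set u = u;
select pg_relation_size('test_urls');   -- roughly doubles, as in Rusty's trace
vacuum test_urls;                       -- marks the dead row versions reusable
update test_urls set u = u;
select pg_relation_size('test_urls');   -- should now stay near the previous figure

With the VACUUM in between, later updates reuse the space freed from earlier row versions, so the relation should stop growing linearly with the number of updates.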
[
{
"msg_contents": ">From: Alex Turner <[email protected]>\n>Sent: Jul 18, 2006 12:21 AM\n>To: Ron Peacetree <[email protected]>\n>Cc: Mikael Carneholm <[email protected]>, [email protected]\n>Subject: Re: [PERFORM] RAID stripe size question\n>\n>On 7/17/06, Ron Peacetree <[email protected]> wrote:\n>>\n>> -----Original Message-----\n>> >From: Mikael Carneholm <[email protected]>\n>> >Sent: Jul 17, 2006 5:16 PM\n>> >To: Ron Peacetree <[email protected]>,\n>> [email protected]\n>> >Subject: RE: [PERFORM] RAID stripe size question\n>> >\n>> >I use 90% of the raid cache for writes, don't think I could go higher\n>> >than that.\n>> >Too bad the emulex only has 256Mb though :/\n>> >\n>> If your RAID cache hit rates are in the 90+% range, you probably would\n>> find it profitable to make it greater. I've definitely seen access patterns\n>> that benefitted from increased RAID cache for any size I could actually\n>> install. For those access patterns, no amount of RAID cache commercially\n>> available was enough to find the \"flattening\" point of the cache percentage\n>> curve. 256MB of BB RAID cache per HBA is just not that much for many IO\n>> patterns.\n>\n>90% as in 90% of the RAM, not 90% hit rate I'm imagining.\n>\nEither way, =particularly= for OLTP-like I/O patterns, the more RAID cache the better unless the IO pattern is completely random. In which case the best you can do is cache the entire sector map of the RAID set and use as many spindles as possible for the tables involved. I've seen high end set ups in Fortune 2000 organizations that look like some of the things you read about on tpc.org: =hundreds= of HDs are used.\n\nClearly, completely random IO patterns are to be avoided whenever and however possible.\n\nThankfully, most things can be designed to not have completely random IO and stuff like WAL IO are definitely not random.\n\nThe important point here about cache size is that unless you make cache large enough that you see a flattening in the cache behavior, you probably can still use more cache. Working sets are often very large for DB applications.\n\n \n>>The controller is a FC2143 (\n>> http://h71016.www7.hp.com/dstore/MiddleFrame.asp?page=config&ProductLineId=450&FamilyId=1449&BaseId=17621&oi=E9CED&BEID=19701&SBLID=),\n>> which uses PCI-E. Don't know how it compares to other controllers, haven't\n>> had the time to search for / read any reviews yet.\n>> >\n>> This is a relatively low end HBA with 1 4Gb FC on it. Max sustained IO on\n>> it is going to be ~320MBps. Or ~ enough for an 8 HD RAID 10 set made of\n>> 75MBps ASTR HD's.\n>>\n>> 28 such HDs are =definitely= IO choked on this HBA.\n>\n>Not they aren't. This is OLTP, not data warehousing. I already posted math\n>for OLTP throughput, which is in the order of 8-80MB/second actual data\n>throughput based on maximum theoretical seeks/second.\n>\nWAL IO patterns are not OLTP-like. Neither are most support or decision support IO patterns. Even in an OLTP system, there are usually only a few scenarios and tables where the IO pattern is pessimal.\nAlex is quite correct that those few will be the bottleneck on overall system performance if the system's primary function is OLTP-like.\n\nFor those few, you dedicate as many spindles and RAID cache as you can afford and as show any performance benefit. 
I've seen an entire HBA maxed out with cache and as many HDs as would saturate the attainable IO rate dedicated to =1= table (unfortunately SSD was not a viable option in this case).\n\n\n>>The arithmetic suggests you need a better HBA or more HBAs or both.\n>>\n>>\n>> >>WAL's are basically appends that are written in bursts of your chosen\n>> log chunk size and that are almost never read afterwards. Big DB pages and\n>> big RAID stripes makes sense for WALs.\n>\n>\n>unless of course you are running OLTP, in which case a big stripe isn't\n>necessary, spend the disks on your data parition, because your WAL activity\n>is going to be small compared with your random IO.\n>\nOr to put it another way, the scenarios and tables that have the most random looking IO patterns are going to be the performance bottleneck on the whole system. In an OLTP-like system, WAL IO is unlikely to be your biggest performance issue. As in any other performance tuning effort, you only gain by speeding up the current bottleneck.\n\n\n>>\n>> >According to\n>> http://www.pcguide.com/ref/hdd/perf/raid/concepts/perfStripe-c.html, it\n>> seems to be the other way around? (\"As stripe size is decreased, files are\n>> broken into smaller and smaller pieces. This increases the number of drives\n>> that an average file will use to hold all the blocks containing the data of\n>> that file, theoretically increasing transfer performance, but decreasing\n>> positioning performance.\")\n>> >\n>> >I guess I'll have to find out which theory that holds by good ol? trial\n>> and error... :)\n>> >\n>> IME, stripe sizes of 64, 128, or 256 are the most common found to be\n>> optimal for most access patterns + SW + FS + OS + HW.\n>\n>\n>New records will be posted at the end of a file, and will only increase the\n>file by the number of blocks in the transactions posted at write time.\n>Updated records are modified in place unless they have grown too big to be\n>in place. If you are updated mutiple tables on each transaction, a 64kb\n>stripe size or lower is probably going to be best as block sizes are just\n>8kb.\n>\nHere's where Theory and Practice conflict. pg does not \"update\" and modify in place in the true DB sense. A pg UPDATE is actually an insert of a new row or rows, !not! a modify in place.\nI'm sure Alex knows this and just temporily forgot some of the context of this thread :-)\n\nThe append behavior Alex refers to is the best case scenario for pg where a) the table is unfragmented and b) the file segment of say 2GB holding that part of the pg table is not full.\nVACUUM and autovacuum are your friend.\n\n\n>How much data does your average transaction write? How many xacts per\n>second, this will help determine how many writes your cache will queue up\n>before it flushes, and therefore what the optimal stripe size will be. Of\n>course, the fastest and most accurate way is probably just to try different\n>settings and see how it works. Alas some controllers seem to handle some\n>stripe sizes more effeciently in defiance of any logic.\n>\n>Work out how big your xacts are, how many xacts/second you can post, and you\n>will figure out how fast WAL will be writting. Allocate enough disk for\n>peak load plus planned expansion on WAL and then put the rest to\n>tablespace. You may well find that a single RAID 1 is enough for WAL (if\n>you acheive theoretical performance levels, which it's clear your controller\n>isn't).\n>\nThis is very good advice.\n\n\n>For example, you bonnie++ benchmark shows 538 seeks/second. 
If on each seek\n>one writes 8k of data (one block) then your total throughput to disk is\n>538*8k=4304k which is just 4MB/second actual throughput for WAL, which is\n>about what I estimated in my calculations earlier. A single RAID 1 will\n>easily suffice to handle WAL for this kind of OLTP xact rate. Even if you\n>write a full stripe on every pass at 64kb, thats still only 538*64k = 34432k\n>or around 34Meg, still within the capability of a correctly running RAID 1,\n>and even with your low bonnie scores, within the capability of your 4 disk\n>RAID 10.\n>\nI'd also suggest that you figure out what the max access per sec is for HDs and make sure you are attaining it since this will set the ceiling on your overall system performance.\n\nLike I've said, I've seen organizations dedicate as much HW as could make any difference on a per table basis for important OLTP systems.\n\n\n>Remember when it comes to OLTP, massive serial throughput is not gonna help\n>you, it's low seek times, which is why people still buy 15k RPM drives, and\n>why you don't necessarily need a honking SAS/SATA controller which can\n>harness the full 1066MB/sec of your PCI-X bus, or more for PCIe. Of course,\n>once you have a bunch of OLTP data, people will innevitably want reports on\n>that stuff, and what was mainly an OLTP database suddenly becomes a data\n>warehouse in a matter of months, so don't neglect to consider that problem\n>also.\n>\nOne Warning to expand on Alex's point here.\n\nDO !NOT! use the same table schema and/or DB for your reporting and OLTP.\nYou will end up with a DBMS that is neither good at reporting nor OLTP.\n\n\n>Also more RAM on the RAID card will seriously help bolster your transaction\n>rate, as your controller can queue up a whole bunch of table writes and\n>burst them all at once in a single seek, which will increase your overall\n>throughput by as much as an order of magnitude (and you would have to\n>increase WAL accordingly therefore).\n>\n*nods*\n\n\n>But finally - if your card/cab isn't performing RMA it. Send the damn thing\n>back and get something that actualy can do what it should. Don't tolerate\n>manufacturers BS!!\n>\nOn this Alex and I are in COMPLETE agreement.\n\nRon\n\n",
"msg_date": "Tue, 18 Jul 2006 08:32:35 -0400 (EDT)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
}
] |
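Ron's point that a pg UPDATE is an insert of a new row version rather than a modify-in-place can be seen from the ctid system column. A tiny sketch, with a made-up table name:

create table t (i int);
insert into t values (1);
select ctid from t;   -- expect (0,1)
update t set i = i;
select ctid from t;   -- expect (0,2); the old version at (0,1) stays dead until vacuumed

That buildup of dead row versions is why VACUUM and autovacuum matter so much for tables with heavy UPDATE traffic, as Ron says.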
[
{
"msg_contents": "> This is a relatively low end HBA with 1 4Gb FC on it. Max sustained\nIO on it is going to be ~320MBps. Or ~ enough for an 8 HD RAID 10 set\nmade of 75MBps ASTR HD's.\n\nLooking at http://h30094.www3.hp.com/product.asp?sku=2260908&extended=1,\nI notice that the controller has a Ultra160 SCSI interface which implies\nthat the theoretical max throughput is 160Mb/s. Ouch.\n\nHowever, what's more important is the seeks/s - ~530/s on a 28 disk\narray is quite lousy compared to the 1400/s on a 12 x 15Kdisk array as\nmentioned by Mark here:\nhttp://archives.postgresql.org/pgsql-performance/2006-07/msg00170.php.\nCould be the disk RPM (10K vs 15K) that makes the difference here...\n\nI will test another stripe size (128K) for the DATA lun (28 disks) to\nsee what difference that makes, I think I read somewhere that linux\nflushes blocks of 128K at a time, so it might be worth evaluating.\n\n/Mikael\n\n\n",
"msg_date": "Tue, 18 Jul 2006 15:34:07 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "Mikael,\n\nOn 7/18/06 6:34 AM, \"Mikael Carneholm\" <[email protected]>\nwrote:\n\n> However, what's more important is the seeks/s - ~530/s on a 28 disk\n> array is quite lousy compared to the 1400/s on a 12 x 15Kdisk array\n\nI'm getting 2500 seeks/second on a 36 disk SATA software RAID (ZFS, Solaris\n10) on a Sun X4500:\n\n=========== Single Stream ============\n\nWith a very recent update to the zfs module that improves I/O scheduling and\nprefetching, I get the following bonnie++ 1.03a results with a 36 drive\nRAID10, Solaris 10 U2 on an X4500 with 500GB Hitachi drives (zfs\nchecksumming is off):\n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nthumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344 94\n1801 4\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++ +++++\n+++\n\n=========== Two Streams ============\n\nBumping up the number of concurrent processes to 2, we get about 1.5x speed\nreads of RAID10 with a concurrent workload (you have to add the rates\ntogether): \n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nthumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472 88\n1233 2\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++ 4381\n97\n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\nthumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030 87\n1274 3\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec\n%CP\n 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++ 4272\n97\n\nSo that¹s 2500 seeks per second, 1440MB/s sequential block read, 212MB/s per\ncharacter sequential read.\n=======================\n\n- Luke\n\n\n\nRe: [PERFORM] RAID stripe size question\n\n\nMikael,\n\nOn 7/18/06 6:34 AM, \"Mikael Carneholm\" <[email protected]> wrote:\n\n> However, what's more important is the seeks/s - ~530/s on a 28 disk\n> array is quite lousy compared to the 1400/s on a 12 x 15Kdisk array\n\nI'm getting 2500 seeks/second on a 36 disk SATA software RAID (ZFS, Solaris 10) on a Sun X4500:\n\n=========== Single Stream ============\n\nWith a very recent update to the zfs module that improves I/O scheduling and prefetching, I get the following bonnie++ 1.03a results with a 36 drive RAID10, Solaris 10 U2 on an X4500 with 500GB Hitachi drives (zfs checksumming is off):\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nthumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344 94 1801 4\n 
------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++ +++++ +++\n\n=========== Two Streams ============\n\nBumping up the number of concurrent processes to 2, we get about 1.5x speed reads of RAID10 with a concurrent workload (you have to add the rates together): \n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nthumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472 88 1233 2\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++ 4381 97\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\nthumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030 87 1274 3\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++ 4272 97\n\nSo that’s 2500 seeks per second, 1440MB/s sequential block read, 212MB/s per character sequential read.\n=======================\n\n- Luke",
"msg_date": "Tue, 18 Jul 2006 11:56:45 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "This is a great testament to the fact that very often software RAID will\nseriously outperform hardware RAID because the OS guys who implemented it\ntook the time to do it right, as compared with some controller manufacturers\nwho seem to think it's okay to provided sub-standard performance.\n\nBased on the bonnie++ numbers comming back from your array, I would also\nencourage you to evaluate software RAID, as you might see significantly\nbetter performance as a result. RAID 10 is also a good candidate as it's\nnot so heavy on the cache and CPU as RAID 5.\n\nAlex.\n\nOn 7/18/06, Luke Lonergan <[email protected]> wrote:\n>\n> Mikael,\n>\n>\n> On 7/18/06 6:34 AM, \"Mikael Carneholm\" <[email protected]>\n> wrote:\n>\n> > However, what's more important is the seeks/s - ~530/s on a 28 disk\n> > array is quite lousy compared to the 1400/s on a 12 x 15Kdisk array\n>\n> I'm getting 2500 seeks/second on a 36 disk SATA software RAID (ZFS,\n> Solaris 10) on a Sun X4500:\n>\n> =========== Single Stream ============\n>\n> With a very recent update to the zfs module that improves I/O scheduling\n> and prefetching, I get the following bonnie++ 1.03a results with a 36\n> drive RAID10, Solaris 10 U2 on an X4500 with 500GB Hitachi drives (zfs\n> checksumming is off):\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> thumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344 94\n> 1801 4\n>\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++\n> +++++ +++\n>\n> =========== Two Streams ============\n>\n> Bumping up the number of concurrent processes to 2, we get about 1.5xspeed reads of RAID10 with a concurrent workload (you have to add the rates\n> together):\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> thumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472 88\n> 1233 2\n>\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++\n> 4381 97\n>\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> thumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030 87\n> 1274 3\n>\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++\n> 4272 97\n>\n> So that's 2500 seeks per second, 1440MB/s sequential block read, 212MB/s\n> per character sequential read.\n> =======================\n>\n> - Luke\n>\n\nThis is a great testament to the fact that very often software RAID will seriously outperform hardware RAID because the OS guys who implemented it took the time to do it right, 
as compared with some controller manufacturers who seem to think it's okay to provide sub-standard performance.",
"msg_date": "Tue, 18 Jul 2006 15:27:42 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "On Tue, 2006-07-18 at 14:27, Alex Turner wrote:\n> This is a great testament to the fact that very often software RAID\n> will seriously outperform hardware RAID because the OS guys who\n> implemented it took the time to do it right, as compared with some\n> controller manufacturers who seem to think it's okay to provided\n> sub-standard performance. \n> \n> Based on the bonnie++ numbers comming back from your array, I would\n> also encourage you to evaluate software RAID, as you might see\n> significantly better performance as a result. RAID 10 is also a good\n> candidate as it's not so heavy on the cache and CPU as RAID 5. \n\nAlso, consider testing a mix, where your hardware RAID controller does\nthe mirroring and the OS stripes ((R)AID 0) over the top of it. I've\ngotten good performance from mediocre hardware cards doing this. It has\nthe advantage of still being able to use the battery backed cache and\nits instant fsync while not relying on some cards that have issues\nlayering RAID layers one atop the other.\n",
"msg_date": "Tue, 18 Jul 2006 14:37:27 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": "Have you done any experiments implementing RAID 50 this way (HBA does RAID 5, OS does RAID 0)? If so, what were the results?\n\nRon\n\n-----Original Message-----\n>From: Scott Marlowe <[email protected]>\n>Sent: Jul 18, 2006 3:37 PM\n>To: Alex Turner <[email protected]>\n>Cc: Luke Lonergan <[email protected]>, Mikael Carneholm <[email protected]>, Ron Peacetree <[email protected]>, [email protected]\n>Subject: Re: [PERFORM] RAID stripe size question\n>\n>On Tue, 2006-07-18 at 14:27, Alex Turner wrote:\n>> This is a great testament to the fact that very often software RAID\n>> will seriously outperform hardware RAID because the OS guys who\n>> implemented it took the time to do it right, as compared with some\n>> controller manufacturers who seem to think it's okay to provided\n>> sub-standard performance. \n>> \n>> Based on the bonnie++ numbers comming back from your array, I would\n>> also encourage you to evaluate software RAID, as you might see\n>> significantly better performance as a result. RAID 10 is also a good\n>> candidate as it's not so heavy on the cache and CPU as RAID 5. \n>\n>Also, consider testing a mix, where your hardware RAID controller does\n>the mirroring and the OS stripes ((R)AID 0) over the top of it. I've\n>gotten good performance from mediocre hardware cards doing this. It has\n>the advantage of still being able to use the battery backed cache and\n>its instant fsync while not relying on some cards that have issues\n>layering RAID layers one atop the other.\n\n",
"msg_date": "Tue, 18 Jul 2006 15:43:29 -0400 (GMT-04:00)",
"msg_from": "Ron Peacetree <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "Nope, haven't tried that. At the time I was testing this I didn't even\nthink of trying it. I'm not even sure I'd heard of RAID 50 at the\ntime... :)\n\nI basically had an old MegaRAID 4xx series card in a dual PPro 200 and a\nstack of 6 9 gig hard drives. Spare parts. And even though the RAID\n1+0 was relatively much faster on this hardware, the Dual P IV 2800 with\na pair of 15k USCSI drives and a much later model MegaRAID at it for\nlunch with a single mirror set, and was plenty fast for our use at the\ntime, so I never really had call to test it in production.\n\nBut it definitely made our test server, the aforementioned PPro200\nmachine, more livable.\n\nOn Tue, 2006-07-18 at 14:43, Ron Peacetree wrote:\n> Have you done any experiments implementing RAID 50 this way (HBA does RAID 5, OS does RAID 0)? If so, what were the results?\n> \n> Ron\n> \n> -----Original Message-----\n> >From: Scott Marlowe <[email protected]>\n> >Sent: Jul 18, 2006 3:37 PM\n> >To: Alex Turner <[email protected]>\n> >Cc: Luke Lonergan <[email protected]>, Mikael Carneholm <[email protected]>, Ron Peacetree <[email protected]>, [email protected]\n> >Subject: Re: [PERFORM] RAID stripe size question\n> >\n> >On Tue, 2006-07-18 at 14:27, Alex Turner wrote:\n> >> This is a great testament to the fact that very often software RAID\n> >> will seriously outperform hardware RAID because the OS guys who\n> >> implemented it took the time to do it right, as compared with some\n> >> controller manufacturers who seem to think it's okay to provided\n> >> sub-standard performance. \n> >> \n> >> Based on the bonnie++ numbers comming back from your array, I would\n> >> also encourage you to evaluate software RAID, as you might see\n> >> significantly better performance as a result. RAID 10 is also a good\n> >> candidate as it's not so heavy on the cache and CPU as RAID 5. \n> >\n> >Also, consider testing a mix, where your hardware RAID controller does\n> >the mirroring and the OS stripes ((R)AID 0) over the top of it. I've\n> >gotten good performance from mediocre hardware cards doing this. It has\n> >the advantage of still being able to use the battery backed cache and\n> >its instant fsync while not relying on some cards that have issues\n> >layering RAID layers one atop the other.\n> \n",
"msg_date": "Tue, 18 Jul 2006 14:48:46 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": "Hello,\n\nI am seeking advice/comment/experience you may have had for the performance\ncost for remote access to postgresql 8.1.X?\n\nI have two servers, one is Sun V240 (say server A) and the other is dual\nintel Xeon (say Server B) and both installed Solaris 10.\n\nWith Server A, there is postgresql 8.1.3 installed with pgpool\n(pgpool-3.0.2), with server B, there is a pgpool (v3.0.2) installed.\n\nThe test program is installed on both A and B, where the test application on\nserver B is accessing to DBMS on A through pgpool.\n\nNote that the test code is not fancy but can insert a large number of record\n(say 100k rows) with configurable transaction size.\n\nFollowing are the results (repeated many times with the mean value and shall\nbe accurate) for various setting by fixed 100k insertion operation with a\ntransaction size as 100 rows):\n------------------------------------------\n1. Test program running on server A directly access to LOCAL postgresql:\n24.03 seconds\n2. Test progam running on server A access to LOCAL postgresql through\npgpool: \t30.05 seconds\n3. Test progam running on server A access REMOTE postgresql through local\npgpool: 74.06 seconds\n------------------------------------------\nI have to say both machines are very light load and interconnected with\nlocal LAN.\n\n From 1 and 2, pgpool add 20% overhead, it sounds reasonable but any way to\nreduce it???\n\n From 2 and 3, it suggests the remote access is much slower than local\naccess.\n\nMy question is:\n a) Anyone has the similar experience? How do you deal with it?\n b) Why TCP stack imposes such big delay? any tuning point I shall do?\n\n\nThe time call reports\n for test 2 is\n real 0m32.71s\n user 0m2.42s\n sys 0m2.65s\n\n for test 3 is\n real 1:14.0\n user 2.5\n sys 3.2\n\n c) Obviously, CPU time for (user + sys) for both tests are very similar,\nbut the overall time is quite different. I assume the time used on TCP stack\nmakes the difference.\n\n\nMany thanks,\nRegards,\nGuoping Zhang\n\n",
"msg_date": "Wed, 19 Jul 2006 15:40:02 +1000",
"msg_from": "\"Guoping Zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "* Guoping Zhang:\n\n> a) Anyone has the similar experience? How do you deal with it?\n> b) Why TCP stack imposes such big delay? any tuning point I shall do?\n\nIf you use INSERT, you'll incur a network round-trip delay for each\nrecord. Try using COPY FROM instead, possibly to a temporary table if\nyou need more complex calculations. If you do this, there won't be a\nhuge difference between local and remote access as long as the\nbandwidth is sufficient.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Wed, 19 Jul 2006 08:30:14 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "Hi, Florian\n\nThanks for pointing me the cause, but we simply cannot use the COPY FROM\nsolution.\n\nCurrently, our application service is running with its own dedicated local\ndatabase, IF Feasible, we want to separate the application services out of\ndatabase server and run SEVERAL instances of applation serivice on its own\nserver (one per server), and make them all shall one database server. This\nhelps to the scalability and also reduce the device cost as only database\nserver would need mirror/backup/UPS etc.\n\nObviously, if there is no better solution, the TCP round trip penalty will\nstop us doing so as we do have performance requirement.\n\nI guess there shall be quite number of people out there facing the similar\nproblem, right? No alternative solution?\n\nRegards,\nGuoping Zhang\n\n\n\n\n\n\n-----Original Message-----\nFrom: Florian Weimer [mailto:[email protected]]\nSent: 2006Ae7OA19EO 16:30\nTo: [email protected]\nCc: [email protected]; Guoping Zhang (E-mail)\nSubject: Re: [PERFORM] Performance penalty for remote access of\npostgresql (8.1.3)? any experiance?\n\n\n* Guoping Zhang:\n\n> a) Anyone has the similar experience? How do you deal with it?\n> b) Why TCP stack imposes such big delay? any tuning point I shall do?\n\nIf you use INSERT, you'll incur a network round-trip delay for each\nrecord. Try using COPY FROM instead, possibly to a temporary table if\nyou need more complex calculations. If you do this, there won't be a\nhuge difference between local and remote access as long as the\nbandwidth is sufficient.\n\n--\nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n\n",
"msg_date": "Wed, 19 Jul 2006 17:33:34 +1000",
"msg_from": "\"Guoping Zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "* Guoping Zhang:\n\n> Thanks for pointing me the cause, but we simply cannot use the COPY FROM\n> solution.\n\nWhy not? Just do something like this:\n\nCREATE TEMPORARY TABLE tmp (col1 TEXT NOT NULL, col2 INTEGER NOT NULL);\nCOPY tmp FROM STDIN;\nrow1\t1\nrow2\t2\n...\n\\.\nINSERT INTO target SELECT * FROM tmp;\n\nIf you need some kind of SELECT/INSERT/UPDATE cycle, it's far more\ncomplex, of course, and I'm not quite happy with what I'm using right\nnow.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Wed, 19 Jul 2006 09:38:10 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "* Guoping Zhang ([email protected]) wrote:\n> Obviously, if there is no better solution, the TCP round trip penalty will\n> stop us doing so as we do have performance requirement.\n\nActually, can't you stick multiple inserts into a given 'statement'?\nie: insert into abc (123); insert into abc (234);\n\nI'm not 100% sure if that solves the round-trip issue, but it might..\nAlso, it looks like we might have multi-value insert support in 8.2 (I\ntruely hope so anyway), so you could do something like this:\ninsert into abc (123),(234);\n\n> I guess there shall be quite number of people out there facing the similar\n> problem, right? No alternative solution?\n\nHavn't run into it myself... Quite often you either have large inserts\nbeing done using COPY commands (data warehousing and analysis work) or you\nhave a relatively small number of one-off inserts (OLTP) per transaction.\n\n\tEnjoy,\n\n\t\tStephen",
"msg_date": "Wed, 19 Jul 2006 10:01:35 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
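A minimal sketch of the two batching forms discussed in the message above, using a hypothetical one-column table abc (the thread's shorthand "insert into abc (123)" omits the VALUES keyword, so the explicit form is spelled out here). Several statements submitted as a single statement string should cost one network round trip when sent with a plain PQexec; the multi-row VALUES form relies on the 8.2 feature Stephen mentions.

-- hypothetical table, for illustration only
CREATE TABLE abc (val integer);

-- several INSERTs sent as one statement string: a single round trip (works on 8.1)
INSERT INTO abc VALUES (123); INSERT INTO abc VALUES (234); INSERT INTO abc VALUES (345);

-- multi-row VALUES list: one INSERT for many rows (requires 8.2 or later)
INSERT INTO abc VALUES (123), (234), (345);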
{
"msg_contents": "* Stephen Frost:\n\n> Actually, can't you stick multiple inserts into a given 'statement'?\n> ie: insert into abc (123); insert into abc (234);\n\nIIRC, this breaks with PQexecParams, which is the recommended method\nfor executing SQL statements nowadays.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Wed, 19 Jul 2006 16:18:03 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "* Florian Weimer ([email protected]) wrote:\n> * Stephen Frost:\n> > Actually, can't you stick multiple inserts into a given 'statement'?\n> > ie: insert into abc (123); insert into abc (234);\n> \n> IIRC, this breaks with PQexecParams, which is the recommended method\n> for executing SQL statements nowadays.\n\nFor prepared queries you're absolutely correct. It's also true that\nit's the recommended approach for large numbers of inserts. If the\nnetwork delay is more of a problem than the processing speed then it\nmight make sense.\n\nIt does seem to me that with multi-value insert we might consider\nchanges to libpq to be able to use multi-value prepared inserts... Or\nit might be interesting to see the performance of non-prepared\nmulti-value inserts vs. prepared statements.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Wed, 19 Jul 2006 10:26:43 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "In response to \"Guoping Zhang\" <[email protected]>:\n> \n> Thanks for pointing me the cause, but we simply cannot use the COPY FROM\n> solution.\n> \n> Currently, our application service is running with its own dedicated local\n> database, IF Feasible, we want to separate the application services out of\n> database server and run SEVERAL instances of applation serivice on its own\n> server (one per server), and make them all shall one database server. This\n> helps to the scalability and also reduce the device cost as only database\n> server would need mirror/backup/UPS etc.\n> \n> Obviously, if there is no better solution, the TCP round trip penalty will\n> stop us doing so as we do have performance requirement.\n> \n> I guess there shall be quite number of people out there facing the similar\n> problem, right? No alternative solution?\n\nI suppose I'm a little confused on two points:\n1) What did you expect.\n2) What is your network?\n\nOn #1: networking adds overhead. Period. Always. I believe you earlier\nsaid you estimated around %20 perf hit. For small transactions, I wouldn't\nexpect much better. TCP adds a good bit of header to each packet, plus\nthe time in the kernel, and the RTT. 20% sounds about average to me.\n\n#2 falls into a number of different categories. For example:\na) What is your topology? If you absolutely need blazing speed, you should\n have a dedicated gigabit switched network between the machines.\nb) Not all network hardware is created equal. Cheap switches seldom\n perform at their advertised speed. Stick with high-end stuff. NICs\n are the same way.\nOn #2, you'll want to ensure that the problem is not in the hardware before\nyou start complaining about PostgreSQL, or even TCP. If you've got a cheap,\nlaggy switch, not amount of TCP or PostgreSQL tuning is going to overcome\nit.\n\nHope some of this is helpful.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n",
"msg_date": "Wed, 19 Jul 2006 10:41:49 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql"
},
{
"msg_contents": "Stephen Frost wrote:\n> * Guoping Zhang ([email protected]) wrote:\n> \n>>Obviously, if there is no better solution, the TCP round trip penalty will\n>>stop us doing so as we do have performance requirement.\n> \n> Actually, can't you stick multiple inserts into a given 'statement'?\n> ie: insert into abc (123); insert into abc (234);\n> \n> I'm not 100% sure if that solves the round-trip issue, but it might..\n> Also, it looks like we might have multi-value insert support in 8.2 (I\n> truely hope so anyway), so you could do something like this:\n> insert into abc (123),(234);\n\nYeah, see my post from last night on PATCHES. Something like \"insert \ninto abc (123); insert into abc (234); ...\" actually seems to work \npretty well as long as you don't drive the machine into swapping. If \nyou're doing a very large number of INSERTs, break it up into bite-sized \nchunks and you should be fine.\n\nJoe\n",
"msg_date": "Wed, 19 Jul 2006 08:49:17 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql"
},
{
"msg_contents": "Thanks for all the replies from different ppl.\n\nAs you pointed out, each INSERT/UPDATE operation will result in a TCP\nround-trip delay for postgresql (may well true for all DBMS), this is the\nbig problem to challenge our requirements, as extensively modify the\n(legacy) applicatioin is not a preferable choice.\n\nI measured the round-trip (UDP) delay as below:\n\na) SERVER A to SERVER B: 0.35ms\n SERVER A to itself (Local host): 0.022ms\n\nThat is, in the tests I did yesterday, it is about 100k insert operations,\nwhich means added around 35 seconds of delay.....\n\nb) Also, using Iperf shows that\n TCP bandwidth between Server A and B is about 92.3 Mbits/sec\n TCP bandwidth between two ports at same Server A can reach 10.9Gbits/sec\n\nThat indicates the performance impact for the networking....\n\nThere might be parameter in Solaris to tune the 'ack response delay', but I\ndidn't try now.\n\nThanks for all the answers...\n\nRegards,\nGuoping Zhang\n\n\n\n-----Original Message-----\nFrom: Florian Weimer [mailto:[email protected]]\nSent: 2006Ae7OA20EO 0:18\nTo: Guoping Zhang\nCc: [email protected]\nSubject: Re: [PERFORM] Performance penalty for remote access of\npostgresql (8.1.3)? any experiance?\n\n\n* Stephen Frost:\n\n> Actually, can't you stick multiple inserts into a given 'statement'?\n> ie: insert into abc (123); insert into abc (234);\n\nIIRC, this breaks with PQexecParams, which is the recommended method\nfor executing SQL statements nowadays.\n\n--\nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n\n",
"msg_date": "Thu, 20 Jul 2006 16:32:45 +1000",
"msg_from": "\"Guoping Zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance penalty for remote access of postgresql (8.1.3)? any\n\texperiance?"
},
{
"msg_contents": "Guoping Zhang wrote:\n\n>\n>a) SERVER A to SERVER B: 0.35ms\n> SERVER A to itself (Local host): 0.022ms\n>\n> \n>\n0.35ms seems rather slow. You might try investigating what's in the path.\nFor comparison, between two machines here (three GigE switches in the\npath), I see 0.10ms RTT. Between two machines on the same switch I\nget 0.08ms.\n\n\n\n\n",
"msg_date": "Thu, 20 Jul 2006 07:54:58 -0600",
"msg_from": "David Boreham <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance penalty for remote access of postgresql"
}
] |
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 2543\nLogged by: Alaa El Gohary\nEmail address: [email protected]\nPostgreSQL version: 7.4.12\nOperating system: FreeBSD 6.0\nDescription: Performance delay acrros the same day\nDetails: \n\nA query on the postgresql DB takes about 5 seconds and then it starts to\ntake more time till it reaches about 60 seconds by the end of the same day.\nI tried vacuum but nothing changed the only thing that works is to dump the\nDB ,drop and create a new one with the dump taken.\ni need to know if there is any way to restore the performance back without\nthe need for drop and create\ncause i can't do this accross the day\n",
"msg_date": "Fri, 21 Jul 2006 07:41:02 GMT",
"msg_from": "\"Alaa El Gohary\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #2543: Performance delay acrros the same day"
},
{
"msg_contents": "On Fri, Jul 21, 2006 at 07:41:02 +0000,\n Alaa El Gohary <[email protected]> wrote:\n> \n> The following bug has been logged online:\n\nThe report below isn't a bug, its a performance question and should have\nbeen sent to [email protected]. I am redirecting replies there.\n\n> A query on the postgresql DB takes about 5 seconds and then it starts to\n> take more time till it reaches about 60 seconds by the end of the same day.\n> I tried vacuum but nothing changed the only thing that works is to dump the\n> DB ,drop and create a new one with the dump taken.\n> i need to know if there is any way to restore the performance back without\n> the need for drop and create\n> cause i can't do this accross the day\n\nYou most likely aren't vacuuming often enough and/or don't have your FSM\nsetting high enough.\n",
"msg_date": "Fri, 21 Jul 2006 13:33:43 -0500",
"msg_from": "Bruno Wolff III <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2543: Performance delay acrros the same day"
},
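A sketch of how Bruno's two suggestions can be checked, with placeholder values; on 7.4 and later a database-wide VACUUM VERBOSE should end with a summary of how many free space map pages are needed, which can be compared against the current setting.

-- run regularly (or rely on pg_autovacuum / integrated autovacuum where available)
VACUUM VERBOSE ANALYZE;

-- if the reported 'total pages needed' exceeds max_fsm_pages, raise it in
-- postgresql.conf and restart; the values below are only examples:
-- max_fsm_pages = 200000
-- max_fsm_relations = 1000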
{
"msg_contents": "Hi, Bruno,\n\nBruno Wolff III wrote:\n> On Fri, Jul 21, 2006 at 07:41:02 +0000,\n> Alaa El Gohary <[email protected]> wrote:\n>> The following bug has been logged online:\n> \n> The report below isn't a bug, its a performance question and should have\n> been sent to [email protected]. I am redirecting replies there.\n> \n>> A query on the postgresql DB takes about 5 seconds and then it starts to\n>> take more time till it reaches about 60 seconds by the end of the same day.\n>> I tried vacuum but nothing changed the only thing that works is to dump the\n>> DB ,drop and create a new one with the dump taken.\n>> i need to know if there is any way to restore the performance back without\n>> the need for drop and create\n>> cause i can't do this accross the day\n> \n> You most likely aren't vacuuming often enough and/or don't have your FSM\n> setting high enough.\n\nDepending on the PostgreSQL version, it might also be that he suffers\nfrom index bloat. He might look into the manual pages about REINDEX for\na description.\n\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 24 Jul 2006 09:54:29 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2543: Performance delay acrros the same day"
}
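If index bloat turns out to be the cause, a minimal sketch of the remedy Markus refers to; the table and index names are placeholders, and REINDEX takes an exclusive lock on the table, so it is best run in a quiet window.

-- rebuild every index on the affected table
REINDEX TABLE the_slow_table;

-- or rebuild one suspect index
REINDEX INDEX the_slow_table_pkey;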
] |
[
{
"msg_contents": "I have been testing the performance of PostgreSQL using the simple tool\nfound at http://benchw.sourceforge.net however I have found that all the\nqueries it run execute with sequential scans. The website where the code\nruns has examples of the execution plan using indexes.\n\nWhen I disable the sequential plan query 0 and query 1 run faster (\nhttp://benchw.sourceforge.net/benchw_results_postgres_history.html ) by\nusing the indexes as suggested by the website.\n\nI have tried increasing the effective_cache_size and reducing the\nrandom_page_cost to try and force the optimiser to use the index but it\nalways uses the sequential scan.\n\nWhat is the best way to force the use of indexes in these queries?\nCurrently testing with version 8.1.4.\n\nRegards\n\nRobin Smith\n\nBritish Telecommunications plc Registered office: 81 Newgate Street\nLondon EC1A 7AJ\n\nRegistered in England no. 1800000\n\nThis electronic message contains information from British\nTelecommunications plc which may be privileged and confidential. The\ninformation is intended to be for the use of the individual(s) or entity\nnamed above. If you are not the intended recipient, be aware that any\ndisclosure, copying, distribution or use of the contents of this\ninformation is prohibited. If you have received this electronic message\nin error, please notify us by telephone or e-mail (to the number or\naddress above) immediately.\n\n\n\n\n\n\n\nForcing using index instead of sequential scan?\n\n\n\nI have been testing the performance of PostgreSQL using the simple tool found at http://benchw.sourceforge.net however I have found that all the queries it run execute with sequential scans. The website where the code runs has examples of the execution plan using indexes.\nWhen I disable the sequential plan query 0 and query 1 run faster ( http://benchw.sourceforge.net/benchw_results_postgres_history.html ) by using the indexes as suggested by the website.\nI have tried increasing the effective_cache_size and reducing the random_page_cost to try and force the optimiser to use the index but it always uses the sequential scan.\nWhat is the best way to force the use of indexes in these queries? Currently testing with version 8.1.4.\n\nRegards\n\nRobin Smith\n\nBritish Telecommunications plc Registered office: 81 Newgate Street London EC1A 7AJ\n\nRegistered in England no. 1800000\n\nThis electronic message contains information from British Telecommunications plc which may be privileged and confidential. The information is intended to be for the use of the individual(s) or entity named above. If you are not the intended recipient, be aware that any disclosure, copying, distribution or use of the contents of this information is prohibited. If you have received this electronic message in error, please notify us by telephone or e-mail (to the number or address above) immediately.",
"msg_date": "Fri, 21 Jul 2006 11:40:33 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing using index instead of sequential scan?"
},
{
"msg_contents": "[email protected] wrote:\n> What is the best way to force the use of indexes in these queries?\n\nWell, the brute-force method is to use SET enable_seqscan TO off, but if \nyou want to get to the bottom of this, you should look at or post the \nEXPLAIN ANALYZE output of the offending queries.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Fri, 21 Jul 2006 13:45:41 +0200",
"msg_from": "Peter Eisentraut <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
},
{
"msg_contents": "<[email protected]> writes:\n> I have been testing the performance of PostgreSQL using the simple tool\n> found at http://benchw.sourceforge.net however I have found that all the\n> queries it run execute with sequential scans. The website where the code\n> runs has examples of the execution plan using indexes.\n\nThe reason the website gets indexscans is that he's fooled with the\nplanner cost parameters. In particular I see that benchw's\ndocumentation suggests\n\teffective_cache_size\t= 48000\n\trandom_page_cost\t= 0.8\nThe latter is physically silly but it's a pretty effective thumb on the\nscales if you want to force indexscan usage.\n\nThe real issue here is caching across successive queries, an effect that\nPostgres doesn't deal with very well at the moment. If you run these\nqueries from a standing start (freshly booted machine) you'll likely\nfind that the indexscan plan is indeed slower than the seqscan/hash\nplan, just like the planner thinks. I get about 52 sec for query0\nwith an indexscan vs about 35 sec for the seqscan. However, successive\nexecutions of the seqscan plan stay at about 35 sec, whereas the\nindexscan plan drops to 2 sec(!). This is because the fraction of the\ntable touched by the indexscan plan is small enough to fit in my\nmachine's RAM --- I can see by das blinkenlights (and also vmstat) that\nthere's no I/O going on at all during re-executions of the indexscan.\nIf I run the seqscan and then the indexscan, the indexscan takes about\n28 sec, so there's still some useful cached data even though the seqscan\nread more stuff than fits in RAM. (Note: this is with Fedora Core 5,\nYMMV depending on your kernel's cache algorithms.)\n\nIn a real-world situation it's unlikely you'd just re-execute the same\nquery over and over, so this benchmark is really too simplistic to trust\nvery far as an indicator of what to do in practice.\n\nI find that CVS tip will choose the indexscan for query0 if I set\neffective_cache_size to 62500 (ie, half a gigabyte, or half of this\nmachine's RAM) and set random_page_cost to 1.5 or less.\n\nIf you want the planner to work on the assumption that everything's\ncached, set effective_cache_size to a large value and set\nrandom_page_cost to 1.0 --- you might also want to increase the CPU\ncost settings, reflecting the fact that I/O is cheaper relative to\nCPU effort than the default settings assume. However, if your database\nis too large to fit in RAM then these are likely to be pretty bad\nsettings. Many people compromise with a random_page_cost around 2\nor so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Jul 2006 12:22:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan? "
},
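The settings Tom describes can be tried per session before committing them to postgresql.conf; a sketch using the values from his message (in 8.1 effective_cache_size is expressed in 8 kB pages, so 62500 is roughly half a gigabyte):

SET effective_cache_size = 62500;
SET random_page_cost = 1.5;
-- re-run the query under test with EXPLAIN ANALYZE and compare plan and timings
-- RESET ALL;  -- or reconnect, to return to the server defaults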
{
"msg_contents": "> The real issue here is caching across successive queries, an effect that\n> Postgres doesn't deal with very well at the moment. If you run these\n> queries from a standing start (freshly booted machine) you'll likely\n> find that the indexscan plan is indeed slower than the seqscan/hash\n> plan, just like the planner thinks.\n\nHere's a little trick I learned to speed up this test.\n\n find / -type f -exec grep foobar {} \\;\n\nThis causes massive file-system activity and flushes all files that the kernel has cached. If you run this between each Postgres test (let it run for a couple minutes), it gives you an apples-to-apples comparison between successive benchmarks, and eliminates the effects of caching.\n\nIf you run this as a regular user (NOT super-user or 'postgres'), you won't have permission to access your Postgres files, so you're guaranteed they'll be flushed from the cache.\n\nCraig\n",
"msg_date": "Sat, 22 Jul 2006 10:26:53 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
},
{
"msg_contents": "\"Craig A. James\" <[email protected]> writes:\n\n> This causes massive file-system activity and flushes all files that the\n> kernel has cached. If you run this between each Postgres test (let it run\n> for a couple minutes), it gives you an apples-to-apples comparison between\n> successive benchmarks, and eliminates the effects of caching.\n\nOn Linux at least the best way to flush the cache is to unmount and then mount\nthe filesystem. This requires putting the data files on partition that you\naren't otherwise using and shutting down postgres.\n\nNote that \"nothing cached\" isn't necessarily any more accurate a model as\n\"everything cached\". In reality many databases *do* in fact run the same\nqueries over and over again, though often with some parameters different each\ntime. But the upper pages of most indexes and many of the common leaf pages\nand heap pages will in fact be cached.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "22 Jul 2006 19:15:31 -0400",
"msg_from": "Gregory Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
},
{
"msg_contents": "Tom Lane wrote:\n> <[email protected]> writes:\n>> I have been testing the performance of PostgreSQL using the simple tool\n>> found at http://benchw.sourceforge.net however I have found that all the\n>> queries it run execute with sequential scans. The website where the code\n>> runs has examples of the execution plan using indexes.\n> \n> The reason the website gets indexscans is that he's fooled with the\n> planner cost parameters. In particular I see that...(snipped)\n> \n\nIndeed I did - probably should have discussed that alteration better in \nthe documentation for the test suite!\n\nIn addition I was a bit naughty in running the benchmark using size 1 \n(i.e about 1G) an a box with 2G ram - as this meant that (on the machine \nI was using then anyway) indexscans on query 0 and 1 were *always* \nbetter than the sequential options.\n\nA better test is to use the size factor at 2 x physical ram, as then the \nplanners defaults make more sense! (unless or course you *want* to model \na data mart smaller than physical ram).\n\nBest wishes\n\nMark\n\n\n\n",
"msg_date": "Sun, 23 Jul 2006 15:28:48 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
},
{
"msg_contents": "[email protected] wrote:\n> I have been testing the performance of PostgreSQL using the simple tool \n> found at _http://benchw.sourceforge.net_ however I have found that all \n> the queries it run execute with sequential scans. The website where the \n> code runs has examples of the execution plan using indexes.\n> \n> When I disable the sequential plan query 0 and query 1 run faster ( \n> _http://benchw.sourceforge.net/benchw_results_postgres_history.html_ ) \n> by using the indexes as suggested by the website.\n> \n> I have tried increasing the effective_cache_size and reducing the \n> random_page_cost to try and force the optimiser to use the index but it \n> always uses the sequential scan.\n> \n> What is the best way to force the use of indexes in these queries? \n> Currently testing with version 8.1.4.\n> \n>\n\nHi Robin,\n\n\nBeing responsible for this piece of software, I should try to help, only \nsaw this now sorry (nice to see someone using this).\n\nUnless you really want to reproduce the numbers on the website, it is \nbest to test with Benchw's scale factor at least 2 x your physical ram, \nas this makes the planner's defaults work more sensibly (and models \n*most* real world data warehouse situations better!).\n\nCheers\n\nMark\n",
"msg_date": "Sun, 23 Jul 2006 15:39:21 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
},
{
"msg_contents": "On Sat, Jul 22, 2006 at 10:26:53AM -0700, Craig A. James wrote:\n>This causes massive file-system activity and flushes all files that the \n>kernel has cached. If you run this between each Postgres test (let it run \n>for a couple minutes), it gives you an apples-to-apples comparison between \n>successive benchmarks, and eliminates the effects of caching.\n\nAssuming a system with small ram or an unusually large system \ninstallation. Unmounting is a much more realiable mechanism.\n\nMike Stone\n",
"msg_date": "Sun, 23 Jul 2006 07:07:26 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
},
{
"msg_contents": "Michael Stone wrote:\n> On Sat, Jul 22, 2006 at 10:26:53AM -0700, Craig A. James wrote:\n>> This causes massive file-system activity and flushes all files that \n>> the kernel has cached. If you run this between each Postgres test \n>> (let it run for a couple minutes), it gives you an apples-to-apples \n>> comparison between successive benchmarks, and eliminates the effects \n>> of caching.\n> \n> Assuming a system with small ram or an unusually large system \n> installation. Unmounting is a much more realiable mechanism.\n\nIndeed, but it only works if you can. For example, in my small-ish installation, my WAL and system tables are mounted on the root disk. Or someone might not have super-user access.\n\nCraig\n",
"msg_date": "Sun, 23 Jul 2006 07:03:59 -0700",
"msg_from": "\"Craig A. James\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
}
] |
[
{
"msg_contents": "More information from the query:-\n\nexplain analyze\nSELECT\n d0.dmth,\n count(f.fval )\nFROM\n dim0 AS d0,\n fact0 AS f\nWHERE d0.d0key = f.d0key\nAND d0.ddate BETWEEN '2010-01-01' AND '2010-12-28'\nGROUP BY\n d0.dmth\n;\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n-------------------------------------------------------------\n HashAggregate (cost=336998.83..336998.84 rows=1 width=8) (actual\ntime=33823.124..33823.134 rows=12 loops=1)\n -> Hash Join (cost=214.83..335343.83 rows=331000 width=8) (actual\ntime=61.065..33605.343 rows=336000 loops=1)\n Hash Cond: (\"outer\".d0key = \"inner\".d0key)\n -> Seq Scan on fact0 f (cost=0.00..281819.00 rows=10000000\nwidth=8) (actual time=12.766..28945.036 rows=10000000 loops=1)\n -> Hash (cost=214.00..214.00 rows=331 width=8) (actual\ntime=31.120..31.120 rows=336 loops=1)\n -> Seq Scan on dim0 d0 (cost=0.00..214.00 rows=331\nwidth=8) (actual time=26.362..30.895 rows=336 loops=1)\n Filter: ((ddate >= '2010-01-01'::date) AND (ddate\n<= '2010-12-28'::date))\n Total runtime: 33823.220 ms\n(8 rows)\n\n\nbenchw=# \\d fact0\n Table \"public.fact0\"\n Column | Type | Modifiers\n--------+------------------------+-----------\n d0key | integer | not null\n d1key | integer | not null\n d2key | integer | not null\n fval | integer | not null\n ffill | character varying(100) | not null\nIndexes:\n \"fact0_d0key\" btree (d0key)\n \"fact0_d1key\" btree (d1key)\n \"fact0_d2key\" btree (d2key)\n\nbenchw=# \\d dim0\n Table \"public.dim0\"\n Column | Type | Modifiers\n--------+---------+-----------\n d0key | integer | not null\n ddate | date | not null\n dyr | integer | not null\n dmth | integer | not null\n dday | integer | not null\nIndexes:\n \"dim0_d0key\" UNIQUE, btree (d0key)\n\nThe example on the web site has the following execution plan:-\n\n QUERY PLAN\n\n------------------------------------------------------------------------\n--------------------\n HashAggregate (cost=286953.94..286953.94 rows=1 width=8)\n -> Nested Loop (cost=0.00..285268.93 rows=337002 width=8)\n -> Seq Scan on dim0 d0 (cost=0.00..219.00 rows=337 width=8)\n Filter: ((ddate >= '2010-01-01'::date) AND (ddate <=\n'2010-12-28'::date))\n -> Index Scan using fact0_d0key on fact0 f (cost=0.00..833.07\nrows=1022 width=8)\n Index Cond: (\"outer\".d0key = f.d0key)\n\nIt uses the index on the join condition.\n\nWhen I disable the sequential scan with:-\n\nSET enable_seqscan TO off;\n\nThe execution plan looks like:-\n\n QUERY\nPLAN \n------------------------------------------------------------------------\n----------------------------------------------------------------\n HashAggregate (cost=648831.52..648831.53 rows=1 width=8) (actual\ntime=19155.060..19155.071 rows=12 loops=1)\n -> Nested Loop (cost=7.51..647176.52 rows=331000 width=8) (actual\ntime=97.878..18943.155 rows=336000 loops=1)\n -> Index Scan using dim0_d0key on dim0 d0 (cost=0.00..248.00\nrows=331 width=8) (actual time=40.467..55.780 rows=336 loops=1)\n Filter: ((ddate >= '2010-01-01'::date) AND (ddate <=\n'2010-12-28'::date))\n -> Bitmap Heap Scan on fact0 f (cost=7.51..1941.94 rows=1002\nwidth=8) (actual time=0.991..55.391 rows=1000 loops=336)\n Recheck Cond: (\"outer\".d0key = f.d0key)\n -> Bitmap Index Scan on fact0_d0key (cost=0.00..7.51\nrows=1002 width=0) (actual time=0.583..0.583 rows=1000 loops=336)\n Index Cond: (\"outer\".d0key = f.d0key)\n Total runtime: 19155.176 ms\n(9 rows)\n\nThe query is 19 seconds long now; down from 34 seconds although the\nexecution plan 
doesn't match the example from the website.\n\nRegards\n\nRobin\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:[email protected]] \nSent: 21 July 2006 12:46\nTo: [email protected]\nCc: Smith,R,Robin,XJE4JA C\nSubject: Re: [PERFORM] Forcing using index instead of sequential scan?\n\n\[email protected] wrote:\n> What is the best way to force the use of indexes in these queries?\n\nWell, the brute-force method is to use SET enable_seqscan TO off, but if\n\nyou want to get to the bottom of this, you should look at or post the \nEXPLAIN ANALYZE output of the offending queries.\n\n-- \nPeter Eisentraut\nhttp://developer.postgresql.org/~petere/\n",
"msg_date": "Fri, 21 Jul 2006 13:02:06 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
}
] |
[
{
"msg_contents": "The tables have all been analysed.\n\nI set the work_mem to 500000 and it still doesn't use the index :-(\n\nRegards\n\nRobin\n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: 21 July 2006 12:54\nTo: Smith,R,Robin,XJE4JA C\nSubject: Re: [PERFORM] Forcing using index instead of sequential scan?\n\n\[email protected] wrote:\n> I have been testing the performance of PostgreSQL using the simple \n> tool found at http://benchw.sourceforge.net however I have found that \n> all the queries it run execute with sequential scans. The website \n> where the code runs has examples of the execution plan using indexes.\n> \n> When I disable the sequential plan query 0 and query 1 run faster ( \n> http://benchw.sourceforge.net/benchw_results_postgres_history.html ) \n> by using the indexes as suggested by the website.\n> \n> I have tried increasing the effective_cache_size and reducing the \n> random_page_cost to try and force the optimiser to use the index but \n> it always uses the sequential scan.\n> \n> What is the best way to force the use of indexes in these queries? \n> Currently testing with version 8.1.4.\n\nWell, you don't want to be forcing it if possible. Ideally, PG should be\n\nable to figure out what to use itself.\n\nIn the case of query0 and query1 as shown on your web-page I'd expect a \nsequential scan of dim0 then access via the index on fact0. Reasons why \nthis might not be happening include:\n1. Inaccurate stats - ANALYSE your tables\n2. Insufficient memory for sorting etc - issue SET work_mem=XXX before \nthe query and try increased values.\n3. Other parameters are out-of-whack. For example, effective_cache_size \ndoesn't change how much cache PG uses, it tells PG how much the O.S. \nwill cache. You might find http://www.powerpostgresql.com/PerfList is a \ngood quick introduction.\n\n\nSo - ANALYSE your tables\nhttp://www.postgresql.org/docs/8.1/static/sql-analyze.html\n\nThen post EXPLAIN ANALYSE for the queries and we'll see what they're\ndoing.\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Fri, 21 Jul 2006 13:10:29 +0100",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing using index instead of sequential scan?"
}
] |
[
{
"msg_contents": "Hello,\n\ndoes anybody use OSDB benchmarks for postgres?\nif not, which kind of bechmarks are used for postgres?\n\nThanks,\nDenis.\n",
"msg_date": "Fri, 21 Jul 2006 16:35:30 +0300",
"msg_from": "\"Petronenko D.S.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres benchmarks"
},
{
"msg_contents": "At EnterpriseDB we make extensive use of the OSDB's OLTP Benchmark. We\nalso use the Java based benchamrk called BenchmarkSQL from SourceForge.\nBoth of these benchmarks are update intensive OLTP tests that closely mimic\nthe Traqnsaction Processing COuncil's TPC-C benchmark.\n\nPostgres also ships with pg_bench, which is a simpler OLTP benchmark that I\nbelieve is similar to a TPC-B.\n\n--Denis Lussier\n CTO\n http://www.enterprisedb.com\n\n\nOn 7/21/06, Petronenko D.S. <[email protected]> wrote:\n>\n> Hello,\n>\n> does anybody use OSDB benchmarks for postgres?\n> if not, which kind of bechmarks are used for postgres?\n>\n> Thanks,\n> Denis.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\n \nAt EnterpriseDB we make extensive use of the OSDB's OLTP Benchmark. We also use the Java based benchamrk called BenchmarkSQL from SourceForge. Both of these benchmarks are update intensive OLTP tests that closely mimic the Traqnsaction Processing COuncil's TPC-C benchmark.\n\n \nPostgres also ships with pg_bench, which is a simpler OLTP benchmark that I believe is similar to a TPC-B.\n \n--Denis Lussier\n CTO\n http://www.enterprisedb.com \nOn 7/21/06, Petronenko D.S. <[email protected]> wrote:\nHello,does anybody use OSDB benchmarks for postgres?if not, which kind of bechmarks are used for postgres?\nThanks,Denis.---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to \[email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Sun, 23 Jul 2006 19:47:25 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres benchmarks"
}
] |
[
{
"msg_contents": "I discussed this with a few members of #postgresql freenode this morning. I'll keep it breif; [note: i have cleaned out columns not relevant]\n\nI have two tables, brands and models_brands. The first has about 300 records, the later about 350,000 records. The number of distinct brands in the models_brands table is < 10.\n\n\n\n=# \\d models_brands\n Table \"public.models_brands\"\n Column | Type | Modifiers\n--------+-----------------------+-----------\n model | integer | not null\n brand | integer | not null\nIndexes:\n \"models_brands_brand\" btree (brand)\nForeign-key constraints:\n \"models_brands_brand_fkey\" FOREIGN KEY (brand) REFERENCES brands(brand_id) ON UPDATE CASCADE ON DELETE CASCADE\n \"models_brands_model_fkey\" FOREIGN KEY (model) REFERENCES models(model_id) ON UPDATE CASCADE ON DELETE CASCADE\n\na=# \\d brands;\n Table \"public.brands\"\n Column | Type | Modifiers\n------------+------------------------+-----------------------------------------------------------\n brand_id | integer | not null default nextval('brands_brand_id_seq'::regclass)\n brand_name | character varying(255) | not null\nIndexes:\n \"brands_pkey\" PRIMARY KEY, btree (brand_id)\n\n\nNow the plans/problems..\n\n=# set enable_seqscan to on;\nSET\n=# explain analyze select distinct brand from models_brands;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=46300.70..48148.15 rows=4 width=4) (actual time=3699.691..6215.216 rows=4 loops=1)\n -> Sort (cost=46300.70..47224.43 rows=369489 width=4) (actual time=3699.681..5027.069 rows=369489 loops=1)\n Sort Key: brand\n -> Seq Scan on models_brands (cost=0.00..6411.89 rows=369489 width=4) (actual time=0.040..1352.997 rows=369489 loops=1)\n Total runtime: 6223.666 ms\n(5 rows)\n\n=# set enable_seqscan to off;\nSET\n=# explain analyze select distinct brand from models_brands;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..863160.68 rows=4 width=4) (actual time=0.131..2584.779 rows=4 loops=1)\n -> Index Scan using models_brands_brand on models_brands (cost=0.00..862236.96 rows=369489 width=4) (actual time=0.122..1440.809 rows=369489 loops=1)\n Total runtime: 2584.871 ms\n(3 rows)\n\n\nPicks the wrong plan here. Should pick the index with seqscanning enabled.\n\n\nMore (as a different wording/query)... 
(as suggested by others on irc)\n\n\n=# set enable_seqscan to on;\nSET\n=# explain analyze select brand_id from brands where exists (select 1 from models_brands where brand = brands.brand_id);\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on brands (cost=0.00..30.09 rows=152 width=4) (actual time=7742.460..62567.543 rows=4 loops=1)\n Filter: (subplan)\n SubPlan\n -> Seq Scan on models_brands (cost=0.00..7335.61 rows=92372 width=0) (actual time=206.467..206.467 rows=0 loops=303)\n Filter: (brand = $0)\n Total runtime: 62567.626 ms\n\na=# set enable_seqscan to off;\nSET\n=# explain analyze select brand_id from brands where exists (select 1 from models_brands where brand = brands.brand_id);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on brands (cost=100000000.00..100000715.90 rows=152 width=4) (actual time=0.615..3.710 rows=4 loops=1)\n Filter: (subplan)\n SubPlan\n -> Index Scan using models_brands_brand on models_brands (cost=0.00..216410.97 rows=92372 width=0) (actual time=0.008..0.008 rows=0 loops=303)\n Index Cond: (brand = $0)\n Total runtime: 3.790 ms\n\n\nIt was also tried to similar results with a LIMIT 1 in the subquery for exist.\n\nMore...\n\nSeqscan still off..\n\n\n=# explain analyze select distinct brand_id from brands inner join models_brands on (brand_id = brand);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=0.00..867782.58 rows=303 width=4) (actual time=0.391..4898.579 rows=4 loops=1)\n -> Merge Join (cost=0.00..866858.85 rows=369489 width=4) (actual time=0.383..3749.771 rows=369489 loops=1)\n Merge Cond: (\"outer\".brand_id = \"inner\".brand)\n -> Index Scan using brands_pkey on brands (cost=0.00..15.53 rows=303 width=4) (actual time=0.080..0.299 rows=60 loops=1)\n -> Index Scan using models_brands_brand on models_brands (cost=0.00..862236.96 rows=369489 width=4) (actual time=0.013..1403.175 rows=369489 loops=1)\n Total runtime: 4898.697 ms\n\n=# set enable_seqscan to on;\nSET\n=# explain analyze select distinct brand_id from brands inner join models_brands on (brand_id = brand);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=46300.70..52770.04 rows=303 width=4) (actual time=3742.046..8560.833 rows=4 loops=1)\n -> Merge Join (cost=46300.70..51846.32 rows=369489 width=4) (actual time=3742.035..7406.677 rows=369489 loops=1)\n Merge Cond: (\"outer\".brand_id = \"inner\".brand)\n -> Index Scan using brands_pkey on brands (cost=0.00..15.53 rows=303 width=4) (actual time=0.077..0.407 rows=60 loops=1)\n -> Sort (cost=46300.70..47224.43 rows=369489 width=4) (actual time=3741.584..5051.348 rows=369489 loops=1)\n Sort Key: models_brands.brand\n -> Seq Scan on models_brands (cost=0.00..6411.89 rows=369489 width=4) (actual time=0.027..1346.178 rows=369489 loops=1)\n Total runtime: 8589.502 ms\n(8 rows)\n\n\nHope that helps\n\nKevin McArthur\n",
"msg_date": "Fri, 21 Jul 2006 12:15:28 -0600",
"msg_from": "\"Kevin McArthur\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad Planner Statistics for Uneven distribution."
},
{
"msg_contents": "\"Kevin McArthur\" <[email protected]> writes:\n> -> Seq Scan on models_brands (cost=0.00..6411.89 rows=369489 width=4) (actual time=0.040..1352.997 rows=369489 loops=1)\n> ...\n> -> Index Scan using models_brands_brand on models_brands (cost=0.00..862236.96 rows=369489 width=4) (actual time=0.122..1440.809 rows=369489 loops=1)\n\n> Picks the wrong plan here. Should pick the index with seqscanning enabled.\n\nIt's really not possible for a full-table indexscan to be faster than a\nseqscan, and not very credible for it even to be approximately as fast.\nI suspect your second query here is the beneficiary of the first query\nhaving fetched all the pages into cache. In general, if you want to\noptimize for a mostly-cached database, you need to reduce\nrandom_page_cost below its default value ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jul 2006 17:29:53 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad Planner Statistics for Uneven distribution. "
},
{
"msg_contents": "Tom,\n\nOn 7/21/06, Tom Lane <[email protected]> wrote:\n> It's really not possible for a full-table indexscan to be faster than a\n> seqscan, and not very credible for it even to be approximately as fast.\n> I suspect your second query here is the beneficiary of the first query\n> having fetched all the pages into cache. In general, if you want to\n> optimize for a mostly-cached database, you need to reduce\n> random_page_cost below its default value ...\n\nWe discussed this case on IRC and the problem was not the first set of\nqueries but the second one:\nselect brand_id from brands where exists (select 1 from models_brands\nwhere brand = brands.brand_id);).\n\nIsn't there any way to make PostgreSQL have a better estimation here:\n-> Index Scan using models_brands_brand on models_brands\n(cost=0.00..216410.97 rows=92372 width=0) (actual time=0.008..0.008\nrows=0 loops=303)\n Index Cond: (brand = $0)\n\nI suppose it's because the planner estimates that there will be 92372\nresult rows that it chooses the seqscan instead of the index scan.\nALTER STATISTICS didn't change anything.\nIIRC, there were already a few threads about the same sort of\nestimation problem and there wasn't any solution to solve this\nproblem. Do you have any hint/ideas?\n\n--\nGuillaume\n",
"msg_date": "Sat, 22 Jul 2006 00:00:01 +0200",
"msg_from": "\"Guillaume Smet\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad Planner Statistics for Uneven distribution."
},
{
"msg_contents": "\"Guillaume Smet\" <[email protected]> writes:\n> Isn't there any way to make PostgreSQL have a better estimation here:\n> -> Index Scan using models_brands_brand on models_brands\n> (cost=0.00..216410.97 rows=92372 width=0) (actual time=0.008..0.008\n> rows=0 loops=303)\n> Index Cond: (brand = $0)\n\nNote that the above plan extract is pretty misleading, because it\ndoesn't account for the implicit \"LIMIT 1\" of an EXISTS() clause.\nWhat the planner is *actually* imputing to this plan is 216410.97/92372\ncost units, or about 2.34. However that applies to the seqscan variant\nas well.\n\nI think the real issue with Kevin's example is that when doing an\nEXISTS() on a brand_id that doesn't actually exist in the table, the\nseqscan plan has worst-case behavior (ie, scan the whole table) while\nthe indexscan plan still manages to be cheap. Because his brands table\nhas so many brand_ids that aren't in the table, that case dominates the\nresults. Not sure how we could factor that risk into the cost\nestimates. The EXISTS code could probably special-case it reasonably\nwell for the simplest seqscan and indexscan subplans, but I don't see\nwhat to do with more general subqueries (like joins).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Jul 2006 13:03:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad Planner Statistics for Uneven distribution. "
}
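One rewrite that was not tried in the thread, offered only as a sketch: folding the probe into an IN subquery gives the planner the option of scanning models_brands once and hashing the handful of distinct brand values, rather than running a per-brand EXISTS subplan that degenerates into a full scan for brand_ids that never occur. Whether the 8.1 planner actually chooses a hashed subplan here would need to be confirmed with EXPLAIN ANALYZE.

SELECT brand_id
FROM brands
WHERE brand_id IN (SELECT brand FROM models_brands);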
] |
[
{
"msg_contents": "I have a case where I am partitioning tables based on a date range in \nversion 8.1.4. For example:\n\ntable_with_millions_of_records\ninteraction_id char(16) primary key\nstart_date timestamp (without timezone) - indexed\n.. other columns\n\nchild_1 start_date >= 2006-07-21 00:00:00\nchild_2 start_date >= 2006-07-20 00:00:00 and start_date < 2006-07-21 \n00:00:00\n...\nchild_5 start_date >= 2006-07-17 00:00:00 and start_date < 2006-07-18 \n00:00:00\n\nwith rules on the parent and child tables that redirect the data to the \nappropriate child table based on the start_date.\n\nBecause this table is going to grow very large (very quickly), and will \nneed to be purged daily, I created partitions, or child tables to hold \ndata for each day. I have done the same thing in Oracle in the past, and \nthe PostgreSQL solution works great. The archival process is very simple \n- drop the expired child table. I am having one problem.\n\nIf I run a query on the full table (there are 5 child tables with data \nfor the last 5 days), and my where clause contains data for the current \nday only:\nwhere start_date > date_trunc('day', now())\nall 5 child tables are scanned when I look at the output from explain \nanalyze.\n\nMy question is - can I force the planner to only scan the relevant child \ntable - when the key related to the partitioned data it part of the \nwhere clause?\n\nThanks,\n\nKevin\n\n\n...\n",
"msg_date": "Fri, 21 Jul 2006 15:17:38 -0400",
"msg_from": "Kevin Keith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioned tables in queries"
},
{
"msg_contents": "\nOn Jul 21, 2006, at 12:17 PM, Kevin Keith wrote:\n\n> I have a case where I am partitioning tables based on a date range \n> in version 8.1.4. For example:\n>\n> table_with_millions_of_records\n> interaction_id char(16) primary key\n> start_date timestamp (without timezone) - indexed\n> .. other columns\n>\n> child_1 start_date >= 2006-07-21 00:00:00\n> child_2 start_date >= 2006-07-20 00:00:00 and start_date < \n> 2006-07-21 00:00:00\n> ...\n> child_5 start_date >= 2006-07-17 00:00:00 and start_date < \n> 2006-07-18 00:00:00\n>\n> with rules on the parent and child tables that redirect the data to \n> the appropriate child table based on the start_date.\n>\n> Because this table is going to grow very large (very quickly), and \n> will need to be purged daily, I created partitions, or child tables \n> to hold data for each day. I have done the same thing in Oracle in \n> the past, and the PostgreSQL solution works great. The archival \n> process is very simple - drop the expired child table. I am having \n> one problem.\n>\n> If I run a query on the full table (there are 5 child tables with \n> data for the last 5 days), and my where clause contains data for \n> the current day only:\n> where start_date > date_trunc('day', now())\n> all 5 child tables are scanned when I look at the output from \n> explain analyze.\n>\n> My question is - can I force the planner to only scan the relevant \n> child table - when the key related to the partitioned data it part \n> of the where clause?\n\nYes. You'll need non-overlapping check constraints in each child \ntable and to set constraint_exclusion to \"on\" in postgresql.conf.\n\nSee http://www.postgresql.org/docs/8.1/static/ddl-partitioning.html \nfor the gory details.\n\nCheers,\n Steve\n",
"msg_date": "Fri, 21 Jul 2006 12:34:34 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned tables in queries"
},
{
"msg_contents": "My post might have been a little premature - and I apologize for that.\n\nI have figured out what was causing the problem:\n1. Constraint exclusion was disabled. I re-enabled.\n2. I found that using the now() function - and arbitrary interval will \nproduce a different execution plan that using a specific date. For example:\n assuming the current time is 16:00:\n a) where start_date > now() - interval '4 hours' scans all child tables.\n b) where start_date > '2006-07-21 12:00:00' only scans the child \ntable with today's data.\n\nSo am I to assume that the value in the query must be a constant, and \ncannot be a result of a built-in function in order for \nconstraint_exclusion to work correctly?\n\nThanks,\n\nKevin\n\n\nKevin Keith wrote:\n> I have a case where I am partitioning tables based on a date range in \n> version 8.1.4. For example:\n>\n> table_with_millions_of_records\n> interaction_id char(16) primary key\n> start_date timestamp (without timezone) - indexed\n> .. other columns\n>\n> child_1 start_date >= 2006-07-21 00:00:00\n> child_2 start_date >= 2006-07-20 00:00:00 and start_date < \n> 2006-07-21 00:00:00\n> ...\n> child_5 start_date >= 2006-07-17 00:00:00 and start_date < \n> 2006-07-18 00:00:00\n>\n> with rules on the parent and child tables that redirect the data to \n> the appropriate child table based on the start_date.\n>\n> Because this table is going to grow very large (very quickly), and \n> will need to be purged daily, I created partitions, or child tables to \n> hold data for each day. I have done the same thing in Oracle in the \n> past, and the PostgreSQL solution works great. The archival process is \n> very simple - drop the expired child table. I am having one problem.\n>\n> If I run a query on the full table (there are 5 child tables with data \n> for the last 5 days), and my where clause contains data for the \n> current day only:\n> where start_date > date_trunc('day', now())\n> all 5 child tables are scanned when I look at the output from explain \n> analyze.\n>\n> My question is - can I force the planner to only scan the relevant \n> child table - when the key related to the partitioned data it part of \n> the where clause?\n>\n> Thanks,\n>\n> Kevin\n>\n>\n> ...\n>\n\n",
"msg_date": "Fri, 21 Jul 2006 16:28:57 -0400",
"msg_from": "Kevin Keith <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioned tables in queries"
},
{
"msg_contents": "> 2. I found that using the now() function - and arbitrary interval will\n> produce a different execution plan that using a specific date. For example:\n> assuming the current time is 16:00:\n> a) where start_date > now() - interval '4 hours' scans all child tables.\n> b) where start_date > '2006-07-21 12:00:00' only scans the child\n> table with today's data.\n>\n> So am I to assume that the value in the query must be a constant, and\n> cannot be a result of a built-in function in order for\n> constraint_exclusion to work correctly?\n\nHave you tried WHERE start_date > (SELECT now() - interval '4 hours')?\nCertainly using the constant will allow CBE to work. I think that a\nsubquery might too.\n\n",
"msg_date": "25 Jul 2006 13:48:28 -0700",
"msg_from": "\"Andrew Hammond\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned tables in queries"
}
] |
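A minimal sketch of the daily-partition setup and the pruning behaviour discussed in this thread, using 8.1-style inheritance partitioning. The column names follow Kevin's example; the parent and child table names and the sample dates are illustrative only:

    CREATE TABLE calls (
        interaction_id char(16) PRIMARY KEY,
        start_date     timestamp NOT NULL
    );

    -- One child per day, with the non-overlapping CHECK constraint Steve refers to.
    CREATE TABLE calls_2006_07_21 (
        CHECK (start_date >= '2006-07-21 00:00:00'
           AND start_date <  '2006-07-22 00:00:00')
    ) INHERITS (calls);

    SET constraint_exclusion = on;

    -- Pruned to the one matching child: the comparison value is a constant
    -- the planner can test against each child's CHECK constraint.
    SELECT count(*) FROM calls WHERE start_date >= '2006-07-21 00:00:00';

    -- Scans every child on 8.1: now() is not a plan-time constant, which is
    -- exactly the behaviour Kevin observed.
    SELECT count(*) FROM calls WHERE start_date > now() - interval '4 hours';

The rules (or triggers) that route inserts to the right child are unaffected by any of this; constraint exclusion only changes how SELECTs against the parent are planned.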
[
{
"msg_contents": "Hello,\nSorry for my poor english,\n\nMy problem :\n\nI meet some performance problem during load increase.\n\nmassive update of 50.000.000 records and 2.000.000 insert with a weekly\nfrequency in a huge table (+50.000.000 records, ten fields, 12 Go on hard disk)\n\ncurrent performance obtained : 120 records / s\nAt the beginning, I got a better speed : 1400 records/s\n\n\nCPU : bi xeon 2.40GHz (cache de 512KB)\npostgresql version : 8.1.4\nOS : debian Linux sa 2.6.17-mm2\nHard disk scsi U320 with scsi card U160 on software RAID 1\nMemory : only 1 Go at this time.\n\n\nMy database contains less than ten tables. But the main table takes more than 12\nGo on harddisk. This table has got ten text records and two date records.\n\nI use few connection on this database.\n\nI try many ideas :\n- put severals thousands operations into transaction (with BEGIN and COMMIT)\n- modify parameters in postgres.conf like\n\tshared_buffers (several tests with 30000 50000 75000)\n\tfsync = off\n\tcheckpoint_segments = 10 (several tests with 20 - 30)\n\tcheckpoint_timeout = 1000 (30-1800)\n\tstats_start_collector = off\n\n\tunfortunately, I can't use another disk for pg_xlog file.\n\n\nBut I did not obtain a convincing result\n\n\n\nMy program does some resquest quite simple.\nIt does some\nUPDATE table set dat_update=current_date where id=XXXX ;\nAnd if not found\nid does some\ninsert into table\n\n\nMy sysadmin tells me write/read on hard disk aren't the pb (see with iostat)\n\n\nHave you got some idea to increase performance for my problem ?\n\nThanks.\n\nLarry.\n",
"msg_date": "Wed, 26 Jul 2006 17:34:47 +0200",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "loading increase into huge table with 50.000.000 records"
},
{
"msg_contents": "Hi Larry,\n\nDo you run vacuum and analyze frequently?\nDid you check PowerPostgresql.com for hints about PostgreSQL tuning?\n<http://www.powerpostgresql.com/Docs/>\n\nYou can increase wal_buffers, checkpoint_segments and checkpoint_timeout\nmuch higher.\n\nHere is a sample which works for me.\nwal_buffers = 128\ncheckpoint_segments = 256\ncheckpoint_timeout = 3600\n\nCheers\nSven.\n\n\[email protected] schrieb:\n> Hello,\n> Sorry for my poor english,\n> \n> My problem :\n> \n> I meet some performance problem during load increase.\n> \n> massive update of 50.000.000 records and 2.000.000 insert with a weekly\n> frequency in a huge table (+50.000.000 records, ten fields, 12 Go on hard disk)\n> \n> current performance obtained : 120 records / s\n> At the beginning, I got a better speed : 1400 records/s\n> \n> \n> CPU : bi xeon 2.40GHz (cache de 512KB)\n> postgresql version : 8.1.4\n> OS : debian Linux sa 2.6.17-mm2\n> Hard disk scsi U320 with scsi card U160 on software RAID 1\n> Memory : only 1 Go at this time.\n> \n> \n> My database contains less than ten tables. But the main table takes more than 12\n> Go on harddisk. This table has got ten text records and two date records.\n> \n> I use few connection on this database.\n> \n> I try many ideas :\n> - put severals thousands operations into transaction (with BEGIN and COMMIT)\n> - modify parameters in postgres.conf like\n> \tshared_buffers (several tests with 30000 50000 75000)\n> \tfsync = off\n> \tcheckpoint_segments = 10 (several tests with 20 - 30)\n> \tcheckpoint_timeout = 1000 (30-1800)\n> \tstats_start_collector = off\n> \n> \tunfortunately, I can't use another disk for pg_xlog file.\n> \n> \n> But I did not obtain a convincing result\n> \n> \n> \n> My program does some resquest quite simple.\n> It does some\n> UPDATE table set dat_update=current_date where id=XXXX ;\n> And if not found\n> id does some\n> insert into table\n> \n> \n> My sysadmin tells me write/read on hard disk aren't the pb (see with iostat)\n> \n> \n> Have you got some idea to increase performance for my problem ?\n> \n> Thanks.\n> \n> Larry.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n\n-- \n/This email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. If you are not the intended recipient, you should not\ncopy it, re-transmit it, use it or disclose its contents, but should\nreturn it to the sender immediately and delete your copy from your\nsystem. Thank you for your cooperation./\n\nSven Geisler <[email protected]> Tel +49.30.5362.1627 Fax .1638\nSenior Developer, AEC/communications GmbH Berlin, Germany\n",
"msg_date": "Wed, 26 Jul 2006 18:02:27 +0200",
"msg_from": "Sven Geisler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: loading increase into huge table with 50.000.000 records"
},
{
"msg_contents": "Hi, Larry,\nHi, Sven,\n\nSven Geisler wrote:\n\n> You can increase wal_buffers, checkpoint_segments and checkpoint_timeout\n> much higher.\n\nYou also should increase the free space map settings, it must be large\nenough to cope with your weekly bunch.\n\n\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 26 Jul 2006 18:39:39 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: loading increase into huge table with 50.000.000 records"
},
{
"msg_contents": "On 7/26/06, [email protected] <[email protected]> wrote:\n> Hello,\n> Sorry for my poor english,\n>\n> My problem :\n>\n> I meet some performance problem during load increase.\n\n>\n> massive update of 50.000.000 records and 2.000.000 insert with a weekly\n> frequency in a huge table (+50.000.000 records, ten fields, 12 Go on hard disk)\n>\n> current performance obtained : 120 records / s\n> At the beginning, I got a better speed : 1400 records/s\n>\n>\n> CPU : bi xeon 2.40GHz (cache de 512KB)\n> postgresql version : 8.1.4\n> OS : debian Linux sa 2.6.17-mm2\n> Hard disk scsi U320 with scsi card U160 on software RAID 1\n> Memory : only 1 Go at this time.\n>\n>\n> My database contains less than ten tables. But the main table takes more than 12\n> Go on harddisk. This table has got ten text records and two date records.\n>\n> I use few connection on this database.\n>\n> I try many ideas :\n> - put severals thousands operations into transaction (with BEGIN and COMMIT)\n> - modify parameters in postgres.conf like\n> shared_buffers (several tests with 30000 50000 75000)\n> fsync = off\n> checkpoint_segments = 10 (several tests with 20 - 30)\n> checkpoint_timeout = 1000 (30-1800)\n> stats_start_collector = off\n>\n> unfortunately, I can't use another disk for pg_xlog file.\n>\n>\n> But I did not obtain a convincing result\n>\n>\n>\n> My program does some resquest quite simple.\n> It does some\n> UPDATE table set dat_update=current_date where id=XXXX ;\n> And if not found\n> id does some\n> insert into table\n>\n>\n> My sysadmin tells me write/read on hard disk aren't the pb (see with iostat)\n\nyour sysadmin is probably wrong. random query across 50m table on\nmachine with 1gb memory is going to cause alot of seeking. take a\nlook at your pg data folder and you will see it is much larger than\n1gb. a lookup of a cached tuple via a cached index might take 0.2ms,\nand might take 200ms if it has to completely to disk on a 50m table.\nnormal reality is somehwere in between depending on various factors.\nmy guess is that as you add more memory, the time will drift from the\nslow case (120/sec) to the fast case (1400/sec).\n\nyou may consider the following alternative:\n1. bulk load your 2m update set into scratch table via copy interface\n2. update table set dat_update=current_date where table.id=scratch.id\n3. insert into table select [...], current_date where not exists\n(select id from table where table.id = scratch.id);\n\nyou may experiment with boolean form of #3, using 'except' also.\nwhile running these monster queries definately crank up work mem in\nexpesnse of shared buffers.\n\nmerlin\n\n\nim am guessing you are bottlenecked at the lookup, not the update. so, if\n",
"msg_date": "Wed, 26 Jul 2006 12:56:21 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: loading increase into huge table with 50.000.000 records"
}
] |
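Merlin's three-step alternative translates into roughly the following SQL. This is a sketch under assumptions: the target table name, its payload column and the file path are placeholders, and only the id and dat_update columns come from Larry's description:

    -- 1. Bulk-load the weekly batch into a scratch table via the COPY interface.
    CREATE TEMP TABLE scratch (id integer, payload text);
    COPY scratch FROM '/tmp/weekly_batch.csv' WITH CSV;

    -- 2. One set-based UPDATE instead of millions of single-row updates.
    UPDATE big_table
    SET    dat_update = current_date
    FROM   scratch
    WHERE  big_table.id = scratch.id;

    -- 3. Insert only the rows that matched nothing in step 2.
    INSERT INTO big_table (id, payload, dat_update)
    SELECT s.id, s.payload, current_date
    FROM   scratch s
    WHERE  NOT EXISTS (SELECT 1 FROM big_table b WHERE b.id = s.id);

As Merlin says, it is worth raising work_mem for the session while these run so that the join and the anti-join can work in memory.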
[
{
"msg_contents": "Hi all,\n\n I execute the following query on postgresql 8.1.0:\n\nSELECT\n u.telephone_number\n , u.telecom_operator_id\n , u.name\nFROM\n campanas_subcampaign AS sub\n , agenda_users AS u\n , agenda_users_groups ug\nWHERE\n sub.customer_app_config_id = 19362\n AND sub.subcampaign_id = 9723\n AND ug.agenda_user_group_id >= sub.ini_user_group_id\n AND ug.user_id=u.user_id\n AND ug.group_id IN ( SELECT group_id FROM campanas_groups WHERE \ncustomer_app_config_id = 19362 )\n ORDER BY ug.agenda_user_group_id ASC LIMIT 150\n\nthe explain analyze shouts the following:\n\n \n\n Limit (cost=1.20..4600.56 rows=150 width=74) (actual \ntime=76516.312..76853.191 rows=150 loops=1)\n -> Nested Loop (cost=1.20..333424.31 rows=10874 width=74) (actual \ntime=76516.307..76852.896 rows=150 loops=1)\n -> Nested Loop (cost=1.20..299653.89 rows=10874 width=20) \n(actual time=76506.926..76512.608 rows=150 loops=1)\n Join Filter: (\"outer\".agenda_user_group_id >= \n\"inner\".ini_user_group_id)\n -> Nested Loop IN Join (cost=1.20..189802.77 \nrows=32623 width=20) (actual time=75938.659..76353.748 rows=16200 loops=1)\n Join Filter: (\"outer\".group_id = \"inner\".group_id)\n -> Index Scan using pk_agndusrgrp_usergroup on \nagenda_users_groups ug (cost=0.00..123740.26 rows=2936058 width=30) \n(actual time=0.101..61921.260 rows=2836638 loops=1)\n -> Materialize (cost=1.20..1.21 rows=1 width=10) \n(actual time=0.001..0.002 rows=1 loops=2836638)\n -> Seq Scan on campanas_groups \n(cost=0.00..1.20 rows=1 width=10) (actual time=0.052..0.053 rows=1 loops=1)\n Filter: (customer_app_config_id = \n19362::numeric)\n -> Index Scan using pk_cmpnssubc_subcmpnid on \ncampanas_subcampaign sub (cost=0.00..3.35 rows=1 width=8) (actual \ntime=0.005..0.006 rows=1 loops=16200)\n Index Cond: (subcampaign_id = 9723)\n Filter: (customer_app_config_id = 19362::numeric)\n -> Index Scan using pk_agenda_uid on agenda_users u \n(cost=0.00..3.09 rows=1 width=78) (actual time=2.262..2.264 rows=1 \nloops=150)\n Index Cond: (\"outer\".user_id = u.user_id)\n Total runtime: 76853.504 ms\n(16 rows)\n\n\n\nDo you think I could do anything to speed it up?\n\n\nCheers!!\n-- \nArnau\n",
"msg_date": "Wed, 26 Jul 2006 19:01:33 +0200",
"msg_from": "Arnau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Is it possible to speed this query up?"
},
{
"msg_contents": "Arnau <[email protected]> writes:\n> the explain analyze shouts the following:\n\nThe expensive part appears to be this indexscan:\n\n> -> Index Scan using pk_agndusrgrp_usergroup on \n> agenda_users_groups ug (cost=0.00..123740.26 rows=2936058 width=30) \n> (actual time=0.101..61921.260 rows=2836638 loops=1)\n\nSince there's no index condition, the planner is evidently using this\nscan just to obtain sort order. I think ordinarily it would use a\nseqscan and then sort the final result, which'd be a lot faster if the\nwhole result were being selected. But you have a LIMIT and it's\nmistakenly guessing that only a small part of the table will need to be\nscanned before the LIMIT is satisfied.\n\nBottom line: try dropping the LIMIT. If you really need the limit to be\nenforced on the SQL side, you could try declaring the query as a cursor\nand only fetching 150 rows from it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 26 Jul 2006 16:24:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Is it possible to speed this query up? "
}
] |
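Tom's cursor suggestion, spelled out against the query from the thread; only the cursor name is new:

    BEGIN;
    DECLARE first_150 CURSOR FOR
        SELECT u.telephone_number, u.telecom_operator_id, u.name
        FROM campanas_subcampaign AS sub, agenda_users AS u, agenda_users_groups ug
        WHERE sub.customer_app_config_id = 19362
          AND sub.subcampaign_id = 9723
          AND ug.agenda_user_group_id >= sub.ini_user_group_id
          AND ug.user_id = u.user_id
          AND ug.group_id IN (SELECT group_id FROM campanas_groups
                              WHERE customer_app_config_id = 19362)
        ORDER BY ug.agenda_user_group_id ASC;   -- note: no LIMIT clause
    FETCH 150 FROM first_150;
    CLOSE first_150;
    COMMIT;

The 150-row cap is still enforced on the SQL side, but the query itself no longer carries the LIMIT that misled the planner into the full-table index scan.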
[
{
"msg_contents": "Hi!\n\nI hope I'm sending my question to the right list, please don't flame if it's\nthe wrong one.\n\nI have noticed that while a query runs in about 1.5seconds on a 8.xx version\npostgresql server on our 7.4.13 it takes around 15-20 minutes. Since we are\nusing RHEL4 on our server we are stuck with 7.4.13. The enormous time\ndifference between the different builds drives me crazy. Can you please help\nme identifying the bottleneck or suggest anything to improve the dismal\nperformance.\nThe query is the following:\n\nSelect\n car_license_plate.license_plate,\n substr(date_trunc('day', car_km_fuel.transaction_time), 1, 10),\n substr(date_trunc('second', car_km_fuel.transaction_time), 12, 8),\n vehicle_make.make,\n vehicle_type.model,\n engine_size,\n vehicle_fuel_type.fuel_type,\n v_org_person_displayname.displayname_lastfirst,\n car_km_fuel.ammount,\n car_km_fuel.unit_price,\n car_km_fuel.total_ammount,\n currency.currency AS,\n car_km_fuel.km AS,\n vehicle_specific.fuel_capacity,\n CASE WHEN (car_km_fuel.ammount > vehicle_specific.fuel_capacity) THEN\nCAST(ROUND(CAST(car_km_fuel.ammount - vehicle_specific.fuel_capacity AS\nNUMERIC), 2) AS varchar) ELSE '---' END AS \"over\",\n car_km_fuel.notes,\nCASE WHEN (prev_car_km_fuel.km IS NOT NULL AND car_km_fuel.km IS NOT NULL\nAND (car_km_fuel.km - prev_car_km_fuel.km <> 0)) THEN\n CAST(Round(CAST(((car_km_fuel.ammount / (car_km_fuel.km -\nprev_car_km_fuel.km)) * 100) AS Numeric), 2) AS VARCHAR)\n WHEN (prev_car_km_fuel.km IS NULL) THEN 'xxxx'\n WHEN (car_km_fuel.km IS NULL) THEN 'error' END AS \"average\",\n vehicle_specific.consumption_town,\n org_person.email_address\n\nFROM\n car_km_fuel\n\nLEFT JOIN\n car ON car.id = car_km_fuel.car_id\n\nLEFT JOIN\n car_license_plate ON car_license_plate.car_id = car.id AND\n (car_license_plate.license_plate_end_date < date_trunc('day',\ncar_km_fuel.transaction_time) OR car_license_plate.license_plate_end_date IS\nNULL)\nLEFT JOIN\n vehicle_specific ON vehicle_specific.id = car.vehicle_specific_id\n\nLEFT JOIN\n vehicle_variant ON vehicle_variant.id =\nvehicle_specific.vehicle_variant_id\n\nLEFT JOIN\n vehicle_type ON vehicle_type.id = vehicle_variant.vehicle_type_id\n\nLEFT JOIN\n vehicle_make ON vehicle_make.id = vehicle_type.vehicle_make_id\n\nLEFT JOIN\n vehicle_fuel_type ON vehicle_fuel_type.id = vehicle_specific.fuel_type_id\n\nLEFT JOIN\n car_driver ON car_driver.car_id = car.id AND\n car_driver.allocation_date <= date_trunc('day',\ncar_km_fuel.transaction_time) AND\n (car_driver.end_date >= date_trunc('day',\ncar_km_fuel.transaction_time) OR car_driver.end_date IS NULL)\n\nLEFT JOIN\n v_org_person_displayname ON v_org_person_displayname.id =\ncar_driver.car_driver_id\n\nLEFT JOIN\n org_person ON org_person.id = v_org_person_displayname.id\n\nLEFT JOIN\n currency ON currency.id = car_km_fuel.currency_id\n\nLEFT JOIN\n car_km_fuel AS prev_car_km_fuel ON\n prev_car_km_fuel.transaction_time = (SELECT MAX(transaction_time) FROM\ncar_km_fuel as car_km_fuel2 WHERE car_km_fuel2.car_id = car.id AND\ncar_km_fuel2.transaction_time < car_km_fuel.transaction_time)\n\nLEFT JOIN\n org_company ON org_company.id = org_person.company_id\n\nWHERE\n (lower(org_company.name) LIKE lower(:param3) || '%') AND\n (car_km_fuel.transaction_time >= :param1 OR :param1 IS NULL) AND\n (car_km_fuel.transaction_time <= :param2 OR :param2 IS NULL)\n\nORDER BY\n 1, 2, 3;\n\n The output of explain if the following under 7.4.13:\n\n\n\n QUERY 
PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=66.66..66.66 rows=1 width=917)\n Sort Key: car_license_plate.license_plate,\nsubstr((date_trunc('day'::text, car_km_fuel.transaction_time))::text,\n1, 10), substr((date_trunc('second'::text,\ncar_km_fuel.transaction_time))::text, 12, 8)\n -> Nested Loop (cost=44.93..66.65 rows=1 width=917)\n -> Nested Loop Left Join (cost=44.93..62.23 rows=1 width=921)\n Join Filter: (\"inner\".transaction_time = (subplan))\n -> Nested Loop Left Join (cost=44.93..62.21 rows=1 width=917)\n Join Filter: (\"inner\".id = \"outer\".currency_id)\n -> Nested Loop (cost=44.93..60.92 rows=1 width=828)\n -> Hash Join (cost=44.93..58.32 rows=1 width=805)\n Hash Cond: (\"outer\".id =\n\"inner\".car_driver_id)\n -> Subquery Scan\nv_org_person_displayname (cost=16.42..28.82 rows=196 width=520)\n -> Merge Right Join\n(cost=16.42..26.86 rows=196 width=51)\n Merge Cond: (\"outer\".id\n= \"inner\".company_id)\n -> Index Scan using\npk_org_company on org_company co (cost=0.00..29.82 rows=47 width=27)\n -> Sort\n(cost=16.42..16.91 rows=196 width=28)\n Sort Key: pers.company_id\n -> Seq Scan on\norg_person pers (cost=0.00..8.96 rows=196 width=28)\n -> Hash (cost=28.51..28.51 rows=1 width=285)\n -> Hash Join\n(cost=19.81..28.51 rows=1 width=285)\n Hash Cond:\n(\"outer\".car_id = \"inner\".car_id)\n Join Filter:\n(((\"outer\".allocation_date)::timestamp without time zone <=\ndate_trunc('day'::text, \"inner\".transaction_time)) AND\n(((\"outer\".end_date)::timestamp without time zone >=\ndate_trunc('day'::text, \"inner\".transaction_time)) OR\n(\"outer\".end_date IS NULL)))\n -> Seq Scan on\ncar_driver (cost=0.00..7.73 rows=173 width=16)\n -> Hash\n(cost=19.80..19.80 rows=4 width=285)\n -> Hash Left Join\n(cost=19.53..19.80 rows=4 width=285)\n Hash Cond:\n(\"outer\".fuel_type_id = \"inner\".id)\n -> Hash Left\nJoin (cost=18.50..18.72 rows=4 width=279)\n Hash\nCond: (\"outer\".vehicle_make_id = \"inner\".id)\n ->\nMerge Left Join (cost=17.38..17.53 rows=3 width=274)\n\nMerge Cond: (\"outer\".vehicle_type_id = \"inner\".id)\n\n-> Sort (cost=15.67..15.67 rows=2 width=265)\n\n Sort Key: vehicle_variant.vehicle_type_id\n\n -> Nested Loop Left Join (cost=0.00..15.66 rows=2 width=265)\n\n Join Filter: (\"inner\".id = \"outer\".vehicle_variant_id)\n\n -> Nested Loop Left Join (cost=0.00..13.83 rows=1\nwidth=265)\n\n Join Filter: (\"inner\".id =\n\"outer\".vehicle_specific_id)\n\n -> Nested Loop Left Join (cost=0.00..10.50 rows=1\nwidth=234)\n\n Join Filter:\n(((\"inner\".license_plate_end_date)::timestamp without time zone <\ndate_trunc('day'::text, \"outer\".transaction_time)) OR\n(\"inner\".license_plate_end_date IS NULL))\n\n -> Nested Loop (cost=0.00..4.83 rows=1\nwidth=224)\n\n -> Seq Scan on car_km_fuel\n(cost=0.00..0.00 rows=1 width=216)\n\n Filter: (((transaction_time >=\n'2005-01-01 00:00:00'::timestamp without time zone) OR (now() IS\nNULL)) AND (((transaction_time)::timestamp with time zone <= now()) OR\n(now() IS NULL)))\n\n -> Index Scan using pk_car on car\n(cost=0.00..4.82 rows=1 width=8)\n\n Index Cond: (car.id =\n\"outer\".car_id)\n\n -> Index Scan using\nix_car_license_plate__car_id on car_license_plate (cost=0.00..5.65\nrows=1 width=18)\n\n Index Cond: (car_license_plate.car_id 
=\n\"outer\".id)\n\n -> Seq Scan on vehicle_specific (cost=0.00..2.59\nrows=59 width=39)\n\n -> Seq Scan on vehicle_variant (cost=0.00..1.37 rows=37\nwidth=8)\n\n-> Sort (cost=1.71..1.77 rows=22 width=17)\n\n Sort Key: vehicle_type.id\n\n -> Seq Scan on vehicle_type (cost=0.00..1.22 rows=22 width=17)\n ->\nHash (cost=1.10..1.10 rows=10 width=13)\n\n-> Seq Scan on vehicle_make (cost=0.00..1.10 rows=10 width=13)\n -> Hash\n(cost=1.02..1.02 rows=2 width=14)\n -> Seq\nScan on vehicle_fuel_type (cost=0.00..1.02 rows=2 width=14)\n -> Index Scan using pk_org_person on\norg_person (cost=0.00..2.59 rows=1 width=35)\n Index Cond: (org_person.id =\n\"outer\".car_driver_id)\n -> Seq Scan on currency (cost=0.00..1.13\nrows=13 width=97)\n -> Seq Scan on car_km_fuel prev_car_km_fuel\n(cost=0.00..0.00 rows=1 width=16)\n SubPlan\n -> Aggregate (cost=0.01..0.01 rows=1 width=8)\n -> Seq Scan on car_km_fuel car_km_fuel2\n(cost=0.00..0.00 rows=1 width=8)\n Filter: ((car_id = $0) AND (transaction_time < $1))\n -> Index Scan using pk_org_company on org_company\n(cost=0.00..4.36 rows=1 width=4)\n Index Cond: (org_company.id = \"outer\".company_id)\n Filter: (lower((name)::text) ~~ '%'::text)\n\n(64 rows)\n\n\nIf I leave off the where clause or run it on just a couple of recods, the\nresult is fine. Any ideas?\n\nRegards\neliott\n\nHi!I hope I'm sending my question to the right list, please don't flame if it's the wrong one.I have noticed that while a query runs in about 1.5seconds on a 8.xx version postgresql server on our 7.4.13 it takes around 15-20 minutes. Since we are using RHEL4 on our server we are stuck with \n7.4.13. The enormous time difference between the different builds drives me crazy. Can you please help me identifying the bottleneck or suggest anything to improve the dismal performance.The query is the following:\nSelect car_license_plate.license_plate, substr(date_trunc('day', car_km_fuel.transaction_time), 1, 10), substr(date_trunc('second', car_km_fuel.transaction_time), 12, 8), vehicle_make.make, vehicle_type.model,\n engine_size, vehicle_fuel_type.fuel_type, v_org_person_displayname.displayname_lastfirst, car_km_fuel.ammount, car_km_fuel.unit_price, car_km_fuel.total_ammount, currency.currency AS,\n car_km_fuel.km AS, vehicle_specific.fuel_capacity, CASE WHEN (car_km_fuel.ammount > vehicle_specific.fuel_capacity) THEN CAST(ROUND(CAST(car_km_fuel.ammount - vehicle_specific.fuel_capacity AS NUMERIC), 2) AS varchar) ELSE '---' END AS \"over\",\n car_km_fuel.notes,CASE WHEN (prev_car_km_fuel.km IS NOT NULL AND car_km_fuel.km IS NOT NULL AND (car_km_fuel.km - prev_car_km_fuel.km <> 0)) THEN CAST(Round(CAST(((car_km_fuel.ammount / (car_km_fuel.km - prev_car_km_fuel.km)) * 100) AS Numeric), 2) AS VARCHAR)\n WHEN (prev_car_km_fuel.km IS NULL) THEN 'xxxx' WHEN (car_km_fuel.km IS NULL) THEN 'error' END AS \"average\", vehicle_specific.consumption_town, org_person.email_addressFROM\n car_km_fuelLEFT JOIN car ON car.id = car_km_fuel.car_idLEFT JOIN car_license_plate ON car_license_plate.car_id = car.id AND (car_license_plate.license_plate_end_date < date_trunc('day', car_km_fuel.transaction_time) OR car_license_plate.license_plate_end_date IS NULL)\nLEFT JOIN vehicle_specific ON vehicle_specific.id = car.vehicle_specific_idLEFT JOIN vehicle_variant ON vehicle_variant.id = vehicle_specific.vehicle_variant_idLEFT JOIN vehicle_type ON vehicle_type.id = vehicle_variant.vehicle_type_id\nLEFT JOIN vehicle_make ON vehicle_make.id = vehicle_type.vehicle_make_idLEFT JOIN vehicle_fuel_type ON vehicle_fuel_type.id 
= vehicle_specific.fuel_type_idLEFT JOIN car_driver ON car_driver.car_id = \ncar.id AND car_driver.allocation_date <= date_trunc('day', car_km_fuel.transaction_time) AND (car_driver.end_date >= date_trunc('day', car_km_fuel.transaction_time) OR car_driver.end_date IS NULL)\nLEFT JOIN v_org_person_displayname ON v_org_person_displayname.id = car_driver.car_driver_idLEFT JOIN org_person ON org_person.id = v_org_person_displayname.idLEFT JOIN currency ON \ncurrency.id = car_km_fuel.currency_idLEFT JOIN car_km_fuel AS prev_car_km_fuel ON prev_car_km_fuel.transaction_time = (SELECT MAX(transaction_time) FROM car_km_fuel as car_km_fuel2 WHERE car_km_fuel2.car_id = \ncar.id AND car_km_fuel2.transaction_time < car_km_fuel.transaction_time)LEFT JOIN org_company ON org_company.id = org_person.company_idWHERE (lower(org_company.name) LIKE lower(:param3) || '%') AND\n (car_km_fuel.transaction_time >= :param1 OR :param1 IS NULL) AND (car_km_fuel.transaction_time <= :param2 OR :param2 IS NULL) ORDER BY 1, 2, 3; The output of explain if the following under \n7.4.13: QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=66.66..66.66 rows=1 width=917) Sort Key: car_license_plate.license_plate, substr((date_trunc('day'::text, car_km_fuel.transaction_time))::text, 1, 10), substr((date_trunc('second'::text, car_km_fuel.transaction_time))::text, 12, 8)\n -> Nested Loop (cost=44.93..66.65 rows=1 width=917) -> Nested Loop Left Join (cost=44.93..62.23 rows=1 width=921) Join Filter: (\"inner\".transaction_time = (subplan))\n -> Nested Loop Left Join (cost=44.93..62.21 rows=1 width=917) Join Filter: (\"inner\".id = \"outer\".currency_id) -> Nested Loop (cost=\n44.93..60.92 rows=1 width=828) -> Hash Join (cost=44.93..58.32 rows=1 width=805) Hash Cond: (\"outer\".id = \"inner\".car_driver_id)\n -> Subquery Scan v_org_person_displayname (cost=16.42..28.82 rows=196 width=520) -> Merge Right Join (cost=16.42..26.86 rows=196 width=51)\n Merge Cond: (\"outer\".id = \"inner\".company_id) -> Index Scan using pk_org_company on org_company co (cost=\n0.00..29.82 rows=47 width=27) -> Sort (cost=16.42..16.91 rows=196 width=28) Sort Key: pers.company_id -> Seq Scan on org_person pers (cost=\n0.00..8.96 rows=196 width=28) -> Hash (cost=28.51..28.51 rows=1 width=285) -> Hash Join (cost=19.81..28.51 rows=1 width=285) Hash Cond: (\"outer\".car_id = \"inner\".car_id)\n Join Filter: (((\"outer\".allocation_date)::timestamp without time zone <= date_trunc('day'::text, \"inner\".transaction_time)) AND (((\"outer\".end_date)::timestamp without time zone >= date_trunc('day'::text, \"inner\".transaction_time)) OR (\"outer\".end_date IS NULL)))\n -> Seq Scan on car_driver (cost=0.00..7.73 rows=173 width=16) -> Hash (cost=19.80..19.80 rows=4 width=285) -> Hash Left Join (cost=\n19.53..19.80 rows=4 width=285) Hash Cond: (\"outer\".fuel_type_id = \"inner\".id) -> Hash Left Join (cost=\n18.50..18.72 rows=4 width=279) Hash Cond: (\"outer\".vehicle_make_id = \"inner\".id) -> Merge Left Join (cost=\n17.38..17.53 rows=3 width=274) Merge Cond: (\"outer\".vehicle_type_id = \"inner\".id) -> Sort (cost=\n15.67..15.67 rows=2 width=265) Sort Key: vehicle_variant.vehicle_type_id -> Nested Loop Left Join (cost=\n0.00..15.66 rows=2 width=265) Join Filter: (\"inner\".id = 
\"outer\".vehicle_variant_id) -> Nested Loop Left Join (cost=\n0.00..13.83 rows=1 width=265) Join Filter: (\"inner\".id = \"outer\".vehicle_specific_id) -> Nested Loop Left Join (cost=\n0.00..10.50 rows=1 width=234) Join Filter: (((\"inner\".license_plate_end_date)::timestamp without time zone < date_trunc('day'::text, \"outer\".transaction_time)) OR (\"inner\".license_plate_end_date IS NULL))\n -> Nested Loop (cost=0.00..4.83 rows=1 width=224) -> Seq Scan on car_km_fuel (cost=\n0.00..0.00 rows=1 width=216) Filter: (((transaction_time >= '2005-01-01 00:00:00'::timestamp without time zone) OR (now() IS NULL)) AND (((transaction_time)::timestamp with time zone <= now()) OR (now() IS NULL)))\n -> Index Scan using pk_car on car (cost=0.00..4.82 rows=1 width=8) Index Cond: (\ncar.id = \"outer\".car_id) -> Index Scan using ix_car_license_plate__car_id on car_license_plate (cost=\n0.00..5.65 rows=1 width=18) Index Cond: (car_license_plate.car_id = \"outer\".id) -> Seq Scan on vehicle_specific (cost=\n0.00..2.59 rows=59 width=39) -> Seq Scan on vehicle_variant (cost=0.00..1.37 rows=37 width=8) -> Sort (cost=\n1.71..1.77 rows=22 width=17) Sort Key: vehicle_type.id -> Seq Scan on vehicle_type (cost=\n0.00..1.22 rows=22 width=17) -> Hash (cost=1.10..1.10 rows=10 width=13) -> Seq Scan on vehicle_make (cost=\n0.00..1.10 rows=10 width=13) -> Hash (cost=1.02..1.02 rows=2 width=14) -> Seq Scan on vehicle_fuel_type (cost=\n0.00..1.02 rows=2 width=14) -> Index Scan using pk_org_person on org_person (cost=0.00..2.59 rows=1 width=35) Index Cond: (org_person.id = \"outer\".car_driver_id)\n -> Seq Scan on currency (cost=0.00..1.13 rows=13 width=97) -> Seq Scan on car_km_fuel prev_car_km_fuel (cost=0.00..0.00 rows=1 width=16) SubPlan -> Aggregate (cost=\n0.01..0.01 rows=1 width=8) -> Seq Scan on car_km_fuel car_km_fuel2 (cost=0.00..0.00 rows=1 width=8) Filter: ((car_id = $0) AND (transaction_time < $1))\n -> Index Scan using pk_org_company on org_company (cost=0.00..4.36 rows=1 width=4) Index Cond: (org_company.id = \"outer\".company_id) Filter: (lower((name)::text) ~~ '%'::text)\n(64 rows)If I leave off the where clause or run it on just a couple of recods, the result is fine. Any ideas?Regardseliott",
"msg_date": "Thu, 27 Jul 2006 16:23:28 +0200",
"msg_from": "Eliott <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance issue with a specific query"
},
{
"msg_contents": "On Thu, 2006-07-27 at 09:23, Eliott wrote:\n> Hi!\n> \n> I hope I'm sending my question to the right list, please don't flame\n> if it's the wrong one.\n> \n> I have noticed that while a query runs in about 1.5seconds on a 8.xx\n> version postgresql server on our 7.4.13 it takes around 15-20 minutes.\n> Since we are using RHEL4 on our server we are stuck with 7.4.13. The\n> enormous time difference between the different builds drives me crazy.\n> Can you please help me identifying the bottleneck or suggest anything\n> to improve the dismal performance.\n\nYou are absolutely on the right list. A couple of points.\n\n1: Which 8.xx? 8.0.x or 8.1.x? 8.1.x is literally light years ahead\nof 7.4 in terms of performance. 8.0 is somewhere between them. The\nperformance difference you're seeing is pretty common.\n\n2: Looking at your query, there are places where you're joining on\nthings like date_trunc(...). In 7.4 the database will not, and cannot\nuse a normal index on the date field for those kinds of things. It can,\nhowever, use a funtional index on some of them. Try creating an index\non date_trunc('day',yourfieldhere) and see if that helps.\n\n3: You are NOT Stuck on 7.4.13. I have a RHEL server that will be\nrunning 8.1.4 or so pretty soon as a dataware house. It may get updated\nto RHEL4, may not. You can either compile from the .tar.[gz|bz2] files\nor download the PGDG rpms for your distro.\n\n4: You are fighting an uphill battle. There were a LOT of improvements\nmade all over in the march from 7.4 to 8.1. Not all of them were simple\nplanner tweaks and shortcuts, but honest to goodness changes to the way\nthings happen. No amount of tuning can make 7.4 run as fast as 8.1.\n",
"msg_date": "Thu, 27 Jul 2006 09:46:26 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance issue with a specific query"
},
{
"msg_contents": "On 7/27/06, Eliott <[email protected]> wrote:\n> Hi!\n>\n> I hope I'm sending my question to the right list, please don't flame if it's\n> the wrong one.\n>\n> I have noticed that while a query runs in about 1.5seconds on a 8.xx version\n> postgresql server on our 7.4.13 it takes around 15-20 minutes. Since we are\n> using RHEL4 on our server we are stuck with 7.4.13. The enormous time\n> difference between the different builds drives me crazy. Can you please help\n> me identifying the bottleneck or suggest anything to improve the dismal\n> performance.\n> The query is the following:\n>\n\ntry turning off genetic query optimization. regarding the rhel4\nissue...does rhel not come with a c compiler? :)\n\nmerlin\n",
"msg_date": "Thu, 27 Jul 2006 10:52:31 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance issue with a specific query"
},
{
"msg_contents": ">>\n> \n> try turning off genetic query optimization. regarding the rhel4\n> issue...does rhel not come with a c compiler? :)\n\nEnterprises are not going to compile. They are going to accept the \nlatest support by vendor release.\n\nRedhat has a tendency to be incredibly stupid about this particular \narea of their packaging.\n\nJoshua D. Drake\n\n\n> \n> merlin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Thu, 27 Jul 2006 08:39:42 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance issue with a specific query"
},
{
"msg_contents": "Joshua D. Drake wrote:\n> >>\n> >\n> >try turning off genetic query optimization. regarding the rhel4\n> >issue...does rhel not come with a c compiler? :)\n> \n> Enterprises are not going to compile. They are going to accept the \n> latest support by vendor release.\n> \n> Redhat has a tendency to be incredibly stupid about this particular \n> area of their packaging.\n\nStupid how?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 27 Jul 2006 13:25:48 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance issue with a specific query"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Joshua D. Drake wrote:\n>> Enterprises are not going to compile. They are going to accept the \n>> latest support by vendor release.\n>> \n>> Redhat has a tendency to be incredibly stupid about this particular \n>> area of their packaging.\n\n> Stupid how?\n\nRed Hat feels (apparently accurately, judging by their subscription\nrevenue ;-)) that what RHEL customers want is a platform that's stable\nover multi-year application lifespans. So major incompatible changes in\nthe system software are not looked on with favor. That's why RHEL4\nis still shipping PG 7.4.*. You can call it a stupid policy if you\nlike, but it's hard to argue with success.\n\nHowever, there will be an RH-supported release of PG 8.1.* as an optional\nadd-on for RHEL4. Real Soon Now, I hope --- the release date has been\npushed back a couple times already.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Jul 2006 16:18:07 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance issue with a specific query "
},
{
"msg_contents": "Hi!\n\nthanks for the quick help\ni managed to reduce the response time from seemingly taking infinity to a\ntolerable level, which is by the way a huge improvement.\nWhat helped was the functional index on date_trunc('day',yourfieldhere) as\nScott suggested.\nI tried to disable the geqo, but it didn't make any noticeable difference.\nFor now, I am happy with the result, if the result set stays the same I can\nlive without an upgrade.\n\n\n> 1: Which 8.xx? 8.0.x or 8.1.x? 8.1.x is literally light years ahead\n> of 7.4 in terms of performance. 8.0 is somewhere between them. The\n> performance difference you're seeing is pretty common.\n\n\nThe benchmark was done on a small notebook running 8.1.4.1, versus 7.4.13 on\na 2gig P4 server. The difference is astounding, even without the functional\nindex 8.1 is 5-10 times faster than a full fledged server.\n\n3: You are NOT Stuck on 7.4.13. I have a RHEL server that will be\n> running 8.1.4 or so pretty soon as a dataware house. It may get updated\n> to RHEL4, may not. You can either compile from the .tar.[gz|bz2] files\n> or download the PGDG rpms for your distro.\n\n\nI know, but that's was I was trying to avoid. It is not that I would use the\nRHEL support provided for 7.4.13, but you know, staying official is the\nwhole point of subscribing to RHEL4.\n\nMoreover, since many of our other applications are running happily running\nunder 7.4, I would be afraid to upgrade the whole thing.\n\nSo, again, thanks everybody for the help, you saved the day for me.\n\nRegards\nEliott\n\nHi!thanks for the quick helpi managed to reduce the response time from seemingly taking infinity to a tolerable level, which is by the way a huge improvement. What helped was the functional index on date_trunc('day',yourfieldhere) as Scott suggested.\nI tried to disable the geqo, but it didn't make any noticeable difference. For now, I am happy with the result, if the result set stays the same I can live without an upgrade.\n1: Which 8.xx? 8.0.x or \n8.1.x? 8.1.x is literally light years aheadof 7.4 in terms of performance. 8.0 is somewhere between them. Theperformance difference you're seeing is pretty common.The benchmark was done on a small notebook running \n8.1.4.1, versus 7.4.13 on a 2gig P4 server. The difference is astounding, even without the functional index \n8.1 is 5-10 times faster than a full fledged server. 3: You are NOT Stuck on \n7.4.13. I have a RHEL server that will berunning 8.1.4 or so pretty soon as a dataware house. It may get updatedto RHEL4, may not. You can either compile from the .tar.[gz|bz2] files\nor download the PGDG rpms for your distro.I know, but that's was I was trying to avoid. It is not that I would use the RHEL support provided for 7.4.13, but you know, staying official is the whole point of subscribing to RHEL4. \nMoreover, since many of our other applications are running happily running under \n7.4, I would be afraid to upgrade the whole thing. So, again, thanks everybody for the help, you saved the day for me.RegardsEliott",
"msg_date": "Fri, 28 Jul 2006 12:04:01 +0200",
"msg_from": "Eliott <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: performance issue with a specific query"
}
] |
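The change that fixed it for Eliott, written out as DDL. One assumption to flag: this only works if transaction_time is timestamp without time zone, which the casts in the posted plan suggest; with timestamptz the date_trunc expression is not immutable and 7.4 will refuse to build the index. The index name is made up:

    CREATE INDEX car_km_fuel_trans_day_idx
        ON car_km_fuel (date_trunc('day', transaction_time));
    ANALYZE car_km_fuel;

This is the functional index Scott describes in his point 2; whether the planner actually uses it depends on the join order it picks, but it is what Eliott reports made the difference.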
[
{
"msg_contents": "Hello,\n \n My name is Hristo Markov. I am software developer. \n \n I am developing software systems (with C/C++ program language) that work on Windows operation system and uses ODBC driver and ACCESS database. I want to change database with PostgreSQL.\n \n The systems working without problems with PostgreSQL and ODBC, but the performance and speed of updating and reading of data are very low. \n \n I run the test program working on one single computer under Windows XP operating system and working with equal data (I use around 10 tables at the same time). The difference is only databases and ODBC drivers.\n \n \n The results from speed and performance of the test program are:\n \n Around 10 seconds under Access database.\nAround 40 seconds under PostgreSQL database.\n \nPlease help me to increase speed and performance of PostgreSQL. \n \n /I am freshman in PostgreSQL and I thing that may be must set some settings /\n \n Thank you in advance,\n \n Sincerely,\nHristo Markov\n \n\n \t\t\n---------------------------------\nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1�/min.\nHello, My name is Hristo Markov. I am software developer. I am developing software systems (with C/C++ program language) that work on Windows operation system and uses ODBC driver and ACCESS database. I want to change database with PostgreSQL. The systems working without problems with PostgreSQL and ODBC, but the performance and speed of updating and reading of data are very low. I run the test program working on one single computer under Windows XP operating system and working with equal data (I use around 10 tables at the same time). The difference is only databases and ODBC drivers. The results from speed and performance of the test program are: Around 10 seconds under Access database.Around 40 seconds under PostgreSQL database. Please help me to\n increase speed and performance of PostgreSQL. /I am freshman in PostgreSQL and I thing that may be must set some settings / Thank you in advance, Sincerely,Hristo Markov \nTalk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls. Great rates starting at 1�/min.",
"msg_date": "Thu, 27 Jul 2006 09:29:29 -0700 (PDT)",
"msg_from": "Hristo Markov <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to increase performance?"
},
{
"msg_contents": "Hristo Markov wrote:\n> Hello,\n> \n> My name is Hristo Markov. I am software developer. \n> I am developing software systems (with C/C++ program language) that work on Windows operation system and uses ODBC driver and ACCESS database. I want to change database with PostgreSQL.\n> The systems working without problems with PostgreSQL and ODBC, but the performance and speed of updating and reading of data are very low. \n> I run the test program working on one single computer under Windows XP operating system and working with equal data (I use around 10 tables at the same time). The difference is only databases and ODBC drivers.\n> \n> The results from speed and performance of the test program are:\n> Around 10 seconds under Access database.\n> Around 40 seconds under PostgreSQL database.\n> \n> Please help me to increase speed and performance of PostgreSQL. \n> /I am freshman in PostgreSQL and I thing that may be must set some settings /\n\nAre there specific queries you're having problems with? How many \ntransactions does this 40 seconds represent? What is the bottle-neck - \nCPU/disk/memory?\n\nYou might find this a good place to start reading about configuration \nsettings, and then follow that with the manuals.\nhttp://www.powerpostgresql.com/PerfList\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 31 Jul 2006 10:23:31 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to increase performance?"
}
] |
[
{
"msg_contents": "See Query 200x slower on server [PART 1] before reading any further\n\nQUERY PLAN ON MY HOME SERVER\nSort (cost=1516.55..1516.59 rows=15 width=640) (actual time=123.008..123.435 rows=1103 loops=1)\n Sort Key: aanmaakdatum\n -> Subquery Scan producttabel (cost=1515.39..1516.26 rows=15 width=640) (actual time=112.890..119.067 rows=1103 loops=1)\n -> Unique (cost=1515.39..1516.11 rows=15 width=834) (actual time=112.886..117.950 rows=1103 loops=1)\n InitPlan\n -> Index Scan using geg_winkel_pkey on geg_winkel (cost=0.00..5.44 rows=1 width=4) (actual time=0.022..0.023 rows=1 loops=1)\n Index Cond: (winkelid = 0)\n -> Index Scan using geg_winkel_pkey on geg_winkel (cost=0.00..5.44 rows=1 width=4) (actual time=0.004..0.005 rows=1 loops=1)\n Index Cond: (winkelid = 0)\n -> Group (cost=1504.51..1505.18 rows=15 width=834) (actual time=112.880..115.682 rows=1136 loops=1)\n -> Sort (cost=1504.51..1504.55 rows=15 width=834) (actual time=112.874..113.255 rows=1137 loops=1)\n Sort Key: p.productid, p.serienummer, p.artikelnaam, p.inkoopprijs, p.vasteverkoopprijs, gegw.winkelid, gegw.winkelnaam, gegw.winkelnaamnl, gegw.winkelnaamenkelvoud, gegw.winkelnaamenkelvoudnl, defg.genrenaam, defg.genrenaamnl, p. (..)\n -> Hash Join (cost=925.74..1504.22 rows=15 width=834) (actual time=34.143..107.937 rows=1137 loops=1)\n Hash Cond: (\"outer\".leverancierid = \"inner\".leverancierid)\n -> Nested Loop (cost=924.29..1502.54 rows=15 width=829) (actual time=34.041..105.706 rows=1137 loops=1)\n -> Hash Join (cost=924.29..1399.67 rows=20 width=829) (actual time=32.698..71.780 rows=3852 loops=1)\n Hash Cond: (\"outer\".winkelid = \"inner\".winkelid)\n -> Hash Left Join (cost=918.33..1373.61 rows=3981 width=249) (actual time=31.997..64.938 rows=3852 loops=1)\n Hash Cond: (\"outer\".genreid = \"inner\".genreid)\n -> Hash Left Join (cost=917.14..1312.71 rows=3981 width=117) (actual time=31.946..60.961 rows=3852 loops=1)\n Hash Cond: (\"outer\".onderwerpid = \"inner\".onderwerpid)\n -> Hash Left Join (cost=904.72..1240.57 rows=3981 width=117) (actual time=31.104..56.264 rows=3852 loops=1)\n Hash Cond: (\"outer\".onderwerpid = \"inner\".onderwerpid)\n -> Merge Right Join (cost=890.28..1166.42 rows=3981 width=101) (actual time=29.938..50.406 rows=3852 loops=1)\n Merge Cond: (\"outer\".productid = \"inner\".productid)\n -> Index Scan using koppel_product_onderwerp_pkey on koppel_product_onderwerp kpo (cost=0.00..216.34 rows=5983 width=8) (actual time=0.011..8.537 rows=5965 loops=1)\n -> Sort (cost=890.28..900.23 rows=3981 width=97) (actual time=29.918..31.509 rows=3852 loops=1)\n Sort Key: p.productid\n -> Seq Scan on product p (cost=0.00..652.24 rows=3981 width=97) (actual time=0.012..18.012 rows=3819 loops=1)\n Filter: (afdelingid = 1)\n -> Hash (cost=12.75..12.75 rows=675 width=20) (actual time=1.119..1.119 rows=675 loops=1)\n -> Seq Scan on geg_onderwerp gego (cost=0.00..12.75 rows=675 width=20) (actual time=0.010..0.598 rows=675 loops=1)\n -> Hash (cost=10.74..10.74 rows=674 width=8) (actual time=0.822..0.822 rows=674 loops=1)\n -> Seq Scan on koppel_onderwerp_genre kog (cost=0.00..10.74 rows=674 width=8) (actual time=0.010..0.423 rows=674 loops=1)\n -> Hash (cost=1.15..1.15 rows=15 width=140) (actual time=0.033..0.033 rows=15 loops=1)\n -> Seq Scan on geg_genre defg (cost=0.00..1.15 rows=15 width=140) (actual time=0.004..0.017 rows=15 loops=1)\n -> Hash (cost=5.96..5.96 rows=1 width=584) (actual time=0.682..0.682 rows=197 loops=1)\n -> Seq Scan on geg_winkel gegw (cost=0.00..5.96 rows=1 width=584) (actual 
time=0.042..0.390 rows=197 loops=1)\n Filter: ((lft >= $0) AND (lft <= $1))\n -> Index Scan using product_eigenschap_key on product_eigenschap pe (cost=0.00..5.13 rows=1 width=4) (actual time=0.006..0.007 rows=0 loops=3852)\n Index Cond: (\"outer\".productid = pe.productid)\n Filter: (stocktypeid < 3)\n -> Hash (cost=1.36..1.36 rows=36 width=13) (actual time=0.081..0.081 rows=36 loops=1)\n -> Seq Scan on geg_leverancier dl (cost=0.00..1.36 rows=36 width=13) (actual time=0.010..0.042 rows=36 loops=1)\nTotal runtime: 125.432 ms\n\n\nThis means that the Query is 200 times slower on the webhost!\n\nHow can I resolve this?",
"msg_date": "Thu, 27 Jul 2006 18:45:03 +0200",
"msg_from": "\"NbForYou\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query 200x slower on server [PART 2]"
},
{
"msg_contents": "NbForYou wrote:\n> See Query 200x slower on server [PART 1] before reading any further\n\nCant' find it. Sorry.\n\n> \n> QUERY PLAN ON MY HOME SERVER\n[snip]\n> Total runtime: 125.432 ms\n> \n> This means that the Query is 200 times slower on the webhost!\n> \n> How can I resolve this?\n\nFirst - what is different between the two plans and why? PostgreSQL will \nbe choosing a different plan because:\n1. It's estimating different numbers of rows for one or more steps\n2. It's estimating a different cost for one or more steps\n3. It's triggering the genetic optimiser which means you're not \nnecessarily going to get the same plan each time.\n4. You've got different versions of PG on the different machines.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 31 Jul 2006 10:26:55 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query 200x slower on server [PART 2]"
}
] |
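Richard's checklist can be worked through with a handful of stock commands on each server; nothing here is specific to NbForYou's schema:

    SELECT version();               -- item 4: are both servers on the same release?
    SHOW default_statistics_target; -- influences the row estimates behind item 1
    SHOW random_page_cost;          -- one of the cost knobs behind item 2
    SHOW geqo;                      -- item 3: is the genetic optimizer enabled?
    SHOW geqo_threshold;            -- ...and from how many relations it kicks in
    ANALYZE;                        -- refresh statistics, then compare EXPLAIN ANALYZE again

Differences in any of these between the home server and the webhost are the usual reasons the same query gets two very different plans.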
[
{
"msg_contents": "All,\n\nI support a system that runs on several databases including PostgreSQL.\nI've noticed that the other DB's always put an implicit savepoint before\neach statement executed, and roll back to that savepoint if the\nstatement fails for some reason. PG does not, so unless you manually\nspecify a savepoint you lose all previous work in the transaction.\n\nSo my question is, how expensive is setting a savepoint in PG? If it's\nnot too expensive, I'm wondering if it would be feasible to add a config\nparameter to psql or other client interfaces (thinking specifically of\njdbc here) to do it automatically. Doing so would make it a little\neasier to work with PG in a multi-db environment.\n\nMy main reason for wanting this is so that I can more easily import,\nsay, 50 new 'objects' (related rows stored across several tables) in a\ntransaction instead of only one at a time without fear that an error in\none object would invalidate the whole batch. I could do this now by\nmanually setting savepoints, but if it's not a big deal performance-wise\nto modify the JDBC driver to start an anonymous savepoint with each\nstatement, then I'd prefer that approach as it seems that it would make\nlife easier for other folks too.\n\nThanks in advance for any feedback :)\n\n-- Mark Lewis\n",
"msg_date": "Thu, 27 Jul 2006 10:19:15 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Savepoint performance"
},
{
"msg_contents": "Mark Lewis wrote:\n\n> So my question is, how expensive is setting a savepoint in PG? If it's\n> not too expensive, I'm wondering if it would be feasible to add a config\n> parameter to psql or other client interfaces (thinking specifically of\n> jdbc here) to do it automatically. Doing so would make it a little\n> easier to work with PG in a multi-db environment.\n\nIt is moderately expensive. It's cheaper than starting/committing a\ntransaction, but certainly much more expensive than not setting a\nsavepoint.\n\nIn psql you can do what you want using \\set ON_ERROR_ROLLBACK on. This\nis clearly a client-only issue, so the server does not provide any\nspecial support for it (just like autocommit mode).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Thu, 27 Jul 2006 15:35:20 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Savepoint performance"
},
{
"msg_contents": "On 7/27/06, Mark Lewis <[email protected]> wrote:\n> All,\n>\n> I support a system that runs on several databases including PostgreSQL.\n> I've noticed that the other DB's always put an implicit savepoint before\n> each statement executed, and roll back to that savepoint if the\n> statement fails for some reason. PG does not, so unless you manually\n> specify a savepoint you lose all previous work in the transaction.\n>\n\nyou're talking about transactions not savepoints (savepoints is\nsomething more like nested transactions), i guess...\n\npostgres execute every single statement inside an implicit transaction\nunless you put BEGIN/COMMIT between a block of statements... in that\ncase if an error occurs the entire block of statements must\nROLLBACK...\n\nif other db's doesn't do that, is a bug in their implementation of the\nSQL standard\n\n-- \nregards,\nJaime Casanova\n\n\"Programming today is a race between software engineers striving to\nbuild bigger and better idiot-proof programs and the universe trying\nto produce bigger and better idiots.\nSo far, the universe is winning.\"\n Richard Cook\n",
"msg_date": "Thu, 27 Jul 2006 19:52:24 -0500",
"msg_from": "\"Jaime Casanova\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Savepoint performance"
},
{
"msg_contents": "We've actually done some prelim benchmarking of this feature about six\nmonths ago and we are actively considering adding it to our \"closer to\nOracle\" version of PLpgSQL. I certainly don't want to suggest that it's a\ngood idea to do this because it's Oracle compatible. :-)\n\nI'll get someone to post our performance results on this thread. As Alvaro\ncorrectly alludes, it has an overhead impact that is measurable, but, likely\nacceptable for situations where the feature is desired (as long as it\ndoesn't negatively affect performance in the \"normal\" case). I believe the\nimpact was something around a 12% average slowdown for the handful of\nPLpgSQL functions we tested when this feature is turned on.\n\nWould the community be potentially interested in this feature if we created\na BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)??\n\n--Luss\n\nDenis Lussier\nCTO\nhttp://www.enterprisedb.com\n\n\nOn 7/27/06, Alvaro Herrera <[email protected]> wrote:\n>\n> Mark Lewis wrote:\n>\n> > So my question is, how expensive is setting a savepoint in PG? If it's\n> > not too expensive, I'm wondering if it would be feasible to add a config\n> > parameter to psql or other client interfaces (thinking specifically of\n> > jdbc here) to do it automatically. Doing so would make it a little\n> > easier to work with PG in a multi-db environment.\n>\n> It is moderately expensive. It's cheaper than starting/committing a\n> transaction, but certainly much more expensive than not setting a\n> savepoint.\n>\n> In psql you can do what you want using \\set ON_ERROR_ROLLBACK on. This\n> is clearly a client-only issue, so the server does not provide any\n> special support for it (just like autocommit mode).\n>\n> --\n> Alvaro Herrera\n> http://www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\nWe've actually done some prelim benchmarking of this feature about six months ago and we are actively considering adding it to our \"closer to Oracle\" version of PLpgSQL. I certainly don't want to suggest that it's a good idea to do this because it's Oracle compatible. :-)\n\n \nI'll get someone to post our performance results on this thread. As Alvaro correctly alludes, it has an overhead impact that is measurable, but, likely acceptable for situations where the feature is desired (as long as it doesn't negatively affect performance in the \"normal\" case). I believe the impact was something around a 12% average slowdown for the handful of PLpgSQL functions we tested when this feature is turned on.\n\n \nWould the community be potentially interested in this feature if we created a BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)??\n \n--Luss\n \nDenis Lussier\nCTO\nhttp://www.enterprisedb.com \nOn 7/27/06, Alvaro Herrera <[email protected]> wrote:\nMark Lewis wrote:> So my question is, how expensive is setting a savepoint in PG? If it's> not too expensive, I'm wondering if it would be feasible to add a config\n> parameter to psql or other client interfaces (thinking specifically of> jdbc here) to do it automatically. Doing so would make it a little> easier to work with PG in a multi-db environment.\nIt is moderately expensive. 
It's cheaper than starting/committing atransaction, but certainly much more expensive than not setting asavepoint.In psql you can do what you want using \\set ON_ERROR_ROLLBACK on. This\nis clearly a client-only issue, so the server does not provide anyspecial support for it (just like autocommit mode).--Alvaro Herrera \nhttp://www.CommandPrompt.com/PostgreSQL Replication, Consulting, Custom Development, 24x7 support---------------------------(end of broadcast)---------------------------TIP 5: don't forget to increase your free space map settings",
"msg_date": "Thu, 27 Jul 2006 21:13:05 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Savepoint performance"
},
{
"msg_contents": "\"Denis Lussier\" <[email protected]> writes:\n> Would the community be potentially interested in this feature if we created\n> a BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)??\n\nBased on our rather disastrous experiment in 7.3, I'd say that fooling\naround with transaction start/end semantics on the server side is\nunlikely to fly ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 27 Jul 2006 21:34:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Savepoint performance "
},
{
"msg_contents": "My understanding of EDB's approach is that our prototype just\nimplicitly does a savepoint before each INSERT, UPDATE, or DELETE\nstatement inside of PLpgSQL. We then rollback to that savepoint if a\nsql error occurs. I don 't believe our prelim approach changes any\ntransaction start/end semantics on the server side and it doesn't\nchange any PLpgSQL syntax either (although it does allow you to\noptionally code commits &/or rollbacks inside stored procs).\n\nCan anybody point me to a thread on the 7.3 disastrous experiment?\n\nI personally think that doing commit or rollbacks inside stored\nprocedures is usually bad coding practice AND can be avoided... It's\na backward compatibility thing for non-ansi legacy stuff and this is\nwhy I was previously guessing that the community wouldn't be\ninterested in this for PLpgSQL. Actually... does anybody know\noffhand if the ansi standard for stored procs allows for explicit\ntransaction control inside of a stored procedure?\n\n--Luss\n\nOn 7/27/06, Tom Lane <[email protected]> wrote:\n> \"Denis Lussier\" <[email protected]> writes:\n> > Would the community be potentially interested in this feature if we created\n> > a BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)??\n>\n> Based on our rather disastrous experiment in 7.3, I'd say that fooling\n> around with transaction start/end semantics on the server side is\n> unlikely to fly ...\n>\n> regards, tom lane\n>\n",
"msg_date": "Thu, 27 Jul 2006 22:33:38 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Savepoint performance"
},
{
"msg_contents": "Actually, what we did in the tests at EnterpriseDB was encapsulate each\nSQL statement within its own BEGIN/EXCEPTION/END block.\n\nUsing this approach, if a SQL statement aborts, the rollback is\nconfined \nto the BEGIN/END block that encloses it. Other SQL statements would\nnot be affected since the block would isolate and capture that\nexception.\n\nIn the tests, the base-line version was a PL/pgSQL function for the\ndbt-2 new order transaction written within a single BEGIN/END block.\nThe experimental version was a variation of the base-line altered so\nthe processing of each order entailed entering three sub-blocks from\nthe main BEGIN/END block. In addition, another sub-block was\nentered each time a detail line within an order was processed.\n\nThe transactions per minute were recorded for runs of 20 minutes\nsimulating 10 terminals and 6 hours simulating 10 terminals.\nBelow are some of the numbers we got:\n\n With Sub-\n Test # Base Line Blocks \nDifference % Variation\n -------- ------------ ----------- \n------------- --------------\n10 terminals, 1 6128 5861\n20 minutes 2 5700 5702\n 3 6143 5556\n 4 5954 5750\n 5 5695 5925\n\nAverage of tests 1 - 5 5924 5758.8 \n-165.2 -2.79\n\n10 terminals, 6 hours 5341 5396 \n55 1.03\n\nAs you can see, we didn't encounter a predictable, significant\ndifference.\n\nErnie Nishiseki, Architect\nEnterpriseDB Corporation wrote:\n\n>---------- Forwarded message ----------\n>From: Denis Lussier \n>Date: Jul 27, 2006 10:33 PM\n>Subject: Re: [PERFORM] Savepoint performance\n>To: Tom Lane \n>Cc: [email protected]\n>\n>\n>My understanding of EDB's approach is that our prototype just\n>implicitly does a savepoint before each INSERT, UPDATE, or DELETE\n>statement inside of PLpgSQL. We then rollback to that savepoint if a\n>sql error occurs. I don 't believe our prelim approach changes any\n>transaction start/end semantics on the server side and it doesn't\n>change any PLpgSQL syntax either (although it does allow you to\n>optionally code commits &/or rollbacks inside stored procs).\n>\n>Can anybody point me to a thread on the 7.3 disastrous experiment?\n>\n>I personally think that doing commit or rollbacks inside stored\n>procedures is usually bad coding practice AND can be avoided... It's\n>a backward compatibility thing for non-ansi legacy stuff and this is\n>why I was previously guessing that the community wouldn't be\n>interested in this for PLpgSQL. Actually... does anybody know\n>offhand if the ansi standard for stored procs allows for explicit\n>transaction control inside of a stored procedure?\n>\n>--Luss\n>\n>On 7/27/06, Tom Lane wrote:\n>>\"Denis Lussier\" writes:\n>>>Would the community be potentially interested in this feature if we\n>>>created\n>>>a BSD Postgres patch of this feature for PLpgSQL (likely for 8.3)??\n>>\n>>Based on our rather disastrous experiment in 7.3, I'd say that fooling\n>>around with transaction start/end semantics on the server side is\n>>unlikely to fly ...\n>>\n>>regards, tom lane\n>>\n>\n>---------------------------(end of\n>broadcast)---------------------------\n>TIP 4: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n>\n>\n>--\n>Jonah H. Harris, Software Architect | phone: 732.331.1300\n>EnterpriseDB Corporation | fax: 732.331.1301\n>33 Wood Ave S, 2nd Floor | [email protected]\n>Iselin, New Jersey 08830 | http://www.enterprisedb.com/\n\n",
"msg_date": "Tue, 1 Aug 2006 08:09:04 -0500 (CDT)",
"msg_from": "Ernest Nishiseki <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Savepoint performance"
}
] |
[
{
"msg_contents": "We are using a BI tool that generates some rather ugly queries. One of\nthe ugly queries is taking much longer to return thin I think it\nshould. \n\nThe select expression when run alone returns in 2 seconds with 35k rows\n(http://www.bowmansystems.com/~richard/explain_select.analyze) \n\nThe \"where clause\" when run alone returns 5200 rows in 10 seconds\n(http://www.bowmansystems.com/~richard/explain_where.analyze)\n\nHowever when I put to two together it takes much, much longer to run.\n(http://www.bowmansystems.com/~richard/full.analyze)\n\nCan anyone shed any light on what is going on here? Why does the\noptimizer choose such a slow plan in the combined query when the only\nreal difference between the full query and the \"where only\" query is the\nnumber of rows in the result set on the \"outside\" of the \"IN\" clause?\n\nA few pertinent observations/facts below\n\n1. The query is generated by a BI tool, I know it is ugly and stupid in\nmany cases. However, please try to see the larger issue, that if the\nselect and where portions are run separately they are both fast but\ntogether it is insanely slow.\n\n2. The database has vacuumdb -f -z run on it nightly.\n\n3. Modifications to the stock postgresql.conf:\nshared_buffers = 15000 \nwork_mem = 131072 \ndefault_statistics_target = 100\n\n4. Dual Dual core Opterons, 4 gigs of ram, 6 disk Serial ATA hardware\nRAID 10 running Postgres 8.03 compiled from source running on Debian\nstable.\n\n5. The tables being queried are only 200 megs or so combined on disk,\nthe whole DB is ~ 4 gigs\nSELECT sum(relpages*8/1024) AS size_M FROM pg_class;\n size_m\n--------\n 4178\n\nThanks!\n\n",
"msg_date": "Thu, 27 Jul 2006 17:37:02 -0400",
"msg_from": "Richard Rowell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange behaviour"
},
{
"msg_contents": "Richard Rowell <[email protected]> writes:\n> We are using a BI tool that generates some rather ugly queries. One of\n> the ugly queries is taking much longer to return thin I think it\n> should. \n> (http://www.bowmansystems.com/~richard/full.analyze)\n> Can anyone shed any light on what is going on here?\n\nSeems like you have some bad rowcount estimates leading to poor plan\nselection. Most of the problem looks to be coming from the FunctionScan\nnodes, wherein the planner doesn't have any real way to estimate how\nmany rows come out. You might look into whether you can replace those\nfunctions with views, so that the planner isn't dealing with \"black boxes\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 03 Aug 2006 12:39:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange behaviour "
}
] |
[
{
"msg_contents": "Hi,\n\nWe've a fedora core 3 box with PostgreSQL 8.0.\n\nThere is some performance problems with the server and I discovered with vmstat tool that there is some process writing a lot of information in the disk subsystem.\n\nI stopped the database and even so vmstat showed the same rates of disk writes.\n\nI could I discover who is sending so many data to the disks?\n\nThanks in advance, \n\nReimer\n\n\nHi,\n \nWe've a fedora core 3 box with PostgreSQL 8.0.\n \nThere is some performance problems with the server and I discovered with vmstat tool that there is some process writing a lot of information in the disk subsystem.\n \nI stopped the database and even so vmstat showed the same rates of disk writes.\n \nI could I discover who is sending so many data to the disks?\n \nThanks in advance, \n \nReimer",
"msg_date": "Thu, 27 Jul 2006 22:25:54 -0300",
"msg_from": "\"carlosreimer\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Disk writes"
},
{
"msg_contents": "> I could I discover who is sending so many data to the disks?\n\nDocumentation/laptop-mode.txt in the Linux kernel tree has some\ninstructions how to track down unwanted disk writes.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Fri, 28 Jul 2006 08:32:46 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk writes"
},
{
"msg_contents": "On Thu, 2006-07-27 at 20:25, carlosreimer wrote:\n> Hi,\n> \n> We've a fedora core 3 box with PostgreSQL 8.0.\n> \n> There is some performance problems with the server and I discovered\n> with vmstat tool that there is some process writing a lot of\n> information in the disk subsystem.\n> \n> I stopped the database and even so vmstat showed the same rates of\n> disk writes.\n> \n> I could I discover who is sending so many data to the disks?\n\nDoes top show any processes running?\n\nOn my FC4 laptop, the one that kept cranking up all the time was\nprelink. I don't really care if it takes an extra couple seconds for an\napp to open each time, so I disabled that.\n\nThe other process I've seen do this, on older flavors of linux mostly,\nis kswapd erroneously writing and reading the swap partition a lot. \nSeems to happen when the swap partition is smaller than physical memory,\nand there's a lot of other I/O going on. But I think that got fixed in\nthe 2.6 kernel tree.\n",
"msg_date": "Fri, 28 Jul 2006 09:03:07 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk writes"
},
{
"msg_contents": "Hi, Reimer,\n\ncarlosreimer wrote:\n\n> There is some performance problems with the server and I discovered with\n> vmstat tool that there is some process writing a lot of information in\n> the disk subsystem.\n[..]\n> I could I discover who is sending so many data to the disks?\n\nIt could be something triggered by your crontab (updatedb comes in my\nmind, or texpire from leafnode etc.).\n\nAnother idea would be that you have statement logging on, or something\nelse that produces lots of kernel or syslog messages[1], and your\nsyslogd is configured to sync() after every line...\n\nHTH,\nMarkus\n\n[1] We once had such a problem because an ill-compiled kernel having USB\nverbose logging on...\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 07 Aug 2006 13:51:23 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Disk writes"
}
] |
[
{
"msg_contents": "---------- Forwarded message ----------\nFrom: Kjell Tore Fossbakk <[email protected]>\nDate: Jul 26, 2006 8:55 AM\nSubject: Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200\nTo: [email protected]\n\nHello!\n\nI have upgraded my server to an HP Proliant DL585. It got two Processor\nmotherboards, each holding an AMD/Opteron 2.6Ghz and 4GIG of memory.\n\nI got 4 150GIG SCSI disks in a Smart Array 5i 1+0 RAID.\n\nI'v been using Postgres for a few years, and the reason for this major\nhardware upgrade is to boost my performance. I have created a small web\napplication that pulls huge amounts of data from my database.\n\nThe database consists of basically one table, but it's big. It got 10\ncolumns and a few indexes on the 3-4 most used fields (based on my queries\nof course). My queries use a few aggregated functions, such as sum and\ncount, which makes the data moving process time-consuming.\n\nNow, on my older server (2gig memory and probabably some 2ghz cpu) my\nqueries took quite a lot of time (30sec - several minutes). Now, with a much\nbetter hardware platform, I was hoping I could juice the process!\n\nAs I have understood, there is alot of tuning using both postgres.conf and\nanalyzing queries to make the values of postgres.conf fit my needs, system\nand hardware. This is where I need some help. I have looked into\npostgres.conf , and seen the tunings. But I'm still not sure what I should\nput into those variables (in postgres.conf) with my hardware.\n\nAny suggestions would be most appreciated!\n\n- Kjell Tore\n\n-- \n\"Be nice to people on your way up because you meet them on your way down.\"\n\n\n-- \n\"Be nice to people on your way up because you meet them on your way down.\"\n\n---------- Forwarded message ----------From: Kjell Tore Fossbakk <[email protected]>Date: Jul 26, 2006 8:55 AM\nSubject: Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200To: [email protected]!I have upgraded my server to an HP Proliant DL585. It got two Processor motherboards, each holding an AMD/Opteron \n2.6Ghz and 4GIG of memory.I got 4 150GIG SCSI disks in a Smart Array 5i 1+0 RAID.\nI'v been using Postgres for a few years, and the reason for this major hardware upgrade is to boost my performance. I have created a small web application that pulls huge amounts of data from my database.The database consists of basically one table, but it's big. It got 10 columns and a few indexes on the 3-4 most used fields (based on my queries of course). My queries use a few aggregated functions, such as sum and count, which makes the data moving process time-consuming.\nNow, on my older server (2gig memory and probabably some 2ghz cpu) my queries took quite a lot of time (30sec - several minutes). Now, with a much better hardware platform, I was hoping I could juice the process!\n\nAs I have understood, there is alot of tuning using both postgres.conf and analyzing queries to make the values of postgres.conf fit my needs, system and hardware. This is where I need some help. I have looked into postgres.conf\n\n, and seen the tunings. But I'm still not sure what I should put into those variables (in postgres.conf) with my hardware.Any suggestions would be most appreciated!- Kjell Tore\n-- \"Be nice to people on your way up because you meet them on your way down.\"\n\n-- \"Be nice to people on your way up because you meet them on your way down.\"",
"msg_date": "Fri, 28 Jul 2006 08:37:33 +0200",
"msg_from": "\"Kjell Tore Fossbakk\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200"
},
{
"msg_contents": "> As I have understood, there is alot of tuning using both postgres.conf and\n> analyzing queries to make the values of postgres.conf fit my needs, system\n> and hardware. This is where I need some help. I have looked into\n> postgres.conf , and seen the tunings. But I'm still not sure what I should\n> put into those variables (in postgres.conf) with my hardware.\n>\n> Any suggestions would be most appreciated!\n\nWhat OS is it running and what version is postgresql?\n\nregards\nClaus\n",
"msg_date": "Fri, 28 Jul 2006 14:51:11 +0200",
"msg_from": "\"Claus Guttesen\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200"
},
{
"msg_contents": "Hello.\n\nOS: Gentoo 2006.0 with gentoo's hardened kernel\nVersion: I haven't checked. Im guessing 8.0.8 (latest stable on all systems)\nor 8.1.4 which is the latest package.\n\nI'm still gonna try to run with smart array 5i. How can i find out that my\nperformance with that is crappy? Without ripping down my systems, and using\nsoftware raid?\n\nKjell Tore\n\nOn 7/28/06, Claus Guttesen <[email protected]> wrote:\n>\n> > As I have understood, there is alot of tuning using both postgres.confand\n> > analyzing queries to make the values of postgres.conf fit my needs,\n> system\n> > and hardware. This is where I need some help. I have looked into\n> > postgres.conf , and seen the tunings. But I'm still not sure what I\n> should\n> > put into those variables (in postgres.conf) with my hardware.\n> >\n> > Any suggestions would be most appreciated!\n>\n> What OS is it running and what version is postgresql?\n>\n> regards\n> Claus\n>\n\n\n\n-- \n\"Be nice to people on your way up because you meet them on your way down.\"\n\nHello.OS: Gentoo 2006.0 with gentoo's hardened kernelVersion: I haven't checked. Im guessing 8.0.8 (latest stable on all systems) or 8.1.4 which is the latest package.I'm still gonna try to run with smart array 5i. How can i find out that my performance with that is crappy? Without ripping down my systems, and using software raid?\nKjell ToreOn 7/28/06, Claus Guttesen <[email protected]> wrote:\n> As I have understood, there is alot of tuning using both postgres.conf and> analyzing queries to make the values of postgres.conf fit my needs, system> and hardware. This is where I need some help. I have looked into\n> postgres.conf , and seen the tunings. But I'm still not sure what I should> put into those variables (in postgres.conf) with my hardware.>> Any suggestions would be most appreciated!What OS is it running and what version is postgresql?\nregardsClaus-- \"Be nice to people on your way up because you meet them on your way down.\"",
"msg_date": "Sun, 30 Jul 2006 21:01:37 +0200",
"msg_from": "\"Kjell Tore Fossbakk\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200"
}
] |
[
{
"msg_contents": "Kjell,\n\n> I got 4 150GIG SCSI disks in a Smart Array 5i 1+0 RAID.\n\nThe Smart Array 5i is a terrible performer on Linux. I would be\nsurprised if you exceed the performance of a single hard drive with this\ncontroller when doing I/O from disk. Since your database working set is\nlarger than memory on the machine, I would recommend you use a simple\nnon-RAID U320 SCSI controller, like those from LSI Logic (which HP\nresells) and implement Linux software RAID. You should see a nearly 10x\nincrease in performance as compared to the SmartArray 5i.\n\nIf you have a good relationship with HP, please ask them for some\ndocumentation of RAID performance on Linux with the SmartArray 5i. I\npredict they will tell you what they've told me and others: \"the 5i is\nonly useful for booting the OS\". Alternately they could say: \"we have\nworld record performance with our RAID controllers\", in which case you\nshould ask them if that was with the 5i on Linux or whether it was the\n6-series on Windows.\n\n- Luke\n\n",
"msg_date": "Fri, 28 Jul 2006 02:54:40 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
}
] |
[
{
"msg_contents": "I would be interested in what numbers you would get out of bonnie++\n(http://www.coker.com.au/bonnie++) and BenchmarkSQL\n(http://sourceforge.net/projects/benchmarksql) on that hardware, for\ncomparison with our DL385 (2xOpteron 280, 16Gb ram) and MSA1500. If you\nneed help building benchmarksql, I can assist you with that.\n\nActually, I would be interested if everyone who's reading this that has\na similar machine (2 cpu, dual core opteron) with different storage\nsystems could send me their bonnie + benchmarksql results! \n\n/Mikael\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Luke\nLonergan\nSent: den 28 juli 2006 08:55\nTo: Kjell Tore Fossbakk; [email protected]\nSubject: Re: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig\n\nKjell,\n\n> I got 4 150GIG SCSI disks in a Smart Array 5i 1+0 RAID.\n\nThe Smart Array 5i is a terrible performer on Linux. I would be\nsurprised if you exceed the performance of a single hard drive with this\ncontroller when doing I/O from disk. Since your database working set is\nlarger than memory on the machine, I would recommend you use a simple\nnon-RAID U320 SCSI controller, like those from LSI Logic (which HP\nresells) and implement Linux software RAID. You should see a nearly 10x\nincrease in performance as compared to the SmartArray 5i.\n\nIf you have a good relationship with HP, please ask them for some\ndocumentation of RAID performance on Linux with the SmartArray 5i. I\npredict they will tell you what they've told me and others: \"the 5i is\nonly useful for booting the OS\". Alternately they could say: \"we have\nworld record performance with our RAID controllers\", in which case you\nshould ask them if that was with the 5i on Linux or whether it was the\n6-series on Windows.\n\n- Luke\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Fri, 28 Jul 2006 10:46:46 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Mikael Carneholm wrote:\n> I would be interested in what numbers you would get out of bonnie++\n> (http://www.coker.com.au/bonnie++) and BenchmarkSQL\n> (http://sourceforge.net/projects/benchmarksql) on that hardware, for\n> comparison with our DL385 (2xOpteron 280, 16Gb ram) and MSA1500. If you\n> need help building benchmarksql, I can assist you with that.\n> \n> Actually, I would be interested if everyone who's reading this that has\n> a similar machine (2 cpu, dual core opteron) with different storage\n> systems could send me their bonnie + benchmarksql results! \n> \n\nHere's the bonnie++ results from our Sun Fire V40z (2x Opteron 250, 4GB \nRAM) with 6 15krpm 73GB drives connected to an LSI MegaRAID 320-2X \ncontroller with 512MB cache. It's running Linux, and I'm using what \nseems to be a fairly typical 6-drive setup: 2 drives in RAID-1 for OS \nand WAL, and 4 drives in RAID-10 for data. This is from the 4-drive \nRAID-10 array:\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\ngaz 8G 56692 88 73061 12 33048 6 44994 64 132571 14 \n474.0 0\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n/sec %CP\n 16 19448 88 +++++ +++ 18611 72 19952 90 +++++ +++ \n15167 65\n\nThis system is actually in production currently, and while it's a rather \nquiet time at the moment, it still wasn't _entirely_ inactive when those \nnumbers were run, so the real performance is probably a little higher. \nI'll see if I can run some BenchmarkSQL numbers as well.\n\nThanks\nLeigh\n\n> /Mikael\n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Luke\n> Lonergan\n> Sent: den 28 juli 2006 08:55\n> To: Kjell Tore Fossbakk; [email protected]\n> Subject: Re: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig\n> \n> Kjell,\n> \n>> I got 4 150GIG SCSI disks in a Smart Array 5i 1+0 RAID.\n> \n> The Smart Array 5i is a terrible performer on Linux. I would be\n> surprised if you exceed the performance of a single hard drive with this\n> controller when doing I/O from disk. Since your database working set is\n> larger than memory on the machine, I would recommend you use a simple\n> non-RAID U320 SCSI controller, like those from LSI Logic (which HP\n> resells) and implement Linux software RAID. You should see a nearly 10x\n> increase in performance as compared to the SmartArray 5i.\n> \n> If you have a good relationship with HP, please ask them for some\n> documentation of RAID performance on Linux with the SmartArray 5i. I\n> predict they will tell you what they've told me and others: \"the 5i is\n> only useful for booting the OS\". 
Alternately they could say: \"we have\n> world record performance with our RAID controllers\", in which case you\n> should ask them if that was with the 5i on Linux or whether it was the\n> 6-series on Windows.\n> \n> - Luke\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n",
"msg_date": "Sat, 29 Jul 2006 00:17:37 +1000",
"msg_from": "Leigh Dyer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "> systems could send me their bonnie + benchmarksql results!\n\nI am one of the authors of BenchmarkSQL, it is similar to a DBT2. But, its\nvery easy to use (&/or abuse). It's a multithreaded Java Swing client that\ncan run the exact same benchmark (uses JDBC prepared statements) against\nPostgres/EnterpriseDB/Bizgres, MySQueeL, Horacle, Microsloth, etc, etc. You\ncan find BenchmarkSQL on pgFoundry and SourceForge.\n\nAs expected, Postgres is good on this benchmark and is getting better all\nthe time.\n\nIf you run an EnterpriseDB install right out of the box versus a PG install\nright out of the box you'll notice that EnterpriseDB outperforms PG by\nbetter than 2x. This does NOT mean that EnterpriseDB is 3x faster than\nPostgres... EnterpriseDB is the same speed as Postgres. We do something\nwe call \"Dynatune\" at db startup time. The algorithm is pretty simple in\nour current GA version and really only considers the amount of RAM, SHARED\nMemory, and machine usage pattern. Manual tuning is required to really\noptimize performance....\n\nFor great insight into the basics of quickly tuning PostgreSQL for a\nreasonable starting point, check out the great instructions offered by Josh\nBerkus and Joe Conway at http://www.powerpostgresql.com/PerfList/.\n\nThe moral of this unreasonably verbose email is that you shouldn't abuse\nBenchmarkSQL and measure runs without making sure that, at least,\nquick/simple best practices have been applied to tuning the db's you are\nchoosing to test.\n\n--Denis Lussier\n CTO\n http://www.enterprisedb.com\n\n\n\n> >\n>\n\n> systems could send me their bonnie + benchmarksql results! \nI am one of the authors of BenchmarkSQL, it is similar to a DBT2. But, its very easy to use (&/or abuse). It's a multithreaded Java Swing client that can run the exact same benchmark (uses JDBC prepared statements) against Postgres/EnterpriseDB/Bizgres, MySQueeL, Horacle, Microsloth, etc, etc. You can find BenchmarkSQL on pgFoundry and SourceForge.\n\n \nAs expected, Postgres is good on this benchmark and is getting better all the time.\n \nIf you run an EnterpriseDB install right out of the box versus a PG install right out of the box you'll notice that EnterpriseDB outperforms PG by better than 2x. This does NOT mean that EnterpriseDB is 3x faster than Postgres... EnterpriseDB is the same speed as Postgres. We do something we call \"Dynatune\" at db startup time. The algorithm is pretty simple in our current GA version and really only considers the amount of RAM, SHARED Memory, and machine usage pattern. Manual tuning is required to really optimize performance....\n\n \nFor great insight into the basics of quickly tuning PostgreSQL for a reasonable starting point, check out the great instructions offered by Josh Berkus and Joe Conway at \nhttp://www.powerpostgresql.com/PerfList/.\n \nThe moral of this unreasonably verbose email is that you shouldn't abuse BenchmarkSQL and measure runs without making sure that, at least, quick/simple best practices have been applied to tuning the db's you are choosing to test.\n\n \n--Denis Lussier\n CTO\n http://www.enterprisedb.com \n\n>",
"msg_date": "Sat, 29 Jul 2006 14:09:15 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Denis,\n\nOn 7/29/06 11:09 AM, \"Denis Lussier\" <[email protected]> wrote:\n\n> We do something we call \"Dynatune\" at db startup time.\n\nSounds great - where do we download it?\n\n- Luke\n\n\n",
"msg_date": "Sat, 29 Jul 2006 11:39:50 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Not sure that EnterpriseDB's Dynatune is the general purpose answer that the\nPG community has been searching to find. Actually, I think it could be,\nbut... the community process will decide.\n\nWe are presently planning to create a site that will be called\nhttp://gforge.enterprisedb.com that will be similar in spirit to BizGres.\nBy this I mean that we will be open sourcing many key small \"improvements\"\n(in the eye of the beholder) for PG that will potentially make it into PG\n(likely in some modified format) depending on the reactions and desires of\nthe general Postgres community.\n\nIn case anyone is wondering... NO, EnterpriseDB won't be open sourcing the\nlegacy Horacle stuff we've added to our product (at least not yet). This\nstuff is distributed under our Commercial Open Source license (similar to\nSugarCRM's). Our Commercial Open Source license simply means that if you\nbuy a Platinum Subscription to our product, then you can keep the source\ncode under your pillow and use it internally at your company however you see\nfit.\n\n--Denis Lussier\n CTO\n http://www.enterprisedb.com\n\n\nOn 7/29/06, Luke Lonergan <[email protected]> wrote:\n>\n> Denis,\n>\n> On 7/29/06 11:09 AM, \"Denis Lussier\" <[email protected]> wrote:\n>\n> > We do something we call \"Dynatune\" at db startup time.\n>\n> Sounds great - where do we download it?\n>\n> - Luke\n>\n>\n>\n\n \nNot sure that EnterpriseDB's Dynatune is the general purpose answer that the PG community has been searching to find. Actually, I think it could be, but... the community process will decide.\n \nWe are presently planning to create a site that will be called http://gforge.enterprisedb.com that will be similar in spirit to BizGres. By this I mean that we will be open sourcing many key small \"improvements\" (in the eye of the beholder) for PG that will potentially make it into PG (likely in some modified format) depending on the reactions and desires of the general Postgres community.\n\n \nIn case anyone is wondering... NO, EnterpriseDB won't be open sourcing the legacy Horacle stuff we've added to our product (at least not yet). This stuff is distributed under our Commercial Open Source license (similar to SugarCRM's). Our Commercial Open Source license simply means that if you buy a Platinum Subscription to our product, then you can keep the source code under your pillow and use it internally at your company however you see fit.\n\n \n--Denis Lussier\n CTO\n http://www.enterprisedb.com\n \nOn 7/29/06, Luke Lonergan <[email protected]> wrote:\nDenis,On 7/29/06 11:09 AM, \"Denis Lussier\" <\[email protected]> wrote:> We do something we call \"Dynatune\" at db startup time.Sounds great - where do we download it?- Luke",
"msg_date": "Sat, 29 Jul 2006 16:00:35 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
}
] |
[
{
"msg_contents": "Mikael, \n\n> -----Original Message-----\n> From: Mikael Carneholm [mailto:[email protected]] \n> Sent: Friday, July 28, 2006 1:47 AM\n>\n> I would be interested in what numbers you would get out of bonnie++\n> (http://www.coker.com.au/bonnie++) and BenchmarkSQL\n> (http://sourceforge.net/projects/benchmarksql) on that \n> hardware, for comparison with our DL385 (2xOpteron 280, 16Gb \n> ram) and MSA1500. If you need help building benchmarksql, I \n> can assist you with that.\n\nMe too. Can you post your MSA1500 results?\n\nThe MSA500/1000 come with two SmartArray 6402 controllers, but the RAID\nis done inside the MSA500/1000 chassis from what I understand. I have\nheard that the performance on Linux is pretty good, but I've not seen\nthe benchmarks to prove it. Bonnie++ is fine - should tell us what we\nneed to know.\n\nAlso, I am *very* interested in seeing what the P600 SAS controller\nresults look like when coupled with an MSA50 SAS chassis with 10 disks.\nThis is the new SAS controller that can be configured on the DL385 and\n585.\n\n- Luke\n\n",
"msg_date": "Fri, 28 Jul 2006 04:53:15 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
}
] |
[
{
"msg_contents": "Mikael, \n\n> -----Original Message-----\n> From: Mikael Carneholm [mailto:[email protected]] \n> Sent: Friday, July 28, 2006 2:05 AM\n>\n> My bonnie++ results are found in this message:\n> http://archives.postgresql.org/pgsql-performance/2006-07/msg00164.php\n> \n\nApologies if I've already said this, but those bonnie++ results are very\ndisappointing. The sequential transfer rates between 20MB/s and 57MB/s\nare slower than a single SATA disk, and your SCSI disks might even do\n80MB/s sequential transfer rate each.\n\nRandom access is also very poor, though perhaps equal to 5 disk drives\nat 500/second.\n\nBy comparison, we routinely get 950MB/s sequential transfer rate using\n16 SATA disks and 3Ware 9550SX SATA RAID adapters on Linux.\n\nOn Solaris ZFS on an X4500, we recently got this bonnie++ result on 36\nSATA disk drives in RAID10 (single thread first):\n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr-\n--Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n%CP /sec %CP\nthumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344\n94 1801 4\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++\n+++++ +++\n\nBumping up the number of concurrent processes to 2, we get about 1.5x\nspeed reads of RAID10 with a concurrent workload (you have to add the\nrates together): \n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n%CP /sec %CP\nthumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472\n88 1233 2\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++\n4381 97\n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr-\n--Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n%CP /sec %CP\nthumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030\n87 1274 3\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++\n4272 97\n\nSo that's 2500 seeks per second, 1440MB/s sequential block read, 212MB/s\nper character sequential read.\n\n- Luke\n\n",
"msg_date": "Fri, 28 Jul 2006 05:16:47 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "I too have a DL385 with a single DC Opteron 270.\nIt claims to have a smart array 6i controller and over the last \ncouple of days I've been runnign some tests on it, which have been \nyielding some suprising results.\n\nI've got 6 10k U320 disks in it. 2 are in a mirror set. We'll not \npay any attention to them.\nThe remaining 4 disks I've been toying with to see what config works \nbest, using hardware raid and software raid.\n\nsystem info:\ndl dl385 - 1 opteron 270 - 5GB ram - smart array 6i\ncciss0: HP Smart Array 6i Controller\nFirmware Version: 2.58\nLinux db03 2.6.17-1.2157_FC5 #1 SMP Tue Jul 11 22:53:56 EDT 2006 \nx86_64 x86_64 x86_64 GNU/Linux\nusing xfs\n\nEach drive can sustain 80MB/sec read (dd, straight off device)\n\nSo here are the results I have so far. (averaged)\n\n\nhardware raid 5:\ndd - write 20GB file - 48MB/sec\ndd - read 20GB file - 247MB/sec\n[ didn't do a bonnie run on this yet ]\npretty terrible write performance. good read.\n\nhardware raid 10\ndd - write 20GB - 104MB/sec\ndd - read 20GB - 196MB/sec\nbonnie++\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\ndb03 9592M 45830 97 129501 31 62981 14 48524 99 185818 \n19 949.0 1\n\nsoftware raid 5\ndd - write 20gb - 85MB/sec\ndd - read 20gb - 135MB/sec\n\nI was very suprised at those results. I was sort of expecting it to \nsmoke the hardware. I repeated the test many times, and kept getting\nthese numbers.\n\nbonnie++:\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\ndb03 9592M 44110 97 81481 23 34604 10 44495 95 157063 \n28 919.3 1\n\nsoftware 10:\ndd - write - 20GB - 108MB/sec\ndd - read - 20GB - 86MB/sec(!!!! WTF? - this is repeatable!!)\nbonnie++\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- -- \nBlock-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \nCP /sec %CP\ndb03 9592M 44539 98 105444 20 34127 8 39830 83 100374 \n10 1072 1\n\n\nso I'm going to be going with hw r5, which went against what I \nthought going in - read perf is more important for my usage than write.\n\nI'm still not sure about that software 10 read number. something is \nnot right there...\n\n--\nJeff Trout <[email protected]>\nhttp://www.dellsmartexitin.com/\nhttp://www.stuarthamm.net/\n\n\n\n",
"msg_date": "Fri, 28 Jul 2006 13:31:21 -0400",
"msg_from": "Jeff Trout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "This isn't all that surprising. The main weaknesses of RAID-5 are poor\nwrite performance and stupid hardware controllers that make the write\nperformance even worse than it needs to be. Your numbers bear that out.\nReads off RAID-5 are usually pretty good.\n\nYour 'dd' test is going to be a little misleading though. Most DB\naccess isn't usually purely sequential; while it's easy to see why HW\nRAID-5 might outperform HW-RAID-10 in large sequential reads (the RAID\ncontroller would need to be smarter than most to make RAID-10 as fast as\nRAID-5), I would expect that HW RAID-5 and RAID-10 random reads would be\nabout equal or else maybe give a slight edge to RAID-10. \n\n-- Mark Lewis\n\n\nOn Fri, 2006-07-28 at 13:31 -0400, Jeff Trout wrote:\n> I too have a DL385 with a single DC Opteron 270.\n> It claims to have a smart array 6i controller and over the last \n> couple of days I've been runnign some tests on it, which have been \n> yielding some suprising results.\n> \n> I've got 6 10k U320 disks in it. 2 are in a mirror set. We'll not \n> pay any attention to them.\n> The remaining 4 disks I've been toying with to see what config works \n> best, using hardware raid and software raid.\n> \n> system info:\n> dl dl385 - 1 opteron 270 - 5GB ram - smart array 6i\n> cciss0: HP Smart Array 6i Controller\n> Firmware Version: 2.58\n> Linux db03 2.6.17-1.2157_FC5 #1 SMP Tue Jul 11 22:53:56 EDT 2006 \n> x86_64 x86_64 x86_64 GNU/Linux\n> using xfs\n> \n> Each drive can sustain 80MB/sec read (dd, straight off device)\n> \n> So here are the results I have so far. (averaged)\n> \n> \n> hardware raid 5:\n> dd - write 20GB file - 48MB/sec\n> dd - read 20GB file - 247MB/sec\n> [ didn't do a bonnie run on this yet ]\n> pretty terrible write performance. good read.\n> \n> hardware raid 10\n> dd - write 20GB - 104MB/sec\n> dd - read 20GB - 196MB/sec\n> bonnie++\n> Version 1.03 ------Sequential Output------ --Sequential Input- \n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \n> CP /sec %CP\n> db03 9592M 45830 97 129501 31 62981 14 48524 99 185818 \n> 19 949.0 1\n> \n> software raid 5\n> dd - write 20gb - 85MB/sec\n> dd - read 20gb - 135MB/sec\n> \n> I was very suprised at those results. I was sort of expecting it to \n> smoke the hardware. I repeated the test many times, and kept getting\n> these numbers.\n> \n> bonnie++:\n> Version 1.03 ------Sequential Output------ --Sequential Input- \n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \n> CP /sec %CP\n> db03 9592M 44110 97 81481 23 34604 10 44495 95 157063 \n> 28 919.3 1\n> \n> software 10:\n> dd - write - 20GB - 108MB/sec\n> dd - read - 20GB - 86MB/sec(!!!! WTF? - this is repeatable!!)\n> bonnie++\n> Version 1.03 ------Sequential Output------ --Sequential Input- \n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- -- \n> Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec % \n> CP /sec %CP\n> db03 9592M 44539 98 105444 20 34127 8 39830 83 100374 \n> 10 1072 1\n> \n> \n> so I'm going to be going with hw r5, which went against what I \n> thought going in - read perf is more important for my usage than write.\n> \n> I'm still not sure about that software 10 read number. 
something is \n> not right there...\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.dellsmartexitin.com/\n> http://www.stuarthamm.net/\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n",
"msg_date": "Fri, 28 Jul 2006 11:01:54 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Jeff,\n\nOn 7/28/06 10:31 AM, \"Jeff Trout\" <[email protected]> wrote:\n\n> I'm still not sure about that software 10 read number. something is\n> not right there...\n\nIt's very consistent with what we've seen before - the hardware RAID\ncontroller doesn't do JBOD with SCSI command queuing like a simple SCSI\ncontroller would do. The Smart Array 6402 makes a very bad SCSI controller\nfor software RAID.\n\nThe hardware results look very good - seems like the 2.6.17 linux kernel has\na drastically improved CCISS driver as compared to what I've previously\nseen.\n\n- Luke \n\n\n",
"msg_date": "Fri, 28 Jul 2006 12:22:28 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
}
] |
[
{
"msg_contents": "Luke,\n\nYeah, I read those results, and I'm very disappointed with my results\nfrom the MSA1500. I would however be interested in other people's\nbonnie++ and benchmarksql results using a similar machine (2 cpu dual\ncore opteron) with other \"off the shelf\" storage systems\n(EMC/Netapp/Xyratex/../). Could you run benchmarksql against that\nmachine with the 16 SATA disk and 3Ware 9550SX SATA RAID adapters? It\nwould be *very* interesting to see how the I/O performance correlates to\nbenchmarksql (postgres) transaction throughout.\n\n/Mikael \n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: den 28 juli 2006 11:17\nTo: Mikael Carneholm; Kjell Tore Fossbakk;\[email protected]\nSubject: RE: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig\n\nMikael, \n\n> -----Original Message-----\n> From: Mikael Carneholm [mailto:[email protected]]\n> Sent: Friday, July 28, 2006 2:05 AM\n>\n> My bonnie++ results are found in this message:\n> http://archives.postgresql.org/pgsql-performance/2006-07/msg00164.php\n> \n\nApologies if I've already said this, but those bonnie++ results are very\ndisappointing. The sequential transfer rates between 20MB/s and 57MB/s\nare slower than a single SATA disk, and your SCSI disks might even do\n80MB/s sequential transfer rate each.\n\nRandom access is also very poor, though perhaps equal to 5 disk drives\nat 500/second.\n\nBy comparison, we routinely get 950MB/s sequential transfer rate using\n16 SATA disks and 3Ware 9550SX SATA RAID adapters on Linux.\n\nOn Solaris ZFS on an X4500, we recently got this bonnie++ result on 36\nSATA disk drives in RAID10 (single thread first):\n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr-\n--Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n%CP /sec %CP\nthumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344\n94 1801 4\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++\n+++++ +++\n\nBumping up the number of concurrent processes to 2, we get about 1.5x\nspeed reads of RAID10 with a concurrent workload (you have to add the\nrates together): \n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n%CP /sec %CP\nthumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472\n88 1233 2\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++\n4381 97\n\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr-\n--Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n%CP /sec %CP\nthumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030\n87 1274 3\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++\n4272 97\n\nSo that's 2500 seeks per second, 1440MB/s sequential block read, 212MB/s\nper 
character sequential read.\n\n- Luke\n\n\n",
"msg_date": "Fri, 28 Jul 2006 11:55:25 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Hello.\n\nUnfortunately, I'm leaving for my vacation now, gone 3 weeks. When I'm back\nI'll run benchmarksql and bonnie++ and give the results here.\n\nThe spec I will be using:\n\nProlite DL585\n2 x AMD/Opteron 64-bit 2,6GHZ\n8G DDR PC3200\n4 x 150G SCSI in SmartArray 5i\nRunning Gentoo 2006.0 AMD_64 Hardened kernel\n\nThen I will remove the SmartArray 5i, and use a simple nonRAID SCSI\ncontroller and implement Linux software RAID, and re-run the tests.\n\nI'll give signal in 3 weeks\n\n- Kjell Tore.\n\nOn 7/28/06, Mikael Carneholm <[email protected]> wrote:\n>\n> Luke,\n>\n> Yeah, I read those results, and I'm very disappointed with my results\n> from the MSA1500. I would however be interested in other people's\n> bonnie++ and benchmarksql results using a similar machine (2 cpu dual\n> core opteron) with other \"off the shelf\" storage systems\n> (EMC/Netapp/Xyratex/../). Could you run benchmarksql against that\n> machine with the 16 SATA disk and 3Ware 9550SX SATA RAID adapters? It\n> would be *very* interesting to see how the I/O performance correlates to\n> benchmarksql (postgres) transaction throughout.\n>\n> /Mikael\n>\n> -----Original Message-----\n> From: Luke Lonergan [mailto:[email protected]]\n> Sent: den 28 juli 2006 11:17\n> To: Mikael Carneholm; Kjell Tore Fossbakk;\n> [email protected]\n> Subject: RE: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig\n>\n> Mikael,\n>\n> > -----Original Message-----\n> > From: Mikael Carneholm [mailto:[email protected]]\n> > Sent: Friday, July 28, 2006 2:05 AM\n> >\n> > My bonnie++ results are found in this message:\n> > http://archives.postgresql.org/pgsql-performance/2006-07/msg00164.php\n> >\n>\n> Apologies if I've already said this, but those bonnie++ results are very\n> disappointing. 
The sequential transfer rates between 20MB/s and 57MB/s\n> are slower than a single SATA disk, and your SCSI disks might even do\n> 80MB/s sequential transfer rate each.\n>\n> Random access is also very poor, though perhaps equal to 5 disk drives\n> at 500/second.\n>\n> By comparison, we routinely get 950MB/s sequential transfer rate using\n> 16 SATA disks and 3Ware 9550SX SATA RAID adapters on Linux.\n>\n> On Solaris ZFS on an X4500, we recently got this bonnie++ result on 36\n> SATA disk drives in RAID10 (single thread first):\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr-\n> --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n> %CP /sec %CP\n> thumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344\n> 94 1801 4\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++\n> +++++ +++\n>\n> Bumping up the number of concurrent processes to 2, we get about 1.5x\n> speed reads of RAID10 with a concurrent workload (you have to add the\n> rates together):\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n> %CP /sec %CP\n> thumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472\n> 88 1233 2\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++\n> 4381 97\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr-\n> --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n> %CP /sec %CP\n> thumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030\n> 87 1274 3\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++\n> 4272 97\n>\n> So that's 2500 seeks per second, 1440MB/s sequential block read, 212MB/s\n> per character sequential read.\n>\n> - Luke\n>\n>\n>\n\n\n-- \n\"Be nice to people on your way up because you meet them on your way down.\"",
"msg_date": "Fri, 28 Jul 2006 12:38:46 +0200",
"msg_from": "\"Kjell Tore Fossbakk\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "On Fri, 28 Jul 2006, Mikael Carneholm wrote:\n\n> Luke,\n>\n> Yeah, I read those results, and I'm very disappointed with my results\n> from the MSA1500. I would however be interested in other people's\n> bonnie++ and benchmarksql results using a similar machine (2 cpu dual\n> core opteron) with other \"off the shelf\" storage systems\n> (EMC/Netapp/Xyratex/../). Could you run benchmarksql against that\n> machine with the 16 SATA disk and 3Ware 9550SX SATA RAID adapters? It\n> would be *very* interesting to see how the I/O performance correlates to\n> benchmarksql (postgres) transaction throughout.\n\nFWIW, once our vendor gets all the pieces (having some issues with \nfiguring out which multilane sata cables to get), I'll have a dual-core \nopteron box with a 3Ware 9500SX-12MI and 8 drives. I need to benchmark to \ncompare this to our xeon/adaptec/scsi build we've been using.\n\nI've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware \ncard scales there - started with 2 drives and got \"drive speed\" mirroring. \nAdded two more and most of the bonnie numbers doubled. This is not what \nI'm used to with the Adaptec SCSI junk.\n\nThese SATA RAID controllers 3Ware is making seem to be leaps and bounds \nbeyond what the \"old guard\" is churning out (at much higher prices).\n\nCharles\n\n> /Mikael\n>\n> -----Original Message-----\n> From: Luke Lonergan [mailto:[email protected]]\n> Sent: den 28 juli 2006 11:17\n> To: Mikael Carneholm; Kjell Tore Fossbakk;\n> [email protected]\n> Subject: RE: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig\n>\n> Mikael,\n>\n>> -----Original Message-----\n>> From: Mikael Carneholm [mailto:[email protected]]\n>> Sent: Friday, July 28, 2006 2:05 AM\n>>\n>> My bonnie++ results are found in this message:\n>> http://archives.postgresql.org/pgsql-performance/2006-07/msg00164.php\n>>\n>\n> Apologies if I've already said this, but those bonnie++ results are very\n> disappointing. 
The sequential transfer rates between 20MB/s and 57MB/s\n> are slower than a single SATA disk, and your SCSI disks might even do\n> 80MB/s sequential transfer rate each.\n>\n> Random access is also very poor, though perhaps equal to 5 disk drives\n> at 500/second.\n>\n> By comparison, we routinely get 950MB/s sequential transfer rate using\n> 16 SATA disks and 3Ware 9550SX SATA RAID adapters on Linux.\n>\n> On Solaris ZFS on an X4500, we recently got this bonnie++ result on 36\n> SATA disk drives in RAID10 (single thread first):\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr-\n> --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n> %CP /sec %CP\n> thumperdw-i-1 32G 120453 99 467814 98 290391 58 109371 99 993344\n> 94 1801 4\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ 30850 99 +++++ +++\n> +++++ +++\n>\n> Bumping up the number of concurrent processes to 2, we get about 1.5x\n> speed reads of RAID10 with a concurrent workload (you have to add the\n> rates together):\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n> %CP /sec %CP\n> thumperdw-i-1 32G 111441 95 212536 54 171798 51 106184 98 719472\n> 88 1233 2\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 26085 90 +++++ +++ 5700 98 21448 97 +++++ +++\n> 4381 97\n>\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr-\n> --Block-- --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec\n> %CP /sec %CP\n> thumperdw-i-1 32G 116355 99 212509 54 171647 50 106112 98 715030\n> 87 1274 3\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 26082 99 +++++ +++ 5588 98 21399 88 +++++ +++\n> 4272 97\n>\n> So that's 2500 seeks per second, 1440MB/s sequential block read, 212MB/s\n> per character sequential read.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n",
"msg_date": "Sat, 29 Jul 2006 01:57:19 -0400 (EDT)",
"msg_from": "Charles Sprickman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Hi, Charles,\n\nCharles Sprickman wrote:\n\n> I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware\n> card scales there - started with 2 drives and got \"drive speed\"\n> mirroring. Added two more and most of the bonnie numbers doubled. This\n> is not what I'm used to with the Adaptec SCSI junk.\n\nWell, for sequential reading, you should be able to get double drive\nspeed on a 2-disk mirror with a good controller, as it can balance the\nreads among the drives.\n\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 07 Aug 2006 14:07:45 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Although I for one have yet to see a controller that actualy does this (I\nbelieve software RAID on linux doesn't either).\n\nAlex.\n\nOn 8/7/06, Markus Schaber <[email protected]> wrote:\n>\n> Hi, Charles,\n>\n> Charles Sprickman wrote:\n>\n> > I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware\n> > card scales there - started with 2 drives and got \"drive speed\"\n> > mirroring. Added two more and most of the bonnie numbers doubled. This\n> > is not what I'm used to with the Adaptec SCSI junk.\n>\n> Well, for sequential reading, you should be able to get double drive\n> speed on a 2-disk mirror with a good controller, as it can balance the\n> reads among the drives.\n>\n> Markus\n> --\n> Markus Schaber | Logical Tracking&Tracing International AG\n> Dipl. Inf. | Software Development GIS\n>\n> Fight against software patents in EU! www.ffii.org\n> www.nosoftwarepatents.org\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nAlthough I for one have yet to see a controller that actualy does this (I believe software RAID on linux doesn't either).Alex.On 8/7/06, Markus Schaber\n <[email protected]> wrote:Hi, Charles,\nCharles Sprickman wrote:> I've also got a 1U with a 9500SX-4 and 4 drives. I like how the 3Ware> card scales there - started with 2 drives and got \"drive speed\"> mirroring. Added two more and most of the bonnie numbers doubled. This\n> is not what I'm used to with the Adaptec SCSI junk.Well, for sequential reading, you should be able to get double drivespeed on a 2-disk mirror with a good controller, as it can balance thereads among the drives.\nMarkus--Markus Schaber | Logical Tracking&Tracing International AGDipl. Inf. | Software Development GISFight against software patents in EU! www.ffii.org\nwww.nosoftwarepatents.org---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not match",
"msg_date": "Mon, 7 Aug 2006 16:02:52 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "On Mon, Aug 07, 2006 at 04:02:52PM -0400, Alex Turner wrote:\n> Although I for one have yet to see a controller that actualy does this (I\n> believe software RAID on linux doesn't either).\n\nLinux' software RAID does. See earlier threads for demonstrations.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Mon, 7 Aug 2006 22:20:02 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "On Mon, Aug 07, 2006 at 10:20:02PM +0200, Steinar H. Gunderson wrote:\n> On Mon, Aug 07, 2006 at 04:02:52PM -0400, Alex Turner wrote:\n> > Although I for one have yet to see a controller that actualy does this (I\n> > believe software RAID on linux doesn't either).\n> \n> Linux' software RAID does. See earlier threads for demonstrations.\n\nThe real question: will it balance within a single thread?\n\nCheaper raid setups will balance individual requests between devices,\nbut good ones should be able to service a single request from both\ndevices (assuming it's reading more than whatever the stripe size is).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 9 Aug 2006 16:09:04 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
}
] |
[
{
"msg_contents": "I have a table with 37000 rows, an integer column, and an index on that \ncolumn. I've got a function that returns an integer. When I do a select \nwhere I restrict that column to being equal to a static number, explain \ntells me the index will be used. When I do the same thing but use the \nfunction instead of a static number, explain shows me a full scan on the \ntable.\n\nI must be missing something, because my understanding is that the function \nwill be evaluated once for the statement and then collapsed into a static \nnumber for the filtering. But the results of the explain seem to imply \nthat's not the case....?\n",
"msg_date": "Fri, 28 Jul 2006 12:21:14 -0700 (PDT)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": true,
"msg_subject": "index usage"
}
] |
[
{
"msg_contents": "> De : [email protected] [mailto:pgsql-performance-\n> [email protected]] De la part de Ben\n> Envoyé : vendredi, juillet 28, 2006 15:21\n> À : [email protected]\n> Objet : [PERFORM] index usage\n> \n> I have a table with 37000 rows, an integer column, and an index on that\n> column. I've got a function that returns an integer. When I do a select\n> where I restrict that column to being equal to a static number, explain\n> tells me the index will be used. When I do the same thing but use the\n> function instead of a static number, explain shows me a full scan on the\n> table.\n> \n> I must be missing something, because my understanding is that the function\n> will be evaluated once for the statement and then collapsed into a static\n> number for the filtering. But the results of the explain seem to imply\n> that's not the case....?\n> \n\nIs your function IMMUTABLE, STABLE or VOLATILE?\n\n--\nDaniel\n",
"msg_date": "Fri, 28 Jul 2006 15:23:01 -0400",
"msg_from": "\"Daniel Caune\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: index usage"
},
{
"msg_contents": "It's volatile, but it will always return an integer.\n\nOn Fri, 28 Jul 2006, Daniel Caune wrote:\n\n>> De�: [email protected] [mailto:pgsql-performance-\n>> [email protected]] De la part de Ben\n>> Envoy�: vendredi, juillet 28, 2006 15:21\n>> ��: [email protected]\n>> Objet�: [PERFORM] index usage\n>>\n>> I have a table with 37000 rows, an integer column, and an index on that\n>> column. I've got a function that returns an integer. When I do a select\n>> where I restrict that column to being equal to a static number, explain\n>> tells me the index will be used. When I do the same thing but use the\n>> function instead of a static number, explain shows me a full scan on the\n>> table.\n>>\n>> I must be missing something, because my understanding is that the function\n>> will be evaluated once for the statement and then collapsed into a static\n>> number for the filtering. But the results of the explain seem to imply\n>> that's not the case....?\n>>\n>\n> Is your function IMMUTABLE, STABLE or VOLATILE?\n>\n> --\n> Daniel\n>\n>From [email protected] Fri Jul 28 18:02:18 2006\nX-Original-To: [email protected]\nReceived: from localhost (mx1.hub.org [200.46.208.251])\n\tby postgresql.org (Postfix) with ESMTP id 15F6A9FB270\n\tfor <[email protected]>; Fri, 28 Jul 2006 18:02:18 -0300 (ADT)\nReceived: from postgresql.org ([200.46.204.71])\n by localhost (mx1.hub.org [200.46.208.251]) (amavisd-new, port 10024)\n with ESMTP id 88817-04 for <[email protected]>;\n Fri, 28 Jul 2006 18:02:02 -0300 (ADT)\nX-Greylist: from auto-whitelisted by SQLgrey-\nReceived: from mir3-fs.mir3.com (mail.mir3.com [65.208.188.100])\n\tby postgresql.org (Postfix) with ESMTP id BCD2D9FB27A\n\tfor <[email protected]>; Fri, 28 Jul 2006 18:01:21 -0300 (ADT)\nReceived: mir3-fs.mir3.com 172.16.1.11 from 172.16.2.68 172.16.2.68 via HTTP with MS-WebStorage 6.0.6249\nReceived: from archimedes.mirlogic.com by mir3-fs.mir3.com; 28 Jul 2006 14:01:18 -0700\nSubject: Re: index usage\nFrom: Mark Lewis <[email protected]>\nTo: Ben <[email protected]>\nCc: Daniel Caune <[email protected]>, [email protected]\nIn-Reply-To: <[email protected]>\nReferences: <[email protected]>\n\t <[email protected]>\nContent-Type: text/plain; charset=utf-8\nContent-Transfer-Encoding: quoted-printable\nOrganization: MIR3, Inc.\nDate: Fri, 28 Jul 2006 14:01:18 -0700\nMessage-Id: <1154120478.1634.666.camel@archimedes>\nMime-Version: 1.0\nX-Mailer: Evolution 2.0.2 (2.0.2-27) \nX-Virus-Scanned: Maia Mailguard 1.0.1\nX-Spam-Status: No, hits=0.061 tagged_above=0 required=5 tests=AWL,\n UNPARSEABLE_RELAY\nX-Spam-Level: \nX-Archive-Number: 200607/274\nX-Sequence-Number: 20134\n\nA volatile function has may return a different result for each row;\nthink of the random() or nextval() functions for example. You wouldn't\nwant them to return the same value for each row returned.\n\n-- Mark Lewis\n\nOn Fri, 2006-07-28 at 13:59 -0700, Ben wrote:\n> It's volatile, but it will always return an integer.\n>=20\n> On Fri, 28 Jul 2006, Daniel Caune wrote:\n>=20\n> >> De : [email protected] [mailto:pgsql-performance-\n> >> [email protected]] De la part de Ben\n> >> Envoy=C3=A9 : vendredi, juillet 28, 2006 15:21\n> >> =C3=80 : [email protected]\n> >> Objet : [PERFORM] index usage\n> >>\n> >> I have a table with 37000 rows, an integer column, and an index on tha=\nt\n> >> column. I've got a function that returns an integer. When I do a selec=\nt\n> >> where I restrict that column to being equal to a static number, explai=\nn\n> >> tells me the index will be used. 
When I do the same thing but use the\n> >> function instead of a static number, explain shows me a full scan on t=\nhe\n> >> table.\n> >>\n> >> I must be missing something, because my understanding is that the func=\ntion\n> >> will be evaluated once for the statement and then collapsed into a sta=\ntic\n> >> number for the filtering. But the results of the explain seem to imply\n> >> that's not the case....?\n> >>\n> >\n> > Is your function IMMUTABLE, STABLE or VOLATILE?\n> >\n> > --\n> > Daniel\n> >\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n",
"msg_date": "Fri, 28 Jul 2006 13:59:57 -0700 (PDT)",
"msg_from": "Ben <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage"
},
{
"msg_contents": "Ben <[email protected]> writes:\n> It's volatile, but it will always return an integer.\n\nIf it's volatile then it can't be used for an index condition.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 28 Jul 2006 17:06:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index usage "
}
] |
[
{
"msg_contents": "Charles, \n\n> FWIW, once our vendor gets all the pieces (having some issues \n> with figuring out which multilane sata cables to get), I'll \n> have a dual-core opteron box with a 3Ware 9500SX-12MI and 8 \n> drives. I need to benchmark to compare this to our \n> xeon/adaptec/scsi build we've been using.\n\nCool! You mean the 9550SX, not the 9500, right?\n\nA trick on the 9550SX with Linux is to set the max readahead to 512KB\nand no larger when using RAID10. If you use RAID5, set it to 16MB.\n\nHere is how you set it (put in /etc/rc.d/rc.local) for 512KB on\n/dev/sda:\n\n /sbin/blockdev --setra 512 /dev/sda\n\nI was able to go from 310MB/s on 8 drives to 475MB/s that way (using\nXFS).\n\nAlso, you need to stay away from Linux Volume Manager (lvm and lvm2),\nthey add a lot of overhead (!!?) to the block access. It took me a long\ntime to figure that out!\n \n- Luke\n\n",
"msg_date": "Sat, 29 Jul 2006 02:06:27 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
}
] |
[
{
"msg_contents": "Tweakers.net has done a database performance test between a Sun T2000 (8\ncore T1) and a Sun X4200 (2 dual core Opteron 280). The database\nbenchmark is developed inhouse and represents the average query pattern\nfrom their website. It is MySQL centric because Tweakers.net runs on\nMySQL, but Arjen van der Meijden has ported it to PostgreSQL and has\ndone basic optimizations like adding indexes.\n\nArjen wrote about some of the preliminary results previously in\nhttp://archives.postgresql.org/pgsql-performance/2006-06/msg00358.php\nbut the article has now been published http://tweakers.net/reviews/633/7\nThis is all the more impressive if you scroll down and look at the\nbehaviour of MySQL (after tweaking by both MySQL AB and Sun).\n\nJochem\n",
"msg_date": "Sat, 29 Jul 2006 17:02:46 +0200",
"msg_from": "Jochem van Dieten <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Jochem van Dieten wrote:\n> Tweakers.net has done a database performance test between a Sun T2000 (8\n> core T1) and a Sun X4200 (2 dual core Opteron 280). The database\n> benchmark is developed inhouse and represents the average query pattern\n> from their website. It is MySQL centric because Tweakers.net runs on\n> MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has\n> done basic optimizations like adding indexes.\n> \n> Arjen wrote about some of the preliminary results previously in\n> http://archives.postgresql.org/pgsql-performance/2006-06/msg00358.php\n> but the article has now been published http://tweakers.net/reviews/633/7\n> This is all the more impressive if you scroll down and look at the\n> behaviour of MySQL (after tweaking by both MySQL AB and Sun).\n\nI would love to get my hands on that postgresql version and see how much \nfarther it could be optimized.\n\nJoshua D. Drake\n\n\n> \n> Jochem\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Sat, 29 Jul 2006 08:43:49 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "On 29-7-2006 17:02, Jochem van Dieten wrote:\n> Tweakers.net has done a database performance test between a Sun T2000 (8\n> core T1) and a Sun X4200 (2 dual core Opteron 280). The database\n> benchmark is developed inhouse and represents the average query pattern\n> from their website. It is MySQL centric because Tweakers.net runs on\n> MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has\n> done basic optimizations like adding indexes.\n\nThere were a few minor datatype changes (like enum('Y', 'N') to boolean, \nbut on the other hand also 'int unsigned' to 'bigint'), a few small \nquery changes (i.e. rewriting join orders, most turned out to be \nnecessary for mysql 5.0 and 5.1 as well anyway) and even fewer larger \nquery changes (including a subquery, instead of the results of another \nquery).\n\nThe indexes also included adding partial indexes and several combined \nindexes.\n\nAll in all I think it took about a week to do the conversion and test \nthe specific queries. Luckily PostgreSQL allows for much clearer \ninformation on what a specific query is doing and much faster \nadding/removing of indexes (mysql rewrites the entire table).\n\n> Arjen wrote about some of the preliminary results previously in\n> http://archives.postgresql.org/pgsql-performance/2006-06/msg00358.php\n> but the article has now been published http://tweakers.net/reviews/633/7\n> This is all the more impressive if you scroll down and look at the\n> behaviour of MySQL (after tweaking by both MySQL AB and Sun).\n\nActually, we haven't had contact with MySQL AB. But as far as I know, \nthe Sun engineers have contacted them about this.\nAs it turns out there are some suboptimal machine codes generated from \nMySQL's source for the Niagara T1 and MySQL has some issues with \nInnoDB's scaling in the later 5.0-versions:\nhttp://www.mysqlperformanceblog.com/2006/07/28/returning-to-innodb-scalability/\n\nThen again, we weren't able to compile the PG8.2 dev using all \noptimizations of Sun's Studio Compiler (the mlibopt-switch failed), so \nthere is very likely more room for improvement on that field as well.\n\nBest regards,\n\nArjen\n",
"msg_date": "Sat, 29 Jul 2006 18:20:30 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "On 29-7-2006 17:43, Joshua D. Drake wrote:\n> \n> I would love to get my hands on that postgresql version and see how much \n> farther it could be optimized.\n\nYou probably mean the entire installation? As said in my reply to \nJochem, I've spent a few days testing all queries to improve their \nperformance. I'm not sure what kind of improvements that yielded, but if \nI remember correctly its in the order of 3-5 times for the entire \nbenchmark, compared to the initial MySQL-layout and queries.\n\nIf you mean the configuration and which version it was, I can look that \nup for you if you'd like. Including the compilation switches used on the \nT2000.\n\nIf we get to keep the machine (which we're going to try, but that's with \nworse performance than with their x4200 a bit doubtful), I'm sure we can \nwork something out.\nThen again, we regularly have other server hardware on which the same \ndatabase is used, so even without the T2000 we could still do some \neffort to further improve postgresql's performance.\nIt might be interesting to have some Postgres experts do some more \ntuning and allowing MySQL AB to do the same... But I'm not sure if we're \nwilling to spent that much extra time on a benchmark (just testing one \ndatabase costs us about a day and a half...)\n\nBest regards,\n\nArjen\n",
"msg_date": "Sat, 29 Jul 2006 18:33:05 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Arjen van der Meijden wrote:\n> On 29-7-2006 17:43, Joshua D. Drake wrote:\n>>\n>> I would love to get my hands on that postgresql version and see how \n>> much farther it could be optimized.\n> \n> You probably mean the entire installation? As said in my reply to \n> Jochem, I've spent a few days testing all queries to improve their \n> performance. I'm not sure what kind of improvements that yielded, but if \n> I remember correctly its in the order of 3-5 times for the entire \n> benchmark, compared to the initial MySQL-layout and queries.\n> \n> If you mean the configuration and which version it was, I can look that \n> up for you if you'd like. Including the compilation switches used on the \n> T2000.\n\nWell I would be curious about the postgresql.conf and how much ram \netc... it had.\n\n> Then again, we regularly have other server hardware on which the same \n> database is used, so even without the T2000 we could still do some \n> effort to further improve postgresql's performance.\n> It might be interesting to have some Postgres experts do some more \n> tuning and allowing MySQL AB to do the same... But I'm not sure if we're \n> willing to spent that much extra time on a benchmark (just testing one \n> database costs us about a day and a half...)\n\nI understand, I just have a feeling that we could do even better :) I do \nappreciate all your efforts.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Best regards,\n> \n> Arjen\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Sat, 29 Jul 2006 10:01:38 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "On 29-7-2006 19:01, Joshua D. Drake wrote:\n> Well I would be curious about the postgresql.conf and how much ram \n> etc... it had.\n\nIt was the 8core version with 16GB memory... but actually that's just \noverkill, the active portions of the database easily fits in 8GB and a \ntest on another machine with just 2GB didn't even show that much \nimprovements when going to 7GB (6x1G, 2x 512M), it was mostly in the \nrange of 10% improvement or less.\n\nAnyway, the differences to the default postgresql.conf:\nshared_buffers = 30000\nTests with 40k, 50k en 60k didn't really show improvements.\n\nwork_mem = 2048\nThis probably could've been set higher with the sheer amount of \nnot-really-used memory.\n\nmaintenance_work_mem = 65535\nNot really important of course\n\nmax_fsm_pages = 50000\nSomehow it needed to be set quite high, probably because we only cleaned \nup after doing over 200k requests.\n\neffective_cache_size = 350000\nAs said, the database fitted in 8GB of memory, so I didn't see a need to \nset this higher than for the 8GB machines (x4200 and another T2000 we had).\n\ndefault_statistics_target = 200\nFor a few columns on the largest tables I manually raised it to 1000\n\nlog_min_duration_statement = 1000\nI'm not sure if this has much overhead? Stats logging was turned/left on \nas well.\nTurning that off improved it a few percent.\n\n> I understand, I just have a feeling that we could do even better :) I do \n> appreciate all your efforts.\n\nWell, I'll keep that in mind :)\nWhat it makes even worse for MySQL is that it had (on another machine) \nabout 8M hits on the query cache for 4M inserts, i.e. half of the \nqueries weren't even executed on it.\n\nBest regards,\n\nArjen\n",
"msg_date": "Sat, 29 Jul 2006 19:39:30 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Jochem,\n\nOn 7/29/06 8:02 AM, \"Jochem van Dieten\" <[email protected]> wrote:\n\n> Tweakers.net has done a database performance test between a Sun T2000 (8\n> core T1) and a Sun X4200 (2 dual core Opteron 280). The database\n> benchmark is developed inhouse and represents the average query pattern\n> from their website. It is MySQL centric because Tweakers.net runs on\n> MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has\n> done basic optimizations like adding indexes.\n\nExcellent article/job on performance profiling - thanks!\n\nBack in March, Anandtech also did a Niagara article profiling web + database\nperformance:\n http://www.anandtech.com/IT/showdoc.aspx?i=2727&p=7\n\nand the results for the T2000/Niagara were also lesser to the multi-core\nOpteron. Now maybe this article will help Sun to improve the processor's\nperformance.\n\n- Luke \n\n\n",
"msg_date": "Sat, 29 Jul 2006 11:52:56 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Hi Arjen,\n\nI am curious about your Sun Studio compiler options also.\nCan you send that too ?\n\nAny other tweakings that you did on Solaris?\n\nThanks.\n\nRegards,\nJignesh\n\n\nArjen van der Meijden wrote:\n> On 29-7-2006 19:01, Joshua D. Drake wrote:\n>> Well I would be curious about the postgresql.conf and how much ram \n>> etc... it had.\n>\n> It was the 8core version with 16GB memory... but actually that's just \n> overkill, the active portions of the database easily fits in 8GB and a \n> test on another machine with just 2GB didn't even show that much \n> improvements when going to 7GB (6x1G, 2x 512M), it was mostly in the \n> range of 10% improvement or less.\n>\n> Anyway, the differences to the default postgresql.conf:\n> shared_buffers = 30000\n> Tests with 40k, 50k en 60k didn't really show improvements.\n>\n> work_mem = 2048\n> This probably could've been set higher with the sheer amount of \n> not-really-used memory.\n>\n> maintenance_work_mem = 65535\n> Not really important of course\n>\n> max_fsm_pages = 50000\n> Somehow it needed to be set quite high, probably because we only \n> cleaned up after doing over 200k requests.\n>\n> effective_cache_size = 350000\n> As said, the database fitted in 8GB of memory, so I didn't see a need \n> to set this higher than for the 8GB machines (x4200 and another T2000 \n> we had).\n>\n> default_statistics_target = 200\n> For a few columns on the largest tables I manually raised it to 1000\n>\n> log_min_duration_statement = 1000\n> I'm not sure if this has much overhead? Stats logging was turned/left \n> on as well.\n> Turning that off improved it a few percent.\n>\n>> I understand, I just have a feeling that we could do even better :) I \n>> do appreciate all your efforts.\n>\n> Well, I'll keep that in mind :)\n> What it makes even worse for MySQL is that it had (on another machine) \n> about 8M hits on the query cache for 4M inserts, i.e. half of the \n> queries weren't even executed on it.\n>\n> Best regards,\n>\n> Arjen\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n",
"msg_date": "Mon, 31 Jul 2006 08:07:07 +0100",
"msg_from": "Jignesh Shah <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Hi Jignesh,\n\nIt was a cvs-checkout of 8.2 devel, compiled using:\nCPPFLAGS=\"-fast -xtarget=ultraT1 -xnolibmopt\" CC=/opt/SUNWspro/bin/cc \n./configure --without-readline\n\nWe'd gotten a specially adjusted Solaris version from Sun Holland for \nthe T2000. It was a dvd with a Solaris flar archive from 11 april 2006 \nand patches from 25 april 2006. It also had the preferred Solaris System \nsettings already applied. If you need more details about that dvd, I \nthink your best option is to contact Hans Nijbacker or Bart Muijzer, \nsince we're no Solaris-experts :)\n\nAppart from that, we did no extra tuning of the OS, nor did Hans for the \nMySQL-optimizations (afaik, but then again, he knows best).\n\nBest regards,\n\nArjen van der Meijden\n\nJignesh Shah wrote:\n> Hi Arjen,\n> \n> I am curious about your Sun Studio compiler options also.\n> Can you send that too ?\n> \n> Any other tweakings that you did on Solaris?\n> \n> Thanks.\n> \n> Regards,\n> Jignesh\n> \n> \n> Arjen van der Meijden wrote:\n>> On 29-7-2006 19:01, Joshua D. Drake wrote:\n>>> Well I would be curious about the postgresql.conf and how much ram \n>>> etc... it had.\n>>\n>> It was the 8core version with 16GB memory... but actually that's just \n>> overkill, the active portions of the database easily fits in 8GB and a \n>> test on another machine with just 2GB didn't even show that much \n>> improvements when going to 7GB (6x1G, 2x 512M), it was mostly in the \n>> range of 10% improvement or less.\n>>\n>> Anyway, the differences to the default postgresql.conf:\n>> shared_buffers = 30000\n>> Tests with 40k, 50k en 60k didn't really show improvements.\n>>\n>> work_mem = 2048\n>> This probably could've been set higher with the sheer amount of \n>> not-really-used memory.\n>>\n>> maintenance_work_mem = 65535\n>> Not really important of course\n>>\n>> max_fsm_pages = 50000\n>> Somehow it needed to be set quite high, probably because we only \n>> cleaned up after doing over 200k requests.\n>>\n>> effective_cache_size = 350000\n>> As said, the database fitted in 8GB of memory, so I didn't see a need \n>> to set this higher than for the 8GB machines (x4200 and another T2000 \n>> we had).\n>>\n>> default_statistics_target = 200\n>> For a few columns on the largest tables I manually raised it to 1000\n>>\n>> log_min_duration_statement = 1000\n>> I'm not sure if this has much overhead? Stats logging was turned/left \n>> on as well.\n>> Turning that off improved it a few percent.\n>>\n>>> I understand, I just have a feeling that we could do even better :) I \n>>> do appreciate all your efforts.\n>>\n>> Well, I'll keep that in mind :)\n>> What it makes even worse for MySQL is that it had (on another machine) \n>> about 8M hits on the query cache for 4M inserts, i.e. half of the \n>> queries weren't even executed on it.\n>>\n>> Best regards,\n>>\n>> Arjen\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n",
"msg_date": "Mon, 31 Jul 2006 09:59:51 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "On 7/29/06, Jochem van Dieten <[email protected]> wrote:\n> Tweakers.net has done a database performance test between a Sun T2000 (8\n> core T1) and a Sun X4200 (2 dual core Opteron 280). The database\n> benchmark is developed inhouse and represents the average query pattern\n> from their website. It is MySQL centric because Tweakers.net runs on\n> MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has\n> done basic optimizations like adding indexes.\n\nanandtech did a comparison of opteron/xeon/sun t1 not to long ago and\npublished some mysql/postgresql results. however, they were careful\nnot to publish the quad core data for pg to compare vs. mysql, which\nin my opinion would have shown a blowout victory for pg. (also was pg\n8.0).\n\nthe fact is, postgresql is often faster than mysql under real\nworkloads, especially when utilizing features such as stored\nprocedures and such.\n\nmerlin\n",
"msg_date": "Mon, 31 Jul 2006 12:04:26 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:\n> Jochem van Dieten wrote:\n> >Tweakers.net has done a database performance test between a Sun T2000 (8\n> >core T1) and a Sun X4200 (2 dual core Opteron 280). The database\n> >benchmark is developed inhouse and represents the average query pattern\n> >from their website. It is MySQL centric because Tweakers.net runs on\n> >MySQL, but Arjen van der Meijden has ported it to PostgreSQL and has\n> >done basic optimizations like adding indexes.\n> >\n> >Arjen wrote about some of the preliminary results previously in\n> >http://archives.postgresql.org/pgsql-performance/2006-06/msg00358.php\n> >but the article has now been published http://tweakers.net/reviews/633/7\n> >This is all the more impressive if you scroll down and look at the\n> >behaviour of MySQL (after tweaking by both MySQL AB and Sun).\n> \n> I would love to get my hands on that postgresql version and see how much \n> farther it could be optimized.\n\nI'd love to get an english translation that we could use for PR.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 1 Aug 2006 12:26:17 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "On 1-8-2006 19:26, Jim C. Nasby wrote:\n> On Sat, Jul 29, 2006 at 08:43:49AM -0700, Joshua D. Drake wrote:\n> \n> I'd love to get an english translation that we could use for PR.\n\nActually, we have an english version of the Socket F follow-up. \nhttp://tweakers.net/reviews/638 which basically displays the same \nresults for Postgres vs MySQL.\nIf and when a translation of the other article arrives, I don't know. \nOther follow-up stories will follow as well, whether and how soon those \nwill be translated, I also don't know. We are actually pretty interested \nin doing so, but its a lot of work to translate correctly :)\n\nBest regards,\n\nArjen\n",
"msg_date": "Tue, 01 Aug 2006 19:49:22 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Hi, Arjen,\n\nArjen van der Meijden wrote:\n\n> It was the 8core version with 16GB memory... but actually that's just\n> overkill, the active portions of the database easily fits in 8GB and a\n> test on another machine with just 2GB didn't even show that much\n> improvements when going to 7GB (6x1G, 2x 512M), it was mostly in the\n> range of 10% improvement or less.\n\nI'd be interested in the commit_siblings and commit_delay settings,\ntuning them could give a high increase on throughput for highly\nconcurrent insert/update workloads, at the cost of latency (and thus\nworse results for low concurrency situations).\n\nDifferent fsync method settings can also make a difference (I presume\nthat syncing was enabled).\n\nHTH,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 07 Aug 2006 15:18:27 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
},
{
"msg_contents": "Hi Markus,\n\nAs said, our environment really was a read-mostly one. So we didn't do \nmuch inserts/updates and thus spent no time tuning those values and left \nthem as default settings.\n\nBest regards,\n\nArjen\n\nMarkus Schaber wrote:\n> Hi, Arjen,\n> \n> Arjen van der Meijden wrote:\n> \n>> It was the 8core version with 16GB memory... but actually that's just\n>> overkill, the active portions of the database easily fits in 8GB and a\n>> test on another machine with just 2GB didn't even show that much\n>> improvements when going to 7GB (6x1G, 2x 512M), it was mostly in the\n>> range of 10% improvement or less.\n> \n> I'd be interested in the commit_siblings and commit_delay settings,\n> tuning them could give a high increase on throughput for highly\n> concurrent insert/update workloads, at the cost of latency (and thus\n> worse results for low concurrency situations).\n> \n> Different fsync method settings can also make a difference (I presume\n> that syncing was enabled).\n> \n> HTH,\n> Markus\n> \n> \n",
"msg_date": "Mon, 07 Aug 2006 15:45:08 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL scalability on Sun UltraSparc T1"
}
] |
[
{
"msg_contents": "Run bonnie++ version 1.03 and report results here.\n\n\n- Luke\n\nSent from my GoodLink synchronized handheld (www.good.com)\n\n\n -----Original Message-----\nFrom: \tKjell Tore Fossbakk [mailto:[email protected]]\nSent:\tSunday, July 30, 2006 03:03 PM Eastern Standard Time\nTo:\tClaus Guttesen\nCc:\[email protected]\nSubject:\tRe: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200\n\nHello.\n\nOS: Gentoo 2006.0 with gentoo's hardened kernel\nVersion: I haven't checked. Im guessing 8.0.8 (latest stable on all systems)\nor 8.1.4 which is the latest package.\n\nI'm still gonna try to run with smart array 5i. How can i find out that my\nperformance with that is crappy? Without ripping down my systems, and using\nsoftware raid?\n\nKjell Tore\n\nOn 7/28/06, Claus Guttesen <[email protected]> wrote:\n>\n> > As I have understood, there is alot of tuning using both postgres.confand\n> > analyzing queries to make the values of postgres.conf fit my needs,\n> system\n> > and hardware. This is where I need some help. I have looked into\n> > postgres.conf , and seen the tunings. But I'm still not sure what I\n> should\n> > put into those variables (in postgres.conf) with my hardware.\n> >\n> > Any suggestions would be most appreciated!\n>\n> What OS is it running and what version is postgresql?\n>\n> regards\n> Claus\n>\n\n\n\n-- \n\"Be nice to people on your way up because you meet them on your way down.\"\n\n",
"msg_date": "Sun, 30 Jul 2006 15:14:11 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig"
},
{
"msg_contents": "Okey!\n\nThe thing is, im on vacation. So ill report in about 3 weeks time.. Sry\nguys.. :-)\n\nKjell Tore\n\nOn 7/30/06, Luke Lonergan <[email protected]> wrote:\n>\n> Run bonnie++ version 1.03 and report results here.\n>\n>\n> - Luke\n>\n> Sent from my GoodLink synchronized handheld (www.good.com)\n>\n>\n> -----Original Message-----\n> From: Kjell Tore Fossbakk [mailto:[email protected]]\n> Sent: Sunday, July 30, 2006 03:03 PM Eastern Standard Time\n> To: Claus Guttesen\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Performance with 2 AMD/Opteron 2.6Ghz and\n> 8gig DDR PC3200\n>\n> Hello.\n>\n> OS: Gentoo 2006.0 with gentoo's hardened kernel\n> Version: I haven't checked. Im guessing 8.0.8 (latest stable on all\n> systems)\n> or 8.1.4 which is the latest package.\n>\n> I'm still gonna try to run with smart array 5i. How can i find out that my\n> performance with that is crappy? Without ripping down my systems, and\n> using\n> software raid?\n>\n> Kjell Tore\n>\n> On 7/28/06, Claus Guttesen <[email protected]> wrote:\n> >\n> > > As I have understood, there is alot of tuning using both\n> postgres.confand\n> > > analyzing queries to make the values of postgres.conf fit my needs,\n> > system\n> > > and hardware. This is where I need some help. I have looked into\n> > > postgres.conf , and seen the tunings. But I'm still not sure what I\n> > should\n> > > put into those variables (in postgres.conf) with my hardware.\n> > >\n> > > Any suggestions would be most appreciated!\n> >\n> > What OS is it running and what version is postgresql?\n> >\n> > regards\n> > Claus\n> >\n>\n>\n>\n> --\n> \"Be nice to people on your way up because you meet them on your way down.\"\n>\n>\n\n\n-- \n\"Be nice to people on your way up because you meet them on your way down.\"\n\nOkey!The thing is, im on vacation. So ill report in about 3 weeks time.. Sry guys.. :-)Kjell ToreOn 7/30/06, Luke Lonergan <\[email protected]> wrote:Run bonnie++ version 1.03 and report results here.\n- LukeSent from my GoodLink synchronized handheld (www.good.com) -----Original Message-----From: Kjell Tore Fossbakk [mailto:\[email protected]]Sent: Sunday, July 30, 2006 03:03 PM Eastern Standard TimeTo: Claus GuttesenCc: [email protected]: Re: [PERFORM] Performance with 2 AMD/Opteron \n2.6Ghz and 8gig DDR PC3200Hello.OS: Gentoo 2006.0 with gentoo's hardened kernelVersion: I haven't checked. Im guessing 8.0.8 (latest stable on all systems)or 8.1.4 which is the latest package.\nI'm still gonna try to run with smart array 5i. How can i find out that myperformance with that is crappy? Without ripping down my systems, and usingsoftware raid?Kjell ToreOn 7/28/06, Claus Guttesen <\[email protected]> wrote:>> > As I have understood, there is alot of tuning using both postgres.confand> > analyzing queries to make the values of postgres.conf\n fit my needs,> system> > and hardware. This is where I need some help. I have looked into> > postgres.conf , and seen the tunings. But I'm still not sure what I> should> > put into those variables (in \npostgres.conf) with my hardware.> >> > Any suggestions would be most appreciated!>> What OS is it running and what version is postgresql?>> regards> Claus>\n--\"Be nice to people on your way up because you meet them on your way down.\"-- \"Be nice to people on your way up because you meet them on your way down.\"",
"msg_date": "Sun, 30 Jul 2006 21:18:03 +0200",
"msg_from": "\"Kjell Tore Fossbakk\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance with 2 AMD/Opteron 2.6Ghz and 8gig DDR PC3200"
}
] |
[
{
"msg_contents": "I am testing a query what that has a sub-select. The query performance is very very poor as shown below due to the use of sequencial scans. The actual row count of both tables is also shown. It appears the row count shown by explain analyze does not match the actual count. Columns dstobj, srcobj & objectid are all indexed yet postgres insists on using seq scans. Vacuum analyze makes no difference. I am using 8.1.3 on linux. \n \n This is a very simple query with relatively small amount of data and the query is taking 101482 ms. Queries with sub-selects on both tables individually is very fast (8 ms). \n \n How do I prevent the use of seq scans? \n \n \n \n \n \n capsa=# explain analyze select name from capsa.flatomfilesysentry where objectid in ( select dstobj from capsa.flatommemberrelation where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');\n \n \n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..1386.45 rows=5809 width=14) (actual time=2.933..101467.463 rows=5841 loops=1)\n Join Filter: (\"outer\".objectid = \"inner\".dstobj)\n -> Seq Scan on flatomfilesysentry (cost=0.00..368.09 rows=5809 width=30) (actual time=0.007..23.451 rows=5844 loops=1)\n -> Seq Scan on flatommemberrelation (cost=0.00..439.05 rows=5842 width=16) (actual time=0.007..11.790 rows=2922 loops=5844)\n Filter: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)\n Total runtime: 101482.256 ms\n (6 rows)\n \n capsa=# select count(*) from capsa.flatommemberrelation ;\n count\n -------\n 11932\n (1 row)\n \n capsa=# select count(*) from capsa.flatomfilesysentry ;\n count\n -------\n 5977\n \n \n \n \n\nI am testing a query what that has a sub-select. The query performance is very very poor as shown below due to the use of sequencial scans. The actual row count of both tables is also shown. It appears the row count shown by explain analyze does not match the actual count. Columns dstobj, srcobj & objectid are all indexed yet postgres insists on using seq scans. Vacuum analyze makes no difference. I am using 8.1.3 on linux. This is a very simple query with relatively small amount of data and the query is taking 101482 ms. Queries with sub-selects on both tables individually is very fast (8 ms). How do I prevent the use of seq scans? capsa=# explain analyze select name from capsa.flatomfilesysentry where objectid in ( select dstobj from capsa.flatommemberrelation where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'); \n QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------- Nested Loop IN Join (cost=0.00..1386.45 rows=5809 width=14) (actual time=2.933..101467.463 rows=5841 loops=1) Join Filter: (\"outer\".objectid = \"inner\".dstobj) -> Seq Scan on flatomfilesysentry (cost=0.00..368.09 rows=5809 width=30) (actual time=0.007..23.451 rows=5844 loops=1) -> Seq Scan on flatommemberrelation (cost=0.00..439.05 rows=5842 width=16) (actual time=0.007..11.790 rows=2922\n loops=5844) Filter: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid) Total runtime: 101482.256 ms (6 rows) capsa=# select count(*) from capsa.flatommemberrelation ; count ------- 11932 (1 row) capsa=# select count(*) from capsa.flatomfilesysentry ; count ------- 5977",
"msg_date": "Sun, 30 Jul 2006 21:50:14 -0400 (EDT)",
"msg_from": "H Hale <[email protected]>",
"msg_from_op": true,
"msg_subject": "sub select performance due to seq scans"
},
{
"msg_contents": "H Hale wrote:\n> I am testing a query what that has a sub-select. The query performance is very very poor as shown below due to the use of sequencial scans. The actual row count of both tables is also shown. It appears the row count shown by explain analyze does not match the actual count. Columns dstobj, srcobj & objectid are all indexed yet postgres insists on using seq scans. Vacuum analyze makes no difference. I am using 8.1.3 on linux. \n> \n> This is a very simple query with relatively small amount of data and the query is taking 101482 ms. Queries with sub-selects on both tables individually is very fast (8 ms). \n> \n> How do I prevent the use of seq scans? \n\nHmm - something strange here.\n\n> capsa=# explain analyze select name from capsa.flatomfilesysentry where objectid in ( select dstobj from capsa.flatommemberrelation where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');\n> \n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..1386.45 rows=5809 width=14) (actual time=2.933..101467.463 rows=5841 loops=1)\n> Join Filter: (\"outer\".objectid = \"inner\".dstobj)\n> -> Seq Scan on flatomfilesysentry (cost=0.00..368.09 rows=5809 width=30) (actual time=0.007..23.451 rows=5844 loops=1)\n> -> Seq Scan on flatommemberrelation (cost=0.00..439.05 rows=5842 width=16) (actual time=0.007..11.790 rows=2922 loops=5844)\n> Filter: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)\n> Total runtime: 101482.256 ms\n\nLook at that second seq-scan (on flatommemberrelation) - it's looping \n5844 times (once for each row in flatmfilesysentry). I'd expect PG to \nmaterialise the seq-scan once and then join (unless I'm missing \nsomething, the subselect just involves the one test against a constant).\n\nI'm guessing something in your configuration is pushing your cost \nestimates far away from reality. Could you try issuing a \"set \nenable_seqscan=off\" and then running explain-analyse again. That will \nshow us alternatives.\n\nAlso, what performance-related configuration values have you changed? \nCould you post them with a brief description of your hardware?\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 31 Jul 2006 10:20:41 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sub select performance due to seq scans"
},
{
"msg_contents": "> capsa=# explain analyze select name from capsa.flatomfilesysentry\n> where objectid in ( select dstobj from capsa.flatommemberrelation\n> where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');\n> \n> \n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..1386.45 rows=5809 width=14) (actual\n> time=2.933..101467.463 rows=5841 loops=1)\n> Join Filter: (\"outer\".objectid = \"inner\".dstobj)\n> -> Seq Scan on flatomfilesysentry (cost=0.00..368.09 rows=5809\n> width=30) (actual time=0.007..23.451 rows=5844 loops=1)\n> -> Seq Scan on flatommemberrelation (cost=0.00..439.05 rows=5842\n> width=16) (actual time=0.007..11.790 rows=2922 loops=5844)\n\nA loop for an IN indicates that you are using a very old version of\nPostgreSQL (7.2 or earlier). Please double check that the server is\n8.1.3 as you indicated and not just the client.\n\n>From psql:\n select version();\n\nHmm... Perhaps it is an 8.1.3 server with mergejoin and hashjoin\ndisabled?\n show enable_mergejoin;\n show enable_hashjoin;\n\nYou can try this query syntax:\n \n select name from capsa.flatomfilesysentry join\n capsa.flatommemberrelation on (objectid = dstobj) where srcobj =\n 'c1c7304a-1fe1-11db-8af7-001143214409';\n\n\n> Filter: (srcobj =\n> 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)\n> Total runtime: 101482.256 ms\n> (6 rows)\n> \n> capsa=# select count(*) from capsa.flatommemberrelation ;\n> count\n> -------\n> 11932\n> (1 row)\n> \n> capsa=# select count(*) from capsa.flatomfilesysentry ;\n> count\n> -------\n> 5977\n> \n> \n> \n> \n\n",
"msg_date": "Mon, 31 Jul 2006 08:09:42 -0400",
"msg_from": "Rod Taylor <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sub select performance due to seq scans"
},
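The checks and the join rewrite from the message above, collected into one sketch; the alias names are illustrative, and note that unlike IN, a plain join can emit one output row per matching flatommemberrelation row, so a DISTINCT is added here.

```sql
SELECT version();        -- confirm the server itself is really 8.1.x
SHOW enable_mergejoin;   -- both should normally be 'on'
SHOW enable_hashjoin;

-- Join formulation of the same question; DISTINCT guards against duplicate
-- names if several relation rows point at the same file entry.
EXPLAIN ANALYZE
SELECT DISTINCT e.name
FROM capsa.flatomfilesysentry e
JOIN capsa.flatommemberrelation r ON r.dstobj = e.objectid
WHERE r.srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409';
```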
{
"msg_contents": "Rod Taylor <[email protected]> writes:\n>> Nested Loop IN Join (cost=0.00..1386.45 rows=5809 width=14) (actual\n>> time=2.933..101467.463 rows=5841 loops=1)\n>> Join Filter: (\"outer\".objectid = \"inner\".dstobj)\n>> -> Seq Scan on flatomfilesysentry (cost=0.00..368.09 rows=5809\n>> width=30) (actual time=0.007..23.451 rows=5844 loops=1)\n>> -> Seq Scan on flatommemberrelation (cost=0.00..439.05 rows=5842\n>> width=16) (actual time=0.007..11.790 rows=2922 loops=5844)\n\n> A loop for an IN indicates that you are using a very old version of\n> PostgreSQL (7.2 or earlier).\n\nNo, it's not that, because 7.2 certainly had no idea of \"IN Join\"s.\nBut there's something mighty fishy about this plan anyway. The\nplanner was predicting 5809 rows out from flatomfilesysentry (not\ntoo far off), so why didn't it predict something north of\n368.09 + 5809 * 439.05 as the total join cost? There's a special case\nin cost_nestloop for IN joins, but it sure shouldn't have reduced the\nestimate by a factor of 1800+ ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jul 2006 08:53:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sub select performance due to seq scans "
},
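For reference, the arithmetic being questioned, worked out from the numbers in the posted plan (a back-of-the-envelope check, nothing more):

```sql
-- Outer scan cost plus one inner scan per outer row comes to roughly
-- 2.55 million cost units, about 1800 times the 1386.45 the plan reported.
SELECT 368.09 + 5809 * 439.05              AS naive_nestloop_cost,
       (368.09 + 5809 * 439.05) / 1386.45  AS ratio_to_reported_cost;
```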
{
"msg_contents": "Look at that second seq-scan (on flatommemberrelation) - it's looping \n5844 times (once for each row in flatmfilesysentry). I'd expect PG to \nmaterialise the seq-scan once and then join (unless I'm missing \nsomething, the subselect just involves the one test against a constant).\n\nI'm guessing something in your configuration is pushing your cost \nestimates far away from reality. Could you try issuing a \"set \nenable_seqscan=off\" and then running explain-analyse again. That will \nshow us alternatives.\n\nAlso, what performance-related configuration values have you changed? \nCould you post them with a brief description of your hardware?\n\n-- \n Richard Huxton\n Archonet Ltd\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\nThe hardware is XEON 3GHZ P4 2GB Memory with 80GB SATA drive.\n Kernel.SHMMAX=128MB\n \n The following config changes have been made from the defaults...\n \n shared_buffers = 8000 # min 16 or max_connections*2, 8KB each\n max_fsm_pages = 50000 # min max_fsm_relations*16, 6 bytes each\n vacuum_cost_delay = 10 # 0-1000 milliseconds\n stats_start_collector = on\n stats_row_level = on\n autovacuum = on # enable autovacuum subprocess?\n autovacuum_naptime = 20 # time between autovacuum runs, in secs\n autovacuum_vacuum_threshold = 500 # min # of tuple updates before# vacuum\n autovacuum_analyze_threshold = 250 # min # of tuple updates before \n \n Here is the query plan...\n \n capsa=# set enable_seqscan=off;\n SET\n Time: 0.478 ms\n capsa=# explain analyze select name from capsa.flatomfilesysentry where objectid in ( select dstobj from capsa.flatommemberrelation where srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');\n QUERY PLAN\n -----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=873.32..1017581.78 rows=6476 width=14) (actual time=80.402..241.881 rows=6473 loops=1)\n -> Unique (cost=871.32..903.68 rows=3229 width=16) (actual time=80.315..113.282 rows=6473 loops=1)\n -> Sort (cost=871.32..887.50 rows=6473 width=16) (actual time=80.310..94.279 rows=6473 loops=1)\n Sort Key: flatommemberrelation.dstobj\n -> Bitmap Heap Scan on flatommemberrelation (cost=56.66..461.57 rows=6473 width=16) (actual time=2.613..14.229 rows=6473 loops=1)\n Recheck Cond: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)\n -> Bitmap Index Scan on capsa_flatommemberrelation_srcobj_idx (cost=0.00..56.66 rows=6473 width=0) (actual time=2.344..2.344 rows=6473 loops=1)\n Index Cond: (srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409'::capsa_sys.uuid)\n -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)\n Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)\n Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n Total runtime: 251.611 ms\n (13 rows)\n \n Time: 252.825 ms\n \n I went back to the stock conf settings, did a vaccuum full analyze and still get the same results.\n \n Background...\n \n We have spikes of activty where both tables get rows inserted & have many updates. During this time performance drops. 
\n I have been experimenting with auto vac settings as vaccuuming was helping although query performance \n did not return to normal until after the activity spike. \n In this case ( and I not sure why yet) vac made no difference.",
"msg_date": "Mon, 31 Jul 2006 10:14:27 -0400 (EDT)",
"msg_from": "H Hale <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sub select performance due to seq scans"
},
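Given that the slowdown follows bursts of inserts and updates and only clears up after vacuuming, one thing worth trying (a sketch, not a guaranteed fix) is refreshing statistics on the two tables explicitly right after a spike, so the planner's estimates track the new data before autovacuum catches up:

```sql
-- Reclaim dead rows from the update burst and refresh planner statistics.
VACUUM ANALYZE capsa.flatommemberrelation;
VACUUM ANALYZE capsa.flatomfilesysentry;

-- Then re-check which plan the optimizer picks.
EXPLAIN ANALYZE
SELECT name
FROM capsa.flatomfilesysentry
WHERE objectid IN (SELECT dstobj
                   FROM capsa.flatommemberrelation
                   WHERE srcobj = 'c1c7304a-1fe1-11db-8af7-001143214409');
```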
{
"msg_contents": "H Hale <[email protected]> writes:\n> -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)\n> Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n> -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)\n> Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n\nWell, there's our estimation failure: 3238 rows expected, one row\nactual.\n\nWhat is the data distribution of flatomfilesysentry.objectid?\nIt looks from this example like it is unique or nearly so,\nbut the planner evidently does not think that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jul 2006 10:28:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sub select performance due to seq scans "
},
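One way to see what the planner currently believes about that column is to query pg_stats (a sketch; n_distinct = -1 means the column is treated as unique, while a small positive value would explain the 3238-row estimate):

```sql
SELECT schemaname, tablename, attname, null_frac, n_distinct
FROM pg_stats
WHERE schemaname = 'capsa'
  AND tablename  = 'flatomfilesysentry'
  AND attname    = 'objectid';
```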
{
"msg_contents": "Tom, \n \n It is unique. \n\n Indexes:\n \"flatomfilesysentry_pkey\" PRIMARY KEY, btree (objectid)\n \"capsa_flatomfilesysentry_name_idx\" btree (name)\n Foreign-key constraints:\n \"objectid\" FOREIGN KEY (objectid) REFERENCES capsa_sys.master(objectid) ON DELETE CASCADE\n \n \nTom Lane <[email protected]> wrote: H Hale writes:\n> -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)\n> Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n> -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)\n> Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n\nWell, there's our estimation failure: 3238 rows expected, one row\nactual.\n\nWhat is the data distribution of flatomfilesysentry.objectid?\nIt looks from this example like it is unique or nearly so,\nbut the planner evidently does not think that.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\nTom, It is unique. Indexes: \"flatomfilesysentry_pkey\" PRIMARY KEY, btree (objectid) \"capsa_flatomfilesysentry_name_idx\" btree (name) Foreign-key constraints: \"objectid\" FOREIGN KEY (objectid) REFERENCES capsa_sys.master(objectid) ON DELETE CASCADE Tom Lane <[email protected]> wrote: H Hale writes:> -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)> Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)> -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)> Index Cond: (flatomfilesysentry.objectid =\n \"outer\".dstobj)Well, there's our estimation failure: 3238 rows expected, one rowactual.What is the data distribution of flatomfilesysentry.objectid?It looks from this example like it is unique or nearly so,but the planner evidently does not think that. regards, tom lane---------------------------(end of broadcast)---------------------------TIP 5: don't forget to increase your free space map settings",
"msg_date": "Mon, 31 Jul 2006 12:09:02 -0400 (EDT)",
"msg_from": "H Hale <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sub select performance due to seq scans "
},
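A quick cross-check (sketch) of the actual data against what the catalogs hold; a stale reltuples value left over from before a bulk load is one common reason for estimates like the one above:

```sql
-- Actual distinctness of the join column.
SELECT count(*) AS total_rows,
       count(DISTINCT objectid) AS distinct_objectids
FROM capsa.flatomfilesysentry;

-- Row count the planner has on record for the table.
SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname = 'flatomfilesysentry';
```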
{
"msg_contents": "Not sure if this helps solve the problem but...\n (see below)\n \n As new records are added Indexes are used for awhile and then at some point postgres switches to seq scan. It is repeatable. \n \n Any suggestions/comments to try and solve this are welcome. Thanks\n \n Data is as follows:\n capsa.flatommemberrelation 1458 records\n capsa.flatommemberrelation(srcobj) 3 distinct\n capsa.flatommemberrelation(dstobj) 730 distinct\n capsa.flatomfilesysentry 732 records\n capsa.flatommemberrelation(objectid) 732 distinct\n \n capsa=# set enable_seqscan=on;\n SET\n Time: 0.599 ms\n capsa=# explain analyze select count(*) from capsa.flatomfilesysentry where objectid in (select dstobj from capsa.flatommemberrelation where srcobj='9e5943e0-219f-11db-8504-001143214409');\n QUERY PLAN\n ----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=196.01..196.02 rows=1 width=0) (actual time=965.420..965.422 rows=1 loops=1)\n -> Nested Loop IN Join (cost=0.00..194.19 rows=728 width=0) (actual time=3.373..964.371 rows=729 loops=1)\n Join Filter: (\"outer\".objectid = \"inner\".dstobj)\n -> Seq Scan on flatomfilesysentry (cost=0.00..65.28 rows=728 width=16) (actual time=0.007..1.505 rows=732 loops=1)\n -> Seq Scan on flatommemberrelation (cost=0.00..55.12 rows=725 width=16) (actual time=0.004..0.848 rows=366 loops=732)\n Filter: (srcobj = '9e5943e0-219f-11db-8504-001143214409'::capsa_sys.uuid)\n Total runtime: 965.492 ms\n (7 rows)\n \n Time: 966.806 ms\n \n-----------------------------------------------------------------------------------------------------------\n capsa=# set enable_seqscan=off;\n SET\n Time: 0.419 ms\n capsa=# explain analyze select count(*) from capsa.flatomfilesysentry where objectid in (select dstobj from capsa.flatommemberrelation where srcobj='9e5943e0-219f-11db-8504-001143214409');\n QUERY PLAN\n --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=24847.73..24847.74 rows=1 width=0) (actual time=24.859..24.860 rows=1 loops=1)\n -> Nested Loop (cost=90.05..24845.91 rows=728 width=0) (actual time=2.946..23.640 rows=729 loops=1)\n -> Unique (cost=88.04..91.67 rows=363 width=16) (actual time=2.917..6.671 rows=729 loops=1)\n -> Sort (cost=88.04..89.86 rows=725 width=16) (actual time=2.914..3.998 rows=729 loops=1)\n Sort Key: flatommemberrelation.dstobj\n -> Bitmap Heap Scan on flatommemberrelation (cost=7.54..53.60 rows=725 width=16) (actual time=0.260..1.411 rows=729 loops=1)\n Recheck Cond: (srcobj = '9e5943e0-219f-11db-8504-001143214409'::capsa_sys.uuid)\n -> Bitmap Index Scan on capsa_flatommemberrelation_srcobj_idx (cost=0.00..7.54 rows=725 width=0) (actual time=0.244..0.244 rows=729 loops=1)\n Index Cond: (srcobj = '9e5943e0-219f-11db-8504-001143214409'::capsa_sys.uuid)\n -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..63.64 rows=364 width=16) (actual time=0.014..0.015 rows=1 loops=729)\n Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=364 width=0) (actual time=0.009..0.009 rows=1 loops=729)\n Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n Total runtime: 25.101 ms\n (14 rows)\n \n Time: 26.878 ms\n \n \n \n \n \n \nH Hale <[email protected]> wrote: Tom, \n \n It is unique. 
\n\n Indexes:\n \"flatomfilesysentry_pkey\" PRIMARY KEY, btree (objectid)\n \"capsa_flatomfilesysentry_name_idx\" btree (name)\n Foreign-key constraints:\n \"objectid\" FOREIGN KEY (objectid) REFERENCES capsa_sys.master(objectid) ON DELETE CASCADE\n \n \nTom Lane <[email protected]> wrote: H Hale writes:\n> -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)\n> Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n> -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)\n> Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n\nWell, there's our estimation failure: 3238 rows expected, one row\nactual.\n\nWhat is the data distribution of flatomfilesysentry.objectid?\nIt looks from this example like it is unique or nearly so,\nbut the planner evidently does not think that.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings",
"msg_date": "Tue, 1 Aug 2006 19:07:11 -0400 (EDT)",
"msg_from": "H Hale <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sub select performance due to seq scans "
},
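An alternative formulation that is sometimes worth comparing while chasing this kind of plan flip (an editorial sketch, not a claim that it fixes the underlying estimation problem): the same count expressed with EXISTS instead of IN, using the UUID literal from the example above.

```sql
EXPLAIN ANALYZE
SELECT count(*)
FROM capsa.flatomfilesysentry e
WHERE EXISTS (SELECT 1
              FROM capsa.flatommemberrelation r
              WHERE r.srcobj = '9e5943e0-219f-11db-8504-001143214409'
                AND r.dstobj = e.objectid);
```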
{
"msg_contents": "Initial testing was with data that essentially looks like a single collection with many items. \nI then changed this to have 60 collections of 50 items. \nThe result, much better (but not optimum) use of indexs, but a seq scan still\nused. \n\nTurning seq scan off, all indexes where used. \nQuery was much faster (1.5ms vs 300ms). \n\nI have tried to increase stats collection...\n\nalter table capsa.flatommemberrelation column srcobj set statistics 1000;\nalter table capsa.flatommemberrelation column dstobj set statistics 1000;\nalter table capsa.flatommemberrelation column objectid set statistics 1000;\nalter table capsa.flatomfilesysentry column objectid set statistics 1000;\nvacuum full analyze;\nExperimented with many postgres memory parameters.\nNo difference.\n\nIs seq scan off the solution here?\nMy tests are with a relatively small number of records.\nMy concern here is what happens with 100,000's of records and seq scan off?\nI will find out shortly...\n\nDoes anyone know of of any know issues with the query planner?\n\n\n Explain analyze results below.\n \n\ncapsa=# explain analyze select count(*) from capsa.flatomfilesysentry where\nobjectid in (select dstobj from capsa.flatommemberrelation where\nsrcobj='5bdef74c-21d3-11db-9a20-001143214409');\n \nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=742380.16..742380.17 rows=1 width=0) (actual\ntime=1520.269..1520.270 rows=1 loops=1)\n -> Nested Loop (cost=878.91..742355.41 rows=9899 width=0) (actual\ntime=41.516..1520.076 rows=56 loops=1)\n Join Filter: (\"inner\".objectid = \"outer\".dstobj)\n -> Unique (cost=437.03..453.67 rows=3329 width=16) (actual\ntime=0.241..0.624 rows=56 loops=1)\n -> Sort (cost=437.03..445.35 rows=3329 width=16) (actual\ntime=0.237..0.346 rows=56 loops=1)\n Sort Key: flatommemberrelation.dstobj\n -> Bitmap Heap Scan on flatommemberrelation \n(cost=30.65..242.26 rows=3329 width=16) (actual time=0.053..0.135 rows=56\nloops=1)\n Recheck Cond: (srcobj =\n'5bdef74c-21d3-11db-9a20-001143214409'::capsa_sys.uuid)\n -> Bitmap Index Scan on\ncapsa_flatommemberrelation_srcobj_idx (cost=0.00..30.65 rows=3329 width=0)\n(actual time=0.044..0.044 rows=56 loops=1)\n Index Cond: (srcobj =\n'5bdef74c-21d3-11db-9a20-001143214409'::capsa_sys.uuid)\n -> Materialize (cost=441.89..540.88 rows=9899 width=16) (actual\ntime=0.011..14.918 rows=9899 loops=56)\n -> Seq Scan on flatomfilesysentry (cost=0.00..431.99 rows=9899\nwidth=16) (actual time=0.005..19.601 rows=9899 loops=1)\n Total runtime: 1521.040 ms\n(13 rows)\n\ncapsa=# explain analyze select count(*) from capsa.flatomfilesysentry where\nobjectid in (select dstobj from capsa.flatommemberrelation where\nsrcobj='5bdef74c-21d3-11db-9a20-001143214409');\n \nQUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1486472.45..1486472.46 rows=1 width=0) (actual\ntime=2.112..2.113 rows=1 loops=1)\n -> Nested Loop (cost=439.03..1486447.70 rows=9899 width=0) (actual\ntime=0.307..2.019 rows=56 loops=1)\n -> Unique (cost=437.03..453.67 rows=3329 width=16) (actual\ntime=0.236..0.482 rows=56 loops=1)\n -> Sort (cost=437.03..445.35 rows=3329 width=16) (actual\ntime=0.233..0.306 rows=56 loops=1)\n Sort Key: flatommemberrelation.dstobj\n -> Bitmap Heap Scan on 
flatommemberrelation \n(cost=30.65..242.26 rows=3329 width=16) (actual time=0.047..0.132 rows=56\nloops=1)\n Recheck Cond: (srcobj =\n'5bdef74c-21d3-11db-9a20-001143214409'::capsa_sys.uuid)\n -> Bitmap Index Scan on\ncapsa_flatommemberrelation_srcobj_idx (cost=0.00..30.65 rows=3329 width=0)\n(actual time=0.038..0.038 rows=56 loops=1)\n Index Cond: (srcobj =\n'5bdef74c-21d3-11db-9a20-001143214409'::capsa_sys.uuid)\n -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..384.50\nrows=4950 width=16) (actual time=0.019..0.020 rows=1 loops=56)\n Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n -> Bitmap Index Scan on flatomfilesysentry_pkey \n(cost=0.00..2.00 rows=4950 width=0) (actual time=0.014..0.014 rows=1 loops=56)\n Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n Total runtime: 2.258 ms\n(14 rows)\n \n \n H Hale <[email protected]> wrote: Not sure if this helps solve the problem but...\n (see below)\n \n As new records are added Indexes are used for awhile and then at some point postgres switches to seq scan. It is repeatable. \n \n Any suggestions/comments to try and solve this are welcome. Thanks\n \n Data is as follows:\n capsa.flatommemberrelation 1458 records\n capsa.flatommemberrelation(srcobj) 3 distinct\n capsa.flatommemberrelation(dstobj) 730 distinct\n capsa.flatomfilesysentry 732 records\n capsa.flatommemberrelation(objectid) 732 distinct\n \n capsa=# set enable_seqscan=on;\n SET\n Time: 0.599 ms\n capsa=# explain analyze select count(*) from capsa.flatomfilesysentry where objectid in (select dstobj from capsa.flatommemberrelation where srcobj='9e5943e0-219f-11db-8504-001143214409');\n QUERY PLAN\n ----------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=196.01..196.02 rows=1 width=0) (actual time=965.420..965.422 rows=1 loops=1)\n -> Nested Loop IN Join (cost=0.00..194.19 rows=728 width=0) (actual time=3.373..964.371 rows=729 loops=1)\n Join Filter: (\"outer\".objectid = \"inner\".dstobj)\n -> Seq Scan on flatomfilesysentry (cost=0.00..65.28 rows=728 width=16) (actual time=0.007..1.505 rows=732 loops=1)\n -> Seq Scan on flatommemberrelation (cost=0.00..55.12 rows=725 width=16) (actual time=0.004..0.848 rows=366 loops=732)\n Filter: (srcobj = '9e5943e0-219f-11db-8504-001143214409'::capsa_sys.uuid)\n Total runtime: 965.492 ms\n (7 rows)\n \n Time: 966.806 ms\n \n-----------------------------------------------------------------------------------------------------------\n capsa=# set enable_seqscan=off;\n SET\n Time: 0.419 ms\n capsa=# explain analyze select count(*) from capsa.flatomfilesysentry where objectid in (select dstobj from capsa.flatommemberrelation where srcobj='9e5943e0-219f-11db-8504-001143214409');\n QUERY PLAN\n --------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=24847.73..24847.74 rows=1 width=0) (actual time=24.859..24.860 rows=1 loops=1)\n -> Nested Loop (cost=90.05..24845.91 rows=728 width=0) (actual time=2.946..23.640 rows=729 loops=1)\n -> Unique (cost=88.04..91.67 rows=363 width=16) (actual time=2.917..6.671 rows=729 loops=1)\n -> Sort (cost=88.04..89.86 rows=725 width=16) (actual time=2.914..3.998 rows=729 loops=1)\n Sort Key: flatommemberrelation.dstobj\n -> Bitmap Heap Scan on flatommemberrelation (cost=7.54..53.60 rows=725 width=16) (actual time=0.260..1.411 rows=729 loops=1)\n Recheck Cond: (srcobj 
= '9e5943e0-219f-11db-8504-001143214409'::capsa_sys.uuid)\n -> Bitmap Index Scan on capsa_flatommemberrelation_srcobj_idx (cost=0.00..7.54 rows=725 width=0) (actual time=0.244..0.244 rows=729 loops=1)\n Index Cond: (srcobj = '9e5943e0-219f-11db-8504-001143214409'::capsa_sys.uuid)\n -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..63.64 rows=364 width=16) (actual time=0.014..0.015 rows=1 loops=729)\n Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=364 width=0) (actual time=0.009..0.009 rows=1 loops=729)\n Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n Total runtime: 25.101 ms\n (14 rows)\n \n Time: 26.878 ms\n \n \n \n \n \n \nH Hale <[email protected]> wrote: Tom, \n \n It is unique. \n\n Indexes:\n \"flatomfilesysentry_pkey\" PRIMARY KEY, btree (objectid)\n \"capsa_flatomfilesysentry_name_idx\" btree (name)\n Foreign-key constraints:\n \"objectid\" FOREIGN KEY (objectid) REFERENCES capsa_sys.master(objectid) ON DELETE CASCADE\n \n \nTom Lane <[email protected]> wrote: H Hale writes:\n> -> Bitmap Heap Scan on flatomfilesysentry (cost=2.00..274.38 rows=3238 width=30) (actual time=0.011..0.013 rows=1 loops=6473)\n> Recheck Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n> -> Bitmap Index Scan on flatomfilesysentry_pkey (cost=0.00..2.00 rows=3238 width=0) (actual time=0.007..0.007 rows=1 loops=6473)\n> Index Cond: (flatomfilesysentry.objectid = \"outer\".dstobj)\n\nWell, there's our estimation failure: 3238 rows expected, one row\nactual.\n\nWhat is the data distribution of flatomfilesysentry.objectid?\nIt looks from this example like it is unique or nearly so,\nbut the planner evidently does not think that.\n\n regards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings",
"msg_date": "Wed, 2 Aug 2006 08:17:32 -0400 (EDT)",
"msg_from": "H Hale <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: sub select performance due to seq scans "
},
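One note on the statements above: as posted they appear to be missing the ALTER keyword before COLUMN, which the ALTER TABLE grammar requires. The accepted form, plus a way to confirm the new per-column target is in place before re-running ANALYZE, would look roughly like this (sketch):

```sql
ALTER TABLE capsa.flatommemberrelation ALTER COLUMN srcobj   SET STATISTICS 1000;
ALTER TABLE capsa.flatommemberrelation ALTER COLUMN dstobj   SET STATISTICS 1000;
ALTER TABLE capsa.flatomfilesysentry   ALTER COLUMN objectid SET STATISTICS 1000;
ANALYZE capsa.flatommemberrelation;
ANALYZE capsa.flatomfilesysentry;

-- Verify the target actually changed (-1 means "use the default").
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'capsa.flatomfilesysentry'::regclass
  AND attname  = 'objectid';
```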
{
"msg_contents": "On Wed, 2006-08-02 at 07:17, H Hale wrote:\n> Initial testing was with data that essentially looks like a single collection with many items. \n> I then changed this to have 60 collections of 50 items. \n> The result, much better (but not optimum) use of indexs, but a seq scan still\n> used. \n> \n> Turning seq scan off, all indexes where used. \n> Query was much faster (1.5ms vs 300ms). \n> \n> I have tried to increase stats collection...\n> \n> alter table capsa.flatommemberrelation column srcobj set statistics 1000;\n> alter table capsa.flatommemberrelation column dstobj set statistics 1000;\n> alter table capsa.flatommemberrelation column objectid set statistics 1000;\n> alter table capsa.flatomfilesysentry column objectid set statistics 1000;\n> vacuum full analyze;\n> Experimented with many postgres memory parameters.\n> No difference.\n> \n> Is seq scan off the solution here?\n\nIt almost never is the right answer.\n\n> My tests are with a relatively small number of records.\n> My concern here is what happens with 100,000's\n> of records and seq scan off?\n\nWhat you need to do is tune PostgreSQL to match your predicted usage\npatterns.\n\nWill most or all of your dataset always fit in RAM? Then you can tune\nrandom_page_cost down near 1.0 normally for large memory / small data\nset servers, 1.2 to 1.4 is about optimal. There will still be times\nwhen seq scan is a win. You can build a test data set of about the size\nyou'll expect to run in the future, and take a handful of the queries\nyou'll be running, and use more and less versions of those queries and\nexplain analyze to get an idea of about where random_page_cost should\nbe. Make sure analyze has been run and that the statistics are fairly\naccurate.\n\neffective_cache_size should be set to some reasonable size based on the\nsteady state size of your machine's kernel cache + disk buffers,\npreferably before you tune random_page_cost too much.\n\nThere are other numbers you can tune as well (the cpu cost ones in\nparticular). If you find yourself needing values of random_page_cost at\n1.0 or below to get the planner to make the right choices, then you've\ngot issues. Otherwise, if a number between 1.2 and 2.0 make it work\nright, you're likely set for a while.\n",
"msg_date": "Wed, 02 Aug 2006 11:48:16 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sub select performance due to seq scans"
},
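A sketch of what that advice could look like in practice on the 2 GB machine described earlier in the thread; the particular numbers are illustrative assumptions to be validated with EXPLAIN ANALYZE, not recommendations.

```sql
-- Try planner settings per session first; only move them into postgresql.conf
-- once they consistently produce better plans.
SET random_page_cost = 1.3;         -- within the 1.2 to 1.4 range suggested above
SET effective_cache_size = 131072;  -- in 8 kB pages: roughly 1 GB of expected OS cache

EXPLAIN ANALYZE
SELECT count(*)
FROM capsa.flatomfilesysentry
WHERE objectid IN (SELECT dstobj
                   FROM capsa.flatommemberrelation
                   WHERE srcobj = '5bdef74c-21d3-11db-9a20-001143214409');
```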
{
"msg_contents": "Hi, Scott and Hale,\n\nScott Marlowe wrote:\n> Make sure analyze has been run and that the statistics are fairly\n> accurate.\n\nIt might also help to increase the statistics_target on the column in\nquestion.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 07 Aug 2006 15:32:39 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: sub select performance due to seq scans"
}
] |
[
{
"msg_contents": "Hi group,\n\nthis is a directory tree query for a backup system (http:// \nsourceforge.net/projects/bacula).\nYou provide a path and get back the names of the children plus a \nboolean telling if the child has itself children.\nThe \"%@\" stands for the initial path:\n---------------------------------------------------------------\nexplain analyze SELECT X.name AS name, COUNT(CH) > 1 AS children\n FROM\n ( SELECT RTRIM( REPLACE( NLPC.path, '%@/', ''),'/') AS name,\n FN.name AS CH\n FROM\n ( SELECT P.path,P.pathid\n FROM bacula.path P\n WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n LEFT OUTER JOIN bacula.file F\n ON\n NLPC.pathid = F.pathid\n LEFT OUTER JOIN bacula.filename FN\n ON\n F.filenameid = FN.filenameid\n GROUP BY NLPC.path, FN.name\n UNION\n SELECT FN.name AS name, FN.name AS CH\n FROM\n bacula.path P, bacula.file F, bacula.filename FN\n WHERE\n P.path = '%@/' AND\n P.pathid = F.pathid AND\n F.filenameid = FN.filenameid\n ) AS X\n WHERE X.name <> ''\n GROUP BY X.name\n---------------------------------------------------------------\nThe 1st part of the union takes care of directories, the 2nd one of \nflat files.\nApplication allows user navigation in a browser (clicking on a name \nin one column triggers the query and fills the next browser column).\nInitial path of \"/Users/axel/Library/Preferences\" results in:\n---------------------------------------------------------------\n Sort (cost=1295.24..1295.47 rows=92 width=64) (actual \ntime=818.987..819.871 rows=527 loops=1)\n Sort Key: upper(x.name)\n -> GroupAggregate (cost=1288.56..1292.24 rows=92 width=64) \n(actual time=784.069..814.059 rows=527 loops=1)\n -> Unique (cost=1288.56..1289.25 rows=92 width=112) \n(actual time=783.931..809.708 rows=684 loops=1)\n -> Sort (cost=1288.56..1288.79 rows=92 width=112) \n(actual time=783.921..793.150 rows=5350 loops=1)\n Sort Key: name, ch\n -> Append (cost=642.03..1285.55 rows=92 \nwidth=112) (actual time=335.134..723.917 rows=5350 loops=1)\n -> Subquery Scan \"*SELECT* \n1\" (cost=642.03..643.18 rows=46 width=112) (actual \ntime=335.130..338.564 rows=184 loops=1)\n -> HashAggregate \n(cost=642.03..642.72 rows=46 width=112) (actual time=335.121..337.843 \nrows=184 loops=1)\n -> Nested Loop Left Join \n(cost=2.00..641.80 rows=46 width=112) (actual time=39.293..326.831 \nrows=1685 loops=1)\n -> Nested Loop Left \nJoin (cost=0.00..502.63 rows=46 width=97) (actual \ntime=21.026..202.206 rows=1685 loops=1)\n -> Index Scan \nusing path_name_idx on path p (cost=0.00..3.02 rows=1 width=97) \n(actual time=15.480..56.935 rows=27 loops=1)\n Index Cond: \n((path >= '/Users/axel/Library/Preferences/'::text) AND (path < '/ \nUsers/axel/Library/Preferences0'::text))\n Filter: \n((path ~ '^/Users/axel/Library/Preferences/[^/]*/$'::text) AND (rtrim \n(\"replace\"(path, '/Users/axel/Library/Preferences/'::text, ''::text), \n'/'::text) <> ''::text))\n -> Index Scan \nusing file_path_idx on file f (cost=0.00..493.28 rows=506 width=8) \n(actual time=0.473..5.119 rows=62 loops=27)\n Index Cond: \n(\"outer\".pathid = f.pathid)\n -> Bitmap Heap Scan on \nfilename fn (cost=2.00..3.01 rows=1 width=23) (actual \ntime=0.058..0.061 rows=1 loops=1685)\n Recheck Cond: \n(\"outer\".filenameid = fn.filenameid)\n -> Bitmap Index \nScan on filename_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.030..0.030 rows=1 loops=1685)\n Index Cond: \n(\"outer\".filenameid = fn.filenameid)\n -> Nested Loop (cost=2.00..641.91 \nrows=46 width=19) (actual time=3.349..377.758 rows=5166 loops=1)\n -> Nested Loop 
(cost=0.00..502.62 \nrows=46 width=4) (actual time=3.118..97.375 rows=5200 loops=1)\n -> Index Scan using \npath_name_idx on path p (cost=0.00..3.01 rows=1 width=4) (actual \ntime=0.045..0.052 rows=1 loops=1)\n Index Cond: (path = '/ \nUsers/axel/Library/Preferences/'::text)\n -> Index Scan using \nfile_path_idx on file f (cost=0.00..493.28 rows=506 width=8) (actual \ntime=3.058..76.014 rows=5200 loops=1)\n Index Cond: \n(\"outer\".pathid = f.pathid)\n -> Bitmap Heap Scan on filename \nfn (cost=2.00..3.02 rows=1 width=23) (actual time=0.037..0.039 \nrows=1 loops=5200)\n Recheck Cond: \n(\"outer\".filenameid = fn.filenameid)\n Filter: (name <> ''::text)\n -> Bitmap Index Scan on \nfilename_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.018..0.018 rows=1 loops=5200)\n Index Cond: \n(\"outer\".filenameid = fn.filenameid)\nTotal runtime: 832.458 ms\n---------------------------------------------------------------\nwhich is ok, but initial path of \"/Users/axel\" results in (which is \nnot ok):\n---------------------------------------------------------------\n Sort (cost=125533.67..125534.17 rows=200 width=64) (actual \ntime=84273.963..84274.260 rows=181 loops=1)\n Sort Key: upper(x.name)\n -> GroupAggregate (cost=123493.01..125526.03 rows=200 width=64) \n(actual time=84263.411..84272.427 rows=181 loops=1)\n -> Unique (cost=123493.01..124169.51 rows=90201 \nwidth=112) (actual time=84263.215..84270.129 rows=522 loops=1)\n -> Sort (cost=123493.01..123718.51 rows=90201 \nwidth=112) (actual time=84263.208..84265.632 rows=1432 loops=1)\n Sort Key: name, ch\n -> Append (cost=113172.83..116069.08 \nrows=90201 width=112) (actual time=83795.274..84251.660 rows=1432 \nloops=1)\n -> Subquery Scan \"*SELECT* \n1\" (cost=113172.83..115426.71 rows=90155 width=112) (actual \ntime=83795.270..83803.996 rows=410 loops=1)\n -> HashAggregate \n(cost=113172.83..114525.16 rows=90155 width=112) (actual \ntime=83795.258..83802.369 rows=410 loops=1)\n -> Hash Left Join \n(cost=3124.38..112722.06 rows=90155 width=112) (actual \ntime=56254.547..83779.903 rows=3648 loops=1)\n Hash Cond: \n(\"outer\".filenameid = \"inner\".filenameid)\n -> Merge Left Join \n(cost=0.00..106216.87 rows=90155 width=97) (actual \ntime=54926.198..82430.621 rows=3648 loops=1)\n Merge Cond: \n(\"outer\".pathid = \"inner\".pathid)\n -> Index Scan \nusing path_pkey on path p (cost=0.00..2567.57 rows=1941 width=97) \n(actual time=527.805..1521.911 rows=69 loops=1)\n Filter: \n((path ~ '^/Users/axel/[^/]*/$'::text) AND (rtrim(\"replace\"(path, '/ \nUsers/axel/'::text, ''::text), '/'::text) <> ''::text))\n -> Index Scan \nusing file_path_idx on file f (cost=0.00..95191.99 rows=3020363 \nwidth=8) (actual time=17.561..74392.318 rows=2941790 loops=1)\n -> Hash \n(cost=2723.30..2723.30 rows=160430 width=23) (actual \ntime=1299.103..1299.103 rows=160430 loops=1)\n -> Seq Scan on \nfilename fn (cost=0.00..2723.30 rows=160430 width=23) (actual \ntime=3.884..684.918 rows=160430 loops=1)\n -> Nested Loop (cost=2.00..641.91 \nrows=46 width=19) (actual time=93.252..442.196 rows=1022 loops=1)\n -> Nested Loop (cost=0.00..502.62 \nrows=46 width=4) (actual time=49.375..209.694 rows=1050 loops=1)\n -> Index Scan using \npath_name_idx on path p (cost=0.00..3.01 rows=1 width=4) (actual \ntime=29.455..29.462 rows=1 loops=1)\n Index Cond: (path = '/ \nUsers/axel/'::text)\n -> Index Scan using \nfile_path_idx on file f (cost=0.00..493.28 rows=506 width=8) (actual \ntime=19.898..175.869 rows=1050 loops=1)\n Index Cond: \n(\"outer\".pathid = f.pathid)\n -> Bitmap 
Heap Scan on filename \nfn (cost=2.00..3.02 rows=1 width=23) (actual time=0.206..0.208 \nrows=1 loops=1050)\n Recheck Cond: \n(\"outer\".filenameid = fn.filenameid)\n Filter: (name <> ''::text)\n -> Bitmap Index Scan on \nfilename_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.087..0.087 rows=1 loops=1050)\n Index Cond: \n(\"outer\".filenameid = fn.filenameid)\n Total runtime: 84295.927 ms\n---------------------------------------------------------------\nIt happened once that the planner resolved the 2nd query with the 1st \nplan, but this is not reproducible.\nHow can I avoid the 2nd plan?\n\nThis is 8.1.4 on OpenBSD 3.9 with 2x1GHz PIII and 2GB.\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 12:48:11 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "directory tree query with big planner variation"
},
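One way to narrow down where the second plan goes wrong is to isolate the row estimate for the directory-pattern condition itself; in the slow plan the planner expected 1941 matching paths but found 69, and that misestimate is what pushes it into the merge join over the whole file table. A sketch using the literal path from the slow case:

```sql
-- How far off is the estimate for just the pattern on bacula.path?
EXPLAIN ANALYZE
SELECT pathid, path
FROM bacula.path
WHERE path ~ '^/Users/axel/[^/]*/$';
```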
{
"msg_contents": "On Mon, Jul 31, 2006 at 12:48:11PM +0200, Axel Rau wrote:\n> WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n\nThis can't be indexed. You might try something like \nWHERE P.path LIKE '%@%' AND P.path ~ '^%@/[^/]*/$'\n\nThe schema could be a lot more intelligent here. (E.g., store path \nseperately from file/directory name, store type (file or directory) \nseperately, etc.) Without improving the schema I don't think this will \never be a speed demon.\n\nMike Stone\n",
"msg_date": "Mon, 31 Jul 2006 07:15:42 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "\nAm 31.07.2006 um 13:15 schrieb Michael Stone:\n\n> On Mon, Jul 31, 2006 at 12:48:11PM +0200, Axel Rau wrote:\n>> WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n>\n> This can't be indexed. You might try something like WHERE P.path \n> LIKE '%@%' AND P.path ~ '^%@/[^/]*/$'\nWhy does it quite well in this case:\n---------------------------------------\n-> Index Scan using path_name_idx on path p (cost=0.00..3.02 rows=1 \nwidth=97) (actual time=15.480..56.935 rows=27 loops=1)\n Index Cond: ((path >= '/Users/axel/Library/ \nPreferences/'::text) AND (path < '/Users/axel/Library/ \nPreferences0'::text))\n Filter: ((path ~ '^/Users/axel/Library/Preferences/[^/]*/ \n$'::text) AND (rtrim(\"replace\"(path, '/Users/axel/Library/ \nPreferences/'::text, ''::text), '/'::text) <> ''::text))\n---------------------------------------\nas compared to this case(ignoring the index on path):\n---------------------------------------\n-> Index Scan using path_pkey on path p (cost=0.00..2567.57 \nrows=1941 width=97) (actual time=527.805..1521.911 rows=69 loops=1)\n Filter: ((path ~ '^/Users/axel/[^/]*/$'::text) AND (rtrim \n(\"replace\"(path, '/Users/axel/'::text, ''::text), '/'::text) <> \n''::text))\n---------------------------------------\n? With all longer path names, I get the above (good)result.\nShould I put the rtrim/replace on the client side?\n>\n> The schema could be a lot more intelligent here. (E.g., store path \n> seperately from file/directory name, store type (file or directory) \n> seperately, etc.) Without improving the schema I don't think this \n> will ever be a speed demon.\nPATH holds complete pathnames of directories, FILENAME holds \nfilenames and pathname components.\nCurrently the schema is the lowest common denominator between SQLite, \nMySQL and pg and the bacula people will stay with that (-;).\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 13:54:24 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "On Mon, Jul 31, 2006 at 01:54:24PM +0200, Axel Rau wrote:\n>Am 31.07.2006 um 13:15 schrieb Michael Stone:\n>>On Mon, Jul 31, 2006 at 12:48:11PM +0200, Axel Rau wrote:\n>>> WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n>>\n>>This can't be indexed. You might try something like WHERE P.path \n>>LIKE '%@%' AND P.path ~ '^%@/[^/]*/$'\n\nIgnore that, I wasn't awake yet.\n\n>Why does it quite well in this case:\n>---------------------------------------\n>-> Index Scan using path_name_idx on path p (cost=0.00..3.02 rows=1 \n>width=97) (actual time=15.480..56.935 rows=27 loops=1)\n> Index Cond: ((path >= '/Users/axel/Library/ \n>Preferences/'::text) AND (path < '/Users/axel/Library/ \n>Preferences0'::text))\n> Filter: ((path ~ '^/Users/axel/Library/Preferences/[^/]*/ \n>$'::text) AND (rtrim(\"replace\"(path, '/Users/axel/Library/ \n>Preferences/'::text, ''::text), '/'::text) <> ''::text))\n>---------------------------------------\n>as compared to this case(ignoring the index on path):\n>---------------------------------------\n>-> Index Scan using path_pkey on path p (cost=0.00..2567.57 \n>rows=1941 width=97) (actual time=527.805..1521.911 rows=69 loops=1)\n> Filter: ((path ~ '^/Users/axel/[^/]*/$'::text) AND (rtrim \n>(\"replace\"(path, '/Users/axel/'::text, ''::text), '/'::text) <> \n>''::text))\n>---------------------------------------\n>? With all longer path names, I get the above (good)result.\n>Should I put the rtrim/replace on the client side?\n\nThat's not the slow part. The slow part is retrieving every single file \nfor each of the subdirectories in order to determine whether there are \nany files in the subdirectories. \n\n>>The schema could be a lot more intelligent here. (E.g., store path \n>>seperately from file/directory name, store type (file or directory) \n>>seperately, etc.) Without improving the schema I don't think this \n>>will ever be a speed demon.\n\n>PATH holds complete pathnames of directories, FILENAME holds \n>filenames and pathname components.\n>Currently the schema is the lowest common denominator between SQLite, \n>MySQL and pg and the bacula people will stay with that (-;).\n\nNothing I suggested raises the bar for the \"lowest common denominator\". \nIf I understand the intend of this SQL, you're pulling all the entries\nin a directory in two parts. The first part (files) is fairly \nstraightforward. The second part (directories) consists of pulling any \nfile whose parent is a subdirectory of the directory you're looking for \n(this is *all* children of the directory, since you have to retrieve \nevery element that begins with the directory, then discard those that \nhave an additional / in their name), counting how many of these there \nare for each subdirectory, and discarding those results except for a \nbinary (yes there are children or no there aren't). This is a lot of \nuseless work to go through, and is going to be slow if you've got a lot \nof stuff in a subdirectory. An alternative approach would be, for each \ndirectory, to store all its children (files and subdirectories) along \nwith a flag indicating which it is. 
This would allow you to create the \ncollapsed tree view without walking all the children of a subdirectory.\n\nAssuming you can't make changes to the schema, what about the query?\nYou've got this:\n\nexplain analyze SELECT X.name AS name, COUNT(CH) > 1 AS children\n FROM\n ( SELECT RTRIM( REPLACE( NLPC.path, '%@/', ''),'/') AS name,\n FN.name AS CH\n FROM\n ( SELECT P.path,P.pathid\n FROM bacula.path P\n WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n LEFT OUTER JOIN bacula.file F\n ON\n NLPC.pathid = F.pathid\n LEFT OUTER JOIN bacula.filename FN\n ON\n F.filenameid = FN.filenameid\n GROUP BY NLPC.path, FN.name\n UNION\n SELECT FN.name AS name, FN.name AS CH\n FROM\n bacula.path P, bacula.file F, bacula.filename FN\n WHERE\n P.path = '%@/' AND\n P.pathid = F.pathid AND\n F.filenameid = FN.filenameid\n ) AS X\n WHERE X.name <> ''\n GROUP BY X.name\n\nI'm only looking at the first part, which reduces to \nSELECT X.name AS name, COUNT(CH) > 1 AS children FROM\n SELECT NLPC.path AS name, FN.name as CH\n FROM ( SELECT P.path,P.pathid FROM bacula.path AS NLPC\n LEFT OUTER JOIN bacula.file F ON NLPC.pathid=F.pathid\n LEFT OUTER JOIN bacula.filename FN ON F.filenameid=FN.filenameid\n GROUP BY NLPC.path,FN.name\n\nWhy is the filename table even accessed? Would the results be the \nsame if you did\n SELECT NLPC.path AS name, F.fileid AS CH\nand drop the LEFT OUTER JOIN bacula.filename altogether?\n\nAnd then what happens if you try something like \nSELECT X.name,X.children\n FROM \n (SELECT [rtrim]P.path,(SELECT count(*) FROM bacula.file F \n WHERE F.pathid = P.pathid\n LIMIT 2) > 1\n FROM bacula.path P\n WHERE P.path ~ '^%@/[^/]*/$'\n UNION\n SELECT FN.name,0\n FROM bacula.path P, bacula.file F, bacula.filename FN\n WHERE\n P.path = '%@/' AND\n P.pathid = F.pathid AND\n F.filenameid = FN.filenameid\n ) AS X\n WHERE X.name <> ''\n GROUP BY X.name\n\nIt's hard to say without knowing what's actually *in* the tables, but \nthe existing query definately doesn't scale well for what I think it's \ntrying to do.\n\nMike Stone\n",
"msg_date": "Mon, 31 Jul 2006 09:30:37 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "It seems like you might be able to avoid the expensive directory lookups\nentirely without changing the schema by defining an immutable function\ndir_depth(path), which would just count the number of forward slashes.\nThen create a functional index on dir_depth(path) and in the query do a\ncheck for directories with a given prefix and the expected dir_depth.\n\nIn 8.1 and later, this kind of query might be able to use a bitmap index\ncombining thingamajigger (the actual name escapes me right now).\n\nThis is just a hunch and I haven't looked too carefully at the\nschema/query requirements to see if its feasible, but seems like a\nreasonable approach.\n\n-- Mark Lewis\n\n\nOn Mon, 2006-07-31 at 09:30 -0400, Michael Stone wrote:\n> On Mon, Jul 31, 2006 at 01:54:24PM +0200, Axel Rau wrote:\n> >Am 31.07.2006 um 13:15 schrieb Michael Stone:\n> >>On Mon, Jul 31, 2006 at 12:48:11PM +0200, Axel Rau wrote:\n> >>> WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n> >>\n> >>This can't be indexed. You might try something like WHERE P.path \n> >>LIKE '%@%' AND P.path ~ '^%@/[^/]*/$'\n> \n> Ignore that, I wasn't awake yet.\n> \n> >Why does it quite well in this case:\n> >---------------------------------------\n> >-> Index Scan using path_name_idx on path p (cost=0.00..3.02 rows=1 \n> >width=97) (actual time=15.480..56.935 rows=27 loops=1)\n> > Index Cond: ((path >= '/Users/axel/Library/ \n> >Preferences/'::text) AND (path < '/Users/axel/Library/ \n> >Preferences0'::text))\n> > Filter: ((path ~ '^/Users/axel/Library/Preferences/[^/]*/ \n> >$'::text) AND (rtrim(\"replace\"(path, '/Users/axel/Library/ \n> >Preferences/'::text, ''::text), '/'::text) <> ''::text))\n> >---------------------------------------\n> >as compared to this case(ignoring the index on path):\n> >---------------------------------------\n> >-> Index Scan using path_pkey on path p (cost=0.00..2567.57 \n> >rows=1941 width=97) (actual time=527.805..1521.911 rows=69 loops=1)\n> > Filter: ((path ~ '^/Users/axel/[^/]*/$'::text) AND (rtrim \n> >(\"replace\"(path, '/Users/axel/'::text, ''::text), '/'::text) <> \n> >''::text))\n> >---------------------------------------\n> >? With all longer path names, I get the above (good)result.\n> >Should I put the rtrim/replace on the client side?\n> \n> That's not the slow part. The slow part is retrieving every single file \n> for each of the subdirectories in order to determine whether there are \n> any files in the subdirectories. \n> \n> >>The schema could be a lot more intelligent here. (E.g., store path \n> >>seperately from file/directory name, store type (file or directory) \n> >>seperately, etc.) Without improving the schema I don't think this \n> >>will ever be a speed demon.\n> \n> >PATH holds complete pathnames of directories, FILENAME holds \n> >filenames and pathname components.\n> >Currently the schema is the lowest common denominator between SQLite, \n> >MySQL and pg and the bacula people will stay with that (-;).\n> \n> Nothing I suggested raises the bar for the \"lowest common denominator\". \n> If I understand the intend of this SQL, you're pulling all the entries\n> in a directory in two parts. The first part (files) is fairly \n> straightforward. 
The second part (directories) consists of pulling any \n> file whose parent is a subdirectory of the directory you're looking for \n> (this is *all* children of the directory, since you have to retrieve \n> every element that begins with the directory, then discard those that \n> have an additional / in their name), counting how many of these there \n> are for each subdirectory, and discarding those results except for a \n> binary (yes there are children or no there aren't). This is a lot of \n> useless work to go through, and is going to be slow if you've got a lot \n> of stuff in a subdirectory. An alternative approach would be, for each \n> directory, to store all its children (files and subdirectories) along \n> with a flag indicating which it is. This would allow you to create the \n> collapsed tree view without walking all the children of a subdirectory.\n> \n> Assuming you can't make changes to the schema, what about the query?\n> You've got this:\n> \n> explain analyze SELECT X.name AS name, COUNT(CH) > 1 AS children\n> FROM\n> ( SELECT RTRIM( REPLACE( NLPC.path, '%@/', ''),'/') AS name,\n> FN.name AS CH\n> FROM\n> ( SELECT P.path,P.pathid\n> FROM bacula.path P\n> WHERE P.path ~ '^%@/[^/]*/$' ) AS NLPC\n> LEFT OUTER JOIN bacula.file F\n> ON\n> NLPC.pathid = F.pathid\n> LEFT OUTER JOIN bacula.filename FN\n> ON\n> F.filenameid = FN.filenameid\n> GROUP BY NLPC.path, FN.name\n> UNION\n> SELECT FN.name AS name, FN.name AS CH\n> FROM\n> bacula.path P, bacula.file F, bacula.filename FN\n> WHERE\n> P.path = '%@/' AND\n> P.pathid = F.pathid AND\n> F.filenameid = FN.filenameid\n> ) AS X\n> WHERE X.name <> ''\n> GROUP BY X.name\n> \n> I'm only looking at the first part, which reduces to \n> SELECT X.name AS name, COUNT(CH) > 1 AS children FROM\n> SELECT NLPC.path AS name, FN.name as CH\n> FROM ( SELECT P.path,P.pathid FROM bacula.path AS NLPC\n> LEFT OUTER JOIN bacula.file F ON NLPC.pathid=F.pathid\n> LEFT OUTER JOIN bacula.filename FN ON F.filenameid=FN.filenameid\n> GROUP BY NLPC.path,FN.name\n> \n> Why is the filename table even accessed? Would the results be the \n> same if you did\n> SELECT NLPC.path AS name, F.fileid AS CH\n> and drop the LEFT OUTER JOIN bacula.filename altogether?\n> \n> And then what happens if you try something like \n> SELECT X.name,X.children\n> FROM \n> (SELECT [rtrim]P.path,(SELECT count(*) FROM bacula.file F \n> WHERE F.pathid = P.pathid\n> LIMIT 2) > 1\n> FROM bacula.path P\n> WHERE P.path ~ '^%@/[^/]*/$'\n> UNION\n> SELECT FN.name,0\n> FROM bacula.path P, bacula.file F, bacula.filename FN\n> WHERE\n> P.path = '%@/' AND\n> P.pathid = F.pathid AND\n> F.filenameid = FN.filenameid\n> ) AS X\n> WHERE X.name <> ''\n> GROUP BY X.name\n> \n> It's hard to say without knowing what's actually *in* the tables, but \n> the existing query definately doesn't scale well for what I think it's \n> trying to do.\n> \n> Mike Stone\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Mon, 31 Jul 2006 06:53:21 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: directory tree query with big planner variation"
},
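A minimal sketch of the dir_depth() idea Mark describes, using the bacula schema shown later in the thread (the function and index names here are illustrative and this is untested):

CREATE OR REPLACE FUNCTION dir_depth(text) RETURNS integer AS $$
    SELECT length($1) - length(replace($1, '/', ''));
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX path_depth_idx ON bacula.path (dir_depth(path));

-- immediate subdirectories of a prefix are exactly one slash deeper
SELECT p.path
  FROM bacula.path p
 WHERE p.path LIKE '/Users/axel/%'
   AND dir_depth(p.path) = dir_depth('/Users/axel/') + 1;

If the prefix test can use the existing index on path (the earlier plans suggest it can), 8.1 may combine it with the dir_depth index via a bitmap AND; whether that actually beats the regex plan would have to be measured on the real data.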
{
"msg_contents": "\nAm 31.07.2006 um 15:30 schrieb Michael Stone:\n\n> If I understand the intend of this SQL,\nLet me show the tables first:\n Table \"bacula.path\" ( 65031 rows)\nColumn | Type | Modifiers\n--------+--------- \n+-------------------------------------------------------\npathid | integer | not null default nextval('path_pathid_seq'::regclass)\npath | text | not null ( complete pathnames of all \ndirectories )\nIndexes:\n \"path_pkey\" PRIMARY KEY, btree (pathid)\n \"path_name_idx\" btree (path)\n\n Table \"bacula.file\" (3021903 rows)\n Column | Type | Modifiers\n------------+--------- \n+-------------------------------------------------------\nfileid | integer | not null default nextval \n('file_fileid_seq'::regclass)\nfileindex | integer | not null default 0\njobid | integer | not null\npathid | integer | not null\t\t\t\t(FK)\nfilenameid | integer | not null\t\t\t\t(FK)\nmarkid | integer | not null default 0\nlstat | text | not null\nmd5 | text | not null\nIndexes:\n \"file_pkey\" PRIMARY KEY, btree (fileid)\n \"file_fp_idx\" btree (filenameid, pathid)\n \"file_jobid_idx\" btree (jobid)\n \"file_path_idx\" btree (pathid)\n\n Table \"bacula.filename\" ( 160559 rows)\n Column | Type | Modifiers\n------------+--------- \n+---------------------------------------------------------------\nfilenameid | integer | not null default nextval \n('filename_filenameid_seq'::regclass)\nname | text | not null\nIndexes:\n \"filename_pkey\" PRIMARY KEY, btree (filenameid)\n \"filename_name_idx\" btree (name)\n\nAnd now the query;\n\nTask: Return the names of subdirectories and files immediately below \na given path. For each none-empty subdirectory return children=true.\nThe 1st part of the union selects all subdirecories (per regex) and \nthe flatfiles contained in them plus one entry for the subdirectory \nitself (left outer joins). More than one joined filename means: \"The \nsubdirectory has children\".\nThe 2nd part of the union returns all flatfiles, contained in the \ngiven path.\nThe surrounding SELECT removes the given path and the trailing \"/\" \nkeeping only the subdirectory names from the pathnames, so they can \nbe merged with the flatfile names.\n\n> you're pulling all the entries\n> in a directory in two parts. The first\n(second)\n> part (files) is fairly straightforward. The second\n(first)\n> part (directories) consists of pulling any file whose parent is a \n> subdirectory of the directory you're looking for (this is *all* \n> children of the directory, since you have to retrieve every element \n> that begins with the directory, then discard those that have an \n> additional / in their name), counting how many of these there are \n> for each subdirectory, and discarding those results except for a \n> binary (yes there are children or no there aren't). This is a lot \n> of useless work to go through, and is going to be slow if you've \n> got a lot of stuff in a subdirectory.\nI agree, but did not yet find another way.\n> An alternative approach would be, for each directory, to store all \n> its children (files and subdirectories) along with a flag \n> indicating which it is. 
This would allow you to create the \n> collapsed tree view without walking all the children of a \n> subdirectory.\nPerhaps in a temporary table?\n>\n> Assuming you can't make changes to the schema, what about the query?\nCan be changed.\n> You've got this:\nPlease reconsider your proposals with the above\n\n> It's hard to say without knowing what's actually *in* the tables, \n> but the existing query definately doesn't scale well for what I \n> think it's trying to do.\n>\n> Mike Stone\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 17:06:00 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "On Mon, Jul 31, 2006 at 05:06:00PM +0200, Axel Rau wrote:\n>Please reconsider your proposals with the above\n\nI'm not sure what you're getting at; could you be more specific?\n\nMike Stone\n",
"msg_date": "Mon, 31 Jul 2006 11:21:44 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "\nAm 31.07.2006 um 17:21 schrieb Michael Stone:\n\n> On Mon, Jul 31, 2006 at 05:06:00PM +0200, Axel Rau wrote:\n>> Please reconsider your proposals with the above\n>\n> I'm not sure what you're getting at; could you be more specific?\nLet's see...\n\n\nAm 31.07.2006 um 15:30 schrieb Michael Stone:\n> And then what happens if you try something like SELECT \n> X.name,X.children\n> FROM (SELECT [rtrim]P.path,(SELECT count(*) FROM bacula.file F\nThe file table is the biggest one, because it contains one row per \nbackup job and file (see my column description).\nYou need the filename table here.\n> WHERE F.pathid = P.pathid\n> LIMIT 2) > 1\n> FROM bacula.path P\n> WHERE P.path ~ '^%@/[^/]*/$'\n> UNION\n> SELECT FN.name,0\n> FROM bacula.path P, bacula.file F, bacula.filename FN\n> WHERE\n> P.path = '%@/' AND\n> P.pathid = F.pathid AND\n> F.filenameid = FN.filenameid\n> ) AS X\n> WHERE X.name <> ''\n> GROUP BY X.name\n\nTweaking your query and omitting the RTRIM/REPLACE stuff, I get:\n-------------------------------\nSELECT X.path,X.children\nFROM (SELECT P.path,(SELECT count(*) FROM bacula.file F, \nbacula.filename FN WHERE F.pathid = \nP.pathid AND F.filenameid = FN.filenameid\n LIMIT 2) > 1 AS children\n FROM bacula.path P\n WHERE P.path ~ '^/Users/axel/ports/[^/]*/$'\n UNION\n SELECT FN.name,0=1\n FROM bacula.path P, bacula.file F, bacula.filename FN\n WHERE\n P.path = '/Users/axel/ports/' AND\n P.pathid = F.pathid AND\n F.filenameid = FN.filenameid\n ) AS X\nWHERE X.path <> ''\nGROUP BY X.path, X.children ;\n path | children\n------------------------------+----------\n.cvsignore | f\n/Users/axel/ports/CVS/ | t\n/Users/axel/ports/archivers/ | t\nINDEX | f\nMakefile | f\nREADME | f\n(6 rows)\n\nTime: 35.221 ms\n-------------------------------\nWhile my version returns:\n-------------------------------\n name | children\n------------+----------\n.cvsignore | f\narchivers | t\nCVS | t\nINDEX | f\nMakefile | f\nREADME | f\n(6 rows)\n\nTime: 30.263 ms\n------------+----------\nHow would you complete your version?\n\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 17:54:41 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "\nAm 31.07.2006 um 17:54 schrieb Axel Rau:\n>\n> Tweaking your query and omitting the RTRIM/REPLACE stuff, I get:\nMy example did not cover the case of empty subdirectories, in which \ncase your simplified query fails:\n-------------------------------\n path | children\n------------------------------+----------\n.DS_Store | f\n/Users/axel/Projects/ADMIN/ | t\n/Users/axel/Projects/DB/ | t\n/Users/axel/Projects/HW/ | t\n/Users/axel/Projects/JETSEC/ | t\n/Users/axel/Projects/MISC/ | t\n/Users/axel/Projects/NET/ | t\n/Users/axel/Projects/SW/ | t\n/Users/axel/Projects/TOOLS/ | t\n(9 rows)\n-------------------------------\nWhere it shoould be:\n-------------------------------\n name | children\n-----------+----------\n.DS_Store | f\nADMIN | t\nDB | t\nHW | f\nJETSEC | f\nMISC | f\nNET | t\nSW | t\nTOOLS | t\n(9 rows)\n-------------------------------\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 18:19:06 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: directory tree query with big planner variation"
},
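One way to keep Michael's short-circuiting subquery while fixing the empty-directory case above is to skip the '' filename entries for the directory itself (which, with more than one backup job, can push the count over 1 even when the directory is empty) and test for existence instead of counting. A sketch only, untested, and like the original it only flags flat files rather than nested subdirectories:

SELECT P.path,
       EXISTS (SELECT 1
                 FROM bacula.file F
                 JOIN bacula.filename FN ON FN.filenameid = F.filenameid
                WHERE F.pathid = P.pathid
                  AND FN.name <> '') AS children
  FROM bacula.path P
 WHERE P.path ~ '^/Users/axel/Projects/[^/]*/$';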
{
"msg_contents": "\nAm 31.07.2006 um 15:53 schrieb Mark Lewis:\n\n> It seems like you might be able to avoid the expensive directory \n> lookups\n> entirely without changing the schema by defining an immutable function\n> dir_depth(path), which would just count the number of forward slashes.\n> Then create a functional index on dir_depth(path) and in the query \n> do a\n> check for directories with a given prefix and the expected dir_depth.\nStill I must check for flatfiles in those subdirectories...\nSee my clarification here\n\thttp://archives.postgresql.org/pgsql-performance/2006-07/msg00311.php\n\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 18:20:56 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "On Mon, Jul 31, 2006 at 05:54:41PM +0200, Axel Rau wrote:\n>The file table is the biggest one, because it contains one row per \n>backup job and file (see my column description).\n\nI never saw a column description--that would certainly help. :) I saw a \nschema, but not an explanation of what the elements do. From what I can \nunderstand of what you're saying, it is sounding as though the \nbacula.file table contains an entry for the subdirectory itself as well \nas entries for each file in the subdirectory? And the reason you need to \njoin back to the filename table is that there may be multiple copies of \nthe filename from multiple backups? Does the subdirectory itself have an \nentry in the filename table? What is the content of the lstat column; can \nit be used to distinguish a file from a directory? Similarly for the md5 \ncolumn--what would it contain for a directory?\n\nMike Stone\n",
"msg_date": "Mon, 31 Jul 2006 13:08:36 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "\nAm 31.07.2006 um 19:08 schrieb Michael Stone:\n\n> I never saw a column description--that would certainly help. :) I \n> saw a schema, but not an explanation of what the elements do. From \n> what I can understand of what you're saying, it is sounding as \n> though the bacula.file table contains an entry for the subdirectory \n> itself as well as entries for each file in the subdirectory?\nIt is the junction relation between path and filename and job and \ndescribes\n1. which files (identified by bacula.filename) are in a directory \n(identified by bacula.path)\n2. For each of those files they record a snapshot with \ncharacteristics (lstat [base64 encoded], md5-checksum and a backup- \njob [via jobid], which itself has backup-time etc.)\n> And the reason you need to join back to the filename table is that \n> there may be multiple copies of the filename from multiple backups?\nOne entry per backup(job) for each bacula.path/bacula.filename pair \nin bacula.file.\n> Does the subdirectory itself have an entry in the filename table?\nYes. Directories reference an entry containing '' in \nbacula.filename.name.\n> What is the content of the lstat column\nFile status info -- see stat(2).\n> ; can it be used to distinguish a file from a directory?\nYes, the S_IFDIR bit identifies directories, but the whole lstat \ncolumn is base64 encoded\n> Similarly for the md5 column--what would it contain for a directory?\nIt seems to contain 0.\n\nAxel\nAxel Rau, ☀Frankfurt , Germany +49-69-951418-0\n\n\n",
"msg_date": "Mon, 31 Jul 2006 20:49:00 +0200",
"msg_from": "Axel Rau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: directory tree query with big planner variation"
},
{
"msg_contents": "I am looking for some general guidelines on what is the performance\noverhead of enabling point-in-time recovery (archive_command config) on\nan 8.1 database. Obviously it will depend on a multitude of factors, but\nsome broad-brush statements and/or anecdotal evidence will suffice.\nShould one worry about its performance implications? Also, what can one\ndo to mitigate it? \n\nThanks,\n\nGeorge\n",
"msg_date": "Tue, 1 Aug 2006 09:15:41 -0700",
"msg_from": "\"George Pavlov\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "PITR performance overhead?"
},
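For reference, turning PITR on in 8.1 comes down to setting an archive_command in postgresql.conf; a minimal sketch (the destination directory is only an example):

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

%p expands to the path of the WAL segment and %f to its file name. Most of the tunable overhead lives in that command: copying segments to slow or remote storage is what tends to show up, not the archiving machinery itself.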
{
"msg_contents": "In response to \"George Pavlov\" <[email protected]>:\n\n> I am looking for some general guidelines on what is the performance\n> overhead of enabling point-in-time recovery (archive_command config) on\n> an 8.1 database. Obviously it will depend on a multitude of factors, but\n> some broad-brush statements and/or anecdotal evidence will suffice.\n> Should one worry about its performance implications? Also, what can one\n> do to mitigate it? \n\nPrior to implementing PITR, I did some testing to see what kind of\noverhead it would add. It was negligible. I don't remember the details,\nbut I seem to remember the performance hit was barely measurable.\n\nNote that in our usage scenarios, we have very little IO compared to\nCPU usage. The result is that our DB servers have plenty of disk\nbandwidth to spare. Since the log backup occurs as a background\nprocess, it made almost no difference in our tests. If your DB is\nvery IO intensive, you may have different results.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n",
"msg_date": "Tue, 1 Aug 2006 12:59:00 -0400",
"msg_from": "Bill Moran <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PITR performance overhead?"
},
{
"msg_contents": "On 8/1/06, George Pavlov <[email protected]> wrote:\n> I am looking for some general guidelines on what is the performance\n> overhead of enabling point-in-time recovery (archive_command config) on\n> an 8.1 database. Obviously it will depend on a multitude of factors, but\n> some broad-brush statements and/or anecdotal evidence will suffice.\n> Should one worry about its performance implications? Also, what can one\n> do to mitigate it?\n\npitr is extremely cheap both in performance drag and administation\noverhead for the benefits it provides. it comes almost for free, just\nmake sure you can handle all the wal files and do sane backup\nscheduling. in fact, pitr can actually reduce the load on a server\ndue to running less frequent backups. if your server is heavy i/o\nloaded, it might take a bit of planning.\n\nmerlin\n",
"msg_date": "Tue, 1 Aug 2006 21:17:02 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PITR performance overhead?"
},
{
"msg_contents": "If your server is heavily I/O bound AND you care about your data AND your\nare throwing out your WAL files in the middle of the day... You are headed\nfor a cliff.\n\nI'm sure this doesn't apply to anyone on this thread, just a general\nreminder to all you DBA's out there who sometimes are too busy to implement\nPITR until after a disaster strikes. I know that in the past I've\npersonally been guilty of this on several occasions.\n\n--Denis\n EnterpriseDB (yeah, rah, rah...)\n\nOn 8/1/06, Merlin Moncure <[email protected]> wrote:\n>\n> On 8/1/06, George Pavlov <[email protected]> wrote:\n> > I am looking for some general guidelines on what is the performance\n> > overhead of enabling point-in-time recovery (archive_command config) on\n> > an 8.1 database. Obviously it will depend on a multitude of factors, but\n> > some broad-brush statements and/or anecdotal evidence will suffice.\n> > Should one worry about its performance implications? Also, what can one\n> > do to mitigate it?\n>\n> pitr is extremely cheap both in performance drag and administation\n> overhead for the benefits it provides. it comes almost for free, just\n> make sure you can handle all the wal files and do sane backup\n> scheduling. in fact, pitr can actually reduce the load on a server\n> due to running less frequent backups. if your server is heavy i/o\n> loaded, it might take a bit of planning.\n>\n> merlin\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\nIf your server is heavily I/O bound AND you care about your data AND your are throwing out your WAL files in the middle of the day... You are headed for a cliff. I'm sure this doesn't apply to anyone on this thread, just a general reminder to all you DBA's out there who sometimes are too busy to implement PITR until after a disaster strikes. I know that in the past I've personally been guilty of this on several occasions.\n--Denis EnterpriseDB (yeah, rah, rah...)On 8/1/06, Merlin Moncure <[email protected]> wrote:\nOn 8/1/06, George Pavlov <[email protected]\n> wrote:> I am looking for some general guidelines on what is the performance> overhead of enabling point-in-time recovery (archive_command config) on> an 8.1 database. Obviously it will depend on a multitude of factors, but\n> some broad-brush statements and/or anecdotal evidence will suffice.> Should one worry about its performance implications? Also, what can one> do to mitigate it?pitr is extremely cheap both in performance drag and administation\noverhead for the benefits it provides. it comes almost for free, justmake sure you can handle all the wal files and do sane backupscheduling. in fact, pitr can actually reduce the load on a serverdue to running less frequent backups. if your server is heavy i/o\nloaded, it might take a bit of planning.merlin---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives? \nhttp://archives.postgresql.org",
"msg_date": "Thu, 3 Aug 2006 01:21:56 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PITR performance overhead?"
}
] |
[
{
"msg_contents": "I try to partition a large table (~ 120 mio. rows) into 50 smaller\ntables but using the IMO immutable %-function constraint exclusion\ndoes not work as expected:\n\nCREATE TABLE tt_m (id1 int, cont varchar);\nCREATE TABLE tt_0 (check (id1 % 50 = 0)) INHERITS (tt_m);\nCREATE TABLE tt_1 (check (id1 % 50 = 1)) INHERITS (tt_m);\n....\nCREATE RULE ins_tt_0 AS ON INSERT TO tt_m WHERE id1 % 50 = 0 DO INSTEAD INSERT INTO tt_0 VALUES (new.*);\nCREATE RULE ins_tt_1 AS ON INSERT TO tt_m WHERE id1 % 50 = 1 DO INSTEAD INSERT INTO tt_1 VALUES (new.*);\n...\nINSERT INTO tt_m (id1,cont) VALUES (0,'Test1');\nINSERT INTO tt_m (id1,cont) VALUES (1,'Test2');\n....\nEXPLAIN SELECT * FROM tt_m WHERE id1=1;\n QUERY PLAN\n-----------------------------------------------------------------------\n Result (cost=0.00..73.50 rows=18 width=36)\n -> Append (cost=0.00..73.50 rows=18 width=36)\n -> Seq Scan on tt_m (cost=0.00..24.50 rows=6 width=36)\n Filter: (id1 = 1)\n -> Seq Scan on tt_0 tt_m (cost=0.00..24.50 rows=6 width=36)\n Filter: (id1 = 1)\n -> Seq Scan on tt_1 tt_m (cost=0.00..24.50 rows=6 width=36)\n Filter: (id1 = 1)\n ...\n\nOnly adding an explicit %-call to the query results in the expected plan:\n\nEXPLAIN SELECT * FROM tt_m WHERE id1=1 AND id1 % 50 = 1;\n QUERY PLAN\n-----------------------------------------------------------------------\n Result (cost=0.00..60.60 rows=2 width=36)\n -> Append (cost=0.00..60.60 rows=2 width=36)\n -> Seq Scan on tt_m (cost=0.00..30.30 rows=1 width=36)\n Filter: ((id1 = 1) AND ((id1 % 50) = 1))\n -> Seq Scan on tt_1 tt_m (cost=0.00..30.30 rows=1 width=36)\n Filter: ((id1 = 1) AND ((id1 % 50) = 1))\n\nDid I miss something and/or how could I force the planner to use\nconstraint exclusion without adding the explicit second condition above?\n\nTIA, Martin\n",
"msg_date": "Mon, 31 Jul 2006 14:17:08 +0200",
"msg_from": "Martin Lesser <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioning / constrain exlusion not working with %-operator"
},
{
"msg_contents": "Martin Lesser <[email protected]> writes:\n> I try to partition a large table (~ 120 mio. rows) into 50 smaller\n> tables but using the IMO immutable %-function constraint exclusion\n> does not work as expected:\n\nThe constraint exclusion mechanism is not as bright as you think.\nThere are some very limited cases where it can make a deduction that\na WHERE clause implies a CHECK constraint that's not an exact textual\nequivalent ... but all those cases have to do with related b-tree\noperators, and % is not one.\n\nIt's usually better to use partitioning rules that have something to\ndo with the WHERE-clauses you'd be using anyway. For instance, try\nto partition on ranges of id1 instead of id1 % 50. That works because\nthe CHECK clauses will be like \"id1 >= x and id1 < y\" and those\noperators are btree-related to the \"id1 = z\" clauses you'll have in\nthe query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 31 Jul 2006 08:42:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioning / constrain exlusion not working with %-operator "
},
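Applied to the example above, the range-based variant would look something like this (children recreated with range checks; the bounds are arbitrary and only for illustration):

CREATE TABLE tt_0 (CHECK (id1 >= 0 AND id1 < 1000000)) INHERITS (tt_m);
CREATE TABLE tt_1 (CHECK (id1 >= 1000000 AND id1 < 2000000)) INHERITS (tt_m);
-- ... and so on ...

-- with constraint_exclusion = on, a query such as
--   SELECT * FROM tt_m WHERE id1 = 1500000;
-- then scans only tt_m and tt_1.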
{
"msg_contents": "Tom Lane <[email protected]> writes:\n\n> It's usually better to use partitioning rules that have something to\n> do with the WHERE-clauses you'd be using anyway. For instance, try\n> to partition on ranges.\n\nI agree and tried to create new partitioned tables. But now I ran into\nsome other performance-related trouble when inserting (parts of) the old\n(unpartioned) table into the new one:\n\nCREATE TABLE t_unparted (id1 int, cont varchar);\n-- Populate table with 1000 records with id1 from 1 to 1000 and ANALYZE\n\nCREATE TABLE t_parted (id1 int, cont varchar);\nCREATE TABLE t_parted_000 (check (id1 >=0 AND id1 < 100)) INHERITS (t_parted);\nCREATE RULE ins_000 AS ON INSERT TO t_parted WHERE id1 >= 0 AND id1 < 100 DO INSTEAD INSERT INTO t_parted_000 VALUES (new.*);\n-- ... 8 more tables + 8 more rules\nCREATE TABLE t_parted_900 (check (id1 >=900 AND id1 < 1000)) INHERITS (t_parted);\nCREATE RULE ins_900 AS ON INSERT TO t_parted WHERE id1 >= 900 AND id1 < 1000 DO INSTEAD INSERT INTO t_parted_900 VALUES (new.*);\n\nAnd now:\n\nEXPLAIN INSERT INTO t_parted SELECT * FROM t_parted WHERE id1>=0 AND id1<100;\n\n Result (cost=0.00..170.80 rows=12 width=36)\n -> Append (cost=0.00..170.80 rows=12 width=36)\n -> Seq Scan on t_parted (cost=0.00..85.40 rows=6 width=36)\n Filter: ((id1 >= 0) AND (id1 < 100) AND (((id1 >= 0) AND (id1 < 100)) IS NOT TRUE) AND (((id1 >= 100) AND (id1 < 200)) IS NOT TRUE) AND (((id1 >= 200) AND (id1 < 300)) IS NOT TRUE) AND (((id1 >= 300) AND (id1 < 400)) IS NOT TRUE) AND (((id1 >= 400) AND (id1 < 500)) IS NOT TRUE) AND (((id1 >= 500) AND (id1 < 600)) IS NOT TRUE) AND (((id1 >= 600) AND (id1 < 700)) IS NOT TRUE) AND (((id1 >= 700) AND (id1 < 800)) IS NOT TRUE) AND (((id1 >= 800) AND (id1 < 900)) IS NOT TRUE) AND (((id1 >= 900) AND (id1 < 1000)) IS NOT TRUE))\n -> Seq Scan on t_parted_000 t_parted (cost=0.00..85.40 rows=6 width=36)\n Filter: ((id1 >= 0) AND (id1 < 100) AND (((id1 >= 0) AND (id1 < 100)) IS NOT TRUE) AND (((id1 >= 100) AND (id1 < 200)) IS NOT TRUE) AND (((id1 >= 200) AND (id1 < 300)) IS NOT TRUE) AND (((id1 >= 300) AND (id1 < 400)) IS NOT TRUE) AND (((id1 >= 400) AND (id1 < 500)) IS NOT TRUE) AND (((id1 >= 500) AND (id1 < 600)) IS NOT TRUE) AND (((id1 >= 600) AND (id1 < 700)) IS NOT TRUE) AND (((id1 >= 700) AND (id1 < 800)) IS NOT TRUE) AND (((id1 >= 800) AND (id1 < 900)) IS NOT TRUE) AND (((id1 >= 900) AND (id1 < 1000)) IS NOT TRUE))\n\n Result (cost=0.00..66.40 rows=12 width=36)\n -> Append (cost=0.00..66.40 rows=12 width=36)\n -> Seq Scan on t_parted (cost=0.00..33.20 rows=6 width=36)\n Filter: ((id1 >= 0) AND (id1 < 100) AND (id1 >= 0) AND (id1 < 100))\n -> Seq Scan on t_parted_000 t_parted (cost=0.00..33.20 rows=6 width=36)\n Filter: ((id1 >= 0) AND (id1 < 100) AND (id1 >= 0) AND (id1 < 100))\n ...\n Result (cost=0.00..33.20 rows=6 width=36)\n -> Append (cost=0.00..33.20 rows=6 width=36)\n -> Seq Scan on t_parted (cost=0.00..33.20 rows=6 width=36)\n Filter: ((id1 >= 0) AND (id1 < 100) AND (id1 >= 900) AND (id1 < 1000))\n(58 rows)\n\nThe filters appended by the planner do not make any sense and cost too\nmuch time if the old table is huge. (constraint_exclusion was ON)\n\nIs there a better way to partition an existing table with a large\nnumber of rows (>100 mio)?\n\nTIA, Martin\n",
"msg_date": "Fri, 04 Aug 2006 09:36:56 +0200",
"msg_from": "Martin Lesser <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioning / constrain exlusion not working with %-operator"
}
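One way to sidestep the rule rewriting while back-filling is to skip the parent table and insert into each child directly, e.g. with the example tables above (a sketch, one statement per partition):

INSERT INTO t_parted_000 SELECT * FROM t_unparted WHERE id1 >= 0 AND id1 < 100;
-- ...
INSERT INTO t_parted_900 SELECT * FROM t_unparted WHERE id1 >= 900 AND id1 < 1000;

The CHECK constraints still reject any row routed to the wrong child, and the planner no longer has to append every rule's qualification to the scan of the source table.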
] |
[
{
"msg_contents": "Hello,\n\nI've read a lot of mails here saying how good is the Opteron with PostgreSQL,\nand a lot of people seems to recommend it (instead of Xeon).\n\nHowever, it seems that new Intel processors, Core Duo and Core 2 Duo, performs\nvery well, in desktop environment at least.\n\n\nI wonder what can we expect with them, do anybody have done any experiments with\nthose processors ?\n\n\nRegards,\n\tJonathan Ballet\n",
"msg_date": "Mon, 31 Jul 2006 14:53:24 +0200",
"msg_from": "Jonathan Ballet <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performances with new Intel Core* processors"
},
{
"msg_contents": "On 7/31/06, Jonathan Ballet <[email protected]> wrote:\n> Hello,\n>\n> I've read a lot of mails here saying how good is the Opteron with PostgreSQL,\n> and a lot of people seems to recommend it (instead of Xeon).\n>\n\nI am a huge fan of the opteron but intel certainly seems to have a\nwinner for workstations. from my research on a per core basis the c2d\nis a stronger chip with the 4mb cache version but it is unclear which\nis a better choice for pg on 4 and 8 core platforms. I have direct\npersonal experience with pg on dual (4 core) and quad (8 core) opteron\nand the performance is fantastic, especially on 64 bit o/s with > 2gb\nmemory (vs 32 bit xeon).\n\nalso opteron is 64 bit and mature so i think is a better choice for\nserver platform at the moment, especially for databases. my mind\ncould be changed but it is too soon right now. consider how long it\ntook for the opteron to prove itself in the server world.\n\nmerlin\n",
"msg_date": "Mon, 31 Jul 2006 11:52:31 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "On 31-7-2006 17:52, Merlin Moncure wrote:\n> On 7/31/06, Jonathan Ballet <[email protected]> wrote:\n>> Hello,\n>>\n>> I've read a lot of mails here saying how good is the Opteron with \n>> PostgreSQL,\n>> and a lot of people seems to recommend it (instead of Xeon).\n> \n> I am a huge fan of the opteron but intel certainly seems to have a\n> winner for workstations. from my research on a per core basis the c2d\n> is a stronger chip with the 4mb cache version but it is unclear which\n> is a better choice for pg on 4 and 8 core platforms. I have direct\n> personal experience with pg on dual (4 core) and quad (8 core) opteron\n> and the performance is fantastic, especially on 64 bit o/s with > 2gb\n> memory (vs 32 bit xeon).\n\nAs far as I know there is no support for more than two Woodcrest \nprocessors (Core 2 version of the Xeon) in a system. So when using a \nscalable application (like postgresql) and you need more than four \ncores, Opteron is still the only option in the x86 world.\n\nThe Woodcrest however is faster than a comparably priced Opteron using \nPostgresql. In a benchmark we did (and have yet to publish) a Woodcrest \nsystem outperforms a comparable Sun Fire x4200. And even if you'd adjust \nit to a clock-by-clock comparison, Woodcrest would still beat the \nOpteron. If you'd adjust it to a price/performance comparison (I \nconfigured a HP DL 380G5-system which is similar to what we tested on \ntheir website), the x4200 would loose as well. Mind you a Opteron 280 \n2.4Ghz or 285 2.6Ghz costs more than a Woodcrest 5150 2.66Ghz or 5160 \n3Ghz (resp.), but the FB-Dimm memory for the Xeons is more expensive \nthan the DDR or DDR2 ECC REG memory you need in a Opteron.\n\n> also opteron is 64 bit and mature so i think is a better choice for\n> server platform at the moment, especially for databases. my mind\n> could be changed but it is too soon right now. consider how long it\n> took for the opteron to prove itself in the server world.\n\nIntel Woodcrest can do 64-bit as well. As can all recent Xeons. Whether \nOpteron does a better job at 64-bit than a Xeon, I don't know (our test \nwas in 64-bit though). I have not seen our Xeon 64-bits production \nservers be any less stable than our Opteron 64-bit servers.\nFor a database system, however, processors hardly ever are the main \nbottleneck, are they? So you should probably go for a set of \"fast \nprocessors\" from your favorite supplier and focus mainly on lots of \nmemory and fast disks. Whether that employs Opterons or Xeon Woodcrest \n(no other Xeons are up to that competition, imho) doesn't really matter.\n\nWe'll be publishing the article in the near future, and I'll give a \npointer to it (even though it will be in Dutch, you can still read the \ngraphs).\n\nBest regards,\n\nArjen van der Meijden\nTweakers.net\n",
"msg_date": "Mon, 31 Jul 2006 18:30:27 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "\nGood to know. We have been waiting for performance comparisons on\nthe new Intel CPUs.\n\n---------------------------------------------------------------------------\n\nArjen van der Meijden wrote:\n> On 31-7-2006 17:52, Merlin Moncure wrote:\n> > On 7/31/06, Jonathan Ballet <[email protected]> wrote:\n> >> Hello,\n> >>\n> >> I've read a lot of mails here saying how good is the Opteron with \n> >> PostgreSQL,\n> >> and a lot of people seems to recommend it (instead of Xeon).\n> > \n> > I am a huge fan of the opteron but intel certainly seems to have a\n> > winner for workstations. from my research on a per core basis the c2d\n> > is a stronger chip with the 4mb cache version but it is unclear which\n> > is a better choice for pg on 4 and 8 core platforms. I have direct\n> > personal experience with pg on dual (4 core) and quad (8 core) opteron\n> > and the performance is fantastic, especially on 64 bit o/s with > 2gb\n> > memory (vs 32 bit xeon).\n> \n> As far as I know there is no support for more than two Woodcrest \n> processors (Core 2 version of the Xeon) in a system. So when using a \n> scalable application (like postgresql) and you need more than four \n> cores, Opteron is still the only option in the x86 world.\n> \n> The Woodcrest however is faster than a comparably priced Opteron using \n> Postgresql. In a benchmark we did (and have yet to publish) a Woodcrest \n> system outperforms a comparable Sun Fire x4200. And even if you'd adjust \n> it to a clock-by-clock comparison, Woodcrest would still beat the \n> Opteron. If you'd adjust it to a price/performance comparison (I \n> configured a HP DL 380G5-system which is similar to what we tested on \n> their website), the x4200 would loose as well. Mind you a Opteron 280 \n> 2.4Ghz or 285 2.6Ghz costs more than a Woodcrest 5150 2.66Ghz or 5160 \n> 3Ghz (resp.), but the FB-Dimm memory for the Xeons is more expensive \n> than the DDR or DDR2 ECC REG memory you need in a Opteron.\n> \n> > also opteron is 64 bit and mature so i think is a better choice for\n> > server platform at the moment, especially for databases. my mind\n> > could be changed but it is too soon right now. consider how long it\n> > took for the opteron to prove itself in the server world.\n> \n> Intel Woodcrest can do 64-bit as well. As can all recent Xeons. Whether \n> Opteron does a better job at 64-bit than a Xeon, I don't know (our test \n> was in 64-bit though). I have not seen our Xeon 64-bits production \n> servers be any less stable than our Opteron 64-bit servers.\n> For a database system, however, processors hardly ever are the main \n> bottleneck, are they? So you should probably go for a set of \"fast \n> processors\" from your favorite supplier and focus mainly on lots of \n> memory and fast disks. Whether that employs Opterons or Xeon Woodcrest \n> (no other Xeons are up to that competition, imho) doesn't really matter.\n> \n> We'll be publishing the article in the near future, and I'll give a \n> pointer to it (even though it will be in Dutch, you can still read the \n> graphs).\n> \n> Best regards,\n> \n> Arjen van der Meijden\n> Tweakers.net\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \n Bruce Momjian [email protected]\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n",
"msg_date": "Mon, 31 Jul 2006 15:57:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "On Mon, 2006-07-31 at 11:30, Arjen van der Meijden wrote:\n> On 31-7-2006 17:52, Merlin Moncure wrote:\n\n> For a database system, however, processors hardly ever are the main \n> bottleneck, are they? So you should probably go for a set of \"fast \n> processors\" from your favorite supplier and focus mainly on lots of \n> memory and fast disks. Whether that employs Opterons or Xeon Woodcrest \n> (no other Xeons are up to that competition, imho) doesn't really matter.\n\nJust making a quick comment here. While the CPU core itself nowadays\ncertainly is not the most common bottleneck for a fast db server, the\nability of the CPU/Memory combo to act as a datapump IS often a limit.\n\nIn that case, you want to go with whichever setup gives you the fastest\naccess to memory.\n",
"msg_date": "Mon, 31 Jul 2006 15:27:45 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "On Jul 31, 2006, at 12:30 PM, Arjen van der Meijden wrote:\n\n> For a database system, however, processors hardly ever are the main \n> bottleneck, are they? So you should probably go for a set of \"fast \n> processors\" from your favorite supplier and focus mainly on lots of \n> memory and fast disks. Whether that employs Opterons or Xeon \n> Woodcrest (no other Xeons are up to that\n\nNo, but it *does* matter how fast said processor can sling the memory \naround, and in my experience, the opterons have been much better at \nthat due to the efficiency of the memory transport layer.",
"msg_date": "Mon, 31 Jul 2006 17:04:37 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "Vivek,\n\nOn 7/31/06 2:04 PM, \"Vivek Khera\" <[email protected]> wrote:\n\n> No, but it *does* matter how fast said processor can sling the memory\n> around, and in my experience, the opterons have been much better at\n> that due to the efficiency of the memory transport layer.\n\nMy Mac laptop with a Core 1 and DDR2 RAM does 2700 MB/s memory bandwidth.\nThe Core 2 also has lower memory latency than the Opteron.\n\nThat said - Intel still hasn't figured out how to do cache-coherent SMP\nscaling yet - the Opteron has the outstanding EV6/HTX bus and the cc-numa\ncache coherency logic working today.\n\n- Luke \n\n\n",
"msg_date": "Mon, 31 Jul 2006 21:26:57 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "* Arjen van der Meijden:\n\n> For a database system, however, processors hardly ever are the main\n> bottleneck, are they?\n\nNot directly, but the choice of processor influences which\nchipsets/mainboards are available, which in turn has some impact on\nthe number of RAM slots. (According to our hardware supplier, beyound\n8 GB, the price per GB goes up sharply.) Unfortunately, it seems that\nthe Core 2 Duo mainboards do not change that much in this area.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Tue, 01 Aug 2006 09:28:05 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
},
{
"msg_contents": "My theory, based entirely on what I have read in this thread, is that a low\nend server (really a small workstation) with an Intel Dual Core CPU is\nlikely an excellent PG choice for the lowest end.\n\nI'll try to snag an Intel Dual Core workstation in the near future and\nreport back DBT2 scores comparing it to a similarly equiped 1 socket AMD\ndual core workstation. I'll keep the data size small to fit entirely in\nRAM so the DBT2 isn't it's usual disk bound dog when you run it the \"right\"\nway (according to tpc-c guidelines).\n\n--Denis\n Dweeb from EnterpriseDB\n\nOn 8/1/06, Florian Weimer <[email protected]> wrote:\n>\n> * Arjen van der Meijden:\n>\n> > For a database system, however, processors hardly ever are the main\n> > bottleneck, are they?\n>\n> Not directly, but the choice of processor influences which\n> chipsets/mainboards are available, which in turn has some impact on\n> the number of RAM slots. (According to our hardware supplier, beyound\n> 8 GB, the price per GB goes up sharply.) Unfortunately, it seems that\n> the Core 2 Duo mainboards do not change that much in this area.\n>\n> --\n> Florian Weimer <[email protected]>\n> BFK edv-consulting GmbH http://www.bfk.de/\n> Durlacher Allee 47 tel: +49-721-96201-1\n> D-76131 Karlsruhe fax: +49-721-96201-99\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nMy theory, based entirely on what I have read in this thread, is that a low end server (really a small workstation) with an Intel Dual Core CPU is likely an excellent PG choice for the lowest end.I'll try to snag an Intel Dual Core workstation in the near future and report back DBT2 scores comparing it to a similarly equiped 1 socket AMD dual core workstation. I'll keep the data size small to fit entirely in RAM so the DBT2 isn't it's usual disk bound dog when you run it the \"right\" way (according to tpc-c guidelines).\n--Denis Dweeb from EnterpriseDBOn 8/1/06, Florian Weimer <[email protected]> wrote:\n* Arjen van der Meijden:> For a database system, however, processors hardly ever are the main> bottleneck, are they?Not directly, but the choice of processor influences whichchipsets/mainboards are available, which in turn has some impact on\nthe number of RAM slots. (According to our hardware supplier, beyound8 GB, the price per GB goes up sharply.) Unfortunately, it seems thatthe Core 2 Duo mainboards do not change that much in this area.\n--Florian Weimer <[email protected]>BFK edv-consulting GmbH http://www.bfk.de/Durlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not\n match",
"msg_date": "Thu, 3 Aug 2006 01:08:52 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performances with new Intel Core* processors"
}
] |
[
{
"msg_contents": "Hello,\n\nI apologize that if the similar questions were already asked and answered\nbefore.....\nHere is a go:\n\na) If we have application clients running on a Solaris 10/SPARC box and\ndatabase server running on a Solaris10 X_86 box; further, we have a few\ntables, in which we split an integer type of field into several our own\ndefined bit map segement, upon them, we have a set of logic operation\nimplemented in our applications, MY question is, could the different edian\nscheme (SPARC is a big edian and X_86 is the opposite) possibly slow down\nthe applcation?\n\nIn fact, it is a general question that \"Is it a good practice we shall avoid\nto run application server and database server on the platform with opposite\nedian? or it simply doesn't matter\"?\n\nb) Same direction for the question, if using slony-1, if master server is\nrunning on a big edian host but slave is running on a small edian host, are\nthere any performance loss due to the edian difference?\n\nThanks in advance for your opinions.\n\nRegards,\nGuoping Zhang\n\n",
"msg_date": "Tue, 1 Aug 2006 15:01:15 +1000",
"msg_from": "\"Guoping Zhang\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Are there any performance penalty for opposite edian platform\n\tcombinations...."
},
{
"msg_contents": "\"Guoping Zhang\" <[email protected]> writes:\n> In fact, it is a general question that \"Is it a good practice we shall avoid\n> to run application server and database server on the platform with opposite\n> edian? or it simply doesn't matter\"?\n\nOur network protocol uses big-endian consistently, so there will be some\ntiny hit for little-endian machines, independently of what's on the\nother end of the wire. I can't imagine you could measure the difference\nthough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Aug 2006 08:10:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Are there any performance penalty for opposite edian platform\n\tcombinations...."
}
] |
[
{
"msg_contents": "Hello, I have a query:\n\nexplain analyze select tu.url_id, tu.url, coalesce(sd.recurse, 100), case when\nCOALESCE(get_option('use_banner')::integer,0) = 0 then 0 else ts.use_banner\nend as use_banner, ts.use_cookies, ts.use_robots, ts.includes, ts.excludes,\nts.track_domain, ts.task_id,get_available_pages(ts.task_id,ts.customer_id),\nts.redirects from task_url tu inner join task_scheduler ts on\ntu.task_id=ts.task_id inner join (subscription s inner join subscription_dic\nsd on sd.id=s.dict_id ) on s.customer_id=ts.customer_id inner join customer\nc on c.customer_id=ts.customer_id AND c.active WHERE\nget_available_pages(ts.task_id,ts.customer_id) > 0 AND\n((get_option('expired_users')::integer = 0) OR (isfinite(last_login) AND\nextract('day' from current_timestamp - last_login)::integer <=\ncoalesce(get_option('expired_users')::integer,100))) AND ((s.status is null\nAND ts.customer_id is null) OR s.status > 0) AND\n(get_check_period(ts.task_id,ts.next_check) is null OR\n(unix_timestamp(get_check_period(ts.task_id,ts.next_check)) -\nunix_timestamp(timenow()) < 3600)) AND ts.status <> 1 AND ((ts.start_time <\ncurrent_time AND ts.stop_time > current_time) OR (ts.start_time is null AND\nts.stop_time is null)) AND tu.url_id = 1 AND ts.customer_id not in (select\ndistinct customer_id from task_scheduler where status = 1) order by\nts.next_check is not null, unix_timestamp(ts.next_check) -\nunix_timestamp(timenow()) limit 10;\n\nwhich produces this query plan:\n Limit (cost=2874.98..2874.99 rows=2 width=88) (actual time=11800.535..11800.546 rows=3 loops=1)\n -> Sort (cost=2874.98..2874.99 rows=2 width=88) (actual time=11800.529..11800.532 rows=3 loops=1)\n Sort Key: (ts.next_check IS NOT NULL), (date_part('epoch'::text, ts.next_check) - date_part('epoch'::text, (timenow())::timestamp without time zone))\n -> Nested Loop (cost=4.37..2874.97 rows=2 width=88) (actual time=10249.115..11800.486 rows=3 loops=1)\n -> Nested Loop (cost=4.37..2868.87 rows=2 width=55) (actual time=10247.721..11796.303 rows=3 loops=1)\n Join Filter: (\"inner\".id = \"outer\".dict_id)\n -> Nested Loop (cost=2.03..2865.13 rows=2 width=55) (actual time=10247.649..11796.142 rows=3 loops=1)\n Join Filter: (((\"inner\".status IS NULL) AND (\"outer\".customer_id IS NULL)) OR (\"inner\".status > 0))\n -> Nested Loop (cost=2.03..2858.34 rows=2 width=55) (actual time=10247.583..11795.936 rows=3 loops=1)\n -> Seq Scan on customer c (cost=0.00..195.71 rows=231 width=4) (actual time=0.082..154.344 rows=4161 loops=1)\n Filter: (active AND isfinite(last_login) AND ((date_part('day'::text, (('now'::text)::timestamp(6) with time zone - (last_login)::timestamp with time zone)))::integer <= 150))\n -> Index Scan using task_scheduler_icustomer_id on task_scheduler ts (cost=2.03..11.51 rows=1 width=51) (actual time=2.785..2.785 rows=0 loops=4161)\n Index Cond: (\"outer\".customer_id = ts.customer_id)\n Filter: ((get_available_pages(task_id, customer_id) > 0) AND ((get_check_period(task_id, next_check) IS NULL) OR ((date_part('epoch'::text, get_check_period(task_id, next_check)) - date_part('epoch'::text, (timenow())::timestamp without time zone)) < 3600::double precision)) AND (status <> 1) AND ((((start_time)::time with time zone < ('now'::text)::time(6) with time zone) AND ((stop_time)::time with time zone > ('now'::text)::time(6) with time zone)) OR ((start_time IS NULL) AND (stop_time IS NULL))) AND (NOT (hashed subplan)))\n SubPlan\n -> Unique (cost=2.02..2.03 rows=1 width=4) (actual time=0.617..0.631 rows=3 loops=1)\n -> 
Sort (cost=2.02..2.03 rows=1 width=4) (actual time=0.613..0.617 rows=3 loops=1)\n Sort Key: customer_id\n -> Index Scan using task_scheduler_istatus on task_scheduler (cost=0.00..2.01 rows=1 width=4) (actual time=0.044..0.580 rows=3 loops=1)\n Index Cond: (status = 1)\n -> Index Scan using subscription_icustomer_id on subscription s (cost=0.00..3.38 rows=1 width=12) (actual time=0.035..0.041 rows=1 loops=3)\n Index Cond: (\"outer\".customer_id = s.customer_id)\n -> Materialize (cost=2.34..2.65 rows=31 width=8) (actual time=0.008..0.027 rows=6 loops=3)\n -> Seq Scan on subscription_dic sd (cost=0.00..2.31 rows=31 width=8) (actual time=0.013..0.034 rows=6 loops=1)\n -> Index Scan using task_url_storage_task_id on task_url tu (cost=0.00..3.03 rows=1 width=37) (actual time=0.028..0.045 rows=1 loops=3)\n Index Cond: (tu.task_id = \"outer\".task_id)\n Filter: (url_id = 1)\n Total runtime: 11801.082 ms\n(28 rows)\n\n\nDo I need to optimize a query somehow, or it is related to database\nconfiguration?\n\nI'm running postgresql 8.0.0 on CentOS release 3.7\n\n-- \nEugene N Dzhurinsky\n",
"msg_date": "Tue, 1 Aug 2006 16:18:37 +0300",
"msg_from": "Eugeny N Dzhurinsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query/database optimization"
},
{
"msg_contents": "Eugeny N Dzhurinsky <[email protected]> writes:\n> [slow query]\n\nThe bulk of your time seems to be going into this indexscan:\n\n> -> Index Scan using task_scheduler_icustomer_id on task_scheduler ts (cost=2.03..11.51 rows=1 width=51) (actual time=2.785..2.785 rows=0 loops=4161)\n> Index Cond: (\"outer\".customer_id = ts.customer_id)\n> Filter: ((get_available_pages(task_id, customer_id) > 0) AND ((get_check_period(task_id, next_check) IS NULL) OR ((date_part('epoch'::text, get_check_period(task_id, next_check)) - date_part('epoch'::text, (timenow())::timestamp without time zone)) < 3600::double precision)) AND (status <> 1) AND ((((start_time)::time with time zone < ('now'::text)::time(6) with time zone) AND ((stop_time)::time with time zone > ('now'::text)::time(6) with time zone)) OR ((start_time IS NULL) AND (stop_time IS NULL))) AND (NOT (hashed subplan)))\n> SubPlan\n> -> Unique (cost=2.02..2.03 rows=1 width=4) (actual time=0.617..0.631 rows=3 loops=1)\n> ...\n\nI kinda doubt that the index search itself is that slow --- doubtless\nthe problem comes from having to evaluate that filter condition on a lot\nof rows. How fast are those functions you're calling?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 01 Aug 2006 23:15:11 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query/database optimization "
},
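A quick way to answer that is to time the functions in isolation over the same rows, for example (a sketch):

EXPLAIN ANALYZE
SELECT get_available_pages(task_id, customer_id),
       get_check_period(task_id, next_check)
  FROM task_scheduler;

If most of the ~11.8 seconds of the original query shows up here, the functions (or the view behind get_available_pages) are the thing to optimize rather than the join order.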
{
"msg_contents": "On Tue, Aug 01, 2006 at 11:15:11PM -0400, Tom Lane wrote:\n> Eugeny N Dzhurinsky <[email protected]> writes:\n> > [slow query]\n> The bulk of your time seems to be going into this indexscan:\n> > -> Index Scan using task_scheduler_icustomer_id on task_scheduler ts (cost=2.03..11.51 rows=1 width=51) (actual time=2.785..2.785 rows=0 loops=4161)\n> > Index Cond: (\"outer\".customer_id = ts.customer_id)\n> > Filter: ((get_available_pages(task_id, customer_id) > 0) AND ((get_check_period(task_id, next_check) IS NULL) OR ((date_part('epoch'::text, get_check_period(task_id, next_check)) - date_part('epoch'::text, (timenow())::timestamp without time zone)) < 3600::double precision)) AND (status <> 1) AND ((((start_time)::time with time zone < ('now'::text)::time(6) with time zone) AND ((stop_time)::time with time zone > ('now'::text)::time(6) with time zone)) OR ((start_time IS NULL) AND (stop_time IS NULL))) AND (NOT (hashed subplan)))\n> > SubPlan\n> > -> Unique (cost=2.02..2.03 rows=1 width=4) (actual time=0.617..0.631 rows=3 loops=1)\n> > ...\n> \n> I kinda doubt that the index search itself is that slow --- doubtless\n> the problem comes from having to evaluate that filter condition on a lot\n> of rows. How fast are those functions you're calling?\n\nWell, not really fast, especially get_available_pages\n\nthere is special table with history of changes, and there is a view for latest\nchanges per task, and this function selects all records from a view for given\nID, then calculates sum of pages of tasks and then calculates number of\navailable pages as number of allowed pages deduct number of processed pages.\n\nprobably there is bottleneck in this view selection?\n\n-- \nEugene N Dzhurinsky\n",
"msg_date": "Wed, 2 Aug 2006 13:56:22 +0300",
"msg_from": "Eugeny N Dzhurinsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query/database optimization"
}
] |
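A practical way to follow up on Tom Lane's diagnosis above is to time the filter functions in isolation and, if they dominate, replace the per-row calls with a set-based computation. The sketch below is only illustrative: the task and task_page_usage names and their columns are assumptions, not the original poster's schema.

-- Time the function calls on their own to confirm they dominate the runtime.
EXPLAIN ANALYZE
SELECT get_available_pages(task_id, customer_id)
FROM task_scheduler;

-- If the functions only read data, recreating them with CREATE OR REPLACE
-- FUNCTION ... STABLE can help the planner avoid needless re-evaluation.

-- A set-based alternative (table and column names are hypothetical):
-- compute remaining pages once per task instead of once per scanned row.
SELECT ts.task_id,
       ts.customer_id,
       t.allowed_pages - COALESCE(sum(u.pages_processed), 0) AS available_pages
FROM task_scheduler ts
JOIN task t USING (task_id)
LEFT JOIN task_page_usage u USING (task_id)
GROUP BY ts.task_id, ts.customer_id, t.allowed_pages;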
[
{
"msg_contents": "I need some expert advice on how to optimize a \"translation\" query (this\nword choice will become clear shortly, I hope).\n\nSay I have a HUMONGOUS table of foreign language \"translations\" (call it\nTRANS) with records like these:\n\nmeaning_id: 1\nlanguage_id: 5\ntranslation: jidoosha\n\nmeaning_id: 1\nlanguage_id: 2\ntranslation: voiture\n\n meaning_id: 1\nlanguage_id: 5\ntranslation: kuruma\n\nmeaning_id: 2\nlanguage_id: 2\ntranslation: chat\n\nmeaning_id: 2\nlanguage_id: 5\ntranslation: neko\n\nmeaning_id: 2\nlanguage_id: 3\ntranslation: katz\n\nmeaning_id: 3\nlanguage_id: 4\ntranslation: pesce\n\n meaning_id: 3\nlanguage_id: 2\ntranslation: poisson\n\n meaning_id: 3\nlanguage_id: 5\ntranslation: sakana\n\nFor the sake of this description, let's assume that the records above are\nall the records in TRANS (though in fact the number of records in TRANS is\nreally about ten million times greater).\n\nNow suppose I have a tiny table called INPUT consisting of single text field\n(say, word). E.g. suppose that INPUT looks like this:\n\nkatz\n voiture\npesce\n\nNow, let's fix a language_id, say 5. This is the \"target\" language_id.\nGiven this target language_id, and this particular INPUT table, I want the\nresults of the query to be something like this:\n\nneko\njidoosha\nkuruma\nsakana\n\nI.e. for each word W in INPUT, the query must first find each record R in\nTRANS that has W as its translation field; then find each record Q in\nTRANS whose language_id is 5 (the target language_id) AND has the same\nmeaning_id as R does. E.g. if W is 'katz', then R is\n\n meaning_id: 2\nlanguage_id: 3\ntranslation: katz\n\nand therefore the desired Q is\n\n meaning_id: 2\nlanguage_id: 5\ntranslation: neko\n\n...and so on.\n\nThe only difficulty here is that performance is critical, and in real\nlife, TRANS has around 50M records (and growing), while INPUT has\ntypically between 500 and 1000 records.\n\nAny advice on how to make this as fast as possible would be much\nappreciated.\n\nThanks!\n\nG.\n\nP.S. Just to show that this post is not just from a college student trying\nto get around doing homework, below I post my most successful query so far.\nIt works, but it's performance isn't great. And it is annoyingly complex,\nto boot; I'm very much the SQL noob, and if nothing else, at least I'd like\nto learn to write \"better\" (i.e. more elegant, more legible, more\nclueful) SQL that this:\n\nSELECT q3.translation, q2.otherstuff\nFROM\n(\n SELECT INPUT.word, q1.meaning_id, INPUT.otherstuff\n FROM\n INPUT\n INNER JOIN\n (\n SELECT translation, meaning_id\n FROM TRANS\n WHERE translation IN (SELECT word FROM INPUT)\n ) AS q1\n ON INPUT.word = q1.translation\n) AS q2\nLEFT JOIN\n(\n SELECT translation, meaning_id\n FROM TRANS\n WHERE language_id=5\n) AS q3\nON q2.meaning_id=q3.meaning_id;\n\nAs you can see, there are additional fields that I didn't mention in my\noriginal description (e.g. INPUT.otherstuff). Also the above is actually a\nsubquery in a larger query, but it is by far, the worst bottleneck. 
Last,\nthere's an index on TRANS(translation).",
"msg_date": "Tue, 1 Aug 2006 14:09:54 -0400",
"msg_from": "tlm <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to speed up this \"translation\" query?"
},
{
"msg_contents": "\nOn 1 aug 2006, at 20.09, tlm wrote:\n> SELECT q3.translation, q2.otherstuff\n> FROM\n> (\n> SELECT INPUT.word, q1.meaning_id, INPUT.otherstuff\n> FROM\n> INPUT\n> INNER JOIN\n> (\n> SELECT translation, meaning_id\n> FROM TRANS\n> WHERE translation IN (SELECT word FROM INPUT)\n> ) AS q1\n> ON INPUT.word = q1.translation\n> ) AS q2\n> LEFT JOIN\n> (\n> SELECT translation, meaning_id\n> FROM TRANS\n> WHERE language_id=5\n> ) AS q3\n> ON q2.meaning_id=q3.meaning_id;\n\nMaybe I'm not following you properly, but I think you've made things \na little bit more complicated than they need to be. The nested sub- \nselects look a little nasty.\n\nNow, you didn't provide any explain output but I think the following \nSQL will achieve the same result, and hopefully produce a better plan:\n\nSELECT t2.translation, i.otherstuff\nFROM input i INNER JOIN trans t ON i.word=t.translation\nINNER JOIN trans t2 ON t.meaning_id=t2.meaning_id\nWHERE t2.language_id=5;\n\nThe query will also benefit from indices on trans.meaning_id and \ntrans.language_id. Also make sure the tables are vacuumed and \nanalyzed, to allow the planner to make good estimates.\n\n\n\nSincerely,\n\nNiklas Johansson\n\n\n\n\n",
"msg_date": "Tue, 1 Aug 2006 20:38:38 +0200",
"msg_from": "Niklas Johansson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to speed up this \"translation\" query?"
}
] |
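Niklas Johansson's rewrite above can be tried directly; here is a hedged sketch of the full sequence, with index names invented for illustration (only the index on trans(translation) is known to exist from the original post).

-- Indexes suggested in the reply (names are assumptions).
CREATE INDEX trans_meaning_id_idx ON trans (meaning_id);
CREATE INDEX trans_language_id_idx ON trans (language_id);
-- A single composite index on (meaning_id, language_id) may also be worth
-- testing as an alternative to the two separate ones.

-- Keep planner statistics current, as recommended in the reply.
VACUUM ANALYZE trans;
VACUUM ANALYZE input;

-- The simplified self-join: find each input word in TRANS, then fetch the
-- rows in the target language (5) that share its meaning_id.
SELECT t2.translation, i.otherstuff
FROM input i
JOIN trans t ON i.word = t.translation
JOIN trans t2 ON t.meaning_id = t2.meaning_id
WHERE t2.language_id = 5;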
[
{
"msg_contents": "I intend to test Postgres/Bizgres for DWH use. I want to use XFS filesystem to get the best possible performance at FS\nlevel(correct me if I am wrong !).\n\nIs anyone using XFS for storing/retrieving relatively large amount of data (~ 200GB)?\n\nIf yes, what about the performance and stability of XFS.\nI am especially interested in recommendations about XFS mount options and mkfs.xfs options.\nMy setup will be roughly this:\n1) 4 SCSI HDD , 128GB each, \n2) RAID 0 on the four SCSI HDD disks using LVM (software RAID)\n\nThere are two other SATA HDD in the server. Server has 2 physical CPUs (XEON at 3 GHz), 4 Logical CPUs, 8 GB RAM, OS\n= SLES9 SP3 \n\nMy questions:\n1) Should I place external XFS journal on separate device ?\n2) What should be the journal buffer size (logbsize) ?\n3) How many journal buffers (logbufs) should I configure ?\n4) How many allocations groups (for mkfs.xfs) should I configure\n5) Is it wortj settion noatime ?\n6) What I/O scheduler(elevators) should I use (massive sequencial reads)\n7) What is the ideal stripe unit and width (for a RAID device) ? \n\nI will appreciate any options, suggestions, pointers.\n\nBest Regards.\nMilen Kulev\n\n",
"msg_date": "Tue, 1 Aug 2006 23:49:56 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "XFS filessystem for Datawarehousing"
},
{
"msg_contents": "\nOn Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:\n> Is anyone using XFS for storing/retrieving relatively large amount \n> of data (~ 200GB)?\n\n\nYes, we've been using it on Linux since v2.4 (currently v2.6) and it \nhas been rock solid on our database servers (Opterons, running in \nboth 32-bit and 64-bit mode). Our databases are not quite 200GB \n(maybe 75GB for a big one currently), but ballpark enough that the \nexperience is probably valid. We also have a few terabyte+ non- \ndatabase XFS file servers too.\n\nPerformance has been very good even with nearly full file systems, \nand reliability has been perfect so far. Some of those file systems \nget used pretty hard for months or years non-stop. Comparatively, I \ncan only tell you that XFS tends to be significantly faster than \nExt3, but we never did any serious file system tuning either.\n\nKnowing nothing else, my experience would suggest that XFS is a fine \nand safe choice for your application.\n\n\nJ. Andrew Rogers\n\n",
"msg_date": "Tue, 1 Aug 2006 15:46:54 -0700",
"msg_from": "\"J. Andrew Rogers\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Hi Andrew, \nThank you for your prompt reply.\nAre you using some special XFS options ? \nI mean special values for logbuffers bufferiosize , extent size preallocations etc ?\nI will have only 6 big tables and about 20 other relatively small (fact aggregation) tables (~ 10-20 GB each). \nI believe it should be a a good idea to use as much contigious chunks of space (from OS point of view) as possible in\norder to make full table scans as fast as possible. \n\n\nBest Regards,\nMilen Kulev\n\n-----Original Message-----\nFrom: J. Andrew Rogers [mailto:[email protected]] \nSent: Wednesday, August 02, 2006 12:47 AM\nTo: Milen Kulev\nCc: Pgsql-Performance ((E-mail))\nSubject: Re: [PERFORM] XFS filessystem for Datawarehousing\n\n\n\nOn Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:\n> Is anyone using XFS for storing/retrieving relatively large amount\n> of data (~ 200GB)?\n\n\nYes, we've been using it on Linux since v2.4 (currently v2.6) and it \nhas been rock solid on our database servers (Opterons, running in \nboth 32-bit and 64-bit mode). Our databases are not quite 200GB \n(maybe 75GB for a big one currently), but ballpark enough that the \nexperience is probably valid. We also have a few terabyte+ non- \ndatabase XFS file servers too.\n\nPerformance has been very good even with nearly full file systems, \nand reliability has been perfect so far. Some of those file systems \nget used pretty hard for months or years non-stop. Comparatively, I \ncan only tell you that XFS tends to be significantly faster than \nExt3, but we never did any serious file system tuning either.\n\nKnowing nothing else, my experience would suggest that XFS is a fine \nand safe choice for your application.\n\n\nJ. Andrew Rogers\n\n",
"msg_date": "Wed, 2 Aug 2006 00:59:56 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "J. Andrew Rogers wrote:\n> \n> On Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:\n> >Is anyone using XFS for storing/retrieving relatively large amount \n> >of data (~ 200GB)?\n> \n> \n> Yes, we've been using it on Linux since v2.4 (currently v2.6) and it \n> has been rock solid on our database servers (Opterons, running in \n> both 32-bit and 64-bit mode). Our databases are not quite 200GB \n> (maybe 75GB for a big one currently), but ballpark enough that the \n> experience is probably valid. We also have a few terabyte+ non- \n> database XFS file servers too.\n> \n> Performance has been very good even with nearly full file systems, \n> and reliability has been perfect so far. Some of those file systems \n> get used pretty hard for months or years non-stop. Comparatively, I \n> can only tell you that XFS tends to be significantly faster than \n> Ext3, but we never did any serious file system tuning either.\n\nMost likely ext3 was used on the default configuration, which logs data\noperations as well as metadata, which is what XFS logs. I don't think\nI've seen any credible comparison between XFS and ext3 with the\nmetadata-only journal option.\n\nOn the other hand I don't think it makes sense to journal data on a\nPostgreSQL environment. Metadata is enough, given that we log data on\nWAL anyway.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 1 Aug 2006 20:42:23 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Milen Kulev wrote:\n\n> Is anyone using XFS for storing/retrieving relatively large amount of data (~ 200GB)?\n> \n\nYes, but not for that large - only about 40-50 GB of database data.\n\n> If yes, what about the performance and stability of XFS.\n\nI'm pretty happy with the performance, particularly read (get 215MB/s \nsequential 8K reads from 4 (P)ATA drives setup as software RAID 0). I \nhave always found XFS very stable (used it on servers for several years).\n\n> I am especially interested in recommendations about XFS mount options and mkfs.xfs options.\n> My setup will be roughly this:\n> 1) 4 SCSI HDD , 128GB each, \n> 2) RAID 0 on the four SCSI HDD disks using LVM (software RAID)\n> \n\n> \n> My questions:\n> 1) Should I place external XFS journal on separate device ?\n> 2) What should be the journal buffer size (logbsize) ?\n> 3) How many journal buffers (logbufs) should I configure ?\n> 4) How many allocations groups (for mkfs.xfs) should I configure\n> 5) Is it wortj settion noatime ?\n> 6) What I/O scheduler(elevators) should I use (massive sequencial reads)\n> 7) What is the ideal stripe unit and width (for a RAID device) ? \n> \n>\n\n1-3) I have not done any experimentation with where to put the journal, \nor its buffer size / number of them (well worth doing I suspect tho).\n\n4) I left it at the default.\n\n5) I use noatime, but have not measured if there is any impact if I \nleave it off.\n\n6) deadline scheduler seemed to give slightly better performance for \nsequential performance.\n\n7) I tried out stripe width 2,4 (with 4 disks), and they seemed to give \nthe same results. Stripe unit of 256K (tested 32K, 64K, 128K) seemed to \ngive the best sequential performance. My software raid stripe size was \nmatched to this in each case.\n\n\nI'll be interested to hear what you discover :-)\n\nCheers\n\nMark\n",
"msg_date": "Wed, 02 Aug 2006 14:06:11 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Milen,\n\nOn 8/1/06 2:49 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> Is anyone using XFS for storing/retrieving relatively large amount of data (~\n> 200GB)?\n\nI concur with the previous poster's experiences with one additional\nobservation:\n\nWe have had instabilities with XFS with software RAID (md) on 32-bit Xeons\nrunning RedHat4 U3 with the Centos 4.3 unsupported SMP kernel. XFS would\noccasionally kernel panic under load.\n\nWe have had no problems with XFS running on the same OS/kernel on 64-bit\nunder heavy workloads for weeks of continuous usage. Each server (of 16\ntotal) had four XFS filesystems, each with 250GB of table data (no indexes)\non them, total of 16 Terabytes. We tested with the TPC-H schema and\nqueries.\n\nWe use the default settings for XFS.\n\nAlso - be aware that LVM has a serious performance bottleneck at about\n600MB/s - if you are working below that threshold, you may not notice the\nissue, maybe some increase in CPU consumption as you approach it.\n\n- Luke\n\n\n",
"msg_date": "Tue, 01 Aug 2006 19:42:37 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Hi Like, Mark , Alvaro and Andrew,\n\nThank you very much for sharing you experience with me. \nI want to compare DHW performance of PG/Bizgres on different filesystems and difffrent \nBlock sizes. \n\nThe hardware will be free for me in a week or too (at a moment another project is running on it) and then I will test\ndiffrenet setups and will post the results.\n\nI MUST (sorry, no other choice) use SLES6 R3, 64 bit. I am not sure at all that I will get enough budget to get\napproapriate RAID controller, and that is why I intent to use software RAID.\n\nI am pretty exited whether XFS will clearly outpertform ETX3 (no default setups for both are planned !). I am not sure\nwhether is it worth to include JFS in comparison too ...\n\n\nBest Regards,\nMilen Kulev\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Wednesday, August 02, 2006 4:43 AM\nTo: Milen Kulev; [email protected]\nSubject: Re: [PERFORM] XFS filessystem for Datawarehousing\n\n\nMilen,\n\nOn 8/1/06 2:49 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> Is anyone using XFS for storing/retrieving relatively large amount of \n> data (~ 200GB)?\n\nI concur with the previous poster's experiences with one additional\nobservation:\n\nWe have had instabilities with XFS with software RAID (md) on 32-bit Xeons running RedHat4 U3 with the Centos 4.3\nunsupported SMP kernel. XFS would occasionally kernel panic under load.\n\nWe have had no problems with XFS running on the same OS/kernel on 64-bit under heavy workloads for weeks of continuous\nusage. Each server (of 16\ntotal) had four XFS filesystems, each with 250GB of table data (no indexes) on them, total of 16 Terabytes. We tested\nwith the TPC-H schema and queries.\n\nWe use the default settings for XFS.\n\nAlso - be aware that LVM has a serious performance bottleneck at about 600MB/s - if you are working below that\nthreshold, you may not notice the issue, maybe some increase in CPU consumption as you approach it.\n\n- Luke\n\n\n",
"msg_date": "Wed, 2 Aug 2006 22:59:34 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Milen,\n\nFor the past year, I have been running odbc-bench on a dual-opteron with\n4GB of RAM using a 8GB sample data. I found the performance difference\nbetween EXT3, JFS, and XFS is +/- 5-8%. This could be written-off as\n\"noise\" just for normal server performance flux. If you plan on using the\ndefault kernel, ext3 will likely perform best (what I found). When I added\nmy own kernel, ext3 performed fair. What I've had to consider is what does\neach file system offer me as far as data integrity goes.\n\nYou'll find greater ROI on performance by investing your time in other areas\nthan chasing down a few percentage point (like I have done). If you could\nborrow more RAM and/or more discs for your tests, Testing newer kernels and\nread-ahead patches may benefit you as well.\n\nBest of luck.\n\nSteve Poe\n\n\n\nOn 8/2/06, Milen Kulev <[email protected]> wrote:\n>\n> Hi Like, Mark , Alvaro and Andrew,\n>\n> Thank you very much for sharing you experience with me.\n> I want to compare DHW performance of PG/Bizgres on different filesystems\n> and difffrent\n> Block sizes.\n>\n> The hardware will be free for me in a week or too (at a moment another\n> project is running on it) and then I will test\n> diffrenet setups and will post the results.\n>\n> I MUST (sorry, no other choice) use SLES6 R3, 64 bit. I am not sure at all\n> that I will get enough budget to get\n> approapriate RAID controller, and that is why I intent to use software\n> RAID.\n>\n> I am pretty exited whether XFS will clearly outpertform ETX3 (no default\n> setups for both are planned !). I am not sure\n> whether is it worth to include JFS in comparison too ...\n>\n>\n> Best Regards,\n> Milen Kulev\n>\n> -----Original Message-----\n> From: Luke Lonergan [mailto:[email protected]]\n> Sent: Wednesday, August 02, 2006 4:43 AM\n> To: Milen Kulev; [email protected]\n> Subject: Re: [PERFORM] XFS filessystem for Datawarehousing\n>\n>\n> Milen,\n>\n> On 8/1/06 2:49 PM, \"Milen Kulev\" <[email protected]> wrote:\n>\n> > Is anyone using XFS for storing/retrieving relatively large amount of\n> > data (~ 200GB)?\n>\n> I concur with the previous poster's experiences with one additional\n> observation:\n>\n> We have had instabilities with XFS with software RAID (md) on 32-bit Xeons\n> running RedHat4 U3 with the Centos 4.3\n> unsupported SMP kernel. XFS would occasionally kernel panic under load.\n>\n> We have had no problems with XFS running on the same OS/kernel on 64-bit\n> under heavy workloads for weeks of continuous\n> usage. Each server (of 16\n> total) had four XFS filesystems, each with 250GB of table data (no\n> indexes) on them, total of 16 Terabytes. We tested\n> with the TPC-H schema and queries.\n>\n> We use the default settings for XFS.\n>\n> Also - be aware that LVM has a serious performance bottleneck at about\n> 600MB/s - if you are working below that\n> threshold, you may not notice the issue, maybe some increase in CPU\n> consumption as you approach it.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\nMilen,\n\nFor the past year, I have been running odbc-bench on a\ndual-opteron with 4GB of RAM using a 8GB sample data. I found the\nperformance difference between EXT3, JFS, and XFS is +/- 5-8%.\nThis could be written-off as \"noise\" just for normal server performance\nflux. If you plan on using the default kernel, ext3 will likely perform\nbest (what I found). 
When I added my own kernel, ext3 performed fair.\nWhat I've had to consider is what does each file system offer me as far\nas data integrity goes. \n\nYou'll find greater ROI on performance by investing your time in other\nareas than chasing down a few percentage point (like I have\ndone). If you could borrow more RAM and/or more discs for your\ntests, Testing newer kernels and read-ahead patches may benefit\nyou as well.\n\nBest of luck.\n\nSteve Poe\n\nOn 8/2/06, Milen Kulev <[email protected]> wrote:\nHi Like, Mark , Alvaro and Andrew,Thank you very much for sharing you experience with me.I want to compare DHW performance of PG/Bizgres on different filesystems and difffrentBlock sizes.The hardware will be free for me in a week or too (at a moment another project is running on it) and then I will test\ndiffrenet setups and will post the results.I MUST (sorry, no other choice) use SLES6 R3, 64 bit. I am not sure at all that I will get enough budget to getapproapriate RAID controller, and that is why I intent to use software RAID.\nI\nam pretty exited whether XFS will clearly outpertform ETX3 (no default\nsetups for both are planned !). I am not surewhether is it worth to include JFS in comparison too ...Best Regards,Milen Kulev-----Original Message-----From: Luke Lonergan [mailto:\[email protected]]Sent: Wednesday, August 02, 2006 4:43 AMTo: Milen Kulev; [email protected]: Re: [PERFORM] XFS filessystem for Datawarehousing\nMilen,On 8/1/06 2:49 PM, \"Milen Kulev\" <[email protected]> wrote:> Is anyone using XFS for storing/retrieving relatively large amount of\n> data (~ 200GB)?I concur with the previous poster's experiences with one additionalobservation:We have had instabilities with XFS with software RAID (md) on 32-bit Xeons running RedHat4 U3 with the Centos \n4.3unsupported SMP kernel. XFS would occasionally kernel panic under load.We have had no problems with XFS running on the same OS/kernel on 64-bit under heavy workloads for weeks of continuoususage. Each server (of 16\ntotal) had four XFS filesystems, each with 250GB of table data (no indexes) on them, total of 16 Terabytes. We testedwith the TPC-H schema and queries.We use the default settings for XFS.Also - be aware that LVM has a serious performance bottleneck at about 600MB/s - if you are working below that\nthreshold, you may not notice the issue, maybe some increase in CPU consumption as you approach it.- Luke---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ?\n http://www.postgresql.org/docs/faq",
"msg_date": "Wed, 2 Aug 2006 14:26:39 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Hi Steve, \nI hope that performance between EXT3 and XFS is not only 5-8% . Such a small difference could be interpreted as\n\"noise\", as you already mentioned.\nI want to give many filesystem a try. Stability is also a concern, but I don't want to favour any FS over another .\n \nBest Regards.\nMIlen Kulev\n \n\n-----Original Message-----\nFrom: Steve Poe [mailto:[email protected]] \nSent: Wednesday, August 02, 2006 11:27 PM\nTo: Milen Kulev\nCc: [email protected]\nSubject: Re: [PERFORM] XFS filessystem for Datawarehousing\n\n\nMilen,\n\nFor the past year, I have been running odbc-bench on a dual-opteron with 4GB of RAM using a 8GB sample data. I found\nthe performance difference between EXT3, JFS, and XFS is +/- 5-8%. This could be written-off as \"noise\" just for normal\nserver performance flux. If you plan on using the default kernel, ext3 will likely perform best (what I found). When I\nadded my own kernel, ext3 performed fair. What I've had to consider is what does each file system offer me as far as\ndata integrity goes. \n\nYou'll find greater ROI on performance by investing your time in other areas than chasing down a few percentage point\n(like I have done). If you could borrow more RAM and/or more discs for your tests, Testing newer kernels and\nread-ahead patches may benefit you as well.\n\nBest of luck.\n\nSteve Poe\n\n\n\n\nOn 8/2/06, Milen Kulev <[email protected]> wrote: \n\nHi Like, Mark , Alvaro and Andrew,\n\nThank you very much for sharing you experience with me.\nI want to compare DHW performance of PG/Bizgres on different filesystems and difffrent\nBlock sizes.\n\nThe hardware will be free for me in a week or too (at a moment another project is running on it) and then I will test \ndiffrenet setups and will post the results.\n\nI MUST (sorry, no other choice) use SLES6 R3, 64 bit. I am not sure at all that I will get enough budget to get\napproapriate RAID controller, and that is why I intent to use software RAID. \n\nI am pretty exited whether XFS will clearly outpertform ETX3 (no default setups for both are planned !). I am not sure\nwhether is it worth to include JFS in comparison too ...\n\n\nBest Regards,\nMilen Kulev\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto: [email protected] <mailto:[email protected]> ]\nSent: Wednesday, August 02, 2006 4:43 AM\nTo: Milen Kulev; [email protected]\nSubject: Re: [PERFORM] XFS filessystem for Datawarehousing \n\n\nMilen,\n\nOn 8/1/06 2:49 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> Is anyone using XFS for storing/retrieving relatively large amount of\n> data (~ 200GB)?\n\nI concur with the previous poster's experiences with one additional\nobservation:\n\nWe have had instabilities with XFS with software RAID (md) on 32-bit Xeons running RedHat4 U3 with the Centos 4.3\nunsupported SMP kernel. XFS would occasionally kernel panic under load.\n\nWe have had no problems with XFS running on the same OS/kernel on 64-bit under heavy workloads for weeks of continuous\nusage. Each server (of 16 \ntotal) had four XFS filesystems, each with 250GB of table data (no indexes) on them, total of 16 Terabytes. 
We tested\nwith the TPC-H schema and queries.\n\nWe use the default settings for XFS.\n\nAlso - be aware that LVM has a serious performance bottleneck at about 600MB/s - if you are working below that \nthreshold, you may not notice the issue, maybe some increase in CPU consumption as you approach it.\n\n- Luke\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ? \n\n http://www.postgresql.org/docs/faq",
"msg_date": "Wed, 2 Aug 2006 23:44:06 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "[email protected] (\"Milen Kulev\") writes:\n> I am pretty exited whether XFS will clearly outpertform ETX3 (no\n> default setups for both are planned !). I am not sure whether is it\n> worth to include JFS in comparison too ...\n\nI did some benchmarking about 2 years ago, and found that JFS was a\nfew percent faster than XFS which was a few percent faster than ext3,\non a \"huge amounts of writes\" workload.\n\nThat the difference was only a few percent made us draw the conclusion\nthat FS performance was fairly much irrelevant. It is of *vastly*\nmore importance whether the filesystem will survive power outages and\nthe like, and, actually, Linux hasn't fared as well with that as I'd\nlike. :-(\n\nThe differences are small enough that what you should *actually* test\nfor is NOT PERFORMANCE.\n\nYou should instead test for reliability.\n\n- Turn off the power when the DB is under load, and see how well it\n survives.\n\n- Pull the fibrechannel cable, and see if the filesystem (and\n database) survives when under load.\n\nIf you find that XFS is 4% faster, that's likely a *terrible*\ntrade-off if it only survives power outage half as often as (say)\next3.\n-- \n(reverse (concatenate 'string \"gro.gultn\" \"@\" \"enworbbc\"))\nhttp://cbbrowne.com/info/wp.html\n\"C combines the power of assembler language with the convenience of\nassembler language.\" -- Unknown\n",
"msg_date": "Wed, 02 Aug 2006 18:43:38 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "On Wed, Aug 02, 2006 at 02:26:39PM -0700, Steve Poe wrote:\n>For the past year, I have been running odbc-bench on a dual-opteron with\n>4GB of RAM using a 8GB sample data. I found the performance difference\n>between EXT3, JFS, and XFS is +/- 5-8%.\n\nThat's not surprising when your db is only 2x your RAM. You'll find that \nfilesystem performance is much more important when your database is 10x+ \nyour RAM (which is often the case once your database heads toward a TB).\n\n>Testing newer kernels and read-ahead patches may benefit you as well.\n\nI've been really impressed by the adaptive readahead patches with \npostgres.\n\nMike Stone\n",
"msg_date": "Wed, 02 Aug 2006 19:33:39 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Again - the performance difference increases as the disk speed increases.\n\nOur experience is that we went from 300MB/s to 475MB/s when moving from ext3\nto xfs.\n\n- Luke \n\n\nOn 8/2/06 4:33 PM, \"Michael Stone\" <[email protected]> wrote:\n\n> On Wed, Aug 02, 2006 at 02:26:39PM -0700, Steve Poe wrote:\n>> For the past year, I have been running odbc-bench on a dual-opteron with\n>> 4GB of RAM using a 8GB sample data. I found the performance difference\n>> between EXT3, JFS, and XFS is +/- 5-8%.\n> \n> That's not surprising when your db is only 2x your RAM. You'll find that\n> filesystem performance is much more important when your database is 10x+\n> your RAM (which is often the case once your database heads toward a TB).\n> \n>> Testing newer kernels and read-ahead patches may benefit you as well.\n> \n> I've been really impressed by the adaptive readahead patches with\n> postgres.\n> \n> Mike Stone\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Wed, 02 Aug 2006 21:05:05 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Hi Luke, \nThat is ~ 50% increase !! Amazing...\nHow many reader processes did you have to get this results ?\n\nRegards. Milen\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Luke Lonergan\nSent: Thursday, August 03, 2006 6:05 AM\nTo: Michael Stone; [email protected]\nSubject: Re: [PERFORM] XFS filessystem for Datawarehousing\n\n\nAgain - the performance difference increases as the disk speed increases.\n\nOur experience is that we went from 300MB/s to 475MB/s when moving from ext3 to xfs.\n\n- Luke \n\n\nOn 8/2/06 4:33 PM, \"Michael Stone\" <[email protected]> wrote:\n\n> On Wed, Aug 02, 2006 at 02:26:39PM -0700, Steve Poe wrote:\n>> For the past year, I have been running odbc-bench on a dual-opteron \n>> with 4GB of RAM using a 8GB sample data. I found the performance \n>> difference between EXT3, JFS, and XFS is +/- 5-8%.\n> \n> That's not surprising when your db is only 2x your RAM. You'll find \n> that filesystem performance is much more important when your database \n> is 10x+ your RAM (which is often the case once your database heads \n> toward a TB).\n> \n>> Testing newer kernels and read-ahead patches may benefit you as well.\n> \n> I've been really impressed by the adaptive readahead patches with \n> postgres.\n> \n> Mike Stone\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n",
"msg_date": "Thu, 3 Aug 2006 21:44:27 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Milen,\n\nOn 8/3/06 12:44 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> Hi Luke, \n> That is ~ 50% increase !! Amazing...\n> How many reader processes did you have to get this results ?\n\nJust one - I'll refresh the results sometime and post.\n\n- Luke \n\n\n",
"msg_date": "Thu, 03 Aug 2006 20:41:24 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "On Tue, Aug 01, 2006 at 08:42:23PM -0400, Alvaro Herrera wrote:\n> J. Andrew Rogers wrote:\n> > \n> > On Aug 1, 2006, at 2:49 PM, Milen Kulev wrote:\n> > >Is anyone using XFS for storing/retrieving relatively large amount \n> > >of data (~ 200GB)?\n> > \n> > \n> > Yes, we've been using it on Linux since v2.4 (currently v2.6) and it \n> > has been rock solid on our database servers (Opterons, running in \n> > both 32-bit and 64-bit mode). Our databases are not quite 200GB \n> > (maybe 75GB for a big one currently), but ballpark enough that the \n> > experience is probably valid. We also have a few terabyte+ non- \n> > database XFS file servers too.\n> > \n> > Performance has been very good even with nearly full file systems, \n> > and reliability has been perfect so far. Some of those file systems \n> > get used pretty hard for months or years non-stop. Comparatively, I \n> > can only tell you that XFS tends to be significantly faster than \n> > Ext3, but we never did any serious file system tuning either.\n> \n> Most likely ext3 was used on the default configuration, which logs data\n> operations as well as metadata, which is what XFS logs. I don't think\n> I've seen any credible comparison between XFS and ext3 with the\n> metadata-only journal option.\n> \n> On the other hand I don't think it makes sense to journal data on a\n> PostgreSQL environment. Metadata is enough, given that we log data on\n> WAL anyway.\n\nActually, according to http://en.wikipedia.org/wiki/Ext3 the default\njournalling option for ext3 isn't to journal the data (which is actually\ndata=journal), but to wait until the data is written before considering\nthe metadata to be committed (data=ordered).\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Fri, 4 Aug 2006 15:44:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "Jim C. Nasby wrote:\n> On Tue, Aug 01, 2006 at 08:42:23PM -0400, Alvaro Herrera wrote:\n\n> > Most likely ext3 was used on the default configuration, which logs data\n> > operations as well as metadata, which is what XFS logs. I don't think\n> > I've seen any credible comparison between XFS and ext3 with the\n> > metadata-only journal option.\n> > \n> > On the other hand I don't think it makes sense to journal data on a\n> > PostgreSQL environment. Metadata is enough, given that we log data on\n> > WAL anyway.\n> \n> Actually, according to http://en.wikipedia.org/wiki/Ext3 the default\n> journalling option for ext3 isn't to journal the data (which is actually\n> data=journal), but to wait until the data is written before considering\n> the metadata to be committed (data=ordered).\n\nWell, we don't need the data to be written before considering metadata\ncommitted. data=writeback is enough for partitions to be dedicated to\nPGDATA. Not sure what other FSs do on this front but the ext3 default\nleans towards safe rather than speedy.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Mon, 7 Aug 2006 12:55:16 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
}
] |
[
{
"msg_contents": "Sorry, forgot to ask:\nWhat is the recommended/best PG block size for DWH database? 16k, 32k, 64k ?\nWhat hsould be the relation between XFS/RAID stripe size and PG block size ?\n\nBest Regards. \nMilen Kulev\n \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Milen Kulev\nSent: Tuesday, August 01, 2006 11:50 PM\nTo: [email protected]\nSubject: [PERFORM] XFS filessystem for Datawarehousing\n\n\nI intend to test Postgres/Bizgres for DWH use. I want to use XFS filesystem to get the best possible performance at FS\nlevel(correct me if I am wrong !).\n\nIs anyone using XFS for storing/retrieving relatively large amount of data (~ 200GB)?\n\nIf yes, what about the performance and stability of XFS.\nI am especially interested in recommendations about XFS mount options and mkfs.xfs options. My setup will be roughly\nthis:\n1) 4 SCSI HDD , 128GB each, \n2) RAID 0 on the four SCSI HDD disks using LVM (software RAID)\n\nThere are two other SATA HDD in the server. Server has 2 physical CPUs (XEON at 3 GHz), 4 Logical CPUs, 8 GB RAM, OS\n= SLES9 SP3 \n\nMy questions:\n1) Should I place external XFS journal on separate device ?\n2) What should be the journal buffer size (logbsize) ?\n3) How many journal buffers (logbufs) should I configure ?\n4) How many allocations groups (for mkfs.xfs) should I configure\n5) Is it wortj settion noatime ?\n6) What I/O scheduler(elevators) should I use (massive sequencial reads)\n7) What is the ideal stripe unit and width (for a RAID device) ? \n\nI will appreciate any options, suggestions, pointers.\n\nBest Regards.\nMilen Kulev\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n",
"msg_date": "Wed, 2 Aug 2006 00:19:46 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: XFS filessystem for Datawarehousing -2"
},
{
"msg_contents": "Milen,\n\nOn 8/1/06 3:19 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> Sorry, forgot to ask:\n> What is the recommended/best PG block size for DWH database? 16k, 32k, 64k\n> ?\n> What hsould be the relation between XFS/RAID stripe size and PG block size ?\n\nWe have found that the page size in PG starts to matter only at very high\ndisk performance levels around 1000MB/s. Other posters have talked about\nmaintenance tasks improving in performance, but I haven't seen it.\n\n- Luke\n\n\n",
"msg_date": "Tue, 01 Aug 2006 19:44:20 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing -2"
},
{
"msg_contents": "I was kinda thinking that making the Block Size configurable at InitDB time\nwould be a nice & simple enhancement for PG 8.3. My own personal rule of\nthumb for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH.\n\nI have no personal experience with XFS, but, I've seen numerous internal\nedb-postgres test results that show that of all file systems... OCFS\n2.0seems to be quite good for PG update intensive apps (especially on\n64 bit\nmachines).\n\nOn 8/1/06, Luke Lonergan <[email protected]> wrote:\n>\n> Milen,\n>\n> On 8/1/06 3:19 PM, \"Milen Kulev\" <[email protected]> wrote:\n>\n> > Sorry, forgot to ask:\n> > What is the recommended/best PG block size for DWH database? 16k,\n> 32k, 64k\n> > ?\n> > What hsould be the relation between XFS/RAID stripe size and PG block\n> size ?\n>\n> We have found that the page size in PG starts to matter only at very high\n> disk performance levels around 1000MB/s. Other posters have talked about\n> maintenance tasks improving in performance, but I haven't seen it.\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\nI was kinda thinking that making the Block Size configurable at InitDB time would be a nice & simple enhancement for PG 8.3. My own personal rule of thumb for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH.\nI have no personal experience with XFS, but, I've seen numerous internal edb-postgres test results that show that of all file systems... OCFS 2.0 seems to be quite good for PG update intensive apps (especially on 64 bit machines).\nOn 8/1/06, Luke Lonergan <[email protected]> wrote:\nMilen,On 8/1/06 3:19 PM, \"Milen Kulev\" <[email protected]> wrote:> Sorry, forgot to ask:> What is the recommended/best PG block size for DWH database? 16k, 32k, 64k\n> ?> What hsould be the relation between XFS/RAID stripe size and PG block size ?We have found that the page size in PG starts to matter only at very highdisk performance levels around 1000MB/s. Other posters have talked about\nmaintenance tasks improving in performance, but I haven't seen it.- Luke---------------------------(end of broadcast)---------------------------TIP 4: Have you searched our list archives?\n http://archives.postgresql.org",
"msg_date": "Thu, 3 Aug 2006 01:36:10 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing -2"
},
{
"msg_contents": "Hi Dennis, \nI am just cusrios to try PG with different block sizes ;) I don't know how much performance the bigger block size will\nbring (I mean 32k or 64k , for example, for DWH applikations).\nI am surprised to hear that OCFS2.0 (or any her FS usind direct I/O) performs well with PG. A month ago I have\nperformed a simple test with Veritas FS, with and than without cache (e.g. direct I/O). I have started 1 , then 2, ,\nthen 3, then 4 parallel INSERT processes. \nVeritas FS WITH FS cache outperformed the direct I/O version by factor 2-2.5 !\nI haven't tested woth OCFS2.0 though. I am not sure that OCFS2.0 is the good choice for PG data and index\nfilesystems.\nFor WAL -> perhaps.\n \nBest Regards. Milen \n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Denis Lussier\nSent: Thursday, August 03, 2006 7:36 AM\nTo: Luke Lonergan\nCc: Milen Kulev; [email protected]\nSubject: Re: [PERFORM] XFS filessystem for Datawarehousing -2\n\n\n\nI was kinda thinking that making the Block Size configurable at InitDB time would be a nice & simple enhancement for PG\n8.3. My own personal rule of thumb for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH. \n\nI have no personal experience with XFS, but, I've seen numerous internal edb-postgres test results that show that of all\nfile systems... OCFS 2.0 seems to be quite good for PG update intensive apps (especially on 64 bit machines). \n\n\nOn 8/1/06, Luke Lonergan <[email protected]> wrote: \n\nMilen,\n\nOn 8/1/06 3:19 PM, \"Milen Kulev\" <[email protected]> wrote:\n\n> Sorry, forgot to ask:\n> What is the recommended/best PG block size for DWH database? 16k, 32k, 64k \n> ?\n> What hsould be the relation between XFS/RAID stripe size and PG block size ?\n\nWe have found that the page size in PG starts to matter only at very high\ndisk performance levels around 1000MB/s. Other posters have talked about \nmaintenance tasks improving in performance, but I haven't seen it.\n\n- Luke\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n\n\n\n\n\n\nNachricht\n\n\nHi \nDennis, \nI am \njust cusrios to try PG with different block sizes ;) I don't \nknow how much performance the bigger block size will bring (I mean 32k \nor 64k , for example, for DWH applikations).\nI am \nsurprised to hear that OCFS2.0 (or any her FS usind direct I/O) performs \nwell with PG. A month ago I have performed a simple test \nwith Veritas FS, with and than without cache (e.g. direct \nI/O). I have started 1 , then 2, , then 3, \nthen 4 parallel INSERT processes. \nVeritas FS WITH FS cache outperformed the direct I/O version by \nfactor 2-2.5 !\nI \nhaven't tested woth OCFS2.0 though. I am not sure that OCFS2.0 is the good \nchoice for PG data and index \nfilesystems.\nFor \nWAL -> perhaps.\n \nBest \nRegards. Milen \n\n\n-----Original Message-----From: \n [email protected] \n [mailto:[email protected]] On Behalf Of Denis \n LussierSent: Thursday, August 03, 2006 7:36 AMTo: Luke \n LonerganCc: Milen Kulev; \n [email protected]: Re: [PERFORM] XFS \n filessystem for Datawarehousing -2I was kinda \n thinking that making the Block Size configurable at InitDB time would be a \n nice & simple enhancement for PG 8.3. My own personal rule of thumb \n for sizing is 8k for OLTP, 16k for mixed use, & 32k for DWH. 
I \n have no personal experience with XFS, but, I've seen numerous internal \n edb-postgres test results that show that of all file systems... OCFS 2.0 seems \n to be quite good for PG update intensive apps (especially on 64 bit machines). \n \nOn 8/1/06, Luke \n Lonergan <[email protected]> \n wrote:\nMilen,On \n 8/1/06 3:19 PM, \"Milen Kulev\" <[email protected]> wrote:> \n Sorry, forgot to ask:> What is the recommended/best PG \n block size for DWH database? 16k, 32k, 64k > \n ?> What hsould be the relation between XFS/RAID stripe \n size and PG block size ?We have found that the page size in PG \n starts to matter only at very highdisk performance levels around \n 1000MB/s. Other posters have talked about maintenance tasks \n improving in performance, but I haven't seen it.- \n Luke---------------------------(end of \n broadcast)---------------------------TIP 4: Have you searched our list \n archives? \n http://archives.postgresql.org",
"msg_date": "Thu, 3 Aug 2006 21:56:19 +0200",
"msg_from": "\"Milen Kulev\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS filessystem for Datawarehousing -2"
},
{
"msg_contents": "[email protected] (\"Denis Lussier\") writes:\n> I have no personal experience with XFS, but, I've seen numerous\n> internal edb-postgres test results that show that of all file\n> systems... OCFS 2.0 seems to be quite good for PG update intensive\n> apps (especially on 64 bit machines).\n\nI have been curious about OCFS for some time; it sounded like a case\nwhere there could possibly be some useful semantic changes to\nfilesystem functionality, notably that:\n\n - atime is pretty irrelevant;\n - it might try to work with pretty fixed block sizes (8K, perhaps?)\n rather than try to be efficient at handling tiny files\n\nIt sounds like it ought to be able to be a good fit. \n\nOf course, with a big warning sticker of \"what is required for Oracle\nto work properly is implemented, anything more is not a guarantee\" on\nit, who's going to trust it?\n-- \nselect 'cbbrowne' || '@' || 'cbbrowne.com';\nhttp://www.ntlug.org/~cbbrowne/oses.html\n\"There isn't any reason why Linux can't be implemented as an\nenterprise computing solution. Find out what you've been missing\nwhile you've been rebooting Windows NT.\" - Infoworld\n",
"msg_date": "Thu, 03 Aug 2006 17:00:04 -0400",
"msg_from": "Chris Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing -2"
},
{
"msg_contents": "I agree that OCFS 2.0 is NOT a general purpose PG (or any other) solution.\nMy recollection is that OCFS gave about 15% performance improvements (same\nas setting some aggressive switches on ext3). I assume OCFS has excellent\ncrash safety with its default settings but we did not test this as of yet.\nOCFS now ships as one of the optional FS's that ship with Suse. That takes\ncare of some of the FUD created by Oracle's disclaimer below.\n\nOCFS 2 is much more POSIX compliant than OCFS 1. The BenchmarkSQL, DBT2, &\nRegression tests we ran on OCFS 2 all worked well. The lack of full Posix\ncompliance did cause some problems for configuring PITR.\n\n--Denis http://www.enterprisedb.com\n\nOn 8/3/06, Chris Browne <[email protected]> wrote:\n>\n>\n> Of course, with a big warning sticker of \"what is required for Oracle\n> to work properly is implemented, anything more is not a guarantee\" on\n> it, who's going to trust it?\n> --\n>\n\nI agree that OCFS 2.0 is NOT a general purpose PG (or any other) solution. My recollection is that OCFS gave about 15% performance improvements (same as setting some aggressive switches on ext3). I assume OCFS has excellent crash safety with its default settings but we did not test this as of yet. OCFS now ships as one of the optional FS's that ship with Suse. That takes care of some of the FUD created by Oracle's disclaimer below. \nOCFS 2 is much more POSIX compliant than OCFS 1. The BenchmarkSQL, DBT2, & Regression tests we ran on OCFS 2 all worked well. The lack of full Posix compliance did cause some problems for configuring PITR.\n--Denis http://www.enterprisedb.comOn 8/3/06, Chris Browne <[email protected]\n> wrote:Of course, with a big warning sticker of \"what is required for Oracle\nto work properly is implemented, anything more is not a guarantee\" onit, who's going to trust it?--",
"msg_date": "Fri, 4 Aug 2006 16:13:00 -0400",
"msg_from": "\"Denis Lussier\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing -2"
}
] |
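For readers following the block-size discussion: the page size debated in this thread is fixed when the server binaries are built (BLCKSZ), not at initdb time, so an existing installation can only report it. A minimal check, assuming a server from roughly the 8.1 era onward, where the read-only block_size parameter is exposed:

-- Reports the compiled-in page size, typically 8192 bytes unless the server
-- was built with a different BLCKSZ.
SHOW block_size;

-- The same value is visible through the settings catalog view.
SELECT name, setting FROM pg_settings WHERE name = 'block_size';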
[
{
"msg_contents": "Merlin,\n\n> moving a gigabyte around/sec on the server, attached or no, \n> is pretty heavy lifting on x86 hardware.\n\nMaybe so, but we're doing 2GB/s plus on Sun/Thumper with software RAID\nand 36 disks and 1GB/s on a HW RAID with 16 disks, all SATA.\n\nWRT seek performance, we're doing 2500 seeks per second on the\nSun/Thumper on 36 disks. You might do better with 15K RPM disks and\ngreat controllers, but I haven't seen it reported yet.\n\nBTW - I'm curious about the HP P600 SAS host based RAID controller - it\nhas very good specs, but is the Linux driver solid?\n\n- Luke\n\n",
"msg_date": "Thu, 3 Aug 2006 00:51:06 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
},
{
"msg_contents": "On 8/3/06, Luke Lonergan <[email protected]> wrote:\n> Merlin,\n>\n> > moving a gigabyte around/sec on the server, attached or no,\n> > is pretty heavy lifting on x86 hardware.\n\n> Maybe so, but we're doing 2GB/s plus on Sun/Thumper with software RAID\n> and 36 disks and 1GB/s on a HW RAID with 16 disks, all SATA.\n\nthat is pretty amazing, that works out to 55 mb/sec/drive, close to\ntheoretical maximums. are you using pci-e sata controller and raptors\nim guessing? this is doubly impressive if we are talking raid 5 here.\n do you find that software raid is generally better than hardware at\nthe highend? how much does this tax the cpu?\n\n> WRT seek performance, we're doing 2500 seeks per second on the\n> Sun/Thumper on 36 disks. You might do better with 15K RPM disks and\n> great controllers, but I haven't seen it reported yet.\n\nthats pretty amazing too. only a highly optimized raid system can\npull this off.\n\n> BTW - I'm curious about the HP P600 SAS host based RAID controller - it\n> has very good specs, but is the Linux driver solid?\n\nhave no clue. i sure hope i dont go through the same headaches as\nwith ibm scsi drivers (rebranded adaptec btw). sas looks really\npromising however. the adaptec sas gear is so cheap it might be worth\nit to just buy some and see what it can do.\n\nmerlin\n",
"msg_date": "Thu, 3 Aug 2006 16:36:29 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": "\nMilen,\n\n> XFS, EXT3, JFS\nFor what reason are you planning to use a journaling FS? I think using WAL, fsyncing every transaction and using a journaling FS is tautologous. And if you have problems using EXT2 you can just add the journal later without loosing data.\nMy tests using EXT2 showed a performance boost up to 50% on INSERTs.\n\nChristian\n\n> I am pretty exited whether XFS will clearly outpertform ETX3\r\n> (no default setups for both are planned !). I am not sure\n> whether is it worth to include JFS in comparison too ...\n>\r\n>\r\n> Best Regards,\n> Milen Kulev\n>\r\n\n******************************************\nThe information contained in, or attached to, this e-mail, may contain confidential information and is intended solely for the use of the individual or entity to whom they are addressed and may be subject to legal privilege. If you have received this e-mail in error you should notify the sender immediately by reply e-mail, delete the message from your system and notify your system manager. Please do not copy it for any purpose, or disclose its contents to any other person. The views or opinions presented in this e-mail are solely those of the author and do not necessarily represent those of the company. The recipient should check this e-mail and any attachments for the presence of viruses. The company accepts no liability for any damage caused, directly or indirectly, by any virus transmitted in this email.\n******************************************\n",
"msg_date": "Thu, 3 Aug 2006 01:10:39 -0600",
"msg_from": "\"Koth, Christian (DWBI)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "* Christian Koth:\n\n> For what reason are you planning to use a journaling FS? I think\n> using WAL, fsyncing every transaction and using a journaling FS is\n> tautologous.\n\nThe journal is absolutely required to preserve the integrity of the\nfile system's own on-disk data structures after a crash. Even if\nyou've got a trustworthy file system checker (there are surprisingly\nfew of them, especially for advanced file systems without fixed data\nstructure locations), running it after a crash usually leads to\nunacceptably high downtime.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nDurlacher Allee 47 tel: +49-721-96201-1\nD-76131 Karlsruhe fax: +49-721-96201-99\n",
"msg_date": "Thu, 03 Aug 2006 09:17:12 +0200",
"msg_from": "Florian Weimer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
},
{
"msg_contents": "On Thu, Aug 03, 2006 at 01:10:39AM -0600, Koth, Christian (DWBI) wrote:\n>For what reason are you planning to use a journaling FS? I think using WAL, fsyncing every transaction and using a journaling FS is tautologous. And if you have problems using EXT2 you can just add the journal later without loosing data.\n>My tests using EXT2 showed a performance boost up to 50% on INSERTs.\n\nThe requirements for the WAL filesystem and for the data filesystem are \ndifferent. Having the WAL on a small ext2 filesystem makes sense and is \ngood for performance. Having the data on a huge ext2 filesystem is a \nhorrible idea, because you'll fsck forever if there's a crash, and \nbecause ext2 isn't a great performer for large filesystems. I typically \nhave a couple-gig ext2 WAL paired with a couple of couple-hundred-gig \nxfs data & index partitions. Note that the guarantees of a journaling fs \nlike xfs have nothing to do with the kind of journaling done by the WAL, \nand each has its place on a postgres system.\n\nMike Stone\n",
"msg_date": "Thu, 03 Aug 2006 05:39:56 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: XFS filessystem for Datawarehousing"
}
] |
[
{
"msg_contents": "I'm at a client who's an ASP; they've written their app such that each\ncustomer gets their own database. Rigth now they're at nearly 200\ndatabases, and were thinking that they \"must be the largest PostgreSQL\ninstall in the world\". :) After taking them down a notch or two, I\nstarted wondering how many sites could beat 200 databases in a single\ncluster. I'm sure there's any number that can, though 200 databases in a\ncluster certainly isn't mainstream.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n",
"msg_date": "Thu, 3 Aug 2006 13:33:35 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": true,
"msg_subject": ""
},
{
"msg_contents": "On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim Nasby wrote:\n> I'm at a client who's an ASP; they've written their app such that each\n> customer gets their own database. Rigth now they're at nearly 200\n> databases, and were thinking that they \"must be the largest PostgreSQL\n> install in the world\". :) After taking them down a notch or two, I\n> started wondering how many sites could beat 200 databases in a single\n> cluster. I'm sure there's any number that can, though 200 databases in a\n> cluster certainly isn't mainstream.\n\ncassarossa:~> psql -h sql -l | grep 'rows)'\n(137 rows)\n\nThat's our measly student society. :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Thu, 3 Aug 2006 21:15:22 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "I've got 226 customer databases in one cluster. Works like a champ with\n8.1.3. I have 3 additional PostgreSQL servers with our largest customers on\nthem. They have between 10 and 30 databases. The smallest of my servers\nhas 261GB's worth of db's in the cluster, and the largest is 400GB's.\n\nBTW, our application is an asp application also.\n\nJust some fun numbers for you.\n\nChris\n\nP.S.\n\nThanks to all of the PostgreSQL developers for the great work and for\nproviding the awesome support.\n\nOn 8/3/06, Jim Nasby <[email protected]> wrote:\n>\n> I'm at a client who's an ASP; they've written their app such that each\n> customer gets their own database. Rigth now they're at nearly 200\n> databases, and were thinking that they \"must be the largest PostgreSQL\n> install in the world\". :) After taking them down a notch or two, I\n> started wondering how many sites could beat 200 databases in a single\n> cluster. I'm sure there's any number that can, though 200 databases in a\n> cluster certainly isn't mainstream.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nI've got 226 customer databases in one cluster. Works like a champ with 8.1.3. I have 3 additional PostgreSQL servers with our largest customers on them. They have between 10 and 30 databases. The smallest of my servers has 261GB's worth of db's in the cluster, and the largest is 400GB's.\nBTW, our application is an asp application also.Just some fun numbers for you.ChrisP.S.Thanks to all of the PostgreSQL developers for the great work and for providing the awesome support.\nOn 8/3/06, Jim Nasby <[email protected]> wrote:\nI'm at a client who's an ASP; they've written their app such that eachcustomer gets their own database. Rigth now they're at nearly 200databases, and were thinking that they \"must be the largest PostgreSQL\ninstall in the world\". :) After taking them down a notch or two, Istarted wondering how many sites could beat 200 databases in a singlecluster. I'm sure there's any number that can, though 200 databases in a\ncluster certainly isn't mainstream.--Jim C. Nasby, Sr. Engineering Consultant [email protected] Software http://pervasive.com\n work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match",
"msg_date": "Thu, 3 Aug 2006 15:29:42 -0400",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "is that all?\n\n psql -l | grep 'rows)'\n(2016 rows)\n\nOn Thu, 2006-08-03 at 21:15 +0200, Steinar H. Gunderson wrote:\n> On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim Nasby wrote:\n> > I'm at a client who's an ASP; they've written their app such that each\n> > customer gets their own database. Rigth now they're at nearly 200\n> > databases, and were thinking that they \"must be the largest PostgreSQL\n> > install in the world\". :) After taking them down a notch or two, I\n> > started wondering how many sites could beat 200 databases in a single\n> > cluster. I'm sure there's any number that can, though 200 databases in a\n> > cluster certainly isn't mainstream.\n> \n> cassarossa:~> psql -h sql -l | grep 'rows)'\n> (137 rows)\n> \n> That's our measly student society. :-)\n> \n> /* Steinar */\n\n",
"msg_date": "Thu, 03 Aug 2006 15:30:40 -0400",
"msg_from": "Ian Westmacott <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "Just curious, is this a production server? Also, how large is the total\ncluster on disk?\n\nOn 8/3/06, Ian Westmacott <[email protected]> wrote:\n>\n> is that all?\n>\n> psql -l | grep 'rows)'\n> (2016 rows)\n>\n> On Thu, 2006-08-03 at 21:15 +0200, Steinar H. Gunderson wrote:\n> > On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim Nasby wrote:\n> > > I'm at a client who's an ASP; they've written their app such that each\n> > > customer gets their own database. Rigth now they're at nearly 200\n> > > databases, and were thinking that they \"must be the largest PostgreSQL\n> > > install in the world\". :) After taking them down a notch or two, I\n> > > started wondering how many sites could beat 200 databases in a single\n> > > cluster. I'm sure there's any number that can, though 200 databases in\n> a\n> > > cluster certainly isn't mainstream.\n> >\n> > cassarossa:~> psql -h sql -l | grep 'rows)'\n> > (137 rows)\n> >\n> > That's our measly student society. :-)\n> >\n> > /* Steinar */\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\nJust curious, is this a production server? Also, how large is the total cluster on disk?On 8/3/06, Ian Westmacott <\[email protected]> wrote:is that all? psql -l | grep 'rows)'(2016 rows)\nOn Thu, 2006-08-03 at 21:15 +0200, Steinar H. Gunderson wrote:> On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim Nasby wrote:> > I'm at a client who's an ASP; they've written their app such that each\n> > customer gets their own database. Rigth now they're at nearly 200> > databases, and were thinking that they \"must be the largest PostgreSQL> > install in the world\". :) After taking them down a notch or two, I\n> > started wondering how many sites could beat 200 databases in a single> > cluster. I'm sure there's any number that can, though 200 databases in a> > cluster certainly isn't mainstream.\n>> cassarossa:~> psql -h sql -l | grep 'rows)'> (137 rows)>> That's our measly student society. :-)>> /* Steinar */---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives? http://archives.postgresql.org",
"msg_date": "Thu, 3 Aug 2006 16:31:24 -0400",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
},
{
"msg_contents": "No, this is a test server used for regression testing. Relatively\nsmall (hundreds of GB) and quiet (dozen connections) in the Postgres\nuniverse.\n\nOn Thu, 2006-08-03 at 16:31 -0400, Chris Hoover wrote:\n> Just curious, is this a production server? Also, how large is the\n> total cluster on disk?\n> \n> On 8/3/06, Ian Westmacott <[email protected]> wrote:\n> is that all?\n> \n> psql -l | grep 'rows)'\n> (2016 rows) \n> \n> On Thu, 2006-08-03 at 21:15 +0200, Steinar H. Gunderson wrote:\n> > On Thu, Aug 03, 2006 at 01:33:35PM -0500, Jim Nasby wrote:\n> > > I'm at a client who's an ASP; they've written their app\n> such that each \n> > > customer gets their own database. Rigth now they're at\n> nearly 200\n> > > databases, and were thinking that they \"must be the\n> largest PostgreSQL\n> > > install in the world\". :) After taking them down a notch\n> or two, I \n> > > started wondering how many sites could beat 200 databases\n> in a single\n> > > cluster. I'm sure there's any number that can, though 200\n> databases in a\n> > > cluster certainly isn't mainstream.\n> >\n> > cassarossa:~> psql -h sql -l | grep 'rows)'\n> > (137 rows)\n> >\n> > That's our measly student society. :-)\n> >\n> > /* Steinar */\n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- \n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n",
"msg_date": "Thu, 03 Aug 2006 16:55:37 -0400",
"msg_from": "Ian Westmacott <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
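For anyone repeating the counting exercise above: grepping the row count out of psql -l works, but the same numbers can be read straight from the system catalog. A minimal sketch (pg_database_size() and pg_size_pretty() are available from 8.1 on):

    SELECT count(*) AS databases
    FROM pg_catalog.pg_database
    WHERE NOT datistemplate;   -- leave out template0/template1

    SELECT pg_size_pretty(sum(pg_database_size(datname))::bigint) AS cluster_size
    FROM pg_catalog.pg_database;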
[
{
"msg_contents": "Hello,\n\nI am trying to migrate data from a DB2 database to SQL Server 2005\ndatabase. Does anyone know about any migration tool that does that? I\n\nhave heard about DB2 Migration Tool kit, but I think you can only\nmigrate data to a DB2 database with that. Thank you.\n\n\nSincerely, \n\n\nEldhose Cyriac\n\n",
"msg_date": "3 Aug 2006 13:42:58 -0700",
"msg_from": "\"contact1981\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migrating data from DB2 to SQL Server"
},
{
"msg_contents": "contact1981 wrote:\n> Hello,\n> \n> I am trying to migrate data from a DB2 database to SQL Server 2005\n> database. Does anyone know about any migration tool that does that? I\n> \n> have heard about DB2 Migration Tool kit, but I think you can only\n> migrate data to a DB2 database with that. Thank you.\n> \n> \n> Sincerely, \n> \n> \n> Eldhose Cyriac\n> \n\nWe use SQLWays to migrate from SQL Server to PostgreSQL.\n\nP.M.\n",
"msg_date": "Mon, 07 Aug 2006 11:48:50 +0200",
"msg_from": "\"Pit M.\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrating data from DB2 to SQL Server"
},
{
"msg_contents": "Hi, Eldhose,\n\ncontact1981 wrote:\n\n> I am trying to migrate data from a DB2 database to SQL Server 2005\n> database. Does anyone know about any migration tool that does that? I\n> have heard about DB2 Migration Tool kit, but I think you can only\n> migrate data to a DB2 database with that. Thank you.\n\nIt seems that you, by accident, hit the wrong list with your question.\n\nBut, as you're here, why don't you migrate to PostgreSQL instead?\n\n\nHave a nice day,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Fri, 11 Aug 2006 10:53:58 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrating data from DB2 to SQL Server"
}
] |
[
{
"msg_contents": "Hi. I'm new at using PostgreSQL.\nWhere I work, all databases were built with MS Access. The Access files are hosted by computers with Windows 2000 and Windows \n\nXP. A new server is on its way and only Open Source Software is going to be installed. The OS is going to be SUSE Linux 10.1 \n\nand we are making comparisons between MySQL, PostgreSQL and MS Access. We installed MySQL and PostgreSQL on both SUSE and \n\nWindows XP (MySQL & PostgreSQL DO NOT run at the same time)(There is one HDD for Windows and one for Linux)\nThe \"Test Server\" in which we install the DBMS has the following characteristics:\n\nCPU speed = 1.3 GHz\nRAM = 512 MB\nHDD = 40 GB\n\nThe biggest table has 544371 rows(tuples?) with 55 rows. All fields are float8. Only 1 is varchar(255) and 1 timestamp.\nWe query the MS Access databases through Visual Basic Programs and ODBC Drivers. We made a Visual Basic program that uses ADO \n\nto connect to ALL three DBMS using ODBC drivers.\n\nWhen we run the following query \"SELECT * FROM big_table\", we get the following resutls:\n\nMS Access\n- Execution time ~ 51 seconds (Depending on the client machine, it can go as low as 20 seconds)\n- Network Utilization ~ 80 Mbps (According to Windows Task Manager)\n\nMySQL 5.0 (under Windows)\n- Execution time ~ 630 seconds\n- Network Utilization ~ 8 Mbps\n\nPostgreSQL 8.1 (under Windows)\n- Execution time ~ 290 seconds)\n- Network Utilization ~ 13 Mbps\n\n\nMS Access (under Linux. MS Access files are in the Linux computer which has the SAMBA server running. The client computer has \n\na mapped network drive that conects to the Linux files.)\n- Execution time ~ 55 seconds (Depending on the client machine, it can go as low as 20 seconds)\n- Network Utilization ~ 76 Mbps (According to Windows Task Manager)\n\nMySQL 5.0(under Linux)\n- Execution time ~ 440 seconds\n- Network Utilization ~ 11 Mbps\n\nPostgreSQL 8.1(under Linux)\n- Execution time ~ 180 seconds)\n- Network Utilization ~ 18 Mbps\n\n\nVery different results are obtained if a the query \"SELECT * from big_table ORDER BY \"some_column\"\". In this scenario \n\nPostgreSQL is faster than MS Access or MySQL by more than 100 seconds.\n\nWe have run many other queries (not complex, at most nesting of 5 inner joins) and MS Access is always faster. We have seen \n\nby looking at the network activity in the Windows Task Manager that the main problem is the transfer speed. We also have \n\nnoticed that MS Access quickly downloads the file that has the necesary information and works on it locally on the client \n\ncomputer. The queries, obviously, run faster if the client computer has more resources (CPU speed, RAM, etc.). The fact that \n\nthe client computer does not use any resource to execute the query, only to receive the results, is one big plus for \n\nPostgreSQL (we think). We need,however, to improve the performance of the queries that return a lot of rows because those are \n\nthe most used queries.\n\nWe searched the postgresql archives, mailing lists, etc. and have tried changing the parameters of the PostgreSQL server(both \n\non Linux and Windows)(We also tried with the default parameters) and changing the parameters of the ODBC driver as suggested. \n\nWe still get aproximately the same results. 
We have even changed some TCP/IP parameters(only in Windows) but no improvement.\n\nTo get to the point: Is this problem with the transfer rates a PostgreSQL server/PostgresQL ODBC driver limitation?\nIs there a way to increase the transfer rates?\n\nThank you very much for any help received!\n\nHansell E. Baran Altuve\n\nP.S.: I apologize for the lenght of this post and for any missing information you might need. I will gladly hand out all the \n\nnecessary information to receive any help with my problem. Thanks again!\n\n \t\t\n---------------------------------\nYahoo! Music Unlimited - Access over 1 million songs.Try it free. \nHi. I'm new at using PostgreSQL.Where I work, all databases were built with MS Access. The Access files are hosted by computers with Windows 2000 and Windows XP. A new server is on its way and only Open Source Software is going to be installed. The OS is going to be SUSE Linux 10.1 and we are making comparisons between MySQL, PostgreSQL and MS Access. We installed MySQL and PostgreSQL on both SUSE and Windows XP (MySQL & PostgreSQL DO NOT run at the same time)(There is one HDD for Windows and one for Linux)The \"Test Server\" in which we install the DBMS has the following characteristics:CPU speed = 1.3 GHzRAM = 512 MBHDD = 40 GBThe biggest table has 544371 rows(tuples?) with 55 rows. All fields are float8. Only 1 is varchar(255) and 1 timestamp.We query the MS Access databases through Visual Basic Programs and ODBC Drivers. We made a Visual Basic program that uses ADO to connect to ALL three DBMS using\n ODBC drivers.When we run the following query \"SELECT * FROM big_table\", we get the following resutls:MS Access- Execution time ~ 51 seconds (Depending on the client machine, it can go as low as 20 seconds)- Network Utilization ~ 80 Mbps (According to Windows Task Manager)MySQL 5.0 (under Windows)- Execution time ~ 630 seconds- Network Utilization ~ 8 MbpsPostgreSQL 8.1 (under Windows)- Execution time ~ 290 seconds)- Network Utilization ~ 13 MbpsMS Access (under Linux. MS Access files are in the Linux computer which has the SAMBA server running. The client computer has a mapped network drive that conects to the Linux files.)- Execution time ~ 55 seconds (Depending on the client machine, it can go as low as 20 seconds)- Network Utilization ~ 76 Mbps (According to Windows Task Manager)MySQL 5.0(under Linux)- Execution time ~ 440 seconds- Network Utilization ~ 11\n MbpsPostgreSQL 8.1(under Linux)- Execution time ~ 180 seconds)- Network Utilization ~ 18 MbpsVery different results are obtained if a the query \"SELECT * from big_table ORDER BY \"some_column\"\". In this scenario PostgreSQL is faster than MS Access or MySQL by more than 100 seconds.We have run many other queries (not complex, at most nesting of 5 inner joins) and MS Access is always faster. We have seen by looking at the network activity in the Windows Task Manager that the main problem is the transfer speed. We also have noticed that MS Access quickly downloads the file that has the necesary information and works on it locally on the client computer. The queries, obviously, run faster if the client computer has more resources (CPU speed, RAM, etc.). The fact that the client computer does not use any resource to execute the query, only to receive the results, is one big plus for PostgreSQL\n (we think). We need,however, to improve the performance of the queries that return a lot of rows because those are the most used queries.We searched the postgresql archives, mailing lists, etc. 
and have tried changing the parameters of the PostgreSQL server(both on Linux and Windows)(We also tried with the default parameters) and changing the parameters of the ODBC driver as suggested. We still get aproximately the same results. We have even changed some TCP/IP parameters(only in Windows) but no improvement.To get to the point: Is this problem with the transfer rates a PostgreSQL server/PostgresQL ODBC driver limitation?Is there a way to increase the transfer rates?Thank you very much for any help received!Hansell E. Baran AltuveP.S.: I apologize for the lenght of this post and for any missing information you might need. I will gladly hand out all the necessary information to receive any help with\n my problem. Thanks again!\nYahoo! Music Unlimited - Access over 1 million songs.\nTry it free.",
"msg_date": "Thu, 3 Aug 2006 16:39:39 -0700 (PDT)",
"msg_from": "hansell baran <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow transfer speeds with PostgreSQL"
},
{
"msg_contents": "On Aug 3, 2006, at 19:39 , hansell baran wrote:\n> When we run the following query \"SELECT * FROM big_table\", we get \n> the following resutls:\n> Very different results are obtained if a the query \"SELECT * from \n> big_table ORDER BY \"some_column\"\". In this scenario\n\nYou should perform your test with queries which are identical or \nsimilar to the queries which the database will really be seeing. \nAnything else isn't really relevant for tuning because different \nconfigurations cater to different types of workloads.\n\n-M\nOn Aug 3, 2006, at 19:39 , hansell baran wrote:When we run the following query \"SELECT * FROM big_table\", we get the following resutls: Very different results are obtained if a the query \"SELECT * from big_table ORDER BY \"some_column\"\". In this scenario You should perform your test with queries which are identical or similar to the queries which the database will really be seeing. Anything else isn't really relevant for tuning because different configurations cater to different types of workloads. -M",
"msg_date": "Thu, 10 Aug 2006 15:02:09 -0400",
"msg_from": "AgentM <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow transfer speeds with PostgreSQL"
},
{
"msg_contents": "On 8/3/06, hansell baran <[email protected]> wrote:\n> Hi. I'm new at using PostgreSQL.\n> Where I work, all databases were built with MS Access. The Access files are\n> hosted by computers with Windows 2000 and Windows\n>\n> XP. A new server is on its way and only Open Source Software is going to be\n> installed. The OS is going to be SUSE Linux 10.1\n>\n> and we are making comparisons between MySQL, PostgreSQL and MS Access. We\n> installed MySQL and PostgreSQL on both SUSE and\n>\n> Windows XP (MySQL & PostgreSQL DO NOT run at the same time)(There is one HDD\n> for Windows and one for Linux)\n> The \"Test Server\" in which we install the DBMS has the following\n> characteristics:\n>\n> CPU speed = 1.3 GHz\n> RAM = 512 MB\n> HDD = 40 GB\n>\n> The biggest table has 544371 rows(tuples?) with 55 rows. All fields are\n> float8. Only 1 is varchar(255) and 1 timestamp.\n> We query the MS Access databases through Visual Basic Programs and ODBC\n> Drivers. We made a Visual Basic program that uses ADO\n>\n> to connect to ALL three DBMS using ODBC drivers.\n>\n> When we run the following query \"SELECT * FROM big_table\", we get the\n> following resutls:\n>\n> MS Access\n> - Execution time ~ 51 seconds (Depending on the client machine, it can go as\n> low as 20 seconds)\n> - Network Utilization ~ 80 Mbps (According to Windows Task Manager)\n>\n> MySQL 5.0 (under Windows)\n> - Execution time ~ 630 seconds\n> - Network Utilization ~ 8 Mbps\n>\n> PostgreSQL 8.1 (under Windows)\n> - Execution time ~ 290 seconds)\n> - Network Utilization ~ 13 Mbps\n>\n>\n> MS Access (under Linux. MS Access files are in the Linux computer which has\n> the SAMBA server running. The client computer has\n>\n> a mapped network drive that conects to the Linux files.)\n> - Execution time ~ 55 seconds (Depending on the client machine, it can go as\n> low as 20 seconds)\n> - Network Utilization ~ 76 Mbps (According to Windows Task Manager)\n>\n> MySQL 5.0(under Linux)\n> - Execution time ~ 440 seconds\n> - Network Utilization ~ 11 Mbps\n>\n> PostgreSQL 8.1(under Linux)\n> - Execution time ~ 180 seconds)\n> - Network Utilization ~ 18 Mbps\n>\n>\n> Very different results are obtained if a the query \"SELECT * from big_table\n> ORDER BY \"some_column\"\". In this scenario\n\nyou have to be careful comparing access to mysql/postgresql in this\nway because the architecture is different...these results are a bit\nmisleading. access can do some optimization tricks on very simple\nqueries, especially select * from bigtable becuase the result does not\nhave to be fully materialized and returned to the client.\n\n> PostgreSQL is faster than MS Access or MySQL by more than 100 seconds.\n>\n> We have run many other queries (not complex, at most nesting of 5 inner\n> joins) and MS Access is always faster. We have seen\n\ni find this really hard to believe. is your postgresql database\nproperly indexed and did you run analyze? do the standard\n-performance thing, run the query in with explain analyze:\n\nexplain anaylze 5_table_join_query\n\nand post the results to this list.\n\nmerlin\n",
"msg_date": "Thu, 10 Aug 2006 16:51:16 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow transfer speeds with PostgreSQL"
},
{
"msg_contents": "Hi, Hansell,\n\nhansell baran wrote:\n\n> When we run the following query \"SELECT * FROM big_table\", we get the\n> following resutls:\n\nJust for Curiosity:\n\nCould you try to \"COPY big_table TO stdout\" from psql[.exe]? (and\npossibly redirect the psql output to /dev/null or so?)\n\n> Is there a way to increase the transfer rates?\n\nWhich file system do you use?\n\nCould you try to \"VACUUM FULL\" the tables?\n\nI assume that, for complex queries, you have all the appropriate indices\netc.\n\nAlso, I have to admit, that for single-client scenarios and simple,\nmostly read-only queries, PostgreSQL tends to be slower than Access and\nMySQL.\n\nHowever, this changes as soon as you have multiple concurrent writing\nclients. You should take this into account when benchmarking your\nservers (by modelling the appropriate benchmarks), and when deciding\nwhich database to use (by trying to estimate future usage patterns).\n\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Fri, 11 Aug 2006 11:02:12 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow transfer speeds with PostgreSQL"
}
] |
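Two quick checks that follow from the suggestions in this thread, sketched in SQL (big_table and some_column are the placeholder names from the post, not a real schema): COPY measures raw transfer speed without per-row ODBC conversion, and a cursor lets a client fetch a large result in batches instead of materializing everything at once.

    COPY big_table TO STDOUT;   -- raw transfer test, run from psql

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table ORDER BY some_column;
    FETCH FORWARD 10000 FROM big_cur;   -- repeat until it returns no rows
    CLOSE big_cur;
    COMMIT;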
[
{
"msg_contents": "> WRT seek performance, we're doing 2500 seeks per second on the\nSun/Thumper on 36 disks. \n\nLuke, \n\nHave you had time to run benchmarksql against it yet? I'm just curious\nabout the IO seeks/s vs. transactions/minute correlation...\n\n/Mikael\n\n\n\n\n\n",
"msg_date": "Fri, 4 Aug 2006 10:08:05 +0200",
"msg_from": "\"Mikael Carneholm\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: RAID stripe size question"
}
] |
[
{
"msg_contents": "Hi,\n\nIs there any inherent benefit of using a the IN operator versus \njoining a temporary table? Should they offer near equal performance? \nIt appears bitmap scan's aren't done when matching across a small \ntemporary table.\n\nI have a temporary table with 5 integers in it that I'm matching \nagainst mildly complex view that has 5 joins. I've analyzed the \ndatabase after the temporary table was created.\n\nMatching against the temporary table takes: 36492.836 ms.\nMatching using the IN operator with the same content takes: 2.732 ms.\n\nThese measurements are after the query has been run a few times, so \nthe data should be in cache.\n\nIt would appear that the temporary table's join isn't evaluated deep \nenough in the query plan to prevent the more expensive joins from \nrunning, is there a way for force it? Could some setting be wrong \nthat telling the planner to make this decision? The same thing \nhappens when I perform the join without the view.\n\nselect * from foo;\n oid\n--------\n161007\n161008\n161000\n161009\n161002\n(5 rows)\n\n\nPlan for IN match:\n\n=# explain analyze select * from crawled_url_full_view where \ncrawled_url_full_view.oid in (161007, 161008, 161000, 161009, 161002);\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n-------------------\nHash Left Join (cost=14.50..94.11 rows=5 width=538) (actual \ntime=1.025..1.522 rows=5 loops=1)\n Hash Cond: (\"outer\".classification_set_id = \"inner\".id)\n Join Filter: (\"outer\".classification_set_id IS NOT NULL)\n -> Hash Left Join (cost=13.30..92.86 rows=5 width=526) (actual \ntime=0.794..1.251 rows=5 loops=1)\n Hash Cond: (\"outer\".charset_id = \"inner\".id)\n Join Filter: (\"outer\".charset_id IS NOT NULL)\n -> Hash Left Join (cost=12.21..91.70 rows=5 width=515) \n(actual time=0.631..1.048 rows=5 loops=1)\n Hash Cond: (\"outer\".http_error_description_id = \n\"inner\".id)\n Join Filter: (\"outer\".http_error_description_id IS \nNOT NULL)\n -> Hash Left Join (cost=11.13..90.59 rows=5 \nwidth=472) (actual time=0.488..0.868 rows=5 loops=1)\n Hash Cond: (\"outer\".content_type_id = \"inner\".id)\n Join Filter: (\"outer\".content_type_id IS NOT NULL)\n -> Nested Loop Left Join (cost=10.02..89.41 \nrows=5 width=443) (actual time=0.244..0.578 rows=5 loops=1)\n Join Filter: (\"outer\".redirect_url_id IS \nNOT NULL)\n -> Nested Loop Left Join \n(cost=10.02..59.56 rows=5 width=339) (actual time=0.225..0.488 rows=5 \nloops=1)\n -> Bitmap Heap Scan on \ncrawled_url (cost=10.02..29.71 rows=5 width=235) (actual \ntime=0.170..0.217 rows=5 loops=1)\n Recheck Cond: ((oid = 161007) \nOR (oid = 161008) OR (oid = 161000) OR (oid = 161009) OR (oid = 161002))\n -> BitmapOr \n(cost=10.02..10.02 rows=5 width=0) (actual time=0.137..0.137 rows=0 \nloops=1)\n -> Bitmap Index Scan \non crawled_url_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.061..0.061 rows=1 loops=1)\n Index Cond: (oid \n= 161007)\n -> Bitmap Index Scan \non crawled_url_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.013..0.013 rows=1 loops=1)\n Index Cond: (oid \n= 161008)\n -> Bitmap Index Scan \non crawled_url_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.013..0.013 rows=1 loops=1)\n Index Cond: (oid \n= 161000)\n -> Bitmap Index Scan \non crawled_url_pkey (cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.014..0.014 rows=1 loops=1)\n Index Cond: (oid \n= 161009)\n -> Bitmap Index Scan \non crawled_url_pkey 
(cost=0.00..2.00 rows=1 width=0) (actual \ntime=0.012..0.012 rows=1 loops=1)\n Index Cond: (oid \n= 161002)\n -> Index Scan using url_pkey on \nurl (cost=0.00..5.96 rows=1 width=108) (actual time=0.031..0.036 \nrows=1 loops=5)\n Index Cond: (url.url_id = \n\"outer\".url_id)\n -> Index Scan using url_pkey on url r1 \n(cost=0.00..5.96 rows=1 width=108) (actual time=0.004..0.004 rows=0 \nloops=5)\n Index Cond: (r1.url_id = \n\"outer\".redirect_url_id)\n -> Hash (cost=1.09..1.09 rows=9 width=33) \n(actual time=0.130..0.130 rows=9 loops=1)\n -> Seq Scan on content_types \n(cost=0.00..1.09 rows=9 width=33) (actual time=0.017..0.062 rows=9 \nloops=1)\n -> Hash (cost=1.06..1.06 rows=6 width=47) (actual \ntime=0.088..0.088 rows=6 loops=1)\n -> Seq Scan on http_error_descriptions \n(cost=0.00..1.06 rows=6 width=47) (actual time=0.010..0.040 rows=6 \nloops=1)\n -> Hash (cost=1.08..1.08 rows=8 width=15) (actual \ntime=0.103..0.103 rows=8 loops=1)\n -> Seq Scan on charsets (cost=0.00..1.08 rows=8 \nwidth=15) (actual time=0.011..0.048 rows=8 loops=1)\n -> Hash (cost=1.16..1.16 rows=16 width=16) (actual \ntime=0.175..0.175 rows=16 loops=1)\n -> Seq Scan on classification_sets (cost=0.00..1.16 \nrows=16 width=16) (actual time=0.012..0.088 rows=16 loops=1)\nTotal runtime: 2.743 ms\n(41 rows)\n\n\n\nPlan for temp table match:\n\n\n=# explain analyze select * from foo, crawled_url_full_view where \ncrawled_url_full_view.oid = foo.oid;\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------\nHash IN Join (cost=35667.15..145600.71 rows=5 width=538) (actual \ntime=22371.445..36482.823 rows=5 loops=1)\n Hash Cond: (\"outer\".oid = \"inner\".oid)\n -> Hash Left Join (cost=35666.09..143698.61 rows=380198 \nwidth=538) (actual time=9901.782..35218.758 rows=360531 loops=1)\n Hash Cond: (\"outer\".classification_set_id = \"inner\".id)\n Join Filter: (\"outer\".classification_set_id IS NOT NULL)\n -> Hash Left Join (cost=35664.89..140493.61 rows=380198 \nwidth=526) (actual time=9901.456..32363.212 rows=360531 loops=1)\n Hash Cond: (\"outer\".charset_id = \"inner\".id)\n Join Filter: (\"outer\".charset_id IS NOT NULL)\n -> Hash Left Join (cost=35663.79..135684.27 \nrows=380198 width=515) (actual time=9901.257..29400.189 rows=360531 \nloops=1)\n Hash Cond: (\"outer\".http_error_description_id = \n\"inner\".id)\n Join Filter: (\"outer\".http_error_description_id \nIS NOT NULL)\n -> Hash Left Join (cost=35662.71..133782.19 \nrows=380198 width=472) (actual time=9901.080..26691.473 rows=360531 \nloops=1)\n Hash Cond: (\"outer\".content_type_id = \n\"inner\".id)\n Join Filter: (\"outer\".content_type_id IS \nNOT NULL)\n -> Hash Left Join \n(cost=35661.60..128972.84 rows=380198 width=443) (actual \ntime=9900.802..23743.323 rows=360531 loops=1)\n Hash Cond: (\"outer\".redirect_url_id \n= \"inner\".url_id)\n Join Filter: \n(\"outer\".redirect_url_id IS NOT NULL)\n -> Hash Left Join \n(cost=17830.80..66680.80 rows=380198 width=339) (actual \ntime=4592.701..14466.994 rows=360531 loops=1)\n Hash Cond: (\"outer\".url_id = \n\"inner\".url_id)\n -> Seq Scan on crawled_url \n(cost=0.00..10509.98 rows=380198 width=235) (actual \ntime=0.026..2976.911 rows=360531 loops=1)\n -> Hash \n(cost=10627.04..10627.04 rows=377104 width=108) (actual \ntime=4591.703..4591.703 rows=382149 loops=1)\n -> Seq Scan on url \n(cost=0.00..10627.04 rows=377104 width=108) (actual \ntime=0.041..2142.702 rows=382149 loops=1)\n -> Hash 
(cost=10627.04..10627.04 \nrows=377104 width=108) (actual time=5307.540..5307.540 rows=382149 \nloops=1)\n -> Seq Scan on url r1 \n(cost=0.00..10627.04 rows=377104 width=108) (actual \ntime=0.138..2503.577 rows=382149 loops=1)\n -> Hash (cost=1.09..1.09 rows=9 \nwidth=33) (actual time=0.144..0.144 rows=9 loops=1)\n -> Seq Scan on content_types \n(cost=0.00..1.09 rows=9 width=33) (actual time=0.020..0.068 rows=9 \nloops=1)\n -> Hash (cost=1.06..1.06 rows=6 width=47) \n(actual time=0.108..0.108 rows=6 loops=1)\n -> Seq Scan on http_error_descriptions \n(cost=0.00..1.06 rows=6 width=47) (actual time=0.015..0.049 rows=6 \nloops=1)\n -> Hash (cost=1.08..1.08 rows=8 width=15) (actual \ntime=0.129..0.129 rows=8 loops=1)\n -> Seq Scan on charsets (cost=0.00..1.08 \nrows=8 width=15) (actual time=0.014..0.058 rows=8 loops=1)\n -> Hash (cost=1.16..1.16 rows=16 width=16) (actual \ntime=0.234..0.234 rows=16 loops=1)\n -> Seq Scan on classification_sets (cost=0.00..1.16 \nrows=16 width=16) (actual time=0.014..0.107 rows=16 loops=1)\n -> Hash (cost=1.05..1.05 rows=5 width=4) (actual \ntime=0.092..0.092 rows=5 loops=1)\n -> Seq Scan on foo (cost=0.00..1.05 rows=5 width=4) \n(actual time=0.022..0.044 rows=5 loops=1)\nTotal runtime: 36492.836 ms\n(35 rows)\n\n\nDefinition of the view:\n\ncreate view crawled_url_full_view as\nselect crawled_url.*,\nurl.url,\nr1.url as redirect_url,\ncontent_types.type as content_type,\nhttp_error_descriptions.error as http_error_description,\ncharsets.name as charset,\nclassification_sets.name as classification_set\nfrom crawled_url left join url on url.url_id = crawled_url.url_id\nleft join url as r1 on (r1.url_id = crawled_url.redirect_url_id and \ncrawled_url.redirect_url_id is not null)\nleft join content_types on (content_types.id = \ncrawled_url.content_type_id and crawled_url.content_type_id is not null)\nleft join http_error_descriptions on (http_error_descriptions.id = \ncrawled_url.http_error_description_id and \ncrawled_url.http_error_description_id is not null)\nleft join charsets on (charsets.id = crawled_url.charset_id and \ncrawled_url.charset_id is not null)\nleft join classification_sets on (classification_sets.id = \ncrawled_url.classification_set_id and \ncrawled_url.classification_set_id is not null);\n\n\nVersion is: PostgreSQL 8.1.4 on i686-pc-linux-gnu, compiled by GCC \ngcc (GCC) 4.0.1 20050727 (Red Hat 4.0.1-5)\n\nwork_mem=30000\nshared_buffers=5000\neffective_cache_size=15000\n\nThanks for any help,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nWeb: http://www.infogears.com\n\n\n\n",
"msg_date": "Fri, 4 Aug 2006 16:37:55 -0600",
"msg_from": "Rusty Conover <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Plan - Bitmap Index Scan and Views"
},
{
"msg_contents": "Rusty Conover <[email protected]> writes:\n> Is there any inherent benefit of using a the IN operator versus \n> joining a temporary table? Should they offer near equal performance? \n> It appears bitmap scan's aren't done when matching across a small \n> temporary table.\n\nI believe the problem you're facing is that existing PG releases\ndon't know how to rearrange join order in the face of outer joins,\nand your view is full of outer joins. So the join against the temp\ntable happens after forming the full output of the view, whereas you\ndesperately need it to happen at the bottom of the join stack.\n\nCVS tip (8.2-to-be) has some ability to rearrange outer joins, and\nI'm interested to know whether it's smart enough to fix your problem.\nBut you have not provided enough info to let someone else duplicate\nyour test case. Would you be willing to download CVS or a recent\nnightly snapshot and see what it does with your problem?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Aug 2006 22:15:18 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan - Bitmap Index Scan and Views "
},
{
"msg_contents": "On Aug 4, 2006, at 8:15 PM, Tom Lane wrote:\n\n> Rusty Conover <[email protected]> writes:\n>> Is there any inherent benefit of using a the IN operator versus\n>> joining a temporary table? Should they offer near equal performance?\n>> It appears bitmap scan's aren't done when matching across a small\n>> temporary table.\n>\n> I believe the problem you're facing is that existing PG releases\n> don't know how to rearrange join order in the face of outer joins,\n> and your view is full of outer joins. So the join against the temp\n> table happens after forming the full output of the view, whereas you\n> desperately need it to happen at the bottom of the join stack.\n>\n> CVS tip (8.2-to-be) has some ability to rearrange outer joins, and\n> I'm interested to know whether it's smart enough to fix your problem.\n> But you have not provided enough info to let someone else duplicate\n> your test case. Would you be willing to download CVS or a recent\n> nightly snapshot and see what it does with your problem?\n>\n> \t\t\tregards, tom lane\n\n\nAbsolutely, I'll attempt to run the test against the current CVS HEAD.\n\nDo I need to pg_dump and restore from 8.1.4?\n\nWhat other information would be helpful in the meantime?\n\nThanks,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nWeb: http://www.infogears.com\n\n\n\n\nOn Aug 4, 2006, at 8:15 PM, Tom Lane wrote:Rusty Conover <[email protected]> writes: Is there any inherent benefit of using a the IN operator versus joining a temporary table? Should they offer near equal performance? It appears bitmap scan's aren't done when matching across a small temporary table. I believe the problem you're facing is that existing PG releasesdon't know how to rearrange join order in the face of outer joins,and your view is full of outer joins. So the join against the temptable happens after forming the full output of the view, whereas youdesperately need it to happen at the bottom of the join stack.CVS tip (8.2-to-be) has some ability to rearrange outer joins, andI'm interested to know whether it's smart enough to fix your problem.But you have not provided enough info to let someone else duplicateyour test case. Would you be willing to download CVS or a recentnightly snapshot and see what it does with your problem? regards, tom lane Absolutely, I'll attempt to run the test against the current CVS HEAD.Do I need to pg_dump and restore from 8.1.4?What other information would be helpful in the meantime?Thanks,Rusty --Rusty ConoverInfoGears Inc.Web: http://www.infogears.com",
"msg_date": "Fri, 4 Aug 2006 20:56:54 -0600",
"msg_from": "Rusty Conover <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query Plan - Bitmap Index Scan and Views "
},
{
"msg_contents": "Rusty Conover <[email protected]> writes:\n> Absolutely, I'll attempt to run the test against the current CVS HEAD.\n\n> Do I need to pg_dump and restore from 8.1.4?\n\nYup, fraid so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 04 Aug 2006 23:27:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Plan - Bitmap Index Scan and Views "
}
] |
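For readers on 8.1 who hit the same plan: until the outer-join reordering Tom mentions is available, one workaround is to resolve the keys first and hand the planner literal values it can turn into index probes, which is effectively what the fast IN form in this thread does. A rough sketch using the values from the post (two round trips from the application):

    SELECT oid FROM foo;   -- returns 161007, 161008, 161000, 161009, 161002

    SELECT * FROM crawled_url_full_view
    WHERE oid IN (161007, 161008, 161000, 161009, 161002);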
[
{
"msg_contents": "I am do some consulting for an animal hospital in the Boston, MA area.\nThey wanted a new server to run their database on. The client wants\neverything from one vendor, they wanted Dell initially, I'd advised\nagainst it. I recommended a dual Opteron system from either Sun or HP.\nThey settled on a DL385 8GB of RAM with two disc U320 SCSI and a 6-disc\nU320 SCSI array. I recommended they add a RAID adapter with at 128MB and\nbattery backup, they added a HP SmartArray 642 to connect to the drive\narray in addition to the SmartArray 6i which came with the server.\n\nHas anyone worked with server before. I've read the SmartArray 6i is a\npoor performer, I wonder if the SmartArray 642 adapter would have the\nsame fate? \n\nThe database data is on the drive array(RAID10) and the pg_xlog is on\nthe internal RAID1 on the 6i controller. The results have been poor.\n\nMy guess is the controllers are garbage.\n\nThanks for any advice.\n\nSteve Poe\n\n\n\n\n\n\n\n\n\n\n",
"msg_date": "Sat, 05 Aug 2006 16:10:38 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql Performance on an HP DL385 and SmartArray 642"
},
{
"msg_contents": "Steve,\n\nOn 8/5/06 4:10 PM, \"Steve Poe\" <[email protected]> wrote:\n\n> I am do some consulting for an animal hospital in the Boston, MA area.\n> They wanted a new server to run their database on. The client wants\n> everything from one vendor, they wanted Dell initially, I'd advised\n> against it. I recommended a dual Opteron system from either Sun or HP.\n> They settled on a DL385 8GB of RAM with two disc U320 SCSI and a 6-disc\n> U320 SCSI array. I recommended they add a RAID adapter with at 128MB and\n> battery backup, they added a HP SmartArray 642 to connect to the drive\n> array in addition to the SmartArray 6i which came with the server.\n> \n> Has anyone worked with server before. I've read the SmartArray 6i is a\n> poor performer, I wonder if the SmartArray 642 adapter would have the\n> same fate? \n> \n> The database data is on the drive array(RAID10) and the pg_xlog is on\n> the internal RAID1 on the 6i controller. The results have been poor.\n> \n> My guess is the controllers are garbage.\n\nCan you run bonnie++ version 1.03a on the machine and report the results\nhere?\n\nIt could be OK if you have the latest Linux driver for cciss, someone has\nreported good results to this list with the latest, bleeding edge version of\nLinux (2.6.17).\n\n- Luke\n\n\n",
"msg_date": "Mon, 07 Aug 2006 18:10:41 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nI'll do that then post the results. I ran zcav on it (default settlings) on\nthe disc array formatted XFS and its peak MB/s was around 85-90. I am using\nkernel 2.6.17.7. mounting the disc array with noatime, nodiratime.\n\nThanks for your feedback.\n\nSteve\n\nOn 8/7/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> On 8/5/06 4:10 PM, \"Steve Poe\" <[email protected]> wrote:\n>\n> > I am do some consulting for an animal hospital in the Boston, MA area.\n> > They wanted a new server to run their database on. The client wants\n> > everything from one vendor, they wanted Dell initially, I'd advised\n> > against it. I recommended a dual Opteron system from either Sun or HP.\n> > They settled on a DL385 8GB of RAM with two disc U320 SCSI and a 6-disc\n> > U320 SCSI array. I recommended they add a RAID adapter with at 128MB and\n> > battery backup, they added a HP SmartArray 642 to connect to the drive\n> > array in addition to the SmartArray 6i which came with the server.\n> >\n> > Has anyone worked with server before. I've read the SmartArray 6i is a\n> > poor performer, I wonder if the SmartArray 642 adapter would have the\n> > same fate?\n> >\n> > The database data is on the drive array(RAID10) and the pg_xlog is on\n> > the internal RAID1 on the 6i controller. The results have been poor.\n> >\n> > My guess is the controllers are garbage.\n>\n> Can you run bonnie++ version 1.03a on the machine and report the results\n> here?\n>\n> It could be OK if you have the latest Linux driver for cciss, someone has\n> reported good results to this list with the latest, bleeding edge version\n> of\n> Linux (2.6.17).\n>\n> - Luke\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nLuke,\n\nI'll do that then post the results. I ran zcav on it (default\nsettlings) on the disc array formatted XFS and its peak MB/s was around\n85-90. I am using kernel 2.6.17.7. mounting the disc array with\nnoatime, nodiratime.\nThanks for your feedback.\n\nSteve\nOn 8/7/06, Luke Lonergan <[email protected]> wrote:\nSteve,On 8/5/06 4:10 PM, \"Steve Poe\" <[email protected]> wrote:> I am do some consulting for an animal hospital in the Boston, MA area.> They wanted a new server to run their database on. The client wants\n> everything from one vendor, they wanted Dell initially, I'd advised> against it. I recommended a dual Opteron system from either Sun or HP.> They settled on a DL385 8GB of RAM with two disc U320 SCSI and a 6-disc\n> U320 SCSI array. I recommended they add a RAID adapter with at 128MB and> battery backup, they added a HP SmartArray 642 to connect to the drive> array in addition to the SmartArray 6i which came with the server.\n>> Has anyone worked with server before. I've read the SmartArray 6i is a> poor performer, I wonder if the SmartArray 642 adapter would have the> same fate?>> The database data is on the drive array(RAID10) and the pg_xlog is on\n> the internal RAID1 on the 6i controller. 
The results have been poor.>> My guess is the controllers are garbage.Can you run bonnie++ version 1.03a on the machine and report the resultshere?\nIt could be OK if you have the latest Linux driver for cciss, someone hasreported good results to this list with the latest, bleeding edge version ofLinux (2.6.17).- Luke---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match",
"msg_date": "Mon, 7 Aug 2006 18:46:34 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "\n>> The database data is on the drive array(RAID10) and the pg_xlog is on\n>> the internal RAID1 on the 6i controller. The results have been poor.\n\nI have heard that the 6i was actually decent but to avoid the 5i.\n\nJoshua D. Drake\n\n\n>>\n>> My guess is the controllers are garbage.\n> \n> Can you run bonnie++ version 1.03a on the machine and report the results\n> here?\n> \n> It could be OK if you have the latest Linux driver for cciss, someone has\n> reported good results to this list with the latest, bleeding edge version of\n> Linux (2.6.17).\n> \n> - Luke\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 07 Aug 2006 19:19:08 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "There is 64MB on the 6i and 192MB on the 642 controller. I wish the\ncontrollers had a \"wrieback\" enable option like the LSI MegaRAID adapters\nhave. I have tried splitting the cache accelerator 25/75 75/25 0/100 100/0\nbut the results really did not improve.\n\nSteve\n\nOn 8/7/06, Joshua D. Drake <[email protected]> wrote:\n>\n>\n> >> The database data is on the drive array(RAID10) and the pg_xlog is on\n> >> the internal RAID1 on the 6i controller. The results have been poor.\n>\n> I have heard that the 6i was actually decent but to avoid the 5i.\n>\n> Joshua D. Drake\n>\n>\n> >>\n> >> My guess is the controllers are garbage.\n> >\n> > Can you run bonnie++ version 1.03a on the machine and report the results\n> > here?\n> >\n> > It could be OK if you have the latest Linux driver for cciss, someone\n> has\n> > reported good results to this list with the latest, bleeding edge\n> version of\n> > Linux (2.6.17).\n> >\n> > - Luke\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n> >\n>\n>\n> --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n> Providing the most comprehensive PostgreSQL solutions since 1997\n> http://www.commandprompt.com/\n>\n>\n>\n\nThere is 64MB on the 6i and 192MB on the 642 controller. I wish the\ncontrollers had a \"wrieback\" enable option like the LSI MegaRAID\nadapters have. I have tried splitting the cache accelerator 25/75 75/25\n0/100 100/0 but the results really did not improve.\n\nSteveOn 8/7/06, Joshua D. Drake <[email protected]> wrote:\n>> The database data is on the drive array(RAID10) and the pg_xlog is on>> the internal RAID1 on the 6i controller. The results have been poor.I have heard that the 6i was actually decent but to avoid the 5i.\nJoshua D. Drake>>>> My guess is the controllers are garbage.>> Can you run bonnie++ version 1.03a on the machine and report the results> here?>> It could be OK if you have the latest Linux driver for cciss, someone has\n> reported good results to this list with the latest, bleeding edge version of> Linux (2.6.17).>> - Luke>>>> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to> choose an index scan if your joining column's datatypes do not> match>-- === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/",
"msg_date": "Mon, 7 Aug 2006 19:28:46 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nHere are the results of two runs of 16GB file tests on XFS.\n\nscsi disc array\nxfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8\nscsi disc array\nxfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6\n\nThanks.\n\nSteve\n\n\n\n> Can you run bonnie++ version 1.03a on the machine and report the results\n> here?\n> \n> It could be OK if you have the latest Linux driver for cciss, someone has\n> reported good results to this list with the latest, bleeding edge version of\n> Linux (2.6.17).\n> \n> - Luke\n> \n\n\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\n",
"msg_date": "Mon, 07 Aug 2006 21:13:36 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "These number are pretty darn good for a four disk RAID 10, pretty close to\nperfect infact. Nice advert for the 642 - I guess we have a Hardware RAID\ncontroller than will read indpendently from mirrors.\n\nAlex\n\nOn 8/8/06, Steve Poe <[email protected]> wrote:\n>\n> Luke,\n>\n> Here are the results of two runs of 16GB file tests on XFS.\n>\n> scsi disc array\n> xfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1\n> ,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8\n> scsi disc array\n> xfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1\n> ,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6\n>\n> Thanks.\n>\n> Steve\n>\n>\n>\n> > Can you run bonnie++ version 1.03a on the machine and report the results\n> > here?\n> >\n> > It could be OK if you have the latest Linux driver for cciss, someone\n> has\n> > reported good results to this list with the latest, bleeding edge\n> version of\n> > Linux (2.6.17).\n> >\n> > - Luke\n> >\n>\n>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\nThese number are pretty darn good for a four disk RAID 10, pretty close to perfect infact. Nice advert for the 642 - I guess we have a Hardware RAID controller than will read indpendently from mirrors.Alex\nOn 8/8/06, Steve Poe <[email protected]> wrote:\nLuke,Here are the results of two runs of 16GB file tests on XFS.scsi disc arrayxfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8scsi disc array\nxfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6Thanks.Steve> Can you run bonnie++ version 1.03a on the machine and report the results\n> here?>> It could be OK if you have the latest Linux driver for cciss, someone has> reported good results to this list with the latest, bleeding edge version of> Linux (2.6.17).>\n> - Luke>>>> ---------------------------(end of broadcast)---------------------------> TIP 9: In versions below 8.0, the planner will ignore your desire to> choose an index scan if your joining column's datatypes do not\n> match---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq",
"msg_date": "Tue, 8 Aug 2006 02:40:39 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "\nOn Aug 5, 2006, at 7:10 PM, Steve Poe wrote:\n>\n> Has anyone worked with server before. I've read the SmartArray 6i is a\n> poor performer, I wonder if the SmartArray 642 adapter would have the\n> same fate?\n>\n\nMy newest db is a DL385, 6 disks. It runs very nicely. I have no \nissues with the 6i controller.\nIf you look in the pgsql-performance archives a week or two ago \nyou'll see a similar thread to\nthis one - in fact, it is also about a dl385 (but he had a 5i \ncontroller)\n\n--\nJeff Trout <[email protected]>\nhttp://www.dellsmartexitin.com/\nhttp://www.stuarthamm.net/\n\n\n\n",
"msg_date": "Tue, 8 Aug 2006 08:20:22 -0400",
"msg_from": "Jeff Trout <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and SmartArray 642"
},
{
"msg_contents": "Alex,\n\nMaybe I mis-stated, this is a 6-disk array.\n\nSteve\n\nOn 8/7/06, Alex Turner <[email protected]> wrote:\n>\n> These number are pretty darn good for a four disk RAID 10, pretty close to\n> perfect infact. Nice advert for the 642 - I guess we have a Hardware RAID\n> controller than will read indpendently from mirrors.\n>\n> Alex\n>\n> On 8/8/06, Steve Poe <[email protected]> wrote:\n>\n> > Luke,\n>\n> Here are the results of two runs of 16GB file tests on XFS.\n>\n> scsi disc array\n> xfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1\n> ,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8\n> scsi disc array\n> xfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1\n> ,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6\n>\n> Thanks.\n>\n> Steve\n>\n>\n>\n> > Can you run bonnie++ version 1.03a on the machine and report the results\n>\n> > here?\n> >\n> > It could be OK if you have the latest Linux driver for cciss, someone\n> has\n> > reported good results to this list with the latest, bleeding edge\n> version of\n> > Linux (2.6.17).\n> >\n> > - Luke\n> >\n>\n>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n\nAlex, Maybe I mis-stated, this is a 6-disk array.SteveOn 8/7/06, Alex Turner <[email protected]\n> wrote:These number are pretty darn good for a four disk RAID 10, pretty close to perfect infact. Nice advert for the 642 - I guess we have a Hardware RAID controller than will read indpendently from mirrors.\nAlex\nOn 8/8/06, Steve Poe <\[email protected]> wrote:\n\nLuke,Here are the results of two runs of 16GB file tests on XFS.scsi disc arrayxfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8scsi disc array\nxfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6Thanks.Steve> Can you run bonnie++ version 1.03a on the machine and report the results\n> here?>> It could be OK if you have the latest Linux driver for cciss, someone has> reported good results to this list with the latest, bleeding edge version of> Linux (2.6.17).>\n> - Luke>>>> ---------------------------(end of broadcast)---------------------------> TIP 9: In versions below 8.0, the planner will ignore your desire to> choose an index scan if your joining column's datatypes do not\n> match---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ? \nhttp://www.postgresql.org/docs/faq",
"msg_date": "Tue, 8 Aug 2006 06:57:10 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Hi,\n\n> Can you run bonnie++ version 1.03a on the machine and report the results\n> here?\n\nDo you know if the figures from bonnie++ are able to measure the\nperformance related to the overhead of the 'fsync' option? I had\nvery strange performance differences between two Dell 1850\nmachines months ago, and raw performance (hdparm, not bonnie++)\nwas similar, the only figure I could find with a significant\ndifference able to explain the issue was the \"await\" compound\nreported by \"iostat\" - but I was still very much in the dark :/\n\nhttp://archives.postgresql.org/pgsql-performance/2006-03/msg00407.php\n\n-- \nGuillaume Cottenceau\nCreate your personal SMS or WAP Service - visit http://mobilefriends.ch/\n",
"msg_date": "16 Aug 2006 10:52:25 +0200",
"msg_from": "Guillaume Cottenceau <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "> There is 64MB on the 6i and 192MB on the 642 controller. I wish the\n> controllers had a \"wrieback\" enable option like the LSI MegaRAID\n> adapters have. I have tried splitting the cache accelerator 25/75\n> 75/25 0/100 100/0 but the results really did not improve.\n\nThey have a writeback option, but you can't enable it unless you buy the\nbattery-pack for the controller. I believe it's enabled by default once\nyou get the BBWC.\n\n//Magnus\n\n",
"msg_date": "Fri, 18 Aug 2006 12:50:43 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
}
] |
[
{
"msg_contents": "Hi All,\n \n I am trying to back up a full copy of one of our databases (14G) and restore it on another server. Both databases run 7.3.2 version. Though the restore completed successfully, it took 9 hours for the process to complete. The destination server runs Fedora Core 3 with 512 MB RAM and has 1 processor. I have also deferred referential intergrity checks during the restore. I tried to tune some parameters in the config file, but it still takes 9 hours. \n \n I have tried this same procedure to restore a full copy, but using 8.1(pg_dump and pg_restore) on a different server and that process took only 2 hours for the same database. But we are unable to migrate to 8.1 at this point and stuck with 7.3.2.\n \n I use a script to dump/restore. I can send the same if that information is needed. \n \n Please give me some pointers on what else I should be looking at to reduce the restore time using 7.3.2 version.\n \n Thanks,\n Sincerely,\n Saranya Sivakumar\n \n \n \n \n \n\n \t\t\n---------------------------------\nYahoo! Messenger with Voice. Make PC-to-Phone Calls to the US (and 30+ countries) for 2�/min or less.\nHi All, I am trying to back up a full copy of one of our databases (14G) and restore it on another server. Both databases run 7.3.2 version. Though the restore completed successfully, it took 9 hours for the process to complete. The destination server runs Fedora Core 3 with 512 MB RAM and has 1 processor. I have also deferred referential intergrity checks during the restore. I tried to tune some parameters in the config file, but it still takes 9 hours. I have tried this same procedure to restore a full copy, but using 8.1(pg_dump and pg_restore) on a different server and that process took only 2 hours for the same database. But we are unable to migrate to 8.1 at this point and stuck with 7.3.2. I use a script to dump/restore. I can send the same if that information is needed. Please give me some\n pointers on what else I should be looking at to reduce the restore time using 7.3.2 version. Thanks, Sincerely, Saranya Sivakumar \nYahoo! Messenger with Voice. Make PC-to-Phone Calls to the US (and 30+ countries) for 2�/min or less.",
"msg_date": "Sun, 6 Aug 2006 09:54:38 -0700 (PDT)",
"msg_from": "Saranya Sivakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "7.3.2 pg_restore very slow"
},
{
"msg_contents": "Saranya Sivakumar wrote:\n> Hi All,\n> \n> I am trying to back up a full copy of one of our databases (14G) and\n> restore it on another server. Both databases run 7.3.2 version.\n> Though the restore completed successfully, it took 9 hours for the\n> process to complete. The destination server runs Fedora Core 3 with\n> 512 MB RAM and has 1 processor. I have also deferred referential\n> intergrity checks during the restore. I tried to tune some parameters\n> in the config file, but it still takes 9 hours.\n\nFirstly, you should upgrade to the most recent version of 7.3.x (7.3.15) \n- that's a *lot* of bug-fixes you are missing\n\nThen, I would temporarily disable fsync and increase sort_mem and \ncheckpoint_segments. What you're trying to do is make a single process \nrun as fast as possible, so allow it to grab more resources than you \nnormally would.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Mon, 07 Aug 2006 10:55:50 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
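A temporary override along the lines Richard suggests might look like the sketch below for 7.3. The values are illustrative rather than recommendations, the dump is assumed to be in a pg_restore-compatible (custom) format, and fsync must be switched back on as soon as the restore finishes.

    # hypothetical restore-only settings in postgresql.conf, to be reverted afterwards:
    #   fsync = false                # unsafe for normal operation, acceptable for a throwaway restore
    #   sort_mem = 65536             # KB per sort, speeds up index builds
    #   checkpoint_segments = 30     # fewer, larger checkpoints during the bulk load
    pg_ctl restart -D /var/lib/pgsql/data
    pg_restore -d mydb -U postgres mydb.dump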
{
"msg_contents": "Hi Richard,\n \n Thank you very much for the suggestions. As I said, we are stuck with 7.3.2 version for now. We have a Upgrade Project in place, but this backup is something we have to do immediately (we do not have enough time to test our application with 7.3.15 :( )\n \n The checkpoint segments occur every 1.15 minutes with the default setting.\n I tried tuning some parameters in the conf file, which took 4.5 hours for the restore.\n \n sort_mem = 40960 \n shared_buffers = 3000\n #checkpoint_segments = 3 (default)\n #fsync = true --I will disable this and try\n \n We can afford to have a downtime of only 1 to 1.5 hours.\n I am going to increase the shared_buffers, sort_mem and disable fysnc as suggested by you, and try the restore process again. \n \n I would appreciate any other suggestions/advice in this regard.\n \n Thanks,\n Saranya\n \n \n\nRichard Huxton <[email protected]> wrote:\n Saranya Sivakumar wrote:\n> Hi All,\n> \n> I am trying to back up a full copy of one of our databases (14G) and\n> restore it on another server. Both databases run 7.3.2 version.\n> Though the restore completed successfully, it took 9 hours for the\n> process to complete. The destination server runs Fedora Core 3 with\n> 512 MB RAM and has 1 processor. I have also deferred referential\n> intergrity checks during the restore. I tried to tune some parameters\n> in the config file, but it still takes 9 hours.\n\nFirstly, you should upgrade to the most recent version of 7.3.x (7.3.15) \n- that's a *lot* of bug-fixes you are missing\n\nThen, I would temporarily disable fsync and increase sort_mem and \ncheckpoint_segments. What you're trying to do is make a single process \nrun as fast as possible, so allow it to grab more resources than you \nnormally would.\n\n-- \nRichard Huxton\nArchonet Ltd\n\n\n \t\t\t\n---------------------------------\nSee the all-new, redesigned Yahoo.com. Check it out.\nHi Richard, Thank you very much for the suggestions. As I said, we are stuck with 7.3.2 version for now. We have a Upgrade Project in place, but this backup is something we have to do immediately (we do not have enough time to test our application with 7.3.15 :( ) The checkpoint segments occur every 1.15 minutes with the default setting. I tried tuning some parameters in the conf file, which took 4.5 hours for the restore. sort_mem = 40960 shared_buffers = 3000 #checkpoint_segments = 3 (default) #fsync = true --I will disable this and try We can afford to have a downtime of only 1 to 1.5 hours. I am going to increase the shared_buffers, sort_mem and disable fysnc as suggested by you, and try the restore process again. I\n would appreciate any other suggestions/advice in this regard. Thanks, Saranya Richard Huxton <[email protected]> wrote: Saranya Sivakumar wrote:> Hi All,> > I am trying to back up a full copy of one of our databases (14G) and> restore it on another server. Both databases run 7.3.2 version.> Though the restore completed successfully, it took 9 hours for the> process to complete. The destination server runs Fedora Core 3 with> 512 MB RAM and has 1 processor. I have also deferred referential> intergrity checks during the restore. I tried to tune some parameters> in the config file, but it still takes 9 hours.Firstly, you should upgrade to the most recent version of 7.3.x (7.3.15) - that's a *lot*\n of bug-fixes you are missingThen, I would temporarily disable fsync and increase sort_mem and checkpoint_segments. 
What you're trying to do is make a single process run as fast as possible, so allow it to grab more resources than you normally would.-- Richard HuxtonArchonet Ltd\nSee the all-new, redesigned Yahoo.com. Check it out.",
"msg_date": "Mon, 7 Aug 2006 08:35:02 -0700 (PDT)",
"msg_from": "Saranya Sivakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
{
"msg_contents": "Hi All,\n \n I tried to set shared_buffers= 10000, turned off fsync and reload the config file.\n But I got the following error:\n \n IpcMemoryCreate: shmget(key=5432001, size=85450752, 03600) failed: Invalid argument\n This error usually means that PostgreSQL's request for a shared memory\nsegment exceeded your kernel's SHMMAX parameter. You can either\nreduce the request size or reconfigure the kernel with larger SHMMAX.\nTo reduce the request size (currently 85450752 bytes), reduce\nPostgreSQL's shared_buffers parameter (currently 10000) and/or\nits max_connections parameter (currently 128).\n If the request size is already small, it's possible that it is less than\nyour kernel's SHMMIN parameter, in which case raising the request size or\nreconfiguring SHMMIN is called for.\n\n The total RAM available on this machine is 512MB. \n \n I am not sure how to set these parameters SHMMAX and SHMMIN. \n Any help/advice would be greatly appreciated.\n \n Thanks,\n Saranya\n \nRichard Huxton <[email protected]> wrote:\n Saranya Sivakumar wrote:\n> Hi All,\n> \n> I am trying to back up a full copy of one of our databases (14G) and\n> restore it on another server. Both databases run 7.3.2 version.\n> Though the restore completed successfully, it took 9 hours for the\n> process to complete. The destination server runs Fedora Core 3 with\n> 512 MB RAM and has 1 processor. I have also deferred referential\n> intergrity checks during the restore. I tried to tune some parameters\n> in the config file, but it still takes 9 hours.\n\nFirstly, you should upgrade to the most recent version of 7.3.x (7.3.15) \n- that's a *lot* of bug-fixes you are missing\n\nThen, I would temporarily disable fsync and increase sort_mem and \ncheckpoint_segments. What you're trying to do is make a single process \nrun as fast as possible, so allow it to grab more resources than you \nnormally would.\n\n-- \nRichard Huxton\nArchonet Ltd\n\n\n \t\t\n---------------------------------\nDo you Yahoo!?\n Everyone is raving about the all-new Yahoo! Mail Beta.\nHi All, I tried to set shared_buffers= 10000, turned off fsync and reload the config file. But I got the following error: IpcMemoryCreate: shmget(key=5432001, size=85450752, 03600) failed: Invalid argument This error usually means that PostgreSQL's request for a shared memorysegment exceeded your kernel's SHMMAX parameter. You can eitherreduce the request size or reconfigure the kernel with larger SHMMAX.To reduce the request size (currently 85450752 bytes), reducePostgreSQL's shared_buffers parameter (currently 10000) and/orits max_connections parameter (currently 128). If the request size is already small, it's possible that it is less thanyour kernel's SHMMIN parameter, in which case raising the request size orreconfiguring SHMMIN is called for. The total RAM available on this machine is 512MB. \n I am not sure how to set these parameters SHMMAX and SHMMIN. Any help/advice would be greatly appreciated. Thanks, Saranya Richard Huxton <[email protected]> wrote: Saranya Sivakumar wrote:> Hi All,> > I am trying to back up a full copy of one of our databases (14G) and> restore it on another server. Both databases run 7.3.2 version.> Though the restore completed successfully, it took 9 hours for the> process to complete. The destination server runs Fedora Core 3 with> 512 MB RAM and has 1 processor. I have also deferred referential> intergrity checks during the restore. 
I tried to tune some parameters> in the config file, but it still takes 9 hours.Firstly, you should upgrade to the\n most recent version of 7.3.x (7.3.15) - that's a *lot* of bug-fixes you are missingThen, I would temporarily disable fsync and increase sort_mem and checkpoint_segments. What you're trying to do is make a single process run as fast as possible, so allow it to grab more resources than you normally would.-- Richard HuxtonArchonet Ltd\nDo you Yahoo!? Everyone is raving about the all-new Yahoo! Mail Beta.",
"msg_date": "Mon, 7 Aug 2006 09:08:30 -0700 (PDT)",
"msg_from": "Saranya Sivakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
{
"msg_contents": "> IpcMemoryCreate: shmget(key=5432001, size=85450752, 03600) failed: Invalid argument\n> This error usually means that PostgreSQL's request for a shared memory\n> segment exceeded your kernel's SHMMAX parameter. You can either\n> reduce the request size or reconfigure the kernel with larger SHMMAX.\n> To reduce the request size (currently 85450752 bytes), reduce\n> PostgreSQL's shared_buffers parameter (currently 10000) and/or\n> its max_connections parameter (currently 128).\n> If the request size is already small, it's possible that it is less than\n> your kernel's SHMMIN parameter, in which case raising the request size or\n> reconfiguring SHMMIN is called for.\n\nif you cat /proc/sys/kernel/shmmax\nit will tell you what it is set to. It needs to be at least \"85450752\". The size that Postgresql\nis trying to grab.\n\nalso shmall may need to be adjusted also.\n\n> The total RAM available on this machine is 512MB. \n> \n> I am not sure how to set these parameters SHMMAX and SHMMIN. \n> Any help/advice would be greatly appreciated.\n\nhttp://www.postgresql.org/docs/8.1/interactive/kernel-resources.html\nThis will help you to set the kernel parameters.\n\nRegards,\n\nRichard Broersma Jr.\n\n",
"msg_date": "Mon, 7 Aug 2006 09:17:15 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
{
"msg_contents": "Hi Richard,\n \n Thank you very much for the information. The SHMMAX was set to 33554432, and that's why it failed to start the postmaster. Thanks for the link to the kernel resources article. I guess changing these parameters would require recompiling the kernel. \n \n Is there any work around without changing these parameters to make maximum use of RAM?\n \n We got a new server now with 2GB RAM, but it also has the same value for SHMMAX.\n \n And I am trying the restore with the following conf\n sort_mem = 40960 (changed from 1024)\n shared_buffers = 3000 (changed from 64)\n max_connections = 128 (changed from 32)\n \n Thanks,\n Saranya\n \n \n \n \nRichard Broersma Jr <[email protected]> wrote:\n\n > IpcMemoryCreate: shmget(key=5432001, size=85450752, 03600) failed: Invalid argument\n> This error usually means that PostgreSQL's request for a shared memory\n> segment exceeded your kernel's SHMMAX parameter. You can either\n> reduce the request size or reconfigure the kernel with larger SHMMAX.\n> To reduce the request size (currently 85450752 bytes), reduce\n> PostgreSQL's shared_buffers parameter (currently 10000) and/or\n> its max_connections parameter (currently 128).\n> If the request size is already small, it's possible that it is less than\n> your kernel's SHMMIN parameter, in which case raising the request size or\n> reconfiguring SHMMIN is called for.\n\nif you cat /proc/sys/kernel/shmmax\nit will tell you what it is set to. It needs to be at least \"85450752\". The size that Postgresql\nis trying to grab.\n\nalso shmall may need to be adjusted also.\n\n> The total RAM available on this machine is 512MB. \n> \n> I am not sure how to set these parameters SHMMAX and SHMMIN. \n> Any help/advice would be greatly appreciated.\n\nhttp://www.postgresql.org/docs/8.1/interactive/kernel-resources.html\nThis will help you to set the kernel parameters.\n\nRegards,\n\nRichard Broersma Jr.\n\n\n\n \t\t\n---------------------------------\nGroups are talking. We´re listening. Check out the handy changes to Yahoo! Groups. \nHi Richard, Thank you very much for the information. The SHMMAX was set to 33554432, and that's why it failed to start the postmaster. Thanks for the link to the kernel resources article. I guess changing these parameters would require recompiling the kernel. Is there any work around without changing these parameters to make maximum use of RAM? We got a new server now with 2GB RAM, but it also has the same value for SHMMAX. And I am trying the restore with the following conf sort_mem = 40960 (changed from 1024) shared_buffers = 3000 (changed from 64) \nmax_connections = 128 (changed from 32) Thanks, Saranya Richard Broersma Jr <[email protected]> wrote: > IpcMemoryCreate: shmget(key=5432001, size=85450752, 03600) failed: Invalid argument> This error usually means that PostgreSQL's request for a shared memory> segment exceeded your kernel's SHMMAX parameter. You can either> reduce the request size or reconfigure the kernel with larger SHMMAX.> To reduce the request size (currently 85450752 bytes), reduce> PostgreSQL's shared_buffers parameter (currently 10000) and/or> its max_connections parameter (currently 128).> If the request size is already small, it's possible that it is less than> your kernel's SHMMIN\n parameter, in which case raising the request size or> reconfiguring SHMMIN is called for.if you cat /proc/sys/kernel/shmmaxit will tell you what it is set to. It needs to be at least \"85450752\". 
The size that Postgresqlis trying to grab.also shmall may need to be adjusted also.> The total RAM available on this machine is 512MB. > > I am not sure how to set these parameters SHMMAX and SHMMIN. > Any help/advice would be greatly appreciated.http://www.postgresql.org/docs/8.1/interactive/kernel-resources.htmlThis will help you to set the kernel parameters.Regards,Richard Broersma Jr.\nGroups are talking. We´re listening. Check out the handy changes to Yahoo! Groups.",
"msg_date": "Mon, 7 Aug 2006 10:28:25 -0700 (PDT)",
"msg_from": "Saranya Sivakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
{
"msg_contents": "Hi, Saranya,\n\nSaranya Sivakumar wrote:\n> Thank you very much for the information. The SHMMAX was set to 33554432,\n> and that's why it failed to start the postmaster. Thanks for the link to\n> the kernel resources article. I guess changing these parameters would\n> require recompiling the kernel.\n\nAs stated on the\nhttp://www.postgresql.org/docs/8.1/interactive/kernel-resources.html\npage, those values can be changed via sysctl or echoing values into\n/proc, under linux at least.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Mon, 07 Aug 2006 20:25:10 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
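On Linux the change Markus describes needs no kernel rebuild; something like the following works at runtime, with the value sized (as an example only) to cover the 85450752-byte request from the earlier error message.

    # hypothetical example, run as root
    sysctl -w kernel.shmmax=134217728          # 128 MB, above the 85450752 bytes requested
    echo 134217728 > /proc/sys/kernel/shmmax   # equivalent /proc interface
    # to persist across reboots, add to /etc/sysctl.conf:
    #   kernel.shmmax = 134217728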
{
"msg_contents": "> Thank you very much for the information. The SHMMAX was set to 33554432, and that's why it\n> failed to start the postmaster. Thanks for the link to the kernel resources article. I guess\n> changing these parameters would require recompiling the kernel. \n> \n> Is there any work around without changing these parameters to make maximum use of RAM?\n> \n> We got a new server now with 2GB RAM, but it also has the same value for SHMMAX.\n> \n> And I am trying the restore with the following conf\n> sort_mem = 40960 (changed from 1024)\n> shared_buffers = 3000 (changed from 64)\n> max_connections = 128 (changed from 32)\n\nThis is one of the best links that I can give you in addition to the Postgresql Kernel resource\nlink.\nhttp://www.powerpostgresql.com/PerfList\n\nI am pretty much a beginner at resource tuning also. In fact, after googling for sources that\ndescribe how to tune kernel parameters, the postgresql documents remains the best documents I've\nfound so far.\n\nI would be interested if anyone else on the list knows of any resources or books that have an in\ndepth discussion on methods/strategies to tune kernel parameters to maximized usage of system\nresources and at the same time allow for harmonious sharing between various programs/services.\n\nRegards,\n\nRichard Broersma Jr.\n",
"msg_date": "Mon, 7 Aug 2006 12:18:56 -0700 (PDT)",
"msg_from": "Richard Broersma Jr <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
},
{
"msg_contents": "Hi All,\n \n Thanks Richard for the additional link. The information is very useful.\n \n The restore completed successfully in 2.5 hours in the new 2GB box, with the same configuration parameters. I think if I can tweak the parameters a little more, I should be able to get it down to the 1 hr down time that we can afford.\n \n Thanks again for all the help.\n \n Sincerely,\n Saranya\n \nRichard Broersma Jr <[email protected]> wrote:\n > Thank you very much for the information. The SHMMAX was set to 33554432, and that's why it\n> failed to start the postmaster. Thanks for the link to the kernel resources article. I guess\n> changing these parameters would require recompiling the kernel. \n> \n> Is there any work around without changing these parameters to make maximum use of RAM?\n> \n> We got a new server now with 2GB RAM, but it also has the same value for SHMMAX.\n> \n> And I am trying the restore with the following conf\n> sort_mem = 40960 (changed from 1024)\n> shared_buffers = 3000 (changed from 64)\n> max_connections = 128 (changed from 32)\n\nThis is one of the best links that I can give you in addition to the Postgresql Kernel resource\nlink.\nhttp://www.powerpostgresql.com/PerfList\n\nI am pretty much a beginner at resource tuning also. In fact, after googling for sources that\ndescribe how to tune kernel parameters, the postgresql documents remains the best documents I've\nfound so far.\n\nI would be interested if anyone else on the list knows of any resources or books that have an in\ndepth discussion on methods/strategies to tune kernel parameters to maximized usage of system\nresources and at the same time allow for harmonious sharing between various programs/services.\n\nRegards,\n\nRichard Broersma Jr.\n\n\n \t\t\t\t\n---------------------------------\nWant to be your own boss? Learn how on Yahoo! Small Business. \nHi All, Thanks Richard for the additional link. The information is very useful. The restore completed successfully in 2.5 hours in the new 2GB box, with the same configuration parameters. I think if I can tweak the parameters a little more, I should be able to get it down to the 1 hr down time that we can afford. Thanks again for all the help. Sincerely, Saranya Richard Broersma Jr <[email protected]> wrote: > Thank you very much for the information. The SHMMAX was set to 33554432, and that's why it> failed to start the postmaster. Thanks for the link to the kernel resources article. I guess> changing these parameters would require recompiling the kernel. >\n > Is there any work around without changing these parameters to make maximum use of RAM?> > We got a new server now with 2GB RAM, but it also has the same value for SHMMAX.> > And I am trying the restore with the following conf> sort_mem = 40960 (changed from 1024)> shared_buffers = 3000 (changed from 64)> max_connections = 128 (changed from 32)This is one of the best links that I can give you in addition to the Postgresql Kernel resourcelink.http://www.powerpostgresql.com/PerfListI am pretty much a beginner at resource tuning also. 
In fact, after googling for sources thatdescribe how to tune kernel parameters, the postgresql documents remains the best documents I'vefound so far.I would be interested if anyone else on the list knows of any resources or books that have an indepth discussion on methods/strategies to tune kernel parameters to maximized usage of\n systemresources and at the same time allow for harmonious sharing between various programs/services.Regards,Richard Broersma Jr.\nWant to be your own boss? Learn how on Yahoo! Small Business.",
"msg_date": "Mon, 7 Aug 2006 13:07:40 -0700 (PDT)",
"msg_from": "Saranya Sivakumar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] 7.3.2 pg_restore very slow"
}
] |
[
{
"msg_contents": "\nPostgres 8.1.4\nSlony 1.1.5\nLinux manny 2.6.12-10-k7-smp #1 SMP Fri Apr 28 14:17:26 UTC 2006 i686 \nGNU/Linux\n\nWe're seeing an average of 30,000 context-switches a sec. This problem \nwas much worse w/8.0 and got bearable with 8.1 but slowly resurfaced. \nAny ideas?\n\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 8 2 0 392184 40248 3040628 0 0 10012 2300 3371 43436 60 \n25 11 4\n10 2 0 334772 40256 3043340 0 0 2672 1892 3252 10073 84 \n14 1 1\n 9 2 0 338492 40280 3051272 0 0 7960 1612 3548 22013 77 \n16 4 3\n11 2 0 317040 40304 3064576 0 0 13172 1616 3870 42729 61 \n21 11 7\n 7 0 0 291496 40320 3078704 0 0 14192 504 3139 52200 58 \n24 12 7\n\nThe machine has 4 gigs of RAM, shared_buffers = 32768, max_connections = \n400, and currently does around 300-500 queries a second. I can provide \nmore info if needed.\n\n-- \nSumbry][\n\n",
"msg_date": "Mon, 07 Aug 2006 00:38:34 -0700",
"msg_from": "\"Donald C. Sumbry ][\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "High Context-Switches on Linux 8.1.4 Server"
},
{
"msg_contents": "> We're seeing an average of 30,000 context-switches a sec. This problem \n> was much worse w/8.0 and got bearable with 8.1 but slowly resurfaced. \n\nIs this from LWLock or spinlock contention? strace'ing a few backends\ncould tell the difference: look to see how many select(0,...) you see\ncompared to semop()s. Also, how many of these compared to real work\n(such as read/write calls)?\n\nDo you have any long-running transactions, and if so does shutting\nthem down help? There's been some discussion about thrashing of the\npg_subtrans buffers being a problem, and that's mainly a function of\nthe age of the oldest open transaction.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Aug 2006 08:52:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Context-Switches on Linux 8.1.4 Server "
},
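One way to collect the counts Tom asks for is to trace a busy backend for a fixed window and grep the result. The PID, window and output file below are placeholders.

    # hypothetical example: sample one backend for about 20 seconds
    strace -p 12345 -e trace=semop,select,read,write -o /tmp/backend.trace &
    sleep 20; kill %1
    grep -c 'semop('   /tmp/backend.trace   # LWLock waits
    grep -c 'select(0' /tmp/backend.trace   # spinlock delay loops
    grep -c 'read('    /tmp/backend.trace   # real work, for comparison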
{
"msg_contents": "Tom Lane wrote:\n>> We're seeing an average of 30,000 context-switches a sec. This problem \n>> was much worse w/8.0 and got bearable with 8.1 but slowly resurfaced. \n> \n> Is this from LWLock or spinlock contention? strace'ing a few backends\n> could tell the difference: look to see how many select(0,...) you see\n> compared to semop()s. Also, how many of these compared to real work\n> (such as read/write calls)?\n\nOver a 20 second interval, I've got about 85 select()s and 6,230 \nsemop()s. 2604 read()s vs 16 write()s.\n\n> Do you have any long-running transactions, and if so does shutting\n> them down help? There's been some discussion about thrashing of the\n> pg_subtrans buffers being a problem, and that's mainly a function of\n> the age of the oldest open transaction.\n\nNot long-running. We do have a badly behaving legacy app that is \nleaving some backends \"idle in transaction\" They're gone pretty quickly \nso I can't kill them fast enough, but running a pg_stat_activity will \nalways show at least a handful. Could this be contributing?\n\nBased on the number of semop's we're getting it does look like \nshared_memory may be getting thrased - any suggestions? We did try \nlowering shared_memory usage in half the previous day, but that did \nlittle to help (it didn't make performance any worse and we still saw \nthe high context-switches, but it didn't make it any better either).\n\n-- \nSumbry][\n",
"msg_date": "Mon, 07 Aug 2006 08:51:09 -0700",
"msg_from": "\"Donald C. Sumbry ][\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High Context-Switches on Linux 8.1.4 Server"
},
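A quick way to keep an eye on the sessions described above (assuming stats_command_string is on, so current_query is populated) is a query such as this; the database name is a placeholder.

    # hypothetical example: list open "idle in transaction" sessions, oldest first
    psql -d postgres -c "SELECT procpid, usename, query_start FROM pg_stat_activity WHERE current_query LIKE '<IDLE> in transaction%' ORDER BY query_start"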
{
"msg_contents": ">> Is this from LWLock or spinlock contention?\n\n> Over a 20 second interval, I've got about 85 select()s and 6,230 \n> semop()s. 2604 read()s vs 16 write()s.\n\nOK, so mostly LWLocks then.\n\n>> Do you have any long-running transactions,\n\n> Not long-running. We do have a badly behaving legacy app that is \n> leaving some backends \"idle in transaction\" They're gone pretty quickly \n> so I can't kill them fast enough, but running a pg_stat_activity will \n> always show at least a handful. Could this be contributing?\n\nSorry, I was unclear: it's the age of your oldest transaction that\ncounts (measured by how many xacts started since it), not how many\ncycles it's consumed or not.\n\nWith the 8.1 code it's possible for performance to degrade pretty badly\nonce the age of your oldest transaction exceeds 16K transactions. You\nwere not specific enough about the behavior of this legacy app to let\nme guess where you are on that scale ...\n\n> Based on the number of semop's we're getting it does look like \n> shared_memory may be getting thrased - any suggestions? We did try \n> lowering shared_memory usage in half the previous day,\n\nUnlikely to help --- if it is the pg_subtrans problem, the number of\nbuffers involved is set by a compile-time constant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 07 Aug 2006 14:34:41 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: High Context-Switches on Linux 8.1.4 Server "
},
{
"msg_contents": "Tom Lane wrote:\n> Sorry, I was unclear: it's the age of your oldest transaction that\n> counts (measured by how many xacts started since it), not how many\n> cycles it's consumed or not.\n\n> With the 8.1 code it's possible for performance to degrade pretty badly\n> once the age of your oldest transaction exceeds 16K transactions. You\n> were not specific enough about the behavior of this legacy app to let\n> me guess where you are on that scale ...\n\nUnderstood. This legacy apps wraps every single transaction (even read \nonly ones) inside of BEGIN; END; blocks. We do about 90+ percent reads \nto our database, and at 300+ queries a second that could quickly add up.\n\nDoes this sound like we should investigate this area more?\n\n>> Based on the number of semop's we're getting it does look like \n>> shared_memory may be getting thrased - any suggestions? We did try \n>> lowering shared_memory usage in half the previous day,\n> \n> Unlikely to help --- if it is the pg_subtrans problem, the number of\n> buffers involved is set by a compile-time constant.\n\nInteresting. One other thing to note, this application in particular \naccounts for only 4 percent of total queries and if we disable the \napplication the database runs like a champ. The only other huge \nvariable I can think of is this app's gratuitous use of cursors.\n\nI haven't read too much about Postgres performance especially when \ndealing with cursors, but could this be a variable? We are considering \nmodifying the app and removing all use of cursors and wonder if we're \nwasting our time or not.\n\nThanks for the help.\n\n-- \nSumbry][\n",
"msg_date": "Mon, 07 Aug 2006 12:27:42 -0700",
"msg_from": "\"Donald C. Sumbry ][\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: High Context-Switches on Linux 8.1.4 Server"
}
] |
[
{
"msg_contents": "Hi. I'm new at using PostgreSQL. I have found posts related to this one but there is not a definite answer or solution. Here it goes.\nWhere I work, all databases were built with MS Access. The Access files are hosted by computers with Windows 2000 and Windows XP. A new server is on its way and only Open Source Software is going to be installed. The OS is going to be SUSE Linux 10.1 and we are making comparisons between MySQL, PostgreSQL and MS Access. We installed MySQL and PostgreSQL on both SUSE and Windows XP (MySQL & PostgreSQL DO NOT run at the same time)(There is one HDD for Windows and one for Linux)\nThe \"Test Server\" in which we install the DBMS has the following characteristics:\n\nCPU speed = 1.3 GHz\nRAM = 512 MB\nHDD = 40 GB\n\nThe biggest table has 544371 rows(tuples?) with 55 rows. All fields are float8. Only 1 is varchar(255) and 1 timestamp.\nWe query the MS Access databases through Visual Basic Programs and ODBC Drivers. We made a Visual Basic program that uses ADO to connect to ALL three DBMS using ODBC drivers.\n\nWhen we run the following query \"SELECT * FROM big_table\", we get the following resutls:\n\nMS Access\n- Execution time ~ 51 seconds (Depending on the client machine, it can go as low as 20 seconds)\n- Network Utilization ~ 75 Mbps (According to Windows Task Manager)\n\nMySQL 5.0(under Windows)\n- Execution time ~ 630 seconds\n- Network Utilization ~ 8 Mbps\n\nPostgreSQL 8.1(under Windows)\n- Execution time ~ 290 seconds)\n- Network Utilization ~ 13 Mbps\n\n\nMS Access (under Linux. MS Access files are in the Linux computer which has the SAMBA server running. The client computer has a mapped network drive that conects to the Linux files.)\n- Execution time ~ 55 seconds (Depending on the client machine, it can go as low as 20 seconds)\n- Network Utilization ~ 70 Mbps (According to Windows Task Manager)\n\nMySQL 5.0(under Linux)\n- Execution time ~ 440 seconds\n- Network Utilization ~ 11 Mbps\n\nPostgreSQL 8.1(under Linux)\n- Execution time ~ 180 seconds)\n- Network Utilization ~ 18 Mbps\n\nDue to the fact that the query returns a lot of rows, I cannot use the ODBC driver with the \"Use Declare/Fetch\" option disabled. If I run the query with this option disabled, the transfer speed goes up to about 20 Mpbs (PostgreSQL in Windows) and ~35 Mbps (PostgreSQL in Linux) (The transfer speed never goes beyond 40 Mbps even if we query from several clients at the same time. If we query MS Access from several machines, the transfer speed goes almost to 85 Mbps. Obviously, these simultaneous querys run slower). The problem with running the query with the \"Use Declare/Fetch\" option disabled is that the client computer shows an error saying \"Out of memory while reading tuples\".\n\nVery different results are obtained if a the query \"SELECT * from big_table ORDER BY \"some_column\"\". In this scenario PostgreSQL is faster than MS Access or MySQL by more than 100 seconds. Transfer speed, however, transfer speed is still slower for PostgreSQL than for MS Access.\n\nWe have run many other queries (not complex, at most nesting of 5 inner joins) and MS Access is always faster. We have seen by looking at the network activity in the Windows Task Manager that the main problem is the transfer speed. We also have noticed that MS Access quickly downloads the file that has the necesary information and works on it locally on the client computer. The queries, obviously, run faster if the client computer has more resources (CPU speed, RAM, etc.). 
The fact that the client computer does not use any resource to execute the query, only to receive the results, is one big plus for PostgreSQL (we think). We need,however, to improve the performance of the queries that return a lot of rows because those are the most used queries.\n\nWe searched the postgresql archives, mailing lists, etc. and have tried changing the parameters of the PostgreSQL server(both on Linux and Windows)(We also tried with the default parameters) and changing the parameters of the ODBC driver as suggested. We still get aproximately the same results. We have even changed some TCP/IP parameters(only in Windows) but no improvement.\n\nWe have turned off all tracings, logs, and debugs of the ODBC driver. The behaviour is the same when querying from pgAdmin III.\n\nTo get to the point: Is this problem with the transfer rates a PostgreSQL server/PostgresQL ODBC driver limitation?\nIs there a way to increase the transfer rates?\n\nThank you very much for any help received!\n\nHansell E. Baran Altuve\n \t\t\n---------------------------------\nDo you Yahoo!?\n Get on board. You're invited to try the new Yahoo! Mail Beta.\nHi. I'm new at using PostgreSQL. I have found posts related to this one but there is not a definite answer or solution. Here it goes.Where I work, all databases were built with MS Access. The Access files are hosted by computers with Windows 2000 and Windows XP. A new server is on its way and only Open Source Software is going to be installed. The OS is going to be SUSE Linux 10.1 and we are making comparisons between MySQL, PostgreSQL and MS Access. We installed MySQL and PostgreSQL on both SUSE and Windows XP (MySQL & PostgreSQL DO NOT run at the same time)(There is one HDD for Windows and one for Linux)The \"Test Server\" in which we install the DBMS has the following characteristics:CPU speed = 1.3 GHzRAM = 512 MBHDD = 40 GBThe biggest table has 544371 rows(tuples?) with 55 rows. All fields are float8. Only 1 is varchar(255) and 1 timestamp.We query the MS Access databases through Visual Basic Programs and ODBC Drivers. We made a\n Visual Basic program that uses ADO to connect to ALL three DBMS using ODBC drivers.When we run the following query \"SELECT * FROM big_table\", we get the following resutls:MS Access- Execution time ~ 51 seconds (Depending on the client machine, it can go as low as 20 seconds)- Network Utilization ~ 75 Mbps (According to Windows Task Manager)MySQL 5.0(under Windows)- Execution time ~ 630 seconds- Network Utilization ~ 8 MbpsPostgreSQL 8.1(under Windows)- Execution time ~ 290 seconds)- Network Utilization ~ 13 MbpsMS Access (under Linux. MS Access files are in the Linux computer which has the SAMBA server running. The client computer has a mapped network drive that conects to the Linux files.)- Execution time ~ 55 seconds (Depending on the client machine, it can go as low as 20 seconds)- Network Utilization ~ 70 Mbps (According to Windows Task Manager)MySQL 5.0(under Linux)- Execution time\n ~ 440 seconds- Network Utilization ~ 11 MbpsPostgreSQL 8.1(under Linux)- Execution time ~ 180 seconds)- Network Utilization ~ 18 MbpsDue to the fact that the query returns a lot of rows, I cannot use the ODBC driver with the \"Use Declare/Fetch\" option disabled. If I run the query with this option disabled, the transfer speed goes up to about 20 Mpbs (PostgreSQL in Windows) and ~35 Mbps (PostgreSQL in Linux) (The transfer speed never goes beyond 40 Mbps even if we query from several clients at the same time. 
If we query MS Access from several machines, the transfer speed goes almost to 85 Mbps. Obviously, these simultaneous querys run slower). The problem with running the query with the \"Use Declare/Fetch\" option disabled is that the client computer shows an error saying \"Out of memory while reading tuples\".Very different results are obtained if a the query \"SELECT * from big_table ORDER BY \"some_column\"\". In this scenario PostgreSQL is\n faster than MS Access or MySQL by more than 100 seconds. Transfer speed, however, transfer speed is still slower for PostgreSQL than for MS Access.We have run many other queries (not complex, at most nesting of 5 inner joins) and MS Access is always faster. We have seen by looking at the network activity in the Windows Task Manager that the main problem is the transfer speed. We also have noticed that MS Access quickly downloads the file that has the necesary information and works on it locally on the client computer. The queries, obviously, run faster if the client computer has more resources (CPU speed, RAM, etc.). The fact that the client computer does not use any resource to execute the query, only to receive the results, is one big plus for PostgreSQL (we think). We need,however, to improve the performance of the queries that return a lot of rows because those are the most used queries.We searched the postgresql archives, mailing lists, etc. and have\n tried changing the parameters of the PostgreSQL server(both on Linux and Windows)(We also tried with the default parameters) and changing the parameters of the ODBC driver as suggested. We still get aproximately the same results. We have even changed some TCP/IP parameters(only in Windows) but no improvement.We have turned off all tracings, logs, and debugs of the ODBC driver. The behaviour is the same when querying from pgAdmin III.To get to the point: Is this problem with the transfer rates a PostgreSQL server/PostgresQL ODBC driver limitation?Is there a way to increase the transfer rates?Thank you very much for any help received!Hansell E. Baran Altuve\nDo you Yahoo!? \nGet on board. You're invited to try the new Yahoo! Mail Beta.",
"msg_date": "Mon, 7 Aug 2006 10:26:14 -0700 (PDT)",
"msg_from": "hansell baran <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow transfer speeds"
},
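For context, the ODBC driver's "Use Declare/Fetch" option is roughly equivalent to the application doing the following itself: fetching through a cursor in batches keeps client memory bounded at the cost of extra round trips, which is the trade-off described in the post above. The table name and batch size are placeholders.

    # hypothetical sketch of batched retrieval through a cursor
    psql -d mydb <<'SQL'
    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 5000 FROM big_cur;
    -- an application would repeat FETCH 5000 until no rows come back
    CLOSE big_cur;
    COMMIT;
    SQL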
{
"msg_contents": "On Mon, 2006-08-07 at 12:26, hansell baran wrote:\n> Hi. I'm new at using PostgreSQL. I have found posts related to this\n> one but there is not a definite answer or solution. Here it goes.\n> Where I work, all databases were built with MS Access. The Access\n> files are hosted by computers with Windows 2000 and Windows XP. A new\n> server is on its way and only Open Source Software is going to be\n> installed. The OS is going to be SUSE Linux 10.1 and we are making\n> comparisons between MySQL, PostgreSQL and MS Access. We installed\n> MySQL and PostgreSQL on both SUSE and Windows XP (MySQL & PostgreSQL\n> DO NOT run at the same time)(There is one HDD for Windows and one for\n> Linux)\n> The \"Test Server\" in which we install the DBMS has the following\n> characteristics:\n> \n> CPU speed = 1.3 GHz\n> RAM = 512 MB\n> HDD = 40 GB\n\nJust FYI, that's not only not much in terms of server, it's not even\nmuch in terms of a workstation. My laptop is about on par with that.\n\nJust sayin.\n\nOK, just so you know, you're comparing apples and oranges. A client\nside application like access has little or none of the overhead that a\nreal database server has.\n\nThe advantage PostgreSQL has is that many people can read AND write to\nthe same data store simultaneously and the database server will make\nsure that the underlying data in the files never gets corrupted. \nFurther, with proper constraints in place, it can make sure that the\ndata stays coherent (i.e. that data dependencies are honored.)\n\nAs you can imagine, there's gonna be some overhead there. And it's\nwholly unfair to compare a databases ability to stream out data in a\nsingle read to access. It is the worst case scenario.\n\nTry having 30 employees connect to the SAME access database and start\nupdating lots and lots of records. Have someone read out the data while\nthat's going on. Repeat on PostgreSQL.\n\nIf you're mostly going to be reading data, then maybe some intermediate\nsystem is needed, something to \"harvest\" the data into some flat files.\n\nBut if your users need to read out 500,000 rows, change a few, and write\nthe whole thing back, your business process is likely not currently\nsuited to a database and needs to be rethought.\n",
"msg_date": "Mon, 07 Aug 2006 13:05:45 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow transfer speeds"
}
] |
[
{
"msg_contents": "Hi,\n\nFirst of all I must tell that my reality in a southern brazilian city is \nway different than what we read in the list. I was lookig for ways to \nfind the HW bottleneck and saw a configuration like:\n\n\"we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4 \nopterons, 16GB of memory and MegaRAID with enough disks. OS is Debian \nSarge amd64, PostgreSQL is 8.0.3.\" on \n(http://archives.postgresql.org/pgsql-performance/2005-07/msg00431.php)\n\nOur old server was a very modest Dell Xeon 2.8 (512 Kb Cache), with 1 GB \nRAM and one SCSI disc. This server runs PostgreSQL (8.1.4), Apache (PHP) \nand other minor services. We managed to get a test machine, a HP Xeon \n3.2 (2 MB cache), also with 1 GB RAM but 4 SCSI discs (in one sigle \narray controller). They're organized in the following way:\n\ndisk 0: Linux Root\ndisk 1: Database Cluster\ndisk 2: pg_xlog\ndisk 3: a dir the suffers constant read/write operations\n\nThe database size stands around 10 GB. The new server has a better \nperformance than the old one, but sometimes it still stucks. We tried to \nuse a HP proprietary tool to monitor the server, and find out what is \nthe bottleneck, but it's been difficult to install it on Debian. The \ntool is only certified for SuSe and RedHat. So we tried to use some \nLinux tools to see what's going on, like vmstat and iostat. Are this \ntools (vm and iostat) enough? Should we use something else? Is there any \nspecifical material about finding bottlenecks in Linux/PostgreSQL \nmachines? Is our disks design proper?\n\nI really apologize for my lack of knowledge in this area, and for the \nexcessive number of questions in a single e-mail.\n\nBest regards,\nAlvaro\n",
"msg_date": "Mon, 07 Aug 2006 19:06:16 -0300",
"msg_from": "Alvaro Nunes Melo <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware upgraded but performance still ain't good enough"
},
{
"msg_contents": "First off - very few third party tools support debian. Debian is a sure\nfire way to have an unsupported system. Use RedHat or SuSe (flame me all\nyou want, it doesn't make it less true).\n\nSecond, run bonnie++ benchmark against your disk array(s) to see what\nperformance you are getting, and make sure it's reasonable.\n\nSingle drives for stuff is not a great way to go for either speed or\nreliability, highly not recommended for a production system. Use SAS or\nSATA for the best speed for your $$s, don't buy SAN, they are overpriced and\noften don't perform. RAM could be more to be honest too.\n\nDiagnosing the bottleneck can be done with combinations of top, iostat and\nvmstat. If you have high iowait numbers then your system is waiting on the\ndisks. If you have high system CPU usage, then disks are also worth a look,\nbut not as bad as high iowait. If you have high user CPU with little iowait\nand little system CPU, and very little io activity in iostat, then you are\nCPU bound. If you are IO bound, you need to figure if it's reads or\nwrites. If it's reads, then more RAM will help. if it's writes, then you\nneed more spindles and more controller cache with RAID (please think\ncarefully before using RAID 5 in a write intensive environment, it's not\nideal).\n\nThe other thing is you will probably want to turn on stats in postgres to\nfigure out which queries are the bad ones (does anyone have good docs posted\nfor this?). Once you have identified the bad queries, you can explain\nanalyze them, and figure out why they suck.\n\nAlex.\n\nOn 8/7/06, Alvaro Nunes Melo <[email protected]> wrote:\n>\n> Hi,\n>\n> First of all I must tell that my reality in a southern brazilian city is\n> way different than what we read in the list. I was lookig for ways to\n> find the HW bottleneck and saw a configuration like:\n>\n> \"we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4\n> opterons, 16GB of memory and MegaRAID with enough disks. OS is Debian\n> Sarge amd64, PostgreSQL is 8.0.3.\" on\n> (http://archives.postgresql.org/pgsql-performance/2005-07/msg00431.php)\n>\n> Our old server was a very modest Dell Xeon 2.8 (512 Kb Cache), with 1 GB\n> RAM and one SCSI disc. This server runs PostgreSQL (8.1.4), Apache (PHP)\n> and other minor services. We managed to get a test machine, a HP Xeon\n> 3.2 (2 MB cache), also with 1 GB RAM but 4 SCSI discs (in one sigle\n> array controller). They're organized in the following way:\n>\n> disk 0: Linux Root\n> disk 1: Database Cluster\n> disk 2: pg_xlog\n> disk 3: a dir the suffers constant read/write operations\n>\n> The database size stands around 10 GB. The new server has a better\n> performance than the old one, but sometimes it still stucks. We tried to\n> use a HP proprietary tool to monitor the server, and find out what is\n> the bottleneck, but it's been difficult to install it on Debian. The\n> tool is only certified for SuSe and RedHat. So we tried to use some\n> Linux tools to see what's going on, like vmstat and iostat. Are this\n> tools (vm and iostat) enough? Should we use something else? Is there any\n> specifical material about finding bottlenecks in Linux/PostgreSQL\n> machines? 
Is our disks design proper?\n>\n> I really apologize for my lack of knowledge in this area, and for the\n> excessive number of questions in a single e-mail.\n>\n> Best regards,\n> Alvaro\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n",
"msg_date": "Tue, 8 Aug 2006 02:33:39 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good enough"
},
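A minimal sketch of the "turn on stats and find the bad queries" step Alex mentions, written against the 8.x-era settings discussed in this thread; the parameter names and thresholds below are assumptions to check against the docs for your exact version:

    -- postgresql.conf (8.1-era names):
    --   stats_start_collector = on
    --   stats_command_string  = on        # populates current_query in pg_stat_activity
    --   stats_block_level     = on
    --   stats_row_level       = on
    --   log_min_duration_statement = 1000 # log anything slower than 1 second

    -- what is running right now, and since when:
    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;

    -- tables that are mostly sequentially scanned (index candidates):
    SELECT relname, seq_scan, seq_tup_read, idx_scan
    FROM pg_stat_user_tables
    ORDER BY seq_scan DESC
    LIMIT 10;

Anything that shows up repeatedly in the slow-statement log is then a candidate for EXPLAIN ANALYZE, as Stephen suggests a little further down.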
{
"msg_contents": "* Alex Turner ([email protected]) wrote:\n> First off - very few third party tools support debian. Debian is a sure\n> fire way to have an unsupported system. Use RedHat or SuSe (flame me all\n> you want, it doesn't make it less true).\n\nYeah, actually, it does make it less true since, well, it's really not\nall that true to begin with.\n\nWhat you're probably intending to say is that fewer companies say \"Works\nwith Debian!\" on their advertising material or list it as \"officially\nsupported\". I've had *very* few problems running commercial apps on\nDebian (including things like Oracle and IBM SAN management software).\nGenerally it's just take the rpms and either install them *using* rpm\n(which is available in Debian...) or use alien to convert them to a\ntarball and/or deb.\n\nHP is actually pretty big into Debian and I'd be curious as to what the\nproblems installing the monitoring tools were. My guess is that the\nissue is actually some kernel module or something, in which case any\nkernel that they don't build the module (or write it, depending..) for\nmay be problematic. This would probably include some releases of\nRedHat/SuSe (ES, Fedora, who knows) and pretty much any kernel you build\nusing sources off of kernel.org or for any other distribution unless you\nknow exactly what versions/patches they support.\n\nFeel free to contact me off-list if you'd like to continue this\ndiscussion since I don't really see it as appropriate for this list.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 8 Aug 2006 08:05:29 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good enough"
},
{
"msg_contents": "Alvaro,\n\n* Alex Turner ([email protected]) wrote:\n> The other thing is you will probably want to turn on stats in postgres to\n> figure out which queries are the bad ones (does anyone have good docs posted\n> for this?). Once you have identified the bad queries, you can explain\n> analyze them, and figure out why they suck.\n\nGiven your position, this might be the best approach to take to find\nsome 'low-hanging fruit'. Do you have queries which are complex in some\nway? Do you have many long-open transactions? If you're doing more\nthan simple queries then you may want to explain analyze the more\ncomplex ones and try to speed them up. If you run into trouble\nunderstanding the output or how to improve it then post it here (with as\nmuch info as you can, schema definitions, the query, the explain analyze\nresults, etc) and we can help.\n\ntop/iostat/vmstat are very useful tools too and can help with hardware\ndecisions but you probably want to review your queries and make sure the\ndatabase is performing as best it can with the setup you have today\nbefore throwing more hardware at it.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Tue, 8 Aug 2006 08:14:35 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good enough"
},
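The EXPLAIN ANALYZE step Stephen recommends is just a prefix on the query itself; the table and column names here are placeholders, not anything from Alvaro's schema:

    EXPLAIN ANALYZE
    SELECT c.customer_id, count(*)
    FROM orders o
    JOIN customers c ON c.customer_id = o.customer_id
    WHERE o.order_date >= '2006-01-01'
    GROUP BY c.customer_id;

Compare the planner's estimated row counts with the actual ones in the output: large mismatches usually mean stale statistics (run ANALYZE), while an unexpected sequential scan on a large table usually points at a missing or unusable index on the join or filter column.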
{
"msg_contents": "On Tue, 8 Aug 2006, Stephen Frost wrote:\n\n> * Alex Turner ([email protected]) wrote:\n>> First off - very few third party tools support debian. Debian is a sure\n>> fire way to have an unsupported system. Use RedHat or SuSe (flame me all\n>> you want, it doesn't make it less true).\n>\n> Yeah, actually, it does make it less true since, well, it's really not\n> all that true to begin with.\n>\n> What you're probably intending to say is that fewer companies say \"Works\n> with Debian!\" on their advertising material or list it as \"officially\n> supported\". I've had *very* few problems running commercial apps on\n> Debian (including things like Oracle and IBM SAN management software).\n> Generally it's just take the rpms and either install them *using* rpm\n> (which is available in Debian...) or use alien to convert them to a\n> tarball and/or deb.\n\nthere's a huge difference between 'works on debian' and 'supported on \ndebian'. I do use debian extensivly, (along with slackware on my personal \nmachines), so i am comfortable getting things to work. but 'supported' \nmeans that when you run into a problem you can call for help without being \ntold 'sorry, switch distros, then call us back'.\n\neven many of the companies that offer support for postgres have this \nproblem. the explination is always that they can't test every distro out \nthere so they pick a few and support those (this is one of the reasons why \nI am watching ubuntu with great interest, it's debian under the covers, \nbut they're starting to get the recognition from the support groups of \ncompanies)\n\nDavid Lang\n\n",
"msg_date": "Tue, 8 Aug 2006 19:10:16 -0700 (PDT)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "* David Lang ([email protected]) wrote:\n> there's a huge difference between 'works on debian' and 'supported on \n> debian'. I do use debian extensivly, (along with slackware on my personal \n> machines), so i am comfortable getting things to work. but 'supported' \n> means that when you run into a problem you can call for help without being \n> told 'sorry, switch distros, then call us back'.\n\nHave you ever actually had that happen? I havn't and I've called\nsupport for a number of different issues for various commercial\nsoftware. In the end it might boil down to some distribution-specific\nissue that they're not willing to fix but honestly that's pretty rare.\n\n> even many of the companies that offer support for postgres have this \n> problem. the explination is always that they can't test every distro out \n> there so they pick a few and support those (this is one of the reasons why \n\nMy experience has been that unless it's pretty clearly some\ndistro-specific issue (which doesn't happen all that often, but it's\ngood to be familiar with what would probably be a distro-specific issue\nand what wouldn't), the support folks are willing to help debug it.\n\n\tThanks,\n\n\t\tStephen",
"msg_date": "Wed, 9 Aug 2006 06:30:12 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good enough"
},
{
"msg_contents": "Alex Turner wrote:\n> First off - very few third party tools support debian. Debian is a sure\n> fire way to have an unsupported system. Use RedHat or SuSe (flame me all\n> you want, it doesn't make it less true).\n\n*cough* BS *cough*\n\nLinux is Linux. It doesn't matter what trademark you put on top of it. \nAs long as they are running a current version of Linux (e.g; kernel 2.6) \nthey should be fine.\n\nWith Debian that may or may not be the case and that could be an issue.\nTo get the best luck, I would suggest (if you want to stay with a Debian \nbase) Ubuntu Dapper LTS.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 09 Aug 2006 05:47:42 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "\n> Have you ever actually had that happen? I havn't and I've called\n> support for a number of different issues for various commercial\n> software. In the end it might boil down to some distribution-specific\n> issue that they're not willing to fix but honestly that's pretty rare.\n\nVery rare, if you are using a reputable vendor.\n\n> \n>> even many of the companies that offer support for postgres have this \n>> problem. the explination is always that they can't test every distro out \n>> there so they pick a few and support those (this is one of the reasons why \n\nAhh and which companies would these be? As a representative of the most \nprominent one in the US I can tell you that you are not speaking from a \nknowledgeable position.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 09 Aug 2006 05:50:40 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "On 8/7/06, Alvaro Nunes Melo <[email protected]> wrote:\n> \"we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4\n> opterons, 16GB of memory and MegaRAID with enough disks. OS is Debian\n> Sarge amd64, PostgreSQL is 8.0.3.\" on\n> (http://archives.postgresql.org/pgsql-performance/2005-07/msg00431.php)\n\nwell, if you spend three months optimizing your application or buy a\n10k$ server to get the same result, which is cheaper? :)\n\n> The database size stands around 10 GB. The new server has a better\n> performance than the old one, but sometimes it still stucks. We tried to\n> use a HP proprietary tool to monitor the server, and find out what is\n> the bottleneck, but it's been difficult to install it on Debian. The\n\nI'm not familiar with the hp tool, but I suspect you are not missing\nmuch. If you are looking for a free distro, you might have some luck\nwith centos. most redhat binary rpms will install on it.\n\n> tool is only certified for SuSe and RedHat. So we tried to use some\n> Linux tools to see what's going on, like vmstat and iostat. Are this\n> tools (vm and iostat) enough? Should we use something else? Is there any\n> specifical material about finding bottlenecks in Linux/PostgreSQL\n> machines? Is our disks design proper?\n\nthose are pretty broad questions, so you will only get broad answers.\nyou might want to consider hooking up with some commercial support\n(I've heard good things about commandprompt) or providing more\ndetailed information so that you can get some help from this list,\nincluding:\niostat/vmstat reports\nexplain analyze\ninformation from top\n\nnicely summarized at the time the problems occur.\nregards,\nmerlin\n\n> I really apologize for my lack of knowledge in this area, and for the\n> excessive number of questions in a single e-mail.\n>\n> Best regards,\n> Alvaro\n",
"msg_date": "Wed, 9 Aug 2006 09:20:38 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good enough"
},
{
"msg_contents": "On Wed, 9 Aug 2006, Stephen Frost wrote:\n\n> * David Lang ([email protected]) wrote:\n>> there's a huge difference between 'works on debian' and 'supported on\n>> debian'. I do use debian extensivly, (along with slackware on my personal\n>> machines), so i am comfortable getting things to work. but 'supported'\n>> means that when you run into a problem you can call for help without being\n>> told 'sorry, switch distros, then call us back'.\n>\n> Have you ever actually had that happen? I havn't and I've called\n> support for a number of different issues for various commercial\n> software. In the end it might boil down to some distribution-specific\n> issue that they're not willing to fix but honestly that's pretty rare.\n\nunfortunantly I have, repeatedly with different products.\n\nif you can manage to get past the first couple of levels of support to \npeople who really understand things rather then just useing checklists you \nare more likly to get help, but even there I've run into people who seem \neager to take the easy way out by assuming that it must be a distro thing \nrather then anything with their product (even in cases where it ended up \nbeing a simple config thing)\n\nDavid Lang\n",
"msg_date": "Wed, 9 Aug 2006 09:19:53 -0700 (PDT)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "On Wed, 9 Aug 2006, Joshua D. Drake wrote:\n\n>>> even many of the companies that offer support for postgres have this \n>>> problem. the explination is always that they can't test every distro out \n>>> there so they pick a few and support those (this is one of the reasons why \n>\n> Ahh and which companies would these be? As a representative of the most \n> prominent one in the US I can tell you that you are not speaking from a \n> knowledgeable position.\n\nnote I said many, not all. I am aware that your company does not fall into \nthis catagory.\n\nDavid Lang\n",
"msg_date": "Wed, 9 Aug 2006 09:20:55 -0700 (PDT)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "\n>> Ahh and which companies would these be? As a representative of the \n>> most prominent one in the US I can tell you that you are not speaking \n>> from a knowledgeable position.\n> \n> note I said many, not all. I am aware that your company does not fall \n> into this catagory.\n\nI know, but I am curious as to *what* companies. Any reputable \nPostgreSQL company is going to support Linux as a whole except maybe \nsome fringe distros like Gentoo or RedFlag. Not to mention FreeBSD and \nSolaris.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> David Lang\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 09 Aug 2006 09:27:41 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "On Wed, 9 Aug 2006, Joshua D. Drake wrote:\n\n>>> Ahh and which companies would these be? As a representative of the most \n>>> prominent one in the US I can tell you that you are not speaking from a \n>>> knowledgeable position.\n>> \n>> note I said many, not all. I am aware that your company does not fall into \n>> this catagory.\n>\n> I know, but I am curious as to *what* companies. Any reputable PostgreSQL \n> company is going to support Linux as a whole except maybe some fringe distros \n> like Gentoo or RedFlag. Not to mention FreeBSD and Solaris.\n\nI'm not going to name names in public, but I will point out that different \ncompanies definitions of what constatutes 'fringe distros' are different. \nFor some any linux other then RedHat Enterprise or SuSE is a fringe distro \n(with SuSE being a relativly recent addition, for a while RedHat were \nfrequently the only supported distro versions)\n\nand please note, when I'm talking about support, it's not just postgresql \nsupport, but also hardware/driver support that can run into these problems\n\nDavid Lang\n",
"msg_date": "Wed, 9 Aug 2006 09:37:37 -0700 (PDT)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "On Wed, 2006-08-09 at 11:37, David Lang wrote:\n> On Wed, 9 Aug 2006, Joshua D. Drake wrote:\n> \n> >>> Ahh and which companies would these be? As a representative of the most \n> >>> prominent one in the US I can tell you that you are not speaking from a \n> >>> knowledgeable position.\n> >> \n> >> note I said many, not all. I am aware that your company does not fall into \n> >> this catagory.\n> >\n> > I know, but I am curious as to *what* companies. Any reputable PostgreSQL \n> > company is going to support Linux as a whole except maybe some fringe distros \n> > like Gentoo or RedFlag. Not to mention FreeBSD and Solaris.\n> \n> I'm not going to name names in public, but I will point out that different \n> companies definitions of what constatutes 'fringe distros' are different. \n> For some any linux other then RedHat Enterprise or SuSE is a fringe distro \n> (with SuSE being a relativly recent addition, for a while RedHat were \n> frequently the only supported distro versions)\n> \n> and please note, when I'm talking about support, it's not just postgresql \n> support, but also hardware/driver support that can run into these problems\n\nI've run into this as well. Generally speaking, the larger the company,\nthe more likely you are to get the \"we don't support that\" line.\n",
"msg_date": "Wed, 09 Aug 2006 11:45:13 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "\nOn Aug 9, 2006, at 5:47 AM, Joshua D. Drake wrote:\n\n> Alex Turner wrote:\n>> First off - very few third party tools support debian. Debian is \n>> a sure\n>> fire way to have an unsupported system. Use RedHat or SuSe (flame \n>> me all\n>> you want, it doesn't make it less true).\n>\n> *cough* BS *cough*\n>\n> Linux is Linux. It doesn't matter what trademark you put on top of \n> it. As long as they are running a current version of Linux (e.g; \n> kernel 2.6) they should be fine.\n\nThat's really not the case, at least to the degree that makes a\ndifference between \"supported\" and \"unsupported\".\n>\n> With Debian that may or may not be the case and that could be an \n> issue.\n> To get the best luck, I would suggest (if you want to stay with a \n> Debian base) Ubuntu Dapper LTS.\n\nDifferent Linux distributions include different shared libraries, put\ndifferent things in different places and generally break applications\nin a variety of different ways (SELinux would be one example of\nthat commonly seen here).\n\nIf I don't QA my application on it, it isn't supported. I can't \nnecessarily\nreplicate problems on Linux distributions I don't have installed in\nmy QA lab, so I can't guarantee to fix problems that are specific\nto that distribution. I can't even be sure that it will install and run\ncorrectly without doing basic QA of the installation process on\nthat distribution.\n\nAnd in my case that's just for user space applications. It's got to\nbe even worse for hardware drivers.\n\nOur usual phrase is \"We support RedHat versions *mumble*\nonly. We expect our application to run correctly on any Linux\ndistribution, though you may have to install additional shared\nlibraries.\"\n\nI'm quite happy with customers running Debian, SuSe or what\nhave you, as long as they have access to a sysadmin who's\ncomfortable with that distribution. (I'd probably deny support to\nanyone running Gentoo, though :) )\n\nWe've never had big problems with people running our apps on\n\"unsupported\" problems, but those users have had to do some\nmore diagnosis of problems themselves, and we've been less\nable to support them than we can users who use the same\ndistribution we QA on.\n\n(It's not just Linux, either. We \"support\" Windows XP, but we run\njust fine on 2000 and 95/98.)\n\nCheers,\n Steve\n\n",
"msg_date": "Wed, 9 Aug 2006 09:50:58 -0700",
"msg_from": "Steve Atkins <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": ">>\n>> and please note, when I'm talking about support, it's not just postgresql \n>> support, but also hardware/driver support that can run into these problems\n> \n> I've run into this as well. Generally speaking, the larger the company,\n> the more likely you are to get the \"we don't support that\" line.\n> \n\n/me *chuckles* and whispers to himself.. no wonder were winning.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Wed, 09 Aug 2006 09:53:34 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "> > First off - very few third party tools support debian. Debian is\n> a\n> > sure fire way to have an unsupported system. Use RedHat or SuSe\n> > (flame me all you want, it doesn't make it less true).\n> \n> *cough* BS *cough*\n> \n> Linux is Linux. It doesn't matter what trademark you put on top of\n> it.\n> As long as they are running a current version of Linux (e.g; kernel\n> 2.6) they should be fine.\n\nUnfortunatly, that' not my experience either.\nBoth RedHat and SuSE heavily modify the kernel. So anything that needs\nanything near kernel space (two examples: the HP management/monitoring\ntools and the EMC/Legato Networker backup software) simply does not work\non Linux (linux being the kernel from kernel.org). They only work on\nRedHat/SuSE. To the point of not compiling/starting/working, not just\nthe support part.\n\n(One could argue that they shouldn't claim linux support then, but\nspecifically RH/SuSE, but I don't expect them to do that..)\n\nBTW, it used to work much better with 2.4, but since there is no real\n\"stable series\" kernel in 2.6, it's just a lost cause there it seems.\n\n//Magnus\n\n",
"msg_date": "Fri, 18 Aug 2006 12:49:40 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
},
{
"msg_contents": "On 8/18/06, Magnus Hagander <[email protected]> wrote:\n> > > First off - very few third party tools support debian. Debian is\n> > a\n> > > sure fire way to have an unsupported system. Use RedHat or SuSe\n> > > (flame me all you want, it doesn't make it less true).\n> >\n> > *cough* BS *cough*\n> >\n> > Linux is Linux. It doesn't matter what trademark you put on top of\n> > it.\n> > As long as they are running a current version of Linux (e.g; kernel\n> > 2.6) they should be fine.\n>\n> Unfortunatly, that' not my experience either.\n> Both RedHat and SuSE heavily modify the kernel. So anything that needs\n> anything near kernel space (two examples: the HP management/monitoring\n> tools and the EMC/Legato Networker backup software) simply does not work\n> on Linux (linux being the kernel from kernel.org). They only work on\n> RedHat/SuSE. To the point of not compiling/starting/working, not just\n> the support part.\n>\n> (One could argue that they shouldn't claim linux support then, but\n> specifically RH/SuSE, but I don't expect them to do that..)\n>\n> BTW, it used to work much better with 2.4, but since there is no real\n> \"stable series\" kernel in 2.6, it's just a lost cause there it seems.\n",
"msg_date": "Fri, 18 Aug 2006 09:14:08 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware upgraded but performance still ain't good"
}
] |
[
{
"msg_contents": "Hello\n\nI have pg_autovacuum running with the arguments: \n\npg_autovacuum -D -s 120 -v 10000\n\nthe database is postgresql 8.0.0\n\nSometimes load average on server raises to 20 and it is almost impossible to\nlogin via SSH\n\nWhen I'm logging in finally, I see there is cpu usage: 6% and iowait 95%\n\nps ax | grep post gives me \n\npostgres: postgres db [local] VACUUM\n\nIs there some solution to avoid such cases?\n\n-- \nEugene N Dzhurinsky\n",
"msg_date": "Tue, 8 Aug 2006 10:39:56 +0300",
"msg_from": "Eugeny N Dzhurinsky <[email protected]>",
"msg_from_op": true,
"msg_subject": "vacuuming"
},
{
"msg_contents": "On Tue, Aug 08, 2006 at 10:39:56AM +0300, Eugeny N Dzhurinsky wrote:\n> Hello\n> \n> I have pg_autovacuum running with the arguments: \n> \n> pg_autovacuum -D -s 120 -v 10000\n \nIt's been a while since I looked at the pg_autovac settings, but I know\nthat it's threasholds were way, way to high. They were set to something\nlike 2, when 0.2 is a better idea.\n\n> the database is postgresql 8.0.0\n> \n> Sometimes load average on server raises to 20 and it is almost impossible to\n> login via SSH\n> \n> When I'm logging in finally, I see there is cpu usage: 6% and iowait 95%\n> \n> ps ax | grep post gives me \n> \n> postgres: postgres db [local] VACUUM\n> \n> Is there some solution to avoid such cases?\n\nHave you turned on vacuum_cost_delay?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 9 Aug 2006 16:20:46 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: vacuuming"
}
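A sketch of the cost-based vacuum delay Jim is asking about, which exists from 8.0 onward; the numbers are illustrative starting points rather than recommendations, and the pg_autovacuum threshold switches are a separate knob from these server settings:

    -- postgresql.conf (or per-session with SET):
    --   vacuum_cost_delay = 10     # sleep 10 ms each time the cost limit is reached
    --   vacuum_cost_limit = 200    # amount of work allowed between sleeps

    -- trying it for a single manual vacuum:
    SET vacuum_cost_delay = 10;
    VACUUM ANALYZE some_large_table;   -- hypothetical table name

The trade-off is that vacuums take longer to finish, in exchange for not saturating the disks and driving iowait up while they run.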
] |
[
{
"msg_contents": "\nThe following bug has been logged online:\n\nBug reference: 2567\nLogged by: kumarselvan\nEmail address: [email protected]\nPostgreSQL version: 8.1\nOperating system: Linux Enterprise version 3\nDescription: High IOWAIT\nDetails: \n\ni have installed the postgres as mentioned in the Install file. it is a 4\ncpu 8 GB Ram Machine installed with Linux Enterprise version 3. when i am\nrunning a load which will perfrom 40 inserts persecond on 2 tables and 10\nupdates per 10seconds on differnt table IOWait on avg going upto 70% due to\nwhich i am not able to increase the load. Is there is any other way to\ninstall the postgres on multiprocessor machine.. can any one help me on\nthis...\n",
"msg_date": "Tue, 8 Aug 2006 08:42:02 GMT",
"msg_from": "\"kumarselvan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "BUG #2567: High IOWAIT"
},
{
"msg_contents": "This isn't a bug; moving to pgsql-performance.\n\nOn Tue, Aug 08, 2006 at 08:42:02AM +0000, kumarselvan wrote:\n> i have installed the postgres as mentioned in the Install file. it is a 4\n> cpu 8 GB Ram Machine installed with Linux Enterprise version 3. when i am\n> running a load which will perfrom 40 inserts persecond on 2 tables and 10\n> updates per 10seconds on differnt table IOWait on avg going upto 70% due to\n> which i am not able to increase the load. Is there is any other way to\n> install the postgres on multiprocessor machine.. can any one help me on\n> this...\n\nYou haven't given us nearly enough information. What kind of hardware is\nthis? RAID? What changes have you made to postgresql.conf?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 9 Aug 2006 17:26:50 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: BUG #2567: High IOWAIT"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi, I have a question with shared_buffer.\n\nOk, I have a server with 4GB of RAM\n- -----\n# cat /proc/meminfo\nMemTotal: 4086484 kB\n[...]\n- -----\n\nSo, if I want to, for example, shared_buffer to take 3 GB of RAM then\nshared_buffer would be 393216 (3 * 1024 * 1024 / 8)\n\nPostmaster dont start.\nError: FATAL: shmat(id=360448) failed: Invalid argument\n\n\nI can set a less value, but not higher than 3 GB.\n\nAm I doing something wrong?\nAny idea?\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFE2GoAIo1XmbAXRboRAtPgAJ9HN7aL0lyFtyTZnOoIAJXmGNsomgCeI1ex\nII1MclZaaIjg/ryH08wCuAY=\n=cgwJ\n-----END PGP SIGNATURE-----\n",
"msg_date": "Tue, 08 Aug 2006 12:40:00 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffer optimization"
},
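For what it's worth, a shmat()/shmget() failure with "Invalid argument" at this size is usually not a PostgreSQL limit but one of two platform limits: the kernel's shared-memory ceiling (kernel.shmmax / SHMALL in /etc/sysctl.conf) being smaller than the segment requested, or, on a 32-bit build, there simply not being enough process address space left to map a roughly 3 GB segment. A quick sanity check of what is actually being asked for, assuming the standard 8 kB block size:

    SELECT 393216 * 8192            AS bytes_requested,   -- 393216 buffers * 8 kB pages, about 3.2 GB
           393216 * 8192 / 1048576  AS megabytes;         -- = 3072

Whether such a large shared_buffers is actually useful is the separate question taken up below.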
{
"msg_contents": "Quoth [email protected] (Ruben Rubio):\n> Hi, I have a question with shared_buffer.\n>\n> Ok, I have a server with 4GB of RAM\n> -----\n> # cat /proc/meminfo\n> MemTotal: 4086484 kB\n> [...]\n> -----\n>\n> So, if I want to, for example, shared_buffer to take 3 GB of RAM then\n> shared_buffer would be 393216 (3 * 1024 * 1024 / 8)\n>\n> Postmaster dont start.\n> Error: FATAL: shmat(id=360448) failed: Invalid argument\n>\n>\n> I can set a less value, but not higher than 3 GB.\n>\n> Am I doing something wrong?\n> Any idea?\n\nYes, you're trying to set the value way too high.\n\nThe \"rule of thumb\" is to set shared buffers to the lesser of 10000\nand 15% of system memory. In your case, that would be the lesser of\n10000 and 78643, which is 10000.\n\nI'm not aware of any actual evidence having emerged that it is of any\nvalue to set shared buffers higher than 10000.\n-- \nlet name=\"cbbrowne\" and tld=\"acm.org\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/x.html\nRules of the Evil Overlord #25. \"No matter how well it would perform,\nI will never construct any sort of machinery which is completely\nindestructible except for one small and virtually inaccessible\nvulnerable spot.\" <http://www.eviloverlord.com/>\n",
"msg_date": "Tue, 08 Aug 2006 08:20:01 -0400",
"msg_from": "Christopher Browne <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffer optimization"
},
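Spelling out the arithmetic behind that rule of thumb for this particular machine, since Ruben asks about exactly this a little further down (shared_buffers is counted in 8 kB pages, which is the assumption here):

    -- 15% of 4 GB of RAM, expressed as 8 kB buffers:
    SELECT (4096 * 1024 / 8) * 0.15 AS buffers_at_15_percent;   -- about 78643

    -- the rule takes the lesser of 78643 and 10000, i.e. shared_buffers = 10000,
    -- which is 10000 * 8 kB, roughly 80 MB, leaving the rest of RAM to the OS page cache.

As the later replies in this thread point out, that guidance comes from the 7.x buffer manager, so treat it as a conservative starting point rather than a ceiling.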
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nSo ...\nI have tried different values. The best one for one day sentences seems\nto be 24576\n\nIO in vmstat has the lowest values\n\"id\" (idle) has the biggest values.\n\nI have created an script that executes all day sentences to try that.\n\nBy the way, could u explain a little bit this?\n\n\"The \"rule of thumb\" is to set shared buffers to the lesser of 10000\n and 15% of system memory. In your case, that would be the lesser of\n 10000 and 78643, which is 10000.\"\n\nIm sorry but im not understanding it at all.\n\nThanks in advance.\n\n\nChristopher Browne wrote:\n> Quoth [email protected] (Ruben Rubio):\n>> Hi, I have a question with shared_buffer.\n>>\n>> Ok, I have a server with 4GB of RAM\n>> -----\n>> # cat /proc/meminfo\n>> MemTotal: 4086484 kB\n>> [...]\n>> -----\n>>\n>> So, if I want to, for example, shared_buffer to take 3 GB of RAM then\n>> shared_buffer would be 393216 (3 * 1024 * 1024 / 8)\n>>\n>> Postmaster dont start.\n>> Error: FATAL: shmat(id=360448) failed: Invalid argument\n>>\n>>\n>> I can set a less value, but not higher than 3 GB.\n>>\n>> Am I doing something wrong?\n>> Any idea?\n> \n> Yes, you're trying to set the value way too high.\n> \n> The \"rule of thumb\" is to set shared buffers to the lesser of 10000\n> and 15% of system memory. In your case, that would be the lesser of\n> 10000 and 78643, which is 10000.\n> \n> I'm not aware of any actual evidence having emerged that it is of any\n> value to set shared buffers higher than 10000.\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFE2b3kIo1XmbAXRboRAh12AKCodmhmXZWamrG7MnAf9mhVfubjgwCfa75v\n7bgmSzq4F7XpBoEkSpyDqnE=\n=3lMc\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 09 Aug 2006 12:50:13 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffer optimization"
},
{
"msg_contents": "On Tue, Aug 08, 2006 at 08:20:01AM -0400, Christopher Browne wrote:\n> I'm not aware of any actual evidence having emerged that it is of any\n> value to set shared buffers higher than 10000.\n\nhttp://flightaware.com\n\nThey saw a large increase in how many concurrent connections they could\nhandle when they bumped shared_buffers up from ~10% to 50% of memory.\nBack then they had 4G of memory. They're up to 12G right now, but\nhaven't bumped shared_buffers up.\n\nEvery single piece of advice I've seen on shared_buffers comes from the\n7.x era, when our buffer management was extremely simplistic. IMO all of\nthat knowledge was made obsolete when 8.0 came out, and our handling of\nshared_buffers has improved ever further since then. This is definately\nan area that could use a lot more testing.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 9 Aug 2006 16:28:04 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffer optimization"
},
{
"msg_contents": "\"Jim C. Nasby\" <[email protected]> writes:\n> Every single piece of advice I've seen on shared_buffers comes from the\n> 7.x era, when our buffer management was extremely simplistic. IMO all of\n> that knowledge was made obsolete when 8.0 came out, and our handling of\n> shared_buffers has improved ever further since then. This is definately\n> an area that could use a lot more testing.\n\nActually I think it was probably 8.1 that made the significant\ndifference there, by getting rid of the single point of contention\nfor shared-buffer management. I concur that 7.x-era rules of thumb\nmay well be obsolete --- we need some credible scaling tests ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Aug 2006 18:37:00 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffer optimization "
}
] |
[
{
"msg_contents": "I agree, I think these say you are getting 240MB/s sequential reads and 1000 seeks per second.\n\nThat's pretty much the best you'd expect.\n\n- Luke\n\nSent from my GoodLink synchronized handheld (www.good.com)\n\n\n -----Original Message-----\nFrom: \tAlex Turner [mailto:[email protected]]\nSent:\tTuesday, August 08, 2006 02:40 AM Eastern Standard Time\nTo:\[email protected]\nCc:\tLuke Lonergan; [email protected]\nSubject:\tRe: [PERFORM] Postgresql Performance on an HP DL385 and\n\nThese number are pretty darn good for a four disk RAID 10, pretty close to\nperfect infact. Nice advert for the 642 - I guess we have a Hardware RAID\ncontroller than will read indpendently from mirrors.\n\nAlex\n\nOn 8/8/06, Steve Poe <[email protected]> wrote:\n>\n> Luke,\n>\n> Here are the results of two runs of 16GB file tests on XFS.\n>\n> scsi disc array\n> xfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1\n> ,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8\n> scsi disc array\n> xfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1\n> ,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6\n>\n> Thanks.\n>\n> Steve\n>\n>\n>\n> > Can you run bonnie++ version 1.03a on the machine and report the results\n> > here?\n> >\n> > It could be OK if you have the latest Linux driver for cciss, someone\n> has\n> > reported good results to this list with the latest, bleeding edge\n> version of\n> > Linux (2.6.17).\n> >\n> > - Luke\n> >\n>\n>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > choose an index scan if your joining column's datatypes do not\n> > match\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n\n",
"msg_date": "Tue, 8 Aug 2006 10:36:15 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
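Reading Steve's bonnie++ CSV output by position (this assumes the 1.03 column order of per-character write, block write, rewrite, per-character read, block read, then seeks, each followed by its %CPU figure), the numbers Luke summarizes fall out directly:

    xfs,16G, 81024,99, 153016,24, 73422,10, 82092,97, 243210,17, 1043.1,0, ...
             per-chr   block     rewrite   per-chr   block      random
             write K/s write K/s K/s       read K/s  read K/s   seeks/s

    -> roughly 153 MB/s sequential write, 243 MB/s sequential read, ~1040 seeks/sec

If the column order differs in your bonnie++ version, the bon_csv2html tool that ships with bonnie++ will label the fields for you.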
{
"msg_contents": "Luke,\n\nThanks for the feedback. I use the same database test that I've run a Sun\ndual Opteron with 4Gb RAM and (2) four disk arrays in RAID10. The sun box\nwith one disc on an LSI MegaRAID 2-channel adapter outperforms this HP box.\nI though I was doing something wrong or there is something wrong with the\nbox.\n\nSteve\n\nOn 8/8/06, Luke Lonergan <[email protected]> wrote:\n>\n> I agree, I think these say you are getting 240MB/s sequential reads and\n> 1000 seeks per second.\n>\n> That's pretty much the best you'd expect.\n>\n> - Luke\n>\n> Sent from my GoodLink synchronized handheld (www.good.com)\n>\n>\n> -----Original Message-----\n> From: Alex Turner [mailto:[email protected]]\n> Sent: Tuesday, August 08, 2006 02:40 AM Eastern Standard Time\n> To: [email protected]\n> Cc: Luke Lonergan; [email protected]\n> Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and\n>\n> These number are pretty darn good for a four disk RAID 10, pretty close to\n> perfect infact. Nice advert for the 642 - I guess we have a Hardware RAID\n> controller than will read indpendently from mirrors.\n>\n> Alex\n>\n> On 8/8/06, Steve Poe <[email protected]> wrote:\n> >\n> > Luke,\n> >\n> > Here are the results of two runs of 16GB file tests on XFS.\n> >\n> > scsi disc array\n> > xfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1\n> > ,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8\n> > scsi disc array\n> > xfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,1029.1\n> > ,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6\n> >\n> > Thanks.\n> >\n> > Steve\n> >\n> >\n> >\n> > > Can you run bonnie++ version 1.03a on the machine and report the\n> results\n> > > here?\n> > >\n> > > It could be OK if you have the latest Linux driver for cciss, someone\n> > has\n> > > reported good results to this list with the latest, bleeding edge\n> > version of\n> > > Linux (2.6.17).\n> > >\n> > > - Luke\n> > >\n> >\n> >\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 9: In versions below 8.0, the planner will ignore your desire to\n> > > choose an index scan if your joining column's datatypes do not\n> > > match\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> >\n>\n>\n\nLuke,Thanks for the feedback. I use the same database test that I've run a Sun dual Opteron with 4Gb RAM and (2) four disk arrays in RAID10. The sun box with one disc on an LSI MegaRAID 2-channel adapter outperforms this HP box. I though I was doing something wrong or there is something wrong with the box.\nSteveOn 8/8/06, Luke Lonergan <[email protected]> wrote:\nI agree, I think these say you are getting 240MB/s sequential reads and 1000 seeks per second.That's pretty much the best you'd expect.- LukeSent from my GoodLink synchronized handheld (\nwww.good.com) -----Original Message-----From: Alex Turner [mailto:[email protected]]Sent: Tuesday, August 08, 2006 02:40 AM Eastern Standard TimeTo: \[email protected]: Luke Lonergan; [email protected]: Re: [PERFORM] Postgresql Performance on an HP DL385 and\nThese number are pretty darn good for a four disk RAID 10, pretty close toperfect infact. 
Nice advert for the 642 - I guess we have a Hardware RAIDcontroller than will read indpendently from mirrors.\nAlexOn 8/8/06, Steve Poe <[email protected]> wrote:>> Luke,>> Here are the results of two runs of 16GB file tests on XFS.>> scsi disc array\n> xfs ,16G,81024,99,153016,24,73422,10,82092,97,243210,17,1043.1> ,0,16,3172,7,+++++,+++,2957,9,3197,10,+++++,+++,2484,8> scsi disc array> xfs ,16G,83320,99,155641,25,73662,10,81756,96,243352,18,\n1029.1> ,0,16,3119,10,+++++,+++,2789,7,3263,11,+++++,+++,2014,6>> Thanks.>> Steve>>>> > Can you run bonnie++ version 1.03a on the machine and report the results\n> > here?> >> > It could be OK if you have the latest Linux driver for cciss, someone> has> > reported good results to this list with the latest, bleeding edge> version of\n> > Linux (2.6.17).> >> > - Luke> >>>> >> >> > ---------------------------(end of broadcast)---------------------------> > TIP 9: In versions below \n8.0, the planner will ignore your desire to> > choose an index scan if your joining column's datatypes do not> > match>>> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?>> http://www.postgresql.org/docs/faq>",
"msg_date": "Tue, 8 Aug 2006 08:01:47 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Steve,\n\nOn 8/8/06 8:01 AM, \"Steve Poe\" <[email protected]> wrote:\n\n> Thanks for the feedback. I use the same database test that I've run a Sun\n> dual Opteron with 4Gb RAM and (2) four disk arrays in RAID10. The sun box with\n> one disc on an LSI MegaRAID 2-channel adapter outperforms this HP box. I\n> though I was doing something wrong or there is something wrong with the box.\n\nGiven the circumstances (benchmarked I/O is great, comparable perf on\nanother box with single disk is better), seems that one of:\n1) something wrong with the CPU/memory on the box\n2) something with the OS version / kernel\n3) something with the postgres configuration\n\nCan you post the database benchmark results?\n\n- Luke\n\n\n",
"msg_date": "Tue, 08 Aug 2006 09:22:15 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nHere's some background:\nI use Pg 7.4.13 (I've tested as far back as 7.4.8). I use an 8GB data with a\nprogram called odbc-bench. I run an 18 minute test. With each run, HP box\nexcluded, I unmount the discs involved, reformat, un-tar the backup of\nPGDATA and pg_xlog back on the discs, start-up Postgresql, then run the\nodbc-bench.\n\nOn the Sun box, I've benchmarked an average of 3 to 4 runs with each disc\n(up to 8) in succession in RAID0, RAID5, and RAID10 where applicable. I've\ndone with with pg_xlog on the same discs as PGDATA and separately, so I've\nfelt like I had a good understanding of how the performance works. I've\nnotice performance seems to level off at around 6 discs with another 10-15%\nwith two more discs.\n\nWhen I run odbc-bench, I also run vmstat in the background (through a python\nscript) which averages/summarzies the high/low/average of each category for\neach minute then a final summary after the run.\n\nOn the Sun box, with 4 discs (RAID10) to one channel on the LSI RAID card, I\nsee an average TPS around 70. If I ran this off of one disc, I see an\naverage TPS of 32.\n\non the HP box, with 6-discs in RAID10 and 1 spare. I see a TPS of 34. I\ndon't have my vmstat reports with me, but I recall the CPU utilitization on\nthe HP was about 50% higher. I need to check on this.\n\nSteve\n\n\n\n\n\n\n\nOn 8/8/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> On 8/8/06 8:01 AM, \"Steve Poe\" <[email protected]> wrote:\n>\n> > Thanks for the feedback. I use the same database test that I've run a\n> Sun\n> > dual Opteron with 4Gb RAM and (2) four disk arrays in RAID10. The sun\n> box with\n> > one disc on an LSI MegaRAID 2-channel adapter outperforms this HP box. I\n> > though I was doing something wrong or there is something wrong with the\n> box.\n>\n> Given the circumstances (benchmarked I/O is great, comparable perf on\n> another box with single disk is better), seems that one of:\n> 1) something wrong with the CPU/memory on the box\n> 2) something with the OS version / kernel\n> 3) something with the postgres configuration\n>\n> Can you post the database benchmark results?\n>\n> - Luke\n>\n>\n>\n\nLuke,Here's some background:I use Pg 7.4.13 (I've tested as far back as 7.4.8). I use an 8GB data with a program called odbc-bench. I run an 18 minute test. With each run, HP box excluded, I unmount the discs involved, reformat, un-tar the backup of PGDATA and pg_xlog back on the discs, start-up Postgresql, then run the odbc-bench. \nOn the Sun box, I've benchmarked an average of 3 to 4 runs with each disc (up to 8) in succession in RAID0, RAID5, and RAID10 where applicable. I've done with with pg_xlog on the same discs as PGDATA and separately, so I've felt like I had a good understanding of how the performance works. I've notice performance seems to level off at around 6 discs with another 10-15% with two more discs. \nWhen I run odbc-bench, I also run vmstat in the background (through a python script) which averages/summarzies the high/low/average of each category for each minute then a final summary after the run.On the Sun box, with 4 discs (RAID10) to one channel on the LSI RAID card, I see an average TPS around 70. If I ran this off of one disc, I see an average TPS of 32.\non the HP box, with 6-discs in RAID10 and 1 spare. I see a TPS of 34. I don't have my vmstat reports with me, but I recall the CPU utilitization on the HP was about 50% higher. 
I need to check on this.Steve\nOn 8/8/06, Luke Lonergan <[email protected]> wrote:\nSteve,On 8/8/06 8:01 AM, \"Steve Poe\" <[email protected]> wrote:> Thanks for the feedback. I use the same database test that I've run a Sun\n> dual Opteron with 4Gb RAM and (2) four disk arrays in RAID10. The sun box with> one disc on an LSI MegaRAID 2-channel adapter outperforms this HP box. I> though I was doing something wrong or there is something wrong with the box.\nGiven the circumstances (benchmarked I/O is great, comparable perf onanother box with single disk is better), seems that one of:1) something wrong with the CPU/memory on the box2) something with the OS version / kernel\n3) something with the postgres configurationCan you post the database benchmark results?- Luke",
"msg_date": "Tue, 8 Aug 2006 09:57:43 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Steve,\n\nOn 8/8/06 9:57 AM, \"Steve Poe\" <[email protected]> wrote:\n\n> On the Sun box, with 4 discs (RAID10) to one channel on the LSI RAID card, I\n> see an average TPS around 70. If I ran this off of one disc, I see an average\n> TPS of 32. \n> \n> on the HP box, with 6-discs in RAID10 and 1 spare. I see a TPS of 34. I don't\n> have my vmstat reports with me, but I recall the CPU utilitization on the HP\n> was about 50% higher. I need to check on this.\n\nSounds like there are a few moving parts here, one of which is the ODBC\ndriver.\n\nFirst - using 7.4.x postgres is a big variable - not much experience on this\nlist with 7.4.x anymore.\n\nWhat OS versions are on the two machines?\n\nWhat is the network configuration of each - is a caching DNS server\navailable to each? What are the contents of /etc/resolv.conf?\n\nHave you run \"top\" on the machines while the benchmark is running? What is\nthe top running process, what is it doing (RSS, swap, I/O wait, etc)?\n\nAre any of the disks not healthy? Do you see any I/O errors in dmesg?\n\nNote that tarring up the database directory and untarring it actually\nchanges the block layout of the files on the disk from what the database\nmight have done when it was created. When you create a tar archive of the\nfiles in the DB directory, their contents will be packed in file name order\nin the tar archive and unpacked that way as well. By comparison, the\nordering when the database lays them on disk might have been quite\ndifferent. This doesn't impact the problem you describe as you are\nunpacking the tar file on both machines to start the process (right?).\n\n- Luke\n\n\n",
"msg_date": "Tue, 08 Aug 2006 18:55:19 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": ">\n> >Sounds like there are a few moving parts here, one of which is the ODBC\n> >driver.\n\n\nYes, I need to use it since my clients use it for their veterinary\napplication.\n\n\n>First - using 7.4.x postgres is a big variable - not much experience on\n> this\n> >list with 7.4.x anymore.\n\n\nLike the previous, we have to use it since the manufacturer/vendor uses a\n4GL language which only supports Postgresql 7.4.x\n\n>What OS versions are on the two machines?\n\n\nCentos 4.3 x84_64 on both boxes.\n\n\n>What is the network configuration of each - is a caching DNS server\n> >available to each? What are the contents of /etc/resolv.conf?\n\n\nThe database is configured for the local/loopback on 127.0.0.1. This is my\nlocal network. No DNS.\n\n\n>Have you run \"top\" on the machines while the benchmark is running? What is\n> >the top running process, what is it doing (RSS, swap, I/O wait, etc)?\n\n\nI am not running top, but here's an average per second for the 20-25min\nrun from vmstat presented in a high/peak, low and median\n\nSun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10 LSI MegaRAID\n128MB). This is after 8 runs.\n\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,swapd,128,128,128\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,free,21596,21050,21327\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,buffers,1171,174,595\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,cache,3514368,3467427,3495081\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,bi,97276,1720,31745\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,bo,9209,832,4674\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,in,25906,23204,24115\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,cs,49849,46035,47617\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n\nAverage TPS is 75\n\nHP box with 8GB RAM. six disc array RAID10 on SmartArray 642 with 192MB RAM.\nAfter 8 runs, I see:\n\nintown-vetstar-amd64,08/09/06,Tuesday,23,r,0,0,0\nintown-vetstar-amd64,08/09/06,Tuesday,23,b,2,0,0\nintown-vetstar-amd64,08/09/06,Tuesday,23,swapd,0,0,0\nintown-vetstar-amd64,08/09/06,Tuesday,23,free,33760,16501,17931\nintown-vetstar-amd64,08/09/06,Tuesday,23,buffers,1578,673,1179\nintown-vetstar-amd64,08/09/06,Tuesday,23,cache,7881745,7867700,7876327\nintown-vetstar-amd64,08/09/06,Tuesday,23,bi,66536,0,4480\nintown-vetstar-amd64,08/09/06,Tuesday,23,bo,5991,2,2806\nintown-vetstar-amd64,08/09/06,Tuesday,23,in,1624,260,573\nintown-vetstar-amd64,08/09/06,Tuesday,23,cs,2342,17,1464\nintown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\nintown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\nintown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\nintown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n\nAverage TPS is 31.\n\n\n>Are any of the disks not healthy? Do you see any I/O errors in dmesg?\n\n\nI don't know. I do the following message:\n\"PCI: MSI quirk detected. PCI_BUS_FLAGS_NO_MSI set for subordinate bus\"\n\nOtherwise, no disc error messages.\n\nNote that tarring up the database directory and untarring it actually\n> changes the block layout of the files on the disk from what the database\n> might have done when it was created. When you create a tar archive of the\n> files in the DB directory, their contents will be packed in file name\n> order\n> in the tar archive and unpacked that way as well. 
By comparison, the\n> ordering when the database lays them on disk might have been quite\n> different. This doesn't impact the problem you describe as you are\n> unpacking the tar file on both machines to start the process (right?).\n\n\nYes, I am running this on both machines with the same RPMs of Postgresql and\nsame conf files.\n\nAlso, just for this testing, I am not unmounting, formatting, untaring. I am\ndoing it once than running the series of tests (usually 10 runs).\n\nThanks again for your time. If you're in the SF area, I'll owe you lunch and\n> beer.\n\n\n\nSteve\n\n>Sounds like there are a few moving parts here, one of which is the ODBC>driver.\nYes, I need to use it since my clients use it for their veterinary application. \n>First - using 7.4.x postgres is a big variable - not much experience on this>list with 7.4.x anymore.Like the previous, we have to use it since the manufacturer/vendor uses a 4GL language which only supports Postgresql \n7.4.x >What OS versions are on the two machines?Centos \n4.3 x84_64 on both boxes. >What is the network configuration of each - is a caching DNS server\n>available to each? What are the contents of /etc/resolv.conf?The database is configured for the local/loopback on 127.0.0.1. This is my local network. No DNS.\n >Have you run \"top\" on the machines while the benchmark is running? What is\n>the top running process, what is it doing (RSS, swap, I/O wait, etc)?I am not running top, but here's an average per second for the 20-25min run from vmstat presented in a high/peak, low and median\nSun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10 LSI MegaRAID 128MB). This is after 8 runs.dbserver-dual-opteron-centos,08/08/06,Tuesday,20,swapd,128,128,128dbserver-dual-opteron-centos,08/08/06,Tuesday,20,free,21596,21050,21327\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,buffers,1171,174,595dbserver-dual-opteron-centos,08/08/06,Tuesday,20,cache,3514368,3467427,3495081dbserver-dual-opteron-centos,08/08/06,Tuesday,20,bi,97276,1720,31745\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,bo,9209,832,4674dbserver-dual-opteron-centos,08/08/06,Tuesday,20,in,25906,23204,24115dbserver-dual-opteron-centos,08/08/06,Tuesday,20,cs,49849,46035,47617dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\ndbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38Average TPS is 75\nHP box with 8GB RAM. six disc array RAID10 on SmartArray 642 with 192MB RAM. After 8 runs, I see:intown-vetstar-amd64,08/09/06,Tuesday,23,r,0,0,0intown-vetstar-amd64,08/09/06,Tuesday,23,b,2,0,0intown-vetstar-amd64,08/09/06,Tuesday,23,swapd,0,0,0\nintown-vetstar-amd64,08/09/06,Tuesday,23,free,33760,16501,17931intown-vetstar-amd64,08/09/06,Tuesday,23,buffers,1578,673,1179intown-vetstar-amd64,08/09/06,Tuesday,23,cache,7881745,7867700,7876327intown-vetstar-amd64,08/09/06,Tuesday,23,bi,66536,0,4480\nintown-vetstar-amd64,08/09/06,Tuesday,23,bo,5991,2,2806intown-vetstar-amd64,08/09/06,Tuesday,23,in,1624,260,573intown-vetstar-amd64,08/09/06,Tuesday,23,cs,2342,17,1464intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\nintown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42Average TPS is 31. \n>Are any of the disks not healthy? Do you see any I/O errors in dmesg?I don't know. I do the following message:\"PCI: MSI quirk detected. 
PCI_BUS_FLAGS_NO_MSI set for subordinate bus\"\n Otherwise, no disc error messages.Note that tarring up the database directory and untarring it actually\nchanges the block layout of the files on the disk from what the databasemight have done when it was created. When you create a tar archive of thefiles in the DB directory, their contents will be packed in file name order\nin the tar archive and unpacked that way as well. By comparison, theordering when the database lays them on disk might have been quitedifferent. This doesn't impact the problem you describe as you areunpacking the tar file on both machines to start the process (right?).\nYes, I am running this on both machines with the same RPMs of Postgresql and same conf files.Also, just for this testing, I am not unmounting, formatting, untaring. I am doing it once than running the series of tests (usually 10 runs).\nThanks again for your time. If you're in the SF area, I'll owe you lunch and beer.\nSteve",
"msg_date": "Tue, 8 Aug 2006 21:56:01 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "> Are any of the disks not healthy? Do you see any I/O errors in dmesg?\n\nLuke,\n\nIn my vmstat report, I it is an average per minute not per-second. Also,\nI found that in the first minute of the very first run, the HP's \"bi\"\nvalue hits a high of 221184 then it tanks after that.\n\nSteve\n\n",
"msg_date": "Tue, 08 Aug 2006 22:22:31 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
}
] |
[
{
"msg_contents": "Hi,\n\nI have a query that use a function and some column test to select row. \nIt's in the form of:\n\nSELECT * FROM TABLE\n WHERE TABLE.COLUMN1=something\n AND TABLE.COLUMN2=somethingelse\n AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n\nThe result of the function does NOT depend only from the table, but also \nfrom some other tables.\n\nSince it's long to process, I've add some output to see what's going on. \nI find out that the function process every row even if the row should be \nrejected as per the first or the second condition. Then , my question \nis: Is there a way to formulate a query that wont do all the check if it \ndoes not need to do it ? Meaning that, if condition1 is false then it \nwont check condition2 and that way the function will only be called when \nit's really necessary.\n\nThanks\n",
"msg_date": "Tue, 08 Aug 2006 13:49:06 -0400",
"msg_from": "Patrice Beliveau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing queries"
},
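The "output" Patrice mentions adding is the kind of tracing you can get by wrapping the expensive function in a thin plpgsql shim that logs its arguments before calling the real function. A minimal sketch follows; the argument and return types of outstandingorder() are guessed from the EXPLAIN output later in this thread, and the wrapper name is made up for illustration.

    create or replace function outstandingorder_traced(so text, soi text, due date)
    returns double precision as $$
    begin
        -- log every call so it is visible which rows actually reach the function
        raise notice 'so:% soi:% date:%', so, soi, due;
        return outstandingorder(so, soi, due);
    end;
    $$ language plpgsql;

Substituting the traced wrapper for outstandingorder() in the query makes it obvious whether rows that fail the cheaper conditions are still being handed to the function.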
{
"msg_contents": "On Tue, 2006-08-08 at 12:49, Patrice Beliveau wrote:\n> Hi,\n> \n> I have a query that use a function and some column test to select row. \n> It's in the form of:\n> \n> SELECT * FROM TABLE\n> WHERE TABLE.COLUMN1=something\n> AND TABLE.COLUMN2=somethingelse\n> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n> \n> The result of the function does NOT depend only from the table, but also \n> from some other tables.\n> \n> Since it's long to process, I've add some output to see what's going on. \n> I find out that the function process every row even if the row should be \n> rejected as per the first or the second condition. Then , my question \n> is: Is there a way to formulate a query that wont do all the check if it \n> does not need to do it ? Meaning that, if condition1 is false then it \n> wont check condition2 and that way the function will only be called when \n> it's really necessary.\n\nWhat version of postgresql are you running? It might be better in later\nversions. The standard fix for such things is to use a subquery...\n\nselect * from (\n select * from table where \n col1='something'\n and col2='somethingelse'\n) as a\nwhere function(a.col3,a.col4) > 0;\n",
"msg_date": "Tue, 08 Aug 2006 13:39:59 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing queries"
},
{
"msg_contents": "Scott Marlowe wrote:\n> On Tue, 2006-08-08 at 12:49, Patrice Beliveau wrote:\n> \n>> Hi,\n>>\n>> I have a query that use a function and some column test to select row. \n>> It's in the form of:\n>>\n>> SELECT * FROM TABLE\n>> WHERE TABLE.COLUMN1=something\n>> AND TABLE.COLUMN2=somethingelse\n>> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n>>\n>> The result of the function does NOT depend only from the table, but also \n>> from some other tables.\n>>\n>> Since it's long to process, I've add some output to see what's going on. \n>> I find out that the function process every row even if the row should be \n>> rejected as per the first or the second condition. Then , my question \n>> is: Is there a way to formulate a query that wont do all the check if it \n>> does not need to do it ? Meaning that, if condition1 is false then it \n>> wont check condition2 and that way the function will only be called when \n>> it's really necessary.\n>> \n>\n> What version of postgresql are you running? It might be better in later\n> versions. The standard fix for such things is to use a subquery...\n>\n> select * from (\n> select * from table where \n> col1='something'\n> and col2='somethingelse'\n> ) as a\n> where function(a.col3,a.col4) > 0;\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n>\n> \nThanks for the answer, but it does not work, maybe I did something wrong\n\nFirst, I'm using version 8.1.3\n\nThis is what I did:\n\nselect * from (\n select * from sales_order_delivery\n where sales_order_id in (\n select sales_order_id from sales_order\n where closed=false\n )\n ) as a where outstandingorder(sales_order_id, sales_order_item, \ndate_due) > 0;\n\nSome output that I've create look like\nINFO: so:03616 soi:1 date:1993-12-23\nINFO: so:09614 soi:1 date:1998-06-04\n\nwhich are the three arguments passed to the function \"outstandingorder\", \nbut sales_order 03616 and 09614 are closed.\n\nWhat's wrong ??\n\nThanks\n\n",
"msg_date": "Tue, 08 Aug 2006 16:14:48 -0400",
"msg_from": "Patrice Beliveau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing queries"
},
{
"msg_contents": "Patrice Beliveau <[email protected]> writes:\n>>> SELECT * FROM TABLE\n>>> WHERE TABLE.COLUMN1=something\n>>> AND TABLE.COLUMN2=somethingelse\n>>> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n\n> I find out that the function process every row even if the row should be \n> rejected as per the first or the second condition.\n> ... I'm using version 8.1.3\n\nPG 8.1 will not reorder WHERE clauses for a single table unless it has\nsome specific reason to do so (and AFAICT no version back to 7.0 or so\nhas done so either...) So there's something you are not telling us that\nis relevant. Let's see the exact table schema (psql \\d output is good),\nthe exact query, and EXPLAIN output for that query.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 08 Aug 2006 16:42:29 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing queries "
},
{
"msg_contents": "Tom Lane wrote:\n> Patrice Beliveau <[email protected]> writes:\n> \n>>>> SELECT * FROM TABLE\n>>>> WHERE TABLE.COLUMN1=something\n>>>> AND TABLE.COLUMN2=somethingelse\n>>>> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n>>>> \n>\n> \n>> I find out that the function process every row even if the row should be \n>> rejected as per the first or the second condition.\n>> ... I'm using version 8.1.3\n>> \n>\n> PG 8.1 will not reorder WHERE clauses for a single table unless it has\n> some specific reason to do so (and AFAICT no version back to 7.0 or so\n> has done so either...) So there's something you are not telling us that\n> is relevant. Let's see the exact table schema (psql \\d output is good),\n> the exact query, and EXPLAIN output for that query.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n>\n> \nHi,\n\nhere is my query, and the query plan that result\n\nexplain select * from (\n select * from sales_order_delivery\n where sales_order_id in (\n select sales_order_id from sales_order\n where closed=false\n )\n ) as a where outstandingorder(sales_order_id, sales_order_item, \ndate_due) > 0;\n\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------\n Hash IN Join (cost=498.89..8348.38 rows=34612 width=262)\n Hash Cond: ((\"outer\".sales_order_id)::text = \n(\"inner\".sales_order_id)::text)\n -> Seq Scan on sales_order_delivery (cost=0.00..6465.03 rows=69223 \nwidth=262)\n Filter: (outstandingorder((sales_order_id)::text, \n(sales_order_item)::text, date_due) > 0::double precision)\n -> Hash (cost=484.90..484.90 rows=5595 width=32)\n -> Seq Scan on sales_order (cost=0.00..484.90 rows=5595 width=32)\n Filter: (NOT closed)\n(7 rows)\n\n",
"msg_date": "Wed, 09 Aug 2006 08:05:02 -0400",
"msg_from": "Patrice Beliveau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing queries"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nIf subquerys are not working I think you should try to create a view\nwith the subquery.\n\nMaybe it will work.\n\nPatrice Beliveau wrote:\n> Tom Lane wrote:\n>> Patrice Beliveau <[email protected]> writes:\n>> \n>>>>> SELECT * FROM TABLE\n>>>>> WHERE TABLE.COLUMN1=something\n>>>>> AND TABLE.COLUMN2=somethingelse\n>>>>> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n>>>>> \n>>\n>> \n>>> I find out that the function process every row even if the row should\n>>> be rejected as per the first or the second condition.\n>>> ... I'm using version 8.1.3\n>>> \n>>\n>> PG 8.1 will not reorder WHERE clauses for a single table unless it has\n>> some specific reason to do so (and AFAICT no version back to 7.0 or so\n>> has done so either...) So there's something you are not telling us that\n>> is relevant. Let's see the exact table schema (psql \\d output is good),\n>> the exact query, and EXPLAIN output for that query.\n>>\n>> regards, tom lane\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>> choose an index scan if your joining column's datatypes do not\n>> match\n>>\n>>\n>> \n> Hi,\n> \n> here is my query, and the query plan that result\n> \n> explain select * from (\n> select * from sales_order_delivery\n> where sales_order_id in (\n> select sales_order_id from sales_order\n> where closed=false\n> )\n> ) as a where outstandingorder(sales_order_id, sales_order_item,\n> date_due) > 0;\n> \n> \n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------\n> \n> Hash IN Join (cost=498.89..8348.38 rows=34612 width=262)\n> Hash Cond: ((\"outer\".sales_order_id)::text =\n> (\"inner\".sales_order_id)::text)\n> -> Seq Scan on sales_order_delivery (cost=0.00..6465.03 rows=69223\n> width=262)\n> Filter: (outstandingorder((sales_order_id)::text,\n> (sales_order_item)::text, date_due) > 0::double precision)\n> -> Hash (cost=484.90..484.90 rows=5595 width=32)\n> -> Seq Scan on sales_order (cost=0.00..484.90 rows=5595 width=32)\n> Filter: (NOT closed)\n> (7 rows)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.2.2 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFE2dMTIo1XmbAXRboRAhbIAJwJGZ+ITP0gl38A3qROrzIeNbTtUwCcDOIW\neZ9NJqjL+58gyMfO95jwZSw=\n=4Zxj\n-----END PGP SIGNATURE-----\n",
"msg_date": "Wed, 09 Aug 2006 14:20:35 +0200",
"msg_from": "Ruben Rubio <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing queries"
},
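For reference, the view Ruben is suggesting would look roughly like the sketch below, reusing the table and column names from Patrice's query; the view name here is invented. As the next message shows, it ends up with the same plan as the inline subquery, because the planner expands and flattens the view in exactly the same way.

    create view open_sales_order_deliveries as
        select d.*
        from sales_order_delivery d
        where d.sales_order_id in (
            select sales_order_id from sales_order where closed = false
        );

    select *
    from open_sales_order_deliveries
    where outstandingorder(sales_order_id, sales_order_item, date_due) > 0;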
{
"msg_contents": "I've create a view, same query plan (some number vary a bit, but nothing \nsignificant) and same result, closed sales_order are processed\n\nRuben Rubio wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> If subquerys are not working I think you should try to create a view\n> with the subquery.\n>\n> Maybe it will work.\n>\n> Patrice Beliveau wrote:\n> \n>> Tom Lane wrote:\n>> \n>>> Patrice Beliveau <[email protected]> writes:\n>>> \n>>> \n>>>>>> SELECT * FROM TABLE\n>>>>>> WHERE TABLE.COLUMN1=something\n>>>>>> AND TABLE.COLUMN2=somethingelse\n>>>>>> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n>>>>>> \n>>>>>> \n>>> \n>>> \n>>>> I find out that the function process every row even if the row should\n>>>> be rejected as per the first or the second condition.\n>>>> ... I'm using version 8.1.3\n>>>> \n>>>> \n>>> PG 8.1 will not reorder WHERE clauses for a single table unless it has\n>>> some specific reason to do so (and AFAICT no version back to 7.0 or so\n>>> has done so either...) So there's something you are not telling us that\n>>> is relevant. Let's see the exact table schema (psql \\d output is good),\n>>> the exact query, and EXPLAIN output for that query.\n>>>\n>>> regards, tom lane\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 9: In versions below 8.0, the planner will ignore your desire to\n>>> choose an index scan if your joining column's datatypes do not\n>>> match\n>>>\n>>>\n>>> \n>>> \n>> Hi,\n>>\n>> here is my query, and the query plan that result\n>>\n>> explain select * from (\n>> select * from sales_order_delivery\n>> where sales_order_id in (\n>> select sales_order_id from sales_order\n>> where closed=false\n>> )\n>> ) as a where outstandingorder(sales_order_id, sales_order_item,\n>> date_due) > 0;\n>>\n>>\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------\n>>\n>> Hash IN Join (cost=498.89..8348.38 rows=34612 width=262)\n>> Hash Cond: ((\"outer\".sales_order_id)::text =\n>> (\"inner\".sales_order_id)::text)\n>> -> Seq Scan on sales_order_delivery (cost=0.00..6465.03 rows=69223\n>> width=262)\n>> Filter: (outstandingorder((sales_order_id)::text,\n>> (sales_order_item)::text, date_due) > 0::double precision)\n>> -> Hash (cost=484.90..484.90 rows=5595 width=32)\n>> -> Seq Scan on sales_order (cost=0.00..484.90 rows=5595 width=32)\n>> Filter: (NOT closed)\n>> (7 rows)\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>>\n>> \n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.2.2 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFE2dMTIo1XmbAXRboRAhbIAJwJGZ+ITP0gl38A3qROrzIeNbTtUwCcDOIW\n> eZ9NJqjL+58gyMfO95jwZSw=\n> =4Zxj\n> -----END PGP SIGNATURE-----\n>\n>\n> \n\n",
"msg_date": "Wed, 09 Aug 2006 10:23:01 -0400",
"msg_from": "Patrice Beliveau <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing queries"
},
{
"msg_contents": "Patrice Beliveau <[email protected]> writes:\n> Tom Lane wrote:\n>> PG 8.1 will not reorder WHERE clauses for a single table unless it has\n>> some specific reason to do so (and AFAICT no version back to 7.0 or so\n>> has done so either...) So there's something you are not telling us that\n>> is relevant.\n\n> here is my query, and the query plan that result\n\n> explain select * from (\n> select * from sales_order_delivery\n> where sales_order_id in (\n> select sales_order_id from sales_order\n> where closed=false\n> )\n> ) as a where outstandingorder(sales_order_id, sales_order_item, \n> date_due) > 0;\n\nSo this isn't a simple query, but a join. PG will generally push\nsingle-table restrictions down to the individual tables in order to\nreduce the number of rows that have to be processed at the join.\nIn this case that's not a win, but the planner doesn't know enough\nabout the outstandingorder() function to realize that.\n\nI think what you need is an \"optimization fence\" to prevent the subquery\nfrom being flattened:\n\nexplain select * from (\n select * from sales_order_delivery\n where sales_order_id in (\n select sales_order_id from sales_order\n where closed=false\n )\n OFFSET 0\n ) as a where outstandingorder(sales_order_id, sales_order_item, \ndate_due) > 0;\n\nAny LIMIT or OFFSET in a subquery prevents WHERE conditions from being\npushed down past it (since that might change the results). OFFSET 0 is\notherwise a no-op, so that's what people usually use.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Aug 2006 10:39:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing queries "
},
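Tom's OFFSET 0 trick generalizes to the form the thread opened with. A minimal sketch with placeholder table, column, and function names (not Patrice's real schema):

    select *
    from (
        select *
        from some_table
        where column1 = 'something'
          and column2 = 'somethingelse'
        offset 0   -- optimization fence: keeps the outer WHERE from being pushed down
    ) as a
    where expensive_function(a.column3, a.column4) > 0;

With the fence in place, EXPLAIN should show the function filter applied above the subquery scan, rather than in the filter of the base table's seq scan as in the plan posted earlier.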
{
"msg_contents": "On Tue, 2006-08-08 at 16:42 -0400, Tom Lane wrote:\n> Patrice Beliveau <[email protected]> writes:\n> >>> SELECT * FROM TABLE\n> >>> WHERE TABLE.COLUMN1=something\n> >>> AND TABLE.COLUMN2=somethingelse\n> >>> AND function(TABLE.COLUMN3,TABLE.COLUMN4) > 0;\n> \n> > I find out that the function process every row even if the row should be \n> > rejected as per the first or the second condition.\n> > ... I'm using version 8.1.3\n> \n> PG 8.1 will not reorder WHERE clauses for a single table unless it has\n> some specific reason to do so (and AFAICT no version back to 7.0 or so\n> has done so either...) So there's something you are not telling us that\n> is relevant. Let's see the exact table schema (psql \\d output is good),\n> the exact query, and EXPLAIN output for that query.\n\nIs WHERE clause re-ordering done for 8.2, or is that still a TODO item?\n(Don't remember seeing that at all).\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 02 Oct 2006 17:08:20 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing queries"
}
] |
[
{
"msg_contents": "Steve, \n\n> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10 \n> LSI MegaRAID 128MB). This is after 8 runs.\n> \n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> \n> Average TPS is 75\n> \n> HP box with 8GB RAM. six disc array RAID10 on SmartArray 642 \n> with 192MB RAM. After 8 runs, I see:\n> \n> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> \n> Average TPS is 31.\n\nNote that the I/O wait (wa) on the HP box high, low and average are all\n*much* higher than on the Sun box. The average I/O wait was 50% of one\nCPU, which is huge. By comparison there was virtually no I/O wait on\nthe Sun machine.\n\nThis is indicating that your HP machine is indeed I/O bound and\nfurthermore is tying up a PG process waiting for the disk to return.\n\n- Luke\n\n",
"msg_date": "Wed, 9 Aug 2006 01:22:35 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nI thought so. In my test, I tried to be fair/equal since my Sun box has two\n4-disc arrays each on their own channel. So, I just used one of them which\nshould be a little slower than the 6-disc with 192MB cache.\n\nIncidently, the two internal SCSI drives, which are on the 6i adapter,\ngenerated a TPS of 18.\n\nI thought this server would impressive from notes I've read in the group.\nThis is why I thought I might be doing something wrong. I stumped which way\nto take this. There is no obvious fault but something isn't right.\n\nSteve\n\nOn 8/8/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> > Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10\n> > LSI MegaRAID 128MB). This is after 8 runs.\n> >\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> >\n> > Average TPS is 75\n> >\n> > HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> > with 192MB RAM. After 8 runs, I see:\n> >\n> > intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> > intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> > intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> > intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> >\n> > Average TPS is 31.\n>\n> Note that the I/O wait (wa) on the HP box high, low and average are all\n> *much* higher than on the Sun box. The average I/O wait was 50% of one\n> CPU, which is huge. By comparison there was virtually no I/O wait on\n> the Sun machine.\n>\n> This is indicating that your HP machine is indeed I/O bound and\n> furthermore is tying up a PG process waiting for the disk to return.\n>\n> - Luke\n>\n>\n\nLuke,I thought so. In my test, I tried to be fair/equal since my Sun box has two 4-disc arrays each on their own channel. So, I just used one of them which should be a little slower than the 6-disc with 192MB cache.\nIncidently, the two internal SCSI drives, which are on the 6i adapter, generated a TPS of 18.I thought this server would impressive from notes I've read in the group. This is why I thought I might be doing something wrong. I stumped which way to take this. There is no obvious fault but something isn't right.\nSteveOn 8/8/06, Luke Lonergan <[email protected]> wrote:\nSteve,> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10> LSI MegaRAID 128MB). This is after 8 runs.>> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38>> Average TPS is 75>> HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> with 192MB RAM. After 8 runs, I see:>> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42>> Average TPS is 31.Note that the I/O wait (wa) on the HP box high, low and average are all*much* higher than on the Sun box. The average I/O wait was 50% of one\nCPU, which is huge. By comparison there was virtually no I/O wait onthe Sun machine.This is indicating that your HP machine is indeed I/O bound andfurthermore is tying up a PG process waiting for the disk to return.\n- Luke",
"msg_date": "Tue, 8 Aug 2006 22:45:07 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nI check dmesg one more time and I found this regarding the cciss driver:\n\nFilesystem \"cciss/c1d0p1\": Disabling barriers, not supported by the\nunderlying device.\n\nDon't know if it means anything, but thought I'd mention it.\n\nSteve\n\nOn 8/8/06, Steve Poe <[email protected]> wrote:\n>\n> Luke,\n>\n> I thought so. In my test, I tried to be fair/equal since my Sun box has\n> two 4-disc arrays each on their own channel. So, I just used one of them\n> which should be a little slower than the 6-disc with 192MB cache.\n>\n> Incidently, the two internal SCSI drives, which are on the 6i adapter,\n> generated a TPS of 18.\n>\n> I thought this server would impressive from notes I've read in the group.\n> This is why I thought I might be doing something wrong. I stumped which way\n> to take this. There is no obvious fault but something isn't right.\n>\n> Steve\n>\n>\n> On 8/8/06, Luke Lonergan <[email protected]> wrote:\n> >\n> > Steve,\n> >\n> > > Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10\n> > > LSI MegaRAID 128MB). This is after 8 runs.\n> > >\n> > > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> > > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> > > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> > > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> > >\n> > > Average TPS is 75\n> > >\n> > > HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> > > with 192MB RAM. After 8 runs, I see:\n> > >\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> > >\n> > > Average TPS is 31.\n> >\n> > Note that the I/O wait (wa) on the HP box high, low and average are all\n> > *much* higher than on the Sun box. The average I/O wait was 50% of one\n> > CPU, which is huge. By comparison there was virtually no I/O wait on\n> > the Sun machine.\n> >\n> > This is indicating that your HP machine is indeed I/O bound and\n> > furthermore is tying up a PG process waiting for the disk to return.\n> >\n> > - Luke\n> >\n> >\n>\n\nLuke,I check dmesg one more time and I found this regarding the cciss driver:Filesystem \"cciss/c1d0p1\": Disabling barriers, not supported by the underlying device.Don't know if it means anything, but thought I'd mention it.\nSteveOn 8/8/06, Steve Poe <[email protected]> wrote:\nLuke,I thought so. In my test, I tried to be fair/equal since my Sun box has two 4-disc arrays each on their own channel. So, I just used one of them which should be a little slower than the 6-disc with 192MB cache.\nIncidently, the two internal SCSI drives, which are on the 6i adapter, generated a TPS of 18.I thought this server would impressive from notes I've read in the group. This is why I thought I might be doing something wrong. I stumped which way to take this. There is no obvious fault but something isn't right.\nSteveOn 8/8/06, Luke Lonergan <\[email protected]> wrote:\nSteve,> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10> LSI MegaRAID 128MB). This is after 8 runs.>> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38>> Average TPS is 75>> HP box with 8GB RAM. 
six disc array RAID10 on SmartArray 642\n> with 192MB RAM. After 8 runs, I see:>> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42>> Average TPS is 31.Note that the I/O wait (wa) on the HP box high, low and average are all*much* higher than on the Sun box. The average I/O wait was 50% of one\nCPU, which is huge. By comparison there was virtually no I/O wait onthe Sun machine.This is indicating that your HP machine is indeed I/O bound andfurthermore is tying up a PG process waiting for the disk to return.\n- Luke",
"msg_date": "Tue, 8 Aug 2006 23:33:10 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:\n> Luke,\n> \n> I thought so. In my test, I tried to be fair/equal since my Sun box has two\n> 4-disc arrays each on their own channel. So, I just used one of them which\n> should be a little slower than the 6-disc with 192MB cache.\n> \n> Incidently, the two internal SCSI drives, which are on the 6i adapter,\n> generated a TPS of 18.\n \nYou should try putting pg_xlog on the 6 drive array with the data. My\n(limited) experience with such a config is that on a good controller\nwith writeback caching enabled it won't hurt you, and if the internal\ndrives aren't caching writes it'll probably help you a lot.\n\n> I thought this server would impressive from notes I've read in the group.\n> This is why I thought I might be doing something wrong. I stumped which way\n> to take this. There is no obvious fault but something isn't right.\n> \n> Steve\n> \n> On 8/8/06, Luke Lonergan <[email protected]> wrote:\n> >\n> >Steve,\n> >\n> >> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10\n> >> LSI MegaRAID 128MB). This is after 8 runs.\n> >>\n> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> >>\n> >> Average TPS is 75\n> >>\n> >> HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> >> with 192MB RAM. After 8 runs, I see:\n> >>\n> >> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> >> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> >> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> >> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> >>\n> >> Average TPS is 31.\n> >\n> >Note that the I/O wait (wa) on the HP box high, low and average are all\n> >*much* higher than on the Sun box. The average I/O wait was 50% of one\n> >CPU, which is huge. By comparison there was virtually no I/O wait on\n> >the Sun machine.\n> >\n> >This is indicating that your HP machine is indeed I/O bound and\n> >furthermore is tying up a PG process waiting for the disk to return.\n> >\n> >- Luke\n> >\n> >\n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 9 Aug 2006 16:05:40 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Jim,\n\nI'll give it a try. However, I did not see anywhere in the BIOS\nconfiguration of the 642 RAID adapter to enable writeback. It may have been\nmislabled cache accelerator where you can give a percentage to read/write.\nThat aspect did not change the performance like the LSI MegaRAID adapter\ndoes.\n\nSteve\n\nOn 8/9/06, Jim C. Nasby <[email protected]> wrote:\n>\n> On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:\n> > Luke,\n> >\n> > I thought so. In my test, I tried to be fair/equal since my Sun box has\n> two\n> > 4-disc arrays each on their own channel. So, I just used one of them\n> which\n> > should be a little slower than the 6-disc with 192MB cache.\n> >\n> > Incidently, the two internal SCSI drives, which are on the 6i adapter,\n> > generated a TPS of 18.\n>\n> You should try putting pg_xlog on the 6 drive array with the data. My\n> (limited) experience with such a config is that on a good controller\n> with writeback caching enabled it won't hurt you, and if the internal\n> drives aren't caching writes it'll probably help you a lot.\n>\n> > I thought this server would impressive from notes I've read in the\n> group.\n> > This is why I thought I might be doing something wrong. I stumped which\n> way\n> > to take this. There is no obvious fault but something isn't right.\n> >\n> > Steve\n> >\n> > On 8/8/06, Luke Lonergan <[email protected]> wrote:\n> > >\n> > >Steve,\n> > >\n> > >> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10\n> > >> LSI MegaRAID 128MB). This is after 8 runs.\n> > >>\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> > >>\n> > >> Average TPS is 75\n> > >>\n> > >> HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> > >> with 192MB RAM. After 8 runs, I see:\n> > >>\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> > >>\n> > >> Average TPS is 31.\n> > >\n> > >Note that the I/O wait (wa) on the HP box high, low and average are all\n> > >*much* higher than on the Sun box. The average I/O wait was 50% of one\n> > >CPU, which is huge. By comparison there was virtually no I/O wait on\n> > >the Sun machine.\n> > >\n> > >This is indicating that your HP machine is indeed I/O bound and\n> > >furthermore is tying up a PG process waiting for the disk to return.\n> > >\n> > >- Luke\n> > >\n> > >\n>\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n\nJim,I'll give it a try. However, I did not see anywhere in the BIOS configuration of the 642 RAID adapter to enable writeback. It may have been mislabled cache accelerator where you can give a percentage to read/write. That aspect did not change the performance like the LSI MegaRAID adapter does.\nSteveOn 8/9/06, Jim C. Nasby <[email protected]> wrote:\nOn Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:> Luke,>> I thought so. In my test, I tried to be fair/equal since my Sun box has two> 4-disc arrays each on their own channel. 
So, I just used one of them which\n> should be a little slower than the 6-disc with 192MB cache.>> Incidently, the two internal SCSI drives, which are on the 6i adapter,> generated a TPS of 18.You should try putting pg_xlog on the 6 drive array with the data. My\n(limited) experience with such a config is that on a good controllerwith writeback caching enabled it won't hurt you, and if the internaldrives aren't caching writes it'll probably help you a lot.> I thought this server would impressive from notes I've read in the group.\n> This is why I thought I might be doing something wrong. I stumped which way> to take this. There is no obvious fault but something isn't right.>> Steve>> On 8/8/06, Luke Lonergan <\[email protected]> wrote:> >> >Steve,> >> >> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10> >> LSI MegaRAID 128MB). This is after 8 runs.\n> >>> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38> >>> >> Average TPS is 75> >>> >> HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> >> with 192MB RAM. After 8 runs, I see:> >>> >> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3> >> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1> >> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> >> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42> >>> >> Average TPS is 31.> >> >Note that the I/O wait (wa) on the HP box high, low and average are all\n> >*much* higher than on the Sun box. The average I/O wait was 50% of one> >CPU, which is huge. By comparison there was virtually no I/O wait on> >the Sun machine.> >> >This is indicating that your HP machine is indeed I/O bound and\n> >furthermore is tying up a PG process waiting for the disk to return.> >> >- Luke> >> >--Jim C. Nasby, Sr. Engineering Consultant \[email protected] Software http://pervasive.com work: 512-231-6117vcard: http://jim.nasby.net/pervasive.vcf\n cell: 512-569-9461",
"msg_date": "Wed, 9 Aug 2006 14:11:37 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Wed, 2006-08-09 at 16:11, Steve Poe wrote:\n> Jim,\n> \n> I'll give it a try. However, I did not see anywhere in the BIOS\n> configuration of the 642 RAID adapter to enable writeback. It may have\n> been mislabled cache accelerator where you can give a percentage to\n> read/write. That aspect did not change the performance like the LSI\n> MegaRAID adapter does. \n\nNope, that's not the same thing.\n\nDoes your raid controller have batter backed cache, or plain or regular\ncache? write back is unsafe without battery backup.\n\nThe default is write through (i.e. the card waits for the data to get\nwritten out before acking an fsync). In write back, the card's driver\nwrites the data to the bb cache, then returns on an fsync while the\ncache gets written out at leisure. In the event of a loss of power, the\ncache is flushed on restart. \n",
"msg_date": "Wed, 09 Aug 2006 16:37:28 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "I believe it does, I'll need to check.Thanks for the correction.\n\nSteve\n\nOn 8/9/06, Scott Marlowe <[email protected]> wrote:\n>\n> On Wed, 2006-08-09 at 16:11, Steve Poe wrote:\n> > Jim,\n> >\n> > I'll give it a try. However, I did not see anywhere in the BIOS\n> > configuration of the 642 RAID adapter to enable writeback. It may have\n> > been mislabled cache accelerator where you can give a percentage to\n> > read/write. That aspect did not change the performance like the LSI\n> > MegaRAID adapter does.\n>\n> Nope, that's not the same thing.\n>\n> Does your raid controller have batter backed cache, or plain or regular\n> cache? write back is unsafe without battery backup.\n>\n> The default is write through (i.e. the card waits for the data to get\n> written out before acking an fsync). In write back, the card's driver\n> writes the data to the bb cache, then returns on an fsync while the\n> cache gets written out at leisure. In the event of a loss of power, the\n> cache is flushed on restart.\n>\n\n I believe it does, I'll need to check.Thanks for the correction.\n\nSteve On 8/9/06, Scott Marlowe <[email protected]> wrote:\nOn Wed, 2006-08-09 at 16:11, Steve Poe wrote:> Jim,>> I'll give it a try. However, I did not see anywhere in the BIOS> configuration of the 642 RAID adapter to enable writeback. It may have\n> been mislabled cache accelerator where you can give a percentage to> read/write. That aspect did not change the performance like the LSI> MegaRAID adapter does.Nope, that's not the same thing.\nDoes your raid controller have batter backed cache, or plain or regularcache? write back is unsafe without battery backup.The default is write through (i.e. the card waits for the data to getwritten out before acking an fsync). In write back, the card's driver\nwrites the data to the bb cache, then returns on an fsync while thecache gets written out at leisure. In the event of a loss of power, thecache is flushed on restart.",
"msg_date": "Wed, 9 Aug 2006 16:47:10 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Scott,\n\nDo you know how to activate the writeback on the RAID controller from HP?\n\nSteve\n\nOn 8/9/06, Scott Marlowe <[email protected]> wrote:\n>\n> On Wed, 2006-08-09 at 16:11, Steve Poe wrote:\n> > Jim,\n> >\n> > I'll give it a try. However, I did not see anywhere in the BIOS\n> > configuration of the 642 RAID adapter to enable writeback. It may have\n> > been mislabled cache accelerator where you can give a percentage to\n> > read/write. That aspect did not change the performance like the LSI\n> > MegaRAID adapter does.\n>\n> Nope, that's not the same thing.\n>\n> Does your raid controller have batter backed cache, or plain or regular\n> cache? write back is unsafe without battery backup.\n>\n> The default is write through (i.e. the card waits for the data to get\n> written out before acking an fsync). In write back, the card's driver\n> writes the data to the bb cache, then returns on an fsync while the\n> cache gets written out at leisure. In the event of a loss of power, the\n> cache is flushed on restart.\n>\n\nScott,Do you know how to activate the writeback on the RAID controller from HP?SteveOn 8/9/06, Scott Marlowe <\[email protected]> wrote:On Wed, 2006-08-09 at 16:11, Steve Poe wrote:\n> Jim,>> I'll give it a try. However, I did not see anywhere in the BIOS> configuration of the 642 RAID adapter to enable writeback. It may have> been mislabled cache accelerator where you can give a percentage to\n> read/write. That aspect did not change the performance like the LSI> MegaRAID adapter does.Nope, that's not the same thing.Does your raid controller have batter backed cache, or plain or regular\ncache? write back is unsafe without battery backup.The default is write through (i.e. the card waits for the data to getwritten out before acking an fsync). In write back, the card's driverwrites the data to the bb cache, then returns on an fsync while the\ncache gets written out at leisure. In the event of a loss of power, thecache is flushed on restart.",
"msg_date": "Wed, 9 Aug 2006 18:24:07 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Jim,\n\nI tried as you suggested and my performance dropped by 50%. I went from\na 32 TPS to 16. Oh well.\n\nSteve\n\nOn Wed, 2006-08-09 at 16:05 -0500, Jim C. Nasby wrote:\n> On Tue, Aug 08, 2006 at 10:45:07PM -0700, Steve Poe wrote:\n> > Luke,\n> > \n> > I thought so. In my test, I tried to be fair/equal since my Sun box has two\n> > 4-disc arrays each on their own channel. So, I just used one of them which\n> > should be a little slower than the 6-disc with 192MB cache.\n> > \n> > Incidently, the two internal SCSI drives, which are on the 6i adapter,\n> > generated a TPS of 18.\n> \n> You should try putting pg_xlog on the 6 drive array with the data. My\n> (limited) experience with such a config is that on a good controller\n> with writeback caching enabled it won't hurt you, and if the internal\n> drives aren't caching writes it'll probably help you a lot.\n> \n> > I thought this server would impressive from notes I've read in the group.\n> > This is why I thought I might be doing something wrong. I stumped which way\n> > to take this. There is no obvious fault but something isn't right.\n> > \n> > Steve\n> > \n> > On 8/8/06, Luke Lonergan <[email protected]> wrote:\n> > >\n> > >Steve,\n> > >\n> > >> Sun box with 4-disc array (4GB RAM. 4 167GB 10K SCSI RAID10\n> > >> LSI MegaRAID 128MB). This is after 8 runs.\n> > >>\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> > >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> > >>\n> > >> Average TPS is 75\n> > >>\n> > >> HP box with 8GB RAM. six disc array RAID10 on SmartArray 642\n> > >> with 192MB RAM. After 8 runs, I see:\n> > >>\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> > >> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> > >>\n> > >> Average TPS is 31.\n> > >\n> > >Note that the I/O wait (wa) on the HP box high, low and average are all\n> > >*much* higher than on the Sun box. The average I/O wait was 50% of one\n> > >CPU, which is huge. By comparison there was virtually no I/O wait on\n> > >the Sun machine.\n> > >\n> > >This is indicating that your HP machine is indeed I/O bound and\n> > >furthermore is tying up a PG process waiting for the disk to return.\n> > >\n> > >- Luke\n> > >\n> > >\n> \n\n",
"msg_date": "Wed, 09 Aug 2006 20:29:13 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n>I tried as you suggested and my performance dropped by 50%. I went from\n>a 32 TPS to 16. Oh well.\n\nIf you put data & xlog on the same array, put them on seperate \npartitions, probably formatted differently (ext2 on xlog).\n\nMike Stone\n",
"msg_date": "Thu, 10 Aug 2006 07:09:38 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Mike,\n\nOn 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:\n\n> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n>> I tried as you suggested and my performance dropped by 50%. I went from\n>> a 32 TPS to 16. Oh well.\n> \n> If you put data & xlog on the same array, put them on seperate\n> partitions, probably formatted differently (ext2 on xlog).\n\nIf he's doing the same thing on both systems (Sun and HP) and the HP\nperformance is dramatically worse despite using more disks and having faster\nCPUs and more RAM, ISTM the problem isn't the configuration.\n\nAdd to this the fact that the Sun machine is CPU bound while the HP is I/O\nwait bound and I think the problem is the disk hardware or the driver\ntherein.\n\n- Luke\n\n\n",
"msg_date": "Thu, 10 Aug 2006 08:15:38 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:\n> Mike,\n> \n> On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:\n> \n> > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n> >> I tried as you suggested and my performance dropped by 50%. I went from\n> >> a 32 TPS to 16. Oh well.\n> > \n> > If you put data & xlog on the same array, put them on seperate\n> > partitions, probably formatted differently (ext2 on xlog).\n> \n> If he's doing the same thing on both systems (Sun and HP) and the HP\n> performance is dramatically worse despite using more disks and having faster\n> CPUs and more RAM, ISTM the problem isn't the configuration.\n> \n> Add to this the fact that the Sun machine is CPU bound while the HP is I/O\n> wait bound and I think the problem is the disk hardware or the driver\n> therein.\n\nI agree. The problem here looks to be the RAID controller.\n\nSteve, got access to a different RAID controller to test with?\n",
"msg_date": "Thu, 10 Aug 2006 10:35:10 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Scott,\n\nI *could* rip out the LSI MegaRAID 2X from my Sun box. This belongs to me\nfor testing. but I don't know if it will fit in the DL385. Do they have\nfull-heigth/length slots? I've not worked on this type of box before. I was\nthinking this is the next step. In the meantime, I've discovered their no\nemail support for them so I am hoping find a support contact through the\nsales rep that this box was purchased from.\n\nSteve\n\nOn 8/10/06, Scott Marlowe <[email protected]> wrote:\n>\n> On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:\n> > Mike,\n> >\n> > On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:\n> >\n> > > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n> > >> I tried as you suggested and my performance dropped by 50%. I went\n> from\n> > >> a 32 TPS to 16. Oh well.\n> > >\n> > > If you put data & xlog on the same array, put them on seperate\n> > > partitions, probably formatted differently (ext2 on xlog).\n> >\n> > If he's doing the same thing on both systems (Sun and HP) and the HP\n> > performance is dramatically worse despite using more disks and having\n> faster\n> > CPUs and more RAM, ISTM the problem isn't the configuration.\n> >\n> > Add to this the fact that the Sun machine is CPU bound while the HP is\n> I/O\n> > wait bound and I think the problem is the disk hardware or the driver\n> > therein.\n>\n> I agree. The problem here looks to be the RAID controller.\n>\n> Steve, got access to a different RAID controller to test with?\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nScott,I *could* rip out the LSI MegaRAID 2X from my Sun box. This belongs to me for testing. but I don't know if it will fit in the DL385. Do they have full-heigth/length slots? I've not worked on this type of box before. I was thinking this is the next step. In the meantime, I've discovered their no email support for them so I am hoping find a support contact through the sales rep that this box was purchased from. \nSteveOn 8/10/06, Scott Marlowe <[email protected]> wrote:\nOn Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:> Mike,>> On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:>\n> > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:> >> I tried as you suggested and my performance dropped by 50%. I went from> >> a 32 TPS to 16. Oh well.> >> > If you put data & xlog on the same array, put them on seperate\n> > partitions, probably formatted differently (ext2 on xlog).>> If he's doing the same thing on both systems (Sun and HP) and the HP> performance is dramatically worse despite using more disks and having faster\n> CPUs and more RAM, ISTM the problem isn't the configuration.>> Add to this the fact that the Sun machine is CPU bound while the HP is I/O> wait bound and I think the problem is the disk hardware or the driver\n> therein.I agree. The problem here looks to be the RAID controller.Steve, got access to a different RAID controller to test with?---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [email protected] so that your message can get through to the mailing list cleanly",
"msg_date": "Thu, 10 Aug 2006 08:47:05 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Thu, Aug 10, 2006 at 07:09:38AM -0400, Michael Stone wrote:\n> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n> >I tried as you suggested and my performance dropped by 50%. I went from\n> >a 32 TPS to 16. Oh well.\n> \n> If you put data & xlog on the same array, put them on seperate \n> partitions, probably formatted differently (ext2 on xlog).\n\nGot any data to back that up?\n\nThe problem with seperate partitions is that it means more head movement\nfor the drives. If it's all one partition the pg_xlog data will tend to\nbe interspersed with the heap data, meaning less need for head\nrepositioning.\n\nOf course, if ext2 provided enough of a speed improvement over ext3 with\ndata=writeback then it's possible that this would be a win, though if\nthe controller is good enough to make putting pg_xlog on the same array\nas $PGDATA a win, I suspect it would make up for most filesystem\nperformance issues associated with pg_xlog as well.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 14 Aug 2006 10:38:41 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Jim,\n\nI have to say Michael is onto something here to my surprise. I partitioned\nthe RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted\nwith XFS and pg_xlog as ext2. Performance jumped up to median of 98 TPS. I\ncould reproduce the similar result with the LSI MegaRAID 2X adapter as well\nas with my own 4-disc drive array.\n\nThe problem lies with the HP SmartArray 6i adapter and/or the internal SCSI\ndiscs. Putting the pg_xlog on it kills the performance.\n\nSteve\n\nOn 8/14/06, Jim C. Nasby <[email protected]> wrote:\n>\n> On Thu, Aug 10, 2006 at 07:09:38AM -0400, Michael Stone wrote:\n> > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n> > >I tried as you suggested and my performance dropped by 50%. I went from\n> > >a 32 TPS to 16. Oh well.\n> >\n> > If you put data & xlog on the same array, put them on seperate\n> > partitions, probably formatted differently (ext2 on xlog).\n>\n> Got any data to back that up?\n>\n> The problem with seperate partitions is that it means more head movement\n> for the drives. If it's all one partition the pg_xlog data will tend to\n> be interspersed with the heap data, meaning less need for head\n> repositioning.\n>\n> Of course, if ext2 provided enough of a speed improvement over ext3 with\n> data=writeback then it's possible that this would be a win, though if\n> the controller is good enough to make putting pg_xlog on the same array\n> as $PGDATA a win, I suspect it would make up for most filesystem\n> performance issues associated with pg_xlog as well.\n> --\n> Jim C. Nasby, Sr. Engineering Consultant [email protected]\n> Pervasive Software http://pervasive.com work: 512-231-6117\n> vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n\nJim,I have to say Michael is onto something here to my surprise. I partitioned the RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted with XFS and pg_xlog as ext2. Performance jumped up to median of 98 TPS. I could reproduce the similar result with the LSI MegaRAID 2X adapter as well as with my own 4-disc drive array. \nThe problem lies with the HP SmartArray 6i adapter and/or the internal SCSI discs. Putting the pg_xlog on it kills the performance.SteveOn 8/14/06, \nJim C. Nasby <[email protected]> wrote:\nOn Thu, Aug 10, 2006 at 07:09:38AM -0400, Michael Stone wrote:> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:> >I tried as you suggested and my performance dropped by 50%. I went from> >a 32 TPS to 16. Oh well.\n>> If you put data & xlog on the same array, put them on seperate> partitions, probably formatted differently (ext2 on xlog).Got any data to back that up?The problem with seperate partitions is that it means more head movement\nfor the drives. If it's all one partition the pg_xlog data will tend tobe interspersed with the heap data, meaning less need for headrepositioning.Of course, if ext2 provided enough of a speed improvement over ext3 with\ndata=writeback then it's possible that this would be a win, though ifthe controller is good enough to make putting pg_xlog on the same arrayas $PGDATA a win, I suspect it would make up for most filesystemperformance issues associated with pg_xlog as well.\n--Jim C. Nasby, Sr. 
Engineering Consultant [email protected] Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461---------------------------(end of broadcast)---------------------------TIP 5: don't forget to increase your free space map settings",
"msg_date": "Mon, 14 Aug 2006 08:51:09 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:\n>Got any data to back that up?\n\nyes. that I'm willing to dig out? no. :)\n\n>The problem with seperate partitions is that it means more head movement\n>for the drives. If it's all one partition the pg_xlog data will tend to\n>be interspersed with the heap data, meaning less need for head\n>repositioning.\n\nThe pg_xlog files will tend to be created up at the front of the disk \nand just sit there. Any affect the positioning has one way or the other \nisn't going to be measurable/repeatable. With a write cache for pg_xlog \nthe positioning isn't really going to matter anyway, since you don't \nhave to wait for a seek to do the write.\n\n From what I've observed in testing, I'd guess that the issue is that \ncertain filesystem operations (including, possibly, metadata operations) \nare handled in order. If you have xlog on a seperate partition there \nwill never be anything competing with a log write on the server side, \nwhich won't necessarily be true on a shared filesystem. Even if you have \na battery backed write cache, you might still have to wait a relatively \nlong time for the pg_xlog data to be written out if there's already a \nlot of other stuff in a filesystem write queue. \n\nMike Stone\n",
"msg_date": "Mon, 14 Aug 2006 13:03:41 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Mon, Aug 14, 2006 at 08:51:09AM -0700, Steve Poe wrote:\n> Jim,\n> \n> I have to say Michael is onto something here to my surprise. I partitioned\n> the RAID10 on the SmartArray 642 adapter into two parts, PGDATA formatted\n> with XFS and pg_xlog as ext2. Performance jumped up to median of 98 TPS. I\n> could reproduce the similar result with the LSI MegaRAID 2X adapter as well\n> as with my own 4-disc drive array.\n> \n> The problem lies with the HP SmartArray 6i adapter and/or the internal SCSI\n> discs. Putting the pg_xlog on it kills the performance.\n\nWow, interesting. IIRC, XFS is lower performing than ext3, so if your\nprevious tests were done with XFS, that might be part of it. But without\na doubt, if you don't have a good raid controller you don't want to try\ncombining pg_xlog with PGDATA.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Mon, 14 Aug 2006 12:05:46 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:\n>Wow, interesting. IIRC, XFS is lower performing than ext3,\n\nFor xlog, maybe. For data, no. Both are definately slower than ext2 for \nxlog, which is another reason to have xlog on a small filesystem which \ndoesn't need metadata journalling.\n\nMike Stone\n",
"msg_date": "Mon, 14 Aug 2006 13:09:04 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Mon, Aug 14, 2006 at 01:03:41PM -0400, Michael Stone wrote:\n> On Mon, Aug 14, 2006 at 10:38:41AM -0500, Jim C. Nasby wrote:\n> >Got any data to back that up?\n> \n> yes. that I'm willing to dig out? no. :)\n \nWell, I'm not digging hard numbers out either, so that's fair. :) But it\nwould be very handy if people posted results from any testing they're\ndoing as part of setting up new hardware. Actually, a wiki would\nprobably be ideal for this...\n\n> >The problem with seperate partitions is that it means more head movement\n> >for the drives. If it's all one partition the pg_xlog data will tend to\n> >be interspersed with the heap data, meaning less need for head\n> >repositioning.\n> \n> The pg_xlog files will tend to be created up at the front of the disk \n> and just sit there. Any affect the positioning has one way or the other \n> isn't going to be measurable/repeatable. With a write cache for pg_xlog \n> the positioning isn't really going to matter anyway, since you don't \n> have to wait for a seek to do the write.\n \nCertainly... my contention is that if you have a good controller that's\ncaching writes then drive layout basically won't matter at all, because\nthe controller will just magically make things optimal.\n\n> From what I've observed in testing, I'd guess that the issue is that \n> certain filesystem operations (including, possibly, metadata operations) \n> are handled in order. If you have xlog on a seperate partition there \n> will never be anything competing with a log write on the server side, \n> which won't necessarily be true on a shared filesystem. Even if you have \n> a battery backed write cache, you might still have to wait a relatively \n> long time for the pg_xlog data to be written out if there's already a \n> lot of other stuff in a filesystem write queue. \n\nWell, if the controller is caching with a BBU, I'm not sure that order\nmatters anymore, because the controller should be able to re-order at\nwill. Theoretically. :) But this is why having some actual data posted\nsomewhere would be great.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 11:25:24 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:\n> On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:\n> >Wow, interesting. IIRC, XFS is lower performing than ext3,\n> \n> For xlog, maybe. For data, no. Both are definately slower than ext2 for \n> xlog, which is another reason to have xlog on a small filesystem which \n> doesn't need metadata journalling.\n\nAre 'we' sure that such a setup can't lose any data? I'm worried about\nfiles getting lost when they get written out before the metadata does.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 11:29:26 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 11:25:24AM -0500, Jim C. Nasby wrote:\n>Well, if the controller is caching with a BBU, I'm not sure that order\n>matters anymore, because the controller should be able to re-order at\n>will. Theoretically. :) But this is why having some actual data posted\n>somewhere would be great.\n\nYou're missing the point. It's not a question of what happens once it \ngets to the disk/controller, it's a question of whether the xlog write \nhas to compete with some other write activity before the write gets to \nthe disk (e.g., at the filesystem level). If you've got a bunch of stuff \nin a write buffer on the OS level and you try to push the xlog write \nout, you may have to wait for the other stuff to get to the controller \nwrite cache before the xlog does. It doesn't matter if you don't have to \nwait for the write to get from the controller cache to the disk if you \nalready had to wait to get to the controller cache. The effect is a \n*lot* smaller than not having a non-volatile cache, but it is an \nimprovement. (Also, the difference between ext2 and xfs for the xlog is \npretty big itself, and a good reason all by itself to put xlog on a \nseperate partition that's small enough to not need journalling.)\n\nMike Stone\n",
"msg_date": "Tue, 15 Aug 2006 13:25:21 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:\n>Are 'we' sure that such a setup can't lose any data?\n\nYes. If you check the archives, you can even find the last time this was \ndiscussed...\n\nThe bottom line is that the only reason you need a metadata journalling \nfilesystem is to save the fsck time when you come up. On a little \npartition like xlog, that's not an issue.\n\nMike Stone\n",
"msg_date": "Tue, 15 Aug 2006 13:26:46 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:\n> On Mon, Aug 14, 2006 at 01:09:04PM -0400, Michael Stone wrote:\n> > On Mon, Aug 14, 2006 at 12:05:46PM -0500, Jim C. Nasby wrote:\n> > >Wow, interesting. IIRC, XFS is lower performing than ext3,\n> > For xlog, maybe. For data, no. Both are definately slower than ext2 for \n> > xlog, which is another reason to have xlog on a small filesystem which \n> > doesn't need metadata journalling.\n> Are 'we' sure that such a setup can't lose any data? I'm worried about\n> files getting lost when they get written out before the metadata does.\n\nI've been worrying about this myself, and my current conclusion is that\next2 is bad because: a) fsck, and b) data can be lost or corrupted, which\ncould lead to the need to trash the xlog.\n\nEven ext3 in writeback mode allows for the indirect blocks to be updated\nwithout the data underneath, allowing for blocks to point to random data,\nor worse, previous apparently sane data (especially if the data is from\na drive only used for xlog - the chance is high that a block might look\npartially valid?).\n\nSo, I'm sticking with ext3 in ordered mode.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 15 Aug 2006 14:31:48 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:\n> On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:\n> >Are 'we' sure that such a setup can't lose any data?\n> Yes. If you check the archives, you can even find the last time this was \n> discussed...\n\nI looked last night (coincidence actually) and didn't find proof that\nyou cannot lose data.\n\nHow do you deal with the file system structure being updated before the\ndata blocks are (re-)written?\n\nI don't think you can.\n\n> The bottom line is that the only reason you need a metadata journalling \n> filesystem is to save the fsck time when you come up. On a little \n> partition like xlog, that's not an issue.\n\nfsck isn't only about time to fix. fsck is needed, because the file system\nis broken. If the file system is broken, how can you guarantee data has\nnot been corrupted?\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 15 Aug 2006 14:33:27 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 02:33:27PM -0400, [email protected] wrote:\n>On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:\n>> On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:\n>> >Are 'we' sure that such a setup can't lose any data?\n>> Yes. If you check the archives, you can even find the last time this was \n>> discussed...\n>\n>I looked last night (coincidence actually) and didn't find proof that\n>you cannot lose data.\n\nYou aren't going to find proof, any more than you'll find proof that you \nwon't lose data if you do lose a journalling fs. (Because there isn't \nany.) Unfortunately, many people misunderstand the what a metadata \njournal does for you, and overstate its importance in this type of \napplication.\n\n>How do you deal with the file system structure being updated before the\n>data blocks are (re-)written?\n\n*That's what the postgres log is for.* If the latest xlog entries don't \nmake it to disk, they won't be replayed; if they didn't make it to \ndisk, the transaction would not have been reported as commited. An \napplication that understands filesystem semantics can guarantee data \nintegrity without metadata journaling.\n\n>> The bottom line is that the only reason you need a metadata journalling \n>> filesystem is to save the fsck time when you come up. On a little \n>> partition like xlog, that's not an issue.\n>\n>fsck isn't only about time to fix. fsck is needed, because the file system\n>is broken. \n\nfsck is needed to reconcile the metadata with the on-disk allocations. \nTo do that, it reads all the inodes and their corresponding directory \nentries. The time to do that is proportional to the size of the \nfilesystem, hence the comment about time. fsck is not needed \"because \nthe filesystem is broken\", it's needed because the filesystem is marked \ndirty. \n\nMike Stone\n",
"msg_date": "Tue, 15 Aug 2006 15:02:56 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:\n> On Tue, Aug 15, 2006 at 02:33:27PM -0400, [email protected] wrote:\n> >On Tue, Aug 15, 2006 at 01:26:46PM -0400, Michael Stone wrote:\n> >>On Tue, Aug 15, 2006 at 11:29:26AM -0500, Jim C. Nasby wrote:\n> >>>Are 'we' sure that such a setup can't lose any data?\n> >>Yes. If you check the archives, you can even find the last time this was \n> >>discussed...\n> >\n> >I looked last night (coincidence actually) and didn't find proof that\n> >you cannot lose data.\n> \n> You aren't going to find proof, any more than you'll find proof that you \n> won't lose data if you do lose a journalling fs. (Because there isn't \n> any.) Unfortunately, many people misunderstand the what a metadata \n> journal does for you, and overstate its importance in this type of \n> application.\n> \n> >How do you deal with the file system structure being updated before the\n> >data blocks are (re-)written?\n> \n> *That's what the postgres log is for.* If the latest xlog entries don't \n> make it to disk, they won't be replayed; if they didn't make it to \n> disk, the transaction would not have been reported as commited. An \n> application that understands filesystem semantics can guarantee data \n> integrity without metadata journaling.\n \nSo what causes files to get 'lost' and get stuck in lost+found?\nAFAIK that's because the file was written before the metadata. Now, if\nfsync'ing a file also ensures that all the metadata is written, then\nwe're probably fine... if not, then we're at risk every time we create a\nnew file (every WAL segment if archiving is on, and every time a\nrelation passes a 1GB boundary).\n\nFWIW, the way that FreeBSD gets around the need to fsck a dirty\nfilesystem before use without using a journal is to ensure that metadate\noperations are always on the drive before the actual data is written.\nThere's still a need to fsck a dirty filesystem, but it can now be done\nin the background, with the filesystem mounted and in use.\n\n> >>The bottom line is that the only reason you need a metadata journalling \n> >>filesystem is to save the fsck time when you come up. On a little \n> >>partition like xlog, that's not an issue.\n> >\n> >fsck isn't only about time to fix. fsck is needed, because the file system\n> >is broken. \n> \n> fsck is needed to reconcile the metadata with the on-disk allocations. \n> To do that, it reads all the inodes and their corresponding directory \n> entries. The time to do that is proportional to the size of the \n> filesystem, hence the comment about time. fsck is not needed \"because \n> the filesystem is broken\", it's needed because the filesystem is marked \n> dirty. \n> \n> Mike Stone\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 14:15:05 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 03:02:56PM -0400, Michael Stone wrote:\n> On Tue, Aug 15, 2006 at 02:33:27PM -0400, [email protected] wrote:\n> >>>Are 'we' sure that such a setup can't lose any data?\n> >>Yes. If you check the archives, you can even find the last time this was \n> >>discussed...\n> >I looked last night (coincidence actually) and didn't find proof that\n> >you cannot lose data.\n> You aren't going to find proof, any more than you'll find proof that you \n> won't lose data if you do lose a journalling fs. (Because there isn't \n> any.) Unfortunately, many people misunderstand the what a metadata \n> journal does for you, and overstate its importance in this type of \n> application.\n\nYes, many people do. :-)\n\n> >How do you deal with the file system structure being updated before the\n> >data blocks are (re-)written?\n> *That's what the postgres log is for.* If the latest xlog entries don't \n> make it to disk, they won't be replayed; if they didn't make it to \n> disk, the transaction would not have been reported as commited. An \n> application that understands filesystem semantics can guarantee data \n> integrity without metadata journaling.\n\nNo. This is not true. Updating the file system structure (inodes, indirect\nblocks) touches a separate part of the disk than the actual data. If\nthe file system structure is modified, say, to extend a file to allow\nit to contain more data, but the data itself is not written, then upon\na restore, with a system such as ext2, or ext3 with writeback, or xfs,\nit is possible that the end of the file, even the postgres log file,\nwill contain a random block of data from the disk. If this random block\nof data happens to look like a valid xlog block, it may be played back,\nand the database corrupted.\n\nIf the file system is only used for xlog data, the chance that it looks\nlike a valid block increases, would it not?\n\n> >>The bottom line is that the only reason you need a metadata journalling \n> >>filesystem is to save the fsck time when you come up. On a little \n> >>partition like xlog, that's not an issue.\n> >fsck isn't only about time to fix. fsck is needed, because the file system\n> >is broken. \n> fsck is needed to reconcile the metadata with the on-disk allocations. \n> To do that, it reads all the inodes and their corresponding directory \n> entries. The time to do that is proportional to the size of the \n> filesystem, hence the comment about time. fsck is not needed \"because \n> the filesystem is broken\", it's needed because the filesystem is marked \n> dirty. \n\nThis is also wrong. fsck is needed because the file system is broken.\n\nIt takes time, because it doesn't have a journal to help it, therefore it\nmust look through the entire file system and guess what the problems are.\nThere are classes of problems such as I describe above, for which fsck\n*cannot* guess how to solve the problem. There is not enough information\navailable for it to deduce that anything is wrong at all.\n\nThe probability is low, for sure - but then, the chance of a file system\nfailure is already low.\n\nBetting on ext2 + postgresql xlog has not been confirmed to me as reliable.\n\nTelling me that journalling is misunderstood doesn't prove to me that you\nunderstand it.\n\nI don't mean to be offensive, but I won't accept what you say, as it does\nnot make sense with my understanding of how file systems work. :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . 
_ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 15 Aug 2006 15:39:51 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:\n> So what causes files to get 'lost' and get stuck in lost+found?\n> AFAIK that's because the file was written before the metadata. Now, if\n> fsync'ing a file also ensures that all the metadata is written, then\n> we're probably fine... if not, then we're at risk every time we create a\n> new file (every WAL segment if archiving is on, and every time a\n> relation passes a 1GB boundary).\n\nOnly if fsync ensures that the data written to disk is ordered, which as\nfar as I know, is not done for ext2. Dirty blocks are written in whatever\norder is fastest for them to be written, or sequential order, or some\norder that isn't based on examining the metadata.\n\nIf my understanding is correct - and I've seen nothing yet to say that\nit isn't - ext2 is not safe, postgresql xlog or not, fsck or not. It\nis safer than no postgresql xlog - but there exists windows, however\nsmall, where the file system can be corrupted.\n\nThe need for fsck is due to this problem. If fsck needs to do anything\nat all, other than replay a journal, the file system is broken.\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 15 Aug 2006 15:42:59 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "[email protected] writes:\n> I've been worrying about this myself, and my current conclusion is that\n> ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which\n> could lead to the need to trash the xlog.\n\n> Even ext3 in writeback mode allows for the indirect blocks to be updated\n> without the data underneath, allowing for blocks to point to random data,\n> or worse, previous apparently sane data (especially if the data is from\n> a drive only used for xlog - the chance is high that a block might look\n> partially valid?).\n\nAt least for xlog, this worrying is misguided, because we zero and fsync\na WAL file before we ever put any valuable data into it. Unless the\nfilesystem is lying through its teeth about having done an fsync, there\nshould be no metadata changes happening for an active WAL file (other\nthan mtime of course).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Aug 2006 16:05:17 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and "
},
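Tom's point about pre-allocation is the key to why ext2 can be workable for pg_xlog, so a concrete illustration may help. The following is a minimal C sketch of the general pattern he describes (it is not the actual PostgreSQL xlog code; the file name, segment size, and block size are assumptions made up for this example):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define SEG_SIZE (16 * 1024 * 1024)   /* assumed segment size */
    #define BLOCK    8192                 /* assumed block size   */

    int main(void)
    {
        char block[BLOCK];
        memset(block, 0, sizeof(block));

        int fd = open("fake_xlog_segment", O_RDWR | O_CREAT, 0600);
        if (fd < 0) { perror("open"); return 1; }

        /* Zero-fill the whole segment so every block is allocated up front. */
        for (off_t off = 0; off < SEG_SIZE; off += BLOCK)
            if (write(fd, block, BLOCK) != BLOCK) { perror("write"); return 1; }

        /* fsync() forces out both the data and the metadata (size, block
           map); after this point the file's structure should not change. */
        if (fsync(fd) != 0) { perror("fsync"); return 1; }

        /* Later "log" writes only overwrite blocks that already exist. */
        if (pwrite(fd, block, BLOCK, 0) != BLOCK) { perror("pwrite"); return 1; }
        if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

        close(fd);
        return 0;
    }

If the zero-fill and the initial fsync() succeed, later overwrites never extend the file, which is why the metadata-ordering worries upthread largely do not apply to an active xlog segment.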
{
"msg_contents": "On Tue, Aug 15, 2006 at 04:05:17PM -0400, Tom Lane wrote:\n> [email protected] writes:\n> > I've been worrying about this myself, and my current conclusion is that\n> > ext2 is bad because: a) fsck, and b) data can be lost or corrupted, which\n> > could lead to the need to trash the xlog.\n> > Even ext3 in writeback mode allows for the indirect blocks to be updated\n> > without the data underneath, allowing for blocks to point to random data,\n> > or worse, previous apparently sane data (especially if the data is from\n> > a drive only used for xlog - the chance is high that a block might look\n> > partially valid?).\n> At least for xlog, this worrying is misguided, because we zero and fsync\n> a WAL file before we ever put any valuable data into it. Unless the\n> filesystem is lying through its teeth about having done an fsync, there\n> should be no metadata changes happening for an active WAL file (other\n> than mtime of course).\n\nHmmm... I may have missed a post about this in the archive.\n\nWAL file is never appended - only re-written?\n\nIf so, then I'm wrong, and ext2 is fine. The requirement is that no\nfile system structures change as a result of any writes that\nPostgreSQL does. If no file system structures change, then I take\neverything back as uninformed.\n\nPlease confirm whichever. :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 15 Aug 2006 16:29:48 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "[email protected] writes:\n> WAL file is never appended - only re-written?\n\n> If so, then I'm wrong, and ext2 is fine. The requirement is that no\n> file system structures change as a result of any writes that\n> PostgreSQL does. If no file system structures change, then I take\n> everything back as uninformed.\n\nThat risk certainly exists in the general data directory, but AFAIK\nit's not a problem for pg_xlog.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Aug 2006 16:40:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and "
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 02:15:05PM -0500, Jim C. Nasby wrote:\n>Now, if\n>fsync'ing a file also ensures that all the metadata is written, then\n>we're probably fine... \n\n...and it does. Unclean shutdowns cause problems in general because \nfilesystems operate asynchronously. postgres (and other similar \nprograms) go to great lengths to make sure that critical operations are \nperformed synchronously. If the program *doesn't* do that, metadata \njournaling isn't a magic wand which will guarantee data integrity--it \nwon't. If the program *does* do that, all the metadata journaling adds \nis the ability to skip fsck and start up faster.\n\nMike Stone\n",
"msg_date": "Tue, 15 Aug 2006 16:53:03 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 03:39:51PM -0400, [email protected] wrote:\n>No. This is not true. Updating the file system structure (inodes, indirect\n>blocks) touches a separate part of the disk than the actual data. If\n>the file system structure is modified, say, to extend a file to allow\n>it to contain more data, but the data itself is not written, then upon\n>a restore, with a system such as ext2, or ext3 with writeback, or xfs,\n>it is possible that the end of the file, even the postgres log file,\n>will contain a random block of data from the disk. If this random block\n>of data happens to look like a valid xlog block, it may be played back,\n>and the database corrupted.\n\nyou're conflating a whole lot of different issues here. You're ignoring \nthe fact that postgres preallocates the xlog segment, you're ignoring \nthe fact that you can sync a directory entry, you're ignoring the fact \nthat syncing some metadata (such as atime) doesn't matter (only the \nblock allocation is important in this case, and the blocks are \npre-allocated).\n\n>This is also wrong. fsck is needed because the file system is broken.\n\nnope, the file system *may* be broken. the dirty flag simply indicates \nthat the filesystem needs to be checked to find out whether or not it is \nbroken.\n\n>I don't mean to be offensive, but I won't accept what you say, as it does\n>not make sense with my understanding of how file systems work. :-)\n\n<shrug> I'm not getting paid to convince you of anything.\n\nMike Stone\n",
"msg_date": "Tue, 15 Aug 2006 16:58:59 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 04:58:59PM -0400, Michael Stone wrote:\n> On Tue, Aug 15, 2006 at 03:39:51PM -0400, [email protected] wrote:\n> >No. This is not true. Updating the file system structure (inodes, indirect\n> >blocks) touches a separate part of the disk than the actual data. If\n> >the file system structure is modified, say, to extend a file to allow\n> >it to contain more data, but the data itself is not written, then upon\n> >a restore, with a system such as ext2, or ext3 with writeback, or xfs,\n> >it is possible that the end of the file, even the postgres log file,\n> >will contain a random block of data from the disk. If this random block\n> >of data happens to look like a valid xlog block, it may be played back,\n> >and the database corrupted.\n> you're conflating a whole lot of different issues here. You're ignoring \n> the fact that postgres preallocates the xlog segment, you're ignoring \n> the fact that you can sync a directory entry, you're ignoring the fact \n> that syncing some metadata (such as atime) doesn't matter (only the \n> block allocation is important in this case, and the blocks are \n> pre-allocated).\n\nYes, no, no, no. :-)\n\nI didn't know that the xlog segment only uses pre-allocated space. I\nignore mtime/atime as they don't count as file system structure\nchanges to me. It's updating a field in place. No change to the structure.\n\nWith the pre-allocation knowledge, I agree with you. Not sure how I\nmissed that in my reviewing of the archives... I did know it\npre-allocated once upon a time... Hmm....\n\n> >This is also wrong. fsck is needed because the file system is broken.\n> nope, the file system *may* be broken. the dirty flag simply indicates \n> that the filesystem needs to be checked to find out whether or not it is \n> broken.\n\nAh, but if we knew it wasn't broken, then fsck wouldn't be needed, now\nwould it? So we assume that it is broken. A little bit of a game, but\nit is important to me. If I assumed the file system was not broken, I\nwouldn't run fsck. I run fsck, because I assume it may be broken. If\nbroken, it indicates potential corruption.\n\nThe difference for me, is that if you are correct, that the xlog is\nsafe, than for a disk that only uses xlog, fsck is not ever necessary,\neven after a system crash. If fsck is necessary, then there is potential\nfor a problem.\n\nWith the pre-allocation knowledge, I'm tempted to agree with you that\nfsck is not ever necessary for partitions that only hold a properly\npre-allocated xlog.\n\n> >I don't mean to be offensive, but I won't accept what you say, as it does\n> >not make sense with my understanding of how file systems work. :-)\n> <shrug> I'm not getting paid to convince you of anything.\n\nJust getting you to back up your claim a bit... As I said, no intent\nto offend. I learned from it.\n\nThanks,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Tue, 15 Aug 2006 17:38:43 -0400",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 05:38:43PM -0400, [email protected] wrote:\n> I didn't know that the xlog segment only uses pre-allocated space. I\n> ignore mtime/atime as they don't count as file system structure\n> changes to me. It's updating a field in place. No change to the structure.\n> \n> With the pre-allocation knowledge, I agree with you. Not sure how I\n> missed that in my reviewing of the archives... I did know it\n> pre-allocated once upon a time... Hmm....\n\nThis is only valid if the pre-allocation is also fsync'd *and* fsync\nensures that both the metadata and file data are on disk. Anyone\nactually checked that? :)\n\nBTW, I did see some anecdotal evidence on one of the lists a while ago.\nA PostgreSQL DBA had suggested doing a 'pull the power cord' test to the\nother DBAs (all of which were responsible for different RDBMSes,\nincluding a bunch of well known names). They all thought he was off his\nrocker. Not too long after that, an unplanned power outage did occur,\nand PostgreSQL was the only RDBMS that recovered every single database\nwithout intervention.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 17:20:25 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:\n> This is only valid if the pre-allocation is also fsync'd *and* fsync\n> ensures that both the metadata and file data are on disk. Anyone\n> actually checked that? :)\n\nfsync() does that, yes. fdatasync() (if it exists), OTOH, doesn't sync the\nmetadata.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n",
"msg_date": "Wed, 16 Aug 2006 00:23:23 +0200",
"msg_from": "\"Steinar H. Gunderson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
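For readers who have not run into the distinction before, a rough side-by-side sketch in C may help (an illustration of the POSIX semantics only; the file name and record contents are made up):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Append a record and make it durable. fsync() must flush file data
       and metadata (size, block map, and so on); fdatasync() may skip
       metadata that is not needed to read the data back, so it can be
       cheaper when the file's block layout is already stable. */
    static int durable_append(int fd, const char *rec, size_t len, int data_only)
    {
        if (write(fd, rec, len) != (ssize_t) len)
            return -1;
        return data_only ? fdatasync(fd) : fsync(fd);
    }

    int main(void)
    {
        int fd = open("demo.log", O_WRONLY | O_CREAT | O_APPEND, 0600);
        if (fd < 0) { perror("open"); return 1; }

        if (durable_append(fd, "first\n", 6, 0) != 0)  perror("fsync path");
        if (durable_append(fd, "second\n", 7, 1) != 0) perror("fdatasync path");

        close(fd);
        return 0;
    }

Note that in this append case the file is growing, so even fdatasync() has to flush the size change; pre-allocating the segment, as discussed above, is what leaves neither call with new metadata to write.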
{
"msg_contents": "On Tue, 15 Aug 2006 [email protected] wrote:\n>>> This is also wrong. fsck is needed because the file system is broken.\n>> nope, the file system *may* be broken. the dirty flag simply indicates\n>> that the filesystem needs to be checked to find out whether or not it is\n>> broken.\n>\n> Ah, but if we knew it wasn't broken, then fsck wouldn't be needed, now\n> would it? So we assume that it is broken. A little bit of a game, but\n> it is important to me. If I assumed the file system was not broken, I\n> wouldn't run fsck. I run fsck, because I assume it may be broken. If\n> broken, it indicates potential corruption.\n\nnote tha the ext3, reiserfs, jfs, and xfs developers (at least) consider \nfsck nessasary even for journaling fileysstems. they just let you get away \nwithout it being mandatory after a unclean shutdown.\n\nDavid Lang\n",
"msg_date": "Tue, 15 Aug 2006 17:07:17 -0700 (PDT)",
"msg_from": "David Lang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "\"Steinar H. Gunderson\" <[email protected]> writes:\n> On Tue, Aug 15, 2006 at 05:20:25PM -0500, Jim C. Nasby wrote:\n>> This is only valid if the pre-allocation is also fsync'd *and* fsync\n>> ensures that both the metadata and file data are on disk. Anyone\n>> actually checked that? :)\n\n> fsync() does that, yes. fdatasync() (if it exists), OTOH, doesn't sync the\n> metadata.\n\nWell, the POSIX spec says that fsync should do that ;-)\n\nMy guess is that most/all kernel filesystem layers do indeed try to sync\neverything that the spec says they should. The Achilles' heel of the\nwhole business is disk drives that lie about write completion. The\nkernel is just as vulnerable to that as any application ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Aug 2006 23:00:54 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and "
},
{
"msg_contents": "Hi, Jim,\n\nJim C. Nasby wrote:\n\n> Well, if the controller is caching with a BBU, I'm not sure that order\n> matters anymore, because the controller should be able to re-order at\n> will. Theoretically. :) But this is why having some actual data posted\n> somewhere would be great.\n\nWell, actually, the controller should not reorder over write barriers.\n\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 16 Aug 2006 10:33:39 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Everyone,\n\nI wanted to follow-up on bonnie results for the internal RAID1 which is\nconnected to the SmartArray 6i. I believe this is the problem, but I am\nnot good at interepting the results. Here's an sample of three runs:\n\nscsi disc\narray ,16G,47983,67,65492,20,37214,6,73785,87,89787,6,578.2,0,16,+++++,\n+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\nscsi disc\narray ,16G,54634,75,67793,21,36835,6,74190,88,89314,6,579.9,0,16,+++++,\n+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\nscsi disc\narray ,16G,55056,76,66108,20,36859,6,74108,87,89559,6,585.0,0,16,+++++,\n+++,+++++,+++,+++++,+++,+++++,+++,+\n\nThis was run on the internal RAID1 on the outer portion of the discs\nformatted at ext2.\n\nThanks.\n\nSteve\n\nOn Thu, 2006-08-10 at 10:35 -0500, Scott Marlowe wrote:\n> On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:\n> > Mike,\n> > \n> > On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:\n> > \n> > > On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n> > >> I tried as you suggested and my performance dropped by 50%. I went from\n> > >> a 32 TPS to 16. Oh well.\n> > > \n> > > If you put data & xlog on the same array, put them on seperate\n> > > partitions, probably formatted differently (ext2 on xlog).\n> > \n> > If he's doing the same thing on both systems (Sun and HP) and the HP\n> > performance is dramatically worse despite using more disks and having faster\n> > CPUs and more RAM, ISTM the problem isn't the configuration.\n> > \n> > Add to this the fact that the Sun machine is CPU bound while the HP is I/O\n> > wait bound and I think the problem is the disk hardware or the driver\n> > therein.\n> \n> I agree. The problem here looks to be the RAID controller.\n> \n> Steve, got access to a different RAID controller to test with?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n",
"msg_date": "Wed, 16 Aug 2006 19:10:33 -0700",
"msg_from": "Steve Poe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Steve,\n\nIf this is an internal RAID1 on two disks, it looks great.\n\nBased on the random seeks though (578 seeks/sec), it looks like maybe it's 6\ndisks in a RAID10?\n\n- Luke\n\n\nOn 8/16/06 7:10 PM, \"Steve Poe\" <[email protected]> wrote:\n\n> Everyone,\n> \n> I wanted to follow-up on bonnie results for the internal RAID1 which is\n> connected to the SmartArray 6i. I believe this is the problem, but I am\n> not good at interepting the results. Here's an sample of three runs:\n> \n> scsi disc\n> array ,16G,47983,67,65492,20,37214,6,73785,87,89787,6,578.2,0,16,+++++,\n> +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> scsi disc\n> array ,16G,54634,75,67793,21,36835,6,74190,88,89314,6,579.9,0,16,+++++,\n> +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> scsi disc\n> array ,16G,55056,76,66108,20,36859,6,74108,87,89559,6,585.0,0,16,+++++,\n> +++,+++++,+++,+++++,+++,+++++,+++,+\n> \n> This was run on the internal RAID1 on the outer portion of the discs\n> formatted at ext2.\n> \n> Thanks.\n> \n> Steve\n> \n> On Thu, 2006-08-10 at 10:35 -0500, Scott Marlowe wrote:\n>> On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:\n>>> Mike,\n>>> \n>>> On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:\n>>> \n>>>> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n>>>>> I tried as you suggested and my performance dropped by 50%. I went from\n>>>>> a 32 TPS to 16. Oh well.\n>>>> \n>>>> If you put data & xlog on the same array, put them on seperate\n>>>> partitions, probably formatted differently (ext2 on xlog).\n>>> \n>>> If he's doing the same thing on both systems (Sun and HP) and the HP\n>>> performance is dramatically worse despite using more disks and having faster\n>>> CPUs and more RAM, ISTM the problem isn't the configuration.\n>>> \n>>> Add to this the fact that the Sun machine is CPU bound while the HP is I/O\n>>> wait bound and I think the problem is the disk hardware or the driver\n>>> therein.\n>> \n>> I agree. The problem here looks to be the RAID controller.\n>> \n>> Steve, got access to a different RAID controller to test with?\n>> \n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that your\n>> message can get through to the mailing list cleanly\n> \n> \n\n\n",
"msg_date": "Fri, 18 Aug 2006 07:37:32 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "That's about what I was getting for a 2 disk RAID 0 setup on a PE 2950.\nHere's bonnie++ numbers for the RAID10x4 and RAID0x2, unfortunately I\nonly have the 1.93 numbers since this was before I got the advice to run\nwith the earlier version of bonnie and larger file sizes, so I don't\nknow how meaningful they are.\n\nRAID 10x4\nbash-2.05b$ bonnie++ -d bonnie -s 1000:8k\nVersion 1.93c ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t1000M 585 99 21705 4 28560 9 1004 99 812997 98 5436\n454\nLatency 14181us 81364us 50256us 57720us 1671us\n1059ms\nVersion 1.93c ------Sequential Create------ --------Random\nCreate--------\nc -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 4712 10 +++++ +++ +++++ +++ 4674 10 +++++ +++\n+++++ +++\nLatency 807ms 21us 36us 804ms 110us\n36us\n1.93c,1.93c,\n,1,1155207445,1000M,,585,99,21705,4,28560,9,1004,99,812997,98,5436,454,1\n6,,,,,4712,10,+++++,+++,+++++,+++,4674,10,+++++,+++,+++++,+++,14181us,81\n364us,50256us,57720us,1671us,1059ms,807ms,21us,36us,804ms,110us,36us\nbash-2.05b$\n\nRAID 0x2\nbash-2.05b$ bonnie++ -d bonnie -s 1000:8k\nVersion 1.93c ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t1000M 575 99 131621 25 104178 26 1004 99 816928 99 6233\n521\nLatency 14436us 26663us 47478us 54796us 1487us\n38924us\nVersion 1.93c ------Sequential Create------ --------Random\nCreate--------\n-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 4935 10 +++++ +++ +++++ +++ 5198 11 +++++ +++\n+++++ +++\nLatency 738ms 32us 43us 777ms 24us\n30us\n1.93c,1.93c,beast.corp.lumeta.com,1,1155210203,1000M,,575,99,131621,25,1\n04178,26,1004,99,816928,99,6233,521,16,,,,,4935,10,+++++,+++,+++++,+++,5\n198,11,+++++,+++,+++++,+++,14436us,26663us,47478us,54796us,1487us,38924u\ns,738ms,32us,43us,777ms,24us,30us\n\nA RAID 5 configuration seems to outperform this on the PE 2950 though\n(at least in terms of raw read/write perf)\n\nIf anyone's interested in some more detailed tests of the 2950, I might\nbe able to reconfigure the raid for some tests next week before I start\nsetting up the box for long term use, so I'm open to suggestions. See\nearlier posts in this thread for details about the hardware.\n\nThanks,\n\nBucky\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Luke\nLonergan\nSent: Friday, August 18, 2006 10:38 AM\nTo: [email protected]; Scott Marlowe\nCc: Michael Stone; [email protected]\nSubject: Re: [PERFORM] Postgresql Performance on an HP DL385 and\n\nSteve,\n\nIf this is an internal RAID1 on two disks, it looks great.\n\nBased on the random seeks though (578 seeks/sec), it looks like maybe\nit's 6\ndisks in a RAID10?\n\n- Luke\n\n\nOn 8/16/06 7:10 PM, \"Steve Poe\" <[email protected]> wrote:\n\n> Everyone,\n> \n> I wanted to follow-up on bonnie results for the internal RAID1 which\nis\n> connected to the SmartArray 6i. I believe this is the problem, but I\nam\n> not good at interepting the results. 
Here's an sample of three runs:\n> \n> scsi disc\n> array\n,16G,47983,67,65492,20,37214,6,73785,87,89787,6,578.2,0,16,+++++,\n> +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> scsi disc\n> array\n,16G,54634,75,67793,21,36835,6,74190,88,89314,6,579.9,0,16,+++++,\n> +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> scsi disc\n> array\n,16G,55056,76,66108,20,36859,6,74108,87,89559,6,585.0,0,16,+++++,\n> +++,+++++,+++,+++++,+++,+++++,+++,+\n> \n> This was run on the internal RAID1 on the outer portion of the discs\n> formatted at ext2.\n> \n> Thanks.\n> \n> Steve\n> \n> On Thu, 2006-08-10 at 10:35 -0500, Scott Marlowe wrote:\n>> On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:\n>>> Mike,\n>>> \n>>> On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]>\nwrote:\n>>> \n>>>> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n>>>>> I tried as you suggested and my performance dropped by 50%. I went\nfrom\n>>>>> a 32 TPS to 16. Oh well.\n>>>> \n>>>> If you put data & xlog on the same array, put them on seperate\n>>>> partitions, probably formatted differently (ext2 on xlog).\n>>> \n>>> If he's doing the same thing on both systems (Sun and HP) and the HP\n>>> performance is dramatically worse despite using more disks and\nhaving faster\n>>> CPUs and more RAM, ISTM the problem isn't the configuration.\n>>> \n>>> Add to this the fact that the Sun machine is CPU bound while the HP\nis I/O\n>>> wait bound and I think the problem is the disk hardware or the\ndriver\n>>> therein.\n>> \n>> I agree. The problem here looks to be the RAID controller.\n>> \n>> Steve, got access to a different RAID controller to test with?\n>> \n>> ---------------------------(end of\nbroadcast)---------------------------\n>> TIP 1: if posting/reading through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that\nyour\n>> message can get through to the mailing list cleanly\n> \n> \n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n",
"msg_date": "Fri, 18 Aug 2006 11:26:02 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nNope. it is only a RAID1 for the 2 internal discs connected to the\nSmartArray 6i. This is where I *had* the pg_xlog located when the\nperformance was very poor. Also, I just found out the default stripe size is\n128k. Would this be a problem for pg_xlog?\n\nThe 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n\nSteve\n\nOn 8/18/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> If this is an internal RAID1 on two disks, it looks great.\n>\n> Based on the random seeks though (578 seeks/sec), it looks like maybe it's\n> 6\n> disks in a RAID10?\n>\n> - Luke\n>\n>\n> On 8/16/06 7:10 PM, \"Steve Poe\" <[email protected]> wrote:\n>\n> > Everyone,\n> >\n> > I wanted to follow-up on bonnie results for the internal RAID1 which is\n> > connected to the SmartArray 6i. I believe this is the problem, but I am\n> > not good at interepting the results. Here's an sample of three runs:\n> >\n> > scsi disc\n> > array ,16G,47983,67,65492,20,37214,6,73785,87,89787,6,578.2,0,16,+++++,\n> > +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> > scsi disc\n> > array ,16G,54634,75,67793,21,36835,6,74190,88,89314,6,579.9,0,16,+++++,\n> > +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> > scsi disc\n> > array ,16G,55056,76,66108,20,36859,6,74108,87,89559,6,585.0,0,16,+++++,\n> > +++,+++++,+++,+++++,+++,+++++,+++,+\n> >\n> > This was run on the internal RAID1 on the outer portion of the discs\n> > formatted at ext2.\n> >\n> > Thanks.\n> >\n> > Steve\n> >\n> > On Thu, 2006-08-10 at 10:35 -0500, Scott Marlowe wrote:\n> >> On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:\n> >>> Mike,\n> >>>\n> >>> On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]> wrote:\n> >>>\n> >>>> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:\n> >>>>> I tried as you suggested and my performance dropped by 50%. I went\n> from\n> >>>>> a 32 TPS to 16. Oh well.\n> >>>>\n> >>>> If you put data & xlog on the same array, put them on seperate\n> >>>> partitions, probably formatted differently (ext2 on xlog).\n> >>>\n> >>> If he's doing the same thing on both systems (Sun and HP) and the HP\n> >>> performance is dramatically worse despite using more disks and having\n> faster\n> >>> CPUs and more RAM, ISTM the problem isn't the configuration.\n> >>>\n> >>> Add to this the fact that the Sun machine is CPU bound while the HP is\n> I/O\n> >>> wait bound and I think the problem is the disk hardware or the driver\n> >>> therein.\n> >>\n> >> I agree. The problem here looks to be the RAID controller.\n> >>\n> >> Steve, got access to a different RAID controller to test with?\n> >>\n> >> ---------------------------(end of\n> broadcast)---------------------------\n> >> TIP 1: if posting/reading through Usenet, please send an appropriate\n> >> subscribe-nomail command to [email protected] so that\n> your\n> >> message can get through to the mailing list cleanly\n> >\n> >\n>\n>\n>\n\nLuke,Nope. it is only a RAID1 for the 2 internal discs connected to the SmartArray 6i. This is where I *had* the pg_xlog located when the performance was very poor. Also, I just found out the default stripe size is 128k. 
Would this be a problem for pg_xlog?\nThe 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.SteveOn 8/18/06, Luke Lonergan <\[email protected]> wrote:Steve,If this is an internal RAID1 on two disks, it looks great.\nBased on the random seeks though (578 seeks/sec), it looks like maybe it's 6disks in a RAID10?- LukeOn 8/16/06 7:10 PM, \"Steve Poe\" <[email protected]\n> wrote:> Everyone,>> I wanted to follow-up on bonnie results for the internal RAID1 which is> connected to the SmartArray 6i. I believe this is the problem, but I am> not good at interepting the results. Here's an sample of three runs:\n>> scsi disc> array ,16G,47983,67,65492,20,37214,6,73785,87,89787,6,578.2,0,16,+++++,> +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++> scsi disc> array ,16G,54634,75,67793,21,36835,6,74190,88,89314,6,\n579.9,0,16,+++++,> +++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++> scsi disc> array ,16G,55056,76,66108,20,36859,6,74108,87,89559,6,585.0,0,16,+++++,> +++,+++++,+++,+++++,+++,+++++,+++,+\n>> This was run on the internal RAID1 on the outer portion of the discs> formatted at ext2.>> Thanks.>> Steve>> On Thu, 2006-08-10 at 10:35 -0500, Scott Marlowe wrote:\n>> On Thu, 2006-08-10 at 10:15, Luke Lonergan wrote:>>> Mike,>>>>>> On 8/10/06 4:09 AM, \"Michael Stone\" <[email protected]\n> wrote:>>>>>>> On Wed, Aug 09, 2006 at 08:29:13PM -0700, Steve Poe wrote:>>>>> I tried as you suggested and my performance dropped by 50%. I went from>>>>> a 32 TPS to 16. Oh well.\n>>>>>>>> If you put data & xlog on the same array, put them on seperate>>>> partitions, probably formatted differently (ext2 on xlog).>>>>>> If he's doing the same thing on both systems (Sun and HP) and the HP\n>>> performance is dramatically worse despite using more disks and having faster>>> CPUs and more RAM, ISTM the problem isn't the configuration.>>>>>> Add to this the fact that the Sun machine is CPU bound while the HP is I/O\n>>> wait bound and I think the problem is the disk hardware or the driver>>> therein.>>>> I agree. The problem here looks to be the RAID controller.>>>> Steve, got access to a different RAID controller to test with?\n>>>> ---------------------------(end of broadcast)--------------------------->> TIP 1: if posting/reading through Usenet, please send an appropriate>> subscribe-nomail command to \[email protected] so that your>> message can get through to the mailing list cleanly>>",
"msg_date": "Fri, 18 Aug 2006 10:39:53 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Steve,\n\nOn 8/18/06 10:39 AM, \"Steve Poe\" <[email protected]> wrote:\n\n> Nope. it is only a RAID1 for the 2 internal discs connected to the SmartArray\n> 6i. This is where I *had* the pg_xlog located when the performance was very\n> poor. Also, I just found out the default stripe size is 128k. Would this be a\n> problem for pg_xlog?\n\nISTM that the main performance issue for xlog is going to be the rate at\nwhich fdatasync operations complete, and the stripe size shouldn't hurt\nthat.\n\nWhat are your postgresql.conf settings for the xlog: how many logfiles,\nsync_method, etc?\n\n> The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n\nInteresting - the seek rate is very good for two drives, are they 15K RPM?\n\n- Luke\n\n\n",
"msg_date": "Fri, 18 Aug 2006 11:09:27 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nISTM that the main performance issue for xlog is going to be the rate at\n> which fdatasync operations complete, and the stripe size shouldn't hurt\n> that.\n\n\nI thought so. However, I've also tried running the PGDATA off of the RAID1\nas a test and it is poor.\n\n\n\nWhat are your postgresql.conf settings for the xlog: how many logfiles,\n> sync_method, etc?\n\n\nwal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or\nopen_datasync\n# - Checkpoints -\n\ncheckpoint_segments = 14 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5\n\nWhat stumps me is I use the same settings on a Sun box (dual Opteron 4GB w/\nLSI MegaRAID 128M) with the same data. This is on pg 7.4.13.\n\n\n\n> The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n>\n> Interesting - the seek rate is very good for two drives, are they 15K RPM?\n\n\nNope. 10K. RPM.\n\n\nHP's recommendation for testing is to connect the RAID1 to the second\nchannel off of the SmartArray 642 adapter since they use the same driver,\nand, according to HP, I should not have to rebuilt the RAID1.\n\nI have to send the new server to the hospital next week, so I have very\nlittle testing time left.\n\nSteve\n\nLuke, ISTM that the main performance issue for xlog is going to be the rate at\nwhich fdatasync operations complete, and the stripe size shouldn't hurtthat.I thought so. However, I've also tried running the PGDATA off of the RAID1 as a test and it is poor. \nWhat are your postgresql.conf settings for the xlog: how many logfiles,sync_method, etc?\nwal_sync_method = fsync # the default varies across platforms: # fsync, fdatasync, open_sync, or open_datasync# - Checkpoints -checkpoint_segments = 14 # in logfile segments, min 1, 16MB each\ncheckpoint_timeout = 300 # range 30-3600, in seconds#checkpoint_warning = 30 # 0 is off, in seconds#commit_delay = 0 # range 0-100000, in microseconds#commit_siblings = 5\nWhat stumps me is I use the same settings on a Sun box (dual Opteron 4GB w/ LSI MegaRAID 128M) with the same data. This is on pg 7.4.13. \n> The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.Interesting - the seek rate is very good for two drives, are they 15K RPM?Nope. 10K. RPM. HP's recommendation for testing is to connect the RAID1 to the second channel off of the SmartArray 642 adapter since they use the same driver, and, according to HP, I should not have to rebuilt the RAID1. \nI have to send the new server to the hospital next week, so I have very little testing time left. Steve",
"msg_date": "Fri, 18 Aug 2006 12:00:02 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Steve,\n\nOne thing here is that ³wal_sync_method² should be set to ³fdatasync² and\nnot ³fsync². In fact, the default is fdatasync, but because you have\nuncommented the standard line in the file, it is changed to ³fsync², which\nis a lot slower. This is a bug in the file defaults.\n\nThat could speed things up quite a bit on the xlog.\n\nWRT the difference between the two systems, I¹m kind of stumped.\n\n- Luke\n\n\nOn 8/18/06 12:00 PM, \"Steve Poe\" <[email protected]> wrote:\n\n> \n> Luke, \n> \n>> ISTM that the main performance issue for xlog is going to be the rate at\n>> which fdatasync operations complete, and the stripe size shouldn't hurt\n>> that.\n> \n> I thought so. However, I've also tried running the PGDATA off of the RAID1 as\n> a test and it is poor.\n> \n> \n> \n>> What are your postgresql.conf settings for the xlog: how many logfiles,\n>> sync_method, etc?\n> \n> wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or\n> open_datasync\n> # - Checkpoints -\n> \n> checkpoint_segments = 14 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 300 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # 0 is off, in seconds\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5\n> \n> What stumps me is I use the same settings on a Sun box (dual Opteron 4GB w/\n> LSI MegaRAID 128M) with the same data. This is on pg 7.4.13.\n> \n> \n> \n>>> > The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n>> \n>> Interesting - the seek rate is very good for two drives, are they 15K RPM?\n> \n> Nope. 10K. RPM. \n> \n> \n> HP's recommendation for testing is to connect the RAID1 to the second channel\n> off of the SmartArray 642 adapter since they use the same driver, and,\n> according to HP, I should not have to rebuilt the RAID1.\n> \n> I have to send the new server to the hospital next week, so I have very little\n> testing time left.\n> \n> Steve\n> \n> \n> \n> \n> \n\n\n\n\n\nRe: [PERFORM] Postgresql Performance on an HP DL385 and\n\n\nSteve,\n\nOne thing here is that “wal_sync_method” should be set to “fdatasync” and not “fsync”. In fact, the default is fdatasync, but because you have uncommented the standard line in the file, it is changed to “fsync”, which is a lot slower. This is a bug in the file defaults.\n\nThat could speed things up quite a bit on the xlog.\n\nWRT the difference between the two systems, I’m kind of stumped.\n\n- Luke\n\n\nOn 8/18/06 12:00 PM, \"Steve Poe\" <[email protected]> wrote:\n\n\nLuke, \n\nISTM that the main performance issue for xlog is going to be the rate at \nwhich fdatasync operations complete, and the stripe size shouldn't hurt\nthat.\n\nI thought so. However, I've also tried running the PGDATA off of the RAID1 as a test and it is poor.\n\n \n\nWhat are your postgresql.conf settings for the xlog: how many logfiles,\nsync_method, etc? \n\nwal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or open_datasync\n# - Checkpoints -\n\ncheckpoint_segments = 14 # in logfile segments, min 1, 16MB each \ncheckpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5\n\nWhat stumps me is I use the same settings on a Sun box (dual Opteron 4GB w/ LSI MegaRAID 128M) with the same data. 
This is on pg 7.4.13.\n\n \n\n> The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n\nInteresting - the seek rate is very good for two drives, are they 15K RPM?\n\nNope. 10K. RPM. \n\n\nHP's recommendation for testing is to connect the RAID1 to the second channel off of the SmartArray 642 adapter since they use the same driver, and, according to HP, I should not have to rebuilt the RAID1. \n\nI have to send the new server to the hospital next week, so I have very little testing time left. \n\nSteve",
"msg_date": "Fri, 18 Aug 2006 14:32:44 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
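As a rough guide to what those setting names mean at the system-call level, here is a simplified C sketch (an illustration only, not PostgreSQL source; the mapping is approximate, open_datasync/O_DSYNC is omitted, and which options are available varies by platform):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Approximate mapping from a wal_sync_method-style name to syscalls. */
    static int open_log(const char *path, const char *method)
    {
        int flags = O_RDWR | O_CREAT;
        if (strcmp(method, "open_sync") == 0)
            flags |= O_SYNC;          /* every write() is synchronous */
        return open(path, flags, 0600);
    }

    static int flush_log(int fd, const char *method)
    {
        if (strcmp(method, "fsync") == 0)
            return fsync(fd);         /* flush data and metadata */
        if (strcmp(method, "fdatasync") == 0)
            return fdatasync(fd);     /* flush data, plus metadata only if
                                         needed to retrieve the data */
        return 0;                     /* open_sync: writes are already
                                         synchronous */
    }

    int main(void)
    {
        const char *method = "fdatasync";      /* example choice */
        int fd = open_log("demo_wal", method);
        if (fd < 0) { perror("open"); return 1; }
        if (write(fd, "record\n", 7) != 7) { perror("write"); return 1; }
        if (flush_log(fd, method) != 0)    { perror("flush"); return 1; }
        close(fd);
        return 0;
    }

Whether fdatasync actually wins over fsync depends on the platform and on whether the segment is pre-allocated, so it is worth benchmarking rather than assuming.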
{
"msg_contents": "Luke,\n\nI'll try it, but you're right, it should not matter. The two systems are:\n\nHP DL385 (dual Opteron 265 I believe) 8GB of RAM, two internal RAID1 U320\n10K\n\nSun W2100z (dual Opteron 245 I believe) 4GB of RAM, 1 U320 10K drive with\nLSI MegaRAID 2X 128M driving two external 4-disc arrays U320 10K drives in a\nRAID10 configuration. Running same version of LInux (Centos 4.3 ) and same\nkernel version. No changes within the kernel for each of them. Running the\nsame *.conf files for Postgresql 7.4.13.\n\nSteve\n\nOn 8/18/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> One thing here is that \"wal_sync_method\" should be set to \"fdatasync\" and\n> not \"fsync\". In fact, the default is fdatasync, but because you have\n> uncommented the standard line in the file, it is changed to \"fsync\", which\n> is a lot slower. This is a bug in the file defaults.\n>\n> That could speed things up quite a bit on the xlog.\n>\n> WRT the difference between the two systems, I'm kind of stumped.\n>\n> - Luke\n>\n>\n> On 8/18/06 12:00 PM, \"Steve Poe\" <[email protected]> wrote:\n>\n>\n> Luke,\n>\n> ISTM that the main performance issue for xlog is going to be the rate at\n> which fdatasync operations complete, and the stripe size shouldn't hurt\n> that.\n>\n>\n> I thought so. However, I've also tried running the PGDATA off of the RAID1\n> as a test and it is poor.\n>\n>\n>\n> What are your postgresql.conf settings for the xlog: how many logfiles,\n> sync_method, etc?\n>\n>\n> wal_sync_method = fsync # the default varies across platforms:\n> # fsync, fdatasync, open_sync, or\n> open_datasync\n> # - Checkpoints -\n>\n> checkpoint_segments = 14 # in logfile segments, min 1, 16MB each\n> checkpoint_timeout = 300 # range 30-3600, in seconds\n> #checkpoint_warning = 30 # 0 is off, in seconds\n> #commit_delay = 0 # range 0-100000, in microseconds\n> #commit_siblings = 5\n>\n> What stumps me is I use the same settings on a Sun box (dual Opteron 4GB\n> w/ LSI MegaRAID 128M) with the same data. This is on pg 7.4.13.\n>\n>\n>\n> > The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n>\n> Interesting - the seek rate is very good for two drives, are they 15K RPM?\n>\n>\n> Nope. 10K. RPM.\n>\n>\n> HP's recommendation for testing is to connect the RAID1 to the second\n> channel off of the SmartArray 642 adapter since they use the same driver,\n> and, according to HP, I should not have to rebuilt the RAID1.\n>\n> I have to send the new server to the hospital next week, so I have very\n> little testing time left.\n>\n> Steve\n>\n>\n>\n>\n>\n>\n>\n\nLuke,I'll try it, but you're right, it should not matter. The two systems are:HP DL385 (dual Opteron 265 I believe) 8GB of RAM, two internal RAID1 U320 10KSun W2100z (dual Opteron 245 I believe) 4GB of RAM, 1 U320 10K drive with LSI MegaRAID 2X 128M driving two external 4-disc arrays U320 10K drives in a RAID10 configuration. Running same version of LInux (Centos \n4.3 ) and same kernel version. No changes within the kernel for each of them. Running the same *.conf files for Postgresql 7.4.13.SteveOn 8/18/06, \nLuke Lonergan <[email protected]> wrote:\n\n\nSteve,\n\nOne thing here is that \"wal_sync_method\" should be set to \"fdatasync\" and not \"fsync\". In fact, the default is fdatasync, but because you have uncommented the standard line in the file, it is changed to \"fsync\", which is a lot slower. 
This is a bug in the file defaults.\n\n\nThat could speed things up quite a bit on the xlog.\n\nWRT the difference between the two systems, I'm kind of stumped.\n\n- Luke\n\n\nOn 8/18/06 12:00 PM, \"Steve Poe\" <[email protected]> wrote:\n\n\nLuke, \n\nISTM that the main performance issue for xlog is going to be the rate at \nwhich fdatasync operations complete, and the stripe size shouldn't hurt\nthat.\n\nI thought so. However, I've also tried running the PGDATA off of the RAID1 as a test and it is poor.\n\n \n\nWhat are your postgresql.conf settings for the xlog: how many logfiles,\nsync_method, etc? \n\nwal_sync_method = fsync # the default varies across platforms:\n # fsync, fdatasync, open_sync, or open_datasync\n# - Checkpoints -\n\ncheckpoint_segments = 14 # in logfile segments, min 1, 16MB each \ncheckpoint_timeout = 300 # range 30-3600, in seconds\n#checkpoint_warning = 30 # 0 is off, in seconds\n#commit_delay = 0 # range 0-100000, in microseconds\n#commit_siblings = 5\n\nWhat stumps me is I use the same settings on a Sun box (dual Opteron 4GB w/ LSI MegaRAID 128M) with the same data. This is on pg 7.4.13.\n\n \n\n> The 6-disc RAID10 you speak of is on the SmartArray 642 RAID adapter.\n\nInteresting - the seek rate is very good for two drives, are they 15K RPM?\n\nNope. 10K. RPM. \n\n\nHP's recommendation for testing is to connect the RAID1 to the second channel off of the SmartArray 642 adapter since they use the same driver, and, according to HP, I should not have to rebuilt the RAID1. \n\nI have to send the new server to the hospital next week, so I have very little testing time left. \n\nSteve",
"msg_date": "Fri, 18 Aug 2006 15:23:11 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
}
] |
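The exchange above turns on wal_sync_method: uncommenting the sample line in postgresql.conf pins it to fsync, while leaving it commented keeps the platform default (fdatasync on Linux). Below is a minimal sketch of checking and correcting the setting; the data directory path and the local psql connection are assumptions rather than details taken from the thread.

    # Ask the running server what it is actually using (assumes a local superuser psql connection).
    psql -c "SHOW wal_sync_method;"

    # In postgresql.conf, either leave the line commented out to keep the platform default,
    # or set it explicitly:
    #   wal_sync_method = fdatasync
    # then have the postmaster pick up the change (older releases may need a full restart):
    pg_ctl -D /usr/local/pgsql/data reload

Re-running the same pgbench or dd test before and after the change is the simplest way to see whether the sync method was the bottleneck on a given box.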
[
{
"msg_contents": "Steve, \n\n> > Are any of the disks not healthy? Do you see any I/O \n> errors in dmesg?\n> \n> In my vmstat report, I it is an average per minute not \n> per-second. Also, I found that in the first minute of the \n> very first run, the HP's \"bi\"\n> value hits a high of 221184 then it tanks after that.\n\nBased on the difference in I/O wait on the two machines, there's\ndefinitely something up with the disk subsystem on the HP - try checking\nthe BIOS on the HP 642 - I bet you'll find a disk error lurking there.\n\n- Luke\n\n",
"msg_date": "Wed, 9 Aug 2006 01:24:16 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
}
] |
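The diagnosis above rests on comparing I/O wait between the two machines while the same load runs. A sketch of collecting comparable samples on each box; the sampling interval and log file names are arbitrary, and iostat assumes the sysstat package is installed on the Linux hosts.

    # One sample per minute for an hour, written to a per-host log.
    vmstat 60 60 > vmstat-$(hostname).log &

    # Per-device utilisation and wait times (Linux, sysstat package).
    iostat -x 60 60 > iostat-$(hostname).log &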
[
{
"msg_contents": "Steve,\n\nAt the end of the day it seems that you've got a support issue with the\nSmartArray RAID adapter from HP. \n\nLast I tried that I found that they don't write the cciss driver, don't\ntest it for performance on Linux and don't make any claims about it's\nperformance on Linux.\n\nThat said - can you contact them through HP tech support and report back\nto this list what you find out?\n\n- Luke\n> -----Original Message-----\n> From: Steve Poe [mailto:[email protected]] \n> Sent: Tuesday, August 08, 2006 11:33 PM\n> To: Luke Lonergan\n> Cc: Alex Turner; [email protected]\n> Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and\n> \n> Luke,\n> \n> I check dmesg one more time and I found this regarding the \n> cciss driver:\n> \n> Filesystem \"cciss/c1d0p1\": Disabling barriers, not supported \n> by the underlying device.\n> \n> Don't know if it means anything, but thought I'd mention it. \n> \n> Steve\n> \n> \n> On 8/8/06, Steve Poe <[email protected]> wrote:\n> \n> \tLuke,\n> \t\n> \tI thought so. In my test, I tried to be fair/equal \n> since my Sun box has two 4-disc arrays each on their own \n> channel. So, I just used one of them which should be a little \n> slower than the 6-disc with 192MB cache. \n> \t\n> \tIncidently, the two internal SCSI drives, which are on \n> the 6i adapter, generated a TPS of 18.\n> \t\n> \tI thought this server would impressive from notes I've \n> read in the group. This is why I thought I might be doing \n> something wrong. I stumped which way to take this. There is \n> no obvious fault but something isn't right. \n> \t\n> \t\n> \tSteve\n> \t\n> \t\n> \t\n> \tOn 8/8/06, Luke Lonergan < [email protected] \n> <mailto:[email protected]> > wrote:\n> \n> \t\tSteve,\n> \t\t\n> \t\t> Sun box with 4-disc array (4GB RAM. 4 167GB \n> 10K SCSI RAID10\n> \t\t> LSI MegaRAID 128MB). This is after 8 runs.\n> \t\t>\n> \t\t> \n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> \t\t> \n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53 \n> \t\t> \n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> \t\t> \n> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> \t\t>\n> \t\t> Average TPS is 75\n> \t\t>\n> \t\t> HP box with 8GB RAM. six disc array RAID10 on \n> SmartArray 642 \n> \t\t> with 192MB RAM. After 8 runs, I see:\n> \t\t>\n> \t\t> intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> \t\t> intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> \t\t> intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50 \n> \t\t> intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> \t\t>\n> \t\t> Average TPS is 31.\n> \t\t\n> \t\tNote that the I/O wait (wa) on the HP box high, \n> low and average are all\n> \t\t*much* higher than on the Sun box. The average \n> I/O wait was 50% of one \n> \t\tCPU, which is huge. By comparison there was \n> virtually no I/O wait on\n> \t\tthe Sun machine.\n> \t\t\n> \t\tThis is indicating that your HP machine is \n> indeed I/O bound and\n> \t\tfurthermore is tying up a PG process waiting \n> for the disk to return. \n> \t\t\n> \t\t- Luke\n> \t\t\n> \t\t\n> \n> \n> \n> \n\n",
"msg_date": "Wed, 9 Aug 2006 02:35:59 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nI will do that. If it is the general impression that this server should\nperform well with Postgresql, Are the RAID cards, the 6i and 642 sufficient\nto your knowledge? I am wondering if it is the disc array itself.\n\nSteve\n\nOn 8/8/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> At the end of the day it seems that you've got a support issue with the\n> SmartArray RAID adapter from HP.\n>\n> Last I tried that I found that they don't write the cciss driver, don't\n> test it for performance on Linux and don't make any claims about it's\n> performance on Linux.\n>\n> That said - can you contact them through HP tech support and report back\n> to this list what you find out?\n>\n> - Luke\n> > -----Original Message-----\n> > From: Steve Poe [mailto:[email protected]]\n> > Sent: Tuesday, August 08, 2006 11:33 PM\n> > To: Luke Lonergan\n> > Cc: Alex Turner; [email protected]\n> > Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and\n> >\n> > Luke,\n> >\n> > I check dmesg one more time and I found this regarding the\n> > cciss driver:\n> >\n> > Filesystem \"cciss/c1d0p1\": Disabling barriers, not supported\n> > by the underlying device.\n> >\n> > Don't know if it means anything, but thought I'd mention it.\n> >\n> > Steve\n> >\n> >\n> > On 8/8/06, Steve Poe <[email protected]> wrote:\n> >\n> > Luke,\n> >\n> > I thought so. In my test, I tried to be fair/equal\n> > since my Sun box has two 4-disc arrays each on their own\n> > channel. So, I just used one of them which should be a little\n> > slower than the 6-disc with 192MB cache.\n> >\n> > Incidently, the two internal SCSI drives, which are on\n> > the 6i adapter, generated a TPS of 18.\n> >\n> > I thought this server would impressive from notes I've\n> > read in the group. This is why I thought I might be doing\n> > something wrong. I stumped which way to take this. There is\n> > no obvious fault but something isn't right.\n> >\n> >\n> > Steve\n> >\n> >\n> >\n> > On 8/8/06, Luke Lonergan < [email protected]\n> > <mailto:[email protected]> > wrote:\n> >\n> > Steve,\n> >\n> > > Sun box with 4-disc array (4GB RAM. 4 167GB\n> > 10K SCSI RAID10\n> > > LSI MegaRAID 128MB). This is after 8 runs.\n> > >\n> > >\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5\n> > >\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> > >\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0\n> > >\n> > dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38\n> > >\n> > > Average TPS is 75\n> > >\n> > > HP box with 8GB RAM. six disc array RAID10 on\n> > SmartArray 642\n> > > with 192MB RAM. After 8 runs, I see:\n> > >\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50\n> > > intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> > >\n> > > Average TPS is 31.\n> >\n> > Note that the I/O wait (wa) on the HP box high,\n> > low and average are all\n> > *much* higher than on the Sun box. The average\n> > I/O wait was 50% of one\n> > CPU, which is huge. By comparison there was\n> > virtually no I/O wait on\n> > the Sun machine.\n> >\n> > This is indicating that your HP machine is\n> > indeed I/O bound and\n> > furthermore is tying up a PG process waiting\n> > for the disk to return.\n> >\n> > - Luke\n> >\n> >\n> >\n> >\n> >\n> >\n>\n>\n\nLuke,I will do that. 
If it is the general impression that this server should perform well with Postgresql, Are the RAID cards, the 6i and 642 sufficient to your knowledge? I am wondering if it is the disc array itself. \nSteveOn 8/8/06, Luke Lonergan <[email protected]> wrote:\nSteve,At the end of the day it seems that you've got a support issue with theSmartArray RAID adapter from HP.Last I tried that I found that they don't write the cciss driver, don'ttest it for performance on Linux and don't make any claims about it's\nperformance on Linux.That said - can you contact them through HP tech support and report backto this list what you find out?- Luke> -----Original Message-----> From: Steve Poe [mailto:\[email protected]]> Sent: Tuesday, August 08, 2006 11:33 PM> To: Luke Lonergan> Cc: Alex Turner; [email protected]\n> Subject: Re: [PERFORM] Postgresql Performance on an HP DL385 and>> Luke,>> I check dmesg one more time and I found this regarding the> cciss driver:>> Filesystem \"cciss/c1d0p1\": Disabling barriers, not supported\n> by the underlying device.>> Don't know if it means anything, but thought I'd mention it.>> Steve>>> On 8/8/06, Steve Poe <[email protected]\n> wrote:>> Luke,>> I thought so. In my test, I tried to be fair/equal> since my Sun box has two 4-disc arrays each on their own> channel. So, I just used one of them which should be a little\n> slower than the 6-disc with 192MB cache.>> Incidently, the two internal SCSI drives, which are on> the 6i adapter, generated a TPS of 18.>> I thought this server would impressive from notes I've\n> read in the group. This is why I thought I might be doing> something wrong. I stumped which way to take this. There is> no obvious fault but something isn't right.>>> Steve\n>>>> On 8/8/06, Luke Lonergan < [email protected]> <mailto:[email protected]\n> > wrote:>> Steve,>> > Sun box with 4-disc array (4GB RAM. 4 167GB> 10K SCSI RAID10> > LSI MegaRAID 128MB). This is after 8 runs.\n> >> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,us,12,2,5> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,sy,59,50,53\n> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,wa,1,0,0> >> dbserver-dual-opteron-centos,08/08/06,Tuesday,20,id,45,26,38> >> > Average TPS is 75\n> >> > HP box with 8GB RAM. six disc array RAID10 on> SmartArray 642> > with 192MB RAM. After 8 runs, I see:> >> > intown-vetstar-amd64,08/09/06,Tuesday,23,us,31,0,3\n> > intown-vetstar-amd64,08/09/06,Tuesday,23,sy,16,0,1> > intown-vetstar-amd64,08/09/06,Tuesday,23,wa,99,6,50> > intown-vetstar-amd64,08/09/06,Tuesday,23,id,78,0,42\n> >> > Average TPS is 31.>> Note that the I/O wait (wa) on the HP box high,> low and average are all> *much* higher than on the Sun box. The average\n> I/O wait was 50% of one> CPU, which is huge. By comparison there was> virtually no I/O wait on> the Sun machine.>> This is indicating that your HP machine is\n> indeed I/O bound and> furthermore is tying up a PG process waiting> for the disk to return.>> - Luke>>>>>>",
"msg_date": "Tue, 8 Aug 2006 23:47:44 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
}
] |
[
{
"msg_contents": "Steve, \n\n> I will do that. If it is the general impression that this \n> server should perform well with Postgresql, Are the RAID \n> cards, the 6i and 642 sufficient to your knowledge? I am \n> wondering if it is the disc array itself. \n\nI think that is the question to be answered by HP support. Ask them for\ntechnical support for the Linux driver for the 6i and 642. If they\noffer support, they should quickly figure out what the problem is. If\nthey don't provide support, I would send the server back to them if you\ncan and buy a Sun server with an LSI RAID adapter.\n\n- Luke\n\n",
"msg_date": "Wed, 9 Aug 2006 02:50:24 -0400",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
},
{
"msg_contents": "Luke,\n\nI hope so. I'll keep you and the list up-to-date as I learn more.\n\nSteve\n\nOn 8/8/06, Luke Lonergan <[email protected]> wrote:\n>\n> Steve,\n>\n> > I will do that. If it is the general impression that this\n> > server should perform well with Postgresql, Are the RAID\n> > cards, the 6i and 642 sufficient to your knowledge? I am\n> > wondering if it is the disc array itself.\n>\n> I think that is the question to be answered by HP support. Ask them for\n> technical support for the Linux driver for the 6i and 642. If they\n> offer support, they should quickly figure out what the problem is. If\n> they don't provide support, I would send the server back to them if you\n> can and buy a Sun server with an LSI RAID adapter.\n>\n> - Luke\n>\n>\n\nLuke,I hope so. I'll keep you and the list up-to-date as I learn more.SteveOn 8/8/06, Luke Lonergan <\[email protected]> wrote:Steve,> I will do that. If it is the general impression that this\n> server should perform well with Postgresql, Are the RAID> cards, the 6i and 642 sufficient to your knowledge? I am> wondering if it is the disc array itself.I think that is the question to be answered by HP support. Ask them for\ntechnical support for the Linux driver for the 6i and 642. If theyoffer support, they should quickly figure out what the problem is. Ifthey don't provide support, I would send the server back to them if you\ncan and buy a Sun server with an LSI RAID adapter.- Luke",
"msg_date": "Tue, 8 Aug 2006 23:56:38 -0700",
"msg_from": "\"Steve Poe\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgresql Performance on an HP DL385 and"
}
] |
[
{
"msg_contents": "Hello everyone.\n\nMy (simplified) database structure is:\n\na) table product (150000 rows)\n product_id BIGINT PRIMARY KEY\n title TEXT\n ...\n\nb) table action (5000 rows)\n action_id BIGINT PRIMARY KEY\n product_id BIGINT, FK to product\n shop_group_id INTEGER (there are about 5 groups, distributed about evenly)\n\nc) table product_program (500000 rows)\n program_id BIGINT (there are about 50 unique)\n product_id BIGINT, FK to product\n\n\nI need to query products, which are in action table of specific group \nand in product_program for a specific program_id. The query is taking \ntoo long to my liking My query is:\n\nSELECT product.product_id\n FROM action\n JOIN product ON (product.product_id=action.product_id)\n WHERE action.shop_group_id=1\n AND EXISTS (SELECT 1\n FROM catalog.product_program\n WHERE product_id=product.product_id\n AND product_program.program_id =1104322\n )\n\n\nQUERY PLAN\nNested Loop (cost=0.00..18073.81 rows=1220 width=8) (actual \ntime=10.153..2705.891 rows=636 loops=1)\n -> Seq Scan on \"action\" (cost=0.00..135.74 rows=2439 width=8) \n(actual time=8.108..36.684 rows=2406 loops=1)\n Filter: (shop_group_id = 1)\n -> Index Scan using product_pkey on product (cost=0.00..7.34 rows=1 \nwidth=8) (actual time=1.031..1.097 rows=0 loops=2406)\n Index Cond: ((product.product_id)::bigint = \n(\"outer\".product_id)::bigint)\n Filter: (subplan)\n SubPlan\n -> Index Scan using product_program_pkey on product_program \n (cost=0.00..4.33 rows=1 width=0) (actual time=0.455..0.455 rows=0 \nloops=2406)\n Index Cond: (((program_id)::bigint = 1104322) AND \n((product_id)::bigint = ($0)::bigint))\nTotal runtime: 2708.575 ms\n\n\n\nI also tried this:\n\nSELECT product.product_id\n FROM action\n JOIN product ON (product.product_id=action.product_id)\n JOIN catalog.product_program ON (\n\tproduct_program.product_id=product.product_id AND\n\tproduct_program.program_id =1104322)\n WHERE action.shop_group_id=1\n\n\nWith about the same results (a bit better, but for different groups it \nwas vice versa):\n\nQUERY PLAN\nNested Loop (cost=141.84..3494.91 rows=139 width=8) (actual \ntime=118.584..1295.303 rows=636 loops=1)\n -> Hash Join (cost=141.84..2729.11 rows=253 width=16) (actual \ntime=118.483..231.103 rows=636 loops=1)\n Hash Cond: ((\"outer\".product_id)::bigint = \n(\"inner\".product_id)::bigint)\n -> Index Scan using product_program_pkey on product_program \n(cost=0.00..2470.04 rows=7647 width=8) (actual time=0.047..73.514 \nrows=7468 loops=1)\n Index Cond: ((program_id)::bigint = 1104322)\n -> Hash (cost=135.74..135.74 rows=2439 width=8) (actual \ntime=118.114..118.114 rows=0 loops=1)\n -> Seq Scan on \"action\" (cost=0.00..135.74 rows=2439 \nwidth=8) (actual time=0.019..106.864 rows=2406 loops=1)\n Filter: (shop_group_id = 1)\n -> Index Scan using product_pkey on product (cost=0.00..3.01 rows=1 \nwidth=8) (actual time=1.300..1.655 rows=1 loops=636)\n Index Cond: ((\"outer\".product_id)::bigint = \n(product.product_id)::bigint)\n\n\nAny ideas if this is really the best I can expect, or is there something \namiss there and my query is wrong for this type of task? My gut feeling \ntells me, that this kind of query should be a lot faster. The hardware \nis Dual Xeon with enough of RAM and other operations run just fine.\n\nThank you.\n\n-- \nMichal T�borsk�\n",
"msg_date": "Wed, 09 Aug 2006 16:06:35 +0200",
"msg_from": "Michal Taborsky - Internet Mall <[email protected]>",
"msg_from_op": true,
"msg_subject": "3-table query optimization"
},
{
"msg_contents": "Michal Taborsky - Internet Mall <[email protected]> writes:\n> SELECT product.product_id\n> FROM action\n> JOIN product ON (product.product_id=action.product_id)\n> WHERE action.shop_group_id=1\n> AND EXISTS (SELECT 1\n> FROM catalog.product_program\n> WHERE product_id=product.product_id\n> AND product_program.program_id =1104322\n> )\n\nTry converting the EXISTS subquery to an IN.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 09 Aug 2006 11:08:59 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3-table query optimization "
},
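Tom's suggestion above is to convert the EXISTS subquery into an IN. A sketch of what that rewrite looks like for the query quoted in this thread; the table and column names come from the original post, while the database name passed to psql is an assumption.

    psql -d mall -c "
    SELECT product.product_id
      FROM action
      JOIN product ON (product.product_id = action.product_id)
     WHERE action.shop_group_id = 1
       AND product.product_id IN (SELECT product_id
                                    FROM catalog.product_program
                                   WHERE program_id = 1104322);"

Running EXPLAIN ANALYZE on both forms side by side is the quickest way to see whether the planner actually chooses a different join strategy for the IN version.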
{
"msg_contents": "Tom Lane napsal(a):\n> Michal Taborsky - Internet Mall <[email protected]> writes:\n>> SELECT product.product_id\n>> FROM action\n>> JOIN product ON (product.product_id=action.product_id)\n>> WHERE action.shop_group_id=1\n>> AND EXISTS (SELECT 1\n>> FROM catalog.product_program\n>> WHERE product_id=product.product_id\n>> AND product_program.program_id =1104322\n>> )\n> \n> Try converting the EXISTS subquery to an IN.\n\nThe performance is roughly the same. For some groups it's better, for \nsome groups, the bigger ones, it's a bit worse. I forgot to mention, \nthat the server is running 8.0.2. Upgrading would be a bit painful, as \nit is a 24/7 production system, but if it would help significantly, we'd \ngive it a go.\n\n-- \nMichal T�borsk�\n\n",
"msg_date": "Thu, 10 Aug 2006 09:30:35 +0200",
"msg_from": "Michal Taborsky - Internet Mall <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 3-table query optimization"
},
{
"msg_contents": "Michal Taborsky - Internet Mall wrote:\n> Tom Lane napsal(a):\n> >Michal Taborsky - Internet Mall <[email protected]> writes:\n> >>SELECT product.product_id\n> >> FROM action\n> >> JOIN product ON (product.product_id=action.product_id)\n> >> WHERE action.shop_group_id=1\n> >> AND EXISTS (SELECT 1\n> >> FROM catalog.product_program\n> >> WHERE product_id=product.product_id\n> >> AND product_program.program_id =1104322\n> >> )\n> >\n> >Try converting the EXISTS subquery to an IN.\n> \n> The performance is roughly the same.\n\nThat's strange -- IN is usually much more amenable to better plans than\nEXISTS. Please post an EXPLAIN ANALYZE of the queries to see what's\ngoing on. It may be that the query is bound to be \"slow\" for some\ncases (depending on the program_id I guess?)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Thu, 10 Aug 2006 11:50:17 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3-table query optimization"
},
{
"msg_contents": "On Thu, Aug 10, 2006 at 09:30:35AM +0200, Michal Taborsky - Internet Mall wrote:\n> Tom Lane napsal(a):\n> >Michal Taborsky - Internet Mall <[email protected]> writes:\n> >>SELECT product.product_id\n> >> FROM action\n> >> JOIN product ON (product.product_id=action.product_id)\n> >> WHERE action.shop_group_id=1\n> >> AND EXISTS (SELECT 1\n> >> FROM catalog.product_program\n> >> WHERE product_id=product.product_id\n> >> AND product_program.program_id =1104322\n> >> )\n> >\n> >Try converting the EXISTS subquery to an IN.\n> \n> The performance is roughly the same. For some groups it's better, for \n> some groups, the bigger ones, it's a bit worse. I forgot to mention, \n> that the server is running 8.0.2. Upgrading would be a bit painful, as \n> it is a 24/7 production system, but if it would help significantly, we'd \n> give it a go.\n\nYou're exposing yourself to at least one data-loss bug and a security\nhole by running 8.0.2. You should at least move to 8.0.8, which won't\nrequire a lot of downtime.\n\nIf you can make it happen, moving to 8.1.4 would almost certainly net a\nnoticable performance gain. I've seen 50-100% improvements, but how much\ngain you'll actually see is highly workload dependent.\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 10:08:35 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 3-table query optimization"
}
] |
[
{
"msg_contents": "Hello,\n\n \n\nI've recently been tasked with scalability/performance testing of a Dell\nPowerEdge 2950. This is the one with the new Intel Woodcrest Xeons.\nSince I haven't seen any info on this box posted to the list, I figured\npeople might be interested in the results, and maybe in return share a\nfew tips on performance tweaks.\n\n \n\nAfter doing some reading on the performance list, I realize that there's\na preference for Opteron; however, the goal of these experiments is to\nsee what I can get the 2950 to do. I will also be comparing performance\nvs. a 1850 at some point, if there's any interest I can post those\nnumbers too.\n\n \n\nHere's the hardware:\n\n2xDual Core 3.0 Ghz CPU (Xeon 5160- 1333Mhz FSB, 4 MB shared cache per\nsocket)\n\n8 GB RAM (DDR2, fully buffered, Dual Ranked, 667 Mhz)\n\n6x300 10k RPM SAS drives\n\nPerc 5i w/256 MB battery backed cache\n\n \n\nThe target application:\n\nMostly OLAP (large bulk loads, then lots of reporting, possibly moving\nto real-time loads in the future). All of this will be run on FreeBSD\n6.1 amd64. (If I have some extra time, I might be able to run a few\ntests on linux just for comparison's sake)\n\n \n\nTest strategy:\n\nMake sure the RAID is giving reasonable performance:\n\nbonnie++ -d /u/bonnie -s 1000:8k\n\ntime bash -c \"(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)\"\n\n \n\nNow, I realize that the above are overly simple, and not indicative of\noverall performance, however here's what I'm seeing:\n\nSingle 10K 300 GB drive - ~75 Mb/s on both tests, more or less\n\nRAID 10, 6 disks (3 sets of mirrored pairs) - ~117 Mb/s\n\n \n\nThe RAID 10 numbers look way off to me, so my next step is to go test\nsome different RAID configs. I'm going to look at a mirrored pair, and a\nstriped pair first, just to make sure the setup is sane. Then, RAID 5 x\n6 disks, and mirrored pair + raid 10 with 4. Possibly software raid,\nhowever I'm not very familiar with this on FreeBSD.\n\n \n\nOnce I get the RAID giving me reasonable results (I would think that a\nraid 10 with 6 10k drives should be able to push >200 MB/s sustained\nIO...no?) I will move on to other more DB specific tests. \n\n \n\nA few questions:\n\n1) Does anyone have other suggestions for testing raw IO for the RAID?\n\n \n\n2) What is reasonable IO (bonnie++, dd) for 4 or 6 disks- RAID 10?\n\n \n\n3) For DB tests, I would like to compare performance on the different\nRAID configs and vs. the 1850. Maybe to assist also in some basic\npostgresql.conf and OS tuning (but that will be saved mostly for when I\nstart application level testing). I realize that benchmarks don't\nnecessarily map to application performance, but it helps me establish a\nbaseline for the hardware. I'm currently running pgbench, but would like\nsomething with a few more features (but hopefully without too much setup\ntime). I've heard mention of the OSDL's DBT tests, and I'm specifically\ninterested in DBT-2 and DBT-3. Any suggestions here?\n\n \n\nHere's some initial numbers from pgbench (-s 50 -c 10 -t 100). 
Please\nkeep in mind that these are default installs of FreeBSD 6.1 and Postgres\n8.1.4- NO tuning yet.\n\n1850: run1: 121 tps, run2: 132 tps, run3: 229 tps\n\n2950: run1: 178 tps, run2: 201 tps, run3:259 tps\n\n \n\nObviously neither PG nor FreeBSD are taking advantage of all the\nhardware available in either case.\n\n \n\nI will post the additional RAID numbers shortly...\n\n \n\nThanks,\n\n \n\nBucky\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHello,\n \nI’ve recently been tasked with scalability/performance\ntesting of a Dell PowerEdge 2950. This is the one with the new Intel Woodcrest\nXeons. Since I haven’t seen any info on this box posted to the list, I\nfigured people might be interested in the results, and maybe in return share a\nfew tips on performance tweaks.\n \nAfter doing some reading on the performance list, I realize\nthat there’s a preference for Opteron; however, the goal of these experiments\nis to see what I can get the 2950 to do. I will also be comparing performance vs.\na 1850 at some point, if there’s any interest I can post those numbers\ntoo.\n \nHere’s the hardware:\n2xDual Core 3.0 Ghz CPU (Xeon 5160- 1333Mhz FSB, 4 MB shared\ncache per socket)\n8 GB RAM (DDR2, fully buffered, Dual Ranked, 667 Mhz)\n6x300 10k RPM SAS drives\nPerc 5i w/256 MB battery backed cache\n \nThe target application:\nMostly OLAP (large bulk loads, then lots of reporting,\npossibly moving to real-time loads in the future). All of this will be run on\nFreeBSD 6.1 amd64. (If I have some extra time, I might be able to run a few\ntests on linux just for comparison’s sake)\n \nTest strategy:\nMake sure the RAID is giving reasonable performance:\nbonnie++ -d /u/bonnie -s 1000:8k\ntime bash -c \"(dd if=/dev/zero of=bigfile count=125000\nbs=8k && sync)\"\n \nNow, I realize that the above are overly simple, and not\nindicative of overall performance, however here’s what I’m seeing:\nSingle 10K 300 GB drive - ~75 Mb/s on both tests, more or\nless\nRAID 10, 6 disks (3 sets of mirrored pairs) - ~117 Mb/s\n \nThe RAID 10 numbers look way off to me, so my next step is\nto go test some different RAID configs. I’m going to look at a mirrored\npair, and a striped pair first, just to make sure the setup is sane. Then, RAID\n5 x 6 disks, and mirrored pair + raid 10 with 4. Possibly software raid,\nhowever I’m not very familiar with this on FreeBSD.\n \nOnce I get the RAID giving me reasonable results (I would\nthink that a raid 10 with 6 10k drives should be able to push >200 MB/s\nsustained IO…no?) I will move on to other more DB specific tests. \n \nA few questions:\n1) Does anyone have other suggestions for testing raw IO for\nthe RAID?\n \n2) What is reasonable IO (bonnie++, dd) for 4 or 6 disks-\nRAID 10?\n \n3) For DB tests, I would like to compare performance on the\ndifferent RAID configs and vs. the 1850. Maybe to assist also in some basic\npostgresql.conf and OS tuning (but that will be saved mostly for when I start\napplication level testing). I realize that benchmarks don’t necessarily\nmap to application performance, but it helps me establish a baseline for the\nhardware. I’m currently running pgbench, but would like something with a\nfew more features (but hopefully without too much setup time). I’ve heard\nmention of the OSDL’s DBT tests, and I’m specifically interested in\nDBT-2 and DBT-3. Any suggestions here?\n \nHere’s some initial numbers from pgbench (-s 50 –c\n10 –t 100). 
Please keep in mind that these are default installs of\nFreeBSD 6.1 and Postgres 8.1.4- NO tuning yet.\n1850: run1: 121 tps, run2: 132 tps, run3: 229 tps\n2950: run1: 178 tps, run2: 201 tps, run3:259 tps\n \nObviously neither PG nor FreeBSD are taking advantage of all\nthe hardware available in either case.\n \nI will post the additional RAID numbers shortly…\n \nThanks,\n \nBucky",
"msg_date": "Wed, 9 Aug 2006 11:56:52 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dell PowerEdge 2950 performance"
},
{
"msg_contents": "On Aug 9, 2006, at 11:56 AM, Bucky Jordan wrote:\n\n> Here�s the hardware:\n>\n> 2xDual Core 3.0 Ghz CPU (Xeon 5160- 1333Mhz FSB, 4 MB shared cache \n> per socket)\n>\n> 8 GB RAM (DDR2, fully buffered, Dual Ranked, 667 Mhz)\n>\n> 6x300 10k RPM SAS drives\n>\n> Perc 5i w/256 MB battery backed cache\n\nIs the PERC 5/i dual channel? If so, are 1/2 the drives on one \nchannel and the other half on the other channel? I find this helps \nRAID10 performance when the mirrored pairs are on separate channels.\n\nYour transfer rate seems pretty good for Dell hardware, but I'm not \nexperienced with SAS drives to know if those numbers are good in an \nabsolute sense.\n\nAlso, which driver picked up the SAS controller? amr(4) or aac(4) or \nsome other? That makes a big difference too. I think the amr driver \nis \"better\" than the aac driver.",
"msg_date": "Mon, 14 Aug 2006 14:28:11 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "...\nIs the PERC 5/i dual channel? If so, are 1/2 the drives on one channel and the other half on the other channel? I find this helps RAID10 performance when the mirrored pairs are on separate channels.\n...\n\nWith the SAS controller (PERC 5/i), every drive gets it's own 3 GB/s port. \n\n...\nYour transfer rate seems pretty good for Dell hardware, but I'm not experienced with SAS drives to know if those numbers are good in an absolute sense.\n\nAlso, which driver picked up the SAS controller? amr(4) or aac(4) or some other? That makes a big difference too. I think the amr driver is \"better\" than the aac driver.\n..\n\nThe internals of the current SAS drives are similar to the U320's they replaced in terms of read/write/seek performance, however the benefit is the SAS bus, which helps eliminate some of the U320 limitations (e.g. with Perc4, you only get 160 MB/s per channel as you mentioned). It's using the mfi driver... \n\nHere's some simplistic performance numbers:\ntime bash -c \"(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)\"\n\nRaid0 x 2 (2 spindles) ~138 MB/s on BSD\nRaid5 x 4 ~160 MB/s BSD, ~274 MB/s Knoppix (ext2)\nRaid5 x 6 ~255 MB/s BSD, 265 MB/s Knoppix (ext3)\nRaid10 x 4 ~25 MB/s BSD\nRaid50 x 6 ~144 MB/s BSD, 271 MB/s Knoppix\n\n* BSD is 6.1-RELEASE amd64 with UFS + Soft updates, Knoppix is 5.1 (ext2 didn't like the > 1TB partition for the 6 disk RAID 5, hence ext3)\n\nSeems to me the PERC5 has issues with layered raid (10, 50) as others have suggested on this list is a common problem with lower end raid cards. For now, I'm going with the RAID 5 option, however if I have time, I would like to test having the hardware do raid 0 and doing raid 1 in the os, or vice versa, as proposed in other posts.\n\nAlso, I ran a pgbench -s 50 -c 10 -t 1000 on a completely default BSD 6.1 and PG 8.1.4 install with RAID5 x 6 disks, and got 442 tps on a fresh run (the numbers climb very rapidly due to caching after running simultaneous tests without reinitializing the test db. I'm guessing this is due to OS caching since the default postgresql.conf is pretty limited in terms of resource use). I probably need to up the scaling factor significantly so the whole data set doesn't get cached in RAM if I want realistic results from simultaneous tests, but it seems quicker to just reinit each time at this point.\n\nOn to some kernel tweaks and some adjustments to postgresql.conf... \n\n- Bucky\n\n",
"msg_date": "Mon, 14 Aug 2006 15:56:46 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
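Bucky notes above that the pgbench numbers climb across runs because the scale-50 data set fits comfortably in RAM. A sketch of initializing a data set larger than the 8 GB of memory in the 2950; each scale unit is roughly 15 MB of accounts data, so the scale of 600 below is only an estimate, and the database name is an assumption.

    # Build a pgbench database on the order of 9-10 GB so reads cannot all come from cache.
    createdb bench
    pgbench -i -s 600 bench

    # Then run the test itself with the desired client and transaction counts.
    pgbench -c 10 -t 1000 bench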
{
"msg_contents": "On Aug 14, 2006, at 3:56 PM, Bucky Jordan wrote:\n\n> Seems to me the PERC5 has issues with layered raid (10, 50) as \n> others have suggested on this list is a common problem with lower \n> end raid cards. For now, I'm going with the RAID 5 option, however \n> if I have time, I would like to test having the hardware do raid 0 \n> and doing raid 1 in the os, or vice versa, as proposed in other posts.\n\nWow, those are pretty awesome numbers.... I'm actually inclined to \ntry these as my DB servers again! Lately I've been using Sun X4100 \nwith Adaptec RAID cards, but they don't transfer nearly as fast as \nthat on simple tests.\n\nOf more interest would be a test which involved large files with lots \nof seeks all around (something like bonnie++ should do that).\n\nI too have noticed that Dell controllers don't like doing layered \nRAID levels very well. All of mine are doing plain old RAID5 or \nRAID1 only, and at that they are acceptable. The PERC 4/Si in the \n1850 has been pretty fast at RAID1.\n\nThanks for sharing your numbers.",
"msg_date": "Mon, 14 Aug 2006 17:24:16 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "...\nOf more interest would be a test which involved large files with lots \nof seeks all around (something like bonnie++ should do that).\n...\n\nHere's the bonnie++ numbers for the RAID 5 x 6 disks. I believe this was\nwith write-through and 64k striping. I plan to run a few others with\ndifferent block sizes and larger files- I'd be happy to send out a link\nto the list when I get a chance to post them somewhere. I've also been\nrunning some basic tests with pgbench just to help jumpstart customizing\npostgresql.conf, so that might be of interest too.\n\nbash-2.05b$ bonnie++ -d bonnie -s 1000:8k\nVersion 1.93c ------Sequential Output------ --Sequential Input-\n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n 1000M 587 99 246900 71 225124 76 1000 99 585723 99\n8573 955\nLatency 14367us 50829us 410ms 57965us 1656us\n432ms\nVersion 1.93c ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 28192 91 +++++ +++ +++++ +++ 26076 89 +++++ +++\n+++++ +++\nLatency 25988us 75us 37us 24756us 36us\n41us\n1.93c,1.93c,\n,1,1155223901,1000M,,587,99,246900,71,225124,76,1000,99,585723,99,8573,9\n55,16,,,,,28192,91,+++++,+++,+++++,+++,26076,89,+++++,+++,+++++,+++,1436\n7us,50829us,410ms,57965us,1656us,432ms,25988us,75us,37us,24756us,36us,41\nus\n\n...\nThanks for sharing your numbers.\n...\n\nYou're welcome- I prefer to see actual numbers rather than people simply\nstating that RAID controller X is better, so hopefully more people will\ndo the same.\n\n- Bucky\n",
"msg_date": "Mon, 14 Aug 2006 19:38:03 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "Bucky,\n\nI see you are running bonnie++ version 1.93c. The numbers it reports are\nvery different from version 1.03a, which is the one everyone runs - can you\npost your 1.03a numbers from bonnie++?\n\n- Luke\n\n\nOn 8/14/06 4:38 PM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> ...\n> Of more interest would be a test which involved large files with lots\n> of seeks all around (something like bonnie++ should do that).\n> ...\n> \n> Here's the bonnie++ numbers for the RAID 5 x 6 disks. I believe this was\n> with write-through and 64k striping. I plan to run a few others with\n> different block sizes and larger files- I'd be happy to send out a link\n> to the list when I get a chance to post them somewhere. I've also been\n> running some basic tests with pgbench just to help jumpstart customizing\n> postgresql.conf, so that might be of interest too.\n> \n> bash-2.05b$ bonnie++ -d bonnie -s 1000:8k\n> Version 1.93c ------Sequential Output------ --Sequential Input-\n> --Random-\n> Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 1000M 587 99 246900 71 225124 76 1000 99 585723 99\n> 8573 955\n> Latency 14367us 50829us 410ms 57965us 1656us\n> 432ms\n> Version 1.93c ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 28192 91 +++++ +++ +++++ +++ 26076 89 +++++ +++\n> +++++ +++\n> Latency 25988us 75us 37us 24756us 36us\n> 41us\n> 1.93c,1.93c,\n> ,1,1155223901,1000M,,587,99,246900,71,225124,76,1000,99,585723,99,8573,9\n> 55,16,,,,,28192,91,+++++,+++,+++++,+++,26076,89,+++++,+++,+++++,+++,1436\n> 7us,50829us,410ms,57965us,1656us,432ms,25988us,75us,37us,24756us,36us,41\n> us\n> \n> ...\n> Thanks for sharing your numbers.\n> ...\n> \n> You're welcome- I prefer to see actual numbers rather than people simply\n> stating that RAID controller X is better, so hopefully more people will\n> do the same.\n> \n> - Bucky\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n",
"msg_date": "Mon, 14 Aug 2006 22:23:13 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "...\nI see you are running bonnie++ version 1.93c. The numbers it reports are\nvery different from version 1.03a, which is the one everyone runs - can\nyou\npost your 1.03a numbers from bonnie++?\n...\n\nLuke,\n\nThanks for the pointer. Here's the 1.03 numbers, but at the moment I'm\nonly able to run them on the 6 disk RAID 5 setup (128k stripe, writeback\nenabled since the Perc5 does have a battery backed cache). \n\nbonnie++ -d bonnie -s 1000:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t\t 1000M 155274 95 265359 44 232958 52 166884 99\n1054455 99 +++++ +++\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ 30550 88 +++++ +++\n+++++ +++\n,1000M,155274,95,265359,44,232958,52,166884,99,1054455,99,+++++,+++,16,+\n++++,+++,+++++,+++,+++++,+++,30550,88,+++++,+++,+++++,+++\n\n- Bucky\n",
"msg_date": "Tue, 15 Aug 2006 09:56:32 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "Bucky,\n\nI don't know why I missed this the first time - you need to let bonnie++\npick the file size - it needs to be 2x memory or the results you get will\nnot be accurate.\n\nIn this case you've got a 1GB file, which nicely fits in RAM.\n\n- Luke\n\n\nOn 8/15/06 6:56 AM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> ...\n> I see you are running bonnie++ version 1.93c. The numbers it reports are\n> very different from version 1.03a, which is the one everyone runs - can\n> you\n> post your 1.03a numbers from bonnie++?\n> ...\n> \n> Luke,\n> \n> Thanks for the pointer. Here's the 1.03 numbers, but at the moment I'm\n> only able to run them on the 6 disk RAID 5 setup (128k stripe, writeback\n> enabled since the Perc5 does have a battery backed cache).\n> \n> bonnie++ -d bonnie -s 1000:8k\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 1000M 155274 95 265359 44 232958 52 166884 99\n> 1054455 99 +++++ +++\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ 30550 88 +++++ +++\n> +++++ +++\n> ,1000M,155274,95,265359,44,232958,52,166884,99,1054455,99,+++++,+++,16,+\n> ++++,+++,+++++,+++,+++++,+++,30550,88,+++++,+++,+++++,+++\n> \n> - Bucky\n> \n\n\n",
"msg_date": "Tue, 15 Aug 2006 11:50:02 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
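The point above is that bonnie++ needs to work on a file at least twice the size of physical memory, otherwise the results mostly measure the filesystem cache. A sketch of deriving that size on FreeBSD rather than hard-coding it; hw.physmem is the stock sysctl name, and the target directory is the one used elsewhere in the thread.

    # Size the bonnie++ file at twice physical RAM (FreeBSD; hw.physmem reports bytes).
    ram_mb=$(( $(sysctl -n hw.physmem) / 1024 / 1024 ))
    bonnie++ -d /u/bonnie -s $(( ram_mb * 2 )):8k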
{
"msg_contents": "On Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote:\n\n> I don't know why I missed this the first time - you need to let \n> bonnie++\n> pick the file size - it needs to be 2x memory or the results you \n> get will\n> not be accurate.\n\nwhich is an issue with freebsd and bonnie++ since it doesn't know \nthat freebsd can use large files natively (ie, no large file hacks \nnecessary). the freebsd port of bonnie takes care of this, if you \nuse that instead of compiling your own.",
"msg_date": "Tue, 15 Aug 2006 15:17:38 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "Luke,\n\nFor some reason it looks like bonnie is picking a 300M file. \n\n> bonnie++ -d bonnie\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t 300M 179028 99 265358 41 270175 57 167989 99 +++++ +++\n+++++ +++\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n+++++ +++\n,300M,179028,99,265358,41,270175,57,167989,99,+++++,+++,+++++,+++,16,+++\n++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++ \n\nSo here's results when I force it to use a 16GB file, which is twice the\namount of physical ram in the system:\n\n> bonnie++ -d bonnie -s 16000:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t 16000M 158539 99 244430 50 58647 29 83252 61 144240 21\n789.8 7\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 7203 54 +++++ +++ +++++ +++ 24555 42 +++++ +++\n+++++ +++\n,16000M,158539,99,244430,50,58647,29,83252,61,144240,21,789.8,7,16,7203,\n54,+++++,+++,+++++,+++,24555,42,+++++,+++,+++++,+++\n\n... from Vivek...\nwhich is an issue with freebsd and bonnie++ since it doesn't know \nthat freebsd can use large files natively (ie, no large file hacks \nnecessary). the freebsd port of bonnie takes care of this, if you \nuse that instead of compiling your own.\n...\n\nUnfortunately I had to download and build by hand, since only bonnie++\n1.9x is available in BSD 6.1 ports when I checked.\n\nOne other question- would the following also be mostly a test of RAM? I\nwouldn't think so since it should force it to sync to disk... \ntime bash -c \"(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k &&\nsync)\"\n\nOh, and while I'm thinking about it, I believe Postgres uses 8k data\npages correct? On the RAID, I'm using 128k stripes. I know there's been\nposts on this before, but is there any way to tell postgres to use this\nin an effective way? \n\nThanks,\n\nBucky\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Vivek Khera\nSent: Tuesday, August 15, 2006 3:18 PM\nTo: Pgsql-Performance ((E-mail))\nSubject: Re: [PERFORM] Dell PowerEdge 2950 performance\n\n\nOn Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote:\n\n> I don't know why I missed this the first time - you need to let \n> bonnie++\n> pick the file size - it needs to be 2x memory or the results you \n> get will\n> not be accurate.\n\nwhich is an issue with freebsd and bonnie++ since it doesn't know \nthat freebsd can use large files natively (ie, no large file hacks \nnecessary). the freebsd port of bonnie takes care of this, if you \nuse that instead of compiling your own.\n\n",
"msg_date": "Tue, 15 Aug 2006 16:21:55 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
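On the 8 kB page question above: the data page size is fixed when the server binaries are built, and the 8.1-era server has no setting that describes the RAID stripe width to the planner, so alignment is a matter for the filesystem and the controller rather than postgresql.conf. A sketch of confirming the compiled-in page size from the control file; the data directory path is an assumption.

    # pg_controldata reports the compiled-in page size ("Database block size"), normally 8192 bytes.
    pg_controldata /usr/local/pgsql/data | grep -i "block size"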
{
"msg_contents": "On Aug 15, 2006, at 4:21 PM, Bucky Jordan wrote:\n\n> ... from Vivek...\n> which is an issue with freebsd and bonnie++ since it doesn't know\n> that freebsd can use large files natively (ie, no large file hacks\n> necessary). the freebsd port of bonnie takes care of this, if you\n> use that instead of compiling your own.\n> ...\n>\n> Unfortunately I had to download and build by hand, since only bonnie++\n> 1.9x is available in BSD 6.1 ports when I checked.\n\nsee the patch file in the bonnie++ port file and apply something \nsimilar. basically you take out the check for large file support and \nforce it on.",
"msg_date": "Tue, 15 Aug 2006 16:41:55 -0400",
"msg_from": "Vivek Khera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "Cool - seems like the posters caught that \"auto memory pick\" problem before\nyou posted, but you got the 16GB/8k parts right.\n\nNow we're looking at realistic numbers - 790 seeks/second, 244MB/s\nsequential write, but only 144MB/s sequential reads, perhaps 60% of what it\nshould be.\n\nSeems like a pretty good performer in general - if it was Linux I'd play\nwith the max readahead in the I/O scheduler to improve the sequential reads.\n\n- Luke\n\n\nOn 8/15/06 1:21 PM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> Luke,\n> \n> For some reason it looks like bonnie is picking a 300M file.\n> \n>> bonnie++ -d bonnie\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 300M 179028 99 265358 41 270175 57 167989 99 +++++ +++\n> +++++ +++\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n> ,300M,179028,99,265358,41,270175,57,167989,99,+++++,+++,+++++,+++,16,+++\n> ++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> \n> So here's results when I force it to use a 16GB file, which is twice the\n> amount of physical ram in the system:\n> \n>> bonnie++ -d bonnie -s 16000:8k\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 16000M 158539 99 244430 50 58647 29 83252 61 144240 21\n> 789.8 7\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 7203 54 +++++ +++ +++++ +++ 24555 42 +++++ +++\n> +++++ +++\n> ,16000M,158539,99,244430,50,58647,29,83252,61,144240,21,789.8,7,16,7203,\n> 54,+++++,+++,+++++,+++,24555,42,+++++,+++,+++++,+++\n> \n> ... from Vivek...\n> which is an issue with freebsd and bonnie++ since it doesn't know\n> that freebsd can use large files natively (ie, no large file hacks\n> necessary). the freebsd port of bonnie takes care of this, if you\n> use that instead of compiling your own.\n> ...\n> \n> Unfortunately I had to download and build by hand, since only bonnie++\n> 1.9x is available in BSD 6.1 ports when I checked.\n> \n> One other question- would the following also be mostly a test of RAM? I\n> wouldn't think so since it should force it to sync to disk...\n> time bash -c \"(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k &&\n> sync)\"\n> \n> Oh, and while I'm thinking about it, I believe Postgres uses 8k data\n> pages correct? On the RAID, I'm using 128k stripes. 
I know there's been\n> posts on this before, but is there any way to tell postgres to use this\n> in an effective way?\n> \n> Thanks,\n> \n> Bucky\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Vivek Khera\n> Sent: Tuesday, August 15, 2006 3:18 PM\n> To: Pgsql-Performance ((E-mail))\n> Subject: Re: [PERFORM] Dell PowerEdge 2950 performance\n> \n> \n> On Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote:\n> \n>> I don't know why I missed this the first time - you need to let\n>> bonnie++\n>> pick the file size - it needs to be 2x memory or the results you\n>> get will\n>> not be accurate.\n> \n> which is an issue with freebsd and bonnie++ since it doesn't know\n> that freebsd can use large files natively (ie, no large file hacks\n> necessary). the freebsd port of bonnie takes care of this, if you\n> use that instead of compiling your own.\n> \n> \n\n\n",
"msg_date": "Tue, 15 Aug 2006 23:17:32 -0700",
"msg_from": "\"Luke Lonergan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "Luke,\n\nThanks for the tips. I'm running FreeBSD 6.1 amd64, but, I can also\nenable readahead on the raid controller, and also adaptive readahead.\nHere's tests:\n\nReadahead & writeback enabled:\nbash-2.05b$ bonnie++ -d bonnie -s 16000:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\t16000M 156512 98 247520 47 59560 27 83138 60 143588 21\n792.8 7\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 +++++ +++ +++++ +++ 27789 99 +++++ +++ +++++ +++\n+++++ +++\n,16000M,156512,98,247520,47,59560,27,83138,60,143588,21,792.8,7,16,+++++\n,+++,+++++,+++,27789,99,+++++,+++,+++++,+++,+++++,+++\n\n\nWriteback and Adaptive Readahead:\nbash-2.05b$ bonnie++ -d bonnie -s 16000:8k\nVersion 1.03 ------Sequential Output------ --Sequential Input-\n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n16000M 155542 97 246910 47 60356 26 82798 60 143321 21 787.3 6\n ------Sequential Create------ --------Random\nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n 16 6329 49 +++++ +++ +++++ +++ +++++ +++ +++++ +++\n+++++ +++\n,16000M,155542,97,246910,47,60356,26,82798,60,143321,21,787.3,6,16,6329,\n49,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\n\n(As a side note- according to the controller docs, Adaptive read ahead\nreads ahead sequentially if there are two reads from sequential sectors,\notherwise it doesn't).\n\nSo, I'm thinking that the RAID controller doesn't really help with this\ntoo much- I'd think the OS could do a better job deciding when to read\nahead. So, I've set it back to no readahead and the next step is to look\nat OS level file system tuning. Also, if I have time, I'll try doing\nRAID 1 on the controller, and RAID 0 on the OS (or vice versa). Since I\nhave 6 disks, I could do a stripe of 3 mirrored pairs (raid 10) or a\nmirror of two striped sets of 3 (0+1). I suppose theoretically speaking,\nthey should have the same performance characteristics, however I doubt\nthey will in practice. 
\n\nThanks,\n\nBucky\n\n-----Original Message-----\nFrom: Luke Lonergan [mailto:[email protected]] \nSent: Wednesday, August 16, 2006 2:18 AM\nTo: Bucky Jordan; Vivek Khera; Pgsql-Performance ((E-mail))\nSubject: Re: [PERFORM] Dell PowerEdge 2950 performance\n\nCool - seems like the posters caught that \"auto memory pick\" problem\nbefore\nyou posted, but you got the 16GB/8k parts right.\n\nNow we're looking at realistic numbers - 790 seeks/second, 244MB/s\nsequential write, but only 144MB/s sequential reads, perhaps 60% of what\nit\nshould be.\n\nSeems like a pretty good performer in general - if it was Linux I'd play\nwith the max readahead in the I/O scheduler to improve the sequential\nreads.\n\n- Luke\n\n\nOn 8/15/06 1:21 PM, \"Bucky Jordan\" <[email protected]> wrote:\n\n> Luke,\n> \n> For some reason it looks like bonnie is picking a 300M file.\n> \n>> bonnie++ -d bonnie\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 300M 179028 99 265358 41 270175 57 167989 99 +++++ +++\n> +++++ +++\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\n> +++++ +++\n>\n,300M,179028,99,265358,41,270175,57,167989,99,+++++,+++,+++++,+++,16,+++\n> ++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n> \n> So here's results when I force it to use a 16GB file, which is twice\nthe\n> amount of physical ram in the system:\n> \n>> bonnie++ -d bonnie -s 16000:8k\n> Version 1.03 ------Sequential Output------ --Sequential Input-\n> --Random-\n> -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n> --Seeks--\n> Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n> /sec %CP\n> 16000M 158539 99 244430 50 58647 29 83252 61 144240 21\n> 789.8 7\n> ------Sequential Create------ --------Random\n> Create--------\n> -Create-- --Read--- -Delete-- -Create-- --Read---\n> -Delete--\n> files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n> /sec %CP\n> 16 7203 54 +++++ +++ +++++ +++ 24555 42 +++++ +++\n> +++++ +++\n>\n,16000M,158539,99,244430,50,58647,29,83252,61,144240,21,789.8,7,16,7203,\n> 54,+++++,+++,+++++,+++,24555,42,+++++,+++,+++++,+++\n> \n> ... from Vivek...\n> which is an issue with freebsd and bonnie++ since it doesn't know\n> that freebsd can use large files natively (ie, no large file hacks\n> necessary). the freebsd port of bonnie takes care of this, if you\n> use that instead of compiling your own.\n> ...\n> \n> Unfortunately I had to download and build by hand, since only bonnie++\n> 1.9x is available in BSD 6.1 ports when I checked.\n> \n> One other question- would the following also be mostly a test of RAM?\nI\n> wouldn't think so since it should force it to sync to disk...\n> time bash -c \"(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k &&\n> sync)\"\n> \n> Oh, and while I'm thinking about it, I believe Postgres uses 8k data\n> pages correct? On the RAID, I'm using 128k stripes. 
I know there's\nbeen\n> posts on this before, but is there any way to tell postgres to use\nthis\n> in an effective way?\n> \n> Thanks,\n> \n> Bucky\n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Vivek\nKhera\n> Sent: Tuesday, August 15, 2006 3:18 PM\n> To: Pgsql-Performance ((E-mail))\n> Subject: Re: [PERFORM] Dell PowerEdge 2950 performance\n> \n> \n> On Aug 15, 2006, at 2:50 PM, Luke Lonergan wrote:\n> \n>> I don't know why I missed this the first time - you need to let\n>> bonnie++\n>> pick the file size - it needs to be 2x memory or the results you\n>> get will\n>> not be accurate.\n> \n> which is an issue with freebsd and bonnie++ since it doesn't know\n> that freebsd can use large files natively (ie, no large file hacks\n> necessary). the freebsd port of bonnie takes care of this, if you\n> use that instead of compiling your own.\n> \n> \n\n\n",
"msg_date": "Wed, 16 Aug 2006 10:45:13 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
},
{
"msg_contents": "Bucky Jordan wrote:\n> Here's some simplistic performance numbers:\n> time bash -c \"(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)\"\n>\n> Raid0 x 2 (2 spindles) ~138 MB/s on BSD\n> \nPE2950 FreeBSD6.1 i386 raid0 (2spindles):\n\ntime csh -c \"(dd if=/dev/zero of=/data/bigfile count=125000 bs=8k && sync)\"\n125000+0 records in\n125000+0 records out\n1024000000 bytes transferred in 7.070130 secs (144834680 bytes/sec)\n0.070u 2.677s 0:07.11 38.5% 23+224k 31+7862io 0pf+0w\n\nmfi0: <Dell PERC 5/i> .\nI recompiled kernel to get latest mfi driver.\nAlso \"bce\" NIC driver is buggy for 6.1 kernel you got in CD distro. Make \nsure you have latest drivers for bsd 6.1.\nbonnie++\nVersion 1.93c ------Sequential Output------ --Sequential Input- \n--Random-\nConcurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\nraid0 16000M 262 99 116527 38 26451 12 495 99 135301 46 \n323.5 15\nLatency 32978us 323ms 242ms 23842us 171ms \n1370ms\nVersion 1.93c ------Sequential Create------ --------Random \nCreate--------\nraid0 -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n/sec %CP\n 16 5837 19 +++++ +++ +++++ +++ 3463 11 +++++ +++ \n+++++ +++\nLatency 555ms 422us 43us 1023ms 52us \n60us\n1.93c,1.93c,raid0,1,1155819725,16000M,,262,99,116527,38,26451,12,495,99,135301,46,323.5,15,16,,,,,5837,19,+++++,+++,+++++,+++,3463,11,+++++,+++,+++++,+++,32978us,323ms,242ms,23842us,171ms,1370ms,555ms,422us,43us,1023ms,52us,60us\n\n\n-- \nBest Regards,\nalvis\n\n\n",
"msg_date": "Fri, 18 Aug 2006 09:21:29 +0000",
"msg_from": "alvis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dell PowerEdge 2950 performance"
}
] |
[
{
"msg_contents": "I'm trying to optimize a resume search engine that is using Tsearch2\nindexes. It's running on a dual-opteron 165 system with 4GB of ram\nand a raid1 3Gb/sec SATA array. Each text entry is about 2-3K of\ntext, and there are about 23,000 rows in the search table, with a goal\nof reaching about 100,000 rows eventually.\n\nI'm running Ubuntu 6.06 amd64 server edition. The raid array is a\nsoftware-based linux array with LVM on top of it and the file system\nfor the database mount point is XFS. The only optimization I've done\nso far is to put the following in /etc/sysctl.conf:\n\nkernel.shmall = 2097152\nkernel.shmmax = 2147483648\nkernel.shmmni = 4096\nkernel.sem = 250 32000 100 128\nfs.file-max = 65536\n\nAnd in postgresql.conf I set the following parameters:\n\nshared_buffers = 131072\nwork_mem = 65536\nmax_stack_depth = 4096\nmax_fsm_pages = 40000\nmax_fsm_relations = 2000\n\nThese probably aren't ideal but I was hoping they would perform a\nlittle better than the defaults. I got the following results from a\npgbench script I picked up off the web:\n\nCHECKPOINT\n===== sync ======\n10 concurrent users...\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 632.146016 (including connections establishing)\ntps = 710.474526 (excluding connections establishing)\n\nOnce again I don't know if these results are good or not for my hardware.\n\nI have a couple of questions:\n\n- Does anyone have some good advice for optimizing postgres for\ntsearch2 queries?\n- I noticed that there are six different postmaster daemons running.\nOnly one of them is taking up a lot of RAM (1076m virtual and 584m\nresident). The second one is using 181m resident while the others are\nless than 20m each. Is it normal to have multiple postmaster\nprocesses? Even the biggest process doesn't seem to be using near as\nmuch RAM as I have on this machine. Is that bad? What percentage of\nmy physical memory should I expect postgres to use for itself? How\ncan I encourage it to cache more query results in memory?\n\nThanks in advance for your time.\n\nCarl Youngblood\n",
"msg_date": "Wed, 9 Aug 2006 22:00:00 -0600",
"msg_from": "\"Carl Youngblood\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Beginner optimization questions,\n esp. regarding Tsearch2 configuration"
},
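For readers following the tsearch2 discussion above: a minimal sketch of the kind of full-text setup being described, following the contrib/tsearch2 conventions of the 8.1 era. The table and column names are invented for illustration and are not taken from the poster's schema.

-- Hypothetical resume table; assumes contrib/tsearch2 is installed.
ALTER TABLE resumes ADD COLUMN fts tsvector;
UPDATE resumes SET fts = to_tsvector('default', body);
CREATE INDEX resumes_fts_idx ON resumes USING gist (fts);
VACUUM ANALYZE resumes;

-- A typical search against the indexed column:
SELECT id FROM resumes WHERE fts @@ to_tsquery('default', 'postgresql & dba');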
{
"msg_contents": "Carl Youngblood wrote:\n> - I noticed that there are six different postmaster daemons running.\n> Only one of them is taking up a lot of RAM (1076m virtual and 584m\n> resident). The second one is using 181m resident while the others are\n> less than 20m each. Is it normal to have multiple postmaster\n> processes?\n\nYou should have one master backend process and one per connection. PG is \na classic multi-process designed server.\n\n > Even the biggest process doesn't seem to be using near as\n> much RAM as I have on this machine. Is that bad? What percentage of\n> my physical memory should I expect postgres to use for itself? How\n> can I encourage it to cache more query results in memory?\n\nOK - one of the key things with PostgreSQL is that it relies on the O.S. \nto cache its disk files. So, allocating too much memory to PG can be \ncounterproductive.\n\n From your figures, you're allocating about 64MB to work_mem, which is \nper sort. So, a complex query could use several times that amount. If \nyou don't have many concurrent queries that might be what you want.\n\nAlso, you've allocated 1GB to your shared_buffers which is more than I'd \nuse as a starting point.\n\nYou've only mentioned one main table with 100,000 rows, so presumably \nyou're going to cache the entire DB in RAM. So, you'll want to increase \neffective_cache_size and reduce random_page_cost.\n\n-- \n Richard Huxton\n Archonet Ltd\n",
"msg_date": "Thu, 10 Aug 2006 10:23:55 +0100",
"msg_from": "Richard Huxton <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
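A sketch of how the settings Richard mentions can be tried per session before editing postgresql.conf. The values below are examples only, and the units follow the 8.1 conventions (work_mem in kB, effective_cache_size in 8 kB pages).

SET work_mem = 32768;               -- roughly 32 MB per sort/hash step (example value)
SET effective_cache_size = 262144;  -- planner hint: about 2 GB of OS cache (example value)
SET random_page_cost = 2.0;         -- cheaper random I/O when data is mostly cached
SHOW work_mem;                      -- confirm, then re-run the query under test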
{
"msg_contents": "Hi, Richard and Carl,\n\nRichard Huxton wrote:\n> Carl Youngblood wrote:\n>> - I noticed that there are six different postmaster daemons running.\n>> Only one of them is taking up a lot of RAM (1076m virtual and 584m\n>> resident). The second one is using 181m resident while the others are\n>> less than 20m each. Is it normal to have multiple postmaster\n>> processes?\n> \n> You should have one master backend process and one per connection. PG is\n> a classic multi-process designed server.\n\nThere may be some additional background processes, such as the\nbackground writer, stats collector or autovacuum, depending on your\nversion and configuration.\n\nHTH,\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 10 Aug 2006 13:07:08 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "Thanks a lot for the advice Richard. I will try those things out and\nreport back to the list.\n\nCarl\n\nOn 8/10/06, Richard Huxton <[email protected]> wrote:\n> From your figures, you're allocating about 64MB to work_mem, which is\n> per sort. So, a complex query could use several times that amount. If\n> you don't have many concurrent queries that might be what you want.\n>\n> Also, you've allocated 1GB to your shared_buffers which is more than I'd\n> use as a starting point.\n>\n> You've only mentioned one main table with 100,000 rows, so presumably\n> you're going to cache the entire DB in RAM. So, you'll want to increase\n> effective_cache_size and reduce random_page_cost.\n",
"msg_date": "Thu, 10 Aug 2006 22:18:53 -0600",
"msg_from": "\"Carl Youngblood\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "On Thu, Aug 10, 2006 at 10:23:55AM +0100, Richard Huxton wrote:\n> Carl Youngblood wrote:\n> >- I noticed that there are six different postmaster daemons running.\n> >Only one of them is taking up a lot of RAM (1076m virtual and 584m\n> >resident). The second one is using 181m resident while the others are\n> >less than 20m each. Is it normal to have multiple postmaster\n> >processes?\n> \n> You should have one master backend process and one per connection. PG is \n> a classic multi-process designed server.\n> \n> > Even the biggest process doesn't seem to be using near as\n> >much RAM as I have on this machine. Is that bad? What percentage of\n> >my physical memory should I expect postgres to use for itself? How\n> >can I encourage it to cache more query results in memory?\n> \n> OK - one of the key things with PostgreSQL is that it relies on the O.S. \n> to cache its disk files. So, allocating too much memory to PG can be \n> counterproductive.\n> \n> From your figures, you're allocating about 64MB to work_mem, which is \n> per sort. So, a complex query could use several times that amount. If \n> you don't have many concurrent queries that might be what you want.\n> \n> Also, you've allocated 1GB to your shared_buffers which is more than I'd \n> use as a starting point.\n \nSee the recent thread about how old rules of thumb for shared_buffers\nare now completely bunk. With 4G of memory, setting shared_buffers to 2G\ncould easily be reasonable. The OP really needs to test several\ndifferent values with their actual workload and see what works best.\n\n> You've only mentioned one main table with 100,000 rows, so presumably \n> you're going to cache the entire DB in RAM. So, you'll want to increase \n> effective_cache_size and reduce random_page_cost.\n> \n> -- \n> Richard Huxton\n> Archonet Ltd\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 10:05:30 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "I tried setting it to 2GB and postgres wouldn't start. Didn't\ninvestigate in much greater detail as to why it wouldn't start, but\nafter switching it back to 1GB it started fine.\n\nOn 8/15/06, Jim C. Nasby <[email protected]> wrote:\n> See the recent thread about how old rules of thumb for shared_buffers\n> are now completely bunk. With 4G of memory, setting shared_buffers to 2G\n> could easily be reasonable. The OP really needs to test several\n> different values with their actual workload and see what works best.\n",
"msg_date": "Tue, 15 Aug 2006 12:47:54 -0600",
"msg_from": "\"Carl Youngblood\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "By the way, can you please post a link to that thread?\n\nOn 8/15/06, Jim C. Nasby <[email protected]> wrote:\n> See the recent thread about how old rules of thumb for shared_buffers\n> are now completely bunk. With 4G of memory, setting shared_buffers to 2G\n> could easily be reasonable. The OP really needs to test several\n> different values with their actual workload and see what works best.\n",
"msg_date": "Tue, 15 Aug 2006 12:49:04 -0600",
"msg_from": "\"Carl Youngblood\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "On Tue, 15 Aug 2006 12:47:54 -0600\n\"Carl Youngblood\" <[email protected]> wrote:\n\n> I tried setting it to 2GB and postgres wouldn't start. Didn't\n> investigate in much greater detail as to why it wouldn't start, but\n> after switching it back to 1GB it started fine.\n> \n> On 8/15/06, Jim C. Nasby <[email protected]> wrote:\n> > See the recent thread about how old rules of thumb for\n> > shared_buffers are now completely bunk. With 4G of memory, setting\n> > shared_buffers to 2G could easily be reasonable. The OP really\n> > needs to test several different values with their actual workload\n> > and see what works best.\n\n Sounds like you need to increase your kernel's maximum amount\n of shared memory. This is typically why an increase in\n shared_buffers causes PostgreSQL not to start. \n\n Check out this page in the docs for more information: \n\n http://www.postgresql.org/docs/8.1/static/kernel-resources.html\n\n ---------------------------------\n Frank Wiles <[email protected]>\n http://www.wiles.org\n ---------------------------------\n\n",
"msg_date": "Tue, 15 Aug 2006 14:08:06 -0500",
"msg_from": "Frank Wiles <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beginner optimization questions, esp. regarding"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 12:47:54PM -0600, Carl Youngblood wrote:\n> I tried setting it to 2GB and postgres wouldn't start. Didn't\n> investigate in much greater detail as to why it wouldn't start, but\n> after switching it back to 1GB it started fine.\n \nMost likely because you didn't set the kernel's shared memory settings\nhigh enough.\n\nTo answer you other question:\nhttp://archives.postgresql.org/pgsql-performance/2006-08/msg00095.php\n\n> On 8/15/06, Jim C. Nasby <[email protected]> wrote:\n> >See the recent thread about how old rules of thumb for shared_buffers\n> >are now completely bunk. With 4G of memory, setting shared_buffers to 2G\n> >could easily be reasonable. The OP really needs to test several\n> >different values with their actual workload and see what works best.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 14:21:46 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "The relevant portion of my sysctl.conf file looks like this:\n\nkernel.shmall = 2097152\nkernel.shmmax = 2147483648\nkernel.shmmni = 4096\nkernel.sem = 250 32000 100 128\nfs.file-max = 65536\n\nI understood it was a good idea to set shmmax to half of available\nmemory (2GB in this case). I assume that I need to set shared_buffers\nslightly lower than 2GB for postgresql to start successfully.\n\nCarl\n\nOn 8/15/06, Jim C. Nasby <[email protected]> wrote:\n> On Tue, Aug 15, 2006 at 12:47:54PM -0600, Carl Youngblood wrote:\n> > I tried setting it to 2GB and postgres wouldn't start. Didn't\n> > investigate in much greater detail as to why it wouldn't start, but\n> > after switching it back to 1GB it started fine.\n>\n> Most likely because you didn't set the kernel's shared memory settings\n> high enough.\n",
"msg_date": "Wed, 16 Aug 2006 09:34:24 -0600",
"msg_from": "\"Carl Youngblood\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
},
{
"msg_contents": "On Wed, Aug 16, 2006 at 09:34:24AM -0600, Carl Youngblood wrote:\n> The relevant portion of my sysctl.conf file looks like this:\n> \n> kernel.shmall = 2097152\n> kernel.shmmax = 2147483648\n> kernel.shmmni = 4096\n> kernel.sem = 250 32000 100 128\n> fs.file-max = 65536\n> \n> I understood it was a good idea to set shmmax to half of available\n> memory (2GB in this case). I assume that I need to set shared_buffers\n\nI don't see any reason to do that, so long as you have control over\nwhat's being run on the system. Just set it to 3000000000 or so.\n\n> slightly lower than 2GB for postgresql to start successfully.\n> \n> Carl\n> \n> On 8/15/06, Jim C. Nasby <[email protected]> wrote:\n> >On Tue, Aug 15, 2006 at 12:47:54PM -0600, Carl Youngblood wrote:\n> >> I tried setting it to 2GB and postgres wouldn't start. Didn't\n> >> investigate in much greater detail as to why it wouldn't start, but\n> >> after switching it back to 1GB it started fine.\n> >\n> >Most likely because you didn't set the kernel's shared memory settings\n> >high enough.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Wed, 16 Aug 2006 23:57:23 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Beginner optimization questions, esp. regarding Tsearch2"
}
] |
[
{
"msg_contents": "Hi all,\n\nThis is my first post to the performance list, I hope someone can help me.\n\nI'm setting up a table with 2 columns, both of which reference a column \nin another table:\n\nCREATE TABLE headwords_core_lexemes (\ncore_id int REFERENCES headwords_core(core_id),\nlexeme_id int REFERENCES headwords_core(core_id),\n);\n\nTrouble is, it's taken 18 hours and counting! The table headwords_core \nonly has about 13,000 lines, and core_id is the primary key on that \ntable. However, I assume it must be those 13,000 lines that are the \nproblem, since if I try it referencing a similar table with 360 lines \nthe new table is created almost instantly.\n\nI found a post on a similar subject from quite a while ago, but no \nanswer, and that was for millions of rows anyway. I only have 13,000. \nSurely it should be faster than this? Is there a way to speed it up?\n\nSue Fitt\n\n",
"msg_date": "Thu, 10 Aug 2006 09:05:35 +0100",
"msg_from": "Sue Fitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "setting up foreign keys"
},
{
"msg_contents": "Sue Fitt wrote:\n> Hi all,\n> \n> This is my first post to the performance list, I hope someone can help me.\n> \n> I'm setting up a table with 2 columns, both of which reference a column \n> in another table:\n> \n> CREATE TABLE headwords_core_lexemes (\n> core_id int REFERENCES headwords_core(core_id),\n> lexeme_id int REFERENCES headwords_core(core_id),\n> );\n\nOne problem here is both of these are referencing the same column ;) I'm \nsure that's a typo.\n\nIt sounds like you have something blocking or locking the other table. \nCheck pg_locks (I think it is), 13,000 rows shouldn't take *that* long.\n\n\nMake sure there is an index on headwords_core(core_id) and whatever the \nother column should be.\n\nForeign keys have to check the other table so without those indexes, it \nwill be slow(er).\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Thu, 10 Aug 2006 18:15:00 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
},
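A sketch of the pg_locks check suggested above, assuming the 8.x system catalogs (pg_stat_activity exposes the backend pid as procpid in these versions). It lists ungranted lock requests plus anything holding a lock on the table in question.

SELECT l.pid, l.relation::regclass AS locked_table, l.mode, l.granted,
       a.usename, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid   -- "procpid" is the pre-9.2 column name
 WHERE NOT l.granted
    OR l.relation = 'headwords_core'::regclass;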
{
"msg_contents": "Thanks Chris and Chris, you've solved it.\n\nI had a gui open that connects to the database. It was doing nothing \n(and not preventing me adding to or altering headwords_core via psql), \nbut having closed it the table is instantly created. Weird.\n\nBTW, referencing the same column twice is deliberate, it's a \ncross-reference.\n\nSue\n\nChris Mair wrote:\n >> This is my first post to the performance list, I hope someone can \nhelp me.\n >>\n >> I'm setting up a table with 2 columns, both of which reference a \ncolumn in another table:\n >>\n >> CREATE TABLE headwords_core_lexemes (\n >> core_id int REFERENCES headwords_core(core_id),\n >> lexeme_id int REFERENCES headwords_core(core_id),\n >> );\n >>\n >> Trouble is, it's taken 18 hours and counting! The table \nheadwords_core only has about 13,000 lines, and core_id is the primary \nkey on that table. However, I assume it must be those 13,000 lines that \nare the problem, since if I try it referencing a similar table with 360 \nlines the new table is created almost instantly.\n >> \n >\n > Hi,\n >\n > the 13000 rows in headwords_core don't matter at all for what this\n > statement concerns. I bet you have another idle transaction that keeps\n > headwords_core locked, for example because you did an\n > alter table headwords_core there...\n >\n > Bye,\n > Chris.\n >\n >\n",
"msg_date": "Thu, 10 Aug 2006 10:04:55 +0100",
"msg_from": "Sue Fitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "Sue Fitt wrote:\n> Thanks Chris and Chris, you've solved it.\n> \n> I had a gui open that connects to the database. It was doing nothing \n> (and not preventing me adding to or altering headwords_core via psql), \n> but having closed it the table is instantly created. Weird.\n> \n> BTW, referencing the same column twice is deliberate, it's a \n> cross-reference.\n\nThe same column and the same table?\n\nSame column different table I could understand but not the same column & \ntable ;)\n\nI'm sure there's a reason for it though :)\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Thu, 10 Aug 2006 19:13:29 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "Well they don't necessarily have the same value!\n\nIt's a dictionary with cross-referenced words, e.g. 'bring' and \n'brought' are both headwords in the dictionary, but 'brought' is \ncross-referenced to 'bring'. So, the table stores the information (using \ninteger id's rather than words) that\n bring: bring\n brought: see bring\n sing: sing\n sang: see sing\netc.\n\nSue\n\nChris wrote:\n> Sue Fitt wrote:\n>> Thanks Chris and Chris, you've solved it.\n>>\n>> I had a gui open that connects to the database. It was doing nothing \n>> (and not preventing me adding to or altering headwords_core via \n>> psql), but having closed it the table is instantly created. Weird.\n>>\n>> BTW, referencing the same column twice is deliberate, it's a \n>> cross-reference.\n>\n> The same column and the same table?\n>\n> Same column different table I could understand but not the same column \n> & table ;)\n>\n> I'm sure there's a reason for it though :)\n>\n",
"msg_date": "Thu, 10 Aug 2006 10:20:45 +0100",
"msg_from": "Sue Fitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "On 8/10/06, Chris <[email protected]> wrote:\n> Sue Fitt wrote:\n> > Thanks Chris and Chris, you've solved it.\n> >\n> > I had a gui open that connects to the database. It was doing nothing\n> > (and not preventing me adding to or altering headwords_core via psql),\n> > but having closed it the table is instantly created. Weird.\n> >\n> > BTW, referencing the same column twice is deliberate, it's a\n> > cross-reference.\n>\n> The same column and the same table?\n>\n> Same column different table I could understand but not the same column &\n> table ;)\n\ncreate table color(color text);\n\ncreate table person(eye_color text references color(color), hair_color\ntext references color(color));\n\n;)\nmerlin\n",
"msg_date": "Thu, 10 Aug 2006 09:33:07 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "On Thu, 10 Aug 2006, Sue Fitt wrote:\n\n> Hi all,\n>\n> This is my first post to the performance list, I hope someone can help me.\n>\n> I'm setting up a table with 2 columns, both of which reference a column\n> in another table:\n>\n> CREATE TABLE headwords_core_lexemes (\n> core_id int REFERENCES headwords_core(core_id),\n> lexeme_id int REFERENCES headwords_core(core_id),\n> );\n>\n> Trouble is, it's taken 18 hours and counting!\n\nWhat precisely is taking the time, the create table itself? The only thing\nthat the create should be waiting for as far as I know is a lock on\nheadwords_core to add the triggers.\n",
"msg_date": "Thu, 10 Aug 2006 11:32:25 -0700 (PDT)",
"msg_from": "Stephan Szabo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "Merlin Moncure wrote:\n> On 8/10/06, Chris <[email protected]> wrote:\n>> Sue Fitt wrote:\n>> > Thanks Chris and Chris, you've solved it.\n>> >\n>> > I had a gui open that connects to the database. It was doing nothing\n>> > (and not preventing me adding to or altering headwords_core via psql),\n>> > but having closed it the table is instantly created. Weird.\n>> >\n>> > BTW, referencing the same column twice is deliberate, it's a\n>> > cross-reference.\n>>\n>> The same column and the same table?\n>>\n>> Same column different table I could understand but not the same column &\n>> table ;)\n> \n> create table color(color text);\n> \n> create table person(eye_color text references color(color), hair_color\n> text references color(color));\n\nlol. Good point :)\n\n*back to the hidey hole!*\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Fri, 11 Aug 2006 08:36:11 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "Solved, it turned out to be a lock caused by a gui connected to the \ndatabase, even though the gui wasn't actually doing anything at the time...\n\nSue\n\nStephan Szabo wrote:\n> On Thu, 10 Aug 2006, Sue Fitt wrote:\n>\n> \n>> Hi all,\n>>\n>> This is my first post to the performance list, I hope someone can help me.\n>>\n>> I'm setting up a table with 2 columns, both of which reference a column\n>> in another table:\n>>\n>> CREATE TABLE headwords_core_lexemes (\n>> core_id int REFERENCES headwords_core(core_id),\n>> lexeme_id int REFERENCES headwords_core(core_id),\n>> );\n>>\n>> Trouble is, it's taken 18 hours and counting!\n>> \n>\n> What precisely is taking the time, the create table itself? The only thing\n> that the create should be waiting for as far as I know is a lock on\n> headwords_core to add the triggers.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n> \n",
"msg_date": "Fri, 11 Aug 2006 12:48:01 +0100",
"msg_from": "Sue Fitt <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "On Thu, Aug 10, 2006 at 10:20:45AM +0100, Sue Fitt wrote:\n> Well they don't necessarily have the same value!\n> \n> It's a dictionary with cross-referenced words, e.g. 'bring' and \n> 'brought' are both headwords in the dictionary, but 'brought' is \n> cross-referenced to 'bring'. So, the table stores the information (using \n> integer id's rather than words) that\n> bring: bring\n> brought: see bring\n> sing: sing\n> sang: see sing\n> etc.\n \nIf that's actually how it's represented (a row for both sing and song)\nit's denormalized. My rule of thumb is \"normalize 'til it hurts,\ndenormalize 'til it works\", meaning only denormalize if you need to for\nperformance reasons. In this case, it's certainly possible that\nperformance-wise you're best off denormalized, but you might want to\nexperiment and find out.\n\nBTW, the normalized way to store this info would be to only put records\nin that table for brought and song.\n\n> Sue\n> \n> Chris wrote:\n> >Sue Fitt wrote:\n> >>Thanks Chris and Chris, you've solved it.\n> >>\n> >>I had a gui open that connects to the database. It was doing nothing \n> >>(and not preventing me adding to or altering headwords_core via \n> >>psql), but having closed it the table is instantly created. Weird.\n> >>\n> >>BTW, referencing the same column twice is deliberate, it's a \n> >>cross-reference.\n> >\n> >The same column and the same table?\n> >\n> >Same column different table I could understand but not the same column \n> >& table ;)\n> >\n> >I'm sure there's a reason for it though :)\n> >\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 11:01:24 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
}
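One way to express the normalized cross-reference form Jim describes, sketched with hypothetical names around the headwords_core table from this thread: only genuine "see also" entries get a row, instead of one row per headword.

CREATE TABLE headword_xrefs (
    headword_id  int NOT NULL REFERENCES headwords_core(core_id),
    refers_to_id int NOT NULL REFERENCES headwords_core(core_id),
    PRIMARY KEY (headword_id, refers_to_id)
);
-- e.g. record that 'brought' points at 'bring' (the ids are assumed to exist):
-- INSERT INTO headword_xrefs VALUES (42, 7);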
] |
[
{
"msg_contents": "Hi all,\n\nI have an application that uses PostgreSQL to store its data. The \napplication and an instance of the database have been installed in three \ndifferent locations, and none of these three locations have anything to \ndo with any of the others. I'm observing a problem in that large \ntransfers to some machines on the network (specifically while running \npg_dump) are dead slow. In fact, the information is going from the \nserver to the client machine at dialup speeds over a 100 Mb LAN to some \nmachines, and full speed to others.\n\nThis not a universal problem. Obviously, I'm not experiencing it at my \ndevelopment location, or I would have found and fixed it by now. One of \nthe production installations had no problems. The second of the \nproduction environments experienced the problem on one out of 4 laptops \n(all the desktop machines were OK) until their technical guy uninstalled \nAVG (anti-virus). The third location has 4 laptops that are all slow in \ntransferring PostgreSQL data, while the desktop machines are OK. There \nare no problems with copying files across the network. At the third \nlocation, they have the same software installed on the laptops and \ndesktops, including the Vet security suite. Suspecting that something \nwas screwing up the transfers by fiddling with packets, we suspended \nVet, but that didn't help. We're going to try changing NICs and checking \nto see what happens when Pg runs on port 80.\n\nHas anyone experienced this sort of thing before? We're running with \n8.0.4. My application uses libpg, while another application is using \nOLEDB. Both the native and OLEDB layers exhibit the delay on the \"slow\" \nmachines, and have no problems on the \"fast\" machines. Note that the \nlaptops are in no way inferior to the desktop machines in terms of CPU, \nRAM, etc.\n\nTIA,\n Phil (yak from the build farm).\n",
"msg_date": "Thu, 10 Aug 2006 20:00:38 +1000",
"msg_from": "Phil Cairns <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow access to PostgreSQL server"
},
{
"msg_contents": "On 8/10/06, Phil Cairns <[email protected]> wrote:\n> Hi all,\n>\n> I have an application that uses PostgreSQL to store its data. The\n> application and an instance of the database have been installed in three\n> different locations, and none of these three locations have anything to\n> do with any of the others. I'm observing a problem in that large\n> transfers to some machines on the network (specifically while running\n> pg_dump) are dead slow. In fact, the information is going from the\n> server to the client machine at dialup speeds over a 100 Mb LAN to some\n> machines, and full speed to others.\n\nthere have been numerous problems reported on windows due to various\napplications, especially malware and virus scanners, that cause this\nproblem. be especially cautious about anything that runs in kernel\nmode or runs as a LSP.\n\nmerlin\n",
"msg_date": "Thu, 10 Aug 2006 16:55:54 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow access to PostgreSQL server"
},
{
"msg_contents": "Hi, Phil,\n\nPhil Cairns wrote:\n\n> Has anyone experienced this sort of thing before? We're running with\n> 8.0.4. My application uses libpg, while another application is using\n> OLEDB. Both the native and OLEDB layers exhibit the delay on the \"slow\"\n> machines, and have no problems on the \"fast\" machines. Note that the\n> laptops are in no way inferior to the desktop machines in terms of CPU,\n> RAM, etc.\n\nCan you try to rsync / netcat some large files / random data through\nnonstandard ports in both directions, and see whether that reproduces\nthe behaviour? I also think using PostgreSQL on port 80 might be an\ninteresting test.\n\nIt might be a driver or \"security software\" issue...\n\nWhen http and network drive transfers work fast, but transfers on\nnonstandard ports (postgreSQL uses 5432) work slow, I'd suspect some\npersonal firewall or antivirus network filtering software.\n\nHTH,\nMarku\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Fri, 11 Aug 2006 11:16:03 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow access to PostgreSQL server"
},
{
"msg_contents": "Hi,\n\nOn Thursday 10 August 2006 12:00, Phil Cairns wrote:\n| In fact, the information is going from the\n| server to the client machine at dialup speeds over a 100 Mb LAN to some\n| machines, and full speed to others.\n[...]\n| There are no problems with copying files across the network.\n\nand you are really really sure that this is not a network issue? I'd\ndouble check that this is not a duplex mismatch, misconfigured router\nor switch or something in that direction.\n\nCiao,\nThomas\n\n-- \nThomas Pundt <[email protected]> ---- http://rp-online.de/ ----\n",
"msg_date": "Fri, 11 Aug 2006 11:34:49 +0200",
"msg_from": "Thomas Pundt <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow access to PostgreSQL server"
}
] |
[
{
"msg_contents": "Please cc the list so others can help.\n\nHow large is the database? What indexes are on the tables you're inserting into? What speed is the drive?\n\nSince it's a single SCSI drive I'm assuming it's only 10k RPM, which means the theoretical maximum you can hit is 160 transfers per second. At 40 inserts per second (I'm assuming each insert is it's own transaction), you're already at 40 WAL operations per second, minimum. Plus whatever traffic you have to the data tables.\n\nYour biggest win would be to batch those inserts together into transactions, if possible. If not, the commit_delay settings might help you out.\n\nThere may be some further gains to be had by tweaking the background writer settings; it might be too aggressive in your application.\n\nThat update statement could also be causing a lot of activity, depending on what it's doing.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n\n-----Original Message-----\nFrom: Kumarselvan S [mailto:[email protected]]\nSent: Wed 8/9/2006 11:33 PM\nTo: Jim Nasby\nSubject: RE: [BUGS] BUG #2567: High IOWAIT\n \nYes , it is not a Bug. \nHere the some Info abt the Hardware\nIt has an SCSI Drive.\nIt an dell made quad processor machine. \n\nThe changes to Postgresql.conf\n1. max_connections =50\n2. shared buffer = 30000\n3. Temp buffer 20000\n\nRegards,\nKumar\n-----Original Message-----\nFrom: Jim C. Nasby [mailto:[email protected]] \nSent: Thursday, August 10, 2006 3:57 AM\nTo: kumarselvan\nCc: [email protected]\nSubject: Re: [BUGS] BUG #2567: High IOWAIT\n\nThis isn't a bug; moving to pgsql-performance.\n\nOn Tue, Aug 08, 2006 at 08:42:02AM +0000, kumarselvan wrote:\n> i have installed the postgres as mentioned in the Install file. it is a 4\n> cpu 8 GB Ram Machine installed with Linux Enterprise version 3. when i am\n> running a load which will perfrom 40 inserts persecond on 2 tables and 10\n> updates per 10seconds on differnt table IOWait on avg going upto 70% due\nto\n> which i am not able to increase the load. Is there is any other way to\n> install the postgres on multiprocessor machine.. can any one help me on\n> this...\n\nYou haven't given us nearly enough information. What kind of hardware is\nthis? RAID? What changes have you made to postgresql.conf?\n-- \nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\nquad\n\n\n\n",
"msg_date": "Thu, 10 Aug 2006 11:53:01 -0500",
"msg_from": "\"Jim Nasby\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] BUG #2567: High IOWAIT"
},
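An illustration of the batching advice above, using a hypothetical table: grouping the per-second inserts into explicit transactions means one WAL flush per COMMIT rather than one per INSERT.

CREATE TABLE load_test (ts timestamptz, payload text);  -- hypothetical table

BEGIN;
INSERT INTO load_test VALUES (now(), 'row 1');
INSERT INTO load_test VALUES (now(), 'row 2');
INSERT INTO load_test VALUES (now(), 'row 3');
-- ... the rest of the batch ...
COMMIT;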
{
"msg_contents": "Hi, Jim,\n\nJim Nasby wrote:\n\n> Your biggest win would be to batch those inserts together into\n> transactions, if possible.\n\nUsing COPY instead of INSERT might even give better wins, and AFAIK some\nclient libs use COPY internally (e. G. tablewriter from libpqxx).\n\n> If not, the commit_delay settings might help you out.\n\nAs far as I understand, this will only help for concurrent inserts by\ndifferent clients, dealing throughput for latency. Please correct me if\nI'm wrong.\n\nHTH,\nMarkus\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Fri, 11 Aug 2006 10:48:35 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] BUG #2567: High IOWAIT"
}
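And a COPY-based version of the same hypothetical load, along the lines Markus suggests: in psql the rows follow the command on standard input, tab-separated, terminated by a backslash-dot line, so the whole batch is one command and one transaction.

COPY load_test (ts, payload) FROM STDIN;
2006-08-11 10:00:00+02	row 1
2006-08-11 10:00:01+02	row 2
\.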
] |
[
{
"msg_contents": "Sort of on topic, how many foreign keys in a single table is good v.\nbad? I realize it's relative to the tables the FK's reference so here's\nan example:\n\nTable A: 300 rows\nTable B: 15,000,000 rows\nTable C: 100,000 rows\nTable E: 38 rows\nTable F: 9 rows\nTable G: is partitioned on the FK from Table A and has a FK column for\neach of the above tables\n\nI'm in the process of normalizing the database and have a schema like\nthis in mind. Works wonderfully for SELECT's but haven't gotten the\ndata import process down just yet so I haven't had a chance to put it\nthrough it's paces. Depending on the performance of INSERT, UPDATE, and\nCOPY I may drop the FK constraints since my app could enforce the FK\nchecks.\n\nTIA.\n\nGreg\n \n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Chris\n> Sent: Thursday, August 10, 2006 6:36 PM\n> To: Merlin Moncure\n> Cc: Sue Fitt; [email protected]\n> Subject: Re: [PERFORM] setting up foreign keys\n> \n> Merlin Moncure wrote:\n> > On 8/10/06, Chris <[email protected]> wrote:\n> >> Sue Fitt wrote:\n> >> > Thanks Chris and Chris, you've solved it.\n> >> >\n> >> > I had a gui open that connects to the database. It was doing \n> >> > nothing (and not preventing me adding to or altering \n> headwords_core \n> >> > via psql), but having closed it the table is instantly \n> created. Weird.\n> >> >\n> >> > BTW, referencing the same column twice is deliberate, it's a \n> >> > cross-reference.\n> >>\n> >> The same column and the same table?\n> >>\n> >> Same column different table I could understand but not the same \n> >> column & table ;)\n> > \n> > create table color(color text);\n> > \n> > create table person(eye_color text references color(color), \n> hair_color \n> > text references color(color));\n> \n> lol. Good point :)\n> \n> *back to the hidey hole!*\n> \n> --\n> Postgresql & php tutorials\n> http://www.designmagick.com/\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] \n> so that your\n> message can get through to the mailing list cleanly\n> \n",
"msg_date": "Fri, 11 Aug 2006 15:01:15 -0400",
"msg_from": "\"Spiegelberg, Greg\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: setting up foreign keys"
},
{
"msg_contents": "Spiegelberg, Greg wrote:\n> Sort of on topic, how many foreign keys in a single table is good v.\n> bad? I realize it's relative to the tables the FK's reference so here's\n> an example:\n> \n> Table A: 300 rows\n> Table B: 15,000,000 rows\n> Table C: 100,000 rows\n> Table E: 38 rows\n> Table F: 9 rows\n> Table G: is partitioned on the FK from Table A and has a FK column for\n> each of the above tables\n> \n> I'm in the process of normalizing the database and have a schema like\n> this in mind. Works wonderfully for SELECT's but haven't gotten the\n> data import process down just yet so I haven't had a chance to put it\n> through it's paces. Depending on the performance of INSERT, UPDATE, and\n> COPY I may drop the FK constraints since my app could enforce the FK\n> checks.\n\nAs long as both sides of the FK's are indexed I don't think you'll have \na problem with a particular number of FK's per table.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Mon, 14 Aug 2006 12:14:11 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: setting up foreign keys"
}
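A sketch of the indexing Chris refers to, using hypothetical names for the example tables above: the referenced primary keys are already indexed, but the referencing columns in table G need their own indexes for the FK checks and joins to stay fast.

CREATE INDEX table_g_a_id_idx ON table_g (a_id);
CREATE INDEX table_g_b_id_idx ON table_g (b_id);
CREATE INDEX table_g_c_id_idx ON table_g (c_id);
-- ...and likewise for the columns referencing tables E and F.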
] |
[
{
"msg_contents": "Hello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a serious problem:\nTable: APORTES - Rows: 9,000,000 (9 million)\n*cuiT (char 11)\n*cuiL (char 11)\n*PERI (char 6)\nFAMI (numeric 6)\n\nI need all the cuiLs whose max(PERI) are from a cuiT, and the Max(FAMI) of those cuiLs, so the sentence is:\n\nSELECT DISTINCT T.cuiT, T.cuiL. U.MAXPERI, U.MAXFAMI\n FROM APORTES T\n INNER JOIN\n (SELECT cuiL, MAX(PERI) AS MAXPERI,\n MAX(FAMI) AS MAXFAMI\n FROM APORTES\n GROUP BY cuiL) AS U\n ON T.cuiL = U.cuiL AND T.PERI=U.MAXPERI\nWHERE T.cuiT='12345678901'\n\nIn MS SQL Server it lasts 1minute, in PostgreSQL for Windows it lasts 40minutes and in PostgreSQL for Linux (FreeBSD) it lasts 20minuts.\n\nDo you know if there is any way to tune the server or optimize this sentence?\n\nThanks\n Sebasti�n Baioni\n\n Sebasti�n Baioni \n \t\t\n---------------------------------\n Pregunt�. Respond�. Descubr�.\n Todo lo que quer�as saber, y lo que ni imaginabas,\n est� en Yahoo! Respuestas (Beta).\n Probalo ya! \nHello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a serious problem:Table: APORTES - Rows: 9,000,000 (9 million)*cuiT (char 11)*cuiL (char 11)*PERI (char 6)FAMI (numeric 6)I need all the cuiLs whose max(PERI) are from a cuiT, and the Max(FAMI) of those cuiLs, so the sentence is:SELECT DISTINCT T.cuiT, T.cuiL. U.MAXPERI, U.MAXFAMI FROM APORTES T INNER JOIN (SELECT cuiL, MAX(PERI) AS MAXPERI, MAX(FAMI) AS MAXFAMI FROM APORTES GROUP BY cuiL) AS U ON T.cuiL = U.cuiL AND T.PERI=U.MAXPERIWHERE T.cuiT='12345678901'In MS SQL Server it lasts 1minute, in PostgreSQL for Windows it lasts 40minutes and in PostgreSQL for Linux (FreeBSD) it lasts 20minuts.Do you know if there is any way to tune the server or optimize this sentence?Thanks Sebasti�n Baioni\n\nSebasti�n Baioni\n\n\n\nPregunt�. Respond�. Descubr�. \nTodo lo que quer�as saber, y lo que ni imaginabas, \nest� en Yahoo! Respuestas (Beta).\nProbalo ya!",
"msg_date": "Tue, 15 Aug 2006 14:38:12 +0000 (GMT)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inner Join of the same table"
},
{
"msg_contents": "Can you provide an EXPLAIN ANALYZE of the query in PG? Have you\nanalyzed the PG database? How many rows is this query expected to\nreturn? Which version of PG are you running? What indexes have you\ndefined?\n\n-- Mark\n\nOn Tue, 2006-08-15 at 14:38 +0000, Sebastián Baioni wrote:\n> Hello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a\n> serious problem:\n> Table: APORTES - Rows: 9,000,000 (9 million)\n> *cuiT (char 11)\n> *cuiL (char 11)\n> *PERI (char 6)\n> FAMI (numeric 6)\n> \n> I need all the cuiLs whose max(PERI) are from a cuiT, and the Max\n> (FAMI) of those cuiLs, so the sentence is:\n> \n> SELECT DISTINCT T.cuiT, T.cuiL. U.MAXPERI, U.MAXFAMI\n> FROM APORTES T\n> INNER JOIN\n> (SELECT cuiL, MAX(PERI) AS MAXPERI,\n> MAX(FAMI) AS MAXFAMI\n> FROM APORTES\n> GROUP BY cuiL) AS U\n> ON T.cuiL = U.cuiL AND T.PERI=U.MAXPERI\n> WHERE T.cuiT='12345678901'\n> \n> In MS SQL Server it lasts 1minute, in PostgreSQL for Windows it lasts\n> 40minutes and in PostgreSQL for Linux (FreeBSD) it lasts 20minuts.\n> \n> Do you know if there is any way to tune the server or optimize this\n> sentence?\n> \n> Thanks\n> Sebastián Baioni\n> \n> Instrumentos musicalesSebastián Baioni Ofertas náuticas\n> \n> \n> ______________________________________________________________________\n> Preguntá. Respondé. Descubrí.\n> Todo lo que querías saber, y lo que ni imaginabas,\n> está en Yahoo! Respuestas (Beta).\n> Probalo ya! \n",
"msg_date": "Tue, 15 Aug 2006 08:10:05 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner Join of the same table"
},
{
"msg_contents": "Hi Nark, thanks for your answer.\n\nIt's expected to return 1,720 rows (of 80,471 that match with condition WHERE\nT.cuiT='12345678901')\n\nWe have indexes by :\nuesapt000: cuiT, cuiL, PERI;\nuesapt001: cuiL, PERI;\nuesapt002: cuiT, PERI;\n\nWe usually make a vacuum analyze and reindex of every table, and we are running 8.0 and 8.1 for\nwindows and 7.4 for Linux.\n\nHere is the EXPLAIN:\nQUERY PLAN\n 1 Unique (cost=37478647.41..37478650.53 rows=312 width=62)\n 2 -> Sort (cost=37478647.41..37478648.19 rows=312 width=62)\n 3 Sort Key: t.cuiT, t.cuiL, u.maxperi\n 4 -> Merge Join (cost=128944.78..37478634.48 rows=312 width=62)\n 5 Merge Cond: (\"outer\".cuiL = \"inner\".cuiL)\n 6 Join Filter: ((\"inner\".PERI)::text = \"outer\".maxperi)\n 7 -> Subquery Scan u (cost=0.00..37348434.56 rows=3951 width=47)\n 8 -> GroupAggregate (cost=0.00..37348395.05 rows=3951 width=25)\n 9 -> Index Scan using uesapt001 on APORTES (cost=0.00..37301678.64\nrows=9339331 width=25)\n10 -> Sort (cost=128944.78..129100.44 rows=62263 width=40)\n11 Sort Key: t.cuiL\n12 -> Index Scan using uesapt002 on APORTES t (cost=0.00..122643.90\nrows=62263 width=40)\n13 Index Cond: (cuiT = '30701965554'::bpchar)\n\nThanks\n Sebasti�n Baioni\n\n --- Mark Lewis <[email protected]> escribi�:\n\n> Can you provide an EXPLAIN ANALYZE of the query in PG? Have you\n> analyzed the PG database? How many rows is this query expected to\n> return? Which version of PG are you running? What indexes have you\n> defined?\n> \n> -- Mark\n> \n> On Tue, 2006-08-15 at 14:38 +0000, Sebasti�n Baioni wrote:\n> > Hello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a\n> > serious problem:\n> > Table: APORTES - Rows: 9,000,000 (9 million)\n> > *cuiT (char 11)\n> > *cuiL (char 11)\n> > *PERI (char 6)\n> > FAMI (numeric 6)\n> > \n> > I need all the cuiLs whose max(PERI) are from a cuiT, and the Max\n> > (FAMI) of those cuiLs, so the sentence is:\n> > \n> > SELECT DISTINCT T.cuiT, T.cuiL. U.MAXPERI, U.MAXFAMI\n> > FROM APORTES T\n> > INNER JOIN\n> > (SELECT cuiL, MAX(PERI) AS MAXPERI,\n> > MAX(FAMI) AS MAXFAMI\n> > FROM APORTES\n> > GROUP BY cuiL) AS U\n> > ON T.cuiL = U.cuiL AND T.PERI=U.MAXPERI\n> > WHERE T.cuiT='12345678901'\n> > \n> > In MS SQL Server it lasts 1minute, in PostgreSQL for Windows it lasts\n> > 40minutes and in PostgreSQL for Linux (FreeBSD) it lasts 20minuts.\n> > \n> > Do you know if there is any way to tune the server or optimize this\n> > sentence?\n> > \n> > Thanks\n> > Sebasti�n Baioni\n\n\n\t\n\t\n\t\t\n__________________________________________________\nPregunt�. Respond�. Descubr�.\nTodo lo que quer�as saber, y lo que ni imaginabas,\nest� en Yahoo! Respuestas (Beta).\n�Probalo ya! \nhttp://www.yahoo.com.ar/respuestas\n\n",
"msg_date": "Tue, 15 Aug 2006 15:43:29 +0000 (GMT)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner Join of the same table"
},
{
"msg_contents": "On Tue, Aug 15, 2006 at 03:43:29PM +0000, Sebasti?n Baioni wrote:\n> Hi Nark, thanks for your answer.\n> \n> It's expected to return 1,720 rows (of 80,471 that match with condition WHERE\n> T.cuiT='12345678901')\n> \n> We have indexes by :\n> uesapt000: cuiT, cuiL, PERI;\n> uesapt001: cuiL, PERI;\n> uesapt002: cuiT, PERI;\n> \n> We usually make a vacuum analyze and reindex of every table, and we are running 8.0 and 8.1 for\n> windows and 7.4 for Linux.\n> \n> Here is the EXPLAIN:\n> QUERY PLAN\n> 1 Unique (cost=37478647.41..37478650.53 rows=312 width=62)\n> 2 -> Sort (cost=37478647.41..37478648.19 rows=312 width=62)\n> 3 Sort Key: t.cuiT, t.cuiL, u.maxperi\n> 4 -> Merge Join (cost=128944.78..37478634.48 rows=312 width=62)\n> 5 Merge Cond: (\"outer\".cuiL = \"inner\".cuiL)\n> 6 Join Filter: ((\"inner\".PERI)::text = \"outer\".maxperi)\n> 7 -> Subquery Scan u (cost=0.00..37348434.56 rows=3951 width=47)\n> 8 -> GroupAggregate (cost=0.00..37348395.05 rows=3951 width=25)\n> 9 -> Index Scan using uesapt001 on APORTES (cost=0.00..37301678.64\n> rows=9339331 width=25)\n> 10 -> Sort (cost=128944.78..129100.44 rows=62263 width=40)\n> 11 Sort Key: t.cuiL\n> 12 -> Index Scan using uesapt002 on APORTES t (cost=0.00..122643.90\n> rows=62263 width=40)\n> 13 Index Cond: (cuiT = '30701965554'::bpchar)\n \nThat's EXPLAIN, not EXPLAIN ANALYZE, which doesn't help us much. Best\nbet would be an EXPLAIN ANALYZE from 8.1.x. It would also be useful to\nknow how MSSQL is executing this query.\n\nIf it would serve your purposes, copying the WHERE clause into the\nsubquery would really help things. I think it might also mean you could\ncombine everything into one query.\n\n> Thanks\n> Sebasti?n Baioni\n> \n> --- Mark Lewis <[email protected]> escribi?:\n> \n> > Can you provide an EXPLAIN ANALYZE of the query in PG? Have you\n> > analyzed the PG database? How many rows is this query expected to\n> > return? Which version of PG are you running? What indexes have you\n> > defined?\n> > \n> > -- Mark\n> > \n> > On Tue, 2006-08-15 at 14:38 +0000, Sebasti?n Baioni wrote:\n> > > Hello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a\n> > > serious problem:\n> > > Table: APORTES - Rows: 9,000,000 (9 million)\n> > > *cuiT (char 11)\n> > > *cuiL (char 11)\n> > > *PERI (char 6)\n> > > FAMI (numeric 6)\n> > > \n> > > I need all the cuiLs whose max(PERI) are from a cuiT, and the Max\n> > > (FAMI) of those cuiLs, so the sentence is:\n> > > \n> > > SELECT DISTINCT T.cuiT, T.cuiL. U.MAXPERI, U.MAXFAMI\n> > > FROM APORTES T\n> > > INNER JOIN\n> > > (SELECT cuiL, MAX(PERI) AS MAXPERI,\n> > > MAX(FAMI) AS MAXFAMI\n> > > FROM APORTES\n> > > GROUP BY cuiL) AS U\n> > > ON T.cuiL = U.cuiL AND T.PERI=U.MAXPERI\n> > > WHERE T.cuiT='12345678901'\n> > > \n> > > In MS SQL Server it lasts 1minute, in PostgreSQL for Windows it lasts\n> > > 40minutes and in PostgreSQL for Linux (FreeBSD) it lasts 20minuts.\n> > > \n> > > Do you know if there is any way to tune the server or optimize this\n> > > sentence?\n> > > \n> > > Thanks\n> > > Sebasti?n Baioni\n> \n> \n> \t\n> \t\n> \t\t\n> __________________________________________________\n> Pregunt?. Respond?. Descubr?.\n> Todo lo que quer?as saber, y lo que ni imaginabas,\n> est? en Yahoo! Respuestas (Beta).\n> ?Probalo ya! \n> http://www.yahoo.com.ar/respuestas\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \nJim C. Nasby, Sr. 
Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n",
"msg_date": "Tue, 15 Aug 2006 11:56:02 -0500",
"msg_from": "\"Jim C. Nasby\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner Join of the same table"
},
{
"msg_contents": "Hello Jim, we can't use the Where cuiT='12345678901' in the subquery because we need max(cuiL)\nindependently of that cuiT:\ncuiT cuiL PERI FAMI\n1 a 200608 0\n1 a 200601 2\n1 b 200607 3\n1 c 200605 4\n2 a 200605 9\n2 c 200604 4\n2 b 200608 1\nWe need:\nwhere cuiT = '1'\ncuiT cuiL PERI FAMI\n1 a 200608 9\n1 c 200605 4\nIf we place the Where cuiT = '1' in the subquery we couldn't get the max(FAMI) of cuiL a = 9 and\nwe couldn't know if that PERI is the max(PERI) of that cuiL independently of that cuiT.\n\nHere is the explain analyze with PG 8.0 for Windows:\nExplain Analyze\nSELECT DISTINCT T.cuiT,T.cuiL, U.MAXPERI AS ULT_APORTE_O_DDJJ\n\t\tFROM APORTES AS T\n\t\tINNER JOIN\n\t\t(\n\t\tSELECT cuiL, MAX(PERI) AS MAXPERI\n\t\tFROM APORTES\n\t\tGROUP BY cuiL\n\t\t) AS U ON T.cuiL=U.cuiL AND T.PERI=U.MAXPERI\nWHERE T.cuiT='12345678901'\norder by T.cuiT, T.cuiL, U.MAXPERI;\n\nQUERY PLAN\n 1 Unique (cost=37478647.41..37478650.53 rows=312 width=62) (actual time=2677209.000..2677520.000\nrows=1720 loops=1)\n 2 -> Sort (cost=37478647.41..37478648.19 rows=312 width=62) (actual\ntime=2677209.000..2677260.000 rows=3394 loops=1)\n 3 Sort Key: t.cuiT, t.cuiL, u.maxperi\n 4 -> Merge Join (cost=128944.78..37478634.48 rows=312 width=62) (actual\ntime=74978.000..2677009.000 rows=3394 loops=1)\n 5 Merge Cond: (\"outer\".cuiL = \"inner\".cuiL)\n 6 Join Filter: ((\"inner\".peri)::text = \"outer\".maxperi)\n 7 -> Subquery Scan u (cost=0.00..37348434.56 rows=3951 width=47) (actual\ntime=130.000..2634923.000 rows=254576 loops=1)\n 8 -> GroupAggregate (cost=0.00..37348395.05 rows=3951 width=25) (actual\ntime=130.000..2629617.000 rows=254576 loops=1)\n 9 -> Index Scan using uesapt001 on APORTES (cost=0.00..37301678.64\nrows=9339331 width=25) (actual time=110.000..2520690.000 rows=9335892 loops=1)\n10 -> Sort (cost=128944.78..129100.44 rows=62263 width=40) (actual\ntime=30684.000..36838.000 rows=80471 loops=1)\n11 Sort Key: t.cuiL\n12 -> Index Scan using uesapt002 on APORTES t (cost=0.00..122643.90\nrows=62263 width=40) (actual time=170.000..25566.000 rows=80471 loops=1)\n13 Index Cond: (cuiT = '12345678901'::bpchar)\nTotal runtime: 2677640.000 ms\n\nThanks\n Sebasti�n Baioni\n\n --- \"Jim C. Nasby\" <[email protected]> escribi�:\n\n> On Tue, Aug 15, 2006 at 03:43:29PM +0000, Sebasti?n Baioni wrote:\n> > Hi Nark, thanks for your answer.\n> > \n> > It's expected to return 1,720 rows (of 80,471 that match with condition WHERE\n> > T.cuiT='12345678901')\n> > \n> > We have indexes by :\n> > uesapt000: cuiT, cuiL, PERI;\n> > uesapt001: cuiL, PERI;\n> > uesapt002: cuiT, PERI;\n> > \n> > We usually make a vacuum analyze and reindex of every table, and we are running 8.0 and 8.1\nfor windows and 7.4 for Linux.\n> \n> That's EXPLAIN, not EXPLAIN ANALYZE, which doesn't help us much. Best\n> bet would be an EXPLAIN ANALYZE from 8.1.x. It would also be useful to\n> know how MSSQL is executing this query.\n> \n> If it would serve your purposes, copying the WHERE clause into the\n> subquery would really help things. I think it might also mean you could\n> combine everything into one query.\n> \n> > Thanks\n> > Sebasti?n Baioni\n> > \n> > --- Mark Lewis <[email protected]> escribi?:\n> > \n> > > Can you provide an EXPLAIN ANALYZE of the query in PG? Have you\n> > > analyzed the PG database? How many rows is this query expected to\n> > > return? Which version of PG are you running? 
What indexes have you\n> > > defined?\n> > > \n> > > -- Mark\n> > > \n> > > On Tue, 2006-08-15 at 14:38 +0000, Sebasti?n Baioni wrote:\n> > > > Hello, I'm migrating from MS SQL Server to PostgreSQL 8.1 and I have a\n> > > > serious problem:\n> > > > Table: APORTES - Rows: 9,000,000 (9 million)\n> > > > *cuiT (char 11)\n> > > > *cuiL (char 11)\n> > > > *PERI (char 6)\n> > > > FAMI (numeric 6)\n> > > > \n> > > > I need all the cuiLs whose max(PERI) are from a cuiT, and the Max\n> > > > (FAMI) of those cuiLs, so the sentence is:\n> > > > \n> > > > SELECT DISTINCT T.cuiT, T.cuiL. U.MAXPERI, U.MAXFAMI\n> > > > FROM APORTES T\n> > > > INNER JOIN\n> > > > (SELECT cuiL, MAX(PERI) AS MAXPERI,\n> > > > MAX(FAMI) AS MAXFAMI\n> > > > FROM APORTES\n> > > > GROUP BY cuiL) AS U\n> > > > ON T.cuiL = U.cuiL AND T.PERI=U.MAXPERI\n> > > > WHERE T.cuiT='12345678901'\n> > > > \n> > > > In MS SQL Server it lasts 1minute, in PostgreSQL for Windows it lasts\n> > > > 40minutes and in PostgreSQL for Linux (FreeBSD) it lasts 20minuts.\n> > > > \n> > > > Do you know if there is any way to tune the server or optimize this\n> > > > sentence?\n> > > > \n> > > > Thanks\n> > > > Sebasti�n Baioni\n\n\n\t\n\t\n\t\t\n__________________________________________________\nPregunt�. Respond�. Descubr�.\nTodo lo que quer�as saber, y lo que ni imaginabas,\nest� en Yahoo! Respuestas (Beta).\n�Probalo ya! \nhttp://www.yahoo.com.ar/respuestas\n\n",
"msg_date": "Tue, 15 Aug 2006 18:53:35 +0000 (GMT)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner Join of the same table"
},
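For reference, one formulation that avoids joining against the GROUP BY derived table is a correlated MAX() subquery. This is only a sketch assembled from the table, column and index names quoted in this thread (APORTES, cuiT, cuiL, PERI, uesapt001); it has not been run against the poster's data:

    SELECT DISTINCT T.cuiT, T.cuiL, T.PERI AS ULT_APORTE_O_DDJJ
    FROM APORTES T
    WHERE T.cuiT = '12345678901'
      AND T.PERI = (SELECT MAX(U.PERI)
                      FROM APORTES U
                     WHERE U.cuiL = T.cuiL)   -- latest period for this cuiL, regardless of cuiT
    ORDER BY T.cuiT, T.cuiL, T.PERI;

Because T.PERI must equal the per-cuiL maximum, this keeps the semantics described above (the maximum is taken independently of the cuiT filter), and the planner may be able to satisfy each inner MAX() with a short backwards scan of the (cuiL, PERI) index uesapt001 rather than aggregating all 9 million rows.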
{
"msg_contents": "On Aug 15, 2006, at 1:53 PM, Sebasti�n Baioni wrote:\n> 9 -> Index Scan using uesapt001 on \n> APORTES (cost=0.00..37301678.64\n> rows=9339331 width=25) (actual time=110.000..2520690.000 \n> rows=9335892 loops=1)\n\nIt's taking 2520 seconds to scan an index with 9M rows, which sounds \nway, way too slow. I suspect that index got bloated badly at some \npoint by not vacuuming frequently enough (autovacuum is your friend). \nTry reindexing and see if that fixes the problem.\n--\nJim C. Nasby, Sr. Engineering Consultant [email protected]\nPervasive Software http://pervasive.com work: 512-231-6117\nvcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461\n\n\n",
"msg_date": "Tue, 15 Aug 2006 14:27:49 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner Join of the same table"
},
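If index bloat is indeed the cause, the rebuild Jim suggests would look roughly like the following; the index and table names are taken from the plan above, and note that REINDEX blocks writes to the table while it runs, so it is best done in a quiet window:

    REINDEX INDEX uesapt001;     -- rebuild the (cuiL, PERI) index from scratch
    VACUUM ANALYZE APORTES;      -- reclaim dead space and refresh planner statistics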
{
"msg_contents": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]> writes:\n> 8 -> GroupAggregate (cost=0.00..37348395.05 rows=3951 width=25) (actual\n> time=130.000..2629617.000 rows=254576 loops=1)\n> 9 -> Index Scan using uesapt001 on APORTES (cost=0.00..37301678.64\n> rows=9339331 width=25) (actual time=110.000..2520690.000 rows=9335892 loops=1)\n\nGiven the relatively small estimated number of group rows, I'd have\nexpected the thing to use a seqscan and HashAggregate for this part.\nDo you have enable_hashagg turned off for some reason? Or enable_seqscan?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 15 Aug 2006 17:13:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inner Join of the same table "
},
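A quick way to check the settings Tom asks about, from any psql session (these are ordinary runtime parameters, nothing schema-specific):

    SHOW enable_seqscan;
    SHOW enable_hashagg;
    -- if either reports 'off', re-test the query after turning it back on:
    SET enable_seqscan = on;
    SET enable_hashagg = on;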
{
"msg_contents": "I had enable_seqscan turned OFF; With enable_seqscan turned ON it takes only 6 minutes to complete\nthe query and not 44minuts like it did with enable_seqscan turned OFF. THANKS A LOT!\nIt's still much more slower than MS SQL server but now it has acceptable times.\n\n Sebasti�n Baioni\n\n --- Tom Lane <[email protected]> escribi�:\n> Given the relatively small estimated number of group rows, I'd have\n> expected the thing to use a seqscan and HashAggregate for this part.\n> Do you have enable_hashagg turned off for some reason? Or enable_seqscan?\n> \n> \t\t\tregards, tom lane\n> > Hello Jim, we can't use the Where cuiT='12345678901' in the subquery because we need max(cuiL)\nindependently of that cuiT:\n> > cuiT cuiL PERI FAMI\n> > 1 a 200608 0\n> > 1 a 200601 2\n> > 1 b 200607 3\n> > 1 c 200605 4\n> > 2 a 200605 9\n> > 2 c 200604 4\n> > 2 b 200608 1\n> > We need:\n> > where cuiT = '1'\n> > cuiT cuiL PERI FAMI\n> > 1 a 200608 9\n> > 1 c 200605 4\n> > If we place the Where cuiT = '1' in the subquery we couldn't get the max(FAMI) of cuiL a = 9\nand we couldn't know if that PERI is the max(PERI) of that cuiL independently of that cuiT.\n> > \n> > Here is the explain analyze with PG 8.0 for Windows:\n> > Explain Analyze\n> > SELECT DISTINCT T.cuiT,T.cuiL, U.MAXPERI AS ULT_APORTE_O_DDJJ\n> > \t\tFROM APORTES AS T\n> > \t\tINNER JOIN\n> > \t\t(\n> > \t\tSELECT cuiL, MAX(PERI) AS MAXPERI\n> > \t\tFROM APORTES\n> > \t\tGROUP BY cuiL\n> > \t\t) AS U ON T.cuiL=U.cuiL AND T.PERI=U.MAXPERI\n> > WHERE T.cuiT='12345678901'\n> > order by T.cuiT, T.cuiL, U.MAXPERI;\n> > \n> > QUERY PLAN\n> > 1 Unique (cost=37478647.41..37478650.53 rows=312 width=62) (actual\ntime=2677209.000..2677520.000\n> > rows=1720 loops=1)\n> > 2 -> Sort (cost=37478647.41..37478648.19 rows=312 width=62) (actual\ntime=2677209.000..2677260.000 rows=3394 loops=1)\n> > 3 Sort Key: t.cuiT, t.cuiL, u.maxperi\n> > 4 -> Merge Join (cost=128944.78..37478634.48 rows=312 width=62) (actual\ntime=74978.000..2677009.000 rows=3394 loops=1)\n> > 5 Merge Cond: (\"outer\".cuiL = \"inner\".cuiL)\n> > 6 Join Filter: ((\"inner\".peri)::text = \"outer\".maxperi)\n> > 7 -> Subquery Scan u (cost=0.00..37348434.56 rows=3951 width=47) (actual\ntime=130.000..2634923.000 rows=254576 loops=1)\n> > 8 -> GroupAggregate (cost=0.00..37348395.05 rows=3951 width=25) (actual\ntime=130.000..2629617.000 rows=254576 loops=1)\n> > 9 -> Index Scan using uesapt001 on APORTES (cost=0.00..37301678.64\nrows=9339331 width=25) (actual time=110.000..2520690.000 rows=9335892 loops=1)\n> > 10 -> Sort (cost=128944.78..129100.44 rows=62263 width=40) (actual\ntime=30684.000..36838.000 rows=80471 loops=1)\n> > 11 Sort Key: t.cuiL\n> > 12 -> Index Scan using uesapt002 on APORTES t (cost=0.00..122643.90\nrows=62263 width=40) (actual time=170.000..25566.000 rows=80471 loops=1)\n> > 13 Index Cond: (cuiT = '12345678901'::bpchar)\n> > Total runtime: 2677640.000 ms\n> > \n> > Thanks\n> > Sebasti�n Baioni\n\n\n\t\n\t\n\t\t\n__________________________________________________\nPregunt�. Respond�. Descubr�.\nTodo lo que quer�as saber, y lo que ni imaginabas,\nest� en Yahoo! Respuestas (Beta).\n�Probalo ya! \nhttp://www.yahoo.com.ar/respuestas\n\n",
"msg_date": "Wed, 16 Aug 2006 18:27:05 +0000 (GMT)",
"msg_from": "=?iso-8859-1?q?Sebasti=E1n=20Baioni?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inner Join of the same table "
}
] |
[
{
"msg_contents": "Hi all,\n\nI have PostgreSQL 8.1.4 running on a P 4 2.8 GHz , 512 MB with Linux \n(Fedora Core 3)\n\nThe SQL comands below have a performance diference that I think is not \nso much acceptable ( 1035.427 ms vs 7.209 ms ), since the tables isn�t\nso much big ( contrato have 1907 rows and prog have 40.002 rows )\nCan I make some optimization here ?\n\n EXPLAIN ANALYZE\n SELECT Contrato.Id\n , Min( prog.dtsemeio ) AS DtSemIni\n , Max( prog.dtsemeio ) AS DtSemFim\n , Min( prog.dtembarque ) AS DtEmbIni\n , Max( prog.dtembarque ) AS DtEmbFim\n , Min( prog.dtentrega ) AS DtEntIni\n , Max( prog.dtentrega ) AS DtEntFim\n , COUNT(prog.*) AS QtSem\n , SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END ) \nAS QtSemAb\n FROM bvz.Contrato\n LEFT OUTER JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id\n WHERE Contrato.Fk_Clifor = 243\n GROUP BY 1;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=5477.34..5706.84 rows=41 width=48) (actual \ntime=883.721..1031.159 rows=41 loops=1)\n -> Merge Left Join (cost=5477.34..5686.15 rows=860 width=48) \n(actual time=868.038..1026.988 rows=1366 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".fk_contrato)\n -> Sort (cost=50.39..50.49 rows=41 width=4) (actual \ntime=0.614..0.683 rows=41 loops=1)\n Sort Key: contrato.id\n -> Bitmap Heap Scan on contrato (cost=2.14..49.29 \nrows=41 width=4) (actual time=0.163..0.508 rows=41 loops=1)\n Recheck Cond: (fk_clifor = 243)\n -> Bitmap Index Scan on fki_contrato_clifor \n(cost=0.00..2.14 rows=41 width=0) (actual time=0.146..0.146 rows=41 loops=1)\n Index Cond: (fk_clifor = 243)\n -> Sort (cost=5426.95..5526.95 rows=40002 width=48) (actual \ntime=862.192..956.903 rows=38914 loops=1)\n Sort Key: prog.fk_contrato\n -> Seq Scan on prog (cost=0.00..1548.02 rows=40002 \nwidth=48) (actual time=0.044..169.795 rows=40002 loops=1)\n Total runtime: 1035.427 ms\n\n\nEXPLAIN ANALYZE\nSELECT Contrato.Id\n , Min( prog.dtsemeio ) AS DtSemIni\n , Max( prog.dtsemeio ) AS DtSemFim\n , Min( prog.dtembarque ) AS DtEmbIni\n , Max( prog.dtembarque ) AS DtEmbFim\n , Min( prog.dtentrega ) AS DtEntIni\n , Max( prog.dtentrega ) AS DtEntFim\n , COUNT(prog.*) AS QtSem\n , SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END ) \nAS QtSemAb\nFROM bvz.Contrato\n LEFT OUTER JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id\nWHERE Contrato.Fk_Clifor = 352\nGROUP BY 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=2.16..4588.74 rows=28 width=48) (actual \ntime=2.196..7.027 rows=28 loops=1)\n -> Nested Loop Left Join (cost=2.16..4574.63 rows=587 width=48) \n(actual time=2.042..6.154 rows=223 loops=1)\n -> Index Scan using pk_contrato on contrato \n(cost=0.00..100.92 rows=28 width=4) (actual time=1.842..3.045 rows=28 \nloops=1)\n Filter: (fk_clifor = 352)\n -> Bitmap Heap Scan on prog (cost=2.16..159.19 rows=47 \nwidth=48) (actual time=0.040..0.080 rows=8 loops=28)\n Recheck Cond: (prog.fk_contrato = \"outer\".id)\n -> Bitmap Index Scan on fki_prog_contrato \n(cost=0.00..2.16 rows=47 width=0) (actual time=0.018..0.018 rows=8 loops=28)\n Index Cond: (prog.fk_contrato = \"outer\".id)\n Total runtime: 7.209 ms\n\n\n\nI think that the problem is in \"LEFT OUTER JOIN\" because when I run the \nqueries with a inner join I have more consistent times,\nalthough the query plan above 
is a champion :\n\n\nEXPLAIN ANALYZE\nSELECT Contrato.Id\n , Min( prog.dtsemeio ) AS DtSemIni\n , Max( prog.dtsemeio ) AS DtSemFim\n , Min( prog.dtembarque ) AS DtEmbIni\n , Max( prog.dtembarque ) AS DtEmbFim\n , Min( prog.dtentrega ) AS DtEntIni\n , Max( prog.dtentrega ) AS DtEntFim\n , COUNT(prog.*) AS QtSem\n , SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END ) \nAS QtSemAb\nFROM bvz.Contrato\n JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id\nWHERE Contrato.Fk_Clifor = 243\nGROUP BY 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1825.38..1826.71 rows=41 width=48) (actual \ntime=222.671..222.788 rows=41 loops=1)\n -> Hash Join (cost=49.40..1806.03 rows=860 width=48) (actual \ntime=2.040..217.963 rows=1366 loops=1)\n Hash Cond: (\"outer\".fk_contrato = \"inner\".id)\n -> Seq Scan on prog (cost=0.00..1548.02 rows=40002 width=48) \n(actual time=0.047..150.636 rows=40002 loops=1)\n -> Hash (cost=49.29..49.29 rows=41 width=4) (actual \ntime=0.766..0.766 rows=41 loops=1)\n -> Bitmap Heap Scan on contrato (cost=2.14..49.29 \nrows=41 width=4) (actual time=0.146..0.669 rows=41 loops=1)\n Recheck Cond: (fk_clifor = 243)\n -> Bitmap Index Scan on fki_contrato_clifor \n(cost=0.00..2.14 rows=41 width=0) (actual time=0.101..0.101 rows=41 loops=1)\n Index Cond: (fk_clifor = 243)\n Total runtime: 223.230 ms\n\n\nEXPLAIN ANALYZE\nSELECT Contrato.Id\n , Min( prog.dtsemeio ) AS DtSemIni\n , Max( prog.dtsemeio ) AS DtSemFim\n , Min( prog.dtembarque ) AS DtEmbIni\n , Max( prog.dtembarque ) AS DtEmbFim\n , Min( prog.dtentrega ) AS DtEntIni\n , Max( prog.dtentrega ) AS DtEntFim\n , COUNT(prog.*) AS QtSem\n , SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END ) \nAS QtSemAb\nFROM bvz.Contrato\n JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id\nWHERE Contrato.Fk_Clifor = 352\nGROUP BY 1;\n QUERY \nPLAN\n------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=1811.50..1812.41 rows=28 width=48) (actual \ntime=215.214..215.291 rows=28 loops=1)\n -> Hash Join (cost=44.39..1798.29 rows=587 width=48) (actual \ntime=3.853..214.178 rows=223 loops=1)\n Hash Cond: (\"outer\".fk_contrato = \"inner\".id)\n -> Seq Scan on prog (cost=0.00..1548.02 rows=40002 width=48) \n(actual time=0.075..150.701 rows=40002 loops=1)\n -> Hash (cost=44.32..44.32 rows=28 width=4) (actual \ntime=0.248..0.248 rows=28 loops=1)\n -> Bitmap Heap Scan on contrato (cost=2.10..44.32 \nrows=28 width=4) (actual time=0.111..0.187 rows=28 loops=1)\n Recheck Cond: (fk_clifor = 352)\n -> Bitmap Index Scan on fki_contrato_clifor \n(cost=0.00..2.10 rows=28 width=0) (actual time=0.101..0.101 rows=28 loops=1)\n Index Cond: (fk_clifor = 352)\n Total runtime: 215.483 ms\n\nWell, in this case the queries with LEFT OUTER join and with inner join \nreturns the same result set. I don�t have the sufficient knowledge to\naffirm , but I suspect that if the query plan used for fk_clifor = 352 \nand with left outer join is applied for the first query (fk_clifor = 243 \nwith left outer join)\nwe will have a better total runtime.\nThere are some manner to make this test ?\nBy the way (If this is a stupid idea, ignore this), this same (or a \nsimilar) query plan cannot be used in the queries with inner join since \nthe difference in times ( 215.483 ms vs 7.209 ms) still significative ?\n\n\n",
"msg_date": "Tue, 15 Aug 2006 21:39:21 -0300",
"msg_from": "\"Luiz K. Matsumura\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Big diference in response time (query plan question)"
},
{
"msg_contents": "\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Luiz K. Matsumura\n\n> Well, in this case the queries with LEFT OUTER join and with \n> inner join \n> returns the same result set. I don´t have the sufficient knowledge to\n> affirm , but I suspect that if the query plan used for \n> fk_clifor = 352 \n> and with left outer join is applied for the first query \n> (fk_clifor = 243 \n> with left outer join)\n> we will have a better total runtime.\n> There are some manner to make this test ?\n\nIt looks like Postgres used a nested loop join for the fast query and a\nmerge join for the slow query. I don't think the left join is causing any\nproblems. On the slower query the cost estimate of the nested loop must\nhave been higher than the cost estimate of the merge join because of more\nrows. You could try disabling merge joins with the command \"set\nenable_mergejoin=false\". Then run the explain analyze again to see if it is\nfaster. \n\nIf it is faster without merge join, then you could try to change your\nsettings to make the planner prefer the nested loop. I'm not sure what the\nbest way to do that is. Maybe you could try reducing the random_page_cost,\nwhich should make index scans cheaper.\n\nDave\n\n",
"msg_date": "Wed, 16 Aug 2006 08:34:59 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big diference in response time (query plan question)"
},
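Both experiments Dave describes can be tried per session, so nothing has to be edited in postgresql.conf just to test the idea; the value 2 below is only an illustrative test value, not a recommendation:

    SET enable_mergejoin = off;   -- see whether the planner then picks the nested loop
    SET random_page_cost = 2;     -- default is 4; lower values make index scans look cheaper
    -- ...re-run the EXPLAIN ANALYZE of the slow LEFT JOIN query here...
    RESET enable_mergejoin;
    RESET random_page_cost;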
{
"msg_contents": "Hi Dave,\nThanks to reply.\nI run it now in a Postgres 8.1.4 my notebook (win XP) and the \nperformance is really much better:\n\nEXPLAIN ANALYZE\nSELECT Contrato.Id\n , Min( prog.dtsemeio ) AS DtSemIni\n , Max( prog.dtsemeio ) AS DtSemFim\n , Min( prog.dtembarque ) AS DtEmbIni\n , Max( prog.dtembarque ) AS DtEmbFim\n , Min( prog.dtentrega ) AS DtEntIni\n , Max( prog.dtentrega ) AS DtEntFim\n , COUNT(prog.*) AS QtSem\n , SUM( CASE WHEN Prog.DtSemeio >= '20060814' THEN 1 ELSE 0 END ) \nAS QtSemAb\nFROM bvz.Contrato\n LEFT OUTER JOIN bvz.Prog ON prog.Fk_Contrato = Contrato.Id\nWHERE Contrato.Fk_Clifor = 243\nGROUP BY 1;\n\nGroupAggregate (cost=2.18..7312.45 rows=42 width=48) (actual \ntime=0.446..13.195 rows=42 loops=1)\n -> Nested Loop Left Join (cost=2.18..7291.22 rows=883 width=48) \n(actual time=0.103..10.518 rows=1536 loops=1)\n -> Index Scan using pk_contrato on contrato (cost=0.00..100.29 \nrows=42 width=4) (actual time=0.048..3.163 rows=42 loops=1)\n Filter: (fk_clifor = 243)\n -> Bitmap Heap Scan on prog (cost=2.18..170.59 rows=50 \nwidth=48) (actual time=0.027..0.132 rows=37 loops=42)\n Recheck Cond: (prog.fk_contrato = \"outer\".id)\n -> Bitmap Index Scan on fki_prog_contrato \n(cost=0.00..2.18 rows=50 width=0) (actual time=0.018..0.018 rows=37 \nloops=42)\n Index Cond: (prog.fk_contrato = \"outer\".id)\nTotal runtime: 13.399 ms\n\nWhere I can see the current random_page_cost value ? There are some hint \nabout what value I must set ?\nThanks in advance.\nLuiz\n\nDave Dutcher wrote:\n>> Well, in this case the queries with LEFT OUTER join and with \n>> inner join \n>> returns the same result set. I don�t have the sufficient knowledge to\n>> affirm , but I suspect that if the query plan used for \n>> fk_clifor = 352 \n>> and with left outer join is applied for the first query \n>> (fk_clifor = 243 \n>> with left outer join)\n>> we will have a better total runtime.\n>> There are some manner to make this test ?\n>> \n>\n> It looks like Postgres used a nested loop join for the fast query and a\n> merge join for the slow query. I don't think the left join is causing any\n> problems. On the slower query the cost estimate of the nested loop must\n> have been higher than the cost estimate of the merge join because of more\n> rows. You could try disabling merge joins with the command \"set\n> enable_mergejoin=false\". Then run the explain analyze again to see if it is\n> faster. \n>\n> If it is faster without merge join, then you could try to change your\n> settings to make the planner prefer the nested loop. I'm not sure what the\n> best way to do that is. Maybe you could try reducing the random_page_cost,\n> which should make index scans cheaper.\n>\n> Dave\n> \n",
"msg_date": "Wed, 16 Aug 2006 11:24:37 -0300",
"msg_from": "\"Luiz K. Matsumura\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Big diference in response time (query plan question)"
},
{
"msg_contents": "\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Luiz K. Matsumura\n> \n> \n> Where I can see the current random_page_cost value ? There \n> are some hint \n> about what value I must set ?\n> Thanks in advance.\n> Luiz\n\nOn Linux the random_page_cost is set in the postgresql.conf file. You can\nsee what it is set to by typing \"show random_page_cost\". This page has some\nguidelines on random_page_cost and other server settings:\n\nhttp://www.powerpostgresql.com/PerfList/\n\nAs it says on the page, make sure you test a variety of queries.\n\n",
"msg_date": "Wed, 16 Aug 2006 09:51:09 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Big diference in response time (query plan question)"
}
] |
[
{
"msg_contents": "I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user \nParadox to a web based interface to either MySQL or PostgreSQL.\nThe database is a pedigree sheep breed society database recording sheep and \nflocks (amongst other things).\n\nMy current problem is with one table and an associated query which takes 10 \ntimes longer to execute on PostgreSQL than BDE, which in turn takes 10 times \nlonger than MySQL. The table links sheep to flocks and is created as follows:\n\nCREATE TABLE SHEEP_FLOCK\n(\n regn_no varchar(7) NOT NULL,\n flock_no varchar(6) NOT NULL,\n transfer_date date NOT NULL,\n last_changed date NOT NULL,\n CONSTRAINT SHEEP_FLOCK_pkey PRIMARY KEY (regn_no, flock_no, \ntransfer_date)\n) \nWITHOUT OIDS;\nALTER TABLE SHEEP_FLOCK OWNER TO postgres;\n\nI then populate the table with \n\nCOPY SHEEP_FLOCK\nFROM 'e:/ssbg/devt/devt/export_data/sheep_flock.txt'\nWITH CSV HEADER\n\nThe table then has about 82000 records\n\nThe query I run is:\n\n/* Select all sheep who's most recent transfer was into the subject flock */\nSELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\nFROM SHEEP_FLOCK f1 JOIN \n /* The last transfer date for each sheep */\n (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n FROM SHEEP_FLOCK f\n GROUP BY f.regn_no) f2 \nON f1.regn_no = f2.regn_no\nWHERE f1.flock_no = '1359'\nAND f1.transfer_date = f2.last_xfer_date\n\nThe sub-select on it's own returns about 32000 rows.\n\nUsing identically structured tables and the same primary key, if I run this on \nParadox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms, \nand on PostgresSQL (8.1.3, local server) about 1290ms). All on the same \nWindows XP Pro machine with 512MB ram of which nearly half is free. \n\nThe query plan shows most of the time is spent sorting the 30000+ rows from the subquery, so I added a further\nsubquery as follows: \n\n/* Select all sheep who's most recent transfer was into the subject flock */\nSELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\nFROM SHEEP_FLOCK f1 JOIN \n /* The last transfer date for each sheep */\n (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n FROM SHEEP_FLOCK f\n WHERE f.regn_no IN \n /* Limit the rows extracted by the outer sub-query to those relevant to the \nsubject flock */\n\t/* This typically reduces the time from 1297ms to 47ms - from 35000 rows \nto 127 rows */\n\t(SELECT s.regn_no FROM SHEEP_FLOCK s where s.flock_no = '1359')\n GROUP BY f.regn_no) f2 \nON f1.regn_no = f2.regn_no\nWHERE f1.flock_no = '1359'\nAND f1.transfer_date = f2.last_xfer_date\n\nthen as the comment suggests I get a considerable improvement, but it's still an \norder of magnitude slower than MySQL.\n\nCan anyone suggest why PostgreSQL performs the original query so much slower than even BDE?\n -- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Wed, 16 Aug 2006 17:48:13 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "PostgreSQL runs a query much slower than BDE and MySQL"
},
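Another way to phrase a latest-transfer-per-sheep lookup in PostgreSQL is DISTINCT ON, which avoids the MAX() subquery entirely. This is only a sketch against the table definition above and has not been benchmarked; note that if a sheep has two transfers recorded on its latest date, this keeps an arbitrary one of them, whereas the original query returns the sheep as long as any of those rows is for flock 1359:

    SELECT latest.regn_no, latest.transfer_date AS date_in
    FROM (
        SELECT DISTINCT ON (f.regn_no)
               f.regn_no, f.flock_no, f.transfer_date
        FROM SHEEP_FLOCK f
        ORDER BY f.regn_no, f.transfer_date DESC   -- first row per sheep = most recent transfer
    ) AS latest
    WHERE latest.flock_no = '1359';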
{
"msg_contents": "On 16-8-2006 18:48, Peter Hardman wrote:\n> Using identically structured tables and the same primary key, if I run this on \n> Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms, \n> and on PostgresSQL (8.1.3, local server) about 1290ms). All on the same \n> Windows XP Pro machine with 512MB ram of which nearly half is free. \n\nIs that with or without query caching? I.e. can you test it with SELECT \nSQL_NO_CACHE ... ?\nIn a read-only environment it will still beat PostgreSQL, but as soon as \nyou'd get a read-write environment, MySQL's query cache is of less use. \nSo you should compare both the cached and non-cached version, if applicable.\n\nBesides that, most advices on this list are impossible without the \nresult of 'explain analyze', so you should probably get that as well.\n\nI'm not sure whether this is the same query, but you might want to try:\nSELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\nFROM SHEEP_FLOCK f1\nWHERE\nf1.flock_no = '1359'\nAND f1.transfer_date = (SELECT MAX(f.transfer_date) FROM SHEEP_FLOCK f \nWHERE regn_no = f1.regn_no)\n\nAnd you might need an index on (regn_no, transfer_date) and/or one \ncombined with that flock_no.\n\nBest regards,\n\nArjen\n",
"msg_date": "Wed, 16 Aug 2006 20:02:24 +0200",
"msg_from": "Arjen van der Meijden <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
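If the index Arjen mentions turns out to be needed, it would be created along these lines; the index name here is made up for the example:

    CREATE INDEX sheep_flock_regn_xfer_idx
        ON SHEEP_FLOCK (regn_no, transfer_date);
    ANALYZE SHEEP_FLOCK;   -- refresh planner statistics after the change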
{
"msg_contents": "On 8/16/06, Peter Hardman <[email protected]> wrote:\n> I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user\n> Paradox to a web based interface to either MySQL or PostgreSQL.\n> The database is a pedigree sheep breed society database recording sheep and\n> flocks (amongst other things).\n>\n> My current problem is with one table and an associated query which takes 10\n> times longer to execute on PostgreSQL than BDE, which in turn takes 10 times\n> longer than MySQL. The table links sheep to flocks and is created as follows:\n>\n> CREATE TABLE SHEEP_FLOCK\n> (\n> regn_no varchar(7) NOT NULL,\n> flock_no varchar(6) NOT NULL,\n> transfer_date date NOT NULL,\n> last_changed date NOT NULL,\n> CONSTRAINT SHEEP_FLOCK_pkey PRIMARY KEY (regn_no, flock_no,\n> transfer_date)\n> )\n> WITHOUT OIDS;\n> ALTER TABLE SHEEP_FLOCK OWNER TO postgres;\n>\n> I then populate the table with\n>\n> COPY SHEEP_FLOCK\n> FROM 'e:/ssbg/devt/devt/export_data/sheep_flock.txt'\n> WITH CSV HEADER\n>\n> The table then has about 82000 records\n>\n> The query I run is:\n>\n> /* Select all sheep who's most recent transfer was into the subject flock */\n> SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\n> FROM SHEEP_FLOCK f1 JOIN\n> /* The last transfer date for each sheep */\n> (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n> FROM SHEEP_FLOCK f\n> GROUP BY f.regn_no) f2\n> ON f1.regn_no = f2.regn_no\n> WHERE f1.flock_no = '1359'\n> AND f1.transfer_date = f2.last_xfer_date\n>\n> The sub-select on it's own returns about 32000 rows.\n>\n> Using identically structured tables and the same primary key, if I run this on\n> Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms,\n> and on PostgresSQL (8.1.3, local server) about 1290ms). All on the same\n> Windows XP Pro machine with 512MB ram of which nearly half is free.\n>\n> The query plan shows most of the time is spent sorting the 30000+ rows from the subquery, so I added a further\n> subquery as follows:\n>\n> /* Select all sheep who's most recent transfer was into the subject flock */\n> SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\n> FROM SHEEP_FLOCK f1 JOIN\n> /* The last transfer date for each sheep */\n> (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n> FROM SHEEP_FLOCK f\n> WHERE f.regn_no IN\n> /* Limit the rows extracted by the outer sub-query to those relevant to the\n> subject flock */\n> /* This typically reduces the time from 1297ms to 47ms - from 35000 rows\n> to 127 rows */\n> (SELECT s.regn_no FROM SHEEP_FLOCK s where s.flock_no = '1359')\n> GROUP BY f.regn_no) f2\n> ON f1.regn_no = f2.regn_no\n> WHERE f1.flock_no = '1359'\n> AND f1.transfer_date = f2.last_xfer_date\n>\n> then as the comment suggests I get a considerable improvement, but it's still an\n> order of magnitude slower than MySQL.\n>\n> Can anyone suggest why PostgreSQL performs the original query so much slower than even BDE?\n\nANALYZE?\n\nRegards,\n\nRodrigo\n",
"msg_date": "Wed, 16 Aug 2006 12:02:31 -0600",
"msg_from": "\"=?ISO-8859-1?Q?Rodrigo_De_Le=F3n?=\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "On 16 Aug 2006 at 20:02, Arjen van der Meijden wrote:\n\n> On 16-8-2006 18:48, Peter Hardman wrote:\n> > Using identically structured tables and the same primary key, if I run this on \n> > Paradox/BDE it takes about 120ms, on MySQL (5.0.24, local server) about 3ms, \n> > and on PostgresSQL (8.1.3, local server) about 1290ms). All on the same \n> > Windows XP Pro machine with 512MB ram of which nearly half is free. \n> \n> Is that with or without query caching? I.e. can you test it with SELECT \n> SQL_NO_CACHE ... ?\n> In a read-only environment it will still beat PostgreSQL, but as soon as \n> you'd get a read-write environment, MySQL's query cache is of less use. \n> So you should compare both the cached and non-cached version, if applicable.\nIt seems to make no difference - not surprising really as I'm just running the query \nfrom the command line interface.\n> \n> Besides that, most advices on this list are impossible without the \n> result of 'explain analyze', so you should probably get that as well.\nHere is the output of EXPLAIN ANALYZE for the slow query:\n\nUnique (cost=7201.65..8487.81 rows=1 width=13) (actual \ntime=1649.733..1811.684 rows=32 loops=1)\n -> Merge Join (cost=7201.65..8487.80 rows=1 width=13) (actual \ntime=1649.726..1811.528 rows=32 loops=1)\n Merge Cond: (((\"outer\".regn_no)::text = \"inner\".\"?column3?\") AND \n(\"outer\".transfer_date = \"inner\".last_xfer_date))\n -> Index Scan using sheep_flock_pkey on sheep_flock f1 \n(cost=0.00..1033.19 rows=77 width=13) (actual time=15.357..64.237 rows=127 \nloops=1)\n Index Cond: ((flock_no)::text = '1359'::text)\n -> Sort (cost=7201.65..7285.84 rows=33676 width=15) (actual \ntime=1580.198..1653.502 rows=38277 loops=1)\n Sort Key: (f2.regn_no)::text, f2.last_xfer_date\n -> Subquery Scan f2 (cost=0.00..4261.67 rows=33676 width=15) (actual \ntime=0.331..598.246 rows=38815 loops=1)\n -> GroupAggregate (cost=0.00..3924.91 rows=33676 width=13) \n(actual time=0.324..473.131 rows=38815 loops=1)\n -> Index Scan using sheep_flock_pkey on sheep_flock f \n(cost=0.00..3094.95 rows=81802 width=13) (actual time=0.295..232.156 \nrows=81802 loops=1)\nTotal runtime: 1812.737 ms\n\n\n> \n> I'm not sure whether this is the same query, but you might want to try:\n> SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\n> FROM SHEEP_FLOCK f1\n> WHERE\n> f1.flock_no = '1359'\n> AND f1.transfer_date = (SELECT MAX(f.transfer_date) FROM SHEEP_FLOCK f \n> WHERE regn_no = f1.regn_no)\n> \nThat's neat - I didn't know you could make a reference from a subselect to the \nouter select. Your query has the same performance as my very complex one on \nboth MySQL and PostgreSQL. However I'm not entirely sure about the times for \nMySQL - every interface gives a different answer so I'll have to try them from a \nscript so I know whats going on.\nInterestingly BDE takes 7 seconds to run your query. Just as well I didn't start \nfrom there... \n> And you might need an index on (regn_no, transfer_date) and/or one \n> combined with that flock_no.\nExplain says it only uses the primary key, so it seems there' no need for a \nseparate index\n\nThanks for the help\n-- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Wed, 16 Aug 2006 20:42:49 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "\"Peter Hardman\" <[email protected]> writes:\n> I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user \n> Paradox to a web based interface to either MySQL or PostgreSQL.\n> The query I run is:\n\n> /* Select all sheep who's most recent transfer was into the subject flock */\n> SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\n> FROM SHEEP_FLOCK f1 JOIN \n> /* The last transfer date for each sheep */\n> (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n> FROM SHEEP_FLOCK f\n> GROUP BY f.regn_no) f2 \n> ON f1.regn_no = f2.regn_no\n> WHERE f1.flock_no = '1359'\n> AND f1.transfer_date = f2.last_xfer_date\n\nThis seems pretty closely related to this recent thread:\nhttp://archives.postgresql.org/pgsql-performance/2006-08/msg00220.php\nin which the OP is doing a very similar kind of query in almost exactly\nthe same way.\n\nI can't help thinking that there's probably a better way to phrase this\ntype of query in SQL, though it's not jumping out at me what that is.\n\nWhat I find interesting though is that it sounds like both MSSQL and\nParadox know something we don't about how to optimize it. PG doesn't\nhave any idea how to do the above query without forming the full output\nof the sub-select, but I suspect that the commercial DBs know a\nshortcut; perhaps they are able to automatically derive a restriction\nin the subquery similar to what you did by hand. Does Paradox have\nanything comparable to EXPLAIN that would give a hint about the query\nplan they are using?\n\nAlso, just as in the other thread, I'm thinking that a seqscan+hash\naggregate would be a better idea than this bit:\n\n> -> GroupAggregate (cost=0.00..3924.91 rows=33676 width=13) (actual time=0.324..473.131 rows=38815 loops=1)\n> -> Index Scan using sheep_flock_pkey on sheep_flock f (cost=0.00..3094.95 rows=81802 width=13) (actual time=0.295..232.156)\n\nPossibly you need to raise work_mem to get it to consider the hash\naggregation method.\n\nBTW, are you *sure* you are testing PG 8.1? The \"Subquery Scan f2\" plan\nnode looks unnecessary to me, and I'd have expected 8.1 to drop it out.\n8.0 and before would have left it in the plan though. This doesn't make\nall that much difference performance-wise in itself, but it does make me\nwonder what you are testing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 16 Aug 2006 18:51:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
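In 8.1 work_mem is given in kilobytes and can be raised for just the current session, so Tom's hash-aggregate suggestion is easy to test without touching the server configuration; 32768 (about 32 MB) is only an arbitrary test value:

    SHOW work_mem;
    SET work_mem = 32768;
    -- re-run the EXPLAIN ANALYZE here and see whether a HashAggregate replaces the GroupAggregate
    RESET work_mem;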
{
"msg_contents": "\nOn 17 Aug 2006 at 10:00, Mario Weilguni wrote:\n\n> not really sure if this is right without any testdata, but isn't that what you \n> want?\n> \n> CREATE index foo on sheep_flock (flock_no);\n> \n> SELECT DISTINCT on (f1.transfer_date) f1.regn_no, f1.transfer_date as date_in\n> FROM SHEEP_FLOCK f1\n> WHERE f1.flock_no = '1359'\n> order by f1.transfer_date desc;\n> \n> best regards, \n> mario weilguni\n> \n> \nMario, Thanks for the suggestion, but this query produces the wrong answer - but \nthen I provided no data, nor properly explained what the data would be.\nEach sheep will have multiple records, starting with one for when it's first \nregistered, then one for each flock it's in (eg sold into) then one for when it dies \nand goes to the 'big flock in the sky'.\n\n So first I need to find the most recent record for each sheep and then select the \nsheep who's most recent record matches the flock in question.\n\nYour query finds all the sheep that have been in the flock in question, then selects \nthe first one from each set of records with the same date. So it collects data on \ndead sheep, and only selects one sheep if several were bought or registered on \nthe same day.\n\nForgive me for being verbose - I want to make sure I understand it propely myself!\n\nregards, \n -- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Thu, 17 Aug 2006 10:07:54 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "On 16 Aug 2006 at 18:51, Tom Lane wrote:\n\n> \"Peter Hardman\" <[email protected]> writes:\n> > I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user \n<snip>\n\nArjen van der Meijden has proposed a very elegant query in another post. \n\n> What I find interesting though is that it sounds like both MSSQL and\n> Paradox know something we don't about how to optimize it. PG doesn't\n> have any idea how to do the above query without forming the full output\n> of the sub-select, but I suspect that the commercial DBs know a\n> shortcut; perhaps they are able to automatically derive a restriction\n> in the subquery similar to what you did by hand. Does Paradox have\n> anything comparable to EXPLAIN that would give a hint about the query\n> plan they are using?\n\nSadly, no. In fact the ability to use SQL from Paradox at all is not well known and \nnot very visible in the the documentation. \n\nI wonder whether Paradox and MySQL are just not doing the sort (this seems to \nbe what eats up the time), since the output of the subquery is in fact already in the \nproper order.\n\n> \n> Also, just as in the other thread, I'm thinking that a seqscan+hash\n> aggregate would be a better idea than this bit:\n> \n> > -> GroupAggregate (cost=0.00..3924.91 rows=33676 width=13) (actual time=0.324..473.131 rows=38815 loops=1)\n> > -> Index Scan using sheep_flock_pkey on sheep_flock f (cost=0.00..3094.95 rows=81802 width=13) (actual time=0.295..232.156)\n> \n> Possibly you need to raise work_mem to get it to consider the hash\n> aggregation method.\n> \n> BTW, are you *sure* you are testing PG 8.1? The \"Subquery Scan f2\" plan\n> node looks unnecessary to me, and I'd have expected 8.1 to drop it out.\n> 8.0 and before would have left it in the plan though. This doesn't make\n> all that much difference performance-wise in itself, but it does make me\n> wonder what you are testing.\n\nYes, the executables all say version 8.1.3.6044\n> \nRegards,-- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Thu, 17 Aug 2006 10:21:01 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
{
"msg_contents": "Hi, Peter,\n\nPeter Hardman wrote:\n\n>> BTW, are you *sure* you are testing PG 8.1? The \"Subquery Scan f2\" plan\n>> node looks unnecessary to me, and I'd have expected 8.1 to drop it out.\n>> 8.0 and before would have left it in the plan though. This doesn't make\n>> all that much difference performance-wise in itself, but it does make me\n>> wonder what you are testing.\n> \n> Yes, the executables all say version 8.1.3.6044\n\nWould you mind to look at the output of \"select version();\", too?\n\nI ask this because I stumbled over it myself, that I had installed the\ncorrect postgresql and psql versions, but accidentally connected to a\ndifferent database installation due to strange environment and script\nsettings...\n\n\nThanks,\nMarkus\n\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 17 Aug 2006 12:11:49 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "On 17 Aug 2006 at 12:11, Markus Schaber wrote:\n\n> Hi, Peter,\n> \n> Peter Hardman wrote:\n> \n> >> BTW, are you *sure* you are testing PG 8.1? The \"Subquery Scan f2\" plan\n> >> node looks unnecessary to me, and I'd have expected 8.1 to drop it out.\n> >> 8.0 and before would have left it in the plan though. This doesn't make\n> >> all that much difference performance-wise in itself, but it does make me\n> >> wonder what you are testing.\n> > \n> > Yes, the executables all say version 8.1.3.6044\n> \n> Would you mind to look at the output of \"select version();\", too?\n> \n> I ask this because I stumbled over it myself, that I had installed the\n> correct postgresql and psql versions, but accidentally connected to a\n> different database installation due to strange environment and script\n> settings...\nselect version() returns\n\nPostgreSQL 8.1.3 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 \n(mingw-special)\n\nCheers,-- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Thu, 17 Aug 2006 11:25:18 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "Hi, Peter,\n\nPeter Hardman wrote:\n\n> select version() returns\n> \n> PostgreSQL 8.1.3 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 \n> (mingw-special)\n\nThat looks correct.\n\nI also presume that your environment is not as fragile wr/t connecting\ndo wrong databases, compared to debian with their multi-cluster\nmulti-version script wrapper magic.\n\nDon't mind.\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Thu, 17 Aug 2006 13:01:34 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "\"Peter Hardman\" <[email protected]> writes:\n> I wonder whether Paradox and MySQL are just not doing the sort (this\n> seems to be what eats up the time), since the output of the subquery\n> is in fact already in the proper order.\n\nMSSQL (from the other thread). I feel fairly safe in assuming that\nMySQL's query optimizer is not nearly in the league to do this query\neffectively. (I like the theory Arjen mentioned that what you are\nmeasuring there is the effects of their query cache rather than a\nsmart fundamental implementation.) I wonder whether MSSQL has an\nEXPLAIN equivalent ...\n\nAnywy, your point about the sort being redundant is a good one, and\noffhand I'd have expected PG to catch that; I'll have to look into\nwhy it didn't. But that's not going to explain a 10x speed\ndifference, because the sort isn't 90% of the runtime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Aug 2006 09:11:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
{
"msg_contents": "MSSQL can give either a graphical query plan or a text-based one similar\nto PG. There's no way that I've found to get the equivalent of an\nEXPLAIN ANALYZE, but I'm by no means an MSSQL guru.\n\nTo get a neat-looking but not very useful graphical query plan from the\nQuery Analyzer tool, hit <Ctrl-L>.\n\nTo get the text-based one, execute \"SET SHOWPLAN_ALL ON\" which toggles\ndiagnostic mode on, and each query that you run will return the explain\nplan instead of actually running until you execute \"SET SHOWPLAN_ALL\nOFF\".\n\n-- Mark Lewis\n\nOn Thu, 2006-08-17 at 09:11 -0400, Tom Lane wrote:\n> \"Peter Hardman\" <[email protected]> writes:\n> > I wonder whether Paradox and MySQL are just not doing the sort (this\n> > seems to be what eats up the time), since the output of the subquery\n> > is in fact already in the proper order.\n> \n> MSSQL (from the other thread). I feel fairly safe in assuming that\n> MySQL's query optimizer is not nearly in the league to do this query\n> effectively. (I like the theory Arjen mentioned that what you are\n> measuring there is the effects of their query cache rather than a\n> smart fundamental implementation.) I wonder whether MSSQL has an\n> EXPLAIN equivalent ...\n> \n> Anywy, your point about the sort being redundant is a good one, and\n> offhand I'd have expected PG to catch that; I'll have to look into\n> why it didn't. But that's not going to explain a 10x speed\n> difference, because the sort isn't 90% of the runtime.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n",
"msg_date": "Thu, 17 Aug 2006 06:54:00 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and"
},
{
"msg_contents": "> MSSQL can give either a graphical query plan or a text-based \n> one similar to PG. There's no way that I've found to get the \n> equivalent of an EXPLAIN ANALYZE, but I'm by no means an MSSQL guru.\n\nSET STATISTICS IO ON\nSET STATISTICS PROFILE ON\nSET STATISTICS TIME ON\n\n\n//Magnus\n",
"msg_date": "Thu, 17 Aug 2006 18:31:39 +0200",
"msg_from": "\"Magnus Hagander\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and"
},
{
"msg_contents": "I wrote:\n> Anywy, your point about the sort being redundant is a good one, and\n> offhand I'd have expected PG to catch that; I'll have to look into\n> why it didn't. But that's not going to explain a 10x speed\n> difference, because the sort isn't 90% of the runtime.\n\nI dug into this using some made-up test data, and was able to reproduce\nthe plan you got after changing the order of the pkey index columns\nto (regn_no, transfer_date, flock_no) ... are you sure you quoted that\naccurately before?\n\nI found a couple of minor planner problems, which I've repaired in CVS\nHEAD. You might consider using TEXT columns instead of VARCHAR(n),\nbecause the only bug that actually seemed to change the chosen plan\ninvolved the planner getting confused by the difference between\nvarchar_var and varchar_var::text (which is what gets generated for\nsorting purposes because varchar doesn't have a separate sort operator).\n\nThere's a more interesting issue, which I'm afraid we do not have time\nto fix for PG 8.2. The crux of the matter is that given\n\nSELECT ...\nFROM SHEEP_FLOCK f1 JOIN \n (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n FROM SHEEP_FLOCK f\n GROUP BY f.regn_no) f2 \nON f1.regn_no = f2.regn_no\nAND f1.transfer_date = f2.last_xfer_date\n\nif there is an index on (regn_no, transfer_date) then the planner could\nin principle do a double-column merge join between an indexscan on this\nindex and the output of a GroupAggregate implementation of the subquery.\nThe GroupAggregate plan would in fact only be sorting on regn_no, so\nit's not immediately obvious why this is OK. The reason is that there\nis only one MAX() value for any particular regn_no, and so the sort\ncondition that the last_xfer_date values be in order for any one value\nof regn_no is vacuous. We could consider the subquery's output to be\nsorted by *any* list of the form \"regn_no, other-stuff\".\n\nThe planner's notion of matching pathkey lists to determine sortedness\nis not at all capable of dealing with this. After a little bit of\nthought I'm tempted to propose that we add a concept that a particular\npathkey list is \"unique\", meaning that it is known to include a unique\nkey for the data. Such a key would match, for sortedness purposes,\nany requested sort ordering consisting of its columns followed by\nothers. In the above example, we would know that a GROUP BY implemented\nby GroupAggregate yields an output for which the grouping columns\nare a unique sort key.\n\nI imagine this concept is already known in the database research\nliterature; anyone recognize it and know a standard name for it?\n\nWhat'd be really interesting is to know if MSSQL and Paradox are using\nthis concept to optimize their plans for Peter's query. Can someone with\na copy of MSSQL try this test case and see what it reports as the plan?\n\nBTW, I used this to generate some COPY data I could load into Peter's\nexample table:\n\nperl -e 'for ($s = 1; $s < 32000; $s++) {\n$f=int($s/100);\nprint \"$s\\t$f\\t1\\t0\\n\";\nprint \"$s\\t$f\\t2\\t0\\n\";\n}' >sheep.data\n\nI changed the date columns to integers rather than bother to make up\nvalid dates. I think only the regn_no and flock_no statistics matter\nto the planner for this particular query --- as you can see, I arranged\nfor 2 entries per sheep and 100 sheep per flock, which is in the general\nballpark of what Peter mentioned as his stats.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Aug 2006 14:33:27 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
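For anyone wanting to repeat Tom's experiment, a table matching his generated file might be declared as follows; this is an assumption based on his description (date columns replaced by integers, default tab-separated COPY format), and the file path is illustrative:

    CREATE TABLE sheep_flock (
        regn_no       varchar(7) NOT NULL,
        flock_no      varchar(6) NOT NULL,
        transfer_date integer    NOT NULL,
        last_changed  integer    NOT NULL,
        PRIMARY KEY (regn_no, flock_no, transfer_date)
        -- Tom reproduced the reported plan only after reordering the key
        -- to (regn_no, transfer_date, flock_no)
    );
    COPY sheep_flock FROM '/tmp/sheep.data';
    ANALYZE sheep_flock;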
{
"msg_contents": "On Aug 16, 2006, at 3:51 PM, Tom Lane wrote:\n>> /* Select all sheep who's most recent transfer was into the \n>> subject flock */\n>> SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\n>> FROM SHEEP_FLOCK f1 JOIN\n>> /* The last transfer date for each sheep */\n>> (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n>> FROM SHEEP_FLOCK f\n>> GROUP BY f.regn_no) f2\n>> ON f1.regn_no = f2.regn_no\n>> WHERE f1.flock_no = '1359'\n>> AND f1.transfer_date = f2.last_xfer_date\n>\n> This seems pretty closely related to this recent thread:\n> http://archives.postgresql.org/pgsql-performance/2006-08/msg00220.php\n> in which the OP is doing a very similar kind of query in almost \n> exactly\n> the same way.\n>\n> I can't help thinking that there's probably a better way to phrase \n> this\n> type of query in SQL, though it's not jumping out at me what that is.\n\nI don't know about better, but I tend to phrase these in a quite \ndifferent way that's (hopefully) equivalent:\n\nselect latest.regn_no,\n latest.transfer_date as date_in\nfrom sheep_flock latest\nwhere not exists (\n select 'x'\n from sheep_flock even_later\n where latest.regn_no = even_later.regn_no\n and latest.transfer_date < even_later.transfer_date)\n and latest.flock_no = '1359'\n\nThere's no MAX() or DISTINCT here, so maybe this is easier to optimize?\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n",
"msg_date": "Thu, 17 Aug 2006 12:09:45 -0700",
"msg_from": "Scott Lamb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
{
"msg_contents": "I have no idea if there's a standard name or what it may be, but for \nwhat it's worth, this sounds similar to the optimizations I wanted \nfor a different query:\n\nhttp://archives.postgresql.org/pgsql-performance/2005-11/msg00037.php\n\n1. Recognize that a term constant across the whole sort is \nirrelevant. (In my earlier case, a constant number, but here MAX \n(xxx), which seems harder.)\n2. Put together two sequences already in the appropriate order, \nwithout resorting. (In my case, a union; here a join.)\n\nthough I no longer need them for that problem. I'm quite happy with \nthe client-side solution we came up with.\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n",
"msg_date": "Thu, 17 Aug 2006 12:20:11 -0700",
"msg_from": "Scott Lamb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
{
"msg_contents": "\n\nOn 16 Aug 2006 at 17:48, Peter Hardman wrote:\n\n> I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user \n> Paradox to a web based interface to either MySQL or PostgreSQL.\n<snip> \n\nI've uploaded my data to www.shetland-sheep.org.uk/pgdata/sheep-flock.zip\n\nThe flock SSBXXX is the 'big flock in the sky' and thus there should never be any \ndate for a sheep greater than this. \n\nYes, the primary key is regn_no + flock_no + transfer_date.\n\nThanks again for all the help and advice.\n\nRegards,-- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Thu, 17 Aug 2006 20:58:20 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "On 17 Aug 2006 at 14:33, Tom Lane wrote:\n\n> I wrote:\n> > Anywy, your point about the sort being redundant is a good one, and\n> > offhand I'd have expected PG to catch that; I'll have to look into\n> > why it didn't. But that's not going to explain a 10x speed\n> > difference, because the sort isn't 90% of the runtime.\n> \n> I dug into this using some made-up test data, and was able to reproduce\n> the plan you got after changing the order of the pkey index columns\n> to (regn_no, transfer_date, flock_no) ... are you sure you quoted that\n> accurately before?\n\nYes. Maybe the data I've uploaded to www.shetland-\nsheep.org.uk/pgdata/sheep_flock.zip will help reproduce the plan.\n\n<snip> \n> I found a couple of minor planner problems, which I've repaired in CVS\n> HEAD. You might consider using TEXT columns instead of VARCHAR(n),\n> because the only bug that actually seemed to change the chosen plan\n> involved the planner getting confused by the difference between\n> varchar_var and varchar_var::text (which is what gets generated for\n> sorting purposes because varchar doesn't have a separate sort operator).\n\nAs someone else suggested, these fields ought really to be CHAR no VARCHAR. \nI chose VARCHAR because the data mostly is shorter than the maximum lengths \n(although probably not enough to matter). I'd not really got into the subtleties of \ndifferent behaviour of CHAR and VARCHAR.\n> \n<snip> \n\nRegards,-- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n",
"msg_date": "Thu, 17 Aug 2006 21:13:11 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
{
"msg_contents": "\n\n\n\n\n\nOn 17 Aug 2006 at 20:58, Peter Hardman wrote:\n\n\n> \n> \n> On 16 Aug 2006 at 17:48, Peter Hardman wrote:\n> \n> > I'm in the process of migrating a Paradox 7/BDE 5.01 database from single-user \n> > Paradox to a web based interface to either MySQL or PostgreSQL.\n> <snip> \n> \n> I've uploaded my data to www.shetland-sheep.org.uk/pgdata/sheep-flock.zip\n\n\nSorry - that should be www.shetland-sheep.org.uk/pgdata/sheep_flock.zip\n> \n> The flock SSBXXX is the 'big flock in the sky' and thus there should never be any \n> date for a sheep greater than this. \n> \n> Yes, the primary key is regn_no + flock_no + transfer_date.\n> \n> Thanks again for all the help and advice.\n> \n> Regards,-- \n> Peter Hardman\n> Acre Cottage, Horsebridge\n> King's Somborne\n> Stockbridge\n> SO20 6PT\n> \n> == Breeder of Shetland Cattle and Shetland Sheep ==\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \nPeter Hardman\nAcre Cottage, Horsebridge\nKing's Somborne\nStockbridge\nSO20 6PT\n\n\n== Breeder of Shetland Cattle and Shetland Sheep ==\n\n\n\n",
"msg_date": "Thu, 17 Aug 2006 21:25:21 +0100",
"msg_from": "\"Peter Hardman\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL"
},
{
"msg_contents": "Peter, I compared these using the data you supplied on my PostgreSQL \n8.1.4 system:\n\nOn Aug 17, 2006, at 12:09 PM, Scott Lamb wrote:\n\n> On Aug 16, 2006, at 3:51 PM, Tom Lane wrote:\n>>> /* Select all sheep who's most recent transfer was into the \n>>> subject flock */\n>>> SELECT DISTINCT f1.regn_no, f1.transfer_date as date_in\n>>> FROM SHEEP_FLOCK f1 JOIN\n>>> /* The last transfer date for each sheep */\n>>> (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n>>> FROM SHEEP_FLOCK f\n>>> GROUP BY f.regn_no) f2\n>>> ON f1.regn_no = f2.regn_no\n>>> WHERE f1.flock_no = '1359'\n>>> AND f1.transfer_date = f2.last_xfer_date\n\n \n QUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------------------------\nUnique (cost=2575.07..2575.08 rows=1 width=36) (actual \ntime=1083.579..1083.696 rows=32 loops=1)\n -> Sort (cost=2575.07..2575.07 rows=1 width=36) (actual \ntime=1083.576..1083.613 rows=32 loops=1)\n Sort Key: f1.regn_no, f1.transfer_date\n -> Nested Loop (cost=1364.00..2575.06 rows=1 width=36) \n(actual time=287.895..1083.297 rows=32 loops=1)\n -> HashAggregate (cost=1364.00..1366.50 rows=200 \nwidth=36) (actual time=262.345..337.940 rows=38815 loops=1)\n -> Seq Scan on sheep_flock f \n(cost=0.00..1116.00 rows=49600 width=36) (actual time=0.005..119.282 \nrows=81802 loops=1)\n -> Index Scan using sheep_flock_pkey on sheep_flock \nf1 (cost=0.00..6.02 rows=1 width=36) (actual time=0.016..0.016 \nrows=0 loops=38815)\n Index Cond: (((f1.regn_no)::text = \n(\"outer\".regn_no)::text) AND ((f1.flock_no)::text = '1359'::text) AND \n(f1.transfer_date = \"outer\".\"?column2?\"))\nTotal runtime: 1085.115 ms\n(9 rows)\n\n>>\n>> This seems pretty closely related to this recent thread:\n>> http://archives.postgresql.org/pgsql-performance/2006-08/msg00220.php\n>> in which the OP is doing a very similar kind of query in almost \n>> exactly\n>> the same way.\n>>\n>> I can't help thinking that there's probably a better way to phrase \n>> this\n>> type of query in SQL, though it's not jumping out at me what that is.\n>\n> I don't know about better, but I tend to phrase these in a quite \n> different way that's (hopefully) equivalent:\n>\n> select latest.regn_no,\n> latest.transfer_date as date_in\n> from sheep_flock latest\n> where not exists (\n> select 'x'\n> from sheep_flock even_later\n> where latest.regn_no = even_later.regn_no\n> and latest.transfer_date < even_later.transfer_date)\n> and latest.flock_no = '1359'\n>\n> There's no MAX() or DISTINCT here, so maybe this is easier to \n> optimize?\n\n Q \nUERY PLAN\n------------------------------------------------------------------------ \n------------------------------------------------------------------------ \n------\nBitmap Heap Scan on sheep_flock latest (cost=764.60..2185.05 \nrows=124 width=36) (actual time=11.915..13.800 rows=32 loops=1)\n Recheck Cond: ((flock_no)::text = '1359'::text)\n Filter: (NOT (subplan))\n -> Bitmap Index Scan on sheep_flock_pkey (cost=0.00..764.60 \nrows=248 width=0) (actual time=10.950..10.950 rows=127 loops=1)\n Index Cond: ((flock_no)::text = '1359'::text)\n SubPlan\n -> Index Scan using sheep_flock_pkey on sheep_flock \neven_later (cost=0.00..317.49 rows=83 width=0) (actual \ntime=0.016..0.016 rows=1 loops=127)\n Index Cond: ((($0)::text = (regn_no)::text) AND ($1 < \ntransfer_date))\nTotal runtime: 13.902 ms\n(9 rows)\n\nseems to return the same data in two orders of magnitude less 
time.\n\n-- \nScott Lamb <http://www.slamb.org/>\n\n\n",
"msg_date": "Thu, 17 Aug 2006 15:00:52 -0700",
"msg_from": "Scott Lamb <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
{
"msg_contents": "\"Peter Hardman\" <[email protected]> writes:\n> On 17 Aug 2006 at 14:33, Tom Lane wrote:\n>> I found a couple of minor planner problems, which I've repaired in CVS\n>> HEAD. You might consider using TEXT columns instead of VARCHAR(n),\n\n> As someone else suggested, these fields ought really to be CHAR no VARCHAR. \n\nThat should be fine too. VARCHAR is sort of a poor stepchild in\nPostgres, because it piggybacks on TEXT's operators --- but CHAR\nhas different comparison rules, hence its own operators, hence\ndoesn't trip over that bug.\n\nThere's still some things that don't add up to me, like the question\nof the pkey column order. Will look some more.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Aug 2006 20:31:51 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and MySQL "
},
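The workaround Tom suggests (TEXT instead of VARCHAR(n)) can also be applied in place. A minimal sketch, assuming an 8.0 or later server and that nothing relies on the declared length limits:

    -- Convert the keys to TEXT so comparisons use TEXT's own operators;
    -- ALTER ... TYPE rewrites the table and rebuilds indexes on these columns.
    ALTER TABLE sheep_flock ALTER COLUMN regn_no TYPE text;
    ALTER TABLE sheep_flock ALTER COLUMN flock_no TYPE text;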
{
"msg_contents": "On Thu, 2006-08-17 at 14:33 -0400, Tom Lane wrote:\n\n> There's a more interesting issue, which I'm afraid we do not have time\n> to fix for PG 8.2. The crux of the matter is that given\n> \n> SELECT ...\n> FROM SHEEP_FLOCK f1 JOIN \n> (SELECT f.regn_no, MAX(f.transfer_date) as last_xfer_date\n> FROM SHEEP_FLOCK f\n> GROUP BY f.regn_no) f2 \n> ON f1.regn_no = f2.regn_no\n> AND f1.transfer_date = f2.last_xfer_date\n> \n> if there is an index on (regn_no, transfer_date) then the planner could\n> in principle do a double-column merge join between an indexscan on this\n> index and the output of a GroupAggregate implementation of the subquery.\n> The GroupAggregate plan would in fact only be sorting on regn_no, so\n> it's not immediately obvious why this is OK. The reason is that there\n> is only one MAX() value for any particular regn_no, and so the sort\n> condition that the last_xfer_date values be in order for any one value\n> of regn_no is vacuous. We could consider the subquery's output to be\n> sorted by *any* list of the form \"regn_no, other-stuff\".\n> \n> The planner's notion of matching pathkey lists to determine sortedness\n> is not at all capable of dealing with this. After a little bit of\n> thought I'm tempted to propose that we add a concept that a particular\n> pathkey list is \"unique\", meaning that it is known to include a unique\n> key for the data. Such a key would match, for sortedness purposes,\n> any requested sort ordering consisting of its columns followed by\n> others. In the above example, we would know that a GROUP BY implemented\n> by GroupAggregate yields an output for which the grouping columns\n> are a unique sort key.\n> \n> I imagine this concept is already known in the database research\n> literature; anyone recognize it and know a standard name for it?\n\n(catching up on some earlier mails....)\n\nNot seen any particular name for that around. There are quite a few\nplaces in the optimizer, IIRC, that could use the concept of uniqueness\nif it existed.\n\nI would note that the above query plan is similar-ish to the one you'd\nget if you tried to push down the GROUP BY from the top of a join. So\nthe uniqueness information sounds like an important precursor to that.\n\nI've just rechecked out the lit I was reading on this earlier this year:\nhttp://portal.acm.org/ft_gateway.cfm?id=233320&type=pdf&coll=&dl=acm&CFID=15151515&CFTOKEN=6184618#search=%22db2%20order%20optimization%20tpc-d%22\n\"Fundamental Techniques for Order Optimization\" Simmen et al\n\nAlso, IIRC, there was some work talking about extending the Interesting\nOrder concept to allow groupings to be noted also.\n\n>From our work on sorting earlier, we had it that a Merge Join will\nalways require a Mark/Restore operation on its sorted inputs. If the\nOuter input is unique then a Restore operation will never be required,\nso the Mark can be avoided also and thus the materialization of the sort\ncan also be avoided. So some way of telling the MJ node that the sort\norder is also unique would be very useful.\n\n-- \n Simon Riggs \n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Mon, 02 Oct 2006 14:06:50 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL runs a query much slower than BDE and"
}
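The plan shape discussed above presupposes a two-column index on (regn_no, transfer_date). The thread does not show the exact DDL, so the index name below is made up; this is just a sketch of what would have to exist for the double-column merge join to be possible:

    CREATE INDEX sheep_flock_regn_xfer_idx
        ON sheep_flock (regn_no, transfer_date);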
] |
[
{
"msg_contents": "I have two simple queries that do what I believe to be the exact same\nthing. I was surprised to see a reliable, and what I consider to be\nsignificant (although not problematic for my application) difference\nin execution time. It hints to me that PostgreSQL may be missing an\noptimization opportunity? This is on PostgreSQL 8.1.4.\n\nFor a quick summary of the relationships:\n\nI have a 79 row \"system\" table that describes each ClearCase system.\nClearCase uses uuid to uniquely identify database objects across the\nlife of the object. For this table, I store uid as a varchar(80), and\nhave a unique index on it:\n\neudb=> \\d sm_system\n Table \"public.sm_system\"\n Column | Type | Modifiers \n-------------+------------------------+-----------------------------------------------------------------\n system_dbid | integer | not null default nextval('sm_system_system_dbid_seq'::regclass)\n type | character varying(10) | not null\n uid | character varying(200) | not null\n name | character varying(200) | not null\n owner | character varying(80) | not null\nIndexes:\n \"sm_system_pkey\" PRIMARY KEY, btree (system_dbid) CLUSTER\n \"sm_system_type_key\" UNIQUE, btree (\"type\", uid)\nCheck constraints:\n \"sm_system_type_check\" CHECK (\"type\"::text = 'NEU'::text OR \"type\"::text = 'PLS'::text)\n\nI have a 339,586 row \"change\" table that describes each ClearCase\nactivity. Each activity has a name that should be unique, but may not\nbe unique across time. Uniqueness is relative to the system that\ncontains it.\n\n Table \"public.sm_change\"\n Column | Type | Modifiers\n----------------+--------------------------------+-----------------------------------------------------------------\n change_dbid | integer | not null default nextval('sm_change_change_dbid_seq'::regclass)\n system_dbid | integer | not null\n stream_dbid | integer | not null\n uid | character varying(200) | not null\n name | character varying(200) | not null\n status | character varying(20) | not null\n owner | character varying(80) | not null\n target | integer |\n creationtime | timestamp(0) without time zone | not null\n submissiontime | timestamp(0) without time zone | not null\n comments | text |\n elements | text |\nIndexes:\n \"sm_change_pkey\" PRIMARY KEY, btree (change_dbid) CLUSTER\n \"sm_change_system_dbid_key\" UNIQUE, btree (system_dbid, uid)\n \"sm_change_name_key\" btree (lower(name::text))\n \"sm_change_stream_dbid_key\" btree (stream_dbid)\n \"sm_change_target_key\" btree (target)\nForeign-key constraints:\n \"sm_change_stream_dbid_fkey\" FOREIGN KEY (stream_dbid) REFERENCES sm_stream(stream_dbid)\n \"sm_change_system_dbid_fkey\" FOREIGN KEY (system_dbid) REFERENCES sm_system(system_dbid)\n \"sm_change_target_fkey\" FOREIGN KEY (target) REFERENCES sm_change(change_dbid)\n\n\nOne of the made up queries that I played with was a lookup on the system uuid, and the\nactivity name. This is the one that I noticed the timing difference:\n\nneudb=> select uid, name from sm_change where system_dbid = (select system_dbid from sm_system where uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da') and lower(name) = lower('markm-Q00855572');\n uid | name\n------------------------------------------+-----------------\n ff733174.6c7411d8.900c.00:06:5b:b3:db:28 | markm-Q00855572\n(1 row)\n\nTime: 1.242 ms\n\n\nThe 1.242 ms is pretty stable. 
1.226 ms -> 1.248 ms over 5 runs.\n\nThen we have:\n\nneudb=> select sm_change.uid, sm_change.name from sm_change join sm_system using (system_dbid) where sm_system.uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da' and lower(sm_change.name) = lower('markm-Q00855572');\n uid | name\n------------------------------------------+-----------------\n ff733174.6c7411d8.900c.00:06:5b:b3:db:28 | markm-Q00855572\n(1 row)\n\nTime: 1.500 ms\n\n\nThis time is less stable - it runs from 1.394 ms -> 1.561 ms over 5 runs.\n\nAs I mentioned - for my application, I don't really care. If it took\n10 ms or more, I wouldn't care. But the difference in time bothered me.\nSo, here are the query plans that PostgreSQL selected for me:\n\n\nneudb=> explain analyze select uid, name from sm_change where system_dbid = (select system_dbid from sm_system where uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da') and lower(name) = lower('markm-Q00855572');\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------\n Index Scan using sm_change_name_key on sm_change (cost=2.99..7.82 rows=1 width=80) (actual time=0.322..0.328 rows=1 loops=1)\n Index Cond: (lower((name)::text) = 'markm-q00855572'::text)\n Filter: (system_dbid = $0)\n InitPlan\n -> Seq Scan on sm_system (cost=0.00..2.99 rows=1 width=4) (actual time=0.052..0.106 rows=1 loops=1)\n Filter: ((uid)::text = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da'::text)\n Total runtime: 0.419 ms\n(7 rows)\n\nTime: 16.494 ms\n\n\nneudb=> explain analyze select sm_change.uid, sm_change.name from sm_change join sm_system using (system_dbid) where sm_system.uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da' and lower(sm_change.name) = lower('markm-Q00855572');\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..7.83 rows=1 width=80) (actual time=0.099..0.159 rows=1 loops=1)\n Join Filter: (\"outer\".system_dbid = \"inner\".system_dbid)\n -> Index Scan using sm_change_name_key on sm_change (cost=0.00..4.83 rows=1 width=84) (actual time=0.053..0.059 rows=1 loops=1)\n Index Cond: (lower((name)::text) = 'markm-q00855572'::text)\n -> Seq Scan on sm_system (cost=0.00..2.99 rows=1 width=4) (actual time=0.030..0.077 rows=1 loops=1)\n Filter: ((uid)::text = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da'::text)\n Total runtime: 0.250 ms\n(7 rows)\n\nTime: 1.898 ms\n\n\nI'm still learning how PostgreSQL works internally. My understanding\nis that the above are essentially the same. The first finds the system\nrow using a sequential scan, then looks for the change row using the\nindex, filtering by the system value. 
The second finds the change rows\nusing the same index, expecting to find one row, and finding only one\nrow, and matches it up against the system row using a sequential scan.\n\nSo why does one reliably run faster than the other?\n\nneudb=> prepare plan1 (varchar(80), varchar(80)) as select uid, name from sm_change where system_dbid = (select system_dbid from sm_system where uid = $1) and lower(name) = lower($2);\n\nneudb=> prepare plan2 (varchar(80), varchar(80)) as select sm_change.uid, sm_change.name from sm_change join sm_system using (system_dbid) where sm_system.uid = $1 and lower(sm_change.name) = lower($2);\n\nNow:\n\nneudb=> execute plan1 ('2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da', 'markm-q00855572');\n uid | name\n------------------------------------------+-----------------\n ff733174.6c7411d8.900c.00:06:5b:b3:db:28 | markm-Q00855572\n(1 row)\n\nTime: 0.794 ms\n\n\nneudb=> execute plan2 ('2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da', 'markm-q00855572');\n uid | name\n------------------------------------------+-----------------\n ff733174.6c7411d8.900c.00:06:5b:b3:db:28 | markm-Q00855572\n(1 row)\n\nTime: 0.715 ms\n\n\nThe numbers above don't mean anything. I ran both a few dozen times, and my conclusion\nis that after the plan is prepared (I did explain analyze to ensure that the prepared\nplans were the same as the dynamically generated plans), the times are the same. Both\nranged from 0.690 ms -> 0.850 ms. Timings at these resolutions are not so reliable. :-)\n\nI think this means that the planner takes longer to figure out what to do about the\njoin, and that my writing the select out as an embedded select reduces the effort\nrequired by the planner. This makes sense to me, except that I thought PostgreSQL\nwould convert back and forth between the two forms automatically. They are the same\nquery, are they not? Why wouldn't they both take longer, or both take shorter? What\nif I invented a scenario where the difference in plans made a major difference,\nsuch as making the system table much larger, still without an index? Should they\nnot both come up with the same plan - the better estimated plan?\n\nAm I expecting too much? :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 17 Aug 2006 20:33:27 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Q: Performance of join vs embedded query for simple queries?"
},
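For reference, the same lookup can also be phrased with EXISTS. This is an added illustration rather than a form benchmarked in the thread; like the IN form Tom introduces in his reply, it returns each sm_change row at most once:

    SELECT c.uid, c.name
    FROM sm_change c
    WHERE lower(c.name) = lower('markm-Q00855572')
      AND EXISTS (SELECT 1
                  FROM sm_system s
                  WHERE s.system_dbid = c.system_dbid
                    AND s.uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da');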
{
"msg_contents": "[email protected] writes:\n> I have two simple queries that do what I believe to be the exact same\n> thing.\n\nThese are actually not equivalent per spec.\n\n> neudb=> select uid, name from sm_change where system_dbid = (select system_dbid from sm_system where uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da') and lower(name) = lower('markm-Q00855572');\n\n> neudb=> select sm_change.uid, sm_change.name from sm_change join sm_system using (system_dbid) where sm_system.uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da' and lower(sm_change.name) = lower('markm-Q00855572');\n\nThe subselect form constrains the sub-select to return at most one row\n--- you'd have gotten an error if there were more than one sm_system row\nwith that uid. The join form does not make this constraint.\n\nAnother related form is\n\nneudb=> select uid, name from sm_change where system_dbid IN (select system_dbid from sm_system where uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da') and lower(name) = lower('markm-Q00855572');\n\nThis still isn't equivalent to the join: it'll return at most one copy\nof any sm_change row, whereas you can get multiple copies of the same\nsm_change row from the join, if there were multiple matching sm_system\nrows. (Hm, given the unique index on (system_dbid, uid), I guess that\ncouldn't actually happen --- but you have to reason about it knowing\nthat that index is there, it's not obvious from the form of the query.)\n\nAnyway: given the way that the planner works, the IN form and the join\nform will probably take comparable amounts of time to plan. The \"=\nsubselect\" form is much more constrained in terms of the number of\nalternative implementations we have, so it doesn't surprise me that it\ntakes less time to plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Aug 2006 21:21:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Q: Performance of join vs embedded query for simple queries? "
},
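A small illustration of the difference Tom describes, hypothetical data assumed: with two sm_system rows sharing the same uid (possible here across the two type values), the scalar-subselect form errors out at runtime, while the join form would simply return the matching sm_change row once per match.

    -- With a duplicated sm_system.uid this form would raise:
    --   ERROR:  more than one row returned by a subquery used as an expression
    SELECT uid, name
    FROM sm_change
    WHERE system_dbid = (SELECT system_dbid FROM sm_system
                         WHERE uid = 'some-duplicated-uid')
      AND lower(name) = lower('markm-Q00855572');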
{
"msg_contents": "On Thu, Aug 17, 2006 at 09:21:33PM -0400, Tom Lane wrote:\n> [email protected] writes:\n> > I have two simple queries that do what I believe to be the exact same\n> > thing.\n> These are actually not equivalent per spec.\n> ...\n> This still isn't equivalent to the join: it'll return at most one copy\n> of any sm_change row, whereas you can get multiple copies of the same\n> sm_change row from the join, if there were multiple matching sm_system\n> rows. (Hm, given the unique index on (system_dbid, uid), I guess that\n> couldn't actually happen --- but you have to reason about it knowing\n> that that index is there, it's not obvious from the form of the query.)\n\n> Anyway: given the way that the planner works, the IN form and the join\n> form will probably take comparable amounts of time to plan. The \"=\n> subselect\" form is much more constrained in terms of the number of\n> alternative implementations we have, so it doesn't surprise me that it\n> takes less time to plan.\n\nThat makes sense. Would it be reasonable for the planner to eliminate\nplan considerations based on the existence of unique indexes, or is\nthis a fundamentally difficult thing to get right in the general case?\n\nI did the elimination in my head, which is why I considered the plans to\nbe the same. Can the planner do it?\n\nSub-millisecond planning/execution for simple queries on moderate\nhardware seems sexy... :-)\n\nThanks,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. |__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 17 Aug 2006 22:21:31 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Q: Performance of join vs embedded query for simple queries?"
},
{
"msg_contents": "On Thu, Aug 17, 2006 at 09:21:33PM -0400, Tom Lane wrote:\n> Another related form is\n> \n> neudb=> select uid, name from sm_change where system_dbid IN (select system_dbid from sm_system where uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da') and lower(name) = lower('markm-Q00855572');\n> ...\n> Anyway: given the way that the planner works, the IN form and the join\n> form will probably take comparable amounts of time to plan. The \"=\n> subselect\" form is much more constrained in terms of the number of\n> alternative implementations we have, so it doesn't surprise me that it\n> takes less time to plan.\n\nFYI: You are correct. The IN takes about as long as the join to plan,\nand does generate the same plan as the join. This restores confidence\nfor me that PostgreSQL is able to understand the two as equivalent.\n\nWith regard to that unique constraint planning - I gave you the wrong\nquery from my log. I had already thought that through, and realized\nthat my original query missed the type. The timings and plans are the\nfunctionally the same for all the three queries, with or without the\ntype qualifier. This is the table:\n\n Table \"public.sm_system\"\n Column | Type | Modifiers \n-------------+------------------------+-----------------------------------------------------------------\n system_dbid | integer | not null default nextval('sm_system_system_dbid_seq'::regclass)\n type | character varying(10) | not null\n uid | character varying(200) | not null\n name | character varying(200) | not null\n owner | character varying(80) | not null\nIndexes:\n \"sm_system_pkey\" PRIMARY KEY, btree (system_dbid) CLUSTER\n \"sm_system_type_key\" UNIQUE, btree (\"type\", uid)\nCheck constraints:\n \"sm_system_type_check\" CHECK (\"type\"::text = 'NEU'::text OR \"type\"::text = 'PLS'::text)\n\nAnd this is what the query should have been:\n\nneudb=> explain analyze select uid, name from sm_change where system_dbid IN (select system_dbid from sm_system where type = 'NEU' and uid = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da') and lower(name) = lower('markm-Q00855572');\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..7.86 rows=1 width=80) (actual time=19.438..19.453 rows=1 loops=1)\n -> Index Scan using sm_change_name_key on sm_change (cost=0.00..4.83 rows=1 width=84) (actual time=0.064..0.073 rows=1 loops=1)\n Index Cond: (lower((name)::text) = 'markm-q00855572'::text)\n -> Index Scan using sm_system_pkey on sm_system (cost=0.00..3.02 rows=1 width=4) (actual time=19.358..19.358 rows=1 loops=1)\n Index Cond: (\"outer\".system_dbid = sm_system.system_dbid)\n Filter: (((\"type\")::text = 'NEU'::text) AND ((uid)::text = '2ff5942c.dd2911d5.ad56.08:00:09:fd:1b:da'::text))\n Total runtime: 19.568 ms\n(7 rows)\n\nTime: 21.449 ms\n\n\nI guess the case isn't as simple as I thought. It would need to recognize\nthat the specification of both the 'type' and the 'uid' are static, and\nunique, therefore the argument to the IN, or the table that it is joining\nwith will be either 0 rows or 1 row. Too complicated to be worth it, eh? :-)\n\nCheers,\nmark\n\n-- \[email protected] / [email protected] / [email protected] __________________________\n. . _ ._ . . .__ . . ._. .__ . . . .__ | Neighbourhood Coder\n|\\/| |_| |_| |/ |_ |\\/| | |_ | |/ |_ | \n| | | | | \\ | \\ |__ . | | .|. 
|__ |__ | \\ |__ | Ottawa, Ontario, Canada\n\n One ring to rule them all, one ring to find them, one ring to bring them all\n and in the darkness bind them...\n\n http://mark.mielke.cc/\n\n",
"msg_date": "Thu, 17 Aug 2006 22:30:32 -0400",
"msg_from": "[email protected]",
"msg_from_op": true,
"msg_subject": "Re: Q: Performance of join vs embedded query for simple queries?"
},
{
"msg_contents": "[email protected] writes:\n> That makes sense. Would it be reasonable for the planner to eliminate\n> plan considerations based on the existence of unique indexes, or is\n> this a fundamentally difficult thing to get right in the general case?\n\nThe big obstacle to that at the moment is that we don't have any plan\ncache invalidation mechanism; so a plan that depended for correctness on\nthe existence of a unique index might silently give wrong results after\nsomeone drops the index and inserts non-unique values into the table.\n(If the plan actually *uses* the index, then you'd at least get an\naccess failure ... but if the index was merely used to make an\nassumption at plan time, you wouldn't.)\n\nThe present \"constraint_exclusion\" mechanism will actually fail in\nexactly this kind of scenario, which is why I insisted it be off by\ndefault :-(\n\nThis has been on the radar screen for awhile. I'd hoped we'd get a\nplan invalidation mechanism in place for 8.2, but seems that's not\nhappening. Eventually it'll be there, though, and then we can get\nmore aggressive about making deductions based on constraints.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 17 Aug 2006 22:37:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Q: Performance of join vs embedded query for simple queries? "
}
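For reference, the setting Tom mentions is an ordinary configuration parameter; a minimal sketch of enabling it for one session, with the caveat he gives that plans may then silently depend on constraints that could later be dropped:

    SET constraint_exclusion = on;
    SHOW constraint_exclusion;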
] |
[
{
"msg_contents": "We're having a problem with one of our queries being slow. It appears to be due\nto the index being used to go from tableA to tableB.\n\nHere are the tables:\nCREATE TABLE tableA\n(\n table_idA int8 NOT NULL DEFAULT nextval('tableA_id_seq'::regclass),\n CONSTRAINT table_idA_pk PRIMARY KEY (table_idA),\n) \nWITHOUT OIDS;\n\nCREATE TABLE tableB\n(\n table_idB int8 NOT NULL DEFAULT nextval('tableB_id_seq'::regclass),\n table_idA int8 NOT NULL,\n direction char NOT NULL,\n CONSTRAINT tableB_pk PRIMARY KEY (table_idB),\n CONSTRAINT tableB_unq UNIQUE (table_idA, direction),\n) \nWITHOUT OIDS;\n\nCREATE TABLE last_summarized\n(\n summary_name varchar(64) NOT NULL,\n summarized_id int8,\n max_session_id int8,\n CONSTRAINT last_summarized_pk PRIMARY KEY (summary_name)\n) \nWITHOUT OIDS;\n\nHere is the query:\nexplain\n SELECT * FROM \n last_summarized ls\n JOIN tableA s ON s.table_idA > ls.summarized_id AND s.table_idA\n <= ls.max_session_id\n LEFT JOIN tableB sf ON s.table_idA = sf.table_idA AND sf.direction = 'a'::\"char\"\n LEFT JOIN tableB sfb ON s.table_idA = sfb.table_idA AND sfb.direction = 'b'::\"char\"\n WHERE ls.summary_name::text = 'summary'::text \n\nSize of tables in # of rows\ntableA: 9,244,816\ntableB: 15,398,497\nlast_summarized: 1\n\n\nExplain of the above query:\n\"Hash Left Join (cost=1811349.31..18546527.89 rows=1029087 width=294)\"\n\" Hash Cond: (\"outer\".table_idA = \"inner\".table_idA)\"\n\" -> Hash Left Join (cost=915760.88..7519203.61 rows=1029087 width=219)\"\n\" Hash Cond: (\"outer\".table_idA = \"inner\".table_idA)\"\n\" -> Nested Loop (cost=0.00..126328.57 rows=1029087 width=144)\"\n\" -> Index Scan using last_summarized_pk on last_summarized ls (cost=0.00..5.98 rows=1 width=82)\"\n\" Index Cond: ((summary_name)::text = 'summary'::text)\"\n\" -> Index Scan using table_idA_pk on tableA s (cost=0.00..110886.29 rows=1029087 width=62)\"\n\" Index Cond: ((s.table_idA > \"outer\".summarized_id) AND (s.table_idA <= \"outer\".max_session_id))\"\n\" -> Hash (cost=784763.16..784763.16 rows=8100289 width=75)\"\n\" -> Bitmap Heap Scan on tableB sf (cost=216418.55..784763.16 rows=8100289 width=75)\"\n\" Recheck Cond: (direction = 'a'::\"char\")\"\n\" -> Bitmap Index Scan on tableB_unq (cost=0.00..216418.55 rows=8100289 width=0)\"\n\" Index Cond: (direction = 'a'::\"char\")\" <------ USING part of Index\n\" -> Hash (cost=775968.61..775968.61 rows=7396725 width=75)\"\n\" -> Bitmap Heap Scan on tableB sfb (cost=216418.55..775968.61 rows=7396725 width=75)\"\n\" Recheck Cond: (direction = 'b'::\"char\")\"\n\" -> Bitmap Index Scan on tableB_unq (cost=0.00..216418.55 rows=7396725 width=0)\"\n\" Index Cond: (direction = 'b'::\"char\")\" <------ USING part of Index\n\n From the above explain see inline comment(\"<------ USING part of Index\"). The table_idA column\nlooks like it is being ignored in the index Cond. If I enable sequential scan the Index Cond in\nquestion gets replaced with a Seq scan.\n\nAlso if I disable enable_bitmapscan sometimes both columns of the index(tableB_unq) will be \nused.\n\nDoes anyone know why we're experiencing this behavior?\n\n\n\n",
"msg_date": "Mon, 21 Aug 2006 13:35:53 -0400",
"msg_from": "Scott Matseas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index usage"
},
{
"msg_contents": "Scott Matseas <[email protected]> writes:\n> If I enable sequential scan the Index Cond in\n> question gets replaced with a Seq scan.\n\nWhat other planner parameters have you been fooling with?\n\nWith no data in the tables, I get a reasonably sane-looking plan,\nso I'm thinking you've chosen bad values for something or other\n(starting with enable_seqscan = off ;-))\n\nexplain\n SELECT * FROM \n last_summarized ls\n JOIN tableA s ON s.table_idA > ls.summarized_id AND s.table_idA\n <= ls.max_session_id\n LEFT JOIN tableB sf ON s.table_idA = sf.table_idA AND sf.direction = 'a'::char\n LEFT JOIN tableB sfb ON s.table_idA = sfb.table_idA AND sfb.direction = 'b'::char\n WHERE ls.summary_name::text = 'summary'::text ;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=6.16..54.51 rows=216 width=116)\n -> Nested Loop Left Join (cost=6.16..42.05 rows=216 width=95)\n -> Nested Loop (cost=6.16..29.58 rows=216 width=74)\n -> Index Scan using last_summarized_pk on last_summarized ls (cost=0.00..8.02 rows=1 width=66)\n Index Cond: ((summary_name)::text = 'summary'::text)\n -> Bitmap Heap Scan on tablea s (cost=6.16..18.32 rows=216 width=8)\n Recheck Cond: ((s.table_ida > ls.summarized_id) AND (s.table_ida <= ls.max_session_id))\n -> Bitmap Index Scan on table_ida_pk (cost=0.00..6.16 rows=216 width=0)\n Index Cond: ((s.table_ida > ls.summarized_id) AND (s.table_ida <= ls.max_session_id))\n -> Index Scan using tableb_unq on tableb sfb (cost=0.00..0.05 rows=1 width=21)\n Index Cond: ((s.table_ida = sfb.table_ida) AND (sfb.direction = 'b'::bpchar))\n -> Index Scan using tableb_unq on tableb sf (cost=0.00..0.05 rows=1 width=21)\n Index Cond: ((s.table_ida = sf.table_ida) AND (sf.direction = 'a'::bpchar))\n(13 rows)\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2006 14:26:19 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index usage "
},
{
"msg_contents": "Tom Lane wrote:\n> What other planner parameters have you been fooling with?\nHi Tom,\nThe other parameters that have been changed are:\nset join_collapse_limit to 1\nset enable_sort to off\n\nWe are using version 8.1.3. We've noticed the query plan changing depending\non the amount of data in the tables especially when the query looks at \nmore rows\nin tableA. The parameter work_mem is set to 262,144.\n\nThanks,\nScott\n\n\n\n\n",
"msg_date": "Mon, 21 Aug 2006 16:25:10 -0400",
"msg_from": "Scott Matseas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index usage"
}
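One way to re-test is to put the non-default planner settings back before running EXPLAIN again. A sketch, assuming the shipped defaults are acceptable for the experiment:

    SET enable_seqscan TO DEFAULT;       -- reportedly off in the original test
    SET enable_sort TO DEFAULT;          -- was off
    SET join_collapse_limit TO DEFAULT;  -- was 1; the shipped default is 8
    -- then re-run EXPLAIN ANALYZE on the original query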
] |
[
{
"msg_contents": "Hi,\n\n \n\nI am using PostgreSQL 8.1.4 for an embedded application. For some\nreason, vacuum is not able to identify rows that are candidates for\nremoval (i.e., mark space as available).\n\n \n\nBackground Info:\n\n \n\nI observed some performance problems - our database seemed to be using\nan unusually high amount of cpu. Further investigation of the problem\nrevealed a very bloated database; the database was around 300M when it\nshould have been about 150M. A number of the database files were quite\nlarge, however, the tables that they stored information for were very\nsmall. For example, we had one table that had only 46 rows, but was\nusing up more than 17M of disk space. We had a number of other tables\nthat were similarly large. \n\n \n\nWe run auto vacuum and I can see from the logs that it is running quite\nfrequently. When I run vacuum full from the psql, I can see that space\nis not being recovered. I have run vacuum full with the verbose flag\nset, I can see that messages that indicate the existence of \"dead row\nversions that cannot be removed yet. \n\n \n\n<--- CUT FROM VACUUM OUTPUT --->\n\nCPU 0.00s/0.00u sec elapsed 0.18 sec.\n\nINFO: \"ibportreceivestatsca\": found 0 removable, 88017 nonremovable row\nversions in 4001 pages\n\nDETAIL: 87957 dead row versions cannot be removed yet.\n\nThere were 1 unused item pointers.\n\n<--- CUT FROM VACUUM OUTPUT --->\n\n \n\nIf I shutdown our application and run a vacuum full, the space is\nrecovered and the database size goes down to 150M. \n\n \n\nSo, my best guess is that something in our application is preventing\nvacuum from removing dead rows. What could cause this? Would it be\ncaused by a long-living transaction? What is the best way to track the\nproblem down...right now, I am looking through pg_stat_activity and\npg_locks to find processes that are \"in transaction\" and what locks they\nare holding.\n\n \n\nHas anyone had a similar problem? If so, how did you resolve it?\n\n \n\nThanks\n\n \n\nIke\n\n \n\n \n\n \n\n \n\n \n\n \n\n\n\n\n\n\n\n\n\n\nHi,\n \nI am using PostgreSQL 8.1.4 for an embedded\napplication. For some reason, vacuum is not able to identify rows that\nare candidates for removal (i.e., mark space as available).\n \nBackground Info:\n \nI observed some performance problems – our database\nseemed to be using an unusually high amount of cpu. Further investigation\nof the problem revealed a very bloated database; the database was around 300M\nwhen it should have been about 150M. A number of the database files were\nquite large, however, the tables that they stored information for were very\nsmall. For example, we had one table that had only 46 rows, but was using\nup more than 17M of disk space. We had a number of other tables that were\nsimilarly large. \n \nWe run auto vacuum and I can see from the logs that it is\nrunning quite frequently. When I run vacuum full from the psql, I can see that\nspace is not being recovered. I have run vacuum full with the verbose\nflag set, I can see that messages that indicate the existence of “dead\nrow versions that cannot be removed yet. 
\n \n<--- CUT FROM VACUUM OUTPUT --->\nCPU 0.00s/0.00u sec elapsed 0.18 sec.\nINFO: \n\"ibportreceivestatsca\": found 0 removable, 88017 nonremovable row\nversions in 4001 pages\nDETAIL: 87957 dead row versions\ncannot be removed yet.\nThere were 1 unused item pointers.\n<--- CUT FROM VACUUM OUTPUT --->\n \nIf I shutdown our application and run a vacuum full, the\nspace is recovered and the database size goes down to 150M. \n \nSo, my best guess is that something in our application is\npreventing vacuum from removing dead rows. What could cause this? \nWould it be caused by a long-living transaction? What is the best way to\ntrack the problem down...right now, I am looking through pg_stat_activity and\npg_locks to find processes that are “in transaction” and what locks\nthey are holding.\n \nHas anyone had a similar problem? If so, how did you\nresolve it?\n \nThanks\n \nIke",
"msg_date": "Mon, 21 Aug 2006 11:50:02 -0700",
"msg_from": "\"Eamonn Kent\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Vacuum not identifying rows for removal.."
},
{
"msg_contents": "On Mon, 2006-08-21 at 11:50 -0700, Eamonn Kent wrote:\n\n> So, my best guess is that something in our application is preventing\n> vacuum from removing dead rows. What could cause this? Would it be\n> caused by a long-living transaction? What is the best way to track\n> the problem down...right now, I am looking through pg_stat_activity\n> and pg_locks to find processes that are “in transaction” and what\n> locks they are holding.\n\nIf you have any long running transactions - idle or active, that's your\nproblem. Vacuum can only clear out dead tuples older than that oldest\ntransaction. Deal with those. Make sure every single transaction your\napp initiates commits or rolls back every single time. \n\nYou'll generally find them in pg_stat_activity, but not always. ps may\nshow you idle transactions not showing as idle in pg_stat_activity\n \n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n",
"msg_date": "Mon, 21 Aug 2006 15:25:04 -0400",
"msg_from": "Brad Nicholson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum not identifying rows for removal.."
},
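A concrete way to do what Brad suggests on 8.1, sketched here on the assumption that stats_command_string is enabled so current_query is populated:

    -- List backends sitting idle inside an open transaction, oldest first.
    SELECT procpid, usename, backend_start, query_start, current_query
    FROM pg_stat_activity
    WHERE current_query = '<IDLE> in transaction'
    ORDER BY query_start;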
{
"msg_contents": "\"Eamonn Kent\" <[email protected]> writes:\n> I am using PostgreSQL 8.1.4 for an embedded application. For some\n> reason, vacuum is not able to identify rows that are candidates for\n> removal (i.e., mark space as available).\n> ...\n> We run auto vacuum and I can see from the logs that it is running quite\n> frequently. When I run vacuum full from the psql, I can see that space\n> is not being recovered. I have run vacuum full with the verbose flag\n> set, I can see that messages that indicate the existence of \"dead row\n> versions that cannot be removed yet.\n\nThis means you've got an open transaction somewhere that could\npotentially still be able to see those rows. Look around for\napplications sitting \"idle in transaction\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2006 17:06:28 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum not identifying rows for removal.. "
},
{
"msg_contents": "Hello,\n\nThanks for the help...It appears that a transaction is indeed being\nopened and remains idle. I am able to identify the postgreSQL backend\nprocess that is associated with the transaction, however, I need to\nfurther localize the issue. We have around 22 (postgres) backend\nprocesses associated with various application processes. I would like\nto identify our application process. \n\nI have tried using netstat -ap and looking through the logs..but, to no\navail. (Both the database and the server processes are running on the\nsame server...connected via unix sockets I believe, perhaps this is\nmaking the association difficult to determine).\n\nAny ideas of how to identify the application process that is the\npostgres process (whose id I know). Perhaps I need to turn on a\ndifferent log flag?\n\n\nThanks\n\nIke\n\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Monday, August 21, 2006 2:06 PM\nTo: Eamonn Kent\nCc: [email protected]\nSubject: Re: [PERFORM] Vacuum not identifying rows for removal.. \n\n\"Eamonn Kent\" <[email protected]> writes:\n> I am using PostgreSQL 8.1.4 for an embedded application. For some\n> reason, vacuum is not able to identify rows that are candidates for\n> removal (i.e., mark space as available).\n> ...\n> We run auto vacuum and I can see from the logs that it is running\nquite\n> frequently. When I run vacuum full from the psql, I can see that space\n> is not being recovered. I have run vacuum full with the verbose flag\n> set, I can see that messages that indicate the existence of \"dead row\n> versions that cannot be removed yet.\n\nThis means you've got an open transaction somewhere that could\npotentially still be able to see those rows. Look around for\napplications sitting \"idle in transaction\".\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 21 Aug 2006 18:27:53 -0700",
"msg_from": "\"Eamonn Kent\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum not identifying rows for removal.. "
},
{
"msg_contents": "\n> Any ideas of how to identify the application process that is the\n> postgres process (whose id I know). Perhaps I need to turn on a\n> different log flag?\n\nselect * from pg_stat_activity will give you the pid :)\n\nJoshua D. Drake\n\n\n> \n> \n> Thanks\n> \n> Ike\n> \n> \n> \n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Monday, August 21, 2006 2:06 PM\n> To: Eamonn Kent\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Vacuum not identifying rows for removal.. \n> \n> \"Eamonn Kent\" <[email protected]> writes:\n>> I am using PostgreSQL 8.1.4 for an embedded application. For some\n>> reason, vacuum is not able to identify rows that are candidates for\n>> removal (i.e., mark space as available).\n>> ...\n>> We run auto vacuum and I can see from the logs that it is running\n> quite\n>> frequently. When I run vacuum full from the psql, I can see that space\n>> is not being recovered. I have run vacuum full with the verbose flag\n>> set, I can see that messages that indicate the existence of \"dead row\n>> versions that cannot be removed yet.\n> \n> This means you've got an open transaction somewhere that could\n> potentially still be able to see those rows. Look around for\n> applications sitting \"idle in transaction\".\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 21 Aug 2006 18:55:50 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Vacuum not identifying rows for removal.."
},
{
"msg_contents": "\nHi Joshua,\n\nThanks for the info...but, what I already have the backend id. I was\ntrying to get the process id of the client application. The client is\nusing libpq and running on the same workstation. We have approximately\n22 different clients running and it would help to isolate the client\nprogram that is causing the problem. \n\nI was unable to locate the client using the backend server's process id\nwith lsof and netstat. Really the information should be there...since,\neach (I believe) each backend postgreSQL server will service a single\nclient via a unix socket (in the case where they are collocated on a\nunix workstation).\n\nThanks\n\nIke\n\n\n\n> Any ideas of how to identify the application process that is the\n> postgres process (whose id I know). Perhaps I need to turn on a\n> different log flag?\n\nselect * from pg_stat_activity will give you the pid :)\n\nJoshua D. Drake\n\n\n> \n> \n> Thanks\n> \n> Ike\n> \n> \n> \n> \n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Monday, August 21, 2006 2:06 PM\n> To: Eamonn Kent\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Vacuum not identifying rows for removal.. \n> \n> \"Eamonn Kent\" <[email protected]> writes:\n>> I am using PostgreSQL 8.1.4 for an embedded application. For some\n>> reason, vacuum is not able to identify rows that are candidates for\n>> removal (i.e., mark space as available).\n>> ...\n>> We run auto vacuum and I can see from the logs that it is running\n> quite\n>> frequently. When I run vacuum full from the psql, I can see that\nspace\n>> is not being recovered. I have run vacuum full with the verbose flag\n>> set, I can see that messages that indicate the existence of \"dead row\n>> versions that cannot be removed yet.\n> \n> This means you've got an open transaction somewhere that could\n> potentially still be able to see those rows. Look around for\n> applications sitting \"idle in transaction\".\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 22 Aug 2006 08:25:30 -0700",
"msg_from": "\"Eamonn Kent\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Vacuum not identifying rows for removal.."
}
] |
[
{
"msg_contents": "I'm exhausted to try all performance tuning ideas, like following\nparameters\n\nshared_buffers \nfsync \nmax_fsm_pages\nmax_connections \nshared_buffers \nwork_mem \nmax_fsm_pages\neffective_cache_size \nrandom_page_cost \n\nI believe all above have right size and values, but I just can not get\nhigher tps more than 300 testd by pgbench\n\nHere is our hardware\n\n\nDual Intel Xeon 2.8GHz\n6GB RAM\nLinux 2.4 kernel\nRedHat Enterprise Linux AS 3\n200GB for PGDATA on 3Par, ext3\n50GB for WAL on 3Par, ext3\n\nWith PostgreSql 8.1.4\n\nWe don't have i/o bottle neck. \n\nWhatelse I can try to better tps? Someone told me I can should get tps\nover 1500, it is hard to believe.\n\nThanks\n\nMarty\n",
"msg_date": "Mon, 21 Aug 2006 16:45:11 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to get higher tps"
},
{
"msg_contents": "On Mon, 2006-08-21 at 16:45 -0400, Marty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following\n> parameters\n> \n> shared_buffers \n> fsync \n\nBy \"tuning\" fsync, what do you mean? Did you turn it off?\n\nIf you turned fsync off, that could compromise your data in case of any\nkind of crash or power failure. However, if you turn fsync off you\nshould much higher TPS on pgbench than you're getting.\n\n\n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n\nDoes your disk controller have battery-backed writeback cache? How much?\n\n> With PostgreSql 8.1.4\n> \n> We don't have i/o bottle neck. \n> \n\nWell, chances are PostgreSQL is waiting for fsync, which means you do\nhave an I/O bottleneck (however, you're not using all of your I/O\nbandwidth, most likely).\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 21 Aug 2006 14:23:26 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Jeff,\n\nThanks for your response, I did turn the fsync off, no performance\nimprovement.\n\nSince the application is a network monring program, data is not critical\nfor us.\n\nMarty \n\n-----Original Message-----\nFrom: Jeff Davis [mailto:[email protected]] \nSent: Monday, August 21, 2006 5:23 PM\nTo: Marty Jia\nCc: [email protected]\nSubject: Re: [PERFORM] How to get higher tps\n\nOn Mon, 2006-08-21 at 16:45 -0400, Marty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following \n> parameters\n> \n> shared_buffers\n> fsync\n\nBy \"tuning\" fsync, what do you mean? Did you turn it off?\n\nIf you turned fsync off, that could compromise your data in case of any\nkind of crash or power failure. However, if you turn fsync off you\nshould much higher TPS on pgbench than you're getting.\n\n\n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n\nDoes your disk controller have battery-backed writeback cache? How much?\n\n> With PostgreSql 8.1.4\n> \n> We don't have i/o bottle neck. \n> \n\nWell, chances are PostgreSQL is waiting for fsync, which means you do\nhave an I/O bottleneck (however, you're not using all of your I/O\nbandwidth, most likely).\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 21 Aug 2006 17:38:26 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Not much we can do unless you give us more info about how you're testing\n(pgbench setup), and what you've done with the parameters you listed\nbelow. It would also be useful if you told us more about your drive\narray than just \"3Par\". We need to know the RAID level, number/speed of\ndisks, whether it's got a battery-backed write cache that's turned on,\nthings like this.\n\nLike Jeff just said, it's likely that you're waiting for rotational\nlatency, which would limit your maximum tps for sequential jobs based on\nthe number of disks in your array. For example, a 2-disk array of 10k\nRPM disks is going to max out somewhere around 333 tps. (2*10000/60).\n\n-- Mark Lewis\n\n \n\nOn Mon, 2006-08-21 at 16:45 -0400, Marty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following\n> parameters\n> \n> shared_buffers \n> fsync \n> max_fsm_pages\n> max_connections \n> shared_buffers \n> work_mem \n> max_fsm_pages\n> effective_cache_size \n> random_page_cost \n> \n> I believe all above have right size and values, but I just can not get\n> higher tps more than 300 testd by pgbench\n> \n> Here is our hardware\n> \n> \n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n> \n> With PostgreSql 8.1.4\n> \n> We don't have i/o bottle neck. \n> \n> Whatelse I can try to better tps? Someone told me I can should get tps\n> over 1500, it is hard to believe.\n> \n> Thanks\n> \n> Marty\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Mon, 21 Aug 2006 14:47:25 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
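To make Mark's rule of thumb concrete: with fsync on and no write-back cache, each commit has to wait for WAL to reach the platters, so the rough ceiling is (spindles available to the WAL) x RPM / 60. A mirrored pair of 10k RPM disks gives about 2 x 10000 / 60, roughly 333 commits per second, and a 15k RPM pair only raises that to about 500. A battery-backed write cache, as Jeff asked about earlier, absorbs those fsyncs and lifts this particular limit.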
{
"msg_contents": "Marty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following\n> parameters\n> \n> shared_buffers \n> fsync \n> max_fsm_pages\n> max_connections \n> shared_buffers \n> work_mem \n> max_fsm_pages\n> effective_cache_size \n> random_page_cost \n> \n> I believe all above have right size and values, but I just can not get\n> higher tps more than 300 testd by pgbench\n\nWhat values did you use?\n\n> \n> Here is our hardware\n> \n> \n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n> \n> With PostgreSql 8.1.4\n> \n> We don't have i/o bottle neck. \n\nAre you sure? What does iostat say during a pgbench? What parameters are \nyou passing to pgbench?\n\nWell in theory, upgrading to 2.6 kernel will help as well as making your \nWAL ext2 instead of ext3.\n\n> Whatelse I can try to better tps? Someone told me I can should get tps\n> over 1500, it is hard to believe.\n\n1500? Hmmm... I don't know about that, I can get 470tps or so on my \nmeasily dual core 3800 with 2gig of ram though.\n\nJoshua D. Drake\n\n\n> \n> Thanks\n> \n> Marty\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 21 Aug 2006 15:08:35 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
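A sketch of how to capture what Joshua asks for; the sampling interval and log path are arbitrary choices, and iostat comes from the sysstat package:

    # Sample extended per-device statistics every 5 seconds in the background,
    # run the benchmark, then stop the sampler.
    iostat -x 5 > /tmp/iostat-during-pgbench.log 2>&1 &
    pgbench -c 10 -t 10000 HQDB
    kill %1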
{
"msg_contents": "On Mon, 2006-08-21 at 15:45, Marty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following\n> parameters\n> \n> shared_buffers \n> fsync \n> max_fsm_pages\n> max_connections \n> shared_buffers \n> work_mem \n> max_fsm_pages\n> effective_cache_size \n> random_page_cost \n> \n> I believe all above have right size and values, but I just can not get\n> higher tps more than 300 testd by pgbench\n> \n> Here is our hardware\n> \n> \n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n> \n> With PostgreSql 8.1.4\n\nI assume this is on a blade server then? Just guessing. I'd suspect\nyour vscsi drivers if that's the case. Look into getting the latest\ndrivers for your hardware platform and your scsi/vscsi etc... drivers. \nIf you're connecting through a fibrechannel card make sure you've got\nthe latest drivers for that as well.\n\n1500, btw, is quite high. Most fast machines I've dealt with were\nhitting 600 to 800 tps on fairly good sized RAID arrays.\n\nYou may be able to put your pg_xlog on a sep partition / set of spindles\nand get some perf gain. Also look into how your drives are configured. \nThe more drives you can throw into a RAID 10 the better. RAID 5 will\nusually never give as good of write performance as RAID 10, although it\ngets better as the number of drives increases.\n",
"msg_date": "Mon, 21 Aug 2006 18:43:50 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Hi, Mark\n\nThanks, here is our hardware info:\n\nRAID 10, using 3Par virtual volume technology across ~200 physical FC\ndisks. 4 virtual disks for PGDATA, striped with LVM into one volume, 2\nvirtual disks for WAL, also striped. SAN attached with Qlogic SAN\nsurfer multipathing to load balance each LUN on two 2GBs paths. HBAs\nare Qlogic 2340's. 16GB host cache on 3Par.\n\nDetailed major config values\n\nshared_buffers = 80000\nfsync = on\nmax_fsm_pages = 350000\nmax_connections = 1000\nwork_mem = 65536\neffective_cache_size = 610000\nrandom_page_cost = 3\n\n\nMarty\n \n\n-----Original Message-----\nFrom: Mark Lewis [mailto:[email protected]] \nSent: Monday, August 21, 2006 5:47 PM\nTo: Marty Jia\nCc: [email protected]\nSubject: Re: [PERFORM] How to get higher tps\n\nNot much we can do unless you give us more info about how you're testing\n(pgbench setup), and what you've done with the parameters you listed\nbelow. It would also be useful if you told us more about your drive\narray than just \"3Par\". We need to know the RAID level, number/speed of\ndisks, whether it's got a battery-backed write cache that's turned on,\nthings like this.\n\nLike Jeff just said, it's likely that you're waiting for rotational\nlatency, which would limit your maximum tps for sequential jobs based on\nthe number of disks in your array. For example, a 2-disk array of 10k\nRPM disks is going to max out somewhere around 333 tps. (2*10000/60).\n\n-- Mark Lewis\n\n \n\nOn Mon, 2006-08-21 at 16:45 -0400, Marty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following \n> parameters\n> \n> shared_buffers\n> fsync\n> max_fsm_pages\n> max_connections\n> shared_buffers\n> work_mem\n> max_fsm_pages\n> effective_cache_size\n> random_page_cost\n> \n> I believe all above have right size and values, but I just can not get\n\n> higher tps more than 300 testd by pgbench\n> \n> Here is our hardware\n> \n> \n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n> \n> With PostgreSql 8.1.4\n> \n> We don't have i/o bottle neck. \n> \n> Whatelse I can try to better tps? Someone told me I can should get tps\n\n> over 1500, it is hard to believe.\n> \n> Thanks\n> \n> Marty\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n",
"msg_date": "Tue, 22 Aug 2006 09:16:34 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Joshua,\n\nHere is \n\nshared_buffers = 80000\nfsync = on\nmax_fsm_pages = 350000\nmax_connections = 1000\nwork_mem = 65536\neffective_cache_size = 610000\nrandom_page_cost = 3\n \nHere is pgbench I used:\n\npgbench -c 10 -t 10000 -d HQDB\n\nThanks\n\nMarty\n\n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]] \nSent: Monday, August 21, 2006 6:09 PM\nTo: Marty Jia\nCc: [email protected]\nSubject: Re: [PERFORM] How to get higher tps\n\nMarty Jia wrote:\n> I'm exhausted to try all performance tuning ideas, like following \n> parameters\n> \n> shared_buffers\n> fsync\n> max_fsm_pages\n> max_connections\n> shared_buffers\n> work_mem\n> max_fsm_pages\n> effective_cache_size\n> random_page_cost\n> \n> I believe all above have right size and values, but I just can not get\n\n> higher tps more than 300 testd by pgbench\n\nWhat values did you use?\n\n> \n> Here is our hardware\n> \n> \n> Dual Intel Xeon 2.8GHz\n> 6GB RAM\n> Linux 2.4 kernel\n> RedHat Enterprise Linux AS 3\n> 200GB for PGDATA on 3Par, ext3\n> 50GB for WAL on 3Par, ext3\n> \n> With PostgreSql 8.1.4\n> \n> We don't have i/o bottle neck. \n\nAre you sure? What does iostat say during a pgbench? What parameters are\nyou passing to pgbench?\n\nWell in theory, upgrading to 2.6 kernel will help as well as making your\nWAL ext2 instead of ext3.\n\n> Whatelse I can try to better tps? Someone told me I can should get tps\n\n> over 1500, it is hard to believe.\n\n1500? Hmmm... I don't know about that, I can get 470tps or so on my\nmeasily dual core 3800 with 2gig of ram though.\n\nJoshua D. Drake\n\n\n> \n> Thanks\n> \n> Marty\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 22 Aug 2006 09:19:40 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Well, at least on my test machines running gnome-terminal, my pgbench\nruns tend to get throttled by gnome-terminal's lousy performance to no\nmore than 300 tps or so. Running with 2>/dev/null to throw away all the\ndetailed logging gives me 2-3x improvement in scores. Caveat: in my\ncase the db is on the local machine, so who knows what all the\ninteractions are.\n\nAlso, when you initialized the pgbench db what scaling factor did you\nuse? And does running pgbench with -v improve performance at all?\n\n-- Mark\n\nOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> Joshua,\n> \n> Here is \n> \n> shared_buffers = 80000\n> fsync = on\n> max_fsm_pages = 350000\n> max_connections = 1000\n> work_mem = 65536\n> effective_cache_size = 610000\n> random_page_cost = 3\n> \n> Here is pgbench I used:\n> \n> pgbench -c 10 -t 10000 -d HQDB\n> \n> Thanks\n> \n> Marty\n> \n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]] \n> Sent: Monday, August 21, 2006 6:09 PM\n> To: Marty Jia\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> Marty Jia wrote:\n> > I'm exhausted to try all performance tuning ideas, like following \n> > parameters\n> > \n> > shared_buffers\n> > fsync\n> > max_fsm_pages\n> > max_connections\n> > shared_buffers\n> > work_mem\n> > max_fsm_pages\n> > effective_cache_size\n> > random_page_cost\n> > \n> > I believe all above have right size and values, but I just can not get\n> \n> > higher tps more than 300 testd by pgbench\n> \n> What values did you use?\n> \n> > \n> > Here is our hardware\n> > \n> > \n> > Dual Intel Xeon 2.8GHz\n> > 6GB RAM\n> > Linux 2.4 kernel\n> > RedHat Enterprise Linux AS 3\n> > 200GB for PGDATA on 3Par, ext3\n> > 50GB for WAL on 3Par, ext3\n> > \n> > With PostgreSql 8.1.4\n> > \n> > We don't have i/o bottle neck. \n> \n> Are you sure? What does iostat say during a pgbench? What parameters are\n> you passing to pgbench?\n> \n> Well in theory, upgrading to 2.6 kernel will help as well as making your\n> WAL ext2 instead of ext3.\n> \n> > Whatelse I can try to better tps? Someone told me I can should get tps\n> \n> > over 1500, it is hard to believe.\n> \n> 1500? Hmmm... I don't know about that, I can get 470tps or so on my\n> measily dual core 3800 with 2gig of ram though.\n> \n> Joshua D. Drake\n> \n> \n> > \n> > Thanks\n> > \n> > Marty\n> > \n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n> > \n> \n> \n",
"msg_date": "Tue, 22 Aug 2006 07:31:49 -0700",
"msg_from": "Mark Lewis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
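Two notes on the pgbench invocation shown above, offered as a sketch rather than a measured fix. First, in the pgbench shipped with this era of PostgreSQL the database name is a positional argument and -d turns on per-transaction debug output, which floods the terminal in exactly the way Mark describes. Second, the scaling factor used at initialization should be at least as large as the client count, otherwise all ten clients contend for the same branches rows:

    # Initialize with scaling factor 100 (about 10 million accounts rows),
    # then run without debug output; -v vacuums the tables beforehand.
    pgbench -i -s 100 HQDB
    pgbench -v -c 10 -t 10000 HQDB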
{
"msg_contents": "On Tue, 2006-08-22 at 08:16, Marty Jia wrote:\n> Hi, Mark\n> \n> Thanks, here is our hardware info:\n> \n> RAID 10, using 3Par virtual volume technology across ~200 physical FC\n> disks. 4 virtual disks for PGDATA, striped with LVM into one volume, 2\n> virtual disks for WAL, also striped. SAN attached with Qlogic SAN\n> surfer multipathing to load balance each LUN on two 2GBs paths. HBAs\n> are Qlogic 2340's. 16GB host cache on 3Par.\n\nA few points. \n\nSomeone (Luke I think) posted that Linux's LVM has a throughput limit of\naround 600 Megs/second.\n\nWhy are you using multiple virtual disks on an LPAR? Did you try this\nwith just a single big virtual disk first to have something to compare\nit to? I think your disk subsystem is overthought for an LPAR. If you\nwere running physical disks on a locally attached RAID card, it would be\na good idea. But here you're just adding layers of complexity for no\ngain, and in fact may be heading backwards.\n\nI'd make two volumes on the LPAR, and let the LPAR do all the\nvirtualization for you. Put a couple disks in a mirror set for the\npg_xlog, format it ext2, and mount it noatime. Make another from a\ndozen or so disks in an RAID 0 on top of RAID 1 (i.e. make a bunch of\nmirror sets and stripe them into one big partition) and mount that for\nPGDATA. Simplify, and get a baseline. Then, start mucking about to see\nif you can get better performance. change ONE THING at a time, and only\none thing, and test it well.\n\nGot the latest and greatest drivers for the qlogic cards?\n\nI would suggest some component testing to make sure everything is\nworking well. bonnie++ and dd come to mind.\n",
"msg_date": "Tue, 22 Aug 2006 10:26:41 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
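A rough sketch of the baseline component tests Scott recommends; /pgdata and /pgxlog are hypothetical mount points standing in for the real volumes, and the test sizes assume the 6GB of RAM quoted in the thread (use roughly twice RAM so the OS cache cannot hide the disks):

    # raw sequential write throughput on the data volume (~12GB of 8kB blocks)
    dd if=/dev/zero of=/pgdata/ddtest bs=8k count=1500000
    # full bonnie++ pass, run as the postgres user
    bonnie++ -d /pgdata -s 12g -u postgres
    # remount the WAL volume without atime updates
    mount -o remount,noatime /pgxlog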
{
"msg_contents": "First things first, run a bonnie++ benchmark, and post the numbers. That\nwill give a good indication of raw IO performance, and is often the first\ninidication of problems separate from the DB. We have seen pretty bad\nperformance from SANs in the past. How many FC lines do you have running to\nyour server, remember each line is limited to about 200MB/sec, to get good\nthroughput, you will need multiple connections.\n\nWhen you run pgbench, run a iostat also and see what the numbers say.\n\nAlex.\n\nOn 8/22/06, Mark Lewis <[email protected]> wrote:\n>\n> Well, at least on my test machines running gnome-terminal, my pgbench\n> runs tend to get throttled by gnome-terminal's lousy performance to no\n> more than 300 tps or so. Running with 2>/dev/null to throw away all the\n> detailed logging gives me 2-3x improvement in scores. Caveat: in my\n> case the db is on the local machine, so who knows what all the\n> interactions are.\n>\n> Also, when you initialized the pgbench db what scaling factor did you\n> use? And does running pgbench with -v improve performance at all?\n>\n> -- Mark\n>\n> On Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> > Joshua,\n> >\n> > Here is\n> >\n> > shared_buffers = 80000\n> > fsync = on\n> > max_fsm_pages = 350000\n> > max_connections = 1000\n> > work_mem = 65536\n> > effective_cache_size = 610000\n> > random_page_cost = 3\n> >\n> > Here is pgbench I used:\n> >\n> > pgbench -c 10 -t 10000 -d HQDB\n> >\n> > Thanks\n> >\n> > Marty\n> >\n> > -----Original Message-----\n> > From: Joshua D. Drake [mailto:[email protected]]\n> > Sent: Monday, August 21, 2006 6:09 PM\n> > To: Marty Jia\n> > Cc: [email protected]\n> > Subject: Re: [PERFORM] How to get higher tps\n> >\n> > Marty Jia wrote:\n> > > I'm exhausted to try all performance tuning ideas, like following\n> > > parameters\n> > >\n> > > shared_buffers\n> > > fsync\n> > > max_fsm_pages\n> > > max_connections\n> > > shared_buffers\n> > > work_mem\n> > > max_fsm_pages\n> > > effective_cache_size\n> > > random_page_cost\n> > >\n> > > I believe all above have right size and values, but I just can not get\n> >\n> > > higher tps more than 300 testd by pgbench\n> >\n> > What values did you use?\n> >\n> > >\n> > > Here is our hardware\n> > >\n> > >\n> > > Dual Intel Xeon 2.8GHz\n> > > 6GB RAM\n> > > Linux 2.4 kernel\n> > > RedHat Enterprise Linux AS 3\n> > > 200GB for PGDATA on 3Par, ext3\n> > > 50GB for WAL on 3Par, ext3\n> > >\n> > > With PostgreSql 8.1.4\n> > >\n> > > We don't have i/o bottle neck.\n> >\n> > Are you sure? What does iostat say during a pgbench? What parameters are\n> > you passing to pgbench?\n> >\n> > Well in theory, upgrading to 2.6 kernel will help as well as making your\n> > WAL ext2 instead of ext3.\n> >\n> > > Whatelse I can try to better tps? Someone told me I can should get tps\n> >\n> > > over 1500, it is hard to believe.\n> >\n> > 1500? Hmmm... I don't know about that, I can get 470tps or so on my\n> > measily dual core 3800 with 2gig of ram though.\n> >\n> > Joshua D. 
Drake\n> >\n> >\n> > >\n> > > Thanks\n> > >\n> > > Marty\n> > >\n> > > ---------------------------(end of\n> > > broadcast)---------------------------\n> > > TIP 2: Don't 'kill -9' the postmaster\n> > >\n> >\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>",
"msg_date": "Tue, 22 Aug 2006 11:26:57 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Oh - and it's usefull to know if you are CPU bound, or IO bound. Check top\nor vmstat to get an idea of that\n\nAlex\n\nOn 8/22/06, Alex Turner <[email protected]> wrote:\n>\n> First things first, run a bonnie++ benchmark, and post the numbers. That\n> will give a good indication of raw IO performance, and is often the first\n> inidication of problems separate from the DB. We have seen pretty bad\n> performance from SANs in the past. How many FC lines do you have running to\n> your server, remember each line is limited to about 200MB/sec, to get good\n> throughput, you will need multiple connections.\n>\n> When you run pgbench, run a iostat also and see what the numbers say.\n>\n> Alex.\n>\n>\n> On 8/22/06, Mark Lewis < [email protected]> wrote:\n> >\n> > Well, at least on my test machines running gnome-terminal, my pgbench\n> > runs tend to get throttled by gnome-terminal's lousy performance to no\n> > more than 300 tps or so. Running with 2>/dev/null to throw away all the\n> > detailed logging gives me 2-3x improvement in scores. Caveat: in my\n> > case the db is on the local machine, so who knows what all the\n> > interactions are.\n> >\n> > Also, when you initialized the pgbench db what scaling factor did you\n> > use? And does running pgbench with -v improve performance at all?\n> >\n> > -- Mark\n> >\n> > On Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> > > Joshua,\n> > >\n> > > Here is\n> > >\n> > > shared_buffers = 80000\n> > > fsync = on\n> > > max_fsm_pages = 350000\n> > > max_connections = 1000\n> > > work_mem = 65536\n> > > effective_cache_size = 610000\n> > > random_page_cost = 3\n> > >\n> > > Here is pgbench I used:\n> > >\n> > > pgbench -c 10 -t 10000 -d HQDB\n> > >\n> > > Thanks\n> > >\n> > > Marty\n> > >\n> > > -----Original Message-----\n> > > From: Joshua D. Drake [mailto:[email protected]]\n> > > Sent: Monday, August 21, 2006 6:09 PM\n> > > To: Marty Jia\n> > > Cc: [email protected]\n> > > Subject: Re: [PERFORM] How to get higher tps\n> > >\n> > > Marty Jia wrote:\n> > > > I'm exhausted to try all performance tuning ideas, like following\n> > > > parameters\n> > > >\n> > > > shared_buffers\n> > > > fsync\n> > > > max_fsm_pages\n> > > > max_connections\n> > > > shared_buffers\n> > > > work_mem\n> > > > max_fsm_pages\n> > > > effective_cache_size\n> > > > random_page_cost\n> > > >\n> > > > I believe all above have right size and values, but I just can not\n> > get\n> > >\n> > > > higher tps more than 300 testd by pgbench\n> > >\n> > > What values did you use?\n> > >\n> > > >\n> > > > Here is our hardware\n> > > >\n> > > >\n> > > > Dual Intel Xeon 2.8GHz\n> > > > 6GB RAM\n> > > > Linux 2.4 kernel\n> > > > RedHat Enterprise Linux AS 3\n> > > > 200GB for PGDATA on 3Par, ext3\n> > > > 50GB for WAL on 3Par, ext3\n> > > >\n> > > > With PostgreSql 8.1.4\n> > > >\n> > > > We don't have i/o bottle neck.\n> > >\n> > > Are you sure? What does iostat say during a pgbench? What parameters\n> > are\n> > > you passing to pgbench?\n> > >\n> > > Well in theory, upgrading to 2.6 kernel will help as well as making\n> > your\n> > > WAL ext2 instead of ext3.\n> > >\n> > > > Whatelse I can try to better tps? Someone told me I can should get\n> > tps\n> > >\n> > > > over 1500, it is hard to believe.\n> > >\n> > > 1500? Hmmm... I don't know about that, I can get 470tps or so on my\n> > > measily dual core 3800 with 2gig of ram though.\n> > >\n> > > Joshua D. 
Drake\n> > >\n> > >\n> > > >\n> > > > Thanks\n> > > >\n> > > > Marty\n> > > >\n> > > > ---------------------------(end of\n> > > > broadcast)---------------------------\n> > > > TIP 2: Don't 'kill -9' the postmaster\n> > > >\n> > >\n> > >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to [email protected] so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n>",
"msg_date": "Tue, 22 Aug 2006 11:27:26 -0400",
"msg_from": "\"Alex Turner\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
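One way to collect the numbers Alex and Joshua are asking for, assuming sysstat's iostat and the usual vmstat are available; the file names and the 5-second interval are illustrative, and the kill-by-job-number step assumes an interactive shell:

    # sample extended device stats and CPU/memory every 5 seconds while the benchmark runs
    iostat -x 5 > /tmp/iostat-pgbench.log &
    vmstat 5 > /tmp/vmstat-pgbench.log &
    pgbench -c 10 -t 10000 HQDB 2>/dev/null
    # stop both samplers once pgbench has finished
    kill %1 %2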
{
"msg_contents": "The scaling factor is 20\nI used -v and 2>/dev/null, now I got \n\ntps = 389.796376 (excluding connections establishing)\n\nThis is best so far I can get\n\nThanks \n\n-----Original Message-----\nFrom: Mark Lewis [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 10:32 AM\nTo: Marty Jia\nCc: Joshua D. Drake; [email protected]; DBAs; Rich\nWilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\nWell, at least on my test machines running gnome-terminal, my pgbench\nruns tend to get throttled by gnome-terminal's lousy performance to no\nmore than 300 tps or so. Running with 2>/dev/null to throw away all the\ndetailed logging gives me 2-3x improvement in scores. Caveat: in my\ncase the db is on the local machine, so who knows what all the\ninteractions are.\n\nAlso, when you initialized the pgbench db what scaling factor did you\nuse? And does running pgbench with -v improve performance at all?\n\n-- Mark\n\nOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> Joshua,\n> \n> Here is\n> \n> shared_buffers = 80000\n> fsync = on\n> max_fsm_pages = 350000\n> max_connections = 1000\n> work_mem = 65536\n> effective_cache_size = 610000\n> random_page_cost = 3\n> \n> Here is pgbench I used:\n> \n> pgbench -c 10 -t 10000 -d HQDB\n> \n> Thanks\n> \n> Marty\n> \n> -----Original Message-----\n> From: Joshua D. Drake [mailto:[email protected]]\n> Sent: Monday, August 21, 2006 6:09 PM\n> To: Marty Jia\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> Marty Jia wrote:\n> > I'm exhausted to try all performance tuning ideas, like following \n> > parameters\n> > \n> > shared_buffers\n> > fsync\n> > max_fsm_pages\n> > max_connections\n> > shared_buffers\n> > work_mem\n> > max_fsm_pages\n> > effective_cache_size\n> > random_page_cost\n> > \n> > I believe all above have right size and values, but I just can not \n> > get\n> \n> > higher tps more than 300 testd by pgbench\n> \n> What values did you use?\n> \n> > \n> > Here is our hardware\n> > \n> > \n> > Dual Intel Xeon 2.8GHz\n> > 6GB RAM\n> > Linux 2.4 kernel\n> > RedHat Enterprise Linux AS 3\n> > 200GB for PGDATA on 3Par, ext3\n> > 50GB for WAL on 3Par, ext3\n> > \n> > With PostgreSql 8.1.4\n> > \n> > We don't have i/o bottle neck. \n> \n> Are you sure? What does iostat say during a pgbench? What parameters \n> are you passing to pgbench?\n> \n> Well in theory, upgrading to 2.6 kernel will help as well as making \n> your WAL ext2 instead of ext3.\n> \n> > Whatelse I can try to better tps? Someone told me I can should get \n> > tps\n> \n> > over 1500, it is hard to believe.\n> \n> 1500? Hmmm... I don't know about that, I can get 470tps or so on my \n> measily dual core 3800 with 2gig of ram though.\n> \n> Joshua D. Drake\n> \n> \n> > \n> > Thanks\n> > \n> > Marty\n> > \n> > ---------------------------(end of\n> > broadcast)---------------------------\n> > TIP 2: Don't 'kill -9' the postmaster\n> > \n> \n> \n",
"msg_date": "Tue, 22 Aug 2006 11:31:51 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "----------- Here is vmstat\n\nprocs memory swap io system\ncpu\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 0 1 15416 18156 73372 4348488 1 1 3 2 4 1 2\n1 2 2\n \n \n----------- Here is iostat\n \navg-cpu: %user %nice %sys %iowait %idle\n 11.59 0.00 6.13 10.77 71.50\n \nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 2.76 6.88 36.35 16036474 84688320\nsda1 0.00 0.01 0.00 30100 1056\nsda2 0.27 2.36 1.72 5509296 4017224\nsda3 1.85 0.78 21.99 1819850 51242800\nsda4 0.00 0.00 0.00 20 0\nsda5 0.15 0.49 1.47 1131624 3425672\nsda6 0.49 3.14 11.12 7320616 25899088\nsda7 0.01 0.09 0.04 219960 102480\nsdb 2.75 6.78 36.35 15803532 84688320\nsdb1 0.00 0.01 0.00 24322 1056\nsdb2 0.27 2.31 1.72 5391682 4017224\nsdb3 1.84 0.79 21.99 1836088 51242800\nsdb4 0.00 0.00 0.00 20 0\nsdb5 0.15 0.49 1.47 1134546 3425672\nsdb6 0.49 3.12 11.12 7273816 25899088\nsdb7 0.01 0.06 0.04 138138 102480\nsdc 0.00 0.00 0.00 632 0\nsdd 0.00 0.00 0.00 80 0\nsde 0.00 0.00 0.00 80 0\nsdf 0.00 0.00 0.00 80 0\nsdg 0.00 0.00 0.00 112 0\nsdh 0.00 0.00 0.00 112 0\nsdi 139.89 680.59 839.42 1585722266 1955771032\nsdj 139.72 680.21 835.90 1584829368 1947590800\nsdk 139.82 680.30 840.74 1585053608 1958864880\nsdl 139.86 680.56 841.26 1585657408 1960079576\nsdm 54.80 6.67 891.38 15547618 2076836720\nsdn 54.71 6.66 891.35 15509096 2076776352\n \n \n \n \n\n \n________________________________\n\nFrom: Alex Turner [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 11:27 AM\nTo: Mark Lewis\nCc: Marty Jia; Joshua D. Drake; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\n\nOh - and it's usefull to know if you are CPU bound, or IO bound. Check\ntop or vmstat to get an idea of that\n\nAlex\n\n\nOn 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\nwrote: \n\n\tFirst things first, run a bonnie++ benchmark, and post the\nnumbers. That will give a good indication of raw IO performance, and is\noften the first inidication of problems separate from the DB. We have\nseen pretty bad performance from SANs in the past. How many FC lines do\nyou have running to your server, remember each line is limited to about\n200MB/sec, to get good throughput, you will need multiple connections. \n\t\n\tWhen you run pgbench, run a iostat also and see what the numbers\nsay.\n\t\n\t\n\tAlex.\n\t\n\t\n\t\n\tOn 8/22/06, Mark Lewis < [email protected]\n<mailto:[email protected]> > wrote: \n\n\t\tWell, at least on my test machines running\ngnome-terminal, my pgbench \n\t\truns tend to get throttled by gnome-terminal's lousy\nperformance to no\n\t\tmore than 300 tps or so. Running with 2>/dev/null to\nthrow away all the\n\t\tdetailed logging gives me 2-3x improvement in scores.\nCaveat: in my \n\t\tcase the db is on the local machine, so who knows what\nall the\n\t\tinteractions are.\n\t\t\n\t\tAlso, when you initialized the pgbench db what scaling\nfactor did you\n\t\tuse? And does running pgbench with -v improve\nperformance at all? \n\t\t\n\t\t-- Mark\n\t\t\n\t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n\t\t> Joshua,\n\t\t>\n\t\t> Here is\n\t\t>\n\t\t> shared_buffers = 80000\n\t\t> fsync = on\n\t\t> max_fsm_pages = 350000\n\t\t> max_connections = 1000 \n\t\t> work_mem = 65536\n\t\t> effective_cache_size = 610000\n\t\t> random_page_cost = 3\n\t\t>\n\t\t> Here is pgbench I used:\n\t\t>\n\t\t> pgbench -c 10 -t 10000 -d HQDB\n\t\t>\n\t\t> Thanks\n\t\t>\n\t\t> Marty \n\t\t>\n\t\t> -----Original Message-----\n\t\t> From: Joshua D. 
Drake [mailto:[email protected]]\n\t\t> Sent: Monday, August 21, 2006 6:09 PM\n\t\t> To: Marty Jia\n\t\t> Cc: [email protected]\n\t\t> Subject: Re: [PERFORM] How to get higher tps\n\t\t>\n\t\t> Marty Jia wrote:\n\t\t> > I'm exhausted to try all performance tuning ideas,\nlike following \n\t\t> > parameters\n\t\t> >\n\t\t> > shared_buffers\n\t\t> > fsync\n\t\t> > max_fsm_pages\n\t\t> > max_connections\n\t\t> > shared_buffers\n\t\t> > work_mem\n\t\t> > max_fsm_pages\n\t\t> > effective_cache_size\n\t\t> > random_page_cost\n\t\t> >\n\t\t> > I believe all above have right size and values, but\nI just can not get\n\t\t>\n\t\t> > higher tps more than 300 testd by pgbench \n\t\t>\n\t\t> What values did you use?\n\t\t>\n\t\t> >\n\t\t> > Here is our hardware\n\t\t> >\n\t\t> >\n\t\t> > Dual Intel Xeon 2.8GHz\n\t\t> > 6GB RAM\n\t\t> > Linux 2.4 kernel\n\t\t> > RedHat Enterprise Linux AS 3 \n\t\t> > 200GB for PGDATA on 3Par, ext3\n\t\t> > 50GB for WAL on 3Par, ext3\n\t\t> >\n\t\t> > With PostgreSql 8.1.4\n\t\t> >\n\t\t> > We don't have i/o bottle neck.\n\t\t>\n\t\t> Are you sure? What does iostat say during a pgbench?\nWhat parameters are \n\t\t> you passing to pgbench?\n\t\t>\n\t\t> Well in theory, upgrading to 2.6 kernel will help as\nwell as making your\n\t\t> WAL ext2 instead of ext3.\n\t\t>\n\t\t> > Whatelse I can try to better tps? Someone told me I\ncan should get tps \n\t\t>\n\t\t> > over 1500, it is hard to believe.\n\t\t>\n\t\t> 1500? Hmmm... I don't know about that, I can get\n470tps or so on my\n\t\t> measily dual core 3800 with 2gig of ram though.\n\t\t>\n\t\t> Joshua D. Drake \n\t\t>\n\t\t>\n\t\t> >\n\t\t> > Thanks\n\t\t> >\n\t\t> > Marty\n\t\t> >\n\t\t> > ---------------------------(end of\n\t\t> > broadcast)---------------------------\n\t\t> > TIP 2: Don't 'kill -9' the postmaster \n\t\t> >\n\t\t>\n\t\t>\n\t\t\n\t\t---------------------------(end of\nbroadcast)---------------------------\n\t\tTIP 1: if posting/reading through Usenet, please send an\nappropriate\n\t\t subscribe-nomail command to\[email protected] so that your\n\t\t message can get through to the mailing list\ncleanly\n\t\t",
"msg_date": "Tue, 22 Aug 2006 11:44:21 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Here is iostat when running pgbench:\n \navg-cpu: %user %nice %sys %iowait %idle\n 26.17 0.00 8.25 23.17 42.42\n \nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 0.00 0.00 0.00 0 0\nsda1 0.00 0.00 0.00 0 0\nsda2 0.00 0.00 0.00 0 0\nsda3 0.00 0.00 0.00 0 0\nsda4 0.00 0.00 0.00 0 0\nsda5 0.00 0.00 0.00 0 0\nsda6 0.00 0.00 0.00 0 0\nsda7 0.00 0.00 0.00 0 0\nsdb 0.00 0.00 0.00 0 0\nsdb1 0.00 0.00 0.00 0 0\nsdb2 0.00 0.00 0.00 0 0\nsdb3 0.00 0.00 0.00 0 0\nsdb4 0.00 0.00 0.00 0 0\nsdb5 0.00 0.00 0.00 0 0\nsdb6 0.00 0.00 0.00 0 0\nsdb7 0.00 0.00 0.00 0 0\nsdc 0.00 0.00 0.00 0 0\nsdd 0.00 0.00 0.00 0 0\nsde 0.00 0.00 0.00 0 0\nsdf 0.00 0.00 0.00 0 0\nsdg 0.00 0.00 0.00 0 0\nsdh 0.00 0.00 0.00 0 0\nsdi 40.33 0.00 413.33 0 1240\nsdj 34.33 0.00 394.67 0 1184\nsdk 36.00 0.00 410.67 0 1232\nsdl 37.00 0.00 429.33 0 1288\nsdm 375.00 0.00 3120.00 0 9360\nsdn 378.33 0.00 3120.00 0 9360\n\n________________________________\n\nFrom: Alex Turner [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 11:27 AM\nTo: Mark Lewis\nCc: Marty Jia; Joshua D. Drake; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\n\nOh - and it's usefull to know if you are CPU bound, or IO bound. Check\ntop or vmstat to get an idea of that\n\nAlex\n\n\nOn 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\nwrote: \n\n\tFirst things first, run a bonnie++ benchmark, and post the\nnumbers. That will give a good indication of raw IO performance, and is\noften the first inidication of problems separate from the DB. We have\nseen pretty bad performance from SANs in the past. How many FC lines do\nyou have running to your server, remember each line is limited to about\n200MB/sec, to get good throughput, you will need multiple connections. \n\t\n\tWhen you run pgbench, run a iostat also and see what the numbers\nsay.\n\t\n\t\n\tAlex.\n\t\n\t\n\t\n\tOn 8/22/06, Mark Lewis < [email protected]\n<mailto:[email protected]> > wrote: \n\n\t\tWell, at least on my test machines running\ngnome-terminal, my pgbench \n\t\truns tend to get throttled by gnome-terminal's lousy\nperformance to no\n\t\tmore than 300 tps or so. Running with 2>/dev/null to\nthrow away all the\n\t\tdetailed logging gives me 2-3x improvement in scores.\nCaveat: in my \n\t\tcase the db is on the local machine, so who knows what\nall the\n\t\tinteractions are.\n\t\t\n\t\tAlso, when you initialized the pgbench db what scaling\nfactor did you\n\t\tuse? And does running pgbench with -v improve\nperformance at all? \n\t\t\n\t\t-- Mark\n\t\t\n\t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n\t\t> Joshua,\n\t\t>\n\t\t> Here is\n\t\t>\n\t\t> shared_buffers = 80000\n\t\t> fsync = on\n\t\t> max_fsm_pages = 350000\n\t\t> max_connections = 1000 \n\t\t> work_mem = 65536\n\t\t> effective_cache_size = 610000\n\t\t> random_page_cost = 3\n\t\t>\n\t\t> Here is pgbench I used:\n\t\t>\n\t\t> pgbench -c 10 -t 10000 -d HQDB\n\t\t>\n\t\t> Thanks\n\t\t>\n\t\t> Marty \n\t\t>\n\t\t> -----Original Message-----\n\t\t> From: Joshua D. 
Drake [mailto:[email protected]]\n\t\t> Sent: Monday, August 21, 2006 6:09 PM\n\t\t> To: Marty Jia\n\t\t> Cc: [email protected]\n\t\t> Subject: Re: [PERFORM] How to get higher tps\n\t\t>\n\t\t> Marty Jia wrote:\n\t\t> > I'm exhausted to try all performance tuning ideas,\nlike following \n\t\t> > parameters\n\t\t> >\n\t\t> > shared_buffers\n\t\t> > fsync\n\t\t> > max_fsm_pages\n\t\t> > max_connections\n\t\t> > shared_buffers\n\t\t> > work_mem\n\t\t> > max_fsm_pages\n\t\t> > effective_cache_size\n\t\t> > random_page_cost\n\t\t> >\n\t\t> > I believe all above have right size and values, but\nI just can not get\n\t\t>\n\t\t> > higher tps more than 300 testd by pgbench \n\t\t>\n\t\t> What values did you use?\n\t\t>\n\t\t> >\n\t\t> > Here is our hardware\n\t\t> >\n\t\t> >\n\t\t> > Dual Intel Xeon 2.8GHz\n\t\t> > 6GB RAM\n\t\t> > Linux 2.4 kernel\n\t\t> > RedHat Enterprise Linux AS 3 \n\t\t> > 200GB for PGDATA on 3Par, ext3\n\t\t> > 50GB for WAL on 3Par, ext3\n\t\t> >\n\t\t> > With PostgreSql 8.1.4\n\t\t> >\n\t\t> > We don't have i/o bottle neck.\n\t\t>\n\t\t> Are you sure? What does iostat say during a pgbench?\nWhat parameters are \n\t\t> you passing to pgbench?\n\t\t>\n\t\t> Well in theory, upgrading to 2.6 kernel will help as\nwell as making your\n\t\t> WAL ext2 instead of ext3.\n\t\t>\n\t\t> > Whatelse I can try to better tps? Someone told me I\ncan should get tps \n\t\t>\n\t\t> > over 1500, it is hard to believe.\n\t\t>\n\t\t> 1500? Hmmm... I don't know about that, I can get\n470tps or so on my\n\t\t> measily dual core 3800 with 2gig of ram though.\n\t\t>\n\t\t> Joshua D. Drake \n\t\t>\n\t\t>\n\t\t> >\n\t\t> > Thanks\n\t\t> >\n\t\t> > Marty\n\t\t> >\n\t\t> > ---------------------------(end of\n\t\t> > broadcast)---------------------------\n\t\t> > TIP 2: Don't 'kill -9' the postmaster \n\t\t> >\n\t\t>\n\t\t>\n\t\t\n\t\t---------------------------(end of\nbroadcast)---------------------------\n\t\tTIP 1: if posting/reading through Usenet, please send an\nappropriate\n\t\t subscribe-nomail command to\[email protected] so that your\n\t\t message can get through to the mailing list\ncleanly\n\t\t",
"msg_date": "Tue, 22 Aug 2006 11:46:34 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "At 04:45 PM 8/21/2006, Marty Jia wrote:\n>I'm exhausted to try all performance tuning ideas, like following\n>parameters\n>\n>shared_buffers\n>fsync\n>max_fsm_pages\n>max_connections\n>shared_buffers\n>work_mem\n>max_fsm_pages\n>effective_cache_size\n>random_page_cost\n\nAll of this comes =after= the Get the Correct HW (1) & OS (2) \nsteps. You are putting the cart before the horse.\n\n>I believe all above have right size and values, but I just can not get\n>higher tps more than 300 testd by pgbench\n\n300tps on what HW? and under what pattern of IO load?\n300tps of OLTP on a small number of non-Raptor 10K rpm HD's may \nactually be decent performance.\n300tps on a 24 HD RAID 10 based on Raptors or 15Krpm HDs and working \nthrough a HW RAID controller w/ >= 1GB of BB cache is likely to be poor.\n\n>Here is our hardware\n>\n>\n>Dual Intel Xeon 2.8GHz\n>6GB RAM\n\nModest CPU and RAM for a DB server now-a-days. In particular, the \nmore DB you can keep in RAM the better.\nAnd you have said nothing about the most importance HW when talking \nabout tps: What Does Your HD Subsystem Look Like?\n.\n\n>Linux 2.4 kernel\n>RedHat Enterprise Linux AS 3\nUpgrade to a 2.6 based kernel and examine your RHEL-AS3 install with \na close eye to trimming the fat you do not need from it. Cent-OS ot \nCa-Os may be better distro choices.\n\n\n>200GB for PGDATA on 3Par, ext3\n>50GB for WAL on 3Par, ext3\nPut WAL on ext2. Experiment with ext3, jfs, reiserfs, and XFS for pgdata.\n\nTake a =close= look at the exact HW specs of your 3par.to make sure \nthat you are not attempting the impossible with that HW.\n\"3par\" is marketing fluff. We need HD specs and RAID subsystem config data.\n\n>With PostgreSql 8.1.4\n>\n>We don't have i/o bottle neck.\nProve it. Where are the numbers that back up your assertion and how \ndid you get them?\n\n\n>Whatelse I can try to better tps? Someone told me I can should get tps\n>over 1500, it is hard to believe.\nDid they claim your exact HW could get 1500tps? Your exact HW+OS+pg \nversion+app SW? Some subset of those 4 variables?\nPerformance claims are easy to make. =Valid= performance claims are \ntougher since they have to be much more constrained and descriptive.\n\n\nRon\n\n",
"msg_date": "Tue, 22 Aug 2006 11:47:13 -0400",
"msg_from": "Ron <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
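A sketch of the WAL-on-ext2 move Ron and Joshua suggest; the device name /dev/sdX1 and mount point /pgxlog are placeholders, and the cluster must be shut down before pg_xlog is relocated:

    # format the WAL LUN as ext2 and mount it without atime updates
    mkfs.ext2 /dev/sdX1
    mount -o noatime /dev/sdX1 /pgxlog
    # with postgres stopped, move pg_xlog and leave a symlink in its place
    mv $PGDATA/pg_xlog /pgxlog/pg_xlog
    ln -s /pgxlog/pg_xlog $PGDATA/pg_xlog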
{
"msg_contents": "Ron\n\nHere is our hardware\n\nDual Intel Xeon 2.8GHz\n6GB RAM\nLinux 2.4 kernel\nRedHat Enterprise Linux AS 3\n200GB for PGDATA on 3Par, ext3\n50GB for WAL on 3Par, ext3\n\nRAID 10, using 3Par virtual volume technology across ~200 physical FC\ndisks. 4 virtual disks for PGDATA, striped with LVM into one volume, 2\nvirtual disks for WAL, also striped. SAN attached with Qlogic SAN\nsurfer multipathing to load balance each LUN on two 2GBs paths. HBAs\nare Qlogic 2340's. 16GB host cache on 3Par.\n\nshared_buffers = 80000\nmax_fsm_pages = 350000\nmax_connections = 1000\nwork_mem = 65536\neffective_cache_size = 610000\nrandom_page_cost = 3\n\nThanks\n \n\n-----Original Message-----\nFrom: Ron [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 11:47 AM\nTo: Marty Jia\nCc: [email protected]\nSubject: Re: [PERFORM] How to get higher tps\n\nAt 04:45 PM 8/21/2006, Marty Jia wrote:\n>I'm exhausted to try all performance tuning ideas, like following \n>parameters\n>\n>shared_buffers\n>fsync\n>max_fsm_pages\n>max_connections\n>shared_buffers\n>work_mem\n>max_fsm_pages\n>effective_cache_size\n>random_page_cost\n\nAll of this comes =after= the Get the Correct HW (1) & OS (2) steps.\nYou are putting the cart before the horse.\n\n>I believe all above have right size and values, but I just can not get \n>higher tps more than 300 testd by pgbench\n\n300tps on what HW? and under what pattern of IO load?\n300tps of OLTP on a small number of non-Raptor 10K rpm HD's may actually\nbe decent performance.\n300tps on a 24 HD RAID 10 based on Raptors or 15Krpm HDs and working\nthrough a HW RAID controller w/ >= 1GB of BB cache is likely to be poor.\n\n>Here is our hardware\n>\n>\n>Dual Intel Xeon 2.8GHz\n>6GB RAM\n\nModest CPU and RAM for a DB server now-a-days. In particular, the \nmore DB you can keep in RAM the better.\nAnd you have said nothing about the most importance HW when talking\nabout tps: What Does Your HD Subsystem Look Like?\n.\n\n>Linux 2.4 kernel\n>RedHat Enterprise Linux AS 3\nUpgrade to a 2.6 based kernel and examine your RHEL-AS3 install with a\nclose eye to trimming the fat you do not need from it. Cent-OS ot Ca-Os\nmay be better distro choices.\n\n\n>200GB for PGDATA on 3Par, ext3\n>50GB for WAL on 3Par, ext3\nPut WAL on ext2. Experiment with ext3, jfs, reiserfs, and XFS for\npgdata.\n\nTake a =close= look at the exact HW specs of your 3par.to make sure that\nyou are not attempting the impossible with that HW.\n\"3par\" is marketing fluff. We need HD specs and RAID subsystem config\ndata.\n\n>With PostgreSql 8.1.4\n>\n>We don't have i/o bottle neck.\nProve it. Where are the numbers that back up your assertion and how did\nyou get them?\n\n\n>Whatelse I can try to better tps? Someone told me I can should get tps\n>over 1500, it is hard to believe.\nDid they claim your exact HW could get 1500tps? Your exact HW+OS+pg \nversion+app SW? Some subset of those 4 variables?\nPerformance claims are easy to make. =Valid= performance claims are \ntougher since they have to be much more constrained and descriptive.\n\n\nRon\n\n",
"msg_date": "Tue, 22 Aug 2006 11:57:13 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
{
"msg_contents": "Marty Jia wrote:\n> Here is iostat when running pgbench:\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 26.17 0.00 8.25 23.17 42.42\n\nYou are are a little io bound and fairly cpu bound. I would be curious \nif your performance goes down if you increase the number of connections \nyou are using.\n\nJoshua D. Drake\n\n\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sda1 0.00 0.00 0.00 0 0\n> sda2 0.00 0.00 0.00 0 0\n> sda3 0.00 0.00 0.00 0 0\n> sda4 0.00 0.00 0.00 0 0\n> sda5 0.00 0.00 0.00 0 0\n> sda6 0.00 0.00 0.00 0 0\n> sda7 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdb1 0.00 0.00 0.00 0 0\n> sdb2 0.00 0.00 0.00 0 0\n> sdb3 0.00 0.00 0.00 0 0\n> sdb4 0.00 0.00 0.00 0 0\n> sdb5 0.00 0.00 0.00 0 0\n> sdb6 0.00 0.00 0.00 0 0\n> sdb7 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n> sdf 0.00 0.00 0.00 0 0\n> sdg 0.00 0.00 0.00 0 0\n> sdh 0.00 0.00 0.00 0 0\n> sdi 40.33 0.00 413.33 0 1240\n> sdj 34.33 0.00 394.67 0 1184\n> sdk 36.00 0.00 410.67 0 1232\n> sdl 37.00 0.00 429.33 0 1288\n> sdm 375.00 0.00 3120.00 0 9360\n> sdn 378.33 0.00 3120.00 0 9360\n> \n> ________________________________\n> \n> From: Alex Turner [mailto:[email protected]] \n> Sent: Tuesday, August 22, 2006 11:27 AM\n> To: Mark Lewis\n> Cc: Marty Jia; Joshua D. Drake; [email protected]; DBAs;\n> Rich Wilson; Ernest Wurzbach\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> \n> Oh - and it's usefull to know if you are CPU bound, or IO bound. Check\n> top or vmstat to get an idea of that\n> \n> Alex\n> \n> \n> On 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\n> wrote: \n> \n> \tFirst things first, run a bonnie++ benchmark, and post the\n> numbers. That will give a good indication of raw IO performance, and is\n> often the first inidication of problems separate from the DB. We have\n> seen pretty bad performance from SANs in the past. How many FC lines do\n> you have running to your server, remember each line is limited to about\n> 200MB/sec, to get good throughput, you will need multiple connections. \n> \t\n> \tWhen you run pgbench, run a iostat also and see what the numbers\n> say.\n> \t\n> \t\n> \tAlex.\n> \t\n> \t\n> \t\n> \tOn 8/22/06, Mark Lewis < [email protected]\n> <mailto:[email protected]> > wrote: \n> \n> \t\tWell, at least on my test machines running\n> gnome-terminal, my pgbench \n> \t\truns tend to get throttled by gnome-terminal's lousy\n> performance to no\n> \t\tmore than 300 tps or so. Running with 2>/dev/null to\n> throw away all the\n> \t\tdetailed logging gives me 2-3x improvement in scores.\n> Caveat: in my \n> \t\tcase the db is on the local machine, so who knows what\n> all the\n> \t\tinteractions are.\n> \t\t\n> \t\tAlso, when you initialized the pgbench db what scaling\n> factor did you\n> \t\tuse? And does running pgbench with -v improve\n> performance at all? \n> \t\t\n> \t\t-- Mark\n> \t\t\n> \t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> \t\t> Joshua,\n> \t\t>\n> \t\t> Here is\n> \t\t>\n> \t\t> shared_buffers = 80000\n> \t\t> fsync = on\n> \t\t> max_fsm_pages = 350000\n> \t\t> max_connections = 1000 \n> \t\t> work_mem = 65536\n> \t\t> effective_cache_size = 610000\n> \t\t> random_page_cost = 3\n> \t\t>\n> \t\t> Here is pgbench I used:\n> \t\t>\n> \t\t> pgbench -c 10 -t 10000 -d HQDB\n> \t\t>\n> \t\t> Thanks\n> \t\t>\n> \t\t> Marty \n> \t\t>\n> \t\t> -----Original Message-----\n> \t\t> From: Joshua D. 
Drake [mailto:[email protected]]\n> \t\t> Sent: Monday, August 21, 2006 6:09 PM\n> \t\t> To: Marty Jia\n> \t\t> Cc: [email protected]\n> \t\t> Subject: Re: [PERFORM] How to get higher tps\n> \t\t>\n> \t\t> Marty Jia wrote:\n> \t\t> > I'm exhausted to try all performance tuning ideas,\n> like following \n> \t\t> > parameters\n> \t\t> >\n> \t\t> > shared_buffers\n> \t\t> > fsync\n> \t\t> > max_fsm_pages\n> \t\t> > max_connections\n> \t\t> > shared_buffers\n> \t\t> > work_mem\n> \t\t> > max_fsm_pages\n> \t\t> > effective_cache_size\n> \t\t> > random_page_cost\n> \t\t> >\n> \t\t> > I believe all above have right size and values, but\n> I just can not get\n> \t\t>\n> \t\t> > higher tps more than 300 testd by pgbench \n> \t\t>\n> \t\t> What values did you use?\n> \t\t>\n> \t\t> >\n> \t\t> > Here is our hardware\n> \t\t> >\n> \t\t> >\n> \t\t> > Dual Intel Xeon 2.8GHz\n> \t\t> > 6GB RAM\n> \t\t> > Linux 2.4 kernel\n> \t\t> > RedHat Enterprise Linux AS 3 \n> \t\t> > 200GB for PGDATA on 3Par, ext3\n> \t\t> > 50GB for WAL on 3Par, ext3\n> \t\t> >\n> \t\t> > With PostgreSql 8.1.4\n> \t\t> >\n> \t\t> > We don't have i/o bottle neck.\n> \t\t>\n> \t\t> Are you sure? What does iostat say during a pgbench?\n> What parameters are \n> \t\t> you passing to pgbench?\n> \t\t>\n> \t\t> Well in theory, upgrading to 2.6 kernel will help as\n> well as making your\n> \t\t> WAL ext2 instead of ext3.\n> \t\t>\n> \t\t> > Whatelse I can try to better tps? Someone told me I\n> can should get tps \n> \t\t>\n> \t\t> > over 1500, it is hard to believe.\n> \t\t>\n> \t\t> 1500? Hmmm... I don't know about that, I can get\n> 470tps or so on my\n> \t\t> measily dual core 3800 with 2gig of ram though.\n> \t\t>\n> \t\t> Joshua D. Drake \n> \t\t>\n> \t\t>\n> \t\t> >\n> \t\t> > Thanks\n> \t\t> >\n> \t\t> > Marty\n> \t\t> >\n> \t\t> > ---------------------------(end of\n> \t\t> > broadcast)---------------------------\n> \t\t> > TIP 2: Don't 'kill -9' the postmaster \n> \t\t> >\n> \t\t>\n> \t\t>\n> \t\t\n> \t\t---------------------------(end of\n> broadcast)---------------------------\n> \t\tTIP 1: if posting/reading through Usenet, please send an\n> appropriate\n> \t\t subscribe-nomail command to\n> [email protected] so that your\n> \t\t message can get through to the mailing list\n> cleanly\n> \t\t\n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 22 Aug 2006 09:15:44 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
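A quick way to test Joshua's suggestion about raising the number of connections, sweeping the client count while keeping the total transaction count roughly constant; the client counts are only examples:

    for c in 10 25 50; do
        echo "clients: $c"
        pgbench -c $c -t $((100000 / c)) HQDB 2>/dev/null
    done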
{
"msg_contents": "Marty,\n\nHere's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install\non a Dell Poweredge 2950 with 8gb ram, 2x3.0 dual-core woodcrest (4MB\ncache/socket) with 6x300GB 10k SAS drives:\n\npgbench -c 10 -t 10000 -d bench 2>/dev/null\npghost: pgport: (null) nclients: 10 nxacts: 10000 dbName: bench\n`transaction type: TPC-B (sort of)\nscaling factor: 20\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 561.056729 (including connections establishing)\ntps = 561.127760 (excluding connections establishing)\n\nHere's some iostat samples during the test:\n tty mfid0 da0 cd0\ncpu\n tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy in\nid\n 6 77 16.01 1642 25.67 0.00 0 0.00 0.00 0 0.00 3 0 8\n2 87\n 8 157 17.48 3541 60.43 0.00 0 0.00 0.00 0 0.00 24 0 28\n4 43\n 5 673 17.66 2287 39.44 0.00 0 0.00 0.00 0 0.00 10 0 13\n2 75\n 6 2818 16.37 2733 43.68 0.00 0 0.00 0.00 0 0.00 17 0 23\n3 56\n 1 765 18.05 2401 42.32 0.00 0 0.00 0.00 0 0.00 15 0 17\n3 65\n\nNote- the above was with no tuning to the kernel or postgresql.conf. \n\nNow for my question- it seems that I've still got quite a bit of\nheadroom on the hardware I'm running the above tests on, since I know\nthe array will pump out > 200 MB/s (dd, bonnie++ numbers), and CPU\nappears mostly idle. This would indicate I should be able to get some\nsignificantly better numbers with postgresql.conf tweaks correct?\n\nI guess the other problem is ensuring that we're not testing RAM speeds,\nsince most of the data is probably in memory (BSD io buffers)? Although,\nfor the initial run, that doesn't seem to be the case, since subsequent\nruns without rebuilding the benchmark db are slightly not believable\n(i.e. 1,200 going up to >2,500 tps over 5 back-to-back runs). So, as\nlong as I re-initialize the benchdb before each run, it should be a\nrealistic test, right?\n\nThanks,\n\nBucky\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joshua D.\nDrake\nSent: Tuesday, August 22, 2006 12:16 PM\nTo: Marty Jia\nCc: Alex Turner; Mark Lewis; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\nMarty Jia wrote:\n> Here is iostat when running pgbench:\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 26.17 0.00 8.25 23.17 42.42\n\nYou are are a little io bound and fairly cpu bound. I would be curious \nif your performance goes down if you increase the number of connections \nyou are using.\n\nJoshua D. 
Drake\n\n\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sda1 0.00 0.00 0.00 0 0\n> sda2 0.00 0.00 0.00 0 0\n> sda3 0.00 0.00 0.00 0 0\n> sda4 0.00 0.00 0.00 0 0\n> sda5 0.00 0.00 0.00 0 0\n> sda6 0.00 0.00 0.00 0 0\n> sda7 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdb1 0.00 0.00 0.00 0 0\n> sdb2 0.00 0.00 0.00 0 0\n> sdb3 0.00 0.00 0.00 0 0\n> sdb4 0.00 0.00 0.00 0 0\n> sdb5 0.00 0.00 0.00 0 0\n> sdb6 0.00 0.00 0.00 0 0\n> sdb7 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n> sdf 0.00 0.00 0.00 0 0\n> sdg 0.00 0.00 0.00 0 0\n> sdh 0.00 0.00 0.00 0 0\n> sdi 40.33 0.00 413.33 0 1240\n> sdj 34.33 0.00 394.67 0 1184\n> sdk 36.00 0.00 410.67 0 1232\n> sdl 37.00 0.00 429.33 0 1288\n> sdm 375.00 0.00 3120.00 0 9360\n> sdn 378.33 0.00 3120.00 0 9360\n> \n> ________________________________\n> \n> From: Alex Turner [mailto:[email protected]] \n> Sent: Tuesday, August 22, 2006 11:27 AM\n> To: Mark Lewis\n> Cc: Marty Jia; Joshua D. Drake; [email protected];\nDBAs;\n> Rich Wilson; Ernest Wurzbach\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> \n> Oh - and it's usefull to know if you are CPU bound, or IO bound.\nCheck\n> top or vmstat to get an idea of that\n> \n> Alex\n> \n> \n> On 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\n> wrote: \n> \n> \tFirst things first, run a bonnie++ benchmark, and post the\n> numbers. That will give a good indication of raw IO performance, and\nis\n> often the first inidication of problems separate from the DB. We have\n> seen pretty bad performance from SANs in the past. How many FC lines\ndo\n> you have running to your server, remember each line is limited to\nabout\n> 200MB/sec, to get good throughput, you will need multiple connections.\n\n> \t\n> \tWhen you run pgbench, run a iostat also and see what the numbers\n> say.\n> \t\n> \t\n> \tAlex.\n> \t\n> \t\n> \t\n> \tOn 8/22/06, Mark Lewis < [email protected]\n> <mailto:[email protected]> > wrote: \n> \n> \t\tWell, at least on my test machines running\n> gnome-terminal, my pgbench \n> \t\truns tend to get throttled by gnome-terminal's lousy\n> performance to no\n> \t\tmore than 300 tps or so. Running with 2>/dev/null to\n> throw away all the\n> \t\tdetailed logging gives me 2-3x improvement in scores.\n> Caveat: in my \n> \t\tcase the db is on the local machine, so who knows what\n> all the\n> \t\tinteractions are.\n> \t\t\n> \t\tAlso, when you initialized the pgbench db what scaling\n> factor did you\n> \t\tuse? And does running pgbench with -v improve\n> performance at all? \n> \t\t\n> \t\t-- Mark\n> \t\t\n> \t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> \t\t> Joshua,\n> \t\t>\n> \t\t> Here is\n> \t\t>\n> \t\t> shared_buffers = 80000\n> \t\t> fsync = on\n> \t\t> max_fsm_pages = 350000\n> \t\t> max_connections = 1000 \n> \t\t> work_mem = 65536\n> \t\t> effective_cache_size = 610000\n> \t\t> random_page_cost = 3\n> \t\t>\n> \t\t> Here is pgbench I used:\n> \t\t>\n> \t\t> pgbench -c 10 -t 10000 -d HQDB\n> \t\t>\n> \t\t> Thanks\n> \t\t>\n> \t\t> Marty \n> \t\t>\n> \t\t> -----Original Message-----\n> \t\t> From: Joshua D. 
Drake [mailto:[email protected]]\n> \t\t> Sent: Monday, August 21, 2006 6:09 PM\n> \t\t> To: Marty Jia\n> \t\t> Cc: [email protected]\n> \t\t> Subject: Re: [PERFORM] How to get higher tps\n> \t\t>\n> \t\t> Marty Jia wrote:\n> \t\t> > I'm exhausted to try all performance tuning ideas,\n> like following \n> \t\t> > parameters\n> \t\t> >\n> \t\t> > shared_buffers\n> \t\t> > fsync\n> \t\t> > max_fsm_pages\n> \t\t> > max_connections\n> \t\t> > shared_buffers\n> \t\t> > work_mem\n> \t\t> > max_fsm_pages\n> \t\t> > effective_cache_size\n> \t\t> > random_page_cost\n> \t\t> >\n> \t\t> > I believe all above have right size and values, but\n> I just can not get\n> \t\t>\n> \t\t> > higher tps more than 300 testd by pgbench \n> \t\t>\n> \t\t> What values did you use?\n> \t\t>\n> \t\t> >\n> \t\t> > Here is our hardware\n> \t\t> >\n> \t\t> >\n> \t\t> > Dual Intel Xeon 2.8GHz\n> \t\t> > 6GB RAM\n> \t\t> > Linux 2.4 kernel\n> \t\t> > RedHat Enterprise Linux AS 3 \n> \t\t> > 200GB for PGDATA on 3Par, ext3\n> \t\t> > 50GB for WAL on 3Par, ext3\n> \t\t> >\n> \t\t> > With PostgreSql 8.1.4\n> \t\t> >\n> \t\t> > We don't have i/o bottle neck.\n> \t\t>\n> \t\t> Are you sure? What does iostat say during a pgbench?\n> What parameters are \n> \t\t> you passing to pgbench?\n> \t\t>\n> \t\t> Well in theory, upgrading to 2.6 kernel will help as\n> well as making your\n> \t\t> WAL ext2 instead of ext3.\n> \t\t>\n> \t\t> > Whatelse I can try to better tps? Someone told me I\n> can should get tps \n> \t\t>\n> \t\t> > over 1500, it is hard to believe.\n> \t\t>\n> \t\t> 1500? Hmmm... I don't know about that, I can get\n> 470tps or so on my\n> \t\t> measily dual core 3800 with 2gig of ram though.\n> \t\t>\n> \t\t> Joshua D. Drake \n> \t\t>\n> \t\t>\n> \t\t> >\n> \t\t> > Thanks\n> \t\t> >\n> \t\t> > Marty\n> \t\t> >\n> \t\t> > ---------------------------(end of\n> \t\t> > broadcast)---------------------------\n> \t\t> > TIP 2: Don't 'kill -9' the postmaster \n> \t\t> >\n> \t\t>\n> \t\t>\n> \t\t\n> \t\t---------------------------(end of\n> broadcast)---------------------------\n> \t\tTIP 1: if posting/reading through Usenet, please send an\n> appropriate\n> \t\t subscribe-nomail command to\n> [email protected] so that your\n> \t\t message can get through to the mailing list\n> cleanly\n> \t\t\n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Tue, 22 Aug 2006 15:22:51 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
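A minimal sketch of the "re-initialize the benchdb before each run" idea raised in this thread, assuming the contrib pgbench shipped with 8.1 and a database named "bench" (both illustrative). Rebuilding the accounts/branches/tellers tables with pgbench -i before every run keeps later runs from being flattered by data already sitting in the OS cache:

#!/bin/sh
# Rebuild the pgbench tables, then run the benchmark; repeat a few times
# and compare the tps lines. Scale 20 mirrors the factor-20 initialization
# mentioned elsewhere in the thread.
DB=bench
SCALE=20
for run in 1 2 3; do
    pgbench -i -s $SCALE "$DB" >/dev/null 2>&1
    pgbench -c 10 -t 10000 "$DB" 2>/dev/null | grep tps
done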
{
"msg_contents": "Bucky\n\nMy best result is around 380. I believe your hardware is more efficient,\nbecause no matter how I change the conf parameters, no improvement can\nbe obtained. I even turned fsync off.\n\nWhat is your values for the following parameters?\n\nshared_buffers = 80000\nmax_fsm_pages = 350000\nmax_connections = 1000\nwork_mem = 65536\neffective_cache_size = 610000\nrandom_page_cost = 3\n\nThanks\nMarty\n\n-----Original Message-----\nFrom: Bucky Jordan [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 3:23 PM\nTo: Joshua D. Drake; Marty Jia\nCc: Alex Turner; Mark Lewis; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: RE: [PERFORM] How to get higher tps\n\nMarty,\n\nHere's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install\non a Dell Poweredge 2950 with 8gb ram, 2x3.0 dual-core woodcrest (4MB\ncache/socket) with 6x300GB 10k SAS drives:\n\npgbench -c 10 -t 10000 -d bench 2>/dev/null\npghost: pgport: (null) nclients: 10 nxacts: 10000 dbName: bench\n`transaction type: TPC-B (sort of) scaling factor: 20 number of clients:\n10 number of transactions per client: 10000 number of transactions\nactually processed: 100000/100000 tps = 561.056729 (including\nconnections establishing) tps = 561.127760 (excluding connections\nestablishing)\n\nHere's some iostat samples during the test:\n tty mfid0 da0 cd0\ncpu\n tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy in\nid\n 6 77 16.01 1642 25.67 0.00 0 0.00 0.00 0 0.00 3 0 8\n2 87\n 8 157 17.48 3541 60.43 0.00 0 0.00 0.00 0 0.00 24 0 28\n4 43\n 5 673 17.66 2287 39.44 0.00 0 0.00 0.00 0 0.00 10 0 13\n2 75\n 6 2818 16.37 2733 43.68 0.00 0 0.00 0.00 0 0.00 17 0 23\n3 56\n 1 765 18.05 2401 42.32 0.00 0 0.00 0.00 0 0.00 15 0 17\n3 65\n\nNote- the above was with no tuning to the kernel or postgresql.conf. \n\nNow for my question- it seems that I've still got quite a bit of\nheadroom on the hardware I'm running the above tests on, since I know\nthe array will pump out > 200 MB/s (dd, bonnie++ numbers), and CPU\nappears mostly idle. This would indicate I should be able to get some\nsignificantly better numbers with postgresql.conf tweaks correct?\n\nI guess the other problem is ensuring that we're not testing RAM speeds,\nsince most of the data is probably in memory (BSD io buffers)? Although,\nfor the initial run, that doesn't seem to be the case, since subsequent\nruns without rebuilding the benchmark db are slightly not believable\n(i.e. 1,200 going up to >2,500 tps over 5 back-to-back runs). So, as\nlong as I re-initialize the benchdb before each run, it should be a\nrealistic test, right?\n\nThanks,\n\nBucky\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joshua D.\nDrake\nSent: Tuesday, August 22, 2006 12:16 PM\nTo: Marty Jia\nCc: Alex Turner; Mark Lewis; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\nMarty Jia wrote:\n> Here is iostat when running pgbench:\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 26.17 0.00 8.25 23.17 42.42\n\nYou are are a little io bound and fairly cpu bound. I would be curious\nif your performance goes down if you increase the number of connections\nyou are using.\n\nJoshua D. 
Drake\n\n\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sda1 0.00 0.00 0.00 0 0\n> sda2 0.00 0.00 0.00 0 0\n> sda3 0.00 0.00 0.00 0 0\n> sda4 0.00 0.00 0.00 0 0\n> sda5 0.00 0.00 0.00 0 0\n> sda6 0.00 0.00 0.00 0 0\n> sda7 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdb1 0.00 0.00 0.00 0 0\n> sdb2 0.00 0.00 0.00 0 0\n> sdb3 0.00 0.00 0.00 0 0\n> sdb4 0.00 0.00 0.00 0 0\n> sdb5 0.00 0.00 0.00 0 0\n> sdb6 0.00 0.00 0.00 0 0\n> sdb7 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n> sdf 0.00 0.00 0.00 0 0\n> sdg 0.00 0.00 0.00 0 0\n> sdh 0.00 0.00 0.00 0 0\n> sdi 40.33 0.00 413.33 0 1240\n> sdj 34.33 0.00 394.67 0 1184\n> sdk 36.00 0.00 410.67 0 1232\n> sdl 37.00 0.00 429.33 0 1288\n> sdm 375.00 0.00 3120.00 0 9360\n> sdn 378.33 0.00 3120.00 0 9360\n> \n> ________________________________\n> \n> From: Alex Turner [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 11:27 AM\n> To: Mark Lewis\n> Cc: Marty Jia; Joshua D. Drake; [email protected];\nDBAs;\n> Rich Wilson; Ernest Wurzbach\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> \n> Oh - and it's usefull to know if you are CPU bound, or IO bound.\nCheck\n> top or vmstat to get an idea of that\n> \n> Alex\n> \n> \n> On 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\n> wrote: \n> \n> \tFirst things first, run a bonnie++ benchmark, and post the\nnumbers. \n> That will give a good indication of raw IO performance, and\nis\n> often the first inidication of problems separate from the DB. We have\n\n> seen pretty bad performance from SANs in the past. How many FC lines\ndo\n> you have running to your server, remember each line is limited to\nabout\n> 200MB/sec, to get good throughput, you will need multiple connections.\n\n> \t\n> \tWhen you run pgbench, run a iostat also and see what the numbers\nsay.\n> \t\n> \t\n> \tAlex.\n> \t\n> \t\n> \t\n> \tOn 8/22/06, Mark Lewis < [email protected] \n> <mailto:[email protected]> > wrote:\n> \n> \t\tWell, at least on my test machines running\ngnome-terminal, my \n> pgbench\n> \t\truns tend to get throttled by gnome-terminal's lousy\nperformance to \n> no\n> \t\tmore than 300 tps or so. Running with 2>/dev/null to\nthrow away all \n> the\n> \t\tdetailed logging gives me 2-3x improvement in scores.\n> Caveat: in my \n> \t\tcase the db is on the local machine, so who knows what\nall the\n> \t\tinteractions are.\n> \t\t\n> \t\tAlso, when you initialized the pgbench db what scaling\nfactor did \n> you\n> \t\tuse? And does running pgbench with -v improve\nperformance at all?\n> \t\t\n> \t\t-- Mark\n> \t\t\n> \t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> \t\t> Joshua,\n> \t\t>\n> \t\t> Here is\n> \t\t>\n> \t\t> shared_buffers = 80000\n> \t\t> fsync = on\n> \t\t> max_fsm_pages = 350000\n> \t\t> max_connections = 1000 \n> \t\t> work_mem = 65536\n> \t\t> effective_cache_size = 610000\n> \t\t> random_page_cost = 3\n> \t\t>\n> \t\t> Here is pgbench I used:\n> \t\t>\n> \t\t> pgbench -c 10 -t 10000 -d HQDB\n> \t\t>\n> \t\t> Thanks\n> \t\t>\n> \t\t> Marty \n> \t\t>\n> \t\t> -----Original Message-----\n> \t\t> From: Joshua D. 
Drake [mailto:[email protected]]\n> \t\t> Sent: Monday, August 21, 2006 6:09 PM\n> \t\t> To: Marty Jia\n> \t\t> Cc: [email protected]\n> \t\t> Subject: Re: [PERFORM] How to get higher tps\n> \t\t>\n> \t\t> Marty Jia wrote:\n> \t\t> > I'm exhausted to try all performance tuning ideas,\nlike \n> following\n> \t\t> > parameters\n> \t\t> >\n> \t\t> > shared_buffers\n> \t\t> > fsync\n> \t\t> > max_fsm_pages\n> \t\t> > max_connections\n> \t\t> > shared_buffers\n> \t\t> > work_mem\n> \t\t> > max_fsm_pages\n> \t\t> > effective_cache_size\n> \t\t> > random_page_cost\n> \t\t> >\n> \t\t> > I believe all above have right size and values, but\nI just can \n> not get\n> \t\t>\n> \t\t> > higher tps more than 300 testd by pgbench \n> \t\t>\n> \t\t> What values did you use?\n> \t\t>\n> \t\t> >\n> \t\t> > Here is our hardware\n> \t\t> >\n> \t\t> >\n> \t\t> > Dual Intel Xeon 2.8GHz\n> \t\t> > 6GB RAM\n> \t\t> > Linux 2.4 kernel\n> \t\t> > RedHat Enterprise Linux AS 3 \n> \t\t> > 200GB for PGDATA on 3Par, ext3\n> \t\t> > 50GB for WAL on 3Par, ext3\n> \t\t> >\n> \t\t> > With PostgreSql 8.1.4\n> \t\t> >\n> \t\t> > We don't have i/o bottle neck.\n> \t\t>\n> \t\t> Are you sure? What does iostat say during a pgbench?\n> What parameters are \n> \t\t> you passing to pgbench?\n> \t\t>\n> \t\t> Well in theory, upgrading to 2.6 kernel will help as\nwell as \n> making your\n> \t\t> WAL ext2 instead of ext3.\n> \t\t>\n> \t\t> > Whatelse I can try to better tps? Someone told me I\ncan should \n> get tps\n> \t\t>\n> \t\t> > over 1500, it is hard to believe.\n> \t\t>\n> \t\t> 1500? Hmmm... I don't know about that, I can get\n470tps or so on \n> my\n> \t\t> measily dual core 3800 with 2gig of ram though.\n> \t\t>\n> \t\t> Joshua D. Drake \n> \t\t>\n> \t\t>\n> \t\t> >\n> \t\t> > Thanks\n> \t\t> >\n> \t\t> > Marty\n> \t\t> >\n> \t\t> > ---------------------------(end of\n> \t\t> > broadcast)---------------------------\n> \t\t> > TIP 2: Don't 'kill -9' the postmaster \n> \t\t> >\n> \t\t>\n> \t\t>\n> \t\t\n> \t\t---------------------------(end of\n> broadcast)---------------------------\n> \t\tTIP 1: if posting/reading through Usenet, please send an\nappropriate\n> \t\t subscribe-nomail command to\n> [email protected] so that your\n> \t\t message can get through to the mailing list\ncleanly\n> \t\t\n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Tue, 22 Aug 2006 15:37:40 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
},
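When changed postgresql.conf values appear to make no difference at all, one cheap thing to rule out is whether the running postmaster ever picked them up: on 8.1 most settings need a reload, and shared_buffers needs a full restart. A small sketch for confirming the live values, assuming psql is on the PATH and a database named "pgbench" (illustrative):

#!/bin/sh
# Print the values the running backend is actually using.
DB=pgbench
for guc in shared_buffers work_mem effective_cache_size \
           random_page_cost fsync max_connections; do
    printf '%-22s ' "$guc"
    psql -t -A -d "$DB" -c "SHOW $guc;"
done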
{
"msg_contents": "Marty Jia wrote:\n> Bucky\n> \n> My best result is around 380. I believe your hardware is more efficient,\n> because no matter how I change the conf parameters, no improvement can\n> be obtained. I even turned fsync off.\n\nDo you stay constant if you use 40 clients versus 20?\n\n> \n> What is your values for the following parameters?\n> \n> shared_buffers = 80000\n> max_fsm_pages = 350000\n> max_connections = 1000\n> work_mem = 65536\n> effective_cache_size = 610000\n> random_page_cost = 3\n> \n> Thanks\n> Marty\n> \n> -----Original Message-----\n> From: Bucky Jordan [mailto:[email protected]] \n> Sent: Tuesday, August 22, 2006 3:23 PM\n> To: Joshua D. Drake; Marty Jia\n> Cc: Alex Turner; Mark Lewis; [email protected]; DBAs;\n> Rich Wilson; Ernest Wurzbach\n> Subject: RE: [PERFORM] How to get higher tps\n> \n> Marty,\n> \n> Here's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install\n> on a Dell Poweredge 2950 with 8gb ram, 2x3.0 dual-core woodcrest (4MB\n> cache/socket) with 6x300GB 10k SAS drives:\n> \n> pgbench -c 10 -t 10000 -d bench 2>/dev/null\n> pghost: pgport: (null) nclients: 10 nxacts: 10000 dbName: bench\n> `transaction type: TPC-B (sort of) scaling factor: 20 number of clients:\n> 10 number of transactions per client: 10000 number of transactions\n> actually processed: 100000/100000 tps = 561.056729 (including\n> connections establishing) tps = 561.127760 (excluding connections\n> establishing)\n> \n> Here's some iostat samples during the test:\n> tty mfid0 da0 cd0\n> cpu\n> tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy in\n> id\n> 6 77 16.01 1642 25.67 0.00 0 0.00 0.00 0 0.00 3 0 8\n> 2 87\n> 8 157 17.48 3541 60.43 0.00 0 0.00 0.00 0 0.00 24 0 28\n> 4 43\n> 5 673 17.66 2287 39.44 0.00 0 0.00 0.00 0 0.00 10 0 13\n> 2 75\n> 6 2818 16.37 2733 43.68 0.00 0 0.00 0.00 0 0.00 17 0 23\n> 3 56\n> 1 765 18.05 2401 42.32 0.00 0 0.00 0.00 0 0.00 15 0 17\n> 3 65\n> \n> Note- the above was with no tuning to the kernel or postgresql.conf. \n> \n> Now for my question- it seems that I've still got quite a bit of\n> headroom on the hardware I'm running the above tests on, since I know\n> the array will pump out > 200 MB/s (dd, bonnie++ numbers), and CPU\n> appears mostly idle. This would indicate I should be able to get some\n> significantly better numbers with postgresql.conf tweaks correct?\n> \n> I guess the other problem is ensuring that we're not testing RAM speeds,\n> since most of the data is probably in memory (BSD io buffers)? Although,\n> for the initial run, that doesn't seem to be the case, since subsequent\n> runs without rebuilding the benchmark db are slightly not believable\n> (i.e. 1,200 going up to >2,500 tps over 5 back-to-back runs). So, as\n> long as I re-initialize the benchdb before each run, it should be a\n> realistic test, right?\n> \n> Thanks,\n> \n> Bucky\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Joshua D.\n> Drake\n> Sent: Tuesday, August 22, 2006 12:16 PM\n> To: Marty Jia\n> Cc: Alex Turner; Mark Lewis; [email protected]; DBAs;\n> Rich Wilson; Ernest Wurzbach\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> Marty Jia wrote:\n>> Here is iostat when running pgbench:\n>> \n>> avg-cpu: %user %nice %sys %iowait %idle\n>> 26.17 0.00 8.25 23.17 42.42\n> \n> You are are a little io bound and fairly cpu bound. I would be curious\n> if your performance goes down if you increase the number of connections\n> you are using.\n> \n> Joshua D. 
Drake\n> \n> \n>> \n>> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n>> sda 0.00 0.00 0.00 0 0\n>> sda1 0.00 0.00 0.00 0 0\n>> sda2 0.00 0.00 0.00 0 0\n>> sda3 0.00 0.00 0.00 0 0\n>> sda4 0.00 0.00 0.00 0 0\n>> sda5 0.00 0.00 0.00 0 0\n>> sda6 0.00 0.00 0.00 0 0\n>> sda7 0.00 0.00 0.00 0 0\n>> sdb 0.00 0.00 0.00 0 0\n>> sdb1 0.00 0.00 0.00 0 0\n>> sdb2 0.00 0.00 0.00 0 0\n>> sdb3 0.00 0.00 0.00 0 0\n>> sdb4 0.00 0.00 0.00 0 0\n>> sdb5 0.00 0.00 0.00 0 0\n>> sdb6 0.00 0.00 0.00 0 0\n>> sdb7 0.00 0.00 0.00 0 0\n>> sdc 0.00 0.00 0.00 0 0\n>> sdd 0.00 0.00 0.00 0 0\n>> sde 0.00 0.00 0.00 0 0\n>> sdf 0.00 0.00 0.00 0 0\n>> sdg 0.00 0.00 0.00 0 0\n>> sdh 0.00 0.00 0.00 0 0\n>> sdi 40.33 0.00 413.33 0 1240\n>> sdj 34.33 0.00 394.67 0 1184\n>> sdk 36.00 0.00 410.67 0 1232\n>> sdl 37.00 0.00 429.33 0 1288\n>> sdm 375.00 0.00 3120.00 0 9360\n>> sdn 378.33 0.00 3120.00 0 9360\n>>\n>> ________________________________\n>>\n>> From: Alex Turner [mailto:[email protected]]\n>> Sent: Tuesday, August 22, 2006 11:27 AM\n>> To: Mark Lewis\n>> Cc: Marty Jia; Joshua D. Drake; [email protected];\n> DBAs;\n>> Rich Wilson; Ernest Wurzbach\n>> Subject: Re: [PERFORM] How to get higher tps\n>>\n>>\n>> Oh - and it's usefull to know if you are CPU bound, or IO bound.\n> Check\n>> top or vmstat to get an idea of that\n>>\n>> Alex\n>>\n>>\n>> On 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\n>> wrote: \n>>\n>> \tFirst things first, run a bonnie++ benchmark, and post the\n> numbers. \n>> That will give a good indication of raw IO performance, and\n> is\n>> often the first inidication of problems separate from the DB. We have\n> \n>> seen pretty bad performance from SANs in the past. How many FC lines\n> do\n>> you have running to your server, remember each line is limited to\n> about\n>> 200MB/sec, to get good throughput, you will need multiple connections.\n> \n>> \t\n>> \tWhen you run pgbench, run a iostat also and see what the numbers\n> say.\n>> \t\n>> \t\n>> \tAlex.\n>> \t\n>> \t\n>> \t\n>> \tOn 8/22/06, Mark Lewis < [email protected] \n>> <mailto:[email protected]> > wrote:\n>>\n>> \t\tWell, at least on my test machines running\n> gnome-terminal, my \n>> pgbench\n>> \t\truns tend to get throttled by gnome-terminal's lousy\n> performance to \n>> no\n>> \t\tmore than 300 tps or so. Running with 2>/dev/null to\n> throw away all \n>> the\n>> \t\tdetailed logging gives me 2-3x improvement in scores.\n>> Caveat: in my \n>> \t\tcase the db is on the local machine, so who knows what\n> all the\n>> \t\tinteractions are.\n>> \t\t\n>> \t\tAlso, when you initialized the pgbench db what scaling\n> factor did \n>> you\n>> \t\tuse? And does running pgbench with -v improve\n> performance at all?\n>> \t\t\n>> \t\t-- Mark\n>> \t\t\n>> \t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n>> \t\t> Joshua,\n>> \t\t>\n>> \t\t> Here is\n>> \t\t>\n>> \t\t> shared_buffers = 80000\n>> \t\t> fsync = on\n>> \t\t> max_fsm_pages = 350000\n>> \t\t> max_connections = 1000 \n>> \t\t> work_mem = 65536\n>> \t\t> effective_cache_size = 610000\n>> \t\t> random_page_cost = 3\n>> \t\t>\n>> \t\t> Here is pgbench I used:\n>> \t\t>\n>> \t\t> pgbench -c 10 -t 10000 -d HQDB\n>> \t\t>\n>> \t\t> Thanks\n>> \t\t>\n>> \t\t> Marty \n>> \t\t>\n>> \t\t> -----Original Message-----\n>> \t\t> From: Joshua D. 
Drake [mailto:[email protected]]\n>> \t\t> Sent: Monday, August 21, 2006 6:09 PM\n>> \t\t> To: Marty Jia\n>> \t\t> Cc: [email protected]\n>> \t\t> Subject: Re: [PERFORM] How to get higher tps\n>> \t\t>\n>> \t\t> Marty Jia wrote:\n>> \t\t> > I'm exhausted to try all performance tuning ideas,\n> like \n>> following\n>> \t\t> > parameters\n>> \t\t> >\n>> \t\t> > shared_buffers\n>> \t\t> > fsync\n>> \t\t> > max_fsm_pages\n>> \t\t> > max_connections\n>> \t\t> > shared_buffers\n>> \t\t> > work_mem\n>> \t\t> > max_fsm_pages\n>> \t\t> > effective_cache_size\n>> \t\t> > random_page_cost\n>> \t\t> >\n>> \t\t> > I believe all above have right size and values, but\n> I just can \n>> not get\n>> \t\t>\n>> \t\t> > higher tps more than 300 testd by pgbench \n>> \t\t>\n>> \t\t> What values did you use?\n>> \t\t>\n>> \t\t> >\n>> \t\t> > Here is our hardware\n>> \t\t> >\n>> \t\t> >\n>> \t\t> > Dual Intel Xeon 2.8GHz\n>> \t\t> > 6GB RAM\n>> \t\t> > Linux 2.4 kernel\n>> \t\t> > RedHat Enterprise Linux AS 3 \n>> \t\t> > 200GB for PGDATA on 3Par, ext3\n>> \t\t> > 50GB for WAL on 3Par, ext3\n>> \t\t> >\n>> \t\t> > With PostgreSql 8.1.4\n>> \t\t> >\n>> \t\t> > We don't have i/o bottle neck.\n>> \t\t>\n>> \t\t> Are you sure? What does iostat say during a pgbench?\n>> What parameters are \n>> \t\t> you passing to pgbench?\n>> \t\t>\n>> \t\t> Well in theory, upgrading to 2.6 kernel will help as\n> well as \n>> making your\n>> \t\t> WAL ext2 instead of ext3.\n>> \t\t>\n>> \t\t> > Whatelse I can try to better tps? Someone told me I\n> can should \n>> get tps\n>> \t\t>\n>> \t\t> > over 1500, it is hard to believe.\n>> \t\t>\n>> \t\t> 1500? Hmmm... I don't know about that, I can get\n> 470tps or so on \n>> my\n>> \t\t> measily dual core 3800 with 2gig of ram though.\n>> \t\t>\n>> \t\t> Joshua D. Drake \n>> \t\t>\n>> \t\t>\n>> \t\t> >\n>> \t\t> > Thanks\n>> \t\t> >\n>> \t\t> > Marty\n>> \t\t> >\n>> \t\t> > ---------------------------(end of\n>> \t\t> > broadcast)---------------------------\n>> \t\t> > TIP 2: Don't 'kill -9' the postmaster \n>> \t\t> >\n>> \t\t>\n>> \t\t>\n>> \t\t\n>> \t\t---------------------------(end of\n>> broadcast)---------------------------\n>> \t\tTIP 1: if posting/reading through Usenet, please send an\n> appropriate\n>> \t\t subscribe-nomail command to\n>> [email protected] so that your\n>> \t\t message can get through to the mailing list\n> cleanly\n>> \t\t\n>>\n>>\n>>\n>>\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 22 Aug 2006 12:38:27 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
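One way to answer the "does it stay constant at 40 clients versus 20" question in a single pass is to sweep the client count and keep only the tps summary lines. A sketch, with the database name and per-client transaction count chosen arbitrarily:

#!/bin/sh
# Run pgbench at increasing client counts and see where the curve
# flattens out or starts to fall.
DB=pgbench
for c in 5 10 20 40 80; do
    printf 'clients=%-3s ' "$c"
    pgbench -c "$c" -t 2500 "$DB" 2>/dev/null | grep 'excluding connections'
done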
{
"msg_contents": "As I mentioned, I haven't changed the defaults at all yet:\nFsync is still on... \n\nshared_buffers = 1000\nmax_fsm_pages = 20000\nmax_connections = 40\nwork_mem = 1024\neffective_cache_size = 1000\nrandom_page_cost = 4\n\nI'm not sure how much the dual core woodcrests and faster memory are\nhelping my system. Your hardware should *theoretically* have better IO\nperformance, assuming you're actually making use of the 2x2GB/s FC\ninterfaces and external RAID.\n\nWhat do you get if you run the bench back-to-back without rebuilding the\ntest db? (Say pgbench -c 10 -t 10000 -d bench 2>/dev/null run 5 times in\na row)? Maybe that would put more stress on RAM/CPU?\n\nSeems to me your issue is with an underperforming IO subsystem- as\npreviously mentioned, you might want to check dd and bonnie++ (v 1.03)\nnumbers.\n\ntime bash -c \"(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)\"\n\nI get ~255 mb/s from the above.\n\nBucky \n\n-----Original Message-----\nFrom: Marty Jia [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 3:38 PM\nTo: Bucky Jordan; Joshua D. Drake\nCc: Alex Turner; Mark Lewis; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: RE: [PERFORM] How to get higher tps\n\nBucky\n\nMy best result is around 380. I believe your hardware is more efficient,\nbecause no matter how I change the conf parameters, no improvement can\nbe obtained. I even turned fsync off.\n\nWhat is your values for the following parameters?\n\nshared_buffers = 80000\nmax_fsm_pages = 350000\nmax_connections = 1000\nwork_mem = 65536\neffective_cache_size = 610000\nrandom_page_cost = 3\n\nThanks\nMarty\n\n-----Original Message-----\nFrom: Bucky Jordan [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 3:23 PM\nTo: Joshua D. Drake; Marty Jia\nCc: Alex Turner; Mark Lewis; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: RE: [PERFORM] How to get higher tps\n\nMarty,\n\nHere's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install\non a Dell Poweredge 2950 with 8gb ram, 2x3.0 dual-core woodcrest (4MB\ncache/socket) with 6x300GB 10k SAS drives:\n\npgbench -c 10 -t 10000 -d bench 2>/dev/null\npghost: pgport: (null) nclients: 10 nxacts: 10000 dbName: bench\n`transaction type: TPC-B (sort of) scaling factor: 20 number of clients:\n10 number of transactions per client: 10000 number of transactions\nactually processed: 100000/100000 tps = 561.056729 (including\nconnections establishing) tps = 561.127760 (excluding connections\nestablishing)\n\nHere's some iostat samples during the test:\n tty mfid0 da0 cd0\ncpu\n tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy in\nid\n 6 77 16.01 1642 25.67 0.00 0 0.00 0.00 0 0.00 3 0 8\n2 87\n 8 157 17.48 3541 60.43 0.00 0 0.00 0.00 0 0.00 24 0 28\n4 43\n 5 673 17.66 2287 39.44 0.00 0 0.00 0.00 0 0.00 10 0 13\n2 75\n 6 2818 16.37 2733 43.68 0.00 0 0.00 0.00 0 0.00 17 0 23\n3 56\n 1 765 18.05 2401 42.32 0.00 0 0.00 0.00 0 0.00 15 0 17\n3 65\n\nNote- the above was with no tuning to the kernel or postgresql.conf. \n\nNow for my question- it seems that I've still got quite a bit of\nheadroom on the hardware I'm running the above tests on, since I know\nthe array will pump out > 200 MB/s (dd, bonnie++ numbers), and CPU\nappears mostly idle. This would indicate I should be able to get some\nsignificantly better numbers with postgresql.conf tweaks correct?\n\nI guess the other problem is ensuring that we're not testing RAM speeds,\nsince most of the data is probably in memory (BSD io buffers)? 
Although,\nfor the initial run, that doesn't seem to be the case, since subsequent\nruns without rebuilding the benchmark db are slightly not believable\n(i.e. 1,200 going up to >2,500 tps over 5 back-to-back runs). So, as\nlong as I re-initialize the benchdb before each run, it should be a\nrealistic test, right?\n\nThanks,\n\nBucky\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Joshua D.\nDrake\nSent: Tuesday, August 22, 2006 12:16 PM\nTo: Marty Jia\nCc: Alex Turner; Mark Lewis; [email protected]; DBAs;\nRich Wilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\nMarty Jia wrote:\n> Here is iostat when running pgbench:\n> \n> avg-cpu: %user %nice %sys %iowait %idle\n> 26.17 0.00 8.25 23.17 42.42\n\nYou are are a little io bound and fairly cpu bound. I would be curious\nif your performance goes down if you increase the number of connections\nyou are using.\n\nJoshua D. Drake\n\n\n> \n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 0.00 0.00 0.00 0 0\n> sda1 0.00 0.00 0.00 0 0\n> sda2 0.00 0.00 0.00 0 0\n> sda3 0.00 0.00 0.00 0 0\n> sda4 0.00 0.00 0.00 0 0\n> sda5 0.00 0.00 0.00 0 0\n> sda6 0.00 0.00 0.00 0 0\n> sda7 0.00 0.00 0.00 0 0\n> sdb 0.00 0.00 0.00 0 0\n> sdb1 0.00 0.00 0.00 0 0\n> sdb2 0.00 0.00 0.00 0 0\n> sdb3 0.00 0.00 0.00 0 0\n> sdb4 0.00 0.00 0.00 0 0\n> sdb5 0.00 0.00 0.00 0 0\n> sdb6 0.00 0.00 0.00 0 0\n> sdb7 0.00 0.00 0.00 0 0\n> sdc 0.00 0.00 0.00 0 0\n> sdd 0.00 0.00 0.00 0 0\n> sde 0.00 0.00 0.00 0 0\n> sdf 0.00 0.00 0.00 0 0\n> sdg 0.00 0.00 0.00 0 0\n> sdh 0.00 0.00 0.00 0 0\n> sdi 40.33 0.00 413.33 0 1240\n> sdj 34.33 0.00 394.67 0 1184\n> sdk 36.00 0.00 410.67 0 1232\n> sdl 37.00 0.00 429.33 0 1288\n> sdm 375.00 0.00 3120.00 0 9360\n> sdn 378.33 0.00 3120.00 0 9360\n> \n> ________________________________\n> \n> From: Alex Turner [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 11:27 AM\n> To: Mark Lewis\n> Cc: Marty Jia; Joshua D. Drake; [email protected];\nDBAs;\n> Rich Wilson; Ernest Wurzbach\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> \n> Oh - and it's usefull to know if you are CPU bound, or IO bound.\nCheck\n> top or vmstat to get an idea of that\n> \n> Alex\n> \n> \n> On 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> >\n> wrote: \n> \n> \tFirst things first, run a bonnie++ benchmark, and post the\nnumbers. \n> That will give a good indication of raw IO performance, and\nis\n> often the first inidication of problems separate from the DB. We have\n\n> seen pretty bad performance from SANs in the past. How many FC lines\ndo\n> you have running to your server, remember each line is limited to\nabout\n> 200MB/sec, to get good throughput, you will need multiple connections.\n\n> \t\n> \tWhen you run pgbench, run a iostat also and see what the numbers\nsay.\n> \t\n> \t\n> \tAlex.\n> \t\n> \t\n> \t\n> \tOn 8/22/06, Mark Lewis < [email protected] \n> <mailto:[email protected]> > wrote:\n> \n> \t\tWell, at least on my test machines running\ngnome-terminal, my \n> pgbench\n> \t\truns tend to get throttled by gnome-terminal's lousy\nperformance to \n> no\n> \t\tmore than 300 tps or so. Running with 2>/dev/null to\nthrow away all \n> the\n> \t\tdetailed logging gives me 2-3x improvement in scores.\n> Caveat: in my \n> \t\tcase the db is on the local machine, so who knows what\nall the\n> \t\tinteractions are.\n> \t\t\n> \t\tAlso, when you initialized the pgbench db what scaling\nfactor did \n> you\n> \t\tuse? 
And does running pgbench with -v improve\nperformance at all?\n> \t\t\n> \t\t-- Mark\n> \t\t\n> \t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n> \t\t> Joshua,\n> \t\t>\n> \t\t> Here is\n> \t\t>\n> \t\t> shared_buffers = 80000\n> \t\t> fsync = on\n> \t\t> max_fsm_pages = 350000\n> \t\t> max_connections = 1000 \n> \t\t> work_mem = 65536\n> \t\t> effective_cache_size = 610000\n> \t\t> random_page_cost = 3\n> \t\t>\n> \t\t> Here is pgbench I used:\n> \t\t>\n> \t\t> pgbench -c 10 -t 10000 -d HQDB\n> \t\t>\n> \t\t> Thanks\n> \t\t>\n> \t\t> Marty \n> \t\t>\n> \t\t> -----Original Message-----\n> \t\t> From: Joshua D. Drake [mailto:[email protected]]\n> \t\t> Sent: Monday, August 21, 2006 6:09 PM\n> \t\t> To: Marty Jia\n> \t\t> Cc: [email protected]\n> \t\t> Subject: Re: [PERFORM] How to get higher tps\n> \t\t>\n> \t\t> Marty Jia wrote:\n> \t\t> > I'm exhausted to try all performance tuning ideas,\nlike \n> following\n> \t\t> > parameters\n> \t\t> >\n> \t\t> > shared_buffers\n> \t\t> > fsync\n> \t\t> > max_fsm_pages\n> \t\t> > max_connections\n> \t\t> > shared_buffers\n> \t\t> > work_mem\n> \t\t> > max_fsm_pages\n> \t\t> > effective_cache_size\n> \t\t> > random_page_cost\n> \t\t> >\n> \t\t> > I believe all above have right size and values, but\nI just can \n> not get\n> \t\t>\n> \t\t> > higher tps more than 300 testd by pgbench \n> \t\t>\n> \t\t> What values did you use?\n> \t\t>\n> \t\t> >\n> \t\t> > Here is our hardware\n> \t\t> >\n> \t\t> >\n> \t\t> > Dual Intel Xeon 2.8GHz\n> \t\t> > 6GB RAM\n> \t\t> > Linux 2.4 kernel\n> \t\t> > RedHat Enterprise Linux AS 3 \n> \t\t> > 200GB for PGDATA on 3Par, ext3\n> \t\t> > 50GB for WAL on 3Par, ext3\n> \t\t> >\n> \t\t> > With PostgreSql 8.1.4\n> \t\t> >\n> \t\t> > We don't have i/o bottle neck.\n> \t\t>\n> \t\t> Are you sure? What does iostat say during a pgbench?\n> What parameters are \n> \t\t> you passing to pgbench?\n> \t\t>\n> \t\t> Well in theory, upgrading to 2.6 kernel will help as\nwell as \n> making your\n> \t\t> WAL ext2 instead of ext3.\n> \t\t>\n> \t\t> > Whatelse I can try to better tps? Someone told me I\ncan should \n> get tps\n> \t\t>\n> \t\t> > over 1500, it is hard to believe.\n> \t\t>\n> \t\t> 1500? Hmmm... I don't know about that, I can get\n470tps or so on \n> my\n> \t\t> measily dual core 3800 with 2gig of ram though.\n> \t\t>\n> \t\t> Joshua D. Drake \n> \t\t>\n> \t\t>\n> \t\t> >\n> \t\t> > Thanks\n> \t\t> >\n> \t\t> > Marty\n> \t\t> >\n> \t\t> > ---------------------------(end of\n> \t\t> > broadcast)---------------------------\n> \t\t> > TIP 2: Don't 'kill -9' the postmaster \n> \t\t> >\n> \t\t>\n> \t\t>\n> \t\t\n> \t\t---------------------------(end of\n> broadcast)---------------------------\n> \t\tTIP 1: if posting/reading through Usenet, please send an\nappropriate\n> \t\t subscribe-nomail command to\n> [email protected] so that your\n> \t\t message can get through to the mailing list\ncleanly\n> \t\t\n> \n> \n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Have you searched our list archives?\n\n http://archives.postgresql.org\n",
"msg_date": "Tue, 22 Aug 2006 15:49:06 -0400",
"msg_from": "\"Bucky Jordan\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to get higher tps"
},
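The dd test quoted above measures sequential writes only; reading the file back gives the other half of the picture, and using a file larger than RAM keeps the read pass from being served out of the OS cache. A rough sketch along the same lines, with an illustrative path on the database volume:

#!/bin/sh
# Sequential write then sequential read of an ~8 GB file (larger than
# the 6-8 GB of RAM on the machines in this thread).
TESTFILE=/pgdata/bigfile
time sh -c "dd if=/dev/zero of=$TESTFILE bs=8k count=1000000 && sync"
time dd if="$TESTFILE" of=/dev/null bs=8k
rm -f "$TESTFILE"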
{
"msg_contents": "Here is, it's first time I got tps > 400\n\n10 clients:\n\n[pgsql@prdhqdb2:/pgsql/database]pgbench -c 10 -t 10000 -v -d pgbench\n2>/dev/null\npghost: pgport: (null) nclients: 10 nxacts: 10000 dbName: pgbench\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 10\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 100000/100000\ntps = 413.022562 (including connections establishing)\ntps = 413.125733 (excluding connections establishing)\n\n20 clients:\n\n[pgsql@prdhqdb2:/pgsql/database]pgbench -c 20 -t 10000 -v -d pgbench\n2>/dev/null\npghost: pgport: (null) nclients: 20 nxacts: 10000 dbName: pgbench\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nnumber of clients: 20\nnumber of transactions per client: 10000\nnumber of transactions actually processed: 200000/200000\ntps = 220.759983 (including connections establishing)\ntps = 220.790077 (excluding connections establishing)\n \n\n-----Original Message-----\nFrom: Joshua D. Drake [mailto:[email protected]] \nSent: Tuesday, August 22, 2006 3:38 PM\nTo: Marty Jia\nCc: Bucky Jordan; Alex Turner; Mark Lewis;\[email protected]; DBAs; Rich Wilson; Ernest Wurzbach\nSubject: Re: [PERFORM] How to get higher tps\n\nMarty Jia wrote:\n> Bucky\n> \n> My best result is around 380. I believe your hardware is more \n> efficient, because no matter how I change the conf parameters, no \n> improvement can be obtained. I even turned fsync off.\n\nDo you stay constant if you use 40 clients versus 20?\n\n> \n> What is your values for the following parameters?\n> \n> shared_buffers = 80000\n> max_fsm_pages = 350000\n> max_connections = 1000\n> work_mem = 65536\n> effective_cache_size = 610000\n> random_page_cost = 3\n> \n> Thanks\n> Marty\n> \n> -----Original Message-----\n> From: Bucky Jordan [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 3:23 PM\n> To: Joshua D. Drake; Marty Jia\n> Cc: Alex Turner; Mark Lewis; [email protected]; DBAs; \n> Rich Wilson; Ernest Wurzbach\n> Subject: RE: [PERFORM] How to get higher tps\n> \n> Marty,\n> \n> Here's pgbench results from a stock FreeBSD 6.1 amd64/PG 8.1.4 install\n\n> on a Dell Poweredge 2950 with 8gb ram, 2x3.0 dual-core woodcrest (4MB\n> cache/socket) with 6x300GB 10k SAS drives:\n> \n> pgbench -c 10 -t 10000 -d bench 2>/dev/null\n> pghost: pgport: (null) nclients: 10 nxacts: 10000 dbName: bench \n> `transaction type: TPC-B (sort of) scaling factor: 20 number of\nclients:\n> 10 number of transactions per client: 10000 number of transactions \n> actually processed: 100000/100000 tps = 561.056729 (including \n> connections establishing) tps = 561.127760 (excluding connections\n> establishing)\n> \n> Here's some iostat samples during the test:\n> tty mfid0 da0 cd0\n> cpu\n> tin tout KB/t tps MB/s KB/t tps MB/s KB/t tps MB/s us ni sy\nin\n> id\n> 6 77 16.01 1642 25.67 0.00 0 0.00 0.00 0 0.00 3 0 8\n> 2 87\n> 8 157 17.48 3541 60.43 0.00 0 0.00 0.00 0 0.00 24 0 28\n> 4 43\n> 5 673 17.66 2287 39.44 0.00 0 0.00 0.00 0 0.00 10 0 13\n> 2 75\n> 6 2818 16.37 2733 43.68 0.00 0 0.00 0.00 0 0.00 17 0 23\n> 3 56\n> 1 765 18.05 2401 42.32 0.00 0 0.00 0.00 0 0.00 15 0 17\n> 3 65\n> \n> Note- the above was with no tuning to the kernel or postgresql.conf. \n> \n> Now for my question- it seems that I've still got quite a bit of \n> headroom on the hardware I'm running the above tests on, since I know \n> the array will pump out > 200 MB/s (dd, bonnie++ numbers), and CPU \n> appears mostly idle. 
This would indicate I should be able to get some \n> significantly better numbers with postgresql.conf tweaks correct?\n> \n> I guess the other problem is ensuring that we're not testing RAM \n> speeds, since most of the data is probably in memory (BSD io buffers)?\n\n> Although, for the initial run, that doesn't seem to be the case, since\n\n> subsequent runs without rebuilding the benchmark db are slightly not \n> believable (i.e. 1,200 going up to >2,500 tps over 5 back-to-back \n> runs). So, as long as I re-initialize the benchdb before each run, it \n> should be a realistic test, right?\n> \n> Thanks,\n> \n> Bucky\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Joshua D.\n> Drake\n> Sent: Tuesday, August 22, 2006 12:16 PM\n> To: Marty Jia\n> Cc: Alex Turner; Mark Lewis; [email protected]; DBAs; \n> Rich Wilson; Ernest Wurzbach\n> Subject: Re: [PERFORM] How to get higher tps\n> \n> Marty Jia wrote:\n>> Here is iostat when running pgbench:\n>> \n>> avg-cpu: %user %nice %sys %iowait %idle\n>> 26.17 0.00 8.25 23.17 42.42\n> \n> You are are a little io bound and fairly cpu bound. I would be curious\n\n> if your performance goes down if you increase the number of \n> connections you are using.\n> \n> Joshua D. Drake\n> \n> \n>> \n>> Device: tps Blk_read/s Blk_wrtn/s Blk_read\nBlk_wrtn\n>> sda 0.00 0.00 0.00 0\n0\n>> sda1 0.00 0.00 0.00 0\n0\n>> sda2 0.00 0.00 0.00 0\n0\n>> sda3 0.00 0.00 0.00 0\n0\n>> sda4 0.00 0.00 0.00 0\n0\n>> sda5 0.00 0.00 0.00 0\n0\n>> sda6 0.00 0.00 0.00 0\n0\n>> sda7 0.00 0.00 0.00 0\n0\n>> sdb 0.00 0.00 0.00 0\n0\n>> sdb1 0.00 0.00 0.00 0\n0\n>> sdb2 0.00 0.00 0.00 0\n0\n>> sdb3 0.00 0.00 0.00 0\n0\n>> sdb4 0.00 0.00 0.00 0\n0\n>> sdb5 0.00 0.00 0.00 0\n0\n>> sdb6 0.00 0.00 0.00 0\n0\n>> sdb7 0.00 0.00 0.00 0\n0\n>> sdc 0.00 0.00 0.00 0\n0\n>> sdd 0.00 0.00 0.00 0\n0\n>> sde 0.00 0.00 0.00 0\n0\n>> sdf 0.00 0.00 0.00 0\n0\n>> sdg 0.00 0.00 0.00 0\n0\n>> sdh 0.00 0.00 0.00 0\n0\n>> sdi 40.33 0.00 413.33 0\n1240\n>> sdj 34.33 0.00 394.67 0\n1184\n>> sdk 36.00 0.00 410.67 0\n1232\n>> sdl 37.00 0.00 429.33 0\n1288\n>> sdm 375.00 0.00 3120.00 0\n9360\n>> sdn 378.33 0.00 3120.00 0\n9360\n>>\n>> ________________________________\n>>\n>> From: Alex Turner [mailto:[email protected]]\n>> Sent: Tuesday, August 22, 2006 11:27 AM\n>> To: Mark Lewis\n>> Cc: Marty Jia; Joshua D. Drake; [email protected];\n> DBAs;\n>> Rich Wilson; Ernest Wurzbach\n>> Subject: Re: [PERFORM] How to get higher tps\n>>\n>>\n>> Oh - and it's usefull to know if you are CPU bound, or IO bound.\n> Check\n>> top or vmstat to get an idea of that\n>>\n>> Alex\n>>\n>>\n>> On 8/22/06, Alex Turner < [email protected] <mailto:[email protected]> \n>> >\n>> wrote: \n>>\n>> \tFirst things first, run a bonnie++ benchmark, and post the\n> numbers. \n>> That will give a good indication of raw IO performance, and\n> is\n>> often the first inidication of problems separate from the DB. We \n>> have\n> \n>> seen pretty bad performance from SANs in the past. 
How many FC lines\n> do\n>> you have running to your server, remember each line is limited to\n> about\n>> 200MB/sec, to get good throughput, you will need multiple\nconnections.\n> \n>> \t\n>> \tWhen you run pgbench, run a iostat also and see what the numbers\n> say.\n>> \t\n>> \t\n>> \tAlex.\n>> \t\n>> \t\n>> \t\n>> \tOn 8/22/06, Mark Lewis < [email protected] \n>> <mailto:[email protected]> > wrote:\n>>\n>> \t\tWell, at least on my test machines running\n> gnome-terminal, my\n>> pgbench\n>> \t\truns tend to get throttled by gnome-terminal's lousy\n> performance to\n>> no\n>> \t\tmore than 300 tps or so. Running with 2>/dev/null to\n> throw away all\n>> the\n>> \t\tdetailed logging gives me 2-3x improvement in scores.\n>> Caveat: in my \n>> \t\tcase the db is on the local machine, so who knows what\n> all the\n>> \t\tinteractions are.\n>> \t\t\n>> \t\tAlso, when you initialized the pgbench db what scaling\n> factor did\n>> you\n>> \t\tuse? And does running pgbench with -v improve\n> performance at all?\n>> \t\t\n>> \t\t-- Mark\n>> \t\t\n>> \t\tOn Tue, 2006-08-22 at 09:19 -0400, Marty Jia wrote:\n>> \t\t> Joshua,\n>> \t\t>\n>> \t\t> Here is\n>> \t\t>\n>> \t\t> shared_buffers = 80000\n>> \t\t> fsync = on\n>> \t\t> max_fsm_pages = 350000\n>> \t\t> max_connections = 1000 \n>> \t\t> work_mem = 65536\n>> \t\t> effective_cache_size = 610000\n>> \t\t> random_page_cost = 3\n>> \t\t>\n>> \t\t> Here is pgbench I used:\n>> \t\t>\n>> \t\t> pgbench -c 10 -t 10000 -d HQDB\n>> \t\t>\n>> \t\t> Thanks\n>> \t\t>\n>> \t\t> Marty \n>> \t\t>\n>> \t\t> -----Original Message-----\n>> \t\t> From: Joshua D. Drake [mailto:[email protected]]\n>> \t\t> Sent: Monday, August 21, 2006 6:09 PM\n>> \t\t> To: Marty Jia\n>> \t\t> Cc: [email protected]\n>> \t\t> Subject: Re: [PERFORM] How to get higher tps\n>> \t\t>\n>> \t\t> Marty Jia wrote:\n>> \t\t> > I'm exhausted to try all performance tuning ideas,\n> like\n>> following\n>> \t\t> > parameters\n>> \t\t> >\n>> \t\t> > shared_buffers\n>> \t\t> > fsync\n>> \t\t> > max_fsm_pages\n>> \t\t> > max_connections\n>> \t\t> > shared_buffers\n>> \t\t> > work_mem\n>> \t\t> > max_fsm_pages\n>> \t\t> > effective_cache_size\n>> \t\t> > random_page_cost\n>> \t\t> >\n>> \t\t> > I believe all above have right size and values, but\n> I just can\n>> not get\n>> \t\t>\n>> \t\t> > higher tps more than 300 testd by pgbench \n>> \t\t>\n>> \t\t> What values did you use?\n>> \t\t>\n>> \t\t> >\n>> \t\t> > Here is our hardware\n>> \t\t> >\n>> \t\t> >\n>> \t\t> > Dual Intel Xeon 2.8GHz\n>> \t\t> > 6GB RAM\n>> \t\t> > Linux 2.4 kernel\n>> \t\t> > RedHat Enterprise Linux AS 3 \n>> \t\t> > 200GB for PGDATA on 3Par, ext3\n>> \t\t> > 50GB for WAL on 3Par, ext3\n>> \t\t> >\n>> \t\t> > With PostgreSql 8.1.4\n>> \t\t> >\n>> \t\t> > We don't have i/o bottle neck.\n>> \t\t>\n>> \t\t> Are you sure? What does iostat say during a pgbench?\n>> What parameters are \n>> \t\t> you passing to pgbench?\n>> \t\t>\n>> \t\t> Well in theory, upgrading to 2.6 kernel will help as\n> well as\n>> making your\n>> \t\t> WAL ext2 instead of ext3.\n>> \t\t>\n>> \t\t> > Whatelse I can try to better tps? Someone told me I\n> can should\n>> get tps\n>> \t\t>\n>> \t\t> > over 1500, it is hard to believe.\n>> \t\t>\n>> \t\t> 1500? Hmmm... I don't know about that, I can get\n> 470tps or so on\n>> my\n>> \t\t> measily dual core 3800 with 2gig of ram though.\n>> \t\t>\n>> \t\t> Joshua D. 
Drake \n>> \t\t>\n>> \t\t>\n>> \t\t> >\n>> \t\t> > Thanks\n>> \t\t> >\n>> \t\t> > Marty\n>> \t\t> >\n>> \t\t> > ---------------------------(end of\n>> \t\t> > broadcast)---------------------------\n>> \t\t> > TIP 2: Don't 'kill -9' the postmaster \n>> \t\t> >\n>> \t\t>\n>> \t\t>\n>> \t\t\n>> \t\t---------------------------(end of\n>> broadcast)---------------------------\n>> \t\tTIP 1: if posting/reading through Usenet, please send an\n> appropriate\n>> \t\t subscribe-nomail command to\n>> [email protected] so that your\n>> \t\t message can get through to the mailing list\n> cleanly\n>> \t\t\n>>\n>>\n>>\n>>\n> \n> \n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Tue, 22 Aug 2006 16:35:14 -0400",
"msg_from": "\"Marty Jia\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to get higher tps"
}
] |
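Worth noting about the runs above: they were initialized with a scaling factor of 1, which leaves exactly one row in the branches table, so every client's UPDATE queues on the same row lock; that alone can explain tps halving when going from 10 to 20 clients. A sketch of rebuilding with a scale at least as large as the biggest client count to be tested (names illustrative):

#!/bin/sh
# Scale >= clients avoids benchmarking a single row lock instead of the
# I/O subsystem.
DB=pgbench
SCALE=50
pgbench -i -s $SCALE "$DB"
pgbench -c 20 -t 10000 "$DB" 2>/dev/null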
[
{
"msg_contents": "I am trying to decide what kind of storage options to use for a pair of\ngood database servers, a primary and a read-only that can be a failover.\nHere is what I'm thinking so far:\n\n(1) We have a nice NetApp that can do iSCSI. It has a large (multi-GB)\nbattery-backed cache so it could potentially perform the transactions at\na very high rate. However there are many other applications accessing\nthe NetApp over NFS, so I am not sure what performance to expect. Any\nsuggestions about using network storage like this for the database? Will\nthe database make huge demands on the NetApp, and force my department\nspend huge amounts on new NetApp hardware?\n\n(2) I read with interest this thread:\nhttp://archives.postgresql.org/pgsql-performance/2006-08/msg00164.php\n\nIs there any consensus on whether to do WAL on a RAID-1 and PGDATA on a\nRAID-10 versus everything on a RAID-10? How does the number of disks I\nhave affect this decision (I will probably have 4-8 disks per server).\n\nSome of the applications I initially need to support will be a high\nvolume of simple transactions without many tablescans, if that helps.\nHowever, I expect that these servers will need to serve many needs.\n\nAny other suggestions are appreciated. Is there a common place to look\nfor hardware suggestions (like a postgresql hardware FAQ)?\n\nRegards,\n\tJeff Davis\n\n",
"msg_date": "Mon, 21 Aug 2006 14:50:51 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": true,
"msg_subject": "Storage Options"
},
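For the iSCSI-on-NetApp question, one quick way to get a feel for what the shared filer will actually deliver, before committing a database to it, is to run the same raw I/O test against the iSCSI LUN and against local disks and compare. A sketch, assuming bonnie++ 1.03 is installed and with illustrative mount points:

#!/bin/sh
# Compare raw sequential and seek performance of the two candidate volumes.
for vol in /mnt/netapp-iscsi /mnt/local-raid; do
    echo "=== $vol ==="
    bonnie++ -d "$vol" -s 8192 -n 0
done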
{
"msg_contents": "On Mon, Aug 21, 2006 at 02:50:51PM -0700, Jeff Davis wrote:\n>the NetApp over NFS, so I am not sure what performance to expect. Any\n>suggestions about using network storage like this for the database?\n\nDon't. Unless you're using a very small (toy-scale) database, the netapp \nstorage is way too expensive for the kind of usage you see with a \ndatabase application. You're much better off buying much cheaper storage \ntwice and using a database replication solution than either choking a \nreally expensive netapp or getting lousy performance from the same. The \nnetapps have their niche, but database storage isn't it. (Peformance in \ngeneral really isn't it--the advantages are managability, snapshotting, \nand cross-platform data exchange. It may be that those factors are \nimportant enough to make that a good solution for your particular \nsituation, but they're generally not particularly relevant in the \npostgres space.)\n\n>Is there any consensus on whether to do WAL on a RAID-1 and PGDATA on a\n>RAID-10 versus everything on a RAID-10? How does the number of disks I\n>have affect this decision (I will probably have 4-8 disks per server).\n\nYou can't get a good answer without testing with your actual data. I'd \nsuspect that with such a low number of disks you're better off with a \nsingle array, assuming that you have a good bbu raid controller and \nassuming that you're not doing write-mostly transaction work. But \ntesting with your actual workload is the only way to really know.\n\nMike Stone\n",
"msg_date": "Tue, 22 Aug 2006 06:02:38 -0400",
"msg_from": "Michael Stone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Storage Options"
}
] |
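And for the WAL-on-RAID-1 versus everything-on-RAID-10 question, the usual way to test it with the real workload is simply to relocate pg_xlog onto the other array through a symlink and rerun the same benchmark against both layouts. A sketch with illustrative paths; the postmaster must be stopped first:

#!/bin/sh
PGDATA=/usr/local/pgsql/data   # assumed data directory
WALMOUNT=/mnt/raid1            # assumed mount point of the other array

pg_ctl -D "$PGDATA" stop -m fast
mv "$PGDATA/pg_xlog" "$WALMOUNT/pg_xlog"
ln -s "$WALMOUNT/pg_xlog" "$PGDATA/pg_xlog"
pg_ctl -D "$PGDATA" start
# ...then rerun the application-level benchmark and compare the two layouts.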
[
{
"msg_contents": "I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use\nan index. With the index, I get executions times of 0.5 seconds. \nWithout, it's closer to 2.5 seconds.\n\nCompare these two sets of results (also provided at \nhttp://rafb.net/paste/results/ywcOZP66.html\nshould it appear poorly formatted below):\n\nfreshports.org=# \\i test2.sql\n \nQUERY PLAN\n----------------------------------------------------------------------\n----------------------------------------------------------------------\n-\n Merge Join (cost=24030.39..24091.43 rows=3028 width=206) (actual \ntime=301.301..355.261 rows=3149 loops=1)\n Merge Cond: (\"outer\".id = \"inner\".category_id)\n -> Sort (cost=11.17..11.41 rows=97 width=4) (actual \ntime=0.954..1.300 rows=95 loops=1)\n Sort Key: c.id\n -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \nwidth=4) (actual time=0.092..0.517 rows=97 loops=1)\n -> Sort (cost=24019.22..24026.79 rows=3028 width=206) (actual \ntime=300.317..314.114 rows=3149 loops=1)\n Sort Key: p.category_id\n -> Nested Loop (cost=0.00..23844.14 rows=3028 width=206) \n(actual time=0.082..264.459 rows=3149 loops=1)\n -> Seq Scan on ports p (cost=0.00..6141.11 rows=3028 \nwidth=206) (actual time=0.026..133.575 rows=3149 loops=1)\n Filter: (status = 'D'::bpchar)\n -> Index Scan using element_pkey on element e \n(cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 \nloops=3149)\n Index Cond: (\"outer\".element_id = e.id)\n Total runtime: 369.869 ms\n(13 rows)\n\nfreshports.org=# set enable_hashjoin = true;\nSET\nfreshports.org=# \\i test2.sql\n QUERY PLAN\n----------------------------------------------------------------------\n----------------------------------------------------------\n Hash Join (cost=6156.90..13541.14 rows=3028 width=206) (actual \ntime=154.741..2334.366 rows=3149 loops=1)\n Hash Cond: (\"outer\".category_id = \"inner\".id)\n -> Hash Join (cost=6148.68..13472.36 rows=3028 width=206) \n(actual time=153.801..2288.792 rows=3149 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".element_id)\n -> Seq Scan on element e (cost=0.00..4766.70 rows=252670 \nwidth=4) (actual time=0.022..1062.626 rows=252670 loops=1)\n -> Hash (cost=6141.11..6141.11 rows=3028 width=206) \n(actual time=151.105..151.105 rows=3149 loops=1)\n -> Seq Scan on ports p (cost=0.00..6141.11 rows=3028 \nwidth=206) (actual time=0.027..131.072 rows=3149 loops=1)\n Filter: (status = 'D'::bpchar)\n -> Hash (cost=7.97..7.97 rows=97 width=4) (actual \ntime=0.885..0.885 rows=97 loops=1)\n -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \nwidth=4) (actual time=0.076..0.476 rows=97 loops=1)\n Total runtime: 2346.877 ms\n(11 rows)\n\nfreshports.org=#\n\nWithout leaving \"enable_hashjoin = false\", can you suggest a way to \nforce the index usage?\n\nFYI, the query is:\n\nexplain analyse\nSELECT P.id,\n P.category_id,\n P.version as version,\n P.revision as revision,\n P.element_id,\n P.maintainer,\n P.short_description,\n to_char(P.date_added - SystemTimeAdjust(), 'DD Mon YYYY \nHH24:MI:SS') as date_added,\n P.last_commit_id as last_change_log_id,\n P.package_exists,\n P.extract_suffix,\n P.homepage,\n P.status,\n P.broken,\n P.forbidden,\n P.ignore,\n P.restricted,\n P.deprecated,\n P.no_cdrom,\n P.expiration_date,\n P.latest_link\n FROM categories C, ports P JOIN element E on P.element_id = E.id\n WHERE P.status = 'D'\n AND P.category_id = C.id;\n\n-- \nDan Langille : Software Developer looking for work\nmy resume: http://www.freebsddiary.org/dan_langille.php\n\n\n",
"msg_date": "Tue, 22 Aug 2006 05:09:36 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Forcing index usage without 'enable_hashjoin = FALSE'"
},
{
"msg_contents": "Dan Langille wrote:\n> I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use\n> an index. With the index, I get executions times of 0.5 seconds. \n> Without, it's closer to 2.5 seconds.\n> \n> Compare these two sets of results (also provided at \n> http://rafb.net/paste/results/ywcOZP66.html\n> should it appear poorly formatted below):\n> \n> freshports.org=# \\i test2.sql\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------------------\n> -\n> Merge Join (cost=24030.39..24091.43 rows=3028 width=206) (actual \n> time=301.301..355.261 rows=3149 loops=1)\n> Merge Cond: (\"outer\".id = \"inner\".category_id)\n> -> Sort (cost=11.17..11.41 rows=97 width=4) (actual \n> time=0.954..1.300 rows=95 loops=1)\n> Sort Key: c.id\n> -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \n> width=4) (actual time=0.092..0.517 rows=97 loops=1)\n> -> Sort (cost=24019.22..24026.79 rows=3028 width=206) (actual \n> time=300.317..314.114 rows=3149 loops=1)\n> Sort Key: p.category_id\n> -> Nested Loop (cost=0.00..23844.14 rows=3028 width=206) \n> (actual time=0.082..264.459 rows=3149 loops=1)\n> -> Seq Scan on ports p (cost=0.00..6141.11 rows=3028 \n> width=206) (actual time=0.026..133.575 rows=3149 loops=1)\n> Filter: (status = 'D'::bpchar)\n> -> Index Scan using element_pkey on element e \n> (cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 \n> loops=3149)\n> Index Cond: (\"outer\".element_id = e.id)\n> Total runtime: 369.869 ms\n> (13 rows)\n> \n> freshports.org=# set enable_hashjoin = true;\n> SET\n> freshports.org=# \\i test2.sql\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> ----------------------------------------------------------\n> Hash Join (cost=6156.90..13541.14 rows=3028 width=206) (actual \n> time=154.741..2334.366 rows=3149 loops=1)\n> Hash Cond: (\"outer\".category_id = \"inner\".id)\n> -> Hash Join (cost=6148.68..13472.36 rows=3028 width=206) \n> (actual time=153.801..2288.792 rows=3149 loops=1)\n> Hash Cond: (\"outer\".id = \"inner\".element_id)\n> -> Seq Scan on element e (cost=0.00..4766.70 rows=252670 \n> width=4) (actual time=0.022..1062.626 rows=252670 loops=1)\n> -> Hash (cost=6141.11..6141.11 rows=3028 width=206) \n> (actual time=151.105..151.105 rows=3149 loops=1)\n> -> Seq Scan on ports p (cost=0.00..6141.11 rows=3028 \n> width=206) (actual time=0.027..131.072 rows=3149 loops=1)\n> Filter: (status = 'D'::bpchar)\n> -> Hash (cost=7.97..7.97 rows=97 width=4) (actual \n> time=0.885..0.885 rows=97 loops=1)\n> -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \n> width=4) (actual time=0.076..0.476 rows=97 loops=1)\n> Total runtime: 2346.877 ms\n> (11 rows)\n> \n> freshports.org=#\n> \n> Without leaving \"enable_hashjoin = false\", can you suggest a way to \n> force the index usage?\n> \n> FYI, the query is:\n> \n> explain analyse\n> SELECT P.id,\n> P.category_id,\n> P.version as version,\n> P.revision as revision,\n> P.element_id,\n> P.maintainer,\n> P.short_description,\n> to_char(P.date_added - SystemTimeAdjust(), 'DD Mon YYYY \n> HH24:MI:SS') as date_added,\n> P.last_commit_id as last_change_log_id,\n> P.package_exists,\n> P.extract_suffix,\n> P.homepage,\n> P.status,\n> P.broken,\n> P.forbidden,\n> P.ignore,\n> P.restricted,\n> P.deprecated,\n> P.no_cdrom,\n> P.expiration_date,\n> P.latest_link\n> FROM categories C, ports P JOIN element E on P.element_id = E.id\n> WHERE P.status = 'D'\n> AND 
P.category_id = C.id;\n> \n\nI doubt it would make a difference but if you:\n\n...\nFROM categories C JOIN ports P on P.category_id=C.id JOIN element E on \nP.element_id = E.id\nWHERE P.status = 'D';\n\ndoes it change anything?\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n",
"msg_date": "Wed, 23 Aug 2006 13:31:57 +1000",
"msg_from": "Chris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index usage without 'enable_hashjoin = FALSE'"
},
{
"msg_contents": "On 23 Aug 2006 at 13:31, Chris wrote:\n\n> Dan Langille wrote:\n> > I'm using PostgreSQL 8.1.4 and I'm trying to force the planner to use\n> > an index. With the index, I get executions times of 0.5 seconds. \n> > Without, it's closer to 2.5 seconds.\n> > \n> > Compare these two sets of results (also provided at \n> > http://rafb.net/paste/results/ywcOZP66.html\n> > should it appear poorly formatted below):\n> > \n> > freshports.org=# \\i test2.sql\n> > \n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----------------------------------------------------------------------\n> > -\n> > Merge Join (cost=24030.39..24091.43 rows=3028 width=206) (actual \n> > time=301.301..355.261 rows=3149 loops=1)\n> > Merge Cond: (\"outer\".id = \"inner\".category_id)\n> > -> Sort (cost=11.17..11.41 rows=97 width=4) (actual \n> > time=0.954..1.300 rows=95 loops=1)\n> > Sort Key: c.id\n> > -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \n> > width=4) (actual time=0.092..0.517 rows=97 loops=1)\n> > -> Sort (cost=24019.22..24026.79 rows=3028 width=206) (actual \n> > time=300.317..314.114 rows=3149 loops=1)\n> > Sort Key: p.category_id\n> > -> Nested Loop (cost=0.00..23844.14 rows=3028 width=206) \n> > (actual time=0.082..264.459 rows=3149 loops=1)\n> > -> Seq Scan on ports p (cost=0.00..6141.11 rows=3028 \n> > width=206) (actual time=0.026..133.575 rows=3149 loops=1)\n> > Filter: (status = 'D'::bpchar)\n> > -> Index Scan using element_pkey on element e \n> > (cost=0.00..5.83 rows=1 width=4) (actual time=0.022..0.026 rows=1 \n> > loops=3149)\n> > Index Cond: (\"outer\".element_id = e.id)\n> > Total runtime: 369.869 ms\n> > (13 rows)\n> > \n> > freshports.org=# set enable_hashjoin = true;\n> > SET\n> > freshports.org=# \\i test2.sql\n> > QUERY PLAN\n> > ----------------------------------------------------------------------\n> > ----------------------------------------------------------\n> > Hash Join (cost=6156.90..13541.14 rows=3028 width=206) (actual \n> > time=154.741..2334.366 rows=3149 loops=1)\n> > Hash Cond: (\"outer\".category_id = \"inner\".id)\n> > -> Hash Join (cost=6148.68..13472.36 rows=3028 width=206) \n> > (actual time=153.801..2288.792 rows=3149 loops=1)\n> > Hash Cond: (\"outer\".id = \"inner\".element_id)\n> > -> Seq Scan on element e (cost=0.00..4766.70 rows=252670 \n> > width=4) (actual time=0.022..1062.626 rows=252670 loops=1)\n> > -> Hash (cost=6141.11..6141.11 rows=3028 width=206) \n> > (actual time=151.105..151.105 rows=3149 loops=1)\n> > -> Seq Scan on ports p (cost=0.00..6141.11 rows=3028 \n> > width=206) (actual time=0.027..131.072 rows=3149 loops=1)\n> > Filter: (status = 'D'::bpchar)\n> > -> Hash (cost=7.97..7.97 rows=97 width=4) (actual \n> > time=0.885..0.885 rows=97 loops=1)\n> > -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \n> > width=4) (actual time=0.076..0.476 rows=97 loops=1)\n> > Total runtime: 2346.877 ms\n> > (11 rows)\n> > \n> > freshports.org=#\n> > \n> > Without leaving \"enable_hashjoin = false\", can you suggest a way to \n> > force the index usage?\n> > \n> > FYI, the query is:\n> > \n> > explain analyse\n> > SELECT P.id,\n> > P.category_id,\n> > P.version as version,\n> > P.revision as revision,\n> > P.element_id,\n> > P.maintainer,\n> > P.short_description,\n> > to_char(P.date_added - SystemTimeAdjust(), 'DD Mon YYYY \n> > HH24:MI:SS') as date_added,\n> > P.last_commit_id as last_change_log_id,\n> > P.package_exists,\n> > P.extract_suffix,\n> > P.homepage,\n> > P.status,\n> > P.broken,\n> 
> P.forbidden,\n> > P.ignore,\n> > P.restricted,\n> > P.deprecated,\n> > P.no_cdrom,\n> > P.expiration_date,\n> > P.latest_link\n> > FROM categories C, ports P JOIN element E on P.element_id = E.id\n> > WHERE P.status = 'D'\n> > AND P.category_id = C.id;\n> > \n> \n> I doubt it would make a difference but if you:\n> \n> ...\n> FROM categories C JOIN ports P on P.category_id=C.id JOIN element E on \n> P.element_id = E.id\n> WHERE P.status = 'D';\n> \n> does it change anything?\n\nNot really, no:\n\nfreshports.org=# \\i test3.sql\n \nQUERY PLAN\n----------------------------------------------------------------------\n----------------------------------------------------------------------\n---\n Hash Join (cost=5344.62..12740.73 rows=3365 width=204) (actual \ntime=63.871..2164.880 rows=3149 loops=1)\n Hash Cond: (\"outer\".category_id = \"inner\".id)\n -> Hash Join (cost=5336.41..12665.22 rows=3365 width=204) \n(actual time=62.918..2122.529 rows=3149 loops=1)\n Hash Cond: (\"outer\".id = \"inner\".element_id)\n -> Seq Scan on element e (cost=0.00..4767.58 rows=252758 \nwidth=4) (actual time=0.019..1024.299 rows=252791 loops=1)\n -> Hash (cost=5328.00..5328.00 rows=3365 width=204) \n(actual time=60.228..60.228 rows=3149 loops=1)\n -> Bitmap Heap Scan on ports p (cost=34.02..5328.00 \nrows=3365 width=204) (actual time=1.900..41.316 rows=3149 loops=1)\n Recheck Cond: (status = 'D'::bpchar)\n -> Bitmap Index Scan on ports_deleted \n(cost=0.00..34.02 rows=3365 width=0) (actual time=1.454..1.454 \nrows=3149 loops=1)\n Index Cond: (status = 'D'::bpchar)\n -> Hash (cost=7.97..7.97 rows=97 width=4) (actual \ntime=0.890..0.890 rows=97 loops=1)\n -> Seq Scan on categories c (cost=0.00..7.97 rows=97 \nwidth=4) (actual time=0.074..0.497 rows=97 loops=1)\n Total runtime: 2176.784 ms\n(13 rows)\n\nfreshports.org=#\n\n\n\n-- \nDan Langille : Software Developer looking for work\nmy resume: http://www.freebsddiary.org/dan_langille.php\n\n\n",
"msg_date": "Wed, 23 Aug 2006 22:11:15 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing index usage without 'enable_hashjoin = FALSE'"
},
{
"msg_contents": "\"Dan Langille\" <[email protected]> writes:\n> Without leaving \"enable_hashjoin = false\", can you suggest a way to \n> force the index usage?\n\nHave you tried reducing random_page_cost?\n\nFYI, 8.2 should be a bit better about this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 23 Aug 2006 22:30:16 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Forcing index usage without 'enable_hashjoin = FALSE' "
},
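A minimal sketch of trying Tom Lane's suggestion without touching the server-wide configuration; the value 2.0 is illustrative only, and the right setting depends on how much of the data set is cached:

    -- lower random_page_cost for this session only, then re-check the plan
    SET random_page_cost = 2.0;   -- server default is 4.0; lower values favor index scans
    \i test2.sql                  -- re-run the EXPLAIN ANALYZE query from the thread
    RESET random_page_cost;       -- put the session back to the server default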
{
"msg_contents": "On 23 Aug 2006 at 22:30, Tom Lane wrote:\n\n> \"Dan Langille\" <[email protected]> writes:\n> > Without leaving \"enable_hashjoin = false\", can you suggest a way to \n> > force the index usage?\n> \n> Have you tried reducing random_page_cost?\n\nYes. No effect.\n\n> FYI, 8.2 should be a bit better about this.\n\nGood. This query is not critical, but it would be nice.\n\nThank you.\n\n-- \nDan Langille : Software Developer looking for work\nmy resume: http://www.freebsddiary.org/dan_langille.php\n\n\n",
"msg_date": "Wed, 23 Aug 2006 22:42:49 -0400",
"msg_from": "\"Dan Langille\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Forcing index usage without 'enable_hashjoin = FALSE' "
}
] |
[
{
"msg_contents": "Hi,\n\nWe are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,\nP4). \n\nWe understand that the maximum connections that can be set is 64 in\nPostgresql 7.1 version. \n\nThe performance is very slow and some time the database is not getting\nconnected from our application because of this. \n\nPlease advise us on how to increase the performance by setting any\nattributes in configuration files ?. \n\nFind enclosed the configuration file. \n\nThanks and regards,\nRavi\n\n\nTo post a message to the mailing list, send it to\n [email protected]\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]\nSent: Tuesday, August 22, 2006 5:32 PM\nTo: ravig3\nSubject: 7E88-5CD9-AD0E : CONFIRM from pgsql-performance (subscribe)\n\n\n__ \nThe following request\n\n \"subscribe pgsql-performance ravig3 <[email protected]>\"\n\nwas sent to \nby ravig3 <[email protected]>.\n\nTo accept or reject this request, please do one of the following:\n\n1. If you have web browsing capability, visit\n \n<http://mail.postgresql.org/mj/mj_confirm/domain=postgresql.org?t=7E88-5CD9-\nAD0E>\n and follow the instructions there.\n\n2. Reply to [email protected] \n with one of the following two commands in the body of the message:\n\n accept\n reject\n\n (The number 7E88-5CD9-AD0E must be in the Subject header)\n\n3. Reply to [email protected] \n with one of the following two commands in the body of the message:\n \n accept 7E88-5CD9-AD0E\n reject 7E88-5CD9-AD0E\n\nYour confirmation is required for the following reason(s):\n\n The subscribe_policy rule says that the \"subscribe\" command \n must be confirmed by the person affected by the command.\n \n\nIf you do not respond within 4 days, a reminder will be sent.\n\nIf you do not respond within 7 days, this token will expire,\nand the request will not be completed.\n\nIf you would like to communicate with a person, \nsend mail to [email protected].\nDISCLAIMER \nThe contents of this e-mail and any attachment(s) are confidential and intended for the \n\nnamed recipient(s) only. It shall not attach any liability on the originator or HCL or its \n\naffiliates. Any views or opinions presented in this email are solely those of the author and \n\nmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n\ndissemination, copying, disclosure, modification, distribution and / or publication of this \n\nmessage without the prior written consent of the author of this e-mail is strictly \n\nprohibited. If you have received this email in error please delete it and notify the sender \n\nimmediately. Before opening any mail and attachments please check them for viruses and \n\ndefect.",
"msg_date": "Tue, 22 Aug 2006 18:09:21 +0530",
"msg_from": "\"Ravindran G - TLS, Chennai.\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "Is there a reason you are not upgrading to PostgreSQL 8.1? it will run\nnatively on Windoze, and will give you much better performance. 7.1 is way\nout of date, and has a lot of bad issues in it.\n\nUpgrading will most likely fix this issue.\n\nChris\n\nOn 8/22/06, Ravindran G - TLS, Chennai. <[email protected]> wrote:\n>\n> Hi,\n>\n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,\n> P4).\n>\n> We understand that the maximum connections that can be set is 64 in\n> Postgresql 7.1 version.\n>\n> The performance is very slow and some time the database is not getting\n> connected from our application because of this.\n>\n> Please advise us on how to increase the performance by setting any\n> attributes in configuration files ?.\n>\n> Find enclosed the configuration file.\n>\n> Thanks and regards,\n> Ravi\n>\n>\n> To post a message to the mailing list, send it to\n> [email protected]\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 5:32 PM\n> To: ravig3\n> Subject: 7E88-5CD9-AD0E : CONFIRM from pgsql-performance (subscribe)\n>\n>\n> __\n> The following request\n>\n> \"subscribe pgsql-performance ravig3 <[email protected]>\"\n>\n> was sent to\n> by ravig3 <[email protected]>.\n>\n> To accept or reject this request, please do one of the following:\n>\n> 1. If you have web browsing capability, visit\n>\n> <\n> http://mail.postgresql.org/mj/mj_confirm/domain=postgresql.org?t=7E88-5CD9-\n> AD0E>\n> and follow the instructions there.\n>\n> 2. Reply to [email protected]\n> with one of the following two commands in the body of the message:\n>\n> accept\n> reject\n>\n> (The number 7E88-5CD9-AD0E must be in the Subject header)\n>\n> 3. Reply to [email protected]\n> with one of the following two commands in the body of the message:\n>\n> accept 7E88-5CD9-AD0E\n> reject 7E88-5CD9-AD0E\n>\n> Your confirmation is required for the following reason(s):\n>\n> The subscribe_policy rule says that the \"subscribe\" command\n> must be confirmed by the person affected by the command.\n>\n>\n> If you do not respond within 4 days, a reminder will be sent.\n>\n> If you do not respond within 7 days, this token will expire,\n> and the request will not be completed.\n>\n> If you would like to communicate with a person,\n> send mail to [email protected].\n> DISCLAIMER\n> The contents of this e-mail and any attachment(s) are confidential and\n> intended for the\n>\n> named recipient(s) only. It shall not attach any liability on the\n> originator or HCL or its\n>\n> affiliates. Any views or opinions presented in this email are solely those\n> of the author and\n>\n> may not necessarily reflect the opinions of HCL or its affiliates. Any\n> form of reproduction,\n>\n> dissemination, copying, disclosure, modification, distribution and / or\n> publication of this\n>\n> message without the prior written consent of the author of this e-mail is\n> strictly\n>\n> prohibited. If you have received this email in error please delete it and\n> notify the sender\n>\n> immediately. Before opening any mail and attachments please check them for\n> viruses and\n>\n> defect.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n>\n>\n>\n\nIs there a reason you are not upgrading to PostgreSQL 8.1? it will run natively on Windoze, and will give you much better performance. 
7.1 is way out of date, and has a lot of bad issues in it.Upgrading will most likely fix this issue.\nChrisOn 8/22/06, Ravindran G - TLS, Chennai. <[email protected]> wrote:\nHi,We are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,P4).We understand that the maximum connections that can be set is 64 inPostgresql 7.1 version.The performance is very slow and some time the database is not getting\nconnected from our application because of this.Please advise us on how to increase the performance by setting anyattributes in configuration files ?.Find enclosed the configuration file.Thanks and regards,\nRaviTo post a message to the mailing list, send it to [email protected] Message-----From: \[email protected][mailto:[email protected]]Sent: Tuesday, August 22, 2006 5:32 PMTo: ravig3Subject: 7E88-5CD9-AD0E : CONFIRM from pgsql-performance (subscribe)\n__The following request \"subscribe pgsql-performance ravig3 <[email protected]>\"was sent toby ravig3 <\[email protected]>.To accept or reject this request, please do one of the following:1. If you have web browsing capability, visit<\nhttp://mail.postgresql.org/mj/mj_confirm/domain=postgresql.org?t=7E88-5CD9-AD0E> and follow the instructions there.2. Reply to [email protected]\n with one of the following two commands in the body of the message: accept reject (The number 7E88-5CD9-AD0E must be in the Subject header)3. Reply to \[email protected] with one of the following two commands in the body of the message: accept 7E88-5CD9-AD0E reject 7E88-5CD9-AD0EYour confirmation is required for the following reason(s):\n The subscribe_policy rule says that the \"subscribe\" command must be confirmed by the person affected by the command.If you do not respond within 4 days, a reminder will be sent.\nIf you do not respond within 7 days, this token will expire,and the request will not be completed.If you would like to communicate with a person,send mail to \[email protected] contents of this e-mail and any attachment(s) are confidential and intended for thenamed recipient(s) only. It shall not attach any liability on the originator or HCL or its\naffiliates. Any views or opinions presented in this email are solely those of the author andmay not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction,dissemination, copying, disclosure, modification, distribution and / or publication of this\nmessage without the prior written consent of the author of this e-mail is strictlyprohibited. If you have received this email in error please delete it and notify the senderimmediately. Before opening any mail and attachments please check them for viruses and\ndefect.---------------------------(end of broadcast)---------------------------TIP 6: explain analyze is your friend",
"msg_date": "Tue, 22 Aug 2006 09:14:58 -0400",
"msg_from": "\"Chris Hoover\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
},
{
"msg_contents": "\"Ravindran G - TLS, Chennai.\" <[email protected]> writes:\n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,\n> P4).\n\nEgad :-(\n\nIf you are worried about performance, get off 7.1. Even if you are not\nworried about performance, get off 7.1. It *will* eat your data someday.\n\nA native Windows build of PG 8.1 will blow the doors off 7.1/cygwin as\nto both performance and reliability.\n\nI know little about Windows versions, but I suspect people will tell you\nthat a newer version of Windows would be a good idea too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2006 09:17:04 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue. "
},
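For anyone following this advice, a rough sketch of the usual dump-and-reload upgrade path; the exact invocations depend on the versions and authentication setup involved, so treat this as an outline rather than a recipe:

    # on the old 7.1 server, dump every database plus roles as the database superuser
    pg_dumpall > old_cluster.sql

    # on the freshly initialized 8.1 server, load the dump and then check the output for errors
    psql -f old_cluster.sql postgres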
{
"msg_contents": "Ravindran G - TLS, Chennai. wrote:\n> Hi,\n> \n> We are using PostgreSQL 7.1 cygwin installed on Windows 2000 (2 GB Memory,\n> P4). \n> \n\nI would strongly suggest moving to native 8.1 :). You will find your \nlife much better.\n\n\nJoshua D. Drake\n\n\n> We understand that the maximum connections that can be set is 64 in\n> Postgresql 7.1 version. \n> \n> The performance is very slow and some time the database is not getting\n> connected from our application because of this. \n> \n> Please advise us on how to increase the performance by setting any\n> attributes in configuration files ?. \n> \n> Find enclosed the configuration file. \n> \n> Thanks and regards,\n> Ravi\n> \n> \n> To post a message to the mailing list, send it to\n> [email protected]\n> \n> \n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]]\n> Sent: Tuesday, August 22, 2006 5:32 PM\n> To: ravig3\n> Subject: 7E88-5CD9-AD0E : CONFIRM from pgsql-performance (subscribe)\n> \n> \n> __ \n> The following request\n> \n> \"subscribe pgsql-performance ravig3 <[email protected]>\"\n> \n> was sent to \n> by ravig3 <[email protected]>.\n> \n> To accept or reject this request, please do one of the following:\n> \n> 1. If you have web browsing capability, visit\n> \n> <http://mail.postgresql.org/mj/mj_confirm/domain=postgresql.org?t=7E88-5CD9-\n> AD0E>\n> and follow the instructions there.\n> \n> 2. Reply to [email protected] \n> with one of the following two commands in the body of the message:\n> \n> accept\n> reject\n> \n> (The number 7E88-5CD9-AD0E must be in the Subject header)\n> \n> 3. Reply to [email protected] \n> with one of the following two commands in the body of the message:\n> \n> accept 7E88-5CD9-AD0E\n> reject 7E88-5CD9-AD0E\n> \n> Your confirmation is required for the following reason(s):\n> \n> The subscribe_policy rule says that the \"subscribe\" command \n> must be confirmed by the person affected by the command.\n> \n> \n> If you do not respond within 4 days, a reminder will be sent.\n> \n> If you do not respond within 7 days, this token will expire,\n> and the request will not be completed.\n> \n> If you would like to communicate with a person, \n> send mail to [email protected].\n> DISCLAIMER \n> The contents of this e-mail and any attachment(s) are confidential and intended for the \n> \n> named recipient(s) only. It shall not attach any liability on the originator or HCL or its \n> \n> affiliates. Any views or opinions presented in this email are solely those of the author and \n> \n> may not necessarily reflect the opinions of HCL or its affiliates. Any form of reproduction, \n> \n> dissemination, copying, disclosure, modification, distribution and / or publication of this \n> \n> message without the prior written consent of the author of this e-mail is strictly \n> \n> prohibited. If you have received this email in error please delete it and notify the sender \n> \n> immediately. Before opening any mail and attachments please check them for viruses and \n> \n> defect.\n> \n> \n> ------------------------------------------------------------------------\n> \n> #\n> # PostgreSQL configuration file\n> # -----------------------------\n> #\n> # This file consists of lines of the form\n> #\n> # name = value\n> #\n> # (The `=' is optional.) White space is collapsed, comments are\n> # introduced by `#' anywhere on a line. The complete list of option\n> # names and allowed values can be found in the PostgreSQL\n> # documentation. 
The commented-out settings shown in this file\n> # represent the default values.\n> \n> # Any option can also be given as a command line switch to the\n> # postmaster, e.g., 'postmaster -c log_connections=on'. Some options\n> # can be changed at run-time with the 'SET' SQL command.\n> \n> \n> #========================================================================\n> \n> \n> #\n> #\tConnection Parameters\n> #\n> tcpip_socket = true\n> #ssl = false\n> \n> max_connections = 64\n> \n> #port = 5432 \n> #hostname_lookup = false\n> #show_source_port = false\n> \n> #unix_socket_directory = ''\n> #unix_socket_group = ''\n> #unix_socket_permissions = 0777\n> \n> #virtual_host = ''\n> \n> #krb_server_keyfile = ''\n> \n> \n> #\n> #\tShared Memory Size\n> #\n> shared_buffers = 20000 # 2*max_connections, min 16\n> #max_fsm_relations = 100 # min 10, fsm is free space map\n> max_fsm_pages = 20000 # min 1000, fsm is free space map\n> #max_locks_per_transaction = 64 # min 10\n> #wal_buffers = 8 # min 4\n> \n> #\n> #\tNon-shared Memory Sizes\n> #\n> #sort_mem = 512 # min 32\n> #vacuum_mem = 8192 # min 1024\n> \n> \n> #\n> #\tWrite-ahead log (WAL)\n> #\n> #wal_files = 0 # range 0-64\n> wal_sync_method = open_sync # the default varies across platforms:\n> #\t\t\t # fsync, fdatasync, open_sync, or open_datasync\n> #wal_debug = 0 # range 0-16\n> #commit_delay = 0 # range 0-100000\n> #commit_siblings = 5 # range 1-1000\n> #checkpoint_segments = 3 # in logfile segments (16MB each), min 1\n> #checkpoint_timeout = 300 # in seconds, range 30-3600\n> #fsync = true\n> \n> \n> #\n> #\tOptimizer Parameters\n> #\n> #enable_seqscan = true\n> #enable_indexscan = true\n> #enable_tidscan = true\n> #enable_sort = true\n> #enable_nestloop = true\n> #enable_mergejoin = true\n> #enable_hashjoin = true\n> \n> #ksqo = false\n> \n> effective_cache_size = 5000 # default in 8k pages\n> #random_page_cost = 4\n> #cpu_tuple_cost = 0.01\n> #cpu_index_tuple_cost = 0.001\n> #cpu_operator_cost = 0.0025\n> \n> \n> #\n> #\tGEQO Optimizer Parameters\n> #\n> #geqo = true\n> #geqo_selection_bias = 2.0 # range 1.5-2.0\n> #geqo_threshold = 11\n> #geqo_pool_size = 0 # default based on #tables in query, range 128-1024\n> #geqo_effort = 1\n> #geqo_generations = 0\n> #geqo_random_seed = -1 # auto-compute seed\n> \n> \n> #\n> #\tDebug display\n> #\n> #silent_mode = false\n> \n> log_connections = true\n> log_timestamp = true\n> #log_pid = false\n> \n> #debug_level = 0 # range 0-16\n> \n> debug_print_query = true\n> #debug_print_parse = false\n> #debug_print_rewritten = false\n> #debug_print_plan = false\n> #debug_pretty_print = false\n> \n> # requires USE_ASSERT_CHECKING\n> #debug_assertions = true\n> \n> \n> #\n> #\tSyslog\n> #\n> # requires ENABLE_SYSLOG\n> #syslog = 0 # range 0-2\n> #syslog_facility = 'LOCAL0'\n> #syslog_ident = 'postgres'\n> \n> \n> #\n> #\tStatistics\n> #\n> #show_parser_stats = false\n> #show_planner_stats = false\n> #show_executor_stats = false\n> #show_query_stats = false\n> \n> # requires BTREE_BUILD_STATS\n> #show_btree_build_stats = false\n> \n> \n> #\n> #\tAccess statistics collection\n> #\n> #stats_start_collector = true\n> #stats_reset_on_server_start = true\n> #stats_command_string = false\n> #stats_row_level = false\n> #stats_block_level = false\n> \n> \n> #\n> #\tLock Tracing\n> #\n> #trace_notify = false\n> \n> # requires LOCK_DEBUG\n> #trace_locks = false\n> #trace_userlocks = false\n> #trace_lwlocks = false\n> #debug_deadlocks = false\n> #trace_lock_oidmin = 16384\n> #trace_lock_table = 0\n> \n> \n> #\n> #\tMisc\n> 
#\n> #dynamic_library_path = '$libdir'\n> #australian_timezones = false\n> #authentication_timeout = 60 # min 1, max 600\n> #deadlock_timeout = 1000\n> #default_transaction_isolation = 'read committed'\n> #max_expr_depth = 10000 # min 10\n> #max_files_per_process = 1000 # min 25\n> #password_encryption = false\n> #sql_inheritance = true\n> #transform_null_equals = false\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n\n-- \n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240\n Providing the most comprehensive PostgreSQL solutions since 1997\n http://www.commandprompt.com/\n\n\n",
"msg_date": "Mon, 28 Aug 2006 08:09:19 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgre SQL 7.1 cygwin performance issue."
}
] |
[
{
"msg_contents": "Hello all,\nhad an idea of optimizing a query that may work generally.\n\nIn case a 'column' is indexed, following two alterations could be done\nI think:\n\nA)\n\n select ... where column ~ '^Foo' --> Seq Scan\n\ninto that:\n\n select ... where column BETWEEN 'Foo' AND 'FooZ' --> Index Scan\n\nof course 'Z' should be the last possible character internally used of the \nDBMS.\n\nThat would work as long as there is no in-case-sensitive search being done.\n\n\nanother rescribtion:\n\nB)\n\n select ... where column ~ '^Foo$' --> Seq Scan\n\ninto that:\n\n select ... where column = 'Foo' --> Bitmap Heap Scan\n\nThat speeds up things, too.\n\n\n\nThat would also apply to 'LIKE' and 'SIMILAR TO' operations, I think.\n\nIs there any idea to make the \"Query Planner\" more intelligent to do these \nconvertions automatically?\n\nAnythings speeks against this hack?\n\nRegards\n Uli Habel\n",
"msg_date": "Tue, 22 Aug 2006 19:22:59 +0200",
"msg_from": "Ulrich Habel <[email protected]>",
"msg_from_op": true,
"msg_subject": "query planner: automatic rescribe of LIKE to BETWEEN ?"
},
{
"msg_contents": "Ulrich Habel wrote:\n> Hello all,\n> had an idea of optimizing a query that may work generally.\n> \n> In case a 'column' is indexed, following two alterations could be done\n> I think:\n> \n> A)\n> \n> select ... where column ~ '^Foo' --> Seq Scan\n\nThis is not true. You can make this query use an index if you create it\nwith opclass varchar_pattern_ops or text_pattern_ops, as appropiate.\n\nThus you don't need any hack here.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 22 Aug 2006 13:45:39 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner: automatic rescribe of LIKE to BETWEEN ?"
},
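A minimal sketch of the opclass approach Alvaro describes; the table and column names are made up for illustration:

    -- an index built with text_pattern_ops can serve anchored pattern matches
    -- even when the database uses a non-C locale
    CREATE INDEX items_name_pattern_idx ON items (name text_pattern_ops);

    -- left-anchored patterns can now use the index
    SELECT * FROM items WHERE name LIKE 'Foo%';
    SELECT * FROM items WHERE name ~ '^Foo';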
{
"msg_contents": "On 8/22/06, Alvaro Herrera <[email protected]> wrote:\n> Ulrich Habel wrote:\n> > Hello all,\n> > had an idea of optimizing a query that may work generally.\n> >\n> > In case a 'column' is indexed, following two alterations could be done\n> > I think:\n> >\n> > A)\n> >\n> > select ... where column ~ '^Foo' --> Seq Scan\n>\n> This is not true. You can make this query use an index if you create it\n> with opclass varchar_pattern_ops or text_pattern_ops, as appropiate.\n>\n> Thus you don't need any hack here.\n>\n\nAnd in the case of more general expression, like:\n select ... where column ~ 'something';\n\nIs there a way to optimise this ? (in the case where 'something' is not\na word, but a part of a word)\n\n-- \nThomas SAMSON\n",
"msg_date": "Tue, 22 Aug 2006 20:21:23 +0200",
"msg_from": "\"Thomas Samson\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner: automatic rescribe of LIKE to BETWEEN ?"
},
{
"msg_contents": "Thomas Samson wrote:\n> On 8/22/06, Alvaro Herrera <[email protected]> wrote:\n> >Ulrich Habel wrote:\n> >> Hello all,\n> >> had an idea of optimizing a query that may work generally.\n> >>\n> >> In case a 'column' is indexed, following two alterations could be done\n> >> I think:\n> >>\n> >> A)\n> >>\n> >> select ... where column ~ '^Foo' --> Seq Scan\n> >\n> >This is not true. You can make this query use an index if you create it\n> >with opclass varchar_pattern_ops or text_pattern_ops, as appropiate.\n> >\n> >Thus you don't need any hack here.\n> >\n> \n> And in the case of more general expression, like:\n> select ... where column ~ 'something';\n> \n> Is there a way to optimise this ? (in the case where 'something' is not\n> a word, but a part of a word)\n\nNot sure. I'd try tsearch2 or pg_trgm (or pg_tgrm, whatever it's\ncalled). It's trigram indexing.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n",
"msg_date": "Tue, 22 Aug 2006 14:49:57 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner: automatic rescribe of LIKE to BETWEEN ?"
},
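A rough sketch of the trigram approach on a reasonably recent PostgreSQL; on servers of this era pg_trgm was a contrib module loaded from an SQL script and trigram index support for LIKE arrived in later releases, so the commands below assume a modern installation and made-up table names:

    CREATE EXTENSION pg_trgm;   -- trigram similarity support from contrib

    -- a trigram index can help unanchored substring searches
    CREATE INDEX items_name_trgm_idx ON items USING gin (name gin_trgm_ops);

    SELECT * FROM items WHERE name LIKE '%something%';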
{
"msg_contents": "Ulrich Habel <[email protected]> writes:\n> Anythings speeks against this hack?\n\nOnly that it was done years ago.\n\nAs Alvaro mentions, if you are using a non-C locale then you need\nnon-default index opclasses to get it to work. Non-C locales usually\nhave index sort orders that don't play nice with this conversion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 22 Aug 2006 17:12:33 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query planner: automatic rescribe of LIKE to BETWEEN ? "
}
] |
[
{
"msg_contents": "Hello,\n\nwe're looking into the reason why we are getting warnings about \ntransaction ID wraparound despite a daily \"vaccumdb -qaz\". Someone is \nclaiming that VACUUM without FULL cannot reassign XIDs properly when \nmax_fsm_pages was set too low (it says so here too, but this is rather \nold: http://www.varlena.com/GeneralBits/Tidbits/perf.html#maxfsmp). Is \nthis true, or do we have a different issue here? We're using 8.1.3 with \na database generated on 8.1.3 (i.e. not migrated from 7.x or anything \nlike that).\n\nThanks,\n Marinos\n",
"msg_date": "Tue, 22 Aug 2006 20:10:49 +0200",
"msg_from": "Marinos Yannikos <[email protected]>",
"msg_from_op": true,
"msg_subject": "VACUUM FULL needed sometimes to prevent transaction ID wraparound?"
},
{
"msg_contents": "Marinos Yannikos wrote:\n> Hello,\n> \n> we're looking into the reason why we are getting warnings about \n> transaction ID wraparound despite a daily \"vaccumdb -qaz\". Someone is \n> claiming that VACUUM without FULL cannot reassign XIDs properly when \n> max_fsm_pages was set too low (it says so here too, but this is rather \n> old: http://www.varlena.com/GeneralBits/Tidbits/perf.html#maxfsmp). Is \n> this true, or do we have a different issue here? We're using 8.1.3 with \n> a database generated on 8.1.3 (i.e. not migrated from 7.x or anything \n> like that).\n\nIt's not true. Having shortage of FSM entries will make you lose space,\nbut it will be able to recycle Xids anyway.\n\nI guess your problem is that you're not vacuuming all databases for some\nreason. I'd advise to lose the -q and make sure you're not redirecting\nto somewhere you can't read the log; the read the log and make sure\neverything is going fine.\n\nWhat's the warning anyway? Does it say that wraparound point is\nnearing, or does it merely say that it is on Xid <some number here> and\nyou don't know how far that number actually is?\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n",
"msg_date": "Tue, 22 Aug 2006 15:19:53 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL needed sometimes to prevent transaction ID\n wraparound?"
},
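To see how close each database actually is to wraparound, a simple check along these lines works on recent releases (the catalog details have shifted slightly across versions):

    -- age of the oldest unfrozen transaction ID in each database, worst first
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY xid_age DESC;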
{
"msg_contents": "I would guess that you are not running vacuumdb as a user with permission to\nvacuum the postgres or template1 databases. Try telling vacuumdb to log in\nas postgres or whatever your superuser account is called.\n\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> Marinos Yannikos\n> Sent: Tuesday, August 22, 2006 1:11 PM\n> To: [email protected]\n> Subject: [PERFORM] VACUUM FULL needed sometimes to prevent \n> transaction ID wraparound?\n> \n> \n> Hello,\n> \n> we're looking into the reason why we are getting warnings about \n> transaction ID wraparound despite a daily \"vaccumdb -qaz\". Someone is \n> claiming that VACUUM without FULL cannot reassign XIDs properly when \n> max_fsm_pages was set too low (it says so here too, but this \n> is rather \n> old: \n> http://www.varlena.com/GeneralBits/Tidbits/perf.html#maxfsmp). Is \n> this true, or do we have a different issue here? We're using \n> 8.1.3 with \n> a database generated on 8.1.3 (i.e. not migrated from 7.x or anything \n> like that).\n> \n> Thanks,\n> Marinos\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n",
"msg_date": "Tue, 22 Aug 2006 14:28:15 -0500",
"msg_from": "\"Dave Dutcher\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL needed sometimes to prevent transaction ID\n wraparound?"
},
{
"msg_contents": "On Tue, 2006-08-22 at 20:10 +0200, Marinos Yannikos wrote:\n> Hello,\n> \n> we're looking into the reason why we are getting warnings about \n> transaction ID wraparound despite a daily \"vaccumdb -qaz\". Someone is \n> claiming that VACUUM without FULL cannot reassign XIDs properly when \n> max_fsm_pages was set too low (it says so here too, but this is rather \n> old: http://www.varlena.com/GeneralBits/Tidbits/perf.html#maxfsmp). Is \n> this true, or do we have a different issue here? We're using 8.1.3 with \n> a database generated on 8.1.3 (i.e. not migrated from 7.x or anything \n> like that).\n\nUsually this is caused by either:\n(1) You're not vacuuming as a superuser, so it's not able to vacuum\neverything.\n(2) You have a long-running transaction that never completed for some\nstrange reason.\n\nHope this helps,\n\tJeff Davis\n\n",
"msg_date": "Tue, 22 Aug 2006 15:00:25 -0700",
"msg_from": "Jeff Davis <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL needed sometimes to prevent transaction"
},
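A quick way to look for the long-running transactions Jeff mentions, sketched against a modern pg_stat_activity; the column names differ on 8.1-era servers, which only exposed the current query and its start time:

    -- open transactions, oldest first
    SELECT pid, usename, xact_start, state, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;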
{
"msg_contents": "Hi, Jeff & all,\n\nJeff Davis wrote:\n\n\n> (2) You have a long-running transaction that never completed for some\n> strange reason.\n\nI just asked myself whether a 2-phase-commit transaction that was\nprepared, but never committed, can block vacuuming and TID recycling.\n\nMarkus\n\n-- \nMarkus Schaber | Logical Tracking&Tracing International AG\nDipl. Inf. | Software Development GIS\n\nFight against software patents in EU! www.ffii.org www.nosoftwarepatents.org\n",
"msg_date": "Wed, 23 Aug 2006 14:50:54 +0200",
"msg_from": "Markus Schaber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL needed sometimes to prevent transaction"
},
{
"msg_contents": "Markus Schaber wrote:\n> I just asked myself whether a 2-phase-commit transaction that was\n> prepared, but never committed, can block vacuuming and TID recycling.\n> \n\nYes.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n\n",
"msg_date": "Wed, 23 Aug 2006 14:24:05 +0100",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: VACUUM FULL needed sometimes to prevent transaction"
}
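Forgotten prepared transactions show up in pg_prepared_xacts and can be resolved explicitly once identified; the transaction identifier below is a placeholder:

    -- any rows here are prepared transactions still holding back vacuum
    SELECT gid, prepared, owner, database FROM pg_prepared_xacts;

    -- resolve a stale one by hand, e.g.
    ROLLBACK PREPARED 'some_forgotten_gid';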
] |
[
{
"msg_contents": "Hi all,\n\nI'm really glad to see all the test results people are posting here. In \nfact, I used info from the archives to put together our first \"big\" \ndatabase host:\n\n-Tyan dual-core/dual-cpu mainboard (\n-One Opteron 270 2.0GHz (although our vendor gave us two for some reason)\n-Chenbro 3U case (RM31212B) - OK, but not very well thought-out\n-8 Seagate SATA drives (yes, we stuck with our vendor of choice, WD \nRaptors may have been a better choice)\n-3Ware 9550SX-12MI\n-2GB RAM (we'll get more when we need it)\n\nSo this thing is sitting next to my desk and I'd like to see just how this \ncompares to other hardware. We already know that it will blow away our \nnormal dual-xeon 1Us with just two U320 drives on Adaptec 2120s ZCR cards. \nWe also know that for what this box will be doing (mailing list archives \nwith msgs stored in Postgres) it's going to be more than enough for the \nnext few years...\n\nSo what are people using to get a general feel for the bang/buck ratio? \nI've toyed with Bonnie, IOZone and simple \"dd\" writes. I'd like to go a \nlittle further and actually hit Postgres to see how the entire system \nperforms. My reasons are, in no particular order:\n\n-to learn something (general and pgsql tuning)\n-to help guide future database server builds\n-to take the benchmark data and share it somewhere\n\nThe first one is obvious. Matching software to hardware is really hard \nand there aren't too many people that can do it well.\n\nThe second is a pretty big deal - we've been doing all 1U builds and \ncurrently spread our load amongst individual db servers that also do the \nweb front end for mailing list management. This has worked OK, but we may \nwant to peel off the db section and start moving towards two large boxes \nlike this with one replicating the other as a backup.\n\nThat last one is a stickler. I've seen so much data posted on this list, \nis there any project in the works to collect this? It seems like some \nRAID hardware just totally sucks (cough *Adaptec* cough). Having a site \nthat listed results for the more common benchmarks and sorting it out by \nhardware would help reduce the number of people that get burned by buying \noverpriced/underperforming RAID controllers/SANs.\n\nAny thoughts on all this?\n\nI'll be throwing in some quick stats on the box described above later \ntoday... At first glance, the 3Ware controller is really looking like an \nexcellent value.\n\nThanks,\n\nCharles\n",
"msg_date": "Tue, 22 Aug 2006 16:59:38 -0400 (EDT)",
"msg_from": "Charles Sprickman <[email protected]>",
"msg_from_op": true,
"msg_subject": "Benchmarks"
},
{
"msg_contents": "> -Tyan dual-core/dual-cpu mainboard (\n> -One Opteron 270 2.0GHz (although our vendor gave us two for some reason)\n> -Chenbro 3U case (RM31212B) - OK, but not very well thought-out\n> -8 Seagate SATA drives (yes, we stuck with our vendor of choice, WD\n> Raptors may have been a better choice)\n> -3Ware 9550SX-12MI\n> -2GB RAM (we'll get more when we need it)\n\nyes, you should have bought raptors :)\n\n> So what are people using to get a general feel for the bang/buck ratio?\n> I've toyed with Bonnie, IOZone and simple \"dd\" writes. I'd like to go a\n> little further and actually hit Postgres to see how the entire system\n> performs. My reasons are, in no particular order:\n\nalso pgbench.\n\n> The second is a pretty big deal - we've been doing all 1U builds and\n> currently spread our load amongst individual db servers that also do the\n> web front end for mailing list management. This has worked OK, but we may\n> want to peel off the db section and start moving towards two large boxes\n> like this with one replicating the other as a backup.\n\nimo, this is a smart move.\n\n> That last one is a stickler. I've seen so much data posted on this list,\n> is there any project in the works to collect this? It seems like some\n> RAID hardware just totally sucks (cough *Adaptec* cough). Having a site\n> that listed results for the more common benchmarks and sorting it out by\n> hardware would help reduce the number of people that get burned by buying\n> overpriced/underperforming RAID controllers/SANs.\n\njust post to this list :) hardware moves quick so published\ninformation quickly loses its value. one warning, many people focus\novermuch on sequential i/o, watch out for that.\n\n> I'll be throwing in some quick stats on the box described above later\n> today... At first glance, the 3Ware controller is really looking like an\n> excellent value.\n\nthey are pretty decent. the benchmark is software raid which actually\noutperforms many hardware controllers. adaptec is complete trash,\nthey even dropped support of their command line utilty for the\ncontroller on linux, ugh. ibm serveraid controllers are rebranded\nadaptect btw.\n\nmerlin\n",
"msg_date": "Sun, 27 Aug 2006 22:26:12 -0400",
"msg_from": "\"Merlin Moncure\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarks"
},
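A minimal pgbench run along the lines Merlin suggests; the scale factor and client count are arbitrary examples, and newer releases also accept -T for a time-based run:

    createdb pgbench_test
    pgbench -i -s 100 pgbench_test       # populate the standard test tables at scale factor 100
    pgbench -c 10 -t 1000 pgbench_test   # 10 concurrent clients, 1000 transactions each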
{
"msg_contents": "On Sun, 2006-08-27 at 21:26, Merlin Moncure wrote:\n\n> > I'll be throwing in some quick stats on the box described above later\n> > today... At first glance, the 3Ware controller is really looking like an\n> > excellent value.\n> \n> they are pretty decent. the benchmark is software raid which actually\n> outperforms many hardware controllers. adaptec is complete trash,\n> they even dropped support of their command line utilty for the\n> controller on linux, ugh. ibm serveraid controllers are rebranded\n> adaptect btw.\n\nJust a followup on this. Last place I worked we had a bunch of Dell\n2600 boxen with Adaptec RAID controllers. Due to the horribly\nunreliable behaviour of these machines with those controllers (random\nlockups etc...) we switched off the RAID and went to software mirror\nsets under linux. The machines because very stable, and as an added\nbonus, the performance was higher as well.\n",
"msg_date": "Mon, 28 Aug 2006 11:02:59 -0500",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Benchmarks"
}
] |