[ { "msg_contents": "Hi\nI have a select statement that runs on a partition having say couple\nmillion rows.\nThe tabel has indexes on two colums. However the query uses the\nnon-indexed colums too in its where clause.\nFor example:\nSELECT lane_id,measurement_start,\nmeasurement_end,speed,volume,occupancy,quality,effective_date\n FROM tss.lane_data_06_08\n WHERE lane_id in(select lane_id from lane_info where inactive is null )\n AND date_part('hour', measurement_start) between 5 and 23\n AND date_part('day',measurement_start)=30\nGROUP BY lane_id,measurement_start,measurement_end,speed,volume,occupancy,quality,effective_date\nORDER BY lane_id, measurement_start\n\nout of this only lane_id and mesaurement_start are indexed. This query\nwill return around 10,000 rows. But it seems to be taking a long time\nto execute which doesnt make sense for a select statement. It doesnt\nmake any sense to create index for every field we are gonna use in tne\nwhere clause.\nIsnt there any way we can improve the performance?\n\n\nSamantha\n", "msg_date": "Tue, 1 Jul 2008 15:29:26 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Select running slow on Postgres" }, { "msg_contents": "On Tue, Jul 1, 2008 at 1:29 PM, samantha mahindrakar\n<[email protected]> wrote:\n> Hi\n> I have a select statement that runs on a partition having say couple\n> million rows.\n> The tabel has indexes on two colums. However the query uses the\n> non-indexed colums too in its where clause.\n> For example:\n> SELECT lane_id,measurement_start,\n> measurement_end,speed,volume,occupancy,quality,effective_date\n> FROM tss.lane_data_06_08\n> WHERE lane_id in(select lane_id from lane_info where inactive is null )\n> AND date_part('hour', measurement_start) between 5 and 23\n> AND date_part('day',measurement_start)=30\n> GROUP BY lane_id,measurement_start,measurement_end,speed,volume,occupancy,quality,effective_date\n> ORDER BY lane_id, measurement_start\n>\n> out of this only lane_id and mesaurement_start are indexed. This query\n> will return around 10,000 rows. But it seems to be taking a long time\n> to execute which doesnt make sense for a select statement. It doesnt\n> make any sense to create index for every field we are gonna use in tne\n> where clause.\n> Isnt there any way we can improve the performance?\n\nI'm guessing that adding an index for either\ndate_part('hour',measurement_start) or\ndate_part('day',measurement_start) or both would help.\n\nWhat does explain analyze select ... 
(rest of query here) say?\n", "msg_date": "Tue, 1 Jul 2008 13:47:26 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select running slow on Postgres" }, { "msg_contents": "I ran the explain analyze.Here is what i got:\n\n\n\"Group (cost=112266.37..112266.40 rows=1 width=56) (actual\ntime=5583.399..5615.476 rows=13373 loops=1)\"\n\" -> Sort (cost=112266.37..112266.38 rows=1 width=56) (actual\ntime=5583.382..5590.890 rows=13373 loops=1)\"\n\" Sort Key: lane_data_07_08.lane_id,\nlane_data_07_08.measurement_start, lane_data_07_08.measurement_end,\nlane_data_07_08.speed, lane_data_07_08.volume, lane_data_07_08.occupancy,\nlane_data_07_08.quality, lane_data_07_08.effective_date\"\n\" -> Nested Loop IN Join (cost=0.00..112266.36 rows=1 width=56)\n(actual time=1100.307..5547.768 rows=13373 loops=1)\"\n\" -> Seq Scan on lane_data_07_08 (cost=0.00..112241.52 rows=3\nwidth=56) (actual time=1087.666..5341.662 rows=20581 loops=1)\"\n\" Filter: (((volume = 255::double precision) OR (speed =\n255::double precision) OR (occupancy = 255::double precision) OR (occupancy\n>= 100::double precision) OR (volume > 52::double precision) OR (volume <\n0::double precision) OR (speed > 120::double precision) OR (speed <\n0::double precision)) AND (date_part('hour'::text, measurement_start) >=\n5::double precision) AND (date_part('hour'::text, measurement_start) <=\n23::double precision) AND (date_part('day'::text, measurement_start) =\n1::double precision))\"\n\" -> Index Scan using lane_info_pk on\nlane_info (cost=0.00..8.27 rows=1 width=4) (actual time=0.007..0.007 rows=1\nloops=20581)\"\n\" Index Cond: (lane_data_07_08.lane_id =\nlane_info.lane_id)\"\n\" Filter: (inactive IS NULL)\"\n\"Total runtime: 5621.409 ms\"\n\n\nWell instaed of creating extra indexes (since they eat up lot of space) i\nmade use of the whole measurement_start field, so thet it uses the index\nproeprty and makes the search faster.\nSo i changed the query to include the measuerment start as follows:\n\nSELECT lane_id,measurement_start,\nmeasurement_end,speed,volume,occupancy,quality,effective_date\nFROM tss.lane_data_06_08\nWHERE lane_id in(select lane_id from lane_info where inactive is null )\n*AND measurement_start between '2008-06-30 05:00:00-04' AND '2008-06-30\n23:00:00-04'*\nGROUP BY\nlane_id,measurement_start,measurement_end,speed,volume,occupancy,quality,effective_date\nORDER BY lane_id, measurement_start\n\n\nSamantha\n\nOn 7/1/08, Scott Marlowe <[email protected]> wrote:\n> On Tue, Jul 1, 2008 at 1:29 PM, samantha mahindrakar\n> <[email protected]> wrote:\n> > Hi\n> > I have a select statement that runs on a partition having say couple\n> > million rows.\n> > The tabel has indexes on two colums. However the query uses the\n> > non-indexed colums too in its where clause.\n> > For example:\n> > SELECT lane_id,measurement_start,\n> > measurement_end,speed,volume,occupancy,quality,effective_date\n> > FROM tss.lane_data_06_08\n> > WHERE lane_id in(select lane_id from lane_info where inactive is null\n)\n> > AND date_part('hour', measurement_start) between 5 and 23\n> > AND date_part('day',measurement_start)=30\n> > GROUP BY\nlane_id,measurement_start,measurement_end,speed,volume,occupancy,quality,effective_date\n> > ORDER BY lane_id, measurement_start\n> >\n> > out of this only lane_id and mesaurement_start are indexed. This query\n> > will return around 10,000 rows. But it seems to be taking a long time\n> > to execute which doesnt make sense for a select statement. 
It doesnt\n> > make any sense to create index for every field we are gonna use in tne\n> > where clause.\n> > Isnt there any way we can improve the performance?\n>\n
> I'm guessing that adding an index for either\n> date_part('hour',measurement_start) or\n> date_part('day',measurement_start) or both would help.\n>\n
> What does explain analyze select ... (rest of query here) say?\n>\n\n", "msg_date": "Wed, 2 Jul 2008 15:01:41 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Select running slow on Postgres" }, { "msg_contents": "On Wed, Jul 2, 2008 at 1:01 PM, samantha mahindrakar\n<[email protected]> wrote:\n> I ran the explain analyze.Here is what i got:\n>\n>\n
> \"Group (cost=112266.37..112266.40 rows=1 width=56) (actual\n> time=5583.399..5615.476 rows=13373 loops=1)\"\n
> \" -> Sort (cost=112266.37..112266.38 rows=1 width=56) (actual\n> time=5583.382..5590.890 rows=13373 loops=1)\"\n
> \" Sort Key: lane_data_07_08.lane_id,\n> lane_data_07_08.measurement_start, lane_data_07_08.measurement_end,\n> lane_data_07_08.speed, lane_data_07_08.volume, lane_data_07_08.occupancy,\n> lane_data_07_08.quality, lane_data_07_08.effective_date\"\n
> \" -> Nested Loop IN Join (cost=0.00..112266.36 rows=1 width=56)\n> (actual time=1100.307..5547.768 rows=13373 loops=1)\"\n
> \" -> Seq Scan on lane_data_07_08 (cost=0.00..112241.52 rows=3\n> width=56) (actual time=1087.666..5341.662 rows=20581 loops=1)\"\n\nYou can see here that the seq scan on lane_data is what's eating up\nall your time.
Also, since the row estimate is WAY off, it then chose\na nested loop thinking it would be joining up only 1 row and actually\nrunning across 20k rows.\n\n> \" Filter: (((volume = 255::double precision) OR (speed =\n> 255::double precision) OR (occupancy = 255::double precision) OR (occupancy\n>>= 100::double precision) OR (volume > 52::double precision) OR (volume <\n> 0::double precision) OR (speed > 120::double precision) OR (speed <\n> 0::double precision)) AND (date_part('hour'::text, measurement_start) >=\n> 5::double precision) AND (date_part('hour'::text, measurement_start) <=\n> 23::double precision) AND (date_part('day'::text, measurement_start) =\n> 1::double precision))\"\n> \" -> Index Scan using lane_info_pk on\n> lane_info (cost=0.00..8.27 rows=1 width=4) (actual time=0.007..0.007 rows=1\n> loops=20581)\"\n> \" Index Cond: (lane_data_07_08.lane_id =\n> lane_info.lane_id)\"\n> \" Filter: (inactive IS NULL)\"\n> \"Total runtime: 5621.409 ms\"\n>\n>\n> Well instaed of creating extra indexes (since they eat up lot of space) i\n> made use of the whole measurement_start field, so thet it uses the index\n> proeprty and makes the search faster.\n> So i changed the query to include the measuerment start as follows:\n>\n> SELECT lane_id,measurement_start,\n> measurement_end,speed,volume,occupancy,quality,effective_date\n> FROM tss.lane_data_06_08\n> WHERE lane_id in(select lane_id from lane_info where inactive is null )\n> AND measurement_start between '2008-06-30 05:00:00-04' AND '2008-06-30\n> 23:00:00-04'\n> GROUP BY\n> lane_id,measurement_start,measurement_end,speed,volume,occupancy,quality,effective_date\n> ORDER BY lane_id, measurement_start\n\nYeah, anytime you can just compare date / timestamp on an indexed\nfield you'll do better. If you find yourself needing to use the other\nsyntax, so you can, for instance, grab the data for 5 days in a row\nfrom 5am to 11am or something, then the method I mentioned of making\nindexes on date_part are a good choice. Note that you need regular\ntimestamp, not timstamptz to create indexes.\n", "msg_date": "Wed, 2 Jul 2008 13:07:45 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Select running slow on Postgres" } ]
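The two remedies discussed in the thread above can be sketched in SQL as follows. This is illustrative only: the table and column names are taken from the posted query, the index name is made up, and the expression index assumes measurement_start is timestamp without time zone (date_part() on timestamptz is not immutable and so cannot be indexed directly, which is what the note about needing regular timestamp refers to).

    -- Option 1: expression index matching the date_part() predicates
    CREATE INDEX lane_data_06_08_day_hour_idx
        ON tss.lane_data_06_08 (date_part('day', measurement_start),
                                date_part('hour', measurement_start));

    -- Option 2 (the approach the poster settled on): filter on a plain
    -- timestamp range so an ordinary index on measurement_start can be used
    SELECT lane_id, measurement_start, measurement_end, speed, volume,
           occupancy, quality, effective_date
      FROM tss.lane_data_06_08
     WHERE lane_id IN (SELECT lane_id FROM lane_info WHERE inactive IS NULL)
       AND measurement_start BETWEEN '2008-06-30 05:00:00' AND '2008-06-30 23:00:00'
     ORDER BY lane_id, measurement_start;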
[ { "msg_contents": "I recently got my hands on a device called ioDrive from a company\ncalled Fusion-io. The ioDrive is essentially 80GB of flash on a PCI\ncard. It has its own driver for Linux completely outside of the\nnormal scsi/sata/sas/fc block device stack, but from the user's\nperspective it behaves like a block device. I put the ioDrive in an\nordinary PC with 1GB of memory, a single 2.2GHz AMD CPU, and an\nexisting Areca RAID with 6 SATA disks and a 128MB cache. I tested the\ndevice with PostgreSQL 8.3.3 on Centos 5.3 x86_64 (Linux 2.6.18).\n\nThe pgbench database was initialized with scale factor 100. Test runs\nwere performed with 8 parallel connections (-c 8), both read-only (-S)\nand read-write. PostgreSQL itself was configured with 256MB of shared\nbuffers and 32 checkpoint segments. Otherwise the configuration was\nall defaults.\n\nIn the following table, the \"RAID\" configuration has the xlogs on a\nRAID 0 of 2 10krpm disks with ext2, and the heap is on a RAID 0 of 4\n7200rpm disks with ext3. The \"Fusion\" configuration has everything on\nthe ioDrive with xfs. I tried the ioDrive with ext2 and ext3 but it\ndidn't seem to make any difference.\n\n Service Time Percentile, millis\n R/W TPS R-O TPS 50th 80th 90th 95th\nRAID 182 673 18 32 42 64\nFusion 971 4792 8 9 10 11\n\nBasically the ioDrive is smoking the RAID. The only real problem with\nthis benchmark is that the machine became CPU-limited rather quickly.\nDuring the runs with the ioDrive, iowait was pretty well zero, with\nuser CPU being about 75% and system getting about 20%.\n\nNow, I will say a couple of other things. The Linux driver for this\npiece of hardware is pretty dodgy. Sub-alpha quality actually. But\nthey seem to be working on it. Also there's no driver for\nOpenSolaris, Mac OS X, or Windows right now. In fact there's not even\nanything available for Debian or other respectable Linux distros, only\nRed Hat and its clones. The other problem is the 80GB model is too\nsmall to hold my entire DB, Although it could be used as a tablespace\nfor some critical tables. But hey, it's fast.\n\nI'm going to put this board into my 8-way Xeon to see if it goes any\nfaster with more CPU available.\n\nI'd be interested in hearing experiences with other flash storage\ndevices, SSDs, and that type of thing. So far, this is the fastest\nhardware I've seen for the price.\n\n-jwb\n", "msg_date": "Tue, 1 Jul 2008 17:18:53 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fusion-io ioDrive" }, { "msg_contents": "On 02/07/2008, Jeffrey Baker <[email protected]> wrote:\n\n> Red Hat and its clones. The other problem is the 80GB model is too\n> small to hold my entire DB, Although it could be used as a tablespace\n> for some critical tables. But hey, it's fast.\nAnd when/if it dies, please give us a rough guestimate of its\nlife-span in terms of read/write cycles. Sounds exciting, though!\n\n\nCheers,\nAndrej\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Wed, 2 Jul 2008 13:17:35 +1200", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Tue, Jul 1, 2008 at 6:17 PM, Andrej Ricnik-Bay\n<[email protected]> wrote:\n> On 02/07/2008, Jeffrey Baker <[email protected]> wrote:\n>\n>> Red Hat and its clones. 
The other problem is the 80GB model is too\n>> small to hold my entire DB, Although it could be used as a tablespace\n>> for some critical tables. But hey, it's fast.\n> And when/if it dies, please give us a rough guestimate of its\n> life-span in terms of read/write cycles. Sounds exciting, though!\n\nYeah. The manufacturer rates it for 5 years in constant use. I\nremain skeptical.\n", "msg_date": "Tue, 1 Jul 2008 18:36:02 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Tue, 1 Jul 2008, Jeffrey Baker wrote:\n\n> The only real problem with this benchmark is that the machine became \n> CPU-limited rather quickly. During the runs with the ioDrive, iowait was \n> pretty well zero, with user CPU being about 75% and system getting about \n> 20%.\n\nYou might try reducing the number of clients; with a single CPU like yours \nI'd expect peak throughput here would be at closer to 4 clients rather \nthan 8, and possibly as low as 2. What I normally do is run a quick scan \nof a few client loads before running a long test to figure out where the \ngeneral area of peak throughput is. For your 8-way box, it will be closer \nto 32 clients.\n\nWell done test though. When you try again with the faster system, the \nonly other postgresql.conf parameter I'd suggest bumping up is \nwal_buffers; that can limit best pgbench scores a bit and it only needs a \nMB or so to make that go away.\n\nIt's also worth nothing that the gap between the two types of storage will \ngo up up if you increase scale further; scale=100 is only making a 1.5GB \nor so database. If you collected a second data point at a scale of 500 \nI'd expect the standard disk results would halve by then, but I don't know \nwhat the fusion device would do and I'm kind of curious. You may need to \nincrease this regardless because the bigger box has more RAM, and you want \nthe database to be larger than RAM to get interesting results in this type \nof test.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 1 Jul 2008 21:38:04 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On 02/07/2008, Jeffrey Baker <[email protected]> wrote:\n> Yeah. The manufacturer rates it for 5 years in constant use. I\n> remain skeptical.\nI read in one of their spec-sheets that w/ continuous writes it\nshould survive roughly 3.4 years ... I'd be a tad more conservative,\nI guess, and try to drop 20-30% of that figure if I'd consider something\nlike it for production use\n\nAnd I'll be very indiscreet and ask: \"How much do they go for?\" :}\nI couldn't find anyone actually offering them in 5 minutes of\ngoogling, just some ball-park figure of 2400US$ ...\n\n\n\nCheers,\nAndrej\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Wed, 2 Jul 2008 13:46:06 +1200", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[email protected]> wrote:\n> I recently got my hands on a device called ioDrive from a company\n> called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI\n> card. 
It has its own driver for Linux completely outside of the\n> normal scsi/sata/sas/fc block device stack, but from the user's\n> perspective it behaves like a block device. I put the ioDrive in an\n> ordinary PC with 1GB of memory, a single 2.2GHz AMD CPU, and an\n> existing Areca RAID with 6 SATA disks and a 128MB cache. I tested the\n> device with PostgreSQL 8.3.3 on Centos 5.3 x86_64 (Linux 2.6.18).\n>\n> The pgbench database was initialized with scale factor 100. Test runs\n> were performed with 8 parallel connections (-c 8), both read-only (-S)\n> and read-write. PostgreSQL itself was configured with 256MB of shared\n> buffers and 32 checkpoint segments. Otherwise the configuration was\n> all defaults.\n>\n> In the following table, the \"RAID\" configuration has the xlogs on a\n> RAID 0 of 2 10krpm disks with ext2, and the heap is on a RAID 0 of 4\n> 7200rpm disks with ext3. The \"Fusion\" configuration has everything on\n> the ioDrive with xfs. I tried the ioDrive with ext2 and ext3 but it\n> didn't seem to make any difference.\n>\n> Service Time Percentile, millis\n> R/W TPS R-O TPS 50th 80th 90th 95th\n> RAID 182 673 18 32 42 64\n> Fusion 971 4792 8 9 10 11\n>\n> Basically the ioDrive is smoking the RAID. The only real problem with\n> this benchmark is that the machine became CPU-limited rather quickly.\n> During the runs with the ioDrive, iowait was pretty well zero, with\n> user CPU being about 75% and system getting about 20%.\n>\n> Now, I will say a couple of other things. The Linux driver for this\n> piece of hardware is pretty dodgy. Sub-alpha quality actually. But\n> they seem to be working on it. Also there's no driver for\n> OpenSolaris, Mac OS X, or Windows right now. In fact there's not even\n> anything available for Debian or other respectable Linux distros, only\n> Red Hat and its clones. The other problem is the 80GB model is too\n> small to hold my entire DB, Although it could be used as a tablespace\n> for some critical tables. But hey, it's fast.\n>\n> I'm going to put this board into my 8-way Xeon to see if it goes any\n> faster with more CPU available.\n>\n> I'd be interested in hearing experiences with other flash storage\n> devices, SSDs, and that type of thing. So far, this is the fastest\n> hardware I've seen for the price.\n\nAny chance of getting bonnie results? How long are your pgbench runs?\n Are you sure that you are seeing proper syncs to the device? (this is\nmy largest concern actually)\n\nmerlin.\n", "msg_date": "Wed, 2 Jul 2008 06:57:06 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[email protected]> wrote:\n> Basically the ioDrive is smoking the RAID. The only real problem with\n> this benchmark is that the machine became CPU-limited rather quickly.\n\nThat's traditionally the problem with everything being in memory.\nUnless the database algorithms are designed to exploit L1/L2 cache and\nRAM, which is not the case for a disk-based DBMS, you generally lose\nsome concurrency due to the additional CPU overhead of playing only\nwith memory. This is generally acceptable if you're going to trade\noff higher concurrency for faster service times. 
And, it isn't only\nevidenced in single systems where a disk-based DBMS is 100% cached,\nbut also in most shared-memory clustering architectures.\n\nIn most cases, when you're waiting on disk I/O, you can generally\nsupport higher concurrency because the OS can utilize the CPU's free\ncycles (during the wait) to handle other users. In short, sometimes,\ndisk I/O is a good thing; it just depends on what you need.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 2 Jul 2008 07:41:49 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "Le Wednesday 02 July 2008, Jonah H. Harris a écrit :\n> On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[email protected]> wrote:\n> > Basically the ioDrive is smoking the RAID. The only real problem with\n> > this benchmark is that the machine became CPU-limited rather quickly.\n>\n> That's traditionally the problem with everything being in memory.\n> Unless the database algorithms are designed to exploit L1/L2 cache and\n> RAM, which is not the case for a disk-based DBMS, you generally lose\n> some concurrency due to the additional CPU overhead of playing only\n> with memory. This is generally acceptable if you're going to trade\n> off higher concurrency for faster service times. And, it isn't only\n> evidenced in single systems where a disk-based DBMS is 100% cached,\n> but also in most shared-memory clustering architectures.\n\nMy experience is that using an IRAM for replication (on the slave) is very \ngood. I am unfortunely unable to provide any numbers or benchs :/ (I'll try \nto get some but it won't be easy)\n \nI would probably use some flash/memory disk when Postgresql get the warm \nstand-by at transaction level (and is up for readonly query)...\n\n>\n> In most cases, when you're waiting on disk I/O, you can generally\n> support higher concurrency because the OS can utilize the CPU's free\n> cycles (during the wait) to handle other users. In short, sometimes,\n> disk I/O is a good thing; it just depends on what you need.\n>\n> --\n> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 499 Thornall Street, 2nd Floor | [email protected]\n> Edison, NJ 08837 | http://www.enterprisedb.com/\n\n\n\n-- \nCédric Villemain\nAdministrateur de Base de Données\nCel: +33 (0)6 74 15 56 53\nhttp://dalibo.com - http://dalibo.org", "msg_date": "Wed, 2 Jul 2008 15:36:06 +0200", "msg_from": "=?iso-8859-1?q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Tue, Jul 1, 2008 at 5:18 PM, Jeffrey Baker <[email protected]> wrote:\n> I recently got my hands on a device called ioDrive from a company\n> called Fusion-io. The ioDrive is essentially 80GB of flash on a PCI\n> card.\n\n[...]\n\n> Service Time Percentile, millis\n> R/W TPS R-O TPS 50th 80th 90th 95th\n> RAID 182 673 18 32 42 64\n> Fusion 971 4792 8 9 10 11\n\nEssentially the same benchmark, but on a quad Xeon 2GHz with 3GB main\nmemory, and the scale factor of 300. 
Really all we learn from this\nexercise is the sheer futility of throwing CPU at PostgreSQL.\n\nR/W TPS: 1168\nR-O TPS: 6877\n\nQuadrupling the CPU resources and tripling the RAM results in a 20% or\n44% performance increase on read/write and read-only loads,\nrespectively. The system loafs along with 2-3 CPUs completely idle,\nalthough oddly iowait is 0%. I think the system is constrained by\ncontext switch, which is tens of thousands per second. This is a\nproblem with the ioDrive software, not with pg.\n\nSomeone asked for bonnie++ output:\n\nBlock output: 495MB/s, 81% CPU\nBlock input: 676MB/s, 93% CPU\nBlock rewrite: 262MB/s, 59% CPU\n\nPretty respectable. In the same ballpark as an HP MSA70 + P800 with\n25 spindles.\n\n-jwb\n", "msg_date": "Fri, 4 Jul 2008 23:41:38 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <[email protected]> wrote:\n>> Service Time Percentile, millis\n>> R/W TPS R-O TPS 50th 80th 90th 95th\n>> RAID 182 673 18 32 42 64\n>> Fusion 971 4792 8 9 10 11\n>\n> Someone asked for bonnie++ output:\n>\n> Block output: 495MB/s, 81% CPU\n> Block input: 676MB/s, 93% CPU\n> Block rewrite: 262MB/s, 59% CPU\n>\n> Pretty respectable. In the same ballpark as an HP MSA70 + P800 with\n> 25 spindles.\n\nYou left off the 'seeks' portion of the bonnie++ results -- this is\nactually the most important portion of the test. Based on your tps\n#s, I'm expecting seeks equiv of about 10 10k drives in configured in\na raid 10, or around 1000-1500. They didn't publish any prices so\nit's hard to say if this is 'cost competitive'.\n\nThese numbers are indeed fantastic, disruptive even. If I was testing\nthe device for consideration in high duty server environments, I would\nbe doing durability testing right now...I would slamming the database\nwith transactions (fsync on, etc) and then power off the device. I\nwould do this several times...making sure the software layer isn't\ndoing some mojo that is technically cheating.\n\nI'm not particularly enamored of having a storage device be stuck\ndirectly in a pci slot -- although I understand it's probably\nnecessary in the short term as flash changes all the rules and you\ncan't expect it to run well using mainstream hardware raid\ncontrollers. By using their own device they have complete control of\nthe i/o stack up to the o/s driver level.\n\nI've been thinking for a while now that flash is getting ready to\nexplode into use in server environments. The outstanding questions I\nsee are:\n*) is write endurance problem truly solved (giving at least a 5-10\nyear lifetime)\n*) what are the true odds of catastrophic device failure (industry\nclaims less, we'll see)\n*) is the flash random write problem going to be solved in hardware or\nspecialized solid state write caching techniques. At least\ncurrently, it seems like software is filling the role.\n*) do the software solutions really work (unproven)\n*) when are the major hardware vendors going to get involved. they\nmake a lot of money selling disks and supporting hardware (san, etc).\n\nmerlin\n", "msg_date": "Mon, 7 Jul 2008 09:08:08 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Wed, Jul 2, 2008 at 7:41 AM, Jonah H. 
Harris <[email protected]> wrote:\n> On Tue, Jul 1, 2008 at 8:18 PM, Jeffrey Baker <[email protected]> wrote:\n>> Basically the ioDrive is smoking the RAID. The only real problem with\n>> this benchmark is that the machine became CPU-limited rather quickly.\n>\n> That's traditionally the problem with everything being in memory.\n> Unless the database algorithms are designed to exploit L1/L2 cache and\n> RAM, which is not the case for a disk-based DBMS, you generally lose\n> some concurrency due to the additional CPU overhead of playing only\n> with memory. This is generally acceptable if you're going to trade\n> off higher concurrency for faster service times. And, it isn't only\n> evidenced in single systems where a disk-based DBMS is 100% cached,\n> but also in most shared-memory clustering architectures.\n>\n> In most cases, when you're waiting on disk I/O, you can generally\n> support higher concurrency because the OS can utilize the CPU's free\n> cycles (during the wait) to handle other users. In short, sometimes,\n> disk I/O is a good thing; it just depends on what you need.\n\nI have a lot of problems with your statements. First of all, we are\nnot really talking about 'RAM' storage...I think your comments would\nbe more on point if we were talking about mounting database storage\ndirectly from the server memory for example. Sever memory and cpu are\ninvolved to the extent that the o/s using them for caching and\nfilesystem things and inside the device driver.\n\nAlso, your comments seem to indicate that having a slower device leads\nto higher concurrency because it allows the process to yield and do\nother things. This is IMO simply false. With faster storage cpu\nloads will increase but only because the overall system throughput\nincreases and cpu/memory 'work' increases in terms of overall system\nactivity. Presumably as storage approaches speeds of main system\nmemory the algorithms of dealing with it will become simpler (not\nhaving to go through acrobatics to try and making everything\nsequential) and thus faster.\n\nI also find the remarks of software 'optimizing' for strict hardware\nassumptions (L1+L2) cache a little suspicious. In some old programs I\nremember keeping a giant C 'union' of critical structures that was\nexactly 8k to fit in the 486 cpu cache. In modern terms I think that\ntype of programming (sans some specialized environments) is usually\ncounter-productive...I think PostgreSQL's approach of deferring as\nmuch work as possible to the o/s is a great approach.\n\nmerlin\n", "msg_date": "Mon, 7 Jul 2008 09:23:43 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "\n\n> *) is the flash random write problem going to be solved in hardware or\n> specialized solid state write caching techniques. At least\n> currently, it seems like software is filling the role.\n\n\tThose flash chips are page-based, not unlike a harddisk, ie. you cannot \nerase and write a byte, you must erase and write a full page. Size of said \npage depends on the chip implementation. I don't know which chips they \nused so cannot comment there, but you can easily imagine that smaller \npages yield faster random IO write throughput. For reads, you must first \nselect a page and then access it. Thus, it is not like RAM at all. 
It is \nmuch more similar to a harddisk with an almost zero seek time (on reads) \nand a very small, but significant seek time (on writes) because a page \nmust be erased before being written.\n\n\tBig flash chips include ECC inside to improve reliability. Basically the \nchips include a small static RAM buffer. When you want to read a page it \nis first copied to SRAM and ECC checked. When you want to write a page you \nfirst write it to SRAM and then order the chip to write it to flash.\n\n\tUsually you can't erase a page, you must erase a block which contains \nmany pages (this is probably why most flash SSDs suck at random writes).\n\n\tNAND flash will never replace SDRAM because of these restrictions (NOR \nflash acts like RAM but it is slow and has less capacity).\n\tHowever NAND flash is well suited to replace harddisks.\n\n\tWhen writing a page you write it to the small static RAM buffer on the \nchip (fast) and tell the chip to write it to flash (slow). When the chip \nis busy erasing or writing you can not do anything with it, but you can \nstill talk to the other chips. Since the ioDrive has many chips I'd bet \nthey use this feature.\n\n\tI don't know about the ioDrive implementation but you can see that the \npaging and erasing requirements mean some tricks have to be applied and \nthe thing will probably need some smart buffering in RAM in order to be \nfast. Since the data in a flash doesn't need to be sequential (read seek \ntime being close to zero) it is possible they use a system which makes all \nwrites sequential (for instance) which would suit the block erasing \nrequirements very well, with the information about block mapping stored in \nRAM, or perhaps they use some form of copy-on-write. It would be \ninteresting to dissect this algorithm, especially the part which allows to \nstore permanently the block mappings, which cannot be stored in a constant \nknown sector since it would wear out pretty quickly.\n\n\tErgo, in order to benchmark this thing and get relevant results, I would \ntend to think that you'd need to fill it to say, 80% of capacity and \nbombard it with small random writes, the total amount of data being \nwritten being many times more than the total capacity of the drive, in \norder to test the remapping algorithms which are the weak point of such a \ndevice.\n\n> *) do the software solutions really work (unproven)\n> *) when are the major hardware vendors going to get involved. they\n> make a lot of money selling disks and supporting hardware (san, etc).\n\n\tLooking at the pictures of the \"drive\" I see a bunch of Flash chips which \nprobably make the bulk of the cost, a switching power supply, a small BGA \nchip which is probably a DDR memory for buffering, and the mystery ASIC \nwhich is probably a FPGA, I would tend to think Virtex4 from the shape of \nthe package seen from the side in one of the pictures.\n\n\tA team of talented engineers can design and produce such a board, and \nassembly would only use standard PCB processes. This is unlike harddisks, \nwhich need a huge investment and a specialized factory because of the \ncomplex mechanical parts and very tight tolerances. In the case of the \nioDrive, most of the value is in the intellectual property : software on \nthe PC CPU (driver), embedded software, and programming the FPGA.\n\n\tAll this points to a very different economic model for storage. 
I could \ndesign and build a scaled down version of the ioDrive in my \"garage\", for \ninstance (well, the PCI Express licensing fees are hefty, so I'd use PCI, \nbut you get the idea).\n\n\tThis means I think we are about to see a flood of these devices coming \n from many small companies. This is very good for the end user, because \nthere will be competition, natural selection, and fast evolution.\n\n\tInteresting times ahead !\n\n> I'm not particularly enamored of having a storage device be stuck\n> directly in a pci slot -- although I understand it's probably\n> necessary in the short term as flash changes all the rules and you\n> can't expect it to run well using mainstream hardware raid\n> controllers. By using their own device they have complete control of\n> the i/o stack up to the o/s driver level.\n\n\tWell, SATA is great for harddisks : small cables, less clutter, less \nfailure prone than 80 conductor cables, faster, cheaper, etc...\n\n\tBasically serial LVDS (low voltage differential signalling) point to \npoint links (SATA, PCI-Express, etc) are replacing parallel busses (PCI, \nIDE) everywhere, except where you need extremely low latency combined with \nextremely high throughput (like RAM). Point-to-point is much better \nbecause there is no contention. SATA is too slow for Flash, though, \nbecause it has only 2 lanes. This only leaves PCI-Express. However the \nhumongous data rates this \"drive\" puts out are not going to go through a \ncable that is going to be cheap.\n\n\tTherefore we are probably going to see a lot more PCI-Express flash \ndrives until a standard comes up to allow the RAID-Card + \"drives\" \nparadigm. But it probably won't involve cables and bays, most likely Flash \nsticks just like we have RAM sticks now, and a RAID controller on the mobo \nor a PCI-Express card. Or perhaps it will just be software RAID.\n\n\tAs for reliability of this device, I'd say the failure point is the Flash \nchips, as stated by the manufacturer. Wear levelling algorithms are going \nto matter a lot.\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 07 Jul 2008 16:58:23 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Mon, Jul 7, 2008 at 9:23 AM, Merlin Moncure <[email protected]> wrote:\n> I have a lot of problems with your statements. First of all, we are\n> not really talking about 'RAM' storage...I think your comments would\n> be more on point if we were talking about mounting database storage\n> directly from the server memory for example. Sever memory and cpu are\n> involved to the extent that the o/s using them for caching and\n> filesystem things and inside the device driver.\n\nI'm not sure how those cards work, but my guess is that the CPU will\ngo 100% busy (with a near-zero I/O wait) on any sizable workload. In\nthis case, the current pgbench configuration being used is quite small\nand probably won't resemble this.\n\n> Also, your comments seem to indicate that having a slower device leads\n> to higher concurrency because it allows the process to yield and do\n> other things. This is IMO simply false.\n\nArgue all you want, but this is a fairly well known (20+ year-old) behavior.\n\n> With faster storage cpu loads will increase but only because the overall\n> system throughput increases and cpu/memory 'work' increases in terms\n> of overall system activity.\n\nAgain, I said that response times (throughput) would improve. 
I'd\nlike to see your argument for explaining how you can handle more\nCPU-only operations when 0% of the CPU is free for use.\n\n> Presumably as storage approaches speedsof main system memory\n> the algorithms of dealing with it will become simpler (not having to\n> go through acrobatics to try and making everything sequential)\n> and thus faster.\n\nWe'll have to see.\n\n> I also find the remarks of software 'optimizing' for strict hardware\n> assumptions (L1+L2) cache a little suspicious. In some old programs I\n> remember keeping a giant C 'union' of critical structures that was\n> exactly 8k to fit in the 486 cpu cache. In modern terms I think that\n> type of programming (sans some specialized environments) is usually\n> counter-productive...I think PostgreSQL's approach of deferring as\n> much work as possible to the o/s is a great approach.\n\nAll of the major database vendors still see an immense value in\noptimizing their algorithms and memory structures for specific\nplatforms and CPU caches. Hence, if they're *paying* money for\nvery-specialized industry professionals to optimize in this way, I\nwould hesitate to say there isn't any value in it. As a fact,\nPostgres doesn't have those low-level resources, so for the most part,\nI have to agree that they have to rely on the OS.\n\n-Jonah\n", "msg_date": "Mon, 7 Jul 2008 11:09:22 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "On Mon, Jul 7, 2008 at 6:08 AM, Merlin Moncure <[email protected]> wrote:\n> On Sat, Jul 5, 2008 at 2:41 AM, Jeffrey Baker <[email protected]> wrote:\n>>> Service Time Percentile, millis\n>>> R/W TPS R-O TPS 50th 80th 90th 95th\n>>> RAID 182 673 18 32 42 64\n>>> Fusion 971 4792 8 9 10 11\n>>\n>> Someone asked for bonnie++ output:\n>>\n>> Block output: 495MB/s, 81% CPU\n>> Block input: 676MB/s, 93% CPU\n>> Block rewrite: 262MB/s, 59% CPU\n>>\n>> Pretty respectable. In the same ballpark as an HP MSA70 + P800 with\n>> 25 spindles.\n>\n> You left off the 'seeks' portion of the bonnie++ results -- this is\n> actually the most important portion of the test. Based on your tps\n> #s, I'm expecting seeks equiv of about 10 10k drives in configured in\n> a raid 10, or around 1000-1500. They didn't publish any prices so\n> it's hard to say if this is 'cost competitive'.\n\nI left it out because bonnie++ reports it as \"+++++\" i.e. greater than\nor equal to 100000 per second.\n\n-jwb\n", "msg_date": "Mon, 7 Jul 2008 09:10:37 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "\n> PFC, I have to say these kind of posts make me a fan of yours. I've \n> read many of your storage-related replied and have found them all very \n> educational. I just want to let you know I found your assessment of the \n> impact of Flash storage perfectly-worded and unbelievably insightful. \n> Thanks a million for sharing your knowledge with the list. -Dan\n\n\tHehe, thanks.\n\n\tThere was a time when you had to be a big company full of cash to build a \ncomputer, and then sudenly people did it in garages, like Wozniak and \nJobs, out of off-the-shelf parts.\n\n\tI feel the ioDrive guys are the same kind of hackers, except today's \nhackers have much more powerful tools. 
Perhaps, and I hope it's true, \nstorage is about to undergo a revolution like the personal computer had \n20-30 years ago, when the IBMs of the time were eaten from the roots up.\n\n\tIMHO the key is that you can build a ioDrive from off the shelf parts, \nbut you can't do that with a disk drive.\n\tFlash manufacturers are smelling blood, they profit from USB keys and \ndigicams but imagine the market for solid state drives !\n\tAnd in this case the hardware is simple : flash, ram, a fpga, some chips, \nnothing out of the ordinary, it is the brain juice in the software (which \nincludes FPGAs) which will sort out the high performance and reliability \nwinners from the rest.\n\n\tLowering the barrier of entry is good for innovation. I believe Linux \nwill benefit, too, since the target is (for now) high-performance servers, \nand as shown by the ioDrive, innovating hackers prefer to write Linux \ndrivers rather than Vista (argh) drivers.\n", "msg_date": "Mon, 07 Jul 2008 19:50:30 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "Hi,\n\nJonah H. Harris wrote:\n> I'm not sure how those cards work, but my guess is that the CPU will\n> go 100% busy (with a near-zero I/O wait) on any sizable workload. In\n> this case, the current pgbench configuration being used is quite small\n> and probably won't resemble this.\n\nI'm not sure how they work either, but why should they require more CPU \ncycles than any other PCIe SAS controller?\n\nI think they are doing a clever step by directly attaching the NAND \nchips to PCIe, instead of piping all the data through SAS or (S)ATA (and \nthen through PCIe as well). And if the controller chip on the card isn't \nabsolutely bogus, that certainly has the potential to reduce latency and \nimprove throughput - compared to other SSDs.\n\nOr am I missing something?\n\nRegards\n\nMarkus\n\n", "msg_date": "Tue, 08 Jul 2008 11:49:36 +0200", "msg_from": "Markus Wanner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "Well, what does a revolution like this require of Postgres? That is the\nquestion.\n\nI have looked at the I/O drive, and it could increase our DB throughput\nsignificantly over a RAID array.\n\nIdeally, I would put a few key tables and the WAL, etc. I'd also want all\nthe sort or hash overflow from work_mem to go to this device. Some of our\ntables / indexes are heavily written to for short periods of time then more\ninfrequently later -- these are partitioned by date. I would put the fresh\nones on such a device then move them to the hard drives later.\n\nIdeally, we would then need a few changes in Postgres to take full advantage\nof this:\n\n#1 Per-Tablespace optimizer tuning parameters. Arguably, this is already\nneeded. The tablespaces on such a solid state device would have random and\nsequential access at equal (low) cost. Any one-size-fits-all set of\noptimizer variables is bound to cause performance issues when two\ntablespaces have dramatically different performance profiles.\n#2 Optimally, work_mem could be shrunk, and the optimizer would have to not\npreferentially sort - group_aggregate whenever it suspected that work_mem\nwas too large for a hash_agg. 
A disk based hash_agg will pretty much win\nevery time with such a device over a sort (in memory or not) once the number\nof rows to aggregate goes above a moderate threshold of a couple hundred\nthousand or so.\n
In fact, I have several examples with 8.3.3 and a standard RAID array where\na hash_agg that spilled to disk (poor or -- purposely distorted statistics\ncause this) was a lot faster than the sort that the optimizer wants to do\ninstead. Whatever mechanism is calculating the cost of doing sorts or\nhashes on disk will need to be tunable per tablespace.\n\n
I suppose both of the above may be one task -- I don't know enough about the\nPostgres internals.\n\n
#3 Being able to move tables / indexes from one tablespace to another as\nefficiently as possible.\n\n
There are probably other enhancements that will help such a setup. These\nwere the first that came to mind.\n\n
On Tue, Jul 8, 2008 at 2:49 AM, Markus Wanner <[email protected]> wrote:\n\n> Hi,\n>\n> Jonah H. Harris wrote:\n>\n
>> I'm not sure how those cards work, but my guess is that the CPU will\n>> go 100% busy (with a near-zero I/O wait) on any sizable workload. In\n>> this case, the current pgbench configuration being used is quite small\n>> and probably won't resemble this.\n>>\n>\n
> I'm not sure how they work either, but why should they require more CPU\n> cycles than any other PCIe SAS controller?\n>\n
> I think they are doing a clever step by directly attaching the NAND chips\n> to PCIe, instead of piping all the data through SAS or (S)ATA (and then\n> through PCIe as well). And if the controller chip on the card isn't\n> absolutely bogus, that certainly has the potential to reduce latency and\n> improve throughput - compared to other SSDs.\n>\n
> Or am I missing something?\n>\n> Regards\n>\n> Markus\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 8 Jul 2008 09:38:39 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" }, { "msg_contents": "Scott Carey wrote:\n> Well, what does a revolution like this require of Postgres? That is the\n> question.\n[...]\n> #1 Per-Tablespace optimizer tuning parameters.\n\n... automatically measured?\n\n\nCheers,\n Jeremy\n", "msg_date": "Tue, 08 Jul 2008 20:24:41 +0100", "msg_from": "Jeremy Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fusion-io ioDrive" } ]
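Point #3 in the thread above (moving tables and indexes between a flash tablespace and a disk tablespace) is already expressible with stock SQL; a minimal sketch, with hypothetical tablespace name, mount point and relation names:

    -- Dedicate the flash device to its own tablespace
    CREATE TABLESPACE iodrive LOCATION '/mnt/iodrive/pgdata';

    -- Keep a freshly written partition and its index on flash ...
    ALTER TABLE fresh_partition SET TABLESPACE iodrive;
    ALTER INDEX fresh_partition_pkey SET TABLESPACE iodrive;

    -- ... then migrate it back to the default (spinning-disk) tablespace
    -- later; note that SET TABLESPACE physically copies the relation and
    -- holds an exclusive lock on it while doing so.
    ALTER TABLE fresh_partition SET TABLESPACE pg_default;

Point #1, per-tablespace planner settings, is the part that, as of the Postgres versions discussed here, would actually need new code.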
[ { "msg_contents": "Hi all,\n\nWe have upgraded our database server to postgres 8.3.1 on 28th June.\n\nChecked out the performance of Hot on 30th June :\n\n
relname    n_tup_ins   n_tup_upd   n_tup_del   n_tup_hot_upd   n_live_tup   n_dead_tup\n
table1      15509156     2884653           0         2442823     15509158        68046\n
table2        434585      718472           0          642336       434703        16723\n
table3        105546      252143           0          233078       105402         4922\n
table4       7303628       96098      176790           23445      7126838         3443\n
table5      11117045       22245           0           16910     11117045          440\n\n
Checked the performance of Hot on 2nd July :\n\n
relname    n_tup_ins   n_tup_upd   n_tup_del   n_tup_hot_upd   n_live_tup   n_dead_tup\n
table1      15577724     6237242           0         2924948     18460676       131103\n
table2        435558     1579613           0          763397      1171497        42476\n
table3        105814      540997           0          273443       355287        14184\n
table4       7614752      219329      513268           41053      7424721           62\n
table5      11184309       47665           0           21198     11206296          745\n\n
Performance of Hot was much better on 30June as compared to 2nd July.\n\nCan anybody , help out over here.\n\n~ Gauri\n\n-- \nRegards\nGauri\n", "msg_date": "Wed, 2 Jul 2008 18:01:23 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hot Issue" }, { "msg_contents": "On Wed, Jul 2, 2008 at 8:31 AM, Gauri Kanekar\n<[email protected]> wrote:\n> Performance of Hot was much better on 30June as compared to 2nd July.\n\nDid you happen to VACUUM FULL or CLUSTER anything?\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 2 Jul 2008 08:40:40 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "No, Vacuum Full was not done, but auto_vacuum did click onto table1.\n\nNo cluster.\n\n\nOn Wed, Jul 2, 2008 at 6:10 PM, Jonah H. Harris <[email protected]>\nwrote:\n\n> On Wed, Jul 2, 2008 at 8:31 AM, Gauri Kanekar\n> <[email protected]> wrote:\n> > Performance of Hot was much better on 30June as compared to 2nd July.\n>\n> Did you happen to VACUUM FULL or CLUSTER anything?\n>\n> --\n> Jonah H.
Harris, Sr. Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 499 Thornall Street, 2nd Floor | [email protected]\n> Edison, NJ 08837 | http://www.enterprisedb.com/\n>\n\n\n\n-- \nRegards\nGauri\n\nNo, Vacuum Full was not done, but auto_vacuum did click onto table1.No cluster.On Wed, Jul 2, 2008 at 6:10 PM, Jonah H. Harris <[email protected]> wrote:\nOn Wed, Jul 2, 2008 at 8:31 AM, Gauri Kanekar\n<[email protected]> wrote:\n> Performance of Hot was much better on 30June as compared to 2nd July.\n\nDid you happen to VACUUM FULL or CLUSTER anything?\n\n--\nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n-- RegardsGauri", "msg_date": "Wed, 2 Jul 2008 18:11:43 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "On Wed, 2008-07-02 at 18:01 +0530, Gauri Kanekar wrote:\n> Checked out the performance of Hot on 30th June :\n> \n> relname n_tup_ins n_tup_upd n_tup_del\n> n_tup_hot_upd\n> n_live_tup n_dead_tup *table1* *15509156* *2884653* *0* *2442823*\n> *\n> 15509158* *68046* table2 434585 718472 0 642336 434703 16723 table3\n> 105546\n> 252143 0 233078 105402 4922 \n<snip>\n> \n> Checked the performance of Hot on 2nd July :\n> \n> relname n_tup_ins n_tup_upd n_tup_del\n> n_tup_hot_upd\n> n_live_tup n_dead_tup *table1**15577724* *6237242* *0* *2924948*\n> *\n> 18460676* *131103* table2 435558 1579613 0 763397 1171497 42476 \n\nMaybe those updates were not qualified for HOT between these days?\n\nRegards,\n-- \nDevrim GÜNDÜZ\ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\n http://www.gunduz.org", "msg_date": "Wed, 02 Jul 2008 16:08:04 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "How does it indicate if the entries qualify for hot update ??\n\nhot have a limitation that it do not work if, the index column is updated.\nBut that not the case over here.\n\n\n\nOn Wed, Jul 2, 2008 at 6:38 PM, Devrim GÜNDÜZ <[email protected]> wrote:\n\n> On Wed, 2008-07-02 at 18:01 +0530, Gauri Kanekar wrote:\n> > Checked out the performance of Hot on 30th June :\n> >\n> > relname n_tup_ins n_tup_upd n_tup_del\n> > n_tup_hot_upd\n> > n_live_tup n_dead_tup *table1* *15509156* *2884653* *0* *2442823*\n> > *\n> > 15509158* *68046* table2 434585 718472 0 642336 434703 16723 table3\n> > 105546\n> > 252143 0 233078 105402 4922\n> <snip>\n> >\n> > Checked the performance of Hot on 2nd July :\n> >\n> > relname n_tup_ins n_tup_upd n_tup_del\n> > n_tup_hot_upd\n> > n_live_tup n_dead_tup *table1**15577724* *6237242* *0* *2924948*\n> > *\n> > 18460676* *131103* table2 435558 1579613 0 763397 1171497 42476\n>\n> Maybe those updates were not qualified for HOT between these days?\n>\n> Regards,\n> --\n> Devrim GÜNDÜZ\n> devrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\n> http://www.gunduz.org\n>\n>\n\n\n-- \nRegards\nGauri\n\nHow does it indicate if the entries qualify for hot update ??hot have a limitation that it do not work if, the index column is updated. 
But that not the case over here.On Wed, Jul 2, 2008 at 6:38 PM, Devrim GÜNDÜZ <[email protected]> wrote:\nOn Wed, 2008-07-02 at 18:01 +0530, Gauri Kanekar wrote:\n> Checked out the performance of Hot on 30th June :\n>\n>     relname              n_tup_ins   n_tup_upd   n_tup_del\n> n_tup_hot_upd\n>  n_live_tup   n_dead_tup   *table1* *15509156* *2884653* *0* *2442823*\n> *\n> 15509158* *68046*  table2 434585 718472 0 642336 434703 16723  table3\n> 105546\n> 252143 0 233078 105402 4922\n<snip>\n>\n> Checked the performance of Hot on 2nd July :\n>\n>     relname              n_tup_ins   n_tup_upd   n_tup_del\n> n_tup_hot_upd\n>  n_live_tup   n_dead_tup   *table1**15577724* *6237242* *0* *2924948*\n> *\n> 18460676* *131103*  table2 435558 1579613 0 763397 1171497 42476\n\nMaybe those updates were not qualified for HOT between these days?\n\nRegards,\n--\nDevrim GÜNDÜZ\ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\n                   http://www.gunduz.org\n\n-- RegardsGauri", "msg_date": "Wed, 2 Jul 2008 18:41:38 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "On Wed, Jul 2, 2008 at 9:11 AM, Gauri Kanekar\n<[email protected]> wrote:\n> hot have a limitation that it do not work if, the index column is updated.\n> But that not the case over here.\n\nAnother limitation is that HOT won't work if there's not enough space\nto fit the update on the same page.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 2 Jul 2008 09:29:02 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "On Wed, 2008-07-02 at 18:41 +0530, Gauri Kanekar wrote:\n> hot have a limitation that it do not work if, the index column is\n> updated.\n\nIt is one of the conditions -- it also needs to fit in the same block.\n\nRegards,\n-- \nDevrim GÜNDÜZ\ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\n http://www.gunduz.org", "msg_date": "Wed, 02 Jul 2008 16:40:10 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "ok.. But we have set fill_factor = 80 for all the indexes on table1.\n\nIs there a way to check if the page is fill and the update is going on a new\npage ??\n\n\n\nOn Wed, Jul 2, 2008 at 6:59 PM, Jonah H. Harris <[email protected]>\nwrote:\n\n> On Wed, Jul 2, 2008 at 9:11 AM, Gauri Kanekar\n> <[email protected]> wrote:\n> > hot have a limitation that it do not work if, the index column is\n> updated.\n> > But that not the case over here.\n>\n> Another limitation is that HOT won't work if there's not enough space\n> to fit the update on the same page.\n>\n> --\n> Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324\n> EnterpriseDB Corporation | fax: 732.331.1301\n> 499 Thornall Street, 2nd Floor | [email protected]\n> Edison, NJ 08837 | http://www.enterprisedb.com/\n>\n\n\n\n-- \nRegards\nGauri\n\nok.. But we have set fill_factor = 80 for all the indexes on table1.Is there a way to check if the page is fill and the update is going on a new page ??On Wed, Jul 2, 2008 at 6:59 PM, Jonah H. 
Harris <[email protected]> wrote:\nOn Wed, Jul 2, 2008 at 9:11 AM, Gauri Kanekar\n<[email protected]> wrote:\n> hot have a limitation that it do not work if, the index column is updated.\n> But that not the case over here.\n\nAnother limitation is that HOT won't work if there's not enough space\nto fit the update on the same page.\n\n--\nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n-- RegardsGauri", "msg_date": "Wed, 2 Jul 2008 19:14:16 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "On Wed, Jul 2, 2008 at 9:44 AM, Gauri Kanekar\n<[email protected]> wrote:\n> ok.. But we have set fill_factor = 80 for all the indexes on table1.\n\nYou need fill factor for the heap table, not the index.\n\n> Is there a way to check if the page is fill and the update is going on a new\n> page ??\n\nIIRC, I don't think so. I think you'd have to u se something like\npg_filedump to see if you have rows migrated to other blocks due to\nupdates.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 2 Jul 2008 10:21:14 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "\"Jonah H. Harris\" <[email protected]> writes:\n> On Wed, Jul 2, 2008 at 9:44 AM, Gauri Kanekar\n>> Is there a way to check if the page is fill and the update is going on a new\n>> page ??\n\n> IIRC, I don't think so.\n\nYou could make your application check to see if the page part of a row's\nCTID changes when it's updated. But there's no centralized counter\nother than the one you are already looking at.\n\nI personally found the original post completely content-free, however,\nsince it gave no context for the two sets of numbers we were shown.\nOver what intervals were the counts accumulated? Were the stats\ncounters reset after taking the first output, or does the second output\ninclude the first? What exactly does the OP think that \"better HOT\nperformance\" is, anyway?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Jul 2008 10:40:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue " }, { "msg_contents": "2nd count include the 1st count also.\n\nWe have not restarted the DB since postgres 8.3 have been released.\n\nBetter HOT performance means.... 1st stat showed most of the updated tuples\ngetting hot.\nBut the 2nd stat showed that most of the updated tuples are NOT getting hot.\n\n\n\nOn Wed, Jul 2, 2008 at 8:10 PM, Tom Lane <[email protected]> wrote:\n\n> \"Jonah H. Harris\" <[email protected]> writes:\n> > On Wed, Jul 2, 2008 at 9:44 AM, Gauri Kanekar\n> >> Is there a way to check if the page is fill and the update is going on a\n> new\n> >> page ??\n>\n> > IIRC, I don't think so.\n>\n> You could make your application check to see if the page part of a row's\n> CTID changes when it's updated. 
But there's no centralized counter\n> other than the one you are already looking at.\n>\n> I personally found the original post completely content-free, however,\n> since it gave no context for the two sets of numbers we were shown.\n> Over what intervals were the counts accumulated? Were the stats\n> counters reset after taking the first output, or does the second output\n> include the first? What exactly does the OP think that \"better HOT\n> performance\" is, anyway?\n>\n> regards, tom lane\n>\n\n\n\n-- \nRegards\nGauri\n\n2nd count include the 1st count also.We have not restarted the DB since postgres 8.3 have been released.Better HOT performance means.... 1st stat showed most of the updated tuples getting hot.But the 2nd stat showed that most of the updated tuples are NOT getting hot.\nOn Wed, Jul 2, 2008 at 8:10 PM, Tom Lane <[email protected]> wrote:\n\"Jonah H. Harris\" <[email protected]> writes:\n> On Wed, Jul 2, 2008 at 9:44 AM, Gauri Kanekar\n>> Is there a way to check if the page is fill and the update is going on a new\n>> page ??\n\n> IIRC, I don't think so.\n\nYou could make your application check to see if the page part of a row's\nCTID changes when it's updated.  But there's no centralized counter\nother than the one you are already looking at.\n\nI personally found the original post completely content-free, however,\nsince it gave no context for the two sets of numbers we were shown.\nOver what intervals were the counts accumulated?  Were the stats\ncounters reset after taking the first output, or does the second output\ninclude the first?  What exactly does the OP think that \"better HOT\nperformance\" is, anyway?\n\n                        regards, tom lane\n-- RegardsGauri", "msg_date": "Wed, 2 Jul 2008 20:18:11 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hot Issue" }, { "msg_contents": "\"Gauri Kanekar\" <[email protected]> writes:\n> Better HOT performance means.... 1st stat showed most of the updated tuples\n> getting hot.\n> But the 2nd stat showed that most of the updated tuples are NOT getting hot.\n\nWell, as was noted upthread, you'd want to reduce the table fillfactor\n(not index fillfactor) below 100 to improve the odds of being able to\ndo HOT updates. But I wonder whether your application behavior changed.\nAre you updating the rows in a way that'd make them wider?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Jul 2008 12:35:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hot Issue " } ]
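The replies in the thread above point at heap fillfactor and same-page free space as the usual reasons HOT updates stop happening. A minimal SQL sketch of the two checks that follow from that advice; the view and counter names are the ones quoted in the thread, "table1" stands in for the real table name, and the fillfactor value of 80 is only illustrative.

-- Share of updates that were HOT, per table (pg_stat_user_tables, 8.3)
SELECT relname, n_tup_upd, n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / NULLIF(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC;

-- Reserve free space in the heap pages (not the indexes), so an updated
-- row version can stay on the same page and qualify for HOT.
ALTER TABLE table1 SET (fillfactor = 80);
-- Existing pages only gain that free space once they are rewritten,
-- e.g. by VACUUM FULL or CLUSTER, or gradually as rows are added.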
[ { "msg_contents": "Hi guys, How are you ?\n\n\nI am from Brazil and i work for a little company and it company is working\nis medium-big project and we want to use PostGree like the DataBase system,\nbut i got some questions.\nI want to know if the PostGree has limitations about the concurrent access,\nbecause a lot of people will access this database at the same time.\nI want to know about the limitations, like how much memory do i have to use\n!? How big could be my database and how simultaneously access this database\nsupport ?\n\n\n\nThanks for attention,\nRegards,\nLevi\n\nHi guys, How are you ?I am from Brazil and i work for a little company and it company is working is medium-big project and we want to use PostGree like the DataBase system, but i got some questions. I want to know if the PostGree has limitations about the concurrent access, because a lot of people will access this database at the same time.\nI want to know about the limitations, like how much memory do i have to use !? How big could be my database and how simultaneously access this database support ?Thanks for attention,Regards,Levi", "msg_date": "Wed, 2 Jul 2008 15:31:10 -0300", "msg_from": "\"=?ISO-8859-1?Q?Lev=ED_Teodoro_da_Silva?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "[QUESTION]Concurrent Access" }, { "msg_contents": "On Wed, 2008-07-02 at 15:31 -0300, Leví Teodoro da Silva wrote:\n> we want to use PostGree like the DataBase system,\n> but i got some questions.\n\nFirst of all: Please learn the correct spelling: It is PostgreSQL, or\nPostgres.\n\n> I want to know if the PostGree has limitations about the concurrent\n> access, because a lot of people will access this database at the same\n> time.\n\nPostgreSQL does not force a limit on concurrent access, but it is\ndependent on your hardware, network, etc.\n\n> I want to know about the limitations, like how much memory do i have\n> to use !? \n\nIt depends on the size of your database. See the link below.\n\n> How big could be my database \n\nDepends on your disk ;) There is no PostgreSQL limitation for that.\nWell, there is a limit for tables, etc:\n\nhttp://www.postgresql.org/docs/faqs.FAQ.html#item4.4\n\n-HTH.\n-- \nDevrim GÜNDÜZ\ndevrim~gunduz.org, devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\n http://www.gunduz.org", "msg_date": "Wed, 02 Jul 2008 21:42:16 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTION]Concurrent Access" }, { "msg_contents": "On Wed, Jul 2, 2008 at 12:31 PM, Leví Teodoro da Silva\n<[email protected]> wrote:\n> Hi guys, How are you ?\n>\n>\n> I am from Brazil and i work for a little company and it company is working\n> is medium-big project and we want to use PostGree like the DataBase system,\n> but i got some questions.\n> I want to know if the PostGree has limitations about the concurrent access,\n> because a lot of people will access this database at the same time.\n> I want to know about the limitations, like how much memory do i have to use\n> !? How big could be my database and how simultaneously access this database\n> support ?\n\nThere is a limit that's basically determined by the machine you're\nrunning on. Each live connection uses a bit of memory itself and\nthere can be a few thundering herd issues with a lot of connections.\nthat said, a big server, properly configured can handle several\nhundred to a few thousand connections at once. 
We have simple little\nsession servers that store most of what they do in memory and handle\naround 700 connections with a single SATA hard drive. The data is\ndisposable so fsync is off and we can just rebuilt one from an image\nin minutes if need be.\n", "msg_date": "Wed, 2 Jul 2008 12:47:23 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTION]Concurrent Access" }, { "msg_contents": "\n> I want to know if the PostGree has limitations about the concurrent \n> access,\n> because a lot of people will access this database at the same time.\n\n\tPostgreSQL has excellent concurrency provided you use it correctly.\n\n\tBut what do you mean by concurrent access ?\n\n\t* Number of opened Postgres connections at the same time ?\n\t=> each one of those uses a little bit of RAM. (see manual) but if idle \nthey don't use CPU.\n\n\t* Number of opened transactions at the same time ?\n\t(between BEGIN and COMMIT)\n\tIf your transactions are long and you have many transactions at the same \ntime you can get lock problems, for instance transaction A updates row X \nand transaction B updates the same row X, one will have to wait for the \nother to commit or rollback of course. If your transactions last 1 ms \nthere is no problem, if they last 5 minutes you will suffer.\n\n\t* Number of queries executing at the same time ?\n\tThis is different from above, each query will eat some CPU and IO \nresources, and memory too.\n\n\t* Number of concurrent HTTP connections to your website ?\n\tIf you have a website, you will probably use some form of connection \npooling, or lighttpd/fastcgi, or a proxy, whatever, so the number of open \ndatabase connections at the same time won't be that high. Unless you use \nmod_php without connection pooling, in that case it will suck of course, \nbut that's normal.\n\n\t* Number of people using your client ?\n\tSee number of idle connections above. Or use connection pool.\n\n> I want to know about the limitations, like how much memory do i have to \n> use\n\n\tThat depends on what you want to do ;)\n\n> How big could be my database ?\n\n\tThat depends on what you do with it ;)\n\n\tWorking set size is more relevant than total database size.\n\n\tFor instance if your database contains orders from the last 10 years, but \nonly current orders (say orders from this month) are accessed all the \ntime, with old orders being rarely accessed, you want the last 1-2 months' \nworth of orders to fit in RAM for fast access (caching) but you don't need \nRAM to fit your entire database.\n\tSo, think about working sets not total sizes.\n\n\tAnd there is no limit on the table size (well, there is, but you'll never \nhit it). People have terabytes in postgres and it seems to work ;)\n", "msg_date": "Thu, 03 Jul 2008 14:56:27 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTION]Concurrent Access" }, { "msg_contents": "[email protected] (\"Lev� Teodoro da Silva\") writes:\n> Hi guys, How are you ?\n> I am from Brazil and i work for a little company and it company is working is medium-big project and we want to use PostGree like the DataBase\n> system, but i got some questions.\n> I want to know if the PostGree has limitations about the concurrent access, because a lot of people will access this database at the same time.\n> I want to know about the limitations, like how much memory do i have to use !? 
How big could be my database and how simultaneously access this\n> database support ?\n\nPostGree is a system I am not familiar with; this list is for\ndiscussion of PostgreSQL, sometimes aliased as \"Postgres,\" so I will\nassume you are referring instead to PostgreSQL.\n\nPostgreSQL does have limitations; each connection spawns a process,\nand makes use of its own \"work_mem\", which has the result that the\nmore connections you configure a particular backend to support, the\nmore memory that will consume, and eventually your system will\npresumably run out of memory.\n\nThe size of the database doesn't have as much to do with how many\nusers you can support as does the configuration that you set up.\n-- \nselect 'cbbrowne' || '@' || 'linuxfinances.info';\nhttp://cbbrowne.com/info/lsf.html\nRules of the Evil Overlord #145. \"My dungeon cell decor will not\nfeature exposed pipes. While they add to the gloomy atmosphere, they\nare good conductors of vibrations and a lot of prisoners know Morse\ncode.\" <http://www.eviloverlord.com/>\n", "msg_date": "Fri, 04 Jul 2008 12:58:48 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [QUESTION]Concurrent Access" }, { "msg_contents": "Hi guys !!!\n\nSorry for the wrong spelling. =)\nI could see that PostgreSQL will support my application, but i have to do a\ngood configuration on my server.\n\nThanks for answers, now i will look for informations about PostgreSQL on\nOpenSolaris 2008.05\nHave a nice week,\nLevi\n\n\n2008/7/4 Chris Browne <[email protected]>:\n\n> [email protected] (\"Leví Teodoro da Silva\") writes:\n> > Hi guys, How are you ?\n> > I am from Brazil and i work for a little company and it company is\n> working is medium-big project and we want to use PostGree like the DataBase\n> > system, but i got some questions.\n> > I want to know if the PostGree has limitations about the concurrent\n> access, because a lot of people will access this database at the same time.\n> > I want to know about the limitations, like how much memory do i have to\n> use !? How big could be my database and how simultaneously access this\n> > database support ?\n>\n> PostGree is a system I am not familiar with; this list is for\n> discussion of PostgreSQL, sometimes aliased as \"Postgres,\" so I will\n> assume you are referring instead to PostgreSQL.\n>\n> PostgreSQL does have limitations; each connection spawns a process,\n> and makes use of its own \"work_mem\", which has the result that the\n> more connections you configure a particular backend to support, the\n> more memory that will consume, and eventually your system will\n> presumably run out of memory.\n>\n> The size of the database doesn't have as much to do with how many\n> users you can support as does the configuration that you set up.\n> --\n> select 'cbbrowne' || '@' || 'linuxfinances.info';\n> http://cbbrowne.com/info/lsf.html\n> Rules of the Evil Overlord #145. \"My dungeon cell decor will not\n> feature exposed pipes. While they add to the gloomy atmosphere, they\n> are good conductors of vibrations and a lot of prisoners know Morse\n> code.\" <http://www.eviloverlord.com/>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi guys !!!Sorry for the wrong spelling. 
=)I could see that PostgreSQL will support my application, but i have to do a good configuration on my server.Thanks for answers, now i will look for informations about PostgreSQL on OpenSolaris 2008.05\nHave a nice week,Levi2008/7/4 Chris Browne <[email protected]>:\[email protected] (\"Leví Teodoro da Silva\") writes:\n> Hi guys, How are you ?\n> I am from Brazil and i work for a little company and it company is working is medium-big project and we want to use PostGree like the DataBase\n> system, but i got some questions.\n> I want to know if the PostGree has limitations about the concurrent access, because a lot of people will access this database at the same time.\n> I want to know about the limitations, like how much memory do i have to use !? How big could be my database and how simultaneously access this\n> database support ?\n\nPostGree is a system I am not familiar with; this list is for\ndiscussion of PostgreSQL, sometimes aliased as \"Postgres,\" so I will\nassume you are referring instead to PostgreSQL.\n\nPostgreSQL does have limitations; each connection spawns a process,\nand makes use of its own \"work_mem\", which has the result that the\nmore connections you configure a particular backend to support, the\nmore memory that will consume, and eventually your system will\npresumably run out of memory.\n\nThe size of the database doesn't have as much to do with how many\nusers you can support as does the configuration that you set up.\n--\nselect 'cbbrowne' || '@' || 'linuxfinances.info';\nhttp://cbbrowne.com/info/lsf.html\nRules  of the  Evil Overlord  #145. \"My  dungeon cell  decor  will not\nfeature exposed pipes.  While they add to the  gloomy atmosphere, they\nare good  conductors of vibrations and  a lot of  prisoners know Morse\ncode.\" <http://www.eviloverlord.com/>\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 14 Jul 2008 11:23:01 -0300", "msg_from": "\"=?ISO-8859-1?Q?Lev=ED_Teodoro_da_Silva?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [QUESTION]Concurrent Access" } ]
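The advice in this thread comes down to connection count times work_mem, plus connection pooling. A small, hedged set of psql checks that follow from it; none of the names or numbers are specific to the original poster's setup.

SHOW max_connections;   -- upper bound on concurrent backends
SHOW work_mem;          -- per-sort/hash memory, multiplied across active queries

-- Connections currently open, per database
SELECT datname, count(*) AS connections
FROM pg_stat_activity
GROUP BY datname
ORDER BY connections DESC;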
[ { "msg_contents": "Hi.\n\nI have a table with 1.8M rows on a Postgres 8.1.4 server, and I'm\nexecuting a query which looks like:\n\n select count(*) from header_fields where message in\n (select message from mailbox_messages limit N);\n\nI've found that when N==75, the query uses a fast index scan, but when\nN==100, it switches to a seqscan instead. Here are the plans, first the\nfast query (which retrieves 1306 rows):\n\n> explain analyse select count(*) from header_fields where message in (select message from mailbox_messages limit 75);\n\n Aggregate (cost=84873.57..84873.58 rows=1 width=0) (actual time=940.513..940.516 rows=1 loops=1)\n -> Nested Loop (cost=2.25..84812.59 rows=24391 width=0) (actual time=53.235..935.743 rows=1306 loops=1)\n -> HashAggregate (cost=2.25..3.00 rows=75 width=4) (actual time=1.351..1.969 rows=75 loops=1)\n -> Limit (cost=0.00..1.31 rows=75 width=4) (actual time=0.096..0.929 rows=75 loops=1)\n -> Seq Scan on mailbox_messages (cost=0.00..1912.10 rows=109410 width=4) (actual time=0.087..0.513 rows=75 loops=1)\n -> Index Scan using header_fields_message_key on header_fields (cost=0.00..1126.73 rows=325 width=4) (actual time=9.003..12.330 rows=17 loops=75)\n Index Cond: (header_fields.message = \"outer\".message)\n Total runtime: 942.535 ms\n\nAnd the slow query (which fetches 1834 rows):\n\n> explain analyse select count(*) from header_fields where message in (select message from mailbox_messages limit 100);\n\n Aggregate (cost=95175.20..95175.21 rows=1 width=0) (actual time=36670.432..36670.435 rows=1 loops=1)\n -> Hash IN Join (cost=3.00..95093.89 rows=32522 width=0) (actual time=27.620..36662.768 rows=1834 loops=1)\n Hash Cond: (\"outer\".message = \"inner\".message)\n -> Seq Scan on header_fields (cost=0.00..85706.78 rows=1811778 width=4) (actual time=22.505..29281.553 rows=1812184 loops=1)\n -> Hash (cost=2.75..2.75 rows=100 width=4) (actual time=1.708..1.708 rows=100 loops=1)\n -> Limit (cost=0.00..1.75 rows=100 width=4) (actual time=0.033..1.182 rows=100 loops=1)\n -> Seq Scan on mailbox_messages (cost=0.00..1912.10 rows=109410 width=4) (actual time=0.023..0.633 rows=100 loops=1)\n Total runtime: 36670.732 ms\n\n(If I set enable_seqscan=off, just to see what happens, then it uses the\nfirst plan, and executes much faster.)\n\nI'd like to understand why this happens, although the problem doesn't\nseem to exist with 8.3. The number of rows retrieved in each case is a\ntiny fraction of the table size, so what causes the decision to change\nbetween 75 and 100?\n\nThis machine has only 512MB of RAM, and is running FreeBSD 5.4. 
It has\nshared_buffers=3072, effective_cache_size=25000, work_mem=sort_mem=2048.\nChanging the last two doesn't seem to have any effect on the plan.\n\nThanks.\n\n-- ams\n", "msg_date": "Thu, 3 Jul 2008 13:29:00 +0530", "msg_from": "Abhijit Menon-Sen <[email protected]>", "msg_from_op": true, "msg_subject": "switchover between index and sequential scans" }, { "msg_contents": "\"Abhijit Menon-Sen\" <[email protected]> writes:\n\n> -> Index Scan using header_fields_message_key on header_fields (cost=0.00..1126.73 rows=325 width=4) (actual time=9.003..12.330 rows=17 loops=75)\n> Index Cond: (header_fields.message = \"outer\".message)\n>\n> -> Seq Scan on header_fields (cost=0.00..85706.78 rows=1811778 width=4) (actual time=22.505..29281.553 rows=1812184 loops=1)\n\nIt looks to me like it's overestimating the number of rows in the index scan\nby 20x and it's overestimating the cost of random accesses by about 100%.\nCombined it's overestimating the cost of the index scan by about 40x.\n\n> This machine has only 512MB of RAM, and is running FreeBSD 5.4. It has\n> shared_buffers=3072, effective_cache_size=25000, work_mem=sort_mem=2048.\n> Changing the last two doesn't seem to have any effect on the plan.\n\nYou could try dramatically increasing effective_cache_size to try to convince\nit that most of the random accesses are cached. Or you could reach for the\nbigger hammer and reduce random_page_cost by about half.\n\nAlso, if this box is dedicated you could make use of more than 24M for shared\nbuffers. Probably something in the region 64M-128M if your database is large\nenough to warrant it.\n\nAnd increase the statistics target on header_fields and re-analyze?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Thu, 03 Jul 2008 11:05:46 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: switchover between index and sequential scans" }, { "msg_contents": "Hi Greg.\n\nAt 2008-07-03 11:05:46 +0100, [email protected] wrote:\n>\n> And increase the statistics target on header_fields and re-analyze?\n\nAha! Thanks for the tip. I just changed the default_statistics_target to\n100 (from 10) and ANALYSEd header_fields and mailbox_messages, and now\nit ALWAYS uses the index scan if I specify a LIMIT. That is,\n\n select count(*) from header_fields where message in\n (select message from mailbox_messages limit N)\n\nalways uses the index scan on header_fields_message_key, even when N is\nequal to the number of rows in mailbox_messages (109410).\n\n Aggregate (cost=30779.98..30779.99 rows=1 width=0) (actual time=175040.923..175040.926 rows=1 loops=1)\n -> Nested Loop (cost=3279.73..30760.93 rows=7617 width=0) (actual time=2114.426..169137.088 rows=1771029 loops=1)\n -> HashAggregate (cost=3279.73..3281.73 rows=200 width=4) (actual time=2076.662..2649.541 rows=109365 loops=1)\n -> Limit (cost=0.00..1912.10 rows=109410 width=4) (actual time=0.029..1386.128 rows=109410 loops=1)\n -> Seq Scan on mailbox_messages (cost=0.00..1912.10 rows=109410 width=4) (actual time=0.022..744.190 rows=109410 loops=1)\n -> Index Scan using header_fields_message_key on header_fields (cost=0.00..136.92 rows=38 width=4) (actual time=0.678..1.416 rows=16 loops=109365)\n Index Cond: (header_fields.message = \"outer\".message)\n Total runtime: 175041.496 ms\n\nNote the massive _under_estimation in the hash aggregate and the\nnestloop. 
If I don't specify a limit, it'll use a seq scan again.\n\nVery interesting.\n\n-- ams\n", "msg_date": "Thu, 3 Jul 2008 17:15:50 +0530", "msg_from": "Abhijit Menon-Sen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: switchover between index and sequential scans" } ]
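The fix reported above was raising default_statistics_target to 100 and re-analysing. A narrower variant of the same idea, sketched with the table and column names from the thread, is to raise the target only for the join column; 100 is simply the value that worked above.

ALTER TABLE header_fields ALTER COLUMN message SET STATISTICS 100;
ANALYZE header_fields;

-- Then re-check the row estimates against the actual counts:
EXPLAIN ANALYZE
SELECT count(*) FROM header_fields
WHERE message IN (SELECT message FROM mailbox_messages LIMIT 100);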
[ { "msg_contents": "Hi,\n\nI have juste some questions about cursor/Fetch mechanisms under postgreSQL.\n\nDo it use only scan table approach when one fetch (next) is executed or \nit takes into account index when this lasts one is able to increase the \nperformance.\n\n \nI did some experimentation wherein I calculated real wall clock time for \na same cursor/Fetch C1 with and without index over one field.\n \nI noticed that the time response is the same !!!!\n\nI analyzed a same query (Q1) encapsulated and not encapsulated in a \ncursor (c1) and i had the following plan:\n\n- for a Query all alone : explain analyze SELECT id_ville, taux_crim, \ntemp_ann FROM ville WHERE (pop between 50000 AND 100000) AND (taux_crim \nbetween 13 AND 16);\nI have the following plan:\n------------------------------------------------------------------------------------------------------------------------\n\"Bitmap Heap Scan on ville (cost=253.82..1551.48 rows=915 width=12) \n(actual time=12.863..24.354 rows=1309 loops=1)\"\n\" Recheck Cond: ((taux_crim >= 13) AND (taux_crim <= 16))\"\n\" Filter: ((pop >= 50000) AND (pop <= 100000))\"\n\" -> Bitmap Index Scan on taux_crim_idx (cost=0.00..253.59 rows=13333 \nwidth=0) (actual time=12.482..12.482 rows=13381 loops=1)\"\n\" Index Cond: ((taux_crim >= 13) AND (taux_crim <= 16))\"\n\"Total runtime: 27.464 ms\"\n------------------------------------------------------------------------------------------------------------------------\n\n- for a same query encapsulated in a curseur: explain analyze declare c1 \ncursor for SELECT id_ville, taux_crim, temp_ann FROM ville WHERE (pop \nbetween 50000 AND 100000) AND (taux_crim between 13 AND 16);\n\ni have another plan:\n-----------------------------------------------------------------------------------------------------------------------\n\"Seq Scan on ville (cost=0.00..3031.00 rows=915 width=12)\"\n\" Filter: ((pop >= 50000) AND (pop <= 100000) AND (taux_crim >= 13) AND \n(taux_crim <= 16))\"\n-----------------------------------------------------------------------------------------------------------------------\n\n\nIf index can increase the performance, why this last one is not used by \na cursor ??? why ?\nwho to calculate the threshold where it is able to use index ?\n\n \nREMARK: this experimentation has been done with a little program wrote \nin ECPG.\n\n-- \nMr. Amine Mokhtari,\nPhd Student,\nIRISA Lab. \nAddr: Irisa / Enssat Technopole Anticipa, 6, rue de Kerampont, BP\n\t80518-22305 Lannion Cedex\n\nEmail : [email protected]\n [email protected]\n [email protected]\n\nFix: +33 (0)2 96 46 91 00\nMob: +33 (0)6 70 87 58 72\nFax: +33 (0)2 96 37 01 99 \n\n", "msg_date": "Thu, 03 Jul 2008 12:17:47 +0200", "msg_from": "Mokhtari Amine <[email protected]>", "msg_from_op": true, "msg_subject": "cursor/Fetch mechanisms under postgreSQL" } ]
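The question above has no reply in this archive. One plausible explanation for the plan difference it reports is that the planner costs a DECLARE CURSOR on the assumption that only part of the result set will ever be fetched, so it leans toward plans that start returning rows immediately (the plain seq scan) over ones that pay a startup cost first (the bitmap scan). A hedged way to compare the two plans side by side, reusing the query text from the message:

EXPLAIN SELECT id_ville, taux_crim, temp_ann
FROM ville
WHERE pop BETWEEN 50000 AND 100000
  AND taux_crim BETWEEN 13 AND 16;

BEGIN;
EXPLAIN DECLARE c1 CURSOR FOR
  SELECT id_ville, taux_crim, temp_ann
  FROM ville
  WHERE pop BETWEEN 50000 AND 100000
    AND taux_crim BETWEEN 13 AND 16;
ROLLBACK;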
[ { "msg_contents": "I have a table with 29K rows total and I need to delete about 80K out of it.\n\nI have a b-tree index on column cola (varchar(255) ) for my where clause to use.\n\nmy \"select count(*) from test where cola = 'abc' runs very fast,\n \nbut my actual \"delete from test where cola = 'abc';\" takes forever, never can finish and I haven't figured why....\n\nIn my explain output, what is that \"Bitmap Heap Scan on table\"? is it a table scan? is my index being used?\n\nHow does delete work? to delete 80K rows that meet my condition, does Postgres find them all and delete them all together or one at a time?\n\n\nby the way, there is a foreign key on another table that references the primary key col0 on table test.\n\nCould some one help me out here?\n\nThanks a lot,\nJessica\n\n\ntestdb=# select count(*) from test;\n count \n--------\n 295793 --total 295,793 rows\n(1 row)\n\nTime: 155.079 ms\n\ntestdb=# select count(*) from test where cola = 'abc';\n count \n-------\n 80998 - need to delete 80,988 rows\n(1 row)\n\n\n\ntestdb=# explain delete from test where cola = 'abc';\n QUERY PLAN \n----------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on test (cost=2110.49..10491.57 rows=79766 width=6)\n Recheck Cond: ((cola)::text = 'abc'::text)\n -> Bitmap Index Scan on test_cola_idx (cost=0.00..2090.55 rows=79766 width=0)\n Index Cond: ((cola)::text = 'abc'::text)\n(4 rows)\n\n\n\n \nI have a table with 29K rows total and I need to delete about 80K out of it.I have a b-tree index on column cola (varchar(255) ) for my where clause to use.my \"select count(*) from test where cola = 'abc' runs very fast, but my actual \"delete from test where cola = 'abc';\" takes forever, never can finish and I haven't figured why....In my explain output, what is that \"Bitmap Heap Scan on table\"? is it a table scan? is my index being used?How does delete work? 
to delete 80K rows that meet my condition, does Postgres find them all and delete them all together or one at a time?by the way, there is a foreign key on another table that references the primary key col0 on table test.Could some one help me out here?Thanks a\n lot,Jessicatestdb=# select count(*) from test; count  -------- 295793  --total 295,793 rows(1 row)Time: 155.079 mstestdb=# select count(*) from test where cola = 'abc'; count ------- 80998  - need to delete 80,988 rows(1 row)testdb=# explain delete from test where cola = 'abc';                                             QUERY PLAN                                            \n ---------------------------------------------------------------------------------------------------- Bitmap Heap Scan on test  (cost=2110.49..10491.57 rows=79766 width=6)   Recheck Cond: ((cola)::text = 'abc'::text)   ->  Bitmap Index Scan on test_cola_idx  (cost=0.00..2090.55 rows=79766 width=0)         Index Cond: ((cola)::text = 'abc'::text)(4 rows)", "msg_date": "Thu, 3 Jul 2008 17:44:40 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "slow delete" }, { "msg_contents": "Jessica Richard wrote:\n> I have a table with 29K rows total and I need to delete about 80K out of it.\n\nI assume you meant 290K or something.\n\n> I have a b-tree index on column cola (varchar(255) ) for my where clause \n> to use.\n> \n> my \"select count(*) from test where cola = 'abc' runs very fast,\n> \n> but my actual \"delete from test where cola = 'abc';\" takes forever, \n> never can finish and I haven't figured why....\n\nWhen you delete, the database server must:\n\n- Check all foreign keys referencing the data being deleted\n- Update all indexes on the data being deleted\n- and actually flag the tuples as deleted by your transaction\n\nAll of which takes time. It's a much slower operation than a query that \njust has to find out how many tuples match the search criteria like your \nSELECT does.\n\nHow many indexes do you have on the table you're deleting from? How many \nforeign key constraints are there to the table you're deleting from?\n\nIf you find that it just takes too long, you could drop the indexes and \nforeign key constraints, do the delete, then recreate the indexes and \nforeign key constraints. This can sometimes be faster, depending on just \nwhat proportion of the table must be deleted.\n\nAdditionally, remember to VACUUM ANALYZE the table after that sort of \nbig change. AFAIK you shouldn't really have to if autovacuum is doing \nits job, but it's not a bad idea anyway.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 04 Jul 2008 13:16:31 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow delete" }, { "msg_contents": "\n> by the way, there is a foreign key on another table that references the \n> primary key col0 on table test.\n\n\tIs there an index on the referencing field in the other table ? Postgres \nmust find the rows referencing the deleted rows, so if you forget to index \nthe referencing column, this can take forever.\n", "msg_date": "Fri, 04 Jul 2008 12:11:17 +0200", "msg_from": "PFC <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow delete" } ]
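Both replies above point at the unindexed foreign key on the referencing table as the first thing to fix. A minimal sketch; the referencing table and its FK column are not named in the thread, so "child_table" and "test_id" below are placeholders.

-- Hypothetical: child_table(test_id) REFERENCES test(col0)
CREATE INDEX child_table_test_id_idx ON child_table (test_id);

-- With that index, each row deleted from "test" costs an index probe on
-- the referencing table instead of a sequential scan of it.
DELETE FROM test WHERE cola = 'abc';
VACUUM ANALYZE test;   -- as suggested above, after the bulk delete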
[ { "msg_contents": "Thanks so much for your help.\n\nI can select the 80K data out of 29K rows very fast, but we I delete them, it always just hangs there(> 4 hours without finishing), not deleting anything at all. Finally, I select pky_col where cola = 'abc', and redirect it to an out put file with a list of pky_col numbers, then put them in to a script with 80k lines of individual delete, then it ran fine, slow but actually doing the delete work:\n\ndelete from test where pk_col = n1;\ndelete from test where pk_col = n2;\n...\n\nMy next question is: what is the difference between \"select\" and \"delete\"? There is another table that has one foreign key to reference the test (parent) table that I am deleting from and this foreign key does not have an index on it (a 330K row table).\n\nDeleting one row at a time is fine: delete from test where pk_col = n1;\n\nbut deleting the big chunk all together (with 80K rows to delete) always hangs: delete from test where cola = 'abc';\n\nI am wondering if I don't have enough memory to hold and carry on the 80k-row delete.....\nbut how come I can select those 80k-row very fast? what is the difference between select and delete?\n\nMaybe the foreign key without an index does play a big role here, a 330K-row table references a 29K-row table will get a lot of table scan on the foreign table to check if each row can be deleted from the parent table... Maybe select from the parent table does not have to check the child table?\n\nThank you for pointing out about dropping the constraint first, I can imagine that it will be a lot faster.\n\nBut what if it is a memory issue that prevent me from deleting the 80K-row all at once, where do I check about the memory issue(buffer pool) how to tune it on the memory side?\n\nThanks a lot,\nJessica\n\n\n\n\n----- Original Message ----\nFrom: Craig Ringer <[email protected]>\nTo: Jessica Richard <[email protected]>\nCc: [email protected]\nSent: Friday, July 4, 2008 1:16:31 AM\nSubject: Re: [PERFORM] slow delete\n\nJessica Richard wrote:\n> I have a table with 29K rows total and I need to delete about 80K out of it.\n\nI assume you meant 290K or something.\n\n> I have a b-tree index on column cola (varchar(255) ) for my where clause \n> to use.\n> \n> my \"select count(*) from test where cola = 'abc' runs very fast,\n> \n> but my actual \"delete from test where cola = 'abc';\" takes forever, \n> never can finish and I haven't figured why....\n\nWhen you delete, the database server must:\n\n- Check all foreign keys referencing the data being deleted\n- Update all indexes on the data being deleted\n- and actually flag the tuples as deleted by your transaction\n\nAll of which takes time. It's a much slower operation than a query that \njust has to find out how many tuples match the search criteria like your \nSELECT does.\n\nHow many indexes do you have on the table you're deleting from? How many \nforeign key constraints are there to the table you're deleting from?\n\nIf you find that it just takes too long, you could drop the indexes and \nforeign key constraints, do the delete, then recreate the indexes and \nforeign key constraints. This can sometimes be faster, depending on just \nwhat proportion of the table must be deleted.\n\nAdditionally, remember to VACUUM ANALYZE the table after that sort of \nbig change. 
AFAIK you shouldn't really have to if autovacuum is doing \nits job, but it's not a bad idea anyway.\n\n--\nCraig Ringer\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n \nThanks so much for your help.I can select the 80K data out of 29K rows very fast, but we I delete them, it always just hangs there(> 4 hours without finishing), not deleting anything at all. Finally, I select pky_col where cola = 'abc', and redirect it to an out put file with a list of pky_col numbers, then put them in to a script with 80k lines of individual delete, then it ran fine, slow but actually doing the delete work:delete from test where pk_col = n1;delete from test where pk_col = n2;...My next question is: what is the difference between \"select\" and \"delete\"? There is another table that has one foreign key to reference the test (parent) table that I am deleting from and this foreign key does not have an index on it (a 330K row\n table).Deleting one row at a time is fine: delete from test where pk_col = n1;but deleting the big chunk all together (with  80K rows to delete) always hangs: delete from test where cola = 'abc';I am wondering if I don't have enough memory to hold and carry on the 80k-row delete.....but how come I can select those 80k-row very fast? what is the difference   between select and delete?Maybe the foreign key without an index does play a big role here, a 330K-row table references a 29K-row table will get a lot of table scan on the foreign table to check if each row can be deleted from the parent table... Maybe select from the parent table does not have to check the child table?Thank you for pointing out about dropping the constraint first, I can imagine that  it will be a lot faster.But what if  it is a memory issue that prevent me from deleting the 80K-row all at once, where do I\n check about the memory issue(buffer pool) how to tune it on the memory side?Thanks a lot,Jessica----- Original Message ----From: Craig Ringer <[email protected]>To: Jessica Richard <[email protected]>Cc: [email protected]: Friday, July 4, 2008 1:16:31 AMSubject: Re: [PERFORM] slow delete\nJessica Richard wrote:> I have a table with 29K rows total and I need to delete about 80K out of it.I assume you meant 290K or something.> I have a b-tree index on column cola (varchar(255) ) for my where clause > to use.> > my \"select count(*) from test where cola = 'abc' runs very fast,>  > but my actual \"delete from test where cola = 'abc';\" takes forever, > never can finish and I haven't figured why....When you delete, the database server must:- Check all foreign keys referencing the data being deleted- Update all indexes on the data being deleted- and actually flag the tuples as deleted by your transactionAll of which takes time. It's a much slower operation than a query that just has to find out how many tuples match the search criteria like your SELECT does.How many indexes do you have on the table you're deleting from? How many\n foreign key constraints are there to the table you're deleting from?If you find that it just takes too long, you could drop the indexes and foreign key constraints, do the delete, then recreate the indexes and foreign key constraints. This can sometimes be faster, depending on just what proportion of the table must be deleted.Additionally, remember to VACUUM ANALYZE the table after that sort of big change. 
AFAIK you shouldn't really have to if autovacuum is doing its job, but it's not a bad idea anyway.--Craig Ringer-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 4 Jul 2008 05:30:06 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "Re: slow delete" }, { "msg_contents": "> My next question is: what is the difference between \"select\" and \"delete\"?\n> There is another table that has one foreign key to reference the test\n> (parent) table that I am deleting from and this foreign key does not have\n> an index on it (a 330K row table).\n\nThe difference is that with SELECT you're not performing any modifications\nto the data, while with DELETE you are. That means that with DELETE you\nmay have a lot of overhead due to FK checks etc.\n\nSomeone already pointed out that if you reference a table A from table B\n(using a foreign key), then you have to check FK in case of DELETE, and\nthat may knock the server down if the table B is huge and does not have an\nindex on the FK column.\n\n> Deleting one row at a time is fine: delete from test where pk_col = n1;\n>\n> but deleting the big chunk all together (with 80K rows to delete) always\n> hangs: delete from test where cola = 'abc';\n>\n> I am wondering if I don't have enough memory to hold and carry on the\n> 80k-row delete.....\n> but how come I can select those 80k-row very fast? what is the difference\n> between select and delete?\n>\n> Maybe the foreign key without an index does play a big role here, a\n> 330K-row table references a 29K-row table will get a lot of table scan on\n> the foreign table to check if each row can be deleted from the parent\n> table... Maybe select from the parent table does not have to check the\n> child table?\n\nYes, and PFC already pointed this out.\n\nTomas\n\n", "msg_date": "Fri, 4 Jul 2008 15:00:49 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: slow delete" }, { "msg_contents": "On Friday 04 July 2008, [email protected] wrote:\n> > My next question is: what is the difference between \"select\" and\n> > \"delete\"? There is another table that has one foreign key to reference\n> > the test (parent) table that I am deleting from and this foreign key\n> > does not have an index on it (a 330K row table).\n>\n\nYeah you need to fix that. You're doing 80,000 sequential scans of that \ntable to do your delete. That's a whole lot of memory access ...\n\nI don't let people here create foreign key relationships without matching \nindexes - they always cause problems otherwise.\n\n-- \nAlan\n", "msg_date": "Fri, 4 Jul 2008 08:48:19 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slow delete" } ]
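One way to confirm where the time goes, following the replies above: EXPLAIN ANALYZE of a DELETE also reports time spent in the triggers that enforce foreign keys. A hedged, single-row sketch using the column names from the thread; the key value is made up, and the ROLLBACK keeps the row in place.

BEGIN;
EXPLAIN ANALYZE DELETE FROM test WHERE pk_col = 12345;   -- any existing key
ROLLBACK;
-- If a "Trigger for constraint ..." line accounts for most of the runtime,
-- the unindexed foreign key on the referencing table is the bottleneck,
-- as the last reply suggests.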
[ { "msg_contents": "How can I tell if my work_mem configuration is enough to support all Postgres user activities on the server I am managing?\n\nWhere do I find the indication if the number is lower than needed.\n\nThanks,\nJessica\n\n\n\n \nHow can I tell if my work_mem configuration is enough to support all  Postgres user activities on the server I am managing?Where do I find the indication if the number is lower than needed.Thanks,Jessica", "msg_date": "Sat, 5 Jul 2008 04:24:49 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "How much work_mem to configure..." }, { "msg_contents": "On Sat, Jul 5, 2008 at 5:24 AM, Jessica Richard <[email protected]> wrote:\n> How can I tell if my work_mem configuration is enough to support all\n> Postgres user activities on the server I am managing?\n>\n> Where do I find the indication if the number is lower than needed.\n\nYou kinda have to do some math with fudge factors involved. As\nwork_mem gets smaller, sorts spill over to disk and get slower, and\nhash_aggregate joins get avoided because they need to fit into memory.\n\nAs you increase work_mem, sorts can start happening in memory (or with\nless disk writing) and larger and larger sets can have hash_agg joins\nperformed on them because they can fit in memory.\n\nBut there's a dark side to work_mem being too large, and that is that\nyou can run your machine out of free memory with too many large sorts\nhappening, and then the machine will slow to a crawl as it swaps out\nthe very thing you're trying to do in memory.\n\nSo, I tend to plan for about 1/4 of memory used for shared_buffers,\nand up to 1/4 used for sorts so there's plenty of head room and the OS\nto cache files, which is also important for performance. If you plan\non having 20 users accessing the database at once, then you figure\neach one might on average run a query with 2 sorts, and that you'll be\nusing a maximum of 20*2*work_mem for those sorts etc...\n\nIf it's set to 8M, then you'd get approximately 320 Meg max used by\nall the sorts flying at the same time. You can see why high work_mem\nand high max_connections settings together can be dangerous. and why\npooling connections to limit the possibility of such a thing is useful\ntoo.\n\nGenerally it's a good idea to keep it in the 4 to 16 meg range on most\nmachines to prevent serious issues, but if you're going to allow 100s\nof connections at once, then you need to look at limiting it based on\nhow much memory your server has.\n", "msg_date": "Sun, 6 Jul 2008 00:04:36 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work_mem to configure..." }, { "msg_contents": "In response to \"Scott Marlowe\" <[email protected]>:\n\n> On Sat, Jul 5, 2008 at 5:24 AM, Jessica Richard <[email protected]> wrote:\n> > How can I tell if my work_mem configuration is enough to support all\n> > Postgres user activities on the server I am managing?\n> >\n> > Where do I find the indication if the number is lower than needed.\n> \n> You kinda have to do some math with fudge factors involved. 
As\n> work_mem gets smaller, sorts spill over to disk and get slower, and\n> hash_aggregate joins get avoided because they need to fit into memory.\n> \n> As you increase work_mem, sorts can start happening in memory (or with\n> less disk writing) and larger and larger sets can have hash_agg joins\n> performed on them because they can fit in memory.\n> \n> But there's a dark side to work_mem being too large, and that is that\n> you can run your machine out of free memory with too many large sorts\n> happening, and then the machine will slow to a crawl as it swaps out\n> the very thing you're trying to do in memory.\n> \n> So, I tend to plan for about 1/4 of memory used for shared_buffers,\n> and up to 1/4 used for sorts so there's plenty of head room and the OS\n> to cache files, which is also important for performance. If you plan\n> on having 20 users accessing the database at once, then you figure\n> each one might on average run a query with 2 sorts, and that you'll be\n> using a maximum of 20*2*work_mem for those sorts etc...\n> \n> If it's set to 8M, then you'd get approximately 320 Meg max used by\n> all the sorts flying at the same time. You can see why high work_mem\n> and high max_connections settings together can be dangerous. and why\n> pooling connections to limit the possibility of such a thing is useful\n> too.\n> \n> Generally it's a good idea to keep it in the 4 to 16 meg range on most\n> machines to prevent serious issues, but if you're going to allow 100s\n> of connections at once, then you need to look at limiting it based on\n> how much memory your server has.\n\nI do have one thing to add: if you're using 8.3, there's a log_temp_files\nconfig variable that you can use to monitor when your sorts spill over\nonto disk. It doesn't change anything that Scott said, it simply gives\nyou another way to monitor what's happening and thus have better\ninformation to tune by.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 7 Jul 2008 07:15:46 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How much work_mem to configure..." } ]
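Two concrete checks on 8.3 that follow from the log_temp_files suggestion above; the table and column in the EXPLAIN are placeholders, and SET log_temp_files needs superuser rights.

-- Log every sort/hash that spills to a temporary file (0 = no size threshold)
SET log_temp_files = 0;

-- 8.3's EXPLAIN ANALYZE also reports whether a sort fit in work_mem:
SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT * FROM some_table ORDER BY some_column;
--   "Sort Method: external merge  Disk: ...kB"  -> spilled to disk
--   "Sort Method: quicksort  Memory: ...kB"     -> fit in work_mem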
[ { "msg_contents": "Hi,\n\nHere http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm I read:\n\"\"\"\nCombining these two, an optimal fstab for the WAL might look like this:\n\n/dev/hda2 /var ext3 defaults,writeback,noatime 1 2\n\"\"\"\nIs this info accurate?\n\nI also read on other document from the \"technical documentation\" that\nfor partitions where you have the tables and indexes is better to have\njournaling and for partitions for the WAL is better to not have\njournalling...\n\ni tought it has to be the other way (tables & indices without\njournalling, WAL with journalling)\n\n-- \nregards,\nJaime Casanova\nSoporte y capacitación de PostgreSQL\nGuayaquil - Ecuador\nCel. (593) 87171157\n", "msg_date": "Sun, 6 Jul 2008 00:04:31 -0500", "msg_from": "\"Jaime Casanova\" <[email protected]>", "msg_from_op": true, "msg_subject": "filesystem options for WAL" }, { "msg_contents": "On Sun, 6 Jul 2008, Jaime Casanova wrote:\n\n> Here http://www.westnet.com/~gsmith/content/postgresql/TuningPGWAL.htm I read:\n> \"\"\"\n> Combining these two, an optimal fstab for the WAL might look like this:\n>\n> /dev/hda2 /var ext3 defaults,writeback,noatime 1 2\n> \"\"\"\n> Is this info accurate?\n\nNah, that guy doesn't know what he's talking about. That article is \noverdue for an overhaul.\n\n> I also read on other document from the \"technical documentation\" that\n> for partitions where you have the tables and indexes is better to have\n> journaling and for partitions for the WAL is better to not have\n> journalling...\n\nThe WAL is itself a sort of journal, and the way writes to it are done the \nfilesystem level journaling that ext3 provides doesn't buy you much beyond \nadditional overhead. Check out \nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \nfor an extensive comparison of different options here, where you can see \nthat using ext2 instead can be much more efficient. The main downside of \next2 is that you might get longer boot times from running fsck, but it \nwon't be any less reliable for database use though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 6 Jul 2008 11:18:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: filesystem options for WAL" } ]
[ { "msg_contents": "I'm spending a third day testing with the ioDrive, and it occurred to\nme that I should normalize my tests by mounting the database on a\nramdisk. The results were surprisingly low. On the single 2.2GHz\nAthlon, the maximum tps seems to be 1450. This is achieved with a\nsingle connection. I/O rates to and from the ramdisk never exceed\n50MB/s on a one-minute average.\n\nWith the flash device on the same benchmark, the tps rate is 1350,\nmeaning that as far as PostgreSQL is concerned, on this machine, the\nflash device achieves 90% of the best possible performance.\n\nQuestion being: what's the bottleneck? Is PG lock-bound?\n\n-jwb\n", "msg_date": "Mon, 7 Jul 2008 14:05:37 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Practical upper limits of pgbench read/write tps with 8.3" }, { "msg_contents": "On Mon, 7 Jul 2008, Jeffrey Baker wrote:\n\n> On the single 2.2GHz Athlon, the maximum tps seems to be 1450...what's \n> the bottleneck? Is PG lock-bound?\n\nIt can become lock-bound if you don't make the database scale \nsignificantly larger than the number of clients, but that's probably not \nyour problem. The pgbench client driver program itself is pretty CPU \nintensive and can suffer badly from kernel issues. I am unsurprised you \ncan only hit 1450 with a single CPU. On systems with multiple CPUs where \nthe single CPU running the pgbench client is much faster than your 2.2GHz \nAthlon, you'd probably be able to get a few thousand TPS, but eventually \nthe context switching of the client itself can become a bottleneck.\n\nRunning pgbench against a RAM disk is a good way to find out where the \nsystem bottlenecks at without disk I/O involvement, you might try that \ntest on your larger server when you get a chance. One interesting thing \nto watch you may not have tried yet is running top and seeing how close to \na single CPU pgbench itself is running at. If you've got 4 CPUs, and the \npgbench client program shows 25% utilization, it is now the bottleneck \nrather than whatever you thought you were measuring. I thought this might \nbe the case in the last test results you reported on Friday but didn't \nhave a chance to comment on it until now.\n\nOne thing you can try here is running pgbench itself on another server \nthan the one hosting the database, but that seems to top out at a few \nthousand TPS as well; may get higher than you've been seeing though.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 7 Jul 2008 18:22:05 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Practical upper limits of pgbench read/write tps with\n 8.3" }, { "msg_contents": "On Mon, Jul 7, 2008 at 3:22 PM, Greg Smith <[email protected]> wrote:\n> On Mon, 7 Jul 2008, Jeffrey Baker wrote:\n>\n>> On the single 2.2GHz Athlon, the maximum tps seems to be 1450...what's the\n>> bottleneck? Is PG lock-bound?\n>\n> It can become lock-bound if you don't make the database scale significantly\n> larger than the number of clients, but that's probably not your problem.\n> The pgbench client driver program itself is pretty CPU intensive and can\n> suffer badly from kernel issues. I am unsurprised you can only hit 1450\n> with a single CPU. 
On systems with multiple CPUs where the single CPU\n> running the pgbench client is much faster than your 2.2GHz Athlon, you'd\n> probably be able to get a few thousand TPS, but eventually the context\n> switching of the client itself can become a bottleneck.\n\nOn a 2GHz Core 2 Duo the best tps achieved is 2300, with -c 8.\npgbench itself gets around 10% of the CPU (user + sys for pgbench is\n7s of 35s wall clock time, or 70 CPU-seconds, thus 10%).\n\nI suppose you could still blame it on ctxsw between pgbench and pg\nitself, but the results are not better with pgbench on another machine\ncross-connected with gigabit ethernet.\n\n-jwb\n", "msg_date": "Mon, 7 Jul 2008 20:39:28 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Practical upper limits of pgbench read/write tps with 8.3" } ]
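A sketch of the check suggested above for confirming whether the pgbench client itself is the bottleneck. Only the -c 8 client count comes from the thread; the database name, scale factor, transaction count, and the host name "dbserver" are illustrative placeholders.

    createdb bench
    pgbench -i -s 100 bench              # initialize; scale factor is illustrative
    pgbench -c 8 -t 10000 bench &        # the 8-client read/write case reported above
    top -b -d 5 -n 6 | grep pgbench      # a few samples; if pgbench sits near 100% of
                                         # one core, the client (not the server) is the limit

    # or drive the same test from a second machine, as also suggested above
    pgbench -h dbserver -c 8 -t 10000 bench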
[ { "msg_contents": "Hi i have experienced really bad performance on both FreeBSD and linux, with syslog,\nwhen logging statements involving bytea of size ~ 10 Mb.\nConsider this scenario:\npostgres@dynacom=# \\d marinerpapers_atts\n Table \"public.marinerpapers_atts\"\n Column | Type | Modifiers \n-------------+--------------------------+--------------------------------------------------------------------------------\n id | integer | not null default nextval(('public.marinerpapers_atts_id_seq'::text)::regclass)\n marinerid | integer | not null\n filename | text | not null\n mimetype | character varying(50) | not null\n datecreated | timestamp with time zone | not null default now()\n docsrc | bytea | not null\nIndexes:\n \"marinerpapers_atts_pkey\" PRIMARY KEY, btree (id)\n \"marinerpapers_atts_ukey\" UNIQUE, btree (marinerid, filename)\n \"marinerpapers_atts_marinerid\" btree (marinerid)\nForeign-key constraints:\n \"$1\" FOREIGN KEY (marinerid) REFERENCES mariner(id) ON DELETE CASCADE\n\nThe way the insert is done is like\nINSERT INTO marinerpapers_atts(marinerid,filename,mimetype,docsrc) VALUES(1,'foo.pdf','aplication/pdf','%PDF-1.3\\\\0124 0 o....%%EOF\\\\012');\n\nWhen someone tries to insert a row in the above table which results to an error (because e.g. violates the \n\"marinerpapers_atts_ukey\" constraint), the whole statement is logged to the logging system.\n\nFile sizes of about 3M result in actual logging output of ~ 10Mb.\nIn this case, the INSERT *needs* 20 minutes to return. This is because the logging through syslog seems to severely slow the system.\nIf instead, i use stderr, even with logging_collector=on, the same statement needs 15 seconds to return.\n\nI am using syslog since like the stone age, and i would like to stick with it, however this morning i was caught by this bad performance\nand i am planning moving to stderr + logging_collector.\n\nP.S.\nIs there a way to better tune pgsql/syslog in order to work more efficiently in cases like that?\nI know it is a corner case, however i thought i should post my experiences.\nThanx\n\n-- \nAchilleas Mantzios\n", "msg_date": "Tue, 8 Jul 2008 15:24:46 +0300", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": true, "msg_subject": "syslog performance when logging big statements" }, { "msg_contents": "Achilleas Mantzios <[email protected]> writes:\n> In this case, the INSERT *needs* 20 minutes to return. This is because the logging through syslog seems to severely slow the system.\n> If instead, i use stderr, even with logging_collector=on, the same statement needs 15 seconds to return.\n\nHmm. There's a function in elog.c that breaks log messages into chunks\nfor syslog. I don't think anyone's ever looked hard at its performance\n--- maybe there's an O(N^2) behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jul 2008 10:35:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements " }, { "msg_contents": "Στις Tuesday 08 July 2008 17:35:16 ο/η Tom Lane έγραψε:\n> Achilleas Mantzios <[email protected]> writes:\n> > In this case, the INSERT *needs* 20 minutes to return. This is because the logging through syslog seems to severely slow the system.\n> > If instead, i use stderr, even with logging_collector=on, the same statement needs 15 seconds to return.\n> \n> Hmm. There's a function in elog.c that breaks log messages into chunks\n> for syslog. 
I don't think anyone's ever looked hard at its performance\n> --- maybe there's an O(N^2) behavior?\n> \n> \t\t\tregards, tom lane\n> \n\nThanx,\ni changed PG_SYSLOG_LIMIT in elog.c:1269 from 128 to 1048576\n#ifndef PG_SYSLOG_LIMIT\n/* #define PG_SYSLOG_LIMIT 128 */\n#define PG_SYSLOG_LIMIT 1048576\n#endif\n\nand i got super fast stderr performance. :)\n\nHowever, i noticed a certain amount of data in the log is lost.\nDidnt dig much to the details tho.\n\n-- \nAchilleas Mantzios\n", "msg_date": "Tue, 8 Jul 2008 18:21:57 +0300", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": true, "msg_subject": "Re: syslog performance when logging big statements" }, { "msg_contents": "Achilleas Mantzios <[email protected]> writes:\n> Στις Tuesday 08 July 2008 17:35:16 ο/η Tom Lane έγραψε:\n>> Hmm. There's a function in elog.c that breaks log messages into chunks\n>> for syslog. I don't think anyone's ever looked hard at its performance\n>> --- maybe there's an O(N^2) behavior?\n\n> Thanx,\n> i changed PG_SYSLOG_LIMIT in elog.c:1269 from 128 to 1048576\n> and i got super fast stderr performance. :)\n\nDoesn't seem like a very good solution given its impact on the stack\ndepth right there.\n\nLooking at the code, the only bit that looks like repeated work are the\nrepeated calls to strchr(), which would not be an issue in the \"typical\"\ncase where the very long message contains reasonably frequent newlines.\nAm I right in guessing that your problematic statement contained\nmegabytes worth of text with nary a newline?\n\nIf so, we can certainly fix it by arranging to remember the last\nstrchr() result across loop iterations, but I'd like to confirm the\ntheory before doing that work.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jul 2008 14:34:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements " }, { "msg_contents": "\nOn Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:\n\n> File sizes of about 3M result in actual logging output of ~ 10Mb.\n> In this case, the INSERT *needs* 20 minutes to return. This is \n> because the logging through syslog seems to severely slow the system.\n> If instead, i use stderr, even with logging_collector=on, the same \n> statement needs 15 seconds to return.\n>\n\nIn syslog.conf is the destination for PG marked with a \"-\"? (ie -/var/ \nlog/pg.log) which tells syslog to not sync after each line logged. \nThat could explain a large chunk of the difference in time.\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Tue, 08 Jul 2008 15:00:06 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements" }, { "msg_contents": "Jeff <[email protected]> writes:\n> On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:\n>> File sizes of about 3M result in actual logging output of ~ 10Mb.\n>> In this case, the INSERT *needs* 20 minutes to return. This is \n>> because the logging through syslog seems to severely slow the system.\n>> If instead, i use stderr, even with logging_collector=on, the same \n>> statement needs 15 seconds to return.\n\n> In syslog.conf is the destination for PG marked with a \"-\"? (ie -/var/ \n> log/pg.log) which tells syslog to not sync after each line logged. \n> That could explain a large chunk of the difference in time.\n\nI experimented with this a bit here. 
There definitely is an O(N^2)\npenalty from the repeated strchr() calls, but it doesn't really start\nto hurt till 1MB or so statement length. Even with that patched,\nsyslog logging pretty much sucks performance-wise. Here are the numbers\nI got on a Fedora 8 workstation, testing the time to log a statement of\nthe form SELECT length('123456...lots of data, no newlines...7890');\n\nstatement length\t\t\t1MB\t\t10MB\n\nCVS HEAD\t\t\t\t2523ms\t\t215588ms\n+ patch to fix repeated strchr\t\t 529ms\t\t 36734ms\nafter turning off syslogd's fsync\t 569ms\t\t 5881ms\nPG_SYSLOG_LIMIT 1024, fsync on\t\t 216ms\t\t 2532ms\nPG_SYSLOG_LIMIT 1024, no fsync\t\t 242ms\t\t 2692ms\nFor comparison purposes:\nlogging statements to stderr\t\t 155ms\t\t 2042ms\nexecute statement without logging\t 42ms\t\t 520ms\n\nThis machine is running a cheap IDE drive that caches writes, so\nthe lack of difference between fsync off and fsync on is not too\nsurprising --- on a machine with server-grade drives there'd be\na lot more difference. (The fact that there is a difference in\nthe 10MB case probably reflects filling the drive's write cache.)\n\nOn my old HPUX machine, where fsync really works (and the syslogd\ndoesn't seem to allow turning it off), the 1MB case takes\n195957ms with the strchr patch, 22922ms at PG_SYSLOG_LIMIT=1024.\n\nSo there's a fairly clear case to be made for fixing the repeated\nstrchr, but I also think that there's a case for jacking up\nPG_SYSLOG_LIMIT. As far as I can tell the current value of 128\nwas chosen *very* conservatively without thought for performance:\nhttp://archives.postgresql.org/pgsql-hackers/2000-05/msg01242.php\n\nAt the time we were looking at evidence that the then-current\nLinux syslogd got tummyache with messages over about 1KB:\nhttp://archives.postgresql.org/pgsql-hackers/2000-05/msg00880.php\n\nSome experimentation with the machines I have handy now says that\n\nFedora 8:\t\ttruncates messages at 2KB (including syslog's header)\nHPUX 10.20 (ancient):\tditto\nMac OS X 10.5.3:\tdrops messages if longer than about 1900 bytes\n\nSo it appears to me that setting PG_SYSLOG_LIMIT = 1024 would be\nperfectly safe on current systems (and probably old ones too),\nand would give at least a factor of two speedup for logging long\nstrings --- more like a factor of 8 if syslogd is fsync'ing.\n\nComments? Anyone know of systems where this is too high?\nPerhaps we should make that change only in HEAD, not in the\nback branches, or crank it up only to 512 in the back branches?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jul 2008 17:39:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements " }, { "msg_contents": "On Tue, 8 Jul 2008, Tom Lane wrote:\n\n> Jeff <[email protected]> writes:\n>> On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:\n>>> File sizes of about 3M result in actual logging output of ~ 10Mb.\n>>> In this case, the INSERT *needs* 20 minutes to return. This is\n>>> because the logging through syslog seems to severely slow the system.\n>>> If instead, i use stderr, even with logging_collector=on, the same\n>>> statement needs 15 seconds to return.\n>\n>> In syslog.conf is the destination for PG marked with a \"-\"? (ie -/var/\n>> log/pg.log) which tells syslog to not sync after each line logged.\n>> That could explain a large chunk of the difference in time.\n>\n> I experimented with this a bit here. 
There definitely is an O(N^2)\n> penalty from the repeated strchr() calls, but it doesn't really start\n> to hurt till 1MB or so statement length. Even with that patched,\n> syslog logging pretty much sucks performance-wise. Here are the numbers\n> I got on a Fedora 8 workstation, testing the time to log a statement of\n> the form SELECT length('123456...lots of data, no newlines...7890');\n>\n> statement length\t\t\t1MB\t\t10MB\n>\n> CVS HEAD\t\t\t\t2523ms\t\t215588ms\n> + patch to fix repeated strchr\t\t 529ms\t\t 36734ms\n> after turning off syslogd's fsync\t 569ms\t\t 5881ms\n> PG_SYSLOG_LIMIT 1024, fsync on\t\t 216ms\t\t 2532ms\n> PG_SYSLOG_LIMIT 1024, no fsync\t\t 242ms\t\t 2692ms\n> For comparison purposes:\n> logging statements to stderr\t\t 155ms\t\t 2042ms\n> execute statement without logging\t 42ms\t\t 520ms\n>\n> This machine is running a cheap IDE drive that caches writes, so\n> the lack of difference between fsync off and fsync on is not too\n> surprising --- on a machine with server-grade drives there'd be\n> a lot more difference. (The fact that there is a difference in\n> the 10MB case probably reflects filling the drive's write cache.)\n>\n> On my old HPUX machine, where fsync really works (and the syslogd\n> doesn't seem to allow turning it off), the 1MB case takes\n> 195957ms with the strchr patch, 22922ms at PG_SYSLOG_LIMIT=1024.\n>\n> So there's a fairly clear case to be made for fixing the repeated\n> strchr, but I also think that there's a case for jacking up\n> PG_SYSLOG_LIMIT. As far as I can tell the current value of 128\n> was chosen *very* conservatively without thought for performance:\n> http://archives.postgresql.org/pgsql-hackers/2000-05/msg01242.php\n>\n> At the time we were looking at evidence that the then-current\n> Linux syslogd got tummyache with messages over about 1KB:\n> http://archives.postgresql.org/pgsql-hackers/2000-05/msg00880.php\n>\n> Some experimentation with the machines I have handy now says that\n>\n> Fedora 8:\t\ttruncates messages at 2KB (including syslog's header)\n> HPUX 10.20 (ancient):\tditto\n> Mac OS X 10.5.3:\tdrops messages if longer than about 1900 bytes\n>\n> So it appears to me that setting PG_SYSLOG_LIMIT = 1024 would be\n> perfectly safe on current systems (and probably old ones too),\n> and would give at least a factor of two speedup for logging long\n> strings --- more like a factor of 8 if syslogd is fsync'ing.\n>\n> Comments? Anyone know of systems where this is too high?\n> Perhaps we should make that change only in HEAD, not in the\n> back branches, or crank it up only to 512 in the back branches?\n\nwith linux ext2/ext3 filesystems I have seen similar problems when the \nsyslog starts getting large. there are several factors here\n\n1. fsync after each write unless you have \"-\" in syslog.conf (only \navailable on linux AFAIK)\n\n2. ext2/ext3 tend to be very inefficiant when doing appends to large \nfiles. each write requires that the syslog daemon seek to the end of the \nfile (becouse something else may have written to the file in the meantime) \nand with the small block sizes and chaining of indirect blocks this can \nstart to be painful when logfiles get up in to the MB range.\n\nnote that you see this same problem when you start to get lots of \nfiles in one directory as well. 
even if you delete a lot of files the \ndirectory itself is still large and this can cause serious performance \nproblems.\n\nother filesystems are much less sensitive to file (and directory) sizes.\n\nmy suggestion would be to first make sure you are doing async writes to \nsyslog, and then try putting the logfiles on different filesystems to see \nhow they differ. personally I use XFS most of the time where I expect lots \nof files or large files.\n\nDavid Lang\n", "msg_date": "Tue, 8 Jul 2008 17:47:34 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements " }, { "msg_contents": "> Jeff <[email protected]> writes:\n> > On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:\n> >> File sizes of about 3M result in actual logging output of ~ 10Mb.\n> >> In this case, the INSERT *needs* 20 minutes to return. This is \n> >> because the logging through syslog seems to severely slow the system.\n> >> If instead, i use stderr, even with logging_collector=on, the same \n> >> statement needs 15 seconds to return.\n> \n> > In syslog.conf is the destination for PG marked with a \"-\"? (ie -/var/ \n> > log/pg.log) which tells syslog to not sync after each line logged. \n> > That could explain a large chunk of the difference in time.\n> \n> I experimented with this a bit here. There definitely is an O(N^2)\n> penalty from the repeated strchr() calls, but it doesn't really start\n> to hurt till 1MB or so statement length. Even with that patched,\n> syslog logging pretty much sucks performance-wise. Here are the numbers\n> I got on a Fedora 8 workstation, testing the time to log a statement of\n> the form SELECT length('123456...lots of data, no newlines...7890');\n> \n> statement length\t\t\t1MB\t\t10MB\n> \n> CVS HEAD\t\t\t\t2523ms\t\t215588ms\n> + patch to fix repeated strchr\t\t 529ms\t\t 36734ms\n> after turning off syslogd's fsync\t 569ms\t\t 5881ms\n> PG_SYSLOG_LIMIT 1024, fsync on\t\t 216ms\t\t 2532ms\n> PG_SYSLOG_LIMIT 1024, no fsync\t\t 242ms\t\t 2692ms\n> For comparison purposes:\n> logging statements to stderr\t\t 155ms\t\t 2042ms\n> execute statement without logging\t 42ms\t\t 520ms\n> \n> This machine is running a cheap IDE drive that caches writes, so\n> the lack of difference between fsync off and fsync on is not too\n> surprising --- on a machine with server-grade drives there'd be\n> a lot more difference. (The fact that there is a difference in\n> the 10MB case probably reflects filling the drive's write cache.)\n> \n> On my old HPUX machine, where fsync really works (and the syslogd\n> doesn't seem to allow turning it off), the 1MB case takes\n> 195957ms with the strchr patch, 22922ms at PG_SYSLOG_LIMIT=1024.\n> \n> So there's a fairly clear case to be made for fixing the repeated\n> strchr, but I also think that there's a case for jacking up\n> PG_SYSLOG_LIMIT. 
As far as I can tell the current value of 128\n> was chosen *very* conservatively without thought for performance:\n> http://archives.postgresql.org/pgsql-hackers/2000-05/msg01242.php\n> \n> At the time we were looking at evidence that the then-current\n> Linux syslogd got tummyache with messages over about 1KB:\n> http://archives.postgresql.org/pgsql-hackers/2000-05/msg00880.php\n> \n> Some experimentation with the machines I have handy now says that\n> \n> Fedora 8:\t\ttruncates messages at 2KB (including syslog's header)\n> HPUX 10.20 (ancient):\tditto\n> Mac OS X 10.5.3:\tdrops messages if longer than about 1900 bytes\n> \n> So it appears to me that setting PG_SYSLOG_LIMIT = 1024 would be\n> perfectly safe on current systems (and probably old ones too),\n> and would give at least a factor of two speedup for logging long\n> strings --- more like a factor of 8 if syslogd is fsync'ing.\n> \n> Comments? Anyone know of systems where this is too high?\n> Perhaps we should make that change only in HEAD, not in the\n> back branches, or crank it up only to 512 in the back branches?\n\nI'm a little bit worried about cranking up PG_SYSLOG_LIMIT in the back\nbranches. Cranking it up will definitely change syslog messages text\nstyle and might confuse syslog handling scripts(I have no evince that\nsuch scripts exist though). So I suggest to change PG_SYSLOG_LIMIT\nonly in CVS HEAD.\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\n", "msg_date": "Wed, 09 Jul 2008 10:31:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements " }, { "msg_contents": "Tatsuo Ishii <[email protected]> writes:\n> I'm a little bit worried about cranking up PG_SYSLOG_LIMIT in the back\n> branches. Cranking it up will definitely change syslog messages text\n> style and might confuse syslog handling scripts(I have no evince that\n> such scripts exist though). So I suggest to change PG_SYSLOG_LIMIT\n> only in CVS HEAD.\n\nHmm, good point. It would be an externally visible behavior change,\nnot just a speedup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jul 2008 22:08:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements " }, { "msg_contents": "Στις Tuesday 08 July 2008 21:34:01 ο/η Tom Lane έγραψε:\n> Achilleas Mantzios <[email protected]> writes:\n> > Στις Tuesday 08 July 2008 17:35:16 ο/η Tom Lane έγραψε:\n> >> Hmm. There's a function in elog.c that breaks log messages into chunks\n> >> for syslog. I don't think anyone's ever looked hard at its performance\n> >> --- maybe there's an O(N^2) behavior?\n> >\n> > Thanx,\n> > i changed PG_SYSLOG_LIMIT in elog.c:1269 from 128 to 1048576\n> > and i got super fast stderr performance. 
:)\n>\n> Doesn't seem like a very good solution given its impact on the stack\n> depth right there.\n>\n> Looking at the code, the only bit that looks like repeated work are the\n> repeated calls to strchr(), which would not be an issue in the \"typical\"\n> case where the very long message contains reasonably frequent newlines.\n> Am I right in guessing that your problematic statement contained\n> megabytes worth of text with nary a newline?\n\nYes it was the input source of a bytea field containing tiff or pdf data,\nof size of some 1-3 megabytes, and it did not contain (many) newlines.\n\n>\n> If so, we can certainly fix it by arranging to remember the last\n> strchr() result across loop iterations, but I'd like to confirm the\n> theory before doing that work.\n>\n> \t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 9 Jul 2008 15:31:05 +0300", "msg_from": "\"achill@matrix\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements" }, { "msg_contents": "Στις Wednesday 09 July 2008 03:47:34 ο/η [email protected] έγραψε:\n> On Tue, 8 Jul 2008, Tom Lane wrote:\n> > Jeff <[email protected]> writes:\n> >> On Jul 8, 2008, at 8:24 AM, Achilleas Mantzios wrote:\n> >>> File sizes of about 3M result in actual logging output of ~ 10Mb.\n> >>> In this case, the INSERT *needs* 20 minutes to return. This is\n> >>> because the logging through syslog seems to severely slow the system.\n> >>> If instead, i use stderr, even with logging_collector=on, the same\n> >>> statement needs 15 seconds to return.\n> >>\n> >> In syslog.conf is the destination for PG marked with a \"-\"? (ie -/var/\n> >> log/pg.log) which tells syslog to not sync after each line logged.\n> >> That could explain a large chunk of the difference in time.\n> >\n> > I experimented with this a bit here. There definitely is an O(N^2)\n> > penalty from the repeated strchr() calls, but it doesn't really start\n> > to hurt till 1MB or so statement length. Even with that patched,\n> > syslog logging pretty much sucks performance-wise. Here are the numbers\n> > I got on a Fedora 8 workstation, testing the time to log a statement of\n> > the form SELECT length('123456...lots of data, no newlines...7890');\n> >\n> > statement length\t\t\t1MB\t\t10MB\n> >\n> > CVS HEAD\t\t\t\t2523ms\t\t215588ms\n> > + patch to fix repeated strchr\t\t 529ms\t\t 36734ms\n> > after turning off syslogd's fsync\t 569ms\t\t 5881ms\n> > PG_SYSLOG_LIMIT 1024, fsync on\t\t 216ms\t\t 2532ms\n> > PG_SYSLOG_LIMIT 1024, no fsync\t\t 242ms\t\t 2692ms\n> > For comparison purposes:\n> > logging statements to stderr\t\t 155ms\t\t 2042ms\n> > execute statement without logging\t 42ms\t\t 520ms\n> >\n> > This machine is running a cheap IDE drive that caches writes, so\n> > the lack of difference between fsync off and fsync on is not too\n> > surprising --- on a machine with server-grade drives there'd be\n> > a lot more difference. (The fact that there is a difference in\n> > the 10MB case probably reflects filling the drive's write cache.)\n> >\n> > On my old HPUX machine, where fsync really works (and the syslogd\n> > doesn't seem to allow turning it off), the 1MB case takes\n> > 195957ms with the strchr patch, 22922ms at PG_SYSLOG_LIMIT=1024.\n> >\n> > So there's a fairly clear case to be made for fixing the repeated\n> > strchr, but I also think that there's a case for jacking up\n> > PG_SYSLOG_LIMIT. 
As far as I can tell the current value of 128\n> > was chosen *very* conservatively without thought for performance:\n> > http://archives.postgresql.org/pgsql-hackers/2000-05/msg01242.php\n> >\n> > At the time we were looking at evidence that the then-current\n> > Linux syslogd got tummyache with messages over about 1KB:\n> > http://archives.postgresql.org/pgsql-hackers/2000-05/msg00880.php\n> >\n> > Some experimentation with the machines I have handy now says that\n> >\n> > Fedora 8:\t\ttruncates messages at 2KB (including syslog's header)\n> > HPUX 10.20 (ancient):\tditto\n> > Mac OS X 10.5.3:\tdrops messages if longer than about 1900 bytes\n> >\n> > So it appears to me that setting PG_SYSLOG_LIMIT = 1024 would be\n> > perfectly safe on current systems (and probably old ones too),\n> > and would give at least a factor of two speedup for logging long\n> > strings --- more like a factor of 8 if syslogd is fsync'ing.\n> >\n> > Comments? Anyone know of systems where this is too high?\n> > Perhaps we should make that change only in HEAD, not in the\n> > back branches, or crank it up only to 512 in the back branches?\n>\n> with linux ext2/ext3 filesystems I have seen similar problems when the\n> syslog starts getting large. there are several factors here\n>\n> 1. fsync after each write unless you have \"-\" in syslog.conf (only\n> available on linux AFAIK)\n>\nIn FreeBSD 7.0 by default it does not fsync (except for kernel messages),\nunless the path is prefixed by \"-\" whereas it syncs.\n> 2. ext2/ext3 tend to be very inefficiant when doing appends to large\n> files. each write requires that the syslog daemon seek to the end of the\n> file (becouse something else may have written to the file in the meantime)\n> and with the small block sizes and chaining of indirect blocks this can\n> start to be painful when logfiles get up in to the MB range.\n>\n> note that you see this same problem when you start to get lots of\n> files in one directory as well. even if you delete a lot of files the\n> directory itself is still large and this can cause serious performance\n> problems.\n>\n> other filesystems are much less sensitive to file (and directory) sizes.\n>\n> my suggestion would be to first make sure you are doing async writes to\n> syslog, and then try putting the logfiles on different filesystems to see\n> how they differ. personally I use XFS most of the time where I expect lots\n> of files or large files.\n>\n> David Lang\n\n\n", "msg_date": "Wed, 9 Jul 2008 15:37:26 +0300", "msg_from": "\"achill@matrix\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements" }, { "msg_contents": "\n>In FreeBSD 7.0 by default it does not fsync (except for kernel messages),\n>unless the path is prefixed by \"-\" whereas it syncs.\nSorry, scrap the above sentence.\nThe correct is to say that FreeBSD 7.0 by default it does not fsync(2) (except \nfor kernel messages), and even in this case of kernel messages, syncing\ncan be bypassed by the use of \"-\" prefix.\nSo in our case of postgresql (local0 facility) it does not sync.\n", "msg_date": "Wed, 9 Jul 2008 15:51:30 +0300", "msg_from": "\"achill@matrix\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: syslog performance when logging big statements" } ]
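Two configuration fragments matching what the thread converges on. The syslog line assumes Linux syslogd and the default local0 facility (syslog_facility); the log path is the one mentioned in the reply above. The postgresql.conf fragment is the stderr-plus-collector setup the original poster switched to, shown here as a sketch rather than a complete logging configuration.

    # /etc/syslog.conf -- the leading "-" asks syslogd not to fsync after every line
    local0.*    -/var/log/pg.log

    # postgresql.conf -- bypass syslog entirely (the faster option measured above)
    log_destination = 'stderr'
    logging_collector = on
    log_directory = 'pg_log'        # relative to the data directory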
[ { "msg_contents": "Hi,\n\nwhen i issued the vaccuum cmd, I recieved this message:\n\necho \"VACUUM --full -d ARSys\" | psql -d dbname\n\nWARNING: relation \"public.tradetbl\" contains more than\n\"max_fsm_pages\" pages with useful free space\nHINT: Consider compacting this relation or increasing the\nconfiguration parameter \"max_fsm_pages\".\nNOTICE: number of page slots needed (309616) exceeds max_fsm_pages (153600)\nHINT: Consider increasing the configuration parameter \"max_fsm_pages\"\nto a value over 309616.\nVACUUM\n\nWhat does the warning indicate? How will it adversely affect the system.\n\nThanks.\n-rs\n\n\n-- \nIt is all a matter of perspective. You choose your view by choosing\nwhere to stand. --Larry Wall\n", "msg_date": "Tue, 8 Jul 2008 15:13:39 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "max fsm pages question" }, { "msg_contents": "In response to \"Radhika S\" <[email protected]>:\n> \n> when i issued the vaccuum cmd, I recieved this message:\n> \n> echo \"VACUUM --full -d ARSys\" | psql -d dbname\n> \n> WARNING: relation \"public.tradetbl\" contains more than\n> \"max_fsm_pages\" pages with useful free space\n> HINT: Consider compacting this relation or increasing the\n> configuration parameter \"max_fsm_pages\".\n> NOTICE: number of page slots needed (309616) exceeds max_fsm_pages (153600)\n> HINT: Consider increasing the configuration parameter \"max_fsm_pages\"\n> to a value over 309616.\n> VACUUM\n> \n> What does the warning indicate? How will it adversely affect the system.\n\nIt means any combination of the following things:\n1) You're not vacuuming often enough\n2) Your FSM settings are too low\n3) You recently had some unusually high update/delete activity on that\n table that's exceeded your normal settings for FSM and vacuum and\n will need special attention to get back on track.\n\nIf you know it's #3, then just take steps to get things back on track\nand don't worry about 1 or 2. If you don't think it's #3, then you\nmay want to increase the frequency of vacuum and/or increase the FSM\nsettings in your conf file.\n\nYou can do a CLUSTER to get that table back in shape, but be sure to\nread up on CLUSTER so you understand the implications before you do so.\nYou can also temporarily raise the FSM settings to allow vacuum to work,\nthen lower them back down when it's under control again. This is one\nof the few circumstances where you may want to VACUUM FULL.\n\nIf you don't handle this, that table will continue to grow in size on\nthe disk, taking up space unnecessarily and probably negatively\nimpacting performance.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 8 Jul 2008 15:24:57 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: max fsm pages question" }, { "msg_contents": "\nOn Jul 8, 2008, at 3:24 PM, Bill Moran wrote:\n\n> If you don't handle this, that table will continue to grow in size on\n> the disk, taking up space unnecessarily and probably negatively\n> impacting performance.\n\ns/probably/definitely/\n\nAlso, if it was #3 on Bill's list, one thing to do is look for index \nbloat. Reindex followed by vacuum might help as well.\n\nAs for running VACUUM FULL, I find it sometimes faster to just do a \ndump + reload. 
A FULL vacuum is really, really, really, extremely, \nslow.\n\n", "msg_date": "Wed, 9 Jul 2008 11:59:31 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: max fsm pages question" } ]
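A sketch of the two-step fix described above, using the numbers from this particular warning. The 400000 figure is simply the reported 309616 page slots plus headroom, not a general recommendation; re-check it with a database-wide VACUUM VERBOSE once the table is cleaned up.

    # postgresql.conf -- must exceed the 309616 slots reported by VACUUM; restart required
    max_fsm_pages = 400000

    -- recover the already-bloated table (slow; CLUSTER or dump+reload are the
    -- alternatives discussed above), then rebuild its indexes if they have bloated
    VACUUM FULL VERBOSE public.tradetbl;
    REINDEX TABLE public.tradetbl;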
[ { "msg_contents": "Is there any quick hacks to do this quickly? There's around\n20-30million\nrows of data. \n\nI want to change a column type from varchar(4) to varchar()\n\ntable size is ~10-15GB (and another 10-15G for indexes)\n\nWhat would be the preferrred way of doing it? SHould I be dropping the\nindexes 1st to make things faster? Would it matter?\n\nThe punch line is that since the databases are connected via slony, this\nmakes it even harder to pull it off. My last try got the DDL change\ncompleted in like 3 hours (smallest table of the bunch) and it hung\neverything.\n\n\n\n\n\n", "msg_date": "Thu, 10 Jul 2008 15:53:00 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "Altering a column type - Most efficient way" }, { "msg_contents": "Ow Mun Heng schrieb:\n> Is there any quick hacks to do this quickly? There's around\n> 20-30million\n> rows of data. \n>\n> I want to change a column type from varchar(4) to varchar()\n>\n> table size is ~10-15GB (and another 10-15G for indexes)\n>\n> What would be the preferrred way of doing it? SHould I be dropping the\n> indexes 1st to make things faster? Would it matter?\n>\n> The punch line is that since the databases are connected via slony, this\n> makes it even harder to pull it off. My last try got the DDL change\n> completed in like 3 hours (smallest table of the bunch) and it hung\n> everything\n> \nBefore Postgresql supported \"alter table ... type ... \" conversions, I \ndid it a few times when I detected later that my varchar() fields were \ntoo short, and it worked perfectly.\n\nExample:\n{OLDLEN} = 4\n{NEWLEN} = 60\n\nupdate pg_attribute\n set atttypmod={NEWLEN}+4\n where attname='the-name-of-the-column'\n and attrelid=(select oid from pg_class where \nrelname='the-name-of-the-table')\n and atttypmod={OLDLEN}+4;\n\n\nThis worked very well when you want to increase the maximum length, \ndon't try to reduce the maximum length this way!\n\nDisclaimer: I do not know if slony might be have a problem with this.\n\n\n\n", "msg_date": "Thu, 10 Jul 2008 10:36:10 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "On Thu, 2008-07-10 at 10:36 +0200, Mario Weilguni wrote:\n> Ow Mun Heng schrieb:\n> >\n> > I want to change a column type from varchar(4) to varchar()\n> >\n> > \n> Example:\n> {OLDLEN} = 4\n> {NEWLEN} = 60\n> \n> update pg_attribute\n> set atttypmod={NEWLEN}+4\n> where attname='the-name-of-the-column'\n> and attrelid=(select oid from pg_class where \n> relname='the-name-of-the-table')\n> and atttypmod={OLDLEN}+4;\n> \n\nThis is what I see on the table\n\nNEW attypmod = -1\nOLD attypmod = 8\n\nI'm not very confident in doint this change since this is not a\ndevelopment server. If would help though if someone else has any other\nsuggestion or something.. 
Thanks for the information though, it's\nenlightening knowledge.\n", "msg_date": "Thu, 10 Jul 2008 17:34:41 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "Ow Mun Heng wrote:\n\n> This is what I see on the table\n> \n> NEW attypmod = -1\n> OLD attypmod = 8\n\n8 means varchar(4) which is what you said you had (4+4)\n-1 means unlimited size.\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 10 Jul 2008 09:57:56 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "On Thu, 2008-07-10 at 09:57 -0400, Alvaro Herrera wrote:\n> Ow Mun Heng wrote:\n> \n> > This is what I see on the table\n> > \n> > NEW attypmod = -1\n> > OLD attypmod = 8\n> \n> 8 means varchar(4) which is what you said you had (4+4)\n> -1 means unlimited size.\n> \n\nThis is cool. \n\nIf it were this simple a change, I'm not certain why (I believe) PG is\nchecking each and every row to see if it will fit into the new column\ndefinition/type. \n\nThus, I'm still a bit hesitant to do the change, although it is\ndefinitely a very enticing thing to do. ( I presume also that this\nchange will be instantaneous and does not need to check on each and\nevery row of the table?)\n\nThanks./\n", "msg_date": "Fri, 11 Jul 2008 10:17:00 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "Ow Mun Heng schrieb:\n> On Thu, 2008-07-10 at 09:57 -0400, Alvaro Herrera wrote:\n> \n>> Ow Mun Heng wrote:\n>>\n>> \n>>> This is what I see on the table\n>>>\n>>> NEW attypmod = -1\n>>> OLD attypmod = 8\n>>> \n>> 8 means varchar(4) which is what you said you had (4+4)\n>> -1 means unlimited size.\n>>\n>> \n>\n> This is cool. \n>\n> If it were this simple a change, I'm not certain why (I believe) PG is\n> checking each and every row to see if it will fit into the new column\n> definition/type. \n>\n> Thus, I'm still a bit hesitant to do the change, although it is\n> definitely a very enticing thing to do. ( I presume also that this\n> change will be instantaneous and does not need to check on each and\n> every row of the table?)\n>\n> Thanks./\n>\n> \n\nIt should be safe, because the length limit is checked at insert/update \ntime, and internally, a varchar(20) is treated as something like this:\nfoo varchar(1000000000) check (length(foo) <= 20)\n\n\nThe change is done without re-checking all rows, and will not fail IF \nthe new size is longer than the old size.\n\n", "msg_date": "Fri, 11 Jul 2008 10:01:08 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "Ow Mun Heng wrote:\n\n> If it were this simple a change, I'm not certain why (I believe) PG is\n> checking each and every row to see if it will fit into the new column\n> definition/type. 
\n\nBecause the code that does the ALTER TYPE is very generic, and it\ndoesn't (yet) have an optimization that tells it to skip the check and\nthe possible table rewrite in the cases where it's obviously not needed\n(like this one).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 11 Jul 2008 09:44:23 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Ow Mun Heng wrote:\n>> If it were this simple a change, I'm not certain why (I believe) PG is\n>> checking each and every row to see if it will fit into the new column\n>> definition/type. \n\n> Because the code that does the ALTER TYPE is very generic, and it\n> doesn't (yet) have an optimization that tells it to skip the check and\n> the possible table rewrite in the cases where it's obviously not needed\n> (like this one).\n\nAwhile back I looked into teaching ALTER TYPE that it needn't rewrite\nif the type conversion expression parses out as just a Var with\nRelabelType, but it seemed that that wouldn't cover very much of the\nuse-cases where a human thinks that it's \"obvious\" that no rewrite\nis needed. You'd really need to build in hard-wired knowledge about\nthe behavior of specific coercion functions, which seems entirely\nunappealing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jul 2008 10:27:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Altering a column type - Most efficient way " }, { "msg_contents": ">>> Tom Lane <[email protected]> wrote: \n> Alvaro Herrera <[email protected]> writes:\n>> Ow Mun Heng wrote:\n>>> If it were this simple a change, I'm not certain why (I believe) PG\nis\n>>> checking each and every row to see if it will fit into the new\ncolumn\n>>> definition/type. \n> \n>> Because the code that does the ALTER TYPE is very generic, and it\n>> doesn't (yet) have an optimization that tells it to skip the check\nand\n>> the possible table rewrite in the cases where it's obviously not\nneeded\n>> (like this one).\n> \n> Awhile back I looked into teaching ALTER TYPE that it needn't\nrewrite\n> if the type conversion expression parses out as just a Var with\n> RelabelType, but it seemed that that wouldn't cover very much of the\n> use-cases where a human thinks that it's \"obvious\" that no rewrite\n> is needed.\n \nWe wouldn't have to cover all possible cases for it to be useful. If\nthere's some low-hanging fruit here, +1 for getting that. The cases\nwhich would most often save staff here some time are switching a\nvarchar to a longer maximum length or switching between a domain which\nis varchar to plain varchar (or vice versa).\n \n-Kevin\n", "msg_date": "Fri, 11 Jul 2008 09:55:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Altering a column type - Most efficient way" }, { "msg_contents": "On Fri, 2008-07-11 at 09:55 -0500, Kevin Grittner wrote:\n> > Alvaro Herrera <[email protected]> writes:\n> >>> Ow Mun Heng wrote:\n> >>> If it were this simple a change, I'm not certain why (I believe) PG\n> >>>is checking each and every row to see if it will fit into the new column\n> >>> definition/type. 
\n> >Because the code that does the ALTER TYPE is very generic, and it\n> > doesn't (yet) have an optimization that tells it to skip the check\n> > and the possible table rewrite in the cases where it's obviously not\n> >needed(like this one).\n\n> If there's some low-hanging fruit here, +1 for getting that. \n\nI just tested this out and everything seems to be working fine. (cross\nfingers - for now and if I do report back, it means we've crashed and\nburned, but as of now... the low hanging fruit is tasty)\n\nThis 2 sec change is much preferred over the 3+ hour per table.\n\nI agree with Tom that this is not useful in _all_ cases and may seem to\nlook like a hack, but it really isn't. Given that the condition that\nwe're expaning the min length rather than the opposite, it should be\npretty safe.\n\n\nGuys(/gals) Thanks very much for brightening up a dreadry Monday\nmorning.\n\n", "msg_date": "Mon, 14 Jul 2008 09:46:46 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": true, "msg_subject": "[SOLVED] Re: Altering a column type - Most efficient way" } ]
[ { "msg_contents": "On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I know it is bad, but how bad can it be? Just trying to understand the impact the \"shmmax\" parameter can have on Postgres and the entire system after Postgres comes up on this number.\n\nWhat is the reasonable setting for shmmax on a 4G total machine?\n\nThanks a lot,\nJessica\n\n\n\n \nOn a Linux system,  if the total memory is 4G and the shmmax is set to 4G, I know it is bad, but how bad can it be? Just trying to understand the impact the \"shmmax\" parameter can have  on Postgres  and the entire system after Postgres comes up on this number.What is the reasonable setting for shmmax on a 4G total machine?Thanks a lot,Jessica", "msg_date": "Thu, 10 Jul 2008 03:53:40 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "how big shmmax is good for Postgres..." }, { "msg_contents": "In response to Jessica Richard <[email protected]>:\n\n> On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I know it is bad, but how bad can it be? Just trying to understand the impact the \"shmmax\" parameter can have on Postgres and the entire system after Postgres comes up on this number.\n\nIt's not bad by definition. shmmax is a cap on the max that can be used.\nJust because you set it to 4G doesn't mean any application is going to\nuse all of that. With PostgreSQL, the maximum amount of shared memory it\nwill allocate is governed by the shared_buffers setting in the\npostgresql.conf.\n\nIt _is_ a good idea to set shmmax to a reasonable size to prevent\na misbehaving application from eating up all the memory on a system,\nbut I've yet to see PostgreSQL misbehave in this manner. Perhaps I'm\ntoo trusting.\n\n> What is the reasonable setting for shmmax on a 4G total machine?\n\nIf you mean what's a reasonable setting for shared_buffers, conventional\nwisdom says to start with 25% of the available RAM and increase it or\ndecrease it if you discover your workload benefits from more or less.\nBy \"available RAM\" is meant the free RAM after all other applications\nare running, which will be 4G if this machine only runs PostgreSQL, but\ncould be less if it runs other things like a web server.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 10 Jul 2008 07:19:19 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how big shmmax is good for Postgres..." }, { "msg_contents": "On Thu, Jul 10, 2008 at 4:53 AM, Jessica Richard <[email protected]> wrote:\n> On a Linux system, if the total memory is 4G and the shmmax is set to 4G, I\n> know it is bad, but how bad can it be? Just trying to understand the impact\n> the \"shmmax\" parameter can have on Postgres and the entire system after\n> Postgres comes up on this number.\n\nThere are two settings, shmmax for the OS, and shared_buffers for\npgsql. If the OS has a max setting of 4G but pgsql is set to use\n512Meg then you'd be safe.\n\nNow, assuming that both the OS and pgsql are set to 4G, and you're\napproaching that level of usage, the OS will start swapping out to\nmake room for the shared memory. 
The machine will likely be starved\nof memory, and will go into what's often called a swap storm where it\nspends all its time swapping stuff in and out and doing little else.\n\n> What is the reasonable setting for shmmax on a 4G total machine?\n\nKeep in mind that postgresql's shared buffers are just one level of\ncaching / buffering that's going on. The OS caches file access, and\nsometimes the RAID controller and / or hard drive.\n\nGenerally the nominal setting is 25% of memory, but that's variable.\nIf the working set of your data will fit in 10% then there's no need\nfor setting shared_memory to 25%...\n", "msg_date": "Thu, 10 Jul 2008 05:23:12 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how big shmmax is good for Postgres..." }, { "msg_contents": "I just wanted to add to my previous post that shared_memory generally\nhas a performance envelope of quickly increasing performance as you\nfirst increase share_memory, then a smaller performance step with each\nincrease in shared_memory. Once all of the working set of your data\nfits, the return starts to fall off quickly. Assuming you were on a\nmachine with infinite memory and there were no penalties for a large\nshared_buffers, then you would go until you were comfortably in the\nlevel area. If your working set were 1G on a dedicated machine with\n64G, you could assume that memory was functionally unlimited.\nSomewhere around 1.1 Gigs of memory and you get no performance\nincrease.\n\nIn real life, maintaining a cache has costs too. The kernel of most\nOSes now caches data very well. and it does it very well for large\nchunks of data. So on a machine that big, the OS would be caching the\nwhole dataset as well.\n\nWhat we're interested in is the working set size. If you typically\ncall up a couple k of data, band it around some and commit, then never\nhit that section again for hours, and all your calls do that, and the\naccess area is evenly spread across the database, then your working\nset is the whole thing. This is a common access pattern when dealing\nwith transactional systems.\n\nIf you commonly have 100 transactions doing that at once, then you\nmultiply much memory they use times 100 to get total buffers in use,\nand the rest is likely NEVER going to get used.\n\nIn these systems, what seems like a bad idea, lowering the\nbuffer_size, might be the exact right call.\n\nFor session servers and large transactional systems, it's often best\nto let the OS do the best caching of the most of the data, and have\nenough shared buffers to handle 2-10 times the in memory data set\nsize. This will result in a buffer size of a few hundred megabytes.\n\nThe advantage here is that the OS doesn't have to spend a lot of time\nmaintaining a large buffer pool and checkpoints are cheaper. With\nspare CPU the background writer can use spare I/O cycles to write out\nthe smaller number of dirty pages in shared_memory and the system runs\nfaster.\n\nSame is true of session servers. If a DB is just used for tracking\nlogged in users, it only needs a tiny amount of shared_buffers.\n\nConversely, when you need large numbers of shared_buffers is when you\nhave something like a large social networking site. A LOT of people\nupdating a large data set at the same time likely need way more\nshared_buffers to run well. A user might be inputing data for several\nminutes or even hours. The same pages are getting hit over and over\ntoo. 
For this kind of app, you need as much memory as you can afford\nto throw at the problem, and a semi fast large RAID array. A large\ncache means your RAID controller / array only have to write, on\naverage, as fast as the database commits it.\n\nWhen you can reach the point where shared_buffers is larger than 1/2\nyour memory you're now using postgresql for caching more so than the\nkernel. As your shared_buffers goes past 75% of available memory you\nnow have three times as much cache under postgesql'l control than\nunder the OS's control. This would mean you've already testing this\nand found that postgresql's caching works better for you than the OS's\ncaching does. Haven't seen that happen a lot. Maybe some large\ntransactional systems would work well that way.\n\nMy point here, and I have one, is that larger shared_buffers only\nmakes sense if you can use them. They can work against you in a few\ndifferent ways that aren't obvious up front. Checkpointing, Self DOS\ndue to swap storm, using up memory that the kernel might be better at\nusing as cache, etc.\n\nSo the answer really is, do some realistic testing, with an eye\ntowards anamalous behavior.\n", "msg_date": "Thu, 10 Jul 2008 06:11:47 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how big shmmax is good for Postgres..." }, { "msg_contents": "Some corrections:\n\nOn Thu, Jul 10, 2008 at 6:11 AM, Scott Marlowe <[email protected]> wrote:\n\nSNIP\n\n> If you commonly have 100 transactions doing that at once, then you\n> multiply much memory they use times 100 to get total buffer >> SPACE << in use,\n> and the rest is likely NEVER going to get used.\n>\n> In these systems, what seems like a bad idea, lowering the\n> buffer_size, might be the exact right call.\n>\n> For session servers and large transactional systems, it's often best\n> to let the OS do the best caching of the most of the data, and have\n> enough shared buffers to handle 2-10 times the in memory data set\n> size. This will result in a buffer size of a few hundred megabytes.\n>\n> The advantage here is that the (NOT OS) DATABASE doesn't have to spend a lot of time\n> maintaining a large buffer pool and checkpoints are cheaper.\n> The background writer can use spare >> CPU << and I/O cycles to write out\n> the now smaller number of dirty pages in shared_memory and the system runs\n> faster.\n\n>\n> Conversely, when you need large numbers of shared_buffers is when you\n> have something like a large social networking site. A LOT of people\n> updating a large data set at the same time likely need way more\n> shared_buffers to run well. A user might be inputing data for several\n> minutes or even hours. The same pages are getting hit over and over\n> too. For this kind of app, you need as much memory as you can afford\n> to throw at the problem, and a semi fast large RAID array. A large\n> >> RAID << cache means your RAID controller / array only have to write, on\n> average, as fast as the database commits.\n\nJust minor edits. If there's anything obviously wrong someone please\nlet me know.\n", "msg_date": "Thu, 10 Jul 2008 09:37:19 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how big shmmax is good for Postgres..." } ]
[ { "msg_contents": "Hi All,\n I am having a trigger in table, If i update the the table manually it is not taking time(say 200ms per row), But if i update the table through procedure the trigger is taking time to fire(say 7 to 10 seconds per row).\n\nHow can i make the trigger to fire immediately?\n\nRegards,\nRam\n\n\n\n\n\n\nHi All,\n    I am having a trigger in table, \nIf i update the the table manually it is not taking time(say 200ms per row), But \nif i update the table through procedure the trigger is taking time to fire(say 7 \nto 10 seconds per row).\n \nHow can i make the trigger to fire \nimmediately?\n \nRegards,\nRam", "msg_date": "Fri, 11 Jul 2008 13:05:47 +0530", "msg_from": "\"Ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger is taking time to fire" } ]
[ { "msg_contents": "I've got a couple boxes with some 3ware 9550 controllers, and I'm \nless than pleased with performance on them.. Sequential access is \nnice, but start seeking around and you kick it in the gut. (I've \nfound posts on the internets about others having similar issues). My \nlast box with a 3ware I simply had it in jbod mode and used sw raid \nand it smoked the hw.\n\nAnyway, anybody have experience in 3ware vs Areca - I've heard plenty \nof good anecdotal things that Areca is much better, just wondering if \nanybody here has firsthand experience. It'll be plugged into about \n8 10k rpm sata disks.\n\nthanks\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n\nI've got a couple boxes with some 3ware 9550 controllers, and I'm less than pleased with performance on them.. Sequential access is nice, but start seeking around and you kick it in the gut.  (I've found posts on the internets about others having similar issues).  My last box with a 3ware I simply had it in jbod mode and used sw raid and it smoked the hw.Anyway, anybody have experience in 3ware vs Areca - I've heard plenty of good anecdotal things that Areca is much better, just wondering if anybody here has firsthand experience.    It'll be plugged into about 8 10k rpm sata disks. thanks --Jeff Trout <[email protected]>http://www.stuarthamm.net/http://www.dellsmartexitin.com/", "msg_date": "Fri, 11 Jul 2008 09:26:37 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "3ware vs Areca" }, { "msg_contents": "On Fri, Jul 11, 2008 at 7:26 AM, Jeff <[email protected]> wrote:\n> I've got a couple boxes with some 3ware 9550 controllers, and I'm less than\n> pleased with performance on them.. Sequential access is nice, but start\n> seeking around and you kick it in the gut. (I've found posts on the\n> internets about others having similar issues). My last box with a 3ware I\n> simply had it in jbod mode and used sw raid and it smoked the hw.\n> Anyway, anybody have experience in 3ware vs Areca - I've heard plenty of\n> good anecdotal things that Areca is much better, just wondering if anybody\n> here has firsthand experience. It'll be plugged into about 8 10k rpm sata\n> disks.\n\nWhat RAID level are you using? How much cache do you have? Write\nback / battery backed? What OS and version?\n\nEverything I've heard sys they're both fast performers.\n", "msg_date": "Fri, 11 Jul 2008 08:19:50 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "The Arecas are a lot faster than the 9550, more noticeable with disk counts\nfrom 12 on up. At 8 disks you may not see much difference.\n\nThe 3Ware 9650 is their answer to the Areca and it put the two a lot closer.\n\nFWIW ­ we got some Arecas at one point and had trouble getting them\nconfigured and working properly.\n\n- Luke\n\n\nOn 7/11/08 6:26 AM, \"Jeff\" <[email protected]> wrote:\n\n> I've got a couple boxes with some 3ware 9550 controllers, and I'm less than\n> pleased with performance on them.. Sequential access is nice, but start\n> seeking around and you kick it in the gut.  (I've found posts on the internets\n> about others having similar issues).  
My last box with a 3ware I simply had it\n> in jbod mode and used sw raid and it smoked the hw.\n> \n> Anyway, anybody have experience in 3ware vs Areca - I've heard plenty of good\n> anecdotal things that Areca is much better, just wondering if anybody here has\n> firsthand experience.    It'll be plugged into about 8 10k rpm sata disks. \n> \n> thanks\n> \n> --\n> Jeff Trout <[email protected]>\n> http://www.stuarthamm.net/\n> http://www.dellsmartexitin.com/\n> \n> \n> \n> \n> \n\n\n\n\nRe: [PERFORM] 3ware vs Areca\n\n\nThe Arecas are a lot faster than the 9550, more noticeable with disk counts from 12 on up.  At 8 disks you may not see much difference.\n\nThe 3Ware 9650 is their answer to the Areca and it put the two a lot closer.\n\nFWIW – we got some Arecas at one point and had trouble getting them configured and working properly.\n\n- Luke\n\n\nOn 7/11/08 6:26 AM, \"Jeff\" <[email protected]> wrote:\n\nI've got a couple boxes with some 3ware 9550 controllers, and I'm less than pleased with performance on them.. Sequential access is nice, but start seeking around and you kick it in the gut.  (I've found posts on the internets about others having similar issues).  My last box with a 3ware I simply had it in jbod mode and used sw raid and it smoked the hw.\n\nAnyway, anybody have experience in 3ware vs Areca - I've heard plenty of good anecdotal things that Areca is much better, just wondering if anybody here has firsthand experience.    It'll be plugged into about 8 10k rpm sata disks. \n\nthanks\n \n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/", "msg_date": "Fri, 11 Jul 2008 07:59:00 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Fri, Jul 11, 2008 at 8:59 AM, Luke Lonergan <[email protected]> wrote:\n> The Arecas are a lot faster than the 9550, more noticeable with disk counts\n> from 12 on up. At 8 disks you may not see much difference.\n>\n> The 3Ware 9650 is their answer to the Areca and it put the two a lot closer.\n\nDo you mean the areca 12xx series or the newer 1680? I was under the\nimpression the difference in performance wasn't that big between teh\n95xx 3wares and teh 12xx Arecas. We have a 1680i on order, with 16\n15K RPM SAS drives. I'll let you guys know how it runs.\n\n> FWIW – we got some Arecas at one point and had trouble getting them\n> configured and working properly.\n\nI've heard the SAS/SATA Arecas don't get along well with some SATA\ndrives, but generally are quite reliable on SAS. this was in a number\nof forum posts on different hardware and linux sites repeated by\ndifferent folks over and over.\n", "msg_date": "Fri, 11 Jul 2008 11:46:19 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Fri, 11 Jul 2008, Jeff wrote:\n\n> I've got a couple boxes with some 3ware 9550 controllers, and I'm less than \n> pleased with performance on them.. Sequential access is nice, but start \n> seeking around and you kick it in the gut. (I've found posts on the \n> internets about others having similar issues).\n\nYeah, there's something weird about those controllers, maybe in how stuff \nflows through the cache, that makes them slow in a lot of situations. 
\nThe old benchmarks at \nhttp://tweakers.net/reviews/557/21/comparison-of-nine-serial-ata-raid-5-adapters-pagina-21.html \nshow their cards acting badly in a lot of situations and I haven't seen \nanything else since vindicating the 95XX models from them.\n\n> My last box with a 3ware I simply had it in jbod mode and used sw raid \n> and it smoked the hw.\n\nThat is often the case no matter which hardware controller you've got, \nparticularly in more complicated RAID setups. You might want to consider \nthat a larger lesson rather than just a single data point.\n\n> Anyway, anybody have experience in 3ware vs Areca - I've heard plenty of good \n> anecdotal things that Areca is much better, just wondering if anybody here \n> has firsthand experience. It'll be plugged into about 8 10k rpm sata \n> disks.\n\nAreca had a pretty clear performance lead for a while there against \n3ware's 3500 series, but from what I've been reading I'm not sure that is \nstill true in the current generation of products. Check out the pages \nstarting at \nhttp://www.tomshardware.com/reviews/SERIAL-RAID-CONTROLLERS-AMCC,1738-12.html \nfor example, where the newer Areca 1680ML card just gets crushed at all \nkinds of workloads by the AMCC 3ware 9690SA. I think the 3ware 9600 \nseries cards have achieved or exceeded what Areca's 1200 series was \ncapable of, while Areca's latest generation has slipped a bit from the \nprevious one.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 11 Jul 2008 15:21:20 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Fri, Jul 11, 2008 at 12:21 PM, Greg Smith <[email protected]> wrote:\n> On Fri, 11 Jul 2008, Jeff wrote:\n>\n>> I've got a couple boxes with some 3ware 9550 controllers, and I'm less\n>> than pleased with performance on them.. Sequential access is nice, but start\n>> seeking around and you kick it in the gut. (I've found posts on the\n>> internets about others having similar issues).\n>\n> Yeah, there's something weird about those controllers, maybe in how stuff\n> flows through the cache, that makes them slow in a lot of situations. The\n> old benchmarks at\n> http://tweakers.net/reviews/557/21/comparison-of-nine-serial-ata-raid-5-adapters-pagina-21.html\n> show their cards acting badly in a lot of situations and I haven't seen\n> anything else since vindicating the 95XX models from them.\n>\n>> My last box with a 3ware I simply had it in jbod mode and used sw raid and\n>> it smoked the hw.\n>\n> That is often the case no matter which hardware controller you've got,\n> particularly in more complicated RAID setups. You might want to consider\n> that a larger lesson rather than just a single data point.\n>\n>> Anyway, anybody have experience in 3ware vs Areca - I've heard plenty of\n>> good anecdotal things that Areca is much better, just wondering if anybody\n>> here has firsthand experience. It'll be plugged into about 8 10k rpm sata\n>> disks.\n>\n> Areca had a pretty clear performance lead for a while there against 3ware's\n> 3500 series, but from what I've been reading I'm not sure that is still true\n> in the current generation of products. Check out the pages starting at\n> http://www.tomshardware.com/reviews/SERIAL-RAID-CONTROLLERS-AMCC,1738-12.html\n> for example, where the newer Areca 1680ML card just gets crushed at all\n> kinds of workloads by the AMCC 3ware 9690SA. 
I think the 3ware 9600 series\n> cards have achieved or exceeded what Areca's 1200 series was capable of,\n> while Areca's latest generation has slipped a bit from the previous one.\n\n From my experience, the Areca controllers are difficult to operate.\nTheir firmware is, frankly, garbage. In more than one instance we\nhave had the card panic when a disk fails, which is obviously counter\nto the entire purpose of a RAID. We finally removed the Areca\ncontrollers from our database server and replaced them with HP P800s.\n\n-jwb\n", "msg_date": "Fri, 11 Jul 2008 12:39:43 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Jul 11, 2008, at 3:39 PM, Jeffrey Baker wrote:\n>\n> From my experience, the Areca controllers are difficult to operate.\n> Their firmware is, frankly, garbage. In more than one instance we\n> have had the card panic when a disk fails, which is obviously counter\n> to the entire purpose of a RAID. We finally removed the Areca\n> controllers from our database server and replaced them with HP P800s.\n>\n\nMy main db has a p600 plugged into an msa70 which works well - does \nthe HP junk work in non-hp boxes?\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n\n\nOn Jul 11, 2008, at 3:39 PM, Jeffrey Baker wrote:From my experience, the Areca controllers are difficult to operate.Their firmware is, frankly, garbage.  In more than one instance wehave had the card panic when a disk fails, which is obviously counterto the entire purpose of a RAID.  We finally removed the Arecacontrollers from our database server and replaced them with HP P800s.My main db has a p600 plugged into an msa70 which works well - does the HP junk work in non-hp boxes?--Jeff Trout <[email protected]>http://www.stuarthamm.net/http://www.dellsmartexitin.com/", "msg_date": "Fri, 11 Jul 2008 15:52:02 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Jul 11, 2008, at 3:21 PM, Greg Smith wrote:\n>> My last box with a 3ware I simply had it in jbod mode and used sw \n>> raid and it smoked the hw.\n>\n> That is often the case no matter which hardware controller you've \n> got, particularly in more complicated RAID setups. You might want \n> to consider that a larger lesson rather than just a single data point.\n>\n\nYeah, it'd be fun to run more benchmarks, but the beefy box, for some \nreason, is a prod box busy 24/7. no time to nuke it and fidgit :)\n\n> Check out the pages starting at http://www.tomshardware.com/reviews/ \n> SERIAL-RAID-CONTROLLERS-AMCC,1738-12.html for example, where the \n> newer Areca 1680ML card just gets crushed at all kinds of workloads \n> by the AMCC 3ware 9690SA. I think the 3ware 9600 series cards have \n> achieved or exceeded what Areca's 1200 series was capable of, while \n> Areca's latest generation has slipped a bit from the previous one.\n>\nIt does look like the 9600 series fixed a lot of the 9550 issues.\n\n(and for the record, yes, either card I get will have a bbu. tis \nsilly to get a controller without one)\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n\n\nOn Jul 11, 2008, at 3:21 PM, Greg Smith wrote:My last box with a 3ware I simply had it in jbod mode and used sw raid and it smoked the hw. 
That is often the case no matter which hardware controller you've got, particularly in more complicated RAID setups.  You might want to consider that a larger lesson rather than just a single data point.Yeah, it'd be fun to run more benchmarks, but the beefy box, for some reason, is a prod box busy 24/7.  no time to nuke it and fidgit :)Check out the pages starting at http://www.tomshardware.com/reviews/SERIAL-RAID-CONTROLLERS-AMCC,1738-12.html for example, where the newer Areca 1680ML card just gets crushed at all kinds of workloads by the AMCC 3ware 9690SA.  I think the 3ware 9600 series cards have achieved or exceeded what Areca's 1200 series was capable of, while Areca's latest generation has slipped a bit from the previous one.It does look like the 9600 series fixed a lot of the 9550 issues.(and for the record, yes, either card I get will have a bbu. tis silly to get a controller without one) --Jeff Trout <[email protected]>http://www.stuarthamm.net/http://www.dellsmartexitin.com/", "msg_date": "Fri, 11 Jul 2008 15:58:56 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Fri, 11 Jul 2008, Jeff wrote:\n\n> Yeah, it'd be fun to run more benchmarks, but the beefy box, for some reason, \n> is a prod box busy 24/7. no time to nuke it and fidgit :)\n\nIf you've got an existing array and you want to switch to another \ncontroller, that may not work without nuking no matter what. One other \nreason to consider software RAID on top of a hardware controller running \nJBOD mode is that there's no standard for how striping/mirroring is done, \nso you can't just move a set of RAID disks to another brand of controller \nnormally. That's no problem with software RAID.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 11 Jul 2008 20:39:09 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Fri, 11 Jul 2008, Jeffrey Baker wrote:\n\n> Their firmware is, frankly, garbage. In more than one instance we\n> have had the card panic when a disk fails, which is obviously counter\n> to the entire purpose of a RAID. We finally removed the Areca\n> controllers from our database server and replaced them with HP P800s.\n\nCan you give a bit more detail here? If what you mean is that the driver \nfor the card generated an OS panic when a drive failed, that's not \nnecessarily the firmware at all. I know I had problems with the Areca \ncards under Linux until their driver went into the mainline kernel in \n2.6.19, all kinds of panics under normal conditions. Haven't seen \nanything like that with later Linux kernels or under Solaris 10, but then \nagain I haven't had a disk failure yet either.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 15 Jul 2008 00:13:26 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Mon, Jul 14, 2008 at 9:13 PM, Greg Smith <[email protected]> wrote:\n> On Fri, 11 Jul 2008, Jeffrey Baker wrote:\n>\n>> Their firmware is, frankly, garbage. In more than one instance we\n>> have had the card panic when a disk fails, which is obviously counter\n>> to the entire purpose of a RAID. We finally removed the Areca\n>> controllers from our database server and replaced them with HP P800s.\n>\n> Can you give a bit more detail here? 
If what you mean is that the driver\n> for the card generated an OS panic when a drive failed, that's not\n> necessarily the firmware at all. I know I had problems with the Areca cards\n> under Linux until their driver went into the mainline kernel in 2.6.19, all\n> kinds of panics under normal conditions. Haven't seen anything like that\n> with later Linux kernels or under Solaris 10, but then again I haven't had a\n> disk failure yet either.\n\nWell, it is difficult to tell if the fault is with the hardware or the\nsoftware. No traditional kernel panic has been observed. But most\nrecently in my memory we had an Areca HBA which, when one of its WD\nRE-2 disks failed, completely stopped responding to both the command\nline and the web management interface. Then, i/o to that RAID became\nincreasingly slower, and slower, until it stopped serving i/o at all.\nAt that point it was not relevant that the machine was technically\nstill running.\n\nWe have another Areca HBA that starts throwing errors up the SCSI\nstack if it runs for more than 2 months at a time. We have to reboot\nit on a schedule to keep it running.\n\n-jwb\n", "msg_date": "Tue, 15 Jul 2008 07:46:19 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Tue, 15 Jul 2008, Jeffrey Baker wrote:\n\n> But most recently in my memory we had an Areca HBA which, when one of \n> its WD RE-2 disks failed, completely stopped responding to both the \n> command line and the web management interface.\n\nWhat operating system/kernel version are you using on these systems?\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 15 Jul 2008 11:17:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Tue, Jul 15, 2008 at 8:17 AM, Greg Smith <[email protected]> wrote:\n> On Tue, 15 Jul 2008, Jeffrey Baker wrote:\n>\n>> But most recently in my memory we had an Areca HBA which, when one of its\n>> WD RE-2 disks failed, completely stopped responding to both the command line\n>> and the web management interface.\n>\n> What operating system/kernel version are you using on these systems?\n\nDebian \"etch\", which has a 2.6.18 kernel. I have contacted Areca\nsupport (as well as the linux-scsi mailing list) and their responses\nare usually either 1) upgrade the driver and/or firmware even though I\nhave the latest drivers and firmware, or 2) vague statements about the\ndisk being incompatible with the controller.\n\n-jwb\n", "msg_date": "Tue, 15 Jul 2008 08:54:00 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "Jeffrey Baker writes:\n\n> Their firmware is, frankly, garbage. In more than one instance we\n> have had the card panic when a disk fails, which is obviously counter\n> to the entire purpose of a RAID.\n\nI have had simmilar problems with 3ware 9550 and 9650 cards.\nUndre FreeBSD I have seen constant crashes under heavy loads.\nUsed to think it was just FreeBSD, but saw a thread on StorageReview where \nthe same was happening under Linux.\n\n> controllers from our database server and replaced them with HP P800s.\n\nHow is that working out?\nWhich RAID level? SAS/SATA? 
\n", "msg_date": "Fri, 18 Jul 2008 21:39:55 -0400", "msg_from": "Francisco Reyes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" }, { "msg_contents": "On Tue, 15 Jul 2008, Jeffrey Baker wrote:\n\n> Debian \"etch\", which has a 2.6.18 kernel. I have contacted Areca\n> support (as well as the linux-scsi mailing list) and their responses\n> are usually either 1) upgrade the driver and/or firmware even though I\n> have the latest drivers and firmware\n\nWell, technically you don't have the latest driver, because that's the one \nthat comes with the latest Linux kernel. I'm guessing you have RHEL5 here \nfrom that fact that you're using 2.6.18. I have a CentOS5 system here \nwith an Areca card in it. It installed it initially with the stock 2.6.18 \nkernel there but it never worked quite right; all sorts of odd panics \nunder heavy load. All my problems went away just by moving to a generic \n2.6.22, released some time after the Areca card became of more first-class \ncitizen maintained actively by the kernel developers themselves.\n\n> 2) vague statements about the disk being incompatible with the \n> controller.\n\nThat sort of situation is unfortunate but I don't feel it's unique to \nAreca. There's lots of reasons why some manufacturers end up with drives \nthat don't work well with some controllers, and it is hard to assign blame \nwhen it happens. There is something to be said for buying more integrated \nand tested systems; ultimately if you build stuff from parts, you're kind \nof stuck being the QA and that process presumes that you may discover \nincompatible combinations and punt them out in place of ones that do.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 20 Jul 2008 19:10:18 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 3ware vs Areca" } ]
[ { "msg_contents": "Hi,\n\nI just noticed what looks like a deadlock situation on postgresql 8.2.4. After more than an hour of running REINDEX, two \nprocesses are each in a \"waiting\" state and yet have no time used. This is also the first time I've seen this condition after \nsome 48 hours of continuous load testing.\n\nThe two postgresql processes that seem to be stuck on one another are:\n\n1 S postgres 31812 22455 0 80 0 - 274059 semtim 12:13 ? 00:00:00 postgres: metacarta metacarta 127.0.0.1(54297) SELECT \nwaiting\n1 S postgres 32577 22455 0 80 0 - 274063 semtim 12:16 ? 00:00:00 postgres: metacarta metacarta 127.0.0.1(60377) \nREINDEX waiting\n\n\nThe rest of the postgresql processes change; here's a snapshot. (Note that process 14349 is active and continues to \nintermittently run queries generated by another thread, but should not be blocking anything.) Unfortunately, I cannot tell you \nexactly what the deadlocked SELECT query is, but it is likely to be against the same table that the REINDEX command has been \nissued for.\n\n\n1 S postgres 2046 22455 0 80 0 - 274059 - 12:23 ? 00:00:00 postgres: metacarta metacarta 127.0.0.1(33940) idle \n\n1 S postgres 2097 22455 0 80 0 - 274059 - 12:24 ? 00:00:00 postgres: metacarta metacarta 127.0.0.1(36947) idle \n\n0 S root 2380 4263 0 80 0 - 411 stext 12:24 pts/1 00:00:00 grep postgres:\n1 R postgres 14349 22455 26 80 0 - 274385 - 11:51 ? 00:08:47 postgres: metacarta metacarta 127.0.0.1(52448) PARSE \n\n1 S postgres 22457 22455 0 80 0 - 273864 - Jul10 ? 00:01:55 postgres: writer process \n\n1 S postgres 22458 22455 0 80 0 - 2380 434161 Jul10 ? 00:00:50 postgres: stats collector process \n\n\n\nDoes anyone know what may be going on here? Has this been fixed on later versions of postgresql?\n\n\nThanks,\nKarl\n\n", "msg_date": "Fri, 11 Jul 2008 12:59:58 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "REINDEX/SELECT deadlock?" }, { "msg_contents": "Karl Wright <[email protected]> writes:\n> I just noticed what looks like a deadlock situation on postgresql\n> 8.2.4.\n\nDid you look into pg_locks to see what locks those transactions have and\nare waiting for?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jul 2008 15:38:01 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REINDEX/SELECT deadlock? " }, { "msg_contents": "Tom Lane wrote:\n> Karl Wright <[email protected]> writes:\n> \n>>I just noticed what looks like a deadlock situation on postgresql\n>>8.2.4.\n> \n> \n> Did you look into pg_locks to see what locks those transactions have and\n> are waiting for?\n> \n> \t\t\tregards, tom lane\n\nNo. Unlike a typical transaction-based deadlock, I did not see a deadlock error come back - it just \nseemed to wait indefinitely instead. So I'm not even sure anything will show up in the pg_locks \ntable. In any case, it will have to happen again before I can spelunk more, since I had to reset \neverything because the system was dead and it was needed by others.\n\nKarl\n\n\n\n", "msg_date": "Fri, 11 Jul 2008 17:40:50 -0400", "msg_from": "Karl Wright <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REINDEX/SELECT deadlock?" } ]
[ { "msg_contents": "PostgreSQL: 8.2\n\nHow can you identify how many inserts are being done in a given time\nframe for a database?\n\n \n\nThanks,\n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\nMy e-mail address has changed to [email protected]\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2\nHow can you identify how many inserts are being done in a\ngiven time frame for a database?\n \nThanks,\n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\nMy e-mail address has changed to [email protected]", "msg_date": "Fri, 11 Jul 2008 16:25:53 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "How many inserts am I doing" } ]
[ { "msg_contents": "PostgreSQL: 8.2\n\nHow can I identify how many inserts and updates are being done in a\ngiven time frame for a database?\n\n \n\nThanks,\n\n \n\n \n\nLance Campbell\n\nProject Manager/Software Architect\n\nWeb Services at Public Affairs\n\nUniversity of Illinois\n\n217.333.0382\n\nhttp://webservices.uiuc.edu\n\nMy e-mail address has changed to [email protected]\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\nPostgreSQL: 8.2\nHow can I identify how many inserts and updates are being\ndone in a given time frame for a database?\n \nThanks,\n \n \nLance Campbell\nProject Manager/Software Architect\nWeb Services at Public Affairs\nUniversity of Illinois\n217.333.0382\nhttp://webservices.uiuc.edu\nMy e-mail address has changed to [email protected]", "msg_date": "Fri, 11 Jul 2008 16:27:11 -0500", "msg_from": "\"Campbell, Lance\" <[email protected]>", "msg_from_op": true, "msg_subject": "How many updates and inserts" }, { "msg_contents": "Have a look at the pg_stat_user_tables table.\n\nOn Fri, 11 Jul 2008, Campbell, Lance wrote:\n\n> PostgreSQL: 8.2\n>\n> How can I identify how many inserts and updates are being done in a\n> given time frame for a database?\n>\n>\n>\n> Thanks,\n>\n>\n>\n>\n>\n> Lance Campbell\n>\n> Project Manager/Software Architect\n>\n> Web Services at Public Affairs\n>\n> University of Illinois\n>\n> 217.333.0382\n>\n> http://webservices.uiuc.edu\n>\n> My e-mail address has changed to [email protected]\n>\n>\n>\n>\n", "msg_date": "Fri, 11 Jul 2008 14:47:07 -0700 (PDT)", "msg_from": "Ben <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How many updates and inserts" } ]
[ { "msg_contents": "On a running production machine, we have 900M configured on a 16G-memory Linux host. The db size for all dbs combined is about 50G. There are many transactions going on all the times (deletes, inserts, updates). We do not have a testing environment that has the same setup and the same amount of workload. I want to evaluate on the production host if this 900M is enough. If not, we still have room to go up a little bit to speed up all Postgres activities. I don't know enough about the SA side. I just would imagine, if something like \"top\" command or other tools can measure how much total memory Postgres is actually using (against the configured 900M shared buffers), and if Postgres is using almost 900M all the time, I would take this as an indication that the shared_buffers can go up for another 100M...\n\nWhat is the best way to tell how much memory Postgres (all Postgres related things) is actually using?\n\nThanks\nJessica\n\n\n\n \nOn a running production machine, we have 900M configured on a 16G-memory Linux host. The db size for all dbs combined is about 50G.  There are many transactions going on all the times (deletes, inserts, updates). We do not have a testing environment that has the same setup and the same amount of workload. I want to evaluate on the production host if this 900M is enough. If not, we still have room to go up a little bit to speed up all Postgres activities. I don't know enough about the SA side. I just would imagine, if something like \"top\" command or other tools can measure how much total memory Postgres is actually using (against the configured 900M shared buffers), and if Postgres is using almost 900M all the time, I would take this as an indication that the shared_buffers can go up\n for another 100M...What is the best way to tell how much memory Postgres (all Postgres related things) is actually using?ThanksJessica", "msg_date": "Sat, 12 Jul 2008 04:30:42 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "how to estimate shared_buffers..." }, { "msg_contents": "On Sat, Jul 12, 2008 at 5:30 AM, Jessica Richard <[email protected]> wrote:\n> On a running production machine, we have 900M configured on a 16G-memory\n> Linux host. The db size for all dbs combined is about 50G. There are many\n> transactions going on all the times (deletes, inserts, updates). We do not\n> have a testing environment that has the same setup and the same amount of\n> workload. I want to evaluate on the production host if this 900M is enough.\n> If not, we still have room to go up a little bit to speed up all Postgres\n> activities. I don't know enough about the SA side. I just would imagine, if\n> something like \"top\" command or other tools can measure how much total\n> memory Postgres is actually using (against the configured 900M shared\n> buffers), and if Postgres is using almost 900M all the time, I would take\n> this as an indication that the shared_buffers can go up for another 100M...\n>\n> What is the best way to tell how much memory Postgres (all Postgres related\n> things) is actually using?\n\nIf you've got a 50G data set, then postgresql is most likely using\nwhatever memory you give it for shared buffers. top should show that\neasily.\n\nI'd say start at 25% ~ 4G (this is a 64 bit machine, right?). 
That\nleaves plenty of memory for the OS to cache data, and for postgresql\nto allocate work_mem type stuff from.\n", "msg_date": "Sat, 12 Jul 2008 07:05:59 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to estimate shared_buffers..." }, { "msg_contents": "On Sat, 12 Jul 2008, Jessica Richard wrote:\n\n> On a running production machine, we have 900M configured on a 16G-memory Linux host. The db size for all dbs combined is about 50G. There are many transactions going on all the times (deletes, inserts, updates). We do not have a testing environment that has the same setup and the same amount of workload. I want to evaluate on the production host if this 900M is enough. If not, we still have room to go up a little bit to speed up all Postgres activities. I don't know enough about the SA side. I just would imagine, if something like \"top\" command or other tools can measure how much total memory Postgres is actually using (against the configured 900M shared buffers), and if Postgres is using almost 900M all the time, I would take this as an indication that the shared_buffers can go up for another 100M...\n>\n> What is the best way to tell how much memory Postgres (all Postgres related things) is actually using?\n\nthere is a contrib/pg_buffers which can tell you about usage of shared \nmemory. Also, you can estimate how much memory of OS cache occupied by\npostgres files (tables, indexes). Looks on \nhttp://www.kennygorman.com/wordpress/?p=246 for some details.\nI wrote a perl script, which simplifies estimation of OS buffers, but\nit's not yet ready for public.\n\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Sat, 12 Jul 2008 18:30:44 +0400 (MSD)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to estimate shared_buffers..." } ]
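The contrib module referred to above as "pg_buffers" is presumably contrib/pg_buffercache; once it is installed in the database, a query along the lines of the example in the contrib documentation shows how the configured shared_buffers are actually distributed across relations (multiply the buffer counts by the block size, 8kB by default, to get bytes). Note this only covers PostgreSQL's own shared buffers, not the OS page cache:

    SELECT c.relname, count(*) AS buffers
      FROM pg_buffercache b
      JOIN pg_class c ON b.relfilenode = c.relfilenode
      JOIN pg_database d ON b.reldatabase = d.oid
                        AND d.datname = current_database()
     GROUP BY c.relname
     ORDER BY buffers DESC
     LIMIT 20;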
[ { "msg_contents": "On a new system with 64G memory, what is the best starting points for shmmax on OS and shared_buffers for Postgres. All databases combined are about 50G.\n\nI want to evaluate this with the two following scenarios:\n\n1. this machine is running only Postgres, no other applications\n\n2. this machine has postgres plus some other things (such as web server, etc).\n\nWhat are the dangers for shmmax and shared_buffers to be confiugred very big for each of the two scenarios?\n\nIn case #1, if shmmax = 63G and shared_buffers = 50G, can it speed up Postgres (loading almost all dbs into memeory)? \n\nThanks,\nJessica\n\n\n\n \nOn a new system with 64G memory, what is the best starting points for shmmax on OS and shared_buffers for Postgres. All databases combined are about 50G.I want to evaluate this with the two following scenarios:1. this machine is running only  Postgres, no other applications2. this machine has postgres plus some other things (such as web server, etc).What are the dangers for shmmax and shared_buffers to be confiugred very big for each of the two scenarios?In case #1, if shmmax = 63G and shared_buffers = 50G, can it speed up Postgres (loading almost all dbs into memeory)? Thanks,Jessica", "msg_date": "Sat, 12 Jul 2008 05:03:13 -0700 (PDT)", "msg_from": "Jessica Richard <[email protected]>", "msg_from_op": true, "msg_subject": "best starting point..." } ]
[ { "msg_contents": "Hi All,\n I am having a trigger in table, If i update the the table manually it is not taking time(say 200ms per row), But if i update the table through procedure the trigger is taking time to fire(say 7 to 10 seconds per row).\n\nHow can i make the trigger to fire immediately ?\n\nRegards,\nRam\n\n\n\n\n\n\n \n\nHi All,\n    I am having a trigger in table, \nIf i update the the table manually it is not taking time(say 200ms per row), But \nif i update the table through procedure the trigger is taking time to fire(say 7 \nto 10 seconds per row).\n \nHow can i make the trigger to fire immediately \n?\n \nRegards,\nRam", "msg_date": "Mon, 14 Jul 2008 11:29:24 +0530", "msg_from": "\"Ramasubramanian\" <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger is taking time to fire" }, { "msg_contents": "On Sun, Jul 13, 2008 at 11:59 PM, Ramasubramanian\n<[email protected]> wrote:\n>\n> Hi All,\n> I am having a trigger in table, If i update the the table manually it is\n> not taking time(say 200ms per row), But if i update the table through\n> procedure the trigger is taking time to fire(say 7 to 10 seconds per row).\n>\n> How can i make the trigger to fire immediately ?\n\nI'm thinking the user defined function is using a cached query plan.\nTry using execute to see if it runs faster that way.\n", "msg_date": "Mon, 14 Jul 2008 12:08:13 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger is taking time to fire" } ]
[ { "msg_contents": "Hi All,\n I am having a trigger in table, If I update the the table manually trigger is firing immediately(say 200ms per row), But if I update the table through procedure the trigger is taking time to fire(say 7 to 10 seconds per row).\n\nPlease tell me what kind of changes can I make so that trigger fire immediately while updating the table through procedure ?\n\nRegards,\nPraveen\n\n\n\n\n\n\n \nHi All,\n    I am having a trigger in table, \nIf I update the the table manually trigger is firing immediately(say 200ms \nper row), But if I update the table through procedure the trigger is taking \ntime to fire(say 7 to 10 seconds per row).\n \nPlease tell me what kind of changes can I \nmake so that  trigger  fire immediately while updating the table \nthrough procedure ?\n \nRegards,\nPraveen", "msg_date": "Mon, 14 Jul 2008 12:04:49 +0530", "msg_from": "\"Praveen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Trigger is not firing immediately" }, { "msg_contents": "am Mon, dem 14.07.2008, um 12:04:49 +0530 mailte Praveen folgendes:\n> \n> Hi All,\n> I am having a trigger in table, If I update the the table manually trigger\n> is firing immediately(say 200ms per row), But if I update the table through\n> procedure the trigger is taking time to fire(say 7 to 10 seconds per row).\n> \n> Please tell me what kind of changes can I make so that trigger fire\n> immediately while updating the table through procedure ?\n\nShow us more details like source-code of the procedure, the trigger and\na demonstration.\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 14 Jul 2008 08:48:00 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger is not firing immediately" }, { "msg_contents": "Praveen wrote:\n> \n> Hi All,\n> I am having a trigger in table, If I update the the table manually\n> trigger is firing immediately(say 200ms per row), But if I update the\n> table through procedure the trigger is taking time to fire(say 7 to 10\n> seconds per row).\n> \n> Please tell me what kind of changes can I make so that trigger fire\n> immediately while updating the table through procedure ?\n\nSending the same email over and over again isn't going to get you a\nresponse any quicker.\n\nIf you send the details of the trigger and the tables/fields it affects\nthen you might get a more helpful response.\n\n-- \nPostgresql & php tutorials\nhttp://www.designmagick.com/\n", "msg_date": "Mon, 14 Jul 2008 16:52:30 +1000", "msg_from": "Chris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger is not firing immediately" } ]
[ { "msg_contents": "\n Hi all,\n Please find the procedure and trigger which took more time when we try \nto update value in table through Procedure.\n\n CREATE OR REPLACE FUNCTION procname1(args)\n RETURNS void AS\n $BODY$\nDECLARE\n {\n ---\n Some code blocks\n ---\n }\n BEGIN\n\n --> to here it is executing fastly after reaches this statement it's \ntaking time\n\n\n update table1 set col1 = val1 where pk = val2 and col2 = val3;\n\n\n -----> HERE table1 IS HAVING THE TRIGGER I GIVEN BELOW THE TRIGGER( \ntrigger1)\n\n\nexception\n WHEN OTHERS\n THEN\n raise notice '''EXCEPTION IN procname1 BLOCK 3 : %''',SQLERRM;\n NULL;\nEND; $BODY$\n LANGUAGE 'plpgsql' VOLATILE;\n ALTER FUNCTION procname1(args);\n\n\n\n\n CREATE OR REPLACE FUNCTION trigger1()\n RETURNS \"trigger\" AS\n $BODY$\n BEGIN\n\nIF (TG_OP='UPDATE') THEN\n IF( some condition )\n THEN\n BEGIN\n INSERT INTO table2(cols)\n VALUES(values);\n IF NOT FOUND THEN\n NULL;\n END IF;\n EXCEPTION\n WHEN OTHERS\n THEN\n NULL;\n END;\n END IF;\nEND IF;\n RETURN NULL;\n END; $BODY$\n LANGUAGE 'plpgsql' VOLATILE;\nALTER FUNCTION trigger1();\n\n\n> ----- Original Message ----- \n> From: \"A. Kretschmer\" <[email protected]>\n> To: <[email protected]>\n> Sent: Monday, July 14, 2008 12:18 PM\n> Subject: Re: [PERFORM] Trigger is not firing immediately\n>\n>\n>> am Mon, dem 14.07.2008, um 12:04:49 +0530 mailte Praveen folgendes:\n>>>\n>>> Hi All,\n>>> I am having a trigger in table, If I update the the table manually \n>>> trigger\n>>> is firing immediately(say 200ms per row), But if I update the table \n>>> through\n>>> procedure the trigger is taking time to fire(say 7 to 10 seconds per \n>>> row).\n>>>\n>>> Please tell me what kind of changes can I make so that trigger fire\n>>> immediately while updating the table through procedure ?\n>>\n>> Show us more details like source-code of the procedure, the trigger and\n>> a demonstration.\n>>\n>>\n>> Andreas\n>> -- \n>> Andreas Kretschmer\n>> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n>> GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n>>\n>> -- \n>> Sent via pgsql-performance mailing list \n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n>\n> \n\n\n", "msg_date": "Mon, 14 Jul 2008 12:47:12 +0530", "msg_from": "\"Praveen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger is not firing immediately" }, { "msg_contents": "hello\n\nAre you sure, so you don't call trigger recursion?\n\nRegards\nPavel Stehule\n\n2008/7/14 Praveen <[email protected]>:\n>\n> Hi all,\n> Please find the procedure and trigger which took more time when we try to\n> update value in table through Procedure.\n>\n> CREATE OR REPLACE FUNCTION procname1(args)\n> RETURNS void AS\n> $BODY$\n> DECLARE\n> {\n> ---\n> Some code blocks\n> ---\n> }\n> BEGIN\n>\n> --> to here it is executing fastly after reaches this statement it's\n> taking time\n>\n>\n> update table1 set col1 = val1 where pk = val2 and col2 = val3;\n>\n>\n> -----> HERE table1 IS HAVING THE TRIGGER I GIVEN BELOW THE TRIGGER(\n> trigger1)\n>\n>\n> exception\n> WHEN OTHERS\n> THEN\n> raise notice '''EXCEPTION IN procname1 BLOCK 3 : %''',SQLERRM;\n> NULL;\n> END; $BODY$\n> LANGUAGE 'plpgsql' VOLATILE;\n> ALTER FUNCTION procname1(args);\n>\n>\n>\n>\n> CREATE OR REPLACE FUNCTION trigger1()\n> RETURNS \"trigger\" AS\n> $BODY$\n> BEGIN\n>\n> IF (TG_OP='UPDATE') THEN\n> IF( some condition )\n> THEN\n> BEGIN\n> INSERT INTO table2(cols)\n> 
VALUES(values);\n> IF NOT FOUND THEN\n> NULL;\n> END IF;\n> EXCEPTION\n> WHEN OTHERS\n> THEN\n> NULL;\n> END;\n> END IF;\n> END IF;\n> RETURN NULL;\n> END; $BODY$\n> LANGUAGE 'plpgsql' VOLATILE;\n> ALTER FUNCTION trigger1();\n>\n>\n>> ----- Original Message ----- From: \"A. Kretschmer\"\n>> <[email protected]>\n>> To: <[email protected]>\n>> Sent: Monday, July 14, 2008 12:18 PM\n>> Subject: Re: [PERFORM] Trigger is not firing immediately\n>>\n>>\n>>> am Mon, dem 14.07.2008, um 12:04:49 +0530 mailte Praveen folgendes:\n>>>>\n>>>> Hi All,\n>>>> I am having a trigger in table, If I update the the table manually\n>>>> trigger\n>>>> is firing immediately(say 200ms per row), But if I update the table\n>>>> through\n>>>> procedure the trigger is taking time to fire(say 7 to 10 seconds per\n>>>> row).\n>>>>\n>>>> Please tell me what kind of changes can I make so that trigger fire\n>>>> immediately while updating the table through procedure ?\n>>>\n>>> Show us more details like source-code of the procedure, the trigger and\n>>> a demonstration.\n>>>\n>>>\n>>> Andreas\n>>> --\n>>> Andreas Kretschmer\n>>> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n>>> GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n>>>\n>>> --\n>>> Sent via pgsql-performance mailing list\n>>> ([email protected])\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>\n>>\n>>\n>>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 14 Jul 2008 11:33:46 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Trigger is not firing immediately" }, { "msg_contents": "Yes , I am sure it is not trigger recursion.\n----- Original Message ----- \nFrom: \"Pavel Stehule\" <[email protected]>\nTo: \"Praveen\" <[email protected]>\nCc: <[email protected]>; <[email protected]>\nSent: Monday, July 14, 2008 3:03 PM\nSubject: Re: [PERFORM] Trigger is not firing immediately\n\n\n> hello\n>\n> Are you sure, so you don't call trigger recursion?\n>\n> Regards\n> Pavel Stehule\n>\n> 2008/7/14 Praveen <[email protected]>:\n>>\n>> Hi all,\n>> Please find the procedure and trigger which took more time when we try \n>> to\n>> update value in table through Procedure.\n>>\n>> CREATE OR REPLACE FUNCTION procname1(args)\n>> RETURNS void AS\n>> $BODY$\n>> DECLARE\n>> {\n>> ---\n>> Some code blocks\n>> ---\n>> }\n>> BEGIN\n>>\n>> --> to here it is executing fastly after reaches this statement it's\n>> taking time\n>>\n>>\n>> update table1 set col1 = val1 where pk = val2 and col2 = val3;\n>>\n>>\n>> -----> HERE table1 IS HAVING THE TRIGGER I GIVEN BELOW THE TRIGGER(\n>> trigger1)\n>>\n>>\n>> exception\n>> WHEN OTHERS\n>> THEN\n>> raise notice '''EXCEPTION IN procname1 BLOCK 3 : %''',SQLERRM;\n>> NULL;\n>> END; $BODY$\n>> LANGUAGE 'plpgsql' VOLATILE;\n>> ALTER FUNCTION procname1(args);\n>>\n>>\n>>\n>>\n>> CREATE OR REPLACE FUNCTION trigger1()\n>> RETURNS \"trigger\" AS\n>> $BODY$\n>> BEGIN\n>>\n>> IF (TG_OP='UPDATE') THEN\n>> IF( some condition )\n>> THEN\n>> BEGIN\n>> INSERT INTO table2(cols)\n>> VALUES(values);\n>> IF NOT FOUND THEN\n>> NULL;\n>> END IF;\n>> EXCEPTION\n>> WHEN OTHERS\n>> THEN\n>> NULL;\n>> END;\n>> END IF;\n>> END IF;\n>> RETURN NULL;\n>> END; $BODY$\n>> LANGUAGE 'plpgsql' VOLATILE;\n>> ALTER FUNCTION trigger1();\n>>\n>>\n>>> ----- Original Message ----- From: \"A. 
Kretschmer\"\n>>> <[email protected]>\n>>> To: <[email protected]>\n>>> Sent: Monday, July 14, 2008 12:18 PM\n>>> Subject: Re: [PERFORM] Trigger is not firing immediately\n>>>\n>>>\n>>>> am Mon, dem 14.07.2008, um 12:04:49 +0530 mailte Praveen folgendes:\n>>>>>\n>>>>> Hi All,\n>>>>> I am having a trigger in table, If I update the the table manually\n>>>>> trigger\n>>>>> is firing immediately(say 200ms per row), But if I update the table\n>>>>> through\n>>>>> procedure the trigger is taking time to fire(say 7 to 10 seconds per\n>>>>> row).\n>>>>>\n>>>>> Please tell me what kind of changes can I make so that trigger fire\n>>>>> immediately while updating the table through procedure ?\n>>>>\n>>>> Show us more details like source-code of the procedure, the trigger and\n>>>> a demonstration.\n>>>>\n>>>>\n>>>> Andreas\n>>>> --\n>>>> Andreas Kretschmer\n>>>> Kontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\n>>>> GnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n>>>>\n>>>> --\n>>>> Sent via pgsql-performance mailing list\n>>>> ([email protected])\n>>>> To make changes to your subscription:\n>>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>>>\n>>>\n>>>\n>>>\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list \n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n> \n\n\n", "msg_date": "Mon, 14 Jul 2008 15:21:50 +0530", "msg_from": "\"Praveen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Trigger is not firing immediately" } ]
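One way to narrow down where the seconds per row go in the procedure above is to time the statement that fires the trigger with clock_timestamp() (available from 8.2 on) and compare that with how long the same UPDATE takes when run by hand. This is only a diagnostic fragment to splice into the existing function as a nested block, reusing the placeholder names from the posted code:

    DECLARE
        t0 timestamptz;
    BEGIN
        t0 := clock_timestamp();
        UPDATE table1 SET col1 = val1 WHERE pk = val2 AND col2 = val3;
        RAISE NOTICE 'update incl. trigger took %', clock_timestamp() - t0;
    END;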
[ { "msg_contents": "Hi,\n\nWhen trying to to set shared_buffers greater then 3,5 GB on 32 GB x86\nmachine with solaris 10 I running in this error:\nFATAL: requested shared memory size overflows size_t\n\nThe solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.\nPostgresql 8.2.5.\nmax-shm ist allowed to 8GB.\n\nprojmod -s -K \"project.max-shm-memory=(priv,8G,deny)\" user.postgres\n\n\nDoes anybody have an idea?\n\nThanks.\nUwe\n\nHi,When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86 machine with solaris 10 I running in this error:FATAL: requested shared memory size overflows size_tThe solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.\nPostgresql 8.2.5.max-shm ist allowed to 8GB.projmod -s -K \"project.max-shm-memory=(priv,8G,deny)\" user.postgresDoes anybody have an idea?Thanks.\nUwe", "msg_date": "Tue, 15 Jul 2008 12:26:23 +0200", "msg_from": "\"Uwe Bartels\" <[email protected]>", "msg_from_op": true, "msg_subject": "requested shared memory size overflows size_t" }, { "msg_contents": "\"Uwe Bartels\" <[email protected]> writes:\n> When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86\n> machine with solaris 10 I running in this error:\n> FATAL: requested shared memory size overflows size_t\n\n> The solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.\n\nEither it's not really a 64-bit build, or you made an error in your\nmath. What did you try to set shared_buffers to, exactly? Did you\nincrease any other parameters at the same time?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Jul 2008 10:25:53 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t " }, { "msg_contents": "Hey there;\n\nAs Tom notes before maybe you're not using the right postgres. Solaris 10\ncomes with a postgres, but on SPARC it's 32 bit compiled (I can't speak to\nx86 Solaris though).\n\nAssuming that's not the problem, you can be 100% sure if your Postgres\nbinary is actually 64 bit by using the file command on the 'postgres'\nexecutable. A sample from 64 bit SPARC looks like this:\n\npostgres: ELF 64-bit MSB executable SPARCV9 Version 1, UltraSPARC3\nExtensions Required, dynamically linked, not stripped\n\nBut x86 should show something similar. I have run Postgres up to about 8\ngigs of RAM on Solaris without trouble. Anyway, sorry if this is obvious /\nnot helpful but good luck :)\n\nSteve\n\nOn Tue, Jul 15, 2008 at 10:25 AM, Tom Lane <[email protected]> wrote:\n\n> \"Uwe Bartels\" <[email protected]> writes:\n> > When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86\n> > machine with solaris 10 I running in this error:\n> > FATAL: requested shared memory size overflows size_t\n>\n> > The solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.\n>\n> Either it's not really a 64-bit build, or you made an error in your\n> math. What did you try to set shared_buffers to, exactly? Did you\n> increase any other parameters at the same time?\n>\n> regards, tom lane\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHey there;As Tom notes before maybe you're not using the right postgres.  
Solaris 10 comes with a postgres, but on SPARC it's 32 bit compiled (I can't speak to x86 Solaris though).\nAssuming that's not the problem, you can be 100% sure if your Postgres binary is actually 64 bit by using the file command on the 'postgres' executable.  A sample from 64 bit SPARC looks like this:postgres:       ELF 64-bit MSB executable SPARCV9 Version 1, UltraSPARC3 Extensions Required, dynamically linked, not stripped\nBut x86 should show something similar.  I have run Postgres up to about 8 gigs of RAM on Solaris without trouble.  Anyway, sorry if this is obvious / not helpful but good luck :)Steve\nOn Tue, Jul 15, 2008 at 10:25 AM, Tom Lane <[email protected]> wrote:\n\"Uwe Bartels\" <[email protected]> writes:\n> When trying to to set shared_buffers greater then 3,5 GB on 32 GB x86\n> machine with solaris 10 I running in this error:\n> FATAL: requested shared memory size overflows size_t\n\n> The solaris x86 ist 64-bit and the compiled postgres is as well 64-bit.\n\nEither it's not really a 64-bit build, or you made an error in your\nmath.  What did you try to set shared_buffers to, exactly?  Did you\nincrease any other parameters at the same time?\n\n                        regards, tom lane\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 15 Jul 2008 17:26:52 -0400", "msg_from": "\"Stephen Conley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: requested shared memory size overflows size_t" } ]
[ { "msg_contents": "Hi guys,\n\nI've got a query that is running slower on 8.3 than on 8.1 (with \nequivalent server config),\nbecause the join ordering is not the same, at least that's my guess... ;-)\n\nIn 8.1.4, table A had 122880 pages, B 112690 pages and C 80600 pages.\nNow in 8.3.3, table A has only 77560 pages, B 69580 but C remains at \n80600 pages.\n\nIn 8.1 the tables were joined in that way (using explain analyse):\nC join A join B\nnow in 8.3:\nB join A join C\nBeside that, the plan is very similar, but the indexes used are not the \nsame.\n\nCould the number of disk pages of a table influence the\norder in which it is joined, even when it is scanned with an index?\n\nI'm pretty sure it is because of the reduced table sizes,\nsince the server configuration is the same.\n\nThoughts?\n\nThanks,\nPatrick\n\n\n\n\n\n\nHi\nguys,\n\n\nI've got a query that is running slower on 8.3 than on 8.1 (with\nequivalent server config),\n\nbecause the join ordering is not the same, at least that's my guess...\n;-)\n\n\nIn 8.1.4, table A had 122880 pages, B 112690 pages and C 80600 pages.\n\nNow in 8.3.3, table A has only 77560 pages, B 69580 but C remains at\n80600 pages.\n\n\nIn 8.1 the tables were joined in that way (using explain analyse):\n\nC join A join B\n\nnow in 8.3:\n\nB join A join C\n\nBeside that, the plan is very similar, but the indexes used are not the\nsame.\n\n\nCould the number of disk pages of a table influence the\n\norder in which it is joined, even when it is scanned with an index?\n\n\nI'm pretty sure it is because of the reduced table sizes,\n\nsince the server configuration is the same.\n\n\nThoughts?\n\n\nThanks,\n\nPatrick", "msg_date": "Wed, 16 Jul 2008 17:37:58 -0400", "msg_from": "Patrick Vachon <[email protected]>", "msg_from_op": true, "msg_subject": "Difference between 8.1 & 8.3" }, { "msg_contents": "Patrick Vachon wrote:\n\n> Hi guys,\n> \n> I've got a query that is running slower on 8.3 than on 8.1 (with \n> equivalent server config),\n> because the join ordering is not the same, at least that's my guess... ;-)\n> \n> In 8.1.4, table A had 122880 pages, B 112690 pages and C 80600 pages.\n> Now in 8.3.3, table A has only 77560 pages, B 69580 but C remains at \n> 80600 pages.\n> \n> In 8.1 the tables were joined in that way (using explain analyse):\n> C join A join B\n> now in 8.3:\n> B join A join C\n> Beside that, the plan is very similar, but the indexes used are not the \n> same.\n> \n> Could the number of disk pages of a table influence the\n> order in which it is joined, even when it is scanned with an index?\n> \n> I'm pretty sure it is because of the reduced table sizes,\n> since the server configuration is the same.\n> \n> Thoughts?\n\n8.3 has fewer automatic casts to text types; perhaps you have indexes which are not being used because of mismatched types ? Perhaps an EXPLAIN ANALYZE from both, if possible, would clairfy.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n\n\nRE: [PERFORM] Difference between 8.1 & 8.3\n\n\n\nPatrick Vachon wrote:\n\n> Hi guys,\n>\n> I've got a query that is running slower on 8.3 than on 8.1 (with\n> equivalent server config),\n> because the join ordering is not the same, at least that's my guess... ;-)\n>\n> In 8.1.4, table A had 122880 pages, B 112690 pages and C 80600 pages.\n> Now in 8.3.3, table A has only 77560 pages, B 69580 but C remains at\n> 80600 pages.\n>\n> In 8.1 the tables were joined in that way (using explain analyse):\n> C join A join B\n> now in 8.3:\n> B join A join C\n> Beside that, the plan is very similar, but the indexes used are not the\n> same.\n>\n> Could the number of disk pages of a table influence the\n> order in which it is joined, even when it is scanned with an index?\n>\n> I'm pretty sure it is because of the reduced table sizes,\n> since the server configuration is the same.\n>\n> Thoughts?\n\n8.3 has fewer automatic casts to text types; perhaps you have indexes which are not being used because of mismatched types ? Perhaps an EXPLAIN ANALYZE from both, if possible, would clairfy.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Wed, 16 Jul 2008 15:59:42 -0600", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between 8.1 & 8.3" }, { "msg_contents": "Hi,\n\nFinally found the problem.\nTurning off nested loops gave me much better performance on 8.3 than 8.1.\n\nThe problem seems to come from postgresql miscalculation of the number \nof rows returned by nested loops.\nIt is well described in that thread:\nhttp://archives.postgresql.org/pgsql-performance/2008-03/msg00371.php\n\nIn my case, it was by a factor of 20000.\n\nOf course, I can't turn off nested loop in my database,\nit will impact performance on small tables too much...\nSo there is no easy fix for that, it seems,\nbeside playing with per-column statistics-gathering target maybe?\n\nPatrick\n\nGregory Williamson wrote:\n>\n> Patrick Vachon wrote:\n>\n> > Hi guys,\n> >\n> > I've got a query that is running slower on 8.3 than on 8.1 (with\n> > equivalent server config),\n> > because the join ordering is not the same, at least that's my \n> guess... 
;-)\n> >\n> > In 8.1.4, table A had 122880 pages, B 112690 pages and C 80600 pages.\n> > Now in 8.3.3, table A has only 77560 pages, B 69580 but C remains at\n> > 80600 pages.\n> >\n> > In 8.1 the tables were joined in that way (using explain analyse):\n> > C join A join B\n> > now in 8.3:\n> > B join A join C\n> > Beside that, the plan is very similar, but the indexes used are not the\n> > same.\n> >\n> > Could the number of disk pages of a table influence the\n> > order in which it is joined, even when it is scanned with an index?\n> >\n> > I'm pretty sure it is because of the reduced table sizes,\n> > since the server configuration is the same.\n> >\n> > Thoughts?\n>\n> 8.3 has fewer automatic casts to text types; perhaps you have indexes \n> which are not being used because of mismatched types ? Perhaps an \n> EXPLAIN ANALYZE from both, if possible, would clairfy.\n>\n> HTH,\n>\n> Greg Williamson\n> Senior DBA\n> DigitalGlobe\n>\n> Confidentiality Notice: This e-mail message, including any \n> attachments, is for the sole use of the intended recipient(s) and may \n> contain confidential and privileged information and must be protected \n> in accordance with those provisions. Any unauthorized review, use, \n> disclosure or distribution is prohibited. If you are not the intended \n> recipient, please contact the sender by reply e-mail and destroy all \n> copies of the original message.\n>\n> (My corporate masters made me say this.)\n>\n> ------------------------------------------------------------------------\n>\n> No virus found in this incoming message.\n> Checked by AVG. \n> Version: 7.5.524 / Virus Database: 270.5.0/1558 - Release Date: 7/17/2008 9:56 AM\n> \n\n\n\n\n\n\nHi,\n\nFinally found the problem.\nTurning off nested loops gave me much better performance on 8.3 than\n8.1.\n\nThe problem seems to come from postgresql miscalculation of the number\nof rows returned by nested loops.\nIt is well described in that thread:\nhttp://archives.postgresql.org/pgsql-performance/2008-03/msg00371.php\n\nIn my case, it was by a factor of 20000.\n\nOf course, I can't turn off nested loop in my database,\nit will impact performance on small tables too much...\nSo there is no easy fix for that, it seems,\nbeside playing with per-column statistics-gathering target maybe?\n\nPatrick\n\nGregory Williamson wrote:\n\n\n\nRE: [PERFORM] Difference between 8.1 & 8.3\n\nPatrick Vachon wrote:\n\n> Hi guys,\n>\n> I've got a query that is running slower on 8.3 than on 8.1 (with\n> equivalent server config),\n> because the join ordering is not the same, at least that's my\nguess... ;-)\n>\n> In 8.1.4, table A had 122880 pages, B 112690 pages and C 80600\npages.\n> Now in 8.3.3, table A has only 77560 pages, B 69580 but C remains\nat\n> 80600 pages.\n>\n> In 8.1 the tables were joined in that way (using explain analyse):\n> C join A join B\n> now in 8.3:\n> B join A join C\n> Beside that, the plan is very similar, but the indexes used are\nnot the\n> same.\n>\n> Could the number of disk pages of a table influence the\n> order in which it is joined, even when it is scanned with an index?\n>\n> I'm pretty sure it is because of the reduced table sizes,\n> since the server configuration is the same.\n>\n> Thoughts?\n\n8.3 has fewer automatic casts to text types; perhaps you have indexes\nwhich are not being used because of mismatched types ? 
Perhaps an\nEXPLAIN ANALYZE from both, if possible, would clairfy.\n\nHTH,\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments,\nis for the sole use of the intended recipient(s) and may contain\nconfidential and privileged information and must be protected in\naccordance with those provisions. Any unauthorized review, use,\ndisclosure or distribution is prohibited. If you are not the intended\nrecipient, please contact the sender by reply e-mail and destroy all\ncopies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n\nNo virus found in this incoming message.\nChecked by AVG. \nVersion: 7.5.524 / Virus Database: 270.5.0/1558 - Release Date: 7/17/2008 9:56 AM", "msg_date": "Tue, 22 Jul 2008 11:15:31 -0400", "msg_from": "Patrick Vachon <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Difference between 8.1 & 8.3" } ]
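Since the regression above comes from the planner underestimating the rows flowing out of a nested loop, a common first step short of disabling nested loops globally is the one the poster hints at: raise the statistics target on the join and filter columns involved and re-analyze, so 8.3 gets better row estimates. Table and column names here are placeholders for the real ones:

    ALTER TABLE a ALTER COLUMN join_col SET STATISTICS 1000;  -- 1000 is the 8.3 maximum
    ANALYZE a;

If that is not enough, the nested-loop ban can at least be limited to the one problem query's session instead of the whole database:

    SET enable_nestloop = off;
    -- run the slow query here
    RESET enable_nestloop;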
[ { "msg_contents": "Dear all,\n\nAfter setting log_statement='all' at postgres.conf,\nthen i'm rebooting OS [freeBSD or CentOS],\ni can't find where log file created from log_statement='all' located...\nFYI, location of postgres.conf at /var/lib/pgsql/data/postgres.conf\n\nmany thanks..\n\nRegards,\nJoko [SYSTEM]\nPT. Indra Jaya Swastika\nPhone: +62 31 7481388 Ext 201\nhttp://www.ijs.co.id\n\n--\nIf you have any problem with our services ,\nplease contact us at 70468146 or e-mail: [email protected]\nPT Indra Jaya Swastika | Jl. Kalianak Barat 57A | +62-31-7481388\n\n\n\n\n\n\n\nDear all,\n \nAfter setting log_statement='all' at \npostgres.conf,\nthen i'm rebooting OS [freeBSD or \nCentOS],\ni can't find where log file created from \nlog_statement='all' located...\nFYI, location of postgres.conf at \n/var/lib/pgsql/data/postgres.conf\n \nmany thanks..\n \nRegards,Joko [SYSTEM]PT. Indra Jaya \nSwastikaPhone: +62 31 7481388  Ext 201http://www.ijs.co.id", "msg_date": "Thu, 17 Jul 2008 15:43:59 +0700", "msg_from": "\"System/IJS - Joko\" <[email protected]>", "msg_from_op": true, "msg_subject": "log_statement at postgres.conf" }, { "msg_contents": "> After setting log_statement='all' at postgres.conf,\n> then i'm rebooting OS [freeBSD or CentOS],\n> i can't find where log file created from log_statement='all' located...\n> FYI, location of postgres.conf at /var/lib/pgsql/data/postgres.conf\n>\n> many thanks..\n\nI added the following to FreeBSD:\n\n/etc/newsyslog.conf:\n/var/log/postgresql 600 7 * @T00 JC\n\n/etc/syslog.conf:\nlocal0.* /var/log/postgresql\n\n/usr/local/pgsql/data/postgresql.conf:\nlog_destination = 'syslog'\nsyslog_facility = 'LOCAL0'\nsyslog_ident = 'postgres'\nlog_min_duration_statement = 100 # -1 is disabled, 0 logs all\nstatements, in ms.\n\nRemember to touch /var/log/postgresql before restarting syslogd (kill\n-HUP syslog-pid). Chmod 0700 so only root can read the log-file.\nAdjust log_min_duration_statement to your needs.\n\nI found this recipe somewhere, but don't remember where so I can't\ngive credit to the that person.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Thu, 17 Jul 2008 11:29:23 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: log_statement at postgres.conf" }, { "msg_contents": "On Thu, 17 Jul 2008, Claus Guttesen wrote:\n\n>> After setting log_statement='all' at postgres.conf,\n>> then i'm rebooting OS [freeBSD or CentOS],\n>> i can't find where log file created from log_statement='all' located...\n>> FYI, location of postgres.conf at /var/lib/pgsql/data/postgres.conf\n>>\n>> many thanks..\n>\n> I added the following to FreeBSD:\n>\n> /etc/newsyslog.conf:\n> /var/log/postgresql 600 7 * @T00 JC\n>\n> /etc/syslog.conf:\n> local0.* /var/log/postgresql\n>\n> /usr/local/pgsql/data/postgresql.conf:\n> log_destination = 'syslog'\n> syslog_facility = 'LOCAL0'\n> syslog_ident = 'postgres'\n> log_min_duration_statement = 100 # -1 is disabled, 0 logs all\n> statements, in ms.\n>\n> Remember to touch /var/log/postgresql before restarting syslogd (kill\n> -HUP syslog-pid). 
Chmod 0700 so only root can read the log-file.\n> Adjust log_min_duration_statement to your needs.\n>\n> I found this recipe somewhere, but don't remember where so I can't\n> give credit to the that person.\n>\n\nHello,\n\nanother possibility is to have logs stored in a file by just changing \n'redirect_stderr' to 'on' and 'log_destination' to 'stderr'.\n\nThis way, with the default config, all logs sent to stderr will be written \nto 'log_directory' under the name 'log_filename', without having to change \nsyslog.conf (you just need to change postgresql.conf).\n\nAdditionaly, I added 'log_rotation_size = 0' to have on log file per day.\n\nNote that in that case, the log files won't be rotated, you'll need to \ncheck you don't store too many log file after a few months (as the number \nof files will increase every day).\n\n\nNicolas\n", "msg_date": "Thu, 17 Jul 2008 11:40:13 +0200 (CEST)", "msg_from": "Pomarede Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: log_statement at postgres.conf" }, { "msg_contents": ">> I added the following to FreeBSD:\n>>\n>> /etc/newsyslog.conf:\n>> /var/log/postgresql 600 7 * @T00 JC\nmake new file?\n\n>> /etc/syslog.conf:\n>> local0.* /var/log/postgresql\n>>\n>> /usr/local/pgsql/data/postgresql.conf:\n>> log_destination = 'syslog'\n>> syslog_facility = 'LOCAL0'\n>> syslog_ident = 'postgres'\n>> log_min_duration_statement = 100 # -1 is disabled, 0 logs all\n>> statements, in ms.\nI already do this, but i can't find my log file\nFYI, i just wanna to log every SQL statement.\n\n>> Remember to touch /var/log/postgresql before restarting syslogd (kill\n>> -HUP syslog-pid). Chmod 0700 so only root can read the log-file.\n>> Adjust log_min_duration_statement to your needs.\ni don't understand \"to touch /var/log/postgresql\"\n\n> Hello,\n> \n> another possibility is to have logs stored in a file by just changing \n> 'redirect_stderr' to 'on' and 'log_destination' to 'stderr'.\n> \n> This way, with the default config, all logs sent to stderr will be written \n> to 'log_directory' under the name 'log_filename', without having to change \n> syslog.conf (you just need to change postgresql.conf).\n> \n> Additionaly, I added 'log_rotation_size = 0' to have on log file per day.\n> \n> Note that in that case, the log files won't be rotated, you'll need to \n> check you don't store too many log file after a few months (as the number \n> of files will increase every day).\nsetting 'log_destination' to 'stderr' could also log every sql statement happen on my server?\n\nMy mission is to activate 'log_statement' to 'all', so that i can log all sql activity on my database.\n\nRegards,\nJoko [SYSTEM]\nPT. Indra Jaya Swastika\nPhone: +62 31 7481388 Ext 201\nhttp://www.ijs.co.id\n\n--\nIf you have any problem with our services ,\nplease contact us at 70468146 or e-mail: [email protected]\nPT Indra Jaya Swastika | Jl. 
Kalianak Barat 57A | +62-31-7481388\n\n\n\n\n\n\n\n>> I added the following to \nFreeBSD:>>>> /etc/newsyslog.conf:>> \n/var/log/postgresql                     \n600  7     *    @T00  \nJC\nmake new file?\n>> /etc/syslog.conf:>> \nlocal0.*                                        \n/var/log/postgresql>>>> \n/usr/local/pgsql/data/postgresql.conf:>> log_destination = \n'syslog'>> syslog_facility = 'LOCAL0'>> syslog_ident = \n'postgres'>> log_min_duration_statement = \n100        # -1 is disabled, 0 logs \nall>> statements, in ms.\nI already do this, but i can't find my log \nfile\nFYI, i just wanna to log every SQL \nstatement.\n>> Remember to touch /var/log/postgresql before restarting \nsyslogd (kill>> -HUP syslog-pid). Chmod 0700 so only root can read the \nlog-file.>> Adjust log_min_duration_statement to your needs.\ni don't understand \"to touch /var/log/postgresql\"\n> Hello,> > another possibility is to have logs stored in \na file by just changing > 'redirect_stderr' to 'on' and 'log_destination' \nto 'stderr'.> > This way, with the default config, all logs sent \nto stderr will be written > to 'log_directory' under the name \n'log_filename', without having to change > syslog.conf (you just need to \nchange postgresql.conf).> > Additionaly, I added \n'log_rotation_size = 0' to have on log file per day.> > Note that \nin that case, the log files won't be rotated, you'll need to > check you \ndon't store too many log file after a few months (as the number > of \nfiles will increase every day).setting 'log_destination' to 'stderr' could \nalso log every sql statement happen on my server?\n \nMy mission is to activate 'log_statement' to 'all', so that i can log all \nsql activity on my database.\n \nRegards,Joko [SYSTEM]PT. Indra Jaya SwastikaPhone: +62 31 \n7481388  Ext 201http://www.ijs.co.id", "msg_date": "Fri, 18 Jul 2008 12:48:10 +0700", "msg_from": "\"System/IJS - Joko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: log_statement at postgres.conf" }, { "msg_contents": "On Fri, 18 Jul 2008, System/IJS - Joko wrote:\n\n>>> I added the following to FreeBSD:\n>>>\n>>> /etc/newsyslog.conf:\n>>> /var/log/postgresql 600 7 * @T00 JC\n> make new file?\n>\n>>> /etc/syslog.conf:\n>>> local0.* /var/log/postgresql\n>>>\n>>> /usr/local/pgsql/data/postgresql.conf:\n>>> log_destination = 'syslog'\n>>> syslog_facility = 'LOCAL0'\n>>> syslog_ident = 'postgres'\n>>> log_min_duration_statement = 100 # -1 is disabled, 0 logs all\n>>> statements, in ms.\n> I already do this, but i can't find my log file\n> FYI, i just wanna to log every SQL statement.\n>\n>>> Remember to touch /var/log/postgresql before restarting syslogd (kill\n>>> -HUP syslog-pid). 
Chmod 0700 so only root can read the log-file.\n>>> Adjust log_min_duration_statement to your needs.\n> i don't understand \"to touch /var/log/postgresql\"\n>\n>> Hello,\n>>\n>> another possibility is to have logs stored in a file by just changing\n>> 'redirect_stderr' to 'on' and 'log_destination' to 'stderr'.\n>>\n>> This way, with the default config, all logs sent to stderr will be written\n>> to 'log_directory' under the name 'log_filename', without having to change\n>> syslog.conf (you just need to change postgresql.conf).\n>>\n>> Additionaly, I added 'log_rotation_size = 0' to have on log file per day.\n>>\n>> Note that in that case, the log files won't be rotated, you'll need to\n>> check you don't store too many log file after a few months (as the number\n>> of files will increase every day).\n> setting 'log_destination' to 'stderr' could also log every sql statement happen on my server?\n>\n> My mission is to activate 'log_statement' to 'all', so that i can log all sql activity on my database.\n\nThere're 2 points in your question :\n\n - what to log\n - where to log\n\nTo choose 'what' to log in your case, you can change 'log_statement' to \n'all'.\n\nThen, to choose 'where' to log, you can either use the proposal in the \nfirst answer, or change 'log_destination' to 'stderr' and \n'redirect_stderr' to 'on'.\n\nNicolas\n\n", "msg_date": "Fri, 18 Jul 2008 10:16:00 +0200 (CEST)", "msg_from": "Pomarede Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: log_statement at postgres.conf" }, { "msg_contents": "Thx a lot Nicolas,\n\nI finaly success to log query statement because of your simple explanation.\nI have other question:\n1. Is there posibility to automatically logging that statement to table?\n2. All of that statement is come from every database on my server,\n could I know from which database that statement come?\n or at least I can filter to log only from database X ?\n3. If I need to log only changed made on my database, then the value of \n'log_statement' is 'mod' ?\nCMIIW\n\nRegards,\nJoko [SYSTEM]\nPT. Indra Jaya Swastika\nPhone: +62 31 7481388 Ext 201\nhttp://www.ijs.co.id\n\n--sorry for my bad english\n\n----- Original Message ----- \nFrom: \"Pomarede Nicolas\" <[email protected]>\nTo: \"System/IJS - Joko\" <[email protected]>\nCc: <[email protected]>\nSent: Friday, July 18, 2008 3:16 PM\nSubject: Re: [PERFORM] log_statement at postgres.conf\n\n> There're 2 points in your question :\n>\n> - what to log\n> - where to log\n>\n> To choose 'what' to log in your case, you can change 'log_statement' to \n> 'all'.\n>\n> Then, to choose 'where' to log, you can either use the proposal in the \n> first answer, or change 'log_destination' to 'stderr' and \n> 'redirect_stderr' to 'on'.\n>\n> Nicolas\n> \n\n--\nIf you have any problem with our services ,\nplease contact us at 70468146 or e-mail: [email protected]\nPT Indra Jaya Swastika | Jl. Kalianak Barat 57A | +62-31-7481388\n\n\n", "msg_date": "Mon, 21 Jul 2008 13:35:20 +0700", "msg_from": "\"System/IJS - Joko\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: log_statement at postgres.conf" }, { "msg_contents": "On Mon, 21 Jul 2008, System/IJS - Joko wrote:\n\n> Thx a lot Nicolas,\n>\n> I finaly success to log query statement because of your simple explanation.\n> I have other question:\n> 1. Is there posibility to automatically logging that statement to table?\n\nI don't know, never tried that.\n\n> 2. 
All of that statement is come from every database on my server,\n> could I know from which database that statement come?\n> or at least I can filter to log only from database X ?\n\nYou can modify 'log_line_prefix' to add the database name :\nuse '%d %t %p %r ' instead of the default '%t %p %r ' for example.\n\n> 3. If I need to log only changed made on my database, then the value of \n> 'log_statement' is 'mod' ?\n\nyes\n\n\nNicolas\n\n", "msg_date": "Mon, 21 Jul 2008 10:27:35 +0200 (CEST)", "msg_from": "Pomarede Nicolas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: log_statement at postgres.conf" } ]
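A side note on the thread above: the logging settings being discussed can also be read back from a live session, which answers the original "where did the log file go" question without guessing at postgresql.conf. This is a generic sketch, not tied to Joko's particular installation.

    -- What is the server actually configured to do right now?
    SHOW log_statement;
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('log_statement', 'log_destination', 'redirect_stderr',
                   'log_directory', 'log_filename', 'log_line_prefix');

    -- Most logging settings take effect on a configuration reload;
    -- redirect_stderr needs a restart of the server process, but not of the OS.
    SELECT pg_reload_conf();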
[ { "msg_contents": "I have two postgresql servers. One runs 8.3.1, the other 8.3.3. On the 8.3.1 \nmachine, the index scans are being planned extremely low cost:\n\nexplain ANALYZE select * from email_entity where email_thread = 375629157;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_email_entity_thread on email_entity (cost=0.00..4.59 \nrows=1 width=1031) (actual time=0.095..0.120 rows=4 loops=1)\n Index Cond: (email_thread = 375629157)\n Total runtime: 0.207 ms\n(3 rows)\n\n\nBut on the 8.3.3 machine, the index scans are being planned much higher cost:\n\n explain ANALYZE select * from email_entity where email_thread = 375629157;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_email_entity_thread on email_entity (cost=0.00..2218.61 \nrows=1151 width=931) (actual time=0.094..0.111 rows=4 loops=1)\n Index Cond: (email_thread = 375629157)\n Total runtime: 0.253 ms\n(3 rows)\n\n\n\ndiffing the 'show all;' output reveals the following (left side is the low \ncost plan, right side is the high cost plan server):\n\n57c57\n< effective_cache_size | 31800MB | \nSets the planner's assumption about the size of the disk cache.\n---\n> effective_cache_size | 15300MB | \nSets the planner's assumption about the size of the disk cache.\n72c72\n< fsync | on | \nForces synchronization of updates to disk.\n---\n> fsync | off | \nForces synchronization of updates to disk.\n110c110\n< log_line_prefix | | \nControls information prefixed to each log line.\n---\n> log_line_prefix | user=%u,db=%d | \nControls information prefixed to each log line.\n128,129c128,129\n< max_fsm_pages | 2000000 | \nSets the maximum number of disk pages for which free space is tracked.\n< max_fsm_relations | 1000 | \nSets the maximum number of tables and indexes for which free space is tracked.\n---\n> max_fsm_pages | 4000000 | \nSets the maximum number of disk pages for which free space is tracked.\n> max_fsm_relations | 5000 | \nSets the maximum number of tables and indexes for which free space is tracked.\n145,146c145,146\n< server_version | 8.3.1 | \nShows the server version.\n< server_version_num | 80301 | \nShows the server version as an integer.\n---\n> server_version | 8.3.3 | \nShows the server version.\n> server_version_num | 80303 | \nShows the server version as an integer.\n149c149\n< shared_preload_libraries | | \nLists shared libraries to preload into server.\n---\n> shared_preload_libraries | $libdir/plugins/plugin_debugger.so | \nLists shared libraries to preload into server.\n\nDisabling the debugger had no effect on the slow server.\n\nI then thought perhaps this was a difference between 8.3.1 and 8.3.3, so I \nloaded the DB on a separate test machine and tried the query with both 8.3.1 \nand 8.3.3 on the same server:\n\nengage=# show server_version;\n server_version\n----------------\n 8.3.1\n(1 row)\n\n explain ANALYZE select * from email_entity where email_thread = 375629157;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_email_entity_thread on email_entity (cost=0.00..1319.44 \nrows=1183 width=1046) (actual time=0.017..0.022 rows=4 loops=1)\n Index Cond: (email_thread = 375629157)\n Total runtime: 0.054 ms\n(3 rows)\n\n\nengage=# 
show server_version;\n server_version\n----------------\n 8.3.3\n(1 row)\n\n explain ANALYZE select * from email_entity where email_thread = 375629157;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_email_entity_thread on email_entity (cost=0.00..1319.44 \nrows=1183 width=1046) (actual time=0.018..0.022 rows=4 loops=1)\n Index Cond: (email_thread = 375629157)\n Total runtime: 0.055 ms\n(3 rows)\n\nAs you might guess, the reason I started looking at this is that the high cost \nchanges the plan of a more complex query for the worse.\n\nAny idea what might be influencing the plan on the other server? I tried \nincreasing the statistics target on the email_thread column and that helped to \na certain extent. Setting the statistics target to 1000 gets me a good enough \nplan to help the complex query in question:\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using ix_email_entity_thread on email_entity (cost=0.00..26.36 \nrows=12 width=913) (actual time=0.028..0.040 rows=4 loops=1)\n Index Cond: (email_thread = 375629157)\n Total runtime: 0.092 ms\n(3 rows)\n\nBut 26.36 is still not 4.59 like the other server estimates AND the statistics \ntarget on that column is just the default 10 on the server with the 4.59 cost \nestimate.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 17 Jul 2008 14:21:09 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "index scan cost" }, { "msg_contents": "The \"fast\" server makes a much more accurate estimation of the number\nof rows to expect (4 rows are returning, 1 was estimated). The \"slow\"\nserver estimates 1151 rows. Try running ANALYZE on the slow one\n", "msg_date": "Fri, 18 Jul 2008 00:58:23 +0200", "msg_from": "\"Dennis Brakhane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan cost" }, { "msg_contents": "On Fri, 18 Jul 2008, Dennis Brakhane wrote:\n\n> The \"fast\" server makes a much more accurate estimation of the number\n> of rows to expect (4 rows are returning, 1 was estimated). The \"slow\"\n> server estimates 1151 rows. Try running ANALYZE on the slow one\n\nYou're quite right. I probably didn't mention that the slow one has been \nanalyzed several times. In fact, every time adjusted the statistics target \nfor that column I analyzed, thus the eventually better, but still inaccurate \nestimates toward the bottom of the post.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 650-780-7908\tFAX: 650-649-1954\n", "msg_date": "Thu, 17 Jul 2008 16:12:44 -0700 (PDT)", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan cost" }, { "msg_contents": "Jeff Frost <[email protected]> writes:\n> I have two postgresql servers. One runs 8.3.1, the other 8.3.3. 
On the 8.3.1 \n> machine, the index scans are being planned extremely low cost:\n\n> Index Scan using ix_email_entity_thread on email_entity (cost=0.00..4.59 \n> rows=1 width=1031) (actual time=0.095..0.120 rows=4 loops=1)\n> Index Cond: (email_thread = 375629157)\n\n> Index Scan using ix_email_entity_thread on email_entity (cost=0.00..2218.61 \n> rows=1151 width=931) (actual time=0.094..0.111 rows=4 loops=1)\n> Index Cond: (email_thread = 375629157)\n\nThis isn't a \"cost\" problem, this is a \"stats\" problem. Why does the\nsecond server think 1151 rows will be returned? Try comparing the\npg_stats entries for the email_thread column on both servers ... seems\nlike they must be significantly different.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 18 Jul 2008 00:37:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index scan cost " }, { "msg_contents": "I've never gotten a single spam from the Postgres mailing list ... until today. A Chinese company selling consumer products is using this list. I have my filters set to automatically trust this list because it has been so reliable until now. It would be really, really unfortunate if this list fell to the spammers.\n\nCraig\n", "msg_date": "Fri, 18 Jul 2008 08:02:37 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Mailing list hacked by spammer?" }, { "msg_contents": "\nOn Jul 18, 2008, at 4:02 PM, Craig James wrote:\n\n> I've never gotten a single spam from the Postgres mailing list ... \n> until today. A Chinese company selling consumer products is using \n> this list. I have my filters set to automatically trust this list \n> because it has been so reliable until now. It would be really, \n> really unfortunate if this list fell to the spammers.\n\nIt's not been \"hacked by spammers\".\n\nIt's a valid From address, probably coincidentally. Nothing worth \ndiscussing. *Definitely* not something worth discussing on the list.\n\nCheers,\n Steve\n\n", "msg_date": "Fri, 18 Jul 2008 16:41:04 +0100", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mailing list hacked by spammer?" }, { "msg_contents": "Steve Atkins wrote:\n>\n> On Jul 18, 2008, at 4:02 PM, Craig James wrote:\n>\n>> I've never gotten a single spam from the Postgres mailing list ... \n>> until today. A Chinese company selling consumer products is using \n>> this list. I have my filters set to automatically trust this list \n>> because it has been so reliable until now. It would be really, really \n>> unfortunate if this list fell to the spammers.\n>\n> It's not been \"hacked by spammers\".\n>\n> It's a valid From address, probably coincidentally. Nothing worth \n> discussing. *Definitely* not something worth discussing on the list.\n\nKeep in mind that messages from unsubscribed addresses are held up for\nmoderation. A human moderator must then reject it or approve it, and\nhumans make mistakes sometimes.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Fri, 18 Jul 2008 12:21:18 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mailing list hacked by spammer?" }, { "msg_contents": "Tom Lane wrote:\n> Jeff Frost <[email protected]> writes:\n> \n>> I have two postgresql servers. One runs 8.3.1, the other 8.3.3. 
On the 8.3.1 \n>> machine, the index scans are being planned extremely low cost:\n>> \n>\n> \n>> Index Scan using ix_email_entity_thread on email_entity (cost=0.00..4.59 \n>> rows=1 width=1031) (actual time=0.095..0.120 rows=4 loops=1)\n>> Index Cond: (email_thread = 375629157)\n>> \n>\n> \n>> Index Scan using ix_email_entity_thread on email_entity (cost=0.00..2218.61 \n>> rows=1151 width=931) (actual time=0.094..0.111 rows=4 loops=1)\n>> Index Cond: (email_thread = 375629157)\n>> \n>\n> This isn't a \"cost\" problem, this is a \"stats\" problem. Why does the\n> second server think 1151 rows will be returned? Try comparing the\n> pg_stats entries for the email_thread column on both servers ... seems\n> like they must be significantly different.\n> \nSorry it took me a while to close the loop on this. So, the server that \nhad the less desirable plan had actually been analyzed more recently by \nautovacuum. When I went back to compare the stats on the faster server, \nautovacuum had analyzed it and the plan was now more similar. Adjusting \nthe stats target up for that column helped on both servers though it \nnever did get back as close as before.\n\n-- \nJeff Frost, Owner \t<[email protected]>\nFrost Consulting, LLC \thttp://www.frostconsultingllc.com/\nPhone: 916-647-6411\tFAX: 916-405-4032", "msg_date": "Fri, 08 Aug 2008 23:17:23 -0700", "msg_from": "Jeff Frost <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index scan cost" } ]
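For reference, Tom's suggestion to compare the pg_stats entries, and the statistics-target increase Jeff describes, look roughly like this (using the table and column names quoted in the thread; 1000 is the target value Jeff mentions trying):

    -- Run on both servers and compare: these are the numbers the planner
    -- uses to turn "email_thread = 375629157" into a row estimate.
    SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'email_entity'
      AND attname = 'email_thread';

    -- Raise the per-column sample size, then re-analyze the table.
    ALTER TABLE email_entity ALTER COLUMN email_thread SET STATISTICS 1000;
    ANALYZE email_entity;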
[ { "msg_contents": "Dear friend,\n\n * **We are an electronic products wholesaler located in Shenzhen China\n.Our products are of high quality and competitive price. If you want to find\nthe best supplier , we can offer you the most reasonable discount to leave\nyou more profits.\n Sincerely looking forward to your cooperation.*.\n\n Please visit our website: http://www.wabada.com\n\nE-mail : [email protected]\n\nMSN: [email protected]\n\n*Our mainly products: such as **Mobile Phone (Apple iphone , Nokia\nN95, Nokia N96, Nokia N76, Nokia N93i , Nokia 8800, Nokia 6500), Ipod Mp3\nMp4 Player, Ipod, Name Card Mp3, SONY PSP, GPS navigation, Memory Card ,\nBluetooth Headset etc. *\n\n* *Welcome to visit our website! http://www.wabada.com", "msg_date": "Fri, 18 Jul 2008 17:30:33 +0800", "msg_from": "\"Krishna Kumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Hi,Thank you!" } ]
[ { "msg_contents": " hi list,\n\ni have a problem with time consuming query. first of all my table structure:\n\nCREATE TABLE nw_tla_2008_4_deu\n(\n\"ID\" bigint NOT NULL,\n\"NET2CLASS\" smallint,\n\"FOW\" smallint,\nCONSTRAINT nw_tla_2008_4_deu_pkey PRIMARY KEY (\"ID\"),\n)\nWITHOUT OIDS;\n\nCREATE INDEX nw_tla_2008_4_deu_fow_idx\nON nw_tla_2008_4_deu\nUSING btree\n(\"FOW\");\n\nCREATE INDEX nw_tla_2008_4_deu_net2class_idx\nON nw_tla_2008_4_deu\nUSING btree\n(\"NET2CLASS\");\n\nCREATE INDEX nw_tla_2008_4_deu_the_geom_gist\nON nw_tla_2008_4_deu\nUSING gist\n(the_geom gist_geometry_ops);\nALTER TABLE nw_tla_2008_4_deu CLUSTER ON nw_tla_2008_4_deu_the_geom_gist;\n\n\nwhen i run the following query with explain analyze i get the following result:\n\nEXPLAIN\nANALYZE\n\nSELECT\nnw.\"ID\" AS id\n\nFROM\nnw_tla_2008_4_deu AS nw\n\nWHERE\nexpand(st_pointfromtext('POINT(13.7328934 51.049476)',4326), 0.24769615911118054) && nw.the_geom\nAND nw.\"FOW\" IN (1,2,3,4,10,17)\nAND nw.\"NET2CLASS\" IN (0,1,2,3)\n\nBitmap Heap Scan on nw_tla_2008_4_deu nw (cost=35375.52..77994.15 rows=11196 width=8) (actual time=13307.830..13368.969 rows=15425 loops=1)\n\nRecheck Cond: (\"NET2CLASS\" = ANY ('{0,1,2,3}'::integer[]))\n\nFilter: (('0103000020E61000000100000005000000000000C06BF82A40000000A0A0664940000000C06BF82A40000000C009A64940000000E00FF62B40000000C009A64940000000E00FF62B40000000A0A0664940000000C06BF82A40000000A0A0664940'::geometry && the_geom) AND (\"FOW\" = ANY ('{1,2,3,4,10,17}'::integer[])))\n\n-> BitmapAnd (cost=35375.52..35375.52 rows=12614 width=0) (actual time=13307.710..13307.710 rows=0 loops=1)\n\n-> Bitmap Index Scan on nw_tla_2008_4_deu_the_geom_gist (cost=0.00..1759.12 rows=55052 width=0) (actual time=22.452..22.452 rows=52840 loops=1)\n\nIndex Cond: ('0103000020E61000000100000005000000000000C06BF82A40000000A0A0664940000000C06BF82A40000000C009A64940000000E00FF62B40000000C009A64940000000E00FF62B40000000A0A0664940000000C06BF82A40000000A0A0664940'::geometry && the_geom)\n\n-> Bitmap Index Scan on nw_tla_2008_4_deu_net2class_idx (cost=0.00..33610.55 rows=1864620 width=0) (actual time=13284.121..13284.121 rows=2021814 loops=1)\n\nIndex Cond: (\"NET2CLASS\" = ANY ('{0,1,2,3}'::integer[]))\n\nTotal runtime: *13.332* ms\n\n\nrunning the next query which is only slightly different and has one instead of two and conditions leads to the following result\n\nEXPLAIN\nANALYZE\n\nSELECT\nnw.\"ID\" AS id\n\nFROM\nnw_tla_2008_4_deu AS nw\n\nWHERE\nexpand(st_pointfromtext('POINT(13.7328934 51.049476)',4326), 0.24769615911118054) && nw.the_geom\nAND nw.\"FOW\" IN (1,2,3,4,10,17)\n\n\nBitmap Heap Scan on nw_tla_2008_4_deu nw (cost=1771.34..146161.54 rows=48864 width=8) (actual time=23.285..99.493 rows=47723 loops=1)\n\nFilter: (('0103000020E61000000100000005000000000000C06BF82A40000000A0A0664940000000C06BF82A40000000C009A64940000000E00FF62B40000000C009A64940000000E00FF62B40000000A0A0664940000000C06BF82A40000000A0A0664940'::geometry && the_geom) AND (\"FOW\" = ANY ('{1,2,3,4,10,17}'::integer[])))\n\n-> Bitmap Index Scan on nw_tla_2008_4_deu_the_geom_gist (cost=0.00..1759.12 rows=55052 width=0) (actual time=22.491..22.491 rows=52840 loops=1)\n\nIndex Cond: ('0103000020E61000000100000005000000000000C06BF82A40000000A0A0664940000000C06BF82A40000000C009A64940000000E00FF62B40000000C009A64940000000E00FF62B40000000A0A0664940000000C06BF82A40000000A0A0664940'::geometry && the_geom)\n\nTotal runtime: *109*ms\n\n\nso in both querys there are and conditions. 
there are two and conditions in the first query and one and condition in the second query. unfortunately i am not an expert in reading the postgre query plan. basically i am wondering why in the first query a second index scan is done whereas in the second query the second index scan is not done. the second query runs hundred times faster then first one which surprising to me.\n\nany ideas?\n\nregards, stefan\n\n_________________________________________________________________________\nIn 5 Schritten zur eigenen Homepage. Jetzt Domain sichern und gestalten! \nNur 3,99 EUR/Monat! http://www.maildomain.web.de/?mc=021114\n\n", "msg_date": "Fri, 18 Jul 2008 12:28:56 +0200", "msg_from": "Stefan Zweig <[email protected]>", "msg_from_op": true, "msg_subject": "query plan, index scan cost" }, { "msg_contents": "On Jul 18, 2008, at 5:28 AM, Stefan Zweig wrote:\n> CREATE TABLE nw_tla_2008_4_deu\n> (\n> \"ID\" bigint NOT NULL,\n> \"NET2CLASS\" smallint,\n> \"FOW\" smallint,\n> CONSTRAINT nw_tla_2008_4_deu_pkey PRIMARY KEY (\"ID\"),\n> )\n> WITHOUT OIDS;\n\nYou might want to give up on the double-quotes... you'll have to use \nthem everywhere. It'd drive me nuts... :)\n\n> EXPLAIN\n> ANALYZE\n>\n> SELECT\n> nw.\"ID\" AS id\n>\n> FROM\n> nw_tla_2008_4_deu AS nw\n>\n> WHERE\n> expand(st_pointfromtext('POINT(13.7328934 51.049476)',4326), \n> 0.24769615911118054) && nw.the_geom\n> AND nw.\"FOW\" IN (1,2,3,4,10,17)\n> AND nw.\"NET2CLASS\" IN (0,1,2,3)\n<snip>\n> Total runtime: *13.332* ms\n>\n>\n> running the next query which is only slightly different and has one \n> instead of two and conditions leads to the following result\n>\n> EXPLAIN\n> ANALYZE\n>\n> SELECT\n> nw.\"ID\" AS id\n>\n> FROM\n> nw_tla_2008_4_deu AS nw\n>\n> WHERE\n> expand(st_pointfromtext('POINT(13.7328934 51.049476)',4326), \n> 0.24769615911118054) && nw.the_geom\n> AND nw.\"FOW\" IN (1,2,3,4,10,17)\n<snip>\n> Total runtime: *109*ms\n>\n>\n> so in both querys there are and conditions. there are two and \n> conditions in the first query and one and condition in the second \n> query. unfortunately i am not an expert in reading the postgre \n> query plan. basically i am wondering why in the first query a \n> second index scan is done whereas in the second query the second \n> index scan is not done. the second query runs hundred times faster \n> then first one which surprising to me.\n\nThe second index scan wasn't done in the second query because you \ndon't have the second IN clause. And it's actually the 1st query that \nwas faster, because it returned fewer rows (15k instead of 45k).\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 13 Aug 2008 09:50:51 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query plan, index scan cost" } ]
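In Stefan's first plan the dominant cost is the 13-second bitmap index scan on nw_tla_2008_4_deu_net2class_idx that then gets ANDed with the geometry index. Two follow-ups one could try are sketched below; the selectivity check is plain SQL against his table, while the partial index is an untested suggestion with an invented name, not something proposed in the thread.

    -- How selective is the NET2CLASS filter?  If classes 0-3 cover most of
    -- the table, the extra bitmap scan on that index mostly adds work.
    SELECT "NET2CLASS", count(*)
    FROM nw_tla_2008_4_deu
    GROUP BY "NET2CLASS"
    ORDER BY count(*) DESC;

    -- Possible workaround: a partial GiST index that already carries the
    -- common class/FOW restriction, so queries repeating the same IN lists
    -- can be answered from this one index instead of ANDing in the large
    -- net2class bitmap.
    CREATE INDEX nw_tla_2008_4_deu_geom_cls_idx
    ON nw_tla_2008_4_deu
    USING gist (the_geom gist_geometry_ops)
    WHERE "NET2CLASS" IN (0, 1, 2, 3)
      AND "FOW" IN (1, 2, 3, 4, 10, 17);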
[ { "msg_contents": "Hi there,\n\nI have a script which includes 30000 called functions within a single \ntransaction.\n\nAt the beginning, the functions runs fast enough (about 60 ms each). In \ntime, it begins to run slower and slower (at final about one per 2 seconds).\n\nI check the functions that runs slowly outside the script and they run \nnormally (60 ms each).\n\nWhat is the problem ?\n\nTIA,\nSabin \n\n\n", "msg_date": "Fri, 18 Jul 2008 18:34:20 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "long transaction" }, { "msg_contents": "have you use VACUMM?\n\n--- On Fri, 7/18/08, Sabin Coanda <[email protected]> wrote:\n\n> From: Sabin Coanda <[email protected]>\n> Subject: [PERFORM] long transaction\n> To: [email protected]\n> Date: Friday, July 18, 2008, 3:34 PM\n> Hi there,\n> \n> I have a script which includes 30000 called functions\n> within a single \n> transaction.\n> \n> At the beginning, the functions runs fast enough (about 60\n> ms each). In \n> time, it begins to run slower and slower (at final about\n> one per 2 seconds).\n> \n> I check the functions that runs slowly outside the script\n> and they run \n> normally (60 ms each).\n> \n> What is the problem ?\n> \n> TIA,\n> Sabin \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n \n\n", "msg_date": "Fri, 18 Jul 2008 10:22:59 -0700 (PDT)", "msg_from": "Lennin Caro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long transaction" }, { "msg_contents": "No, I cannot use VACUUM inside the transaction, and it seems this is the \nproblem, although autovacuum is set.\n\nHowever I checked the following scenario to find a solution. I call the \n30000 statements without transaction. The performance it not changed. But \nwhen I add VACUUM command after each 20 statement set, I got the linear \nperformance that I want. Unfortunatelly this is not possible inside a \ntransaction.\n\nDo you know how could I solve my problem, keeping the 30000 statements \ninside a single transaction ?\n\nSabin\n\n\n\"Lennin Caro\" <[email protected]> wrote in message \nnews:[email protected]...\n> have you use VACUMM?\n>\n> --- On Fri, 7/18/08, Sabin Coanda <[email protected]> wrote:\n>\n>> From: Sabin Coanda <[email protected]>\n>> Subject: [PERFORM] long transaction\n>> To: [email protected]\n>> Date: Friday, July 18, 2008, 3:34 PM\n>> Hi there,\n>>\n>> I have a script which includes 30000 called functions\n>> within a single\n>> transaction.\n>>\n>> At the beginning, the functions runs fast enough (about 60\n>> ms each). 
In\n>> time, it begins to run slower and slower (at final about\n>> one per 2 seconds).\n>>\n>> I check the functions that runs slowly outside the script\n>> and they run\n>> normally (60 ms each).\n>>\n>> What is the problem ?\n>>\n>> TIA,\n>> Sabin\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list\n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n", "msg_date": "Mon, 11 Aug 2008 09:53:07 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: long transaction" }, { "msg_contents": "On Mon, Aug 11, 2008 at 2:53 AM, Sabin Coanda\n<[email protected]> wrote:\n> No, I cannot use VACUUM inside the transaction, and it seems this is the\n> problem, although autovacuum is set.\n>\n> However I checked the following scenario to find a solution. I call the\n> 30000 statements without transaction. The performance it not changed. But\n> when I add VACUUM command after each 20 statement set, I got the linear\n> performance that I want. Unfortunatelly this is not possible inside a\n> transaction.\n>\n> Do you know how could I solve my problem, keeping the 30000 statements\n> inside a single transaction ?\n\nlong running transactions can be evil. is there a reason why this has\nto run in a single transaction?\n\nmerlin\n", "msg_date": "Mon, 11 Aug 2008 15:03:10 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long transaction" }, { "msg_contents": "> long running transactions can be evil. is there a reason why this has\n> to run in a single transaction?\n\nThis single transaction is used to import new information in a database. I \nneed it because the database cannot be disconected from the users, and the \nwhole new data has to be consistently. There are different constraints that \nare checked during the import.\n\n\n", "msg_date": "Tue, 12 Aug 2008 11:17:12 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: long transaction" }, { "msg_contents": "On Tue, Aug 12, 2008 at 4:17 AM, Sabin Coanda\n<[email protected]> wrote:\n>> long running transactions can be evil. is there a reason why this has\n>> to run in a single transaction?\n>\n> This single transaction is used to import new information in a database. I\n> need it because the database cannot be disconected from the users, and the\n> whole new data has to be consistently. There are different constraints that\n> are checked during the import.\n\nhave you considered importing to a temporary 'holding' table with\ncopy, then doing 'big' sql statements on it to check constraints, etc?\n\nmerlin\n", "msg_date": "Tue, 12 Aug 2008 12:56:29 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long transaction" }, { "msg_contents": ">\n> have you considered importing to a temporary 'holding' table with\n> copy, then doing 'big' sql statements on it to check constraints, etc?\n>\n\nYes I considered it, but the problem is the data is very tight related \nbetween different tables and is important to keep the import order of each \nentity into the database. With other words, the entity imprt serialization \nis mandatory. 
In fact the import script doesn't keep just insert but also \ndelete and update for different entities. So copy is not enough. Also using \n'big' sql statements cannot guarantee the import order.\n\nSabin \n\n\n", "msg_date": "Wed, 13 Aug 2008 09:07:49 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: long transaction" }, { "msg_contents": "On Wed, Aug 13, 2008 at 2:07 AM, Sabin Coanda\n<[email protected]> wrote:\n>>\n>> have you considered importing to a temporary 'holding' table with\n>> copy, then doing 'big' sql statements on it to check constraints, etc?\n>>\n>\n> Yes I considered it, but the problem is the data is very tight related\n> between different tables and is important to keep the import order of each\n> entity into the database. With other words, the entity imprt serialization\n> is mandatory. In fact the import script doesn't keep just insert but also\n> delete and update for different entities. So copy is not enough. Also using\n> 'big' sql statements cannot guarantee the import order.\n\nMore than likely, to solve your problem (outside of buying bigger box\nor hacking fsync) is to rethink your import along the lines of what\nI'm suggesting. You're welcome to give more specific details of\nwhat/how your imports are running, in order to get more specific\nadvice.\n\nmerlin\n", "msg_date": "Sun, 17 Aug 2008 20:04:53 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: long transaction" } ]
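Merlin's staging-table suggestion, spelled out as a rough sketch. Every name below (staging table, columns, target table, file path) is invented for illustration, and the set-based passes would still have to respect the ordering and constraint rules Sabin describes.

    BEGIN;

    CREATE TEMP TABLE import_staging (
        op        char(1),   -- 'I', 'U' or 'D' as given by the source system
        seq       bigint,    -- position in the import, if ordering matters
        entity_id bigint,
        payload   text
    ) ON COMMIT DROP;

    -- Server-side COPY shown here; \copy from psql works without superuser.
    COPY import_staging FROM '/tmp/import.dat';

    -- One set-based pass per operation, e.g. the deletes:
    DELETE FROM target_table t
    USING import_staging s
    WHERE s.op = 'D'
      AND t.entity_id = s.entity_id;

    -- ...similar INSERT ... SELECT and UPDATE ... FROM passes here...

    COMMIT;

Whether this beats 30000 individual function calls depends on how much of the per-row logic can really be expressed as set operations.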
[ { "msg_contents": "Most likely just a forged header or something, hardly hacked though is it. I think you need to do some training: http://www2.b3ta.com/bigquiz/hacker-or-spacker/\n\n\n\n----- Original Message ----\n> From: Craig James <[email protected]>\n> To: [email protected]\n> Sent: Friday, 18 July, 2008 4:02:37 PM\n> Subject: [PERFORM] Mailing list hacked by spammer?\n> \n> I've never gotten a single spam from the Postgres mailing list ... until today. \n> A Chinese company selling consumer products is using this list. I have my \n> filters set to automatically trust this list because it has been so reliable \n> until now. It would be really, really unfortunate if this list fell to the \n> spammers.\n> \n> Craig\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Fri, 18 Jul 2008 16:37:54 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mailing list hacked by spammer?" }, { "msg_contents": "Glyn Astill wrote:\n> Most likely just a forged header or something, hardly hacked\n> though is it.\n\nYes, hack is the correct term. The bad guys have hacked into the major email systems, including gmail, which was the origin of this spam:\n\n http://www.theregister.co.uk/2008/02/25/gmail_captcha_crack/\n\n> I think you need to do some training:\n> http://www2.b3ta.com/bigquiz/hacker-or-spacker/\n\nSending a link to a web site that plays loud rap music is not a friendly way to make your point.\n\nCraig\n\n> \n> \n> \n> ----- Original Message ----\n>> From: Craig James <[email protected]>\n>> To: [email protected]\n>> Sent: Friday, 18 July, 2008 4:02:37 PM\n>> Subject: [PERFORM] Mailing list hacked by spammer?\n>>\n>> I've never gotten a single spam from the Postgres mailing list ... until today. \n>> A Chinese company selling consumer products is using this list. I have my \n>> filters set to automatically trust this list because it has been so reliable \n>> until now. It would be really, really unfortunate if this list fell to the \n>> spammers.\n>>\n>> Craig\n>>\n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> __________________________________________________________\n> Not happy with your email address?.\n> Get the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n> \n\n", "msg_date": "Fri, 18 Jul 2008 10:40:33 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mailing list hacked by spammer?" }, { "msg_contents": "On Fri, Jul 18, 2008 at 10:40:33AM -0700, Craig James wrote:\n\n> Yes, hack is the correct term. The bad guys have hacked into the major email systems, including gmail, which was the origin of this spam:\n>\n> http://www.theregister.co.uk/2008/02/25/gmail_captcha_crack/\n\nThe simple fact is that, as long as we don't reject completely all\nmail from any unsubscribed user, some spam will occasionally get\nthrough. It's humans who have to do the moderation, and sometimes we\nhit the wrong button. 
Sorry.\n\n(Moreover, the trick of foiling captchas and using compromised\nmachines all over the Internet to send spam is hardly \"hacking the\nlist\".)\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Fri, 18 Jul 2008 14:07:37 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Mailing list hacked by spammer?" } ]
[ { "msg_contents": "> Glyn Astill wrote:\n\n> > Most likely just a forged header or something, hardly hacked\n> > though is it.\n> \n> Yes, hack is the correct term. The bad guys have hacked into the major email \n> systems, including gmail, which was the origin of this spam:\n> \n> http://www.theregister.co.uk/2008/02/25/gmail_captcha_crack/\n> \n> > I think you need to do some training:\n> > http://www2.b3ta.com/bigquiz/hacker-or-spacker/\n> \n> Sending a link to a web site that plays loud rap music is not a friendly way to \n> make your point.\n> \n> Craig\n> \n\nWhatever. I see you clicked on the link then, even though it came from a 'hacked' mailing list :-)\n\nweren't you the chap that couldn't figure out how to use the slony tools and threw a wobbler at the developers ...\n\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Fri, 18 Jul 2008 18:41:17 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Mailing list hacked by spammer?" } ]
[ { "msg_contents": "\nI'm trying to run a few basic tests to see what a current machine can \ndeliver (typical workload ETL like, long running aggregate queries, \nmedium size db ~100 to 200GB).\n\nI'm currently checking the system (dd, bonnie++) to see if performances \nare within the normal range but I'm having trouble relating it to \nanything known. Scouting the archives there are more than a few people \nfamiliar with it, so if someone can have a look at those numbers and \nraise a flag where some numbers look very out of range for such system, \nthat would be appreciated. I also added some raw pgbench numbers at the end.\n\n(Many thanks to Greg Smith, his pages was extremely helpful to get \nstarted. Any mistake is mine)\n\nHardware:\n\nSun Fire X4150 x64\n\n2 Quad-Core Intel(R) Xeon(R) X5460 processor (2x6MB L2, 3.16 GHz, 1333 \nMHz FSB)\n16GB of memory (4x2GB PC2-5300 667 MHz ECC fully buffered DDR2 DIMMs)\n\n6x 146GB 10K RPM SAS in RAID10 - for os + data\n2x 146GB 10K RPM SAS in RAID1 - for xlog\nSun StorageTek SAS HBA Internal (Adaptec AAC-RAID)\n\n\nOS is Ubuntu 7.10 x86_64 running 2.6.22-14\nos in on ext3\ndata is on xfs noatime\nxlog is on ext2 noatime\n\n\ndata\n$ time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\n4000000+0 records in\n4000000+0 records out\n32768000000 bytes (33 GB) copied, 152.359 seconds, 215 MB/s\n\nreal 2m36.895s\nuser 0m0.570s\nsys 0m36.520s\n\n$ time dd if=bigfile of=/dev/null bs=8k\n4000000+0 records in\n4000000+0 records out\n32768000000 bytes (33 GB) copied, 114.723 seconds, 286 MB/s\n\nreal 1m54.725s\nuser 0m0.450s\nsys 0m22.060s\n\n\nxlog\n$ time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\n4000000+0 records in\n4000000+0 records out\n32768000000 bytes (33 GB) copied, 389.216 seconds, 84.2 MB/s\n\nreal 6m50.155s\nuser 0m0.420s\nsys 0m26.490s\n\n$ time dd if=bigfile of=/dev/null bs=8k\n4000000+0 records in\n4000000+0 records out\n32768000000 bytes (33 GB) copied, 294.556 seconds, 111 MB/s\n\nreal 4m54.558s\nuser 0m0.430s\nsys 0m23.480s\n\n\n\nbonnie++ -s 32g -n 256\n\ndata:\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\nlid-statsdb-1 32G 101188 98 202523 20 107642 13 88931 88 271576 \n19 980.7 2\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n/sec %CP\n 256 11429 93 +++++ +++ 17492 71 11097 91 +++++ +++ \n2473 11\n\n\n\nxlog\nVersion 1.03 ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \n/sec %CP\nlid-statsdb-1 32G 62973 59 69981 5 35433 4 87977 85 119749 9 \n496.2 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \n/sec %CP\n 256 551 99 +++++ +++ 300935 99 573 99 +++++ +++ \n1384 99\n\npgbench\n\npostgresql 8.2.9 with data and xlog as mentioned above\n\npostgresql.conf:\nshared_buffers = 4GB\ncheckpoint_segments = 8\neffective_cache_size = 8GB\n\nScript running over scaling factor 1 to 1000 and running 3 times pgbench \nwith \"pgbench -t 2000 -c 8 -S pgbench\"\n\nIt's a bit limited and will try to do a much much longer run and \nincrease 
the # of tests and calculate mean and stddev as I have a pretty \nlarge variation for the 3 runs sometimes (typically for the scaling \nfactor at 1000, the runs are respectively 1952, 940, 3162) so the graph \nis pretty ugly.\n\nI get (scaling factor, size of db in MB, middle tps)\n\n1 20 22150\n5 82 22998\n10 160 22301\n20 316 22857\n30 472 23012\n40 629 17434\n50 785 22179\n100 1565 20193\n200 3127 23788\n300 4688 15494\n400 6249 23513\n500 7810 18868\n600 9372 22146\n700 11000 14555\n800 12000 10742\n900 14000 13696\n1000 15000 940\n\ncheers,\n\n-- stephane\n", "msg_date": "Sat, 19 Jul 2008 15:19:43 +0200", "msg_from": "Stephane Bailliez <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)" }, { "msg_contents": "On Sat, 19 Jul 2008, Stephane Bailliez wrote:\n\n> OS is Ubuntu 7.10 x86_64 running 2.6.22-14\n\nNote that I've had some issues with the desktop Ubuntu giving slower \nresults in tests like this than the same kernel release using the stock \nkernel parameters. Haven't had a chance yet to see how the server Ubuntu \nkernel fits into that or exactly what the desktop one is doing wrong yet. \nCould be worse--if you were running any 8.04 I expect your pgbench results \nwould be downright awful.\n\n> data is on xfs noatime\n\nWhile XFS has some interesting characteristics, make sure you're \ncomfortable with the potential issues the journal approach used by that \nfilesystem has. With ext3, you can choose the somewhat risky writeback \nbehavior or not, you're stuck with it in XFS as far as I know. A somewhat \none-sided intro here is at \nhttp://zork.net/~nick/mail/why-reiserfs-is-teh-sukc\n\n> postgresql 8.2.9 with data and xlog as mentioned above\n\nThere are so many known performance issues in 8.2 that are improved in 8.3 \nthat I'd suggest you really should be considering it for a new install at \nthis point.\n\n> Script running over scaling factor 1 to 1000 and running 3 times pgbench with \n> \"pgbench -t 2000 -c 8 -S pgbench\"\n\nIn general, you'll want to use a couple of clients per CPU core for \npgbench tests to get a true look at the scalability. Unfortunately, the \nway the pgbench client runs means that it tends to top out at 20 or 30 \nthousand TPS on read-only tests no matter how many cores you have around. \nBut you may find operations where peak throughput comes at closer to 32 \nclients here rather than just 8.\n\n> It's a bit limited and will try to do a much much longer run and increase the \n> # of tests and calculate mean and stddev as I have a pretty large variation \n> for the 3 runs sometimes (typically for the scaling factor at 1000, the runs \n> are respectively 1952, 940, 3162) so the graph is pretty ugly.\n\nThis is kind of a futile exercise and I wouldn't go crazy trying to \nanalyze here. Having been through that many times, I predict you'll \ndiscover no real value to a more statistically intense analysis. It's not \nlike sampling at more points makes the variation go away, or that the \nvariation itself has some meaning worth analyzing. Really the goal of \npgbench tests should be look at a general trend. 
Looking at your data for \nexample, I'd say the main useful observation to draw from your tests is \nthat performance is steady then drops off sharply once the database itself \nexceeds 10GB, which is a fairly positive statement that you're getting \nsomething out of most of the the 16GB of RAM in the server during this \ntest.\n\nAs far as the rest of your results go, Luke's comment that you may need \nmore than one process to truly see the upper limit of your disk \nperformance is right on target. More useful commentary on that issue I'd \nrecomend is near the end of \nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\n(man does that need to be a smaller URL)\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sun, 20 Jul 2008 19:12:28 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++,\n pgbench)" }, { "msg_contents": "Greg Smith wrote:\n>\n> Note that I've had some issues with the desktop Ubuntu giving slower \n> results in tests like this than the same kernel release using the \n> stock kernel parameters. Haven't had a chance yet to see how the \n> server Ubuntu kernel fits into that or exactly what the desktop one is \n> doing wrong yet. Could be worse--if you were running any 8.04 I expect \n> your pgbench results would be downright awful.\n\nAh interesting. Isn't it a scheduler problem, I thought CFQ was the \ndefault for desktop ?\nI doublechecked the 7.10 server on this box and it's really the deadline \none that is used:\n\ncat /sys/block/sdb/queue/scheduler\nnoop anticipatory [deadline] cfq\n\nDo you have some more pointers on the 8.04 issues you mentioned ? \n(that's deemed to be the next upgrade from ops)\n\n>> postgresql 8.2.9 with data and xlog as mentioned above\n> There are so many known performance issues in 8.2 that are improved in \n> 8.3 that I'd suggest you really should be considering it for a new \n> install at this point.\n\nYes I'd definitely prefer to go 8.3 as well but there are a couple \nreasons for now I have to suck it up:\n- 8.2 is the one in the 7.10 repository.\n- I need plr as well and 8.3-plr debian package does not exist yet.\n\n(I know in both cases we could recompile and install it from there, but ...)\n\n> In general, you'll want to use a couple of clients per CPU core for \n> pgbench tests to get a true look at the scalability. Unfortunately, \n> the way the pgbench client runs means that it tends to top out at 20 \n> or 30 thousand TPS on read-only tests no matter how many cores you \n> have around. But you may find operations where peak throughput comes \n> at closer to 32 clients here rather than just 8.\nok. Make sense.\n\n> As far as the rest of your results go, Luke's comment that you may \n> need more than one process to truly see the upper limit of your disk \n> performance is right on target. More useful commentary on that issue \n> I'd recomend is near the end of \n> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \n>\nYeah I was looking at that url as well. 
Very useful.\n\nThanks for all the info Greg.\n\n-- stephane\n\n", "msg_date": "Mon, 21 Jul 2008 12:11:57 +0200", "msg_from": "Stephane Bailliez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)" }, { "msg_contents": "On Mon, 21 Jul 2008, Stephane Bailliez wrote:\n\n> Isn't it a scheduler problem, I thought CFQ was the default for desktop \n> ?\n\nCFQ/Deadline/AS are I/O scheduler choices. What changed completely in \n2.6.23 is the kernel process scheduler. \nhttp://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt gives \nsome info about the new one.\n\nWhile the switch to CFS has shown great improvements in terms of desktop \nand many server workloads, what I discovered is that the pgbench test \nprogram itself is really incompatible with it. There's a kernel patch \nthat seems to fix the problem at http://lkml.org/lkml/2008/5/27/58 but I \ndon't think it's made it into a release yet.\n\nThis is not to say the kernel itself is unsuitable for running PostgreSQL \nitself, but if you're using pgbench as the program to confirm that I \nexpect you'll be dissapointed with results under the Ubuntu 8.04 kernel. \nIt tops out at around 10,000 TPS running the select-only test for me while \nolder kernels did 3X that much.\n\n> Yes I'd definitely prefer to go 8.3 as well but there are a couple reasons \n> for now I have to suck it up:\n> - 8.2 is the one in the 7.10 repository.\n> - I need plr as well and 8.3-plr debian package does not exist yet.\n> (I know in both cases we could recompile and install it from there, but ...)\n\nStop and think about this for a minute. You're going into production with \nan older version having a set of known, impossible to work around issues \nthat if you hit them the response will be \"upgrade to 8.3 to fix that\", \nwhich will require the major disruption to your application of a database \ndump and reload at that point if that fix becomes critical. And you can't \njust do that now because of some packaging issues? I hope you can impress \nupon the other people involved how incredibly short-sighted that is.\n\nUnfortunately, it's harder than everyone would like to upgrade an existing \nPostgreSQL installation. That really argues for going out of your way ir \nnecessary to deploy the latest stable release when you're building \nsomething new, if there's not some legacy bits seriously holding you back.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 21 Jul 2008 14:43:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++,\n pgbench)" }, { "msg_contents": "\n[...]\n\n> Yes I'd definitely prefer to go 8.3 as well but there are a couple\n> reasons for now I have to suck it up:\n> - 8.2 is the one in the 7.10 repository.\n> - I need plr as well and 8.3-plr debian package does not exist yet.\n>\n> (I know in both cases we could recompile and install it from there,\n> but ...)\n\nAt least on debian it was quite easy to \"backport\" 8.3.3 from sid\nto etch using apt-get's source and build-dep functions. 
That way\nyou get a normal installable package.\n\nI'm not sure, but given the similarity I would guess it won't be\nmuch harder on ubuntu.\n\n// Emil\n\n", "msg_date": "Tue, 22 Jul 2008 01:20:52 +0200", "msg_from": "Emil Pedersen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd,\n bonnie++,\tpgbench)" }, { "msg_contents": "\n\n--On tisdag, juli 22, 2008 01.20.52 +0200 Emil Pedersen \n<[email protected]> wrote:\n\n>\n> [...]\n>\n>> Yes I'd definitely prefer to go 8.3 as well but there are a couple\n>> reasons for now I have to suck it up:\n>> - 8.2 is the one in the 7.10 repository.\n>> - I need plr as well and 8.3-plr debian package does not exist yet.\n>>\n>> (I know in both cases we could recompile and install it from there,\n>> but ...)\n>\n> At least on debian it was quite easy to \"backport\" 8.3.3 from sid\n> to etch using apt-get's source and build-dep functions. That way\n> you get a normal installable package.\n>\n> I'm not sure, but given the similarity I would guess it won't be\n> much harder on ubuntu.\n\nI should have said that I was talking about the postgresql, I\nmissed the plr part. I appologize for the noice.\n\n// Emil\n\n", "msg_date": "Tue, 22 Jul 2008 01:34:43 +0200", "msg_from": "Emil Pedersen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64\n (dd,\tbonnie++,\tpgbench)" }, { "msg_contents": "Emil Pedersen <[email protected]> writes:\n>> At least on debian it was quite easy to \"backport\" 8.3.3 from sid\n>> to etch using apt-get's source and build-dep functions. That way\n>> you get a normal installable package.\n\n> I should have said that I was talking about the postgresql, I\n> missed the plr part. I appologize for the noice.\n\nStill, there's not normally that much difference between the packaging\nfor one version and for the next. I can't imagine that it would take\nmuch time to throw together a package for 8.3 plr based on what you're\nusing for 8.2. All modern package-based distros make this pretty easy.\nThe only reason not to do it would be if you're buying support from\na vendor who will only support specific package versions...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Jul 2008 20:06:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench) " }, { "msg_contents": "Greg Smith wrote:\n> CFQ/Deadline/AS are I/O scheduler choices. What changed completely in \n> 2.6.23 is the kernel process scheduler. \n> http://people.redhat.com/mingo/cfs-scheduler/sched-design-CFS.txt \n> gives some info about the new one.\n>\n> While the switch to CFS has shown great improvements in terms of \n> desktop and many server workloads, what I discovered is that the \n> pgbench test program itself is really incompatible with it. There's a \n> kernel patch that seems to fix the problem at \n> http://lkml.org/lkml/2008/5/27/58 but I don't think it's made it into \n> a release yet.\n>\n> This is not to say the kernel itself is unsuitable for running \n> PostgreSQL itself, but if you're using pgbench as the program to \n> confirm that I expect you'll be dissapointed with results under the \n> Ubuntu 8.04 kernel. It tops out at around 10,000 TPS running the \n> select-only test for me while older kernels did 3X that much.\n\nok, thanks for all the details. good to know.\n\n> Stop and think about this for a minute. 
You're going into production \n> with an older version having a set of known, impossible to work around \n> issues that if you hit them the response will be \"upgrade to 8.3 to \n> fix that\", which will require the major disruption to your application \n> of a database dump and reload at that point if that fix becomes \n> critical. And you can't just do that now because of some packaging \n> issues? I hope you can impress upon the other people involved how \n> incredibly short-sighted that is.\n\nI understand what you're saying. However if I were to play devil's \nadvocate, the existing one that I'm 'migrating' (read entirely changing \nschemas, 'migrating' data) is coming out from a 8.1.11 install. It is \nnot a critical system. The source data is always available from another \nsystem and the postgresql system would be a 'client'. So if 8.2.x is so \nabysmal it should not even be considered for install compared to 8.1.x \nand that only 8.3.x is viable then ok that makes sense and I have to go \nthe extra mile.\n\nBut message received loud and clear. Conveniently 8.3.3 is also \navailable on backports so it does not cost much and pinning it will be \nand pinning it is right now. (don't think there will be any pb with plr, \neven though the original seems to be patched a bit, but that will be for \nlater when I don't know what to do and that all is ready).\n\nFor the sake of completeness (even though irrelevant), here's the run \nwith 32 clients on 8.3 same config as before (except max_fsm_pages at \n204800)\n\n1 19 36292\n100 1499 32127\n200 2994 30679\n300 4489 29673\n400 5985 18627\n500 7480 19714\n600 8975 19437\n700 10000 20271\n800 12000 18038\n900 13000 9842\n1000 15000 5996\n1200 18000 5404\n1400 20000 3701\n1600 23000 2877\n1800 26000 2657\n2000 29000 2612\n\ncheers,\n\n-- stephane\n", "msg_date": "Tue, 22 Jul 2008 08:46:08 +0200", "msg_from": "Stephane Bailliez <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)" }, { "msg_contents": "On Tue, 22 Jul 2008, Stephane Bailliez wrote:\n\n> I'm 'migrating' (read entirely changing schemas, 'migrating' data) is \n> coming out from a 8.1.11 install. It is not a critical system. The \n> source data is always available from another system and the postgresql \n> system would be a 'client'. So if 8.2.x is so abysmal it should not even \n> be considered for install compared to 8.1.x and that only 8.3.x is \n> viable then ok that makes sense and I have to go the extra mile.\n\n8.2 is a big improvement over the 8.1 you're on now, and 8.3 is a further \nimprovement. If the system isn't critical, it doesn't sound like doing a \nlater 8.2->8.3 upgrade (rather than going right to 8.3 now) will be a big \ndeal for you. Just wanted you to be aware that upgrading larger installs \ngets tricky sometimes, so it's best to avoid that if you could just do \nmore up-front work instead to start on a later version if practical.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 22 Jul 2008 13:21:59 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++,\n pgbench)" } ]
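(As a rough sketch of the client-count sweep suggested in the thread above: the loop below assumes an already-initialized pgbench database named "pgbench" on an 8-core host like the X4150 under discussion; the client counts, the 2000-transaction run length and the log file name are illustrative choices, not taken from the original posts.)

#!/bin/sh
# Repeat the read-only pgbench test at several client counts, three runs each,
# so per-core scaling can be read off the raw output afterwards.
for c in 4 8 16 32; do
    for run in 1 2 3; do
        echo "== clients=$c run=$run ==" >> pgbench-sweep.log
        pgbench -S -c $c -t 2000 pgbench >> pgbench-sweep.log 2>&1
    done
done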
[ { "msg_contents": "pgbench is unrelated to the workload you are concerned with if ETL/ELT and decision support / data warehousing queries are your target. \r\n\r\nAlso - placing the xlog on dedicated disks is mostly irrelevant to data warehouse / decision support work or ELT. If you need to maximize loading speed while concurrent queries are running, it may be necessary, but I think you'll be limited in load speed by CPU related to data formatting anyway.\r\n\r\nThe primary performance driver for ELT / DW is sequential transfer rate, thus the dd test at 2X memory. With six data disks of this type, you should expect a maximum of around 6 x 80 = 480 MB/s. With RAID10, depending on the raid adapter, you may need to have two or more IO streams to use all platters, otherwise your max speed for one query would be 1/2 that, or 240 MB/s.\r\n\r\nI'd suggest RAID5, or even better, configure all eight disks as a JBOD in the RAID adapter and run ZFS RAIDZ. You would then expect to get about 7 x 80 = 560 MB/s on your single query.\r\n\r\nThat said, your single cpu on one query will only be able to scan that data at about 300 MB/s (try running a SELECT COUNT(*) against a table that is 2X memory size).\r\n\r\n- Luke\r\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Sat Jul 19 09:19:43 2008\r\nSubject: [PERFORM] Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)\r\n\r\n\r\nI'm trying to run a few basic tests to see what a current machine can \r\ndeliver (typical workload ETL like, long running aggregate queries, \r\nmedium size db ~100 to 200GB).\r\n\r\nI'm currently checking the system (dd, bonnie++) to see if performances \r\nare within the normal range but I'm having trouble relating it to \r\nanything known. Scouting the archives there are more than a few people \r\nfamiliar with it, so if someone can have a look at those numbers and \r\nraise a flag where some numbers look very out of range for such system, \r\nthat would be appreciated. I also added some raw pgbench numbers at the end.\r\n\r\n(Many thanks to Greg Smith, his pages was extremely helpful to get \r\nstarted. 
Any mistake is mine)\r\n\r\nHardware:\r\n\r\nSun Fire X4150 x64\r\n\r\n2 Quad-Core Intel(R) Xeon(R) X5460 processor (2x6MB L2, 3.16 GHz, 1333 \r\nMHz FSB)\r\n16GB of memory (4x2GB PC2-5300 667 MHz ECC fully buffered DDR2 DIMMs)\r\n\r\n6x 146GB 10K RPM SAS in RAID10 - for os + data\r\n2x 146GB 10K RPM SAS in RAID1 - for xlog\r\nSun StorageTek SAS HBA Internal (Adaptec AAC-RAID)\r\n\r\n\r\nOS is Ubuntu 7.10 x86_64 running 2.6.22-14\r\nos in on ext3\r\ndata is on xfs noatime\r\nxlog is on ext2 noatime\r\n\r\n\r\ndata\r\n$ time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\r\n4000000+0 records in\r\n4000000+0 records out\r\n32768000000 bytes (33 GB) copied, 152.359 seconds, 215 MB/s\r\n\r\nreal 2m36.895s\r\nuser 0m0.570s\r\nsys 0m36.520s\r\n\r\n$ time dd if=bigfile of=/dev/null bs=8k\r\n4000000+0 records in\r\n4000000+0 records out\r\n32768000000 bytes (33 GB) copied, 114.723 seconds, 286 MB/s\r\n\r\nreal 1m54.725s\r\nuser 0m0.450s\r\nsys 0m22.060s\r\n\r\n\r\nxlog\r\n$ time sh -c \"dd if=/dev/zero of=bigfile bs=8k count=4000000 && sync\"\r\n4000000+0 records in\r\n4000000+0 records out\r\n32768000000 bytes (33 GB) copied, 389.216 seconds, 84.2 MB/s\r\n\r\nreal 6m50.155s\r\nuser 0m0.420s\r\nsys 0m26.490s\r\n\r\n$ time dd if=bigfile of=/dev/null bs=8k\r\n4000000+0 records in\r\n4000000+0 records out\r\n32768000000 bytes (33 GB) copied, 294.556 seconds, 111 MB/s\r\n\r\nreal 4m54.558s\r\nuser 0m0.430s\r\nsys 0m23.480s\r\n\r\n\r\n\r\nbonnie++ -s 32g -n 256\r\n\r\ndata:\r\nVersion 1.03 ------Sequential Output------ --Sequential Input- \r\n--Random-\r\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \r\n--Seeks--\r\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \r\n/sec %CP\r\nlid-statsdb-1 32G 101188 98 202523 20 107642 13 88931 88 271576 \r\n19 980.7 2\r\n ------Sequential Create------ --------Random \r\nCreate--------\r\n -Create-- --Read--- -Delete-- -Create-- --Read--- \r\n-Delete--\r\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \r\n/sec %CP\r\n 256 11429 93 +++++ +++ 17492 71 11097 91 +++++ +++ \r\n2473 11\r\n\r\n\r\n\r\nxlog\r\nVersion 1.03 ------Sequential Output------ --Sequential Input- \r\n--Random-\r\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \r\n--Seeks--\r\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP \r\n/sec %CP\r\nlid-statsdb-1 32G 62973 59 69981 5 35433 4 87977 85 119749 9 \r\n496.2 1\r\n ------Sequential Create------ --------Random \r\nCreate--------\r\n -Create-- --Read--- -Delete-- -Create-- --Read--- \r\n-Delete--\r\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP \r\n/sec %CP\r\n 256 551 99 +++++ +++ 300935 99 573 99 +++++ +++ \r\n1384 99\r\n\r\npgbench\r\n\r\npostgresql 8.2.9 with data and xlog as mentioned above\r\n\r\npostgresql.conf:\r\nshared_buffers = 4GB\r\ncheckpoint_segments = 8\r\neffective_cache_size = 8GB\r\n\r\nScript running over scaling factor 1 to 1000 and running 3 times pgbench \r\nwith \"pgbench -t 2000 -c 8 -S pgbench\"\r\n\r\nIt's a bit limited and will try to do a much much longer run and \r\nincrease the # of tests and calculate mean and stddev as I have a pretty \r\nlarge variation for the 3 runs sometimes (typically for the scaling \r\nfactor at 1000, the runs are respectively 1952, 940, 3162) so the graph \r\nis pretty ugly.\r\n\r\nI get (scaling factor, size of db in MB, middle tps)\r\n\r\n1 20 22150\r\n5 82 22998\r\n10 160 22301\r\n20 316 22857\r\n30 472 23012\r\n40 629 17434\r\n50 785 22179\r\n100 1565 20193\r\n200 3127 23788\r\n300 4688 15494\r\n400 6249 23513\r\n500 7810 
18868\r\n600 9372 22146\r\n700 11000 14555\r\n800 12000 10742\r\n900 14000 13696\r\n1000 15000 940\r\n\r\ncheers,\r\n\r\n-- stephane\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Sat, 19 Jul 2008 09:59:19 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)" }, {
"msg_contents": "Luke Lonergan wrote:\n>\n> pgbench is unrelated to the workload you are concerned with if ETL/ELT \n> and decision support / data warehousing queries are your target.\n>\n> Also - placing the xlog on dedicated disks is mostly irrelevant to \n> data warehouse / decision support work or ELT.  If you need to \n> maximize loading speed while concurrent queries are running, it may be \n> necessary, but I think you'll be limited in load speed by CPU related \n> to data formatting anyway.\n>\nIndeed. pgbench was mostly done as 'informative' and not really relevant \nto the future workload of this db. (given the queries it's doing not \nsure it's relevant for anything but connections speed,\ninteresting for me to get reference for tx like however). I was more \ninterested in the raw disk performance.\n\n>\n> The primary performance driver for ELT / DW is sequential transfer \n> rate, thus the dd test at 2X memory.  With six data disks of this \n> type, you should expect a maximum of around 6 x 80 = 480 MB/s.  With \n> RAID10, depending on the raid adapter, you may need to have two or \n> more IO streams to use all platters, otherwise your max speed for one \n> query would be 1/2 that, or 240 MB/s.\n>\nok, which seems to be in par with what I'm getting. (the 240 that is)\n\n>\n> I'd suggest RAID5, or even better, configure all eight disks as a JBOD \n> in the RAID adapter and run ZFS RAIDZ. 
You would then expect to get \n> about 7 x 80 = 560 MB/s on your single query.\n>\nDo you have a particular controller and disk hardware configuration in \nmind when you're suggesting RAID5 ?\nMy understanding was it was more difficult to find the right hardware to \nget performance on RAID5 compared to RAID10.\n\n>\n> That said, your single cpu on one query will only be able to scan that \n> data at about 300 MB/s (try running a SELECT COUNT(*) against a table \n> that is 2X memory size).\n>\nNote quite 2x memory size, but ~26GB (accounts with scaling factor 2000):\n\n$ time psql -c \"select count(*) from accounts\" pgbench\n count\n-----------\n 200000000\n(1 row)\n\nreal 1m52.050s\nuser 0m0.020s\nsys 0m0.020s\n\n\nNB: For the sake of completness, reran the pgbench by taking average of \n10 runs for each scaling factor (same configuration as per initial mail, \ncolumns are scaling factor, db size, average tps)\n\n1 20 23451\n100 1565 21898\n200 3127 20474\n300 4688 20003\n400 6249 20637\n500 7810 16434\n600 9372 15114\n700 11000 14595\n800 12000 16090\n900 14000 14894\n1000 15000 3071\n1200 18000 3382\n1400 21000 1888\n1600 24000 1515\n1800 27000 1435\n2000 30000 1354\n\n-- stephane\n", "msg_date": "Mon, 21 Jul 2008 10:53:14 +0200", "msg_from": "Stephane Bailliez <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++, pgbench)" }, { "msg_contents": "Hi Stephane,\n\nOn 7/21/08 1:53 AM, \"Stephane Bailliez\" <[email protected]> wrote:\n\n>> I'd suggest RAID5, or even better, configure all eight disks as a JBOD\n>> in the RAID adapter and run ZFS RAIDZ. You would then expect to get\n>> about 7 x 80 = 560 MB/s on your single query.\n>> \n> Do you have a particular controller and disk hardware configuration in\n> mind when you're suggesting RAID5 ?\n> My understanding was it was more difficult to find the right hardware to\n> get performance on RAID5 compared to RAID10.\n\nIf you're running RAIDZ on ZFS, the controller you have should be fine.\nJust configure the HW RAID controller to treat the disks as JBOD (eight\nindividual disks), then make a single RAIDZ zpool of the eight disks. This\nwill run them in a robust SW RAID within Solaris. The fault management is\nsuperior to what you would otherwise have in your HW RAID and the\nperformance should be much better.\n\n- Luke\n\n", "msg_date": "Mon, 21 Jul 2008 01:57:52 -0700", "msg_from": "Luke Lonergan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on Sun Fire X4150 x64 (dd, bonnie++,\n pgbench)" } ]
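(To make the single-process scan-rate check from this thread easy to repeat, here is a small psql sketch; it assumes the pgbench "accounts" table used above, and pg_relation_size/pg_size_pretty, which also appear later in this archive, are available since 8.1. Dividing the reported size by the elapsed time gives the per-process sequential scan rate Luke refers to; the ~26 GB accounts table scanned in 112 s above works out to roughly 240 MB/s.)

-- Single-process sequential scan rate: relation size divided by elapsed time.
\timing
SELECT pg_size_pretty(pg_relation_size('accounts')) AS accounts_size;
SELECT count(*) FROM accounts;
\timing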
[ { "msg_contents": "Dear PostgreSQL community,\n\nfirst some info about our application:\n\n- Online course directory for a University\n- Amount of data: complete dump is 27 MB\n- Semester is part of primary key in each table\n- Data for approx. 10 semesters stored in the DB\n- Read-only access from web application (JDBC)\n\nOur client has asked us if the performance of the application could be\nimproved by moving the data from previous years to a separate \"archive\"\napplication. This would reduce the overall amount of data in the main\napplication by about 80% at the moment.\n\nActually I doubt that this will have the desired effect, since the\nsemester is part of the primary key in virtually all tables (apart from\nsome small tables containing string constants etc.), and therefore\nindexed. Some tests with EXPLAIN ANALYZE and some web tests (JMeter)\nseem to confirm this, the queries showed the same performance with 2 and\n10 semesters.\n\nBut since I'm not sure yet, I would very much appreciate any answers to\nthe following questions:\n\n- Do you think the approach (reducing the data) is effective?\n- Are there any particular tests which I should do?\n\nThanks a lot in advance!\n\n-- Andreas\n\n\n\n-- \nAndreas Hartmann, CTO\nBeCompany GmbH\nhttp://www.becompany.ch\nTel.: +41 (0) 43 818 57 01\n", "msg_date": "Mon, 21 Jul 2008 12:50:42 +0200", "msg_from": "Andreas Hartmann <[email protected]>", "msg_from_op": true, "msg_subject": "Less rows -> better performance?" }, { "msg_contents": "Andreas Hartmann wrote:\n> Dear PostgreSQL community,\n> \n> first some info about our application:\n> \n> - Online course directory for a University\n> - Amount of data: complete dump is 27 MB\n> - Semester is part of primary key in each table\n> - Data for approx. 10 semesters stored in the DB\n> - Read-only access from web application (JDBC)\n> \n> Our client has asked us if the performance of the application could be\n> improved by moving the data from previous years to a separate \"archive\"\n> application. \n\nIf you had 27GB of data maybe, but you've only got 27MB - that's \npresumably all sitting in memory.\n\nWhat in particular is slow?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 21 Jul 2008 12:06:11 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" }, { "msg_contents": "Richard, thanks for your reply!\n\nRichard Huxton schrieb:\n> Andreas Hartmann wrote:\n>> Dear PostgreSQL community,\n>>\n>> first some info about our application:\n>>\n>> - Online course directory for a University\n>> - Amount of data: complete dump is 27 MB\n>> - Semester is part of primary key in each table\n>> - Data for approx. 10 semesters stored in the DB\n>> - Read-only access from web application (JDBC)\n>>\n>> Our client has asked us if the performance of the application could be\n>> improved by moving the data from previous years to a separate \"archive\"\n>> application. 
\n> \n> If you had 27GB of data maybe, but you've only got 27MB - that's \n> presumably all sitting in memory.\n\nHere's some info about the actual amount of data:\n\nSELECT pg_database.datname,\npg_size_pretty(pg_database_size(pg_database.datname)) AS size\nFROM pg_database where pg_database.datname = 'vvz_live_1';\n\n datname | size\n---------------+---------\n vvz_live_1 | 2565 MB\n\nI wonder why the actual size is so much bigger than the data-only dump - \nis this because of index data etc.?\n\n\n> What in particular is slow?\n\nThere's no particular bottleneck (at least that we're aware of). During \nthe first couple of days after the beginning of the semester the \napplication request processing tends to slow down due to the high load \n(many students assemble their schedule). The customer upgraded the \nhardware (which already helped a lot), but they asked us to find further \napproaches to performance optimiziation.\n\n-- Andreas\n\n\n-- \nAndreas Hartmann, CTO\nBeCompany GmbH\nhttp://www.becompany.ch\nTel.: +41 (0) 43 818 57 01\n", "msg_date": "Mon, 21 Jul 2008 13:25:30 +0200", "msg_from": "Andreas Hartmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Less rows -> better performance?" }, { "msg_contents": "On Mon, Jul 21, 2008 at 1:25 PM, Andreas Hartmann <[email protected]> wrote:\n> SELECT pg_database.datname,\n> pg_size_pretty(pg_database_size(pg_database.datname)) AS size\n> FROM pg_database where pg_database.datname = 'vvz_live_1';\n>\n> datname | size\n> ---------------+---------\n> vvz_live_1 | 2565 MB\n>\n> I wonder why the actual size is so much bigger than the data-only dump - is\n> this because of index data etc.?\n\nMore probably because the database is totally bloated. Do you run\nVACUUM regularly or did you set up autovacuum?\n\n-- \nGuillaume\n", "msg_date": "Mon, 21 Jul 2008 13:32:37 +0200", "msg_from": "\"Guillaume Smet\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" }, { "msg_contents": "Andreas Hartmann wrote:\n> \n> Here's some info about the actual amount of data:\n> \n> SELECT pg_database.datname,\n> pg_size_pretty(pg_database_size(pg_database.datname)) AS size\n> FROM pg_database where pg_database.datname = 'vvz_live_1';\n> \n> datname | size\n> ---------------+---------\n> vvz_live_1 | 2565 MB\n> \n> I wonder why the actual size is so much bigger than the data-only dump - \n> is this because of index data etc.?\n\nI suspect Guillame is right and you've not been vacuuming. That or \nyou've got a *LOT* of indexes. If the database is only 27MB dumped, I'd \njust dump/restore it.\n\nSince the database is read-only it might be worth running CLUSTER on the \n main tables if there's a sensible ordering for them.\n\n>> What in particular is slow?\n> \n> There's no particular bottleneck (at least that we're aware of). During \n> the first couple of days after the beginning of the semester the \n> application request processing tends to slow down due to the high load \n> (many students assemble their schedule). The customer upgraded the \n> hardware (which already helped a lot), but they asked us to find further \n> approaches to performance optimiziation.\n\n1. Cache sensibly at the application (I should have thought there's \nplenty of opportunity here).\n2. Make sure you're using a connection pool and have sized it reasonably \n(try 4,8,16 see what loads you can support).\n3. Use prepared statements where it makes sense. 
Not sure how you'll \nmanage the interplay between this and connection pooling in JDBC. Not a \nJava man I'm afraid.\n\nIf you're happy with the query plans you're looking to reduce overheads \nas much as possible during peak times.\n\n4. Offload more of the processing to clients with some fancy ajax-ed \ninterface.\n5. Throw in a spare machine as an app server for the first week of term. \n Presumably your load is 100 times average at this time.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 21 Jul 2008 13:11:02 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" }, { "msg_contents": "Guillaume Smet wrote:\n> On Mon, Jul 21, 2008 at 1:25 PM, Andreas Hartmann <[email protected]> wrote:\n>> SELECT pg_database.datname,\n>> pg_size_pretty(pg_database_size(pg_database.datname)) AS size\n>> FROM pg_database where pg_database.datname = 'vvz_live_1';\n>>\n>> datname | size\n>> ---------------+---------\n>> vvz_live_1 | 2565 MB\n>>\n>> I wonder why the actual size is so much bigger than the data-only dump - is\n>> this because of index data etc.?\n> \n> More probably because the database is totally bloated. Do you run\n> VACUUM regularly or did you set up autovacuum?\n\nYou might also want to REINDEX and see if that improves things. My \nunderstanding is that if vacuum isn't run regularly, the indexes may end \nup a bit of a mess as well as the tables.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 21 Jul 2008 21:53:56 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" }, { "msg_contents": "Hi,\n\nReducing the amount of data will only have effect on table scan or index\nscan. If your queries are selective and optimized, it will have no effect.\n\nBefore looking for solutions, the first thing to do is to understand what's\nhappen.\n\nIf you already know the queries then explain them. Otherwise, you must log\nduration with the log_statement and log_min_duration parameters in the\npostgresql.conf.\n\nBefore this, you must at least run VACUUM ANALYZE on the database to collect\nactual statistics and have current explain plans.\n\nBest regards.\n\nChristian\n\n2008/7/21 Richard Huxton <[email protected]>\n\n> Andreas Hartmann wrote:\n>\n>>\n>> Here's some info about the actual amount of data:\n>>\n>> SELECT pg_database.datname,\n>> pg_size_pretty(pg_database_size(pg_database.datname)) AS size\n>> FROM pg_database where pg_database.datname = 'vvz_live_1';\n>>\n>> datname | size\n>> ---------------+---------\n>> vvz_live_1 | 2565 MB\n>>\n>> I wonder why the actual size is so much bigger than the data-only dump -\n>> is this because of index data etc.?\n>>\n>\n> I suspect Guillame is right and you've not been vacuuming. That or you've\n> got a *LOT* of indexes. If the database is only 27MB dumped, I'd just\n> dump/restore it.\n>\n> Since the database is read-only it might be worth running CLUSTER on the\n> main tables if there's a sensible ordering for them.\n>\n> What in particular is slow?\n>>>\n>>\n>> There's no particular bottleneck (at least that we're aware of). During\n>> the first couple of days after the beginning of the semester the application\n>> request processing tends to slow down due to the high load (many students\n>> assemble their schedule). The customer upgraded the hardware (which already\n>> helped a lot), but they asked us to find further approaches to performance\n>> optimiziation.\n>>\n>\n> 1. 
Cache sensibly at the application (I should have thought there's plenty\n> of opportunity here).\n> 2. Make sure you're using a connection pool and have sized it reasonably\n> (try 4,8,16 see what loads you can support).\n> 3. Use prepared statements where it makes sense. Not sure how you'll manage\n> the interplay between this and connection pooling in JDBC. Not a Java man\n> I'm afraid.\n>\n> If you're happy with the query plans you're looking to reduce overheads as\n> much as possible during peak times.\n>\n> 4. Offload more of the processing to clients with some fancy ajax-ed\n> interface.\n> 5. Throw in a spare machine as an app server for the first week of term.\n> Presumably your load is 100 times average at this time.\n>\n> --\n> Richard Huxton\n> Archonet Ltd\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 21 Jul 2008 16:00:23 +0200", "msg_from": "\"Christian GRANDIN\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" }, {
"msg_contents": "Guillaume Smet schrieb:\n> On Mon, Jul 21, 2008 at 1:25 PM, Andreas Hartmann <[email protected]> wrote:\n>> SELECT pg_database.datname,\n>> pg_size_pretty(pg_database_size(pg_database.datname)) AS size\n>> FROM pg_database where pg_database.datname = 'vvz_live_1';\n>>\n>> datname | size\n>> ---------------+---------\n>> vvz_live_1 | 2565 MB\n>>\n>> I wonder why the actual size is so much bigger than the data-only dump - is\n>> this because of index data etc.?\n> \n> More probably because the database is totally bloated. Do you run\n> VACUUM regularly or did you set up autovacuum?\n\nThanks for the hint!\n\nI just verified that the autovacuum property is enabled. I did the \nfollowing to prepare the tests:\n\n- setup two test databases, let's call them db_all and db_current\n- import the dump from the live DB into both test DBs\n- delete the old semester data from db_current, leaving only the current \ndata\n\nBoth test DBs were 600 MB large after this. I did a VACUUM FULL ANALYZE \non both of them now. db_all didn't shrink significantly (only 1 MB), \ndb_current shrunk to 440 MB. We're using quite a lot of indexes, I guess \nthat's why that much data are allocated.\n\n-- Andreas\n\n-- \nAndreas Hartmann, CTO\nBeCompany GmbH\nhttp://www.becompany.ch\nTel.: +41 (0) 43 818 57 01\n", "msg_date": "Mon, 21 Jul 2008 16:45:10 +0200", "msg_from": "Andreas Hartmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Less rows -> better performance?" }, {
"msg_contents": "Andreas,\n\n> I just verified that the autovacuum property is enabled. I did the following\n> to prepare the tests:\n\n\"autovacuum property is enabled\" Did you also check the logs, if\nautovacuum is working?\n\n> - setup two test databases, let's call them db_all and db_current\n> - import the dump from the live DB into both test DBs\n> - delete the old semester data from db_current, leaving only the current\n> data\n>\n> Both test DBs were 600 MB large after this. I did a VACUUM FULL ANALYZE on\n> both of them now. db_all didn't shrink significantly (only 1 MB), db_current\n> shrunk to 440 MB.\n\nYour test is not testing if vacuum is done on your production\ndatabase! With pg_dump + pg_restore you removed next to all database\nbloat. (theoretically all)\n\nAfter loading a fresh dump, vacuuming ideally has to do nearly\nnothing; after deleting some data VACUUM reclaims the memory of the\ndeleted rows, thats the shrinking you see after delete + vacuum.\n\nThe bload in your production system may be the result of updates and\ndeletes in that system; dumping and restoring removes that bloat.\n\nIf your life DB is ~2,5Gig, and your dumped / restored DB is only\n600MB, that 2500MB minus 600MB is some bloat from not vacuuming or\nbloated indexes. So, before the start of the next semester, at least\ndo vacuum. 
(maybe also reindex)\n\nBest wishes,\n\nHarald\n\n\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nno fx, no carrier pidgeon\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n", "msg_date": "Mon, 21 Jul 2008 16:56:27 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" }, {
"msg_contents": "Mario Weilguni schrieb:\n> Andreas Hartmann schrieb:\n\n[...]\n\n>>> I just verified that the autovacuum property is enabled.\n\n[...]\n\n> Did you have:\n> stats_start_collector = on\n> stats_block_level = on\n> stats_row_level = on\n> \n> Otherwise autovacuum won't run IMO.\n\nThanks for the hint! The section looks like this:\n\nstats_start_collector = on\n#stats_command_string = off\n#stats_block_level = off\nstats_row_level = on\n#stats_reset_on_server_start = off\n\n\nI'll check the logs if the vacuum really runs - as soon as I find them :)\n\n-- Andreas\n-- \nAndreas Hartmann, CTO\nBeCompany GmbH\nhttp://www.becompany.ch\nTel.: +41 (0) 43 818 57 01\n", "msg_date": "Mon, 21 Jul 2008 17:14:27 +0200", "msg_from": "Andreas Hartmann <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Less rows -> better performance?" }, {
"msg_contents": "Andreas Hartmann schrieb:\n> Mario Weilguni schrieb:\n>> Andreas Hartmann schrieb:\n>\n> [...]\n>\n>>> I just verified that the autovacuum property is enabled.\n>\n> [...]\n>\n>> Did you have:\n>> stats_start_collector = on\n>> stats_block_level = on\n>> stats_row_level = on\n>>\n>> Otherwise autovacuum won't run IMO.\n>\n> Thanks for the hint! The section looks like this:\n>\n> stats_start_collector = on\n> #stats_command_string = off\n> #stats_block_level = off\n> stats_row_level = on\n> #stats_reset_on_server_start = off\n>\n>\n> I'll check the logs if the vacuum really runs - as soon as I find them :)\n>\n> -- Andreas\nYou might want to use these entries in your config:\nredirect_stderr = on\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\nlog_rotation_age = 1d\n\nFit those to your needs, then you will find log entries in $PGDATA/pg_log/\n\nAnd BTW, I was wrong, you just need to have stats_row_level=On, \nstats_block_level doesn't matter. But in fact it's simple, if you don't \nhave 24x7 requirements type VACUUM FULL ANALYZE; and check if your DB \nbecomes smaller, I really doubt you can have that much indizes that 27MB \ndumps might use 2.3 GB on-disk.\n\nYou can check this too:\nselect relname, relpages, reltuples, relkind\n  from pg_class\nwhere relkind in ('r', 'i')\norder by relpages desc limit 20;\n\nWill give you the top-20 tables and their sizes, 1 page is typically \n8KB, so you can cross-check if relpages/reltuples is completly off, this \nis a good indicator for table/index bloat.\n\nRegards,\nMario\n\n", "msg_date": "Mon, 21 Jul 2008 17:29:14 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?"
}, { "msg_contents": "[...]\n> You can check this too:\n> select relname, relpages, reltuples, relkind\n> from pg_class\n> where relkind in ('r', 'i')\n> order by relpages desc limit 20;\n>\n> Will give you the top-20 tables and their sizes, 1 page is typically \n> 8KB, so you can cross-check if relpages/reltuples is completly off, \n> this is a good indicator for table/index bloat.\nuse this query :\nselect pg_size_pretty(pg_relation_size(oid)) as relation_size,relname, \nrelpages, reltuples, relkind\n from pg_class\nwhere relkind in ('r', 'i')\norder by relpages desc limit 20;\n\noutput will be much more readeable\n>\n> Regards,\n> Mario\n Lukasz\n\n-- \nLukasz Filut\nDBA - IT Group, WSB-TEB Corporation\nPoznan - Poland\n\n", "msg_date": "Tue, 22 Jul 2008 09:48:34 +0200", "msg_from": "=?UTF-8?B?xYF1a2FzeiBGaWx1dA==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Less rows -> better performance?" } ]
[ { "msg_contents": "Hi,\n\nI have ran quite a few tests comparing how long a query takes to execute from Perl/DBI as compared to psql/pqlib. No matter how many times I run the test the results were always the same.\n\nI run a SELECT all on a fairly big table and enabled the log_min_duration_statement option. With psql postgres consistently logs half a second while the exact same query executed with Perl/DBI takes again consistently 2 seconds.\n\nIf I were timing the applications I would have been too much surprised by these results, obviously, processing with Perl would be slower than a native application. But it's the postmaster that gives these results. Could it be because the DBI module is slower at assimilating the data?\n\nAny light on the subject would be greatly appreciated.\n\n\nRegards,\n\nVal\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Mon, 21 Jul 2008 11:19:15 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": true, "msg_subject": "Perl/DBI vs Native" }, { "msg_contents": "Valentin Bogdanov wrote:\n> Hi,\n> \n> I have ran quite a few tests comparing how long a query takes to execute from Perl/DBI as compared to psql/pqlib. No matter how many times I run the test the results were always the same.\n> \n> I run a SELECT all on a fairly big table and enabled the log_min_duration_statement option. With psql postgres consistently logs half a second while the exact same query executed with Perl/DBI takes again consistently 2 seconds.\n> \n> If I were timing the applications I would have been too much surprised by these results, obviously, processing with Perl would be slower than a native application. But it's the postmaster that gives these results. Could it be because the DBI module is slower at assimilating the data?\n> \n> Any light on the subject would be greatly appreciated.\n\nRandom guess: Perl's DBI is using parameterized prepared statements, \npreventing the optimizer from using its knowledge about common values in \nthe table to decide whether or not index use is appropriate. When you're \nwriting the query in psql, you're not using prepared statements so the \noptimizer can be cleverer.\n\nTry comparing:\n\nSELECT statement\n\nto\n\nPREPARE test(params) AS statement;\nEXECUTE test(params);\n\neg:\n\nSELECT x + 44 FROM t;\n\nvs:\n\nPREPARE test(int) AS x + $1 FROM t;\nEXECUTE test(44);\n\nUse EXPLAIN ANALYZE to better understand the changes in the query plan.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 21 Jul 2008 21:49:18 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "\nOn Jul 21, 2008, at 5:19 AM, Valentin Bogdanov wrote:\n\n> Hi,\n>\n> I have ran quite a few tests comparing how long a query takes to \n> execute from Perl/DBI as compared to psql/pqlib. No matter how many \n> times I run the test the results were always the same.\n>\n> I run a SELECT all on a fairly big table and enabled the \n> log_min_duration_statement option. With psql postgres consistently \n> logs half a second while the exact same query executed with Perl/DBI \n> takes again consistently 2 seconds.\n>\n> If I were timing the applications I would have been too much \n> surprised by these results, obviously, processing with Perl would be \n> slower than a native application. 
But it's the postmaster that gives \n> these results. Could it be because the DBI module is slower at \n> assimilating the data?\n\nHi Val,\n\nYes, DBI can be slower then the native C interface. The speed depends \non how the data is being returned inside of Perl. Returning hashes is \na slower method then returning arrays from what I've found due to the \noverhead in the creation of the objects in Perl.\n\nSo:\n\nmy $r = $dbh->selectall_arrayref(\"select * from table\", { Columns => \n{}});\n\nIs slower then:\n\nmy $r = $dbh->selectall_arrayref(\"select * from table\", undef);\n\nSecondarily, if you're returning a lot of rows you may want to look \ninto using a cursor, so that you can fetch the rows a 1000 at a time \nin a tight loop then discard them once you are done with them. This \nwill hopefully prevent the system from having continually allocate \nmemory for all of your rows. For each field in each row Perl \nallocates memory to store the value from Postgres, so if you have many \nfields on your table this can be a large number of allocations \ndepending on the number of rows. Any userland profile tool should \nhelp you debug what's going on here.\n\nCheers,\n\nRusty\n--\nRusty Conover\nInfoGears Inc.\nhttp://www.infogears.com - http://www.gearbuyer.com - http://www.footwearbuyer.com\n", "msg_date": "Mon, 21 Jul 2008 07:57:17 -0600", "msg_from": "Rusty Conover <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "Valentin Bogdanov wrote: \n> I have ran quite a few tests comparing how long a query takes to\n> execute from Perl/DBI as compared to psql/pqlib. No matter how many\n> times I run the test the results were always the same.\n> \n> I run a SELECT all on a fairly big table and enabled the\n> log_min_duration_statement option. With psql postgres consistently\n> logs half a second while the exact same query executed with Perl/DBI\n> takes again consistently 2 seconds.\n\nThe problem may be that your two tests are not equivalent. When Perl executes a statement, it copies the *entire* result set back to the client before it returns the first row. The following program might appear to just be fetching the first row:\n\n $sth = $dbh->prepare(\"select item from mytable\");\n $sth->execute();\n $item = $sth->fetchrow_array();\n\nBut in fact, before Perl returns from the $sth->execute() statement, it has already run the query and copied all of the rows into a hidden, client-side cache. Each $sth->fetchrow_array() merely copies the data from the hidden cache into your local variable.\n\nBy contrast, psql executes the query, and starts returning the data a page at a time. So it may appear to be much faster.\n\nThis also means that Perl has trouble with very large tables. If the \"mytable\" in the above example is very large, say a hundred billion rows, you simply can't execute this statement in Perl. It will try to copy 100 billion rows into memory before returning the first answer.\n\nThe reason for Perl's behind-the-scenes caching is because it allows multiple connections to a single database, and multiple statements on each database handle. By executing each statement completely, it gives the appearance that multiple concurrent queries are supported. 
The downside is that it can be a huge memory hog.\n\nCraig\n", "msg_date": "Mon, 21 Jul 2008 10:24:03 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "Craig James <[email protected]> writes:\n> Valentin Bogdanov wrote: \n>> I have ran quite a few tests comparing how long a query takes to\n>> execute from Perl/DBI as compared to psql/pqlib. No matter how many\n>> times I run the test the results were always the same.\n>> \n>> I run a SELECT all on a fairly big table and enabled the\n>> log_min_duration_statement option. With psql postgres consistently\n>> logs half a second while the exact same query executed with Perl/DBI\n>> takes again consistently 2 seconds.\n\n> The problem may be that your two tests are not equivalent. When Perl\n> executes a statement, it copies the *entire* result set back to the\n> client before it returns the first row.\n\nSure, but so does psql (unless you've turned on the magic FETCH_COUNT\nsetting). I think the theories about prepared versus literal statements\nwere more promising; but I don't know DBI well enough to know exactly\nwhat it was sending to the server.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 21 Jul 2008 14:44:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native " }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\nTom Lane wrote:\n> Sure, but so does psql (unless you've turned on the magic FETCH_COUNT\n> setting). I think the theories about prepared versus literal statements\n> were more promising; but I don't know DBI well enough to know exactly\n> what it was sending to the server.\n\nAlmost certainly a prepared_statement unless no placeholders were being\nused at all. Another way to test (from the DBI side) is to set\n$sth->{pg_server_prepare} = 0, which will send the SQL directly to the\nbackend, just as if you've typed it in at a command prompt. You can also\nuse the tracing mechanism of DBI to see what's going on behind the scenes.\nFor example:\n\n$dbh->trace('SQL');\n\n$dbh->do(\"SELECT 1234 FROM pg_class WHERE relname = 'bob'\");\n$dbh->do(\"SELECT 1234 FROM pg_class WHERE relname = ?\", undef, 'mallory');\n\n$sth = $dbh->prepare(\"SELECT 4567 FROM pg_class WHERE relname = ?\");\n$sth->execute('alice');\n$sth->{pg_server_prepare} = 0;\n$sth->execute('eve1');\n$sth->{pg_server_prepare} = 1;\n$sth->execute('eve2');\n\n$dbh->commit;\n\nOutputs:\n\n===\n\nbegin;\n\nSELECT 1234 FROM pg_class WHERE relname = 'bob';\n\nEXECUTE SELECT 1234 FROM pg_class WHERE relname = $1 (\n$1: mallory\n);\n\nPREPARE dbdpg_p22988_1 AS SELECT 4567 FROM pg_class WHERE relname = $1;\n\nEXECUTE dbdpg_p22988_1 (\n$1: alice\n);\n\nSELECT 4567 FROM pg_class WHERE relname = 'eve1';\n\nEXECUTE dbdpg_p22988_1 (\n$1: eve2\n);\n\ncommit;\n\nDEALLOCATE dbdpg_p22988_1;\n\n===\n\nYou can even view exactly which libpq calls are being used at each point with:\n\n$dbh->trace('SQL,libpq');\n\nTo get back to the original poster's complaint, you may want to figure out why\nthe difference is so great for a prepared plan. 
It may be that you need to\ncast the placeholder(s) to a specific type, for example.\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200807211637\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAkiE83wACgkQvJuQZxSWSsiGrwCdGMLgauGwR2UzfoMPrTH/mrRg\nnxsAnjx14goMV23a9yRjtSw+ixJWQkuI\n=gjVE\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Mon, 21 Jul 2008 20:37:45 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "Thanks to everyone who replied. There were some really good points.\n\nHowever, I found what is causing the difference. The perl program was connecting to the database via a TCP socket while the C version was using Unix socket. I changed the connect in my perl script, so that it now uses Unix sockets as well. Run the tests again and got identical results for both programs.\n\nIn case someone is wondering, the way to force DBI to use unix sockets is by not specifying a host and port in the connect call.\n\n\nCheers,\n\nVal\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Tue, 22 Jul 2008 11:32:28 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: RIPEMD160\n\n\n> In case someone is wondering, the way to force DBI to use unix\n> sockets is by not specifying a host and port in the connect call.\n\nActually, the host defaults to the local socket. Using the port\nmay still be needed: if you leave it out, it simply uses the default\nvalue (5432) if left out. Thus, for most purposes, just leaving\nthe host out is enough to cause a socket connection on the default\nport.\n\nFor completeness in the archives, you can also specify the complete\npath to the unix socket directory in the host line, for those cases in\nwhich the socket is not where you expect it to be:\n\n$dbh = DBI->connect('dbi:Pg:dbname=test;host=/var/local/sockets',\n $user, $pass, {AutoCommit=>0, RaiseError=>1});\n\n- --\nGreg Sabino Mullane [email protected]\nEnd Point Corporation\nPGP Key: 0x14964AC8 200807221248\nhttp://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8\n-----BEGIN PGP SIGNATURE-----\n\niEYEAREDAAYFAkiGD1UACgkQvJuQZxSWSsjHTQCfYbGnh3dvs9ggZX0FCSwMro81\nsJsAoOUcDyu6vQM43EJOGAay/vXyKWES\n=hYLf\n-----END PGP SIGNATURE-----\n\n\n", "msg_date": "Tue, 22 Jul 2008 16:48:42 -0000", "msg_from": "\"Greg Sabino Mullane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "On Tue, Jul 22, 2008 at 9:48 AM, Greg Sabino Mullane <[email protected]> wrote:\n>> In case someone is wondering, the way to force DBI to use unix\n>> sockets is by not specifying a host and port in the connect call.\n>\n> Actually, the host defaults to the local socket. Using the port\n> may still be needed: if you leave it out, it simply uses the default\n> value (5432) if left out. 
Thus, for most purposes, just leaving\n> the host out is enough to cause a socket connection on the default\n> port.\n\nFor the further illumination of the historical record, the best\npractice here is probably to use the pg_service.conf file, which may\nor may not live in /etc depending on your operating system. Then you\ncan connect in DBI using dbi:Pg:service=whatever, and change the\ndefinition of \"whatever\" in pg_service.conf. This has the same\nsemantics as PGSERVICE=whatever when using psql. It's a good idea to\nkeep these connection details out of your program code.\n\n-jwb\n", "msg_date": "Tue, 22 Jul 2008 13:35:26 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl/DBI vs Native" }, { "msg_contents": "Thanks Guys, this is really useful, especially the pg_service.conf. I have got an app where the connection parameters have to be set in 3 different places I was thinking of writing something myself but now that I know of pg_service.conf, problem solved.\n\nRegards,\nVal\n\n\n--- On Tue, 22/7/08, Jeffrey Baker <[email protected]> wrote:\n\n> From: Jeffrey Baker <[email protected]>\n> Subject: Re: [PERFORM] Perl/DBI vs Native\n> To: \"Greg Sabino Mullane\" <[email protected]>\n> Cc: [email protected]\n> Date: Tuesday, 22 July, 2008, 9:35 PM\n> On Tue, Jul 22, 2008 at 9:48 AM, Greg Sabino Mullane\n> <[email protected]> wrote:\n> >> In case someone is wondering, the way to force DBI\n> to use unix\n> >> sockets is by not specifying a host and port in\n> the connect call.\n> >\n> > Actually, the host defaults to the local socket. Using\n> the port\n> > may still be needed: if you leave it out, it simply\n> uses the default\n> > value (5432) if left out. Thus, for most purposes,\n> just leaving\n> > the host out is enough to cause a socket connection on\n> the default\n> > port.\n> \n> For the further illumination of the historical record, the\n> best\n> practice here is probably to use the pg_service.conf file,\n> which may\n> or may not live in /etc depending on your operating system.\n> Then you\n> can connect in DBI using dbi:Pg:service=whatever, and\n> change the\n> definition of \"whatever\" in pg_service.conf. \n> This has the same\n> semantics as PGSERVICE=whatever when using psql. It's\n> a good idea to\n> keep these connection details out of your program code.\n> \n> -jwb\n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Wed, 23 Jul 2008 10:46:59 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perl/DBI vs Native" } ]
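As a footnote to the thread above: the slowdown turned out to be the connection method, not the driver. A rough Perl/DBI sketch of the three connection styles discussed -- TCP, Unix-domain socket, and a pg_service.conf lookup -- is shown below; the database name, user and password are placeholders, not values taken from the thread.

#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# TCP connection: even on localhost this goes through the network stack,
# which is what made the original Perl test slower than the C client.
my $dbh_tcp = DBI->connect('dbi:Pg:dbname=testdb;host=127.0.0.1;port=5432',
                           'testuser', 'secret', { RaiseError => 1 });

# Unix-domain socket: omit host and port (or point host at the socket
# directory) and DBD::Pg connects the same way the native client did.
my $dbh_sock = DBI->connect('dbi:Pg:dbname=testdb',
                            'testuser', 'secret', { RaiseError => 1 });

# Service lookup: connection details live in pg_service.conf, not in code.
my $dbh_svc = DBI->connect('dbi:Pg:service=testdb',
                           undef, undef, { RaiseError => 1 });

Only the connect string differs between the three handles; the rest of the DBI code is identical, which is why the timing difference was easy to misattribute to the driver.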
[ { "msg_contents": "Hi Guys,\n\nI am developing a project with PostgreSQL and one guy from project is\nfamiliar with Oracle and did a question for me, but i could not answer, if\nsomeone could help it will be good. =)\nThe question is :\n*\n- In oracle he makes a full backup two times in a day. In this range of\ntime, Oracle make a lot of mini-backups, but this backups is about just the\ndata whose have changed in this time. If the system fails, he could\nreconstruct the database adding the last \"big backup\" with \"mini-backups\".\nCan Postgres do this ? *\n\n\n\n\nRegards,\nLeví - Brazil\n", "msg_date": "Mon, 21 Jul 2008 15:20:27 -0300", "msg_from": "\"=?ISO-8859-1?Q?Lev=ED_Teodoro_da_Silva?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "[BACKUPS]Little backups" }, { "msg_contents": "On Mon, Jul 21, 2008 at 03:20:27PM -0300, Leví Teodoro da Silva wrote:\n> - In oracle he makes a full backup two times in a day. In this range of\n> time, Oracle make a lot of mini-backups, but this backups is about just the\n> data whose have changed in this time. If the system fails, he could\n> reconstruct the database adding the last \"big backup\" with \"mini-backups\".\n> Can Postgres do this ? *\n\nTake a look at Point-In-Time-Recovery, PITR:\nhttp://www.postgresql.org/docs/current/static/continuous-archiving.html\n\n-Berge\n\n-- \nBerge Schwebs Bjørlo\nAlegría!\n", "msg_date": "Mon, 21 Jul 2008 20:28:15 +0200", "msg_from": "Berge Schwebs =?utf-8?Q?Bj=C3=B8rlo?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BACKUPS]Little backups" }, { "msg_contents": ">>> \"Leví Teodoro da Silva\" <[email protected]> wrote: \n \n> - In oracle he makes a full backup two times in a day. In this range\nof\n> time, Oracle make a lot of mini-backups, but this backups is about\njust the\n> data whose have changed in this time. If the system fails, he could\n> reconstruct the database adding the last \"big backup\" with\n\"mini-backups\".\n> Can Postgres do this ? *\n \nThe equivalent capability in PostgreSQL is the Point-In-Time Recovery\nbackup strategy:\n \nhttp://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n \nTwice daily seems rather extreme -- we generally go with monthly.\n \n-Kevin\n", "msg_date": "Mon, 21 Jul 2008 13:30:44 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BACKUPS]Little backups" }, { "msg_contents": "On Monday 21 July 2008, Leví Teodoro da Silva wrote:\n> Hi Guys,\n>\n> I am developing a project with PostgreSQL and one guy from project is\n> familiar with Oracle and did a question for me, but i could not answer, if\n> someone could help it will be good. =)\n> The question is :\n> *\n> - In oracle he makes a full backup two times in a day. In this range of\n> time, Oracle make a lot of mini-backups, but this backups is about just the\n> data whose have changed in this time. 
If the system fails, he could\n> reconstruct the database adding the last \"big backup\" with \"mini-backups\".\n> Can Postgres do this ? *\n\nYes, it can. If you need detailed information, you can take a look at \nhttp://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n\n>\n>\n>\n>\n> Regards,\n> Leví - Brazil\n \n", "msg_date": "Mon, 21 Jul 2008 20:30:46 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BACKUPS]Little backups" }, { "msg_contents": "On Mon, 21.07.2008 at 15:20:27 -0300, Leví Teodoro da Silva wrote the following:\n> - In oracle he makes a full backup two times in a day. In this range of time,\n> Oracle make a lot of mini-backups, but this backups is about just the data\n> whose have changed in this time. If the system fails, he could reconstruct the\n> database adding the last \"big backup\" with \"mini-backups\". Can Postgres do this\n> ? \n\nSure, with the WAL-files.\n\nhttp://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n\n\nAndreas\n-- \nAndreas Kretschmer\nKontakt: Heynitz: 035242/47150, D1: 0160/7141639 (mehr: -> Header)\nGnuPG-ID: 0x3FFF606C, privat 0x7F4584DA http://wwwkeys.de.pgp.net\n", "msg_date": "Mon, 21 Jul 2008 20:38:43 +0200", "msg_from": "\"A. Kretschmer\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BACKUPS]Little backups" }, { "msg_contents": "Thank you guys for the fast answer.\n\nThis same guy asked me about the support on PostgreSQL. When he see the \ncommunity behind PostgreSQL , he never will be worried about support. =)\n\nThanks a lot,\nLeví\n\nA. Kretschmer wrote:\n> On Mon, 21.07.2008 at 15:20:27 -0300, Leví Teodoro da Silva wrote the following:\n> \n>> - In oracle he makes a full backup two times in a day. In this range of time,\n>> Oracle make a lot of mini-backups, but this backups is about just the data\n>> whose have changed in this time. If the system fails, he could reconstruct the\n>> database adding the last \"big backup\" with \"mini-backups\". Can Postgres do this\n>> ? \n>> \n>\n> Sure, with the WAL-files.\n>\n> http://www.postgresql.org/docs/8.3/interactive/continuous-archiving.html\n>\n>\n> Andreas\n> \n\n", "msg_date": "Mon, 21 Jul 2008 16:01:04 -0300", "msg_from": "Levi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [BACKUPS]Little backups" } ]
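The replies in this thread all point at the same mechanism: a periodic base backup (the "big backup") plus continuous WAL archiving (the "mini-backups"). A minimal sketch of that setup for 8.3 follows; the archive directory and the backup label are placeholders, and the file-level copy step is left to whatever tool the site already uses.

# postgresql.conf -- archive every completed WAL segment
archive_mode = on
archive_command = 'test ! -f /mnt/archive/%f && cp %p /mnt/archive/%f'

-- run from psql around the periodic base backup
SELECT pg_start_backup('daily_base');
-- ... file-level copy of the data directory to backup storage ...
SELECT pg_stop_backup();

# recovery.conf -- placed in the restored data directory to replay the WAL
restore_command = 'cp /mnt/archive/%f %p'

Restoring the base backup and letting the server replay the archived WAL reconstructs the database up to the last archived segment, which is the behaviour the original poster described for Oracle.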
[ { "msg_contents": "I am facing performance issues with running scheduled jobs. There are\nmultiple jobs running on the database ...not necessarily acting on the same\ntable. Either ways i checked if there was a locking issue, I did not find\nany conflicting issues. The program is run over night ......when i check the\nlog files i saw that it runs slow when other loaders are running.Is there\nany way i can improve the performance?Can someone help me understand how to\ndebug this bottle neck?\n\nThanks\nSamantha\n\nI am facing performance issues with running scheduled jobs. There are multiple jobs running on the database ...not necessarily acting on the same table. Either ways i checked if there was a locking issue, I did not find any conflicting issues. The program is run over night ......when i check the log files i saw that it runs slow when other loaders are running.Is there any way i can improve the performance?Can someone help me understand how to debug this bottle neck?\n \nThanks\nSamantha", "msg_date": "Tue, 22 Jul 2008 11:44:29 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of jobs" }, { "msg_contents": "samantha mahindrakar wrote:\n> I am facing performance issues with running scheduled jobs. There are \n> multiple jobs running on the database ...not necessarily acting on the \n> same table. Either ways i checked if there was a locking issue, I did \n> not find any conflicting issues. The program is run over night \n> ......when i check the log files i saw that it runs slow when other \n> loaders are running.Is there any way i can improve the performance?Can \n> someone help me understand how to debug this bottle neck?\n\nWhat is your hardware? OS? OS version? PostgreSQL version? Number of \ndisks, disk type, disk/RAID controller, RAID/LVM configuration, ...\n\n--\nCraig Ringer\n", "msg_date": "Wed, 23 Jul 2008 00:13:46 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of jobs" }, { "msg_contents": "Windows 2003 Server SP2\nIntel Xeon 3.2 GHz\n3.75 Gb RAM\nSystem Drive 33.8 Gb\nData drive 956 Gb\nPostgreSQL 8.2.6\nPERC RAID 10 I believe SCSI disks\n\n\n\n\nOn 7/22/08, Craig Ringer <[email protected]> wrote:\n>\n> samantha mahindrakar wrote:\n>\n>> I am facing performance issues with running scheduled jobs. There are\n>> multiple jobs running on the database ...not necessarily acting on the same\n>> table. Either ways i checked if there was a locking issue, I did not find\n>> any conflicting issues. The program is run over night ......when i check the\n>> log files i saw that it runs slow when other loaders are running.Is there\n>> any way i can improve the performance?Can someone help me understand how to\n>> debug this bottle neck?\n>>\n>\n> What is your hardware? OS? OS version? PostgreSQL version? Number of disks,\n> disk type, disk/RAID controller, RAID/LVM configuration, ...\n>\n> --\n> Craig Ringer\n>\n\nWindows 2003 Server SP2Intel Xeon 3.2 GHz3.75 Gb RAMSystem Drive 33.8 GbData drive 956 GbPostgreSQL 8.2.6PERC RAID 10 I believe SCSI disks \n \n \nOn 7/22/08, Craig Ringer <[email protected]> wrote:\nsamantha mahindrakar wrote:\nI am facing performance issues with running scheduled jobs. There are multiple jobs running on the database ...not necessarily acting on the same table. Either ways i checked if there was a locking issue, I did not find any conflicting issues. 
The program is run over night ......when i check the log files i saw that it runs slow when other loaders are running.Is there any way i can improve the performance?Can someone help me understand how to debug this bottle neck?\nWhat is your hardware? OS? OS version? PostgreSQL version? Number of disks, disk type, disk/RAID controller, RAID/LVM configuration, ...--Craig Ringer", "msg_date": "Tue, 22 Jul 2008 14:59:55 -0400", "msg_from": "\"samantha mahindrakar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of jobs" }, { "msg_contents": "samantha mahindrakar wrote:\n> Windows 2003 Server SP2\n> Intel Xeon 3.2 GHz\n> 3.75 Gb RAM\n> System Drive 33.8 Gb\n> Data drive 956 Gb\n> PostgreSQL 8.2.6\n> PERC RAID 10 I believe SCSI disks\n\n... all of which look fairly reasonable, though you didn't say how many \ndisks (which is *very* important for performance) or how fast they are.\n\nThe concurrent jobs will compete for disk bandwidth, CPU time, and \nmemory for cache. If a single job already loads your server quite \nheavily then a second concurrent job will slow the original job down and \nrun slower its self. There's not much you can do about that except \nschedule the jobs not to run concurrently.\n\nYou should probably start by using the system's performance monitoring \ntools (whatever they are - I don't know much about that for Windows) to \ndetermine how much RAM, CPU time, and disk I/O each job uses when run \nalone. Time how long each job takes when run alone. Then run the jobs \nconcurrently, time them, and measure how much CPU time and (if possible) \ndisk I/O each job gets that way. See how much greater the total time \ntaken is when they're run concurrently and try to see if the jobs appear \nto be CPU-limited, disk limited, etc.\n\nIf your controller has a battery backed cache, consider turning on write \ncaching and see if that helps. DO NOT turn write caching on without a \nbattery backed cache. If performance is important to you, you should \nprobably get a battery backed cache module anyway.\n\nAlso make sure that PostgreSQL has enough shared memory configured. It \nmight be struggling to fit the data it needs for the jobs in its shared \nmemory cache when you run them concurrently. See the manual for a more \ndetailed explanation.\n\nYou have not provided any useful information about exactly what you \nthink is wrong, what the problem jobs are, etc so it is not possible to \ngive more than very generic explanations. To be specific it'd probably \nbe necessary to know your data and queries. At minimum some measurements \nof query times when run concurrently vs alone would be necessary.\n\n--\nCraig Ringer\n", "msg_date": "Wed, 23 Jul 2008 09:15:22 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of jobs" } ]
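Since the "no locking issue" conclusion in this thread was reached by inspection, one quick way to re-check it while the overnight loaders overlap is to look for ungranted locks directly. The query below is only a sketch and should work on the 8.2 server described above:

-- sessions currently waiting on a lock, and what they are waiting for
SELECT pid, locktype, relation::regclass AS relation, mode, granted
FROM pg_locks
WHERE NOT granted;

If this returns nothing while both jobs are running slowly, the contention is more likely disk bandwidth, CPU or cache memory, as suggested in the reply above.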
[ { "msg_contents": "For background, please read the thread \"Fusion-io ioDrive\", archived at\n\nhttp://archives.postgresql.org/pgsql-performance/2008-07/msg00010.php\n\nTo recap, I tested an ioDrive versus a 6-disk RAID with pgbench on an\nordinary PC. I now also have a 32GB Samsung SATA SSD, and I have tested\nit in the same machine with the same software and configuration. I\ntested it connected to the NVIDIA CK804 SATA controller on the\nmotherboard, and as a pass-through disk on the Areca RAID controller,\nwith write-back caching enabled.\n\n Service Time Percentile, millis\n R/W TPS R-O TPS 50th 80th 90th 95th\nRAID 182 673 18 32 42 64\nFusion 971 4792 8 9 10 11\nSSD+NV 442 4399 12 18 36 43\nSSD+Areca 252 5937 12 15 17 21\n\nAs you can see, there are tradeoffs. The motherboard's ports are\nsubstantially faster on the TPC-B type of workload. This little, cheap\nSSD achieves almost half the performance of the ioDrive (i.e. similar\nperformance to a 50-disk SAS array.) The RAID controller does a better\njob on the read-only workload, surpassing the ioDrive by 20%.\n\nStrangely the RAID controller behaves badly on the TPC-B workload. It\nis faster than disk, but not by a lot, and it's much slower than the\nother flash configurations. The read/write benchmark did not vary when\nchanging the number of clients between 1 and 8. I suspect this is some\nkind of problem with Areca's kernel driver or firmware.\n\nOn the bright side, the Samsung+Areca configuration offers excellent\nservice time distribution, comparable to that achieved by the ioDrive.\nUsing the motherboard's SATA ports gave service times comparable to the\ndisk RAID.\n\nThe performance is respectable for a $400 device. You get about half\nthe tps and half the capacity of the ioDrive, but for one fifth the\nprice and in the much more convenient SATA form factor.\n\nYour faithful investigator,\njwb\n\n", "msg_date": "Tue, 22 Jul 2008 17:04:35 -0700", "msg_from": "\"Jeffrey W. Baker\" <[email protected]>", "msg_from_op": true, "msg_subject": "Samsung 32GB SATA SSD tested" }, { "msg_contents": "On Tue, Jul 22, 2008 at 6:04 PM, Jeffrey W. Baker <[email protected]> wrote:\n\n> Strangely the RAID controller behaves badly on the TPC-B workload. It\n> is faster than disk, but not by a lot, and it's much slower than the\n> other flash configurations. The read/write benchmark did not vary when\n> changing the number of clients between 1 and 8. I suspect this is some\n> kind of problem with Areca's kernel driver or firmware.\n\nAre you still using the 2.6.18 kernel for testing, or have you\nupgraded to something like 2.6.22. I've heard many good things about\nthe areca driver in that kernel version.\n\nThis sounds like an interesting development I'll have to keep track\nof. In a year or two I might be replacing 16 disk arrays with SSD\ndrives...\n", "msg_date": "Tue, 22 Jul 2008 18:32:25 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Samsung 32GB SATA SSD tested" }, { "msg_contents": "On Tue, Jul 22, 2008 at 5:32 PM, Scott Marlowe <[email protected]> wrote:\n> On Tue, Jul 22, 2008 at 6:04 PM, Jeffrey W. Baker <[email protected]> wrote:\n>\n>> Strangely the RAID controller behaves badly on the TPC-B workload. It\n>> is faster than disk, but not by a lot, and it's much slower than the\n>> other flash configurations. The read/write benchmark did not vary when\n>> changing the number of clients between 1 and 8. 
I suspect this is some\n>> kind of problem with Areca's kernel driver or firmware.\n>\n> Are you still using the 2.6.18 kernel for testing, or have you\n> upgraded to something like 2.6.22. I've heard many good things about\n> the areca driver in that kernel version.\n\nThese tests are being run with the CentOS 5 kernel, which is 2.6.18.\nThe ioDrive driver is available for that kernel, and I want to keep\nthe software constant to get comparable results.\n\nI put the Samsung SSD in my laptop, which is a Core 2 Duo @ 2.2GHz\nwith ICH9 SATA port and kernel 2.6.24, and it scored about 525 on R/W\npgbench.\n\n> This sounds like an interesting development I'll have to keep track\n> of. In a year or two I might be replacing 16 disk arrays with SSD\n> drives...\n\nI agree, it's definitely an exciting development. I have yet to\ndetermine whether the SSDs have good properties for production\noperations, but I'm learning.\n\n-jwb\n", "msg_date": "Wed, 23 Jul 2008 12:57:34 -0700", "msg_from": "\"Jeffrey Baker\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Samsung 32GB SATA SSD tested" }, { "msg_contents": "On Wed, Jul 23, 2008 at 1:57 PM, Jeffrey Baker <[email protected]> wrote:\n> On Tue, Jul 22, 2008 at 5:32 PM, Scott Marlowe <[email protected]> wrote:\n>> On Tue, Jul 22, 2008 at 6:04 PM, Jeffrey W. Baker <[email protected]> wrote:\n>>\n>>> Strangely the RAID controller behaves badly on the TPC-B workload. It\n>>> is faster than disk, but not by a lot, and it's much slower than the\n>>> other flash configurations. The read/write benchmark did not vary when\n>>> changing the number of clients between 1 and 8. I suspect this is some\n>>> kind of problem with Areca's kernel driver or firmware.\n>>\n>> Are you still using the 2.6.18 kernel for testing, or have you\n>> upgraded to something like 2.6.22. I've heard many good things about\n>> the areca driver in that kernel version.\n>\n> These tests are being run with the CentOS 5 kernel, which is 2.6.18.\n> The ioDrive driver is available for that kernel, and I want to keep\n> the software constant to get comparable results.\n>\n> I put the Samsung SSD in my laptop, which is a Core 2 Duo @ 2.2GHz\n> with ICH9 SATA port and kernel 2.6.24, and it scored about 525 on R/W\n> pgbench.\n\n From what I've read the scheduler in 2.6.24 has some performance\nissues under pgsql. Given that the 2.6.18 kernel driver for the areca\ncard was also mentioned as being questionable, that's the reason I'd\nasked about the 2.6.22 kernel, which is the one I'll be running in\nabout a month on our big db servers. Ahh, but I won't be running on\n32 Gig SATA / Flash drives. :) Wouldn't mind testing an array of 16\nor so of them at once though.\n", "msg_date": "Wed, 23 Jul 2008 14:10:53 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Samsung 32GB SATA SSD tested" }, { "msg_contents": "\nJeff,\n\nSome off topic questions:\n\nIs it possible to boot the OS from the ioDrive? If so, is the difference in\nboot up time noticeable?\n\nAlso, how does ioDrive impact compilation time for a moderately large code\nbase? What about application startup times?\n\nCheers,\nBehrang\n\n\nJeffrey Baker wrote:\n> \n> For background, please read the thread \"Fusion-io ioDrive\", archived at\n> \n> http://archives.postgresql.org/pgsql-performance/2008-07/msg00010.php\n> \n> To recap, I tested an ioDrive versus a 6-disk RAID with pgbench on an\n> ordinary PC. 
I now also have a 32GB Samsung SATA SSD, and I have tested\n> it in the same machine with the same software and configuration. I\n> tested it connected to the NVIDIA CK804 SATA controller on the\n> motherboard, and as a pass-through disk on the Areca RAID controller,\n> with write-back caching enabled.\n> \n> Service Time Percentile, millis\n> R/W TPS R-O TPS 50th 80th 90th 95th\n> RAID 182 673 18 32 42 64\n> Fusion 971 4792 8 9 10 11\n> SSD+NV 442 4399 12 18 36 43\n> SSD+Areca 252 5937 12 15 17 21\n> \n> As you can see, there are tradeoffs. The motherboard's ports are\n> substantially faster on the TPC-B type of workload. This little, cheap\n> SSD achieves almost half the performance of the ioDrive (i.e. similar\n> performance to a 50-disk SAS array.) The RAID controller does a better\n> job on the read-only workload, surpassing the ioDrive by 20%.\n> \n> Strangely the RAID controller behaves badly on the TPC-B workload. It\n> is faster than disk, but not by a lot, and it's much slower than the\n> other flash configurations. The read/write benchmark did not vary when\n> changing the number of clients between 1 and 8. I suspect this is some\n> kind of problem with Areca's kernel driver or firmware.\n> \n> On the bright side, the Samsung+Areca configuration offers excellent\n> service time distribution, comparable to that achieved by the ioDrive.\n> Using the motherboard's SATA ports gave service times comparable to the\n> disk RAID.\n> \n> The performance is respectable for a $400 device. You get about half\n> the tps and half the capacity of the ioDrive, but for one fifth the\n> price and in the much more convenient SATA form factor.\n> \n> Your faithful investigator,\n> jwb\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n-- \nView this message in context: http://www.nabble.com/Samsung-32GB-SATA-SSD-tested-tp18601508p19282698.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Tue, 2 Sep 2008 21:20:04 -0700 (PDT)", "msg_from": "behrangs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Samsung 32GB SATA SSD tested" } ]
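For anyone wanting to reproduce the comparison in this thread, the figures come from pgbench's TPC-B-like and SELECT-only modes. The invocations below are only a sketch -- the posts do not state the exact scale factor or transaction counts used:

# one-off initialisation of the pgbench tables (scale 100 is roughly 1.5 GB)
pgbench -i -s 100 bench

# TPC-B-like read/write run with 8 clients (the "R/W TPS" column)
pgbench -c 8 -t 10000 bench

# SELECT-only run (the "R-O TPS" column)
pgbench -S -c 8 -t 10000 bench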
[ { "msg_contents": "I have a PostgreSQL database on a very low-resource Xen virtual machine,\n48 MB RAM. When two queries run at the same time, it takes longer to\ncomplete then if run in sequence. Is there perhaps a way to install\nsomething like a query sequencer, which would process queries in a FIFO\nmanner, one at a time, even if a new query comes before the last one\nrunning is finished, it would not give the new query to the server\nbefore the one running now finishes? That would greatly improve\nperformance.\n\nAny tips in general for running PostgreSQL on such low-resource machine?\n\nI have:\n\nshared_buffers = 5MB\nwork_mem = 1024kB\n\nare these good values, or could perhaps changing something improve it a\nbit? Any other parameters to look at?\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Wed, 23 Jul 2008 17:21:12 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "On Wed, Jul 23, 2008 at 9:21 AM, Miernik <[email protected]> wrote:\n> I have a PostgreSQL database on a very low-resource Xen virtual machine,\n> 48 MB RAM. When two queries run at the same time, it takes longer to\n> complete then if run in sequence. Is there perhaps a way to install\n> something like a query sequencer, which would process queries in a FIFO\n> manner, one at a time, even if a new query comes before the last one\n> running is finished, it would not give the new query to the server\n> before the one running now finishes? That would greatly improve\n> performance.\n>\n> Any tips in general for running PostgreSQL on such low-resource machine?\n>\n> I have:\n>\n> shared_buffers = 5MB\n> work_mem = 1024kB\n>\n> are these good values, or could perhaps changing something improve it a\n> bit? Any other parameters to look at?\n\nWell, you're basically working on a really limited machine there. I'd\nset shared buffers up by 1 meg at a time and see if that helps. But\nyou're basically looking at a very narrow problem that most people\nwon't ever run into. Why such an incredibly limited virtual machine?\nEven my cell phone came with 256 meg built in two years ago.\n", "msg_date": "Wed, 23 Jul 2008 11:25:35 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Miernik wrote:\n> I have a PostgreSQL database on a very low-resource Xen virtual machine,\n> 48 MB RAM. When two queries run at the same time, it takes longer to\n> complete then if run in sequence. Is there perhaps a way to install\n> something like a query sequencer, which would process queries in a FIFO\n> manner, one at a time, even if a new query comes before the last one\n> running is finished, it would not give the new query to the server\n> before the one running now finishes? That would greatly improve\n> performance.\n\nOne idea I just had was to have a connection pooler (say pgpool) and\nallow only one connection slot. If the pooler is capable to be\nconfigured to block new connections until the slot is unused, this would\ndo what you want. 
(I don't know whether poolers allow you to do this).\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 23 Jul 2008 15:30:48 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" }, { "msg_contents": "Scott Marlowe <[email protected]> wrote:\n> won't ever run into. Why such an incredibly limited virtual machine?\n> Even my cell phone came with 256 meg built in two years ago.\n\nBecause I don't want to spend too much money on the machine rent, and a\n48 MB RAM Xen is about all I can get with a budget of 100$ per year.\nWell, there are a few providers which will give me a 128 MB Xen for that\nmoney, but will the difference in performance be worth the hassle to\nswitch providers? My current provider gives me almost perfect\nrelaliability for that 100$ and I don't know how the providers which\ngive more RAM for the same money perform, maybe they are often down or\nsomething. And spending more then 100$ yearly on this would be really\noverkill. My thing runs fine, only a bit slow, but reasonable. I just\nwant to find out if I could maybe make it better with a little tweaking.\nCan I expect it to work at least three times faster on 128 MB RAM?\nGetting 256 MB would certainly cost too much. Or maybe there are some\nproviders which can give me much more performance PostgreSQL server with\nat least several GB of storage for well... not more then 50$ per year.\n(because I must still rent another server to run and SMTP server and few\nother small stuff).\n\nMy DB has several tables with like 100000 to 1 million rows each,\nrunning sorts, joins, updates etc on them several times per hour.\nAbout 10000 inserts and selects each hour, the whole DB takes 1.5 GB on\ndisk now, 500 MB dumped.\n\nIf I could shorten the time it takes to run each query by a factor of 3\nthat's something worth going for.\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Wed, 23 Jul 2008 22:32:03 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "On Wed, Jul 23, 2008 at 2:32 PM, Miernik <[email protected]> wrote:\n> Scott Marlowe <[email protected]> wrote:\n>> won't ever run into. Why such an incredibly limited virtual machine?\n>> Even my cell phone came with 256 meg built in two years ago.\n>\n> Because I don't want to spend too much money on the machine rent, and a\n> 48 MB RAM Xen is about all I can get with a budget of 100$ per year.\n> Well, there are a few providers which will give me a 128 MB Xen for that\n> money, but will the difference in performance be worth the hassle to\n> switch providers? My current provider gives me almost perfect\n> relaliability for that 100$ and I don't know how the providers which\n> give more RAM for the same money perform, maybe they are often down or\n> something. And spending more then 100$ yearly on this would be really\n> overkill. My thing runs fine, only a bit slow, but reasonable. I just\n> want to find out if I could maybe make it better with a little tweaking.\n> Can I expect it to work at least three times faster on 128 MB RAM?\n> Getting 256 MB would certainly cost too much. 
Or maybe there are some\n> providers which can give me much more performance PostgreSQL server with\n> at least several GB of storage for well... not more then 50$ per year.\n> (because I must still rent another server to run and SMTP server and few\n> other small stuff).\n>\n> My DB has several tables with like 100000 to 1 million rows each,\n> running sorts, joins, updates etc on them several times per hour.\n> About 10000 inserts and selects each hour, the whole DB takes 1.5 GB on\n> disk now, 500 MB dumped.\n>\n> If I could shorten the time it takes to run each query by a factor of 3\n> that's something worth going for.\n\nWell, my guess is that by running under Xen you're already sacrificing\nquite a bit of performance, and running it with only 48 Megs of ram is\nmaking it even worse. But if your budget is $100 a year, I guess\nyou're probably stuck with such a setup. I would see if you can get a\ntrial Xen at the other hosts with 128M or more of memory and compare.\n", "msg_date": "Wed, 23 Jul 2008 14:43:25 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Scott Marlowe <[email protected]> wrote:\n> Well, my guess is that by running under Xen you're already sacrificing\n> quite a bit of performance, and running it with only 48 Megs of ram is\n> making it even worse. But if your budget is $100 a year, I guess\n> you're probably stuck with such a setup. I would see if you can get a\n> trial Xen at the other hosts with 128M or more of memory and compare.\n\nIs running in a 48 MB Xen any different in terms of performance to\nrunning on a hardware 48 MB RAM machine?\n\nI see that I am not the only one with such requirements, one guy even\nset up a site listing all VPS providers which host for under 7$ per\nmonth: http://www.lowendbox.com/virtual-server-comparison/\nYou can see that the most you can get for that money is a 128 MB OpenVZ\nI wonder if running PostgreSQL on OpenVZ is any different to running it\non Xen in terms of performance.\n\nOh here is something more:\nhttp://vpsempire.com/action.php?do=vpslite\n256 MB for 7.45$ per month\n512 MB for 11.95$ per month\nhowever it doesn't say what is the virtualization software, so don't\nreally know what it is.\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Thu, 24 Jul 2008 01:04:04 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "A Dimecres 23 Juliol 2008, Miernik va escriure:\n> I have a PostgreSQL database on a very low-resource Xen virtual machine,\n> 48 MB RAM. When two queries run at the same time, it takes longer to\n> complete then if run in sequence. Is there perhaps a way to install\n> something like a query sequencer, which would process queries in a FIFO\n> manner, one at a time, even if a new query comes before the last one\n> running is finished, it would not give the new query to the server\n> before the one running now finishes? That would greatly improve\n> performance.\n\nYou didn't mention your PostgreSQL version. Since 8.3 there's \"synchronized \nscans\" which greatly improves performance if concurrent queries have to do a \nsequential scan on the same table. 
Of course, if queries don't hit the same \ntable there'll be no improvements in performance...\n\n>\n> Any tips in general for running PostgreSQL on such low-resource machine?\n>\n> I have:\n>\n> shared_buffers = 5MB\n> work_mem = 1024kB\n>\n> are these good values, or could perhaps changing something improve it a\n> bit? Any other parameters to look at?\n>\n> --\n> Miernik\n> http://miernik.name/\n\n\n\n", "msg_date": "Thu, 24 Jul 2008 09:38:28 +0200", "msg_from": "Albert Cervera Areny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Miernik wrote:\n> Scott Marlowe <[email protected]> wrote:\n>> won't ever run into. Why such an incredibly limited virtual machine?\n>> Even my cell phone came with 256 meg built in two years ago.\n> \n> Because I don't want to spend too much money on the machine rent, and a\n> 48 MB RAM Xen is about all I can get with a budget of 100$ per year.\n[snip]\n> My DB has several tables with like 100000 to 1 million rows each,\n> running sorts, joins, updates etc on them several times per hour.\n> About 10000 inserts and selects each hour, the whole DB takes 1.5 GB on\n> disk now, 500 MB dumped.\n> \n> If I could shorten the time it takes to run each query by a factor of 3\n> that's something worth going for.\n\nFirstly, congratulations on providing quite a large database on such a \nlimited system. I think most people on such plans have tables with a few \nhundred to a thousand rows in them, not a million. Many of the people \nhere are used to budgets a hundred or a thousand times of yours, so bear \nin mind you're as much an expert as them :-)\n\nIf you're going to get the most out of this, you'll want to set up your \nown Xen virtual machine on a local system so you can test changes. \nYou'll be trading your time against the budget, so bear that in mind.\n\nIf you know other small organisations locally in a similar position \nperhaps consider sharing a physical machine and managing Xen yourselves \n- that can be cheaper.\n\nChanges\n\nFirst step is to make sure you're running version 8.3 - there are some \nuseful improvements there that reduce the size of shorter text fields, \nas well as the synchronised scans Albert mentions below.\n\nSecond step is to make turn off any other processes you don't need. Tune \ndown the number of consoles, apache processes, mail processes etc. \nNormally not worth the trouble, but getting another couple of MB is \nworthwhile in your case. Might be worth turning off autovacuum and \nrunning a manual vacuum full overnight if your database is mostly reads.\n\nFinally, I think it's worth looking at pgpool or pgbouncer (as Alvaro \nsaid) and set them to allow only one connection in the pool. I know that \npgbouncer offers per-transaction connection sharing which will make this \nmore practical. Even so, it will help if your application can co-operate \nby closing the connection as soon as possible.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 24 Jul 2008 09:44:03 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" }, { "msg_contents": "Richard Huxton <[email protected]> wrote:\n> Firstly, congratulations on providing quite a large database on such a\n> limited system. 
I think most people on such plans have tables with a\n> few hundred to a thousand rows in them, not a million. Many of the\n> people here are used to budgets a hundred or a thousand times of\n> yours, so bear in mind you're as much an expert as them :-)\n\nWell, I proved that it can reasonably well work, and I am finetuning the\nsystem step by step, so it can work better.\n\n> If you're going to get the most out of this, you'll want to set up\n> your own Xen virtual machine on a local system so you can test\n> changes.\n\nGood idea.\n\n> If you know other small organisations locally in a similar position\n> perhaps consider sharing a physical machine and managing Xen\n> yourselves - that can be cheaper.\n\nWell, maybe, but its also a lot of hassle, not sure it's worth it, just\nlooking to get the most out of thje existing system.\n\n> First step is to make sure you're running version 8.3 - there are some\n> useful improvements there that reduce the size of shorter text fields,\n> as well as the synchronised scans Albert mentions below.\n\nI am running 8.3.3\n\n> Second step is to make turn off any other processes you don't need.\n> Tune down the number of consoles, apache processes, mail processes\n> etc. Normally not worth the trouble, but getting another couple of MB\n> is worthwhile in your case.\n\nThere is no apache, but lighttpd, right now:\n\nroot@polica:~# free\n total used free shared buffers cached\nMem: 49344 47840 1504 0 4 23924\n-/+ buffers/cache: 23912 25432\nSwap: 257000 9028 247972\nroot@polica:~#\n\n> Might be worth turning off autovacuum and running a manual vacuum full\n> overnight if your database is mostly reads.\n\nI run autovacum, and the database has a lot of updates all the time,\nalso TRUNCATING tables and refilling them, usually one or two\nINSERTS/UPDATES per second.\n\n> Finally, I think it's worth looking at pgpool or pgbouncer (as Alvaro\n> said) and set them to allow only one connection in the pool. I know\n> that pgbouncer offers per-transaction connection sharing which will\n> make this more practical. Even so, it will help if your application\n> can co-operate by closing the connection as soon as possible.\n\nI just installed pgpool2 and whoaaa! Everything its like about 3 times\nfaster! My application are bash scripts using psql -c \"UPDATE ...\".\nI plan to rewrite it in Python, not sure if it would improve\nperformance, but will at least be a \"cleaner\" implementation.\n\nIn /etc/pgpool.conf I used:\n\n# number of pre-forked child process\nnum_init_children = 1\n\n# Number of connection pools allowed for a child process\nmax_pool = 1\n\nWanted to install pgbouncer, but it is broken currently in Debian. And\nwhy is it in contrib and not in main (speaking of Debian location)?\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Thu, 31 Jul 2008 08:15:22 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Miernik wrote:\n>> Might be worth turning off autovacuum and running a manual vacuum full\n>> overnight if your database is mostly reads.\n> \n> I run autovacum, and the database has a lot of updates all the time,\n> also TRUNCATING tables and refilling them, usually one or two\n> INSERTS/UPDATES per second.\n\nOK\n\n>> Finally, I think it's worth looking at pgpool or pgbouncer (as Alvaro\n>> said) and set them to allow only one connection in the pool. 
I know\n>> that pgbouncer offers per-transaction connection sharing which will\n>> make this more practical. Even so, it will help if your application\n>> can co-operate by closing the connection as soon as possible.\n> \n> I just installed pgpool2 and whoaaa! Everything its like about 3 times\n> faster! My application are bash scripts using psql -c \"UPDATE ...\".\n\nProbably spending most of their time setting up a new connection, then \nclearing it down again.\n\n> I plan to rewrite it in Python, not sure if it would improve\n> performance, but will at least be a \"cleaner\" implementation.\n\nCareful of introducing any more overheads though. If libraries end up \nusing another 2.5MB of RAM then that's 10% of your disk-cache gone.\n\n> In /etc/pgpool.conf I used:\n> \n> # number of pre-forked child process\n> num_init_children = 1\n> \n> # Number of connection pools allowed for a child process\n> max_pool = 1\n\nMight need to increase that to 2 or 3.\n\n> Wanted to install pgbouncer, but it is broken currently in Debian. And\n> why is it in contrib and not in main (speaking of Debian location)?\n\nNot well known enough on the Debian side of the fence? It's simple \nenough to install from source though. Takes about one minute.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 31 Jul 2008 08:36:10 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" }, { "msg_contents": "Richard Huxton <[email protected]> wrote:\n>> I just installed pgpool2 and whoaaa! Everything its like about 3 times\n>> faster! My application are bash scripts using psql -c \"UPDATE ...\".\n> \n> Probably spending most of their time setting up a new connection, then\n> clearing it down again.\n\nIf I do it in Python it could do all queries in the same connection, so\nshould be faster? Besides that 'psql' is written in perl, so its also\nheavy, by not using psql I get rid of perl library in RAM. Also the\nscript uses wget to poll some external data sources a lot, also\nneedlessly opening new connection to the webserver, so I want to make\nthe script save the http connection, which means I must get rid of wget.\nMaybe I should write some parts in C?\n\nBTW, doesn't there exist any tool does what \"psql -c\" does, but is\nwritten in plain C, not perl? I was looking for such psql replacement,\nbut couldn't find any.\n\n>> # Number of connection pools allowed for a child process\n>> max_pool = 1\n> \n> Might need to increase that to 2 or 3.\n\nWhy? The website says:\n\nmax_pool\n\n The maximum number of cached connections in pgpool-II children\nprocesses. pgpool-II reuses the cached connection if an incoming\nconnection is connecting to the same database by the same username.\n\nBut all my connections are to the same database and the same username,\nand I only ever want my application to do 1 connection to the database\nat a time, so why would I want/need 2 or 3 in max_pool?\n\n> Not well known enough on the Debian side of the fence? It's simple\n> enough to install from source though. Takes about one minute.\n\nBut is there any advantage for me compared to pgpool2, which works\nreally nice? 
In some parts, like doing some count(*) stuff, it now does\nthings in about one second, which took a few minutes to finish before (if\nthe other part of the scripts where doing something else on the database\nat the same time).\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Thu, 31 Jul 2008 10:29:30 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "\nOn 31 Jul 2008, at 10:29AM, Miernik wrote:\n\n> Richard Huxton <[email protected]> wrote:\n>>> I just installed pgpool2 and whoaaa! Everything its like about 3 \n>>> times\n>>> faster! My application are bash scripts using psql -c \"UPDATE ...\".\n>>\n>> Probably spending most of their time setting up a new connection, \n>> then\n>> clearing it down again.\n>\n> If I do it in Python it could do all queries in the same connection, \n> so\n> should be faster? Besides that 'psql' is written in perl, so its also\n> heavy, by not using psql I get rid of perl library in RAM. Also the\n> script uses wget to poll some external data sources a lot, also\n> needlessly opening new connection to the webserver, so I want to make\n> the script save the http connection, which means I must get rid of \n> wget.\n> Maybe I should write some parts in C?\n>\n> BTW, doesn't there exist any tool does what \"psql -c\" does, but is\n> written in plain C, not perl? I was looking for such psql replacement,\n> but couldn't find any.\n\n\n?\n\nfile `which psql`\n/usr/bin/psql: ELF 32-bit LSB executable, Intel 80386, version 1 \n(SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, \nstripped\n\n-- \nRegards\nTheo\n\n", "msg_date": "Thu, 31 Jul 2008 10:46:52 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Theo Kramer <[email protected]> wrote:\n> file `which psql`\n> /usr/bin/psql: ELF 32-bit LSB executable, Intel 80386, version 1 \n> (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, \n> stripped\n\nmiernik@tarnica:~$ file `which psql`\n/usr/bin/psql: symbolic link to `../share/postgresql-common/pg_wrapper'\nmiernik@tarnica:~$ file /usr/share/postgresql-common/pg_wrapper\n/usr/share/postgresql-common/pg_wrapper: a /usr/bin/perl -w script text executable\nmiernik@tarnica:~$\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Thu, 31 Jul 2008 11:17:20 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Miernik wrote:\n> Richard Huxton <[email protected]> wrote:\n>>> I just installed pgpool2 and whoaaa! Everything its like about 3 times\n>>> faster! My application are bash scripts using psql -c \"UPDATE ...\".\n>> Probably spending most of their time setting up a new connection, then\n>> clearing it down again.\n> \n> If I do it in Python it could do all queries in the same connection, so\n> should be faster? 
Besides that 'psql' is written in perl, so its also\n> heavy, by not using psql I get rid of perl library in RAM.\n\nNope - \"C\" all through.\n\n > Also the\n> script uses wget to poll some external data sources a lot, also\n> needlessly opening new connection to the webserver, so I want to make\n> the script save the http connection, which means I must get rid of wget.\n> Maybe I should write some parts in C?\n> \n> BTW, doesn't there exist any tool does what \"psql -c\" does, but is\n> written in plain C, not perl? I was looking for such psql replacement,\n> but couldn't find any\n\nWell ECPG lets you embed SQL directly in your \"C\".\n\n>>> # Number of connection pools allowed for a child process\n>>> max_pool = 1\n>> Might need to increase that to 2 or 3.\n> \n> Why? The website says:\n> \n> max_pool\n> \n> The maximum number of cached connections in pgpool-II children\n> processes. pgpool-II reuses the cached connection if an incoming\n> connection is connecting to the same database by the same username.\n> \n> But all my connections are to the same database and the same username,\n> and I only ever want my application to do 1 connection to the database\n> at a time, so why would I want/need 2 or 3 in max_pool?\n\n From the subject line of your question: \"how to fix problem then when \ntwo queries run at the same time...\"\n\nOf course if you don't actually want to run two simultaneous queries, \nthen max_pool=1 is what you want.\n\n>> Not well known enough on the Debian side of the fence? It's simple\n>> enough to install from source though. Takes about one minute.\n> \n> But is there any advantage for me compared to pgpool2, which works\n> really nice?\n\nCan't say. Given your limited RAM, it's probably worth looking at both \nand seeing which leaves you more memory. Your main concern has got to be \nto reduce wasted RAM.\n\n > In some parts, like doing some count(*) stuff, it now does\n> things in about one second, which took a few minutes to finish before (if\n> the other part of the scripts where doing something else on the database\n> at the same time).\n\nThat will be because you're only running one query, I'd have thought. \nTwo queries might be sending you into swap.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 31 Jul 2008 10:24:15 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" }, { "msg_contents": "Miernik wrote:\n> Theo Kramer <[email protected]> wrote:\n>> file `which psql`\n>> /usr/bin/psql: ELF 32-bit LSB executable, Intel 80386, version 1 \n>> (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9, \n>> stripped\n> \n> miernik@tarnica:~$ file `which psql`\n> /usr/bin/psql: symbolic link to `../share/postgresql-common/pg_wrapper'\n> miernik@tarnica:~$ file /usr/share/postgresql-common/pg_wrapper\n> /usr/share/postgresql-common/pg_wrapper: a /usr/bin/perl -w script text executable\n\nThat's not psql though, that's Debian's wrapper around it which lets you \ninstall multiple versions of PostgreSQL on the same machine. 
Might be \nworth bypassing it and calling it directly.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 31 Jul 2008 10:45:20 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" }, { "msg_contents": "\nOn 31 Jul 2008, at 11:17AM, Miernik wrote:\n\n> Theo Kramer <[email protected]> wrote:\n>> file `which psql`\n>> /usr/bin/psql: ELF 32-bit LSB executable, Intel 80386, version 1\n>> (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.9,\n>> stripped\n>\n> miernik@tarnica:~$ file `which psql`\n> /usr/bin/psql: symbolic link to `../share/postgresql-common/ \n> pg_wrapper'\n> miernik@tarnica:~$ file /usr/share/postgresql-common/pg_wrapper\n> /usr/share/postgresql-common/pg_wrapper: a /usr/bin/perl -w script \n> text executable\n> miernik@tarnica:~$\n\n\nHmm - looks like you are on a debian or debian derivative. However, \npg_wrapper is\nnot psql. It invokes psql which is written in C. Once psql is invoked \npg_wrapper drops away.\n-- \nRegards\nTheo\n\n", "msg_date": "Thu, 31 Jul 2008 11:52:12 +0200", "msg_from": "Theo Kramer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the same time,\n\tit takes longer to complete then if run in sequence" }, { "msg_contents": "Miernik wrote:\n\n> BTW, doesn't there exist any tool does what \"psql -c\" does, but is\n> written in plain C, not perl? I was looking for such psql replacement,\n> but couldn't find any.\n\nAs others have noted, psql is written in C, and you're using a wrapper.\n\nAssuming your're on Debian or similar you should be able to invoke the\nreal psql with:\n\n/usr/lib/postgresql/8.3/bin/psql\n\npsql is a C executable that uses libpq directly, and is really rather\nlow-overhead and fast.\n\nAs for how to issue multiple commands in one statement in a shell\nscript: one way is to use a here document instead of \"-c\". Eg:\n\n\npsql <<__END__\nUPDATE blah SET thingy = 7 WHERE otherthingy = 4;\nDELETE FROM sometable WHERE criterion = -1;\n__END__\n\nYou can of course wrap statements in explicit transaction BEGIN/COMMIT,\netc, as appropriate.\n\nAs you start doing more complex things, and especially once you start\nwanting to have both good performance *and* good error handling, you'll\nwant to move away from sql scripts with psql and toward using perl,\npython, or similar so you can use their native interfaces to libpq.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 31 Jul 2008 18:24:32 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" }, { "msg_contents": "\n> Wanted to install pgbouncer, but it is broken currently in Debian. And\n> why is it in contrib and not in main (speaking of Debian location)\n\nPgbouncer has worked very well for us. Wasn't available in default repos \nfor Ubuntu server when I did my original setup but installing from \nsource is quite easy. 
Running heavy load benchmarks on a web app \n(high-rate simple-queries) I saw about a 10-fold improvement just by \nusing pgbouncer.\n\nCheers,\nSteve\n\n", "msg_date": "Thu, 31 Jul 2008 08:30:35 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how to fix problem then when two queries run at the\n\tsame time, it takes longer to complete then if run in sequence" } ]
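For reference, a minimal pgbouncer setup in the spirit of the single-backend pooling discussed in this thread might look like the following (the database name, paths and limits are illustrative assumptions, not values taken from the thread):

[databases]
miernik = host=127.0.0.1 port=5432 dbname=miernik

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = trust
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 20
default_pool_size = 1

The application would then connect to port 6432 instead of 5432; with default_pool_size = 1 only one real backend is kept alive, much like the pgpool-II max_pool = 1 setup, which is what keeps the memory footprint down on a 48MB box.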
[ { "msg_contents": "Hi all,\n\nHave a problem related to partition tables.\n\nWe have the following schema's :\n\nmaster -\nid integer (PRIMARY KEY)\ncid integer\n\nchild1, child2 are the child tables of master table. Here \"cid\" is the field\nused for partitioning.\n\nWe have another table called\nother_tbl -\nid integer (PRIMARY KEY)\nm_id integer (FOREIGN KEY FROM master(id))\n\nNow the problem is, since other_tbl->m_id referenced from master->id, if we\ntry to insert any vales in \"other_tbl\" for m_id its checks it presence in\nonly master table and not in its child table. As the master table is always\nempty in partitions tables a foreign key violation ERROR is being given.\n\nHow can we define the Foreign_key constraint on \"other_tbl\" based upon any\npartitioned table field.\n\n-- \nRegards\nGauri\n\nHi all,Have a problem related to partition tables.We have the following schema's :master -id integer (PRIMARY KEY)cid integerchild1, child2 are the child tables of master table. Here \"cid\" is the field used for partitioning.\nWe have another table called other_tbl -id integer (PRIMARY KEY)m_id integer (FOREIGN KEY FROM master(id))Now the problem is, since other_tbl->m_id referenced from master->id, if we try to insert any vales in \"other_tbl\" for m_id its checks it presence in only master table and not in its child table. As the master table is always empty in partitions tables a foreign key violation ERROR is being given.\nHow can we define the Foreign_key constraint on \"other_tbl\" based upon any partitioned table field.-- RegardsGauri", "msg_date": "Thu, 24 Jul 2008 15:45:41 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partitioned Tables Foreign Key Constraints Problem" }, { "msg_contents": "Gauri,\n\n> How can we define the Foreign_key constraint on \"other_tbl\" based upon \n> any partitioned table field.\n\nCurrently PostgreSQL doesn't have any simple way to support this. There \nare a few workarounds, but they're fairly awkward, such as a trigger to \ncheck if the ID is correct.\n\n--Josh\n\n", "msg_date": "Fri, 25 Jul 2008 15:31:55 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Partitioned Tables Foreign Key Constraints Problem" } ]
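A sketch of the trigger workaround Josh mentions, using the column names from the schema above (untested; it only guards inserts and updates on other_tbl, so deletes from the partitions would still need their own handling):

CREATE OR REPLACE FUNCTION check_m_id_exists() RETURNS trigger AS $$
BEGIN
    -- a SELECT on the parent table sees all child partitions via inheritance
    IF NOT EXISTS (SELECT 1 FROM master WHERE id = NEW.m_id) THEN
        RAISE EXCEPTION 'm_id % not found in master or its partitions', NEW.m_id;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER other_tbl_m_id_fk
    BEFORE INSERT OR UPDATE ON other_tbl
    FOR EACH ROW EXECUTE PROCEDURE check_m_id_exists();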
[ { "msg_contents": "Hi All,\n\nThis is my first post so please be gentle if I make any mistakes.\n\nI know that ENUMs were not intended to be used instead of joins, but I\nbelieve that I my case there will be huge saves on using Enums. Every night\nI will drop the database and rebuild it from CSV files. The database will be\nquite large with around 200 million rows in a fact table (only numbers).\nThere are about 10 dimension tables (lookup table with ID and text columns).\nWhat is very special is that the software I am 100% dependent on to generate\nqueries, joins the fact table will all dimension tables in all cases. Mostly\nmy queries do not include any WHERE clauses (apart from those needed for\njoins) only GROUP BY and HAVING. Joining 200 million rows with side tables\nthat on average have about 50000 rows, is heavy. When I need results within\n10 seconds, I believe there is a need for Enums.\n\nOne of the side tables is text with length of up to 900 characters. My\nquestion is whether I could build PostgreSQL with NAMEDATALEN (which\ncontrols the max size of Enums) equal to 900? Or do you see any things going\nwrong then (like stability problems or whatever)?\n\nRegards,\n\nDavid\n\nHi All,This is my first post so please be gentle if I make any mistakes. I know that ENUMs were not intended to be used instead of joins, but I believe that I my case there will be huge saves on using Enums. Every night I will drop the database and rebuild it from CSV files. The database will be quite large with around 200 million rows in a fact table (only numbers). There are about 10 dimension tables (lookup table with ID and text columns). What is very special is that the software I am 100% dependent on to generate queries, joins the fact table will all dimension tables in all cases. Mostly my queries do not include any WHERE clauses (apart from those needed for joins) only GROUP BY and HAVING. Joining 200 million rows with side tables that on average have about 50000 rows, is heavy. When I need results within 10 seconds, I believe there is a need for Enums. \nOne of the side tables is text with length of up to 900 characters. My question is whether I could build PostgreSQL with NAMEDATALEN  (which controls the max size of Enums) equal to 900? Or do you see any things going wrong then (like stability problems or whatever)?\nRegards,David", "msg_date": "Sat, 26 Jul 2008 23:22:09 +0200", "msg_from": "\"David Andersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Using ENUM with huge NAMEDATALEN" }, { "msg_contents": "\"David Andersen\" <[email protected]> writes:\n> One of the side tables is text with length of up to 900 characters. My\n> question is whether I could build PostgreSQL with NAMEDATALEN (which\n> controls the max size of Enums) equal to 900?\n\nI wouldn't recommend it. Consider changing pg_enum.enumlabel to type\nTEXT instead.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Jul 2008 17:44:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using ENUM with huge NAMEDATALEN " }, { "msg_contents": "Hi Tom,\n\nThanks a lot for the tip! I will try this. You probably saved be a few days\nof work!\n\nRegards,\n\nDavid\n\nOn Sat, Jul 26, 2008 at 11:44 PM, Tom Lane <[email protected]> wrote:\n\n> \"David Andersen\" <[email protected]> writes:\n> > One of the side tables is text with length of up to 900 characters. 
My\n> > question is whether I could build PostgreSQL with NAMEDATALEN (which\n> > controls the max size of Enums) equal to 900?\n>\n> I wouldn't recommend it. Consider changing pg_enum.enumlabel to type\n> TEXT instead.\n>\n> regards, tom lane\n>\n>\n>\n\nHi Tom,Thanks a lot for the tip! I will try this. You probably saved be a few days of work!Regards,DavidOn Sat, Jul 26, 2008 at 11:44 PM, Tom Lane <[email protected]> wrote:\n\"David Andersen\" <[email protected]> writes:\n\n> One of the side tables is text with length of up to 900 characters. My\n> question is whether I could build PostgreSQL with NAMEDATALEN  (which\n> controls the max size of Enums) equal to 900?\n\nI wouldn't recommend it.  Consider changing pg_enum.enumlabel to type\nTEXT instead.\n\n                        regards, tom lane", "msg_date": "Sat, 26 Jul 2008 23:48:50 +0200", "msg_from": "\"David Andersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using ENUM with huge NAMEDATALEN" }, { "msg_contents": "Hi again,\n\nI am attempting to alter pg_enum.enumlabel to Text, but I seem to run into a\nstrange permission problem with regards to system tables. I am not allowed\nto modify them even if I am a superuser. Some output to make sure I do not\nmake any novice mistakes:\n\nC:\\Program Files\\PostgreSQL\\8.3\\bin>psql -d postgres -U postgres\nWelcome to psql 8.3.0, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help with psql commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nWarning: Console code page (850) differs from Windows code page (1252)\n 8-bit characters might not work correctly. See psql reference\n page \"Notes for Windows users\" for details.\n\npostgres=# GRANT ALL privileges ON TABLE pg_enum TO postgres;\nGRANT\npostgres=# alter table pg_enum ALTER COLUMN enumlabel TYPE text;\nERROR: permission denied: \"pg_enum\" is a system catalog\nSTATEMENT: alter table pg_enum ALTER COLUMN enumlabel TYPE text;\nERROR: permission denied: \"pg_enum\" is a system catalog\npostgres=# \\du\n List of roles\n Role name | Superuser | Create role | Create DB | Connections | Member of\n-----------+-----------+-------------+-----------+-------------+-----------\n postgres | yes | yes | yes | no limit | {}\n tull | yes | yes | yes | no limit | {}\n(2 rows)\n\npostgres=#\n\n\n\nThanks in advance for your help.\n\nRegards,\n\nDavid\n\n\nOn Sat, Jul 26, 2008 at 11:48 PM, David Andersen <[email protected]> wrote:\n\n> Hi Tom,\n>\n> Thanks a lot for the tip! I will try this. You probably saved be a few days\n> of work!\n>\n> Regards,\n>\n> David\n>\n>\n> On Sat, Jul 26, 2008 at 11:44 PM, Tom Lane <[email protected]> wrote:\n>\n>> \"David Andersen\" <[email protected]> writes:\n>> > One of the side tables is text with length of up to 900 characters. My\n>> > question is whether I could build PostgreSQL with NAMEDATALEN (which\n>> > controls the max size of Enums) equal to 900?\n>>\n>> I wouldn't recommend it. Consider changing pg_enum.enumlabel to type\n>> TEXT instead.\n>>\n>> regards, tom lane\n>>\n>>\n>>\n>\n\nHi again,I am attempting to alter pg_enum.enumlabel to Text, but I seem to run into a strange permission problem with regards to system tables. I am not allowed to modify them even if I am a superuser. 
Some output to make sure I do not make any novice mistakes:\nC:\\Program Files\\PostgreSQL\\8.3\\bin>psql -d postgres -U postgresWelcome to psql 8.3.0, the PostgreSQL interactive terminal.Type:  \\copyright for distribution terms       \\h for help with SQL commands\n       \\? for help with psql commands       \\g or terminate with semicolon to execute query       \\q to quitWarning: Console code page (850) differs from Windows code page (1252)         8-bit characters might not work correctly. See psql reference\n         page \"Notes for Windows users\" for details.postgres=# GRANT ALL privileges ON TABLE pg_enum TO postgres;GRANTpostgres=# alter table pg_enum ALTER COLUMN enumlabel TYPE text;ERROR:  permission denied: \"pg_enum\" is a system catalog\nSTATEMENT:  alter table pg_enum ALTER COLUMN enumlabel TYPE text;ERROR:  permission denied: \"pg_enum\" is a system catalogpostgres=# \\du                               List of roles Role name | Superuser | Create role | Create DB | Connections | Member of\n-----------+-----------+-------------+-----------+-------------+----------- postgres  | yes       | yes         | yes       | no limit    | {} tull      | yes       | yes         | yes       | no limit    | {}\n(2 rows)postgres=#Thanks in advance for your help.Regards,DavidOn Sat, Jul 26, 2008 at 11:48 PM, David Andersen <[email protected]> wrote:\nHi Tom,Thanks a lot for the tip! I will try this. You probably saved be a few days of work!\nRegards,DavidOn Sat, Jul 26, 2008 at 11:44 PM, Tom Lane <[email protected]> wrote:\n\"David Andersen\" <[email protected]> writes:\n\n\n> One of the side tables is text with length of up to 900 characters. My\n> question is whether I could build PostgreSQL with NAMEDATALEN  (which\n> controls the max size of Enums) equal to 900?\n\nI wouldn't recommend it.  Consider changing pg_enum.enumlabel to type\nTEXT instead.\n\n                        regards, tom lane", "msg_date": "Sun, 27 Jul 2008 02:13:47 +0200", "msg_from": "\"David Andersen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Using ENUM with huge NAMEDATALEN" }, { "msg_contents": "\"David Andersen\" <[email protected]> writes:\n> I am attempting to alter pg_enum.enumlabel to Text, but I seem to run into a\n> strange permission problem with regards to system tables. I am not allowed\n> to modify them even if I am a superuser.\n\nALTER TABLE is hardly gonna be sufficient on a system catalog anyway, as\nknowledge of its rowtype is generally hardwired into the C code. You'd\nhave to modify src/include/catalog/pg_enum.h and then go around and find\nall the references to enumlabel and fix them to know it's text not name.\nFortunately, this being not a widely used catalog, there shouldn't be\ntoo many places to fix. Right offhand, it looks like the indexing.h\ndefinition of its index and about three places in pg_enum.c would be all\nthat have to change.\n\nNote that this would be an initdb-forcing change and so you should also\nbump the catversion number.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Jul 2008 20:36:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using ENUM with huge NAMEDATALEN " } ]
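Whichever route is taken (a larger NAMEDATALEN or switching enumlabel to text), a quick sanity check after rebuilding might be to create a type from a few of the real 900-character labels and read them back (illustrative only, with placeholder labels):

CREATE TYPE long_desc AS ENUM ('first long label ...', 'second long label ...');
SELECT enumlabel, length(enumlabel::text) FROM pg_enum ORDER BY oid;

A stock build rejects labels longer than NAMEDATALEN - 1 (63) outright, so if the CREATE TYPE succeeds and the reported lengths match the source data, the rebuild did what was intended.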
[ { "msg_contents": "Hi All,\n\n \n\nI have taken over the maintenance of a server farm , recently. 2 webserver\non db server. They are quite powerful 2 processor xeon w/ 6Gig of ram .\n\n \n\nCouple of days ago we had a serious performance hit and the db server (pg.\nv7.4) was overloaded w/ something in a way that operating system was almost\nnot able to respond or in cases it did not.\n\n \n\n \n\nAfter some analysis i suspect that there is a query that takes up to 1\nsecond and that is the cause. Upon each page loading this query fires and\ntakes the one second and blocks the page to load completly . The load was\nroughly ~300 connections in one minute .\n\n \n\nSo here are my questions : \n\n \n\n. Why does the second and the later queries take the whole on second\nif the dataset is the same . Shouldn't PG realise that the query is the same\nso i give the user the same resultset ?\n\n. How do I know if one query blocks the other ?\n\n. Is there a way to log the long running queries in 7.4 ? If not is\nit available in any newer version ?\n\n \n\nthanks for your help ! \n\n \n\nÜdvözlettel/kind regards,\n\nFaludi, Gábor\n\nFITS Magyarország Kft.\n\n <http://www.fits.hu/> http://www.FITS.hu\n\nTel.:+36 30 4945862\n\nEmail: <mailto:[email protected]> [email protected]\n\nIngyenes videó tanfolyamok(Excel,Access,Word) :\n<http://www.fits.hu/trainings> http://www.fits.hu/trainings\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi All,\n \nI have taken over the maintenance of a server farm ,\nrecently. 2 webserver on  db server. They are quite powerful 2 processor xeon\nw/ 6Gig of ram .\n \nCouple of days ago we had a serious performance hit and the\ndb server (pg. v7.4) was overloaded w/ something in a way that operating system\nwas almost not able to respond or in cases it did not.\n \n \nAfter some analysis i suspect that there is a query that\ntakes up to 1 second and that is the cause. Upon  each page loading this query\nfires and takes the one second and blocks the page to load completly . The load\nwas roughly ~300 connections in one minute .\n \nSo here are my questions : \n \n·        \nWhy does the second and the later queries take\nthe whole on second if the dataset is the same . Shouldn’t PG realise\nthat the query is the same so i give the user the same resultset ?\n·        \nHow do I know if one query blocks the other ?\n·        \nIs there a way to log the long running queries\nin 7.4 ? If not is it available in any newer version ?\n \nthanks for your help ! \n \nÜdvözlettel/kind\nregards,\nFaludi,\nGábor\nFITS\nMagyarország Kft.\nhttp://www.FITS.hu\nTel.:+36 30\n4945862\nEmail: [email protected]\nIngyenes\nvideó tanfolyamok(Excel,Access,Word) : http://www.fits.hu/trainings", "msg_date": "Mon, 28 Jul 2008 08:43:51 +0200", "msg_from": "=?iso-8859-2?Q?Faludi_G=E1bor?= <[email protected]>", "msg_from_op": true, "msg_subject": "how does pg handle concurrent queries and same queries" }, { "msg_contents": "> I have taken over the maintenance of a server farm , recently. 2 webserver\n> on db server. They are quite powerful 2 processor xeon w/ 6Gig of ram .\n>\n> Couple of days ago we had a serious performance hit and the db server (pg.\n> v7.4) was overloaded w/ something in a way that operating system was almost\n> not able to respond or in cases it did not.\n>\n> After some analysis i suspect that there is a query that takes up to 1\n> second and that is the cause. Upon each page loading this query fires and\n> takes the one second and blocks the page to load completly . 
The load was\n> roughly ~300 connections in one minute .\n>\n> So here are my questions :\n>\n> · Why does the second and the later queries take the whole on second\n> if the dataset is the same . Shouldn't PG realise that the query is the same\n> so i give the user the same resultset ?\n>\n> · How do I know if one query blocks the other ?\n>\n> · Is there a way to log the long running queries in 7.4 ? If not is\n> it available in any newer version ?\n\nCan you post the queries? Can you provide an 'analyze explain'? Do you\nperform a 'vacuum analyze' on a regular basis?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Mon, 28 Jul 2008 08:55:53 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how does pg handle concurrent queries and same queries" }, { "msg_contents": "Hi,\n\nhere is what the original query was which was obviously nonsense :\nEXPLAIN ANALYZE SELECT DISTINCT letoltes.cid, s.elofordulas FROM letoltes\nINNER JOIN (select letoltes.cid, count(letoltes.cid) AS elofordulas FROM\nletoltes GROUP BY cid) s ON s.cid=letoltes.cid ORDER BY s.elofordulas DESC\nLIMIT 5;\n \nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n------------\n Limit (cost=73945.35..73945.65 rows=5 width=12) (actual\ntime=4191.396..4351.966 rows=5 loops=1)\n -> Unique (cost=73945.35..77427.99 rows=58800 width=12) (actual\ntime=4191.390..4351.956 rows=5 loops=1)\n -> Sort (cost=73945.35..75106.23 rows=464351 width=12) (actual\ntime=4191.386..4283.545 rows=175944 loops=1)\n Sort Key: s.elofordulas, letoltes.cid\n -> Merge Join (cost=9257.99..30238.65 rows=464351 width=12)\n(actual time=652.535..2920.304 rows=464351 loops=1)\n Merge Cond: (\"outer\".cid = \"inner\".cid)\n -> Index Scan using idx_letoltes_cid on letoltes\n(cost=0.00..12854.51 rows=464351 width=4) (actual time=0.084..1270.588\nrows=464351 loops=1)\n -> Sort (cost=9257.99..9258.73 rows=294 width=12)\n(actual time=652.434..810.941 rows=464176 loops=1)\n Sort Key: s.cid\n -> Subquery Scan s (cost=9242.26..9245.94\nrows=294 width=12) (actual time=651.343..652.028 rows=373 loops=1)\n -> HashAggregate (cost=9242.26..9243.00\nrows=294 width=4) (actual time=651.339..651.661 rows=373 loops=1)\n -> Seq Scan on letoltes\n(cost=0.00..6920.51 rows=464351 width=4) (actual time=0.014..307.469\nrows=464351 loops=1)\n Total runtime: 4708.434 ms\n(13 sor)\n\nHowever after fixing the query this is 1/4 th of the time but still blocks\nthe site :\n\n\nEXPLAIN ANALYZE SELECT DISTINCT letoltes.cid, count(letoltes.cid) AS\nelofordulas FROM letoltes GROUP BY cid ORDER BY elofordulas DESC LIMIT 5;\n QUERY PLAN\n----------------------------------------------------------------------------\n-------------------------------------------------------------\n Limit (cost=9255.05..9255.09 rows=5 width=4) (actual time=604.734..604.743\nrows=5 loops=1)\n -> Unique (cost=9255.05..9257.26 rows=294 width=4) (actual\ntime=604.732..604.737 rows=5 loops=1)\n -> Sort (cost=9255.05..9255.79 rows=294 width=4) (actual\ntime=604.730..604.732 rows=5 loops=1)\n Sort Key: count(cid), cid\n -> HashAggregate (cost=9242.26..9243.00 rows=294 width=4)\n(actual time=604.109..604.417 rows=373 loops=1)\n -> Seq Scan on letoltes (cost=0.00..6920.51\nrows=464351 width=4) (actual time=0.022..281.413 rows=464351 loops=1)\n Total runtime: 604.811 
ms\n\n\nhere is the table : \n\\d letoltes\n TĂĄbla \"public.letoltes\"\n Oszlop | TĂ­pus | MĂłdosĂ­tĂł\n--------+---------+------------------------------------------------\n id | integer | not null default nextval('letoltes_seq'::text)\n cid | integer |\nIndexes:\n \"idx_letoltes_cid\" btree (cid)\n \"idx_letoltes_id\" btree (id)\n\nselect count(1) from letoltes;\n count\n--------\n 464351\n\n\nVACUM ANALYZE runs overnight every day.\n\nthanks,\nGabor\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Claus Guttesen\nSent: Monday, July 28, 2008 8:56 AM\nTo: Faludi Gábor\nCc: [email protected]\nSubject: Re: [PERFORM] how does pg handle concurrent queries and same\nqueries\n\n> I have taken over the maintenance of a server farm , recently. 2 webserver\n> on db server. They are quite powerful 2 processor xeon w/ 6Gig of ram .\n>\n> Couple of days ago we had a serious performance hit and the db server (pg.\n> v7.4) was overloaded w/ something in a way that operating system was\nalmost\n> not able to respond or in cases it did not.\n>\n> After some analysis i suspect that there is a query that takes up to 1\n> second and that is the cause. Upon each page loading this query fires and\n> takes the one second and blocks the page to load completly . The load was\n> roughly ~300 connections in one minute .\n>\n> So here are my questions :\n>\n> . Why does the second and the later queries take the whole on\nsecond\n> if the dataset is the same . Shouldn't PG realise that the query is the\nsame\n> so i give the user the same resultset ?\n>\n> . How do I know if one query blocks the other ?\n>\n> . Is there a way to log the long running queries in 7.4 ? If not\nis\n> it available in any newer version ?\n\nCan you post the queries? Can you provide an 'analyze explain'? Do you\nperform a 'vacuum analyze' on a regular basis?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\nNo virus found in this incoming message.\nChecked by AVG - http://www.avg.com \nVersion: 8.0.138 / Virus Database: 270.5.6/1576 - Release Date: 2008.07.27.\n16:16\n\n", "msg_date": "Mon, 28 Jul 2008 09:53:24 +0200", "msg_from": "=?iso-8859-2?Q?Faludi_G=E1bor?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: how does pg handle concurrent queries and same queries" }, { "msg_contents": "Faludi Gďż˝bor wrote:\n\n> . Why does the second and the later queries take the whole on second\n> if the dataset is the same . Shouldn't PG realise that the query is the same\n> so i give the user the same resultset ?\n\nThat would require a result cache. I don't know if Pg even has a query\nresult cache - I don't think so, but I'm not sure. Even if it does, it'd\nstill only be useful if the queries were issued under *exactly* the same\nconditions - in other words, no writes had been made to the database\nsince the cached query was issued, and the first query had committed\nbefore the second began (or was read-only). Additionally, no volatile\nfunctions could be called in the query, because their values/effects\nmight be different when the query is executed a second time. That\nincludes triggers, etc.\n\nSince 7.4 doesn't do lazy xid allocation it can't really tell that\nnothing has been changed since the previous query was cached. 
So, if I'm\nnot missing something here, a query result cache would be useless anyway.\n\n> . How do I know if one query blocks the other ?\n\nExamination of pg_catalog.pg_locks is certainly a start. It's trickier\nwith lots of short-running queries, though.\n\n> . Is there a way to log the long running queries in 7.4 ? If not is\n> it available in any newer version ?\n\nIt's certainly available in 8.3, as log_min_duration_statement in\npostgresql.conf . You can find out if it's in 7.4, and if not what\nversion it was introduced in, by looking through the documentation for\nversions 7.4 and up.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 28 Jul 2008 16:46:12 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how does pg handle concurrent queries and same queries" }, { "msg_contents": "On Mon, 28 Jul 2008, Faludi Gábor wrote:\n> EXPLAIN ANALYZE SELECT DISTINCT letoltes.cid, count(letoltes.cid) AS\n> elofordulas FROM letoltes GROUP BY cid ORDER BY elofordulas DESC LIMIT 5;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=9255.05..9255.09 rows=5 width=4) (actual time=604.734..604.743 > rows=5 loops=1)\n> -> Unique (cost=9255.05..9257.26 rows=294 width=4) (actual time=604.732..604.737 rows=5 loops=1)\n> -> Sort (cost=9255.05..9255.79 rows=294 width=4) (actual time=604.730..604.732 rows=5 loops=1)\n> Sort Key: count(cid), cid\n> -> HashAggregate (cost=9242.26..9243.00 rows=294 width=4) (actual time=604.109..604.417 rows=373 loops=1)\n> -> Seq Scan on letoltes (cost=0.00..6920.51 rows=464351 width=4) (actual time=0.022..281.413 rows=464351 loops=1)\n> Total runtime: 604.811 ms\n\nSo this query is doing a sequential scan of the letoltes table for each \nquery. You may get some improvement by creating an index on cid and \nclustering on that index, but probably not much.\n\nMoving to Postgres 8.3 will probably help a lot, as it will allow multiple \nqueries to use the same sequential scan in parallel. That's assuming the \nentire table isn't in cache.\n\nAnother solution would be to create an additional table that contains the \nresults of this query, and keep it up to date using triggers on the \noriginal table. Then query that table instead.\n\nHowever, probably the best solution is to examine the problem and work out \nif you can alter the application to make it avoid doing such an expensive \nquery so often. Perhaps it could cache the results.\n\nMatthew\n\n-- \nPsychotics are consistently inconsistent. The essence of sanity is to\nbe inconsistently inconsistent.", "msg_date": "Mon, 28 Jul 2008 12:27:27 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how does pg handle concurrent queries and same queries" }, { "msg_contents": "Craig Ringer wrote:\n> Faludi G�bor wrote:\n> \n> > . Why does the second and the later queries take the whole on second\n> > if the dataset is the same . Shouldn't PG realise that the query is the same\n> > so i give the user the same resultset ?\n> \n> That would require a result cache. 
I don't know if Pg even has a query\n> result cache - I don't think so, but I'm not sure.\n\nIt doesn't.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Mon, 28 Jul 2008 09:47:02 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how does pg handle concurrent queries and same\n\tqueries" }, { "msg_contents": "Slightly off-topic, but judging from the fact that you were able to\n\"fix\" the query, it seems you have some way to modify the application\ncode itself. In that case, I'd try to implement caching (at least for\nthis statement) on the application side, for example with memcached.\n", "msg_date": "Wed, 30 Jul 2008 20:34:16 +0200", "msg_from": "\"Dennis Brakhane\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: how does pg handle concurrent queries and same queries" } ]
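For completeness, a sketch of the trigger-maintained summary table Matthew suggests, reusing the letoltes/cid/elofordulas names from the thread (the summary table name is made up and the code is untested; it is written with 8.x dollar quoting for readability, so on 7.4 the function body would need old-style single-quoted syntax, and concurrent first inserts of a brand-new cid could still collide and need handling):

CREATE TABLE letoltes_osszesito (
    cid         integer PRIMARY KEY,
    elofordulas bigint  NOT NULL DEFAULT 0
);

CREATE OR REPLACE FUNCTION letoltes_osszesito_trig() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        UPDATE letoltes_osszesito SET elofordulas = elofordulas + 1 WHERE cid = NEW.cid;
        IF NOT FOUND THEN
            INSERT INTO letoltes_osszesito (cid, elofordulas) VALUES (NEW.cid, 1);
        END IF;
        RETURN NEW;
    ELSE  -- DELETE
        UPDATE letoltes_osszesito SET elofordulas = elofordulas - 1 WHERE cid = OLD.cid;
        RETURN OLD;
    END IF;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER letoltes_osszesito_upd
    AFTER INSERT OR DELETE ON letoltes
    FOR EACH ROW EXECUTE PROCEDURE letoltes_osszesito_trig();

-- the per-page query then becomes a cheap lookup instead of a 464,351-row scan:
SELECT cid, elofordulas FROM letoltes_osszesito ORDER BY elofordulas DESC LIMIT 5;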
[ { "msg_contents": "Morning folks,\n\tLong time listener, first time poster. Having an interesting\nproblem related to performance which I'll try and describe below and\nhopefully get some enlightenment. First the environment:\n\n\nPostgres 8.1.8\n\tshared_buffers = 2000\n\tmax_fsm_pages = 400000\nRedhat Enterprise 4\nRunning on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1\nAlso running on the server is a tomcat web server and other ancillaries\n\nNow, the problem. We have an application that continually writes a\nbunch of data to a few tables which is then deleted by a batch job each\nnight. We're adding around 80,000 rows to one table per day and\nremoving around 75,000 that are deemed to be \"unimportant\". Now, the\nproblem we see is that after a period of time, the database access\nbecomes very 'slow' and the load avg on the machine gets up around 5.\nWhen this happens, the application using the DB basically grinds to a\nhalt. Checking the stats, the DB size is around 7.5GB; no tables or\nindexes look to be 'bloated' (we have been using psql since 7.3 with the\nclassic index bloat problem) and the auto-vac has been running solidly.\n\nWe had this problem around a month ago and again yesterday. Because the\napplication needs reasonably high availability, we couldn't full vacuum\nso what we did was a dump and load to another system. What I found here\nwas that after the load, the DB size was around 2.7GB - a decrease of\n5GB. Re-loading this back onto the main system, and the world is good.\n\nOne observation I've made on the DB system is the disk I/O seems\ndreadfully slow...we're at around 75% I/O wait sometimes and the read\nrates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for\nun-cached reads). I've also observed that the OS cache seems to be\nusing all of the remaining memory for it's cache (around 3GB) which\nseems probably the best it can do with the available memory.\n\nNow, clearly we need to examine the need for the application to write\nand remove so much data but my main question is:\n\nWhy does the size of the database with so much \"un-used\" space seem to\nimpact performance so much? If (in this case) the extra 5GB of space is\nessentially \"unallocated\", does it factor into any of the caching or\nperformance metrics that the DBMS uses? And if so, would I be better\nhaving a higher shared_buffers rather than relying so much on OS cache?\n\nYes, I know we need to upgrade to 8.3 but that's going to take some time\n:)\n\nMany thanks in advance.\n\nDave\n\n___\nDave North\[email protected]\nSigniant - Making Media Move\nVisit Signiant at: www.signiant.com <http://www.signiant.com/> \n\n", "msg_date": "Wed, 30 Jul 2008 07:09:10 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Database size Vs performance degradation" }, { "msg_contents": "Dave North wrote:\n> Morning folks,\n> \tLong time listener, first time poster.\n\nHi Dave\n\n> Postgres 8.1.8\n> \tshared_buffers = 2000\n> \tmax_fsm_pages = 400000\n> Redhat Enterprise 4\n> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1\n> Also running on the server is a tomcat web server and other ancillaries\n\nThe value of 2000 seems a bit low for shared_buffers perhaps. Oh, and \n8.1.13 seems to be the latest bugfix for 8.1 too.\n\n> Now, the problem. We have an application that continually writes a\n> bunch of data to a few tables which is then deleted by a batch job each\n> night. 
We're adding around 80,000 rows to one table per day and\n> removing around 75,000 that are deemed to be \"unimportant\".\n[snip]\n> We had this problem around a month ago and again yesterday. Because the\n> application needs reasonably high availability, we couldn't full vacuum\n> so what we did was a dump and load to another system. What I found here\n> was that after the load, the DB size was around 2.7GB - a decrease of\n> 5GB. Re-loading this back onto the main system, and the world is good.\n\nWell, that's pretty much the definition of bloat. Are you sure you're \nvacuuming enough? I don't have an 8.1 to hand at the moment, but a \n\"vacuum verbose\" in 8.2+ gives some details at the end about how many \nfree-space slots need to be tracked. Presumably you're not tracking \nenough of them, or your vacuuming isn't actually taking place.\n\nCheck the size of your database every night. It will rise from 2.7GB, \nbut it should stay roughly static (apart from whatever data you add of \ncourse). If you can keep it so that most of the working-set of your \ndatabase fits in RAM speed will stay just fine.\n\n> Yes, I know we need to upgrade to 8.3 but that's going to take some time\n> :)\n\nI think you'll like some of the improvements, but it's probably more \nimportant to get 8.1.13 installed soon-ish.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 30 Jul 2008 13:28:09 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "On Wed, 30 Jul 2008, Dave North wrote:\n> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1\n\n> Checking the stats, the DB size is around 7.5GB;\n\nDoesn't fit in RAM.\n\n> ...after the load, the DB size was around 2.7GB\n\nDoes fit in RAM.\n\n> One observation I've made on the DB system is the disk I/O seems\n> dreadfully slow...we're at around 75% I/O wait sometimes and the read\n> rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for\n> un-cached reads).\n\nThat's incredibly slow in this day and age, especially from 10krpm HDDs. \nDefinitely worth investigating.\n\nHowever, I think vacuuming more agressively is going to be your best win \nat the moment.\n\nMatthew\n\n-- \nPatron: \"I am looking for a globe of the earth.\"\nLibrarian: \"We have a table-top model over here.\"\nPatron: \"No, that's not good enough. Don't you have a life-size?\"\nLibrarian: (pause) \"Yes, but it's in use right now.\"\n", "msg_date": "Wed, 30 Jul 2008 13:36:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": " \n\n-----Original Message-----\nFrom: Richard Huxton [mailto:[email protected]] \nSent: July 30, 2008 8:28 AM\nTo: Dave North\nCc: [email protected]\nSubject: Re: [PERFORM] Database size Vs performance degradation\n\nDave North wrote:\n> Morning folks,\n> \tLong time listener, first time poster.\n\nHi Dave\n\n> Postgres 8.1.8\n> \tshared_buffers = 2000\n> \tmax_fsm_pages = 400000\n> Redhat Enterprise 4\n> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1 Also running\n\n> on the server is a tomcat web server and other ancillaries\n\nThe value of 2000 seems a bit low for shared_buffers perhaps. Oh, and\n8.1.13 seems to be the latest bugfix for 8.1 too.\n\nDN: Yeah, I was thinking the same. 
I spent several hours reading info\non this list and other places and it's highly inconclusive about having\nhigh or low shared buffs Vs letting the OS disk cache handle it.\n\n> Now, the problem. We have an application that continually writes a \n> bunch of data to a few tables which is then deleted by a batch job \n> each night. We're adding around 80,000 rows to one table per day and \n> removing around 75,000 that are deemed to be \"unimportant\".\n[snip]\n> We had this problem around a month ago and again yesterday. Because \n> the application needs reasonably high availability, we couldn't full \n> vacuum so what we did was a dump and load to another system. What I \n> found here was that after the load, the DB size was around 2.7GB - a \n> decrease of 5GB. Re-loading this back onto the main system, and the\nworld is good.\n\nWell, that's pretty much the definition of bloat. Are you sure you're\nvacuuming enough?\n\nDN: Well, the auto-vac is kicking off pretty darn frequently...around\nonce every 2 minutes. However, you just made me think of the obvious -\nis it actually doing anything?! The app is pretty darn write intensive\nso I wonder if it's actually able to vacuum the tables?\n\n I don't have an 8.1 to hand at the moment, but a \"vacuum verbose\" in\n8.2+ gives some details at the end about how many free-space slots need\nto be tracked. Presumably you're not tracking enough of them, or your\nvacuuming isn't actually taking place.\n\nDN: I think you've hit it. Now the next obvious problem is how to make\nthe vac actually vac while maintaining a running system?\n\nCheck the size of your database every night. It will rise from 2.7GB,\nbut it should stay roughly static (apart from whatever data you add of\ncourse). If you can keep it so that most of the working-set of your\ndatabase fits in RAM speed will stay just fine.\n\nDN: Yep, I'm just implementing a size tracker now to keep a track on it.\nIt grew from the 2.5GB to 7GB in around a month so it's pretty easy to\nsee big jumps I'd say. Does the auto-vac log it's results somewhere by\nany chance do you know?\n\nFantastic post, thanks so much.\n\nDave\n\n> Yes, I know we need to upgrade to 8.3 but that's going to take some \n> time\n> :)\n\nI think you'll like some of the improvements, but it's probably more\nimportant to get 8.1.13 installed soon-ish.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 30 Jul 2008 07:41:25 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": " \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Matthew\nWakeling\nSent: July 30, 2008 8:37 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Database size Vs performance degradation\n\nOn Wed, 30 Jul 2008, Dave North wrote:\n> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1\n\n> Checking the stats, the DB size is around 7.5GB;\n\nDoesn't fit in RAM.\n\n> ...after the load, the DB size was around 2.7GB\n\nDoes fit in RAM.\n\n> One observation I've made on the DB system is the disk I/O seems \n> dreadfully slow...we're at around 75% I/O wait sometimes and the read \n> rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for \n> un-cached reads).\n\nThat's incredibly slow in this day and age, especially from 10krpm HDDs.\n\nDefinitely worth investigating.\n\nDN: Yeah, I was thinking the same thing. Unlike the folks here, I'm no\nperformance whiz but it did seem crazy slow. 
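For a crude cross-check of raw sequential throughput, independent of hdparm, something along these lines can be run on both boxes; the file path and size are arbitrary, the file just needs to be larger than RAM so the read-back is not served from cache:

dd if=/dev/zero of=/tmp/ddtest bs=8k count=625000
dd if=/tmp/ddtest of=/dev/null bs=8k
rm /tmp/ddtest

Healthy 10K drives behind a RAID 0+1 controller should manage well over 50MB/sec sequentially, so sustained numbers in the 2-20MB/sec range would point at the controller or array configuration rather than the disks themselves.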
Given the 10K disks, it\nseems to me there is most likely something on the RAID Array itself that\nis set sub-optimally. Next thing to look at.\n\nHowever, I think vacuuming more agressively is going to be your best win\nat the moment.\n\nDN: As I just replied to the past (very helpful) chap, I think I need to\ngo see what exactly the vac is vac'ing (autovac that is) because\nalthough it's running super frequently, the big question is \"is it doing\nanything\" :)\n\nCheers\n\nDave\n", "msg_date": "Wed, 30 Jul 2008 07:43:38 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "I am guessing that you are using DELETE to remove the 75,000 unimportant.\nChange your batch job to CREATE a new table consisting only of the 5,000 important. You can use \"CREATE TABLE table_name AS select_statement\" command. Then drop the old table. After that you can use ALTER TABLE to change the name of the new table to that of the old one. \n\nI am not an expert but if this is a viable solution for you then I think doing it this way will rid you of your bloating problem.\n\nRegards,\nVal\n\n\n--- On Wed, 30/7/08, Dave North <[email protected]> wrote:\n\n> From: Dave North <[email protected]>\n> Subject: [PERFORM] Database size Vs performance degradation\n> To: [email protected]\n> Date: Wednesday, 30 July, 2008, 1:09 PM\n> Morning folks,\n> \tLong time listener, first time poster. Having an\n> interesting\n> problem related to performance which I'll try and\n> describe below and\n> hopefully get some enlightenment. First the environment:\n> \n> \n> Postgres 8.1.8\n> \tshared_buffers = 2000\n> \tmax_fsm_pages = 400000\n> Redhat Enterprise 4\n> Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1\n> Also running on the server is a tomcat web server and other\n> ancillaries\n> \n> Now, the problem. We have an application that continually\n> writes a\n> bunch of data to a few tables which is then deleted by a\n> batch job each\n> night. We're adding around 80,000 rows to one table\n> per day and\n> removing around 75,000 that are deemed to be\n> \"unimportant\". Now, the\n> problem we see is that after a period of time, the database\n> access\n> becomes very 'slow' and the load avg on the machine\n> gets up around 5.\n> When this happens, the application using the DB basically\n> grinds to a\n> halt. Checking the stats, the DB size is around 7.5GB; no\n> tables or\n> indexes look to be 'bloated' (we have been using\n> psql since 7.3 with the\n> classic index bloat problem) and the auto-vac has been\n> running solidly.\n> \n> We had this problem around a month ago and again yesterday.\n> Because the\n> application needs reasonably high availability, we\n> couldn't full vacuum\n> so what we did was a dump and load to another system. What\n> I found here\n> was that after the load, the DB size was around 2.7GB - a\n> decrease of\n> 5GB. Re-loading this back onto the main system, and the\n> world is good.\n> \n> One observation I've made on the DB system is the disk\n> I/O seems\n> dreadfully slow...we're at around 75% I/O wait\n> sometimes and the read\n> rates seem quite slow (hdparm says around 2.2MB/sec -\n> 20MB/sec for\n> un-cached reads). 
I've also observed that the OS cache\n> seems to be\n> using all of the remaining memory for it's cache\n> (around 3GB) which\n> seems probably the best it can do with the available\n> memory.\n> \n> Now, clearly we need to examine the need for the\n> application to write\n> and remove so much data but my main question is:\n> \n> Why does the size of the database with so much\n> \"un-used\" space seem to\n> impact performance so much? If (in this case) the extra\n> 5GB of space is\n> essentially \"unallocated\", does it factor into\n> any of the caching or\n> performance metrics that the DBMS uses? And if so, would I\n> be better\n> having a higher shared_buffers rather than relying so much\n> on OS cache?\n> \n> Yes, I know we need to upgrade to 8.3 but that's going\n> to take some time\n> :)\n> \n> Many thanks in advance.\n> \n> Dave\n> \n> ___\n> Dave North\n> [email protected]\n> Signiant - Making Media Move\n> Visit Signiant at: www.signiant.com\n> <http://www.signiant.com/> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Wed, 30 Jul 2008 14:57:47 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Thank you for the suggestion..much appreciated. Alas, I don't think\nthis will be possible without a change to the application but it's a\ngood idea nonetheless.\n\nWhere I am now is looking at the autovac tuning parameters. I strongly\nsuspect that the 2 tables that are \"frequent changers\" are just not\ngetting enough cleaning. It's hard to tell though because the AV\nmessages are only at debug2 and setting debug2 on this server would be a\nkiller. However, just from running the math using the autovac limit and\nhow it's a percentage of table size, I'm pretty sure that we're not\nvac'ing enough and that reducing the multiplier down from 0.4 would make\na significant difference. My main question was answered though I think\n- the growth is NOT normal which was the stimulus I needed to\ninvestigate further.\n\nThanks again\n\nDave\n\n> -----Original Message-----\n> From: Valentin Bogdanov [mailto:[email protected]] \n> Sent: July 30, 2008 10:58 AM\n> To: [email protected]; Dave North\n> Subject: Re: [PERFORM] Database size Vs performance degradation\n> \n> I am guessing that you are using DELETE to remove the 75,000 \n> unimportant.\n> Change your batch job to CREATE a new table consisting only \n> of the 5,000 important. You can use \"CREATE TABLE table_name \n> AS select_statement\" command. Then drop the old table. After \n> that you can use ALTER TABLE to change the name of the new \n> table to that of the old one. \n> \n> I am not an expert but if this is a viable solution for you \n> then I think doing it this way will rid you of your bloating problem.\n> \n> Regards,\n> Val\n> \n> \n> --- On Wed, 30/7/08, Dave North <[email protected]> wrote:\n> \n> > From: Dave North <[email protected]>\n> > Subject: [PERFORM] Database size Vs performance degradation\n> > To: [email protected]\n> > Date: Wednesday, 30 July, 2008, 1:09 PM Morning folks,\n> > \tLong time listener, first time poster. 
Having an \n> interesting problem \n> > related to performance which I'll try and describe below \n> and hopefully \n> > get some enlightenment. First the environment:\n> > \n> > \n> > Postgres 8.1.8\n> > \tshared_buffers = 2000\n> > \tmax_fsm_pages = 400000\n> > Redhat Enterprise 4\n> > Running on HP DL380 w/ 4GB RAM, dual 10K HDDs in RAID 0+1 \n> Also running \n> > on the server is a tomcat web server and other ancillaries\n> > \n> > Now, the problem. We have an application that continually writes a \n> > bunch of data to a few tables which is then deleted by a batch job \n> > each night. We're adding around 80,000 rows to one table \n> per day and \n> > removing around 75,000 that are deemed to be \"unimportant\". \n> Now, the \n> > problem we see is that after a period of time, the database access \n> > becomes very 'slow' and the load avg on the machine gets up \n> around 5.\n> > When this happens, the application using the DB basically \n> grinds to a \n> > halt. Checking the stats, the DB size is around 7.5GB; no \n> tables or \n> > indexes look to be 'bloated' (we have been using psql since \n> 7.3 with \n> > the classic index bloat problem) and the auto-vac has been running \n> > solidly.\n> > \n> > We had this problem around a month ago and again yesterday.\n> > Because the\n> > application needs reasonably high availability, we couldn't full \n> > vacuum so what we did was a dump and load to another \n> system. What I \n> > found here was that after the load, the DB size was around \n> 2.7GB - a \n> > decrease of 5GB. Re-loading this back onto the main \n> system, and the \n> > world is good.\n> > \n> > One observation I've made on the DB system is the disk I/O seems \n> > dreadfully slow...we're at around 75% I/O wait sometimes \n> and the read \n> > rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for \n> > un-cached reads). I've also observed that the OS cache seems to be \n> > using all of the remaining memory for it's cache (around 3GB) which \n> > seems probably the best it can do with the available memory.\n> > \n> > Now, clearly we need to examine the need for the \n> application to write \n> > and remove so much data but my main question is:\n> > \n> > Why does the size of the database with so much \"un-used\" \n> space seem to \n> > impact performance so much? If (in this case) the extra \n> 5GB of space \n> > is essentially \"unallocated\", does it factor into any of \n> the caching \n> > or performance metrics that the DBMS uses? And if so, would I be \n> > better having a higher shared_buffers rather than relying \n> so much on \n> > OS cache?\n> > \n> > Yes, I know we need to upgrade to 8.3 but that's going to take some \n> > time\n> > :)\n> > \n> > Many thanks in advance.\n> > \n> > Dave\n> > \n> > ___\n> > Dave North\n> > [email protected]\n> > Signiant - Making Media Move\n> > Visit Signiant at: www.signiant.com\n> > <http://www.signiant.com/>\n> > \n> > \n> > --\n> > Sent via pgsql-performance mailing list\n> > ([email protected])\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> __________________________________________________________\n> Not happy with your email address?.\n> Get the one you really want - millions of new email addresses \n> available now at Yahoo! 
http://uk.docs.yahoo.com/ymail/new.html\n> \n", "msg_date": "Wed, 30 Jul 2008 10:02:35 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "\"Dave North\" <[email protected]> writes:\n> From: Richard Huxton [mailto:[email protected]] \n>> Well, that's pretty much the definition of bloat. Are you sure you're\n>> vacuuming enough?\n\n> DN: Well, the auto-vac is kicking off pretty darn frequently...around\n> once every 2 minutes. However, you just made me think of the obvious -\n> is it actually doing anything?! The app is pretty darn write intensive\n> so I wonder if it's actually able to vacuum the tables?\n\nIIRC, the default autovac parameters in 8.1 were pretty darn\nunaggressive. You should also check for long-running transactions\nthat might be preventing vacuum from removing recently-dead rows.\n\nOne of the reasons for updating off 8.1 is that finding out what autovac\nis really doing is hard :-(. I think it does log, but at level DEBUG2\nor so, which means that the only way to find out is to accept a huge\namount of useless chatter in the postmaster log. Newer releases have\na saner logging scheme.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jul 2008 11:05:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation " }, { "msg_contents": "Dave North wrote:\n> Thank you for the suggestion..much appreciated. Alas, I don't think\n> this will be possible without a change to the application but it's a\n> good idea nonetheless.\n\nI assume you mean the \"create table as select ...\" suggestion (don't forget to include a little quoted material so we'll know what you are replying to :-)\n\nYou don't have to change the application. One of the great advantages of Postgres is that even table creation, dropping and renaming are transactional. So you can do the select / drop / rename as a transaction by an external app, and your main application will be none the wiser. In pseudo-SQL:\n\n begin\n create table new_table as (select * from old_table);\n create index ... on new_table ... (as needed)\n drop table old_table\n alter table new_table rename to old_table\n commit\n\nYou should be able to just execute this by hand on a running system, and see if some of your bloat goes away.\n\nCraig\n", "msg_date": "Wed, 30 Jul 2008 08:32:24 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "\nOn Wed, 2008-07-30 at 10:02 -0500, Dave North wrote:\n> Thank you for the suggestion..much appreciated. Alas, I don't think\n> this will be possible without a change to the application but it's a\n> good idea nonetheless.\n\nAffirmative, Dave. I read you.\n\nIf I were in your situation (not having access/desire to change the base\napplication), I'd write a sql script that does something like this:\n\n- Create __new_table__ from old_table # Read lock on old table\n- Rename old_table to __old_table__ # Access Exclusive Lock\n- Rename __new_table__ to old_table # Access Exclusive Lock\n- Commit # Now the application can write to the new table\n- Sync newly written changes to the new table (these would be written\nbetween the creation and access exclusive lock).\n- Drop/Vacuum full/Archive old_table\n\nWell, it would at least let you get the benefits of the rename approach\nwithout actually altering the application. 
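A rough SQL rendering of those steps (a sketch only; the sync step assumes the table has a unique id column to match rows on):

BEGIN;
CREATE TABLE __new_table__ AS SELECT * FROM old_table;
-- recreate any indexes on __new_table__ here
ALTER TABLE old_table RENAME TO __old_table__;
ALTER TABLE __new_table__ RENAME TO old_table;
COMMIT;
-- rows written after the CREATE TABLE AS snapshot but before the rename
-- only exist in __old_table__, so copy them across:
INSERT INTO old_table
SELECT o.* FROM __old_table__ o
WHERE NOT EXISTS (SELECT 1 FROM old_table n WHERE n.id = o.id);
DROP TABLE __old_table__;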
Additionally, the\napplication's writes would only be blocked for the duration of the\nrename itself.\n\nThis makes the assumption that these writes aren't strictly necessary\nimmediately (such as via a find or insert construct). If this\nassumption is false, you would need to lock the table and block the\napplication from writing while you create the temporary table. This has\nthe advantage of not requiring the final sync step.\n\nSorry if all of this seems redundant, but best of luck!\n\n-Mark\n\n", "msg_date": "Wed, 30 Jul 2008 08:46:16 -0700", "msg_from": "Mark Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "On Wed, 30 Jul 2008, Craig James wrote:\n> You don't have to change the application. One of the great advantages of \n> Postgres is that even table creation, dropping and renaming are \n> transactional. So you can do the select / drop / rename as a transaction by \n> an external app, and your main application will be none the wiser. In \n> pseudo-SQL:\n>\n> begin\n> create table new_table as (select * from old_table);\n> create index ... on new_table ... (as needed)\n> drop table old_table\n> alter table new_table rename to old_table\n> commit\n\nI believe this SQL snippet could cause data loss, because there is a \nperiod during which writes can be made to the old table that will not be \ncopied to the new table.\n\nOn a side note, I would be interested to know what happens with locks when \nrenaming tables. For example, if we were to alter the above SQL, and add a \n\"LOCK TABLE old_table IN ACCESS EXCLUSIVE\" line, would this fix the \nproblem? What I mean is, if the application tries to run \"INSERT INTO \nold_table ...\", and blocks on the lock, when the old_table is dropped, \nwill it resume trying to insert into the dropped table and fail, or will \nit redirect its attentions to the new table that has been renamed into \nplace?\n\nAlso, if a lock is taken on a table, and the table is renamed, does the \nlock follow the table, or does it stay attached to the table name?\n\nAnyway, surely it's much safer to just run VACUUM manually?\n\nMatthew\n\n-- \nChange is inevitable, except from vending machines.\n", "msg_date": "Wed, 30 Jul 2008 17:16:11 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "On Wed, 30 Jul 2008, Dave North wrote:\n\n> One observation I've made on the DB system is the disk I/O seems\n> dreadfully slow...we're at around 75% I/O wait sometimes and the read\n> rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for\n> un-cached reads).\n\nThis is typically what happens when you are not buffering enough of the \nright information in RAM, such that there are lots of small reads and \nwrites to the disk involve lots of seeking. You'll only get a couple of \nMB/s out of a disk if it has to move all over the place to retreive the \nblocks you asked for.\n\nSetting shared_buffers too low makes this more likely to happen, because \nPostgreSQL has to constantly read and write out random blocks to make \nspace to read new ones in its limited work area. 
The OS buffers some of \nthat, but not as well as if the database server has a bit more RAM for \nitself because then the blocks it most uses won't leave that area.\n\n> And if so, would I be better having a higher shared_buffers rather than \n> relying so much on OS cache?\n\nThe main situation where making shared_buffers too high is a problem on \n8.1 involves checkpoints writing out too much information at once. You \ndidn't mention changing checkpoint_segments on your system; if it's at its \ndefault of 3, your system is likely continuously doing tiny checkpoints, \nwhich might be another reason why you're seeing so much scattered seek \nbehavior above. Something >30 would be more appropriate for \ncheckpoint_segments on your server.\n\nI'd suggest re-tuning as follows:\n\n1) Increase shared_buffers to 10,000, test. Things should be a bit \nfaster.\n\n2) Increase checkpoint_segments to 30, test. What you want to watch for \nhere whether there are periods where the server seems to freeze for a \ncouple of seconds. That's a \"checkpoint spike\". If this happens, reduce \ncheckpoint_segments to some sort of middle ground; some people never get \nabove 10 before it's a problem.\n\n3) Increase shared_buffers in larger chunks, as long as you don't see any \nproblematic spikes you might usefully keep going until it's set to at \nleast 100,000 before improvements level off.\n\n> I spent several hours reading info on this list and other places and \n> it's highly inconclusive about having high or low shared buffs Vs \n> letting the OS disk cache handle it.\n\nA lot of the material floating around the 'net was written circa \nPostgreSQL 8.0 or earlier, and you need to ignore any advice in this area \nfrom those articles. I think if you rescan everything with that filter in \nplace you'll find its not so inconclusive that increasing shared_buffers \nis a win, so long as it doesn't trigger checkpoint spikes (on your \nplatform at least, there are still Windows issues). Check out my \"Inside \nthe PostgreSQL Buffer Cache\" presentation at \nhttp://www.westnet.com/~gsmith/content/postgresql for an excess of detail \non this topic. Unfortunately the usage_count recommendations given there \nare impractical for use on 8.1 because pg_buffercache doesn't include that \ninfo, but the general \"shared_buffers vs. OS cache\" theory and suggestions \napply.\n\nThe other parameter I hope you're setting correctly for your system is \neffective_cache_size, which should be at least 2GB for your server (exact \nsizing depends on how much RAM is leftover after the Tomcat app is \nrunning).\n\nAll this is something to consider in parallel with the vacuum \ninvestigation you're doing. 
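As a concrete starting point, those suggestions translate to something like the following postgresql.conf entries on 8.1, where the memory settings are counted in 8kB pages (values to be refined through the testing steps above, and the cache figure depends on what Tomcat leaves free):

shared_buffers = 10000          # step 1: roughly 80MB to begin with
checkpoint_segments = 30        # step 2: back this off if checkpoint spikes appear
effective_cache_size = 262144   # roughly 2GB expressed in 8kB pages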
It looks like your autovacuum isn't anywhere \nclose to aggressive enough for your workload, which is not unusual at all \nfor 8.1, and that may actually be the majority of your problem.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 30 Jul 2008 12:48:14 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "\nOn Wed, 2008-07-30 at 17:16 +0100, Matthew Wakeling wrote:\n> \n> I believe this SQL snippet could cause data loss, because there is a \n> period during which writes can be made to the old table that will not\n> be \n> copied to the new table.\n\nIt could indeed cause data loss.\n\n\n> On a side note, I would be interested to know what happens with locks\n> when \n> renaming tables. For example, if we were to alter the above SQL, and\n> add a \n> \"LOCK TABLE old_table IN ACCESS EXCLUSIVE\" line, would this fix the \n> problem? What I mean is, if the application tries to run \"INSERT INTO \n> old_table ...\", and blocks on the lock, when the old_table is\n> dropped, \n> will it resume trying to insert into the dropped table and fail, or\n> will \n> it redirect its attentions to the new table that has been renamed\n> into \n> place?\n\nYes, that would resolve the issue. It would also block the\napplication's writes for however long the process takes (this could be\nunacceptable).\n\n> Also, if a lock is taken on a table, and the table is renamed, does\n> the \n> lock follow the table, or does it stay attached to the table name?\n\nThe lock will follow the table itself (rather than the table name).\n\n> Anyway, surely it's much safer to just run VACUUM manually?\n\nGenerally, you would think so. The problem comes from Vacuum blocking\nthe application process' writes.\n\n-Mark\n\n", "msg_date": "Wed, 30 Jul 2008 09:49:38 -0700", "msg_from": "Mark Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Dave North wrote:\n> -----Original Message-----\n> From: Richard Huxton [mailto:[email protected]] \n\n> \n> Well, that's pretty much the definition of bloat. Are you sure you're\n> vacuuming enough?\n> \n> DN: Well, the auto-vac is kicking off pretty darn frequently...around\n> once every 2 minutes. However, you just made me think of the obvious -\n> is it actually doing anything?! 
The app is pretty darn write intensive\n> so I wonder if it's actually able to vacuum the tables?\n\nIf you've got a big batch delete, it can't hurt to manually vacuum that \ntable immediately afterwards.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Wed, 30 Jul 2008 18:18:25 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Greg Smith [mailto:[email protected]] \n> Sent: July 30, 2008 12:48 PM\n> To: Dave North\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Database size Vs performance degradation\n> \n> On Wed, 30 Jul 2008, Dave North wrote:\n> \n> > One observation I've made on the DB system is the disk I/O seems \n> > dreadfully slow...we're at around 75% I/O wait sometimes \n> and the read \n> > rates seem quite slow (hdparm says around 2.2MB/sec - 20MB/sec for \n> > un-cached reads).\n> \n> This is typically what happens when you are not buffering \n> enough of the right information in RAM, such that there are \n> lots of small reads and writes to the disk involve lots of \n> seeking. You'll only get a couple of MB/s out of a disk if \n> it has to move all over the place to retreive the blocks you \n> asked for.\n\nI could totally see that except on another identical server, the MAX\nrate I was able to get under no load was 20MB/sec which just seems\nawfully low for 10K rpm disks to me (but granted, I'm not a performance\nanalysis expert by any stretch)\n\n> \n> Setting shared_buffers too low makes this more likely to \n> happen, because PostgreSQL has to constantly read and write \n> out random blocks to make space to read new ones in its \n> limited work area. The OS buffers some of that, but not as \n> well as if the database server has a bit more RAM for itself \n> because then the blocks it most uses won't leave that area.\n\nOK, this makes sense that a \"specialist\" cache will provide more\nbenefits that a \"general\" cache. Got it.\n\n> \n> > And if so, would I be better having a higher shared_buffers rather \n> > than relying so much on OS cache?\n> \n> The main situation where making shared_buffers too high is a \n> problem on\n> 8.1 involves checkpoints writing out too much information at \n> once. You didn't mention changing checkpoint_segments on \n> your system; if it's at its default of 3, your system is \n> likely continuously doing tiny checkpoints, which might be \n> another reason why you're seeing so much scattered seek \n> behavior above. Something >30 would be more appropriate for \n> checkpoint_segments on your server.\n\nIt appears ours is currently set to 12 but this is something I'll have a\nplay with as well.\n\n> \n> I'd suggest re-tuning as follows:\n> \n> 1) Increase shared_buffers to 10,000, test. Things should be \n> a bit faster.\n> \n> 2) Increase checkpoint_segments to 30, test. What you want \n> to watch for here whether there are periods where the server \n> seems to freeze for a couple of seconds. That's a \n> \"checkpoint spike\". If this happens, reduce \n> checkpoint_segments to some sort of middle ground; some \n> people never get above 10 before it's a problem.\n> \n> 3) Increase shared_buffers in larger chunks, as long as you \n> don't see any problematic spikes you might usefully keep \n> going until it's set to at least 100,000 before improvements \n> level off.\n\nDo you happen to know if these are \"reload\" or \"restart\" tunable\nparameters? 
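A sketch of one way to check from psql, using the pg_settings view -- if I'm reading it right, a context of 'postmaster' means a full restart is needed and 'sighup' means a reload is enough:\n\n   SELECT name, context\n     FROM pg_settings\n    WHERE name IN ('shared_buffers', 'checkpoint_segments', 'effective_cache_size');\n\n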
I think I've read somewhere before that they are restart\nparameters (assuming I've set SHMMAX high enough of course)\n\n> \n> > I spent several hours reading info on this list and other \n> places and \n> > it's highly inconclusive about having high or low shared buffs Vs \n> > letting the OS disk cache handle it.\n> \n<SNIP good reading info\n> \n> The other parameter I hope you're setting correctly for your \n> system is effective_cache_size, which should be at least 2GB \n> for your server (exact sizing depends on how much RAM is \n> leftover after the Tomcat app is running).\n\nNow, this is interesting. I'm seeing just from top and vmstat, that the\nOS cache is around 2-3GB pretty consistently with everything running\nunder full load. So it seems I should be able to pretty safely set this\nto 2GB as you suggest.\n\n> \n> All this is something to consider in parallel with the vacuum \n> investigation you're doing. It looks like your autovacuum \n> isn't anywhere close to aggressive enough for your workload, \n> which is not unusual at all for 8.1, and that may actually be \n> the majority if your problem.\n\nYeah, I've pretty well convinced myself that it is. We have 2 tables\nthat see this add/remove pattern where we add something like 100,000\nrows per table and then delete around 75,000 per night...what I've just\ndone is added enties into pg_autovacuum to change the vac_scale_factor\ndown to 0.2 for both of these tables. By my calcs this will lower the\nvac threshold for these tables from 221,000 to 111,000 tuples each.\nEven this may be too high with max_fsm_pages at 400,000 but I can go\nlower if needed. As per other threads, my only real metric to measure\nthis is the overall database size since the better AV messaging is an\n8.3 enhancement.\n\nI'm starting with just changing the autovac parameters and see how that\naffects things. I'm reluctant to change multiple parameters in one shot\n(although they all make sense logically) just in case one goes awry ;)\n\nI have to say, I've learnt a whole load from you folks here this\nmorning...very enlightening. I'm now moving on to your site Greg! :)\n\nCheers\n\nDave\n\n> \n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n> \n", "msg_date": "Wed, 30 Jul 2008 12:39:09 -0500", "msg_from": "\"Dave North\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Dave North a �crit :\n> [...]\n>> I'd suggest re-tuning as follows:\n>>\n>> 1) Increase shared_buffers to 10,000, test. Things should be \n>> a bit faster.\n>>\n>> 2) Increase checkpoint_segments to 30, test. What you want \n>> to watch for here whether there are periods where the server \n>> seems to freeze for a couple of seconds. That's a \n>> \"checkpoint spike\". If this happens, reduce \n>> checkpoint_segments to some sort of middle ground; some \n>> people never get above 10 before it's a problem.\n>>\n>> 3) Increase shared_buffers in larger chunks, as long as you \n>> don't see any problematic spikes you might usefully keep \n>> going until it's set to at least 100,000 before improvements \n>> level off.\n> \n> Do you happen to know if these are \"reload\" or \"restart\" tunable\n> parameters? 
I think I've read somewhere before that they are restart\n> parameters (assuming I've set SHMMAX high enough of course)\n> \n\nshared_buffers and checkpoint_segments both need a restart.\n> [...]\n> I have to say, I've learnt a whole load from you folks here this\n> morning...very enlightening. I'm now moving on to your site Greg! :)\n> \n\nThere's much to learn from Greg's site. I was kinda impressed by all the \ngood articles in it.\n\n\n-- \nGuillaume.\n http://www.postgresqlfr.org\n http://dalibo.com\n", "msg_date": "Wed, 30 Jul 2008 19:51:47 +0200", "msg_from": "Guillaume Lelarge <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Mark Roberts <[email protected]> writes:\n> On Wed, 2008-07-30 at 17:16 +0100, Matthew Wakeling wrote:\n>> Anyway, surely it's much safer to just run VACUUM manually?\n\n> Generally, you would think so. The problem comes from Vacuum blocking\n> the application process' writes.\n\nHuh? Vacuum doesn't block writes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jul 2008 13:51:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation " }, { "msg_contents": "\nOn Wed, 2008-07-30 at 13:51 -0400, Tom Lane wrote:\n> \n> \n> Huh? Vacuum doesn't block writes.\n> \n> regards, tom lane\n> \n\nOf course, you are correct. I was thinking of Vacuum full, which is\nrecommended for use when you're deleting the majority of rows in a\ntable.\n\nhttp://www.postgresql.org/docs/8.1/interactive/sql-vacuum.html\n\n-Mark\n\n", "msg_date": "Wed, 30 Jul 2008 11:04:21 -0700", "msg_from": "Mark Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Valentin Bogdanov <[email protected]> wrote:\n> I am guessing that you are using DELETE to remove the 75,000\n> unimportant. Change your batch job to CREATE a new table consisting\n> only of the 5,000 important. You can use \"CREATE TABLE table_name AS\n> select_statement\" command. Then drop the old table. After that you can\n> use ALTER TABLE to change the name of the new table to that of the old\n> one.\n\nI have a similar, but different situation, where I TRUNCATE a table with\n60k rows every hour, and refill it with new rows. Would it be better\n(concerning bloat) to just DROP the table every hour, and recreate it,\nthen to TRUNCATE it? Or does TRUNCATE take care of the boat as good as a\nDROP and CREATE?\n\nI am running 8.3.3 in a 48 MB RAM Xen, so performance matters much.\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Wed, 30 Jul 2008 23:58:34 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "On Wed, 2008-07-30 at 23:58 +0200, Miernik wrote:\n\n> I have a similar, but different situation, where I TRUNCATE a table\n> with\n> 60k rows every hour, and refill it with new rows. Would it be better\n> (concerning bloat) to just DROP the table every hour, and recreate it,\n> then to TRUNCATE it? 
Or does TRUNCATE take care of the boat as good as\n> a\n> DROP and CREATE?\n> \n> I am running 8.3.3 in a 48 MB RAM Xen, so performance matters much.\n\nI've successfully used truncate for this purpose (server 8.2.6):\n\n-----------------------------\n\npsql=> select pg_relation_size(oid) from pg_class where relname =\n'asdf';\n pg_relation_size \n------------------\n 32768\n(1 row)\n\nTime: 0.597 ms\npsql=> truncate asdf;\nTRUNCATE TABLE\nTime: 1.069 ms\npsql=> select pg_relation_size(oid) from pg_class where relname =\n'asdf';\n pg_relation_size \n------------------\n 0\n(1 row)\n\n-Mark\n\n", "msg_date": "Wed, 30 Jul 2008 15:14:19 -0700", "msg_from": "Mark Roberts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Mark Roberts wrote:\n> On Wed, 2008-07-30 at 13:51 -0400, Tom Lane wrote:\n> \n>> Huh? Vacuum doesn't block writes.\n>>\n>> regards, tom lane\n>>\n>> \n>\n> Of course, you are correct. I was thinking of Vacuum full, which is\n> recommended for use when you're deleting the majority of rows in a\n> table.\n>\n> http://www.postgresql.org/docs/8.1/interactive/sql-vacuum.html\n> \nMaybe I'm wrong but if this \"bulk insert and delete\" process is cyclical \nthen You don't need vacuum full.\nReleased tuples will fill up again with fresh data next day - after \nregular vacuum.\n\nI have such situation at work. Size of database on disk is 60GB and is \nstable.\n\n-- \nAndrzej Zawadzki\n", "msg_date": "Thu, 31 Jul 2008 19:49:28 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "On Thu, 31 Jul 2008, Andrzej Zawadzki wrote:\n> Maybe I'm wrong but if this \"bulk insert and delete\" process is cyclical then \n> You don't need vacuum full.\n> Released tuples will fill up again with fresh data next day - after regular \n> vacuum.\n\nYes, a regular manual vacuum will prevent the table from growing more than \nit needs to. However, a vacuum full is required to actually reduce the \nsize of the table from 7.5G to 2.7G if that hasn't been done on the \nproduction system already.\n\nMatthew\n\n-- \nIt's one of those irregular verbs - \"I have an independent mind,\" \"You are\nan eccentric,\" \"He is round the twist.\"\n -- Bernard Woolly, Yes Prime Minister\n", "msg_date": "Fri, 1 Aug 2008 12:03:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "2008/8/1 Matthew Wakeling <[email protected]>:\n> On Thu, 31 Jul 2008, Andrzej Zawadzki wrote:\n>>\n>> Maybe I'm wrong but if this \"bulk insert and delete\" process is cyclical\n>> then You don't need vacuum full.\n>> Released tuples will fill up again with fresh data next day - after\n>> regular vacuum.\n>\n> Yes, a regular manual vacuum will prevent the table from growing more than\n> it needs to. However, a vacuum full is required to actually reduce the size\n> of the table from 7.5G to 2.7G if that hasn't been done on the production\n> system already.\n\n One good possibility is use pg8.3 for fix problem. Enable\nAutovacuum+HOT was won a significant performance compared with 8.2 and\nminor versions. 
:)\n\n\n\nKind Regards,\n-- \nFernando Ike\nhttp://www.midstorm.org/~fike/weblog\n", "msg_date": "Sun, 3 Aug 2008 23:51:40 +0000", "msg_from": "\"Fernando Ike\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Database size Vs performance degradation" }, { "msg_contents": "Hello to all,\n\n\nI have a Linux 2.6.24.2-xxxx-std-ipv4-64 #3 SMP Tue Feb 12 12:27:47 CET \n2008 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux\nwith 8Gb of memory. Using sata II disk in RAID 1 (I known that is bad, \nbut it would change has quickly I can).\n\nI have a database of 38Go and take 6Go per week.\n\nI have a lot of update and insert, especially in 8 tables. 2 tables are \nusing for temporary storage, so I right something like 15000 request per \n2 minutes and empty it into 10 min.\nI'm making some update or select on tables including more than 20 \nmillions of entrance.\n\nI have the following postgresql.conf settings :\n\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the postmaster.\n#\n# Any option can also be given as a command line switch to the\n# postmaster, e.g. 'postmaster -c log_connections=on'. Some options\n# can be changed at run-time with the 'SET' SQL command.\n#\n# This file is read on postmaster startup and when the postmaster\n# receives a SIGHUP. If you edit the file on a running system, you have\n# to SIGHUP the postmaster for the changes to take effect, or use\n# \"pg_ctl reload\". Some settings, such as listen_addresses, require\n# a postmaster shutdown and restart to take effect.\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir' # use data in another directory\n#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file\n#ident_file = 'ConfigDir/pg_ident.conf' # IDENT configuration file\n\n# If external_pid_file is not explicitly set, no extra pid file is written.\n#external_pid_file = '(none)' # write an extra pid file\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\nlisten_addresses = 'xxx.xxx.xxx.xxx' # what IP address(es) \nto listen on;\n # comma-separated list of addresses;\n # defaults to 'localhost', '*' = all\nport = 5432\nmax_connections = 624\n# note: increasing max_connections costs ~400 bytes of shared memory per\n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 2\n#unix_socket_directory = ''\n#unix_socket_group = ''\n#unix_socket_permissions = 0777 # octal\n#bonjour_name = '' # defaults to the computer name\n\n# - Security & Authentication -\n\n#authentication_timeout = 60 # 1-600, in seconds\n#ssl = off\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\n#krb_srvname = 'postgres'\n#krb_server_hostname = '' # empty string matches any \nkeytab entry\n#krb_caseins_users = off\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\ntcp_keepalives_idle = 300 # TCP_KEEPIDLE, in seconds;\n # 0 selects the system default\n#tcp_keepalives_interval = 0 # TCP_KEEPINTVL, in seconds;\n # 0 selects the system default\n#tcp_keepalives_count = 0 # TCP_KEEPCNT;\n # 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 250000 # min 16 or max_connections*2, \n8KB each\ntemp_buffers = 500 # min 100, 8KB each\nmax_prepared_transactions = 200 # can be 0 or more\n# note: increasing max_prepared_transactions costs ~600 bytes of shared \nmemory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 9000 # min 64, size in KB\nmaintenance_work_mem = 5000 # min 1024, size in KB\nmax_stack_depth = 8192 # min 100, size in KB\n\n# - Free Space Map -\n\nmax_fsm_pages = 100000 # min max_fsm_relations*16, 6 \nbytes each\nmax_fsm_relations = 5000 # min 100, ~70 bytes each\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000 # min 25\n#preload_libraries = ''\n\n# - Cost-Based Vacuum Delay -\n\nvacuum_cost_delay = 5 # 0-1000 milliseconds\nvacuum_cost_page_hit = 10 # 0-10000 credits\nvacuum_cost_page_miss = 100 # 0-10000 credits\nvacuum_cost_page_dirty = 20 # 0-10000 credits\nvacuum_cost_limit = 500 # 0-10000 credits\n\n# - Background writer -\n\nbgwriter_delay = 50 # 10-10000 milliseconds between \nrounds\nbgwriter_lru_percent = 1.0 # 0-100% of LRU buffers \nscanned/round\nbgwriter_lru_maxpages = 25 # 0-1000 buffers max written/round\nbgwriter_all_percent = 0.333 # 0-100% of all buffers \nscanned/round\nbgwriter_all_maxpages = 50 # 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on # turns forced synchronization \non or off\n#wal_sync_method = fsync # the default is the first option\n # supported by the operating system:\n # open_datasync\n # fdatasync\n # fsync\n # fsync_writethrough\n # open_sync\n#full_page_writes = on # recover from partial page writes\nwal_buffers = 16 # min 4, 8KB each\ncommit_delay = 500 # range 0-100000, in microseconds\ncommit_siblings = 50 # range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 50 # in logfile segments, min 1, \n16MB each\ncheckpoint_timeout = 1800 # range 30-3600, in seconds\ncheckpoint_warning = 180 # in seconds, 0 is off\n\n# - Archiving -\n\n#archive_command = '' # command to use to archive a \nlogfile\n # segment\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = 
on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\neffective_cache_size = 625000 # typically 8KB each\nrandom_page_cost = 3 # units are one sequential page \nfetch\n # cost\n#cpu_tuple_cost = 0.01 # (same)\n#cpu_index_tuple_cost = 0.001 # (same)\n#cpu_operator_cost = 0.0025 # (same)\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5 # range 1-10\n#geqo_pool_size = 0 # selects default based on effort\n#geqo_generations = 0 # selects default based on effort\n#geqo_selection_bias = 2.0 # range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10 # range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8 # 1 disables collapsing of explicit\n # JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr' # Valid values are combinations of\n # stderr, syslog and eventlog,\n # depending on platform.\n\n# This is used when logging to stderr:\n#redirect_stderr = off # Enable capturing of stderr \ninto log\n # files\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log' # Directory where log files are \nwritten\n # Can be absolute or relative to \nPGDATA\n#log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\n # Can include strftime() escapes\n#log_truncate_on_rotation = off # If on, any existing log file of the same\n # name as the new log file will be\n # truncated rather than appended \nto. But\n # such truncation only occurs on\n # time-driven rotation, not on \nrestarts\n # or size-driven rotation. \nDefault is\n # off, meaning append to \nexisting files\n # in all cases.\n#log_rotation_age = 1440 # Automatic rotation of logfiles \nwill\n # happen after so many minutes. \n0 to\n # disable.\n#log_rotation_size = 10240 # Automatic rotation of logfiles \nwill\n # happen after so many kilobytes \nof log\n # output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice # Values, in order of decreasing \ndetail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # log\n # notice\n # warning\n # error\n\n#log_min_messages = notice # Values, in order of decreasing \ndetail:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # log\n # fatal\n # panic\n\n#log_error_verbosity = default # terse, default, or verbose \nmessages\n\n#log_min_error_statement = panic # Values in order of increasing \nseverity:\n # debug5\n # debug4\n # debug3\n # debug2\n # debug1\n # info\n # notice\n # warning\n # error\n # panic(off)\n\n#log_min_duration_statement = -1 # -1 is disabled, 0 logs all \nstatements\n # and their durations, in \nmilliseconds.\n\n#silent_mode = off # DO NOT USE without syslog or\n # redirect_stderr\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_line_prefix = '' # Special values:\n # %u = user name\n # %d = database name\n # %r = remote host and port\n # %h = remote host\n # %p = PID\n # %t = timestamp (no milliseconds)\n # %m = timestamp with milliseconds\n # %i = command tag\n # %c = session id\n # %l = session line number\n # %s = session start timestamp\n # %x = transaction id\n # %q = stop here in non-session\n # processes\n # %% = '%'\n # e.g. '<%u%%%d> '\n#log_statement = 'none' # none, mod, ddl, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n# - Query/Index Statistics Collector -\n\nstats_start_collector = on\n#stats_command_string = off\n#stats_block_level = off\nstats_row_level = on\n#stats_reset_on_server_start = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\nautovacuum = on # enable autovacuum subprocess?\nautovacuum_naptime = 180 # time between autovacuum runs, \nin secs\nautovacuum_vacuum_threshold = 100000 # min # of tuple updates before\n # vacuum\nautovacuum_analyze_threshold = 9000 # min # of tuple updates before\n # analyze\nautovacuum_vacuum_scale_factor = 0.4 # fraction of rel size before\n # vacuum\nautovacuum_analyze_scale_factor = 0.2 # fraction of rel size before\n # analyze\nautovacuum_vacuum_cost_delay = -1 # default vacuum cost delay for\n # autovac, -1 means use\n # vacuum_cost_delay\nautovacuum_vacuum_cost_limit = -1 # default vacuum cost limit for\n # autovac, -1 means use\n # vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public' # schema names\n#default_tablespace = '' # a tablespace name, '' uses\n # the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0 # 0 is disabled, in milliseconds\n\n# - 
Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown # actually, defaults to TZ\n # environment setting\n#australian_timezones = off\n#extra_float_digits = 0 # min -15, max 2\n#client_encoding = sql_ascii # actually, defaults to database\n # encoding\n\n# These settings are initialized by initdb -- they might be changed\n#lc_messages = 'fr_FR@euro' # locale for system \nerror message\n # strings\n#lc_monetary = 'fr_FR@euro' # locale for monetary \nformatting\n#lc_numeric = 'fr_FR@euro' # locale for number \nformatting\n#lc_time = 'fr_FR@euro' # locale for time formatting\n\nlc_messages = 'fr_FR.UTF-8' # locale for system \nerror message\n # strings\nlc_monetary = 'fr_FR.UTF-8' # locale for monetary \nformatting\nlc_numeric = 'fr_FR.UTF-8' # locale for number \nformatting\nlc_time = 'fr_FR.UTF-8' # locale for time formatting\n\n\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000 # in milliseconds\n#max_locks_per_transaction = 64 # min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#backslash_quote = safe_encoding # on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = off\n#regex_flavor = advanced # advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = '' # list of custom variable class \nnames\n\n\nI'm sure that it could be more optimised. I don't know any thing on WAL, \nautovacuum, fsm, bgwriter, kernel process, geqo or planner cost settings.\n\nI'll thanks you all in advance for your precious help\n\nRegards\n\nDavid\n", "msg_date": "Thu, 07 Aug 2008 00:12:18 +0200", "msg_from": "dforum <[email protected]>", "msg_from_op": false, "msg_subject": "Plz Heeeelp! performance settings" }, { "msg_contents": "On Wed, Aug 6, 2008 at 6:12 PM, dforum <[email protected]> wrote:\n> Hello to all,\n>\n>\n> I have a Linux 2.6.24.2-xxxx-std-ipv4-64 #3 SMP Tue Feb 12 12:27:47 CET 2008\n> x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux\n> with 8Gb of memory. Using sata II disk in RAID 1 (I known that is bad, but\n<snip>\n\nthis is likely your problem...with fsync on (as you have it), you will\nbe lucky to get a couple of hundred transactions/sec out of the\ndatabase. you are probably just exceeding your operational\ncapabilities of the hardware so you probably need to upgrade or turn\noff fsync (which means data corruption in event of hard crash).\n\nmerlin\n", "msg_date": "Wed, 6 Aug 2008 22:16:29 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! 
performance settings" }, { "msg_contents": "Tx for your reply.\n\nYou mean that RAID use fsync method for keeping data's copy.\n\nSo you invite me to desactivate fsync to increase the performance ?\n\nDesactivating fsync. my second disk will not be uptodate, so if the \nmachine crash, I wont be able to get the server working quickly??? But \nif I use a second machine to replicate the database, I escape this \nproblem isn't it ?\n\nIf I understand right, could you tell me how to do desactivate fsync \nplease ?\n\nBest regards\n\nDavid\n\nMerlin Moncure a �crit :\n> On Wed, Aug 6, 2008 at 6:12 PM, dforum <[email protected]> wrote:\n> \n>> Hello to all,\n>>\n>>\n>> I have a Linux 2.6.24.2-xxxx-std-ipv4-64 #3 SMP Tue Feb 12 12:27:47 CET 2008\n>> x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux\n>> with 8Gb of memory. Using sata II disk in RAID 1 (I known that is bad, but\n>> \n> <snip>\n>\n> this is likely your problem...with fsync on (as you have it), you will\n> be lucky to get a couple of hundred transactions/sec out of the\n> database. you are probably just exceeding your operational\n> capabilities of the hardware so you probably need to upgrade or turn\n> off fsync (which means data corruption in event of hard crash).\n>\n> merlin\n>\n> \n\n", "msg_date": "Thu, 07 Aug 2008 09:35:17 +0200", "msg_from": "dforum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" }, { "msg_contents": "dforum wrote:\n> Tx for your reply.\n> \n> You mean that RAID use fsync method for keeping data's copy.\n\nNo, Merlin means PostgreSQL will issue a sync to force WAL to actual disk.\n\n> So you invite me to desactivate fsync to increase the performance ?\n\nHe means you might have to if you can't afford new hardware. Is disk \nactivity the problem? Have you looked at the output of \"vmstat\" to check?\n\n> Desactivating fsync. my second disk will not be uptodate, \n\nNo - the RAID stuff is happening in the operating-system.\n\n > so if the\n> machine crash, I wont be able to get the server working quickly??? \n\nNot \"quickly\", perhaps not \"at all\".\n\n > But\n> if I use a second machine to replicate the database, I escape this \n> problem isn't it ?\n\nYou reduce the chance of a single failure causing disaster.\n\n> If I understand right, could you tell me how to do desactivate fsync \n> please ?\n\nThere's an \"fsync = on\" setting in your postgresql.conf, but don't \nchange it yet.\n\n > I have a database of 38Go and take 6Go per week.\n\nWhat do you mean by \"take 6Go per week\"? You update/delete that much \ndata? It's growing by that amount each week?\n\n > I have a lot of update and insert, especially in 8 tables. 2 tables are\n > using for temporary storage, so I right something like 15000 request per\n > 2 minutes and empty it into 10 min.\n\nI'm not sure what \"15000 request per 2 minutes and empty it into 10 min\" \nmeans.\n\nDo you have 7500 requests per minute?\nAre these updates?\nTo the \"temporary storage\"?\nWhat is this \"temporary storage\" - an ordinary table?\n\n > I'm making some update or select on tables including more than 20\n > millions of entrance.\n\nAgain, I'm not sure what this means.\n\n\nOh - *important* - which version of PostgreSQL are you running?\nIs an upgrade practical?\n\n\nLooking at your postgresql.conf settings:\n\n max_connections = 624\n\nThat's an odd number.\nDo you usually have that many connections?\nWhat are they doing? 
They can't all be active, the machine you've got \nwouldn't cope.\n\n shared_buffers = 250000\n work_mem = 9000\n temp_buffers = 500\n\nThese three are important. The shared_buffers are workspace shared \nbetween all backends, and you've allocated about 2GB. You've also set \nwork_mem=9MB, which is how much each backend can use for a single sort. \nThat means it can use double or triple that in a complex query. If \nyou're using temporary tables, then you'll want to make sure the \ntemp_buffers setting is correct.\n\nI can't say whether these figures are good or bad without knowing how \nthe database is being used.\n\n effective_cache_size = 625000\n\nThat's around 5GB - is that roughly the amount of memory used for \ncaching (what does free -m say for buffers/cache)?\n\n max_prepared_transactions = 200\n\nDo you use a lot of prepared transactions in two-phase commit?\nI'm guessing that you don't.\n\n > I'm sure that it could be more optimised. I don't know any thing on\n > WAL,\n > autovacuum, fsm, bgwriter, kernel process, geqo or planner cost\n > settings.\n\nIf you run a \"vacuum verbose\" it will recommend fsm settings at the end \nof its output. I think you probably need to make your autovacuum more \naggressive, but that's something you'll be able to tell by monitoring \nyour database.\n\nIt's quite likely that Merlin's right, and you need better hardware to \ncope with the number of updates you're making - that's something where \nyou need fast disks. However, he's just guessing because you've not told \nus enough to tell where the problem really lies.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Aug 2008 09:33:57 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" }, { "msg_contents": "\n\nRichard Huxton a �crit :\n > dforum wrote:\n >> Tx for your reply.\n >>\n >> You mean that RAID use fsync method for keeping data's copy.\n >\n > No, Merlin means PostgreSQL will issue a sync to force WAL to actual \ndisk.\n >\n >> So you invite me to desactivate fsync to increase the performance ?\n >\n > He means you might have to if you can't afford new hardware. Is disk\n > activity the problem? Have you looked at the output of \"vmstat\" to check?\nvmstat is giving :\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us \nsy id wa\n 0 2 1540 47388 41684 7578976 0 0 131 259 0 1 9 \n 3 82 7\n\n\n >\n >> Desactivating fsync. my second disk will not be uptodate,\n >\n > No - the RAID stuff is happening in the operating-system.\n >\n > > so if the\n >> machine crash, I wont be able to get the server working quickly???\n >\n > Not \"quickly\", perhaps not \"at all\".\nOups\n >\n > > But\n >> if I use a second machine to replicate the database, I escape this\n >> problem isn't it ?\n > You reduce the chance of a single failure causing disaster.\nNot clear this reply. It's scare me ....\n >\n >> If I understand right, could you tell me how to do desactivate fsync\n >> please ?\n >\n > There's an \"fsync = on\" setting in your postgresql.conf, but don't\n > change it yet.\nOK\n >\n > > I have a database of 38Go and take 6Go per week.\n >\n > What do you mean by \"take 6Go per week\"? You update/delete that much\n > data? It's growing by that amount each week?\nYES\n >\n > > I have a lot of update and insert, especially in 8 tables. 
2 \ntables are\n > > using for temporary storage, so I right something like 15000 \nrequest per\n > > 2 minutes and empty it into 10 min.\n >\n > I'm not sure what \"15000 request per 2 minutes and empty it into 10 min\"\n > means.\nI insert 15000 datas every 2 min and delete 15000 every 10 min in those \ntables\n >\n > Do you have 7500 requests per minute?\nshould be that, But in fact I'm not treating the datas in real time, and \nI buffer the datas and push the data into the database every 2 min\n > Are these updates?\nduring the delete the data are aggregated in other tables which make updates\n > To the \"temporary storage\"?\n\n > What is this \"temporary storage\" - an ordinary table?\nYes, I thied to use temporary tables but I never been able to connect \nthis tables over 2 different session/connection, seems that is a \nfunctionnality of postgresql, or a misunderstanding from me.\n >\n > > I'm making some update or select on tables including more than 20\n > > millions of entrance.\n >\n > Again, I'm not sure what this means.\n\nTo aggregate the data, I have to check the presence of others \ninformation that are stores in 2 tables which includes 24 millions of \nentrance.\n >\n >\n > Oh - *important* - which version of PostgreSQL are you running?\n8.1.11\n > Is an upgrade practical?\nWe are working of trying to upgrade to 8.3.3, but we are not yet ready \nfor such migration\n >\n >\n > Looking at your postgresql.conf settings:\n >\n > max_connections = 624\n >\n > That's an odd number.\nNow we could decrease this number, it's not so much usefull for now. we \ncould decrease is to 350.\n > Do you usually have that many connections?\n > What are they doing? They can't all be active, the machine you've got\n > wouldn't cope.\n >\n > shared_buffers = 250000\n > work_mem = 9000\n > temp_buffers = 500\n >\n > These three are important. The shared_buffers are workspace shared\n > between all backends, and you've allocated about 2GB. You've also set\n > work_mem=9MB, which is how much each backend can use for a single sort.\n > That means it can use double or triple that in a complex query\n\n(i now about it).\n\nIf\n > you're using temporary tables, then you'll want to make sure the\n > temp_buffers setting is correct.\nI need help for that, I don't know\n >\n > I can't say whether these figures are good or bad without knowing how\n > the database is being used.\n >\n > effective_cache_size = 625000\n >\n > That's around 5GB - is that roughly the amount of memory used for\n > caching (what does free -m say for buffers/cache)?\n total used free shared buffers cached\nMem: 7984 7828 156 0 38 7349\n-/+ buffers/cache: 440 7544\nSwap: 509 1 508\n\n\n >\n > max_prepared_transactions = 200\n >\n > Do you use a lot of prepared transactions in two-phase commit?\n > I'm guessing that you don't.\nI don't\n >\n > > I'm sure that it could be more optimised. I don't know any thing on\n > > WAL,\n > > autovacuum, fsm, bgwriter, kernel process, geqo or planner cost\n > > settings.\n >\n > If you run a \"vacuum verbose\" it will recommend fsm settings at the end\n > of its output. I think you probably need to make your autovacuum more\n > aggressive, but that's something you'll be able to tell by monitoring\n > your database.\n >\n > It's quite likely that Merlin's right, and you need better hardware to\n > cope with the number of updates you're making - that's something where\n > you need fast disks. 
However, he's just guessing because you've not told\n > us enough to tell where the problem really lies.\n >\n\nHope that new information will give you more information to help me.\n\nRegards\n\ndavid\n\n-- \n<http://www.1st-affiliation.fr>\n\n*David Bigand\n*Pr�sident Directeur G�n�rale*\n*51 chemin des moulins\n73000 CHAMBERY - FRANCE\n\nWeb : htttp://www.1st-affiliation.fr\nEmail : [email protected]\nTel. : +33 479 696 685\nMob. : +33 666 583 836\nSkype : firstaffiliation_support\n\n", "msg_date": "Thu, 07 Aug 2008 11:12:47 +0200", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" }, { "msg_contents": "dforums wrote:\n> vmstat is giving :\n> procs -----------memory---------- ---swap-- -----io---- --system-- \n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy \n> id wa\n> 0 2 1540 47388 41684 7578976 0 0 131 259 0 1 9 \n> 3 82 7\n\nThis system is practically idle. Either you're not measuring it at a \nuseful time, or there isn't a performance problem.\n\n> > > But\n> >> if I use a second machine to replicate the database, I escape this\n> >> problem isn't it ?\n> > You reduce the chance of a single failure causing disaster.\n> Not clear this reply. It's scare me ....\n\nIf server A fails, you still have server B. If server A fails so that \nreplication stops working and you don't notice, server B won't help any \nmore.\n\n> > What do you mean by \"take 6Go per week\"? You update/delete that much\n> > data? It's growing by that amount each week?\n> YES\n\nThat wasn't a yes/no question. Please choose one of:\nAre you updating 6Go per week?\nAre you adding 6Go per week?\n\n> > I'm not sure what \"15000 request per 2 minutes and empty it into 10 min\"\n> > means.\n> I insert 15000 datas every 2 min and delete 15000 every 10 min in those \n> tables\n> >\n> > Do you have 7500 requests per minute?\n> should be that, But in fact I'm not treating the datas in real time, and \n> I buffer the datas and push the data into the database every 2 min\n> > Are these updates?\n> during the delete the data are aggregated in other tables which make \n> updates\n\nOK, so every 2 minutes you run one big query that adds 15000 rows.\nEvery 10 minutes you run one big query that deletes 15000 rows.\n\n> > To the \"temporary storage\"?\n> \n> > What is this \"temporary storage\" - an ordinary table?\n> Yes, I thied to use temporary tables but I never been able to connect \n> this tables over 2 different session/connection, seems that is a \n> functionnality of postgresql, or a misunderstanding from me.\n\nThat's correct - temporary tables are private to a backend (connection).\n\n> > > I'm making some update or select on tables including more than 20\n> > > millions of entrance.\n> >\n> > Again, I'm not sure what this means.\n> \n> To aggregate the data, I have to check the presence of others \n> information that are stores in 2 tables which includes 24 millions of \n> entrance.\n\nOK. I assume you're happy with the plans you are getting on these \nqueries, since you've not provided any information about them.\n\n> > Oh - *important* - which version of PostgreSQL are you running?\n> 8.1.11\n> > Is an upgrade practical?\n> We are working of trying to upgrade to 8.3.3, but we are not yet ready \n> for such migration\n\nOK\n\n> > Looking at your postgresql.conf settings:\n> >\n> > max_connections = 624\n> >\n> > That's an odd number.\n> Now we could decrease this number, it's not so much usefull for now. 
we \n> could decrease is to 350.\n\nI don't believe you've got 350 active connections either. It will be \neasier to help if you can provide some useful information.\n\n> > effective_cache_size = 625000\n> >\n> > That's around 5GB - is that roughly the amount of memory used for\n> > caching (what does free -m say for buffers/cache)?\n> total used free shared buffers cached\n> Mem: 7984 7828 156 0 38 7349\n> -/+ buffers/cache: 440 7544\n> Swap: 509 1 508\n\nNot far off - free is showing 7349MB cached. You're not running 350 \nclients there though - you're only using 440MB of RAM.\n\n\nI don't see anything to show a performance problem from these emails.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Aug 2008 11:40:24 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" }, { "msg_contents": "The performance problem is really only on the insertion and even more on \nthe treatment for the aggregation.\n\nTo treat the 3000 entrances and to insert, or update the tables it needs \n10 minutes.\n\nAs I told you I inject 14000 query every 2 minutes, and it needs 10 \nminutes to treat 3000 of those query.\n\nAs you can easly understand it's a big narrow section.\n\nI'm not doing the treatment in ones, cause I can't, but all is managed \nby procedure.\n\n > That wasn't a yes/no question. Please choose one of:\n > Are you updating 6Go per week? most of update\n > Are you adding 6Go per week? less of injection,\n\nThis action depend if the data are already present in the database.\n\n\n >\n > OK. I assume you're happy with the plans you are getting on these\n > queries, since you've not provided any information about them.\n\nThe plan seems ok as it use index as well.\nhere is the plan :\n\nexplain analyse SELECT \"insertUpdateTracks\"(137,2605, 852, ('2008-08-06 \n19:28:54'::text)::date,3,'dailydisplay',2,NULL);\nINFO: method 1\n QUERY PLAN\n------------------------------------------------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.151..1.151 \nrows=1 loops=1)\n Total runtime: 1.160 ms\n(2 lignes)\n\n\n\n Has you can see the runtime processs for an update in this table.\n\nmultiplying this per 10000, it is too long.\n\nregards\n\ndavid\n\n\nRichard Huxton a �crit :\n> dforums wrote:\n>> vmstat is giving :\n>> procs -----------memory---------- ---swap-- -----io---- --system-- \n>> ----cpu----\n>> r b swpd free buff cache si so bi bo in cs us \n>> sy id wa\n>> 0 2 1540 47388 41684 7578976 0 0 131 259 0 1 9 \n>> 3 82 7\n> \n> This system is practically idle. Either you're not measuring it at a \n> useful time, or there isn't a performance problem.\n> \n>> > > But\n>> >> if I use a second machine to replicate the database, I escape this\n>> >> problem isn't it ?\n>> > You reduce the chance of a single failure causing disaster.\n>> Not clear this reply. It's scare me ....\n> \n> If server A fails, you still have server B. If server A fails so that \n> replication stops working and you don't notice, server B won't help any \n> more.\n> \n>> > What do you mean by \"take 6Go per week\"? You update/delete that much\n>> > data? It's growing by that amount each week?\n>> YES\n> \n> That wasn't a yes/no question. 
Please choose one of:\n> Are you updating 6Go per week?\n> Are you adding 6Go per week?\n> \n>> > I'm not sure what \"15000 request per 2 minutes and empty it into 10 \n>> min\"\n>> > means.\n>> I insert 15000 datas every 2 min and delete 15000 every 10 min in \n>> those tables\n>> >\n>> > Do you have 7500 requests per minute?\n>> should be that, But in fact I'm not treating the datas in real time, \n>> and I buffer the datas and push the data into the database every 2 min\n>> > Are these updates?\n>> during the delete the data are aggregated in other tables which make \n>> updates\n> \n> OK, so every 2 minutes you run one big query that adds 15000 rows.\n> Every 10 minutes you run one big query that deletes 15000 rows.\n> \n>> > To the \"temporary storage\"?\n>>\n>> > What is this \"temporary storage\" - an ordinary table?\n>> Yes, I thied to use temporary tables but I never been able to connect \n>> this tables over 2 different session/connection, seems that is a \n>> functionnality of postgresql, or a misunderstanding from me.\n> \n> That's correct - temporary tables are private to a backend (connection).\n> \n>> > > I'm making some update or select on tables including more than 20\n>> > > millions of entrance.\n>> >\n>> > Again, I'm not sure what this means.\n>>\n>> To aggregate the data, I have to check the presence of others \n>> information that are stores in 2 tables which includes 24 millions of \n>> entrance.\n> \n> OK. I assume you're happy with the plans you are getting on these \n> queries, since you've not provided any information about them.\n> \n>> > Oh - *important* - which version of PostgreSQL are you running?\n>> 8.1.11\n>> > Is an upgrade practical?\n>> We are working of trying to upgrade to 8.3.3, but we are not yet ready \n>> for such migration\n> \n> OK\n> \n>> > Looking at your postgresql.conf settings:\n>> >\n>> > max_connections = 624\n>> >\n>> > That's an odd number.\n>> Now we could decrease this number, it's not so much usefull for now. \n>> we could decrease is to 350.\n> \n> I don't believe you've got 350 active connections either. It will be \n> easier to help if you can provide some useful information.\n> \n>> > effective_cache_size = 625000\n>> >\n>> > That's around 5GB - is that roughly the amount of memory used for\n>> > caching (what does free -m say for buffers/cache)?\n>> total used free shared buffers cached\n>> Mem: 7984 7828 156 0 38 7349\n>> -/+ buffers/cache: 440 7544\n>> Swap: 509 1 508\n> \n> Not far off - free is showing 7349MB cached. You're not running 350 \n> clients there though - you're only using 440MB of RAM.\n> \n> \n> I don't see anything to show a performance problem from these emails.\n> \n\n-- \n<http://www.1st-affiliation.fr>\n\n*David Bigand\n*Pr�sident Directeur G�n�rale*\n*51 chemin des moulins\n73000 CHAMBERY - FRANCE\n\nWeb : htttp://www.1st-affiliation.fr\nEmail : [email protected]\nTel. : +33 479 696 685\nMob. : +33 666 583 836\nSkype : firstaffiliation_support\n\n", "msg_date": "Thu, 07 Aug 2008 15:30:16 +0200", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! 
performance settings" }, { "msg_contents": "dforums wrote:\n> The performance problem is really only on the insertion and even more on \n> the treatment for the aggregation.\n> \n> To treat the 3000 entrances and to insert, or update the tables it needs \n> 10 minutes.\n> \n> As I told you I inject 14000 query every 2 minutes, and it needs 10 \n> minutes to treat 3000 of those query.\n\nSorry - I still don't understand. What is this \"treatment\" you are doing?\n\n> >\n> > OK. I assume you're happy with the plans you are getting on these\n> > queries, since you've not provided any information about them.\n> \n> The plan seems ok as it use index as well.\n> here is the plan :\n> \n> explain analyse SELECT \"insertUpdateTracks\"(137,2605, 852, ('2008-08-06 \n> 19:28:54'::text)::date,3,'dailydisplay',2,NULL);\n> INFO: method 1\n> QUERY PLAN\n> ------------------------------------------------------------------------------------ \n> \n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.151..1.151 \n> rows=1 loops=1)\n> Total runtime: 1.160 ms\n\nThere's nothing to do with an index here - this is a function call.\n\n> Has you can see the runtime processs for an update in this table.\n> \n> multiplying this per 10000, it is too long.\n\nSo - are you calling this function 14000 times to inject your data? \nYou're doing this in one transaction, yes?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Aug 2008 15:12:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" }, { "msg_contents": "On Thu, Aug 7, 2008 at 9:30 AM, dforums <[email protected]> wrote:\n> The performance problem is really only on the insertion and even more on the\n> treatment for the aggregation.\n>\n> To treat the 3000 entrances and to insert, or update the tables it needs 10\n> minutes.\n>\n> As I told you I inject 14000 query every 2 minutes, and it needs 10 minutes\n> to treat 3000 of those query.\n>\n> As you can easly understand it's a big narrow section.\n>\n> I'm not doing the treatment in ones, cause I can't, but all is managed by\n> procedure.\n>\n>> That wasn't a yes/no question. Please choose one of:\n>> Are you updating 6Go per week? most of update\n>> Are you adding 6Go per week? less of injection,\n>\n> This action depend if the data are already present in the database.\n>\n>\n>>\n>> OK. I assume you're happy with the plans you are getting on these\n>> queries, since you've not provided any information about them.\n>\n> The plan seems ok as it use index as well.\n> here is the plan :\n>\n> explain analyse SELECT \"insertUpdateTracks\"(137,2605, 852, ('2008-08-06\n> 19:28:54'::text)::date,3,'dailydisplay',2,NULL);\n> INFO: method 1\n> QUERY PLAN\n> ------------------------------------------------------------------------------------\n> Result (cost=0.00..0.01 rows=1 width=0) (actual time=1.151..1.151 rows=1\n> loops=1)\n> Total runtime: 1.160 ms\n> (2 lignes)\n>\n> Has you can see the runtime processs for an update in this table.\n>\n> multiplying this per 10000, it is too long.\n>\n\nplease don't top-post (put your reply after previous comments).\n\nWith fsync on, you are lucky to get 10k inserts in 10 minutes on\nsingle sata 1. The basic issue is that after each time function runs\npostgesql tells disk drive to flush, guaranteeing data safety. 
You\nhave few different options here:\n\n*) group multiple inserts into single transaction\n*) modify function to take multiple 'update' records at once.\n*) disable fsync (extremely unsafe as noted)\n*) upgrade to 8.3 and disable synchronized_commit (the 'fsync lite', a\ngood compromise between fsync on/off)\n\nmerlin\n", "msg_date": "Thu, 7 Aug 2008 11:05:30 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" }, { "msg_contents": "dforums wrote:\n > The delete is global, the procedure is called for each line/tracks.\n > > So - are you calling this function 14000 times to inject your data?\n > > You're doing this in one transaction, yes?\n > NO I have to make it 14000 times cause, I use some inserted information\n > for other insert to make links between data.\n\nWhy does that stop you putting all 14000 calls in one transaction?\n\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Aug 2008 16:08:52 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Plz Heeeelp! performance settings" } ]
[ { "msg_contents": "AFAIK, provided bar is UNIQUE in table2 (e.g. is a PRIMARY KEY) the two\nqueries will give the same result:\n\nSELECT foo, id FROM table1 WHERE id IN (SELECT id FROM table2);\n\nSELECT foo, id FROM table1 INNER JOIN table2 USING (id);\n\nGiven table1 has about 100k rows, and table2 about 100 rows, which one\nshould be faster, less resource intensive, use less RAM, disk access, etc?\nAre there any other even better ways to acomlish the same query?\n\nUsing 8.3.3 on a 48 MB RAM Xen.\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Thu, 31 Jul 2008 00:11:58 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "what is less resource-intensive, WHERE id IN or INNER JOIN?" }, { "msg_contents": "WHERE id IN will generally lead to faster query plans. Often, much faster\non large tables.\n\nThe inner join deals with duplicate values for id differently. WHERE id IN\n( subquery ) will be much more likely to choose a hash method for filtering\nthe scan on table1.\n\nI just ran into this today on a query of similar nature on a table with 2M\nrows (table1 below) joining on a table with 100k (table 2). The speedup was\nabout 10x -- 20 seconds to 2 seconds in my case -- but I'm working with many\nGB of RAM and 800MB of work_mem.\n\nI'm not an expert on the query planner guts -- the below is all knowledge\nbased on experimenting with queries on my dataset and triggering various\nquery plans. Due to the differences in semantics of these two queries the\nplan on larger tables will be two sorts and a merge join for the INNER JOIN\non most columns, though indexes and especially unique indexes will change\nthis. This holds true even if the number of distinct values of the column\nbeing joined on is very low.\n\nThe WHERE id IN version will produce a much faster query plan most of the\ntime, assuming you have enough work_mem configured. The planner is\nconservative if it estimates usage of work_mem to overflow even a little bit\n-- and often shifts to sorts rather than hashes on disk.\n\nSince you are working with such a small ammount of RAM, make sure you have\nsome of it doled out to work_mem and tune the balance between work_mem, the\nOS, and the shared buffers carefully. The conservative, sort-if-uncertain\nnature of the query planner may need some coersion with an unusual\nenvironment such as yours. You may even have faster results with a hash\noverflown to disk than a sort overflown to disk with that little memory if\nthe % of overflow is small enough and the OS disk cache large enough. Plus,\nvirtual machines sometimes do some odd things with caching non sync disk\nwrites that may distort the usual random versus sequential disk cost for\nsmall I/O volumes. Though my VM experience is VMWare not Xen.\nThe querry planner won't generally go for hashes on disk on purpose however,\nso you might need to be creative with manual statistics setting or changing\nthe optimizer cost settings to experiment with various query plans and\nmeasure the unique aspects of your atypical environment and your data.\n\nOn Wed, Jul 30, 2008 at 3:11 PM, Miernik <[email protected]> wrote:\n\n> AFAIK, provided bar is UNIQUE in table2 (e.g. 
is a PRIMARY KEY) the two\n> queries will give the same result:\n>\n> SELECT foo, id FROM table1 WHERE id IN (SELECT id FROM table2);\n>\n> SELECT foo, id FROM table1 INNER JOIN table2 USING (id);\n>\n> Given table1 has about 100k rows, and table2 about 100 rows, which one\n> should be faster, less resource intensive, use less RAM, disk access, etc?\n> Are there any other even better ways to acomlish the same query?\n>\n> Using 8.3.3 on a 48 MB RAM Xen.\n>\n> --\n> Miernik\n> http://miernik.name/\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 30 Jul 2008 23:47:40 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what is less resource-intensive, WHERE id IN or INNER JOIN?" } ]
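A quick way to check the point made in the reply above on one's own data is to compare the plans of the two forms directly; the statements below reuse the table and column names from the thread, and the work_mem value is only illustrative.

EXPLAIN ANALYZE
SELECT foo, id FROM table1 WHERE id IN (SELECT id FROM table2);

EXPLAIN ANALYZE
SELECT foo, id FROM table1 INNER JOIN table2 USING (id);

-- Raising work_mem for the session shows whether the planner switches
-- from sorting to a hashed subplan (pick a value the machine can afford).
SET work_mem = '4MB';
EXPLAIN ANALYZE
SELECT foo, id FROM table1 WHERE id IN (SELECT id FROM table2);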
[ { "msg_contents": "Hello friends\n\nThe commands \"\\timing\" and \"EXPLAIN ANALYZE\" return values related to the\ntime of execution of the instruction. These values are \"Time:\" and \"Total\nruntime:\" respectively. What is the difference between these values, and\nthe specific use of each command in queries?\n\nIf someone can help me.\n\nThank's for attention.\n\nTarcizio Bini\n\n\n", "msg_date": "Wed, 30 Jul 2008 21:19:15 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Difference between \"Explain analyze\" and \"\\timing\"" }, { "msg_contents": "[email protected] wrote:\n>\n> Hello friends\n> \n> The commands \"\\timing\" and \"EXPLAIN ANALYZE\" return values related to the\n> time of execution of the instruction. These values are \"Time:\" and \"Total\n> runtime:\" respectively. What is the difference between these values, and\n> the specific use of each command in queries?\n\nThe time reported by explain will be the time it took the server to\nexecute the query.\n\nThe time reported by \\timing is the time it takes the client (psql) to\nget the result. This will be the time the server took, plus any network\ndelays and time required to process the query and result on the client\nside.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information\nand is intended only for the individual named. If the reader of\nthis message is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Wed, 30 Jul 2008 20:59:22 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between \"Explain analyze\" and \"\\timing\"" }, { "msg_contents": "Bill Moran <[email protected]> writes:\n> [email protected] wrote:\n>> The commands \"\\timing\" and \"EXPLAIN ANALYZE\" return values related to the\n>> time of execution of the instruction. These values are \"Time:\" and \"Total\n>> runtime:\" respectively. What is the difference between these values, and\n>> the specific use of each command in queries?\n\n> The time reported by explain will be the time it took the server to\n> execute the query.\n\n> The time reported by \\timing is the time it takes the client (psql) to\n> get the result. This will be the time the server took, plus any network\n> delays and time required to process the query and result on the client\n> side.\n\nAlso realize that explain analyze only reports the time to *execute*\nthe query. Not counted are the time to parse and plan the query\nbeforehand and prepare the explain output afterwards. 
But psql's number\nis the whole round-trip time and so includes all that stuff.\n\nIf any of this seems confusing, you might find it worth reading\nthis overview of the Postgres system structure:\nhttp://www.postgresql.org/docs/8.3/static/overview.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jul 2008 21:06:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between \"Explain analyze\" and \"\\timing\" " } ]
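A minimal psql session makes the distinction described above easy to see; the table name here is hypothetical.

\timing
SELECT count(*) FROM some_table;
-- psql prints "Time: ..." -- the full client round trip, including
-- parse/plan time and any network latency.
EXPLAIN ANALYZE SELECT count(*) FROM some_table;
-- The plan's final "Total runtime: ..." line covers only execution
-- inside the server.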
[ { "msg_contents": "Two queries which do the same thing, first one takes ages to complete\n(did wait several minutes and cancelled it), while the second one took\n9 seconds? Don't they do the same thing?\n\nmiernik=> EXPLAIN SELECT uid FROM locks WHERE uid NOT IN (SELECT uid FROM locks INNER JOIN wys USING (uid, login));\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on locks (cost=38341.39..61365389.71 rows=48446 width=4)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=38341.39..39408.47 rows=79508 width=4)\n -> Hash Join (cost=3997.27..37989.89 rows=79508 width=4)\n Hash Cond: (((wys.uid)::integer = (locks.uid)::integer) AND ((wys.login)::text = (locks.login)::text))\n -> Seq Scan on wys (cost=0.00..13866.51 rows=633451 width=16)\n -> Hash (cost=2069.91..2069.91 rows=96891 width=16)\n -> Seq Scan on locks (cost=0.00..2069.91 rows=96891 width=16)\n(9 rows)\n\nTime: 231,634 ms\nmiernik=> EXPLAIN SELECT uid FROM locks EXCEPT (SELECT uid FROM locks INNER JOIN wys USING (uid, login));\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n SetOp Except (cost=59306.12..60188.11 rows=17640 width=4)\n -> Sort (cost=59306.12..59747.12 rows=176399 width=4)\n Sort Key: \"*SELECT* 1\".uid\n -> Append (cost=0.00..41823.79 rows=176399 width=4)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..3038.82 rows=96891 width=4)\n -> Seq Scan on locks (cost=0.00..2069.91 rows=96891 width=4)\n -> Subquery Scan \"*SELECT* 2\" (cost=3997.27..38784.97 rows=79508 width=4)\n -> Hash Join (cost=3997.27..37989.89 rows=79508 width=4)\n Hash Cond: (((wys.uid)::integer = (locks.uid)::integer) AND ((wys.login)::text = (locks.login)::text))\n -> Seq Scan on wys (cost=0.00..13866.51 rows=633451 width=16)\n -> Hash (cost=2069.91..2069.91 rows=96891 width=16)\n -> Seq Scan on locks (cost=0.00..2069.91 rows=96891 width=16)\n(12 rows)\n\nTime: 1479,238 ms\nmiernik=>\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Thu, 31 Jul 2008 04:45:08 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "why \"WHERE uid NOT IN\" is so slow,\n\tand EXCEPT in the same situtation is fast?" }, { "msg_contents": "Miernik <[email protected]> writes:\n> Two queries which do the same thing, first one takes ages to complete\n> (did wait several minutes and cancelled it), while the second one took\n> 9 seconds? Don't they do the same thing?\n\nHmm, what have you got work_mem set to? The first one would likely\nhave been a lot faster if it had hashed the subplan; which I'd have\nthought would happen with only 80K rows in the subplan result,\nexcept it didn't.\n\nThe queries are in fact not exactly equivalent, because EXCEPT\ninvolves some duplicate-elimination behavior that won't happen\nin the NOT IN formulation. So I don't apologize for your having\ngotten different plans. But you should have gotten a plan less\nawful than that one for the NOT IN.\n\nAnother issue is that the NOT IN will probably not do what you\nexpected if the subquery yields any NULLs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jul 2008 23:08:06 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why \"WHERE uid NOT IN\" is so slow,\n\tand EXCEPT in the same situtation is fast?" 
}, { "msg_contents": "On Wed, Jul 30, 2008 at 11:08:06PM -0400, Tom Lane wrote:\n> Hmm, what have you got work_mem set to? The first one would likely\n> have been a lot faster if it had hashed the subplan; which I'd have\n> thought would happen with only 80K rows in the subplan result,\n> except it didn't.\n\nwork_mem = 1024kB\n\nThe machine has 48 MB total RAM and is a Xen host.\n\n> The queries are in fact not exactly equivalent, because EXCEPT\n> involves some duplicate-elimination behavior that won't happen\n> in the NOT IN formulation. So I don't apologize for your having\n> gotten different plans.\n\nBut if use EXCEPT ALL?\n\n> Another issue is that the NOT IN will probably not do what you\n> expected if the subquery yields any NULLs.\n\nIn this specific query I think it is not possible for the subquery to\nhave NULLs, because its an INNER JOIN USING (the_only_column_in_the\n_result, some_other_column_also). If any \"uid\" column of any row would\nhave been NULL, it wouldn't appear in that INNER JOIN, no?\n\n-- \nMiernik\nhttp://miernik.name/\n", "msg_date": "Thu, 31 Jul 2008 05:18:22 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why \"WHERE uid NOT IN\" is so slow, and EXCEPT in the\n\tsame situtation is fast?" }, { "msg_contents": "Miernik <[email protected]> writes:\n> On Wed, Jul 30, 2008 at 11:08:06PM -0400, Tom Lane wrote:\n>> Hmm, what have you got work_mem set to? The first one would likely\n>> have been a lot faster if it had hashed the subplan; which I'd have\n>> thought would happen with only 80K rows in the subplan result,\n>> except it didn't.\n\n> work_mem = 1024kB\n\nTry increasing that ... I don't recall the exact per-row overhead\nbut I'm quite sure it's more than 8 bytes. Ten times that would\nlikely get you to a hash subquery plan.\n\n> The machine has 48 MB total RAM and is a Xen host.\n\n48MB is really not a sane amount of memory to run a modern database\nin. Maybe you could make it go with sqlite or some other tiny-footprint\nDBMS, but Postgres isn't focused on that case.\n\n>> The queries are in fact not exactly equivalent, because EXCEPT\n>> involves some duplicate-elimination behavior that won't happen\n>> in the NOT IN formulation. So I don't apologize for your having\n>> gotten different plans.\n\n> But if use EXCEPT ALL?\n\nFraid not, EXCEPT ALL has yet other rules for how it deals with\nduplicates.\n\n>> Another issue is that the NOT IN will probably not do what you\n>> expected if the subquery yields any NULLs.\n\n> In this specific query I think it is not possible for the subquery to\n> have NULLs,\n\nOkay, just wanted to point out a common gotcha.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 30 Jul 2008 23:55:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why \"WHERE uid NOT IN\" is so slow,\n\tand EXCEPT in the same situtation is fast?" } ]
[ { "msg_contents": "Interesting read...\n\nhttp://www.linux.com/feature/142658\n", "msg_date": "Thu, 31 Jul 2008 13:45:38 -0400", "msg_from": "\"Matthew T. O'Connor\" <[email protected]>", "msg_from_op": true, "msg_subject": "SSD Performance Article" }, { "msg_contents": "On Thu, Jul 31, 2008 at 11:45 AM, Matthew T. O'Connor <[email protected]> wrote:\n> Interesting read...\n>\n> http://www.linux.com/feature/142658\n\nWish he had used a dataset larger than 1G...\n", "msg_date": "Thu, 31 Jul 2008 19:27:46 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD Performance Article" }, { "msg_contents": "Scott Marlowe wrote:\n> On Thu, Jul 31, 2008 at 11:45 AM, Matthew T. O'Connor <[email protected]> wrote:\n> \n>> Interesting read...\n>>\n>> http://www.linux.com/feature/142658\n>> \n>\n> Wish he had used a dataset larger than 1G...\n>\n> \nWish he had performed a test with the index on a dedicated SATA.\n\nHH\n\n\n\n\n\n-- \nH. Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Mon, 04 Aug 2008 07:20:45 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SSD Performance Article" } ]
[ { "msg_contents": "Hello,\nI installed postgresql-8.3.3 in our local server with option --enable-nls . After successful installion , I create database and import data.But I am not aware how to use nls sort in this postgresql-8.3.3 .\nPlease tell me syntax how to use nls sort in query , if some one know.\nThanks in advance.\nPraveen Malik.\n\n\n\n\n\n\nHello,\nI installed postgresql-8.3.3 in our \nlocal server  with option  --enable-nls . After successful installion , I create \ndatabase and import data.But I am not \naware how to use nls sort in this postgresql-8.3.3 .\nPlease tell me  syntax how to use nls sort in \nquery , if some one know.\nThanks in advance.\nPraveen Malik.", "msg_date": "Fri, 1 Aug 2008 14:05:12 +0530", "msg_from": "\"Praveen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nls sorting in Postgresql-8.3.3" }, { "msg_contents": "Praveen wrote:\n> Hello,\n> I installed postgresql-8.3.3 in our local server with option --enable-nls . After successful installion , I create database and import data.But I am not aware how to use nls sort in this postgresql-8.3.3 .\n> Please tell me syntax how to use nls sort in query , if some one know.\n\nWhat is \"nls sort\"? What do you expect --enable-nls to do? It looks like \n it's for multi-language message display rather than sorting.\n\nThe locale options are already built-in.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 01 Aug 2008 09:43:44 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Nls sorting in Postgresql-8.3.3" }, { "msg_contents": "Hi!\n\nNLS stands for \"Native Language Support\" and it's used for display \nerror/information messages in different languages.\n\nFor sorting data you need to look at LC_COLLATE (http://www.postgresql.org/docs/8.3/interactive/locale.html \n)\n\nYou also need to check that you operation system support collation for \nyour charset/encoding as PostgreSQL relies on the underling operation- \nsystems collation support. FreeBSD for example does not support UTF-8 \ncollation. On FreeBSD you need the supplied ICU (http://www.icu-project.org/ \n) patch supplied by portage to get working UTF-8 collation.\n\nBest regards,\nMathias Stjernstrom\n\n--------------------------------------\nhttp://www.globalinn.com/\n\n\nOn 1 aug 2008, at 10.35, Praveen wrote:\n\n> Hello,\n> I installed postgresql-8.3.3 in our local server with option -- \n> enable-nls . After successful installion , I create database and \n> import data.But I am not aware how to use nls sort in this \n> postgresql-8.3.3 .\n> Please tell me syntax how to use nls sort in query , if some one \n> know.\n> Thanks in advance.\n> Praveen Malik.", "msg_date": "Sat, 2 Aug 2008 12:39:19 +0200", "msg_from": "=?ISO-8859-1?Q?Mathias_Stjernstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nls sorting in Postgresql-8.3.3" } ]
[ { "msg_contents": "Hi all,\n\nWe've thrown together some results from simple i/o tests on Linux\ncomparing various file systems, hardware and software raid with a\nlittle bit of volume management:\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nWhat I'd like to ask of the folks on the list is how relevant is this\ninformation in helping make decisions such as \"What file system should\nI use?\" \"What performance can I expect from this RAID configuration?\"\n I know these kind of tests won't help answer questions like \"Which\nfile system is most reliable?\" but we would like to be as helpful as\nwe can.\n\nAny suggestions/comments/criticisms for what would be more relevant or\ninteresting also appreciated. We've started with Linux but we'd also\nlike to hit some other OS's. I'm assuming FreeBSD would be the other\npopular choice for the DL-380 that we're using.\n\nI hope this is helpful.\n\nRegards,\nMark\n", "msg_date": "Mon, 4 Aug 2008 21:54:36 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "file system and raid performance" }, { "msg_contents": "On Mon, 4 Aug 2008, Mark Wong wrote:\n\n> Hi all,\n>\n> We've thrown together some results from simple i/o tests on Linux\n> comparing various file systems, hardware and software raid with a\n> little bit of volume management:\n>\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n> What I'd like to ask of the folks on the list is how relevant is this\n> information in helping make decisions such as \"What file system should\n> I use?\" \"What performance can I expect from this RAID configuration?\"\n> I know these kind of tests won't help answer questions like \"Which\n> file system is most reliable?\" but we would like to be as helpful as\n> we can.\n>\n> Any suggestions/comments/criticisms for what would be more relevant or\n> interesting also appreciated. We've started with Linux but we'd also\n> like to hit some other OS's. I'm assuming FreeBSD would be the other\n> popular choice for the DL-380 that we're using.\n>\n> I hope this is helpful.\n\nit's definantly timely for me (we were having a spirited 'discussion' on \nthis topic at work today ;-)\n\nwhat happened with XFS?\n\nyou show it as not completing half the tests in the single-disk table and \nit's completly missing from the other ones.\n\nwhat OS/kernel were you running?\n\nif it was linux, which software raid did you try (md or dm) did you use \nlvm or raw partitions?\n\nDavid Lang\n", "msg_date": "Mon, 4 Aug 2008 22:04:42 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,\nand ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used\nbonnie++, so the numbers are really only useful for my hardware. \n\nWhat parameters were used to create the XFS partition in these tests? And,\nwhat options were used to mount the file system? Was the kernel 32-bit or\n64-bit? Given what I've seen with some of the XFS options (like lazy-count),\nI am wondering about the options used in these tests.\n\nThanks,\nGreg\n\n\n\n", "msg_date": "Mon, 4 Aug 2008 22:56:06 -0700", "msg_from": "\"Gregory S. 
Youngblood\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Mon, Aug 4, 2008 at 10:04 PM, <[email protected]> wrote:\n> On Mon, 4 Aug 2008, Mark Wong wrote:\n>\n>> Hi all,\n>>\n>> We've thrown together some results from simple i/o tests on Linux\n>> comparing various file systems, hardware and software raid with a\n>> little bit of volume management:\n>>\n>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>>\n>> What I'd like to ask of the folks on the list is how relevant is this\n>> information in helping make decisions such as \"What file system should\n>> I use?\" \"What performance can I expect from this RAID configuration?\"\n>> I know these kind of tests won't help answer questions like \"Which\n>> file system is most reliable?\" but we would like to be as helpful as\n>> we can.\n>>\n>> Any suggestions/comments/criticisms for what would be more relevant or\n>> interesting also appreciated. We've started with Linux but we'd also\n>> like to hit some other OS's. I'm assuming FreeBSD would be the other\n>> popular choice for the DL-380 that we're using.\n>>\n>> I hope this is helpful.\n>\n> it's definantly timely for me (we were having a spirited 'discussion' on\n> this topic at work today ;-)\n>\n> what happened with XFS?\n\nNot exactly sure, I didn't attempt to debug much. I only looked into\nit enough to see that the fio processes were waiting for something.\nIn one case I left the test go for 24 hours too see if it would stop.\nNote that I specified to fio not to run longer than an hour.\n\n> you show it as not completing half the tests in the single-disk table and\n> it's completly missing from the other ones.\n>\n> what OS/kernel were you running?\n\nThis is a Gentoo system, running the 2.6.25-gentoo-r6 kernel.\n\n> if it was linux, which software raid did you try (md or dm) did you use lvm\n> or raw partitions?\n\nWe tried mdraid, not device-mapper. So far we have only used raw\npartitions (whole devices without partitions.)\n\nRegards,\nMark\n", "msg_date": "Tue, 5 Aug 2008 07:57:58 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood <[email protected]> wrote:\n> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,\n> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used\n> bonnie++, so the numbers are really only useful for my hardware.\n>\n> What parameters were used to create the XFS partition in these tests? And,\n> what options were used to mount the file system? Was the kernel 32-bit or\n> 64-bit? Given what I've seen with some of the XFS options (like lazy-count),\n> I am wondering about the options used in these tests.\n\nThe default (no arguments specified) parameters were used to create\nthe XFS partitions. Mount options specified are described in the\ntable. 
This was a 64-bit OS.\n\nRegards,\nMark\n", "msg_date": "Tue, 5 Aug 2008 08:00:02 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Tue, Aug 5, 2008 at 4:54 AM, Mark Wong <[email protected]> wrote:\n> Hi all,\n Hi\n\n> We've thrown together some results from simple i/o tests on Linux\n> comparing various file systems, hardware and software raid with a\n> little bit of volume management:\n>\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n>\n> Any suggestions/comments/criticisms for what would be more relevant or\n> interesting also appreciated. We've started with Linux but we'd also\n> like to hit some other OS's. I'm assuming FreeBSD would be the other\n> popular choice for the DL-380 that we're using.\n>\n\n Would be interesting also tests with Ext4. Despite of don't\nconsider stable in kernel linux, on the case is possible because the\nversion kernel and assuming that is e2fsprogs is supported.\n\n\n\nRegards,\n-- \nFernando Ike\nhttp://www.midstorm.org/~fike/weblog\n", "msg_date": "Tue, 5 Aug 2008 16:51:44 +0000", "msg_from": "\"Fernando Ike\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Mark Wong wrote:\n> On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood <[email protected]> wrote:\n> \n>> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,\n>> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used\n>> bonnie++, so the numbers are really only useful for my hardware.\n>>\n>> What parameters were used to create the XFS partition in these tests? And,\n>> what options were used to mount the file system? Was the kernel 32-bit or\n>> 64-bit? Given what I've seen with some of the XFS options (like lazy-count),\n>> I am wondering about the options used in these tests.\n>> \n>\n> The default (no arguments specified) parameters were used to create\n> the XFS partitions. Mount options specified are described in the\n> table. This was a 64-bit OS.\n>\n> Regards,\n> Mark\n>\n> \nI think it is a good idea to match the raid stripe size and give some \nindication of how many disks are in the array. E.g:\n\nFor a 4 disk system with 256K stripe size I used:\n\n $ mkfs.xfs -d su=256k,sw=2 /dev/mdx\n\nwhich performed about 2-3 times quicker than the default (I did try sw=4 \nas well, but didn't notice any difference compared to sw=4).\n\nregards\n\nMark\n\n", "msg_date": "Wed, 06 Aug 2008 11:53:46 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "> From: Mark Kirkwood [mailto:[email protected]]\n> Mark Wong wrote:\n> > On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood\n> <[email protected]> wrote:\n> >\n> >> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing\n> JFS, XFS,\n> >> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only\n> used\n> >> bonnie++, so the numbers are really only useful for my hardware.\n> >>\n> >> What parameters were used to create the XFS partition in these\n> tests? And,\n> >> what options were used to mount the file system? Was the kernel 32-\n> bit or\n> >> 64-bit? 
Given what I've seen with some of the XFS options (like\n> lazy-count),\n> >> I am wondering about the options used in these tests.\n> >>\n> >\n> > The default (no arguments specified) parameters were used to create\n> > the XFS partitions. Mount options specified are described in the\n> > table. This was a 64-bit OS.\n> >\n> I think it is a good idea to match the raid stripe size and give some\n> indication of how many disks are in the array. E.g:\n> \n> For a 4 disk system with 256K stripe size I used:\n> \n> $ mkfs.xfs -d su=256k,sw=2 /dev/mdx\n> \n> which performed about 2-3 times quicker than the default (I did try\n> sw=4\n> as well, but didn't notice any difference compared to sw=4).\n\n[Greg says] \nI thought that xfs picked up those details when using md and a soft-raid\nconfiguration. \n\n\n\n\n", "msg_date": "Tue, 5 Aug 2008 17:03:23 -0700", "msg_from": "\"Gregory S. Youngblood\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Gregory S. Youngblood wrote:\n>> From: Mark Kirkwood [mailto:[email protected]]\n>> Mark Wong wrote:\n>> \n>>> On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood\n>>> \n>> <[email protected]> wrote:\n>> \n>>>> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing\n>>>> \n>> JFS, XFS,\n>> \n>>>> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only\n>>>> \n>> used\n>> \n>>>> bonnie++, so the numbers are really only useful for my hardware.\n>>>>\n>>>> What parameters were used to create the XFS partition in these\n>>>> \n>> tests? And,\n>> \n>>>> what options were used to mount the file system? Was the kernel 32-\n>>>> \n>> bit or\n>> \n>>>> 64-bit? Given what I've seen with some of the XFS options (like\n>>>> \n>> lazy-count),\n>> \n>>>> I am wondering about the options used in these tests.\n>>>>\n>>>> \n>>> The default (no arguments specified) parameters were used to create\n>>> the XFS partitions. Mount options specified are described in the\n>>> table. This was a 64-bit OS.\n>>>\n>>> \n>> I think it is a good idea to match the raid stripe size and give some\n>> indication of how many disks are in the array. E.g:\n>>\n>> For a 4 disk system with 256K stripe size I used:\n>>\n>> $ mkfs.xfs -d su=256k,sw=2 /dev/mdx\n>>\n>> which performed about 2-3 times quicker than the default (I did try\n>> sw=4\n>> as well, but didn't notice any difference compared to sw=4).\n>> \n>\n> [Greg says] \n> I thought that xfs picked up those details when using md and a soft-raid\n> configuration. \n>\n>\n>\n>\n>\n> \nYou are right, it does (I may be recalling performance from my other \nmachine that has a 3Ware card - this was a couple of years ago...) \nAnyway, I'm thinking for the Hardware raid tests they may need to be \nspecified.\n\nCheers\n\nMark\n", "msg_date": "Thu, 07 Aug 2008 12:01:57 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Mark Kirkwood wrote:\n> You are right, it does (I may be recalling performance from my other \n> machine that has a 3Ware card - this was a couple of years ago...) \n> Anyway, I'm thinking for the Hardware raid tests they may need to be \n> specified.\n>\n>\n\nFWIW - of course this somewhat academic given that the single disk xfs \ntest failed! 
I'm puzzled - having a Gentoo system of similar \nconfiguration (2.6.25-gentoo-r6) and running the fio tests a little \nmodified for my config (2 cpu PIII 2G RAM with 4x ATA disks RAID0 and \nall xfs filesystems - I changed sizes of files to 4G and no. processes \nto 4) all tests that failed on Marks HP work on my Supermicro P2TDER + \nPromise TX4000. In fact the performance is pretty reasonable on the old \ngirl as well (seq read is 142Mb/s and the random read/write is 12.7/12.0 \nMb/s).\n\nI certainly would like to see some more info on why the xfs tests were \nfailing - as on most systems I've encountered xfs is a great performer.\n\nregards\n\nMark\n", "msg_date": "Thu, 07 Aug 2008 21:40:00 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Mark Kirkwood schrieb:\n> Mark Kirkwood wrote:\n>> You are right, it does (I may be recalling performance from my other \n>> machine that has a 3Ware card - this was a couple of years ago...) \n>> Anyway, I'm thinking for the Hardware raid tests they may need to be \n>> specified.\n>>\n>>\n>\n> FWIW - of course this somewhat academic given that the single disk xfs \n> test failed! I'm puzzled - having a Gentoo system of similar \n> configuration (2.6.25-gentoo-r6) and running the fio tests a little \n> modified for my config (2 cpu PIII 2G RAM with 4x ATA disks RAID0 and \n> all xfs filesystems - I changed sizes of files to 4G and no. processes \n> to 4) all tests that failed on Marks HP work on my Supermicro P2TDER + \n> Promise TX4000. In fact the performance is pretty reasonable on the \n> old girl as well (seq read is 142Mb/s and the random read/write is \n> 12.7/12.0 Mb/s).\n>\n> I certainly would like to see some more info on why the xfs tests were \n> failing - as on most systems I've encountered xfs is a great performer.\n>\n> regards\n>\n> Mark\n>\nI can second this, we use XFS on nearly all our database servers, and \nnever encountered the problems mentioned.\n\n", "msg_date": "Thu, 07 Aug 2008 12:21:04 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 3:21 AM, Mario Weilguni <[email protected]> wrote:\n> Mark Kirkwood schrieb:\n>>\n>> Mark Kirkwood wrote:\n>>>\n>>> You are right, it does (I may be recalling performance from my other\n>>> machine that has a 3Ware card - this was a couple of years ago...) Anyway,\n>>> I'm thinking for the Hardware raid tests they may need to be specified.\n>>>\n>>>\n>>\n>> FWIW - of course this somewhat academic given that the single disk xfs\n>> test failed! I'm puzzled - having a Gentoo system of similar configuration\n>> (2.6.25-gentoo-r6) and running the fio tests a little modified for my config\n>> (2 cpu PIII 2G RAM with 4x ATA disks RAID0 and all xfs filesystems - I\n>> changed sizes of files to 4G and no. processes to 4) all tests that failed\n>> on Marks HP work on my Supermicro P2TDER + Promise TX4000. 
In fact the\n>> performance is pretty reasonable on the old girl as well (seq read is\n>> 142Mb/s and the random read/write is 12.7/12.0 Mb/s).\n>>\n>> I certainly would like to see some more info on why the xfs tests were\n>> failing - as on most systems I've encountered xfs is a great performer.\n>>\n>> regards\n>>\n>> Mark\n>>\n> I can second this, we use XFS on nearly all our database servers, and never\n> encountered the problems mentioned.\n\nI have heard of one or two situations where the combination of the\ndisk controller caused bizarre behaviors with different journaling\nfile systems. They seem so few and far between though. I personally\nwasn't looking forwarding to chasing Linux file system problems, but I\ncan set up an account and remote management access if anyone else\nwould like to volunteer.\n\nRegards,\nMark\n", "msg_date": "Thu, 7 Aug 2008 12:36:43 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "> -----Original Message-----\n> From: Mark Wong [mailto:[email protected]]\n> Sent: Thursday, August 07, 2008 12:37 PM\n> To: Mario Weilguni\n> Cc: Mark Kirkwood; [email protected]; [email protected]; pgsql-\n> [email protected]; Gabrielle Roth\n> Subject: Re: [PERFORM] file system and raid performance\n> \n \n> I have heard of one or two situations where the combination of the\n> disk controller caused bizarre behaviors with different journaling\n> file systems. They seem so few and far between though. I personally\n> wasn't looking forwarding to chasing Linux file system problems, but I\n> can set up an account and remote management access if anyone else\n> would like to volunteer.\n\n[Greg says] \nTempting... if no one else takes you up on it by then, I might have some\ntime in a week or two to experiment and test a couple of things.\n\nOne thing I've noticed with a Silicon Image 3124 SATA going through a\nSilicon Image 3726 port multiplier with the binary-only drivers from Silicon\nImage (until the PM support made it into the mainline kernel - 2.6.24 I\nthink, might have been .25) is that under some heavy loads it might drop a\nsata channel and if that channel happens to have a PM on it, it drops 5\ndrives. I saw this with a card that had 4 channels, 2 connected to a PM w/5\ndrives and 2 direct. It was pretty random. \n\nNot saying that's happening in this case, but odd things have been known to\nhappen under unusual usage patterns.\n\n\n\n", "msg_date": "Thu, 7 Aug 2008 13:24:23 -0700", "msg_from": "\"Gregory S. Youngblood\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "To me it still boggles the mind that noatime should actually slow down\nactivities on ANY file-system ... has someone got an explanation for\nthat kind of behaviour? As far as I'm concerned this means that even\nto any read I'll add the overhead of a write - most likely in a disk-location\nslightly off of the position that I read the data ... how would that speed\nthe process up on average?\n\n\n\nCheers,\nAndrej\n", "msg_date": "Fri, 8 Aug 2008 08:59:40 +1200", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 2:59 PM, Andrej Ricnik-Bay\n<[email protected]> wrote:\n> To me it still boggles the mind that noatime should actually slow down\n> activities on ANY file-system ... 
has someone got an explanation for\n> that kind of behaviour? As far as I'm concerned this means that even\n> to any read I'll add the overhead of a write - most likely in a disk-location\n> slightly off of the position that I read the data ... how would that speed\n> the process up on average?\n\nnoatime turns off the atime write behaviour. Or did you already know\nthat and I missed some weird post where noatime somehow managed to\nslow down performance?\n", "msg_date": "Thu, 7 Aug 2008 15:30:13 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "2008/8/8 Scott Marlowe <[email protected]>:\n> noatime turns off the atime write behaviour. Or did you already know\n> that and I missed some weird post where noatime somehow managed to\n> slow down performance?\n\nScott, I'm quite aware of what noatime does ... you didn't miss a post, but\nif you look at Mark's graphs on\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\nthey pretty much all indicate that (unless I completely misinterpret the\nmeaning and purpose of the labels), independent of the file-system,\nusing noatime slows read/writes down (on average).\n\n\n\n\n-- \nPlease don't top post, and don't use HTML e-Mail :} Make your quotes concise.\n\nhttp://www.american.edu/econ/notes/htmlmail.htm\n", "msg_date": "Fri, 8 Aug 2008 09:57:39 +1200", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Andrej Ricnik-Bay wrote:\n> 2008/8/8 Scott Marlowe <[email protected]>:\n> \n>> noatime turns off the atime write behaviour. Or did you already know\n>> that and I missed some weird post where noatime somehow managed to\n>> slow down performance?\n>> \n>\n> Scott, I'm quite aware of what noatime does ... you didn't miss a post, but\n> if you look at Mark's graphs on\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n> they pretty much all indicate that (unless I completely misinterpret the\n> meaning and purpose of the labels), independent of the file-system,\n> using noatime slows read/writes down (on average)\n\nThat doesn't make sense - if noatime slows things down, then the \nanalysis is probably wrong.\n\nNow, modern Linux distributions default to \"relatime\" - which will only \nupdate access time if the access time is currently less than the update \ntime or something like this. The effect is that modern Linux \ndistributions do not benefit from \"noatime\" as much as they have in the \npast. In this case, \"noatime\" vs default would probably be measuring % \nnoise.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nAndrej Ricnik-Bay wrote:\n\n2008/8/8 Scott Marlowe <[email protected]>:\n \n\nnoatime turns off the atime write behaviour. Or did you already know\nthat and I missed some weird post where noatime somehow managed to\nslow down performance?\n \n\n\nScott, I'm quite aware of what noatime does ... 
you didn't miss a post, but\nif you look at Mark's graphs on\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\nthey pretty much all indicate that (unless I completely misinterpret the\nmeaning and purpose of the labels), independent of the file-system,\nusing noatime slows read/writes down (on average)\n\n\nThat doesn't make sense - if noatime slows things down, then the\nanalysis is probably wrong.\n\nNow, modern Linux distributions default to \"relatime\" - which will only\nupdate access time if the access time is currently less than the update\ntime or something like this. The effect is that modern Linux\ndistributions do not benefit from \"noatime\" as much as they have in the\npast. In this case, \"noatime\" vs default would probably be measuring %\nnoise.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Thu, 07 Aug 2008 18:08:59 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 3:57 PM, Andrej Ricnik-Bay\n<[email protected]> wrote:\n> 2008/8/8 Scott Marlowe <[email protected]>:\n>> noatime turns off the atime write behaviour. Or did you already know\n>> that and I missed some weird post where noatime somehow managed to\n>> slow down performance?\n>\n> Scott, I'm quite aware of what noatime does ... you didn't miss a post, but\n> if you look at Mark's graphs on\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n> they pretty much all indicate that (unless I completely misinterpret the\n> meaning and purpose of the labels), independent of the file-system,\n> using noatime slows read/writes down (on average).\n\nInteresting. While a few of the benchmarks looks noticeably slower\nwith noatime (reiserfs for instance) most seem faster in that listing.\n\nI am just now setting up our big database server for work and noticed\na MUCH lower performance without noatime.\n", "msg_date": "Thu, 7 Aug 2008 16:12:58 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 1:24 PM, Gregory S. Youngblood <[email protected]> wrote:\n>> -----Original Message-----\n>> From: Mark Wong [mailto:[email protected]]\n>> Sent: Thursday, August 07, 2008 12:37 PM\n>> To: Mario Weilguni\n>> Cc: Mark Kirkwood; [email protected]; [email protected]; pgsql-\n>> [email protected]; Gabrielle Roth\n>> Subject: Re: [PERFORM] file system and raid performance\n>>\n>\n>> I have heard of one or two situations where the combination of the\n>> disk controller caused bizarre behaviors with different journaling\n>> file systems. They seem so few and far between though. I personally\n>> wasn't looking forwarding to chasing Linux file system problems, but I\n>> can set up an account and remote management access if anyone else\n>> would like to volunteer.\n>\n> [Greg says]\n> Tempting... 
if no one else takes you up on it by then, I might have some\n> time in a week or two to experiment and test a couple of things.\n\nOk, let me know and I'll set you up with access.\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Aug 2008 09:28:44 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 3:08 PM, Mark Mielke <[email protected]> wrote:\n> Andrej Ricnik-Bay wrote:\n>\n> 2008/8/8 Scott Marlowe <[email protected]>:\n>\n>\n> noatime turns off the atime write behaviour. Or did you already know\n> that and I missed some weird post where noatime somehow managed to\n> slow down performance?\n>\n>\n> Scott, I'm quite aware of what noatime does ... you didn't miss a post, but\n> if you look at Mark's graphs on\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n> they pretty much all indicate that (unless I completely misinterpret the\n> meaning and purpose of the labels), independent of the file-system,\n> using noatime slows read/writes down (on average)\n>\n> That doesn't make sense - if noatime slows things down, then the analysis is\n> probably wrong.\n>\n> Now, modern Linux distributions default to \"relatime\" - which will only\n> update access time if the access time is currently less than the update time\n> or something like this. The effect is that modern Linux distributions do not\n> benefit from \"noatime\" as much as they have in the past. In this case,\n> \"noatime\" vs default would probably be measuring % noise.\n\nAnyone know what to look for in kernel profiles? There is readprofile\n(profile.text) and oprofile (oprofile.kernel and oprofile.user) data\navailable. Just click on the results number, then the \"raw data\" link\nfor a directory listing of files. For example, here is one of the\nlinks:\n\nhttp://osdldbt.sourceforge.net/dl380/3disk/sraid5/ext3-journal/seq-read/fio/profiling/oprofile.kernel\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Aug 2008 09:33:06 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 3:08 PM, Mark Mielke <[email protected]> wrote:\n> Andrej Ricnik-Bay wrote:\n>\n> 2008/8/8 Scott Marlowe <[email protected]>:\n>\n>\n> noatime turns off the atime write behaviour. Or did you already know\n> that and I missed some weird post where noatime somehow managed to\n> slow down performance?\n>\n>\n> Scott, I'm quite aware of what noatime does ... you didn't miss a post, but\n> if you look at Mark's graphs on\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n> they pretty much all indicate that (unless I completely misinterpret the\n> meaning and purpose of the labels), independent of the file-system,\n> using noatime slows read/writes down (on average)\n>\n> That doesn't make sense - if noatime slows things down, then the analysis is\n> probably wrong.\n>\n> Now, modern Linux distributions default to \"relatime\" - which will only\n> update access time if the access time is currently less than the update time\n> or something like this. The effect is that modern Linux distributions do not\n> benefit from \"noatime\" as much as they have in the past. 
In this case,\n> \"noatime\" vs default would probably be measuring % noise.\n\nInteresting, now how would we see if it is defaulting to \"relatime\"?\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Aug 2008 09:56:15 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, Aug 7, 2008 at 3:08 PM, Mark Mielke <[email protected]> wrote:\n> Andrej Ricnik-Bay wrote:\n>\n> 2008/8/8 Scott Marlowe <[email protected]>:\n>\n>\n> noatime turns off the atime write behaviour. Or did you already know\n> that and I missed some weird post where noatime somehow managed to\n> slow down performance?\n>\n>\n> Scott, I'm quite aware of what noatime does ... you didn't miss a post, but\n> if you look at Mark's graphs on\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n> they pretty much all indicate that (unless I completely misinterpret the\n> meaning and purpose of the labels), independent of the file-system,\n> using noatime slows read/writes down (on average)\n>\n> That doesn't make sense - if noatime slows things down, then the analysis is\n> probably wrong.\n>\n> Now, modern Linux distributions default to \"relatime\" - which will only\n> update access time if the access time is currently less than the update time\n> or something like this. The effect is that modern Linux distributions do not\n> benefit from \"noatime\" as much as they have in the past. In this case,\n> \"noatime\" vs default would probably be measuring % noise.\n\nIt appears that the default mount option on this system is \"atime\".\nNot specifying any options, \"relatime\" or \"noatime\", results in\nneither being shown in /proc/mounts. I'm assuming if the default\nbehavior was to use \"relatime\" that it would be shown in /proc/mounts.\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Aug 2008 14:13:29 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Thu, 7 Aug 2008, Mark Mielke wrote:\n\n> Now, modern Linux distributions default to \"relatime\"\n\nRight, but Mark's HP test system is running Gentoo.\n\n(ducks)\n\nAccording to http://brainstorm.ubuntu.com/idea/2369/ relatime is the \ndefault for Fedora 8, Mandriva 2008, Pardus, and Ubuntu 8.04.\n\nAnyway, there aren't many actual files involved in this test, and I \nsuspect the atime writes are just being cached until forced out to disk \nonly periodically. You need to run something that accesses more files \nand/or regularly forces sync to disk periodically to get a more \ndatabase-like situation where the atime writes degrade performance. Note \nhow Joshua Drake's ext2 vs. ext3 comparison, which does show a large \ndifference here, was run with the iozone's -e parameter that flushes the \nwrites with fsync. 
I don't see anything like that in the DL380 G5 fio \ntests.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 8 Aug 2008 18:30:07 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Mark Wong wrote:\n> On Mon, Aug 4, 2008 at 10:04 PM, <[email protected]> wrote:\n> > On Mon, 4 Aug 2008, Mark Wong wrote:\n> >\n> >> Hi all,\n> >>\n> >> We've thrown together some results from simple i/o tests on Linux\n> >> comparing various file systems, hardware and software raid with a\n> >> little bit of volume management:\n> >>\n> >> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nMark, very useful analysis. I am curious why you didn't test\n'data=writeback' on ext3; 'data=writeback' is the recommended mount\nmethod for that file system, though I see that is not mentioned in our\nofficial documentation.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Fri, 15 Aug 2008 15:22:59 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Fri, Aug 15, 2008 at 12:22 PM, Bruce Momjian <[email protected]> wrote:\n> Mark Wong wrote:\n>> On Mon, Aug 4, 2008 at 10:04 PM, <[email protected]> wrote:\n>> > On Mon, 4 Aug 2008, Mark Wong wrote:\n>> >\n>> >> Hi all,\n>> >>\n>> >> We've thrown together some results from simple i/o tests on Linux\n>> >> comparing various file systems, hardware and software raid with a\n>> >> little bit of volume management:\n>> >>\n>> >> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n> Mark, very useful analysis. I am curious why you didn't test\n> 'data=writeback' on ext3; 'data=writeback' is the recommended mount\n> method for that file system, though I see that is not mentioned in our\n> official documentation.\n\nI think the short answer is that I neglected to. :) I didn't realized\n'data=writeback' is the recommended journal mode. We'll get a result\nor two and see how it looks.\n\nMark\n", "msg_date": "Fri, 15 Aug 2008 12:31:04 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Fri, 15 Aug 2008, Bruce Momjian wrote:\n\n> 'data=writeback' is the recommended mount method for that file system, \n> though I see that is not mentioned in our official documentation.\n\nWhile writeback has good performance characteristics, I don't know that \nI'd go so far as to support making that an official recommendation. The \nintegrity guarantees of that journaling mode are pretty weak. Sure the \ndatabase itself should be fine; it's got the WAL as a backup if the \nfilesytem loses some recently written bits. But I'd hate to see somebody \nswitch to that mount option on this project's recommendation only to find \nsome other files got corrupted on a power loss because of writeback's \nlimited journalling. 
ext3 has plenty of problem already without picking \nits least safe mode, and recommending writeback would need a carefully \nwritten warning to that effect.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 16 Aug 2008 00:53:20 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Greg Smith wrote:\n> On Fri, 15 Aug 2008, Bruce Momjian wrote:\n>> 'data=writeback' is the recommended mount method for that file \n>> system, though I see that is not mentioned in our official \n>> documentation.\n> While writeback has good performance characteristics, I don't know \n> that I'd go so far as to support making that an official \n> recommendation. The integrity guarantees of that journaling mode are \n> pretty weak. Sure the database itself should be fine; it's got the \n> WAL as a backup if the filesytem loses some recently written bits. \n> But I'd hate to see somebody switch to that mount option on this \n> project's recommendation only to find some other files got corrupted \n> on a power loss because of writeback's limited journalling. ext3 has \n> plenty of problem already without picking its least safe mode, and \n> recommending writeback would need a carefully written warning to that \n> effect.\n\nTo contrast - not recommending it means that most people unaware will be \nrunning with a less effective mode, and they will base their performance \nmeasurements on this less effective mode.\n\nPerhaps the documentation should only state that \"With ext3, \ndata=writeback is the recommended mode for PostgreSQL. PostgreSQL \nperforms its own journalling of data and does not require the additional \nguarantees provided by the more conservative ext3 modes. However, if the \nfile system is used for any purpose other than PostregSQL database \nstorage, the data integrity requirements of these other purposes must be \nconsidered on their own.\"\n\nPersonally, I use data=writeback for most purposes, but use data=journal \nfor /mail and /home. In these cases, I find even the default ext3 mode \nto be fewer guarantees than I am comfortable with. :-)\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Sat, 16 Aug 2008 11:01:31 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Fri, Aug 15, 2008 at 12:22 PM, Bruce Momjian <[email protected]> wrote:\n> Mark Wong wrote:\n>> On Mon, Aug 4, 2008 at 10:04 PM, <[email protected]> wrote:\n>> > On Mon, 4 Aug 2008, Mark Wong wrote:\n>> >\n>> >> Hi all,\n>> >>\n>> >> We've thrown together some results from simple i/o tests on Linux\n>> >> comparing various file systems, hardware and software raid with a\n>> >> little bit of volume management:\n>> >>\n>> >> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n> Mark, very useful analysis. I am curious why you didn't test\n> 'data=writeback' on ext3; 'data=writeback' is the recommended mount\n> method for that file system, though I see that is not mentioned in our\n> official documentation.\n\nI have one set of results with ext3 data=writeback and it appears that\nsome of the write tests have less throughput than data=ordered. 
For\nanyone who wants to look at the results details:\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nit's under the \"Aggregate Bandwidth (MB/s) - RAID 5 (256KB stripe) -\nNo partition table\" table.\n\nRegards,\nMark\n", "msg_date": "Mon, 18 Aug 2008 08:33:44 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Mark Mielke wrote:\n> Greg Smith wrote:\n> > On Fri, 15 Aug 2008, Bruce Momjian wrote:\n> >> 'data=writeback' is the recommended mount method for that file \n> >> system, though I see that is not mentioned in our official \n> >> documentation.\n> > While writeback has good performance characteristics, I don't know \n> > that I'd go so far as to support making that an official \n> > recommendation. The integrity guarantees of that journaling mode are \n> > pretty weak. Sure the database itself should be fine; it's got the \n> > WAL as a backup if the filesytem loses some recently written bits. \n> > But I'd hate to see somebody switch to that mount option on this \n> > project's recommendation only to find some other files got corrupted \n> > on a power loss because of writeback's limited journalling. ext3 has \n> > plenty of problem already without picking its least safe mode, and \n> > recommending writeback would need a carefully written warning to that \n> > effect.\n> \n> To contrast - not recommending it means that most people unaware will be \n> running with a less effective mode, and they will base their performance \n> measurements on this less effective mode.\n> \n> Perhaps the documentation should only state that \"With ext3, \n> data=writeback is the recommended mode for PostgreSQL. PostgreSQL \n> performs its own journalling of data and does not require the additional \n> guarantees provided by the more conservative ext3 modes. However, if the \n> file system is used for any purpose other than PostregSQL database \n> storage, the data integrity requirements of these other purposes must be \n> considered on their own.\"\n> \n> Personally, I use data=writeback for most purposes, but use data=journal \n> for /mail and /home. In these cases, I find even the default ext3 mode \n> to be fewer guarantees than I am comfortable with. :-)\n\nI have documented this in the WAL section of the manual, which seemed\nlike the most logical location.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +", "msg_date": "Sat, 6 Dec 2008 16:34:58 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Bruce Momjian wrote:\n> Mark Mielke wrote:\n>> Greg Smith wrote:\n>>> On Fri, 15 Aug 2008, Bruce Momjian wrote:\n>>>> 'data=writeback' is the recommended mount method for that file \n>>>> system, though I see that is not mentioned in our official \n>>>> documentation.\n>>> While writeback has good performance characteristics, I don't know \n>>> that I'd go so far as to support making that an official \n>>> recommendation. The integrity guarantees of that journaling mode are \n>>> pretty weak. Sure the database itself should be fine; it's got the \n>>> WAL as a backup if the filesytem loses some recently written bits. 
\n>>> But I'd hate to see somebody switch to that mount option on this \n>>> project's recommendation only to find some other files got corrupted \n>>> on a power loss because of writeback's limited journalling. ext3 has \n>>> plenty of problem already without picking its least safe mode, and \n>>> recommending writeback would need a carefully written warning to that \n>>> effect.\n>> To contrast - not recommending it means that most people unaware will be \n>> running with a less effective mode, and they will base their performance \n>> measurements on this less effective mode.\n>>\n>> Perhaps the documentation should only state that \"With ext3, \n>> data=writeback is the recommended mode for PostgreSQL. PostgreSQL \n>> performs its own journalling of data and does not require the additional \n>> guarantees provided by the more conservative ext3 modes. However, if the \n>> file system is used for any purpose other than PostregSQL database \n>> storage, the data integrity requirements of these other purposes must be \n>> considered on their own.\"\n>>\n>> Personally, I use data=writeback for most purposes, but use data=journal \n>> for /mail and /home. In these cases, I find even the default ext3 mode \n>> to be fewer guarantees than I am comfortable with. :-)\n> \n> I have documented this in the WAL section of the manual, which seemed\n> like the most logical location.\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> \n\nAh, but shouldn't a PostgreSQL (or any other database, for that matter)\nhave its own set of filesystems tuned to the application's I/O patterns?\nSure, there are some people who need to have all of their eggs in one\nbasket because they can't afford multiple baskets. For them, maybe the\nOS defaults are the right choice. But if you're building a\ndatabase-specific server, you can optimize the I/O for that.\n\n-- \nM. Edward (Ed) Borasky, FBG, AB, PTA, PGS, MS, MNLP, NST, ACMC(P), WOM\n\n\"A mathematician is a device for turning coffee into theorems.\" --\nAlfréd Rényi via Paul Erdős\n\n", "msg_date": "Sun, 07 Dec 2008 21:59:16 -0800", "msg_from": "\"M. Edward (Ed) Borasky\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "M. Edward (Ed) Borasky wrote:\n\n> Ah, but shouldn't a PostgreSQL (or any other database, for that matter)\n> have its own set of filesystems tuned to the application's I/O patterns?\n> Sure, there are some people who need to have all of their eggs in one\n> basket because they can't afford multiple baskets. For them, maybe the\n> OS defaults are the right choice. But if you're building a\n> database-specific server, you can optimize the I/O for that.\n> \nI used to run IBM's DB2 database management system. It can use a normal\nLinux file system (e.g., ext2 or ext3), but it prefers to run a partition\n(or more, preferably more than one) itself in raw mode. This eliminates\ncore-to-core copies of in put and output, organizing the disk space as it\nprefers, allows multiple writer processes (typically one per disk drive),\nand multiple reader processes (also, typically one per drive), and\npotentially increasing the concurrency of reading, writing, and processing.\n\nMy dbms needs are extremely modest (only one database, usually only one\nuser, all on one machine), so I saw only a little benefit to using DB2, and\nthere were administrative problems. 
I use Red Hat Enterprise Linux, and the\nlatest version of that (RHEL 5) does not offer raw file systems anymore, but\nprovides the same thing by other means. Trouble is, I would have to buy the\nlatest version of DB2 to be compatible with my version of Linux. Instead, I\njust converted everything to postgreSQL instead, it it works very well.\n\nWhen I started with this in 1986, I first tried Microsoft Access, but could\nnot get it to accept the database description I was using. So I switched to\nLinux (for many reasons -- that was just one of them) and tried postgreSQL.\nAt the time, it was useless. One version would not do views (it accepted the\nconstruct, IIRC, but they did not work), and the other version would do\nviews, but would not do something else (I forget what), so I got Informix\nthat worked pretty well with Red Hat Linux 5.0. When I upgraded to RHL 5.2\nor 6.0 (I forget which), Informix would not work (could not even install\nit), I could get no support from them, so that is why I went to DB2. When I\ngot tired of trying to keep DB2 working with RHEL 5, I switched to\npostgreSQL, and the dbms itself worked right out of the box. I had to diddle\nmy programs very slightly (I used embedded SQL), but there were superficial\nchanges here and there.\n\nThe advantage of using one of the OS's file systems (I use ext2 for the dbms\nand ext3 for everything else) are that the dbms developers have to be ready\nfor only about one file system. That is a really big advantage, I imagine. I\nalso have atime turned off. The main database is on 4 small hard drive\n(about 18 GBytes each) each of which has just one partition taking the\nentire drive. They are all on a single SCSI controller that also has my\nbackup tape drive on it. The machine has two other hard drives (around 80\nGBytes each) on another SCSI controller and nothing else on that controller.\nOne of the drives has a partition on it where mainly the WAL is placed, and\nanother with little stuff. Those two drives have other partitions for the\nLinus stuff, /tmp, /var, and /home as the main partitions on them, but the\none with the WAL on it is just about unused (contains /usr/source and stuff\nlike that) when postgres is running. That is good enough for me. If I were\nin a serious production environment, I would take everything except the dbms\noff that machine and run it on another one.\n\nI cannot make any speed comparisons between postgreSQL and DB2, because the\nmachine I ran DB2 on has two 550 MHz processors and 512 Megabytes RAM\nrunning RHL 7.3, and the new machine for postgres has two 3.06 GBYte\nhyperthreaded Xeon processors and 8 GBytes RAM running RHEL 5, so a\ncomparison would be kind of meaningless.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 06:50:01 up 4 days, 17:08, 4 users, load average: 4.18, 4.15, 4.07\n", "msg_date": "Mon, 08 Dec 2008 07:15:22 -0500", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Sun, Dec 7, 2008 at 10:59 PM, M. Edward (Ed) Borasky\n<[email protected]> wrote:\n> Ah, but shouldn't a PostgreSQL (or any other database, for that matter)\n> have its own set of filesystems tuned to the application's I/O patterns?\n> Sure, there are some people who need to have all of their eggs in one\n> basket because they can't afford multiple baskets. 
For them, maybe the\n> OS defaults are the right choice. But if you're building a\n> database-specific server, you can optimize the I/O for that.\n\nIt's really about a cost / benefits analysis. 20 years ago file\nsystems were slow and buggy and a database could, with little work,\noutperform them. Nowadays, not so much. I'm guessing that the extra\ncost and effort of maintaining a file system for pgsql outweighs any\nreal gain you're likely to see performance wise.\n\nBut I'm sure that if you implemented one that outran XFS / ZFS / ext3\net. al. people would want to hear about it.\n", "msg_date": "Mon, 8 Dec 2008 09:05:37 -0700", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "On Mon, 8 Dec 2008, Scott Marlowe wrote:\n\n> On Sun, Dec 7, 2008 at 10:59 PM, M. Edward (Ed) Borasky\n> <[email protected]> wrote:\n>> Ah, but shouldn't a PostgreSQL (or any other database, for that matter)\n>> have its own set of filesystems tuned to the application's I/O patterns?\n>> Sure, there are some people who need to have all of their eggs in one\n>> basket because they can't afford multiple baskets. For them, maybe the\n>> OS defaults are the right choice. But if you're building a\n>> database-specific server, you can optimize the I/O for that.\n>\n> It's really about a cost / benefits analysis. 20 years ago file\n> systems were slow and buggy and a database could, with little work,\n> outperform them. Nowadays, not so much. I'm guessing that the extra\n> cost and effort of maintaining a file system for pgsql outweighs any\n> real gain you're likely to see performance wise.\n\nespecially with the need to support the new 'filesystem' on many different \nOS types.\n\nDavid Lang\n\n> But I'm sure that if you implemented one that outran XFS / ZFS / ext3\n> et. al. people would want to hear about it.\n>\n>\n", "msg_date": "Mon, 8 Dec 2008 12:51:39 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" }, { "msg_contents": "Scott Marlowe wrote:\n> On Sun, Dec 7, 2008 at 10:59 PM, M. Edward (Ed) Borasky\n> <[email protected]> wrote:\n>> Ah, but shouldn't a PostgreSQL (or any other database, for that matter)\n>> have its own set of filesystems tuned to the application's I/O patterns?\n>> Sure, there are some people who need to have all of their eggs in one\n>> basket because they can't afford multiple baskets. For them, maybe the\n>> OS defaults are the right choice. But if you're building a\n>> database-specific server, you can optimize the I/O for that.\n> \n> It's really about a cost / benefits analysis. 20 years ago file\n> systems were slow and buggy and a database could, with little work,\n> outperform them. Nowadays, not so much. I'm guessing that the extra\n> cost and effort of maintaining a file system for pgsql outweighs any\n> real gain you're likely to see performance wise.\n> \n> But I'm sure that if you implemented one that outran XFS / ZFS / ext3\n> et. al. people would want to hear about it.\n> \nI guess I wasn't clear -- I didn't mean a PostgreSQL-specific filesystem\ndesign, although BTRFS does have some things that are \"RDBMS-friendly\".\nI meant that one should hand-tune existing filesystems / hardware for\noptimum performance on specific workloads. The tablespaces in PostgreSQL\ngive you that kind of potential granularity, I think.\n\n-- \nM. 
Edward (Ed) Borasky, FBG, AB, PTA, PGS, MS, MNLP, NST, ACMC(P), WOM\n\n\"A mathematician is a device for turning coffee into theorems.\" --\nAlfréd Rényi via Paul Erdős\n\n", "msg_date": "Tue, 09 Dec 2008 06:35:49 -0800", "msg_from": "\"M. Edward (Ed) Borasky\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: file system and raid performance" } ]
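The tablespace granularity mentioned in the thread above maps directly onto plain SQL. A minimal sketch, not taken from the thread: the mount point and object names are illustrative, and the assumption is that /mnt/pg_fast is a separately mounted, separately tuned volume (for example the ext3 data=writeback array discussed above) already owned by the postgres user.

  -- register the separately tuned volume as a tablespace
  CREATE TABLESPACE fast_ts LOCATION '/mnt/pg_fast';

  -- place write-heavy objects on it at creation time
  CREATE TABLE hot_data (id serial PRIMARY KEY, payload text) TABLESPACE fast_ts;
  CREATE INDEX hot_data_payload_idx ON hot_data (payload) TABLESPACE fast_ts;

  -- existing objects can be relocated later
  -- ALTER TABLE some_existing_table SET TABLESPACE fast_ts;

This keeps filesystem and mount-option choices per volume while the rest of the cluster stays on the default data directory.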
[ { "msg_contents": "Hello,\n\nWe have a bit of a problem with the daily db backup. At the moment the \nbackup db size is half of the main db, and we have the following errors \nin the log file:\n\nERROR: out of memory\nDETAIL: Failed on request of size 536870912\n\n(There's a bunch of those.)\n\nThere were some 'permission denied for language c' errors earlier in the \nlog, but then the backup continued for a while.\n\nThis is the command we're running:\n\npg_dump -c -h $DB_HOST -U $DB_USER $DB_NAME | PGOPTIONS='-c \nmaintenance_work_mem=1500MB -c sort_mem=64MB'\npsql --quiet -U $DB_DUMP_USER -h $DB_DUMP_HOST $DB_DUMP_NAME;\n\nBoth databases (main and backup) are Postgres 8.3.\n\nI'd appreciate any suggestions on how to fix this problem. Also, if you \nhave a better idea on running a daily backup please let me know.\nThanks!\n\nMarcin", "msg_date": "Wed, 06 Aug 2008 11:48:51 +0200", "msg_from": "Marcin Citowicki <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump error - out of memory, Failed on request of size 536870912" }, { "msg_contents": "Hello,\n\nI forgot to add - all those 'out of memory' errors happen when backup db \nis trying to create index. Every 'CREATE INDEX' operation is followed by \n'out of memory' error.\nThanks!\n\nMarcin\n\n\nMarcin Citowicki wrote:\n> Hello,\n>\n> We have a bit of a problem with the daily db backup. At the moment the \n> backup db size is half of the main db, and we have the following \n> errors in the log file:\n>\n> ERROR: out of memory\n> DETAIL: Failed on request of size 536870912\n>\n> (There's a bunch of those.)\n>\n> There were some 'permission denied for language c' errors earlier in \n> the log, but then the backup continued for a while.\n>\n> This is the command we're running:\n>\n> pg_dump -c -h $DB_HOST -U $DB_USER $DB_NAME | PGOPTIONS='-c \n> maintenance_work_mem=1500MB -c sort_mem=64MB'\n> psql --quiet -U $DB_DUMP_USER -h $DB_DUMP_HOST $DB_DUMP_NAME;\n>\n> Both databases (main and backup) are Postgres 8.3.\n>\n> I'd appreciate any suggestions on how to fix this problem. Also, if \n> you have a better idea on running a daily backup please let me know.\n> Thanks!\n>\n> Marcin", "msg_date": "Wed, 06 Aug 2008 12:26:39 +0200", "msg_from": "Marcin Citowicki <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_dump error - out of memory,\n Failed on request of size 536870912" }, { "msg_contents": "Hello\n\nyou have some data files broken.\n\ntry to use http://svana.org/kleptog/pgsql/pgfsck.html - but DO\nDATABASE CLUSTER FILES BACKUP BEFORE. pgfsck doesn't support 8.3, but\nit can help you to search currupted rows. Then you have remove these\nrows, thats all. Later check your hardware - probably your server has\nproblems with memory or controller.\n\nregards\nPavel Stehule\n\n\n2008/8/6 Marcin Citowicki <[email protected]>:\n> Hello,\n>\n> I forgot to add - all those 'out of memory' errors happen when backup db is\n> trying to create index. Every 'CREATE INDEX' operation is followed by 'out\n> of memory' error.\n> Thanks!\n>\n> Marcin\n>\n>\n> Marcin Citowicki wrote:\n>>\n>> Hello,\n>>\n>> We have a bit of a problem with the daily db backup. 
At the moment the\n>> backup db size is half of the main db, and we have the following errors in\n>> the log file:\n>>\n>> ERROR: out of memory\n>> DETAIL: Failed on request of size 536870912\n>>\n>> (There's a bunch of those.)\n>>\n>> There were some 'permission denied for language c' errors earlier in the\n>> log, but then the backup continued for a while.\n>>\n>> This is the command we're running:\n>>\n>> pg_dump -c -h $DB_HOST -U $DB_USER $DB_NAME | PGOPTIONS='-c\n>> maintenance_work_mem=1500MB -c sort_mem=64MB'\n>> psql --quiet -U $DB_DUMP_USER -h $DB_DUMP_HOST $DB_DUMP_NAME;\n>>\n>> Both databases (main and backup) are Postgres 8.3.\n>>\n>> I'd appreciate any suggestions on how to fix this problem. Also, if you\n>> have a better idea on running a daily backup please let me know.\n>> Thanks!\n>>\n>> Marcin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n", "msg_date": "Wed, 6 Aug 2008 14:21:15 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - out of memory,\n Failed on request of size 536870912" }, { "msg_contents": "Marcin Citowicki wrote:\n> Hello,\n> \n> I forgot to add - all those 'out of memory' errors happen when backup db \n> is trying to create index. Every 'CREATE INDEX' operation is followed by \n> 'out of memory' error.\n\nare you sure that your OS (or ulimit) is able to support a \nmaintenance_work_setting that large ? - try reducing to a say 128MB for \na start and try again.\n\n\nStefan\n", "msg_date": "Wed, 06 Aug 2008 14:33:18 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump error - out of memory,\n Failed on request of size 536870912" } ]
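Stefan's suggestion above amounts to overriding the setting for the restore session only. A hedged sketch of what that looks like in SQL; the table and index names are placeholders, since the failing CREATE INDEX statements themselves are not shown in the thread.

  -- in the session that replays the dump, before the index builds start
  SET maintenance_work_mem = '128MB';

  -- the dump's CREATE INDEX statements then run with the smaller allocation
  CREATE INDEX some_table_some_col_idx ON some_table (some_col);

  -- optionally return to the server default for the rest of the session
  RESET maintenance_work_mem;

In the backup script quoted above, the same effect can be had by lowering the value passed through PGOPTIONS from 1500MB to something the OS limits can actually satisfy.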
[ { "msg_contents": "This query is run on a test system just after a backup of the database\nhas been restored and it does exactly what I expect it to do\n\nEXPLAIN ANALYZE SELECT foos.* FROM foos INNER JOIN bars ON foos.id =\nbars.foos_id WHERE ((bars.bars_id = 12345)) ORDER BY attr1 LIMIT 3\nOFFSET 0;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=12946.83..12946.83 rows=3 width=1175) (actual\ntime=0.123..0.131 rows=1 loops=1)\n -> Sort (cost=12946.83..12950.83 rows=1602 width=1175) (actual\ntime=0.116..0.119 rows=1 loops=1)\n Sort Key: foos.attr1\n -> Nested Loop (cost=28.69..12035.56 rows=1602 width=1175)\n(actual time=0.071..0.086 rows=1 loops=1)\n -> Bitmap Heap Scan on bars (cost=28.69..2059.66\nrows=1602 width=4) (actual time=0.036..0.039 rows=1 loops=1)\n Recheck Cond: (bars_id = 12345)\n -> Bitmap Index Scan on index_bars_on_bars_id\n(cost=0.00..28.29 rows=1602 width=0) (actual time=0.024..0.024 rows=1\nloops=1)\n Index Cond: (bars_id = 12345)\n -> Index Scan using foos_pkey on foos (cost=0.00..6.21\nrows=1 width=1175) (actual time=0.017..0.021 rows=1 loops=1)\n Index Cond: (foos.id = bars.foos_id)\n Total runtime: 0.350 ms\n\nThis query is run on a production system and is using foos_1attr1\nwhich is an index on attr1 which is a string.\n\nEXPLAIN ANALYZE SELECT foos.* FROM foos INNER JOIN bars ON foos.id =\nbars.foos_id WHERE ((bars.bars_id = 12345)) ORDER BY attr1 LIMIT 3\nOFFSET 0;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..2847.31 rows=3 width=332) (actual\ntime=6175.515..6414.599 rows=1 loops=1)\n -> Nested Loop (cost=0.00..287578.30 rows=303 width=332) (actual\ntime=6175.510..6414.591 rows=1 loops=1)\n -> Index Scan using foos_1attr1 on foos\n(cost=0.00..128038.65 rows=1602 width=332) (actual\ntime=0.182..2451.923 rows=2498 loops=1)\n -> Index Scan using bars_1ix on bars (cost=0.00..0.37\nrows=1 width=4) (actual time=0.006..0.006 rows=0 loops=421939)\n Index Cond: (foos.id = bars.foos_id)\n Filter: (bars_id = 12345)\n Total runtime: 6414.804 ms\n", "msg_date": "Wed, 6 Aug 2008 16:35:01 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "query planner not using the correct index" }, { "msg_contents": "Joshua Shanks wrote:\n> This query is run on a test system just after a backup of the database\n> has been restored and it does exactly what I expect it to do\n\n[snip]\n\nObvious questions:\n\n- Have you changed the random page cost on either installation?\n\n- Have both installations had VACUUM ANALYZE run recently?\n\n- Are the stats targets the same on both installations?\n\n- Do both installations have similar shared buffers, total available\n RAM info, etc?\n\n--\nCraig Ringer\n", "msg_date": "Thu, 07 Aug 2008 10:47:22 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "> - Have you changed the random page cost on either installation?\n\nThis is whatever the default is for both boxes (commented config file says 4.0)\n\n> - Have both installations had VACUUM ANALYZE run recently?\n\nThis is the first thing I did and didn't seem to do anything.\n\nOddly enough I just went and did a VACUUM ANALYZE on a newly restored\ndb on the test server and get 
the same query plan as production so I\nam now guessing something with the stats from ANALYZE are making\npostgres think the string index is the best bet but is clearly 1000's\nof times slower.\n\n> - Are the stats targets the same on both installations?\n\nIf you mean default_statistics_target that is also the default\n(commented config file says 10)\n\n> - Do both installations have similar shared buffers, total available RAM info, etc?\n\nThe boxes have different configs as the test box isn't as big as the\nproduction on so it doesn't have as much resources available or\nallocated to it.\n\nI did run the query on the backup db box (exact same hardware and\nconfiguration as the production box) which gets restored from a backup\nperiodically (how I populated the test db) and got the same results as\nthe test box.\n", "msg_date": "Wed, 6 Aug 2008 22:14:17 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "Joshua Shanks wrote:\n>> - Have you changed the random page cost on either installation?\n> \n> This is whatever the default is for both boxes (commented config file says 4.0)\n> \n>> - Have both installations had VACUUM ANALYZE run recently?\n> \n> This is the first thing I did and didn't seem to do anything.\n> \n> Oddly enough I just went and did a VACUUM ANALYZE on a newly restored\n> db on the test server and get the same query plan as production so I\n> am now guessing something with the stats from ANALYZE are making\n> postgres think the string index is the best bet but is clearly 1000's\n> of times slower.\n\nOK, that's interesting. There are ways to examine Pg's statistics on\ncolumns, get an idea of which stats might be less than accurate, etc,\nbut I'm not really familiar enough with it all to give you any useful\nadvice on the details. I can make one suggestion in the vein of shotgun\nthroubleshooting, though:\n\nTry altering the statistics targets on the tables of interest, or tweak\nthe default_statistics_target, then rerun VACUUM ANALYZE and re-test.\nMaybe start with a stats target of 100 and see what happens.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 07 Aug 2008 11:24:37 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "On Wed, Aug 6, 2008 at 10:24 PM, Craig Ringer\n<[email protected]> wrote:\n> OK, that's interesting. There are ways to examine Pg's statistics on\n> columns, get an idea of which stats might be less than accurate, etc,\n> but I'm not really familiar enough with it all to give you any useful\n> advice on the details. I can make one suggestion in the vein of shotgun\n> throubleshooting, though:\n>\n> Try altering the statistics targets on the tables of interest, or tweak\n> the default_statistics_target, then rerun VACUUM ANALYZE and re-test.\n> Maybe start with a stats target of 100 and see what happens.\n>\n> --\n> Craig Ringer\n\nI tried 100, 500, and 1000 for default_statistics_target. I think\nbelow is the right query to examine the stats. None of the levels of\ndefault_statistics_target I tried changed the query planners behavior.\n\nIt seems obvious that the stats on attr1 at the current level are\ninaccurate as there are over 100,000 unique enteries in the table. 
But\neven tweaking them to be more accurate doesn't seem to add any\nbenefit.\n\ndefault_statistics_target = 10\n\nSELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\npg_stats WHERE tablename = 'foos' AND attname='attr1';\n null_frac | n_distinct | most_common_vals | most_common_freqs\n-----------+------------+------------------+-------------------\n 0 | 1789 | {\"\"} | {0.625667}\n\ndefault_statistics_target = 100\n\nSELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\npg_stats WHERE tablename = 'foo' AND attname='attr1';\n null_frac | n_distinct | most_common_vals | most_common_freqs\n-------------+------------+------------------+-------------------\n 0.000266667 | 17429 | {\"\"} | {0.6223}\n\ndefault_statistics_target = 500\n\nSELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\npg_stats WHERE tablename = 'foo' AND attname='attr1';\n null_frac | n_distinct | most_common_vals | most_common_freqs\n-------------+------------+------------------+-------------------\n 0.000293333 | -0.17954 | {\"\"} | {0.62158}\n\ndefault_statistics_target = 1000\n\nSELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\npg_stats WHERE tablename = 'foo' AND attname='attr1';\n null_frac | n_distinct | most_common_vals | most_common_freqs\n-------------+------------+------------------+-------------------\n 0.000293333 | -0.304907 | {\"\"} | {0.621043}\n", "msg_date": "Thu, 7 Aug 2008 10:11:43 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "\"Joshua Shanks\" <[email protected]> writes:\n> It seems obvious that the stats on attr1 at the current level are\n> inaccurate as there are over 100,000 unique enteries in the table.\n\nWell, you haven't told us how big any of these tables are, so it's\nhard to tell if the n_distinct value is wrong or not ... but in\nany case I don't think that the stats on attr1 have anything to do\nwith your problem. The reason that the \"fast\" query is fast is that\nit benefits from the fact that there's only one bars row with\nbars_id = 12345. So the question is how many such rows does the\nplanner now think there are (try \"explain analyze select * from bars\nwhere bars_id = 12345\"), and if it's badly wrong, then you need to be\nlooking at the stats on bars.bars_id to find out why.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Aug 2008 17:12:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner not using the correct index " }, { "msg_contents": "On Thu, Aug 7, 2008 at 4:12 PM, Tom Lane <[email protected]> wrote:\n> Well, you haven't told us how big any of these tables are, so it's\n> hard to tell if the n_distinct value is wrong or not ... but in\n> any case I don't think that the stats on attr1 have anything to do\n> with your problem. The reason that the \"fast\" query is fast is that\n> it benefits from the fact that there's only one bars row with\n> bars_id = 12345. 
So the question is how many such rows does the\n> planner now think there are (try \"explain analyze select * from bars\n> where bars_id = 12345\"), and if it's badly wrong, then you need to be\n> looking at the stats on bars.bars_id to find out why.\n>\n> regards, tom lane\n>\n\nfoo is 400,000+ rows\nbar is 300,000+ rows\n\nI was just about to write back about this as with all my tinkering\ntoday I figured that to be the root cause.\n\nSELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\npg_stats WHERE tablename = 'bars' AND attname='bars_id';\n null_frac | n_distinct | most_common_vals | most_common_freqs\n-----------+------------+----------------------+---------------------------\n 0 | 14 | {145823,47063,24895} | {0.484667,0.257333,0.242}\n\nThose 3 values in reality and in the stats account for 98% of the\nrows. actual distinct values are around 350\n\nThat plus the information information on\nhttp://www.postgresql.org/docs/8.3/static/indexes-ordering.html make\nit all make sense as to why the query planner is doing what it is\ndoing.\n\nThe only problem is we rarely if ever call the query with the where\nclause containing those values. I did some testing and the planner\nworks awesome if we were to call those values but 99.9% of the time we\nare calling other values.\n\nIt seems like the planner would want to get the result set from\nbars.bars_id condition and if it is big using the index on the join to\navoid the separate sorting, but if it is small (0-5 rows which is our\nnormal case) use the primary key index to join and then just quickly\nsort. Is there any reason the planner doesn't do this?\n\nI found a way to run the query as a subselect which is fast for our\nnormal case but doesn't work for the edge cases so I might just have\nto do count on the bars_id and then pick a query based on that.\n", "msg_date": "Thu, 7 Aug 2008 17:01:53 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "\"Joshua Shanks\" <[email protected]> writes:\n\n> Those 3 values in reality and in the stats account for 98% of the\n> rows. actual distinct values are around 350\n\nMeasuring n_distinct from a sample is inherently difficult and unreliable.\nWhen 98% of your table falls into those categories it's leaving very few\nchances for the sample to find many other distinct values. \n\nI haven't seen the whole thread, if you haven't tried already you could try\nraising the statistics target for these columns -- that's usually necessary\nanyways when you have a very skewed distribution like this.\n\n> It seems like the planner would want to get the result set from\n> bars.bars_id condition and if it is big using the index on the join to\n> avoid the separate sorting, but if it is small (0-5 rows which is our\n> normal case) use the primary key index to join and then just quickly\n> sort. 
Is there any reason the planner doesn't do this?\n\nYeah, Heikki's suggested having a kind of \"branch\" plan node that knows how\nwhere the break-point is between two plans and can call the appropriate one.\nWe don't have anything like that yet.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Thu, 07 Aug 2008 23:38:07 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "On Thu, Aug 7, 2008 at 5:38 PM, Gregory Stark <[email protected]> wrote:\n> Measuring n_distinct from a sample is inherently difficult and unreliable.\n> When 98% of your table falls into those categories it's leaving very few\n> chances for the sample to find many other distinct values.\n>\n> I haven't seen the whole thread, if you haven't tried already you could try\n> raising the statistics target for these columns -- that's usually necessary\n> anyways when you have a very skewed distribution like this.\n>\n\nI did some tweaking on default_statistics_target earlier in the thread\nwith no luck. I just retried it with default_statistics_target set to\n500 and did the VACUUM ANALYZE on the other table this time and\nstarted to see better results and more of the behavior I would expect.\n\nIs there a way to set the stats target for just one column? That seems\nlike what we might need to do.\n\n> Yeah, Heikki's suggested having a kind of \"branch\" plan node that knows how\n> where the break-point is between two plans and can call the appropriate one.\n> We don't have anything like that yet.\n>\n\nIs this already on a todo list or is there a bug for it?\n", "msg_date": "Thu, 7 Aug 2008 18:04:58 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "\"Joshua Shanks\" <[email protected]> writes:\n> SELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\n> pg_stats WHERE tablename = 'bars' AND attname='bars_id';\n> null_frac | n_distinct | most_common_vals | most_common_freqs\n> -----------+------------+----------------------+---------------------------\n> 0 | 14 | {145823,47063,24895} | {0.484667,0.257333,0.242}\n\n> Those 3 values in reality and in the stats account for 98% of the\n> rows. actual distinct values are around 350\n\nSo you need to increase the stats target for this column. With those\nnumbers the planner is going to assume that any value that's not one\nof the big three appears about (1 - (0.484667+0.257333+0.242)) / 11\nof the time, or several hundred times in 300K rows. If n_distinct were\nup around 350 it would be estimating just a dozen or so occurrences,\nwhich should push the join plan into the shape you want. 
It's likely\nthat it won't bother to include any more entries in most_common_vals\nno matter how much you raise the target; but a larger sample should\ndefinitely give it a better clue about n_distinct.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Aug 2008 19:48:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner not using the correct index " }, { "msg_contents": "Yeah with default_statistics_target at 500 most_common_vals had 4\nvalues with the fourth having a frequency of 1.5% and distinct have\n250+ in it.\n\nHow do I increase the stats target for just one column?\n\nOn Thu, Aug 7, 2008 at 6:48 PM, Tom Lane <[email protected]> wrote:\n> \"Joshua Shanks\" <[email protected]> writes:\n>> SELECT null_frac, n_distinct, most_common_vals, most_common_freqs FROM\n>> pg_stats WHERE tablename = 'bars' AND attname='bars_id';\n>> null_frac | n_distinct | most_common_vals | most_common_freqs\n>> -----------+------------+----------------------+---------------------------\n>> 0 | 14 | {145823,47063,24895} | {0.484667,0.257333,0.242}\n>\n>> Those 3 values in reality and in the stats account for 98% of the\n>> rows. actual distinct values are around 350\n>\n> So you need to increase the stats target for this column. With those\n> numbers the planner is going to assume that any value that's not one\n> of the big three appears about (1 - (0.484667+0.257333+0.242)) / 11\n> of the time, or several hundred times in 300K rows. If n_distinct were\n> up around 350 it would be estimating just a dozen or so occurrences,\n> which should push the join plan into the shape you want. It's likely\n> that it won't bother to include any more entries in most_common_vals\n> no matter how much you raise the target; but a larger sample should\n> definitely give it a better clue about n_distinct.\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 7 Aug 2008 18:55:42 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner not using the correct index" }, { "msg_contents": "\"Joshua Shanks\" <[email protected]> writes:\n> How do I increase the stats target for just one column?\n\nLook under ALTER TABLE.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Aug 2008 20:11:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query planner not using the correct index " }, { "msg_contents": "Just for closure I ended up doing\n\nALTER TABLE bars ALTER COLUMN bars_id SET STATISTICS 500;\n\nOn Thu, Aug 7, 2008 at 7:11 PM, Tom Lane <[email protected]> wrote:\n> \"Joshua Shanks\" <[email protected]> writes:\n>> How do I increase the stats target for just one column?\n>\n> Look under ALTER TABLE.\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 8 Aug 2008 09:28:44 -0500", "msg_from": "\"Joshua Shanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query planner not using the correct index" } ]
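For completeness, the per-column change Joshua settled on only takes effect after a fresh ANALYZE, so the whole sequence looks roughly like this (500 is the value reported in the thread, not a general recommendation):

  ALTER TABLE bars ALTER COLUMN bars_id SET STATISTICS 500;
  ANALYZE bars;

  -- confirm the new n_distinct / most_common_vals estimates
  SELECT null_frac, n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
   WHERE tablename = 'bars' AND attname = 'bars_id';

  -- re-check the join plan from the start of the thread
  EXPLAIN ANALYZE
  SELECT foos.*
    FROM foos
    JOIN bars ON foos.id = bars.foos_id
   WHERE bars.bars_id = 12345
   ORDER BY attr1
   LIMIT 3;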
[ { "msg_contents": "Hi,\n\nBelow command has been running since ~700 minutes in one of our\nPostgreSQL servers.\n\n DELETE FROM mugpsreglog\n WHERE NOT EXISTS (SELECT 1\n FROM mueventlog\n WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);\n\n Seq Scan on mugpsreglog (cost=0.00..57184031821394.73 rows=6590986 width=6)\n Filter: (NOT (subplan))\n SubPlan\n -> Seq Scan on mueventlog (cost=0.00..4338048.00 rows=1 width=0)\n Filter: (eventlogid = $0)\n\nHere is some information about related tables:\n\n # SELECT pg_relation_size('emove.mueventlog') / pow(1024, 2);\n ?column?\n ----------\n 11440\n (1 row)\n \n # SELECT pg_relation_size('emove.mugpsreglog') / pow(1024, 2);\n ?column?\n -------------\n 631.8046875\n (1 row)\n\nAnd there isn't any constraints (FK/PK), triggers, indexes, etc. on any\nof the tables. (We're in the phase of a migration, many DELETE commands\nsimilar to above gets executed to relax constraints will be introduced.)\n\nHere are related postgresql.conf lines:\n\n shared_buffers = 512MB\n max_prepared_transactions = 0\n work_mem = 8MB\n maintenance_work_mem = 512MB\n max_fsm_pages = 204800\n max_fsm_relations = 8192\n vacuum_cost_delay = 10\n wal_buffers = 2MB\n checkpoint_segments = 128\n checkpoint_timeout = 1h\n checkpoint_completion_target = 0.5\n checkpoint_warning = 1min\n effective_cache_size = 5GB\n autovacuum = off\n \nAnd system hardware & software profile is:\n\n OS : Red Hat Enterprise Linux ES release 4 (Nahant Update 5)\n PostgreSQL: 8.3.1\n Filesystem: GFS (IBM DS4700 SAN)\n CPU : 4 x Quad Core Intel(R) Xeon(TM) CPU 3.00GHz\n Memory : 8GB\n\nDoes anybody have an idea what might be causing the problem? Any\nsuggestions to improve the performance during such bulk DELETEs?\n\n\nRegards.\n", "msg_date": "Thu, 07 Aug 2008 09:09:39 +0300", "msg_from": "Volkan YAZICI <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpectedly Long DELETE Wait" }, { "msg_contents": "Volkan YAZICI wrote:\n> Hi,\n> \n> Below command has been running since ~700 minutes in one of our\n> PostgreSQL servers.\n> \n> DELETE FROM mugpsreglog\n> WHERE NOT EXISTS (SELECT 1\n> FROM mueventlog\n> WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);\n> \n> Seq Scan on mugpsreglog (cost=0.00..57184031821394.73 rows=6590986 width=6)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Seq Scan on mueventlog (cost=0.00..4338048.00 rows=1 width=0)\n> Filter: (eventlogid = $0)\n\nOuch - look at the estimated cost on that!\n\n> And there isn't any constraints (FK/PK), triggers, indexes, etc. on any\n> of the tables. (We're in the phase of a migration, many DELETE commands\n> similar to above gets executed to relax constraints will be introduced.)\n\nWell there you go. Add an index on eventlogid for mugpsreglog.\n\nAlternatively, if you increased your work_mem that might help. Try SET \nwork_mem='64MB' (or even higher) before running the explain and see if \nit tries a materialize. 
For situations like this where you're doing big \none-off queries you can afford to increase resource limits.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Aug 2008 08:58:00 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpectedly Long DELETE Wait" }, { "msg_contents": "On Thu, 07 Aug 2008, Richard Huxton <[email protected]> writes:\n> Volkan YAZICI wrote:\n>> DELETE FROM mugpsreglog\n>> WHERE NOT EXISTS (SELECT 1\n>> FROM mueventlog\n>> WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);\n>>\n>> Seq Scan on mugpsreglog (cost=0.00..57184031821394.73 rows=6590986 width=6)\n>> Filter: (NOT (subplan))\n>> SubPlan\n>> -> Seq Scan on mueventlog (cost=0.00..4338048.00 rows=1 width=0)\n>> Filter: (eventlogid = $0)\n>\n> Ouch - look at the estimated cost on that!\n>\n>> And there isn't any constraints (FK/PK), triggers, indexes, etc. on any\n>> of the tables. (We're in the phase of a migration, many DELETE commands\n>> similar to above gets executed to relax constraints will be introduced.)\n>\n> Well there you go. Add an index on eventlogid for mugpsreglog.\n\nHrm... Adding an INDEX on \"eventlogid\" column of \"mueventlog\" table\nsolved the problem. Anyway, thanks for your kindly help.\n\n> Alternatively, if you increased your work_mem that might help. Try SET\n> work_mem='64MB' (or even higher) before running the explain and see if it tries\n> a materialize. For situations like this where you're doing big one-off queries\n> you can afford to increase resource limits.\n\nNone of 64MB, 128MB, 256MB and 512MB settings make a change in the query\nplan.\n\n\nRegards.\n", "msg_date": "Thu, 07 Aug 2008 17:36:28 +0300", "msg_from": "Volkan YAZICI <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpectedly Long DELETE Wait" } ]
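For reference, the fix Volkan reports can be written out as a single statement; the index name here is arbitrary, and the ANALYZE simply refreshes statistics before re-checking the plan of the original command.

  CREATE INDEX mueventlog_eventlogid_idx ON mueventlog (eventlogid);
  ANALYZE mueventlog;

  EXPLAIN
  DELETE FROM mugpsreglog
   WHERE NOT EXISTS (SELECT 1
                       FROM mueventlog
                      WHERE mueventlog.eventlogid = mugpsreglog.eventlogid);

With the index in place, the per-row sequential scan of mueventlog in the subplan should become an index probe, which is what let the bulk DELETE finish.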
[ { "msg_contents": "Hi, I have a timestamptz field that I want to use with a query, but I \ndon’t need the full timestamp resolution, so I’ve created a \nday_trunc(timestamptz) immutable function which I’ll use with the \nquery and with a new index:\n\nlogs=> create index test_idx on blackbox (day_trunc(ts));\n\nHowever, the query plan doesn’t use the index:\n\nlogs=>explain select count(*) from blackbox group by day_trunc(ts) \norder by day_trunc(ts);\n QUERY PLAN\n------------------------------------------------------------------------------------------\n GroupAggregate (cost=98431.58..119773.92 rows=74226 width=8)\n -> Sort (cost=98431.58..99050.92 rows=247736 width=8)\n Sort Key: (day_trunc(ts))\n -> Seq Scan on blackbox (cost=0.00..72848.36 rows=247736 \nwidth=8)\n(4 rows)\n\nwhile with this index:\n\nlogs=>create index test_2_idx on blackbox (ts);\n\nthe query plan is the expected one:\n\nlogs=>explain select count(*) from blackbox group by ts order by ts;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..19109.66 rows=74226 width=8)\n -> Index Scan using test_2_idx on blackbox (cost=0.00..16943.16 \nrows=247736 width=8)\n\nBut I fail to see why. Any hints?\n\nThank you in advance\n--\nGiorgio Valoti", "msg_date": "Thu, 07 Aug 2008 09:50:04 +0200", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Query Plan choice with timestamps" }, { "msg_contents": "Giorgio Valoti wrote:\n> Hi, I have a timestamptz field that I want to use with a query, but I \n> don’t need the full timestamp resolution, so I’ve created a \n> day_trunc(timestamptz) immutable function which I’ll use with the query \n> and with a new index:\n> \n> logs=> create index test_idx on blackbox (day_trunc(ts));\n> \n> However, the query plan doesn’t use the index:\n\nDoes it use it ever? e.g. 
with\n> SELECT * FROM blackbox WHERE day_trunk(ts) = '...'\n\nIt�s used:\n\nlogs=> explain select * from blackbox where day_trunc(ts) = \nday_trunc(now());\n QUERY PLAN\n---------------------------------------------------------------------------\n Bitmap Heap Scan on blackbox (cost=22.38..3998.43 rows=1239 \nwidth=264)\n Recheck Cond: (day_trunc(ts) = day_trunc(now()))\n -> Bitmap Index Scan on date_idx (cost=0.00..22.07 rows=1239 \nwidth=0)\n Index Cond: (day_trunc(ts) = day_trunc(now()))\n\n--\nGiorgio Valoti", "msg_date": "Thu, 07 Aug 2008 14:30:51 +0200", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan choice with timestamps" }, { "msg_contents": "Giorgio Valoti wrote:\n> \n> On 07/ago/08, at 10:35, Richard Huxton wrote:\n> \n>> Giorgio Valoti wrote:\n>>> Hi, I have a timestamptz field that I want to use with a query, but I \n>>> don’t need the full timestamp resolution, so I’ve created a \n>>> day_trunc(timestamptz) immutable function which I’ll use with the \n>>> query and with a new index:\n>>> logs=> create index test_idx on blackbox (day_trunc(ts));\n>>> However, the query plan doesn’t use the index:\n>>\n>> Does it use it ever? e.g. with\n>> SELECT * FROM blackbox WHERE day_trunk(ts) = '...'\n> \n> It’s used:\n[snip]\n\nOK - so the index is working.\n\nIf you disable seq-scans before running the query, does it use it then?\n\nSET enable_seqscan = off;\n\n > logs=>explain select count(*) from blackbox group by day_trunc(ts) order\n > by day_trunc(ts);\n > QUERY PLAN\n > \n------------------------------------------------------------------------------------------ \n\n >\n > GroupAggregate (cost=98431.58..119773.92 rows=74226 width=8)\n\nIn particular:\n1. Is the estimated cost more or less than 119773.92?\n2. How does that match the actual time taken?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 07 Aug 2008 13:36:18 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan choice with timestamps" }, { "msg_contents": "Giorgio Valoti <[email protected]> writes:\n> GroupAggregate (cost=98431.58..119773.92 rows=74226 width=8)\n> -> Sort (cost=98431.58..99050.92 rows=247736 width=8)\n> Sort Key: (day_trunc(ts))\n> -> Seq Scan on blackbox (cost=0.00..72848.36 rows=247736 width=8)\n\n> GroupAggregate (cost=0.00..19109.66 rows=74226 width=8)\n> -> Index Scan using test_2_idx on blackbox (cost=0.00..16943.16 rows=247736 width=8)\n\nThese numbers seem pretty bogus: there is hardly any scenario in which a\nfull-table indexscan should be costed as significantly cheaper than a\nseqscan. Have you put in silly values for random_page_cost?\n\nIf you haven't mucked with the cost parameters, the only way I can think\nof to get this result is to have an enormously bloated table that's\nmostly empty. 
Maybe you need to review your vacuuming procedures.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Aug 2008 11:50:59 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan choice with timestamps " }, { "msg_contents": "\nOn 07/ago/08, at 17:50, Tom Lane wrote:\n\n> Giorgio Valoti <[email protected]> writes:\n>> GroupAggregate (cost=98431.58..119773.92 rows=74226 width=8)\n>> -> Sort (cost=98431.58..99050.92 rows=247736 width=8)\n>> Sort Key: (day_trunc(ts))\n>> -> Seq Scan on blackbox (cost=0.00..72848.36 rows=247736 \n>> width=8)\n>\n>> GroupAggregate (cost=0.00..19109.66 rows=74226 width=8)\n>> -> Index Scan using test_2_idx on blackbox \n>> (cost=0.00..16943.16 rows=247736 width=8)\n>\n> These numbers seem pretty bogus: there is hardly any scenario in \n> which a\n> full-table indexscan should be costed as significantly cheaper than a\n> seqscan. Have you put in silly values for random_page_cost?\n\nNo,\n\n>\n>\n> If you haven't mucked with the cost parameters, the only way I can \n> think\n> of to get this result is to have an enormously bloated table that's\n> mostly empty. Maybe you need to review your vacuuming procedures.\n\nI�ll review them.\n\nThank you\n--\nGiorgio Valoti", "msg_date": "Thu, 07 Aug 2008 20:37:09 +0200", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan choice with timestamps" }, { "msg_contents": "\nOn 07/ago/08, at 14:36, Richard Huxton wrote:\n\n> Giorgio Valoti wrote:\n>> On 07/ago/08, at 10:35, Richard Huxton wrote:\n>>> Giorgio Valoti wrote:\n>>>> Hi, I have a timestamptz field that I want to use with a query, \n>>>> but I don�t need the full timestamp resolution, so I�ve created a \n>>>> day_trunc(timestamptz) immutable function which I�ll use with the \n>>>> query and with a new index:\n>>>> logs=> create index test_idx on blackbox (day_trunc(ts));\n>>>> However, the query plan doesn�t use the index:\n>>>\n>>> Does it use it ever? e.g. with\n>>> SELECT * FROM blackbox WHERE day_trunk(ts) = '...'\n>> It�s used:\n> [snip]\n>\n> OK - so the index is working.\n>\n> If you disable seq-scans before running the query, does it use it \n> then?\n>\n> SET enable_seqscan = off;\n\nYes\n\n> [�]\n>\n> In particular:\n> 1. Is the estimated cost more or less than 119773.92?\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..122309.32 rows=74226 width=8)\n -> Index Scan using date_idx on blackbox (cost=0.00..101586.31 \nrows=247736 width=8)\n\n>\n> 2. 
How does that match the actual time taken?\n\n QUERY \nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..122309.32 rows=74226 width=8) (actual \ntime=0.222..1931.651 rows=428 loops=1)\n -> Index Scan using date_idx on blackbox (cost=0.00..101586.31 \nrows=247736 width=8) (actual time=0.072..1861.367 rows=247736 loops=1)\n Total runtime: 1931.782 ms\n\nBut I haven�t revised the vacuum settings.\n\nThank you\n--\nGiorgio Valoti\n\n\n", "msg_date": "Thu, 07 Aug 2008 20:42:19 +0200", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan choice with timestamps" }, { "msg_contents": "\nOn 07/ago/08, at 20:37, Giorgio Valoti wrote:\n\n>\n> [�]\n>\n>>\n>>\n>> If you haven't mucked with the cost parameters, the only way I can \n>> think\n>> of to get this result is to have an enormously bloated table that's\n>> mostly empty. Maybe you need to review your vacuuming procedures.\n>\n> I�ll review them.\n\nI�ve manually vacuum�ed the table:\nlogs=> VACUUM FULL verbose analyze blackbox;\nINFO: vacuuming \"public.blackbox\"\nINFO: \"blackbox\": found 0 removable, 247736 nonremovable row versions \nin 8436 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 137 to 1210 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 894432 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n2926 pages containing 564212 free bytes are potential move destinations.\nCPU 0.00s/0.04u sec elapsed 0.04 sec.\nINFO: index \"blackbox_pkey\" now contains 247736 row versions in 1602 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.01s/0.00u sec elapsed 0.01 sec.\nINFO: index \"vhost_idx\" now contains 247736 row versions in 1226 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.01 sec.\nINFO: index \"remoteip_idx\" now contains 247736 row versions in 682 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"date_idx\" now contains 247736 row versions in 547 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"test_2_idx\" now contains 247736 row versions in 682 pages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"blackbox\": moved 0 row versions, truncated 8436 to 8436 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: vacuuming \"pg_toast.pg_toast_45532\"\nINFO: \"pg_toast_45532\": found 0 removable, 0 nonremovable row \nversions in 0 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nNonremovable row versions range from 0 to 0 bytes long.\nThere were 0 unused item pointers.\nTotal free space (including removable row versions) is 0 bytes.\n0 pages are or will become empty, including 0 at the end of the table.\n0 pages containing 0 free bytes are potential move destinations.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"pg_toast_45532_index\" now contains 0 row versions in 1 \npages\nDETAIL: 0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.blackbox\"\nINFO: \"blackbox\": scanned 3000 of 8436 pages, containing 87941 live \nrows and 0 dead 
rows; 3000 rows in sample, 247290 estimated total rows\nVACUUM\n\nAnd here the explain results:\nlogs=> explain select count(*) from blackbox group by day_trunc(ts) \norder by day_trunc(ts);\n QUERY PLAN\n-----------------------------------------------------------------------------\n Sort (cost=74210.52..74211.54 rows=407 width=8)\n Sort Key: (day_trunc(ts))\n -> HashAggregate (cost=74086.04..74192.88 rows=407 width=8)\n -> Seq Scan on blackbox (cost=0.00..72847.36 rows=247736 \nwidth=8)\n(4 rows)\n\nlogs=> explain select count(*) from blackbox group by ts order by ts;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.00..18381.54 rows=77738 width=8)\n -> Index Scan using test_2_idx on blackbox (cost=0.00..16171.13 \nrows=247736 width=8)\n(2 rows)\n\nMaybe it�s the silly test queries that prove nothing:\n\nlogs=> explain select * from blackbox where day_trunc(ts) = \nday_trunc(now());\n QUERY PLAN\n-------------------------------------------------------------------------------\n Index Scan using date_idx on blackbox (cost=0.50..158.65 rows=569 \nwidth=237)\n Index Cond: (day_trunc(ts) = day_trunc(now()))\n(2 rows)\n\nCiao\n--\nGiorgio Valoti", "msg_date": "Thu, 07 Aug 2008 21:29:23 +0200", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan choice with timestamps" }, { "msg_contents": "Giorgio Valoti <[email protected]> writes:\n> On 07/ago/08, at 17:50, Tom Lane wrote:\n>> These numbers seem pretty bogus: there is hardly any scenario in \n>> which a\n>> full-table indexscan should be costed as significantly cheaper than a\n>> seqscan. Have you put in silly values for random_page_cost?\n\n> No,\n\nI looked at it more closely and realized that the cost discrepancy is\nfrom the evaluation of the function: having to evaluate a SQL or plpgsql\nfunction 247736 times more than explains the cost estimate differential\ncompared to a query that involves no function call. Some experiments\nhere suggest that it hardly matters whether the query uses indexscan or\nseqscan because the time is dominated by the function calls anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Aug 2008 17:01:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Plan choice with timestamps " }, { "msg_contents": "\nOn 07/ago/08, at 23:01, Tom Lane wrote:\n\n> Giorgio Valoti <[email protected]> writes:\n>> On 07/ago/08, at 17:50, Tom Lane wrote:\n>>> These numbers seem pretty bogus: there is hardly any scenario in\n>>> which a\n>>> full-table indexscan should be costed as significantly cheaper \n>>> than a\n>>> seqscan. Have you put in silly values for random_page_cost?\n>\n>> No,\n>\n> I looked at it more closely and realized that the cost discrepancy is\n> from the evaluation of the function: having to evaluate a SQL or \n> plpgsql\n> function 247736 times more than explains the cost estimate \n> differential\n> compared to a query that involves no function call. Some experiments\n> here suggest that it hardly matters whether the query uses indexscan \n> or\n> seqscan because the time is dominated by the function calls anyway.\n\nI see, thank you Tom. Could it be a good idea adding some notes about \nit in <http://www.postgresql.org/docs/8.3/interactive/indexes-expressional.html \n >? 
As you said, since the function call dominates the query cost, in \nthis case, I think there’s no point to use an index expression.\n\nCiao\n--\nGiorgio Valoti", "msg_date": "Fri, 08 Aug 2008 07:38:10 +0200", "msg_from": "Giorgio Valoti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query Plan choice with timestamps" } ]
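The thread never shows how day_trunc() is actually defined, so any rewrite is guesswork; but if the day boundary can be pinned to a fixed zone such as UTC, a similar expression index can be built from built-in functions alone, avoiding the per-row user-function call Tom identified. The expression below is an assumption about what day_trunc() computes, not its real definition.

  -- date_trunc() on a timestamp WITHOUT time zone is immutable,
  -- so this expression is indexable with no wrapper function
  CREATE INDEX blackbox_day_utc_idx
      ON blackbox (date_trunc('day', ts AT TIME ZONE 'UTC'));

  -- queries must spell the expression identically for the index to match
  SELECT date_trunc('day', ts AT TIME ZONE 'UTC') AS day, count(*)
    FROM blackbox
   GROUP BY date_trunc('day', ts AT TIME ZONE 'UTC')
   ORDER BY day;

Whether the planner prefers the index for the aggregate still depends on the cost estimates, but the expression itself evaluates in C rather than through a SQL or PL/pgSQL call.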
[ { "msg_contents": "Hi list,\n\n\nI'm helping a customer with their new postgresql server and have some \nquestions.\n\nThe servers is connected to a SAN with dual raid cards which all have \n512MB cache with BBU.\n\nThe configuration they set up is now.\n2 SAS 15K drives in RAID 1 on the internal controller for OS.\n\n6 SAS 15K drives in RAID 10 on one of the SAN controllers for database\n\n10 SATA 10K drives in RAID 5 on the second SAN controller for file \nstorage.\n\nMy first idea was to have one partition on the RAID 10 using ext3 with \ndata=writeback, noatime as mount options.\n\nBut I wonder if I should have 2 partitions on the RAID 10 one for the \nPGDATA dir using ext3 and one partition for XLOGS using ext2.\n\nShould I do this instead? Is there just a minor speed bump as the \npartitions are on the same channel or?\n\nIf this is the way to go how should it be configured?\n\nHow big should the xlog partition be?\n\nHow should the fstab look like for the XLOG partition?\n\nAny other pointers, hints would be appreciated.\n\nCheers,\nHenke\n\n\n\n", "msg_date": "Thu, 7 Aug 2008 10:46:28 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Filesystem setup on new system" }, { "msg_contents": "On Thu, 7 Aug 2008, Henrik wrote:\n\n> My first idea was to have one partition on the RAID 10 using ext3 with \n> data=writeback, noatime as mount options.\n>\n> But I wonder if I should have 2 partitions on the RAID 10 one for the PGDATA \n> dir using ext3 and one partition for XLOGS using ext2.\n\nReally depends on your write volume. The write cache on your controller \nwill keep having a separate xlog disk from being as important as it is \nwithout one. If your write volume is really high though, it may still be \na bottleneck, and you may discover your app runs better with a dedicated \next2 xlog disk instead.\n\nThe simple version is:\n\nWAL write volume extremely high->dedicated xlog can be better\n\nWAL volume low->more disks for the database array better even if that \nmixes the WAL on there as well\n\nIf you want a true answer for which is better, you have to measure your \napplication running on this hardware.\n\n> 6 SAS 15K drives in RAID 10 on one of the SAN controllers for database\n\nWith only 6 disks available, in general you won't be able to reach the WAL \nas a bottleneck before being limited by seeks on the remaining 4 database \ndisks, so you might as well group all 6 together. It's possible your \nparticular application might prefer it the other way though, if you're \ndoing a while lot of small writes for example. I've seen a separate WAL \nhandle low-level benchmarks better, but on more real-world loads it's \nharder to run into that situation.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 8 Aug 2008 18:42:05 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem setup on new system" } ]
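Greg's "measure your application" point can be approximated from SQL alone before deciding on a separate xlog partition, by sampling the WAL insert position around a representative busy period. This is only a rough sketch, and interpreting the result assumes the default 16MB xlog segment size.

  -- before the busy period
  SELECT now() AS sampled_at, pg_current_xlog_location();

  -- ... let the normal workload run for a while ...

  -- after the busy period
  SELECT now() AS sampled_at, pg_current_xlog_location();

The number of 16MB segments crossed between the two samples, divided by the elapsed time, gives an order-of-magnitude WAL write rate; only when that rate is genuinely high does the dedicated ext2 xlog partition described above start to pay off.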
[ { "msg_contents": "Hey all, I have two tables that look like this:\n\n\nCREATE TABLE details\n(\n cust_code character varying(6) NOT NULL,\n cust_po character varying(20) NOT NULL,\n date_ordd date NOT NULL,\n item_nbr integer NOT NULL,\n orig_qty_ordd integer,\n CONSTRAINT details_pkey PRIMARY KEY (cust_code, cust_po, date_ordd, \nitem_nbr)\n);\nCREATE INDEX idx_details ON details USING btree (cust_code, cust_po, \ndate_ordd);\n\n\nCREATE TABLE status\n(\n id serial NOT NULL,\n cust_code character varying(6) NOT NULL,\n cust_po character varying(20) NOT NULL,\n date_ordd date NOT NULL,\n item_nbr integer,\n ext_nbr integer,\n....\n....\n...\n CONSTRAINT status_pkey PRIMARY KEY (id)\n);\nCREATE INDEX idx_status_idx2 ON status USING btree (cust_code, \ncust_po, date_ordd, item_nbr);\n\n\nBoth tables are analyzed full and Table details contains around \n390.000 records and status contains around 580.000 records.\n\n\nDoing the following SQL:\nexplain analyze\nSELECT\na \n.cust_code \n,a \n.cust_po \n,a.date_ordd,a.item_nbr,b.Lines_part_ordd,a.orig_qty_ordd,b.Ship_via\nFROM details a\nJOIN status b ON (a.cust_code = b.cust_code AND a.cust_po = b.cust_po \nAND a.date_ordd = b.date_ordd AND a.item_nbr = b.item_nbr )\n\nCreated this execution plan\n\nMerge Join (cost=0.76..71872.13 rows=331 width=41) (actual \ntime=0.393..225404.902 rows=579742 loops=1)\n Merge Cond: (((a.cust_code)::text = (b.cust_code)::text) AND \n((a.cust_po)::text = (b.cust_po)::text) AND (a.date_ordd = b.date_ordd))\n Join Filter: (a.item_nbr = b.item_nbr)\n -> Index Scan using idx_details on details a (cost=0.00..23847.74 \nrows=389147 width=28) (actual time=0.244..927.752 rows=389147 loops=1)\n -> Index Scan using idx_status_idx2 on status b \n(cost=0.00..40249.31 rows=579933 width=37) (actual \ntime=0.142..84250.016 rows=176701093 loops=1)\nTotal runtime: 225541.232 ms\n\nQuestion to myself is why does it want to do a Join Filter on item_nbr?\n\nWhen drop the index idx_details I get this execution plan:\n\nMerge Join (cost=0.81..74650.36 rows=331 width=41) (actual \ntime=0.106..2159.315 rows=579742 loops=1)\n Merge Cond: (((a.cust_code)::text = (b.cust_code)::text) AND \n((a.cust_po)::text = (b.cust_po)::text) AND (a.date_ordd = \nb.date_ordd) AND (a.item_nbr = b.item_nbr))\n -> Index Scan using details_pkey on details a \n(cost=0.00..24707.75 rows=389147 width=28) (actual time=0.030..562.234 \nrows=389147 loops=1)\n -> Index Scan using idx_status_idx2 on status b \n(cost=0.00..40249.31 rows=579933 width=37) (actual time=0.069..289.359 \nrows=579933 loops=1)\nTotal runtime: 2226.793 ms\n\nNotice the difference in speed, the second one is about 100 times \nfaster.\n\nAs per Tom's advice I tried to set random_page_cost to 2, but that \ndoesn't change my execution plan, also not with 1...\n\nNow I add the index idx_details back again.......\n\nAnd when I set my cpu_tuple_cost to 0.25 I get the 'correct' execution \nplan as shown above.\n\nset cpu_tuple_cost=0.25;\nexplain analyze\nSELECT\na \n.cust_code \n,a \n.cust_po \n,a.date_ordd,a.item_nbr,b.Lines_part_ordd,a.orig_qty_ordd,b.Ship_via\nFROM acc_sc.details a\nJOIN acc_sc.status b ON (a.cust_code = b.cust_code AND a.cust_po = \nb.cust_po AND a.date_ordd = b.date_ordd AND a.item_nbr = b.item_nbr )\n\nThis execution plan is the same as the one above when idx_details was \nremoved\n\nMerge Join (cost=3.45..307306.36 rows=331 width=41) (actual \ntime=0.038..2246.726 rows=579742 loops=1)\n Merge Cond: (((a.cust_code)::text = (b.cust_code)::text) AND \n((a.cust_po)::text = 
(b.cust_po)::text) AND (a.date_ordd = \nb.date_ordd) AND (a.item_nbr = b.item_nbr))\n -> Index Scan using details_pkey on details a \n(cost=0.00..118103.03 rows=389147 width=28) (actual \ntime=0.020..589.779 rows=389147 loops=1)\n -> Index Scan using idx_status_idx2 on status b \n(cost=0.00..179433.23 rows=579933 width=37) (actual \ntime=0.011..305.963 rows=579933 loops=1)\nTotal runtime: 2318.647 ms\n\n\n\nWhat I am trying to understand is why PostgreSQL want's to use \nidx_details over it's primary_key?\nI am sure that 'simply' setting cpu_tuple_cost to 0.25 from 0.01 is \nnot a good idea......?\n\n\n\nRies van Twisk\n\n\nSHOW ALL;\nadd_missing_from;off;Automatically adds missing table references to \nFROM clauses.\nallow_system_table_mods;off;Allows modifications of the structure of \nsystem tables.\narchive_command;(disabled);Sets the shell command that will be called \nto archive a WAL file.\narchive_mode;off;Allows archiving of WAL files using archive_command.\narchive_timeout;0;Forces a switch to the next xlog file if a new file \nhas not been started within N seconds.\narray_nulls;on;Enable input of NULL elements in arrays.\nauthentication_timeout;1min;Sets the maximum allowed time to complete \nclient authentication.\nautovacuum;on;Starts the autovacuum subprocess.\nautovacuum_analyze_scale_factor;0.1;Number of tuple inserts, updates \nor deletes prior to analyze as a fraction of reltuples.\nautovacuum_analyze_threshold;50;Minimum number of tuple inserts, \nupdates or deletes prior to analyze.\nautovacuum_freeze_max_age;200000000;Age at which to autovacuum a table \nto prevent transaction ID wraparound.\nautovacuum_max_workers;3;Sets the maximum number of simultaneously \nrunning autovacuum worker processes.\nautovacuum_naptime;1min;Time to sleep between autovacuum runs.\nautovacuum_vacuum_cost_delay;20ms;Vacuum cost delay in milliseconds, \nfor autovacuum.\nautovacuum_vacuum_cost_limit;-1;Vacuum cost amount available before \nnapping, for autovacuum.\nautovacuum_vacuum_scale_factor;0.2;Number of tuple updates or deletes \nprior to vacuum as a fraction of reltuples.\nautovacuum_vacuum_threshold;50;Minimum number of tuple updates or \ndeletes prior to vacuum.\nbackslash_quote;safe_encoding;Sets whether \"\\'\" is allowed in string \nliterals.\nbgwriter_delay;200ms;Background writer sleep time between rounds.\nbgwriter_lru_maxpages;100;Background writer maximum number of LRU \npages to flush per round.\nbgwriter_lru_multiplier;2;Background writer multiplier on average \nbuffers to scan per round.\nblock_size;8192;Shows the size of a disk block.\nbonjour_name;;Sets the Bonjour broadcast service name.\ncheck_function_bodies;on;Check function bodies during CREATE FUNCTION.\ncheckpoint_completion_target;0.5;Time spent flushing dirty buffers \nduring checkpoint, as fraction of checkpoint interval.\ncheckpoint_segments;3;Sets the maximum distance in log segments \nbetween automatic WAL checkpoints.\ncheckpoint_timeout;5min;Sets the maximum time between automatic WAL \ncheckpoints.\ncheckpoint_warning;30s;Enables warnings if checkpoint segments are \nfilled more frequently than this.\nclient_encoding;UNICODE;Sets the client's character set encoding.\nclient_min_messages;notice;Sets the message levels that are sent to \nthe client.\ncommit_delay;0;Sets the delay in microseconds between transaction \ncommit and flushing WAL to disk.\ncommit_siblings;5;Sets the minimum concurrent open transactions before \nperforming commit_delay.\nconfig_file;/usr/local/pgsql/data/postgresql.conf;Sets the 
server's \nmain configuration file.\nconstraint_exclusion;off;Enables the planner to use constraints to \noptimize queries.\ncpu_index_tuple_cost;0.005;Sets the planner's estimate of the cost of \nprocessing each index entry during an index scan.\ncpu_operator_cost;0.0025;Sets the planner's estimate of the cost of \nprocessing each operator or function call.\ncpu_tuple_cost;0.01;Sets the planner's estimate of the cost of \nprocessing each tuple (row).\ncustom_variable_classes;;Sets the list of known custom variable classes.\ndata_directory;/usr/local/pgsql/data;Sets the server's data directory.\nDateStyle;ISO, MDY;Sets the display format for date and time values.\ndb_user_namespace;off;Enables per-database user names.\ndeadlock_timeout;1s;Sets the time to wait on a lock before checking \nfor deadlock.\ndebug_assertions;off;Turns on various assertion checks.\ndebug_pretty_print;off;Indents parse and plan tree displays.\ndebug_print_parse;off;Prints the parse tree to the server log.\ndebug_print_plan;off;Prints the execution plan to server log.\ndebug_print_rewritten;off;Prints the parse tree after rewriting to \nserver log.\ndefault_statistics_target;10;Sets the default statistics target.\ndefault_tablespace;;Sets the default tablespace to create tables and \nindexes in.\ndefault_text_search_config;pg_catalog.simple;Sets default text search \nconfiguration.\ndefault_transaction_isolation;read committed;Sets the transaction \nisolation level of each new transaction.\ndefault_transaction_read_only;off;Sets the default read-only status of \nnew transactions.\ndefault_with_oids;off;Create new tables with OIDs by default.\ndynamic_library_path;$libdir;Sets the path for dynamically loadable \nmodules.\neffective_cache_size;1GB;Sets the planner's assumption about the size \nof the disk cache.\nenable_bitmapscan;on;Enables the planner's use of bitmap-scan plans.\nenable_hashagg;on;Enables the planner's use of hashed aggregation plans.\nenable_hashjoin;on;Enables the planner's use of hash join plans.\nenable_indexscan;on;Enables the planner's use of index-scan plans.\nenable_mergejoin;on;Enables the planner's use of merge join plans.\nenable_nestloop;on;Enables the planner's use of nested-loop join plans.\nenable_seqscan;on;Enables the planner's use of sequential-scan plans.\nenable_sort;on;Enables the planner's use of explicit sort steps.\nenable_tidscan;on;Enables the planner's use of TID scan plans.\nescape_string_warning;on;Warn about backslash escapes in ordinary \nstring literals.\nexplain_pretty_print;on;Uses the indented output format for EXPLAIN \nVERBOSE.\nexternal_pid_file;;Writes the postmaster PID to the specified file.\nextra_float_digits;0;Sets the number of digits displayed for floating- \npoint values.\nfrom_collapse_limit;8;Sets the FROM-list size beyond which subqueries \nare not collapsed.\nfsync;on;Forces synchronization of updates to disk.\nfull_page_writes;on;Writes full pages to WAL when first modified after \na checkpoint.\ngeqo;on;Enables genetic query optimization.\ngeqo_effort;5;GEQO: effort is used to set the default for other GEQO \nparameters.\ngeqo_generations;0;GEQO: number of iterations of the algorithm.\ngeqo_pool_size;0;GEQO: number of individuals in the population.\ngeqo_selection_bias;2;GEQO: selective pressure within the population.\ngeqo_threshold;12;Sets the threshold of FROM items beyond which GEQO \nis used.\ngin_fuzzy_search_limit;0;Sets the maximum allowed result for exact \nsearch by GIN.\nhba_file;/usr/local/pgsql/data/pg_hba.conf;Sets the server's 
\"hba\" \nconfiguration file.\nident_file;/usr/local/pgsql/data/pg_ident.conf;Sets the server's \n\"ident\" configuration file.\nignore_system_indexes;off;Disables reading from system indexes.\ninteger_datetimes;off;Datetimes are integer based.\njoin_collapse_limit;8;Sets the FROM-list size beyond which JOIN \nconstructs are not flattened.\nkrb_caseins_users;off;Sets whether Kerberos and GSSAPI user names \nshould be treated as case-insensitive.\nkrb_realm;;Sets realm to match Kerberos and GSSAPI users against.\nkrb_server_hostname;;Sets the hostname of the Kerberos server.\nkrb_server_keyfile;;Sets the location of the Kerberos server key file.\nkrb_srvname;postgres;Sets the name of the Kerberos service.\nlc_collate;C;Shows the collation order locale.\nlc_ctype;C;Shows the character classification and case conversion \nlocale.\nlc_messages;;Sets the language in which messages are displayed.\nlc_monetary;C;Sets the locale for formatting monetary amounts.\nlc_numeric;C;Sets the locale for formatting numbers.\nlc_time;C;Sets the locale for formatting date and time values.\nlisten_addresses;*;Sets the host name or IP address(es) to listen to.\nlocal_preload_libraries;;Lists shared libraries to preload into each \nbackend.\nlog_autovacuum_min_duration;-1;Sets the minimum execution time above \nwhich autovacuum actions will be logged.\nlog_checkpoints;off;Logs each checkpoint.\nlog_connections;off;Logs each successful connection.\nlog_destination;stderr;Sets the destination for server log output.\nlog_directory;pg_log;Sets the destination directory for log files.\nlog_disconnections;off;Logs end of a session, including duration.\nlog_duration;off;Logs the duration of each completed SQL statement.\nlog_error_verbosity;default;Sets the verbosity of logged messages.\nlog_executor_stats;off;Writes executor performance statistics to the \nserver log.\nlog_filename;postgresql-%Y-%m-%d_%H%M%S.log;Sets the file name pattern \nfor log files.\nlog_hostname;off;Logs the host name in the connection logs.\nlog_line_prefix;;Controls information prefixed to each log line.\nlog_lock_waits;off;Logs long lock waits.\nlog_min_duration_statement;-1;Sets the minimum execution time above \nwhich statements will be logged.\nlog_min_error_statement;error;Causes all statements generating error \nat or above this level to be logged.\nlog_min_messages;notice;Sets the message levels that are logged.\nlog_parser_stats;off;Writes parser performance statistics to the \nserver log.\nlog_planner_stats;off;Writes planner performance statistics to the \nserver log.\nlog_rotation_age;1d;Automatic log file rotation will occur after N \nminutes.\nlog_rotation_size;10MB;Automatic log file rotation will occur after N \nkilobytes.\nlog_statement;none;Sets the type of statements logged.\nlog_statement_stats;off;Writes cumulative performance statistics to \nthe server log.\nlog_temp_files;-1;Log the use of temporary files larger than this \nnumber of kilobytes.\nlog_timezone;America/Guayaquil;Sets the time zone to use in log \nmessages.\nlog_truncate_on_rotation;off;Truncate existing log files of same name \nduring log rotation.\nlogging_collector;off;Start a subprocess to capture stderr output and/ \nor csvlogs into log files.\nmaintenance_work_mem;256MB;Sets the maximum memory to be used for \nmaintenance operations.\nmax_connections;100;Sets the maximum number of concurrent connections.\nmax_files_per_process;1000;Sets the maximum number of simultaneously \nopen files for each server process.\nmax_fsm_pages;524288;Sets the maximum 
number of disk pages for which \nfree space is tracked.\nmax_fsm_relations;1000;Sets the maximum number of tables and indexes \nfor which free space is tracked.\nmax_function_args;100;Shows the maximum number of function arguments.\nmax_identifier_length;63;Shows the maximum identifier length.\nmax_index_keys;32;Shows the maximum number of index keys.\nmax_locks_per_transaction;64;Sets the maximum number of locks per \ntransaction.\nmax_prepared_transactions;5;Sets the maximum number of simultaneously \nprepared transactions.\nmax_stack_depth;2MB;Sets the maximum stack depth, in kilobytes.\npassword_encryption;on;Encrypt passwords.\nport;5432;Sets the TCP port the server listens on.\npost_auth_delay;0;Waits N seconds on connection startup after \nauthentication.\npre_auth_delay;0;Waits N seconds on connection startup before \nauthentication.\nrandom_page_cost;4;Sets the planner's estimate of the cost of a \nnonsequentially fetched disk page.\nregex_flavor;advanced;Sets the regular expression \"flavor\".\nsearch_path;tmp, public;Sets the schema search order for names that \nare not schema-qualified.\nseq_page_cost;1;Sets the planner's estimate of the cost of a \nsequentially fetched disk page.\nserver_encoding;UTF8;Sets the server (database) character set encoding.\nserver_version;8.3.1;Shows the server version.\nserver_version_num;80301;Shows the server version as an integer.\nsession_replication_role;origin;Sets the session's behavior for \ntriggers and rewrite rules.\nshared_buffers;192MB;Sets the number of shared memory buffers used by \nthe server.\nshared_preload_libraries;;Lists shared libraries to preload into server.\nsilent_mode;off;Runs the server silently.\nsql_inheritance;on;Causes subtables to be included by default in \nvarious commands.\nssl;off;Enables SSL connections.\nstandard_conforming_strings;off;Causes '...' 
strings to treat \nbackslashes literally.\nstatement_timeout;0;Sets the maximum allowed duration of any statement.\nsuperuser_reserved_connections;3;Sets the number of connection slots \nreserved for superusers.\nsynchronize_seqscans;on;Enable synchronized sequential scans.\nsynchronous_commit;on;Sets immediate fsync at commit.\nsyslog_facility;LOCAL0;Sets the syslog \"facility\" to be used when \nsyslog enabled.\nsyslog_ident;postgres;Sets the program name used to identify \nPostgreSQL messages in syslog.\ntcp_keepalives_count;0;Maximum number of TCP keepalive retransmits.\ntcp_keepalives_idle;0;Time between issuing TCP keepalives.\ntcp_keepalives_interval;0;Time between TCP keepalive retransmits.\ntemp_buffers;1024;Sets the maximum number of temporary buffers used by \neach session.\ntemp_tablespaces;;Sets the tablespace(s) to use for temporary tables \nand sort files.\nTimeZone;America/Guayaquil;Sets the time zone for displaying and \ninterpreting time stamps.\ntimezone_abbreviations;Default;Selects a file of time zone \nabbreviations.\ntrace_notify;off;Generates debugging output for LISTEN and NOTIFY.\ntrace_sort;off;Emit information about resource usage in sorting.\ntrack_activities;on;Collects information about executing commands.\ntrack_counts;on;Collects statistics on database activity.\ntransaction_isolation;read committed;Sets the current transaction's \nisolation level.\ntransaction_read_only;off;Sets the current transaction's read-only \nstatus.\ntransform_null_equals;off;Treats \"expr=NULL\" as \"expr IS NULL\".\nunix_socket_directory;;Sets the directory where the Unix-domain socket \nwill be created.\nunix_socket_group;;Sets the owning group of the Unix-domain socket.\nunix_socket_permissions;511;Sets the access permissions of the Unix- \ndomain socket.\nupdate_process_title;on;Updates the process title to show the active \nSQL command.\nvacuum_cost_delay;0;Vacuum cost delay in milliseconds.\nvacuum_cost_limit;200;Vacuum cost amount available before napping.\nvacuum_cost_page_dirty;20;Vacuum cost for a page dirtied by vacuum.\nvacuum_cost_page_hit;1;Vacuum cost for a page found in the buffer cache.\nvacuum_cost_page_miss;10;Vacuum cost for a page not found in the \nbuffer cache.\nvacuum_freeze_min_age;100000000;Minimum age at which VACUUM should \nfreeze a table row.\nwal_buffers;8MB;Sets the number of disk-page buffers in shared memory \nfor WAL.\nwal_sync_method;fsync;Selects the method used for forcing WAL updates \nto disk.\nwal_writer_delay;200ms;WAL writer sleep time between WAL flushes.\nwork_mem;4MB;Sets the maximum memory to be used for query workspaces.\nxmlbinary;base64;Sets how binary values are to be encoded in XML.\nxmloption;content;Sets whether XML data in implicit parsing and \nserialization operations is to be considered as documents or content \nfragments.\nzero_damaged_pages;off;Continues processing past damaged page headers.\n\n\n\n", "msg_date": "Thu, 7 Aug 2008 09:30:48 -0500", "msg_from": "ries van Twisk <[email protected]>", "msg_from_op": true, "msg_subject": "Another index related question...." } ]
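Two cheaper experiments than raising cpu_tuple_cost server-wide suggest themselves from the plans and the SHOW ALL output above; the statements below are a sketch to test against a copy of the data, not advice taken from the thread itself, and the statistics target of 100 is an arbitrary placeholder.

-- idx_details covers (cust_code, cust_po, date_ordd), which is just the
-- leading prefix of details_pkey (cust_code, cust_po, date_ordd, item_nbr),
-- so it offers the planner a nearly identical but less useful path; the
-- thread already shows the better plan appearing once it is gone:
DROP INDEX idx_details;

-- The merge join is estimated at 331 rows but returns ~580k, and
-- default_statistics_target is still at the old default of 10; better
-- statistics often fix the estimate without touching any cost constants:
ALTER TABLE details ALTER COLUMN cust_po SET STATISTICS 100;
ALTER TABLE status  ALTER COLUMN cust_po SET STATISTICS 100;
ANALYZE details;
ANALYZE status;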
[ { "msg_contents": "Hello\n\nRegarding the advice from all, and the performance of postgresql 8.3.3\n\nI'm trying to change the server and to upgrade to 8.3.3\n\nI install postgresql 8.3.3 on a new server for testing. All well!!!\n\nAnd I run a \\i mybackup.sql since yesterday 7pm. This morning the datas \nare not insert yet.\n\nCOuld you advice me on which restoration method is the faster. To \nupgrade from postgresql 8.1.11 to 8.3.3.\n\nRegards\n\nDavid\n\n\n\n-- \n<http://www.1st-affiliation.fr>\n\n*David Bigand\n*Pr�sident Directeur G�n�rale*\n*51 chemin des moulins\n73000 CHAMBERY - FRANCE\n\nWeb : htttp://www.1st-affiliation.fr\nEmail : [email protected]\nTel. : +33 479 696 685\nMob. : +33 666 583 836\nSkype : firstaffiliation_support\n\n", "msg_date": "Fri, 08 Aug 2008 09:25:18 +0200", "msg_from": "dforums <[email protected]>", "msg_from_op": true, "msg_subject": "Restoration of datas" }, { "msg_contents": "dforums wrote:\n> COuld you advice me on which restoration method is the faster. To \n> upgrade from postgresql 8.1.11 to 8.3.3.\n\nUsing the pg_dump from your 8.3 package, dump the database using -Fc to \nget a nicely compressed dump. Then use pg_restore to restore it. If you \nadd a --verbose flag then you will be able to track it.\n\nYou might want to set fsync=off while doing the restore. This is safe \nsince if the machine crashes during restore you just start again. Oh, \nand increase work_mem too - there's only going to be one process.\n\nWhat will take the most time is the creating of indexes etc.\n\nIt will take a long time to do a full restore though - you've got 64GB \nof data and slow disks.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 08 Aug 2008 08:56:50 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Restoration of datas" } ]
[ { "msg_contents": "Hello list,\n\nI have a server with a direct attached storage containing 4 15k SAS \ndrives and 6 standard SATA drives.\nThe server is a quad core xeon with 16GB ram.\nBoth server and DAS has dual PERC/6E raid controllers with 512 MB BBU\n\nThere is 2 raid set configured.\nOne RAID 10 containing 4 SAS disks\nOne RAID 5 containing 6 SATA disks\n\nThere is one partition per RAID set with ext2 filesystem.\n\nI ran the following iozone test which I stole from Joshua Drake's test \nat\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\nI ran this test against the RAID 5 SATA partition\n\n#iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u\n\nWith these random write results\n\n\tChildren see throughput for 1 random writers \t= 168647.33 KB/sec\n\tParent sees throughput for 1 random writers \t= 168413.61 KB/sec\n\tMin throughput per process \t\t\t= 168647.33 KB/sec\n\tMax throughput per process \t\t\t= 168647.33 KB/sec\n\tAvg throughput per process \t\t\t= 168647.33 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 6.072 CPU time 0.540 CPU \nutilization 8.89 %\n\nAlmost 170 MB/sek. Not bad for 6 standard SATA drives.\n\nThen I ran the same thing against the RAID 10 SAS partition\n\n\tChildren see throughput for 1 random writers \t= 68816.25 KB/sec\n\tParent sees throughput for 1 random writers \t= 68767.90 KB/sec\n\tMin throughput per process \t\t\t= 68816.25 KB/sec\n\tMax throughput per process \t\t\t= 68816.25 KB/sec\n\tAvg throughput per process \t\t\t= 68816.25 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 14.880 CPU time 0.520 CPU \nutilization 3.49 %\n\nWhat only 70 MB/sek?\n\nIs it possible that the 2 more spindles for the SATA drives makes that \npartition soooo much faster? Even though the disks and the RAID \nconfiguration should be slower?\nIt feels like there is something fishy going on. Maybe the RAID 10 \nimplementation on the PERC/6e is crap?\n\nAny pointers, suggestion, ideas?\n\nI'm going to change the RAID 10 to a RAID 5 and test again and see \nwhat happens.\n\nCheers,\nHenke\n\n", "msg_date": "Fri, 8 Aug 2008 16:23:55 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Fri, 8 Aug 2008, Henrik wrote:\n\n> It feels like there is something fishy going on. Maybe the RAID 10 \n> implementation on the PERC/6e is crap?\n\nNormally, when a SATA implementation is running significantly faster than \na SAS one, it's because there's some write cache in the SATA disks turned \non (which they usually are unless you go out of your way to disable them). \nSince all non-battery backed caches need to get turned off for reliable \ndatabase use, you might want to double-check that on the controller that's \ndriving the SATA disks.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 8 Aug 2008 18:47:03 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "\n9 aug 2008 kl. 00.47 skrev Greg Smith:\n\n> On Fri, 8 Aug 2008, Henrik wrote:\n>\n>> It feels like there is something fishy going on. 
Maybe the RAID 10 \n>> implementation on the PERC/6e is crap?\n>\n> Normally, when a SATA implementation is running significantly faster \n> than a SAS one, it's because there's some write cache in the SATA \n> disks turned on (which they usually are unless you go out of your \n> way to disable them). Since all non-battery backed caches need to \n> get turned off for reliable database use, you might want to double- \n> check that on the controller that's driving the SATA disks.\n\nLucky for my I have BBU on all my controllers cards and I'm also not \nusing the SATA drives for database. That is why I bought the SAS \ndrives :) Just got confused when the SATA RAID 5 was sooo much faster \nthan the SAS RAID10, even random writes. But I should have realized \nthat SAS is only faster if the number of drives are equal :)\n\nThanks for the input!\n\nCheers,\nHenke\n\n>\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Sun, 10 Aug 2008 21:40:30 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "> >\n> >> It feels like there is something fishy going on.\n> Maybe the RAID 10 \n> >> implementation on the PERC/6e is crap?\n> >\n\nIt's possible. We had a bunch of perc/5i SAS raid cards in our servers that performed quite well in Raid 5 but were shite in Raid 10. I switched them out for Adaptec 5808s and saw a massive improvement in Raid 10.\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Mon, 11 Aug 2008 10:35:50 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "11 aug 2008 kl. 12.35 skrev Glyn Astill:\n\n>>>\n>>>> It feels like there is something fishy going on.\n>> Maybe the RAID 10\n>>>> implementation on the PERC/6e is crap?\n>>>\n>\n> It's possible. We had a bunch of perc/5i SAS raid cards in our \n> servers that performed quite well in Raid 5 but were shite in Raid \n> 10. I switched them out for Adaptec 5808s and saw a massive \n> improvement in Raid 10.\nI suspected that. Maybe I should just put the PERC/6 cards in JBOD \nmode and then make a RAID10 with linux software raid MD?\n\n\n>\n>\n>\n> __________________________________________________________\n> Not happy with your email address?.\n> Get the one you really want - millions of new email addresses \n> available now at Yahoo! http://uk.docs.yahoo.com/ymail/new.html\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 11 Aug 2008 14:08:01 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Mon, Aug 11, 2008 at 6:08 AM, Henrik <[email protected]> wrote:\n> 11 aug 2008 kl. 
12.35 skrev Glyn Astill:\n>\n>>>>\n>>>>> It feels like there is something fishy going on.\n>>>\n>>> Maybe the RAID 10\n>>>>>\n>>>>> implementation on the PERC/6e is crap?\n>>>>\n>>\n>> It's possible. We had a bunch of perc/5i SAS raid cards in our servers\n>> that performed quite well in Raid 5 but were shite in Raid 10. I switched\n>> them out for Adaptec 5808s and saw a massive improvement in Raid 10.\n>\n> I suspected that. Maybe I should just put the PERC/6 cards in JBOD mode and\n> then make a RAID10 with linux software raid MD?\n\nYou can also try making mirror sets with the hardware RAID controller\nand then doing SW RAID 0 on top of that. Since RAID0 requires little\nor no CPU overhead, this is a good compromise because the OS has the\nleast work to do, and the RAID controller is doing what it's probably\npretty good at, mirror sets.\n", "msg_date": "Mon, 11 Aug 2008 07:20:08 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Sun, 10 Aug 2008, Henrik wrote:\n\n>> Normally, when a SATA implementation is running significantly faster than a \n>> SAS one, it's because there's some write cache in the SATA disks turned on \n>> (which they usually are unless you go out of your way to disable them). \n> Lucky for my I have BBU on all my controllers cards and I'm also not using \n> the SATA drives for database.\n\n From how you responded I don't think I made myself clear. In addition to \nthe cache on the controller itself, each of the disks has its own cache, \nprobably 8-32MB in size. Your controllers may have an option to enable or \ndisable the caches on the individual disks, which would be a separate \nconfiguration setting from turning the main controller cache on or off. \nYour results look like what I'd expect if the individual disks caches on \nthe SATA drives were on, while those on the SAS controller were off (which \nmatches the defaults you'll find on some products in both categories). \nJust something to double-check.\n\nBy the way: getting useful results out of iozone is fairly difficult if \nyou're unfamiliar with it, there are lots of ways you can set that up to \nrun tests that aren't completely fair or that you don't run them for long \nenough to give useful results. I'd suggest doing a round of comparisons \nwith bonnie++, which isn't as flexible but will usually give fair results \nwithout needing to specify any parameters. The \"seeks\" number that comes \nout of bonnie++ is a combined read/write one and would be good for \ndouble-checking whether the unexpected results you're seeing are \nindependant of the benchmark used.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Mon, 11 Aug 2008 11:56:22 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Hi again all,\n\nJust wanted to give you an update.\n\nTalked to Dell tech support and they recommended using write- \nthrough(!) caching in RAID10 configuration. Well, it didn't work and \ngot even worse performance.\n\nAnyone have an estimated what a RAID10 on 4 15k SAS disks should \ngenerate in random writes?\n\nI'm really keen on trying Scotts suggestion on using the PERC/6 with \nmirror sets only and then make the stripe with Linux SW raid.\n\nThanks for all the input! Much appreciated.\n\n\nCheers,\nHenke\n\n11 aug 2008 kl. 
17.56 skrev Greg Smith:\n\n> On Sun, 10 Aug 2008, Henrik wrote:\n>\n>>> Normally, when a SATA implementation is running significantly \n>>> faster than a SAS one, it's because there's some write cache in \n>>> the SATA disks turned on (which they usually are unless you go out \n>>> of your way to disable them).\n>> Lucky for my I have BBU on all my controllers cards and I'm also \n>> not using the SATA drives for database.\n>\n>> From how you responded I don't think I made myself clear. In \n>> addition to\n> the cache on the controller itself, each of the disks has its own \n> cache, probably 8-32MB in size. Your controllers may have an option \n> to enable or disable the caches on the individual disks, which would \n> be a separate configuration setting from turning the main controller \n> cache on or off. Your results look like what I'd expect if the \n> individual disks caches on the SATA drives were on, while those on \n> the SAS controller were off (which matches the defaults you'll find \n> on some products in both categories). Just something to double-check.\n>\n> By the way: getting useful results out of iozone is fairly \n> difficult if you're unfamiliar with it, there are lots of ways you \n> can set that up to run tests that aren't completely fair or that you \n> don't run them for long enough to give useful results. I'd suggest \n> doing a round of comparisons with bonnie++, which isn't as flexible \n> but will usually give fair results without needing to specify any \n> parameters. The \"seeks\" number that comes out of bonnie++ is a \n> combined read/write one and would be good for double-checking \n> whether the unexpected results you're seeing are independant of the \n> benchmark used.\n>\n> --\n> * Greg Smith [email protected] http://www.gregsmith.com \n> Baltimore, MD\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 12 Aug 2008 21:40:20 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Tue, Aug 12, 2008 at 1:40 PM, Henrik <[email protected]> wrote:\n> Hi again all,\n>\n> Just wanted to give you an update.\n>\n> Talked to Dell tech support and they recommended using write-through(!)\n> caching in RAID10 configuration. Well, it didn't work and got even worse\n> performance.\n\nSomeone at Dell doesn't understand the difference between write back\nand write through.\n\n> Anyone have an estimated what a RAID10 on 4 15k SAS disks should generate in\n> random writes?\n\nUsing sw RAID or a non-caching RAID controller, you should be able to\nget close to 2xmax write based on rpms. On 7200 RPM drives that's\n2*150 or ~300 small transactions per second. On 15k drives that's\nabout 2*250 or around 500 tps.\n\nThe bigger the data you're writing, the fewer you're gonna be able to\nwrite each second of course.\n\n> I'm really keen on trying Scotts suggestion on using the PERC/6 with mirror\n> sets only and then make the stripe with Linux SW raid.\n\nDefinitely worth the try. Even full on sw RAID may be faster. 
It's\nworth testing.\n\nOn our new servers at work, we have Areca controllers with 512M bbc\nand they were about 10% faster mixing sw and hw raid, but honestly, it\nwasn't worth the extra trouble of the hw/sw combo to go with.\n", "msg_date": "Tue, 12 Aug 2008 14:42:13 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Greg Smith wrote:\n> some write cache in the SATA disks...Since all non-battery backed caches \n> need to get turned off for reliable database use, you might want to \n> double-check that on the controller that's driving the SATA disks.\n\nIs this really true?\n\nDoesn't the ATA \"FLUSH CACHE\" command (say, ATA command 0xE7)\nguarantee that writes are on the media?\n\nhttp://www.t13.org/Documents/UploadedDocuments/technical/e01126r0.pdf\n\"A non-error completion of the command indicates that all cached data\n since the last FLUSH CACHE command completion was successfully written\n to media, including any cached data that may have been\n written prior to receipt of FLUSH CACHE command.\"\n(I still can't find any $0 SATA specs; but I imagine the final\nwording for the command is similar to the wording in the proposal\nfor the command which can be found on the ATA Technical Committee's\nweb site at the link above.)\n\nReally old software (notably 2.4 linux kernels) didn't send\ncache synchronizing commands for SCSI nor either ATA; but\nit seems well thought through in the 2.6 kernels as described\nin the Linux kernel documentation.\nhttp://www.mjmwired.net/kernel/Documentation/block/barrier.txt\n\nIf you do have a disk where you need to disable write caches,\nI'd love to know the name of the disk and see the output of\nof \"hdparm -I /dev/sd***\" to see if it claims to support such\ncache flushes.\n\n\nI'm almost tempted to say that if you find yourself having to disable\ncaches on modern (this century) hardware and software, you're probably\ncovering up a more serious issue with your system.\n\n", "msg_date": "Tue, 12 Aug 2008 14:47:57 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Some SATA drives were known to not flush their cache when told to.\nSome file systems don't know about this (UFS, older linux kernels, etc).\n\nSo yes, if your OS / File System / Controller card combo properly sends the\nwrite cache flush command, and the drive is not a flawed one, all is well.\nMost should, not all do. 
Any one of those bits along the chain can\npotentially be disk write cache unsafe.\n\n\nOn Tue, Aug 12, 2008 at 2:47 PM, Ron Mayer <[email protected]>wrote:\n\n> Greg Smith wrote:\n>\n>> some write cache in the SATA disks...Since all non-battery backed caches\n>> need to get turned off for reliable database use, you might want to\n>> double-check that on the controller that's driving the SATA disks.\n>>\n>\n> Is this really true?\n>\n> Doesn't the ATA \"FLUSH CACHE\" command (say, ATA command 0xE7)\n> guarantee that writes are on the media?\n>\n> http://www.t13.org/Documents/UploadedDocuments/technical/e01126r0.pdf\n> \"A non-error completion of the command indicates that all cached data\n> since the last FLUSH CACHE command completion was successfully written\n> to media, including any cached data that may have been\n> written prior to receipt of FLUSH CACHE command.\"\n> (I still can't find any $0 SATA specs; but I imagine the final\n> wording for the command is similar to the wording in the proposal\n> for the command which can be found on the ATA Technical Committee's\n> web site at the link above.)\n>\n> Really old software (notably 2.4 linux kernels) didn't send\n> cache synchronizing commands for SCSI nor either ATA; but\n> it seems well thought through in the 2.6 kernels as described\n> in the Linux kernel documentation.\n> http://www.mjmwired.net/kernel/Documentation/block/barrier.txt\n>\n> If you do have a disk where you need to disable write caches,\n> I'd love to know the name of the disk and see the output of\n> of \"hdparm -I /dev/sd***\" to see if it claims to support such\n> cache flushes.\n>\n>\n> I'm almost tempted to say that if you find yourself having to disable\n> caches on modern (this century) hardware and software, you're probably\n> covering up a more serious issue with your system.\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nSome SATA drives were known to not flush their cache when told to.Some file systems don't know about this (UFS, older linux kernels, etc).So yes, if your OS / File System / Controller card combo properly sends the write cache flush command, and the drive is not a flawed one, all is well.   Most should, not all do.  
Any one of those bits along the chain can potentially be disk write cache unsafe.\nOn Tue, Aug 12, 2008 at 2:47 PM, Ron Mayer <[email protected]> wrote:\nGreg Smith wrote:\n\nsome write cache in the SATA disks...Since all non-battery backed caches need to get turned  off for reliable database use, you might want to  double-check that on the controller that's driving the SATA disks.\n\n\nIs this really true?\n\nDoesn't the ATA \"FLUSH CACHE\" command (say, ATA command 0xE7)\nguarantee that writes are on the media?\n\nhttp://www.t13.org/Documents/UploadedDocuments/technical/e01126r0.pdf\n\"A non-error completion of the command indicates that all cached data\n since the last FLUSH CACHE command completion was successfully written\n to media, including any cached data that may have been\n written prior to receipt of FLUSH CACHE command.\"\n(I still can't find any $0 SATA specs; but I imagine the final\nwording for the command is similar to the wording in the proposal\nfor the command which can be found on the ATA Technical Committee's\nweb site at the link above.)\n\nReally old software (notably 2.4 linux kernels) didn't send\ncache synchronizing commands for SCSI nor either ATA; but\nit seems well thought through in the 2.6 kernels as described\nin the Linux kernel documentation.\nhttp://www.mjmwired.net/kernel/Documentation/block/barrier.txt\n\nIf you do have a disk where you need to disable write caches,\nI'd love to know the name of the disk and see the output of\nof \"hdparm -I /dev/sd***\" to see if it claims to support such\ncache flushes.\n\n\nI'm almost tempted to say that if you find yourself having to disable\ncaches on modern (this century) hardware and software, you're probably\ncovering up a more serious issue with your system.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 12 Aug 2008 17:23:31 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Scott Carey wrote:\n> Some SATA drives were known to not flush their cache when told to.\n\nCan you name one? The ATA commands seem pretty clear on the matter,\nand ISTM most of the reports of these issues came from before\nLinux had write-barrier support.\n\nI've yet to hear of a drive with the problem; though no doubt there\nare some cheap RAID controllers somewhere that expect you to disable\nthe drive caches.\n", "msg_date": "Tue, 12 Aug 2008 17:40:18 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Tue, Aug 12, 2008 at 6:23 PM, Scott Carey <[email protected]> wrote:\n> Some SATA drives were known to not flush their cache when told to.\n> Some file systems don't know about this (UFS, older linux kernels, etc).\n>\n> So yes, if your OS / File System / Controller card combo properly sends the\n> write cache flush command, and the drive is not a flawed one, all is well.\n> Most should, not all do. Any one of those bits along the chain can\n> potentially be disk write cache unsafe.\n\nI can attest to the 2.4 kernel not being able to guarantee fsync on\nIDE drives. 
And to the LSI megaraid SCSI controllers of the era\nsurviving numerous power off tests.\n", "msg_date": "Tue, 12 Aug 2008 18:48:04 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Tue, 12 Aug 2008, Ron Mayer wrote:\n\n> Scott Carey wrote:\n>> Some SATA drives were known to not flush their cache when told to.\n>\n> Can you name one? The ATA commands seem pretty clear on the matter,\n> and ISTM most of the reports of these issues came from before\n> Linux had write-barrier support.\n\nI can't name one, but I've seen it mentioned in the discussions on \nlinux-kernel several times by the folks who are writing the write-barrier \nsupport.\n\nDavid Lang\n\n", "msg_date": "Tue, 12 Aug 2008 18:33:08 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "I'm not an expert on which and where -- its been a while since I was exposed\nto the issue. From what I've read in a few places over time (\nstoragereview.com, linux and windows patches or knowledge base articles), it\nhappens from time to time. Drives usually get firmware updates quickly.\nDrivers / Controller cards often take longer to fix. Anyway, my anecdotal\nrecollection was with a few instances of this occuring about 4 years ago and\nmanefesting itself with complaints on message boards then going away. And\nin general some searching around google indicates this is a problem more\noften drivers and controllers than drives themselves.\n\nI recall some cheap raid cards and controller cards being an issue, like the\nbelow:\nhttp://www.fixya.com/support/t163682-hard_drive_corrupt_every_reboot\n\nAnd here is an example of an HP Fiber Channel Disk firmware bug:\nHS02969 28SEP07<http://h50025.www5.hp.com/ENP5/Admin/UserFileAdmin/EV/23560/File/Hotstuffs.pdf>\n•\nTitle\n: OPN FIBRE CHANNEL DISK FIRMWARE\n•\nPlatform\n: S-Series & NS-Series only with FCDMs\n•\nSummary\n:\nHP recently discovered a firmware flaw in some versions of 72,\n146, and 300 Gigabyte fibre channel disk devices that shipped in late 2006\nand early 2007. The flaw enabled the affected disk devices to inadvertently\ncache write data. In very rare instances, this caching operation presents an\nopportunity for disk write operations to be lost.\n\n\nEven ext3 doesn't default to using write barriers at this time due to\nperformance concerns:\nhttp://lwn.net/Articles/283161/\n\nOn Tue, Aug 12, 2008 at 6:33 PM, <[email protected]> wrote:\n\n> On Tue, 12 Aug 2008, Ron Mayer wrote:\n>\n> Scott Carey wrote:\n>>\n>>> Some SATA drives were known to not flush their cache when told to.\n>>>\n>>\n>> Can you name one? The ATA commands seem pretty clear on the matter,\n>> and ISTM most of the reports of these issues came from before\n>> Linux had write-barrier support.\n>>\n>\n> I can't name one, but I've seen it mentioned in the discussions on\n> linux-kernel several times by the folks who are writing the write-barrier\n> support.\n>\n> David Lang\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI'm not an expert on which and where -- its been a while since I was exposed to the issue.  From what I've read in a few places over time (storagereview.com, linux and windows patches or knowledge base articles), it happens from time to time.  
Drives usually get firmware updates quickly.  Drivers / Controller cards often take longer to fix.  Anyway, my anecdotal recollection was with a few instances of this occuring about 4 years ago and manefesting itself with complaints on message boards then going away.  And in general some searching around google indicates this is a problem more often drivers and controllers than drives themselves.\nI recall some cheap raid cards and controller cards being an issue, like the below:http://www.fixya.com/support/t163682-hard_drive_corrupt_every_reboot\nAnd here is an example of an HP Fiber Channel Disk firmware bug:HS02969 28SEP07•Title: OPN FIBRE CHANNEL DISK FIRMWARE\n•Platform: S-Series & NS-Series only with FCDMs•Summary:HP recently discovered a firmware flaw in some versions of 72,146, and 300 Gigabyte fibre channel disk devices that shipped in late 2006\nand early 2007. The flaw enabled the affected disk devices to inadvertentlycache write data. In very rare instances, this caching operation presents anopportunity for disk write operations to be lost.Even ext3 doesn't default to using write barriers at this time due to performance concerns:\nhttp://lwn.net/Articles/283161/On Tue, Aug 12, 2008 at 6:33 PM, <[email protected]> wrote:\nOn Tue, 12 Aug 2008, Ron Mayer wrote:\n\n\nScott Carey wrote:\n\nSome SATA drives were known to not flush their cache when told to.\n\n\nCan you name one?  The ATA commands seem pretty clear on the matter,\nand ISTM most of the reports of these issues came from before\nLinux had write-barrier support.\n\n\nI can't name one, but I've seen it mentioned in the discussions on linux-kernel several times by the folks who are writing the write-barrier support.\n\nDavid Lang\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 12 Aug 2008 19:52:10 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Scott Marlowe wrote:\n> I can attest to the 2.4 kernel not being able to guarantee fsync on\n> IDE drives. \n\nSure. But note that it won't for SCSI either; since AFAICT the write\nbarrier support was implemented at the same time for both.\n", "msg_date": "Tue, 12 Aug 2008 21:28:32 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Tue, 12 Aug 2008, Ron Mayer wrote:\n\n> Really old software (notably 2.4 linux kernels) didn't send\n> cache synchronizing commands for SCSI nor either ATA; but\n> it seems well thought through in the 2.6 kernels as described\n> in the Linux kernel documentation.\n> http://www.mjmwired.net/kernel/Documentation/block/barrier.txt\n\nIf you've drank the kool-aid you might believe that. 
When I see people \nasking about this in early 2008 at \nhttp://thread.gmane.org/gmane.linux.kernel/646040 and serious disk driver \nhacker Jeff Garzik says \"It's completely ridiculous that we default to an \nunsafe fsync.\" [ http://thread.gmane.org/gmane.linux.kernel/646040 ], I \ndon't know about you but that barrier documentation doesn't make me feel \nwarm and safe anymore.\n\n> If you do have a disk where you need to disable write caches,\n> I'd love to know the name of the disk and see the output of\n> of \"hdparm -I /dev/sd***\" to see if it claims to support such\n> cache flushes.\n\nThe below disk writes impossibly fast when I issue a sequence of fsync \nwrites to it under the CentOS 5 Linux I was running on it. Should only be \npossible to do at most 120/second since it's 7200 RPM, and if I poke it \nwith \"hdparm -W0\" first it behaves. The drive is a known piece of junk \nfrom circa 2004, and it's worth noting that it's an ext3 filesystem in a \nmd0 RAID-1 array (aren't there issues with md and the barriers?)\n\n# hdparm -I /dev/hde\n\n/dev/hde:\n\nATA device, with non-removable media\n Model Number: Maxtor 6Y250P0\n Serial Number: Y62K95PE\n Firmware Revision: YAR41BW0\nStandards:\n Used: ATA/ATAPI-7 T13 1532D revision 0\n Supported: 7 6 5 4\nConfiguration:\n Logical max current\n cylinders 16383 65535\n heads 16 1\n sectors/track 63 63\n --\n CHS current addressable sectors: 4128705\n LBA user addressable sectors: 268435455\n LBA48 user addressable sectors: 490234752\n device size with M = 1024*1024: 239372 MBytes\n device size with M = 1000*1000: 251000 MBytes (251 GB)\nCapabilities:\n LBA, IORDY(can be disabled)\n Standby timer values: spec'd by Standard, no device specific \nminimum\n R/W multiple sector transfer: Max = 16 Current = 16\n Advanced power management level: unknown setting (0x0000)\n Recommended acoustic management value: 192, current value: 254\n DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6\n Cycle time: min=120ns recommended=120ns\n PIO: pio0 pio1 pio2 pio3 pio4\n Cycle time: no flow control=120ns IORDY flow control=120ns\nCommands/features:\n Enabled Supported:\n * SMART feature set\n Security Mode feature set\n * Power Management feature set\n * Write cache\n * Look-ahead\n * Host Protected Area feature set\n * WRITE_VERIFY command\n * WRITE_BUFFER command\n * READ_BUFFER command\n * NOP cmd\n * DOWNLOAD_MICROCODE\n Advanced Power Management feature set\n SET_MAX security extension\n * Automatic Acoustic Management feature set\n * 48-bit Address feature set\n * Device Configuration Overlay feature set\n * Mandatory FLUSH_CACHE\n * FLUSH_CACHE_EXT\n * SMART error logging\n * SMART self-test\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 13 Aug 2008 02:17:44 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Tue, Aug 12, 2008 at 10:28 PM, Ron Mayer\n<[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> I can attest to the 2.4 kernel not being able to guarantee fsync on\n>> IDE drives.\n>\n> Sure. But note that it won't for SCSI either; since AFAICT the write\n> barrier support was implemented at the same time for both.\n\nTested both by pulling the power plug. The SCSI was pulled 10 times\nwhile running 600 or so concurrent pgbench threads, and so was the\nIDE. 
The SCSI came up clean every single time, the IDE came up\ncorrupted every single time.\n\nI find it hard to believe there was no difference in write barrier\nbehaviour with those two setups.\n", "msg_date": "Wed, 13 Aug 2008 01:03:29 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Tue, 12 Aug 2008, Ron Mayer wrote:\n> Really old software (notably 2.4 linux kernels) didn't send\n> cache synchronizing commands for SCSI nor either ATA;\n\nSurely not true. Write cache flushing has been a known problem in the \ncomputer science world for several tens of years. The difference is that \nin the past we only had a \"flush everything\" command whereas now we have a \n\"flush everything before the barrier before everything after the barrier\" \ncommand.\n\nMatthew\n\n-- \n\"To err is human; to really louse things up requires root\n privileges.\" -- Alexander Pope, slightly paraphrased\n", "msg_date": "Wed, 13 Aug 2008 11:15:55 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Scott Marlowe wrote:\n> On Tue, Aug 12, 2008 at 10:28 PM, Ron Mayer ...wrote:\n>> Scott Marlowe wrote:\n>>> I can attest to the 2.4 kernel ...\n>> ...SCSI...AFAICT the write barrier support...\n> \n> Tested both by pulling the power plug. The SCSI was pulled 10 times\n> while running 600 or so concurrent pgbench threads, and so was the\n> IDE. The SCSI came up clean every single time, the IDE came up\n> corrupted every single time.\n\nInteresting. With a pre-write-barrier 2.4 kernel I'd\nexpect corruption in both.\nPerhaps all caches were disabled in the SCSI drives?\n\n> I find it hard to believe there was no difference in write barrier\n> behaviour with those two setups.\n\nSkimming lkml it seems write barriers for SCSI were\nbehind (in terms of implementation) those for ATA\nhttp://lkml.org/lkml/2005/1/27/94\n\"Jan 2005 ... scsi/sata write barrier support ...\n For the longest time, only the old PATA drivers\n supported barrier writes with journalled file systems.\n This patch adds support for the same type of cache\n flushing barriers that PATA uses for SCSI\"\n", "msg_date": "Wed, 13 Aug 2008 06:24:29 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Greg Smith wrote:\n> The below disk writes impossibly fast when I issue a sequence of fsync \n\n'k. I've got some homework. I'll be trying to reproduce similar\nwith md raid, old IDE drives, etc to see if I can reproduce them.\nI assume test_fsync in the postgres source distribution is\na decent way to see?\n\n> driver hacker Jeff Garzik says \"It's completely ridiculous that we \n> default to an unsafe fsync.\" \n\nYipes indeed. Still makes me want to understand why people\nclaim IDE suffers more than SCSI, tho. Ext3 bugs seem likely\nto affect both to me.\n\n> writes to it under the CentOS 5 Linux I was running on it. ...\n> junk from circa 2004, and it's worth noting that it's an ext3 filesystem \n> in a md0 RAID-1 array (aren't there issues with md and the barriers?)\n\nApparently various distros vary a lot in how they're set\nup (SuSE apparently defaults to mounting ext3 with the barrier=1\noption; other distros seemed not to, etc).\n\nI'll do a number of experiments with md, a few different drives,\netc. 
today and see if I can find issues with any of the\ndrives (and/or filesystems) around here.\n\nBut I still am looking for any evidence that there were any\nwidely shipped SATA (or even IDE drives) that were at fault,\nas opposed to filesystem bugs and poor settings of defaults.\n", "msg_date": "Wed, 13 Aug 2008 07:41:27 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Wed, Aug 13, 2008 at 8:41 AM, Ron Mayer\n<[email protected]> wrote:\n> Greg Smith wrote:\n\n> But I still am looking for any evidence that there were any\n> widely shipped SATA (or even IDE drives) that were at fault,\n> as opposed to filesystem bugs and poor settings of defaults.\n\nWell, if they're getting more than 150/166.6/250 transactions per\nsecond without a battery backed cache, then they're likely lying about\nfsync. And most SATA and IDE drives will give you way over that for a\nsmall data set.\n", "msg_date": "Wed, 13 Aug 2008 08:48:17 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Wed, 13 Aug 2008, Ron Mayer wrote:\n\n> I assume test_fsync in the postgres source distribution is\n> a decent way to see?\n\nNot really. It takes too long (runs too many tests you don't care about) \nand doesn't spit out the results the way you want them--TPS, not average \ntime.\n\nYou can do it with pgbench (scale here really doesn't matter):\n\n$ cat insert.sql\n\\set nbranches :scale\n\\set ntellers 10 * :scale\n\\set naccounts 100000 * :scale\n\\setrandom aid 1 :naccounts\n\\setrandom bid 1 :nbranches\n\\setrandom tid 1 :ntellers\n\\setrandom delta -5000 5000\nBEGIN;\nINSERT INTO history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, \n:aid, :delta, CURRENT_TIMESTAMP);\nEND;\n$ createdb pgbench\n$ pgbench -i -s 20 pgbench\n$ pgbench -f insert.sql -s 20 -c 1 -t 10000 pgbench\n\nDon't really need to ever rebuild that just to run more tests if all you \ncare about is the fsync speed (no indexes in the history table to bloat or \nanything).\n\nOr you can measure with sysbench; \nhttp://www.mysqlperformanceblog.com/2006/05/03/group-commit-and-real-fsync/ \ngoes over that but they don't have the syntax exacty right. Here's an \nexample that works:\n\n:~/sysbench-0.4.8/bin/bin$ ./sysbench run --test=fileio \n--file-fsync-freq=1 --file-num=1 --file-total-size=16384 \n--file-test-mode=rndwr\n\n> But I still am looking for any evidence that there were any widely \n> shipped SATA (or even IDE drives) that were at fault, as opposed to \n> filesystem bugs and poor settings of defaults.\n\nAlan Cox claims that until circa 2001, the ATA standard didn't require \nimplementing the cache flush call at all. See \nhttp://www.kerneltraffic.org/kernel-traffic/kt20011015_137.html Since \nfirmware is expensive to write and manufacturers are generally lazy here, \nI'd bet a lot of disks from that era were missing support for the call. \nNext time I'd digging through my disk graveyard I'll try and find such a \ndisk. If he's correct that the standard changed around you wouldn't \nexpect any recent drive to not support the call.\n\nI feel it's largely irrelevant that most drives handle things just fine \nnowadays if you send them the correct flush commands, because there are so \nmanh other things that can make that system as a whole not work right. 
\nEven if the flush call works most of the time, disk firmware is turning \nincreasibly into buggy software, and attempts to reduce how much of that \nfirmware you're actually using can be viewed as helpful.\n\nThis is why I usually suggest just turning the individual drive caches \noff; the caveats for when they might work fine in this context are just \ntoo numerous.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 13 Aug 2008 12:55:38 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Scott Marlowe wrote:\n>IDE came up corrupted every single time.\nGreg Smith wrote:\n> you've drank the kool-aid ... completely \n> ridiculous ...unsafe fsync ... md0 RAID-1 \n> array (aren't there issues with md and the barriers?)\n\nAlright - I'll eat my words. Or mostly.\n\nI still haven't found IDE drives that lie; but\nif the testing I've done today, I'm starting to\nthink that:\n\n 1a) ext3 fsync() seems to lie badly.\n 1b) but ext3 can be tricked not to lie (but not\n in the way you might think).\n 2a) md raid1 fsync() sometimes doesn't actually\n sync\n 2b) I can't trick it not to.\n 3a) some IDE drives don't even pretend to support\n letting you know when their cache is flushed\n 3b) but the kernel will happily tell you about\n any such devices; as well as including md\n raid ones.\n\nIn more detail. I tested on a number of systems\nand disks including new (this year) and old (1997)\nIDE drives; and EXT3 with and without the \"barrier=1\"\nmount option.\n\n\nFirst off - some IDE drives don't even support the\nrelatively recent ATA command that apparently lets\nthe software know when a cache flush is complete.\nApparently on those you will get messages in your\nsystem logs:\n %dmesg | grep 'disabling barriers'\n JBD: barrier-based sync failed on md1 - disabling barriers\n JBD: barrier-based sync failed on hda3 - disabling barriers\nand\n %hdparm -I /dev/hdf | grep FLUSH_CACHE_EXT\nwill not show you anything on those devices.\nIMHO that's cool; and doesn't count as a lying IDE drive\nsince it didn't claim to support this.\n\nSecond of all - ext3 fsync() appears to me to\nbe *extremely* stupid. It only seems to correctly\ndo the correct flushing (and waiting) for a drive's\ncache to be flushed when a file's inode has changed.\nFor example, in the test program below, it will happily\ndo a real fsync (i.e. the program take a couple seconds\nto run) so long as I have the \"fchmod()\" statements are in\nthere. It will *NOT* wait on my system if I comment those\nfchmod()'s out. Sadly, I get the same behavior with and\nwithout the ext3 barrier=1 mount option. 
:(\n==========================================================\n/*\n** based on http://article.gmane.org/gmane.linux.file-systems/21373\n** http://thread.gmane.org/gmane.linux.kernel/646040\n*/\n#include <sys/types.h>\n#include <sys/stat.h>\n#include <fcntl.h>\n#include <unistd.h>\n#include <stdio.h>\n#include <stdlib.h>\n\nint main(int argc,char *argv[]) {\n if (argc<2) {\n printf(\"usage: fs <filename>\\n\");\n exit(1);\n }\n int fd = open (argv[1], O_RDWR | O_CREAT | O_TRUNC, 0666);\n int i;\n for (i=0;i<100;i++) {\n char byte;\n pwrite (fd, &byte, 1, 0);\n fchmod (fd, 0644); fchmod (fd, 0664);\n fsync (fd);\n }\n}\n==========================================================\nSince it does indeed wait when the inode's touched, I think\nit suggests that it's not the hard drive that's lying, but\nrather ext3.\n\nSo I take back what I said about linux and write barriers\nbeing sane. They're not.\n\nBut AFACT, all the (6 different) IDE drives I've seen work\nas advertised, and the kernel happily seems to spews boot\nmessages when it finds one that doesn't support knowing\nwhen a cache flush finished.\n\n", "msg_date": "Wed, 13 Aug 2008 15:30:40 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Wed, 13 Aug 2008, Ron Mayer wrote:\n\n> First off - some IDE drives don't even support the relatively recent ATA \n> command that apparently lets the software know when a cache flush is \n> complete.\n\nRight, so this is one reason you can't assume barriers will be available. \nAnd barriers don't work regardless if you go through the device mapper, \nlike some LVM and software RAID configurations; see \nhttp://lwn.net/Articles/283161/\n\n> Second of all - ext3 fsync() appears to me to be *extremely* stupid. \n> It only seems to correctly do the correct flushing (and waiting) for a \n> drive's cache to be flushed when a file's inode has changed.\n\nThis is bad, but the way PostgreSQL uses fsync seems to work fine--if it \ndidn't, we'd all see unnaturally high write rates all the time.\n\n> So I take back what I said about linux and write barriers\n> being sane. They're not.\n\nRight. Where Linux seems to be at right now is that there's this \noccasional problem people run into where ext3 volumes can get corrupted if \nthere are out of order writes to its journal: \nhttp://en.wikipedia.org/wiki/Ext3#No_checksumming_in_journal\nhttp://archives.free.net.ph/message/20070518.134838.52e26369.en.html\n\n(By the way: I just fixed the ext3 Wikipedia article to reflect the \ncurrent state of things and dumped a bunch of reference links in to there, \nincluding some that are not listed here. I prefer to keep my notes about \ninteresting topics in Wikipedia instead of having my own copies whenever \npossible).\n\nThere are two ways to get around this issue ext3. You can disable write \ncaching, changing your default mount options to \"data=journal\". In the \nPostgreSQL case, the way the WAL is used seems to keep corruption at bay \neven with the default \"data=ordered\" case, but after reading up on this \nagain I'm thinking I may want to switch to \"journal\" anyway in the future \n(and retrofit some older installs with that change). 
I also avoid using \nLinux LVM whenever possible for databases just on general principle; one \nless flakey thing in the way.\n\nThe other way, barriers, is just plain scary unless you know your disk \nhardware does the right thing and the planets align just right, and even \nthen it seems buggy. I personally just ignore the fact that they exist on \next3, and maybe one day ext4 will get this right.\n\nBy the way: there is a great ext3 \"torture test\" program that just came \nout a few months ago that's useful for checking general filesystem \ncorruption in this context I keep meaning to try, if you've got some \ncycles to spare working in this area check it out: \nhttp://uwsg.indiana.edu/hypermail/linux/kernel/0805.2/1470.html\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 14 Aug 2008 10:10:24 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "I've seen it written a couple of times in this thread, and in the\nwikipedia article, that SOME sw raid configs don't support write\nbarriers. This implies that some do. Which ones do and which ones\ndon't? Does anybody have a list of them?\n\nI was mainly wondering if sw RAID0 on top of hw RAID1 would be safe.\n", "msg_date": "Thu, 14 Aug 2008 08:54:22 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "Greg Smith wrote:\n> On Wed, 13 Aug 2008, Ron Mayer wrote:\n> \n>> Second of all - ext3 fsync() appears to me to be *extremely* stupid. \n>> It only seems to correctly do the correct flushing (and waiting) for a \n>> drive's cache to be flushed when a file's inode has changed.\n> \n> This is bad, but the way PostgreSQL uses fsync seems to work fine--if it \n> didn't, we'd all see unnaturally high write rates all the time.\n\nBut only if you turn off IDE drive caches.\n\nWhat was new to me in these experiments is that if you touch the\ninode as described here:\n http://article.gmane.org/gmane.linux.file-systems/21373\nthen fsync() works and you can leave the IDE cache enabled; so\nlong as your drive supports the flush command -- which you can\nsee by looking for the drive in the output of:\n %dmesg | grep 'disabling barriers'\n JBD: barrier-based sync failed on md1 - disabling barriers\n JBD: barrier-based sync failed on hda3 - disabling barriers\n\n>> So I take back what I said about linux and write barriers\n>> being sane. They're not.\n> \n> Right. Where Linux seems to be at right now is that there's this \n\nI almost fear I misphrased that. Apparently IDE drives\ndon't lie (the ones that don't support barriers let the OS\nknow that they don't). And apparently write barriers\ndo work.\n\nIt's just that ext3 only uses the write barriers correctly\non fsync() when an inode is touched, rather than any time\na file's data is touched.\n\n\n> then it seems buggy. I personally just ignore the fact that they exist \n> on ext3, and maybe one day ext4 will get this right.\n\n+1\n", "msg_date": "Thu, 14 Aug 2008 21:33:28 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" } ]
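A practical footnote to the thread above: the quickest way to tell whether a given drive / filesystem / md combination is really honoring fsync() is to time a burst of synchronous writes and compare the rate against the platter ceiling. A disk that genuinely waits for the media can only complete roughly RPM/60 commits per second on a single file (about 120/sec at 7200 rpm, 166/sec at 10k, 250/sec at 15k); a much higher number without a battery-backed cache in front of the disk means something in the stack is acknowledging writes early. The sketch below is only a hypothetical variant of the earlier fchmod() test -- the 1000-write burst size is arbitrary, and the fchmod() pair is the ext3 inode-touch workaround described in the thread, not something every filesystem needs:
==========================================================
/*
** fsync_rate.c - minimal sketch, not a polished benchmark.
** Times a burst of 1-byte write+fsync cycles on one file and prints
** the rate, for comparison against the drive's RPM/60 ceiling.
*/
#include <sys/time.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int fd, i, n = 1000;          /* burst size: arbitrary */
    char byte = 0;
    struct timeval t0, t1;
    double secs;

    if (argc < 2) {
        printf("usage: fsync_rate <filename>\n");
        return 1;
    }
    fd = open(argv[1], O_RDWR | O_CREAT | O_TRUNC, 0666);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    gettimeofday(&t0, NULL);
    for (i = 0; i < n; i++) {
        if (pwrite(fd, &byte, 1, 0) != 1) { perror("pwrite"); return 1; }
        fchmod(fd, 0644);         /* touch the inode so ext3 ... */
        fchmod(fd, 0664);         /* ... actually flushes, as discussed above */
        if (fsync(fd) != 0)       { perror("fsync"); return 1; }
    }
    gettimeofday(&t1, NULL);
    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1000000.0;
    printf("%d fsyncs in %.2f s = %.0f per second\n", n, secs, n / secs);
    close(fd);
    return 0;
}
==========================================================
Run it against a file on the array in question: a rate in the thousands per second points at a write cache (or an fsync that never reaches the drive), while a rate near the RPM/60 ceiling suggests the flushes are real. A high number is only acceptable when a battery-backed controller cache is known to be doing the caching.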
[ { "msg_contents": "Your expected write speed on a 4 drive RAID10 is two drives worth, probably 160 MB/s, depending on the generation of drives.\r\n\r\nThe expect write speed for a 6 drive RAID5 is 5 drives worth, or about 400 MB/s, sans the RAID5 parity overhead.\r\n\r\n- Luke\r\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Fri Aug 08 10:23:55 2008\r\nSubject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server\r\n\r\nHello list,\r\n\r\nI have a server with a direct attached storage containing 4 15k SAS \r\ndrives and 6 standard SATA drives.\r\nThe server is a quad core xeon with 16GB ram.\r\nBoth server and DAS has dual PERC/6E raid controllers with 512 MB BBU\r\n\r\nThere is 2 raid set configured.\r\nOne RAID 10 containing 4 SAS disks\r\nOne RAID 5 containing 6 SATA disks\r\n\r\nThere is one partition per RAID set with ext2 filesystem.\r\n\r\nI ran the following iozone test which I stole from Joshua Drake's test \r\nat\r\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\r\n\r\nI ran this test against the RAID 5 SATA partition\r\n\r\n#iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u\r\n\r\nWith these random write results\r\n\r\n\tChildren see throughput for 1 random writers \t= 168647.33 KB/sec\r\n\tParent sees throughput for 1 random writers \t= 168413.61 KB/sec\r\n\tMin throughput per process \t\t\t= 168647.33 KB/sec\r\n\tMax throughput per process \t\t\t= 168647.33 KB/sec\r\n\tAvg throughput per process \t\t\t= 168647.33 KB/sec\r\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\r\n\tCPU utilization: Wall time 6.072 CPU time 0.540 CPU \r\nutilization 8.89 %\r\n\r\nAlmost 170 MB/sek. Not bad for 6 standard SATA drives.\r\n\r\nThen I ran the same thing against the RAID 10 SAS partition\r\n\r\n\tChildren see throughput for 1 random writers \t= 68816.25 KB/sec\r\n\tParent sees throughput for 1 random writers \t= 68767.90 KB/sec\r\n\tMin throughput per process \t\t\t= 68816.25 KB/sec\r\n\tMax throughput per process \t\t\t= 68816.25 KB/sec\r\n\tAvg throughput per process \t\t\t= 68816.25 KB/sec\r\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\r\n\tCPU utilization: Wall time 14.880 CPU time 0.520 CPU \r\nutilization 3.49 %\r\n\r\nWhat only 70 MB/sek?\r\n\r\nIs it possible that the 2 more spindles for the SATA drives makes that \r\npartition soooo much faster? Even though the disks and the RAID \r\nconfiguration should be slower?\r\nIt feels like there is something fishy going on. 
Maybe the RAID 10 \r\nimplementation on the PERC/6e is crap?\r\n\r\nAny pointers, suggestion, ideas?\r\n\r\nI'm going to change the RAID 10 to a RAID 5 and test again and see \r\nwhat happens.\r\n\r\nCheers,\r\nHenke\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n\n\n\n\nRe: [PERFORM] Filesystem benchmarking for pg 8.3.3 server\n\n\n\nYour expected write speed on a 4 drive RAID10 is two drives worth, probably 160 MB/s, depending on the generation of drives.\n\r\nThe expect write speed for a 6 drive RAID5 is 5 drives worth, or about 400 MB/s, sans the RAID5 parity overhead.\n\r\n- Luke\n\r\n----- Original Message -----\r\nFrom: [email protected] <[email protected]>\r\nTo: [email protected] <[email protected]>\r\nSent: Fri Aug 08 10:23:55 2008\r\nSubject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server\n\r\nHello list,\n\r\nI have a server with a direct attached storage containing 4 15k SAS \r\ndrives and 6 standard SATA drives.\r\nThe server is a quad core xeon with 16GB ram.\r\nBoth server and DAS has dual PERC/6E raid controllers with 512 MB BBU\n\r\nThere is 2 raid set configured.\r\nOne RAID 10 containing 4 SAS disks\r\nOne RAID 5 containing 6 SATA disks\n\r\nThere is one partition per RAID set with ext2 filesystem.\n\r\nI ran the following iozone test which I stole from Joshua Drake's test \r\nat\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\r\nI ran this test against the RAID 5 SATA partition\n\r\n#iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u\n\r\nWith these random write results\n\r\n        Children see throughput for 1 random writers    =  168647.33 KB/sec\r\n        Parent sees throughput for 1 random writers     =  168413.61 KB/sec\r\n        Min throughput per process                      =  168647.33 KB/sec\r\n        Max throughput per process                      =  168647.33 KB/sec\r\n        Avg throughput per process                      =  168647.33 KB/sec\r\n        Min xfer                                        = 1024000.00 KB\r\n        CPU utilization: Wall time    6.072    CPU time    0.540    CPU \r\nutilization   8.89 %\n\r\nAlmost 170 MB/sek. Not bad for 6 standard SATA drives.\n\r\nThen I ran the same thing against the RAID 10 SAS partition\n\r\n        Children see throughput for 1 random writers    =   68816.25 KB/sec\r\n        Parent sees throughput for 1 random writers     =   68767.90 KB/sec\r\n        Min throughput per process                      =   68816.25 KB/sec\r\n        Max throughput per process                      =   68816.25 KB/sec\r\n        Avg throughput per process                      =   68816.25 KB/sec\r\n        Min xfer                                        = 1024000.00 KB\r\n        CPU utilization: Wall time   14.880    CPU time    0.520    CPU \r\nutilization   3.49 %\n\r\nWhat only 70 MB/sek?\n\r\nIs it possible that the 2 more spindles for the SATA drives makes that \r\npartition soooo much faster? Even though the disks and the RAID \r\nconfiguration should be slower?\r\nIt feels like there is something fishy going on. 
Maybe the RAID 10 \r\nimplementation on the PERC/6e is crap?\n\r\nAny pointers, suggestion, ideas?\n\r\nI'm going to change the RAID 10 to a RAID 5 and test again and see \r\nwhat happens.\n\r\nCheers,\r\nHenke\n\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 8 Aug 2008 10:53:33 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "But random writes should be faster on a RAID10 as it doesn't need to \ncalculate parity. That is why people suggest RAID 10 for datases, \ncorrect?\n\nI can understand that RAID5 can be faster with sequential writes.\n\n//Henke\n\n8 aug 2008 kl. 16.53 skrev Luke Lonergan:\n\n> Your expected write speed on a 4 drive RAID10 is two drives worth, \n> probably 160 MB/s, depending on the generation of drives.\n>\n> The expect write speed for a 6 drive RAID5 is 5 drives worth, or \n> about 400 MB/s, sans the RAID5 parity overhead.\n>\n> - Luke\n>\n> ----- Original Message -----\n> From: [email protected] <[email protected] \n> >\n> To: [email protected] <pgsql- \n> [email protected]>\n> Sent: Fri Aug 08 10:23:55 2008\n> Subject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server\n>\n> Hello list,\n>\n> I have a server with a direct attached storage containing 4 15k SAS\n> drives and 6 standard SATA drives.\n> The server is a quad core xeon with 16GB ram.\n> Both server and DAS has dual PERC/6E raid controllers with 512 MB BBU\n>\n> There is 2 raid set configured.\n> One RAID 10 containing 4 SAS disks\n> One RAID 5 containing 6 SATA disks\n>\n> There is one partition per RAID set with ext2 filesystem.\n>\n> I ran the following iozone test which I stole from Joshua Drake's test\n> at\n> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n>\n> I ran this test against the RAID 5 SATA partition\n>\n> #iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u\n>\n> With these random write results\n>\n> Children see throughput for 1 random writers = 168647.33 \n> KB/sec\n> Parent sees throughput for 1 random writers = 168413.61 \n> KB/sec\n> Min throughput per process = 168647.33 \n> KB/sec\n> Max throughput per process = 168647.33 \n> KB/sec\n> Avg throughput per process = 168647.33 \n> KB/sec\n> Min xfer = 1024000.00 \n> KB\n> CPU utilization: Wall time 6.072 CPU time 0.540 \n> CPU\n> utilization 8.89 %\n>\n> Almost 170 MB/sek. Not bad for 6 standard SATA drives.\n>\n> Then I ran the same thing against the RAID 10 SAS partition\n>\n> Children see throughput for 1 random writers = 68816.25 \n> KB/sec\n> Parent sees throughput for 1 random writers = 68767.90 \n> KB/sec\n> Min throughput per process = 68816.25 \n> KB/sec\n> Max throughput per process = 68816.25 \n> KB/sec\n> Avg throughput per process = 68816.25 \n> KB/sec\n> Min xfer = 1024000.00 \n> KB\n> CPU utilization: Wall time 14.880 CPU time 0.520 \n> CPU\n> utilization 3.49 %\n>\n> What only 70 MB/sek?\n>\n> Is it possible that the 2 more spindles for the SATA drives makes that\n> partition soooo much faster? Even though the disks and the RAID\n> configuration should be slower?\n> It feels like there is something fishy going on. 
Maybe the RAID 10\n> implementation on the PERC/6e is crap?\n>\n> Any pointers, suggestion, ideas?\n>\n> I'm going to change the RAID 10 to a RAID 5 and test again and see\n> what happens.\n>\n> Cheers,\n> Henke\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\nBut random writes should be faster on a RAID10 as it doesn't need to calculate parity. That is why people suggest RAID 10 for datases, correct?I can understand that RAID5 can be faster with sequential writes.//Henke8 aug 2008 kl. 16.53 skrev Luke Lonergan: Your expected write speed on a 4 drive RAID10 is two drives worth, probably 160 MB/s, depending on the generation of drives. The expect write speed for a 6 drive RAID5 is 5 drives worth, or about 400 MB/s, sans the RAID5 parity overhead. - Luke ----- Original Message ----- From: [email protected] <[email protected]> To: [email protected] <[email protected]> Sent: Fri Aug 08 10:23:55 2008 Subject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server Hello list, I have a server with a direct attached storage containing 4 15k SAS  drives and 6 standard SATA drives. The server is a quad core xeon with 16GB ram. Both server and DAS has dual PERC/6E raid controllers with 512 MB BBU There is 2 raid set configured. One RAID 10 containing 4 SAS disks One RAID 5 containing 6 SATA disks There is one partition per RAID set with ext2 filesystem. I ran the following iozone test which I stole from Joshua Drake's test  at http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ I ran this test against the RAID 5 SATA partition #iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u With these random write results         Children see throughput for 1 random writers    =  168647.33 KB/sec         Parent sees throughput for 1 random writers     =  168413.61 KB/sec         Min throughput per process                      =  168647.33 KB/sec         Max throughput per process                      =  168647.33 KB/sec         Avg throughput per process                      =  168647.33 KB/sec         Min xfer                                        = 1024000.00 KB         CPU utilization: Wall time    6.072    CPU time    0.540    CPU  utilization   8.89 % Almost 170 MB/sek. Not bad for 6 standard SATA drives. Then I ran the same thing against the RAID 10 SAS partition         Children see throughput for 1 random writers    =   68816.25 KB/sec         Parent sees throughput for 1 random writers     =   68767.90 KB/sec         Min throughput per process                      =   68816.25 KB/sec         Max throughput per process                      =   68816.25 KB/sec         Avg throughput per process                      =   68816.25 KB/sec         Min xfer                                        = 1024000.00 KB         CPU utilization: Wall time   14.880    CPU time    0.520    CPU  utilization   3.49 % What only 70 MB/sek? Is it possible that the 2 more spindles for the SATA drives makes that  partition soooo much faster? Even though the disks and the RAID  configuration should be slower? It feels like there is something fishy going on. Maybe the RAID 10  implementation on the PERC/6e is crap? Any pointers, suggestion, ideas? I'm going to change the RAID 10 to a RAID 5 and test again and see  what happens. 
Cheers, Henke -- Sent via pgsql-performance mailing list ([email protected]) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 8 Aug 2008 17:08:02 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Fri, Aug 8, 2008 at 8:08 AM, Henrik <[email protected]> wrote:\n> But random writes should be faster on a RAID10 as it doesn't need to\n> calculate parity. That is why people suggest RAID 10 for datases, correct?\n> I can understand that RAID5 can be faster with sequential writes.\n\nThere is some data here that does not support that RAID5 can be faster\nthan RAID10 for sequential writes:\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nRegards,\nMark\n", "msg_date": "Fri, 8 Aug 2008 09:44:33 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "\n8 aug 2008 kl. 18.44 skrev Mark Wong:\n\n> On Fri, Aug 8, 2008 at 8:08 AM, Henrik <[email protected]> wrote:\n>> But random writes should be faster on a RAID10 as it doesn't need to\n>> calculate parity. That is why people suggest RAID 10 for datases, \n>> correct?\n>> I can understand that RAID5 can be faster with sequential writes.\n>\n> There is some data here that does not support that RAID5 can be faster\n> than RAID10 for sequential writes:\n>\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\nI'm amazed by the big difference on hardware vs software raid.\n\nI set up e new Dell(!) system against a MD1000 DAS with singel quad \ncore 2,33 Ghz, 16GB RAM and Perc/6E raid controllers with 512MB BBU.\n\nI set up a RAID 10 on 4 15K SAS disks.\n\nI ran IOZone against this partition with ext2 filesystem and got the \nfollowing results.\n\nsafeuser@safecube04:/$ iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k - \n+u -F /database/iotest\n\tIozone: Performance Test of File I/O\n\t Version $Revision: 3.279 $\n\t\tCompiled for 64 bit mode.\n\t\tBuild: linux\n\n\tChildren see throughput for 1 initial writers \t= 254561.23 KB/sec\n\tParent sees throughput for 1 initial writers \t= 253935.07 KB/sec\n\tMin throughput per process \t\t\t= 254561.23 KB/sec\n\tMax throughput per process \t\t\t= 254561.23 KB/sec\n\tAvg throughput per process \t\t\t= 254561.23 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU Utilization: Wall time 4.023 CPU time 0.740 CPU \nutilization 18.40 %\n\n\n\tChildren see throughput for 1 rewriters \t= 259640.61 KB/sec\n\tParent sees throughput for 1 rewriters \t= 259351.20 KB/sec\n\tMin throughput per process \t\t\t= 259640.61 KB/sec\n\tMax throughput per process \t\t\t= 259640.61 KB/sec\n\tAvg throughput per process \t\t\t= 259640.61 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 3.944 CPU time 0.460 CPU \nutilization 11.66 %\n\n\n\tChildren see throughput for 1 readers \t\t= 2931030.50 KB/sec\n\tParent sees throughput for 1 readers \t\t= 2877172.20 KB/sec\n\tMin throughput per process \t\t\t= 2931030.50 KB/sec\n\tMax throughput per process \t\t\t= 2931030.50 KB/sec\n\tAvg throughput per process \t\t\t= 2931030.50 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 0.349 CPU time 0.340 CPU \nutilization 97.32 %\n\n\n\tChildren see throughput for 1 random readers \t= 2534182.50 KB/sec\n\tParent sees throughput for 1 random readers \t= 2465408.13 KB/sec\n\tMin throughput per 
process \t\t\t= 2534182.50 KB/sec\n\tMax throughput per process \t\t\t= 2534182.50 KB/sec\n\tAvg throughput per process \t\t\t= 2534182.50 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 0.404 CPU time 0.400 CPU \nutilization 98.99 %\n\n\tChildren see throughput for 1 random writers \t= 68816.25 KB/sec\n\tParent sees throughput for 1 random writers \t= 68767.90 KB/sec\n\tMin throughput per process \t\t\t= 68816.25 KB/sec\n\tMax throughput per process \t\t\t= 68816.25 KB/sec\n\tAvg throughput per process \t\t\t= 68816.25 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 14.880 CPU time 0.520 CPU \nutilization 3.49 %\n\n\nSo compared to the HP 8000 benchmarks this setup is even better than \nthe software raid.\n\nBut I'm skeptical of iozones results as when I run the same test \nagains 6 standard SATA drives in RAID5 I got random writes of 170MB / \nsek (!). Sure 2 more spindles but still.\n\nCheers,\nHenke\n\n", "msg_date": "Fri, 8 Aug 2008 21:21:54 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On 09/08/2008, Henrik <[email protected]> wrote:\n> But random writes should be faster on a RAID10 as it doesn't need to\n> calculate parity. That is why people suggest RAID 10 for datases, correct?\n\nIf it had 10 spindles as opposed to 4 ... with 4 drives the \"split\" is (because\nyou're striping and mirroring) like writing to two.\n\n\nCheers,\nAndrej\n", "msg_date": "Sat, 9 Aug 2008 08:48:05 +1200", "msg_from": "\"Andrej Ricnik-Bay\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Fri, 8 Aug 2008, Henrik wrote:\n\n> But random writes should be faster on a RAID10 as it doesn't need to \n> calculate parity. That is why people suggest RAID 10 for datases, correct?\n>\n> I can understand that RAID5 can be faster with sequential writes.\n\nthe key word here is \"can\" be faster, it depends on the exact \nimplementation, stripe size, OS caching, etc.\n\nthe ideal situation would be that the OS would flush exactly one stripe of \ndata at a time (aligned with the array) and no reads would need to be \ndone, mearly calculate the parity info for the new data and write it all.\n\nthe worst case is when the write size is small in relation to the stripe \nsize and crosses the stripe boundry. In that case the system needs to read \ndata from multiple stripes to calculate the new parity and write the \ndata and parity data.\n\nI don't know any systems (software or hardware) that meet the ideal \nsituation today.\n\nwhen comparing software and hardware raid, one other thing to remember is \nthat CPU and I/O bandwidth that's used for software raid is not available \nto do other things.\n\nso a system that benchmarks much faster with software raid could end up \nbeing significantly slower in practice if it needs that CPU and I/O \nbandwidth for other purposes.\n\nexamples could be needing the CPU/memory capacity to search through \namounts of RAM once the data is retrieved from disk, or finding that you \nhave enough network I/O that it combines with your disk I/O to saturate \nyour system busses.\n\nDavid Lang\n\n\n> //Henke\n>\n> 8 aug 2008 kl. 
16.53 skrev Luke Lonergan:\n>\n>> Your expected write speed on a 4 drive RAID10 is two drives worth, probably \n>> 160 MB/s, depending on the generation of drives.\n>> \n>> The expect write speed for a 6 drive RAID5 is 5 drives worth, or about 400 \n>> MB/s, sans the RAID5 parity overhead.\n>> \n>> - Luke\n>> \n>> ----- Original Message -----\n>> From: [email protected] \n>> <[email protected]>\n>> To: [email protected] <[email protected]>\n>> Sent: Fri Aug 08 10:23:55 2008\n>> Subject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server\n>> \n>> Hello list,\n>> \n>> I have a server with a direct attached storage containing 4 15k SAS\n>> drives and 6 standard SATA drives.\n>> The server is a quad core xeon with 16GB ram.\n>> Both server and DAS has dual PERC/6E raid controllers with 512 MB BBU\n>> \n>> There is 2 raid set configured.\n>> One RAID 10 containing 4 SAS disks\n>> One RAID 5 containing 6 SATA disks\n>> \n>> There is one partition per RAID set with ext2 filesystem.\n>> \n>> I ran the following iozone test which I stole from Joshua Drake's test\n>> at\n>> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n>> \n>> I ran this test against the RAID 5 SATA partition\n>> \n>> #iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u\n>> \n>> With these random write results\n>>\n>> Children see throughput for 1 random writers = 168647.33 KB/sec\n>> Parent sees throughput for 1 random writers = 168413.61 KB/sec\n>> Min throughput per process = 168647.33 KB/sec\n>> Max throughput per process = 168647.33 KB/sec\n>> Avg throughput per process = 168647.33 KB/sec\n>> Min xfer = 1024000.00 KB\n>> CPU utilization: Wall time 6.072 CPU time 0.540 CPU\n>> utilization 8.89 %\n>> \n>> Almost 170 MB/sek. Not bad for 6 standard SATA drives.\n>> \n>> Then I ran the same thing against the RAID 10 SAS partition\n>>\n>> Children see throughput for 1 random writers = 68816.25 KB/sec\n>> Parent sees throughput for 1 random writers = 68767.90 KB/sec\n>> Min throughput per process = 68816.25 KB/sec\n>> Max throughput per process = 68816.25 KB/sec\n>> Avg throughput per process = 68816.25 KB/sec\n>> Min xfer = 1024000.00 KB\n>> CPU utilization: Wall time 14.880 CPU time 0.520 CPU\n>> utilization 3.49 %\n>> \n>> What only 70 MB/sek?\n>> \n>> Is it possible that the 2 more spindles for the SATA drives makes that\n>> partition soooo much faster? Even though the disks and the RAID\n>> configuration should be slower?\n>> It feels like there is something fishy going on. Maybe the RAID 10\n>> implementation on the PERC/6e is crap?\n>> \n>> Any pointers, suggestion, ideas?\n>> \n>> I'm going to change the RAID 10 to a RAID 5 and test again and see\n>> what happens.\n>> \n>> Cheers,\n>> Henke\n>> \n>> \n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>> \n>\n", "msg_date": "Fri, 8 Aug 2008 19:24:14 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "OK, changed the SAS RAID 10 to RAID 5 and now my random writes are \nhanding 112 MB/ sek. So it is almsot twice as fast as the RAID10 with \nthe same disks. Any ideas why?\n\nIs the iozone tests faulty?\n\nWhat is your suggestions? 
Trust the IOZone tests and use RAID5 instead \nof RAID10, or go for RAID10 as it should be faster and will be more \nsuited when we add more disks in the future?\n\nI'm a little confused by the benchmarks.\n\nThis is from the RAID5 tests on 4 SAS 15K drives...\n\niozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u -F /database/iotest\n\n\tChildren see throughput for 1 random writers \t= 112074.58 KB/sec\n\tParent sees throughput for 1 random writers \t= 111962.80 KB/sec\n\tMin throughput per process \t\t\t= 112074.58 KB/sec\n\tMax throughput per process \t\t\t= 112074.58 KB/sec\n\tAvg throughput per process \t\t\t= 112074.58 KB/sec\n\tMin xfer \t\t\t\t\t= 1024000.00 KB\n\tCPU utilization: Wall time 9.137 CPU time 0.510 CPU \nutilization 5.58 %\n\n\n\n\n9 aug 2008 kl. 04.24 skrev [email protected]:\n\n> On Fri, 8 Aug 2008, Henrik wrote:\n>\n>> But random writes should be faster on a RAID10 as it doesn't need \n>> to calculate parity. That is why people suggest RAID 10 for \n>> datases, correct?\n>>\n>> I can understand that RAID5 can be faster with sequential writes.\n>\n> the key word here is \"can\" be faster, it depends on the exact \n> implementation, stripe size, OS caching, etc.\n>\n> the ideal situation would be that the OS would flush exactly one \n> stripe of data at a time (aligned with the array) and no reads would \n> need to be done, mearly calculate the parity info for the new data \n> and write it all.\n>\n> the worst case is when the write size is small in relation to the \n> stripe size and crosses the stripe boundry. In that case the system \n> needs to read data from multiple stripes to calculate the new parity \n> and write the data and parity data.\n>\n> I don't know any systems (software or hardware) that meet the ideal \n> situation today.\n>\n> when comparing software and hardware raid, one other thing to \n> remember is that CPU and I/O bandwidth that's used for software raid \n> is not available to do other things.\n>\n> so a system that benchmarks much faster with software raid could end \n> up being significantly slower in practice if it needs that CPU and I/ \n> O bandwidth for other purposes.\n>\n> examples could be needing the CPU/memory capacity to search through \n> amounts of RAM once the data is retrieved from disk, or finding that \n> you have enough network I/O that it combines with your disk I/O to \n> saturate your system busses.\n>\n> David Lang\n>\n>\n>> //Henke\n>>\n>> 8 aug 2008 kl. 
16.53 skrev Luke Lonergan:\n>>\n>>> Your expected write speed on a 4 drive RAID10 is two drives worth, \n>>> probably 160 MB/s, depending on the generation of drives.\n>>> The expect write speed for a 6 drive RAID5 is 5 drives worth, or \n>>> about 400 MB/s, sans the RAID5 parity overhead.\n>>> - Luke\n>>> ----- Original Message -----\n>>> From: [email protected] <[email protected] \n>>> >\n>>> To: [email protected] <[email protected] \n>>> >\n>>> Sent: Fri Aug 08 10:23:55 2008\n>>> Subject: [PERFORM] Filesystem benchmarking for pg 8.3.3 server\n>>> Hello list,\n>>> I have a server with a direct attached storage containing 4 15k SAS\n>>> drives and 6 standard SATA drives.\n>>> The server is a quad core xeon with 16GB ram.\n>>> Both server and DAS has dual PERC/6E raid controllers with 512 MB \n>>> BBU\n>>> There is 2 raid set configured.\n>>> One RAID 10 containing 4 SAS disks\n>>> One RAID 5 containing 6 SATA disks\n>>> There is one partition per RAID set with ext2 filesystem.\n>>> I ran the following iozone test which I stole from Joshua Drake's \n>>> test\n>>> at\n>>> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n>>> I ran this test against the RAID 5 SATA partition\n>>> #iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u\n>>> With these random write results\n>>>\n>>> Children see throughput for 1 random writers = 168647.33 \n>>> KB/sec\n>>> Parent sees throughput for 1 random writers = 168413.61 \n>>> KB/sec\n>>> Min throughput per process = 168647.33 \n>>> KB/sec\n>>> Max throughput per process = 168647.33 \n>>> KB/sec\n>>> Avg throughput per process = 168647.33 \n>>> KB/sec\n>>> Min xfer = 1024000.00 \n>>> KB\n>>> CPU utilization: Wall time 6.072 CPU time 0.540 \n>>> CPU\n>>> utilization 8.89 %\n>>> Almost 170 MB/sek. Not bad for 6 standard SATA drives.\n>>> Then I ran the same thing against the RAID 10 SAS partition\n>>>\n>>> Children see throughput for 1 random writers = 68816.25 \n>>> KB/sec\n>>> Parent sees throughput for 1 random writers = 68767.90 \n>>> KB/sec\n>>> Min throughput per process = 68816.25 \n>>> KB/sec\n>>> Max throughput per process = 68816.25 \n>>> KB/sec\n>>> Avg throughput per process = 68816.25 \n>>> KB/sec\n>>> Min xfer = 1024000.00 \n>>> KB\n>>> CPU utilization: Wall time 14.880 CPU time 0.520 \n>>> CPU\n>>> utilization 3.49 %\n>>> What only 70 MB/sek?\n>>> Is it possible that the 2 more spindles for the SATA drives makes \n>>> that\n>>> partition soooo much faster? Even though the disks and the RAID\n>>> configuration should be slower?\n>>> It feels like there is something fishy going on. 
Maybe the RAID 10\n>>> implementation on the PERC/6e is crap?\n>>> Any pointers, suggestion, ideas?\n>>> I'm going to change the RAID 10 to a RAID 5 and test again and see\n>>> what happens.\n>>> Cheers,\n>>> Henke\n>>> --\n>>> Sent via pgsql-performance mailing list ([email protected] \n>>> )\n>>> To make changes to your subscription:\n>>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 11 Aug 2008 11:17:55 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "\nOn Aug 11, 2008, at 5:17 AM, Henrik wrote:\n\n> OK, changed the SAS RAID 10 to RAID 5 and now my random writes are \n> handing 112 MB/ sek. So it is almsot twice as fast as the RAID10 \n> with the same disks. Any ideas why?\n>\n> Is the iozone tests faulty?\n\n\ndoes IOzone disable the os caches?\nIf not you need to use a size of 2xRAM for true results.\n\nregardless - the test only took 10 seconds of wall time - which isn't \nvery long at all. You'd probably want to run it longer anyway.\n\n\n>\n> iozone -e -i0 -i1 -i2 -i8 -t1 -s 1000m -r 8k -+u -F /database/iotest\n>\n> \tChildren see throughput for 1 random writers \t= 112074.58 KB/sec\n> \tParent sees throughput for 1 random writers \t= 111962.80 KB/sec\n> \tMin throughput per process \t\t\t= 112074.58 KB/sec\n> \tMax throughput per process \t\t\t= 112074.58 KB/sec\n> \tAvg throughput per process \t\t\t= 112074.58 KB/sec\n> \tMin xfer \t\t\t\t\t= 1024000.00 KB\n> \tCPU utilization: Wall time 9.137 CPU time 0.510 CPU \n> utilization 5.58 %\n>\n\n\n--\nJeff Trout <[email protected]>\nhttp://www.stuarthamm.net/\nhttp://www.dellsmartexitin.com/\n\n\n\n", "msg_date": "Mon, 11 Aug 2008 10:01:57 -0400", "msg_from": "Jeff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Aug 11, 2008, at 9:01 AM, Jeff wrote:\n> On Aug 11, 2008, at 5:17 AM, Henrik wrote:\n>\n>> OK, changed the SAS RAID 10 to RAID 5 and now my random writes are \n>> handing 112 MB/ sek. So it is almsot twice as fast as the RAID10 \n>> with the same disks. Any ideas why?\n>>\n>> Is the iozone tests faulty?\n>\n>\n> does IOzone disable the os caches?\n> If not you need to use a size of 2xRAM for true results.\n>\n> regardless - the test only took 10 seconds of wall time - which \n> isn't very long at all. You'd probably want to run it longer anyway.\n\n\nAdditionally, you need to be careful of what size writes you're \nusing. If you're doing random writes that perfectly align with the \nraid stripe size, you'll see virtually no RAID5 overhead, and you'll \nget the performance of N-1 drives, as opposed to RAID10 giving you N/2.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 13 Aug 2008 10:13:34 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "\n13 aug 2008 kl. 17.13 skrev Decibel!:\n\n> On Aug 11, 2008, at 9:01 AM, Jeff wrote:\n>> On Aug 11, 2008, at 5:17 AM, Henrik wrote:\n>>\n>>> OK, changed the SAS RAID 10 to RAID 5 and now my random writes are \n>>> handing 112 MB/ sek. 
So it is almsot twice as fast as the RAID10 \n>>> with the same disks. Any ideas why?\n>>>\n>>> Is the iozone tests faulty?\n>>\n>>\n>> does IOzone disable the os caches?\n>> If not you need to use a size of 2xRAM for true results.\n>>\n>> regardless - the test only took 10 seconds of wall time - which \n>> isn't very long at all. You'd probably want to run it longer anyway.\n>\n>\n> Additionally, you need to be careful of what size writes you're \n> using. If you're doing random writes that perfectly align with the \n> raid stripe size, you'll see virtually no RAID5 overhead, and you'll \n> get the performance of N-1 drives, as opposed to RAID10 giving you N/ \n> 2.\nBut it still needs to do 2 reads and 2 writes for every write, correct?\n\nI did some bonnie++ tests just to give some new more reasonable numbers.\nThis is with RAID10 on 4 SAS 15k drives with write-back cache.\n\nVersion 1.03b ------Sequential Output------ --Sequential Input- \n--Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- \n--Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec \n%CP /sec %CP\nsafecube04 32136M 73245 95 213092 16 89456 11 64923 81 219341 \n16 839.9 1\n ------Sequential Create------ --------Random \nCreate--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- \n-Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec \n%CP /sec %CP\n 16 6178 99 +++++ +++ +++++ +++ 6452 100 +++++ +++ \n20633 99\nsafecube04,32136M, \n73245,95,213092,16,89456,11,64923,81,219341,16,839.9,1,16,6178,99,++++ \n+,+++,+++++,+++,6452,100,+++++,+++,20633,99\n\n\n\n\n\n\n>\n> -- \n> Decibel!, aka Jim C. Nasby, Database Architect [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n>\n\n", "msg_date": "Wed, 13 Aug 2008 21:54:43 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Aug 13, 2008, at 2:54 PM, Henrik wrote:\n>> Additionally, you need to be careful of what size writes you're \n>> using. If you're doing random writes that perfectly align with the \n>> raid stripe size, you'll see virtually no RAID5 overhead, and \n>> you'll get the performance of N-1 drives, as opposed to RAID10 \n>> giving you N/2.\n> But it still needs to do 2 reads and 2 writes for every write, \n> correct?\n\n\nIf you are completely over-writing an entire stripe, there's no \nreason to read the existing data; you would just calculate the parity \ninformation from the new data. Any good controller should take that \napproach.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 16 Aug 2008 13:49:05 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "On Sat, 16 Aug 2008, Decibel! wrote:\n\n> On Aug 13, 2008, at 2:54 PM, Henrik wrote:\n>>> Additionally, you need to be careful of what size writes you're using. 
If \n>>> you're doing random writes that perfectly align with the raid stripe size, \n>>> you'll see virtually no RAID5 overhead, and you'll get the performance of \n>>> N-1 drives, as opposed to RAID10 giving you N/2.\n>> But it still needs to do 2 reads and 2 writes for every write, correct?\n>\n>\n> If you are completely over-writing an entire stripe, there's no reason to \n> read the existing data; you would just calculate the parity information from \n> the new data. Any good controller should take that approach.\n\nin theory yes, in practice the OS writes usually aren't that large and \naligned, and as a result most raid controllers (and software) don't have \nthe special-case code to deal with it.\n\nthere's discussion of these issues, but not much more then that.\n\nDavid Lang\n", "msg_date": "Sat, 16 Aug 2008 17:15:41 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "<[email protected]> writes:\n\n>> If you are completely over-writing an entire stripe, there's no reason to\n>> read the existing data; you would just calculate the parity information from\n>> the new data. Any good controller should take that approach.\n>\n> in theory yes, in practice the OS writes usually aren't that large and aligned,\n> and as a result most raid controllers (and software) don't have the\n> special-case code to deal with it.\n\nI'm pretty sure all half-decent controllers and software do actually. This is\none major reason that large (hopefully battery backed) caches help RAID-5\ndisproportionately. The larger the cache the more likely it'll be able to wait\nuntil the entire raid stripe is replaced avoid having to read in the old\nparity.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's 24x7 Postgres support!\n", "msg_date": "Sun, 17 Aug 2008 11:55:36 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" }, { "msg_contents": "\"Gregory Stark\" <[email protected]> writes:\n\n> <[email protected]> writes:\n>\n>>> If you are completely over-writing an entire stripe, there's no reason to\n>>> read the existing data; you would just calculate the parity information from\n>>> the new data. Any good controller should take that approach.\n>>\n>> in theory yes, in practice the OS writes usually aren't that large and aligned,\n>> and as a result most raid controllers (and software) don't have the\n>> special-case code to deal with it.\n>\n> I'm pretty sure all half-decent controllers and software do actually. This is\n> one major reason that large (hopefully battery backed) caches help RAID-5\n> disproportionately. The larger the cache the more likely it'll be able to wait\n> until the entire raid stripe is replaced avoid having to read in the old\n> parity.\n\nOr now that I think about it, replace two or more blocks from the same set of\nparity bits. It only has to recalculate the parity bits once for all those\nblocks instead of for every single block write.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's PostGIS support!\n", "msg_date": "Sun, 17 Aug 2008 12:17:55 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Filesystem benchmarking for pg 8.3.3 server" } ]
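For anyone following the parity discussion above, the back-of-the-envelope arithmetic is worth writing down. A small random write on classic RAID5 costs four physical I/Os (read old data, read old parity, write new data, write new parity), while RAID10 costs two (one write per mirror side); only a full-stripe write -- or a battery-backed cache large enough to gather one, as Gregory Stark notes -- lets RAID5 skip the reads. A toy model, using a purely assumed figure of 180 random IOPS per 15k drive and the 4-disk arrays from the thread:
==========================================================
/*
** raid_write_model.c - toy model only; real controllers, stripe sizes
** and write-back caches move these numbers around a great deal.
** Assumes every write is a small random write with no cache help.
*/
#include <stdio.h>

int main(void) {
    double drive_iops = 180.0;   /* assumed: one 15k SAS drive, random I/O */
    int disks = 4;

    /* RAID10: two physical writes per logical write (one per mirror side). */
    double raid10 = disks * drive_iops / 2.0;

    /* RAID5 small write: read old data + read old parity
    ** + write new data + write new parity = four physical I/Os. */
    double raid5 = disks * drive_iops / 4.0;

    printf("%d disks at %.0f IOPS each\n", disks, drive_iops);
    printf("RAID10 small random writes/sec: about %.0f\n", raid10);
    printf("RAID5  small random writes/sec: about %.0f\n", raid5);
    printf("(a full-stripe write needs no reads, which is why a large\n"
           " write-back cache narrows or hides the gap)\n");
    return 0;
}
==========================================================
With those assumed numbers the 4-disk RAID10 lands near 360 small writes per second against about 180 for RAID5, which is the usual basis for the RAID10-for-databases advice. The iozone and bonnie++ figures earlier in the thread diverge from that presumably because the 512 MB write-back cache and cache-sized test files let the controller turn much of the workload into full-stripe or sequential writes, as Jeff and Decibel! suggest.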
[ { "msg_contents": "Hello,\n\nI'm trying to install a solution to permit me to :\n- Secure the datas, without RAID\n- Giving ability to increase the potentiality of the database towards \nthe needs.\n\nI have read about slony, DRBD, pgpool....\n\nI don't find the good system to do what I want.\n\nI manage for now 50 millions of request per month.\n\nI will reach 100 millions in the end of the year I suppose.\n\nThere is 2 difficulties :\n1 - is the storage : to get faster access,it is recommend to use SAS 15 \n000 tps. But the disk I can get are 149 GO of space. As the database is \ngrowing par 1,7 Go per week at the moment, it will reach is maximum in 3 \nmonth. I can add 3 disk at least so It can go to 9 month. What to do \nafter, and especially what to do today to prevent it?\n2 - The machine will treat more and more simultaneous entrance, so I \nneed to loadbalance those inserts/updates on several machine and to \nreplicate the datas between them. It's not a real problem if the data \nare asynchrony.\n\nI'm migrating to postgresql 8.3.3.\n\nThanks for all your remarks, suggestions and helps\n\nDavid\n", "msg_date": "Sat, 09 Aug 2008 19:29:39 +0200", "msg_from": "dforum <[email protected]>", "msg_from_op": true, "msg_subject": "Distant mirroring" }, { "msg_contents": "On Sat, Aug 9, 2008 at 11:29 AM, dforum <[email protected]> wrote:\n> Hello,\n>\n> I'm trying to install a solution to permit me to :\n> - Secure the datas, without RAID\n\nNothing beats a simple mirror set for simplicity while protecting the\ndata, and for a pretty cheap cost. How much is your data worth?\n\n> - Giving ability to increase the potentiality of the database towards the\n> needs.\n>\n> I have read about slony, DRBD, pgpool....\n>\n> I don't find the good system to do what I want.\n>\n> I manage for now 50 millions of request per month.\n\nAssuming they all happen from 9 to 5 and during business days only,\nthat's about 86 transactions per second. Well within the realm of a\nsingle mirror set to keep up if you don't make your db work real fat.\n\n> I will reach 100 millions in the end of the year I suppose.\n\nThat takes us to 172 transactions per second.\n\n> There is 2 difficulties :\n> 1 - is the storage : to get faster access,it is recommend to use SAS 15 000\n> tps. But the disk I can get are 149 GO of space. As the database is growing\n> par 1,7 Go per week at the moment, it will reach is maximum in 3 month. I\n> can add 3 disk at least so It can go to 9 month. What to do after, and\n> especially what to do today to prevent it?\n\nNo, don't piecemeal just enough to outrun the disk space boogieman\neach month. Buy enough to last you at least 1 year in the future.\nMore if you can afford it.\n\n> 2 - The machine will treat more and more simultaneous entrance, so I need to\n> loadbalance those inserts/updates on several machine and to replicate the\n> datas between them. It's not a real problem if the data are asynchrony.\n\nThen PostgreSQL might not be your best choice. But I think you're\nwrong. You can easily handle the load you're talking about on a\nmid-sized box for about $5000 to $10000.\n\nYou can use 7200 rpm SATA drives, probably 8 to 12 or so, in a RAID-10\nwith a battery backed cache and hit 172 transactions per second.\n\nGiven the 1+ G a week storage requirement, you should definitely look\nat using inheritance to do partitioning. Then use slony or something\nto replicate the data into the back office for other things. 
There's\nalways other things most the time that are read only.\n", "msg_date": "Sat, 9 Aug 2008 13:29:52 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distant mirroring" }, { "msg_contents": "Houlala\n\nI got headache !!!\n\nSo please help...........;;\n\n\"Assuming they all happen from 9 to 5 and during business days only,\nthat's about 86 transactions per second. Well within the realm of a\nsingle mirror set to keep up if you don't make your db work real fat.\"\n\nOK i like, But my reality is that to make an insert of a table that have \n27 millions of entrance it took 200 ms.\nso it took between 2 minutes and 10 minutes to treat 3000 records and \ndispatch/agregate in other tables. And I have for now 20000 records \nevery 3 minutes.\n\nAt the moment I have a\n\nI have a Linux 2.6.24.2-xxxx-std-ipv4-64 #3 SMP Tue Feb 12 12:27:47 CET \n2008 x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux\nwith 8Gb of memory. Using sata II disk in RAID 1 (I known that is bad, \nbut it would change has quickly I can).\n\nI got 1-2 GO per week\n\nI can change to 2 kinds of server, using 8.3.3 postgresql server, and \neven taking more sever if need. But it is the biggest computer that I \ncan rent for now.\n\nIntel 2x Xeon X5355\n2x 4x 2.66 GHz\nL2: 8Mo, FSB: 1333MHz\nDouble Quadruple Coeur\n64 bits\n12 Go FBDIMM DDR2\n2x 147 Go\nSAS 15 000 tr/min\nRAID 1 HARD\n\nI can add 500 Go under sataII\n\nOR\n\nIntel 2x Xeon X5355\n2x 4x 2.66 GHz\nL2: 8Mo, FSB: 1333MHz\nDouble Quadruple Coeur\n64 bits\n12 Go FBDIMM DDR2\n5x 750 Go (2.8 To **)\nSATA2 RAID HARD 5\n\nI can add 500 Go under sataII\n\n\nAfter several tunings, reading, ect...\n\nThe low speed seems to be definetly linked to the SATA II in RAID 1.\n\nSo I need a solution to be able to 1st supporting more transaction, \nsecondly I need to secure the data, and being able to load balancing the \ncharge.\n\nPlease, give me any advise or suggestion that can help me.\n\nregards to all\n\nDavid\n\n\n\n\n\nScott Marlowe a �crit :\n> On Sat, Aug 9, 2008 at 11:29 AM, dforum <[email protected]> wrote:\n>> Hello,\n>>\n>> I'm trying to install a solution to permit me to :\n>> - Secure the datas, without RAID\n> \n> Nothing beats a simple mirror set for simplicity while protecting the\n> data, and for a pretty cheap cost. How much is your data worth?\n> \n>> - Giving ability to increase the potentiality of the database towards the\n>> needs.\n>>\n>> I have read about slony, DRBD, pgpool....\n>>\n>> I don't find the good system to do what I want.\n>>\n>> I manage for now 50 millions of request per month.\n> \n> Assuming they all happen from 9 to 5 and during business days only,\n> that's about 86 transactions per second. Well within the realm of a\n> single mirror set to keep up if you don't make your db work real fat.\n> \n>> I will reach 100 millions in the end of the year I suppose.\n> \n> That takes us to 172 transactions per second.\n> \n>> There is 2 difficulties :\n>> 1 - is the storage : to get faster access,it is recommend to use SAS 15 000\n>> tps. But the disk I can get are 149 GO of space. As the database is growing\n>> par 1,7 Go per week at the moment, it will reach is maximum in 3 month. I\n>> can add 3 disk at least so It can go to 9 month. What to do after, and\n>> especially what to do today to prevent it?\n> \n> No, don't piecemeal just enough to outrun the disk space boogieman\n> each month. 
Buy enough to last you at least 1 year in the future.\n> More if you can afford it.\n> \n>> 2 - The machine will treat more and more simultaneous entrance, so I need to\n>> loadbalance those inserts/updates on several machine and to replicate the\n>> datas between them. It's not a real problem if the data are asynchrony.\n> \n> Then PostgreSQL might not be your best choice. But I think you're\n> wrong. You can easily handle the load you're talking about on a\n> mid-sized box for about $5000 to $10000.\n> \n> You can use 7200 rpm SATA drives, probably 8 to 12 or so, in a RAID-10\n> with a battery backed cache and hit 172 transactions per second.\n> \n> Given the 1+ G a week storage requirement, you should definitely look\n> at using inheritance to do partitioning. Then use slony or something\n> to replicate the data into the back office for other things. There's\n> always other things most the time that are read only.\n> \n\n-- \n<http://www.1st-affiliation.fr>\n\n*David Bigand\n*Pr�sident Directeur G�n�rale*\n*51 chemin des moulins\n73000 CHAMBERY - FRANCE\n\nWeb : htttp://www.1st-affiliation.fr\nEmail : [email protected]\nTel. : +33 479 696 685\nMob. : +33 666 583 836\nSkype : firstaffiliation_support\n\n", "msg_date": "Mon, 11 Aug 2008 16:26:31 +0200", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distant mirroring" }, { "msg_contents": "\n> -----Mensaje original-----\n> De: [email protected] \n> [mailto:[email protected]] En nombre de dforums\n> Enviado el: Lunes, 11 de Agosto de 2008 11:27\n> Para: Scott Marlowe; [email protected]\n> Asunto: Re: [PERFORM] Distant mirroring\n> \n> Houlala\n> \n> I got headache !!!\n> \n> So please help...........;;\n> \n> \"Assuming they all happen from 9 to 5 and during business \n> days only, that's about 86 transactions per second. Well \n> within the realm of a single mirror set to keep up if you \n> don't make your db work real fat.\"\n> \n> OK i like, But my reality is that to make an insert of a \n> table that have\n> 27 millions of entrance it took 200 ms.\n> so it took between 2 minutes and 10 minutes to treat 3000 \n> records and dispatch/agregate in other tables. And I have for \n> now 20000 records every 3 minutes.\n> \n\nYou must try to partition that table. It should considerably speed up your\ninserts.\n\n> \n> So I need a solution to be able to 1st supporting more \n> transaction, secondly I need to secure the data, and being \n> able to load balancing the charge.\n> \n> Please, give me any advise or suggestion that can help me.\n> \n\nHave you taken into consideration programming a solution on BerkeleyDB? Its\nan API that provides a high-performance non-SQL database. With such a\nsolution you could achieve several thousands tps on a much smaller hardware.\nYou could use non-work hours to dump your data to Postgres for SQL support\nfor reporting and such.\n\nRegards,\nFernando\n\n", "msg_date": "Mon, 11 Aug 2008 12:35:22 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distant mirroring" }, { "msg_contents": "On Mon, Aug 11, 2008 at 8:26 AM, dforums <[email protected]> wrote:\n> Houlala\n>\n> I got headache !!!\n>\n> So please help...........;;\n>\n> \"Assuming they all happen from 9 to 5 and during business days only,\n> that's about 86 transactions per second. 
Well within the realm of a\n> single mirror set to keep up if you don't make your db work real fat.\"\n>\n> OK i like, But my reality is that to make an insert of a table that have 27\n> millions of entrance it took 200 ms.\n> so it took between 2 minutes and 10 minutes to treat 3000 records and\n> dispatch/agregate in other tables. And I have for now 20000 records every 3\n> minutes.\n\nCan you partition your data on some logical barrier like a timestamp\nor something? that would probably help a lot. also, are you doing\nall 3000 records in one transaction or individual transactions? If\none at a time, can you batch them together for better performance or\nare you stuck doing them one at a time?\n\n> At the moment I have a\n>\n> I have a Linux 2.6.24.2-xxxx-std-ipv4-64 #3 SMP Tue Feb 12 12:27:47 CET 2008\n> x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux\n> with 8Gb of memory. Using sata II disk in RAID 1 (I known that is bad, but\n> it would change has quickly I can).\n\nYeah, you're gonna be I/O bound as long as you've only got a single\nmirror set. A machine with 8 or 12 SAS 15K drives should make it much\nmore likely you can handle the load.\n\n>\n> I got 1-2 GO per week\n\nDefinitely let's look at partitioning then if we can do it.\n\n> I can change to 2 kinds of server, using 8.3.3 postgresql server, and even\n> taking more sever if need. But it is the biggest computer that I can rent\n> for now.\n>\n> Intel 2x Xeon X5355\n> 2x 4x 2.66 GHz\n> L2: 8Mo, FSB: 1333MHz\n> Double Quadruple Coeur\n> 64 bits\n> 12 Go FBDIMM DDR2\n> 2x 147 Go\n> SAS 15 000 tr/min\n> RAID 1 HARD\n\nAll that memory and CPU power will be wasted on a db with just two\ndrives. Do you at least have a decent RAID controller in that setup?\n\n>\n> I can add 500 Go under sataII\n>\n> OR\n>\n> Intel 2x Xeon X5355\n> 2x 4x 2.66 GHz\n> L2: 8Mo, FSB: 1333MHz\n> Double Quadruple Coeur\n> 64 bits\n> 12 Go FBDIMM DDR2\n> 5x 750 Go (2.8 To **)\n> SATA2 RAID HARD 5\n>\n> I can add 500 Go under sataII\n\nRAID5 is generally a poor choice for a write limited DB. I'd guess\nthat the dual SAS drives above would work better than the 5 SATA\ndrives in RAID 5 here.\n\n> After several tunings, reading, ect...\n>\n> The low speed seems to be definetly linked to the SATA II in RAID 1.\n\nGoing to 15k SAS RAID 1 will just about double your write rate\n(assuming it's a commits/second issue and it likely is). going to a 4\ndisk SAS RAID10 will double that, and so on.\n\n> So I need a solution to be able to 1st supporting more transaction, secondly\n> I need to secure the data, and being able to load balancing the charge.\n\nLook at slony for read only slaves and the master db as write only.\nIf you can handle the slight delay in updates from master to slave.\nOtherwise you'll need sync replication, and that is generally not as\nfast.\n\nTake a look at something like this server:\n\nhttp://www.aberdeeninc.com/abcatg/Stirling-229.htm\n\nWith 8 15k SAS 146G drives it runs around $5k or so. 
Right now all\nthe servers your hosting provider is likely to provide you with are\ngonna be big on CPU and memory and light on I/O, and that's the\nopposite of what you need for databases.\n", "msg_date": "Mon, 11 Aug 2008 10:00:26 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distant mirroring" }, { "msg_contents": "Tx to all.\n\nI reach the same reflection on partitionning the data to those tables.\n\nAnd postgresql is giving very good tools for that with the rules features.\n\nI got the SAS server for immediate fix.\n\nBut I'm looking for buying a machine that will handle my needs for more \nlong time.\n\nRegarding partitionning it seems that I could just use a daily tables \nfor daily treatment and keeping a another one for mass reading. I even \nthings to partition per years or half years.\n\nOne question is on table that have FK constraint, I don't know how to \nmaintain it ? Could I use rules for it too ? Tx for helps\n\nRegards\n\nDavid\n\nScott Marlowe a �crit :\n> On Mon, Aug 11, 2008 at 8:26 AM, dforums <[email protected]> wrote:\n>> Houlala\n>>\n>> I got headache !!!\n>>\n>> So please help...........;;\n>>\n>> \"Assuming they all happen from 9 to 5 and during business days only,\n>> that's about 86 transactions per second. Well within the realm of a\n>> single mirror set to keep up if you don't make your db work real fat.\"\n>>\n>> OK i like, But my reality is that to make an insert of a table that have 27\n>> millions of entrance it took 200 ms.\n>> so it took between 2 minutes and 10 minutes to treat 3000 records and\n>> dispatch/agregate in other tables. And I have for now 20000 records every 3\n>> minutes.\n> \n> Can you partition your data on some logical barrier like a timestamp\n> or something? that would probably help a lot. also, are you doing\n> all 3000 records in one transaction or individual transactions? If\n> one at a time, can you batch them together for better performance or\n> are you stuck doing them one at a time?\n> \n>> At the moment I have a\n>>\n>> I have a Linux 2.6.24.2-xxxx-std-ipv4-64 #3 SMP Tue Feb 12 12:27:47 CET 2008\n>> x86_64 Intel(R) Xeon(R) CPU X5355 @ 2.66GHz GenuineIntel GNU/Linux\n>> with 8Gb of memory. Using sata II disk in RAID 1 (I known that is bad, but\n>> it would change has quickly I can).\n> \n> Yeah, you're gonna be I/O bound as long as you've only got a single\n> mirror set. A machine with 8 or 12 SAS 15K drives should make it much\n> more likely you can handle the load.\n> \n>> I got 1-2 GO per week\n> \n> Definitely let's look at partitioning then if we can do it.\n> \n>> I can change to 2 kinds of server, using 8.3.3 postgresql server, and even\n>> taking more sever if need. But it is the biggest computer that I can rent\n>> for now.\n>>\n>> Intel 2x Xeon X5355\n>> 2x 4x 2.66 GHz\n>> L2: 8Mo, FSB: 1333MHz\n>> Double Quadruple Coeur\n>> 64 bits\n>> 12 Go FBDIMM DDR2\n>> 2x 147 Go\n>> SAS 15 000 tr/min\n>> RAID 1 HARD\n> \n> All that memory and CPU power will be wasted on a db with just two\n> drives. Do you at least have a decent RAID controller in that setup?\n> \n>> I can add 500 Go under sataII\n>>\n>> OR\n>>\n>> Intel 2x Xeon X5355\n>> 2x 4x 2.66 GHz\n>> L2: 8Mo, FSB: 1333MHz\n>> Double Quadruple Coeur\n>> 64 bits\n>> 12 Go FBDIMM DDR2\n>> 5x 750 Go (2.8 To **)\n>> SATA2 RAID HARD 5\n>>\n>> I can add 500 Go under sataII\n> \n> RAID5 is generally a poor choice for a write limited DB. 
I'd guess\n> that the dual SAS drives above would work better than the 5 SATA\n> drives in RAID 5 here.\n> \n>> After several tunings, reading, ect...\n>>\n>> The low speed seems to be definetly linked to the SATA II in RAID 1.\n> \n> Going to 15k SAS RAID 1 will just about double your write rate\n> (assuming it's a commits/second issue and it likely is). going to a 4\n> disk SAS RAID10 will double that, and so on.\n> \n>> So I need a solution to be able to 1st supporting more transaction, secondly\n>> I need to secure the data, and being able to load balancing the charge.\n> \n> Look at slony for read only slaves and the master db as write only.\n> If you can handle the slight delay in updates from master to slave.\n> Otherwise you'll need sync replication, and that is generally not as\n> fast.\n> \n> Take a look at something like this server:\n> \n> http://www.aberdeeninc.com/abcatg/Stirling-229.htm\n> \n> With 8 15k SAS 146G drives it runs around $5k or so. Right now all\n> the servers your hosting provider is likely to provide you with are\n> gonna be big on CPU and memory and light on I/O, and that's the\n> opposite of what you need for databases.\n> \n\n-- \n<http://www.1st-affiliation.fr>\n\n*David Bigand\n*Pr�sident Directeur G�n�rale*\n*51 chemin des moulins\n73000 CHAMBERY - FRANCE\n\nWeb : htttp://www.1st-affiliation.fr\nEmail : [email protected]\nTel. : +33 479 696 685\nMob. : +33 666 583 836\nSkype : firstaffiliation_support\n\n", "msg_date": "Tue, 12 Aug 2008 17:06:38 +0200", "msg_from": "dforums <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Distant mirroring" } ]
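A minimal sketch of the date-based partitioning recommended in this thread, using the inheritance-plus-rules machinery available in 8.2/8.3; the table and column names here are illustrative only, not taken from the poster's schema:

CREATE TABLE hits (
    hit_time  timestamptz NOT NULL,
    payload   text
);

-- One child per month, with a CHECK constraint that constraint_exclusion can use.
CREATE TABLE hits_2008_08 (
    CHECK (hit_time >= DATE '2008-08-01' AND hit_time < DATE '2008-09-01')
) INHERITS (hits);

-- Route inserts against the parent into the current child (a trigger works too).
CREATE RULE hits_insert_2008_08 AS
    ON INSERT TO hits
    WHERE (NEW.hit_time >= DATE '2008-08-01' AND NEW.hit_time < DATE '2008-09-01')
    DO INSTEAD INSERT INTO hits_2008_08 VALUES (NEW.*);

-- Lets SELECTs on the parent skip children whose CHECK constraints rule them out.
SET constraint_exclusion = on;

On the foreign-key question raised above: a foreign key that references the parent table does not see rows stored in the children, so it usually has to point at the individual partition that actually holds the rows, or be enforced with a trigger instead.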
[ { "msg_contents": "Something goes wrong that this query plan thinks there is only gonna be\n1 row from (SELECT uid FROM alog ... ) so chooses such query plan, and\nthus it runs forever (at least so long that I didn't bother to wait,\nlike 10 minutes):\n\n\nmiernik=> EXPLAIN UPDATE cnts SET p0 = FALSE WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..3317.34 rows=1 width=44)\n -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=44)\n -> Index Scan using alog_uid_idx on alog (cost=0.00..296.95 rows=1 width=4)\n Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n(5 rows)\n\n\nBut if I give him only the inner part, it makes reasonable assumptions\nand runs OK:\n\n\n\nmiernik=> EXPLAIN SELECT uid FROM alog WHERE pid = 3452654 AND o = 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Bitmap Heap Scan on alog (cost=100.21..9559.64 rows=3457 width=4)\n Recheck Cond: ((pid = 3452654::numeric) AND (o = 1::numeric))\n -> Bitmap Index Scan on alog_pid_o_idx (cost=0.00..99.35 rows=3457 width=0)\n Index Cond: ((pid = 3452654::numeric) AND (o = 1::numeric))\n(4 rows)\n\n\n\nCan't show you EXPLAIN ANALYZE for the first one, as it also runds\nforver. For the second one, its consistent with the EXPLAIN.\n\n\n\nBefore it was running OK, but I recently disabled autovacuum and now run\nVACUUM manually serveal times a day, and run ANALYZE manually on alog\nand cnts tables before runnign the above. How may I fix this to work?\n\nshared_buffers = 5MB\nwork_mem = 1MB\nMachine is a 48 MB RAM Xen.\n\nBut not the disabling autovacuum broke it, but running ANALYZE manually\non the tables broke it, and I don't know why, I thougt ANALYZE would\nimprove the guesses?\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Sat, 9 Aug 2008 22:34:35 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" }, { "msg_contents": "Miernik <[email protected]> writes:\n> miernik=> EXPLAIN UPDATE cnts SET p0 = FALSE WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..3317.34 rows=1 width=44)\n> -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=44)\n> -> Index Scan using alog_uid_idx on alog (cost=0.00..296.95 rows=1 width=4)\n> Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n> (5 rows)\n\n> But if I give him only the inner part, it makes reasonable assumptions\n> and runs OK:\n\nWhat's the results for\n\nexplain select * from cnts, alog where alog.uid = cnts.uid\n\n?\n\nIf necessary, turn off enable_hashjoin and enable_mergejoin so we can\nsee a comparable plan. I'm suspecting it thinks the condition on\nuid is more selective than the one on the other index.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Aug 2008 16:57:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" 
}, { "msg_contents": "Tom Lane <[email protected]> wrote:\n> Miernik <[email protected]> writes:\n>> miernik=> EXPLAIN UPDATE cnts SET p0 = FALSE WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n>> QUERY PLAN\n>> -----------------------------------------------------------------------------------------------\n>> Nested Loop IN Join (cost=0.00..3317.34 rows=1 width=44)\n>> -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=44)\n>> -> Index Scan using alog_uid_idx on alog (cost=0.00..296.95 rows=1 width=4)\n>> Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n>> Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n>> (5 rows)\n> \n>> But if I give him only the inner part, it makes reasonable assumptions\n>> and runs OK:\n> \n> What's the results for\n> \n> explain select * from cnts, alog where alog.uid = cnts.uid\n\nmiernik=> explain select * from cnts, alog where alog.uid = cnts.uid;\n QUERY PLAN\n----------------------------------------------------------------------------\n Hash Join (cost=61.00..71810.41 rows=159220 width=76)\n Hash Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n -> Seq Scan on alog (cost=0.00..54951.81 rows=3041081 width=37)\n -> Hash (cost=36.00..36.00 rows=2000 width=39)\n -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=39)\n(5 rows)\n\n> If necessary, turn off enable_hashjoin and enable_mergejoin so we can\n> see a comparable plan.\n\nAfter doing that it thinks like this:\n\nmiernik=> explain select * from cnts, alog where alog.uid = cnts.uid;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Nested Loop (cost=4.95..573640.43 rows=159220 width=76)\n -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=39)\n -> Bitmap Heap Scan on alog (cost=4.95..285.80 rows=80 width=37)\n Recheck Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n -> Bitmap Index Scan on alog_uid_idx (cost=0.00..4.93 rows=80 width=0)\n Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n(6 rows)\n\nTrying EXPLAIN ANALZYE now on this makes it run forever...\n\nHow can I bring it back to working? Like un-run ANALYZE on that table or\nsomething? All was running reasonably well before I changed from\nautovacuum to running ANALYZE manually, and I thought I would improve\nperformance... ;(\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Sat, 9 Aug 2008 23:19:32 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" }, { "msg_contents": "Miernik <[email protected]> wrote:\n> How can I bring it back to working? Like un-run ANALYZE on that table or\n> something? All was running reasonably well before I changed from\n> autovacuum to running ANALYZE manually, and I thought I would improve\n> performance... ;(\n\nI now removed all manual ANALYZE commands from the scripts, set\nenable_hashjoin = on\nenable_mergejoin = on\nand set on autovacuum, but it didn't bring back the performance of that query :(\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Sat, 9 Aug 2008 23:36:35 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" 
}, { "msg_contents": "Miernik <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> If necessary, turn off enable_hashjoin and enable_mergejoin so we can\n>> see a comparable plan.\n\n> After doing that it thinks like this:\n\n> miernik=> explain select * from cnts, alog where alog.uid = cnts.uid;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------\n> Nested Loop (cost=4.95..573640.43 rows=159220 width=76)\n> -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=39)\n> -> Bitmap Heap Scan on alog (cost=4.95..285.80 rows=80 width=37)\n> Recheck Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> -> Bitmap Index Scan on alog_uid_idx (cost=0.00..4.93 rows=80 width=0)\n> Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> (6 rows)\n\n> Trying EXPLAIN ANALZYE now on this makes it run forever...\n\nIt couldn't run very long if those rowcounts were accurate. How many\nrows in \"cnts\" really? How big is \"alog\", and how many of its rows join\nto \"cnts\"?\n\nWhile I'm looking at this, what's the real datatypes of the uid columns?\nThose explicit coercions seem a bit fishy.\n\n> How can I bring it back to working?\n\nIt's premature to ask for a solution when we don't understand the\nproblem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 09 Aug 2008 17:37:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" }, { "msg_contents": "On Sat, Aug 09, 2008 at 05:37:29PM -0400, Tom Lane wrote:\n> > miernik=> explain select * from cnts, alog where alog.uid = cnts.uid;\n> > QUERY PLAN\n> > -----------------------------------------------------------------------------------------\n> > Nested Loop (cost=4.95..573640.43 rows=159220 width=76)\n> > -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=39)\n> > -> Bitmap Heap Scan on alog (cost=4.95..285.80 rows=80 width=37)\n> > Recheck Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> > -> Bitmap Index Scan on alog_uid_idx (cost=0.00..4.93 rows=80 width=0)\n> > Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> > (6 rows)\n> \n> > Trying EXPLAIN ANALZYE now on this makes it run forever...\n> \n> It couldn't run very long if those rowcounts were accurate.\n\nWell, count \"over 5 minutes\" as \"forever\".\n\n> How many\n> rows in \"cnts\" really? How big is \"alog\", and how many of its rows join\n> to \"cnts\"?\n\ncnts is exactly 1000 rows.\nalog as a whole is now 3041833 rows\n\n\"SELECT uid FROM alog WHERE pid = 3452654 AND o = 1\" gives 870 rows (202\nof them NULL), but if I would first try to JOIN alog to cnts, that would\nbe really huge, like roughly 100000 rows, so to have this work\nreasonably well, it MUST first filter alog on pid and o, and then JOIN\nthe result to cnts, not the other way around. Trying to fist JOIN alog to\ncnts, and then filter the whole thing on pid and o is excessively stupid\nin this situation, isn't it?\n\n> \n> While I'm looking at this, what's the real datatypes of the uid columns?\n> Those explicit coercions seem a bit fishy.\n\nuid is of DOMAIN uid which is defined as:\n\nCREATE DOMAIN uid AS integer CHECK (VALUE > 0);\n\nBut I don't think its a problem. 
It was working perfectly for serveral\nmonths until yesterday I decided to mess with autovacuum and manual\nANALYZE.\n\n-- \nMiernik\nhttp://miernik.name/\n", "msg_date": "Sat, 9 Aug 2008 23:56:43 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is\n\twrong, but when run the inner query alone is OK?" }, { "msg_contents": "Miernik <[email protected]> wrote:\n> Something goes wrong that this query plan thinks there is only gonna be\n> 1 row from (SELECT uid FROM alog ... ) so chooses such query plan, and\n> thus it runs forever (at least so long that I didn't bother to wait,\n> like 10 minutes):\n> \n> \n> miernik=> EXPLAIN UPDATE cnts SET p0 = FALSE WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..3317.34 rows=1 width=44)\n> -> Seq Scan on cnts (cost=0.00..36.00 rows=2000 width=44)\n> -> Index Scan using alog_uid_idx on alog (cost=0.00..296.95 rows=1 width=4)\n> Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n> (5 rows)\n\nWell, in fact its not only the autovacuum/manual VACUUM ANALYZE that\nchanged, its a new copy of the cnts table with only 1000 rows, and\nbefore it was a 61729 row table. The new, smaller, 1000 row table is\nrecreated, but I have a copy of the old 61729 row table, and guess what?\nIt runs correctly! And the query plan of the exactly the same query, on\na table of the exactly same structure and indexes, differing only by\nhaving 61729 rows instead of 1000 rows, is like this:\n\nI've done a SELECT uid plan, instead of an UPDATE plan, but it should\nbe no difference. 
This is a plan that is quick:\n\nmiernik=> EXPLAIN SELECT uid FROM cnts_old WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Nested Loop (cost=9077.07..9238.61 rows=12 width=4)\n -> HashAggregate (cost=9077.07..9077.29 rows=22 width=4)\n -> Bitmap Heap Scan on alog (cost=93.88..9069.00 rows=3229 width=4)\n Recheck Cond: ((pid = 3452654::numeric) AND (o = 1::numeric))\n -> Bitmap Index Scan on alog_pid_o (cost=0.00..93.07 rows=3229 width=0)\n Index Cond: ((pid = 3452654::numeric) AND (o = 1::numeric))\n -> Index Scan using cnts_old_pkey on cnts_old (cost=0.00..7.32 rows=1 width=4)\n Index Cond: ((cnts_old.uid)::integer = (alog.uid)::integer)\n(8 rows)\n\n\nI present a SELECT uid plan with the 1000 table also below, just to be\nsure, this is the \"bad\" plan, that takes forever:\n\nmiernik=> EXPLAIN SELECT uid FROM cnts WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..3532.70 rows=1 width=4)\n -> Seq Scan on cnts (cost=0.00..26.26 rows=1026 width=4)\n -> Index Scan using alog_uid_idx on alog (cost=0.00..297.32 rows=1 width=4)\n Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n(5 rows)\n\n\nI've also got a version of the cnts table with only 14 rows, called\ncnts_small, and the query plan on that one is below:\n\nmiernik=> EXPLAIN SELECT uid FROM cnts_small WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=99.05..1444.29 rows=1 width=4)\n -> Seq Scan on cnts_small (cost=0.00..1.14 rows=14 width=4)\n -> Bitmap Heap Scan on alog (cost=99.05..103.07 rows=1 width=4)\n Recheck Cond: (((alog.uid)::integer = (cnts_small.uid)::integer) AND (alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n -> BitmapAnd (cost=99.05..99.05 rows=1 width=0)\n -> Bitmap Index Scan on alog_uid_idx (cost=0.00..5.21 rows=80 width=0)\n Index Cond: ((alog.uid)::integer = (cnts_small.uid)::integer)\n -> Bitmap Index Scan on alog_pid_o (cost=0.00..92.78 rows=3229 width=0)\n Index Cond: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n(9 rows)\n\nThat one is fast too. And the structure and indexes of cnts_small is\nexactly the same as of cnts and cnts_old. So it works OK if I use a 14\nrow table and if I use a 61729 row table, but breaks when I use a 1000\nrow table. Any ideas?\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Sun, 10 Aug 2008 00:32:08 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" 
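One knob worth trying at this point, offered as a hedged suggestion rather than something proposed in the thread itself: the bad plan hinges on the estimate that the pid/o filter returns almost no rows, and raising the per-column statistics targets before re-running ANALYZE is the usual first lever against that kind of misestimate:

ALTER TABLE alog ALTER COLUMN pid SET STATISTICS 100;
ALTER TABLE alog ALTER COLUMN o   SET STATISTICS 100;
ANALYZE alog;
ANALYZE cnts;

Whether this flips the plan back is not guaranteed on a machine this small, but it gives the planner better rowcounts to reason from.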
}, { "msg_contents": "Miernik <[email protected]> wrote:\n> I present a SELECT uid plan with the 1000 table also below, just to be\n> sure, this is the \"bad\" plan, that takes forever:\n> \n> miernik=> EXPLAIN SELECT uid FROM cnts WHERE uid IN (SELECT uid FROM alog WHERE pid = 3452654 AND o = 1);\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..3532.70 rows=1 width=4)\n> -> Seq Scan on cnts (cost=0.00..26.26 rows=1026 width=4)\n> -> Index Scan using alog_uid_idx on alog (cost=0.00..297.32 rows=1 width=4)\n> Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n> (5 rows)\n\nIf I reduce the number of rows in cnts to 100, I can actually make an\nEXPLAIN ANALYZE with this query plan finish in reasonable time:\n\nmiernik=> EXPLAIN ANALYZE SELECT uid FROM cnts WHERE uid IN (SELECT uid FROM alog WHERE pid = 555949 AND odp = 1);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop IN Join (cost=0.00..3585.54 rows=1 width=4) (actual time=51831.430..267844.815 rows=7 loops=1)\n -> Seq Scan on cnts (cost=0.00..14.00 rows=700 width=4) (actual time=0.005..148.464 rows=100 loops=1)\n -> Index Scan using alog_uid_idx on alog (cost=0.00..301.02 rows=1 width=4) (actual time=2676.959..2676.959 rows=0 loops=100)\n Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n Filter: ((alog.pid = 555949::numeric) AND (alog.odp = 1::numeric))\n Total runtime: 267844.942 ms\n(6 rows)\n\nThe real running times are about 10 times more than the estimates. Is\nthat normal?\n\n-- \nMiernik\nhttp://miernik.name/\n\n", "msg_date": "Sun, 10 Aug 2008 04:20:59 +0200", "msg_from": "Miernik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" }, { "msg_contents": "It is hardware dependent. 
The estimates are not time estimates, but on an\narbitrary scale.\n\nOn the server I work with, the estimates are almost always 10x larger than\nthe run times, and sometimes more than 50x.\n\n(many GBs RAM, 8 CPU cores, more than 10 disks, standard optimizer settings\nother than statistics sample sizes and increased common values for columns).\n\n-Scott\n\nOn Sat, Aug 9, 2008 at 7:20 PM, Miernik <[email protected]> wrote:\n\n> Miernik <[email protected]> wrote:\n> > I present a SELECT uid plan with the 1000 table also below, just to be\n> > sure, this is the \"bad\" plan, that takes forever:\n> >\n> > miernik=> EXPLAIN SELECT uid FROM cnts WHERE uid IN (SELECT uid FROM alog\n> WHERE pid = 3452654 AND o = 1);\n> > QUERY PLAN\n> >\n> -----------------------------------------------------------------------------------------------\n> > Nested Loop IN Join (cost=0.00..3532.70 rows=1 width=4)\n> > -> Seq Scan on cnts (cost=0.00..26.26 rows=1026 width=4)\n> > -> Index Scan using alog_uid_idx on alog (cost=0.00..297.32 rows=1\n> width=4)\n> > Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> > Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n> > (5 rows)\n>\n> If I reduce the number of rows in cnts to 100, I can actually make an\n> EXPLAIN ANALYZE with this query plan finish in reasonable time:\n>\n> miernik=> EXPLAIN ANALYZE SELECT uid FROM cnts WHERE uid IN (SELECT uid\n> FROM alog WHERE pid = 555949 AND odp = 1);\n> QUERY\n> PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop IN Join (cost=0.00..3585.54 rows=1 width=4) (actual\n> time=51831.430..267844.815 rows=7 loops=1)\n> -> Seq Scan on cnts (cost=0.00..14.00 rows=700 width=4) (actual\n> time=0.005..148.464 rows=100 loops=1)\n> -> Index Scan using alog_uid_idx on alog (cost=0.00..301.02 rows=1\n> width=4) (actual time=2676.959..2676.959 rows=0 loops=100)\n> Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> Filter: ((alog.pid = 555949::numeric) AND (alog.odp = 1::numeric))\n> Total runtime: 267844.942 ms\n> (6 rows)\n>\n> The real running times are about 10 times more than the estimates. Is\n> that normal?\n>\n> --\n> Miernik\n> http://miernik.name/\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt is hardware dependent.   
The estimates are not time estimates, but on an\narbitrary scale.\n\nOn the server I work with, the estimates are almost always 10x larger than\nthe run times, and sometimes more than 50x.\n\n(many GBs RAM, 8 CPU cores, more than 10 disks, standard optimizer settings\nother than statistics sample sizes and increased common values for columns).\n\n-Scott\n\nOn Sat, Aug 9, 2008 at 7:20 PM, Miernik <[email protected]> wrote:\n\n> Miernik <[email protected]> wrote:\n> > I present a SELECT uid plan with the 1000 table also below, just to be\n> > sure, this is the \"bad\" plan, that takes forever:\n> >\n> > miernik=> EXPLAIN SELECT uid FROM cnts WHERE uid IN (SELECT uid FROM alog\n> WHERE pid = 3452654 AND o = 1);\n> >                                          QUERY PLAN\n> >\n> -----------------------------------------------------------------------------------------------\n> > Nested Loop IN Join  (cost=0.00..3532.70 rows=1 width=4)\n> >   ->  Seq Scan on cnts  (cost=0.00..26.26 rows=1026 width=4)\n> >   ->  Index Scan using alog_uid_idx on alog  (cost=0.00..297.32 rows=1\n> width=4)\n> >         Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n> >         Filter: ((alog.pid = 3452654::numeric) AND (alog.o = 1::numeric))\n> > (5 rows)\n>\n> If I reduce the number of rows in cnts to 100, I can actually make an\n> EXPLAIN ANALYZE with this query plan finish in reasonable time:\n>\n> miernik=> EXPLAIN ANALYZE SELECT uid FROM cnts WHERE uid IN (SELECT uid\n> FROM alog WHERE pid = 555949 AND odp = 1);\n>                                                                   QUERY\n> PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop IN Join  (cost=0.00..3585.54 rows=1 width=4) (actual\n> time=51831.430..267844.815 rows=7 loops=1)\n>   ->  Seq Scan on cnts  (cost=0.00..14.00 rows=700 width=4) (actual\n> time=0.005..148.464 rows=100 loops=1)\n>   ->  Index Scan using alog_uid_idx on alog  (cost=0.00..301.02 rows=1\n> width=4) (actual time=2676.959..2676.959 rows=0 loops=100)\n>         Index Cond: ((alog.uid)::integer = (cnts.uid)::integer)\n>         Filter: ((alog.pid = 555949::numeric) AND (alog.odp = 1::numeric))\n> Total runtime: 267844.942 ms\n> (6 rows)\n>\n> The real running times are about 10 times more than the estimates. Is\n> that normal?\n>\n> --\n> Miernik\n> http://miernik.name/\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sat, 9 Aug 2008 20:25:07 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: why query plan for the inner SELECT of WHERE x IN is wrong,\n\tbut when run the inner query alone is OK?" } ]
[ { "msg_contents": "I have a table named table_Users:\n\nCREATE TABLE table_Users (\n UserID character(40) NOT NULL default '',\n Username varchar(256) NOT NULL default '',\n Email varchar(256) NOT NULL default ''\n etc...\n);\n\nThe UserID is a character(40) and is generated using UUID function. We\nstarted making making other tables and ended up not really using\nUserID, but instead using Username as the unique identifier for the\nother tables. Now, we pass and insert the Username to for discussions,\nwikis, etc, for all the modules we have developed. I was wondering if\nit would be a performance improvement to use the 40 Character UserID\ninstead of Username when querying the other tables, or if we should\nchange the UserID to a serial value and use that to query the other\ntables. Or just keep the way things are because it doesn't really make\nmuch a difference.\n\nWe are still in development and its about half done, but if there is\ngoing to be performance issues because using PK as a String value, we\ncan just take a day change it before any production as been started.\nAnyway advice you can give would be much appreciated.\n\nPostgres performance guru where are you?\n", "msg_date": "Mon, 11 Aug 2008 00:34:36 -0700 (PDT)", "msg_from": "Jay <[email protected]>", "msg_from_op": true, "msg_subject": "Using PK value as a String" }, { "msg_contents": "\"Jay\" <[email protected]> writes:\n\n> I have a table named table_Users:\n>\n> CREATE TABLE table_Users (\n> UserID character(40) NOT NULL default '',\n> Username varchar(256) NOT NULL default '',\n> Email varchar(256) NOT NULL default ''\n> etc...\n> );\n>\n> The UserID is a character(40) and is generated using UUID function. We\n> started making making other tables and ended up not really using\n> UserID, but instead using Username as the unique identifier for the\n> other tables. Now, we pass and insert the Username to for discussions,\n> wikis, etc, for all the modules we have developed. I was wondering if\n> it would be a performance improvement to use the 40 Character UserID\n> instead of Username when querying the other tables, or if we should\n> change the UserID to a serial value and use that to query the other\n> tables. Or just keep the way things are because it doesn't really make\n> much a difference.\n\nUsername would not be any slower than UserID unless you have a lot of\nusernames longer than 40 characters.\n\nHowever making UserID an integer would be quite a bit more efficient. It would\ntake 4 bytes instead of as the length of the Username which adds up when it's\nin all your other tables... Also internationalized text collations are quite a\nbit more expensive than a simple integer comparison.\n\nBut the real question here is what's the better design. If you use Username\nyou'll be cursing if you ever want to provide a facility to allow people to\nchange their usernames. 
You may not want such a facility now but one day...\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Mon, 11 Aug 2008 10:30:31 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "--- On Mon, 11/8/08, Gregory Stark <[email protected]> wrote:\n\n> From: Gregory Stark <[email protected]>\n> Subject: Re: [PERFORM] Using PK value as a String\n> To: \"Jay\" <[email protected]>\n> Cc: [email protected]\n> Date: Monday, 11 August, 2008, 10:30 AM\n> \"Jay\" <[email protected]> writes:\n> \n> > I have a table named table_Users:\n> >\n> > CREATE TABLE table_Users (\n> > UserID character(40) NOT NULL default\n> '',\n> > Username varchar(256) NOT NULL default\n> '',\n> > Email varchar(256) NOT NULL default\n> ''\n> > etc...\n> > );\n> >\n> > The UserID is a character(40) and is generated using\n> UUID function. We\n> > started making making other tables and ended up not\n> really using\n> > UserID, but instead using Username as the unique\n> identifier for the\n> > other tables. Now, we pass and insert the Username to\n> for discussions,\n> > wikis, etc, for all the modules we have developed. I\n> was wondering if\n> > it would be a performance improvement to use the 40\n> Character UserID\n> > instead of Username when querying the other tables, or\n> if we should\n> > change the UserID to a serial value and use that to\n> query the other\n> > tables. Or just keep the way things are because it\n> doesn't really make\n> > much a difference.\n> \n> Username would not be any slower than UserID unless you\n> have a lot of\n> usernames longer than 40 characters.\n> \n> However making UserID an integer would be quite a bit more\n> efficient. It would\n> take 4 bytes instead of as the length of the Username which\n> adds up when it's\n> in all your other tables... Also internationalized text\n> collations are quite a\n> bit more expensive than a simple integer comparison.\n> \n> But the real question here is what's the better design.\n> If you use Username\n> you'll be cursing if you ever want to provide a\n> facility to allow people to\n> change their usernames. You may not want such a facility\n> now but one day...\n> \n\nI don't understand Gregory's suggestion about the design. I thought using natural primary keys as opposed to surrogate ones is a better design strategy, even when it comes to performance considerations and even more so if there are complex relationships within the database.\n\nRegards,\nValentin\n\n\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's On-Demand Production\n> Tuning\n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n __________________________________________________________\nNot happy with your email address?.\nGet the one you really want - millions of new email addresses available now at Yahoo! 
http://uk.docs.yahoo.com/ymail/new.html\n", "msg_date": "Mon, 11 Aug 2008 09:46:01 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "If UserID just be unique internal key and the unique id of other tables, I'd\nlike sequence, which is unique and just use 8 bytes(bigint) When it querying\nother tables, it will faster , and disk space smaller than UUID(40 bytes).\n\n\t 莫建祥\t\n阿里巴巴软件(上海)有限公司\n研发中心-IM服务端开发部 \n联系方式:86-0571-85022088-13072\n贸易通ID:jaymo 淘宝ID:jackem\n公司网站:www.alisoft.com\nwiki:http://10.0.32.21:1688/confluence/pages/viewpage.action?pageId=10338\n\n-----邮件原件-----\n发件人: [email protected]\n[mailto:[email protected]] 代表 Jay\n发送时间: 2008年8月11日 15:35\n收件人: [email protected]\n主题: [PERFORM] Using PK value as a String\n\nI have a table named table_Users:\n\nCREATE TABLE table_Users (\n UserID character(40) NOT NULL default '',\n Username varchar(256) NOT NULL default '',\n Email varchar(256) NOT NULL default ''\n etc...\n);\n\nThe UserID is a character(40) and is generated using UUID function. We\nstarted making making other tables and ended up not really using\nUserID, but instead using Username as the unique identifier for the\nother tables. Now, we pass and insert the Username to for discussions,\nwikis, etc, for all the modules we have developed. I was wondering if\nit would be a performance improvement to use the 40 Character UserID\ninstead of Username when querying the other tables, or if we should\nchange the UserID to a serial value and use that to query the other\ntables. Or just keep the way things are because it doesn't really make\nmuch a difference.\n\nWe are still in development and its about half done, but if there is\ngoing to be performance issues because using PK as a String value, we\ncan just take a day change it before any production as been started.\nAnyway advice you can give would be much appreciated.\n\nPostgres performance guru where are you?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Mon, 11 Aug 2008 18:05:11 +0800", "msg_from": "\"jay\" <[email protected]>", "msg_from_op": false, "msg_subject": "=?gb2312?B?tPC4tDogW1BFUkZPUk1dIFVzaW5nIFBLIHZhbHVlIGFzIGEgU3RyaW5n?=" }, { "msg_contents": "\nOn Aug 11, 2008, at 4:30 AM, Gregory Stark wrote:\n\n> \"Jay\" <[email protected]> writes:\n>\n>> I have a table named table_Users:\n>>\n>> CREATE TABLE table_Users (\n>> UserID character(40) NOT NULL default '',\n>> Username varchar(256) NOT NULL default '',\n>> Email varchar(256) NOT NULL default ''\n>> etc...\n>> );\n>>\n>> The UserID is a character(40) and is generated using UUID function. \n>> We\n>> started making making other tables and ended up not really using\n>> UserID, but instead using Username as the unique identifier for the\n>> other tables. Now, we pass and insert the Username to for \n>> discussions,\n>> wikis, etc, for all the modules we have developed. I was wondering if\n>> it would be a performance improvement to use the 40 Character UserID\n>> instead of Username when querying the other tables, or if we should\n>> change the UserID to a serial value and use that to query the other\n>> tables. 
Or just keep the way things are because it doesn't really \n>> make\n>> much a difference.\n>\n> Username would not be any slower than UserID unless you have a lot of\n> usernames longer than 40 characters.\n>\n> However making UserID an integer would be quite a bit more \n> efficient. It would\n> take 4 bytes instead of as the length of the Username which adds up \n> when it's\n> in all your other tables... Also internationalized text collations \n> are quite a\n> bit more expensive than a simple integer comparison.\n>\n> But the real question here is what's the better design. If you use \n> Username\n> you'll be cursing if you ever want to provide a facility to allow \n> people to\n> change their usernames. You may not want such a facility now but one \n> day...\n>\n\nIf you generate UUID's with the UUID function and you are on 8.3,\nwhy not use the UUID type to store it?\n\nRies\n\n\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's On-Demand Production Tuning\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nRies van Twisk\ntags: Freelance TYPO3 Glassfish JasperReports JasperETL Flex Blaze-DS \nWebORB PostgreSQL DB-Architect\nemail: [email protected]\nweb: http://www.rvantwisk.nl/\nskype: callto://r.vantwisk\n\n\n\n", "msg_date": "Mon, 11 Aug 2008 07:22:04 -0500", "msg_from": "ries van Twisk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Valentin Bogdanov wrote:\n> --- On Mon, 11/8/08, Gregory Stark <[email protected]> wrote:\n> \n>> From: Gregory Stark <[email protected]>\n>> Subject: Re: [PERFORM] Using PK value as a String\n>> To: \"Jay\" <[email protected]>\n>> Cc: [email protected]\n>> Date: Monday, 11 August, 2008, 10:30 AM\n>> \"Jay\" <[email protected]> writes:\n>>\n>>> I have a table named table_Users:\n>>>\n>>> CREATE TABLE table_Users (\n>>> UserID character(40) NOT NULL default\n>> '',\n>>> Username varchar(256) NOT NULL default\n>> '',\n>>> Email varchar(256) NOT NULL default\n>> ''\n>>> etc...\n>>> );\n>>>\n...\n>> But the real question here is what's the better design.\n>> If you use Username\n>> you'll be cursing if you ever want to provide a\n>> facility to allow people to\n>> change their usernames. You may not want such a facility\n>> now but one day...\n>>\n> \n> I don't understand Gregory's suggestion about the design. I thought\n> using natural primary keys as opposed to surrogate ones is a better\n> design strategy, even when it comes to performance considerations\n> and even more so if there are complex relationships within the database.\n\nNo, exactly the opposite. Data about users (such as name, email address, etc.) are rarely a good choice as a foreign key, and shouldn't be considered \"keys\" in most circumstances. As Gregory points out, you're spreading the user's name across the database, effectively denormalizing it.\n\nInstead, you should have a user record, with an arbitrary key, an integer or OID, that you use as the foreign key for all other tables. That way, when the username changes, only one table will be affected. 
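A bare-bones illustration of that layout (an editorial sketch; the discussions table and its columns are invented, not part of the original poster's schema):

CREATE TABLE users (
    user_id   serial PRIMARY KEY,          -- surrogate key, never exposed as data
    username  varchar(256) NOT NULL UNIQUE,
    email     varchar(256) NOT NULL
);

CREATE TABLE discussions (
    discussion_id  serial PRIMARY KEY,
    user_id        integer NOT NULL REFERENCES users(user_id),
    title          text
);

-- Renaming a user touches exactly one row in one table:
UPDATE users SET username = 'new_name' WHERE user_id = 42;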
And it's much more efficient to use an integer as the key than a long string.\n\nCraig\n", "msg_date": "Mon, 11 Aug 2008 09:03:58 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Valentin Bogdanov schrieb:\n> --- On Mon, 11/8/08, Gregory Stark <[email protected]> wrote:\n>\n> \n>> From: Gregory Stark <[email protected]>\n>> Subject: Re: [PERFORM] Using PK value as a String\n>> To: \"Jay\" <[email protected]>\n>> Cc: [email protected]\n>> Date: Monday, 11 August, 2008, 10:30 AM\n>> \"Jay\" <[email protected]> writes:\n>>\n>> \n>>> I have a table named table_Users:\n>>>\n>>> CREATE TABLE table_Users (\n>>> UserID character(40) NOT NULL default\n>>> \n>> '',\n>> \n>>> Username varchar(256) NOT NULL default\n>>> \n>> '',\n>> \n>>> Email varchar(256) NOT NULL default\n>>> \n>> ''\n>> \n>>> etc...\n>>> );\n>>>\n>>> The UserID is a character(40) and is generated using\n>>> \n>> UUID function. We\n>> \n>>> started making making other tables and ended up not\n>>> \n>> really using\n>> \n>>> UserID, but instead using Username as the unique\n>>> \n>> identifier for the\n>> \n>>> other tables. Now, we pass and insert the Username to\n>>> \n>> for discussions,\n>> \n>>> wikis, etc, for all the modules we have developed. I\n>>> \n>> was wondering if\n>> \n>>> it would be a performance improvement to use the 40\n>>> \n>> Character UserID\n>> \n>>> instead of Username when querying the other tables, or\n>>> \n>> if we should\n>> \n>>> change the UserID to a serial value and use that to\n>>> \n>> query the other\n>> \n>>> tables. Or just keep the way things are because it\n>>> \n>> doesn't really make\n>> \n>>> much a difference.\n>>> \n>> Username would not be any slower than UserID unless you\n>> have a lot of\n>> usernames longer than 40 characters.\n>>\n>> However making UserID an integer would be quite a bit more\n>> efficient. It would\n>> take 4 bytes instead of as the length of the Username which\n>> adds up when it's\n>> in all your other tables... Also internationalized text\n>> collations are quite a\n>> bit more expensive than a simple integer comparison.\n>>\n>> But the real question here is what's the better design.\n>> If you use Username\n>> you'll be cursing if you ever want to provide a\n>> facility to allow people to\n>> change their usernames. You may not want such a facility\n>> now but one day...\n>>\n>> \n>\n> I don't understand Gregory's suggestion about the design. I thought using natural primary keys as opposed to surrogate ones is a better design strategy, even when it comes to performance considerations and even more so if there are complex relationships within the database.\n>\n> Regards,\n> Valentin\n>\n> \nUUID is already a surrogate key not a natural key, in no aspect better \nthan a numeric key, just taking a lot more space.\n\nSo why not use int4/int8?\n\n\n\n", "msg_date": "Tue, 12 Aug 2008 11:58:39 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "You guys totally rock!\n\nI guess, bottom line, we should take that extra day to convert our PK and FK\nto a numerical value, using BIG INT to be on the save side. (Even though\nWikipedia's UserID uses just an integer as data type)\n\nTo Gregory: Thank you for you valuable statement.\n\"But the real question here is what's the better design. 
If you use Username\nyou'll be cursing if you ever want to provide a facility to allow people to\nchange their usernames. You may not want such a facility now but one day\" I\nthink you hit the nail on the head with this comment. If a member really\nwants to change their username, IE: Choose to go with IloveUSara, only to be\ndumped on the alter, who am I to say no.\n\nTo Valentin: I wish someone would prove us both wrong or right. I still\nthought it wasn't a bad idea to use username a varchar(256) to interact with\nall the modules... Well thats what I thought when I first started writing\nthe tables...\n\nTo Jay: Thanks for keeping it short and simple. \"I'd like sequence, which is\nunique and just use 8 bytes(bigint) When it querying other tables, it will\nfaster , and disk space smaller than UUID(40 bytes).\" I'm taking your advice\non this^^ Although wikipedia's postgresql database schema still stands.\n\nTo Craig: Yes, I agree. Please see my comment on IloveUSara.\n\nTo Mario: Let's go! I'm Mario... Sorry, I love Mario Kart. Especially on the\nold super famacon. Going with int8, thank you for the advice.\n\n\nOn Tue, Aug 12, 2008 at 6:58 PM, Mario Weilguni <[email protected]> wrote:\n\n> Valentin Bogdanov schrieb:\n>\n> --- On Mon, 11/8/08, Gregory Stark <[email protected]> wrote:\n>>\n>>\n>>\n>>> From: Gregory Stark <[email protected]>\n>>> Subject: Re: [PERFORM] Using PK value as a String\n>>> To: \"Jay\" <[email protected]>\n>>> Cc: [email protected]\n>>> Date: Monday, 11 August, 2008, 10:30 AM\n>>> \"Jay\" <[email protected]> writes:\n>>>\n>>>\n>>>\n>>>> I have a table named table_Users:\n>>>>\n>>>> CREATE TABLE table_Users (\n>>>> UserID character(40) NOT NULL default\n>>>>\n>>>>\n>>> '',\n>>>\n>>>\n>>>> Username varchar(256) NOT NULL default\n>>>>\n>>>>\n>>> '',\n>>>\n>>>\n>>>> Email varchar(256) NOT NULL default\n>>>>\n>>>>\n>>> ''\n>>>\n>>>\n>>>> etc...\n>>>> );\n>>>>\n>>>> The UserID is a character(40) and is generated using\n>>>>\n>>>>\n>>> UUID function. We\n>>>\n>>>\n>>>> started making making other tables and ended up not\n>>>>\n>>>>\n>>> really using\n>>>\n>>>\n>>>> UserID, but instead using Username as the unique\n>>>>\n>>>>\n>>> identifier for the\n>>>\n>>>\n>>>> other tables. Now, we pass and insert the Username to\n>>>>\n>>>>\n>>> for discussions,\n>>>\n>>>\n>>>> wikis, etc, for all the modules we have developed. I\n>>>>\n>>>>\n>>> was wondering if\n>>>\n>>>\n>>>> it would be a performance improvement to use the 40\n>>>>\n>>>>\n>>> Character UserID\n>>>\n>>>\n>>>> instead of Username when querying the other tables, or\n>>>>\n>>>>\n>>> if we should\n>>>\n>>>\n>>>> change the UserID to a serial value and use that to\n>>>>\n>>>>\n>>> query the other\n>>>\n>>>\n>>>> tables. Or just keep the way things are because it\n>>>>\n>>>>\n>>> doesn't really make\n>>>\n>>>\n>>>> much a difference.\n>>>>\n>>>>\n>>> Username would not be any slower than UserID unless you\n>>> have a lot of\n>>> usernames longer than 40 characters.\n>>>\n>>> However making UserID an integer would be quite a bit more\n>>> efficient. It would\n>>> take 4 bytes instead of as the length of the Username which\n>>> adds up when it's\n>>> in all your other tables... Also internationalized text\n>>> collations are quite a\n>>> bit more expensive than a simple integer comparison.\n>>>\n>>> But the real question here is what's the better design.\n>>> If you use Username\n>>> you'll be cursing if you ever want to provide a\n>>> facility to allow people to\n>>> change their usernames. 
You may not want such a facility\n>>> now but one day...\n>>>\n>>>\n>>>\n>>\n>> I don't understand Gregory's suggestion about the design. I thought using\n>> natural primary keys as opposed to surrogate ones is a better design\n>> strategy, even when it comes to performance considerations and even more so\n>> if there are complex relationships within the database.\n>>\n>> Regards,\n>> Valentin\n>>\n>>\n>>\n>\n> UUID is already a surrogate key not a natural key, in no aspect better than\n> a numeric key, just taking a lot more space.\n>\n> So why not use int4/int8?\n>\n>\n>\n>\n\n\n-- \nRegards,\nJay Kang\n\n\nThis e-mail is intended only for the proper person to whom it is addressed\nand may contain legally privileged and/or confidential information. If you\nreceived this communication erroneously, please notify me by reply e-mail,\ndelete this e-mail and all your copies of this e-mail and do not review,\ndisseminate, redistribute, make other use of, rely upon, or copy this\ncommunication. Thank you.\n", "msg_date": "Tue, 12 Aug 2008 19:18:26 +0900", "msg_from": "\"Jay D. Kang\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "\"Mario Weilguni\" <[email protected]> writes:\n\n> UUID is already a surrogate key not a natural key, in no aspect better than a\n> numeric key, just taking a lot more space.\n>\n> So why not use int4/int8?\n\nThe main reason to use UUID instead of sequences is if you want to be able to\ngenerate unique values across multiple systems. So, for example, if you want\nto be able to send these userids to another system which is taking\nregistrations from lots of places. 
Of course that only works if that other\nsystem is already using UUIDs and you're all using good generators.\n\nYou only need int8 if you might someday have more than 2 *billion* users...\nProbably not an urgent issue.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Tue, 12 Aug 2008 13:51:36 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "In response to Gregory Stark <[email protected]>:\n\n> \"Mario Weilguni\" <[email protected]> writes:\n> \n> > UUID is already a surrogate key not a natural key, in no aspect better than a\n> > numeric key, just taking a lot more space.\n> >\n> > So why not use int4/int8?\n> \n> The main reason to use UUID instead of sequences is if you want to be able to\n> generate unique values across multiple systems. So, for example, if you want\n> to be able to send these userids to another system which is taking\n> registrations from lots of places. Of course that only works if that other\n> system is already using UUIDs and you're all using good generators.\n\nNote that in many circumstances, there are other options than UUIDs. If\nyou have control over all the systems generating values, you can prefix\neach generated value with a system ID (i.e. make the high 8 bits the\nsystem ID and the remaining bits come from a sequence) This allows\nyou to still use int4 or int8.\n\nUUID is designed to be a universal solution. But universal solutions\nare frequently less efficient than custom-tailored solutions.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 12 Aug 2008 09:06:21 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Bill Moran wrote:\n>> The main reason to use UUID instead of sequences is if you want to be able to\n>> generate unique values across multiple systems. So, for example, if you want\n>> to be able to send these userids to another system which is taking\n>> registrations from lots of places. Of course that only works if that other\n>> system is already using UUIDs and you're all using good generators.\n>> \n>\n> Note that in many circumstances, there are other options than UUIDs. If\n> you have control over all the systems generating values, you can prefix\n> each generated value with a system ID (i.e. make the high 8 bits the\n> system ID and the remaining bits come from a sequence) This allows\n> you to still use int4 or int8.\n>\n> UUID is designed to be a universal solution. But universal solutions\n> are frequently less efficient than custom-tailored solutions.\n> \n\nOther benefits include:\n - Reduced management cost. As described above, one would have to \nallocate keyspace in each system. By using a UUID, one can skip this step.\n - Increased keyspace. Even if keyspace allocation is performed, an \nint4 only has 32-bit of keyspace to allocate. The IPv4 address space is \nalready over 85% allocated as an example of how this can happen. \n128-bits has a LOT more keyspace than 32-bits or 64-bits.\n - Reduced sequence predictability. 
Certain forms of exploits when \nthe surrogate key is exposed to the public, are rendered ineffective as \nguessing the \"next\" or \"previous\" generated key is far more difficult.\n - Used as the key into a cache or other lookup table. Multiple types \nof records can be cached to the same storage as the sequence is intended \nto be universally unique.\n - Flexibility to merge systems later, even if unplanned. For \nexample, System A and System B are run independently for some time. \nThen, it is determined that they should be merged. If unique keys are \nspecific to the system, this becomes far more difficult to implement \nthan if the unique keys are universal.\n\nThat said, most uses of UUID do not require any of the above. It's a \n\"just in case\" measure, that suffers the performance cost, \"just in case.\"\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\n\nBill Moran wrote:\n\n\nThe main reason to use UUID instead of sequences is if you want to be able to\ngenerate unique values across multiple systems. So, for example, if you want\nto be able to send these userids to another system which is taking\nregistrations from lots of places. Of course that only works if that other\nsystem is already using UUIDs and you're all using good generators.\n \n\n\nNote that in many circumstances, there are other options than UUIDs. If\nyou have control over all the systems generating values, you can prefix\neach generated value with a system ID (i.e. make the high 8 bits the\nsystem ID and the remaining bits come from a sequence) This allows\nyou to still use int4 or int8.\n\nUUID is designed to be a universal solution. But universal solutions\nare frequently less efficient than custom-tailored solutions.\n \n\n\nOther benefits include:\n    - Reduced management cost. As described above, one would have to\nallocate keyspace in each system. By using a UUID, one can skip this\nstep.\n    - Increased keyspace. Even if keyspace allocation is performed, an\nint4 only has 32-bit of keyspace to allocate. The IPv4 address space is\nalready over 85% allocated as an example of how this can happen.\n128-bits has a LOT more keyspace than 32-bits or 64-bits.\n    - Reduced sequence predictability. Certain forms of exploits when\nthe surrogate key is exposed to the public, are rendered ineffective as\nguessing the \"next\" or \"previous\" generated key is far more difficult.\n    - Used as the key into a cache or other lookup table. Multiple\ntypes of records can be cached to the same storage as the sequence is\nintended to be universally unique.\n    - Flexibility to merge systems later, even if unplanned. For\nexample, System A and System B are run independently for some time.\nThen, it is determined that they should be merged. If unique keys are\nspecific to the system, this becomes far more difficult to implement\nthan if the unique keys are universal.\n\nThat said, most uses of UUID do not require any of the above. It's a\n\"just in case\" measure, that suffers the performance cost, \"just in\ncase.\"\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Tue, 12 Aug 2008 09:21:40 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "\"Mark Mielke\" <[email protected]> writes:\n\n> - Increased keyspace. Even if keyspace allocation is performed, an int4 only\n> has 32-bit of keyspace to allocate. 
The IPv4 address space is already over 85%\n> allocated as an example of how this can happen. 128-bits has a LOT more\n> keyspace than 32-bits or 64-bits.\n\nThe rest of your points are valid (though not particularly convincing to me\nfor most applications) but this example is bogus. The IPv4 address space is\ncongested because of the hierarchic nature of allocations. Not because there\nis an actual shortage of IPv4 addresses themselves. There would be enough IPv4\nfor every ethernet device on the planet for decades to come if we could\nallocate them individually -- but we can't.\n\nThat is, when allocating an organization 100 addresses if they want to be able\nto treat them as a contiguous network they must be allocated 128 addresses.\nAnd if they might ever grow to 129 they're better off just justifying 256\naddresses today.\n\nThat's not an issue for a sequence generated primary key. Arguably it *is* a\nproblem for UUID which partitions up that 128-bits much the way the original\npre-CIDR IPv4 addressing scheme partitioned up the address. But 128-bits is so\nmuch bigger it avoids running into the issue.\n\nThe flip side is that sequence generated keys have to deal with gaps if record\nis deleted later. So the relevant question is not whether you plan to have 2\nbillion users at any single point in the future but rather whether you plan to\never have had 2 billion users total over your history. I suspect large\nnetworks like Yahoo or Google might be nearing or past that point now even\nthough they probably only have a few hundred million current users.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's Slony Replication support!\n", "msg_date": "Tue, 12 Aug 2008 14:46:57 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Hi!\n\nWe use normal sequences to generate id's across multiple nodes. We \nuse the \"increment\" parameter for the sequence and we specify each \nnode to increment its sequence with for example 10 and the the first \nnode to start the sequence at 1 and the second at 2 and so on. In that \nway you get an unique ID across each nodes thats an INT. Not in \nchronological order but it's unique ;)\n\nThe only issue with this is that the value you chose for increment \nvalue is your node limit.\n\nCheers!\n\nMathias\n\n\nOn 12 aug 2008, at 14.51, Gregory Stark wrote:\n\n> \"Mario Weilguni\" <[email protected]> writes:\n>\n>> UUID is already a surrogate key not a natural key, in no aspect \n>> better than a\n>> numeric key, just taking a lot more space.\n>>\n>> So why not use int4/int8?\n>\n> The main reason to use UUID instead of sequences is if you want to \n> be able to\n> generate unique values across multiple systems. So, for example, if \n> you want\n> to be able to send these userids to another system which is taking\n> registrations from lots of places. 
Of course that only works if that \n> other\n> system is already using UUIDs and you're all using good generators.\n>\n> You only need int8 if you might someday have more than 2 *billion* \n> users...\n> Probably not an urgent issue.\n>\n> -- \n> Gregory Stark\n> EnterpriseDB http://www.enterprisedb.com\n> Ask me about EnterpriseDB's Slony Replication support!\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 12 Aug 2008 15:57:09 +0200", "msg_from": "=?ISO-8859-1?Q?Mathias_Stjernstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Gregory Stark wrote:\n> \"Mark Mielke\" <[email protected]> writes:\n>\n> \n>> - Increased keyspace. Even if keyspace allocation is performed, an int4 only\n>> has 32-bit of keyspace to allocate. The IPv4 address space is already over 85%\n>> allocated as an example of how this can happen. 128-bits has a LOT more\n>> keyspace than 32-bits or 64-bits.\n>> \n>\n> The rest of your points are valid (though not particularly convincing to me\n> for most applications) but this example is bogus. The IPv4 address space is\n> congested because of the hierarchic nature of allocations. Not because there\n> is an actual shortage of IPv4 addresses themselves. There would be enough IPv4\n> for every ethernet device on the planet for decades to come if we could\n> allocate them individually -- but we can't.\n> \n\nI don't disagree. Obviously, most systems people work with do not \nrequire 2**32 records. You trimmed my bottom statement where \"most \nsystems don't require any of these benefits - it's only a just in case.\" :-)\n\nThe point is valid - 128-bits has more keyspace than 32-bits or 64-bits. \nThe relevance of this point to a particular application other than \nFacebook, Google, or Yahoo, is probably low or non-existent.\n\nCheers,\nmark\n\n\n> That is, when allocating an organization 100 addresses if they want to be able\n> to treat them as a contiguous network they must be allocated 128 addresses.\n> And if they might ever grow to 129 they're better off just justifying 256\n> addresses today.\n>\n> That's not an issue for a sequence generated primary key. Arguably it *is* a\n> problem for UUID which partitions up that 128-bits much the way the original\n> pre-CIDR IPv4 addressing scheme partitioned up the address. But 128-bits is so\n> much bigger it avoids running into the issue.\n>\n> The flip side is that sequence generated keys have to deal with gaps if record\n> is deleted later. So the relevant question is not whether you plan to have 2\n> billion users at any single point in the future but rather whether you plan to\n> ever have had 2 billion users total over your history. I suspect large\n> networks like Yahoo or Google might be nearing or past that point now even\n> though they probably only have a few hundred million current users.\n>\n> \n\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nGregory Stark wrote:\n\n\"Mark Mielke\" <[email protected]> writes:\n\n \n\n - Increased keyspace. Even if keyspace allocation is performed, an int4 only\nhas 32-bit of keyspace to allocate. The IPv4 address space is already over 85%\nallocated as an example of how this can happen. 
128-bits has a LOT more\nkeyspace than 32-bits or 64-bits.\n \n\n\nThe rest of your points are valid (though not particularly convincing to me\nfor most applications) but this example is bogus. The IPv4 address space is\ncongested because of the hierarchic nature of allocations. Not because there\nis an actual shortage of IPv4 addresses themselves. There would be enough IPv4\nfor every ethernet device on the planet for decades to come if we could\nallocate them individually -- but we can't.\n \n\n\nI don't disagree. Obviously, most systems people work with do not\nrequire 2**32 records. You trimmed my bottom statement where \"most\nsystems don't require any of these benefits - it's only a just in\ncase.\" :-)\n\nThe point is valid - 128-bits has more keyspace than 32-bits or\n64-bits. The relevance of this point to a particular application other\nthan Facebook, Google, or Yahoo, is probably low or non-existent.\n\nCheers,\nmark\n\n\n\n\nThat is, when allocating an organization 100 addresses if they want to be able\nto treat them as a contiguous network they must be allocated 128 addresses.\nAnd if they might ever grow to 129 they're better off just justifying 256\naddresses today.\n\nThat's not an issue for a sequence generated primary key. Arguably it *is* a\nproblem for UUID which partitions up that 128-bits much the way the original\npre-CIDR IPv4 addressing scheme partitioned up the address. But 128-bits is so\nmuch bigger it avoids running into the issue.\n\nThe flip side is that sequence generated keys have to deal with gaps if record\nis deleted later. So the relevant question is not whether you plan to have 2\nbillion users at any single point in the future but rather whether you plan to\never have had 2 billion users total over your history. I suspect large\nnetworks like Yahoo or Google might be nearing or past that point now even\nthough they probably only have a few hundred million current users.\n\n \n\n\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Tue, 12 Aug 2008 10:11:50 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "We chose UUID as PK because there is still some information in an \ninteger key.\nYou can see if a user has registered before someone else (user1.id < \nuser2.id)\nor you can see how many new users registered in a specific period of \ntime\n(compare the id of the newest user to the id a week ago). This is \ninformation\nwhich is in some cases critical.\n\nmoritz\n", "msg_date": "Tue, 12 Aug 2008 16:17:58 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "On Tue, Aug 12, 2008 at 9:46 AM, Gregory Stark <[email protected]> wrote:\n> \"Mark Mielke\" <[email protected]> writes:\n>\n>> - Increased keyspace. Even if keyspace allocation is performed, an int4 only\n>> has 32-bit of keyspace to allocate. The IPv4 address space is already over 85%\n>> allocated as an example of how this can happen. 128-bits has a LOT more\n>> keyspace than 32-bits or 64-bits.\n>\n> The rest of your points are valid (though not particularly convincing to me\n> for most applications) but this example is bogus. The IPv4 address space is\n> congested because of the hierarchic nature of allocations. Not because there\n> is an actual shortage of IPv4 addresses themselves. 
There would be enough IPv4\n> for every ethernet device on the planet for decades to come if we could\n> allocate them individually -- but we can't.\n\nOnly because of NAT. There are a _lot_ of IP devices out there maybe\nnot billions, but maybe so, and 'enough for decades' is quite a\nstretch.\n\nmerlin\n", "msg_date": "Tue, 12 Aug 2008 10:29:17 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "In response to Moritz Onken <[email protected]>:\n\n> We chose UUID as PK because there is still some information in an \n> integer key.\n> You can see if a user has registered before someone else (user1.id < \n> user2.id)\n> or you can see how many new users registered in a specific period of \n> time\n> (compare the id of the newest user to the id a week ago). This is \n> information\n> which is in some cases critical.\n\nSo you're accidentally storing critical information in magic values\ninstead of storing it explicitly?\n\nGood luck with that.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 12 Aug 2008 11:04:16 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "\nAm 12.08.2008 um 17:04 schrieb Bill Moran:\n\n> In response to Moritz Onken <[email protected]>:\n>\n>> We chose UUID as PK because there is still some information in an\n>> integer key.\n>> You can see if a user has registered before someone else (user1.id <\n>> user2.id)\n>> or you can see how many new users registered in a specific period of\n>> time\n>> (compare the id of the newest user to the id a week ago). This is\n>> information\n>> which is in some cases critical.\n>\n> So you're accidentally storing critical information in magic values\n> instead of storing it explicitly?\n>\n> Good luck with that.\n>\n\n\nHow do I store critical information? I was just saying that it easy\nto get some information out of a primary key which is an incrementing\ninteger. And it makes sense, in some rare cases, to have a PK which\nis some kind of random like UUIDs where you cannot guess the next value.\n\nmoritz\n", "msg_date": "Tue, 12 Aug 2008 17:10:24 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "In response to Moritz Onken <[email protected]>:\n\n> \n> Am 12.08.2008 um 17:04 schrieb Bill Moran:\n> \n> > In response to Moritz Onken <[email protected]>:\n> >\n> >> We chose UUID as PK because there is still some information in an\n> >> integer key.\n> >> You can see if a user has registered before someone else (user1.id <\n> >> user2.id)\n> >> or you can see how many new users registered in a specific period of\n> >> time\n> >> (compare the id of the newest user to the id a week ago). This is\n> >> information\n> >> which is in some cases critical.\n> >\n> > So you're accidentally storing critical information in magic values\n> > instead of storing it explicitly?\n> >\n> > Good luck with that.\n> \n> How do I store critical information? I was just saying that it easy\n> to get some information out of a primary key which is an incrementing\n> integer. And it makes sense, in some rare cases, to have a PK which\n> is some kind of random like UUIDs where you cannot guess the next value.\n\nI just repeated your words. 
Read above \"this is information which is in\nsome cases critical.\"\n\nIf I misunderstood, then I misunderstood.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 12 Aug 2008 11:21:36 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "\nAm 12.08.2008 um 17:21 schrieb Bill Moran:\n\n> In response to Moritz Onken <[email protected]>:\n>\n>>\n>> Am 12.08.2008 um 17:04 schrieb Bill Moran:\n>>\n>>> In response to Moritz Onken <[email protected]>:\n>>>\n>>>> We chose UUID as PK because there is still some information in an\n>>>> integer key.\n>>>> You can see if a user has registered before someone else \n>>>> (user1.id <\n>>>> user2.id)\n>>>> or you can see how many new users registered in a specific period \n>>>> of\n>>>> time\n>>>> (compare the id of the newest user to the id a week ago). This is\n>>>> information\n>>>> which is in some cases critical.\n>>>\n>>> So you're accidentally storing critical information in magic values\n>>> instead of storing it explicitly?\n>>>\n>>> Good luck with that.\n>>\n>> How do I store critical information? I was just saying that it easy\n>> to get some information out of a primary key which is an incrementing\n>> integer. And it makes sense, in some rare cases, to have a PK which\n>> is some kind of random like UUIDs where you cannot guess the next \n>> value.\n>\n> I just repeated your words. Read above \"this is information which \n> is in\n> some cases critical.\"\n>\n> If I misunderstood, then I misunderstood.\n\nIf you are using incrementing integers as pk then you are storing this\ndata implicitly with your primary key. Using UUIDs is a way to avoid \nthat.\n\n", "msg_date": "Tue, 12 Aug 2008 17:24:50 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "\nOn Aug 12, 2008, at 8:21 AM, Bill Moran wrote:\n\n> In response to Moritz Onken <[email protected]>:\n>\n>>\n>> Am 12.08.2008 um 17:04 schrieb Bill Moran:\n>>\n>>> In response to Moritz Onken <[email protected]>:\n>>>\n>>>> We chose UUID as PK because there is still some information in an\n>>>> integer key.\n>>>> You can see if a user has registered before someone else \n>>>> (user1.id <\n>>>> user2.id)\n>>>> or you can see how many new users registered in a specific period \n>>>> of\n>>>> time\n>>>> (compare the id of the newest user to the id a week ago). This is\n>>>> information\n>>>> which is in some cases critical.\n>>>\n>>> So you're accidentally storing critical information in magic values\n>>> instead of storing it explicitly?\n>>>\n>>> Good luck with that.\n>>\n>> How do I store critical information? I was just saying that it easy\n>> to get some information out of a primary key which is an incrementing\n>> integer. And it makes sense, in some rare cases, to have a PK which\n>> is some kind of random like UUIDs where you cannot guess the next \n>> value.\n>\n> I just repeated your words. Read above \"this is information which \n> is in\n> some cases critical.\"\n>\n> If I misunderstood, then I misunderstood.\n>\n\nI think Moritz is more concerned about leakage of critical information,\nrather than intentional storage of it. 
When a simple incrementing \ninteger\nis used as an identifier in publicly visible places (webapps, ticketing\nsystems) then that may leak more information than intended.\n\nCheers,\n Steve\n\n", "msg_date": "Tue, 12 Aug 2008 08:36:10 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "In response to Steve Atkins <[email protected]>:\n\n> \n> On Aug 12, 2008, at 8:21 AM, Bill Moran wrote:\n> \n> > In response to Moritz Onken <[email protected]>:\n> >\n> >>\n> >> Am 12.08.2008 um 17:04 schrieb Bill Moran:\n> >>\n> >>> In response to Moritz Onken <[email protected]>:\n> >>>\n> >>>> We chose UUID as PK because there is still some information in an\n> >>>> integer key.\n> >>>> You can see if a user has registered before someone else \n> >>>> (user1.id <\n> >>>> user2.id)\n> >>>> or you can see how many new users registered in a specific period \n> >>>> of\n> >>>> time\n> >>>> (compare the id of the newest user to the id a week ago). This is\n> >>>> information\n> >>>> which is in some cases critical.\n> >>>\n> >>> So you're accidentally storing critical information in magic values\n> >>> instead of storing it explicitly?\n> >>>\n> >>> Good luck with that.\n> >>\n> >> How do I store critical information? I was just saying that it easy\n> >> to get some information out of a primary key which is an incrementing\n> >> integer. And it makes sense, in some rare cases, to have a PK which\n> >> is some kind of random like UUIDs where you cannot guess the next \n> >> value.\n> >\n> > I just repeated your words. Read above \"this is information which \n> > is in\n> > some cases critical.\"\n> >\n> > If I misunderstood, then I misunderstood.\n> >\n> \n> I think Moritz is more concerned about leakage of critical information,\n> rather than intentional storage of it. When a simple incrementing \n> integer\n> is used as an identifier in publicly visible places (webapps, ticketing\n> systems) then that may leak more information than intended.\n\nThen I did misunderstand.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 12 Aug 2008 11:48:40 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Bill Moran wrote:\n>>>>>> We chose UUID as PK because there is still some information in an\n>>>>>> integer key.\n>>>>>> You can see if a user has registered before someone else \n>>>>>> (user1.id <\n>>>>>> user2.id)\n>>>>>> or you can see how many new users registered in a specific period \n>>>>>> of\n>>>>>> time\n>>>>>> (compare the id of the newest user to the id a week ago). This is\n>>>>>> information\n>>>>>> which is in some cases critical.\n>> I think Moritz is more concerned about leakage of critical information,\n>> rather than intentional storage of it. When a simple incrementing \n>> integer\n>> is used as an identifier in publicly visible places (webapps, ticketing\n>> systems) then that may leak more information than intended.\n>> \n\nWhile we are on this distraction - UUID will sometimes encode \"critical\" \ninformation such as: 1) The timestamp (allowing users to be compared), \nand 2) The MAC address of the computer that generated it.\n\nSo, I wouldn't say that UUID truly protects you here unless you are sure \nto use one of the UUID formats that is not timestamp or MAC address \nbased. 
The main benefit of UUID here is the increased keyspace, so \npredicting sequence becomes more difficult.\n\n(Note that an all-random UUID is not better than two pairs of all-random \n64-bit integers with a good random number source. :-) )\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Tue, 12 Aug 2008 11:56:22 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" }, { "msg_contents": "Bill Moran wrote:\n> In response to Steve Atkins <[email protected]>:\n>\n> \n>> On Aug 12, 2008, at 8:21 AM, Bill Moran wrote:\n>>\n>> \n>>> In response to Moritz Onken <[email protected]>:\n>>>\n>>> \n>>>> Am 12.08.2008 um 17:04 schrieb Bill Moran:\n>>>>\n>>>> \n>>>>> In response to Moritz Onken <[email protected]>:\n>>>>>\n>>>>> \n>>>>>> We chose UUID as PK because there is still some information in an\n>>>>>> integer key.\n>>>>>> You can see if a user has registered before someone else \n>>>>>> (user1.id <\n>>>>>> user2.id)\n>>>>>> or you can see how many new users registered in a specific period \n>>>>>> of\n>>>>>> time\n>>>>>> (compare the id of the newest user to the id a week ago). This is\n>>>>>> information\n>>>>>> which is in some cases critical.\n>>>>>> \n>>>>> So you're accidentally storing critical information in magic values\n>>>>> instead of storing it explicitly?\n>>>>>\n>>>>> Good luck with that.\n>>>>> \n>>>> How do I store critical information? I was just saying that it easy\n>>>> to get some information out of a primary key which is an incrementing\n>>>> integer. And it makes sense, in some rare cases, to have a PK which\n>>>> is some kind of random like UUIDs where you cannot guess the next \n>>>> value.\n>>>> \nInteresting. Ordered chronologically and the next value is unguessable.\n\n>>> I just repeated your words. Read above \"this is information which \n>>> is in\n>>> some cases critical.\"\n>>>\n>>> If I misunderstood, then I misunderstood.\n>>>\n>>> \n>> I think Moritz is more concerned about leakage of critical information,\n>> rather than intentional storage of it. When a simple incrementing \n>> integer\n>> is used as an identifier in publicly visible places (webapps, ticketing\n>> systems) then that may leak more information than intended.\n>> \n\nI think there are better ways to accomplish this than encoding and \ndecoding/decrypting a PK. Store the sensitive data in a session variable \nor store session data in the database neither of which is accessible to \nusers.\n\nIt is usually a big mistake to de-normalize a table by encoding several \nfields in a single column PK or not. If you want to do something with \nthe encoded data such as find one or more rows with an encoded value \nthen you will have to inspect every row and decode it and then compare \nit or add it or whatever.\n\n\n>\n> Then I did misunderstand.\n>\n> \n\n\n-- \nH. Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Tue, 12 Aug 2008 14:43:42 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Using PK value as a String" } ]
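A minimal sketch (not part of the archived posts above) of the node-offset sequence scheme Mathias describes in this thread: each node uses the same INCREMENT but a different START, so plain integer keys stay unique across nodes without resorting to UUIDs. The sequence and table names below are hypothetical.

-- Run on node 1:
CREATE SEQUENCE users_id_seq START WITH 1 INCREMENT BY 10;
-- Run on node 2:
CREATE SEQUENCE users_id_seq START WITH 2 INCREMENT BY 10;

-- Identical table definition on every node; generated ids never collide.
CREATE TABLE users (
    id   bigint PRIMARY KEY DEFAULT nextval('users_id_seq'),
    name text NOT NULL
);

-- Node 1 yields 1, 11, 21, ...; node 2 yields 2, 12, 22, ...
-- As noted in the thread, the chosen increment (10 here) caps the number of
-- nodes, and the ids are unique but not chronologically ordered across nodes.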
[ { "msg_contents": "Hello All,\n\nRan into a re-occuring performance problem with some report queries again\ntoday. In a nutshell, we have filters on either multiple joined tables, or\nmultiple columns on a single table that are highly correlated. So, the\nestimates come out grossly incorrect (the planner has no way to know they\nare correlated). 2000:1 for one I'm looking at right now. Generally this\ndoesn't matter, except in complex reporting queries like these when this is\nthe first join of 40 other joins. Because the estimate is wrong at the\nlowest level, it snowballs up through the rest of the joins causing the\nquery to run very, very slowly. In many of these cases, forcing nested\nloops off for the duration of the query fixes the problem. But I have a\ncouple that still are painfully slow and shouldn't be.\n\nI've been reading through the archives with others having similar problems\n(including myself a year ago). Am I right in assuming that at this point\nthere is still little we can do in postgres to speed up this kind of query?\nRight now the planner has no way to know the correlation between different\ncolumns in the same table, let alone columns in different tables. So, it\njust assumes no correlation and returns incorrectly low estimates in cases\nlike these.\n\nThe only solution I've come up with so far is to materialize portions of the\nlarger query into subqueries with these correlated filters which are indexed\nand analyzed before joining into the larger query. This would keep the\nincorrect estimates from snowballing up through the chain of joins.\n\nAre there any other solutions to this problem?\n\nThanks,\n\n-Chris\n\nHello All,Ran into a re-occuring performance problem with some report queries again today.  In a nutshell, we have filters on either multiple joined tables, or multiple columns on a single table that are highly correlated.  So, the estimates come out grossly incorrect (the planner has no way to know they are correlated).  2000:1 for one I'm looking at right now.  Generally this doesn't matter, except in complex reporting queries like these when this is the first join of 40 other joins.  Because the estimate is wrong at the lowest level, it snowballs up through the rest of the joins causing the query to run very, very slowly.   In many of these cases, forcing nested loops off for the duration of the query fixes the problem.  But I have a couple that still are painfully slow and shouldn't be.\nI've been reading through the archives with others having similar problems (including myself a year ago).  Am I right in assuming that at this point there is still little we can do in postgres to speed up this kind of query?  Right now the planner has no way to know the correlation between different columns in the same table, let alone columns in different tables.  So, it just assumes no correlation and returns incorrectly low estimates in cases like these.\nThe only solution I've come up with so far is to materialize portions of the larger query into subqueries with these correlated filters which are indexed and analyzed before joining into the larger query.  
This would keep the incorrect estimates from snowballing up through the chain of joins.\nAre there any other solutions to this problem?Thanks,-Chris", "msg_date": "Tue, 12 Aug 2008 17:59:27 -0400", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect estimates on correlated filters" }, { "msg_contents": "On Aug 12, 2008, at 4:59 PM, Chris Kratz wrote:\n> Ran into a re-occuring performance problem with some report queries \n> again today. In a nutshell, we have filters on either multiple \n> joined tables, or multiple columns on a single table that are \n> highly correlated. So, the estimates come out grossly incorrect \n> (the planner has no way to know they are correlated). 2000:1 for \n> one I'm looking at right now. Generally this doesn't matter, \n> except in complex reporting queries like these when this is the \n> first join of 40 other joins. Because the estimate is wrong at the \n> lowest level, it snowballs up through the rest of the joins causing \n> the query to run very, very slowly. In many of these cases, \n> forcing nested loops off for the duration of the query fixes the \n> problem. But I have a couple that still are painfully slow and \n> shouldn't be.\n>\n> I've been reading through the archives with others having similar \n> problems (including myself a year ago). Am I right in assuming \n> that at this point there is still little we can do in postgres to \n> speed up this kind of query? Right now the planner has no way to \n> know the correlation between different columns in the same table, \n> let alone columns in different tables. So, it just assumes no \n> correlation and returns incorrectly low estimates in cases like these.\n>\n> The only solution I've come up with so far is to materialize \n> portions of the larger query into subqueries with these correlated \n> filters which are indexed and analyzed before joining into the \n> larger query. This would keep the incorrect estimates from \n> snowballing up through the chain of joins.\n>\n> Are there any other solutions to this problem?\n\n\nWell... you could try and convince certain members of the community \nthat we actually do need some kind of a query hint mechanism... ;)\n\nI did make a suggestion a few months ago that involved sorting a \ntable on different columns and recording the correlation of other \ncolumns. The scheme isn't perfect, but it would help detect cases \nlike a field populated by a sequence and another field that's insert \ntimestamp; those two fields would correlate highly, and you should \neven be able to correlate the two histograms; that would allow you to \ninfer that most of the insert times for _id's between 100 and 200 \nwill be between 2008-01-01 00:10 and 2008-01-01 00:20, for example.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 13 Aug 2008 09:59:49 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on correlated filters" }, { "msg_contents": "On Wed, Aug 13, 2008 at 10:59 AM, Decibel! <[email protected]> wrote:\n\n> On Aug 12, 2008, at 4:59 PM, Chris Kratz wrote:\n>\n>> Ran into a re-occuring performance problem with some report queries again\n>> today. In a nutshell, we have filters on either multiple joined tables, or\n>> multiple columns on a single table that are highly correlated. 
So, the\n>> estimates come out grossly incorrect (the planner has no way to know they\n>> are correlated). 2000:1 for one I'm looking at right now. Generally this\n>> doesn't matter, except in complex reporting queries like these when this is\n>> the first join of 40 other joins. Because the estimate is wrong at the\n>> lowest level, it snowballs up through the rest of the joins causing the\n>> query to run very, very slowly. In many of these cases, forcing nested\n>> loops off for the duration of the query fixes the problem. But I have a\n>> couple that still are painfully slow and shouldn't be.\n>>\n>> I've been reading through the archives with others having similar problems\n>> (including myself a year ago). Am I right in assuming that at this point\n>> there is still little we can do in postgres to speed up this kind of query?\n>> Right now the planner has no way to know the correlation between different\n>> columns in the same table, let alone columns in different tables. So, it\n>> just assumes no correlation and returns incorrectly low estimates in cases\n>> like these.\n>>\n>> The only solution I've come up with so far is to materialize portions of\n>> the larger query into subqueries with these correlated filters which are\n>> indexed and analyzed before joining into the larger query. This would keep\n>> the incorrect estimates from snowballing up through the chain of joins.\n>>\n>> Are there any other solutions to this problem?\n>>\n>\n>\n> Well... you could try and convince certain members of the community that we\n> actually do need some kind of a query hint mechanism... ;)\n>\n> I did make a suggestion a few months ago that involved sorting a table on\n> different columns and recording the correlation of other columns. The scheme\n> isn't perfect, but it would help detect cases like a field populated by a\n> sequence and another field that's insert timestamp; those two fields would\n> correlate highly, and you should even be able to correlate the two\n> histograms; that would allow you to infer that most of the insert times for\n> _id's between 100 and 200 will be between 2008-01-01 00:10 and 2008-01-01\n> 00:20, for example.\n> --\n> Decibel!, aka Jim C. Nasby, Database Architect [email protected]\n> Give your computer some brain candy! www.distributed.net Team #1828\n>\n>\n> Thanks for the reply,\n\nYes, I know hints are frowned upon around here. Though, I'd love to have\nthem or something equivalent on this particular query just so the customer\ncan run their important reports. As it is, it's unrunnable.\n\nUnfortunately, if I don't think the sorting idea would help in the one case\nI'm looking at which involves filters on two tables that are joined\ntogether. The filters happen to be correlated such that about 95% of the\nrows from each filtered table are actually returned after the join.\nUnfortunately, the planner thinks we will get 1 row back.\n\nI do have to find a way to make these queries runnable. I'll keep looking.\n\nThanks,\n\n-Chris\n\nOn Wed, Aug 13, 2008 at 10:59 AM, Decibel! <[email protected]> wrote:\nOn Aug 12, 2008, at 4:59 PM, Chris Kratz wrote:\n\nRan into a re-occuring performance problem with some report queries again today.  In a nutshell, we have filters on either multiple joined tables, or multiple columns on a single table that are highly correlated.  So, the estimates come out grossly incorrect (the planner has no way to know they are correlated).  2000:1 for one I'm looking at right now.  
Generally this doesn't matter, except in complex reporting queries like these when this is the first join of 40 other joins.  Because the estimate is wrong at the lowest level, it snowballs up through the rest of the joins causing the query to run very, very slowly.   In many of these cases, forcing nested loops off for the duration of the query fixes the problem.  But I have a couple that still are painfully slow and shouldn't be.\n\nI've been reading through the archives with others having similar problems (including myself a year ago).  Am I right in assuming that at this point there is still little we can do in postgres to speed up this kind of query?  Right now the planner has no way to know the correlation between different columns in the same table, let alone columns in different tables.  So, it just assumes no correlation and returns incorrectly low estimates in cases like these.\n\nThe only solution I've come up with so far is to materialize portions of the larger query into subqueries with these correlated filters which are indexed and analyzed before joining into the larger query.  This would keep the incorrect estimates from snowballing up through the chain of joins.\n\nAre there any other solutions to this problem?\n\n\n\nWell... you could try and convince certain members of the community that we actually do need some kind of a query hint mechanism... ;)\n\nI did make a suggestion a few months ago that involved sorting a table on different columns and recording the correlation of other columns. The scheme isn't perfect, but it would help detect cases like a field populated by a sequence and another field that's insert timestamp; those two fields would correlate highly, and you should even be able to correlate the two histograms; that would allow you to infer that most of the insert times for _id's between 100 and 200 will be between 2008-01-01 00:10 and 2008-01-01 00:20, for example.\n\n-- \nDecibel!, aka Jim C. Nasby, Database Architect  [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\nThanks for the reply,\n\nYes, I know hints are frowned upon around here.  Though, I'd love to\nhave them or something equivalent on this particular query just so the customer can run their important reports.  As it is, it's unrunnable.\n\nUnfortunately, if I don't think the sorting idea would help in the one case\nI'm looking at which involves filters on two tables that are joined\ntogether.  The filters happen to be correlated such that about 95% of\nthe rows from each filtered table are actually returned after the join. \nUnfortunately, the planner thinks we will get 1 row back.  \n\nI do have to find a way to make these queries runnable.  I'll keep looking.\n\nThanks,\n\n-Chris", "msg_date": "Wed, 13 Aug 2008 14:45:05 -0400", "msg_from": "\"Chris Kratz\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect estimates on correlated filters" }, { "msg_contents": "Chris Kratz wrote:\n\n> Unfortunately, if I don't think the sorting idea would help in the one case\n> I'm looking at which involves filters on two tables that are joined\n> together. The filters happen to be correlated such that about 95% of the\n> rows from each filtered table are actually returned after the join.\n> Unfortunately, the planner thinks we will get 1 row back.\n\nMaybe you can wrap that part of the query in a SQL function and set its\nestimated cost to the real values with ALTER FUNCTION ... 
ROWS.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 13 Aug 2008 16:58:30 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on correlated filters" }, { "msg_contents": "Decibel! wrote:\n\n> Well... you could try and convince certain members of the community that\n> we actually do need some kind of a query hint mechanism... ;)\n\nIt strikes me that there are really two types of query hint possible here.\n\nOne tells the planner (eg) \"prefer a merge join here\".\n\nThe other gives the planner more information that it might not otherwise\nhave to work with, so it can improve its decisions. \"The values used in\nthis join condition are highly correlated\".\n\nIs there anything wrong with the second approach? It shouldn't tend to\nsuppress planner bug reports etc. Well, not unless people use it to lie\nto the planner, and I expect results from that would be iffy at best. It\njust provides information to supplement Pg's existing stats system to\nhandle cases where it's not able to reasonably collect the required\ninformation.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 14 Aug 2008 09:48:03 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on correlated filters" }, { "msg_contents": "\"Craig Ringer\" <[email protected]> writes:\n\n> It strikes me that there are really two types of query hint possible here.\n>\n> One tells the planner (eg) \"prefer a merge join here\".\n>\n> The other gives the planner more information that it might not otherwise\n> have to work with, so it can improve its decisions. \"The values used in\n> this join condition are highly correlated\".\n\nThis sounds familiar:\n\nhttp://article.gmane.org/gmane.comp.db.postgresql.devel.general/55730/match=hints\n\nPlus ça change...\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Thu, 14 Aug 2008 11:12:00 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on correlated filters" }, { "msg_contents": "On Aug 13, 2008, at 1:45 PM, Chris Kratz wrote:\n> Yes, I know hints are frowned upon around here. Though, I'd love \n> to have them or something equivalent on this particular query just \n> so the customer can run their important reports. As it is, it's \n> unrunnable.\n\n\nActually, now that I think about it the last time this was brought up \nthere was discussion about something that doesn't force a particular \nexecution method, but instead provides improved information to the \nplanner. It might be worth pursuing that, as I think there was less \nopposition to it.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 16 Aug 2008 13:21:47 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect estimates on correlated filters" } ]
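A rough sketch of the workaround Alvaro suggests above: wrap the badly estimated, correlated portion of the report query in a SQL function and override its row estimate with ROWS / ALTER FUNCTION ... ROWS, so the planner's 1-row guess stops snowballing through the remaining joins. All table, column and function names below are invented for illustration.

CREATE FUNCTION correlated_part(p_cust integer)
RETURNS SETOF orders AS $$
    SELECT o.*
      FROM orders o
      JOIN order_flags f USING (order_id)
     WHERE o.is_active            -- these two filters are highly correlated,
       AND f.is_active            -- so the default estimate comes out near 1 row
       AND o.cust_id = p_cust;
$$ LANGUAGE sql STABLE
   ROWS 5000;                     -- hand-supplied, realistic row estimate

-- The estimate can be corrected later without recreating the function:
ALTER FUNCTION correlated_part(integer) ROWS 5000;

The rest of the report then joins against correlated_part(...) rather than the raw tables, so the planner works from the declared estimate instead of the miscalculated one.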
[ { "msg_contents": "Hackers and PG users,\n\nDoes anyone see a need for having TOAST tables be individually\nconfigurable for autovacuum? I've finally come around to looking at\nbeing able to use ALTER TABLE for autovacuum settings, and I'm wondering\nif we need to support that case.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 13 Aug 2008 17:28:38 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "autovacuum: use case for indenpedent TOAST table autovac settings" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Does anyone see a need for having TOAST tables be individually\n> configurable for autovacuum? I've finally come around to looking at\n> being able to use ALTER TABLE for autovacuum settings, and I'm wondering\n> if we need to support that case.\n\nIt seems like we'll want to do it somehow. Perhaps the cleanest way is\nto incorporate toast-table settings in the reloptions of the parent\ntable. Otherwise dump/reload is gonna be a mess.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Aug 2008 20:53:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: autovacuum: use case for indenpedent TOAST table autovac settings" }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Does anyone see a need for having TOAST tables be individually\n> > configurable for autovacuum? I've finally come around to looking at\n> > being able to use ALTER TABLE for autovacuum settings, and I'm wondering\n> > if we need to support that case.\n> \n> It seems like we'll want to do it somehow. Perhaps the cleanest way is\n> to incorporate toast-table settings in the reloptions of the parent\n> table. Otherwise dump/reload is gonna be a mess.\n\nYeah, Magnus was suggesting this syntax:\n\nALTER TABLE foo SET toast_autovacuum_enable = false;\nand the like.\n\nMy question is whether there is interest in actually having support for\nthis, or should we just inherit the settings from the main table. My\ngut feeling is that this may be needed in some cases, but perhaps I'm\noverengineering the thing.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 13 Aug 2008 21:28:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] autovacuum: use case for indenpedent TOAST\n\ttable autovac settings" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Tom Lane wrote:\n>> It seems like we'll want to do it somehow. Perhaps the cleanest way is\n>> to incorporate toast-table settings in the reloptions of the parent\n>> table. Otherwise dump/reload is gonna be a mess.\n\n> My question is whether there is interest in actually having support for\n> this, or should we just inherit the settings from the main table. My\n> gut feeling is that this may be needed in some cases, but perhaps I'm\n> overengineering the thing.\n\nIt seems reasonable to inherit the parent's settings by default, in any\ncase. 
So you could do that now and then extend the feature later if\nthere's real demand.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 13 Aug 2008 21:30:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] autovacuum: use case for indenpedent TOAST table\n\tautovac settings" }, { "msg_contents": "\nOn Wed, 2008-08-13 at 21:30 -0400, Tom Lane wrote:\n> Alvaro Herrera <[email protected]> writes:\n> > Tom Lane wrote:\n> >> It seems like we'll want to do it somehow. Perhaps the cleanest way is\n> >> to incorporate toast-table settings in the reloptions of the parent\n> >> table. Otherwise dump/reload is gonna be a mess.\n> \n> > My question is whether there is interest in actually having support for\n> > this, or should we just inherit the settings from the main table. My\n> > gut feeling is that this may be needed in some cases, but perhaps I'm\n> > overengineering the thing.\n> \n> It seems reasonable to inherit the parent's settings by default, in any\n> case. So you could do that now and then extend the feature later if\n> there's real demand.\n\nYeh, I can't really see a reason why you'd want to treat toast tables\ndifferently with regard to autovacuuming. It's one more setting to get\nwrong, so no thanks.\n\n-- \n Simon Riggs www.2ndQuadrant.com\n PostgreSQL Training, Services and Support\n\n", "msg_date": "Thu, 14 Aug 2008 10:51:01 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] autovacuum: use case for indenpedent TOAST\n\ttable autovac settings" } ]
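A brief sketch of the kind of per-table syntax under discussion here, as exposed through table storage parameters, with separate toast.-prefixed options for the TOAST table; the table name is hypothetical.

-- Tune (or disable) autovacuum for one table:
ALTER TABLE foo SET (autovacuum_enabled = false);
ALTER TABLE foo SET (autovacuum_vacuum_scale_factor = 0.05,
                     autovacuum_vacuum_threshold = 200);

-- Set the option independently for foo's TOAST table:
ALTER TABLE foo SET (toast.autovacuum_enabled = false);

-- Remove the TOAST-specific override:
ALTER TABLE foo RESET (toast.autovacuum_enabled);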
[ { "msg_contents": "Hello,\n\nWe're developing a project which uses PostgreSQL to store binary\ndocuments. Since our system is likely to grow up to some terabytes in two\nyears, I'd like to ask if some of you have had some experience with\nstoring a huge amount of blob files in postgres. How does it scale in\nperformance?\n\nThank you very much for you time!\n\nJuliano Freitas\n\n", "msg_date": "Thu, 14 Aug 2008 15:00:03 -0300 (BRT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Experiences storing binary in Postgres" }, { "msg_contents": "On Aug 14, 2008, at 1:00 PM, [email protected] wrote:\n> We're developing a project which uses PostgreSQL to store binary\n> documents. Since our system is likely to grow up to some terabytes \n> in two\n> years, I'd like to ask if some of you have had some experience with\n> storing a huge amount of blob files in postgres. How does it scale in\n> performance?\n\nIt depends on your access patterns. If this is an OLTP database, you \nneed to think really hard about putting that stuff in the database, \nbecause it will seriously hurt your caching ability. If we had the \nability to define buffersize limits per-tablespace, you could handle \nit that way, but...\n\nAnother consideration is why you want to put this data in a database \nin the first place? It may be convenient, but if that's the only \nreason you could be hurting yourself in the long run.\n\nBTW, after seeing the SkyTools presentation at pgCon this year I \nrealized there's a pretty attractive middle-ground between storing \nthis data in your production database and storing it in the \nfilesystem. Using plproxy and pgBouncer, it wouldn't be hard to store \nthe data in an external database. That gives you the ease-of- \nmanagement of a database, but keeps the data away from your \nproduction data.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 16 Aug 2008 13:45:53 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Experiences storing binary in Postgres" } ]
[ { "msg_contents": "Hi all,\n\n I've got a simple table with a lot of data in it:\n\nCREATE TABLE customer_data (\n\tcd_id\t\t\tint\t\tprimary key\tdefault(nextval('cd_seq')),\n\tcd_cust_id\t\tint\t\tnot null,\n\tcd_variable\t\ttext\t\tnot null,\n\tcd_value\t\ttext,\n\tcd_tag\t\t\ttext,\n\tadded_user\t\tint\t\tnot null,\n\tadded_date\t\ttimestamp\tnot null\tdefault now(),\n\tmodified_user\t\tint\t\tnot null,\n\tmodified_date\t\ttimestamp\tnot null\tdefault now(),\n\t\n\tFOREIGN KEY(cd_cust_id) REFERENCES customer(cust_id)\n);\n\n The 'cust_id' references the customer that the given data belongs to. \nThe reason for this \"data bucket\" (does this structure have a proper \nname?) is that the data I need to store on a give customer is quite \nvariable and outside of my control. As it is, there is about 400 \ndifferent variable/value pairs I need to store per customer.\n\n This table has a copy in a second historical schema that matches this \none in public but with an additional 'history_id' sequence. I use a \nsimple function to copy an INSERT or UPDATE to any entry in the \nhistorical schema.\n\n Now I want to graph a certain subset of these variable/value pairs, \nso I created a simple (in concept) view to pull out the historical data \nset for a given customer. I do this by pulling up a set of records based \non the name of the 'cd_variable' and 'cd_tag' and connect the records \ntogether using a matching timestamp.\n\n The problem is that this view has very quickly become terribly slow.\n\n I've got indexes on the 'cd_variable', 'cd_tag' and the parent \n'cust_id' columns, and the plan seems to show that the indexes are \nindeed being used, but the query against this view can take up to 10 \nminutes to respond. I am hoping to avoid making a dedicated table as \nwhat I use to build this dataset may change over time.\n\n Below I will post the VIEW and a sample of the query's EXPLAIN \nANALYZE. 
Thanks for any tips/help/clue-stick-beating you may be able to \nshare!\n\nMadi\n\n-=] VIEW\n\nCREATE VIEW view_sync_rate_history AS\n SELECT\n a.cust_id AS vsrh_cust_id,\n a.cust_name AS vsrh_cust_name,\n a.cust_business AS vsrh_cust_business,\n a.cust_nexxia_id||'-'||a.cust_nexxia_seq AS vsrh_cust_nexxia,\n a.cust_phone AS vsrh_cust_phone,\n b.cd_value AS vsrh_up_speed,\n b.history_id AS vsrh_up_speed_history_id,\n c.cd_value AS vsrh_up_rco,\n c.history_id AS vsrh_up_rco_history_id,\n d.cd_value AS vsrh_up_nm,\n d.history_id AS vsrh_up_nm_history_id,\n e.cd_value AS vsrh_up_sp,\n e.history_id AS vsrh_up_sp_history_id,\n f.cd_value AS vsrh_up_atten,\n f.history_id AS vsrh_up_atten_history_id,\n g.cd_value AS vsrh_down_speed,\n g.history_id AS vsrh_down_speed_history_id,\n h.cd_value AS vsrh_down_rco,\n h.history_id AS vsrh_down_rco_history_id,\n i.cd_value AS vsrh_down_nm,\n i.history_id AS vsrh_down_nm_history_id,\n j.cd_value AS vsrh_down_sp,\n j.history_id AS vsrh_down_sp_history_id,\n k.cd_value AS vsrh_down_atten,\n k.history_id AS vsrh_down_atten_history_id,\n l.cd_value AS vsrh_updated,\n l.history_id AS vsrh_updated_history_id\n FROM\n customer a,\n history.customer_data b,\n history.customer_data c,\n history.customer_data d,\n history.customer_data e,\n history.customer_data f,\n history.customer_data g,\n history.customer_data h,\n history.customer_data i,\n history.customer_data j,\n history.customer_data k,\n history.customer_data l\n WHERE\n a.cust_id=b.cd_cust_id AND\n a.cust_id=c.cd_cust_id AND\n a.cust_id=d.cd_cust_id AND\n a.cust_id=e.cd_cust_id AND\n a.cust_id=f.cd_cust_id AND\n a.cust_id=g.cd_cust_id AND\n a.cust_id=h.cd_cust_id AND\n a.cust_id=i.cd_cust_id AND\n a.cust_id=j.cd_cust_id AND\n a.cust_id=k.cd_cust_id AND\n a.cust_id=l.cd_cust_id AND\n b.cd_tag='sync_rate' AND\n c.cd_tag='sync_rate' AND\n d.cd_tag='sync_rate' AND\n e.cd_tag='sync_rate' AND\n f.cd_tag='sync_rate' AND\n g.cd_tag='sync_rate' AND\n h.cd_tag='sync_rate' AND\n i.cd_tag='sync_rate' AND\n j.cd_tag='sync_rate' AND\n k.cd_tag='sync_rate' AND\n l.cd_tag='sync_rate' AND\n b.cd_variable='upstream_speed' AND\n c.cd_variable='upstream_relative_capacity_occupation' AND\n d.cd_variable='upstream_noise_margin' AND\n e.cd_variable='upstream_signal_power' AND\n f.cd_variable='upstream_attenuation' AND\n g.cd_variable='downstream_speed' AND\n h.cd_variable='downstream_relative_capacity_occupation' AND\n i.cd_variable='downstream_noise_margin' AND\n j.cd_variable='downstream_signal_power' AND\n k.cd_variable='downstream_attenuation' AND\n l.cd_variable='sync_rate_updated' AND\n b.modified_date=c.modified_date AND\n b.modified_date=d.modified_date AND\n b.modified_date=e.modified_date AND\n b.modified_date=f.modified_date AND\n b.modified_date=g.modified_date AND\n b.modified_date=h.modified_date AND\n b.modified_date=i.modified_date AND\n b.modified_date=j.modified_date AND\n b.modified_date=k.modified_date AND\n b.modified_date=l.modified_date;\n\n-=] EXPLAIN ANALYZE of a sample query\nIn case this is hard to read in the mail program, here is a link: \nhttp://mizu-bu.org/misc/long_explain_analyze.txt\n\n \n QUERY PLAN \n\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1263.93..3417.98 rows=1 width=262) (actual \ntime=88.005..248996.948 rows=131 loops=1)\n Join Filter: (\"inner\".modified_date = \"outer\".modified_date)\n -> Nested Loop 
(cost=577.65..1482.46 rows=1 width=154) (actual \ntime=58.664..5253.873 rows=131 loops=1)\n Join Filter: (\"outer\".modified_date = \"inner\".modified_date)\n -> Nested Loop (cost=369.98..870.28 rows=1 width=128) \n(actual time=51.858..4328.108 rows=131 loops=1)\n Join Filter: (\"inner\".modified_date = \"outer\".modified_date)\n -> Nested Loop (cost=343.35..823.93 rows=1 width=116) \n(actual time=42.851..3185.995 rows=131 loops=1)\n Join Filter: (\"inner\".modified_date = \n\"outer\".modified_date)\n -> Nested Loop (cost=126.20..185.37 rows=1 \nwidth=90) (actual time=36.181..2280.245 rows=131 loops=1)\n Join Filter: (\"inner\".modified_date = \n\"outer\".modified_date)\n -> Nested Loop (cost=99.57..139.02 rows=1 \nwidth=64) (actual time=27.918..1168.061 rows=131 loops=1)\n Join Filter: (\"outer\".modified_date = \n\"inner\".modified_date)\n -> Hash Join (cost=72.94..92.67 \nrows=1 width=38) (actual time=17.769..18.572 rows=131 loops=1)\n Hash Cond: \n(\"outer\".modified_date = \"inner\".modified_date)\n -> Bitmap Heap Scan on \ncustomer_data i (cost=26.63..46.30 rows=4 width=26) (actual \ntime=8.226..8.563 rows=131 loops=1)\n Recheck Cond: \n((cd_variable = 'downstream_noise_margin'::text) AND (103 = cd_cust_id))\n Filter: (cd_tag = \n'sync_rate'::text)\n -> BitmapAnd \n(cost=26.63..26.63 rows=5 width=0) (actual time=8.172..8.172 rows=0 loops=1)\n -> Bitmap Index \nScan on cd_variable_index (cost=0.00..10.21 rows=918 width=0) (actual \ntime=6.409..6.409 rows=20981 loops=1)\n Index Cond: \n(cd_variable = 'downstream_noise_margin'::text)\n -> Bitmap Index \nScan on cd_id_index (cost=0.00..16.17 rows=2333 width=0) (actual \ntime=0.502..0.502 rows=2619 loops=1)\n Index Cond: \n(103 = cd_cust_id)\n -> Hash (cost=46.30..46.30 \nrows=4 width=12) (actual time=9.526..9.526 rows=131 loops=1)\n -> Bitmap Heap Scan on \ncustomer_data e (cost=26.63..46.30 rows=4 width=12) (actual \ntime=9.140..9.381 rows=131 loops=1)\n Recheck Cond: \n((cd_variable = 'upstream_signal_power'::text) AND (103 = cd_cust_id))\n Filter: (cd_tag = \n'sync_rate'::text)\n -> BitmapAnd \n(cost=26.63..26.63 rows=5 width=0) (actual time=9.082..9.082 rows=0 loops=1)\n -> Bitmap \nIndex Scan on cd_variable_index (cost=0.00..10.21 rows=918 width=0) \n(actual time=7.298..7.298 rows=20981 loops=1)\n Index \nCond: (cd_variable = 'upstream_signal_power'::text)\n -> Bitmap \nIndex Scan on cd_id_index (cost=0.00..16.17 rows=2333 width=0) (actual \ntime=0.502..0.502 rows=2619 loops=1)\n Index \nCond: (103 = cd_cust_id)\n -> Bitmap Heap Scan on customer_data \nc (cost=26.63..46.30 rows=4 width=26) (actual time=8.492..8.693 \nrows=131 loops=131)\n Recheck Cond: ((cd_variable = \n'upstream_relative_capacity_occupation'::text) AND (103 = cd_cust_id))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd \n(cost=26.63..26.63 rows=5 width=0) (actual time=8.446..8.446 rows=0 \nloops=131)\n -> Bitmap Index Scan on \ncd_variable_index (cost=0.00..10.21 rows=918 width=0) (actual \ntime=6.693..6.693 rows=20986 loops=131)\n Index Cond: \n(cd_variable = 'upstream_relative_capacity_occupation'::text)\n -> Bitmap Index Scan on \ncd_id_index (cost=0.00..16.17 rows=2333 width=0) (actual \ntime=0.494..0.494 rows=2619 loops=131)\n Index Cond: (103 = \ncd_cust_id)\n -> Bitmap Heap Scan on customer_data b \n(cost=26.63..46.30 rows=4 width=26) (actual time=8.216..8.405 rows=131 \nloops=131)\n Recheck Cond: ((cd_variable = \n'upstream_speed'::text) AND (103 = cd_cust_id))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd (cost=26.63..26.63 \nrows=5 width=0) 
(actual time=8.172..8.172 rows=0 loops=131)\n -> Bitmap Index Scan on \ncd_variable_index (cost=0.00..10.21 rows=918 width=0) (actual \ntime=6.417..6.417 rows=20986 loops=131)\n Index Cond: (cd_variable = \n'upstream_speed'::text)\n -> Bitmap Index Scan on \ncd_id_index (cost=0.00..16.17 rows=2333 width=0) (actual \ntime=0.495..0.495 rows=2619 loops=131)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Heap Scan on customer_data l \n(cost=217.14..637.28 rows=102 width=26) (actual time=6.653..6.843 \nrows=131 loops=131)\n Recheck Cond: ((103 = cd_cust_id) AND \n(cd_variable = 'sync_rate_updated'::text))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd (cost=217.14..217.14 rows=117 \nwidth=0) (actual time=6.618..6.618 rows=0 loops=131)\n -> Bitmap Index Scan on cd_id_index \n(cost=0.00..16.17 rows=2333 width=0) (actual time=0.485..0.485 rows=2619 \nloops=131)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Index Scan on \ncd_variable_index (cost=0.00..200.72 rows=21350 width=0) (actual \ntime=6.079..6.079 rows=20986 loops=131)\n Index Cond: (cd_variable = \n'sync_rate_updated'::text)\n -> Bitmap Heap Scan on customer_data k \n(cost=26.63..46.30 rows=4 width=12) (actual time=8.442..8.638 rows=131 \nloops=131)\n Recheck Cond: ((cd_variable = \n'downstream_attenuation'::text) AND (103 = cd_cust_id))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd (cost=26.63..26.63 rows=5 width=0) \n(actual time=8.397..8.397 rows=0 loops=131)\n -> Bitmap Index Scan on cd_variable_index \n(cost=0.00..10.21 rows=918 width=0) (actual time=6.624..6.624 rows=20986 \nloops=131)\n Index Cond: (cd_variable = \n'downstream_attenuation'::text)\n -> Bitmap Index Scan on cd_id_index \n(cost=0.00..16.17 rows=2333 width=0) (actual time=0.487..0.487 rows=2619 \nloops=131)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Heap Scan on customer_data d (cost=207.68..610.95 \nrows=98 width=26) (actual time=6.805..6.994 rows=131 loops=131)\n Recheck Cond: ((103 = cd_cust_id) AND (cd_variable = \n'upstream_noise_margin'::text))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd (cost=207.68..207.68 rows=112 width=0) \n(actual time=6.769..6.769 rows=0 loops=131)\n -> Bitmap Index Scan on cd_id_index \n(cost=0.00..16.17 rows=2333 width=0) (actual time=0.487..0.487 rows=2619 \nloops=131)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Index Scan on cd_variable_index \n(cost=0.00..191.26 rows=20360 width=0) (actual time=6.230..6.230 \nrows=20986 loops=131)\n Index Cond: (cd_variable = \n'upstream_noise_margin'::text)\n -> Nested Loop (cost=686.28..1935.49 rows=1 width=224) (actual \ntime=21.077..1860.475 rows=131 loops=131)\n -> Seq Scan on customer a (cost=0.00..5.22 rows=1 width=164) \n(actual time=0.053..0.090 rows=1 loops=131)\n Filter: (cust_id = 103)\n -> Nested Loop (cost=686.28..1930.26 rows=1 width=76) \n(actual time=21.014..1860.177 rows=131 loops=131)\n Join Filter: (\"inner\".modified_date = \"outer\".modified_date)\n -> Nested Loop (cost=472.13..1298.07 rows=1 width=50) \n(actual time=14.460..971.017 rows=131 loops=131)\n Join Filter: (\"inner\".modified_date = \n\"outer\".modified_date)\n -> Hash Join (cost=259.97..674.63 rows=1 \nwidth=38) (actual time=7.459..8.272 rows=131 loops=131)\n Hash Cond: (\"outer\".modified_date = \n\"inner\".modified_date)\n -> Bitmap Heap Scan on customer_data h \n(cost=213.66..627.06 rows=100 width=26) (actual time=7.391..7.707 \nrows=131 loops=131)\n Recheck Cond: ((103 = cd_cust_id) AND \n(cd_variable = 'downstream_relative_capacity_occupation'::text))\n Filter: (cd_tag = 
'sync_rate'::text)\n -> BitmapAnd (cost=213.66..213.66 \nrows=115 width=0) (actual time=7.355..7.355 rows=0 loops=131)\n -> Bitmap Index Scan on \ncd_id_index (cost=0.00..16.17 rows=2333 width=0) (actual \ntime=0.493..0.493 rows=2619 loops=131)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Index Scan on \ncd_variable_index (cost=0.00..197.24 rows=20926 width=0) (actual \ntime=6.809..6.809 rows=20986 loops=131)\n Index Cond: (cd_variable = \n'downstream_relative_capacity_occupation'::text)\n -> Hash (cost=46.30..46.30 rows=4 \nwidth=12) (actual time=8.253..8.253 rows=131 loops=1)\n -> Bitmap Heap Scan on customer_data \nf (cost=26.63..46.30 rows=4 width=12) (actual time=7.882..8.113 \nrows=131 loops=1)\n Recheck Cond: ((cd_variable = \n'upstream_attenuation'::text) AND (103 = cd_cust_id))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd \n(cost=26.63..26.63 rows=5 width=0) (actual time=7.832..7.832 rows=0 loops=1)\n -> Bitmap Index Scan on \ncd_variable_index (cost=0.00..10.21 rows=918 width=0) (actual \ntime=6.065..6.065 rows=20981 loops=1)\n Index Cond: \n(cd_variable = 'upstream_attenuation'::text)\n -> Bitmap Index Scan on \ncd_id_index (cost=0.00..16.17 rows=2333 width=0) (actual \ntime=0.489..0.489 rows=2619 loops=1)\n Index Cond: (103 = \ncd_cust_id)\n -> Bitmap Heap Scan on customer_data j \n(cost=212.16..622.19 rows=100 width=12) (actual time=7.092..7.280 \nrows=131 loops=17161)\n Recheck Cond: ((103 = cd_cust_id) AND \n(cd_variable = 'downstream_signal_power'::text))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd (cost=212.16..212.16 rows=114 \nwidth=0) (actual time=7.057..7.057 rows=0 loops=17161)\n -> Bitmap Index Scan on cd_id_index \n(cost=0.00..16.17 rows=2333 width=0) (actual time=0.493..0.493 rows=2619 \nloops=17161)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Index Scan on \ncd_variable_index (cost=0.00..195.74 rows=20784 width=0) (actual \ntime=6.512..6.512 rows=20986 loops=17161)\n Index Cond: (cd_variable = \n'downstream_signal_power'::text)\n -> Bitmap Heap Scan on customer_data g \n(cost=214.15..630.92 rows=101 width=26) (actual time=6.526..6.718 \nrows=131 loops=17161)\n Recheck Cond: ((103 = cd_cust_id) AND (cd_variable \n= 'downstream_speed'::text))\n Filter: (cd_tag = 'sync_rate'::text)\n -> BitmapAnd (cost=214.15..214.15 rows=116 \nwidth=0) (actual time=6.492..6.492 rows=0 loops=17161)\n -> Bitmap Index Scan on cd_id_index \n(cost=0.00..16.17 rows=2333 width=0) (actual time=0.486..0.486 rows=2619 \nloops=17161)\n Index Cond: (103 = cd_cust_id)\n -> Bitmap Index Scan on cd_variable_index \n(cost=0.00..197.73 rows=21067 width=0) (actual time=5.956..5.956 \nrows=20986 loops=17161)\n Index Cond: (cd_variable = \n'downstream_speed'::text)\n Total runtime: 248997.571 ms\n(114 rows)\n\n", "msg_date": "Fri, 15 Aug 2008 14:36:20 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Optimizing a VIEW" }, { "msg_contents": "On Aug 15, 2008, at 1:36 PM, Madison Kelly wrote:\n> The 'cust_id' references the customer that the given data belongs \n> to. The reason for this \"data bucket\" (does this structure have a \n> proper name?) is that the data I need to store on a give customer \n> is quite variable and outside of my control. As it is, there is \n> about 400 different variable/value pairs I need to store per customer.\n\n\nIt's called Entity-Attribute-Value, and it's performance is pretty \nmuch guaranteed to suck for any kind of a large dataset. 
The problem \nis that you're storing a MASSIVE amount of extra information for \nevery single value. Consider:\n\nIf each data point was just a field in a table, then even if we left \ncd_value as text, each data point would consume 4 bytes* + 1 byte per \ncharacter (I'm assuming you don't need extra UTF8 chars or anything). \nOf course if you know you're only storing numbers or the like then \nyou can make that even more efficient.\n\n* In 8.3, the text field overhead could be as low as 1 byte if the \nfield is small enough.\n\nOTOH, your table is going to 32+24 bytes per row just for the per-row \noverhead, ints and timestamps. Each text field will have 1 or 4 bytes \nin overhead, then you have to store the actual data. Realistically, \nyou're looking at 60+ bytes per data point, as opposed to maybe 15, \nor even down to 4 if you know you're storing an int.\n\nNow figure out what that turns into if you have 100 data points per \nminute. It doesn't take very long until you have a huge pile of data \nyou're trying to deal with. (As an aside, I once consulted with a \ncompany that wanted to do this... they wanted to store about 400 data \npoints from about 1000 devices on a 5 minute interval. That worked \nout to something like 5GB per day, just for the EAV table. Just \nwasn't going to scale...)\n\nSo, back to your situation... there's several things you can do that \nwill greatly improve things.\n\nIdentify data points that are very common and don't use EAV to store \nthem. Instead, store them as regular fields in a table (and don't use \ntext if at all possible).\n\nYou need to trim down your EAV table. Throw out the added/modified \ninfo; there's almost certainly no reason to store that *per data \npoint*. Get rid of cd_id; there should be a natural PK you can use, \nand you certainly don't want anything else referring to this table \n(which is a big reason to use a surrogate key).\n\ncd_variable and cd_tag need to be ints that point at other tables. \nFor that matter, do you really need to tag each *data point*? \nProbably not...\n\nFinally, if you have a defined set of points that you need to report \non, create a materialized view that has that information.\n\nBTW, it would probably be better to store data either in the main \ntable, or the history table, but not both places.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Sat, 16 Aug 2008 14:19:03 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "On Sat, Aug 16, 2008 at 2:19 PM, Decibel! <[email protected]> wrote:\n> You need to trim down your EAV table.\n\nEgads! I'd say completely get rid of this beast and redesign it\naccording to valid relational concepts.\n\nThis post pretty much explains the whole issue with EAV:\nhttp://groups.google.com/group/microsoft.public.sqlserver.programming/browse_thread/thread/df5bb99b3eaadfa9/6a160e5027ce3a80?lnk=st&q=eav#6a160e5027ce3a80\n\nEAV is evil. Period.\n", "msg_date": "Sat, 16 Aug 2008 20:33:06 -0500", "msg_from": "\"=?UTF-8?Q?Rodrigo_E._De_Le=C3=B3n_Plicet?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "On Fri, Aug 15, 2008 at 1:36 PM, Madison Kelly <[email protected]> wrote:\n> The 'cust_id' references the customer that the given data belongs to. The\n> reason for this \"data bucket\" (does this structure have a proper name?) 
is\n> that the data I need to store on a give customer is quite variable and\n> outside of my control. As it is, there is about 400 different variable/value\n> pairs I need to store per customer.\n\nFor you very specific case, I recommend you check out contrib/hstore:\nhttp://www.postgresql.org/docs/current/static/hstore.html\n", "msg_date": "Sat, 16 Aug 2008 20:36:21 -0500", "msg_from": "\"=?UTF-8?Q?Rodrigo_E._De_Le=C3=B3n_Plicet?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "On Sun, Aug 17, 2008 at 7:06 AM, Rodrigo E. De León Plicet <\[email protected]> wrote:\n\n> On Fri, Aug 15, 2008 at 1:36 PM, Madison Kelly <[email protected]> wrote:\n> > The 'cust_id' references the customer that the given data belongs to.\n> The\n> > reason for this \"data bucket\" (does this structure have a proper name?)\n> is\n> > that the data I need to store on a give customer is quite variable and\n> > outside of my control. As it is, there is about 400 different\n> variable/value\n> > pairs I need to store per customer.\n>\n> For you very specific case, I recommend you check out contrib/hstore:\n> http://www.postgresql.org/docs/current/static/hstore.html\n>\n>\nAwesome!!!! Any comments on the performance of hstore?\n\nBest regards,\n-- \ngurjeet[.singh]@EnterpriseDB.com\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.com\n\nEnterpriseDB http://www.enterprisedb.com\n\nMail sent from my BlackLaptop device\n\nOn Sun, Aug 17, 2008 at 7:06 AM, Rodrigo E. De León Plicet <[email protected]> wrote:\nOn Fri, Aug 15, 2008 at 1:36 PM, Madison Kelly <[email protected]> wrote:\n>  The 'cust_id' references the customer that the given data belongs to. The\n> reason for this \"data bucket\" (does this structure have a proper name?) is\n> that the data I need to store on a give customer is quite variable and\n> outside of my control. As it is, there is about 400 different variable/value\n> pairs I need to store per customer.\n\nFor you very specific case, I recommend you check out contrib/hstore:\nhttp://www.postgresql.org/docs/current/static/hstore.html\nAwesome!!!! Any comments on the performance of hstore?Best regards,-- gurjeet[.singh]@EnterpriseDB.com\nsingh.gurjeet@{ gmail | hotmail | indiatimes | yahoo }.comEnterpriseDB http://www.enterprisedb.comMail sent from my BlackLaptop device", "msg_date": "Sun, 17 Aug 2008 07:49:28 +0530", "msg_from": "\"Gurjeet Singh\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "Decibel! wrote:\n> On Aug 15, 2008, at 1:36 PM, Madison Kelly wrote:\n>> The 'cust_id' references the customer that the given data belongs to. \n>> The reason for this \"data bucket\" (does this structure have a proper \n>> name?) is that the data I need to store on a give customer is quite \n>> variable and outside of my control. As it is, there is about 400 \n>> different variable/value pairs I need to store per customer.\n> \n> \n> It's called Entity-Attribute-Value, and it's performance is pretty much \n> guaranteed to suck for any kind of a large dataset. The problem is that \n> you're storing a MASSIVE amount of extra information for every single \n> value. Consider:\n> \n> If each data point was just a field in a table, then even if we left \n> cd_value as text, each data point would consume 4 bytes* + 1 byte per \n> character (I'm assuming you don't need extra UTF8 chars or anything). 
Of \n> course if you know you're only storing numbers or the like then you can \n> make that even more efficient.\n> \n> * In 8.3, the text field overhead could be as low as 1 byte if the field \n> is small enough.\n> \n> OTOH, your table is going to 32+24 bytes per row just for the per-row \n> overhead, ints and timestamps. Each text field will have 1 or 4 bytes in \n> overhead, then you have to store the actual data. Realistically, you're \n> looking at 60+ bytes per data point, as opposed to maybe 15, or even \n> down to 4 if you know you're storing an int.\n> \n> Now figure out what that turns into if you have 100 data points per \n> minute. It doesn't take very long until you have a huge pile of data \n> you're trying to deal with. (As an aside, I once consulted with a \n> company that wanted to do this... they wanted to store about 400 data \n> points from about 1000 devices on a 5 minute interval. That worked out \n> to something like 5GB per day, just for the EAV table. Just wasn't going \n> to scale...)\n> \n> So, back to your situation... there's several things you can do that \n> will greatly improve things.\n> \n> Identify data points that are very common and don't use EAV to store \n> them. Instead, store them as regular fields in a table (and don't use \n> text if at all possible).\n> \n> You need to trim down your EAV table. Throw out the added/modified info; \n> there's almost certainly no reason to store that *per data point*. Get \n> rid of cd_id; there should be a natural PK you can use, and you \n> certainly don't want anything else referring to this table (which is a \n> big reason to use a surrogate key).\n> \n> cd_variable and cd_tag need to be ints that point at other tables. For \n> that matter, do you really need to tag each *data point*? Probably not...\n> \n> Finally, if you have a defined set of points that you need to report on, \n> create a materialized view that has that information.\n> \n> BTW, it would probably be better to store data either in the main table, \n> or the history table, but not both places.\n\nThis is a very long and thoughtful reply, thank you very kindly.\n\nTruth be told, I sort of expected this would be what I had to do. I \nthink I asked this more in hoping that there might be some \"magic\" I \ndidn't know about, but I see now that's not the case. :)\n\nAs my data points grow to 500,000+, the time it took to return these \nresults grew to well over 10 minutes on a decent server and the DB size \nwas growing rapidly, as you spoke of.\n\nSo I did just as you suggested and took the variable names I knew about \nspecifically and created a table for them. These are the ones that are \nbeing most often updated (hourly per customer) and made each column an \n'int' or 'real' where possible and ditched the tracking of the \nadding/modifying user and time stamp. I added those out of habit, more \nthan anything. This data will always come from a system app though, so...\n\nGiven that my DB is in development and how very long and intensive it \nwould have been to pull out the existing data, I have started over and \nam now gathering new data. 
In a week or so I should have the same amount \nof data as I had before and I will be able to do a closer comparison test.\n\nHowever, I already suspect the growth of the database will be \nsubstantially slower and the queries will return substantially faster.\n\nThank you again!\n\nMadi\n", "msg_date": "Sun, 17 Aug 2008 11:21:08 -0400", "msg_from": "Madison Kelly <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "On Fri, 15 Aug 2008, Madison Kelly wrote:\n> Below I will post the VIEW and a sample of the query's EXPLAIN ANALYZE. \n> Thanks for any tips/help/clue-stick-beating you may be able to share!\n\nThis query looks incredibly expensive:\n\n> SELECT\n...\n> FROM\n> customer a,\n> history.customer_data b,\n> history.customer_data c,\n> history.customer_data d,\n> history.customer_data e,\n> history.customer_data f,\n> history.customer_data g,\n> history.customer_data h,\n> history.customer_data i,\n> history.customer_data j,\n> history.customer_data k,\n> history.customer_data l\n> WHERE\n> a.cust_id=b.cd_cust_id AND\n> a.cust_id=c.cd_cust_id AND\n> a.cust_id=d.cd_cust_id AND\n> a.cust_id=e.cd_cust_id AND\n> a.cust_id=f.cd_cust_id AND\n> a.cust_id=g.cd_cust_id AND\n> a.cust_id=h.cd_cust_id AND\n> a.cust_id=i.cd_cust_id AND\n> a.cust_id=j.cd_cust_id AND\n> a.cust_id=k.cd_cust_id AND\n> a.cust_id=l.cd_cust_id AND\n...\n\nI would refactor this significantly, so that instead of returning a wide \nresult, it would return more than one row per customer. Just do a single \njoin between customer and history.customer_data - it will run much faster.\n\nMatthew\n\n-- \nHere we go - the Fairy Godmother redundancy proof.\n -- Computer Science Lecturer\n", "msg_date": "Mon, 18 Aug 2008 12:06:03 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "On Aug 16, 2008, at 9:19 PM, Gurjeet Singh wrote:\n> For you very specific case, I recommend you check out contrib/hstore:\n> http://www.postgresql.org/docs/current/static/hstore.html\n>\n>\n> Awesome!!!! Any comments on the performance of hstore?\n\nI've looked at it but haven't actually used it. One thing I wish it \ndid was to keep a catalog somewhere of the \"names\" that it's seen so \nthat it wasn't storing them as in-line text. If you have even \nmoderate-length names and are storing small values you quickly end up \nwasting a ton of space.\n\nBTW, now that you can build arrays of composite types, that might be \nan easy way to deal with this stuff. Create a composite type of \n(name_id, value) and store that in an array.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 20 Aug 2008 11:49:55 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "On Aug 17, 2008, at 10:21 AM, Madison Kelly wrote:\n> Truth be told, I sort of expected this would be what I had to do. I \n> think I asked this more in hoping that there might be some \"magic\" \n> I didn't know about, but I see now that's not the case. :)\n>\n> As my data points grow to 500,000+, the time it took to return \n> these results grew to well over 10 minutes on a decent server and \n> the DB size was growing rapidly, as you spoke of.\n>\n> So I did just as you suggested and took the variable names I knew \n> about specifically and created a table for them. 
These are the ones \n> that are being most often updated (hourly per customer) and made \n> each column an 'int' or 'real' where possible and ditched the \n> tracking of the adding/modifying user and time stamp. I added those \n> out of habit, more than anything. This data will always come from a \n> system app though, so...\n>\n> Given that my DB is in development and how very long and intensive \n> it would have been to pull out the existing data, I have started \n> over and am now gathering new data. In a week or so I should have \n> the same amount of data as I had before and I will be able to do a \n> closer comparison test.\n>\n> However, I already suspect the growth of the database will be \n> substantially slower and the queries will return substantially faster.\n\n\nI strongly recommend you also re-think using EAV at all for this. It \nplain and simple does not scale well. I won't go so far as to say it \ncan never be used (we're actually working on one right now, but it \nwill only be used to occasionally pull up single entities), but you \nhave to be really careful with it. I don't see it working very well \nfor what it sounds like you're trying to do.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Wed, 20 Aug 2008 11:51:31 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW" }, { "msg_contents": "Decibel! <[email protected]> writes:\n> On Aug 16, 2008, at 9:19 PM, Gurjeet Singh wrote:\n>> Awesome!!!! Any comments on the performance of hstore?\n\n> I've looked at it but haven't actually used it. One thing I wish it \n> did was to keep a catalog somewhere of the \"names\" that it's seen so \n> that it wasn't storing them as in-line text. If you have even \n> moderate-length names and are storing small values you quickly end up \n> wasting a ton of space.\n\n> BTW, now that you can build arrays of composite types, that might be \n> an easy way to deal with this stuff. Create a composite type of \n> (name_id, value) and store that in an array.\n\nIf you're worried about storage space, I wouldn't go for arrays of\ncomposite :-(. The tuple header overhead is horrendous, almost\ncertainly a lot worse than hstore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Aug 2008 14:18:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW " }, { "msg_contents": "On Aug 20, 2008, at 1:18 PM, Tom Lane wrote:\n> If you're worried about storage space, I wouldn't go for arrays of\n> composite :-(. The tuple header overhead is horrendous, almost\n> certainly a lot worse than hstore.\n\n\nOh holy cow, I didn't realize we had a big header in there. Is that \nto allow for changing the definition of the composite type?\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Thu, 21 Aug 2008 23:53:58 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Optimizing a VIEW " } ]
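A minimal sketch of the refactoring settled on in the thread above: the frequently updated sync-rate variables move out of the EAV table into typed columns, and only genuinely ad-hoc variables stay in a slimmed-down attribute/value table. The variable names are taken from the plan output earlier in the thread; the table names, key layout and exact column types are illustrative assumptions, since the final schema is never shown.

CREATE TABLE customer_sync_stats (
    cust_id        integer   NOT NULL REFERENCES customer (cust_id),
    modified_date  timestamp NOT NULL,
    downstream_speed                         integer,
    upstream_speed                           integer,
    downstream_noise_margin                  real,
    upstream_noise_margin                    real,
    downstream_attenuation                   real,
    upstream_attenuation                     real,
    downstream_signal_power                  real,
    upstream_signal_power                    real,
    downstream_relative_capacity_occupation  real,
    upstream_relative_capacity_occupation    real,
    PRIMARY KEY (cust_id, modified_date)
);

-- Rarely used variables can stay in a much smaller attribute/value table,
-- with the variable name factored out into a lookup table instead of being
-- repeated as text on every row.
CREATE TABLE variable_name (
    var_id   serial PRIMARY KEY,
    var_name text   NOT NULL UNIQUE
);

CREATE TABLE customer_data_misc (
    cust_id        integer   NOT NULL REFERENCES customer (cust_id),
    var_id         integer   NOT NULL REFERENCES variable_name (var_id),
    modified_date  timestamp NOT NULL,
    value          text      NOT NULL,
    PRIMARY KEY (cust_id, var_id, modified_date)
);

-- The repeated self-joins of customer_data in the original VIEW then
-- collapse to a single scan:
SELECT cust_id, modified_date,
       downstream_speed, upstream_speed,
       downstream_noise_margin, upstream_noise_margin
  FROM customer_sync_stats
 WHERE cust_id = 103
 ORDER BY modified_date;

With one row per customer and timestamp instead of one row per data point, the per-row overhead described above is paid once rather than once per variable, and the reporting query no longer has to join customer_data against itself on modified_date repeatedly.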
[ { "msg_contents": "Hi,\n\n\nFollowing is the Query :\nSELECT sum(id), sum(cd), sum(ad)\n FROM table1 a , table2 b cross join table3 c\n WHERE a.nkey = b.key\n AND a.dkey = c.key\n AND c.date = '2008-02-01'\n AND b.id = 999 ;\n\n\nWe have fired this on our production system which is postgres 8.1.3, and got\nthe following explain analyse of it\n\n Aggregate (cost=11045.52..11045.53 rows=1 width=24) (actual\ntime=79.290..79.291 rows=1 loops=1)\n -> Nested Loop (cost=49.98..11043.42 rows=279 width=24) (actual\ntime=1.729..50.498 rows=10473 loops=1)\n -> Nested Loop (cost=0.00..6.05 rows=1 width=8) (actual\ntime=0.028..0.043 rows=1 loops=1)\n -> Index Scan using rnididx on table2 b (cost=0.00..3.02\nrows=1 width=4) (actual time=0.011..0.015 rows=1 loops=1)\n Index Cond: (id = 999)\n -> Index Scan using rddtidx on table3 c (cost=0.00..3.02\nrows=1 width=4) (actual time=0.010..0.016 rows=1 loops=1)\n Index Cond: (date = '2008-02-01 00:00:00'::timestamp\nwithout time zone)\n -> Bitmap Heap Scan on table1 a (cost=49.98..10954.93 rows=5496\nwidth=32) (actual time=1.694..19.006 rows=10473 loops=1)\n Recheck Cond: ((a.nkey = \"outer\".\"key\") AND (a.dkey =\n\"outer\".\"key\"))\n -> Bitmap Index Scan on rndateidx (cost=0.00..49.98\nrows=5496 width=0) (actual time=1.664..1.664 rows=10473 loops=1)\n Index Cond: ((a.nkey = \"outer\".\"key\") AND (a.dkey =\n\"outer\".\"key\"))\n Total runtime: 79.397 ms\n\nTime: 80.752 ms\n\n\n\nSame Query when we fire on postgres 8.3.3, following is the explain analyse\n\nQUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1171996.35..1171996.36 rows=1 width=24) (actual\ntime=6360.783..6360.785 rows=1 loops=1)\n -> Nested Loop (cost=0.00..1171994.28 rows=275 width=24) (actual\ntime=3429.309..6330.424 rows=10473 loops=1)\n Join Filter: (a.nkey = b.key)\n -> Index Scan using rnididx on table2 b (cost=0.00..4.27 rows=1\nwidth=4) (actual time=0.030..0.033 rows=1 loops=1)\n Index Cond: (id = 999)\n -> Nested Loop (cost=0.00..1169411.17 rows=206308 width=28)\n(actual time=0.098..4818.450 rows=879480 loops=1)\n -> Index Scan using rddtidx on table1 c (cost=0.00..4.27\nrows=1 width=4) (actual time=0.031..0.034 rows=1 loops=1)\n Index Cond: (date = '2008-02-01 00:00:00'::timestamp\nwithout time zone)\n -> Index Scan using rdnetidx on table1 a\n(cost=0.00..1156050.51 rows=1068511 width=32) (actual time=0.047..1732.229\nrows=879480 loops=1)\n Index Cond: (a.dkey = c.key)\n Total runtime: 6360.978 ms\n\n\nThe Query on postgres 8.1.3 use to take only 80.752 ms is now taking\n6364.950 ms.\n\nWe have done vacuum analyse on all the tables.\n\nCan anybody helpout over here ... was may b wrong... 
and why the query seems\nto take time on postgres 8.3.3.\n\nIs it 8.3.3 problem or its cross join problem on 8.3.3\n\nThanx\n\n-- \nRegards\nGauri\n\nHi,Following is the Query :SELECT sum(id), sum(cd), sum(ad)       FROM table1 a , table2 b cross join table3 c        WHERE a.nkey = b.key             AND a.dkey = c.key             AND c.date = '2008-02-01'\n             AND b.id = 999 ;We have fired this on our production system which is postgres 8.1.3, and got the following explain analyse of it Aggregate  (cost=11045.52..11045.53 rows=1 width=24) (actual time=79.290..79.291 rows=1 loops=1)\n   ->  Nested Loop  (cost=49.98..11043.42 rows=279 width=24) (actual time=1.729..50.498 rows=10473 loops=1)         ->  Nested Loop  (cost=0.00..6.05 rows=1 width=8) (actual time=0.028..0.043 rows=1 loops=1)\n               ->  Index Scan using rnididx on table2 b  (cost=0.00..3.02 rows=1 width=4) (actual time=0.011..0.015 rows=1 loops=1)                     Index Cond: (id = 999)               ->  Index Scan using rddtidx on table3 c  (cost=0.00..3.02 rows=1 width=4) (actual time=0.010..0.016 rows=1 loops=1)\n                     Index Cond: (date = '2008-02-01 00:00:00'::timestamp without time zone)         ->  Bitmap Heap Scan on table1 a  (cost=49.98..10954.93 rows=5496 width=32) (actual time=1.694..19.006 rows=10473 loops=1)\n               Recheck Cond: ((a.nkey = \"outer\".\"key\") AND (a.dkey = \"outer\".\"key\"))               ->  Bitmap Index Scan on rndateidx  (cost=0.00..49.98 rows=5496 width=0) (actual time=1.664..1.664 rows=10473 loops=1)\n                     Index Cond: ((a.nkey = \"outer\".\"key\") AND (a.dkey = \"outer\".\"key\")) Total runtime: 79.397 msTime: 80.752 msSame Query when we fire on postgres 8.3.3, following is the explain analyse \n                                                                                    QUERY PLAN                               -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate  (cost=1171996.35..1171996.36 rows=1 width=24) (actual time=6360.783..6360.785 rows=1 loops=1)   ->  Nested Loop  (cost=0.00..1171994.28 rows=275 width=24) (actual time=3429.309..6330.424 rows=10473 loops=1)\n         Join Filter: (a.nkey = b.key)         ->  Index Scan using rnididx on table2 b  (cost=0.00..4.27 rows=1 width=4) (actual time=0.030..0.033 rows=1 loops=1)               Index Cond: (id = 999)         ->  Nested Loop  (cost=0.00..1169411.17 rows=206308 width=28) (actual time=0.098..4818.450 rows=879480 loops=1)\n               ->  Index Scan using rddtidx on table1 c  (cost=0.00..4.27 rows=1 width=4) (actual time=0.031..0.034 rows=1 loops=1)                     Index Cond: (date = '2008-02-01 00:00:00'::timestamp without time zone)\n               ->  Index Scan using rdnetidx on table1 a  (cost=0.00..1156050.51 rows=1068511 width=32) (actual time=0.047..1732.229 rows=879480 loops=1)                     Index Cond: (a.dkey = c.key) Total runtime: 6360.978 ms\nThe Query on postgres 8.1.3 use to take only 80.752 ms is now taking 6364.950 ms.We have done vacuum analyse on all the tables.Can anybody helpout over here ... was may b wrong... 
and why the query seems to take time on postgres 8.3.3.\nIs it 8.3.3 problem or its cross join problem on 8.3.3Thanx-- RegardsGauri", "msg_date": "Mon, 18 Aug 2008 19:07:33 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Cross Join Problem" }, { "msg_contents": "\"Gauri Kanekar\" <[email protected]> writes:\n> Following is the Query :\n> SELECT sum(id), sum(cd), sum(ad)\n> FROM table1 a , table2 b cross join table3 c\n> WHERE a.nkey = b.key\n> AND a.dkey = c.key\n> AND c.date = '2008-02-01'\n> AND b.id = 999 ;\n\n\n> We have fired this on our production system which is postgres 8.1.3, and got\n> the following explain analyse of it\n\n> Aggregate (cost=11045.52..11045.53 rows=1 width=24) (actual\n> time=79.290..79.291 rows=1 loops=1)\n> -> Nested Loop (cost=49.98..11043.42 rows=279 width=24) (actual\n> time=1.729..50.498 rows=10473 loops=1)\n> -> Nested Loop (cost=0.00..6.05 rows=1 width=8) (actual\n> time=0.028..0.043 rows=1 loops=1)\n> -> Index Scan using rnididx on table2 b (cost=0.00..3.02\n> rows=1 width=4) (actual time=0.011..0.015 rows=1 loops=1)\n> Index Cond: (id = 999)\n> -> Index Scan using rddtidx on table3 c (cost=0.00..3.02\n> rows=1 width=4) (actual time=0.010..0.016 rows=1 loops=1)\n> Index Cond: (date = '2008-02-01 00:00:00'::timestamp\n> without time zone)\n> -> Bitmap Heap Scan on table1 a (cost=49.98..10954.93 rows=5496\n> width=32) (actual time=1.694..19.006 rows=10473 loops=1)\n> Recheck Cond: ((a.nkey = \"outer\".\"key\") AND (a.dkey =\n> \"outer\".\"key\"))\n> -> Bitmap Index Scan on rndateidx (cost=0.00..49.98\n> rows=5496 width=0) (actual time=1.664..1.664 rows=10473 loops=1)\n> Index Cond: ((a.nkey = \"outer\".\"key\") AND (a.dkey =\n> \"outer\".\"key\"))\n> Total runtime: 79.397 ms\n\nNo PG release since 7.3 would have voluntarily planned that query that\nway. Maybe you were using join_collapse_limit = 1 to force the join\norder?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Aug 2008 10:02:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross Join Problem " }, { "msg_contents": "[ please keep the list cc'd for the archives' sake ]\n\n\"Gauri Kanekar\" <[email protected]> writes:\n> On Mon, Aug 18, 2008 at 7:32 PM, Tom Lane <[email protected]> wrote:\n>> No PG release since 7.3 would have voluntarily planned that query that\n>> way. Maybe you were using join_collapse_limit = 1 to force the join\n>> order?\n\n> Yes, We have set join_collapse_limit set to 1.\n\nAh, so really your question is why join_collapse_limit isn't working as\nyou expect. That code changed quite a bit in 8.2, and the way it works\nnow is that the critical decision occurs while deciding whether to fold\nthe cross-join (a sub-problem of size 2) into the top-level join\nproblem. Which is a decision that's going to be driven by\nfrom_collapse_limit not join_collapse_limit.\n\nSo one way you could make it work is to reduce from_collapse_limit to\nless than 3, but I suspect you'd find that that has too many bad\nconsequences for other queries. What's probably best is to write the\nproblem query like this:\n\n\tFROM table1 a cross join ( table2 b cross join table3 c )\n\nwhich will cause join_collapse_limit to be the relevant number at both\nsteps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 18 Aug 2008 11:20:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Cross Join Problem " }, { "msg_contents": "Thanx alot... 
its solved my problem\n\nOn Mon, Aug 18, 2008 at 8:50 PM, Tom Lane <[email protected]> wrote:\n\n> [ please keep the list cc'd for the archives' sake ]\n>\n> \"Gauri Kanekar\" <[email protected]> writes:\n> > On Mon, Aug 18, 2008 at 7:32 PM, Tom Lane <[email protected]> wrote:\n> >> No PG release since 7.3 would have voluntarily planned that query that\n> >> way. Maybe you were using join_collapse_limit = 1 to force the join\n> >> order?\n>\n> > Yes, We have set join_collapse_limit set to 1.\n>\n> Ah, so really your question is why join_collapse_limit isn't working as\n> you expect. That code changed quite a bit in 8.2, and the way it works\n> now is that the critical decision occurs while deciding whether to fold\n> the cross-join (a sub-problem of size 2) into the top-level join\n> problem. Which is a decision that's going to be driven by\n> from_collapse_limit not join_collapse_limit.\n>\n> So one way you could make it work is to reduce from_collapse_limit to\n> less than 3, but I suspect you'd find that that has too many bad\n> consequences for other queries. What's probably best is to write the\n> problem query like this:\n>\n>        FROM table1 a cross join ( table2 b cross join table3 c )\n>\n> which will cause join_collapse_limit to be the relevant number at both\n> steps.\n>\n>             regards, tom lane\n>\n\n\n\n-- \nRegards\nGauri\n\n", "msg_date": "Tue, 19 Aug 2008 19:04:17 +0530", "msg_from": "\"Gauri Kanekar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Cross Join Problem" } ]
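For reference, the rewrite suggested above applied to the query that started this thread: the inner pair is parenthesized so that join_collapse_limit, rather than from_collapse_limit, is the setting that governs whether the cross join gets folded into the top-level join problem. This is only a sketch of the suggested form, using the placeholder table and column names from the first message; it is not a statement that was run in the thread.

SELECT sum(id), sum(cd), sum(ad)
  FROM table1 a
       CROSS JOIN ( table2 b CROSS JOIN table3 c )
 WHERE a.nkey = b.key
   AND a.dkey = c.key
   AND c.date = '2008-02-01'
   AND b.id = 999;

Written this way, join_collapse_limit = 1 continues to pin the join order at both levels, and from_collapse_limit can stay at its default for the benefit of other queries.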
[ { "msg_contents": "Hi,\n\nI run this query:\n\nselect max(a.\"user\"), b.category, count(1) from result a, \ndomain_categories b where a.\"domain\" = b.\"domain\" group by b.category;\n\nthe table result contains all websites a user visited. And the table \ndomain_categories contains all categories a domain is in.\nresult has 20 Mio rows and domain_categories has about 12 Mio. There \nare 500.000 different users.\n\nI have indexes on result.domain, domain_categories.domain, \nresult.user, domain_categories.category. Clustered result on user and \ndomain_categories on domain.\n\nexplain analyze says (limited to one user with id 1337):\n\n\"HashAggregate (cost=2441577.16..2441614.72 rows=2504 width=8) \n(actual time=94667.335..94671.508 rows=3361 loops=1)\"\n\" -> Merge Join (cost=2119158.02..2334105.00 rows=14329622 width=8) \n(actual time=63559.938..94621.557 rows=36308 loops=1)\"\n\" Merge Cond: (a.domain = b.domain)\"\n\" -> Sort (cost=395.52..405.49 rows=3985 width=8) (actual \ntime=0.189..0.211 rows=19 loops=1)\"\n\" Sort Key: a.domain\"\n\" Sort Method: quicksort Memory: 27kB\"\n\" -> Index Scan using result_user_idx on result a \n(cost=0.00..157.21 rows=3985 width=8) (actual time=0.027..0.108 \nrows=61 loops=1)\"\n\" Index Cond: (\"user\" = 1337)\"\n\" -> Materialize (cost=2118752.28..2270064.64 rows=12104989 \nwidth=8) (actual time=46460.599..82336.116 rows=12123161 loops=1)\"\n\" -> Sort (cost=2118752.28..2149014.75 rows=12104989 \nwidth=8) (actual time=46460.592..59595.851 rows=12104989 loops=1)\"\n\" Sort Key: b.domain\"\n\" Sort Method: external sort Disk: 283992kB\"\n\" -> Seq Scan on domain_categories b \n(cost=0.00..198151.89 rows=12104989 width=8) (actual \ntime=14.352..22572.869 rows=12104989 loops=1)\"\n\"Total runtime: 94817.058 ms\"\n\nThis is running on a pretty small server with 1gb of ram and a slow \nsata hd. Shared_buffers is 312mb, max_fsm_pages = 153600. Everything \nelse is commented out. Postgresql v8.3.3. Operating system Ubuntu 8.04.\n\nIt would be great if someone could help improve this query. This is \nfor a research project at my university.\n\nThanks in advance,\n\nMoritz\n\n", "msg_date": "Mon, 18 Aug 2008 16:08:44 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query with a lot of data" }, { "msg_contents": "On Mon, 18 Aug 2008, Moritz Onken wrote:\n> I have indexes on result.domain, domain_categories.domain, result.user, \n> domain_categories.category. Clustered result on user and domain_categories on \n> domain.\n\n> \" -> Materialize (cost=2118752.28..2270064.64 rows=12104989 width=8) \n> (actual time=46460.599..82336.116 rows=12123161 loops=1)\"\n> \" -> Sort (cost=2118752.28..2149014.75 rows=12104989 width=8) \n> (actual time=46460.592..59595.851 rows=12104989 loops=1)\"\n> \" Sort Key: b.domain\"\n> \" Sort Method: external sort Disk: 283992kB\"\n> \" -> Seq Scan on domain_categories b \n> (cost=0.00..198151.89 rows=12104989 width=8) (actual time=14.352..22572.869 \n> rows=12104989 loops=1)\"\n\nThis is weird, given you say you have clustered domain_categories on \ndomain. Have you analysed? You should be able to run:\n\nEXPLAIN SELECT * from domain_categories ORDER BY domain\n\nand have it say \"Index scan\" instead of \"Seq Scan followed by disc sort)\".\n\nMatthew\n\n-- \nPatron: \"I am looking for a globe of the earth.\"\nLibrarian: \"We have a table-top model over here.\"\nPatron: \"No, that's not good enough. 
Don't you have a life-size?\"\nLibrarian: (pause) \"Yes, but it's in use right now.\"\n", "msg_date": "Mon, 18 Aug 2008 15:30:42 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 18.08.2008 um 16:30 schrieb Matthew Wakeling:\n\n> On Mon, 18 Aug 2008, Moritz Onken wrote:\n>> I have indexes on result.domain, domain_categories.domain, \n>> result.user, domain_categories.category. Clustered result on user \n>> and domain_categories on domain.\n>\n>> \" -> Materialize (cost=2118752.28..2270064.64 \n>> rows=12104989 width=8) (actual time=46460.599..82336.116 \n>> rows=12123161 loops=1)\"\n>> \" -> Sort (cost=2118752.28..2149014.75 rows=12104989 \n>> width=8) (actual time=46460.592..59595.851 rows=12104989 loops=1)\"\n>> \" Sort Key: b.domain\"\n>> \" Sort Method: external sort Disk: 283992kB\"\n>> \" -> Seq Scan on domain_categories b \n>> (cost=0.00..198151.89 rows=12104989 width=8) (actual \n>> time=14.352..22572.869 rows=12104989 loops=1)\"\n>\n> This is weird, given you say you have clustered domain_categories on \n> domain. Have you analysed? You should be able to run:\n>\n> EXPLAIN SELECT * from domain_categories ORDER BY domain\n>\n> and have it say \"Index scan\" instead of \"Seq Scan followed by disc \n> sort)\".\n>\n> Matthew\n>\n\nThanks, the index was created but I forgot to run analyze again on \nthat table.\n\nI had a little mistake in my previous sql query. The corrected version \nis this:\nexplain analyze select a.\"user\", b.category, count(1) from result a, \ndomain_categories b where a.\"domain\" = b.\"domain\" and a.\"user\" = 1337 \ngroup by a.\"user\", b.category;\n\n(notice the additional group by column).\n\nexplain analyze:\n\n\n\"HashAggregate (cost=817397.78..817428.92 rows=2491 width=8) (actual \ntime=42874.339..42878.419 rows=3361 loops=1)\"\n\" -> Merge Join (cost=748.47..674365.50 rows=19070970 width=8) \n(actual time=15702.449..42829.388 rows=36308 loops=1)\"\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on \ndomain_categories b (cost=0.00..391453.79 rows=12105014 width=8) \n(actual time=39.018..30166.349 rows=12104989 loops=1)\"\n\" -> Sort (cost=395.52..405.49 rows=3985 width=8) (actual \ntime=0.188..32.345 rows=36309 loops=1)\"\n\" Sort Key: a.domain\"\n\" Sort Method: quicksort Memory: 27kB\"\n\" -> Index Scan using result_user_idx on result a \n(cost=0.00..157.21 rows=3985 width=8) (actual time=0.021..0.101 \nrows=61 loops=1)\"\n\" Index Cond: (\"user\" = 1337)\"\n\"Total runtime: 42881.382 ms\"\n\nThis is still very slow...\n\n\n", "msg_date": "Mon, 18 Aug 2008 16:43:16 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "On Mon, 18 Aug 2008, Moritz Onken wrote:\n> \"HashAggregate (cost=817397.78..817428.92 rows=2491 width=8) (actual time=42874.339..42878.419 rows=3361 loops=1)\"\n> \" -> Merge Join (cost=748.47..674365.50 rows=19070970 width=8) (actual > time=15702.449..42829.388 rows=36308 loops=1)\"\n> \" Merge Cond: (b.domain = a.domain)\"\n> \" -> Index Scan using domain_categories_domain on domain_categories b > (cost=0.00..391453.79 rows=12105014 width=8) (actual time=39.018..30166.349 > rows=12104989 loops=1)\"\n> \" -> Sort (cost=395.52..405.49 rows=3985 width=8) (actual > time=0.188..32.345 rows=36309 loops=1)\"\n> \" Sort Key: a.domain\"\n> \" Sort Method: quicksort Memory: 27kB\"\n> \" -> Index 
Scan using result_user_idx on result a > (cost=0.00..157.21 rows=3985 width=8) (actual time=0.021..0.101 rows=61 > loops=1)\"\n> \" Index Cond: (\"user\" = 1337)\"\n> \"Total runtime: 42881.382 ms\"\n>\n> This is still very slow...\n\nWell, you're getting the database to read the entire contents of the \ndomain_categories table in order. That's 12 million rows - a fair amount \nof work.\n\nYou may find that removing the \"user = 1337\" constraint doesn't make the \nquery much slower - that's where you get a big win by clustering on \ndomain. You might also want to cluster the results table on domain.\n\nIf you want the results for just one user, it would be very helpful to \nhave a user column on the domain_categories table, and an index on that \ncolumn. However, that will slow down the query for all users a little.\n\nMatthew\n\n-- \nFor every complex problem, there is a solution that is simple, neat, and wrong.\n -- H. L. Mencken \n", "msg_date": "Mon, 18 Aug 2008 15:57:22 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": ">>\n>\n> Well, you're getting the database to read the entire contents of the \n> domain_categories table in order. That's 12 million rows - a fair \n> amount of work.\n>\n> You may find that removing the \"user = 1337\" constraint doesn't make \n> the query much slower - that's where you get a big win by clustering \n> on domain. You might also want to cluster the results table on domain.\n\nRunning the query for more than one user is indeed not much slower. \nThat's what I need. I'm clustering the results table on domain right \nnow. But why is this better than clustering it on \"user\"?\n\n>\n>\n> If you want the results for just one user, it would be very helpful \n> to have a user column on the domain_categories table, and an index \n> on that column. However, that will slow down the query for all users \n> a little.\n\nA row in domain_categories can belong to more than one user. But I \ndon't need to run this query for only one user anyway.\n\nThanks so far,\n", "msg_date": "Mon, 18 Aug 2008 17:49:38 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "On Mon, 18 Aug 2008, Moritz Onken wrote:\n> Running the query for more than one user is indeed not much slower. That's \n> what I need. I'm clustering the results table on domain right now. But why is \n> this better than clustering it on \"user\"?\n\nThe reason is the way that the merge join algorithm works. What it does is \ntakes two tables, and sorts them both by the join fields. Then it can \nstream through both tables producing results as it goes. It's the best \njoin algorithm, but it does require both tables to be sorted by the same \nthing, which is domain in this case. 
The aggregating on user happens after \nthe join has been done, and the hash aggregate can accept the users in \nrandom order.\n\nIf you look at your last EXPLAIN, see that it has to sort the result table \non domain, although it can read the domain_categories in domain order due \nto the clustered index.\n\n\"HashAggregate\n\" -> Merge Join\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on domain_categories b\n\" -> Sort\n\" Sort Key: a.domain\"\n\" Sort Method: quicksort Memory: 27kB\"\n\" -> Index Scan using result_user_idx on result a \n\" Index Cond: (\"user\" = 1337)\"\n\nWithout the user restriction and re-clustering, this should become:\n\n\"HashAggregate\n\" -> Merge Join\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on domain_categories b\n\" -> Index Scan using result_domain on result a\n\nMatthew\n\n-- \nVacuums are nothings. We only mention them to let them know we know\nthey're there.\n", "msg_date": "Mon, 18 Aug 2008 17:05:11 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 18.08.2008 um 18:05 schrieb Matthew Wakeling:\n\n> On Mon, 18 Aug 2008, Moritz Onken wrote:\n>> Running the query for more than one user is indeed not much slower. \n>> That's what I need. I'm clustering the results table on domain \n>> right now. But why is this better than clustering it on \"user\"?\n>\n> The reason is the way that the merge join algorithm works. What it \n> does is takes two tables, and sorts them both by the join fields. \n> Then it can stream through both tables producing results as it goes. \n> It's the best join algorithm, but it does require both tables to be \n> sorted by the same thing, which is domain in this case. The \n> aggregating on user happens after the join has been done, and the \n> hash aggregate can accept the users in random order.\n>\n> If you look at your last EXPLAIN, see that it has to sort the result \n> table on domain, although it can read the domain_categories in \n> domain order due to the clustered index.\n\nexplain select\n a.\"user\", b.category, sum(1.0/b.cat_count)::float\n from result a, domain_categories b\n where a.\"domain\" = b.\"domain\"\n group by a.\"user\", b.category;\n\n\"GroupAggregate (cost=21400443313.69..22050401897.13 rows=35049240 \nwidth=12)\"\n\" -> Sort (cost=21400443313.69..21562757713.35 rows=64925759864 \nwidth=12)\"\n\" Sort Key: a.\"user\", b.category\"\n\" -> Merge Join (cost=4000210.40..863834009.08 \nrows=64925759864 width=12)\"\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on \ndomain_categories b (cost=0.00..391453.79 rows=12105014 width=12)\"\n\" -> Materialize (cost=3999931.73..4253766.93 \nrows=20306816 width=8)\"\n\" -> Sort (cost=3999931.73..4050698.77 \nrows=20306816 width=8)\"\n\" Sort Key: a.domain\"\n\" -> Seq Scan on result a \n(cost=0.00..424609.16 rows=20306816 width=8)\"\n\nBoth results and domain_categories are clustered on domain and analyzed.\nIt took 50 minutes to run this query for 280 users (\"and \"user\" IN \n([280 ids])\"), 78000 rows were returned and stored in a table. Is this \nreasonable?\nWhy is it still sorting on domain? 
I thought the clustering should \nprevent the planner from doing this?\n\nmoritz\n", "msg_date": "Tue, 19 Aug 2008 11:03:34 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "On Tue, 19 Aug 2008, Moritz Onken wrote:\n> explain select\n> a.\"user\", b.category, sum(1.0/b.cat_count)::float\n> from result a, domain_categories b\n> where a.\"domain\" = b.\"domain\"\n> group by a.\"user\", b.category;\n\n> Both results and domain_categories are clustered on domain and analyzed.\n> Why is it still sorting on domain? I thought the clustering should prevent \n> the planner from doing this?\n\nAs far as I can tell, it should. If it is clustered on an index on domain, \nand then analysed, it should no longer have to sort on domain.\n\nCould you post here the results of running:\n\nselect * from pg_stats where attname = 'domain';\n\n> It took 50 minutes to run this query for 280 users (\"and \"user\" IN ([280 \n> ids])\"), 78000 rows were returned and stored in a table. Is this reasonable?\n\nSounds like an awfully long time to me. Also, I think restricting it to \n280 users is probably not making it any faster.\n\nMatthew\n\n-- \nIt's one of those irregular verbs - \"I have an independent mind,\" \"You are\nan eccentric,\" \"He is round the twist.\"\n -- Bernard Woolly, Yes Prime Minister\n", "msg_date": "Tue, 19 Aug 2008 12:13:21 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": ">>\n> As far as I can tell, it should. If it is clustered on an index on \n> domain, and then analysed, it should no longer have to sort on domain.\n>\n> Could you post here the results of running:\n>\n> select * from pg_stats where attname = 'domain';\n>\n\n\n schemaname | tablename | attname | null_frac | \navg_width | n_distinct | \nmost_common_vals \n| \nmost_common_freqs \n| \n histogram_bounds \n | \n correlation\n\n public | result | domain | 0 | \n4 | 1642 | \n{3491378,3213829,3316634,3013831,3062500,3242775,3290846,3171997,3412018,3454092 \n} | \n{0.352333,0.021,0.01,0.00766667,0.00566667,0.00533333,0.00533333,0.005,0.00266667,0.00266667 \n} | \n{3001780,3031753,3075043,3129688,3176566,3230067,3286784,3341445,3386233,3444374,3491203 \n} | \n 1\n\n\nNo idea what that means :)\n>>\n>\n> Sounds like an awfully long time to me. Also, I think restricting it \n> to 280 users is probably not making it any faster.\n\nIf I hadn't restricted it to 280 users it would have run ~350days...\n\nThanks for your help!\n\nmoritz\n", "msg_date": "Tue, 19 Aug 2008 13:39:15 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "On Tue, 19 Aug 2008, Moritz Onken wrote:\n> tablename | attname | n_distinct | correlation\n> result | domain | 1642 | 1\n\nWell, the important thing is the correlation, which is 1, indicating that \nPostgres knows that the table is clustered. So I have no idea why it is \nsorting the entire table.\n\nWhat happens when you run EXPLAIN SELECT * FROM result ORDER BY domain?\n\n>> Sounds like an awfully long time to me. Also, I think restricting it to 280 \n>> users is probably not making it any faster.\n>\n> If I hadn't restricted it to 280 users it would have run ~350days...\n\nWhat makes you say that? Perhaps you could post EXPLAINs of both of the \nqueries.\n\nMatthew\n\n-- \nWhat goes up must come down. 
Ask any system administrator.\n", "msg_date": "Tue, 19 Aug 2008 13:17:25 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 19.08.2008 um 14:17 schrieb Matthew Wakeling:\n\n> On Tue, 19 Aug 2008, Moritz Onken wrote:\n>> tablename | attname | n_distinct | correlation\n>> result | domain | 1642 | 1\n>\n> Well, the important thing is the correlation, which is 1, indicating \n> that Postgres knows that the table is clustered. So I have no idea \n> why it is sorting the entire table.\n>\n> What happens when you run EXPLAIN SELECT * FROM result ORDER BY \n> domain?\n>\n\n\"Index Scan using result_domain_idx on result (cost=0.00..748720.72 \nrows=20306816 width=49)\"\n... as it should be.\n\n>>> Sounds like an awfully long time to me. Also, I think restricting \n>>> it to 280 users is probably not making it any faster.\n>>\n>> If I hadn't restricted it to 280 users it would have run ~350days...\n>\n> What makes you say that? Perhaps you could post EXPLAINs of both of \n> the queries.\n>\n> Matthew\n\nThat was just a guess. The query needs to retrieve the data for about \n50,000 users. But it should be fast if I don't retrieve the data for \nspecific users but let in run through all rows.\n\nexplain insert into setup1 (select\n a.\"user\", b.category, sum(1.0/b.cat_count)::float\n from result a, domain_categories b\n where a.\"domain\" = b.\"domain\"\n and b.depth < 4\n and a.results > 100\n and a.\"user\" < 30000\n group by a.\"user\", b.category);\n\n\n\"GroupAggregate (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n\" -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n\" Sort Key: a.\"user\", b.category\"\n\" -> Merge Join (cost=149241.25..1287278.89 rows=53171707 \nwidth=12)\"\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on \ndomain_categories b (cost=0.00..421716.32 rows=5112568 width=12)\"\n\" Filter: (depth < 4)\"\n\" -> Materialize (cost=148954.16..149446.36 rows=39376 \nwidth=8)\"\n\" -> Sort (cost=148954.16..149052.60 rows=39376 \nwidth=8)\"\n\" Sort Key: a.domain\"\n\" -> Bitmap Heap Scan on result a \n(cost=1249.93..145409.79 rows=39376 width=8)\"\n\" Recheck Cond: (\"user\" < 30000)\"\n\" Filter: (results > 100)\"\n\" -> Bitmap Index Scan on \nresult_user_idx (cost=0.00..1240.08 rows=66881 width=0)\"\n\" Index Cond: (\"user\" < 30000)\"\n\n\nThis query limits the number of users to 215 and this query took about \n50 minutes.\nI could create to temp tables which have only those records which I \nneed for this query. Would this be a good idea?\n\n\nmoritz\n\n", "msg_date": "Tue, 19 Aug 2008 14:47:33 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "What is your work_mem set to? The default?\n\nTry increasing it significantly if you have the RAM and seeing if that\naffects the explain plan. You may even want to set it to a number larger\nthan the RAM you have just to see what happens. In all honesty, it may be\nfaster to overflow to OS swap space than sort too many rows, but ONLY if it\nchanges the plan to a significantly more efficient one.\n\nSimply type\n'SET work_mem = '500MB';\nbefore running your explain. 
Set it to even more RAM if you have the space\nfor this experiment.\n\nIn my experience the performance of aggregates on large tables is\nsignificantly affected by work_mem and the optimizer will chosse poorly\nwithout enough of it. It will rule out plans that may be fast enough when\noverflowing to disk in preference to colossal sized sorts (which likely also\noverflow to disk but take hours or days).\n\nOn Tue, Aug 19, 2008 at 5:47 AM, Moritz Onken <[email protected]>wrote:\n\n>\n> Am 19.08.2008 um 14:17 schrieb Matthew Wakeling:\n>\n> On Tue, 19 Aug 2008, Moritz Onken wrote:\n>>\n>>> tablename | attname | n_distinct | correlation\n>>> result | domain | 1642 | 1\n>>>\n>>\n>> Well, the important thing is the correlation, which is 1, indicating that\n>> Postgres knows that the table is clustered. So I have no idea why it is\n>> sorting the entire table.\n>>\n>> What happens when you run EXPLAIN SELECT * FROM result ORDER BY domain?\n>>\n>>\n> \"Index Scan using result_domain_idx on result (cost=0.00..748720.72\n> rows=20306816 width=49)\"\n> ... as it should be.\n>\n> Sounds like an awfully long time to me. Also, I think restricting it to\n>>>> 280 users is probably not making it any faster.\n>>>>\n>>>\n>>> If I hadn't restricted it to 280 users it would have run ~350days...\n>>>\n>>\n>> What makes you say that? Perhaps you could post EXPLAINs of both of the\n>> queries.\n>>\n>> Matthew\n>>\n>\n> That was just a guess. The query needs to retrieve the data for about\n> 50,000 users. But it should be fast if I don't retrieve the data for\n> specific users but let in run through all rows.\n>\n> explain insert into setup1 (select\n> a.\"user\", b.category, sum(1.0/b.cat_count)::float\n> from result a, domain_categories b\n> where a.\"domain\" = b.\"domain\"\n> and b.depth < 4\n> and a.results > 100\n> and a.\"user\" < 30000\n> group by a.\"user\", b.category);\n>\n>\n> \"GroupAggregate (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n> \" -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n> \" Sort Key: a.\"user\", b.category\"\n> \" -> Merge Join (cost=149241.25..1287278.89 rows=53171707\n> width=12)\"\n> \" Merge Cond: (b.domain = a.domain)\"\n> \" -> Index Scan using domain_categories_domain on\n> domain_categories b (cost=0.00..421716.32 rows=5112568 width=12)\"\n> \" Filter: (depth < 4)\"\n> \" -> Materialize (cost=148954.16..149446.36 rows=39376\n> width=8)\"\n> \" -> Sort (cost=148954.16..149052.60 rows=39376\n> width=8)\"\n> \" Sort Key: a.domain\"\n> \" -> Bitmap Heap Scan on result a\n> (cost=1249.93..145409.79 rows=39376 width=8)\"\n> \" Recheck Cond: (\"user\" < 30000)\"\n> \" Filter: (results > 100)\"\n> \" -> Bitmap Index Scan on result_user_idx\n> (cost=0.00..1240.08 rows=66881 width=0)\"\n> \" Index Cond: (\"user\" < 30000)\"\n>\n>\n> This query limits the number of users to 215 and this query took about 50\n> minutes.\n> I could create to temp tables which have only those records which I need\n> for this query. Would this be a good idea?\n>\n>\n> moritz\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWhat is your work_mem set to?  The default?Try increasing it significantly if you have the RAM and seeing if that affects the explain plan.  You may even want to set it to a number larger than the RAM you have just to see what happens.  
In all honesty, it may be faster to overflow to OS swap space than sort too many rows, but ONLY if it changes the plan to a significantly more efficient one.\nSimply type'SET work_mem = '500MB';   before running your explain.  Set it to even more RAM if you have the space for this experiment. In my experience the performance of aggregates on large tables is significantly affected by work_mem and the optimizer will chosse poorly without enough of it.  It will rule out plans that may be fast enough when overflowing to disk in preference to colossal sized sorts (which likely also overflow to disk but take hours or days).\nOn Tue, Aug 19, 2008 at 5:47 AM, Moritz Onken <[email protected]> wrote:\n\nAm 19.08.2008 um 14:17 schrieb Matthew Wakeling:\n\n\nOn Tue, 19 Aug 2008, Moritz Onken wrote:\n\n      tablename        | attname | n_distinct | correlation\nresult                 | domain  |       1642 |           1\n\n\nWell, the important thing is the correlation, which is 1, indicating that Postgres knows that the table is clustered. So I have no idea why it is sorting the entire table.\n\nWhat happens when you run EXPLAIN SELECT * FROM result ORDER BY domain?\n\n\n\n\"Index Scan using result_domain_idx on result  (cost=0.00..748720.72 rows=20306816 width=49)\"\n... as it should be.\n\n\n\nSounds like an awfully long time to me. Also, I think restricting it to 280 users is probably not making it any faster.\n\n\nIf I hadn't restricted it to 280 users it would have run ~350days...\n\n\nWhat makes you say that? Perhaps you could post EXPLAINs of both of the queries.\n\nMatthew\n\n\nThat was just a guess. The query needs to retrieve the data for about 50,000 users. But it should be fast if I don't retrieve the data for specific users but let in run through all rows.\n\nexplain insert into setup1 (select\n  a.\"user\", b.category, sum(1.0/b.cat_count)::float\n  from result a, domain_categories b\n  where a.\"domain\" = b.\"domain\"\n  and b.depth < 4\n  and a.results > 100\n  and a.\"user\" < 30000\n  group by a.\"user\", b.category);\n\n\n\"GroupAggregate  (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n\"  ->  Sort  (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n\"        Sort Key: a.\"user\", b.category\"\n\"        ->  Merge Join  (cost=149241.25..1287278.89 rows=53171707 width=12)\"\n\"              Merge Cond: (b.domain = a.domain)\"\n\"              ->  Index Scan using domain_categories_domain on domain_categories b  (cost=0.00..421716.32 rows=5112568 width=12)\"\n\"                    Filter: (depth < 4)\"\n\"              ->  Materialize  (cost=148954.16..149446.36 rows=39376 width=8)\"\n\"                    ->  Sort  (cost=148954.16..149052.60 rows=39376 width=8)\"\n\"                          Sort Key: a.domain\"\n\"                          ->  Bitmap Heap Scan on result a  (cost=1249.93..145409.79 rows=39376 width=8)\"\n\"                                Recheck Cond: (\"user\" < 30000)\"\n\"                                Filter: (results > 100)\"\n\"                                ->  Bitmap Index Scan on result_user_idx  (cost=0.00..1240.08 rows=66881 width=0)\"\n\"                                      Index Cond: (\"user\" < 30000)\"\n\n\nThis query limits the number of users to 215 and this query took about 50 minutes.\nI could create to temp tables which have only those records which I need for this query. 
Would this be a good idea?\n\n\nmoritz\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 19 Aug 2008 07:49:29 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 19.08.2008 um 16:49 schrieb Scott Carey:\n\n> What is your work_mem set to? The default?\n>\n> Try increasing it significantly if you have the RAM and seeing if \n> that affects the explain plan. You may even want to set it to a \n> number larger than the RAM you have just to see what happens. In \n> all honesty, it may be faster to overflow to OS swap space than sort \n> too many rows, but ONLY if it changes the plan to a significantly \n> more efficient one.\n>\n> Simply type\n> 'SET work_mem = '500MB';\n> before running your explain. Set it to even more RAM if you have \n> the space for this experiment.\n>\n> In my experience the performance of aggregates on large tables is \n> significantly affected by work_mem and the optimizer will chosse \n> poorly without enough of it. It will rule out plans that may be \n> fast enough when overflowing to disk in preference to colossal sized \n> sorts (which likely also overflow to disk but take hours or days).\n\nThanks for that advice but the explain is not different :-(\n\nmoritz\n", "msg_date": "Tue, 19 Aug 2008 17:23:42 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 19.08.2008 um 17:23 schrieb Moritz Onken:\n\n>\n> Am 19.08.2008 um 16:49 schrieb Scott Carey:\n>\n>> What is your work_mem set to? The default?\n>>\n>> Try increasing it significantly if you have the RAM and seeing if \n>> that affects the explain plan. You may even want to set it to a \n>> number larger than the RAM you have just to see what happens. In \n>> all honesty, it may be faster to overflow to OS swap space than \n>> sort too many rows, but ONLY if it changes the plan to a \n>> significantly more efficient one.\n>>\n>> Simply type\n>> 'SET work_mem = '500MB';\n>> before running your explain. Set it to even more RAM if you have \n>> the space for this experiment.\n>>\n>> In my experience the performance of aggregates on large tables is \n>> significantly affected by work_mem and the optimizer will chosse \n>> poorly without enough of it. It will rule out plans that may be \n>> fast enough when overflowing to disk in preference to colossal \n>> sized sorts (which likely also overflow to disk but take hours or \n>> days).\n>\n> Thanks for that advice but the explain is not different :-(\n>\n> moritz\n>\n> -- \n\nHi,\n\nI started the query with work_mem set to 3000MB. The explain output \ndidn't change but it runs now much faster (about 10 times). The swap \nisn't used. How can you explain that?\n\nmoritz\n", "msg_date": "Wed, 20 Aug 2008 09:54:13 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "Moritz Onken �rta:\n>\n> Am 19.08.2008 um 17:23 schrieb Moritz Onken:\n>\n>>\n>> Am 19.08.2008 um 16:49 schrieb Scott Carey:\n>>\n>>> What is your work_mem set to? The default?\n>>>\n>>> Try increasing it significantly if you have the RAM and seeing if\n>>> that affects the explain plan. You may even want to set it to a\n>>> number larger than the RAM you have just to see what happens. 
In\n>>> all honesty, it may be faster to overflow to OS swap space than sort\n>>> too many rows, but ONLY if it changes the plan to a significantly\n>>> more efficient one.\n>>>\n>>> Simply type\n>>> 'SET work_mem = '500MB';\n>>> before running your explain. Set it to even more RAM if you have\n>>> the space for this experiment.\n>>>\n>>> In my experience the performance of aggregates on large tables is\n>>> significantly affected by work_mem and the optimizer will chosse\n>>> poorly without enough of it. It will rule out plans that may be\n>>> fast enough when overflowing to disk in preference to colossal sized\n>>> sorts (which likely also overflow to disk but take hours or days).\n>>\n>> Thanks for that advice but the explain is not different :-(\n>>\n>> moritz\n>>\n>> -- \n>\n> Hi,\n>\n> I started the query with work_mem set to 3000MB. The explain output\n> didn't change but it runs now much faster (about 10 times). The swap\n> isn't used. How can you explain that?\n\n$ cat /proc/sys/vm/overcommit_memory\n0\n$ less linux/Documentation/filesystems/proc.txt\n...\novercommit_memory\n-----------------\n\nControls overcommit of system memory, possibly allowing processes\nto allocate (but not use) more memory than is actually available.\n\n\n0 - Heuristic overcommit handling. Obvious overcommits of\n address space are refused. Used for a typical system. It\n ensures a seriously wild allocation fails while allowing\n overcommit to reduce swap usage. root is allowed to\n allocate slightly more memory in this mode. This is the\n default.\n\n1 - Always overcommit. Appropriate for some scientific\n applications.\n\n2 - Don't overcommit. The total address space commit\n for the system is not permitted to exceed swap plus a\n configurable percentage (default is 50) of physical RAM.\n Depending on the percentage you use, in most situations\n this means a process will not be killed while attempting\n to use already-allocated memory but will receive errors\n on memory allocation as appropriate.\n...\n\nI guess you are running on 64-bit because \"obvious overcommit\" exceeds\n3GB already.\nOr you're running 32-bit and overcommit_memory=1 on your system.\n\nBest regards,\nZolt�n B�sz�rm�nyi\n\n-- \n----------------------------------\nZolt�n B�sz�rm�nyi\nCybertec Sch�nig & Sch�nig GmbH\nhttp://www.postgresql.at/\n\n", "msg_date": "Wed, 20 Aug 2008 11:10:21 +0200", "msg_from": "Zoltan Boszormenyi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "More work_mem will make the sort fit more in memory and less on disk, even\nwith the same query plan.\n\n\nOn Wed, Aug 20, 2008 at 12:54 AM, Moritz Onken <[email protected]>wrote:\n\n>\n> Am 19.08.2008 um 17:23 schrieb Moritz Onken:\n>\n>\n>> Am 19.08.2008 um 16:49 schrieb Scott Carey:\n>>\n>> What is your work_mem set to? The default?\n>>>\n>>> Try increasing it significantly if you have the RAM and seeing if that\n>>> affects the explain plan. You may even want to set it to a number larger\n>>> than the RAM you have just to see what happens. In all honesty, it may be\n>>> faster to overflow to OS swap space than sort too many rows, but ONLY if it\n>>> changes the plan to a significantly more efficient one.\n>>>\n>>> Simply type\n>>> 'SET work_mem = '500MB';\n>>> before running your explain. 
Set it to even more RAM if you have the\n>>> space for this experiment.\n>>>\n>>> In my experience the performance of aggregates on large tables is\n>>> significantly affected by work_mem and the optimizer will chosse poorly\n>>> without enough of it. It will rule out plans that may be fast enough when\n>>> overflowing to disk in preference to colossal sized sorts (which likely also\n>>> overflow to disk but take hours or days).\n>>>\n>>\n>> Thanks for that advice but the explain is not different :-(\n>>\n>> moritz\n>>\n>> --\n>>\n>\n> Hi,\n>\n> I started the query with work_mem set to 3000MB. The explain output didn't\n> change but it runs now much faster (about 10 times). The swap isn't used.\n> How can you explain that?\n>\n>\n> moritz\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nMore work_mem will make the sort fit more in memory and less on disk, even with the same query plan.On Wed, Aug 20, 2008 at 12:54 AM, Moritz Onken <[email protected]> wrote:\n\nAm 19.08.2008 um 17:23 schrieb Moritz Onken:\n\n\n\nAm 19.08.2008 um 16:49 schrieb Scott Carey:\n\n\nWhat is your work_mem set to?  The default?\n\nTry increasing it significantly if you have the RAM and seeing if that affects the explain plan.  You may even want to set it to a number larger than the RAM you have just to see what happens.  In all honesty, it may be faster to overflow to OS swap space than sort too many rows, but ONLY if it changes the plan to a significantly more efficient one.\n\nSimply type\n'SET work_mem = '500MB';\nbefore running your explain.  Set it to even more RAM if you have the space for this experiment.\n\nIn my experience the performance of aggregates on large tables is significantly affected by work_mem and the optimizer will chosse poorly without enough of it.  It will rule out plans that may be fast enough when overflowing to disk in preference to colossal sized sorts (which likely also overflow to disk but take hours or days).\n\n\nThanks for that advice but the explain is not different :-(\n\nmoritz\n\n-- \n\n\nHi,\n\nI started the query with work_mem set to 3000MB. The explain output didn't change but it runs now much faster (about 10 times). The swap isn't used. How can you explain that?\n\nmoritz\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 20 Aug 2008 09:01:07 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "Ok, so the problem boils down to the sort at the end.\n\nThe query up through the merge join on domain is as fast as its going to\nget. The sort at the end however, should not happen ideally. There are not\nthat many rows returned, and it should hash_aggregate if it thinks there is\nenough space to do so.\n\nThe query planner is going to choose the sort > agg over the hash-agg if it\nestimates the total number of resulting rows to be large enough so that the\nhash won't fit in work_mem. 
However, there seems to be another factor here\nbased on this:\n\n\nGroupAggregate (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n\" -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n\" Sort Key: a.\"user\", b.category\"\n\" -> Merge Join (cost=149241.25..1287278.89 rows=53171707\nwidth=12)\"\n\" Merge Cond: (b.domain = a.domain)\"\n\n\nThe planner actually thinks there will only be 28704 rows returned of width\n12. But it chooses to sort 53 million rows before aggregating. Thats\neither a bug or there's something else wrong here. That is the wrong way\nto aggregate those results no matter how much work_mem you have unless I'm\ncompletely missing something...\n\nYou can try rearranging the query just to see if you can work around this.\nWhat happens if you compare the explain on:\n\nselect a.\"user\", b.category, sum(1.0/b.cat_count)::float\n from result a, domain_categories b\n where a.\"domain\" = b.\"domain\"\n and b.depth < 4\n and a.results > 100\n and a.\"user\" < 30000\n group by a.\"user\", b.category\n\n\nto\n\nselect c.\"user\", c.category, sum(1.0/c.cat_count)::float\n from (select a.\"user\", b.category, b.cat_count\n from result a, domain_categories b\n where a.\"domain\" = b.\"domain\"\n and b.depth < 4\n and a.results > 100\n and a.\"user\" < 30000 ) c\n group by c.\"user\", c.category\n\nIt shouldn't make a difference, but I've seen things like this help before\nso its worth a try. Make sure work_mem is reasonably sized for this test.\n\nAnother thing that won't be that fast, but may avoid the sort, is to select\nthe subselection above into a temporary table, analyze it, and then do the\nouter select. Make sure your settings for temporary space (temp_buffers in\n8.3) are large enough for the intermediate results (700MB should do it).\nThat won't be that fast, but it will most likely be faster than sorting 50\nmillion + rows. There are lots of problems with this approach but it may be\nworth the experiment.\n\n\nOn Tue, Aug 19, 2008 at 5:47 AM, Moritz Onken <[email protected]>wrote:\n\n>\n> Am 19.08.2008 um 14:17 schrieb Matthew Wakeling:\n>\n> On Tue, 19 Aug 2008, Moritz Onken wrote:\n>>\n>>> tablename | attname | n_distinct | correlation\n>>> result | domain | 1642 | 1\n>>>\n>>\n>> Well, the important thing is the correlation, which is 1, indicating that\n>> Postgres knows that the table is clustered. So I have no idea why it is\n>> sorting the entire table.\n>>\n>> What happens when you run EXPLAIN SELECT * FROM result ORDER BY domain?\n>>\n>>\n> \"Index Scan using result_domain_idx on result (cost=0.00..748720.72\n> rows=20306816 width=49)\"\n> ... as it should be.\n>\n> Sounds like an awfully long time to me. Also, I think restricting it to\n>>>> 280 users is probably not making it any faster.\n>>>>\n>>>\n>>> If I hadn't restricted it to 280 users it would have run ~350days...\n>>>\n>>\n>> What makes you say that? Perhaps you could post EXPLAINs of both of the\n>> queries.\n>>\n>> Matthew\n>>\n>\n> That was just a guess. The query needs to retrieve the data for about\n> 50,000 users. 
But it should be fast if I don't retrieve the data for\n> specific users but let in run through all rows.\n>\n> explain insert into setup1 (select\n> a.\"user\", b.category, sum(1.0/b.cat_count)::float\n> from result a, domain_categories b\n> where a.\"domain\" = b.\"domain\"\n> and b.depth < 4\n> and a.results > 100\n> and a.\"user\" < 30000\n> group by a.\"user\", b.category);\n>\n>\n> \"GroupAggregate (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n> \" -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n> \" Sort Key: a.\"user\", b.category\"\n> \" -> Merge Join (cost=149241.25..1287278.89 rows=53171707\n> width=12)\"\n> \" Merge Cond: (b.domain = a.domain)\"\n> \" -> Index Scan using domain_categories_domain on\n> domain_categories b (cost=0.00..421716.32 rows=5112568 width=12)\"\n> \" Filter: (depth < 4)\"\n> \" -> Materialize (cost=148954.16..149446.36 rows=39376\n> width=8)\"\n> \" -> Sort (cost=148954.16..149052.60 rows=39376\n> width=8)\"\n> \" Sort Key: a.domain\"\n> \" -> Bitmap Heap Scan on result a\n> (cost=1249.93..145409.79 rows=39376 width=8)\"\n> \" Recheck Cond: (\"user\" < 30000)\"\n> \" Filter: (results > 100)\"\n> \" -> Bitmap Index Scan on result_user_idx\n> (cost=0.00..1240.08 rows=66881 width=0)\"\n> \" Index Cond: (\"user\" < 30000)\"\n>\n>\n> This query limits the number of users to 215 and this query took about 50\n> minutes.\n> I could create to temp tables which have only those records which I need\n> for this query. Would this be a good idea?\n>\n>\n> moritz\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nOk, so the problem boils down to the sort at the end.The query up through the merge join on domain is as fast as its going to get.  The sort at the end however, should not happen ideally.  There are not that many rows returned, and it should hash_aggregate if it thinks there is enough space to do so.\nThe query planner is going to choose the sort > agg over the hash-agg if it estimates the total number of resulting rows to be large enough so that the hash won't fit in work_mem.   However, there seems to be another factor here based on this:\nGroupAggregate  (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n\"  ->  Sort  (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n\"        Sort Key: a.\"user\", b.category\"\n\"        ->  Merge Join  (cost=149241.25..1287278.89 rows=53171707 width=12)\"\n\"              Merge Cond: (b.domain = a.domain)\"The planner actually thinks there will only be 28704 rows returned of width 12.  But it chooses to sort 53 million rows before aggregating.  Thats either a bug or there's something else wrong here.   That is the wrong way to aggregate those results no matter how much work_mem you have unless I'm completely missing something...\nYou can try rearranging the query just to see if you can work around this.  
What happens if you compare the explain on:select a.\"user\", b.category, sum(1.0/b.cat_count)::float\n\n  from result a, domain_categories b\n  where a.\"domain\" = b.\"domain\"\n  and b.depth < 4\n  and a.results > 100\n  and a.\"user\" < 30000\n  group by a.\"user\", b.categoryto select c.\"user\", c.category, sum(1.0/c.cat_count)::float\n  from (select a.\"user\", b.category, b.cat_count    from result a, domain_categories b\n      where a.\"domain\" = b.\"domain\"\n        and b.depth < 4\n        and a.results > 100\n        and a.\"user\" < 30000 ) c \n group by c.\"user\", c.categoryIt shouldn't make a difference, but I've seen things like this help before so its worth a try.  Make sure work_mem is reasonably sized for this test.Another thing that won't be that fast, but may avoid the sort, is to select the subselection above into a temporary table, analyze it, and then do the outer select.  Make sure your settings for temporary space (temp_buffers in 8.3) are large enough for the intermediate results (700MB should do it).  That won't be that fast, but it will most likely be faster than sorting 50 million + rows.  There are lots of problems with this approach but it may be worth the experiment.\nOn Tue, Aug 19, 2008 at 5:47 AM, Moritz Onken <[email protected]> wrote:\n\nAm 19.08.2008 um 14:17 schrieb Matthew Wakeling:\n\n\nOn Tue, 19 Aug 2008, Moritz Onken wrote:\n\n      tablename        | attname | n_distinct | correlation\nresult                 | domain  |       1642 |           1\n\n\nWell, the important thing is the correlation, which is 1, indicating that Postgres knows that the table is clustered. So I have no idea why it is sorting the entire table.\n\nWhat happens when you run EXPLAIN SELECT * FROM result ORDER BY domain?\n\n\n\n\"Index Scan using result_domain_idx on result  (cost=0.00..748720.72 rows=20306816 width=49)\"\n... as it should be.\n\n\n\nSounds like an awfully long time to me. Also, I think restricting it to 280 users is probably not making it any faster.\n\n\nIf I hadn't restricted it to 280 users it would have run ~350days...\n\n\nWhat makes you say that? Perhaps you could post EXPLAINs of both of the queries.\n\nMatthew\n\n\nThat was just a guess. The query needs to retrieve the data for about 50,000 users. 
But it should be fast if I don't retrieve the data for specific users but let in run through all rows.\n\nexplain insert into setup1 (select\n  a.\"user\", b.category, sum(1.0/b.cat_count)::float\n  from result a, domain_categories b\n  where a.\"domain\" = b.\"domain\"\n  and b.depth < 4\n  and a.results > 100\n  and a.\"user\" < 30000\n  group by a.\"user\", b.category);\n\n\n\"GroupAggregate  (cost=11745105.66..12277396.81 rows=28704 width=12)\"\n\"  ->  Sort  (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n\"        Sort Key: a.\"user\", b.category\"\n\"        ->  Merge Join  (cost=149241.25..1287278.89 rows=53171707 width=12)\"\n\"              Merge Cond: (b.domain = a.domain)\"\n\"              ->  Index Scan using domain_categories_domain on domain_categories b  (cost=0.00..421716.32 rows=5112568 width=12)\"\n\"                    Filter: (depth < 4)\"\n\"              ->  Materialize  (cost=148954.16..149446.36 rows=39376 width=8)\"\n\"                    ->  Sort  (cost=148954.16..149052.60 rows=39376 width=8)\"\n\"                          Sort Key: a.domain\"\n\"                          ->  Bitmap Heap Scan on result a  (cost=1249.93..145409.79 rows=39376 width=8)\"\n\"                                Recheck Cond: (\"user\" < 30000)\"\n\"                                Filter: (results > 100)\"\n\"                                ->  Bitmap Index Scan on result_user_idx  (cost=0.00..1240.08 rows=66881 width=0)\"\n\"                                      Index Cond: (\"user\" < 30000)\"\n\n\nThis query limits the number of users to 215 and this query took about 50 minutes.\nI could create to temp tables which have only those records which I need for this query. Would this be a good idea?\n\n\nmoritz\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 20 Aug 2008 11:06:06 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\"Scott Carey\" <[email protected]> writes:\n> The planner actually thinks there will only be 28704 rows returned of width\n> 12. But it chooses to sort 53 million rows before aggregating. Thats\n> either a bug or there's something else wrong here. That is the wrong way\n> to aggregate those results no matter how much work_mem you have unless I'm\n> completely missing something...\n\nThat does look weird. What are the datatypes of the columns being\ngrouped by? Maybe they're not hashable?\n\nAnother forcing function that prevents use of HashAgg is DISTINCT\naggregates, but you don't seem to have any in this query...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Aug 2008 14:28:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data " }, { "msg_contents": "\nAm 20.08.2008 um 20:06 schrieb Scott Carey:\n\n> Ok, so the problem boils down to the sort at the end.\n>\n> The query up through the merge join on domain is as fast as its \n> going to get. The sort at the end however, should not happen \n> ideally. There are not that many rows returned, and it should \n> hash_aggregate if it thinks there is enough space to do so.\n>\n> The query planner is going to choose the sort > agg over the hash- \n> agg if it estimates the total number of resulting rows to be large \n> enough so that the hash won't fit in work_mem. 
However, there \n> seems to be another factor here based on this:\n>\n>\n> GroupAggregate (cost=11745105.66..12277396.\n> 81 rows=28704 width=12)\"\n> \" -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n>\n> \" Sort Key: a.\"user\", b.category\"\n> \" -> Merge Join (cost=149241.25..1287278.89 rows=53171707 \n> width=12)\"\n>\n> \" Merge Cond: (b.domain = a.domain)\"\n>\n>\n> The planner actually thinks there will only be 28704 rows returned \n> of width 12. But it chooses to sort 53 million rows before \n> aggregating. Thats either a bug or there's something else wrong \n> here. That is the wrong way to aggregate those results no matter \n> how much work_mem you have unless I'm completely missing something...\n>\n> You can try rearranging the query just to see if you can work around \n> this. What happens if you compare the explain on:\n>\n> select\n> a.\"user\", b.category, sum(1.0/b.cat_count)::float\n> from result a, domain_categories b\n> where a.\"domain\" = b.\"domain\"\n> and b.depth < 4\n> and a.results > 100\n> and a.\"user\" < 30000\n> group by a.\"user\", b.category\n>\n>\n\n\"HashAggregate (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n\" -> Merge Join (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on \ndomain_categories b (cost=0.00..421716.32 rows=5112568 width=12)\"\n\" Filter: (depth < 4)\"\n\" -> Sort (cost=148415.16..148513.60 rows=39376 width=8)\"\n\" Sort Key: a.domain\"\n\" -> Bitmap Heap Scan on result a \n(cost=1249.93..145409.79 rows=39376 width=8)\"\n\" Recheck Cond: (\"user\" < 30000)\"\n\" Filter: (results > 100)\"\n\" -> Bitmap Index Scan on result_user_idx \n(cost=0.00..1240.08 rows=66881 width=0)\"\n\" Index Cond: (\"user\" < 30000)\"\n\n\n\n> to\n>\n> select\n> c.\"user\", c.category, sum(1.0/c.cat_count)::float\n> from (select a.\"user\", b.category, b.cat_count\n> from result a, domain_categories b\n> where a.\"domain\" = b.\"domain\"\n> and b.depth < 4\n> and a.results > 100\n> and a.\"user\" < 30000 ) c\n> group by c.\"user\", c.category\n>\n\n\n\"HashAggregate (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n\" -> Merge Join (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n\" Merge Cond: (b.domain = a.domain)\"\n\" -> Index Scan using domain_categories_domain on \ndomain_categories b (cost=0.00..421716.32 rows=5112568 width=12)\"\n\" Filter: (depth < 4)\"\n\" -> Sort (cost=148415.16..148513.60 rows=39376 width=8)\"\n\" Sort Key: a.domain\"\n\" -> Bitmap Heap Scan on result a \n(cost=1249.93..145409.79 rows=39376 width=8)\"\n\" Recheck Cond: (\"user\" < 30000)\"\n\" Filter: (results > 100)\"\n\" -> Bitmap Index Scan on result_user_idx \n(cost=0.00..1240.08 rows=66881 width=0)\"\n\" Index Cond: (\"user\" < 30000)\"\n\n\n\n> It shouldn't make a difference, but I've seen things like this help \n> before so its worth a try. Make sure work_mem is reasonably sized \n> for this test.\n\nIt's exactly the same. work_mem was set to 3000MB.\n\n>\n>\n> Another thing that won't be that fast, but may avoid the sort, is to \n> select the subselection above into a temporary table, analyze it, \n> and then do the outer select. Make sure your settings for temporary \n> space (temp_buffers in 8.3) are large enough for the intermediate \n> results (700MB should do it). That won't be that fast, but it will \n> most likely be faster than sorting 50 million + rows. 
There are \n> lots of problems with this approach but it may be worth the \n> experiment.\n>\n\nI'll try this.\n\nThanks so far!\n", "msg_date": "Thu, 21 Aug 2008 09:03:53 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 20.08.2008 um 20:28 schrieb Tom Lane:\n\n> \"Scott Carey\" <[email protected]> writes:\n>> The planner actually thinks there will only be 28704 rows returned \n>> of width\n>> 12. But it chooses to sort 53 million rows before aggregating. \n>> Thats\n>> either a bug or there's something else wrong here. That is the \n>> wrong way\n>> to aggregate those results no matter how much work_mem you have \n>> unless I'm\n>> completely missing something...\n>\n> That does look weird. What are the datatypes of the columns being\n> grouped by? Maybe they're not hashable?\n>\n> Another forcing function that prevents use of HashAgg is DISTINCT\n> aggregates, but you don't seem to have any in this query...\n>\n> \t\t\tregards, tom lane\n\nThe datatypes are both integers. There is no DISTINCT in this query.\nThanks anyway!\n\n\n", "msg_date": "Thu, 21 Aug 2008 09:04:38 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data " }, { "msg_contents": "\nAm 21.08.2008 um 09:04 schrieb Moritz Onken:\n\n>\n> Am 20.08.2008 um 20:28 schrieb Tom Lane:\n>\n>> \"Scott Carey\" <[email protected]> writes:\n>>> The planner actually thinks there will only be 28704 rows returned \n>>> of width\n>>> 12. But it chooses to sort 53 million rows before aggregating. \n>>> Thats\n>>> either a bug or there's something else wrong here. That is the \n>>> wrong way\n>>> to aggregate those results no matter how much work_mem you have \n>>> unless I'm\n>>> completely missing something...\n>>\n>> That does look weird. What are the datatypes of the columns being\n>> grouped by? Maybe they're not hashable?\n>>\n>> Another forcing function that prevents use of HashAgg is DISTINCT\n>> aggregates, but you don't seem to have any in this query...\n>>\n>> \t\t\tregards, tom lane\n>\n> The datatypes are both integers. There is no DISTINCT in this query.\n> Thanks anyway!\n>\n\ninsert into setup1 (select\n a.\"user\", b.category, sum(1.0/b.cat_count)::float\n from result a, domain_categories b\n where a.\"domain\" = b.\"domain\"\n and b.depth < 4\n and a.results > 100\n group by a.\"user\", b.category);\n\nThis query inserted a total of 16,000,000 rows and, with work_mem set \nto 3000mb,\ntook about 24 hours.\n\nAny more ideas to speed this up?\n\n\n", "msg_date": "Thu, 21 Aug 2008 09:45:32 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data " }, { "msg_contents": "It looks to me like the work_mem did have an effect.\n\nYour earlier queries had a sort followed by group aggregate at the top, and\nnow its a hash-aggregate. So the query plan DID change. 
That is likely\nwhere the first 10x performance gain came from.\n\nThe top of the plan was:\n\nGroupAggregate (cost=11745105.66..12277396.\n81 rows=28704 width=12)\"\n -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n -> Merge Join (cost=149241.25..1287278.89 rows=53171707 width=12)\"\n Merge Cond: (b.domain = a.domain)\"\n\nand now it is:\n\n\"HashAggregate (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n -> Merge Join (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n Merge Cond: (b.domain = a.domain)\"\n\nThe HashAggregate replaced the Sort followed by GroupAggregate at about 1/10\nthe cost.\n\nIt probably only took the first couple hundred MB of work_mem to do this, or\nless given that you were at the default originally.\nNote how the estimated cost on the latter is 1.6 million, and it is 11\nmillion in the first one.\n\nYou won't get a large table aggregate significantly faster than this --\nyou're asking it to scan through 53 million records and aggregate. An\nexplain analyze will be somewhat instructive to help identify if there is\nmore I/O or CPU bound overall as we can compare the estimated cost with the\nactual times, but this probably won't get very far.\n\nAfter that, inserting 16M rows requires rather different tuning and\nbottleneck identification.\n\nOn Thu, Aug 21, 2008 at 12:03 AM, Moritz Onken <[email protected]>wrote:\n\n>\n> Am 20.08.2008 um 20:06 schrieb Scott Carey:\n>\n> Ok, so the problem boils down to the sort at the end.\n>>\n>> The query up through the merge join on domain is as fast as its going to\n>> get. The sort at the end however, should not happen ideally. There are not\n>> that many rows returned, and it should hash_aggregate if it thinks there is\n>> enough space to do so.\n>>\n>> The query planner is going to choose the sort > agg over the hash-agg if\n>> it estimates the total number of resulting rows to be large enough so that\n>> the hash won't fit in work_mem. However, there seems to be another factor\n>> here based on this:\n>>\n>>\n>> GroupAggregate (cost=11745105.66..12277396.\n>> 81 rows=28704 width=12)\"\n>> \" -> Sort (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n>>\n>> \" Sort Key: a.\"user\", b.category\"\n>> \" -> Merge Join (cost=149241.25..1287278.89 rows=53171707\n>> width=12)\"\n>>\n>> \" Merge Cond: (b.domain = a.domain)\"\n>>\n>>\n>> The planner actually thinks there will only be 28704 rows returned of\n>> width 12. But it chooses to sort 53 million rows before aggregating. Thats\n>> either a bug or there's something else wrong here. 
That is the wrong way\n>> to aggregate those results no matter how much work_mem you have unless I'm\n>> completely missing something...\n>>\n>> You can try rearranging the query just to see if you can work around this.\n>> What happens if you compare the explain on:\n>>\n>> select\n>> a.\"user\", b.category, sum(1.0/b.cat_count)::float\n>> from result a, domain_categories b\n>> where a.\"domain\" = b.\"domain\"\n>> and b.depth < 4\n>> and a.results > 100\n>> and a.\"user\" < 30000\n>> group by a.\"user\", b.category\n>>\n>>\n>>\n> \"HashAggregate (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n> \" -> Merge Join (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n> \" Merge Cond: (b.domain = a.domain)\"\n> \" -> Index Scan using domain_categories_domain on domain_categories\n> b (cost=0.00..421716.32 rows=5112568 width=12)\"\n> \" Filter: (depth < 4)\"\n> \" -> Sort (cost=148415.16..148513.60 rows=39376 width=8)\"\n> \" Sort Key: a.domain\"\n> \" -> Bitmap Heap Scan on result a (cost=1249.93..145409.79\n> rows=39376 width=8)\"\n> \" Recheck Cond: (\"user\" < 30000)\"\n> \" Filter: (results > 100)\"\n> \" -> Bitmap Index Scan on result_user_idx\n> (cost=0.00..1240.08 rows=66881 width=0)\"\n> \" Index Cond: (\"user\" < 30000)\"\n>\n>\n>\n> to\n>>\n>> select\n>> c.\"user\", c.category, sum(1.0/c.cat_count)::float\n>> from (select a.\"user\", b.category, b.cat_count\n>> from result a, domain_categories b\n>> where a.\"domain\" = b.\"domain\"\n>> and b.depth < 4\n>> and a.results > 100\n>> and a.\"user\" < 30000 ) c\n>> group by c.\"user\", c.category\n>>\n>>\n>\n> \"HashAggregate (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n> \" -> Merge Join (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n> \" Merge Cond: (b.domain = a.domain)\"\n> \" -> Index Scan using domain_categories_domain on domain_categories\n> b (cost=0.00..421716.32 rows=5112568 width=12)\"\n> \" Filter: (depth < 4)\"\n> \" -> Sort (cost=148415.16..148513.60 rows=39376 width=8)\"\n> \" Sort Key: a.domain\"\n> \" -> Bitmap Heap Scan on result a (cost=1249.93..145409.79\n> rows=39376 width=8)\"\n> \" Recheck Cond: (\"user\" < 30000)\"\n> \" Filter: (results > 100)\"\n> \" -> Bitmap Index Scan on result_user_idx\n> (cost=0.00..1240.08 rows=66881 width=0)\"\n> \" Index Cond: (\"user\" < 30000)\"\n>\n>\n>\n> It shouldn't make a difference, but I've seen things like this help before\n>> so its worth a try. Make sure work_mem is reasonably sized for this test.\n>>\n>\n> It's exactly the same. work_mem was set to 3000MB.\n>\n>\n>>\n>> Another thing that won't be that fast, but may avoid the sort, is to\n>> select the subselection above into a temporary table, analyze it, and then\n>> do the outer select. Make sure your settings for temporary space\n>> (temp_buffers in 8.3) are large enough for the intermediate results (700MB\n>> should do it). That won't be that fast, but it will most likely be faster\n>> than sorting 50 million + rows. There are lots of problems with this\n>> approach but it may be worth the experiment.\n>>\n>>\n> I'll try this.\n>\n> Thanks so far!\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt looks to me like the work_mem did have an effect.  Your earlier queries had a sort followed by group aggregate at the top, and now its a hash-aggregate.  So the query plan DID change.  That is likely where the first 10x performance gain came from.  
\nThe top of the plan was:GroupAggregate  (cost=11745105.66..12277396.\n81 rows=28704 width=12)\"\n  ->  Sort  (cost=11745105.66..11878034.93 rows=53171707 width=12)\"        ->  Merge Join  (cost=149241.25..1287278.89 rows=53171707 width=12)\"\n              Merge Cond: (b.domain = a.domain)\"and now it is:\"HashAggregate  (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n  ->  Merge Join  (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n        Merge Cond: (b.domain = a.domain)\"The HashAggregate replaced the Sort followed by GroupAggregate at about 1/10 the cost. It probably only took the first couple hundred MB of work_mem to do this, or less given that you were at the default originally.\nNote how the estimated cost on the latter is 1.6 million, and it is 11 million in the first one.You won't get a large table aggregate significantly faster than this -- you're asking it to scan through 53 million records and aggregate.  An explain analyze will be somewhat instructive to help identify if there is more I/O or CPU bound overall as we can compare the estimated cost with the actual times, but this probably won't get very far.\nAfter that, inserting 16M rows requires rather different tuning and bottleneck identification.On Thu, Aug 21, 2008 at 12:03 AM, Moritz Onken <[email protected]> wrote:\n\nAm 20.08.2008 um 20:06 schrieb Scott Carey:\n\n\nOk, so the problem boils down to the sort at the end.\n\nThe query up through the merge join on domain is as fast as its going to get.  The sort at the end however, should not happen ideally.  There are not that many rows returned, and it should hash_aggregate if it thinks there is enough space to do so.\n\nThe query planner is going to choose the sort > agg over the hash-agg if it estimates the total number of resulting rows to be large enough so that the hash won't fit in work_mem.   However, there seems to be another factor here based on this:\n\n\nGroupAggregate  (cost=11745105.66..12277396.\n81 rows=28704 width=12)\"\n\"  ->  Sort  (cost=11745105.66..11878034.93 rows=53171707 width=12)\"\n\n\"        Sort Key: a.\"user\", b.category\"\n\"        ->  Merge Join  (cost=149241.25..1287278.89 rows=53171707 width=12)\"\n\n\"              Merge Cond: (b.domain = a.domain)\"\n\n\nThe planner actually thinks there will only be 28704 rows returned of width 12.  But it chooses to sort 53 million rows before aggregating.  Thats either a bug or there's something else wrong here.   That is the wrong way to aggregate those results no matter how much work_mem you have unless I'm completely missing something...\n\nYou can try rearranging the query just to see if you can work around this.  
What happens if you compare the explain on:\n\nselect\n a.\"user\", b.category, sum(1.0/b.cat_count)::float\n from result a, domain_categories b\n where a.\"domain\" = b.\"domain\"\n and b.depth < 4\n and a.results > 100\n and a.\"user\" < 30000\n group by a.\"user\", b.category\n\n\n\n\n\"HashAggregate  (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n\"  ->  Merge Join  (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n\"        Merge Cond: (b.domain = a.domain)\"\n\"        ->  Index Scan using domain_categories_domain on domain_categories b  (cost=0.00..421716.32 rows=5112568 width=12)\"\n\"              Filter: (depth < 4)\"\n\"        ->  Sort  (cost=148415.16..148513.60 rows=39376 width=8)\"\n\"              Sort Key: a.domain\"\n\"              ->  Bitmap Heap Scan on result a  (cost=1249.93..145409.79 rows=39376 width=8)\"\n\"                    Recheck Cond: (\"user\" < 30000)\"\n\"                    Filter: (results > 100)\"\n\"                    ->  Bitmap Index Scan on result_user_idx  (cost=0.00..1240.08 rows=66881 width=0)\"\n\"                          Index Cond: (\"user\" < 30000)\"\n\n\n\n\nto\n\nselect\n c.\"user\", c.category, sum(1.0/c.cat_count)::float\n from (select a.\"user\", b.category, b.cat_count\n   from result a, domain_categories b\n     where a.\"domain\" = b.\"domain\"\n       and b.depth < 4\n       and a.results > 100\n       and a.\"user\" < 30000 ) c\n  group by c.\"user\", c.category\n\n\n\n\n\"HashAggregate  (cost=1685527.69..1686101.77 rows=28704 width=12)\"\n\"  ->  Merge Join  (cost=148702.25..1286739.89 rows=53171707 width=12)\"\n\"        Merge Cond: (b.domain = a.domain)\"\n\"        ->  Index Scan using domain_categories_domain on domain_categories b  (cost=0.00..421716.32 rows=5112568 width=12)\"\n\"              Filter: (depth < 4)\"\n\"        ->  Sort  (cost=148415.16..148513.60 rows=39376 width=8)\"\n\"              Sort Key: a.domain\"\n\"              ->  Bitmap Heap Scan on result a  (cost=1249.93..145409.79 rows=39376 width=8)\"\n\"                    Recheck Cond: (\"user\" < 30000)\"\n\"                    Filter: (results > 100)\"\n\"                    ->  Bitmap Index Scan on result_user_idx  (cost=0.00..1240.08 rows=66881 width=0)\"\n\"                          Index Cond: (\"user\" < 30000)\"\n\n\n\n\nIt shouldn't make a difference, but I've seen things like this help before so its worth a try.  Make sure work_mem is reasonably sized for this test.\n\n\nIt's exactly the same. work_mem was set to 3000MB.\n\n\n\n\nAnother thing that won't be that fast, but may avoid the sort, is to select the subselection above into a temporary table, analyze it, and then do the outer select.  Make sure your settings for temporary space (temp_buffers in 8.3) are large enough for the intermediate results (700MB should do it).  That won't be that fast, but it will most likely be faster than sorting 50 million + rows.  
There are lots of problems with this approach but it may be worth the experiment.\n\n\n\nI'll try this.\n\nThanks so far!\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 21 Aug 2008 07:39:24 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 21.08.2008 um 16:39 schrieb Scott Carey:\n\n> It looks to me like the work_mem did have an effect.\n>\n> Your earlier queries had a sort followed by group aggregate at the \n> top, and now its a hash-aggregate. So the query plan DID change. \n> That is likely where the first 10x performance gain came from.\n\nBut it didn't change as I added the sub select.\nThank you guys very much. The speed is now ok and I hope I can finish \ntihs work soon.\n\nBut there is another problem. If I run this query without the \nlimitation of the user id, postgres consumes about 150GB of disk space \nand dies with\n\nERROR: could not write block 25305351 of temporary file: No space \nleft on device\n\nAfter that the avaiable disk space is back to normal.\n\nIs this normal? The resulting table (setup1) is not bigger than 1.5 GB.\n\nmoritz\n", "msg_date": "Thu, 21 Aug 2008 17:07:54 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken <[email protected]> wrote:\n>\n> Am 21.08.2008 um 16:39 schrieb Scott Carey:\n>\n>> It looks to me like the work_mem did have an effect.\n>>\n>> Your earlier queries had a sort followed by group aggregate at the top,\n>> and now its a hash-aggregate. So the query plan DID change. That is likely\n>> where the first 10x performance gain came from.\n>\n> But it didn't change as I added the sub select.\n> Thank you guys very much. The speed is now ok and I hope I can finish tihs\n> work soon.\n>\n> But there is another problem. If I run this query without the limitation of\n> the user id, postgres consumes about 150GB of disk space and dies with\n>\n> ERROR: could not write block 25305351 of temporary file: No space left on\n> device\n>\n> After that the avaiable disk space is back to normal.\n>\n> Is this normal? The resulting table (setup1) is not bigger than 1.5 GB.\n\nMaybe the result is too big. if you explain the query, you should get\nan estimate of rows returned. If this is the case, you need to\nrethink your query or do something like a cursor to browse the result.\n\nmerlin\n", "msg_date": "Thu, 21 Aug 2008 13:08:14 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "\nAm 21.08.2008 um 19:08 schrieb Merlin Moncure:\n\n> On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken \n> <[email protected]> wrote:\n>>\n>> Am 21.08.2008 um 16:39 schrieb Scott Carey:\n>>\n>>> It looks to me like the work_mem did have an effect.\n>>>\n>>> Your earlier queries had a sort followed by group aggregate at the \n>>> top,\n>>> and now its a hash-aggregate. So the query plan DID change. That \n>>> is likely\n>>> where the first 10x performance gain came from.\n>>\n>> But it didn't change as I added the sub select.\n>> Thank you guys very much. The speed is now ok and I hope I can \n>> finish tihs\n>> work soon.\n>>\n>> But there is another problem. 
If I run this query without the \n>> limitation of\n>> the user id, postgres consumes about 150GB of disk space and dies \n>> with\n>>\n>> ERROR: could not write block 25305351 of temporary file: No space \n>> left on\n>> device\n>>\n>> After that the avaiable disk space is back to normal.\n>>\n>> Is this normal? The resulting table (setup1) is not bigger than 1.5 \n>> GB.\n>\n> Maybe the result is too big. if you explain the query, you should get\n> an estimate of rows returned. If this is the case, you need to\n> rethink your query or do something like a cursor to browse the result.\n>\n> merlin\n\nThere will be a few million rows. But I don't understand why these rows\nbloat up so much. If the query is done the new table is about 1 GB in \nsize.\nBut while the query is running it uses >150GB of disk space.\n\nmoritz\n\n", "msg_date": "Fri, 22 Aug 2008 08:31:56 +0200", "msg_from": "Moritz Onken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with a lot of data" }, { "msg_contents": "On Fri, Aug 22, 2008 at 2:31 AM, Moritz Onken <[email protected]> wrote:\n> Am 21.08.2008 um 19:08 schrieb Merlin Moncure:\n>\n>> On Thu, Aug 21, 2008 at 11:07 AM, Moritz Onken <[email protected]>\n>> wrote:\n>>>\n>>> Am 21.08.2008 um 16:39 schrieb Scott Carey:\n>>>\n>>>> It looks to me like the work_mem did have an effect.\n>>>>\n>>>> Your earlier queries had a sort followed by group aggregate at the top,\n>>>> and now its a hash-aggregate. So the query plan DID change. That is\n>>>> likely\n>>>> where the first 10x performance gain came from.\n>>>\n>>> But it didn't change as I added the sub select.\n>>> Thank you guys very much. The speed is now ok and I hope I can finish\n>>> tihs\n>>> work soon.\n>>>\n>>> But there is another problem. If I run this query without the limitation\n>>> of\n>>> the user id, postgres consumes about 150GB of disk space and dies with\n>>>\n>>> ERROR: could not write block 25305351 of temporary file: No space left\n>>> on\n>>> device\n>>>\n>>> After that the avaiable disk space is back to normal.\n>>>\n>>> Is this normal? The resulting table (setup1) is not bigger than 1.5 GB.\n>>\n>> Maybe the result is too big. if you explain the query, you should get\n>> an estimate of rows returned. If this is the case, you need to\n>> rethink your query or do something like a cursor to browse the result.\n>>\n>> merlin\n>\n> There will be a few million rows. But I don't understand why these rows\n> bloat up so much. If the query is done the new table is about 1 GB in size.\n> But while the query is running it uses >150GB of disk space.\n\ncan we see explain?\n\nmerlin\n", "msg_date": "Fri, 22 Aug 2008 13:57:28 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with a lot of data" } ]
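To make the tuning advice in the thread above concrete, here is a minimal sketch of the approach Scott Carey describes: raise work_mem for the session, materialize the join into a temporary table, ANALYZE it, and aggregate the much smaller intermediate result. Table and column names (result, domain_categories, setup1) are taken from the thread; the temporary table name and the exact work_mem value are illustrative assumptions, not the poster's final configuration.

    -- Session-local setting: enough memory for the planner to prefer a
    -- HashAggregate over Sort + GroupAggregate (the 500MB value is the
    -- figure suggested in the thread, adjust to your RAM).
    SET work_mem = '500MB';

    -- Materialize the join once, as suggested in the thread.
    CREATE TEMPORARY TABLE tmp_user_cat AS
    SELECT a."user", b.category, b.cat_count
      FROM result a
      JOIN domain_categories b ON a."domain" = b."domain"
     WHERE b.depth < 4
       AND a.results > 100;

    -- Give the planner statistics on the intermediate result.
    ANALYZE tmp_user_cat;

    -- Aggregate the (much smaller) materialized rows.
    INSERT INTO setup1
    SELECT "user", category, sum(1.0 / cat_count)::float
      FROM tmp_user_cat
     GROUP BY "user", category;

As noted in the thread, the extra write of the temporary table is not free, but it is likely cheaper than sorting 50+ million joined rows before aggregating, and the intermediate table also needs enough temporary space on disk.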
[ { "msg_contents": "Hi all,\n\nWe started an attempt to slice the data we've been collecting in\nanother way, to show the results of software vs. hardware RAID:\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide#Hardware_vs._Software_Raid\n\nThe angle we're trying to show here is the processor utilization and\ni/o throughput for a given file system and raid configuration. I\nwasn't sure about the best way to present it, so this is how it looks\nso far. Click on the results for a chart of the aggregate processor\nutilization for the test.\n\nComments, suggestions, criticisms, et al. welcome.\n\nRegards,\nMark\n", "msg_date": "Tue, 19 Aug 2008 22:23:16 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Software vs. Hardware RAID Data" }, { "msg_contents": "On Tue, 19 Aug 2008, Mark Wong wrote:\n\n> Hi all,\n>\n> We started an attempt to slice the data we've been collecting in\n> another way, to show the results of software vs. hardware RAID:\n>\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide#Hardware_vs._Software_Raid\n>\n> The angle we're trying to show here is the processor utilization and\n> i/o throughput for a given file system and raid configuration. I\n> wasn't sure about the best way to present it, so this is how it looks\n> so far. Click on the results for a chart of the aggregate processor\n> utilization for the test.\n>\n> Comments, suggestions, criticisms, et al. welcome.\n\nit's really good to show cpu utilization as well as throughput, but how \nabout showing the cpu utilization as %cpu per MB/s (possibly with a flag \nto indicate any entries that look like they may have hit cpu limits)\n\nwhy did you use 4M stripe size on the software raid? especially on raid 5 \nthis seems like a lot of data to have to touch when making an update.\n\nDavid Lang\n", "msg_date": "Tue, 19 Aug 2008 22:49:48 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Software vs. Hardware RAID Data" }, { "msg_contents": "Mark Wong wrote:\n> Hi all,\n> \n> We started an attempt to slice the data we've been collecting in\n> another way, to show the results of software vs. hardware RAID:\n> \n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide#Hardware_vs._Software_Raid\n> \n> Comments, suggestions, criticisms, et al. welcome.\n\n\nThe link to the graph for \"Two Disk Software RAID-0 (64KB stripe)\" \npoints to the wrong graph, hraid vs sraid.\n\n\n-- \nTommy Gildseth\n", "msg_date": "Wed, 20 Aug 2008 09:53:37 +0200", "msg_from": "Tommy Gildseth <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Software vs. Hardware RAID Data" }, { "msg_contents": "On Tue, Aug 19, 2008 at 10:49 PM, <[email protected]> wrote:\n> On Tue, 19 Aug 2008, Mark Wong wrote:\n>\n>> Hi all,\n>>\n>> We started an attempt to slice the data we've been collecting in\n>> another way, to show the results of software vs. hardware RAID:\n>>\n>>\n>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide#Hardware_vs._Software_Raid\n>>\n>> The angle we're trying to show here is the processor utilization and\n>> i/o throughput for a given file system and raid configuration. I\n>> wasn't sure about the best way to present it, so this is how it looks\n>> so far. Click on the results for a chart of the aggregate processor\n>> utilization for the test.\n>>\n>> Comments, suggestions, criticisms, et al. 
welcome.\n>\n> it's really good to show cpu utilization as well as throughput, but how\n> about showing the cpu utilization as %cpu per MB/s (possibly with a flag to\n> indicate any entries that look like they may have hit cpu limits)\n\nOk, we'll add that and see how it looks.\n\n> why did you use 4M stripe size on the software raid? especially on raid 5\n> this seems like a lot of data to have to touch when making an update.\n\nI'm sort of taking a shotgun approach, but ultimately we hope to show\nwhether there is significant impact of the stripe width relative to\nthe database blocksize.\n\nRegards,\nMark\n", "msg_date": "Wed, 20 Aug 2008 08:02:56 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Software vs. Hardware RAID Data" }, { "msg_contents": "On Wed, Aug 20, 2008 at 12:53 AM, Tommy Gildseth\n<[email protected]> wrote:\n> Mark Wong wrote:\n>>\n>> Hi all,\n>>\n>> We started an attempt to slice the data we've been collecting in\n>> another way, to show the results of software vs. hardware RAID:\n>>\n>>\n>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide#Hardware_vs._Software_Raid\n>>\n>> Comments, suggestions, criticisms, et al. welcome.\n>\n>\n> The link to the graph for \"Two Disk Software RAID-0 (64KB stripe)\" points to\n> the wrong graph, hraid vs sraid.\n\nThanks, I think I have it right this time.\n\nRegards,\nMark\n", "msg_date": "Wed, 20 Aug 2008 08:05:54 -0700", "msg_from": "\"Mark Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Software vs. Hardware RAID Data" } ]
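One suggestion in the thread above is to present CPU utilization as %CPU per MB/s rather than as separate curves. Purely as an illustration of that derived metric, here is a sketch assuming the sysstat samples were loaded into a database table; the table and column names are hypothetical (the real data lives in the wiki charts, not in a table).

    -- samples(config text, cpu_pct numeric, read_mb_s numeric, write_mb_s numeric)
    SELECT config,
           avg(cpu_pct) / nullif(avg(read_mb_s + write_mb_s), 0)
               AS cpu_pct_per_mb_s
      FROM samples
     GROUP BY config
     ORDER BY cpu_pct_per_mb_s;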
[ { "msg_contents": "Hi all,\n\nLooks like I found a bug with views optimization:\n\nFor example create a test view:\n\nCREATE OR REPLACE VIEW bar AS\nSELECT *\nFROM (\n (\n SELECT calldate, duration, billsec, get_asterisk_cdr_caller_id(accountcode) AS caller_id\n FROM asterisk_cdr\n ) UNION ALL (\n SELECT start_time, get_interval_seconds(completed_time-start_time), get_interval_seconds(answered_time-start_time), NULL\n FROM asterisk_huntgroups_calls\n )\n) AS foo;\n\nAnd perform select on it:\n\nEXPLAIN SELECT * FROM bar WHERE caller_id = 1007;\n\nTheoretically second UNION statement shouldn't be executed at all (because 1007 != NULL)... but postgres performs seq-scans on both UNION parts.\n\nasteriskpilot=> EXPLAIN ANALYZE SELECT * FROM bar WHERE caller_id = 1007;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan foo (cost=0.00..94509.49 rows=7303 width=28) (actual time=12249.473..14841.648 rows=25 loops=1)\n Filter: (caller_id = 1007)\n -> Append (cost=0.00..76252.26 rows=1460578 width=24) (actual time=0.065..13681.814 rows=1460405 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..57301.22 rows=1120410 width=20) (actual time=0.064..10427.353 rows=1120237 loops=1)\n -> Seq Scan on asterisk_cdr (cost=0.00..46097.12 rows=1120410 width=20) (actual time=0.059..8326.974 rows=1120237 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..18951.04 rows=340168 width=24) (actual time=0.034..1382.653 rows=340168 loops=1)\n -> Seq Scan on asterisk_huntgroups_calls (cost=0.00..15549.36 rows=340168 width=24) (actual time=0.031..863.529 rows=340168 loops=1)\n Total runtime: 14841.739 ms\n(8 rows)\n\n\nBut if we wrap this NULL value into the _IMMUTABLE RETURNS NULL ON NULL INPUT_ function postgres handle this view properly\n\nasteriskpilot=> EXPLAIN SELECT * FROM bar WHERE caller_id = 1007;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Append (cost=20.21..15663.02 rows=1015 width=24)\n -> Subquery Scan \"*SELECT* 1\" (cost=20.21..3515.32 rows=1014 width=20)\n -> Bitmap Heap Scan on asterisk_cdr (cost=20.21..3505.18 rows=1014 width=20)\n Recheck Cond: (get_asterisk_cdr_caller_id(accountcode) = 1007)\n -> Bitmap Index Scan on asterisk_cdr_caller_id (cost=0.00..19.96 rows=1014 width=0)\n Index Cond: (get_asterisk_cdr_caller_id(accountcode) = 1007)\n -> Result (cost=0.00..12147.69 rows=1 width=24)\n One-Time Filter: NULL::boolean\n -> Seq Scan on asterisk_huntgroups_calls (cost=0.00..12147.68 rows=1 width=24)\n\n\n\n\n\n________________________________\nThis message (including attachments) is private and confidential. 
If you have received this message in error, please notify us and remove it from your system.", "msg_date": "Wed, 20 Aug 2008 02:17:24 -0700", "msg_from": "Sergey Hripchenko <[email protected]>", "msg_from_op": true, "msg_subject": "pgsql do not handle NULL constants in the view" }, { "msg_contents": "Sergey Hripchenko <[email protected]> writes:\n> CREATE OR REPLACE VIEW bar AS\n> SELECT *\n> FROM (\n> (\n> SELECT calldate, duration, billsec, get_asterisk_cdr_caller_id(accountcode) AS caller_id\n> FROM asterisk_cdr\n> ) UNION ALL (\n> SELECT start_time, get_interval_seconds(completed_time-start_time), get_interval_seconds(answered_time-start_time), NULL\n> FROM asterisk_huntgroups_calls\n> )\n> ) AS foo;\n\nTry casting the NULL to integer (or whatever the datatype of the other\nunion arm is) explicitly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 20 Aug 2008 08:54:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql do not handle NULL constants in the view " }, { "msg_contents": "Thx it helps.\n\nShame on me %) I forgot that NULL itself has no type, and thought that each constant in the view are casted to the resulting type at the creation time.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: Wednesday, August 20, 2008 4:54 PM\nTo: Sergey Hripchenko\nCc: [email protected]\nSubject: Re: [PERFORM] pgsql do not handle NULL constants in the view\n\nSergey Hripchenko <[email protected]> writes:\n> CREATE OR REPLACE VIEW bar AS\n> SELECT *\n> FROM (\n> (\n> SELECT calldate, duration, billsec, get_asterisk_cdr_caller_id(accountcode) AS caller_id\n> FROM asterisk_cdr\n> ) UNION ALL (\n> SELECT start_time, get_interval_seconds(completed_time-start_time), get_interval_seconds(answered_time-start_time), NULL\n> FROM asterisk_huntgroups_calls\n> )\n> ) AS foo;\n\nTry casting the NULL to integer (or whatever the datatype of the other\nunion arm is) explicitly.\n\n regards, tom lane\n\nThis message (including attachments) is private and confidential. If you have received this message in error, please notify us and remove it from your system.\n", "msg_date": "Wed, 20 Aug 2008 06:49:00 -0700", "msg_from": "Sergey Hripchenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql do not handle NULL constants in the view " } ]
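For reference, a minimal sketch of the fix suggested above: give the bare NULL in the second UNION ALL branch an explicit type, so the planner can push the caller_id restriction into both branches and discard the one that can never match. View, function and column names are taken from the report; the integer cast is an assumption based on the comparison with 1007 — use whatever type get_asterisk_cdr_caller_id() actually returns.

CREATE OR REPLACE VIEW bar AS
SELECT *
FROM (
    (
        SELECT calldate, duration, billsec,
               get_asterisk_cdr_caller_id(accountcode) AS caller_id
        FROM asterisk_cdr
    ) UNION ALL (
        SELECT start_time,
               get_interval_seconds(completed_time - start_time),
               get_interval_seconds(answered_time - start_time),
               NULL::integer          -- typed constant instead of a bare, untyped NULL
        FROM asterisk_huntgroups_calls
    )
) AS foo;

-- With the typed constant, EXPLAIN should show the second branch collapsing to a
-- Result node with a One-Time Filter, as in the second plan quoted above.
EXPLAIN SELECT * FROM bar WHERE caller_id = 1007;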
[ { "msg_contents": "Forgot to add\n\nasteriskpilot=> SELECT version();\n version\n--------------------------------------------------------------------------------------------------------\n PostgreSQL 8.2.9 on i386-redhat-linux-gnu, compiled by GCC gcc (GCC) 4.1.2 20070925 (Red Hat 4.1.2-27)\n(1 row)\n\nasteriskpilot=> \\q\n[root@ast-sql data]# uname -a\nLinux ast-sql.intermedia.net 2.6.23.1-21.fc7 #1 SMP Thu Nov 1 21:09:24 EDT 2007 i686 i686 i386 GNU/Linux\n [root@ast-sql data]# cat /etc/redhat-release\nFedora release 7 (Moonshine)\n[root@ast-sql data]# rpm -qa | grep postgres\npostgresql-8.2.9-1.fc7\npostgresql-libs-8.2.9-1.fc7\npostgresql-server-8.2.9-1.fc7\npostgresql-contrib-8.2.9-1.fc7\npostgresql-devel-8.2.9-1.fc7\n\n________________________________\nFrom: Sergey Hripchenko\nSent: Wednesday, August 20, 2008 1:17 PM\nTo: '[email protected]'\nSubject: pgsql do not handle NULL constants in the view\n\nHi all,\n\nLooks like I found a bug with views optimization:\n\nFor example create a test view:\n\nCREATE OR REPLACE VIEW bar AS\nSELECT *\nFROM (\n (\n SELECT calldate, duration, billsec, get_asterisk_cdr_caller_id(accountcode) AS caller_id\n FROM asterisk_cdr\n ) UNION ALL (\n SELECT start_time, get_interval_seconds(completed_time-start_time), get_interval_seconds(answered_time-start_time), NULL\n FROM asterisk_huntgroups_calls\n )\n) AS foo;\n\nAnd perform select on it:\n\nEXPLAIN SELECT * FROM bar WHERE caller_id = 1007;\n\nTheoretically second UNION statement shouldn't be executed at all (because 1007 != NULL)... but postgres performs seq-scans on both UNION parts.\n\nasteriskpilot=> EXPLAIN ANALYZE SELECT * FROM bar WHERE caller_id = 1007;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Subquery Scan foo (cost=0.00..94509.49 rows=7303 width=28) (actual time=12249.473..14841.648 rows=25 loops=1)\n Filter: (caller_id = 1007)\n -> Append (cost=0.00..76252.26 rows=1460578 width=24) (actual time=0.065..13681.814 rows=1460405 loops=1)\n -> Subquery Scan \"*SELECT* 1\" (cost=0.00..57301.22 rows=1120410 width=20) (actual time=0.064..10427.353 rows=1120237 loops=1)\n -> Seq Scan on asterisk_cdr (cost=0.00..46097.12 rows=1120410 width=20) (actual time=0.059..8326.974 rows=1120237 loops=1)\n -> Subquery Scan \"*SELECT* 2\" (cost=0.00..18951.04 rows=340168 width=24) (actual time=0.034..1382.653 rows=340168 loops=1)\n -> Seq Scan on asterisk_huntgroups_calls (cost=0.00..15549.36 rows=340168 width=24) (actual time=0.031..863.529 rows=340168 loops=1)\n Total runtime: 14841.739 ms\n(8 rows)\n\n\nBut if we wrap this NULL value into the _IMMUTABLE RETURNS NULL ON NULL INPUT_ function postgres handle this view properly\n\nasteriskpilot=> EXPLAIN SELECT * FROM bar WHERE caller_id = 1007;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------\n Append (cost=20.21..15663.02 rows=1015 width=24)\n -> Subquery Scan \"*SELECT* 1\" (cost=20.21..3515.32 rows=1014 width=20)\n -> Bitmap Heap Scan on asterisk_cdr (cost=20.21..3505.18 rows=1014 width=20)\n Recheck Cond: (get_asterisk_cdr_caller_id(accountcode) = 1007)\n -> Bitmap Index Scan on asterisk_cdr_caller_id (cost=0.00..19.96 rows=1014 width=0)\n Index Cond: (get_asterisk_cdr_caller_id(accountcode) = 1007)\n -> Result (cost=0.00..12147.69 rows=1 width=24)\n One-Time Filter: NULL::boolean\n -> Seq Scan on asterisk_huntgroups_calls (cost=0.00..12147.68 rows=1 
width=24)\n\n\n\n\n\n________________________________\nThis message (including attachments) is private and confidential. If you have received this message in error, please notify us and remove it from your system.", "msg_date": "Wed, 20 Aug 2008 02:19:21 -0700", "msg_from": "Sergey Hripchenko <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgsql do not handle NULL constants in the view" } ]
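Related to the "NULL itself has no type" remark earlier in this exchange, a small diagnostic sketch: check what type the untyped constant actually gave the view column when the view was created (names from the thread; \d bar in psql shows the same information).

SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'bar'
ORDER BY ordinal_position;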
[ { "msg_contents": "Hi,\n\nCan anyone suggest the performance tips for PostgreSQL using Hibernate.\n\nOne of the queries:\n\n- PostgreSQL has INDEX concept and Hibernate also has Column INDEXes. Which\nis better among them? or creating either of them is enough? or need to\ncreate both of them?\n\nand any more performace aspects ?\n\nThanks in advance.\n\n==\nKP\n\nHi,Can anyone suggest the performance tips for PostgreSQL using Hibernate.One of the queries:- PostgreSQL has INDEX concept and Hibernate also has Column INDEXes. Which is better among them? or creating either of them is enough? or need to create both of them?\nand any more performace aspects ?Thanks in advance.==KP", "msg_date": "Wed, 20 Aug 2008 17:55:16 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti_K_K_Parisa=99?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL+Hibernate Performance" }, { "msg_contents": "On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\r\n> Hi,\r\n> \r\n> Can anyone suggest the performance tips for PostgreSQL using\r\n> Hibernate.\r\n> \r\n> One of the queries:\r\n> \r\n> - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\r\n> Which is better among them? or creating either of them is enough? or\r\n> need to create both of them?\r\n> \r\n> and any more performace aspects ?\r\n\r\nHibernate is a library for accessing a database such as PostgreSQL. It\r\ndoes not offer any add-on capabilities to the storage layer itself. So\r\nwhen you tell Hibernate that a column should be indexed, all that it\r\ndoes create the associated PostgreSQL index when you ask Hibernate to\r\nbuild the DB tables for you. This is part of Hibernate's effort to\r\nprotect you from the implementation details of the underlying database,\r\nin order to make supporting multiple databases with the same application\r\ncode easier.\r\n\r\nSo there is no performance difference between a PG index and a Hibernate\r\ncolumn index, because they are the same thing.\r\n\r\nThe most useful Hibernate performance-tuning advice isn't PG-specific at\r\nall, there are just things that you need to keep in mind when developing\r\nfor any database to avoid pathologically bad performance; those tips are\r\nreally beyond the scope of this mailing list, Google is your friend\r\nhere.\r\n\r\nI've been the architect for an enterprise-class application for a few\r\nyears now using PostgreSQL and Hibernate together in a\r\nperformance-critical context, and honestly I can't think of one time\r\nthat I've been bitten by a PG-specific performance issue (a lot of\r\nperformance issues with Hibernate that affected all databases though;\r\nyou need to know what you're doing to make Hibernate apps that run fast.\r\nIf you do run into problems, you can figure out the actual SQL that\r\nHibernate is issuing and do the normal PostgreSQL explain analyze on it;\r\nusually caused by a missing index.\r\n\r\n-- Mark\r\n\r\n\r\n", "msg_date": "Wed, 20 Aug 2008 07:40:13 -0700", "msg_from": "\"Mark Lewis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "Hi Mark,\n\nThank you very much for the information. I will analyse the DB structure and\ncreate indexes on PG directly.\nAre you using any connection pooling like DBCP? 
or PG POOL?\n\nRegards, KP\n\n\nOn Wed, Aug 20, 2008 at 8:05 PM, Mark Lewis <[email protected]> wrote:\n\n> On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> > Hi,\n> >\n> > Can anyone suggest the performance tips for PostgreSQL using\n> > Hibernate.\n> >\n> > One of the queries:\n> >\n> > - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\n> > Which is better among them? or creating either of them is enough? or\n> > need to create both of them?\n> >\n> > and any more performace aspects ?\n>\n> Hibernate is a library for accessing a database such as PostgreSQL. It\n> does not offer any add-on capabilities to the storage layer itself. So\n> when you tell Hibernate that a column should be indexed, all that it\n> does create the associated PostgreSQL index when you ask Hibernate to\n> build the DB tables for you. This is part of Hibernate's effort to\n> protect you from the implementation details of the underlying database,\n> in order to make supporting multiple databases with the same application\n> code easier.\n>\n> So there is no performance difference between a PG index and a Hibernate\n> column index, because they are the same thing.\n>\n> The most useful Hibernate performance-tuning advice isn't PG-specific at\n> all, there are just things that you need to keep in mind when developing\n> for any database to avoid pathologically bad performance; those tips are\n> really beyond the scope of this mailing list, Google is your friend\n> here.\n>\n> I've been the architect for an enterprise-class application for a few\n> years now using PostgreSQL and Hibernate together in a\n> performance-critical context, and honestly I can't think of one time\n> that I've been bitten by a PG-specific performance issue (a lot of\n> performance issues with Hibernate that affected all databases though;\n> you need to know what you're doing to make Hibernate apps that run fast.\n> If you do run into problems, you can figure out the actual SQL that\n> Hibernate is issuing and do the normal PostgreSQL explain analyze on it;\n> usually caused by a missing index.\n>\n> -- Mark\n>\n\n\n\n-- \n\nBest Regards\nKranti Kiran Kumar Parisa\nM: +91 - 9391 - 438 - 738\n+91 - 9849 - 625 - 625\n\nHi Mark,Thank you very much for the information. I will analyse the DB structure and create indexes on PG directly.Are you using any connection pooling like DBCP? or PG POOL?Regards, KP\nOn Wed, Aug 20, 2008 at 8:05 PM, Mark Lewis <[email protected]> wrote:\nOn Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> Hi,\n>\n> Can anyone suggest the performance tips for PostgreSQL using\n> Hibernate.\n>\n> One of the queries:\n>\n> - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\n> Which is better among them? or creating either of them is enough? or\n> need to create both of them?\n>\n> and any more performace aspects ?\n\nHibernate is a library for accessing a database such as PostgreSQL.  It\ndoes not offer any add-on capabilities to the storage layer itself.  So\nwhen you tell Hibernate that a column should be indexed, all that it\ndoes create the associated PostgreSQL index when you ask Hibernate to\nbuild the DB tables for you.  
This is part of Hibernate's effort to\nprotect you from the implementation details of the underlying database,\nin order to make supporting multiple databases with the same application\ncode easier.\n\nSo there is no performance difference between a PG index and a Hibernate\ncolumn index, because they are the same thing.\n\nThe most useful Hibernate performance-tuning advice isn't PG-specific at\nall, there are just things that you need to keep in mind when developing\nfor any database to avoid pathologically bad performance; those tips are\nreally beyond the scope of this mailing list, Google is your friend\nhere.\n\nI've been the architect for an enterprise-class application for a few\nyears now using PostgreSQL and Hibernate together in a\nperformance-critical context, and honestly I can't think of one time\nthat I've been bitten by a PG-specific performance issue (a lot of\nperformance issues with Hibernate that affected all databases though;\nyou need to know what you're doing to make Hibernate apps that run fast.\nIf you do run into problems, you can figure out the actual SQL that\nHibernate is issuing and do the normal PostgreSQL explain analyze on it;\nusually caused by a missing index.\n\n-- Mark\n-- Best RegardsKranti Kiran Kumar ParisaM: +91 - 9391 - 438 - 738 +91 - 9849 - 625 - 625", "msg_date": "Wed, 20 Aug 2008 20:32:05 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti_K_K_Parisa=99?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "The only thing thats bitten me about hibernate + postgres is that when\ninserting into partitioned tables, postgres does not reply with the number\nof rows that hibernate expected. My current (not great) solution is to\ndefine a specific SQLInsert annotation and tell it not to do any checking\nlike so:\n\n@SQLInsert(sql=\"insert into bigmetric (account_id, a, b, timestamp, id)\nvalues (?, ?, ?, ?, ?)\", check=ResultCheckStyle.NONE)\n\nI just steel the sql from the SQL from hibernate's logs.\n\n\n\nOn Wed, Aug 20, 2008 at 10:40 AM, Mark Lewis <[email protected]> wrote:\n\n> On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> > Hi,\n> >\n> > Can anyone suggest the performance tips for PostgreSQL using\n> > Hibernate.\n> >\n> > One of the queries:\n> >\n> > - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\n> > Which is better among them? or creating either of them is enough? or\n> > need to create both of them?\n> >\n> > and any more performace aspects ?\n>\n> Hibernate is a library for accessing a database such as PostgreSQL. It\n> does not offer any add-on capabilities to the storage layer itself. So\n> when you tell Hibernate that a column should be indexed, all that it\n> does create the associated PostgreSQL index when you ask Hibernate to\n> build the DB tables for you. 
This is part of Hibernate's effort to\n> protect you from the implementation details of the underlying database,\n> in order to make supporting multiple databases with the same application\n> code easier.\n>\n> So there is no performance difference between a PG index and a Hibernate\n> column index, because they are the same thing.\n>\n> The most useful Hibernate performance-tuning advice isn't PG-specific at\n> all, there are just things that you need to keep in mind when developing\n> for any database to avoid pathologically bad performance; those tips are\n> really beyond the scope of this mailing list, Google is your friend\n> here.\n>\n> I've been the architect for an enterprise-class application for a few\n> years now using PostgreSQL and Hibernate together in a\n> performance-critical context, and honestly I can't think of one time\n> that I've been bitten by a PG-specific performance issue (a lot of\n> performance issues with Hibernate that affected all databases though;\n> you need to know what you're doing to make Hibernate apps that run fast.\n> If you do run into problems, you can figure out the actual SQL that\n> Hibernate is issuing and do the normal PostgreSQL explain analyze on it;\n> usually caused by a missing index.\n>\n> -- Mark\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe only thing thats bitten me about hibernate + postgres is that when inserting into partitioned tables, postgres does not reply with the number of rows that hibernate expected.  My current (not great) solution is to define a specific SQLInsert annotation and tell it not to do any checking like so:\n@SQLInsert(sql=\"insert into bigmetric (account_id, a, b, timestamp, id) values (?, ?, ?, ?, ?)\", check=ResultCheckStyle.NONE)I just steel the sql from the SQL from hibernate's logs.\nOn Wed, Aug 20, 2008 at 10:40 AM, Mark Lewis <[email protected]> wrote:\nOn Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> Hi,\n>\n> Can anyone suggest the performance tips for PostgreSQL using\n> Hibernate.\n>\n> One of the queries:\n>\n> - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\n> Which is better among them? or creating either of them is enough? or\n> need to create both of them?\n>\n> and any more performace aspects ?\n\nHibernate is a library for accessing a database such as PostgreSQL.  It\ndoes not offer any add-on capabilities to the storage layer itself.  So\nwhen you tell Hibernate that a column should be indexed, all that it\ndoes create the associated PostgreSQL index when you ask Hibernate to\nbuild the DB tables for you.  
This is part of Hibernate's effort to\nprotect you from the implementation details of the underlying database,\nin order to make supporting multiple databases with the same application\ncode easier.\n\nSo there is no performance difference between a PG index and a Hibernate\ncolumn index, because they are the same thing.\n\nThe most useful Hibernate performance-tuning advice isn't PG-specific at\nall, there are just things that you need to keep in mind when developing\nfor any database to avoid pathologically bad performance; those tips are\nreally beyond the scope of this mailing list, Google is your friend\nhere.\n\nI've been the architect for an enterprise-class application for a few\nyears now using PostgreSQL and Hibernate together in a\nperformance-critical context, and honestly I can't think of one time\nthat I've been bitten by a PG-specific performance issue (a lot of\nperformance issues with Hibernate that affected all databases though;\nyou need to know what you're doing to make Hibernate apps that run fast.\nIf you do run into problems, you can figure out the actual SQL that\nHibernate is issuing and do the normal PostgreSQL explain analyze on it;\nusually caused by a missing index.\n\n-- Mark\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 20 Aug 2008 11:04:42 -0400", "msg_from": "\"Nikolas Everett\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "creating multiple indexes on same column will effect performance?\n for example:\n\nindex1 : column1, column2, column3\nindex2: column1\nindex3: column2,\nindex4: column3\nindex5: column1,column2\n\nwhich means, i am trying fire the SQL queries keeping columns in the where\nconditions. and the possibilities are like the above.\n\nif we create such indexes will it effect on performance?\nand what is the best go in this case?\n\n\nOn Wed, Aug 20, 2008 at 8:10 PM, Mark Lewis <[email protected]> wrote:\n\n> On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> > Hi,\n> >\n> > Can anyone suggest the performance tips for PostgreSQL using\n> > Hibernate.\n> >\n> > One of the queries:\n> >\n> > - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\n> > Which is better among them? or creating either of them is enough? or\n> > need to create both of them?\n> >\n> > and any more performace aspects ?\n>\n> Hibernate is a library for accessing a database such as PostgreSQL. It\n> does not offer any add-on capabilities to the storage layer itself. So\n> when you tell Hibernate that a column should be indexed, all that it\n> does create the associated PostgreSQL index when you ask Hibernate to\n> build the DB tables for you. 
This is part of Hibernate's effort to\n> protect you from the implementation details of the underlying database,\n> in order to make supporting multiple databases with the same application\n> code easier.\n>\n> So there is no performance difference between a PG index and a Hibernate\n> column index, because they are the same thing.\n>\n> The most useful Hibernate performance-tuning advice isn't PG-specific at\n> all, there are just things that you need to keep in mind when developing\n> for any database to avoid pathologically bad performance; those tips are\n> really beyond the scope of this mailing list, Google is your friend\n> here.\n>\n> I've been the architect for an enterprise-class application for a few\n> years now using PostgreSQL and Hibernate together in a\n> performance-critical context, and honestly I can't think of one time\n> that I've been bitten by a PG-specific performance issue (a lot of\n> performance issues with Hibernate that affected all databases though;\n> you need to know what you're doing to make Hibernate apps that run fast.\n> If you do run into problems, you can figure out the actual SQL that\n> Hibernate is issuing and do the normal PostgreSQL explain analyze on it;\n> usually caused by a missing index.\n>\n> -- Mark\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n\nBest Regards\nKranti Kiran Kumar Parisa\nM: +91 - 9391 - 438 - 738\n+91 - 9849 - 625 - 625\n\ncreating multiple indexes on same column will effect performance? for example:index1 : column1, column2, column3index2: column1index3: column2,index4: column3index5: column1,column2\nwhich means, i am trying fire the SQL queries keeping columns in the where conditions. and the possibilities are like the above.if we create such indexes will it effect on performance?and what is the best go in this case?\nOn Wed, Aug 20, 2008 at 8:10 PM, Mark Lewis <[email protected]> wrote:\nOn Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> Hi,\n>\n> Can anyone suggest the performance tips for PostgreSQL using\n> Hibernate.\n>\n> One of the queries:\n>\n> - PostgreSQL has INDEX concept and Hibernate also has Column INDEXes.\n> Which is better among them? or creating either of them is enough? or\n> need to create both of them?\n>\n> and any more performace aspects ?\n\nHibernate is a library for accessing a database such as PostgreSQL.  It\ndoes not offer any add-on capabilities to the storage layer itself.  So\nwhen you tell Hibernate that a column should be indexed, all that it\ndoes create the associated PostgreSQL index when you ask Hibernate to\nbuild the DB tables for you.  
This is part of Hibernate's effort to\nprotect you from the implementation details of the underlying database,\nin order to make supporting multiple databases with the same application\ncode easier.\n\nSo there is no performance difference between a PG index and a Hibernate\ncolumn index, because they are the same thing.\n\nThe most useful Hibernate performance-tuning advice isn't PG-specific at\nall, there are just things that you need to keep in mind when developing\nfor any database to avoid pathologically bad performance; those tips are\nreally beyond the scope of this mailing list, Google is your friend\nhere.\n\nI've been the architect for an enterprise-class application for a few\nyears now using PostgreSQL and Hibernate together in a\nperformance-critical context, and honestly I can't think of one time\nthat I've been bitten by a PG-specific performance issue (a lot of\nperformance issues with Hibernate that affected all databases though;\nyou need to know what you're doing to make Hibernate apps that run fast.\nIf you do run into problems, you can figure out the actual SQL that\nHibernate is issuing and do the normal PostgreSQL explain analyze on it;\nusually caused by a missing index.\n\n-- Mark\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Best RegardsKranti Kiran Kumar ParisaM: +91 - 9391 - 438 - 738 +91 - 9849 - 625 - 625", "msg_date": "Wed, 20 Aug 2008 20:40:41 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti_K_K_Parisa=99?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:\n> creating multiple indexes on same column will effect performance?\n>  for example:\n> \n> index1 : column1, column2, column3\n> index2: column1\n> index3: column2,\n> index4: column3\n> index5: column1,column2\n\nThe sole purpose of indexes is to affect performance.\n\nHowever, if you have index1, there is no point in having index2 or index5.\n\nMatthew\n\n-- \nIsn't \"Microsoft Works\" something of a contradiction?", "msg_date": "Wed, 20 Aug 2008 16:24:07 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "The tradeoffs for multiple indexes are more or less as follows:\n\n1. Having the right indexes makes queries faster, often dramatically so.\n\n2. But more indexes makes inserts/updates slower, although generally not\ndramatically slower.\n\n3. Each index requires disk space. With several indexes, you can easily\nhave more of your disk taken up by indexes than with actual data.\n\nI would be careful to only create the indexes you need, but it's\nprobably worse to have too few indexes than too many. Depends on your\napp though.\n\n-- Mark\n\nOn Wed, 2008-08-20 at 20:40 +0530, Kranti K K Parisa™ wrote:\n> creating multiple indexes on same column will effect performance?\n> for example:\n> \n> index1 : column1, column2, column3\n> index2: column1\n> index3: column2,\n> index4: column3\n> index5: column1,column2\n> \n> which means, i am trying fire the SQL queries keeping columns in the\n> where conditions. 
and the possibilities are like the above.\n> \n> if we create such indexes will it effect on performance?\n> and what is the best go in this case?\n> \n> \n> On Wed, Aug 20, 2008 at 8:10 PM, Mark Lewis <[email protected]>\n> wrote:\n> On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> \n> \n> > Hi,\n> >\n> > Can anyone suggest the performance tips for PostgreSQL using\n> > Hibernate.\n> >\n> > One of the queries:\n> >\n> > - PostgreSQL has INDEX concept and Hibernate also has Column\n> INDEXes.\n> > Which is better among them? or creating either of them is\n> enough? or\n> > need to create both of them?\n> >\n> > and any more performace aspects ?\n> \n> \n> \n> Hibernate is a library for accessing a database such as\n> PostgreSQL. It\n> does not offer any add-on capabilities to the storage layer\n> itself. So\n> when you tell Hibernate that a column should be indexed, all\n> that it\n> does create the associated PostgreSQL index when you ask\n> Hibernate to\n> build the DB tables for you. This is part of Hibernate's\n> effort to\n> protect you from the implementation details of the underlying\n> database,\n> in order to make supporting multiple databases with the same\n> application\n> code easier.\n> \n> So there is no performance difference between a PG index and a\n> Hibernate\n> column index, because they are the same thing.\n> \n> The most useful Hibernate performance-tuning advice isn't\n> PG-specific at\n> all, there are just things that you need to keep in mind when\n> developing\n> for any database to avoid pathologically bad performance;\n> those tips are\n> really beyond the scope of this mailing list, Google is your\n> friend\n> here.\n> \n> I've been the architect for an enterprise-class application\n> for a few\n> years now using PostgreSQL and Hibernate together in a\n> performance-critical context, and honestly I can't think of\n> one time\n> that I've been bitten by a PG-specific performance issue (a\n> lot of\n> performance issues with Hibernate that affected all databases\n> though;\n> you need to know what you're doing to make Hibernate apps that\n> run fast.\n> If you do run into problems, you can figure out the actual SQL\n> that\n> Hibernate is issuing and do the normal PostgreSQL explain\n> analyze on it;\n> usually caused by a missing index.\n> \n> -- Mark\n> \n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> \n> Best Regards\n> Kranti Kiran Kumar Parisa\n> M: +91 - 9391 - 438 - 738\n> +91 - 9849 - 625 - 625\n> \n> \n", "msg_date": "Wed, 20 Aug 2008 08:24:46 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "Yes, we use connection pooling. As I recall Hibernate ships with c3p0\nconnection pooling built-in, which is what we use. We were happy enough\nwith c3p0 that we ended up moving our other non-hibernate apps over to\nit, away from DBCP.\n\npgpool does connection pooling at a socket level instead of in a local\nlibrary level, so really it's a very different thing. 
If your app is\nthe only thing talking to this database, and you don't have a\nmulti-database configuration, then it will be easier for you to use a\nJava-based connection pooling library like c3p0 or DBCP than to use\npgpool.\n\n-- Mark\n\nOn Wed, 2008-08-20 at 20:32 +0530, Kranti K K Parisa™ wrote:\n> Hi Mark,\n> \n> Thank you very much for the information. I will analyse the DB\n> structure and create indexes on PG directly.\n> Are you using any connection pooling like DBCP? or PG POOL?\n> \n> Regards, KP\n> \n> \n> On Wed, Aug 20, 2008 at 8:05 PM, Mark Lewis <[email protected]>\n> wrote:\n> \n> On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> > Hi,\n> >\n> > Can anyone suggest the performance tips for PostgreSQL using\n> > Hibernate.\n> >\n> > One of the queries:\n> >\n> > - PostgreSQL has INDEX concept and Hibernate also has Column\n> INDEXes.\n> > Which is better among them? or creating either of them is\n> enough? or\n> > need to create both of them?\n> >\n> > and any more performace aspects ?\n> \n> \n> Hibernate is a library for accessing a database such as\n> PostgreSQL. It\n> does not offer any add-on capabilities to the storage layer\n> itself. So\n> when you tell Hibernate that a column should be indexed, all\n> that it\n> does create the associated PostgreSQL index when you ask\n> Hibernate to\n> build the DB tables for you. This is part of Hibernate's\n> effort to\n> protect you from the implementation details of the underlying\n> database,\n> in order to make supporting multiple databases with the same\n> application\n> code easier.\n> \n> So there is no performance difference between a PG index and a\n> Hibernate\n> column index, because they are the same thing.\n> \n> The most useful Hibernate performance-tuning advice isn't\n> PG-specific at\n> all, there are just things that you need to keep in mind when\n> developing\n> for any database to avoid pathologically bad performance;\n> those tips are\n> really beyond the scope of this mailing list, Google is your\n> friend\n> here.\n> \n> I've been the architect for an enterprise-class application\n> for a few\n> years now using PostgreSQL and Hibernate together in a\n> performance-critical context, and honestly I can't think of\n> one time\n> that I've been bitten by a PG-specific performance issue (a\n> lot of\n> performance issues with Hibernate that affected all databases\n> though;\n> you need to know what you're doing to make Hibernate apps that\n> run fast.\n> If you do run into problems, you can figure out the actual SQL\n> that\n> Hibernate is issuing and do the normal PostgreSQL explain\n> analyze on it;\n> usually caused by a missing index.\n> \n> -- Mark\n> \n> \n> \n> -- \n> \n> Best Regards\n> Kranti Kiran Kumar Parisa\n> M: +91 - 9391 - 438 - 738\n> +91 - 9849 - 625 - 625\n> \n> \n", "msg_date": "Wed, 20 Aug 2008 08:29:43 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "Thanks Mark,\n\nWe are using DBCP and i found something about pgpool in some forum threads,\nwhich gave me queries on it. But I am clear now.\n\nOn Wed, Aug 20, 2008 at 8:59 PM, Mark Lewis <[email protected]> wrote:\n\n> Yes, we use connection pooling. As I recall Hibernate ships with c3p0\n> connection pooling built-in, which is what we use. 
We were happy enough\n> with c3p0 that we ended up moving our other non-hibernate apps over to\n> it, away from DBCP.\n>\n> pgpool does connection pooling at a socket level instead of in a local\n> library level, so really it's a very different thing. If your app is\n> the only thing talking to this database, and you don't have a\n> multi-database configuration, then it will be easier for you to use a\n> Java-based connection pooling library like c3p0 or DBCP than to use\n> pgpool.\n>\n> -- Mark\n>\n> On Wed, 2008-08-20 at 20:32 +0530, Kranti K K Parisa™ wrote:\n> > Hi Mark,\n> >\n> > Thank you very much for the information. I will analyse the DB\n> > structure and create indexes on PG directly.\n> > Are you using any connection pooling like DBCP? or PG POOL?\n> >\n> > Regards, KP\n> >\n> >\n> > On Wed, Aug 20, 2008 at 8:05 PM, Mark Lewis <[email protected]>\n> > wrote:\n> >\n> > On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n> > > Hi,\n> > >\n> > > Can anyone suggest the performance tips for PostgreSQL using\n> > > Hibernate.\n> > >\n> > > One of the queries:\n> > >\n> > > - PostgreSQL has INDEX concept and Hibernate also has Column\n> > INDEXes.\n> > > Which is better among them? or creating either of them is\n> > enough? or\n> > > need to create both of them?\n> > >\n> > > and any more performace aspects ?\n> >\n> >\n> > Hibernate is a library for accessing a database such as\n> > PostgreSQL. It\n> > does not offer any add-on capabilities to the storage layer\n> > itself. So\n> > when you tell Hibernate that a column should be indexed, all\n> > that it\n> > does create the associated PostgreSQL index when you ask\n> > Hibernate to\n> > build the DB tables for you. This is part of Hibernate's\n> > effort to\n> > protect you from the implementation details of the underlying\n> > database,\n> > in order to make supporting multiple databases with the same\n> > application\n> > code easier.\n> >\n> > So there is no performance difference between a PG index and a\n> > Hibernate\n> > column index, because they are the same thing.\n> >\n> > The most useful Hibernate performance-tuning advice isn't\n> > PG-specific at\n> > all, there are just things that you need to keep in mind when\n> > developing\n> > for any database to avoid pathologically bad performance;\n> > those tips are\n> > really beyond the scope of this mailing list, Google is your\n> > friend\n> > here.\n> >\n> > I've been the architect for an enterprise-class application\n> > for a few\n> > years now using PostgreSQL and Hibernate together in a\n> > performance-critical context, and honestly I can't think of\n> > one time\n> > that I've been bitten by a PG-specific performance issue (a\n> > lot of\n> > performance issues with Hibernate that affected all databases\n> > though;\n> > you need to know what you're doing to make Hibernate apps that\n> > run fast.\n> > If you do run into problems, you can figure out the actual SQL\n> > that\n> > Hibernate is issuing and do the normal PostgreSQL explain\n> > analyze on it;\n> > usually caused by a missing index.\n> >\n> > -- Mark\n> >\n> >\n> >\n> > --\n> >\n> > Best Regards\n> > Kranti Kiran Kumar Parisa\n> > M: +91 - 9391 - 438 - 738\n> > +91 - 9849 - 625 - 625\n> >\n> >\n>\n\n\n\n-- \n\nBest Regards\nKranti Kiran Kumar Parisa\nM: +91 - 9391 - 438 - 738\n+91 - 9849 - 625 - 625\n\nThanks Mark,We are using DBCP and i found something about pgpool in some forum threads, which gave me queries on it. 
But I am clear now.On Wed, Aug 20, 2008 at 8:59 PM, Mark Lewis <[email protected]> wrote:\nYes, we use connection pooling.  As I recall Hibernate ships with c3p0\nconnection pooling built-in, which is what we use.  We were happy enough\nwith c3p0 that we ended up moving our other non-hibernate apps over to\nit, away from DBCP.\n\npgpool does connection pooling at a socket level instead of in a local\nlibrary level, so really it's a very different thing.  If your app is\nthe only thing talking to this database, and you don't have a\nmulti-database configuration, then it will be easier for you to use a\nJava-based connection pooling library like c3p0 or DBCP than to use\npgpool.\n\n-- Mark\n\nOn Wed, 2008-08-20 at 20:32 +0530, Kranti K K Parisa™ wrote:\n> Hi Mark,\n>\n> Thank you very much for the information. I will analyse the DB\n> structure and create indexes on PG directly.\n> Are you using any connection pooling like DBCP? or PG POOL?\n>\n> Regards, KP\n>\n>\n> On Wed, Aug 20, 2008 at 8:05 PM, Mark Lewis <[email protected]>\n> wrote:\n>\n>         On Wed, 2008-08-20 at 17:55 +0530, Kranti K K Parisa™ wrote:\n>         > Hi,\n>         >\n>         > Can anyone suggest the performance tips for PostgreSQL using\n>         > Hibernate.\n>         >\n>         > One of the queries:\n>         >\n>         > - PostgreSQL has INDEX concept and Hibernate also has Column\n>         INDEXes.\n>         > Which is better among them? or creating either of them is\n>         enough? or\n>         > need to create both of them?\n>         >\n>         > and any more performace aspects ?\n>\n>\n>         Hibernate is a library for accessing a database such as\n>         PostgreSQL.  It\n>         does not offer any add-on capabilities to the storage layer\n>         itself.  So\n>         when you tell Hibernate that a column should be indexed, all\n>         that it\n>         does create the associated PostgreSQL index when you ask\n>         Hibernate to\n>         build the DB tables for you.  
This is part of Hibernate's\n>         effort to\n>         protect you from the implementation details of the underlying\n>         database,\n>         in order to make supporting multiple databases with the same\n>         application\n>         code easier.\n>\n>         So there is no performance difference between a PG index and a\n>         Hibernate\n>         column index, because they are the same thing.\n>\n>         The most useful Hibernate performance-tuning advice isn't\n>         PG-specific at\n>         all, there are just things that you need to keep in mind when\n>         developing\n>         for any database to avoid pathologically bad performance;\n>         those tips are\n>         really beyond the scope of this mailing list, Google is your\n>         friend\n>         here.\n>\n>         I've been the architect for an enterprise-class application\n>         for a few\n>         years now using PostgreSQL and Hibernate together in a\n>         performance-critical context, and honestly I can't think of\n>         one time\n>         that I've been bitten by a PG-specific performance issue (a\n>         lot of\n>         performance issues with Hibernate that affected all databases\n>         though;\n>         you need to know what you're doing to make Hibernate apps that\n>         run fast.\n>         If you do run into problems, you can figure out the actual SQL\n>         that\n>         Hibernate is issuing and do the normal PostgreSQL explain\n>         analyze on it;\n>         usually caused by a missing index.\n>\n>         -- Mark\n>\n>\n>\n> --\n>\n> Best Regards\n> Kranti Kiran Kumar Parisa\n> M: +91 - 9391 - 438 - 738\n> +91 - 9849 - 625 - 625\n>\n>\n-- Best RegardsKranti Kiran Kumar ParisaM: +91 - 9391 - 438 - 738 +91 - 9849 - 625 - 625", "msg_date": "Thu, 21 Aug 2008 12:29:38 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti_K_K_Parisa=99?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "Thanks Matthew,\n\ndoes that mean i can just have index1, index3, index4?\n\nOn Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling <[email protected]>wrote:\n\n> On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:\n>\n>> creating multiple indexes on same column will effect performance?\n>> for example:\n>>\n>> index1 : column1, column2, column3\n>> index2: column1\n>> index3: column2,\n>> index4: column3\n>> index5: column1,column2\n>>\n>\n> The sole purpose of indexes is to affect performance.\n>\n> However, if you have index1, there is no point in having index2 or index5.\n>\n> Matthew\n>\n> --\n> Isn't \"Microsoft Works\" something of a contradiction?\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n\nBest Regards\nKranti Kiran Kumar Parisa\nM: +91 - 9391 - 438 - 738\n+91 - 9849 - 625 - 625\n\nThanks Matthew,does that mean i can just have index1, index3, index4?On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling <[email protected]> wrote:\nOn Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:\n\ncreating multiple indexes on same column will effect performance?\n for example:\n\nindex1 : column1, column2, column3\nindex2: column1\nindex3: column2,\nindex4: column3\nindex5: column1,column2\n\n\nThe sole purpose of indexes is to affect performance.\n\nHowever, if you have index1, there is no point in having index2 or index5.\n\nMatthew\n\n-- \nIsn't \"Microsoft Works\" something of a 
contradiction?\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Best RegardsKranti Kiran Kumar ParisaM: +91 - 9391 - 438 - 738 +91 - 9849 - 625 - 625", "msg_date": "Thu, 21 Aug 2008 12:33:47 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti_K_K_Parisa=99?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "On Thu, 2008-08-21 at 12:33 +0530, Kranti K K Parisa™ wrote:\n\n> On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling\n> <[email protected]> wrote:\n> On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:\n> creating multiple indexes on same column will effect\n> performance?\n> for example:\n> \n> index1 : column1, column2, column3\n> index2: column1\n> index3: column2,\n> index4: column3\n> index5: column1,column2\n> \n> \n> The sole purpose of indexes is to affect performance.\n> \n> However, if you have index1, there is no point in having\n> index2 or index5.\n> \n> Matthew\n> \n> Thanks Matthew,\n> \n> does that mean i can just have index1, index3, index4?\n> \n\n(trying to get the thread back into newest-comments-last order)\n\nWell, yes you can get away with just index1, index3 and index4, and it\nmay well be the optimal solution for you, but it's not entirely\nclear-cut.\n\nIt's true that PG can use index1 to satisfy queries of the form \"SELECT\nx FROM y WHERE column1=somevalue\" or \"column1=a AND column2=b\". It will\nnot be as fast as an index lookup from a single index, but depending on\nthe size of the tables/indexes and the selectivity of leading column(s)\nin the index, the difference in speed may be trivial.\n\nOn the other hand, if you have individual indexes on column1, column2\nand column3 but no multi-column index, PG can combine the individual\nindexes in memory with a bitmap. This is not as fast as a normal lookup\nin the multi-column index would be, but can still be a big win over not\nhaving an index at all.\n\nTo make an educated decision you might want to read over some of the\nonline documentation about indexes, in particular these two sections:\n\nhttp://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html\n\nand\n\nhttp://www.postgresql.org/docs/8.3/interactive/indexes-bitmap-scans.html\n\n-- Mark\n", "msg_date": "Thu, 21 Aug 2008 07:02:00 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL+Hibernate Performance" }, { "msg_contents": "Hi Mark,\n\nThanks again for the info.\nI shall create diff sets of indexes and see the query execution time.\nAnd one of such tables might get around 700,000 records over a period of 4-5\nmonths. So what kind of other measures I need to focus on.\nI thought of the following\n1) Indexes\n2) Better Hardware (RAM & HDD)\n\nAnd how can i estimate the size of the row? 
is it like based on the data\ntypes of the columns i have in the table?\nDo you have any info to guide me on this?\n\nOn Thu, Aug 21, 2008 at 7:32 PM, Mark Lewis <[email protected]> wrote:\n\n> On Thu, 2008-08-21 at 12:33 +0530, Kranti K K Parisa™ wrote:\n>\n> > On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling\n> > <[email protected]> wrote:\n> > On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:\n> > creating multiple indexes on same column will effect\n> > performance?\n> > for example:\n> >\n> > index1 : column1, column2, column3\n> > index2: column1\n> > index3: column2,\n> > index4: column3\n> > index5: column1,column2\n> >\n> >\n> > The sole purpose of indexes is to affect performance.\n> >\n> > However, if you have index1, there is no point in having\n> > index2 or index5.\n> >\n> > Matthew\n> >\n> > Thanks Matthew,\n> >\n> > does that mean i can just have index1, index3, index4?\n> >\n>\n> (trying to get the thread back into newest-comments-last order)\n>\n> Well, yes you can get away with just index1, index3 and index4, and it\n> may well be the optimal solution for you, but it's not entirely\n> clear-cut.\n>\n> It's true that PG can use index1 to satisfy queries of the form \"SELECT\n> x FROM y WHERE column1=somevalue\" or \"column1=a AND column2=b\". It will\n> not be as fast as an index lookup from a single index, but depending on\n> the size of the tables/indexes and the selectivity of leading column(s)\n> in the index, the difference in speed may be trivial.\n>\n> On the other hand, if you have individual indexes on column1, column2\n> and column3 but no multi-column index, PG can combine the individual\n> indexes in memory with a bitmap. This is not as fast as a normal lookup\n> in the multi-column index would be, but can still be a big win over not\n> having an index at all.\n>\n> To make an educated decision you might want to read over some of the\n> online documentation about indexes, in particular these two sections:\n>\n> http://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html\n>\n> and\n>\n> http://www.postgresql.org/docs/8.3/interactive/indexes-bitmap-scans.html\n>\n> -- Mark\n>\n\n\n\n-- \n\nBest Regards\nKranti Kiran Kumar Parisa\nM: +91 - 9391 - 438 - 738\n+91 - 9849 - 625 - 625\n\nHi Mark,Thanks again for the info.I shall create diff sets of indexes and see the query execution time.And one of such tables might get around 700,000 records over a period of 4-5 months. So what kind of other measures I need to focus on. \nI thought of the following1) Indexes2) Better Hardware (RAM & HDD)And how can i estimate the size of the row?  
is it like based on the data types of the columns i have in the table?Do you have any info to guide me on this?\nOn Thu, Aug 21, 2008 at 7:32 PM, Mark Lewis <[email protected]> wrote:\nOn Thu, 2008-08-21 at 12:33 +0530, Kranti K K Parisa™ wrote:\n\n> On Wed, Aug 20, 2008 at 8:54 PM, Matthew Wakeling\n> <[email protected]> wrote:\n>         On Wed, 20 Aug 2008, Kranti K K Parisa™ wrote:\n>                 creating multiple indexes on same column will effect\n>                 performance?\n>                  for example:\n>\n>                 index1 : column1, column2, column3\n>                 index2: column1\n>                 index3: column2,\n>                 index4: column3\n>                 index5: column1,column2\n>\n>\n>         The sole purpose of indexes is to affect performance.\n>\n>         However, if you have index1, there is no point in having\n>         index2 or index5.\n>\n>         Matthew\n>\n> Thanks Matthew,\n>\n> does that mean i can just have index1, index3, index4?\n>\n\n(trying to get the thread back into newest-comments-last order)\n\nWell, yes you can get away with just index1, index3 and index4, and it\nmay well be the optimal solution for you, but it's not entirely\nclear-cut.\n\nIt's true that PG can use index1 to satisfy queries of the form \"SELECT\nx FROM y WHERE column1=somevalue\" or \"column1=a AND column2=b\".  It will\nnot be as fast as an index lookup from a single index, but depending on\nthe size of the tables/indexes and the selectivity of leading column(s)\nin the index, the difference in speed may be trivial.\n\nOn the other hand, if you have individual indexes on column1, column2\nand column3 but no multi-column index, PG can combine the individual\nindexes in memory with a bitmap.  This is not as fast as a normal lookup\nin the multi-column index would be, but can still be a big win over not\nhaving an index at all.\n\nTo make an educated decision you might want to read over some of the\nonline documentation about indexes, in particular these two sections:\n\nhttp://www.postgresql.org/docs/8.3/interactive/indexes-multicolumn.html\n\nand\n\nhttp://www.postgresql.org/docs/8.3/interactive/indexes-bitmap-scans.html\n\n-- Mark\n-- Best RegardsKranti Kiran Kumar ParisaM: +91 - 9391 - 438 - 738 +91 - 9849 - 625 - 625", "msg_date": "Fri, 22 Aug 2008 11:22:16 +0530", "msg_from": "\"=?WINDOWS-1252?Q?Kranti_K_K_Parisa=99?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL+Hibernate Performance" } ]
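Two sketches tied to the closing questions of this thread, using placeholder names (t, column1..column3) that stand in for the real schema rather than anything from the application. The first creates the reduced index set discussed above — the three-column index already serves queries on its leading column or leading pair, so separate (column1) and (column1, column2) indexes are redundant. The second answers the row-size question by measuring instead of computing from column types; the size functions shown are standard built-ins available in 8.2/8.3.

-- Reduced index set (index1, index3 and index4 in the discussion above)
CREATE INDEX t_c1_c2_c3_idx ON t (column1, column2, column3);
CREATE INDEX t_c2_idx ON t (column2);
CREATE INDEX t_c3_idx ON t (column3);

-- Approximate per-row size, measured from the data itself
SELECT avg(pg_column_size(t.*)) AS approx_row_bytes FROM t;

-- Table size on disk, without and with indexes/TOAST
SELECT pg_size_pretty(pg_relation_size('t'));
SELECT pg_size_pretty(pg_total_relation_size('t'));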
[ { "msg_contents": "Folks,\n\nIn a fresh Pg8.3.3 box, we created a 3-stripe RAID0 array.\nThe write speed is about 180MB/s, as shown by dd :\n\n# dd if=/dev/zero of=/postgres/base/teste2 bs=1024 count=5000000\n5000000+0 records in\n5000000+0 records out\n5120000000 bytes (5,1 GB) copied, 28,035 seconds, 183 MB/s\n\nBTW, /postgres is $PGDATA ...\n\nUnfortunely, we cant figure out why Postgres is not using the disks at all.\nRight now, there are 3 heavy queryes running, and the disks are almost \nsleeping...\nThe CPU is 100% used since a few hours ago. Can anyone tell why?\n\n# dstat\n----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--\nusr sys idl wai hiq siq|_read _writ|_recv _send|__in_ _out_|_int_ _csw_\n 99 1 0 0 0 0| 0 0 | 574B 1710B| 0 0 | 6 55\n 99 1 0 0 0 0| 0 0 | 290B 770B| 0 0 | 8 48\n 99 1 0 0 0 0| 0 0 | 220B 540B| 0 0 | 6 44\n 99 1 0 0 0 0| 0 0 | 354B 818B| 0 0 | 10 50\n 98 2 0 0 0 0| 0 0 |1894B 4857B| 0 0 | 27 78\n100 0 0 0 0 1| 0 0 | 348B 1096B| 0 0 | 12 45\n\nThe specs:\nIntel(R) Pentium(R) Dual CPU E2160 @ 1.80GHz\nLinux dbserver4 2.6.24-etchnhalf.1-686-bigmem #1 SMP Mon Jul 21 11:59:12 \nUTC 2008 i686 GNU/Linux\n4GB RAM\n\n-- \n\n[]´s, ACV\n\n\n", "msg_date": "Wed, 20 Aug 2008 15:30:25 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres not using array" }, { "msg_contents": "On Wed, Aug 20, 2008 at 2:30 PM, André Volpato\n<[email protected]> wrote:\n\n> The CPU is 100% used since a few hours ago. Can anyone tell why?\n\nSounds like you've just got a CPU bound query. The data may even all\nbe in cache.\n\nSome information on database size, along with EXPLAIN results for your\nqueries, would help here.\n\n-- \n- David T. Wilson\[email protected]\n", "msg_date": "Wed, 20 Aug 2008 16:02:01 -0400", "msg_from": "\"David Wilson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using array" } ]
[ { "msg_contents": "\nDavid Wilson escreveu:\n> On Wed, Aug 20, 2008 at 2:30 PM, André Volpato\n> <[email protected]> wrote:\n>\n> \n>> The CPU is 100% used since a few hours ago. Can anyone tell why?\n>> \n>\n> Sounds like you've just got a CPU bound query. The data may even all\n> be in cache.\n>\n> Some information on database size, along with EXPLAIN results for your\n> queries, would help here.\n> \n\nThe query itself runs smoothly, almost with no delay.\n\nI am into some serious hardware fault. The application runs the query, \nand store the results\nlike text files in another server. For some reason, the query pid \nremains active\nin the dbserver, and taking 100% CPU.\n\nI´m gonna dig a little more.\nMaybe the application is not able to store the results, or something.\n\n-- \n\n[]´s, ACV\n\n\n", "msg_date": "Wed, 20 Aug 2008 17:46:10 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "André Volpato escreveu:\n>\n> David Wilson escreveu:\n>> On Wed, Aug 20, 2008 at 2:30 PM, André Volpato\n>> <[email protected]> wrote:\n>>\n>> \n>>> The CPU is 100% used since a few hours ago. Can anyone tell why?\n>>> \n>>\n>> Sounds like you've just got a CPU bound query. The data may even all\n>> be in cache.\n>>\n>> Some information on database size, along with EXPLAIN results for your\n>> queries, would help here.\n>> \n>\n> The query itself runs smoothly, almost with no delay.\n>\n\nYou where right about the cache.\nAfter some experiences, I noticed that the arrays are being used, but \nonly for a short time...\nSo, what is slowing down is the CPU (Intel(R) Pentium(R) Dual CPU \nE2160 @ 1.80GHz)\n\nIn practice, I have noticed that dual 1.8 is worse than single 3.0. We \nhave another server wich\nis a Pentium D 3.0 GHz, that runs faster.\n\nExplain output:\n HashAggregate (cost=19826.23..19826.96 rows=73 width=160) (actual \ntime=11826.754..11826.754 rows=0 loops=1)\n -> Subquery Scan b2 (cost=19167.71..19817.21 rows=722 width=160) \n(actual time=11826.752..11826.752 rows=0 loops=1)\n Filter: (bds_internacoes(200805, 200806, (b2.cod)::text, \n'qtdI'::text, 'P'::bpchar) >= 1::numeric)\n -> Limit (cost=19167.71..19248.89 rows=2165 width=48) (actual \ntime=415.157..621.043 rows=28923 loops=1)\n -> HashAggregate (cost=19167.71..19248.89 rows=2165 \nwidth=48) (actual time=415.155..593.309 rows=28923 loops=1)\n -> Bitmap Heap Scan on bds_beneficiario b \n(cost=832.53..18031.61 rows=56805 width=48) (actual time=68.259..160.912 \nrows=56646 loops=1)\n Recheck Cond: ((benef_referencia >= 200805) \nAND (benef_referencia <= 200806))\n -> Bitmap Index Scan on ibds_beneficiario2 \n(cost=0.00..818.33 rows=56805 width=0) (actual time=63.293..63.293 \nrows=56646 loops=1)\n Index Cond: ((benef_referencia >= \n200805) AND (benef_referencia <= 200806))\n Total runtime: 11827.374 ms\n\nPostgres read the array in less than 1 sec, and the other 10s he takes \n100% of CPU usage,\nwich is, in this case, one of the two cores at 1.8GHz.\n\nI am a bit confused about what CPU is best for Postgres. 
Our apps is \nmostly read, with\na few connections and heavy queryes.\nDoes it worth a multi-core ?\n\n-- \n\n[]´s, ACV\n\n\n", "msg_date": "Thu, 21 Aug 2008 10:53:33 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "Andr� Volpato wrote:\n> In practice, I have noticed that dual 1.8 is worse than single 3.0. We \n> have another server wich\n> is a Pentium D 3.0 GHz, that runs faster.\n> ...\n> Postgres read the array in less than 1 sec, and the other 10s he takes \n> 100% of CPU usage,\n> wich is, in this case, one of the two cores at 1.8GHz.\n>\n> I am a bit confused about what CPU is best for Postgres. Our apps is \n> mostly read, with\n> a few connections and heavy queryes.\n> Does it worth a multi-core ?\n\nHow are you doing your benchmarking? If you have two or more queries \nrunning at the same time, I would expect the 1.8 Ghz x 2 to be \nsignificant and possibly out-perform the 3.0 Ghz x 1. If you usually \nonly have one query running at the same time, I expect the 3.0 Ghz x 1 \nto always win. PostgreSQL isn't good at splitting the load from a single \nclient across multiple CPU cores.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n", "msg_date": "Thu, 21 Aug 2008 10:16:40 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "=?ISO-8859-1?Q?Andr=E9_Volpato?= <[email protected]> writes:\n> Explain output:\n> HashAggregate (cost=19826.23..19826.96 rows=73 width=160) (actual \n> time=11826.754..11826.754 rows=0 loops=1)\n> -> Subquery Scan b2 (cost=19167.71..19817.21 rows=722 width=160) \n> (actual time=11826.752..11826.752 rows=0 loops=1)\n> Filter: (bds_internacoes(200805, 200806, (b2.cod)::text, \n> 'qtdI'::text, 'P'::bpchar) >= 1::numeric)\n> -> Limit (cost=19167.71..19248.89 rows=2165 width=48) (actual \n> time=415.157..621.043 rows=28923 loops=1)\n\nSo I guess the question is \"what is the bds_internacoes function, and\nwhy is it so slow?\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Aug 2008 10:28:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using array " }, { "msg_contents": "Tom Lane escreveu:\n> =?ISO-8859-1?Q?Andr=E9_Volpato?= <[email protected]> writes:\n> \n>> Explain output:\n>> HashAggregate (cost=19826.23..19826.96 rows=73 width=160) (actual \n>> time=11826.754..11826.754 rows=0 loops=1)\n>> -> Subquery Scan b2 (cost=19167.71..19817.21 rows=722 width=160) \n>> (actual time=11826.752..11826.752 rows=0 loops=1)\n>> Filter: (bds_internacoes(200805, 200806, (b2.cod)::text, \n>> 'qtdI'::text, 'P'::bpchar) >= 1::numeric)\n>> -> Limit (cost=19167.71..19248.89 rows=2165 width=48) (actual \n>> time=415.157..621.043 rows=28923 loops=1)\n>> \n>\n> So I guess the question is \"what is the bds_internacoes function, and\n> why is it so slow?\"\t\n\nThis function is quite fast:\n Aggregate (cost=5.17..5.18 rows=1 width=12) (actual time=0.286..0.287 \nrows=1 loops=1)\n -> Index Scan using iinternacoes4 on internacoes (cost=0.01..5.16 \nrows=1 width=12) (actual time=0.273..0.273 rows=0 loops=1)\n Index Cond: ((((ano * 100) + mes) >= 200801) AND (((ano * 100) \n+ mes) <= 200806) AND ((cod_benef)::text = '0005375200'::text))\n Filter: (tipo_internacao = 'P'::bpchar)\n Total runtime: 0.343 ms\n\n\nThe problem is that its fired up against 29K rows, wich takes the\ntotal runtime about 10s.\n\nWe are 
guessing that a dual core 3.0GHz will beat up a quad core 2.2,\nat least in this environmnent with less than 4 concurrent queryes.\n\n-- \n\n[]´s, ACV\n\n\n", "msg_date": "Thu, 21 Aug 2008 11:51:38 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "=?ISO-8859-1?Q?Andr=E9_Volpato?= <[email protected]> writes:\n> Tom Lane escreveu:\n>> So I guess the question is \"what is the bds_internacoes function, and\n>> why is it so slow?\"\t\n\n> This function is quite fast:\n\nWell, \"fast\" is relative. It's not fast enough, or you wouldn't have\nbeen complaining.\n\n> We are guessing that a dual core 3.0GHz will beat up a quad core 2.2,\n> at least in this environmnent with less than 4 concurrent queryes.\n\nThe most you could hope for from that is less than a 50% speedup. I'd\nsuggest investing some tuning effort first. Some rethinking of your\nschema, for example, might buy you orders of magnitude ... with no new\nhardware investment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Aug 2008 15:10:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using array " }, { "msg_contents": "Tom Lane escreveu:\n>> We are guessing that a dual core 3.0GHz will beat up a quad core 2.2,\n>> at least in this environmnent with less than 4 concurrent queryes.\n>\n> The most you could hope for from that is less than a 50% speedup. I'd\n> suggest investing some tuning effort first. Some rethinking of your\n> schema, for example, might buy you orders of magnitude ... with no new\n> hardware investment.\n\nI think we almost reached the tuning limit, without changing the schema.\n\nYou are right, the whole design must be rethinked.\nBut this question about single vs multi cores has bitten me.\n\nWe will rethink the investiment in new hardware too. The databases that are\nused less often will be managed to a single core server.\n\n-- \n\n[]´s, ACV\n\n\n", "msg_date": "Thu, 21 Aug 2008 17:05:12 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "André Volpato <[email protected]> writes:\n\n> Tom Lane escreveu:\n>>> We are guessing that a dual core 3.0GHz will beat up a quad core 2.2,\n>>> at least in this environmnent with less than 4 concurrent queryes.\n>>\n>> The most you could hope for from that is less than a 50% speedup. I'd\n>> suggest investing some tuning effort first. Some rethinking of your\n>> schema, for example, might buy you orders of magnitude ... with no new\n>> hardware investment.\n>\n> I think we almost reached the tuning limit, without changing the schema.\n\nIt's hard to tell from the plan you posted (and with only a brief look) but it\nlooks to me like your query with that function is basically doing a join but\nbecause the inner side of the join is in your function's index lookup it's\neffectively forcing the use of a \"nested loop\" join. That's usually a good\nchoice for small queries against big tables but if you're joining a lot of\ndata there are other join types which are much faster. 
You might find the\nplanner can do a better job if you write your query as a plain SQL query and\nlet the optimizer figure out the best way instead of forcing its hand.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's On-Demand Production Tuning\n", "msg_date": "Fri, 22 Aug 2008 11:28:58 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "\n\n\n\n\n\nGregory Stark escreveu:\n\nAndré Volpato <[email protected]> writes:\n\n\n\nI think we almost reached the tuning limit, without changing the schema.\n\n\n\nIt's hard to tell from the plan you posted (and with only a brief look) but it\nlooks to me like your query with that function is basically doing a join but\nbecause the inner side of the join is in your function's index lookup it's\neffectively forcing the use of a \"nested loop\" join. That's usually a good\nchoice for small queries against big tables but if you're joining a lot of\ndata there are other join types which are much faster. You might find the\nplanner can do a better job if you write your query as a plain SQL query and\nlet the optimizer figure out the best way instead of forcing its hand.\n\n\nThanks Greg, I rewrote the query with a explicit join, removing the\nfunction.\n\nThe planner uses a nestloop, becouse its only a few rows, none in the\nend.\n(A HashAggregate is used to join the same query, running against a\nbigger database)\n\nThe good side about the function is the facility to write in a\ndinamic application. \nWe´re gonna change it and save some bucks...\n\nIts an impressive win, look:\n\n HashAggregate  (cost=19773.60..19773.61 rows=1 width=160) (actual\ntime=0.511..0.511 rows=0 loops=1)\n   ->  Nested Loop  (cost=19143.21..19773.58 rows=1 width=160)\n(actual time=0.509..0.509 rows=0 loops=1)\n         Join Filter: ((b.benef_cod_arquivo)::text =\n(internacoes.cod_benef)::text)\n         ->  Bitmap Heap Scan on internacoes  (cost=13.34..516.70\nrows=1 width=8) (actual time=0.507..0.507 rows=0 loops=1)\n               Recheck Cond: ((((ano * 100) + mes) >= 200805) AND\n(((ano * 100) + mes) <= 200806))\n               Filter: (tipo_internacao = 'P'::bpchar)\n               ->  Bitmap Index Scan on iinternacoes4 \n(cost=0.00..13.34 rows=708 width=0) (actual time=0.143..0.143 rows=708\nloops=1)\n                     Index Cond: ((((ano * 100) + mes) >= 200805)\nAND (((ano * 100) + mes) <= 200806))\n         ->  Limit  (cost=19129.87..19209.26 rows=2117 width=48)\n(never executed)\n               ->  HashAggregate  (cost=19129.87..19209.26 rows=2117\nwidth=48) (never executed)\n                     ->  Bitmap Heap Scan on bds_beneficiario b \n(cost=822.41..18009.61 rows=56013 width=48) (never executed)\n                           Recheck Cond: ((benef_referencia >=\n200805) AND (benef_referencia <= 200806))\n                           ->  Bitmap Index Scan on\nibds_beneficiario2  (cost=0.00..808.41 rows=56013 width=0) (never\nexecuted)\n                                 Index Cond: ((benef_referencia >=\n200805) AND (benef_referencia <= 200806))\n Total runtime: 0.642 ms\n\n\n\n\n-- \n\n[]´s, ACV\n\n\n", "msg_date": "Fri, 22 Aug 2008 14:05:32 -0300", "msg_from": "=?ISO-8859-1?Q?Andr=E9_Volpato?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres not using array" }, { "msg_contents": "On Thu, 21 Aug 2008, Andr� Volpato wrote:\n\n> So, what is slowing down is the CPU (Intel(R) Pentium(R) Dual CPU E2160 \n> @ 
1.80GHz)..In practice, I have noticed that dual 1.8 is worse than \n> single 3.0. We have another server wich is a Pentium D 3.0 GHz, that \n> runs faster.\n\nPentium D models are all dual-core so either you've got the wrong model \nnumber here or you've actually comparing against a 2X3.0GHz part.\n\nThe Core 2 Duo E2160 has a very small CPU cache--512KB per core. Your \nolder Pentium system probably has quite a bit more. I suspect that's the \nmain reason it runs faster on this application.\n\n> I am a bit confused about what CPU is best for Postgres. Our apps is \n> mostly read, with a few connections and heavy queryes.\n\nThere are a lot of things you can run into with Postgres that end up being \nlimited by the fact that they only run on a single core, as you've seen \nhere. If you've only got a fairly small number of connections running CPU \nheavy queries, you probably want a processor with lots of L2 cache and a \nfast clock speed, rather than adding a large number of cores running at a \nslower speed. The very small L2 cache on your E2160 is likely what's \nholding it back here, and even though the newer processors are \nsignificantly more efficient per clock the gap between 1.8GHz and 3.0GHz \nis pretty big.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD", "msg_date": "Thu, 28 Aug 2008 22:29:45 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres not using array" } ]
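The fix described in the thread above, moving the hidden per-row function lookup into an explicit join so the planner can choose the join strategy itself, looks roughly like the sketch below. The tables are cut-down placeholders loosely modelled on the thread rather than the poster's real schema, and the GROUP BY stands in for the function's "at least one matching internacao" (>= 1) test.

-- Cut-down placeholder tables, for illustration only.
CREATE TABLE internacoes (ano int, mes int, cod_benef text, tipo_internacao char(1));
CREATE TABLE bds_beneficiario (benef_cod_arquivo text, benef_referencia int);

-- Before (the slow shape): a function call per candidate row, which forces a
-- nested-loop-style plan regardless of how many rows are involved:
--   SELECT ... FROM bds_beneficiario b
--   WHERE bds_internacoes(200805, 200806, b.benef_cod_arquivo, 'qtdI', 'P') >= 1;

-- After: the same condition written as a plain join; the GROUP BY collapses
-- beneficiaries with more than one matching row, i.e. "at least one match".
SELECT b.benef_cod_arquivo, b.benef_referencia
FROM bds_beneficiario b
JOIN internacoes i ON i.cod_benef = b.benef_cod_arquivo
WHERE b.benef_referencia BETWEEN 200805 AND 200806
  AND i.ano * 100 + i.mes BETWEEN 200805 AND 200806
  AND i.tipo_internacao = 'P'
GROUP BY b.benef_cod_arquivo, b.benef_referencia;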
[ { "msg_contents": "I'm currently trying to find out what the best configuration is for \nour new database server. It will server a database of about 80 GB and \ngrowing fast. The new machine has plenty of memory (64GB) and 16 SAS \ndisks, of wich two are already in use as a mirror for the OS.\n\nThe rest can be used for PostgreSQL. So that makes a total of 14 15k.5 \nSAS diks. There is obviously a lot to interesting reading to be found, \nmost of them stating that the transaction log should be put onto a \nseparate disk spindle. You can also do this with the indexes. Since \nthey will be updated a lot, I guess that might be a good idea. But \nwhat no-one states, is what performance these spindle should have in \ncomparison to the data spindle? If I create a raid 10 of 6 disks for \nthe data, 4 disk raid 10 for the log, and 4 disk raid 10 for the \nindexes, will this yield best performance? Or is it sufficient to just \nhave a simple mirror for the log and/or indexes...? I have not found \nany information about these figures, and I guess it should be possible \nto give some pointers on how these different setup might affect \nperformance?\n\nSo I hope someone has already tested this and can give some tips...\n\nKind regards,\n\nChristiaan \n", "msg_date": "Thu, 21 Aug 2008 00:25:59 +0200", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "How to setup disk spindles for best performance" }, { "msg_contents": "On Wed, Aug 20, 2008 at 6:25 PM, Christiaan Willemsen\n<[email protected]> wrote:\n> I'm currently trying to find out what the best configuration is for our new\n> database server. It will server a database of about 80 GB and growing fast.\n> The new machine has plenty of memory (64GB) and 16 SAS disks, of wich two\n> are already in use as a mirror for the OS.\n>\n> The rest can be used for PostgreSQL. So that makes a total of 14 15k.5 SAS\n> diks. There is obviously a lot to interesting reading to be found, most of\n> them stating that the transaction log should be put onto a separate disk\n> spindle. You can also do this with the indexes. Since they will be updated a\n> lot, I guess that might be a good idea. But what no-one states, is what\n> performance these spindle should have in comparison to the data spindle? If\n> I create a raid 10 of 6 disks for the data, 4 disk raid 10 for the log, and\n> 4 disk raid 10 for the indexes, will this yield best performance? Or is it\n> sufficient to just have a simple mirror for the log and/or indexes...? I\n> have not found any information about these figures, and I guess it should be\n> possible to give some pointers on how these different setup might affect\n> performance?\n\nWell, the speed of your logging device puts an upper bound on the\nwrite speed of the database. While modern sas drives can do 80mb/sec\n+ with sequential ops, this can turn to 1mb/sec real fast if the\nlogging is duking it out with the other generally random work the\ndatabase has to do, which is why it's often separated out.\n\n80mb/sec is actually quite a lot in database terms and you will likely\nonly get anything close to that when doing heavy insertion, so that\nit's unlikely to become the bottleneck. Even if you hit that limit\nsometimes, those drives are probably put to better use in the data\nvolume somewhere.\n\nAs for partitioning the data volume, I'd advise this only if you have\na mixed duty database that does different tasks with different\nperformance requirements. 
You may be serving a user interface which\nhas very low maximum transaction time and therefore gets dedicated\ndisk i/o apart from the data churn that is going on elsewhere. Apart\nfrom that though, I'd keep it in a single volume.\n\nmerlin\n", "msg_date": "Wed, 20 Aug 2008 21:49:54 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "On Wed, Aug 20, 2008 at 4:25 PM, Christiaan Willemsen\n<[email protected]> wrote:\n> I'm currently trying to find out what the best configuration is for our new\n> database server. It will server a database of about 80 GB and growing fast.\n> The new machine has plenty of memory (64GB) and 16 SAS disks, of wich two\n> are already in use as a mirror for the OS.\n\nGot almost the same setup with 32 Gig of ram... I'm running the Areca\n1680 controller with 512M battery backed cache and I'm quite happy\nwith the performance so far.\n\nMine is set up to use the OS mirror set for the pg_xlog as well, but\non a separate logical partition. I figure the RAID controller will\neven out the writes there to keep it fast. The next 12 drives are a\nRAID-10 set and the last two are hot spares. On an array this big you\nshould always have at least one hot spare.\n\nGenerally breaking the disks up for index versus tables etc is a lot\nof work for minimal gain. A good RAID controller will make up for\nhaving to do that, and usually do better, since if the indexes are\ngetting hit a LOT then you have all 12 disks in the RAID-10 working\ntogether.\n\nBut what's your workload look like? Lotsa updates, inserts, deletes,\nbig selects, bulk loads in the middle of the day, etc...?\n", "msg_date": "Wed, 20 Aug 2008 21:41:05 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "So, what you are basically saying, is that a single mirror is in \ngeneral more than enough to facilitate the transaction log.\n\nSo it would not be smart to put the indexes onto a separate disk \nspindle to improve index performance?\n\nOn Aug 21, 2008, at 3:49 AM, Merlin Moncure wrote:\n\n> On Wed, Aug 20, 2008 at 6:25 PM, Christiaan Willemsen\n> <[email protected]> wrote:\n>> I'm currently trying to find out what the best configuration is for \n>> our new\n>> database server. It will server a database of about 80 GB and \n>> growing fast.\n>> The new machine has plenty of memory (64GB) and 16 SAS disks, of \n>> wich two\n>> are already in use as a mirror for the OS.\n>>\n>> The rest can be used for PostgreSQL. So that makes a total of 14 \n>> 15k.5 SAS\n>> diks. There is obviously a lot to interesting reading to be found, \n>> most of\n>> them stating that the transaction log should be put onto a separate \n>> disk\n>> spindle. You can also do this with the indexes. Since they will be \n>> updated a\n>> lot, I guess that might be a good idea. But what no-one states, is \n>> what\n>> performance these spindle should have in comparison to the data \n>> spindle? If\n>> I create a raid 10 of 6 disks for the data, 4 disk raid 10 for the \n>> log, and\n>> 4 disk raid 10 for the indexes, will this yield best performance? \n>> Or is it\n>> sufficient to just have a simple mirror for the log and/or \n>> indexes...? 
I\n>> have not found any information about these figures, and I guess it \n>> should be\n>> possible to give some pointers on how these different setup might \n>> affect\n>> performance?\n>\n> Well, the speed of your logging device puts an upper bound on the\n> write speed of the database. While modern sas drives can do 80mb/sec\n> + with sequential ops, this can turn to 1mb/sec real fast if the\n> logging is duking it out with the other generally random work the\n> database has to do, which is why it's often separated out.\n>\n> 80mb/sec is actually quite a lot in database terms and you will likely\n> only get anything close to that when doing heavy insertion, so that\n> it's unlikely to become the bottleneck. Even if you hit that limit\n> sometimes, those drives are probably put to better use in the data\n> volume somewhere.\n>\n> As for partitioning the data volume, I'd advise this only if you have\n> a mixed duty database that does different tasks with different\n> performance requirements. You may be serving a user interface which\n> has very low maximum transaction time and therefore gets dedicated\n> disk i/o apart from the data churn that is going on elsewhere. Apart\n> from that though, I'd keep it in a single volume.\n>\n> merlin\n\n", "msg_date": "Thu, 21 Aug 2008 07:38:39 +0200", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "Christiaan Willemsen wrote:\n> So, what you are basically saying, is that a single mirror is in general \n> more than enough to facilitate the transaction log.\n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nAnd to answer your question, yes. Transaction logs are written \nsequentially. You do not need a journaled file system and raid 1 is \nplenty for most if not all work loads.\n\nSincerely,\n\nJoshua D. Drake\n", "msg_date": "Wed, 20 Aug 2008 22:50:38 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "Thanks Joshua,\n\nSo what about putting the indexes on a separate array? Since we do a lot \nof inserts indexes are going to be worked on a lot of the time.\n\nRegards,\n\nChristiaan\n\nJoshua D. Drake wrote:\n> Christiaan Willemsen wrote:\n>> So, what you are basically saying, is that a single mirror is in \n>> general more than enough to facilitate the transaction log.\n>\n> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \n>\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n> And to answer your question, yes. Transaction logs are written \n> sequentially. You do not need a journaled file system and raid 1 is \n> plenty for most if not all work loads.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n", "msg_date": "Thu, 21 Aug 2008 10:34:19 +0200", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "Indexes will be random write workload, but these won't by synchronous writes\nand will be buffered by the raid controller's cache. 
Assuming you're using\na hardware raid controller that is, and one that doesn't have major\nperformance problems on your platform. Which brings those questions up ---\nwhat is your RAID card and OS?\n\nFor reads, if your shared_buffers is large enough, your heavily used indexes\nwon't likely go to disk much at all.\n\nA good raid controller will typically help distribute the workload\neffectively on a large array.\n\nYou probably want a simple 2 disk mirror or 4 disks in raid 10 for your OS +\nxlog, and the rest for data + indexes -- with hot spares IF your card\nsupports them.\n\nThe biggest risk to splitting up data and indexes is that you don't know how\nmuch I/O each needs relative to each other, and if this isn't a relatively\nconstant ratio you will have one subset busy while the other subset is idle.\nUnless you have extensively profiled your disk activity into index and data\nsubsets and know roughly what the optimal ratio is, its probably going to\ncause more problems than it fixes.\nFurthermore, if this ratio changes at all, its a maintenance nightmare. How\nmuch each would need in a perfect world is application dependant, so there\ncan be no general recommendation other than: don't do it.\n\nOn Thu, Aug 21, 2008 at 1:34 AM, Christiaan Willemsen <\[email protected]> wrote:\n\n> Thanks Joshua,\n>\n> So what about putting the indexes on a separate array? Since we do a lot of\n> inserts indexes are going to be worked on a lot of the time.\n>\n> Regards,\n>\n> Christiaan\n>\n>\n> Joshua D. Drake wrote:\n>\n>> Christiaan Willemsen wrote:\n>>\n>>> So, what you are basically saying, is that a single mirror is in general\n>>> more than enough to facilitate the transaction log.\n>>>\n>>\n>>\n>> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n>> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>>\n>> And to answer your question, yes. Transaction logs are written\n>> sequentially. You do not need a journaled file system and raid 1 is plenty\n>> for most if not all work loads.\n>>\n>> Sincerely,\n>>\n>> Joshua D. Drake\n>>\n>>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIndexes will be random write workload, but these won't by synchronous writes and will be buffered by the raid controller's cache.  Assuming you're using a hardware raid controller that is, and one that doesn't have major performance problems on your platform.  Which brings those questions up --- what is your RAID card and OS?\nFor reads, if your shared_buffers is large enough, your heavily used indexes won't likely go to disk much at all.A good raid controller will typically help distribute the workload effectively on a large array.\nYou probably want a simple 2 disk mirror or 4 disks in raid 10 for your OS + xlog, and the rest for data + indexes -- with hot spares IF your card supports them.The biggest risk to splitting up data and indexes is that you don't know how much I/O each needs relative to each other, and if this isn't a relatively constant ratio you will have one subset busy while the other subset is idle.\nUnless you have extensively profiled your disk activity into index and data subsets and know roughly what the optimal ratio is, its probably going to cause more problems than it fixes.  Furthermore, if this ratio changes at all, its a maintenance nightmare.  
How much each would need in a perfect world is application dependant, so there can be no general recommendation other than:  don't do it.\nOn Thu, Aug 21, 2008 at 1:34 AM, Christiaan Willemsen <[email protected]> wrote:\nThanks Joshua,\n\nSo what about putting the indexes on a separate array? Since we do a lot of inserts indexes are going to be worked on a lot of the time.\n\nRegards,\n\nChristiaan\n\nJoshua D. Drake wrote:\n\nChristiaan Willemsen wrote:\n\nSo, what you are basically saying, is that a single mirror is in general more than enough to facilitate the transaction log.\n\n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nAnd to answer your question, yes. Transaction logs are written sequentially. You do not need a journaled file system and raid 1 is plenty for most if not all work loads.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 21 Aug 2008 07:53:05 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "Hi Scott,\n\nGreat info! Our RAID card is at the moment a ICP vortex (Adaptec) \nICP5165BR, and I'll be using it with Ubuntu server 8.04. I tried \nOpenSolaris, but it yielded even more terrible performance, specially \nusing ZFS.. I guess that was just a missmatch. Anyway, I'm going to \nreturn the controller, because it does not scale very well with more \nthat 4 disks in raid 10. Bandwidth is limited to 350MB/sec, and IOPS \nscale badly with extra disks...\n\nSo I guess, I'll be waiting for another controller first. The idea for \nxlog + os on 4 disk raid 10 and the rest for the data sound good :) I \nhope it will turn out that way too.. First another controller..\n\nRegards,\n\nChristiaan\n\nScott Carey wrote:\n> Indexes will be random write workload, but these won't by synchronous \n> writes and will be buffered by the raid controller's cache. Assuming \n> you're using a hardware raid controller that is, and one that doesn't \n> have major performance problems on your platform. Which brings those \n> questions up --- what is your RAID card and OS?\n>\n> For reads, if your shared_buffers is large enough, your heavily used \n> indexes won't likely go to disk much at all.\n>\n> A good raid controller will typically help distribute the workload \n> effectively on a large array.\n>\n> You probably want a simple 2 disk mirror or 4 disks in raid 10 for \n> your OS + xlog, and the rest for data + indexes -- with hot spares IF \n> your card supports them.\n>\n> The biggest risk to splitting up data and indexes is that you don't \n> know how much I/O each needs relative to each other, and if this isn't \n> a relatively constant ratio you will have one subset busy while the \n> other subset is idle.\n> Unless you have extensively profiled your disk activity into index and \n> data subsets and know roughly what the optimal ratio is, its probably \n> going to cause more problems than it fixes. \n> Furthermore, if this ratio changes at all, its a maintenance \n> nightmare. 
How much each would need in a perfect world is application \n> dependant, so there can be no general recommendation other than: \n> don't do it.\n>\n> On Thu, Aug 21, 2008 at 1:34 AM, Christiaan Willemsen \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Thanks Joshua,\n>\n> So what about putting the indexes on a separate array? Since we do\n> a lot of inserts indexes are going to be worked on a lot of the time.\n>\n> Regards,\n>\n> Christiaan\n>\n>\n> Joshua D. Drake wrote:\n>\n> Christiaan Willemsen wrote:\n>\n> So, what you are basically saying, is that a single mirror\n> is in general more than enough to facilitate the\n> transaction log.\n>\n>\n> http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n>\n> http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n>\n> And to answer your question, yes. Transaction logs are written\n> sequentially. You do not need a journaled file system and raid\n> 1 is plenty for most if not all work loads.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\n\n\n\n\n\nHi Scott,\n\nGreat info! Our RAID card is at  the moment a ICP vortex (Adaptec)\nICP5165BR, and I'll be using it with Ubuntu server 8.04. I tried\nOpenSolaris, but it yielded even more terrible performance, specially\nusing ZFS.. I guess that was just a missmatch. Anyway, I'm going to\nreturn the controller, because it does not scale very well with more\nthat 4 disks in raid 10. Bandwidth is limited to 350MB/sec, and IOPS\nscale badly with extra disks...\n\nSo I guess, I'll be waiting for another controller first. The idea for\nxlog + os on 4 disk raid 10 and the rest for the data sound good :) I\nhope it will turn out that way too.. First another controller..\n\nRegards,\n\nChristiaan\n\nScott Carey wrote:\n\nIndexes will be random write workload, but these won't\nby synchronous writes and will be buffered by the raid controller's\ncache.  Assuming you're using a hardware raid controller that is, and\none that doesn't have major performance problems on your platform. \nWhich brings those questions up --- what is your RAID card and OS?\n\nFor reads, if your shared_buffers is large enough, your heavily used\nindexes won't likely go to disk much at all.\n\nA good raid controller will typically help distribute the workload\neffectively on a large array.\n\nYou probably want a simple 2 disk mirror or 4 disks in raid 10 for your\nOS + xlog, and the rest for data + indexes -- with hot spares IF your\ncard supports them.\n\nThe biggest risk to splitting up data and indexes is that you don't\nknow how much I/O each needs relative to each other, and if this isn't\na relatively constant ratio you will have one subset busy while the\nother subset is idle.\nUnless you have extensively profiled your disk activity into index and\ndata subsets and know roughly what the optimal ratio is, its probably\ngoing to cause more problems than it fixes.  \nFurthermore, if this ratio changes at all, its a maintenance\nnightmare.  
How much each would need in a perfect world is application\ndependant, so there can be no general recommendation other than:  don't\ndo it.\n\nOn Thu, Aug 21, 2008 at 1:34 AM, Christiaan\nWillemsen <[email protected]>\nwrote:\nThanks\nJoshua,\n\nSo what about putting the indexes on a separate array? Since we do a\nlot of inserts indexes are going to be worked on a lot of the time.\n\nRegards,\n\nChristiaan\n\n\n\nJoshua D. Drake wrote:\n\nChristiaan Willemsen wrote:\n\nSo, what you are basically saying, is that a single mirror is in\ngeneral more than enough to facilitate the transaction log.\n\n\nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/\n\nhttp://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide\n\nAnd to answer your question, yes. Transaction logs are written\nsequentially. You do not need a journaled file system and raid 1 is\nplenty for most if not all work loads.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 21 Aug 2008 17:06:13 +0200", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "Scott Carey wrote:\n> For reads, if your shared_buffers is large enough, your heavily used \n> indexes won't likely go to disk much at all.\n\nISTM this would happen regardless of your shared_buffers setting.\nIf you have enough memory the OS should cache the frequently used\npages regardless of shared_buffers; and if you don't have enough\nmemory it won't.\n\n> ... splitting up data and indexes ...\n\nFWIW, I've had a system where moving pgsql_tmp to different disks\nhelped more than moving indexes.\n", "msg_date": "Thu, 21 Aug 2008 12:14:47 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "On Thu, 21 Aug 2008, Christiaan Willemsen wrote:\n\n> Anyway, I'm going to return the controller, because it \n> does not scale very well with more that 4 disks in raid 10. Bandwidth is \n> limited to 350MB/sec, and IOPS scale badly with extra disks...\n\nHow did you determine that upper limit? Usually it takes multiple \nbenchmark processes running at once in order to get more than 350MB/s out \nof a controller. For example, if you look carefully at the end of \nhttp://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \nyou can see that Joshua had to throw 8 threads at the disks in order to \nreach maximum bandwidth.\n\n> The idea for xlog + os on 4 disk raid 10 and the rest for the data sound \n> good\n\nI would just use a RAID1 pair for the OS, another pair for the xlog, and \nthrow all the other disks into a big 0+1 set. There is some value to \nseparating the WAL from the OS disks, from both the performance and the \nmanagement perspectives. 
It's nice to be able to monitor the xlog write \nbandwidth rate under load easily for example.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 28 Aug 2008 22:43:54 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to setup disk spindles for best performance" }, { "msg_contents": "\nOn Aug 29, 2008, at 4:43 AM, Greg Smith wrote:\n\n> On Thu, 21 Aug 2008, Christiaan Willemsen wrote:\n>\n>> Anyway, I'm going to return the controller, because it does not \n>> scale very well with more that 4 disks in raid 10. Bandwidth is \n>> limited to 350MB/sec, and IOPS scale badly with extra disks...\n>\n> How did you determine that upper limit? Usually it takes multiple \n> benchmark processes running at once in order to get more than 350MB/ \n> s out of a controller. For example, if you look carefully at the \n> end of http://www.commandprompt.com/blogs/joshua_drake/2008/04/is_that_performance_i_smell_ext2_vs_ext3_on_50_spindles_testing_for_postgresql/ \n> you can see that Joshua had to throw 8 threads at the disks in \n> order to reach maximum bandwidth.\n\nI used IOmeter to do some tests, with 50 worker thread doing jobs. I \ncan get more than 350 MB/sec, I'll have to use huge blocksizes \n(something like 8 MB). Even worse is random read and 70%read, 50% \nrandom tests. They don't scale at all when you add disks. A 6 disk \nraid 5 is exactly as fast as a 12 disk raid 10 :(\n\n>> The idea for xlog + os on 4 disk raid 10 and the rest for the data \n>> sound good\n>\n> I would just use a RAID1 pair for the OS, another pair for the xlog, \n> and throw all the other disks into a big 0+1 set. There is some \n> value to separating the WAL from the OS disks, from both the \n> performance and the management perspectives. It's nice to be able \n> to monitor the xlog write bandwidth rate under load easily for \n> example.\n\nYes, that's about what I had in mind.\n\nKind regards,\n\nChristiaan\n", "msg_date": "Fri, 29 Aug 2008 07:07:58 +0200", "msg_from": "Christiaan Willemsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to setup disk spindles for best performance" } ]
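The layout the thread converges on (one RAID1 pair for the OS, one for pg_xlog, and everything else in a single RAID10) needs no SQL beyond pointing the data directory at the big array. If profiling ever did justify separating indexes or temporary files (as Ron mentions) from the heap data, tablespaces are the mechanism; the sketch below uses hypothetical mount points and table names that are not taken from the thread.

-- Hypothetical mount points; the directories must exist and be owned by the postgres user.
CREATE TABLESPACE data_ts  LOCATION '/vol_raid10/pg_data';
CREATE TABLESPACE index_ts LOCATION '/vol_mirror2/pg_index';
CREATE TABLESPACE temp_ts  LOCATION '/vol_mirror2/pg_temp';

-- Example of placing a table and one of its indexes on different arrays.
CREATE TABLE sensor_data (id bigserial, recorded timestamptz, value double precision)
  TABLESPACE data_ts;
CREATE INDEX sensor_data_recorded_idx ON sensor_data (recorded)
  TABLESPACE index_ts;

-- Route sort/hash spill files (pgsql_tmp) to their own spindles (8.3+),
-- or set temp_tablespaces permanently in postgresql.conf:
SET temp_tablespaces = 'temp_ts';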
[ { "msg_contents": "My company finally has the means to install a new database server for \nreplication. I have Googled and found a lot of sparse information out \nthere regarding replication systems for PostgreSQL and a lot of it looks \nvery out-of-date. Can I please get some ideas from those of you that \nare currently using fail-over replication systems? What advantage does \nyour solution have? What are the \"gotchas\" I need to worry about?\n\nMy desire would be to have a parallel server that could act as a hot \nstandby system with automatic fail over in a multi-master role. If our \nprimary server goes down for whatever reason, the secondary would take \nover and handle the load seamlessly. I think this is really the \"holy \ngrail\" scenario and I understand how difficult it is to achieve. \nEspecially since we make frequent use of sequences in our databases. If \nMM is too difficult, I'm willing to accept a hot-standby read-only \nsystem that will handle queries until we can fix whatever ails the master. \n\nWe are primary an OLAP environment but there is a constant stream of \ninserts into the databases. There are 47 different databases hosted on \nthe primary server and this number will continue to scale up to whatever \nthe server seems to support. The reason I mention this number is that \nit seems that those systems that make heavy use of schema changes \nrequire a lot of \"fiddling\". For a single database, this doesn't seem \ntoo problematic, but any manual work involved and administrative \noverhead will scale at the same rate as the database count grows and I \ncertainly want to minimize as much fiddling as possible.\n\nWe are using 8.3 and the total combined size for the PG data directory \nis 226G. Hopefully I didn't neglect to include more relevant information.\n\nAs always, thank you for your insight.\n\n-Dan\n\n\n", "msg_date": "Thu, 21 Aug 2008 11:53:51 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "The state of PG replication in 2008/Q2?" }, { "msg_contents": "Hi Dan!\n\nIts true, many of the replication options that exists for PostgreSQL \nhave not seen any updates in a while.\n\nIf you only looking for redundancy and not a performance gain you \nshould look at PostgreSQL PITR (http://www.postgresql.org/docs/8.1/static/backup-online.html \n)\n\nFor Master-Slave replication i think that Slony http://www.slony.info/ \nis most up to date. But it does not support DDL changes.\n\nYou may wich to look at pgpool http://pgpool.projects.postgresql.org/ \nit supports Synchronous replication (wich is good for data integrity, \nbut can be bad for performance).\n\nThese are some of the open source options. I do not have any \nexperience with the commercial onces.\n\nBest regards,\nMathias\n\nhttp://www.pastbedti.me/\n\n\nOn 21 aug 2008, at 19.53, Dan Harris wrote:\n\n> My company finally has the means to install a new database server \n> for replication. I have Googled and found a lot of sparse \n> information out there regarding replication systems for PostgreSQL \n> and a lot of it looks very out-of-date. Can I please get some ideas \n> from those of you that are currently using fail-over replication \n> systems? What advantage does your solution have? What are the \n> \"gotchas\" I need to worry about?\n>\n> My desire would be to have a parallel server that could act as a hot \n> standby system with automatic fail over in a multi-master role. 
If \n> our primary server goes down for whatever reason, the secondary \n> would take over and handle the load seamlessly. I think this is \n> really the \"holy grail\" scenario and I understand how difficult it \n> is to achieve. Especially since we make frequent use of sequences \n> in our databases. If MM is too difficult, I'm willing to accept a \n> hot-standby read-only system that will handle queries until we can \n> fix whatever ails the master.\n> We are primary an OLAP environment but there is a constant stream of \n> inserts into the databases. There are 47 different databases hosted \n> on the primary server and this number will continue to scale up to \n> whatever the server seems to support. The reason I mention this \n> number is that it seems that those systems that make heavy use of \n> schema changes require a lot of \"fiddling\". For a single database, \n> this doesn't seem too problematic, but any manual work involved and \n> administrative overhead will scale at the same rate as the database \n> count grows and I certainly want to minimize as much fiddling as \n> possible.\n>\n> We are using 8.3 and the total combined size for the PG data \n> directory is 226G. Hopefully I didn't neglect to include more \n> relevant information.\n>\n> As always, thank you for your insight.\n>\n> -Dan\n>\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 21 Aug 2008 22:53:05 +0200", "msg_from": "=?ISO-8859-1?Q?Mathias_Stjernstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "On Thu, Aug 21, 2008 at 10:53:05PM +0200, Mathias Stjernstr�m wrote:\n\n> For Master-Slave replication i think that Slony http://www.slony.info/ is \n> most up to date. But it does not support DDL changes.\n\nThis isn't quite true. It supports DDL; it just doesn't support it in\nthe normal way, and is broken by applications doing DDL as part of the\nregular operation. \n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Thu, 21 Aug 2008 17:04:36 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "On Thursday 21 August 2008, Dan Harris <[email protected]> wrote:\n> Especially since we make frequent use of sequences in our databases. If\n> MM is too difficult, I'm willing to accept a hot-standby read-only\n> system that will handle queries until we can fix whatever ails the\n> master.\n\nA heartbeat+DRBD solution might make more sense than database-level \nreplication to achieve this. \n\n-- \nAlan\n", "msg_date": "Thu, 21 Aug 2008 14:13:33 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "Yes thats true. It does support DDL changes but not in a automatic \nway. You have to execute all DDL changes with a separate script.\n\nWhat's the status of http://www.commandprompt.com/products/mammothreplicator/ \n ?\n\nBest regards,\nMathias\n\nhttp://www.pastbedti.me/\n\n\nOn 21 aug 2008, at 23.04, Andrew Sullivan wrote:\n\n> On Thu, Aug 21, 2008 at 10:53:05PM +0200, Mathias Stjernstr�m wrote:\n>\n>> For Master-Slave replication i think that Slony http://www.slony.info/ \n>> is\n>> most up to date. 
But it does not support DDL changes.\n>\n> This isn't quite true. It supports DDL; it just doesn't support it in\n> the normal way, and is broken by applications doing DDL as part of the\n> regular operation.\n>\n> A\n>\n> -- \n> Andrew Sullivan\n> [email protected]\n> +1 503 667 4564 x104\n> http://www.commandprompt.com/\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 21 Aug 2008 23:21:26 +0200", "msg_from": "=?ISO-8859-1?Q?Mathias_Stjernstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "On Thu, 21 Aug 2008 23:21:26 +0200\nMathias Stjernström <[email protected]> wrote:\n\n> Yes thats true. It does support DDL changes but not in a automatic \n> way. You have to execute all DDL changes with a separate script.\n> \n> What's the status of\n> http://www.commandprompt.com/products/mammothreplicator/ ?\n> \n\nIt is about to go open source but it doesn't replicate DDL either.\n\nJoshua D. Drake\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 21 Aug 2008 14:23:57 -0700", "msg_from": "Joshua Drake <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "\n\nMathias Stjernstr�m wrote:\n> Yes thats true. It does support DDL changes but not in a automatic way. \n> You have to execute all DDL changes with a separate script.\n> \n\nThat's true, but it's quite simple to do with the provided perl \nscript(s) - slonik_execute_script. I've had to make use of it a few \ntimes and have had no problems.\n\n-salman\n", "msg_date": "Thu, 21 Aug 2008 17:26:55 -0400", "msg_from": "salman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "On Thursday 21 August 2008, salman <[email protected]> wrote:\n> Mathias Stjernström wrote:\n> > Yes thats true. It does support DDL changes but not in a automatic way.\n> > You have to execute all DDL changes with a separate script.\n>\n> That's true, but it's quite simple to do with the provided perl\n> script(s) - slonik_execute_script. I've had to make use of it a few\n> times and have had no problems.\n\nI do it almost every day, and it is not all that simple if your \nconfiguration is complex. The original poster would require at least 47 \ndifferent Slony clusters, for starters. The complications from adding and \ndropping tables and sequences across 47 databases, and trying to keep Slony \nup to date throughout, staggers the imagination, honestly.\n\n-- \nAlan\n", "msg_date": "Thu, 21 Aug 2008 14:41:59 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "Joshua Drake wrote:\n> On Thu, 21 Aug 2008 23:21:26 +0200\n> Mathias Stjernstr�m <[email protected]> wrote:\n> \n> > Yes thats true. It does support DDL changes but not in a automatic \n> > way. 
You have to execute all DDL changes with a separate script.\n> > \n> > What's the status of\n> > http://www.commandprompt.com/products/mammothreplicator/ ?\n> \n> It is about to go open source but it doesn't replicate DDL either.\n\nIt doesn't replicate multiple databases either.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 21 Aug 2008 17:54:11 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "On Thu, 21 Aug 2008, Mathias Stjernstr?m wrote:\n\n> Hi Dan!\n>\n> Its true, many of the replication options that exists for PostgreSQL have not \n> seen any updates in a while.\n>\n> If you only looking for redundancy and not a performance gain you should look \n> at PostgreSQL PITR \n> (http://www.postgresql.org/docs/8.1/static/backup-online.html)\n>\n> For Master-Slave replication i think that Slony http://www.slony.info/ is \n> most up to date. But it does not support DDL changes.\n>\n> You may wich to look at pgpool http://pgpool.projects.postgresql.org/ it \n> supports Synchronous replication (wich is good for data integrity, but can be \n> bad for performance).\n>\n> These are some of the open source options. I do not have any experience with \n> the commercial onces.\n\na couple of months ago there was a lot of news about a WAL based \nreplication engine. one that was closed source, but possibly getting \nopened shortly, and also the decision by the core devs to add one into the \nbase distro.\n\nwhat's been happening on this front?\n\nfrom my understanding the first versions of this would not support queries \nof the replica, but would provide for the consistancy needed for reliable \nfailover.\n\nDavid Lang\n", "msg_date": "Thu, 21 Aug 2008 15:10:29 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "On Thu, 21 Aug 2008 17:54:11 -0400\nAlvaro Herrera <[email protected]> wrote:\n\n> Joshua Drake wrote:\n> > On Thu, 21 Aug 2008 23:21:26 +0200\n> > Mathias Stjernström <[email protected]> wrote:\n> > \n> > > Yes thats true. It does support DDL changes but not in a\n> > > automatic way. You have to execute all DDL changes with a\n> > > separate script.\n> > > \n> > > What's the status of\n> > > http://www.commandprompt.com/products/mammothreplicator/ ?\n> > \n> > It is about to go open source but it doesn't replicate DDL either.\n> \n> It doesn't replicate multiple databases either.\n> \n\nTrue\n\nJoshua D. Drake\n\n-- \nThe PostgreSQL Company since 1997: http://www.commandprompt.com/ \nPostgreSQL Community Conference: http://www.postgresqlconference.org/\nUnited States PostgreSQL Association: http://www.postgresql.us/\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Thu, 21 Aug 2008 15:15:44 -0700", "msg_from": "Joshua Drake <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "\n> My company finally has the means to install a new database server for \n> replication. I have Googled and found a lot of sparse information out \n> there regarding replication systems for PostgreSQL and a lot of it \n> looks very out-of-date. Can I please get some ideas from those of you \n> that are currently using fail-over replication systems? What \n> advantage does your solution have? 
What are the \"gotchas\" I need to \n> worry about?\n>\n> My desire would be to have a parallel server that could act as a hot \n> standby system with automatic fail over in a multi-master role. If \n> our primary server goes down for whatever reason, the secondary would \n> take over and handle the load seamlessly. I think this is really the \n> \"holy grail\" scenario and I understand how difficult it is to \n> achieve. Especially since we make frequent use of sequences in our \n> databases. If MM is too difficult, I'm willing to accept a \n> hot-standby read-only system that will handle queries until we can fix \n> whatever ails the master.\n> We are primary an OLAP environment but there is a constant stream of \n> inserts into the databases. There are 47 different databases hosted \n> on the primary server and this number will continue to scale up to \n> whatever the server seems to support. The reason I mention this \n> number is that it seems that those systems that make heavy use of \n> schema changes require a lot of \"fiddling\". For a single database, \n> this doesn't seem too problematic, but any manual work involved and \n> administrative overhead will scale at the same rate as the database \n> count grows and I certainly want to minimize as much fiddling as \n> possible.\n>\n>\nIf you really need \"only\" need automatic failover than use DRBD + Heartbeat\nsomebody already mentioned. We are using this solution since 3 years now.\nWith DRBD replication is done at filesystem block level. So you don't \nhave to\nbother about changes done with a DDL statement and Heartbeat will\nautomatically failover if one server goes down. It's really stable.\n\nIf you want MM you should give Cybercluster a try. \n(http://www.postgresql.at/english/pr_cybercluster_e.html)\nThey offer good support and is Open Source since a few month now.\n\nRobert\n\n", "msg_date": "Fri, 22 Aug 2008 08:18:25 +0200", "msg_from": "RW <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "Yup, but sometimes you are not in charge of the DDL changes. You may \nhave many different users that make changes or for example if you are \nworking with Ruby On Rails you have something thats called Migrations \nthat handles all DDL changes in those situations it can get really \ncomplicated with those slony scripts. My experience is that automatic \nhandling of DDL changes is a very important feature of a replication \nsystem of curse not in all systems but in many.\n\nI am also very interested in the WAL replication that David Lang asked \nabout.\n\nBest regards,\nMathias Stjernstr�m\n\nhttp://www.pastbedti.me/\n\n\n\nOn 21 aug 2008, at 23.26, salman wrote:\n\n>\n>\n> Mathias Stjernstr�m wrote:\n>> Yes thats true. It does support DDL changes but not in a automatic \n>> way. You have to execute all DDL changes with a separate script.\n>\n> That's true, but it's quite simple to do with the provided perl \n> script(s) - slonik_execute_script. I've had to make use of it a few \n> times and have had no problems.\n>\n> -salman", "msg_date": "Fri, 22 Aug 2008 08:29:11 +0200", "msg_from": "=?ISO-8859-1?Q?Mathias_Stjernstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "I Agree with Robert but i never heard of Cybercluster before.\nDoes anyone have any experience with Cybercluster? 
It sounds really \ninteresting!\n\nBest regards,\nMathias Stjernstr�m\n\nhttp://www.pastbedti.me/\n\n\nOn 22 aug 2008, at 08.18, RW wrote:\n\n>\n>> My company finally has the means to install a new database server \n>> for replication. I have Googled and found a lot of sparse \n>> information out there regarding replication systems for PostgreSQL \n>> and a lot of it looks very out-of-date. Can I please get some \n>> ideas from those of you that are currently using fail-over \n>> replication systems? What advantage does your solution have? What \n>> are the \"gotchas\" I need to worry about?\n>>\n>> My desire would be to have a parallel server that could act as a \n>> hot standby system with automatic fail over in a multi-master \n>> role. If our primary server goes down for whatever reason, the \n>> secondary would take over and handle the load seamlessly. I think \n>> this is really the \"holy grail\" scenario and I understand how \n>> difficult it is to achieve. Especially since we make frequent use \n>> of sequences in our databases. If MM is too difficult, I'm willing \n>> to accept a hot-standby read-only system that will handle queries \n>> until we can fix whatever ails the master.\n>> We are primary an OLAP environment but there is a constant stream \n>> of inserts into the databases. There are 47 different databases \n>> hosted on the primary server and this number will continue to scale \n>> up to whatever the server seems to support. The reason I mention \n>> this number is that it seems that those systems that make heavy use \n>> of schema changes require a lot of \"fiddling\". For a single \n>> database, this doesn't seem too problematic, but any manual work \n>> involved and administrative overhead will scale at the same rate as \n>> the database count grows and I certainly want to minimize as much \n>> fiddling as possible.\n>>\n>>\n> If you really need \"only\" need automatic failover than use DRBD + \n> Heartbeat\n> somebody already mentioned. We are using this solution since 3 years \n> now.\n> With DRBD replication is done at filesystem block level. So you \n> don't have to\n> bother about changes done with a DDL statement and Heartbeat will\n> automatically failover if one server goes down. It's really stable.\n>\n> If you want MM you should give Cybercluster a try. (http://www.postgresql.at/english/pr_cybercluster_e.html \n> )\n> They offer good support and is Open Source since a few month now.\n>\n> Robert\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected] \n> )\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 22 Aug 2008 08:35:43 +0200", "msg_from": "=?ISO-8859-1?Q?Mathias_Stjernstr=F6m?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "Dan Harris wrote:\n> My desire would be to have a parallel server that could act as a hot\n> standby system with automatic fail over in a multi-master role.\n\nI will add my \"me too\" for DRBD + Heartbeat.\n", "msg_date": "Fri, 22 Aug 2008 12:41:31 +0300", "msg_from": "Peter Eisentraut <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "Hi Mathias,\n\nOn Aug 22, 2008, at 8:35 AM, Mathias Stjernstr�m wrote:\n\n> I Agree with Robert but i never heard of Cybercluster before.\n> Does anyone have any experience with Cybercluster? 
It sounds really \n> interesting!\n\nSome months ago i took a look into cybercluster. At that point \ncybercluster was\nbasically postgres-source 8.3 patched already with pgcluster sources.\n\nBest regards,\n\nJan", "msg_date": "Fri, 22 Aug 2008 16:37:59 +0200", "msg_from": "Jan Otto <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" }, { "msg_contents": "Jan Otto wrote:\n> Hi Mathias,\n> \n> On Aug 22, 2008, at 8:35 AM, Mathias Stjernstr�m wrote:\n> \n>> I Agree with Robert but i never heard of Cybercluster before. Does\n>> anyone have any experience with Cybercluster? It sounds really \n>> interesting!\n> \n> Some months ago i took a look into cybercluster. At that point \n> cybercluster was basically postgres-source 8.3 patched already with\n> pgcluster sources.\n> \n\nI do believe it is a packaged version of pgcluster\n\nDoes anyone have experience with bucardo? It's still a recent addition\nto open source offerings. No DDL replication but it does support two\nmasters.\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Sat, 23 Aug 2008 17:09:27 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The state of PG replication in 2008/Q2?" } ]
[ { "msg_contents": "I only have a few days of experience with postgres and it is working \ngreat, but when I start building up test queries I ran into a problem I \ndon't understand.\n\nOne query works fast, returning results in under a second. If I insert \none more join into the table however, it switches to nested-loops and \ntakes minutes. It does this randomly.. using integer compares in the \nquery lines is fast for instance, but using LIKE operators causes it to \nuse loops again.\n\nI have about 200,000 items per table, so nested loops cause lots of pain.\nThe database has objects with lots (100 to 1000) 'specs' for each object \nin another table, so I have to pull them out to do queries and sorting on \nthem.\n\nHere are the two queries. They are the same except the first has \ntwo 'find' joins and the other has three.\n\nI assume that I am hitting some limit somewhere that is causing postgres \nto change it's idea on how to run the query. Can I force it to use hash \njoins?\n\nThanks!\n\nSo this is fast... \n\nEXPLAIN ANALYZE SELECT * FROM logical \nLEFT JOIN model ON logical.uid = model.logical_uid \nLEFT JOIN company ON model.company_uid = company.uid \nLEFT JOIN type ON logical.type::INT = type.uid \nJOIN specs spec_find1 ON spec_find1 .spec_uid='8' AND spec_find1 .text LIKE '%' AND spec_find1 .logical_uid=model.logical_uid \nJOIN specs spec_find2 ON spec_find2.spec_uid='8' AND spec_find2.text LIKE '%' AND spec_find2.logical_uid=model.logical_uid \nLEFT JOIN specs specs_sort ON specs_sort.spec_uid='4' AND specs_sort.logical_uid=logical.uid \nORDER BY specs_sort.num;\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=5398.43..5398.44 rows=6 width=247) (actual time=331.546..333.303 rows=3555 loops=1)\n Sort Key: specs_sort.num\n Sort Method: quicksort Memory: 981kB\n -> Hash Left Join (cost=1087.68..5398.35 rows=6 width=247) (actual time=37.309..315.451 rows=3555 loops=1)\n Hash Cond: (model.company_uid = company.uid)\n -> Hash Left Join (cost=1086.28..5396.86 rows=6 width=217) (actual time=37.235..308.787 rows=3555 loops=1)\n Hash Cond: (logical.uid = specs_sort.logical_uid)\n -> Hash Left Join (cost=694.84..5004.96 rows=6 width=180) (actual time=22.433..284.832 rows=3555 loops=1)\n Hash Cond: ((logical.type)::integer = type.uid)\n -> Nested Loop (cost=693.55..5003.62 rows=6 width=168) (actual time=22.361..273.502 rows=3555 loops=1)\n -> Hash Join (cost=693.55..4953.84 rows=6 width=110) (actual time=22.330..237.717 rows=3555 loops=1)\n Hash Cond: (model.logical_uid = spec_find1.logical_uid)\n -> Seq Scan on model (cost=0.00..3337.82 rows=184182 width=36) (actual time=0.017..99.289 rows=184182 loops=1)\n -> Hash (cost=691.60..691.60 rows=156 width=74) (actual time=21.795..21.795 rows=2196 loops=1)\n -> Hash Join (cost=339.84..691.60 rows=156 width=74) (actual time=8.558..19.060 rows=2196 loops=1)\n Hash Cond: (spec_find1.logical_uid = spec_find2.logical_uid)\n -> Seq Scan on specs spec_find1 (cost=0.00..326.89 rows=1036 width=37) (actual time=0.023..6.765 rows=2196 loops=1)\n Filter: (((text)::text ~~ '%'::text) AND (spec_uid = 8))\n -> Hash (cost=326.89..326.89 rows=1036 width=37) (actual time=8.508..8.508 rows=2196 loops=1)\n -> Seq Scan on specs spec_find2 (cost=0.00..326.89 rows=1036 width=37) (actual time=0.010..6.667 rows=2196 loops=1)\n Filter: (((text)::text ~~ '%'::text) AND (spec_uid = 8))\n -> Index Scan using 
logical_pkey on logical (cost=0.00..8.28 rows=1 width=58) (actual time=0.006..0.007 rows=1 loops=3555)\n Index Cond: (logical.uid = model.logical_uid)\n -> Hash (cost=1.13..1.13 rows=13 width=12) (actual time=0.024..0.024 rows=13 loops=1)\n -> Seq Scan on type (cost=0.00..1.13 rows=13 width=12) (actual time=0.004..0.011 rows=13 loops=1)\n -> Hash (cost=287.57..287.57 rows=8309 width=37) (actual time=14.773..14.773 rows=8172 loops=1)\n -> Seq Scan on specs specs_sort (cost=0.00..287.57 rows=8309 width=37) (actual time=0.012..8.206 rows=8172 loops=1)\n Filter: (spec_uid = 4)\n -> Hash (cost=1.18..1.18 rows=18 width=30) (actual time=0.032..0.032 rows=18 loops=1)\n -> Seq Scan on company (cost=0.00..1.18 rows=18 width=30) (actual time=0.005..0.013 rows=18 loops=1)\n Total runtime: 338.389 ms\n(31 rows)\n\n\nAdding a third 'spec_find' join causes the outer joins to be looped.\n\nEXPLAIN ANALYZE SELECT logical.uid FROM logical \nLEFT JOIN model ON logical.uid = model.logical_uid \nLEFT JOIN company ON model.company_uid = company.uid \nLEFT JOIN type ON logical.type::INT = type.uid \nJOIN specs spec_find1 ON spec_find1 .spec_uid='8' AND spec_find1 .text LIKE '%' AND spec_find1 .logical_uid=model.logical_uid \nJOIN specs spec_find2 ON spec_find2.spec_uid='8' AND spec_find2.text LIKE '%' AND spec_find2.logical_uid=model.logical_uid \nJOIN specs spec_find3 ON spec_find3 .spec_uid='8' AND spec_find3 .text LIKE '%' AND spec_find3 .logical_uid=model.logical_uid \nLEFT JOIN specs specs_sort ON specs_sort.spec_uid='4' AND specs_sort.logical_uid=logical.uid \nORDER BY specs_sort.num;\n\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=5464.51..5464.52 rows=1 width=8) (actual time=41770.603..41772.111 rows=3555 loops=1)\n Sort Key: specs_sort.num\n Sort Method: quicksort Memory: 203kB\n -> Nested Loop Left Join (cost=1035.47..5464.50 rows=1 width=8) (actual time=40.324..41756.946 rows=3555 loops=1)\n -> Nested Loop Left Join (cost=1035.47..5464.22 rows=1 width=12) (actual time=40.313..41716.150 rows=3555 loops=1)\n -> Nested Loop Left Join (cost=1035.47..5463.93 rows=1 width=21) (actual time=40.272..41639.961 rows=3555 loops=1)\n Join Filter: (specs_sort.logical_uid = logical.uid)\n -> Nested Loop (cost=1035.47..5072.50 rows=1 width=17) (actual time=32.400..340.250 rows=3555 loops=1)\n -> Hash Join (cost=1035.47..5064.20 rows=1 width=20) (actual time=32.370..251.601 rows=3555 loops=1)\n Hash Cond: (model.logical_uid = spec_find1.logical_uid)\n -> Seq Scan on model (cost=0.00..3337.82 rows=184182 width=8) (actual time=0.015..95.970 rows=184182 loops=1)\n -> Hash (cost=1035.18..1035.18 rows=23 width=12) (actual time=31.890..31.890 rows=2196 loops=1)\n -> Hash Join (cost=679.68..1035.18 rows=23 width=12) (actual time=17.178..30.063 rows=2196 loops=1)\n Hash Cond: (spec_find1.logical_uid = spec_find2.logical_uid)\n -> Hash Join (cost=339.84..691.60 rows=156 width=8) (actual time=8.568..18.308 rows=2196 loops=1)\n Hash Cond: (spec_find1.logical_uid = spec_find3.logical_uid)\n -> Seq Scan on specs spec_find1 (cost=0.00..326.89 rows=1036 width=4) (actual time=0.015..6.603 rows=2196 loops=1)\n Filter: (((text)::text ~~ '%'::text) AND (spec_uid = 8))\n -> Hash (cost=326.89..326.89 rows=1036 width=4) (actual time=8.513..8.513 rows=2196 loops=1)\n -> Seq Scan on specs spec_find3 (cost=0.00..326.89 rows=1036 width=4) (actual time=0.011..6.851 rows=2196 loops=1)\n 
Filter: (((text)::text ~~ '%'::text) AND (spec_uid = 8))\n -> Hash (cost=326.89..326.89 rows=1036 width=4) (actual time=8.588..8.588 rows=2196 loops=1)\n -> Seq Scan on specs spec_find2 (cost=0.00..326.89 rows=1036 width=4) (actual time=0.022..6.939 rows=2196 loops=1)\n Filter: (((text)::text ~~ '%'::text) AND (spec_uid = 8))\n -> Index Scan using logical_pkey on logical (cost=0.00..8.28 rows=1 width=13) (actual time=0.015..0.019 rows=1 loops=3555)\n Index Cond: (logical.uid = model.logical_uid)\n -> Seq Scan on specs specs_sort (cost=0.00..287.57 rows=8309 width=8) (actual time=0.009..7.996 rows=8172 loops=3555)\n Filter: (specs_sort.spec_uid = 4)\n -> Index Scan using type_pkey on type (cost=0.00..0.27 rows=1 width=4) (actual time=0.010..0.011 rows=1 loops=3555)\n Index Cond: ((logical.type)::integer = type.uid)\n -> Index Scan using company_pkey on company (cost=0.00..0.27 rows=1 width=4) (actual time=0.006..0.007 rows=1 loops=3555)\n Index Cond: (model.company_uid = company.uid)\n Total runtime: 41773.972 ms\n(33 rows)\n\n--\nIan Smith\nwww.ian.org\n", "msg_date": "Thu, 21 Aug 2008 18:50:35 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Why do my hash joins turn to nested loops?" }, { "msg_contents": "[email protected] writes:\n> One query works fast, returning results in under a second. If I insert \n> one more join into the table however, it switches to nested-loops and \n> takes minutes.\n\nI think you need to raise from_collapse_limit and/or\njoin_collapse_limit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 21 Aug 2008 20:49:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why do my hash joins turn to nested loops? " }, { "msg_contents": "On Thu, 21 Aug 2008, Tom Lane wrote:\n> I think you need to raise from_collapse_limit and/or\n> join_collapse_limit.\n\nAhah, that was it.. a much simpler solution than I was fearing.\n\nI had already re-written the queries to get around it, but ran into \nanother snag with that method, so this was good timing.\n\nThanks!\n\n--\nIan Smith\nwww.ian.org\n", "msg_date": "Fri, 22 Aug 2008 14:33:25 -0400 (EDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Why do my hash joins turn to nested loops? " } ]
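A concrete sketch of the fix Tom Lane points to above, for readers hitting the same plan flip: both planner settings default to 8, which the eight-relation variant of this query appears to run up against once the third spec_find join is added. The value 20 and the database name below are illustrative assumptions, not taken from the thread.

SET from_collapse_limit = 20;
SET join_collapse_limit = 20;
-- re-run EXPLAIN ANALYZE on the multi-join query and compare the plan;
-- if the hash-join plan comes back, the settings can be persisted, e.g.:
ALTER DATABASE mydb SET join_collapse_limit = 20;
ALTER DATABASE mydb SET from_collapse_limit = 20;

Raising these limits gives the planner more freedom to reorder large join lists, at the cost of some extra planning time per query.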
[ { "msg_contents": "[for the purpose of this post, 'blocking' refers to an I/O operation\ntaking a long time for reasons other than the amount of work the I/O\noperation itself actually implies; not to use of blocking I/O calls or\nanything like that]\n\nHello,\n\nI have a situation in which deterministic latency is a lot more\nimportant than throughput.\n\nI realize this is a hugely complex topic and that there is inteaction\nbetween many different things (pg buffer cache, os buffer cache, raid\ncontroller caching, wal buffers, storage layout, etc). I already know\nseveral things I definitely want to do to improve things.\n\nBut in general, it would be very interesting to see, at any given\nmoment, what PostgreSQL backends are actually blocking on from the\nperspective of PostgreSQL.\n\nSo for example, if I have 30 COMMIT:s that are active, to know whether\nit is simply waiting on the WAL fsync or actually waiting on a data\nfsync because a checkpoint is being created. or similarly, for\nnon-commits whether they are blocking because WAL buffers is full and\nwriting them out is blocking, etc.\n\nThis would make it easier to observe and draw conclusions when\ntweaking different things in pg/the os/the raid controller.\n\nIs there currently a way of dumping such information? I.e., asking PG\n\"what are backends waiting on right now?\".\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Fri, 22 Aug 2008 13:52:44 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Identifying the nature of blocking I/O" }, { "msg_contents": "Peter Schuller wrote:\n\n> But in general, it would be very interesting to see, at any given\n> moment, what PostgreSQL backends are actually blocking on from the\n> perspective of PostgreSQL.\n\nThe recent work on DTrace support for PostgreSQL will probably give you\nthe easiest path to useful results. You'll probably need an OpenSolaris\nor (I think) FreeBSD host, though, rather than a Linux host.\n\n--\nCraig Ringer\n", "msg_date": "Mon, 25 Aug 2008 08:30:18 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O" }, { "msg_contents": "More info/notes on DTrace --\n\nDTrace is available now on MacOSX, Solaris 10, OpenSolaris, and FreeBSD.\nLinux however is still in the dark ages when it comes to system monitoring,\nespecially with I/O.\n\nYou can write some custom DTrace scripts to map any of the basic Postgres\noperations or processes to things that it is waiting on in the OS. You can\ndefinitely write a script that would be able to track the I/O in reads and\nwrites caused by a transaction, how long those took, what the I/O sizes\nwere, and even what portion of the disk it went to.\n\nhttp://lethargy.org/~jesus/archives/74-PostgreSQL-performance-through-the-eyes-of-DTrace.html<http://lethargy.org/%7Ejesus/archives/74-PostgreSQL-performance-through-the-eyes-of-DTrace.html>\nhttp://www.brendangregg.com/dtrace.html#DTraceToolkit\n\nEven without the custom DTrace probes in Postgres, DTrace gives you the\nability to see what the OS is doing, how long it is taking, and what\nprocesses, files, locks, or other things are involved. Most important\nhowever is the ability to correlate things and not just deal with high level\naggregates like more simplistic tools. 
It takes some work and it is not the\neasiest thing to use, as its power comes at a complexity cost.\n\n\n\nOn Sun, Aug 24, 2008 at 5:30 PM, Craig Ringer\n<[email protected]>wrote:\n\n> Peter Schuller wrote:\n>\n> > But in general, it would be very interesting to see, at any given\n> > moment, what PostgreSQL backends are actually blocking on from the\n> > perspective of PostgreSQL.\n>\n> The recent work on DTrace support for PostgreSQL will probably give you\n> the easiest path to useful results. You'll probably need an OpenSolaris\n> or (I think) FreeBSD host, though, rather than a Linux host.\n>\n> --\n> Craig Ringer\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 24 Aug 2008 18:34:18 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O" }, { "msg_contents": "Craig Ringer <[email protected]> writes:\n> Peter Schuller wrote:\n>> But in general, it would be very interesting to see, at any given\n>> moment, what PostgreSQL backends are actually blocking on from the\n>> perspective of PostgreSQL.\n\n> The recent work on DTrace support for PostgreSQL will probably give you\n> the easiest path to useful results. You'll probably need an OpenSolaris\n> or (I think) FreeBSD host, though, rather than a Linux host.\n\n<cant-resist>get a mac</cant-resist>\n\n(Mind you, I don't think Apple sells any hardware that would be really\nsuitable for a big-ass database server.
But for development purposes,\nOS X on a recent laptop is a pretty nice unix-at-the-core-plus-eye-candy\nenvironment.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Aug 2008 21:37:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O " }, { "msg_contents": "\"Scott Carey\" <[email protected]> writes:\n> DTrace is available now on MacOSX, Solaris 10, OpenSolaris, and FreeBSD.\n> Linux however is still in the dark ages when it comes to system monitoring,\n> especially with I/O.\n\nOh, after poking around a bit, I should note that some of my Red Hat\ncompatriots think that \"systemtap\" is the long-term Linux answer here.\nI know zip about it myself, but it's something to read up on if you are\nlooking for better performance monitoring on Linux.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Aug 2008 21:49:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O " }, { "msg_contents": "Tom Lane wrote:\n> \"Scott Carey\" <[email protected]> writes:\n> > DTrace is available now on MacOSX, Solaris 10, OpenSolaris, and FreeBSD.\n> > Linux however is still in the dark ages when it comes to system monitoring,\n> > especially with I/O.\n> \n> Oh, after poking around a bit, I should note that some of my Red Hat\n> compatriots think that \"systemtap\" is the long-term Linux answer here.\n> I know zip about it myself, but it's something to read up on if you are\n> looking for better performance monitoring on Linux.\n\nFWIW there are a number of tracing options on Linux, none of which is\nsaid to be yet at the level of DTrace. See here for an article on the\ntopic: http://lwn.net/Articles/291091/\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sun, 24 Aug 2008 22:16:09 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O" }, { "msg_contents": "On Mon, Aug 25, 2008 at 3:34 AM, Scott Carey <[email protected]> wrote:\n> DTrace is available now on MacOSX, Solaris 10, OpenSolaris, and FreeBSD.\n> Linux however is still in the dark ages when it comes to system monitoring,\n> especially with I/O.\n\nWhile that's true, newer 2.6 kernel versions at least have I/O\naccounting built in, something which only used to be available through\nthe \"atop\" accounting kernel patch:\n\n$ cat /proc/22785/io\nrchar: 31928\nwchar: 138\nsyscr: 272\nsyscw: 4\nread_bytes: 0\nwrite_bytes: 0\ncancelled_write_bytes: 0\n\nAlexander.\n", "msg_date": "Mon, 25 Aug 2008 10:18:24 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O" }, { "msg_contents": "This matches not exactly the topic but it is sometimes helpfull.\nIf you've enabled I/O accounting and a kernel >= 2.6.20 (needs\nto be compiled with\n\n**CONFIG_TASKSTATS=y\nCONFIG_TASK_DELAY_ACCT=y\nCONFIG_TASK_XACCT=y\nCONFIG_TASK_IO_ACCOUNTING=y\n)\n\nand sysstat package (>= 7.1.5) installed you can use \"pidstat\"\ncommand which show's you the processes doing I/O in kb/sec.\n\nRobert\n\n**\n\n\nAlexander Staubo wrote:\n> On Mon, Aug 25, 2008 at 3:34 AM, Scott Carey <[email protected]> wrote:\n> \n>> DTrace is available now on MacOSX, Solaris 10, OpenSolaris, and FreeBSD.\n>> Linux however is still in the dark ages when it comes to system monitoring,\n>> especially with 
I/O.\n>> \n>\n> While that's true, newer 2.6 kernel versions at least have I/O\n> accounting built in, something which only used to be available through\n> the \"atop\" accounting kernel patch:\n>\n> $ cat /proc/22785/io\n> rchar: 31928\n> wchar: 138\n> syscr: 272\n> syscw: 4\n> read_bytes: 0\n> write_bytes: 0\n> cancelled_write_bytes: 0\n>\n> Alexander.\n>\n> \n\n", "msg_date": "Mon, 25 Aug 2008 10:32:24 +0200", "msg_from": "RW <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O" }, { "msg_contents": "On Fri, Aug 22, 2008 at 7:52 AM, Peter Schuller\n<[email protected]> wrote:\n> Is there currently a way of dumping such information? I.e., asking PG\n> \"what are backends waiting on right now?\".\n\nUnfortunately, not within Postgres itself. The question, \"what is the\ndatabase waiting on?\" is a good one, and one Oracle understood in the\nearly 90's. It is for that reason that EnterpriseDB added RITA, the\nRuntime Instrumentation and Tracing Architecture, to their Advanced\nServer product. RITA gives DBAs some of the same information as the\nOracle Wait Interface does regarding what the database is waiting for,\nsuch as locks, I/O, and which relation/block. While it's not as\nefficient as DTrace due to Linux's lack of a good high-resolution\nuser-mode timer, no one has found it to have a noticible overhead on\nthe throughput of a system in benchmarks or real-world applications.\n\nIf you're on a DTrace platform, I would suggest using it. Otherwise,\nyou can try and use strace/ltrace on Linux, but that's probably not\ngoing to get you the answers you need quickly or easily enough. Until\nenough users ask for this type of feature, the community isn't going\nto see it as valuable enough to add to the core engine. IIRC,\nsystemtap is pretty much dead :(\n\n-- \nJonah H. Harris, Senior DBA\nmyYearbook.com\n", "msg_date": "Mon, 25 Aug 2008 10:20:37 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O" }, { "msg_contents": "On Sun, 24 Aug 2008, Tom Lane wrote:\n\n> Mind you, I don't think Apple sells any hardware that would be really \n> suitable for a big-ass database server.\n\nIf you have money to burn, you can get an XServe with up to 8 cores and \n32GB of RAM, and get a card to connect it to a Fiber Channel disk array. \nFor only moderately large requirements, you can even get a card with 256MB \nof battery-backed cache (rebranded LSI) to attach the 3 drives in the \nchassis. None of these are very cost effective compared to servers like \nthe popular HP models people mention here regularly, but it is possible.\n\nAs for Systemtap on Linux, it might be possible that will accumulate \nenough of a standard library to be usable by regular admins one day, but I \ndon't see any sign that's a priority for development. Right now what you \nhave to know in order to write useful scripts is so much more complicated \nthan DTrace, where there's all sorts of useful things you can script \ntrivially. I think a good part of DTrace's success comes from flattening \nthat learning curve. Take a look at the one-liners at \nhttp://www.solarisinternals.com/wiki/index.php/DTraceToolkit and compare \nthem against http://sourceware.org/systemtap/examples/\n\nThat complexity works against the tool on so many levels. For example, I \ncan easily imagine selling even a paranoid admin on running a simple \nDTrace script like the one-line examples. 
Whereas every Systemtap example \nI've seen looks pretty scary at first, and I can't imagine a DBA in a \ntypical enterprise environment being able to convince their associated \nadmin team they're perfectly safe to run in production.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 28 Aug 2008 23:53:39 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Identifying the nature of blocking I/O " } ]
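Peter's original question ("what are backends waiting on right now?") has no complete answer inside 8.3-era PostgreSQL itself, but lock waits at least are visible from SQL without DTrace. A minimal sketch against the 8.3 system views (column names are the 8.3 ones; this covers lock waits only, not WAL or data-file fsyncs):

SELECT procpid, usename, waiting, query_start, current_query
FROM pg_stat_activity
WHERE waiting;

SELECT pid, locktype, mode, relation::regclass AS relation, granted
FROM pg_locks
WHERE NOT granted;

For the I/O side of the question, the DTrace and pidstat suggestions earlier in this thread remain the practical options.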
[ { "msg_contents": "Hello,\nI'm having trouble with a Nested Loop being selected for a rather \ncomplex query; it turns out this is a pretty bad plan as the nested \nloop's row estimates are quite off (1 estimated / 1207881 actual). If \nI disable enable_nestloop, the query executes much faster (42 seconds \ninstead of 605). The tables in the query have all been ANALYZEd just \nbefore generating these plans.\n\nHere are the plans with and without enable_nestloop:\n\nhttp://pastie.org/258043\n\nThe inventory table is huge; it currently has about 1.3 x 10^9 tuples. \nThe items table has around 10,000 tuples, and the other tables in the \nquery are tiny.\n\nAny ideas or suggestions would be greatly appreciated. Thanks!\n--\nBrad Ediger", "msg_date": "Fri, 22 Aug 2008 10:26:28 -0500", "msg_from": "Brad Ediger <[email protected]>", "msg_from_op": true, "msg_subject": "Nested Loop join being improperly chosen" }, { "msg_contents": "I had a similar problem here:\nhttp://archives.postgresql.org/pgsql-bugs/2008-07/msg00026.php\n\nIs the nested loop performing a LEFT join with yours? It's a little\ndifficult to tell just from the query plan you showed.\n\nA work around for mine was to use a full outer join and eliminate the extra\nrows in the where clause. A bit of a hack but it changed a 2 min query into\none that ran in under a second.\n\nOf course this is not helping with your problem but at least may trigger\nsome more feedback.\n\nDavid.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Brad Ediger\nSent: 22 August 2008 16:26\nTo: [email protected]\nSubject: [PERFORM] Nested Loop join being improperly chosen\n\nHello,\nI'm having trouble with a Nested Loop being selected for a rather \ncomplex query; it turns out this is a pretty bad plan as the nested \nloop's row estimates are quite off (1 estimated / 1207881 actual). If \nI disable enable_nestloop, the query executes much faster (42 seconds \ninstead of 605). The tables in the query have all been ANALYZEd just \nbefore generating these plans.\n\nHere are the plans with and without enable_nestloop:\n\nhttp://pastie.org/258043\n\nThe inventory table is huge; it currently has about 1.3 x 10^9 tuples. \nThe items table has around 10,000 tuples, and the other tables in the \nquery are tiny.\n\nAny ideas or suggestions would be greatly appreciated. Thanks!\n--\nBrad Ediger\n\n\n", "msg_date": "Fri, 29 Aug 2008 00:01:05 +0100", "msg_from": "\"David Rowley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Nested Loop join being improperly chosen" }, { "msg_contents": "On Aug 28, 2008, at 6:01 PM, David Rowley wrote:\n\n> I had a similar problem here:\n> http://archives.postgresql.org/pgsql-bugs/2008-07/msg00026.php\n>\n> Is the nested loop performing a LEFT join with yours? It's a little\n> difficult to tell just from the query plan you showed.\n>\n> A work around for mine was to use a full outer join and eliminate \n> the extra\n> rows in the where clause. A bit of a hack but it changed a 2 min \n> query into\n> one that ran in under a second.\n>\n> Of course this is not helping with your problem but at least may \n> trigger\n> some more feedback.\n\nHi David,\nThanks for your input. All of the joins are inner joins; the query is \na large one with 5 or 6 subqueries. It was being generated from a \npopular data warehousing / business intelligence product whose name I \nshall not mention. The vendor ended up pulling the subselects out into \nSELECT INTO statements on temporary tables. 
It's kludgey, but it works \nmuch better.\n\nThanks,\nBrad", "msg_date": "Thu, 28 Aug 2008 18:22:19 -0500", "msg_from": "Brad Ediger <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Nested Loop join being improperly chosen" } ]
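For readers who cannot rewrite the query the way the vendor did, two hedged workarounds follow from this thread; the table and column names below are placeholders, and SET LOCAL confines the planner override to a single transaction.

BEGIN;
SET LOCAL enable_nestloop = off;
-- run the problematic reporting query here
COMMIT;

-- or materialize one of the subselects so the planner sees real row counts
-- (tmp_items and its SELECT are stand-ins for one of the real subqueries):
CREATE TEMP TABLE tmp_items AS
SELECT uid FROM items;
ANALYZE tmp_items;
-- then join tmp_items in the outer query instead of the inline subselect

Raising the statistics target on the join columns and re-running ANALYZE is also worth trying first, since the root cause is the 1-row estimate on the inner side of the loop.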
[ { "msg_contents": "Hi list.\n \nI have a table with over 30 million rows. Performance was dropping steadily\nso I moved old data not needed online to an historic table. Now the table\nhas about 14 million rows. I don't need the disk space returned to the OS\nbut I do need to improve performance. Will a plain vacuum do or is a vacuum\nfull necessary?\n¿Would a vacuum full improve performance at all?\n \nThanks for your hindsight.\nRegards,\n \nFernando.\n\n", "msg_date": "Fri, 22 Aug 2008 17:36:44 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Big delete on big table... now what?" }, { "msg_contents": ">>> \"Fernando Hevia\" <[email protected]> wrote: \n \n> I have a table with over 30 million rows. Performance was dropping\nsteadily\n> so I moved old data not needed online to an historic table. Now the\ntable\n> has about 14 million rows. I don't need the disk space returned to\nthe OS\n> but I do need to improve performance. Will a plain vacuum do or is a\nvacuum\n> full necessary?\n> *Would a vacuum full improve performance at all?\n \nIf this database can be out of production for long enough to run it\n(possibly a few hours, depending on hardware, configuration, table\nwidth, indexes) your best option might be to CLUSTER and ANALYZE the\ntable. It gets more complicated if you can't tolerate down-time.\n \n-Kevin\n", "msg_date": "Fri, 22 Aug 2008 16:30:59 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big delete on big table... now what?" }, { "msg_contents": "\"Fernando Hevia\" <[email protected]> wrote:\n>\n> Hi list.\n> \n> I have a table with over 30 million rows. Performance was dropping steadily\n> so I moved old data not needed online to an historic table. Now the table\n> has about 14 million rows. I don't need the disk space returned to the OS\n> but I do need to improve performance. Will a plain vacuum do or is a vacuum\n> full necessary?\n> ¿Would a vacuum full improve performance at all?\n\nIf you can afford the downtime on that table, cluster would be best.\n\nIf not, do the normal vacuum and analyze. This is unlikely to improve\nthe performance much (although it may shrink the table _some_) but\nregular vacuum will keep performance from getting any worse.\n\nYou can also reindex pretty safely. Any queries that run during the\nreindex will just have to do so without the indexes.\n\nLonger term, if you remove smaller groups of rows more frequently, you'll\nprobably be able to maintain performance and table bloat at a reasonable\nlevel with normal vacuum.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 22 Aug 2008 18:44:07 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big delete on big table... now what?" }, { "msg_contents": "\"Bill Moran\" <[email protected]> writes:\n\n> \"Fernando Hevia\" <[email protected]> wrote:\n>>\n>> Hi list.\n>> \n>> I have a table with over 30 million rows. Performance was dropping steadily\n>> so I moved old data not needed online to an historic table. Now the table\n>> has about 14 million rows. I don't need the disk space returned to the OS\n>> but I do need to improve performance. Will a plain vacuum do or is a vacuum\n>> full necessary?\n>> ¿Would a vacuum full improve performance at all?\n>\n> If you can afford the downtime on that table, cluster would be best.\n>\n> If not, do the normal vacuum and analyze. 
This is unlikely to improve\n> the performance much (although it may shrink the table _some_) but\n> regular vacuum will keep performance from getting any worse.\n\nNote that CLUSTER requires enough space to store the new and the old copies of\nthe table simultaneously. That's the main reason for VACUUM FULL to still\nexist.\n\nThere is also the option of doing something like (assuming id is already an\ninteger -- ie this doesn't actually change the data):\n\n ALTER TABLE x ALTER id TYPE integer USING id;\n\nwhich will rewrite the whole table. This is effectively the same as CLUSTER\nexcept it doesn't order the table according to an index. It will still require\nenough space to hold two copies of the table but it will be significantly\nfaster.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n Ask me about EnterpriseDB's RemoteDBA services!\n", "msg_date": "Sat, 23 Aug 2008 01:39:56 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big delete on big table... now what?" }, { "msg_contents": "> Gregory Stark <[email protected]> writes:\n> \n> \"Bill Moran\" <[email protected]> writes:\n> \n> > \"Fernando Hevia\" <[email protected]> wrote:\n> >> Hi list.\n> >> I have a table with over 30 million rows. Performance was dropping \n> >> steadily so I moved old data not needed online to an \n> historic table. \n> >> Now the table has about 14 million rows. I don't need the \n> disk space \n> >> returned to the OS but I do need to improve performance. \n> Will a plain \n> >> vacuum do or is a vacuum full necessary?\n> >> ¿Would a vacuum full improve performance at all?\n> >\n> > If you can afford the downtime on that table, cluster would be best.\n> >\n> > If not, do the normal vacuum and analyze. This is unlikely \n> to improve \n> > the performance much (although it may shrink the table _some_) but \n> > regular vacuum will keep performance from getting any worse.\n> \n> Note that CLUSTER requires enough space to store the new and \n> the old copies of the table simultaneously. That's the main \n> reason for VACUUM FULL to still exist.\n> \n> There is also the option of doing something like (assuming id \n> is already an integer -- ie this doesn't actually change the data):\n> \n> ALTER TABLE x ALTER id TYPE integer USING id;\n> \n> which will rewrite the whole table. This is effectively the \n> same as CLUSTER except it doesn't order the table according \n> to an index. It will still require enough space to hold two \n> copies of the table but it will be significantly faster.\n> \n\nYes, I can afford a downtime on Sunday.\nActually the clustering option would help since most of our slow queries use\nthe same index.\n\nThanks Bill and Gregory for the advice.\nRegards,\nFernando.\n\n", "msg_date": "Mon, 25 Aug 2008 10:38:08 -0300", "msg_from": "\"Fernando Hevia\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big delete on big table... now what?" } ]
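To make the options in this thread concrete, here is a hedged sketch; my_history stands in for the real table and idx_my_history_ts for the index most queries use. CLUSTER takes an exclusive lock and needs room for old and new copies, so it belongs in the Sunday window Fernando mentions.

-- rewrite and reorder in one pass (Kevin's and Bill's suggestion):
CLUSTER my_history USING idx_my_history_ts;   -- 8.3 syntax
ANALYZE my_history;

-- Gregory's rewrite-without-reordering trick (id is already integer,
-- so the USING clause changes no data but forces a fresh heap):
ALTER TABLE my_history ALTER COLUMN id TYPE integer USING id;

-- minimum-impact alternative: just keep bloat from growing further
VACUUM ANALYZE my_history;

REINDEX on the most-used indexes is another middle ground if only index bloat is the problem.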
[ { "msg_contents": "Hi,\n\nI use Postgresql 8.3.1-1 to store a lot of data coming from a large amount\nof sensors. In order to have good performances on querying by timestamp on\neach sensor, I partitionned my measures table for each sensor. Thus I create\na lot of tables.\nI simulated a large sensor network with 3000 nodes so I have ~3000 tables.\nAnd it appears that each insert (in separate transactions) in the database\ntakes about 300ms (3-4 insert per second) in tables where there is just few\ntuples (< 10). I think you can understand that it's not efficient at all\nbecause I need to treat a lot of inserts.\n\nDo you have any idea why it is that slow ? and how can have good insert ?\n\nMy test machine: Intel p4 2.4ghz, 512mb ram, Ubuntu Server (ext3)\niostat tells me that I do : 0.5MB/s reading and ~6-7MB/s writing while\nconstant insert\n\nHere is the DDL of the measures tables:\n-------------------------------------------------------\nCREATE TABLE measures_0\n(\n \"timestamp\" timestamp without time zone,\n storedtime timestamp with time zone,\n count smallint,\n \"value\" smallint[]\n)\nWITH (OIDS=FALSE);\nCREATE INDEX measures_0_1_idx\n ON measures_0\n USING btree\n ((value[1]));\n\n-- Index: measures_0_2_idx\nCREATE INDEX measures_0_2_idx\n ON measures_0\n USING btree\n ((value[2]));\n\n-- Index: measures_0_3_idx\nCREATE INDEX measures_0_3_idx\n ON measures_0\n USING btree\n ((value[3]));\n\n-- Index: measures_0_count_idx\nCREATE INDEX measures_0_count_idx\n ON measures_0\n USING btree\n (count);\n\n-- Index: measures_0_timestamp_idx\nCREATE INDEX measures_0_timestamp_idx\n ON measures_0\n USING btree\n (\"timestamp\");\n\n-- Index: measures_0_value_idx\nCREATE INDEX measures_0_value_idx\n ON measures_0\n USING btree\n (value);\n-------------------------------------------------------\n\nRegards\n\nLoïc Petit\n\nHi,I use Postgresql 8.3.1-1 to store a lot of data coming from a large amount of sensors. In order to have good performances on querying by timestamp on each sensor, I partitionned my measures table for each sensor. Thus I create a lot of tables.\nI simulated a large sensor network with 3000 nodes so I have ~3000 tables. And it appears that each insert (in separate transactions) in the database takes about 300ms (3-4 insert per second) in tables where there is just few tuples (< 10). I think you can understand that it's not efficient at all because I need to treat a lot of inserts.\nDo you have any idea why it is that slow ? 
and how can have good insert ?My test machine: Intel p4 2.4ghz, 512mb ram, Ubuntu Server (ext3)iostat tells me that I do : 0.5MB/s reading and ~6-7MB/s writing while constant insert\nHere is the DDL of the measures tables:-------------------------------------------------------CREATE TABLE measures_0( \"timestamp\" timestamp without time zone, storedtime timestamp with time zone,\n count smallint, \"value\" smallint[])WITH (OIDS=FALSE);CREATE INDEX measures_0_1_idx ON measures_0 USING btree ((value[1]));-- Index: measures_0_2_idxCREATE INDEX measures_0_2_idx\n ON measures_0 USING btree ((value[2]));-- Index: measures_0_3_idxCREATE INDEX measures_0_3_idx ON measures_0 USING btree ((value[3]));-- Index: measures_0_count_idxCREATE INDEX measures_0_count_idx\n ON measures_0 USING btree (count);-- Index: measures_0_timestamp_idxCREATE INDEX measures_0_timestamp_idx ON measures_0 USING btree (\"timestamp\");-- Index: measures_0_value_idx\nCREATE INDEX measures_0_value_idx ON measures_0 USING btree (value);-------------------------------------------------------RegardsLoïc Petit", "msg_date": "Sat, 23 Aug 2008 02:41:27 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Large number of tables slow insert" }, { "msg_contents": "Each INDEX create a delay on INSERT. Try to measure performance w/o any\nindexes.\n\n", "msg_date": "Sat, 23 Aug 2008 13:30:19 +0300", "msg_from": "DiezelMax <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "Actually, I've got another test system with only few sensors (thus few tables)\nand it's working well (<10ms insert) with all the indexes.\nI know it's slowing down my performance but I need them to interogate the big\ntables (each one can reach millions rows with time) really fast.\n\n> Each INDEX create a delay on INSERT. Try to measure performance w/o any\n> indexes.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nRegards\n\nLo�c\n\n\n", "msg_date": "Sat, 23 Aug 2008 21:35:34 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Sat, Aug 23, 2008 at 1:35 PM, <[email protected]> wrote:\n> Actually, I've got another test system with only few sensors (thus few tables)\n> and it's working well (<10ms insert) with all the indexes.\n> I know it's slowing down my performance but I need them to interogate the big\n> tables (each one can reach millions rows with time) really fast.\n\nIt's quite likely that on the smaller system the indexes all fit into\nmemory and only require writes, while on the bigger system they are\ntoo large and have to be read from disk first, then written out.\n\nA useful solution is to remove most of the indexes on the main server,\nand set up a slony slave with the extra indexes on it to handle the\nreporting queries.\n", "msg_date": "Sat, 23 Aug 2008 17:49:58 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On this smaller test, the indexes are over the allowed memory size (I've got\nover 400.000 readings per sensor) so they are mostly written in disk. 
And on the\nbig test, I had small indexes (< page_size) because I only had about 5-10 rows\nper table, thus it was 3000*8kb = 24mb which is lower than the allowed memory.\n\nbtw which is the conf parameter that contains the previously read indexes ?\n\nI cannot do test this weekend because I do not have access to the machine but I\nwill try on monday some tests.\n\nThanks for your answers thought\n\nSelon Scott Marlowe <[email protected]>:\n\n> On Sat, Aug 23, 2008 at 1:35 PM, <[email protected]> wrote:\n> > Actually, I've got another test system with only few sensors (thus few\n> tables)\n> > and it's working well (<10ms insert) with all the indexes.\n> > I know it's slowing down my performance but I need them to interogate the\n> big\n> > tables (each one can reach millions rows with time) really fast.\n>\n> It's quite likely that on the smaller system the indexes all fit into\n> memory and only require writes, while on the bigger system they are\n> too large and have to be read from disk first, then written out.\n>\n> A useful solution is to remove most of the indexes on the main server,\n> and set up a slony slave with the extra indexes on it to handle the\n> reporting queries.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n", "msg_date": "Sun, 24 Aug 2008 02:09:13 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Sat, Aug 23, 2008 at 6:09 PM, <[email protected]> wrote:\n> On this smaller test, the indexes are over the allowed memory size (I've got\n> over 400.000 readings per sensor) so they are mostly written in disk.\n\nThey're always written to disk. Just sometimes they're not read.\nNote that the OS caches files as well as pgsql, and I'm not sure what\nyou mean by \"allowed memory size\" but only the shared_buffers would\nhold onto something after it's been operated on. work_mem won't.\n\n> And on the\n> big test, I had small indexes (< page_size) because I only had about 5-10 rows\n> per table, thus it was 3000*8kb = 24mb which is lower than the allowed memory.\n\n> btw which is the conf parameter that contains the previously read indexes ?\n\nNot sure what you're asking. It's all automatic as far as OS and\npostgresql caching goes.\n", "msg_date": "Sat, 23 Aug 2008 18:25:52 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "Loic Petit wrote:\n> Hi,\n>\n> I use Postgresql 8.3.1-1 to store a lot of data coming from a large \n> amount of sensors. In order to have good performances on querying by \n> timestamp on each sensor, I partitionned my measures table for each \n> sensor. Thus I create a lot of tables.\n> I simulated a large sensor network with 3000 nodes so I have ~3000 \n> tables. And it appears that each insert (in separate transactions) in \n> the database takes about 300ms (3-4 insert per second) in tables where \n> there is just few tuples (< 10). I think you can understand that it's \n> not efficient at all because I need to treat a lot of inserts.\nCan you tell us what kind of application this is? It sounds like a \ncontrol systems application where you will write the current values of \nthe sensors with each scan of a PLC. If so, is that a good idea? Also \nis 3,000 sensors realistic? 
That would be a lot of sensors for one \ncontrol system.\n>\n> Do you have any idea why it is that slow ? and how can have good insert ?\nHow often do you write data for a sensor?\nOnce write per sensor per second = 3,000 writes per second\nThat would be an insert plus updates to each of your 6 indexes every \n0.33 ms .\n\nIs that a good idea? Is there a better strategy? What are you measuring \nwith the instruments e.g. is this a process plant or manufacturing \nfacility? What will people do with this data?\n>\n> My test machine: Intel p4 2.4ghz, 512mb ram, Ubuntu Server (ext3)\n> iostat tells me that I do : 0.5MB/s reading and ~6-7MB/s writing while \n> constant insert\n>\n> Here is the DDL of the measures tables:\n> -------------------------------------------------------\n> CREATE TABLE measures_0\n> (\n> \"timestamp\" timestamp without time zone,\n> storedtime timestamp with time zone,\n> count smallint,\n> \"value\" smallint[]\n> )\n> WITH (OIDS=FALSE);\n> CREATE INDEX measures_0_1_idx\n> ON measures_0\n> USING btree\n> ((value[1]));\n>\n> -- Index: measures_0_2_idx\n> CREATE INDEX measures_0_2_idx\n> ON measures_0\n> USING btree\n> ((value[2]));\n>\n> -- Index: measures_0_3_idx\n> CREATE INDEX measures_0_3_idx\n> ON measures_0\n> USING btree\n> ((value[3]));\n>\n> -- Index: measures_0_count_idx\n> CREATE INDEX measures_0_count_idx\n> ON measures_0\n> USING btree\n> (count);\n>\n> -- Index: measures_0_timestamp_idx\n> CREATE INDEX measures_0_timestamp_idx\n> ON measures_0\n> USING btree\n> (\"timestamp\");\n>\n> -- Index: measures_0_value_idx\n> CREATE INDEX measures_0_value_idx\n> ON measures_0\n> USING btree\n> (value);\n> -------------------------------------------------------\n>\n> Regards\n>\n> Lo�c Petit\n>\n> --------------------------------\n>\n>\n\n\n-- \nH. Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Sun, 24 Aug 2008 17:01:58 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "> I use Postgresql 8.3.1-1 to store a lot of data coming from a large amount\n> of sensors. In order to have good performances on querying by timestamp on\n> each sensor, I partitionned my measures table for each sensor. Thus I create\n> a lot of tables.\n> I simulated a large sensor network with 3000 nodes so I have ~3000 tables.\n> And it appears that each insert (in separate transactions) in the database\n> takes about 300ms (3-4 insert per second) in tables where there is just few\n> tuples (< 10). I think you can understand that it's not efficient at all\n> because I need to treat a lot of inserts.\n> \n> Do you have any idea why it is that slow ? and how can have good insert ?\n>\n> My test machine: Intel p4 2.4ghz, 512mb ram, Ubuntu Server (ext3)\n> iostat tells me that I do : 0.5MB/s reading and ~6-7MB/s writing while\n> constant insert\n\nHave you checked what you are bottlenecking on - CPU or disk? Try\niostat/top/etc during the inserts. Also check actual disk utilizatio\n(iostat -x on linux/freebsd; varies on others) to see what percentage\nof time the disk/storage device is busy.\n\nYou say you have 3-4 inserts/second causing 6-7 MB/s writing. That\nsuggests to me the inserts are fairly large. 
Are they in the MB range,\nwhich would account for the I/O?\n\nMy suspicion is that you are bottlenecking on CPU, since in my\nexperience there is definitely something surprisingly slow about\nencoding/decoding data at the protocol level or somewhere else that is\ninvolved in backend/client communication. I.e, I suspect your client\nand/or server is spending a lot of CPU time with things not directly\nrelated to the actual table inserts. If so, various suggested schemes\nw.r.t. indexing, table bloat etc won't help at all.\n\nIn short, 6-7 MB/second would be fairly consistent with INSERT/COPY\noperations being CPU bound on a modern CPU, in my experience. It may\nbe that this is entirely untrue in your case, but it sounds like a\nreasonable thing to at least consider.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Sun, 24 Aug 2008 23:29:53 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Sat, 23 Aug 2008, Loic Petit wrote:\n> I use Postgresql 8.3.1-1 to store a lot of data coming from a large amount of sensors. In order to have good\n> performances on querying by timestamp on each sensor, I partitionned my measures table for each sensor. Thus I create\n> a lot of tables.\n\nAs far as I can see, you are having performance problems as a direct \nresult of this design decision, so it may be wise to reconsider. If you \nhave an index on both the sensor identifier and the timestamp, it should \nperform reasonably well. It would scale a lot better with thousands of \nsensors too.\n\nMatthew\n\n-- \nAnd why do I do it that way? Because I wish to remain sane. Um, actually,\nmaybe I should just say I don't want to be any worse than I already am.\n - Computer Science Lecturer\n", "msg_date": "Tue, 26 Aug 2008 13:50:48 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Tue, Aug 26, 2008 at 6:50 AM, Matthew Wakeling <[email protected]> wrote:\n> On Sat, 23 Aug 2008, Loic Petit wrote:\n>>\n>> I use Postgresql 8.3.1-1 to store a lot of data coming from a large amount\n>> of sensors. In order to have good\n>> performances on querying by timestamp on each sensor, I partitionned my\n>> measures table for each sensor. Thus I create\n>> a lot of tables.\n>\n> As far as I can see, you are having performance problems as a direct result\n> of this design decision, so it may be wise to reconsider. If you have an\n> index on both the sensor identifier and the timestamp, it should perform\n> reasonably well. It would scale a lot better with thousands of sensors too.\n\nProperly partitioned, I'd expect one big table to outperform 3,000\nsparsely populated tables.\n", "msg_date": "Tue, 26 Aug 2008 09:29:15 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" } ]
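Matthew's and Scott's suggestion above is easier to weigh with DDL in front of you. A hedged sketch of the single-table layout, reusing the column list from the measures_0 definition earlier in the thread and adding a hypothetical sensor_id column; the composite index keeps the by-sensor, by-time queries index-driven:

CREATE TABLE measures
(
  sensor_id   integer NOT NULL,
  "timestamp" timestamp without time zone,
  storedtime  timestamp with time zone,
  count       smallint,
  "value"     smallint[]
)
WITH (OIDS=FALSE);

CREATE INDEX measures_sensor_ts_idx
  ON measures USING btree (sensor_id, "timestamp");

-- typical lookup stays index-driven:
-- SELECT * FROM measures
--  WHERE sensor_id = 42
--    AND "timestamp" >= '2008-08-23' AND "timestamp" < '2008-08-24';

With one heap and one main index, the per-insert overhead no longer scales with the number of sensors, and time-based partitioning can still be layered on later if the table grows too large.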
[ { "msg_contents": "Actually, I've got another test system with only few sensors (thus few\ntables) and it's working well (<10ms insert) with all the indexes.\nI know it's slowing down my performance but I need them to interogate the\nbig tables (each one can reach millions rows with time) really fast.\n\nRegards\n\nLoïc\n\nActually, I've got another test system with only few sensors (thus few tables) and it's working well (<10ms insert) with all the indexes.I know it's slowing down my performance but I need them to interogate the big tables (each one can reach millions rows with time) really fast.\nRegardsLoïc", "msg_date": "Sat, 23 Aug 2008 18:25:36 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "I was a bit confused about the read and write sorry ! I understand what you\nmean...\nBut do you think that the IO cost (of only one page) needed to handle the\nindex writing is superior than 300ms ? Because each insert in any of these\ntables is that slow.\nNB: between my \"small\" and my \"big\" tests the only difference is the number\nof table (10 and 3000), there is almost the same amount of on-disk data\n\nI was a bit confused about the read and write sorry ! I understand what you mean...But do you think that the IO cost (of only one page) needed to handle the index writing is superior than 300ms ? Because each insert in any of these tables is that slow.\nNB: between my \"small\" and my \"big\" tests the only difference is the number of table (10 and 3000), there is almost the same amount of on-disk data", "msg_date": "Sun, 24 Aug 2008 01:47:36 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Sat, Aug 23, 2008 at 6:47 PM, Loic Petit <[email protected]> wrote:\n> I was a bit confused about the read and write sorry ! I understand what you\n> mean...\n> But do you think that the IO cost (of only one page) needed to handle the\n> index writing is superior than 300ms ? Because each insert in any of these\n> tables is that slow.\n> NB: between my \"small\" and my \"big\" tests the only difference is the number\n> of table (10 and 3000), there is almost the same amount of on-disk data\n\nIt could be that the tables or indexes are heavily bloated. What's\nthe churn rate on these tables / indexes?\n", "msg_date": "Sat, 23 Aug 2008 18:50:48 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "1 table contains about 5 indexes : timestamp, one for each sensor type - 3,\nand one for packet counting (measures packet dropping)\n(I reckon that this is quite heavy, but a least the timestamp and the values\nare really usefull)\n\n1 table contains about 5 indexes : timestamp, one for each sensor type - 3, and one for packet counting (measures packet dropping)(I reckon that this is quite heavy, but a least the timestamp and the values are really usefull)", "msg_date": "Sun, 24 Aug 2008 01:59:16 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Sat, Aug 23, 2008 at 6:59 PM, Loic Petit <[email protected]> wrote:\n> 1 table contains about 5 indexes : timestamp, one for each sensor type - 3,\n> and one for packet counting (measures packet dropping)\n> (I reckon that this is quite heavy, but a least the timestamp and the values\n> are really usefull)\n\nBut what's the update rate on these indexes and tables? I'm wondering\nif you're not vacuuming aggresively enough to keep up with bursty\nupdate patterns\n", "msg_date": "Sat, 23 Aug 2008 19:18:43 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "hello to all,\n\nI've a question regarding the folowing comments.\n\nHow to estimate vacuum aggressiveness ?\n\nIt's for me very deficulte to setup the autovaccum setting correctly. It \nseems for me that it is not enough aggressive, but when I change the \nsettings the autovacuum process is almost always running.\n\nSo how to setup it, for around 40000 insert, update, delete per 5 minutes\n\nregards\n\ndavid\nScott Marlowe a �crit :\n> On Sat, Aug 23, 2008 at 6:59 PM, Loic Petit <[email protected]> wrote:\n> \n>> 1 table contains about 5 indexes : timestamp, one for each sensor type - 3,\n>> and one for packet counting (measures packet dropping)\n>> (I reckon that this is quite heavy, but a least the timestamp and the values\n>> are really usefull)\n>> \n>\n> But what's the update rate on these indexes and tables? I'm wondering\n> if you're not vacuuming aggresively enough to keep up with bursty\n> update patterns\n>\n> \n\n", "msg_date": "Sun, 24 Aug 2008 16:31:14 +0200", "msg_from": "dforum <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "One sensor (so one table) sends a packet each seconds (for 3000 sensors).\n=> So we have : 1 insert per second for 3000 tables (and their indexes).\nHopefully there is no update nor delete in it...\n\nOne sensor (so one table) sends a packet each seconds (for 3000 sensors). => So we have : 1 insert per second for 3000 tables (and their indexes). Hopefully there is no update nor delete in it...", "msg_date": "Sun, 24 Aug 2008 02:31:37 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "On Sat, Aug 23, 2008 at 7:31 PM, Loic Petit <[email protected]> wrote:\n> One sensor (so one table) sends a packet each seconds (for 3000 sensors).\n> => So we have : 1 insert per second for 3000 tables (and their indexes).\n> Hopefully there is no update nor delete in it...\n\nWait, I'm confused, I thought you said earlier that for the big test\nthe tables only had a few rows. I'd expect it to be quite a bit\nbigger if you're inserting into a table once a second for any period.\n", "msg_date": "Sat, 23 Aug 2008 19:37:39 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "What I described in the last mail is what I try to do.\nBut I said earlier that I only do about 3-4 inserts / seconds because of my\nproblem.\nSo it's about one insert each 30 minutes for each table.\n\nWhat I described in the last mail is what I try to do.But I said earlier that I only do about 3-4 inserts / seconds because of my problem. So it's about one insert each 30 minutes for each table.", "msg_date": "Sun, 24 Aug 2008 02:48:07 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "Just a guess, but have you tried increasing max_fsm_relations ? This\nprobably shouldn't matter but you'll want this to be larger than the sum of\nall your tables and indexes and it doesn't take that much memory to increase\nit.\n\nMy next suggestion would be to log in as the superuser and 'vacuum analyze'\nthe system tables. Perhaps it is simply the system table access that has\ngotten inefficient with this many tables / indexes.\n\n\nOn Sat, Aug 23, 2008 at 6:48 PM, Loic Petit <[email protected]> wrote:\n\n> What I described in the last mail is what I try to do.\n> But I said earlier that I only do about 3-4 inserts / seconds because of my\n> problem.\n> So it's about one insert each 30 minutes for each table.\n>\n\nJust a guess, but have you tried increasing max_fsm_relations ?  This probably shouldn't matter but you'll want this to be larger than the sum of all your tables and indexes and it doesn't take that much memory to increase it.\nMy next suggestion would be to log in as the superuser and 'vacuum analyze' the system tables.  Perhaps it is simply the system table access that has gotten inefficient with this many tables / indexes.\nOn Sat, Aug 23, 2008 at 6:48 PM, Loic Petit <[email protected]> wrote:\nWhat I described in the last mail is what I try to do.But I said earlier that I only do about 3-4 inserts / seconds because of my problem. So it's about one insert each 30 minutes for each table.", "msg_date": "Sun, 24 Aug 2008 13:59:18 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "Does all INSERTs are made by one application? Maybe PREPARED STATEMENTS \nwill help you? What about optimization on application level?\n", "msg_date": "Sun, 24 Aug 2008 17:42:05 +0300", "msg_from": "DiezelMax <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "Hello every body,\n\nI just discover a big not only big huge difference between NOW() and \nCURRENT_DATE.\n\nDid you already know about it and do you know why ?\n\nDELETE FROM blacklist where bl_date < (NOW() - interval '2 DAY');\non 6 000 000 of records\n699 ms\n\nDELETE FROM blacklist where bl_date < (CURRENT_DATE - interval '2 DAY');\non 6 000 000 of records\n\n0.065 ms\n\ntx\n\ndavid\n", "msg_date": "Sun, 24 Aug 2008 22:01:55 +0200", "msg_from": "dforum <[email protected]>", "msg_from_op": true, "msg_subject": "NOW vs CURRENT_DATE" }, { "msg_contents": "> I just discover a big not only big huge difference between NOW() and \n> CURRENT_DATE.\n> \n> Did you already know about it and do you know why ?\n> \n> DELETE FROM blacklist where bl_date < (NOW() - interval '2 DAY');\n> on 6 000 000 of records\n> 699 ms\n> \n> DELETE FROM blacklist where bl_date < (CURRENT_DATE - interval '2 DAY');\n> on 6 000 000 of records\n\nIs this a one-off run after each other (e.g. with a ROLLBACK in\nbetween)? If so I suspect the difference is due to caching and if you\nre-run the NOW() version it would also be fast.\n\nAlso, NOW() is equivalent to CURRENT_TIMESTAMP() rather than\nCURRENT_DATE(). Perhaps the date vs. timestamp has some implication of\nhow they query is planned.\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Sun, 24 Aug 2008 23:22:43 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOW vs CURRENT_DATE" }, { "msg_contents": "Hi,\n\ndforum wrote:\n> Hello every body,\n> \n> I just discover a big not only big huge difference between NOW() and \n> CURRENT_DATE.\n> \n> Did you already know about it and do you know why ?\n> \n> DELETE FROM blacklist where bl_date < (NOW() - interval '2 DAY');\n> on 6 000 000 of records\n> 699 ms\n> \n> DELETE FROM blacklist where bl_date < (CURRENT_DATE - interval '2 DAY');\n> on 6 000 000 of records\n\nTry that with a timestamp - column and use now() and current_timestamp\nwith a long running query and then compare min(column) max(column) in\nboth cases :-)\n\nRegards\nTino", "msg_date": "Mon, 25 Aug 2008 08:01:47 +0200", "msg_from": "Tino Wildenhain <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NOW vs CURRENT_DATE" } ]
[ { "msg_contents": "Quite a lot of answers !\n\n> Does all INSERTs are made by one application? Maybe PREPARED STATEMENTS\nwill help you? What about optimization on application level?\nYes it's on only one application (through JDBC), optimizations (grouped\ntransaction, prepared statement) will be done but that our very first test\nin hard condition which scared us all :p.\n\n> Can you tell us what kind of application this is? It sounds like a control\nsystems application where you will write the current values of the sensors\nwith each scan of a PLC. If so, is that a good idea? Also is 3,000 sensors\nrealistic? That would be a lot of sensors for one control system.\nOur research project is trying to manage large scale sensor network\ndeployments. 3.000 is quite a huge deployment but it can be realistic for\nhuge aggricultural deployment for example.\n\n> That would be an insert plus updates to each of your 6 indexes every 0.33\nms. Is that a good idea? Is there a better strategy? What are you measuring\nwith the instruments e.g. is this a process plant or manufacturing facility?\nWhat will people do with this data?\nI try to suppress the indexes the more I can. Actually I only really need\nthe index on timestamp to see for example the last readings, and to search\nfor a historical data by period, the others (values) are more for \"when this\nsensor was over 45ºC\" for instance but it can be without indexes (will be\nslow but less heavy at insert time). I get the data from differents telosb\nmotes that gathers temperature / humidity and light.\n\n> Have you checked what you are bottlenecking on - CPU or disk? Try\niostat/top/etc during the inserts. Also check actual disk utilizatio (iostat\n-x on linux/freebsd; varies on others) to see what percentage of time the\ndisk/storage device is busy.\nI saw the results of iostat and top, the postgres process was at 70% cpu .\nYes I know that my test machine is not brand new but I have to find a good\nsolution with this.\n\nOk I just ran some tests. It seems that I spammed too much right after the\ncreation of the tables, thus the vacuum analyse could not be ran. I have\nbetter results now :\n\nAverage of writing 10 rows in each table\nON 1000 TABLES\n Without indexes at all : ~1.5s\n With only the index on timestamp : ~2.5s\n With all indexes : ~30s\n\nON 3000 TABLES\n Without indexes at all : ~8s\n With only the index on timestamp : ~45s\n With all indexes : ~3min\n\nI don't know why but the difference is quite huge with indexes ! When I did\nmy vacuum the system told me about the \"max_fsm_relations\" (1000). Do you\nthink it will change something (as Scott said). I didn't have time to run\ntests with vacuum analyze on system table see you tomorow for other news...\n\nQuite a lot of answers !> Does all INSERTs are made by one application? Maybe PREPARED STATEMENTS will help you? What about optimization on application level?Yes it's on only one application (through JDBC), optimizations (grouped transaction, prepared statement) will be done but that our very first test in hard condition which scared us all :p.\n> Can you tell us what kind of application this is? It sounds like a control systems application where you will write the current values of the sensors with each scan of a PLC.  If so, is that a good idea?  Also is 3,000 sensors realistic? That would be a lot of sensors for one control system.\nOur research project is trying to manage large scale sensor network deployments. 
3.000 is quite a huge deployment but it can be realistic for huge aggricultural deployment for example.> That would be an insert plus updates to each of your 6 indexes every 0.33 ms. Is that a good idea?  Is there a better strategy? What are you measuring with the instruments e.g. is this a process plant or manufacturing facility? What will people do with this data?\nI try to suppress the indexes the more I can. Actually I only really need the index on timestamp to see for example the last readings, and to search for a historical data by period, the others (values) are more for \"when this sensor was over 45ºC\" for instance but it can be without indexes (will be slow but less heavy at insert time). I get the data from differents telosb motes that gathers temperature / humidity and light.\n> Have you checked what you are bottlenecking on - CPU or disk? Try iostat/top/etc during the inserts. Also check actual disk utilizatio (iostat -x on linux/freebsd; varies on others) to see what percentage of time the disk/storage device is busy.\nI saw the results of iostat and top, the postgres process was at 70% cpu . Yes I know that my test machine is not brand new but I have to find a good solution with this.Ok I just ran some tests. It seems that I spammed too much right after the creation of the tables, thus the vacuum analyse could not be ran. I have better results now :\nAverage of writing 10 rows in each tableON 1000 TABLES     Without indexes at all : ~1.5s     With only the index on timestamp : ~2.5s\n    With all indexes : ~30s\nON 3000 TABLES \n    Without indexes at all : ~8s\n    With only the index on timestamp : ~45s    With all indexes : ~3minI don't know why but the difference is quite huge with indexes  ! When I did my vacuum the system told me about the \"max_fsm_relations\" (1000). Do you think it will change something (as Scott said). I didn't have time to run tests with vacuum analyze on system table see you tomorow for other news...", "msg_date": "Sun, 24 Aug 2008 23:30:44 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" }, { "msg_contents": "I don't know if the max_fsm_relations issue will solve your problem or not.\nI do know that you definitely want to increase it to a number larger than\nthe sum of all your tables and indexes -- preferably with room to grow.\nAdditionally the max_fsm_pages value will likely need to be increased as\nyour data size grows.\n\nI work with about 9000 tables at the moment (growing each day) and do not\nsee your issue. I do not have indexes on most of my tables, and\nmax_fsm_relations is set to 30000.\n\nAlthough this will increase the number of tables even more-- you may want to\nconsider partitioning your tables by time: day or week or month.\nThis way, you may not even need an index on the date, as it will only scan\ntables over the date range specified ( NOTE -- this is not true if you use\nprepared statements -- prepared statements + partitioned tables =\nperformance disaster).\nIn addition, this may allow you to add the indexes on the partitioned table\nat a later date. For example:\n\nPartitions by week -- the current week's table has no indexes and is thus\nfast to insert. But once it becomes last week's table and you are only\ninserting into a new table, the old one can have indexes added to it -- it\nis now mostly a read-only table. 
In this way, full scans will only be\nneeded for the current week's table, which will most of the time be smaller\nthan the others and more likely be cached in memory as well. You may want\nto partition by day or month instead.\nYou may want to combine several sensors into one table, so that you can\npartition by day or even hour. It all depends on how you expect to access\nthe data later and how much you can afford to deal with managing all those\ntables -- postgres only does some of the partitioning work for you and you\nhave to be very careful with your queries. There are some query optimizer\noddities with partitioned tables one has to be aware of.\n\nOn Sun, Aug 24, 2008 at 3:30 PM, Loic Petit <[email protected]> wrote:\n\n> Quite a lot of answers !\n>\n> > Does all INSERTs are made by one application? Maybe PREPARED STATEMENTS\n> will help you? What about optimization on application level?\n> Yes it's on only one application (through JDBC), optimizations (grouped\n> transaction, prepared statement) will be done but that our very first test\n> in hard condition which scared us all :p.\n>\n> > Can you tell us what kind of application this is? It sounds like a\n> control systems application where you will write the current values of the\n> sensors with each scan of a PLC. If so, is that a good idea? Also is 3,000\n> sensors realistic? That would be a lot of sensors for one control system.\n> Our research project is trying to manage large scale sensor network\n> deployments. 3.000 is quite a huge deployment but it can be realistic for\n> huge aggricultural deployment for example.\n>\n> > That would be an insert plus updates to each of your 6 indexes every 0.33\n> ms. Is that a good idea? Is there a better strategy? What are you measuring\n> with the instruments e.g. is this a process plant or manufacturing facility?\n> What will people do with this data?\n> I try to suppress the indexes the more I can. Actually I only really need\n> the index on timestamp to see for example the last readings, and to search\n> for a historical data by period, the others (values) are more for \"when this\n> sensor was over 45ºC\" for instance but it can be without indexes (will be\n> slow but less heavy at insert time). I get the data from differents telosb\n> motes that gathers temperature / humidity and light.\n>\n> > Have you checked what you are bottlenecking on - CPU or disk? Try\n> iostat/top/etc during the inserts. Also check actual disk utilizatio (iostat\n> -x on linux/freebsd; varies on others) to see what percentage of time the\n> disk/storage device is busy.\n> I saw the results of iostat and top, the postgres process was at 70% cpu .\n> Yes I know that my test machine is not brand new but I have to find a good\n> solution with this.\n>\n> Ok I just ran some tests. It seems that I spammed too much right after the\n> creation of the tables, thus the vacuum analyse could not be ran. I have\n> better results now :\n>\n> Average of writing 10 rows in each table\n> ON 1000 TABLES\n> Without indexes at all : ~1.5s\n> With only the index on timestamp : ~2.5s\n> With all indexes : ~30s\n>\n> ON 3000 TABLES\n> Without indexes at all : ~8s\n> With only the index on timestamp : ~45s\n> With all indexes : ~3min\n>\n> I don't know why but the difference is quite huge with indexes ! When I\n> did my vacuum the system told me about the \"max_fsm_relations\" (1000). Do\n> you think it will change something (as Scott said). 
I didn't have time to\n> run tests with vacuum analyze on system table see you tomorow for other\n> news...\n>\n", "msg_date": "Sun, 24 Aug 2008 18:08:08 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Large number of tables slow insert" } ]
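A minimal sketch of the time-based partitioning Scott Carey describes; the names and weekly ranges are illustrative, and on 8.3 it is the CHECK constraints plus constraint_exclusion that let the planner skip partitions:

CREATE TABLE measures (
    sensor_id integer     NOT NULL,
    ts        timestamptz NOT NULL,
    value     real
);

CREATE TABLE measures_2008_w34 (
    CHECK (ts >= '2008-08-18' AND ts < '2008-08-25')
) INHERITS (measures);

CREATE TABLE measures_2008_w35 (
    CHECK (ts >= '2008-08-25' AND ts < '2008-09-01')
) INHERITS (measures);

-- The closed week is read-mostly, so it gets its index now;
-- the current week stays index-free for cheap inserts.
CREATE INDEX measures_2008_w34_sensor_ts_idx ON measures_2008_w34 (sensor_id, ts);

SET constraint_exclusion = on;
SELECT * FROM measures
WHERE sensor_id = 42 AND ts >= '2008-08-20' AND ts < '2008-08-21';

Rows still have to be routed to the right child table, either by inserting into it directly or through a rule or trigger on the parent.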
[ { "msg_contents": "That's not a bad idea, at least for historical data.\nBut actually one of the most common thing in sensor network monitoring is\nlast readings monitoring.\nWith indexes what I can do is : SELECT * FROM measures_xx ORDER BY timestamp\nDESC LIMIT 1 => And I got the very last reading in a blink (one page reading\nonly).\nIt shall be complicated that way... it can only be done by a mass update on\na table each time I receive a packet...\nAlso it's very important to have last 24h readings aggregate by hour or\nminutes (to plot a graph).\nSo I can't go that way.... I think I must keep the timestamp index where it\nis but I should probably get rid of the others.\n\nThank you again for your help people\n\nRegards\n\nLoic Petit\n\nThat's not a bad idea, at least for historical data.But actually one of the most common thing in sensor network monitoring is last readings monitoring. With indexes what I can do is : SELECT * FROM measures_xx ORDER BY timestamp DESC LIMIT 1 => And I got the very last reading in a blink (one page reading only).\nIt shall be complicated that way... it can only be done by a mass update on a table each time I receive a packet...\nAlso it's very important to have last 24h readings aggregate by hour or minutes (to plot a graph). So I can't go that way.... I think I must keep the timestamp index where it is but I should probably get rid of the others.\nThank you again for your help peopleRegardsLoic Petit", "msg_date": "Mon, 25 Aug 2008 03:02:02 +0100", "msg_from": "\"Loic Petit\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Large number of tables slow insert" } ]
[ { "msg_contents": "We are attempting to turn off autovacuum but it keeps coming back. We \ncan't afford the performance hit from vacuum while end users are \naccessing our system.\n\nPostgresql Version: 8.3.3\nOS: Linux 2.6.18-53.el5PAE #1 SMP\n\nRunning PostgreSQL setting: \nsspg=# show autovacuum;\n autovacuum\n------------\n off\n(1 row)\n\npg.log Log Entries:\n2008-08-26 15:24:50 GMTLOG: autovacuum launcher started\n-- and then we manually kill it\npostgres 32371 0.0 0.1 1133768 23572 ? Ss 15:16 0:00 \npostgres: autovacuum worker process rollup_data_store\n# kill 32371\n2008-08-26 15:24:53 GMTFATAL: terminating autovacuum process due to \nadministrator command\n\nDoes anyone know what will cause this bahavior for autovacuum?\n\nThank you in advance\n\n-Jerry\n\n", "msg_date": "Tue, 26 Aug 2008 09:27:48 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": true, "msg_subject": "Autovacuum does not stay turned off" }, { "msg_contents": "On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:\n> Does anyone know what will cause this bahavior for autovacuum?\n\nhttp://www.postgresql.org/docs/current/interactive/runtime-config-autovacuum.html\n-> autovacuum_freeze_max_age\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Tue, 26 Aug 2008 18:14:27 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum does not stay turned off" }, { "msg_contents": "On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:\n>\n> Does anyone know what will cause this bahavior for autovacuum?\n\nYou're probably approaching the wraparound limit in some database. \n\nIf you think you can't afford the overhead when users are accessing\nthe system, when are you vacuuming?\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Tue, 26 Aug 2008 12:21:44 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum does not stay turned off" }, { "msg_contents": "This makes sense. What queries can I run to see how close to the limit \nwe are? We need to determine if we should stop the process which \nupdates and inserts into this table until after the critical time this \nafternoon when we can perform the required maintenance on this table.\n\nhubert depesz lubaczewski wrote:\n> On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:\n> \n>> Does anyone know what will cause this bahavior for autovacuum?\n>> \n>\n> http://www.postgresql.org/docs/current/interactive/runtime-config-autovacuum.html\n> -> autovacuum_freeze_max_age\n>\n> depesz\n>\n> \nAndrew Sullivan wrote:\n> On Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:\n> \n>> Does anyone know what will cause this bahavior for autovacuum?\n>> \n>\n> You're probably approaching the wraparound limit in some database. \n>\n> If you think you can't afford the overhead when users are accessing\n> the system, when are you vacuuming?\n>\n> A\n>\n> \nWe are changing the table structure tonight. These two tables are very \nhighly updated. The goal is to use autovacuum but not have it take 10 \ndays to run on a 13MM record table.\n\nThanks\n\n-Jerry\n\n\n\n\n\n\nThis makes sense.  What queries can I run to see how close to the limit\nwe are?  
We need to determine if we should stop the process which\nupdates and inserts into this table until after the critical time this\nafternoon when we can perform the required maintenance on this table.\n\nhubert depesz lubaczewski wrote:\n\nOn Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:\n \n\nDoes anyone know what will cause this bahavior for autovacuum?\n \n\n\nhttp://www.postgresql.org/docs/current/interactive/runtime-config-autovacuum.html\n-> autovacuum_freeze_max_age\n\ndepesz\n\n \n\nAndrew Sullivan wrote:\n\nOn Tue, Aug 26, 2008 at 09:27:48AM -0600, Jerry Champlin wrote:\n \n\nDoes anyone know what will cause this bahavior for autovacuum?\n \n\n\nYou're probably approaching the wraparound limit in some database. \n\nIf you think you can't afford the overhead when users are accessing\nthe system, when are you vacuuming?\n\nA\n\n \n\nWe are changing the table structure tonight.  These two tables are very\nhighly updated.  The goal is to use autovacuum but not have it take 10\ndays to run on a 13MM record table.\n\nThanks\n\n-Jerry", "msg_date": "Tue, 26 Aug 2008 10:45:31 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum does not stay turned off" }, { "msg_contents": "On Tue, Aug 26, 2008 at 10:45:31AM -0600, Jerry Champlin wrote:\n> This makes sense. What queries can I run to see how close to the limit \n> we are? We need to determine if we should stop the process which \n> updates and inserts into this table until after the critical time this \n> afternoon when we can perform the required maintenance on this table.\n\nselect datname, age(datfrozenxid) from pg_database;\n\nBest regards,\n\ndepesz\n\n-- \nLinkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/\njid/gtalk: [email protected] / aim:depeszhdl / skype:depesz_hdl / gg:6749007\n", "msg_date": "Tue, 26 Aug 2008 19:04:23 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Autovacuum does not stay turned off" }, { "msg_contents": "Thanks for the help. The applied solution follows. We will be taking a \nnumber of maintenance steps to manage these very high update tables \nwhich I will summarize later as I suspect we are not the only ones with \nthis challenge.\n\nhttp://www.postgresql.org/docs/current/interactive/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\nhttp://www.postgresql.org/docs/current/interactive/catalog-pg-autovacuum.html\n\ndata_store=# SELECT relname, oid, age(relfrozenxid) FROM pg_class WHERE \nrelkind = 'r';\n...\nhour_summary | 16392 | 252934596\npercentile_metadata | 20580 | 264210966\n(51 rows)\n\ndata_store=# insert into pg_autovacuum values \n(16392,false,350000000,2,350000000,1,200,200,350000000,500000000);\nINSERT 0 1\ndata_store=# insert into pg_autovacuum values \n(20580,false,350000000,2,350000000,1,200,200,350000000,500000000);\nINSERT 0 1\ndata_store=#\n\n\nhubert depesz lubaczewski wrote:\n> On Tue, Aug 26, 2008 at 10:45:31AM -0600, Jerry Champlin wrote:\n> \n>> This makes sense. What queries can I run to see how close to the limit \n>> we are? 
We need to determine if we should stop the process which \n>> updates and inserts into this table until after the critical time this \n>> afternoon when we can perform the required maintenance on this table.\n>> \n>\n> select datname, age(datfrozenxid) from pg_database;\n>\n> Best regards,\n>\n> depesz\n>\n> \n", "msg_date": "Tue, 26 Aug 2008 12:06:27 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Autovacuum does not stay turned off" } ]
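A compact way to keep an eye on the forced anti-wraparound vacuums depesz points to; the LIMIT is arbitrary:

SHOW autovacuum_freeze_max_age;   -- 200 million by default on 8.3

-- Tables with the oldest unfrozen transaction ids; anything that reaches the
-- setting above gets vacuumed even with autovacuum nominally turned off.
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 20;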
[ { "msg_contents": "It seems to me that the planner makes a very poor decision with this\nparticular query:\n\n--- snip ---\nwoome=> explain analyze SELECT \"webapp_invite\".\"id\",\n\"webapp_invite\".\"person_id\", \"webapp_invite\".\"session_id\",\n\"webapp_invite\".\"created\", \"webapp_invite\".\"text\",\n\"webapp_invite\".\"subject\", \"webapp_invite\".\"email\",\n\"webapp_invite\".\"batch_seen\", \"webapp_invite\".\"woouser\",\n\"webapp_invite\".\"accepted\", \"webapp_invite\".\"declined\",\n\"webapp_invite\".\"deleted\", \"webapp_invite\".\"local_start_time\" FROM\n\"webapp_invite\" INNER JOIN \"webapp_person\" ON\n(\"webapp_invite\".\"person_id\" = \"webapp_person\".\"id\") INNER JOIN\n\"webapp_person\" T3 ON (\"webapp_invite\".\"person_id\" = T3.\"id\") WHERE\n\"webapp_person\".\"is_suspended\" = false AND T3.\"is_banned\" = false\nAND \"webapp_invite\".\"woouser\" = 'suggus' ORDER BY\n\"webapp_invite\".\"id\" DESC LIMIT 5;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..3324.29 rows=5 width=44) (actual\ntime=2545.137..2545.137 rows=0 loops=1)\n -> Nested Loop (cost=0.00..207435.61 rows=312 width=44) (actual\ntime=2545.135..2545.135 rows=0 loops=1)\n -> Nested Loop (cost=0.00..204803.04 rows=322 width=48)\n(actual time=2545.133..2545.133 rows=0 loops=1)\n -> Index Scan Backward using webapp_invite_pkey on\nwebapp_invite (cost=0.00..201698.51 rows=382 width=44) (actual\ntime=2545.131..2545.131 rows=0 loops=1)\n Filter: ((woouser)::text = 'suggus'::text)\n -> Index Scan using webapp_person_pkey on\nwebapp_person t3 (cost=0.00..8.11 rows=1 width=4) (never executed)\n Index Cond: (t3.id = webapp_invite.person_id)\n Filter: (NOT t3.is_banned)\n -> Index Scan using webapp_person_pkey on webapp_person\n(cost=0.00..8.16 rows=1 width=4) (never executed)\n Index Cond: (webapp_person.id = webapp_invite.person_id)\n Filter: (NOT webapp_person.is_suspended)\n Total runtime: 2545.284 ms\n(12 rows)\n--- snap ---\n\nbecause if I just remove the LIMIT, it runs like the wind:\n\n--- snip ---\nwoome=> explain analyze SELECT \"webapp_invite\".\"id\",\n\"webapp_invite\".\"person_id\", \"webapp_invite\".\"session_id\",\n\"webapp_invite\".\"created\", \"webapp_invite\".\"text\",\n\"webapp_invite\".\"subject\", \"webapp_invite\".\"email\",\n\"webapp_invite\".\"batch_seen\", \"webapp_invite\".\"woouser\",\n\"webapp_invite\".\"accepted\", \"webapp_invite\".\"declined\",\n\"webapp_invite\".\"deleted\", \"webapp_invite\".\"local_start_time\" FROM\n\"webapp_invite\" INNER JOIN \"webapp_person\" ON\n(\"webapp_invite\".\"person_id\" = \"webapp_person\".\"id\") INNER JOIN\n\"webapp_person\" T3 ON (\"webapp_invite\".\"person_id\" = T3.\"id\") WHERE\n\"webapp_person\".\"is_suspended\" = false AND T3.\"is_banned\" = false\nAND \"webapp_invite\".\"woouser\" = 'suggus' ORDER BY\n\"webapp_invite\".\"id\" DESC;\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=7194.46..7195.24 rows=312 width=44) (actual\ntime=0.141..0.141 rows=0 loops=1)\n Sort Key: webapp_invite.id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=12.20..7181.53 rows=312 width=44) (actual\ntime=0.087..0.087 rows=0 loops=1)\n -> Nested Loop (cost=12.20..4548.96 rows=322 width=48)\n(actual time=0.085..0.085 rows=0 loops=1)\n -> Bitmap Heap Scan on 
webapp_invite\n(cost=12.20..1444.44 rows=382 width=44) (actual time=0.084..0.084\nrows=0 loops=1)\n Recheck Cond: ((woouser)::text = 'suggus'::text)\n -> Bitmap Index Scan on\nwebapp_invite_woouser_idx (cost=0.00..12.10 rows=382 width=0) (actual\ntime=0.081..0.081 rows=0 loops=1)\n Index Cond: ((woouser)::text = 'suggus'::text)\n -> Index Scan using webapp_person_pkey on\nwebapp_person t3 (cost=0.00..8.11 rows=1 width=4) (never executed)\n Index Cond: (t3.id = webapp_invite.person_id)\n Filter: (NOT t3.is_banned)\n -> Index Scan using webapp_person_pkey on webapp_person\n(cost=0.00..8.16 rows=1 width=4) (never executed)\n Index Cond: (webapp_person.id = webapp_invite.person_id)\n Filter: (NOT webapp_person.is_suspended)\n Total runtime: 0.295 ms\n(16 rows)\n--- snap ---\n\nAnd for this particular filter, the result set is empty to boot, so\nthe LIMIT doesn't even do anything.\n\nDoes this behaviour make sense to anyone? Can I force the planner\nsomehow to be smarter about it?\n\nThanks!\n\nFrank\n", "msg_date": "Tue, 26 Aug 2008 17:37:32 +0100", "msg_from": "\"Frank Joerdens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query w empty result set with LIMIT orders of magnitude slower than\n\twithout" }, { "msg_contents": "\"Frank Joerdens\" <[email protected]> writes:\n> It seems to me that the planner makes a very poor decision with this\n> particular query:\n\nTry increasing the stats on woouser. You need it to make a smaller\nestimate of the number of matching rows here:\n\n> -> Index Scan Backward using webapp_invite_pkey on\n> webapp_invite (cost=0.00..201698.51 rows=382 width=44) (actual\n> time=2545.131..2545.131 rows=0 loops=1)\n> Filter: ((woouser)::text = 'suggus'::text)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 26 Aug 2008 12:53:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query w empty result set with LIMIT orders of magnitude slower\n\tthan without" }, { "msg_contents": "On Tue, Aug 26, 2008 at 5:53 PM, Tom Lane <[email protected]> wrote:\n> \"Frank Joerdens\" <[email protected]> writes:\n>> It seems to me that the planner makes a very poor decision with this\n>> particular query:\n>\n> Try increasing the stats on woouser. You need it to make a smaller\n> estimate of the number of matching rows here:\n\nWay cool, this\n\nalter table webapp_invite alter column woouser set statistics 50;\n\ndid the trick, without cleaning up the spurious join there even.\n\nThanks!\n\nFrank\n", "msg_date": "Tue, 26 Aug 2008 19:41:49 +0100", "msg_from": "\"Frank Joerdens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query w empty result set with LIMIT orders of magnitude slower\n\tthan without" } ]
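For reference, the mechanics behind Tom Lane's suggestion; webapp_invite and woouser are from Frank's query, and 100 is only an example target:

ALTER TABLE webapp_invite ALTER COLUMN woouser SET STATISTICS 100;
ANALYZE webapp_invite;

-- What the planner now knows about the column's value distribution.
SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'webapp_invite' AND attname = 'woouser';

The larger sample lets the row estimate for woouser = 'suggus' drop toward zero, which is what steers the planner away from the backward scan of the primary key.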
[ { "msg_contents": "\nHi,\n\nWe're currently having a problem with queries on a medium sized table. This table is 22GB in size (via select pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring the total size of the table to 35 GB (measured with pg_total_relation_size).\n\nOn this table we're inserting records with a relatively low frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine with two dual core 2.4Ghz XEON CPUs, 16 GB of memory and Debian Linux. The machine is completely devoted to PG, nothing else runs on the box.\n\nLately we're getting a lot of exceptions from the Java process that does these inserts: \"An I/O error occured while sending to the backend.\" No other information is provided with this exception (besides the stack trace of course). The pattern is that for about a minute, almost every insert to this 22 GB table results in this exception. After this minute everything is suddenly fine and PG happily accepts all inserts again. We tried to nail the problem down, and it seems that every time this happens, a select query on this same table is in progress. This select query starts right before the insert problems begin and most often right after this select query finishes executing, inserts are fine again. Sometimes though inserts only fail in the middle of the execution of this select query. E.g. if the select query starts at 12:00 and ends at 12:03, inserts fail from 12:01 to 12:02.\n\nWe have spend a lot of hours in getting to the bottom of this, but our ideas for fixing this problem are more or less exhausted at the moment.\n\nI wonder if anyone recognizes this problem and could give some pointers to stuff that we could investigate next. \n\nThanks a lot in advance.\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/", "msg_date": "Tue, 26 Aug 2008 18:44:02 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "select on 22 GB table causes \"An I/O error occured while sending to\n\tthe backend.\" exception" }, { "msg_contents": "On Tue, 2008-08-26 at 18:44 +0200, henk de wit wrote:\n> Hi,\n> \n> We're currently having a problem with queries on a medium sized table. This table is 22GB in size (via select pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring the total size of the table to 35 GB (measured with pg_total_relation_size).\n> \n> On this table we're inserting records with a relatively low frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine with two dual core 2.4Ghz XEON CPUs, 16 GB of memory and Debian Linux. The machine is completely devoted to PG, nothing else runs on the box.\n> \n> Lately we're getting a lot of exceptions from the Java process that does these inserts: \"An I/O error occured while sending to the backend.\" No other information is provided with this exception (besides the stack trace of course). The pattern is that for about a minute, almost every insert to this 22 GB table results in this exception. After this minute everything is suddenly fine and PG happily accepts all inserts again. We tried to nail the problem down, and it seems that every time this happens, a select query on this same table is in progress. This select query starts right before the insert problems begin and most often right after this select query finishes executing, inserts are fine again. 
Sometimes though inserts only fail in the middle of the execution of this select query. E.g. if the select query starts at 12:00 and ends at 12:03, inserts fail from 12:01 to 12:02.\n> \n> We have spend a lot of hours in getting to the bottom of this, but our ideas for fixing this problem are more or less exhausted at the moment.\n> \n> I wonder if anyone recognizes this problem and could give some pointers to stuff that we could investigate next. \n> \n> Thanks a lot in advance.\n\nIf the select returns a lot of data and you haven't enabled cursors (by\ncalling setFetchSize), then the entire SQL response will be loaded in\nmemory at once, so there could be an out-of-memory condition on the\nclient.\n\nOr if the select uses sorts and PG thinks it has access to more sort\nmemory than is actually available on the system (due to ulimits,\nphysical memory restrictions, etc) then you could run into problems that\nlook like out-of-memory errors on the server.\n\nIf could also be something else entirely; exceeding your max\nconnections, something like that.\n\nA really good place to start would be to enable tracing on the JDBC\ndriver. Look at the docs for the PostgreSQL JDBC driver for how to\nenable logging; that should give you a much better picture of exactly\nwhere and what is failing.\n\nIf the issue is server-side, then you will also want to look at the\nPostgreSQL logs on the server; anything as serious as a backend aborting\nshould write an entry in the log.\n\n-- Mark\n", "msg_date": "Tue, 26 Aug 2008 10:10:15 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Tue, Aug 26, 2008 at 10:44 AM, henk de wit <[email protected]> wrote:\n>\n> Hi,\n>\n> We're currently having a problem with queries on a medium sized table. This table is 22GB in size (via select pg_size_pretty(pg_relation_size('table'));). It has 7 indexes, which bring the total size of the table to 35 GB (measured with pg_total_relation_size).\n>\n> On this table we're inserting records with a relatively low frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine with two dual core 2.4Ghz XEON CPUs, 16 GB of memory and Debian Linux. The machine is completely devoted to PG, nothing else runs on the box.\n>\n> Lately we're getting a lot of exceptions from the Java process that does these inserts: \"An I/O error occured while sending to the backend.\" No other information is provided with this exception (besides the stack trace of course).\n\nWhat do your various logs (pgsql, application, etc...) have to say?\nCan you read a java stack trace? Sometimes slogging through them will\nreveal some useful information.\n\n> The pattern is that for about a minute, almost every insert to this 22 GB table results in this exception. After this minute everything is suddenly fine and PG happily accepts all inserts again. We tried to nail the problem down, and it seems that every time this happens, a select query on this same table is in progress. This select query starts right before the insert problems begin and most often right after this select query finishes executing, inserts are fine again. Sometimes though inserts only fail in the middle of the execution of this select query. E.g. 
if the select query starts at 12:00 and ends at 12:03, inserts fail from 12:01 to 12:02.\n\nSounds to me like your connections are timing out (what's your timeout\nin jdbc set to?)\n\nA likely cause is that you're getting big checkpoint spikes. What\ndoes vmstat 10 say during these spikes? If you're running the\nsysstate service with data collection then sar can tell you a lot.\n\nIf it is a checkpoint issue then you need more aggresive bgwriter\nsettings, and possibly more bandwidth on your storage array.\n\nNote that you can force a checkpoint from a superuser account at the\ncommand line. You can always force one and see what happens to\nperformance during it. You'll need to wait a few minutes or so\nbetween runs to see an effect.\n", "msg_date": "Tue, 26 Aug 2008 12:23:30 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "> If the select returns a lot of data and you haven't enabled cursors (by\n> calling setFetchSize), then the entire SQL response will be loaded in\n> memory at once, so there could be an out-of-memory condition on the\n> client.\n\nI hear you. This is absolutely not the case though. There is no other exception anywhere besides the \"An I/O error occured while sending to the backend.\". The select query eventually returns something between 10 and 50 rows. Also, the select query runs from another machine than the one that issues the inserts to the data base. I failed to mention in the openings post that simultaneously with this select/insert query for this single 22 GB table, thousands if not tens of thousands other queries are hitting other tables in the same database. There are about 70 other tables, with a combined size of about 40 GB. None of those 70 others tables has a size anywhere near that 22GB of the problematic table. There is never even a single problem of this kind with any of those other tables.\n\nWhen doing research on the net, it became clear that a lot of these \"An I/O error...\" exceptions are caused by malfunctioning switches or routers in the network between the application server(s) and the data base. In our case this can hardly be true. As mentioned, a great number of other queries are hitting the database. Some of these are very small (exeuction times of about 10 ms), while others are quite large (hundreds of lines of SQL with over 12 joins and an execution time of several minutes). Not a single one of those results in this I/O error.\n\n> If could also be something else entirely; exceeding your max\n> connections, something like that.\n\nWe indeed ran into that, but I think more as collateral damage. When this single select query for the 22 GB table is executing and those inserts start to fail, this also starts holding up things. As a results the 6 writes per second start queuing up and requesting more and more connections from our connection pool (DBCP as supplied by Apache Tomcat). We had the maximum of our connection pool set to a too high value and after a while Postgres responded with a message that the connection limit was exceeded. 
We thereafter lowered the max of our connection pool and didn't see that particular message anymore.\n\nSo, it seems likely that \"An I/O error occured while sending to the backend.\" doet not mean \"connection limit exceeded\", since the latter message is explitely given when this is the case.\n\n> A really good place to start would be to enable tracing on the JDBC\n> driver.\n\nThat's a good idea indeed. I'll try to enable this and see what it does.\n\n> If the issue is server-side, then you will also want to look at the\n> PostgreSQL logs on the server; anything as serious as a backend aborting\n> should write an entry in the log.\n\nWe studied the PG logs extensively but unfortunately could not really detect anything that could point to the problem there.\n\n\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> If the select returns a lot of data and you haven't enabled cursors (by> calling setFetchSize), then the entire SQL response will be loaded in> memory at once, so there could be an out-of-memory condition on the> client.I hear you. This is absolutely not the case though. There is no other exception anywhere besides the \"An I/O error occured while sending to the backend.\". The select query eventually returns something between 10 and 50 rows. Also, the select query runs from another machine than the one that issues the inserts to the data base. I failed to mention in the openings post that simultaneously with this select/insert query for this single 22 GB table, thousands if not tens of thousands other queries are hitting other tables in the same database. There are about 70 other tables, with a combined size of about 40 GB. None of those 70 others tables has a size anywhere near that 22GB of the problematic table. There is never even a single problem of this kind with any of those other tables.When doing research on the net, it became clear that a lot of these \"An I/O error...\" exceptions are caused by malfunctioning switches or routers in the network between the application server(s) and the data base. In our case this can hardly be true. As mentioned, a great number of other queries are hitting the database. Some of these are very small (exeuction times of about 10 ms), while others are quite large (hundreds of lines of SQL with over 12 joins and an execution time of several minutes). Not a single one of those results in this I/O error.> If could also be something else entirely; exceeding your max> connections, something like that.We indeed ran into that, but I think more as collateral damage. When this single select query for the 22 GB table is executing and those inserts start to fail, this also starts holding up things. As a results the 6 writes per second start queuing up and requesting more and more connections from our connection pool (DBCP as supplied by Apache Tomcat). We had the maximum of our connection pool set to a too high value and after a while Postgres responded with a message that the connection limit was exceeded. 
We thereafter lowered the max of our connection pool and didn't see that particular message anymore.So, it seems likely that \"An I/O error occured while sending to the backend.\" doet not mean \"connection limit exceeded\", since the latter message is explitely given when this is the case.> A really good place to start would be to enable tracing on the JDBC> driver.That's a good idea indeed. I'll try to enable this and see what it does.> If the issue is server-side, then you will also want to look at the> PostgreSQL logs on the server; anything as serious as a backend aborting> should write an entry in the log.We studied the PG logs extensively but unfortunately could not really detect anything that could point to the problem there.Express yourself instantly with MSN Messenger! MSN Messenger", "msg_date": "Tue, 26 Aug 2008 22:29:48 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "> What do your various logs (pgsql, application, etc...) have to say?\n\nThere\nis hardly anything helpful in the pgsql log. The application log\ndoesn't mention anything either. We log a great deal of information in\nour application, but there's nothing out of the ordinary there,\nalthough there's of course always a chance that somewhere we missed\nsomething. \n\n> Can you read a java stack trace? Sometimes slogging through them will\n> reveal some useful information.\n\nI can read a java stack trace very well, I'm primarily a Java developer. The stack trace is the following one:\n\norg.postgresql.util.PSQLException:\nAn I/O error occured while sending to the backend.\n\n at\norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:218)\n\n at\norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)\n\n at\norg.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)\n\n at\norg.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)\n\n at\norg.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102)\n\n at\norg.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102)\n\n\nOur\ncode simply executes the same statement.executeUpdate() that is does\nabout 500.000 times during a business day. As soon as this select query\nis hitting this 22 GB table then there's a chance that 'suddenly' all\nthese utterly simply insert queries start failing. The insert query is\nnothing special either. It's just an \"INSERT INTO ... VALUES (...)\"\ntype of thing. The select query can actually be a few different kinds\nof queries, but basically the common thing between them is reading from\nthis 22 GB table. In fact, our system administrator just told me that\neven the DB backup is able to trigger this behaviour. As soon as the\nbackup process is reading from this 22 GB table, the inserts on it\n-may- start to fail.\n\n> Sounds to me like your connections are timing out (what's your timeout\n> in jdbc set to?)\n\nThere's\nno explicit timeout being set. Queries can theoretically execute for\nhours. In some rare cases, some queries indeed run for that long.\n \n> A likely cause is that you're getting big checkpoint spikes. What\n> does vmstat 10 say during these spikes?\n\nIt's\nhard to reproduce the problem. 
We're trying to simulate it in on our\ntesting servers but haven't been successfull yet. The problem typically\nlasts for only a minute a time on the production server and there's no\nsaying on when it will occur again. Of course we could try to enfore it\nby running this select query continously, but for a production server\nit's not an easy decission to actually do that. So therefore basically\nall we were able to do now is investigate the logs afterwards. I'll try\nto run vmstat though when the problem happens when I'm at the console. \n\n> If you're running the\n> sysstate service with data collection then sar can tell you a lot.\n\nOk, I'm not a big PG expert so I'll have to look into what that means exactly ;) Thanks for the advice though.\n\nKind regards,\nHenk\n_________________________________________________________________\nExpress yourself instantly with MSN Messenger! Download today it's FREE!\nhttp://messenger.msn.click-url.com/go/onm00200471ave/direct/01/\n\n\n\n\n> What do your various logs (pgsql, application, etc...) have to say?There\nis hardly anything helpful in the pgsql log. The application log\ndoesn't mention anything either. We log a great deal of information in\nour application, but there's nothing out of the ordinary there,\nalthough there's of course always a chance that somewhere we missed\nsomething. > Can you read a java stack trace? Sometimes slogging through them will> reveal some useful information.I can read a java stack trace very well, I'm primarily a Java developer. The stack trace is the following one:org.postgresql.util.PSQLException:\nAn I/O error occured while sending to the backend.\n    at\norg.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:218)\n    at\norg.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:451)\n    at\norg.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:350)\n    at\norg.postgresql.jdbc2.AbstractJdbc2Statement.executeUpdate(AbstractJdbc2Statement.java:304)\n    at\norg.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102)\n    at\norg.apache.tomcat.dbcp.dbcp.DelegatingPreparedStatement.executeUpdate(DelegatingPreparedStatement.java:102)Our\ncode simply executes the same statement.executeUpdate() that is does\nabout 500.000 times during a business day. As soon as this select query\nis hitting this 22 GB table then there's a chance that 'suddenly' all\nthese utterly simply insert queries start failing. The insert query is\nnothing special either. It's just an \"INSERT INTO ... VALUES (...)\"\ntype of thing. The select query can actually be a few different kinds\nof queries, but basically the common thing between them is reading from\nthis 22 GB table. In fact, our system administrator just told me that\neven the DB backup is able to trigger this behaviour. As soon as the\nbackup process is reading from this 22 GB table, the inserts on it\n-may- start to fail.> Sounds to me like your connections are timing out (what's your timeout> in jdbc set to?)There's\nno explicit timeout being set. Queries can theoretically execute for\nhours. In some rare cases, some queries indeed run for that long. > A likely cause is that you're getting big checkpoint spikes. What> does vmstat 10 say during these spikes?It's\nhard to reproduce the problem. We're trying to simulate it in on our\ntesting servers but haven't been successfull yet. 
", "msg_date": "Tue, 26 Aug 2008 22:53:40 +0200", "msg_from": "henk de wit <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "* henk de wit:\n\n> On this table we're inserting records with a relatively low\n> frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine\n> with two dual core 2.4Ghz XEON CPUs, 16 GB of memory and Debian\n> Linux. The machine is completely devoted to PG, nothing else runs on\n> the box.\n\nHave you disabled the OOM killer?\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Wed, 27 Aug 2008 09:21:35 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "In response to henk de wit <[email protected]>:\n\n> > What do your various logs (pgsql, application, etc...) have to say?\n> \n> There\n> is hardly anything helpful in the pgsql log. The application log\n> doesn't mention anything either. We log a great deal of information in\n> our application, but there's nothing out of the ordinary there,\n> although there's of course always a chance that somewhere we missed\n> something.\n\nThere should be something in a log somewhere. Someone suggested the oom\nkiller might be getting you, if so there should be something in one of\nthe system logs.\n\nIf you can't find anything, then you need to beef up your logs. Try\nincreasing the amount of stuff that gets logged by PG by tweaking the\npostgres.conf settings. Then run iostat, vmstat and top in an endless\nloop dumping their output to files (recommend you run date(1) in between\neach run, otherwise you can't correlate the output to the time of\noccurrence ;)\n\nWhile you've got all this extra logging going and you're waiting for the\nproblem to happen again, do an audit of your postgres.conf settings for\nmemory usage and see if they actually add up. How much RAM does the\nsystem have? How much of it is free? How much of that are you eating\nwith shared_buffers? How much sort_mem did you tell PG it has? Have\nyou told PG that it has more memory than the machine actually has?\n\nI've frequently recommended installing pg_buffercache and using mrtg\nor something similar to graph various values from it and other easily\naccessible statistics in PG and the operating system. The overhead of\ncollecting and graphing those values is minimal, and having the data\nfrom those graphs can often be the little red arrow that points you to\nthe solution to problems like these. 
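To make the two suggestions above concrete, here is a rough sketch of the kind of endless logging loop and pg_buffercache snapshot being described. It assumes bash, the usual procps/sysstat tools and that the contrib/pg_buffercache module is installed; the log file name, intervals and database name are made up for illustration:

    #!/bin/bash
    # crude monitoring loop: timestamp + vmstat + iostat + top appended to one log
    while true; do
        {
            date
            vmstat 10 6              # six samples, ten seconds apart
            iostat -x 10 6           # extended per-device I/O statistics
            top -b -n 1 | head -30   # snapshot of the busiest processes
        } >> /var/log/pg_perf_snapshots.log 2>&1
    done

    # rough view of which relations are sitting in shared_buffers right now
    psql -d yourdb -c "
        SELECT c.relname, count(*) AS buffers
        FROM pg_buffercache b
        JOIN pg_class c ON b.relfilenode = c.relfilenode
        GROUP BY c.relname
        ORDER BY buffers DESC
        LIMIT 10;"

Feeding numbers like these into mrtg or anything similar is then just a matter of wrapping the query in a small script that prints the values.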
Not to mention the historical\ndata generally tells you months ahead of time when you're going to\nneed to scale up to bigger hardware.\n\nOn a side note, what version of PG are you using? If it was in a\nprevious email, I missed it.\n\nHope this helps.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n", "msg_date": "Wed, 27 Aug 2008 09:01:23 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes\n\t\"An I/O error occured while sending to the backend.\" exception" }, { "msg_contents": "Maybe strace could help you find the problem, but could cause a great\noverhead...\n\n\"Bill Moran\" <[email protected]> escreveu:\n> ...\n--\n<span style=\"color: #000080\">Daniel Cristian Cruz\n</span>Administrador de Banco de Dados\nDireção Regional - Núcleo de Tecnologia da Informação\nSENAI - SC\nTelefone: 48-3239-1422 (ramal 1422)\n\n\n\n", "msg_date": "Wed, 27 Aug 2008 10:24:51 -0300", "msg_from": "DANIEL CRISTIAN CRUZ <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "Bill Moran wrote:\n\n> On a side note, what version of PG are you using? If it was in a \n> previous email, I missed it.\n> \nHe mentioned 8.3.1 in the first email.\nAlthough nothing stands out in the 8.3.2 or 8.3.3 fix list (without \nknowing his table structure or any contrib modules used) I wonder if\none of them may resolve his issue.\n\nI also wonder if the error is actually sent back from postgresql or\nwhether jdbc is throwing the exception because of a timeout waiting for\na response. I would think that with the table in use having 22GB data\nand 13GB indexes that the long running query has a chance of creating a\ndelay on the connections that is long enough to give jdbc the impression\nthat it isn't responding - generating a misleading error code of \"An I/O\nerror\" (meaning we know the server got the request but the response from\nthe server isn't coming back)\n\nCan you increase the timeout settings on the insert connections that are\nfailing?\n\n\n\n\n-- \n\nShane Ambler\npgSQL (at) Sheeky (dot) Biz\n\nGet Sheeky @ http://Sheeky.Biz\n", "msg_date": "Thu, 28 Aug 2008 02:15:52 +0930", "msg_from": "Shane Ambler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, 27 Aug 2008, Florian Weimer wrote:\n\n> * henk de wit:\n>\n>> On this table we're inserting records with a relatively low\n>> frequency of +- 6~10 per second. We're using PG 8.3.1 on a machine\n>> with two dual core 2.4Ghz XEON CPUs, 16 GB of memory and Debian\n>> Linux. 
The machine is completely devoted to PG, nothing else runs on\n>> the box.\n>\n> Have you disabled the OOM killer?\n\nmy understanding of the OOM killer is that 'disabling' it is disabling \nmemory overcommit, making it impossible for you to get into a situation \nwhere the OOM killer would activate, but this means that any load that \nwould have triggered the OOM killer will always start getting memory \nallocation errors before that point.\n\nthe OOM killer exists becouse there are many things that can happen on a \nsystem that allocate memory that 'may' really be needed, but also 'may \nnot' really be needed.\n\nfor example if you have a process that uses 1G of ram (say firefox) and it \nneeds to start a new process (say acroread to handle a pdf file), what it \ndoes is it forks the firefox process (each of which have 1G of ram \nallocated), and then does an exec of the acroread process (releasing the \n1G of ram previously held by that copy of the firefox process)\n\nwith memory overcommit enabled (the default), the kernel recognises that \nmost programs that fork don't write to all the memory they have allocated, \nso it marks the 1G of ram that firefox uses as read-only, and if either \ncopy of firefox writes to a page of memory it splits that page into \nseperate copies for the seperate processes (and if at this time it runs of \nof memory it invokes the OOM killer to free some space), when firefox does \nan exec almost immediatly after the fork it touches basicly none of the \npages, so the process only uses 1G or ram total.\n\nif memory overcommit is disabled, the kernel checks to see if you have an \nextra 1G of ram available, if you do it allows the process to continue, if \nyou don't it tries to free memory (by throwing away cache, swapping to \ndisk, etc), and if it can't free the memory will return a memroy \nallocation error (which I believe will cause firefox to exit).\n\n\nso you can avoid the OOM killer, but the costs of doing so are that you \nmake far less efficiant use of your ram overall.\n\nDavid Lang\n", "msg_date": "Wed, 27 Aug 2008 14:45:47 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, Aug 27, 2008 at 02:45:47PM -0700, [email protected] wrote:\n\n> with memory overcommit enabled (the default), the kernel recognises that \n> most programs that fork don't write to all the memory they have\n> allocated, \n\nIt doesn't \"recognise\" it; it \"hopes\" it. It happens to hope\ncorrectly in many cases, because you're quite right that many programs\ndon't actually need all the memory they allocate. But there's nothing\nabout the allocation that hints, \"By the way, I'm not really planning\nto use this.\" Also. . .\n\n> seperate copies for the seperate processes (and if at this time it runs of \n> of memory it invokes the OOM killer to free some space),\n\n. . .it kills processes that are using a lot of memory. Those are not\nnecessarily the processes that are allocating memory they don't need.\n\nThe upshot of this is that postgres tends to be a big target for the\nOOM killer, with seriously bad effects to your database. 
So for good\nPostgres operation, you want to run on a machine with the OOM killer\ndisabled.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Wed, 27 Aug 2008 18:13:38 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, 27 Aug 2008, Andrew Sullivan wrote:\n\n> On Wed, Aug 27, 2008 at 02:45:47PM -0700, [email protected] wrote:\n>\n>> with memory overcommit enabled (the default), the kernel recognises that\n>> most programs that fork don't write to all the memory they have\n>> allocated,\n>\n> It doesn't \"recognise\" it; it \"hopes\" it. It happens to hope\n> correctly in many cases, because you're quite right that many programs\n> don't actually need all the memory they allocate. But there's nothing\n> about the allocation that hints, \"By the way, I'm not really planning\n> to use this.\" Also. . .\n\nOk, I was meaning to say \"recognises the fact that a common pattern is to \nnot use the memory, and so it...\"\n\n>> seperate copies for the seperate processes (and if at this time it runs of\n>> of memory it invokes the OOM killer to free some space),\n>\n> . . .it kills processes that are using a lot of memory. Those are not\n> necessarily the processes that are allocating memory they don't need.\n\nthe bahavior of the OOM killer has changed over time, so far nobody has \nbeen able to come up with a 'better' strategy for it to follow.\n\n> The upshot of this is that postgres tends to be a big target for the\n> OOM killer, with seriously bad effects to your database. So for good\n> Postgres operation, you want to run on a machine with the OOM killer\n> disabled.\n\nI disagree with you. I think goof Postgres operation is so highly \ndependant on caching as much data as possible that disabling overcommit \n(and throwing away a lot of memory that could be used for cache) is a \nsolution that's as bad or worse than the problem it's trying to solve.\n\nI find that addign a modest amount of swap to the system and leaving \novercommit enabled works better for me, if the system starts swapping I \nhave a chance of noticing and taking action, but it will ride out small \noverloads. but the biggest thing is that it's not that much more \nacceptable for me to have other programs on the box failing due to memory \nallocation errors, and those will be much more common with overcommit \ndisabled then the OOM killer would be with it enabled\n\nDavid Lang\n", "msg_date": "Wed, 27 Aug 2008 15:22:09 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, Aug 27, 2008 at 4:22 PM, <[email protected]> wrote:\n> I disagree with you. I think goof Postgres operation is so highly dependant\n> on caching as much data as possible that disabling overcommit (and throwing\n> away a lot of memory that could be used for cache) is a solution that's as\n> bad or worse than the problem it's trying to solve.\n>\n> I find that addign a modest amount of swap to the system and leaving\n> overcommit enabled works better for me, if the system starts swapping I have\n> a chance of noticing and taking action, but it will ride out small\n> overloads. 
but the biggest thing is that it's not that much more acceptable\n> for me to have other programs on the box failing due to memory allocation\n> errors, and those will be much more common with overcommit disabled then the\n> OOM killer would be with it enabled\n\nI don't generally find this to be the case because I usually allocate\nabout 20-25% of memory to shared_buffers, use another 10-20% for\nwork_mem across all backends, and let the OS cache with the other\n50-60% or so of memory. In this situation allocations rarely, if\never, fail.\n\nNote I also turn swappiness to 0 or 1 or something small too.\nOtherwise linux winds up swapping out seldom used shared_buffers to\nmake more kernel cache, which is counter productive in the extreme.\n", "msg_date": "Wed, 27 Aug 2008 16:28:56 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "[email protected] wrote:\n> On Wed, 27 Aug 2008, Andrew Sullivan wrote:\n\n>>> seperate copies for the seperate processes (and if at this time it runs of\n>>> of memory it invokes the OOM killer to free some space),\n>>\n>> . . .it kills processes that are using a lot of memory. Those are not\n>> necessarily the processes that are allocating memory they don't need.\n>\n> the bahavior of the OOM killer has changed over time, so far nobody has \n> been able to come up with a 'better' strategy for it to follow.\n\nThe problem with OOM killer for Postgres is that it tends to kill the\npostmaster. That's really dangerous. If it simply killed a backend\nthen it wouldn't be so much of a problem.\n\nSome time ago I found that it was possible to fiddle with a /proc entry\nto convince the OOM to not touch the postmaster. A postmaster with the\nraw IO capability bit set would be skipped by the OOM too killer (this\nis an Oracle tweak AFAIK).\n\nThese are tricks that people could use in their init scripts to protect\nthemselves.\n\n(I wonder if the initscript supplied by the RPMs or Debian should\ncontain such a hack.)\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 27 Aug 2008 22:19:52 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Some time ago I found that it was possible to fiddle with a /proc entry\n> to convince the OOM to not touch the postmaster. A postmaster with the\n> raw IO capability bit set would be skipped by the OOM too killer (this\n> is an Oracle tweak AFAIK).\n> These are tricks that people could use in their init scripts to protect\n> themselves.\n\nYeah? Details please? 
Does the bit get inherited by child processes?\n\n> (I wonder if the initscript supplied by the RPMs or Debian should\n> contain such a hack.)\n\nIt would certainly make sense for my RHEL/Fedora-specific packages,\nsince those are targeting a very limited range of kernel versions.\nNot sure about the situation for other distros.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 28 Aug 2008 00:35:07 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "[email protected] writes:\n> On Wed, 27 Aug 2008, Andrew Sullivan wrote:\n>> The upshot of this is that postgres tends to be a big target for the\n>> OOM killer, with seriously bad effects to your database. So for good\n>> Postgres operation, you want to run on a machine with the OOM killer\n>> disabled.\n\n> I disagree with you.\n\nActually, the problem with Linux' OOM killer is that it\n*disproportionately targets the PG postmaster*, on the basis not of\nmemory that the postmaster is using but of memory its child processes\nare using. This was discussed in the PG archives a few months ago;\nI'm too lazy to search for the link right now, but the details and links\nto confirming kernel documentation are in our archives.\n\nThis is one hundred percent antithetical to the basic design philosophy\nof Postgres, which is that no matter how badly the child processes screw\nup, the postmaster should live to fight another day. The postmaster\nbasically exists to restart things after children die ungracefully.\nIf the OOM killer takes out the postmaster itself (rather than the child\nthat was actually eating the unreasonable amount of memory), we have no\nchance of recovering.\n\nSo, if you want a PG installation that is as robust as it's designed to\nbe, you *will* turn off Linux' OOM killer. Otherwise, don't complain to\nus when your database unexpectedly stops responding.\n\n(Alternatively, if you know how an unprivileged userland process can\ndefend itself against such exceedingly brain-dead kernel policy, we are\nall ears.)\n\n\t\t\tregards, tom lane\n\nPS: I think this is probably unrelated to the OP's problem, since he\nstated there was no sign of any problem from the database server's\nside.\n", "msg_date": "Thu, 28 Aug 2008 00:56:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "The OOM killer is a terrible idea for any serious database server. I wrote a detailed technical paper on this almost 15 years ago when Silicon Graphics had this same feature, and Oracle and other critical server processes couldn't be made reliable.\n\nThe problem with \"overallocating memory\" as Linux does by default is that EVERY application, no matter how well designed and written, becomes unreliable: It can be killed because of some OTHER process. You can be as clever as you like, and do all the QA possible, and demonstrate that there isn't a single bug in Postgres, and it will STILL be unreliable if you run it on a Linux system that allows overcommitted memory.\n\nIMHO, all Postgres servers should run with memory-overcommit disabled. 
On Linux, that means /proc/sys/vm/overcommit_memory=2.\n\nCraig\n", "msg_date": "Wed, 27 Aug 2008 22:58:51 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, 27 Aug 2008, Craig James wrote:\n\n> The OOM killer is a terrible idea for any serious database server. I wrote a \n> detailed technical paper on this almost 15 years ago when Silicon Graphics \n> had this same feature, and Oracle and other critical server processes \n> couldn't be made reliable.\n>\n> The problem with \"overallocating memory\" as Linux does by default is that \n> EVERY application, no matter how well designed and written, becomes \n> unreliable: It can be killed because of some OTHER process. You can be as \n> clever as you like, and do all the QA possible, and demonstrate that there \n> isn't a single bug in Postgres, and it will STILL be unreliable if you run it \n> on a Linux system that allows overcommitted memory.\n>\n> IMHO, all Postgres servers should run with memory-overcommit disabled. On \n> Linux, that means /proc/sys/vm/overcommit_memory=2.\n\nit depends on how much stuff you allow others to run on the box. if you \nhave no control of that then yes, the box is unreliable (but it's not just \nbecouse of the OOM killer, it's becouse those other users can eat up all \nthe other box resources as well CPU, network bandwidth, disk bandwidth, \netc)\n\neven with overcommit disabled, the only way you can be sure that a program \nwill not fail is to make sure that it never needs to allocate memory. with \novercommit off you could have one program that eats up 100% of your ram \nwithout failing (handling the error on memory allocation such that it \ndoesn't crash), but which will cause _every_ other program on the system \nto fail, including any scripts (becouse every command executed will \nrequire forking and without overcommit that will require allocating the \ntotal memory that your shell has allocated so that it can run a trivial \ncommand (like ps or kill that you are trying to use to fix the problem)\n\nif you have a box with unpredictable memory use, disabling overcommit will \nnot make it reliable. it may make it less unreliable (the fact that the \nlinux OOM killer will pick one of the worst possible processes to kill is \na problem), but less unreliable is not the same as reliable.\n\nit's also not that hard to have a process monitor the postmaster (along \nwith other box resources) to restart it if it is killed, at some point you \ncan get init to watch your watchdog and the OOM killer will not kill init. \nso while you can't prevent the postmaster from being killed, you can setup \nto recover from it.\n\nDavid Lang\n", "msg_date": "Wed, 27 Aug 2008 23:16:22 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Tom Lane wrote:\n\n> [email protected] writes:\n>> On Wed, 27 Aug 2008, Andrew Sullivan wrote:\n>>> The upshot of this is that postgres tends to be a big target for the\n>>> OOM killer, with seriously bad effects to your database. 
So for good\n>>> Postgres operation, you want to run on a machine with the OOM killer\n>>> disabled.\n>\n>> I disagree with you.\n>\n> Actually, the problem with Linux' OOM killer is that it\n> *disproportionately targets the PG postmaster*, on the basis not of\n> memory that the postmaster is using but of memory its child processes\n> are using. This was discussed in the PG archives a few months ago;\n> I'm too lazy to search for the link right now, but the details and links\n> to confirming kernel documentation are in our archives.\n>\n> This is one hundred percent antithetical to the basic design philosophy\n> of Postgres, which is that no matter how badly the child processes screw\n> up, the postmaster should live to fight another day. The postmaster\n> basically exists to restart things after children die ungracefully.\n> If the OOM killer takes out the postmaster itself (rather than the child\n> that was actually eating the unreasonable amount of memory), we have no\n> chance of recovering.\n>\n> So, if you want a PG installation that is as robust as it's designed to\n> be, you *will* turn off Linux' OOM killer. Otherwise, don't complain to\n> us when your database unexpectedly stops responding.\n>\n> (Alternatively, if you know how an unprivileged userland process can\n> defend itself against such exceedingly brain-dead kernel policy, we are\n> all ears.)\n\nthere are periodic flamefests on the kernel mailing list over the OOM \nkiller, if you can propose a better algorithm for it to use than the \ncurrent one that doesn't end up being just as bad for some other workload \nthe kernel policy can be changed.\n\nIIRC the reason why it targets the parent process is to deal with a \nfork-bomb type of failure where a program doesn't use much memory itself, \nbut forks off memory hogs as quickly as it can. if the OOM killer only \nkills the children the problem never gets solved.\n\nI assume that the postmaster process is monitoring the back-end processes \nby being it's parent, is there another way that this monitoring could \nbe done so that the back-end processes become independant of the \nmonitoring tool after they are started (the equivalent of nohup)?\n\nwhile this approach to monitoring may not be as quick to react as a wait \nfor a child exit, it may be worth doing if it makes the postmaster not be \nthe prime target of the OOM killer when things go bad on the system.\n\n> \t\t\tregards, tom lane\n>\n> PS: I think this is probably unrelated to the OP's problem, since he\n> stated there was no sign of any problem from the database server's\n> side.\n\nagreed.\n\nDavid Lang\n\n", "msg_date": "Wed, 27 Aug 2008 23:23:16 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, Aug 27, 2008 at 03:22:09PM -0700, [email protected] wrote:\n>\n> I disagree with you. I think goof Postgres operation is so highly dependant \n> on caching as much data as possible that disabling overcommit (and throwing \n> away a lot of memory that could be used for cache) is a solution that's as \n> bad or worse than the problem it's trying to solve.\n\nOk, but the danger is that the OOM killer kills your postmaster. To\nme, this is a cure way worse than the disease it's trying to treat.\nYMMD &c. 
&c.\n\nA\n\n-- \nAndrew Sullivan\[email protected]\n+1 503 667 4564 x104\nhttp://www.commandprompt.com/\n", "msg_date": "Thu, 28 Aug 2008 08:36:46 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, 27 Aug 2008, [email protected] wrote:\n> if memory overcommit is disabled, the kernel checks to see if you have an \n> extra 1G of ram available, if you do it allows the process to continue, if \n> you don't it tries to free memory (by throwing away cache, swapping to disk, \n> etc), and if it can't free the memory will return a memroy allocation error \n> (which I believe will cause firefox to exit).\n\nRemember that the memory overcommit check is checking against the amount \nof RAM + swap you have - not just the amount of RAM. When a fork occurs, \nhardly any extra actual RAM is used (due to copy on write), but the \npotential is there for the process to use it. If overcommit is switched \noff, then you just need to make sure there is *plenty* of swap to convince \nthe kernel that it can actually fulfil all of the memory requests if all \nthe processes behave badly and all shared pages become unshared. Then the \nconsequences of processes actually using that memory are that the machine \nwill swap, rather than the OOM killer having to act.\n\nOf course, it's generally bad to run a machine with more going on than \nwill fit in RAM.\n\nNeither swapping nor OOM killing are particularly good - it's just a \nconsequence of the amount of memory needed being unpredictable.\n\nProbably the best solution is to just tell the kernel somehow to never \nkill the postmaster.\n\nMatthew\n\n-- \n<Taking apron off> And now you can say honestly that you have been to a\nlecture where you watched paint dry.\n - Computer Graphics Lecturer\n", "msg_date": "Thu, 28 Aug 2008 14:26:59 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "In response to Matthew Wakeling <[email protected]>:\n> \n> Probably the best solution is to just tell the kernel somehow to never \n> kill the postmaster.\n\nThis thread interested me enough to research this a bit.\n\nIn linux, it's possible to tell the OOM killer never to consider\ncertain processes for the axe, using /proc magic. See this page:\nhttp://linux-mm.org/OOM_Killer\n\nPerhaps this should be in the PostgreSQL docs somewhere?\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 28 Aug 2008 09:53:25 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes\n\t\"An I/O error occured while sending to the backend.\" exception" }, { "msg_contents": ">>> Bill Moran <[email protected]> wrote: \n> In response to Matthew Wakeling <[email protected]>:\n>> \n>> Probably the best solution is to just tell the kernel somehow to\nnever \n>> kill the postmaster.\n> \n> This thread interested me enough to research this a bit.\n> \n> In linux, it's possible to tell the OOM killer never to consider\n> certain processes for the axe, using /proc magic. 
See this page:\n> http://linux-mm.org/OOM_Killer\n> \n> Perhaps this should be in the PostgreSQL docs somewhere?\n \nThat sure sounds like a good idea.\n \nEven though the one time the OOM killer kicked in on one of our\nservers, it killed a runaway backend and not the postmaster\n( http://archives.postgresql.org/pgsql-bugs/2008-07/msg00105.php ),\nI think I will modify our service scripts in /etc/init.d/ to pick off\nthe postmaster pid after a start and echo -16 (or some such) into the\n/proc/<pid>/oom_adj file (which is where I found the file on my SuSE\nsystem).\n \nThanks for the research and the link!\n \n-Kevin\n", "msg_date": "Thu, 28 Aug 2008 09:48:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes\"An I/O error\n\toccured while sending to the backend.\" exception" }, { "msg_contents": "\nOn Aug 28, 2008, at 6:26 AM, Matthew Wakeling wrote:\n\n> On Wed, 27 Aug 2008, [email protected] wrote:\n>> if memory overcommit is disabled, the kernel checks to see if you \n>> have an extra 1G of ram available, if you do it allows the process \n>> to continue, if you don't it tries to free memory (by throwing away \n>> cache, swapping to disk, etc), and if it can't free the memory will \n>> return a memroy allocation error (which I believe will cause \n>> firefox to exit).\n>\n> Remember that the memory overcommit check is checking against the \n> amount of RAM + swap you have - not just the amount of RAM. When a \n> fork occurs, hardly any extra actual RAM is used (due to copy on \n> write), but the potential is there for the process to use it. If \n> overcommit is switched off, then you just need to make sure there is \n> *plenty* of swap to convince the kernel that it can actually fulfil \n> all of the memory requests if all the processes behave badly and all \n> shared pages become unshared. Then the consequences of processes \n> actually using that memory are that the machine will swap, rather \n> than the OOM killer having to act.\n>\n> Of course, it's generally bad to run a machine with more going on \n> than will fit in RAM.\n>\n> Neither swapping nor OOM killing are particularly good - it's just a \n> consequence of the amount of memory needed being unpredictable.\n>\n> Probably the best solution is to just tell the kernel somehow to \n> never kill the postmaster.\n\nOr configure adequate swap space?\n\nCheers,\n Steve\n", "msg_date": "Thu, 28 Aug 2008 08:05:37 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "Another approach we used successfully for a similar problem -- (we had lots\nof free high memory but were running out of low memory; oom killer wiped out\nMQ a couple times and postmaster a couple times) -- was to change the\nsettings for how aggressively the virtual memory system protected low memory\nby changing /proc/sys/vm/lowmem_reserve_ratio (2.6.18?+ Kernel). 
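Pulling together the /proc and sysctl tweaks mentioned in the last few messages, an init-script fragment along these lines is roughly what is being described. The data directory path is an assumption, the oom_adj semantics depend on the kernel (on kernels of this era the valid range was -17..15, with -17 exempting the process entirely), and the overcommit and swappiness settings are what individual posters advocate rather than a consensus:

    # protect the postmaster from the OOM killer, as suggested above
    PGDATA=/var/lib/pgsql/data                      # assumed location of the cluster
    PGPID=$(head -n 1 "$PGDATA/postmaster.pid")     # first line of postmaster.pid is the pid
    echo -16 > /proc/$PGPID/oom_adj                 # -16 as in Kevin's plan; -17 would exempt it entirely

    # the other knobs that have come up in this thread
    echo 2 > /proc/sys/vm/overcommit_memory         # disable memory overcommit (Craig's recommendation)
    echo 1 > /proc/sys/vm/swappiness                # discourage swapping shared_buffers out for kernel cache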
I don't\nremember all of the details, but we looked at\nDocumentation/filesystems/proc.txt for the 2.6.25 kernel (it wasn't\ndocumented for earlier kernel releases) to figure out how it worked and set\nit appropriate to our system memory configuration.\n\n-Jerry\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Steve Atkins\nSent: Thursday, August 28, 2008 9:06 AM\nTo: PostgreSQL Performance\nSubject: Re: [PERFORM] select on 22 GB table causes \"An I/O error occured\nwhile sending to the backend.\" exception\n\n\nOn Aug 28, 2008, at 6:26 AM, Matthew Wakeling wrote:\n\n> On Wed, 27 Aug 2008, [email protected] wrote:\n>> if memory overcommit is disabled, the kernel checks to see if you \n>> have an extra 1G of ram available, if you do it allows the process \n>> to continue, if you don't it tries to free memory (by throwing away \n>> cache, swapping to disk, etc), and if it can't free the memory will \n>> return a memroy allocation error (which I believe will cause \n>> firefox to exit).\n>\n> Remember that the memory overcommit check is checking against the \n> amount of RAM + swap you have - not just the amount of RAM. When a \n> fork occurs, hardly any extra actual RAM is used (due to copy on \n> write), but the potential is there for the process to use it. If \n> overcommit is switched off, then you just need to make sure there is \n> *plenty* of swap to convince the kernel that it can actually fulfil \n> all of the memory requests if all the processes behave badly and all \n> shared pages become unshared. Then the consequences of processes \n> actually using that memory are that the machine will swap, rather \n> than the OOM killer having to act.\n>\n> Of course, it's generally bad to run a machine with more going on \n> than will fit in RAM.\n>\n> Neither swapping nor OOM killing are particularly good - it's just a \n> consequence of the amount of memory needed being unpredictable.\n>\n> Probably the best solution is to just tell the kernel somehow to \n> never kill the postmaster.\n\nOr configure adequate swap space?\n\nCheers,\n Steve\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n", "msg_date": "Thu, 28 Aug 2008 09:47:19 -0600", "msg_from": "\"Jerry Champlin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "[email protected] wrote:\n> On Wed, 27 Aug 2008, Craig James wrote:\n>> The OOM killer is a terrible idea for any serious database server. I \n>> wrote a detailed technical paper on this almost 15 years ago when \n>> Silicon Graphics had this same feature, and Oracle and other critical \n>> server processes couldn't be made reliable.\n>>\n>> The problem with \"overallocating memory\" as Linux does by default is \n>> that EVERY application, no matter how well designed and written, \n>> becomes unreliable: It can be killed because of some OTHER process. \n>> You can be as clever as you like, and do all the QA possible, and \n>> demonstrate that there isn't a single bug in Postgres, and it will \n>> STILL be unreliable if you run it on a Linux system that allows \n>> overcommitted memory.\n>>\n>> IMHO, all Postgres servers should run with memory-overcommit \n>> disabled. 
On Linux, that means /proc/sys/vm/overcommit_memory=2.\n> \n> it depends on how much stuff you allow others to run on the box. if you \n> have no control of that then yes, the box is unreliable (but it's not \n> just becouse of the OOM killer, it's becouse those other users can eat \n> up all the other box resources as well CPU, network bandwidth, disk \n> bandwidth, etc)\n> \n> even with overcommit disabled, the only way you can be sure that a \n> program will not fail is to make sure that it never needs to allocate \n> memory. with overcommit off you could have one program that eats up 100% \n> of your ram without failing (handling the error on memory allocation \n> such that it doesn't crash), but which will cause _every_ other program \n> on the system to fail, including any scripts (becouse every command \n> executed will require forking and without overcommit that will require \n> allocating the total memory that your shell has allocated so that it can \n> run a trivial command (like ps or kill that you are trying to use to fix \n> the problem)\n> \n> if you have a box with unpredictable memory use, disabling overcommit \n> will not make it reliable. it may make it less unreliable (the fact that \n> the linux OOM killer will pick one of the worst possible processes to \n> kill is a problem), but less unreliable is not the same as reliable.\n\nThe problem with any argument in favor of memory overcommit and OOM is that there is a MUCH better, and simpler, solution. Buy a really big disk, say a terabyte, and allocate the whole thing as swap space. Then do a decent job of configuring your kernel so that any reasonable process can allocate huge chunks of memory that it will never use, but can't use the whole terrabyte.\n\nUsing real swap space instead of overallocated memory is a much better solution.\n\n- It's cheap.\n- There is no performance hit at all if you buy enough real memory\n- If runaway processes start actually using memory, the system slows\n down, but server processes like Postgres *aren't killed*.\n- When a runaway process starts everybody swapping, you can just\n find it and kill it. Once it's dead, everything else goes back\n to normal.\n\nIt's hard to imagine a situation where any program or collection of programs would actually try to allocate more than a terrabyte of memory and exceed the swap space on a single terrabyte disk. The cost is almost nothing, a few hundred dollars.\n\nSo turn off overcommit, and buy an extra disk if you actually need a lot of \"virtual memory\".\n\nCraig\n", "msg_date": "Thu, 28 Aug 2008 09:02:54 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Steve Atkins wrote:\n>> Probably the best solution is to just tell the kernel somehow to never kill \n>> the postmaster.\n>\n> Or configure adequate swap space?\n\nOh yes, that's very important. However, that gives the machine the \nopportunity to thrash.\n\nMatthew\n\n-- \nThe early bird gets the worm. 
If you want something else for breakfast, get\nup later.\n", "msg_date": "Thu, 28 Aug 2008 17:03:38 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Craig James wrote:\n\n> [email protected] wrote:\n>> On Wed, 27 Aug 2008, Craig James wrote:\n>>> The OOM killer is a terrible idea for any serious database server. I \n>>> wrote a detailed technical paper on this almost 15 years ago when Silicon \n>>> Graphics had this same feature, and Oracle and other critical server \n>>> processes couldn't be made reliable.\n>>> \n>>> The problem with \"overallocating memory\" as Linux does by default is that \n>>> EVERY application, no matter how well designed and written, becomes \n>>> unreliable: It can be killed because of some OTHER process. You can be as \n>>> clever as you like, and do all the QA possible, and demonstrate that there \n>>> isn't a single bug in Postgres, and it will STILL be unreliable if you run \n>>> it on a Linux system that allows overcommitted memory.\n>>> \n>>> IMHO, all Postgres servers should run with memory-overcommit disabled. On \n>>> Linux, that means /proc/sys/vm/overcommit_memory=2.\n>> \n>> it depends on how much stuff you allow others to run on the box. if you \n>> have no control of that then yes, the box is unreliable (but it's not just \n>> becouse of the OOM killer, it's becouse those other users can eat up all \n>> the other box resources as well CPU, network bandwidth, disk bandwidth, \n>> etc)\n>> \n>> even with overcommit disabled, the only way you can be sure that a program \n>> will not fail is to make sure that it never needs to allocate memory. with \n>> overcommit off you could have one program that eats up 100% of your ram \n>> without failing (handling the error on memory allocation such that it \n>> doesn't crash), but which will cause _every_ other program on the system to \n>> fail, including any scripts (becouse every command executed will require \n>> forking and without overcommit that will require allocating the total \n>> memory that your shell has allocated so that it can run a trivial command \n>> (like ps or kill that you are trying to use to fix the problem)\n>> \n>> if you have a box with unpredictable memory use, disabling overcommit will \n>> not make it reliable. it may make it less unreliable (the fact that the \n>> linux OOM killer will pick one of the worst possible processes to kill is a \n>> problem), but less unreliable is not the same as reliable.\n>\n> The problem with any argument in favor of memory overcommit and OOM is that \n> there is a MUCH better, and simpler, solution. Buy a really big disk, say a \n> terabyte, and allocate the whole thing as swap space. Then do a decent job \n> of configuring your kernel so that any reasonable process can allocate huge \n> chunks of memory that it will never use, but can't use the whole terrabyte.\n>\n> Using real swap space instead of overallocated memory is a much better \n> solution.\n>\n> - It's cheap.\n\ncheap in dollars, if you actually use any of it it's very expensive in \nperformance\n\n> - There is no performance hit at all if you buy enough real memory\n> - If runaway processes start actually using memory, the system slows\n> down, but server processes like Postgres *aren't killed*.\n> - When a runaway process starts everybody swapping, you can just\n> find it and kill it. 
Once it's dead, everything else goes back\n> to normal.\n\nall of these things are still true if you enable overcommit, the \ndifference is that with overcommit enabled your actual ram will be used \nfor cache as much as possible, with overcommit disabled you will keep \nthrowing away cache to make room for memory that's allocated but not \nwritten to.\n\nI generally allocate 2G of disk to swap, if the system ends up using even \nthat much it will have slowed to a crawl, but if you are worried that \nthat's no enough, by all means go ahead and allocate more, but allocateing \na 1TB disk is overkill (do you realize how long it takes just to _read_ an \nentire 1TB disk? try it sometime with dd if=/dev/drive of=/dev/null)\n\nDavid Lang\n\n> It's hard to imagine a situation where any program or collection of programs \n> would actually try to allocate more than a terrabyte of memory and exceed the \n> swap space on a single terrabyte disk. The cost is almost nothing, a few \n> hundred dollars.\n>\n> So turn off overcommit, and buy an extra disk if you actually need a lot of \n> \"virtual memory\".\n>\n> Craig\n>\n", "msg_date": "Thu, 28 Aug 2008 09:48:23 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Matthew Wakeling wrote:\n\n> On Wed, 27 Aug 2008, [email protected] wrote:\n>> if memory overcommit is disabled, the kernel checks to see if you have an \n>> extra 1G of ram available, if you do it allows the process to continue, if \n>> you don't it tries to free memory (by throwing away cache, swapping to \n>> disk, etc), and if it can't free the memory will return a memroy allocation \n>> error (which I believe will cause firefox to exit).\n>\n> Remember that the memory overcommit check is checking against the amount of \n> RAM + swap you have - not just the amount of RAM. When a fork occurs, hardly \n> any extra actual RAM is used (due to copy on write), but the potential is \n> there for the process to use it. If overcommit is switched off, then you just \n> need to make sure there is *plenty* of swap to convince the kernel that it \n> can actually fulfil all of the memory requests if all the processes behave \n> badly and all shared pages become unshared. Then the consequences of \n> processes actually using that memory are that the machine will swap, rather \n> than the OOM killer having to act.\n\nif you are correct that it just checks against memory+swap then it's not a \nbig deal, but I don't think it does that. 
I think it actually allocates \nthe memory, and if it does that it will push things out of ram to do the \nallocation, I don't believe that it will allocate swap space directly.\n\nDavid Lang\n\n> Of course, it's generally bad to run a machine with more going on than will \n> fit in RAM.\n>\n> Neither swapping nor OOM killing are particularly good - it's just a \n> consequence of the amount of memory needed being unpredictable.\n>\n> Probably the best solution is to just tell the kernel somehow to never kill \n> the postmaster.\n>\n> Matthew\n>\n>\n", "msg_date": "Thu, 28 Aug 2008 09:51:09 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 2008-08-28 at 00:56 -0400, Tom Lane wrote:\n> Actually, the problem with Linux' OOM killer is that it\n> *disproportionately targets the PG postmaster*, on the basis not of\n> memory that the postmaster is using but of memory its child processes\n> are using. This was discussed in the PG archives a few months ago;\n> I'm too lazy to search for the link right now, but the details and links\n> to confirming kernel documentation are in our archives.\n> \n\nhttp://archives.postgresql.org/pgsql-hackers/2008-02/msg00101.php\n\nIt's not so much that the OOM Killer targets the parent process for a\nfraction of the memory consumed by the child. It may not be a great\ndesign, but it's not what's causing the problem for the postmaster.\n\nThe problem for the postmaster is that the OOM killer counts the\nchildren's total vmsize -- including *shared* memory -- against the\nparent, which is such a bad idea I don't know where to start. If you\nhave shared_buffers set to 1GB and 25 connections, the postmaster will\nbe penalized as though it was using 13.5 GB of memory, even though all\nthe processes together are only using about 1GB! \n\nNot only that, killing a process doesn't free shared memory, so it's\njust flat out broken.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 28 Aug 2008 10:51:16 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Wed, 2008-08-27 at 23:23 -0700, [email protected] wrote:\n> there are periodic flamefests on the kernel mailing list over the OOM \n> killer, if you can propose a better algorithm for it to use than the \n> current one that doesn't end up being just as bad for some other workload \n> the kernel policy can be changed.\n> \n\nTried that: http://lkml.org/lkml/2007/2/9/275\n\nAll they have to do is *not* count shared memory against the process (or\nat least not count it against the parent of the process), and the system\nmay approximate sanity.\n\n> IIRC the reason why it targets the parent process is to deal with a \n> fork-bomb type of failure where a program doesn't use much memory itself, \n> but forks off memory hogs as quickly as it can. if the OOM killer only \n> kills the children the problem never gets solved.\n\nBut killing a process won't free shared memory. And there is already a\nsystem-wide limit on shared memory. 
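As a back-of-envelope check of the 13.5 GB figure above: it is consistent with the parent being charged its own vmsize plus half of each child's, with every child mapping the same 1 GB of shared buffers. The one-half factor is inferred from those numbers rather than stated outright in the thread:

    # 1 GB of its own plus 25 children each charged at half of their ~1 GB vmsize
    shared_gb=1; connections=25
    echo "$shared_gb + $connections * $shared_gb / 2" | bc -l   # => 13.5 "GB" counted against the postmaster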
So what's the point of such a bad\ndesign?\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Thu, 28 Aug 2008 11:07:18 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "Matthew Wakeling wrote:\n> On Thu, 28 Aug 2008, Steve Atkins wrote:\n>>> Probably the best solution is to just tell the kernel somehow to \n>>> never kill the postmaster.\n>>\n>> Or configure adequate swap space?\n> \n> Oh yes, that's very important. However, that gives the machine the \n> opportunity to thrash.\n\nNo, that's where the whole argument for allowing overcommitted memory falls flat.\n\nThe entire argument for allowing overcommitted memory hinges on the fact that processes *won't use the memory*. If they use it, then overcommitting causes problems everywhere, such as a Postmaster getting arbitrarily killed.\n\nIf a process *doesn't* use the memory, then there's no problem with thrashing, right?\n\nSo it never makes sense to enable overcommitted memory when Postgres, or any server, is running.\n\nAllocating a big, fat terabyte swap disk is ALWAYS better than allowing overcommitted memory. If your usage is such that overcommitted memory would never be used, then the swap disk will never be used either. If your processes do use the memory, then your performance goes into the toilet, and you know it's time to buy more memory or a second server, but in the mean time your server processes at least keep running while you kill the rogue processes.\n\nCraig\n", "msg_date": "Thu, 28 Aug 2008 11:12:14 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, [email protected] wrote:\n\n> On Thu, 28 Aug 2008, Matthew Wakeling wrote:\n>\n>> On Wed, 27 Aug 2008, [email protected] wrote:\n>>> if memory overcommit is disabled, the kernel checks to see if you have an \n>>> extra 1G of ram available, if you do it allows the process to continue, if \n>>> you don't it tries to free memory (by throwing away cache, swapping to \n>>> disk, etc), and if it can't free the memory will return a memroy \n>>> allocation error (which I believe will cause firefox to exit).\n>> \n>> Remember that the memory overcommit check is checking against the amount of \n>> RAM + swap you have - not just the amount of RAM. When a fork occurs, \n>> hardly any extra actual RAM is used (due to copy on write), but the \n>> potential is there for the process to use it. If overcommit is switched \n>> off, then you just need to make sure there is *plenty* of swap to convince \n>> the kernel that it can actually fulfil all of the memory requests if all \n>> the processes behave badly and all shared pages become unshared. Then the \n>> consequences of processes actually using that memory are that the machine \n>> will swap, rather than the OOM killer having to act.\n>\n> if you are correct that it just checks against memory+swap then it's not a \n> big deal, but I don't think it does that. 
I think it actually allocates the \n> memory, and if it does that it will push things out of ram to do the \n> allocation, I don't believe that it will allocate swap space directly.\n\nI just asked on the kernel mailing list and Alan Cox responded.\n\nhe is saying that you are correct, it only allocates against the total \navailable, it doesn't actually allocate ram.\n\nso with sufficiant swap overcommit off should be fine.\n\nbut you do need to allocate more swap as the total memory 'used' can be \nsignificantly higher that with overcommit on.\n\nDavid Lang\n\n> David Lang\n>\n>> Of course, it's generally bad to run a machine with more going on than will \n>> fit in RAM.\n>> \n>> Neither swapping nor OOM killing are particularly good - it's just a \n>> consequence of the amount of memory needed being unpredictable.\n>> \n>> Probably the best solution is to just tell the kernel somehow to never kill \n>> the postmaster.\n>> \n>> Matthew\n>> \n>> \n>\n", "msg_date": "Thu, 28 Aug 2008 11:17:18 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Craig James wrote:\n\n> Matthew Wakeling wrote:\n>> On Thu, 28 Aug 2008, Steve Atkins wrote:\n>>>> Probably the best solution is to just tell the kernel somehow to never \n>>>> kill the postmaster.\n>>> \n>>> Or configure adequate swap space?\n>> \n>> Oh yes, that's very important. However, that gives the machine the \n>> opportunity to thrash.\n>\n> No, that's where the whole argument for allowing overcommitted memory falls \n> flat.\n>\n> The entire argument for allowing overcommitted memory hinges on the fact that \n> processes *won't use the memory*. If they use it, then overcommitting causes \n> problems everywhere, such as a Postmaster getting arbitrarily killed.\n>\n> If a process *doesn't* use the memory, then there's no problem with \n> thrashing, right?\n>\n> So it never makes sense to enable overcommitted memory when Postgres, or any \n> server, is running.\n>\n> Allocating a big, fat terabyte swap disk is ALWAYS better than allowing \n> overcommitted memory. If your usage is such that overcommitted memory would \n> never be used, then the swap disk will never be used either. If your \n> processes do use the memory, then your performance goes into the toilet, and \n> you know it's time to buy more memory or a second server, but in the mean \n> time your server processes at least keep running while you kill the rogue \n> processes.\n\nthere was a misunderstanding (for me if nobody else) that without \novercommit it was actual ram that was getting allocated, which could push \nthings out to swap even if the memory ended up not being needed later. 
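For anyone who wants to see that accounting on a live box rather than take it on faith, the kernel exposes it in /proc/meminfo. The CommitLimit formula in the comment below is the documented behaviour for overcommit_memory=2 and is stated here as an assumption to verify against your kernel, not as something from this thread:

    grep -i commit /proc/meminfo
    # CommitLimit:  swap + (RAM * overcommit_ratio / 100), the ceiling when overcommit_memory=2
    # Committed_AS: how much address space has already been promised to processes
    cat /proc/sys/vm/overcommit_ratio   # defaults to 50 on most kernels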
\nwith the clarification that this is not the case and the allocation is \njust reducing the virtual memory available it's now clear that it is just \nas efficiant to run with overcommit off.\n\nso the conclusion is:\n\nno performance/caching/buffer difference between the two modes.\n\nthe differencees between the two are:\n\nwith overcommit\n\n when all ram+swap is used OOM killer is activated.\n for the same amount of ram+swap more allocations can be done before it \nis all used up (how much more is unpredicable)\n\nwithout overcommit\n\n when all ram+swap is allocated programs (not nessasarily the memory \nhog) start getting memory allocation errors.\n\n\nDavid Lang\n", "msg_date": "Thu, 28 Aug 2008 11:27:27 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Jeff Davis wrote:\n> The problem for the postmaster is that the OOM killer counts the\n> children's total vmsize -- including *shared* memory -- against the\n> parent, which is such a bad idea I don't know where to start. If you\n> have shared_buffers set to 1GB and 25 connections, the postmaster will\n> be penalized as though it was using 13.5 GB of memory, even though all\n> the processes together are only using about 1GB!\n\nI find it really hard to believe that it counts shared memory like that. \nThat's just dumb.\n\nOf course, there are two types of \"shared\" memory. There's explicit shared \nmemory, like Postgres uses, and there's copy-on-write \"shared\" memory, \ncaused by a process fork. The copy-on-write memory needs to be counted for \neach child, but the explicit shared memory needs to be counted just once.\n\n> Not only that, killing a process doesn't free shared memory, so it's\n> just flat out broken.\n\nExactly. a cost-benefit model would work well here. Work out how much RAM \nwould be freed by killing a process, and use that when choosing which \nprocess to kill.\n\nMatthew\n\n-- \nYou will see this is a 3-blackboard lecture. This is the closest you are going\nto get from me to high-tech teaching aids. Hey, if they put nooses on this, it\nwould be fun! -- Computer Science Lecturer\n", "msg_date": "Thu, 28 Aug 2008 21:04:23 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, [email protected] wrote:\n> I just asked on the kernel mailing list and Alan Cox responded.\n>\n> he is saying that you are correct, it only allocates against the total \n> available, it doesn't actually allocate ram.\n\nThat was remarkably graceful of you. Yes, operating systems have worked \nthat way for decades - it's the beauty of copy-on-write.\n\n> but you do need to allocate more swap as the total memory 'used' can be \n> significantly higher that with overcommit on.\n\nYes, that's right.\n\nMatthew\n\n-- \nNote: some countries impose serious penalties for a conspiracy to overthrow\n the political system. 
THIS DOES NOT FIX THE VULNERABILITY.\n\t -- http://lcamtuf.coredump.cx/soft/trash/1apr.txt\n", "msg_date": "Thu, 28 Aug 2008 21:09:24 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Craig James wrote:\n> If your processes do use the memory, then your performance goes into the \n> toilet, and you know it's time to buy more memory or a second server, \n> but in the mean time your server processes at least keep running while \n> you kill the rogue processes.\n\nI'd argue against swap ALWAYS being better than overcommit. It's a choice \nbetween your performance going into the toilet or your processes dieing.\n\nOn the one hand, if someone fork-bombs you, the OOM killer has a chance of \nsolving the problem for you, rather than you having to log onto an \nunresponsive machine to kill the process yourself. On the other hand, the \nOOM killer may kill the wrong thing. Depending on what else you use your \nmachine for, either of the choices may be the right one.\n\nAnother point is that from a business perspective, a database that has \nstopped responding is equally bad regardless of whether that is because \nthe OOM killer has appeared or because the machine is thrashing. In both \ncases, there is a maximum throughput that the machine can handle, and if \nrequests appear quicker than that the system will collapse, especially if \nthe requests start timing out and being retried.\n\nThis problem really is caused by the kernel not having enough information \non how much memory a process is going to use. I would be much in favour of \nreplacing fork() with some more informative system call. For example, \nforkandexec() instead of fork() then exec() - the kernel would know that \nthe new process will never need any of that duplicated RAM. However, there \nis *far* too much legacy in the old fork() call to change that now.\n\nLikewise, I would be all for Postgres managing its memory better. It would \nbe very nice to be able to set a maximum amount of work-memory, rather \nthan a maximum amount per backend. Each backend could then make do with \nhowever much is left of the work-memory pool when it actually executes \nqueries. As it is, the server admin has no idea how many multiples of \nwork-mem are going to be actually used, even knowing the maximum number of \nbackends.\n\nMatthew\n\n-- \nOf course it's your fault. Everything here's your fault - it says so in your\ncontract. - Quark\n", "msg_date": "Thu, 28 Aug 2008 21:29:08 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, Aug 28, 2008 at 2:29 PM, Matthew Wakeling <[email protected]> wrote:\n\n> Another point is that from a business perspective, a database that has\n> stopped responding is equally bad regardless of whether that is because the\n> OOM killer has appeared or because the machine is thrashing. 
In both cases,\n> there is a maximum throughput that the machine can handle, and if requests\n> appear quicker than that the system will collapse, especially if the\n> requests start timing out and being retried.\n\nBut there's a HUGE difference between a machine that has bogged down\nunder load so badly that you have to reset it and a machine that's had\nthe postmaster slaughtered by the OOM killer. In the first situation,\nwhile the machine is unresponsive, it should come right back up with a\ncoherent database after the restart.\n\nOTOH, a machine with a dead postmaster is far more likely to have a\ncorrupted database when it gets restarted.\n\n> Likewise, I would be all for Postgres managing its memory better. It would\n> be very nice to be able to set a maximum amount of work-memory, rather than\n> a maximum amount per backend. Each backend could then make do with however\n> much is left of the work-memory pool when it actually executes queries. As\n> it is, the server admin has no idea how many multiples of work-mem are going\n> to be actually used, even knowing the maximum number of backends.\n\nAgreed. It would be useful to have a cap on all work_mem, but it\nmight be an issue that causes all the backends to talk to each other,\nwhich can be really slow if you're running a thousand or so\nconnections.\n", "msg_date": "Thu, 28 Aug 2008 16:42:47 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Scott Marlowe wrote:\n\n> On Thu, Aug 28, 2008 at 2:29 PM, Matthew Wakeling <[email protected]> wrote:\n>\n>> Another point is that from a business perspective, a database that has\n>> stopped responding is equally bad regardless of whether that is because the\n>> OOM killer has appeared or because the machine is thrashing. In both cases,\n>> there is a maximum throughput that the machine can handle, and if requests\n>> appear quicker than that the system will collapse, especially if the\n>> requests start timing out and being retried.\n>\n> But there's a HUGE difference between a machine that has bogged down\n> under load so badly that you have to reset it and a machine that's had\n> the postmaster slaughtered by the OOM killer. In the first situation,\n> while the machine is unresponsive, it should come right back up with a\n> coherent database after the restart.\n>\n> OTOH, a machine with a dead postmaster is far more likely to have a\n> corrupted database when it gets restarted.\n\nwait a min here, postgres is supposed to be able to survive a complete box \nfailure without corrupting the database, if killing a process can corrupt \nthe database it sounds like a major problem.\n\nDavid Lang\n\n>> Likewise, I would be all for Postgres managing its memory better. It would\n>> be very nice to be able to set a maximum amount of work-memory, rather than\n>> a maximum amount per backend. Each backend could then make do with however\n>> much is left of the work-memory pool when it actually executes queries. As\n>> it is, the server admin has no idea how many multiples of work-mem are going\n>> to be actually used, even knowing the maximum number of backends.\n>\n> Agreed. 
It would be useful to have a cap on all work_mem, but it\n> might be an issue that causes all the backends to talk to each other,\n> which can be really slow if you're running a thousand or so\n> connections.\n>\n>\n", "msg_date": "Thu, 28 Aug 2008 16:08:24 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, Aug 28, 2008 at 5:08 PM, <[email protected]> wrote:\n> On Thu, 28 Aug 2008, Scott Marlowe wrote:\n>\n>> On Thu, Aug 28, 2008 at 2:29 PM, Matthew Wakeling <[email protected]>\n>> wrote:\n>>\n>>> Another point is that from a business perspective, a database that has\n>>> stopped responding is equally bad regardless of whether that is because\n>>> the\n>>> OOM killer has appeared or because the machine is thrashing. In both\n>>> cases,\n>>> there is a maximum throughput that the machine can handle, and if\n>>> requests\n>>> appear quicker than that the system will collapse, especially if the\n>>> requests start timing out and being retried.\n>>\n>> But there's a HUGE difference between a machine that has bogged down\n>> under load so badly that you have to reset it and a machine that's had\n>> the postmaster slaughtered by the OOM killer. In the first situation,\n>> while the machine is unresponsive, it should come right back up with a\n>> coherent database after the restart.\n>>\n>> OTOH, a machine with a dead postmaster is far more likely to have a\n>> corrupted database when it gets restarted.\n>\n> wait a min here, postgres is supposed to be able to survive a complete box\n> failure without corrupting the database, if killing a process can corrupt\n> the database it sounds like a major problem.\n\nYes it is a major problem, but not with postgresql. It's a major\nproblem with the linux OOM killer killing processes that should not be\nkilled.\n\nWould it be postgresql's fault if it corrupted data because my machine\nhad bad memory? Or a bad hard drive? This is the same kind of\nfailure. The postmaster should never be killed. It's the one thing\nholding it all together.\n", "msg_date": "Thu, 28 Aug 2008 19:11:50 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Scott Marlowe wrote:\n\n> On Thu, Aug 28, 2008 at 5:08 PM, <[email protected]> wrote:\n>> On Thu, 28 Aug 2008, Scott Marlowe wrote:\n>>\n>>> On Thu, Aug 28, 2008 at 2:29 PM, Matthew Wakeling <[email protected]>\n>>> wrote:\n>>>\n>>>> Another point is that from a business perspective, a database that has\n>>>> stopped responding is equally bad regardless of whether that is because\n>>>> the\n>>>> OOM killer has appeared or because the machine is thrashing. In both\n>>>> cases,\n>>>> there is a maximum throughput that the machine can handle, and if\n>>>> requests\n>>>> appear quicker than that the system will collapse, especially if the\n>>>> requests start timing out and being retried.\n>>>\n>>> But there's a HUGE difference between a machine that has bogged down\n>>> under load so badly that you have to reset it and a machine that's had\n>>> the postmaster slaughtered by the OOM killer. 
In the first situation,\n>>> while the machine is unresponsive, it should come right back up with a\n>>> coherent database after the restart.\n>>>\n>>> OTOH, a machine with a dead postmaster is far more likely to have a\n>>> corrupted database when it gets restarted.\n>>\n>> wait a min here, postgres is supposed to be able to survive a complete box\n>> failure without corrupting the database, if killing a process can corrupt\n>> the database it sounds like a major problem.\n>\n> Yes it is a major problem, but not with postgresql. It's a major\n> problem with the linux OOM killer killing processes that should not be\n> killed.\n>\n> Would it be postgresql's fault if it corrupted data because my machine\n> had bad memory? Or a bad hard drive? This is the same kind of\n> failure. The postmaster should never be killed. It's the one thing\n> holding it all together.\n\nthe ACID guarantees that postgres is making are supposed to mean that even \nif the machine dies, the CPU goes up in smoke, etc, the transactions that \nare completed will not be corrupted.\n\nif killing the process voids all the ACID protection then something is \nseriously wrong.\n\nit may loose transactions that are in flight, but it should not corrupt \nthe database.\n\nDavid Lang\n", "msg_date": "Thu, 28 Aug 2008 18:16:16 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, Aug 28, 2008 at 7:16 PM, <[email protected]> wrote:\n> the ACID guarantees that postgres is making are supposed to mean that even\n> if the machine dies, the CPU goes up in smoke, etc, the transactions that\n> are completed will not be corrupted.\n\nAnd if any of those things happens, the machine will shut down and\nyou'll be safe.\n\n> if killing the process voids all the ACID protection then something is\n> seriously wrong.\n\nNo, your understanding of what postgresql can expect to have happen to\nit are wrong.\n\nYou'll lose data integrity if:\nIf a CPU starts creating bad output that gets written to disk,\nyour RAID controller starts writing garbage to disk,\nyour memory has bad bits and you don't have ECC,\nSome program hijacks a postgres process and starts writing random bits\nin the code,\nsome program comes along and kills the postmaster, which coordinates\nall the backends, and corrupts shared data in the process.\n\n> it may loose transactions that are in flight, but it should not corrupt the\n> database.\n\nThat's true for anything that just stops the machine or all the\npostgresql processes dead.\n\nIt's not true for a machine that is misbehaving. And any server that\nrandomly kills processes is misbehaving.\n", "msg_date": "Thu, 28 Aug 2008 19:52:00 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "On Thu, Aug 28, 2008 at 8:11 PM, Scott Marlowe <[email protected]>wrote:\n\n> > wait a min here, postgres is supposed to be able to survive a complete\n> box\n> > failure without corrupting the database, if killing a process can corrupt\n> > the database it sounds like a major problem.\n>\n> Yes it is a major problem, but not with postgresql. 
It's a major\n> problem with the linux OOM killer killing processes that should not be\n> killed.\n>\n> Would it be postgresql's fault if it corrupted data because my machine\n> had bad memory? Or a bad hard drive? This is the same kind of\n> failure. The postmaster should never be killed. It's the one thing\n> holding it all together.\n>\n\nI fail to see the difference between the OOM killing it and the power going\nout. And yes, if the power went out and PG came up with a corrupted DB\n(assuming I didn't turn off fsync, etc) I *would* blame PG. I understand\nthat killing the postmaster could stop all useful PG work, that it could\ncause it to stop responding to clients, that it could even \"crash\" PG, et\ncetera, but if a particular process dying causes corrupted DBs, that sounds\nborked to me.", "msg_date": "Thu, 28 Aug 2008 20:53:32 -0500", "msg_from": "\"Matthew Dennis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "On Thu, Aug 28, 2008 at 7:53 PM, Matthew Dennis <[email protected]> wrote:\n> On Thu, Aug 28, 2008 at 8:11 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> > wait a min here, postgres is supposed to be able to survive a complete\n>> > box\n>> > failure without corrupting the database, if killing a process can\n>> > corrupt\n>> > the database it sounds like a major problem.\n>>\n>> Yes it is a major problem, but not with postgresql. It's a major\n>> problem with the linux OOM killer killing processes that should not be\n>> killed.\n>>\n>> Would it be postgresql's fault if it corrupted data because my machine\n>> had bad memory? Or a bad hard drive? This is the same kind of\n>> failure. The postmaster should never be killed. It's the one thing\n>> holding it all together.\n>\n> I fail to see the difference between the OOM killing it and the power going\n> out.\n\nThen you fail to understand.\n\nscenario 1: There's a postmaster, it owns all the child processes.\nIt gets killed. The Postmaster gets restarted. Since there isn't one\nrunning, it comes up. starts new child processes. Meanwhile, the old\nchild processes that don't belong to it are busy writing to the data\nstore. Instant corruption.\n\nscenario 2: Someone pulls the plug. Every postgres child dies a quick\ndeath. 
Data on the drives is coherent and recoverable.\n>> And yes, if the power went out and PG came up with a corrupted DB\n> (assuming I didn't turn off fsync, etc) I *would* blame PG.\n\nThen you might be wrong. If you were using the LVM, or certain levels\nof SW RAID, or a RAID controller with cache with no battery backing\nthat is set to write-back, or if you were using an IDE or SATA drive /\ncontroller that didn't support write barriers, or using NFS mounts for\ndatabase storage, and so on. My point being that PostgreSQL HAS to\nmake certain assumptions about its environment that it simply cannot\ndirectly control or test for. Not having the postmaster shot in the\nhead while the children keep running is one of those things.\n\n> I understand\n> that killing the postmaster could stop all useful PG work, that it could\n> cause it to stop responding to clients, that it could even \"crash\" PG, et\n> ceterabut if a particular process dying causes corrupted DBs, that sounds\n> borked to me.\n\nWell, design a better method and implement it. If everything went\nthrough the postmaster you'd be lucky to get 100 transactions per\nsecond. There are compromises between performance and reliability\nunder fire that have to be made. It is not unreasonable to assume\nthat your OS is not going to randomly kill off processes because of a\ndodgy VM implementation quirk.\n\nP.s. I'm a big fan of linux, and I run my dbs on it. But I turn off\novercommit and make a few other adjustments to make sure my database\nis safe. The OOM killer as a default is fine for workstations, but\nit's an insane setting for servers, much like swappiness=60 is an\ninsane setting for a server too.\n", "msg_date": "Thu, 28 Aug 2008 20:01:53 -0600", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "Scott Marlowe escribi�:\n\n> scenario 1: There's a postmaster, it owns all the child processes.\n> It gets killed. The Postmaster gets restarted. Since there isn't one\n> running, it comes up.\n\nActually there's an additional step required at this point. There isn't\na postmaster running, but a new one refuses to start, because the shmem\nsegment is in use. In order for the second postmaster to start, the\nsysadmin must remove the PID file by hand.\n\n> starts new child processes. Meanwhile, the old child processes that\n> don't belong to it are busy writing to the data store. Instant\n> corruption.\n\nIn this scenario, it is both a kernel fault and sysadmin stupidity. 
The\ncorruption that ensues is 100% deserved.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 28 Aug 2008 22:32:18 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Scott Marlowe wrote:\n\n> On Thu, Aug 28, 2008 at 7:53 PM, Matthew Dennis <[email protected]> wrote:\n>> On Thu, Aug 28, 2008 at 8:11 PM, Scott Marlowe <[email protected]>\n>> wrote:\n>>>\n>>>> wait a min here, postgres is supposed to be able to survive a complete\n>>>> box\n>>>> failure without corrupting the database, if killing a process can\n>>>> corrupt\n>>>> the database it sounds like a major problem.\n>>>\n>>> Yes it is a major problem, but not with postgresql. It's a major\n>>> problem with the linux OOM killer killing processes that should not be\n>>> killed.\n>>>\n>>> Would it be postgresql's fault if it corrupted data because my machine\n>>> had bad memory? Or a bad hard drive? This is the same kind of\n>>> failure. The postmaster should never be killed. It's the one thing\n>>> holding it all together.\n>>\n>> I fail to see the difference between the OOM killing it and the power going\n>> out.\n>\n> Then you fail to understand.\n>\n> scenario 1: There's a postmaster, it owns all the child processes.\n> It gets killed. The Postmaster gets restarted. Since there isn't one\n\nwhen the postmaster gets killed doesn't that kill all it's children as \nwell?\n\n> running, it comes up. starts new child processes. Meanwhile, the old\n> child processes that don't belong to it are busy writing to the data\n> store. Instant corruption.\n\nif so then the postmaster should not only check if there is an existing \npostmaster running, it should check for the presense of the child \nprocesses as well.\n\n> scenario 2: Someone pulls the plug. Every postgres child dies a quick\n> death. Data on the drives is coherent and recoverable.\n>>> And yes, if the power went out and PG came up with a corrupted DB\n>> (assuming I didn't turn off fsync, etc) I *would* blame PG.\n>\n> Then you might be wrong. If you were using the LVM, or certain levels\n> of SW RAID, or a RAID controller with cache with no battery backing\n> that is set to write-back, or if you were using an IDE or SATA drive /\n> controller that didn't support write barriers, or using NFS mounts for\n> database storage, and so on.\n\nthese all fall under \"(assuming I didn't turn off fsync, etc)\"\n\n> My point being that PostgreSQL HAS to\n> make certain assumptions about its environment that it simply cannot\n> directly control or test for. Not having the postmaster shot in the\n> head while the children keep running is one of those things.\n>\n>> I understand\n>> that killing the postmaster could stop all useful PG work, that it could\n>> cause it to stop responding to clients, that it could even \"crash\" PG, et\n>> ceterabut if a particular process dying causes corrupted DBs, that sounds\n>> borked to me.\n>\n> Well, design a better method and implement it. If everything went\n> through the postmaster you'd be lucky to get 100 transactions per\n> second.\n\nwell, if you aren't going through the postmaster, what process is \nrecieving network messages? 
it can't be a group of processes, only one can \nbe listening to a socket at one time.\n\nand if the postmaster isn't needed for the child processes to write to the \ndatastore, how are multiple child processes prevented from writing to the \ndatastore normally? and why doesn't that mechanism continue to work?\n\n> There are compromises between performance and reliability\n> under fire that have to be made. It is not unreasonable to assume\n> that your OS is not going to randomly kill off processes because of a\n> dodgy VM implementation quirk.\n>\n> P.s. I'm a big fan of linux, and I run my dbs on it. But I turn off\n> overcommit and make a few other adjustments to make sure my database\n> is safe. The OOM killer as a default is fine for workstations, but\n> it's an insane setting for servers, much like swappiness=60 is an\n> insane setting for a server too.\n\nso are you saying that the only possible thing that can kill the \npostmaster is the OOM killer? it can't possilby exit in any other \nsituation without the children being shutdown first?\n\nI would be surprised if that was really true.\n\nDavid Lang\n", "msg_date": "Thu, 28 Aug 2008 19:42:02 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "[email protected] escribi�:\n> On Thu, 28 Aug 2008, Scott Marlowe wrote:\n\n>> scenario 1: There's a postmaster, it owns all the child processes.\n>> It gets killed. The Postmaster gets restarted. Since there isn't one\n>\n> when the postmaster gets killed doesn't that kill all it's children as \n> well?\n\nOf course not. The postmaster gets a SIGKILL, which is instant death.\nThere's no way to signal the children. If they were killed too then\nthis wouldn't be much of a problem.\n\n>> running, it comes up. starts new child processes. Meanwhile, the old\n>> child processes that don't belong to it are busy writing to the data\n>> store. Instant corruption.\n>\n> if so then the postmaster should not only check if there is an existing \n> postmaster running, it should check for the presense of the child \n> processes as well.\n\nSee my other followup. There's limited things it can check, but against\nsysadmin stupidity there's no silver bullet.\n\n> well, if you aren't going through the postmaster, what process is \n> recieving network messages? it can't be a group of processes, only one \n> can be listening to a socket at one time.\n\nHuh? Each backend has its own socket.\n\n> and if the postmaster isn't needed for the child processes to write to \n> the datastore, how are multiple child processes prevented from writing to \n> the datastore normally? and why doesn't that mechanism continue to work?\n\nThey use locks. Those locks are implemented using shared memory. If a\nnew postmaster starts, it gets a new shared memory, and a new set of\nlocks, that do not conflict with the ones already held by the first gang\nof backends. This is what causes the corruption.\n\n\n> so are you saying that the only possible thing that can kill the \n> postmaster is the OOM killer? 
it can't possilby exit in any other \n> situation without the children being shutdown first?\n>\n> I would be surprised if that was really true.\n\nIf the sysadmin sends a SIGKILL then obviously the same thing happens.\n\nAny other signal gives it the chance to signal the children before\ndying.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 28 Aug 2008 22:49:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Alvaro Herrera wrote:\n\n> [email protected] escribi?:\n>> On Thu, 28 Aug 2008, Scott Marlowe wrote:\n>\n>>> scenario 1: There's a postmaster, it owns all the child processes.\n>>> It gets killed. The Postmaster gets restarted. Since there isn't one\n>>\n>> when the postmaster gets killed doesn't that kill all it's children as\n>> well?\n>\n> Of course not. The postmaster gets a SIGKILL, which is instant death.\n> There's no way to signal the children. If they were killed too then\n> this wouldn't be much of a problem.\n\nI'm not saying that it would signal it's children, I thought that the OS \nkilled children (unless steps were taken to allow them to re-parent)\n\n>> well, if you aren't going through the postmaster, what process is\n>> recieving network messages? it can't be a group of processes, only one\n>> can be listening to a socket at one time.\n>\n> Huh? Each backend has its own socket.\n\nwe must be talking about different things. I'm talking about the socket \nthat would be used for clients to talk to postgres, this is either a TCP \nsocket or a unix socket. in either case only one process can listen on it.\n\n>> and if the postmaster isn't needed for the child processes to write to\n>> the datastore, how are multiple child processes prevented from writing to\n>> the datastore normally? and why doesn't that mechanism continue to work?\n>\n> They use locks. Those locks are implemented using shared memory. If a\n> new postmaster starts, it gets a new shared memory, and a new set of\n> locks, that do not conflict with the ones already held by the first gang\n> of backends. This is what causes the corruption.\n\nso the new postmaster needs to detect that there is a shared memory \nsegment out that used by backends for this database.\n\nthis doesn't sound that hard, basicly something similar to a pid file in \nthe db directory that records what backends are running and what shared \nmemory segment they are using.\n\nthis would be similar to the existing pid file that would have to be \nremoved manually before a new postmaster can start (if it's not a graceful \nshutdown)\n\nbesides, some watchdog would need to start the new postmaster, that \nwatchdog can be taught to kill off the child processes before starting a \nnew postmaster along with clearing the pid file.\n\n>> so are you saying that the only possible thing that can kill the\n>> postmaster is the OOM killer? it can't possilby exit in any other\n>> situation without the children being shutdown first?\n>>\n>> I would be surprised if that was really true.\n>\n> If the sysadmin sends a SIGKILL then obviously the same thing happens.\n>\n> Any other signal gives it the chance to signal the children before\n> dying.\n\nare you sure that it's not going to die from a memory allocation error? 
or \nany other similar type of error without _always_ killing the children?\n\nDavid Lang\n", "msg_date": "Thu, 28 Aug 2008 20:02:48 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Tue, 26 Aug 2008, Scott Marlowe wrote:\n\n> If it is a checkpoint issue then you need more aggressive bgwriter\n> settings, and possibly more bandwidth on your storage array.\n\nSince this is 8.3.1 the main useful thing to do is increase \ncheckpoint_segments and checkpoint_completion_target to spread the I/O \nover a longer period. Making the background writer more aggressive \ndoesn't really help with checkpoint-related I/O spikes on this version.\n\nWhat is checkpoint_segments set to on this system? If it's still at the \ndefault of 3, you should increase that dramatically.\n\n> What does vmstat 10 say during these spikes? If you're running the \n> sysstate service with data collection then sar can tell you a lot.\n\nHenk seemed a bit confused about this suggestion, and the typo doesn't \nhelp. You can install the sysstat package with:\n\n# apt-get install sysstat\n\nThis allows collecting system load info at regular periods, \nautomatically, and sar is the tool you can use to look at it. On Debian, \nin order to get it to collect that information for you, I believe you just \nneed to do:\n\n# dpkg-reconfigure sysstat\n\nThen answer \"yes\" to \"Do you want to activate sysstat's cron job?\" This \nwill install a crontab file that collects all the data you need for sar to \nwork. You may need to restart the service after that. There's a useful \nwalkthrough for this at \nhttp://www.linuxweblog.com/blogs/wizap/20080126/sysstat-ubuntu\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 29 Aug 2008 00:31:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, Bill Moran wrote:\n\n> In linux, it's possible to tell the OOM killer never to consider\n> certain processes for the axe, using /proc magic. See this page:\n> http://linux-mm.org/OOM_Killer\n>\n> Perhaps this should be in the PostgreSQL docs somewhere?\n\nThe fact that \nhttp://www.postgresql.org/docs/current/static/kernel-resources.html#AEN22218 \ntells you to flat-out turn off overcommit is the right conservative thing \nto be in the documentation as I see it. Sure, it's possible to keep it on \nbut disable the worst side-effect in some kernels (looks like 2.6.11+, so \nno RHEL4 for example). 
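For anyone who does want to go that route, the knob being referred to is the per-process /proc entry described on the linux-mm page quoted above; something along these lines, where the data directory path is only an example and has to match the local install:\n\necho -17 > /proc/$(head -1 /var/lib/pgsql/data/postmaster.pid)/oom_adj\n\nThe value -17 marks the process as exempt from the OOM killer on kernels that support it, and it has to be reapplied whenever the postmaster is restarted since the setting belongs to the running process, not the binary.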
Trying to get into all in the manual is kind of \npushing what's appropriate for the PostgreSQL docs I think.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 29 Aug 2008 00:45:20 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "[email protected] wrote:\n> for example if you have a process that uses 1G of ram (say firefox) \n> and it needs to start a new process (say acroread to handle a pdf \n> file), what it does is it forks the firefox process (each of which \n> have 1G of ram allocated), and then does an exec of the acroread \n> process (releasing the 1G of ram previously held by that copy of the \n> firefox process)\n>\nIndeed, which is why we have vfork. And, OK, vfork is busted if you \nhave a threaded environment, so we have posix_spawn and posix_spawnp.\n\nIt is also worth noting that the copy isn't really a full copy on any \ndecent modern UNIX - it is a reservation against the total swap space \navailable. Most pages will be happilly shared copy-on-write and never \nfully copied to the child before the exec.\n\nI can't see how an OS can lie to processes about memory being allocated \nto them and not be ridiculed as a toy, but there you go. I don't think \nLinux is the only perpetrator - doesn't AIX do this too?\n\nThe 'bests trategy' for the OOM killer is not to have one, and accept \nthat you need some swap space available (it doesn't have to be fast \nsince it won't actually be touched) to help out when fork/exec happens \nin big process images.\n\nJames\n\n", "msg_date": "Fri, 29 Aug 2008 06:17:22 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "[email protected] wrote:\n> On Thu, 28 Aug 2008, Scott Marlowe wrote:\n>>> wait a min here, postgres is supposed to be able to survive a\n>>> complete box\n>>> failure without corrupting the database, if killing a process can\n>>> corrupt\n>>> the database it sounds like a major problem.\n>>\n>> Yes it is a major problem, but not with postgresql. It's a major\n>> problem with the linux OOM killer killing processes that should not be\n>> killed.\n>>\n>> Would it be postgresql's fault if it corrupted data because my machine\n>> had bad memory? Or a bad hard drive? This is the same kind of\n>> failure. The postmaster should never be killed. It's the one thing\n>> holding it all together.\n> \n> the ACID guarantees that postgres is making are supposed to mean that\n> even if the machine dies, the CPU goes up in smoke, etc, the\n> transactions that are completed will not be corrupted.\n> \n> if killing the process voids all the ACID protection then something is\n> seriously wrong.\n> \n> it may loose transactions that are in flight, but it should not corrupt\n> the database.\n\nAFAIK, it's not the killing of the postmaster that's the problem. The\nbackends will continue running and *not* corrupt anything, because the\nshared memory and locking sicks around between them.\n\nThe issue is if you manage to start a *new* postmaster against the same\ndata directory. 
But there's a whole bunch of safeguards against that, so\nit certainly shouldn't be something you manage to do by mistake.\n\nI may end up being corrected by someone who knows more, but that's how\nI've understood it works. Meaning it is safe against OOM killer, except\nit requires manual work to come back up. But it shouldn't corrupt your data.\n\n//Magnus\n", "msg_date": "Fri, 29 Aug 2008 10:00:42 +0200", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Thu, 28 Aug 2008, [email protected] wrote:\n>> Huh? Each backend has its own socket.\n>\n> we must be talking about different things. I'm talking about the socket that \n> would be used for clients to talk to postgres, this is either a TCP socket or \n> a unix socket. in either case only one process can listen on it.\n\nThe postmaster opens a socket for listening. Only one process can do that. \nWhen an incoming connection is received, postmaster passes that connection \non to a child backend process. The child then has a socket, but it is a \nconnected socket, not a listening socket.\n\nMatthew\n\n-- \nAnyone who goes to a psychiatrist ought to have his head examined.\n", "msg_date": "Fri, 29 Aug 2008 12:02:53 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "In response to Greg Smith <[email protected]>:\n\n> On Thu, 28 Aug 2008, Bill Moran wrote:\n> \n> > In linux, it's possible to tell the OOM killer never to consider\n> > certain processes for the axe, using /proc magic. See this page:\n> > http://linux-mm.org/OOM_Killer\n> >\n> > Perhaps this should be in the PostgreSQL docs somewhere?\n> \n> The fact that \n> http://www.postgresql.org/docs/current/static/kernel-resources.html#AEN22218 \n> tells you to flat-out turn off overcommit is the right conservative thing \n> to be in the documentation as I see it. Sure, it's possible to keep it on \n> but disable the worst side-effect in some kernels (looks like 2.6.11+, so \n> no RHEL4 for example). Trying to get into all in the manual is kind of \n> pushing what's appropriate for the PostgreSQL docs I think.\n\nI don't know, Greg. First off, the solution of making the postmaster\nimmune to the OOM killer seems better than disabling overcommit to me\nanyway; and secondly, I don't understand why we should avoid making\nthe PG documentation as comprehensive as possible, which seems to be\nwhat you are saying: \"we shouldn't make the PG documentation too\ncomprehensive, because then it will get very big\"\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 29 Aug 2008 07:08:52 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes\n\t\"An I/O error occured while sending to the backend.\" exception" }, { "msg_contents": "Bill Moran wrote:\n\n> In response to Greg Smith <[email protected]>:\n> \n>> <snipped...>\n> \n> I don't know, Greg. 
First off, the solution of making the postmaster\n> immune to the OOM killer seems better than disabling overcommit to me\n> anyway; and secondly, I don't understand why we should avoid making\n> the PG documentation as comprehensive as possible, which seems to be\n> what you are saying: \"we shouldn't make the PG documentation too\n> comprehensive, because then it will get very big\"\n\nI think it would be a hopeless morass for PostgreSQL to try to document each evolution of each OS it runs under; the general caveat seems fine, although perhaps adding something to the effect of \"search the archives for possible specifics\" might be in order. But tracking postgres's own shifts and requirements seems daunting enough w/out adding in endless flavours of different OSs.\n\nMy $0.02 worth ...\n\nGreg Williamson\nSenior DBA\nDigitalGlobe\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Fri, 29 Aug 2008 05:18:33 -0600", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "[email protected] escribió:\n> On Thu, 28 Aug 2008, Alvaro Herrera wrote:\n>\n>> [email protected] escribi?:\n>>> On Thu, 28 Aug 2008, Scott Marlowe wrote:\n>>\n>>>> scenario 1: There's a postmaster, it owns all the child processes.\n>>>> It gets killed. The Postmaster gets restarted. 
Since there isn't one\n>>>\n>>> when the postmaster gets killed doesn't that kill all it's children as\n>>> well?\n>>\n>> Of course not. The postmaster gets a SIGKILL, which is instant death.\n>> There's no way to signal the children. If they were killed too then\n>> this wouldn't be much of a problem.\n>\n> I'm not saying that it would signal it's children, I thought that the OS \n> killed children (unless steps were taken to allow them to re-parent)\n\nOh, you were mistaken then.\n\n>>> well, if you aren't going through the postmaster, what process is\n>>> recieving network messages? it can't be a group of processes, only one\n>>> can be listening to a socket at one time.\n>>\n>> Huh? Each backend has its own socket.\n>\n> we must be talking about different things. I'm talking about the socket \n> that would be used for clients to talk to postgres, this is either a TCP \n> socket or a unix socket. in either case only one process can listen on \n> it.\n\nObviously only one process (the postmaster) can call listen() on a given\nTCP address/port. Once connected, the socket is passed to the\nbackend, and the postmaster is no longer involved in the communication\nbetween backend and client. Each backend has its own socket. If the\npostmaster dies, the established communication is still alive.\n\n\n>>> and if the postmaster isn't needed for the child processes to write to\n>>> the datastore, how are multiple child processes prevented from writing to\n>>> the datastore normally? and why doesn't that mechanism continue to work?\n>>\n>> They use locks. Those locks are implemented using shared memory. If a\n>> new postmaster starts, it gets a new shared memory, and a new set of\n>> locks, that do not conflict with the ones already held by the first gang\n>> of backends. This is what causes the corruption.\n>\n> so the new postmaster needs to detect that there is a shared memory \n> segment out that used by backends for this database.\n\n> this doesn't sound that hard,\n\nYou're welcome to suggest actual improvements to our interlocking\nsystem, after you've read the current code and understood its rationale.\n\n\n>> Any other signal gives it the chance to signal the children before\n>> dying.\n>\n> are you sure that it's not going to die from a memory allocation error? \n> or any other similar type of error without _always_ killing the children?\n\nI am sure. There are no memory allocations in that code. It is\ncarefully written with that one purpose.\n\nThere may be bugs, but that's another matter. This code was written\neons ago and has proven very healthy.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 29 Aug 2008 09:48:06 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "James Mansion wrote:\n> I can't see how an OS can lie to processes about memory being allocated \n> to them and not be ridiculed as a toy, but there you go. I don't think \n> Linux is the only perpetrator - doesn't AIX do this too?\n\nThis is a leftover from the days of massive physical modeling (chemistry, physics, astronomy, ...) programs written in FORTRAN. Since FORTRAN didn't have pointers, scientists would allocate massive three-dimensional arrays, and their code might only access a tiny fraction of the memory. 
The operating-system vendors, particularly SGI, added features to the various flavors of UNIX, including the ability to overcommit memory, to support these FORTRAN programs, which at the time were some of the most important applications driving computer science and computer architectures of workstation-class computers.\n\nWhen these workstation-class computers evolved enough to rival mainframes, companies started shifting apps like Oracle onto the cheaper workstation-class computers. Unfortunately, the legacy of the days of these FORTRAN programs is still with us, and every few years we have to go through this discussion again.\n\nDisable overcommitted memory. There is NO REASON to use it on any modern server-class computer, and MANY REASONS WHY IT IS A BAD IDEA.\n\nCraig\n", "msg_date": "Fri, 29 Aug 2008 08:25:51 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Fri, 29 Aug 2008, Craig James wrote:\n> Disable overcommitted memory. There is NO REASON to use it on any modern \n> server-class computer, and MANY REASONS WHY IT IS A BAD IDEA.\n\nAs far as I can see, the main reason nowadays for overcommit is when a \nlarge process forks and then execs. Are there any other modern programs \nthat allocate lots of RAM and never use it?\n\nMatthew\n\n-- \nNog: Look! They've made me into an ensign!\nO'Brien: I didn't know things were going so badly.\nNog: Frightening, isn't it?\n", "msg_date": "Fri, 29 Aug 2008 16:56:30 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "On Fri, 29 Aug 2008, Bill Moran wrote:\n\n> First off, the solution of making the postmaster immune to the OOM \n> killer seems better than disabling overcommit to me anyway\n\nI really side with Craig James here that the right thing to do here is to \nturn off overcommit altogether. While it's possible that servers being \nused for things other than PostgreSQL might need it, I feel that's a rare \ncase that's really hostile to the database environment and it shouldn't be \nencouraged.\n\n> I don't understand why we should avoid making the PG documentation as \n> comprehensive as possible\n\nThe main problem here is that this whole area is a moving target. To \nreally document this properly, you end up going through this messy \nexercise where you have to create a table showing what kernel versions \nsupport the option you're trying to document. And even that varies based \non what distribution you're dealing with. Just because you know when \nsomething showed up in the mainline kernel, without some research you \ncan't really know for sure when RedHat or SuSE slipped that into their \nkernel--sometimes they lead mainline, sometimes they lag.\n\nThe PG documentation shies away from mentioning thing that are so \nvolatile, because the expectation is that people will still be using that \nas a reference many years from now. For all we know, 2.6.27 will come out \nand make any documentation about this obscure oomadj feature obsolete. I \nthink the only thing that might make sense here is to write something on \nthe Wiki that goes over in what situations you might run with overcommit \nenabled, how that can interact with database operation, and then tries to \naddress the version issue. 
Then that can be updated as new versions are \nreleased so it stays current. If that's available with a stable URL, it \nmight make sense to point to it in the documentation near where turning \novercommit off is mentioned.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 29 Aug 2008 12:15:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "In 2003 I met this guy who was doing Computation Fluid Dynamics and he had to use this software written by physics engineers in FORTRAN. 1 Gig of ram wasn't yet the standard for a desktop pc at that time but the software required at least 1 Gig just to get started. So I thought what is the problem after all you are supposed to be able to allocate upto 2G on a 32 bit system even if you don't quite have the memory and you have sufficiently big swat space. Still, the software didn't load on Windows. So, it seems that Windows does not overcommit.\n\nregards\n\n\n--- On Fri, 29/8/08, Matthew Wakeling <[email protected]> wrote:\n\n> From: Matthew Wakeling <[email protected]>\n> Subject: Re: [PERFORM] select on 22 GB table causes \"An I/O error occured while sending to the backend.\" exception\n> To: [email protected]\n> Date: Friday, 29 August, 2008, 4:56 PM\n> On Fri, 29 Aug 2008, Craig James wrote:\n> > Disable overcommitted memory. There is NO REASON to\n> use it on any modern \n> > server-class computer, and MANY REASONS WHY IT IS A\n> BAD IDEA.\n> \n> As far as I can see, the main reason nowadays for\n> overcommit is when a \n> large process forks and then execs. Are there any other\n> modern programs \n> that allocate lots of RAM and never use it?\n> \n> Matthew\n> \n> -- \n> Nog: Look! They've made me into an ensign!\n> O'Brien: I didn't know things were going so badly.\n> Nog: Frightening, isn't it?\n> \n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nSend instant messages to your online friends http://uk.messenger.yahoo.com \n", "msg_date": "Fri, 29 Aug 2008 16:22:18 +0000 (GMT)", "msg_from": "Valentin Bogdanov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "Gregory Williamson wrote:\n>Bill Moran wrote:\n\n>> In response to Greg Smith <[email protected]>:\n>>\n>> <snipped...>\n>>\n>> I don't know, Greg. First off, the solution of making the postmaster\n>> immune to the OOM killer seems better than disabling overcommit to me\n>> anyway; and secondly, I don't understand why we should avoid making\n>> the PG documentation as comprehensive as possible, which seems to be\n>> what you are saying: \"we shouldn't make the PG documentation too\n>> comprehensive, because then it will get very big\"\n\n> I think it would be a hopeless morass for PostgreSQL to try to document \n\n> each evolution of each OS it runs under; the general caveat seems \n\n>fine, although perhaps adding something to the effect of \"search the\n\n> archives for possible specifics\" might be in order. 
But tracking\npostgres's \n\n> own shifts and\nrequirements seems daunting enough w/out adding in\n\n> endless flavours of\ndifferent OSs.\n\n> My $0.02 worth ...\n\n\n\nIn some aspects I agree, however in this specific case I think the docs\nshould include the details about options to protect the postmaster from the\nOOM killer.\n\n \n\nSo far I've seen three basic solutions to this problem:\n(1) Disabling overcommit\n\n(2) Be generous with swap space\n\n(3) Protect postmaster from the OOM killer\n\n \n\nAs we've seen so far, there is not one solution that makes everybody happy.\nEach option has its merits and downsides. Personally, I think in this case\nthe docs should present all 3 options, perhaps in a Linux specific note or\nsection, so each DBA can decide for themselves the appropriate method.\n\n \n\nGoing one step further, I'm thinking making the third option the default on\nLinux systems might not be a bad thing either. And, if that is done, the\ndocs definitely need to contain information about it.\n\n \n\nAnother couple of cents in the pot.\n\nGreg", "msg_date": "Fri, 29 Aug 2008 09:37:56 -0700", "msg_from": "\"Gregory S. 
Youngblood\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "* Craig James:\n\n> So it never makes sense to enable overcommitted memory when\n> Postgres, or any server, is running.\n\nThere are some run-time environments which allocate huge chunks of\nmemory on startup, without marking them as not yet in use. SBCL is in\nthis category, and also the Hotspot VM (at least some extent).\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Mon, 15 Sep 2008 10:36:40 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "Florian Weimer wrote:\n> * Craig James:\n>> So it never makes sense to enable overcommitted memory when\n>> Postgres, or any server, is running.\n> \n> There are some run-time environments which allocate huge chunks of\n> memory on startup, without marking them as not yet in use. SBCL is in\n> this category, and also the Hotspot VM (at least some extent).\n\nI stand by my assertion: It never makes sense. Do these applications allocate a terrabyte of memory? I doubt it. Buy a terrabyte swap disk and disable overcommitted memory.\n\nCraig\n", "msg_date": "Mon, 15 Sep 2008 07:11:25 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" }, { "msg_contents": "* Craig James:\n\n>> There are some run-time environments which allocate huge chunks of\n>> memory on startup, without marking them as not yet in use. SBCL is in\n>> this category, and also the Hotspot VM (at least some extent).\n>\n> I stand by my assertion: It never makes sense. Do these\n> applications allocate a terrabyte of memory? I doubt it.\n\nSBCL sizes its allocated memory region based on the total amount of\nRAM and swap space. In this case, buying larger disks does not\nhelp. 8-P\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Mon, 15 Sep 2008 16:15:38 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured while sending\n\tto the backend.\" exception" }, { "msg_contents": "Florian Weimer wrote:\n> * Craig James:\n> \n>>> There are some run-time environments which allocate huge chunks of\n>>> memory on startup, without marking them as not yet in use. SBCL is in\n>>> this category, and also the Hotspot VM (at least some extent).\n>> I stand by my assertion: It never makes sense. Do these\n>> applications allocate a terrabyte of memory? I doubt it.\n> \n> SBCL sizes its allocated memory region based on the total amount of\n> RAM and swap space. In this case, buying larger disks does not\n> help. 8-P\n\nSBCL, as Steel Bank Common Lisp? Why would you run that on a server machine alongside Postgres? 
If I had to use SBCL and Postgres, I'd put SBCL on a separate machine all its own, so that it couldn't corrupt Postgres or other servers that had to be reliable.\n\nAre you saying that if I bought a terabyte of swap disk, SBCL would allocate a terabyte of space?\n\nCraig\n\n", "msg_date": "Mon, 15 Sep 2008 07:25:58 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select on 22 GB table causes \"An I/O error occured\n\twhile sending to the backend.\" exception" } ]
[ { "msg_contents": "Eh, there was a spurious join in that query which was created by an\nORM which messed things up apparently. Sorry for the noise. This\nabstracted version of the original query that does the same is fast:\n\nwoome=> EXPLAIN ANALYZE\nSELECT *\nFROM webapp_invite i\nINNER JOIN webapp_person p ON (i.id = p.id)\nWHERE p.is_suspended = false\nAND p.is_banned = false\nAND i.woouser = 'suggus'\nORDER BY i.id DESC LIMIT 5;\n\nQUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=4549.51..4549.52 rows=5 width=238) (actual\ntime=0.071..0.071 rows=0 loops=1)\n -> Sort (cost=4549.51..4549.58 rows=31 width=238) (actual\ntime=0.070..0.070 rows=0 loops=1)\n Sort Key: i.id\n Sort Method: quicksort Memory: 25kB\n -> Nested Loop (cost=12.20..4548.99 rows=31 width=238)\n(actual time=0.036..0.036 rows=0 loops=1)\n -> Bitmap Heap Scan on webapp_invite i\n(cost=12.20..1444.45 rows=382 width=44) (actual time=0.034..0.034\nrows=0 loops=1)\n Recheck Cond: ((woouser)::text = 'suggus'::text)\n -> Bitmap Index Scan on\nwebapp_invite_woouser_idx (cost=0.00..12.10 rows=382 width=0) (actual\ntime=0.032..0.032 rows=0 loops=1)\n Index Cond: ((woouser)::text = 'suggus'::text)\n -> Index Scan using webapp_person_pkey on\nwebapp_person p (cost=0.00..8.11 rows=1 width=194) (never executed)\n Index Cond: (p.id = i.id)\n Filter: ((NOT p.is_suspended) AND (NOT p.is_banned))\n Total runtime: 0.183 ms\n(13 rows)\n\nTime: 1.114 ms\n", "msg_date": "Tue, 26 Aug 2008 17:59:49 +0100", "msg_from": "\"Frank Joerdens\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query w empty result set with LIMIT orders of magnitude slower\n\tthan without (SOLVED, pls disregard)" } ]
[ { "msg_contents": "\nHi, \n\nWould you like to be so kind as to answer the following questions:\n\n- Is there any way to control the number of clog files and xlog files? \nI encounter an issue that there are too many clog files under the \npg_clog/ directory which occupy more space than I can endure..\n\n- What determines the number of clog files? what determines the \nnumber of xlog files?\n\n- I understand pg_xlog is used to record WAL. but what is pg_clog\nis used to? Is it used to record some meta-information on the xlog?\n\n- What effect does Deleting the clog and xlogfiles bring about?\nWill it cause Postgresql abnormal stopping?\n\n#I am not sure this is the right list to ask such a question.But\nsince I post it in the ADMIN list, and no one give me an answer,\nI wanna try this list.\n\n\nBest regards\nGera\n\n\n", "msg_date": "Wed, 27 Aug 2008 08:58:56 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "control the number of clog files and xlog files" }, { "msg_contents": "Duan Ligong wrote:\n\n> Would you like to be so kind as to answer the following questions:\n> \n> - Is there any way to control the number of clog files and xlog files? \n> I encounter an issue that there are too many clog files under the \n> pg_clog/ directory which occupy more space than I can endure..\n\npg_clog files are controlled by tuple freezing, which is done by vacuum,\nand it depends on the autovacuum_min_freeze_age parameter and\nvacuum_freeze_min_age. Please read\n\nhttp://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html\nand\nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE\n\n\n> - What determines the number of clog files? what determines the \n> number of xlog files?\n\nThe number of xlog files will depend on checkpoints. You need to\nrestrict checkpoint_segments to control this. Note that this can have a\nserious performance impact.\n\n> - I understand pg_xlog is used to record WAL. but what is pg_clog\n> is used to? Is it used to record some meta-information on the xlog?\n\nclog is the \"commit log\", i.e. it records transactions that have been\ncommitted and those that have been aborted. You cannot delete files\nunless you want to corrupt your database.\n\n> - What effect does Deleting the clog and xlogfiles bring about?\n> Will it cause Postgresql abnormal stopping?\n\nYour data will be corrupt. It may continue to work for a while, and\nsuddenly stop working at a future time.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Tue, 26 Aug 2008 21:56:44 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: control the number of clog files and xlog files" }, { "msg_contents": "Alvaro, Thanks for your answer. \nIt would be very helpful.\n\n> > Would you like to be so kind as to answer the following questions:\n> > \n> > - Is there any way to control the number of clog files and xlog files? \n> > I encounter an issue that there are too many clog files under the \n> > pg_clog/ directory which occupy more space than I can endure..\n> \n> pg_clog files are controlled by tuple freezing, which is done by vacuum,\n> and it depends on the autovacuum_min_freeze_age parameter and\n> vacuum_freeze_min_age. Please read\n\nSo can we reduce the number of clog by increasing the \nautovacuum_min_freeze_age parameter and vacuum_freeze_min_age\n? 
\n\n> http://www.postgresql.org/docs/8.3/interactive/routine-vacuuming.html\n> and\n> http://www.postgresql.org/docs/8.3/interactive/runtime-config-client.html#GUC-VACUUM-FREEZE-MIN-AGE\n\n> > - What determines the number of clog files? what determines the \n> > number of xlog files?\n> \n> The number of xlog files will depend on checkpoints. You need to\n> restrict checkpoint_segments to control this. Note that this can have a\n> serious performance impact.\n> > - I understand pg_xlog is used to record WAL. but what is pg_clog\n> > is used to? Is it used to record some meta-information on the xlog?\n> \n> clog is the \"commit log\", i.e. it records transactions that have been\n> committed and those that have been aborted. You cannot delete files\n> unless you want to corrupt your database.\n\nCould you explain how the clog files work roughly?\n(What is inside of the clog files? when and how the new clog files \nare created? when and in what case the old files are deleted \nor rotated? how does postgresql regard a file is old enough to be \ndeleted? Does Vacuum will definitely cause deleting of old files\nand creating of new clog files?)\n \n> > - What effect does Deleting the clog and xlogfiles bring about?\n> > Will it cause Postgresql abnormal stopping?\n> Your data will be corrupt. It may continue to work for a while, and\n> suddenly stop working at a future time.\n\nI encoutered a scenario that there are many files and some of them \nare as old as one month ago. Does all these files including the \nold files are still useful for postgresql? and when will they deleted \nor rotated? Or should they be deleted and maintained by external\nprograms?\n\nBest regards\nDuan\n\n> -- \n> Alvaro Herrera http://www.CommandPrompt.com/\n> The PostgreSQL Company - Command Prompt, Inc.\n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n---------------------------------------------------\nDuan Ligong\n : 8-0086-22-354\n\n\n", "msg_date": "Wed, 27 Aug 2008 19:27:53 +0800", "msg_from": "Duan Ligong <[email protected]>", "msg_from_op": true, "msg_subject": "Re: control the number of clog files and xlog files" }, { "msg_contents": "Duan Ligong wrote:\n> Alvaro, Thanks for your answer. \n> It would be very helpful.\n> \n> > > Would you like to be so kind as to answer the following questions:\n> > > \n> > > - Is there any way to control the number of clog files and xlog files? \n> > > I encounter an issue that there are too many clog files under the \n> > > pg_clog/ directory which occupy more space than I can endure..\n> > \n> > pg_clog files are controlled by tuple freezing, which is done by vacuum,\n> > and it depends on the autovacuum_min_freeze_age parameter and\n> > vacuum_freeze_min_age. Please read\n> \n> So can we reduce the number of clog by increasing the \n> autovacuum_min_freeze_age parameter and vacuum_freeze_min_age\n> ? \n\nYes, but decreasing the value.\n\nSorry, you ask more questions that I have time to answer right now.\n\n> I encoutered a scenario that there are many files and some of them \n> are as old as one month ago. Does all these files including the \n> old files are still useful for postgresql? and when will they deleted \n> or rotated? Or should they be deleted and maintained by external\n> programs?\n\nYes, those files are still useful. 
They will be deleted eventually.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 27 Aug 2008 09:08:00 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: control the number of clog files and xlog files" } ]
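A short illustration of the clog/xlog points in the thread above, for readers who want to check their own cluster. These are standard PostgreSQL statements for the 8.2/8.3 era discussed here; the specific values are examples only, not recommendations:

-- pg_clog can only be truncated up to the oldest frozen transaction ID.
-- This shows how many transactions of commit-log history each database
-- still requires the server to keep (each 256 kB clog segment covers
-- roughly one million transactions).
SELECT datname, age(datfrozenxid) AS clog_xids_still_needed
FROM pg_database
ORDER BY age(datfrozenxid) DESC;

-- Freezing old tuples sooner allows older clog segments to be removed earlier.
SET vacuum_freeze_min_age = 10000000;   -- example value; the 8.3 default is 100 million
VACUUM;                                 -- or VACUUM FREEZE for the strongest effect

-- The number of WAL files kept in pg_xlog is driven mainly by checkpoint_segments
-- (roughly 2 * checkpoint_segments + 1 files; somewhat more on 8.3 with
-- checkpoint_completion_target). Lowering it saves disk space at the cost of
-- more frequent checkpoints.
SHOW checkpoint_segments;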
[ { "msg_contents": "Is there a way to use multi-level inheritance to achieve sub \npartitioning that the query optimizer will recognize? With our current \napplication design, we would need a partition for every other day for \n18 months which will not perform well. The reason we need so many \npartitions is that we can't afford to vacuum the active partition (750MM \ninserts + updates per day is the performance requirement for 12 months \nout). After it's a day old, there are no longer any updates or inserts \nand we can vacuum it at that point. If multi-level partitioning \nworked, we could solve this problem without changing our code. Ideas?\n\n-Jerry\n\n\n", "msg_date": "Wed, 27 Aug 2008 07:45:49 -0600", "msg_from": "Jerry Champlin <[email protected]>", "msg_from_op": true, "msg_subject": "Is there a way to SubPartition?" }, { "msg_contents": "Jerry Champlin <[email protected]> writes:\n> Is there a way to use multi-level inheritance to achieve sub \n> partitioning that the query optimizer will recognize?\n\nNo, I don't think so. How would that make things any better anyway?\nYou're still going to end up with the same very large number of\npartitions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Aug 2008 10:01:50 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to SubPartition? " }, { "msg_contents": "On Wed, 27 Aug 2008, Jerry Champlin wrote:\n> After it's a day old, there are no longer any updates or inserts and we \n> can vacuum it at that point.\n\nA pattern that has worked very well for other people is to have two \nseparate tables (or partitions). One contains today's data, and the other \ncontains historic data that is no longer updated. Once a day, transfer the \ndata between the partitions, and the historic data partition will not need \nvacuuming.\n\nSome changes to your code will be needed however.\n\nMatthew\n\n-- \nVacuums are nothings. We only mention them to let them know we know\nthey're there.\n", "msg_date": "Wed, 27 Aug 2008 15:10:28 +0100 (BST)", "msg_from": "Matthew Wakeling <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to SubPartition?" }, { "msg_contents": "If it were implemented in such a way that when the top level pruning\nhappens, a set of 3 sub partitions is selected from say 18 total and then at\nthe next level is selects the 3 matching sub partitions from each matched\ngroup of 30 then you are only looking at 18+3*30 = 108 instead of 548 checks\nto evaluate <example assumes monthly first level partitioning and daily sub\npartitioning>. If this is not supported, then we will need to solve the\nproblem a different way - probably weekly partitions and refactor the code\nto decrease updates by at least an order of magnitude. While we are in the\nprocess of doing this, is there a way to make updates faster? Postgresql is\nspending a lot of CPU cycles for each HOT update. We have\nsynchronous_commit turned off, commit siblings set to 5, commit_delay set to\n50,000. With synchronous_commit off does it make any sense to be grouping\ncommits? Buffers written by the bgwriter vs checkpoint is 6 to 1. Buffers\nwritten by clients vs buffers by checkpoint is 1 to 6. Is there anything\nobvious here?\n\n-Jerry\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Wednesday, August 27, 2008 8:02 AM\nTo: Jerry Champlin\nCc: [email protected]\nSubject: Re: [PERFORM] Is there a way to SubPartition? 
\n\nJerry Champlin <[email protected]> writes:\n> Is there a way to use multi-level inheritance to achieve sub \n> partitioning that the query optimizer will recognize?\n\nNo, I don't think so. How would that make things any better anyway?\nYou're still going to end up with the same very large number of\npartitions.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 27 Aug 2008 14:21:41 -0600", "msg_from": "\"Jerry Champlin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to SubPartition? " }, { "msg_contents": "Jerry Champlin wrote:\n> If it were implemented in such a way that when the top level pruning\n> happens, a set of 3 sub partitions is selected from say 18 total and then at\n> the next level is selects the 3 matching sub partitions from each matched\n> group of 30 then you are only looking at 18+3*30 = 108 instead of 548 checks\n> to evaluate <example assumes monthly first level partitioning and daily sub\n> partitioning>. If this is not supported, then we will need to solve the\n> problem a different way - probably weekly partitions and refactor the code\n> to decrease updates by at least an order of magnitude. While we are in the\n> process of doing this, is there a way to make updates faster? Postgresql is\n> spending a lot of CPU cycles for each HOT update. We have\n> synchronous_commit turned off, commit siblings set to 5, commit_delay set to\n> 50,000.\n\nPerhaps you do not realize this, but this is an exciting report to read.\nNot many years ago, this kind of system would have been unthinkable.\nWe've now tuned the system so that people is starting to consider it,\nand for a lot of people it is working.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 27 Aug 2008 16:32:34 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to SubPartition?" }, { "msg_contents": "\"Jerry Champlin\" <[email protected]> writes:\n> We have synchronous_commit turned off, commit siblings set to 5,\n> commit_delay set to 50,000. With synchronous_commit off does it make\n> any sense to be grouping commits?\n\nNo. In fact commit_delay is a total no-op in that mode. If it were\ndoing anything I think you'd have found that to be a counterproductively\nlarge setting ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 27 Aug 2008 16:54:03 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is there a way to SubPartition? " } ]
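For readers finding this thread later, the two-table and weekly-partition ideas above can be sketched with the single-level inheritance partitioning that the 8.x planner does understand. The table and column names below are invented for illustration; the pattern itself (non-overlapping CHECK constraints plus constraint_exclusion) is the point:

-- Parent table holds no rows of its own.
CREATE TABLE event_log (
    event_id    bigint      NOT NULL,
    created_at  timestamp   NOT NULL,
    payload     text
);

-- One child per week, each with a non-overlapping range constraint.
CREATE TABLE event_log_2008_w35 (
    CHECK (created_at >= '2008-08-25' AND created_at < '2008-09-01')
) INHERITS (event_log);

CREATE TABLE event_log_2008_w36 (
    CHECK (created_at >= '2008-09-01' AND created_at < '2008-09-08')
) INHERITS (event_log);

CREATE INDEX event_log_2008_w35_created ON event_log_2008_w35 (created_at);
CREATE INDEX event_log_2008_w36_created ON event_log_2008_w36 (created_at);

-- Lets the planner skip children whose CHECK constraint excludes the query range.
SET constraint_exclusion = on;

Inserts go directly into the current week's child (or through a trigger or rule on the parent). Once a week is closed it receives no further updates, so it can be vacuumed once and then left alone, which is the property being asked for above.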
[ { "msg_contents": "I'm looking for some help in speeding up searches. My table is pretty simple\n(see below), but somewhat large, and continuously growing. Currently it has\nabout 50 million rows.\n\nThe table is (I know I have excessive indexes, I'm trying to get the\nappropriate ones and drop the extras):\n Table \"public.ad_log\"\n Column | Type |\nModifiers\n--------------+-----------------------------+-------------------------------\n-----------------------------\n ad_log_id | integer | not null default\nnextval('ad_log_ad_log_id_seq'::regclass)\n channel_name | text | not null\n player_name | text | not null\n ad_name | text | not null\n start_time | timestamp without time zone | not null\n end_time | timestamp without time zone | not null\nIndexes:\n \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n \"ad_log_channel_name_key\" UNIQUE, btree (channel_name, player_name,\nad_name, start_time, end_time)\n \"ad_log_ad_and_start\" btree (ad_name, start_time)\n \"ad_log_ad_name\" btree (ad_name)\n \"ad_log_all\" btree (channel_name, player_name, start_time, ad_name)\n \"ad_log_channel_name\" btree (channel_name)\n \"ad_log_end_time\" btree (end_time)\n \"ad_log_player_and_start\" btree (player_name, start_time)\n \"ad_log_player_name\" btree (player_name)\n \"ad_log_start_time\" btree (start_time)\n\n\n\nThe query I'm trying to speed up is below. In it the <field> tag can be one\nof channel_name, player_name, or ad_name. I'm actually trying to return the\ndistinct values and I found GROUP BY to be slightly faster than using\nDISTINCT. Also, any of those fields may be unspecified in the WHERE clauses\nin which case we use '%', but it seems Postgres optimizes that pretty well.\n\nSELECT <field> FROM ad_log \n\tWHERE channel_name LIKE :channel_name\n\tAND player_name LIKE :player_name \n\tAND ad_name LIKE :ad_name \n\tAND start_time BETWEEN :start_date AND (date(:end_date) + 1)\n\tGROUP BY <field> ORDER BY <field>\n\n\nA typical query is:\n\nexplain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\nAND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n(date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n\nwith the result being:\n \nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Sort (cost=1163169.02..1163169.03 rows=5 width=10) (actual\ntime=75460.187..75460.192 rows=15 loops=1)\n Sort Key: channel_name\n Sort Method: quicksort Memory: 17kB\n -> HashAggregate (cost=1163168.91..1163168.96 rows=5 width=10) (actual\ntime=75460.107..75460.114 rows=15 loops=1)\n -> Bitmap Heap Scan on ad_log (cost=285064.30..1129582.84\nrows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\nloops=1)\n Recheck Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Filter: ((channel_name ~~ '%'::text) AND (ad_name ~~\n'%'::text))\n -> Bitmap Index Scan on ad_log_start_time\n(cost=0.00..281705.70 rows=13434427 width=0) (actual time=8488.443..8488.443\nrows=13701296 loops=1)\n Index Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Total runtime: 75460.361 ms\n\n\nIt seems to me there should be some way to create an index to speed this up,\nbut the various ones I've tried so far haven't helped. 
Any suggestions would\nbe greatly appreciated.\n\n", "msg_date": "Thu, 28 Aug 2008 17:06:23 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "indexing for distinct search in timestamp based table" }, { "msg_contents": "Rainer Mager wrote:\n> I'm looking for some help in speeding up searches. My table is pretty simple\n> (see below), but somewhat large, and continuously growing. Currently it has\n> about 50 million rows.\n> \n\nRegarding your use of LIKE:\n (1)If you are able to specify the beginning character(s) of the \nstatement you are searching for, you will have a better chance of your \nstatement using an index. If you specify a wildcard(%) before the search \nstring, the entire string in the column must be searched therefore no \nindex will be used.\n(2) Reorder your where clause to reduce the size of the set that LIKE \noperates on. In your example below, put the BETWEEN before the LIKE.\n(3) Consider the use of trigrams instead of LIKE. I have not used it but \nI notice that postgres supports trigrams:\n\nThe pg_trgm module provides functions and operators for determining the \nsimilarity of text based on trigram matching, as well as index operator \nclasses that support fast searching for similar strings.\n\nHere is the link: http://www.postgresql.org/docs/current/static/pgtrgm.html\n\n--cheers\nHH\n\n> The table is (I know I have excessive indexes, I'm trying to get the\n> appropriate ones and drop the extras):\n> Table \"public.ad_log\"\n> Column | Type |\n> Modifiers\n> --------------+-----------------------------+-------------------------------\n> -----------------------------\n> ad_log_id | integer | not null default\n> nextval('ad_log_ad_log_id_seq'::regclass)\n> channel_name | text | not null\n> player_name | text | not null\n> ad_name | text | not null\n> start_time | timestamp without time zone | not null\n> end_time | timestamp without time zone | not null\n> Indexes:\n> \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n> \"ad_log_channel_name_key\" UNIQUE, btree (channel_name, player_name,\n> ad_name, start_time, end_time)\n> \"ad_log_ad_and_start\" btree (ad_name, start_time)\n> \"ad_log_ad_name\" btree (ad_name)\n> \"ad_log_all\" btree (channel_name, player_name, start_time, ad_name)\n> \"ad_log_channel_name\" btree (channel_name)\n> \"ad_log_end_time\" btree (end_time)\n> \"ad_log_player_and_start\" btree (player_name, start_time)\n> \"ad_log_player_name\" btree (player_name)\n> \"ad_log_start_time\" btree (start_time)\n>\n>\n>\n> The query I'm trying to speed up is below. In it the <field> tag can be one\n> of channel_name, player_name, or ad_name. I'm actually trying to return the\n> distinct values and I found GROUP BY to be slightly faster than using\n> DISTINCT. 
Also, any of those fields may be unspecified in the WHERE clauses\n> in which case we use '%', but it seems Postgres optimizes that pretty well.\n>\n> SELECT <field> FROM ad_log \n> \tWHERE channel_name LIKE :channel_name\n> \tAND player_name LIKE :player_name \n> \tAND ad_name LIKE :ad_name \n> \tAND start_time BETWEEN :start_date AND (date(:end_date) + 1)\n> \tGROUP BY <field> ORDER BY <field>\n>\n>\n> A typical query is:\n>\n> explain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\n> AND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n> (date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n>\n> with the result being:\n> \n> QUERY PLAN\n> ----------------------------------------------------------------------------\n> ----------------------------------------------------------------------------\n> -------\n> Sort (cost=1163169.02..1163169.03 rows=5 width=10) (actual\n> time=75460.187..75460.192 rows=15 loops=1)\n> Sort Key: channel_name\n> Sort Method: quicksort Memory: 17kB\n> -> HashAggregate (cost=1163168.91..1163168.96 rows=5 width=10) (actual\n> time=75460.107..75460.114 rows=15 loops=1)\n> -> Bitmap Heap Scan on ad_log (cost=285064.30..1129582.84\n> rows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\n> loops=1)\n> Recheck Cond: ((start_time >= '2008-07-01\n> 00:00:00'::timestamp without time zone) AND (start_time <=\n> '2008-07-29'::date))\n> Filter: ((channel_name ~~ '%'::text) AND (ad_name ~~\n> '%'::text))\n> -> Bitmap Index Scan on ad_log_start_time\n> (cost=0.00..281705.70 rows=13434427 width=0) (actual time=8488.443..8488.443\n> rows=13701296 loops=1)\n> Index Cond: ((start_time >= '2008-07-01\n> 00:00:00'::timestamp without time zone) AND (start_time <=\n> '2008-07-29'::date))\n> Total runtime: 75460.361 ms\n>\n>\n> It seems to me there should be some way to create an index to speed this up,\n> but the various ones I've tried so far haven't helped. Any suggestions would\n> be greatly appreciated.\n>\n>\n> \n\n\n-- \nH. Hall\nReedyRiver Group LLC\nhttp://www.reedyriver.com\n\n", "msg_date": "Thu, 28 Aug 2008 05:46:06 -0400", "msg_from": "\"H. Hall\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexing for distinct search in timestamp based table" }, { "msg_contents": "I once also had a similar performance problem when looking for all matching\nrows between two timestamps. In fact that's why I'm here today. The problem\nwas with MySQL. I had some tables of around 10 million rows and all my\nsearching was timestamp based. MySQL didn't do what I wanted. I found that\nusing a CLUSTERED index with postgresql to be lightning quick. Yet mostly\nthe matching rows I was working with was not much over the 100k mark. I'm\nwondering if clustering the table on ad_log_start_time will help cut down on\nrandom reads.\n\nThat's if you can afford to block the users while postgresql clusters the\ntable.\n\nIf you're inserting in order of the start_time column (which I was) then the\ncluster should almost maintain itself (I think), providing you're not\nupdating or deleting anyway, I'd assume that since it looks like a log\ntable.\n\nDavid.\n\n \n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rainer Mager\nSent: 28 August 2008 09:06\nTo: [email protected]\nSubject: [PERFORM] indexing for distinct search in timestamp based table\n\nI'm looking for some help in speeding up searches. My table is pretty simple\n(see below), but somewhat large, and continuously growing. 
Currently it has\nabout 50 million rows.\n\nThe table is (I know I have excessive indexes, I'm trying to get the\nappropriate ones and drop the extras):\n Table \"public.ad_log\"\n Column | Type |\nModifiers\n--------------+-----------------------------+-------------------------------\n-----------------------------\n ad_log_id | integer | not null default\nnextval('ad_log_ad_log_id_seq'::regclass)\n channel_name | text | not null\n player_name | text | not null\n ad_name | text | not null\n start_time | timestamp without time zone | not null\n end_time | timestamp without time zone | not null\nIndexes:\n \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n \"ad_log_channel_name_key\" UNIQUE, btree (channel_name, player_name,\nad_name, start_time, end_time)\n \"ad_log_ad_and_start\" btree (ad_name, start_time)\n \"ad_log_ad_name\" btree (ad_name)\n \"ad_log_all\" btree (channel_name, player_name, start_time, ad_name)\n \"ad_log_channel_name\" btree (channel_name)\n \"ad_log_end_time\" btree (end_time)\n \"ad_log_player_and_start\" btree (player_name, start_time)\n \"ad_log_player_name\" btree (player_name)\n \"ad_log_start_time\" btree (start_time)\n\n\n\nThe query I'm trying to speed up is below. In it the <field> tag can be one\nof channel_name, player_name, or ad_name. I'm actually trying to return the\ndistinct values and I found GROUP BY to be slightly faster than using\nDISTINCT. Also, any of those fields may be unspecified in the WHERE clauses\nin which case we use '%', but it seems Postgres optimizes that pretty well.\n\nSELECT <field> FROM ad_log \n\tWHERE channel_name LIKE :channel_name\n\tAND player_name LIKE :player_name \n\tAND ad_name LIKE :ad_name \n\tAND start_time BETWEEN :start_date AND (date(:end_date) + 1)\n\tGROUP BY <field> ORDER BY <field>\n\n\nA typical query is:\n\nexplain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\nAND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n(date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n\nwith the result being:\n \nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Sort (cost=1163169.02..1163169.03 rows=5 width=10) (actual\ntime=75460.187..75460.192 rows=15 loops=1)\n Sort Key: channel_name\n Sort Method: quicksort Memory: 17kB\n -> HashAggregate (cost=1163168.91..1163168.96 rows=5 width=10) (actual\ntime=75460.107..75460.114 rows=15 loops=1)\n -> Bitmap Heap Scan on ad_log (cost=285064.30..1129582.84\nrows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\nloops=1)\n Recheck Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Filter: ((channel_name ~~ '%'::text) AND (ad_name ~~\n'%'::text))\n -> Bitmap Index Scan on ad_log_start_time\n(cost=0.00..281705.70 rows=13434427 width=0) (actual time=8488.443..8488.443\nrows=13701296 loops=1)\n Index Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Total runtime: 75460.361 ms\n\n\nIt seems to me there should be some way to create an index to speed this up,\nbut the various ones I've tried so far haven't helped. 
Any suggestions would\nbe greatly appreciated.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Thu, 28 Aug 2008 23:48:43 +0100", "msg_from": "\"David Rowley\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexing for distinct search in timestamp based table" }, { "msg_contents": "Another suggestion is to partition the table by date ranges. If most of the\nrange queries occur on particular batches of time, this will make all\nqueries more efficient, and improve locality and efficiency of all indexes\non the table.\n\nThis is more work than simply a table CLUSTER, especially in maintenance\noverhead, but it will generally help a lot in cases like these.\nAdditionally, if these don't change much after some period of time the\ntables older than the modification window can be vacuumed, clustered, and\nreindexed if needed to make them as efficient as possible and maintenance\nfree after that point (other than backups and archives).\n\nAnother benefit of clustering is in backup / restore. You can incrementally\nback up only the index partitions that have changed -- for large databases\nthis reduces pg_dump and pg_restore times substantially. To do this you\ncombine regular expressions with the pg_dump \"exclude tables\" or \"include\ntables\" flags.\n\n\nOn Thu, Aug 28, 2008 at 3:48 PM, David Rowley <[email protected]> wrote:\n\n> I once also had a similar performance problem when looking for all matching\n> rows between two timestamps. In fact that's why I'm here today. The problem\n> was with MySQL. I had some tables of around 10 million rows and all my\n> searching was timestamp based. MySQL didn't do what I wanted. I found that\n> using a CLUSTERED index with postgresql to be lightning quick. Yet mostly\n> the matching rows I was working with was not much over the 100k mark. I'm\n> wondering if clustering the table on ad_log_start_time will help cut down\n> on\n> random reads.\n>\n> That's if you can afford to block the users while postgresql clusters the\n> table.\n>\n> If you're inserting in order of the start_time column (which I was) then\n> the\n> cluster should almost maintain itself (I think), providing you're not\n> updating or deleting anyway, I'd assume that since it looks like a log\n> table.\n>\n> David.\n>\n>\n> -----Original Message-----\n> From: [email protected]\n> [mailto:[email protected]] On Behalf Of Rainer Mager\n> Sent: 28 August 2008 09:06\n> To: [email protected]\n> Subject: [PERFORM] indexing for distinct search in timestamp based table\n>\n> I'm looking for some help in speeding up searches. My table is pretty\n> simple\n> (see below), but somewhat large, and continuously growing. 
Currently it has\n> about 50 million rows.\n>\n> The table is (I know I have excessive indexes, I'm trying to get the\n> appropriate ones and drop the extras):\n> Table \"public.ad_log\"\n> Column | Type |\n> Modifiers\n>\n> --------------+-----------------------------+-------------------------------\n> -----------------------------\n> ad_log_id | integer | not null default\n> nextval('ad_log_ad_log_id_seq'::regclass)\n> channel_name | text | not null\n> player_name | text | not null\n> ad_name | text | not null\n> start_time | timestamp without time zone | not null\n> end_time | timestamp without time zone | not null\n> Indexes:\n> \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n> \"ad_log_channel_name_key\" UNIQUE, btree (channel_name, player_name,\n> ad_name, start_time, end_time)\n> \"ad_log_ad_and_start\" btree (ad_name, start_time)\n> \"ad_log_ad_name\" btree (ad_name)\n> \"ad_log_all\" btree (channel_name, player_name, start_time, ad_name)\n> \"ad_log_channel_name\" btree (channel_name)\n> \"ad_log_end_time\" btree (end_time)\n> \"ad_log_player_and_start\" btree (player_name, start_time)\n> \"ad_log_player_name\" btree (player_name)\n> \"ad_log_start_time\" btree (start_time)\n>\n>\n>\n> The query I'm trying to speed up is below. In it the <field> tag can be one\n> of channel_name, player_name, or ad_name. I'm actually trying to return the\n> distinct values and I found GROUP BY to be slightly faster than using\n> DISTINCT. Also, any of those fields may be unspecified in the WHERE clauses\n> in which case we use '%', but it seems Postgres optimizes that pretty well.\n>\n> SELECT <field> FROM ad_log\n> WHERE channel_name LIKE :channel_name\n> AND player_name LIKE :player_name\n> AND ad_name LIKE :ad_name\n> AND start_time BETWEEN :start_date AND (date(:end_date) + 1)\n> GROUP BY <field> ORDER BY <field>\n>\n>\n> A typical query is:\n>\n> explain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\n> AND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n> (date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n>\n> with the result being:\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------\n>\n> ----------------------------------------------------------------------------\n> -------\n> Sort (cost=1163169.02..1163169.03 rows=5 width=10) (actual\n> time=75460.187..75460.192 rows=15 loops=1)\n> Sort Key: channel_name\n> Sort Method: quicksort Memory: 17kB\n> -> HashAggregate (cost=1163168.91..1163168.96 rows=5 width=10) (actual\n> time=75460.107..75460.114 rows=15 loops=1)\n> -> Bitmap Heap Scan on ad_log (cost=285064.30..1129582.84\n> rows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\n> loops=1)\n> Recheck Cond: ((start_time >= '2008-07-01\n> 00:00:00'::timestamp without time zone) AND (start_time <=\n> '2008-07-29'::date))\n> Filter: ((channel_name ~~ '%'::text) AND (ad_name ~~\n> '%'::text))\n> -> Bitmap Index Scan on ad_log_start_time\n> (cost=0.00..281705.70 rows=13434427 width=0) (actual\n> time=8488.443..8488.443\n> rows=13701296 loops=1)\n> Index Cond: ((start_time >= '2008-07-01\n> 00:00:00'::timestamp without time zone) AND (start_time <=\n> '2008-07-29'::date))\n> Total runtime: 75460.361 ms\n>\n>\n> It seems to me there should be some way to create an index to speed this\n> up,\n> but the various ones I've tried so far haven't helped. 
Any suggestions\n> would\n> be greatly appreciated.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nAnother suggestion is to partition the table by date ranges.  If most of the range queries occur on particular batches of time, this will make all queries more efficient, and improve locality and efficiency of all indexes on the table.\nThis is more work than simply a table CLUSTER, especially in maintenance overhead, but it will generally help a lot in cases like these.  Additionally, if these don't change much after some period of time the tables older than the modification window can be vacuumed, clustered, and reindexed if needed to make them as efficient as possible and maintenance free after that point (other than backups and archives).\nAnother benefit of clustering is in backup / restore.  You can incrementally back up only the index partitions that have changed -- for large databases this reduces pg_dump and pg_restore times substantially.  To do this you combine regular expressions with the pg_dump \"exclude tables\" or \"include tables\" flags.\nOn Thu, Aug 28, 2008 at 3:48 PM, David Rowley <[email protected]> wrote:\nI once also had a similar performance problem when looking for all matching\nrows between two timestamps. In fact that's why I'm here today. The problem\nwas with MySQL. I had some tables of around 10 million rows and all my\nsearching was timestamp based. MySQL didn't do what I wanted. I found that\nusing a CLUSTERED index with postgresql to be lightning quick. Yet mostly\nthe matching rows I was working with was not much over the 100k mark. I'm\nwondering if clustering the table on ad_log_start_time will help cut down on\nrandom reads.\n\nThat's if you can afford to block the users while postgresql clusters the\ntable.\n\nIf you're inserting in order of the start_time column (which I was) then the\ncluster should almost maintain itself (I think), providing you're not\nupdating or deleting anyway, I'd assume that since it looks like a log\ntable.\n\nDavid.\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rainer Mager\nSent: 28 August 2008 09:06\nTo: [email protected]\nSubject: [PERFORM] indexing for distinct search in timestamp based table\n\nI'm looking for some help in speeding up searches. My table is pretty simple\n(see below), but somewhat large, and continuously growing. 
Currently it has\nabout 50 million rows.\n\nThe table is (I know I have excessive indexes, I'm trying to get the\nappropriate ones and drop the extras):\n                                          Table \"public.ad_log\"\n    Column    |            Type             |\nModifiers\n--------------+-----------------------------+-------------------------------\n-----------------------------\n ad_log_id    | integer                     | not null default\nnextval('ad_log_ad_log_id_seq'::regclass)\n channel_name | text                        | not null\n player_name  | text                        | not null\n ad_name      | text                        | not null\n start_time   | timestamp without time zone | not null\n end_time     | timestamp without time zone | not null\nIndexes:\n    \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n    \"ad_log_channel_name_key\" UNIQUE, btree (channel_name, player_name,\nad_name, start_time, end_time)\n    \"ad_log_ad_and_start\" btree (ad_name, start_time)\n    \"ad_log_ad_name\" btree (ad_name)\n    \"ad_log_all\" btree (channel_name, player_name, start_time, ad_name)\n    \"ad_log_channel_name\" btree (channel_name)\n    \"ad_log_end_time\" btree (end_time)\n    \"ad_log_player_and_start\" btree (player_name, start_time)\n    \"ad_log_player_name\" btree (player_name)\n    \"ad_log_start_time\" btree (start_time)\n\n\n\nThe query I'm trying to speed up is below. In it the <field> tag can be one\nof channel_name, player_name, or ad_name. I'm actually trying to return the\ndistinct values and I found GROUP BY to be slightly faster than using\nDISTINCT. Also, any of those fields may be unspecified in the WHERE clauses\nin which case we use '%', but it seems Postgres optimizes that pretty well.\n\nSELECT <field> FROM ad_log\n        WHERE channel_name LIKE :channel_name\n        AND player_name LIKE :player_name\n        AND ad_name LIKE :ad_name\n        AND start_time BETWEEN :start_date AND (date(:end_date) + 1)\n        GROUP BY <field> ORDER BY <field>\n\n\nA typical query is:\n\nexplain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\nAND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n(date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n\nwith the result being:\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Sort  (cost=1163169.02..1163169.03 rows=5 width=10) (actual\ntime=75460.187..75460.192 rows=15 loops=1)\n   Sort Key: channel_name\n   Sort Method:  quicksort  Memory: 17kB\n   ->  HashAggregate  (cost=1163168.91..1163168.96 rows=5 width=10) (actual\ntime=75460.107..75460.114 rows=15 loops=1)\n         ->  Bitmap Heap Scan on ad_log  (cost=285064.30..1129582.84\nrows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\nloops=1)\n               Recheck Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n               Filter: ((channel_name ~~ '%'::text) AND (ad_name ~~\n'%'::text))\n               ->  Bitmap Index Scan on ad_log_start_time\n(cost=0.00..281705.70 rows=13434427 width=0) (actual time=8488.443..8488.443\nrows=13701296 loops=1)\n                     Index Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Total runtime: 75460.361 ms\n\n\nIt seems to me there should be some way to create an index to speed this up,\nbut 
the various ones I've tried so far haven't helped. Any suggestions would\nbe greatly appreciated.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 28 Aug 2008 16:01:58 -0700", "msg_from": "\"Scott Carey\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexing for distinct search in timestamp based table" }, { "msg_contents": "Thanks for the suggestions.\n\n \n\nDavid's assumption is correct in that this is a log table with no updates or\ndeletes. I tried making it CLUSTERED in our test environment, but it didn't\nseem to make any difference for my query. It did take about 6 hours to do\nthe transformation, so it would be difficult to find the time to do in\nproduction, but I'm sure we could work something out if it really looked\nbeneficial. Unfortunately, as I said, initial tests don't seem to indicate\nany benefit.\n\n \n\nI believe that my performance difficulty comes from the need for DISTINCT\n(or GROUP BY) data. That is, the normal start_time index seems fine for\nlimiting the date range, but when I need to select DISTINCT data from the\ndate range it seems that Postgres still needs to scan the entire limited\ndate range.\n\n \n\nUnfortunately, we support arbitrary date range queries on this table, so I\ndon't think the partitioning idea is an option for us.\n\n \n\n \n\nWhat I'm playing with now is creating separate tables to hold the\nchannel_name, ad_name, and player_name data with PRIMARY KEY ids. Since\nthere are very few of these compared to the number of rows in the main\ntable, this will give me a quick way to get the DISTINCT values over the\nentire data set. My problem then will be reducing that to the DISTINCT\nvalues for a limited date range.\n\n \n\nAs a side effect bonus of this I expect the database to shrink considerably\nas these text fields, although not that long (roughly 20 to 50 characters),\nare certainly longer than a simple foreign key reference.\n\n \n\n \n\n--Rainer\n\n \n\n \n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Scott Carey\nSent: Friday, August 29, 2008 8:02 AM\nTo: David Rowley\nCc: Rainer Mager; [email protected]\nSubject: Re: [PERFORM] indexing for distinct search in timestamp based table\n\n \n\nAnother suggestion is to partition the table by date ranges. If most of the\nrange queries occur on particular batches of time, this will make all\nqueries more efficient, and improve locality and efficiency of all indexes\non the table.\n\nThis is more work than simply a table CLUSTER, especially in maintenance\noverhead, but it will generally help a lot in cases like these.\nAdditionally, if these don't change much after some period of time the\ntables older than the modification window can be vacuumed, clustered, and\nreindexed if needed to make them as efficient as possible and maintenance\nfree after that point (other than backups and archives).\n\nAnother benefit of clustering is in backup / restore. You can incrementally\nback up only the index partitions that have changed -- for large databases\nthis reduces pg_dump and pg_restore times substantially. 
To do this you\ncombine regular expressions with the pg_dump \"exclude tables\" or \"include\ntables\" flags.\n\n\n\nOn Thu, Aug 28, 2008 at 3:48 PM, David Rowley <[email protected]> wrote:\n\nI once also had a similar performance problem when looking for all matching\nrows between two timestamps. In fact that's why I'm here today. The problem\nwas with MySQL. I had some tables of around 10 million rows and all my\nsearching was timestamp based. MySQL didn't do what I wanted. I found that\nusing a CLUSTERED index with postgresql to be lightning quick. Yet mostly\nthe matching rows I was working with was not much over the 100k mark. I'm\nwondering if clustering the table on ad_log_start_time will help cut down on\nrandom reads.\n\nThat's if you can afford to block the users while postgresql clusters the\ntable.\n\nIf you're inserting in order of the start_time column (which I was) then the\ncluster should almost maintain itself (I think), providing you're not\nupdating or deleting anyway, I'd assume that since it looks like a log\ntable.\n\nDavid.\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Rainer Mager\nSent: 28 August 2008 09:06\nTo: [email protected]\nSubject: [PERFORM] indexing for distinct search in timestamp based table\n\nI'm looking for some help in speeding up searches. My table is pretty simple\n(see below), but somewhat large, and continuously growing. Currently it has\nabout 50 million rows.\n\nThe table is (I know I have excessive indexes, I'm trying to get the\nappropriate ones and drop the extras):\n Table \"public.ad_log\"\n Column | Type |\nModifiers\n--------------+-----------------------------+-------------------------------\n-----------------------------\n ad_log_id | integer | not null default\nnextval('ad_log_ad_log_id_seq'::regclass)\n channel_name | text | not null\n player_name | text | not null\n ad_name | text | not null\n start_time | timestamp without time zone | not null\n end_time | timestamp without time zone | not null\nIndexes:\n \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n \"ad_log_channel_name_key\" UNIQUE, btree (channel_name, player_name,\nad_name, start_time, end_time)\n \"ad_log_ad_and_start\" btree (ad_name, start_time)\n \"ad_log_ad_name\" btree (ad_name)\n \"ad_log_all\" btree (channel_name, player_name, start_time, ad_name)\n \"ad_log_channel_name\" btree (channel_name)\n \"ad_log_end_time\" btree (end_time)\n \"ad_log_player_and_start\" btree (player_name, start_time)\n \"ad_log_player_name\" btree (player_name)\n \"ad_log_start_time\" btree (start_time)\n\n\n\nThe query I'm trying to speed up is below. In it the <field> tag can be one\nof channel_name, player_name, or ad_name. I'm actually trying to return the\ndistinct values and I found GROUP BY to be slightly faster than using\nDISTINCT. 
Also, any of those fields may be unspecified in the WHERE clauses\nin which case we use '%', but it seems Postgres optimizes that pretty well.\n\nSELECT <field> FROM ad_log\n WHERE channel_name LIKE :channel_name\n AND player_name LIKE :player_name\n AND ad_name LIKE :ad_name\n AND start_time BETWEEN :start_date AND (date(:end_date) + 1)\n GROUP BY <field> ORDER BY <field>\n\n\nA typical query is:\n\nexplain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\nAND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n(date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n\nwith the result being:\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Sort (cost=1163169.02..1163169.03 rows=5 width=10) (actual\ntime=75460.187..75460.192 rows=15 loops=1)\n Sort Key: channel_name\n Sort Method: quicksort Memory: 17kB\n -> HashAggregate (cost=1163168.91..1163168.96 rows=5 width=10) (actual\ntime=75460.107..75460.114 rows=15 loops=1)\n -> Bitmap Heap Scan on ad_log (cost=285064.30..1129582.84\nrows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\nloops=1)\n Recheck Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Filter: ((channel_name ~~ '%'::text) AND (ad_name ~~\n'%'::text))\n -> Bitmap Index Scan on ad_log_start_time\n(cost=0.00..281705.70 rows=13434427 width=0) (actual time=8488.443..8488.443\nrows=13701296 loops=1)\n Index Cond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Total runtime: 75460.361 ms\n\n\nIt seems to me there should be some way to create an index to speed this up,\nbut the various ones I've tried so far haven't helped. Any suggestions would\nbe greatly appreciated.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n \n\n\n\n\n\n\n\n\n\n\n\nThanks for the suggestions.\n \nDavid’s assumption is correct in that this is a log table\nwith no updates or deletes. I tried making it CLUSTERED in our test\nenvironment, but it didn’t seem to make any difference for my query. It\ndid take about 6 hours to do the transformation, so it would be difficult to\nfind the time to do in production, but I’m sure we could work something\nout if it really looked beneficial. Unfortunately, as I said, initial tests don’t\nseem to indicate any benefit.\n \nI believe that my performance difficulty comes from the need for\nDISTINCT (or GROUP BY) data. That is, the normal start_time index seems fine\nfor limiting the date range, but when I need to select DISTINCT data from the\ndate range it seems that Postgres still needs to scan the entire limited date\nrange.\n \nUnfortunately, we support arbitrary date range queries on this\ntable, so I don’t think the partitioning idea is an option for us.\n \n \nWhat I’m playing with now is creating separate tables to\nhold the channel_name, ad_name, and player_name data with PRIMARY KEY ids. Since\nthere are very few of these compared to the number of rows in the main table,\nthis will give me a quick way to get the DISTINCT values over the entire data\nset. 
My problem then will be reducing that to the DISTINCT values for a limited\ndate range.\n \nAs a side effect bonus of this I expect the database to shrink\nconsiderably as these text fields, although not that long (roughly 20 to 50\ncharacters), are certainly longer than a simple foreign key reference.\n \n \n--Rainer\n \n \n\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Scott Carey\nSent: Friday, August 29, 2008 8:02 AM\nTo: David Rowley\nCc: Rainer Mager; [email protected]\nSubject: Re: [PERFORM] indexing for distinct search in timestamp based\ntable\n\n \n\nAnother suggestion is to\npartition the table by date ranges.  If most of the range queries occur on\nparticular batches of time, this will make all queries more efficient, and\nimprove locality and efficiency of all indexes on the table.\n\nThis is more work than simply a table CLUSTER, especially in maintenance\noverhead, but it will generally help a lot in cases like these. \nAdditionally, if these don't change much after some period of time the tables\nolder than the modification window can be vacuumed, clustered, and reindexed if\nneeded to make them as efficient as possible and maintenance free after that point\n(other than backups and archives).\n\nAnother benefit of clustering is in backup / restore.  You can\nincrementally back up only the index partitions that have changed -- for large\ndatabases this reduces pg_dump and pg_restore times substantially.  To do\nthis you combine regular expressions with the pg_dump \"exclude\ntables\" or \"include tables\" flags.\n\n\n\nOn Thu, Aug 28, 2008 at 3:48 PM, David Rowley <[email protected]> wrote:\nI once also had a similar performance problem when looking\nfor all matching\nrows between two timestamps. In fact that's why I'm here today. The problem\nwas with MySQL. I had some tables of around 10 million rows and all my\nsearching was timestamp based. MySQL didn't do what I wanted. I found that\nusing a CLUSTERED index with postgresql to be lightning quick. Yet mostly\nthe matching rows I was working with was not much over the 100k mark. I'm\nwondering if clustering the table on ad_log_start_time will help cut down on\nrandom reads.\n\nThat's if you can afford to block the users while postgresql clusters the\ntable.\n\nIf you're inserting in order of the start_time column (which I was) then the\ncluster should almost maintain itself (I think), providing you're not\nupdating or deleting anyway, I'd assume that since it looks like a log\ntable.\n\nDavid.\n\n\n\n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]]\nOn Behalf Of Rainer Mager\nSent: 28 August 2008 09:06\nTo: [email protected]\nSubject: [PERFORM] indexing for distinct search in timestamp based table\n\nI'm looking for some help in speeding up searches. My table is pretty simple\n(see below), but somewhat large, and continuously growing. 
Currently it has\nabout 50 million rows.\n\nThe table is (I know I have excessive indexes, I'm trying to get the\nappropriate ones and drop the extras):\n                     \n                   Table\n\"public.ad_log\"\n   Column    |          \n Type             |\nModifiers\n--------------+-----------------------------+-------------------------------\n-----------------------------\n ad_log_id    | integer          \n          | not null default\nnextval('ad_log_ad_log_id_seq'::regclass)\n channel_name | text              \n         | not null\n player_name  | text              \n         | not null\n ad_name      | text          \n             | not null\n start_time   | timestamp without time zone | not null\n end_time     | timestamp without time zone | not null\nIndexes:\n   \"ad_log_pkey\" PRIMARY KEY, btree (ad_log_id)\n   \"ad_log_channel_name_key\" UNIQUE, btree (channel_name,\nplayer_name,\nad_name, start_time, end_time)\n   \"ad_log_ad_and_start\" btree (ad_name, start_time)\n   \"ad_log_ad_name\" btree (ad_name)\n   \"ad_log_all\" btree (channel_name, player_name,\nstart_time, ad_name)\n   \"ad_log_channel_name\" btree (channel_name)\n   \"ad_log_end_time\" btree (end_time)\n   \"ad_log_player_and_start\" btree (player_name,\nstart_time)\n   \"ad_log_player_name\" btree (player_name)\n   \"ad_log_start_time\" btree (start_time)\n\n\n\nThe query I'm trying to speed up is below. In it the <field> tag can be\none\nof channel_name, player_name, or ad_name. I'm actually trying to return the\ndistinct values and I found GROUP BY to be slightly faster than using\nDISTINCT. Also, any of those fields may be unspecified in the WHERE clauses\nin which case we use '%', but it seems Postgres optimizes that pretty well.\n\nSELECT <field> FROM ad_log\n       WHERE channel_name LIKE :channel_name\n       AND player_name LIKE :player_name\n       AND ad_name LIKE :ad_name\n       AND start_time BETWEEN :start_date AND\n(date(:end_date) + 1)\n       GROUP BY <field> ORDER BY <field>\n\n\nA typical query is:\n\nexplain analyze SELECT channel_name FROM ad_log WHERE channel_name LIKE '%'\nAND ad_name LIKE '%' AND start_time BETWEEN '2008-07-01' AND\n(date('2008-07-28') + 1) GROUP BY channel_name ORDER BY channel_name;\n\nwith the result being:\n\nQUERY PLAN\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-------\n Sort  (cost=1163169.02..1163169.03 rows=5 width=10) (actual\ntime=75460.187..75460.192 rows=15 loops=1)\n  Sort Key: channel_name\n  Sort Method:  quicksort  Memory: 17kB\n  ->  HashAggregate  (cost=1163168.91..1163168.96 rows=5\nwidth=10) (actual\ntime=75460.107..75460.114 rows=15 loops=1)\n        ->  Bitmap Heap Scan on ad_log\n (cost=285064.30..1129582.84\nrows=13434427 width=10) (actual time=8506.250..65771.597 rows=13701296\nloops=1)\n              Recheck Cond: ((start_time\n>= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n              Filter: ((channel_name ~~\n'%'::text) AND (ad_name ~~\n'%'::text))\n              ->  Bitmap Index Scan\non ad_log_start_time\n(cost=0.00..281705.70 rows=13434427 width=0) (actual time=8488.443..8488.443\nrows=13701296 loops=1)\n                    Index\nCond: ((start_time >= '2008-07-01\n00:00:00'::timestamp without time zone) AND (start_time <=\n'2008-07-29'::date))\n Total runtime: 75460.361 ms\n\n\nIt seems to me there should be some way to create an index to speed this up,\nbut the 
\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 29 Aug 2008 16:32:18 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: indexing for distinct search in timestamp based table" }, { "msg_contents": "
You might get a great improvement for the '%' cases using an index on (<field>, start_time), for example (channel_name, start_time), and a little bit of pl/pgsql.\n\n
Basically, you need to implement the following algorithm:\n 1) curr_<field> = ( select min(<field>) from ad_log )\n 2) record_exists = ( select 1 from ad_log where <field> = curr_<field> and _all_other_conditions limit 1 )\n 3) if record_exists == 1 then add curr_<field> to the results\n 4) curr_<field> = ( select min(<field>) from ad_log where <field> > curr_<field> )\n 5) if curr_<field> is not null then goto 2\n\n
I believe it might make sense to implement this approach in the core (I would call it an \"index distinct scan\").\n\n
That could dramatically improve \"select distinct <column> from <table>\" and \"select <column> from <table> group by <column>\" kinds of queries when there is an index on <column> and the column has a very small number of distinct values.\n\n
For instance, say a table has 10,000,000 rows while the column of interest has only 20 distinct values. In that case, the database would be able to fetch each of those 20 values in roughly 20 index lookups.\n\n
What does the community think about that?
", "msg_date": "Fri, 5 Sep 2008 19:10:35 +0400", "msg_from": "\"Vladimir Sitnikov\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: indexing for distinct search in timestamp based table" }, { "msg_contents": "
Thanks for the suggestion. This seems to work pretty well on 8.3, but not so well on 8.2. We were planning on upgrading to 8.3 soon anyway; we just have to move up our schedule a bit.\n\n
I think that this type of algorithm would make sense in core. I suspect that, implemented there, further optimizations could be done that pl/pgsql can't do.\n\n
--Rainer\n
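To illustrate the loop Vladimir describes, here is a minimal pl/pgsql sketch written against the ad_log.channel_name column from this thread. The function name, its parameters, and the decision to filter only on start_time are assumptions made for the example; it is not code supplied by the posters.

-- emits each distinct channel_name that has at least one row in the date range,
-- walking the btree on channel_name one distinct value at a time
CREATE OR REPLACE FUNCTION distinct_channels(p_start timestamp, p_end timestamp)
RETURNS SETOF text AS $$
DECLARE
    curr text;
BEGIN
    SELECT min(channel_name) INTO curr FROM ad_log;
    WHILE curr IS NOT NULL LOOP
        -- steps 2/3: keep the value only if some row matches the other conditions
        PERFORM 1 FROM ad_log
         WHERE channel_name = curr
           AND start_time BETWEEN p_start AND p_end
         LIMIT 1;
        IF FOUND THEN
            RETURN NEXT curr;
        END IF;
        -- step 4: jump to the next distinct value via the index
        SELECT min(channel_name) INTO curr FROM ad_log WHERE channel_name > curr;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql STABLE;

-- usage:
-- SELECT * FROM distinct_channels('2008-07-01', '2008-07-29');

The cost then scales with the number of distinct channel names rather than with the millions of rows inside the date range: each iteration is a couple of index probes, and the existence check is fastest with the composite (channel_name, start_time) index Vladimir mentions.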
\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Vladimir Sitnikov\nSent: Saturday, September 06, 2008 12:11 AM\nTo: [email protected]\nSubject: Re: [PERFORM] indexing for distinct search in timestamp based table\n\n
You might get a great improvement for the '%' cases using an index on (<field>, start_time), for example (channel_name, start_time), and a little bit of pl/pgsql.\n\n
Basically, you need to implement the following algorithm:\n 1) curr_<field> = ( select min(<field>) from ad_log )\n 2) record_exists = ( select 1 from ad_log where <field> = curr_<field> and _all_other_conditions limit 1 )\n 3) if record_exists == 1 then add curr_<field> to the results\n 4) curr_<field> = ( select min(<field>) from ad_log where <field> > curr_<field> )\n 5) if curr_<field> is not null then goto 2\n\n
I believe it might make sense to implement this approach in the core (I would call it an \"index distinct scan\").\n\n
That could dramatically improve \"select distinct <column> from <table>\" and \"select <column> from <table> group by <column>\" kinds of queries when there is an index on <column> and the column has a very small number of distinct values.\n\n
For instance, say a table has 10,000,000 rows while the column of interest has only 20 distinct values. In that case, the database would be able to fetch each of those 20 values in roughly 20 index lookups.\n\n
What does the community think about that?
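For readers on later releases, the same loose index scan can be written declaratively with a recursive CTE (WITH RECURSIVE, available from 8.4 on, so not an option for the 8.2/8.3 servers in this thread). A sketch, again using the thread's channel_name column and omitting the date-range check:

WITH RECURSIVE chan AS (
    (SELECT channel_name FROM ad_log ORDER BY channel_name LIMIT 1)
    UNION ALL
    SELECT (SELECT channel_name FROM ad_log
             WHERE channel_name > chan.channel_name
             ORDER BY channel_name LIMIT 1)
      FROM chan
     WHERE chan.channel_name IS NOT NULL
)
SELECT channel_name FROM chan WHERE channel_name IS NOT NULL;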
", "msg_date": "Mon, 8 Sep 2008 12:54:04 +0900", "msg_from": "\"Rainer Mager\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: indexing for distinct search in timestamp based table" } ]