On 23/02/12 08:39, Reuven M. Lerner wrote:
> (4) I tried "chunking" the deletes, such that instead of trying to delete all of the records from the B table, I would instead delete just those associated with 100 or 200 rows from the R table. On a 1 GB subset of the data, this seemed to work just fine. But on the actual database, it was still far too slow.
This is the approach I'd take. You don't have enough control or access to come up with a better solution. Build a temp table with 100 ids to delete. Time that, and then the next night you can increase to 200, and so on, until a run takes around three hours.
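The temp-table approach might look something like the sketch below. The thread only names the tables B and R, so the column names (r.id, b.r_id) and the selection condition are hypothetical placeholders:

```sql
-- One nightly batch, sketched with hypothetical column names:
-- r.id is R's primary key, b.r_id is B's foreign key into R,
-- and is_stale stands in for whatever marks rows for deletion.
BEGIN;

-- Grab a small batch of R ids to process in this run.
CREATE TEMP TABLE ids_to_delete ON COMMIT DROP AS
SELECT id
FROM   r
WHERE  is_stale          -- hypothetical condition
LIMIT  100;              -- raise this once you've timed a run

-- Delete the dependent B rows first, then the R rows themselves.
DELETE FROM b WHERE r_id IN (SELECT id FROM ids_to_delete);
DELETE FROM r WHERE id   IN (SELECT id FROM ids_to_delete);

COMMIT;
```

Running it under psql's `\timing` gives you the per-batch number you need to decide how far to raise the LIMIT for the next night.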
Oh - and get the Windows admins to take a look at disk activity; the standard Performance Monitor can tell you more than enough. If the machine is swapping constantly, performance will be atrocious, but even if the disks are merely busy all the time, updates and deletes can be very slow.
--
Richard Huxton
Archonet Ltd

--
Sent via pgsql-performance mailing list (pgsql-performance@xxxxxxxxxxxxxx)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance