Re: Vacuuming generates huge WALs

You can vacuum a limited number of tables manually every few hours. Set up a cron job to: 1. collect, say, the 50 tables with the oldest relfrozenxid into a file; 2. prepend each table name with VACUUM ANALYZE so that every line becomes a psql command to vacuum that table; 3. feed that file to psql.

If you run it every few hours, each run will vacuum the 50 (in this case) oldest tables. This way you can keep the tables maintained without overloading your server or generating a large burst of WAL all at once.
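A minimal sketch of such a cron script, assuming a database named mydb and a scratch file under /tmp (both placeholders, not from the original post); the limit of 50 matches the example above, and the ordering uses age(relfrozenxid) from the pg_class catalog:

#!/bin/sh
# Build a file of VACUUM ANALYZE commands for the 50 tables whose
# relfrozenxid is oldest (database name, path, and limit are placeholders).
psql -At -d mydb -c "
  SELECT 'VACUUM ANALYZE ' || quote_ident(n.nspname) || '.' || quote_ident(c.relname) || ';'
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
  ORDER BY age(c.relfrozenxid) DESC
  LIMIT 50;
" > /tmp/vacuum_oldest.sql

# Run the generated commands one after another.
psql -d mydb -f /tmp/vacuum_oldest.sql

Spreading the runs out over the day keeps each batch of vacuums (and the WAL it generates) small.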

Payal Singh,
Junior Database Administrator,
OmniTI Computer Consulting Inc.
Phone: 240.646.0770 x 253


On Thu, Feb 13, 2014 at 2:29 PM, Murthy Nunna <mnunna@xxxxxxxx> wrote:

Hi All,

 

I am running version 9.2.4 and my database is configured for replication. My database is about 1.5TB.

 

Our manual vacuum process runs for about 5 hours, and during this time a lot of WAL files are generated. Server load also goes up because of the heavy I/O activity. Because of replication, all these WALs are shipped to the standby server, which puts heavy I/O load there as well. We run read-only queries on the standby, so there is user impact there too.

 

This is probably a catch-22, but I am wondering if there is any way we can decrease this WAL activity during vacuum?

 

Thanks in advance for your advice/comments.

 

Murthy


