Re: Huge load on btrfs subvolume delete

On 15/08/16 at 10:16, "Austin S. Hemmelgarn" <ahferroin7@xxxxxxxxx> wrote:
ASH> With respect to databases, you might consider backing them up separately 
ASH> too.  In many cases for something like an SQL database, it's a lot more 
ASH> flexible to have a dump of the database as a backup than it is to have 
ASH> the database files themselves, because it decouples it from the 
ASH> filesystem level layout.

With mysql|mariadb, getting a consistent dump requires locking tables for the duration of
the dump, which is not acceptable on production servers.

Even with specialised hot-dump tools, doing the dump on production servers is too I/O-heavy
(I have huge databases; writing the dump is expensive and slow).

I used to have a slave just for the dump (easy to stop the slave, dump, and restart it), but
after a while it could no longer keep up with the writes (prod was on SSD and the slave wasn't;
the dump disk was 100% busy all day long), so for me it's really easier to rsync the raw
files once a day to a cheap host before dumping.
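The old slave-based scheme can be sketched roughly as follows. This is a minimal
illustration, not the exact script used here; the dump path is made up, and a real
setup would add error handling:

```shell
#!/bin/sh
# Sketch of a slave-based dump: pause replication on a dedicated slave so
# its data stops changing, dump at leisure, then let replication catch up.
# DUMPFILE is an illustrative path, not one from the original mail.

DUMPFILE=${DUMPFILE:-/backup/all-databases.sql}

dump_from_slave() {
    mysql -e "STOP SLAVE;"                    # freeze the slave's data
    mysqldump --all-databases > "$DUMPFILE"   # slow dump is harmless here
    mysql -e "START SLAVE;"                   # replication catches up
}

# dump_from_slave   # uncomment to run on the slave
```

The point of running this on a slave is exactly what the paragraph above says: the
production master never sees the dump I/O, and the slave simply falls behind and
catches up afterwards.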

(of course, I need to flush & lock tables during the snapshot, before the rsync, but that
takes just one or two seconds, which is still acceptable)
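The flush-lock/snapshot/rsync workflow above can be sketched like this. It is an
assumption-laden sketch, not the author's actual script: the paths and backup host are
invented, and it relies on the mysql client's `system` command to take the snapshot
while the session (and therefore the read lock) is still open:

```shell
#!/bin/sh
# Sketch: hold FLUSH TABLES WITH READ LOCK only for the second or two it
# takes to create a read-only btrfs snapshot, then rsync the raw files
# from the frozen snapshot while the database runs unlocked.
# All paths and the destination host are hypothetical.

DATADIR=${DATADIR:-/var/lib/mysql}             # btrfs subvolume with the db
SNAP=${SNAP:-/var/lib/mysql-snap}              # snapshot location
DEST=${DEST:-backup@cheaphost:/backup/mysql/}  # cheap host for the dump

backup_via_snapshot() {
    # FLUSH TABLES WITH READ LOCK is released when the client disconnects,
    # so the snapshot must be taken from within the same mysql session;
    # the client's "system" command runs a shell command without closing it.
    mysql -e "FLUSH TABLES WITH READ LOCK; system btrfs subvolume snapshot -r $DATADIR $SNAP; UNLOCK TABLES;"

    # The lock is already gone; copy from the frozen snapshot, not live files.
    rsync -a --delete "$SNAP/" "$DEST"

    # Drop the snapshot once the copy is done.
    btrfs subvolume delete "$SNAP"
}

# backup_via_snapshot   # uncomment to run
```

The key property is that the expensive part (the rsync) happens against an immutable
snapshot, so the lock window stays at the one or two seconds mentioned above.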

-- 
Daniel
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html



