Re: How to generate a large file allocating space

Hello Alex, hello Andreas

* Andreas Dilger <adilger@xxxxxxxxx> wrote:
>> On 2010-10-31, at 09:05, Alex Bligh wrote:
>> I am trying to allocate huge files on ext4. I will then read the extents
>> within the file and write to the disk at a block level rather than using
>> ext4 (the FS will not be mounted at this point). This will allow me to
>> have several iSCSI clients hitting the same LUN r/w safely. And at
>> some point when I know the relevant iSCSI stuff has stopped and been
>> flushed to disk, I may unlink the file.
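As an aside, the file-based approach described above can be sketched with standard tools; a rough example (path and size are made up, not from Alex's setup):

```shell
# Preallocate a large file without writing any data; on ext4 this
# reserves (uninitialized) extents via the fallocate syscall.
fallocate -l 100G /mnt/ext4/lun.img

# Dump the extent list (logical -> physical block ranges) so the
# raw device can later be addressed directly, bypassing the FS.
filefrag -v /mnt/ext4/lun.img
```

filefrag uses the FIEMAP ioctl, so it works as an unprivileged user on your own files.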

Question: Did you consider using plain LVM for this purpose? When you create
a logical volume, no data is initialized; only the metadata is written
(which seems to be exactly what you need). Each client can then access one
logical volume r/w. Retrieving the extent list is very easy as well. And
because there is no block-group metadata of any kind (cluster bitmaps, inode
bitmaps and tables), in most cases you will end up with a single extent
regardless of the size of the volume you created.
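A minimal sketch of that approach (volume group and LV names are invented examples; this needs root and an existing VG):

```shell
# Create a 100 GB logical volume; only LVM metadata is written,
# the data area itself is left uninitialized.
lvcreate -L 100G -n lun1 vg0

# Show the physical segments backing the LV; on a freshly created
# VG this is typically one contiguous segment.
lvdisplay --maps /dev/vg0/lun1

# Alternatively, get the exact sector mapping from device-mapper.
dmsetup table vg0-lun1
```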

> Hmm, why not simply use a cluster filesystem to do this?
> 
> GFS and OCFS both handle shared writers for the same SAN disk (AFAIK),

They are SUPPOSED to do that - in theory. Over the last two weekends I tried
to set up a stable DRBD+GFS2 configuration - I failed. Then I tried OCFS2 -
again I failed. The setup was simple: two identical systems with 10*500GB
disks and a hardware RAID6 yielding 4TB of usable space. That was used as
the backing device for DRBD (no LVM, crypto or anything else in between).
Both nodes were set to primary, then I created GFS2 (later OCFS2) and
started the supporting services (clvmd/o2cb). Then I mounted the file
system on both machines - everything worked up to this point.
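For reference, dual-primary operation has to be allowed explicitly in the DRBD resource definition; a rough sketch of such a config (hostnames, devices and addresses are made-up placeholders - as noted below, my actual config files are gone):

```
resource r0 {
    protocol C;                  # synchronous replication, required for dual-primary
    net {
        allow-two-primaries;     # permit both nodes to be Primary at once
    }
    on node1 {
        device    /dev/drbd0;
        disk      /dev/sda3;     # the RAID6 array
        address   192.168.0.1:7788;
        meta-disk internal;
    }
    on node2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.0.2:7788;
        meta-disk internal;
    }
}
```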

machine1: dd if=/dev/zero of=/mnt/4tb/file1
machine2: dd if=/dev/zero of=/mnt/4tb/file2

This worked well in both setups on both machines.

machine1: let i=0; while let i=i+1; do echo "A$i" >> /mnt/4tb/file3; done
machine2: let i=0; while let i=i+1; do echo "B$i" >> /mnt/4tb/file3; done

GFS2: The first machine kept working, but the second started returning EIO
on *ANY* request (even "ls /mnt/4tb"). Unmounting was impossible; I had to
reboot -> #gfs2 #fail
OCFS2: passed this test as well as the next one:

machine1: let i=0; while let i=i+1; do echo "A$i"; done >> /mnt/4tb/file4
machine2: let i=0; while let i=i+1; do echo "B$i"; done >> /mnt/4tb/file4

Then I rebooted one machine with "echo b > /proc/sysrq-trigger" while the
last test was still running. Guess what: the other machine stopped working.
No reads, no writes. It did not even recover when the first machine came
back; I had to reboot the second one as well before the file system was
usable again.

Maybe I did something wrong, or maybe the file systems just aren't as stable
as we expected them to be. In any case, we have now decided to use proven
components instead: DRBD in a primary/secondary setup with ext3, failing
over to the other node if the primary goes down. As the system has already
gone into production, we're not going to change anything there in the near
future. So consider this report strictly informational.

BTW: no, I no longer have the config files. I didn't save them, and the
systems were completely reinstalled after the final setup proved to work,
to wipe out everything left over from the previous attempts.

Regards, Bodo

_______________________________________________
Ext3-users mailing list
Ext3-users@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/ext3-users

