Re: [PATCH 0/5] Convert seed devices to proper list API

On 22.07.20 г. 17:26 ч., David Sterba wrote:
> On Wed, Jul 15, 2020 at 01:48:45PM +0300, Nikolay Borisov wrote:
>> This series converts the existing seed devices pointer in btrfs_fs_devices to
>> proper list api. First 4 patches are cleanups preparing the code for the switch.
> 
>> Patch 5 has more information about the required transformations, it might seem
>> lengthy but the logic everywhere is pretty much the same so shouldn't be that
>> cumbersome to review.
> 
> The first 3 patches are clear, I start to have doubts in 4 and 5 was a
> stop for me, but I'm not saying it's wrong. Just that I always thought
> the seed devices + the sprout are close to a tree structure but you're
> switching that to a linear list.
> 
> I'll add the first three patches now, but would appreciate some help
> with 4 and 5.
> 

Ok, let's take them one at a time, starting with patch 4:

Originally, fs_info for fs_devices is set by btrfs_set_fs_info_ptr, called
by btrfs_sysfs_add_mounted. The latter is called in open_ctree before
block groups are read, i.e. before the filesystem is exposed. My patch
instead changes this so that fs_info is set in btrfs_init_devices_late,
called from btrfs_read_roots, which is in turn called by init_tree_roots.
That last function runs before btrfs_sysfs_add_mounted. This means that,
as far as fs_devices is concerned, its fs_info is now set earlier, which
is correct.
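
For reference, the relevant call ordering in open_ctree, simplified down
to just the functions mentioned above:

	open_ctree()
	  init_tree_roots()
	    btrfs_read_roots()
	      btrfs_init_devices_late()	/* fs_info set here (new) */
	  ...
	  btrfs_sysfs_add_mounted()	/* fs_info was set here (old),
					   via btrfs_set_fs_info_ptr() */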

Now, let's think about clearing it. The old code cleared it in
btrfs_sysfs_remove_mounted, which is called from:

1. the fail_sysfs label in open_ctree
2. close_ctree
3. a failure in btrfs_sysfs_add_mounted, which does goto fail_fsdev_sysfs

So how are those three situations handled by the new code?

1 and 3: on error, the last function called in open_ctree is
btrfs_close_devices, and both failure labels come before it, so we are
guaranteed that on failure in open_ctree fs_devices will have its
fs_info set to NULL.

2. Again, the last function called in close_ctree is btrfs_close_devices,
so we are guaranteed that on unmount fs_devices will have its fs_info
set to NULL.
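
Put differently, after the patch the lifetime pairs up symmetrically.
A minimal sketch of the intent (not the literal patch hunks):

	/* set while reading the roots, via btrfs_init_devices_late() */
	fs_devices->fs_info = fs_info;

	/* cleared in btrfs_close_devices(), the last call on both the
	 * error and the unmount paths */
	fs_devices->fs_info = NULL;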

Thus the patch doesn't introduce any functional changes, but IMO it
makes the code more coherent.


Regarding patch 5: I don't know what made you think there is a
tree-like structure involved. Simply look at the old way the seed
devices are iterated:

	while (seed_devices) {
		fs_devices = seed_devices;
		seed_devices = fs_devices->seed;
		close_fs_devices(fs_devices);
		free_fs_devices(fs_devices);
	}
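
With the list API the same teardown becomes a standard safe traversal.
A sketch, assuming the ->seed pointer is replaced with a list_head
member (called seed_list here):

	struct btrfs_fs_devices *seed_dev, *tmp;

	list_for_each_entry_safe(seed_dev, tmp, &fs_devices->seed_list,
				 seed_list) {
		close_fs_devices(seed_dev);
		list_del(&seed_dev->seed_list);
		free_fs_devices(seed_dev);
	}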


There's no conditional deciding whether we should go left or right, as
there would be in a tree. Or take, for example, deleting from the list
of seed devices:

		tmp_fs_devices = fs_info->fs_devices;
		while (tmp_fs_devices) {
			if (tmp_fs_devices->seed == fs_devices) {
				tmp_fs_devices->seed = fs_devices->seed;
				break;
			}
			tmp_fs_devices = tmp_fs_devices->seed;
		}

Here a simple linear search is performed, and when a member of the seed
list points to our fs_devices, it is simply made to point one past our
fs_devices.

When the if inside the loop matches, we have the following situation:

[tmp_fs_devices] -> [fs_devices] -> [fs_devices->seed]

Then the assignment turns this into [tmp_fs_devices] -> [fs_devices->seed].

So it's a simple list deletion; the very fact that it caused confusion
shows the old code is rather wonky.
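
For comparison, with the same hypothetical seed_list member the whole
deletion loop above collapses to a single call:

	list_del_init(&fs_devices->seed_list);

No linear search is needed at all, because a list_head already knows
its neighbours.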




