On Tue, Jan 07, 2020 at 06:23:45PM +0300, Cengiz Can wrote:
> On January 6, 2020 18:53:40 David Sterba <dsterba@xxxxxxx> wrote:
> > The overflow can't happen in practice. Taking generous maximum value for
> > an item and sectorsize (64K) and doing the math will reach nowhere the
> > overflow limit for 32bit type:
> >
> > 64K / 4 * 64K = 1G
>
> I didn't know that. Thanks for sharing.
>
> > I'm not sure if this is worth adding the cast, or mark the coverity
> > issue as not important.
>
> If the cast is adding any overhead (which I don't think it does) I would
> kindly request adding it for the sake of completeness.

It's not a runtime overhead, but typecasts should not be needed in
general, and when there is one it should be there for a reason.
Sometimes the reason is apparent and does not need a comment to explain
why, but otherwise I see it as "overhead" when reading the code.

Lots of calculations done in btrfs fit perfectly into 32 bits, e.g. the
b-tree node or page-related ones. Where a cast does matter is e.g. the
conversion from/to bio::bi_sector to/from btrfs logical addresses that
are u64, where the interface type is unsigned long and not a fixed
width.

The size constraints of the variables used in the reported expression
are known to developers, so I tend to think the typecast is not
necessary. Maybe the static checker tool could be improved to know the
invariants that are e.g. verified in fs/btrfs/disk-io.c:validate_super.
