On Mon 11-12-17 16:55:30, Josef Bacik wrote:
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 356a814e7c8e..48de090f5a07 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -179,9 +179,19 @@ enum node_stat_item {
> NR_VMSCAN_IMMEDIATE, /* Prioritise for reclaim when writeback ends */
> NR_DIRTIED, /* page dirtyings since bootup */
> NR_WRITTEN, /* page writings since bootup */
> + NR_METADATA_DIRTY_BYTES, /* Metadata dirty bytes */
> + NR_METADATA_WRITEBACK_BYTES, /* Metadata writeback bytes */
> + NR_METADATA_BYTES, /* total metadata bytes in use. */
> NR_VM_NODE_STAT_ITEMS
> };
Please add here something like: "Warning: These counters will overflow on
32-bit machines if we ever have more than 2G of metadata on such a machine!
But the kernel won't be able to address that easily either so it should not
be a real issue."
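
For concreteness, the hunk could then read something like this (just a
sketch, exact wording up to you):

enum node_stat_item {
	...
	NR_WRITTEN,			/* page writings since bootup */
	/*
	 * Warning: These counters will overflow on 32-bit machines if we
	 * ever have more than 2G of metadata on such a machine! But the
	 * kernel won't be able to address that easily either so it should
	 * not be a real issue.
	 */
	NR_METADATA_DIRTY_BYTES,	/* Metadata dirty bytes */
	NR_METADATA_WRITEBACK_BYTES,	/* Metadata writeback bytes */
	NR_METADATA_BYTES,		/* total metadata bytes in use. */
	NR_VM_NODE_STAT_ITEMS
};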
> diff --git a/mm/vmstat.c b/mm/vmstat.c
> index 4bb13e72ac97..0b32e6381590 100644
> --- a/mm/vmstat.c
> +++ b/mm/vmstat.c
> @@ -273,6 +273,13 @@ void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
>
> t = __this_cpu_read(pcp->stat_threshold);
>
> + /*
> + * If this item is counted in bytes and not pages adjust the threshold
> + * accordingly.
> + */
> + if (is_bytes_node_stat(item))
> + t <<= PAGE_SHIFT;
> +
> if (unlikely(x > t || x < -t)) {
> node_page_state_add(x, pgdat, item);
> x = 0;
This is wrong. The per-cpu deltas are stored in s8, so you cannot just
bump the threshold: once it is shifted by PAGE_SHIFT, __this_cpu_write()
will truncate accumulated values that no longer fit in the s8 diff. I
would just bypass the PCP counters for metadata (I don't think they are
that critical for performance for metadata tracking) and add to the
comment I've suggested above: "Also note that updates to these counters
won't be batched using per-cpu counters since the updates are generally
larger than the counter threshold."
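
I.e., something like the below (completely untested sketch, assuming the
is_bytes_node_stat() helper from your patch):

void __mod_node_page_state(struct pglist_data *pgdat, enum node_stat_item item,
			   long delta)
{
	struct per_cpu_nodestat __percpu *pcp = pgdat->per_cpu_nodestats;
	s8 __percpu *p = pcp->vm_node_stat_diff + item;
	long x;
	long t;

	/*
	 * Byte counters bypass the per-cpu batching entirely: their
	 * deltas are usually larger than any threshold that fits in the
	 * s8 diff, so go straight to the node counter.
	 */
	if (is_bytes_node_stat(item)) {
		node_page_state_add(delta, pgdat, item);
		return;
	}

	x = delta + __this_cpu_read(*p);
	t = __this_cpu_read(pcp->stat_threshold);

	if (unlikely(x > t || x < -t)) {
		node_page_state_add(x, pgdat, item);
		x = 0;
	}
	__this_cpu_write(*p, x);
}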
Honza
--
Jan Kara <jack@xxxxxxxx>
SUSE Labs, CR