On Thu, May 08, 2014 at 07:16:17PM -0400, Zach Brown wrote:
> The compression layer seems to have been built to return -1 and have
> callers make up errors that make sense. This isn't great because there
> are different classes of errors that originate down in the compression
> layer. Allocation failure and corrupt compressed data to name two.
>
> --- a/fs/btrfs/lzo.c
> +++ b/fs/btrfs/lzo.c
> @@ -143,7 +143,7 @@ static int lzo_compress_pages(struct list_head *ws,
> if (ret != LZO_E_OK) {
> printk(KERN_DEBUG "BTRFS: deflate in loop returned %d\n",
> ret);
> - ret = -1;
> + ret = -EIO;
> goto out;
> }
>
> @@ -189,7 +189,7 @@ static int lzo_compress_pages(struct list_head *ws,
> kunmap(out_page);
> if (nr_pages == nr_dest_pages) {
> out_page = NULL;
> - ret = -1;
> + ret = -EIO;
This is not a true EIO; the error condition means that the caller
prepared nr_dest_pages for the compressed data but the compression wants
more. The number of pages is at most 128k / PAGE_SIZE.
It's a soft error: the data are simply written uncompressed. The closest
errno here seems to be E2BIG, which would apply in the following hunk as
well.
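To illustrate (a sketch only -- do_compress(), submit_compressed() and
submit_uncompressed() are made-up stand-ins, not the real btrfs call
chain): with a distinct errno the caller can tell the soft case apart
from a hard failure instead of treating every nonzero return the same
way:

    static int write_extent(void *ctx)
    {
            int ret = do_compress(ctx);     /* e.g. lzo_compress_pages() */

            if (ret == -E2BIG)      /* soft: output didn't fit or didn't shrink */
                    return submit_uncompressed(ctx);
            if (ret)                /* -ENOMEM, -EIO, ...: hard failure */
                    return ret;

            return submit_compressed(ctx);
    }
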
> goto out;
> }
>
> @@ -208,7 +208,7 @@ static int lzo_compress_pages(struct list_head *ws,
>
> /* we're making it bigger, give up */
> if (tot_in > 8192 && tot_in < tot_out) {
> - ret = -1;
> + ret = -EIO;
Here, E2BIG.
> goto out;
> }
>
> @@ -335,7 +335,7 @@ cont:
> break;
>
> if (page_in_index + 1 >= total_pages_in) {
> - ret = -1;
> + ret = -EIO;
That looks like an internal error: we should never ask for more pages
than are in the input, so if we hit this the buffer offset calculations
are wrong.
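If we want to make that explicit, something along these lines (sketch
only; whether to keep -EIO or use e.g. -EUCLEAN to flag corruption is
just an illustration):

    if (page_in_index + 1 >= total_pages_in) {
            /*
             * Should be impossible: the compressed length covers all of
             * the input pages, so running past them means the buffer
             * offset bookkeeping (or the stored segment length) is bogus.
             */
            WARN_ON(1);
            ret = -EIO;             /* or -EUCLEAN to flag corruption */
            goto done;
    }
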
> goto done;
> }
>
Analogously, the same applies to zlib. The rest of the EIOs look ok.
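For zlib that would be the two corresponding spots in
zlib_compress_pages() -- roughly this, from memory, so the context may
not match the tree exactly:

    if (nr_pages == nr_dest_pages) {
            out_page = NULL;
            ret = -E2BIG;
            goto out;
    }
    ...
    /* we're making it bigger, give up */
    if (workspace->strm.total_in > 8192 &&
        workspace->strm.total_in < workspace->strm.total_out) {
            ret = -E2BIG;
            goto out;
    }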