
Re: [ANNOUNCE] cryptsetup 1.2.0-rc1 (test release candidate)

On 11/16/2010 09:58 AM, Milan Broz wrote:
>   Be very careful before changing default to blocking /dev/random use here.

Just my personal rant here:

Why I think using /dev/random is a strange idea:

I think that /dev/urandom should be usable for long-term key use. Period.

If you know about some problem there, please fix it. Use a better PRNG. Whatever.

There are many "entropy" sources that fill the entropy pool,
and several of them are (or will become) questionable in certain situations.

Just examples:

- Disk seek time as a source of entropy. What happens when you replace the HDD
with an SSD (with near-constant seek time)? (Recent kernels already disable this
source for non-rotational drives.)

- Fully virtualised environments. How can we pretend that events are random
if everything is virtualised and controlled by the hypervisor?
Can the hypervisor fake events in such a way that an application waiting for
/dev/random input gets mangled values? Is that a real risk for you or not?

- HW random generators. Can you prove that your favourite chip manufacturer
really generates "true random"(tm)? Can anyone fake it by hardware manipulation?
(Like manipulating the supply voltage, or whatever.)
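For what it's worth, you can watch how much entropy the kernel currently credits to its input pool, whatever the sources feeding it. A minimal sketch, assuming a Linux procfs; the path and the bits unit are standard kernel behaviour, nothing cryptsetup-specific:

```python
# Read the kernel's current entropy estimate (Linux only).
# /proc/sys/kernel/random/entropy_avail reports the pool's
# estimated entropy in bits.

def entropy_avail(path="/proc/sys/kernel/random/entropy_avail"):
    with open(path) as f:
        return int(f.read().strip())

if __name__ == "__main__":
    print(f"estimated entropy in pool: {entropy_avail()} bits")
```

Note that this number is only the kernel's own accounting estimate; it says nothing about the actual quality of the events that were credited.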

To me it seems better to have a well-defined PRNG (pseudo-random number generator)
in the kernel, designed around known and open algorithms
(an example would be the ANSI X9.31 PRNG based on CTR(AES-128)).
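To illustrate the shape of such a generator, here is a sketch of the X9.31 update function (I = E_K(DT); R = E_K(I xor V); V = E_K(R xor I)). The block cipher below is a stand-in built from SHA-256, purely so the sketch runs without a crypto library; the real X9.31 uses AES-128 (or originally 3DES), so treat this as structure only, never as a usable generator:

```python
# Structural sketch of the ANSI X9.31 PRNG. E_K here is a
# SHA-256-based STAND-IN for AES-128-ECB (assumption, NOT the
# real cipher) so the example is self-contained. Do not use
# this stand-in for real keys.
import hashlib

BLOCK = 16  # X9.31 works on 128-bit blocks

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc(key, block):
    # Stand-in for one-block AES-128-ECB encryption (assumption).
    return hashlib.sha256(key + block).digest()[:BLOCK]

def x931_blocks(key, seed_v, n, dt=b"\x00" * BLOCK):
    """Yield n pseudo-random 16-byte blocks per ANSI X9.31:
       I = E_K(DT); R = E_K(I xor V); V = E_K(R xor I)."""
    v = seed_v
    for _ in range(n):
        i = enc(key, dt)          # intermediate value from date/time block
        r = enc(key, xor(i, v))   # output block
        v = enc(key, xor(r, i))   # next state
        yield r

out = list(x931_blocks(b"k" * 16, b"s" * 16, 2))
```

The point is that everything above is a fixed, auditable algorithm: given the same key, seed, and DT input, anyone can reproduce and review the output path.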

It is interesting to see how various programs have tried to "fix" this problem.

See Truecrypt with its random pool and hash mixing.

See gcrypt, which tries to get 300 bytes from /dev/random just to initialise
its own "strong random pool".


Then read the "man urandom" page:

"A read from the /dev/urandom device will not block waiting for more entropy. As a result,
if there is not sufficient entropy in the entropy pool, the returned values are theoretically
vulnerable to a cryptographic attack on the algorithms used by the driver.
Knowledge of how to do this is not available in the current unclassified literature,
but it is theoretically possible that such an attack may exist. If this is a concern in your
application, use /dev/random instead."

and then, about reading /dev/random:

" ... so if any program reads more than 256 bits (32 bytes) from the kernel random pool
per invocation, or per reasonable reseed interval (not less than one minute), that should be
taken as a sign that its cryptography is not skilfully implemented."

An evil application can always exhaust the /dev/random pool, affecting other users,
or intentionally drive applications into a state where /dev/random blocks.
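The nonblocking pool has no such denial-of-service shape. A minimal sketch of what an application key generator could look like if it simply trusted the kernel; Python's os.urandom reads the kernel CSPRNG (/dev/urandom on Linux) and never blocks:

```python
import os

def generate_key(nbytes=32):
    # os.urandom reads the kernel's nonblocking CSPRNG and
    # returns immediately, regardless of the entropy estimate.
    return os.urandom(nbytes)

k1 = generate_key()
k2 = generate_key()
```

No userspace pool, no mixing code to get wrong, and no way for another process to make this call block.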

It seems everyone implements their own random pool to avoid this.

Why should every application try to solve this in the first place?
Why risk possible mistakes, in every application, in such a critical part of the
system as the key generator?

So. The basic idea of cryptsetup is to "setup" volumes and "use" kernel-provided
crypto. Not to fix the kernel RNG or crypto. Not to implement cryptographic primitives
itself and introduce possible mistakes.

If an encryption algorithm is broken or proven to not be strong enough, you
replace it with something better, right?
I think the same should apply to the /dev/urandom RNG.

I implemented random/urandom selection, so you can now do whatever you want with it.

But please think about it: if cryptsetup relies on the kernel for encryption, it should
trust its RNG too. Even a virtual machine with no entropy should, once seeded, provide
a reliable and nonblocking PRNG.

BTW, I'll be happy if you can provide links to literature and analyses related
to this problem.

dm-crypt mailing list
