File: [NetBSD Developer Wiki] / wikisrc / projects / project / scalable-entropy.mdwn
Revision 1.2: Wed Sep 9 13:32:12 2020 UTC, by kim ("Use man template")

[[!template id=project

title="Scalable entropy gathering"

contact="""
Taylor R Campbell
"""

duration="2-3 months"

description="""

The kernel entropy pool provides unpredictable secrets via the
 [[!template id=man name="rnd" section="4"]] (/dev/urandom)
 device for applications doing cryptography, Monte Carlo simulations,
 and other randomized algorithms.
To provide unpredictable secrets, the kernel observes various physical
 hardware devices attached to it and combines the individually
 low-entropy observations into a high-entropy pool.
Sources include

- hardware random number generators,
- keystrokes and their timing,
- disk seek times,
- network packet timing,
- page fault times,

and various others.

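To illustrate what one of these observations looks like, a driver can
 derive a low-entropy sample from event timing.
The sketch below is hypothetical, not NetBSD driver code: it takes the
 low bits of a high-resolution userland clock, standing in for the
 cycle counter a kernel driver would read at interrupt time.

```python
import time

def timing_sample() -> bytes:
    """Derive a small, low-entropy observation from event timing.

    Illustrative only: real drivers record a cycle counter at
    interrupt time; here we take the low 32 bits of a
    high-resolution clock instead.
    """
    return (time.perf_counter_ns() & 0xffffffff).to_bytes(4, "little")
```

Each such sample carries only a few bits of entropy; the pool's job is
 to accumulate many of them into a high-entropy whole.
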
Currently, the observations are put into a single global queue, and
 then mixed with an ad hoc array of 128 linear feedback shift
 registers, which is hashed with SHA-1 to produce outputs.
There are two problems with this:

1. The single global queue is a point of contention for gathering
 observations: every disk seek, every network packet, every page fault
 must acquire a global lock to enter data into the global queue for
 later processing.
2. The cryptographic construction of the pool -- array of 128 linear
 feedback shift registers fed into SHA-1 -- is ad hoc, has never been
 analyzed by a cryptographer, and uses questionable components.

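For readers unfamiliar with the primitive in question, a linear
 feedback shift register is a purely linear state update over GF(2).
The step function below is a generic illustration with an arbitrary
 tap mask; it is not the polynomial NetBSD's pool actually uses.

```python
def lfsr_step(state: int, taps: int = 0x80000057, nbits: int = 32) -> int:
    """One step of a 32-bit linear feedback shift register.

    The feedback bit is the parity (xor) of the tapped state bits;
    the register shifts left and the feedback enters at the bottom.
    The tap mask here is illustrative, not NetBSD's polynomial.
    """
    feedback = bin(state & taps).count("1") & 1
    return ((state << 1) | feedback) & ((1 << nbits) - 1)
```

Because the update is linear, an LFSR on its own is trivially
 predictable, which is why the pool's security rests entirely on the
 hash applied to its contents.
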
Using a queue instead of entering into the pool directly reduces the
 latency of each operation, but the contention over a single global
 resource remains, and as computers become more parallel, the cost of
 that contention grows.
*Entering* entropy should be cheap; *extracting* entropy can be
 expensive, because once you have a single 256-bit unpredictable secret,
 you can efficiently expand it into an arbitrarily long stream of
 unpredictable bits.

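The expansion step can be sketched as hashing the secret together with
 a counter.
This is an illustrative hash-in-counter-mode generator, not the
 construction the kernel uses:

```python
import hashlib

def expand(seed: bytes, nbytes: int) -> bytes:
    """Expand a 256-bit seed into nbytes of output by hashing
    seed || counter.

    Illustrative hash-in-counter-mode PRNG: given an unpredictable
    seed, each SHA-256 output block is unpredictable, so the stream
    can be made as long as needed without touching the pool again.
    """
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:nbytes]
```

This is why extraction need only happen rarely: one expensive gather
 of 256 unpredictable bits amortizes over any amount of output.
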
The entropy pool should be replaced by a per-CPU collection of Keccak
 sponges, the primitive inside SHA-3.
Entering data into the pool can be a matter of xoring into a buffer on
 the local CPU, and occasionally permuting the 1600-bit state;
 extracting data from the pool requires a CPU cross-call to gather
 data from all the per-CPU pools.

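The shape of the proposed design can be modeled in a few lines: each
 CPU xors observations into its own buffer, and only extraction has to
 visit every buffer.
This toy model substitutes SHA-256 for the Keccak sponge and a plain
 loop for the kernel's CPU cross-call; the class and method names are
 hypothetical, not the draft's API.

```python
import hashlib

class PerCPUPool:
    """Toy model of per-CPU entropy pools.

    enter() touches only the calling CPU's buffer (the cheap,
    contention-free path); extract() walks every per-CPU buffer,
    standing in for the kernel's expensive CPU cross-call.
    """

    BUFLEN = 200  # bytes -- same size as the 1600-bit Keccak state

    def __init__(self, ncpus: int):
        self.bufs = [bytearray(self.BUFLEN) for _ in range(ncpus)]
        self.pos = [0] * ncpus

    def enter(self, cpu: int, sample: bytes) -> None:
        # Cheap path: xor the observation into the local buffer only.
        for b in sample:
            self.bufs[cpu][self.pos[cpu]] ^= b
            self.pos[cpu] = (self.pos[cpu] + 1) % self.BUFLEN

    def extract(self) -> bytes:
        # Expensive path: hash together every per-CPU buffer.
        # (SHA-256 stands in for the Keccak sponge here.)
        h = hashlib.sha256()
        for buf in self.bufs:
            h.update(buf)
        return h.digest()
```

Because `enter()` never takes a shared lock in this model, its cost is
 independent of the number of CPUs, which is the point of the design.
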
This project has an untested draft to start with.
Subtle issues in the project include avoiding deadlocks in the
 protocols for gathering entropy on demand and waking readers waiting
 for entropy before the system has gathered enough.
"""

]]

[[!tag gsoc]]
