On Mon, May 02, 2011 at 04:46:00PM +0200, Takashi Iwai wrote:
At Mon, 2 May 2011 15:40:20 +0100, Dimitris Papastamos wrote:
On Mon, May 02, 2011 at 04:29:01PM +0200, Takashi Iwai wrote:
At Mon, 2 May 2011 13:28:28 +0100, Dimitris Papastamos wrote:
This patch prepares the ground for the actual rbtree optimization patch, which will save a pointer to the last accessed register block used in either the read() or write() functions.
Each node manages a block of up to RBTREE_BLOCK_NUM registers. No two nodes may have overlapping blocks; currently there is no check in the code to scream in case that ever happens. Each block has a base and a top register, and all other registers lie between these two. Note that variable length blocks aren't supported. So if you have an interval [8, 15] and only some of those registers actually exist on the device, the block will hold the non-existent registers as zero. There is also no way of reporting that any of those non-existent registers were accessed/modified.
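For illustration, a node along the lines described above could look roughly like the following; the struct and field names here are placeholders of mine, not taken from the patch:

    #include <linux/rbtree.h>
    #include <linux/types.h>

    #define RBTREE_BLOCK_NUM 8	/* illustrative block size, not the patch's value */

    /* One rbtree node covering the fixed register interval [base_reg, top_reg]. */
    struct reg_block_node {
    	struct rb_node node;		/* linkage into the register rbtree */
    	unsigned int base_reg;		/* first register of the block */
    	unsigned int top_reg;		/* base_reg + RBTREE_BLOCK_NUM - 1 */
    	u16 value[RBTREE_BLOCK_NUM];	/* cached values; 0 for registers that
    					 * don't exist on the device */
    };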
The larger the block size, the more likely it is that at least one of the managed registers is non-zero, and therefore that the node will need to be allocated at initialization time, wasting space.
If register N is accessed and it is not part of any of the current blocks in the rbtree, a new node is created with a base register of floor(N / RBTREE_BLOCK_NUM) * RBTREE_BLOCK_NUM and a top register of base_register + RBTREE_BLOCK_NUM - 1. All other registers in the block are initialized as expected and the node is inserted into the tree.
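As a concrete (hypothetical) example, with RBTREE_BLOCK_NUM set to 8, an access to register 13 creates the block [8, 15]; this is plain C for illustration only, not the patch code:

    #include <stdio.h>

    #define RBTREE_BLOCK_NUM 8	/* illustrative block size */

    int main(void)
    {
    	unsigned int reg = 13;	/* register being accessed */

    	/* Integer division gives floor(reg / RBTREE_BLOCK_NUM). */
    	unsigned int base = (reg / RBTREE_BLOCK_NUM) * RBTREE_BLOCK_NUM;
    	unsigned int top = base + RBTREE_BLOCK_NUM - 1;

    	printf("register %u maps to block [%u, %u]\n", reg, base, top);
    	return 0;
    }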
Signed-off-by: Dimitris Papastamos <dp@opensource.wolfsonmicro.com>
Looking through the patch, I wonder whether it gives enough of a performance gain to justify the additional complexity. Did you measure it somehow?
This patch itself does not provide a performance benefit. The next patch in the series, which caches the last used block, is the one that provides the performance benefit.
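Roughly speaking, the fast path that follow-up patch adds would look like the sketch below; this reuses the hypothetical reg_block_node layout from earlier and is not the actual patch code:

    /*
     * Look up the block covering @reg, checking the most recently used block
     * before falling back to a full rbtree walk.
     */
    static struct reg_block_node *lookup_block(struct rb_root *root,
    					       struct reg_block_node **cached,
    					       unsigned int reg)
    {
    	struct reg_block_node *blk = *cached;
    	struct rb_node *n;

    	/* Fast path: the last accessed block often covers the register. */
    	if (blk && reg >= blk->base_reg && reg <= blk->top_reg)
    		return blk;

    	/* Slow path: ordinary rbtree walk keyed on the block interval. */
    	n = root->rb_node;
    	while (n) {
    		blk = rb_entry(n, struct reg_block_node, node);
    		if (reg < blk->base_reg)
    			n = n->rb_left;
    		else if (reg > blk->top_reg)
    			n = n->rb_right;
    		else {
    			*cached = blk;	/* remember for the next access */
    			return blk;
    		}
    	}

    	return NULL;	/* no block allocated for this register yet */
    }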
Yeah, sure, it was a question about this series of patches :)
I have planted some cache hit/miss counters in my debug code to measure this. Ideally it'd be nice for the codec drivers/machine drivers to be able to tune the block size at init time.
For a performance-improvement change like this, an objective measurement is always preferred. If only the hit/miss ratio matters, a hash table of a reasonable size is often a better option (and much simpler).
The original idea was to use an rbtree to decrease the memory footprint by only allocating nodes for registers that are non-zero. The same idea has been taken further to use a block of registers, so a node is only allocated if at least one register in the block is non-zero. This patch series hopes to make the entire process less CPU intensive while still keeping the low memory footprint.
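For reference, the memory-saving part boils down to an init-time check along these lines; again a sketch under my own naming assumptions (e.g. `defaults` holding the chip's default register values, RBTREE_BLOCK_NUM as in the earlier sketch), not the patch itself:

    /*
     * Only allocate a node for the block starting at @base if at least one of
     * its registers has a non-zero default; all-zero blocks stay unallocated
     * until a register in them is first written.
     */
    static bool block_has_nonzero_default(const u16 *defaults, unsigned int nregs,
    				          unsigned int base)
    {
    	unsigned int reg;

    	for (reg = base; reg < base + RBTREE_BLOCK_NUM && reg < nregs; reg++)
    		if (defaults[reg])
    			return true;

    	return false;
    }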
Thanks,
Dimitris