On Tue, Sep 04, 2018 at 04:28:49PM +0100, Liam Girdwood wrote:
On Tue, 2018-09-04 at 16:03 +0100, Mark Brown wrote:
I was thinking the code could keep reading it the way it does now and just have a trace event that dumps it into the core trace infrastructure as it reads things in.
Most of the data is binary at the moment, so we were decoding it in userspace. Any objection to decoding in the kernel as different trace events?
I wouldn't think so.
The way the trace stuff works is that the writer dumps blocks of data into the trace buffer in raw binary form, in a structured block tagged with the event type. Later on, when something reads out the buffer, the entries can be formatted as desired, including things like hex dumps (IIRC there are facilities to just get the raw buffer entries too, but I've only ever used the human readable interface myself, writing software is hard). This gives very fast writes, which is really nice for anything performance sensitive, and means you can leave trace on a lot more than you would otherwise. I'd guess that if you were decoding it'd just be a matter of splitting the buffer up into per message chunks.
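To make the "raw write, format on read" split concrete, a rough sketch of what such an event might look like is below. The event name, header guard and IPC framing here are made up for illustration, not the actual SOF code; the point is just that TP_fast_assign() only memcpy()s the binary payload into the ring buffer, while TP_printk() does the hex-dump formatting lazily when something reads the trace.

/* Hypothetical trace header, e.g. include/trace/events/sof_ipc.h */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM sof_ipc

#if !defined(_TRACE_SOF_IPC_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_SOF_IPC_H

#include <linux/tracepoint.h>

/*
 * Log one raw IPC message as it is read in.  The write path just copies
 * the binary payload; decoding to human readable form happens at read
 * time via TP_printk().
 */
TRACE_EVENT(sof_ipc_msg,

	TP_PROTO(const void *data, size_t len),

	TP_ARGS(data, len),

	TP_STRUCT__entry(
		__field(size_t, len)
		__dynamic_array(u8, buf, len)
	),

	TP_fast_assign(
		__entry->len = len;
		memcpy(__get_dynamic_array(buf), data, len);
	),

	/* Only evaluated when the trace buffer is read out */
	TP_printk("len=%zu data=%s", __entry->len,
		  __print_hex(__get_dynamic_array(buf), __entry->len))
);

#endif /* _TRACE_SOF_IPC_H */

/* This part must be outside the include guard */
#include <trace/define_trace.h>

If you wanted per-field decoding rather than a hex dump you'd add more __field() entries and pull them apart in TP_fast_assign(), or define separate events per message type, but the fast-write/slow-format split stays the same.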