Hi Mathieu,
On Fri, Sep 18, 2020 at 09:52:49AM -0600, Mathieu Poirier wrote:
Good morning,
On Fri, Sep 18, 2020 at 11:02:29AM +0200, Guennadi Liakhovetski wrote:
Hi Mathieu,
On Thu, Sep 17, 2020 at 04:01:38PM -0600, Mathieu Poirier wrote:
On Thu, Sep 10, 2020 at 01:13:51PM +0200, Guennadi Liakhovetski wrote:
Linux supports running the RPMsg protocol over the VirtIO transport protocol, but currently there is only support for VirtIO clients and no support for VirtIO servers. This patch adds a vhost-based RPMsg server implementation, which makes it possible to use RPMsg over VirtIO between guest VMs and the host.
I now get the client/server concept you are describing above, but that only happened after a lot of mental gymnastics. If you drop the whole client/server concept and concentrate on what this patch does, things will go over better. I would personally go with what you have in the Kconfig:
Vhost RPMsg API allows vhost drivers to communicate with VirtIO
drivers on guest VMs, using the RPMsg over VirtIO protocol.
It is concise but describes exactly what this patch provides.
Ok, thanks, will try to improve.
Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
 drivers/vhost/Kconfig       |   7 +
 drivers/vhost/Makefile      |   3 +
 drivers/vhost/rpmsg.c       | 370 ++++++++++++++++++++++++++++++++++++
 drivers/vhost/vhost_rpmsg.h |  74 ++++++++
 4 files changed, 454 insertions(+)
 create mode 100644 drivers/vhost/rpmsg.c
 create mode 100644 drivers/vhost/vhost_rpmsg.h
[snip]
diff --git a/drivers/vhost/rpmsg.c b/drivers/vhost/rpmsg.c
new file mode 100644
index 000000000000..0ddee5b5f017
--- /dev/null
+++ b/drivers/vhost/rpmsg.c
@@ -0,0 +1,370 @@
[snip]
+/*
+ * Return false to terminate the external loop only if we fail to obtain either
+ * a request or a response buffer
+ */
+static bool handle_rpmsg_req_single(struct vhost_rpmsg *vr,
+				    struct vhost_virtqueue *vq)
+{
+	struct vhost_rpmsg_iter iter;
+	int ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_REQUEST, -EINVAL);
+	if (!ret)
+		ret = vhost_rpmsg_finish_unlock(vr, &iter);
+	if (ret < 0) {
+		if (ret != -EAGAIN)
+			vq_err(vq, "%s(): RPMSG processing failed %d\n",
+			       __func__, ret);
+		return false;
+	}
+
+	if (!iter.ept->write)
+		return true;
+
+	ret = vhost_rpmsg_start_lock(vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);
+	if (!ret)
+		ret = vhost_rpmsg_finish_unlock(vr, &iter);
+	if (ret < 0) {
+		vq_err(vq, "%s(): RPMSG finalising failed %d\n", __func__, ret);
+		return false;
+	}
As I said before, dealing with the "response" queue here seems to introduce coupling with vhost_rpmsg_start_lock()... Endpoints should be doing that.
Sorry, could you elaborate a bit on what you mean by coupling?
In vhost_rpmsg_start_lock() the rpmsg header is prepared for a response at the end of the processing associated with the reception of a VIRTIO_RPMSG_REQUEST. I assumed (perhaps wrongly) that such a response was sent here. In that case, preparing the response and sending it should be done in the same place.
This will change in the next version: there I'll remove the response preparation from the request handling.
But my assumption may be completely wrong... A better question is probably: why is the VIRTIO_RPMSG_RESPONSE queue probed in handle_rpmsg_req_single()? Shouldn't this function be solely concerned with handling requests from the guest? If I'm wondering what is going on, other people probably will too, something that could be alleviated with more comments.
My RPMsg implementation supports two modes for sending data from the host (in VM terms) to guests: as responses to their requests and as asynchronous messages. If there isn't a strict request-response pattern on a certain endpoint, you leave the .write callback NULL and then send your messages as you please, independently of requests. But you can also provide a .write callback, in which case a response is generated after each request.
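To make that a bit more concrete, here is a rough sketch of the two endpoint flavours (illustrative only: the struct layout and callback prototypes below follow the description above, the authoritative definitions are in drivers/vhost/vhost_rpmsg.h and the names are made up):

	/* Illustrative sketch only, see vhost_rpmsg.h for the real prototypes. */
	#include "vhost_rpmsg.h"

	static ssize_t my_ept_read(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter)
	{
		/* Copy the request out of the guest buffer and remember what to answer. */
		return 0;
	}

	static ssize_t my_ept_write(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter)
	{
		/* Copy the prepared response into the guest buffer. */
		return 0;
	}

	/*
	 * Request-response endpoint: .write is set, so after each request
	 * handle_rpmsg_req_single() also takes a buffer from the
	 * VIRTIO_RPMSG_RESPONSE queue and calls .write to fill it in.
	 */
	static const struct vhost_rpmsg_ept my_req_resp_ept = {
		.read	= my_ept_read,
		.write	= my_ept_write,
	};

	/*
	 * Asynchronous endpoint: .write is left NULL, so no response is
	 * generated automatically and the driver sends messages to the guest
	 * whenever it wants, independently of requests.
	 */
	static const struct vhost_rpmsg_ept my_async_ept = {
		.read	= my_ept_read,
		.write	= NULL,
	};

Both endpoints would be registered with the vhost-rpmsg core at probe time.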
In principle this response handling could be removed, but then drivers that do need to respond to requests would have to schedule an asynchronous action in their .read callbacks, to be triggered after request processing has completed.
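Something like the following sketch (again with made-up names): the reply is deferred from .read to a work item and then sent with the same start/finish pair on the response queue that handle_rpmsg_req_single() uses above.

	/* Illustrative sketch only: defer the reply from .read to a work item. */
	#include <linux/workqueue.h>
	#include "vhost_rpmsg.h"

	struct my_vhost_dev {
		struct vhost_rpmsg vr;
		struct work_struct resp_work;	/* INIT_WORK(..., my_send_response) at probe time */
	};

	static void my_send_response(struct work_struct *work)
	{
		struct my_vhost_dev *dev = container_of(work, struct my_vhost_dev, resp_work);
		struct vhost_rpmsg_iter iter;
		/* -EINVAL here just mirrors the call in handle_rpmsg_req_single() above. */
		int ret = vhost_rpmsg_start_lock(&dev->vr, &iter, VIRTIO_RPMSG_RESPONSE, -EINVAL);

		if (!ret)
			ret = vhost_rpmsg_finish_unlock(&dev->vr, &iter);
		if (ret < 0)
			pr_err("%s(): sending response failed %d\n", __func__, ret);
	}

	static ssize_t my_ept_read(struct vhost_rpmsg *vr, struct vhost_rpmsg_iter *iter)
	{
		struct my_vhost_dev *dev = container_of(vr, struct my_vhost_dev, vr);

		/*
		 * Consume the request here, then defer the reply until after
		 * request processing has completed.
		 */
		schedule_work(&dev->resp_work);
		return 0;
	}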
Thanks
Guennadi