
Conversation

@dinosaure
Collaborator

As spotted here: https://github.com/unikraft/unikraft/blob/staging/drivers/virtio/ring/virtio_ring.c#L54. The throughput goes up to ~300MB/s instead of 260MB/s (/cc @shym, who did some benchmarks on hvt).

#include "virtio_pci.h"

/*
* There is no official max queue size. But we've seen 4096, so let's use the
Contributor


The comment should probably also be adjusted (or dropped) to reflect the change.

@shym
Contributor

shym commented Jul 17, 2025

I’ve added solo5-virtio to the network benchmarks we had run and ran it both with the released Solo5 and this PR. I get the following results.
[benchmark graph]
@dinosaure: did you see better results for this PR in your tests? (in particular, I don’t think the difference is significant)
@Firobe: are the baseline results for solo5-virtio similar to what you had measured?
(And sorry if you saw the first draft of this message; I had used the wrong measures for some results.)

@dinosaure
Collaborator Author

I need to check, but I'm currently working on several fronts, mostly on utcp. It's hard to pinpoint the true bottleneck between unikraft and solo5. I also need to check whether Unikraft enables TSO or not (whereas Solo5 does nothing in that regard). A quick look tells me that the ring queue of solo5-virtio is not so different from unikraft's, but I'm not an expert on this target and I need to double-check several implementations (like FreeBSD's).

I would also like to ping @ricarkol, who did the initial implementation of virtio, but if he does not have the time (or the interest) to look into it, that's completely understandable.

@Firobe
Contributor

Firobe commented Jul 17, 2025

@Firobe: are the baseline results for solo5-virtio similar to what you had measured? (And sorry if you saw the first draft of this message, I had used the wrong measures for some results)

Yes, I get similar results (included below), including the non-significant difference for this PR. Keep in mind that this specific benchmark measures a relatively narrow use case (mostly receiving data); different tests might bring out more significant improvements.

My results: [graph]

@reynir
Contributor

reynir commented Oct 8, 2025

The value (or macro, really) VIRTQ_MAX_QUEUE_SIZE is only used once, in an assertion:

assert(vq->num <= VIRTQ_MAX_QUEUE_SIZE);

So I think this change doesn't affect performance in any way.

@reynir
Contributor

reynir commented Oct 8, 2025

I tried to follow the unikraft code to figure out where the value nr_desc comes from in that function, but it was a deep rabbit hole of tracing back other functions and function pointers stored in structs. I don't think VIRTQ_MAX_QUEUE_SIZE is the size of the queue in unikraft. Possibly it's a value known to be likely greater than nr_desc, but then it's odd that they don't assert that.

5 participants