I have two processors, A and B, connected directly by Ethernet.
B happens to be a DSP, but A is a PC running either W7 or XP - the o/s makes no apparent difference to the behaviour. The only other thing running on the PC is WireShark.
B writes N UDP packets, each carrying 1024 bytes of data, to A.
B then reads one such packet from A.
A does the corresponding thing (it reads the N packets, then writes one back).
This is all repeated many times.
All packets are well formed (correct checksums etc.) and the expected data are transferred.
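To make the setup concrete, here is a minimal sketch of what the loop on A (the PC) might look like, assuming a blocking Winsock UDP socket. The port number is a placeholder and error checking is omitted; none of these details are taken from the actual code.

```c
/* Sketch of the PC-side (A) loop: receive N datagrams from B, send one back.
 * Assumes a blocking Winsock UDP socket; PORT is a hypothetical placeholder.
 * Error checking omitted for brevity. Link against ws2_32. */
#include <winsock2.h>

#define PORT     5000   /* hypothetical port */
#define PKT_SIZE 1024
#define N        16     /* packets received per packet sent */

int main(void)
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

    struct sockaddr_in local = {0};
    local.sin_family      = AF_INET;
    local.sin_port        = htons(PORT);
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&local, sizeof(local));

    char buf[PKT_SIZE];
    struct sockaddr_in peer;
    int peerlen = sizeof(peer);

    /* Repeat the read-N / write-1 exchange indefinitely. */
    for (;;) {
        /* Read N packets from B ... */
        for (int i = 0; i < N; ++i)
            recvfrom(s, buf, sizeof(buf), 0, (struct sockaddr *)&peer, &peerlen);

        /* ... then write one packet back to B. */
        sendto(s, buf, sizeof(buf), 0, (struct sockaddr *)&peer, peerlen);
    }

    /* Not reached in this sketch. */
    closesocket(s);
    WSACleanup();
    return 0;
}
```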
With N=1 (so alternating reads and writes), looking at the traffic with WireShark running on A, I see
the intervals (in us) between packets being read and sent are typically 37, 321, 32, 206, 18, 245, ...
Do these figures seem reasonable?
Would you expect the write to take ten times longer than the read?
Assuming this behaviour is an artefact of buffering in Windows, is 200-300us an expected time
for a round trip (one 1KB packet in, one 1KB packet out)?
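One way to narrow down where the 200-300us goes would be to timestamp the send call itself on the PC. Below is a sketch using QueryPerformanceCounter; the socket and peer address are assumed to be set up as in the loop sketch above. If sendto() itself returns in a few microseconds while WireShark still shows the outgoing packet appearing roughly 300us after the read, the delay is being added below the socket call (stack, driver, or NIC) rather than in the application.

```c
/* Sketch: measure how long a single sendto() call takes on the PC side.
 * The socket and peer address are assumed to be set up as in the loop
 * sketch above. Returns the elapsed time in microseconds. */
#include <winsock2.h>

double timed_sendto(SOCKET s, const char *buf, int len,
                    const struct sockaddr_in *peer)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    sendto(s, buf, len, 0, (const struct sockaddr *)peer, sizeof(*peer));
    QueryPerformanceCounter(&t1);

    /* Convert counter ticks to microseconds. */
    return (double)(t1.QuadPart - t0.QuadPart) * 1e6 / (double)freq.QuadPart;
}
```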
With N=16, the write still takes about 300us, but the reads take 37, 11, 9, 8, 8, 9, 8, ...
It appears that the more consecutive reads I do, the closer the interval between arriving packets settles to around 8-9us.
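For what it's worth, 8-9us is close to the back-to-back wire time of one such packet if the link is gigabit Ethernet (an assumption; the link speed isn't stated above). A quick back-of-the-envelope check:

```c
/* Back-of-the-envelope check (assumes a 1 Gbit/s link, which is not stated
 * in the question): minimum wire time for one 1024-byte UDP datagram. */
#include <stdio.h>

int main(void)
{
    /* 1024 data + 8 UDP + 20 IP + 14 Ethernet + 4 FCS = 1070 bytes on the
     * wire; preamble and inter-frame gap add roughly another 20 byte-times. */
    const double frame_bytes = 1024 + 8 + 20 + 14 + 4;
    const double link_bps    = 1e9;   /* assumed gigabit link */

    double wire_us = frame_bytes * 8.0 / link_bps * 1e6;
    printf("wire time per packet: %.2f us\n", wire_us);   /* about 8.6 us */
    return 0;
}
```

If that assumption holds, the N=16 reads are simply arriving about as fast as the wire can deliver them.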
Changing the value of N has no consistent effect on the typical time of about 300us to see the write going out. This suggests that buffering is not responsible for the large differences between reads and writes.
The very strange thing is that quite often the write takes on the order of 1ms.
The overall effect of these large, and occasionally huge, write times is to reduce the throughput of the system considerably.
Measurements on the DSP show that processing reads and writes always takes the expected (very short) time.
Are there any tricks to getting consistent performance or is this just the way things work under Windows?