<whitequark[cis]>
galibert: %cpu is not proportional to bandwidth; it's more or less proportional to interrupt count
<whitequark[cis]>
use "explicit flush" (search for i_flush in the codebase)
<whitequark[cis]>
python can easily transfer 300 Mbps
<galibert[m]>
it's nowhere in glasgow git, so I guess you mean in amaranth?
<whitequark[cis]>
er, sorry, o_flush
<whitequark[cis]>
i'm not quite awake yet
<galibert[m]>
no problem
<galibert[m]>
ah yeah, lots of places
<whitequark[cis]>
basically you know TCP Nagle?
<whitequark[cis]>
we don't have that
<whitequark[cis]>
so by default, every time you send several (as few as one) consecutive bytes, it produces a packet on the host
<galibert[m]>
I thought Nagle was useful for routed networks and not direct p2p
<whitequark[cis]>
causing the entire processing chain to wake up and use up your CPU time
<whitequark[cis]>
Nagle is useful anytime there is a lot of overhead to transferring, say, 1 byte
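For reference, a minimal sketch of the TCP analogue being discussed: with Nagle on (the default for TCP sockets), small consecutive writes get coalesced into fewer packets; switching it off with TCP_NODELAY makes each small send() produce its own packet, which is the per-byte-packet behaviour the Glasgow FIFO has with auto-flush. The endpoint here is a placeholder.

    import socket

    # Placeholder endpoint; only the socket option matters for this sketch.
    s = socket.create_connection(("example.com", 80))

    # Default: Nagle coalesces small writes into fewer packets, trading a
    # little latency for much lower per-packet overhead.
    # Disabled: every small send() goes out as its own packet.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

    s.sendall(b"x")  # with TCP_NODELAY set, this one byte is one packet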
<galibert[m]>
ok, so disable auto-flush, flush every 16 bytes (my frame size)?
<whitequark[cis]>
disable auto-flush, flush however often you want (but not never, or you'll only get data back every megabyte or so)
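A minimal Amaranth sketch of that approach, assuming the in-FIFO was requested with get_in_fifo(auto_flush=False) and exposes the flush signal discussed above; the byte counter and the FrameSubtarget/data/valid names are illustrative, not taken from the Glasgow codebase:

    from amaranth import Elaboratable, Module, Signal

    class FrameSubtarget(Elaboratable):
        def __init__(self, in_fifo):
            # in_fifo obtained via iface.get_in_fifo(auto_flush=False)
            self.in_fifo = in_fifo
            self.data    = Signal(8)   # byte stream produced by the applet
            self.valid   = Signal()

        def elaborate(self, platform):
            m = Module()
            offset = Signal(range(16))  # position within the 16-byte frame
            m.d.comb += [
                self.in_fifo.w_data.eq(self.data),
                self.in_fifo.w_en.eq(self.valid & self.in_fifo.w_rdy),
            ]
            with m.If(self.in_fifo.w_en):
                m.d.sync += offset.eq(offset + 1)  # 4-bit counter wraps 15 -> 0
                # Assert flush on the last byte of each frame, so the host
                # sees one packet per frame instead of one per byte.
                m.d.comb += self.in_fifo.flush.eq(offset == 15)
            return m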
<galibert[m]>
you have a megabyte of buffering on the fpga?
<whitequark[cis]>
no, on the host
<galibert[m]>
I'm only sending data in the fpga -> host direction
<galibert[m]>
(for now)
<whitequark[cis]>
there is always buffering on both FPGA and host sides
<whitequark[cis]>
the default FPGA buffer size is 512 bytes, but with auto-flush enabled it'll barely get used unless you produce data at a very high rate
<galibert[m]>
ok, so it's: make a new signal, use it in in_flush=flush, and pass the flush signal down
<whitequark[cis]>
the default host buffer size is unlimited, but aside from an explicitly sized buffer (used for flow control), there is also the USB BULK buffer management
<whitequark[cis]>
it's the latter which will soak up data for rather a while
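On the host side, a sketch of draining those frames, assuming the iface.read(length) coroutine that Glasgow applets use to read from the in-FIFO; the FRAME_SIZE constant and read_frames helper are illustrative and must match the gateware above:

    FRAME_SIZE = 16  # must match the gateware's frame size

    async def read_frames(iface, count):
        # Each flush on the FPGA side delivers a complete frame promptly;
        # without flushing, data would sit in the buffers described above.
        for _ in range(count):
            frame = await iface.read(FRAME_SIZE)
            yield bytes(frame)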
<galibert[m]>
and it looks stable now
<whitequark[cis]>
yep. how much %cpu?
<galibert[m]>
I don't know if I'm losing packets, that's for later (train today), but the packets aren't losing bytes
<galibert[m]>
around 17%
<galibert[m]>
quite sane
<whitequark[cis]>
yeah
<whitequark[cis]>
folks on Windows and non-Windows systems, especially those with unusual (non-PDM) installations of Glasgow, but who use YoWASP: please test PR #907