<galibert[m]>
So, I have an I2S-ish data capture going very well, which sends a 17-byte frame 44100 times per second through an in_pipe. How should I go about sending an 8-byte frame to the system every 1/44100 s, while avoiding underflow or overflow?
<whitequark[cis]>
if you want to avoid overflow and underflow you need an elasticity buffer with active feedback
<whitequark[cis]>
the USB Audio spec describes this arrangement, although the prose is not very good
<whitequark[cis]>
(they describe three different options. you're looking for the one they consider most complex although it's actually the easiest one to get right)
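(A minimal, non-Glasgow sketch of the arrangement whitequark describes, assuming a simple proportional feedback term and a hypothetical frame producer; the real thing lives in gateware, but the control loop has the same shape:)

```python
# Conceptual sketch only, not Glasgow gateware: an elasticity buffer with
# active feedback. The consumer drains one frame per tick at its fixed
# clock; the producer's rate is nudged toward keeping the buffer half
# full, so it neither underflows nor overflows.
from collections import deque

DEPTH = 64              # buffer depth in frames (assumed)
SETPOINT = DEPTH // 2   # aim to keep the buffer half full

buffer = deque()
producer_rate = 1.0     # frames per tick, nominally matched to the consumer
credit = 0.0

for tick in range(10_000):
    # Producer: accumulate fractional credit, emit whole frames.
    credit += producer_rate
    while credit >= 1.0 and len(buffer) < DEPTH:
        buffer.append(b"\x00" * 8)   # stand-in for one 8-byte frame
        credit -= 1.0

    # Consumer: drains exactly one frame per tick, if one is available.
    if buffer:
        buffer.popleft()

    # Feedback: proportional correction toward the setpoint.
    producer_rate = 1.0 + 0.01 * (SETPOINT - len(buffer))
```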
<galibert[m]>
well, the clock is the same on the uplink and the downlink, so there's intrinsic pacing in the system
<galibert[m]>
but I don't know what usb does in the middle
<whitequark[cis]>
anything
<whitequark[cis]>
you will get underflows (on a long enough timescale), since we're using bulk and not iso (and we can't really use iso because we don't expose USB packet boundaries to applets)
<galibert[m]>
we're talking seconds there
<whitequark[cis]>
hm?
<galibert[m]>
my longest captures (and hence uploads in the future) were like 20 seconds, with the more usual ones around 0.5s
<galibert[m]>
if the HyperRAM was available I'd just upload the sample and not bother about simultaneous anything
<whitequark[cis]>
i'm confused now
<whitequark[cis]>
if it's fine for you to upload/download the data as a batch, why care that it comes in 1/44000ths of whatever
<galibert[m]>
there isn't much memory on the fpga afaik?
<whitequark[cis]>
use the USB FIFO as your memory
<galibert[m]>
but yeah, batches are fine, but I have no idea how to do that cleanly within the Glasgow environment
<whitequark[cis]>
by doing less stuff
<whitequark[cis]>
keep the pacing logic where it is (near the DUT), remove all pacing-related anything from everything else
<galibert[m]>
DUT?
<whitequark[cis]>
Device Under Test
<galibert[m]>
If I have an async method sending up data continuously, will the backpressure be handled correctly automatically?
<galibert[m]>
or do I need to pace the sending somehow?
<whitequark[cis]>
load the entire input into the memory, or rather memories, with a single await self._pipe.send(all_of_your_data)
<whitequark[cis]>
the hardware abstractions handle everything else for you
<galibert[m]>
beautiful
<galibert[m]>
will the await block until everything is sent?
<whitequark[cis]>
it will block until everything is sent
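(A minimal host-side sketch of the batch approach described above; the class and method names are hypothetical, only the `await self._pipe.send(...)` call is taken from the discussion:)

```python
# Hypothetical host-side sketch; only the `await self._pipe.send(...)` call
# comes from the discussion above, the class and method names are made up.
class AudioBatchInterface:
    def __init__(self, pipe):
        self._pipe = pipe   # pipe object provided by the framework

    async def play(self, samples: bytes):
        # Hand over the whole capture in one call; the await does not
        # return until everything has been accepted downstream, and the
        # pipe's backpressure takes care of the pacing.
        await self._pipe.send(samples)
```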
<galibert[m]>
ok, so it will have its own thread
<whitequark[cis]>
there are no threads
<galibert[m]>
"thread"
<galibert[m]>
it's green threads, kinda
<whitequark[cis]>
no
<galibert[m]>
it isn't?
<whitequark[cis]>
it is fairly different from green threads
<whitequark[cis]>
actually it looks like the term "green threads" is wide enough it encompasses asyncio
<whitequark[cis]>
that's not what i thought it means but i guess that's fine
<galibert[m]>
I knew green threads (at the time, previous century) as cooperative, switching-on-system-block threads
<whitequark[cis]>
i knew green threads as threads that didn't use OS concurrency primitives or preemption, but still had you write normal blocking code
<galibert[m]>
ah yeah could be
<whitequark[cis]>
i.e. in a green thread if you call f() and it blocks, you can't observe it without resorting to external tools like timers or IO
<galibert[m]>
it's from a time where threads were still being defined anyway
<galibert[m]>
the concepts got cleaner with time
<whitequark[cis]>
in a python asyncio coroutine, the return value of f() tells you directly whether it's blocked or not
<whitequark[cis]>
which is why you need the async keyword, and await, and a fairly complicated compile time transformation that essentially turns straight line code into something equivalent to a deep tree of callbacks (but without the overhead associated with creating that many closures)
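(A small, self-contained illustration of that point, with nothing Glasgow-specific: `await` compiles down to yielding control, and stepping the coroutine by hand shows exactly where it blocks:)

```python
# Stepping a coroutine manually: the value returned from each step tells
# you directly whether it is suspended or finished.
import types

@types.coroutine
def suspend():
    yield                     # the bare suspension primitive

async def f():
    await suspend()           # "blocks" here
    return 42

coro = f()                    # calling f() runs none of its body yet
coro.send(None)               # runs up to the suspension point and yields
print("f() is blocked")
try:
    coro.send(None)           # resume past the suspension point
except StopIteration as exc:
    print("f() finished with", exc.value)   # 42
```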
<whitequark[cis]>
another thing green threads would typically do is replace your stack, like, $esp
<galibert[m]>
true
<whitequark[cis]>
(which would cause all sorts of exciting problems)
<whitequark[cis]>
meanwhile asyncio allocates everything on the normal python heap
<galibert[m]>
making new stacks was most of what they were doing tbh
<whitequark[cis]>
(sort of, the concept of asyncio "stack frame" is very dilute)
<galibert[m]>
in addition in interpreters the concept of stack frame gets complicated
<whitequark[cis]>
also true
josHua[m] has joined #glasgow
<josHua[m]>
yeah, reifying async/await is roughly what higher order typed compiler folks would call CPS conversion
<josHua[m]>
(which, my understanding is, is isomorphic to conversion to SSA form...........)
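(A toy illustration of CPS conversion, with made-up helper names: straight-line code becomes a chain of explicit continuations, which is roughly what the async/await machinery does under the hood:)

```python
# Direct style: each statement implicitly continues to the next one.
def compute(x):
    y = x + 1
    z = y * 2
    return z

# CPS: every step takes an explicit continuation `k` receiving its result,
# so the straight-line body becomes a chain of callbacks.
def add_one_cps(x, k):
    k(x + 1)

def double_cps(y, k):
    k(y * 2)

def compute_cps(x, k):
    add_one_cps(x, lambda y: double_cps(y, k))

assert compute(3) == 8
compute_cps(3, print)   # prints 8
```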
<galibert[m]>
DUHHHHHH
<galibert[m]>
spent a while looking through my capture code trying to understand why adding some output stuff broke it entirely. Just noticed I had switched off the synth