<JamesMunns[m]>
This protects you if you were doing something extra unsafe and tricky and were holding a pointer to the contents of the Node<T>, for example if there was an `UnsafeCell` inside the Node and you were doing... something with it
<JamesMunns[m]>
but if you have plain types, like say an `Option<Waker>`, that's `Unpin`, so you can just turn the `Pin<&mut Option<Waker>>` into an `&mut Option<Waker>`
<JamesMunns[m]>
but if you had spooky contents that make it `!Unpin`, you can't.
<JamesMunns[m]>
`Unpin` basically means "moving this item won't invalidate any unsafe pointers"
<DominicFischer[m>
Since my `T` is `Unpin`, it's kinda fine, but I still feel like it should be possible to just do `&mut T`, even if it means adding some restrictions elsewhere to prevent "and were holding a pointer to the contents of the Node<T>".
<DominicFischer[m>
I'm reading up on structural pinning and projections to see if it could be allowed
<JamesMunns[m]>
you could impl an IterMutUnpin or something that gives you an iterator over &muts
<JamesMunns[m]>
and bound it on T: Unpin
<JamesMunns[m]>
but if you just always give it, it must be unsafe, because it breaks the rules of pinning
<DominicFischer[m>
The contract can be, "you are not allowed to pin the mut references to T", then we can expose `&mut T`.
<JamesMunns[m]>
yeah, if you never give out a `Pin<&mut T>` I could see that working
<JamesMunns[m]>
<DominicFischer[m> "The contract can be, "you are..." <- Yeah turning &mut T into Pin<&mut T> is always unsafe, I think
<DominicFischer[m>
*if T is !Unpin yeah
<DominicFischer[m>
I'm not sure if PinList ought to have T be structurally pinned or not, but for now I'll defer that decision to the future
<JamesMunns[m]>
yeah, it is in cordyceps List because you might want to do additional spooky stuff
<JamesMunns[m]>
but if this is aimed at Normal Easy Mode Pinned Lists, then defaulting to never giving out a Pin<&mut T> seems reasonable
<DominicFischer[m>
If only Rust had a way to opt-out of structural pinning. Perhaps a `NoPin` type, which removes structural pinning.
<DominicFischer[m>
`Pin<&mut T>` won't give you `&mut T` if `T` is `!Unpin`, but
<DominicFischer[m>
`Pin<&mut NoPin<T>>` will give you `&mut T` even if `T` is `!Unpin`, but will never give you a `Pin<&mut T>` (unless `T` is `Unpin`).
<JamesMunns[m]>
Ununpin?
<DominicFischer[m>
Hahaha
<DominicFischer[m>
JamesMunns[m]: Basically yh
<i509vcb[m]>
Unpin::no_really_unpin
<DominicFischer[m>
I appreciate that I'm complicating things a bit 😂
<JamesMunns[m]>
it's fair! "technically correct but painful to use for normal tasks" is not great
<JamesMunns[m]>
We can have a separate version that retains structural pinning, or you can just use List in a mutex, which basically gives you that
thalesfragoso[m] has joined #rust-embedded
<thalesfragoso[m]>
<JamesMunns[m]> "but if this is aimed at Normal..." <- Isn't that an intrusive doubly linked list? I think you need Pin to ensure the drop runs.
<JamesMunns[m]>
the *node* is pinned, the item contained in the node doesn't have to be
<JamesMunns[m]>
when the node is dropped it is removed from the list
<thalesfragoso[m]>
Well, if it's intrusive, the node is the item.
<JamesMunns[m]>
the &mut you get back doesn't include the header pointers
<JamesMunns[m]>
that being said I impl'd drop on the handle instead of the node which is wrong :D
<JamesMunns[m]>
ugh ergot has the same mistake
<JamesMunns[m]>
I'll look at it some more this weekend :)
<thalesfragoso[m]>
<JamesMunns[m]> "ugh ergot has the same mistake" <- To be honest I just read the last few messages and assumed everything was about the List from cordyceps, which now I think it wasn't...
<thalesfragoso[m]>
Because on there the user defined struct (or the list's item) contains the links themselves, so it needs to be pinned.
<thalesfragoso[m]>
But if you're talking about some field T inside the node, then yes, you don't need to force structural pinning.
<thalesfragoso[m]>
But to be honest, if the T isn't Unpin, then it probably wants to be pinned for something. In that case, giving &mut T instead of Pin<&mut T> might just be a disservice.
<thalesfragoso[m]>
If it's Unpin, then giving pinned or normal references doesn't matter.
<JamesMunns[m]>
yeah, maybe having iter_mut bound on Unpin, and iter_pin_mut is the trick
<thalesfragoso[m]>
<JamesMunns[m]> "I think this fixes the soundness..." <- I will be sure to check that crate sometime, I have been checking cordyceps recently.
<JamesMunns[m]>
the idea is to not make people call get_mut or into_inner on every iteration
<JamesMunns[m]>
<JamesMunns[m]> "I think this fixes the soundness..." <- The fix might have provenance problems, if it does I probably know how to fix it, I need to poke it with Miri to check it out tho
<thalesfragoso[m]>
JamesMunns[m]: Got it.
<JamesMunns[m]>
<JamesMunns[m]> "the idea is to not make people..." <- but yeah the `get_mut()` one is basically just `get_mut_pin().map(Pin::into_inner)`
burrbull[m] has quit [Quit: Idle timeout reached: 172800s]
_whitelogger has joined #rust-embedded
ello- has joined #rust-embedded
ello_ has quit [Ping timeout: 260 seconds]
rainingmessages has quit [Quit: bye]
<JamesMunns[m]>
Released a first version of PinList, for anyone that wants an easier approach to intrusive linked lists :)
<DominicFischer[m>
Turns out I need to iterate through the list and do an async thing for each item.
<DominicFischer[m>
Ideally the iterator would only need the mutex to do `next()`, and remember its position without having to hold the mutex. Unfortunately this idea doesn't work as the nodes are free to remove themselves from the list whilst the mutex is unlocked.
<JamesMunns[m]>
yep, that's the tradeoff
<DominicFischer[m>
I think the only sensible way to achieve what I want is to treat the list as a queue and move the first node to the last position, then I can use the first node as a way to keep track of what I want to look at next.
<JamesMunns[m]>
Also an option!
<JamesMunns[m]>
I can also make an async mutex version, but again, that requires that nodes are statics, so you can never drop them
<DominicFischer[m>
Yeah that's a non-starter
<JamesMunns[m]>
but this is often fine in embedded context, where you can make your "socket" or whatever a `StaticCell` or similar.
<DominicFischer[m>
I think the Cursor wrapper might be the way
<JamesMunns[m]>
IMO you could probably even make the "socket" reusable, e.g. when it IS unlinked, it could be retaken and relinked, much like tasks work in embassy (it's essentially the same trick!)
<JamesMunns[m]>
but yeah, also open to adding the Cursor API! There's some design to be had, like "what happens to the handle when the node is unlinked"
<JamesMunns[m]>
which we could just say "whatever" to, and still lock the mutex even if the node is unlinked
<DominicFischer[m>
That could work but I don't know how many "sockets" I need upfront
<JamesMunns[m]>
oh?
<JamesMunns[m]>
Do you not end up with like one socket in each task or something?
<JamesMunns[m]>
you still have to store the Nodes somewhere today, so you kinda have to know how many you have, max at least
<DominicFischer[m>
Any USB device I connect may have up to 16 endpoints, it grows even more if I use a USB hub
<DominicFischer[m>
I need a "socket" for every active endpoint on the USB bus
<JamesMunns[m]>
Can you pass that to a task and spawn it, for example?
<JamesMunns[m]>
and use pool_size to mediate your maximum number of concurrent endpoints?
<JamesMunns[m]>
you can basically use tasks to dynamically handle the number of endpoints, spawning a task for each endpoint, that way the nodes are "just" stored in the tasks
<JamesMunns[m]>
then if the task ends, the node is dropped and removed
<DominicFischer[m>
Yeah that's sorta what I'm doing (but I don't use embassy because of the static requirements)
<JamesMunns[m]>
this is basically how I handle sockets in ergot: you spawn a task for each "service" socket you want to run
<DominicFischer[m>
I've got a DIY alloc executor atm
<JamesMunns[m]>
fair! if you have alloc, you can make a `Box<Pin<Node<T>>>` or similar
<DominicFischer[m>
I'm limiting the alloc to just the executor, as it can be swapped out easily in more constrained situations. The USB host itself should still be no-alloc
<JamesMunns[m]>
fair! you can still make your API "just" an async fn for serving each endpoint, and allow the user to handle how they spawn a task around the future
<DominicFischer[m>
Actually, putting aside the static issue, it's a little hard for me to reason about storing wakers in a data structure wrapped in an async mutex
<JamesMunns[m]>
oh?
<DominicFischer[m>
Like, I'd need a version of poll_fn that can take an async lambda?
<JamesMunns[m]>
not sure what you mean
<JamesMunns[m]>
you can have a poll_fn that gives you context which you can use to put a waker inside the Handle
<JamesMunns[m]>
like, you can separate the "get the context" part from "do the async thing" part
<DominicFischer[m>
Yes but the "put a waker inside the Handle" step requires locking an async mutex
<JamesMunns[m]>
where get_my_waker_and_wait_one_wake is a poll_fn that first gets the context and puts the waker in the handle guard, then drops the guard, then yields until reawoken
<JamesMunns[m]>
(you might want a more clever signalling or something, but you get the idea)
<DominicFischer[m>
I think that's a deadlock
<JamesMunns[m]>
why?
<JamesMunns[m]>
not if you drop the guard inside the poll_fn
<DominicFischer[m>
ohhh
<JamesMunns[m]>
like, you only put the waker in once, then wait for the next wake, and probably return whether you were woken and achieved what you were really waiting for, or if it was a spurious wake
<JamesMunns[m]>
(you might need to do that in a loop until success)
<DominicFischer[m>
If I drop the guard, I can't deal with the waker changing. i.e. I need to clone the new waker if poll is called again
<JamesMunns[m]>
Another option is to store a `WaitQueue` with the `PinList`, and just have the endpoints register their interest there, and wake them to check if they've been given data, but that might require more wakes (since every waiting node needs to check if it actually got data, and if not, re-register)
<JamesMunns[m]>
but for "targeted" wakes, yeah, I think putting a waker in the handle and doing a little dance for it is probably the "right" pattern
<JamesMunns[m]>
Another nice thing with the blocking mutex is you could use a CS mutex, so you can "just" iterate through the endpoints in the usb interrupt handler too, and directly wake them from there without additional buffering
<JamesMunns[m]>
(or if you aren't doing that, you can use a ThreadModeRawMutex so you can not pay real CS costs)
<DominicFischer[m>
Yeah I think blocking mutex is still the right way for this
<DominicFischer[m>
I just need to figure out the iteration
<JamesMunns[m]>
yeah, I'm probably about to head out for the day, if you want to hack on adding Cursor support, feel free, I can give you a commit bit to the repo, or just open a PR :)
<JamesMunns[m]>
fwiw: ergot makes the policy "sends to sockets are immediate, if the socket doesn't exist or the socket is full, data will be lost/nak'd"
<JamesMunns[m]>
(for exactly these reasons)
<JamesMunns[m]>
not sure if that's reasonable for you, but the data's gotta live somewhere, and if you don't have anywhere to put it, awaiting isn't going to make that better :D
<JamesMunns[m]>
at least not for immediate "oh god we just got interrupt data, we need to use it or lose it"
<JamesMunns[m]>
and if you backup on one slow socket you might end up losing data for other, faster, sockets, etc.
<JamesMunns[m]>
also unrelated, it MIGHT be worth having a "context"-ful version of the pinlist, where locking the pinlist also gets you access to some context C, so you can have access to the context and the list with a single mutex lock
<JamesMunns[m]>
if you had other mutable stuff other than the list you might want to touch at the same time.
<JamesMunns[m]>
(you'd just stick it in PinListInner with the list itself)
<JamesMunns[m]>
so essentially using the pinlist as a channel, sending the dma transfers to a single consumer that is marshalling them?
<DominicFischer[m>
Yup!
<JamesMunns[m]>
I MIGHT suggest that you actually don't need pinlist at all for that
<JamesMunns[m]>
instead, put the SPI peripheral in an async mutex, and have each node lock it, and use the peripheral to send
<DominicFischer[m>
That's what I'm trying to move away from. 😄
<JamesMunns[m]>
async mutex is also intrusive, so you can support an arbitrary number, but they are already ordered in a fair FIFO way, and you just have the "nodes" drive the action
<JamesMunns[m]>
how come?
<DominicFischer[m>
Basically USB scheduling was a little too complicated for an async mutex
<JamesMunns[m]>
(the async mutex here being maitake-sync's Mutex)
<JamesMunns[m]>
ah, because you don't actually want fair ordering?
<DominicFischer[m>
I need to have some interrupt transfers happen either once per frame, once every other frame, or once every 4 frames.
<DominicFischer[m>
Bulk transfer can happen whenever but interrupt/isochronous transfers must happen first in the frame to ensure there's enough time in the frame left to send the packets.
<DominicFischer[m>
Seems easier to me to organise all that in a single task with a state machine
<DominicFischer[m>
Rather than distribute it out to multiple tasks
<DominicFischer[m>
JamesMunns[m]: in short, yes haha
<JamesMunns[m]>
yeah, fair! I think you might end up wanting to shape it like a "smart" mutex that peeks inside of the header and does sorting/scheduling, but that sounds like what you're working towards anyway
<JamesMunns[m]>
you can see how I sort of separate "nodes" from "i/o" in cfg-noodle: nodes mostly interact with their handles, and you have an "i/o worker task" that mostly operates on the list itself. the nodes can signal the i/o worker with a WaitQueue stored with the PinList.
<JamesMunns[m]>
but makes sense, appreciate the context!
<JamesMunns[m]>
(in cfg-noodle, ordering isn't important, we always process all, but I could see you either seeking the highest prio, or having some mandatory ordering for nodes in the list)
<JamesMunns[m]>
you *might* end up wanting something less "simple" than PinList, but in the worst case it's a good template for you to start from :D
<JamesMunns[m]>
excited to see what you end up building :)
Koen[m] has quit [Quit: Idle timeout reached: 172800s]
<DominicFischer[m>
Ha, me too
sourcebox[m] has quit [Quit: Idle timeout reached: 172800s]
jbeaurivage[m] has quit [Quit: Idle timeout reached: 172800s]
_whitelogger has joined #rust-embedded
jannic[m] has quit [Quit: Idle timeout reached: 172800s]
<JamesMunns[m]>
<JamesMunns[m]> "you can see how I sort of..." <- Just coming back to toot my own horn, this makes some code MUCH nicer, because your nodes don't have to be generic over the `T` of whatever is doing I/O: they are just data that exists in a list, which means as a library author, you can limit the "blast radius" of the generics to a "worker task" async fn you provide, and all the usage-sites of the nodes don't need to be aware of that.
sroemer has joined #rust-embedded
sroemer has quit [Changing host]
sroemer has joined #rust-embedded
diondokter[m] has quit [Quit: Idle timeout reached: 172800s]
mkj[m] has quit [Quit: Idle timeout reached: 172800s]
Noah[m] has joined #rust-embedded
<Noah[m]>
Hmm I wonder what a good board for a fieldkit could be that has good HAL (preferably embassy) support and a proper SWD probe. I feel like teensy would be a good contender if it had an SWD probe :/
cr1901_ has quit [Read error: Connection reset by peer]
cr1901 has joined #rust-embedded
sroemer has quit [Quit: WeeChat 4.5.2]
Mathias[m] has joined #rust-embedded
<Mathias[m]>
<Noah[m]> "Hmm I wonder what a good board..." <- Teensy have historically been fundamentally opposed to debugging. It does not look like it has changed: https://forum.pjrc.com/index.php?threads/teensy-4-1-debug-with-jtag-and-swd-really-impossible.71224/
<Mathias[m]>
I am with James Munns, I am not sure what you mean by "fieldkit". There is https://www.fieldkit.org/ and the term "field kit" seems to define a set of tools to use "in the field", so depending on what your activity is, it could mean Glasgow Explorer, Flipper Zero, Bus Pirate...
<JamesMunns[m]>
honestly I bet it's "a dev board that's reasonable to bring around with you when traveling"
<JoshuaFocht[m]>
That is what I think he means too.
<JamesMunns[m]>
I brought one on my last trip, tho I never ended up using it lol
<JamesMunns[m]>
oh and it also comes in a plastic case so it's not a bare board, comes in a little plastic padded box too
<whitequark[cis]>
i think glasgow and flipper would form a very nicely complementary pair of tools
<whitequark[cis]>
(i actually have a flipper, though i've decided not to publish anything for it after they had a few very questionable decisions in attributing work to people who produced it)
<JoshuaFocht[m]>
The T-Embed cc1101 is a good cheap alternative w/ esp32-s3
<Noah[m]>
<JoshuaFocht[m]> "The [Raspberry Pi Pico 2](https:..." <- they don't have a built-in debug probe though :/ they also do not have USB-C but I realize now I did not specify that ...
<Noah[m]>
<Mathias[m]> "Teensy have historically been..." <- yeah unfortunately :(
<JamesMunns[m]>
I should have brought them all to rustweek to hand out
<Noah[m]>
<JamesMunns[m]> "What's the goal?" <- a board that can be used for quick hacks if something is not right and needs a quick patch in whatever form or fashion
<JamesMunns[m]>
2x2040 with a usb hub, buncha buttons, accelerometer, potentiometer
<JamesMunns[m]>
ahh, darn, no external I/O on this one (meant for training)
<thejpster[m]>
“I should make a board with a 2350 and also a 2040 running cmsis-dap and vcom duties.” is what I was going to post until I read that
<Noah[m]>
<whitequark[cis]> "i think glasgow and flipper..." <- glasgow is already in my pocket :) even though I did not use it much yet :)
<JamesMunns[m]>
thejpster[m]: I could make one, just not profitably :D
<JamesMunns[m]>
I haven't spun a 2350 board yet, but I've got spinning 2040s down, and I don't think the 2350 is much harder if you use an LDO instead of the DC/DC
<Mathias[m]>
jamesmunns: I bought one of the LilyGo with RP2040, ESP-C (C3?) and a screen. It had the questionable design choice of using the orientation of the USB-C connector to select the MCU
<Noah[m]>
JamesMunns[m]: oh, pitty :( I love that it's USB-C!
<JamesMunns[m]>
Mathias[m]: yeah, that's the "batshit usb connector" feature I mention :D
<whitequark[cis]>
Noah[m]: very nice! i hope you like it
<JamesMunns[m]>
Noah[m]: yeah, usb-c is pretty easy now, and the ch334 makes a very simple single component usb hub
<JamesMunns[m]>
and it's also like 50 cents or something, so it's really just the footprint size it takes up
<JamesMunns[m]>
oh, not a single component, it does require a clock, I was confusing it with the CH224K usb-pd negotiator which doesn't
<Noah[m]>
whitequark[cis]: I love it! If logic-analyzer becomes a default feature it will be even more epic :)
<JamesMunns[m]>
but it uses the same cheap 12mhz basic part the rp2040s do
<Noah[m]>
JamesMunns[m]: yeah :) Unfortunately many devkit manufacturers still refuse to use it ...
<whitequark[cis]>
Noah[m]: there is a very basic logic analyzer right now already, but it's kind of frustrating to use... I really should get around to building a better one and I might
<whitequark[cis]>
actually I might as well ask here, what would you consider an "MVP" logic analyzer? (question to everyone)
<Noah[m]>
<JamesMunns[m]> "The LilyGo T-Pico 2350 is neat..." <- interesting! looks pretty cool :)
<Noah[m]>
whitequark[cis]: oh nice :) does it work with sigrok or how does it do? :)
<JamesMunns[m]>
whitequark[cis]: IMO, the software of the saleae carries it. Doesn't have to be that polished, "easy to install/use, basic set of decoders works well on it"
<JamesMunns[m]>
not sure how pleasant sigrok is these days, it was rough when I tried to use it pre-saleae
mali[m] has joined #rust-embedded
<mali[m]>
P
<whitequark[cis]>
Noah[m]: it outputs a vcd file that you can then feed into sigrok to watch it spin your fans for 5 minutes decoding the most basic things
<whitequark[cis]>
it's not a good experience
<JamesMunns[m]>
yeah, that tracks
wucke13[m] has joined #rust-embedded
<wucke13[m]>
whitequark[cis]: fx2lafw, those super cheap USB logic analyzers with a cypress chip that sample directly via USB to the host. The only feature that I'm missing on that one is a configurable trigger voltage.
<Noah[m]>
whitequark[cis]: tbh, I think I am a pretty pleb user. if the UI sucks the spec of the LA does not really matter :) For me real time tracing/display is pretty crucial, just because I am me, not because it actually necessarily brings much benefit (except that you can see that you hit a certain condition)
<whitequark[cis]>
JamesMunns[m]: it's even worse for glasgow 'cause the vcd rate is fixed at 48 MHz and libsigrokdecode calls a python function for each sample
<JamesMunns[m]>
I would love to have a 100-300 EUR part that had a decent software setup that I could recommend to my clients, when they don't want to spend on the saleae
<JamesMunns[m]>
I think the analog discovery hw/sw is... not as bad as sigrok, but last time I used it, it did not spark joy either
<whitequark[cis]>
JamesMunns[m]: I think the main difference is that the development kinda stopped
<whitequark[cis]>
I use basically every other means to get what I need done before opening pulseview
<Noah[m]>
JamesMunns[m]: yep, same, except for myself and not clients :D glasgow should imo be able to replace it in due time :)
<JamesMunns[m]>
yeah, that makes sense. I guess IMO we've gotten to a point where the pico/fx2/etc logic analyzers are fine enough, hw wise, but the sw side of the experience is so bad it's hard to recommend
<whitequark[cis]>
anyway, it seems like everyone focuses on the software which is 100% fair... unfortunately building good logic analyzer software is very challenging and I have like a ton of projects already
<JamesMunns[m]>
yeah, not a fulfilling answer to the question, sorry :/
<whitequark[cis]>
I mean it's a useful one
<JamesMunns[m]>
IIRC the usb sniffer folks are building some (tui?) tooling in Rust, which I think was received well?
<JamesMunns[m]>
I think @diondokter and @wassasin used it lately, not sure how that experience was
<JamesMunns[m]>
in case anyone here feels inspired to write some ratatui to give glasgow a nicer analyzer frontend 😛
<whitequark[cis]>
cynthion?
diondokter[m] has joined #rust-embedded
<diondokter[m]>
JamesMunns[m]: It's... ok
<diondokter[m]>
You pretty much use it to export to something wireshark can read
<whitequark[cis]>
one thing I did with glasgow is I added UART, SPI, and QSPI sniffer applets
<whitequark[cis]>
so instead of doing capture+decode, you get a decode right away
<diondokter[m]>
Yeah Packetry
<diondokter[m]>
Used with a Cynthion device
<whitequark[cis]>
should also add I2C sniffer and emulator and such
<whitequark[cis]>
the main downsides for it are that it's not timestamped and that you can have (right now) at most 2 interfaces, for which you don't get a relative timing reference
<whitequark[cis]>
but for many simple things it's just sufficient
<whitequark[cis]>
it's especially effective for SPI/QSPI
<whitequark[cis]>
I actually want to add a USB analyzer to the toolbox, "bring your own ULPI PHY"
<whitequark[cis]>
it's not actually that difficult and with RAM-Pak this pretty much saves you a few hundred $ in additional devices
GrantM11235[m] has joined #rust-embedded
<GrantM11235[m]>
<whitequark[cis]> "it's even worse for glasgow '..." <- I've always thought it would be cool to be able to write sigrok decoders in rust, but I never got very far with it
<whitequark[cis]>
i want to implement a wasm-based API for running decoders quickly on arrays of data
<whitequark[cis]>
glasgow already soft-depends on wasmtime, adding a way to run decoders via wasmtime would be very easy
<whitequark[cis]>
but i don't really want to add GUI components at this time
<GrantM11235[m]>
Yeah, I was also planning to possibly use wasm for the decoders, but I don't know what the api would look like
<GrantM11235[m]>
My first step was to just rewrite libsigrokdecode with pyo3 to use the existing python decoders just to understand how it all currently works, but I don't think I even finished that 😭
<whitequark[cis]>
I think you should absolutely not emulate libsigrokdecode
<whitequark[cis]>
it's terribly designed and will be slow no matter what
<thejpster[m]>
I’ve published embedded-sdmmc 0.9. Lots and lots of changes in this release.
<whitequark[cis]>
the decoder input should be an object that provides two main functions: `get_sample_at(time)->value`, and `find_sample(from_time, value)->time`
<whitequark[cis]>
this lets you decode both protocols that are time-based (think UART) and protocols that are async trigger based (think SPI)
<whitequark[cis]>
and if the decoder sees no transitions on the SPI clock line for example? it can do absolutely no work besides the call to find_sample
<whitequark[cis]>
the other big issue with libsigrokdecode is how decoder stacking works
<whitequark[cis]>
for example, it's not possible to make a decoder that turns a signal (single bit captured input, like a LA probe) into another signal of the same type, where you can stack something on top that expects just a probe
<whitequark[cis]>
this means you can't make e.g. a Manchester demodulator as a separate reusable block, at least, not in a way that's user-exposed (you can define it as a function of course)
<whitequark[cis]>
I actually tried to fix that a long time ago but it was intractable even in like... 2018 or something
<whitequark[cis]>
decoders are conceptually functions on streams of payloads and defining that in a well-typed manner will solve like 90% of the complexity
<whitequark[cis]>
whitequark[cis]: for decoder stacking, you might extend this slightly, to provide also `find_different_sample(from_time)->(time,value)`, where a decoder reacts to changes in the output of another decoder
<whitequark[cis]>
it would also be feasible (in the sense that it would not result in decoders taking undue amounts of CPU time) to use a simplified API, where you represent both raw and decoded signals as streams of (time,value) where only changed values are reported in the stream. this is basically RLE compression
<whitequark[cis]>
I think the API that provides the find_sample function is slightly superior in that it allows for more efficient decoding in certain cases (e.g. it lets you skip directly to a preamble, without decoding any noise that might be on the lines otherwise) but you should be able to do a perfectly functional design with just streams of (time,value)
<whitequark[cis]>
does this all make sense?
<GrantM11235[m]>
If I remember correctly, sigrok already has a find_sample style api, but I don't know how much of that is implemented in slow python
<whitequark[cis]>
what I know is that basically every decoder I've tried does it sample by sample in practice
<whitequark[cis]>
I think "get next sample" should probably just not be a part of the API. either you do it in terms of time because you know the reference clock, or you do it edge by edge
<whitequark[cis]>
this way you can't write awful decoders like that
<GrantM11235[m]>
For the record, I only know about version 3 of their decoder api, and I have no idea when that came out
<GrantM11235[m]>
>The major change in version 3 of the libsigrokdecode PD API is that we're removing the need for the decoder code to loop over every single logic analyzer sample "by hand" in Python (which has performance implications, among other things).
<whitequark[cis]>
okay, the v3 API isn't awful
_whitelogger has joined #rust-embedded
cr1901_ has joined #rust-embedded
BradleyNelson[m] has quit [Quit: Idle timeout reached: 172800s]
rainingmessages has quit [Read error: Connection reset by peer]
cr1901 has quit [Ping timeout: 260 seconds]
M9names[m] has quit [Quit: Idle timeout reached: 172800s]
GuineaWheek[m] has quit [Quit: Idle timeout reached: 172800s]