<Ralph[m]>
JamesMunns[m]: no. nothing special. i'm just using stable rust 1.88.0
<JamesMunns[m]>
It seems like something is triggering the thiserror build-rs to enable that cfg
<Ralph[m]>
oh, cargo clean + cargo build solved it. no clue what caused it
<JamesMunns[m]>
ah, weird
<Ralph[m]>
sorry, didn't think of trying cargo clean last night (it was way too late)
<JamesMunns[m]>
No worries, glad it works :D
Farooq has joined #rust-embedded
<Farooq>
<Ralph[m]> "unrelated: what wastes so much..." <- This is how Rust is
<i509vcb[m]>
cargo sweep is worth an install if you think you have 300GB of target folder usage
<Farooq>
This approach of Rust has got advantages and disadvantages. As an advantage, you've got a binary without [much] deps
<i509vcb[m]>
Yes I have had 300GB of target folders before...
<Farooq>
Maybe a web browser? :>
<i509vcb[m]>
When your dev folder has 50+ folders inside of it, it adds up fast
<whitequark[cis]>
i feel like rust is uniquely wasteful in the way it manages intermediate products
<whitequark[cis]>
i'm not aware of any other toolchain that routinely generates 300GB target folders
<whitequark[cis]>
that's like, an entire android checkout
<JamesMunns[m]>
I think they are starting to work towards at least cleaning builds for old toolchains. That's where a lot of space accumulates, you update from 1.86 to 1.88, which means all your artifacts are rebuilt, but the 1.86 artifacts are just sitting there in the target folder
zeenix[m] has joined #rust-embedded
<zeenix[m]>
`cargo sweep -r` is super helpful too
<sourcebox[m]>
I often run into issues with changes in local crates attached via the path = ./... option not being detected.
<sourcebox[m]>
Especially annoying with RA, but cargo check also throws errors.
<sourcebox[m]>
Currently, I solve it by deleting just /target/debug.
<thejpster[m]>
honestly, I just started using RTIC instead. You very quickly need to share one thing with two interrupt handlers, or cannot afford to block an interrupt X when using some resource Y that interrupt X doesn't need ... and RTIC is the right answer to that problem.
<RobinMueller[m]>
thejpster[m]: was it grounded ? ๐ sorry
<thejpster[m]>
it's unfortunate, because RTIC is quite hard to teach - there's a lot of macro shenanigans and weird syntax.
<thejpster[m]>
so if we could get some kind of `Global<T>` or `Shared<T>` going somewhere, that's a good stepping-stone on the road to getting people using RTIC.
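A minimal sketch of what such a `Global<T>`/`Shared<T>` might look like (the name and API are hypothetical, not an existing crate): on target, the lock would be a critical section (e.g. via the `critical-section` crate); here `std::sync::Mutex` stands in so the example runs on a host.

```rust
// Hypothetical `Shared<T>`: "safe but stop the world" sharing between main
// and interrupt handlers. std's Mutex stands in for a critical section so
// this sketch is host-runnable.
use std::sync::Mutex;

pub struct Shared<T> {
    inner: Mutex<T>,
}

impl<T> Shared<T> {
    pub const fn new(value: T) -> Self {
        Shared { inner: Mutex::new(value) }
    }

    /// Run a closure with exclusive access to the shared value.
    /// On a single-core MCU this would disable interrupts around `f`.
    pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        let mut guard = self.inner.lock().unwrap();
        f(&mut guard)
    }
}

static COUNTER: Shared<u32> = Shared::new(0);

fn main() {
    // Both "main" and an "interrupt handler" could call this safely.
    COUNTER.with(|c| *c += 1);
    COUNTER.with(|c| *c += 1);
    println!("{}", COUNTER.with(|c| *c)); // prints 2
}
```

The point of the closure-based API is that the guard cannot escape the critical section, which is what makes it a teachable stepping stone.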
<thejpster[m]>
Also, there's no RTIC for Arm Cortex-R / Arm GIC (yet). So for those systems, we must still do it the hard way.
<RobinMueller[m]>
I am going to catch lunch, but will come back later, good to know there are some efforts in this direction already.
<Ralph[m]>
if anyone here has experience in working with teensy devices and maybe even better with flashing them from within docker (e.g. in automated builds) it'd be great to get feedback from you on https://forum.pjrc.com/index.php?threads/how-to-flash-from-docker-container.77132/
<Ralph[m]>
thanks!
<Ralph[m]>
(i'm not programming that one with rust - i'm flashing an existing firmware on that and then talk to it from rust)
<RobinMueller[m]>
<thejpster[m]> "it's unfortunate, because RTIC..." <- RTIC takes care of resource management. I also think it's a lot of macro magic. There also is embassy, which allows passing resources by value, but that does not help much for interrupts unfortunately. And then there is of course the bare metal code use-case and other OSes, but I guess most non-trivial projects use some basic RTOS/Embedded OS at some point.
<RobinMueller[m]>
<thejpster[m]> "so if we could get some kind of..." <- I like the pattern here. To be honest, when working with shared peripherals, I oftentimes do something possibly unrusty/hacky: I simply steal the peripheral if I do not use it in the main thread ๐ . I mostly require this for sharing message queue handles
<JamesMunns[m]>
Maybe silly question, is there a reason you aren't just making the channel a static?
<JamesMunns[m]>
none of the send/recv functions require an &mut ref
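This is the pattern being suggested: the channel lives in a static and is used entirely through shared references. `embassy_sync::channel::Channel` works this way on target; the toy channel below (built on std types so it runs on a host) only illustrates the `&self` API shape, not embassy's actual implementation.

```rust
// Toy stand-in for a static channel with &self send/recv.
use std::collections::VecDeque;
use std::sync::Mutex;

pub struct Channel<T> {
    queue: Mutex<VecDeque<T>>,
}

impl<T> Channel<T> {
    pub const fn new() -> Self {
        Channel { queue: Mutex::new(VecDeque::new()) }
    }
    // Note: &self, not &mut self -- that is what lets a plain static be
    // used from both main and an interrupt handler without StaticCell.
    pub fn send(&self, value: T) {
        self.queue.lock().unwrap().push_back(value);
    }
    pub fn try_recv(&self) -> Option<T> {
        self.queue.lock().unwrap().pop_front()
    }
}

static EVENTS: Channel<u8> = Channel::new();

fn main() {
    EVENTS.send(42); // e.g. from an ISR
    assert_eq!(EVENTS.try_recv(), Some(42)); // e.g. from the main loop
}
```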
<RobinMueller[m]>
ahhh, that works. nice. then it's definitely more ergonomic than the heapless queue in that case
<RobinMueller[m]>
yeah, I added a lot of static cells / const static cells to various places in my code because rust really does not like static muts anymore, but here, this is not required. perfect
<JamesMunns[m]>
tbh, I still sort of agree with the old comments on that thread:
<JamesMunns[m]>
Anywhere you need that `Wrapper<T>`, it's a sign we need a better data structure for this.
<JamesMunns[m]>
channels, static mutexes, static cell, etc.
<RobinMueller[m]>
and I actually did this correctly in another old project I just found, I just forgot that I do not need the static cell for the embassy channels. maybe I'll create a small PR with a usage example for main thread / IRQ usage for embassy sync :)
<RobinMueller[m]>
https://github.com/jamesmunns/grounded/pull/3 still looks super interesting for using idiomatic rust safe code instead of using hacks like stealing peripherals inside the interrupt handler.. maybe it could also be a separate dedicated crate?
<JamesMunns[m]>
thejpster[m]: Yeah, iirc it was a thought experiment for "what if just the resource aspect of rtic"
<thejpster[m]>
James Munns: would you be interested in moving cmim to github.com/rust-embedded-community? I could expand it to offer both 'only works in this ISR' types as well as 'works everywhere, but blocks everything' types.
<JamesMunns[m]>
Happy to move it, or just add you as a contributor to cmim
<JamesMunns[m]>
lemme know which you prefer.
<thejpster[m]>
that's then a teachable path, from "rawdog an unsafecell and YOLO", to "safe but stop the world" to "safe but stop half the world" to "fine grained resource management"
<JamesMunns[m]>
fwiw
<JamesMunns[m]>
> works everywhere, but blocks everything
<thejpster[m]>
which does somewhat imply a discoverability problem
<thejpster[m]>
or, I've seen that before I think, but it probably didn't stick in my head because there's a bunch of different variants, several aimed at async usecases
<JamesMunns[m]>
(the mutex/mutex-trait is exactly pulled from embassy-sync, to hopefully make it usable in more places)
<JamesMunns[m]>
I feel like i've mentioned the mutex crate here a lot without a lot of uptake, happy to start plugging it more often again.
<JamesMunns[m]>
I've started using it in all my crates recently.
<thejpster[m]>
so it's like a C++ mutex but not a Rust mutex?
<JamesMunns[m]>
yes, the raw mutex doesn't contain anything
<JamesMunns[m]>
the BlockingMutex does
<thejpster[m]>
Oh right, I need BlockingMutex<CriticalSectionRawMutex, T>
<thejpster[m]>
yeesh, this is going to take some explaining
<JamesMunns[m]>
tbh, it works that way because that's the way embassy-sync works, but I can imagine it being less monomorphization?
<dirbaio[m]>
it's like this so you can make data structures (async mutex, channels, etc) generic over the "mutex kind"
vollbrecht[m] has joined #rust-embedded
<vollbrecht[m]>
james time to link the updated doc's PR :D
<JamesMunns[m]>
The reason there are different mutex kinds is because it can be "pay what you use". If you never use it with interrupts and have a single core, you can skip taking a CS
<JamesMunns[m]>
vollbrecht[m]: already merged and released :)
<thejpster[m]>
I mean, I know why it's like this. But I still have to explain it in class to people who just want to blink an LED in an interrupt.
<dirbaio[m]>
e.g. `Channel<CriticalSectionRawMutex, T>` can be used to send things between main and interrupt, but takes a CS for each operation
<dirbaio[m]>
`Channel<LocalRawMutex, T>` can only be used to send things between tasks in the main thread, but in exchange it doesn't take any CS
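A sketch of why data structures are made generic over a "mutex kind": one channel type, with the locking cost chosen by a type parameter. This mirrors the embassy-sync design but all names and bodies below are illustrative stand-ins (a real `CriticalSectionRawMutex` would disable interrupts; here both impls just run the closure so the example runs on a host).

```rust
// Illustrative "raw mutex kind" pattern, host-runnable.
use std::cell::RefCell;
use std::collections::VecDeque;

pub trait RawMutex {
    fn lock<R>(&self, f: impl FnOnce() -> R) -> R;
}

/// Stand-in for CriticalSectionRawMutex: on target this would wrap `f` in
/// `critical_section::with`, paying a CS per operation.
pub struct CriticalSectionRawMutex;
impl RawMutex for CriticalSectionRawMutex {
    fn lock<R>(&self, f: impl FnOnce() -> R) -> R {
        f() // interrupts would be masked around this on hardware
    }
}

/// Stand-in for a local/noop mutex: no CS at all, sound only when everything
/// runs in one context (e.g. tasks in the main thread).
pub struct LocalRawMutex;
impl RawMutex for LocalRawMutex {
    fn lock<R>(&self, f: impl FnOnce() -> R) -> R {
        f()
    }
}

/// One channel type; its locking strategy is the type parameter.
pub struct Channel<M: RawMutex, T> {
    mutex: M,
    queue: RefCell<VecDeque<T>>,
}

impl<M: RawMutex, T> Channel<M, T> {
    pub fn new(mutex: M) -> Self {
        Channel { mutex, queue: RefCell::new(VecDeque::new()) }
    }
    pub fn send(&self, v: T) {
        self.mutex.lock(|| self.queue.borrow_mut().push_back(v));
    }
    pub fn try_recv(&self) -> Option<T> {
        self.mutex.lock(|| self.queue.borrow_mut().pop_front())
    }
}

fn main() {
    let ch: Channel<CriticalSectionRawMutex, u8> = Channel::new(CriticalSectionRawMutex);
    ch.send(3);
    assert_eq!(ch.try_recv(), Some(3));
}
```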
<dirbaio[m]>
thejpster[m]: hot take: people who are learning to blink a led shouldn't be using interrupts :P
<JamesMunns[m]>
tbh: if you wanted to make a simple wrapper `SimpleMutex<T>` that always uses a critical section, you could, for the sake of teaching.
<JamesMunns[m]>
(it's sort of teaching a bad habit tho)
<dirbaio[m]>
especially interrupts of timer peripherals, which are horrible kitchen sink abominations because vendors cram tons of features into them beyond just timering
<vollbrecht[m]>
but but but mah input reading pwm, that triggers on my adc value not reaching my dynamic threshold, i absolutely need that :p
<thejpster[m]>
<dirbaio[m]> "hot take: people who are..." <- people who are learning interrupts in Rust find it useful to blink an LED to see the interrupt is working
<thejpster[m]>
and generally they'd be very happy to blink an LED from an interrupt in C, using some digitalWrite(1, HIGH) or HAL_GPIO_Set(PortA, Pin4, HIGH) type static function that doesn't require any context (or provide any race hazard guarantees)
<thejpster[m]>
And I have to persuade them that Rust is better, despite (because?) it makes you do all this extra stuff that was so easy to do in C.
TomB[m] has joined #rust-embedded
<TomB[m]>
There's a lot of line noise with Rust Embedded today
<TomB[m]>
It can be overwhelming to read compared to the old C SDKs or even Zephyr to be frank
<dirbaio[m]>
lol
<dirbaio[m]>
well to be fair in most chips you can set/clear a gpio atomically, so doing this "move owned thing to the interrupt" dance shouldn't be needed
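The atomic set/clear point, modeled on a host: if the port has a set/reset-style register, the ISR can flip a pin without owning anything. Below an `AtomicU32` stands in for an output-data register; note that on real hardware with a BSRR-style register a single volatile store is atomic by construction and no read-modify-write is needed at all.

```rust
// Host-runnable model of "set/clear a gpio atomically, no ownership needed".
use std::sync::atomic::{AtomicU32, Ordering};

// Stand-in for a port's output-data register.
static ODR: AtomicU32 = AtomicU32::new(0);

fn set_pin(pin: u32) {
    // On hardware: a single store to the "bit set" register, inherently atomic.
    ODR.fetch_or(1 << pin, Ordering::Relaxed);
}

fn clear_pin(pin: u32) {
    // On hardware: a single store to the "bit reset" register.
    ODR.fetch_and(!(1 << pin), Ordering::Relaxed);
}

fn main() {
    set_pin(4); // callable from main or an ISR, no shared-state dance
    assert_ne!(ODR.load(Ordering::Relaxed) & (1 << 4), 0);
    clear_pin(4);
    assert_eq!(ODR.load(Ordering::Relaxed) & (1 << 4), 0);
}
```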
<TomB[m]>
Reading code happens a lot more than writing it, thereโs benefits to Rust, Iโm not sure readability is one
<dirbaio[m]>
it's just we've chosen to structure our hals like that for other reasons
<thejpster[m]>
quite
<dirbaio[m]>
and a consequence is it makes using interrupts much less ergonomic
<dirbaio[m]>
okay let me rephrase my hot take: you shouldn't be using interrupts in your business logic for 99% of the use cases
<JamesMunns[m]>
dirbaio[m]: RTICv1 is suddenly very upset
<dirbaio[m]>
well there's a reason RTICv2 added async ๐
<dirbaio[m]>
your business logic should use async
<JamesMunns[m]>
dirbaio[m]: It's a scam by big async to sell more async
<dirbaio[m]>
and you should use interrupts only for writing async drivers
<thejpster[m]>
show me a certified async runtime and then I'll listen
<thejpster[m]>
RTICv1 is suitable for certification as-is
<dirbaio[m]>
drivers share only data (usually a few status flags) and wakers between task and interrupt
<TomB[m]>
The timing of interrupts with run to completion seems pretty straightforward, how do you determine timing with async?
<dirbaio[m]>
and also if you use PACs with no owner singletons you can just go and do whatever register writes you want from the interrupt
<vollbrecht[m]>
Hot take: Wakers are just "a bit smarter" interrupt callbacks. E.g. we can funnel them all together now.
<dirbaio[m]>
with all this I haven't written any "move owned thing from main to interrupt" dance in years
<dirbaio[m]>
you just keep the owned thing in the task
<dirbaio[m]>
and use it
<dirbaio[m]>
thejpster[m]: I don't think that's intrinsically not doable? it's just nobody has done it yet
<JamesMunns[m]>
thejpster[m]: show me a customer interested in one and I'll help make it happen :D
<dirbaio[m]>
TomB[m]: same as you would with a "classic" RTOS with stackful tasks. Max latency of a task is given by sum of WCET of all tasks at equal or higher priority, etc
<TomB[m]>
Can you do wcet statically with async? RTIC v1 definitely made this pretty obvious how itโd work
<JamesMunns[m]>
TomB[m]: did anyone ever make a tool that analyzed wcet for rticv1?
<TomB[m]>
Sadly no I wish someone had
<dirbaio[m]>
you can never do WCET statically. it's equivalent to the halting problem
<JamesMunns[m]>
you can wcet fragments, but it does require making a lot of assumptions
<JamesMunns[m]>
like: not having flash cache, not having memory bus transaction contention, etc.
<JamesMunns[m]>
AFAIK all of the hard deadlines we had were analyzed dynamically in safety critical, with watchdogs and fault timers.
<JamesMunns[m]>
And anything that requires external events makes that analysis harder
<JamesMunns[m]>
and since every await is waiting for an event to occur, it is exceedingly hard to analyze even fragments imo.
<dirbaio[m]>
awaits are equivalent to "save all state, return from the irq handler" in manual irq / rticv1 code
<dirbaio[m]>
so if in rticv1 you were happy analyzing the WCET of a single irq firing, in async you'll be happy analyzing the WCET of a task from one await to the next
<dirbaio[m]>
that's still not the whole picture, it's possible for example you get a storm of events making an irq fire constantly / an await never actually yield, which starves lower prio tasks
<dirbaio[m]>
but the problem was already there before, async doesn't make it "worse"
<dirbaio[m]>
can avr do atomic load/store of one word?
<dirbaio[m]>
if it can, turbowakers might've helped you
<vollbrecht[m]>
the problem is that the compiler is a dummy, and for any atomic it would insert 4 instructions, where only 1 would be needed in reality
<vollbrecht[m]>
if it's already in an ISR
<JamesMunns[m]>
tbh, I'm not sure if Rust is really worth it on avr targets :p
<JamesMunns[m]>
I do think we should make things easier if we can, but a lot of the problem with writing general libraries is that they need to be sound *for all possible use cases*.
<JamesMunns[m]>
Yes, if you can cheat and know you'll never do something, you could make something simpler. But most folks aim to write general purpose libraries.
<JamesMunns[m]>
IMO we probably could stand to have better simple structures, that's why I wrote the `mutex` crate. If people have ideas for easier/better ones, you should go write those crates! There's no monopoly on who can write nice data structures, it doesn't have to be under the wg.
<vollbrecht[m]>
the biggest pain on avr with embassy is needing 64 bit time; other than that it works fine :p
<vollbrecht[m]>
But the point still was how to share state into an ISR without wasting too much of your ISR time.
<vollbrecht[m]>
and currently it needs a good amount of handholding
<vollbrecht[m]>
well the initial callout was that it's unergonomic, just because of the constraints we put ourselves into
<dirbaio[m]>
my point is *it's OK for `Mutex<Cell<Option<OwnedThing>>>` to be unergonomic* because there's no case where you should use it.
<dirbaio[m]>
Can you afford async? just use async.
<dirbaio[m]>
You can't afford it? You likely can't afford the mutex option dance either and are better of with unsafe.
DominicFischer[m has joined #rust-embedded
<DominicFischer[m>
Sounds a bit conveniently black and white
<DominicFischer[m>
There's multiple reasons for not being able to afford async besides extra cycles
<vollbrecht[m]>
I think there still can exist a middleground, where it would be nice to have something ergonomic that helps you out a bit when you sailing the "unsafe seas"
<vollbrecht[m]>
but it's clear that nothing just simply springs into existence, the same with the nice embassy ecosystem we got now
<dirbaio[m]>
DominicFischer[m: like?
<DominicFischer[m>
Not having enough memory to store gigantic Futures
<dirbaio[m]>
there's nothing inherently gigantic about futures, they're the sum of the sizes of all your local variables
<dirbaio[m]>
it's the same size you'd get if you'd hand-write a state machine
<dirbaio[m]>
Yes, the compiler has footguns where it's dumb and accidentally duplicates things and blows up future size
<dirbaio[m]>
but it also has similar footguns for non-async code where it duplicates things in the stack
<dirbaio[m]>
you have to learn how to avoid them anyway ๐
<thejpster[m]>
<dirbaio[m]> "my point is *it's OK for `Mutex..." <- we'll agree to disagree on this point.
<DominicFischer[m>
Also how else are you supposed to write the reactors without things like `Mutex<Cell<Option<OwnedThing>>>`
<DominicFischer[m>
AtomicWaker is cute for simple cases but once you need extra state passed along you need a bigger hammer
<dirbaio[m]>
there's no `Mutex<Cell<Option<OwnedThing>>>` anywhere within embassy
<dirbaio[m]>
hal drivers share wakers between main and interrupt, or at most a few status flags for the most complex ones (usb etc)
<DominicFischer[m>
I guess embassy only covers simple cases ๐
<dirbaio[m]>
it doesnt..?
<DominicFischer[m>
Or hides them behind traits that other people have to implement
<dirbaio[m]>
no
<i509vcb[m]>
Buffered uart does write into a set of pointers from the interrupt, but that is pretty simple still
<dirbaio[m]>
it aims to support all mcu's features, and it pretty much does in the hals that are most complete (e.g nrf)
<vollbrecht[m]>
not saying that its not the right thing in this case
<dirbaio[m]>
vollbrecht[m]: it's data, it's not an owned thing
<DominicFischer[m>
dirbaio[m]: When I say simple, I don't mean it's not fully featured, I mean the drivers are simple. Simple enough that AtomicWaker is good enough
<dirbaio[m]>
DominicFischer[m: have you *actually tried* Embassy or are you just dismissing it with prejudice?
<DominicFischer[m>
I have
<i509vcb[m]>
From my experience with a more traditional RTOS, wouldn't you instead just be using a channel in an interrupt if you wanted to send something somewhat structured?
<DominicFischer[m>
Hold on, I'm not trying to bash embassy to be clear
<DominicFischer[m>
๐
<dirbaio[m]>
then what's your point?
<DominicFischer[m>
The comment I made here "I guess embassy only covers simple cases ๐" was a joke
<JamesMunns[m]>
A more constructive answer:
<JamesMunns[m]>
There are usually statics per instance of a hardware item, and they are accessed unsafely or with critical sections within the interrupt, and safely shared with the user interface.
<dirbaio[m]>
if you found some feature missing it's probably because nobody has implemented it yet, not because the "share wakers between main and interrupt" model is inherently incapable of implementing it
<DominicFischer[m>
I'm just trying to say that AtomicWaker isn't the only way
<JamesMunns[m]>
DominicFischer[m: fwiw: sarcasm doesn't come across great in text, and making comments like "this is cute" are probably not going to come across as "with good intent".
<dirbaio[m]>
yeah I totally didn't read it as a joke
<DominicFischer[m>
Yeah lesson learned. I thought the "๐" would be enough but was wrong
<cr1901>
(I'm clueless about Bluetooth) Is there an example in Rust of a Bluetooth UART peripheral for nRF52840 (I can't believe I'm saying this, but async is perfectly fine for this :D)? I.e. an echo server
<dirbaio[m]>
normally the irq handler would check+clear the interrupt flags
<dirbaio[m]>
but instead you can *disable* the interrupt and leave the flag set for the main thread to see.
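The disable-and-leave-the-flag-set idea, sketched on a host: the handler only masks itself and wakes the task; the task's poll consumes the still-set flag and re-enables the interrupt. Atomics stand in for the peripheral's enable/status bits, so this is an illustration of the control flow, not real register code.

```rust
// Host-runnable model of "disable the IRQ, leave the flag for the task".
use std::sync::atomic::{AtomicBool, Ordering};

static IRQ_ENABLED: AtomicBool = AtomicBool::new(true);   // "interrupt enable bit"
static EVENT_PENDING: AtomicBool = AtomicBool::new(false); // "status flag"

/// What the interrupt handler does: don't clear the flag, just mask itself.
fn on_interrupt() {
    EVENT_PENDING.store(true, Ordering::SeqCst); // hardware would set this
    IRQ_ENABLED.store(false, Ordering::SeqCst);  // disable further IRQs
    // ...and wake the waiting task here.
}

/// What the task's poll does: consume the flag, re-enable the interrupt.
fn poll_event() -> bool {
    if EVENT_PENDING.swap(false, Ordering::SeqCst) {
        IRQ_ENABLED.store(true, Ordering::SeqCst);
        true
    } else {
        false
    }
}

fn main() {
    on_interrupt();
    assert!(poll_event());  // event observed, IRQ re-enabled
    assert!(!poll_event()); // flag was consumed
    assert!(IRQ_ENABLED.load(Ordering::SeqCst));
}
```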
<i509vcb[m]>
I still need to figure out how to put the gpio wakers on a diet for mspm0
<dirbaio[m]>
works almost everywhere
<JamesMunns[m]>
i509vcb[m]: tbh I think you could get away with a single shared WaitQueue for all gpios
<dirbaio[m]>
except if the silicon vendor did dumb shit like making the irq flags clear themselves on read, or require you to clear them to make progress e.g. to receive the next event
<i509vcb[m]>
Is c1104 impractical to use in prod? Probably. But I certainly want to see how far I can push 1kB of ram
<dirbaio[m]>
turbowakers halve the waker size (2 words -> 1 word)
<i509vcb[m]>
For context the current gpio waker setup uses 25% of RAM on C110x
<JamesMunns[m]>
how many gpios do you have again?
<i509vcb[m]>
And 12 wakers go fully unused
<i509vcb[m]>
Uhh not 20 gpio, probably 17 pins max on c110x
<vollbrecht[m]>
also with a single Map you then again would need to delegate it further to the single PinDrivers?
<JamesMunns[m]>
dirbaio[m]: yes, but it's "pay as you go", if you don't have 32 tasks waiting on gpios you only pay for the two pointers
<vollbrecht[m]>
e.g decouple them again?
<JamesMunns[m]>
the interrupts could just .wake(4) or whatever
<i509vcb[m]>
I would certainly appreciate something like that for parts that have far less RAM
<JamesMunns[m]>
wait queue is probably bigger than 32 AtomicWakers if you have 32 tasks waiting, but if you only have one or two, you can just pay 2 pointers as a base cost, then 2 pointers + a waker per task that is waiting at the same time
<JamesMunns[m]>
you're probably only ever waiting on like one gpio at a time ever :p
<dirbaio[m]>
JamesMunns[m]: right... but it can also be solved by adding one cargo feature per gpio ๐
<i509vcb[m]>
From ergonomics I'd prefer to not have a waker feature per pin
<JamesMunns[m]>
"solved"
<dirbaio[m]>
or a macro that the user calls to set up the wakers on demand
<dirbaio[m]>
similar to bind_interrupts! but for gpios
<i509vcb[m]>
dirbaio[m]: Interrupt groups makes this a bit annoying
<dirbaio[m]>
one macro call that sets up all gpios maybe
<i509vcb[m]>
And even better the lowest ram parts don't do interrupt groups so I have to do conditional stuff
<dirbaio[m]>
so it can "group" them and generate the waker statics + the irq handlers
<dirbaio[m]>
oof :D
<dirbaio[m]>
yeah, fun stuff
<dirbaio[m]>
this is kind of a problem for the smaller stm32's as well
<dirbaio[m]>
i'd love to find some solution
<i509vcb[m]>
I have been abusing the linker for interrupt groups
<dirbaio[m]>
to set up irq handlers and wakers on-demand for gpios and dmas
<JamesMunns[m]>
not waitqueue? :p
<JamesMunns[m]>
(shared waitqueue does have CPU downsides, tho)
<i509vcb[m]>
I can give the waitmap a look when it doesn't force atomics on me (although this is single core)
<dirbaio[m]>
not waitqueue, i'd prefer it to be fully statically defined
<dirbaio[m]>
it's likely to be smaller+faster
<JamesMunns[m]>
waitmap doesn't have a "wake all" option yet tho
<JamesMunns[m]>
so if you had 4 tasks waiting for gpio 2, it would only wake the first one with the id 2
<JamesMunns[m]>
(it sort of assumes that keys are unique)
<i509vcb[m]>
Why would I need wake all for gpio? Each pin would technically have a different ID?
<JamesMunns[m]>
yeah, I mean two tasks waiting on the SAME gpio
<dirbaio[m]>
yeah you already can't wait on the same gpio from multiple tasks with today's embassy api
<JamesMunns[m]>
if your wait_for_low requires &mut tho, then its not a problem
<i509vcb[m]>
Yeah you need &mut in embassy-mspm0
<JamesMunns[m]>
oh wait it's hot take day
<i509vcb[m]>
In fact embedded-hal-async requires &mut
<JamesMunns[m]>
EVERYONE SHOULD JUST USE ERGOT FOR EVERYTHING INCLUDING INTERRUPTS
<JamesMunns[m]>
(/hot take)
<dirbaio[m]>
JamesMunns[m]: on a 1kb ram micro ๐๏ธ
<JamesMunns[m]>
you should just do message passing in your interrupts!
<JamesMunns[m]>
I definitely plan to get ergot working on 8kib stm32g0s
<dirbaio[m]>
message-pass each thumbv6 instruction to the "thumbv6 execution task"
<i509vcb[m]>
So then the million dollar question. How big is each linked list entry I give to the wait map
<JamesMunns[m]>
uhhh lemme check
<i509vcb[m]>
thumbv6 as well so it will be smaller (hopefully)
<i509vcb[m]>
aarch64 types are so big lol
<i509vcb[m]>
Maybe it's a permanently altered sense of scale, but I see 16 bytes and am disgusted lol
<i509vcb[m]>
Uhh I did try the 2KB stm32 the other day and yeah that was tight as well
<JamesMunns[m]>
so, if you use a u8 key and a () val, then 14 bytes + the size of a waker?
<JamesMunns[m]>
Wakers are 2 pointers I think? so probably 24 bytes?
<JamesMunns[m]>
(with padding?)
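A quick back-of-envelope check of those numbers: a `Waker` is a data pointer plus a vtable pointer, i.e. two words (16 bytes on a 64-bit host, 8 on thumbv6). The `Node` struct below is a hypothetical layout for illustration only, not waitmap's actual node; it just shows how "2 list pointers + u8 key + waker" lands near the ~24-bytes-on-target estimate, modulo padding.

```rust
// Sanity-checking waker/node sizes on the host.
use std::mem::size_of;
use std::task::Waker;

fn main() {
    let word = size_of::<usize>();
    // A Waker is (data ptr, vtable ptr) = two words.
    assert_eq!(size_of::<Waker>(), 2 * word);

    // Hypothetical intrusive-list node, for illustration only.
    struct Node {
        next: *mut Node,
        prev: *mut Node,
        key: u8,
        waker: Option<Waker>,
    }
    println!("host node size: {} bytes", size_of::<Node>());
}
```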
<DominicFischer[m>
Surely, after a few branches and loops in your Future, the space taken up by the Node would be well recycled by other parts of the Future no?
<DominicFischer[m>
So Node size won't really be an issue (assuming Rust compiler is smart enough)
<JamesMunns[m]>
yeah, it's only relevant WHILE you are waiting
<JamesMunns[m]>
so you only pay the cost WHILE you are waiting for the gpio, and the node is stored in the task future storage
<JamesMunns[m]>
whereas with 32x atomic wakers it's a constant static cost
<JamesMunns[m]>
I quoted you waitmaps size, actually, waitqueue would be a bit smaller
<JamesMunns[m]>
but you'd get "extra wakes" if you were waiting on multiple gpios
<DominicFischer[m>
WaitMap doesn't support multiple waiters on the same key btw, if you care about that
<JamesMunns[m]>
DominicFischer[m: already discussed, see above :)
<DominicFischer[m>
Thought that's the same as using AtomicWaker so I guess you don't care
<DominicFischer[m>
ah
<JamesMunns[m]>
but yeah, imo, chances are, you will ever only be waiting for one gpio at a time, maybe two
<JamesMunns[m]>
so it's a bet that you'll benefit from a lower base cost and higher per-item cost
<i509vcb[m]>
Selecting between one or the other is an option, but you don't necessarily want the same option for port A and B
<i509vcb[m]>
You might use a single waker on port a and 12 on port b
<i509vcb[m]>
<JamesMunns[m]> "I quoted you waitmaps size..." <- 18 per waiter + constant size for the wait queue? Although alignment probably pushes that to 20 bytes per waiter?
<i509vcb[m]>
My math there may be wrong
<JamesMunns[m]>
yeah, not sure, I'd have to check it out. rustc can rearrange most fields
<JamesMunns[m]>
24B is probably a fine estimate, could be a bit less
<Farooq>
is it common for people to write firmware for Cortex-A using Rust?
<JamesMunns[m]>
not very common, no. I have seen some folks doing it, but not very active about it.
<JamesMunns[m]>
It's *possible*, but not *popular*, currently.
<dirbaio[m]>
fix the driver by removing the bus thing
<dirbaio[m]>
you can share spi using embedded-hal-bus, there's no need for the driver to reinvent bus sharing itself
<dirbaio[m]>
and DelayNs is supposed to be cloneable
<Ralph[m]>
dirbaio[m]: yeah, see the linked PRs. it's work in progress (with no progress right now ๐)
<dirbaio[m]>
:(
<JamesMunns[m]>
๐ด
<JamesMunns[m]>
(minus the knife)
<Ralph[m]>
dirbaio[m]: it seems that this isn't the case for `stm32f4xx-hal`? at least the maintainer had issues and then tried to use `embassy-time` instead (which IMHO is wrong)
<RobinMueller[m]>
<Farooq> "is it common for people to write..." <- bare-metal, or on top of a linux OS?
<i509vcb[m]>
I'd assume the question would be bare metal, as Cortex-A Linux is very boring otherwise
<RobinMueller[m]>
<Farooq> "is it common for people to write..." <- I can only speak for the zynq7000 here: Xilinx/AMD offers both linux and baremetal support. Bare-metal support is especially interesting for that SoC, because some projects have the majority of the complexity inside the FPGA, and the SW is really simple (linux would be overkill). In our institute, we have used both. Using embedded rust on top of the linux works nicely (did not do any more complex projects so far though, but that is planned), but bare metal support is not really commonly used. There are projects like zynq-rs, and I am also working on a HAL/PAC etc., but I do not think any of that is commonly used yet.
<i509vcb[m]>
If someone does a full soft core then yeah the experience may or may not be consistent
Kin-o-matix[m] has joined #rust-embedded
<Kin-o-matix[m]>
Is it possible to not have PanicInfo impl fmt::Display but instead use defmt in some manner?
<Kin-o-matix[m]>
a hello world style elf is like 8kb and half of that seems fmt related
<jason-kairos[m]>
I can do bit shifts and whatnot, but would rather not
<i509vcb[m]>
Each is a different register?
<dirbaio[m]>
keep it simple and embrace the shifts
<dngrs[m]>
you can always get down+dirty with a macro I guess
<i509vcb[m]>
I would generally discourage macro magic unless it is truly needed
KevinPFleming[m] has joined #rust-embedded
<KevinPFleming[m]>
<Kin-o-matix[m]> "a hello world style elf is like..." <- what does your `[profile.release]` in `Cargo.toml` look like?
<Kin-o-matix[m]>
whatever the defaults are
<Kin-o-matix[m]>
I didn't change it
firefrommoonligh has joined #rust-embedded
<firefrommoonligh>
<jason-kairos[m]> "I can do bit shifts and whatnot,..." <- Fn that does the bitshift
<firefrommoonligh>
Unless I'm missing something
<dngrs[m]>
i509vcb[m]: same, it's really a last resort thing. At the same time I wish there was *something* better than macros...
<dngrs[m]>
(I know: let's add a third macro system to the language!) (or a fourth, depending how you count macros 2.0 ....)
<KevinPFleming[m]>
Kin-o-matix[m]: So you don't have one at all? In that case, you aren't optimizing for size, and probably aren't using LTO either. if `fmt::Display` is never referenced in your code, then LTO should remove it completely from the binary.
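For reference, a common size-focused `[profile.release]` starting point looks like the fragment below. These are widely used settings, not defaults; which ones pay off (and whether `panic = "abort"` is acceptable) depends on the project.

```toml
[profile.release]
opt-level = "z"     # optimize for size ("s" is the less aggressive option)
lto = true          # cross-crate inlining + dead-code elimination
codegen-units = 1   # better optimization at the cost of build time
panic = "abort"     # drop unwinding machinery (usual on bare metal)
strip = true        # strip symbols from the final binary
```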
<firefrommoonligh>
Macro magic has its place, e.g. if the alternative is worse. You don't need a macro here though; fn is fine
<jason-kairos[m]>
side question: is there an easy way to read the STM32's unique ID
<jason-kairos[m]>
I'm not sure I see it in the svd2rust crate (stm32g0 in this case), but maybe not looking in the right place
<jason-kairos[m]>
perhaps I should just grep for the address
<firefrommoonligh>
Yea; gut says find the addr and read it manually
<KevinPFleming[m]>
jason-kairos[m]: If you can find out how, please post here... I've been looking for that too.
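A hedged sketch of the usual approach: the unique ID is a few 32-bit words at a fixed address ("device electronic signature" in the reference manual), read with volatile loads. The base address differs per STM32 family and is not filled in here; in this host-runnable version the pointer targets a local array standing in for the registers.

```rust
// Reading a unique-ID register block via raw volatile reads.
use std::ptr;

/// Read three 32-bit words of unique ID starting at `base`.
///
/// Safety: `base` must point at 3 readable, aligned u32s (on target: the
/// UID address from your part's reference manual).
unsafe fn read_uid(base: *const u32) -> [u32; 3] {
    [
        ptr::read_volatile(base),
        ptr::read_volatile(base.add(1)),
        ptr::read_volatile(base.add(2)),
    ]
}

fn main() {
    // Fake "UID registers" so this example runs off-target.
    let fake_uid: [u32; 3] = [0xDEAD_BEEF, 0x1234_5678, 0x0000_0042];
    let uid = unsafe { read_uid(fake_uid.as_ptr()) };
    println!("{:08x} {:08x} {:08x}", uid[0], uid[1], uid[2]);
}
```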
<thejpster[m]>
<Kin-o-matix[m]> "Is it possible to not have..." <- You can now ask if panicinfo contains a string literal, and if so, you can print it with defmt.
<dngrs[m]>
I'm pretty sure I've read the UUID on either F1 or F4 without much hassle
<dngrs[m]>
but it's been a long time
<JamesMunns[m]>
If there were more things we wanted to look at, like if there's an easy way to categorize fmt strings, we could make a small cargo plugin to print stats pulled from the output of `nm` or whatever, without sharing any of the content of their projects
<JamesMunns[m]>
I don't think it'll show us any broad trends we don't have a good idea of, but it would be a useful datapoint imo to show a rough estimate of "how much of everyone's firmware is just panic formatting"
<JamesMunns[m]>
Maybe as a bonus question on the survey this year
<JamesMunns[m]>
I don't know if I have an easy solution. My general solution is "don't make big structs", if you have buffers or arrays, try to make sure they can be const-inited, and use a ConstStaticCell to force them definitely being const-initable. Also working hard to make sure they can be zero initted so they take up bss space and not data space (uses extra flash)
<JamesMunns[m]>
Anything that needs a variable capacity (especially large capacity), make sure it can either be const inited, or take a `&mut [T]` as the storage, so you can put the slice in a static cell
<JamesMunns[m]>
it's unsatisfying that it's not automatic, but it's rarely obtrusive if you know to design for it IME.
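The "take `&mut [T]` as the storage" idea, sketched: capacity lives in a const-initializable buffer (on target, e.g. inside a `ConstStaticCell` so it lands in `.bss`), and the data structure only borrows it. `ByteQueue` is an illustrative name, not an existing API.

```rust
// Illustrative "borrowed storage" data structure.
pub struct ByteQueue<'a> {
    storage: &'a mut [u8],
    len: usize,
}

impl<'a> ByteQueue<'a> {
    pub fn new(storage: &'a mut [u8]) -> Self {
        ByteQueue { storage, len: 0 }
    }
    /// Append a byte; capacity is fixed by the borrowed slice.
    pub fn push(&mut self, b: u8) -> Result<(), u8> {
        if self.len == self.storage.len() {
            return Err(b); // full: no allocation, no const generics
        }
        self.storage[self.len] = b;
        self.len += 1;
        Ok(())
    }
    pub fn len(&self) -> usize {
        self.len
    }
}

fn main() {
    // On target this buffer would sit in a static (zero-initable -> .bss).
    let mut buf = [0u8; 4];
    let mut q = ByteQueue::new(&mut buf);
    assert!(q.push(1).is_ok());
    assert_eq!(q.len(), 1);
}
```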
<JamesMunns[m]>
For variable capacity thing, I've been messing with making nice APIs for intrusive lists, which can be an alternative to const-generic-bounded storage, tho it has some api considerations that IMO makes it easier to use on the inside of a library (I use it a LOT in ergot, it optimizes well IMO)
<JamesMunns[m]>
JamesMunns[m]: This is a much smaller project where it's like 1/2 of the total bin size, 5-6KiB text, 3KiB rodata
<diondokter[m]>
For a project I tried out an unstable option so all panic paths are relative instead of absolute (so the paths are shorter) and that alone saved 13kb
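On stable toolchains, the closest widely available knob is rustc's `--remap-path-prefix` flag; cargo's `trim-paths` (RFC 3127) is the unstable, more automatic variant. A hedged `.cargo/config.toml` sketch (the source path is a placeholder to adjust):

```toml
# .cargo/config.toml -- replace the prefix with your actual checkout path.
[build]
rustflags = ["--remap-path-prefix=/home/user/project=."]
```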
<JamesMunns[m]>
Is there an easy way to objdump just the rodata strings?
<diondokter[m]>
Yeah, but I don't know it by heart
<dngrs[m]>
<JamesMunns[m]> "Hey @diondokter, re: https://..." <- completely besides the point but curious how did you get that permalink? If I "share", I get something that looks more canonical: https://bsky.app/profile/jamesmunns.com/post/3ltkoobavks2b
<vollbrecht[m]>
ah here was the full [rfc](https://rust-lang.github.io/rfcs/3127-trim-paths.html) with all options, there exist i think 3 ways: --remap-path-prefix, --remap-path-scope and --remap-cwd-prefix or something :D
<vollbrecht[m]>
and i don't think they cover everything, better have one more flag invented next
<JamesMunns[m]>
Nice, you have no panics in your example :D
<JamesMunns[m]>
<dngrs[m]> "completely besides the point but..." <- Honestly I'm not sure how to make it show you the hash ref for posts
<i509vcb[m]>
Well I am using the unwrap macro from defmt
<dngrs[m]>
JamesMunns[m]: not important at all anyway, just a headscratcher
<vollbrecht[m]>
also there is this other flag whose name i forget, where you can specify that the panic handler should store path, line number and line position. All 3 infos can also create a lot of duplication inside the binary