<JamesMunns[m]>
If the repo is just dead, I would suggest you fork it and just start working. Eventually, you can either re-merge it back to the old project, or just ask them to point people to your work. That will unblock you, and allow folks some time to decide what they want to do.
<dcz[m]>
That makes sense, but I'd like to reach out to the RISC-V team first to make sure they see the work and also to understand what they think I should do.
<seds>
Check their bug page, IRC chats, forums, etc. That will give you an idea of what you should do; then you just do it
<seds>
I would not expect the riscv team to tell you what to do haha
<seds>
this is how open source generally works and how you contribute: you look into things that are needed/need fixing and do them
<dcz[m]>
the list of issues has been full of ignored requests, so I don't really think I will get anywhere there. A maintainer did tell me to become part of the RISC-V team, so I'm trying to find someone who can personally add me. I hope making noise here helps with that and I can apply the fixes and release the crate :)
<seds>
dcz[m]: I don't know their process for joining the official team, but I find it weird that they would add anyone without much past contribution. Also, I think the correct channel would be #riscv, no?
<dcz[m]>
I would also find it weird, since all I want to do is revive a single crate, but I'm doing as told. And thanks for the pointer
<dcz[m]>
hm, I can't find a risc-v room
<JamesMunns[m]>
Seds is on the IRC bridge; you're in the Matrix room, dcz
<JamesMunns[m]>
for "rust on riscv" this is probably the right room
<JamesMunns[m]>
but in general, it's more likely to get approved when folks are familiar with you
<JamesMunns[m]>
so, you could start working on your fork; if folks think your work looks reasonable, you could open a PR. It'll mean getting approval from the current risc-v team.
<dcz[m]>
I realize that. I don't expect to get accepted, but maybe it helps me get access to this one already-dead crate. While I don't have rust-risc-v-official history, I hope I have enough background...
<dcz[m]>
and thanks for the pointer!
<seds>
Just fork the project, make your changes, open the PR with the changes and tag some related folks in the PR
<seds>
I would not open any PR adding myself as a contributor if I had not made any contributions yet hehe. If you look at the example JamesMunns[m] shared, the author does exactly that.
<dngrs[m]>
<thejpster[m]> dngrs: you can just mail me and ask for an account. It's fine....
<dngrs[m]>
thank you - but it's not about myself, instead I'm thinking about how to best foster general contributions. I totally understand the angle of combating spam/vandalism, but ultimately I think we have different ideas on how to run a wiki - and that's fine! But also probably not a good use of either of our time if we tried to come to an agreement. It's your site, I don't want to tell you how to run it at all
<JamesMunns[m]>
There was also at least one RTT issue that got fixed in recent versions of probe-rs; it would be worth checking the version of probe-rs as well
<danielb[m]>
rtt_target works just fine for me
<danielb[m]>
granted, I never tried printing to multiple channels like that, so it could indeed be an rtt_target issue somehow
<dcz[m]>
oh, so it's Disasm. I managed to get in touch with them, so maybe there's a chance
<Mihael[m]>
how normal is it for embedded software to require some form of customization/implementation for actual functionality?
<Mihael[m]>
For example providing traits and allowing the user to implement actual features
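To make Mihael's question concrete: yes, that trait-based split is the norm in embedded Rust, e.g. embedded-hal. A toy sketch of the pattern follows; `Led`, `Blinker`, and `RecordingLed` are invented names for illustration, not embedded-hal items.

```rust
// Sketch of the "library defines traits, user implements the hardware"
// pattern Mihael asks about. All names here are made up for illustration.

/// Trait the library defines; users implement it for their hardware.
trait Led {
    fn set(&mut self, on: bool);
}

/// Library functionality written purely against the trait, with no
/// knowledge of any concrete chip.
struct Blinker<L: Led> {
    led: L,
    on: bool,
}

impl<L: Led> Blinker<L> {
    fn new(led: L) -> Self {
        Blinker { led, on: false }
    }
    fn toggle(&mut self) {
        self.on = !self.on;
        self.led.set(self.on);
    }
}

/// User-side implementation; on real hardware this would write a GPIO
/// register, here it just records what was requested.
struct RecordingLed {
    history: Vec<bool>,
}

impl Led for RecordingLed {
    fn set(&mut self, on: bool) {
        self.history.push(on);
    }
}
```

The driver crate stays hardware-agnostic, and "customization for actual functionality" amounts to one trait impl on the user's side.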
<dcz[m]>
is there some kind of a HAL for co-operative multithreading, like there are async HAL pieces?
<Mihael[m]>
As in crates / libs
<dcz[m]>
the normal HAL would work with pre-emptive multithreading, except I've seen delay() calls which occupy the core when it could be doing other things in that time. I'm tempted to take a shot at creating something if nothing exists
<dirbaio[m]>
Embedded-hal-async?
<dirbaio[m]>
Not sure what you're asking for
<dirbaio[m]>
Async is cooperative multitasking
<dcz[m]>
I mean co-operative multithreading without async :)
<dcz[m]>
from what I'm thinking, it could work by defining 3 operations: yield, delay, and some synchronization primitive like a channel or mutex
<dcz[m]>
but even existing crates based on the non-async HAL would (suboptimally) run if pre-emption were added
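The three primitives dcz proposes could be sketched as a trait. Everything below is hypothetical (not an existing crate), and the "scheduler" is a trivial single-threaded stand-in: a real implementation would switch to another stackful task where this one just bumps a tick counter.

```rust
// Hypothetical sketch of dcz's three primitives for a non-async
// cooperative HAL: yield, delay, and a channel. Names are invented.

use std::collections::VecDeque;

/// The surface a driver would code against instead of busy-waiting.
trait CoopSched {
    /// Give other runnable tasks a chance to execute.
    fn yield_now(&mut self);
    /// Sleep for `ticks` without occupying the core.
    fn delay(&mut self, ticks: u32);
}

/// Toy single-core scheduler: "yielding" just advances a tick counter,
/// standing in for a context switch to another task.
struct ToySched {
    tick: u32,
}

impl CoopSched for ToySched {
    fn yield_now(&mut self) {
        self.tick += 1; // a real scheduler would swap stacks here
    }
    fn delay(&mut self, ticks: u32) {
        self.tick += ticks;
    }
}

/// The third primitive: a channel usable from cooperatively scheduled
/// tasks. Receiving yields (rather than busy-spins) until data arrives.
struct CoopChannel<T> {
    queue: VecDeque<T>,
}

impl<T> CoopChannel<T> {
    fn new() -> Self {
        CoopChannel { queue: VecDeque::new() }
    }
    fn send(&mut self, item: T) {
        self.queue.push_back(item);
    }
    fn recv<S: CoopSched>(&mut self, sched: &mut S) -> T {
        loop {
            if let Some(item) = self.queue.pop_front() {
                return item;
            }
            sched.yield_now(); // block cooperatively, not with a busy spin
        }
    }
}
```

Drivers written against `CoopSched` would not care whether the implementation behind it is cooperative or pre-emptive, which is the portability dcz is after.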
<MartinSivk[m]>
dcz: That is what the `nb` crate did. It used a special error WouldBlock to signal a not ready state
<MartinSivk[m]>
You need support for the primitives from within the drivers though, there is no way to yield otherwise
<MartinSivk[m]>
Pre-emption is much harder, you need to preserve the stack somehow
<MartinSivk[m]>
Unless you do run to completion and only preempt to run higher prio jobs to completion
<dcz[m]>
does nb support doing several delaying operations in one function call? I might not understand how it works, but if it returns a WouldBlock after the second operation, the next call would have to repeat the first one too
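dcz's concern can be made concrete: with nb-style `WouldBlock` polling, a function that performs two potentially-blocking steps needs explicit state so a retry resumes at the step that blocked, rather than redoing step one. A sketch with a simulated peripheral (this defines its own `Poll2` stand-in instead of pulling in the real `nb` crate):

```rust
// Sketch of the multi-step polling problem. `Poll2::WouldBlock` stands in
// for nb::Error::WouldBlock; the peripheral is simulated by counters.

#[derive(Debug, PartialEq)]
enum Poll2<T> {
    Ready(T),
    WouldBlock,
}

/// Simulated peripheral: each step must be polled a few times before it
/// completes.
struct FakeUart {
    step1_polls_left: u8,
    step2_polls_left: u8,
}

/// Explicit state machine so a retry resumes at the step that blocked,
/// instead of restarting from the beginning.
enum TxState {
    Step1,
    Step2,
}

struct TwoStepTx {
    state: TxState,
}

impl TwoStepTx {
    fn new() -> Self {
        TwoStepTx { state: TxState::Step1 }
    }

    fn poll(&mut self, uart: &mut FakeUart) -> Poll2<()> {
        loop {
            match self.state {
                TxState::Step1 => {
                    if uart.step1_polls_left > 0 {
                        uart.step1_polls_left -= 1;
                        return Poll2::WouldBlock;
                    }
                    self.state = TxState::Step2; // step 1 done, never repeated
                }
                TxState::Step2 => {
                    if uart.step2_polls_left > 0 {
                        uart.step2_polls_left -= 1;
                        return Poll2::WouldBlock;
                    }
                    return Poll2::Ready(());
                }
            }
        }
    }
}
```

This hand-written state machine is exactly what `async fn` generates for you at compile time, which is dirbaio's point below about reinventing async.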
<MartinSivk[m]>
That is going to be full of unsafe in Rust :) My C++ code that did that was a mess too. But so are some aspects of the async runtimes... so I guess it is not such a big deal.
<dcz[m]>
the async runtime I wrote doesn't have any unsafe I think... but I don't want to rewrite every driver I see. Also async is a pain to debug
<MartinSivk[m]>
RawWaker needs a tiny bit of unsafe I think, but it might depend on the env. I pulled a few tricks on my small stm32 to keep the runtime simple.
<dcz[m]>
I managed to not understand wakers :D tbh I'm kinda curious even though I'm not using them
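For reference, the "tiny bit of unsafe" MartinSivk mentions usually looks like this: a no-op `RawWaker` plus a minimal busy-polling `block_on`. This is a generic sketch of a well-known pattern, not any particular crate's code.

```rust
// Minimal hand-rolled waker and executor. The only unsafe is building the
// Waker from a RawWaker whose vtable we promise upholds Waker's contract.

use core::task::{RawWaker, RawWakerVTable, Waker};

// All four vtable entries do nothing; `clone` just returns another no-op
// RawWaker. The data pointer is unused, so null is fine.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker {
        noop_raw_waker()
    }
    fn wake(_: *const ()) {}
    fn wake_by_ref(_: *const ()) {}
    fn drop(_: *const ()) {}

    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, wake, wake_by_ref, drop);
    RawWaker::new(core::ptr::null(), &VTABLE)
}

fn noop_waker() -> Waker {
    // Unsafe because the vtable functions must uphold Waker's contract
    // (thread safety, never dereferencing the data pointer incorrectly).
    // Ours ignore the pointer entirely, so this is sound.
    unsafe { Waker::from_raw(noop_raw_waker()) }
}

/// Minimal block_on: poll the future in a loop with the no-op waker.
/// Real executors sleep between polls; this one spins, which only suits
/// futures that make progress on every poll.
fn block_on<F: core::future::Future>(fut: F) -> F::Output {
    use core::task::{Context, Poll};
    let mut fut = core::pin::pin!(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}
```

On a small embedded executor the wake functions would typically set a flag or pend an interrupt instead of doing nothing, but the unsafe surface is the same.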
<dirbaio[m]>
<dcz[m]> from what I'm thinking, it could work by defining 3 operations: yield, delay, and some synchronization primitive like a channel or mutex
<dirbaio[m]>
this is basically reinventing async :)
<dirbaio[m]>
but worse because no compiler support
<dcz[m]>
it's solving the same problem, but without the colored function problem
<MartinSivk[m]>
Well, existing drivers need some kind of support for either Delay or interrupt handling (Wakers...). Delay has a pretty limited API, so to use an existing driver with a pre-emptible Delay you would need something similar to the QK system I linked.
<dirbaio[m]>
ah you mean actual stackful tasks
<dcz[m]>
yeah
<MartinSivk[m]>
Basically, the delay would check the work queue and execute a higher prio job instead of waiting.
<dirbaio[m]>
you'd still have to rewrite every driver and HAL to get them to use your yield/delay primitives
<dcz[m]>
if I add pre-emption, I don't have to but I could to make them work better
<dirbaio[m]>
so you still have coloring, except the "color" is now "how does the driver do the waiting"
<dirbaio[m]>
this is why C RTOSs build their own HALs, to make them use their threading primitives
<dirbaio[m]>
you can't escape function coloring :)
<dcz[m]>
nah, the coloring is an issue because it infects all callers. With stackful threads it doesn't matter if the called function does busy loops or yields via a syscall. If the driver relies on another driver, like an adc, I can give it ADCs of either kind and it won't know
<dcz[m]>
that means yeah there are different kind of code, but the problem is nothing like the coloring one
<dirbaio[m]>
???
<dirbaio[m]>
a classic blocking driver does `while !some_reg.read().done() {}`, this will hang your cooperative scheduler
<dirbaio[m]>
and will waste power if you make it preemptive
<dcz[m]>
it will waste power but won't be unusable
<dirbaio[m]>
your custom drivers will have to do `while !some_reg.read().done() { my_scheduler::yield() }`
<dirbaio[m]>
which will have to crash at runtime if you're not actually running it under my_scheduler
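dirbaio's two driver flavours can be put side by side. This is a sketch: `FakeReg` and the `yield_now` callback are stand-ins for a real status register and `my_scheduler::yield`, which don't exist as written.

```rust
// The "color" is how the driver waits: the same logical driver needs two
// bodies, and nothing but convention stops you from calling the wrong one
// in the wrong context.

/// Simulated status register: reports `done` after a few reads.
struct FakeReg {
    reads_until_done: u32,
}

impl FakeReg {
    fn done(&mut self) -> bool {
        if self.reads_until_done == 0 {
            true
        } else {
            self.reads_until_done -= 1;
            false
        }
    }
}

/// Classic blocking driver: hangs a cooperative scheduler and wastes
/// power under a pre-emptive one. Returns the spin count for inspection.
fn wait_blocking(reg: &mut FakeReg) -> u32 {
    let mut spins = 0;
    while !reg.done() {
        spins += 1; // pure busy loop
    }
    spins
}

/// Scheduler-aware driver: same logic, but each retry yields. Calling
/// this where no scheduler is running would have to fail at runtime,
/// which is the coloring problem the compiler can't catch here.
fn wait_yielding(reg: &mut FakeReg, mut yield_now: impl FnMut()) -> u32 {
    let mut yields = 0;
    while !reg.done() {
        yield_now();
        yields += 1;
    }
    yields
}
```

With async, the split between these two flavours is visible in the function signature and enforced at compile time; here it lives only in documentation.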
<dcz[m]>
yeah, that's a bit of a bummer: drivers adapted to co-operative scheduling would need modifications to the runtime
<dirbaio[m]>
so it's the same coloring problem:
<dirbaio[m]>
you can't run async code from a non-async context
<dirbaio[m]>
you can't run code using my_scheduler under a non-my_scheduler context
<dirbaio[m]>
except in the 1st case the compiler catches it
<dirbaio[m]>
while the 2nd will have to crash at runtime or something
<dirbaio[m]>
if i'm going to have coloring issues, i'd rather have the compiler catch it :P
<dcz[m]>
probably at link-time though
<dirbaio[m]>
dcz[m]: your binary might have my_scheduler, but you might call that code from e.g. an ISR handler
<dirbaio[m]>
where the scheduler isn't capable of scheduling it
<dirbaio[m]>
because it's not in thread mode
<dirbaio[m]>
so the linker won't catch it
<dirbaio[m]>
it really has to panic (or worse, UB) at runtime
<dcz[m]>
that's a good point, the trade-off is similar to running synchronous, power-wasting code under an async runtime
<dirbaio[m]>
this is a problem plaguing all C RTOSs, they all have this concept of which RTOS primitives are "ISR-safe" or not
<dirbaio[m]>
impossible to catch at compile time
<dirbaio[m]>
and it's still a form of "coloring"
<dirbaio[m]>
all functions that call some RTOS thing have a safety contract of "don't call this from ISRs or some similar non-scheduler-friendly context"
<dirbaio[m]>
it's also infectious :)
<dirbaio[m]>
and worse because the compiler won't check it for you, while it will for async
<dcz[m]>
why is async special here? because it doesn't run in ISR context, or because Rust is checking ownership of the system primitives?
<dirbaio[m]>
Rust checks you only call async code from other async code
<dirbaio[m]>
(or from top-level primitives that create an async context, like executor tasks, or block_on())
<dcz[m]>
I think I get it: you can't alter the state of the async execution unless you're actually running the async thing properly
<dcz[m]>
so there's no risk of doing async stuff outside async context
<dcz[m]>
thanks
<dcz[m]>
also, bummer
<MartinSivk[m]>
The "classic" drivers could be tricked into pre-emptive, run-to-completion, single-stack yielding if they use Delay even in the fast loops. But they commonly do not...
<MartinSivk[m]>
But yeah, bummer... I fought with that when writing my async stuff too
<MartinSivk[m]>
I had to rewrite a lot of stuff to make it work