ChanServ changed the topic of #rust-embedded to: Welcome to the Rust Embedded IRC channel! Bridged to #rust-embedded:matrix.org and logged at https://libera.irclog.whitequark.org/rust-embedded, code of conduct at https://www.rust-lang.org/conduct.html
<dcz[m]> <seds> You can start by contributing with their open source stuff
<dcz[m]> Well here's that https://github.com/riscv-rust/gd32vf103xx-hal/pull/63 but that repo is pretty dead. Which is also the whole reason I want to join: https://github.com/riscv-rust/gd32vf103xx-hal/issues/62
<dcz[m]> who should I talk to next?
<JamesMunns[m]> If the repo is just dead, I would suggest you fork it and just start working. Eventually, you can either re-merge it back to the old project, or just ask them to point people to your work. That will unblock you, and allow folks some time to decide what they want to do.
<dcz[m]> That makes sense, but I'd like to reach out to the RISC-V team first to make sure they see the work and also to understand what they think I should do.
<seds> Check their bug tracker, IRC chats, forums, etc. That will give you an idea of what you should do, then you just do it
<seds> I would not expect riscv team to tell you what to do haha
<seds> this is how open source generally works and how you contribute. You look into things that need fixing and do it
<dcz[m]> the list of issues has been full of ignored requests, so I don't really think I will get anywhere there. A maintainer did tell me to become part of the RISC-V team, so I'm trying to find someone who can personally add me. I hope making noise here helps with that and I can apply the fixes and release the crate :)
<seds> dcz[m]: I don't know their process for joining the official team, but I find it weird that they would add anyone without much past contribution. Also, I think the correct channel would be #riscv, no?
<dcz[m]> I would also find it weird, since all I want to do is revive a single crate, but I'm doing as told. And thanks for the pointer
<dcz[m]> hm, I can't find a risc-v room
<JamesMunns[m]> seds is on the IRC bridge, you're in the Matrix room, dcz
<JamesMunns[m]> for "rust on riscv" this is probably the right room
<JamesMunns[m]> joining the risc-v team would mean opening a PR like this: https://github.com/rust-embedded/wg/pull/862
<JamesMunns[m]> but in general, it's more likely to get approved when folks are familiar with you
<JamesMunns[m]> so, you could start working on your fork, if folks think your work looks reasonable, you could open a PR. It'll mean getting approval from the current risc-v team.
<dcz[m]> I realize that. I don't expect to get accepted, but maybe it helps me get access to this one already-dead crate. While I don't have official rust-risc-v history, I hope I have enough background...
<dcz[m]> and thanks for the pointer!
<seds> Just fork the project, make your changes, open the PR with the changes and tag some related folks in the PR
<seds> I would not open any PR adding myself as a contributor if I had not made any contributions yet hehe. If you look at the example JamesMunns[m] shared, the author does exactly that.
<dcz[m]> the die has been cast: https://github.com/rust-embedded/wg/pull/866
<dngrs[m]> <thejpster[m]> dngrs: you can just mail me and ask for an account. It's fine....
<dngrs[m]> thank you - but it's not about myself, instead I'm thinking about how to best foster general contributions. I totally understand the angle of combating spam/vandalism, but ultimately I think we have different ideas on how to run a wiki - and that's fine! But also probably not a good use of either of our time if we tried to come to an agreement. It's your site, I don't want to tell you how to run it at all
<AshconMohseninia> I tried to create something useful for my project akin to PlatformIO's build output
<AshconMohseninia> quite nice tbh, but I think this could be quite useful in probe-rs
<JamesMunns[m]> It does look nice, though most folks use cargo size from cargo-binutils
<diondokter[m]> AshconMohseninia: How do you know the max size in your tool?
<JamesMunns[m]> I assume that's visible in the debuginfo if the linker script has definitions for RAM and FLASH sections? Could be wrong tho
<diondokter[m]> I don't think the elf has that info. So you'd have to make assumptions about the symbols that are present
<diondokter[m]> Cortex-m-rt has _ram_start and _ram_end. But that's only for the 'primary' ram. There's no equivalent for the flash
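[editor's note] The region definitions being discussed live in a cortex-m-rt `memory.x` linker script. A minimal sketch is below; the sizes are hypothetical (a 128K-flash/32K-RAM part), and per diondokter's point the MEMORY block exists only at link time and is not emitted into the ELF, which is why a size tool cannot recover the maximums from the binary alone:

```text
/* memory.x for cortex-m-rt; ORIGIN/LENGTH values are
   device-specific and hypothetical here. The MEMORY block is
   consumed by the linker and does not survive into the ELF. */
MEMORY
{
  FLASH : ORIGIN = 0x08000000, LENGTH = 128K
  RAM   : ORIGIN = 0x20000000, LENGTH = 32K
}
```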
<wassasin[m]> At runtime it could also mean space remaining in a sequential-storage or evk partition
<thejpster[m]> Does anyone use rtt_target? I've got someone complaining that defmt isn't working right (https://github.com/knurling-rs/defmt/issues/981), and I took a quick look and immediately saw `static mut CHANNEL: Option<UpChannel>` (https://github.com/probe-rs/rtt-target/blob/117d9519a5d3b1f4bc024bc05f9e3c5dec0a57f5/rtt-target/src/defmt.rs#L10), which makes me not want to poke too much further.
<JamesMunns[m]> There was also at least one RTT issues that got fixed in recent versions of probe-rs, would be worth checking the version of probe-rs as well
<danielb[m]> rtt_target works just fine for me
<danielb[m]> granted, I never tried printing to multiple channels like that, so it could indeed be an rtt_target issue somehow
<GuineaWheek[m]> <dcz[m]> who's maintaining https://github.com/stm32-rs/synopsys-usb-otg/ ?
<GuineaWheek[m]> It’s effectively unmaintained
<GuineaWheek[m]> Disasm has been MIA for a hot sec now
<dcz[m]> oh, so it's Disasm. I managed to get in touch with them, so maybe there's a chance
<Mihael[m]> how normal is it for embedded software to require some form of customization/implementation for actual functionality?
<Mihael[m]> For example providing traits and allowing the user to implement actual features
<dcz[m]> is there some kind of a HAL for co-operative multithreading, like there are async HAL pieces?
<Mihael[m]> As in crates / libs
<dcz[m]> the normal HAL would work with pre-emptive multithreading, except I've seen delay() calls which occupy the core while it could be doing other things in that time. I'm tempted to give it a shot and create something if nothing exists
<dirbaio[m]> Embedded-hal-async?
<dirbaio[m]> Not sure what you're asking for
<dirbaio[m]> Async is cooperative multitasking
<dcz[m]> I mean co-operative multithreading without async :)
<dcz[m]> from what I'm thinking, it could work by defining 3 operations: yield, delay, and some synchronization primitive like a channel or mutex
<dcz[m]> but even existing non-async crates would (suboptimally) run if pre-emption were added
<MartinSivk[m]> dcz: That is what the `nb` crate did. It used a special error WouldBlock to signal a not ready state
<MartinSivk[m]> You need support for the primitives from within the drivers though, there is no way to yield otherwise
<MartinSivk[m]> Pre-emption is much harder, you need to preserve the stack somehow
<MartinSivk[m]> Unless you do run to completion and only preempt to run higher prio jobs to completion
<dcz[m]> does nb support doing several delaying operations in one function call? I might not understand how it works, but if it returns a WouldBlock after the second operation, the next call would have to repeat the first one too
<MartinSivk[m]> That is going to be full of unsafe in Rust :) My c++ code that did that was a mess too. But so are some aspects of the async runtimes.. so I guess it is not such a big deal.
<dcz[m]> I just found https://github.com/rcore-os/trapframe-rs so maybe I don't have to make the mess myself
<MartinSivk[m]> That would be up to the driver, nb just defined the return values and semantics
<MartinSivk[m]> And fell out of favor a bit..
<dcz[m]> there's also TockOS which is normally waiting nicely for userspace threads to return, but will force control away after a timeout
<MartinSivk[m]> Once upon a time I implemented something similar to https://www.state-machine.com/qpc/srs-qp_qk.html (in C++).
<dcz[m]> the async runtime I wrote doesn't have any unsafe I think... but I don't want to rewrite every driver I see. Also async is a pain to debug
<MartinSivk[m]> RawWaker needs a tiny bit of unsafe I think, but it might depend on the env. I pulled few tricks on my small stm32 to keep the runtime simple.
<dcz[m]> I managed to not understand wakers :D tbh I'm kinda curious even though I'm not using them
<dirbaio[m]> <dcz[m]> from what I'm thinking, it could work by defining 3 operations: yield, delay, and some synchronization primitive like a channel or mutex
<dirbaio[m]> this is basically reinventing async :)
<dirbaio[m]> but worse because no compiler support
<dcz[m]> it's solving the same problem, but without the colored function problem
<MartinSivk[m]> Well existing drivers need some kind of support for either Delay or interrupt handling (Wakers..). Delay has pretty limited API so to use an existing driver with a pre-emptible Delay you would need something similar to the QK system i linked.
<dirbaio[m]> ah you mean actual stackful tasks
<dcz[m]> yeah
<MartinSivk[m]> Basically, the delay would check the work queue and execute a higher prio job instead of waiting.
<dirbaio[m]> you'd still have to rewrite every driver and HAL to get them to use your yield/delay primitives
<dcz[m]> if I add pre-emption, I don't have to but I could to make them work better
<dirbaio[m]> so you still have coloring, except the "color" is now "how does the driver do the waiting"
<dirbaio[m]> this is why C RTOSs build their own HALs, to make them use their threading primitives
<dirbaio[m]> you can't escape function coloring :)
<dcz[m]> nah, the coloring is an issue because it infects all callers. With stackful threads it doesn't matter if the called function does busy loops or yields via a syscall. If the driver relies on another driver, like an adc, I can give it ADCs of either kind and it won't know
<dcz[m]> that means yeah there are different kind of code, but the problem is nothing like the coloring one
<dirbaio[m]> ???
<dirbaio[m]> a classic blocking driver does `while !some_reg.read().done() {}`, this will hang your cooperative scheduler
<dirbaio[m]> and will waste power if you make it preemptive
<dcz[m]> it will waste power but won't be unuseable
<dirbaio[m]> your custom drivers will have to do `while !some_reg.read().done() { my_scheduler::yield() }`
<dirbaio[m]> which will have to crash at runtime if you're not actually running it under my_scheduler
<dcz[m]> yeah that's a bit of a bummer: drivers adapted to co-operative scheduling would need modifications to the runtime
<dirbaio[m]> so it's the same coloring problem:
<dirbaio[m]> you can't run async code under a non-async context
<dirbaio[m]> you can't run code using my_scheduler under a non-my_scheduler context
<dirbaio[m]> except in the 1st case the compiler catches it
<dcz[m]> oh, that's what you meant
<dirbaio[m]> while the 2nd will have to crash at runtime or something
<dirbaio[m]> if i'm going to have coloring issues, i'd rather have the compiler catch it :P
<dcz[m]> probably at link-time though
<dirbaio[m]> dcz[m]: your binary might have my_scheduler, but you might call that code from e.g. an ISR handler
<dirbaio[m]> where the scheduler isn't capable of scheduling it
<dirbaio[m]> because it's not in thread mode
<dirbaio[m]> so the linker won't catch it
<dirbaio[m]> it really has to panic (or worse, UB) at runtime
<dcz[m]> that's a good point, the trade-off is similar to running synchronous, power-wasting code under an async runtime
<dirbaio[m]> this is a problem plaguing all C RTOSs, they all have this concept of which RTOS primitives are "ISR-safe" or not
<dirbaio[m]> impossible to catch at compile time
<dirbaio[m]> and it's still a form of "coloring"
<dirbaio[m]> all functions that call some RTOS thing have a safety contract of "don't call this from ISRs or some similar non-scheduler-friendly context"
<dirbaio[m]> it's also infectious :)
<dirbaio[m]> and worse because the compiler won't check it for you, while it will for async
<dcz[m]> why is async special here? because it doesn't run in ISR context, or because Rust is checking ownership of the system primitives?
<dirbaio[m]> Rust checks you only call async code from other async code
<dirbaio[m]> (or from top-level primitives that create an async context, like executor tasks, or block_on())
<dcz[m]> I think I get it: you can't alter the state of the async execution unless you're actually running the async thing properly
<dcz[m]> so there's no risk of doing async stuff outside async context
<dcz[m]> thanks
<dcz[m]> also, bummer
<MartinSivk[m]> The "classic" drivers could be tricked into pre-emptive run to completion single stack yielding if they use Delay even in the fast loops. But they commonly do not....
<MartinSivk[m]> But yeah, bummer... I fought with that when writing my async stuff too
<MartinSivk[m]> I had to rewrite a lot of stuff to make it work