[riot-notifications] [RIOT-OS/RIOT] cpu/native: fix race in thread_yield_higher() (#10891)
notifications at github.com
Tue Jan 29 12:19:02 CET 2019
> However, I'm still not really able to track how the previous behavior was wrong / led to invalid unblocking, so I don't feel 100% confident in this bug fix.
I'll try to put it in different words.
Usually, when thread_yield_higher() is called on native outside of an ISR, an "isr context" is created with makecontext(), set up to execute isr_thread_yield(). swapcontext() then saves the thread context and jumps into that newly created isr context, launching isr_thread_yield().
Now, if there are no signals pending, the scheduler is run, then a context switch to the then-current sched_active_thread is initiated. All is good.
But it is possible that signals queue up (after irq_disable in thread_yield_higher()), making isr_thread_yield() call native_irq_handler(). That function does not return; it jumps back to thread context by itself after handling ISRs. That jump back to thread context is almost identical to what isr_thread_yield() would have done had there been no signals pending. The main difference is that native_irq_handler() does not always call sched_run(), but only if "sched_context_switch_request" was set.
This led to the crash at hand: msg_receive() removes the current thread from the runqueue, then calls thread_yield_higher() *expecting* to be scheduled away. But on the off chance that a signal pops up at the wrong time (and the ISR does not set sched_context_switch_request=1), the scheduler doesn't get called, and msg_receive() returns without receiving.
The fix works by *always* setting "sched_context_switch_request" to 1, so that should isr_thread_yield() degrade into native_irq_handler(), the scheduler is still called.