[riot-notifications] [RIOT-OS/RIOT] sys/net/gnrc/tx_sync: new module (#15694)

Marian Buschsieweke notifications at github.com
Tue Jan 12 09:01:26 CET 2021


@maribu commented on this pull request.



> +/**
+ * @brief   Signal TX completion via the given tx sync packet snip
+ *
+ * @pre     Module `gnrc_tx_sync` is used
+ * @pre     `pkt->type == GNRC_NETTYPE_TX_SYNC`
+ *
+ * @param   pkt     The tx sync packet snip of the packet that was transmitted
+ */
+static inline void gnrc_tx_complete(gnrc_pktsnip_t *pkt)
+{
+    assert(IS_USED(MODULE_GNRC_TX_SYNC) && (pkt->type == GNRC_NETTYPE_TX_SYNC));
+    /* Allow for multiple waiters by just unlocking the mutex until all
+     * blocked threads have resumed */
+    gnrc_tx_sync_t *sync = pkt->data;
+    do {
+        mutex_unlock(&sync->signal);

Which API do you refer to by `signal`?

A condition variable does not really match the use case here. With a condition variable you have a shared data structure that you may only access while holding a mutex. So you lock the mutex, check whether the shared state satisfies some condition, and if not, you call `cond_wait()` with the still-locked mutex as argument. `cond_wait()` internally unlocks the mutex while waiting and locks it again before it returns. So just to comply with the API, we would need an additional, otherwise meaningless mutex, which adds complexity.

A semaphore is also not what we want here. A semaphore is a generalized mutex in which the number of threads allowed to enter the critical section becomes an initialization parameter. We could block on it by initializing it with a capacity of zero. But unblocking all waiters would then require calling `sema_post()` in a loop until every waiter is unblocked, and without relying on implementation details it is difficult to know when that point has been reached.

Anyway: I only added this "wake all waiters" feature because I saw that `gnrc_neterr` allows multiple threads to register (well, only one per snip). So if we want to base `gnrc_neterr` on this, I thought it might be necessary to still allow multiple threads to wait for the error messages. However, in the RIOT code base only the very same thread registers for error reporting (but to every snip). So maybe it would be better to allow only a single thread to wait for completion. If a use case pops up, multi-waiter support can still be implemented without touching the API. (That would also allow using something other than a mutex for synchronization, should that turn out to be beneficial in such a use case.)

-- 
You are receiving this because you are subscribed to this thread.
Reply to this email directly or view it on GitHub:
https://github.com/RIOT-OS/RIOT/pull/15694#discussion_r555576248


More information about the notifications mailing list