[riot-devel] Timers

David Lyon david.lyon at clixx.io
Tue Sep 23 15:26:43 CEST 2014


On 2014-09-18 21:23, Hauke Petersen wrote:
> Hi Oleg, hi everyone,
> 
>  first of all +1 for rethinking the current timer implementation. I
> was also having some thoughts over the last weeks on how we could
> 
>  1. Hardware peripherals:
>  This group contains all functions that are covered directly by
> hardware peripherals. These should in my opinion be covered by
> low-level drivers. In this group I see
>  - PWM (already in RIOT)
>  - PFM (I don't know this, can it be done by HW directly?)
>  - Input pulse length measurement (I have an interface started, but no
> PR for it yet)
>  - Watchdog (interface needed)

+1

>  The accuracy of these modules depends heavily on the underlying
> hardware and should be controlled by the low-level driver interface
> (as it is done for the PWM so far).
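
On the watchdog point: to make it concrete, a low-level interface along
these lines is roughly what I'd picture. This is purely a sketch of
mine - the names are made up and not an existing RIOT API:

#include <stdint.h>

/* hypothetical watchdog driver interface, sketched in the style of the
 * other low-level periph drivers; made-up names, not actual RIOT code */
typedef enum {
    WDG_OK     =  0,
    WDG_NOTSUP = -1      /* requested timeout not supported by the HW */
} wdg_result_t;

/* start the watchdog with a timeout in milliseconds; the achievable
 * range and resolution depend entirely on the underlying hardware */
wdg_result_t wdg_start(uint32_t timeout_ms);

/* feed ("kick") the watchdog before the timeout expires */
void wdg_feed(void);

/* stop the watchdog, if the hardware allows stopping it at all */
wdg_result_t wdg_stop(void);

The accuracy question would then be answered per platform by whatever
the implementation behind such an interface can actually do.
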
> 
>  2. Waiting/Sleeping

+1
>  - High resolution: waiting for a (precise) short amount of time.
> Typically in the order of micro- or even nanoseconds.

In these cases, I'd just suggest using a blocking wait, as currently 
implemented, that simply chews cycles (a busy-wait loop).
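
Something along these lines - a sketch only, where the cycles-per-
microsecond constant and the cycles-per-iteration guess are entirely
board-specific assumptions and would need calibrating per CPU:

#include <stdint.h>

/* crude busy-wait for short, high-resolution delays: it just chews
 * cycles. CYCLES_PER_USEC is an assumed board-specific constant and
 * the divide-by-4 is a rough guess at cycles per loop iteration. */
#define CYCLES_PER_USEC   (48U)   /* e.g. a 48 MHz core */

static void spin_delay_us(uint32_t usec)
{
    volatile uint32_t count = (usec * CYCLES_PER_USEC) / 4U;
    while (count--) {
        /* nothing to do, the point is to burn time without sleeping */
    }
}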

>  - Low resolution: waiting for a mid-to-long span of time
> (speaking from a millisecond to many seconds) and used in
> thread-context. This I would imagine can be implemented by putting the
> calling thread to sleep. For a wider time span this behavior could be
> based on the low-level timer and rtt/rtc peripherals (basically what
> the vtimer is doing at the moment).
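
Just to spell out how I read that part: the calling thread is put to
sleep and an rtt alarm wakes it up again. A rough sketch - I'm treating
the exact rtt/thread function names as assumptions here, and I'm
ignoring counter overflow and wrap-around entirely:

#include <stdint.h>
#include "thread.h"
#include "periph/rtt.h"

/* sketch: block the calling thread for a number of RTT ticks by setting
 * an RTT alarm whose callback wakes the thread up again */
static void _wakeup_cb(void *arg)
{
    thread_wakeup((kernel_pid_t)(intptr_t)arg);
}

static void lowres_sleep_ticks(uint32_t ticks)
{
    kernel_pid_t me = thread_getpid();
    rtt_set_alarm(rtt_get_counter() + ticks, _wakeup_cb,
                  (void *)(intptr_t)me);
    thread_sleep();     /* blocks until the alarm callback wakes us */
}
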
> 
>  3. Timeouts
>  Some threads (e.g. protocol implementations) need to react to
> timeouts, while doing other tasks when waiting. The resolution is
> typically rather low(?), in the order of milliseconds to multiple seconds. ..
> 
>  4. Periodic triggers
>  I think we can all agree that RIOT, due to its tickless scheduling
> paradigm, is not very friendly to periodic tasks at the moment. In this

Sure. Well, I'd just suggest something not done in other systems: 
implement a low-resolution timer with a default period of 1 second, 
optionally supporting 100 milliseconds (once again, for low-resolution 
applications).

As for application programs, I can't see why they (the applications) 
can't count their own 'seconds'. There are hardware limits to timers 
(that we all know about) in terms of how long they can count before 
they overflow. That varies from processor to processor, and it is 
therefore confusing to remember the specific timer overflow limits of 
each uC.

With 'int' types and 'long' types being so darn-big these days, a 
somewhat accurate "second" timer is pretty much all applications need in 
the use-case mentioned.
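
To illustrate: counting your own seconds off a plain 1 Hz tick can be
as simple as the following. The tick hookup is hypothetical; the point
is only that the application-side counting is trivial and a 32-bit
seconds counter takes over a century to overflow:

#include <stdint.h>

/* incremented once per second from the (hypothetical) low-resolution
 * timer; on 8/16-bit MCUs reads of the 32-bit value may need guarding */
static volatile uint32_t uptime_seconds;

static void one_second_tick(void *arg)
{
    (void)arg;
    uptime_seconds++;
}

/* example use: run some protocol housekeeping every 30 seconds */
static uint32_t next_housekeeping;

static void maybe_do_housekeeping(void)
{
    if ((int32_t)(uptime_seconds - next_housekeeping) >= 0) {
        next_housekeeping = uptime_seconds + 30;
        /* ... do the periodic work here ... */
    }
}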

Embedded programmers should be able to deal with the simplicity that 
having a "second" and optionally a 1/10th-second resolution would bring.

These are just my opinions. It would reduce the workload, not increase 
it.

If anyone wishes to discuss why a "one-second" timer wouldn't work (for 
low-res), I'd be happy for a follow-on discussion.

Regards

David

