Time delay in milliseconds 
 Time delay in milliseconds

Hi All!

I'm writing a C program under Linux.
Is there a 'portable' function (which means, present in MS Windows too) to delay the program execution using milliseconds?

Regards,
Stefano



Sun, 21 Dec 2003 01:53:17 GMT  
 Time delay in milliseconds
Quote:

> Hi All!

> I'm writing a C program under Linux.
> Is there a 'portable' function (which means, present in MS Windows too) to delay the program execution using milliseconds?

No (rather unfortunately).

HTH,
--ag

--
Artie Gold, Austin, TX  (finger the cs.utexas.edu account for more info)

--
Verbing weirds language.



Sun, 21 Dec 2003 01:55:27 GMT  
 Time delay in milliseconds

Quote:

> Hi All!

> I'm writing a C program under Linux.
> Is there a 'portable' function (which means, present in MS Windows too) to delay the program execution using milliseconds?

> Regards,
> Stefano

Stefano...

Standard C has no delay functions and cannot read/manipulate time in increments smaller than whole seconds.
--
Morris Dovey, WB0YEF
West Des Moines, Iowa USA
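
For illustration, a minimal sketch of the whole-second timing that
standard C does guarantee, using time() and difftime(); nothing
finer-grained is promised by the language itself:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        time_t start = time(NULL);   /* wall-clock time, whole seconds */
        /* ... work to be timed ... */
        time_t end = time(NULL);
        /* difftime() is the portable way to subtract two time_t values */
        printf("elapsed: %.0f second(s)\n", difftime(end, start));
        return 0;
    }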



Sun, 21 Dec 2003 01:56:18 GMT  
 Time delay in milliseconds

Quote:


> > Is there a 'portable' function (which means, present in MS
> > Windows too) to delay the program execution using milliseconds?

> Standard C has no delay functions and cannot read/manipulate
> time in increments smaller than whole seconds.

But we have clock(), which on most platforms returns some unit that
has to be multiplied by CLOCKS_PER_SEC to make up a full second.
That at least suggests there is something smaller than second to
manipulate. Or did I completely misunderstand your remark?

willem
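
A minimal sketch of what willem describes: clock() reports processor
time in implementation-defined units, and CLOCKS_PER_SEC relates those
units to seconds:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t start = clock();
        /* ... work to be timed ... */
        clock_t end = clock();
        /* cast before dividing; integer division would truncate */
        double secs = (double)(end - start) / CLOCKS_PER_SEC;
        printf("CPU time used: %f second(s)\n", secs);
        return 0;
    }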



Sun, 21 Dec 2003 02:54:16 GMT  
 Time delay in milliseconds

Quote:



> > > Is there a 'portable' function (which means, present in MS
> > > Windows too) to delay the program execution using milliseconds?

> > Standard C has no delay functions and cannot read/manipulate
> > time in increments smaller than whole seconds.

> But we have clock(), which on most platforms returns some unit that
> has to be multiplied by CLOCKS_PER_SEC to make up a full second.
> That at least suggests there is something smaller than second to
> manipulate. Or did I completely misunderstand your remark?

Multiplied, or divided?

-Peter



Sun, 21 Dec 2003 03:19:06 GMT  
 Time delay in milliseconds

Quote:



>> > > Is there a 'portable' function (which means, present in MS
>> > > Windows too) to delay the program execution using milliseconds?

>> > Standard C has no delay functions and cannot read/manipulate
>> > time in increments smaller than whole seconds.

>> But we have clock(), which on most platforms returns some unit that
>> has to be multiplied by CLOCKS_PER_SEC to make up a full second.
>> That at least suggests there is something smaller than second to
>> manipulate. Or did I completely misunderstand your remark?
> Multiplied, or divided?

Divided, of course. The expected time unit is seconds, not
clocks^2 per second.

--

| Kingpriest of "The Flying Lemon Tree" G++ FR FW+ M- #108 D+ ADA N+++|
| http://www.helsinki.fi/~palaste       W++ B OP+                     |
\----------------------------------------- Finland rules! ------------/

"As we all know, the hardware for the PC is great, but the software sucks."
   - Petro Tyschtschenko



Sun, 21 Dec 2003 03:53:47 GMT  
 Time delay in milliseconds

Quote:



> > > Is there a 'portable' function (which means, present in MS
> > > Windows too) to delay the program execution using milliseconds?

> > Standard C has no delay functions and cannot read/manipulate
> > time in increments smaller than whole seconds.

> But we have clock(), which on most platforms returns some unit that
> has to be multiplied by CLOCKS_PER_SEC to make up a full second.
> That at least suggests there is something smaller than second to
> manipulate. Or did I completely misunderstand your remark?

Willem...

You're right -- I should have mentioned clock() -- but I view it as
considerably less portable than time(), since nothing can be concluded
about its resolution that reliably and portably improves on what can be
extracted from a struct tm.

CLOCKS_PER_SEC might be 1 or a million. Hmm -- suppose I'm working on a
uC-based Anti-Lock Brake system control unit for /your/ next car, but no
one is willing to specify CLOCKS_PER_SEC (because the CPU hasn't
actually been selected yet, and each model may use a different
CPU/clock circuit). Given that I'm required to write standard-compliant
(portable) code that will be used for all models, how would you suggest
I design using clock() and CLOCKS_PER_SEC?
--
Morris Dovey, WB0YEF
West Des Moines, Iowa USA



Sun, 21 Dec 2003 04:06:59 GMT  
 Time delay in milliseconds

Quote:

> Hi All!

> I'm writing a C program under Linux.
> Is there a 'portable' function (which means, present in MS Windows too) to delay the program execution using milliseconds?

> Regards,
> Stefano

Sorry... not that I know of. But back in the old days, on slower
computers, I always used a loop:
  long x;
  for (x = 0; x <= 64000; x++)
  {
      /* empty -- just burn cycles */
  }
Of course depending on processor speed this could take all day or less
than a second.  Oh well.


Sun, 21 Dec 2003 06:19:37 GMT  
 Time delay in milliseconds

Quote:

> > Hi All!

> > I'm writing a C program under Linux.
> > Is there a 'portable' function (which means, present in MS Windows too) to
> > delay the program execution using milliseconds?

> > Regards,
> > Stefano

> Sorry... not that I know of. But back in the old days, on slower
> computers, I always used a loop:
>   long x;
>   for (x = 0; x <= 64000; x++)
>   {
>       /* empty -- just burn cycles */
>   }
> Of course depending on processor speed this could take all day or less
> than a second.  Oh well.

Which is why it always pays to read the C FAQ before posting.

19.37:  How can I implement a delay, or time a user's response, with sub-
        second resolution?

A:      Unfortunately, there is no portable way.  V7 Unix, and derived
        systems, provided a fairly useful ftime() function with
        resolution up to a millisecond, but it has disappeared from
        System V and POSIX.  Other routines you might look for on your
        system include clock(), delay(), gettimeofday(), msleep(),
        nap(), napms(), nanosleep(), setitimer(), sleep(), times(), and
        usleep().  (A function called wait(), however, is at least under
        Unix *not* what you want.)  The select() and poll() calls (if
        available) can be pressed into service to implement simple
        delays.  On MS-DOS machines, it is possible to reprogram the
        system timer and timer interrupts.

        Of these, only clock() is part of the ANSI Standard.  The
        difference between two calls to clock() gives elapsed execution
        time, and may even have subsecond resolution, if CLOCKS_PER_SEC
        is greater than 1.  However, clock() gives elapsed processor time
        used by the current program, which on a multitasking system may
        differ considerably from real time.

        If you're trying to implement a delay and all you have available
        is a time-reporting function, you can implement a CPU-intensive
        busy-wait, but this is only an option on a single-user, single-
        tasking machine as it is terribly antisocial to any other
        processes.  Under a multitasking operating system, be sure to
        use a call which puts your process to sleep for the duration,
        such as sleep() or select(), or pause() in conjunction with
        alarm() or setitimer().

        For really brief delays, it's tempting to use a do-nothing loop
        like

                long int i;
                for(i = 0; i < 1000000; i++)
                        ;

        but resist this temptation if at all possible!  For one thing,
        your carefully-calculated delay loops will stop working properly
        next month when a faster processor comes out.  Perhaps worse, a
        clever compiler may notice that the loop does nothing and
        optimize it away completely.

        References: H&S Sec. 18.1 pp. 398-9; PCS Sec. 12 pp. 197-8,215-
        6; POSIX Sec. 4.5.2.
--
C-FAQ: http://www.eskimo.com/~scs/C-faq/top.html
 "The C-FAQ Book" ISBN 0-201-84519-9
C.A.P. FAQ: ftp://cap.connx.com/pub/Chess%20Analysis%20Project%20FAQ.htm
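
Pulling the FAQ's advice together, a sketch of the usual two-platform
compromise, assuming Win32's Sleep() under Windows (where compilers
predefine _WIN32) and the POSIX select() timeout trick elsewhere;
neither call is part of standard C:

    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <sys/time.h>               /* select(), struct timeval */
    #endif

    /* Put the calling process to sleep for roughly ms milliseconds. */
    void msleep(long ms)
    {
    #ifdef _WIN32
        Sleep(ms);                      /* Win32 Sleep() takes milliseconds */
    #else
        struct timeval tv;
        tv.tv_sec  = ms / 1000;
        tv.tv_usec = (ms % 1000) * 1000;
        select(0, NULL, NULL, NULL, &tv);   /* no fds: just wait out the timeout */
    #endif
    }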



Sun, 21 Dec 2003 07:25:35 GMT  
 Time delay in milliseconds
On Tue, 03 Jul 2001 20:54:16 +0200, willem veenhoven wrote:

Quote:


> > > Is there a 'portable' function (which means, present in MS
> > > Windows too) to delay the program execution using milliseconds?

> > Standard C has no delay functions and cannot read/manipulate
> > time in increments smaller than whole seconds.

> But we have clock(), which on most platforms returns some unit that
> has to be multiplied by CLOCKS_PER_SEC to make up a full second.
> That at least suggests there is something smaller than second to
> manipulate. Or did I completely misunderstand your remark?

> willem

Two things about clock():

1.  The standard does not specify any particular resolution for it,
certainly not milliseconds.

2.  It most certainly does not guarantee resolution in terms of
CLOCKS_PER_SEC.  I know of a particular conforming implementation of
clock() that uses 1,000,000,000 for CLOCKS_PER_SEC, regardless of
the resolution of the system timer, which is accounted for in one of
the files used to build the system.

--
Jack Klein
Home: http://JK-Technology.Com
FAQs for
comp.lang.c http://www.eskimo.com/~scs/C-faq/top.html
comp.lang.c++ http://www.parashift.com/c++-faq-lite/
alt.comp.lang.learn.c-c++ ftp://snurse-l.org/pub/acllc-c++/faq



Sun, 21 Dec 2003 08:04:36 GMT  
 Time delay in milliseconds

Quote:


>> > Is there a 'portable' function (which means, present in MS
>> > Windows too) to delay the program execution using milliseconds?

>> Standard C has no delay functions and cannot read/manipulate
>> time in increments smaller than whole seconds.

>But we have clock(), which on most platforms returns some unit that
>has to be multiplied by CLOCKS_PER_SEC to make up a full second.
>That at least suggests there is something smaller than second to
>manipulate.

Maybe, maybe not.  What is preventing an implementation from defining
CLOCKS_PER_SEC as 0.1 and effectively providing a granularity of 10
seconds for clock()?

Dan
--
Dan Pop
CERN, IT Division

Mail:  CERN - IT, Bat. 31 1-014, CH-1211 Geneve 23, Switzerland



Sun, 21 Dec 2003 09:59:35 GMT  
 Time delay in milliseconds

Quote:



>> > > Is there a 'portable' function (which means, present in MS
>> > > Windows too) to delay the program execution using milliseconds?

>> > Standard C has no delay functions and cannot read/manipulate
>> > time in increments smaller than whole seconds.

>> But we have clock(), which on most platforms returns some unit that

                                                             ^^^^

>> has to be multiplied by CLOCKS_PER_SEC to make up a full second.
>> That at least suggests there is something smaller than second to
>> manipulate. Or did I completely misunderstand your remark?

>Multiplied, or divided?

Multiplied, because Willem is talking about the *time unit* used by
clock().  You need CLOCKS_PER_SEC such units to make up a full second.

Division is needed to convert the *value* returned by clock() to seconds.
Make sure you use floating-point division for this purpose; a
mere clock() / CLOCKS_PER_SEC might give you only whole-second
granularity.

Dan
--
Dan Pop
CERN, IT Division

Mail:  CERN - IT, Bat. 31 1-014, CH-1211 Geneve 23, Switzerland
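
In code, the distinction Dan is drawing, as a small sketch:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t t = clock();
        /* truncates: at best whole seconds survive the conversion to long */
        long whole = t / CLOCKS_PER_SEC;
        /* floating-point division keeps the sub-second part */
        double exact = (double)t / CLOCKS_PER_SEC;
        printf("%ld whole second(s); %f exactly\n", whole, exact);
        return 0;
    }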



Sun, 21 Dec 2003 10:05:13 GMT  
 Time delay in milliseconds

Quote:



> >> Standard C has no delay functions and cannot read/manipulate
> >> time in increments smaller than whole seconds.

> > But we have clock(), which on most platforms returns some unit
> > that has to be multiplied by CLOCKS_PER_SEC to make up a full
> > second. That at least suggests there is something smaller than
> > second to manipulate.

> Maybe, maybe not.  What is preventing an implementation from
> defining CLOCKS_PER_SEC as 0.1 and effectively providing a
> granularity of 10 seconds for clock()?

IMHO the wording of the Standard does not encourage such an
interpretation of CLOCKS_PER_SEC, which is clearly defined as the
number per second of the value returned by the clock function.
Had such a granularity been intended, then it would have been more
obvious to define SECS_PER_CLOCK instead (-;

Anyhow, on my platform CLOCKS_PER_SEC is conveniently defined as
1000.0, thus providing millisecond resolution. The OP is primarily
interested in Linux/Win32 portability, and in that case there is no
reason not to use clock(), as far as I can see.

willem
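
A sketch of the approach willem suggests, hedged accordingly: it
busy-waits on processor time (assuming clock() is functional on the
platform), so it burns CPU, and its real resolution is whatever clock()
actually delivers, as the following posts point out:

    #include <time.h>

    /* Spin for roughly ms milliseconds of processor time.
       Compiles for any value of CLOCKS_PER_SEC, but guarantees
       nothing about the underlying timer's resolution. */
    void delay_ms(long ms)
    {
        clock_t ticks = (clock_t)((double)ms * CLOCKS_PER_SEC / 1000.0);
        clock_t start = clock();
        while (clock() - start < ticks)
            ;                   /* busy-wait: antisocial under multitasking */
    }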



Sun, 21 Dec 2003 15:41:58 GMT  
 Time delay in milliseconds

Quote:

>  Anyhow, on my platform CLOCKS_PER_SEC is conveniently defined as
>  1000.0, thus providing a millisecond resolution.

Yes, maybe. There is nothing stopping the clock() function from
returning, say, multiples of 100, thereby reducing its effective
granularity to tenths of a second.

Gergo

--
You might have mail.



Sun, 21 Dec 2003 17:25:41 GMT  
 Time delay in milliseconds

Quote:

> Anyhow, on my platform CLOCKS_PER_SEC is conveniently defined as
> 1000.0, thus providing a millisecond resolution.

This is a frequent mistake, but no less of a mistake.

A CLOCKS_PER_SEC of 1000 tells you the unit of clock() is 1/1000 of a
second. The actual resolution may be a great deal coarser.

For example, your system may update clock() only ten times every
second, but increase it by 100 every time.

Richard
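
Richard's distinction can be checked empirically; a rough sketch that
spins until clock() changes and reports the size of the step it
actually takes:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        clock_t prev = clock(), next;
        while ((next = clock()) == prev)
            ;                   /* wait for the next tick */
        printf("clock() stepped by %ld unit(s); CLOCKS_PER_SEC = %ld\n",
               (long)(next - prev), (long)CLOCKS_PER_SEC);
        return 0;
    }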



Sun, 21 Dec 2003 17:29:35 GMT  
 