Currently I'm getting the execution wall time of my program in seconds by calling:
time_t startTime = time(NULL);
//section of code
time_t endTime = time(NULL);
double duration = difftime(endTime, startTime);
Is it possible to get the wall time in milliseconds? If so, how?
6 Answers
#1 (score: 8)
If you're on a POSIX-ish machine, use gettimeofday() instead; that gives you reasonable portability and microsecond resolution.
Slightly more esoteric, but also in POSIX, is the clock_gettime() function, which gives you nanosecond resolution.
On many systems, you will find a function ftime() that returns the time in seconds and milliseconds. However, it is no longer in the Single Unix Specification (roughly the same as POSIX). You need the header <sys/timeb.h>:
struct timeb mt;
if (ftime(&mt) == 0)
{
    /* mt.time    - seconds since the epoch */
    /* mt.millitm - milliseconds             */
}
This dates back to Version 7 (or 7th Edition) Unix at least, so it has been very widely available.
I also have notes in my sub-second timer code on times() and clock(), which use yet other structures and headers. I also have notes about Windows using clock() with 1000 clock ticks per second (millisecond timing), and an older interface GetTickCount(), which is noted as necessary on Windows 95 but not on NT.
#2 (score: 3)
If you can do this outside of the program itself, on Linux you can use the time command (time ./my_program).
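A quick sketch, with sleep 1 standing in for ./my_program:

```shell
# Run any command under `time`; `real` is the wall-clock time,
# while `user` and `sys` are CPU time spent in user mode and in the kernel.
time sleep 1
```

Note that the shell built-in time and /usr/bin/time print slightly different formats, and GNU /usr/bin/time -v reports considerably more detail (peak memory, page faults, and so on).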
#3 (score: 3)
I recently wrote a blog post that explains how to obtain the time in milliseconds cross-platform.
It will work like time(NULL), but will return the number of milliseconds since the Unix epoch instead of seconds, on both Windows and Linux.
Here is the code:
#ifdef WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif
#include <cstdint>

/* Returns the number of milliseconds elapsed since the UNIX epoch. Works on both
 * Windows and Linux. */
int64_t GetTimeMs64()
{
#ifdef WIN32
    /* Windows */
    FILETIME ft;
    LARGE_INTEGER li;
    uint64_t ret;
    /* Get the number of 100-nanosecond intervals elapsed since January 1, 1601 (UTC)
     * and copy it to a LARGE_INTEGER structure. */
    GetSystemTimeAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;
    ret = li.QuadPart;
    ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
    ret /= 10000; /* From 100-nanosecond (10^-7) to 1-millisecond (10^-3) intervals */
    return ret;
#else
    /* Linux */
    struct timeval tv;
    uint64_t ret;
    gettimeofday(&tv, NULL);
    ret = tv.tv_usec;
    /* Convert from microseconds (10^-6) to milliseconds (10^-3) */
    ret /= 1000;
    /* Add the seconds (10^0) after converting them to milliseconds (10^-3);
     * cast first so the multiplication cannot overflow a 32-bit time_t. */
    ret += ((uint64_t)tv.tv_sec * 1000);
    return ret;
#endif
}
You can modify it to return microseconds instead of milliseconds if you want.
#4 (score: 0)
The open-source GLib library has a GTimer system that claims to provide microsecond accuracy. That library is available on Mac OS X, Windows, and Linux. I'm currently using it to do performance timings on Linux, and it seems to work perfectly.
#5 (score: 0)
gprof, which is part of the GNU toolkit, is an option. Most POSIX systems will have it installed, and it's available under Cygwin for Windows. Tracking the time yourself using gettimeofday() works fine, but it's the performance equivalent of using print statements for debugging. It's good if you just want a quick and dirty solution, but it's not quite as elegant as using proper tools.
To use gprof, you must specify the -pg option when compiling with gcc, as in:
gcc -o prg source.c -pg
Then you can run gprof on the generated program as follows:
gprof prg > gprof.out
By default, gprof will generate the overall runtime of your program, as well as the amount of time spent in each function, the number of times each function was called, the average time spent in each function call, and similar information.
There are a large number of options you can set with gprof. If you're interested, there is more information in the man pages or through Google.
#6 (score: -4)
On Windows, use QueryPerformanceCounter and the associated QueryPerformanceFrequency. They don't give you a time that is translatable to calendar time, so if you need that, ask for the time using a CRT API and then immediately call QueryPerformanceCounter. You can then do some simple addition/subtraction to calculate the calendar time, with some error due to the time it takes to execute the APIs consecutively. Hey, it's a PC, what did you expect?
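A Windows-only sketch of the basic elapsed-time measurement (error checking omitted; QueryPerformanceFrequency is documented never to fail on Windows XP and later, and the frequency is fixed at boot):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  /* counts per second */
    QueryPerformanceCounter(&start);
    /* section of code */
    QueryPerformanceCounter(&end);
    /* Convert counter ticks to milliseconds. */
    double ms = (double)(end.QuadPart - start.QuadPart) * 1000.0
                / (double)freq.QuadPart;
    printf("%.3f ms\n", ms);
    return 0;
}
```

To anchor this to calendar time as the answer suggests, record GetSystemTimeAsFileTime() once alongside a counter reading, then offset later counter readings from that pair.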