Recently I refactored the code to always use the time_t datatype wherever GetTime() is used. Before, each developer seemed to use a different type; commonly I found DWORD, int, uint32_t and unsigned int. I chose time_t because it was already used in some spots, plus the style guidelines in the wiki state that we shouldn't use DWORD. However, looking at the automated builds, time_t seems not to be a portable type after all. So, do you think we should use another type for times, or do you think time_t is alright and it just needs to be defined earlier?
If the fix is that simple and the usage of time_t is alright, could somebody with a development environment in which they can test the build commit the build fix? (It does currently compile in MSVC.)
Also, say hello to my new flag,... currently stuck in Cambodia :-)
> If the fix is that simple and the usage of time_t is alright

It's not, but mostly because the range and precision of time_t are implementation-defined (i.e. it could be int, long, float, double, or something else entirely, depending on the compiler). uint32_t or uint64_t would be preferable in my opinion. (This is also because, by convention, time_t usually holds the number of seconds elapsed since the Unix epoch, not some arbitrary value that has a fleeting connection to time.)
There's basically no point in ever using long. Its size is specified as providing "at least as much storage as [int]", which is barely more helpful than no specification at all. If you want an integer type with a particular width, use uintXX_t or intXX_t. If you want an integer type that has "the natural size suggested by the architecture of the execution environment", use int. (Windows and most Unices get this wrong on 64 bit architectures by having 32 bit ints, by the way.)
Well, timeGetTime() returns a DWORD, which by MSDN definition is an unsigned long. My point is that I don't want any particular width, but whatever GetTime returns. If a compiler under one architecture decides that timeGetTime shall return a 64 bit signed int, that's OK; if under another architecture a 32 bit signed int is returned, that is also OK.
We generally have explicit types in the engine for synchronization between different architectures. Almost all of the game execution and all network functions use int32_t or uint32_t for that reason.
Regarding your change in the PID_Ping packet:

// FIXME: the compiler can't compile 64bit integers (yet), the ping will
// return wrong times if GetTime() returns integers too large for uint32

a) Packets must always pack data of the same size on all architectures; otherwise, the network will get confused. So you should pack uint32_t or uint64_t, but never int or time_t. DWORD is just the Windows way of saying "uint32_t"; we generally use uint32_t because Linux people hate all-caps types.

b) You broke the network here, which was very unfortunate because we wanted to playtest the cool new visual and auditory changes made by Zapper and Clonko on Sunday evening.

c) Since we usually have fixed type sizes in the engine, I would prefer if the same were done for times.

Personally, my vote would go for uint32_t (i.e., DWORD), because timeGetTime() is a very simple and cheap function and we won't accidentally lose the high bits when times end up in a 32 bit int. To fix #521, some occurrences of "(a>b)" would need to be replaced by "int32_t(a-b)>0", and "a=Max(a,b)" would become "if (int32_t(b-a)>0) a=b;". (I didn't test it yet.)

But of course an implementation that uses uint64_t and changes the time functions on all architectures would also be OK.
It's probably not required to use a uint64_t GetTime() if we do the same thing on Windows that non-Windows platforms do: store the current time at the start of the application, then return the difference. Obviously the internally called function should still return a 64 bit value (I recommend GetSystemTimeAsFileTime), but that doesn't have to leak out to the consumer if that makes it easier. Of course that'll still break if OC runs for more than 50 days (with unsigned values).
> Obviously the internally called function should still return a 64 bit value
If the time difference is returned as a 32 bit value there is no reason to store the start time as a 64 bit value.
> store the current time at the start of the application, then return the difference.
Doesn't this go terribly wrong when the system time is changed while Clonk is running? Some time synchronization service could do this in the background any time, no? There are monotonic timers on most platforms (Linux: clock_gettime) which take care of this.
Ah, good point; GetTickCount64 is the equivalent on Windows. (Maybe use QueryPerformanceCounter instead for compatibility with Windows XP. I can't wait until Microsoft's support for XP expires in April, because at that point I won't feel required to use XP compatible APIs anymore.)
>time_t
Okay, I will change the usages of time_t to uint32_t and put time_t on the "Don't use" list in the developers guide.
>b)
I am very sorry to hear that. What exactly was the reason for the broken network?
> To fix #521, some occurrences of "(a>b) " would need to be replaced by "int32_t(a-b)>0" and "a=Max(a,b)" would become "if (int32_t(b-a)>0) a=b;"
Would that be an overflow-proof way of comparing values returned from GetTime? If yes, I'd prefer to put this into a helper function, something like DiffTime(a,b) as mentioned in #251 because the calculations you posted are not self-explanatory IMO.
Using time_t is perfectly fine for all the things that the CRT is using it for (just like DWORD et al. are perfectly fine for interfacing with the Win32 API), so I don't think a blanket ban is justified.
> I am very sorry to hear that. What exactly was the reason for the broken network?
Ping timestamps weren't set before being sent via network, so the ping packet always compared the current time to zero when calculating ping times. Clients pinged out after their first ping packet.
What did this changeset fix? And how did casting to unsigned int fix it?
(By casting to unsigned int, we are back to #251)
The casts to int are just for format strings used for packet logging. printf("%d", time_t) crashes if int is 32 bit and time_t is 64 bit.
I think it's generally a good idea to cast format string parameters explicitly, since we've had this problem a lot in the past.
Btw: Does anyone know if printf("%d", int32_t) is correct? Or does it have to be int? Because I believe int32_t is passed as int a lot in format strings.
More importantly, what about printf("%d", uint32_t)? Because that is how I will change it. I will find out.
That should be "%u". (Or rather, PRIu32, but see http://forum.openclonk.org/topic_show.pl?pid=25533#pid25533.)
No, you should do printf("%d",int) - i.e., keep the cast as it is. (See http://forum.openclonk.org/topic_show.pl?pid=25536#pid25536)
"%d" is never the correct parameter to output a
Technically, the "%d" for
time_t
anyway, because it's not guaranteed that time_t
is even an integer.Technically, the "%d" for
int32_t
should be PRId32
from <inttypes.h>. Unfortunately, MSVC doesn't provide that header before version 2013.
timeGetTime is Win32 specific, and the headers typedef DWORD as unsigned long, so no compiler targeting Windows will have a 64 bit long. However, OpenClonk doesn't only run on Windows, and therefore making GetTime return a platform-specific type is a bad idea, especially in the context of templates, where deduction will fail if you try to use different types for the same template parameter (for example unsigned long and uint32_t). As Sven said, network packets must have the same representation on all platforms, so we can't use unsigned long there, and 21d05fa had to fix just the other thing: a template call that failed to deduce its parameter type because time_t and unsigned long aren't the same type.