Use the C standard integer types instead of defining your own. #8546
Comments
Hello, thanks for your suggestion. Generally I don't really care about the vague concept of "good practices", but I'm very interested in any actual real-world problems arising from making that change vs keeping things as they have been for a long time.
AFAIK I would tend to believe they are portable and exactly the size we expect, unless proven they aren't.
In theory yes. In practice it needs to be checked with all the esoteric compilers, architectures and SDKs we support. I suppose that in practice the easiest way to check may be to try it and see who might complain. But see e.g. ocornut/imgui_club#56
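In practice, a low-effort way to surface such a complaint would be a compile-time check that the typedefs have the widths their comments promise; a minimal sketch (assuming nothing about whether imgui already ships something equivalent):

```cpp
#include "imgui.h"

// Sketch only: if an exotic toolchain gives e.g. "long long" an unexpected
// width, the build fails here instead of misbehaving at runtime.
static_assert(sizeof(ImS16) == 2 && sizeof(ImU16) == 2, "16-bit typedefs have unexpected size");
static_assert(sizeof(ImS32) == 4 && sizeof(ImU32) == 4, "32-bit typedefs have unexpected size");
static_assert(sizeof(ImS64) == 8 && sizeof(ImU64) == 8, "64-bit typedefs have unexpected size");
```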
There's a table on the C data types page that indicates how large each type is. One thing to note is that I haven't had many issues when working on desktop software, but I have come across problems in embedded systems (not ImGui related) where types such as int were only 16 bits. This issue is not crucial, so feel free to close it if you like.
I'm also not a huge fan of such things, but IMO this is actually one of the more reasonable ones. To me it's similar to adding technically-unnecessary parentheses to clarify intended order of operations. Sure, it's not necessary, but it's easier to just not have to think about it in the first place.

Like @GavinNL said, embedded compilers still love taking advantage of C's uselessly vague type definitions even in the modern day. Desktop compilers have been sensibly-behaved for ages now, but Omar, I'm actually surprised you haven't been personally burned by this in the past 😅

That being said, I would worry slightly about this change having unintended consequences, particularly in the form of warnings in consuming code.
One of my worries is how that would affect warnings for code using "%d"-style format strings with those types.
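To make the concern concrete, here is a minimal sketch of the kind of format-specifier churn a type switch could cause in user code (print_value is just a made-up helper; the exact behaviour depends on how the platform defines uint64_t):

```cpp
#include <cinttypes>
#include <cstdio>
#include "imgui.h"

void print_value(ImU64 v)
{
    // Today ImU64 is "unsigned long long", so "%llu" compiles without warnings.
    // If ImU64 became uint64_t, then on platforms where uint64_t is
    // "unsigned long" (e.g. typical 64-bit Linux), -Wformat would start warning here.
    printf("value = %llu\n", v);

    // The portable spelling works either way, but existing user code
    // would need to migrate to it (or add casts).
    printf("value = %" PRIu64 "\n", (uint64_t)v);
}
```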
I suppose in the MS-DOS era I was, but I'm assuming today's architectures with 16-bit ints are not likely to run dear imgui anyhow?
Fair point. The C-style print statements are probably where the biggest issues will arise.

What if you had a preprocessor definition to override the default imgui types with the standard types? That way the default behaviour is unchanged, and anyone who wants the change can enable it in their build:

```cpp
#ifdef IMGUI_USE_C_STANDARD_INTEGER_TYPES
#include <stdint.h>
...
typedef int64_t  ImS64;   // 64-bit signed integer
typedef uint64_t ImU64;   // 64-bit unsigned integer
#else
...
typedef signed long long   ImS64;   // 64-bit signed integer
typedef unsigned long long ImU64;   // 64-bit unsigned integer
#endif
```
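If something along these lines were adopted, imconfig.h (imgui's user configuration header) would presumably be the natural place to opt in; note that the define below is only the one sketched above, not an existing imgui option:

```cpp
// imconfig.h -- hypothetical opt-in; IMGUI_USE_C_STANDARD_INTEGER_TYPES does
// not exist in imgui today, it is only proposed in this thread.
#define IMGUI_USE_C_STANDARD_INTEGER_TYPES
```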
Then third-party code/libraries would have to take on an additional responsibility if they use those types. What actual benefit would you get from switching to the other types?
Where I noticed the issue was when writing some template code:

```cpp
if constexpr (std::is_same_v<T, uint64_t>)
{
}
```

This failed when T was ImU64. Now that I know the problem, I can work around it, but it was a behaviour that I didn't expect.
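For anyone else hitting the same surprise, a minimal sketch of the underlying type mismatch (the outcome is platform-dependent: with glibc on 64-bit Linux uint64_t is unsigned long, while MSVC defines it as unsigned long long):

```cpp
#include <cstdint>
#include <type_traits>
#include "imgui.h"

// Same size and value range...
static_assert(sizeof(ImU64) == sizeof(uint64_t), "sizes match");

// ...but not necessarily the same type: ImU64 is "unsigned long long", while
// uint64_t may be "unsigned long". On 64-bit Linux/glibc this is false;
// with MSVC it is true.
constexpr bool same_type = std::is_same_v<ImU64, uint64_t>;
```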
I think I accidentally marked this as completed. You can close/ignore this issue.
I'll reopen if it was accidentally closed.
It's easy to work around if you know that ImU64 will not be the same as a uint64_t. I replaced my code with the following:

```cpp
if constexpr (std::is_unsigned_v<T> && sizeof(T) == sizeof(uint64_t))
{
}
```

My reasoning for opening the ticket was to try to get a bit closer to conventions. I can understand your point about the "%d" potentially causing issues for other users, so I'm fine with not having this change.
I think a major point to using the standard types would be to reduce the amount of cognitive effort the end user needs to put in.

I'm not a C nor C++ developer whatsoever, but I have a profound appreciation for languages like C# where all libraries that want to use something like an integer or single-precision number must use the same built-in types. It tends to make reasoning about the code much easier when you have standardized types that you can expect to see everywhere.

IMO, macros/type aliasing make things really messy and hard to understand. I once found a library that type-aliased...
Version/Branch of Dear ImGui:
Version 1.92, Branch: master
Back-ends:
imgui_impl_sdlrenderer2.cpp + imgui_impl_sdlrenderer2.cpp
Compiler, OS:
GCC 13 / Linux
Full config/build information:
No response
Details:
Hello,
I don't know if this has been discussed before; I tried to search the issues but didn't find anything.
I noticed that you have typedefs for some of the integer types that ImGui uses. I don't think these are portable, as the underlying types are not guaranteed to be the sizes that the comments request. The C header <stdint.h> provides cross-platform/compiler fixed-width integer types that are guaranteed to be of the specified size; replacing the typedefs with those works and is more portable across compilers/architectures (see the sketch below). I noticed this when I was doing some template work and had an issue with the 64-bit integers: it was failing my static_asserts (the std::is_same_v check discussed in the comments above).
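For reference, here is a rough sketch of what is being discussed: approximately the typedefs as they appear in imgui.h today (exact text varies between versions), followed by the <stdint.h>-based replacement being suggested:

```cpp
// Roughly what imgui.h declares today (paraphrased; exact text varies by version):
//   typedef signed char         ImS8;   // 8-bit signed integer
//   typedef unsigned char       ImU8;   // 8-bit unsigned integer
//   typedef signed short        ImS16;  // 16-bit signed integer
//   typedef unsigned short      ImU16;  // 16-bit unsigned integer
//   typedef signed int          ImS32;  // 32-bit signed integer
//   typedef unsigned int        ImU32;  // 32-bit unsigned integer
//   typedef signed long long    ImS64;  // 64-bit signed integer
//   typedef unsigned long long  ImU64;  // 64-bit unsigned integer

// The suggested replacement, using the C standard fixed-width types:
#include <stdint.h>

typedef int8_t   ImS8;   // 8-bit signed integer
typedef uint8_t  ImU8;   // 8-bit unsigned integer
typedef int16_t  ImS16;  // 16-bit signed integer
typedef uint16_t ImU16;  // 16-bit unsigned integer
typedef int32_t  ImS32;  // 32-bit signed integer
typedef uint32_t ImU32;  // 32-bit unsigned integer
typedef int64_t  ImS64;  // 64-bit signed integer
typedef uint64_t ImU64;  // 64-bit unsigned integer
```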
This may not be that big of an issue, but it would be good practice to use the standard integer types rather than defining your own based on int/long/signed/unsigned/etc.
Replacing the typedefs with the C standard integer types does not cause any issues compiling the imgui_demo, but I am unsure if anyone else would have issues with this change.
Screenshots/Video:
No response
Minimal, Complete and Verifiable Example code:
No response