Use the C standard integer types instead of defining your own. #8546

Open
GavinNL opened this issue Apr 2, 2025 · 11 comments

Comments

GavinNL commented Apr 2, 2025

Version/Branch of Dear ImGui:

Version 1.92, Branch: master

Back-ends:

imgui_impl_sdlrenderer2.cpp

Compiler, OS:

Gcc13/Linux

Full config/build information:

No response

Details:

Hello,

I don't know if this has been discussed before; I searched the issues but didn't find anything.

I noticed that you have typedefs for some of the integer types that ImGui uses. I don't think these are portable, as the standard does not guarantee that the built-in types you chose have the sizes the comments request.

typedef unsigned int       ImGuiID;// A unique ID used by widgets (typically the result of hashing a stack of string)
typedef signed char        ImS8;   // 8-bit signed integer
typedef unsigned char      ImU8;   // 8-bit unsigned integer
typedef signed short       ImS16;  // 16-bit signed integer
typedef unsigned short     ImU16;  // 16-bit unsigned integer
typedef signed int         ImS32;  // 32-bit signed integer == int
typedef unsigned int       ImU32;  // 32-bit unsigned integer (often used to store packed colors)
typedef signed   long long ImS64;  // 64-bit signed integer
typedef unsigned long long ImU64;  // 64-bit unsigned integer

The C header <stdint.h> provides cross-platform fixed-width integer types that are guaranteed to be exactly the specified width. Replacing the above code with the following works and is more portable across compilers/architectures.

#include <stdint.h>

typedef uint32_t ImGuiID;// A unique ID used by widgets (typically the result of hashing a stack of string)
typedef int8_t   ImS8;   // 8-bit signed integer
typedef uint8_t  ImU8;   // 8-bit unsigned integer
typedef int16_t  ImS16;  // 16-bit signed integer
typedef uint16_t ImU16;  // 16-bit unsigned integer
typedef int32_t  ImS32;  // 32-bit signed integer
typedef uint32_t ImU32;  // 32-bit unsigned integer (often used to store packed colors)
typedef int64_t  ImS64;  // 64-bit signed integer
typedef uint64_t ImU64;  // 64-bit unsigned integer

I noticed this when I was doing some template work and had an issue with the 64-bit integers: they were failing my static_asserts. For example:

// fails on Linux X64/GCC-13
static_assert( std::is_same_v<ImU64, uint64_t>, "bad");
static_assert( std::is_same_v<ImS64, int64_t>, "bad");

This may not be that big of an issue, but it would be good practice to use the standard integer types rather than defining your own based on int/long/signed/unsigned/etc.

Replacing the typedefs with the C standard integer types does not cause any issues compiling imgui_demo, but I am unsure if anyone else would have issues with this change.

Screenshots/Video:

No response

Minimal, Complete and Verifiable Example code:

No response

ocornut (Owner) commented Apr 2, 2025

Hello,

Thanks for your suggestion.

Generally I don't really care about the vague concept of "good practices", but I'm very interested in any actual real-world problems arising from making that change versus keeping things as they have been for a long time.

I don't think these are portable, as the types that you specified are not guaranteed to be the size that you are requesting.

AFAIK I would tend to believe they are portable and exactly the size we expect, unless proven they aren't.

The C header <stdint.h> provides cross platform/compiler sized integer types that are guaranteed to be of the specified size.

In theory, yes. In practice it needs to be checked with all the esoteric compilers, architectures, and SDKs we support. I suppose that in practice the easiest way to check may be to make the change and see who might complain.

But see e.g. ocornut/imgui_club#56, where we actually changed leftover int32_t uses to our types in that code (I just asked the person for details).

GavinNL (Author) commented Apr 2, 2025

There's a table on the C data types page that indicates how large each type is. One thing to note is that signed int is only guaranteed to be at least 16 bits, not at least 32 bits. So it's possible there might be an architecture that has 16-bit integers.

I haven't had many issues when working on desktop software. But I have come across problems in embedded systems (not ImGui related) where int was 16 bits, which caused some problems because I didn't realise it and assumed it would be 32. I have always used the C standard integer types since running into that.

This issue is not crucial, so feel free to close it if you like.

PathogenDavid (Contributor) commented Apr 2, 2025

Generally I don't really care about vague concept of "good practices"

I'm also not a huge fan of such things, but IMO this is actually one of the more reasonable ones.

To me it's similar to adding technically-unnecessary parentheses to clarify the intended order of operations. Sure, it's not necessary, but it's easier to just not have to think about it in the first place.

Like @GavinNL said, embedded compilers still love taking advantage of C's uselessly vague type definitions even in the modern day. Desktop compilers have been sensibly-behaved for ages now, but Omar I'm actually surprised you haven't been personally burned by this in the past 😅

That being said, I would worry slightly about this change having unintended consequences, particularly in the form of warnings in consuming code.

ocornut (Owner) commented Apr 4, 2025

One of my worries is that the warning for using %d to print an ImU32 would be triggered in any external codebase using ImU32 with printf-style statements. That's likely not overly common, but I'm fairly content with being able to use %d instead of %ld everywhere.

but Omar I'm actually surprised you haven't been personally burned by this in the past 😅

I suppose in the MS-DOS era I was, but I'm assuming today's architectures with 16-bit ints are not likely to run dear imgui anyhow?

GavinNL (Author) commented Apr 4, 2025

Fair point. The C-style print statements are probably where the biggest issues will arise.

What if you had a preprocessor definition to override the default imgui types with the standard types? That way the default behaviour is unchanged, and anyone who wants the change can enable it in their build.

#ifdef IMGUI_USE_C_STANDARD_INTEGER_TYPES
#include <stdint.h>
...
typedef int64_t  ImS64;  // 64-bit signed integer
typedef uint64_t ImU64;  // 64-bit unsigned integer
#else 
...
typedef signed   long long ImS64;  // 64-bit signed integer
typedef unsigned long long ImU64;  // 64-bit unsigned integer
#endif 

ocornut (Owner) commented Apr 4, 2025

Then third party code/libraries would have to take on an additional responsibility if they use those types.

What actual benefit would you get from switching to the other types?

GavinNL closed this as completed Apr 4, 2025
GavinNL (Author) commented Apr 4, 2025

Where I noticed the issue was when writing some template code:

if constexpr ( std::is_same_v<T, uint64_t> )
{
}

This failed when T was ImU64. Now that I know the problem, I can work around it, but it was behaviour that I didn't expect.

GavinNL (Author) commented Apr 4, 2025

I think I accidentally marked this as completed. You can close/ignore this issue.

ocornut reopened this Apr 4, 2025
ocornut (Owner) commented Apr 4, 2025

I'll reopen if it was accidentally closed.
But my question is: if you understand the type being used and can work around it, what benefit would you get from it using the other type? Is the workaround particularly tricky or problematic?

GavinNL (Author) commented Apr 4, 2025

It's easy to work around if you know that ImU64 will not be the same as uint64_t.

I replaced my code with the following:

if constexpr( std::is_unsigned_v<T> && sizeof(T)==sizeof(uint64_t))
{
}

My reasoning for opening the ticket was to try to get a bit closer to standard conventions. I can understand your point about "%d" potentially causing issues for other users, so I'm fine with not making this change.

BlueCyro commented
I'll reopen if it was accidentally closed. But my question is: if you understand the type being used and can work around it, what benefit would you get from it using the other type? Is the workaround particularly tricky or problematic?

I think a major point in favour of using the standard types would be to reduce the amount of cognitive effort the end user needs to put in. I'm not a C or C++ developer at all, but I have a profound appreciation for languages like C#, where all libraries that want to use something like an integer or single-precision number must use the int and float language keywords respectively.

It tends to make reasoning about the code much easier when you have standardized types that you can expect to see everywhere. IMO, macros/type aliasing make things really messy and hard to understand. I once found a library that type-aliased float away twice and I had to dig for like 30 minutes to figure out what they actually meant.
