Replies: 1 comment
This is related to issue #79 and I assume we agreed on treating hex constants similarly to decimal constants.
I would like to discuss the current state of implicit casting of hexadecimal constant literals.
Currently, binary and hexadecimal literals have a fixed type based on their width: e.g., `0x1234` is `Bits<16>`. This contrasts with decimal literals, which use the flexible `Const` type. As a result, hex/bin literals don't support automatic upcasting, which limits their usability in contexts expecting larger types.
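
To illustrate the difference, here is a minimal C++ sketch. The `Bits` and `Const` templates below are only an analogy for the two kinds of literal types and are not the project's actual definitions.

```cpp
#include <cstdint>

// Fixed-width value: what a hex/bin literal produces today (e.g. 0x1234 -> Bits<16>).
template <unsigned N>
struct Bits {
    uint64_t value;
    // No implicit conversion to a wider Bits<M>, so there is no automatic upcast.
};

// Value-carrying constant: what a decimal literal produces (e.g. 4096 -> Const<4096>).
template <uint64_t V>
struct Const {
    // Converts implicitly to any Bits<N> wide enough to hold V.
    template <unsigned N>
    operator Bits<N>() const {
        static_assert(N >= 64 || V < (uint64_t{1} << N),
                      "constant does not fit the requested width");
        return Bits<N>{V};
    }
};
```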
For example, passing a hex literal as a memory address fails to type-check: the memory target expects a `Bits<64>`, while a hex literal like `0x00` is `Bits<16>`. Using a decimal literal like `4096` works, since it is treated as `Const<4096>` and can be upcast. But for memory addresses, decimal literals aren't a practical workaround. The alternative is to add a manual cast to 64 bits; however, this decreases readability and is cumbersome.
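
Reusing the `Bits` and `Const` definitions from the sketch above, the situation looks roughly like this; the `write` port and the `zero_extend` helper are hypothetical names chosen only for illustration, not the project's API.

```cpp
// Hypothetical memory port that expects a 64-bit address (declaration only).
void write(Bits<64> addr, Bits<8> data);

// Hypothetical explicit widening helper, standing in for the manual cast.
template <unsigned To, unsigned From>
Bits<To> zero_extend(Bits<From> b) {
    static_assert(To >= From, "zero_extend never narrows");
    return Bits<To>{b.value};
}

void example(Bits<8> data) {
    // write(Bits<16>{0x1000}, data);               // rejected: Bits<16> does not upcast to Bits<64>
    write(Const<4096>{}, data);                      // accepted: the Const form widens implicitly
    write(zero_extend<64>(Bits<16>{0x1000}), data);  // accepted, but verbose at every call site
}
```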
I think binary and hexadecimal literals should be implicitly upcast like decimal ones. However, they seem to carry an implicit minimal size that doesn't always match the minimal type needed for the value.

For example, `0x02` would have a minimal type of `Bits<8>`, but if treated like a decimal literal, it could be used in a context expecting `Bits<4>`, which might cause type truncation, something we probably want to avoid.
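
One way to get the convenience without the truncation risk would be a widening-only implicit conversion between `Bits` widths. Below is a self-contained sketch of that idea, again as a C++ analogy rather than the project's actual rules:

```cpp
#include <cstdint>

// Sketch of the proposed behavior: a hex/bin literal keeps its minimal width
// but may widen implicitly; narrowing conversions are rejected at compile time.
template <unsigned N>
struct Bits {
    uint64_t value;

    template <unsigned M>
    operator Bits<M>() const {
        static_assert(M >= N, "implicit conversions may only widen, never truncate");
        return Bits<M>{value};
    }
};

int main() {
    Bits<8>  hex_byte{0x02};       // 0x02 with its minimal type Bits<8>
    Bits<64> widened = hex_byte;   // fine: implicit upcast, like a decimal literal
    // Bits<4> nibble = hex_byte;  // rejected: would truncate the value
    (void)widened;
}
```

With a rule like this, `0x02` could still flow into any context of width 8 or more, while a narrowing use such as `Bits<4>` would remain a compile-time error.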