0xDEADBEEF vs. NULL
Throughout various code, I have seen people in debug builds either zero out memory at allocation (memset(ptr, NULL, size)) or fill it with 0xDEADBEEF (memset(ptr, 0xDEADBEEF, size)). So:
- What are the advantages of each one, and what is the generally preferred way to achieve this in C/C++?
- Also, if a pointer was assigned a value of 0xDEADBEEF, couldn't it still dereference to valid data?
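For reference, a minimal sketch of the two idioms being compared (the buffer size and the allocation are just illustrative):

    #include <cstdlib>
    #include <cstring>

    int main() {
        const size_t size = 64;
        void *ptr = std::malloc(size);

        std::memset(ptr, 0, size);           // idiom 1: zero-fill (0, not NULL, is the correct argument)
        std::memset(ptr, 0xDEADBEEF, size);  // idiom 2: "garbage" fill; note memset keeps only the
                                             // low byte, so this actually fills the block with 0xEF
        std::free(ptr);
    }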
Answer by EboMike for 0xDEADBEEF vs. NULL
I would definitely recommend 0xDEADBEEF. It clearly identifies uninitialized variables and accesses through uninitialized pointers.
Being odd, a 0xDEADBEEF pointer will definitely crash on the PowerPC architecture when dereferenced to load a word, and will very likely crash on other architectures too, since the memory is likely to be outside the process' address space.
Zeroing out memory is a convenience since many structures/classes have member variables that use 0 as their initial value, but I would very much recommend initializing each member in the constructor rather than using the default memory fill. You will really want to be on top of whether or not you properly initialized your variables.
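As a hypothetical illustration of that last point, explicit member initialization is correct in both debug and release builds, independent of whatever the allocator filled in (the Connection type here is made up):

    struct Connection {
        int  fd;
        bool open;

        // Explicit initialization: correct regardless of what pattern
        // (0x00, 0xEF, ...) the debug allocator wrote into the raw memory.
        Connection() : fd(-1), open(false) {}
    };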
Answer by jeroenh for 0xDEADBEEF vs. NULL
DEADBEEF is an example of hexspeak. With it, as a programmer you intentionally convey an error condition.
Answer by Guy Sirton for 0xDEADBEEF vs. NULL
http://en.wikipedia.org/wiki/Hexspeak
These "magic" numbers are are a debugging aid to identify bad pointers, uninitialized memory etc. You want a value that is unlikely to occur during normal execution and something that is visible when doing memory dumps or inspecting variables. Initializing to zero is less useful in this regard. I would guess that when you see people initialize to zero it is because they need to have that value at zero. A pointer with a value of 0xDEADBEEF could point to a valid memory location so it's a bad idea to use that as an alternative to NULL.
Answer by Eric Z for 0xDEADBEEF vs. NULL
One reason to null out the buffer or set it to a special value is that you can easily tell in the debugger whether the buffer contents are valid. Dereferencing a pointer with the value 0xDEADBEEF is almost always dangerous (it will probably crash your program/system) because in most cases you have no idea what is stored there.
Answer by Xolve for 0xDEADBEEF vs. NULL
I would personally recommend using NULL (or 0x0), as it represents the null pointer as expected and comes in handy in comparisons. Imagine you are using a char * that picked up 0xDEADBEEF somewhere along the way for some reason (I don't know why); with NULL, at least your debugger will come in very handy by telling you plainly that the pointer is 0x0.
Answer by Puppy for 0xDEADBEEF vs. NULL
I would go for NULL, because it's much easier to mass zero out memory than to go through later and set all the pointers to 0xDEADBEEF. In addition, there's nothing at all stopping 0xDEADBEEF from being a valid memory address on x86; admittedly, it would be unusual, but far from impossible. NULL is more reliable.
Ultimately, look: NULL is the language convention. 0xDEADBEEF just looks pretty and that's it. You gain nothing from it. Libraries will check for NULL pointers; they don't check for 0xDEADBEEF pointers. In C++ the idea of the null pointer isn't even tied to a zero value, just indicated with the literal zero, and in C++0x there is a nullptr and a nullptr_t.
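A short C++11 sketch of why nullptr fits the convention better than a magic pattern (the overloads here are illustrative):

    void f(int)    {}
    void f(char *) {}

    int main() {
        char *p = nullptr;        // nullptr has its own type, std::nullptr_t
        if (p == nullptr) {}      // libraries and idiomatic code test against null...
        if (!p)           {}      // ...which also makes the plain boolean test work

        f(nullptr);               // unambiguously calls f(char *); nullptr never converts to int
        // f(NULL);               // may call f(int) or be ambiguous, depending on how NULL is defined
    }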
Answer by 6502 for 0xDEADBEEF vs. NULL
Writing 0xDEADBEEF or another non-zero bit pattern is a good idea to be able to catch both write-after-delete and read-after-delete uses.
1) Write after delete
By writing a specific pattern you can check if a block that has already been deallocated was written over later by buggy code; in our debug memory manager we use a free list of blocks, and before recycling a memory block we check that our custom pattern is still written all over the block. Of course it's sort of "late" when we discover the problem, but still much earlier than it would be discovered without the check. Also, we have a special function that is called periodically (and that can also be called on demand) that just goes through the list of all freed memory blocks and checks their consistency, so we can call this function often when chasing a bug.
Using 0x00000000 as the value wouldn't be as effective, because zero may well be exactly the value that buggy code wants to write in the already deallocated block, e.g. zeroing a field or setting a pointer to NULL (it's much less likely that the buggy code wants to write 0xDEADBEEF).
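A simplified sketch of the kind of check being described (the names, the single fill byte, and the bookkeeping are all hypothetical; a real debug memory manager keeps richer metadata):

    #include <cassert>
    #include <cstddef>
    #include <cstring>

    static const unsigned char FREED_PATTERN = 0xEF;  // low byte of 0xDEADBEEF

    // On deallocation: stamp the block, then put it on the free list.
    void stamp_freed_block(unsigned char *block, std::size_t size) {
        std::memset(block, FREED_PATTERN, size);
    }

    // Before recycling a block (or periodically, over the whole free list):
    // any byte that lost the pattern means buggy code wrote after delete.
    void check_freed_block(const unsigned char *block, std::size_t size) {
        for (std::size_t i = 0; i < size; ++i)
            assert(block[i] == FREED_PATTERN && "write-after-delete detected");
    }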
2) Read after delete
Leaving the content of a deallocated block untouched, or even writing just zeros, will increase the possibility that someone reading the content of a dead memory block will still find the values reasonable and compatible with invariants (e.g. a NULL pointer, as on many architectures NULL is just binary zeroes, or the integer 0, the ASCII NUL char, or a double value 0.0). By instead writing "strange" patterns like 0xDEADBEEF, most code that reads those bytes will probably find strange, unreasonable values (e.g. the integer -559038737 or a double with value -1.1885959257070704e+148), hopefully triggering some other self-consistency check or assertion.
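To make those numbers concrete, here is a small demonstration (a sketch; it assumes two's-complement integers and IEEE 754 doubles, which covers practically every current platform):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    int main() {
        // The 32-bit pattern reinterpreted as a signed integer:
        std::uint32_t raw32 = 0xDEADBEEFu;
        std::int32_t as_int;
        std::memcpy(&as_int, &raw32, sizeof as_int);
        std::printf("%d\n", (int)as_int);        // -559038737

        // Two copies of the pattern reinterpreted as a double:
        std::uint64_t raw64 = 0xDEADBEEFDEADBEEFull;
        double as_double;
        std::memcpy(&as_double, &raw64, sizeof as_double);
        std::printf("%g\n", as_double);          // about -1.19e+148: obvious garbage
    }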
Of course nothing is really specific to the bit pattern 0xDEADBEEF; actually we use different patterns for freed blocks, the before-block area, and the after-block area, and our memory manager also writes another (address-dependent) specific bit pattern to the content part of any memory block before giving it to the application (this is to help find uses of uninitialized memory).
Answer by AnT for 0xDEADBEEF vs. NULL
Using either memset(ptr, NULL, size) or memset(ptr, 0xDEADBEEF, size) is a clear indication of the fact that the author did not understand what they were doing.
Firstly, memset(ptr, NULL, size) will indeed zero out a memory block in C and C++ if NULL is defined as an integral zero. However, using NULL to represent the zero value in this context is not an acceptable practice. NULL is a macro introduced specifically for pointer contexts. The second parameter of memset is an integer, not a pointer. The proper way to zero out a memory block would be memset(ptr, 0, size). Note: 0, not NULL. I'd say that even memset(ptr, '\0', size) looks better than memset(ptr, NULL, size).
Moreover, the most recent (at the moment) C++ standard, C++11, allows defining NULL as nullptr. A nullptr value is not implicitly convertible to type int, which means that the above code is not guaranteed to compile in C++11 and later. In the C language (and your question is tagged C as well) the macro NULL can expand to (void *) 0. Even in C, (void *) 0 is not implicitly convertible to type int, which means that in the general case memset(ptr, NULL, size) is simply invalid code in C.
Secondly, even though the second parameter of memset has type int, the function interprets it as an unsigned char value. This means that only the lowest byte of the value is used to fill the destination memory block. For this reason memset(ptr, 0xDEADBEEF, size) will compile, but will not fill the target memory region with 0xDEADBEEF values, as the author of the code probably naively hoped. memset(ptr, 0xDEADBEEF, size) is equivalent to memset(ptr, 0xEF, size) (assuming 8-bit chars). While this is probably good enough to fill some memory region with intentional "garbage", things like memset(ptr, NULL, size) or memset(ptr, 0xDEADBEEF, size) still betray a major lack of professionalism on the author's part.
Again, as other answers have already noted, the idea here is to fill the unused memory with a "garbage" value. Zero is certainly not a good idea in this case, since it is not "garbagy" enough. When using memset you are limited to one-byte values, like 0xAB or 0xEF. If this is good enough for your purposes, use memset. If you want a more expressive and unique garbage value, like 0xDEADBEEF or 0xBAADF00D, you won't be able to use memset with it. You'll have to write a dedicated function that can fill a memory region with a 4-byte pattern, as sketched below.
A pointer in C and C++ cannot be assigned an arbitrary integer value (other than a Null Pointer Constant, i.e. zero). Such an assignment can only be achieved by forcing the integral value into the pointer with an explicit cast. Formally speaking, the result of such a cast is implementation-defined. The resultant value can certainly point to valid data.
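A minimal sketch of such a pattern-fill function (hand-rolled and hypothetical; macOS, for one, ships a memset_pattern4 function for exactly this job):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Fill a memory region with a repeating 4-byte pattern such as 0xDEADBEEF.
    // The pattern is laid down in native byte order, so a debugger's 32-bit
    // memory view shows DEADBEEF words; trailing bytes get a partial pattern.
    void fill_pattern4(void *dst, std::uint32_t pattern, std::size_t size) {
        unsigned char bytes[4];
        std::memcpy(bytes, &pattern, 4);
        unsigned char *p = static_cast<unsigned char *>(dst);
        for (std::size_t i = 0; i < size; ++i)
            p[i] = bytes[i % 4];
    }

    // Usage: fill_pattern4(ptr, 0xDEADBEEF, size);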
Answer by jeff slesinger for 0xDEADBEEF vs. NULL
Vote me down if this is too opinion-y for StackOverflow, but I think this whole discussion is a symptom of a glaring hole in the toolchain we use to make software.
Detecting uninitialized variables by initializing memory with "garbage-y" values detects only some kinds of errors in some kinds of data.
And detecting uninitialized variables in debug builds but not in release builds is like following safety procedures only when testing an aircraft, and telling the flying public to be satisfied with "well, it tested OK".
WE NEED HARDWARE SUPPORT for detecting uninitialized variables. As in something like an "invalid" bit that accompanies every addressable unit of memory (= a byte on most of our machines), which is set by the OS in every byte that VirtualAlloc() (et al., or its equivalents on other OSes) hands over to applications, which is automatically cleared when the byte is written to, but which causes an exception if the byte is read first.
Memory is cheap enough for this and processors are fast enough for this. This would end the reliance on "funny" patterns and keep us all honest to boot.