I personally think there should never be a problem, as long as you don't actually read from or write through it; doing so may or may not produce a segmentation fault, depending on how lucky you are. Some people seem to disagree, one of them being my professor.
@Paper Carnival: Your professor is wrong. A "segmentation fault" is a hardware interrupt generated in response to a memory read or write to an address outside of a range specified by a register or a RAM table of some sort. Assigning a value to a pointer variable causes a memory write to the address of the pointer variable itself, but not to the address it contains. Read and write signals for the pointer's stored address are not issued until memory at that address is actually accessed. Since there is no memory access, there will be no interrupt and hence no segmentation fault. This is all done in hardware, so the operating system is irrelevant. It's entirely possible to build a software segmentation-fault checker, but the overhead would be extremely high and I have never heard of it being done.
I agree with you that this should never be a problem, and I'll give a couple of good reasons why. First, let us assume that we have an application containing code similar to the one you give. Further, let us assume that this application is a closed-loop control system driving the flight surfaces on a 757. And just to make things interesting, let us also assume your professor is a passenger on that same plane, flying at 35,000 ft.
1. Fault response - How should our application respond to a segmentation fault of the variety you describe (i.e. the application is otherwise operating normally and will continue to do so if the fault is ignored)? There are two obvious choices: abort the application or ignore the fault. In this instance I believe your professor would prefer that the control system ignore inconsequential segmentation faults rather than plunge to his death. What about other applications? Can you name any application where it is better to crash than to continue running normally? So if the only reasonable response to this fault condition is to ignore it, what purpose does detecting it serve?
2. Reliability/Maintainability - Now suppose our application is very mature. It has been in use on thousands of aircraft for over a decade. With millions of total hours of operation, our application has proven stable and reliable. Now suppose that a single line of trivial code, unrelated to the pointer in question, is added and the application is recompiled. What if, over all these years, that last p++ left p just one byte shy of the segment boundary, and the new code pushed it just one byte past? This is a horrible situation: in a complex application it could be a long time before such a bug finally revealed itself. Under such a rule, any change to the application, regardless of how small or trivial, would incur huge and unacceptable risks.
3. Redundant-Convoluted Code - In the scenario your example gives, the pointer value would have to be tested before incrementing it. So either there must be a redundant if statement (there is already a p < MAX test) before incrementing the pointer, or the loop must be restructured in a less obvious and understandable way (which would also likely involve redundant code in one way or another). No useful purpose is served by requiring redundant and/or convoluted code. In my opinion, redundant, convoluted code fits Ghost's description of "...leaving a loaded shotgun lying around..." much better than a pointer containing an invalid address.
4. Runtime vs Compile Error - If the compiler were able to pick this up, then it could be argued that knowing a pointer contained an invalid address would be useful in some cases. However, this is not the case: segmentation faults occur at runtime, and may not occur until long after the application has been released into the field.