September 1996
Paul DiLascia is a freelance software consultant specializing in training and software development in C++ and Windows. He is the author of Windows++: Writing Reusable Code in C++ (Addison-Wesley, 1992).

Q: I want to use an enhanced CWinApp object in my apps, so my derived CMyWinApp class will not only override the virtual CWinApp methods, but also declare and define a couple of additional members and methods. I still want to use AfxGetApp to retrieve a pointer to my app object. AfxGetApp seems to use a hardcoded memory location to retrieve the app object. Do you consider it safe to cast the pointer obtained from AfxGetApp to my derived CMyWinApp type? Also, why is the CWinApp object always allocated NEAR (CMyWinApp NEAR theApp)?

Hermann Pallasch

A: If you look at the definition of AfxGetApp in afxwin1.inl, you'll see that it expands to afxCurrentWinApp:

    inline CWinApp* AFXAPI AfxGetApp()
    {
        return afxCurrentWinApp;
    }

In earlier versions of MFC, afxCurrentWinApp was an extern pointer, but currently (MFC 4.x) afxCurrentWinApp is #defined in afxwin.h as:

    #define afxCurrentWinApp AfxGetModuleState()->m_pCurrentWinApp

AfxGetModuleState returns a pointer to thread-local storage for a struct called AFX_MODULE_STATE that stores information about the current running module, including a pointer to the application object.

The reason for all this subterfuge has to do with DLLs. In a normal (non-DLL) source file, you could just write

    theApp.Mumble();

to access the Mumble member of the global object theApp. But if theApp lives in a DLL, this symbol is undefined in the non-DLL file. If theApp is defined in the DLL, it's the DLL's object, not the application's. Remember, functions in a DLL are like subroutines of some app that doesn't exist (isn't linked) until run time. How can an MFC extension DLL get the application object (theApp) of the application calling it? This is where the module state comes in.

Module states deserve more explanation than I can give in a short column, so I'll leave it for another day. The short story is this: MFC maintains something called the module state, which contains pointers to application-wide globals such as the current CWinApp. Each entry point in a DLL or OLE control is responsible for initializing the state through either AFX_MANAGE_STATE or METHOD_PROLOGUE. For more information, read MFC tech note #58.

Based on your question, it sounds like you're not writing a DLL, but just a normal EXE. In that case, the simplest thing to do is declare your application object in some global header file that gets #included everywhere, say myapp.h:

    class CMyWinApp : public CWinApp {
        // normal class declaration
        ...
    };

    // Declare global app object
    extern CMyWinApp theApp;

With the extern declaration as I've shown it, you can write

    theApp.SomeFunction();

in any CPP file that #includes myapp.h. There's nothing wrong with using theApp directly. In fact, it's a faster way of getting the app than AfxGetApp because you don't have to call a function or chase all the state pointers. And since theApp is declared as CMyWinApp, you don't have to cast.

If your app is composed of a main EXE with one or more DLLs, this won't work in the DLLs. In that case, I suggest you write a function similar to AfxGetApp:

    inline CMyWinApp* GetMyApp()
    {
        ASSERT_KINDOF(CMyWinApp, AfxGetApp());
        return (CMyWinApp*)AfxGetApp();
    }

GetMyApp returns the same pointer as AfxGetApp, but casts it to CMyWinApp. The ASSERT checks that the application using your DLL is in fact a CMyWinApp and not some other kind of app. The ASSERT is not compiled into a release build, so when you get right down to it, GetMyApp is exactly the same thing as AfxGetApp; in a release build it generates exactly the same amount of code as calling AfxGetApp.
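For the DLL case, tech note #58 boils down to a pattern like the following. This is just a sketch of the usual idiom for a regular MFC DLL built against the shared MFC library; it is my own illustration, not from the column, and the exported function name and its body are made up:

    // Minimal sketch of the module-state idiom (hypothetical function).
    #include <afxwin.h>

    extern "C" __declspec(dllexport) void MyDllFunction()
    {
        // First statement in every exported entry point: switch MFC's
        // module state to this DLL's state for the duration of the call;
        // the hidden local object restores the previous state on return.
        AFX_MANAGE_STATE(AfxGetStaticModuleState());

        // MFC calls made from here on (AfxGetApp, resource loading,
        // AfxMessageBox, and so on) see this DLL's module state.
        AfxMessageBox("Called into the DLL");
    }

For an OLE control, METHOD_PROLOGUE plays the same role inside the nested interface methods.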
As for the second part of your question, theApp is declared NEAR as an artifact of the 16-bit days: to be more efficient, AppWizard generated code that defined theApp as NEAR data. In Win32®, NEAR and FAR have no meaning since there's only one flat 32-bit memory model (hallelujah). In fact, under Win32, NEAR and FAR are #defined in windef.h to near and far, which are in turn #defined to nothing.
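Paraphrasing the Win32 headers (this is the gist, not the exact text of windef.h):

    // lowercase near/far expand to nothing...
    #define near
    #define far
    // ...and the uppercase macros expand to the lowercase ones
    #define NEAR near
    #define FAR  far

So a declaration like CMyWinApp NEAR theApp still compiles; the NEAR is simply ignored.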
By the way, there's no reason you have to call your global application object theApp. You can call it myApp, or YourApp, or theOneAndOnlyAppJack.

Q: What are some guidelines to go by in deciding whether I should use the shared MFC DLL or just statically link? I have a resource DLL with a compiled size of about 240K and an app with a compiled size of about 680K, both built with the shared version of MFC.DLL.

Larry Wall
Via CompuServe

A: Just so everyone knows what we're talking about here, the issue is whether to link with the static version of the MFC libraries or the shared DLL version. In the former case, all of the MFC code is compiled into your EXE file. In the latter case, your code calls functions in a DLL, MFCxx.DLL or MFCxxD.DLL, where xx is the version number and D indicates the debug version of the DLL. For example, my \WINDOWS\SYSTEM directory contains MFC40.DLL and MFC40D.DLL, the release and debug versions of the MFC 4.0 DLL. You select which way you want to link in the Project Settings dialog in the Visual C++® IDE (see Figure 1).

Figure 1: Selecting Link Settings

The short answer is: for most purposes, use the shared DLL version. It makes your code much smaller and your build times much quicker.

If the shared DLL is such a win, why would you ever link with the static libraries? The main advantage of the static libraries is that they make your program self-contained. When you use the shared DLL, you have to ensure that the correct MFCxx.DLL is installed on your user's machine. Many programs, including some from Microsoft, use MFC, so the proper MFC DLL is probably already installed. (My machine has several MFCxx.DLLs, going back to version 2.0. I could delete some of them, but some old program would probably need the one I deleted.) Still, you can't rely on the DLLs being there, so your installation program must copy MFCxx.DLL if it isn't already present.

There is something satisfying about having a self-contained app: you can just copy the EXE from place to place and it'll work. Well, maybe in the old days. These days most apps rely on several dozen installed DLLs, everything from MMSYSTEM to STORAGE to MFCxx. Even in the old days, there was the operating system itself, USER.EXE and KERNEL.EXE. As more and more programs use more and more DLLs, it's not so clear where the operating system ends and the application begins. With MFCxx.DLL installed on practically every machine, is it part of Windows® or the app? My point is, building a self-sufficient app is probably about as much an illusion as thinking that you can be self-sufficient by growing your own vegetables. Just as all the elements of modern society are highly interdependent, so too modern software apps rely on all sorts of DLLs being present. Independence from DLLs is probably only realistic for very small programs.

Being self-contained is the most obvious reason to link with the static MFC libraries, but there are a few others worth mentioning. Performance with static libraries can be better than with dynamic link libraries. The MFC DLL is tuned for certain scenarios such as WordPad, the Visual C++ IDE, and some of the smaller sample programs that come with Visual C++. If your app matches these scenarios, there won't be much difference in performance. The only way to know for sure is to compare the differences yourself. Still, I would have to say that for most purposes this is not an issue.

You might be concerned about the overall size of your program, for example, to fit on a single floppy or to download quickly over the Internet. In many cases, a standalone static app is smaller than the total size of a combined app plus shared MFC DLL; the static version only links the OBJ files it needs, whereas the shared version includes the entire MFCxx.DLL. If users download your app over a modem or other slow link, you might want to use static linking. Of course, the best way to minimize download time over the net is to first check whether MFCxx.DLL is already installed. If it is, you don't need to download it, and the shared version would be vastly smaller; MFC40.DLL is almost one megabyte!

If your app uses undocumented MFC members, functions, or features, they may break in minor releases of the MFC DLL. When a new major version of MFC comes out, you get new versions of the DLLs; programs that were compiled for 3.0 still use MFC30.DLL. But when a new minor version comes out, the DLL is not renamed, it just gets replaced. In other words, when going from MFC 4.0 to MFC 4.1, there's no MFC41.DLL, just a new version of MFC40.DLL. Microsoft does its best not to break things, but sometimes it happens. Linking with the static library lets you control the exact version of MFC your app will use, so you can use undocumented features without worrying that they'll break or disappear when some other app installs the next minor version of MFC40.DLL over the one on your user's machine. (Of course, being a good programming citizen, you NEVER use undocumented features, right?)

Unless one of these special situations applies to you, just use the shared DLL and you'll be happy. One word of caution, though: you should develop and debug using the same approach you plan to ship with. Don't build with the shared version because it's faster and then switch at the last minute to ship a statically linked version to be self-contained, or at least don't switch without a thorough round of testing. In theory, the code should behave identically whether you use shared or static, but I've heard of bizarre situations where there are differences.

Q: I've been teaching myself C++, and in some of my research regarding the new and delete operators I have found that if you create an array such as

    pc = new char [4];

then when you delete this pointer you should use the delete [] operator, like so:

    delete [] pc;

In The C++ Programming Language, Second Edition (Addison-Wesley, 1991), Bjarne Stroustrup says, "The effect of deleting an array with the plain delete syntax is undefined, as is deleting an individual object with the delete [] syntax." Yet in numerous tests I performed with my compiler (Visual C++ version 1.52), I've been unsuccessful in getting the call to cause memory leaks or fail. Can you tell me in what circumstances it is necessary to use the delete [] operator, and why?

Michael Morris

A: You should always use delete [] whenever you've allocated an array with new []. The reason you can't get your code to leak memory or crash is probably that you're only testing it with objects like char or int that don't themselves allocate memory.

I wrote a little program that shows what happens when you allocate an array of class objects whose constructors allocate memory, then delete the array without using [] (see Figure 2). To test what happens, I allocate an array of five objects, then delete p with and without using [].
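Figure 2 isn't reproduced here, so the following is only a sketch in the same spirit, not the original listing; the class layout and member names are my own. The idea is a class whose constructor allocates memory and whose destructor frees it, so you can watch which destructors actually run:

    #include <stdio.h>

    class CSomeObject {
        char* m_buf;                  // each object owns some heap memory
    public:
        CSomeObject()  { m_buf = new char[100]; printf("  ctor %p\n", (void*)this); }
        ~CSomeObject() { delete [] m_buf;       printf("  dtor %p\n", (void*)this); }
    };

    int main()
    {
        printf("delete [] p:\n");
        CSomeObject* p = new CSomeObject[5];
        delete [] p;      // all five destructors run; nothing leaks

        printf("delete p (wrong):\n");
        p = new CSomeObject[5];
        delete p;         // formally undefined behavior; on the compilers
                          // discussed here, at most the first destructor
                          // runs and the other objects' buffers leak
        return 0;
    }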
Figure 3 shows the results when you run LEAK. As you can see, the problem lies not with the storage used for the array itself, but with calling the destructors for the objects in the array. When I use [] like a good citizen, everything is fine. When I delete without using [], only the first object in the array is destroyed; that is, only the first object in the array has its destructor called.

You might think the compiler should be smart enough to know whether a pointer points to an array or a single object. Indeed, it could be made smart enough to know the difference, but this would require adding extra run-time information that would make the layout of an array in C++ differ from the layout of an array in C, where each object follows the next with no extra fields in between. Compatibility with C is of utmost importance, since "C++ is C."

The problem is, the C++ specification says that whether you're creating or destroying an array or a single object, the operation goes through the same global new/delete operators. So how does delete(void* p) know whether p points to an array or a single object? It can't, not without extra information. So instead, C++ requires that the programmer tell it when the pointer is an array. When you write

    p = new CSomeObject [5];
    ...
    delete [] p;

the compiler generates code for the delete something like this:

    int n = numberOfElementsInArray(p);   // pseudocode
    for (i = 0; i < n; i++)
        p[i].destructor();                // call each object's destructor
    delete(p);                            // then free the storage

How does C++ know how many elements are in the array? Originally, you had to tell it:

    delete [n] p;   // old syntax: you supplied the count

This got tiresome quickly; it put the onerous burden on programmers of always passing the length around to anyone who might delete the array. That requirement was relaxed, and the number of elements in the array is now usually stored in a memory block header that precedes the actual array. The value is initialized when the array is allocated using new []. In other words, there's some sort of header:

    struct MEM_BLOCK_HEADER {
        int size;      // size of block
        int numElts;   // number of elements in array
        ...
    };

One of these headers is prepended to every array allocation, much as malloc prepends its own header to every block it allocates. This explains why you couldn't produce a leak allocating arrays of chars or ints; the array itself knows how big it is. Just as in C, when you call free(p) you don't have to say how big the block is, because that information is stored in the block header.

With most library implementations, you won't get into trouble allocating simple C arrays like char[256] and then deleting them without []:

    p = new char[256];
    ...
    delete p;   // should be delete [] p

In fact, I see this quite frequently in real code. Since no destructors are involved, this is safe. However, the [] brackets are crucial for arrays of objects with destructors, and real programming pros always insert them, even for char arrays, out of good habit and just to show off how smart they are.

Have a question about programming in C or C++? You can mail it directly to Paul at 72400.2702@compuserve.com.