C++ has exceptions for a very good reason.
Given the amount of C code I *still* see that does not check return values for errors, it's amazing that any of it runs at all. However, it is not always preferable to deal with every error in the lower-level code.
Exceptions give you the ability to write a library of code that does not pass back error values - it throws an exception instead. Whether the user of your library wants to deal with this or not is up to them, but the result of an uncaught exception is that the program terminates immediately.
Code written within the company I work for has to pass a code review before release (no matter which language is used) - this includes catching *all* exceptions when using C++. Although it *is* possible to put 'catch all exceptions' code in place, we do not allow the programmers to do so, because that just masks the error - exactly as ignoring return codes in C does.
In reality, the number of exception blocks needed is usually small.
Ignoring errors is not a valid way to deal with them - and you are saying that this is a strength of BASIC. I see this as a weakness.
As a contrived example, imagine a section of code that reads a specific amount of data from two files, ignoring for now the possibility of buffer overruns.
First in C:
/* returns 1 if ok, 0 if not */
int ReadFiles(char* Buffer1, char* Buffer2)
{
    FILE* First;
    FILE* Second;
    size_t SizeReadFromFirst;
    size_t SizeReadFromSecond;

    First = fopen("first.dat", "r");
    if (First == NULL)
        return 0;
    Second = fopen("second.dat", "r");
    if (Second == NULL)
    {
        fclose(First);
        return 0;
    }
    SizeReadFromFirst = fread(Buffer1, 1, 1024, First);
    if (SizeReadFromFirst == 1024)
    {
        SizeReadFromSecond = fread(Buffer2, 1, 1024, Second);
        if (SizeReadFromSecond == 1024)
        {
            fclose(First);
            fclose(Second);
            return 1;
        }
    }
    fclose(First);
    fclose(Second);
    return 0;
}
Now there are plenty of ways that the code could be reduced in size, but only at the expense of readability. If I happen to accidentally miss one of the 'fclose' calls when writing this code, I will eventually run out of file handles when running it. I'm also hoping that the user of this function is paying attention to the value I'm returning.
Now in C++:
void ReadFiles( char* Buffer1, char* Buffer2 )
{
    MyFile First( "first.dat" );
    MyFile Second( "second.dat" );
    First.Read( Buffer1 );
    Second.Read( Buffer2 );
}
OK, it looks like I've cheated here. But all I've done is used a piece of library code that deals with *all* the error checking for me. It's the user's responsibility to deal with the errors, as with the previous piece of C code, but this time they are forced to, and cannot just ignore the error. And with all the error checking code removed from the logical flow of the C++ code, which is easier to read?
Also note that the file is not being closed by me - it is being done automatically by the library code, so that I don't have to even think about it - that in itself is a good reason to use objects.
OK, here's the missing code that is written once, and reusable forever. Please note (for my pride's sake) that I do not consider this production-level code.
#include <cstdio>

class FileError {};
class FileOpenError : public FileError {};
class FileReadError : public FileError {};

class MyFile
{
    FILE* FilePtr;
public:
    MyFile( const char* Filename_ )
        : FilePtr( std::fopen( Filename_, "r" ) )
    {
        if (FilePtr == NULL)
            throw FileOpenError();
    }
    void Read( char* Buffer_ )
    {
        size_t SizeRead;
        SizeRead = std::fread( Buffer_, 1, 1024, FilePtr );
        if (SizeRead != 1024)
            throw FileReadError();
    }
    ~MyFile()
    {
        std::fclose( FilePtr );
    }
};
I've even given the coder the choice of dealing with a general FileError or a specific error of FileOpenError or FileReadError, AND THIS WOULD BE DOCUMENTED - it is library code after all. That's the key to producing bomb-proof code.
Normally, I would also include the class name, source file name, and the actual filename in the exception object thrown - but this would not complicate the MyFile object itself.
Back to the buffer overruns - in C, you just hope that the user of your code has passed you buffers large enough to hold the data. In C++, you could use either a string or a vector to store the data - both structures that resize themselves to hold the required data. Which is safer here?
As for error messages and warnings, we also require code to compile cleanly *without* typecasting. Because C++ spells its casts out by name (static_cast and friends), they are easy to locate in the review, and each one must have a reason to be there.
The problem with a lot of code is programmer assumptions or laziness, not the language. If any C++ program crashes because of buffer overruns or invalid data, the programmer deserves a good kicking, because there really is no excuse anymore. Unfortunately, I think we'll see these types of errors for a long time, because of old code, and the fact that there *are* lazy programmers out there.
Last point - debugging C++ code is currently a nightmare, but this is a fault of the tools, not the language. It's getting better... if only it could get better faster.