my $success = 0;
if (open(my $fh, ">", "/dev/null")) {
    if (myFunction("some parameter")) {
        my $obj = My::Something->new();
        if ($obj->method1()) {
            if ($obj->method2()) {
                $success = 1;
            }
        }
    }
}
if (! $success) {
    warn("Oh dear, something went wrong");
    return 0;
}
return 1;
Levels of nesting can start to look ugly, and with lots of return-value checking going on, the code becomes hard to follow and maintain. So instead, I often simplify things like this:
eval {
    open(my $fh, ">", "/dev/null") or do {
        die(My::Exception->new($!));
    };
    myFunction("some parameter");
    my $obj = My::Something->new();
    $obj->method1();
    $obj->method2();
};
if ($@) {
    my $ref = ref($@);
    if ("My::Exception" eq $ref) {
        $@->warn();
    } else {
        warn("Oh dear, something went wrong: $@");
    }
}
return 1;
Ensuring that all the packages and functions you create throw some exception object makes error reporting easy to localise and keeps it self-contained. It's also easy to disable, provided you haven't got warnings plastered throughout your code.
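As a rough sketch, an exception class like the My::Exception used above only needs a constructor and a warn() method to support the pattern; those two names match the earlier code, but the internals here are illustrative assumptions, not the article's actual implementation:

```perl
package My::Exception;

use strict;
use warnings;

sub new {
    my ($class, $message) = @_;
    # Record the message and where the exception was raised,
    # so warn() can report something useful later.
    my (undef, $file, $line) = caller();
    return bless({
        message => $message,
        file    => $file,
        line    => $line,
    }, $class);
}

sub message { return $_[0]->{message}; }

sub warn {
    my $this = shift;
    # Route all reporting through one place, making it easy to
    # silence or redirect globally.
    CORE::warn(sprintf("%s at %s line %d\n",
        $this->{message}, $this->{file}, $this->{line}));
    return;
}

package main;
```

Because every throw site just calls die(My::Exception->new(...)), the catch site decides how (or whether) to report, rather than each call site doing its own warn().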
Albeit nice, in Perl there is one caveat that caught me out. Consider the scenario above, where the destructor of the My::Something instance calls some method or function that contains an eval block. With that in mind, also consider what happens when method2 throws an exception. Here is the sequence of events:
# The My::Something constructor is executed.
my $obj = My::Something->new();

# When method2 throws an exception, the eval
# block is exited and 'die' sets $@ to the
# appropriate exception object.
$obj->method2();

# After setting $@, but before executing the first
# statement following the eval block, Perl runs the
# destructor on $obj. Within the destructor, some
# method calls 'eval', which resets $@ on entry.
eval { die("Ignore this error"); };

# When the destructor has finished, Perl executes the
# next statement, which evaluates whether the 'eval'
# block was successful or not.
if ($@) { ...

# Because the destructor's 'eval' reset $@, the code
# skips the error reporting and returns a successful
# return value.
return 1;
This is a complete disaster and will easily go unnoticed until something much further down the line reveals that something that should have happened hasn't, or vice versa. However, there is an extremely simple way to secure an object's destructor against such an event: simply declare $@ in local scope within the destructor:
sub DESTROY {
    my $this = shift;
    local $@;
    eval {
        die("Now this error will truly be ignored");
    };
}
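The whole failure mode, and the fix, can be demonstrated side by side. The two class names here are made up for illustration: Clobbered::DESTROY runs an eval without localising $@, while Safe::DESTROY localises it first. (Perls from 5.14 onwards guard against this particular clobbering by assigning $@ only after unwinding completes, but local $@ remains good practice for code that must run on older perls.)

```perl
use strict;
use warnings;

# Hypothetical class whose destructor clobbers $@.
package Clobbered;
sub new     { return bless({}, shift); }
sub DESTROY { eval { die("ignored\n"); }; }

# Hypothetical class whose destructor localises $@ first.
package Safe;
sub new     { return bless({}, shift); }
sub DESTROY { local $@; eval { die("ignored\n"); }; }

package main;

eval {
    my $obj = Clobbered->new();
    die("real error\n");
};
# On perls before 5.14, the destructor's eval clobbers $@
# here and the real error is silently lost.
print($@ ? "caught: $@" : "error lost!\n");

eval {
    my $obj = Safe->new();
    die("real error\n");
};
# With 'local $@' in the destructor, the real error survives
# on any perl: prints "caught: real error".
print($@ ? "caught: $@" : "error lost!\n");
```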
For such a simple solution, it's worth making a habit of always localising $@ within a destructor, unless you explicitly want to propagate a destructor exception up to some other handler. But since propagating it risks overwriting some other, more important exception (quite possibly the very one that triggered the destructor in the first place), it's probably worth implementing a global variable for destructor exceptions instead:
package My::Something;

our $destruct_except;

sub DESTROY {
    my $this = shift;
    local $@;
    $My::Something::destruct_except = undef;
    eval {
        die("Oh dear, that's not supposed to happen!");
    };
    if ($@) {
        $My::Something::destruct_except = $@;
    }
}
Obviously, if there are multiple instances of the same object type in a single eval block, it would be very difficult to track which destructors threw and which didn't. Then you would have to become more cunning, using some sort of hash or list to stack up the exceptions that occurred in each destructor. For the most part, though, you are not usually interested in what fails within a destructor, since its primary purpose is to clean up. If what it wants to clean doesn't exist then, as far as you are concerned, its job is done: the lack of existence implies it is already clean, so there is nothing to report.
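If you did need to track failures across multiple instances, the list idea could be sketched like this; the class, the id field, and the package array are all illustrative assumptions:

```perl
use strict;
use warnings;

# A hypothetical resource class whose cleanup can fail. Each
# destructor appends a record rather than overwriting a single
# shared slot, so no failure shadows another.
package My::Something;

our @destruct_exceptions;

sub new {
    my ($class, $id) = @_;
    return bless({ id => $id }, $class);
}

sub DESTROY {
    my $this = shift;
    local $@;
    eval {
        die("cleanup failed for instance $this->{id}\n");
    };
    if ($@) {
        push(@destruct_exceptions, { id => $this->{id}, error => $@ });
    }
}

package main;

{
    my $first  = My::Something->new(1);
    my $second = My::Something->new(2);
}
# Both destructors have now run; report anything that went wrong.
foreach my $e (@My::Something::destruct_exceptions) {
    warn("instance $e->{id}: $e->{error}");
}
```

Each record carries enough context (here, just an id) to tell the destructions apart after the fact, which a single scalar cannot do.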