Wednesday, 22 December 2010

resolv.conf and DynDNS

So, in order to save a few pennies on my broadband service, I have downgraded my package, which means I lose my static IP address.  The issue is that I have bridge mode set up on the router so I can manage my own internal network, placing different security policies onto different subnets and changing which services are listening on each network, UPnP for example.  Since game consoles typically require UPnP and most other things don't, the consoles are locked down onto their own private subnet, connected to the outside world via a different interface on the server.

So back to the bridge mode issue.  Why is it an issue?  Well, when switching away from a static address, your /etc/network/interfaces configuration must change from static to dhcp as well:

auto eth0
iface eth0 inet dhcp
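
For contrast, the static stanza this replaces would have looked something along these lines (the addresses are purely illustrative):

auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.248
    gateway 203.0.113.9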

What this means is that dhclient will handle setting up routes, assigning the IP address and so on for the interface when it receives a DHCP response from the ISP's DHCP server.  This usually entails losing everything you have set up in resolv.conf when dhclient decides to overwrite it.  To prevent it being overwritten, you need to use the hooks provided by dhclient-script.  See the dhclient-script man page.

Essentially, what is required is an enter hook that declares a function called 'make_resolv_conf'.  This function will replace the function defined in dhclient-script at the point the enter hook gets included, and thus, if the body of the function does nothing, resolv.conf doesn't get modified.  For me, this is good since DNS is managed by dnsmasq and I forward DNS requests to OpenDNS.org to provide simple security on things like typos:

www.bcarlays.com -> Hmm, a nice place to set up a spoof / phishing site, I would imagine.  OpenDNS resolves addresses like these to one of your choosing.  For me, I have it resolve back to the address of my internal gateway, where I host a 404 page.
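
As a minimal sketch of the enter hook itself (assuming the Debian-style /etc/dhcp3/dhclient-enter-hooks.d directory listed at the end of this post, and a hypothetical file name), all it needs to do is empty out make_resolv_conf:

# /etc/dhcp3/dhclient-enter-hooks.d/nodnsupdate (hypothetical name)
# Override make_resolv_conf so dhclient-script leaves resolv.conf alone;
# dnsmasq carries on managing DNS and forwarding queries to OpenDNS.
make_resolv_conf() {
    :
}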

What next?  Well, there is the issue that this dynamic IP address being assigned to my bridged interface is... well... dynamic.  So when the lease runs out, the address could change, making my network inaccessible from the WAN.  To counter this, ddclient needs to be run whenever the lease runs out or a new address is assigned to the interface, as well as periodically in order to keep the DynDNS hostname alive.  I lost a host to DynDNS once before because I didn't force an update every so often, so I want to avoid that painful experience again.

So how on earth do you go about executing ddclient whenever the lease is renewed or the interface is bound to the DHCP server?  Well, let's use the dhclient-script hooks again.  I created an exit hook script this time, to listen for dhclient-script being called with a reason of BOUND, RENEW or REBIND.  These three reasons get triggered whenever the interface address is likely to change, and often when it hasn't changed at all.  But importantly, they ensure ddclient gets called when the lease expires.  Here is the script:


# dhclient-script exit hook to ensure that the DYNDNS address is updated
# through the ddclient, whenever the address changes.


function ddclient_exithook() {
    local prog="${0##*/}::ddclient_exithook()"
    logger -t "$prog" "Reason: $reason"


    case $reason in
    (BOUND|RENEW|REBIND)
        # Run the ddclient script to rebind the address to the DYNDNS hostname
        cat <<DDCLIENT
Executing ddclient to renew DynDNS hostname...

$(/usr/sbin/ddclient -force -verbose 2>&1)

Executing ddclient returned exitcode: $?
DDCLIENT
        ;;
    (*)
        # No need to renew the DYNDNS address
        logger -t "$prog" "Nothing to be done for $reason"
        ;;
    esac
}


ddclient_exithook

Test that the script works by taking the interface down and bringing it back up.  This will force the interface to bind to the DHCP server when it comes back up, causing dhclient-script to be invoked with the BOUND reason.
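
For example, assuming the bridged interface is eth0 and syslog is written to /var/log/syslog:

$ sudo ifdown eth0 && sudo ifup eth0
$ grep ddclient_exithook /var/log/syslog    # the "Reason: BOUND" message should appear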

See also:

/etc/dhcp3/dhclient-enter-hooks.d
/etc/dhcp3/dhclient-exit-hooks.d
/etc/ddclient.conf

Man pages:

ddclient
dhclient-script
dhclient

Wednesday, 15 December 2010

Subversion and Gnome Keyring

The problem:

You want to run an svn command in a cron task as a user who is already logged in and authenticated against a running gnome-keyring-daemon and the svn repository in question, but DBus prevents that user from accessing his own daemon without an associated x-session-manager.

The solution:

Attach to the Gnome session artificially, in order to be granted access to the gnome-keyring-daemon through DBus.

Details:

So how do you do that?

Well, there are three things required here.  Firstly, the DBus session bus address.  This will be something along the lines of:

unix:abstract=/tmp/dbus-abcdefghijk,guid=1234567890abcdef09878654321
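
Incidentally, this address can be lifted from your x-session-manager's environment in exactly the same way as the SSH agent details are obtained below:

$ export pid=$(ps -C x-session-manager -o pid --no-heading)
$ cat /proc/${pid//[^0-9]/}/environ | sed 's/\x00/\n/g' | grep DBUS_SESSION_BUS_ADDRESS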

The DBUS_SESSION_BUS_ADDRESS environment variable is essentially what tells applications that a DBus session bus is available and how to reach it.  However, this alone will not solve the problem: it allows the authentication request to take place, but not to actually succeed.  svn will see the environment variable and send its authentication request over DBus to the authentication agent (gnome-keyring), but there is nothing to tell DBus what the authentication agent is or where it lives.  Next step...

Get the authentication agent.

This will be the gnome-keyring daemon pretending to be an ssh-agent: it assumes the responsibility of the SSH agent when users use the gnome-keyring-manager, so authentication for SSH keys is done through the gnome-keyring-daemon's SSH authentication socket.  So how do you attach to this?

The auth socket lives in the tmp directory, but it's no use hunting for it, since there could be lots of dead instances or instances owned by other users.  The easiest way is to hijack your own x-session-manager's environment and politely steal the socket path and PID from it.  Let's see how we do that...

$ export pid=$(ps -C x-session-manager -o pid --no-heading)
$ cat /proc/${pid//[^0-9]/}/environ | sed 's/\x00/\n/g' | grep SSH
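
The output will be something along these lines (the values here are illustrative):

SSH_AUTH_SOCK=/tmp/keyring-AbCdEf/ssh
SSH_AGENT_PID=1734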

This will give you the path to the socket and the PID of the SSH agent in use by the x-session-manager; the one you want to pretend launched your shell.  The best way to do this, however, is from one of the getty terminals that aren't running within your X session, or by ssh'ing locally onto your machine, so that you are detached from your X session.  This way, you can be sure it is all working.

So is that it?  Not quite, keep reading...

So, you have the DBus address, the agent socket and PID; what more could you possibly need?  Well, anything X related must be authenticated against the X server, otherwise all authentication through DBus, and essentially the gnome-keyring-daemon, will fail due to X authentication issues.  So finally, we must hijack our own X session by associating ourselves with our own X authentication cookie.  This is in the form of some UUID.  The simplest way to obtain it is exactly the same as for the SSH agent information.  You politely ask the kernel for it:


$ export pid=$(ps -C x-session-manager -o pid --no-heading)
$ cat /proc/${pid//[^0-9]/}/environ | sed 's/\x00/\n/g' | grep XDG_SESSION_COOKIE

So with this arsenal of environment variables, you can effectively mimic a process created by the x-session-manager, and start having friendly conversations with the x-session-manager and gnome-keyring-daemon.  However, it's all a bit dirty at the moment, so we can clean it up quite easily.  Create a file to include in your .bashrc file.  This will ensure that any processes created by "you" will attempt to associate themselves with an x-session.  I always opt for something like .bash_functions:

#!/bin/bash

################################################################################
#
# Attaches the current BASH session to a GNOME keyring daemon
#
# Returns 0 on success 1 on failure.
#
function gnome-keyring-attach() {
    local -a vars=( \
        DBUS_SESSION_BUS_ADDRESS \
        SSH_AUTH_SOCK \
        SSH_AGENT_PID \
        XDG_SESSION_COOKIE \
    )
    # Find the PID of the running x-session-manager...
    local pid=$(ps -C x-session-manager -o pid --no-heading)
    # ...then pull the variables listed above out of its environment
    # (null-separated in /proc/<pid>/environ) and export them into this shell.
    eval "unset ${vars[@]}; $(printf "export %s;" $(sed 's/\x00/\n/g' /proc/${pid//[^0-9]/}/environ | grep $(printf -- "-e ^%s= " "${vars[@]}")) )"
}



The reason it is a function is that a script would run as a child process, so setting anything up in the environment there would have no effect on the calling environment.  Calling a bash function, on the other hand, allows it to modify the calling environment directly.  You could of course write a version that prints the shell environment settings to the screen, to be imported into the current environment, but I find this tidier.  Alternative method:

eval "$(gnome-keyring-attach)"

All you need to do now is invoke this function when required: either in your cron task, or in every session if you wish to grant yourself access to your X session remotely, for example.
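
As a rough sketch (the schedule, paths and working copy location are all hypothetical), a crontab entry could source the functions file and attach before running svn:

# Entry for the logged-in user's crontab; bash is required because
# gnome-keyring-attach relies on bash-specific features such as arrays.
SHELL=/bin/bash
0 2 * * *   . "$HOME/.bash_functions" && gnome-keyring-attach && svn update "$HOME/working-copy"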

Not impossible: setuid shell scripting

I often come across the age-old question of "why can't I setuid a Bash script?".  Well, the simple one-word answer is "security".  Plain and simple, a script that is potentially modifiable or susceptible to script injection, either through parameters or through the environment, is a major security flaw.  However, there are ways of making a script executable as root in a controlled manner that ensures a clean environment.

Since the introduction of 'sudo', it is possible to execute any script as root by simply replacing the shebang with the following:

#!/usr/bin/sudo /bin/bash

However, this is extremely insecure, since it means handing root privileges for /bin/bash, via the sudoers file, to anybody with the right to run bash through sudo.  Thus this would be inevitable:

[blee@dragon:~]$ sudo /bin/bash --login
[root@dragon:~]# id
uid=0(root) gid=0(root) groups=0(root)

So how do you permit bash to be executed as root from the shebang, whilst maintaining control over what can actually be executed?  The answer is to write more intricate sudo rules to enable us to execute these setuid scripts.  First, a User_Alias is required to provide the list of users permitted to execute certain scripts:

User_Alias ROOT_SUID_USERS = blee, cnorris

Next, we need to declare which scripts can be run as root:

Cmnd_Alias ROOT_SUID_SCRIPTS = /usr/bin/myscript

Next we want to ensure that the environment is reset when invoking these commands:

Defaults!ROOT_SUID_SCRIPTS             env_reset

Next we put the two together:

ROOT_SUID_USERS        ALL = (root) NOPASSWD: ROOT_SUID_SCRIPTS

Now, /usr/bin/myscript is permissibly executable as root by the users blee and cnorris.  However, since sudo is invoked from the calling script's shebang, we need to somehow invoke bash in a safe way, otherwise we would just end up in a loop, with sudo being invoked by itself from /usr/bin/myscript.  So what we do is prefix each of the scripts with the /bin/bash invocation, which is safe, since we are saying that /bin/bash can be invoked through sudo providing it is immediately followed by the /usr/bin/myscript argument:

Cmnd_Alias ROOT_SUID_SCRIPTS = \
    /bin/bash /usr/bin/myscript, \
    /bin/bash /usr/bin/myotherscript

In /usr/bin/myscript, we replace the shebang as follows:

#!/usr/bin/sudo /bin/bash
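
For reference, a minimal /usr/bin/myscript consistent with the test output below could be as simple as the following sketch (the echo and id are purely for demonstration):

#!/usr/bin/sudo /bin/bash
# Runs as root via the sudoers rule above, so keep the contents simple and safe.
echo "${0##*/}: Demonstrating setuid shell scripting:"
id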

Now, sudo will invoke /bin/bash as root given the rule, providing cnorris or blee are the users executing the script.  Here are the test results:

Before we add anything to the sudoers file, but with our shebang in place:

[blee@dragon:~]$ myscript


We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:


    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.


[sudo] password for blee:
Sorry, user blee is not allowed to execute '/bin/bash /usr/bin/myscript' as root on dragon.

So let's add the sudoers configuration and try again:

[blee@dragon:~]$ myscript
myscript: Demonstrating setuid shell scripting:
uid=0(root) gid=0(root) groups=0(root)


[tjones@dragon:~]$ myscript


We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for tjones:
Sorry, user tjones is not allowed to execute '/bin/bash /usr/bin/myscript' as root on dragon.


Just to prove that /bin/bash cannot be exploited through this sudo rule:

[blee@dragon:~]$ sudo /bin/bash --login


We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for blee:
Sorry, user blee is not allowed to execute '/bin/bash --login' as root on dragon.

[blee@dragon:~]$ sudo /bin/bash

We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for blee:
Sorry, user blee is not allowed to execute '/bin/bash' as root on dragon.

But executing myscript by bypassing the shebang is fine:

[blee@dragon:~]$ sudo /bin/bash /usr/bin/myscript
myscript: Demonstrating setuid shell scripting:
uid=0(root) gid=0(root) groups=0(root)

To summarise, there we have blee running myscript and assuming root privileges for the life of the script.  Obviously, it does rely on the author of the scripts being run as root to write them securely, so there is the potential for exploitation if the scripts are written sloppily.  Also, the scripts must be owned by root and writeable only by root!  Any change to the file permissions that grants anybody else write access creates a window of opportunity for them to modify the contents in a way that grants them a root shell, providing they have permission to execute the script and be granted root privileges by sudo.
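
For example:

$ sudo chown root:root /usr/bin/myscript
$ sudo chmod 0755 /usr/bin/myscript    # writeable by root only; everyone else may just read and execute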

As sudo suggests:

It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.




Friday, 12 November 2010

More Perl eval DESTROY woes!

Something that now seems obvious is the scoping issue associated with the $@ variable, eval and an object's destructor.  Consider the scenario where, as in a C++ program, you want to use well-defined exceptions to determine the flow of the program under erroneous circumstances, rather than arbitrarily passing parameters around or relying on return value checking:

my $success = 0;
if (open(my $fh, ">", "/dev/null")) {
    if (myFunction("some parameter")) {
        my $obj = My::Something->new();


        if ($obj->method1()) {
            if ($obj->method2()) {
                $success = 1;
            }
        }
    }
}
if (! $success) {
    warn("Oh dear, something went wrong");
    return 0;
}
return 1;


Levels of nesting can start to look ugly, and with lots of return value checking going on, it can become hard to follow or maintain.  So instead, I often look to simplify things like this:


eval {
    open(my $fh, ">", "/dev/null") or do {
        die(My::Exception->new($!));
    };
    
    myFunction("some parameter");


    my $obj = My::Something->new();
    $obj->method1();


    $obj->method2();
};
if ($@) {
    my $ref = ref($@);
    if ("My::Exception" eq $ref) {
        $@->warn();
    } else {
        warn("Oh dear, something went wrong: $@");
    }
    return 0;
}
return 1;

Ensuring that all packages and functions you create throw some exception object makes error reporting easy to localise and self-contained.  It's also easy to disable if you don't have warnings plastered throughout your code.

Nice as that is, in Perl there is one caveat that caught me out.  Consider the scenario above, where the My::Something instance's destructor calls some method or function that contains an eval block.  With that in mind, also consider what would happen if method2 were to throw an exception.  Here is what happens:

# My::Something constructor is executed.
my $obj = My::Something->new();

# When method2 throws an exception, the eval 
# block is exited and $@ is set to the appropriate 
# exception object by 'die'.
$obj->method2();

# After setting $@ but before executing the next 
# statement after the eval block, Perl executes 
# the destructor on $obj. Within the destructor, 
# some method calls 'eval', which on instantiation,
# resets the $@ variable.
eval { die("Ignore this error"); };

# Now when the destructor has finished, Perl executes 
# the next statement where it evaluates whether the 'eval' 
# block was successful or not.
if ($@) { ...

# Because of the 'eval' instance resetting $@, the 
# code skips the error reporting and returns a 
# successful return value.
return 1;

This is a complete disaster and will easily go unnoticed until something much further down the line reveals that an operation that should have happened hasn't, or vice versa.  However, there is an extremely simple way to secure the destructor of an object against such an event, by simply declaring $@ in local scope within the destructor:

sub DESTROY {
    my $this = shift;
    local $@;

    eval {
        die("Now this error will truly be ignored");
    };
}

For such a simple solution, it's worth making a habit of always declaring a local copy of $@ within a destructor, unless you want to explicitly propagate a destructor exception up to some other handler.  But since there is a danger of overwriting some other, more important exception that quite possibly caused the exception in the destructor in the first place, it's probably worth implementing a package variable for destructor exceptions:

package My::Something;

our $destruct_except;

sub DESTROY {
    my $this = shift;
    local $@;
    
    $My::Something::destruct_except = undef;
    eval {
        die("Oh dear, that's not supposed to happen!");
    };
    if ($@) {
        $My::Something::destruct_except = $@;
    }
}

Obviously, if there are multiple instances of the same object type in a single eval block, it would be very difficult to track which destructors threw and which didn't.  Then you would have to become more cunning, using some sort of hash or list to stack up the exceptions that occurred in each destructor.  For the most part though, you are usually not interested in what fails within a destructor, since its primary purpose is to clean up.  If what it wants to clean up doesn't exist, then as far as you are concerned its job is done, and you don't need to know about what couldn't be cleaned, because the lack of existence implies it is already clean.

Monday, 9 August 2010

FOLLOW UP: Perl: eval {...}, DESTROY and fork()

Just following up on a previous entry.  I have read something interesting on the destructors of Perl modules in a threaded environment.  This doesn't work for forked processes, since the kernel is responsible for duplicating forked processes, but it does provide a mechanism for making threads with cloned objects thread-safe.

CLONE_SKIP

Friday, 30 July 2010

XSLT: Poor browser compilation reporting.

You have to love the lack of context with web browser XSLT processing:

Firefox - "Error during XSLT transformation: Evaluating an invalid expression."
All down to a double equals in an expression; XPath uses a single '=' for equality, not '=='.  It's a mistake I regularly make, but under normal circumstances it is easy to spot via xsltproc:

XPath error : Invalid expression
$leftspan == 2
           ^


compilation error: file xxx-xxxxx.xsl line 193 element if
xsl:if : could not compile test expression '$leftspan == 2'

Thursday, 1 July 2010

Perl: eval {...}, DESTROY and fork()

Okay, the point of this exercise is just to make a note of Perl garbage collection behaviour that can have an elusive twist if you are not careful. In my case, I thought Perl was erroneously calling the destructor on my object multiple times, when in actual fact it was behaving correctly.

For those unaware of how a Perl destructor is implemented, here is a quick example:


package Hello;

use strict;

sub new {
    my $class = shift;
    my $this = {};

    $this->{pid} = $$;

    # Do something

    return bless($this, $class);
}

sub DESTROY {
    my $this = shift;

    if ($$ != $this->{pid}) {
        return;
    }
}

1;


To save endless repetitive modification of the same code, the caveat (obvious to those familiar with thread-safe coding styles) is worth pointing out. As you can see, the constructor records the PID of the process the instance is created in. This is later used in the destructor to determine whether the calling process has the right to truly destroy the object.

In the scenario I experienced, I was creating a file in a method used to create a transaction. If the transaction is never committed and the object instance goes out of scope, the file should be removed as part of the destructor; this just ensures files aren't left lying around should the object instance be discarded.

The problem was a call to fork() elsewhere in the same scope as the object instance, but outside the Hello module. During the fork operation, the child implicitly acquires a copy of everything in memory and access to any open file handles. That means that if the child process terminates before the object instance goes out of scope in the parent process, the destructor gets called in the child. Since the child has a copy of the object instance, held in exactly the same state as the parent's at the time of the fork, the destructor runs against that copy too. In my case, the destructor removed the transaction file because the transaction remained uncommitted.

As already pointed out, the simple solution is to only perform destructive operations outside the process that instantiated the object if that is the real intention. Otherwise, caution should not be thrown to the wind: return early from the destructor whenever the current PID doesn't match the one recorded at construction.

So where does eval {...}; come in to all of this?

To make debugging this issue difficult, there were a few things happening:

  1. Warnings on STDERR within the destructor didn't get output to the console.
  2. The stack trace was lacking information.
  3. An eval {...}; block always seemed to be the last call in the return stack.

To solve the warning issue, I simply reopened STDERR to a temporary file:


use File::Temp qw( mktemp );
open(STDERR, ">", mktemp("/tmp/debug.XXXXXX"));


Then warnings just follow suit. I also used Carp to obtain the return stack for each:


use Carp qw( cluck );
cluck($this, ": DEBUG: I am in the destructor - PID: ", $$);


Eventually this led to the following output in the debug log files:


Hello=HASH(0x87885e0) DEBUG: I am in the destructor - PID: 26151 at /lib/perl/Hello.pm line 20
Hello::DESTROY('Hello=HASH(0x87885e0)') called at /lib/perl/SomeModule.pm line 0
eval {...} called at /lib/perl/SomeModule.pm line 0


As you can see, the eval block is the last in the return stack. This led me on a bit of a wild goose chase, thinking that eval was somehow creating copies of the object instances in the same way fork does. It was unclear to me that something else was forking in the same scope, since I didn't call fork directly. It only became evident when I realised a fork was actually occurring within the same scope and thus calling the destructor. Looking at the 'line 0' entries in the return stack, they are characteristic of a stack trace generated from a forked child process; this is something worth retaining for future reference. Since I always forget, this will be my dumping ground.