Tuesday, 22 November 2011

ECMAScript ES6 - The future of JavaScript...

...or is it just:

<script type="text/python">

import YUI from "yui.py"

</script>

Sunday, 23 October 2011

Modifying Gnome3 Login Background

I recently started using Gnome3 and found that it is seriously lacking in configuration utilities.  While this makes even simple tweaks awkward, you can still achieve most customisations through brute force.  One of the customisations I have seen requested most often on Google is changing the GDM3 login screen background.

The easiest way to understand how the login background can be modified is to think of the login screen as a Gnome session, but with a different interface to that of the gnome-shell.  The active user for the Gnome login screen is the GDM user; on Debian this user is Debian-gdm.  So in order to modify the login screen background, you in fact need to modify the Debian-gdm user's gconf settings, updating the path to the desired wallpaper.

It's a fairly straightforward task, but it is complicated by the fact that the Debian-gdm user does not have a login shell defined in /etc/passwd and cannot be used as a valid user from the GDM login screen.  So in order to access and modify the Debian-gdm user's gconf settings, you must switch to this user with the su command from a root login shell, specifying a shell explicitly:

root# su - Debian-gdm -s /bin/bash
Debian-gdm$ 

Now, gconf settings are modified through the dconf subsystem using DBus.  So to enable access to the DBus interface, you need to launch a DBus daemon for the Debian-gdm user and import its address into the environment:

Debian-gdm$ eval "$(dbus-launch | sed 's/^/export /g')"

Next, check whether the dconf-service, which interfaces with DBus and provides the backend for gconf, is already running:

Debian-gdm$ ps -eao user,command | awk '("'${USER:-$LOGNAME}'" == $1 && $NF ~ /^\/usr\/lib\/d-?conf\/dconf-service/) { print $NF }'

Running the above command will print the command line of the running dconf-service, if there is one.  If the dconf-service isn't running, then start it:

Debian-gdm$ /usr/lib/d*conf/dconf-service &

Now you can access the gconf settings for Debian-gdm through the gsettings utility.

Debian-gdm$ gsettings get org.gnome.desktop.background picture-uri
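
To change the background, use the corresponding set command (a sketch; the wallpaper path here is purely illustrative):

Debian-gdm$ gsettings set org.gnome.desktop.background picture-uri 'file:///usr/share/backgrounds/my-wallpaper.png'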

Don't forget that picture-uri is a URI, so be sure to prefix the path with file:// or http://.  For all of this wrapped up into a script you can run as root, see my Google code project:

http://code.google.com/p/set-gnome3-background

Wednesday, 22 December 2010

resolv.conf and DynDNS

So, in order to save a few pennies on my broadband service, I have downgraded my package, which means I lose my static IP address.  The issue is that I have bridge mode set up on the router so I can manage my own internal network, placing different security policies onto different subnets and changing which services are listening on each network, UPnP for example.  Since game consoles typically require UPnP and most other things don't, the consoles are locked down onto their own private subnet, connected to the outside world via a different interface on the server.

So back to the bridge mode issue.  Why is it an issue?  Well, when switching away from a static address, your interfaces configuration must change from static to dhcp as well:

auto eth0
iface eth0 inet dhcp

What this means is that dhclient will handle setting up routes and assigning the IP address to the interface when it receives a DHCP response from the ISP's DHCP server.  This usually entails losing everything you have set up in resolv.conf when dhclient decides to overwrite it.  To prevent it being overwritten, you need to use the hooks provided by dhclient-script; see the dhclient-script man page.

Essentially, what is required is an enter hook that declares a function called 'make_resolv_conf'.  This function replaces the one defined in dhclient-script at the point the enter hook gets included, so if the body of the function does nothing, resolv.conf doesn't get modified.  For me, this is good, since DNS is managed by dnsmasq and I forward DNS requests to OpenDNS to provide simple protection against things like typos:

www.bcarlays.com -> Hmm, a nice place to set up a spoof / phishing site, I would imagine.  OpenDNS resolves addresses like these to one of your choosing.  For me, I have it resolve back to the address of my internal gateway, where I host a 404 page.
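
As for the enter hook itself, a minimal sketch might look like this (the filename is my assumption; any file in /etc/dhcp3/dhclient-enter-hooks.d is sourced by dhclient-script):

# /etc/dhcp3/dhclient-enter-hooks.d/zz-keep-resolv-conf (hypothetical name)
#
# Override dhclient-script's make_resolv_conf with a no-op, leaving
# resolv.conf under dnsmasq's control.
make_resolv_conf() {
    :
}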

What next?  Well, there is the issue that this dynamic IP address being assigned to my bridged interface is... well... dynamic.  So when the lease runs out, it could change to a new address, making my network inaccessible from the WAN.  To counter this, ddclient needs to be run whenever the lease runs out or a new address is assigned to the interface, as well as periodically in order to keep the DynDNS hostname alive.  I lost a host to DynDNS once before because I didn't force an update every so often, so I want to avoid that painful experience again.
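
The periodic refresh can live in root's crontab, along these lines (the schedule is arbitrary; -force is the same flag used in the exit hook below):

# Force a DynDNS update every Sunday at 04:00
0 4 * * 0 /usr/sbin/ddclient -force >/dev/null 2>&1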

So how on earth do you go about executing ddclient whenever the lease is renewed or the interface is bound to the DHCP server?  Well, let's use the dhclient-script hooks again.  I created an exit hook script this time, to listen for dhclient-script being called with the reason BOUND, RENEW or REBIND.  These three reasons are triggered whenever the interface address is likely to change, and often when it hasn't changed.  But importantly, they ensure ddclient is called when the lease expires.  Here is the script:


# dhclient-script exit hook to ensure that the DynDNS address is updated
# through ddclient, whenever the address changes.

function ddclient_exithook() {
    local prog="${0##*/}::ddclient_exithook()"
    logger -t "$prog" "Reason: $reason"

    case $reason in
    (BOUND|RENEW|REBIND)
        # Run ddclient to rebind the address to the DynDNS hostname
        cat <<DDCLIENT
Executing ddclient to renew DynDNS hostname...

$(/usr/sbin/ddclient -force -verbose 2>&1)

Executing ddclient returned exitcode: $?
DDCLIENT
        ;;
    (*)
        # No need to renew the DynDNS address
        logger -t "$prog" "Nothing to be done for $reason"
        ;;
    esac
}

ddclient_exithook

Test that the script works by taking down the interface and bringing it back up.  This will force the interface to rebind to the DHCP server when it comes back up, causing dhclient-script to be invoked with the BOUND reason.
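
On Debian, for example (assuming eth0 as above):

root# ifdown eth0 && ifup eth0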

See also:

/etc/dhcp3/dhclient-enter-hooks.d
/etc/dhcp3/dhclient-exit-hooks.d
/etc/ddclient.conf

Man pages:

ddclient
dhclient-script
dhclient

Wednesday, 15 December 2010

Subversion and Gnome Keyring

The problem:

You want to run an svn command in a cron task, as a user that is already logged in and authenticated against a running gnome-keyring-daemon and the svn repository in question; but DBus prevents a user acquiring the privilege to access his own daemon without an associated x-session-manager.

The solution:

Attach to the Gnome session artificially, in order to be granted access to the gnome-keyring-daemon through DBus.

Details:

So how do you do that?

Well, there are three things required here.  Firstly, the DBus session bus address.  This will be something along the lines of:

unix:abstract=/tmp/dbus-abcdefghijk,guid=1234567890abcdef09878654321

It essentially enables applications that use DBus to actually use it; it's the flag that notifies applications that DBus is available.  However, this alone will not solve the problem, since it will allow the authentication request to take place but not actually be authenticated.  So svn will see the environment variable and request authentication through DBus to the authentication agent (gnome-keyring), but there is nothing to tell DBus what the authentication agent is and where it is.  Next step...

Get the authentication agent.

This will be the gnome-keyring daemon pretending to be an ssh-agent: it assumes the responsibility of the SSH agent when users use the gnome-keyring-manager, and authentication for SSH keys is done through the gnome-keyring-daemon SSH authentication socket.  So how do you attach to this?

The auth socket lives in the tmp directory, but it's no use hunting for it, since there could be lots of dead instances or instances owned by other users.  The easiest way is to hijack your own x-session-manager's environment and politely steal the socket and PID from it.  Let's see how we do that...

$ export pid=$(ps -C x-session-manager -o pid --no-heading)
$ cat /proc/${pid//[^0-9]/}/environ | sed 's/\x00/\n/g' | grep SSH

This will give you the path to the socket and the PID of the SSH agent in use by the x-session-manager; the one you want to pretend launched your shell.  The best way to do this, however, is from one of the getty terminals that isn't running within your x-session, or by locally ssh'ing onto your machine to detach yourself from your x-session.  This way, you can be sure it is all working.

So is that it?  Not quite, keep reading...

So, you have the DBus address, the agent socket and PID; what more could you possibly need?  Well, anything X related must be authenticated against the X server, otherwise all authentication through DBus, and essentially the gnome-keyring-daemon, will fail due to X authentication issues.  So finally, we must hijack our own X session by associating ourselves with our own X authentication cookie.  This is in the form of some UUID.  The simplest way to obtain it is exactly the same as obtaining the SSH agent information.  You politely ask the kernel for it:


$ export pid=$(ps -C x-session-manager -o pid --no-heading)
$ cat /proc/${pid//[^0-9]/}/environ | sed 's/\x00/\n/g' | grep XDG_SESSION_COOKIE

So with this arsenal of environment variables, you can effectively mimic a process created by the x-session-manager, and start having friendly conversations with the x-session-manager and gnome-keyring-daemon.  However, it's all a bit dirty at the moment, so let's clean it up.  Create a file to include from your .bashrc; this ensures that any process created by "you" can attempt to associate itself with an x-session.  I always opt for something like .bash_functions:

#!/bin/bash

################################################################################
#
# Attaches the current BASH session to a GNOME keyring daemon
#
# Returns 0 on success 1 on failure.
#
function gnome-keyring-attach() {
    # The environment variables to steal from the x-session-manager.
    local -a vars=( \
        DBUS_SESSION_BUS_ADDRESS \
        SSH_AUTH_SOCK \
        SSH_AGENT_PID \
        XDG_SESSION_COOKIE \
    )
    # Locate the x-session-manager, then extract each of the variables
    # above from its environment and export them into this shell.
    local pid=$(ps -C x-session-manager -o pid --no-heading)
    eval "unset ${vars[@]}; $(printf "export %s;" $(sed 's/\x00/\n/g' /proc/${pid//[^0-9]/}/environ | grep $(printf -- "-e ^%s= " "${vars[@]}")) )"
}



The reason it is a function is that a script would run as a child process, so setting anything up in the environment there would have no effect on the calling environment.  Calling a bash function, on the other hand, allows the function to modify the calling environment.  You could of course write a function that prints the shell environment settings to the screen, where they can be imported into the current environment, but I find this tidier.  Alternative method:

eval "$(gnome-keyring-attach)"

All you need to do now is invoke this function when required: either in your cron task, or in every session if you wish to grant yourself access to your X session remotely, for example.
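
For example, a cron entry might look like this (a sketch; the working-copy path is hypothetical, and bash is invoked explicitly because .bash_functions uses bash-specific features):

0 * * * * /bin/bash -c '. "$HOME/.bash_functions" && gnome-keyring-attach && svn update "$HOME/working-copy"'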

Not impossible: setuid shell scripting

I often come across the age-old question of "why can't I setuid a Bash script?".  Well, the simple one-word answer is "security".  Plain and simple, having a script that is potentially modifiable or susceptible to script injection, either through parameters or through the environment, is a major security flaw.  However, there are ways of making a script root-executable in a controlled manner that ensures a clean environment.

Since the introduction of 'sudo', it is possible to execute any script as root by simply replacing the shebang with the following:

#!/usr/bin/sudo /bin/bash

However, this is extremely insecure, since you would be handing root privileges for /bin/bash through sudoers to anybody with the right to run bash through sudo.  Thus, this would be inevitable:

[blee@dragon:~]$ sudo /bin/bash --login
[root@dragon:~]# id
uid=0(root) gid=0(root) groups=0(root)

So how do you permit bash to be executed as root from the shebang, whilst maintaining control over what can actually be executed?  The answer is to write more intricate sudo rules to enable us to execute these setuid scripts.  First, a User_Alias is required to list all the users permitted to execute certain scripts:

User_Alias ROOT_SUID_USERS = blee, cnorris

Next, we declare which scripts can be run as root:

Cmnd_Alias ROOT_SUID_SCRIPTS = /usr/bin/myscript

Next we want to ensure that the environment is reset when invoking these commands:

Defaults!ROOT_SUID_SCRIPTS             env_reset

Next we put the two together:

ROOT_SUID_USERS        ALL = (root) NOPASSWD: ROOT_SUID_SCRIPTS

Now, /usr/bin/myscript is permissibly executable as root by the users blee and cnorris.  However, since sudo is invoked from the calling script's shebang, we need to somehow invoke bash in a safe way, otherwise we would just end up in a loop, with sudo being invoked by itself from /usr/bin/myscript.  So what we do is prefix each of the scripts with the /bin/bash invocation.  This is safe, since we are saying that /bin/bash can be invoked by sudo providing it is immediately followed by the /usr/bin/myscript argument:

Cmnd_Alias ROOT_SUID_SCRIPTS = \
    /bin/bash /usr/bin/myscript, \
    /bin/bash /usr/bin/myotherscript

In /usr/bin/myscript, we replace the shebang as follows:

#!/usr/bin/sudo /bin/bash
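
For reference, a minimal /usr/bin/myscript consistent with the test output below might look like this (a sketch; only the shebang is prescribed by the technique):

#!/usr/bin/sudo /bin/bash
#
# Print the effective credentials to prove the script is running as root.
echo "${0##*/}: Demonstrating setuid shell scripting:"
id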

Now, sudo will invoke /bin/bash as root given the rule, providing cnorris or blee are the users executing the script.  Here are the test results:

Before we add anything to the sudoers file, but with our shebang in place:

[blee@dragon:~]$ myscript


We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:


    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.


[sudo] password for blee:
Sorry, user blee is not allowed to execute '/bin/bash /usr/bin/myscript' as root on dragon.

So let's add the sudoers configuration and try again:

[blee@dragon:~]$ myscript
myscript: Demonstrating setuid shell scripting:
uid=0(root) gid=0(root) groups=0(root)


[tjones@dragon:~]$ myscript


We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for tjones:
Sorry, user tjones is not allowed to execute '/bin/bash /usr/bin/myscript' as root on dragon.


Just to prove that /bin/bash cannot be exploited through this sudo rule:

[blee@dragon:~]$ sudo /bin/bash --login


We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for blee:
Sorry, user blee is not allowed to execute '/bin/bash --login' as root on dragon.

[blee@dragon:~]$ sudo /bin/bash

We trust you have received the usual lecture from the local System
Administrator.  It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.

[sudo] password for blee:
Sorry, user blee is not allowed to execute '/bin/bash' as root on dragon.

But executing myscript via sudo directly, bypassing the shebang, is fine:

[blee@dragon:~]$ sudo /bin/bash /usr/bin/myscript
myscript: Demonstrating setuid shell scripting:
uid=0(root) gid=0(root) groups=0(root)

To summarise, there we have blee running myscript and assuming root privileges for the life of the script.  Obviously, it does rely on the author of the scripts being run as root to write them securely, so there may be an opportunity for exploitation if scripts are written sloppily.  Also, the permissions of these scripts must be such that they are owned and writeable only by root!  Any change to the file permissions that grants anybody else write access creates a window of opportunity for someone to modify the contents such that they are granted a root shell, providing they have permission to execute the script and be granted root privileges by sudo.
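
In other words, each script should be locked down along these lines before being trusted in sudoers:

root# chown root:root /usr/bin/myscript
root# chmod 0755 /usr/bin/myscript    # writeable by root only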

As sudo suggests:

It usually boils down to these three things:

    #1) Respect the privacy of others.
    #2) Think before you type.
    #3) With great power comes great responsibility.




Friday, 12 November 2010

More Perl eval DESTROY woes!

Something now seemingly obvious is the scoping issue associated with the $@ variable, eval and an object's destructor.  Consider a scenario where, as in a C++ program, you want to use well-defined exceptions to determine the flow of the program under erroneous circumstances, rather than arbitrarily passing parameters around or relying on return-value checking:

my $success = 0;
if (open(my $fh, ">", "/dev/null")) {
    if (myFunction("some parameter")) {
        my $obj = My::Something->new();


        if ($obj->method1()) {
            if ($obj->method2()) {
                $success = 1;
            }
        }
    }
}
if (! $success) {
    warn("Oh dear, something went wrong");
    return 0;
}
return 1;


Levels of nesting can start to look ugly, and with lots of return value checking going on, the code can become hard to follow and maintain.  So instead, I often look to simplify things like this:


eval {
    open(my $fh, ">", "/dev/null") or do {
        die(My::Exception->new($!));
    };
    
    myFunction("some parameter");


    my $obj = My::Something->new();
    $obj->method1();


    $obj->method2();
};
if ($@) {
    my $ref = ref($@);
    if ("My::Exception" eq $ref) {
        $@->warn();
    } else {
        warn("Oh dear, something went wrong: $@");
    }
    return 0;
}
return 1;

Ensuring that all packages and functions you create throw some exception object makes error reporting easy to localise and self-contained.  It's also easy to disable if you don't have warnings plastered throughout your code.

Albeit nice, in Perl there is one caveat that caught me out.  Consider the scenario above, where the instance of My::Something has a destructor that calls some method or function containing an eval block.  With that in mind, also consider what would happen if method2 were to throw an exception.  Here is what happens:

# My::Something constructor is executed.
my $obj = My::Something->new();

# When method2 throws an exception, the eval 
# block is exited and $@ is set to the appropriate 
# exception object by 'die'.
$obj->method2();

# After setting $@ but before executing the next 
# statement after the eval block, Perl executes 
# the destructor on $obj. Within the destructor, 
# some method calls 'eval', which on instantiation,
# resets the $@ variable.
eval { die("Ignore this error"); };

# Now when the destructor has finished, Perl executes 
# the next statement where it evaluates whether the 'eval' 
# block was successful or not.
if ($@) { ...

# Because of the 'eval' instance resetting $@, the 
# code skips the error reporting and returns a 
# successful return value.
return 1;

This is a complete disaster, and it will easily go unnoticed until something much further down the line notices that something that should have happened hasn't, or vice-versa.  However, there is an extremely simple way to secure the destructor of an object against such an event: simply declare $@ in local scope within the destructor:

sub DESTROY {
    my $this = shift;
    local $@;

    eval {
        die("Now this error will truly be ignored");
    };
}

For such a simple solution, it's worth making a habit of always localising $@ within a destructor, unless you want to explicitly propagate a destructor exception up to some other handler.  But since there is a danger of overwriting some other more important exception, quite possibly the one that caused the exception in the destructor in the first place, it's probably worth implementing some global variable for destructor exceptions:

package My::Something;

our $destruct_except;

sub DESTROY {
    my $this = shift;
    local $@;
    
    $My::Something::destruct_except = undef;
    eval {
        die("Oh dear, that's not supposed to happen!");
    };
    if ($@) {
        $My::Something::destruct_except = $@;
    }
}

Obviously, if there are multiple instances of the same object type in a single eval block, it would be very difficult to track which destructors threw and which didn't.  Then you would have to become more cunning, using some sort of hash or list to stack up the exceptions that occurred in each destructor.  For the most part though, you are usually not interested in what fails within a destructor, since its primary purpose is to clean up.  If what it wants to clean doesn't exist, then as far as you are concerned its job is done, and you don't need to know about what couldn't be cleaned, because the lack of existence implies it is clean.

Monday, 9 August 2010

FOLLOW UP: Perl: eval {...}, DESTROY and fork()

Just following up on a previous entry.  I have read something interesting about the destructors of Perl modules in a threaded environment.  This doesn't help for forked processes, since the kernel is responsible for duplicating forked processes, but it does provide a mechanism for making threads with cloned objects thread-safe:

CLONE_SKIP