Sunday, 19 January 2014

Installing Linux on ZyXel NSA-320 - Part 3 - Boot and install OS from USB

Warning: This process will erase data on your primary hard disk!

You may be wondering why you needed to get a telnet back door in Part 1.  Well, when I did this previously, I didn't have a serial cable and it was easier (but extremely risky) just to reflash the nand with uboot by writing a file over the flash device in /dev.  However, the NSA320 already has u-boot in flash so nothing needs modifying there.  Plus, with the serial cable, you get access to the system via the serial console.  I am using a modified Nokia serial cable, plugged into the serial connector on the NSA320.

I have also used an SD card reader as a USB host device, since I didn't have a USB drive to hand.  You can see the serial cable plugged into the serial connector on the NSA320 motherboard.

To get started, download the following tarball:


You need to unpack the contents of this tarball onto a FAT-formatted USB drive.  In my case, this is the SD card I have connected via the card reader on the USB port in the picture.  You should find four files in the tarball, used by u-boot to boot into the ArchLinux distro.  But before you can do that, you need to get the root file system tarball and place that on the USB drive as well.  You can download the ArchLinux root fs here:


When you copy the root file system tarball to the USB drive, you need to rename it to just rootfs.tgz.  Once copying has finished, sync disks a couple of times and unmount the USB drive.  Connect it up to the front USB port on the NSA-320 and connect up your serial cable.  Run up minicom as root:
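If you prefer to do the copying from the command line, it amounts to something like this.  The device node, mount point and tarball names here are just examples, so substitute your own:

```shell
# Sketch of preparing the USB stick -- /dev/sdb1, /mnt/usb and the tarball
# names are assumptions; substitute your own device and file names.
USBDEV=/dev/sdb1
MNT=/mnt/usb
if [ -b "$USBDEV" ]; then
    mkdir -p "$MNT"
    mount "$USBDEV" "$MNT"
    tar xf nsa320-usb-boot.tar -C "$MNT"          # the four u-boot files
    cp ArchLinuxARM-*.tar.gz "$MNT/rootfs.tgz"    # must be named rootfs.tgz
    sync; sync
    umount "$MNT"
else
    echo "no $USBDEV present -- adjust USBDEV for your system" >&2
fi
```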

# minicom -D /dev/ttyUSB0

Now when you power on the NSA-320, you can monitor the install process.  Essentially, the stock OS on the NSA320 looks for files and scripts on the USB drive, if one is attached.  If a certain script is present, it will execute it.  The script on the USB drive is the ArchLinux installer, which repartitions the primary hard drive, formats it with an EXT file system and unpacks rootfs.tgz onto it.  It takes a while, depending on the size of the hard drive, but once complete it will reboot and boot the kernel from the primary hard disk's boot partition.  The boot partition doesn't get mounted automatically via fstab, so just remember to mount it before you install a new kernel.  Otherwise, your new kernel will be installed to the boot directory on the root file system and will never be booted by u-boot.
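As a reminder, mounting the boot partition first looks something like this; /dev/sda1 as the boot partition is an assumption on my part, so check with fdisk -l:

```shell
# The boot partition isn't in fstab, so mount it before installing a kernel.
# /dev/sda1 as the boot partition is an assumption -- verify with fdisk -l.
BOOTDEV=/dev/sda1
if [ -b "$BOOTDEV" ] && ! mountpoint -q /boot; then
    mount "$BOOTDEV" /boot || echo "mount failed -- are you root?" >&2
fi
```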

In order to get the DHCP client to obtain an address dynamically, some extra work is needed, as a hardware flag is set that prevents this.  There is a script I created post install that fixes this; it requires a subsequent reboot.  The script is installed as a systemd service, to ensure the flag is set correctly on each boot, in case the device is powered off for a prolonged period and the capacitor loses its state.  I have made the following guide on creating the script and installing it as a systemd service.

The script can be downloaded from: power_resume

Follow-up for power_resume

I initially had some trouble configuring netctl to reliably bring up the eth0 interface and noticed it was already up before I had enabled it.  After creating the video above, I did some further reading.  It seems that there is a "roaming" network daemon, netctl-ifplugd, that is started to handle network connections for certain interfaces.  This handles cables being connected/disconnected and wireless connections coming in and going out of range.  With this enabled alongside the netctl@eth0 profile, conflicts occur when shutting down and starting up the DHCP client daemon.  The solution is not to enable the netctl@eth0 profile and to stick with the netctl-ifplugd profile that is configured by default.  In order to make the power_resume script work, you just need to modify the dependency line in the systemd configuration script:
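The line in question ends up looking something like this.  I'm reconstructing the unit and service names from memory here, so check them against your own unit file:

```ini
[Unit]
Description=Set the power resume flag on boot
# Depend on the ifplugd-managed profile rather than netctl@eth0:
Requires=netctl-ifplugd@eth0.service
After=netctl-ifplugd@eth0.service
```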


Changing that and then enabling and disabling the power_resume service (to reinstall it) should fix it.

Follow-up number two...

Well, the second thing I learnt today is that you don't place your systemd unit files directly in the /etc/systemd directory.  This is where the links for enabled services live, so systemd is likely to get confused if you drop something in there without it being enabled.  The proper location for unit files is under /usr/lib/systemd.  From there, any service can be enabled or disabled using the systemctl command.

Monday, 23 December 2013

Installing Linux on ZyXel NSA-320 - Part 2 - Making a USB serial cable

You can fly blind with this project, but it's so much easier and faster to fly with your eyes open, so making a serial cable to connect to the serial interface on the ZyXel is a no-brainer; especially since you can make a cable for less than £5.  If you are willing to partake in some risky business, then skip the cable guide and proceed to part 3, where you can try installing Arch-Linux on your NSA-320.

The Cable

I used the NAS Central guide to using a Nokia serial cable for a Buffalo LinkStation to construct a USB serial cable from a Nokia CA-42 cable.  The guide describes how to attach/solder the cable to the motherboard of the Buffalo LinkStation, because the LinkStation doesn't have a serial connector attached to the board.  However, the NSA-320 does have a serial connector, in the form of a set of four jumper pins.  The reason for using this cable in particular is that it has a micro-controller built into the USB connector that acts as a transceiver, converting the higher USB voltage levels down to the 3V signals present on serial interfaces.  This is important, since it will prevent you from sending 5 volts from your USB port into the serial interface of the SoC on your NSA-320, potentially frying it.  This is the cable I used, which I purchased from Amazon:


The Connector

So following the guide, you cut the Nokia attachment off the end of the cable, since that is the bit you won't be needing.  In place of the connector, you need to attach four female jumper pins or jumper wires.  I used wires and just soldered them together, wrapping insulating tape around each joint.  Be sure to use coloured wires that match the original Nokia wire colours.  This will help you make sense of the wires later on.  I used something like this:


Obviously, you get a lot of wires and there are connectors at both ends.  You just need to pick the four most appropriate wires, by colour.  Then cut them in half and remove some of the insulation to expose the copper wire that you will solder to the wires on your Nokia cable.

Finding TX, RX and GND

This is supposed to be a fairly trivial task.  The guide above tells you to use a voltage meter to work out which wires are which, but this didn't help me at all, since I got a signal on only one wire and the ground.  Instead, I went for pot luck and used minicom to work out which my GND, TX and RX wires were.  You will notice a spare wire, which is not required for a serial connection, so you just leave it disconnected from the NSA-320.  You can put a jumper on it or tape it off with insulation tape once you have worked out which it is.

So if, like me, you aren't having much luck with the voltage meter, or you don't have one at all, fear not: you can quite safely short out some pins on the connector to work out what's what.  Worst case scenario, you will short out the transceiver and need to buy another cable.  The good news is that the transceiver will prevent you from shorting out your USB port, so it will protect your PC from any such mishaps.  Saying that, I've never managed to damage the transceiver, given that the serial connection end is only operating at 3 volts.  Before you get trigger happy shorting out wires though, think about the cable colours.  Even though cable colours differ from cable to cable (mine most certainly didn't match the guide on NAS Central), they will generally follow the basic wiring conventions.  Usually reds/browns indicate positive charge, green is most likely the receiver (RX), while black/blue is generally GND.  If you can establish that you have a black or blue wire and a green wire, plus two others you are not quite so sure of, you can find the TX wire by shorting each of the remaining wires with the RX wire one at a time, using minicom to check whether you have found the right one.

So hook up your USB cable to your PC/laptop and run minicom from a terminal, attaching it to the USB device you just connected.  If you're not sure which device that is, run dmesg and look for messages about a TTY device; mine appeared as /dev/ttyUSB0.  Run minicom as root.  It's worth noting that this is just my really hacky, lazy, CBA method for finding the TX and RX connectors and it does require a lot of guess work and a little common sense.  However, you can be far more professional about it and use a voltage meter, as I mentioned before.  The guide on NAS Central covers this already and will assist you, should you wish to do things properly.  If, however, you want to take a more cowboy-like approach, follow my lead :-)
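A quick way to find the device node, assuming the usual ttyUSB naming:

```shell
# Identify the serial device node the cable was assigned.
dmesg 2>/dev/null | grep -i 'ttyusb' | tail -n 3 || true
TTY=$(ls /dev/ttyUSB* 2>/dev/null | head -n 1)
TTY=${TTY:-/dev/ttyUSB0}    # fall back to the usual default
echo "using $TTY"
```

Then attach minicom to it as root: minicom -D "$TTY".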

The general idea is that by shorting out the TX/RX pins, you will create a loop-back serial cable.  This means that what you type will be echoed back to the terminal, should you find the right TX wire and short it with your RX wire.  At this point you don't need GND.

You may find that you also get other characters printed out, mostly non-printable ones.  But as long as you get back the characters that you type as well, you can be sure you have found the right wires.  The extra characters are the result of interference, owing to the lack of resistance between the RX and TX wires.  You can add a resistor if you like, but it really isn't that important, especially not for a £5 cable.

This is how mine looks, with the pin configuration and the coloured wires matching the original Nokia wiring.  The unlabelled red wire is not required and can be considered redundant.  You need only connect the GND, RX and TX connectors to the NSA-320.

Here is a video demonstration of the lazy approach to finding TX and RX on your connector, before you potentially fry the serial interface on your NSA-320.

You have every opportunity to do it properly, so you can still turn back.  If you're feeling lucky...

Connecting it all together

Once you know which wires are which, make a note of the colours; you can then safely power off your NSA-320 and connect the serial cable to the serial interface.  Disconnect the cable from your PC first, then once everything is connected, connect it back up to your PC.  Here is the pin configuration on the NSA-320.  Note that the VCC connection is not required, since USB serial cables are powered by the USB port of the PC.  Also notice that there is a blank between the TX/RX pins and the GND.  This helps identify which is which.  Just remember that the GND is on its own.

Once you've hooked it up, run minicom as described above and power on your NSA-320; you should immediately see the boot sequence being output in the minicom terminal.  It should also now accept input from your terminal and respond to key-presses.

I will be following up soon with how to get ArchLinux installed and set up, and how to overcome some of the common issues with ArchLinux straight out of the box.

Sunday, 1 December 2013

Installing Linux on ZyXel NSA-320 - Part 1 - Telnet back door

My MK802 finally gave up on me.  It lasted a while, with the modifications I made to provide it with adequate cooling in order to operate around the clock.  But a few weeks ago it went down and never came back.  There seems to be some issue with power, where it will only last for about 10 or so seconds before it fills the syslog with spurious errors and then dies.  Booting the OS on another device shows there is nothing wrong with the OS or SD card image, so it must be hardware.  Anyway, on to matters at hand.  So now I have just picked up a brand new ZyXel NSA-320 for £60.  It is actually a very nice piece of kit on its own, with lots of features.  But I want NIS and NFS, with support for EXT journalling file systems, so will be going back down the path of my old Buffalo NAS and flashing it.

Here is the ZyXel NSA-320 in all its glory.  To give you a feel for its size, that's a 3.5" HDD with an external 2.5" HDD.  The ZyXel supports two internal 3.5" SATA HDDs, has 512 MB RAM, 128 MB of flash and a 1.2 GHz ARM926EJ-S CPU.  So only 300 MB less RAM than the MK802, but it makes up for it in lots of other ways.  For £60 this thing is a beast!

First off, in order to flash it, we need to get a root telnet session on the box.  This is actually really simple, taking advantage of the development telnet back door.  Typically, every device of this sort will have a back door of some description, since there needs to be a way of debugging devices in test harnesses when they go wrong.  Test harnesses typically have to run the releasable software/hardware, otherwise it's not really a valid test.  If something goes wrong that's not reproducible and you have no way of logging on to investigate the failure, you have a potential PR disaster on your hands.  So, pave the way for the inevitable back doors!

The back door on this can be enabled by logging onto the device web interface in administrator mode.  Make a note of the path element I have highlighted with a red circle; you will need this to enable the telnet back door.

Having just logged in and using that part of the path, access the following URL (substituting your NAS address and path element accordingly): http://<your-nas>/<path-element>,/adv,/cgi-bin/remote_help-cgi?type=backdoor

After this, you will get a blank screen and the back door will be accessible for a limited time:

Now for the fun part.  The login is not simply the login you used for the web interface; it is a hash of the device's MAC address, generated using a special ARM binary found on the NSA-320 itself.  So it's a Catch-22: you need access to the NSA-320 in order to get access to the NSA-320.  Fortunately, I have a workaround, since you can download the utility and run it with qemu-arm.  You just need your device's MAC address, which is on the system status page of the administration web interface.  Ensure you use a capitalised MAC address, since anything else will result in a different hash.

Download the makekey utility here: makekey utility

Install qemu and libc6-dev-armel-cross, then ensure you have qemu-arm available at your disposal.  To get the "root" password, run makekey like so:
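The invocation looks something like this.  I'm writing the argument format from memory, so check makekey's usage output if it differs:

```shell
# Run the ARM makekey binary under qemu user-mode emulation.
# The MAC must be capitalised; a lower-case MAC produces a different hash.
MAC="00:24:21:AB:CD:EF"    # substitute your device's MAC address
if command -v qemu-arm >/dev/null 2>&1 && [ -x ./makekey ]; then
    qemu-arm ./makekey "$MAC"
else
    echo "qemu-arm and/or ./makekey not available" >&2
fi
```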

Armed with your privileged user's password, you can log into the telnet back door.  Repeat the process above and get the telnet session open, then login with the user: NsaRescueAngel

And there you have it, a privileged BusyBox shell on your NSA320.  I will post a follow up demonstrating how to use the boot loader to boot your preferred ARM Linux distribution.

Thursday, 26 September 2013

TP-WN822N AP/Master Mode - Summary

If you have been following my previous threads on this topic, you will know the pains I have been through with this, both involving the ARM architecture and the Atheros ath9k driver for the TP-WN822N adapter.  The goals were simple enough:
  • Incorporate the Wi-Fi access point into the NAS drive.
  • Eliminate the requirement for an attached ADSL / Wi-Fi switch.
  • Eliminate the requirement for an attached secondary USB Ethernet port.
  • Upgrade the kernel from 2.6 to 3.X.
  • Set up PPPoE on the primary Ethernet port to also eliminate the need for an ADSL modem.
The end goal was really about reducing the number of things plugged into the mains.  Previously, there were roughly three devices drawing power through transformers: the NAS itself, the ADSL / Wi-Fi switch and an EOP (Ethernet over Power) home plug.  The Wi-Fi was G rated, so it was no good for use as the primary form of network connectivity.  However, upgrading to N rated adapters throughout the house and being able to relocate the NAS drive means there is no real requirement for physical network connectivity; wireless seems to be the way to go.  So ditching everything and upgrading to BT Infinity leaves me with just the NAS drive and the Infinity modem plugged in.  The Infinity modem is a necessity unfortunately, but BT kindly mounted it on the wall next to the phone socket.  What I don't have is the BT home hub or any other form of wireless switch / Infinity / ADSL modem.  Instead, PPPoE on eth0 and a USB wireless adapter running in AP mode is all that is required.

How to summary:

Let me summarise the three previous posts and fill in the gaps on how I eventually got this all to work.  So firstly, let me mention that I couldn't get this working using the 3.2 kernel, though I am yet to identify the reason, given my TTL cable has only just arrived.  So a 3.4 kernel is a minimum requirement.  Building is fairly simple; you can do it on box (NAS) or cross compile it.  The choice is yours, but it could be the difference between 7 hours and 5 minutes.  Cross compiling on my i7, running parallel make processes up to a load average of 15, utilises all the memory and finishes the entire kernel build, including modules, in around four and a half minutes.  Great if you need to keep changing it in experimentation mode!  I have a script that can be downloaded for making cross compilation a simple one-liner.  It will ask for permission to install the necessary packages for cross compilation against the ARM architecture and will auto-create a base kernel configuration.  It has a '--help' usage option, so if you choose to use it, check out the options: --menuconfig is a good one to note, since you will need that for selecting the appropriate kernel modules.

Building the kernel:

Building is fairly straightforward.  Depending on the platform, be it armel or armhf, you can use the gnueabi cross compiler.  To get started, install the following packages:

  • g++-arm-linux-gnueabi
  • dpkg-cross
  • devio
  • fakeroot
  • uboot-mkimage
  • u-boot-tools
If you want to use the graphical menu configuration tool, then also install:
  • libncurses5
  • libncurses5-dev

Next you need to be aware that all make invocations need to run with the architecture and cross compiler options set.  The simplest way to make sure you get this right is to run the following:

alias make='make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi-'

This way, you can't forget!  I actually went to the extent of creating a new user with this set in the .bashrc file.

So to build the kernel, clean it out first:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- mrproper

Now copy in your existing configuration.  This will be the configuration you have on the target machine, not the one you are building on.  The easiest way to obtain it is to use the following command:

ssh you@yourmachine "zcat /proc/config.gz" > ~/linux-3.4.X/.config

The output file is within the root of the kernel source you will build.

Now you have that in place, you will need to upgrade the configuration.  Run:

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- oldconfig

That will prompt you for some details about what modules to build.  For this, you may choose to accept the defaults.  You should then change the following settings in the file:


Set the LOCALVERSION to the date in the form of YYYY-MM-DD-HHMM.  This makes it easier to identify when you built it, as opposed to some sequential value.  Alternatively, set it to some unique name that you would prefer to use.
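A quick one-liner to generate a LOCALVERSION in that form:

```shell
# Generate a date-based LOCALVERSION, e.g. -2014-01-19-1432
LOCALVER="-$(date +%Y-%m-%d-%H%M)"
echo "CONFIG_LOCALVERSION=\"$LOCALVER\""
```

You can paste the echoed line straight into your .config, replacing the existing CONFIG_LOCALVERSION entry.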

Once you have been through the setup, you then just need to set up the Atheros modules.

make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- menuconfig

This will start the graphical menu configuration.  To build the Atheros drivers for the TP-LINK device, you need to locate them under:

 Device Drivers ---> Network Device Support ---> Wireless LAN ---> Atheros Wireless Cards 

Before you can select what Atheros drivers to build, you must first select this category using the space bar.  Once selected, you can hit enter to expand the category and enable the following modules:
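In my case, for the TP-WN822N (a USB adapter driven by ath9k_htc, the USB member of the ath9k family), the resulting .config entries look roughly like this.  I'm writing these from memory, so verify them against your own menuconfig:

```text
CONFIG_ATH_COMMON=m
CONFIG_ATH9K_HW=m
CONFIG_ATH9K_COMMON=m
CONFIG_ATH9K=m
CONFIG_ATH9K_HTC=m
```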

Once you have selected these modules and any others you desire, you can exit the menu and build the kernel.  Now, the target you need to build with depends on whether you need the uboot hack.  If you are installing this kernel on your hacked Buffalo LinkStation NAS, you will need to build the zImage and perform the necessary hacks to create your uImage.  However, if you are using a standard uboot configuration, then just get cracking by using the uImage target:

make -j -l 15 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- uImage modules

For the uImage hack:

make -j -l 15 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- zImage modules

( devio  'wl 0xe3a01c06,4' 'wl 0xe3811031,4' ; cat arch/arm/boot/zImage ) > /tmp/zImage
mkimage -A arm -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n linux -d /tmp/zImage arch/arm/boot/uImage

Next, stage the modules for transfer by installing them into a temporary directory.

make -j -l 15 ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- modules_install INSTALL_MOD_PATH=/some/temporary/path

You will now have a uImage that you can copy into your boot partition on your ARM device.  You will also have modules setup and ready to install.  To install them, use ssh to transfer them across:

( cd /some/temporary/path ; tar Jcf - lib ) | ssh you@yourmachine "cd / && sudo tar Jxvf - && sudo chown -R root:root /lib/modules/3.4.X-2012-12-12-1212"

You get the idea.  But you can get the modules and firmware on however you choose.

Tuesday, 24 September 2013

MK802 - Linux (Debian) how to

Installing Linux is fairly straightforward, providing you have an appropriately sized SD card.  Rather than go into too much detail, the images and a guide to installing can be found here:

After installing, however, it's strongly advisable that you lock it down before attaching it to your network.  I've never met the author of the image, don't know him and therefore cannot trust the image entirely.  My first port of call was to build my own kernel.  You are best off using cross tools for this, which I have explained in detail in another post.  Ensure the kernel is built with the appropriate configuration.  You can grab the existing kernel configuration as follows:

$ zcat /proc/config.gz > ~/mk802-kernel-config

Next, lock down the firewall and update the rules.  Go for a block-everything-and-open-what-is-required approach.  If you accidentally lock yourself out, just power off the device, remove the SD card, mount it on your computer and revert whatever changes are necessary to allow you access again.
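As a sketch of that approach -- these are my own example rules, not a complete ruleset, so adapt the open ports to the services you actually run:

```shell
#!/bin/sh
# Block-everything-then-open-what-you-need firewall sketch.
apply_rules() {
    iptables -P INPUT DROP
    iptables -P FORWARD DROP
    iptables -P OUTPUT ACCEPT
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j ACCEPT    # keep ssh open!
}
# Call apply_rules as root to apply; if you lock yourself out, pull the
# SD card and revert, as described above.
```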

Go through the running processes with a fine-tooth comb.  Make sure you are only running what you need.  If you don't need the GUI, make sure you change the default runlevel in /etc/inittab.  A runlevel of 3 will normally disable the GUI (unless it is a fluffy distro like Ubuntu).

Make sure you change the access point passphrase if you intend to create an access point.  Also make sure the network interfaces configuration isn't defining soft MAC addresses for your network devices.

There are many more ways to harden the system.  Take a look at some of the many Linux hardening guides on the net.

Tuesday, 3 September 2013

RSync for Google Drive is here...

I have been working on an implementation of RSync specifically for Google Drive.  I now have it synchronising between client and server using many of the basic rsync options.  I am limited on the time I have to develop and test it, so am looking for volunteers willing to use the software and identify issues.  The project is hosted on GitHub here:

GSync on Github

Update:  I have published the GSync application to PyPI.

To install from PyPI, you just need to execute the following command:

    $ sudo pypi-install gsync

It will auto-resolve dependencies and install the required packages.  To install the pypi-install utility on Debian, execute the following command:

    $ sudo apt-get install python-stdeb


Friday, 30 August 2013

Linux Mint - Upgrading from Maya(13) to Olivia(15)

I thought I'd give this unsupported upgrade a go to see if it would actually work.  Wow, just as I thought, it totally nose dived!  I love you Debian, for all your flaws.  Anyway, all was not lost.  The failure occurred as a result of libc6 being uninstalled without being replaced by the upgraded libc6.  Something went horribly wrong after the uninstall, when something couldn't be restarted as a result of not having libc6 available; yet another argument that suggests package managers should queue post install scripts until the end of installing all packages, rather than trying to do a half arsed install and messing it up.

Anyway, since aptitude kindly caches all the downloaded packages in /var/cache/apt/archives, it was easy to reinstate a working libc6, providing I didn't reboot!

$ dpkg -i $(ls -t1 /var/cache/apt/archives/*libc6*.deb | head -1)

After that, just running the dist-upgrade again managed to get the upgrade merrily on its way.  So my tip of the day:

No matter how bad you think the situation is, you can make it worse by hitting the reset button!