Tuesday, 4 December 2012
MK802 NAS - Phase One
Using some parts I had lying around, namely an external USB SATA HDD caddy, the HDD from my existing NAS and a few USB cables, I was able to assemble the MK802 NAS - Phase One:
This all runs from the single DC adapter for the external HDD caddy. The MK802 is powered from the first USB port on the caddy. The second port has a second WiFi adapter plugged in, allowing me to bridge the WiFi interface that will run in AP mode on the MK802 with the existing WiFi router. This gives the device access to the local network and the internet (required for updating and downloading packages) while I set it up. All of this is hooked up to the second USB port on the MK802, via the USB input on the HDD caddy, thus allowing the caddy to act as a four-port hub too.
The HDD caddy was purchased a few years back on Amazon for around £15 including delivery. It has served me well and makes for a good hot-pluggable HDD caddy.
I have Debian running on the MK802 with a sunxi 3.0.36 kernel. I will provide details on how to set up the MK802 with Debian using an SD card.
Related: MK802 Project - Adding heat sinks
Saturday, 1 December 2012
MK802 Project - Adding heat sinks
I have just bought an MK802 Mini Android PC. My intention is to use this powerful but small piece of kit to create a replacement for my dated NAS. It will serve as a router, wireless AP, firewall, NAS, NIS and NFS server. It is some 6 times more powerful than its predecessor, with 10 times more memory. It cost just over £30 including delivery and draws 3 watts; some 8 watts less than the current NAS. It also has USB 2.0, replacing the predecessor's painfully slow USB 1.1.
Phase one will involve using my existing 3.5" HDD, running in an external USB caddy. This will serve as the NAS storage, while the OS runs directly from the 8GB UHS Micro SD card. Using U-Boot, it is possible to load a custom Linux kernel from the SD card and boot an entirely different OS. I will be sticking with Debian ARM, since it has served me so well. I will be restricted to the 3.0.36 kernel for a while, downgrading from 3.4, since the Allwinner A10 SoC drivers have not yet been ported to 3.4. However, 3.4 support is currently a work in progress, so I should be able to return to that version very soon.
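For a flavour of what the U-Boot side involves on the A10: the SD card carries a small boot script alongside the kernel. The load addresses, console and partition layout below are assumptions typical of sunxi setups, for illustration only, not a copy of my exact configuration:
# boot.cmd - assumed layout: FAT boot partition on mmc 0, rootfs on mmcblk0p2
setenv bootargs console=ttyS0,115200 root=/dev/mmcblk0p2 rootwait
fatload mmc 0 0x43000000 script.bin
fatload mmc 0 0x48000000 uImage
bootm 0x48000000
This gets wrapped into the boot.scr that U-Boot actually reads:
$ mkimage -A arm -T script -C none -n 'boot' -d boot.cmd boot.scr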
Phase two will see the introduction of a 2.5" external USB HDD, powered over USB and thus removing the need to power the drive separately. I'm not sure how the MK802 will cope with providing that much power, though you have to wonder how affordable a 500GB SSD will be by that point...
Before doing anything though, I have read a few reviews suggesting that the MK802 gets hot when operational. This is understandable given it is running a 1.6 GHz CPU, a dual-core 500 MHz GPU and 2 x 512 MB DDR3 memory modules. The solution is brutal but simple: add heat sinks! Since the whole design is based on being minimalistic, some surgery is required. If you care to follow it, I have prepared step-by-step instructions:
Step 1: Dismantling the MK802
You will notice a thin layer of plastic attached to the thicker base layer. This thin layer is just clipped in place and can be removed by carefully inserting a Stanley blade into the end with the HDMI connector and gently twisting until the two halves separate.
Step 2: Marking the outline
Before getting dirty and hacking a hole in your casing, work out an area that will provide the maximum coverage while not shorting anything; remember, the heat sinks are metal, so they have the potential to short something if carelessly placed. Once you know where they will go, line up your four heat sinks in single file on the inside of the lid of the casing, precisely where they will protrude. Now carefully mark an outline using your Stanley knife. Don't press hard; you are only making a guide for when you cut, so a light scratch in the surface will suffice.
Step 3: Modification of the casing
With the template engraved in the lid, you can remove the heat sinks and begin cutting. Be patient and gentle with this, since pressing too hard and trying to cut all the way through will result in cracking the plastic or slicing a finger. Just gently score the surface, one edge at a time, repeating the same pattern until you notice the knife begin to make its way through. This part took me about 20 minutes to complete, but it's worth taking your time. I have seen people use electric hand tools and make a complete mess of the casing, so it pays to be patient here.
If you want to improve the finish even further, then you can also use a nail file to get rid of any burrs.
Step 4: Assembly and adhesion of the heat sinks
It is better to assemble the casing first, before adhering the heat sinks, since it will be easier to get them in the right place. You can use the hole in the case as a guide for alignment.
After adhering all the heat sinks, you should have something that looks a little like this:
I have seen some other articles where holes are drilled in the surface to allow more ventilation. This really isn't at all necessary, since the heat sinks are fairly efficient at radiating heat on their own, thus drawing cooler air through other cavities such as the USB ports, SD slot or HDMI slot.
Remember, when operating, keep the heat sinks on the topmost surface, since they will not function correctly if they cannot shed heat upwards. It is okay if the heat sinks are on the side, providing the assembly is kept horizontal. Vertical positioning will result in rising heat preventing either the GPU or the DDR memory from getting sufficient cooling.
To test that they are working adequately, turn on your device and wait a minute or two. If working correctly, the heat sinks will be hot to the touch.
Saturday, 13 October 2012
PPPoE and obscure packet dropping (some websites not working)...
I have been having a mare with packets simply disappearing, or being marked as invalid and dropped, which then results in established and related packets being dropped in response to the mangled requests. I spent ages trying to debug this, logging anything and everything with iptables LOG rules, and couldn't determine what on earth was going on until I started reading about the MSS exceeding the MTU of 1500 set by the ISP. The problem only shows up with large amounts of data, and is even more apparent for devices connecting from behind the firewall, where mangling is going on. Essentially, it boils down to packets being mangled, increasing the segment size, which eventually produces a packet that exceeds the MTU when the ISP reroutes it. At the firewall level, the packet is fine and fits just within the 1500-byte limit. Proposed solutions centre around adjusting the PPPoE interface MTU to something that will not exceed the 1500 limit once mangled, and this is a good thing to have set. For me, I modified my PPP configuration and changed the MTU:
$ sed -i.bak 's/^mtu [0-9]\+$/mtu 1454/' /etc/ppp/peers/dsl-provider
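Once pppd has reconnected, it's worth confirming the interface really took the new MTU, and that a maximally-sized packet now survives. A quick check; the interface name and target host are assumptions for illustration:
$ ip link show ppp0 | grep -o 'mtu [0-9]*'
$ ping -M do -s 1426 -c 3 example.com   # 1426 + 28 bytes of IP/ICMP headers = 1454: should pass
$ ping -M do -s 1427 -c 3 example.com   # one byte over: should fail with "Message too long"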
As I found out, though, this alone is not guaranteed to work, because you don't know what additional mangling is going on at the ISP's end. In the end, I started reading the iptables man page for the 3.4 kernel I am running, and found something very interesting...
TCPMSS
    This target allows to alter the MSS value of TCP SYN packets, to control the maximum size for that connection (usually limiting it to your outgoing interface's MTU minus 40 for IPv4 or 60 for IPv6, respectively). Of course, it can only be used in conjunction with -p tcp.
    This target is used to overcome criminally braindead ISPs or servers which block "ICMP Fragmentation Needed" or "ICMPv6 Packet Too Big" packets. The symptoms of this problem are that everything works fine from your Linux firewall/router, but machines behind it can never exchange large packets:
        1) Web browsers connect, then hang with no data received.
        2) Small mail works fine, but large emails hang.
        3) ssh works fine, but scp hangs after initial handshaking.
    Workaround: activate this option and add a rule to your firewall configuration like:
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
    --set-mss value
        Explicitly sets MSS option to specified value. If the MSS of the packet is already lower than value, it will not be increased (from Linux 2.6.25 onwards) to avoid more problems with hosts relying on a proper MSS.
    --clamp-mss-to-pmtu
        Automatically clamp MSS value to (path_MTU - 40 for IPv4; -60 for IPv6). This may not function as desired where asymmetric routes with differing path MTU exist -- the kernel uses the path MTU which it would use to send packets from itself to the source and destination IP addresses. Prior to Linux 2.6.25, only the path MTU to the destination IP address was considered by this option; subsequent kernels also consider the path MTU to the source IP address.
    These options are mutually exclusive.
So I wasn't going mad after all; it appears to be a widely known issue. So I gave it a try...
-A FORWARD -p tcp --tcp-flags SYN,RST SYN -j LOG --log-prefix "CLAMP-MSS-TO-PMTU" --log-tcp-options --log-ip-options --log-level 7
-A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
I am only specifying the LOG action here to ensure that it works. Indeed it does. After enabling it, I instantly started seeing entries appear in the logs, and all the websites that weren't previously accessible suddenly started working. I don't need to explain why this works, since it's explained well enough in the manual excerpt above. But if you share my plight, then hopefully this helps you too!
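As a final sanity check, you can watch the MSS option on outbound SYNs actually being rewritten on the wire. Something like this does the job; the interface name is an assumption:
$ tcpdump -ni ppp0 'tcp[tcpflags] & tcp-syn != 0'
Each SYN should now report an mss value no larger than your path MTU minus 40.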
Friday, 12 October 2012
Broadcast ping for host discovery
I saw a question posted on an exchange website today that took me back a few years. People often assume that a broadcast ping can be used for host discovery on a network. But these days, broadcast ICMP requests are dropped silently to avoid facilitating springboard-style DoS attacks. This is essentially where you ping as many hosts as possible across a multitude of subnets, but have all the responses returned to the machine you want to attack. This floods the target machine's network interface with ICMP traffic, obstructing real network traffic. It also poses a threat to ADSL connections, when you consider that most people have caps on their broadband; it would be quite easy to render someone's internet connection useless through this attack mechanism. For this reason, the default for most machines is not to respond to broadcast ICMP requests. Most routers also filter these out, to prevent ICMP broadcast packets spreading across different subnets.
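On Linux, this default is visible as a sysctl, so you can check (or, on hosts you own, override) the behaviour; flipping it is rarely a good idea for the reasons above:
$ sysctl net.ipv4.icmp_echo_ignore_broadcasts
net.ipv4.icmp_echo_ignore_broadcasts = 1
$ sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=0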
Anyway, host discovery can actually be achieved through a much simpler mechanism, given that you can ping specific hosts for a reply; again, providing those hosts reply to ICMP requests. The solution is to ping each possible address on a given subnet, looking for a response from any of them. This sounds like a really heavyweight process, but it's actually very easy and very fast. The following command demonstrates this, and can be incorporated into a script if you wish (the brackets are important, since they run each ping in a backgrounded subshell).
$ time ( s=192.168.0 ; for i in $(seq 1 254) ; do ( ping -n -c 1 -w 1 $s.$i 1>/dev/null 2>&1 && printf "%-16s %s\n" $s.$i responded ) & done ; wait ; echo )
192.168.0.5 responded
192.168.0.11 responded
192.168.0.2 responded
192.168.0.254 responded
192.168.0.4 responded
real 0m1.317s
user 0m0.004s
sys 0m0.084s
TP-WN822N AP/Master Mode - Part 3
The new kernel worked, at least for a little while. I had to radically rework the network configuration in order to get it functioning with minimal firewall changes. Initially I had the 1 Gb on-board Ethernet wired to the internal network, the 100 Mb USB Ethernet on the ADSL line, and the WiFi provided by a bridge on the switch. Now the 1 Gb port provides the PPPoE interface and WiFi supplies the network. I use the USB Ethernet adapter on its own subnet, connected directly to my laptop, as a management port.
The best way to avoid changing the firewall, other than a simple bit of 'sedding', was to create a bridge interface between what is now regarded as the physical interface and the wireless interface. The physical interface doesn't actually exist any more, but it's there just in case; call it future-proofing against my own change of mind. So let's look at the /etc/network/interfaces configuration...
allow-hotplug eth4
iface eth4 inet manual
allow-hotplug wlan0
iface wlan0 inet manual
auto br0
iface br0 inet static
bridge_ports eth4 wlan0
address ...
The rest is pretty straightforward. This creates the bridge interface. Actually, nothing happens on boot, because there is no eth4 device and no wlan0 device, and therefore no bridge device is created. But once udevd detects the ath9k driver, a udev rule takes care of starting the hostapd daemon, and thus creates the bridge. The script also sets up extra iptables rules to allow traffic on br0, since everything is implicitly dropped. Having hostapd start on system boot is no good, even though it's the recommended route: USB devices have to wait until the USB hub driver has loaded before plugged devices can be found, by which point hostapd will already have failed because the device wasn't present. Instead, I completely rewrote my own version of the hostapd ifupdown.sh script, to ensure that the right thing is done when a device comes online and goes offline. I have tested this by repeatedly unplugging the USB adapter and plugging it back in. Each time, udev unloads and loads the ath9k module and runs my script, which installs the firewall rules and runs hostapd; the udev side is sketched below.
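For illustration, the udev rules amount to something along these lines; the script name and path are hypothetical, not a copy of my actual rule:
# /etc/udev/rules.d/90-hostapd.rules (hypothetical)
ACTION=="add", SUBSYSTEM=="net", KERNEL=="wlan*", RUN+="/usr/local/sbin/hostapd-ifupdown.sh start %k"
ACTION=="remove", SUBSYSTEM=="net", KERNEL=="wlan*", RUN+="/usr/local/sbin/hostapd-ifupdown.sh stop %k"
The script then inserts the iptables rules for br0 and starts hostapd against the newly arrived interface, and tears it all down again on removal.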
Dilemma! (another one)
The 3.2 kernel I am using seems to have a bug in the Atheros driver. After a period of the driver being loaded and the adapter being connected, sending and receiving traffic, the kernel crashes. There are a few problems with this. One is that the kernel crashes!! Two is that the kernel panic doesn't make it to syslog, so I have no idea what is happening. Three, I need to create a serial connector for the hidden serial port on the NAS drive. Hassle!
So next steps... I have purchased a Nokia CA-42 lead from Amazon. This has an inbuilt TTL-level serial converter, and considering it's only £4, it is far cheaper than the £30 for a proper USB TTL lead. When it arrives, I will set to work on getting the serial console up and running so I can debug the driver crash.
In the meantime, I am going to build the 3.4 kernel using my scripts and investigate what changes have been made to the ath9k modules between the 3.2, 3.4 and 3.6 kernels. It's quite possible that I can backport a change to my 3.2 kernel that provides a solution.
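For anyone wanting to do the same survey from a mainline checkout, the driver history is easy to narrow down (the path is as in the mainline tree):
$ git log --oneline v3.2..v3.4 -- drivers/net/wireless/ath/ath9k/
$ git diff --stat v3.2 v3.6 -- drivers/net/wireless/ath/ath9k/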
Stay tuned for more progress...
Monday, 8 October 2012
TP-WN822N AP/Master Mode - Part 2
Kernel Building!
Okay, building the kernels did eventually become a bit tedious, given the problems I was having. Initially the 3.5.5 kernel I built worked, but this was shortly followed by intermittent kernel crashes; no panics, no core dumps (disabled). So I immediately looked to the 3.4.X kernel, which appears to be more stable. At this point I had opted for the cross-compiling approach, given that a 7-hour wait for each build was impractical. To make the process of building kernels for the NAS more portable and easier, I created a script to do the job for me. It automates the job of installing the appropriate build tools (cross compilers, utilities and libraries), as well as keeping these up to date by running through the install process on each invocation.
After some playing, I found a base configuration that worked well, based on my 2.6 kernel config. At this point, I took a snapshot of it and placed it with the script. The script now snapshots configurations according to the kernel versions built out of the same directory. I have the compilation time on my i7 down to 4 minutes. That's incredible given it took 7 hours on a 200 MHz ARM CPU. I suppose the biggest difference is the amount of memory available to me on the i7 and how many processes can be spawned. I am running make with unlimited parallel processes, limited only by the system load, which I have set to a load average of around 5. So initially there is a surge in memory and CPU usage, before it all settles down, with all 8 cores working at an average capacity of about 80%. But to get an image and modules out in under 5 minutes is still astonishing!
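The make invocation behind that boils down to something like the following; the cross-compiler prefix depends on the toolchain installed, so treat it as an assumption:
$ make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j -l5 uImage modules
-j with no argument spawns as many jobs as it can, while -l5 stops new jobs being spawned whenever the load average exceeds 5.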
Download the build-ukernel script
Stay tuned for more...
Friday, 5 October 2012
TP-WN822N AP/Master Mode
I have been running Debian on a hacked Buffalo LS500GL NAS for at least the past four years now, and this little beast is still going strong. I've since upgraded the HDD to a 1TB disk and use it for various things: firewall, NIS server, NFS server, DLNA server, web server and just about any other kind of server you can cram into the 128 MB of RAM. It only has a 200 MHz ARM CPU, so it's no good as a transcoder, but for general services it serves me well.
However, having just upgraded to a fibre connection, I thought I'd take advantage of the PPPoE availability and run a wireless access point, instead of having the switch plugged in. The NAS uses some 11 watts, so is really economical. Having the switch plugged in as well draws an extra 8 or 9 watts, which, when you consider it is not really necessary, means I may as well ditch it in favour of a USB access point.
Dilemma!
There is of course a dilemma. Most wireless drivers on Linux don't support access point mode. The simple workaround is to use NDISwrapper to wrap native Windows kernel drivers, providing the Windows kernel API whilst interfacing with the Linux kernel. However, this is not possible when you are not running a supported Windows architecture, or at least one that you can find drivers for. In this case, there are no Windows drivers for ARM.
Solution!
The solution is simple, or so it would seem: find a device that has a Linux driver that supports AP mode. These are few and far between, and figuring out which devices will be suitable before you buy is tough. Luckily I was able to find a few resources indicating that the TP-WN822N USB adapter supports AP mode using the Atheros drivers (ath9k variant). This driver is available in the 3.X.X kernels onwards, so the first port of call is to upgrade my 2.6.X kernel.
Cross-compiling means leaving a computer running for hours on end while it builds, so I opt for building on the NAS itself. It takes a lot longer, given the memory restrictions and slow CPU, but you can log in, run up '/usr/bin/screen' and leave it running while you go off and do something equally constructive for 7 hours, like sleep!
So here is my procedure:
ssh root@nasdrive
exec screen
cd /usr/src/linux-3.X.X
make menuconfig
make
Installation is slightly different given the U-Boot loader that runs out of the NAS flash. For the NAS, you need to build the zImage and modify it slightly:
make zImage
mkimage -O linux -T kernel -C none -a 0x00008000 -e 0x00008000 -n 'linux' -d <( devio 'wl 0xe3a01c06,4' 'wl 0xe3811031,4'; cat arch/arm/boot/zImage ) uImage.new
mv uImage.new uImage-3.X.X
mv System.map System.map-3.X.X
mv .config config-3.X.X
Once all this has been built, it can be copied to the /boot partition and the default links updated appropriately. It's no big deal if the NAS doesn't come up on reboot. You can dismantle it, remove the HDD, relink the old kernel and start over. This doesn't happen often, but when it does, is usually the result of not having built the uImage with the correct devio header.
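Copying the new kernel into place then looks something like this; the link names are assumptions based on the usual Debian /boot layout:
cp uImage-3.X.X System.map-3.X.X config-3.X.X /boot/
cd /boot && ln -sf uImage-3.X.X uImage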
So, let's leave it there for now. I am going to set my kernel build going, and will come back and report on the next stages: getting the ath9k driver loaded and bringing the wlan interface up for the TP-LINK adapter. Stay tuned...
Wednesday, 13 June 2012
Debian, NIS, Headaches, Wi-Fi and AutoFS
For quite some time now, there has been a lot of hoo-ha surrounding the startup ordering, dependencies and chain-loading of autofs, ypbind and networking. Historically, this was solved post-installation by manually modifying insserv directives in the headers of sysinit scripts and then recalculating the dependency start order; that, or simply modifying the rc.d start sequence. With the introduction and adoption of upstart, the dependencies have been fine-tuned somewhat, allowing a virtual "start-ypbind.conf" job to act as a blocking event, holding back the startup of things that depend on ypbind. This works exactly as everyone using NIS and AutoFS requires, providing the binding takes less than five seconds.
Historically, the binding time-out before ypbind startup was backgrounded was around a minute. Since the move to upstart, this time-out has been reduced to five seconds, after which ypbind is backgrounded and upstart startup events are emitted for autofs and other NIS-dependent services. When connecting over Wi-Fi, the Wi-Fi association alone can take that long. So, since AutoFS is extremely sensitive to the presence of YP binding, automount fails to start and you are presented with a system in the same age-old unusable state. The simple solution would be to increase the time-out and pray it makes its way into the mainstream. But wouldn't it be nice if YP could bind in less time, perhaps without there even being any networking present at all?
Well, the answer is "yes". And the answer to "how we achieve it" is actually quite simple, although it does uncover another little bug with yphelper.
Solution
So how do we bind faster and without any networking? The answer is that we need to bind locally. Binding locally means running a local NIS server, but one configured as a NIS slave. This is extremely simple to set up using the /usr/lib/yp/ypinit utility: providing the -s option and the fully qualified hostname of the master NIS server, the NIS maps will be replicated locally. Synchronisation with the master will be maintained by ypxfrd on the master NIS server, so there is no need to set up any nasty cron tasks to update the NIS records periodically. There are a few configuration settings to ensure the NIS server is only made available locally, but once done, a restart of NIS and autofs will see the system switched to using the local NIS server; ypwhich will confirm this.
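A minimal sketch of the replication step, with an illustrative master hostname substituted for your own:
/usr/lib/yp/ypinit -s nismaster.some.domain.com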
Modify the yp.conf file so that your original entry sits below a duplicate of itself, with the IP or hostname of the NIS server swapped for localhost (127.0.0.1). It should look somewhat like this:
domain some.domain.com server 127.0.0.1
domain some.domain.com server 10.0.0.1
The host on 10.0.0.1 will be the master, used only as a fallback to the local server. Effectively, this entry is a no-op, but it serves as a reminder of the real NIS server.
Make sure the only entry permitted in ypserv.securenets is the loopback (localhost) address range. Comment anything else out to refuse requests from anywhere else.
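For illustration, the one surviving entry would then be the loopback range, written in the securenets netmask-then-network format:
255.0.0.0       127.0.0.0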
That's it! Configuring NIS in this way will have YP binding completing almost instantaneously, while the networking is able to come online and autofs is able to start.
Issues
There is a bug with ypinit, in that on some NIS master configurations, it is unable to obtain a list of maps for that server. This can be remedied by modifying the script and replacing the call to yphelper with a call to ypwhich as follows:
maps=`ypwhich -m | awk '($2 == "'$MASTER'") { printf("%s ", $1) }'`
This is a variation of the original method for obtaining maps, and one that works reliably.
Wednesday, 30 May 2012
Tech: Leap Motion
I want one of these!! Well, providing it works as shown and there is a Linux driver for it. :-)
Wednesday, 11 April 2012
xsltproc: exslt functions, and SEGV
This has a habit of coming back to bite me when I have forgotten about it, but essentially there is an issue with performing extensive XSLT within a func:result block. If the XSLT within the result block is too extensive and produces a large return block or node-set, it causes xsltproc to SEGV. There is a simple way to defend against this ever happening, but it requires a little conscious thought and consistency.
To prevent func:result from throwing a wobbly, you must ensure you encapsulate your return value in a variable where you would otherwise return the result directly. Thus, this func:result:
<func:result>
<xsl:for-each select="myns:isInteresting(exsl:node-set($nodes)/Thing)">
<xsl:value-of select="@name"/>
</xsl:for-each>
</func:result>
Becomes:
<xsl:variable name="ret">
<xsl:for-each select="myns:isInteresting(exsl:node-set($nodes)/Thing)">
<xsl:value-of select="@name"/>
</xsl:for-each>
</xsl:variable>
<func:result select="$ret"/>
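If you want to check whether one of your stylesheets is affected, running it over a suitably large input under xsltproc is enough to reproduce; the file names here are hypothetical:
$ xsltproc --novalid transform.xsl large-input.xml > /dev/null
If the stylesheet is affected, that invocation dies with a segmentation fault; with the variable-encapsulated form above, it completes normally.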