Landfill of wisdom

Dumping ground for small tips and big papers

I had the biggest PC-related scare a couple of days ago. After two disks in my RAID5 failed within a very short time of each other and only pure luck saved my data, I moved to RAID6 and felt safer. That is, until two days ago, when I ran:

# pvs -v
    Scanning for physical volume names
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  WARNING: Volume Group vg0 is not consistent
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  PV         VG   Fmt  Attr PSize PFree   DevSize PV UUID
  /dev/md2   vg0  lvm2 a-   1.80T 922.19G   1.80T Y9naEo-OKG6-0ZyX-qmZX-u3JP-uCPg-cE1hVX

Ooops. Not looking good.

# vgs -v
    Finding all volume groups
  Incorrect metadata area header checksum
    Finding volume group "vg0"
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  VG   Attr   Ext   #PV #LV #SN VSize VFree   VG UUID
  vg0  wz--n- 4.00M   1  15   0 1.80T 922.19G 8uc2fo-0OwD-lPRT-1gdh-87lw-pjQW-Y1n2vd

Hmmmm, lots of errors but at least the volume group is there… now let’s see the logical volumes:

# lvs -v
    Finding all logical volumes
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  Volume group "vg0" inconsistent
  Incorrect metadata area header checksum
  Incorrect metadata area header checksum
  WARNING: Inconsistent metadata found for VG vg0 - updating to use version 154
  Incorrect metadata area header checksum
  Automatic metadata correction failed
  Internal error: Volume Group vg0 was not unlocked
  Device '/dev/md2' has been left open.
  Device '/dev/md2' has been left open.
  Device '/dev/md2' has been left open.
  Device '/dev/md2' has been left open.

I was expecting 15 logical volumes: the /home directory, some data, the mail volume, etc. The thing is, they still worked.
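For the record, the standard escape hatch for inconsistent LVM metadata is the automatic backups LVM keeps under /etc/lvm. A sketch of that recovery path (not necessarily what I ended up doing, so check the dry run carefully before restoring anything):

```shell
# LVM saves a copy of the VG metadata on every change:
#   /etc/lvm/backup/  - most recent version, one file per VG
#   /etc/lvm/archive/ - older versions
ls /etc/lvm/backup/ /etc/lvm/archive/

# Dry-run a restore of the last saved metadata for vg0
vgcfgrestore --test -f /etc/lvm/backup/vg0 vg0

# If the dry run looks sane, run it for real
vgcfgrestore -f /etc/lvm/backup/vg0 vg0
```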

Tools required

  • openocd
  • tftp-hpa
  • cross compiler


cd customrepo
mkdir cross-armv5tel-softfloat-linux-gnueabi
ln -s ../gentoo/sys-devel/{gcc,binutils} cross-armv5tel-softfloat-linux-gnueabi
sudo cave resolve cross-armv5tel-softfloat-linux-gnueabi/gcc
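Once cave finishes, a quick sanity check that the cross toolchain is usable (it should echo the target triplet back):

```shell
# Ask the freshly installed cross gcc for its configured target
armv5tel-softfloat-linux-gnueabi-gcc -dumpmachine
```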

Updating u-boot:

There are several repositories, but the official one is this one. The maintainer of the official one actually recommends using the repos for each architecture; in the case of the SheevaPlug that would be the Marvell repo:

git init
git remote add marvell git://
git fetch marvell
git checkout -b testing marvell/testing
cat <<EOF >GNUmakefile
CROSS_COMPILE ?= armv5tel-softfloat-linux-gnueabi-
include Makefile
EOF
make sheevaplug_config
make u-boot.kwb

Now you can try the new u-boot on your plug:

openocd -f /usr/share/openocd/scripts/board/sheevaplug.cfg

In another terminal (or screen)

telnet localhost 4444
> sheevaplug_init
> load_image /home/..../u-boot
> resume 0x0060000

Check the output of the load_image for the exact number to put
on the resume line.

If u-boot seems to run fine, you can flash it to the NAND on the plug with openocd again.
Copy the u-boot.kwb file to your tftp server and then from the u-boot prompt:

> dhcp
> tftp u-boot.kwb
> nand erase 0 0x60000
> nand write 0x200000 0 0x60000

The tftp command will have given you the address you should give to the nand
write command.

To avoid having to export the CROSS_COMPILE variable every time you recompile your cross-compiled kernel, a suggestion I found is to define the variable in the Makefile itself. Like so:

ARCH = arm
CROSS_COMPILE = armv5tel-softfloat-linux-gnueabi-

This is nice, but then I get conflicts when I pull in the new kernel updates with git. Or even worse, my kernel version gets tagged as “-dirty” to reflect the modified source tree, as opposed to a pristine git checkout. So, a workaround is to put these in a GNUmakefile (checked first by GNU make, and not managed in the kernel repo).

$ cat <<EOF >GNUmakefile
ARCH = arm
CROSS_COMPILE = armv5tel-softfloat-linux-gnueabi-
include Makefile
EOF
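This works because GNU make looks for a file named GNUmakefile before makefile and Makefile. A quick way to convince yourself, in a scratch directory:

```shell
# In an empty directory: a plain Makefile plus a GNUmakefile wrapper
mkdir -p /tmp/gnumake-demo && cd /tmp/gnumake-demo
printf 'all:\n\t@echo ARCH=$(ARCH)\n' > Makefile
printf 'ARCH = arm\ninclude Makefile\n' > GNUmakefile
# GNU make reads GNUmakefile, which sets ARCH and pulls in Makefile
make   # prints: ARCH=arm
```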

Coda is looking like a promising piece of infrastructure. However, blogging about its awesomeness (which I personally haven't experienced yet) is pointless. People usually look for solutions to problems, so here is my share. The environment that kind of worked out of the box was the following:

  • All users are in Windows Active Directory
  • Clients are Linux (Fedora) workstations joined to the AD domain using samba and winbind
  • Server is also Linux (CentOS 5.4) and joined to the same domain with samba
  • Coda is using Kerberos for single sign-on

Now, let's try to change a few elements in this picture.

Try 1: Leave the server on the windows domain, but do not use samba to do the joining.

The procedure to do that deserves an article of its own, but the main issue here was the case sensitivity of MIT Kerberos (or should I say, the case insensitivity of Windows AD). You will never be able to get clog to authenticate, because the server does not have a keytab with an all-uppercase name, and Coda requests a ticket for the uppercase name of the host. Unless you actually did create a keytab with an uppercase principal, in which case lots of other things will break (ssh-ing into the server with Kerberos would not work, for example).

This is not an issue when joining the domain with samba (net ads join), because in that case the server gets a keytab with both spellings of the host principal, all-uppercase and all-lowercase. This is not possible to do manually, because windows will just tell you that the principal already exists when you try to create the second one. The only way I found to work around this problem is to duplicate the principals on the Linux side and make one of them uppercase.

ktutil: rkt /etc/krb5.keytab
ktutil: l
slot KVNO Principal
---- ---- ---------------------------------------------------------------------
  1    9 host/server.local@REALM.LOCAL
ktutil: wkt /etc/krb5.keytab
ktutil: exit

After that, just use some binary editor (vim -b /etc/krb5.keytab, for example) and carefully change one of the server.local occurrences to uppercase.

Try 2: Put the server on a separate Linux realm

To accomplish this you have to set up cross-realm trust between the Linux realm and the windows AD. However, in my case this was the easier thing to do since the servers can automatically join the Linux realm and the reverse DNS is already mapping them to that realm. The only problem is that the lowercase/UPPERCASE principal is still an issue. With a Linux domain it is easy to create both service principals (lower and uppercase) and add them to the server keytab. I personally was getting tired of this nonsense and decided to patch coda instead.

The other change that is needed is to make sure that Coda knows what realm the users would belong to. The default one in /etc/krb5.conf is the one used for system accounts so we had to specify the coda users’ realm with kerberos5realm = WINDOWS.REALM in /etc/coda/server.conf.

I finally decided to sign up for github. What prompted me was the desire to upgrade this blog to WordPress 3.0, from its current WordPress MU 1.5.1. I find that it is much easier to pull in an upstream branch and make my own changes on top if I use git as opposed to subversion. And since importing the wordpress svn repository into git was an overnight process, I decided to share the resulting git repo on github, so others can speed up their own wordpress git import should they ever want to do that. And that is how my first ever github repo became a repo that is just tracking the progress of wordpress. Good enough for a start.

I did not use the default git-svn configuration because it was lacking a few features I like.

First, my local repo is not using the default git-svn branch and tag mapping. Instead, I configure git-svn to prefix all remote branches with "svn" so they are easy to follow. The "trunk" I also push into a branch named trunk, to make it easy to follow on github. I also configured the github remote repo to simply push all remote svn branches to similarly named proper branches. Tags in the subversion repo also become tags in the local repo. So, the flow is: wordpress subversion -> refs/remotes/svn @ local -> refs/heads @ github. And the commands I have to run to update github are just "git svn fetch" followed by "git push github" and eventually "git push github --tags".

The other cool thing is that with this configuration I can do the syncing with a bare git repo.

And here are the relevant parts from the config of my local git repo:

[svn-remote "svn"]
        url =
        fetch = trunk:refs/remotes/svn/trunk
        branches = branches/*:refs/remotes/svn/*
        tags = tags/*:refs/tags/*

[remote "github"]
        url = git://
        pushUrl =
        fetch = refs/heads/*:refs/remotes/svn/*
        push = refs/remotes/svn/*:refs/heads/*

Note that I also do not prefix the fetch line of github with “+” because I do not want the two remote repos – subversion and github – to accidentally overwrite the local tracking branches.
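With that config in place, the whole update could be wrapped in a three-line script (just a sketch; run it from inside the local mirror repo):

```shell
#!/bin/sh
# Mirror new wordpress subversion revisions out to github.
set -e
git svn fetch          # update refs/remotes/svn/* from subversion
git push github        # refspec maps refs/remotes/svn/* to github branches
git push github --tags # push the converted svn tags too
```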

I have a pretty simple mythtv box. It boots over PXE and the root partition is read-only exported over NFS. After bumping the kernel on the server that hosts the NFS to 2.6.34 my mythbox no longer wanted to boot. It was hanging right after managing to look up the mountd port on the NFS server with RPC. After enabling NFS_DEBUG the output looks like this:

Looking up port of RPC 100003/2 on
Root-NFS: Portmapper on server returned 2049 as nfsd port
Looking up port of RPC 100005/1 on
Root-NFS: mountd port is 45731

tcpdump on the server didn’t reveal much either. Everything was fine until the point of hanging, where the client sent a couple of retransmission requests for the last UDP packet. In wireshark, these packets look like:

5127    4.366527    NFS    V2 NULL Call
5128    4.366638    NFS    V2 NULL Reply (Call In 5127)
5129    5.463390    NFS    [RPC retransmission of #5127]V2 NULL Call (Reply In 5128)
5130    5.463543    NFS    [RPC duplicate of #5128]V2 NULL Reply (Call In 5127)

A couple more retries and retransmissions later, and

VFS: Unable to mount root fs via NFS, trying floppy.

I tried the following:

  • Copy the mythtv root partition to another PC and use that as the NFS server – boots fine
  • Add ,tcp to the nfsroot kernel command line parameter – boots fine
  • Use a virtual machine on my laptop to boot as the mythtv system (even gave it the same MAC address) – boots fine

I am out of ideas why only this client with only this server only with UDP does not work.
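For reference, here is the ,tcp workaround spelled out as a full nfsroot kernel command line (the server address and export path are placeholders, not my actual setup):

```
root=/dev/nfs nfsroot=192.168.0.1:/srv/mythbox,tcp ip=dhcp
```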

  • Check what the clipboard contents are: xsel -b
  • Put foo in the primary X selection: echo foo | xsel -p
  • Copy the clipboard to the primary X selection: xsel -b | xsel -p

This is just a hint on how to copy/paste between X programs that cannot agree on which X buffer to use. Say you have an xterm, which uses the primary selection, and some silly java program which only uses the clipboard. What do you do if middle click doesn't want to paste your console output?

An ugly workaround is to fire up an editor that supports both, and keep pasting with one method and copying with the other before pasting into the final destination: middle click in gvim to paste what you selected in your xterm, then select the text in gvim and copy it to the clipboard.

Or, you can use xsel. The simple features of xsel are to put something in one of the selections (if there is anything coming on STDIN) or to output the content of a buffer. So, to put the primary (the one you get when you select text in X) in the clipboard (what editors put their stuff in when you do Ctrl+C) you can do xsel -p | xsel -b.

I just had a very hard time trying to change my address on the JAL Mileage Bank page. They still had my address from about 4 years ago, which has changed at least a couple of times since then. When I tried to update it, I realized why.

I submitted the form, properly filled with full-width characters (including the digits) and on the confirmation screen I get the same form filled with mojibake asking me to complete it correctly. You have guessed already – nothing I type in there would actually help.

Firebug to the rescue! I opened the form in Firebug, added an enctype="multipart/form-data" to the form element and, surprise, surprise, my input was accepted.

The efont family displayed in xfontsel

Finally! A set of fonts that look good and properly display Japanese and Cyrillic. That is, the Cyrillic is half-width as it is supposed to be. For Gentoo users the package is media-fonts/efont-unicode or there is the official page for everyone else. Some characters like the lowercase б and д are oddly designed but that’s a very small price to pay for the pleasure of not having to mix and match different fonts in my terminal configuration.

I wish I’d known that a little earlier.

The built-in SD card reader on my Thinkpad x60 has always given me headaches. It works, but when copying files my system becomes very unresponsive: the mouse is jerky and the CPU usage jumps through the roof. Obviously, DMA is not being used.

I couldn’t find much about it for a while but just today, when I included the chip id R5C822 of the reader in the search I found this thread.

However, instead of patching the source itself, it is quite easy to force the quirk when loading the sdhci module. Put the following in modprobe.conf or /etc/modprobe.d/local.conf and everything will be fine after you reboot.

options sdhci debug_quirks=2

You can also reload the module and check right away, without rebooting:

  • umount any SD cards that you have mounted
  • modprobe -r sdhci-pci
  • modprobe -r sdhci
  • modprobe sdhci-pci
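
Either way, you can read the parameter back from sysfs to confirm the quirk is set (assuming the sdhci module is loaded; the value should come back as 2):

```shell
cat /sys/module/sdhci/parameters/debug_quirks
```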

That’s all folks. Enjoy a true multitasking Linux.