Wednesday, January 20, 2010

Samba: permission denied on nested directory creation

I have a network setup with an Ubuntu (9.04, Jaunty) Samba/CIFS file server and both Ubuntu and Windows clients. I had this annoying issue where, from Linux clients, recursive directory creation -- like mkdir -p /share/a/b/c -- would not work. To get /share/a/b/c you'd have to create each directory in turn: mkdir /share/a && mkdir /share/a/b && mkdir /share/a/b/c. This prevents handy commands like mkdir -p and rsync from working. Fortunately, I finally found a work-around: turn off Samba's Unix extensions on the server. i.e.

unix extensions = no
Obviously this only helps if you don't need the Unix extensions (symlinks, hard links, etc. on the share), but I didn't. This sounds like a bug in the Linux CIFS client, but who knows. I'm pleased to have any work-around.
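For reference, here's roughly what the fix looks like end to end. This is a sketch: the Jaunty-era init script name and the /share mount point are assumptions, not from any specific setup.

```shell
# On the server: in the [global] section of /etc/samba/smb.conf, add:
#   unix extensions = no
# then restart Samba so it takes effect:
sudo /etc/init.d/samba restart

# On a Linux client: remount the share and re-test nested creation:
sudo umount /share && sudo mount /share
mkdir -p /share/a/b/c && echo "mkdir -p now works"
```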


Monday, January 18, 2010

hexBinary Encoding

Recently, I hacked together a wrapper script for reporting job statuses to Hudson. The XML API in Hudson called for "hexBinary" encoded data. I hadn't heard of this before, and couldn't find much in the way of decent examples on Teh Interwebs. From the spec, it seems to be pretty simple: for each byte in your data, write out its two character hex value. So if your byte has decimal value 223, write out its hex string: "DF". (Aside: this seems like a silly encoding, at least space-wise: why not the ubiquitous base64?) I wanted a simple shell script, so the issue was how to do this encoding without pulling in a full-out scripting language. Fortunately, hexdump has format strings. Unfortunately, its docs aren't great.

Example of hexBinary encoding using hexdump:

echo "Hello world" | hexdump -v -e '1/1 "%02x"'

So what the hell is that? -v means don't suppress any duplicate data in the output, and -e is the format string. hexdump is very particular about the formatting of the -e argument, so be careful with the quotes. The 1/1 means: for every 1 byte encountered in the input, apply the following formatting pattern 1 time. Despite this sounding like the default behaviour in the man page, the 1/1 is not optional. /1 also works, but 1/1 is very slightly more readable, IMO. The "%02x" is just a standard-issue printf-style format code.
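Putting that together, a round trip looks like this. (Decoding via xxd -r -p is my addition, not part of the original recipe; xxd usually ships with vim.)

```shell
# hexBinary-encode: each input byte becomes its two-digit hex value.
encoded=$(printf 'Hello world' | hexdump -v -e '1/1 "%02x"')
echo "$encoded"    # 48656c6c6f20776f726c64

# And back again: xxd -r -p reverses a plain hex dump.
printf '%s' "$encoded" | xxd -r -p    # Hello world
```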


Hudson External Jobs: Wrapper Script

Lately, I've been using Hudson for a variety of tasks. Hudson is billed primarily as a continuous integration server, but one general-purpose cool feature it has is the ability to monitor "external" jobs. What this means is you can have some arbitrary process report status to Hudson periodically.

My first thought was to get important cron jobs to report status -- beats the automated emails that I tend to ignore. If you happen to have a full Hudson install on the server running the cron job, Hudson provides a simple Java-based wrapper you can use. I didn't want various .jar files copied to every machine that needed to post status, so instead I opted to use Hudson's XML-over-HTTP interface. Both the Java-based and HTTP approaches are documented to some extent here.

I wanted to make it as easy as possible to integrate any old script with Hudson, so I came up with the wrapper below. (It seems to work for me, but use at your own risk; no guarantees!)

Update 2010-10-21: The latest version of this script now has a home at GitHub. Thanks to Joe Miller for setting up the repo!

#!/bin/sh
# Wrapper for sending the results of an arbitrary script to Hudson for
# monitoring.
# Usage:
#   hudson_wrapper <hudson_url> <job> <script>
#   e.g. hudson_wrapper <hudson_url> testjob /path/to/
#        hudson_wrapper <hudson_url> testjob 'sleep 2 && ls -la'
# Requires:
#   - curl
#   - bc
# Runs <script>, capturing its stdout, stderr, and return code, then sends all
# that info to Hudson under a Hudson job named <job>.
if [ $# -lt 3 ]; then
    echo "Usage: hudson_wrapper <hudson_url> <job> <script>"
    exit 1
fi

HUDSON_URL=$1; shift
JOB_NAME=$1; shift
SCRIPT=$*

OUTFILE=$(mktemp -t hudson_wrapper.XXXXXXXX)
echo "Temp file is:     $OUTFILE" >> $OUTFILE
echo "Hudson job name:  $JOB_NAME" >> $OUTFILE
echo "Script being run: $SCRIPT" >> $OUTFILE
echo "" >> $OUTFILE

### Execute the given script, capturing the result and how long it takes.

START_TIME=$(date +%s.%N)
eval $SCRIPT >> $OUTFILE 2>&1
RESULT=$?
END_TIME=$(date +%s.%N)
ELAPSED_MS=$(echo "($END_TIME - $START_TIME) * 1000 / 1" | bc)
echo "Start time: $START_TIME" >> $OUTFILE
echo "End time:   $END_TIME" >> $OUTFILE
echo "Elapsed ms: $ELAPSED_MS" >> $OUTFILE

### Post the results of the command to Hudson.

# We build up our XML payload in a temp file -- this helps avoid 'argument list
# too long' issues.
CURLTEMP=$(mktemp -t hudson_wrapper_curl.XXXXXXXX)
echo "<run><log encoding=\"hexBinary\">$(hexdump -v -e '1/1 "%02x"' $OUTFILE)</log><result>${RESULT}</result><duration>${ELAPSED_MS}</duration></run>" > $CURLTEMP
curl -s -X POST -d @${CURLTEMP} ${HUDSON_URL}/job/${JOB_NAME}/postBuildResult

### Clean up our temp files and we're done.

rm -f $OUTFILE $CURLTEMP


If you have, for example, a crontab entry that looks like this (the script path is a placeholder):

00 02 * * * /path/to/some_script.sh

you can have it report status to Hudson under a job called "test_job" by changing your crontab to look like this instead:

00 02 * * * hudson_wrapper <hudson_url> test_job /path/to/some_script.sh

The job "test_job" must be created as an "external job" in Hudson ahead of time for this to work.

One thing of interest here is the "hexBinary" encoding in the XML that is sent to Hudson. There is precious little info out there about "hexBinary", so hopefully I got that part right. From the spec, it seems simple enough, and the script does work for all the inputs I've thrown at it so far. Update: I wrote a more detailed post on hexBinary.

Update 2010-01-29: Added -s to curl to avoid transfer stats showing up on stderr. Also improved the wrapper to handle any size of output from the wrapped command. Before, you were at the mercy of ARG_MAX, getting "Argument list too long" errors if your script produced too much output.

Saturday, January 16, 2010

Jungle Disk 3 Linux: Automatically start on reboot

I've been a long-time user of Jungle Disk, and it's historically been pretty good software. Recently, version 3 was released, and among various features and fixes, they also removed the Linux command-line version of "Jungle Disk Desktop", which I use. I wanted to run Jungle Disk on a simple file server that didn't have X, but that isn't possible with version 3. The issue has been raised with Jungle Disk by some other people, and it sounds like there's a chance they will do something about it... maybe/eventually.

In the meantime, I opted to just install a full desktop version of Ubuntu 9.10 on my file server so I could use Jungle Disk 3's GUI. But that brought to light another problem: you can't auto-start Jungle Disk 3 after a reboot because it requires a valid X display... a user must actually log in and start it! Instead of that nonsense, I run vncserver after a reboot using @reboot in a crontab, and get vncserver to run junglediskdesktop via its xstartup file. A hack, but it seems to work well enough.

Here's a basic overview of what's needed to make this work.

  1. Install vnc4server and set it up for gnome-session, as described nicely here.
  2. Add the following line to the crontab for the user who should run Jungle Disk:
    @reboot   /usr/bin/vncserver :1
  3. Make sure ~/.vnc/xstartup looks something like the following for the user that should run Jungle Disk:
    # Uncomment the following two lines for normal desktop:
    # exec /etc/X11/xinit/xinitrc
    [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
    [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
    xsetroot -solid grey
    vncconfig -iconic &
    #xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
    gnome-session &
    junglediskdesktop &

To test that this is working, reboot the box and then use a VNC viewer ("Remote Desktop Viewer" in Ubuntu) to connect to the server on port 5901.
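You can also exercise the pieces without an actual reboot. This sketch starts display :1 by hand and checks that something is listening on the matching port:

```shell
# Run the same command the @reboot crontab entry runs:
/usr/bin/vncserver :1

# Display :1 listens on TCP port 5901 (5900 + display number):
netstat -tln | grep 5901

# Kill the test server again when you're done:
vncserver -kill :1
```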

Update - 18-Jul-2010: As of version 3.05 of Jungle Disk Desktop Edition, the CLI is back. See this thread at Jungle Disk for discussion. It's not perfect: you need the GUI to configure everything (which is then written to a big XML file), but once configured you can run headless.

Friday, January 15, 2010

Simple Samba/CIFS Configuration

Any time I have to do something with Samba, I run into stupid configuration and permissions issues. I just set up a dead simple Samba config and am documenting it here for next time. Possibly someone else might get some use out of it too.

Some reading I based this on: creating a public share in Samba, and some Samba on Ubuntu docs.


  • Ubuntu 9.10 Karmic as the smb server
  • Whatever Samba 3.x it comes with
  • Windows and Linux client machines
  • Anyone on the network has access: not secure, but convenient
  • Machine called myhostname is the file server, at IP


  1. On the file server, myhostname in this example, create a user smbuser to act as the 'guest' in Samba. Client machines that don't authenticate will act as this user:
    sudo adduser smbuser
    Then make sure /etc/passwd and /etc/group have lines something like this:
    # /etc/passwd
    smbuser:x:1001:1001:Samba user,,,:/home/smbuser:/usr/sbin/nologin
    # /etc/group
  2. Edit /etc/samba/smb.conf (note the [global] and share section headers, which smb.conf requires):
    [global]
    netbios name = myhostname
    workgroup = WORKGROUP
    server string = File Server
    security = user
    map to guest = bad user
    guest account = smbuser
    create mask = 0644
    directory mask = 0755
    hosts allow =
    hosts deny =
    unix extensions = no  # unless you REALLY need them

    # Simple share that anyone can read/write to
    [photos]
    path = /data/photos
    browsable = yes
    guest ok = yes
    read only = no
  3. Client Linux machine's /etc/fstab (make sure smbfs is installed: sudo apt-get install smbfs):
    //myhostname/photos /data cifs username=smbuser,password=,uid=bob,gid=bob 0 0
  4. Client Windows machine: just browse to \\myhostname (or the server's IP address)
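To sanity-check the server from a Linux client, something like the following should work. This is a sketch: it assumes the share is named photos and that the smbclient package is installed.

```shell
# List the server's shares anonymously (-N = no password prompt):
smbclient -L myhostname -N

# Mount by hand and confirm the guest user can write:
sudo mkdir -p /data
sudo mount -t cifs //myhostname/photos /data \
    -o username=smbuser,password=,uid=bob,gid=bob
touch /data/smoke_test && rm /data/smoke_test && echo "share is writable"
```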

Wednesday, January 6, 2010

Snapsort: feature #1

First post for normal humans!

Yesterday we launched the first Snapsort feature: compare any two digital cameras (in a sane way). Here's an example comparison. This is a relatively simple feature, but we think it's interesting. (...And possibly even useful already?) Alex gives a better description here. This is only the beginning, but it's nice to have something "live".

Monday, January 4, 2010

s3cmd 0.9.9 on Ubuntu 9.04/Jaunty

The version of s3cmd that comes with Ubuntu Jaunty is quite old (0.9.8; from late 2008). It seems that numerous issues have been fixed in 0.9.9, which is part of Karmic. Fortunately, there is a backport to Jaunty. Here's how to install:

  1. Uninstall s3cmd if you've already installed it:
    sudo apt-get remove s3cmd
  2. Add the following lines to /etc/apt/sources.list:
    # s3cmd backports
    deb jaunty main 
    deb-src jaunty main 
  3. Run these commands:
    sudo apt-key adv --keyserver --recv-keys 78822001
    sudo apt-get update
    sudo apt-get install s3cmd
  4. Check that you've got 0.9.9:
    $ s3cmd --version
    s3cmd version 0.9.9

Thanks to Loïc Martin, from whom these packages originate.