Monday, December 27, 2010

Maven, SBT, and "Impossible to load parent"

I have a multi-module Scala project with Maven as the primary build tool. Lately I've been trying to run SBT beside it. The model seems good: SBT examines existing pom.xml files for dependency info, but gives you all the other good SBT stuff. It should be possible for the two systems to co-exist, even in large, multi-module systems (see Lift or Scalate, for example).

In my project, I ran into "java.io.IOException: Impossible to load parent ..." when running sbt update for the first time. The issue is that the parent (aggregator) pom.xml for my project was not in any repository that SBT was checking. Maven pom.xml files support the relativePath tag to deal with this, but SBT uses Ivy to get the dependencies out of the pom files, and Ivy has a bug with finding parents via relativePath. So my conclusion is there's no "good" way to handle this right now.

The ugly work-around:

  • Add val mavenLocal = "Local Maven Repository" at "file://" + (Path.userHome / ".m2" / "repository").absolutePath to the SBT project definition (see the sketch after this list). This causes SBT to check the local Maven repository for dependencies.
  • Run mvn install once before using SBT. This will ensure the parent pom.xml files are put in the local repository for SBT to see.
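
For reference, here's roughly where that resolver line lives in an SBT 0.7-style project definition. This is just a minimal sketch; the file and class names are placeholders for whatever your project uses.

// project/build/MyProject.scala -- hypothetical SBT 0.7 project definition
import sbt._

class MyProject(info: ProjectInfo) extends DefaultProject(info) {
  // Have SBT/Ivy also resolve from the local Maven repository (~/.m2/repository),
  // which is where `mvn install` puts the parent POMs.
  val mavenLocal = "Local Maven Repository" at "file://" + (Path.userHome / ".m2" / "repository").absolutePath
}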

Sunday, November 14, 2010

Lift: including and embedding templates

For some reason I am incapable of remembering that it's lift:embed and not lift:include, so I'm writing it down. Along the same lines, I can never remember how to change the evaluation order of a snippet call (is it even called a 'snippet call'?). Examples follow.

Embedding a fragment of HTML in a template

<lift:embed what="path/to/fragment"/>
<!-- 
  DO NOT add .html to the what= part 
  what= is relative to src/main/webapp in source tree 
-->

Embedding a fragment of HTML containing lift bindings in a template

<lift:MySnippet.foo eager_eval="true">
  <lift:embed what="path/to/fragment_with_bindings"/>
</lift:MySnippet.foo>

Tuesday, November 2, 2010

Java: URI vs URL

I was just writing some HTTP-related code using java.net.URL when I noticed that Apache httpclient 4.0's API seems to want java.net.URI instances. "Why's that, I wonder?" The answer, it seems, is that Java's java.net.URL class is broken: its equals() method is blocking! It can go out on the network and resolve the hostnames to IP addresses before comparing them. This is very unfortunate since in every other way that class is what I want.

From the Javadoc for java.net.URL#equals:

Two hosts are considered equivalent if both host names can be resolved
into the same IP addresses; else if either host name can't be
resolved, the host names must be equal without regard to case; or both
host names equal to null.

Since hosts comparison requires name resolution, this operation is a
blocking operation. (Emphasis mine)

Good times. So, to avoid arbitrary thread "hanging" at some point down the road, I guess I'll use java.net.URI. Too bad these are all valid URIs, but nonsense in an HTTP context: "mailto:me@foo.com", "abc:123", "quux".
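
To make the difference concrete, here's a small sketch using the standard java.net classes (example.com is just an illustrative host; the comparisons are only there to show which calls can touch the network):

// Scala code
import java.net.{URI, URL}

// URI.equals is purely syntactic -- no network I/O.
val a = new URI("http://example.com/foo")
val b = new URI("http://example.com/foo")
println(a == b)   // true, decided without any lookups

// URL.equals may resolve both hostnames via DNS before comparing,
// which is what can block the calling thread.
val c = new URL("http://example.com/foo")
val d = new URL("http://example.com/foo")
println(c == d)   // true, but potentially only after a blocking DNS lookup

// URI happily accepts values that make no sense for HTTP:
println(new URI("mailto:me@foo.com").getScheme)  // "mailto"
println(new URI("quux").isAbsolute)              // false -- no scheme at all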

This raises the question: what precisely is the difference between a URI and a URL? There's plenty written on this (Google it), but I'll add my semi-informed $0.02 as well:

URI: an identifier (name) for a resource. Doesn't necessarily say anything about how to locate the identified resource, but sometimes does. e.g. "/foo", "http://test.com/bar", "x:y:z/a/b/c"

URL: a URI that MUST include how to locate the resource, i.e. it starts with "http", "https", "ftp", etc. e.g. "http://www.google.com", "https://bank.com", "http://abc.com/foo/bar/baz.html"

So, URI is very general, and URLs are a specialization of URIs. There is another subset of URIs called URNs that adds even more complexity, so I'm going to mostly ignore that here. I'll just paraphrase the StackOverflow discussion on this topic and say that URNs are supposed to be a unique name (over time and space) for a resource, and they say nothing about locating said resource.

Tuesday, October 26, 2010

Scala 2.8, Maven scala:cc and FSC

I recently upgraded a large Maven-based Scala 2.7 project to Scala 2.8. After doing this I discovered that mvn scala:cc was no longer working. The error message was:

[INFO] Cannot start compilation daemon.
[INFO] tried command: List(scala, scala.tools.nsc.CompileServer)

Running mvn scala:cc -Dfsc=false worked fine, but I lost the benefits of FSC.

I managed to fix this, but I never tracked down exactly what was happening, so this is one of those "it works now, who cares" type of things.

  • I noticed java -version was not returning what I expected; an old manually installed version in /opt was apparently eclipsing the java-6-sun version installed through apt. Fixed this with: sudo update-java-alternatives -s java-6-sun
  • Noted that the tried command: bit of the error above showed it was trying the scala command first. Made sure that when I typed scala I got the appropriate 2.8 version.

Started working after this. Wish I knew in more detail what was happening, but I don't have the time right now. Perhaps this will help someone else.

Wednesday, October 20, 2010

Fortunately, not implemented

mvn jetty:ruin

Friday, October 15, 2010

JMX through an ssh tunnel

My production servers run Jetty (v6) and are instrumented with JMX for runtime monitoring. They're also, of course, behind a number of firewalls. Most are only readily accessible via ssh. I need to be able to monitor any of these servers with VisualVM through an ssh tunnel. This was considerably harder to get working than you'd hope!

Here are the high-level steps I eventually settled on:
  1. Enable Jetty's JMX instrumentation
  2. Have Jetty listen for management connections over JMXMP, not RMI
  3. Start VisualVM in such a way that it can speak JMXMP
  4. Set up a tunnel and you're off!
The rest of this article explains how to make this work, and also why I selected this approach.

(Don't Use) RMI

The default way to use JMX is over RMI. This works just fine if you're on the same network as the target server and there is no firewall. It's a mess if there is a firewall. There is a level of indirection in the RMI approach that makes management through a tunnel hard or impossible. Here's what normally happens:
  1. Client connects to a RMI Registry on the server
  2. Client looks up in the registry, under the magic name jmxrmi, where to connect for JMX
  3. RMI Registry replies: ok, connect to jmxhost:jmxport
  4. Client connects to jmxhost:jmxport ... if possible
The problems with this when tunneling are:
  • jmxhost has to be resolvable on both sides of the tunnel. If the servers are NAT'ed (and they will be), jmxhost will be an unroutable private IP like 192.168.1.x
  • by default, jmxport is randomly chosen by the runtime
So with the default config it simply doesn't work through a tunnel. The RMI registry will tell you to connect to some random endpoint that isn't tunneled! You can make this work -- through a single port -- with some effort, however.
You can use -Djava.rmi.server.hostname=127.0.0.1 on the server. This makes jmxhost routable on both sides of the tunnel, but it also means the stubs handed out to clients point at 127.0.0.1, so effectively only local or tunneled clients can connect (probably fine since you're tunneling anyway).
You can make jmxport deterministic using a JMXServiceURL like this: service:jmx:rmi://127.0.0.1:1099/jndi/rmi://127.0.0.1:1099/jmxrmi. (That crazy URL is not a typo!) This forces jmxhost:jmxport to be 127.0.0.1:1099, so as long as you've tunneled to that location you're good. You can use a port other than 1099, but you have to make sure there's an RMI registry listening on whatever port you specify. I've read that this single-port approach is likely to cause grief if you want to use TLS, but I haven't tried it.
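If you're curious what that looks like outside of Jetty's XML (the jetty.xml version appears later in this post), the same single-port setup can be done programmatically. This is only a rough sketch of the standard java.rmi and javax.management.remote APIs, not something I'm running in production:
// Scala code
import java.lang.management.ManagementFactory
import java.rmi.registry.LocateRegistry
import javax.management.remote.{JMXConnectorServerFactory, JMXServiceURL}

// Make jmxhost predictable on both sides of the tunnel.
System.setProperty("java.rmi.server.hostname", "127.0.0.1")

// The RMI registry and the JMX connector share port 1099.
LocateRegistry.createRegistry(1099)
val url = new JMXServiceURL(
  "service:jmx:rmi://127.0.0.1:1099/jndi/rmi://127.0.0.1:1099/jmxrmi")
val server = JMXConnectorServerFactory.newJMXConnectorServer(
  url, null, ManagementFactory.getPlatformMBeanServer)
server.start()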
I find this to be way over-complicated. A simpler approach is to use the JMXMP protocol instead of RMI.

JMXMP

JMXMP is a simple protocol: serialized Java objects over a TCP connection. No indirection like RMI. It's what you wish the default was. The catch is it's not part of the core JDK. You have to download Sun^H^H^HOracle's freely available JMX Remote Reference Implementation and put jmxremote_optional.jar in the classpath of both the client and server. This is a pain, but way less of a pain than having to understand that RMI stuff above.
To use JMX over the JMXMP protocol:
  1. Ensure jmxremote_optional.jar is in the classpath of both client and server
  2. Use service:jmx:jmxmp://127.0.0.1:5555 (selecting whatever IP and port you want) as the JMXServiceURL on the server.
  3. Have the client (i.e. VisualVM) connect to service:jmx:jmxmp://127.0.0.1:5555 (assuming 127.0.0.1:5555 is a tunnel to the same location on the server)
Pretty easy.
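And if you ever want to script against the server instead of pointing VisualVM at it, the client side of a JMXMP connection looks roughly like this. A sketch only: it assumes jmxremote_optional.jar is on the classpath and that the ssh tunnel mentioned in the comment is already up (user@your-server is a placeholder).
// Scala code
// Assumes a tunnel like: ssh -L 5555:127.0.0.1:5555 user@your-server
// and jmxremote_optional.jar on the classpath (it provides the jmxmp protocol).
import javax.management.remote.{JMXConnectorFactory, JMXServiceURL}
import scala.collection.JavaConversions._

val url = new JMXServiceURL("service:jmx:jmxmp://127.0.0.1:5555")
val connector = JMXConnectorFactory.connect(url)
val mbeans = connector.getMBeanServerConnection

// e.g. list every registered MBean name
mbeans.queryNames(null, null).foreach(println)

connector.close()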

Jetty 6 config for JMX over JMXMP

Put jmxremote_optional.jar in $JETTY_HOME/lib, and make sure the following is in your jetty.xml:
<Call id="jmxConnector" class="javax.management.remote.JMXConnectorServerFactory" name="newJMXConnectorServer">
  <Arg>
    <New  class="javax.management.remote.JMXServiceURL">
      <Arg>service:jmx:jmxmp://127.0.0.1:5555</Arg>
    </New>
  </Arg>
  <Arg>
    <Map>
      <Entry>
        <Item>jmx.remote.server.address.wildcard</Item>
        <Item>false</Item>                                                                                                                                              
      </Entry>
    </Map>
  </Arg>
  <Arg><Ref id="MBeanServer"/></Arg>
  <Call name="start"/>
</Call>
This will cause Jetty to listen on 127.0.0.1:5555 for JMX connections using the JMXMP protocol.

Starting VisualVM with JMXMP Support

Simply need to ensure jmxremote_optional.jar is on the classpath:
visualvm -cp:a /path/to/jmxremote_optional.jar
I use a little script to launch it (adjust paths as necessary):
#!/bin/bash
/usr/local/visualvm_131/bin/visualvm -cp:a ~/jmx/jmxremote_optional.jar "$@"

Jetty 6 config for JMX over RMI, single port

I'm not actually using this method, but I did get it working. For completeness, here is the snippet of config from jetty.xml:
<!-- Setup the RMIRegistry on a specific port -->
<Call id="rmiRegistry" class="java.rmi.registry.LocateRegistry" name="createRegistry">
  <Arg type="int">5555</Arg>
</Call> 
<!-- setup the JMXConnectorServer on a specific rmi server port -->
<Call id="jmxConnector" class="javax.management.remote.JMXConnectorServerFactory" name="newJMXConnectorServer">
  <Arg>
    <New class="javax.management.remote.JMXServiceURL">
      <Arg>service:jmx:rmi://127.0.0.1:5555/jndi/rmi://127.0.0.1:5555/jmxrmi</Arg>
    </New>
  </Arg>
  <Arg>
    <Map>
      <Entry>
        <Item>jmx.remote.server.address.wildcard</Item>
        <Item>false</Item>                                                                                                                                              
      </Entry>
    </Map>
  </Arg>
  <Arg><Ref id="MBeanServer"/></Arg>
  <Call name="start"/>
</Call>

JMX Statistics in Jetty 6 (6.1.22)

Jetty has a bunch of JMX instrumentation available, but it is not active by default. There is a little bit of documentation out there describing it, but not a simple explanation of how to really enable it. I eventually figured it out, so here goes.

This was all tested with Jetty 6.1.22 and Java 1.6u16.

At the top of your jetty.xml, setup the MBeanServer by adding this:

<Call id="MBeanServer" class="java.lang.management.ManagementFactory" name="getPlatformMBeanServer"/>

<Get id="Container" name="container">
  <Call name="addEventListener">
    <Arg>
      <New class="org.mortbay.management.MBeanContainer">
        <Arg><Ref id="MBeanServer"/></Arg>
        <Call name="start" />
      </New>
    </Arg>
  </Call>
</Get>

Then, at the bottom of your jetty.xml add this (sets a request stats handler as top-level handler, see here for more):

<Get id="oldhandler" name="handler"/>
<Set name="handler">
 <New id="StatsHandler" class="org.mortbay.jetty.handler.AtomicStatisticsHandler">
  <Set name="handler"><Ref id="oldhandler"/></Set>
 </New>
</Set>

Note that the order in which this stuff appears in the jetty.xml file matters. If you don't set up the MBeanServer at the beginning, subsequent components won't register themselves with JMX. This was the key piece I missed at first, and only realized after reading this post on the jetty-user mailing list.

You may also want to make sure you have statsOn enabled on your Connector:

<Call name="addConnector">
  <Arg>
      <New class="org.mortbay.jetty.nio.SelectChannelConnector">
        <Set name="host"><SystemProperty name="jetty.host" /></Set>
        <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
        <Set name="maxIdleTime">30000</Set>
        <Set name="Acceptors">2</Set>
        <Set name="statsOn">true</Set>
        <Set name="confidentialPort">8443</Set>
        <Set name="lowResourcesConnections">5000</Set>
        <Set name="lowResourcesMaxIdleTime">5000</Set>
      </New>
  </Arg>
</Call>

"Simple." Ugh.

For command line monitoring of these stats, JMXTerm seems reasonable.

Now, if you want to remotely monitor these stats with jconsole or VisualVM and you've got a firewall... that's another story, and is a complete clusterfuck (not Jetty's fault though). I will write more on that topic later. Read the comments in etc/jetty-jmx.xml that ships with Jetty to see more.

Thursday, October 14, 2010

CAP^H Theorem

Brewer's CAP Theorem is often stated as, "Consistency, Availability, and Partition tolerance: choose two." This is catchy, but in real systems you can't actually choose just C and A. If the network between nodes goes down, how can a multi-node database remain both consistent and available? I'll admit you can mess around with the definition of "available" to make this "work" (oh, it just returns '503 try again later' when the network is down), but you're kind of lying to yourself.

Coda Hale has written an excellent article going over why you can't sacrifice partition tolerance. Simply stated, logical, and not too long. Read it a few times.

Friday, September 24, 2010

Disabling JSESSIONID in Jetty context XML

The following snippet shows how to deploy a web application/.war via a Jetty context XML file with JSESSIONID disabled.

<Configure id="cs" class="org.mortbay.jetty.webapp.WebAppContext">
  <Set name="contextPath">/</Set>
  <Set name="war">/foo/bar/myapp.war</Set>
  <!-- Turn off JSESSIONID -->
  <Get id="sh" name="sessionHandler"/>
  <Ref id="sh">
      <Get name="sessionManager">
          <Set name="sessionURL">none</Set>
      </Get>
  </Ref>
</Configure>

To be clear: you'd save this as foo.xml and place it in $JETTY_HOME/contexts and your app would be deployed.

If you want JSESSIONID disabled when working with the maven-jetty-plugin or when embedding Jetty, check out Alex's post on removing JSESSIONID.

Saturday, September 18, 2010

Used Car Research: TrueDelta

A friend recently pointed me to a great site for used car reliability information called TrueDelta. These guys collect repair and fuel economy data from over 70,000 cars around North America. From this data they publish a realistic picture of how cars truly perform out in the wild. The data is updated quarterly, so you always have an up-to-date picture of how a car performs. Really cool idea.

Anyone with a car can join and contribute data to the research. If you're participating, you get free access to all the reports, so it's well worth it. It takes a minute or two to join, and a minute or two each quarter to submit some basic data (number of repairs, odometer, etc.). It's not a big time investment, and the reward could be very valuable next time you're buying a car. If you don't want to participate you can buy your way in for cheap, too.

Sign up today and make more informed decisions next time you have to buy a car.

Wednesday, September 15, 2010

Using a custom WebAppClassLoader in Jetty

I recently ran into a situation where I wanted to log details about what Jetty's class loader was doing for one of our web apps. There is a hook in WebAppContext to provide your own WebAppClassLoader implementation, which was just what I needed, so I proceeded to write LoggingWebAppClassLoader (source below). The trouble was: where do I actually get a chance to insert my custom implementation?

It's easy enough to do this if you're embedding Jetty in your app:

// Scala code 
val context = new WebAppContext()
val lwacl = new LoggingWebAppClassLoader(context)
context.setClassLoader(lwacl)
...

Unfortunately we're not embedding, but just deploying a .war to an existing Jetty server. I messed around with this at length, and eventually asked on StackOverflow. The answer there got me on the right path, though I had a few other issues along the way.

Here's how I got it working:

  • Changed my deployment technique to Jetty's context deployer ($JETTY_HOME/contexts) instead of just copying .war files into $JETTY_HOME/webapps.
  • Wrote myapp.xml (see below) to define the context. It's in here that you can configure Jetty to use the custom WebAppClassLoader.
  • Copied the jar file containing my LoggingWebAppClassLoader class into $JETTY_HOME/lib so the class is available to the context deployer.

Of course it seems pretty straightforward now that I've figured it out :) The biggest issue was that I didn't know much about deploying via Jetty contexts. They seem to have a lot of advantages over the vanilla war deployer in that you can easily tweak any Jetty internals at deploy time. Downside is "programming" in XML...

myapp.xml (Jetty context):
<Configure id="mycontext" class="org.mortbay.jetty.webapp.WebAppContext">
  <Set name="contextPath">/</Set>
  <Set name="war">/foo/myapp.war</Set>
  <Set name="classLoader">
      <New class="fully.qualified.name.LoggingWebAppClassLoader">
          <Arg><Ref id="mycontext"/></Arg>
      </New>
  </Set>
</Configure>
LoggingWebAppClassLoader.java:
import java.io.IOException;
import org.mortbay.jetty.webapp.WebAppContext;
import org.mortbay.jetty.webapp.WebAppClassLoader;

public class LoggingWebAppClassLoader extends WebAppClassLoader {
  public LoggingWebAppClassLoader(ClassLoader parent, WebAppContext context) throws IOException {
      super(parent, context);
  }
  public LoggingWebAppClassLoader(WebAppContext context) throws IOException {
      super(context);
  }

  private void log(String s) {
      System.out.println(s);
  }

  @Override
  public void addClassPath(String classPath) throws IOException {
      log(String.format("addClassPath: %s", classPath));
      super.addClassPath(classPath);
  }

  @Override
  public Class loadClass(String name) throws ClassNotFoundException {
      log(String.format("loadClass: %s", name));
      return super.loadClass(name);
  }
}

Saturday, September 11, 2010

Monit notifications using Google Gmail SMTP

I had to do a bunch of fiddling around to figure this out, hence a quick blog post.
The situation: I want monit to send me email notifications and I don't have (or want) an SMTP server running on the box. Furthermore, like many ISPs, my ISP won't allow outbound connections to port 25 anyway.
Solution: as a Gmail user, I can use Google's SMTP servers for sending mail.
You need monit version >= 4.10 for this to work. I got it working on Ubuntu Jaunty 9.04 and Lucid 10.04 just fine. Here is the set mailserver syntax to make it happen:
set mailserver smtp.gmail.com port 587
    username "someuser@gmail.com" password "password"
    using tlsv1
    with timeout 30 seconds

Thursday, September 2, 2010

How to sell a used car in Ontario

I recently sold my car privately. I thought I'd write about the process to help others, or to help myself if I need to refer back some day. The government has lots of info on the process, but sometimes a real person's experience is still useful.

Background:

  • I live in Ontario, so this is all Ontario-centric
  • I had no liens against the car
  • The car was in excellent condition (not a salvage or rebuild)

The steps I took to sell the car:

  1. Got the UVIP from the Government
  2. Cleared up lien issue
  3. Made a "brochure" page with detailed info about the car
  4. Posted ads
  5. Received responses; found a buyer
  6. Preparation for final sale
  7. Completed sale

UVIP

The UVIP is an official document showing that you own the car, and whether there are liens against the car. It also provides a "bill of sale" page that you fill out to complete the sale. The UVIP took about five days to arrive, but they say to allow two weeks. I ordered mine online, but you can also walk into a ServiceOntario location. It's $20.

One piece of info that the UVIP mentions is the "brand" (not to be confused with "make", e.g. Toyota) of the car. My buyer asked what that meant, so you may want to familiarize yourself with it. In short, it says whether the car was ever severely damaged and then rebuilt. If your "brand" is something other than "None", you probably need to be very aware of it. Details here.

Liens

I initially had a car loan, but it was paid off some time ago. The UVIP, however, still showed that GMAC had a lien against my car! I was surprised by this. I called up GMAC and asked them to issue a letter stating that they had no further interest in the car, which they did. I think this is a common practice. It may be (speculation) that they don't bother clearing the lien when you're done paying. Anyway, keep this in mind since it added another week of waiting around for documentation.

Brochure Page

Because I am a nerd, I made a simple Google Sites page listing all the details I could think of about the car, a bunch of pictures, the price, and how to contact me. I sent this to all interested buyers, posted it on Facebook, etc. This is obviously an optional part of the process, but it may help market the car, and should at least save you typing up answers to the same questions each time someone contacts you.

Posting Ads

I posted ads on two services: Kijiji and Auto Trader (trader.ca). Both are free, but Kijiji is the far superior experience, in my opinion. I got way more "leads" from Auto Trader, but at least 50% of them were scammers. I eventually sold the car through Kijiji.

AutoTrader side-rant

AutoTrader is frustrating. Your ad is only up for seven days or so before it silently expires. Eventually, someone from a call centre phones you up to try to upsell you to the AutoTrader print version. They wanted something like $140 to put my ad in the print version! I politely declined. In their defense, you can re-post your ad online over and over again for free, suffering only the inconvenience.

The biggest frustration was that each time I posted my ad it was rejected at least once before finally being accepted. The first time I tried it was my fault: I tried to put a link in the ad, which they don't allow. Subsequent attempts to post it were rejected on the grounds that it was a dealer ad. I couldn't figure this out at first, but eventually I realized what it was: I had mentioned the name of the dealership where I bought the car years earlier. I don't know if they are just scanning for keywords, but it was pretty clear I wasn't a dealer. Each time you have to explain why your ad is legit there is a two to eight hour turnaround time via email. Just a clunky experience, but necessary to endure because they attract a lot of potential buyers due to their brand.

Ad Responses

At least 50% of ad responses I got via email (especially from the Auto Trader ad) were scams. The common one is: "I'd like to buy your car, but I can't come see it for [insert lame/amusing/creative reason]. I will pay you via PayPal and my 'courier' will come pick it up." It's usually easy to spot these scams, but here are some common things I noticed in scammer responses:

  • no phone number
  • phone number with international area code
  • generic text in emails like "I'd like to buy your item"
  • suspicious name that sounds auto-generated: "Kelvin Eric", "Alex Matt"

If it's a real buyer, they'll talk to you on the phone and come see the car in person. Anything else is highly suspicious. I found a legitimate buyer after about six weeks.

Preparing for Sale

After the buyer and I agreed on a price, he asked me if I would get the car e-tested and certified (safety certified). This hadn't crossed my mind before since the car was only four years old and had very few kilometers on it. It's not required that the seller do these things, but it's certainly a sign of good faith, so I was game. I got the e-test done at Oil Changers ($40), and a local mechanic in Kitchener did the safety inspection ($90). For finding a mechanic to do the inspection, just make sure they are an official "Motor Vehicle Inspection Station," recognized by the government.

You could get the e-test and safety inspection done before finding a buyer, but both expire. The e-test is good for 12 months, so that's not a problem, but the safety inspection is only valid for 36 days, so keep that in mind.

Completing the Sale

The biggest question I had during this process was how I'd accept payment for the car in a safe way. I wasn't selling a beater, so it was a large sum of money to change hands. Cashier cheques and bank drafts are often faked, personal cheques are out of the question, and accepting that much in cash might be the most suspicious of all! I came across an About.com article that suggested going with the buyer to his or her bank and getting the bank draft right then and there. Seemed nearly foolproof, so this is what I did. After meeting my buyer I felt I could trust him, so I felt like a bit of a jerk for making him jump through this hoop. I just wasn't willing to accept any risk on the transfer of this much money. On the day of the actual sale we took a trip over to a nearby branch of his bank, went up to the teller together, and they issued me the draft (and gave him a receipt showing the draft was issued).

Interestingly, when I first suggested this as a way to do the payment, my buyer got a bit nervous. Never having heard of this before, he thought I might be trying to pull something on him! I didn't expect that kind of reaction. I explained my motivation, and sent him the About.com link (above) and he was ok with it. If I was doing it all over I might advertise up front that I require buyers to pay this way.

Payment in hand, there were a few things left to do:

  • Fill out the vehicle transfer bit on my vehicle permit
  • Give the vehicle transfer bit of the permit to the buyer, keep the "plate portion" for myself
  • Fill out the bill of sale in the UVIP
  • Take the plates off my car (yes, the buyer drives away with no plates; he has 6 days to get some put on)
  • Hand over the car and keys
The process was a bit stressful at times, but I ultimately got the price I was looking for, so it was worth it.

Sunday, August 22, 2010

Clojure REPL for command line use

Each time I set up a new box, or want to upgrade to a new version of Clojure (like 1.2, which was released quite recently), I have to remember how I've set it up in the past, and what needs to change. This time I'm writing it down (1.2.0 is the current version).

This is not the simplest way to launch Clojure (for that, use java -cp clojure.jar clojure.main); it's just the way I like having it set up. It's mostly adapted from the Getting Started section in the old (?) Clojure Wikibook.

Clojure REPL with rlwrap support, clojure-contrib on the classpath, and using the clojure-contrib launcher:

  1. Put clojure in /usr/local/clojure-1.2.0
  2. Put clojure-contrib in /usr/local/clojure-contrib-1.2.0
  3. Symlink: /usr/local/clojure-contrib -> /usr/local/clojure-contrib-1.2.0 (for my rlwrap script)
  4. Make sure rlwrap is installed (apt-get install rlwrap on Ubuntu)
  5. ~/bin/clj exists, is executable, and looks like this:
    #!/bin/sh
    BREAK_CHARS="(){}[],^%$#@\"\";:''|\\"
    rlwrap -r -c -b $BREAK_CHARS -f $HOME/.clj_completions /usr/local/clojure-contrib/launchers/bash/clj-env-dir "$@"
    
  6. See Clojure wikibook for generating .clj_completions.
  7. Relevant environment variables (.bashrc or whatever you like):
    export CLOJURE_EXT=~/.clojure                                                                                                                                                
    export CLOJURE_OPTS="-Xmx128m -server"
    
  8. Make sure ~/.clojure ($CLOJURE_EXT) exists and contains symlinks to Clojure and contrib:
    ~/.clojure$ ls -l
    total 0
    lrwxrwxrwx 1 mark mark 59 2010-08-22 21:33 clojure-contrib-1.2.0.jar -> /usr/local/clojure-contrib/target/clojure-contrib-1.2.0.jar
    lrwxrwxrwx 1 mark mark 36 2010-08-22 21:48 clojure.jar -> /usr/local/clojure-1.2.0/clojure.jar
    

Monday, June 28, 2010

Major update to snapsort.com

At Snapsort (where I work) we've been hard at work for the last few months on an improved version of our camera comparison and recommendation engine. Last night we flipped the switch and put the new site live!

Compared to the mini-release we did six months ago, there is a lot of new stuff to see in this version.

One last neat thing: you can embed a summary of any of our comparisons in your own blog or website. For instance, I'm interested in the new Sony mirrorless cameras, so I'll embed the summary right here:

Lots more cool stuff yet to come, but it's nice to be live with this new version.

Wednesday, January 20, 2010

Samba: permission denied on nested directory creation

I have a network setup where there is an Ubuntu (9.04, Jaunty) Samba/CIFS file server with Ubuntu and Windows clients. I had this annoying issue where, from Linux clients, recursive directory creation -- like mkdir -p /share/a/b/c -- would not work. In order to get /share/a/b/c you'd have to create each directory in turn: mkdir /share/a && mkdir /share/a/b && mkdir /share/a/b/c. This prevents handy commands like mkdir -p and rsync from working. Fortunately, I finally found a work-around: turn off Samba's Unix extensions on the server. i.e.

[global]
...
unix extensions = no
...
Obviously this only helps if you don't need the Unix extensions (symlinks, hardlinks, etc on the share), but I didn't. This sounds like a bug in the CIFS client for Linux, but who knows. I'm pleased to have any work-around.

Monday, January 18, 2010

hexBinary Encoding

Recently, I hacked together a wrapper script for reporting job statuses to Hudson. The XML API in Hudson called for "hexBinary" encoded data. I hadn't heard of this before, and couldn't find much in the way of decent examples on Teh Interwebs. From the spec, it seems to be pretty simple: for each byte in your data, write out its two character hex value. So if your byte has decimal value 223, write out its hex string: "DF". (Aside: this seems like a silly encoding, at least space-wise: why not the ubiquitous base64?) I wanted a simple shell script, so the issue was how to do this encoding without pulling in a full-out scripting language. Fortunately, hexdump has format strings. Unfortunately, its docs aren't great.

Example of hexBinary encoding using hexdump:

echo "Hello world" | hexdump -v -e '1/1 "%02x"'
48656c6c6f20776f726c640a

So what the hell is that? -v means don't suppress any duplicate data in the output, and -e is the format string. hexdump is very particular about the formatting of the -e argument, so be careful with the quotes. The 1/1 means: for every 1 byte encountered in the input, apply the following formatting pattern 1 time. Despite this sounding like the default behaviour in the man page, the 1/1 is not optional. /1 also works, but 1/1 is very very slightly more readable, IMO. The "%02x" is just a standard-issue printf-style format code.
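
If you're already on the JVM rather than in a shell script, the same encoding is a couple of lines. A quick sketch (the decode direction is included just to show how trivially it inverts):

// Scala code
// hexBinary encode: two hex characters per input byte
def hexBinary(bytes: Array[Byte]): String =
  bytes.map(b => "%02x".format(b)).mkString

// decode: consume the string two characters at a time
def unhexBinary(s: String): Array[Byte] =
  s.grouped(2).map(h => Integer.parseInt(h, 16).toByte).toArray

println(hexBinary("Hello world\n".getBytes("UTF-8")))
// prints 48656c6c6f20776f726c640a -- matches the hexdump output above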

Hudson External Jobs: Wrapper Script

Lately, I've been using Hudson for a variety of tasks. Hudson is billed primarily as a continuous integration server, but one general-purpose cool feature it has is the ability to monitor "external" jobs. What this means is you can have some arbitrary process report status to Hudson periodically.

My first thought was to get important cron jobs to report status -- beats the automated emails that I tend to ignore. If you happen to have a full Hudson install on the server running the cron job, Hudson provides a simple Java-based wrapper you can use. I didn't want to copy various .jar files to every machine that needed to post status, so instead I opted to use Hudson's XML over HTTP interface. Both the Java-based and HTTP approaches are documented to some extent here.

I wanted to make it as easy as possible to integrate any old script with Hudson, so I came up with the wrapper below. (It seems to work for me, but use at your own risk; no guarantees!)

Update 2010-10-21: The latest version of this script now has a home at GitHub: http://github.com/joemiller/hudson_wrapper Thanks to Joe Miller for setting up the repo!

#!/bin/sh
# Wrapper for sending the results of an arbitrary script to Hudson for
# monitoring.
#
# Usage: 
#   hudson_wrapper <hudson_url> <job> <script>
#
#   e.g. hudson_wrapper http://hudson.myco.com:8080 testjob /path/to/script.sh
#        hudson_wrapper http://hudson.myco.com:8080 testjob 'sleep 2 && ls -la'
#
# Requires:
#   - curl
#   - bc
#
# Runs <script>, capturing its stdout, stderr, and return code, then sends all
# that info to Hudson under a Hudson job named <job>.
if [ $# -lt 3 ]; then
    echo "Not enough args!"
    echo "Usage: $0 HUDSON_URL HUDSON_JOB_NAME SCRIPT"
    exit 1
fi

HUDSON_URL=$1; shift
JOB_NAME=$1; shift
SCRIPT="$@"

OUTFILE=$(mktemp -t hudson_wrapper.XXXXXXXX)
echo "Temp file is:     $OUTFILE" >> $OUTFILE
echo "Hudson job name:  $JOB_NAME" >> $OUTFILE
echo "Script being run: $SCRIPT" >> $OUTFILE
echo "" >> $OUTFILE

### Execute the given script, capturing the result and how long it takes.

START_TIME=$(date +%s.%N)
eval $SCRIPT >> $OUTFILE 2>&1
RESULT=$?
END_TIME=$(date +%s.%N)
ELAPSED_MS=$(echo "($END_TIME - $START_TIME) * 1000 / 1" | bc)
echo "Start time: $START_TIME" >> $OUTFILE
echo "End time:   $END_TIME" >> $OUTFILE
echo "Elapsed ms: $ELAPSED_MS" >> $OUTFILE

### Post the results of the command to Hudson.

# We build up our XML payload in a temp file -- this helps avoid 'argument list
# too long' issues.
CURLTEMP=$(mktemp -t hudson_wrapper_curl.XXXXXXXX)
echo "<run><log encoding=\"hexBinary\">$(hexdump -v -e '1/1 "%02x"' $OUTFILE)</log><result>${RESULT}</result><duration>${ELAPSED_MS}</duration></run>" > $CURLTEMP
curl -s -X POST -d @${CURLTEMP} ${HUDSON_URL}/job/${JOB_NAME}/postBuildResult

### Clean up our temp files and we're done.

rm $CURLTEMP
rm $OUTFILE

If you have, for example, a crontab entry that looks like this:

00 02 * * * myscript.sh
you can have it report status to Hudson under a job called "test_job" by changing your crontab to look like this instead:
00 02 * * * hudson_wrapper http://hudson.myco.com test_job myscript.sh
The job "test_job" must be created as an "external job" in Hudson ahead of time for this to work.

One thing of interest here is the "hexBinary" encoding in the XML that is sent to Hudson. There is precious little info out there about "hexBinary", so hopefully I got that part right. From the spec, it seems simple enough, and the script does work for all the inputs I've thrown at it so far. Update: I wrote a more detailed post on hexBinary.

Update 2010-01-29: Added -s to curl to avoid transfer stats showing up on stderr. Also improved the wrapper to be able to handle any size output from the wrapped command. Before you were at the mercy of ARG_MAX, getting Argument list too long errors if your script output too much stuff.

Saturday, January 16, 2010

Jungle Disk 3 Linux: Automatically start on reboot

I've been a long time user of Jungle Disk, and it's historically been pretty good software. Recently, version 3 was released, and among various features and fixes, they also removed the Linux command line version of "Jungle Disk Desktop", which I use. I wanted to run Jungle Disk on a simple file server that didn't have X, but that isn't possible with version 3. The issue has been raised with Jungle Disk by some other people, and it sounds like there's a chance they will do something about it... maybe/eventually.

In the meantime, I opted to just install a full desktop version of Ubuntu 9.10 on my file server so I could use Jungle Disk 3's GUI. But that brought to light another problem: you can't auto-start Jungle Disk 3 after a reboot because it requires a valid X display... a user must actually log in and start it! Instead of that nonsense, I run vncserver after a reboot using @reboot in a crontab, and get vncserver to run junglediskdesktop via its xstartup file. A hack, but it seems to work well enough.

Here's a basic overview of what's needed to make this work.

  1. Install vnc4server and set it up for gnome-session, as described nicely here.
  2. Add the following line to the crontab for the user who should run Jungle Disk:
    @reboot   /usr/bin/vncserver :1
  3. Make sure ~/.vnc/xstartup looks something like the following for the user that should run Jungle Disk:
    #!/bin/sh
    
    # Uncomment the following two lines for normal desktop:
    # unset SESSION_MANAGER
    # exec /etc/X11/xinit/xinitrc
    
    [ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
    [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
    xsetroot -solid grey
    vncconfig -iconic &
    #xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
    gnome-session &
    junglediskdesktop &
    

To test that this is working, reboot the box and then use a VNC viewer ("Remote Desktop Viewer" in Ubuntu) to connect to the server on port 5901.

Update - 18-Jul-2010: As of version 3.05 of Jungle Disk Desktop Edition, the CLI is back. See this thread at Jungle Disk for discussion. It's not perfect: you need the GUI to configure everything (which is then written to a big XML file), but once configured you can run headless.

Friday, January 15, 2010

Simple Samba/CIFS Configuration

Any time I have to do something with Samba, I run into stupid configuration and permissions issues. I just set up a dead simple Samba config and am documenting it here for next time. Possibly someone else might get some use out of it too.

Some reading I based this on: creating a public share in Samba, and some Samba on Ubuntu docs.

Overview:

  • Ubuntu 9.10 Karmic as the smb server
  • Whatever Samba 3.x it comes with
  • Windows and Linux client machines
  • Anyone on 192.168.1.0/24 has access: not secure, but convenient
  • Machine called myhostname is the file server, at IP 192.168.1.4

Config:

  1. On the file server, myhostname in this example, create a user smbuser to act as the 'guest' in Samba. Client machines that don't authenticate will act as this user:
    sudo adduser smbuser
    Then make sure /etc/passwd and /etc/group have lines something like this:
    # /etc/passwd
    smbuser:x:1001:1001:Samba user,,,:/home/smbuser:/usr/sbin/nologin
    
    # /etc/group
    smbuser:x:1001:
    
  2. Edit /etc/samba/smb.conf:
    [global]
    netbios name = myhostname
    workgroup = WORKGROUP
    server string = File Server
    security = user
    map to guest = bad user
    guest account = smbuser
    create mask = 0644
    directory mask = 0755
    hosts allow = 192.168.1.0/24
    hosts deny = 0.0.0.0/0
    unix extensions = no  # unless you REALLY need them
    
    # Simple share that anyone can read/write to
    [photos]
    path = /data/photos
    browsable = yes
    guest ok = yes
    read only = no
    
  3. Client Linux machine's /etc/fstab (Make sure smbfs is installed: sudo apt-get install smbfs):
    //192.168.1.4/data /data cifs username=smbuser,password=,uid=bob,gid=bob 0 0
    
  4. Client Windows machine: just browse to \\192.168.1.4 or \\myhostname

Wednesday, January 6, 2010

Snapsort: feature #1

First post for normal humans!

Yesterday we launched the first Snapsort feature: compare any two digital cameras (in a sane way). Here's an example comparison. This is a relatively simple feature, but we think it's interesting. (...And possibly even useful already?) Alex gives a better description here. This is only the beginning, but it's nice to have something "live".

Monday, January 4, 2010

s3cmd 0.9.9 on Ubuntu 9.04/Jaunty

The version of s3cmd that comes with Ubuntu Jaunty is quite old (0.9.8; from late 2008). It seems that numerous issues have been fixed in 0.9.9, which is part of Karmic. Fortunately, there is a backport to Jaunty. Here's how to install:

  1. Uninstall s3cmd if you've already installed it:
    sudo apt-get remove s3cmd
  2. Add the following lines to /etc/apt/sources.list:
    # s3cmd backports
    deb http://ppa.launchpad.net/loic-martin3/ppa/ubuntu jaunty main 
    deb-src http://ppa.launchpad.net/loic-martin3/ppa/ubuntu jaunty main 
    
  3. Run these commands:
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 78822001
    sudo apt-get update
    sudo apt-get install s3cmd
  4. Check that you've got 0.9.9:
    $ s3cmd --version
    s3cmd version 0.9.9

Thanks to Loïc Martin, from whom these packages originate.