
The Uncarved Block


Getting started with GitHub

Before we can build our marketmaking system, you need to be able to get the software.

To share the marketmaking software that I talked about in the first marketmaking article, we need a repository set up, so I decided to get going with GitHub. You'll need to set up git, and you'll need Maven 2 to build everything. I thought I'd put out a small article to get people started by building a small package we're going to depend on for the bigger system. Assuming this goes ok, I'll push the actual software that interfaces with bullionvault.

Once you've got git and maven2 set up, it should be simple enough. On my linux boxes I just do:

git clone git://github.com/huntse/helpers.git
cd helpers
mvn test

...and with any luck, git will get the software, maven2 will download the internet and some time later build and run the tests, and you'll see something like this:

-------------------------------------------------------
 T E S T S
-------------------------------------------------------
There are no tests to run.

Results :

Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

[INFO]
[INFO] --- maven-scalatest-plugin:1.1-SNAPSHOT:test (default) @ helpers ---
[INFO] org.scalatest.tools.Runner.run(-p, "/home/sean/tmp/helpers/target/classes /home/sean/tmp/helpers/target/test-classes", -o, -fNCXEHLOWFD, /home/sean/tmp/helpers/target/scalatest-reports/file/constrained.txt, -f, /home/sean/tmp/helpers/target/scalatest-reports/file/full.txt, -u, /home/sean/tmp/helpers/target/scalatest-reports/xml, -h, /home/sean/tmp/helpers/target/scalatest-reports/html/report.html)
Run starting. Expected test count is: 7
Suite Starting - DiscoverySuite
BasicClientSpec:
An BasicClient object
- should be able to get theflautadors.org
- should be able to reget theflautadors.org using conditional get
- should be able to get HEAD of uncarved.com
- should be able to GET uncarved with parameters
- should be able to get xml
- should be able to handle redirects
- should be able to do a POST with values
Suite Completed - DiscoverySuite
Run completed in 2 seconds, 266 milliseconds.
Total number of tests run: 7
Suites: completed 2, aborted 0
Tests: succeeded 7, failed 0, ignored 0, pending 0
All tests passed.
THAT'S ALL FOLKS!
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 23.140s
[INFO] Finished at: Wed Jun 01 10:27:36 GMT 2011
[INFO] Final Memory: 6M/11M
[INFO] ------------------------------------------------------------------------
mvn test  22.79s user 0.74s system 94% cpu 24.793 total

I don't use eclipse or anything like that, but it should be possible to get this working with eclipse too - just don't ask me how.

Writing an Automated Marketmaking System

The first in a series of articles in which we will construct a simple high frequency trading system.

There has been a lot of press recently about the evils of high frequency trading (HFT), with many commentators saying that HFT may well be the root of the next big financial crisis. The basis for this view is the increasing importance of computerised market-making systems in providing liquidity to markets, and the concern about what happens if these participants stop providing this liquidity in the event of some sort of market panic. Additionally, some commentators maintain that automated market makers are trading in a manner that is to the detriment of other participants.

Rather than attempting to add more commentary to this debate, I'd like to contribute in a practical way by widening the understanding of these systems so that people can decide for themselves. As such, this is going to be the first in a series of articles during which we will build a fully-functional electronic market-making system. It's going to take a little while to develop all the pieces, but since I have the first piece almost ready to go I think it's time I came out with the introductory articles. This series will mostly be aimed at people with some computing background and an interest in financial markets, but will not assume any knowledge of how markets work.

Caveats before we begin

Nothing in these articles is going to constitute any sort of advice as to the merits of investing in a particular product, or making markets using a particular strategy. If you follow the series you will acquire some bits of software which can be used to construct an EMM (Electronic Market-Making) system. I won't make you pay for them, and if you use them, you may lose money. The software may have bugs or unintended features which cause you to lose money. I'm really sorry if that's the case. You need to understand the software and accept the consequences of what it does, because it will be trading on your account. You need to put in place any tests you require to feel happy that it is performing according to your specifications. You also need to understand that in financial markets you are dealing with random processes, and as such even well-founded strategies can lead to losses. Additionally, anything you make using this toolkit will need to be able to hold its own against other market participants who will be aiming to exploit it. In any financial situation you need to do your own research and take responsibility for what happens - this is no different. Obviously you shouldn't put more at risk than you can afford to lose, but it's your money, and you need to decide whether this is right for you.

Background - Marketmakers and liquidity

It would be very cumbersome if every time you wanted to buy something you had to find the person willing to sell exactly that quantity at that time and agree a price, so generally when we buy or sell, a marketmaker actually takes the other side of the trade, hoping to find someone else wanting the opposite bargain later in the day. The marketmaker makes money by charging a small spread (ie they buy for lower than they sell for) in return for assuming the risk of holding the position you have put them into until they are able to unwind it by doing the opposite trade. This risk is a function of the ease of the unwind (how likely they are to be able to find someone to trade with) and the price volatility of the asset. So if you want to trade in a product which has very low price volatility, and very high liquidity, then it would be easy for the marketmaker to find someone to trade out of their position with, and they would not need to worry about the price moving too much while they hold the asset, so you would expect the spread between the bid and ask prices to be very low. Conversely, high volatility and low liquidity assets would normally have high spreads to compensate marketmakers for their higher risk.
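To make the volatility/liquidity intuition concrete, here is a toy sketch in Haskell. The function and its form are entirely my own illustration, not any real marketmaker's pricing rule: the idea is just that the quoted spread widens with price volatility and with the expected time needed to unwind the position (a rough proxy for illiquidity).

```haskell
-- Toy illustration only (made-up rule, not from any real system): the
-- spread a marketmaker charges widens with price volatility and with
-- the expected time needed to unwind the position.
quotedSpread :: Double  -- minimum spread (covers costs in a calm market)
             -> Double  -- price volatility per unit time
             -> Double  -- expected time to unwind the position
             -> Double
quotedSpread minSpread vol unwindTime = minSpread + vol * sqrt unwindTime

main :: IO ()
main = do
  print (quotedSpread 1.0 2.0 0.25)  -- liquid, calm: close to the minimum
  print (quotedSpread 1.0 8.0 4.0)   -- illiquid, volatile: much wider
</imports>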

Providing liquidity

In the old days, the position of marketmaker was held only by exchange locals who had to pay for the membership that allowed them to earn this spread, but with the advent of limit order books, anyone can provide liquidity to many markets and expect to be compensated for it. Our goal is to write a computer system which will do this for us. In order to do that, we first need to understand how a limit order book works, and by way of an example I'm going to jump right in and introduce the market we will use as the basis for this whole series.

Bullionvault

The market we're going to use for our examples is bullionvault.com, which is in essence a physical gold and silver market, with all the actual metal held in escrow in reserves in New York, London and Zurich, with separate order books in $, £ and €. If you sign up using that link I will make a small referral fee (at no cost to you) from the commissions you pay to trade your account, and that's all I'm going to get for writing these articles. Before you sign up you should, of course, peruse the on-line help so you understand how their system works, and like any other investment, you should think carefully about the risks involved.

Understanding the order book

If you go to the front page, you can see the current sell and buy prices for gold in the three locations in one currency (USD by default).

Bullionvault USD gold touch prices

The touch

These prices are just the top of the order book - the so-called touch prices. You would find more quantity available to buy or sell at prices further away in the book, but what the prices mean in this example is that if you wanted to buy one troy ounce of gold in NYC it would cost you $1528, whereas if you wanted to sell you would only get $1524, so the market spread is $4 per TOz, or about $129 per kg if you live in the modern world. We're going to use metric units for these articles, as teaching something as amazing as a modern computer to think in Troy ounces (or any imperial units) is a great evil, like teaching children arithmetic only by using Roman numerals or something.
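Since we'll be metric throughout, it's worth pinning down the conversion once. A troy ounce is 31.1034768g, so there are about 32.1507 TOz in a kilogram; a tiny Haskell helper (my own, purely for the arithmetic) does the spread conversion:

```haskell
-- Unit arithmetic for the example above: 1 troy ounce = 31.1034768 g,
-- so 1 kg is about 32.1507 TOz.
tozPerKg :: Double
tozPerKg = 1000 / 31.1034768

-- convert a $/TOz spread into $/kg
spreadPerKg :: Double -> Double
spreadPerKg perToz = perToz * tozPerKg

main :: IO ()
main = print (spreadPerKg 4)  -- the $4/TOz touch spread, in $/kg
```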

Bid and Offer

We're also going to use some real market-making terminology, so we're going to refer to bid and ask or bid and offer prices, rather than sell and buy prices. The easy way to remember which way round these are is to think about the fact that as marketmakers we want to make money, and so charge people for what they want to do. If they want to sell, we're going to bid to buy from them at a low price. If they want to buy, we will reluctantly offer to sell to them at a high price. Hence bid is low and offer is high. When these prices are reversed, the orderbook is said to be crossed. This happens in equity markets when they are closed for the night; there is then an auction phase where high bids are matched with low offers until the book is uncrossed and normal trading begins. We wouldn't expect the orderbook to be crossed in a continuous trading market like this unless something was wrong and matching was suspended.

Aggressive and Passive

If an order to buy comes in, and it has no limit price, it is matched with the cheapest available sell orders until it is filled. If it has a limit price, it will only be filled up to the limit price on the order. But what happens to the remaining quantity? Under normal circumstances, this quantity stays on the book at the limit price until it can be matched against an incoming sell order within its limit.
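The matching logic above is simple enough to sketch. Here's a minimal Haskell version (the types and names are mine, purely illustrative, and nothing to do with the software we'll actually build): an incoming buy walks the offers from cheapest upwards, filling until either the order is done or the remaining offers are beyond its limit.

```haskell
import Data.List (sortOn)

type Price = Double
type Qty   = Int

-- Resting sell orders as (price, quantity) pairs. An incoming buy with a
-- limit fills against the cheapest offers first; whatever it can't fill
-- within its limit is left over (and would normally rest on the book).
-- Returns (quantity filled, remaining offers).
matchBuy :: Price -> Qty -> [(Price, Qty)] -> (Qty, [(Price, Qty)])
matchBuy limit want book = go want (sortOn fst book)
  where
    go remaining [] = (want - remaining, [])
    go remaining offers@((p, q) : rest)
      | remaining == 0 || p > limit = (want - remaining, offers)
      | q <= remaining              = go (remaining - q) rest
      | otherwise                   = (want, (p, q - remaining) : rest)

main :: IO ()
main =
  -- buy 5 limit 1526 against offers of 3@1528 and 2@1524:
  -- fills 2 at 1524 and leaves the 1528 offer untouched
  print (matchBuy 1526 5 [(1528, 3), (1524, 2)])
```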

We say that an order that provides liquidity by sitting on the book waiting to be filled is passive and that an order which crosses the spread, taking liquidity from the market by crossing off with passive orders is aggressive. We can also refer to the passive touch and aggressive touch. If we have an order to buy, then the passive touch is the bid and the aggressive touch is the offer. This is because if we want to buy passively, we will place our order at the bid or lower, whereas if we want to buy aggressively, we will need to pay the offer or higher. The opposite would be true for a sell order. If we want to sell right now, we will need our limit price to be at, or more aggressive (lower) than, the current bid, whereas if we are prepared to wait, our price can be more passive (ie higher).
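In code, the passive/aggressive distinction is just a function of the side. A small sketch (again with illustrative names of my own, not from the series' codebase):

```haskell
data Side = Buy | Sell deriving (Eq, Show)

-- The current touch as a (bid, offer) pair.
type Touch = (Double, Double)

-- Where to price an order to wait, and where to price it to trade now.
passiveTouch, aggressiveTouch :: Side -> Touch -> Double
passiveTouch    Buy  (bid, _)   = bid    -- join the bid and wait
passiveTouch    Sell (_, offer) = offer  -- sit on the offer and wait
aggressiveTouch Buy  (_, offer) = offer  -- pay the offer to trade now
aggressiveTouch Sell (bid, _)   = bid    -- hit the bid to trade now

main :: IO ()
main = do
  let touch = (1524, 1528)  -- the NYC gold touch from earlier
  print (passiveTouch Buy touch, aggressiveTouch Buy touch)
```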

Still to come - the software

In the next article, I will introduce the software we can use to connect to bullionvault. Please feel free to comment below if anything so far is unclear and I'll try to deal with it in the next article.

Mozilla Ubiquity

Fun with this amazing mozilla addon

Ubiquity is an amazing mozilla addon which gives users a command-line interface within the browser that they can use to do various things. Users can easily write their own commands in javascript and share them via the web. In that spirit, I've written my first ubiquity command, which searches hoogle, the haskell type-aware search engine.

CmdUtils.makeSearchCommand({
  homepage: "http://www.uncarved.com/",
  author: { name: "Sean Hunter"},
  license: "MPL",
  name: "hoogle-search",
  url: "http://www.haskell.org/hoogle/?hoogle={QUERY}",
  icon: "http://www.haskell.org/favicon.ico",
  description: "Searches haskell.org for functions matching by name or type signature.",
});

As you can see, it's virtually all meta-data, and that's because there are a bunch of functions around search that know how to do everything you need to do. However, you can write more sophisticated commands that manipulate the browser or the web page you're on, have little built-in previews and so on. All very nifty.

When you have the above, you can simply invoke ubiquity and say "hoogle-search Ord a => a -> a" or whatever and it will find you functions matching that type signature. I'll share this command (and any others I write) at www.uncarved.com using the subscription mechanism they recommend.

I was feeling rather proud of the above. However, I saw this morning that if you go to a page with a search box, select it, invoke ubiquity and type "create-new-search-command", it writes something very much like the above for you.

Haskell arrays are amazing

Functional languages are all about lists... but Haskell has incredible arrays

For certain classes of problems where speed is of the essence, table lookups can be a fantastic solution. However, many functional programmers eschew these approaches partly because they are thinking in terms of lists, and lookups in lists (as we all know) are a bit rubbish most of the time. However, Haskell has a fantastic array implementation that is flexible enough to put many imperative languages to shame (especially when combined with list comprehensions). Hopefully an example will illustrate what I mean. The actual array lookup I'm doing is into arrays of much larger size, but the principle is exactly the same.

Say you want to model a regular deck of 52 playing cards and want to write functions to convert them to and from integers. We want to proceed by rank (two to ace), then for each rank, by suit (clubs, diamonds, hearts, spades) so the two of clubs is going to be 0, the two of spades is going to be 3 and the ace of spades is going to be 51.

Let's start by defining the datatypes for Rank and Suit and a simple generic card type:

-- deriving Ix needs the class in scope; listArray and (!) are used below
import Data.Array (listArray, (!))
import Data.Ix (Ix)

data Rank =
    Two
    | Three
    | Four
    | Five
    | Six
    | Seven
    | Eight
    | Nine
    | Ten
    | Jack
    | Queen
    | King
    | Ace
    deriving (Eq, Ord, Show, Read, Enum, Ix)

data Suit =
    Clubs
    | Diamonds
    | Hearts
    | Spades
    deriving (Eq, Ord, Show, Read, Enum, Ix)

data GenCard = GenCard Rank Suit
    deriving (Eq, Ord, Show, Read)

...so far so not very interesting. Now the two functions we want to define are:

genCardOfInt :: Int -> GenCard
genCardToInt :: GenCard -> Int

...and obviously, you could do:

genCardOfInt 0 = GenCard Two Clubs
genCardOfInt 1 = GenCard Two Diamonds

...etc all the way to...

genCardOfInt 51 = GenCard Ace Spades

...and then...

genCardToInt (GenCard Two Clubs) = 0

...and so on.

This would work, but it is extremely smelly. The thing that alerts us to the fact that this is fishy is that there is a lot of repetitive typing. The gods of Haskell frown on this, and generally if you're doing a bunch of it, there's probably something wrong. And indeed there is. This approach leads to a linear search through the cases every time until we find a match. If we care about performance this will be horrible, and imagine how bad it would get with a table of (say) 4 million entries. The actual problem I am interested in requires this size of table. Happily we can use an array-driven method not only to speed up our algorithm, but also to scrap all this tedious repetition. First, some background.

For genCardOfInt what we want to do is create an array of 52 cards and then use the int passed in to look up into this array. This will give us constant time access. So what we want is:

genCardOfInt x = lookup ! x
    where
        lookup = .... to be discussed

One possible solution is to do

        lookup = listArray (0,51) [GenCard Two Clubs, GenCard Two Diamonds etc etc etc]

... but that's almost as unhaskelly as what we had before! So, here's how we do it.

genCardOfInt x = lookup ! x
    where
        lookup = listArray (0,51) [GenCard r s|r<-enumFrom Two, s<-enumFrom Clubs]

Say whaaaa? Well, let's see what ghci has to say:

*Card> enumFrom Two
[Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten,Jack,Queen,King,Ace]
*Card> enumFrom Clubs
[Clubs,Diamonds,Hearts,Spades]

So because we said "deriving Enum" for our types, we get this ability for free. Mighty 'andy.

*Card> [GenCard r s|r<-enumFrom Two, s<-enumFrom Clubs]
[GenCard Two Clubs,GenCard Two Diamonds,GenCard Two Hearts,GenCard Two Spades,GenCard Three Clubs,GenCard Three Diamonds,GenCard Three Hearts,GenCard Three Spades,GenCard Four Clubs,GenCard Four Diamonds,GenCard Four Hearts,GenCard Four Spades,GenCard Five Clubs,GenCard Five Diamonds,GenCard Five Hearts,GenCard Five Spades,GenCard Six Clubs,GenCard Six Diamonds,GenCard Six Hearts,GenCard Six Spades,GenCard Seven Clubs,GenCard Seven Diamonds,GenCard Seven Hearts,GenCard Seven Spades,GenCard Eight Clubs,GenCard Eight Diamonds,GenCard Eight Hearts,GenCard Eight Spades,GenCard Nine Clubs,GenCard Nine Diamonds,GenCard Nine Hearts,GenCard Nine Spades,GenCard Ten Clubs,GenCard Ten Diamonds,GenCard Ten Hearts,GenCard Ten Spades,GenCard Jack Clubs,GenCard Jack Diamonds,GenCard Jack Hearts,GenCard Jack Spades,GenCard Queen Clubs,GenCard Queen Diamonds,GenCard Queen Hearts,GenCard Queen Spades,GenCard King Clubs,GenCard King Diamonds,GenCard King Hearts,GenCard King Spades,GenCard Ace Clubs,GenCard Ace Diamonds,GenCard Ace Hearts,GenCard Ace Spades]

So that bit is a list comprehension. Basically it's a simple way of defining a list and avoids all the tedium and boilerplate. And "listArray" takes some dimensions and a list and returns an array. But the best is yet to come, and ghci hints at it now:

*Card> :t listArray (0,51) [GenCard r s|r<-enumFrom Two, s<-enumFrom Clubs]
listArray (0,51) [GenCard r s|r<-enumFrom Two, s<-enumFrom Clubs]
  :: (Num t, Ix t) => Array t GenCard

So the type of that "lookup" variable is an Array t GenCard. The "t" is some numeric type that we can use to index into our array, and the GenCards (as we know) are what's in the array. So the type of the array index is polymorphic.

This means we can do:

genCardToInt :: GenCard -> Int
genCardToInt (GenCard r s) = lookup ! (r,s)
    where
        lookup = listArray ((Two,Clubs),(Ace,Spades)) [x|x<-[0..51]]

Gosh! Let's try that expression in ghci:

*Card>  listArray ((Two,Clubs),(Ace,Spades)) [x|x<-[0..51]]
array ((Two,Clubs),(Ace,Spades)) [((Two,Clubs),0),((Two,Diamonds),1),((Two,Hearts),2),((Two,Spades),3),((Three,Clubs),4),((Three,Diamonds),5),((Three,Hearts),6),((Three,Spades),7),((Four,Clubs),8),((Four,Diamonds),9),((Four,Hearts),10),((Four,Spades),11),((Five,Clubs),12),((Five,Diamonds),13),((Five,Hearts),14),((Five,Spades),15),((Six,Clubs),16),((Six,Diamonds),17),((Six,Hearts),18),((Six,Spades),19),((Seven,Clubs),20),((Seven,Diamonds),21),((Seven,Hearts),22),((Seven,Spades),23),((Eight,Clubs),24),((Eight,Diamonds),25),((Eight,Hearts),26),((Eight,Spades),27),((Nine,Clubs),28),((Nine,Diamonds),29),((Nine,Hearts),30),((Nine,Spades),31),((Ten,Clubs),32),((Ten,Diamonds),33),((Ten,Hearts),34),((Ten,Spades),35),((Jack,Clubs),36),((Jack,Diamonds),37),((Jack,Hearts),38),((Jack,Spades),39),((Queen,Clubs),40),((Queen,Diamonds),41),((Queen,Hearts),42),((Queen,Spades),43),((King,Clubs),44),((King,Diamonds),45),((King,Hearts),46),((King,Spades),47),((Ace,Clubs),48),((Ace,Diamonds),49),((Ace,Hearts),50),((Ace,Spades),51)]

So we have an array which is indexed by a pair of (Rank, Suit), and the values are numbers from 0 to 51. And no boilerplate. If we wanted to test this, of course, we could write some quickcheck properties that verify this implementation against the naive one I gave before. This sort of model-based testing is going to be essential to verify the complex but fast implementation I have for my real problem against the obvious but insanely verbose and slow implementation that I can put in my test.
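The quickcheck properties I have in mind would look something like this. A sketch only: it assumes the GenCard type and the array-backed genCardOfInt/genCardToInt from this article are in scope, and genCardToIntNaive is a hypothetical name for the verbose pattern-matching version sketched earlier.

```haskell
import Test.QuickCheck (Property, forAll, choose, quickCheck)

-- The fast functions should invert each other on every valid index...
prop_roundtrip :: Property
prop_roundtrip = forAll (choose (0, 51)) $ \i ->
  genCardToInt (genCardOfInt i) == i

-- ...and agree with the slow-but-obvious model implementation
-- (genCardToIntNaive is a stand-in name for the naive version).
prop_matchesModel :: Property
prop_matchesModel = forAll (choose (0, 51)) $ \i ->
  genCardToInt (genCardOfInt i) == genCardToIntNaive (genCardOfInt i)

main :: IO ()
main = quickCheck prop_roundtrip >> quickCheck prop_matchesModel
```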

Aren't Haskell arrays amazing though? I was genuinely stunned when I realised I could do table-driven methods so elegantly. There's a nice tutorial on Haskell arrays in the "gentle introduction".

Using a Linux laptop in text mode

Linux has everything you need to use your laptop even if you choose not to use the pointy-clicky interfaces

So having recently got a netbook, I have been learning how to do everything on it. It's obviously all easy if you use something like xfce or gnome, or some netbook-specific spin like the Ubuntu netbook remix, but I like to use xmonad, so it helps to learn how to do all the laptoppy things in text mode.

Here's a list of things it's useful to know:

  • nm-applet - finds a network and connects to it automagically
  • pm-suspend - sends your laptop into a light sleep
  • pm-hibernate - sends your laptop into a heavy sleep where it uses no battery

I've got a new netbook

So finally I give in to temptation...

This week I bought a new netbook (an Acer Aspire One Pro P531). I was determined to reward someone who sells a Linux netbook, but in spite of my best efforts no one was selling the sort of configuration I wanted (big SSD drive, 2GB memory) without Windows XP. So I bought one and installed Fedora on it myself.

It's a fantastic little device and everything on it just works with Fedora, apart from the ethernet card, which seems not to want to dhcp (although that might be some incompatibility with my home hub/router/adsl thingy).

I'm using it with Xmonad to do random bits of Haskell tinkering when I take the tube to work in the morning. The keyboard is a bit fiddly, but manageable, and the small form factor and light weight are terrific. In this respect it reminds me a little of my old sony vaio which was a sort of expensive predecessor to a netbook that I got second hand.

One thing I would have done differently if I had done more research is to get the other sort of Atom processor. Ones with an "N" or a "Z" at the front are based on the i686 architecture, whereas ones without are x86_64 based. Since I have an x86_64 desktop pc at home it would have been slightly more convenient not to have to download two separate Fedora versions. The upside of this one is that I think it has slightly better battery life. In fact, battery life seems amazing.

n-Bit Gray codes in Haskell

A first step in what will become a combinatorics library

I have been playing around with Gray's reflected binary code (aka Gray codes) and similar things a bit. Before I reveal why I'm doing this, let's just dive in and write some code. Gray's algorithm is described well here. The code which follows is in haskell, because it's a really fantastic language and I'm playing around with it at the moment. For scala fans, don't worry: I haven't abandoned scala, this is a parallel effort.

So for starters we need a datatype for representing these things. This is how you define an algebraic datatype in haskell. In what follows, lines beginning "--" are single-line comments.

-- | 'Bit' is the datatype for representing a bit in a gray code.  
data Bit = Zero | One deriving Show

Alright. So we have a type "Bit" with two constructors Zero and One and a "deriving Show" which means haskell figures out how to turn it into a string. This is useful when you're in ghci (the interactive haskell environment) debugging.

-- prepend a given item onto each of a list of lists (probably something to do this in the prelude)
prepend :: a -> [[a]] -> [[a]]
prepend t xs = map (t:) xs

A teeny helper function. Given a list of lists and a thing it sticks the thing on the front of each list in the outer list. This would append the thing on the end of each list:

append :: a -> [[a]] -> [[a]]
append t xs = map (++[t]) xs

Note I'm writing the type signatures explicitly but there's absolutely no problem if you leave them off. So let's generate our Gray codes:

-- | 'gray' generates the gray code sequence of length 'n'
gray :: Int -> [[Bit]]
gray 1 = [ [Zero], [One] ]
gray n = prepend Zero (gray (n-1)) ++ prepend One (descGray (n-1))

-- | 'descGray' generates the reversed gray code sequence of length 'n'
descGray :: Int -> [[Bit]]
descGray 1 = [ [One], [Zero] ]
descGray n = prepend One (gray (n-1)) ++ prepend Zero (descGray (n-1))

So we get an ascending and a descending one for free. Since the descending one is just the ascending one in reverse why (you might say) don't I just define descGray as descGray = reverse.gray ? Indeed, that may be a reasonable thing to do. I'm doing it this way to try to preserve as much laziness as possible, and (although my haskell-fu is still very weak at the moment) I think that if you reverse a list you pretty much have to evaluate each thing in the list. If you read the paper you'll see that this is Gray's (naive) algorithm and there has been an astonishing amount of research in this area leading to more efficient algorithms. I'll give those a crack at some point.
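As a sanity check, here's the whole thing again as a standalone snippet (with Eq added to Bit's deriving clause so codewords can be compared), verifying the defining Gray property that consecutive codewords differ in exactly one bit:

```haskell
-- The definitions from above, copied here so this snippet stands alone.
-- Eq is added to Bit's deriving clause so we can compare codewords.
data Bit = Zero | One deriving (Show, Eq)

prepend :: a -> [[a]] -> [[a]]
prepend t = map (t:)

gray, descGray :: Int -> [[Bit]]
gray 1 = [[Zero], [One]]
gray n = prepend Zero (gray (n-1)) ++ prepend One (descGray (n-1))
descGray 1 = [[One], [Zero]]
descGray n = prepend One (gray (n-1)) ++ prepend Zero (descGray (n-1))

-- number of positions where two codewords differ
hamming :: [Bit] -> [Bit] -> Int
hamming xs ys = length (filter id (zipWith (/=) xs ys))

main :: IO ()
main = do
  -- the classic 2-bit sequence: 00, 01, 11, 10
  print (gray 2)
  -- every adjacent pair in gray 4 differs in exactly one bit
  let g = gray 4
  print (all (== 1) (zipWith hamming g (tail g)))
```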

Why am I doing this? You'll see. This is at the heart of building a really cool combinatorics library. I needed something that could enumerate all combinations and permutations of various generic distribution-type things. There are similar but more recent orderings that are comparable to gray codes which I'm also looking into. They'll all be presented here in due course.

Which language would you use?

It depends

I got a mail a few days ago about how I wrote a couple of articles with code for an options pricer in ocaml, but all my latest articles were about scala. So which language would I use today if I were to write an options pricer? The answer of course is "it depends".

Now in and of themselves both ocaml and scala are fine choices for writing just about anything. But there are tradeoffs. If I was in charge of development of a brand new pricing and risk infrastructure for a big bank that had to be able to price everything from a stock to a digital multi-asset multi-barrier callable bermudan range-accrual thingummybobber, and that was going to be worked on by a thousand people ranging from the lowliest intern to the most brilliant genius, then I would have no hesitation in saying I would use scala.

In fact, a friend of mine who is in charge of the development of risk and pricing systems at a major wall st firm told me that if he were building this infrastructure for a bank from scratch today, he would use scala. He's the guy who persuaded me to try scala in the first place, as a matter of fact.

The reasons this would be a fine choice should be obvious: it's very simple to write serious software in scala, it shouldn't take anyone of reasonable ability much time at all to learn, the syntax is not overly burdensome and tedious, performance is adequate or better for most things, the concurrency paradigm is tractable by normal human beings, and there is fantastic library support because you can just use java stuff. The extensible syntax would help for various things, and the cross-platform support is always a nice thing to have if you want number-crunching on a Linux compute farm and desktop apps on Windows or whatever.

On the other hand if I was setting up a small quant trading shop/hedge fund or doing it for my own benefit then the choices are much more varied. I might use scala still (it would still be an excellent choice), I might do it in ocaml (or even Haskell in fact), particularly if I was going to be all or most of the programming myself or I had access to recruit a decent pipeline of smart functional-programming aware people.

The benefits of doing it in ocaml (or Haskell) would be that I would probably have a more mentally-stimulating time doing it (this can be an important motivation, especially if you have a super-bright team), and would probably end up with something more aesthetically pleasing from a pure comp-sci point of view.

On the other hand, I would certainly have more frustrations (eg why has no one noticed that you can only do one request through the ocaml curl library because there is a memory scribble? But I digress). I wouldn't really want to lead a group of 100 guys and have to keep teaching haskell monad combinators or whatever every time a new person joined. And maintaining, code reviewing and so on could become excruciating when dealing with people of average ability less one or two standard deviations.

So horses for courses. Ultimately writing good software always requires thought, discipline and some skill. The right language fits the problem domain and suits the group and organization. Good programmers can learn to write good software in any language.

Website Changes

An occasionally-updated summary of stuff I've changed

So when I had a day off work this week I decided to fix a few things on this site that have been bugging me for a while:

  • most of the content is dynamic, yet the sitemap (used by search engines to figure out how to slurp your site) was not
  • I never bothered to put proper meta description tags on pages
  • now that there are quite a few articles, things were getting lost in painful navigation
  • The site didn't do gzip encoding
  • I've finally added comments thanks to intense debate

So I've had a go at fixing them. I now generate my sitemaps automatically using the code I wrote to generate the atom and rss feeds, transparently gzip stuff if your client can accept that, and there is a new "treeview" page on the side to make things easier to get to. Oh, and the pages have proper description tags which should make them more useful on search engines.

Hope this all helps.

Adding comments

So I've thought about doing this for some time but never had the spare time. Finally I add comments.

I have added comments for the first time. I don't expect there will be a great deal of traffic, and at the moment, just to get started, I have moderation on. If things seem reasonably calm and the wingnuts stay away, I'll turn moderation off.

I'm using the comment system from intense debate, which has been fantastically easy to set up and use so far. Let me know what you think.


Unless otherwise specified the contents of this page are copyright © 2011 Sean Hunter and are released under a creative commons attribution 2.5 license.