TO DO: This section is out of date: we use venti and fossil now, although the principle is the same.
The machine runs a specialised kernel that does nothing but serve files over the network. Plan 9 provides an automated backup system known as the dump file system. The entire file store is kept on a WORM device (write-once platters in an optical jukebox) and its copy-on-write characteristic is turned to advantage: at 5am each morning the file server automatically saves a pointer to the root of the existing hierarchy, and creates a new root pointer for further updates, which are made copy-on-write (including directory updates). All updates between snapshots are made in a rewritable cache on magnetic disc, both because WORMs are slow and to avoid saving too much transient data; they are made permanent at the next dump. We know from the distribution of blocks on platters in our jukebox that quite a bit of the current file store is actually shared with data first written years ago. As well as the cache, the file server commits most of its memory to buffering data from its file systems. Thus, frequently accessed data is found in RAM, most active data is found in the magnetic disc cache, and the rest is fetched from the appropriate platter in the jukebox, all managed transparently and automatically by the file server kernel.
As well as the main file system, the file server exports a dump directory, typically mounted on /n/dump, containing names of the form yyyy/mmdd, each providing a snapshot of the entire file store on the given date. Amongst other things, this allows running commands such as
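As a sketch of what this looks like in practice (the dates and the user name glenda here are hypothetical, not taken from a real dump):

```
ls /n/dump                # one directory per year: 1998 1999 ...
ls /n/dump/1999           # one snapshot per day: 0101 0102 0103 ...
cat /n/dump/1999/0101/usr/glenda/lib/profile   # a file exactly as it was on that date
```

Each snapshot is an ordinary read-only directory tree, so all the usual tools work on it unchanged.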
diff -r /n/dump/1999/0101/sys/src /sys/src
to see what has changed in the sources since the start of the year, or yesterday(1):
yesterday -d .
to see whether a change I made in the current directory yesterday might explain a bug today. The history(1) command will display the history of the changes to a given file or directory across all dumps. Using Plan 9's bind primitive, parts of the dump can be bound over parts of the current hierarchy (eg, for regression testing).
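For instance, one plausible regression test (the date and the rebuild commands are illustrative assumptions, not a transcript) is to bind last year's compiler binaries over the current ones and rebuild:

```
# run the toolchain exactly as it was on 1 Jan 1999
bind /n/dump/1999/0101/386/bin /386/bin
mk clean all           # rebuild today's sources with the year-old compilers
unmount /386/bin       # restore the normal name space when done
```

Because bind affects only the current process's name space, the experiment is invisible to other users of the machine.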
We recently installed a new file server called molto, that uses an HP Surestore optical jukebox for the file system, with a new 18.2Gb disc to provide a cache for the WORM data and space for scratch files. Its fsconf(8) configuration data is:
config w6
filsys main cp(w6)50.50j(w3w4w5)(l<0-9>l<24-33>)
filsys dump o
filsys other p(w6)25.25
filsys news p(w6)0.25
which puts the cache on the last half of the 18.2Gb SCSI disc (w6, SCSI target 6), and describes the jukebox as having changer w3 and drives w4 and w5; the file system is written on the first side of platters 0 to 9 (slots 1 to 10 of a 24-slot device), and then, when all those are full, on the flip side of the same platters (represented by numbers 24 and up on a 24-slot device). This arrangement saves excessive flipping of platters. Each surface of each platter stores 2.6Gbytes. The other file system is used for swap (paging) files (rarely touched by Plan 9) and other large temporary files that would simply waste space on the WORM. Otherwise we just add space when the file server fills up.