The other end of the Erlang experiment is the libraries I was working with. My application was built on the Sumo Rest stack, built on top of the Erlang Cowboy HTTP server.

I found this finicky and prone to vague runtime errors. That's not important, though. The overall style of the system was pretty reasonable, and is probably a good way to write REST services in general.


The layout implications of this are useful and generic: one source file per type, where each type file defines its storage, serialization, and documentation; and one source file per route, where each route file has its own URL path, complete machine-processable documentation and metadata, and understands the standard failure cases. You just need the right high-level REST library in your language of choice to support this.
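To make the layout idea concrete, here's a minimal Python sketch of the one-module-per-route style (all names here are hypothetical, not the actual Sumo Rest API): the route carries its own path, machine-readable documentation, and standard failure handling, while storage belongs to the matching per-type module.

```python
# Hypothetical sketch of "one source file per route": the route's URL path,
# documentation, and failure cases all live together in one place.

class UserRoute:
    path = "/users/{id}"
    doc = {"get": "Fetch a user by id; 404 if the id is unknown."}

    def __init__(self, store):
        self.store = store  # the matching per-type module owns storage

    def get(self, id):
        user = self.store.get(id)
        if user is None:
            return 404, {"error": "not found"}  # standard failure case
        return 200, user

store = {"42": {"name": "alice"}}
route = UserRoute(store)
print(route.get("42"))   # (200, {'name': 'alice'})
print(route.get("99"))   # (404, {'error': 'not found'})
```

The point isn't the ten lines of code; it's that a generic REST layer can walk modules shaped like this and derive routing, validation, and docs from them.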
What do you get if you combine Haskell's basic functional style, Lisp's type system and object representation, a unique concurrency system, and a very good runtime?


In short, Erlang looks like a good language, a little dated, but I'd rather have good static checking and a robust library stack than an excellent runtime with okay libraries, especially if I can avoid peeking "under the hood".
There's a new release of the KDE desktop environment out, and in particular there's a test repository with packages for Kubuntu. It's a point-zero release, and the KDE maintainers are pretty unapologetic about it being a relatively raw one: "KDE 3 is fully supported and even under reasonable development, but nobody's really using a 'beta' KDE 4 and we need field bug reports."

KDE has gone in the "shiny" direction I had been hoping some mainstream Linux desktop environment would. The Compiz/Beryl/XGL/whatever X-over-OpenGL desktop effort has been pretty but it fundamentally is a concept demo that's Different From What You're Using Now. So KDE 4 has translucent-window support, some neat abilities to dim windows that have active modal dialog boxes, and some of the Mac-ish desktop effects like a window switcher that shows the current state of every open window.

The flip side of this is, well, that a lot of "normal-user" functionality isn't entirely there. I can't find the Debian app menu (which tends to be more complete than KDE's); I can't successfully log out; I can't add keybindings for "switch to desktop #5"; I can't change any characteristic of the panel; adding applets to the panel is non-intuitive. KDE 4's version of the Konqueror Web browser won't import bookmarks from the KDE 3 Konqueror and seems to find infinite loops no other browser does reading Livejournal.

Conclusion? It's definitely shiny, and if you're a bleeding-edge person and a KDE person it's probably worth playing with. But overall KDE 3 is a lot more intrinsically usable than KDE 4 is at this point.
My home laptop has a Broadcom 4318 wireless card. This accursed card seems to be generally problematic for Linux users everywhere; there's a bcm43xx native Linux driver that honestly just doesn't work very well, or you can try the ndiswrapper song-and-dance. To make things worse, I run native AMD64 Linux, so if I use ndiswrapper I need to cough up a 64-bit Windows driver for the card. But such things do exist on the Internet.

Things got bad when I upgraded to Ubuntu Edgy Eft; the bcm43xx driver outright hung with the provided 2.6.17 kernel, and the ndiswrapper userspace needed some hand-holding. Fixing ndiswrapper worked, briefly, until I added more memory. Now I have 2.6.17, 2.6.19, and 2.6.20 kernels. Something in the 2.6.20 kernel changed to break ndiswrapper and Ubuntu Flighty doesn't have a fix for it. 2.6.19 with ndiswrapper works, usually, but sometimes it doesn't and it's still somewhat prone to randomly freezing. On (32-bit) Windows it seems to work pretty reliably.

I can't figure out what causes the system to sometimes work and sometimes not, and I'm not really up for doing kernel-level debugging. All I know is that this morning the system would repeatedly go into la-la land, either locking up with flashing caps lock light before X came up or successfully starting but hanging after maybe five minutes. Power-cycling is the only answer, and doesn't really help anything. This happened one day last week too, but I spent a couple of hours working on IAP class slides yesterday and it was all fine.


Edit: More hunting around suggests that bcm43xx just isn't there yet, particularly on Broadcom 4318 cards. Both drivers have their share of current bugs, but some involve having more than 1 GB of RAM. Which I do now. And work experience of "driver/hardware loses high bits of address" is actually consistent with what I'm seeing. Workaround (successful for 5 minutes so far!): boot with mem=1024M.
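For the record, the workaround is a single kernel parameter; in GRUB's /boot/grub/menu.lst the entry looks roughly like this (kernel version and root device here are just examples, not my actual setup):

```
title  Ubuntu, kernel 2.6.19 (bcm43xx DMA workaround)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.19 root=/dev/sda1 ro mem=1024M
initrd /boot/initrd.img-2.6.19
```

mem=1024M just tells the kernel to ignore everything above 1 GB, which is consistent with the "driver loses high address bits" theory.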
  • Install Lotus Notes Designer on the Windows side of my laptop. Use it to build a Notes database to track my status reports. (Visibly work-derived, if you're wondering "why on earth Notes".)
  • Write a Scheme(-like) interpreter/compiler. Probably in Haskell.
  • Pick up my project to build a call graph from an ELF object file.
  • In the "not even a little related to work" camp, write a railroad dispatching simulator.
  • Give up on these "projects"; just play games, that's all computers were ever really meant for.
I built Gnucash 2.0 from source (to run on my AMD64 Ubuntu Dapper laptop; there are a couple of i386 builds but no obvious AMD64 ones). It's all GTK2, which is marginally shiny. And while I appreciate that that was a fair bit of coding effort, and a lot of infrastructure got rewritten for it, that's about it. As a user, the graphs are slightly less pretty; faster to draw, but still very slow to collect all of the data from a multi-year file, and still with user-configurable number-of-pixel sizes (what was ever up with that?). Stock lot tracking is useful, but the system doesn't seem thrilled about trying to collect up a lot where transaction 1 is "buy" and transaction 2 is "sell, plus record the capital gain somewhere" (which I thought was the documented way to handle this in Gnucash 1.8).

I think the big visible disappointment is the budgeting code, though. There's a "new budget" option, which creates a new account window with columns for future months. So far so good. You can enter numbers into the columns manually. But the "take your best guess button" pops up a dialog box that seems to cause nothing to happen. Documentation is scant-to-absent; the best writeup seems to be a wiki page that largely recounts past failed attempts and email battles.

End verdict: I wouldn't run out to upgrade...but if your Linux distro comes up with the new version (as presumably Ubuntu Edgy Eft will in October) I'd take it.
You have 71 unseen messages (555989 bytes), 71 total, in INBOX on PO10.MIT.EDU.
You have 1130 unseen messages (9592057 bytes), 1130 total, in INBOX.Spamscreen on PO10.MIT.EDU.
I'm in the process of converting my two Debian unstable boxes (call-with-current-continuation, an AMD64 laptop, and watertown, an IA32 desktop) into Ubuntu Breezy Badger boxes. The goal of this is to stop running Debian unstable, so that I can stop reading multiple high-traffic mailing lists that are at this point largely irrelevant to my life but I feel like I need to keep up on to know about the library breakage of the week.

Our new corporate masters use a well-known largely Windows-oriented mail system (thankfully not Exchange, and the client does run under Linux). One consequence of this is that the email editor looks similar to Word or your favorite other font-aware GUI text editor. And a side consequence of this is that I'm actually happier to get HTML email...and it's hard to send good-looking text mail.

So I now understand the temptation of HTML mail: if everyone you correspond with lives in an HTML-aware mail world, then everyone's life is slightly prettier if everyone uses HTML. But I come from a world where not everyone does, so now there's the challenge (which the software doesn't help with at all) of making my GUIfied mail look good to text mail readers. I hope I'm succeeding, but this is a hard UI problem.

I think there are just too many options. Some people send their email in blue. Why blue? I'm not sure. The formatting options I want most are "monospace" and "italic", and sometimes "list", if I want more than that I'll write a document in something else and attach it. These options would be pretty easy to port over to your slightly-formatted-to-text renderer. So then you're not foisting angry fruit salad on the world, and you've made both the HTML-reader and text-reader people happy. Just as soon as I get to hacking on this closed-source heavily-legacy mail software...
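The "slightly-formatted-to-text renderer" I have in mind would be tiny; here's a rough Python sketch (a toy, not anything the real mail client does) that maps the few constructs I actually want onto the usual plain-text conventions:

```python
# Toy renderer: map monospace, italic, and lists onto text-mail conventions,
# and drop everything else, so text readers see something sensible.
import re

def html_to_text(html):
    text = html
    text = re.sub(r"</?i>", "/", text)             # italics -> /slashes/
    text = re.sub(r"</?(tt|code)>", "`", text)     # monospace -> `backticks`
    text = re.sub(r"<li>\s*", "  * ", text)        # list items -> bullets
    text = re.sub(r"</li>|</?[uo]l>", "\n", text)  # close items/lists -> newlines
    text = re.sub(r"<br\s*/?>", "\n", text)
    text = re.sub(r"<[^>]+>", "", text)            # strip any remaining tags
    return re.sub(r"\n{3,}", "\n\n", text).strip()

print(html_to_text("Try <tt>ls</tt> <i>now</i>:<ul><li>one</li><li>two</li></ul>"))
```

A handful of substitutions covers monospace, italic, and lists; anything fancier gets dropped on the floor, which is kind of the point.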
I came home last night to discover that we had no network at home, and that the hard drive and CD-ROM lights on trusty old donut were blinking steadily. I hoped a therapeutic reboot might help, but after being powered down hitting the power switch again only resulted in a brief flicker on the power LED, nothing more.

I was able to recover into the new-donut project pretty quickly, surprisingly. I needed to install a newer version of OpenWRT on to the WRT54G ("commodity MIPS little-endian Linux box with built-in 802.11g AP"), and it did what I needed in terms of getting basic NAT up for the house almost out-of-the-box.

The one remaining thing is getting the magic tunnel network back up. OpenWRT doesn't have the Linux ipip kernel driver, but it does have Openswan packages, so I'm trying to set up an ipsec tunnel instead. This feels like it will work once I get the routing issues hammered out, which involves remembering the magic I had set up on donut before it died...
I figured I'd try to write a to-do list manager as a random side project. Tasks should be hierarchical, tasks should have dependencies, data should be stored locally, should have a pretty GUI around the whole thing. This doesn't sound hard, it should be something that I can get a first pass knocked off in about a weekend. Great.

I settled on C++ as an implementation language (I want something type-safe, Debian has no Haskell GUI libraries packaged, I know C++, OO code in C sucks a lot, and I kind of like Gtk). So I sat down and wrote a pretty simple data store for it. Bottom-up design and all that. So now I have a module where I can commit a new data state to it, and undo and redo. Oh yeah, and I wound up writing some basic code to do things like reference counting and callbacks when the state changed. Not hard or long, but it needed doing.
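The data store's shape, in miniature (the real thing is C++; this is a Python sketch of the same idea, not the actual code): commit snapshots, keep undo/redo stacks, and fire change callbacks.

```python
# Sketch of the data-store design: commit new states, undo/redo over a pair
# of stacks, and notify listeners on every change.

class Store:
    def __init__(self, initial):
        self.state = initial
        self.undo_stack, self.redo_stack = [], []
        self.listeners = []

    def commit(self, new_state):
        self.undo_stack.append(self.state)
        self.redo_stack.clear()        # a fresh commit invalidates redo
        self._set(new_state)

    def undo(self):
        if self.undo_stack:
            self.redo_stack.append(self.state)
            self._set(self.undo_stack.pop())

    def redo(self):
        if self.redo_stack:
            self.undo_stack.append(self.state)
            self._set(self.redo_stack.pop())

    def _set(self, state):
        self.state = state
        for cb in self.listeners:      # change-notification callbacks
            cb(state)

s = Store({"tasks": []})
s.commit({"tasks": ["write GUI"]})
s.undo()
print(s.state)   # {'tasks': []}
s.redo()
print(s.state)   # {'tasks': ['write GUI']}
```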

So now I start writing the GUI (using gtkmm) and I discover that I lose: to make a tree view of my hierarchical tasks, I either need to copy all of my data (eew) or write a custom TreeStore. There is no documentation for the latter. And the only way I can pass my store into the TreeView class is via a smart pointer tied into the Glib reference-counting system...which is different from mine.

My options are now to tie my data store into Glib (yuck), figure out how to write the custom tree data store object in C and tie it in (more yuck), or figure out how to make my adaptor object Glib-refcounted (possible, I suppose, but TFM is pretty unhelpful). Or learn OCaml, or figure out how to compile and install gtk2hs. The last is increasingly sounding like a better use of my brain cells...
Puzzle generators of various sorts. For *nix, Windows, OSX. On Debian, apt-get install sgt-puzzles. Includes Sudoku-like, Paint-by-Numbers-like, and a number of others.
For quite a while I've been getting mail exclusively at MIT and reading it exclusively through Gnus. Gnus has a lot of nice features, and since it's entirely in elisp it's very customizable. On the other hand, since it's entirely in elisp, it's also kinda slow, especially doing things like sorting mail and doing statistical spam detection, and reading mail stored in AFS sucks over any kind of consumer Internet connection. Is it time to move on?

Goals. I'd like the mail-sorting to happen "offline"; I don't want to have to wait for all of my mail to get sorted in between deciding to read it and being able to. I'd like to keep being able to use Gnus. I need my mail sorted by mailing list, and need some sort of spam filtering. Something IMAP-backed is probably ideal. It needs to be backed up, which I'm not set up to do at home, and have good connectivity and uptime.
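The sorting rule I want is simple enough to sketch; here's a Python toy (assuming SpamAssassin-style X-Spam-Flag headers and my folder names, both of which are illustrative) that picks a folder for a message:

```python
# Sketch of the offline sorting I want: spam-flagged mail to its own folder,
# list mail filed by List-Id, everything else to INBOX.
import email

def target_folder(raw_message):
    msg = email.message_from_string(raw_message)
    if msg.get("X-Spam-Flag", "").upper() == "YES":
        return "INBOX.Spamscreen"
    list_id = msg.get("List-Id")
    if list_id:
        # "Foo list <foo-list.example.com>" -> folder "lists.foo-list"
        name = list_id.split("<")[-1].strip("> \t").split(".")[0]
        return "lists." + name
    return "INBOX"

print(target_folder("List-Id: Debian devel <debian-devel.lists.debian.org>\n\nhi"))
```

Whatever does this, the key is that it runs server-side when mail arrives, not in elisp when I sit down to read.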

Option 1: Google Mail. All of the cool kids are doing it. It might have the right features. AFAIK it only has a Web interface; you can get at a flat mail store by POP but that's not so interesting. An address is pretty reputable. I don't know if this would give me the level of control I want, but it's there.

Option 2: Virtual private server. There are various shops that will rent you a fraction of a machine; I've been loosely eyeing Linode's $20/month plan for a while. If I did this, I could run whatever I wanted on it, including chooblog, SMTP and IMAP servers, and some sort of mail sorting system like procmail but less sucky. This clearly meets all of the requirements, but costs money and takes some effort to set up. It also requires me to come up with a domain name, and if I screw anything up on this it's my fault and my problem.

Option 3: Suck it up. Because the current system works okay, and I can identify the particular operations that are painful (starting Gnus, exiting mail.misc.spam), and aside from the speed and the minor sketchiness of keeping an address as an alum it does work. Since I keep mail in AFS I get Ops' quota and backup constraints (well, really, SIPB's) and not Network's.

Does gmail DWIW? Is Linode or some other similar provider reputable, and worth the $n per month? Is maintaining a public mail/Web server that tricky?
In digging around through things, I turned up a bunch of floppies for old DOS and Windows 3.1 games. They're probably completely useless now -- doing anything with them would require the relevant OS, hardware emulation, a floppy drive, and the disks actually being good -- but at the same time it's emotionally tricky to part with them. Sniff. In among there is Darklands (a medieval RPG that I never quite got into the plot on), A-Train (a train system simulator where the world grew around the trains; tricky), Unnatural Selection (breed monsters to battle other monsters), the original Master of Orion, and RoboSport (much like RoboRally but computerized and with less interesting terrain).

Probably the best thing would be to move all of the floppies on to a single CD. SIPB would be a great place to do this if it existed. watertown might be coerced into working too. I bet there's a Linux DOS emulator that could be told "yeah, um, just use this floppy image file", and at least one of them would work.
6.170 introduced me to the Census Bureau's TIGER/Line data set, and I've been experimenting with it on and off for a couple of years now. The bike trip mapping plot has revealed several gaps in the data that I probably wouldn't have found had I not been trying to find routes using it. But, for example, Farm Street in Dover is broken into two segments, with TLID 87283093 being a very short segment connecting the two labelled "Census 2000 collection block boundary not represented by existing physical feature" that happens to connect Farm Street to Farm Street. Eliot Street in Natick and Washington Street in Wellesley don't line up at the city line in spite of being the same road. That sort of thing.

In poking around at newer TIGER data, I discovered that there's a $200 million federal project to fix these sorts of inaccuracies. Which is great for people like me who use this data this way. But I'm sure the same data is available from commercial sources; it's probably not cheap, but, $200 million? Is TIGER really anything more than a data set used internally by the Census Bureau and by a small number of dedicated amateurs?

...this document discusses the scope of the project a little more. A large part of the project sounds like "redesign our internal database, it's 15 years old" more than "update the data", and also "make it possible for Census field agents to do their jobs and update the database without paper maps". And there's a requirement to support every type of address in the United States, not just the 90% or 99% case. Actually, this is a kind of interesting read if you're curious how the data got put together originally and why it has the problems it has. So the money is mostly sustaining this goofy constitutional requirement that we go around and count people every ten years; it feels a little more sensible now.
Google actually now has a published API for their Maps interface. Signing up appears to not be too evil. The "we might drop ads in someday" clause bothers me a little, but I suspect if they went forward with it they'd also block out other non-adful interfaces too.

CSS is a deep and subtle thing.

The updated bike page is here. Bonus points if you can send me a pointer to something that says how to get this looking the way I want, which is loosely "duplicating the Google Maps page layout". Right now the best I can think of is to use JavaScript to force the relevant div's height to the window's height minus the height of the top panel, but that feels rude. I just can't find something in CSS to say "this element's height can grow".
Combining the publicly available information about Google Maps, a copy of the Census Bureau's TIGER database imported into PostgreSQL, some Python hacking, and general navigational memory, I now have bike trips projected on to Google Maps.

The big problem right now is TIGER's view of the world being slightly different from Google's; if you zoom in a lot, you can see the points not lining up. Also, look at the bridge between Salem and Beverly in the Rockport trip, which follows TIGER's reality and not Google's. (I also found one bug in TIGER/Line 2003, which I should figure out how to report.) There are also limits (in some cases the Google Maps JavaScript hangs, I'm not sure when but trimming out points seems to help), and a report that this doesn't work under IE.
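The Python glue is roughly this (a simplified sketch; GPolyline and GPoint match the Maps API of the day, and the trip data here is made up): pull a trip's points out of the TIGER-backed database and emit the JavaScript overlay call.

```python
# Sketch of the projection step: turn a list of TIGER (longitude, latitude)
# pairs into the Google Maps JavaScript that draws the trip.

def polyline_js(points):
    """points: list of (longitude, latitude) pairs pulled from TIGER."""
    coords = ", ".join("new GPoint(%f, %f)" % (lng, lat) for lng, lat in points)
    return "map.addOverlay(new GPolyline([%s]));" % coords

trip = [(-71.0603, 42.3583), (-70.8967, 42.5195)]
print(polyline_js(trip))
```

Trimming out points before emitting the polyline is just a matter of subsampling the list, which is presumably why it helps with the JavaScript hangs.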
Debian sarge finally released, so I upgraded donut (now 10 years old!), our house router. The upgrade went reasonably smoothly, ignoring that donut is slow as all heck. I haven't thoroughly tested anything, but the only things it really does are forward packets and host chooblog...and the latter being down for a bit won't kill anyone.

Debian stable now has a pretty current pyblosxom, which looks like the same idea as blosxom but in Python and with some actual recent maintenance. ISTR a module that would help with comment spam. So my next project is to upgrade to that.

I was also hoping to switch from a home-built kernel (mmm, kernel-package) to a stock Debian kernel, since the only particularly arcane thing I need is the ipip module. Good news: the stock Debian kernels contain approximately every module under the sun. Bad news: donut's / partition is only 50 MB, which fills up pretty quickly if you're installing a kernel that's 40 MB unpacked. Eit.
One of the problems with documents in XML is that getting print out of them is vaguely irritating. The "standard" way is to use XSLT to convert your input to XSL:FO, and then use a tool like FOP or PassiveTeX to convert that to PS/PDF. The obvious problems are that the stylesheets out there aren't very good (printed DocBook is "just okay" aesthetically) and that the free formatting tools suck.

But TeX is widely available, free, and has a good formatting engine. Why does nobody use XSLT to convert XML to *TeX, and then format that? The impedance mismatch between XML's character set and LaTeX's is slightly irritating, but it's not that hard to work around, even without having EXSLT available. And then if you're familiar with both XML and LaTeX, you can probably copy-and-paste your way to happiness given a stylesheet (also true of XSL:FO, but more people know LaTeX than XSL:FO).
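The impedance mismatch fits in about ten lines; here's a Python sketch of the escaping the stylesheet's text nodes need (the character list is the usual TeX specials; exact macro choices are mine):

```python
# XML text can contain characters that are markup in LaTeX; escape them
# character by character so replacements never get reprocessed.

LATEX_SPECIALS = {
    "\\": r"\textbackslash{}",
    "&": r"\&", "%": r"\%", "$": r"\$", "#": r"\#",
    "_": r"\_", "{": r"\{", "}": r"\}",
    "~": r"\textasciitilde{}", "^": r"\textasciicircum{}",
}

def latex_escape(text):
    return "".join(LATEX_SPECIALS.get(ch, ch) for ch in text)

print(latex_escape("50% of $10 & a_b"))   # 50\% of \$10 \& a\_b
```

Doing it per-character sidesteps the classic bug where escaping backslash first (or last) mangles the other replacements.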

For a proof-of-concept, I did this for my resume (not that I'm job-hunting at all, it's just a simple XML document I have kicking around). It could use some formatting tweaks, but it came out pretty well. Certainly it proves that what I'm trying to do is eminently reasonable, and so it's sane to suggest to other people that if they have data they want to query and include in a LaTeX document that they should store the data in XML and then use XSLT to create document fragments rather than trying to write TeX macros to process the data directly.
On my very old home machine, I run chooblog, which loosely chronicles the escapades of my model railroad. I'm running this using Blosxom, which has the useful attributes of (a) being actually free and (b) being able to post by editing local files. There are some plug-in Perl modules for this, including a "writeback" module that allows people to post comments.

The problem with this, of course, is comment spam. I'm sorry, but online poker games have nothing to do with N-scale model railroading. (And I'm glad I frobbed it to not display the submitted URL, though for different reasons.) Poking around doesn't suggest an obvious (technical) solution to the problem; I can probably tweak the Perl script to write inbound comments to separate files and then manually copy over the actual relevant file fragments if there are non-spammy comments, but this reeks of a lack of automation.
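What that tweak would look like, sketched in Python rather than the plugin's actual Perl (file names and layout here are hypothetical): comments land in a pending file, and an approve step moves them to the live writeback file once a human has looked.

```python
# Moderation sketch: submissions append to <entry>.pending; approval moves
# everything over to the live <entry>.wb file the renderer reads.
import os

def submit_comment(entry, comment, root="writebacks"):
    os.makedirs(root, exist_ok=True)
    with open(os.path.join(root, entry + ".pending"), "a") as f:
        f.write(comment.rstrip() + "\n-----\n")

def approve_all(entry, root="writebacks"):
    pending = os.path.join(root, entry + ".pending")
    live = os.path.join(root, entry + ".wb")
    with open(pending) as src, open(live, "a") as dst:
        dst.write(src.read())
    os.remove(pending)
```

Still reeks of a lack of automation, but at least the spam never hits the page.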

But there's an actual social question here. If I add to my comment-submission page a prominent link to a comment policy, and the comment policy says something along the lines of "I reserve the right to delete comments not relevant to the subject material of this blog, and if you post clearly irrelevant ads, you agree to pay $500 per incident", is that enforceable? If I can find an actual company behind these online-poker sites, can I extract some money from them? Would I run afoul of rules like "no commercial use of MITnet"?
That's right, I can read livejournal communities with an RSS reader. So this way my friends page doesn't get polluted with three copies of the same huge irrelevant ads. Yay using technology to solve social problems.
For no terribly good reason, I'm writing a 6.035 Espresso compiler. (I told obra last night, "I want a compiler to hack on.") I'm doing this in Python, since that's a fairly sane (if weakly-typed) OO language, and the various arcane things I can think to do with it are easy. There aren't obvious parser/scanner generators for it, so I'm hand-writing a scanner and an LL(1) parser.

Having gone through this exercise, I'm almost done (can't deal with function-call as statement or interesting things on the left-hand side of an assignment) in about six hours of work. This makes me wonder what the motivation for tools like lex and yacc is. For lex, regexps for many things are almost overkill; my scanner is a state machine that builds up words, checks to see whether they're keywords, and emits an appropriate token. This is a little simpler for being a Python generator function (so I can return multiple values via yield), but it feels like a smallish constant factor away from equivalent C code and about as much code as the lex description. And in all that's 300 lines of code; is the code-space savings worth the loss of debuggability you get using a tool?
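The shape of the scanner, boiled way down (the real one also handles comments, strings, and multi-character operators; the keyword set here is just a sample):

```python
# Hand-written scanner as a generator: walk the input, build up words and
# numbers, and yield (kind, text) tokens.

KEYWORDS = {"if", "else", "while", "return", "int", "boolean"}

def tokens(source):
    i = 0
    while i < len(source):
        ch = source[i]
        if ch.isspace():
            i += 1
        elif ch.isalpha() or ch == "_":
            start = i
            while i < len(source) and (source[i].isalnum() or source[i] == "_"):
                i += 1
            word = source[start:i]
            yield ("keyword" if word in KEYWORDS else "ident", word)
        elif ch.isdigit():
            start = i
            while i < len(source) and source[i].isdigit():
                i += 1
            yield ("int", source[start:i])
        else:
            yield ("op", ch)
            i += 1

print(list(tokens("if x1 < 10")))
# [('keyword', 'if'), ('ident', 'x1'), ('op', '<'), ('int', '10')]
```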

Pretty much the same thing can be said about yacc. Hand-writing the LL(1) parser was pretty easy. Maybe a table-based LR(k) parser has a smaller code size so it runs better on old machines? For your trouble you get zero debuggability, though possibly better performance than a recursive-descent parser. At MIT I used ANTLR, an LL(k) parser generator, but I don't think I get much beyond some automated munging of the grammar. My impression (not that I'm trying here) is that error recovery sucks in LR parsers and it's a little better in the LL world. yacc makes you munge your grammar less but you still need some work for things like operator precedence.
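Why the hand-written parser felt easy: operator precedence falls straight out of one function per grammar level. A toy recursive-descent expression parser in the same style (this is a standalone illustration, not a piece of the Espresso compiler):

```python
# Recursive-descent LL(1) parser for "+ - * /" expressions over a token list;
# precedence is encoded by the expr -> term -> factor call chain.

def parse(toks):
    toks = list(toks) + ["$"]        # end-of-input sentinel
    pos = [0]

    def peek():
        return toks[pos[0]]

    def next_tok():
        pos[0] += 1
        return toks[pos[0] - 1]

    def expr():                      # expr -> term (('+'|'-') term)*
        node = term()
        while peek() in ("+", "-"):
            node = (next_tok(), node, term())
        return node

    def term():                      # term -> factor (('*'|'/') factor)*
        node = factor()
        while peek() in ("*", "/"):
            node = (next_tok(), node, factor())
        return node

    def factor():                    # factor -> NUMBER | '(' expr ')'
        if peek() == "(":
            next_tok()
            node = expr()
            assert next_tok() == ")", "expected )"
            return node
        return int(next_tok())

    return expr()

print(parse("1 + 2 * 3".split()))   # ('+', 1, ('*', 2, 3))
```

One token of lookahead (peek) is all it needs, and when it breaks you debug it like any other Python code.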

So if these tools don't generate fast or understandable code, and interpreting their output/error messages involves understanding their theoretical basis, and they make code harder to debug, why do they get so much use?


Nov. 12th, 2004 08:39 am
For no spectacularly good reason, I'm trying to learn Dvorak. After two days of practice or so, my short-term memory has learned where most of the letters are, and any word that lives entirely on the home row I can type quickly. But I've realized that it's really all about learning patterns and relearning finger macros: S,' won't get me out of vi, cvs is incredibly inconvenient, but I've definitely learned how to type xml. (And while it's frustrating, I don't think typing at a third speed has ruined my actual productivity that much...)
ET had a CCG-style game entitled Cthulhu: The Mercifully Uncollectible Card Game. It had the important feature that you had two key stats, Sanity and Knowledge; you started with 50 SAN and died if you dropped to zero, but whenever you lost a SAN you gained a KNOW, and you could use your KNOW to play cards out of your hand.

This corresponds scarily well with my job. Like, I've confronted the horror that is W3C XML Schema: it doesn't disgust me any more, which is probably bad, but mostly because I've converted the SAN loss from reading the spec into knowledge. Which then brings us into the fine world of Lovecraftian W3C specs...

XML Schema. KNOW 6 / 5 SAN. Specification. An oily black cloud that validates XML documents.

WSDL. KNOW 3 / 3 SAN. Specification. A screaming headless specification with a thousand young. Gain 2 KNOW if WSDL and XML Schema are both revealed.

Infoset. KNOW 2 / 4 SAN. Specification. A terrible tree-shaped mound of square brackets that devours all it comes across. +2 to attack any other specification.

The one terrible problem, of course, is the analog of Lightning Bolt, in which every player gains a single momentary glimpse of the horror of the entire web services specification series...I can't decide whether this should be Semantic Web or WS-Unspeakable.
Samba is hard to set up. Especially when you're tripping over odd bugs in the Debian package thereof. At least it easily supports IP ACLs. But yay, house music server.

I coded up Ted's stupid solitaire game in about 200 lines of Haskell. Could be shorter.

I've had good luck installing OpenWRT on the Linksys WRT54G I bought a while ago. At this point I either need to configure IPSec or get IP-over-IP tunnelling on it, and figure out how to set up Shorewall, and then I can swap it in for donut. Scary.
Unlike much of the rest of the world, I seem to have had yesterday off. (Either that, or I'm about to go to work and get a rude surprise.) But I actually managed to get things done for once:

  • Laundry.

  • Zipcar'd to Furniture Store of DOOM! in Natick. It was impressive, and I'm glad I went, and I found a bed frame I liked (could, actually, be this, though their online catalog seems a little sparse), but didn't actually buy anything. (Do these things normally cost $600-1000?) Didn't catch a movie.

  • Since I had the car and was in the area, drove around Framingham kind of aimlessly for a bit, then rail-geeked around the commuter rail station. Maps show one line heading north and two south from Framingham, and somewhat surprisingly, they all seem to still be active.

  • Installed "new" hard drive I bought in June. Started the long and arduous (well, long, anyways) process of formatting it; exhaustive badblocks check looks like it'll take a couple of days.

  • Started poking at wireless router. Only useful if I can get a kernel onto it that supports IP-over-IP tunnelling (or perhaps IPSec, though I don't understand that terribly well). If I can get a shell on it, I'll see if the Linksys kernel supports it (unlikely), or if one of the prebuilt kits out there supports it (only vaguely likely), before trying to build my own thing. Handily, I already have a Decaf compiler in Haskell for it if I need that.

Mmm, toys

Oct. 4th, 2004 07:13 am
Went to µCenter yesterday. The take: a KVM cable to plug donut into Emily's KVM switch (so I can get rid of the 8-year-old crappy keyboard and the 8-year-old dying 14" CRT); a Logitech Extreme Pro™ joystick (which, very much to my surprise, Just Worked under Linux, reporting six axes and 12 buttons under jstest and Just Working in Flightgear); a space combat game, Terminus ("looked intriguing, $10, says 'Linux' on the box"; some configuration required on modern machines); and a Linksys WRT54G (mmm, new-donut project). I haven't tried playing with the wireless router (and calling it that is only justified once it's configured to be a proper router) yet.

watertown's syslogs think we lost power last night, with it being on at 1 AM, off at 1:20, and back on for sure at 1:28. The fan in the bedroom woke me up both when it went out and when it came back. Couldn't fall back asleep, finished reading fantasy trilogy I was working on, made it back to bed.

So this morning the network was down. donut was doing the thing recently it's been doing, which is being stuck on

GRUB Loading stage1.5...

GRUB Loading.....

(So, it found the boot sector, loaded GRUB's stage1, which loaded stage1.5, which got stuck loading stage2?) I kind of suspect this is the hard drive being picky, but then why would it find the boot block? At this point the system is old enough that I'd either want to consolidate it on to sol-draconi or go with the new-donut project ("build the house router into a Linux-based wireless AP").

watertown's subsequent reboot (to get ntp and zhm) wasn't entirely happy either. It booted fine, but the second monitor came up in 640x480 and off by a bit. So my background on the second screen was a nice picture of the lavender line, at a blue-over-purple signal. (The good news is, when I popped up an xterm, it was still the right physical size, so that bit of infrastructure definitely works.) Yay kicking the X server.

I might have found a reasonable answer to my ongoing search for a sane desktop environment. Right now, on Debian/unstable, I'm running Waimea as a window manager. It's a Blackbox derivative ("what isn't?") that looks like it's trying to capture the look of a simple window manager with "pretty" features like pseudotransparent window decorations and Xft font support. rxvt-unicode is an rxvt-based terminal emulator with pretty much the same functionality. The whole thing is a little rough around the edges, but promising: I can change keybindings for the window manager in a text file, but then I need to restart it, which takes a while, ruins my vt history, and is generally harsh for testing, for example. Still, I was able to set things up the way fvwm-themes works, mostly, including the funky xterm color scheme (backed by X resources).

I was able to build rxvt-unicode on an oldish Red Hat machine pretty painlessly, though without Xft support. Waimea just depends on too much stuff to usefully build from source. Installing it on Athena might be doable, though, and would help fix the pain I'm currently in.

(I wonder what it means that my backgrounds on both machines are from Ashmont station.)
{1} dmaze% units
2084 units, 71 prefixes, 32 nonlinear units

There's this program called units. It does everything. Well, it won't read your email. But you can do things like find out how big that "120 GB" hard drive really is:

You have: 120 gigabytes
You want: gibibytes
* 111.75871
/ 0.0089478485

What about cooking? Iron Chef Units can figure out how much butter you need:

You have: 2 tbsp butter
You want: stickbutter
* 0.25
/ 4

I hear they say, "a pint's a pound the world 'round." Is it?

You have: 1 pint water
You want: pound force
* 1.0431756
/ 0.95861142

Hmm. We should check to see if it's versed in the classical sciences, like, say, alchemy:

You have: lead
You want: gold
* 1.0519553
/ 0.95061071

Unfortunately, it doesn't seem to know about ISO standard units:

You have: 364.4 smoots
Unknown unit 'smoots'
Let's say I'm doing $INVOLVED_THING at work, which involves changes in a half-dozen widely used files. I don't want to check them in until I'm actually done, for various reasons. So I'm halfway done, and my boss comes over and says "there's this bug you should fix". The fix involves changes to the same files. What do you do?

Right now, my "solution" is to have two separate check-outs, one of which is "the head" and one of which is "my working copy". This scales poorly, though, especially if the fix to "the head" also turns out to be involved. I think the thing I want is a "cheap branch": I type something like vc push -r dzm-temp-branch and my version control system creates a "branch" for things that I've modified (maybe just on the local machine) and gets the head. I make my changes, then run vc pop -r dzm-temp-branch, which attempts to merge my changes in. I could live without version control on the "branch" (especially after I actually commit it to the mainline); I definitely don't want things like automated mail.

...I could do this, almost trivially, with a shell script and CVS, huh. Thought.
I finally have watertown up and running again, with connectivity to the outside world and everything. I was kind of cranky that Debian unstable didn't seem to have gotten any more usable in the past month. Gnucash's issue, for example, was that if the "save window position" option was on, it fervently believed it should move the main window off the bottom of the screen, which was the same problem as before. And LogJam seems to be working fine; I don't know why it didn't start before.

Verizon DSL, if it wasn't obvious, is a bad idea. We seem to have a "business" account, for reasons unknown. It uses PPPoE. Ick. And if the modem loses connectivity at all for any reason, when it comes back up we get renumbered. So now we're on our third external IP address in under a week. This is irritating for me.

Need to figure out wiring. Should look into conduit-type things more; Home Depot had PVC conduit for about $2/foot, which seems like a lot. The new-donut plan may be to replace it with an off-the-shelf "broadband router" or some such that runs Linux, and move the data on to a machine a step or two forward on Moore's Law, but CompUSA in Braintree didn't have the particular model the Slashdot article mentioned. They did have something with a proprietary 802.11 extension to get 35% more speed if you use their branded cards, so you could have only 48 times as much internal bandwidth before you hit the cable modem pipe to get your Web pages. For only twice the price I was expecting. Woot. Should feed model numbers into Froogle, and search terms into normal Google.
Remember back when we were freshmen? And there wasn't good network even in all of W20 yet, and so in places like the APO office there was a VT320 with a modem? The Athena dialup pool still exists. And it's great for a good hit of network, zephyr, and livejournal if you just happen to have a VT320 and a modem and phone service, but (still!) no real network.

RSS feeds?

Feb. 19th, 2004 12:07 pm
I finally got the nnrss backend in Gnus working, so I have something I can use to read RSS feeds. I only have a couple in there now (the one for the blog and Slashdot's); are there any I might be interested in? It looks like there are a couple listings but they're not too useful just because of sheer volume and it being hard to tell if a particular feed is actually worthwhile. I'm open to suggestions for things people like, or think I might like, or even just dumps of your .meetings file or equivalent.
Opened up the machine last night again and tried unplugging the CPU fan. This made it much quieter. Tonight: need to run to µCenter to get a new one. Note for future repair work: notice which direction fan is facing before removing it. Meanwhile, as an artifact of the assembly of the motherboard, the machine is memoryless, which should adequately stop prying housemates from trying to use it with no CPU fan and thus frying the processor.
I finally got around to doing the first bit of work necessary to make my desktop machine, watertown, be functional again. I went ahead and opened up the machine, took out the power supply, broke the "never open this seal" seal, and took out the old (violates FAA noise regulations for aircraft in residential areas) fan. It turned out I had a fan of the right size lying around; after a quick Home Depot run and some application of wire cutters, wire nuts, and electrical tape, I have a functional, more normally noisy machine again.

Now I just need to remember what else it was that was wrong with this. The ergonomics aren't great but better than my laptop. The two screens are nice, but I should find a window manager that deals (again) or switch to Xinerama (with one 1600x1200 and one 1280x1024 display, though?). There's the generic "it's slow"ness, which I think is mostly the CPU (a 700 MHz Athlon), and the generic "not enough disk"ness: a 20 GB hard drive partitioned a bit too finely, with 8 GB for Windows 98, 10 GB for various chunks of Linux, and 2 GB wasted, some of that last bit to a bad spot on the disk. More memory would be nice, but 256 MB is adequate. So for a couple hundred bucks I could turn this back into a modern machine. But for a couple hundred bucks I could also get a new machine. I'll figure this out the next time I want to spend money.
Being a modern geek, I set up a blog for my model railroad stuff. Last night I got as far as getting the woodwork done, after some minor intermediate disasters. I need to spend more time with the crumbly pink foam to actually get to the point of thinking about laying track.

EDIT: Thanks to obra, this is now chooblog. (Well, that gets everything on that blog, anyways, though there's not much else there now.) We'll see if I can make the RSS behave.
I was going to update the firewall on my gateway machine at home to use iptables and set the source address on outgoing SMTP packets to come from the tunnel address, rather than the "normal" external address, so those packets don't get dropped by RCN. I was also going to put some energy into Debian things, and totally failed to. (Though it looks like some angry mail started getting exchanged between people not me over my packages, and the right answer might just be "go back to a year ago before everything got messed up". I wonder if it's easy to do that with cvs-buildpackage, and if it's easy to do that with Subversion.)

In positive news, I at least got to getting train stuff, finally, so now I'm exercising my meager woodworking skills. Yay rolling critical failures.
I now have a Python script that generates HTML with embedded Perl. Go me.

(In actuality it's a script that reads a QMTest results file, which aggressively requires Python, and generates a file that's read by RT, which uses the embedded Perl to get a list of tickets corresponding to a test failure. So it's useful and not totally looney. Well, okay, so it is totally looney, but still...)
I figured I'd upgrade the kernel on washington-street-elevated (my work desktop) to try to get around some NFS performance issues and maybe avoid the most recent local-root exploit. So from SIPB, remotely logged in, I used rpm to update the kernel RPM, and rebooted. Shockingly, it didn't work. I realized when I couldn't ping wse later that I had kind of forgotten the basics of Linux kernel updates ("rerun lilo, dumbass"); what I wound up with was the old kernel with no modules, which was pretty useless. It only took half an hour or so of fighting with that and the NVidia display driver module this morning to get everything back in working shape. But really, I should know better than to screw things up like this.
I have this computer, watertown. It's about three and a half years old now. Has a reasonable keyboard, two sufficiently nice CRT monitors. And I never really use it any more. Some of this is actual problem: the fan in the power supply is sufficiently noisy that I don't really want to wake anyone up with it, and it's kind of annoying to actually have running because of that. (And consequently it's not a good remote-login machine either.) But other things are just "the machine is old": I didn't think a 700 MHz Athlon would be too slow, but the desktop clearly underperforms my newer laptop (everett) on "things that take CPU power to render", like Gnucash (!) and Microsoft Train Simulator. Also, watertown's 20 GB disk, partitioned between Debian and Windows 98, just doesn't have the contiguous space anywhere for a reasonable Vorbis file collection.

Conclusion: Moore's Law hits, watertown takes ten damage in the usability department. This feels distressing to me, though I can't really describe why. For a few hundred dollars I could put in a new motherboard with a faster processor and extra storage, but is this worthwhile if I'll just use the laptop for everything?
I'd like a function, in any language I'm using, to run a child process and collect its output, but to also kill that process and all of its children after some timeout. This is okay to do in C, provided of course that you've immersed yourself in Advanced Programming in the Unix Environment, and in fact we have a program that runs a child and kills it after some timeout. Needed to hack it to set up its own process group to really get all of the children, but it works.

So now I want to do the same thing, but in Python. Except that the environment I'm running under uses threads. Uh-oh. And I can't set a signal handler not-in-the-main-thread, so a direct port fails. Oh, and running our C binary fails, too, mysteriously. (Could be that the top-level process doesn't intelligently handle SIGTERM.) I can actually set a timer as a thread pretty easily in Python, but calling os.kill() from the timeout handler doesn't seem to have an effect, or more appropriately, it kills the top-level process but not any of the children. Grr.
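For reference, the process-group trick from the C program ports to Python roughly like this. This is my own sketch, not the code from the post: it assumes a POSIX system and the modern subprocess module, puts the child in its own session with setsid, and signals the whole group from a timer thread (which sidesteps the signal-handler-in-the-main-thread restriction entirely):

```python
# Run a child, capture its output, and kill it *and all of its children*
# after a timeout, by making the child a session/process-group leader and
# signaling the whole group.  run_with_timeout is a made-up name.
import os
import signal
import subprocess
import threading

def run_with_timeout(argv, timeout):
    proc = subprocess.Popen(argv, stdout=subprocess.PIPE,
                            preexec_fn=os.setsid)  # new process group
    def killer():
        try:
            # Negative of the pgid via killpg nukes every child in the group.
            os.killpg(os.getpgid(proc.pid), signal.SIGKILL)
        except ProcessLookupError:
            pass  # child already exited on its own
    timer = threading.Timer(timeout, killer)
    timer.start()
    try:
        out, _ = proc.communicate()
    finally:
        timer.cancel()
    return proc.returncode, out
```

A SIGKILL to the group is blunt; a politer version would send SIGTERM first, which also matters if the top-level process wants to clean up after its own children.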
Last night I finished the main plot in Morrowind. In all, the game was well worth it for entertainment value; its biggest issue was stability, but when it crashed, it would crash cleanly, and restoring from your last quicksave was never an issue. The UI had minor issues too, but I've dealt with far worse. I thought it was very playable, and if you like fantasy RPGs and don't object to first-person interfaces, you'll probably enjoy it.

( Minor plot spoilers, game reflections )

Now I need a new distraction. I could put things I already have (MoO3, MSTS) on my laptop and play those. Or actually come up with a hobby that doesn't involve computers. Shocking, that concept. :-)
We're trying to figure out how fast some code is running on a Pentium 3, and we're using a magic instruction that will give us the cycle counter. Dividing number of cycles by number of loop iterations gives 1962 cycles per iteration. But this number seemed unreasonable to us, so we went and looked at the assembly output of gcc.

The loop has 3772 instructions in it. According to the P3 manual, floating-point instructions pipeline but never run in parallel with each other (except for the FXCH instruction). There are 1007 FXCH instructions, and another 66 instructions that aren't floating-point. Assuming that the scheduling is perfect (doing the first couple dozen instructions on paper, it isn't), that means the loop could run in, at an absolute minimum, 2699 cycles per iteration, which is much more than 1962. Something is wrong here...
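The back-of-the-envelope bound comes straight from the instruction counts, under the no-overlap assumption above:

```python
# Minimum cycles per iteration, assuming FP instructions never overlap each
# other, FXCH is effectively free, and the 66 non-FP instructions hide
# entirely behind the FP pipeline (the most generous possible schedule).
total_insns = 3772   # instructions in the loop body
fxch = 1007          # pipelines alongside FP work
non_fp = 66          # assumed fully hidden
fp_min_cycles = total_insns - fxch - non_fp
print(fp_min_cycles)  # 2699, well above the measured 1962
```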
Last night I rediscovered my ability to sit down and hack on something and actually get engrossed in it. This is something that's been missing at work for a while, so it was kind of nice to find it on a side project. The down side is, I didn't actually get to sleep until 1 or so, and I'm still pretending to be dayshifted...

But, regardless: the XML-to-print toolchain feels much saner than the SGML-to-print toolchain. Maybe it's because XSL-FO and XSL Transformations are actually, say, documented by someone (the W3C). It helps that XML is trendy right now, so there are multiple online tutorials for the tools. And as a consequence of this, there are *gasp* multiple implementations of the tools (mostly the Java path and the not-Java path, but both seem to work well for XML-to-PDF). And, handily, osx will convert SGML to XML, so you don't have to go through the pain of typing out XML if you happen to have a DTD for your document floating around.