2017-07-13 09:02 am

Library review: Cowboy, Sumo Rest

The other end of the Erlang experiment was the libraries I was working with. My application was built on the Sumo Rest stack, which sits on top of the Erlang Cowboy HTTP server.

I found this finicky and prone to vague runtime errors. That's not important, though. The overall style of the system was pretty reasonable, and is probably a good way to write REST services in general.


The layout implications of this are useful and generic: one source file per type, where each type file defines its storage, serialization, and documentation; and one source file per route, where each route file has its own URL path, complete machine-processable documentation and metadata, and understands the standard failure cases. You just need the right high-level REST library in your language of choice to support this.
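
As a concrete illustration of that layout (not Sumo Rest itself, and not Erlang; just a minimal sketch in Python with invented type, route, and field names):

    # Minimal sketch of the one-file-per-type / one-file-per-route layout.
    # Everything here (Session, /sessions/{id}, the field names) is invented
    # for illustration; it is not the Sumo Rest API.
    import json
    from dataclasses import dataclass, asdict

    # --- types/session.py: the type, with its storage and serialization ---
    @dataclass
    class Session:
        id: str
        user: str

        def to_json(self) -> str:
            return json.dumps(asdict(self))

        @classmethod
        def from_json(cls, payload: str) -> "Session":
            return cls(**json.loads(payload))

    # --- routes/sessions.py: the route, with its path and failure cases ---
    PATH = "/sessions/{id}"

    def get(store, session_id):
        """GET /sessions/{id}: return the session, or a standard 404."""
        session = store.get(session_id)
        if session is None:
            return 404, json.dumps({"error": "not found"})
        return 200, session.to_json()

    if __name__ == "__main__":
        store = {"1": Session(id="1", user="alice")}
        print(get(store, "1"))   # (200, '{"id": "1", "user": "alice"}')
        print(get(store, "2"))   # (404, '{"error": "not found"}')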
2017-07-13 08:27 am

Programming language review: Erlang

What do you get if you combine Haskell's basic functional style, Lisp's type system and object representation, a unique concurrency system, and a very good runtime?


In short, Erlang looks like a good language, if a little dated; but I'd rather have good static checking and a robust library stack than an excellent runtime with okay libraries, especially if I can avoid peeking "under the hood".
2008-01-20 05:20 pm

Quick review of KDE4

There's a new release of the KDE desktop environment out, and in particular there's a test repository with packages for Kubuntu. It's apparently not just a point-zero release but a relatively raw one, and the KDE maintainers are pretty unapologetic about that: "KDE 3 is fully supported and even under reasonable development, but nobody's really using a 'beta' KDE 4 and we need field bug reports."

KDE has gone in the "shiny" direction I had been hoping some mainstream Linux desktop environment would go. The Compiz/Beryl/XGL/whatever X-over-OpenGL desktop effort has been pretty, but it's fundamentally a concept demo that's Different From What You're Using Now. So KDE 4 has translucent-window support, some neat abilities to dim windows that have active modal dialog boxes, and some of the Mac-ish desktop effects like a window switcher that shows the current state of every open window.

The flip side of this is, well, that a lot of "normal-user" functionality isn't entirely there. I can't find the Debian app menu (which tends to be more complete than KDE's); I can't successfully log out; I can't add keybindings for "switch to desktop #5"; I can't change any characteristic of the panel; adding applets to the panel is non-intuitive. KDE 4's version of the Konqueror Web browser won't import bookmarks from the KDE 3 Konqueror and seems to hit infinite loops reading LiveJournal that no other browser does.

Conclusion? It's definitely shiny, and if you're a bleeding-edge person and a KDE person it's probably worth playing with. But overall KDE 3 is a lot more intrinsically usable than KDE 4 at this point.
2007-01-08 10:33 am

Some days the penguin poops on you

My home laptop has a Broadcom 4318 wireless card. This accursed card seems to be generally problematic for Linux users everywhere; there's a bcm43xx native Linux driver that honestly just doesn't work very well, or you can try the ndiswrapper song-and-dance. To make things worse, I run native AMD64 Linux, so if I use ndiswrapper I need to cough up a 64-bit Windows driver for the card. But such things do exist on the Internet.

Things got bad when I upgraded to Ubuntu Edgy Eft; the bcm43xx driver outright hung with the provided 2.6.17 kernel, and the ndiswrapper userspace needed some hand-holding. Fixing ndiswrapper worked, briefly, until I added more memory. Now I have 2.6.17, 2.6.19, and 2.6.20 kernels. Something in the 2.6.20 kernel changed to break ndiswrapper and Ubuntu Flighty doesn't have a fix for it. 2.6.19 with ndiswrapper works, usually, but sometimes it doesn't and it's still somewhat prone to randomly freezing. On (32-bit) Windows it seems to work pretty reliably.

I can't figure out what causes the system to sometimes work and sometimes not, and I'm not really up for doing kernel-level debugging. All I know is that this morning the system would repeatedly go into la-la land, either locking up with a flashing caps-lock light before X came up or successfully starting but hanging after maybe five minutes. Power-cycling is the only answer, and doesn't really help anything. This happened one day last week too, but I spent a couple of hours working on IAP class slides yesterday and it was all fine.

Bleah.

Edit: More hunting around suggests that bcm43xx just isn't there yet, particularly on Broadcom 4318 cards. Both drivers have their share of current bugs, but some involve having more than 1 GB of RAM. Which I do now. And work experience of "driver/hardware loses high bits of address" is actually consistent with what I'm seeing. Workaround (successful for 5 minutes so far!): boot with mem=1024M.
2006-09-06 07:27 pm

Some potential computing projects

  • Install Lotus Notes Designer on the Windows side of my laptop. Use it to build a Notes database to track my status reports. (Visibly work-derived, if you're wondering "why on earth Notes".)
  • Write a Scheme(-like) interpreter/compiler. Probably in Haskell.
  • Pick up my project to build a call graph from an ELF object file.
  • In the "not even a little related to work" camp, write a railroad dispatching simulator.
  • Give up on these "projects"; just play games, that's all computers were ever really meant for.
2006-08-10 07:31 am

Gnucash 2.0 mini-review

I built Gnucash 2.0 from source (to run on my AMD64 Ubuntu Dapper laptop; there are a couple of i386 builds but no obvious AMD64 ones). It's all GTK2, which is marginally shiny. And while I appreciate that that was a fair bit of coding effort, and a lot of infrastructure got rewritten for it, that's about it. As a user, the graphs are slightly less pretty; faster to draw, but still very slow to collect all of the data from a multi-year file, and still with user-configurable number-of-pixel sizes (what was ever up with that?). Stock lot tracking is useful, but the system doesn't seem thrilled about trying to collect up a lot where transaction 1 is "buy" and transaction 2 is "sell, plus record the capital gain somewhere" (which I thought was the documented way to handle this in Gnucash 1.8).

I think the big visible disappointment is the budgeting code, though. There's a "new budget" option, which creates a new account window with columns for future months. So far so good. You can enter numbers into the columns manually. But the "take your best guess" button pops up a dialog box that doesn't seem to do anything. Documentation is scant-to-absent; the best writeup seems to be a wiki page that largely recounts past failed attempts and email battles.

End verdict: I wouldn't run out to upgrade...but if your Linux distro comes up with the new version (as presumably Ubuntu Edgy Eft will in October) I'd take it.
2006-02-15 09:27 am

Depressing mail stats

You have 71 unseen messages (555989 bytes), 71 total, in INBOX on PO10.MIT.EDU.
You have 1130 unseen messages (9592057 bytes), 1130 total, in INBOX.Spamscreen on PO10.MIT.EDU.
2006-02-10 11:45 am

Email simplification through gross hacking

I'm in the process of converting my two Debian unstable boxes (call-with-current-continuation, an AMD64 laptop, and watertown, an IA32 desktop) into Ubuntu Breezy Badger boxes. The goal of this is to stop running Debian unstable, so that I can stop reading multiple high-traffic mailing lists that are at this point largely irrelevant to my life, but which I feel I need to keep up on to know about the library breakage of the week.

2005-10-28 10:59 am

User Interface and Electronic Mail

Our new corporate masters use a well-known, largely Windows-oriented mail system (thankfully not Exchange, and the client does run under Linux). One consequence of this is that the email editor looks similar to Word or your favorite other font-aware GUI text editor. And a side consequence of this is that I'm actually happier to get HTML email...and it's hard to send good-looking text mail.

So I now understand the temptation of HTML mail: if everyone you correspond with lives in an HTML-aware mail world, then everyone's life is slightly prettier if everyone uses HTML. But I come from a world where not everyone does, so now there's the challenge (which the software doesn't help with at all) of making my GUIfied mail look good to text mail readers. I hope I'm succeeding, but this is a hard UI problem.

I think there are just too many options. Some people send their email in blue. Why blue? I'm not sure. The formatting options I want most are "monospace" and "italic", and sometimes "list"; if I want more than that I'll write a document in something else and attach it. These options would be pretty easy to port over to your slightly-formatted-to-text renderer. So then you're not foisting angry fruit salad on the world, and you've made both the HTML-reader and text-reader people happy. Just as soon as I get to hacking on this closed-source heavily-legacy mail software...
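
As a sketch of what that slightly-formatted-to-text renderer could look like (assuming Python's standard html.parser and only the monospace/italic/list subset above; nothing here is tied to the actual mail client):

    # Sketch only: render the "monospace", "italic", and "list" subset of
    # HTML mail to plain text.
    from html.parser import HTMLParser

    class MailTextRenderer(HTMLParser):
        MARKS = {"i": "/", "em": "/", "tt": "`", "code": "`"}

        def __init__(self):
            super().__init__()
            self.out = []

        def handle_starttag(self, tag, attrs):
            if tag in self.MARKS:
                self.out.append(self.MARKS[tag])
            elif tag == "li":
                self.out.append("\n  * ")
            elif tag in ("p", "br"):
                self.out.append("\n")

        def handle_endtag(self, tag):
            if tag in self.MARKS:
                self.out.append(self.MARKS[tag])
            elif tag in ("p", "ul", "ol"):
                self.out.append("\n")

        def handle_data(self, data):
            self.out.append(data)

        def text(self):
            return "".join(self.out).strip()

    renderer = MailTextRenderer()
    renderer.feed("<p>Run <tt>make install</tt>, <i>carefully</i>:</p>"
                  "<ul><li>as root</li><li>on a quiet system</li></ul>")
    print(renderer.text())
    # Run `make install`, /carefully/:
    #
    #   * as root
    #   * on a quiet system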
2005-10-13 04:50 pm

donut is dead; long live the donut

I came home last night to discover that we had no network at home, and that the hard drive and CD-ROM lights on trusty old donut were blinking steadily. I hoped a therapeutic reboot might help, but after it was powered down, hitting the power switch again only produced a brief flicker of the power LED, nothing more.

I was able to recover into the new-donut project pretty quickly, surprisingly. I needed to install a newer version of OpenWRT on to the WRT54G ("commodity MIPS little-endian Linux box with built-in 802.11g AP"), and it did what I needed in terms of getting basic NAT up for the house almost out-of-the-box.

The one remaining thing is getting the magic tunnel network back up. OpenWRT doesn't have the Linux ipip kernel driver, but it does have Openswan packages, so I'm trying to set up an ipsec tunnel instead. This feels like it will work once I get the routing issues hammered out, which involves remembering the magic I had set up on donut before it died...
2005-10-03 09:20 pm

Why I hate C++

I figured I'd try to write a to-do list manager as a random side project. Tasks should be hierarchical, tasks should have dependencies, data should be stored locally, and there should be a pretty GUI around the whole thing. This doesn't sound hard; it should be something that I can get a first pass knocked off in about a weekend. Great.

I settled on C++ as an implementation language (I want something type-safe, Debian has no Haskell GUI libraries packaged, I know C++, OO code in C sucks a lot, and I kind of like Gtk). So I sat down and wrote a pretty simple data store for it. Bottom-up design and all that. So now I have a module that I can commit a new data state to, and undo and redo. Oh yeah, and I wound up writing some basic code to do things like reference counting and callbacks when the state changed. Not hard or long, but it needed doing.

So now I start writing the GUI (using gtkmm) and I discover that I lose: to make a tree view of my hierarchical tasks, I either need to copy all of my data (eew) or write a custom TreeStore. There is no documentation for the latter. And the only way I can pass my store into the TreeView class is via a smart pointer...in the Glib reference-counting system...which is different from mine.

My options are now to tie my data store into Glib (yuck), figure out how to write the custom tree data store object in C and tie it in (more yuck), or figure out how to make my adaptor object Glib-refcounted (possible, I suppose, but TFM is pretty unhelpful). Or learn OCaml, or figure out how to compile and install gtk2hs. The last is increasingly sounding like a better use of my brain cells...
2005-09-18 09:46 pm

Things you didn't want to know

Puzzle generators of various sorts. For *nix, Windows, OSX. On Debian, apt-get install sgt-puzzles. Includes Sudoku-like, Paint-by-Numbers-like, and a number of others.
2005-09-14 11:27 am

The mail question

For quite a while I've been getting mail exclusively at MIT and reading it exclusively through Gnus. Gnus has a lot of nice features, and since it's entirely in elisp it's very customizable. On the other hand, since it's entirely in elisp, it's also kinda slow, especially doing things like sorting mail and doing statistical spam detection, and reading mail stored in AFS sucks over any kind of consumer Internet connection. Is it time to move on?

Goals. I'd like the mail-sorting to happen "offline"; I don't want to have to wait for all of my mail to get sorted in between deciding to read it and being able to. I'd like to keep being able to use Gnus. I need my mail sorted by mailing list, and need some sort of spam filtering. Something IMAP-backed is probably ideal. It needs to be backed up, which I'm not set up to do at home, and have good connectivity and uptime.

Option 1: Google Mail. All of the cool kids are doing it. It might have the right features. AFAIK it only has a Web interface; you can get at a flat mail store by POP but that's not so interesting. An @gmail.com address is pretty reputable. I don't know if this would give me the level of control I want, but it's there.

Option 2: Virtual private server. There are various shops that will rent you a fraction of a machine; I've been loosely eyeing Linode's $20/month plan for a while. If I did this, I could run whatever I wanted on it, including chooblog, SMTP and IMAP servers, and some sort of mail sorting system like procmail but less sucky. This clearly meets all of the requirements, but costs money and takes some effort to set up. It also requires me to come up with a domain name, and if I screw anything up on this it's my fault and my problem.

Option 3: Suck it up. Because the current system works okay, and I can identify the particular operations that are painful (starting Gnus, exiting mail.misc.spam), and aside from the speed and the minor sketchiness of keeping an @mit.edu address as an alum it does work. Since I keep mail in AFS I get Ops' quota and backup constraints (well, really, SIPB's) and not Network's.

Does gmail DWIW? Is Linode or some other similar provider reputable, and worth the $n per month? Is maintaining a public mail/Web server that tricky?
2005-08-13 10:25 am

Found in the closet

In digging around through things, I turned up a bunch of floppies for old DOS and Windows 3.1 games. They're probably completely useless now -- doing anything with them would require the relevant OS, hardware emulation, a floppy drive, and the disks actually being good -- but at the same time it's emotionally tricky to part with them. Sniff. In among there is Darklands (a medieval RPG that I never quite got into the plot on), A-Train (a train system simulator where the world grew around the trains; tricky), Unnatural Selection (breed monsters to battle other monsters), the original Master of Orion, and RoboSport (much like RoboRally but computerized and with less interesting terrain).

Probably the best thing would be to move all of the floppies on to a single CD. SIPB would be a great place to do this if it existed. watertown might be coerced into working too. I bet there's a Linux DOS emulator that could be told "yeah, um, just use this floppy image file", and at least one of them would work.
2005-07-04 09:49 am

MAF/TIGER Accuracy Improvement Project

6.170 introduced me to the Census Bureau's TIGER/Line data set, and I've been experimenting with it on and off for a couple of years now. The bike trip mapping plot has revealed several gaps in the data that it's not obvious I would have found if I wasn't trying to find routes using the data. But, for example, Farm Street in Dover is broken into two segments, with TLID 87283093 being a very short segment connecting the two labelled "Census 2000 collection block boundary not represented by existing physical feature" that happens to connect Farm Street to Farm Street. Eliot Street in Natick and Washington Street in Wellesley don't line up at the city line in spite of being the same road. That sort of thing.

In poking around at newer TIGER data, I discovered that there's a $200 million federal project to fix these sorts of inaccuracies. Which is great for people like me who use this data this way. But I'm sure the same data is available from commercial sources; it's probably not cheap, but, $200 million? Is TIGER really anything more than a data set used internally by the Census Bureau and by a small number of dedicated amateurs?

...this document discusses the scope of the project a little more. A large part of the project sounds like "redesign our internal database, it's 15 years old" more than "update the data", and also "make it possible for Census field agents to do their jobs and update the database without paper maps". And there's a requirement to support every type of address in the United States, not just the 90% or 99% case. Actually, this is a kind of interesting read if you're curious how the data got put together originally and why it has the problems it has. So the money is mostly sustaining this goofy constitutional requirement that we go around and count people every ten years; it feels a little more sensible now.
2005-06-29 10:41 pm

Things we have learned today

Google actually now has a published API for their Maps interface. Signing up appears to not be too evil. The "we might drop ads in someday" clause bothers me a little, but I suspect if they went forward with it they'd also block out other non-adful interfaces too.

CSS is a deep and subtle thing.

The updated bike page is here. Bonus points if you can send me a pointer to something that says how to get this looking the way I want, which is loosely "duplicating the Google Maps page layout". Right now the best I can think of is to use JavaScript to force the relevant div's height to the window's height minus the height of the top panel, but that feels rude. I just can't find something in CSS to say "this element's height can grow".
2005-06-12 03:04 pm

Google Maps + Biking

Combining the publicly available information about Google Maps, a copy of the Census Bureau's TIGER database imported into PostgreSQL, some Python hacking, and general navigational memory, I now have bike trips projected on to Google Maps.

The big problem right now is TIGER's view of the world being slightly different from Google's; if you zoom in a lot, you can see the points not lining up. Also, look at the bridge between Salem and Beverly in the Rockport trip, which follows TIGER's reality and not Google's. (I also found one bug in TIGER/Line 2003, which I should figure out how to report.) There are also limits (in some cases the Google Maps JavaScript hangs, I'm not sure when but trimming out points seems to help), and a report that this doesn't work under IE.
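
For the curious, the Python side is roughly shaped like the sketch below. The tiger_lines table and its columns are hypothetical stand-ins for however the TIGER/Line import actually landed in PostgreSQL, psycopg2 is just an assumed driver, and the output is simply a JavaScript array the map page could feed to a polyline.

    # Rough sketch of the TIGER -> Google Maps step; table, columns, and
    # driver are assumptions, not the real import.
    import json
    import psycopg2

    def trip_points(conn, tlids):
        """Fetch endpoint coordinates for a list of TIGER line IDs, in order."""
        points = []
        with conn.cursor() as cur:
            for tlid in tlids:
                cur.execute(
                    "SELECT frlat, frlong, tolat, tolong"
                    " FROM tiger_lines WHERE tlid = %s", (tlid,))
                frlat, frlong, tolat, tolong = cur.fetchone()
                points.append((frlat, frlong))
                points.append((tolat, tolong))
        return points

    def as_polyline_js(points):
        """Emit a JavaScript array of [lat, lng] pairs for the map page."""
        return "var tripPoints = %s;" % json.dumps(
            [[lat, lng] for lat, lng in points])

    if __name__ == "__main__":
        conn = psycopg2.connect("dbname=tiger")
        print(as_polyline_js(trip_points(conn, [87283093])))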
2005-06-07 09:06 am

The good, the bad, the donut

Debian sarge finally released, so I upgraded donut (now 10 years old!), our house router. The upgrade went reasonably smoothly, ignoring that donut is slow as all heck. I haven't thoroughly tested anything, but the only things it really does are forward packets and host chooblog...and the latter being down for a bit won't kill anyone.

Debian stable now has a pretty current pyblosxom, which looks like the same idea as blosxom but in Python and with some actual recent maintenance. ISTR a module that would help with comment spam. So my next project is to upgrade to that.

I was also hoping to switch from a home-built kernel (mmm, kernel-package) to a stock Debian kernel, since the only particularly arcane thing I need is the ipip module. Good news: the stock Debian kernels contain approximately every module under the sun. Bad news: donut's / partition is only 50 MB, which fills up pretty quickly if you're installing a kernel that's 40 MB unpacked. Eit.
2005-04-03 11:55 am

XML to LaTeX

One of the problems with documents in XML is that getting printed output out of them is vaguely irritating. The "standard" way is to use XSLT to convert your input to XSL:FO, and then use a tool like FOP or PassiveTeX to convert that to PS/PDF. The obvious problems are that the stylesheets out there aren't very good (printed DocBook is "just okay" aesthetically) and that the free formatting tools suck.

But TeX is widely available, free, and has a good formatting engine. Why does nobody use XSLT to convert XML to *TeX, and then format that? The impedance mismatch between XML's character set and LaTeX's is slightly irritating, but it's not that hard to work around, even without having EXSLT available. And then if you're familiar with both XML and LaTeX, you can probably copy-and-paste your way to happiness given a stylesheet (also true of XSL:FO, but more people know LaTeX than XSL:FO).
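
The character-set workaround is mostly a lookup table of LaTeX special characters. Here's a hedged sketch in Python rather than XSLT, just to show how small the problem is; the element names are made up, not any real schema.

    # Sketch: escape LaTeX specials in XML text and emit a couple of
    # made-up elements as LaTeX; a real stylesheet would do this in XSLT.
    import xml.etree.ElementTree as ET

    LATEX_SPECIALS = {
        "\\": r"\textbackslash{}", "&": r"\&", "%": r"\%", "$": r"\$",
        "#": r"\#", "_": r"\_", "{": r"\{", "}": r"\}",
        "~": r"\textasciitilde{}", "^": r"\textasciicircum{}",
    }

    def escape(text):
        return "".join(LATEX_SPECIALS.get(ch, ch) for ch in text or "")

    def to_latex(elem):
        # Hypothetical mapping: <emph> -> \emph{}, <para> -> paragraph.
        inner = escape(elem.text) + "".join(
            to_latex(child) + escape(child.tail) for child in elem)
        if elem.tag == "emph":
            return r"\emph{%s}" % inner
        if elem.tag == "para":
            return inner + "\n\n"
        return inner

    doc = ET.fromstring(
        "<doc><para>100% of C&amp;O trains use <emph>track_1</emph>.</para></doc>")
    print(to_latex(doc))
    # 100\% of C\&O trains use \emph{track\_1}.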

For a proof-of-concept, I did this for my resume (not that I'm job-hunting at all, it's just a simple XML document I have kicking around). It could use some formatting tweaks, but it came out pretty well. Certainly it proves that what I'm trying to do is eminently reasonable, and so it's sane to suggest to other people that if they have data they want to query and include in a LaTeX document that they should store the data in XML and then use XSLT to create document fragments rather than trying to write TeX macros to process the data directly.
2005-03-05 05:52 pm

Blog commenting

On my very old home machine, I run chooblog, which loosely chronicles the escapades of my model railroad. I'm running this using Blosxom, which has the useful attributes of (a) being actually free and (b) letting me post by editing local files. There are some plug-in Perl modules for this, including a "writeback" module that allows people to post comments.

The problem with this, of course, is comment spam. I'm sorry, but online poker games have nothing to do with N-scale model railroading. (And I'm glad I frobbed it to not display the submitted URL, though for different reasons.) Poking around doesn't suggest an obvious (technical) solution to the problem; I can probably tweak the Perl script to write inbound comments to separate files and then manually copy over the actual relevant file fragments if there are non-spammy comments, but this reeks of a lack of automation.
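
The write-to-separate-files idea could be semi-automated with something like the sketch below; the pending/approved directory layout and the .wb naming are assumptions about how a tweaked writeback plugin might spool comments, not how the actual Perl module stores them.

    # Sketch of a moderation pass over spooled comments.  The writebacks/
    # layout is an assumption, not blosxom's actual format.
    from pathlib import Path

    PENDING = Path("writebacks/pending")
    APPROVED = Path("writebacks/approved")

    def moderate():
        if not PENDING.is_dir():
            return
        APPROVED.mkdir(parents=True, exist_ok=True)
        for comment in sorted(PENDING.glob("*.wb")):
            print("-" * 60)
            text = comment.read_text(errors="replace")
            print(text)
            answer = input("Keep this comment? [y/N] ").strip().lower()
            if answer == "y":
                # Append to the post's approved writeback file.
                with (APPROVED / comment.name).open("a") as out:
                    out.write(text)
            comment.unlink()

    if __name__ == "__main__":
        moderate()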

But there's an actual social question here. If I add to my comment-submission page a prominent link to a comment policy, and the comment policy says something along the lines of "I reserve the right to delete comments not relevant to the subject material of this blog, and if you post clearly irrelevant ads, you agree to pay $500 per incident", is that enforceable? If I can find an actual company behind these online-poker sites, can I extract some money from them? Would I run afoul of rules like "no commercial use of MITnet"?