Thursday, July 17, 2008
This is the 40th post, and the last post here. I have moved:
http://tunginobi.spheredev.org/site/
This place sucks. It mangles up my code, the whitespace between paragraphs is screwed up 3/4 of the time, and the site doesn't automatically log me in. So long, I'll see you on the other side.
Wednesday, July 9, 2008
Git for the lazy
I just wrote a guide for git, for people who don't know what a distributed version control system is, and don't care.
git for the lazy
That will take you from zero to hero... well, zero to something, fast.
On the side, I also gave Mercurial a shot, since I wanted to use it for some *shudder* Windows development. From what I saw, it lacked the two things I liked about git:
- Local, non-cloned repo "proper" branches (in progress though), and
- an index cache/staging area (I heard it has something similar, but I haven't found it anywhere).
So when it came down to git and Cygwin versus Mercurial and Windows' cmd.exe, the choice was somewhat one-sided.
If I'm mistaken about either of those features, and anybody wants to fill me in, you're welcome to comment. :)
::EDIT::
However, I do like Mercurial's local revision numbers. They probably work better for Mercurial than they would for git, though, since Mercurial's branches are essentially repository clones.
Sunday, June 29, 2008
Non-blocking sockets and Linux
Hi.
I just got through mucking around with system calls under Linux to make the network subsystem of the Sphere RPG engine work. Very painful experience.
Anyway, I fixed it, almost totally rewrote it, and I noticed a few issues with the old code.
First, reuse of a port on a machine was a pain, because my listening ports weren't using SO_REUSEADDR (which can be set with setsockopt at the SOL_SOCKET level). From what I read online, ports relinquish themselves after about a minute. Not fun when you're debugging network apps.
It didn't help that somebody conveniently forgot to convert the port number to network byte order when packing the listening port into the address structure.
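For illustration, here's a minimal sketch of both fixes in one place: a plain C TCP listener that sets SO_REUSEADDR and remembers the htons() conversion. The make_listener helper and its error handling are my own made-up example, not Sphere's actual code.
/* Hypothetical helper: create a TCP socket listening on `port`.
   SO_REUSEADDR lets the port be rebound immediately after a restart,
   and htons() does the byte-order conversion that was forgotten. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(port);  /* network byte order! */
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 8) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}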
Then, there was an issue with sockets being blocking. The Sphere network API is meant to be asynchronous, so obviously that was a big problem. But it's what followed that really caused problems.
The routine for checking the state of a given socket wasn't working at all. The code wasn't terrible, it's just that it didn't do the job it advertised. You'd think it'd be simple: checking if a socket is connected or not.
What. The. Hell.
Sockets are fairly well documented, you can find a lot of info about them by entering even vaguely related terms into Google. Non-blocking sockets are a whole different ball game. It took me days of debugging and trawling through Google to find links that were even remotely helpful.
A guy who was looking at the same code had his father help with a stop-gap solution that worked under Mac OS X. Polling the socket was a step in the right direction, but sockets under Linux must act a fair bit differently, because the code he got didn't work for me in Linux.
I spent most of yesterday reading pages of returned event flags from the poll() system call in an effort to find out how to check if a socket was connected or not.
The solution? When a peer closes their end of a connection, your socket receives a POLLIN event. An attempt to read() or recv() will then return zero.
The link that changed it all: I was operating under the false assumption that a disconnecting peer would raise a POLLHUP or POLLERR event. How wrong I was.
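To make that concrete, here's a rough sketch of the kind of check this leads to, written from memory rather than lifted from Sphere: check_connection is a made-up name, and it assumes fd is already a connected, non-blocking socket.
/* Returns 1 if the socket looks connected, 0 if the peer hung up,
   and -1 on a real error. MSG_PEEK avoids consuming pending data. */
#include <errno.h>
#include <poll.h>
#include <sys/socket.h>
int check_connection(int fd)
{
    struct pollfd pfd;
    pfd.fd = fd;
    pfd.events = POLLIN;
    int n = poll(&pfd, 1, 0);  /* 0 ms timeout: just sample the state */
    if (n < 0)
        return -1;             /* poll() itself failed */
    if (n == 0)
        return 1;              /* no events: still connected, no data */
    if (pfd.revents & POLLIN) {
        char c;
        ssize_t r = recv(fd, &c, 1, MSG_PEEK);
        if (r == 0)
            return 0;          /* orderly shutdown by the peer */
        if (r < 0 && errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;         /* a real error, not just "would block" */
    }
    return 1;                  /* data is waiting: definitely connected */
}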
I also treated -1 as an error only when errno wasn't EAGAIN, since these are non-blocking sockets we're talking about. Tests seem to show that everything is working. It even works for that guy using Mac OS X.
I need some tea.
Saturday, June 28, 2008
LispForum
I used to dip in and out of comp.lang.lisp all the time. Had it piped to my email account and everything. But that damn place is constantly the target of spammers, and even Google seems to lack the ability to filter it. So I gave up on it.
www.lispforum.com
This LispForum could give Common Lisp the web presence it needs. Plus it's one of the few Lisp sites that doesn't look like it comes out of the 1990s (yeah, even with its stock template design).
Monday, June 23, 2008
Gmail and aliases
Hooray for email. This is what just happened to me:
- Received an email at my uni account.
- The email was forwarded to my personal Gmail account.
- I replied to the email.
- I discovered that the reply was sent from the Gmail address, not the uni address. Crap.
I was going to rant on about how retarded this was here, but I decided to do some digging instead. Surely somebody else online had this same problem. And surely enough, here it is:
"Reply from same address" enhancement requested
I already had "reply from same address" set, so I looked at the header of my received mail: lo and behold, an aliased domain.
facepalm.jpg
If your mail goes to something like:
bob@mail.ukelele.com
... and you get mail from an alias, like this:
bob@ukelele.com
... you're screwed. Until you figure out why the damn thing isn't working. Only took me half a year.
Friday, June 20, 2008
Firefox 3's file upload box
Rage. I feel it. I feel the rage.
My problem? Firefox 3's file upload box. What was once a text field for pasting in file paths, plus a button in case you wanted to browse, has changed into one big button. That's right: clicking on the file text field == button click.
First off, it's not a friggin' button. I click a text field, I expect text entry, which means all the stuff that comes with it: cutting, pasting, copying, deleting, modification, the works. No more uploading files with names so similar it's faster to just edit the file names, or tweak the paths.
Second, I can't paste files from my file manager into the box anymore. I'm forced to use the file dialog, which is still inadequate under Linux. I need thumbnails. I need them because my image file names are totally useless. They're useless because they're Unix time stamps. I have hundreds of these time stamped files, so I refuse to name them. Without useful file names they appear as tiny, useless icons under Firefox in Ubuntu, and as even more useless generic file icons with Firefox under Xubuntu. Which was fine so long as I could paste files in from my file manager. But I can't paste files in from the file manager when the text box acts like a button. Without the text box, I'm stuck with the crappy dialog instead. With no way around.
What compelled the developers to do this? Did they think that users wouldn't be able to click a button labeled "Upload", after having clicked such buttons the whole time they've been on a computer? Did they just not care about people who are faster on the keyboard than they are on the mouse? I'd be really interested in the motivation behind this decision.
Ways to fix this design flaw are also welcome, because the only remotely related thing I could find on Google about this was this guy's comments on Firefox Beta 3.
::UPDATE::
dscdood has found some pretty interesting links about this. Read the comments. Turns out this change has been the result of an issue that has been around for a long time.
I still think it's retarded though.
::DOUBLE UPDATE::
Oh, and if the path of your file is longer than the text display, good luck checking if it's right, because you can't select the text field and scroll left or right.
Wednesday, June 18, 2008
The spaces
After all these years, I got back to mucking around with GearHead and GearHead 2 again. Something that bugged me about those roguelikes was that they were developed on Windows: in itself, not a terrible thing.
But the problem is the difference in the nature of the terminals between Windows and Linux. Linux terminals are usually 80 columns by 24 rows; Windows uses 80 columns and 25 rows. That missing row causes Pascal's drawing routines to go loco-roco, making the games unplayable.
I wrote a shell script a while back to send terminal escape codes to resize the terminal under Linux to use 25 instead of 24 rows. I picked it up, and it still worked.
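The whole trick boils down to one xterm window-manipulation sequence, CSI 8 ; rows ; cols t. Here's the same idea as a tiny C program rather than my actual script, assuming an xterm-compatible terminal emulator (not all of them honour the sequence):
/* Ask an xterm-compatible terminal to resize to 25 rows by 80 columns.
   The sequence is ESC [ 8 ; <rows> ; <cols> t. */
#include <stdio.h>
int main(void)
{
    printf("\033[8;25;80t");
    fflush(stdout);  /* make sure the terminal sees it immediately */
    return 0;
}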
The problem is: spaces. My directories have spaces ZOMG!!1 There's a long explanation for that transgression, but long story short, I needed my home directory to be consistent.
I just spent the last hour or so updating that script to work, regardless of where the thing was launched from. Anybody who has attempted to get scripts into Linux desktop launchers will know what I'm talking about: those scripts are never launched from the right directory. Stacks of quoting and unquoting, remembering that single quotes don't allow variable substitution, and a bunch of other things.
Spaces are a pain.
Maybe I'll contribute the script back to the GearHead community once I've brought it up to quality standards. I made it this far, I may as well finish it.
Sunday, June 8, 2008
Ponk: A Pong clone
Two paddles, one ball. That's the name of the game when you're playing Ponk, a Pong clone made with SDL and Common Lisp.
Download Ponk source (120 KB ZIP archive)
Requirements:
- A Common Lisp implementation (I use SBCL)
- ASDF (SBCL comes with it)
- CL-SDL (SDL bindings for Common Lisp)
Launching: If you're using SBCL, just run the *nix shell script:
./ponk.sh
If you want to run it from within CL:
(asdf:oos 'asdf:load-op 'ponk)
(ponk:start)
You may have to switch to the game's directory to make the font load right (use (sb-posix:chdir "path/to/game/") with SBCL).
Controls: Cursor keys for Player 1, WASD for Player 2. Holding left and right will make the ball faster/slower when it hits your paddle. ESC will end the game. First player to reach 10 points will also end the game.
Saturday, June 7, 2008
MGS4 spoilers
Well, I won't go into details, but major spoilers for Metal Gear Solid 4: Guns of the Patriots have been leaked on the Internet, less than a week before its worldwide release. I didn't mean to look; they just kinda crossed my path. And they're not really "spoilers" to me anyway: I don't have a PlayStation 3 and I don't intend to get one anytime in the foreseeable future. But even with the game "spoiled", I can say one thing.
This game is going to be EPIC.
Thursday, June 5, 2008
The thing with Lisp
So, Lisp. Awesome language. It's not popular by any stretch of the imagination, and that's okay. Popularity for popularity's sake is a waste of time. But what is this lack of popularity telling the Lisp community? Lisp has great potential, but I don't think it's being met. Maybe there are problems with Lisp in this current day and age.
Syntax
Lisp looks different from other languages, but many newbies that write their Lisp critique (as all aspiring Lisp programmers inevitably do) think that "looking different" is the problem. It isn't. Syntax can be learned.
So what is the problem?
Lisp doesn't have a syntax. This makes it easy to manipulate (with Lisp macros), so why doesn't every language do this?
Syntax provides visual cues. In a city where all the buildings look the same and the roads are laid out in a grid, it's easy to get lost; there are no landmarks to go by.
S-expressions, the de-facto standard for representing expressions in Lisp, have been compared to XML on more than a few occasions. But consider this:
(task
  (name "Do things")
  (desc "It's important. " (em "Really important."))
Spot the mistake? Here it is again in XML:
<task>
  <name>Do things</name>
  <desc>It's important. <em>Really important.</em>
</task>
The second sample makes it easy to see the mistake: the missing </desc> stands out.
This doesn't mean that XML is necessarily better than S-expressions; its explicit closing tags just make it easier for humans to see the mistake.
Which brings me to my point: lack of editor support. Emacs and vim come with support for parenthesis matching. Outside of those editors from the 1980s, support is sporadic at best, so the whole missing "closing paren/tag" thing becomes a big issue.
Nobody has succeeded in adding syntax to Lisp yet, though Paul Graham's arc has it in small bits.
Lisp's lack of syntax is one of those strengths that's also a weakness. All I can hope for in the future is better editor support. And unfortunately, that requires a largish community. Guess we won't be seeing that for a while. Until then, I'm happy with Emacs.
Here's the minimum standard for code editors today:
- syntax highlighting
- automatic indentation
Once "parenthesis matching" joins those, problems with using S-expressions should vanish.
Lack of brevity
I recently completed a simple Pong game in Common Lisp (I'll post about it later). The code wasn't brief by any stretch of the imagination. To be fair, CL isn't known for brevity. I took out all the duplication that was immediately obvious, but I felt I could have done the same thing in about the same number of lines in C, with fewer characters per line. I'd have chosen Scheme, but libraries for that dialect of Lisp tend to be highly implementation-dependent. Do not want.
Writing maths was verbose. Explicitly defining a package seemed like a needless hassle. I had to consult Zach Beane's article, even though I'd done it before in the past, because I couldn't remember if the :use clause accepted a list or took a variable number of symbols (for the curious, it's the latter), plus the syntax for an ASDF system definition file. Samples for simple string processing seemed needlessly verbose, since common tasks like splitting strings seem to be missing from core CL. In a language where everything can be treated like a first-class citizen, strings feel like second-class citizens.
Perl is an interesting case study in syntax. I don't like Perl, so I don't know it too well, but I can appreciate certain aspects of it, and one of those aspects is string processing. Perl does for strings what ALGOL-style languages do for maths. Strings are first-class citizens in Perl, and you can tell. Perl-compatible regular expressions are one of the most important things to come out of the language.
Perl makes string processing brief, just like any language that supports infix maths makes numeric processing brief.
No matter how I tried, the maths in my Pong game looked ugly. Maybe it's my own ineptitude, but I really felt I could have done without the parens. Yeah, I know there's a CL library for infix math, and I know there's one for PCREs too. There's no doubt about it: syntax makes things shorter. (Well, the little stuff anyway. Lisp's powerful abstractions make growing bigger things shorter.)
One of arc's aims is to make things brief, which I like. My question is, how brief can you get with S-expressions until you hit a wall? I hope that arc's direction will let us find out.
Again, I'd have chosen Scheme for my Pong experience, but libraries, plus eschewing state variables and iteration constructs are big turn-offs.
Stuff like CL's loop macro is awesome. Similar macros for working with numbers and strings would totally solve this.
Plurality
This is a community issue, and it really weakens Lisp as a development environment. There are just too many choices. Common Lisp or Scheme? If Common Lisp, the CLiki page lists 23 choices, and none obviously stand over any of the others. If Scheme, Wikipedia lists 21 alternatives. If you're new to Lisp, how on Earth are you supposed to make an informed decision?
Making these first choices is just an entry barrier though: it becomes a non-issue once you're in. So it's not a problem. Or is it?
One of the things that Common Lisp seems to have over Scheme is that its libraries are developed to support multiple implementations of Common Lisp. Chicken Scheme has its own "eggs" system for libraries, PLT Scheme and its flavours have PLaneT, and it doesn't seem like they're interoperable.
That's just a specific instance of a more general issue: if you write a library, it only works for a small subset of the users. It's a big disincentive compared to, say, Python, where if you make a module for distribution, it's available to essentially everybody who can use Python. With the same amount of effort, you can reach out to a small subset of Lisp users, or the vast majority of Python users. Effort is divided, communities are divided, and it all leads to a lot of energy being poured out for little return.
The Lisp community is filled with smart, talented hackers. If Lisp were the one, single language, it should be some super language with enough libraries to run circles around even the most LOL ENTERPRISE READY languages. And yet it's not. Maybe there are enough Lisp libraries to run circles around everything. Maybe we're not seeing that because of the sheer amount of duplication going on from all this plurality.
This is one of those things that can't really be solved: once plurality is there, it'll always be there. You do see some exceptions. Linux is divided to all hell, but the domination of Ubuntu has visibly strengthened Linux as a whole.
My CL implementation of choice is Steel Bank Common Lisp, but it doesn't obviously stand over any of the other implementations: it's just open-source and damn fast. Anyway, you can't ask all but one CL implementation to just die off. It'd be equally dumb to tell CL or Scheme to kill themselves for the sake of the other. It'd take a miracle for one implementation to rise head-and-shoulders above all the others, because they're pretty much all mature and have reached their full potential.
So how can this be solved? A new Lisp. Yeah, I know there are a billion of those already, but it's the only way to draw away from the image of plurality, the confusion and the duplicated effort. That's not all there is to it, otherwise one of those new Lisps would have dominated, but only a Lisp that isn't Common Lisp or Scheme can hope to escape the black hole of plurality.
I'd go as far as saying that a new Lisp should not call itself a Lisp. It could be included as a footnote on its website, but it shouldn't be generally advertised as such. This point is purely a PR note for the express purpose of community building.
Again, I hold out hope for arc. It's still advertised as a Lisp dialect, but at least its name doesn't contain "Lisp". As I said before, one of its aims is to aid hackability, but I hope it will have another effect too: users of arc will be united under the one umbrella. Under one name, working towards writing stuff that all other arc users can use.
One single, canonical language with a single, canonical implementation would do universes of good.
Setup
This is how it should be: I type "lisp" into Google/a package manager. It pops up a single official website, or shows the official package as (one of) the first results. I download and install the archive/package. I launch "lisp" from a menu/terminal, and a REPL pops up. That much maps to the real world, sans the single, canonical implementation. Now I should be able to use my favourite text editor, write blah.lisp, and type lisp blah.lisp, and it'll run.
Here's where things get hairy. Some Common Lisp implementations can do this. There are probably Scheme implementations that also allow this. But if you're using any libraries, good luck. My Pong game can't be launched like this, since it uses ASDF to load SDL and SDL-ttf. I could put the library loading in the main Lisp file, but that's ugly duplication right there.
What if you really want to develop in Lisp the way it was meant to be used? Now it gets really hairy. No conventional text editor comes with the level of inter-process communication that Lisp needs for it to reach its true potential. The choices are Emacs (with SLIME), Eclipse *shudder* (with Cusp), or vim (with an experimental plugin named Limp). There's a fair amount of setup involved with all of those, with Cusp being the simplest, just involving copying files to the right places. SLIME requires adding some lines to your .emacs file, and getting the right values involves some digging around with your CL implementation of choice. Limp involves tweaking bits of the scripts themselves, far from optimal. (edit: Actually, the Limp defaults should Just Work. Open source moves fast.) With choices like those, the true Lisp development model is as substantial as a mirage on the horizon to most people.
So let's review: command-line invocation of Lisp is limited, and the true Lisp development model requires esoteric setup. There have been efforts to solve this, like Peter Seibel's Lispbox, but the lack of official backing from the groups behind Common Lisp means that it's just "an option" rather than the option, and so it remains fairly obscure. It's still not ideal: it really should be just an editor that hooks into the one CL implementation that's already there, which would be trivial if there were a single canonical CL implementation. It's a step in the right direction though.
Some setup to have the full Lisp experience is inevitable. There's not too much that can be done which isn't already being done, and I praise the efforts of the people behind SLIME, Limp and Cusp.
What about for the other case, running Lisp from the command line? That could definitely see improvement, and it leads to the next section.
Over-abstraction
Abstraction is not free. You always pay some cost for using a "function" (in the programming sense of the word), the basic unit of procedural abstraction, for instance. Abstracting the lower levels has an inevitable hit to application performance. In exchange, we get solutions that are easier to understand.
But can it go on forever? Does abstraction, applied continuously, make things ever easier to understand? Is it possible to be too abstract?
Yes, and in fact, it occurs more often than you'd think. In primary school, you're taught about numbers. That numbers have to be taught says something about numbers themselves. Numbers are abstractions for the expression of quantities: how much "stuff" do we have? Numbers are an abstraction, and they have to be taught, so at some point in all of our lives, we didn't know what numbers were. Once we poured in time and effort (or were forced into it), we understood them and built on top of them.
There's a point in mathematics where it just gets too hard for mere mortals. If I were given a non-trivial integral calculus problem today, I'd probably stare blankly at it. You can learn and learn and learn, but eventually the costs outweigh the benefits.
What has this got to do with Lisp? Well, Lisp is old, and it shows.
One example is the pathname abstraction in Common Lisp. What. The. Hell. Making a path in Common Lisp involves a lot of parameters that are now totally obsolete. As Peter Seibel explains in Practical Common Lisp, the pathname abstraction comes from an age where the Unix-style directory tree wasn't the dominant data storage structure. That it still exists is purely a legacy detail. Today, the simplest path abstraction is the string, with components separated by either forward or back slashes. A step above that is the URI, where protocol prefixes like http://, ftp://, and file:// come in. That's about as complicated as it should be: a string. CL-FAD, a CL library for making file and directory access easier for the modern day, shouldn't really need to exist at all: it should be part of the language, provided as a module.
Another weird thing about Common Lisp is the whole separation of packages and ASDF systems. For the uninitiated, packages are part of CL itself, used for bunching together symbols (which in turn give access to functions and variables), whereas ASDF is the de facto system for loading libraries and your Lisp files in the right order, and for allowing your own software to be loaded as a library, much like make.
Both of these things have to be spelled out completely and explicitly. I mentioned that I had to consult Zach Beane's article to figure out how to correctly write package and system definitions; they look like this (shamelessly ripped):
(defpackage #:stumpgrinder
  (:use #:cl))
... and this:
(asdf:defsystem #:stumpgrinder
  :depends-on (#:cl-ppcre)
  :components ((:file "package")
               (:file "string"
                :depends-on ("package"))
               (:file "stumpgrinder"
                :depends-on ("package"
                             "string"))))
Some of you might be asking what the difference is between loading a Lisp file that's part of your project, and one in a library. And rightly so. Why does this abstraction exist? Python has shown that both of these concepts don't have to be more than an import statement in each of the relevant files.
Python has an unfair advantage. It was born in the age where the tree structure for files and directories was dominant, and it fully capitalised on that. The Lisp way of doing these things is too abstract for this day and age. Things have crystallised since the old days, and Lisp should have changed with them. I've seen some Scheme code samples that show this improvement, but with the plurality, the improvement is limited to whatever Scheme implementation that code was meant for.
Again, arc, as a new language, has an opportunity to re-introduce this simplicity to both pathnames and modules. I hope that these at least are looked at while arc grows.
Lack of change
This point is mostly from the perspective of Common Lisp, since Scheme is better at changing than CL.
Common Lisp was formed as a unification of a bunch of Lisps that were floating around back in the day. Lots of groups were involved in making this happen, so they all have a say in what direction the language should go in.
Fast forward to today. Recently, in comp.lang.lisp, somebody wanted to make the Common Lisp standard open for change. There was a lot of talk about this, but it ultimately led to nothing. The groups holding the copyrights had no intentions of pouring effort into something that would give them no benefit and burn lots of time.
Common Lisp hasn't changed for a long time. There's plenty of activity in the lower levels where people are making libraries for the community, but without the language itself changing, the whole thing is stagnating. Library-makers tend to stick with "safe" problems: the ones where people have an itch that needs scratching, like sockets, regular expressions, interfacing with relational database systems, and parsing XML and JSON. There's little to no action on changing the fundamental primitives of Common Lisp, the way people fundamentally code with it. Kenny Tilton has Cells, and Rob something-or-other has his lexicons, but those still sit outside the language itself.
Languages that don't change will eventually die. I don't believe that Lisp will die, but I do believe that Common Lisp will die, simply because it isn't changing. The community that's interested in it doesn't have full ownership of the language, and likely never will. It won't die without conferring its lessons to other, newer languages.
A new Lisp could escape this entirely. This is the age of new open-source languages, and once again, arc is one such language. arc itself has a community-maintained hotbed called anarki, which contains a bunch of experiments with extending the language in various ways. If such changes are good enough, Paul Graham may be inspired to incorporate similar features into the official arc implementation. This is a very good thing, and another reason that I think arc will succeed.
Common Lisp is beyond saving. Scheme will likely survive, but it won't make any progress while it's fragmented as much as it is at the moment.
Do it yourself
The Lisp community is filled with bright people. The entry barrier into using the language ensures this. They're familiar with hard and simple problems alike. But how do you decide if a problem is hard or simple? The answer is that it varies from person to person. It also varies from community to community. A community of smart people is far more likely to see a problem as simple than a not-so-bright one is. What's easy on average for a Lisp programmer may be non-trivial for the average C++ programmer.
So when somebody asks the Lisp community how to do such-and-such, the Lisp community is likely to tell them to do it themselves. They're not being mean. Often they even provide a few code samples to get the person started. It's just that these problems seem simple to them. Lisp makes it easy enough. To the Lisp community, easy enough is a fairly high bar. To mere mortals it may even be frighteningly high.
What does that mean in the greater scheme of things? It comes back to the libraries, in a couple of ways. In Common Lisp, CFFI can be used to load up and use C libraries, for instance. Hardly anybody makes bindings for C libraries because, hey, the tools you need to bind and load them are already there. Thus, it seems that library support is lacking, but as a matter of fact, it's seen as such a trivial exercise that it's hard to justify writing a library just to load things via an FFI. A non-obvious task is obvious to experts.
Such little things make a greater difference to the language. I used SDL bindings for my pong game in Common Lisp. Those bindings had already been written for me, which is a good example to go on. Common Lisp needs more of that sort of thing.
Another point concerns the lower-level primitives for using Common Lisp in general. A simple roll-it-yourself solution saves a seasoned Lisp programmer a few seconds. The same thing may take a new Lisp programmer hours to learn, after either finding it buried in documentation online or consulting an online group. The seasoned programmer doesn't have to save the utility: they'll either have their own locally-developed personal bank of such utilities, or they'll say "screw that" and rewrite it whenever they need it, because that's faster.
The fact that people are making these utilities should say something about Common Lisp: if seasoned programmers are using them, maybe other people could benefit too. I know of at least one Common Lisp system dedicated to the utilities of a particular programmer (I can't remember his name). It's mostly used as a dependency for a bunch of other libraries he provides. The fact that the system is named after him suggests there isn't much focus, so it'd be more useful if the utilities were split off into focused little libraries of their own.
Making utilities widely available saves an experienced programmer some time, and learning programmers lots of time. Everybody wins.
There's a social issue here: things seen as trivial to build may actually be more beneficial to share than they first appear. Telling people to solve their own problems, even if you provide help, doesn't improve the language. Sharing and distributing code does.
Given the inherent nature of Lisp communities, I don't think there's a simple solution to this. The best that can be done is to encourage lots of rapid change to the language itself. There was a lot of buzz around arc when it was released earlier this year. Things have slowed down a bit, but the community-maintained anarki is still very active, which is good to see.
Lisp communities need to lose the DIY attitude. They should still be open to solving problems for people, but rather than solving them and forgetting, a review process for changing and improving the language would recognise the problems that show up repeatedly, so they could be folded into future iterations of the language. Python, once again, presents itself as a good example, with PEPs providing an official channel for improving the language.
arc, for a Lisp community, seems much more open to changing the language for the better than Common Lisp's. Of course, it's hard for Common Lisp programmers to change the language as a whole, given the frozen standard and the plurality of parties involved. The arc forum uses a self-moderating system, so there's an incentive to be open and receptive to others. This means that more ideas come in. Couple this with arc's open-source nature, and it could well be onto a winning formula here.
Lisp for the new age
A language is more than its syntax. It's more than a standard, and it's more than its implementation. A programming language is a unique harmony of language semantics, implementation, community, benevolent dictator for life, and philosophy.
So what would the ideal Lisp be? It would have to do many of the following:
- Have no allergy to syntax.
- Live in a world where parenthesis matching is standard for text editors.
- Be brief.
- Have a single, canonical implementation.
- Allow for easy command-line invocation.
- Have plugins for editors outside of Emacs and vim that configure themselves automatically to the best of their ability, or an official editor that's more than just a tack-on text box.
- Not abstract beyond the point of simplicity.
- Be open to change.
- Accept that problems that seem easy may be hard for others, no matter how easy the language makes solving the problem.
A new Lisp may not even call itself a Lisp. There's a lot that Lisp could learn from today's scripting languages, just like many of today's scripting languages have learned so much from Lisp. Or maybe the lessons have already been learned. All we need now is an opportunity to demonstrate them.
There are a few other things I haven't covered above that would benefit a new Lisp:
- Be ideal for scripting, i.e. it should be a scripting language.
- Have a small core.
- Foster a good collection of libraries.
- Have "Batteries included": standard library collection should cover common problems.
arc has a lot of potential. It's not perfect, but it's young and open to change. There are only a few things that irk me about it at the moment: its dependence on MzScheme, inadequate error reporting, lack of modules/packages/whatever, and its current singular focus on web development.
But arc shows promise for the Lisp world, and it may just be the thing to bring Lisp into the 21st century. And if Paul Graham has anything to say about it, the 22nd too.
Edit: Mikael Janssen says that Limp, the Lisp plugin for vim, should actually work out of the box.
Monday, May 26, 2008
Alright!
git in the Ubuntu 8.04 repositories now supports interactive rebase! My job just got much, much easier. Thank you repo maintainers!
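For the record, squashing the last few commits now goes something like this; git pops up an editor where you mark which commits to squash:

$ git rebase -i HEAD~3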
Tuesday, May 20, 2008
Dweller is back
Holy cow! Dweller is back online again! I remember some time ago this not being the case, so pick it up while you still can.
Dweller is a roguelike game for mobile phones. If you're after a surprisingly addictive game to tide you over on the train on the way to work, this just might be the thing for you. Sure beats wasting time on card games. I played it myself quite a while ago: it's surprisingly easy to control, and the levels show lots of variation. Highly recommended.
Saturday, May 17, 2008
There's something about LaTeX...
... that makes writing documents fun! It might have something to do with the fact that even with the default settings, the output looks awesome. For a guy whose document-writing experience has been limited to OpenOffice.org and Microsoft Word, the whole thing feels very liberating.
That's another thing: when I make a document in LaTeX, I'm free to use whatever editor I like. MS Word's interface is a mess (2007 perhaps less of a mess), while the load time for OO.o is excessive to the point where I hesitate to open anything that would load it. No more of that crap. From here on, it's Emacs and AUCTeX for me.
The biggest hurdle is rustling the tools together. I use Ubuntu Linux and Emacs already. First thing was to install texlive from the package manager, which installs the tools you need. To integrate with Emacs, just install auctex along with it.
Keys are simple enough: C-c C-c to process/view your document, and C-c C-p C-b to generate inline buffer previews for headings and formulas. I'm sure there's more, but I easily use these two key chords more than any other when working with LaTeX docs in Emacs.
LaTeX itself isn't difficult. Here's a simple document:
\documentclass{article}
% lol preamble (doesn't appear directly, just informational)
\title{The many ways to skin a cat}
\author{Some dude}
\begin{document}
% here's where the fun begins
\maketitle
Normally, cats aren't the sort of thing you'd even consider
eating. But if you're running low on supplies and money has
been exhausted, you may be left with no other option than to
consider making the inedible, well, edible.
I can break these damn lines up however I want.
This is on the same paragraph as the line above.
Only empty lines separate paragraphs.
I can type like a moron, because like in HTML
the whitespace in a paragraph is collapsed into
single spaces.
\end{document}
Making a final PDF is simple too. If you process your document as above, you already have "blah.dvi" derived from the "blah.tex" you were editing/viewing. I already have dvi2pdf available from a command prompt (which I also run in an Emacs buffer using M-x ansi-term), so just run that DVI through that program and a shiny new PDF is created with your doc. Pretty swanky.
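In other words, something like this (assuming the converter just takes the DVI file name as its argument):

$ dvi2pdf blah.dvi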
And to think there was a day when I thought this kind of stuff was impenetrable.
Wednesday, May 14, 2008
Common Lisp + Emacs + SLIME under Windows
Due to reasons relating to my studies, I have to deal with Windows more often now. Crap. But I can still play around with Common Lisp.
Or so I thought at first. I already have Emacs installed, so getting the CLISP Common Lisp implementation and SLIME should be easy, right?
How wrong I was.
Nothing seemed to go right. I mean, the downloads went A-OK, and I installed things alright. Tacking it all together ran me into brick walls. Repeatedly. There were always obscure errors about not being able to invoke the Lisp implementation properly. Google, universal as it is, still came up with lots of cruft.
Eventually, I found out from a post online (by Robert Zubek) that spaces in paths caused problems in SLIME. It never really comes up in *nix environments, because nobody's dumb enough to put spaces in any directory names, but SLIME groks paths using 'split-string. This of course means that directory names with spaces will be pulled apart where they aren't meant to.
Solution? Install everything in friggin' C:\. No spaces, no nothing. After the usual setting of paths for SLIME in .emacs, everything works just dandy.
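For reference, a minimal sketch of the relevant .emacs bits, with hypothetical paths:

;; hypothetical paths; the key point is that none of them contain spaces
(add-to-list 'load-path "C:/slime")
(setq inferior-lisp-program "C:/clisp/clisp.exe")
(require 'slime)
(slime-setup)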
Hacking on Windows: who'dve thunk it?
Labels:
emacs,
free software,
lisp,
programming,
slime,
windows
Saturday, May 3, 2008
git: Collapse the last commits
This is mostly for my own benefit, so I don't forget it later on. I'm learning how to use git.
What happened was that I committed some changes. Pretty normal. Then I realised I forgot to take something out. I committed that, with an "Oops, I should have included this in the last commit too"-style message.
I'm using Ubuntu 7.10, which has an old version of git without interactive rebase (I'll switch soon, I swear!), so I couldn't use that. After a rather dopey hour of searching how to use rebase to collapse the latest two commits, I opted for a different tack.
Turns out all I had to do was the following:
git reset --soft HEAD^
git commit --amend
The first line sets the current HEAD to the second-last commit, while keeping the changes I wanted in the index cache (i.e. the removal of the stuff I wanted to take out). The second makes the correction. Result? The latest intermediate commit is collapsed into the second-latest, so I don't have a messy history.
Of course, this is rewriting history, so I wouldn't want to do this for a public branch, but I'm the only one working on this small project anyway, so it's fine.
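(For completeness: on a newer git, the same collapse is one interactive rebase away. Run something like the following, then change the second commit's "pick" to "squash" in the editor that appears.)

$ git rebase -i HEAD~2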
Saturday, April 26, 2008
The Lisp effect
I was writing some Python last night. I couldn't stay on Lisp forever, as much as I'd like to. The code I'm writing has to run on a machine that I don't have control over, so I can't install something like SBCL or even a Scheme implementation. So I'm stuck with the next best thing: Python.
Flowing along test-driven development lines, I was writing some unit tests, and this... "thing" struck me a few times. I was repeating code. Most of it I was able to do away with by abstracting the common bits into helper functions, but there's only so far you can go. I'm still stuck with two unit testing classes that do similar things, but test fundamentally different input classes.
To my dismay, I discovered something.
I disliked writing Python.
Which is weird, because I consider Python a pretty decent language. And there was only one thing from Lisp that I missed: macros. With them, I never would have had to repeat a thing in the first place, and I wouldn't be stuck with two similar unit test classes with nearly the same structure: I'd just write a macro for it.
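To make that concrete, here's a hypothetical sketch of the kind of macro I mean; every name in it is invented for illustration. One form stamps out a family of near-identical test functions:

;; hypothetical sketch: all names invented for illustration
(defmacro define-roundtrip-tests (name reader writer)
  `(progn
     (defun ,(intern (format nil "TEST-~a-EMPTY" name)) ()
       (assert (equal (funcall ,reader (funcall ,writer "")) "")))
     (defun ,(intern (format nil "TEST-~a-ASCII" name)) ()
       (assert (equal (funcall ,reader (funcall ,writer "abc")) "abc")))))

;; (define-roundtrip-tests base64 #'from-base64 #'to-base64)
;; defines TEST-BASE64-EMPTY and TEST-BASE64-ASCII in one line.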
The sayings were true: Lisp does make programming in any other language unbearable.
Maybe the way to make my unit tests more elegant will come to me in a dream or something. Le sigh.
Tuesday, April 15, 2008
Back-tick-style macros in Scheme
Somebody has been lying to me.
Observe a contrived Common Lisp macro:
(defmacro print-line (x)
  `(format t "~a~%" ,x))
Here's a similar macro in Scheme:
(define-macro (print-line x)
`(begin
(display ,x)
(newline)))
Apparently, nobody told me that back-tick-style macros were supported in Scheme. In fact, all I'd heard about Scheme macros was talk of the unneeded "pattern matching" involved in using them.
The people who have claimed this appear to be lying scumbags. Scheme has the back-ticks, the comma, the comma-at, even GENSYM, meaning it's pretty much capable of the same simple macro style that Common Lisp users are used to.
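For instance, here's the classic swap macro written in exactly that style. A sketch, assuming your implementation provides define-macro and gensym:

; a sketch: a CL-style macro in Scheme, using gensym to avoid capture
(define-macro (swap! a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (set! ,a ,b)
       (set! ,b ,tmp))))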
Maybe there's more to it. Hygienic macros and that pattern matching stuff fit somewhere in Scheme.
Friday, April 11, 2008
Inferior Python mode
I just discovered that Emacs has an inferior mode for Python. This means I can get hacking away on a Python file, and send the code to a process right away. It's not the same as being able to alter a running program image, like in Common Lisp or Arc, but for small files, it's damn close.
James is going to freak when I tell him about this.
Friday, March 28, 2008
AVGN: Double Vision Part 1
The Angry Video Game Nerd
(Watch out, GameTrailers as a site bleeds bandwidth.)
I've been watching this guy's videos for a while, and his most recent instalment takes a look at two of the Atari 2600's competitors: the IntelliVision and the ColecoVision.
I was practically raised on video games. I have very clear memories of playing games on my Atari 2600 and mucking about on the NES. I also remember having an IntelliVision, but the memories of this particular console are patchy at best. You can imagine my excitement when I saw this video posted up.
Already I'm spotting a few familiar titles: Zaxxon appears in the intro (though to be fair, Zaxxon appeared on a lot of platforms), and Space Battle, with the funny dots drifting towards the clouds in a green field.
This first part talks about the IntelliVision. I recognised the console immediately. Well, that's not quite true. Really, the most distinctive part of the IntelliVision is the controllers. They had a grid of metal-like buttons that "pop" when you press them, and a huge round... "thing" beneath that. Nobody forgets a controller like that.
The Nerd points out the IntelliVision's wood texture. I never remembered that, though it was something that I was proud that my Atari 2600 had. The 2600 is a real man's console, wood finish and all. The IntelliVision was always "the console with the big disc button."
The Nerd has a few complaints about the system. One of them is that, without instructions, the games are difficult to figure out. Apparently, that didn't deter me from playing them anyway. My experience boiled down to pressing random buttons and seeing what would happen on the screen, if anything. It's a legitimate complaint, though: compare the Atari 2600, whose joystick controller had only one button, with the cluster on the IntelliVision controller. I think the IntelliVision games actually were more complex; probably a result of the developers having more buttons to play around with.
Speaking of the controller, that's another of his complaints: they suck. I didn't notice that as a child either, but when you're at the stage where any kind of blinking, moving lights will amuse you, I suppose it slips under the radar. I don't remember putting much effort into doing well in these games. Since, as I implied before, Atari 2600 games were simpler, it was a lot easier to figure out how to score points and do better, so I put most of my effort into that.
The controller doesn't consist solely of the numbered buttons and the big disc thing. There's another four buttons stationed on the sides. They were tiny and black, and I never remember them doing anything useful, so I mostly ignored them.
Since the numbered buttons don't mean much by themselves, each game would come with a slide-in plastic card thing. What you'd do is slot it over the buttons from the top, and then you'd at least be able to tell what the buttons actually meant while playing the game. I always thought these were really cool, for no practical reason. I just liked seeing the funny little pictures on the thing. I only remember one picture, from one card: "PT Cruiser", with a picture of a small ship above it. Watching the video, I vaguely remember seeing a few more cards, like the one from Space Battle, with the funny triangles, but they never made much sense to me. As a kid, sometimes I'd just play around with the cards, cartridges be damned, just so I could muck around with the pictures.
Then the Nerd reaches for Advanced Dungeons and Dragons. I never had this game myself, but the limited visibility and exploration of dungeons immediately make me think of Neverwinter Nights, and Castle of the Winds.
I had a golf game. The Nerd doesn't cover it, which is just as well, because I remember not being able to hit anything. You wouldn't think that picking a club and hitting a stationary ball at some given strength would be that complicated, but I don't think I ever scored a single hole the entire time I had this game, and indeed the console itself.
Buzz Bombers. Didn't have this game myself, but it reminds me of Donkey Kong Jr. 3, where you're some guy with a spray can trying to get what I presume is Donkey Kong Jr. to the top by spraying him, but there are bugs all over the place trying to grab stuff from you. Now that I think about it, there's not that much resemblance.
And now the Nerd pulls out the "IntelliVoice voice synthesis module." What. The. Fuck. This box is totally new to me, I've never come across it in my life. This should be amusing. Only a handful of games supported it. This box, and indeed the whole console, was made by Mattel Electronics. If you were a kid like me, you remember having at least one speech-based toy made by these guys.
The first game that the Angry Video Game Nerd pops into this thing is B-17 Bomber. And the first thing that strikes me is the voice. Well, that'd be the first thing that strikes anybody at that point, but I distinctly remember that exact same guy's voice in those Mattel speaking toys. Did they just hire the same old man for all of these things?
Another thing that sticks out is the terrible voice synthesis of everything after the "Mattel Electronics presents..." bit, which is to say the speech is horribly mauled, possibly "spoken" by another person. I use that word loosely, since it kinda sounds like the syllables were pulled from different parts of some recording. I wouldn't make a point of it, because the speech technology was fairly advanced for its time, but, like those Mattel speech-based toys, following the instructions is impossible because of the distortion.
My memories of the IntelliVision are vague at best. In fact, I don't even remember if I had this before, during or after I got my Atari 2600, which seems to dominate my memories. Probably just as well.
Good times.
Sunday, March 23, 2008
Arc + Emacs
So Paul Graham releases Arc, a new dialect of Lisp. It's been out for a little while now, so it's not exactly news, but the language itself is pretty cool. It seems to embrace the idea of programs rapidly changing, by making programs smaller.
So, how does one actually get to play around with Arc? The approach here will get you Arc running within Emacs so that you can send source from Emacs buffers into the running Arc.
Here's what I started off with:
- Ubuntu 7.10
- Emacs
- git
1. Get Arc
Follow the instructions here to grab Arc via git: Git and the Anarki Arc repository: a brief guide. The files you pull will come out in a subdirectory named arc-wiki/.

For the interested, Anarki is the name of the community-maintained release of Arc. It has all of PG's work, plus some niceties. Among those niceties are a couple of Emacs elisp files that we'll be using to tie Emacs and Arc together.
2. Put stuff in your .emacs
Add this to your ~/.emacs:
;; Arc support
(add-to-list 'load-path "/path/to/arc-wiki/extras")
(autoload 'run-arc "inferior-arc"
  "Run an inferior Arc process, input and output via buffer *arc*." t)
(autoload 'arc-mode "arc"
"Major mode for editing Arc." t)
(add-to-list 'auto-mode-alist '("\\.arc$" . arc-mode))
(setq arc-program-name "/path/to/arc-wiki/arc.sh")
Obviously, replace /path/to with the path from step 1.
By default, the Arc REPL prompt isn't read-only, which can be a bit strange. This will make it read-only:
(add-hook 'inferior-arc-mode-hook
(lambda ()
(set (make-local-variable 'comint-use-prompt-regexp) t)
(set (make-local-variable 'comint-prompt-read-only) t)))
If you use parenface for parenthesis dimming like I do, you can enable it for Arc buffers with this:
(add-hook 'arc-mode-hook
(paren-face-add-support arc-font-lock-keywords-2))
(add-hook 'arc-interaction-mode-hook
(paren-face-add-support arc-font-lock-keywords-2))
And if you use paredit, also as I do, then the following will enable that in Arc buffers:
(add-hook 'arc-mode-hook (lambda () (paredit-mode +1)))
3. Trying it out
In Emacs, find (C-x C-f) your way to a file ending with ".arc", then type M-x run-arc (or select Arc -> Run Inferior Arc), and presto! We're in business.

All the keybindings can be read straight out of /path/to/arc-wiki/extras/inferior-arc.el, but here are the main ones I use:
- M-C-x: send top-level form
- C-x C-e: send S-expression before point
- C-c C-l: load current Arc file (NB: can also unjam a read-only prompt if that happens, even if the file is bogus)
Enjoy! Remember to git pull every so often to stay on the cutting edge.
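That is, from the checkout directory:

$ cd /path/to/arc-wiki
$ git pull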
Monday, March 17, 2008
Customised Xubuntu on USB Flash
I spent my holidays doing quite a few things. Making my own USB Flash drive portable Linux setup was one of them. I'm on the customised Xubuntu setup right now, in fact.
I started off with Ubuntu 7.10, already installed on my machine. Then I installed the Ubuntu Customization Kit via Synaptic. I torrented a copy of the Xubuntu 7.10 ISO, and I was ready to get to work. (I wanted to install the Xfce desktop environment over a stock Ubuntu 7.10 image, but in reality, getting that to fit within the space constraints was more work than it was worth.)
I then proceeded to customise the Xubuntu ISO by running UCK. Unfortunately, for Xubuntu, it'll download OpenOffice.org, which is big, bulky and already has a substitute on Xubuntu. If I had the patience to manually unpack the SquashFS image, chroot to the extracted file system's root and do the work by hand, I'd do that instead in future (there's a rough sketch of that route below, after the package list).

Next, since I had previously selected the option to customise the image, the prompt to do that appeared at this point. Picking what I wanted was easy using Synaptic. I wanted to focus on getting some useful work done on the go, so amongst my package choices were:
- sbcl
- emacs
- slime
- git
- subversion
- emacs-w3m
- build-essential (for basic C and C++ support)
- vim
- mzscheme
- muse-el
- 7zip
sbcl and mzscheme? I'm mostly learning Common Lisp, but with the release of Paul Graham's arc, the option of playing around with it was just too tempting to pass up.
git and Subversion? I prefer using git, but Subversion is what's used at my uni, so my portable distro wouldn't be worth much without it.
Confusingly conflicting package choices aside, all that stuff was still smaller than OpenOffice.org, which I never really liked anyway. Extraneous language packs also went away, along with a bunch of other miscellaneous things I never used.
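As promised, here's roughly what that manual SquashFS route would look like. This is an untested sketch; in particular, the squashfs file's location inside the unpacked ISO may differ between releases:

# a sketch, assuming squashfs-tools is installed
$ sudo unsquashfs casper/filesystem.squashfs             # creates squashfs-root/
$ sudo chroot squashfs-root                              # make changes inside the image
$ sudo mksquashfs squashfs-root filesystem.new.squashfs  # repack when done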
From there, I let UCK do its job and repack the SquashFS tree, and was soon rewarded with a shiny new ISO. I backed that up in case I needed it again. At this point I could have burned it to a CD, but that wasn't what I was after.
Now for the USB Flash drive bit. I followed the instructions at USB Pen Drive Linux, with some adaptations. Since I already had Ubuntu as my desktop OS, I was able to format the Flash drive from my own terminal. Since I only needed the files from the ISO, I could just mount it like this:
$ cd /mnt/
$ sudo mkdir iso
$ sudo mount -o ro,loop -t iso9660 that_xubuntu_image.iso /mnt/iso
That made the ISO's files accessible from /mnt/iso/. The copying of the files was straightforward, and I just ignored any missing files.
One thing that isn't made clear on the USB Pen Drive Linux site is that if the USB Flash device hasn't already been made bootable in some way, you'll need to follow the instructions at the bottom of the page.
And that was it. I can pretty much take my desktop with me now, and any work that I save under this environment will be preserved. Funky, huh?
Thursday, March 6, 2008
Muse: Emacs personal wiki
If you make a lot of notes, and want them to be linked and have basic formatting, Emacs Muse will be pretty helpful.
A lot of resources about Muse go on about how Muse can be used to write stuff and publish it to various formats, e.g. HTML, LaTeX, PDF, and so on. What seems to be downplayed is the simple ability to organise notes in a personal wiki, which was something I was curious about, since Muse was pitched as the successor of EmacsWikiMode. This is what I'd like to focus on.
I run Xubuntu, so I merely had to install muse-el via Synaptic, and I was up and running. But where to go from there?
First, since making a note/page in Muse makes a file, I made a directory to store those files, ~/muse/.
Then find a new file in ~/muse/, and give it ".muse" as the extension. This should put you into Muse mode, and you can get started on your wiki.
Formatting
There's a whole Info manual for this stuff, but for the impatient, here's the stuff I use:
* Heading 1
** Heading 2
*** Heading 3
*emphasis*
**more emphasis**
***even more emphasis***
_underline_
Linking
Links resemble MediaWiki's style of double square brackets, rather than the WikiWords style that's common elsewhere. That's fine, because I like the former better anyway.

[[Yarg]] will link to Yarg.muse in the same directory.

[[Yarg][Alternate link text]] does the same, but with "Alternate link text" as the linked text.

Edit an existing link by putting the point over it, and typing C-c C-e. This will bring up two minibuffer prompts for the link destination and link text.
You can also mess with the link source directly by using C-c C-l, which shows the raw wiki text of the document in Emacs.
Like a regular wiki, you can link to notes/pages that don't exist yet, and just press RET to visit them. S-RET does the same, but in a new window.
Lists
Lists are simple enough:
- Item a
- Item b
- Item c
- Sub item a
- Sub item b
1. Blah
- Mixing it up
- With different lists
2. Haha
Just type the first list item manually, and use M-RET to add items to the list.
Use C-> and C-< to indent and outdent a list item respectively.
That should be all that's needed to get started with Muse in Emacs.
Pac-Land
What. The. Hell.
A classic game that I remember from my childhood. A classically confusing game. Nothing in this game makes sense.
The first thing that struck me when I fired up Pac-Land for the NES was the so-called publisher's logo. It must have been a very long time ago, because I don't recall any company by the name:
NAMCOT
In case it isn't obvious, that's "NAMCO" with a 'T' tacked onto the end. Couple that with the "Pac" in Pac-Land, and that's classic IP stealing right there. (Update: Apparently, NAMCOT is a legitimate alternative name to NAMCO. See the comments.) That doesn't bother me too much, but it's always symptomatic of something far worse.
The gameplay. What's this thing supposed to be about? There are no mazes, no dots to collect, and apparently the only aim is to go from left to right, or from right to left. There are ghosts, and there's Pac-Man (even if he is adorned with a hat, for whatever reason), and that's about as far as the Pac-resemblance goes.
Here are the controls: A goes right, B goes left, and any press of the D-pad causes that yellow thing you're controlling to jump. What the hell. How can you screw up controls on the friggin' NES? There's only four buttons and a D-pad. The only way you could possibly screw this up is if you never saw what a NES controller looked like. Or if the "creators" hacked this abomination of a title out of something else, which would lead one to wonder how the original creators screwed it up.
It's so hilariously bad that it's good for five-minute time-wasting runs. If this wasn't on the NES, I would have no reason to play this. And yet since it is, I'm rather curious as to what happens at the end.
If there is an end.
Monday, February 18, 2008
My first macro-writing macro
Any trained monkey can write a Common Lisp macro. They're pretty simple. But last night, I wrote a macro that further defines another macro. In about 40-ish lines, I can now access data from my database in a single form:
(with-users ((user-read-many) :name n :details d)
(format t "~a: ~a~%" n d))
That will print out the names and details of all users in the system I'm building. I'm doing this as part of a little web application that I'm building for my own education. The idea is that the above code can be used within HTML page generators to print out what I want from the database.
As far as that macro goes, I would never have succeeded without C-c RET, which triggers SLIME's MACROEXPAND-1. I could be hacking at this macro, and in a split second have the result of using it there, on my screen.
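The same check works at a bare REPL too, since underneath it's just standard MACROEXPAND-1:

(macroexpand-1
 '(with-users ((user-read-many) :name n :details d)
    (format t "~a: ~a~%" n d)))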
It was some of the slowest code I'd ever written in my life. No, I'm not talking about run-time efficiency, I'm talking about how slowly I was typing code in. Getting my head to operate in two layers of back-ticks is a mind-bending experience.
The SLIME scratch buffer also helped. I wouldn't have even known about the SLIME scratch buffer without the SLIME selector.
The SLIME selector is a simple mode-switcher, containing options to go to the most recent Lisp buffer, or the REPL, or bring up the scratch. This line in your .emacs will enable it:
(define-key global-map (kbd "<f12>") 'slime-selector)
Some people use C-c s instead, but I like the one-key access myself.
So yeah, two-line database access is pretty awesome.
Wednesday, February 13, 2008
Feeling Parenthetical
Any Lisper, regardless of dialect or skill level, recognises that the parentheses are little more than structure for the Lisp reader to handle. Your editor handles parentheses, you handle the code.
Brian Carper recently stumbled across a nice Emacs customisation that will dim parentheses: parenface. With it, parentheses will appear as dim grey instead of black. The things are still there and can be seen and manipulated as per normal, but now your code really stands out. Handy for Lisp programming.
Sunday, February 10, 2008
Terminals? In MY Emacs?
Graphical text editors are fine. I'm okay with them. But when I'm in a terminal, I want something that will edit text while inside that terminal. Previously, this has been vim's task, and to a great extent it still is. But I've been mucking around in Emacs, and recently discovered
M-x term.

Not too much to say about it: it's a fully-featured terminal emulator inside Emacs. The main reason I've turned to it is that I realised the version control facilities inside Emacs, insofar as git is concerned, are inadequate. Branching is a mess, commits for a single set of files appear as multiple commits in the system... git is a lot easier to manage via the command line. I suppose it's not entirely Emacs's fault: its version control system seems closely tied to the anachronism that is CVS. Or maybe I just don't understand it and need more time. The fact of the matter is that it's not working for me at the moment, not in the way I'd like it to.
So, git. Most easily interacted with via the terminal. And I like being able to edit text while in those terminals. Since I'm developing a system in Common Lisp inside Emacs, which happens to support a terminal mode, it makes sense for me to do that editing inside Emacs. However, at a normal terminal, I still like editing stuff in vim, because it's quick and easy for me. It's possible and very easy to set this up:
Edit your ~/.bashrc or equivalent to include something like this:
if [ "$INSIDE_EMACS" ]; then
export EDITOR=emacsclient
else
export EDITOR=vim
fi
Then add this to your ~/.emacs:
(server-start)
Or just run M-x server-start, which does the same thing.

Now if a program invokes the EDITOR, it'll bring up an Emacs buffer while in the terminal in Emacs, which you can save/discard using C-x #, while opting for vim when using the terminal anywhere else.
Wednesday, February 6, 2008
CL web adventures
I've been scratching up on Common Lisp, and what better way to do that than to try building a few web applications. Nothing complicated at the moment, just following a few tutorials online, but I did manage to pull together a simple guest-book in about a page of CL code.
Here's what I've done so far:
- Learned how to use Swank as a server for SLIME, so I know how to connect to a Lisp instance running remotely now.
- Installed Hunchentoot, CL-WHO, and cl-markdown.
- Compiled, installed and configured mod_lisp for Apache.
- Started learning to use git.
Getting Swank running once you have SLIME installed is pretty easy. What I've been doing so far is firing up my Common Lisp implementation in a terminal (SBCL, for those interested), and then running the following commands:
(require 'swank)
;; Send output over the main connection instead of a dedicated stream,
;; and serve requests via fd handlers rather than threads.
(setf swank:*use-dedicated-output-stream* nil
      swank:*communication-style* :fd-handler)
;; Keep listening for new connections after the first client disconnects.
(swank:create-server :dont-close t)
I've got the above memorised, but you can just as easily put it in a script and run Lisp in the background.
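For example, with those forms saved in a file (I'm calling it start-swank.lisp here; the name is my own invention), GNU screen keeps the Lisp alive in the background:
# start-swank.lisp holds the forms above
screen -dmS swank sbcl --load start-swank.lisp
# reattach to the REPL later with: screen -r swank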
Thanks to ASDF-Install, grabbing Hunchentoot, CL-WHO and cl-markdown was a breeze. It really is as simple as doing this:
(require 'asdf-install)
(asdf-install:install 'name-of-package-here)
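For instance, installing Hunchentoot and everything it depends on is just the same pattern (the GPG business described next shows up along the way):
(require 'asdf-install)
(asdf-install:install 'hunchentoot)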
ASDF-Install is awesome enough to download dependencies automatically. Initially, it may complain about GPG keys. You can ignore that, but I decided to look into it anyway, because the condition the debugger raised didn't look too hard to sort out.
First I needed to make my own public-private key pair:
gpg --gen-key
I just went with the default values for most of the prompts. Then I exported my public key:
gpg --armor --output pubkey.txt --export 'Tung Nguyen'
Then, whenever ASDF-Install complained about not having a key, I'd do something like:
gpg --recv-keys KEY_ID_FROM_CONDITION_MESSAGE
gpg --sign-key KEY_ID_FROM_CONDITION_MESSAGE
If I was happy that the key really belonged to the author of whatever I was downloading at the time, I'd confirm the prompt at the second command. Unfortunately, ASDF-Install's "retry" restart won't go through at that point, but it's a simple matter of aborting and re-running the same installation command.
Next, I had mod_lisp to deal with. I'd never installed an Apache module before; heck, I barely knew what one was before this. First I had to download the C source file. The instructions at the mod_lisp site mentioned a tool called "apxs", which handles compiling the C source file into a shared object and installing it. It took me a while to figure out that on my Ubuntu setup, it's not included with the 'apache2' package, but with 'apache2-threaded-dev', which totally isn't obscure at all. Then when I ran this command from the mod_lisp site and restarted Apache, the thing still didn't work.
sudo apxs2 -c -i -a mod_lisp2.c
As it turns out, Apache on my Ubuntu system keeps its module configuration in small files under /etc/apache2/mods-available, with enabled modules in /etc/apache2/mods-enabled, which just holds symlinks to files in the former directory. I copied one of the existing .load files, pointed it at the new mod_lisp.so, and after a bit of trial and error, got the thing running.
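For the record, the whole dance boils down to something like this (a sketch of my setup; the lisp.load file name is my own choice, and lisp_module is what the mod_lisp source appears to register itself as):
# /etc/apache2/mods-available/lisp.load
LoadModule lisp_module /usr/lib/apache2/modules/mod_lisp.so
Then enable it and restart:
sudo ln -s /etc/apache2/mods-available/lisp.load /etc/apache2/mods-enabled/lisp.load
sudo /etc/init.d/apache2 restart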
From there, configuring mod_lisp was easy. I put this in /etc/apache2/httpd.conf:
# Hunchentoot stuff!
LispServer 127.0.0.1 3000 "hunchentoot"
<Location /hunchentoot>
SetHandler lisp-handler
</Location>
I initially had a different path for Location, but the Hunchentoot demo is hard-wired to use that particular path, so I changed it to "/hunchentoot" so I could try it out. Going back to Swank, I fired up Emacs, ran M-x slime-connect, went with the default values at the prompts, and entered the following:
(asdf:operate 'asdf:load-op 'hunchentoot-test)
(hunchentoot:start-server :port 3000 :mod-lisp-p t)
Pointing my browser to http://localhost/hunchentoot showed the demo, which was good enough for me.
From there, I tried out a couple of web applications from tutorials linked from the Hunchentoot website, before trying a simple guestbook application of my own, which brings me up to this point.
I'm learning git so that I can manage the development of a simple centralised issue tracking system, but I haven't gotten very far yet. I've found it easier to enter git commands directly at a shell prompt than through the Emacs integration, so some reading up on how to use the editor's version management facilities effectively is probably in order. We'll see how things pan out.
Despite how old Lisp is as a language, it's quite surprising how active the Common Lisp community is and how many useful libraries are available. Even having just scratched the surface, I can see there's no way anybody can claim that Common Lisp is outdated.
Monday, January 28, 2008
Barkley Shut Up and Jam: Gaiden
A post-cyberpocalyptic RPG about basketball? What's not to like about this game?
The gameplay is solid, with an unconventional battle system and lots of puzzles and secrets. It's also hilarious to boot, and even makes a few nods to Space Jam, if you've ever watched that. Highly recommended.
If you can't slam with the best, then jam with the rest.
Thursday, January 24, 2008
Wednesday, January 23, 2008
"ANSI Common Lisp" and bst-remove
I like Lisp. Compared to a lot of other languages, it's pretty damn nice. I don't know it very well, so I'm learning mostly out of textbooks, mostly in my free time. One such textbook is Paul Graham's ANSI Common Lisp.
It's a solid book and it's engaging, but what I really like about it is that it leaves just the right amount of room for free thought. Case in point: I reached section 4.7, which presents an example implementation of a binary search tree. Everything was going smoothly until I hit bst-remove. I read through the section and understood Paul's approach, which was good: I was making progress.
At this point, I'd just been through Lisp "structs" country, and instead of the lame :print-function for the node struct that just printed out the object/number at that node, I decided to roll my own.
So instead of this:
(defstruct (node (:print-function
                   (lambda (n s d)
                     (format s "#<~A>" (node-elt n)))))
  elt (l nil) (r nil))
I wrote this:
(defstruct (node (:print-function print-node))
  elt (l nil) (r nil))
;; Print the sub-tree rooted at n sideways: right child above,
;; left child below, with deeper nodes indented further.
(defun print-node (n stream depth)
  (when n
    (print-node (node-r n) stream (1+ depth))
    (do ((i 1 (1+ i)))
        ((>= i depth))
      (format stream "  "))
    (format stream "~a~%" (node-elt n))
    (print-node (node-l n) stream (1+ depth))))
Paul mentions in the book that the depth parameter can be safely ignored, but since it wasn't being used for anything else, I figured I may as well use it myself. All my version does is print the full sub-tree under the given node, with indentation to give it that pseudo-graphical charm. Once I reached bst-remove, this was how I was printing all of my intermediate results while mucking about with my experimental binary search tree.
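For instance, inserting a handful of numbers with the book's bst-insert and printing the root node shows the tree sideways (the exact shape varies from run to run, thanks to the random balancing):
(let ((tree nil))
  (dolist (x '(5 3 8 1 4 7 9))
    (setf tree (bst-insert x tree #'<)))
  (print tree))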
I stumbled across the error by deleting a node from the far left of the tree, leaving the node above with only a right child, which is fine. It doesn't always happen, because the book's algorithm uses (random 2) to try to maintain tree balance. Say there's a parent node 2 with children 1 and 3 as its left and right sub-trees respectively; deleting 2 would pull 1 up 50% of the time.
Now suppose 2 was in the left sub-tree of a bigger binary search tree, and the node immediately above 2 was 5. We delete 2 as above, and then we delete 5, and say 1 (which took 2's place) is chosen to replace it. There's now a hole where 1 used to be, and the algorithm says one of 1's immediate children must take its place. 1 only had one child.
Here's where things go wrong. 1's only child was 3. Since 1 was less than 5, it was a left child. Dragging 1 up doesn't cause any problems, but dragging 3 up to take 1's old place makes 3 a left child, not the right child it should be. The tree is broken. It wouldn't have been entirely obvious if I hadn't been looking at the tree structure at each step, but I'm glad I was.
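Roughly, in lieu of proper artist-mode diagrams, the scenario looks like this (my own sketch, not a figure from the book). Before deleting 5 (2 already deleted, 1 pulled up in its place):
  5
 / \
1   ...
 \
  3
After deleting 5, 1 is promoted and 3 gets dragged into 1's old slot:
  1
 / \
3   ...
3 is greater than 1, yet it now hangs off 1's left side: the search-tree invariant is broken.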
That was last night. Today, a bit of poking around on Paul Graham's website showed that this error has already been found and corrected in the book's errata. It's the one on page 71. Regardless, I'm happy I spotted it, because it means I'm learning and paying attention.
My only regret in all this is that I didn't have the tenacity to write a replacement bst-remove before finding the corrected code (linked on the book's errata page). Thank you intertubes for making me lazy.
ed: the rough ASCII above really wants to be a proper M-x artist-mode diagram. :)
Monday, January 14, 2008
The sound of free software
I know this isn't news, but...
Opening an instance of Emacs will never be the same again.
Saturday, January 12, 2008
slime-close-parens-at-point
So I'm learning Lisp. I'm doing this by using Emacs and SLIME. That's cool, right? This combo is far and away the closest I've ever been to "interactive programming", and the experience is very, very cool. It's quite true when you hear somebody say that you need a Lisp-aware editor to truly experience Lisp. Things have mostly been going smoothly.
Except C-c C-q. This key chord supposedly triggers (slime-close-parens-at-point), which inserts just enough parentheses to make the whole surrounding expression valid. Before, I was using C-c C-], which dumps in as many parentheses as it takes to close every open expression before point, a different matter altogether. Sounds like a time-saver, right?
It would be, if not for the fact that the damn thing doesn't work. Terminal Emacs? Nope. GTK+ Emacs? Nope. In the SLIME REPL? Nope. In a Lisp buffer while SLIME is active? Nope. Why does this one thing not work? I can use other key chords just fine, like C-c C-c to compile a function definition on the spot.
I'm going to do some Googling, but if anybody knows what the deal is, please leave a comment.
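One thing left for me to try is binding the chord explicitly (a shot in the dark on my part; C-h k C-c C-q will at least say what Emacs thinks the chord is bound to):
(eval-after-load 'slime
  '(define-key slime-mode-map (kbd "C-c C-q")
     'slime-close-parens-at-point))
If that makes it go, then something is shadowing the default binding.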
Compiz Fusion, Translucency and YOU
So I do a lot of coding and text editing in the terminal. Fire up the terminal by pressing Alt+N, type "vim", and I'm ready. I keep my terminal background translucent, which is handy for seeing other things through whatever you're working on.
The lack of translucency always bugged me about both GVim and Emacs. Neither application seems to offer translucency at all, which makes reading or referring to other material a very Alt+Tabby affair. So I stuck with the terminal.
Until now. I was screwing around with modifier keys and the mouse wheel last night, and found out something about Compiz Fusion and window translucency:
Hold down Alt and scroll the mouse wheel to change the translucency of the window the mouse cursor is over.
Handy, no?
Tuesday, January 8, 2008
Dud characters
So I play RPGs. These games have characters, often many characters. But only a few characters are allowed to be in the party, or at least fighting. Those in the party get stronger and stronger, and those who aren't get weaker in comparison.
Therefore, at best, switching characters is an inconvenience: wading through menus, having to choose who to take, and sometimes even having to backtrack to a certain place. At worst, it's a massive pain, as you discover how weak these guys really are.
A while ago, my sister was playing Legend of Dragoon on the PSX. She managed most of the game just fine, but when it came to this one particular boss, not even repeated attempts could get her through. It must have been the fifth attempt when I suggested that she use the dumb, stupid sideline characters that were never used. She beat the boss on the first try.
I was just playing Front Mission 3 on the PSX. So far, I'd beaten every battle with flying colours, but one tiny map with a bunch of powerful wanzers (from Wanderpanzer, German for "walking tank", and the main unit of the game) was proving difficult. The situation was a bit different: this time I was forced to use one of my dud characters (Pham, for those who have played the game). My first attempt at the battle failed slowly and dramatically when I got into the unfortunate situation of losing both arms on the two wanzers still on the field and running out of restoration items. I even had the two pilots eject from their walking robots and shoot their pistols at the one remaining threat, but this thing had powerful evasion bonuses, and I ultimately lost.
My second attempt used nearly the same strategy as the first, but with one tiny difference: I attacked with Pham's wanzer first each turn. I made this decision after noticing she had more HP on her robot than my main machine-gunning guy. This tiny difference was enough to seal the battle for me.
So you see, even the dud characters have their time in the spotlight, so don't neglect them... too often. I do miss the old heyday of the original Front Mission, though, where you could opt to deploy everybody for every battle.
Stupid people, on the other hand...
Monday, January 7, 2008
Thoughts on KDE 4.0.0
off-topic: First look at the unreleased KDE4.0.0 (with screenshots)
In my time in Linux, I've spent some time with KDE, but my heart's always been with GNOME.
The best interface is one that you don't notice.
I don't remember who I heard this from, but it's absolutely right. I may be a programmer and self-confessed geek, but I'm also very, very lazy and intent on getting things done without my tools getting in my way.
If I'm a GNOME user, why on earth would I be interested in the new KDE release? There are only two reasons I haven't stuck with KDE: its interface hell, and the fact that it often looks like butt. Nowadays, spending any amount of time in KDE is the equivalent of setting eye razors as my background.
But it looks like things are changing, if the screenshots are anything to go by. I always said to myself that I'd give KDE another chance if it stopped looking like butt, and it doesn't look like butt anymore. Who knows? Maybe I'll be using KDE this very day next year.
And if that makes me a superficial bitch, then I guess I'm a superficial bitch.
Friday, January 4, 2008
I just finished POWDER
Well that was fast.
And thus, the inappropriately-named FUCK completed his quest to escape the dungeon by killing some demon on the 25th level and retrieving its heart. It's kinda unfair: what'd this demon ever do to him?
Come to think of it, killing that demon was one of the easier fights in my play-through of POWDER. I fought it over the water in my water-walking shoes; I actually had more trouble, and took more damage, retrieving the submerged heart while nearly drowning in my armour.
Some lessons learned:
Get food, eat food
For variety's sake, I'd chosen Endure Hunger over Clean Kill (leave a corpse with virtually every kill). In all the games before that, I'd chosen the latter. Now I know why. The following should not be eaten:
- slugs of any kind
- invisible stalkers
- chameleons (unless randomly polymorphing is your idea of fun)
- dragons and elementals (they just give you elemental weaknesses)
Reflection is fun!
In programming and in POWDER. I was lucky enough to get a shield of reflection early on, and it basically makes you impervious to any ray-based magic or dragon's breath attack. They're more common than you'd think.
WARNING
Helms of warning > helms of telepathy. Warning is the reason why I appear as a question mark in the screenshot: I can sense myself. Warning lets you "see" all enemies, but only by their relative threat level. Compare that against telepathy, which only detects a subset of enemies ("intelligent" ones) and doesn't show how dangerous they are at all.
Beware cockatrices
The single most annoying enemy in the game. It petrifies, and it attacks on sight. I used wands of sleep to get around them, as well as divine intervention (which can sometimes restore you).
Cave trolls
They revive. You can tell because attempting to eat their corpses always fails. I supposedly killed 25, with other kill counts running only up to about 10 or so, but it looks like it counts re-kills too.
Faith
Each level-up gives you the chance to worship a deity. Choose one and stick with it, because they'll do handy things like heal you, purge poison, reverse petrification, and give you free things. None of that tribute bullcrap in other roguelikes.
Zap unknown wands at monsters
Most wands have a negative effect that will harm you if you zap yourself; the exceptions are the wands of light, digging, speed and invisibility, and perhaps teleportation. Most are just medieval pistols.
Read all scrolls
Unlike wands, virtually all scrolls have some sort of positive effect. Pick up a lot of exotic items, read a scroll of identify, and suddenly you can see how awesome your inventory really is, while avoiding cursed-item landmines. There's only one bad one, and that's the scroll of fire. Try not to do too much random scroll reading at low HP.
Anyway, here I am on the surface... I think I'll duck back down into the dungeon for something to eat. I swear that place is like a DIY buffet.
Thursday, January 3, 2008
POWDER
When I first stumbled across this title at The Linux Game Tome, I was half-inclined to believe it had something to do with drug-smuggling. To my pleasure/disappointment, it was a roguelike, which, come to think of it, tends to have the same addictive quality.
Like any other roguelike, you run around in a tile-based dungeon and beat stuff up until you die or complete whatever psychotic quest the creator has assigned to you. Unlike a lot of roguelikes, this game is not only available on Windows and Linux, but also comes in Game Boy Advance and Nintendo DS ROM flavours. In fact, the game was originally designed for the GBA, which shows in its restriction to 4-way movement and the fact that a context menu is available for all actions.
It raises an interesting point: why do dungeon crawlers suck so much on consoles?
It's not the addition of nice graphics. I think nice graphics are a good thing, and certainly something that a lot of games could benefit from. It can't be the controls, since the controls in POWDER are quite workable.
My theory is that the game development teams are too preoccupied with the 'random' concept that is intrinsic to roguelikes as a whole. The very term "roguelike" was coined from the game "Rogue", which has been around for decades. The field has gotten used to the random concept, and often pushes ahead to add its own individual flair, such as ADOM with the Tower of Fire and underwater levels, and Nethack with the many and varied ways to do things with other things. In POWDER, worshipping deities actually makes a difference, and the gameplay is deceptively simple, hiding a large and varied core. It looks like it was pulled straight out of the VGA era, though, graphics-wise.
Console dungeon crawlers are just that: dungeon crawlers. The developers get it all wrong, nearly all of the time. I couldn't stand more than 10 minutes of any version of Azure Dreams, because the levels all looked and felt the same. The items hardly varied, and the effects were uninteresting. In POWDER, I can throw a knife straight up, only to have it fall back down on my head and stab me. It's totally useless, but it's amusing and shows that the creator put thought into the game. I don't think I've ever come across an effect in a console dungeon crawler and thought, "Wow! That was cool!"
It seems a bit strange, but it's the non-random bits of roguelikes that make them really stand out. POWDER delivers.
Anyway, there are four versions of POWDER available: Windows, Linux, plus GBA and NDS ROMs. If you're after something for your cell/mobile phone instead, look into Dweller (looks like the main site has fallen into disrepair, but you can still find it in Google's cache if you search for "dungeon dweller". First hit.).
Wednesday, January 2, 2008
Stupidity? Meet opportunity.
According to some source I just pulled out of my arse, a blog is born every two seconds. As if we didn't have enough already.
Here's what I'm interested in and therefore post about:
- old-school retro games
- computer programming
- lol internet
- other stupid geekery