I’m migrating my research blogging over to my new domain and consequently this blog won’t be used any more. It’s served me well and I’ll leave it be for posterity.

A few weeks ago I finally got IPv6 working on my router via 6to4. Of course this has no real use beyond just playing around, but it’s still neat that now my computer is globally addressable. There are a few IPv6 hosts around, too; today I took a look at the output of netstat on my machine on totally unrelated business and noticed that I was connected over IPv6 to some web page I was browsing at the time. After I got 6to4 hooked up, incidentally, I emailed my ISP to see if they had any IPv6 plans. Much to my delight, they replied saying they already have their IPv6 address block allocated and will be rolling it out to customers at some point in the near future. No timeline, of course, but still very very cool.

While teaching my networking course this term, I took a lecture to talk about SCTP. It’s a beautifully designed transport-layer protocol, intended to replace—maybe not entirely, but mostly—TCP and UDP. Like IPv6 it has the problem that it was designed after “mostly good enough” solutions had already been established. Unlike IPv6 it doesn’t need any help from ISP or backbone infrastructure, but in some ways it has a much bigger problem: it needs application-level protocols to use it if it’s to gain any support.

I felt a little guilty teaching SCTP when I’d never actually played with it seriously myself, so I gave it a try today. Unfortunately most consumer operating systems don’t support SCTP: OS X requires a kernel extension.

One of the neatest things about SCTP is that a single SCTP connection—“association” in SCTP lingo—can be multiplexed into any number of streams, and these streams really do operate in parallel. There’s no head-of-line blocking like in TCP, where the whole connection gets plugged up if a packet arrives out of order. In SCTP, if one stream gets plugged up, all the other streams are entirely unaffected. This also has the benefit, of course, that you can have multiple streams without the overhead of creating multiple connections; researchers are evidently keen to see how this will apply to HTTP. For me, I would have guessed the nicest application of SCTP would be SSH tunnelling and port forwarding and whatnot, and I was a little disappointed to see no one’s taken a look at SSH over SCTP in a serious way.
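To make the head-of-line point concrete, here’s a toy Python model—nothing to do with a real SCTP stack, just an illustration. Packets carry a stream id and a per-stream sequence number, and the receiver can consume only the contiguous prefix of each stream:

```python
# Toy model of head-of-line blocking. Not real SCTP: just per-stream
# in-order delivery, to show that a gap on one stream doesn't stall others.

from collections import defaultdict

def deliverable(packets, num_streams):
    """Return what each stream's receiver can consume, given in-order delivery."""
    buffers = defaultdict(dict)          # stream -> {seq: payload}
    for stream, seq, payload in packets:
        buffers[stream][seq] = payload
    delivered = {}
    for stream in range(num_streams):
        out, seq = [], 0
        while seq in buffers[stream]:    # consume the contiguous prefix only
            out.append(buffers[stream][seq])
            seq += 1
        delivered[stream] = out
    return delivered

# Stream 0's first chunk is still "in flight"; stream 1 arrived complete.
arrived = [(0, 1, "a1"), (1, 0, "b0"), (1, 1, "b1")]

# SCTP-style: two streams, so the gap on stream 0 doesn't block stream 1.
print(deliverable(arrived, 2))           # stream 1 gets both of its chunks

# TCP-style: one stream, everything queued behind the lost segments waits.
one_stream = [(0, 2, "a1"), (0, 3, "b0"), (0, 4, "b1")]  # seqs 0 and 1 lost
print(deliverable(one_stream, 1))        # nothing deliverable at all
```

In the single-stream case every byte behind the gap is stuck; in the two-stream case the unaffected stream sails through, which is the whole selling point.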

One of the more troublesome aspects of SCTP is that the number of streams your “association” contains is negotiated at initialization time; by my reading of the RFC, there’s no way to add or remove streams dynamically.
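For the record, here’s roughly how the stream counts get requested at setup time, sketched in Python. The numeric constants (IPPROTO_SCTP = 132, SCTP_INITMSG = 2) and the four-field struct layout are my reading of the Linux/lksctp headers—Python doesn’t expose them itself—so treat this as a hedged sketch, not gospel; on a kernel without SCTP the socket() call simply fails.

```python
# Hedged sketch: requesting per-association stream counts at init time.
# Constants and struct layout assumed from <netinet/sctp.h> (lksctp).

import socket
import struct

def try_sctp_init(num_ostreams, max_instreams):
    """Return True if an SCTP socket could be created and its stream counts set."""
    IPPROTO_SCTP = getattr(socket, "IPPROTO_SCTP", 132)
    SCTP_INITMSG = 2   # assumed value; Python's socket module doesn't define it
    # struct sctp_initmsg: sinit_num_ostreams, sinit_max_instreams,
    #                      sinit_max_attempts, sinit_max_init_timeo (all u16)
    initmsg = struct.pack("HHHH", num_ostreams, max_instreams, 0, 0)
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM, IPPROTO_SCTP)
    except OSError:
        return False   # kernel has no SCTP support loaded
    try:
        s.setsockopt(IPPROTO_SCTP, SCTP_INITMSG, initmsg)
        return True
    except OSError:
        return False
    finally:
        s.close()

print(try_sctp_init(2000, 2000))
```

The point being: those numbers are handed to the kernel before the INIT chunk goes out, and after the handshake you live with whatever was negotiated.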

So this morning I set up a little experiment to see how SCTP—or Michael Tuexen’s kernel extension for OS X anyway—would fare with absurd numbers of streams. I figured if you could create an association with a tonne of streams without any noticeable performance degradation, the limitation of having to specify the number of streams up front wouldn’t be so bad.

The bad news is that even though the RFC theoretically gives you up to 65,535 streams, I started getting “Invalid argument” errors as soon as I got up to about 2000 streams. The good news is that there was no change at all in performance or server/client behaviour whether I used 1 stream or 2000 streams. I suspect some possibly large memory structures are built inside the kernel when you establish an association with 2000 streams, but that’s hard to see from userspace. If I get time I’ll poke through the source code to see how things are structured.

The other bad news is that SCTP is slow. Sending data as fast as I could to ::1 (my local machine, not over the network) was about a quarter as fast as TCP. I don’t think there’s much about SCTP itself that would require this slowness; it’s almost certainly due to the immaturity of the implementation. This implementation, if I remember right, was taken from FreeBSD. I’ve heard stories that Linux’s SCTP implementation is the worst of all, much slower even than FreeBSD’s, which is sad, but maybe things have improved recently.

There’s another big advantage of SCTP—multi-homing—which I haven’t had time to play with. With multi-homing I could, theoretically, establish an association/connection over wired Ethernet, then transition to WiFi, with the connection totally unbroken and, what’s more, the application-layer totally unaware that anything had happened. I’m thinking with a bit of work one could even migrate SSH sessions from one host to another. It could be cool beans.

I’m sure there’s some irony in me putting effort into playing around with these networking protocols just so I can talk to myself. Now I know how Esperanto speakers must feel. It’s fun, though.

When I get the opportunity to do research these days, it’s mostly focused on trying to formalize the abstract syntax, typing and operational semantics from the programming language’s point of view, not the logic’s point of view, once and for all. One of the major problems is that the typing for Pola is complicated, much more complicated than typing systems I’ve worked with in the past. On the one hand this makes me very pleased with myself that I was able to get type inference going as well as I have, but on the other hand it’s a huge obstacle to getting other people interested in what we’re doing.

I apologize for the sloppy prose, but this is from a rough draft of notes I’m keeping for myself. The notation used in this typing rule is actually much simpler than what I’d had previously.

It’s horrendously confusing and yet, through many revisions, it’s the simplest and most straightforward notation I’ve been able to come up with to describe what’s happening at the type level. Sometimes I think it would be easier to just show people the code instead of the sequents, ha!

Anyway, I shall keep at trying to simplify this stuff. It has further strengthened my motivation for prettying up the “face” of this language for mortal users.

The paper submitted to Typed Lambda Calculi and Applications has been rejected. Again, all of the commentary was spot on: our notation was not introduced well; the notation we used was odd; things were not explained properly. There is a consistent problem in our write-ups of trying to fit too much into a paper. We do have a lot to present: the syntax, the logic of safety, the type checking, the type inference, the operational semantics, the soundness proof, the completeness proof, the bounding argument, the relationship to previous works. It would be flat-out impossible to fit even a quarter of these into one conference paper, I would wager, but nonetheless it shouldn’t be too much trouble to write up a proper exposition in which we can properly explain to people what we’re up to.

Two of our referees gave us a weak accept, and honestly I felt bad for them since both of them obviously put a lot of effort into it. Both agreed something worthwhile was being explained, but that it was being explained too poorly.

In the meanwhile, I’m making slow and steady progress. Things will pick up once the term is over and I can forget about teaching for a while. I’m determined this won’t happen again.

I always feel bad about not updating here, but I never have much to say. Or, perhaps equivalently, I have too much to say. The nature of Pola, the polynomial-time programming language I’m working on, is constantly changing and I wouldn’t really want to explain all the details here anyway. Suffice it to say that the spirit of the language remains essentially the same while the details change day-to-day, though that’s a slight exaggeration.

I’m newly committed to better balancing my research and my teaching. Teaching had taken over my time completely this term and research had fallen by the wayside, which is really a shame. I’ve changed that this week, though, and have got a lot of good work done.

I gave a short talk about my research at the Western Research Forum, an annual mini-conference here at UWO for graduate students of all disciplines. I’m pleased to announce that I won second place for the natural sciences session. First place went to someone dealing with meteorites: it was also my choice for first place, so I have no problems losing to that. After my presentation I got a lot of good questions which, if nothing else, shows I could keep people’s interest. The questions didn’t come exclusively from computer scientists, either, but from other physical scientists curious about how this might relate to all the FORTRAN code they have to work through. It was good to see.

The highlight of the conference was seeing Jorge Cham, author of Piled Higher and Deeper. I very highly recommend going to see him talk if you get the chance. It’s a funny talk, of course, but carries a lot of insight with it, too. I’ve uploaded one picture from the talk which made a big impression on me.

Talking with people about my research, I always go back and forth between my motivations and where I want to focus my research. When talking with people in theory or in programming languages, they seem to be most interested in proving theoretical properties of the language and making sure that’s all very solid. When talking with everyone else, the prevailing opinion is “yes, but no one would ever use it”, reinforcing that the “face” of the language, the syntax and the typing system, is most important. I need to get a good balance between the two.

The paper we submitted to PLDI (Programming Language Design and Implementation), sadly, was not accepted. It’s not a huge surprise: our paper wasn’t a perfect fit for PLDI; further, our work was in a mild state of flux and the paper reflected that. We’ve submitted a paper to TLCA (Typed Lambda Calculi and Applications) and are anxious to get feedback there.

In just over two hours, I’ll be on a plane to Calgary to do more research there. I’m going to be doing some work at the airport and on the plane trying to get the implementation into a good state so we can start doing research right away. Particularly I want to seriously talk about inferring bounds.

Naïve bounds inference is theoretically done, though not implemented yet. Naïve bounds inference is too, well, naïve, however. We need to tighten up the bounds a lot for it to be practical. I’m hoping we can get some time to discuss how bounds inference affects the memory model as well.

So, what we have now is: a functional language allowing inductive and coinductive types—and now even mutually recursive types!—such that every well-typed program halts in polynomial time; an implementation, complete with type inference; inference of time and space bounds.

At the end of this week things should be in really good shape! I’ll post an update, as I’ve been neglecting this blog for too long.

This image is stolen from a reddit link:

It shows ideas in computer science moving from academic research to industrial research to product to successful product. Speech recognition I find pretty amusing.

Being me, I’m disappointed no programming language concepts made it to the chart, not even objects! I can’t think off the top of my head what the first commercial object-oriented development system was: I suppose in those days there wasn’t a clear divide between selling hardware and selling software.

Anyway, this is relevant to me since, when contemplating my future after my Ph.D., I often fantasize about bringing my own research to market. The first major fear I have, of course, is that no one would want to spend much money on a compiler that guarantees resource bounds; I hope I’m wrong there. The second fear, though, is that it would take far longer than I could ever expect to turn a proof-of-concept product into a marketable product. This chart confirms that, historically, there is a gap of at least five years or so to turn a research idea into a consumer product.

The chart also shows the importance of industrial research and development, the blue lines. With some notable exceptions, like Microsoft and IBM, I fear industrial research has been getting squeezed out in the past few years, which is a shame.

We need more blue!

We got a paper submitted to PLDI—Programming Language Design and Implementation—if only just. The acceptance rate of late has been around 20%, it seems. I think the paper was good, but I have no idea if it was that good. Since PLDI has a double-blind refereeing system, I’ll try not to give too many specifics in this post: specific enough that I can talk about things, but vague enough that a referee won’t inadvertently stumble here from Google.

I’m fairly sure that this language marks the direction my research should be going in during my Ph.D. It has the possibility of being a very nice language. Of course I’ll wait to see what the referees say. It may be unlikely, but there’s always the chance of a “this research is a dead-end because of [reason you’ve never thought of]” which is why it’s nice to get feedback from fresh eyes.

There are issues with the front-end. I think this is the biggest fault of the language as it stands right now. Considering what the language is intended to do—give automatic guarantees of polynomial time complexity—a lot of the complexity is justifiable, but it can still be improved. The typing system is a bore to get around. After becoming familiar with the language for a few weeks or so, one starts to get a bit of intuition about how to structure things to “beat” the typing system. Still, it would be nice to add a bit more in the way of inference so the typing system can help you out. The syntax is perhaps more cumbersome than is totally necessary, in the sense that there are too many constructs.

Changes to the front-end wouldn’t be purely cosmetic, either. The way we have the language right now is nice since everything that’s in there serves a purpose and everything is proved to work correctly. In a language such as this, removing or changing something is a very delicate matter, and it’s easy to destroy the polynomial-time guarantee mentioned above. But work and cleverness aside, I’m sure it can be done.

On the back end of things, there aren’t problems so much as there are opportunities. As it stands, the implementation—which needs a lot of work yet—is just a very naïve interpreter. Making a compiler for a language such as this brings up myriad issues which one wouldn’t find in any other language. Memory management is the big one. Garbage collection is likely not necessary at all, so it will be interesting to see exactly what nice—and efficient—memory management schemes it would offer.

I apologize for not posting more often to this blog. It is, of course, the usual excuse—I always find more important things to do. In fact I’m still terribly busy, but I thought I was overdue.

I got back from working with Robin Cockett and Brian Redmond in Calgary. They’re working on a new programming language which uses type theory to enforce that each well-typed program halts in polynomial time. It’s been done before, but nowhere near as nicely as this. The implementation is just a proof-of-concept, nowhere near fit for consumption by a general computer science audience. It was this fact that it’s built on a terribly recondite foundation, mixed with discussions over the minutiæ of the syntax, that got me thinking more about the face of a programming language, the dreaded “front end.”

Syntax in the programming languages world has, to a large extent, been considered a solved problem for the past couple decades, and for good reason. The technology behind parsing has, so far as I’m concerned, been solved. There have been enough languages created, and segregated into successes and failures, that people have a general idea of what a decent programming language should look like. Yet the process is still not formalized. When creating a new language, people—or myself at least—look at existing successful languages, fudge them to fit whatever new characteristics their language will contain, and use common sense to reason it out.

I think I’ve stated on here before my fondness for literate programming. I think code should be primarily documentation and, further, that that documentation should look gorgeous and be easy to read. As I was mulling over programming language syntaxes, I hit an epiphany when I realized that the primary goal in the syntax of a language should be how readable it is when typeset. As we move towards higher-level languages and rely on smarter compilers to do what we want, the purpose of a language is no longer to describe exactly which bits get pushed where, but rather to describe, in the most elegant way possible, the essence of the solution to the problem.

I like the off-side rule—the “significant whitespace” present in Python and Haskell—for exactly this reason. When displaying code that’s meant to be read, it’s imperative that the code be indented nicely anyway. Why not have the language enforce that the program’s structure be identical to how it’s presented?

This got me thinking about syntax highlighting. Highlighting is nearly as important as indentation when it comes to understanding the structure of code, in my view. Further, when typesetting literate code, that code should properly be highlighted in the typeset document. So why not make highlighting part of the language? One could distinguish between the highlighted keyword let and the variable name let. It poses some practical problems, namely that you potentially have to abandon straight ASCII as an input format in favour of annotated ASCII such as LaTeX or Rich Text.

The other thing I started thinking about is syntax in the context of information theory. This got in my head as we were discussing the role of keywords such as of in the new language. Should the syntax dictate that you write case x of … or should it leave out the entirely unnecessary keyword of there?

In the interest of maintainable code, there’s value in redundancy. As two extremes: one can write 50 characters of C++ code and, due to the noisiness of C++’s template syntax, relay only 50 bits of actual information; however, 50 characters of Lisp code might relay 200 bits of information. In this case I like Lisp’s syntax better than C++’s, but there’s a huge danger in having a syntax which is too concise: one parenthesis in the wrong spot and the semantics of your program can be completely different from what you intended. And that difference might not be obvious to the reader.
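As a crude way of putting numbers on “bits per character,” one can compute the zeroth-order Shannon entropy of a snippet’s character distribution. This is only a toy—real information content would need a model of the language’s grammar—and the two snippets below are made up for illustration:

```python
# Toy "information density" measure: zeroth-order Shannon entropy of the
# character distribution. A crude proxy only; it ignores grammar entirely.

import math
from collections import Counter

def bits_per_char(text):
    """Average bits per character under the text's own character frequencies."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

noisy = "std::vector<std::vector<int> >::const_iterator it;"  # repetitive C++
dense = "(define (f xs) (map g xs))"                           # terse Lisp

print(round(bits_per_char(noisy), 2))
print(round(bits_per_char(dense), 2))
```

A string of one repeated character scores 0 bits per character; a string where every character is distinct scores the maximum. It at least makes the intuition quantifiable, even if the 50-versus-200-bits figures above are back-of-the-envelope.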

I had this conversation years ago with someone about the operator—used both in programming and English—“not.” Note that whenever someone feels they must get across the point that they are not something, they will emphasize—using italics or boldface or uppercase letters or anything at their disposal—the word “not” and often only the word “not.” Programmers typically don’t have such luxury, and an errant ! symbol can sometimes get lost in the mix. Yet it is one of the most powerful constructs in computer science: that you want to do exactly the opposite of what you’re about to describe next. From the point of view of information theory, A and not A are syntactically almost identical.

So that is a clear example of where it would be nice for the language to enforce some redundancy. Don’t just throw a ! in there, but redundantly make other changes to the code that make it clear that something’s going on.

The obvious question is: so how would a language do that? An even better question is: how can a language require redundant information without being complete torture to write in? I’ve no idea, but it would be nice if someone would look at it formally, ha!

I’m so pleased with my progress on the Ca compiler, I’ve decided to provide an ad hoc link to it in its current state. I’m not confident enough to give proper instructions for it though, as it is for hardcore haxx0rz only.

I’ve made the switch from make to ant for the build system. There are a number of things which are bizarre about ant. For instance, failonerror should default to true in every case. Perhaps there are some cases where silent failure can be acceptable, which can be made explicit in those cases with a failonerror="false", but in a build system, I have a hard time believing that an error should, by default, be considered a success. Also there is no convenient built-in way to copy a file while retaining permissions.
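Something like this hypothetical target is what I mean—the command and paths are made up for illustration, and the attribute names are as I recall them from ant’s documentation:

```xml
<!-- failonerror defaults to false, so a failed command "succeeds" unless
     you say otherwise; I now set it explicitly on every exec task. Using
     the install command here also works around copy losing permissions. -->
<target name="install">
  <exec executable="install" failonerror="true">
    <arg value="-m"/> <arg value="755"/>
    <arg value="bin/ca"/> <arg value="/usr/local/bin/ca"/>
  </exec>
</target>
```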

Aside from that, I like the ant model of things. Java is not such a horrible way to describe how to build something.

As for the compiler itself, there are a number of bug fixes still in the queue, such that it still can’t be used for useful work. For useless work, however, it is quite fabulous, if I say so myself.

Even more exciting is that I’ve run into a theoretical problem which is proving very difficult to prove. Research is one of the few vocations where your very metric of “success” is the number of problems you create. I’m off on a camping trip over the next few days, which will give me some time to mull it over.

Next month I’ll be taking a trip back to my alma mater. I’ve been invited by my undergraduate supervisor to help him and his post-doctoral student on a new project he’s starting up. It’s a programming language based on co-inductive types where each program is guaranteed to halt in polynomial time. My job is to come up with, ideally, some clever ways of implementing it efficiently. I don’t know many more details than that, but it sounds like a dream project for me.