Conferences


The paper submitted to Typed Lambda Calculi and Applications has been rejected. Again, all of the commentary was spot on. Our notation was not introduced well; the notation we used was odd; things were not explained properly. There is a consistent problem in our write-ups of trying to fit too much into a paper. We do have a lot to present: the syntax, the logic of safety, the type checking, the type inference, the operational semantics, the soundness proof, the completeness proof, the bounding argument, the relationship to previous works. It would be flat-out impossible to fit even a quarter of these into one conference paper, I would wager, but nonetheless it shouldn’t be too much trouble to write up a proper exposition so that we can properly explain to people what we’re up to.

Two of our referees gave us a weak accept, and honestly I felt bad for them, since both of them obviously put a lot of effort into the review. Both agreed that something worthwhile was being explained, but that it was being explained too poorly.

In the meanwhile, I’m making slow and steady progress. Things will pick up once the term is over and I can forget about teaching for a while. I’m determined this won’t happen again.


I always feel bad about not updating here, but I never have much to say. Or, perhaps equivalently, I have too much to say. The nature of Pola, the polynomial-time programming language I’m working on, is constantly changing, and I wouldn’t really want to explain all the details here anyway. Suffice it to say that the spirit of the language remains essentially the same while the details change day-to-day, which is only a slight exaggeration.

I’m newly committed to better balancing my research and my teaching. Teaching had taken over my time completely this term and research had fallen by the wayside, which is really a shame. I’ve changed that this week, though, and have got a lot of good work done.

I gave a short talk about my research at the Western Research Forum, an annual mini-conference here at UWO for graduate students of all disciplines. I’m pleased to announce that I won second place in the natural sciences session. First place went to someone dealing with meteorites: it was also my choice for first place, so I have no problem losing to that. After my presentation I got a lot of good questions which, if nothing else, shows I could keep people’s interest. The questions didn’t come exclusively from computer scientists, either, but from other physical scientists curious about how this might relate to all the FORTRAN code they have to work through. It was good to see.

The highlight of the conference was seeing Jorge Cham, author of Piled Higher and Deeper. I very highly recommend going to see him talk if you get the chance. It’s a funny talk, of course, but carries a lot of insight with it, too. I’ve uploaded one picture from the talk which made a big impression on me.

Talking with people about my research, I always go back and forth between my motivations and where I want to focus my research. When talking with people in theory or in programming languages, they seem to be most interested in proving theoretical properties of the language and making sure that’s all very solid. When talking with everyone else, the prevailing opinion is “yes, but no one would ever use it”, reinforcing that the “face” of the language, the syntax and the typing system, is most important. I need to get a good balance between the two.

The paper we submitted to PLDI (Programming Language Design and Implementation), sadly, was not accepted. It’s not a huge surprise: our paper wasn’t a perfect fit for PLDI; further, our work was in a mild state of flux and the paper reflected that. We’ve submitted a paper to TLCA (Typed Lambda Calculi and Applications) and are anxious to get feedback there.

In just over two hours, I’ll be on a plane to Calgary to do more research there. I’m going to be doing some work at the airport and on the plane trying to get the implementation into a good state so we can start doing research right away. In particular, I want to talk seriously about inferring bounds.

Naïve bounds inference is theoretically done, though not implemented yet. Naïve bounds inference is too, well, naïve, however. We need to tighten up the bounds a lot for it to be practical. I’m hoping we can get some time to discuss how bounds inference affects the memory model as well.
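To illustrate why naïve inference is too loose, here is a sketch in Python rather than anything resembling Pola, with completely made-up bounds: the obvious compositional rule for a call f(g(x)) adds g’s time bound to f’s time bound evaluated at g’s size bound, so any over-approximation in the size bound gets amplified.

```python
# Bounds represented as plain Python callables on the input size n.
# A hypothetical naive rule for the call f(g(x)):
#   time <= time_g(n) + time_f(size_g(n))
def compose_bounds(time_f, time_g, size_g):
    return lambda n: time_g(n) + time_f(size_g(n))

# Suppose g copies its input (time n, output size n) and f scans its
# input twice (time 2n). If the analysis knows size_g exactly, the
# composed bound 3n is tight:
tight = compose_bounds(lambda n: 2 * n, lambda n: n, lambda n: n)

# But if the analysis can only over-approximate g's output size by
# n**2, the very same rule balloons to n + 2*n**2, even though the
# program's actual running time is unchanged:
loose = compose_bounds(lambda n: 2 * n, lambda n: n, lambda n: n ** 2)
```

This is not Pola’s actual inference algorithm, just an illustration of how slack compounds under composition.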

So, what we have now is: a functional language allowing inductive and coinductive types—and now even mutually recursive types!—such that every well-typed program halts in polynomial time; an implementation, complete with type inference; inference of time and space bounds.

At the end of this week things should be in really good shape! I’ll post an update, as I’ve been neglecting this blog for too long.

We got a paper submitted to PLDI—Programming Language Design and Implementation—if only just. The acceptance rate of late has been around 20%, it seems. I think the paper was good, but I have no idea if it was that good. Since PLDI has a double-blind refereeing system, I’ll try not to give too many specifics in this post: specific enough that I can talk about things, but vague enough that a referee won’t inadvertently stumble here from Google.

I’m fairly sure that this language marks the direction my research should be going in during my Ph.D. It has the possibility of being a very nice language. Of course I’ll wait to see what the referees say. It may be unlikely, but there’s always the chance of a “this research is a dead-end because of [reason you’ve never thought of]” which is why it’s nice to get feedback from fresh eyes.

There are issues with the front-end. I think this is the biggest fault of the language as it stands right now. Considering what the language is intended to do—give automatic guarantees of polynomial time complexity—a lot of the complexity is justifiable, but it can still be improved. The typing system is a bore to get around. After becoming familiar with the language for a few weeks or so, one starts to get a bit of intuition about how to structure things to “beat” the typing system. Still, it would be nice to add a bit more in the way of inference so the typing system can help you out. The syntax is perhaps more cumbersome than is strictly necessary, in the sense that there are too many constructs.

Changes to the front-end wouldn’t be purely cosmetic, either. The way we have the language right now is nice in that everything in there serves a purpose and everything is proved to work correctly. In a language such as this, removing or changing something is a very delicate matter, and it’s easy to destroy the polynomial-time guarantee mentioned above. With enough work and cleverness, though, I’m sure it can be done.

On the back end of things, there aren’t problems so much as there are opportunities. As it stands, the implementation—which needs a lot of work yet—is just a very naïve interpreter. Making a compiler for a language such as this brings up myriad issues which one wouldn’t find in any other language. Memory management is the big one. Garbage collection is likely not necessary at all, so it will be interesting to see exactly what nice—and efficient—memory management schemes the language admits.

I regularly skim through digests of comp.risks, a newsgroup devoted to the risks of computing, as it’s directly related to my research and, above all, pretty entertaining. I was recently catching up, and the subject line “A risk of static analysis tools — and refereeing” understandably caught my eye. I reproduce it shamelessly below without attribution, as I don’t know its original source, though it’s included in comp.risks 25.02 (January 14, 2008):

Interesting anecdote: Some years ago a simple static code analysis tool was submitted to a conference. Two of the three reviewers, both of whom were extremely careful programmers, ran it on their own code just to see what would happen.

The tool produced 100% false positives (not 95, not 99, but fully 100%). As a result, the paper wasn’t accepted.

The same paper was later submitted to another conference (where reviewers didn’t try this), where it was accepted and won the best paper award.

I’m now fairly settled back in Canada, having arrived from SOFSEM 2008 in Slovakia. In the middle of the picture at right you might see a round building—like a short tube on its end—which was our hotel and conference site.

The event was a few firsts for me: my first time presenting at an international conference, my first time in Europe, and of course my first time at SOFSEM. I’d highly recommend the conference to those in Canada—or anywhere outside of the former Czechoslovakia, really—who might not have heard of it. It was a really good mix of all sorts of computer science topics and a nice mix of theory and practice. The conference was a good size and generally a lot of fun.

As luck would have it, I got roomed with a fellow named Matej Košík, a Ph.D. student from Bratislava. He was looking forward to my talk, and his talk turned out to be one of the highlights for me. He was building a robust operating system kernel, one carrying all the advantages of a conventional microkernel, but based on static analysis, with the help of the specialized programming language Pict, instead of memory protection. One of the open problems for him was that kernel components can use up arbitrary CPU cycles or memory, which is where my research might fit in.

Other contributions of interest for me were some proofs of upper and lower bounds of proving group properties—i.e., proving that a semi-group is a group, or proving that a groupoid is a quasigroup—in matrix representation on a quantum computer; a new technique for getting shorter regular expressions on a DFA to regular expression transformation; a proof that valence-2 graph isomorphism is logspace-complete; a bunch of semantic web stuff, which I don’t follow very closely, but I like to hear about it and see the pretty pictures; an invited talk by Jarkko Kari on a novel way to prove non-termination of Wang tiles; deniable encryption; electronic voting. It was just a cornucopia of cool stuff; even if the conference didn’t attract the World Expert in any one topic, it was very cool to see what was going on in so many interesting areas.

In between all of this, and sometimes in lieu of it, of course, was going out to see the mountains and trying some of the local alcohol with some very cool grad students, though strangely Slovakian beer was pretty hard to find, Czech beer being much more common. I have taken a number of pictures.

For those who are interested, my research page has a link to my paper and to my presentation. If you don’t understand catamorphisms on the first read of the paper, maybe the presentation will make things more clear. Ha!

I got an email early this morning that my paper has been accepted to SOFSEM, which is fantastic news. I hadn’t expected it to be accepted, really; rather, I was more interested in getting good feedback from the referees, which I did get. And one referee suggested some good papers for further reading, too.

The paper introduces a restricted functional programming language which disallows recursion, but offers a special syntax for catamorphisms. Catamorphisms are a general mechanism for “iterating through” some algebraic data structure. Over lists, a catamorphism is just a fold—in fact sometimes I just call catamorphisms “generalized folds”. Over the natural numbers, a catamorphism is just a bounded loop, such as the for loop offered in many other languages. Anyway, the language without recursion but with catamorphisms is equivalent in power to the primitive recursive functions, and provides a good model for trying to attack programs from a static analysis standpoint.
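As a rough illustration—in Python, not Pola’s actual syntax—here is what a catamorphism specializes to over lists and over the natural numbers:

```python
def cata_list(step, base, xs):
    # Catamorphism over lists: exactly a right fold.
    acc = base
    for x in reversed(xs):
        acc = step(x, acc)
    return acc

def cata_nat(step, base, n):
    # Catamorphism over the naturals: a bounded loop applying
    # step exactly n times, i.e. an ordinary for loop.
    acc = base
    for _ in range(n):
        acc = step(acc)
    return acc

# Familiar functions fall out as instances:
length = lambda xs: cata_list(lambda _x, acc: acc + 1, 0, xs)
add = lambda m, n: cata_nat(lambda acc: acc + 1, m, n)
```

The point is that the shape of the data structure alone dictates how many times `step` runs, which is exactly what makes static analysis tractable.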

The paper that was accepted is just an introductory paper, and really I think the most exciting parts of it are in the motivation and in the considerations for future work. The end goal is to come up with algorithms to determine, statically, the running time and memory consumption of a program written in this language. As every program in this language halts, these things are certainly possible, but I think structuring them as catamorphisms means that we should be able to get some good efficiency, and with reasonably accurate results too. The primitive recursive functions are more than powerful enough to do “useful” things in: they encompass the entire exponential hierarchy.
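For instance—again a Python sketch, not Pola—even without general recursion, nesting catamorphisms over the naturals climbs the exponential hierarchy very quickly, which is why the interesting static question is the size of the bound, not mere termination:

```python
def iter_nat(step, base, n):
    # Catamorphism over the naturals: n bounded iterations of step.
    acc = base
    for _ in range(n):
        acc = step(acc)
    return acc

double = lambda n: iter_nat(lambda a: a + 1, n, n)   # n + n
power2 = lambda n: iter_nat(lambda a: a + a, 1, n)   # 2**n
tower = lambda n: iter_nat(power2, 1, n)             # 2**2**...**2, height n
```

Each definition is a perfectly legal bounded loop, yet one more level of nesting turns linear growth into exponential, and exponential into a tower of exponentials.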

Anyway, what was most exciting, beyond the acceptance itself I suppose, is that the referees generally seemed to agree with me that it’s an interesting area of research.
