Realidades Paralelas

Friday, December 07, 2007

First contact with Coq

I've started to look a bit into Coq, beginning with Coq in a Hurry. It starts out nicely enough, but then in section 3 there's a proof that, for all propositions a and b, a and b implies b and a. I didn't understand a thing. Well, maybe a bit here and there, judging by previous contacts with systems for first-order logic, but the tactics and the organization of the proof all flew over my head. Just by reading the material it was hard to discern what I was supposed to input to get the same result. Then there are two paragraphs about the tactics, and a table. I can't imagine how I would use them right now.
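For reference, that proposition can be proved with a short tactic script. This is only a sketch using standard Coq tactics (intros, destruct, split, exact); the tutorial's own script may differ:

```coq
(* For all propositions a and b, a and b implies b and a. *)
Lemma and_swap : forall a b : Prop, a /\ b -> b /\ a.
Proof.
  intros a b H.           (* move a, b and the hypothesis a /\ b into the context *)
  destruct H as [Ha Hb].  (* break the conjunction into its two parts *)
  split.                  (* to prove b /\ a, prove b and a separately *)
  - exact Hb.
  - exact Ha.
Qed.
```

Each tactic transforms the current goal; roughly, the table in the tutorial tells you which tactics introduce or eliminate each logical connective.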

But that's ok, I'm sure I'll understand it better later. There's the Coq Tutorial and a bit more material to get into. I'll post about it later, if I don't get lazy. It's bound to be truly exciting stuff about the Calculus of Inductive Constructions and dependent type theory. Great subject to draw upon for chitchat at cocktail parties.


Friday, July 06, 2007

In search of good mind-mapping software

So I recently started to create mind maps to summarize studies and come up with new ideas. It's better to start a map on paper than to use a computer program, granted. But then, after the initial session is over, it would be good to make a digital version of it to be consulted and edited later. A mind-mapping tool is thus a good idea.

I tried some programs on this list. FreeMind is one of the best free software tools for mind mapping, but it still has lots of usability problems, and it runs on the JVM, which means slow startup and heavy memory use. Another tool is interesting exactly because it is a web application: no need to install anything, available anywhere, and so on. But I couldn't try it; after signing up for a free account more than 24 hours ago, I still haven't received the confirmation email, so I can't log in. I guess I won't bother trying to register again.

But it gets worse: neither of them seems to support links between nodes in different parts of the tree (cross-links). Does everyone who makes mind maps manage to categorize everything hierarchically? To me, any non-trivial map will have lots of relations that are not hierarchical in nature, and I think that not allowing these cross-links is a serious limitation. Maybe I should consider creating semantic networks instead, but there don't seem to be tools for that.


Thursday, April 19, 2007

Great Works in Programming Languages

I guess I never linked to this before, even though I was certain I had. It's a list of papers selected by a community of programming language researchers as the most important and representative of the field. The collection is called Great Works in Programming Languages. Worth taking a look.

I have read and reviewed "An axiomatic basis for computer programming" by C.A.R. Hoare, here (in Portuguese). I've read "A theory of type polymorphism in programming" by Milner more than once, and never finished "Towards a theory of type structure" by Reynolds. I also read the one by Landin ("The next 700 programming languages"), and Plotkin's "Call-by-name, call-by-value, and the λ-calculus" is in my queue. Maybe someday I'll get back to them and post reviews.


Another new acquisition

I just got the book Proof, Language and Interaction: Essays in Honour of Robin Milner by mail. It's a hefty tome, over 700 pages long, containing essays on various topics related to the work of Robin Milner. The best part is that while it usually costs around 80 dollars, I got it for 9 dollars plus shipping. This should be a rule: if you're a starving student at a third-world university with no institutional budget for buying expensive technical books, you should be able to get them for 10 dollars or so. There's always hope.

Anyway, my main interest in the book lies in the essays related to the π-calculus, especially the one on Pict, but the Harper-Stone paper on the semantics of Standard ML is also on my list, as is the essay by Mike Gordon "From LCF to HOL: A Short History". I'll probably look at many others, but I doubt I'll be able to understand much of some of them, like "From Banach to Milner: Metric Semantics for Second Order Communication and Concurrency".

All in all, a good acquisition, at a very good price.


Saturday, February 24, 2007

Classic Texts in Computer Science

A selection of classic texts in the discipline, by Babar K. Zafar, with links to electronic versions of each one.

Choosing which texts are classic can certainly be controversial, although in this case I think only texts with electronic versions available were considered. Even so, there are many good things in there, some of which I have already read, some I plan to read soon.


Thursday, February 22, 2007

An Introduction to the Bird-Meertens Formalism

by Jeremy Gibbons (link to paper).

Introduces the BMF, also called Squiggol, through an example: the derivation of a linear algorithm to compute the maximum segment sum (MSS) of a list of numbers. The problem appears in the book Programming Pearls by Bentley, and the derivation of the linear solution using Horner's rule was apparently first suggested by Bird. The paper first presents some motivation for using the formalism, then takes care of notation, proving some theorems along the way, then presents Horner's rule for polynomials and a generalization of the rule to arbitrary operations (satisfying certain conditions). With the groundwork complete, it presents a clear specification of the MSS problem that would take cubic time to compute; from this specification, using the theorems presented earlier and the generalized Horner's rule, an efficient linear algorithm based on list scans is derived.
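To make the two ends of the derivation concrete, here is a sketch in Haskell: the clear but cubic specification, and the linear program reached via the generalized Horner's rule. The function names are my own; the paper works in BMF notation rather than Haskell:

```haskell
import Data.List (inits, tails)

-- Every contiguous segment of a list: prefixes of each suffix.
segments :: [Int] -> [[Int]]
segments = concatMap inits . tails

-- The obvious specification: maximum over the sums of all segments.
-- There are O(n^2) segments and summing each takes O(n), so this is cubic.
mssSpec :: [Int] -> Int
mssSpec = maximum . map sum . segments

-- The derived linear program: a single right-to-left scan.
-- At each position, max 0 (x + y) is the best sum of a segment
-- starting there (the generalized Horner's rule step).
mss :: [Int] -> Int
mss = maximum . scanr (\x y -> max 0 (x + y)) 0
```

For example, both mssSpec and mss give 6 on the list [-1, 2, -3, 4, -1, 2, 1, -5, 4], the sum of the segment [4, -1, 2, 1].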

The paper is also interesting for showing the properties of catamorphisms and folds. Folds are more general than catamorphisms, but if the fold operator is associative, the fold can be expressed as a catamorphism as well.

The paper makes the claim that "creativity is not needed" when deriving an efficient program from a simple specification. This doesn't seem to hold up well. Surely there is creativity somewhere in the process: although the derivation itself seems simple, it depends on the available theorems, so the choice of which theorems to state and prove a priori (that is, when you don't know the answer beforehand) seems to involve a lot of creativity. Of course, the paper does not claim that no creativity is needed there, only in the derivation itself. Even so, the example presented is quite small, using only a handful of theorems, all of them stated earlier. So how does one go about choosing which theorems to use in a derivation? Is it trial and error? Isn't there some creativity involved? In a realistic formalism for deriving programs from specifications, you'd probably end up with lots of theorems, and it seems far from trivial to decide which one to use at each step. Maybe an experienced practitioner can decide where to go based on previous sessions of trial and error, by considering the "look" of the current expression.

Another problem is where to stop. How do you know you got an interesting, efficient implementation of your specifications? You must be able to recognize good implementations somehow. This isn't different from how you can express mathematical expressions in many ways, some more useful than others depending on circumstance. But the properties of good implementations are not clear.

All in all, the claim that "creativity is not needed" in the derivation process mostly makes me think it could be automated without resorting to AI techniques. But if you think about how to do it -- trial and error amounts to exponential search, and the lack of clear goals or evaluation criteria for results makes termination difficult to specify -- it becomes clear that implementing an automatic tool for deriving programs from specifications would be quite difficult. My doubts are mostly about how a formalism like this would scale to the derivation of realistic programs, instead of small toys.

I've found the book by Bird and de Moor at a very good price in an online bookstore and ordered it. If it ever gets to me, we'll see how the formalism scales.


Saturday, February 17, 2007

Functional code

While I'm not posting on this blog, I'll take the opportunity to link to another one I've created: Code Miscellany, dedicated mainly to posting source code in functional languages. Actually, at the moment it mostly serves to post my translation of the code and exercises from the book Essentials of Programming Languages to F#, Microsoft's functional language similar to OCaml. But there can always be something different over there. And later on, other books may be translated to other languages.

As for this blog, I suggest that anyone who happens to be interested in it subscribe to the feed in some aggregator, for example Bloglines or Google Reader. It makes it easier to know when there's a new post.


Saturday, November 25, 2006

Cobwebs

Another post just to say that the high level of activity and stress kept me from posting for a while. I also didn't have much to comment on here, more or less. But now that what had to be defended has been defended, I can get back to writing more often. I've been thinking about some research directions, and I may comment on that later. For now, here's a really nice link:

Writing a Lisp Interpreter in Haskell