Plinth and Immutability

This post was going to be about immutability in Plinth, but I’ve just realised that I haven’t actually announced Plinth here yet. And so…

The Plinth Programming Language

Plinth is the name for a language I have been designing in my spare time for a while. Several of my posts in 2010 and 2011 were about it (then under the working name “Language X”). This year, I started work on an actual compiler for it that could produce executable machine code right from the start. I am currently in the process of building it up to a full language, and one of the more recent things I’ve implemented is immutability.

If you want to try out the compiler, or see updates as it’s being written, the github project is here: https://github.com/abryant/Plinth

And now back to the original topic:

Immutability

A few weeks ago, I was wrestling with how to implement immutability in a way that’s both safe and easy to use. The basic idea of immutability is similar to const in C++:

  • Types can be marked as immutable using a # prefix.
    i.e. #Foo is an immutable Foo
    Any object f with this type cannot have any of its fields modified, so if f had a field b of type Bar, then f.b = null would be illegal.
    Plinth has “deep” immutability, which means that changing any fields of f.b would also be illegal.
  • Methods can be marked as immutable using the immutable modifier.
    When a method has this modifier, it cannot make any changes to anything that might be accessible elsewhere. This means that changing member variables and static variables is forbidden, as is calling any other methods which are not immutable (see the sketch just after this list).
  • Constructors must also be marked as immutable if they are to be called from immutable methods. An immutable constructor can modify member variables, but not static variables.
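
As a rough sketch of these rules in Plinth-like syntax (the Counter class and its members are invented purely for illustration):

class Counter
{
  uint count;
  static uint totalCreated;

  immutable uint getCount()
  {
    return count; // fine: an immutable method may read state
  }

  immutable void reset()
  {
    count = 0; // error: an immutable method cannot modify member variables
    totalCreated = 0; // error: nor can it modify static variables
  }
}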

There are several types which can be marked as immutable:

  • Arrays, e.g. #[]uint
  • Class types, e.g. #Foo
  • Compound types, e.g. #bigint
  • Function types, e.g. #{uint -> string}
  • The object type: #object

For arrays, classes, compound types, and the object type, being immutable means that the values inside that variable cannot be changed (e.g. an array element cannot be overwritten); for these four, casting away immutability is forbidden.

However, for function types, it means the opposite: that the function does not modify any external state, so it is an immutable function in the same way methods can be immutable. Because of this difference, function types can be cast away from immutability, but they cannot be cast towards immutability. For example:

#{string -> void} func = someMethod; // fine if someMethod is immutable
{string -> void} f2 = func; // fine, func doesn't change anything
#{string -> void} f3 = f2; // error: f2 might change something

Aside: With this type of immutability, making an object cache the result of some calculations would be impossible. For this reason, fields can be marked as ‘mutable’, which means that they can be modified (assigned to or have their state changed) from within an immutable context. As an example, this type of caching is used in Plinth’s string type, to lazily find its UTF codepoints.
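
As an illustration, here is a hedged sketch of how such a cache might look in Plinth-like syntax (the Distance class, its fields, and the rough conversion are all invented; the point is only the ‘mutable’ modifier):

class Distance
{
  uint metres;
  mutable uint cachedFeet;
  mutable boolean hasCachedFeet;

  immutable uint getFeet()
  {
    if hasCachedFeet
    {
      return cachedFeet;
    }
    // assigning to ‘mutable’ fields is allowed even though this method is immutable
    cachedFeet = metres * 3; // rough conversion, purely for illustration
    hasCachedFeet = true;
    return cachedFeet;
  }
}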

Now, consider what happens when we write a getter (properties will exist eventually, but suppose we want to write a method):

class Foo
{
  // ... constructor ...
  Foo next;
  immutable Foo getNext()
  {
    return next; // error: next is immutable in this context, and cannot be returned
  }
}

So we can’t write an immutable getter that would extract a non-immutable member. We would have to write two getters, one which returns immutable values, and another for non-immutable values. This is extremely inconvenient, both in terms of writing duplicated getters and disambiguating between the two of them.

One way to try to get around these inconveniences is to have a concept of contextual immutability, which is added whenever you access a member in an immutable context. Immutable methods could then return contextually immutable things, but not explicitly immutable things, allowing us to return next as a Foo rather than a #Foo in this case and avoiding the duplication. The result of the method call could then have the same immutability as the thing we are calling the method on.
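
Under that hypothetical rule, the getter from above would behave something like this (a sketch of the idea, not how Plinth actually ended up working):

class Foo
{
  Foo next;
  immutable Foo getNext()
  {
    return next; // would be allowed: next is only contextually immutable here
  }
}
// in some method:
Foo foo = ...;
Foo a = foo.getNext(); // fine: foo is not immutable, so neither is the result
#Foo imm = ...;
#Foo b = imm.getNext(); // an immutable callee would force an immutable result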

Unfortunately, this presents a problem for factory functions:

class Factory
{
  immutable Foo createFoo()
  {
    return new Foo();
  }
}
// in some method:
#Factory factory = ...;
Foo created = factory.createFoo(); // error: the result is immutable, because factory is immutable

Here, it is impossible to have an immutable Factory create a Foo which is not immutable. This is because we assumed that immutable methods would always return members, but in cases like this they do not.

In order for this to work properly, we would need to add another modifier to the method, stating that it can return contextually immutable values. In my opinion, this would overcomplicate method calls, so I am leaving them to return either always-immutable or never-immutable values.

However, the confusion here does not apply to properties, since they are designed to be accessed as if they were fields. So we can allow property getters to return against contextual immutability:

class Foo
{
  Foo realNext;
  property Foo next
    getter { return realNext; } // works, despite realNext being a #Foo in this context
    setter { realNext = value; };
  // NOTE: we could just do "property Foo next;" instead
}
// in some method:
Foo foo = ...;
Foo next = foo.next; // getter can return a contextually immutable value
#Foo immFoo = foo;
#Foo immNext = immFoo.next; // immutable callee forces immutable result

This gives us the same functionality as it would in methods, except that property getters cannot be used as function values (but even then, you can wrap the access in a closure if necessary).
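
For example, a property access could be wrapped up as a function value roughly like this (this sketch uses the older Language X-style closure syntax from later in this post; Plinth’s final closure syntax may well differ):

Foo foo = ...;
{ -> Foo} getNext = (Foo) closure()
{
  return foo.next; // the closure captures foo and goes through the getter on each call
};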

Hopefully I’ll write about Plinth development more often from now on, since I’ve been getting a lot of the core language implemented recently. The next topic will be something that Plinth does differently from a lot of mainstream languages: nullability.

Philosophical Musings

Recently, I’ve been doing a fair amount of thinking about some semi-related aspects of philosophy. So I thought I would procrastinate further from my individual project and go into them here.

Time Travel

I should preface this section by saying that I doubt that time travel is possible at all. However, I still like theorizing about how it would act if it were possible.

Ever since I read Harry Potter and the Methods of Rationality, I’ve preferred the theory of time travel that it proposes to the “Trousers of Time” theory. My problem with the latter is that it is based on the universe splitting into two possible timelines whenever time travel occurs, and both are causally inconsistent (in one, the travelling thing just disappears; in the other, it appears spontaneously).

The alternative that I’ve come to accept is a single-timeline theory where effect can precede cause. In this theory, at the moment that you go back in time, the thing which you are about to do has already happened (and you may well have seen the effects of it before you go back in the first place). This makes it possible for causal loops to occur, the only restriction being that any such loops are stable (paradoxes literally can’t happen).

Suppose you find a note with something random written on it in your own handwriting; you must then write that note yourself, go back in time, and plant it for your past-self to discover. If you decided not to, then you wouldn’t have received it in the first place, because that wouldn’t be a stable time loop (this has a tangential relation to the section on free will later on). Even more amazing is that whatever you wrote is information that came from absolutely nowhere! You could use this effect to do very difficult calculations: just find an algorithm that only creates a stable time loop for the correct answer to the problem. If you followed that algorithm, you could find the answer in a single time-travel journey.

Now suppose you have a machine that applies these algorithms for you by sending some information back in time. If you gave it an algorithm which could only result in a paradox, then the only thing that could possibly happen would be that the machine breaks somehow (since the laws of thermodynamics are purely probabilistic, this is always possible; it’s just usually prohibitively unlikely). To compensate for this, you could build a fail-safe into the machine which just says “improbable” with a very low probability; for paradoxical results, this would still be more likely than the machine breaking. The main thing is, this machine can produce anything you like so long as your algorithm makes it the only plausible time loop.

What we have come up with here is very nearly a finite improbability drive! Unfortunately, in order for it to work, time travel needs to a) work, and b) work like this. I still think that’s unlikely (especially since it would prove that the universe is not Turing computable), but we can dream…

The Nature of Reality

The nature of reality is something I’ve been interested in for a long time, and it has long been a popular topic of philosophical conversation, especially since films like The Matrix and (less well known) The Thirteenth Floor.

Fundamentally, it seems impossible to tell whether we are in a simulation or the “real world” (whatever that may be). However, there are things we can tell about this universe by observing it, such as the basic laws under which it operates. But there are some more properties we can also deduce, specifically one relating to consciousness…

Split-brain patients are people who, to treat severe cases of epilepsy, have had the two hemispheres of their brain disconnected at the corpus callosum. These people were originally one consciousness, but now they are two. This has been demonstrated in numerous tests. To get an idea of the condition, try watching this video. Now, if a person has had a purely physical operation that splits their entire consciousness in two, the obvious conclusion is that consciousness is a purely physical phenomenon. We have just reasoned that we are not in The Matrix!

Of course, this was not a complete proof. The machines could be measuring exactly what happens to our brains inside the simulation and then using very advanced equipment to modify our real brains in exactly the same way, in order to trick us. They would also have to cope with a vast number of other conditions such as Alzheimer’s, mood swings with various causes, etc. Overall, I don’t think it is a plausible explanation for consciousness that it is imparted to us by another plane of existence, because the inner workings of our brains seem to be so tied to the laws of this one. It’s not impossible that it comes from another plane of existence, but eventually there must always be an explanation for how consciousness is built up. Since we can find reasonable explanations for all aspects of behaviour and decision making here and have no reason to suspect another plane of existence, we can ignore that possibility using Occam’s Razor.

Not only does this make it implausible that we are living in The Matrix, it also makes any sort of dualistic universe implausible. This means that any notion of a soul that depends on another plane of existence is likely to be false. This section also relates heavily to the section on free will.

Free Will

You may assume that this section will have a large impact on your everyday life. However, it has very little impact on the decisions you make every day. Something that can affect things much more is the next section, on Morality.

There are two viewpoints when considering free will: compatibilism and incompatibilism. Incompatibilism defines free will such that it is impossible to have it in a materialistic universe, essentially saying that for free will to exist we require some form of dualism; this is the viewpoint that people tend to assume when considering the question of free will. Compatibilism, on the other hand, says that as long as someone has the ability to make a decision, they have free will.

As outlined above, the evidence suggests we are in a materialistic universe. One obvious question is how it is possible for someone to make a decision in such a universe. I recently discovered the answer to this in a course on Computational Neurodynamics: take two networks of excitatory neurons and connect each of them to a network of inhibitory neurons; then connect each network of inhibitory neurons to the excitatory network from the other side. This setup only allows one of the two sets of excitatory neurons to be active at a time. If we had a very simple creature which had to decide between two sources of food, at equal distances on opposite sides, we could connect sensors for its two sides to the two excitatory networks, and then connect those excitatory networks to some means of moving towards the food source. This creature has just decided between two equal sources of food, and it was all due to tiny variations in the times at which some neurons fired.

Essentially, this is an example of a decision being made in a materialistic universe (note: it does not have to be a deterministic universe; quantum randomness could exist without having any effect on free will). For the compatibilist, this equates to free will. However, for the incompatibilist, things get more complicated: would there be free will if something random at the quantum scale caused a certain neuron to spike at a certain time, and the decision to be made in one direction rather than the other? Inherently, there’s nothing more free about this than in a deterministic system: we are still bound to materialistic laws, even if they are random. I am currently of the view that an incompatibilist can only have free will in a dualistic universe (and even then, something in the other plane of existence must be causing the decisions somehow; if it has the same rules of cause and effect that we do then it will exhibit the same problem, and the only way around this is to abandon cause and effect).

Morality

Is there an objective standard for morality? The universe as a purely materialistic system seems to imply that there is not. Consider that any action is merely the result of a large set of previous actions beginning at the big bang; it follows that no responsibility can objectively be assigned to the person performing the action, since they were merely a product of their genetic code, culture, and everything they have ever experienced.

Yet we still give people responsibility for their own actions. I posit that this is because we are either genetically or culturally programmed to assign responsibility to people. Ultimately, it would not be useful to assign responsibility for something to the entire way the universe has unfolded. We generally see people as responsible because it affects our feelings about them, and we may wish to thank them for a good action or punish them for a bad one.

Fundamentally, morality seems to stem from group selection during evolution: if you are altruistic towards your kin, they are more likely to survive and pass on the genes that they share with you. This can easily explain from an evolutionary perspective why we find such things as murder completely immoral: it decreases the species’ chances of survival. There will be similar explanations for other aspects of morality that we evolved with. However, there are also aspects of morality that are much more modern and we could not have evolved with, such as whether or not using asbestos as insulation is a good idea. These can draw on more basic instincts, but additionally there is a sort of moral Zeitgeist that gives us a sense of the morality of a society, which we each build up through communication with others. We would not know that asbestos was bad if nobody had told us about it.

That was a lot of writing. Hopefully someone finds it at least somewhat interesting. I’m definitely not an expert on any of these subjects, so I’ve probably made lots of mistakes and bad generalisations. If you found any, please point them out and prove me wrong.

Compiler on GitHub

Yesterday, I added my compiler project to GitHub. This means that you can now read all of my code if you want to; the link is: https://github.com/abryant/Compiler.

It also means that you can marvel at how badly written parts of it are. There are several things that don’t make sense at all (e.g. I got Argument and Parameter the wrong way around, which still needs fixing). So it’s still very much a work in progress. Hopefully I’ll have time to work on it more (and finish name resolution) soon.

Why Use Linux? (Felix Article)

Today an article I wrote for Imperial College London’s student newspaper, Felix, was published as part of a debate about operating systems. As you may have guessed, I wrote about Linux. The original article is on their website [PDF]. Here is the full, unedited article:

There are many reasons to use Linux rather than Windows or Mac OS, and not just that it is completely free.
First, I must explain that Linux is actually just the most basic part of an OS, the Kernel. What you actually install is a Linux distribution, like Ubuntu, Fedora, openSUSE or any of a very long list of others. These distributions are just the Linux kernel bundled with a load of programs which you’ll probably want to have installed (things like free office tools, web browsers, music players, etc. – not the crapware that you get with a new Windows PC).
One of the best things about Linux is that all major distributions have a package manager. This lets you search for and download free software from the internet. Installing a program in Windows would involve starting a web browser, searching for the installer, downloading it, running it, and at the end of the process you still have an installer sitting in your downloads folder. But in Linux you would just start the package manager, search for the program, and press install.
The package manager also makes updating your system much easier: you can update every part of your system with it at the same time, not just the OS itself but all of your programs as well! No more popup dialogs from random programs telling you to update them, just one subtle notification telling you that there are updates if you want to download them. Another advantage is the fact that with Linux you don’t need to restart your system after every update, only when you’ve updated critical components.
Security is taken very seriously in Linux. Every time you install or upgrade packages you must enter a password; this ensures that no system-wide changes can be made without your consent. This is part of the reason that viruses are not a problem for Linux: any programs you download cannot take over the whole system. In fact, nothing you download is executable until you set it to be, so a virus would have to convince you to run it voluntarily!
Installing Linux (depending on your choice of distribution) can be very streamlined. The most difficult part is sorting out which disk drive/partition to install it onto. You can easily install it on any part of your hard drive that you’re willing to format, or put it on a USB drive. You can even use Wubi to install Ubuntu onto a Windows disk. The rest of the process is usually just picking your time zone, user name, password, etc. If you choose a distribution which has a Live CD, you can try it out by just starting the CD, and even go online while you install it. Once it has installed, you will get a list of operating systems to choose from whenever you turn on your computer, so you can switch between Linux and another OS.
If you are looking for Hardware Support, Linux can be a good choice. There are official drivers from NVidia and AMD/ATI for their respective graphics cards, and several more open source graphics card drivers. It’s hard to keep up with hardware advances when you’re not the most popular OS around, so there might be some drivers missing. However, Linux handles a lot of hardware very well (in particular, support for older hardware is good). It is usually a good idea to try the Live CD before you install it, so that you can make sure everything works beforehand.
The way that the OS is developed is part of the reason for this hardware support: anyone can join the community and help write the software if they are willing and able. That is the real meaning of “Open Source” or “Free” software, not just that it costs nothing to download (although the vast majority of programs do cost nothing).
Because it is open source, Linux is extremely customizable, which makes it a great OS if you work with computers. It provides a command line console where you can give it lower level commands, which is occasionally very useful, but not as user-friendly as a graphical desktop.
The desktop on Linux is also extremely customizable. There are many choices you can make; for example, you can choose between several Desktop Environments, which provide your workspace and some applications. The most popular of these are KDE’s Plasma Desktop and GNOME (Ubuntu chooses GNOME for you; if you want Plasma Desktop you might like Kubuntu). In whichever desktop environment you choose, you can customize the theme and color scheme of your windows and desktop as much as you like (even down to things like the placement of buttons in window title-bars), and you can add desktop widgets to them (built-in ones in Plasma Desktop, or using gDesklets in GNOME). If you use Plasma Desktop or a program called Compiz, there are also desktop effects which can do things like the exposé effect from Mac OS.
Another very useful feature of these Desktop Environments is virtual desktops. This lets you set up multiple desktops which can each contain different windows, and then move your windows around them however you like. This gives you much more space to deal with windows, so you could have GIMP’s windows arranged nicely on one desktop, Inkscape open on another and then Firefox/Chrome on a third. Then you can simply use the pager or keyboard shortcuts to switch between them.
With regards to Gaming, it is true that most games don’t have Linux versions. However there are a few that do (such as Minecraft and Doom 3) and for others you can try to use a program called WINE to get them to work. Of course, Flash games will work fine in your browser, and there are the standard card and puzzle games that you get with most distributions.
Overall, Linux is a very useful desktop operating system. It can handle most of the things you want it to, and many that others cannot.

Clock adjustments and adjtime

This morning, the clocks went back 1 hour from BST to GMT. This was nice, as it meant we gained an hour. However, it also means that we need to set all of the clocks back. Most of the clocks must be changed manually, but computers usually adjust themselves automatically.

Unrelatedly, there can sometimes be slight problems with hardware clocks: they can systematically gain time by a couple of seconds per day, and so must be adjusted. For example, my server gains 2.072 seconds per day and must be automatically set back (you can read more about this in the hwclock(8) man page). To do this, a file called adjtime (at /var/lib/hwclock/adjtime on Arch Linux, but usually /etc/adjtime) stores the systematic drift and is used to automatically compensate for it. Usually, this works fine and ensures that the clock is correct.

The problem occurs when the clocks go back and hwclock mistakes the daylight savings time adjustment for a systematic bias in the system clock. hwclock decides to store something like -3598 seconds per day in the adjtime file, which causes the time to appear to creep backwards at around an hour per day. This is obviously not what was wanted, and can be fixed by simply removing the adjtime file:

sudo rm /etc/adjtime

All you have to do now is set your clock to the real time. I prefer to use openntpd for this, as synchronizing the time with a time server should be more accurate than entering it manually.

sudo /etc/rc.d/openntpd start

Grammar finished!

I have finally finished the grammar (and LALR(1) parser) for language X! The final version is here: Grammar

I have managed to change the syntax for static initializers back to how it was originally (without the class name).

There will probably still be small changes, since I’m not sure exactly how the tokenizer will work, and it will be difficult to tell the difference between a version number (integers separated by dots) and a floating point number. But apart from that, I have now written a full grammar for this programming language.

The next thing to do is to write a tokenizer, and after that work can start on the rest of the compiler. It will eventually compile down to LLVM bitcode as an intermediary to native code; this will allow it to run anywhere LLVM can, without any machine-specific code generation or optimization in the compiler.

However, semantic analysis of the code will have to be done before this, and the code needs to go through at least one more intermediary representation before it can be converted to LLVM bitcode, so (unless you want to help work on it) you can expect the project to take a while.

Grammar changes

I have just made some changes to the grammar, specifically relating to anonymous inner classes and array instantiations.

The problem is caused by the fact that conditionals in if and while statements do not have to have parentheses around them. This causes conflicts between the class member list/array initializer and the block of the if/while statement. Here are two examples:

class X {
  void foo() {
    if blah == new Runnable()
    {} // member list, or block for the if statement?
    {} // block for the if statement, or another block after it?
  }
}

class X {
  void foo() {
    if blah == new X[0]
    {} // array initializer for "new X[0]", or block for the if statement?
    {} // block for the if statement, or another block after it?
  }
}

Since there is no way to tell which is meant in either of these cases, it is necessary to change the grammar and avoid this ambiguity.

For the first example, the syntax for anonymous inner classes has been changed to be slightly more readable, by adding a “class” keyword between the constructor parameters and the member list:

class X {
  void foo() {
    if blah == new Runnable() class
    {} // member list
    {} // block for the if statement
  }
}

For the second example, the offending case has been disallowed. It is no longer possible to use an array initializer if the number of elements in the array is provided. Therefore the new syntax is:

class X {
  void foo() {
    if blah == new X[]
    {} // array initializer for "new X[]"
    {} // block for the if statement
  }
}

This was a difficult decision, however I think it is better to disallow this relatively uncommon scenario in favour of removing the parentheses from if and while statements.

Shift-reduce conflicts

My principles while writing the grammar for this language have been:

  • Do not change the syntax to suit the grammar, unless the proposed new syntax makes more sense than the original.
  • Do as much work as possible during the parsing phase: do not fix a parsing problem by adding rules which add more work to the semantic analysis phase.

The result of this should be a cleaner, easier to use language. However it can cause problems while writing the grammar, usually in the form of shift-reduce conflicts. Some of the conflicts I’ve encountered are below.

Current problem

My current status in the language is that I’ve just started on statement parsing (having finished expressions). However, there’s one problem which looks like it’s very difficult to solve nicely while leaving the syntax unchanged.

The problem is that the start of a tuple type can look exactly like the start of a parenthesised expression. To illustrate the problem, here is an example of what a statement that starts with tuple type looks like:

(Comparable<String>, int, String) x, y, z = foo();

and here is an example of what a statement which starts with a parenthesised expression can look like:

(a < b ? foo : bar).test();

You can quite easily see that both of these start with a left parenthesis, followed by a Name, followed by a left angle bracket. Also, they are both statements, and are allowed in exactly the same places in a program.

The problem here is what the parser does once it gets to the left angle bracket. It can either shift the Name (which has now been reduced to a QName, or qualified name) onto the parse stack in anticipation of the first scenario (a list of type parameters), or it can reduce it into a Field Access Expression, in anticipation of a comparison with some other expression. This is a shift-reduce conflict, and makes it impossible to parse the language without rewriting part of the grammar. In this case, it will be difficult to rewrite the grammar because it spans so much of the Expression and Type hierarchies.

Other (solved) problems

I have encountered a few other problems while writing the grammar for this language. Here are some of them.

Comparison vs Shift Expression

If “>>” is actually two right angle bracket tokens, then it becomes hard to disambiguate between a shift expression and a relational (comparison) expression; the following statements are hard to tell apart:

int a = b >> c;
int a = b > c ? 8 : 16;

In the case of finding something followed by a right angle bracket, the parser can either shift and expect another angle bracket for a shift expression, or it can reduce and expect a relational expression. The easy way to fix this is to make “>>” into a single token. This, however, causes another problem. Type parameters can easily end in “>>”, for example:

Iterator<Comparable<String>> iterator = blah();

To fix this, a set of rules for type parameters had to be devised so that the programmer doesn’t have to add a space between the two angle brackets. It involves rules for TypeParameterListRAngle, which is a type parameter list with a single trailing right angle bracket. The solution requires the rules for both PointerType and Type to have variants which end in right angle brackets, along with TypeParameter, WildcardTypeParameter, TypeArgument, and others (the grammar is at the end of this post).
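
The general shape of these rules looks something like the fragment below; the names and details are only an approximation, not the exact rules from the real grammar:

PointerType                   = QName
                              | QName < TypeParameterListRAngle
TypeParameterListRAngle       = TypeParameterRAngle
                              | TypeParameterList , TypeParameterRAngle
TypeParameterRAngle           = QName >
                              | QName < TypeParameterListDoubleRAngle
                              | ... (other kinds of type parameter, each ending in a trailing >)
TypeParameterListDoubleRAngle = ... (the same pattern again, ending in the single “>>” token)

In Iterator<Comparable<String>>, the inner Comparable<String> together with the “>>” token reduces via the DoubleRAngle variants, so the whole type can be parsed without the programmer ever needing to put a space between the two closing brackets.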

Type vs Expression

This problem was to do with closure creation. The original syntax for closure creation was:

{int -> String throws Exception} intToString = String closure(int x) throws Exception
{
  throw new NotImplementedYetException();
}

But this caused the same conflict as the one I am currently working on. When this happened, I didn’t realize that there were other situations where Type and Expression could both occur in the same place. I now realize that the problem I am currently working on is a more general version of this one, and once it is fixed, the old syntax for closures would work just as well.

However, because I didn’t realize at the time, I came up with this new syntax for closures:

{int -> String throws Exception} intToString = closure(int x -> String) throws Exception
{
  return "" + x;
}

The only reason I switched to this syntax instead of solving the problem was that I thought that, by using the arrow to indicate the return type list, this was more in keeping with the closure’s type signature.

Static Initializer vs Field/Method definition

Static initializers start with the same thing as a field/method definition, a member header. This is basically a combination of an optional access specifier and an optional list of modifiers. The reason for this is to avoid shift-reduce conflicts between the static keyword and the member header on other members. A bad member header for a static initializer (anything but a single “static” modifier) is detected and causes an error during parsing.

The difference between static initializers and fields/methods starts immediately after a member header. A static initializer starts with a block, but a field/method starts with a type, which could be a closure type. Since blocks and closure types both start with open braces, the start of a statement could also be the start of a list of types. This can again cause the Type vs Expression problem, but once that is solved a comma could indicate an assignee list as part of an assignment statement, or a type list as part of a closure type. This may be relatively easily solved by using a right recursive list for both types and assignees, but there is an alternative syntax which would make things slightly easier for the parser:

class Foo
{
  static Foo
  {
    String a, b = "hello", "world!";
  }
}

My argument for this syntax is that it makes it look slightly more like a constructor, which also does initialization. However, it does necessitate more typing (when you actually use a static initializer). So although this has currently been solved by changing the syntax, I would be happy to change it back again if anyone presents a reasonable argument for the old (or indeed any other!) syntax.

Grammar

Here is the current Grammar for the language. It still has conflicts, and there are some things that haven’t been implemented yet, but feel free to have a look anyway. When I have finished the grammar I will likely attach the final version to a blog post.

Generics/Templates

I’ve been thinking more about the generics/templates problem today. I can think of two ways of doing it, each having its own advantages and disadvantages.

Templating

Doing things this way would mean checking the types with the generic type arguments, and then creating a new specialization in machine code for each combination of type parameters. The type parameters would only apply to instances of a class (static members cannot relate to the type parameters).

This has the advantage that separate methods are created for each specialization of the type parameters, which means that if someone uses a tuple as a generic type parameter, it does not have to be autoboxed into a pointer type to be passed to the method (a win for performance, since it requires fewer heap allocations).

It also has a major disadvantage. If a library defines a generic class, then any code using that library will have to include its specializations as part of its own machine code. This could cause problems for dynamically linked programs, for example in the following program:

class LibStack<E>
{
  E[] stack = new E[0];
  void push(E e) { ... }
  E pop() { ... }
  E peek()
  {
    return stack[getFrontPos(stack.length)];
  }
  static int getFrontPos(int size)
  {
    return size - 1;
  }
}

(Try to ignore the bad implementation for a minute; it helps me to demonstrate the problem.)

So far, this works fine. An application writer has decided to use LibStack<Int> in their application, which generates the specialization for Int inside their executable. This works great until the library writers decide to change the front position of the stack to be the start of the array:

static int getFrontPos(int size)
{
  return 0;
}

This piece of code is changed, along with the code for push(), peek() and pop(). Unfortunately, the new code for push(), peek() and pop() will only reach the application’s code when it is recompiled. So the static method will be changed (as it is loaded dynamically from the library), but the push, peek and pop methods are stored in the application’s code, and so the old versions are used. This results in application code such as the following breaking:

void main()
{
  LibStack<Int> stack = new LibStack<Int>();
  stack.push(new Int(1));
  stack.push(new Int(2));
  stack.push(new Int(3));
  println(stack.peek()); // not necessarily how std-out will be implemented in the language
}

With the old version of the library, this will print 3 as intended, however the same binary loading the new version of the library will print 1 instead.

Generics with Run Time Type Information

Doing things this way would provide the same interface to the programmer. The class/method/whatever would be type-checked with generic type arguments, and the type parameters for classes would only apply to the instances of a class, not the static members. The difference is that this way, only one copy of the class and its methods is created in machine code.

This has the advantage that additional machine code is not generated for each specialization, and so it avoids the problem with templates above as well as lowering code size.

The disadvantage of this way of doing things is that each function can only take a certain number of arguments, so tuples and other primitive types must be autoboxed into a pointer type before they can be used by a generic class/method/whatever. This boxing causes a heap allocation (essentially it would internally call a method with ‘new BoxedIntStringLongTuple(a, b, c)’ instead of just ‘(a, b, c)’ as the parameter).

This method currently sounds very much like Java’s Generics, which have type erasure so that you cannot, for example, create an array of a generic type parameter. However there is something that can be done about this: Run Time Type Information. Basically, this involves passing information about the type parameter into the classes/methods/etc that use it. So a generic class would have generic type information embedded into it, and methods of that class could query this type information to create an array of that type (of course, this would really be a boxed type if a primitive type such as a tuple or a closure was used).
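
To make that concrete, here is a very rough sketch, reusing the same hypothetical syntax as the LibStack example above (TypeInfo and createArray are invented names, just to show the idea):

// what the programmer writes:
class Buffer<E>
{
  E[] items = new E[10];
}

// roughly what the single compiled version would look like under this scheme:
class Buffer
{
  TypeInfo eInfo; // run time type information, supplied whenever a Buffer<E> is created
  Object[] items = createArray(eInfo, 10); // consults the RTTI to build an array of the right type
}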

Autoboxing (which causes lots of heap allocations) also provides us with an advantage. If only pointer types are used in generics, then all functions that use generic type parameters have the opportunity to call standard Object functions on them; that’s right: equals(), hashCode() and others can be provided for boxed tuples, closures, and other primitive types. These functions would have to be implemented sensibly. For tuples, they should only delegate their calculations to the values the tuple contains. But if you want to create a Set<{ -> void}>, or a Map<(int, {int -> String}), String>, it should be possible.
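
As a sketch of what that delegation might look like for a boxed two-element tuple (all of these names are invented; in practice the boxing would be generated by the compiler):

class BoxedTuple2
{
  Object first;
  Object second;

  boolean equals(BoxedTuple2 other)
  {
    // delegate entirely to the values the tuple contains
    return first.equals(other.first) && second.equals(other.second);
  }

  int hashCode()
  {
    return first.hashCode() * 31 + second.hashCode();
  }
}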

Again, these are only ideas, comments/suggestions for improvements are appreciated.

Language Update and Syntax Fragments

I’ve been working on my language more recently. Most importantly, I’ve implemented an LALR(1) parser generator to work with. Now that it’s working, I can work on the grammar and AST (Abstract Syntax Tree).

So far, I have the start of a parser that will take a list of tokens (the tokenizer currently just gives a constant list of tokens) and parses it directly into an AST. Currently, it can generate ASTs for the top level of the grammar, as far as the following:

CompilationUnit = epsilon
                | CompilationUnit PackageDeclaration
                | CompilationUnit ImportDeclaration
                | CompilationUnit TypeDefinition
PackageDeclaration = package QName ;
ImportDeclaration = import QName ;
                  | import QName . * ;
                  | import static QName ;
                  | import static QName . * ;
TypeDefinition = ClassDefinition
               | InterfaceDefinition
AccessSpecifier = public | private | package | protected
                | package protected | protected package
                | epsilon
Modifiers = Modifiers Modifier | epsilon
Modifier = static | abstract | final | immutable | synchronized
         | NativeSpecification | transient | volatile
NativeSpecification = native ( StringLiteral )
ClassDefinition     = AccessSpecifier Modifiers class Name ClassExtendsClause ImplementsClause { MemberList }
InterfaceDefinition = AccessSpecifier Modifiers interface Name InterfaceExtendsClause { MemberList }
ClassExtendsClause     = extends PointerType | epsilon
ImplementsClause       = implements InterfaceList | epsilon
InterfaceExtendsClause = extends InterfaceList | epsilon
InterfaceList          = InterfaceList , PointerType | PointerType
QName = Name | QName . Name

That’s just a fairly basic representation of the top level of the language. It’s quite similar to Java in a few respects, but it also has some noticeable differences.

Another thing I’ve been doing is thinking about how certain other parts of the language will work. You’ll notice that there are no generics/templates in the above grammar – this is because I haven’t even fully decided on the syntax or the semantics yet. So here are a few thoughts about various things:

Closure Types

{A -> B} f;
{A, B -> void} g;
{ -> void} run;
{A, B, C -> D, E} h;
{int, String -> String, int} swap;
{File -> String throws IOException} read;
{File, {String -> String[]} -> String[] throws IOException} splitFile;

This is currently my favourite choice for the syntax of representing the type of a closure (or any method pointer). Closures can handle exceptions, which is important if they are to be assigned from methods. They also support tuples for both parameters and results. This is important, as all function arguments and results will be tuples in this language. This allows any set of values to be passed in or out of a function.

They do not, however, support default arguments. Default arguments are really just going to be syntactic sugar applied when calling certain types of functions, and you cannot find the values of the default arguments without also being able to see which function is being called. This restriction will also affect inheritance of functions with default arguments.

Closure Creation

Closures can be created in two different ways. The first, assigning from a method, is trivially simple:

(String, int) swap(int x, String y)
{
   return y, x;
}
...in some other piece of code...
{int, String -> String, int} swapValues = swap; // if there are multiple definitions of swap(), this infers the correct one

The other way, creating an anonymous closure, is slightly more involved:

{int, String -> String, int} swapClosure = (int, String) closure(String a, int b)
{
   return b, a;
}; // the semicolon is required as the closure creation is part of the expression, not its own statement

This creates an anonymous closure using approximately the same syntax as a method definition. The differences are that access specifiers and modifiers are not necessary, and that the anonymous closure specification does not have a name; it uses the ‘closure’ keyword in place of one, because a closure has no use for a name before it is assigned to a variable.

Templates/Generics

Currently, I’m not sure how this part of the type system should be implemented. There are, however, a few constraints on it:

  • A type parameter must be able to represent:
    • A single class/interface type
    • A primitive type (int, long, double, boolean etc.)
    • A tuple of multiple different types (this facilitates maps where the key/value is a tuple, among other things)
    • A closure (method pointer)
    • Nothing at all (like the Void (with a capital V) type in Java)
  • It must also be able to support some restrictions on type parameters, such as:
    • X is a type which extends/implements some base class/interface Y
    • X is a superclass of some other type Y
  • It must also support wildcards, so that, for example, a method can take a Set<? extends SetItem>

Also, it would be much nicer if it didn’t use type erasure, so that the type information is available at runtime, and so that new instances of a type argument and arrays of instances of a type argument can be created.

It is likely that there will be problems with some of these constraints (there will have to be some boxing of tuples for generic function calls), but having such an expressive type system would be a great advantage in a new language.

If anyone’s reading this and has a suggestion on how to do any of this better (or how to implement templates/generics at all) please don’t hesitate to comment below or email me.