---
title: Compiling a Functional Language Using C++, Part 5 - Execution
date: 2019-08-06T14:26:38-07:00
tags: ["C and C++", "Functional Languages", "Compilers"]
description: "In this post, we define the rules for a G-machine, the abstract machine that we will target with our compiler."
---

{{< gmachine_css >}}

We now have trees representing valid programs in our language,
and it's time to think about how to compile them into machine code,
to be executed on hardware. But __how should we execute programs__?
The programs we define are actually lists of definitions. But
you can't evaluate definitions - they just tell you, well,
how things are defined. Expressions, on the other hand,
can be simplified. So, let's start by evaluating
the body of the function called `main`, similarly
to how C/C++ programs start.

Alright, we've made it past that hurdle. Next, we need
to figure out how to evaluate expressions. It's easy
enough with binary operators: `3+2*6` becomes `3+12`,
and `3+12` becomes `15`. Functions are where things
get interesting. Consider:
```
double (160+3)
```
There are many perfectly valid ways to evaluate this program.
When we get to a function application, we can first evaluate
the arguments, and then expand the function definition:
```
double (160+3)
double 163
163+163
326
```
Let's come up with a more interesting program to illustrate
execution. How about:
```
data Pair = { P Int Int }
defn fst p = {
case p of {
P x y -> { x }
}
}
defn snd p = {
case p of {
P x y -> { y }
}
}
defn slow x = { returns x after waiting for 1 second }
defn main = { fst (P (slow 320) (slow 6)) }
```
If we follow our rules for evaluating functions,
execution proceeds in the following steps:
```
fst (P (slow 320) (slow 6))
fst (P 320 (slow 6)) <- after 1 second
fst (P 320 6) <- after 1 second
320
```
We waited for two seconds, even though we really only
needed to wait one. To avoid this, we could instead
define our function application to substitute in
the parameters of a function before evaluating them:
```
fst (P (slow 320) (slow 6))
(slow 320)
320 <- after 1 second
```
This seems good, until we try doubling an expression again:
```
double (slow 163)
(slow 163) + (slow 163)
163 + (slow 163) <- after 1 second
163 + 163 <- after 1 second
326
```
With only one argument, we've actually spent two seconds on the
evaluation! If we instead tried to triple using addition,
we'd spend three seconds.

Observe that with these new rules (called "call by name" in programming language theory),
we only waste time because we evaluate an expression that was passed in more than once.
What if we didn't have to do that? Since we have a functional language, there's no way
for two identical expressions to evaluate to different values. Thus,
once we know the result of an expression, we can replace all occurrences of that expression
with the result:
```
double (slow 163)
(slow 163) + (slow 163)
163 + 163 <- after 1 second
326
```
We're back down to one second for `double`, and since we're still substituting parameters
before we evaluate them, the `fst` example still only takes one second.

Alright, this all sounds good. How do we go about implementing this?
Since we're substituting variables for whole expressions, we can't
just use values. Instead, because expressions are represented with trees,
we might as well consider operating on trees. When we evaluate a tree,
we can substitute it in-place with what it evaluates to. We'll do this
depth-first, replacing the children of a node with their reduced trees,
and then moving on to the parent.

There's only one problem with this: if we substitute a variable that occurs many times
with the same expression tree, we no longer have a tree! Trees, by definition,
have only one path from the root to any other node. Since we now have
many ways to reach the expression we substituted, we instead have a __graph__.
Indeed, the way we will be executing our functional code is called __graph reduction__.

### Building Graphs
Naively, we might consider creating a tree for each function at the beginning of our
program, and then, when that function is called, substituting the variables
in it with the parameters of the application. But that approach quickly goes out
the window when we realize that we could be applying a function
multiple times - in fact, an arbitrary number of times. This means we can't
have a single tree, and we must build a new tree every time we call a function.

The question, then, is: how do we construct a new graph? We could
reach into Plato's [Theory of Forms](https://en.wikipedia.org/wiki/Theory_of_forms) and
have a "reference" tree which we then copy every time we apply the function.
But how do you copy a tree? Copying a tree is usually a recursive function,
and __every__ time that we copy a tree, we'll have to look at each node
and decide whether or not to visit its children (or if it has any at all).
If we copy a tree 100 times, we will have to look at each "reference"
node 100 times. Since the reference tree doesn't change, __we'd
be following the exact same sequence of decisions 100 times__. That's
no good!

An alternative approach, one that we'll use from now on, is to instead
convert each function's expression tree into a sequence of instructions
that you can follow to build an identical tree. Every time we have
to apply a function, we'll follow the corresponding recipe for
that function, and end up with a new tree that we continue evaluating.
### G-machine
"Instructions" is a very generic term. Specifically, we will be creating instructions
for a [G-machine](https://link.springer.com/chapter/10.1007/3-540-15975-4_50),
an abstract architecture which we will use to reduce our graphs. The G-machine
is stack-based - all operations push and pop items from a stack. The machine
will also have a "dump", which is a stack of stacks; this will help with
separating evaluation of various graphs.

We will follow the same notation as Simon Peyton Jones in
[his book](https://www.microsoft.com/en-us/research/wp-content/uploads/1992/01/student.pdf),
which was my source of truth when implementing my compiler. The machine
will be executing instructions that we give it, and as such, it must have
an instruction queue, which we will reference as \\(i\\). We will write
\\(x:i\\) to mean "an instruction queue that starts with
an instruction \\(x\\) and ends with instructions \\(i\\)". A stack machine
obviously needs to have a stack - we will call it \\(s\\), and will
adopt a similar notation to the instruction queue: \\(a\_1, a\_2, a\_3 : s\\)
will mean "a stack with the top values \\(a\_1\\), \\(a\_2\\), and \\(a\_3\\),
and remaining stack \\(s\\)". Finally, as we said, our stack
machine has a dump, which we will write as \\(d\\). On this dump,
we will push not only the current stack, but also the current
instructions that we are executing, so we may resume execution
later. We will write \\(\\langle i, s \\rangle : d\\) to mean
"a dump with instructions \\(i\\) and stack \\(s\\) on top,
followed by instructions and stacks in \\(d\\)".

There's one more thing the G-machine will have that we've not yet discussed at all,
and it's needed because of the following quip earlier in the post:

> When we evaluate a tree, we can substitute it in-place with what it evaluates to.

How can we substitute a value in place? Surely we won't iterate over the entire
tree and look for an occurrence of the tree we evaluated. Rather, wouldn't it be
nice if we could update all references to a tree to be something else? Indeed,
we can achieve this effect by using __pointers__. I don't mean specifically
C/C++ pointers - I mean the more general concept of "an address in memory".
The G-machine has a __heap__, much like the heap of a C/C++ process. We
can create a tree node on the heap, and then get an __address__ of the node.
We then have trees use these addresses to link their child nodes.
If we want to replace a tree node with its reduced form, we keep
its address the same, but change the value on the heap.
This way, all trees that reference the node we change become updated,
without us having to change them - their child address remains the same,
but the child has now been updated. We represent the heap
using \\(h\\). We write \\(h[a : v]\\) to say "the address \\(a\\) points
to value \\(v\\) in the heap \\(h\\)". Now you also know why we used
the letter \\(a\\) when describing values on the stack - the stack contains
addresses of (or pointers to) tree nodes.

_Implementing Functional Languages: a tutorial_ also keeps another component
of the G-machine, the __global map__, which maps function names to addresses of nodes
that represent them. We'll stick with this, and call this global map \\(m\\).

Finally, let's talk about what kind of nodes our trees will be made of.
We don't have to include every node that we've defined as a subclass of
`ast` - some nodes we can compile to instructions, without having to build
them. We will also include nodes that we didn't need to represent expressions.
Here's the list of node types we'll have:

* `NInt` - represents an integer.
* `NApp` - represents an application (has two children).
* `NGlobal` - represents a global function (like the `f` in `f x`).
* `NInd` - an "indirection" node that points to another node. This will help with "replacing" a node.
* `NData` - a "packed" node that will represent a constructor with all of its arguments.

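To make this concrete, here's a rough sketch of how these node types might be declared in C++. The names and members here are assumptions for illustration only, not necessarily the representation the compiler will end up using.

```C++
// A hedged sketch of the graph node types; names and members are illustrative.
#include <vector>

struct instruction; // placeholder for the instruction type we define below

struct node {
    virtual ~node() = default;
};

// NInt: an integer literal, e.g. 326.
struct node_int : node {
    int value;
};

// NApp: the application of the graph at `left` to the graph at `right`.
struct node_app : node {
    node* left;
    node* right;
};

// NGlobal: a global function - its arity, and the instructions that build its body.
struct node_global : node {
    int arity;
    std::vector<instruction*> code;
};

// NInd: an indirection, used to "replace" a node without moving it.
struct node_ind : node {
    node* target;
};

// NData: a packed constructor - a tag identifying the constructor, plus its fields.
struct node_data : node {
    int tag;
    std::vector<node*> fields;
};
```
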
With these nodes in mind, let's try defining some instructions for the G-machine.
We start with the instructions we'll use to assemble new versions of function body trees, as we discussed above.
First up is __PushInt__:
{{< gmachine "PushInt" >}}
{{< gmachine_inner "Before" >}}
\( \text{PushInt} \; n : i \quad s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a : s \quad d \quad h[a : \text{NInt} \; n] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Push an integer \(n\) onto the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
Let's go through this. We start with an instruction queue
with `PushInt n` on top. We allocate a new `NInt` with the
number `n` on the heap at address \\(a\\). We then push
the address of the `NInt` node on top of the stack. Next,
__PushGlobal__:
{{< gmachine "PushGlobal" >}}
{{< gmachine_inner "Before" >}}
\( \text{PushGlobal} \; f : i \quad s \quad d \quad h \quad m[f : a] \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a : s \quad d \quad h \quad m[f : a] \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Push a global function \(f\) onto the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
We don't allocate anything new on the heap for this one -
we already have a node for the global function. Next up,
__Push__:
{{< gmachine "Push" >}}
{{< gmachine_inner "Before" >}}
\( \text{Push} \; n : i \quad a_0, a_1, ..., a_n : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a_n, a_0, a_1, ..., a_n : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Push a value at offset \(n\) from the top of the stack onto the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
We define this instruction to work if and only if there exists an address
on the stack at offset \\(n\\). We take the value at that offset, and
push it onto the stack again. This can be helpful for something like
`f x x`, where we use the same tree twice. Speaking of that - let's
define an instruction to combine two nodes into an application:
{{< gmachine "MkApp" >}}
{{< gmachine_inner "Before" >}}
\( \text{MkApp} : i \quad a_0, a_1 : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a : s \quad d \quad h[ a : \text{NApp} \; a_0 \; a_1] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Apply a function at the top of the stack to a value after it.
{{< /gmachine_inner >}}
{{< /gmachine >}}
We pop two things off the stack: first, the thing we're applying, then
the thing we apply it to. We then create a new node on the heap
that is an `NApp` node, with its two children being the nodes we popped off.
Finally, we push it onto the stack.
Let's try using these instructions to get a feel for them. In
order to conserve space, let's use \\(\\text{G}\\) for PushGlobal,
\\(\\text{I}\\) for PushInt, and \\(\\text{A}\\) for MkApp.
Let's say we want to construct a graph for `double 326`. We'll
use the instructions \\(\\text{I} \; 326\\), \\(\\text{G} \; \\text{double}\\),
and \\(\\text{A}\\). Let's watch these instructions play out:
{{< latex >}}
\begin{aligned}
[\text{I} \; 326, \text{G} \; \text{double}, \text{A}] & \quad s \quad & d \quad & h \quad & m[\text{double} : a_d] \\
[\text{G} \; \text{double}, \text{A}] & \quad a_1 : s \quad & d \quad & h[a_1 : \text{NInt} \; 326] \quad & m[\text{double} : a_d] \\
[\text{A}] & \quad a_d, a_1 : s \quad & d \quad & h[a_1 : \text{NInt} \; 326] \quad & m[\text{double} : a_d] \\
[] & \quad a_2 : s \quad & d \quad & h[\substack{\begin{aligned}a_1 & : \text{NInt} \; 326 \\ a_2 & : \text{NApp} \; a_d \; a_1 \end{aligned}}] \quad & m[\text{double} : a_d] \\
\end{aligned}
{{< /latex >}}
How did we come up with these instructions? We'll answer this question with
more generality later, but let's take a look at this particular expression
right now. We know it's an application, so we'll be using MkApp eventually.
We also know that MkApp expects two values on top of the stack from
which to make an application. The node on top has to be the function, and the next
node is the value to be passed into that function. Since a stack is first-in-last-out,
for the function (`double`, in our case) to be on top, we need
to push it last. Thus, we push 326 first, then `double`. Finally,
we call MkApp now that the stack is in the right state.
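
To see the same thing operationally, here is a hedged C++ sketch of how a G-machine might carry out these three graph-building instructions. It reuses the node types sketched above; the global `stack`, the `globals` map, and the helper names are assumptions made purely for illustration.

```C++
// Illustrative only: one possible interpretation of PushInt, PushGlobal and MkApp.
#include <map>
#include <string>
#include <vector>

std::vector<node*> stack;              // the G-machine stack: addresses of nodes
std::map<std::string, node*> globals;  // the global map m

node* alloc_int(int value) {           // allocate an NInt on the heap
    auto* n = new node_int();
    n->value = value;
    return n;
}

void push_int(int value) {             // PushInt n
    stack.push_back(alloc_int(value));
}

void push_global(const std::string& f) { // PushGlobal f
    stack.push_back(globals.at(f));
}

void mk_app() {                        // MkApp
    auto* app = new node_app();
    app->left = stack.back(); stack.pop_back();   // the function
    app->right = stack.back(); stack.pop_back();  // its argument
    stack.push_back(app);
}

void build_double_326() {              // I 326, G double, A
    push_int(326);
    push_global("double");
    mk_app();
}
```
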
Having defined instructions to __build__ graphs, it's now time
to move on to instructions to __reduce__ graphs - after all,
we're performing graph reduction. A crucial instruction for the
G-machine is __Unwind__. What Unwind does depends on what
nodes are on the stack. Its name comes from how it behaves
when the top of the stack is an `NApp` node that is at
the top of a potentially long chain of applications: given
an application node, it pushes its left hand side onto the stack.
It then __continues to run Unwind__. This is effectively a while loop:
application nodes continue to be expanded this way until the left
hand side of an application is finally something
that __isn't__ an application. Let's write this rule as follows:
{{< gmachine "Unwind-App" >}}
{{< gmachine_inner "Before" >}}
\( \text{Unwind} : i \quad a : s \quad d \quad h[a : \text{NApp} \; a_0 \; a_1] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( \text{Unwind} : i \quad a_0, a : s \quad d \quad h[ a : \text{NApp} \; a_0 \; a_1] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Unwind an application by pushing its left node.
{{< /gmachine_inner >}}
{{< /gmachine >}}
Let's talk about what happens when Unwind hits a node that isn't an application. Of all nodes
we have described, `NGlobal` seems to be the most likely to be on top of the stack after
an application chain has finished unwinding. In this case we want to run the instructions
for building the referenced global function. Naturally, these instructions
may reference the arguments of the application. We can find the first argument
by looking at offset 1 on the stack, which will be an `NApp` node, and then going
to its right child. The same can be done for the second and third arguments, if
they exist. But this doesn't feel right - we don't want to constantly be looking
at the right child of a node on the stack. Instead, we replace each application
node on the stack with its right child. Once that's done, we run the actual
code for the global function:
{{< gmachine "Unwind-Global" >}}
{{< gmachine_inner "Before" >}}
\( \text{Unwind} : i \quad a, a_0, a_1, ..., a_{n-1} : s \quad d \quad h[\substack{a : \text{NGlobal} \; n \; c \\ a_k : \text{NApp} \; a_{k-1} \; a_k'}] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( c \quad a_0', a_1', ..., a_{n-1}', a_{n-1} : s \quad d \quad h[\substack{a : \text{NGlobal} \; n \; c \\ a_k : \text{NApp} \; a_{k-1} \; a_k'}] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Call a global function.
{{< /gmachine_inner >}}
{{< /gmachine >}}
In this rule, we used a general rule for \\(a\_k\\), in which \\(k\\) is any number
between 1 and \\(n-1\\). We also expect the `NGlobal` node to contain two parameters,
\\(n\\) and \\(c\\). \\(n\\) is the arity of the function (the number of arguments
it expects), and \\(c\\) are the instructions to construct the function's tree.

The attentive reader will have noticed a catch: we kept \\(a\_{n-1}\\) on the stack!
This once again goes back to replacing a node in-place. \\(a\_{n-1}\\) is the address of the "root" of the
whole expression we're simplifying. Thus, to replace the value at this address, we need to keep
the address until we have something to replace it with.
There's one more thing that can be found at the leftmost end of a tree of applications: `NInd`.
We simply replace `NInd` with the node it points to, and resume Unwind:
{{< gmachine "Unwind-Ind" >}}
{{< gmachine_inner "Before" >}}
\( \text{Unwind} : i \quad a : s \quad d \quad h[a : \text{NInd} \; a' ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( \text{Unwind} : i \quad a' : s \quad d \quad h[a : \text{NInd} \; a' ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Replace indirection node with its target.
{{< /gmachine_inner >}}
{{< /gmachine >}}
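
Pulling the three Unwind rules we have so far into one place, here is a hedged sketch of the loop they describe, continuing the assumptions of the earlier snippets (the `run` call is a placeholder for executing a function's instructions):

```C++
// Illustrative only: Unwind as a loop over the node at the top of the stack.
#include <cstddef>

void unwind() {
    while (true) {
        node* top = stack.back();
        if (auto* app = dynamic_cast<node_app*>(top)) {
            // Unwind-App: keep walking down the left spine of the application chain.
            stack.push_back(app->left);
        } else if (auto* ind = dynamic_cast<node_ind*>(top)) {
            // Unwind-Ind: replace the indirection with its target and keep going.
            stack.back() = ind->target;
        } else if (auto* glob = dynamic_cast<node_global*>(top)) {
            // Unwind-Global: overwrite the top entries with the arguments (each
            // NApp's right child), keeping the root NApp around for a later Update.
            std::size_t t = stack.size() - 1;
            for (int k = 0; k < glob->arity; k++) {
                auto* app = dynamic_cast<node_app*>(stack[t - k - 1]);
                stack[t - k] = app->right;
            }
            // run(glob->code); // then execute the function's instructions
            return;
        } else {
            return; // NInt or NData: nothing left to unwind here
        }
    }
}
```
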
We've talked about replacing a node, and we've talked about indirection, but we
haven't yet defined an instruction to perform these actions. Let's do so now:
{{< gmachine "Update" >}}
{{< gmachine_inner "Before" >}}
\( \text{Update} \; n : i \quad a,a_0,a_1,...a_n : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a_0,a_1,...,a_n : s \quad d \quad h[a_n : \text{NInd} \; a ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Transform node at offset into an indirection.
{{< /gmachine_inner >}}
{{< /gmachine >}}
This instruction pops an address from the top of the stack, and replaces
a node at the given offset with an indirection to the popped node. After
we evaluate a function call, we will use `update` to make sure it's
not evaluated again.

Now, let's talk about data structures. We have mentioned an `NData` node,
but we've given no explanation of how it will work. Obviously, we need
to distinguish values of a type created by different constructors:
if we have a value of type `List`, it could have been created either
using `Nil` or `Cons`. Depending on which constructor was used to
create a value of a type, we might treat it differently. Furthermore,
it's not always possible to know what constructor was used to
create what value at compile time. So, we need a way to know,
at runtime, how the value was constructed. We do this using
a __tag__. A tag is an integer value that will be contained in
the `NData` node. We assign a tag number to each constructor,
and when we create a node with that constructor, we set
the node's tag accordingly. This way, we can easily
tell if a `List` value is a `Nil` or a `Cons`, or
if a `Tree` value is a `Node` or a `Leaf`.
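
For example - and the tag numbers here are just an assumption for illustration - we might give `Nil` tag 0 and `Cons` tag 1, so that the list `Cons 1 (Cons 2 Nil)` becomes a small chain of `NData` nodes, using the node sketch from earlier:

```C++
// Illustrative only: Nil = tag 0 (no fields), Cons = tag 1 (head and tail).
node* example_list() {
    auto* nil = new node_data();
    nil->tag = 0;                            // Nil

    auto* tail = new node_data();
    tail->tag = 1;                           // Cons 2 Nil
    tail->fields = { alloc_int(2), nil };

    auto* list = new node_data();
    list->tag = 1;                           // Cons 1 (Cons 2 Nil)
    list->fields = { alloc_int(1), tail };
    return list;
}
```
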
To operate on `NData` nodes, we will need two primitive operations: __Pack__ and __Split__.
Pack will create an `NData` node with a tag from some number of nodes
on the stack. These nodes will be placed into a dynamically
allocated array:
{{< gmachine "Pack" >}}
{{< gmachine_inner "Before" >}}
\( \text{Pack} \; t \; n : i \quad a_1,a_2,...a_n : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a : s \quad d \quad h[a : \text{NData} \; t \; [a_1, a_2, ..., a_n] ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Pack \(n\) nodes from the stack into a node with tag \(t\).
{{< /gmachine_inner >}}
{{< /gmachine >}}
Split will do the opposite, by popping
off an `NData` node and moving the contents of its
array onto the stack:
{{< gmachine "Split" >}}
{{< gmachine_inner "Before" >}}
\( \text{Split} : i \quad a : s \quad d \quad h[a : \text{NData} \; t \; [a_1, a_2, ..., a_n] ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a_1, a_2, ...,a_n : s \quad d \quad h[a : \text{NData} \; t \; [a_1, a_2, ..., a_n] ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Unpack a data node on top of the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
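
Here's a hedged sketch of how these two instructions might be interpreted, using the same assumed `stack` and node types as the earlier snippets:

```C++
// Illustrative only: Pack moves n nodes into a new NData; Split spreads one back out.
void pack(int tag, int n) {
    auto* data = new node_data();
    data->tag = tag;
    for (int k = 0; k < n; k++) {            // fields end up as [a_1, a_2, ..., a_n]
        data->fields.push_back(stack.back());
        stack.pop_back();
    }
    stack.push_back(data);
}

void split() {
    auto* data = dynamic_cast<node_data*>(stack.back());
    stack.pop_back();
    for (auto it = data->fields.rbegin(); it != data->fields.rend(); ++it)
        stack.push_back(*it);                // pushed in reverse, so a_1 is back on top
}
```
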
These two instructions are a good start, but we're missing something
fairly big: case analysis. After we've constructed a data type,
to perform operations on it, we want to figure out which
constructor and values were used to create it. In order
to implement patterns and case expressions, we'll need another
instruction that's capable of making a decision based on
the tag of an `NData` node. We'll call this instruction __Jump__,
and define it to contain a mapping from tags to instructions
to be executed for a value of that tag. For instance,
if the constructor `Nil` has tag 0, and `Cons` has tag 1,
the mapping for the case expression of a length function
could be written as \\([0 \\rightarrow [\\text{PushInt} \; 0], 1 \\rightarrow [\\text{PushGlobal} \; \\text{length}, ...] ]\\).
Let's define the rule for it:
{{< gmachine "Jump" >}}
{{< gmachine_inner "Before" >}}
\( \text{Jump} [..., t \rightarrow i_t, ...] : i \quad a : s \quad d \quad h[a : \text{NData} \; t \; as ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i_t, i \quad a : s \quad d \quad h[a : \text{NData} \; t \; as ] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Execute instructions corresponding to a tag.
{{< /gmachine_inner >}}
{{< /gmachine >}}
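
As a sketch, under the same assumptions as the earlier snippets, Jump boils down to a lookup keyed on the scrutinee's tag:

```C++
// Illustrative only: pick the instruction sequence matching the tag on top of the stack.
#include <map>
#include <vector>

std::vector<instruction*> jump(const std::map<int, std::vector<instruction*>>& branches) {
    auto* scrutinee = dynamic_cast<node_data*>(stack.back());
    return branches.at(scrutinee->tag);      // these run before the rest of the queue
}
```
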
Alright, we've made it through the interesting instructions,
but there are still a few more that we need, even if they're less shiny and cool.
For instance: imagine we've made a function call. As per the
rules for Unwind, we've placed the right hand sides of all applications
on the stack, and run the instructions provided by the function,
creating a final graph. We then continue to reduce this final
graph. But we've left the function parameters on the stack!
This is untidy. We define a __Slide__ instruction,
which keeps the address at the top of the stack, but gets
rid of the next \\(n\\) addresses:
{{< gmachine "Slide" >}}
{{< gmachine_inner "Before" >}}
\( \text{Slide} \; n : i \quad a_0, a_1, ..., a_n : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a_0 : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Remove \(n\) addresses after the top from the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
Just a few more. Next up, we observe that we have not
defined any way for our G-machine to perform arithmetic,
or indeed, any primitive operations. Since we've
not defined any built-in type for booleans,
let's avoid talking about operations like `<`, `==`,
and so on (in fact, we've omitted them from the grammar so far).
So instead, let's talk about the [closed](https://en.wikipedia.org/wiki/Closure_(mathematics)) operations,
namely `+`, `-`, `*`, and `/`. We'll define a special instruction for
them, called __BinOp__:
{{< gmachine "BinOp" >}}
{{< gmachine_inner "Before" >}}
\( \text{BinOp} \; \text{op} : i \quad a_0, a_1 : s \quad d \quad h[\substack{a_0 : \text{NInt} \; n \\ a_1 : \text{NInt} \; m}] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a : s \quad d \quad h[\substack{a_0 : \text{NInt} \; n \\ a_1 : \text{NInt} \; m \\ a : \text{NInt} \; (\text{op} \; n \; m)}] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Apply a binary operator on integers.
{{< /gmachine_inner >}}
{{< /gmachine >}}
Nothing should be particularly surprising here:
the instruction pops two integers off the stack, applies the given
binary operation to them, and places the result on the stack.
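
A quick sketch of this rule in the style of the earlier snippets (passing the operator as a character is purely an illustrative assumption):

```C++
// Illustrative only: pop two NInt nodes, combine them, push a fresh NInt.
void binop(char op) {
    auto* left = dynamic_cast<node_int*>(stack.back()); stack.pop_back();
    auto* right = dynamic_cast<node_int*>(stack.back()); stack.pop_back();
    int result = 0;
    switch (op) {
        case '+': result = left->value + right->value; break;
        case '-': result = left->value - right->value; break;
        case '*': result = left->value * right->value; break;
        case '/': result = left->value / right->value; break;
    }
    stack.push_back(alloc_int(result));
}
```
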
We're not yet done with primitive operations, though.
We have a lazy graph reduction machine, which means
something like the expression `3*(2+6)` might not
be a binary operator applied to two `NInt` nodes.
We keep graphs around until they __really__ need to
be reduced. So now we need an instruction to trigger
reducing a graph, to say, "we need this value now".
We call this instruction __Eval__. This is where
the dump finally comes in!

When we execute Eval, another graph becomes our "focus", and we switch
to a new stack. We obviously want to return from this once we've finished
evaluating what we "focused" on, so we must store the program state somewhere -
on the dump. Here's the rule:
{{< gmachine "Eval" >}}
{{< gmachine_inner "Before" >}}
\( \text{Eval} : i \quad a : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( [\text{Unwind}] \quad [a] \quad \langle i, s\rangle : d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Evaluate graph to its normal form.
{{< /gmachine_inner >}}
{{< /gmachine >}}
We store the current set of instructions and the current stack on the dump,
and start with only Unwind and the value we want to evaluate.
That does the job, but we're missing one thing - a way to return to
the state we placed onto the dump. To do this, we add __another__
rule to Unwind:
{{< gmachine "Unwind-Return" >}}
{{< gmachine_inner "Before" >}}
\( \text{Unwind} : i \quad a : s \quad \langle i', s'\rangle : d \quad h[a : \text{NInt} \; n] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i' \quad a : s' \quad d \quad h[a : \text{NInt} \; n] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Return from Eval instruction.
{{< /gmachine_inner >}}
{{< /gmachine >}}
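
Here's a hedged sketch of Eval and this new Unwind rule working together, with the dump represented as a stack of saved (instructions, stack) pairs; as before, the concrete representation is an assumption for illustration:

```C++
// Illustrative only: Eval saves <i, s> on the dump; Unwind restores it for an NInt.
#include <utility>
#include <vector>

std::vector<instruction*> queue;  // the current instruction queue i
std::vector<std::pair<std::vector<instruction*>, std::vector<node*>>> dump;

void eval_step() {                             // Eval
    node* a = stack.back(); stack.pop_back();
    dump.push_back({ std::move(queue), std::move(stack) });  // save <i, s>
    queue.clear();                             // new queue: just Unwind
    stack = { a };                             // new stack: just the node to evaluate
    // unwind();
}

void unwind_return(node* a) {                  // Unwind, with an NInt at address a on top
    queue = std::move(dump.back().first);      // restore i'
    stack = std::move(dump.back().second);     // restore s'
    dump.pop_back();
    stack.push_back(a);                        // leave the evaluated node on top: a : s'
}
```
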
Just a couple more special-purpose instructions, and we're done!
Sometimes, it's possible for a tree node to reference itself.
For instance, Haskell defines the
[fixpoint combinator](https://en.wikipedia.org/wiki/Fixed-point_combinator)
as follows:
```Haskell
fix f = let x = f x in x
```
In order to do this, an address that references a node must be present
while the node is being constructed. We define an instruction,
__Alloc__, which helps with that:
{{< gmachine "Alloc" >}}
{{< gmachine_inner "Before" >}}
\( \text{Alloc} \; n : i \quad s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad a_1, a_2, ..., a_n : s \quad d \quad h[a_k : \text{NInd} \; \text{null}] \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Allocate \(n\) indirection nodes, pushing their addresses onto the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
We can allocate an indirection on the stack, and call Update on it when
we've constructed a node. While we're constructing the tree, we can
refer to the indirection when a self-reference is required.
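
A sketch of that allocation step, under the same assumptions as the earlier snippets:

```C++
// Illustrative only: push n placeholder indirections for Update to fill in later.
void alloc(int n) {
    for (int k = 0; k < n; k++) {
        auto* placeholder = new node_ind();
        placeholder->target = nullptr;   // "null" until the real node exists
        stack.push_back(placeholder);
    }
}
```
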
Lastly, we also define a Pop instruction, which just removes
some number of nodes from the stack. We want this because
calling Update at the end of a function modifies a node further up the stack,
leaving anything on top of the stack after that node as scratch work. We get
rid of that scratch work simply by popping it.
{{< gmachine "Pop" >}}
{{< gmachine_inner "Before" >}}
\( \text{Pop} \; n : i \quad a_1, a_2, ..., a_n : s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "After" >}}
\( i \quad s \quad d \quad h \quad m \)
{{< /gmachine_inner >}}
{{< gmachine_inner "Description" >}}
Pop \(n\) nodes from the stack.
{{< /gmachine_inner >}}
{{< /gmachine >}}
That's it for the instructions. Knowing them, however, doesn't
tell us what to do with our `ast` structs. We'll need to define
rules to translate trees into these instructions, and I've already
alluded to this when we went over `double 326`.
However, this has already gotten pretty long,
so we'll do it in the next post: [Part 6 - Compilation]({{< relref "06_compiler_compilation.md" >}}).