Start working on Part 5 of compiler posts.

Danila Fedorin 2019-08-28 21:11:34 -07:00
parent 05af1350c8
commit 4d8d806706

---
title: Compiling a Functional Language Using C++, Part 5 - Execution
date: 2019-08-06T14:26:38-07:00
draft: true
tags: ["C and C++", "Functional Languages", "Compilers"]
---
We now have trees representing valid programs in our language,
and it's time to think about how to compile them into machine code,
to be executed on hardware. But __how should we execute programs__?
The programs we define are actually lists of definitions. But
you can't evaluate definitions - they just tell you, well,
how things are defined. Expressions, on the other hand,
can be simplified. So, let's start by evaluating
the body of the function called `main`, similarly
to how C/C++ programs start.

Alright, we've made it past that hurdle. Next,
to figure out how to evaluate expressions. It's easy
enough with binary operators: `3+2*6` becomes `3+12`,
and `3+12` becomes `15`. Functions are when things
get interesting. Consider:
```
double (160+3)
```
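Here, `double` isn't defined in this snippet; assume it's a function that
adds its argument to itself, written in our language as something like
`defn double x = { x + x }`.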
There are many perfectly valid ways to evaluate this program.
When we get to a function application, we can first evaluate
the arguments, and then expand the function definition:
```
double (160+3)
double 163
163+163
326
```
Let's come up with a more interesting program to illustrate
execution. How about:
```
data Pair = { P Int Int }
defn fst p = {
case p of {
P x y -> { x }
}
}
defn snd p = {
case p of {
P x y -> { y }
}
}
defn slow x = { returns x after waiting for 1 second }
defn main = { fst (P (slow 320) (slow 6)) }
```
If we follow our rules for evaluating functions,
the execution will go through the following steps:
```
fst (P (slow 320) (slow 6))
fst (P 320 (slow 6)) <- after 1 second
fst (P 320 6) <- after 1 second
320
```
We waited for two seconds, even though we really only
needed to wait one: `fst` never even looks at the second
element of the pair. To avoid this, we could instead define
function application to substitute the arguments into the
function's body before evaluating them:
```
fst (P (slow 320) (slow 6))
(slow 320)
320 <- after 1 second
```
This seems good, until we try doubling an expression again:
```
double (slow 163)
(slow 163) + (slow 163)
163 + (slow 163) <- after 1 second
163 + 163 <- after 1 second
326
```
With only one argument, we've actually spent two seconds on the
evaluation! If we instead tried to triple a number using addition,
we'd spend three seconds.

Observe that with these new rules (called "call by name" in programming language theory),
we only waste time because we evaluate an expression that was passed in more than once.
What if we didn't have to do that? Since we have a functional language, two identical
expressions can never evaluate to different values. Thus, once we know the result of
an expression, we can replace all occurrences of that expression with the result:
```
double (slow 163)
(slow 163) + (slow 163)
163 + 163 <- after 1 second
326
```
We're back down to one second for `double`, and since we're still substituting
parameters before we evaluate them, our earlier `fst` example still takes only
one second, too.

Alright, this all sounds good. How do we go about implementing this?
Since we're substituting whole expressions for variables, we can't
just use plain values. Instead, because expressions are represented with trees,
we might as well consider operating on trees. When we evaluate a tree,
we can replace it in place with what it evaluates to. We'll do this
depth-first, replacing the children of a node with their reduced trees,
and then moving on to the parent.

There's only one problem with this: if we substitute a variable that occurs many times
with the same expression tree, we no longer have a tree! Trees, by definition,
have only one path from the root to any other node. Since we now have
many ways to reach that expression we substituted, we instead have a __graph__.
Indeed, the way we will be executing our functional code is called __graph reduction__.
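To make this more concrete, here's a minimal sketch of what such graph
nodes could look like in C++. The names here are purely illustrative
(not necessarily what we'll end up using); the important detail is that
children are held through shared pointers, so several parents can point
at the same child node:
```
#include <memory>
#include <utility>

// Illustrative node types - not necessarily the ones we'll actually use.

// A node in the expression graph. Concrete node kinds derive from it.
struct node {
    virtual ~node() = default;
};

using node_ptr = std::shared_ptr<node>;

// An integer that has already been fully evaluated.
struct node_num : node {
    int value;
    explicit node_num(int v) : value(v) {}
};

// An application of one expression (left) to another (right). Because
// the children are shared pointers, the same argument graph can appear
// under several parents, and reducing it once updates it for all of them.
struct node_app : node {
    node_ptr left;
    node_ptr right;
    node_app(node_ptr l, node_ptr r)
        : left(std::move(l)), right(std::move(r)) {}
};
```
With a representation like this, substituting `slow 163` for `x` in `x + x`
just means making both occurrences point at the same node - which is exactly
why the result is a graph rather than a tree.
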
### Building Graphs
Naively, we might consider creating a tree for each function at the beginning of our
program, and then, when that function is called, substituting the variables
in it with the arguments of the application. But that approach quickly goes out
the window when we realize that we could be applying a function
multiple times - in fact, an arbitrary number of times. This means we can't
have a single tree, and we must build a new tree every time we call a function.

The question, then, is: how do we construct a new graph? We could
reach into Plato's [Theory of Forms](https://en.wikipedia.org/wiki/Theory_of_forms) and
have a "reference" tree which we then copy every time we apply the function.
But how do you copy a tree? Copying a tree is usually a recursive function,
and __every__ time that we copy a tree, we'll have to look at each node
and decide whether or not to visit its children (or if it has any at all).
If we copy a tree 100 times, we will have to look at each "reference"
node 100 times. Since the reference tree doesn't change, __we'd
be following the exact same sequence of decisions 100 times__. That's
no good!

An alternative approach, one that we'll use from now on, is to instead
convert each function's expression tree into a sequence of instructions
that you can follow to build an identical tree. Every time we have
to apply a function, we'll follow the corresponding recipe for
that function, and end up with a new tree that we continue evaluating.
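To preview the idea (with made-up instruction names - this is not the
actual instruction set yet, just an illustration reusing the node sketch
from earlier), such a recipe could be a small list of stack operations:
push a reference to a global function, push an integer, or pop two graphs
and combine them into an application node:
```
#include <memory>
#include <stack>
#include <string>
#include <utility>
#include <variant>
#include <vector>

// Illustrative instruction set - not the real G-machine instructions yet.

// A reference to a globally defined function, by name.
struct node_global : node {
    std::string name;
    explicit node_global(std::string n) : name(std::move(n)) {}
};

struct push_global { std::string name; }; // push a reference to a global
struct push_int    { int value; };        // push an integer literal
struct make_app    {};                    // pop two nodes, push an application

using instruction = std::variant<push_global, push_int, make_app>;

// Follow a recipe: every call builds a brand-new graph on the stack,
// without ever re-inspecting the function's original syntax tree.
node_ptr build(const std::vector<instruction>& recipe,
               std::stack<node_ptr>& stack) {
    for (const instruction& ins : recipe) {
        if (const auto* g = std::get_if<push_global>(&ins)) {
            stack.push(std::make_shared<node_global>(g->name));
        } else if (const auto* i = std::get_if<push_int>(&ins)) {
            stack.push(std::make_shared<node_num>(i->value));
        } else { // make_app
            node_ptr left = stack.top(); stack.pop();
            node_ptr right = stack.top(); stack.pop();
            stack.push(std::make_shared<node_app>(left, right));
        }
    }
    return stack.top();
}
```
For instance, the recipe `push_int{163}`, `push_global{"double"}`, `make_app{}`
would leave a graph for `double 163` on the stack, and running the same recipe
twice would simply build two fresh graphs.
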
### G-machine
"Instructions" is a very generic term. We will be creating instructions
for a [G-machine](https://link.springer.com/chapter/10.1007/3-540-15975-4_50),
an abstract architecture which we will use to reduce our graphs. The G-machine
is stack-based - all operations push and pop items from a stack. The machine
will also have a "dump", which is a stack of stacks; this will help with
separating function calls.
Besides constructing graphs, the machine will also have operations that will aid
in evaluating graphs.
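Concretely, you can picture the machine's state roughly like the following
C++ sketch (again with illustrative names, reusing `node_ptr` from the
earlier sketches):
```
#include <memory>
#include <stack>

// Illustrative sketch of the machine state described above.
struct gmachine_state {
    // The working stack: operations push nodes onto and pop nodes off
    // this stack while constructing and reducing graphs.
    std::stack<node_ptr> stack;

    // The dump is a stack of stacks: before evaluating a sub-graph on
    // behalf of a function call, the machine can save the current stack
    // here and restore it once that evaluation finishes.
    std::stack<std::stack<node_ptr>> dump;
};
```
Keeping a separate dump means a nested evaluation can't accidentally consume
nodes that belong to the computation that requested it - which is what we mean
by separating function calls.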