Update "compiler: polymorphism" to new math delimiters

Signed-off-by: Danila Fedorin <danila.fedorin@gmail.com>
Danila Fedorin 2024-05-13 18:45:25 -07:00
parent bee06b6731
commit 2fde7e5cf8


@@ -47,7 +47,7 @@ some of our notation from the [typechecking]({{< relref "03_compiler_typecheckin
{{< /latex >}}
But using our rules so far, such a thing is impossible, since there is no way for
\(\text{Int}\) to be unified with \(\text{Bool}\). We need a more powerful
set of rules to describe our program's types.
@@ -72,23 +72,23 @@ Rule|Name and Description
{{< latex >}}
\frac
{x:\sigma \in \Gamma}
{\Gamma \vdash x:\sigma}
{{< /latex >}}| __Var__: If the variable \(x\) is known to have some polymorphic type \(\sigma\) then an expression consisting only of that variable is of that type.
{{< latex >}}
\frac
{\Gamma \vdash e_1 : \tau_1 \rightarrow \tau_2 \quad \Gamma \vdash e_2 : \tau_1}
{\Gamma \vdash e_1 \; e_2 : \tau_2}
{{< /latex >}}| __App__: If an expression \(e_1\), which is a function from monomorphic type \(\tau_1\) to another monomorphic type \(\tau_2\), is applied to an argument \(e_2\) of type \(\tau_1\), then the result is of type \(\tau_2\).
{{< latex >}}
\frac
{\Gamma \vdash e : \tau \quad \text{matcht}(\tau, p_i) = b_i
\quad \Gamma,b_i \vdash e_i : \tau_c}
{\Gamma \vdash \text{case} \; e \; \text{of} \;
\{ (p_1,e_1) \ldots (p_n, e_n) \} : \tau_c }
{{< /latex >}}| __Case__: This rule is not part of Hindley-Milner, and is specific to our language. If the expression being case-analyzed is of type \(\tau\), and each branch \((p_i, e_i)\) is of the same type \(\tau_c\) when its pattern \(p_i\) is matched against type \(\tau\) (producing extra bindings \(b_i\)), then the whole case expression is of type \(\tau_c\).
{{< latex >}}
\frac{\Gamma \vdash e : \sigma' \quad \sigma' \sqsubseteq \sigma}
{\Gamma \vdash e : \sigma}
{{< /latex >}}| __Inst (New)__: If type \(\sigma\) is an instantiation (or specialization) of type \(\sigma'\) then an expression of type \(\sigma'\) is also an expression of type \(\sigma\).
{{< latex >}}
\frac
{\Gamma \vdash e : \sigma \quad \alpha \not \in \text{free}(\Gamma)}
{\Gamma \vdash e : \forall \alpha \; . \; \sigma}
{{< /latex >}}| __Gen (New)__: If an expression has a type with free variables, this rule allows us to generalize it, so that any type may be used in place of those free variables.
Here, there is a distinction between different forms of types. First, there are
monomorphic types, or __monotypes__, \(\tau\), which are types such as \(\text{Int}\),
\(\text{Int} \rightarrow \text{Bool}\), \(a \rightarrow b\)
and so on. These types are what we've been working with so far. Each of them
represents one (hence "mono-") concrete type. This is obvious in the case
of \(\text{Int}\) and \(\text{Int} \rightarrow \text{Bool}\), but
for \(a \rightarrow b\) things are slightly less clear. Does it really
represent a single type, if we can put an arbitrary type in place of \(a\)?
The answer is "yes"! Although \(a\) is not currently known, it stands
in place of another monotype, which is yet to be determined.
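To make this concrete, here's a minimal sketch of how such monotypes could be represented in our C++ compiler. The names mirror the `type_ptr` convention from the typechecking post, but the exact shapes here are an assumption for illustration, not the post's verbatim code:

```cpp
#include <memory>
#include <string>

// A monotype: a type variable, a base type like Int, or a function type.
struct type {
    virtual ~type() = default;
};
using type_ptr = std::shared_ptr<type>;

// A type variable such as a or b: not currently known, but standing
// in place of some yet-to-be-determined monotype.
struct type_var : type {
    std::string name;
    explicit type_var(std::string n) : name(std::move(n)) {}
};

// A concrete base type such as Int or Bool.
struct type_base : type {
    std::string name;
    explicit type_base(std::string n) : name(std::move(n)) {}
};

// A function type tau_1 -> tau_2.
struct type_arr : type {
    type_ptr left, right;
    type_arr(type_ptr l, type_ptr r)
        : left(std::move(l)), right(std::move(r)) {}
};
```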
So, how do we represent polymorphic types, like that of \(\text{if}\)?
We borrow some more notation from mathematics, and use the "forall" quantifier:
{{< latex >}}
\text{if} : \forall a \; . \; \text{Bool} \rightarrow a \rightarrow a \rightarrow a
{{< /latex >}}
This basically says, "the type of \(\text{if}\) is \(\text{Bool} \rightarrow a \rightarrow a \rightarrow a\)
for all possible \(a\)". This new expression using "forall" is what we call a type scheme, or a polytype \(\sigma\).
For simplicity, we only allow "forall" to be at the front of a polytype. That is, expressions like
\(a \rightarrow \forall b \; . \; b \rightarrow b\) are not valid polytypes as far as we're concerned.
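As we'll see at the end of this post, polytypes get their own representation in the compiler, `type_scheme`. Anticipating that, here is a minimal sketch (building on the hypothetical monotype representation above):

```cpp
#include <string>
#include <vector>

// A polytype: "forall"-quantified variables at the front,
// followed by a plain monotype (e.g. Bool -> a -> a -> a).
struct type_scheme {
    std::vector<std::string> forall;  // quantified variable names
    type_ptr monotype;                // the body being quantified over
};
```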
It's key to observe that only some of the typing rules in the above table use polytypes (\(\sigma\)). Whereas expressions
consisting of a single variable can be polymorphically typed, this is not true for function applications and case expressions.
In fact, according to our rules, there is no way to introduce a polytype anywhere into our system!
@@ -127,8 +127,8 @@ this is called __Let-Polymorphism__, which means that only in `let`/`in` expressions can variables
be given a polymorphic type. We, on the other hand, do not (yet) have `let`/`in` expressions, so our polymorphism
is further limited: only functions (and data type constructors) can be polymorphically typed.
Let's talk about what __Inst__ does, and what "\(\sqsubseteq\)" means.
First, why don't we go ahead and write the formal inference rule for \(\sqsubseteq\):
{{< latex >}}
\frac
{\tau' = \{\alpha_i \mapsto \tau_i\} \; \tau \quad \beta_i \not \in \text{free}(\forall \alpha_1 \ldots \forall \alpha_n \; . \; \tau)}
{\forall \alpha_1 \ldots \forall \alpha_n \; . \; \tau \; \sqsubseteq \; \forall \beta_1 \ldots \forall \beta_m \; . \; \tau'}
{{< /latex >}}
In my opinion, this is one of the more confusing inference rules we have to deal with in Hindley-Milner.
In principle, though, it's not too hard to understand. \(\sigma' \sqsubseteq \sigma\) says "\(\sigma'\)
is more general than \(\sigma\)". Alternatively, we can interpret it as "\(\sigma\) is an instance of \(\sigma'\)".
What does it mean for one polytype to be more general than another? Intuitively, we might say that \(\forall a \; . \; a \rightarrow a\) is
more general than \(\text{Int} \rightarrow \text{Int}\), because the former type can represent the latter, and more. Formally,
we define this in terms of __substitutions__. A substitution \(\{\alpha \mapsto \tau \}\) replaces a variable
\(\alpha\) with a monotype \(\tau\). If we can use a substitution to convert one type into another, then the first
type (the one on which the substitution was performed) is more general than the resulting type. In our previous example,
since we can apply the substitution \(\{a \mapsto \text{Int}\}\) to get \(\text{Int} \rightarrow \text{Int}\)
from \(\forall a \; . \; a \rightarrow a\), the latter type is more general; using our notation,
\(\forall a \; . \; a \rightarrow a \sqsubseteq \text{Int} \rightarrow \text{Int}\).
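As a sketch, again using the hypothetical representation from above, applying a single substitution \(\{\alpha \mapsto \tau\}\) to a monotype could look like this:

```cpp
// Apply the substitution {name |-> replacement} to the monotype t,
// rebuilding the type with every occurrence of the variable replaced.
type_ptr substitute(const std::string& name,
                    const type_ptr& replacement, const type_ptr& t) {
    if (auto var = std::dynamic_pointer_cast<type_var>(t))
        return var->name == name ? replacement : t;
    if (auto arr = std::dynamic_pointer_cast<type_arr>(t))
        return std::make_shared<type_arr>(
            substitute(name, replacement, arr->left),
            substitute(name, replacement, arr->right));
    return t;  // base types like Int contain no variables
}
```

Calling this with `name = "a"` and `replacement` encoding \(\text{Int}\) on a type encoding \(a \rightarrow a\) yields \(\text{Int} \rightarrow \text{Int}\), exactly as in the example above.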
That's pretty much all that the rule says, really. But what about the whole thing with \(\beta\) and \(\text{free}\)? The reason
for that part of the rule is that, in principle, we can substitute polytypes into polytypes. However, we can't
do this using \(\{ \alpha \mapsto \sigma \}\). Consider, for example:
{{< latex >}}
\{ a \mapsto \forall b \; . \; b \rightarrow b \} \; \text{Bool} \rightarrow a \rightarrow a \stackrel{?}{=}
\text{Bool} \rightarrow (\forall b \; . \; b \rightarrow b) \rightarrow (\forall b \; . \; b \rightarrow b)
{{< /latex >}}
Hmm, this is not good. Didn't we agree that we only want quantifiers at the front of our types? Instead, to make that substitution,
we only substitute the monotype \(b \rightarrow b\), and then add the \(\forall b\) at the front. But
to do this, we must make sure that \(b\) doesn't occur anywhere in the original type
\(\forall a \; . \; \text{Bool} \rightarrow a \rightarrow a\) (otherwise we can accidentally generalize
too much). So then, our concrete inference rule is as follows:
{{< latex >}}
\frac
{\tau' = \{a \mapsto (b \rightarrow b)\} \; (\text{Bool} \rightarrow a \rightarrow a)
\quad b \not \in \text{free}(\forall a \; . \; \text{Bool} \rightarrow a \rightarrow a)}
{\forall a \; . \; \text{Bool} \rightarrow a \rightarrow a
\; \sqsubseteq \;
\forall b \; . \; \text{Bool} \rightarrow (b \rightarrow b) \rightarrow (b \rightarrow b)}
{{< /latex >}}
In the above rule we:
1. Replaced \(a\) with \(b \rightarrow b\), getting rid of \(a\) in the quantifier.
2. Observed that \(b\) is not a free variable in the original type, and thus can be generalized.
3. Added the generalization for \(b\) to the front of the resulting type.
Now, __Inst__ just allows us to perform specialization / substitution as many times
as we need to get to the type we want.
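Operationally, this instantiation step amounts to swapping a fresh type variable in for each "forall"-quantified variable, producing a plain monotype that unification is then free to specialize. A sketch under the same assumed representation, reusing `substitute` from above (the fresh-name counter is purely illustrative; the compiler's actual source of fresh variables may differ):

```cpp
// Instantiate a polytype: replace each "forall"-quantified variable
// with a fresh type variable, yielding a plain monotype.
type_ptr instantiate(const type_scheme& scheme, int& fresh_counter) {
    type_ptr result = scheme.monotype;
    for (const std::string& var : scheme.forall) {
        type_ptr fresh = std::make_shared<type_var>(
            "t" + std::to_string(fresh_counter++));
        result = substitute(var, fresh, result);
    }
    return result;
}
```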
@@ -188,12 +188,12 @@ as we need to get to the type we want.
Alright, now we have all these rules. How does this change our typechecking algorithm?
How about the following:
1. To every declared function, assign the type \(a \rightarrow ... \rightarrow y \rightarrow z\),
where
{{< sidenote "right" "arguments-note" "\(a\) through \(y\) are the types of the arguments to the function," >}}
Of course, there can be more or fewer than 25 arguments to any function. This is just a generalization;
we use as many input types as are needed.
{{< /sidenote >}} and \(z\) is the function's
return type.
2. We typecheck each declared function, using the __Var__, __Case__, __App__, and __Inst__ rules.
3. Whatever type variables we don't fill in, we assume can be filled in with any type,
@@ -214,15 +214,15 @@ defn testTwo = { if True 0 1 }
If we go through and typecheck them top-to-bottom, the following happens:
1. We start by assuming \(\text{if} : a \rightarrow b \rightarrow c \rightarrow d\),
\(\text{testOne} : e\) and \(\text{testTwo} : f\).
2. We look at `if`. We infer the type of `c` to be \(\text{Bool}\), and thus, \(a = \text{Bool}\).
We also infer that \(b = c\), since they occur in two branches of the same case expression.
Finally, we infer that \(c = d\), since whatever the case expression returns becomes the return
value of the function. Thus, we come out knowing that \(\text{if} : \text{Bool} \rightarrow b
\rightarrow b \rightarrow b\).
3. Now, since we never figured out \(b\), we use __Gen__: \(\text{if} : \forall b \; . \;
\text{Bool} \rightarrow b \rightarrow b \rightarrow b\). Like we'd want, `if` works with
all types, as long as both its inputs are of the same type.
4. When we typecheck the body of `testOne`, we use __Inst__ to turn the above type for `if`
into a single, monomorphic instance. Then, type inference proceeds as before, and all is well.
@@ -231,15 +231,15 @@ and all is well again.
So far, so good. But what if we started from the bottom, and went to the top?
1. We start by assuming \(\text{if} : a \rightarrow b \rightarrow c \rightarrow d\),
\(\text{testOne} : e\) and \(\text{testTwo} : f\).
2. We look at `testTwo`. We infer that \(a = \text{Bool}\) (since
we pass in `True` to `if`). We also infer that \(b = \text{Int}\), and that \(c = \text{Int}\).
Since we are not yet sure of the return type of `if`, this is where we stop. We are left with
the knowledge that \(f = d\) (because the return type of `if` is the return type of `testTwo`),
but that's about it. Since \(f\) is no longer free, we don't generalize, and conclude that \(\text{testTwo} : f\).
3. We look at `testOne`. We infer that \(a = \text{Bool}\) (already known). We also infer
that \(b = \text{Bool}\), and that \(c = \text{Bool}\). But wait a minute! This is not right.
We are back to where we started, with a unification error!
What went wrong? I claim that we typechecked the functions that _used_ `if` before we typechecked `if` itself,
@@ -257,11 +257,11 @@ A transitive closure of a vertex in a graph is the list of all vertices reachable
from that original vertex. Check out the <a href="https://en.wikipedia.org/wiki/Transitive_closure#In_graph_theory">
Wikipedia page on this</a>.
{{< /sidenote >}}
of the function dependencies. We define a function \(f\) to be dependent on another function \(g\)
if \(f\) calls \(g\). The transitive closure will help us find functions that are related indirectly.
For instance, if \(g\) also depends on \(h\), then the transitive closure of \(f\) will
include \(h\), even if \(f\) directly doesn't use \(h\).
2. We isolate groups of mutually dependent functions. If \(f\) depends on \(g\) and \(g\) depends on \(f\),
they are placed in one group. We then construct a dependency graph __of these groups__.
3. We compute a topological order of the group graph. This helps us typecheck the dependencies
of functions before checking the functions themselves. In our specific case, this would ensure
@@ -271,7 +271,7 @@ in a group.
4. We typecheck the function groups, and functions within them, following the above topological order.
To find the transitive closure of a graph, we can use [Warshall's Algorithm](https://cs.winona.edu/lin/cs440/ch08-2.pdf).
This algorithm, with complexity \(O(|V|^3)\), goes as follows:
{{< latex >}}
\begin{aligned}
& A, R^{(i)} \in \mathbb{B}^{n \times n} \\
& R^{(0)} = A \\
& \textbf{for} \; i \leftarrow 1 \ldots n: \\
& \quad \textbf{for} \; j \leftarrow 1 \ldots n: \\
& \quad \quad \textbf{for} \; k \leftarrow 1 \ldots n: \\
& \quad \quad \quad R^{(i)}[j,k] = R^{(i-1)}[j,k] \lor (R^{(i-1)}[j,i] \land R^{(i-1)}[i,k]) \\
& \textbf{return} \; R^{(n)}
\end{aligned}
{{< /latex >}}
In the above notation, \(R^{(i)}\) is the \(i\)th matrix \(R\), and \(A\) is the adjacency
matrix of the graph in question. All matrices in the algorithm are from \(\mathbb{B}^{n \times n}\),
the set of \(n\) by \(n\) boolean matrices. Once this algorithm is complete, we get as output a
transitive closure adjacency matrix \(R^{(n)}\). Mutually dependent functions will be pretty easy to
isolate from this matrix. If \(R^{(n)}[i,j]\) and \(R^{(n)}[j,i]\), then the functions represented by vertices
\(i\) and \(j\) depend on each other.
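To make the matrix formulation concrete, here's a short C++ sketch of Warshall's algorithm together with the mutual-dependency check; the names here are illustrative, not the compiler's actual API:

```cpp
#include <vector>

using bool_matrix = std::vector<std::vector<bool>>;

// Warshall's algorithm: turn an adjacency matrix into its transitive
// closure in place, in O(|V|^3) time. The outermost loop variable i
// plays the role of the allowed intermediate vertex.
void transitive_closure(bool_matrix& r) {
    const std::size_t n = r.size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t k = 0; k < n; ++k)
                r[j][k] = r[j][k] || (r[j][i] && r[i][k]);
}

// Functions j and k are mutually dependent, and thus belong in the
// same group, exactly when each is reachable from the other.
bool mutually_dependent(const bool_matrix& r, std::size_t j, std::size_t k) {
    return r[j][k] && r[k][j];
}
```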
Once we've identified the groups, and
{{< sidenote "right" "group-graph-note" "constructed a group graph," >}}
@@ -596,7 +596,7 @@ a value of a non-existent type), but a mature compiler should prevent this from
On the other hand, here are the steps for function definitions:
1. Find the free variables of each function to create the ordered list of groups as described above.
2. Within each group, insert a general function type (like \(a \rightarrow b \rightarrow c\))
into the environment for each function.
3. Within each group (in the same pass) run typechecking
(including polymorphism, using the rules as described above).
@@ -707,8 +707,8 @@ is also updated to use topological ordering:
The above code uses the yet-unexplained `generalize` method. What's going on?
Observe that the __Var__ rule of the Hindley-Milner type system says that a variable \(x\)
can have a __polytype__ in the environment \(\Gamma\). Our `type_ptr` can only represent monotypes,
so we must change what `type_env` associates with names to a new struct for representing polytypes,
which we will call `type_scheme`. The `type_scheme` struct, just like the formal definition of
a polytype, contains zero or more "forall"-quantified type variables, followed by a monotype which