---
title: "Implementing and Verifying \"Static Program Analysis\" in Agda, Part 2: Combining Lattices"
series: "Static Program Analysis in Agda"
description: "In this post, I describe how lattices can be combined to create other, more complex lattices"
date: 2024-08-08T16:40:00-07:00
tags: ["Agda", "Programming Languages"]
---
In the previous post, I wrote about how lattices arise when tracking, comparing
and combining static information about programs. I then showed two simple lattices:
the natural numbers, and the (parameterized) "above-below" lattice, which
extended an arbitrary set with "bottom" and "top" elements (\(\bot\) and \(\top\)
respectively). One instance of the "above-below" lattice was the sign lattice,
which could be used to reason about the signs (positive, negative, or zero)
of variables in a program.
At the end of that post, I introduced a source of complexity: the "full"
lattices that we want to use for the program analysis aren't signs or numbers,
but maps of states and variables to lattice-based descriptions. The full lattice
for sign analysis might take the following form:
{{< latex >}}
\text{Info} \triangleq \text{ProgramStates} \to (\text{Variables} \to \text{Sign})
{{< /latex >}}
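To make this concrete, one element of \(\text{Info}\) might map a particular program
state \(s\) to the variable-sign assignment \(\{x \mapsto +,\ y \mapsto \top\}\); both
the state and the variable names here are purely illustrative.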
Thus, we have to compare and find least upper bounds (for example) of not just
signs, but maps! Proving the various lattice laws for signs was not too
challenging, but for a two-level map like \(\text{Info}\) above, we'd
need to do a lot more work. We need tools to build up such complicated lattices.
The way to do this, it turns out, is by using simpler lattices as building blocks.
To start with, let's take a look at a very simple way of combining lattices
into a new one: taking the [Cartesian product](https://mathworld.wolfram.com/CartesianProduct.html).
### The Cartesian Product Lattice
Suppose you have two lattices \(L_1\) and \(L_2\). As I covered in the previous
post, each lattice comes equipped with a "least upper bound" operator \((\sqcup)\)
and a "greatest lower bound" operator \((\sqcap)\). Since we now have two lattices,
let's use numerical suffixes to disambiguate between the operators
of the first and second lattice: \((\sqcup_1)\) will be the LUB operator of
the first lattice \(L_1\), and \((\sqcup_2)\) of the second lattice \(L_2\),
and so on.
Then, let's take the Cartesian product of the elements of \(L_1\) and \(L_2\);
mathematically, we'll write this as \(L_1 \times L_2\), and in Agda, we can
just use the standard [`Data.Product`](https://agda.github.io/agda-stdlib/master/Data.Product.html)
module. Then, I'll define the lattice as another [parameterized module](https://agda.readthedocs.io/en/latest/language/module-system.html#parameterised-modules). Since both \(L_1\) and \(L_2\)
are lattices, this parameterized module will require `IsLattice` instances
for both types:
{{< codelines "Agda" "agda-spa/Lattice/Prod.agda" 1 7 "hl_lines=7" >}}
Elements of \(L_1 \times L_2\) are in the form \((l_1, l_2)\), where
\(l_1 \in L_1\) and \(l_2 \in L_2\). Knowing that, let's define what it means
for two such elements to be equal. Recall that
we opted for a [custom equivalence relation]({{< relref "01_spa_agda_lattices#definitional-equality" >}})
instead of definitional equality to allow similar elements to be considered
equal; we'll have to define a similar relation for our new product lattice.
That's easy enough: we have an equality predicate `_≈₁_` that checks if an element
of \(L_1\) is equal to another, and we have `_≈₂_` that does the same for \(L_2\).
It's reasonable to say that _pairs_ of elements are equal if their respective
2024-05-20 22:46:18 -07:00
first and second elements are equal:
{{< latex >}}
(l_1, l_2) \approx (j_1, j_2) \iff l_1 \approx_1 j_1 \land l_2 \approx_2 j_2
{{< /latex >}}
In Agda:
{{< codelines "Agda" "agda-spa/Lattice/Prod.agda" 39 40 >}}
Verifying that this relation has the properties of an equivalence relation
boils down to the fact that `_≈₁_` and `_≈₂_` are themselves equivalence
relations.
{{< codelines "Agda" "agda-spa/Lattice/Prod.agda" 42 48 >}}
Defining \((\sqcup)\) and \((\sqcap)\) by simply applying the
2024-05-21 19:04:08 -07:00
corresponding operators from \(L_1\) and \(L_2\) seems quite natural as well.
{{< latex >}}
(l_1, l_2) \sqcup (j_1, j_2) \triangleq (l_1 \sqcup_1 j_1, l_2 \sqcup_2 j_2) \\
(l_1, l_2) \sqcap (j_1, j_2) \triangleq (l_1 \sqcap_1 j_1, l_2 \sqcap_2 j_2)
{{< /latex >}}
As an example, consider the product lattice \(\text{Sign}\times\text{Sign}\),
which is made up of pairs of signs that we talked about in the previous
post. Two elements of this lattice are \((+, +)\) and \((+, -)\). Here's
how the \((\sqcup)\) operation is evaluated on them:
{{< latex >}}
(+, +) \sqcup (+, -) = (+ \sqcup + , + \sqcup -) = (+ , \top)
{{< /latex >}}
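Dually, assuming the sign lattice's meet from the previous post (where \(+ \sqcap - = \bot\)),
the \((\sqcap)\) operation on the same elements gives:
{{< latex >}}
(+, +) \sqcap (+, -) = (+ \sqcap + , + \sqcap -) = (+ , \bot)
{{< /latex >}}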
In Agda, the definition is written very similarly to its mathematical form:
{{< codelines "Agda" "agda-spa/Lattice/Prod.agda" 50 54 >}}
All that's left is to prove the various (semi)lattice properties. Intuitively,
we can see that since the "combined" operator `_⊔_` just independently applies
the element operators `_⊔₁_` and `_⊔₂_`, as long as they are idempotent,
commutative, and associative, so is the "combined" operator itself.
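For instance, idempotence of the combined operator follows in one step from
idempotence of each component operator:
{{< latex >}}
(l_1, l_2) \sqcup (l_1, l_2) = (l_1 \sqcup_1 l_1,\ l_2 \sqcup_2 l_2) \approx (l_1, l_2)
{{< /latex >}}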
Moreover, the proofs that `_⊔_` and `_⊓_` form semilattices are identical
up to replacing \((\sqcup)\) with \((\sqcap)\). Thus, in Agda, we can write
the code once, parameterizing it by the binary operators involved (and proofs
that these operators obey the semilattice laws).
{{< codelines "Agda" "agda-spa/Lattice/Prod.agda" 56 82 >}}
Above, I used `f₁` to stand for "either `_⊔₁_` or `_⊓₁_`", and similarly
`f₂` for "either `_⊔₂_` or `_⊓₂_`". Much like the semilattice properties,
proving lattice properties boils down to applying the lattice properties of
\(L_1\) and \(L_2\) to individual components.
{{< codelines "Agda" "agda-spa/Lattice/Prod.agda" 84 96 >}}
This concludes the definition of the product lattice, which is made up of
two other lattices. If we have a type of analysis that can be expressed as
{{< sidenote "right" "pair-note" "a pair of two signs," >}}
Perhaps the signs are the smallest and largest possible values of a variable.
{{< /sidenote >}} for example, we won't have to do all the work of
proving the (semi)lattice properties of those pairs. In fact, we can build up
even bigger data structures. By taking a product twice, like
\(L_1 \times (L_2 \times L_3)\), we can construct a lattice of 3-tuples. Any
of the lattices involved in that product can itself be a product; we can
therefore create lattices out of arbitrary bundles of data, so long as
the smallest pieces that make up the bundles are themselves lattices.
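For example, in the nested lattice \(\text{Sign} \times (\text{Sign} \times \text{Sign})\),
a join is computed by applying the component joins all the way down:
{{< latex >}}
(+, (\bot, -)) \sqcup (-, (\bot, +)) = (+ \sqcup -, (\bot \sqcup \bot, - \sqcup +)) = (\top, (\bot, \top))
{{< /latex >}}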
Products will come in very handy a bit later in this series. For now though,
our goal is to create another type of lattice: the map lattice. We will
take the same approach we did with products: assuming the elements of the
map are lattices, we'll prove that the map itself is a lattice. Then, just
like we could put products inside products when building up lattices, we'll
be able to put a map inside a map. This will allow us to represent the
\(\text{Info}\) lattice, which is a map of maps.
### The Map Lattice
#### The Theory
When I say "map", what I really mean is something that associates keys with
values, like [dictionaries in Python](https://docs.python.org/3/tutorial/datastructures.html#dictionaries).
This data structure need not have a value for every possible key; a very precise
author might call such a map a "partial map". We might have a map
whose value (in Python-ish notation) is `{ "x": +, "y": - }`. Such a map states
that the sign of the variable `x` is `+`, and the sign of variable `y` is
`-`. Another possible map is `{ "y": +, "z": - }`; this one states that
the sign of `y` is `+`, and the sign of another variable `z` is `-`.
Let's start thinking about what sort of lattice our maps will form.
The thing that [motivated our introduction]({{< relref "01_spa_agda_lattices#specificity" >}})
of lattices was comparing static information by "specificity", so let's try to figure out how
to compare maps. For that, we can begin small, by looking at singleton maps.
If we have `{"x": +}` and `{"x": ⊤}`, which one of them is smaller? Well, we have
previously established that `+` is more specific (and thus less than) `⊤`. Thus,
it shouldn't be too much of a stretch to say that for singleton maps of the same
key, the one with the smaller value is smaller.
Now, what about a pair of singleton maps like `{"x": +}` and `{"y": ⊤}`? Among
these two, each contains some information that the other does not. Although the
value of `y` is larger than the value of `x`, it describes a different key, so
it seems wrong to use that to call the `y`-singleton "larger". Let's call
these maps incompatible, then. More generally, if we have two maps and each one
has a key that the other doesn't, we can't compare them.
If only one map has a unique key, though, things are different. Take for
instance `{"x": +}` and `{"x": +, "y": +}`. Are they really incomparable?
The keys that the two maps do share can be compared (`+ <= +`, because they're
equal).
All of the above leads to the following conventional definition, which I find
easier to further motivate using \((\sqcup)\) and \((\sqcap)\)
(and [do so below]({{< relref "#union-as-or" >}})).
> A map `m1` is less than or equal to another map `m2` (`m1 <= m2`) if for
> every key `k` that has a value in `m1`, the key also has a value in `m2`, and
> `m1[k] <= m2[k]`.
That definition matches our intuitions so far. The only key in `{"x": +}` is `x`;
this key is also in `{"x": ⊤}` (check) and `+ < ⊤` (check). On the other hand,
both `{"x": +}` and `{"y": ⊤}` have a key that the other doesn't, so the
definition above is not satisfied. Finally, for `{"x": +}` and
`{"x": +, "y": +}`, the only key in the former is also present in the latter,
and `+ <= +`; the definition is satisfied.
Next, we need to define the \((\sqcup)\) and \((\sqcap)\) operators that match
our definition of "less than or equal". Let's start with \((\sqcup)\). For two
maps \(m_1\) and \(m_2\), the join of those two maps, \(m_1 \sqcup m_2\) should
be greater than or equal to both; in other words, both sub-maps should be less
than or equal to the join.
Our newly-introduced condition for "less than or equal"
requires that each key in the smaller map be present in the larger one; as
a result, \(m_1 \sqcup m_2\) should contain all the keys in \(m_1\) __and__
all the keys in \(m_2\). So, we could just take the union of the two maps:
copy values from both into the result. Only, what happens if both \(m_1\)
and \(m_2\) have a value mapped to a particular key \(k\)? The values in the two
maps could be distinct, and they might even be incomparable. This is where the
second part of the condition kicks in: the value in the combination of the
maps needs to be bigger than the value in either sub-map. We already know how
to get a value that's bigger than two other values: we use a join on the
values!
Thus, define \(m_1 \sqcup m_2\) as a map that has all the keys
from \(m_1\) and \(m_2\), where the value at a particular key is given
as follows:
{{< latex >}}
(m_1 \sqcup m_2)[k] =
\begin{cases}
m_1[k] \sqcup m_2[k] & k \in m_1, k \in m_2 \\
m_1[k] & k \in m_1, k \notin m_2 \\
m_2[k] & k \notin m_1, k \in m_2
\end{cases}
{{< /latex >}}
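For example, using signs as values (and recalling that \(+ \sqcup - = \top\)),
joining two maps combines the shared key and keeps everything else:
{{< latex >}}
\{x \mapsto +\} \sqcup \{x \mapsto -,\ y \mapsto +\} = \{x \mapsto \top,\ y \mapsto +\}
{{< /latex >}}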
If you're familiar with set theory, this operation is like
{{< sidenote "right" "map-union-note" "an extension of the union operator \((\cup)\)" >}}
There are, of course, other ways to extend the "union" operation to maps.
Haskell, for instance, defines it in a "left-biased" way (preferring the
elements from the left operand of the operation when duplicates are encountered).<br>
<br>
However, a "join" operation \((\sqcup)\) defined on the values
stored in the map gives us an extra tool to work with. As a result, I would
argue that our extension, given such an operator, is the most natural.
{{< /sidenote >}} to maps. In fact, this begins to motivate
the choice to use \((\sqcup)\) to denote this operation. A further bit of
motivation is this:
[we've already seen ]({{< relref "01_spa_agda_lattices#lub-glub-or-and" >}} )
that the \((\sqcup)\) and \((\sqcap)\) operators correspond to "or"
and "and". The elements in the union of two sets are precisely
those that are in one set __or__ the other. Thus, using union here fits our
notion of how the \((\sqcup)\) operator behaves.
{#union-as-or}
Now, let's take a look at the \((\sqcap)\) operator. For two maps \(m_1\) and
\(m_2\), the meet of those two maps, \(m_1 \sqcap m_2\) should be less than
or equal to both. Our definition above requires that each key of the smaller
map is present in the larger map; for the combination of two maps to be
smaller than both, we must ensure that it only has keys present in both maps.
To combine the elements from the two maps, we can use the \((\sqcap)\) operator
on values.
{{< latex >}}
(m_1 \sqcap m_2)[k] = m_1[k] \sqcap m_2[k]
{{< /latex >}}
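For example, with sign values (where \(+ \sqcap - = \bot\)), only the shared key
survives, and its two values are combined with the meet:
{{< latex >}}
\{x \mapsto +,\ y \mapsto -\} \sqcap \{x \mapsto -,\ z \mapsto +\} = \{x \mapsto \bot\}
{{< /latex >}}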
Turning once again to set theory, we can think of this operation like the
extension of the intersection operator \((\cap)\) to maps. This can be
motivated in the same way as the union operation above; the \((\sqcap)\)
operator combines lattice elements in such a way that the result represents
both of them, and intersections of sets contain elements that are in __both__
sets.
Now we have the two binary operators and the comparison function in hand.
There's just one detail we're missing: what it means for two maps to be
equivalent. Here, once again we take our cue from set theory: two sets are
said to be equal when each one is a subset of the other. Mathematically, we can
write this as follows:
{{< latex >}}
m_1 \approx m_2 \triangleq m_1 \subseteq m_2 \land m_1 \supseteq m_2
{{< /latex >}}
I might as well show you the Agda definition of this, since it's a word-for-word
transliteration:
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 530 531 >}}
Defining equivalence more abstractly this way helps avoid concerns about the
precise implementation of our maps.
Okay, but we haven't actually defined what it means for one map to be a subset
of another. My definition is as follows: if \(m_1 \subseteq m_2\), that is,
if \(m_1\) is a subset of \(m_2\), then every key in \(m_1\) is also present
in \(m_2\), and is mapped to the same value. My first stab at
a mathematical definition of this is the following:
{{< latex >}}
m_1 \subseteq m_2 \triangleq \forall k, v.\ (k, v) \in m_1 \Rightarrow (k, v) \in m_2
{{< /latex >}}
Only there's a slight complication; remember that our values themselves come
from a lattice, and that this lattice might use its own equivalence operator
\((\approx)\) to group similar elements. One example where this is important
is our now-familiar "map of maps" scenario: the values stored in the "outer"
map are themselves maps, and we don't want the order of the keys or other
menial details of the inner maps to influence whether the outer maps are equal.
Thus, we settle for a more robust definition of \(m_1 \subseteq m_2\)
that allows \(m_1\) to have different-but-equivalent values from those
in \(m_2\).
{{< latex >}}
m_1 \subseteq m_2 \triangleq \forall k, v.\ (k, v) \in m_1 \Rightarrow \exists v'.\ v \approx v' \land (k, v') \in m_2
{{< /latex >}}
In Agda, the core of my definition is once again very close:
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 98 99 >}}
#### The Implementation
Now it's time to show you how I implemented the Map lattice. I chose
to represent maps using a list of key-value pairs, along with a condition
that the keys are unique (non-repeating). I chose this definition because
it was simple to implement, and because it makes it possible to iterate
over the keys of a map. That last property is useful if we use the maps
to later represent sets (which I did). Moreover, lists of key-value pairs are
easy to serialize and write to disk. This isn't hugely important for my
immediate static program analysis needs, but it might be nice in the future.
The requirement that the keys are unique prevents the map from being a multi-map
(which might have several values associated with a particular key).
My `Map` module is parameterized by the key and value types (`A` and `B`
respectively), and requires some additional properties to
be satisfied by these types.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 6 10 >}}
For `A`, the key property is the
[decidability](https://en.wikipedia.org/wiki/Decidability_(logic)) of
equality: there should be a way to compare keys for equality. This is
important for all sorts of map operations. For example, when inserting a new
value into a map, we need to decide if its key is already present (so that
we know to overwrite the old value), but if we can't check whether two keys are
equal, we can't tell if the key is already there.
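As a small illustration of why this matters (a standalone sketch, not code from my
`Map` module), here is how a lookup over a key-value list might use a decision
procedure `_≟_` on keys; the names and the choice of `String` keys are purely
for this example:

```agda
module LookupSketch where

open import Data.List using (List; []; _∷_)
open import Data.Maybe using (Maybe; just; nothing)
open import Data.Product using (_×_; _,_)
open import Data.String using (String)
open import Data.String.Properties using (_≟_)
open import Relation.Nullary using (yes; no)

-- Walk the key-value list; at each entry, use the decision procedure on
-- keys to check whether this is the entry we're looking for.
lookup : {B : Set} → String → List (String × B) → Maybe B
lookup k [] = nothing
lookup k ((k′ , v) ∷ rest) with k ≟ k′
... | yes _ = just v
... | no  _ = lookup k rest
```

Without the `_≟_` argument, the `with` step above would have nothing to branch on.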
The values of the map (represented by `B`) are expected to be lattices, so
we require them to provide the lattice operations \((\sqcup)\) and \((\sqcap)\),
as well as the equivalence relation \((\approx)\) and the proof of the lattice
properties in `isLattice`. To distinguish the lattice operations on `B`
from the ones we'll be defining on the map itself -- you might've
noticed that there's a bit of overloading going on in this post -- I've
suffixed them with the subscript `2`. My convention is to use the subscript
corresponding to the number of the type parameter. Here, `A` is "first" and `B`
is "second", so the operators on `B` get `2`.
From there, I define the map as a pair; the first component is the list of
key-value pairs, and the second is the proof that all the keys in the
list occur only once.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 480 481 >}}
Now, to implement union and intersection. For the most part, the work deals
just with the first component of the map -- the key-value pairs. For union,
the key operation is "insert-or-combine". We can think of merging two maps
as inserting all the keys from one map (arbitrarily, the "left") into the
other. If a key is not in the "left" map, insertion won't do anything to its
prior value in the right map; similarly, if a key is not in the "right" map,
then it should appear unchanged in the final result after insertion. Finally,
if a key is inserted into the "right" map, but already has a value there, then
the two values need to be combined using `_⊔₂_`. This leads to the following
definition of `insert` on key-value pair lists:
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 114 118 >}}
Above, `f` is just a stand-in for `_⊔₂_` (making the definition a tiny bit more general).
For each element in the "right" key-value list, we check if its key matches
the one we're inserting; if it does, we have to combine the values, and
there's no need to recurse into the rest of the list. If on the other hand
the key doesn't match, we move on to the next element of the list. If we
run out of elements, we know that the key we're inserting wasn't in the "right"
map, so we insert it as-is.
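For instance (taking `f` to be `_⊔₂_` and using signs as values), inserting the
pair `("x", +)` into the list `[("y", -), ("x", -)]` leaves the `y` entry untouched
and combines the two `x` values, yielding `[("y", -), ("x", ⊤)]`.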
The union operation is just about inserting every pair from one map into another.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 120 121 >}}
Here, I defined my own version of `foldr` which unpacks the pairs, for
convenience:
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 110 112 "" "**(Click here to see the definition of my `foldr`)**" >}}
For intersection, we do something similar; however, since only elements in
_both_ maps should be in the final output, if our "insertion" doesn't find
an existing key, it should just fall through; this can be achieved by defining
a version of `insert` whose base case simply throws away the input. Of course,
this function should also use `_⊓₂_` instead of `_⊔₂_`; below, though, I again
use a stand-in function `f` to keep the definition more general. I called this
version of the function `update`.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 295 299 >}}
Just changing `insert` to `update` is not enough. While it's true that calling
`update` with all keys from `m1` on `m2` would forget all keys unique to `m1`,
it would still leave behind the only-in-`m2` keys. To get rid of these, I
defined another function, `restrict`, that drops all keys in its second
argument that aren't present in its first argument.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 304 308 >}}
Altogether, intersection is defined as follows, where `updates` just
calls `update` for every key-value pair in its first argument.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 310 311 >}}
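As a sanity check, consider (hypothetically) intersecting `{"x": +}` with
`{"x": -, "y": +}`, using signs as values: `update` combines the two `x` values
into `+ ⊓ - = ⊥`, and `restrict` drops `y`, which appears in only one of the maps,
leaving `{"x": ⊥}` -- exactly what the mathematical definition of \((\sqcap)\)
above prescribes.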
The next hurdle is all the proofs about these implementations. I will
leave the details of the proofs either as appendices or as links to
other posts on this site.
The first key property is that the insertion, union, update, and intersection operations all
preserve uniqueness of keys; the [proofs for this are here](#appendix-proof-of-uniqueness-of-keys).
The next set of properties consists of the lattice laws for union and intersection.
The proofs of those proceed by cases; to prove that \((\sqcup)\) is
commutative, we reason that if \((k , v) \in m_1 \sqcup m_2\), then it must be
either in \(m_1\), in \(m_2\), or in both; for each of these three possible
cases, we can show that \((k , v)\) must also appear in \(m_2 \sqcup m_1\).
Things get even more tedious for proofs of associativity, since there are
7 cases to consider; I describe the strategy I used for such proofs
in my [article about the "Expression" pattern]({{< relref "agda_expr_pattern" >}})
in Agda.
### Additional Properties of Lattices
The product and map lattices are the two pulling the most weight in my
implementation of program analyses. However, there's an additional property
that they have: if the lattices they are made of have a _finite height_,
then so do products and map lattices themselves. A lattice having a finite
height means that we can only line up so many elements using the less-than
operator `<` . For instance, the natural numbers are _not_ a finite-height lattice;
we can create the infinite chain:
{{< latex >}}
0 < 1 < 2 < \ldots
{{< /latex >}}
On the other hand, our sign lattice _is_ of finite height; the longest chains
we can make have three elements and two `<` signs. Here's one:
{#sign-three-elements}
{{< latex >}}
\bot < + < \top
{{< /latex >}}
As a result of this, _pairs_ of signs also have a finite height; the longest
chains we can make have five elements and four `<` signs.
{{< sidenote "right" "example-note" "An example:" >}}
Notice that the elements in the example progress the same way as the ones
in the single-sign chain. This is no accident; the longest chains in the
pair lattice can be constructed from longest chains of its element
lattices. The length of the product lattice chain, counted by the number of
"less than" signs, is the sum of the lengths of the element chains.
{{< /sidenote >}}
{{< latex >}}
(\bot, \bot) < (\bot, +) < (\bot, \top) < (+, \top) < (\top, \top)
{{< /latex >}}
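If we write \(\text{height}(L)\) for the length, counted in \(<\) signs, of the
longest chain in \(L\), this observation can be summed up as:
{{< latex >}}
\text{height}(L_1 \times L_2) = \text{height}(L_1) + \text{height}(L_2)
{{< /latex >}}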
The same is true for maps, under certain conditions.
The finite-height property is crucial to lattice-based static program analysis;
we'll talk about it in more detail in the next post of this series.
{{< seriesnav >}}
### Appendix: Proof of Uniqueness of Keys
I will provide sketches of the proofs here, and omit the implementations
of my lemmas. Click on the link in the code block headers to jump to their
implementation on my Git server.
First, note that if we're inserting a key that's already in a list, then the keys of that list are unchanged.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 123 124 >}}
On the other hand, if we're inserting a new key, it ends up at the end, and
the rest of the keys are unchanged.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 134 135 >}}
Then, for any given key-value pair, the key either is or isn't in the list we're
inserting it into. If it is, then the list ends up unchanged, and remains
unique if it was already unique. On the other hand, if it's not in the list,
then it ends up at the end; adding a new element to the end of a unique
list produces another unique list. Thus, in either case, the final keys
are unique.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 143 148 >}}
By induction, we can then prove that calling `insert` many times as we do
in `union` preserves uniqueness too. Here, `insert-preserves-Unique` serves
as the inductive step.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 164 168 >}}
For `update`, things are simple; it doesn't change the keys of the argument
list at all, since it only modifies existing pairs and doesn't add new ones. This
is captured by the `update-keys` property:
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 313 314 >}}
If the keys don't change, they obviously remain unique.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 328 330 >}}
For `restrict`, we note that it only ever removes keys; as a result, if
a key was not in the input to `restrict` , then it won't be in its output,
either.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 337 338 >}}
As a result, for each key of the list being restricted, we either drop it
(which does not damage uniqueness) or we keep it; since we only remove
keys, and since the keys were originally unique, the key we kept won't
conflict with any of the other final keys.
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 345 351 >}}
Since both `update` and `restrict` preserve uniqueness, so does
`intersect`:
{{< codelines "Agda" "agda-spa/Lattice/Map.agda" 353 355 >}}