This is the first post in a series on static program analysis in Agda. See the introduction for a little bit more context.

The goal of this post is to motivate the algebraic structure called a lattice. Lattices have applications [note: See, for instance, Lars Hupel's excellent introduction to CRDTs, which uses lattices for Conflict-Free Replicated Data Types. CRDTs can be used to implement peer-to-peer distributed systems. ] beyond static program analysis, so the work in this post is interesting in its own right. However, for the purposes of this series, I’m most interested in lattices as an encoding of program information when performing analysis. To start motivating lattices in that context, I’ll need to begin with monotone frameworks.

Monotone Frameworks

The key notion for monotone frameworks is the “specificity” of information. Take, for instance, an analyzer that tries to figure out if a variable is positive, negative, or equal to zero (this is called a sign analysis, and we’ll be using this example a lot). Of course, the variable could be “none of the above” – perhaps if it was initialized from user input, which would allow both positive and negative numbers. Such an analyzer might return +, -, 0, or unknown for any given variable. These outputs are not created equal: if a variable has sign +, we know more about it than if its sign is unknown, since we’ve ruled out zero and negative numbers as possible values!

Specificity is important to us because we want our analyses to be as precise as possible. It would be valid for a program analysis to just return unknown for everything, but it wouldn’t be very useful. Thus, we want to rank possible outputs, and try to pick the most specific one. The convention [note: I say convention, because it doesn't actually matter if we represent more specific values as "larger" or "smaller". Given a lattice with a particular order written as <, we can flip the sign in all relations (turning a < b into a > b), and get back another lattice. This lattice will have the same properties (more precisely, the properties will be dual). So we shouldn't fret about picking a direction for "what's less than what". ] seems to be to make more specific things “smaller” [note: Admittedly, it's a little bit odd to say that something which is "more" than something else is actually smaller. The intuition that I favor is that something that's more specific describes fewer objects: there are fewer white horses than horses, so "white horse" is more specific than "horse". The direction of < can be thought of as comparing the number of objects.

Note that this is only an intuition; there are equally many positive and negative numbers, but we will not group them together in our order. ], and less specific things “larger”. Coming back to our previous example, we’d write + < unknown, since + is more specific. Of course, the exact things we’re trying to rank depend on the sort of analysis we’re trying to perform. Since I introduced sign analysis, we’re ranking signs like + and -. For other analyses, the elements will be different. The comparison, however, will be a permanent fixture.

Suppose now that we have some program analysis, and we’re feeding it some input information. Perhaps we’re giving it the signs of variables x and y, and hoping for it to give us the sign of a third variable z. It would be very unfortunate if, when given more specific information, the analysis returned a less specific output! The more you know going in, the more you should know coming out. Similarly, when given less specific / vaguer information, the analysis shouldn’t produce a more specific answer – how could it do that? This leads us to the following rule:

$$\textbf{if}\ \text{input}_1 \le \text{input}_2, \textbf{then}\ \text{analyze}(\text{input}_1) \le \text{analyze}(\text{input}_2)$$

In mathematics, such a property is called monotonicity. We say that “analyze” is a monotonic function. This property gives its name to monotone frameworks. For our purposes, this property means that being more specific “pays off”: better information in means better information out. In Agda, we can encode monotonicity as follows:

From Lattice.agda, lines 17 through 21

module _ {a b} {A : Set a} {B : Set b}
    (_≼₁_ : A → A → Set a) (_≼₂_ : B → B → Set b) where

    Monotonic : (A → B) → Set (a ⊔ℓ b)
    Monotonic f = ∀ {a₁ a₂ : A} → a₁ ≼₁ a₂ → f a₁ ≼₂ f a₂

Note that above, I defined Monotonic on an arbitrary function, whose outputs might be of a different type than its inputs. This will come in handy later.
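As a quick sanity check, here’s a tiny instance of this definition. It’s a hypothetical example for this post (not from the series’ code), and it assumes Agda’s standard library plus the Monotonic definition above in scope: the successor function on natural numbers is monotonic with respect to the usual ordering.

open import Data.Nat using (ℕ; suc; _≤_; s≤s)

-- From a₁ ≤ a₂ we can conclude suc a₁ ≤ suc a₂, which is exactly
-- what the standard library's s≤s provides.
suc-Monotonic : Monotonic _≤_ _≤_ suc
suc-Monotonic = s≤s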

The order < of our elements and the monotonicity of our analysis are useful to us for another reason: they help gauge and limit, in a roundabout way, how much work might be left for our analysis to do. This matters because we don’t want to allow analyses that can take forever to finish – that’s a little too long for a pragmatic tool used by people.

The key observation – which I will describe in detail in a later post – is that a monotonic analysis, in a way, “climbs upwards” through an order. As we continue using this analysis to refine information over and over, its results get less and less specific [note: It is not a bad thing for our results to get less specific over time, because our initial information is probably incomplete. If you've only seen German shepherds in your life, that might be your picture of what a dog is like. If you then come across a chihuahua, your initial definition of "dog" would certainly not accommodate it. To allow for both German shepherds and chihuahuas, you'd have to loosen the definition of "dog". This new definition would be less specific, but it would be more accurate. ]. If we add an additional ingredient, and say that the order has a finite height, we can deduce that the analysis will eventually stop producing additional information: either it will keep “climbing”, and reach the top (thus having to stop), or it will stop on its own before reaching the top. This is the essence of the fixed-point algorithm, which in Agda-like pseudocode can be stated as follows:

module _ (IsFiniteHeight A)
         (f : A → A)
         (Monotonicᶠ : Monotonic _≼_ _≼_ f) where
    -- There exists a point...
    aᶠ : A

    -- Such that applying the monotonic function doesn't change the result.
    aᶠ≈faᶠ : aᶠ ≈ f aᶠ

Moreover, the value we’ll get out of the fixed point algorithm will be the least fixed point. For us, this means that the result will be “the most specific result possible”.

From Fixedpoint.agda, line 86

aᶠ≼ : ∀ (a : A) → a ≈ f a → aᶠ ≼ a
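To make the “climbing” intuition concrete, here’s a minimal sketch of the iteration itself. This is not the algorithm from Fixedpoint.agda: the real proof gets termination from the finite height of the lattice, while this hypothetical version cheats with a fuel parameter and a decidable equality check.

open import Data.Nat using (ℕ; zero; suc)
open import Relation.Nullary using (Dec; yes; no)
open import Relation.Binary.PropositionalEquality using (_≡_)

module _ {A : Set} (_≟_ : (x y : A) → Dec (x ≡ y)) (f : A → A) where
    -- Apply f repeatedly, stopping once the value no longer changes
    -- (we've hit a fixed point), or once the fuel runs out.
    iterate : ℕ → A → A
    iterate zero    x = x
    iterate (suc n) x with x ≟ f x
    ... | yes _ = x
    ... | no  _ = iterate n (f x)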

The above explanation omits a lot of details, but it’s a start. To get more precise, we must drill down into several aspects of what I’ve said so far. The first of them is, how can we compare program information using an order?

Lattices

Let’s start with a question: when it comes to our specificity-based order, is - less than, greater than, or equal to +? Surely it’s not less specific; knowing that a number is negative doesn’t give you less information than knowing that it’s positive. Similarly, it’s not any more specific, for the same reason. You could consider it equally specific, but that doesn’t seem quite right either; the information is different, so comparing specificity feels apples-to-oranges. On the other hand, both + and - are clearly more specific than unknown.

The solution to this conundrum is to simply refuse to compare certain elements: + is neither less than, greater than, nor equal to -, but + < unknown and - < unknown. Such an ordering is called a partial order.
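We can capture this four-element partial order directly as an Agda data type. This is a hypothetical illustration for this post (the actual development derives the order from the lattice operations instead, as we’ll see below); the names SignU, unknown, and _≤ˢ_ are made up here.

data SignU : Set where
    +ˢ -ˢ 0ˢ unknown : SignU

-- The "at most as specific as" order: every element is related to itself,
-- and the three signs sit below unknown. Crucially, there is no
-- constructor relating +ˢ and -ˢ, so they are incomparable.
data _≤ˢ_ : SignU → SignU → Set where
    ≤ˢ-refl : ∀ {s} → s ≤ˢ s
    +≤u : +ˢ ≤ˢ unknown
    -≤u : -ˢ ≤ˢ unknown
    0≤u : 0ˢ ≤ˢ unknown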

Next, another question. Suppose that the user writes code like this:

if someCondition {
  x = exprA;
} else {
  x = exprB;
}
y = x;

If exprA has sign s1, and exprB has sign s2, what’s the sign of y? It’s not necessarily s1 nor s2, since they might not match: s1 could be +, and s2 could be -, and using either + or - for y would be incorrect. We’re looking for something that can encompass both s1 and s2. Necessarily, it would be either equally specific or less specific than either s1 or s2: there isn’t any new information coming in about x, and since we don’t know which branch is taken, we stand to lose a little bit of info. However, our goal is always to maximize specificity, since more specific signs give us more information about our program.

This gives us the following constraints. Since the combined sign s has to be equally or less specific than both s1 and s2, we have s1 <= s and s2 <= s. However, we want to pick s such that it’s more specific than any other “combined sign” candidate. Thus, if there’s another sign t, with s1 <= t and s2 <= t, then t must be less specific than s: s <= t.

At first, the above constraints might seem quite complicated. We can interpret them in more familiar territory by looking at numbers instead of signs. If we have two numbers n1 and n2, what number is the smallest number that’s bigger than either n1 or n2? Why, the maximum of the two, of course!
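For instance, take $n_1 = 3$ and $n_2 = 5$. The maximum, 5, satisfies all three constraints:

$$3 \le 5, \qquad 5 \le 5, \qquad \textbf{if}\ 3 \le t\ \text{and}\ 5 \le t,\ \textbf{then}\ 5 \le t$$

Any other upper bound $t$ of 3 and 5 is at least 5, so 5 is the least one.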

There is a reason why I used the constraints above instead of just saying “maximum”. For numbers, max(a, b) is either a or b. However, we saw earlier that neither + nor - works as the sign for y in our program. Moreover, we agreed above that our order is partial: how can we pick “the bigger of two elements” if neither is bigger than the other? So max itself doesn’t quite work; instead, we simply require a max-like function for our signs. We call this function “least upper bound”, since it is the “least (most specific) element that’s greater (less specific) than either $s_1$ or $s_2$”. Conventionally, this function is written as $a \sqcup b$ (or in our case, $s_1 \sqcup s_2$). The $(\sqcup)$ symbol is also called the join of $a$ and $b$. We can define it for our signs so far using the following Cayley table.

$$\begin{array}{c|cccc} \sqcup & - & 0 & + & ? \\ \hline - & - & ? & ? & ? \\ 0 & ? & 0 & ? & ? \\ + & ? & ? & + & ? \\ ? & ? & ? & ? & ? \\ \end{array}$$

By using the above table, we can see that $(+ \sqcup -) = ?$ (aka unknown). This is correct; given the four signs we’re working with, that’s the most we can say. Let’s explore the analogy to the max function a little bit more, by observing that this function has certain properties:

- max(a, a) = a: the maximum of a number and itself is just that number (idempotence).
- max(a, b) = max(b, a): the order of the arguments doesn’t matter (commutativity).
- max(max(a, b), c) = max(a, max(b, c)): when taking the maximum of three numbers, it doesn’t matter which pair we combine first (associativity).

A set that has a binary operation (like max or $(\sqcup)$) that satisfies the above properties is called a semilattice. In Agda, we can write this definition roughly as follows:

record IsSemilattice {a} (A : Set a) (_⊔_ : A → A → A) : Set a where
    field
        ⊔-assoc : (x y z : A) → ((x ⊔ y) ⊔ z) ≡ (x ⊔ (y ⊔ z))
        ⊔-comm : (x y : A) → (x ⊔ y) ≡ (y ⊔ x)
        ⊔-idemp : (x : A) → (x ⊔ x) ≡ x

Note that this is an example of the “Is Something” pattern. It turns out to be convenient, however, to not require definitional equality (≡). For instance, we might model sets as lists. Definitional equality would force us to consider lists with the same elements but a different order to be unequal. Instead, we parameterize our definition of IsSemilattice by a binary relation _≈_, which we ask to be an equivalence relation.
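To see why, here’s a tiny hypothetical illustration (not from the series’ code), assuming Agda’s standard library: two lists that represent the same two-element set, but are not propositionally equal.

open import Data.List using (List; _∷_; [])
open import Data.Nat using (ℕ)

-- Both lists denote the set {1, 2}, but 1 ∷ 2 ∷ [] ≡ 2 ∷ 1 ∷ [] does not
-- hold, so a coarser equivalence _≈_ ("same elements, any order") is needed.
xs ys : List ℕ
xs = 1 ∷ 2 ∷ []
ys = 2 ∷ 1 ∷ []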

From Lattice.agda, lines 23 through 39

record IsSemilattice {a} (A : Set a)
    (_≈_ : A → A → Set a)
    (_⊔_ : A → A → A) : Set a where

    _≼_ : A → A → Set a
    a ≼ b = (a ⊔ b) ≈ b

    _≺_ : A → A → Set a
    a ≺ b = (a ≼ b) × (¬ a ≈ b)

    field
        ≈-equiv : IsEquivalence A _≈_
        ≈-⊔-cong : ∀ {a₁ a₂ a₃ a₄} → a₁ ≈ a₂ → a₃ ≈ a₄ → (a₁ ⊔ a₃) ≈ (a₂ ⊔ a₄)

        ⊔-assoc : (x y z : A) → ((x ⊔ y) ⊔ z) ≈ (x ⊔ (y ⊔ z))
        ⊔-comm : (x y : A) → (x ⊔ y) ≈ (y ⊔ x)
        ⊔-idemp : (x : A) → (x ⊔ x) ≈ x

Notice that the above code also provides – but doesn’t require – _≼_ and _≺_. That’s because a least-upper-bound operation encodes an order: intuitively, if max(a, b) = b, then b must be larger than a. Lars Hupel’s CRDT series includes an explanation of how the ordering operator and the “least upper bound” function can be constructed from one another.

As it turns out, the min function has very similar properties to max: it’s idempotent, commutative, and associative. For a partial order like ours, the analog to min is “greatest lower bound”, or “the largest value that’s smaller than both inputs”. Such a function is denoted as $a \sqcap b$, and often called the “meet” of $a$ and $b$. As for what it means, where $s_1 \sqcup s_2$ means “combine two signs where you don’t know which one will be used” (like in an if/else), $s_1 \sqcap s_2$ means “combine two signs where you know both apply [note: If you're familiar with Boolean algebra, this might look a little bit familiar to you. In fact, the symbol for "and" on booleans is $\land$. Similarly, the symbol for "or" is $\lor$. So, $s_1 \sqcup s_2$ means "the sign is $s_1$ or $s_2$", or "(the sign is $s_1$) $\lor$ (the sign is $s_2$)". Similarly, $s_1 \sqcap s_2$ means "(the sign is $s_1$) $\land$ (the sign is $s_2$)". Don't these symbols look similar?

In fact, booleans with $(\lor)$ and $(\land)$ satisfy the semilattice laws we've been discussing, and together form a lattice (which I'm building to in the main body of the text). The same is true for the set union and intersection operations, $(\cup)$ and $(\cap)$. ]”. For example, $(+ \sqcap\ ?) = +$, because a variable that’s both “any sign” and “positive” must be positive.

There’s just one hiccup: what’s the greatest lower bound of + and -? It needs to be a value that’s less than both of them, but so far, we don’t have such a value. Intuitively, this value should be called something like impossible, because a number that’s both positive and negative doesn’t exist. So, let’s extend our analyzer with a new impossible value. In fact, it turns out that this “impossible” value is the least element of our set (we added it to be the lower bound of + and co., which in turn are less than unknown). Similarly, unknown is the largest element of our set, since it’s greater than + and co., and transitively greater than impossible. In mathematics, it’s not uncommon to write the least element as $\bot$ (read “bottom”), and the greatest element as $\top$ (read “top”). With that in mind, the following are the updated Cayley tables for our operations.

$$\begin{array}{c|ccccc} \sqcup & - & 0 & + & \top & \bot \\ \hline - & - & \top & \top & \top & - \\ 0 & \top & 0 & \top & \top & 0 \\ + & \top & \top & + & \top & + \\ \top & \top & \top & \top & \top & \top \\ \bot & - & 0 & + & \top & \bot \\ \end{array} \qquad \begin{array}{c|ccccc} \sqcap & - & 0 & + & \top & \bot \\ \hline - & - & \bot & \bot & - & \bot \\ 0 & \bot & 0 & \bot & 0 & \bot \\ + & \bot & \bot & + & + & \bot \\ \top & - & 0 & + & \top & \bot \\ \bot & \bot & \bot & \bot & \bot & \bot \\ \end{array}$$

So, it turns out that our set of possible signs is a semilattice in two ways. And if “semi” means “half”, do two “semi”s make a whole? Indeed they do!

A lattice is made up of two semilattices. The operations of these two semilattices, however, must satisfy some additional properties. Let’s examine the properties in the context of min and max as we have before. They are usually called the absorption laws:

- max(a, min(a, b)) = a: since min(a, b) is at most a, taking its maximum with a gives back a.
- min(a, max(a, b)) = a: since max(a, b) is at least a, taking its minimum with a gives back a.
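As a quick concrete check of both laws on natural numbers (a hypothetical snippet, assuming Agda’s standard library):

open import Data.Nat using (ℕ; _⊔_; _⊓_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- 3 ⊔ (3 ⊓ 5) reduces to 3 ⊔ 3, i.e. 3; and 3 ⊓ (3 ⊔ 5) reduces to 3 ⊓ 5, i.e. 3.
absorb-example₁ : (3 ⊔ (3 ⊓ 5)) ≡ 3
absorb-example₁ = refl

absorb-example₂ : (3 ⊓ (3 ⊔ 5)) ≡ 3
absorb-example₂ = refl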

In Agda, we can therefore write a lattice as follows:

From Lattice.agda, lines 183 through 193

record IsLattice {a} (A : Set a)
    (_≈_ : A → A → Set a)
    (_⊔_ : A → A → A)
    (_⊓_ : A → A → A) : Set a where

    field
        joinSemilattice : IsSemilattice A _≈_ _⊔_
        meetSemilattice : IsSemilattice A _≈_ _⊓_

        absorb-⊔-⊓ : (x y : A) → (x ⊔ (x ⊓ y)) ≈ x
        absorb-⊓-⊔ : (x y : A) → (x ⊓ (x ⊔ y)) ≈ x

Concrete Examples

Natural Numbers

Since we’ve been talking about min and max as motivators for properties of $(\sqcap)$ and $(\sqcup)$, it might not be all that surprising that natural numbers form a lattice with min and max as the two binary operations. In fact, the Agda standard library writes min as _⊓_ and max as _⊔_! We can make use of the already-proven properties of these operators to easily define IsLattice for natural numbers. Notice that since we’re not doing anything clever, like considering lists up to reordering, there’s no reason not to use definitional equality for our equivalence relation.

From Nat.agda, lines 1 through 45

module Lattice.Nat where

open import Equivalence
open import Lattice
open import Relation.Binary.PropositionalEquality using (_≡_; refl; sym; trans)
open import Data.Nat using (ℕ; _⊔_; _⊓_; _≤_)
open import Data.Nat.Properties using
    ( ⊔-assoc; ⊔-comm; ⊔-idem
    ; ⊓-assoc; ⊓-comm; ⊓-idem
    ; ⊓-mono-≤; ⊔-mono-≤
    ; m≤n⇒m≤o⊔n; m≤n⇒m⊓o≤n; ≤-refl; ≤-antisym
    )

private
    ≡-⊔-cong : ∀ {a₁ a₂ a₃ a₄} → a₁ ≡ a₂ → a₃ ≡ a₄ → (a₁ ⊔ a₃) ≡ (a₂ ⊔ a₄)
    ≡-⊔-cong a₁≡a₂ a₃≡a₄ rewrite a₁≡a₂ rewrite a₃≡a₄ = refl

    ≡-⊓-cong : ∀ {a₁ a₂ a₃ a₄} → a₁ ≡ a₂ → a₃ ≡ a₄ → (a₁ ⊓ a₃) ≡ (a₂ ⊓ a₄)
    ≡-⊓-cong a₁≡a₂ a₃≡a₄ rewrite a₁≡a₂ rewrite a₃≡a₄ = refl

isMaxSemilattice : IsSemilattice ℕ _≡_ _⊔_
isMaxSemilattice = record
    { ≈-equiv = record
        { ≈-refl = refl
        ; ≈-sym = sym
        ; ≈-trans = trans
        }
    ; ≈-⊔-cong = ≡-⊔-cong
    ; ⊔-assoc = ⊔-assoc
    ; ⊔-comm = ⊔-comm
    ; ⊔-idemp = ⊔-idem
    }

isMinSemilattice : IsSemilattice ℕ _≡_ _⊓_
isMinSemilattice = record
    { ≈-equiv = record
        { ≈-refl = refl
        ; ≈-sym = sym
        ; ≈-trans = trans
        }
    ; ≈-⊔-cong = ≡-⊓-cong
    ; ⊔-assoc = ⊓-assoc
    ; ⊔-comm = ⊓-comm
    ; ⊔-idemp = ⊓-idem
    }

The definition for the lattice instance itself is pretty similar; I’ll omit it here to avoid taking up a lot of vertical space, but you can find it on lines 47 through 83 of my Lattice.Nat module.
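With these instances in hand, the derived ordering behaves as expected. Here’s a hypothetical usage example (the name 3≼5 is mine, and it assumes it’s placed where the Lattice.Nat definitions above are in scope): since 3 ⊔ 5 computes to 5, the proof is just refl.

open IsSemilattice isMaxSemilattice using (_≼_)

-- 3 ≼ 5 unfolds to (3 ⊔ 5) ≡ 5, which holds by computation.
3≼5 : 3 ≼ 5
3≼5 = refl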

The “Above-Below” Lattice

It’s not too hard to implement our sign lattice in Agda. However, we can do it in a somewhat general way. As it turns out, extending an existing set, such as $\{+, -, 0\}$, with a “bottom” and “top” element (to be used when taking the least upper bound and greatest lower bound) is quite common and useful. For instance, if we were to do constant propagation (simplifying 7+4 to 11), we would probably do something similar, using the set of integers $\mathbb{Z}$ instead of the plus-zero-minus set.

The general definition is as follows. Take some original set $S$ (like our 3-element set of signs), and extend it with new “top” and “bottom” elements ($\top$ and $\bot$). Then, define $(\sqcup)$ as follows:

$$x_1 \sqcup x_2 = \begin{cases} \top & x_1 = \top\ \text{or}\ x_2 = \top \\ \top & x_1, x_2 \in S, x_1 \neq x_2 \\ x_1 = x_2 & x_1, x_2 \in S, x_1 = x_2 \\ x_1 & x_2 = \bot \\ x_2 & x_1 = \bot \end{cases}$$

In other words, $\top$ overrules anything that it’s combined with. In math terms, it’s the absorbing element of the lattice. On the other hand, $\bot$ gets overruled by anything it’s combined with. In math terms, that’s an identity element. Finally, when combining two elements of $S$ (neither of which is $\top$ or $\bot$, since those cases are covered by the previous sentences), combining an element with itself leaves it unchanged (upholding idempotence), while combining two unequal elements results in $\top$. That last part matches the way we defined “least upper bound” earlier.

The intuition is as follows: the $(\sqcup)$ operator is like an “or”. Then, “anything or positive” means “anything”; same with “anything or negative”, etc. On the other hand, “impossible or positive” means positive, since one of those cases will never happen. Finally, in the absence of additional elements, the most we can say about “positive or negative” is “any sign”; of course, “positive or positive” is the same as “positive”.

The “greatest lower bound” operator is defined by effectively swapping top and bottom.

$$x_1 \sqcap x_2 = \begin{cases} \bot & x_1 = \bot\ \text{or}\ x_2 = \bot \\ \bot & x_1, x_2 \in S, x_1 \neq x_2 \\ x_1 = x_2 & x_1, x_2 \in S, x_1 = x_2 \\ x_1 & x_2 = \top \\ x_2 & x_1 = \top \end{cases}$$

For this operator, $\bot$ is the absorbing element, and $\top$ is the identity element. The intuition here is not too different: if $(\sqcap)$ is like an “and”, then “impossible and positive” can’t happen; same with “impossible and negative”, and so on. On the other hand, “anything and positive” clearly means positive. Finally, “negative and positive” can’t happen (again, there is no number that’s both positive and negative), and “positive and positive” is just “positive”.

What properties of the underlying set did we use to get this to work? The only thing we needed was the ability to check whether two elements are equal; this is called decidable equality. Since that’s the only thing we used, we can define an “above/below” lattice like this for any type with decidable equality. In Agda, I encoded this using a parameterized module:

From AboveBelow.agda, lines 5 through 8

module Lattice.AboveBelow {a} (A : Set a)
                          (_≈₁_ : A → A → Set a)
                          (≈₁-equiv : IsEquivalence A _≈₁_)
                          (≈₁-dec : IsDecidable _≈₁_) where

From there, I defined the actual data type as follows:

From AboveBelow.agda, lines 23 through 26

data AboveBelow : Set a where
    ⊤ : AboveBelow
    ⊥ : AboveBelow
    [_] : A → AboveBelow

From there, I defined the $(\sqcup)$ and $(\sqcap)$ operations to match the mathematical definitions above almost exactly (the cases were re-ordered to improve Agda’s reduction behavior). Here’s the former:

From AboveBelow.agda, lines 86 through 93

    _⊔_ : AboveBelow → AboveBelow → AboveBelow
    ⊥ ⊔ x = x
    ⊤ ⊔ x = ⊤
    [ x ] ⊔ [ y ] with ≈₁-dec x y
    ...   | yes _ = [ x ]
    ...   | no  _ = ⊤
    x ⊔ ⊥ = x
    x ⊔ ⊤ = ⊤

And here’s the latter:

From AboveBelow.agda, lines 181 through 188

    _⊓_ : AboveBelow → AboveBelow → AboveBelow
    ⊥ ⊓ x = ⊥
    ⊤ ⊓ x = x
    [ x ] ⊓ [ y ] with ≈₁-dec x y
    ...   | yes _ = [ x ]
    ...   | no  _ = ⊥
    x ⊓ ⊥ = ⊥
    x ⊓ ⊤ = x

The proofs of the lattice properties are straightforward and proceed by simple case analysis. Unfortunately, Agda doesn’t quite seem to evaluate the binary operator in every context that I would expect it to, which has led me to define some helper lemmas such as the following:

From AboveBelow.agda, lines 95 through 96

    ⊤⊔x≡⊤ : ∀ (x : AboveBelow) → ⊤ ⊔ x ≡ ⊤
    ⊤⊔x≡⊤ _ = refl

As a sample, here’s a proof of commutativity of $(\sqcup)$:

From AboveBelow.agda, lines 158 through 165

    ⊔-comm : ∀ (ab₁ ab₂ : AboveBelow) → (ab₁ ⊔ ab₂) ≈ (ab₂ ⊔ ab₁)
    ⊔-comm ⊤ x rewrite x⊔⊤≡⊤ x = ≈-refl
    ⊔-comm ⊥ x rewrite x⊔⊥≡x x = ≈-refl
    ⊔-comm x ⊤ rewrite x⊔⊤≡⊤ x = ≈-refl
    ⊔-comm x ⊥ rewrite x⊔⊥≡x x = ≈-refl
    ⊔-comm [ x₁ ] [ x₂ ] with ≈₁-dec x₁ x₂
    ... | yes x₁≈x₂ rewrite x≈y⇒[x]⊔[y]≡[x] (≈₁-sym x₁≈x₂) = ≈-lift x₁≈x₂
    ... | no  x₁̷≈x₂ rewrite x̷≈y⇒[x]⊔[y]≡⊤ (x₁̷≈x₂ ∘ ≈₁-sym) = ≈-⊤-⊤

The details of the rest of the proofs can be found in the AboveBelow.agda file.

To recover the sign lattice we’ve been talking about all along, it’s sufficient to define a sign data type:

From Sign.agda, lines 19 through 22

data Sign : Set where
    + : Sign
    - : Sign
    0ˢ : Sign

Then, prove decidable equality on it (effectively defining a comparison function), and instantiate the AboveBelow module:

From Sign.agda, lines 34 through 47

-- g for siGn; s is used for strings and i is not very descriptive.
_≟ᵍ_ : IsDecidable (_≡_ {_} {Sign})
_≟ᵍ_ + + = yes refl
_≟ᵍ_ + - = no (λ ())
_≟ᵍ_ + 0ˢ = no (λ ())
_≟ᵍ_ - + = no (λ ())
_≟ᵍ_ - - = yes refl
_≟ᵍ_ - 0ˢ = no (λ ())
_≟ᵍ_ 0ˢ + = no (λ ())
_≟ᵍ_ 0ˢ - = no (λ ())
_≟ᵍ_ 0ˢ 0ˢ = yes refl

-- embellish 'sign' with a top and bottom element.
open import Lattice.AboveBelow Sign _≡_ (record { ≈-refl = refl; ≈-sym = sym; ≈-trans = trans }) _≟ᵍ_ as AB
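To check that the recovered lattice agrees with the Cayley tables from earlier, we can write a small test. This is a hypothetical snippet (the name join-+-≡⊤ is made up, and it assumes the instantiated module’s _⊔_ and the AboveBelow constructors are in scope at this point):

-- Joining two distinct signs: _≟ᵍ_ + - returns `no`, so the with-clause
-- in _⊔_ produces the top element.
join-+-≡⊤ : ([ + ] ⊔ [ - ]) ≡ ⊤
join-+-≡⊤ = refl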

From Simple Lattices to Complex Ones

Natural numbers and signs alone are cool enough, but they will not be sufficient to write program analyzers. That’s because when we’re writing an analyzer, we don’t just care about one variable: we care about all of them! An initial guess might be to say that when analyzing a program, we really need several signs: one for each variable. This might be reminiscent of a map. So, when we compare specificity, we’ll really be comparing the specificity of maps. Even that, though, is not enough. The reason is that variables might have different signs at different points in the program! A single map would not be able to capture that sort of nuance, so what we really need is a map associating states with another map, which in turn associates variables with their signs.

Mathematically, we might write this as:

$$\text{Info} \triangleq \text{ProgramStates} \to (\text{Variables} \to \text{Sign})$$
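In Agda, such a type might be sketched as follows. This is purely illustrative: ProgramState and Variable are hypothetical postulates standing in for whatever representations a real analysis would use (and a real analysis would likely use finite maps rather than functions), and it assumes the Sign-based AboveBelow lattice from the previous section is in scope.

postulate ProgramState Variable : Set

-- Hypothetical: to each program state, associate a map from variables
-- to extended signs (the AboveBelow type instantiated with Sign).
Info : Set
Info = ProgramState → (Variable → AboveBelow)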

That’s a big step up in complexity. We now have a doubly-nested map structure instead of just a sign, and we need to compare such maps in order to gauge their specificity and advance our analyses. But where do we even start with maps, and how do we define the $(\sqcup)$ and $(\sqcap)$ operations?

The solution turns out to be to define ways in which simpler lattices (like our sign) can be combined and transformed to define more complex lattices. We’ll move on to that in the next post of this series.