
In the previous post, we defined a machine for graph reduction, called a G-machine. However, this machine is still not particularly connected to our language. In this post, we will give meanings to programs in our language in the context of this G-machine. We will define a compilation scheme, which will be a set of rules that tell us how to translate programs in our language into G-machine instructions. To mirror Implementing Functional Languages: a tutorial, we'll call this compilation scheme \mathcal{C}, and write it as \mathcal{C} ⟦e⟧ = i, meaning "the expression e compiles to the instructions i".

To follow the route we took with typechecking, let's start by compiling number expressions. It's pretty easy:

\mathcal{C} ⟦n⟧ = [\text{PushInt} \; n]

Here, we compiled a number expression to a list of instructions with only one element - PushInt.

Just like when we did typechecking, let’s move on to compiling function applications. As we informally stated in the previous chapter, since the thing we’re applying has to be on top, we want to compile it last:

\mathcal{C} ⟦e_1 \; e_2⟧ = \mathcal{C} ⟦e_2⟧ ⧺ \mathcal{C} ⟦e_1⟧ ⧺ [\text{MkApp}]

Here, we used the ⧺ operator to represent the concatenation of two lists. Otherwise, this should be pretty intuitive - we first run the instructions to create the parameter, then we run the instructions to create the function, and finally, we combine them using MkApp.

It’s variables that once again force us to adjust our strategy. If our program is well-typed, we know our variable will be on the stack: our definition of Unwind makes it so for functions, and we will define our case expression compilation scheme to match. However, we still need to know where on the stack each variable is, and this changes as the stack is modified.

To accommodate this, we define an environment, \rho, to be a partial function mapping variable names to their offsets on the stack. We write \rho = [x \rightarrow n, y \rightarrow m] to say "the environment \rho maps variable x to stack offset n, and variable y to stack offset m". We also write \rho \; x to say "look up x in \rho", since \rho is a function. Finally, to help with the ever-changing stack, we define an augmented environment \rho^{+n}, such that \rho^{+n} \; x = \rho \; x + n. In words, this basically means "\rho^{+n} has all the variables from \rho, but their offsets are incremented by n". We now pass \rho in to \mathcal{C} together with the expression e. Let's rewrite our first two rules. For numbers:

\mathcal{C} ⟦n⟧ \; \rho = [\text{PushInt} \; n]

For function application:

\mathcal{C} ⟦e_1 \; e_2⟧ \; \rho = \mathcal{C} ⟦e_2⟧ \; \rho \; ⧺ \; \mathcal{C} ⟦e_1⟧ \; \rho^{+1} \; ⧺ \; [\text{MkApp}]

Notice how in that last rule, we passed in \rho^{+1} when compiling the function's expression. This is because the result of running the instructions for e_2 will have left the function's parameter on the stack. Whatever was at the top of the stack (and thus had offset 0) is now the second element from the top (offset 1). The same is true for all other things that were on the stack. So, we increment the environment accordingly.

With the environment, the variable rule is simple:

\mathcal{C} ⟦x⟧ \; \rho = [\text{Push} \; (\rho \; x)]

One more thing. If we run across a function name, we want to use PushGlobal rather than Push. Defining f to be the name of a global function, we capture this using the following rule:

\mathcal{C} ⟦f⟧ \; \rho = [\text{PushGlobal} \; f]
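As a quick sanity check, let's trace these rules on a small example - this is just a worked application of the rules above, not a new rule. Suppose we compile plus x y, where plus is a global function, in the environment \rho = [x \rightarrow 0, y \rightarrow 1]:

\begin{aligned} \mathcal{C} ⟦(\text{plus} \; x) \; y⟧ \; \rho &= \mathcal{C} ⟦y⟧ \; \rho \; ⧺ \; \mathcal{C} ⟦\text{plus} \; x⟧ \; \rho^{+1} \; ⧺ \; [\text{MkApp}] \\ &= [\text{Push} \; 1, \text{Push} \; 1, \text{PushGlobal} \; \text{plus}, \text{MkApp}, \text{MkApp}] \end{aligned}

The repeated Push 1 is not a mistake: once y has been pushed, x (previously at offset 0) is itself found at offset 1.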

Now it's time for us to compile case expressions, but there's a bit of an issue - our case expression branches don't map one-to-one with the t \rightarrow i_t format of the Jump instruction. This is because we allow for name patterns in the form x, which can possibly match more than one tag. Consider this rather useless example:

data Bool = { True, False }
defn weird b = { case b of { b -> { False } } }

We only have one branch, but we have two tags that should lead to it! Not only that, but variable patterns are location-dependent: if a variable pattern comes before a constructor pattern, then the constructor pattern will never be reached. On the other hand, if a constructor pattern comes before a variable pattern, it will be tried before the variable pattern, and thus is reachable.

We will ignore this problem for now - we will define our semantics as though each case expression branch can match exactly one tag. In our C++ code, we will write a conversion function that will figure out which tag goes to which sequence of instructions. Effectively, we’ll be performing desugaring.

Now, on to defining the compilation rules for case expressions. It's helpful to define compiling a single branch of a case expression separately. For a branch in the form t \; x_1 \; x_2 \; ... \; x_n \rightarrow \text{body}, we define a compilation scheme \mathcal{A} as follows:

\begin{aligned} \mathcal{A} ⟦t \; x_1 \; ... \; x_n \rightarrow \text{body}⟧ \; \rho & = t \rightarrow [\text{Split} \; n] \; ⧺ \; \mathcal{C}⟦\text{body}⟧ \; \rho' \; ⧺ \; [\text{Slide} \; n] \\ \text{where} \; \rho' &= \rho^{+n}[x_1 \rightarrow 0, ..., x_n \rightarrow n - 1] \end{aligned}

First, we run Split - the node on the top of the stack is a packed constructor, and we want access to its member variables, since they can be referenced by the branch's body via the x_i. For the same reason, we must make sure to include x_1 through x_n in our environment. Furthermore, since the split values now occupy the stack, we have to offset our environment by n before adding bindings for our new variables. Doing all these things gives us \rho', which we use to compile the body, placing the resulting instructions after Split. This leaves us with the desired graph on top of the stack - the only thing left to do is to clean the unpacked values off the stack, which we do using Slide.

Notice that we didn't just create instructions - we created a mapping from the tag t to the instructions that correspond to it.
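For example, take a hypothetical branch Cons x xs \rightarrow x, where the constructor Cons carries two fields and has tag t. Then \rho' = \rho^{+2}[x \rightarrow 0, xs \rightarrow 1], so \mathcal{C} ⟦x⟧ \; \rho' = [\text{Push} \; 0], and the whole branch compiles as:

\mathcal{A} ⟦\text{Cons} \; x \; xs \rightarrow x⟧ \; \rho = t \rightarrow [\text{Split} \; 2, \text{Push} \; 0, \text{Slide} \; 2]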

Now, it’s time for compiling the whole case expression. We first want to construct the graph for the expression we want to perform case analysis on. Next, we want to evaluate it (since we need a packed value, not a graph, to read the tag). Finally, we perform a jump depending on the tag. This is captured by the following rule:

\mathcal{C} ⟦\text{case} \; e \; \text{of} \; \text{alt}_1 ... \text{alt}_n⟧ \; \rho = \mathcal{C} ⟦e⟧ \; \rho \; ⧺ [\text{Eval}, \text{Jump} \; [\mathcal{A} ⟦\text{alt}_1⟧ \; \rho, ..., \mathcal{A} ⟦\text{alt}_n⟧ \; \rho]]

This works because \mathcal{A} creates not only instructions, but also a tag mapping. We simply populate our Jump instruction with the mappings resulting from compiling each branch.

You may have noticed that we didn't add rules for binary operators. Just like with type checking, we treat them as function calls. However, rather than constructing graphs when we have to instantiate those functions, we simply evaluate the arguments and perform the relevant arithmetic operation using BinOp. We will do a similar thing for constructors.
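At a call site, treating + as a function call amounts to the following rule - it isn't spelled out in the scheme above, but it is what the implementation below will do (with plus standing for the global function behind +):

\mathcal{C} ⟦e_1 + e_2⟧ \; \rho = \mathcal{C} ⟦e_2⟧ \; \rho \; ⧺ \; \mathcal{C} ⟦e_1⟧ \; \rho^{+1} \; ⧺ \; [\text{PushGlobal} \; \text{plus}, \text{MkApp}, \text{MkApp}]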

Implementation

With that out of the way, we can get around to writing some code. Let’s first define C++ structs for the instructions of the G-machine:

From instruction.hpp, entire file
#pragma once
#include <string>
#include <memory>
#include <vector>
#include <map>
#include <ostream>
#include "binop.hpp"

struct instruction {
    virtual ~instruction() = default;

    virtual void print(int indent, std::ostream& to) const = 0;
};

using instruction_ptr = std::unique_ptr<instruction>;

struct instruction_pushint : public instruction {
    int value;

    instruction_pushint(int v)
        : value(v) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_pushglobal : public instruction {
    std::string name;

    instruction_pushglobal(std::string n)
        : name(std::move(n)) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_push : public instruction {
    int offset;

    instruction_push(int o)
        : offset(o) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_pop : public instruction {
    int count;

    instruction_pop(int c)
        : count(c) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_mkapp : public instruction {
    void print(int indent, std::ostream& to) const;
};

struct instruction_update : public instruction {
    int offset;

    instruction_update(int o)
        : offset(o) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_pack : public instruction {
    int tag;
    int size;

    instruction_pack(int t, int s)
        : tag(t), size(s) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_split : public instruction {
    void print(int indent, std::ostream& to) const;
};

struct instruction_jump : public instruction {
    std::vector<std::vector<instruction_ptr>> branches;
    std::map<int, int> tag_mappings;

    void print(int indent, std::ostream& to) const;
};

struct instruction_slide : public instruction {
    int offset;

    instruction_slide(int o)
        : offset(o) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_binop : public instruction {
    binop op;

    instruction_binop(binop o)
        : op(o) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_eval : public instruction {
    void print(int indent, std::ostream& to) const;
};

struct instruction_alloc : public instruction {
    int amount;

    instruction_alloc(int a)
        : amount(a) {}

    void print(int indent, std::ostream& to) const;
};

struct instruction_unwind : public instruction {
    void print(int indent, std::ostream& to) const;
};

I omit the implementation of the various (trivial) print methods in this post; as always, you can look at the full project source code, which is freely available for each post in the series.

We can now envision a method on the ast struct that takes an environment (just like our compilation scheme takes the environment ρ\rho), and compiles the ast. Rather than returning a vector of instructions (which involves copying, unless we get some optimization kicking in), we’ll pass a reference to a vector to our method. The method will then place the generated instructions into the vector.

There's one more thing to be considered. How do we tell apart a "global" from a variable? A naive solution would be to take a list or map of global functions as a third parameter to our compile method. But there's an easier way! We know that the program passed type checking. This means that every referenced variable exists. From there, the situation is easy - if actual variable names are kept in the environment, \rho, then whenever we see a variable that isn't in the current environment, it must be a function name.

Having finished contemplating our method, it's time to define a signature:

virtual void compile(const env_ptr& env, std::vector<instruction_ptr>& into) const;

Ah, but now we have to define “environment”. Let’s do that. Here’s our header:

From env.hpp, entire file
#pragma once
#include <memory>
#include <string>

struct env {
    virtual ~env() = default;

    virtual int get_offset(const std::string& name) const = 0;
    virtual bool has_variable(const std::string& name) const = 0;
};

using env_ptr = std::shared_ptr<env>;

struct env_var : public env {
    std::string name;
    env_ptr parent;

    env_var(std::string& n, env_ptr p)
        : name(std::move(n)), parent(std::move(p)) {}

    int get_offset(const std::string& name) const;
    bool has_variable(const std::string& name) const;
};

struct env_offset : public env {
    int offset;
    env_ptr parent;

    env_offset(int o, env_ptr p)
        : offset(o), parent(std::move(p)) {}

    int get_offset(const std::string& name) const;
    bool has_variable(const std::string& name) const;
};

And here’s the source file:

From env.cpp, entire file
#include "env.hpp"

int env_var::get_offset(const std::string& name) const {
    if(name == this->name) return 0;
    if(parent) return parent->get_offset(name) + 1;
    throw 0;
}

bool env_var::has_variable(const std::string& name) const {
    if(name == this->name) return true;
    if(parent) return parent->has_variable(name);
    return false;
}

int env_offset::get_offset(const std::string& name) const {
    if(parent) return parent->get_offset(name) + offset;
    throw 0;
}

bool env_offset::has_variable(const std::string& name) const {
    if(parent) return parent->has_variable(name);
    return false;
}

There’s not that much to see here, but let’s go through it anyway. We define an environment as a linked list, kind of like we did with the type environment. This time, though, we use shared pointers instead of raw pointers to reference the parent. I decided on this because we will need to be using virtual methods (since we have two subclasses of env), and thus will need to be passing the env by pointer. At that point, we might as well use the “proper” way!

I implemented the environment as a linked list because it is, in essence, a stack. However, not every "offset" in a stack is introduced by binding variables - for instance, when we create an application node, we first build the argument value on the stack, and then, with that value still on the stack, build the left hand side of the application. Thus, all the variable positions are offset by the presence of the argument on the stack, and we must account for that. Similarly, when we allocate space on the stack (we will run into these cases later), we will need to account for that change. Thus, since the offset can grow in two ways (binding a variable and building something on the stack), we allow for two types of nodes in our env stack.

During recursion we will be tweaking the return value of get_offset to calculate the final location of a variable on the stack (if the parent of a node returned offset 1, but the node itself is a variable node and thus introduces another offset, we need to return 2). Because of this, we cannot reasonably return a constant like -1 (it will quickly be made positive on a long list), and thus we throw an exception. To allow for a safe way to check for an offset, without try-catch, we also add a has_variable method which checks if the lookup will succeed. A better approach would be to use std::optional, but it’s C++17, so we’ll shy away from it.
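As a small illustration of how these pieces fit together, here is a hand-written usage sketch (not part of the compiler) that builds the environment for a hypothetical function defn f x y and queries it. Binding the parameters in reverse, like definition_defn::compile will do later in this post, puts the first parameter at offset 0:

#include <cassert>
#include <string>
#include "env.hpp"

int main() {
    // Bind y first, then x on top of it: x ends up at offset 0, y at offset 1.
    std::string y = "y", x = "x";
    env_ptr e = env_ptr(new env_var(y, nullptr));
    e = env_ptr(new env_var(x, e));

    assert(e->get_offset("x") == 0);
    assert(e->get_offset("y") == 1);

    // Building a value on the stack shifts every existing variable by one.
    env_ptr shifted = env_ptr(new env_offset(1, e));
    assert(shifted->get_offset("x") == 1);
    assert(shifted->get_offset("y") == 2);

    // has_variable lets us check for globals without catching exceptions.
    assert(!shifted->has_variable("some_global"));
}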

It will also help to move some of the functions on the binop enum into a separate file. The new header is pretty small:

From binop.hpp, entire file
#pragma once
#include <string>

enum binop {
    PLUS,
    MINUS,
    TIMES,
    DIVIDE
};

std::string op_name(binop op);
std::string op_action(binop op);

The new source file is not much longer:

From binop.cpp, entire file
#include "binop.hpp"

std::string op_name(binop op) {
    switch(op) {
        case PLUS: return "+";
        case MINUS: return "-";
        case TIMES: return "*";
        case DIVIDE: return "/";
    }
    return "??";
}

std::string op_action(binop op) {
    switch(op) {
        case PLUS: return "plus";
        case MINUS: return "minus";
        case TIMES: return "times";
        case DIVIDE: return "divide";
    }
    return "??";
}

And now, we begin our implementation. Let's start with the easy ones: ast_int, ast_lid and ast_uid. The code for ast_int involves just pushing the integer onto the stack:

From ast.cpp, lines 36 through 38
void ast_int::compile(const env_ptr& env, std::vector<instruction_ptr>& into) const {
    into.push_back(instruction_ptr(new instruction_pushint(value)));
}

The code for ast_lid needs to check if the variable is global or local, just like we discussed:

From ast.cpp, lines 53 through 58
void ast_lid::compile(const env_ptr& env, std::vector<instruction_ptr>& into) const {
    into.push_back(instruction_ptr(
        env->has_variable(id) ?
            (instruction*) new instruction_push(env->get_offset(id)) :
            (instruction*) new instruction_pushglobal(id)));
}

We do not have to do this check for ast_uid, since an uppercase identifier always refers to a constructor, which is a global:

From ast.cpp, lines 73 through 75
void ast_uid::compile(const env_ptr& env, std::vector<instruction_ptr>& into) const {
    into.push_back(instruction_ptr(new instruction_pushglobal(id)));
}

On to ast_binop! This is the first time we have to change our environment. As we said earlier, once we build the right operand on the stack, every offset that we counted from the top of the stack will have been shifted by 1 (we see this in our compilation scheme for function application). So, we create a new environment with env_offset, and use that when we compile the left child:

From ast.cpp, lines 103 through 110
void ast_binop::compile(const env_ptr& env, std::vector<instruction_ptr>& into) const {
    right->compile(env, into);
    left->compile(env_ptr(new env_offset(1, env)), into);

    into.push_back(instruction_ptr(new instruction_pushglobal(op_action(op))));
    into.push_back(instruction_ptr(new instruction_mkapp()));
    into.push_back(instruction_ptr(new instruction_mkapp()));
}

ast_binop performs two applications: (+) lhs rhs. We push rhs, then lhs, then (+), and then use MkApp twice. In ast_app, we only need to perform one application, lhs rhs:

From ast.cpp, lines 134 through 138
void ast_app::compile(const env_ptr& env, std::vector<instruction_ptr>& into) const {
    right->compile(env, into);
    left->compile(env_ptr(new env_offset(1, env)), into);
    into.push_back(instruction_ptr(new instruction_mkapp()));
}

Note that we also extend our environment in this one, for the exact same reason as before.

Case expressions are the only thing left on the agenda. This is the time during which we have to perform desugaring. Here, though, we run into an issue: we don’t have tags assigned to constructors! We need to adjust our code to keep track of the tags of the various constructors of a type. To do this, we add a subclass for the type_base struct, called type_data:

From type.hpp, lines 33 through 42
struct type_data : public type_base {
    struct constructor {
        int tag;
    };

    std::map<std::string, constructor> constructors;

    type_data(std::string n)
        : type_base(std::move(n)) {}
};

When we create types from definition_data, we tag the corresponding constructors:

From definition.cpp, lines 54 through 71
void definition_data::typecheck_first(type_mgr& mgr, type_env& env) {
    type_data* this_type = new type_data(name);
    type_ptr return_type = type_ptr(this_type);
    int next_tag = 0;

    for(auto& constructor : constructors) {
        constructor->tag = next_tag;
        this_type->constructors[constructor->name] = { next_tag++ };

        type_ptr full_type = return_type;
        for(auto it = constructor->types.rbegin(); it != constructor->types.rend(); it++) {
            type_ptr type = type_ptr(new type_base(*it));
            full_type = type_ptr(new type_arr(type, full_type));
        }

        env.bind(constructor->name, full_type);
    }
}

Ah, but adding constructor info to the type doesn't solve the problem. Once we've performed type checking, we don't keep the types that we computed for an AST node in the node itself. And obviously, we don't want to go looking for them again. Furthermore, we can't just look up a constructor in the environment, since we may well have patterns that don't mention any constructors:

case l of {
    l -> { 0 }
}

So, we want each ast node to store its type (well, in practice we only need this for ast_case, but we might as well store it for all nodes). We can add it, no problem. To add to that, we can add another, non-virtual typecheck method (let’s call it typecheck_common, since naming is hard). This method will call typecheck, and store the output into the node_type field.

The signature is identical to typecheck, except it’s neither virtual nor const:

type_ptr typecheck_common(type_mgr& mgr, const type_env& env);

And the implementation is as simple as you think:

From ast.cpp, lines 9 through 12
type_ptr ast::typecheck_common(type_mgr& mgr, const type_env& env) {
    node_type = typecheck(mgr, env);
    return node_type;
}

In client code (definition_defn::typecheck_first for instance), we should now use typecheck_common instead of typecheck. With that done, we're almost there. However, we're still missing something: most likely, the initial type assigned to any node is a type_var, or a type variable. Such a variable only means something with reference to the information in type_mgr, which we will not be keeping around. Besides, it's cleaner to keep the actual type as a member of the node, not a variable type that references it. In order to address this, we write two conversion functions that call resolve on all types in an AST, given a type manager. After this is done, the type manager can be thrown away. The signatures of the functions are as follows:

void resolve_common(const type_mgr& mgr);
virtual void resolve(const type_mgr& mgr) const = 0;

We also add the resolve method to definition, so that we can call it without having to run dynamic_cast. The implementation for ast::resolve_common just resolves the type:

From ast.cpp, lines 14 through 21
void ast::resolve_common(const type_mgr& mgr) {
    type_var* var;
    type_ptr resolved_type = mgr.resolve(node_type, var);
    if(var) throw type_error("ambiguously typed program");

    resolve(mgr);
    node_type = std::move(resolved_type);
}

The virtual ast::resolve just calls ast::resolve_common on all the ast children of a node. Here's a sample implementation from ast_binop:

From ast.cpp, lines 98 through 101
void ast_binop::resolve(const type_mgr& mgr) const {
    left->resolve_common(mgr);
    right->resolve_common(mgr);
}

And here’s the implementation of definition::resolve on definition_defn:

From definition.cpp, lines 32 through 42
void definition_defn::resolve(const type_mgr& mgr) {
    type_var* var;
    body->resolve_common(mgr);

    return_type = mgr.resolve(return_type, var);
    if(var) throw type_error("ambiguously typed program");
    for(auto& param_type : param_types) {
        param_type = mgr.resolve(param_type, var);
        if(var) throw type_error("ambiguously typed program");
    }
}

Finally, we call resolve at the end of typecheck_program in main.cpp:

From main.cpp, lines 40 through 42
    for(auto& def : prog) {
        def->resolve(mgr);
    }

At last, we’re ready to implement the code for compiling ast_case. Here it is, in all its glory:

From ast.cpp, lines 178 through 230
void ast_case::compile(const env_ptr& env, std::vector<instruction_ptr>& into) const {
    type_data* type = dynamic_cast<type_data*>(of->node_type.get());

    of->compile(env, into);
    into.push_back(instruction_ptr(new instruction_eval()));

    instruction_jump* jump_instruction = new instruction_jump();
    into.push_back(instruction_ptr(jump_instruction));
    for(auto& branch : branches) {
        std::vector<instruction_ptr> branch_instructions;
        pattern_var* vpat;
        pattern_constr* cpat;

        if((vpat = dynamic_cast<pattern_var*>(branch->pat.get()))) {
            branch->expr->compile(env_ptr(new env_offset(1, env)), branch_instructions);

            for(auto& constr_pair : type->constructors) {
                if(jump_instruction->tag_mappings.find(constr_pair.second.tag) !=
                        jump_instruction->tag_mappings.end())
                    continue;

                jump_instruction->tag_mappings[constr_pair.second.tag] =
                    jump_instruction->branches.size();
            }
            jump_instruction->branches.push_back(std::move(branch_instructions));
        } else if((cpat = dynamic_cast<pattern_constr*>(branch->pat.get()))) {
            env_ptr new_env = env;
            for(auto it = cpat->params.rbegin(); it != cpat->params.rend(); it++) {
                new_env = env_ptr(new env_var(*it, new_env));
            }

            branch_instructions.push_back(instruction_ptr(new instruction_split()));
            branch->expr->compile(new_env, branch_instructions);
            branch_instructions.push_back(instruction_ptr(new instruction_slide(
                            cpat->params.size())));

            int new_tag = type->constructors[cpat->constr].tag;
            if(jump_instruction->tag_mappings.find(new_tag) !=
                    jump_instruction->tag_mappings.end())
                throw type_error("technically not a type error: duplicate pattern");

            jump_instruction->tag_mappings[new_tag] =
                jump_instruction->branches.size();
            jump_instruction->branches.push_back(std::move(branch_instructions));
        }
    }

    for(auto& constr_pair : type->constructors) {
        if(jump_instruction->tag_mappings.find(constr_pair.second.tag) ==
                jump_instruction->tag_mappings.end())
            throw type_error("non-total pattern");
    }
}

There’s a lot to unpack here. First of all, just like we said in the compilation scheme, we want to build and evaluate the expression that’s being analyzed. Once that’s done, however, things get more tricky. We know that each branch of a case expression will correspond to a vector of instructions - in fact, our jump instruction contains a mapping from tags to instructions. As we also discussed above, each list of instructions can be mapped to by multiple tags. We don’t want to recompile the same sequence of instructions multiple times (or indeed, generate machine code for it). So, we keep a mapping of tags to their corresponding sequences of instructions. We implement this by having a vector of vectors of instructions (in which each inner vector represents the code for a branch), and a map of tag number to index in the vector containing all the branches. This way, multiple tags can point to the same instruction set without duplicating information.
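To make this concrete, here is roughly what the instruction_jump built for the weird example from earlier would hold. True is declared first, so it gets tag 0, and False gets tag 1; both tags share the single branch for b -> { False }. This is a hand-built sketch, not code from the compiler:

instruction_jump* jump_instruction = new instruction_jump();

// The body of the lone branch just references the global constructor False.
std::vector<instruction_ptr> branch_instructions;
branch_instructions.push_back(
        instruction_ptr(new instruction_pushglobal("False")));
jump_instruction->branches.push_back(std::move(branch_instructions));

// Both tags point at branch 0, so the code is shared rather than duplicated.
jump_instruction->tag_mappings[0] = 0; // True
jump_instruction->tag_mappings[1] = 0; // False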

We also don't allow a tag to be mapped to more than one sequence of instructions. This is handled differently depending on whether a variable pattern or a constructor pattern is encountered. Variable patterns map all tags that haven't been mapped yet, so no error can occur. Constructor patterns, though, can explicitly try to map the same tag twice, and we don't want that.

The previous paragraphs already hint at how our case expression compilation works, but let's go through the implementation anyway. Once we've compiled the expression to be analyzed, and emitted the Eval to evaluate it (just like in our definitions above), we proceed to look at all the branches specified in the case expression.

If a branch has a variable pattern, we must map all the remaining, unmapped tags to the result of compiling that branch. We also aren't going to be taking apart our value, so we don't need to use Split, but we do need to add 1 to the environment offset to account for the presence of that value. So, we compile the branch body with that offset, and iterate through all the constructors of our data type. We skip a constructor if its tag has already been mapped, and if it hasn't been, we map it to the index that this branch body will have in our list. Finally, we push the newly compiled instruction sequence into the list of branch bodies.

If a branch has a constructor pattern, on the other hand, we lead our compilation output with a Split. This takes the value off the stack, but pushes on all of the constructor's parameters. We account for this by binding those parameters in the environment, which also shifts everything else by the number of parameters (just like \rho' in our compilation scheme). Before we map the tag, we ensure that it hasn't already been mapped (and throw an exception, currently in the form of a type error due to the growing length of this post), and finally map it and insert the new branch code into the list of branches.

After we’re done with all the branches, we also check for non-exhaustive patterns, since otherwise we could run into runtime errors. With this, the case expression, and the last of the AST nodes, can be compiled.

We also add a compile method to definitions, since they contain our AST nodes. The method is empty for definition_data, and looks as follows for definition_defn:

From definition.cpp, lines 44 through 52
void definition_defn::compile() {
    env_ptr new_env = env_ptr(new env_offset(0, nullptr));
    for(auto it = params.rbegin(); it != params.rend(); it++) {
        new_env = env_ptr(new env_var(*it, new_env));
    }
    body->compile(new_env, instructions);
    instructions.push_back(instruction_ptr(new instruction_update(params.size())));
    instructions.push_back(instruction_ptr(new instruction_pop(params.size())));
}

Notice that we terminate the function with Update and Pop. Update will turn the application node that served as the "root" of the expression we just instantiated into an indirection to the value that we have computed. After this, Pop will remove all "scratch work" from the stack. In essence, this is how we can lazily evaluate expressions.

Finally, we make a function in our main.cpp file to compile all the definitions:

From main.cpp, lines 45 through 56
void compile_program(const std::vector<definition_ptr>& prog) {
    for(auto& def : prog) {
        def->compile();

        definition_defn* defn = dynamic_cast<definition_defn*>(def.get());
        if(!defn) continue;
        for(auto& instruction : defn->instructions) {
            instruction->print(0, std::cout);
        }
        std::cout << std::endl;
    }
}

In this method, we also include some extra output to help us see the result of our compilation. Since, at the moment, only definition_defn produces instructions, we try to cast each definition to it, and if the cast succeeds, we print out the resulting instructions.

Let’s try it all out! For the below sample program:

From works1.txt, entire file
defn main = { plus 320 6 }
defn plus x y = { x + y }

Our compiler produces the following new output:

PushInt(6)
PushInt(320)
PushGlobal(plus)
MkApp()
MkApp()
Update(0)
Pop(0)

Push(1)
Push(1)
PushGlobal(plus)
MkApp()
MkApp()
Update(2)
Pop(2)

The first sequence of instructions is clearly main. It creates an application of plus to 320, and then applies that to 6, which results in plus 320 6, which is correct. The second sequence of instructions pushes the parameter that sits at offset 1 from the top of the stack (y). It then pushes a parameter from the same offset again, but this time, since y was previously pushed onto the stack, x is now in that position, so x is pushed onto the stack. Finally, + is pushed, and the application (+) x y is created, which is equivalent to x + y.

Let’s also take a look at a case expression program:

From works3.txt, entire file
data List = { Nil, Cons Int List }
defn length l = {
    case l of {
        Nil -> { 0 }
        Cons x xs -> { 1 + length xs }
    }
}

The result of the compilation is as follows:

Push(0)
Eval()
Jump(
    Split()
    PushInt(0)
    Slide(0)

    Split()
    Push(1)
    PushGlobal(length)
    MkApp()
    PushInt(1)
    PushGlobal(plus)
    MkApp()
    MkApp()
    Slide(2)

)
Update(1)
Pop(1)

We push the first (and only) parameter onto the stack. We then make sure it's evaluated, and perform case analysis: if the list is Nil, we simply push the number 0 onto the stack. If it's a Cons of some x and another list xs, we push xs and length onto the stack, make the application (length xs), push the 1, and finally apply + to the result. This all makes sense!

With this, we’ve been able to compile our expressions and functions into G-machine code. We’re not done, however - our computers aren’t G-machines. We’ll need to compile our G-machine code to machine code (we will use LLVM for this), implement the runtime, and develop a garbage collector. We’ll tackle the first of these in the next post - Part 7 - Runtime.