Let’s Build A Simple Interpreter. Part 10.

Today we will continue closing the gap between where we are right now and where we want to be: a fully functional interpreter for a subset of the Pascal programming language.

In this article we will update our interpreter to parse and interpret our very first complete Pascal program. The program can also be compiled by the Free Pascal compiler, fpc.

Here is the program itself:

PROGRAM Part10;
VAR
   number     : INTEGER;
   a, b, c, x : INTEGER;
   y          : REAL;

BEGIN {Part10}
   BEGIN
      number := 2;
      a := number;
      b := 10 * a + 10 * number DIV 4;
      c := a - - b
   END;
   x := 11;
   y := 20 / 7 + 3.14;
   { writeln('a = ', a); }
   { writeln('b = ', b); }
   { writeln('c = ', c); }
   { writeln('number = ', number); }
   { writeln('x = ', x); }
   { writeln('y = ', y); }
END.  {Part10}

Before we start digging into the details, download the source code of the interpreter from GitHub and the Pascal source code above, and try it on the command line:

$ python spi.py part10.pas
a = 2
b = 25
c = 27
number = 2
x = 11
y = 5.99714285714

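You can sanity-check these numbers with plain Python, where // plays the role of Pascal's DIV and / is float division (a quick check of the expected output, not part of the interpreter itself):

```python
# Reproduce the program's arithmetic in Python: // stands in for
# Pascal's DIV (integer division), / is float division.
number = 2
a = number
b = 10 * a + 10 * number // 4   # 20 + (20 DIV 4) = 20 + 5 = 25
c = a - -b                      # 2 - (-25) = 27
x = 11
y = 20 / 7 + 3.14
print(a, b, c, number, x, y)
```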

If I remove the comments around the writeln statements in the part10.pas file, compile the source code with fpc and then run the produced executable, this is what I get on my laptop:

$ fpc part10.pas
$ ./part10
a = 2
b = 25
c = 27
number = 2
x = 11
y =  5.99714285714286E+000


Okay, let’s see what we’re going to cover today:

  1. We will learn how to parse and interpret the Pascal PROGRAM header
  2. We will learn how to parse Pascal variable declarations
  3. We will update our interpreter to use the DIV keyword for integer division and a forward slash / for float division
  4. We will add support for Pascal comments


Let’s dive in and look at the grammar changes first. Today we will add some new rules and update some of the existing rules.

  1. The program definition grammar rule is updated to include the PROGRAM reserved keyword, the program name, and a block that ends with a dot. Here is an example of a complete Pascal program:

    PROGRAM Part10;
    BEGIN
    END.
    
  2. The block rule combines a declarations rule and a compound_statement rule. We’ll also use the rule later in the series when we add procedure declarations. Here is an example of a block:

    VAR
       number : INTEGER;
    
    BEGIN
    END
    

    Here is another example:

    BEGIN
    END
    
  3. Pascal declarations have several parts and each part is optional. In this article, we’ll cover the variable declaration part only. The declarations rule has either a variable declaration sub-rule or it’s empty.

  4. Pascal is a statically typed language, which means that every variable needs a variable declaration that explicitly specifies its type. In Pascal, variables must be declared before they are used. This is achieved by declaring variables in the program variable declaration section using the VAR reserved keyword. You can define variables like this:

    VAR
       number     : INTEGER;
       a, b, c, x : INTEGER;
       y          : REAL;
    
  5. The type_spec rule is for handling INTEGER and REAL types and is used in variable declarations. In the example below

    VAR
       a : INTEGER;
       b : REAL;
    

    the variable “a” is declared with the type INTEGER and the variable “b” is declared with the type REAL (float). In this article we won’t enforce type checking, but we will add type checking later in the series.

  6. The term rule is updated to use the DIV keyword for integer division and a forward slash / for float division.

    Before, dividing 20 by 7 using a forward slash would produce an INTEGER 2:

    20 / 7 = 2
    

    Now, dividing 20 by 7 using a forward slash will produce a REAL (a floating point number), 2.85714285714:

    20 / 7 = 2.85714285714
    

    From now on, to get an INTEGER instead of a REAL, you need to use the DIV keyword:

    20 DIV 7 = 2
    
  7. The factor rule is updated to handle both integer and real (float) constants. I also removed the INTEGER sub-rule because constants will now be represented by INTEGER_CONST and REAL_CONST tokens, and the INTEGER token will be used to represent the integer type. In the example below, the lexer will generate an INTEGER_CONST token for 20 and 7, and a REAL_CONST token for 3.14:

    y := 20 / 7 + 3.14;
    


Here is our complete grammar for today:

    program : PROGRAM variable SEMI block DOT

    block : declarations compound_statement

    declarations : VAR (variable_declaration SEMI)+
                 | empty

    variable_declaration : ID (COMMA ID)* COLON type_spec

    type_spec : INTEGER
              | REAL

    compound_statement : BEGIN statement_list END

    statement_list : statement
                   | statement SEMI statement_list

    statement : compound_statement
              | assignment_statement
              | empty

    assignment_statement : variable ASSIGN expr

    empty :

    expr : term ((PLUS | MINUS) term)*

    term : factor ((MUL | INTEGER_DIV | FLOAT_DIV) factor)*

    factor : PLUS factor
           | MINUS factor
           | INTEGER_CONST
           | REAL_CONST
           | LPAREN expr RPAREN
           | variable

    variable: ID

In the rest of the article we’ll go through the same drill we went through last time:

  1. Update the lexer
  2. Update the parser
  3. Update the interpreter


Updating the Lexer

Here is a summary of the lexer changes:

  1. New tokens
  2. New and updated reserved keywords
  3. New skip_comment method to handle Pascal comments
  4. Rename the integer method to number and make some changes to the method itself
  5. Update the get_next_token method to return new tokens

Let’s dig into the changes mentioned above:

  1. To handle a program header, variable declarations, integer and float constants as well as integer and float division, we need to add some new tokens - some of which are reserved keywords - and we also need to update the meaning of the INTEGER token to represent the integer type and not an integer constant. Here is a complete list of new and updated tokens:

    • PROGRAM (reserved keyword)
    • VAR (reserved keyword)
    • COLON (:)
    • COMMA (,)
    • INTEGER (we change it to mean integer type and not integer constant like 3 or 5)
    • REAL (for Pascal REAL type)
    • INTEGER_CONST (for example, 3 or 5)
    • REAL_CONST (for example, 3.14 and so on)
    • INTEGER_DIV for integer division (the DIV reserved keyword)
    • FLOAT_DIV for float division (the forward slash /)
  2. Here is the complete mapping of reserved keywords to tokens:

    RESERVED_KEYWORDS = {
        'PROGRAM': Token('PROGRAM', 'PROGRAM'),
        'VAR': Token('VAR', 'VAR'),
        'DIV': Token('INTEGER_DIV', 'DIV'),
        'INTEGER': Token('INTEGER', 'INTEGER'),
        'REAL': Token('REAL', 'REAL'),
        'BEGIN': Token('BEGIN', 'BEGIN'),
        'END': Token('END', 'END'),
    }
    
  3. We’re adding the skip_comment method to handle Pascal comments. The method is pretty basic and all it does is discard all the characters until the closing curly brace is found:

    def skip_comment(self):
        while self.current_char != '}':
            self.advance()
        self.advance()  # the closing curly brace
    
  4. We are renaming the integer method to number. It can handle both integer and float constants, like 3 and 3.14:

    def number(self):
        """Return a (multidigit) integer or float consumed from the input."""
        result = ''
        while self.current_char is not None and self.current_char.isdigit():
            result += self.current_char
            self.advance()
    
        if self.current_char == '.':
            result += self.current_char
            self.advance()
    
            while (
                self.current_char is not None and
                self.current_char.isdigit()
            ):
                result += self.current_char
                self.advance()
    
            token = Token('REAL_CONST', float(result))
        else:
            token = Token('INTEGER_CONST', int(result))
    
        return token
    
  5. We’re also updating the get_next_token method to return new tokens:

    def get_next_token(self):
        while self.current_char is not None:
            ...
            if self.current_char == '{':
                self.advance()
                self.skip_comment()
                continue
            ...
            if self.current_char.isdigit():
                return self.number()
    
            if self.current_char == ':':
                self.advance()
                return Token(COLON, ':')
    
            if self.current_char == ',':
                self.advance()
                return Token(COMMA, ',')
            ...
            if self.current_char == '/':
                self.advance()
                return Token(FLOAT_DIV, '/')
            ...
    

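Putting the lexer changes from this section together, here is a compact, self-contained sketch of the new tokenizing behavior. This is a simplified stand-in, not the article's spi.py Lexer: tokens are plain (type, value) tuples, the tokenize function is hypothetical, and only the characters needed for the demo are handled.

```python
# Simplified sketch of the new lexing behavior: comments, numbers
# (INTEGER_CONST vs REAL_CONST), reserved keywords, and the new
# single-character tokens. Not the article's Lexer class.
RESERVED_KEYWORDS = {'PROGRAM', 'VAR', 'DIV', 'INTEGER', 'REAL', 'BEGIN', 'END'}

def tokenize(text):
    pos = 0
    tokens = []
    while pos < len(text):
        ch = text[pos]
        if ch.isspace():
            pos += 1
        elif ch == '{':                          # skip_comment
            while pos < len(text) and text[pos] != '}':
                pos += 1
            pos += 1                             # consume the closing '}'
        elif ch.isdigit():                       # number
            start = pos
            while pos < len(text) and text[pos].isdigit():
                pos += 1
            if pos < len(text) and text[pos] == '.':
                pos += 1
                while pos < len(text) and text[pos].isdigit():
                    pos += 1
                tokens.append(('REAL_CONST', float(text[start:pos])))
            else:
                tokens.append(('INTEGER_CONST', int(text[start:pos])))
        elif ch.isalpha():                       # _id
            start = pos
            while pos < len(text) and text[pos].isalnum():
                pos += 1
            word = text[start:pos]
            if word in RESERVED_KEYWORDS:
                # DIV gets the INTEGER_DIV token type
                tokens.append(('INTEGER_DIV' if word == 'DIV' else word, word))
            else:
                tokens.append(('ID', word))
        elif text[pos:pos + 2] == ':=':
            tokens.append(('ASSIGN', ':='))
            pos += 2
        elif ch == ':':
            tokens.append(('COLON', ':'))
            pos += 1
        elif ch == ',':
            tokens.append(('COMMA', ','))
            pos += 1
        elif ch == ';':
            tokens.append(('SEMI', ';'))
            pos += 1
        elif ch == '/':
            tokens.append(('FLOAT_DIV', '/'))
            pos += 1
        elif ch == '+':
            tokens.append(('PLUS', '+'))
            pos += 1
        else:
            raise ValueError('unexpected character: %r' % ch)
    return tokens

print(tokenize('y := 20 / 7 + 3.14; { a comment }'))
```

Note how the comment is discarded entirely and 3.14 comes back as a REAL_CONST while 20 and 7 come back as INTEGER_CONSTs.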

Updating the Parser

Now onto the parser changes.

Here is a summary of the changes:

  1. New AST nodes: Program, Block, VarDecl, Type
  2. New methods corresponding to new grammar rules: block, declarations, variable_declaration, and type_spec.
  3. Updates to the existing parser methods: program, term, and factor

Let’s go over the changes one by one:

  1. We’ll start with new AST nodes first. There are four new nodes:

    • The Program AST node represents a program and will be our root node

      class Program(AST):
          def __init__(self, name, block):
              self.name = name
              self.block = block
      
    • The Block AST node holds declarations and a compound statement:

      class Block(AST):
          def __init__(self, declarations, compound_statement):
              self.declarations = declarations
              self.compound_statement = compound_statement
      
    • The VarDecl AST node represents a variable declaration. It holds a variable node and a type node:

      class VarDecl(AST):
          def __init__(self, var_node, type_node):
              self.var_node = var_node
              self.type_node = type_node
      
    • The Type AST node represents a variable type (INTEGER or REAL):

      class Type(AST):
          def __init__(self, token):
              self.token = token
              self.value = token.value
      
  2. As you probably remember, each rule from the grammar has a corresponding method in our recursive-descent parser. Today we’re adding four new methods: block, declarations, variable_declaration, and type_spec. These methods are responsible for parsing new language constructs and constructing new AST nodes:

    def block(self):
        """block : declarations compound_statement"""
        declaration_nodes = self.declarations()
        compound_statement_node = self.compound_statement()
        node = Block(declaration_nodes, compound_statement_node)
        return node
    
    def declarations(self):
        """declarations : VAR (variable_declaration SEMI)+
                        | empty
        """
        declarations = []
        if self.current_token.type == VAR:
            self.eat(VAR)
            while self.current_token.type == ID:
                var_decl = self.variable_declaration()
                declarations.extend(var_decl)
                self.eat(SEMI)
    
        return declarations
    
    def variable_declaration(self):
        """variable_declaration : ID (COMMA ID)* COLON type_spec"""
        var_nodes = [Var(self.current_token)]  # first ID
        self.eat(ID)
    
        while self.current_token.type == COMMA:
            self.eat(COMMA)
            var_nodes.append(Var(self.current_token))
            self.eat(ID)
    
        self.eat(COLON)
    
        type_node = self.type_spec()
        var_declarations = [
            VarDecl(var_node, type_node)
            for var_node in var_nodes
        ]
        return var_declarations
    
    def type_spec(self):
        """type_spec : INTEGER
                     | REAL
        """
        token = self.current_token
        if self.current_token.type == INTEGER:
            self.eat(INTEGER)
        else:
            self.eat(REAL)
        node = Type(token)
        return node
    
  3. We also need to update the program, term, and factor methods to accommodate our grammar changes:

    def program(self):
        """program : PROGRAM variable SEMI block DOT"""
        self.eat(PROGRAM)
        var_node = self.variable()
        prog_name = var_node.value
        self.eat(SEMI)
        block_node = self.block()
        program_node = Program(prog_name, block_node)
        self.eat(DOT)
        return program_node
    
    def term(self):
        """term : factor ((MUL | INTEGER_DIV | FLOAT_DIV) factor)*"""
        node = self.factor()
    
        while self.current_token.type in (MUL, INTEGER_DIV, FLOAT_DIV):
            token = self.current_token
            if token.type == MUL:
                self.eat(MUL)
            elif token.type == INTEGER_DIV:
                self.eat(INTEGER_DIV)
            elif token.type == FLOAT_DIV:
                self.eat(FLOAT_DIV)
    
            node = BinOp(left=node, op=token, right=self.factor())
    
        return node
    
    def factor(self):
        """factor : PLUS factor
                  | MINUS factor
                  | INTEGER_CONST
                  | REAL_CONST
                  | LPAREN expr RPAREN
                  | variable
        """
        token = self.current_token
        if token.type == PLUS:
            self.eat(PLUS)
            node = UnaryOp(token, self.factor())
            return node
        elif token.type == MINUS:
            self.eat(MINUS)
            node = UnaryOp(token, self.factor())
            return node
        elif token.type == INTEGER_CONST:
            self.eat(INTEGER_CONST)
            return Num(token)
        elif token.type == REAL_CONST:
            self.eat(REAL_CONST)
            return Num(token)
        elif token.type == LPAREN:
            self.eat(LPAREN)
            node = self.expr()
            self.eat(RPAREN)
            return node
        else:
            node = self.variable()
            return node
    


Now, let’s see what the Abstract Syntax Tree looks like with the new nodes. Here is a small working Pascal program:

PROGRAM Part10AST;
VAR
   a, b : INTEGER;
   y    : REAL;

BEGIN {Part10AST}
   a := 2;
   b := 10 * a + 10 * a DIV 4;
   y := 20 / 7 + 3.14;
END.  {Part10AST}

Let’s generate an AST and visualize it with genastdot.py:

$ python genastdot.py part10ast.pas > ast.dot && dot -Tpng -o ast.png ast.dot

In the picture you can see the new nodes that we have added.


Updating the Interpreter

We’re done with the lexer and parser changes. What’s left is to add new visitor methods to our Interpreter class. There will be four new methods to visit our new nodes:

  • visit_Program
  • visit_Block
  • visit_VarDecl
  • visit_Type

They are pretty straightforward. You can also see that the Interpreter does nothing with VarDecl and Type nodes:

def visit_Program(self, node):
    self.visit(node.block)

def visit_Block(self, node):
    for declaration in node.declarations:
        self.visit(declaration)
    self.visit(node.compound_statement)

def visit_VarDecl(self, node):
    # Do nothing
    pass

def visit_Type(self, node):
    # Do nothing
    pass

We also need to update the visit_BinOp method to properly interpret integer and float divisions:

def visit_BinOp(self, node):
    if node.op.type == PLUS:
        return self.visit(node.left) + self.visit(node.right)
    elif node.op.type == MINUS:
        return self.visit(node.left) - self.visit(node.right)
    elif node.op.type == MUL:
        return self.visit(node.left) * self.visit(node.right)
    elif node.op.type == INTEGER_DIV:
        return self.visit(node.left) // self.visit(node.right)
    elif node.op.type == FLOAT_DIV:
        return float(self.visit(node.left)) / float(self.visit(node.right))
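To see the two division branches in isolation, here is a minimal, self-contained sketch. The Num/BinOp stand-ins and the visit function are simplified for illustration (spi.py dispatches on token objects rather than plain strings):

```python
# Minimal stand-ins for the AST nodes, just enough to exercise the
# INTEGER_DIV and FLOAT_DIV branches of visit_BinOp.
class Num:
    def __init__(self, value):
        self.value = value

class BinOp:
    def __init__(self, left, op, right):
        self.left, self.op, self.right = left, op, right

def visit(node):
    if isinstance(node, Num):
        return node.value
    if node.op == 'INTEGER_DIV':    # Pascal DIV -> Python floor division
        return visit(node.left) // visit(node.right)
    if node.op == 'FLOAT_DIV':      # Pascal /   -> Python float division
        return float(visit(node.left)) / float(visit(node.right))
    raise ValueError(node.op)

print(visit(BinOp(Num(20), 'INTEGER_DIV', Num(7))))  # 2
print(visit(BinOp(Num(20), 'FLOAT_DIV', Num(7))))    # 2.857142857142857
```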


Let’s sum up what we had to do to extend the Pascal interpreter in this article:

  • Add new rules to the grammar and update some existing rules
  • Add new tokens and supporting methods to the lexer, update and modify some existing methods
  • Add new AST nodes to the parser for new language constructs
  • Add new methods corresponding to the new grammar rules to our recursive-descent parser and update some existing methods
  • Add new visitor methods to the interpreter and update one existing visitor method

As a result of our changes we also got rid of some of the hacks I introduced in Part 9, namely:

  • Our interpreter can now handle the PROGRAM header
  • Variables can now be declared using the VAR keyword
  • The DIV keyword is used for integer division and a forward slash / is used for float division


If you haven’t done so yet, then, as an exercise, re-implement the interpreter in this article without looking at the source code and use part10.pas as your test input file.


That’s all for today. In the next article, I’ll talk in greater detail about symbol table management. Stay tuned and see you soon!


By the way, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch. You can get a feel for the book here, here, and here. Subscribe to the mailing list to get the latest updates about the book and the release date.



Let’s Build A Simple Interpreter. Part 9.

I remember when I was in university (a long time ago) and learning systems programming, I believed that the only “real” languages were Assembly and C. And Pascal was - how to put it nicely - a very high-level language used by application developers who didn’t want to know what was going on under the hood.

Little did I know back then that I would be writing almost everything in Python (and love every bit of it) to pay my bills and that I would also be writing an interpreter and compiler for Pascal for the reasons I stated in the very first article of the series.

These days, I consider myself a programming languages enthusiast, and I’m fascinated by all languages and their unique features. Having said that, I have to note that I enjoy using certain languages way more than others. I am biased and I’ll be the first one to admit that. :)


Okay, let’s get down to business. Here is what you’re going to learn today:

  1. How to parse and interpret a Pascal program definition.
  2. How to parse and interpret compound statements.
  3. How to parse and interpret assignment statements, including variables.
  4. A bit about symbol tables and how to store and lookup variables.

I’ll use the following sample Pascal-like program to introduce new concepts:

BEGIN
    BEGIN
        number := 2;
        a := number;
        b := 10 * a + 10 * number / 4;
        c := a - - b
    END;
    x := 11;
END.

You could say that that’s quite a jump from the command line interpreter you wrote so far by following the previous articles in the series, but it’s a jump that I hope will bring excitement. It’s not “just” a calculator anymore, we’re getting serious here, Pascal serious. :)

Let’s dive in and look at syntax diagrams for new language constructs and their corresponding grammar rules.

On your marks: Ready. Set. Go!

  1. I’ll start with describing what a Pascal program is. A Pascal program consists of a compound statement that ends with a dot. Here is an example of a program:

    “BEGIN  END.”
    

    I have to note that this is not a complete program definition, and we’ll extend it later in the series.

  2. What is a compound statement? A compound statement is a block marked with BEGIN and END that can contain a list (possibly empty) of statements including other compound statements. Every statement inside the compound statement, except for the last one, must terminate with a semicolon. The last statement in the block may or may not have a terminating semicolon. Here are some examples of valid compound statements:

    “BEGIN END”
    “BEGIN a := 5; x := 11 END”
    “BEGIN a := 5; x := 11; END”
    “BEGIN BEGIN a := 5 END; x := 11 END”
    
  3. A statement list is a list of zero or more statements inside a compound statement. See above for some examples.

  4. A statement can be a compound statement, an assignment statement, or it can be an empty statement.

  5. An assignment statement is a variable followed by an ASSIGN token (two characters, ‘:’ and ‘=’) followed by an expression.

    “a := 11”
    “b := a + 9 - 5 * 2”
    
  6. A variable is an identifier. We’ll use the ID token for variables. The value of the token will be a variable’s name like ‘a’, ‘number’, and so on. In the following code block ‘a’ and ‘b’ are variables:

    “BEGIN a := 11; b := a + 9 - 5 * 2 END”
    
  7. An empty statement represents a grammar rule with no further productions. We use the empty grammar rule to indicate the end of the statement_list in the parser and also to allow for empty compound statements, as in ‘BEGIN END’.

  8. The factor rule is updated to handle variables.


Now let’s take a look at our complete grammar:

    program : compound_statement DOT

    compound_statement : BEGIN statement_list END

    statement_list : statement
                   | statement SEMI statement_list

    statement : compound_statement
              | assignment_statement
              | empty

    assignment_statement : variable ASSIGN expr

    empty :

    expr: term ((PLUS | MINUS) term)*

    term: factor ((MUL | DIV) factor)*

    factor : PLUS factor
           | MINUS factor
           | INTEGER
           | LPAREN expr RPAREN
           | variable

    variable: ID

You probably noticed that I didn’t use the star ‘*’ symbol in the compound_statement rule to represent zero or more repetitions, but instead explicitly specified the statement_list rule. This is another way to represent the ‘zero or more’ operation, and it will come in handy when we look at parser generators like PLY, later in the series. I also split the “(PLUS | MINUS) factor” sub-rule into two separate rules.


In order to support the updated grammar, we need to make a number of changes to our lexer, parser, and interpreter. Let’s go over those changes one by one.

Here is the summary of the changes in our lexer:

  1. To support a Pascal program’s definition, compound statements, assignment statements, and variables, our lexer needs to return new tokens:

    • BEGIN (to mark the beginning of a compound statement)
    • END (to mark the end of the compound statement)
    • DOT (a token for a dot character ‘.’ required by a Pascal program’s definition)
    • ASSIGN (a token for the two-character sequence ‘:=’). In Pascal, the assignment operator is different from that of many other languages like C, Python, Java, Rust, or Go, where you would use the single character ‘=’ to indicate assignment
    • SEMI (a token for a semicolon character ‘;’ that is used to mark the end of a statement inside a compound statement)
    • ID (A token for a valid identifier. Identifiers start with an alphabetical character followed by any number of alphanumerical characters)
  2. Sometimes, in order to differentiate between tokens that start with the same character (‘:’ vs ‘:=’, or ‘==’ vs ‘=>’), we need to peek into the input buffer without actually consuming the next character. For this particular purpose, I introduced a peek method that will help us tokenize assignment statements. The method is not strictly required, but I thought I would introduce it earlier in the series, and it will also make the get_next_token method a bit cleaner. All it does is return the next character from the text buffer without incrementing the self.pos variable. Here is the method itself:

    def peek(self):
        peek_pos = self.pos + 1
        if peek_pos > len(self.text) - 1:
            return None
        else:
            return self.text[peek_pos]
    
  3. Because Pascal variables and reserved keywords are both identifiers, we will combine their handling into one method called _id. The way it works is that the lexer consumes a sequence of alphanumerical characters and then checks if the character sequence is a reserved word. If it is, it returns a pre-constructed token for that reserved keyword. And if it’s not a reserved keyword, it returns a new ID token whose value is the character string (lexeme). I bet at this point you think, “Gosh, just show me the code.” :) Here it is:

    RESERVED_KEYWORDS = {
        'BEGIN': Token('BEGIN', 'BEGIN'),
        'END': Token('END', 'END'),
    }
    
    def _id(self):
        """Handle identifiers and reserved keywords"""
        result = ''
        while self.current_char is not None and self.current_char.isalnum():
            result += self.current_char
            self.advance()
    
        token = RESERVED_KEYWORDS.get(result, Token(ID, result))
        return token
    
  4. And now let’s take a look at the changes in the main lexer method get_next_token:

    def get_next_token(self):
        while self.current_char is not None:
            ...
            if self.current_char.isalpha():
                return self._id()
    
            if self.current_char == ':' and self.peek() == '=':
                self.advance()
                self.advance()
                return Token(ASSIGN, ':=')
    
            if self.current_char == ';':
                self.advance()
                return Token(SEMI, ';')
    
            if self.current_char == '.':
                self.advance()
                return Token(DOT, '.')
            ...
    
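The peek method from step 2 can be exercised on its own. The MiniLexer class below is a tiny hypothetical stand-in, not the article's full Lexer, showing that peek looks ahead without consuming input:

```python
# A minimal stand-in class demonstrating peek: it returns the next
# character (or None at the end of input) without advancing self.pos.
class MiniLexer:
    def __init__(self, text):
        self.text = text
        self.pos = 0
        self.current_char = text[0]

    def peek(self):
        peek_pos = self.pos + 1
        if peek_pos > len(self.text) - 1:
            return None
        return self.text[peek_pos]

lexer = MiniLexer(':= 5')
print(lexer.current_char)  # ':'
print(lexer.peek())        # '=' -- so this is ASSIGN, not a bare colon
print(lexer.pos)           # 0  -- peek did not advance the position
```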

It’s time to see our shiny new lexer in all its glory and action. Download the source code from GitHub and launch your Python shell from the same directory where you saved the spi.py file:

>>> from spi import Lexer
>>> lexer = Lexer('BEGIN a := 2; END.')
>>> lexer.get_next_token()
Token(BEGIN, 'BEGIN')
>>> lexer.get_next_token()
Token(ID, 'a')
>>> lexer.get_next_token()
Token(ASSIGN, ':=')
>>> lexer.get_next_token()
Token(INTEGER, 2)
>>> lexer.get_next_token()
Token(SEMI, ';')
>>> lexer.get_next_token()
Token(END, 'END')
>>> lexer.get_next_token()
Token(DOT, '.')
>>> lexer.get_next_token()
Token(EOF, None)
>>>


Moving on to parser changes.

Here is the summary of changes in our parser:

  1. Let’s start with new AST nodes:

    • Compound AST node represents a compound statement. It contains a list of statement nodes in its children variable.

      class Compound(AST):
          """Represents a 'BEGIN ... END' block"""
          def __init__(self):
              self.children = []
      
    • Assign AST node represents an assignment statement. Its left variable is for storing a Var node and its right variable is for storing a node returned by the expr parser method:

      class Assign(AST):
          def __init__(self, left, op, right):
              self.left = left
              self.token = self.op = op
              self.right = right
      
    • Var AST node (you guessed it) represents a variable. The self.value holds the variable’s name.

      class Var(AST):
          """The Var node is constructed out of ID token."""
          def __init__(self, token):
              self.token = token
              self.value = token.value
      
    • NoOp node is used to represent an empty statement. For example ‘BEGIN END’ is a valid compound statement that has no statements.

      class NoOp(AST):
          pass
      
  2. As you remember, each rule from the grammar has a corresponding method in our recursive-descent parser. This time we’re adding seven new methods. These methods are responsible for parsing new language constructs and constructing new AST nodes. They are pretty straightforward:

    def program(self):
        """program : compound_statement DOT"""
        node = self.compound_statement()
        self.eat(DOT)
        return node
    
    def compound_statement(self):
        """
        compound_statement: BEGIN statement_list END
        """
        self.eat(BEGIN)
        nodes = self.statement_list()
        self.eat(END)
    
        root = Compound()
        for node in nodes:
            root.children.append(node)
    
        return root
    
    def statement_list(self):
        """
        statement_list : statement
                       | statement SEMI statement_list
        """
        node = self.statement()
    
        results = [node]
    
        while self.current_token.type == SEMI:
            self.eat(SEMI)
            results.append(self.statement())
    
        if self.current_token.type == ID:
            self.error()
    
        return results
    
    def statement(self):
        """
        statement : compound_statement
                  | assignment_statement
                  | empty
        """
        if self.current_token.type == BEGIN:
            node = self.compound_statement()
        elif self.current_token.type == ID:
            node = self.assignment_statement()
        else:
            node = self.empty()
        return node
    
    def assignment_statement(self):
        """
        assignment_statement : variable ASSIGN expr
        """
        left = self.variable()
        token = self.current_token
        self.eat(ASSIGN)
        right = self.expr()
        node = Assign(left, token, right)
        return node
    
    def variable(self):
        """
        variable : ID
        """
        node = Var(self.current_token)
        self.eat(ID)
        return node
    
    def empty(self):
        """An empty production"""
        return NoOp()
    
  3. We also need to update the existing factor method to parse variables:

    def factor(self):
        """factor : PLUS  factor
                  | MINUS factor
                  | INTEGER
                  | LPAREN expr RPAREN
                  | variable
        """
        token = self.current_token
        if token.type == PLUS:
            self.eat(PLUS)
            node = UnaryOp(token, self.factor())
            return node
        ...
        else:
            node = self.variable()
            return node
    
  4. The parser’s parse method is updated to start the parsing process by parsing a program definition:

    def parse(self):
        node = self.program()
        if self.current_token.type != EOF:
            self.error()
    
        return node
    

Here is our sample program again:

BEGIN
    BEGIN
        number := 2;
        a := number;
        b := 10 * a + 10 * number / 4;
        c := a - - b
    END;
    x := 11;
END.

Let’s visualize it with genastdot.py (For brevity, when displaying a Var node, it just shows the node’s variable name and when displaying an Assign node it shows ‘:=’ instead of showing ‘Assign’ text):

$ python genastdot.py assignments.txt > ast.dot && dot -Tpng -o ast.png ast.dot


And finally, here are the required interpreter changes:

To interpret new AST nodes, we need to add corresponding visitor methods to the interpreter. There are four new visitor methods:

  • visit_Compound
  • visit_Assign
  • visit_Var
  • visit_NoOp

Compound and NoOp visitor methods are pretty straightforward. The visit_Compound method iterates over its children and visits each one in turn, and the visit_NoOp method does nothing.

def visit_Compound(self, node):
    for child in node.children:
        self.visit(child)

def visit_NoOp(self, node):
    pass


The Assign and Var visitor methods deserve a closer examination.

When we assign a value to a variable, we need to store that value somewhere for when we need it later, and that’s exactly what the visit_Assign method does:

def visit_Assign(self, node):
    var_name = node.left.value
    self.GLOBAL_SCOPE[var_name] = self.visit(node.right)

The method stores a key-value pair (a variable name and a value associated with the variable) in a symbol table GLOBAL_SCOPE. What is a symbol table? A symbol table is an abstract data type (ADT) for tracking various symbols in source code. The only symbol category we have right now is variables and we use the Python dictionary to implement the symbol table ADT. For now I’ll just say that the way the symbol table is used in this article is pretty “hacky”: it’s not a separate class with special methods but a simple Python dictionary and it also does double duty as a memory space. In future articles, I will be talking about symbol tables in much greater detail, and together we’ll also remove all the hacks.

Let’s take a look at an AST for the statement “a := 3;” and the symbol table before and after the visit_Assign method does its job:

Now let’s take a look at an AST for the statement “b := a + 7;”

As you can see, the right-hand side of the assignment statement - “a + 7” - references the variable ‘a’. So before we can evaluate the expression “a + 7”, we need to find out the value of ‘a’, and that’s the responsibility of the visit_Var method:

def visit_Var(self, node):
    var_name = node.value
    val = self.GLOBAL_SCOPE.get(var_name)
    if val is None:
        raise NameError(repr(var_name))
    else:
        return val

When the method visits a Var node as in the above AST picture, it first gets the variable’s name and then uses that name as a key into the GLOBAL_SCOPE dictionary to look up the variable’s value. If it can find the value, it returns it; if not, it raises a NameError exception. Here are the contents of the symbol table before evaluating the assignment statement “b := a + 7;”:
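To see how visit_Assign and visit_Var cooperate through GLOBAL_SCOPE, here is a minimal, self-contained sketch. The Num, Var, Assign, and MiniInterpreter classes below are simplified stand-ins for the article’s AST and Interpreter classes, not the actual spi.py code:

```python
# Simplified stand-ins for the article's AST node classes.
class Num:
    def __init__(self, value):
        self.value = value

class Var:
    def __init__(self, value):
        self.value = value  # the variable's name

class Assign:
    def __init__(self, left, right):
        self.left = left    # a Var node
        self.right = right  # an expression node

class MiniInterpreter:
    def __init__(self):
        self.GLOBAL_SCOPE = {}

    def visit(self, node):
        # Dispatch to visit_Num, visit_Var or visit_Assign by class name.
        return getattr(self, 'visit_' + type(node).__name__)(node)

    def visit_Num(self, node):
        return node.value

    def visit_Assign(self, node):
        # Store the evaluated right-hand side under the variable's name.
        self.GLOBAL_SCOPE[node.left.value] = self.visit(node.right)

    def visit_Var(self, node):
        # Look the variable up in the same dictionary.
        val = self.GLOBAL_SCOPE.get(node.value)
        if val is None:
            raise NameError(repr(node.value))
        return val

interp = MiniInterpreter()
interp.visit(Assign(Var('a'), Num(3)))  # a := 3
print(interp.visit(Var('a')))           # 3
print(interp.GLOBAL_SCOPE)              # {'a': 3}
```

The same dictionary serves as both the symbol table (it knows which names exist) and the memory space (it holds their current values) - exactly the “double duty” hack described above.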

These are all the changes we need to make today to get our interpreter ticking. At the end of the main program, we simply print the contents of the symbol table GLOBAL_SCOPE to standard output.

Let’s take our updated interpreter for a drive both from a Python interactive shell and from the command line. Make sure that you downloaded both the source code for the interpreter and the assignments.txt file before testing:

Launch your Python shell:

$ python
>>> from spi import Lexer, Parser, Interpreter
>>> text = """\
... BEGIN
...
...     BEGIN
...         number := 2;
...         a := number;
...         b := 10 * a + 10 * number / 4;
...         c := a - - b
...     END;
...
...     x := 11;
... END.
... """
>>> lexer = Lexer(text)
>>> parser = Parser(lexer)
>>> interpreter = Interpreter(parser)
>>> interpreter.interpret()
>>> print(interpreter.GLOBAL_SCOPE)
{'a': 2, 'x': 11, 'c': 27, 'b': 25, 'number': 2}

And from the command line, using a source file as input to our interpreter:

$ python spi.py assignments.txt
{'a': 2, 'x': 11, 'c': 27, 'b': 25, 'number': 2}

If you haven’t tried it yet, try it now and see for yourself that the interpreter is doing its job properly.


Let’s sum up what you had to do to extend the Pascal interpreter in this article:

  1. Add new rules to the grammar
  2. Add new tokens and supporting methods to the lexer and update the get_next_token method
  3. Add new AST nodes to the parser for new language constructs
  4. Add new methods corresponding to the new grammar rules to our recursive-descent parser and update any existing methods, if necessary (factor method, I’m looking at you :)
  5. Add new visitor methods to the interpreter
  6. Add a dictionary for storing variables and for looking them up


In this part I had to introduce a number of “hacks” that we’ll remove as we move forward with the series:

  1. The program grammar rule is incomplete. We’ll extend it later with additional elements.
  2. Pascal is a statically typed language, and you must declare a variable and its type before using it. But, as you saw, that was not the case in this article.
  3. No type checking so far. It’s not a big deal at this point, but I just wanted to mention it explicitly. Once we add more types to our interpreter we’ll need to report an error when you try to add a string and an integer, for example.
  4. A symbol table in this part is a simple Python dictionary that does double duty as a memory space. Worry not: symbol tables are such an important topic that I’ll have several articles dedicated just to them. And memory space (runtime management) is a topic of its own.
  5. In our simple calculator from previous articles, we used a forward slash character ‘/’ for denoting integer division. In Pascal, though, you have to use the keyword div to specify integer division (see Exercise 2).
  6. There is also one hack that I introduced on purpose so that you could fix it in Exercise 1: in Pascal all reserved keywords and identifiers are case insensitive, but the interpreter in this article treats them as case sensitive.


To keep you fit, here are new exercises for you:

  1. Pascal variables and reserved keywords are case insensitive, unlike in many other programming languages, so BEGIN, begin, and BeGin all refer to the same reserved keyword. Update the interpreter so that variables and reserved keywords are case insensitive. Use the following program to test it:

    BEGIN
    
        BEGIN
            number := 2;
            a := NumBer;
            B := 10 * a + 10 * NUMBER / 4;
            c := a - - b
        end;
    
        x := 11;
    END.
    
  2. I mentioned in the “hacks” section before that our interpreter is using the forward slash character ‘/’ to denote integer division, but instead it should be using Pascal’s reserved keyword div for integer division. Update the interpreter to use the div keyword for integer division, thus eliminating one of the hacks.

  3. Update the interpreter so that variables can also start with an underscore, as in ‘_num := 5’.


That’s all for today. Stay tuned and see you soon.


Here is a list of books I recommend that will help you in your study of interpreters and compilers:

  1. Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages (Pragmatic Programmers)

  2. Compilers: Principles, Techniques, and Tools (2nd Edition)


By the way, I’m writing a book “Let’s Build A Web Server: First Steps” that explains how to write a basic web server from scratch. You can get a feel for the book here, here, and here. Subscribe to the mailing list to get the latest updates about the book and the release date.


All articles in this series:


Let’s Build A Simple Interpreter. Part 8.

Date

Today we’ll talk about unary operators, namely unary plus (+) and unary minus (-) operators.

A lot of today’s material is based on the material from the previous article, so if you need a refresher just head back to Part 7 and go over it again. Remember: repetition is the mother of all learning.

Having said that, this is what you are going to do today:

  • extend the grammar to handle unary plus and unary minus operators
  • add a new UnaryOp AST node class
  • extend the parser to generate an AST with UnaryOp nodes
  • extend the interpreter and add a new visit_UnaryOp method to interpret unary operators

Let’s get started, shall we?

So far we’ve worked with binary operators only (+, -, *, /), that is, the operators that operate on two operands.

What is a unary operator then? A unary operator is an operator that operates on one operand only.

Here are the rules for unary plus and unary minus operators:

  • The unary minus (-) operator produces the negation of its numeric operand
  • The unary plus (+) operator yields its numeric operand without change
  • The unary operators have higher precedence than the binary operators +, -, *, and /

In the expression “+ - 3” the first ‘+’ operator represents the unary plus operation and the second ‘-‘ operator represents the unary minus operation. The expression “+ - 3” is equivalent to “+ (- (3))” which is equal to -3. One could also say that -3 in the expression is a negative integer, but in our case we treat it as a unary minus operator with 3 as its positive integer operand:

Let’s take a look at another expression, “5 - - 2”:

In the expression “5 - - 2” the first ‘-‘ represents the binary subtraction operation and the second ‘-‘ represents the unary minus operation, the negation.
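Python’s own unary plus and minus operators happen to follow the same rules, so we can sanity-check both readings directly in a Python shell:

```python
# "+ - 3" parses as +(-(3)): unary plus applied to the negation of 3.
print(+(-3))   # -3

# "5 - - 2": binary subtraction, with the negation of 2 as the right operand.
print(5 - -2)  # 7
```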

And some more examples:

Now let’s update our grammar to include unary plus and unary minus operators. We’ll modify the factor rule and add unary operators there because unary operators have higher precedence than binary +, -, * and / operators.

This is our current factor rule:

factor : INTEGER | LPAREN expr RPAREN

And this is our updated factor rule to handle unary plus and unary minus operators:

factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN

As you can see, I extended the factor rule to reference itself, which allows us to derive expressions like “- - - + - 3”, a legitimate expression with a lot of unary operators.

Here is the full grammar that can now derive expressions with unary plus and unary minus operators:

expr : term ((PLUS | MINUS) term)*
term : factor ((MUL | DIV) factor)*
factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN

The next step is to add an AST node class to represent unary operators.

This one will do:

class UnaryOp(AST):
    def __init__(self, op, expr):
        self.token = self.op = op
        self.expr = expr

The constructor takes two parameters: op, which represents the unary operator token (plus or minus) and expr, which represents an AST node.

Our updated grammar had changes to the factor rule, so that’s what we’re going to modify in our parser - the factor method. We will add code to the method to handle the “(PLUS | MINUS) factor” sub-rule:

def factor(self):
    """factor : (PLUS | MINUS) factor | INTEGER | LPAREN expr RPAREN"""
    token = self.current_token
    if token.type == PLUS:
        self.eat(PLUS)
        node = UnaryOp(token, self.factor())
        return node
    elif token.type == MINUS:
        self.eat(MINUS)
        node = UnaryOp(token, self.factor())
        return node
    elif token.type == INTEGER:
        self.eat(INTEGER)
        return Num(token)
    elif token.type == LPAREN:
        self.eat(LPAREN)
        node = self.expr()
        self.eat(RPAREN)
        return node


And now we need to extend the Interpreter class and add a visit_UnaryOp method to interpret unary nodes:

def visit_UnaryOp(self, node):
    op = node.op.type
    if op == PLUS:
        return +self.visit(node.expr)
    elif op == MINUS:
        return -self.visit(node.expr)
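A quick way to convince ourselves the method works is a self-contained sketch that evaluates nested UnaryOp nodes with the same logic. Token, Num, and UnaryOp below are simplified stand-ins for the spi.py classes, and visit is a plain function rather than an Interpreter method:

```python
PLUS, MINUS, INTEGER = 'PLUS', 'MINUS', 'INTEGER'

# Simplified stand-ins for the spi.py Token and AST node classes.
class Token:
    def __init__(self, type, value):
        self.type = type
        self.value = value

class Num:
    def __init__(self, token):
        self.value = token.value

class UnaryOp:
    def __init__(self, op, expr):
        self.token = self.op = op
        self.expr = expr

def visit(node):
    # Same logic as visit_UnaryOp, plus a Num base case for the leaves.
    if isinstance(node, Num):
        return node.value
    if node.op.type == PLUS:
        return +visit(node.expr)
    elif node.op.type == MINUS:
        return -visit(node.expr)

# "- - 3" as nested UnaryOp nodes: -(-(3)) evaluates to 3.
minus = Token(MINUS, '-')
tree = UnaryOp(minus, UnaryOp(minus, Num(Token(INTEGER, 3))))
print(visit(tree))  # 3
```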

Onward!

Let’s manually build an AST for the expression “5 - - - 2” and pass it to our interpreter to verify that the new visit_UnaryOp method works. Here is how you can do it from the Python shell:

>>> from spi import BinOp, UnaryOp, Num, MINUS, INTEGER, Token
>>> five_tok = Token(INTEGER, 5)
>>> two_tok = Token(INTEGER, 2)
>>> minus_tok = Token(MINUS, '-')
>>> expr_node = BinOp(
...     Num(five_tok),
...     minus_tok,
...     UnaryOp(minus_tok, UnaryOp(minus_tok, Num(two_tok)))
... )
>>> from spi import Interpreter
>>> inter = Interpreter(None)
>>> inter.visit(expr_node)
3

Visually the above AST looks like this:

Download the full source code of the interpreter for this article directly from GitHub. Try it out and see for yourself that your updated tree-based interpreter properly evaluates arithmetic expressions containing unary operators.

Here is a sample session:

$ python spi.py
spi> - 3
-3
spi> + 3
3
spi> 5 - - - + - 3
8
spi> 5 - - - + - (3 + 4) - +2
10


I also updated the genastdot.py utility to handle unary operators. Here are some examples of the generated AST images for expressions with unary operators:

$ python genastdot.py "- 3" > ast.dot && dot -Tpng -o ast.png ast.dot

$ python genastdot.py "+ 3" > ast.dot && dot -Tpng -o ast.png ast.dot

$ python genastdot.py "5 - - - + - 3" > ast.dot && dot -Tpng -o ast.png ast.dot

$ python genastdot.py "5 - - - + - (3 + 4) - +2" \
  > ast.dot && dot -Tpng -o ast.png ast.dot



And here is a new exercise for you:


That’s all for today. In the next article, we’ll tackle assignment statements. Stay tuned and see you soon.


Here is a list of books I recommend that will help you in your study of interpreters and compilers:

  1. Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages (Pragmatic Programmers)

  2. Writing Compilers and Interpreters: A Software Engineering Approach

  3. Modern Compiler Implementation in Java

  4. Modern Compiler Design

  5. Compilers: Principles, Techniques, and Tools (2nd Edition)



