SE250:lab-9:bvic005
Lab 9
In this lab we had three options to choose from. We could write a tokeniser, extend a parser, or write a compiler.
At first, I picked the third option, writing a compiler, as it sounded the most interesting. However, I soon found that this was a bad idea. As we were not given any indication whatsoever of what the parse tree said compiler was supposed to compile looked like (in terms of token syntax), it was rather hard to code it. We were also not given any indication as to how the compiler was supposed to be called, or any way of testing whatever we coded.
With this lack of information on task 3, I decided I needed to find out exactly how the tokeniser and parser were supposed to be implemented, to work out how they interfaced with the compiler, and hopefully pick up some of the syntax the parse tree should be using. This also proved to be a rather difficult task. The instructions for tasks 1 and 2 were just as vague as those for task 3, but at least they came with some example/half-implemented code. Unfortunately, as this code had almost zero commenting, it was rather difficult to decipher.
After around an hour and a half of staring at code and talking to others (who didn't really have a clue what was going on either), I had a general idea as to what was happening. Unfortunately, I realised that to have any hope of testing my code for task 3, I would need a working parser and tokeniser. Not only did I not have a working parser, as I had not done task 2, but I also found that the compiler implementation depends heavily on the syntax the tokeniser outputs, making writing it against the toy tokeniser an exercise in futility. Therefore, I needed both tasks 1 and 2 done before I had any hope of doing task 3.
I then went and talked to David. He had started on task 2 and had mostly gotten it working (though it was hard to tell, as we could only test it against the toy tokeniser). After some more discussion, we had a pretty good idea between us as to what was going on, and decided to start work on task 1, using his working parser.
This proved to be a more difficult task than anticipated. Not only did the coding for this task prove to be rather substantial, but once again we ran into the problem of not knowing what language syntax we were supposed to be coding for. We ended up making up the syntax as we went along, and so far (after a good three hours or so of coding, staring at screens, and tearing our hair out) we have managed to implement around half of the functionality we estimate is required.
I don't actually have a copy of the code at the moment (we were doing it on David's laptop), so I can't post any of it, but we will probably keep working on it, and if so, I will post some at a later date.
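In the meantime, here is a very rough sketch of the general shape of what we were writing — this is not our actual code, and the token kinds, struct layout, and input syntax are just placeholders I've made up, not whatever the lab actually expected:

/* Rough sketch only: token kinds and syntax are made up for illustration. */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

typedef enum { TOK_NUMBER, TOK_IDENT, TOK_OP, TOK_LPAREN, TOK_RPAREN, TOK_END } TokenKind;

typedef struct {
    TokenKind kind;
    char text[32];   /* the characters making up the token */
} Token;

/* Read the next token starting at *pos in src, advancing *pos past it. */
static Token next_token(const char *src, size_t *pos) {
    Token tok = { TOK_END, "" };
    size_t i = *pos;

    while (isspace((unsigned char)src[i]))   /* skip whitespace */
        i++;

    if (src[i] == '\0') {                    /* end of input */
        *pos = i;
        return tok;
    }

    size_t start = i;
    if (isdigit((unsigned char)src[i])) {
        while (isdigit((unsigned char)src[i])) i++;
        tok.kind = TOK_NUMBER;
    } else if (isalpha((unsigned char)src[i])) {
        while (isalnum((unsigned char)src[i])) i++;
        tok.kind = TOK_IDENT;
    } else if (src[i] == '(') {
        i++; tok.kind = TOK_LPAREN;
    } else if (src[i] == ')') {
        i++; tok.kind = TOK_RPAREN;
    } else {
        i++; tok.kind = TOK_OP;              /* treat anything else as an operator */
    }

    size_t len = i - start;
    if (len >= sizeof tok.text) len = sizeof tok.text - 1;
    memcpy(tok.text, src + start, len);
    tok.text[len] = '\0';

    *pos = i;
    return tok;
}

int main(void) {
    const char *input = "x = (3 + 42) * y";
    size_t pos = 0;
    Token tok;
    while ((tok = next_token(input, &pos)).kind != TOK_END)
        printf("kind=%d text=\"%s\"\n", tok.kind, tok.text);
    return 0;
}

The basic idea is just that the tokeniser hands the parser a stream of (kind, text) pairs, which is the interface the compiler ends up depending on — hence the problem of not being able to write task 3 without knowing what tasks 1 and 2 produce.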
Edit: It looks like David has posted our code so far here.
Recommendations for next year
- Comment all provided code.
- More detailed instructions, particularly in terms of input/output definitions for the whole task.
- Either merge the tokeniser and the parser into one lab, or separate them completely, instead of having them depend on each other while each student only does one of them.
- Provide a way of testing all three tasks.