SE250:lab-1:hpan027


The aim of this lab was to find out how long an addition takes with different variable types.

The first problem was figuring out how to compile from Emacs, which I'd never used before. It turns out the command line you need can be found on the SE wiki, but in the end I switched back to Visual Studio because I'm more familiar with it.

The second problem I ran into was that I had no idea how to use the clock() function. This was solved by googling "c clock()" and reading some of the articles online.

Turns out clock() returns the number of clock ticks of processor time used since the program was launched. The tick length is given by the CLOCKS_PER_SEC constant in <time.h>; on this machine each tick is about 1/1000 of a second.
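As an aside, the raw tick count can be converted to seconds by dividing by CLOCKS_PER_SEC. A minimal self-contained sketch (not part of the original lab code):

#include <stdio.h>
#include <time.h>

int main(void) {
 clock_t start, finish;
 double seconds;

 start = clock();
 /* ... code being timed goes here ... */
 finish = clock();

 /* dividing by CLOCKS_PER_SEC converts raw ticks into seconds */
 seconds = (double)(finish - start) / CLOCKS_PER_SEC;
 printf("%f seconds\n", seconds);
 return 0;
}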

So to calculate the time taken for an addition, the current tick count has to be recorded before and after the additions, with the difference being the time taken.


The initial code turned out to be something like

#include <stdio.h>
#include <time.h>

int main(void) {

 int testVar = 0;
 int i;
 clock_t clockStart;
 clock_t clockFinish;
 clock_t timeTaken;

 /* tick count before the additions */
 clockStart = clock();

 for (i = 0; i < 10000000; i++) {
  testVar++;
 }

 /* tick count after the additions */
 clockFinish = clock();

 /* the difference is the ticks spent in the loop */
 timeTaken = clockFinish - clockStart;

 /* cast to long: clock_t is not guaranteed to be a long */
 printf("%ld\n", (long)timeTaken);
 return 0;
}
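One caveat not noted in the original lab: an optimising compiler may delete the loop entirely, since testVar is never used afterwards. These timings assume optimisation is off, which is the Visual Studio debug default; declaring testVar as volatile is one way to keep the loop in place.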


There turned out to be a problem with getting results:

-The program returns a different value each time it runs, with the first run usually taking the longest. Sometimes there are "spikes" where the program takes much longer to run than usual. We guessed this was related to the computer's memory, so an average was taken over several runs, ignoring the outliers (see the sketch below).
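
A sketch of how the repeated trials could be automated (my reconstruction; in the lab the averaging was done by hand). The TRIALS count and the volatile qualifier are my additions:

#include <stdio.h>
#include <time.h>

#define TRIALS 10

/* time one run of the addition loop, in ticks;
   volatile stops the compiler optimising the loop away */
static clock_t time_once(void) {
 volatile int testVar = 0;
 int i;
 clock_t start = clock();
 for (i = 0; i < 10000000; i++) {
  testVar++;
 }
 return clock() - start;
}

int main(void) {
 clock_t t, sum = 0, max = 0;
 int n;
 for (n = 0; n < TRIALS; n++) {
  t = time_once();
  sum += t;
  if (t > max) max = t;
 }
 /* drop the single slowest run as an outlier, average the rest */
 printf("average: %ld ticks\n", (long)((sum - max) / (TRIALS - 1)));
 return 0;
}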


The results are as follows (in clock ticks, for 10,000,000 additions):

int     20
long    21
short   23
float   89
double  88
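
The other types were timed the same way, by changing the declaration of testVar. A sketch of the double variant (the original only shows the int version):

 /* testVar is now a double, so each testVar++
    is a floating-point addition */
 double testVar = 0.0;

 clockStart = clock();

 for (i = 0; i < 10000000; i++) {
  testVar++;
 }

 clockFinish = clock();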

However, these results include the time taken to run the loop itself, not just the additions, so the loop overhead had to be measured as well. This was done by removing the line

testVar++;

from the code, so that the only code between the two clock() calls is the empty loop.
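The timed region then looks like this (the rest of the program is unchanged):

 clockStart = clock();

 for (i = 0; i < 10000000; i++) {
  /* empty: only the loop overhead is timed */
 }

 clockFinish = clock();

This gave a surprising result.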

Loop time 20

Comparing this to the time taken for the int/long/short calculations, there is hardly any difference: only 0-3 ticks, each 1/1000 of a second. It's impossible to tell whether this difference is due to the additions or some other factor, so the length of the loop was increased to

for (i=0; i<1000000000; i++)


With this change, the results are as follows (in clock ticks, for 1,000,000,000 additions):

int     2143
long    2160
short   2400
float   8120
double  8115

Loop time  2128

There is now a more noticeable difference, with float and double taking much longer than int/long/short. However, it still seems that for int/long/short the majority of the runtime is spent on the loop itself rather than on the additions.


I then tried changing the loop variable to different types to see whether these results are consistent (i.e. "double i" instead of "int i"; a sketch of this variant follows the list below). The results are:

-int/long/short calculations still take hardly any longer than the loop itself

-however, now neither do the double/float calculations
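
A sketch of the double-counter variant (my reconstruction; the original describes the change but does not show it):

 double i;  /* the loop counter is now a double */

 clockStart = clock();

 for (i = 0; i < 1000000000; i++) {
  testVar++;
 }

 clockFinish = clock();

Note that with a double counter, the i++ increment and the i < 1000000000 comparison in the loop header are themselves floating-point operations.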


Conclusion

So:


1) With an "int" type loop variable:

-int/short/long calculations do not take much longer than the loop itself

-double/float calculations take much longer


2) With a "double" type loop variable:

-none of the calculations take much longer than the loop itself

It's hard to draw a definite conclusion because the loop overhead affects the time taken much more than the additions done within it. In the first case (loop variable of type int) the double/float calculations definitely took longer, but once the loop variable type was changed to double this was no longer the case. I can only guess that this has something to do with the way double/float variables are handled in memory; another likely factor is that with a double loop counter, incrementing and comparing the counter are themselves floating-point operations, so the loop baseline already includes floating-point work and masks the cost of the addition being measured.

Hpan027 11:28, 4 March 2008 (NZDT)