SE250:lab-1:mcar147
Lab 1
What I did
The first thing was remembering how to write C code... With the help of a few mates working in a group, we were able to recall how to write it (with the assistance of last year's 131 book, which Shikhar had thankfully brought). Secondly, we had to figure out what the clock() function was and how it worked. Making use of the ever-present and useful Google, a quick search brought up what the function was, its syntax, and the way it was used.
The Code
#include <stdio.h>
#include <time.h>

int x, z;
long c;
short v;
float b;
double n;
clock_t start, end;
double elapsed;

int main() {
    start = clock();                  /* tick count before the loop */
    for (z = 0; z < 1000000000; z++) {
        x = x + x;                    /* one int addition per iteration */
    }
    end = clock();                    /* tick count after the loop */
    elapsed = ((double) (end - start)) / CLOCKS_PER_SEC;   /* convert ticks to seconds */
    printf("Time taken for 1,000,000,000 int additions is %lf seconds \n\n", elapsed);
    return 0;
}
The rest of the program was similar to the above, with only the variable type being changed (see the sketch below for one of the other variants).
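For example, the double version would have looked roughly like this (a minimal sketch reconstructed from the description above, since only the int version of the loop appears in the original code; it reuses the globals n, z, start, end and elapsed declared earlier):

start = clock();
for (z = 0; z < 1000000000; z++) {
    n = n + n;                        /* n is the double declared above */
}
end = clock();
elapsed = ((double) (end - start)) / CLOCKS_PER_SEC;
printf("Time taken for 1,000,000,000 double additions is %lf seconds \n\n", elapsed);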
Results
Wow, computers are fast... The 1 BILLION additions took about 4 seconds on average across the board, with roughly 3.2 seconds for ints and 4.7 seconds for doubles and floats.
Over a period of 5 trials on this computer the average times were as follows:
int    = 3.2696 seconds, with a range of 0.038 seconds
long   = 3.3046 seconds, with a range of 0.005 seconds
short  = 3.4654 seconds, with a range of 0.007 seconds
float  = 4.7582 seconds, with a range of 0.182 seconds
double = 4.7746 seconds, with a range of 0.180 seconds
From these results it seems that the integer types (int, long and short) all take a similar amount of time to calculate, regardless of their size, while the floating-point types (float and double) take noticeably longer.
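As a side note, the averages and ranges quoted above can be worked out with a small helper like the one below (a sketch only; the values in the array are placeholders, not the actual trial measurements):

#include <stdio.h>

/* Sketch: average and range (max - min) of a set of trial times. */
int main() {
    double trials[5] = { 0.0, 0.0, 0.0, 0.0, 0.0 };  /* fill in the measured times here */
    double sum = 0.0, min = trials[0], max = trials[0];
    int i;
    for (i = 0; i < 5; i++) {
        sum += trials[i];
        if (trials[i] < min) min = trials[i];
        if (trials[i] > max) max = trials[i];
    }
    printf("average = %f seconds, range = %f seconds\n", sum / 5, max - min);
    return 0;
}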
Afterthoughts
The hardest part of this lab was recalling how to write C code, though once it started coming back, most of it returned in a flood. I still keep making the same stupid mistakes, like forgetting the semicolons.