Talk:SE250:lab-5:mgha023


Comments from John Hamer

  • "I felt this was a goos sample size as this will let us test the capacity of each of the hash functions and how well it performs under critical load situations." — this is not an adequate justification. You need to explain why you feel it is a good size. Did you perform any experiments in coming to this decision? If not, how can you be at all sure the number is ok? Getting the sample size wrong will risk invalidating all of your data, so it is very important.
  • Great to see you referencing the 2007 HTTB
  • "It can be seen that from increasing the sample size from 1000 to 10000, there is a considerable increase in randomness. However there is not much of an improvement in randomness when the sample size is changed fomr 10000 to 100000." — a much more satisfying explanation, thank you.
  • You only ran the tests on the low-entropy source? Shame.
  • 10/1000000 — this is an extremely low load factor (0.00001). No one runs such sparse hash tables.
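
A minimal sketch of the kind of sample-size experiment meant above, assuming a chi-squared statistic over bucket counts as the uniformity measure; the hash function, table size, and random key source are placeholders rather than the lab's actual code. The idea is to sweep candidate sizes and pick the smallest one at which the statistic stops changing meaningfully:

 /* Sketch: sweep sample sizes and report a uniformity statistic.
    hash(), TABLE_SIZE and the rand() key source are placeholders
    for the lab's own hash functions and input data. */
 #include <stdio.h>
 #include <stdlib.h>
 
 #define TABLE_SIZE 1024
 
 /* Placeholder hash; substitute each of the lab's hash functions. */
 static unsigned hash(unsigned x) {
     x ^= x >> 16;
     x *= 0x45d9f3bu;
     x ^= x >> 16;
     return x;
 }
 
 /* Chi-squared over bucket counts: values near TABLE_SIZE suggest
    the n keys are spread roughly uniformly over the table. */
 static double chi_squared(size_t n) {
     unsigned *buckets = calloc(TABLE_SIZE, sizeof *buckets);
     double expected = (double)n / TABLE_SIZE, stat = 0.0;
     for (size_t i = 0; i < n; i++)
         buckets[hash((unsigned)rand()) % TABLE_SIZE]++;
     for (size_t b = 0; b < TABLE_SIZE; b++) {
         double d = buckets[b] - expected;
         stat += d * d / expected;
     }
     free(buckets);
     return stat;
 }
 
 int main(void) {
     size_t sizes[] = { 1000, 10000, 100000, 1000000 };
     for (size_t i = 0; i < sizeof sizes / sizeof *sizes; i++)
         printf("n = %7zu  load factor = %9.5f  chi^2 = %.1f\n",
                sizes[i], (double)sizes[i] / TABLE_SIZE,
                chi_squared(sizes[i]));
     return 0;
 }

If the statistic levels off between 10000 and 100000, that plateau is the evidence the first comment asks for. Printing the load factor alongside also makes the last point concrete: 10 keys in a 1,000,000-slot table gives a load factor of 0.00001, far below the 0.5 to 0.75 range real tables commonly resize to maintain.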

Overall, a reasonable effort. Pity you didn't get on to the second half.