<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://wiki.kram.nz/index.php?action=history&amp;feed=atom&amp;title=Talk%3ASE250%3Alab-5%3Ajsmi233</id>
	<title>Talk:SE250:lab-5:jsmi233 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.kram.nz/index.php?action=history&amp;feed=atom&amp;title=Talk%3ASE250%3Alab-5%3Ajsmi233"/>
	<link rel="alternate" type="text/html" href="https://wiki.kram.nz/index.php?title=Talk:SE250:lab-5:jsmi233&amp;action=history"/>
	<updated>2026-04-29T03:43:32Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>https://wiki.kram.nz/index.php?title=Talk:SE250:lab-5:jsmi233&amp;diff=13851&amp;oldid=prev</id>
		<title>Mark: 1 revision(s)</title>
		<link rel="alternate" type="text/html" href="https://wiki.kram.nz/index.php?title=Talk:SE250:lab-5:jsmi233&amp;diff=13851&amp;oldid=prev"/>
		<updated>2008-11-03T10:43:36Z</updated>

		<summary type="html">&lt;p&gt;1 revision(s)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;== Comments from John Hamer ==&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;I just copied off the guy next to me.&amp;quot; &amp;amp;mdash; this is not an adequate reason.  How confident are you that the sample size is ok?  This is important.  If you get it wrong, you risk collecting worthless data.&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;Now I’m going to try and work out what this is supposed to mean&amp;quot; &amp;amp;mdash; excellent.  I&amp;#039;m pleased to see you are not intimidated by venturing into the unknown.&lt;br /&gt;
&lt;br /&gt;
* &amp;quot;. Therefore the greater number of samples, the closer we will get to theoretical values.  Hence I chose sample_size = 100000, because with any larger value, the program crashes.&amp;quot; &amp;amp;mdash; thank you!  A coherent justification.&lt;br /&gt;
&lt;br /&gt;
* In fact, &amp;quot;entropy&amp;quot; is just one measure of randomness.  You can still gerrymander the data so the entropy is high but the data is predictable.  The same is true of all the tests.  That&amp;#039;s why we use a suite of tests -- it&amp;#039;s harder to fool them all.&lt;br /&gt;
&lt;br /&gt;
* Why a load factor of 3?&lt;br /&gt;
&lt;br /&gt;
* Java String is worth commenting on: it does so-so in the stats tests, but comes up &amp;quot;better than expected&amp;quot; in practice.  How come?&lt;br /&gt;
&lt;br /&gt;
* base256 is also worthy of comment.  How can it be so bad (returning the same value for every input)?&lt;br /&gt;
&lt;br /&gt;
* And what&amp;#039;s up with Java Object hash suddenly collapsing?&lt;br /&gt;
&lt;br /&gt;
This is a solid effort.  You show a good understanding of the material, although you skirted the harder parts of probing the meaning of the different stats tests.  Your results could have been better presented in fixed-width tables or as graphs.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
	</entry>
</feed>