<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://wiki.kram.nz/index.php?action=history&amp;feed=atom&amp;title=SE250%3Alab-5%3Allay008</id>
	<title>SE250:lab-5:llay008 - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.kram.nz/index.php?action=history&amp;feed=atom&amp;title=SE250%3Alab-5%3Allay008"/>
	<link rel="alternate" type="text/html" href="https://wiki.kram.nz/index.php?title=SE250:lab-5:llay008&amp;action=history"/>
	<updated>2026-04-29T10:23:24Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>https://wiki.kram.nz/index.php?title=SE250:lab-5:llay008&amp;diff=6649&amp;oldid=prev</id>
		<title>Mark: 53 revision(s)</title>
		<link rel="alternate" type="text/html" href="https://wiki.kram.nz/index.php?title=SE250:lab-5:llay008&amp;diff=6649&amp;oldid=prev"/>
		<updated>2008-11-03T05:19:50Z</updated>

		<summary type="html">&lt;p&gt;53 revision(s)&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;==HOW DO HASH FUNCTIONS PERFORM IN THEORY AND PRACTICE?==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Task One: How Do the Functions Compare in Theoretical Randomness&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This task took quite a long time, largely because of the amount of repetitive recompiling involved, and it took a while at the start to understand exactly what to do.  The first thing that I did was read through the code and try to grasp what it was doing.  I noticed that the code was divided into the hash functions and the analysis, as well as the high- and low-entropy inputs.  I figured out which of the functions I had to substitute into the main code; from there it was simply a matter of modifying this to get the required output.&lt;br /&gt;
&lt;br /&gt;
I chose a sample size of 1000, a key count of 1000 and a table size of 100000, for two reasons.  First, when I ran the code these values gave a good entropy value, and altering them did not change the results considerably.  Second, I had a misconception about what the parameters meant.  It wasn&amp;#039;t entirely clear to me what they were, so I asked for help and was given the wrong information: that the sample size was the number of inputs into the hash table, when it is actually the number of times the statistical tests are run.  When I realised this mistake I didn&amp;#039;t change the values, because that would invalidate the results, but I did run some tests on how the results differed when I changed them.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
I kept most of the output on &amp;quot;full&amp;quot; because, although it takes up more space, it is easier to interpret (I didn&amp;#039;t have to remember which values corresponded to what).&lt;br /&gt;
&lt;br /&gt;
== Output ==&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;buzhash&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
  &lt;br /&gt;
 Testing Buzhash low on 1000 samples&lt;br /&gt;
 Entropy = 7.843786 bits per byte.&lt;br /&gt;
 &lt;br /&gt;
 Optimum compression would reduce the size&lt;br /&gt;
 of this 1000 byte file by 1 percent.&lt;br /&gt;
 &lt;br /&gt;
 Chi square distribution for 1000 samples is 214.46, and randomly&lt;br /&gt;
 would exceed this value 95.00 percent of the times.&lt;br /&gt;
 &lt;br /&gt;
 Arithmetic mean value of data bytes is 128.0860 (127.5 = random).&lt;br /&gt;
 Monte Carlo value for Pi is 3.132530120 (error 0.29 percent).&lt;br /&gt;
 Serial correlation coefficient is -0.017268 (totally uncorrelated = 0.0).&lt;br /&gt;
 &lt;br /&gt;
 Buzhash low 1000/100000: llps = 2, expecting 2.00948&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
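The serial correlation coefficient in the report measures how strongly each byte predicts the next one; 0.0 means consecutive bytes are uncorrelated. A sketch of the statistic as ENT defines it, with the last byte wrapping around to pair with the first (my own reconstruction, not the lab's code):

```c
#include <stddef.h>

/* Pearson correlation between each byte and its successor.  The
   denominator is zero when all bytes are equal, which is the
   "undefined (all values equal!)" case reported further down. */
double serial_correlation(const unsigned char *buf, size_t n)
{
    double t1 = 0.0, sum = 0.0, sumsq = 0.0;
    for (size_t i = 0; i < n; i++) {
        double u = buf[i];
        double v = buf[(i + 1) % n];  /* wrap last byte to first */
        t1 += u * v;
        sum += u;
        sumsq += u * u;
    }
    double num = (double)n * t1 - sum * sum;
    double den = (double)n * sumsq - sum * sum;
    return num / den;  /* NaN/inf when all values are equal */
}
```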
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 &lt;br /&gt;
 Testing Buzhash typical on 1000 samples&lt;br /&gt;
 Entropy = 7.797775 bits per byte.&lt;br /&gt;
 &lt;br /&gt;
 Optimum compression would reduce the size&lt;br /&gt;
 of this 1000 byte file by 2 percent.&lt;br /&gt;
  &lt;br /&gt;
 Chi square distribution for 1000 samples is 250.82, and randomly&lt;br /&gt;
 would exceed this value 50.00 percent of the times.&lt;br /&gt;
 &lt;br /&gt;
 Arithmetic mean value of data bytes is 126.5740 (127.5 = random).&lt;br /&gt;
 Monte Carlo value for Pi is 3.277108434 (error 4.31 percent).&lt;br /&gt;
 Serial correlation coefficient is -0.007005 (totally uncorrelated = 0.0).&lt;br /&gt;
 &lt;br /&gt;
 Buzhash typical 1000/100000: llps = 2, expecting 2.00948&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;buzhashn&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 &lt;br /&gt;
 Testing Buzhashn low on 1000 samples&lt;br /&gt;
 Entropy = 7.823873 bits per byte. &lt;br /&gt;
 &lt;br /&gt;
 Optimum compression would reduce the size&lt;br /&gt;
 of this 1000 byte file by 2 percent.&lt;br /&gt;
 &lt;br /&gt;
 Chi square distribution for 1000 samples is 220.61, and randomly&lt;br /&gt;
 would exceed this value 90.00 percent of the times.&lt;br /&gt;
 &lt;br /&gt;
 Arithmetic mean value of data bytes is 127.3730 (127.5 = random).&lt;br /&gt;
 Monte Carlo value for Pi is 3.108433735 (error 1.06 percent).&lt;br /&gt;
 Serial correlation coefficient is -0.007118 (totally uncorrelated = 0.0).&lt;br /&gt;
 &lt;br /&gt;
 Buzhashn low 1000/100000: llps = 2, expecting 2.00948&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java String Hash typical 	7.82387	90.00%	127.373	1.06%	-0.007118&lt;br /&gt;
Java String Hash typical 1000/10000: llps = 999, expecting 2.82556&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;hash_CRC&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
 &lt;br /&gt;
 Testing hash_CRC low on 1000 samples&lt;br /&gt;
 Entropy = 3.965965 bits per byte.&lt;br /&gt;
 &lt;br /&gt;
 Optimum compression would reduce the size&lt;br /&gt;
 of this 1000 byte file by 50 percent.&lt;br /&gt;
 &lt;br /&gt;
 Chi square distribution for 1000 samples is 36163.52, and randomly&lt;br /&gt;
 would exceed this value 0.01 percent of the times.&lt;br /&gt;
 &lt;br /&gt;
 Arithmetic mean value of data bytes is 93.6860 (127.5 = random).&lt;br /&gt;
 Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
 Serial correlation coefficient is -0.380754 (totally uncorrelated = 0.0).&lt;br /&gt;
 &lt;br /&gt;
 hash_CRC low 1000/100000: llps = 1, expecting 2.00948&lt;br /&gt;
&lt;br /&gt;
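The Monte Carlo value for Pi is obtained by reading the stream as successive (x, y) points in a unit square and counting how many fall inside the inscribed quarter-circle; a degenerate stream whose points all land inside gives exactly 4.0, which is why the worst hashes above all report a 27.32 percent error. A sketch of the calculation (my reconstruction, using 24-bit coordinates as ENT does):

```c
#include <stddef.h>

/* Estimate pi from a byte stream: consume 6 bytes per point as two
   24-bit coordinates in [0, 1], then pi ~= 4 * hits / tries. */
double monte_carlo_pi(const unsigned char *buf, size_t n)
{
    const double scale = 16777215.0;  /* 2^24 - 1 */
    size_t tries = 0, hits = 0;
    for (size_t i = 0; i + 6 <= n; i += 6) {
        double x = ((buf[i] * 256.0 + buf[i + 1]) * 256.0 + buf[i + 2]) / scale;
        double y = ((buf[i + 3] * 256.0 + buf[i + 4]) * 256.0 + buf[i + 5]) / scale;
        tries++;
        if (x * x + y * y <= 1.0)  /* inside the quarter circle */
            hits++;
    }
    return tries ? 4.0 * (double)hits / (double)tries : 0.0;
}
```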
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Testing hash_CRC typical on 1000 samples&lt;br /&gt;
Entropy = 7.202459 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 9 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 1660.86, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 114.9320 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 3.204819277 (error 2.01 percent).&lt;br /&gt;
Serial correlation coefficient is -0.032076 (totally uncorrelated = 0.0).&lt;br /&gt;
&lt;br /&gt;
hash_CRC typical 1000/100000: llps = 2, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;base256&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Testing base256 low on 1000 samples&lt;br /&gt;
Entropy = 0.000000 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 100 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 255000.00, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 97.0000 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
Serial correlation coefficient is undefined (all values equal!).&lt;br /&gt;
&lt;br /&gt;
base256 low 1000/100000: llps = 1000, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
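The 255000.00 chi-square figure above is exactly what the statistic gives when all 1000 bytes are identical: one bin holds all 1000 observations against an expectation of 1000/256 per bin. A sketch of the calculation (illustrative, not the lab's code):

```c
#include <stddef.h>

/* Chi-square statistic for byte frequencies against a uniform
   expectation: sum over 256 bins of (observed - expected)^2 / expected. */
double chi_square_bytes(const unsigned char *buf, size_t n)
{
    size_t counts[256] = {0};
    for (size_t i = 0; i < n; i++)
        counts[buf[i]]++;

    double expected = (double)n / 256.0;
    double chisq = 0.0;
    for (int b = 0; b < 256; b++) {
        double d = (double)counts[b] - expected;
        chisq += d * d / expected;
    }
    return chisq;
}
```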
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Testing base256 typical on 1000 samples&lt;br /&gt;
Entropy = 3.919224 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 51 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 19854.27, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 106.4100 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
Serial correlation coefficient is 0.217294 (totally uncorrelated = 0.0).&lt;br /&gt;
&lt;br /&gt;
base256 typical 1000/100000: llps = 46, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Java Integer Hash&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lab5.c:332: warning: passing arg 4 of `llps&amp;#039; from incompatible pointer type&lt;br /&gt;
Testing Java Integer Hash low on 1000 samples&lt;br /&gt;
Entropy = 2.791730 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 65 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 143448.00, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 31.1250 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
Serial correlation coefficient is -0.230200 (totally uncorrelated = 0.0).&lt;br /&gt;
&lt;br /&gt;
Java Integer Hash low 1000/100000: llps = 1, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lab5.c:332: warning: passing arg 4 of `llps&amp;#039; from incompatible pointer type&lt;br /&gt;
Testing Java Integer Hash typical  on 1000 samples&lt;br /&gt;
Entropy = 2.791730 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 65 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 143448.00, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 31.1250 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
Serial correlation coefficient is -0.230200 (totally uncorrelated = 0.0).&lt;br /&gt;
&lt;br /&gt;
Java Integer Hash typical 1000/100000: llps = 91, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Java Object Hash&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Testing Java Object Hash low  on 1000 samples&lt;br /&gt;
Entropy = 2.000000 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 75 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 63000.00, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 77.0000 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
Serial correlation coefficient is -0.521556 (totally uncorrelated = 0.0).&lt;br /&gt;
&lt;br /&gt;
Java Object Hash low 1000/100000: llps = 1000, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Testing Java Object Hash typical  on 1000 samples&lt;br /&gt;
Entropy = 4.232015 bits per byte.&lt;br /&gt;
&lt;br /&gt;
Optimum compression would reduce the size&lt;br /&gt;
of this 1000 byte file by 47 percent.&lt;br /&gt;
&lt;br /&gt;
Chi square distribution for 1000 samples is 33033.66, and randomly&lt;br /&gt;
would exceed this value 0.01 percent of the times.&lt;br /&gt;
&lt;br /&gt;
Arithmetic mean value of data bytes is 88.4960 (127.5 = random).&lt;br /&gt;
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).&lt;br /&gt;
Serial correlation coefficient is -0.731743 (totally uncorrelated = 0.0).&lt;br /&gt;
&lt;br /&gt;
Java Object Hash typical 1000/100000: llps = 1, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Java String hash&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java String Hash low 	7.91640	99.99%	129.471	0.48%	0.009052&lt;br /&gt;
Java String Hash low 1000/100000: llps = 1, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java String Hash typical 	7.37782	0.01%	117.390	8.92%	-0.013887&lt;br /&gt;
Java String Hash typical 1000/100000: llps = 2, expecting 2.00948&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
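Assuming the "Java String Hash" under test mirrors java.lang.String.hashCode (an assumption; the lab code is not shown here), it is the polynomial s[0]*31^(n-1) + ... + s[n-1], accumulated as h = 31*h + c. A C sketch:

```c
#include <stdint.h>

/* java.lang.String.hashCode recurrence, assumed to be what the lab's
   "Java String Hash" wraps.  Unsigned arithmetic reproduces Java's
   32-bit wraparound without signed-overflow undefined behaviour. */
int32_t java_string_hash(const char *s)
{
    uint32_t h = 0;
    for (; *s != '\0'; s++)
        h = 31u * h + (uint32_t)(unsigned char)*s;
    return (int32_t)h;  /* reinterpret as Java's signed 32-bit int */
}
```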
&lt;br /&gt;
Around this point I realised that I had some misconceptions about the variables, so I tried again using a table size of 1000.  The results are below.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java String Hash typical 	7.37782	0.01%	117.390	8.92%	-0.013887&lt;br /&gt;
Java String Hash typical 1000/1000: llps = 7, expecting 5.51384&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Increasing the sample size to 10000 gave these results:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java String Hash typical 	7.89301	0.01%	126.009	1.42%	-0.021360&lt;br /&gt;
Java String Hash typical 1000/1000: llps = 7, expecting 5.51384&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For my benefit, the full output (to see exactly what the summary numbers correspond to):&lt;br /&gt;
&lt;br /&gt;
 Testing Java String Hash typical  on 10000 samples&lt;br /&gt;
 Entropy = &amp;#039;&amp;#039;&amp;#039;7.893010&amp;#039;&amp;#039;&amp;#039; bits per byte.&lt;br /&gt;
 &lt;br /&gt;
 Optimum compression would reduce the size&lt;br /&gt;
 of this 10000 byte file by 1 percent.&lt;br /&gt;
 &lt;br /&gt;
 Chi square distribution for 10000 samples is 2047.92, and randomly&lt;br /&gt;
 would exceed this value &amp;#039;&amp;#039;&amp;#039;0.01&amp;#039;&amp;#039;&amp;#039; percent of the times.&lt;br /&gt;
 &lt;br /&gt;
 Arithmetic mean value of data bytes is &amp;#039;&amp;#039;&amp;#039;126.0088&amp;#039;&amp;#039;&amp;#039; (127.5 = random).&lt;br /&gt;
 Monte Carlo value for Pi is 3.186074430 (error &amp;#039;&amp;#039;&amp;#039;1.42&amp;#039;&amp;#039;&amp;#039; percent).&lt;br /&gt;
 Serial correlation coefficient is &amp;#039;&amp;#039;&amp;#039;-0.021360&amp;#039;&amp;#039;&amp;#039; (totally uncorrelated = 0.0).&lt;br /&gt;
 &lt;br /&gt;
 Java String Hash typical 1000/1000: llps = 7, expecting 5.51384&lt;br /&gt;
&lt;br /&gt;
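The llps figure ("longest list per slot") is the length of the fullest bucket once the keys have been hashed into the table, and the "expecting" value is what a truly random hash would give. A sketch of the statistic over precomputed hash values (my reconstruction, not the lab's actual llps signature):

```c
#include <stdlib.h>

/* Distribute n hash values over m buckets and return the longest chain.
   A degenerate hash that sends everything to one slot gives llps = n
   (the base256 result above); a perfect spread of n <= m keys gives 1. */
size_t llps(const size_t *hashes, size_t n, size_t m)
{
    size_t *count = calloc(m, sizeof *count);
    size_t worst = 0;
    for (size_t i = 0; i < n; i++) {
        size_t slot = hashes[i] % m;
        if (++count[slot] > worst)
            worst = count[slot];
    }
    free(count);
    return worst;
}
```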
&amp;#039;&amp;#039;&amp;#039;rand&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;low&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java rand low 	7.71844	0.01%	110.541	8.92%	-0.048389&lt;br /&gt;
Java rand low 1000/10000: llps = 2, expecting 2.82556&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;typical&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
Java rand typical 	7.74840	0.05%	112.891	7.38%	-0.081749&lt;br /&gt;
Java rand typical 1000/10000: llps = 3, expecting 2.82556&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;high rand&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
This gave a compile error:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lab5.c:332: error: `high_rand&amp;#039; undeclared (first use in this function)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
TEST                             ENTROPY    S.C.C.       CHI SQ EXCEEDED (%)&lt;br /&gt;
&lt;br /&gt;
Java String hash     low         7.91640     0.009052    99.99&lt;br /&gt;
buzhashn             low         7.823873   -0.007118    90.00 &lt;br /&gt;
buzhash              low         7.843786   -0.017268    95.00&lt;br /&gt;
rand                 low         7.71844    -0.048389     0.01&lt;br /&gt;
hash_CRC             low         3.965965   -0.380754     0.01 &lt;br /&gt;
Java Object hash     low         2.000000   -0.521556     0.01 &lt;br /&gt;
Java Integer hash    low         2.791730   -0.230200     0.01 &lt;br /&gt;
base256              low         0.000000    undefined    0.01 &lt;br /&gt;
&lt;br /&gt;
buzhash              typical     7.797775   -0.007005    50.00&lt;br /&gt;
rand                 typical     7.74840    -0.081749     0.05&lt;br /&gt;
buzhashn             typical     7.82387    -0.007118    90.00&lt;br /&gt;
Java String hash     typical     7.37782    -0.013887     0.01&lt;br /&gt;
base256              typical     3.919224    0.217294     0.01 &lt;br /&gt;
hash_CRC             typical     7.202459   -0.032076     0.01 &lt;br /&gt;
Java Object hash     typical     4.232015   -0.731743     0.01 &lt;br /&gt;
Java Integer hash    typical     2.791730   -0.230200     0.01 &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Looking at these results there is a clear difference between the top four and the bottom four functions, especially in their entropy.  base256 actually scores 0, which means every output byte is identical.  The difference also shows in the serial correlation coefficient: for each of the top four the first significant figure is in the thousandths place, while for the bottom four it is larger, in most cases by an order of magnitude. With the sole exception of buzhash typical, &amp;#039;&amp;#039;all&amp;#039;&amp;#039; of the hash functions performed badly on the chi-squared test, scoring either very high (too uniform to be true) or very low (not very random).  In general the low-entropy input gave better results than the typical input.&lt;/div&gt;</summary>
		<author><name>Mark</name></author>
	</entry>
</feed>