==Task 1==

For this task I chose the following values:
<pre>
int sample_size = 200;
int n_keys = 10000;
int table_size = 100;
</pre>
A sample size of 200 seems fair for a scenario such as a small company assigning IDs to its employees.

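To show how the three parameters fit together, here is a minimal, self-contained C sketch. It is not the lab's actual hashtest harness: toy_hash is a hypothetical stand-in for whichever hash function is under test, the first loop emits the sample_size bytes that the ent-style statistics below are run over, and the second loop spreads n_keys keys across table_size buckets and reports the fullest bucket, which is how I read the llps figure.
<pre>
#include <stdio.h>

/* Hypothetical stand-in for the hash function under test
   (buzhash, hash_CRC, Java_String_hash, ...). */
static unsigned toy_hash( int key ) {
    return (unsigned)key * 2654435761u;   /* simple multiplicative mixing */
}

int main( void ) {
    int sample_size = 200;
    int n_keys = 10000;
    int table_size = 100;
    int counts[ 100 ] = { 0 };            /* one counter per table slot */
    int i, llps = 0;

    /* 1. Emit sample_size hashed bytes: this is the stream that the
          entropy / chi-square / mean / pi / correlation figures describe. */
    for ( i = 0; i < sample_size; i++ )
        putchar( (int)( toy_hash( i ) & 0xFF ) );

    /* 2. Spread n_keys keys over table_size buckets and find the fullest
          one -- my reading of the "llps = ..., expecting 125.959" line. */
    for ( i = 0; i < n_keys; i++ )
        counts[ toy_hash( i ) % table_size ]++;
    for ( i = 0; i < table_size; i++ )
        if ( counts[ i ] > llps )
            llps = counts[ i ];

    fprintf( stderr, "llps = %d\n", llps );
    return 0;
}
</pre>
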
===BuzHash Low===
<pre>
Output for Buzhash Low

Testing Buzhash low on 200 samples
Entropy = 6.961838 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 12 percent.

Chi square distribution for 200 samples is 271.04, and randomly
would exceed this value 25.00 percent of the times.

Arithmetic mean value of data bytes is 129.8200 (127.5 = random).
Monte Carlo value for Pi is 3.030303030 (error 3.54 percent).
Serial correlation coefficient is -0.140593 (totally uncorrelated = 0.0).

Buzhash low 10000/100: llps = 134, expecting 125.959
</pre>
The randomness of BuzHash on the low inputs isn't very good given these results: a truly random stream would exceed this chi-square value only 25% of the time, and the serial correlation coefficient of -0.14 is noticeably far from zero.

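For reference, the chi-square figure is computed against a flat distribution over the 256 possible byte values, and the percentage is how often a genuinely random stream of the same length would exceed that figure. A sketch of the statistic itself (turning it into the percentage needs the chi-square distribution with 255 degrees of freedom, which I leave out):
<pre>
#include <stddef.h>

/* Chi-square of a byte stream against a uniform distribution over 0..255:
   the expected count for each byte value is n / 256. */
double chi_square( const unsigned char *buf, size_t n ) {
    long counts[ 256 ] = { 0 };
    double expected = n / 256.0;
    double chisq = 0.0;
    size_t i;

    for ( i = 0; i < n; i++ )
        counts[ buf[ i ] ]++;
    for ( i = 0; i < 256; i++ ) {
        double d = counts[ i ] - expected;
        chisq += d * d / expected;
    }
    return chisq;   /* the report above gives 271.04 for BuzHash low */
}
</pre>
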
===BuzHash Typical===
<pre>
Output for Buzhash Typical

Testing Buzhash typical on 200 samples
Entropy = 6.987435 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 12 percent.

Chi square distribution for 200 samples is 240.32, and randomly
would exceed this value 50.00 percent of the times.

Arithmetic mean value of data bytes is 128.0400 (127.5 = random).
Monte Carlo value for Pi is 3.272727273 (error 4.17 percent).
Serial correlation coefficient is -0.005251 (totally uncorrelated = 0.0).

Buzhash typical 10000/100: llps = 127, expecting 125.959
</pre>
Comparing these results with Buzhash low, I think they show more randomness: the chi-square value would now be exceeded a very healthy 50% of the time, the arithmetic mean is closer to 127.5 (the random value), and the serial correlation coefficient is much closer to 0.

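The serial correlation coefficient quoted in these reports measures how strongly each byte predicts the one after it: 0 means successive bytes are unrelated, while values approaching ±1 mean they track each other. A sketch of the calculation, pairing each byte with its successor and wrapping the last byte back around to the first:
<pre>
#include <stddef.h>

/* Correlation between each byte and the byte that follows it
   (the last byte wraps around to the first).  0.0 = uncorrelated. */
double serial_correlation( const unsigned char *buf, size_t n ) {
    double sum_xy = 0.0, sum_x = 0.0, sum_xx = 0.0;
    size_t i;

    for ( i = 0; i < n; i++ ) {
        double x = buf[ i ];
        double y = buf[ ( i + 1 ) % n ];
        sum_xy += x * y;
        sum_xx += x * x;
        sum_x  += x;
    }
    /* The denominator is 0 when every byte is identical, which is why
       high_rand further down reports "undefined (all values equal!)". */
    return ( n * sum_xy - sum_x * sum_x ) / ( n * sum_xx - sum_x * sum_x );
}
</pre>
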
===Buzhashn low===
<pre>
Testing Buzhashn low on 200 samples
Entropy = 7.094984 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 11 percent.

Chi square distribution for 200 samples is 209.60, and randomly
would exceed this value 97.50 percent of the times.

Arithmetic mean value of data bytes is 120.9150 (127.5 = random).
Monte Carlo value for Pi is 3.151515152 (error 0.32 percent).
Serial correlation coefficient is 0.099943 (totally uncorrelated = 0.0).

Buzhashn low 10000/100: llps = 133, expecting 125.959
</pre>
Buzhashn low, compared with both BuzHash runs, does a lot better on the Monte Carlo value for pi, although not so well on the arithmetic mean and the llps.

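The Monte Carlo pi figure treats successive bytes as (x, y) points in a unit square and estimates pi from the fraction that land inside the inscribed circle. ent builds 24-bit coordinates; the sketch below uses 16-bit ones to keep it short, but the idea is the same. It also suggests why several of the weaker hashes below report exactly 4.000000000: if the output bytes stay small, every point lands inside the circle and the estimate is exactly 4 * 1 = 4.
<pre>
#include <stddef.h>

/* Estimate pi from a byte stream: every 4 bytes form one (x, y) point in
   the unit square, and pi ~= 4 * (points inside the inscribed circle) / points. */
double monte_carlo_pi( const unsigned char *buf, size_t n ) {
    size_t i, points = 0, inside = 0;

    for ( i = 0; i + 4 <= n; i += 4 ) {
        double x = ( buf[ i ]     * 256 + buf[ i + 1 ] ) / 65535.0;
        double y = ( buf[ i + 2 ] * 256 + buf[ i + 3 ] ) / 65535.0;
        points++;
        if ( x * x + y * y <= 1.0 )
            inside++;
    }
    return points ? 4.0 * inside / points : 0.0;
}
</pre>
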
===Buzhashn typical===
<pre>
Testing Buzhashn typical on 200 samples
Entropy = 7.094984 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 11 percent.

Chi square distribution for 200 samples is 209.60, and randomly
would exceed this value 97.50 percent of the times.

Arithmetic mean value of data bytes is 120.9150 (127.5 = random).
Monte Carlo value for Pi is 3.151515152 (error 0.32 percent).
Serial correlation coefficient is 0.099943 (totally uncorrelated = 0.0).

Buzhashn typical 10000/100: llps = 127, expecting 125.959
</pre>
Buzhashn typical looks the same as Buzhashn low, except that the llps is a lot closer to the expected value.

===hash_CRC low===
<pre>
Testing hash_CRC low on 200 samples
Entropy = 3.470509 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 56 percent.

Chi square distribution for 200 samples is 7305.92, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 94.3400 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.390902 (totally uncorrelated = 0.0).

hash_CRC low 10000/100: llps = 405, expecting 125.959
</pre>
Compared with Buzhash and Buzhashn, these values are a lot worse. The two biggest problems are the llps of 405 against an expected 125.96, and a chi-square value that a random stream would exceed only 0.01% of the time.

===hash_CRC typical===
<pre>
Testing hash_CRC typical on 200 samples
Entropy = 6.059310 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 24 percent.

Chi square distribution for 200 samples is 934.08, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 94.7650 (127.5 = random).
Monte Carlo value for Pi is 3.272727273 (error 4.17 percent).
Serial correlation coefficient is 0.129518 (totally uncorrelated = 0.0).

hash_CRC typical 10000/100: llps = 146, expecting 125.959
</pre>
Results are similar to hash_CRC low, although the Monte Carlo pi and the llps have improved dramatically.

===base256 low===
<pre>
Testing base256 low on 200 samples
Entropy = 3.987359 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 50 percent.

Chi square distribution for 200 samples is 4146.88, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 101.0700 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is 0.290495 (totally uncorrelated = 0.0).

base256 low 10000/100: llps = 10000, expecting 125.959
</pre>
This is the largest gap so far between the actual and expected llps: 10000 versus 125.96, so apparently every key hashed to the same slot. The chi-square value is also large and undesirable.

===base256 typical===
<pre>
Testing base256 typical on 200 samples
Entropy = 3.987359 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 50 percent.

Chi square distribution for 200 samples is 4146.88, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 101.0700 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is 0.290495 (totally uncorrelated = 0.0).

base256 typical 10000/100: llps = 671, expecting 125.959
</pre>
Little improvement over base256 low: the statistics are identical apart from the llps, which drops to 671 but is still far above the expected 125.96.

===Java_Integer_hash low===
<pre>
Testing Java_Integer_hash low on 200 samples
Entropy = 2.178861 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 72 percent.

Chi square distribution for 200 samples is 29048.00, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 6.1250 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.227907 (totally uncorrelated = 0.0).

Java_Integer_hash low 10000/100: llps = 109, expecting 125.959
</pre>

===Java_Integer_hash typical===
<pre>
Testing Java_Integer_hash typical on 200 samples
Entropy = 2.178861 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 72 percent.

Chi square distribution for 200 samples is 29048.00, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 6.1250 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.227907 (totally uncorrelated = 0.0).

Java_Integer_hash typical 10000/100: llps = 932, expecting 125.959
</pre>

===Java_Object_hash low===
<pre>
Testing Java_Object_hash low on 200 samples
Entropy = 2.000000 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 75 percent.

Chi square distribution for 200 samples is 12600.00, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 95.5000 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.037404 (totally uncorrelated = 0.0).

Java_Object_hash low 10000/100: llps = 10000, expecting 125.959
</pre>

===Java_Object_hash typical===
<pre>
Testing Java_Object_hash typical on 200 samples
Entropy = 4.511741 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 43 percent.

Chi square distribution for 200 samples is 3604.16, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 78.2500 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.645940 (totally uncorrelated = 0.0).

Java_Object_hash typical 10000/100: llps = 406, expecting 125.959
</pre>

===Java_String_hash low===
<pre>
Testing Java_String_hash low on 200 samples
Entropy = 7.093661 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 11 percent.

Chi square distribution for 200 samples is 214.72, and randomly
would exceed this value 95.00 percent of the times.

Arithmetic mean value of data bytes is 130.9200 (127.5 = random).
Monte Carlo value for Pi is 3.030303030 (error 3.54 percent).
Serial correlation coefficient is 0.052529 (totally uncorrelated = 0.0).

Java_String_hash low 10000/100: llps = 109, expecting 125.959
</pre>

===Java_String_hash typical===
<pre>
Testing Java_String_hash typical on 200 samples
Entropy = 6.193853 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 22 percent.

Chi square distribution for 200 samples is 839.36, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 108.6100 (127.5 = random).
Monte Carlo value for Pi is 3.515151515 (error 11.89 percent).
Serial correlation coefficient is 0.103661 (totally uncorrelated = 0.0).

Java_String_hash typical 10000/100: llps = 123, expecting 125.959
</pre>

===rand low===
<pre>
Testing rand low on 200 samples
Entropy = 4.057145 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 49 percent.

Chi square distribution for 200 samples is 13296.32, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 44.6150 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.045063 (totally uncorrelated = 0.0).

rand low 10000/100: llps = 132, expecting 125.959
</pre>
Comparing this Unix random number generator with the hash functions above, it's not especially good but not especially bad either: the byte statistics are poor, yet the llps of 132 is close to the expected value.

===rand typical===
<pre>
Testing rand typical on 200 samples
Entropy = 4.057145 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 49 percent.

Chi square distribution for 200 samples is 13296.32, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 44.6150 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is -0.045063 (totally uncorrelated = 0.0).

rand typical 10000/100: llps = 132, expecting 125.959
</pre>
Similar results to rand low.

===high_rand low===
<pre>
Testing high_rand low on 200 samples
Entropy = 0.000000 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 100 percent.

Chi square distribution for 200 samples is 51000.00, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 0.0000 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is undefined (all values equal!).

high_rand low 10000/100: llps = 133, expecting 125.959
</pre>
Not random at all compared with everything else: every sampled byte came out the same (entropy of 0.000000 bits per byte). I would expect the typical case to be similar.

===high_rand typical===
<pre>
Testing high_rand typical on 200 samples
Entropy = 0.000000 bits per byte.

Optimum compression would reduce the size
of this 200 byte file by 100 percent.

Chi square distribution for 200 samples is 51000.00, and randomly
would exceed this value 0.01 percent of the times.

Arithmetic mean value of data bytes is 0.0000 (127.5 = random).
Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
Serial correlation coefficient is undefined (all values equal!).

high_rand typical 10000/100: llps = 133, expecting 125.959
</pre>
Similar to high_rand low.

===Conclusion and Functions Ranked in Order of Randomness===
The Unix rand results are quite poor compared with the better hash functions and do not produce true randomness.

I have ranked the functions as follows (based on my own judgment; the entropy figure that most of this ranking leans on is sketched below the list):
 1) BuzHash (very dense information storage)
 2) BuzHashn (dense information storage)
 3) Java_String_Hash (good entropy on the low inputs, less so on the typical ones)
 4) hash_CRC (good entropy on the typical inputs, less so on the low ones)
 5) Java_Object_hash
 6) Java_Integer_hash
 7) base256
 8) rand
 9) high_rand

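For reference, the entropy figure behind the "information storage" comments is the Shannon entropy of the output bytes, in bits per byte: 8.0 would mean every byte value is equally likely, while 0.0 (high_rand) means every byte is identical. The "optimum compression" line is the same number rescaled as (8 - entropy) / 8. A sketch of the calculation:
<pre>
#include <math.h>
#include <stddef.h>

/* Shannon entropy of a byte stream in bits per byte.
   8.0 = all 256 byte values equally likely; 0.0 = every byte identical. */
double entropy_bits_per_byte( const unsigned char *buf, size_t n ) {
    long counts[ 256 ] = { 0 };
    double h = 0.0, p;
    size_t i;

    for ( i = 0; i < n; i++ )
        counts[ buf[ i ] ]++;
    for ( i = 0; i < 256; i++ ) {
        if ( counts[ i ] == 0 )
            continue;
        p = (double)counts[ i ] / (double)n;
        h -= p * log2( p );
    }
    return h;
}
</pre>
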
===Reference===
Randomness statistics from the test program at http://www.fourmilab.ch/random/