
TRINITY COLLEGE DUBLIN Management Science and Information Systems Studies Project Report

THE DISTRIBUTED SYSTEMS GROUP, Computer Science Department, TCD

Random Number Generators: An Evaluation and Comparison of Some Commonly Used Generators

April 2005

Prepared by: Charmaine Kenny

Supervisor: Krzysztof Mosurski

ABSTRACT

The aim of this project is to research statistical tests that detect non-randomness in the output of a true random number generator (TRNG). An industry-standard suite of tests was chosen to verify, from a statistical viewpoint, the randomness of the numbers generated by the service. It is envisaged that the test suite will be run on the numbers daily, with results displayed on the website. The performance of the service's output is compared with that of other commonly used pseudo-random number generators and true random number generators. The paper also addresses a number of unresolved issues that need further exploration.
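To illustrate the kind of statistical test such a suite applies, the sketch below implements a minimal monobit (frequency) test in Python: it checks whether the proportion of ones in a bit sequence is consistent with one half. This is an illustrative example only, not the test suite used in the project; the function name and the 0.01 significance threshold are the author's assumptions.

```python
import math
import random

def monobit_test(bits):
    """Minimal monobit frequency test (illustrative sketch).

    Checks whether the proportion of ones in `bits` is consistent
    with 1/2. Returns a p-value; very small values (e.g. < 0.01)
    suggest the sequence is biased, i.e. non-random.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)   # +1 for each one, -1 for each zero
    s_obs = abs(s) / math.sqrt(n)           # normalised deviation from balance
    return math.erfc(s_obs / math.sqrt(2))  # two-sided p-value

# A seeded software PRNG should comfortably pass this basic check,
# while a constant sequence fails it decisively.
rng = random.Random(42)
bits = [rng.getrandbits(1) for _ in range(100_000)]
p = monobit_test(bits)
print(f"p-value for PRNG output: {p:.4f}")
print(f"p-value for all-ones sequence: {monobit_test([1] * 10_000):.2e}")
```

A single test like this only detects one kind of defect (bias in the ones/zeros ratio); a full suite combines many such tests, each sensitive to a different departure from randomness.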

PREFACE

"An ideal random number generator is a fiction." (Schindler and Killmann [4])

The client of this project is the Distributed Systems Group (DSG), a research group in the Department of Computer Science at Trinity College Dublin. It operates a public true random number service which generates randomness from atmospheric radio noise.

The aim of the project is essentially to verify that the output of the random number service is completely random, and to recommend a suite of tests that can be run daily on its output. A further aim is to compare the service with some commonly used random number generators, both pseudo-random and true random.

The work carried out in this project can undoubtedly be built upon. Although a substantial amount was achieved given the time constraints, some pertinent issues raised throughout the report need further consideration. The opportunities to explore in this area seem endless: random numbers are increasingly used in all aspects of life and have become a staple in many fields, not just statistics. While a great deal of research has been done, it is far from complete.

Acknowledgements

I would like to express my sincerest thanks to all those who helped in the realisation of this project. Three people in particular deserve a special mention:

I would like to thank Dr. Mads Haahr, the client contact and the builder of the service, for his insight and willingness to help. His prompt responses to my queries were greatly appreciated.

Secondly, I would like to thank Niall Ó Tuathail, a fellow MSISS student, for his patience in helping me run the tests. It was an arduous task!

And finally, my thanks to Dr. Kris Mosurski, my supervisor, for his help and guidance during the course of my project. I am very appreciative of his insightful discussions and dedication to ironing out even the smallest of problems.


TABLE OF CONTENTS

NO. SECTION                                                  PAGE

1. INTRODUCTION AND SUMMARY                                     1
   1.1 The Client                                               1
   1.2 The Project Background                                   1
   1.3 Terms of Reference                                       1
   1.4 Report Summary                                           2

2. CONCLUSIONS AND RECOMMENDATIONS                              3

3. LITERATURE REVIEW                                            5
   3.1 Definition of a Random Sequence                          5
   3.2 Applications of Random Numbers                           6
       3.2.1 Cryptography                                       6
       3.2.2 Simulation                                         7
       3.2.3 Gaming                                             7
       3.2.4 Sampling                                           7
       3.2.5 Aesthetics                                         7
       3.2.6 Applications of Numbers                            7
       3.2.7 Concluding Remarks                                 8
   3.3 Types of Random Number Generators                        8
       3.3.1 True Random Number Generators                      8
       3.3.2 Pseudo-Random Number Generators                    9
       3.3.3 Comparison of TRNGs and PRNGs                      9
       3.3.4 What about ?                                      10
   3.4 Statistical Testing                                     11
       3.4.1 Choosing what tests to use                        12
   3.5 Review of Test Suites                                   13
       3.5.1 Knuth                                             13
       3.5.2 Diehard                                           13
       3.5.3 Crypt-X                                           13
       3.5.4 National Institute of Standards and Technology    14
       3.5.5 ENT                                               14
       3.5.6 Previous MSISS Project                            14

4. TESTING ISSUES AND METHODOLOGY                              15
   4.1 Issues                                                  15
       4.1.1 Same tests for RNGs and PRNGs?                    15
       4.1.2 Test Suites application dependent?                15
       4.1.3 Why implement a new suite of tests?               16
       4.1.4 Which suite to use?                               16
       4.1.5 Description of Tests                              17
       4.1.6 Revised Set of Tests                              17
   4.2 Methodology                                             19
       4.2.1 Which numbers should be tested?                   19
       4.2.2 Multiple testing?                                 19
       4.2.3 Input Sizes                                       19
       4.2.4 Pass/fail criteria                                20

5. RESULTS                                                     23

6. OPEN ISSUES                                                 24
   6.1 Evaluation of the Test Suite                            24
       6.1.1 Power of Tests                                    24
       6.1.2 Independence and Coverage of Tests                24
       6.1.3 Interpretation of Results                         25
   6.2 Application Based Testing                               25
