
Appendix A: Incremental sampling of random variables

Most of the results in these appendices are either well-known or trivial, but they are included here for completeness, and so they can be referred to from the main body of the text.

A.1 Single variable (average)

Let $d_1, d_2, d_3, \ldots$ be samples of a stationary random variable $d$ with expected value $E(d)$. Then the following update algorithm provides an elegant way of sampling them. Repeat (for $t = 1, 2, 3, \ldots$):

\[ D := D + \alpha \, (d_t - D) , \qquad \alpha = \frac{1}{t} \]

Theorem A.1: $D \to E(d)$.
Proof: $D$'s updates go:

\[ D_1 = d_1, \qquad D_2 = D_1 + \tfrac{1}{2}\,(d_2 - D_1) = \tfrac{1}{2}\,(d_1 + d_2), \qquad \ldots \]

\[ D_t = \frac{t-1}{t}\,D_{t-1} + \frac{1}{t}\,d_t = \frac{1}{t}\,(d_1 + \cdots + d_t) \]

i.e. $D$ is simply the average of all $t$ samples so far.

As $t \to \infty$, $D \to E(d)$.
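As a quick numerical check, here is a minimal Python sketch (mine, not the thesis's; the distribution parameters are arbitrary) of the update rule, compared against the ordinary batch average:

    import random

    def incremental_average(samples):
        # D := D + (1/t)(d_t - D), where t counts the samples so far
        D = 0.0
        for t, d in enumerate(samples, start=1):
            D += (d - D) / t
        return D

    samples = [random.gauss(5.0, 2.0) for _ in range(100000)]   # E(d) = 5
    print(incremental_average(samples))    # ~5.0
    print(sum(samples) / len(samples))     # batch average, identical result

The two printed values agree up to floating-point error, since the update maintains exactly the average of the samples seen so far.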

More generally: suppose that at some time $t$, $D$ holds an arbitrary value $D_{t-1}$, and the updates simply continue as before: for $n = t, t+1, t+2, \ldots$ take a sample $d_n$ and set $D := D + \frac{1}{n}\,(d_n - D)$. Then still $D \to E(d)$.
Proof: $D$'s updates go:

\[ D_n = \frac{t-1}{n}\,D_{t-1} + \frac{1}{n}\,(d_t + \cdots + d_n) \]

As $n \to \infty$:

\[ \frac{t-1}{n}\,D_{t-1} \to 0 \qquad \text{and} \qquad \frac{1}{n}\,(d_t + \cdots + d_n) \to E(d) \]

that is, $D \to E(d)$. $\Box$

One way of looking at this is to consider $D_{t-1}$ as the average of all samples before time $t$, samples which are now irrelevant for some reason. We can consider them as samples from a different distribution $f$:

\[ D_n = \frac{t-1}{n} \cdot \frac{f_1 + \cdots + f_{t-1}}{t-1} + \frac{1}{n}\,(d_t + \cdots + d_n) \;\to\; 0 \cdot E(f) + 1 \cdot E(d) = E(d) \]

as $n \to \infty$.

I could have made this clearer:

\[ \frac{1}{n}\,(d_t + \cdots + d_n) = \frac{n-t+1}{n} \cdot \frac{1}{n-t+1}\,(d_t + \cdots + d_n) \;\to\; 1 \cdot E(d) \]
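The general case is just as easy to check numerically. The sketch below (again mine, with arbitrary numbers) starts $D$ at a wildly wrong value, standing in for the average of the earlier, now-irrelevant samples, and resumes the $1/n$ updates at time $t$:

    import random

    D = 1000.0    # arbitrary inherited value, e.g. the average of old samples of f
    t = 50        # updates resume at time t
    for n in range(t, 1000001):
        d = random.gauss(5.0, 2.0)   # fresh sample of d, with E(d) = 5
        D += (d - D) / n
    print(D)      # ~5.0: the inherited value's weight (t-1)/n has shrunk to nothing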

A.2 Single variable (time-weighted)

An alternative approach (e.g. see [Grefenstette, 1992]) is to build up not the average of all samples, but a time-weighted sum of them, giving more weight to the most recent ones. This is accomplished by setting $\alpha$ to be constant, in which case $D$'s updates go:

\[ D_n = \alpha\,d_n + (1-\alpha)\,D_{n-1} = \alpha\,d_n + \alpha(1-\alpha)\,d_{n-1} + \alpha(1-\alpha)^2\,d_{n-2} + \cdots + \alpha(1-\alpha)^{n-1}\,d_1 + (1-\alpha)^n\,D_0 \]
Since $(1-\alpha)^n \to 0$ as $n \to \infty$, this weights more recent samples higher. Grefenstette typically uses a fixed value of $\alpha$.
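A comparable sketch for the constant-$\alpha$ update (the value $\alpha = 0.5$ below is my arbitrary choice for illustration), showing how recent samples dominate:

    def time_weighted(samples, alpha=0.5):
        # D := D + alpha (d - D): the sample k steps from the end
        # carries weight alpha (1-alpha)^k
        D = 0.0
        for d in samples:
            D += alpha * (d - D)
        return D

    # After 100 samples of 0 followed by 10 samples of 10, the estimate
    # is ~10, unlike the 1/n average (which would be ~0.9):
    print(time_weighted([0.0] * 100 + [10.0] * 10))

This is the behaviour wanted when tracking a non-stationary variable: old samples are forgotten at a geometric rate rather than retained forever.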

A.3 Multiple variables

Consider sampling alternately from two stationary random variables $d$ and $f$: at each step we take one sample of each, applying the same update as before, $D := D + \frac{1}{n}\,(x - D)$, where $x$ is the new sample and $n$ counts all samples so far. Then:

\[ D \;\to\; \frac{E(d) + E(f)}{2} \]
Proof: From Theorem A.1, after $2t$ samples:

\[ D = \frac{1}{2t}\,\big( (d_1 + \cdots + d_t) + (f_1 + \cdots + f_t) \big) = \frac{1}{2} \left( \frac{d_1 + \cdots + d_t}{t} + \frac{f_1 + \cdots + f_t}{t} \right) \;\to\; \frac{1}{2}\,\big( E(d) + E(f) \big) \]

as $t \to \infty$. $\Box$

More generally, consider the case where we have multiple stationary distributions $d_1, \ldots, d_m$, where each distribution $d_i$ has expected value $E(d_i)$. At each time $t$, we take a sample $x_t$ from one of the distributions. Each distribution $d_i$ has probability $p(i)$ of being picked. Repeat:

\[ D := D + \frac{1}{t}\,(x_t - D) \]

Then:

\[ D \;\to\; \sum_{i=1}^{m} p(i)\,E(d_i) \]
Proof: For a large number of samples $t$ we expect to take $p(1)\,t$ samples from $d_1$, $p(2)\,t$ samples from $d_2$, and so on. From Theorem A.1, we expect:

\[ D = \frac{1}{t}\,\big( p(1)\,t\,\bar{x}_1 + \cdots + p(m)\,t\,\bar{x}_m \big) = p(1)\,\bar{x}_1 + \cdots + p(m)\,\bar{x}_m \;\to\; \sum_{i=1}^{m} p(i)\,E(d_i) \]

as $t \to \infty$, where $\bar{x}_i$ is the average of the samples taken from $d_i$. $\Box$
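A sketch of this mixture case (the three distributions and their probabilities below are invented for illustration):

    import random

    means = [0.0, 5.0, 20.0]    # E(d_1), E(d_2), E(d_3)
    p     = [0.2, 0.3, 0.5]     # p(i): probability of picking distribution i

    D = 0.0
    for t in range(1, 1000001):
        i = random.choices(range(3), weights=p)[0]   # pick d_i with probability p(i)
        x = random.gauss(means[i], 1.0)              # sample x_t from d_i
        D += (x - D) / t
    print(D)   # ~11.5 = 0.2*0 + 0.3*5 + 0.5*20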
