When analyzing neuron spike trains, setting the time bin is a persistent problem. Bin width strongly affects analysis results, such as the periodicity of the spike trains. Many approaches have been proposed to determine the bin setting; however, these bins remain fixed throughout the analysis. In this paper, we propose randomizing the bin width and location instead of using a conventional fixed bin setting. This technique is applied to analyzing the periodicity of an interspike-interval train, and the sensitivity of the method is presented.

1. Introduction

Bin width setting is always a problem, since it largely affects the analysis results. A neural spike train usually has time-varying characteristics; therefore, the length of spike-train data in a stationary state with constant characteristics is often limited. That is, the number of stable data points is limited, so there is a limit to how far the bin width can be decreased for more precise analysis. A more troublesome problem is that the results differ depending on the chosen bin width or even on the initial bin position. Bin size has been determined so as to optimize some performance measure of the time histogram [1, 2], time precision [3–5], information, rate estimation, and so forth. However, these bins are fixed once optimized/determined. To avoid this problem, binless analysis methods have also been used [8–10]. In this paper, we propose a method that uses various random bins. Random bins are expected to reduce such unfavorable effects to a negligible level. See the appendix for a preliminary, simple explanation of the random bin.
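The random-bin idea described above can be sketched in code. The following is a minimal illustration, not the authors' exact procedure: many histograms are computed, each with a randomly drawn bin width and bin origin, and their densities are averaged on a common grid. All function and parameter names here are our own.

```python
import numpy as np

def random_bin_histogram(data, n_trials=100, width_range=(0.5, 2.0), rng=None):
    """Average many histograms, each with a random bin width and random origin.

    Returns a common evaluation grid and the averaged density on that grid.
    """
    rng = np.random.default_rng(rng)
    lo, hi = data.min(), data.max()
    grid = np.linspace(lo, hi, 200)          # common grid for averaging
    density = np.zeros_like(grid)
    for _ in range(n_trials):
        w = rng.uniform(*width_range)        # random bin width
        offset = rng.uniform(0.0, w)         # random bin origin (initial position)
        edges = np.arange(lo - offset, hi + w, w)
        counts, edges = np.histogram(data, bins=edges, density=True)
        # Map each grid point to the density of the bin containing it.
        idx = np.searchsorted(edges, grid, side="right") - 1
        idx = np.clip(idx, 0, len(counts) - 1)
        density += counts[idx]
    return grid, density / n_trials

# Example: averaged density of exponentially distributed intervals.
data = np.random.default_rng(0).exponential(1.0, 1000)
grid, dens = random_bin_histogram(data, n_trials=50, rng=1)
```

Averaging over random widths and origins smooths out the dependence of the result on any single, arbitrarily chosen bin setting.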
2. Automutual Information of Spike-Interval Train

To analyze a spike train as a time sequence, there are mainly four methods: (i) spectrum analysis, which includes sidebands and may therefore be limited in precise time analysis; (ii) correlation, which reflects only linear relations; (iii) the time histogram, whose precision may be limited by nonstationarity of the train; and (iv) information measures [6, 12, 13], which are expected to avoid such limitations. The automutual information method treated in this paper belongs to (iv). Mutual information (MI) is a measure of the information shared by events A and B, as described by (1):

\[ \mathrm{MI}(A;B) = \sum_{a,b} p(a,b)\,\log\frac{p(a,b)}{p(a)\,p(b)}. \tag{1} \]

More specifically, this measures the discrepancy between the joint probability and the probability obtained when A and B are assumed to be independent events. If A and B are indeed independent, they share no common information, and the mutual information is therefore zero. If we take an inter-spike interval train as ,
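As a concrete illustration of the automutual information of an interval train, the plug-in estimate of (1) at lag k can be computed from a 2D histogram of the pairs (I_n, I_{n+k}). This is a sketch under our own choice of binning and naming, not the paper's exact estimator:

```python
import numpy as np

def automutual_information(isi, lag=1, bins=10):
    """Plug-in estimate (in bits) of MI between interval pairs (I_n, I_{n+lag})."""
    a, b = isi[:-lag], isi[lag:]
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()                      # joint probability p(a, b)
    p_a = p_ab.sum(axis=1, keepdims=True)           # marginal p(a), shape (bins, 1)
    p_b = p_ab.sum(axis=0, keepdims=True)           # marginal p(b), shape (1, bins)
    mask = p_ab > 0                                 # 0 * log 0 terms contribute nothing
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])))

# Independent intervals: MI near zero (up to small histogram bias).
rng = np.random.default_rng(0)
mi_iid = automutual_information(rng.exponential(1.0, 5000), lag=1)

# Strictly alternating intervals: successive intervals are highly dependent.
periodic = np.tile([1.0, 2.0], 2500) + rng.normal(0, 0.01, 5000)
mi_per = automutual_information(periodic, lag=1)
```

For independent intervals the estimate is close to zero, while for the alternating train it approaches 1 bit, since knowing one interval determines which of the two values follows.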
D. Endres, J. Schindelin, P. Földiák, and M. W. Oram, “Modelling spike trains and extracting response latency with Bayesian binning,” Journal of Physiology-Paris, vol. 104, no. 3-4, pp. 128–136, 2010.
P. B. Kruskal, J. J. Stanis, B. L. McNaughton, and P. J. Thomas, “A binless correlation measure reduces the variability of memory reactivation estimates,” Statistics in Medicine, vol. 26, no. 21, pp. 3997–4008, 2007.
M. Rivlin-Etzion, Y. Ritov, G. Heimer, H. Bergman, and I. Bar-Gad, “Local shuffling of spike trains boosts the accuracy of spike train spectral analysis,” Journal of Neurophysiology, vol. 95, no. 5, pp. 3245–3256, 2006.
A. Scaglione, G. Foffani, G. Scannella, S. Cerutti, and K. A. Moxon, “Mutual information expansion for studying the role of correlations in population codes: how important are autocorrelations?” Neural Computation, vol. 20, no. 11, pp. 2662–2695, 2008.
S. Ito, M. E. Hansen, R. Heiland, A. Lumsdaine, A. M. Litke, and J. M. Beggs, “Extending transfer entropy improves identification of effective connectivity in a spiking cortical network model,” PLoS ONE, vol. 6, no. 11, Article ID e27431, 2011.