Abstract:
We have analyzed the generalization performance of a student that slowly switches among ensemble teachers. By calculating the generalization error analytically using statistical mechanics in the framework of on-line learning, we show that the dynamical behavior of the generalization error is periodic, synchronized with the switching period, and that the behavior differs with the number of ensemble teachers. Furthermore, we show that the smaller the switching period is, the larger this difference becomes.

Abstract:
Conventional ensemble learning combines students in the space domain. In this paper, however, we combine students in the time domain and call this time-domain ensemble learning. We analyze, compare, and discuss the generalization performance of time-domain ensemble learning for both a linear model and a nonlinear model. Analyzing in the framework of online learning using a statistical mechanical method, we show that the two models behave qualitatively differently. In the linear model, the dynamical behavior of the generalization error is monotonic, and we show analytically that time-domain ensemble learning is twice as effective as conventional ensemble learning. In the nonlinear model, by contrast, the generalization error exhibits nonmonotonic dynamical behavior when the learning rate is small. We show numerically that the generalization performance can be improved remarkably by exploiting this phenomenon together with the divergence of students in the time domain.

Abstract:
We analyze the generalization performance of a student in a model composed of linear perceptrons: a true teacher, ensemble teachers, and the student. By calculating the generalization error of the student analytically using statistical mechanics in the framework of on-line learning, we prove that when the learning rate satisfies $\eta < 1$, the larger the number $K$ and the variety of the ensemble teachers are, the smaller the generalization error is. On the other hand, when $\eta > 1$, these properties are completely reversed. If the variety of the ensemble teachers is rich enough, the direction cosine between the true teacher and the student approaches unity in the limits $\eta \to 0$ and $K \to \infty$.
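The setting above can be sketched numerically. The following is a minimal simulation, not the paper's exact formulation: a linear-perceptron student is trained on-line by cycling through $K$ noisy copies of a true teacher, and the noise amplitude (the "variety"), learning rate, and dimension are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, eta, steps = 500, 5, 0.3, 20000   # dimension, teachers, learning rate (all assumed)

# True teacher B and K ensemble teachers: unit-norm noisy copies of B.
# The noise amplitude 0.5 is an assumed stand-in for the "variety" of the ensemble.
B = rng.standard_normal(N)
B /= np.linalg.norm(B)
teachers = [B + 0.5 * rng.standard_normal(N) / np.sqrt(N) for _ in range(K)]
teachers = [t / np.linalg.norm(t) for t in teachers]

# On-line learning: the student sees one fresh example per step,
# labeled by the ensemble teachers in turn.
J = np.zeros(N)
for t in range(steps):
    x = rng.standard_normal(N)
    Bk = teachers[t % K]
    J += (eta / N) * (Bk @ x - J @ x) * x   # error-driven (LMS-type) update

# Direction cosine between the student and the true teacher
R = (J @ B) / np.linalg.norm(J)
```

With a small $\eta$ the student effectively averages the ensemble teachers, so its direction cosine with the true teacher ends up larger than that of any single teacher, in line with the abstract's claim for $\eta < 1$.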

Abstract:
In the framework of on-line learning, a learning machine may move around a teacher due to differences in structure or output functions between the teacher and the learning machine, or due to noise. We have analyzed the generalization performance of a new student supervised by such a moving machine. A model composed of a true teacher, a moving teacher, and a student, all linear perceptrons with noise, has been treated analytically using statistical mechanics. It has been proven that the generalization error of the student can be smaller than that of the moving teacher, even though the student uses only examples from the moving teacher.
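A rough numerical illustration of this setting (the noise model and all parameter values are assumptions, not taken from the paper): a moving teacher $A$ tracks the true teacher $B$ through noisy examples, while the student $J$ learns only from $A$'s noisy outputs.

```python
import numpy as np

rng = np.random.default_rng(1)
N, steps, sigma = 400, 30000, 0.3   # dimension, steps, output-noise std (assumed)
eta_A, eta_J = 1.2, 0.3             # moving teacher's / student's rates (assumed)

B = rng.standard_normal(N)
B /= np.linalg.norm(B)              # fixed true teacher
A = np.zeros(N)                     # moving teacher, learns from B
J = np.zeros(N)                     # student, sees only A's outputs

for _ in range(steps):
    x = rng.standard_normal(N)
    yB = B @ x + sigma * rng.standard_normal()   # noisy output of the true teacher
    A += (eta_A / N) * (yB - A @ x) * x
    yA = A @ x + sigma * rng.standard_normal()   # noisy output of the moving teacher
    J += (eta_J / N) * (yA - J @ x) * x

# Squared distance from the true teacher as a proxy for generalization error
eA = 0.5 * np.linalg.norm(A - B) ** 2
eJ = 0.5 * np.linalg.norm(J - B) ** 2
```

With the large rate `eta_A` the moving teacher fluctuates strongly around $B$, while the slow student averages those fluctuations out, so `eJ` comes out smaller than `eA` even though the student never sees the true teacher.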

Abstract:
It is known that the storage capacity per synapse increases with synaptic pruning in a correlation-type associative memory model. However, the storage capacity of the entire network then decreases. To overcome this difficulty, we propose decreasing the connecting rate while keeping the total number of synapses constant by introducing delayed synapses. In this paper, a discrete synchronous-type model with both delayed synapses and their pruning is discussed as a concrete example of the proposal. First, we explain the Yanai-Kim theory, which employs statistical neurodynamics and provides macrodynamical equations for the dynamics of a network with serial delay elements. Next, exploiting the translational symmetry of these equations, we re-derive the macroscopic steady-state equations of the model by using the discrete Fourier transformation. The storage capacities are analyzed quantitatively. Furthermore, two types of synaptic pruning are treated analytically: random pruning and systematic pruning. As a result, it becomes clear that under both prunings, the storage capacity increases as the length of delay increases and the connecting rate of the synapses decreases when the total number of synapses is constant. Moreover, an interesting fact becomes clear: under random pruning, the storage capacity asymptotically approaches $2/\pi$, whereas under systematic pruning it diverges in proportion to the logarithm of the length of delay, with proportionality constant $4/\pi$. These results theoretically support the significance of pruning following an overgrowth of synapses in the brain and strongly suggest that the brain prefers to store dynamic attractors, such as sequences and limit cycles, rather than equilibrium states.
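For readers unfamiliar with the base model, here is a minimal sketch of a correlation-type (Hebb-rule) associative memory with random pruning added. This is a zero-delay toy version: the paper's delayed synapses and statistical-neurodynamics analysis are not modeled, and the network size, load, and pruning rate are assumptions chosen well below capacity.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 200, 10                  # neurons and stored patterns; load P/N is far below capacity

# Correlation (Hebb) learning of P random binary patterns
xi = rng.choice([-1, 1], size=(P, N))
W = (xi.T @ xi) / N
np.fill_diagonal(W, 0)

def recall(Wmat, cue, sweeps=10):
    """Synchronous updates starting from a corrupted cue."""
    s = cue.copy()
    for _ in range(sweeps):
        s = np.sign(Wmat @ s)
        s[s == 0] = 1
    return s

# Corrupt 10% of pattern 0 and recall it
cue = xi[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
overlap = recall(W, cue) @ xi[0] / N

# Random pruning: keep a fraction c of the synapses (symmetric mask),
# rescaled by 1/c so the overall synaptic strength is preserved
c = 0.7
mask = rng.random((N, N)) < c
mask = np.triu(mask, 1)
mask = mask | mask.T
overlap_pruned = recall(W * mask / c, cue) @ xi[0] / N
```

At this low load, recall succeeds (overlap near one) both before and after random pruning; the paper's analysis concerns how the capacity itself behaves as pruning and delay are pushed much further.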

Abstract:
Measles virus (MV) is a negative-strand RNA virus of the family Paramyxoviridae, and the attenuated Edmonston-B strain can be engineered by a reverse genetics system. Here we constructed a recombinant Edmonston strain of measles virus (MV-Ed) expressing the hepatitis C virus (HCV) envelope proteins (rMV-E1E2). The rMV-E1E2 successfully expressed the HCV E1 and E2 proteins. To evaluate its immunogenicity, NOD/Scid/Jak3null mice engrafted with human peripheral blood mononuclear cells (huPBMC-NOJ) were infected with rMV-E1E2. Although human lymphocytes could be isolated from the spleens of mock-infected mice throughout the two-week experiment, the proportion of human lymphocytes in mice infected with MV or rMV-E1E2 decreased in a viral dose-dependent manner. Infection with over 10^{3} PFU of virus decreased the human PBL to less than 5%. A significant decrease of the B cell population in human PBL from rMV-E1E2-infected NOD-SCID mice, and a decrease of the T cell population in PBL from MV-infected mice, were observed. Human antibody production in these mice was also examined. Thus, the results of this study may contribute to future improvement of recombinant vaccines using the measles virus vector.

Abstract:
Natural killer (NK) cells play an important role in the innate immune response against viral infection. The kinetics, regulation, and functional consequences of NK cells in the pathogenesis of disease remain uncertain. We analyzed NK cell distribution and function in successfully combination antiretroviral therapy (cART)-treated HIV-1-infected individuals at Khon Kaen Regional Hospital, Thailand. The results demonstrated an increased percentage and total number of NK cells in cART-treated HIV-1-infected patients, with preferentially high levels of the CD56^{dim}CD16^{+} and CD56^{-}CD16^{+} subsets compared with a control group, even at undetectable viral load (<40 copies per milliliter). Concomitantly, decreased cytotoxic activity, measured by CD107a surface expression, together with maintained IFN-γ production implied that the impairment of cytolytic activity was not restored after cART treatment. Thus, NK cell frequency and function altered by HIV-1 infection are not completely recovered with cART, which may contribute to impaired cellular immune responses and the persistence of HIV-1.

Abstract:
Ensemble learning of $K$ nonlinear perceptrons, which determine their outputs by sign functions, is discussed within the framework of online learning and statistical mechanics. One purpose of statistical learning theory is to obtain the generalization error theoretically. This paper shows that the ensemble generalization error can be calculated using two order parameters: the similarity between a teacher and a student, and the similarity among students. The differential equations that describe the dynamical behaviors of these order parameters are derived for general learning rules. The concrete forms of these differential equations are derived analytically for three well-known rules: Hebbian learning, perceptron learning, and AdaTron learning. The ensemble generalization errors of these three rules are calculated by solving their differential equations. As a result, the three rules show different characteristics in their affinity for ensemble learning, that is, in ``maintaining variety among students.'' The results show that AdaTron learning is superior to the other two rules with respect to this affinity.
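The three learning rules can be written compactly. The sketch below is a schematic single-student on-line implementation under assumed scalings (unit-norm teacher, Gaussian inputs, illustrative learning rate and training length), not the paper's exact normalization or its ensemble analysis.

```python
import numpy as np

rng = np.random.default_rng(3)
N, eta, steps = 300, 1.0, 30000   # dimension, learning rate, examples (assumed)

B = rng.standard_normal(N)
B /= np.linalg.norm(B)            # teacher with output sgn(B.x)

def train(rule):
    J = rng.standard_normal(N) / np.sqrt(N)   # random initial student
    for _ in range(steps):
        x = rng.standard_normal(N)
        u, v = J @ x, B @ x                   # student and teacher fields
        if rule == "hebbian":                 # always move toward the teacher's label
            f = np.sign(v)
        elif rule == "perceptron":            # update only on misclassification
            f = np.sign(v) if u * v < 0 else 0.0
        else:                                 # adatron: cancel the field when wrong
            f = -u if u * v < 0 else 0.0
        J += (eta / N) * f * x
    return (J @ B) / np.linalg.norm(J)        # direction cosine R with the teacher

R = {rule: train(rule) for rule in ("hebbian", "perceptron", "adatron")}
```

The direction cosine $R$ is the order parameter "similarity between teacher and student" mentioned above; the generalization error of a sign-output perceptron is $\epsilon = \arccos(R)/\pi$, so larger $R$ means better generalization.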

Abstract:
Conventional ensemble learning combines students in the space domain. In this paper, on the other hand, we combine students in the time domain and call this time-domain ensemble learning. We analyze the generalization performance of time-domain ensemble learning in the framework of online learning using a statistical mechanical method, treating a model in which both the teacher and the student are linear perceptrons with noise. We show that time-domain ensemble learning is twice as effective as conventional space-domain ensemble learning.
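A crude numerical illustration of the idea (the parameter values, sampling interval, and noise level are assumptions): a single linear-perceptron student is trained on noisy teacher outputs, and the time-domain ensemble is the average of that student's weight vector sampled at several times.

```python
import numpy as np

rng = np.random.default_rng(4)
N, eta, steps, sigma = 400, 0.6, 20000, 0.3   # all values assumed for illustration

B = rng.standard_normal(N)
B /= np.linalg.norm(B)                        # teacher weight vector

J = np.zeros(N)
snapshots = []
for t in range(steps):
    x = rng.standard_normal(N)
    y = B @ x + sigma * rng.standard_normal() # noisy teacher output
    J += (eta / N) * (y - J @ x) * x          # on-line student update
    if t >= steps // 2 and t % 500 == 0:      # sample the student in the time domain
        snapshots.append(J.copy())

J_ens = np.mean(snapshots, axis=0)            # time-domain ensemble of one student

def eps(w):
    # Generalization error of a linear perceptron under output noise
    return 0.5 * (np.linalg.norm(w - B) ** 2 + sigma ** 2)

e_single, e_ens = eps(J), eps(J_ens)
```

Averaging the student over time smooths out the noise-driven fluctuation of its weight vector around the teacher, so `e_ens` is smaller than the error `e_single` of the instantaneous student.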

Abstract:
In the framework of on-line learning, a learning machine may move around a teacher due to differences in structure or output functions between the teacher and the learning machine. In this paper we analyze the generalization performance of a new student supervised by such a moving machine. A model composed of a fixed true teacher, a moving teacher, and a student is treated theoretically using statistical mechanics, where the true teacher is a nonmonotonic perceptron and the others are simple perceptrons. Calculating the generalization errors numerically, we show that the generalization error of the student can temporarily become smaller than that of the moving teacher, even though the student uses only examples from the moving teacher. However, the generalization error of the student eventually converges to that of the moving teacher. This behavior is qualitatively different from that of the linear model.