Abstract:
We examined the protective effects of ANX1 on 12-O-tetradecanoylphorbol-13-acetate (TPA)-induced skin inflammation in animal models using a Tat-ANX1 protein. Topical application of the Tat-ANX1 protein markedly inhibited TPA-induced ear edema and the expression levels of cyclooxygenase-2 (COX-2) as well as pro-inflammatory cytokines such as interleukin-1 beta (IL-1β), IL-6, and tumor necrosis factor-alpha (TNF-α). In addition, application of the Tat-ANX1 protein significantly inhibited the nuclear translocation of nuclear factor-kappa B (NF-κB) and the phosphorylation of p38 and extracellular signal-regulated kinase (ERK) mitogen-activated protein kinases (MAPKs) in TPA-treated mouse ears. These results indicate that the Tat-ANX1 protein inhibits the inflammatory response by blocking NF-κB and MAPK activation in TPA-induced mouse ears. Therefore, the Tat-ANX1 protein may be useful as a therapeutic agent against inflammatory skin diseases.

Abstract:
Vaccinium uliginosum L. (VU) possesses various biological properties, such as antioxidant activity and protective effects against UV-induced skin photoaging. The purpose of this study is to evaluate the effects of oral administration of a mixture of polyphenols and anthocyanins derived from VU on 2,4-dinitrochlorobenzene (DNCB)-induced atopic dermatitis (AD) in NC/Nga mice. We assessed the anti-AD effects in the NC/Nga murine model over a period of 9 weeks. Oral administration of the mixture significantly alleviated the AD-like skin symptoms and clinical signs, including ear thickness and scratching behavior. The orally administered mixture reduced the levels of IgE and IgG1, whereas it increased the level of IgG2a in a dose-dependent manner. The IgG1/IgG2a ratio calculated for each mouse revealed that the VU-derived mixture also significantly reduced the Th2/Th1 ratio, as well as IL-4 and IL-13 (Th2 cytokines) and IFN-γ and IL-12 (Th1 cytokines), in spleens. In addition, it significantly decreased the expression of genes such as IL-4, IL-5, CCR3, eotaxin-1, IL-12, IFN-γ, MCP-1, and IL-17 in AD-like lesions and suppressed Th17 responses. Histological analyses revealed that epidermal thickness and the number of inflammatory cells were significantly reduced. In conclusion, oral administration of the mixture is confirmed to improve DNCB-induced AD in mice.

Abstract:
We consider the problem of community detection or clustering in the labeled Stochastic Block Model (labeled SBM) with a finite number $K$ of clusters whose sizes grow linearly with the global population of items $n$. Every pair of items is labeled independently at random, and label $\ell$ appears with probability $p(i,j,\ell)$ between two items in clusters indexed by $i$ and $j$, respectively. The objective is to reconstruct the clusters from the observation of these random labels. Clustering under the SBM and its extensions has attracted much attention recently. Most existing work has aimed at characterizing the set of parameters such that it is possible to infer clusters that are either positively correlated with the true clusters, or have a vanishing proportion of misclassified items, or exactly match the true clusters. We address the finer and more challenging question of determining, under the general LSBM and for any $s$, the set of parameters such that there exists a polynomial-time clustering algorithm with at most $s$ misclassified items on average. We prove that in the regime where it is possible to recover the clusters with a vanishing proportion of misclassified items, a necessary and sufficient condition to obtain $s=o(n)$ misclassified items on average is $\frac{n D(\alpha,p)}{ \log (n/s)} \ge 1$, where $D(\alpha,p)$ is an appropriately defined function of the parameters $p=(p(i,j,\ell), i,j, \ell)$ and of $\alpha$, which defines the sizes of the clusters. We further develop an algorithm, based on simple spectral methods, that achieves this fundamental performance limit. The analysis presented in this paper allows us to recover existing results for asymptotically accurate and exact cluster recovery in the SBM, but has much broader applications. For example, it implies that the minimal number of misclassified items under the LSBM considered scales as $n\exp(-nD(\alpha,p)(1+o(1)))$.

Abstract:
We consider the problem of distributed load balancing in heterogeneous parallel server systems, where the service rate achieved by a user at a server depends on both the user and the server. Such heterogeneity typically arises in wireless networks (e.g., servers may represent frequency bands, and the service rate of a user varies across bands). Users select servers in a distributed manner. They initially attach to an arbitrary server. However, at random instants of time, they may probe the load at a new server and migrate there to improve their service rate. We analyze the system dynamics under the natural Random Local Search (RLS) migration scheme, introduced in \cite{sig10}. Under this scheme, when a user has the opportunity to switch servers, she does so only if this improves her service rate. The dynamics under RLS may be interpreted as those generated by strategic players updating their strategies in a load balancing game. In closed systems, where the user population is fixed, we show that this game has pure Nash Equilibria (NEs), and we analyze their efficiency. We further prove that as the user population grows large, pure NEs approach a Proportionally Fair (PF) allocation of users to servers, and we characterize the gap between equilibria and this ideal allocation as a function of the user population. Under the RLS algorithm, the system converges to pure NEs: we study the time it takes for the system to reach the PF allocation within a certain margin. In open systems, where users randomly enter the system and leave upon service completion, we establish that the RLS algorithm stabilizes the system whenever this is at all possible, i.e., it is throughput-optimal.
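As a concrete illustration of the RLS dynamics in a closed system, here is a minimal simulation sketch. It assumes, purely for illustration, that each server shares its capacity equally among attached users, so that user `u`'s achieved rate at server `s` is `rates[u][s]` divided by the server's load; this sharing model and all names are assumptions of the sketch, not the paper's exact model.

```python
import random

def rls_simulate(rates, steps=10000, seed=0):
    """Random Local Search: at each step a random user probes a random
    server and migrates only if her achieved rate strictly improves.
    Assumes equal capacity sharing among users attached to a server."""
    rng = random.Random(seed)
    n_users, n_servers = len(rates), len(rates[0])
    assign = [rng.randrange(n_servers) for _ in range(n_users)]
    load = [0] * n_servers
    for s in assign:
        load[s] += 1
    for _ in range(steps):
        u = rng.randrange(n_users)
        cur = assign[u]
        new = rng.randrange(n_servers)
        if new == cur:
            continue
        # rate now vs. rate after migrating (the new server gains one user)
        if rates[u][new] / (load[new] + 1) > rates[u][cur] / load[cur]:
            load[cur] -= 1
            load[new] += 1
            assign[u] = new
    return assign, load
```

On a small homogeneous instance (four identical users, two identical servers), the dynamics settle on the balanced 2-2 split, which is the unique pure NE of this toy game.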

Abstract:
In this paper, we consider networks consisting of a finite number of non-overlapping communities. To extract these communities, the interaction between pairs of nodes may be sampled from a large available data set, which allows a given node pair to be sampled several times. When a node pair is sampled, the observed outcome is a binary random variable, equal to 1 if the nodes interact and to 0 otherwise. The outcome is more likely to be positive if the nodes belong to the same community. For a given budget of node pair samples or observations, we wish to jointly design a sampling strategy (the sequence of sampled node pairs) and a clustering algorithm that recover the hidden communities with the highest possible accuracy. We consider both non-adaptive and adaptive sampling strategies, and for both classes of strategies, we derive fundamental performance limits satisfied by any sampling and clustering algorithm. In particular, we provide necessary conditions for the existence of algorithms recovering the communities accurately as the network size grows large. We also devise simple algorithms that accurately reconstruct the communities when this is at all possible, hence proving that the proposed necessary conditions for accurate community detection are also sufficient. The classical problem of community detection in the stochastic block model can be seen as a particular instance of the problems considered here. However, our framework covers more general scenarios where the sequence of sampled node pairs can be designed in an adaptive manner. The paper provides new results for the stochastic block model, and extends the analysis to the case of adaptive sampling.

Abstract:
We consider the problem of community detection in the Stochastic Block Model with a finite number $K$ of communities of sizes linearly growing with the network size $n$. This model consists of a random graph in which each pair of vertices is connected independently with probability $p$ within communities and $q$ across communities. One observes a realization of this random graph, and the objective is to reconstruct the communities from this observation. We show that under spectral algorithms, the number of misclassified vertices does not exceed $s$ with high probability as $n$ grows large, whenever $pn=\omega(1)$, $s=o(n)$ and \begin{equation*} \liminf_{n\to\infty} {n(\alpha_1 p+\alpha_2 q-(\alpha_1 + \alpha_2)p^{\frac{\alpha_1}{\alpha_1 + \alpha_2}}q^{\frac{\alpha_2}{\alpha_1 + \alpha_2}})\over \log (\frac{n}{s})} >1,\quad\quad(1) \end{equation*} where $\alpha_1$ and $\alpha_2$ denote the (fixed) proportions of vertices in the two smallest communities. In view of recent work by Abbe et al. and Mossel et al., this establishes that the proposed spectral algorithms are able to exactly recover communities whenever this is at all possible in the case of networks with two communities of equal sizes. We conjecture that condition (1) is actually necessary to obtain fewer than $s$ misclassified vertices asymptotically, which would establish the optimality of spectral methods in more general scenarios.
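To make the spectral approach concrete, the sketch below handles the special case of two equal-size communities: it samples an SBM graph and splits vertices by the sign of the eigenvector associated with the second-largest adjacency eigenvalue. This is a bare-bones illustration; the spectral algorithms analyzed in the paper include additional steps (e.g., trimming and local improvement) omitted here, and all function names are of the sketch's choosing.

```python
import numpy as np

def sbm_adjacency(sizes, p, q, seed=0):
    """Sample an SBM adjacency matrix: edge prob. p within communities,
    q across; returns the symmetric matrix and the true labels."""
    rng = np.random.default_rng(seed)
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, 1)
    A = (upper + upper.T).astype(float)
    return A, labels

def spectral_two_communities(A):
    """Split vertices by the sign of the eigenvector of the
    second-largest eigenvalue (eigh returns eigenvalues ascending)."""
    _, vecs = np.linalg.eigh(A)
    v = vecs[:, -2]
    return (v > 0).astype(int)
```

With a strong separation (e.g., $p=0.5$, $q=0.05$, $n=200$), the sign split recovers the two communities up to a global label flip with few or no errors.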

Abstract:
In this paper, an optimal antenna pattern for an active phased array synthetic aperture radar (SAR) is synthesized to meet the required performance, based on particle swarm optimization (PSO) and adaptively selected weighting factors. Because the antenna radiation pattern is closely related to the performance of an active phased array SAR system, the authors derived multi-objective cost functions based on system performance measures such as the range-to-ambiguity ratio, noise-equivalent sigma zero, and radiometric accuracy. The antenna mask templates were derived from the SAR system design parameters in order to optimize the system requirements. To effectively minimize the cost functions and to search for the amplitude and phase excitations of an active phased array SAR antenna, the authors applied the PSO technique to SAR antenna pattern design and carefully selected weighting factors to improve the fitness of the cost functions on the basis of SAR performance.
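For readers unfamiliar with PSO, the following sketch shows the core of the optimizer: each particle's velocity mixes an inertia term, a cognitive pull toward its personal best, and a social pull toward the swarm's global best. It minimizes a generic cost function; the paper's multi-objective SAR cost functions, mask templates, and adaptive weighting factors are domain-specific and not reproduced here, and all parameter values below are illustrative defaults.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization over a box-constrained domain."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_cost)].copy()
    g_cost = pbest_cost.min()
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + pull to personal best + pull to global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better] = x[better]
        pbest_cost[better] = costs[better]
        if costs.min() < g_cost:
            g_cost = costs.min()
            g = x[np.argmin(costs)].copy()
    return g, g_cost
```

On a simple convex test cost (the sphere function), the swarm drives the cost close to zero within a few hundred iterations.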

Abstract:
In this paper, we consider the streaming memory-limited matrix completion problem, where the observed entries are noisy versions of a small random fraction of the original entries. We are interested in scenarios where the matrix is so large that it is hard to store and manipulate. Here, the columns of the observed matrix are presented sequentially, and the goal is to complete the missing entries after one pass over the data with limited memory space and limited computational complexity. We propose a streaming algorithm that produces an estimate of the original matrix with a vanishing mean square error, uses memory space scaling linearly with the ambient dimension of the matrix (i.e., the memory required to store the output alone), and performs a number of computations of the same order as the number of non-zero entries of the input matrix.
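The following is a much-simplified, one-pass sketch in the spirit of the streaming setting described above (it is not the paper's algorithm): it estimates the column space of the matrix from an initial batch of zero-filled observed columns, then completes each column by least squares on its observed entries within that subspace. Besides the small initial batch, it stores only the d-by-rank subspace estimate and the output; all names are the sketch's own.

```python
import numpy as np

def streaming_complete(columns, masks, rank, batch=80):
    """One-pass completion sketch: learn a rank-`rank` column space from
    the first `batch` zero-filled observed columns, then complete every
    column by least squares on its observed entries in that subspace."""
    completed = []
    buf_y, buf_m = [], []
    U = None
    for y, m in zip(columns, masks):
        if U is None:
            buf_y.append(np.where(m, y, 0.0))
            buf_m.append(m)
            if len(buf_y) == batch:
                # top-`rank` left singular vectors of the zero-filled batch
                B = np.stack(buf_y, axis=1)
                U = np.linalg.svd(B, full_matrices=False)[0][:, :rank]
                # complete the buffered batch columns, then drop the buffer
                for yb, mb in zip(buf_y, buf_m):
                    w = np.linalg.lstsq(U[mb], yb[mb], rcond=None)[0]
                    completed.append(U @ w)
                buf_y, buf_m = [], []
        else:
            w = np.linalg.lstsq(U[m], y[m], rcond=None)[0]
            completed.append(U @ w)
    return np.stack(completed, axis=1)
```

On a noiseless low-rank matrix with a substantial fraction of entries observed, this crude subspace estimate already yields a reconstruction with moderate relative error; the paper's algorithm is designed to drive the error to zero.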

Abstract:
This paper has been withdrawn due to issues with the manuscript. Recent studies on MAC scheduling have shown that carrier sense multiple access (CSMA) can be controlled to achieve optimality in terms of throughput or utility. These results imply that even a simple MAC algorithm without message passing can achieve strong performance guarantees. However, such studies rely on the assumption that channel conditions are static. Noting that the main driver of optimality in optimal CSMA is letting it run a good schedule for some time, formally referred to as the mixing time, it remains under-explored how such optimal CSMA performs under time-varying channel conditions. In this paper, under the practical constraint of restricted back-off rates (i.e., limited sensing speed), we consider two versions of CSMA: (i) channel-unaware CSMA (U-CSMA) and (ii) channel-aware CSMA (A-CSMA), each characterized by its ability to track channel conditions. We first show that for fast channel variations, A-CSMA achieves almost zero throughput, implying that incomplete tracking of channel conditions may seriously degrade performance, whereas U-CSMA, which accesses the medium without explicit consideration of channel conditions, has a positive worst-case throughput guarantee whose ratio depends on the network topology. On the other hand, for slow channel variations, we prove that A-CSMA is throughput-optimal for any network topology. Our results provide a precise trade-off between the sensing costs and performance of CSMA algorithms, guiding robust MAC scheduling design under highly time-varying scenarios.
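To make the role of CSMA dynamics and mixing concrete, here is a toy Glauber-dynamics CSMA simulation on a static conflict graph (a textbook-style sketch with names of our choosing, not the U-CSMA/A-CSMA algorithms studied in the paper). With fugacity `lam`, a link that wakes up and senses no conflicting transmission turns on with probability `lam/(1+lam)`; over time, the empirical on-fractions approach the product-form stationary distribution over independent sets of the conflict graph.

```python
import random

def csma_glauber(neighbors, fugacities, steps=100000, seed=0):
    """Toy discrete-time Glauber-dynamics CSMA on a conflict graph:
    at each step a random link wakes up; if a conflicting neighbor is
    transmitting it stays silent, otherwise it transmits with
    probability lam/(1+lam). Returns each link's on-fraction."""
    rng = random.Random(seed)
    n = len(neighbors)
    state = [0] * n     # 1 if the link is currently transmitting
    on_time = [0] * n
    for _ in range(steps):
        i = rng.randrange(n)
        if any(state[j] for j in neighbors[i]):
            state[i] = 0
        else:
            lam = fugacities[i]
            state[i] = 1 if rng.random() < lam / (1 + lam) else 0
        for k in range(n):
            on_time[k] += state[k]
    return [t / steps for t in on_time]
```

For two mutually conflicting links with unit fugacities, the stationary distribution puts weight 1/3 on each of the three independent sets (both off, only link 0 on, only link 1 on), so each link transmits roughly one third of the time.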