Abstract:
Stereo video is widely used because it provides depth information. However, its huge data volume makes stereo video difficult to store and transmit, so an efficient channel coding algorithm and a proper transmission strategy are needed for video transmission over bandwidth-limited channels. In this paper, unequal error protection (UEP) based on low-density parity-check (LDPC) codes is used to transmit stereo video over a bandwidth-limited wireless channel. LDPC codes of different correction strength are assigned according to how important each part of the video stream is to reconstruction at the receiver. Simulation results show that the proposed transmission scheme increases the PSNR of the reconstructed images and improves their subjective quality.
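As a minimal sketch of the UEP idea, the snippet below maps stream classes to LDPC code rates by reconstruction importance. The class names and code rates are illustrative assumptions, not the paper's actual stream partition or code parameters.

```python
# Minimal sketch of unequal error protection (UEP): more important parts of
# the stereo stream get a lower-rate (stronger) LDPC code. The stream classes
# and code rates below are illustrative assumptions, not the paper's values.

ILLUSTRATIVE_RATES = {
    "base_view_intra": 1 / 2,   # most important: strongest protection
    "base_view_inter": 2 / 3,
    "dependent_view":  5 / 6,   # least important: weakest protection
}

def assign_ldpc_rate(stream_class):
    """Return the LDPC code rate for a stream class; a lower rate means
    more parity bits and hence stronger error correction."""
    return ILLUSTRATIVE_RATES[stream_class]

def parity_overhead(rate):
    """Fraction of transmitted bits that are parity, for a rate-k/n code."""
    return 1.0 - rate
```

The point of the mapping is simply that bandwidth spent on parity is concentrated where a decoding failure would hurt the reconstruction most.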

Abstract:
The aim of the present study was to examine the priming effects of violent movies and aggressive words on implicit aggression using a modified Stroop task. 190 adolescents participated in this study, with 95 assigned to a non-violent movie group and 95 to a violent movie group. The results showed no significant main effect of Movie Type, but a significant Movie Type × Aggressive Trait interaction: aggression was significantly influenced by the violent movie only for high-aggressive-trait (HT) adolescents, not for mid-aggressive-trait (MT) or low-aggressive-trait (LT) adolescents. A possible underlying mechanism is that HT adolescents possess a relatively stronger aggressive network of cognitive associations, which is more easily activated by violence than in MT and LT adolescents. This indicates that violent movies can effectively elicit implicit aggression in highly aggressive adolescents, but not in nonaggressive adolescents.

Abstract:
In this paper, we study the connectivity of multihop wireless networks
under the log-normal shadowing model by investigating the precise distribution
of the number of isolated nodes. Under such a realistic shadowing model, all
previously known results on the distribution of the number of isolated nodes
were obtained either from simulation studies or by ignoring the important
boundary effect to avoid the challenging technical analysis, and thus cannot
be applied to practical wireless networks. Taking the complicated boundary
effect into consideration under such a realistic model is extremely
challenging because the transmission area of each node is an irregular region
rather than a circular disk. Assume that the wireless nodes are represented by
a Poisson point process with density n over a unit-area disk, and that the
transmission power is properly chosen so that the expected node degree of the
network equals ln n + ξ(n), where ξ(n) approaches a constant ξ as n → ∞. Under
such a shadowing model with the boundary effect taken into consideration, we
prove that the total number of isolated nodes is asymptotically Poisson with
mean e^{-ξ}. Brun's sieve is utilized to derive the precise asymptotic
distribution.
Our results can be used as design guidelines for any practical multihop
wireless network where both the shadowing and boundary effects must be taken
into consideration.
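A toy Monte Carlo experiment can illustrate the quantity being analyzed. The sketch below places a Poisson number of nodes on a unit-area disk, draws symmetric log-normal shadowing on each link, and counts isolated nodes. The shadowing parameters are illustrative assumptions, and the calibration of the nominal radius deliberately ignores the boundary effect, which is exactly the simplification the paper's analysis removes.

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplicative method; adequate for moderate lam."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def mean_isolated(n_mean, xi, sigma_db=4.0, alpha=3.0, trials=20, seed=1):
    """Average number of isolated nodes over several network realizations.

    Nodes form a Poisson point process of density n_mean on a unit-area disk.
    A symmetric link (i, j) is up when d_ij <= r0 * 10**(X/(10*alpha)) with
    shadowing X ~ N(0, sigma_db**2). r0 is calibrated so the expected degree
    is ln(n_mean) + xi under a circular-area approximation that IGNORES the
    boundary effect -- the very simplification the paper avoids.
    """
    rng = random.Random(seed)
    R = 1.0 / math.sqrt(math.pi)                    # disk of unit area
    # E[10^(2X/(10*alpha))] inflates the effective coverage area.
    area_factor = math.exp(2.0 * (sigma_db * math.log(10) / (10 * alpha)) ** 2)
    r0 = math.sqrt((math.log(n_mean) + xi) / (n_mean * math.pi * area_factor))
    total = 0
    for _ in range(trials):
        n = sample_poisson(n_mean, rng)
        pts = []
        while len(pts) < n:                         # uniform points on the disk
            x, y = rng.uniform(-R, R), rng.uniform(-R, R)
            if x * x + y * y <= R * R:
                pts.append((x, y))
        deg = [0] * n
        for i in range(n):
            for j in range(i + 1, n):
                shadow = rng.gauss(0.0, sigma_db)   # same draw both directions
                if math.dist(pts[i], pts[j]) <= r0 * 10.0 ** (shadow / (10.0 * alpha)):
                    deg[i] += 1
                    deg[j] += 1
        total += sum(1 for d in deg if d == 0)
    return total / trials
```

Because the toy calibration ignores the boundary, nodes near the disk edge see fewer neighbors, so the observed isolated-node count tends to exceed the interior-only prediction e^{-ξ}, which is precisely why the boundary effect matters.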

Abstract:
Training images, as an important modeling input in multi-point geostatistics, directly determine the quality of the resulting model, so candidate training images must be evaluated and selected before multi-point geostatistical modeling. The overall repetition probability alone is not sufficient to describe the behavior of single data events in a training image. Based on this observation, a new method for selecting training images is presented in this paper. The basic idea is that the repetition-probability distribution of single data events characterizes the type and stationarity of the sedimentary patterns in a training image: the mean and deviation of the repetition probability of single data events reflect the stationarity of the geological model, while the data-event mismatch rate reflects the diversity of geological patterns. The optimal training image is selected by combining the repetition probability of single data events with the overall repetition probability. Simulation tests illustrate that a good training image exhibits high repetition-probability compatibility, a stable repetition-probability distribution of single data events, a low probability mean, a low probability deviation, and a low mismatch rate. The method can quickly select a training image and provides a basic guarantee for multi-point geostatistical simulation.
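A minimal sketch of the single-data-event statistics described above, assuming a 2D categorical facies grid and a small square template; the template size and the particular summary statistics are illustrative choices, not the paper's exact definitions.

```python
from collections import Counter
from statistics import mean, pstdev

def data_event_stats(grid, size=2):
    """Scan a 2D facies grid with a size x size template and summarize the
    repetition-probability distribution of single data events: a low mean and
    low deviation indicate many distinct, evenly repeated patterns, i.e. a
    diverse and stationary training image."""
    rows, cols = len(grid), len(grid[0])
    counts = Counter(
        tuple(tuple(grid[r + i][c + j] for j in range(size)) for i in range(size))
        for r in range(rows - size + 1)
        for c in range(cols - size + 1)
    )
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    return {"mean": mean(probs), "deviation": pstdev(probs), "events": counts}

def mismatch_rate(train_events, grid, size=2):
    """Fraction of the grid's data events that never occur in the training
    image's event table -- an indicator of missing pattern diversity."""
    rows, cols = len(grid), len(grid[0])
    events = [
        tuple(tuple(grid[r + i][c + j] for j in range(size)) for i in range(size))
        for r in range(rows - size + 1)
        for c in range(cols - size + 1)
    ]
    return sum(1 for ev in events if ev not in train_events) / len(events)
```

Candidate training images can then be ranked by these statistics: lower mean, lower deviation, and lower mismatch rate against the modeling region indicate a better candidate.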

Abstract:
The biggest problem in combinatorial testing is the enormous number of input-parameter combinations caused by combinatorial explosion. Pairwise combinatorial coverage testing is an effective method that reduces the number of test cases in a suite and can detect about 70% of program errors. In many circumstances, however, the parameters of programs under test (PUTs) are related to each other, so pairwise combinatorial test suites contain some ineffective test cases. In this paper, we propose a method for removing ineffective combinatorial test cases from a pairwise test suite. The main idea is to first analyze the dependency relationships among the input parameters, then use these relationships to eliminate ineffective pairwise combinations of input parameters, and finally generate the pairwise combinatorial coverage test suite. Experiments show that the method is feasible and effective, and considerably reduces the number of pairwise combinatorial test cases for some programs under test.
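The workflow above can be sketched as a constraint-aware greedy pairwise generator. The parameter names, values, and dependency constraint in the example are illustrative, and brute-force candidate enumeration limits this sketch to small parameter spaces; it is not the paper's actual algorithm.

```python
from itertools import combinations, product

def pairwise_suite(params, valid=lambda case: True):
    """Greedy pairwise test-suite generation with a constraint filter.

    params: dict mapping parameter name -> list of values.
    valid:  predicate over a complete test case (dict) that rejects the
            ineffective combinations implied by parameter dependencies.
    """
    names = sorted(params)
    cases = [dict(zip(names, vs)) for vs in product(*(params[n] for n in names))]
    cases = [c for c in cases if valid(c)]          # drop ineffective cases
    pairs_of = lambda c: {((a, c[a]), (b, c[b])) for a, b in combinations(names, 2)}
    # Only pairs realizable by at least one valid case need covering.
    uncovered = set().union(*(pairs_of(c) for c in cases)) if cases else set()
    suite = []
    while uncovered:
        best = max(cases, key=lambda c: len(pairs_of(c) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite

# Hypothetical PUT: the "win" OS does not support the "my" database.
params = {"os": ["win", "linux"], "db": ["pg", "my"], "ip": ["v4", "v6"]}
suite = pairwise_suite(
    params, valid=lambda c: not (c["os"] == "win" and c["db"] == "my"))
```

Each greedy step adds the valid case covering the most still-uncovered pairs, so the loop terminates once every realizable pair is covered and no ineffective case ever enters the suite.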

Abstract:
In this paper, the holographic dark energy model with a new infrared (IR) cut-off is confronted, in both the flat and the non-flat cases, with the combined constraints of current cosmological observations: type Ia supernovae, baryon acoustic oscillations, the current cosmic microwave background, and the observational Hubble data. By utilizing the Markov Chain Monte Carlo (MCMC) method, we obtain the best-fit values of the parameters with $1\sigma, 2\sigma$ errors in the flat model: $\Omega_{b}h^2=0.0233^{+0.0009 +0.0013}_{-0.0009 -0.0014}$, $\alpha=0.8502^{+0.0984 +0.1299}_{-0.0875 -0.1064}$, $\beta=0.4817^{+0.0842 +0.1176}_{-0.0773 -0.0955}$, $\Omega_{de0}=0.7287^{+0.0296 +0.0432}_{-0.0294 -0.0429}$, $\Omega_{m0}=0.2713^{+0.0294 +0.0429}_{-0.0296 -0.0432}$, $H_0=66.35^{+2.38 +3.35}_{-2.14 -3.07}$. In the non-flat model, the constraint results in the $1\sigma, 2\sigma$ regions are: $\Omega_{b}h^2=0.0228^{+0.0010 +0.0014}_{-0.0010 -0.0014}$, $\Omega_k=0.0305^{+0.0092 +0.0140}_{-0.0134 -0.0176}$, $\alpha=0.8824^{+0.2180 +0.2213}_{-0.1163 -0.1378}$, $\beta=0.5016^{+0.0973 +0.1247}_{-0.0871 -0.1102}$, $\Omega_{de0}=0.6934^{+0.0364 +0.0495}_{-0.0304 -0.0413}$, $\Omega_{m0}=0.2762^{+0.0278 +0.0402}_{-0.0320 -0.0412}$, $H_0=70.20^{+3.03 +3.58}_{-3.17 -4.00}$. In the best-fit holographic dark energy models, the equation of state of dark energy and the deceleration parameter at present are characterized by $w_{de0}=-1.1414\pm0.0608, q_0=-0.7476\pm0.0466$ (flat case) and $w_{de0}=-1.0653\pm0.0661, q_0=-0.6231\pm0.0569$ (non-flat case). Compared to the $\Lambda \textmd{CDM}$ model, it is found that the current combined datasets do not favor the holographic dark energy model over the $\Lambda \textmd{CDM}$ model.
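As an illustration of the MCMC machinery behind such parameter constraints, the sketch below runs a minimal Metropolis sampler on a one-dimensional toy posterior. The Gaussian "likelihood", centered on the abstract's best-fit $H_0$ with an illustrative width, is a stand-in assumption, not the paper's actual cosmological chi-square.

```python
import math
import random
from statistics import mean, pstdev

def metropolis(log_post, theta0, step, n_steps, seed=0):
    """Minimal Metropolis sampler: propose a Gaussian jump, accept with
    probability min(1, exp(delta log-posterior)), record every state."""
    rng = random.Random(seed)
    theta, lp = theta0, log_post(theta0)
    chain = []
    for _ in range(n_steps):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

# Toy 1-D posterior for H0: Gaussian at 66.35 with width 2.3 (illustrative).
log_post = lambda h: -0.5 * ((h - 66.35) / 2.3) ** 2
chain = metropolis(log_post, theta0=70.0, step=2.0, n_steps=20000)
burned = chain[5000:]   # discard burn-in before summarizing
```

The posterior mean and standard deviation of the burned-in chain recover the input center and width, which is how best-fit values and $1\sigma$ intervals are read off a real cosmological chain.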

Abstract:
In this paper, we propose a new method of using strong lensing data sets to constrain a cosmological model. By taking the ratio $\mathcal{D}^{obs}_{ij}=\theta_{\mathrm{E_{\mathrm{i}}}}\sigma_{\mathrm{0_{\mathrm{j}}}}^2/\theta_{\mathrm{E_{\mathrm{j}}}}\sigma_{\mathrm{0_{\mathrm{i}}}}^2$ as the cosmic observable, one can {\it completely} eliminate the uncertainty caused by the relation $\sigma_{\mathrm{SIS}}=f_{\mathrm{E}}\sigma_0$, which characterizes the relation between the stellar velocity dispersion $\sigma_0$ and the velocity dispersion $\sigma_{\mathrm{SIS}}$. Via our method, a relatively tight constraint on the cosmological model space can be obtained; taking the spatially flat $\Lambda$CDM model as an example, $\Omega_m=0.143_{- 0.143-0.143-0.143}^{+ 0.000769+0.143+0.489}$ in the $3\sigma$ region. This method can also be used to probe the nature of dark energy and the spatial curvature of our Universe.
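The cancellation can be seen from the Einstein radius of a singular isothermal sphere (SIS) lens, $\theta_{\mathrm{E}}=4\pi(\sigma_{\mathrm{SIS}}^2/c^2)(D_{ls}/D_s)$: substituting $\sigma_{\mathrm{SIS}}=f_{\mathrm{E}}\sigma_0$ into the ratio removes $f_{\mathrm{E}}$ entirely, assuming (as a sketch) a common $f_{\mathrm{E}}$ for both lens systems:

```latex
\mathcal{D}^{obs}_{ij}
  = \frac{\theta_{\mathrm{E_i}}\,\sigma_{\mathrm{0_j}}^2}{\theta_{\mathrm{E_j}}\,\sigma_{\mathrm{0_i}}^2}
  = \frac{4\pi f_{\mathrm{E}}^2 \sigma_{\mathrm{0_i}}^2 (D_{ls}/D_s)_i \,\sigma_{\mathrm{0_j}}^2 / c^2}
         {4\pi f_{\mathrm{E}}^2 \sigma_{\mathrm{0_j}}^2 (D_{ls}/D_s)_j \,\sigma_{\mathrm{0_i}}^2 / c^2}
  = \frac{(D_{ls}/D_s)_i}{(D_{ls}/D_s)_j}
```

so the observable depends only on the ratio of distance ratios, which is where the cosmological information resides.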

Abstract:
In this paper, we perform a global constraint on the Ricci dark energy model in both the flat and the non-flat cases, using the Markov Chain Monte Carlo (MCMC) method and the combined observational data from the cluster X-ray gas mass fraction, type Ia supernovae (397), baryon acoustic oscillations, the current cosmic microwave background, and the observational Hubble function. In the flat model, we obtain the best-fit values of the parameters in the $1\sigma, 2\sigma$ regions: $\Omega_{m0}=0.2927^{+0.0420 +0.0542}_{-0.0323 -0.0388}$, $\alpha=0.3823^{+0.0331 +0.0415}_{-0.0418 -0.0541}$, $Age/Gyr=13.48^{+0.13 +0.17}_{-0.16 -0.21}$, $H_0=69.09^{+2.56 +3.09}_{-2.37 -3.39}$. In the non-flat model, the best-fit parameters are found in the $1\sigma, 2\sigma$ regions: $\Omega_{m0}=0.3003^{+0.0367 +0.0429}_{-0.0371 -0.0423}$, $\alpha=0.3845^{+0.0386 +0.0521}_{-0.0474 -0.0523}$, $\Omega_k=0.0240^{+0.0109 +0.0133}_{-0.0130 -0.0153}$, $Age/Gyr=12.54^{+0.51 +0.65}_{-0.37 -0.49}$, $H_0=72.89^{+3.31 +3.88}_{-3.05 -3.72}$. Compared to the constraint results for the $\Lambda \textmd{CDM}$ model using the same datasets, it is shown that the current combined datasets prefer the $\Lambda \textmd{CDM}$ model to the Ricci dark energy model.

Abstract:
In this paper, the Dvali-Gabadadze-Porrati (DGP) brane model is confronted with current cosmic observational data sets from both geometrical and dynamical perspectives. On the geometrical side, we use the recently released Union2 compilation of $557$ type Ia supernovae (SN Ia), the baryon acoustic oscillation (BAO) measurements from the Sloan Digital Sky Survey and the Two Degree Galaxy Redshift Survey (transverse and radial to line-of-sight data points), the cosmic microwave background (CMB) measurements from the seven-year Wilkinson Microwave Anisotropy Probe observations (the shift parameters $R$, $l_a(z_\ast)$ and the redshift of the last scattering surface $z_\ast$), the ages of high-redshift galaxies, i.e. the lookback time (LT), and high-redshift Gamma Ray Bursts (GRBs). On the dynamical side, data points for the growth function (GF) of matter linear perturbations are used. Using the same combination of data sets, we also constrain the flat $\Lambda$CDM model as a comparison. The results show that current geometrical and dynamical observational data sets strongly favor the flat $\Lambda$CDM model, and the departure from it is above $4\sigma$ ($6\sigma$) for the spatially flat DGP model with (without) SN systematic errors. The consistency of the growth function data points is checked in terms of the relative departure of the redshift-distance relation.

Abstract:
In this paper, we test the consistency of the Gamma Ray Burst (GRB) data set and the Supernovae Union2 (SNU2) data set via the so-called {\it multi-dimensional consistency test}, under the assumption that the $\Lambda$CDM model is a potentially correct cosmological model. We find that the two probes are inconsistent at the $1.456\sigma$ level, i.e. with $85.47\%$ probability. Since this tension is below $2\sigma$, it is concluded that GRBs can be combined with SNU2 to constrain cosmological models.
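The pairing of the tension level and its probability follows from the two-sided Gaussian integral, $P = \operatorname{erf}(n/\sqrt{2})$; the snippet below reproduces the abstract's numbers to within rounding.

```python
import math

def sigma_to_prob(n_sigma):
    """Two-sided probability enclosed within n_sigma of a Gaussian:
    P = erf(n / sqrt(2)); e.g. 1 sigma -> 68.27%, 2 sigma -> 95.45%."""
    return math.erf(n_sigma / math.sqrt(2.0))

# The abstract's 1.456 sigma corresponds to roughly 85.5% probability.
tension_prob = 100.0 * sigma_to_prob(1.456)
```

A tension below $2\sigma$ (i.e. below $95.45\%$) is conventionally treated as mild enough for the two probes to be combined.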