Abstract:
We report the magnetotransport characteristics of a trilayer ferromagnetic tunnel junction built from an electron-doped manganite (La_0.7Ce_0.3MnO_3) and a hole-doped manganite (La_0.7Ca_0.3MnO_3). At low temperatures the junction exhibits a large positive tunneling magnetoresistance (TMR), irrespective of the bias voltage. At intermediate temperatures below T_C the sign of the TMR depends on the bias voltage across the junction. The magnetoresistive characteristics of the junction strongly suggest that La_0.7Ce_0.3MnO_3 is a minority-spin-carrier ferromagnet with a high degree of spin polarization, i.e. a transport half-metal.

Abstract:
A wavelength calibration system based on a laser frequency comb (LFC) was developed in a cooperation between the Kiepenheuer-Institut f\"ur Sonnenphysik, Freiburg, Germany, and the Max-Planck-Institut f\"ur Quantenoptik, Garching, Germany, for permanent installation at the German Vacuum Tower Telescope (VTT) on Tenerife, Canary Islands. The system was installed successfully in October 2011. By simultaneously recording the spectra from the Sun and the LFC, a calibration curve can be derived for each exposure from the known frequencies of the comb modes, suitable for absolute calibration at the meters-per-second level. We briefly summarize some topics in solar physics that benefit from absolute spectroscopy and point out the advantages of an LFC compared to traditional calibration techniques. We also sketch the basic setup of the VTT calibration system and its integration with the existing echelle spectrograph.

Abstract:
Fourier-Legendre decomposition (FLD) of solar Doppler imaging data is a promising method for estimating the sub-surface solar meridional flow. FLD is sensitive to low-degree oscillation modes and thus has the potential to probe the deep meridional flow. We present a newly developed code to be used for large-scale FLD analysis of helioseismic data as provided by the Global Oscillation Network Group (GONG), the Michelson Doppler Imager (MDI) instrument, and the upcoming Helioseismic and Magnetic Imager (HMI) instrument. First results obtained with the new code are qualitatively comparable to those obtained from ring-diagram analysis of the same time series.

Abstract:
The solar photospheric abundance of oxygen is still a matter of debate. For about ten years some determinations have favoured a low oxygen abundance, which is at variance with the value inferred by helioseismology. Among the oxygen abundance indicators, the forbidden line at 630nm has often been considered the most reliable, even though it is blended with a Ni I line. In Papers I and II of this series we reported a discrepancy in the oxygen abundance derived from the 630nm line and the subordinate [OI] line at 636nm in dwarf stars, including the Sun. Here we analyse several, in part new, solar observations of the centre-to-limb variation of the spectral region including the blend at 630nm in order to separate the individual contributions of oxygen and nickel. We analyse intensity spectra observed at different limb angles in comparison with line formation computations performed on a CO5BOLD 3D hydrodynamical simulation of the solar atmosphere. The oxygen abundances obtained from the forbidden line at different limb angles are inconsistent if the commonly adopted nickel abundance of 6.25 is assumed in our local thermodynamic equilibrium computations. With a slightly lower nickel abundance, A(Ni)~6.1, we obtain consistent fits indicating an oxygen abundance of A(O)=8.73+/-0.05. At this value the discrepancy with the subordinate oxygen line remains. The derived value of the oxygen abundance supports the notion of a rather low oxygen abundance in the solar photosphere. However, it is disconcerting that the forbidden oxygen lines at 630 and 636nm give noticeably different results, and that the nickel abundance derived here from the 630nm blend is lower than expected from other nickel lines.

Abstract:
We investigate a new scheme for astronomical spectrograph calibration using the laser frequency comb at the Solar Vacuum Tower Telescope on Tenerife. Our concept is based upon a single-mode fiber channel that simultaneously feeds the spectrograph with comb light and sunlight. This yields nearly perfect spatial mode matching between the two sources. In combination with the absolute calibration provided by the frequency comb, this method enables extremely robust and accurate spectroscopic measurements. The performance of this scheme is compared to a sequence of alternating comb light and sunlight, and to absorption lines from Earth's atmosphere. We also show how the method can be used for radial-velocity detection by measuring the well-explored 5-minute oscillations averaged over the full solar disk. Our method is currently restricted to solar spectroscopy, but with further evolving fiber-injection techniques it could become an option even for faint astronomical targets.

Abstract:
An exact, analytic solution for a simple electrostatic model applicable to biomolecular recognition is presented. In the model, a layer of high dielectric constant material (representative of the solvent, water), whose thickness may vary, separates two regions of low dielectric constant material (representative of proteins, DNA, RNA, or similar materials), in each of which is embedded a point charge. For identical charges, the presence of the screening layer always lowers the energy compared to the case of point charges in an infinite medium of low dielectric constant. Somewhat surprisingly, the presence of a sufficiently thick screening layer also lowers the energy compared to the case of point charges in an infinite medium of high dielectric constant. For charges of opposite sign, the screening layer always lowers the energy compared to the case of point charges in an infinite medium of either high or low dielectric constant. The behavior of the energy leads to a substantially increased repulsive force between charges of the same sign. The attractive force between charges of opposite sign is weaker than in an infinite medium of low dielectric constant material but stronger than in an infinite medium of high dielectric constant material. The presence of this behavior, which we name asymmetric screening, in the simple system presented here confirms the generality of the behavior that was established in a more complicated system of an arbitrary number of charged dielectric spheres in an infinite solvent.

Abstract:
Motivated by a problem in the theory of randomized search heuristics, we give a very precise analysis of the coupon collector problem in which the collector starts with a random set of coupons (chosen uniformly from all sets). We show that the expected number of rounds until we have a coupon of each type is $nH_{n/2} - 1/2 \pm o(1)$, where $H_{n/2}$ denotes the $(n/2)$th harmonic number when $n$ is even, and $H_{n/2}:= (1/2) H_{\lfloor n/2 \rfloor} + (1/2) H_{\lceil n/2 \rceil}$ when $n$ is odd. Consequently, the coupon collector with a random initial stake is half a round faster than the one starting with exactly $n/2$ coupons (apart from additive $o(1)$ terms). This result implies that the classic simple heuristic called \emph{randomized local search} needs an expected number of $nH_{n/2} - 1/2 \pm o(1)$ iterations to find the optimum of any monotonic function defined on bit-strings of length $n$.
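To make the stated formula concrete, the following Python sketch (our own illustration, not code from the paper) evaluates $nH_{n/2} - 1/2$ for even and odd $n$ and compares it with a Monte Carlo simulation of the coupon collector started from a uniformly random coupon set. Note that the formula holds up to an additive $o(1)$ term, so agreement for small $n$ is approximate.

```python
import random
from fractions import Fraction

def harmonic(m):
    # H_m = sum_{k=1}^{m} 1/k, computed exactly with rationals
    return sum(Fraction(1, k) for k in range(1, m + 1))

def predicted_rounds(n):
    """Leading term n*H_{n/2} - 1/2 of the expected number of rounds,
    using the interpolated harmonic number for odd n as in the abstract."""
    if n % 2 == 0:
        h = harmonic(n // 2)
    else:
        h = (harmonic(n // 2) + harmonic(n // 2 + 1)) / 2
    return float(n * h - Fraction(1, 2))

def simulated_rounds(n, trials=20_000, rng=random.Random(42)):
    # A set chosen uniformly from all subsets is the same as including each
    # coupon independently with probability 1/2.
    total = 0
    for _ in range(trials):
        have = [rng.random() < 0.5 for _ in range(n)]
        missing = n - sum(have)
        while missing > 0:
            total += 1
            c = rng.randrange(n)
            if not have[c]:
                have[c] = True
                missing -= 1
    return total / trials
```

For example, predicted_rounds(20) is about 58.08, and a simulation with a few tens of thousands of trials lands close to this value.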

Abstract:
While evolutionary algorithms are known to be very successful for a broad range of applications, the algorithm designer is often left with many algorithmic choices, for example, the size of the population, the mutation rates, and the crossover rates of the algorithm. These parameters are known to have a crucial influence on the optimization time and thus need to be chosen carefully, a task that often requires substantial effort. Moreover, the optimal parameters can change during the optimization process. It is therefore of great interest to design mechanisms that dynamically choose best-possible parameters. An example of such an update mechanism is the one-fifth success rule for step-size adaptation in evolution strategies. While in continuous domains this principle is well understood also from a mathematical point of view, no comparable theory is available for problems in discrete domains. In this work we show that the one-fifth success rule can be effective also in discrete settings. We consider the $(1+(\lambda,\lambda))$~GA proposed in [Doerr/Doerr/Ebel: From black-box complexity to designing new genetic algorithms, TCS 2015]. We prove that if its population size is chosen according to the one-fifth success rule, then the expected optimization time on \textsc{OneMax} is linear. This is better than what \emph{any} static population size $\lambda$ can achieve and is asymptotically optimal also among all adaptive parameter choices.
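The following Python snippet is a minimal sketch of the $(1+(\lambda,\lambda))$~GA on OneMax with a one-fifth-style update of $\lambda$. It is our own illustration, not the paper's pseudocode; in particular the update factor F = 1.5, the cap of $\lambda$ at $n$, and the rounding of $\lambda$ are illustrative choices.

```python
import random

def one_plus_lambda_lambda_ga(n, F=1.5, max_evals=500_000, rng=random.Random(0)):
    """Sketch of the (1+(lambda,lambda)) GA maximizing OneMax on n bits,
    with lambda adapted by a one-fifth-style success rule."""
    onemax = lambda bits: sum(bits)
    x = [rng.randrange(2) for _ in range(n)]
    fx = onemax(x)
    lam = 1.0
    evals = 0
    while fx < n and evals < max_evals:
        L = max(1, round(lam))
        # Mutation phase: draw one strength ell ~ Bin(n, lambda/n),
        # then create L offspring, each flipping ell random bits.
        ell = sum(rng.random() < lam / n for _ in range(n))
        best_mut, best_mut_f = x, -1
        for _ in range(L):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] = 1 - y[i]
            evals += 1
            fy = onemax(y)
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: L offspring taking each bit from the mutation
        # winner with probability 1/lambda, otherwise from the parent x.
        best_cross, best_cross_f = x, fx
        for _ in range(L):
            z = [best_mut[i] if rng.random() < 1 / lam else x[i] for i in range(n)]
            evals += 1
            fz = onemax(z)
            if fz > best_cross_f:
                best_cross, best_cross_f = z, fz
        # One-fifth-style rule: shrink lambda on success, grow it otherwise.
        if best_cross_f > fx:
            x, fx = best_cross, best_cross_f
            lam = max(1.0, lam / F)
        else:
            lam = min(n, lam * F ** 0.25)
    return fx, evals
```

Running one_plus_lambda_lambda_ga(100) reaches the optimum fx = 100 well within the evaluation budget; the intuition is that $\lambda$ grows when progress stalls (making big jumps plus repair via crossover more likely) and shrinks again after a success.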

Abstract:
Understanding how crossover works is still one of the big challenges in evolutionary computation research, and making our understanding precise and proven by mathematical means might be an even bigger one. As one of the few examples where crossover is provably useful, the $(1+(\lambda, \lambda))$ Genetic Algorithm (GA) was proposed recently in [Doerr, Doerr, Ebel: TCS 2015]. Using the fitness level method, the expected optimization time on general OneMax functions was analyzed and an $O(\max\{n\log(n)/\lambda, \lambda n\})$ bound was proven for any offspring population size $\lambda \in [1..n]$. We improve this work in several ways, leading to sharper bounds and a better understanding of how the use of crossover speeds up the runtime of this algorithm. We first improve the upper bound on the runtime to $O(\max\{n\log(n)/\lambda, n\lambda \log\log(\lambda)/\log(\lambda)\})$. This improvement is made possible by observing that in the parallel generation of $\lambda$ offspring via crossover (but not mutation), the best of these is often better than the expected value, and hence several fitness levels can be gained in one iteration. We then present the first lower bound for this problem. It matches our upper bound for all values of $\lambda$. This allows us to determine the asymptotically optimal value of the population size. It is $\lambda = \Theta(\sqrt{\log(n)\log\log(n)/\log\log\log(n)})$, which gives an optimization time of $\Theta(n \sqrt{\log(n)\log\log\log(n)/\log\log(n)})$. Hence the improved runtime analysis gives a better runtime guarantee along with a better suggestion for the parameter $\lambda$. We finally give a tail bound for the upper tail of the runtime distribution, which shows that the actual runtime exceeds our runtime guarantee by a factor of $(1+\delta)$ with probability $O((n/\lambda^2)^{-\delta})$ only.
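As a rough sanity check on the stated optimal $\lambda$ (a heuristic balancing of the two terms of the upper bound, not the paper's proof), one can equate the two terms of the bound:

```latex
% Equate the two terms of O(max{ n log(n)/lambda , n lambda loglog(lambda)/log(lambda) }):
\frac{n\log n}{\lambda} = \frac{n\lambda\log\log\lambda}{\log\lambda}
\quad\Longleftrightarrow\quad
\lambda^{2} = \frac{\log n \cdot \log\lambda}{\log\log\lambda}.
% Since the optimal lambda is polylogarithmic in n, we have
% log(lambda) = Theta(loglog n) and loglog(lambda) = Theta(logloglog n), hence
\lambda = \Theta\!\left(\sqrt{\frac{\log n \,\log\log n}{\log\log\log n}}\right),
\qquad
\frac{n\log n}{\lambda} = \Theta\!\left(n\,\sqrt{\frac{\log n \,\log\log\log n}{\log\log n}}\right),
% which matches the optimal population size and runtime stated above.
```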