Abstract:
Most of the roughly 1,600 annual road-accident deaths in Chile are related to speeding. Photo radar, a speed-enforcement tool introduced in the mid-1990s, was abruptly and almost completely banned by a law enacted on the grounds that the devices were "cazabobos" (sucker traps), that is, that they were used essentially to raise revenue for private operators and municipalities rather than to reduce accidents. The aim of this paper is to analyze the causes and consequences of the elimination of photo radar in Chile. In particular, we test whether the evidence is more consistent with the hypothesis that photo radars were used as revenue traps or, on the contrary, that they constituted a tool for preventing and reducing accidents. We conclude that the evidence is more consistent with the second hypothesis, and that the end of photo-radar enforcement is generating substantial human losses from accidents. Depending on the method used to value the reduction in casualties, and extrapolating the evidence to just two municipalities in Santiago de Chile, we conclude that the elimination of photo radar generates net costs of between US$83 million and US$600 million in present value, and that a broader application would substantially increase the benefits of such a program.

Abstract:
Road crashes are a major source of transport externalities. Based on Jansson's road crashes model, this paper analyzes ways to internalize those externalities. Jansson's model is extended to analyze the effect of congestion and safety variables related to vehicle and infrastructure design on crash rates and severity. The model is then used to compute the cost of road crashes externalities in Chile.

Abstract:
Given a combinatorial decomposition for a counting problem, we resort to the simple scheme of approximating large numbers by floating-point representations in order to obtain efficient Fully Polynomial Time Approximation Schemes (FPTASes) for it. The number of bits employed for the exponent and the mantissa will depend on the error parameter $0 < \varepsilon \leq 1$ and on the characteristics of the problem. Accordingly, we propose the first FPTASes with $1 \pm \varepsilon$ relative error for counting and generating uniformly at random a labeled DAG with a given number of vertices. This is accomplished starting from a classical recurrence for counting DAGs, whose values we approximate by floating-point numbers. After extending these results to other families of DAGs, we show how the same approach works also with problems where we are given a compact representation of a combinatorial ensemble and we are asked to count and sample elements from it. We employ here the floating-point approximation method to transform the classic pseudo-polynomial algorithm for counting 0/1 Knapsack solutions into a very simple FPTAS with $1 - \varepsilon$ relative error. Its complexity improves upon the recent result (\v{S}tefankovi\v{c} et al., SIAM J. Comput., 2012), and, when $\varepsilon^{-1} = \Omega(n)$, also upon the best-known randomized algorithm (Dyer, STOC, 2003). To show the versatility of this technique, we also apply it to a recent generalization of the problem of counting 0/1 Knapsack solutions in an arc-weighted DAG, obtaining a faster and simpler FPTAS than the existing one.
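The 0/1 Knapsack FPTAS described above starts from the classic pseudo-polynomial dynamic program for counting solutions, then stores each count as a short floating-point (mantissa plus exponent) approximation instead of an exact big integer. The sketch below shows only that exact starting-point DP, with illustrative names; the rounding step that turns it into an FPTAS is omitted.

```python
# Exact pseudo-polynomial DP for counting 0/1 Knapsack solutions,
# i.e. subsets of items whose total weight fits in the capacity.
# The paper's FPTAS replaces these exact counts with floating-point
# approximations whose mantissa length depends on epsilon.

def count_knapsack_solutions(weights, capacity):
    """Count subsets of `weights` with total weight <= capacity."""
    # dp[c] = number of subsets of the items seen so far with total weight c
    dp = [0] * (capacity + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # iterate downwards so each item is used at most once
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]  # either skip or take the current item
    return sum(dp)  # every achievable weight 0..capacity is a solution

# Subsets of {1, 2, 3} with weight <= 4: {}, {1}, {2}, {3}, {1,2}, {1,3}
print(count_knapsack_solutions([1, 2, 3], 4))  # -> 6
```

Running time is O(n * capacity) with O(capacity) space; the counts themselves can have up to n bits, which is exactly the cost the floating-point approximation removes.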

Abstract:
We propose a scheme for preparing and stabilizing the Pfaffian state with high fidelity in rapidly rotating 2D traps containing a small number of bosons. The goal is achieved by strongly increasing 3-body loss processes, which suppress superpositions of three particles while permitting pairing. This filtering mechanism gives rise to reasonably small losses if the system is initialized with the right angular momentum. We discuss some methods for tuning 3-body interactions independently of 2-body collisions.

Abstract:
A graph $G$ is said to be a `set graph' if it admits an acyclic orientation that is also `extensional', in the sense that the out-neighborhoods of its vertices are pairwise distinct. Equivalently, a set graph is the underlying graph of the digraph representation of a hereditarily finite set. In this paper, we continue the study of set graphs and related topics, focusing on computational complexity aspects. We prove that set graph recognition is NP-complete, even when the input is restricted to bipartite graphs with exactly two leaves. The problem remains NP-complete if, in addition, we require that the extensional acyclic orientation be also `slim', that is, that the digraph obtained by removing any arc from it is not extensional. We also show that the counting variants of the above problems are #P-complete, and prove similar complexity results for problems related to a generalization of extensional acyclic digraphs, the so-called `hyper-extensional digraphs', which were proposed by Aczel to describe hypersets. Our proofs are based on reductions from variants of the Hamiltonian Path problem. We also consider a variant of the well-known notion of a separating code in a digraph, the so-called `open-out-separating code', and show that it is NP-complete to determine whether an input extensional acyclic digraph contains an open-out-separating code of given size.
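While recognizing set graphs is NP-complete, checking whether a given acyclic orientation is extensional is straightforward: one only has to verify that the out-neighborhood sets are pairwise distinct. A minimal sketch (an illustrative helper, not an algorithm from the paper):

```python
# Test extensionality of a digraph given as a mapping from each
# vertex to its set of out-neighbors: the orientation is extensional
# iff no two vertices have the same out-neighborhood.

def is_extensional(out_neighbors):
    """out_neighbors: dict mapping each vertex to a set of out-neighbors."""
    seen = set()
    for v, nbrs in out_neighbors.items():
        key = frozenset(nbrs)  # hashable, order-independent
        if key in seen:        # two vertices share an out-neighborhood
            return False
        seen.add(key)
    return True

# Digraph representation of the hereditarily finite set {{}, {{}}}:
# vertex 0 plays the empty set, 1 plays {0}, 2 plays {0, 1}.
print(is_extensional({0: set(), 1: {0}, 2: {0, 1}}))  # -> True
print(is_extensional({0: set(), 1: set()}))           # -> False
```

The hardness results above thus concern finding such an orientation for an undirected input graph, not verifying one.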

Abstract:
The thin red giant branch (RGB) of the Carina dwarf spheroidal galaxy appears at first sight quite puzzling and seemingly in contrast with the presence of several distinct bursts of star formation. In this Letter, we provide a measurement of the color spread of red giant stars in Carina based on new BVI wide-field observations, and model the width of the RGB by means of synthetic color-magnitude diagrams. The measured color spread, sigma(V-I) = 0.021 +/- 0.005, is quite naturally accounted for by the star-formation history of the galaxy. The thin RGB appears to be essentially related to the limited age range of its dominant stellar populations, with no need for a metallicity dispersion at a given age. This result is relatively robust with respect to changes in the assumed age-metallicity relation, as long as the mean metallicity over the galaxy lifetime matches the observed value ([Fe/H] = -1.91 +/- 0.12 after correction for the age effects). This analysis of photometric data also sets some constraints on the chemical evolution of Carina by indicating that the chemical abundance of the interstellar medium in Carina remained low throughout each episode of star formation even though these episodes occurred over many Gyr.

Abstract:
(Abridged) We present the spectroscopy of red giant stars in the dwarf spheroidal galaxy LeoI, aimed at further constraining its chemical enrichment history. Intermediate-resolution spectroscopy in the CaII triplet spectral region was obtained for 54 stars in LeoI using FORS2 at the ESO Very Large Telescope. The equivalent widths of the CaII triplet lines were used to derive the metallicities of the target stars on the [Fe/H] scale of Carretta & Gratton, as well as on a scale tied to the global metal abundance, [M/H]. The metallicity distribution function for LeoI stars is confirmed to be very narrow, with mean value [M/H]~-1.2 and intrinsic dispersion sigma_[M/H]=0.08. We find a few metal-poor stars (whose metallicity values depend on the adopted extrapolation of the existing calibrations), but in no case are stars more metal-poor than [Fe/H]=-2.6. Our measurements provide a hint of a shallow metallicity gradient of -0.27 dex/kpc among LeoI red giants. By combining the metallicities of the target stars with their photometric data, we provide age estimates and an age-metallicity relation for a subset of red giant stars in LeoI. Our age estimates indicate a rapid initial enrichment, a slowly rising metal abundance, and an increase of ~0.2 dex in the last few Gyr.

Abstract:
We present deep $BVI$ observations of the dwarf irregular galaxy UKS1927-177 in Sagittarius. Statistically cleaned $V$, $(B-I)$ CMDs clearly display the key evolutionary features in this galaxy. Previously detected C stars are located in the CMDs and shown to be variable, thus confirming the presence of a significant upper-AGB intermediate-age population. A group of likely red supergiants is also identified, whose magnitudes and colors are consistent with a 30 Myr old burst of star formation. The observed colors of both blue and red stars in SagDIG are best explained by introducing a differential reddening scenario in which internal dust extinction affects the star forming regions. Adopting a low reddening for the red giants, $E(B-V) = 0.07 \pm 0.02$, gives [Fe/H]=$-2.1 \pm 0.2$ for the mean stellar metallicity, a value consistent with the [O/H] abundance measured in the HII regions. This revised metallicity, which is in accord with the trend of metallicity against luminosity for dwarf irregular galaxies, is indicative of a ``normal'', although metal-poor, dIrr galaxy. A quantitative description is given of the spatial distribution of stars in different age intervals, in comparison with the distribution of the neutral hydrogen. We find that the youngest stars are located near the major peaks of emission on the HI shell, whereas the red giants and intermediate-age C stars define an extended halo or disk with scale length comparable to the size of the hydrogen cloud. The relationship between the distribution of ISM and star formation is briefly discussed.

Abstract:
RNA-Seq technology offers new high-throughput ways for transcript identification and quantification based on short reads, and has recently attracted great interest. The problem is usually modeled by a weighted splicing graph whose nodes stand for exons and whose edges stand for split alignments to the exons. The task consists of finding a number of paths, together with their expression levels, which optimally explain the coverages of the graph under various fitness functions, such as the least sum of squares. In (Tomescu et al. RECOMB-seq 2013) we showed that under general fitness functions, if we allow a polynomially bounded number of paths in an optimal solution, this problem can be solved in polynomial time by a reduction to a min-cost flow program. In this paper we further refine this problem by asking for a bounded number k of paths that optimally explain the splicing graph. This problem becomes NP-hard in the strong sense, but we give a fast combinatorial algorithm based on dynamic programming for it. In order to obtain a practical tool, we implement three optimizations and heuristics, which achieve better performance on real data, and similar or better performance on simulated data, than the state-of-the-art tools Cufflinks, IsoLasso and SLIDE. Our tool, called Traph, is available at http://www.cs.helsinki.fi/gsa/traph/
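The least-sum-of-squares fitness mentioned above can be stated concretely: the predicted coverage of each edge is the sum of the expression levels of the paths that use it, and the fitness of a candidate solution is the sum of squared deviations from the observed coverages. A minimal illustrative sketch (not Traph's algorithm; all names here are hypothetical):

```python
# Evaluate the least-sum-of-squares fitness of a candidate set of
# paths and expression levels against observed edge coverages in a
# splicing graph. Edges are (u, v) pairs; a path is a list of edges.

def squared_error(edges, paths, levels, coverage):
    """Sum over edges of (observed - predicted coverage)^2."""
    err = 0.0
    for e in edges:
        # predicted coverage = total expression of paths using edge e
        pred = sum(x for p, x in zip(paths, levels) if e in p)
        err += (coverage[e] - pred) ** 2
    return err

# Tiny splicing graph: two transcripts sharing the first edge.
edges = [('a', 'b'), ('b', 'c'), ('b', 'd')]
paths = [[('a', 'b'), ('b', 'c')], [('a', 'b'), ('b', 'd')]]
coverage = {('a', 'b'): 10.0, ('b', 'c'): 4.0, ('b', 'd'): 6.0}

print(squared_error(edges, paths, [4.0, 6.0], coverage))  # -> 0.0 (perfect fit)
print(squared_error(edges, paths, [5.0, 5.0], coverage))  # -> 2.0
```

The hard part, which the paper's dynamic program addresses, is searching over which k paths to use, not evaluating a fixed candidate as above.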