Abstract:
Several techniques and tools have been developed for the verification of properties expressed as Horn clauses with constraints over a background theory (CHC). Current CHC verification tools implement intricate algorithms and are often limited to certain subclasses of CHC problems. Our aim in this work is to investigate the use of a combination of off-the-shelf techniques from the literature on the analysis and transformation of Constraint Logic Programs (CLPs) to solve challenging CHC verification problems. We find that many problems can be solved using a combination of tools based on well-known techniques from abstract interpretation, semantics-preserving transformations, program specialisation and query-answer transformations. This gives insight into the design of automatic, more general CHC verification tools built from a library of components.

Abstract:
Energy efficiency significantly influences the user experience of battery-driven devices such as smartphones and tablets. The goal of an energy model of source code is to lay a foundation for energy-saving techniques from architecture to software development. The challenge is linking hardware energy consumption to high-level application source code while accounting for the complex run-time context, such as thread scheduling, user inputs and the abstraction of the virtual machine. Traditional energy modeling is bottom-up, but this approach faces obstacles when software consists of a number of abstraction layers. In this paper, we propose a top-down view. We focus on identifying valuable information from the source code, which results in the idea of utilizing an intermediate representation, the "energy operation", to capture the energy characteristics. The experimental results show that an energy model at such a high level can reduce the error margin to within 10% and enable energy breakdown at the function level, which helps developers understand the energy-related features of the code.

Abstract:
We present an approach to constrained Horn clause (CHC) verification combining three techniques: abstract interpretation over a domain of convex polyhedra, specialisation of the constraints in CHCs using abstract interpretation of query-answer transformed clauses, and refinement by splitting predicates. The purpose of the work is to investigate how analysis and transformation tools developed for constraint logic programs (CLP) can be applied to the Horn clause verification problem. Abstract interpretation over convex polyhedra is capable of deriving sophisticated invariants and when used in conjunction with specialisation for propagating constraints it can frequently solve challenging verification problems. This is a contribution in itself, but refinement is needed when it fails, and the question of how to refine convex polyhedral analyses has not been studied much. We present a refinement technique based on interpolants derived from a counterexample trace; these are used to drive a property-based specialisation that splits predicates, leading in turn to more precise convex polyhedral analyses. The process of specialisation, analysis and splitting can be repeated, in a manner similar to the CEGAR and iterative specialisation approaches.

Abstract:
In this paper we investigate the use of the concept of tree dimension in Horn clause analysis and verification. The dimension of a tree is a measure of its non-linearity; for example, a list of any length has dimension zero, while a complete binary tree has dimension equal to its height. We apply this concept to trees corresponding to Horn clause derivations. A given set of Horn clauses P can be transformed into a new set of clauses P≤k whose derivation trees have dimension at most k; similarly, a set of clauses P>k can be obtained from P whose derivation trees have dimension at least k + 1. In order to prove some property of all derivations of P, we systematically apply these transformations, for various values of k, to decompose the proof into separate proofs for P≤k and P>k (which could be executed in parallel). We show some preliminary results indicating that decomposition by tree dimension is a potentially useful proof technique. We also investigate the use of existing automatic proof tools to prove some interesting properties of the dimensions of feasible derivation trees of a given program.
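The dimension measure described above (also known as the Horton-Strahler number) admits a short recursive definition. The following Python sketch is illustrative only and is not part of the paper's tooling; trees are represented simply as nested lists of child subtrees.

```python
def dimension(tree):
    """Tree dimension: 0 for a leaf; for an internal node, the maximum
    child dimension, plus 1 if that maximum is attained by two or more
    children (the Horton-Strahler number)."""
    if not tree:                        # a leaf has no children
        return 0
    dims = sorted((dimension(c) for c in tree), reverse=True)
    if len(dims) >= 2 and dims[0] == dims[1]:
        return dims[0] + 1
    return dims[0]

def chain(n):
    """A linear tree of length n, like the derivation tree of a
    clause with a single recursive body atom (list processing)."""
    return [] if n == 0 else [chain(n - 1)]

def complete(h):
    """A complete binary tree of height h."""
    return [] if h == 0 else [complete(h - 1), complete(h - 1)]

print(dimension(chain(10)))    # → 0: a "list" of any length has dimension 0
print(dimension(complete(5)))  # → 5: dimension equals the height
```

The two examples reproduce the abstract's claims: a purely linear derivation has dimension zero regardless of length, while each extra level of balanced branching raises the dimension by one.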

Abstract:
Determinisation is an important concept in the theory of finite tree automata. However, the complexity of the textbook procedure for determinisation is such that it is not viewed as being a practical procedure for manipulating tree automata, even fairly small ones. The computational problems are exacerbated when an automaton has to be both determinised and completed, for instance to compute the complement of an automaton. In this paper we develop an algorithm for determinisation and completion of finite tree automata, whose worst-case complexity remains unchanged, but which performs dramatically better than existing algorithms in practice. The algorithm is developed in stages by optimising the textbook algorithm. A critical aspect of the algorithm is that the transitions of the determinised automaton are generated in a potentially very compact form called product form, which can often be used directly when manipulating the determinised automaton. The paper contains an experimental evaluation of the algorithm on a large set of tree automata examples. Applications of the algorithm include static analysis of term rewriting systems and logic programs, and checking containment of languages defined by tree automata such as XML schemata.
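For concreteness, the textbook subset construction that the paper optimises can be sketched in a few lines of Python. This naive version enumerates all tuples of subset-states explicitly, which is exactly the blow-up the product-form representation avoids; the transition encoding is my own, not the paper's.

```python
from itertools import product

def determinise(transitions):
    """Naive bottom-up subset construction for a finite tree automaton.
    `transitions` maps (symbol, tuple_of_argument_states) to a set of
    target states (nondeterministic).  Returns (states, dtrans), where
    states are frozensets of original states and dtrans is the
    deterministic transition map over those frozensets."""
    by_sym = {}                          # group rules by (symbol, arity)
    for (f, args), targets in transitions.items():
        by_sym.setdefault((f, len(args)), []).append((args, targets))
    states, dtrans, changed = set(), {}, True
    while changed:                       # iterate to a fixpoint
        changed = False
        for (f, n), rules in by_sym.items():
            # for constants (n == 0) this yields the single empty tuple
            for combo in product(sorted(states, key=sorted), repeat=n):
                target = frozenset(
                    q for args, targets in rules
                    if all(a in s for a, s in zip(args, combo))
                    for q in targets)
                if target and (f, combo) not in dtrans:
                    dtrans[(f, combo)] = target
                    if target not in states:
                        states.add(target)
                        changed = True
    return states, dtrans

# Example: the constant a can be read as q0 or q1; f(q0) -> q0, f(q1) -> qf.
nfa = {('a', ()): {'q0', 'q1'},
       ('f', ('q0',)): {'q0'},
       ('f', ('q1',)): {'qf'}}
states, dtrans = determinise(nfa)
print(sorted(sorted(s) for s in states))  # → [['q0'], ['q0', 'q1'], ['q0', 'qf']]
```

Even on this tiny automaton the deterministic transition table grows with the number of tuples of subset-states, which illustrates why generating transitions in compact product form matters in practice.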

Abstract:
In people with two or more cancer types, the probability that a specific type is diagnosed was determined as the number of diagnoses for that cancer type divided by the total number of cancer diagnoses. If two types of cancer occur independently of one another, then the probability that someone will develop both cancers by chance is the product of the individual probabilities for each type. The expected number of people with both cancers is the number of people at risk multiplied by the separate probabilities for each cancer. We performed the analysis on records of cancer diagnoses in British Columbia, Canada between 1970 and 2004.

There were 28,159 people with records of multiple primary cancers between 1970 and 2004, including 1,492 people with between three and seven diagnoses. Among both men and women, the combinations of esophageal cancer with melanoma, and kidney cancer with oral cancer, were observed more than twice as often as expected.

Our analysis suggests there are several pairs of primary cancers that might be related by a shared etiological factor. We think that our method is more appropriate than others when multiple diagnoses of primary cancer are unlikely to be the result of therapeutic or diagnostic procedures.

There are several reasons that someone might be diagnosed with cancer at more than one anatomic site. First, a new cancer might be caused by the therapy for a previous cancer: the risk of breast cancer is significantly increased among women who were treated for Hodgkin Disease with radiation [1]. Second, cancer might occur at multiple sites because a factor is associated with cancer at each site: germline mutations in mismatch repair genes can produce susceptibility to cancers of the colorectum, ovary, stomach, small bowel, upper uroepithelial tract, hepatobiliary tract and brain [2]. Likewise, cigarette smoking affects the risk of several cancer types. Third, a different cancer type might be diagnosed because of diagnostic or surveillance procedures.
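The expected-count calculation described above is simple enough to state as code. In the following Python sketch, all counts are invented for illustration; they are not the British Columbia registry figures.

```python
def expected_both(n_at_risk, dx_a, dx_b, total_dx):
    """Expected number of people with both cancer types under
    independence: the number of people at risk times the product of
    the two diagnosis probabilities, each probability being that
    type's share of all cancer diagnoses."""
    return n_at_risk * (dx_a / total_dx) * (dx_b / total_dx)

# Invented illustrative counts (not registry data):
expected = expected_both(n_at_risk=28_159, dx_a=900, dx_b=1_200,
                         total_dx=60_000)
observed = 15
# An observed/expected ratio above 2 would flag the pair, as for the
# esophageal-melanoma and kidney-oral combinations in the study.
print(round(observed / expected, 2))
```

The observed/expected ratio is the quantity the abstract reports: pairs seen "more than twice as often as expected" have a ratio above 2.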

Abstract:
The title compound, C12H8F2N2O, crystallizes with two independent molecules in the asymmetric unit. The independent molecules differ slightly in conformation; the dihedral angles between the benzene and pyridine rings are 51.58 (5) and 49.97 (4)°. In the crystal structure, molecules aggregate via N—H...Npyridine interactions as hydrogen-bonded dimers with the structural motif R22(8), and these dimers are linked via C—H...O interactions to form a supramolecular chain.

Abstract:
Cefepime, ceftriaxone, imipenem and piperacillin-tazobactam MICs were determined for 74,394 Gram-negative bacilli obtained from ICU patients with various infections in the US between 1993 and 2004. Results were grouped into four 3-year periods. The predicted cumulative fraction of response (CFR) was estimated based on patient-derived pharmacokinetic values and Monte Carlo simulation. Trends in CFR over the four study periods were assessed using the Cochran-Armitage test. The primary analysis included all organisms combined; Pseudomonas aeruginosa and Acinetobacter species were also evaluated individually.

In the primary analysis, imipenem 500 mg q6h showed CFRs from 87% to 90% across all four study periods, with a trend toward slightly improved bactericidal target attainment (p < 0.01). CFRs for cefepime 2 g q12h and piperacillin-tazobactam 4.5 g q6h both declined by 2% (p < 0.01 and p < 0.05, respectively), reflecting upward shifts in the underlying MIC distributions. Ceftriaxone had <52% CFR for all regimens in all periods, with no significant trend. Against P. aeruginosa, significant declines in CFR were seen for (range, p-value): imipenem 1 g q8h (82%–79%, p < 0.01), cefepime 1 g q12h (70%–67%, p < 0.01), cefepime 2 g q12h (84%–82%, p < 0.05), piperacillin-tazobactam 3.375 g q6h (76%–73%, p < 0.01), piperacillin-tazobactam 4.5 g q8h (71%–68%, p < 0.01), and piperacillin-tazobactam 4.5 g q6h (80%–77%, p < 0.01). Against Acinetobacter spp., all regimens of imipenem, cefepime and piperacillin-tazobactam showed significant declines in CFR over time (p < 0.01).

Our observations suggest that as a result of increasing antimicrobial resistance among ICU pathogens in the US, drug effectiveness, assessed as a function of individual agents' ability to attain pharmacodynamic targets, has declined, especially against P. aeruginosa and Acinetobacter spp. Cefepime 2 g q8h and imipenem were the most potent agents against these species, respectively. More aggressive dosing of all of
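As a rough illustration of how a CFR is computed, the following Python sketch weights a Monte-Carlo-estimated probability of target attainment (PTA) at each MIC by that MIC's frequency among isolates. The exposure model, the 50% target, and the MIC frequencies are all invented stand-ins, not the study's patient-derived pharmacokinetic data.

```python
import random

def pta(mic, n=20_000, seed=1):
    """Probability of target attainment at a given MIC, by Monte Carlo:
    sample a patient's exposure index (a lognormal stand-in for %fT>MIC
    with invented parameters) and count how often it meets a 50% target."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # invented exposure model: higher MIC -> less time above the MIC
        exposure = rng.lognormvariate(4.0, 0.4) / (1 + mic)
        hits += exposure >= 50
    return hits / n

def cfr(mic_freqs):
    """Cumulative fraction of response: PTA at each MIC weighted by the
    fraction of isolates with that MIC."""
    return sum(freq * pta(mic) for mic, freq in mic_freqs.items())

mic_freqs = {0.25: 0.4, 1.0: 0.3, 4.0: 0.2, 16.0: 0.1}  # invented
print(round(cfr(mic_freqs), 3))
```

Under this scheme, an upward shift in the MIC distribution (more weight on high MICs) directly lowers the CFR, which is the mechanism behind the declines reported above.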

Abstract:
This volume contains the papers presented at the 19th Workshop on Logic-based methods in Programming Environments (WLPE'09), which was held in Pasadena, USA, on July 14th, 2009. WLPE aims at providing an informal meeting for researchers working on logic-based methods and tools which support program development and analysis. This year, we have continued and consolidated the shift in focus from environmental tools for logic programming to logic-based environmental tools for programming in general, so that the workshop may be of interest to a wider scientific community. All the papers submitted to WLPE'09 went through a careful process of peer review, with at least three reviews for each paper and a subsequent in-depth discussion in the Program Committee.

Abstract:
There are various kinds of type analysis of logic programs. These include, for example, inference of types that describe an over-approximation of the success set of a program, inference of well-typings, and abstractions based on given types. Analyses can be descriptive or prescriptive or a mixture of both, and they can be goal-dependent or goal-independent. We describe a prototype tool, accessible from a web browser, that allows various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program clauses, and to highlight ill-typed program constructs, empty types or other type anomalies. Secondly, the tool allows the combination of various styles of analysis. For example, a descriptive regular type can be automatically inferred for a given program, and that type can then be used to generate the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type.