Abstract:
Random graphs, where the connections between nodes are considered random variables, have wide applicability in the social sciences. Exponential-family Random Graph Models (ERGM) have shown themselves to be a useful class of models for representing complex social phenomena. We generalize ERGM by also modeling nodal attributes as random variates, thus creating a random model of the full network, which we call Exponential-family Random Network Models (ERNM). We demonstrate how this framework allows a new formulation for logistic regression in network data. We develop likelihood-based inference for the model and an MCMC algorithm to implement it. This new model formulation is used to analyze a peer social network from the National Longitudinal Study of Adolescent Health. We model the relationship between substance use and friendship relations, and show how the results differ from the standard use of logistic regression on network data.
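The "standard use of logistic regression on network data" that the ERNM results are contrasted with can be sketched as an ordinary per-node regression that treats the nodes as independent observations. A minimal stdlib-only sketch on hypothetical toy data (not the Add Health data, and not the ERNM methodology itself):

```python
import math

# Hypothetical toy friendship network and substance-use indicators
# (illustrative only -- NOT the Add Health data analyzed in the paper).
friends = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
y = [1, 1, 1, 0, 0, 0]  # y[i] = 1 if node i uses the substance

# Covariate: proportion of node i's friends who use the substance.
x = [sum(y[j] for j in friends[i]) / len(friends[i]) for i in sorted(friends)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit logit P(y=1 | x) = b0 + b1*x by gradient ascent on the log-likelihood.
# Treating the nodes as independent observations is exactly the assumption
# that a joint model of ties and attributes is designed to relax.
b0 = b1 = 0.0
for _ in range(200):
    g0 = sum(y[i] - sigmoid(b0 + b1 * x[i]) for i in range(len(y)))
    g1 = sum((y[i] - sigmoid(b0 + b1 * x[i])) * x[i] for i in range(len(y)))
    b0 += 0.1 * g0
    b1 += 0.1 * g1

print(b1 > 0)  # positive slope: use rises with friends' use in this toy data
```

Because friendships and attributes are generated jointly, the independence assumption behind this regression is suspect on real network data, which is the motivation for the joint ERNM formulation.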

Abstract:
Exponential-family random network (ERN) models specify a joint representation of both the dyads of a network and nodal characteristics. This class of models allows the nodal characteristics to be modelled as stochastic processes, expanding the range and realism of exponential-family approaches to network modelling. In this paper we develop a theory of inference for ERN models when only part of the network is observed, as well as specific methodology for missing data, including non-ignorable mechanisms for network-based sampling designs and for latent class models. In particular, we consider data collected via contact tracing, of considerable importance to infectious disease epidemiology and public health.

Abstract:
While R has proven itself to be a powerful and flexible tool for data exploration and analysis, it lacks the ease of use present in other software, such as SPSS and Minitab. An easy-to-use graphical user interface (GUI) can help new users accomplish tasks that would otherwise be out of their reach, and improve the efficiency of expert users by replacing fifty keystrokes with five mouse clicks. With this in mind, Deducer presents dialogs that are understandable to the beginner, yet contain all (or most) of the options that an experienced statistician performing the same task would want. An Excel-like spreadsheet is included for easy data viewing and editing. Deducer is based on Java's Swing GUI library and can be used on any common operating system. The GUI is independent of the specific R console and can easily be used by calling a text-based menu system. Graphical menus are provided for the JGR console and the Windows R GUI.

Abstract:
Human Papillomavirus (HPV) infection is the main cause of cervical cancers and cervical intraepithelial neoplasias (CIN) worldwide. Consequently, it would be useful to evaluate HPV testing to screen for cervical cancer. The recently developed second-generation Hybrid Capture (HCA II) test is a non-radioactive, relatively rapid, liquid hybridization assay designed to detect 18 HPV types, divided into high-risk and low-risk groups. We evaluated 1055 women for HPV infection with the HCA II test. Five hundred and ten (48.3%) of these women had HPV infection; of these, 60 (11.8%) had low-risk HPV DNA only, 269 (52.7%) had high-risk HPV types only, and 181 (35.5%) had types from both groups. Hence, 450 women (88.2%) in this HPV-infected group had at least one high-risk HPV type and were therefore considered to be at high risk for cancer. Among the group with Papanicolaou (Pap) test results, the overall prevalence of HPV DNA was 58.4%. Significant differences in HPV infection of the cervix were detected between Pap I (normal smears) and Pap IV (carcinomas) (p<0.0001). Values of HPV viral load obtained for Pap I and SILs were significantly different, with an upward trend (p<0.0001), suggesting a positive correlation between high viral load values and risk of SIL. Because of the high costs of the HCA II test, its use for routine cervical mass screening cannot be recommended in poor countries. Nevertheless, it is a useful tool when combined with cytology for diagnosing high-risk infections in apparently normal tissues. Use of this technique could help reduce the risk of cancer.
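The reported percentages can be recomputed directly from the counts in the text; note that the within-group figures are shares of the 510 HPV-positive women, not of all 1055 women tested:

```python
# Recomputing the abstract's reported percentages (counts taken from the text).
total_tested = 1055
hpv_positive = 510
low_risk_only, high_risk_only, both_groups = 60, 269, 181

# The three subgroups partition the HPV-positive women.
assert low_risk_only + high_risk_only + both_groups == hpv_positive

def pct(part, whole):
    return round(100.0 * part / whole, 1)

print(pct(hpv_positive, total_tested))                  # 48.3
print(pct(low_risk_only, hpv_positive))                 # 11.8
print(pct(high_risk_only, hpv_positive))                # 52.7
print(pct(both_groups, hpv_positive))                   # 35.5
print(pct(high_risk_only + both_groups, hpv_positive))  # 88.2
```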

Abstract:
The Berezinskii-Kosterlitz-Thouless mechanism, in which a phase transition is mediated by the proliferation of topological defects, governs the critical behaviour of a wide range of equilibrium two-dimensional systems with a continuous symmetry, ranging from superconducting thin films to two-dimensional Bose fluids, such as liquid helium and ultracold atoms. We show here that this phenomenon is not restricted to thermal equilibrium; rather, it survives more generally in a dissipative, highly non-equilibrium system driven into a steady state. By considering a light-matter superfluid of polaritons in the so-called optical parametric oscillator regime, we demonstrate that it indeed undergoes a vortex binding-unbinding phase transition. Yet, the exponent of the power-law decay of the first-order correlation function in the (algebraically) ordered phase can exceed the equilibrium upper limit -- a surprising occurrence, which has also been observed in a recent experiment. Thus, we demonstrate that the ordered phase is more robust against the quantum fluctuations of driven systems than against thermal fluctuations in equilibrium.
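For reference, the equilibrium result alluded to here (standard BKT notation, not taken from this abstract): below the transition the first-order correlation function decays algebraically, with an exponent that in equilibrium cannot exceed its value at the transition,

```latex
g^{(1)}(r) \equiv \langle \psi^{\dagger}(\mathbf{r})\,\psi(\mathbf{0}) \rangle \sim r^{-\alpha},
\qquad \alpha \le \alpha_{\mathrm{eq}}^{\max} = \tfrac{1}{4}.
```

The claim of the abstract is that the driven-dissipative system can exceed this bound on \(\alpha\) while still remaining in the algebraically ordered phase.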

Abstract:
We discuss the d=2 quantum O(2)xO(2) nonlinear sigma model as a low-energy theory of phase reconstruction near a quantum critical point. We first examine the evolution of the Berezinskii-Kosterlitz-Thouless (BKT) transition as the quantum limit is approached in the usual O(2) nonlinear sigma model. Then we go on to review results on the ground-state phase diagram of the O(2)xO(2) nonlinear sigma model, and on the behaviour of the O(2)xO(M) nonlinear sigma model with M>2 in the classical limit. Finally, we present a conjectured finite-temperature phase diagram for the quantum version of the latter model in the O(2)xO(2) case. The nature of the finite-temperature BKT-like transitions in the phase diagram is discussed, and avenues for further calculation are identified.

Abstract:
In the parameterized problem \textsc{MaxLin2-AA}[$k$], we are given a system with variables $x_1,...,x_n$ consisting of equations of the form $\prod_{i \in I}x_i = b$, where $x_i,b \in \{-1, 1\}$ and $I\subseteq [n],$ each equation has a positive integral weight, and we are to decide whether it is possible to simultaneously satisfy equations of total weight at least $W/2+k$, where $W$ is the total weight of all equations and $k$ is the parameter (if $k=0$, the possibility is assured). We show that \textsc{MaxLin2-AA}[$k$] has a kernel with at most $O(k^2\log k)$ variables and can be solved in time $2^{O(k\log k)}(nm)^{O(1)}$. This solves an open problem of Mahajan et al. (2006). The problem \textsc{Max-$r$-Lin2-AA}[$k,r$] is the same as \textsc{MaxLin2-AA}[$k$] with two differences: each equation has at most $r$ variables and $r$ is the second parameter. We prove a theorem on \textsc{Max-$r$-Lin2-AA}[$k,r$] which implies that \textsc{Max-$r$-Lin2-AA}[$k,r$] has a kernel with at most $(2k-1)r$ variables, improving on a number of results, including one by Kim and Williams (2010). The theorem also implies a lower bound on the maximum of a function $f:\ \{-1,1\}^n \rightarrow \mathbb{R}$ of degree $r$. We show the applicability of the lower bound by giving a new proof of the Edwards-Erd{\H o}s bound (each connected graph on $n$ vertices and $m$ edges has a bipartite subgraph with at least $m/2 + (n-1)/4$ edges) and obtaining a generalization.
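The decision question can be stated directly in code. A brute-force checker, exponential in $n$ and purely illustrative of the problem statement (the paper's contribution is the kernel and the FPT algorithm, not this search), might look like:

```python
import math
from itertools import product

# Brute-force check of the MaxLin2-AA[k] question (illustration only).
# Each equation is (I, b, w): the product of x_i over i in I equals b,
# with positive integral weight w.
def maxlin2_aa(n, equations, k):
    W = sum(w for _, _, w in equations)
    best = max(
        sum(w for I, b, w in equations if math.prod(x[i] for i in I) == b)
        for x in product([-1, 1], repeat=n)
    )
    return best >= W / 2 + k

# Two variables, three unit-weight equations: x0 = 1, x1 = -1, x0*x1 = 1.
eqs = [([0], 1, 1), ([1], -1, 1), ([0, 1], 1, 1)]
print(maxlin2_aa(2, eqs, 0))  # True: weight >= W/2 is always achievable
print(maxlin2_aa(2, eqs, 1))  # False: no assignment reaches weight 2.5
```

In the example, at most two of the three equations can hold simultaneously, so the best satisfiable weight is 2: above the average $W/2 = 1.5$ (consistent with the $k=0$ case always being a yes-instance) but below $W/2 + 1 = 2.5$.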

Abstract:
The Abstract Milling problem is a natural and quite general graph-theoretic model for geometric milling problems. Given a graph, one asks for a walk that covers all its vertices with a minimum number of turns, as specified in the graph model by a 0/1 turn-cost function f_x at each vertex x giving, for each ordered pair of edges (e, f) incident at x, the turn cost at x of a walk that enters the vertex on edge e and departs on edge f. We describe an initial study of the parameterized complexity of the problem. Our main positive result shows that Abstract Milling, parameterized by the number of turns, treewidth, and maximum degree, is fixed-parameter tractable. We also show that Abstract Milling, parameterized by (only) the number of turns and the pathwidth, is hard for W[1] -- one of the few parameterized intractability results for bounded pathwidth.
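The turn-cost bookkeeping in the definition can be sketched for a walk given as a sequence of edges, on a hypothetical toy instance (the problem itself asks to minimise this cost over walks covering all vertices):

```python
# Turn-cost of a walk under per-vertex 0/1 turn-cost functions.
# A walk is a list of edges (vertex pairs); turncost[x][(e, f)] is the cost
# at vertex x of entering on edge e and departing on edge f.
def walk_turn_cost(walk, turncost):
    total = 0
    for e, f in zip(walk, walk[1:]):
        (v,) = set(e) & set(f)  # the vertex shared by consecutive edges
        total += turncost[v][(e, f)]
    return total

# Path a-b-c-d: a turn at b (cost 1), going straight through c (cost 0).
walk = [("a", "b"), ("b", "c"), ("c", "d")]
turncost = {
    "b": {(("a", "b"), ("b", "c")): 1},
    "c": {(("b", "c"), ("c", "d")): 0},
}
print(walk_turn_cost(walk, turncost))  # 1
```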

Abstract:
The focus of warfare has shifted from the Industrial Age to the Information Age, as encapsulated by the term Network Enabled Capability. This emphasises information sharing, command decision-making, and the resultant plans made by commanders on the basis of that information. Planning by a higher-level military commander is, in most cases, regarded as such a difficult process to emulate that it is performed by a real commander during wargaming or during an experimental session based on a Synthetic Environment. Such an approach gives a rich representation of a small number of data points. However, a more complete analysis should allow search across a wider set of alternatives. This requires a closed-form version of such a simulation. In this paper, we discuss an approach to this problem, based on emulating the higher command process using a combination of game theory and genetic algorithms. This process was initially implemented in an exploratory research initiative, described here, and now forms the basis of the development of a “Mission Planner,” potentially applicable to all of our higher level closed-form simulation models.

1. Introduction

Since the Cold War period, the scenario context has widened considerably, reflecting the uncertainties of the future. Moreover, decision cycles for our customer community in the UK Ministry of Defence (MoD) have significantly shortened. The focus of war has also shifted from the Industrial Age of grinding attrition to the Information Age, as encapsulated in the term Network Enabled Capability (NEC). NEC is a key goal for the MoD, with the emphasis on command, the sharing of awareness among commanders, and the creation of agile effects. These influences together have led to the need for simulation models which are focussed on command rather than equipment, which can consider a large number of future contexts, and which can robustly examine a number of “what if” alternatives [1].
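As a toy illustration of the genetic-algorithm half of that combination (hypothetical scoring function and parameters; the actual Mission Planner couples the search with game theory over a closed-form simulation), a minimal sketch:

```python
import random

random.seed(0)

# A "plan" is a bit string of choices; score() stands in for the simulated
# outcome of executing the plan (hypothetical payoff, illustration only).
def score(plan):
    return sum(plan)

def evolve(pop_size=20, genes=10, generations=30, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)
        parents = pop[: pop_size // 2]        # selection: keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genes)  # single-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with small probability per gene
            child = [g ^ (random.random() < mutation) for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=score)

best = evolve()
print(score(best))
```

Searching over plans in this way is what makes a closed-form (commander-free) version of the simulation able to explore a wider set of alternatives than a small number of wargamed data points.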
In response to these demands, we have built a new generation of simulation models, with command (and commander decision-making in particular) at their core [2]. These span the range from the single environment (e.g., a land-only conflict at the tactical level) to the whole joint campaign, and across a number of coalition partners [3]. They also encompass both warfighting and peacekeeping operations. These models have been deliberately built as a hierarchy, feeding up from the tactical (or systems) level to the operational (or system-of-systems) level, to give enhanced analytical insight, as shown in Figure 1.

Figure 1: The hierarchy of key