Abstract:
We present a model for the optimal design of an online auction/store by a seller. The framework we use is that of a stochastic optimal control problem. In our setting, the seller wishes to maximize her average wealth level, and she can control her price per unit via her reputation level. The corresponding Hamilton-Jacobi-Bellman equation is analyzed for an introductory case. We then turn to an empirically justified model and present an introductory analysis. In both cases, {\em pulsing} advertising strategies are recovered for resource allocation. Further numerical and functional analysis will appear shortly.

Abstract:
We study the mixing time of random graphs in the $d$-dimensional toric unit cube $[0,1]^d$ generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights, drawn from some distribution. The connectivity threshold for GTGs is comparable to that of RGGs, essentially corresponding to a connectivity radius of $r=(\log n/n)^{1/d}$. However, the degree distributions at this threshold are quite different: in an RGG the degrees are essentially uniform, while GTGs have heterogeneous degrees that depend upon the weight distribution. Herein, we study the mixing times of random walks on $d$-dimensional GTGs near the connectivity threshold for $d \geq 2$. If the weight distribution function decays with $\mathbb{P}[W \geq x] = O(1/x^{d+\nu})$ for an arbitrarily small constant $\nu>0$, then the mixing time of the GTG is $\mixbound$. This matches the known mixing bounds for the $d$-dimensional RGG.

Abstract:
Bootstrap percolation has been used effectively to model phenomena as diverse as emergence of magnetism in materials, spread of infection, diffusion of software viruses in computer networks, adoption of new technologies, and emergence of collective action and cultural fads in human societies. It is defined on an (arbitrary) network of interacting agents whose state is determined by the state of their neighbors according to a threshold rule. In a typical setting, bootstrap percolation starts by random and independent "activation" of nodes with a fixed probability $p$, followed by a deterministic process for additional activations based on the density of active nodes in each neighborhood (at least $\theta$ activated nodes). Here, we study bootstrap percolation on random geometric graphs in the regime when the latter are (almost surely) connected. Random geometric graphs provide an appropriate model in settings where the neighborhood structure of each node is determined by geographical distance, as in wireless {\it ad hoc} and sensor networks, as well as in models of contagion. We derive bounds on the critical thresholds $p_c'(\theta), p_c''(\theta)$ such that for all $p > p_c''(\theta)$ full percolation takes place, whereas for $p < p_c'(\theta)$ it does not. We conclude with simulations that compare numerical thresholds with those obtained analytically.
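The activation process described in the abstract above can be sketched in a few lines of code. The following is a minimal illustrative simulation, not the paper's analysis: all parameter values and the function name are ours, and the graph sizes are far too small for threshold estimation.

```python
import math
import random

def bootstrap_percolation_rgg(n=200, r=0.15, theta=3, p=0.3, seed=1):
    """Simulate bootstrap percolation on a 2-D random geometric graph.

    Nodes are n uniform points in the unit square; an edge joins two
    points at Euclidean distance <= r. Each node is initially active
    independently with probability p; then, deterministically, any
    inactive node with at least theta active neighbours activates.
    Returns the final fraction of active nodes.
    """
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    # Build adjacency lists by pairwise distance comparison.
    nbrs = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                nbrs[i].append(j)
                nbrs[j].append(i)
    # Random independent initial activation with probability p.
    active = [rng.random() < p for _ in range(n)]
    changed = True
    while changed:  # deterministic rounds of the threshold rule
        changed = False
        for v in range(n):
            if not active[v] and sum(active[u] for u in nbrs[v]) >= theta:
                active[v] = True
                changed = True
    return sum(active) / n
```

Because activation is monotone, lowering $\theta$ with the same random seed (same points, same initial set) can only increase the final active fraction.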

Abstract:
We give a characterization of vertex-monotone properties with sharp thresholds in a Poisson random geometric graph or hypergraph. As an application we show that a geometric model of random $k$-SAT exhibits a sharp threshold for satisfiability.

Abstract:
We study bootstrap percolation with the threshold parameter $\theta \geq 2$ and the initial probability $p$ on infinite periodic trees that are defined as follows. Each node of a tree has degree selected from a finite predefined set of non-negative integers and, starting from any node, all nodes at the same graph distance from it have the same degree. We show the existence of the critical threshold $p_f(\theta) \in (0,1)$ such that with high probability, (i) if $p > p_f(\theta)$ then the periodic tree becomes fully active, while (ii) if $p < p_f(\theta)$ then the periodic tree does not become fully active. We also derive a system of recurrence equations for the critical threshold $p_f(\theta)$ and compute these numerically for a collection of periodic trees and various values of $\theta$, thus extending previous results for regular (homogeneous) trees.

Abstract:
Natural disasters or attacks may disrupt infrastructure networks on a vast scale. Parts of the damaged network are interdependent, making it difficult to plan and optimally execute the recovery operations. To study how interdependencies affect the recovery schedule, we introduce a new discrete optimization problem where the goal is to minimize the total cost of installing (or recovering) a given network. This cost is determined by the structure of the network and the sequence in which the nodes are installed. Namely, the cost of installing a node is a function of the number of its neighbors that have been installed before it. We analyze the natural case where the cost function is decreasing and convex, and provide bounds on the cost of the optimal solution. We also show that all sequences have the same cost when the cost function is linear, and provide an upper bound on the cost of a random solution for an Erd\H{o}s-R\'enyi random graph. Examining the computational complexity, we show that the problem is NP-hard when the cost function is arbitrary. Finally, we provide a formulation as an integer program, an exact dynamic programming algorithm, and a greedy heuristic that gives high-quality solutions.
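A natural greedy heuristic for this installation problem is sketched below. This is an illustrative sketch under our own assumptions (not necessarily the paper's heuristic): install next whichever node currently has the most already-installed neighbours, i.e. the lowest marginal cost under a decreasing cost function.

```python
def greedy_install(adj, cost):
    """Greedy installation order for a graph (illustrative sketch).

    adj: dict mapping node -> set of neighbours.
    cost: function k -> cost of installing a node that already has k
          installed neighbours (assumed decreasing in k).
    Returns (installation order, total cost).
    """
    installed = set()
    installed_nbrs = {v: 0 for v in adj}  # installed-neighbour counts
    order, total = [], 0.0
    while len(installed) < len(adj):
        # Pick the cheapest node to install next: the one with the
        # largest number of already-installed neighbours.
        v = max((u for u in adj if u not in installed),
                key=lambda u: installed_nbrs[u])
        total += cost(installed_nbrs[v])
        installed.add(v)
        order.append(v)
        for u in adj[v]:
            installed_nbrs[u] += 1
    return order, total
```

One property from the abstract is easy to check here: with a linear cost function $c(k) = a - bk$, the total cost is $an - bm$ for a graph with $n$ nodes and $m$ edges, regardless of the installation sequence.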

Abstract:
Viral spread on large graphs has many real-life applications such as malware propagation in computer networks and rumor (or misinformation) spread in Twitter-like online social networks. Although viral spread on large graphs has been intensively analyzed on classical models such as Susceptible-Infectious-Recovered, there still exists a deficit of effective methods in practice to contain epidemic spread once it passes a critical threshold. Against this backdrop, we explore methods of containing viral spread in large networks, with a focus on sparse random networks. The viral containment strategy is to partition a large network into small components and then to ensure the sanity of all messages delivered across different components. With such a defense mechanism in place, an epidemic spread starting from any node is limited to only those nodes belonging to the same component as the initial infection node. We establish both lower and upper bounds on the costs of inspecting inter-component messages. We further propose heuristic-based approaches to partition large input graphs into small components. Finally, we study the performance of our proposed algorithms under different network topologies and different edge weight models.
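The containment strategy above can be illustrated with a toy partitioner. The sketch below is our own minimal example, not the paper's heuristic: it grows BFS balls of bounded size and counts the inter-component edges, which correspond to the message channels that must be inspected.

```python
from collections import deque

def bfs_partition(adj, max_size):
    """Partition a graph into components of at most max_size nodes by
    growing BFS balls (illustrative sketch only).

    adj: dict mapping node -> iterable of neighbours.
    Returns (component id per node, number of inter-component edges,
    i.e. edges whose messages must be inspected).
    """
    comp = {}
    cid = 0
    for s in adj:
        if s in comp:
            continue
        q, size = deque([s]), 0
        while q and size < max_size:  # grow a ball up to max_size nodes
            v = q.popleft()
            if v in comp:
                continue
            comp[v] = cid
            size += 1
            for u in adj[v]:
                if u not in comp:
                    q.append(u)
        cid += 1
    # Each undirected edge is seen twice, hence the division by 2.
    cut = sum(1 for v in adj for u in adj[v] if comp[v] != comp[u]) // 2
    return comp, cut
```

On a 6-cycle with `max_size=3`, this yields two components of three nodes each and a cut of two edges; with unit edge weights, the inspection cost is the cut size.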

Abstract:
We present a deterministic exploration mechanism for sponsored search auctions, which enables the auctioneer to learn the relevance scores of advertisers, and allows advertisers to estimate the true value of clicks generated at the auction site. This exploratory mechanism deviates only minimally from the mechanism currently used by Google and Yahoo!, in the sense that it retains the same pricing rule, a similar ranking scheme, and a similar mathematical structure of payoffs. In particular, the estimations of the relevance scores and true values are achieved by providing lower-ranked advertisers a chance to obtain better slots. This allows the search engine to potentially test a new pool of advertisers, and correspondingly, enables new advertisers to estimate the value of clicks/leads generated via the auction. Both these quantities are unknown a priori, and their knowledge is necessary for the auction to operate efficiently. We show that such an exploration policy can be incorporated without any significant loss in revenue for the auctioneer. We compare the revenue of the new mechanism to that of the standard mechanism at their corresponding symmetric Nash equilibria and compute the cost of uncertainty, which is defined as the relative loss in expected revenue per impression. We also bound the loss in efficiency, as well as in user experience, due to exploration under the same solution concept (i.e. SNE). Thus the proposed exploration mechanism learns the relevance scores while incorporating the incentive constraints from the advertisers, who are selfish and trying to maximize their own profits; the exploration is therefore essentially achieved via mechanism design. We also discuss variations of the new mechanism, such as truthful implementations.

Abstract:
We analyze the component evolution in inhomogeneous random intersection graphs when the average degree is close to 1. As the average degree increases, the size of the largest component in the random intersection graph goes through a phase transition. We give bounds on the size of the largest components before and after this transition. We also prove that the largest component after the transition is unique. These results are similar to the phase transition in Erd\H{o}s-R\'enyi random graphs; one notable difference is that the size of the jump in the largest component varies with the parameters of the random intersection graph.

Abstract:
While it is generally agreed that the nature of spacetime must be drastically different at the Planck scale, it has been a common practice to assume that spacetime is endowed with a full pseudo-Riemannian geometry regardless of the physical fields present or the length scale at which the geometry is probed, and to adopt this assumption in theories of cosmology, particle physics, quantum gravity, etc. Following Einstein's view that the mathematical description of spacetime ought to be physically motivated, we initiate a discussion on the validity of this assumption, and propose that the full pseudo-Riemannian geometry of spacetime could emerge as late as the time of electroweak phase transition when spacetime acquires the projective structure necessary to describe the motions of massive particles.