Abstract:
We study Kaczmarz methods for solving systems of quadratic equations, i.e., the generalized phase retrieval problem. The methods extend the Kaczmarz methods for solving systems of linear equations by integrating a phase-selection heuristic into each iteration, while retaining the same per-iteration computational complexity. Extensive empirical comparisons establish the computational advantages of the Kaczmarz methods over other state-of-the-art phase retrieval algorithms, both in the number of measurements needed for successful recovery and in computation time. A preliminary convergence analysis is presented for the randomized Kaczmarz methods.
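For intuition, the kind of iteration described above can be sketched for real-valued measurements: a randomized Kaczmarz projection where the unknown sign of each measurement is guessed from the current iterate. The Gaussian measurement model, the near-truth initialization, and all parameter values below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 20, 200                        # signal dimension, number of measurements
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))       # real Gaussian measurement vectors
b = np.abs(A @ x_true)                # phaseless (sign-less) measurements

def kaczmarz_phase(A, b, x0, iters=5000, seed=1):
    """Randomized Kaczmarz with a sign-selection heuristic: pick a random
    row, guess the missing sign from the current iterate, then apply the
    ordinary Kaczmarz projection for the resulting linear equation."""
    step_rng = np.random.default_rng(seed)
    x = x0.copy()
    row_norms = np.sum(A ** 2, axis=1)
    for _ in range(iters):
        i = step_rng.integers(len(b))
        inner = A[i] @ x
        eta = 1.0 if inner >= 0 else -1.0   # sign (phase) selection
        x = x + (eta * b[i] - inner) / row_norms[i] * A[i]
    return x

x0 = x_true + 0.1 * rng.standard_normal(n)   # start near the truth (local regime)
x_hat = kaczmarz_phase(A, b, x0)
err = min(np.linalg.norm(x_hat - x_true), np.linalg.norm(x_hat + x_true))
```

Since the sign of `x_true` is unrecoverable from `b`, the error is measured up to a global sign flip.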

Abstract:
This paper proposes an adaptive sparsity-based direct position determination (DPD) approach to locate multiple targets over time-varying channels. The novel feature of this method is that it dynamically adjusts both the overcomplete basis and the sparse solution within a two-step dictionary learning (DL) framework. The method first performs supervised offline DL using a quadratic programming approach; during the online stage, the dictionary is then continuously updated in an incremental fashion to adapt to the time-varying channel. Furthermore, the method does not require the number of emitters to be known a priori. Simulation results demonstrate the location estimation accuracy of the proposed algorithm.
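The online stage's incremental dictionary adaptation can be illustrated with a minimal sketch: a single online gradient step on the reconstruction error with the sparse code held fixed. The update rule, the step size, and the fixed code are assumptions for illustration; the paper's actual quadratic-programming-based scheme is more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

m, k = 16, 32                           # snapshot length, number of atoms
D = rng.standard_normal((m, k))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms

def incremental_update(D, y, s, step=0.1):
    """One online gradient step on ||y - D s||^2 with the sparse code s held
    fixed; practical schemes would also re-normalize the updated atoms."""
    residual = y - D @ s
    return D + step * np.outer(residual, s)

s = np.zeros(k)
s[[3, 11]] = [1.0, -0.5]                # a fixed 2-sparse code
y = rng.standard_normal(m)              # newly arrived snapshot
err_before = np.linalg.norm(y - D @ s)
for _ in range(50):
    D = incremental_update(D, y, s)
err_after = np.linalg.norm(y - D @ s)
```

Each step shrinks the residual by the factor 1 - step * ||s||^2, so the reconstruction error decays geometrically as the dictionary tracks the new snapshot.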

Abstract:
In order to enhance the accuracy and reliability of wireless location in mixed line-of-sight (LOS) and non-line-of-sight (NLOS) environments, a robust mobile location algorithm is presented to track the position of a mobile node (MN). An extended Kalman filter (EKF), modified in the update phase, is utilized to reduce the NLOS error in harsh wireless environments; the NLOS bias contained in each range measurement is estimated directly by a constrained optimization method. To identify transitions of the channel between NLOS and LOS conditions, a low-complexity identification method based on innovation vectors is proposed. Numerical results illustrate that the location errors of the proposed algorithm are significantly smaller than those of the iterated NLOS EKF algorithm and the conventional EKF algorithm under different LOS/NLOS conditions. Moreover, this location method does not require any knowledge of the statistical distribution of the NLOS error. In addition, complexity experiments suggest that the algorithm supports real-time applications.
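The flavor of innovation-based NLOS handling can be sketched with a simplified linear Kalman filter: readings whose normalized innovation exceeds a gate are treated as likely NLOS and their measurement variance is inflated before the update. The 1-D constant-velocity model, the 3-sigma gate, and the variance-inflation remedy are illustrative simplifications, not the paper's EKF with constrained bias estimation.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D constant-velocity target observed through noisy position readings;
# a fraction of the readings carry a large positive NLOS bias.
dt, steps = 1.0, 200
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # position-only measurement
Q = 0.01 * np.eye(2)                    # process noise covariance
R = np.array([[0.25]])                  # nominal measurement variance (std 0.5)

truth = np.zeros((steps, 2))
truth[0] = [0.0, 1.0]
for t in range(1, steps):
    truth[t] = F @ truth[t - 1]
nlos = rng.random(steps) < 0.2          # 20% of readings are NLOS
z = truth[:, 0] + 0.5 * rng.standard_normal(steps) + 5.0 * nlos

x, P = truth[0].copy(), np.eye(2)
est = np.zeros(steps)
est[0] = x[0]
for t in range(1, steps):
    x, P = F @ x, F @ P @ F.T + Q                        # predict
    S = (H @ P @ H.T + R)[0, 0]
    nu = z[t] - (H @ x)[0]                               # innovation
    # Innovation test: a large normalized innovation flags a likely NLOS
    # reading, and the measurement variance is inflated before the update.
    R_eff = R * (100.0 if nu * nu / S > 9.0 else 1.0)    # 3-sigma gate
    S_eff = (H @ P @ H.T + R_eff)[0, 0]
    K = (P @ H.T).ravel() / S_eff
    x = x + K * nu
    P = (np.eye(2) - np.outer(K, H)) @ P
    est[t] = x[0]

rmse_filter = np.sqrt(np.mean((est - truth[:, 0]) ** 2))
rmse_meas = np.sqrt(np.mean((z - truth[:, 0]) ** 2))
```

With the gate active, biased readings barely move the state, so the filtered track is markedly more accurate than the raw measurements.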

Abstract:
The direct position determination (DPD) method can improve location accuracy over traditional two-step location methods because it omits the intermediate step of estimating measurement parameters. However, the DPD methods presented so far are significantly more complex than the two-step approach. To overcome the shortcomings of published DPD algorithms, a novel multi-target direct localization approach is proposed that exploits the joint sparsity of the targets in the discrete spatial domain. The main idea of this paper is that the location estimate can be obtained by finding the sparsest solution with respect to a predefined overcomplete basis. Furthermore, the locations of the targets can be recovered from noisy signals even if the number of targets is unknown. Experimental results demonstrate that the proposed algorithm achieves superior positioning accuracy to other DPD methods and greatly improves computational efficiency.
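The sparsest-solution idea can be illustrated on a toy grid: each column of an overcomplete basis corresponds to one candidate location, and a greedy sparse solver picks the few columns that explain the received snapshot. The random dictionary stands in for a real array-response model, and Orthogonal Matching Pursuit is one common solver choice, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

G, m, k = 128, 64, 2                    # grid cells, sensor channels, targets
Phi = rng.standard_normal((m, G))       # stand-in for the overcomplete basis
Phi /= np.linalg.norm(Phi, axis=0)      # (one unit-norm atom per grid cell)

support_true = [10, 100]                # cells occupied by the two targets
s_true = np.zeros(G)
s_true[support_true] = [2.0, 1.5]
y = Phi @ s_true                        # noiseless snapshot

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit all chosen atoms by least squares."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    return sorted(support)

est_cells = omp(Phi, y, k)
```

The recovered support directly names the grid cells, i.e., the estimated target locations, without any intermediate parameter estimation.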

Abstract:
In received signal strength (RSS) based indoor wireless localization systems, RSS measurements are highly susceptible to the complex structure and dynamic nature of indoor environments, which prevents the system from achieving high location accuracy. In this paper, we investigate the indoor positioning problem in the presence of RSS variations, without prior knowledge about the localization area and without time-consuming offline surveys. An adaptive sparsity-based localization algorithm is proposed to mitigate the effects of RSS variations. The novel feature of this method is that it adjusts both the overcomplete basis (a.k.a. the dictionary) and the sparse solution using a dictionary learning (DL) technique built on quadratic programming, so that the location solution better matches the actual RSS scenario. Moreover, we extend this algorithm to the problem of positioning targets from multiple categories, a novel problem that few previous works have addressed. Simulation results demonstrate the superiority of the proposed algorithm over several state-of-the-art environment-adaptive indoor localization methods.

Abstract:
To study the structure of the solution space of random k-SAT and random CSPs, this paper introduces the concept of average similarity degree to characterize how similar the solutions are to each other. It is proved that, under certain conditions, as r (the ratio of constraints to variables) increases, the limit of the average similarity degree as the number of variables approaches infinity exhibits a phase transition at a threshold point, jumping abruptly from a smaller value to a larger one. For random k-SAT this phenomenon occurs when k > 4. It is further shown that this threshold point is also a singular point with respect to r in the asymptotic estimate of the second moment of the number of solutions. Finally, we discuss how this work helps in understanding the hardness of solving random instances, and a possible application to the design of search algorithms.
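The quantity can be made concrete on a tiny instance by brute force. Here the similarity degree is taken to be the mean fraction of variables on which two solutions agree, averaged over all ordered pairs of solutions; this is a plausible reading of the abstract, and the paper's exact definition may differ.

```python
import itertools
import random

random.seed(0)

# A tiny random 3-SAT instance: n variables, each clause has 3 distinct
# variables with random polarities (r = clauses/variables = 1.5 here, well
# below the satisfiability threshold, so solutions exist in abundance).
n, n_clauses = 10, 15
clauses = [[v if random.random() < 0.5 else -v
            for v in random.sample(range(1, n + 1), 3)]
           for _ in range(n_clauses)]

def satisfies(assign, clauses):
    return all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in clauses)

solutions = [a for a in itertools.product([False, True], repeat=n)
             if satisfies(a, clauses)]

def avg_similarity(solutions, n):
    """Mean fraction of agreeing variables over all ordered solution pairs
    (self-pairs included), one plausible notion of similarity degree."""
    total = sum(sum(x == y for x, y in zip(a, b))
                for a in solutions for b in solutions)
    return total / (len(solutions) ** 2 * n)

deg = avg_similarity(solutions, n)
```

With this definition the value always lies in [1/2, 1]: per variable, the agreement probability over ordered pairs is p^2 + (1-p)^2 >= 1/2, where p is the fraction of solutions setting that variable true.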

Abstract:
This paper first analyzes the resolution complexity of two random CSP models (Model RB/RD) for which the existence of phase transitions can be established and the threshold points identified exactly. By encoding CSPs into CNF formulas, it is proved that almost all instances of Model RB/RD have no tree-like resolution proofs of less than exponential size. Thus, we not only introduce new families of CNF formulas that are hard for resolution, a central task of proof-complexity theory, but also propose models with both many hard instances and exact phase transitions. The implications of such models are then addressed. It is shown both theoretically and experimentally that one application of Model RB/RD may be the generation of hard satisfiable instances, which is not only of practical importance but is also related to open problems in cryptography, such as the generation of one-way functions. Subsequently, further theoretical support for the generation method is given by establishing exponential lower bounds on the complexity of solving random satisfiable and forced satisfiable instances of RB/RD near the threshold. Finally, conclusions are presented, together with a detailed comparison of Model RB/RD with the Hamiltonian cycle problem and random 3-SAT, which respectively exhibit three different kinds of phase-transition behavior in NP-complete problems.
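One standard way to encode a CSP into a CNF formula, the so-called direct encoding, can be sketched and checked by brute force; the paper does not specify its encoding, so this is an illustration of the general technique rather than the paper's construction.

```python
from itertools import product

def csp_to_cnf(n_vars, d, constraints):
    """Direct encoding of a binary CSP into CNF: boolean variable x(v, a)
    (DIMACS index v*d + a + 1) means "CSP variable v takes value a".
    constraints: list of ((u, w), set of forbidden value pairs (a, b))."""
    x = lambda v, a: v * d + a + 1
    cnf = []
    for v in range(n_vars):
        cnf.append([x(v, a) for a in range(d)])          # at-least-one value
        for a in range(d):
            for b in range(a + 1, d):
                cnf.append([-x(v, a), -x(v, b)])         # at-most-one value
    for (u, w), forbidden in constraints:
        for a, b in forbidden:
            cnf.append([-x(u, a), -x(w, b)])             # conflict clauses
    return cnf

# Toy CSP: 2 variables over domain {0, 1, 2}, constrained to be unequal.
cnf = csp_to_cnf(2, 3, [((0, 1), {(a, a) for a in range(3)})])

def count_sat(cnf, n_bool):
    """Brute-force count of satisfying boolean assignments of the CNF."""
    count = 0
    for bits in product([False, True], repeat=n_bool):
        assign = dict(enumerate(bits, start=1))
        if all(any(assign[l] if l > 0 else not assign[-l] for l in c)
               for c in cnf):
            count += 1
    return count
```

For the toy instance the satisfying assignments of the encoding are in one-to-one correspondence with the 6 CSP solutions (ordered unequal value pairs), confirming that the encoding preserves the solution set.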

Abstract:
A priority queue is a fundamental data structure that maintains a dynamic ordered set of keys and supports the following basic operations: insertion of a key, deletion of a key, and finding the smallest key. The complexity of the priority queue is closely related to that of sorting: a priority queue can trivially be used to implement a sorting algorithm. Thorup \cite{thorup2007equivalence} proved that the converse is also true in the RAM model. In particular, he designed a priority queue that uses the sorting algorithm as a black box, such that the per-operation cost of the priority queue is asymptotically the same as the per-key cost of sorting. In this paper, we prove an analogous result in the external memory model, showing that priority queues are computationally equivalent to sorting in external memory under some mild assumptions. The reduction opens the possibility of proving lower bounds for external sorting by showing a lower bound for priority queues.
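The trivial direction of the equivalence, sorting via a priority queue, is just n insertions followed by n delete-mins; with a binary heap this is exactly heapsort. A minimal sketch using Python's standard-library heap:

```python
import heapq

def pq_sort(keys):
    """Sort by n priority-queue insertions followed by n delete-mins."""
    pq = []
    for key in keys:
        heapq.heappush(pq, key)                            # insertion
    return [heapq.heappop(pq) for _ in range(len(keys))]   # find-min + delete

print(pq_sort([5, 1, 4, 1, 3]))   # → [1, 1, 3, 4, 5]
```

The hard direction, simulating a priority queue at per-key sorting cost, is the substance of Thorup's result and of the external-memory reduction above.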

Abstract:
We study the problem of 2-dimensional orthogonal range counting with additive error. Given a set $P$ of $n$ points drawn from an $n\times n$ grid and an error parameter $\eps$, the goal is to build a data structure, such that for any orthogonal range $R$, the data structure can return the number of points in $P\cap R$ with additive error $\eps n$. A well-known solution for this problem is the {\em $\eps$-approximation}. Informally speaking, an $\eps$-approximation of $P$ is a subset $A\subseteq P$ that allows us to estimate the number of points in $P\cap R$ by counting the number of points in $A\cap R$. It is known that an $\eps$-approximation of size $O(\frac{1}{\eps} \log^{2.5} \frac{1}{\eps})$ exists for any $P$ with respect to orthogonal ranges, and the best lower bound is $\Omega(\frac{1}{\eps} \log \frac{1}{\eps})$. The $\eps$-approximation is a rather restricted data structure, as we are not allowed to store any information other than the coordinates of a subset of points in $P$. In this paper, we explore what can be achieved without any restriction on the data structure. We first describe a data structure that uses $O(\frac{1}{\eps} \log \frac{1}{\eps} \log\log \frac{1}{\eps} \log n)$ bits that answers queries with error $\eps n$. We then prove a lower bound that any data structure that answers queries with error $O(\log n)$ must use $\Omega(n\log n)$ bits. This lower bound has two consequences: 1) answering queries with error $O(\log n)$ is as hard as answering the queries exactly; and 2) our upper bound cannot be improved in general by more than an $O(\log \log \frac{1}{\eps})$ factor.
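The $\eps$-approximation idea has a simple 1-D warm-up: keeping every $k$-th point of the sorted set gives a subset $A$ of size $n/k$ such that, for any interval, $k$ times the count in $A$ matches the true count up to additive error below $2k$. (The 2-D orthogonal-range version discussed above is far more delicate; this sketch only illustrates the estimate-by-subsample mechanism.)

```python
import random

random.seed(0)

def build_sample(points, k):
    """Keep every k-th point of the sorted set (a 1-D eps-approximation)."""
    pts = sorted(points)
    return pts, pts[k - 1::k]

def exact_count(pts, lo, hi):
    return sum(lo <= p <= hi for p in pts)

def approx_count(sample, k, lo, hi):
    # Scale the sample count back up by k to estimate the true count.
    return k * sum(lo <= p <= hi for p in sample)

n, k = 1000, 25                      # additive error target: 2k = n/20
pts, sample = build_sample([random.randrange(n) for _ in range(n)], k)
worst = max(abs(approx_count(sample, k, lo, hi) - exact_count(pts, lo, hi))
            for lo in range(0, n, 50) for hi in range(lo, n, 50))
```

The bound is deterministic: for any prefix, the scaled sample count undershoots the true count by less than $k$, and an interval is a difference of two prefixes.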

Abstract:
In this paper we propose a new type of random CSP model, called Model RB, which is a revision of the standard Model B. It is proved that phase transitions from a region where almost all problems are satisfiable to a region where almost all problems are unsatisfiable do exist for Model RB as the number of variables approaches infinity. Moreover, the critical values at which the phase transitions occur are known exactly. By relating the hardness of Model RB to that of Model B, it is shown that Model RB contains many hard instances.
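A generator for binary-constraint instances in the spirit of Model RB can be sketched from the abstract's description: the domain size grows with the number of variables, and each constraint forbids a fixed fraction of value pairs. The parameter names (alpha, r, p) and exact rounding choices below are assumptions, not the paper's precise recipe.

```python
import math
import random

random.seed(0)

def model_rb(n, alpha, r, p):
    """Sample one binary-constraint random CSP instance, Model RB style:
    domain size d = n^alpha grows with n, there are r*n*ln(n) constraints,
    and each constraint forbids a p-fraction of the d*d value pairs."""
    d = round(n ** alpha)                        # growing domain size
    n_constraints = round(r * n * math.log(n))
    n_forbidden = round(p * d * d)
    pairs = [(a, b) for a in range(d) for b in range(d)]
    constraints = []
    for _ in range(n_constraints):
        scope = tuple(random.sample(range(n), 2))       # 2 distinct variables
        forbidden = set(random.sample(pairs, n_forbidden))
        constraints.append((scope, forbidden))
    return d, constraints

d, cons = model_rb(n=20, alpha=0.8, r=0.5, p=0.25)
```

Varying r sweeps the instance across the satisfiability phase transition whose exact critical value is the subject of the abstract.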