Search Results: 1 - 10 of 182873 matches for "Frank de Hoog"
All listed articles are free for downloading (OA Articles)
New Coherence and RIP Analysis for Weak Orthogonal Matching Pursuit
Mingrui Yang,Frank de Hoog
Mathematics , 2014,
Abstract: In this paper we define a new coherence index of a given dictionary, named the global 2-coherence, and study its relationship with the traditional mutual coherence and the restricted isometry constant. By exploring this relationship, we obtain more general results on sparse signal reconstruction using greedy algorithms in the compressive sensing (CS) framework. In particular, for successful recovery of sparse signals using orthogonal matching pursuit (OMP), we obtain a bound on the restricted isometry constant that improves on the best known results.
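For background, the following is a minimal NumPy sketch of standard OMP, the greedy algorithm whose recovery conditions the paper analyzes; it is an illustrative implementation, not the authors' code. Here A is the dictionary (typically with unit-norm columns), y the measurement vector, and k the target sparsity.

import numpy as np

def omp(A, y, k):
    # Greedy orthogonal matching pursuit: select k atoms of A to explain y.
    residual = y.astype(float).copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares refit on the enlarged support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x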
Orthogonal Matching Pursuit with Thresholding and its Application in Compressive Sensing
Mingrui Yang,Frank de Hoog
Computer Science , 2013,
Abstract: Greed is good. However, the tighter you squeeze, the less you have. In this paper, a less greedy algorithm for sparse signal reconstruction in compressive sensing, named orthogonal matching pursuit with thresholding, is studied. Using the global 2-coherence, which provides a "bridge" between the well-known mutual coherence and the restricted isometry constant, the performance of orthogonal matching pursuit with thresholding is analyzed and more general results for sparse signal reconstruction are obtained. It is also shown that, under the same assumptions on the coherence index and the restricted isometry constant as required for orthogonal matching pursuit, the thresholding variant gives exactly the same reconstruction performance with significantly less complexity.
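One plausible reading of the complexity saving is that atom selection happens in a single correlation pass rather than once per iteration. The sketch below illustrates that idea with a simple top-k selection rule followed by one least-squares fit; the paper's exact thresholding criterion may differ.

import numpy as np

def omp_with_thresholding(A, y, k):
    # One correlation pass to pick a support, then a single least-squares fit.
    # Illustrative top-k rule; the paper's thresholding criterion may differ.
    support = np.argsort(np.abs(A.T @ y))[-k:]
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x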
Hyperspectral Image Recovery from Incomplete and Imperfect Measurements via Hybrid Regularization
Reza Arablouei,Frank de Hoog
Computer Science , 2015,
Abstract: Natural images tend to consist mostly of smooth regions whose individual pixels have highly correlated spectra. This information can be exploited to recover hyperspectral images of natural scenes from their incomplete and noisy measurements. To perform the recovery while taking full advantage of this prior knowledge, we formulate a composite cost function containing a square-error data-fitting term and two distinct regularization terms pertaining to the spatial and spectral domains. The spatial regularizer is the sum of the total variation of the image frames corresponding to all spectral bands. The spectral regularizer is the l_1-norm of the coefficient matrix obtained by applying a suitable sparsifying transform to the spectra of the pixels. We use an accelerated proximal-subgradient method to minimize the formulated cost function. We analyze the performance of the proposed algorithm and prove its convergence. Numerical simulations using real hyperspectral images show that the proposed algorithm offers excellent recovery performance from a number of measurements that is only a small fraction of the hyperspectral image data size. Simulation results also show that the proposed algorithm significantly outperforms an accelerated proximal-gradient algorithm that solves the classical basis-pursuit denoising problem to recover the hyperspectral image.
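To make the composite objective concrete, here is a small NumPy sketch that evaluates a square-error data term plus a per-band isotropic total-variation term and an l_1 spectral term. The variable names, the mask-style measurement model, and the transform Psi are illustrative assumptions, not the paper's formulation.

import numpy as np

def hybrid_cost(X, M, Y, Psi, lam_tv, lam_sp):
    # X: (rows, cols, bands) image estimate; M: 0/1 sampling mask (same shape);
    # Y: masked measurements; Psi: (bands x bands) sparsifying transform.
    fit = 0.5 * np.sum((M * X - Y) ** 2)          # square-error data fitting
    dx = np.diff(X, axis=1)[:-1, :, :]            # horizontal differences
    dy = np.diff(X, axis=0)[:, :-1, :]            # vertical differences
    tv = np.sum(np.sqrt(dx**2 + dy**2))           # isotropic TV over all bands
    S = X.reshape(-1, X.shape[2]).T               # pixel spectra as columns
    sp = np.sum(np.abs(Psi @ S))                  # l1 of transform coefficients
    return fit + lam_tv * tv + lam_sp * sp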
The application of compressive sampling to radio astronomy I: Deconvolution
Feng Li,Tim J. Cornwell,Frank de Hoog
Physics , 2011, DOI: 10.1051/0004-6361/201015045
Abstract: Compressive sampling (CS) is a new paradigm for sampling, based on the sparseness of signals or signal representations. It is much less restrictive than Nyquist-Shannon sampling theory and thus explains and systematises the widespread experience that methods such as the Högbom CLEAN can violate the Nyquist-Shannon sampling requirements. In this paper, a CS-based deconvolution method for extended sources is introduced. This method can reconstruct both point sources and extended sources (using the isotropic undecimated wavelet transform as a basis function for the reconstruction step). We compare this CS-based deconvolution method with two CLEAN-based deconvolution methods: the Högbom CLEAN and the multiscale CLEAN. The new method shows the best performance in deconvolving extended sources for both uniform and natural weighting of the sampled visibilities. Both visual and numerical results of the comparison are provided.
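For reference, the baseline the paper compares against, Högbom CLEAN, is simple enough to sketch: repeatedly find the peak of the residual image and subtract a gain-scaled copy of the point spread function centred there. This is a generic illustration, not the paper's implementation.

import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    # dirty: dirty image; psf: dirty beam with its peak at the array centre.
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    pr, pc = psf.shape[0] // 2, psf.shape[1] // 2    # PSF centre
    for _ in range(niter):
        r, c = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        if abs(residual[r, c]) < threshold:
            break
        val = gain * residual[r, c]
        model[r, c] += val                           # accumulate CLEAN component
        # Subtract the shifted PSF, clipped at the image borders.
        r0, r1 = max(0, r - pr), min(residual.shape[0], r - pr + psf.shape[0])
        c0, c1 = max(0, c - pc), min(residual.shape[1], c - pc + psf.shape[1])
        residual[r0:r1, c0:c1] -= val * psf[r0 - (r - pr):r1 - (r - pr),
                                            c0 - (c - pc):c1 - (c - pc)]
    return model, residual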
The application of compressive sampling to radio astronomy II: Faraday rotation measure synthesis
Feng Li,Shea Brown,Tim J. Cornwell,Frank de Hoog
Physics , 2011, DOI: 10.1051/0004-6361/201015890
Abstract: Faraday rotation measure (RM) synthesis is an important tool for studying and analyzing galactic and extra-galactic magnetic fields. Since there is a Fourier relation between the Faraday dispersion function and the polarized radio emission, full reconstruction of the dispersion function requires knowledge of the polarized radio emission at both positive and negative square wavelengths $\lambda^2$. However, one can only make observations for $\lambda^2 > 0$, and only over a limited range of wavelengths. Thus reconstructing the Faraday dispersion function from these limited measurements is ill-conditioned. In this paper, we propose three new reconstruction algorithms for RM synthesis based upon compressive sensing/sampling (CS), designed to be appropriate for Faraday thin sources, thick sources, and mixed sources, respectively. Both visual and numerical results show that the new RM synthesis methods provide superior reconstructions of both magnitude and phase information compared with RM-CLEAN.
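The Fourier relation itself is easy to make concrete. Below is a minimal sketch of conventional ("dirty") RM synthesis, the inversion that the proposed CS algorithms improve upon; the derotation convention with a reference squared wavelength is standard practice, but the details here are assumptions, not the paper's code.

import numpy as np

def rm_synthesis_dirty(P, lam2, phi, weights=None):
    # P: complex polarized emission samples at squared wavelengths lam2;
    # phi: grid of trial Faraday depths. Returns the dirty dispersion function.
    w = np.ones_like(lam2) if weights is None else weights
    lam2_0 = np.average(lam2, weights=w)        # reference squared wavelength
    # Inverse of the relation P(lambda^2) = sum_phi F(phi) exp(2i phi lambda^2).
    phase = np.exp(-2j * np.outer(phi, lam2 - lam2_0))
    return (phase @ (w * P)) / np.sum(w)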
A weakly stable algorithm for general Toeplitz systems
Adam W. Bojanczyk,Richard P. Brent,Frank R. de Hoog
Mathematics , 2010, DOI: 10.1007/BF02140770
Abstract: We show that a fast algorithm for the QR factorization of a Toeplitz or Hankel matrix A is weakly stable in the sense that R^T R is close to A^T A. Thus, when the algorithm is used to solve the semi-normal equations R^T R x = A^T b, we obtain a weakly stable method for the solution of a nonsingular Toeplitz or Hankel linear system Ax = b. The algorithm also applies to the solution of the full-rank Toeplitz or Hankel least squares problem.
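Given the triangular factor R, the semi-normal solve amounts to two triangular solves, as in this sketch (which uses a dense R via SciPy; the paper's fast Toeplitz QR itself is not reproduced):

import numpy as np
from scipy.linalg import solve_triangular

def solve_seminormal(R, A, b):
    # Solve A x = b via the semi-normal equations R^T R x = A^T b,
    # where R is the upper-triangular factor from a QR factorization of A.
    y = solve_triangular(R, A.T @ b, lower=False, trans='T')   # R^T y = A^T b
    return solve_triangular(R, y, lower=False)                 # R x = y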
Compressive hyperspectral imaging via adaptive sampling and dictionary learning
Mingrui Yang,Frank de Hoog,Yuqi Fan,Wen Hu
Computer Science , 2015,
Abstract: In this paper, we propose a new sampling strategy for hyperspectral signals that is based on dictionary learning and singular value decomposition (SVD). Specifically, we first learn a sparsifying dictionary from training spectral data using dictionary learning. We then perform an SVD on the dictionary and use the first few left singular vectors as the rows of the measurement matrix to obtain the compressive measurements for reconstruction. The proposed method provides a significant improvement over conventional compressive sensing approaches. The reconstruction performance is further improved by reconditioning the sensing matrix using matrix balancing. We also demonstrate that the combination of dictionary learning and SVD is robust by applying it to different datasets.
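The core of the sampling strategy, as described in the abstract, fits in a few lines: take the leading left singular vectors of the learned dictionary as the rows of the measurement matrix. A minimal sketch, assuming the dictionary-learning step has already produced D (atoms as columns):

import numpy as np

def svd_measurement_matrix(D, m):
    # Sensing-matrix rows = first m left singular vectors of the dictionary D.
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :m].T

# Usage sketch: Phi = svd_measurement_matrix(D, m); y = Phi @ x then gives the
# compressive measurements of a spectrum x.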
On the stability of the Bareiss and related Toeplitz factorization algorithms
Adam W. Bojanczyk,Richard P. Brent,Frank R. de Hoog,Douglas R. Sweet
Mathematics , 2010,
Abstract: This report contains a numerical stability analysis of factorization algorithms for computing the Cholesky decomposition of symmetric positive definite matrices of displacement rank 2. The algorithms in the class can be expressed as sequences of elementary downdating steps. The stability of the factorization algorithms follows directly from the numerical properties of algorithms for realizing elementary downdating operations. It is shown that the Bareiss algorithm for factorizing a symmetric positive definite Toeplitz matrix is in the class and hence the Bareiss algorithm is stable. Some numerical experiments that compare the behavior of the Bareiss algorithm and the Levinson algorithm are presented. These experiments indicate that in general (when the reflection coefficients are not all positive) the Levinson algorithm is not stable; certainly it can give much larger residuals than the Bareiss algorithm.
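Since the class of algorithms is built from elementary downdating steps, the following sketch shows one such step: a hyperbolic rotation that zeros the leading entry of y against x while preserving x x^T - y y^T. This is the generic operation, not the report's specific Bareiss code.

import numpy as np

def hyperbolic_downdate(x, y):
    # Zero y[0] against x[0] with a hyperbolic rotation H = [[c, -s], [-s, c]],
    # c^2 - s^2 = 1, which preserves x x^T - y y^T.
    rho = y[0] / x[0]
    if abs(rho) >= 1.0:
        raise ValueError("downdating requires |x[0]| > |y[0]|")
    c = 1.0 / np.sqrt(1.0 - rho**2)
    s = rho * c
    return c * x - s * y, c * y - s * x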
Sparse Bayesian Learning for EEG Source Localization
Sajib Saha,Frank de Hoog,Ya. I. Nesterets,Rajib Rana,M. Tahtali,T. E. Gureyev
Computer Science , 2015,
Abstract: Purpose: Localizing the sources of electrical activity from electroencephalographic (EEG) data has gained considerable attention over the last few years. In this paper, we propose an innovative source localization method for EEG, based on Sparse Bayesian Learning (SBL). Methods: To better specify the sparsity profile and to ensure efficient source localization, the proposed approach considers grouping of the electrical current dipoles inside the human brain. SBL is used to solve the localization problem, together with the imposed constraint that the electric current dipoles associated with brain activity are isotropic. Results: Numerical experiments are conducted on a realistic head model that is obtained by segmentation of MRI images of the head and includes four major components, namely the scalp, the skull, the cerebrospinal fluid (CSF) and the brain, with appropriate relative conductivity values. The results demonstrate that the isotropy constraint significantly improves the performance of SBL. In a noiseless environment, the proposed method was found to locate up to 6 simultaneously active sources with an accuracy of >75%, whereas for SBL without the isotropy constraint, the accuracy of finding just 3 simultaneously active sources was <75%. Conclusions: Compared to state-of-the-art algorithms, the proposed method is potentially more consistent in specifying the sparsity profile of human brain activity and is able to produce better source localization for EEG.
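For orientation, a minimal generic SBL loop (EM-style hyperparameter updates for the linear model y = A x + noise) looks as follows; the paper's grouped, isotropy-constrained variant for dipole sources is more elaborate and is not reproduced here.

import numpy as np

def sbl(A, y, noise_var=1e-2, niters=100):
    # Generic sparse Bayesian learning for y = A x + noise.
    # gamma holds the per-coefficient prior variances (hyperparameters).
    gamma = np.ones(A.shape[1])
    for _ in range(niters):
        AG = A * gamma                                   # A @ diag(gamma)
        Sigma_y = noise_var * np.eye(len(y)) + AG @ A.T  # marginal covariance
        K = np.linalg.solve(Sigma_y, AG).T               # Gamma A^T Sigma_y^-1
        mu = K @ y                                       # posterior mean of x
        post_var = gamma - np.sum(K * AG.T, axis=1)      # diag of posterior cov
        gamma = mu**2 + post_var                         # EM hyperparameter update
    return mu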
EEG source localization using a sparsity prior based on Brodmann areas
S. Saha,Ya. I. Nesterets,Rajib Rana,M. Tahtali,Frank de Hoog,T. E. Gureyev
Quantitative Biology , 2014,
Abstract: Localizing the sources of electrical activity in the brain from electroencephalographic (EEG) data is an important tool for the non-invasive study of brain dynamics. Generally, source localization involves a high-dimensional inverse problem that has an infinite number of solutions and thus requires additional constraints to yield a unique solution. In the context of EEG source localization, we propose a novel approach that is based on dividing the cerebral cortex of the brain into a finite number of Functional Zones, which correspond to unitary functional areas in the brain. In this paper we investigate the use of Brodmann's areas as the Functional Zones. This approach allows us to apply a sparsity constraint to find a unique solution for the inverse EEG problem. Compared to previously published algorithms that use different sparsity constraints to solve this problem, the proposed method is potentially more consistent with the known sparsity profile of human brain activity and thus may ensure better localization. Numerical experiments are conducted on a realistic head model obtained by segmentation of MRI images of the head; the model includes four major compartments, namely the scalp, skull, cerebrospinal fluid (CSF) and brain, with appropriate relative conductivity values. Three different electrode setups are tested in the numerical experiments.
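As a schematic of the zone-level sparsity idea: score candidate zones by how well their leadfield columns correlate with the data, keep a few zones, and fit only those columns. The function below is an illustrative group-greedy step under assumed inputs (a leadfield matrix L and zone index lists), not the paper's algorithm.

import numpy as np

def zone_sparse_estimate(L, y, zones, k):
    # L: leadfield matrix; y: EEG measurements; zones: list of index arrays,
    # one per Functional Zone (e.g. a Brodmann area); k: number of zones kept.
    corr = L.T @ y
    scores = [np.linalg.norm(corr[idx]) for idx in zones]   # zone-level scores
    chosen = np.argsort(scores)[-k:]
    support = np.concatenate([zones[z] for z in chosen])
    coef, *_ = np.linalg.lstsq(L[:, support], y, rcond=None)
    x = np.zeros(L.shape[1])
    x[support] = coef
    return x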