Abstract:
OBJECTIVE: Our aim in this study was to compare peritoneal clearance of beta 2-microglobulin (B2M) in peritoneal dialysis (PD) patients with high and low membrane transport status. MATERIAL AND METHODS: Forty-nine PD patients were included in this study. The patients were divided into two groups according to their peritoneal equilibration test (PET) results: a high transport group and a low transport group. Serum B2M levels and peritoneal clearance of B2M were compared between the two groups. RESULTS: Dialysate B2M level and peritoneal clearance of B2M were higher in the high transport group than in the low transport group (5.92 ± 2.62 mg/L vs. 3.42 ± 1.51 mg/L, p < 0.001, and 11.13 ± 2.14 L/week/1.73 m2 vs. 6.41 ± 1.65 L/week/1.73 m2, p < 0.001, respectively). On the other hand, there was no significant difference in serum B2M concentration between the high transport group and the low transport group (24.15 ± 9.10 mg/L vs. 27.35 ± 10.10 mg/L, respectively; p > 0.05). Serum B2M concentration was positively correlated with duration of PD (r = 0.518, p < 0.001). CONCLUSION: Although dialysate levels and peritoneal clearance of the middle molecule B2M were significantly higher in high transporters than in low transporters, there was no significant difference between the two groups in serum B2M concentration.
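The weekly peritoneal clearance figures above can be illustrated with the standard clearance relation Cl = (dialysate concentration × drained volume) / serum concentration, scaled to one week and normalized to a body surface area of 1.73 m2. The abstract does not state the exact normalization the study used, so the function below is only a sketch of that conventional formula; the numeric inputs are hypothetical.

```python
def peritoneal_clearance_b2m(dialysate_mg_per_l, drain_volume_l,
                             serum_mg_per_l, bsa_m2):
    """Weekly peritoneal clearance of B2M, normalized to 1.73 m2 BSA.

    Uses the conventional clearance formula Cl = (D x V) / S per day,
    scaled to L/week/1.73 m2.  Illustrative sketch only; the abstract
    does not specify the study's exact normalization.
    """
    daily_clearance_l = dialysate_mg_per_l * drain_volume_l / serum_mg_per_l
    return daily_clearance_l * 7 * (1.73 / bsa_m2)

# Hypothetical patient: dialysate B2M 5.9 mg/L, 10 L daily drained
# dialysate, serum B2M 24 mg/L, BSA 1.8 m2
print(round(peritoneal_clearance_b2m(5.9, 10.0, 24.0, 1.8), 2))  # -> 16.54
```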

Abstract:
Purpose: We aimed to evaluate the outcomes of renal transplantation patients treated between May 5, 1994 and August 13, 2000 at Erciyes University Medical Faculty. Material and Method: Survival rates were estimated by the Kaplan-Meier method and compared between groups with the log-rank test. Results: A total of 30 patients without any contraindication for transplantation underwent regular hemodialysis or continuous ambulatory peritoneal dialysis and then received a renal transplant. Nineteen of the transplanted kidneys came from living donors, whereas cadaveric kidneys were used in 11 patients. Acute rejection was diagnosed and treated in 3 patients. Chronic rejection was diagnosed in 3 patients. Complications of the surgical procedure developed in 5 patients. Urinary tract infections were the most common infectious complication. A total of 4 patients died of various causes. Conclusion: Patient survival rates were 96% at 1 year and 67% at 5 years of follow-up. Graft survival rates were 82% at 1 year and 48% at 5 years. No significant difference was found between the first-year graft survival rates of living and cadaveric donor kidneys (83% and 80%, respectively; p > 0.05).
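The survival rates above were obtained with the Kaplan-Meier method, which multiplies, at each observed event time, the fraction of at-risk patients who survive that time. A minimal sketch of the estimator on hypothetical toy data (not the study's cohort):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient
    events : 1 if the event (death / graft loss) occurred, 0 if censored
    Returns (time, survival probability) pairs at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        n_with_t = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1 - deaths / n_at_risk  # fraction surviving time t
            curve.append((t, surv))
        n_at_risk -= n_with_t
        i += n_with_t
    return curve

# Toy cohort: deaths at months 6 and 18, censoring at months 12 and 24
print(kaplan_meier([6, 12, 18, 24], [1, 0, 1, 0]))
# -> [(6, 0.75), (18, 0.375)]
```

Censored patients reduce the at-risk count without lowering the survival estimate, which is why the curve only steps down at event times.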

Abstract:
Ethylene glycol (EG) may be consumed accidentally or intentionally, usually in the form of antifreeze products or as an ethanol substitute. EG is metabolized to toxic metabolites. These metabolites cause metabolic acidosis with increased anion gap, renal failure, oxaluria, damage to the central nervous system and cranial nerves, and cardiovascular instability. Early initiation of treatment can reduce mortality and morbidity, but different clinical presentations can cause delayed diagnosis and poor prognosis. Herein, we report a case with the atypical presentation of facial paralysis, hematuria, and kidney failure due to EG poisoning which progressed to end stage renal failure and permanent right peripheral facial nerve palsy.

1. Introduction

Ethylene glycol (EG) may be consumed accidentally or intentionally, usually in the form of antifreeze products or as an ethanol substitute. It is converted by alcohol dehydrogenase to active metabolites in the liver, and these metabolites cause metabolic acidosis with increased anion gap, renal failure, hypocalcemia, oxaluria, and damage to the central nervous system and cranial nerves [1-3]. Diagnosis of EG poisoning can be a challenge due to altered mental status and lack of a poisoning history. While metabolic acidosis with increased anion gap is common, the osmolar gap resolves within 24 to 72 hours as the EG is metabolized to toxic metabolites. Early initiation of treatment can reduce mortality and morbidity, but varied presentations, especially in delayed cases, can be a problem, so patients with acute renal failure require more attention [4]. In cases of renal failure with suspicion of EG poisoning, kidney biopsy should be considered promptly. Histological examination of renal tissue often reveals widespread necrosis of the tubular epithelium and deposition of a multitude of doubly refractile oxalate crystals in the distal tubules and collecting ducts [5, 6].
Herein, we report a case with the atypical presentation of facial paralysis, hematuria, and kidney failure due to EG poisoning.

2. Case Report

A 25-year-old man was admitted to the emergency department of our university hospital with sudden onset of nausea, vomiting, mildly altered mental status, and abdominal pain. Cardiovascular, respiratory, and gastrointestinal system examination findings were normal. The neurological examinations of the cranial nerves and limbs were normal. He was clinically dehydrated. On the first day of his admission, laboratory studies revealed a BUN level of 35 mg/dL, creatinine level of 3.17 mg/dL, calcium level of

Abstract:
Tuberculosis (TB) is a contagious disease that causes considerable morbidity and mortality, constituting a serious public health problem. Our goals in the struggle against TB are mainly to prevent the spread of the disease, to achieve complete recovery, to prevent relapse of tuberculosis, to prevent the development of resistant bacilli, and to reduce morbidity and mortality. In line with these goals, patients diagnosed with TB or suspected of having TB should be evaluated prior to actual TB treatment.

Abstract:
Nuclear norm minimization (NNM) has recently gained significant attention for its use in rank minimization problems. Similar to compressed sensing, recovery thresholds for NNM based on null space characterizations have been studied in \cite{arxiv,Recht_Xu_Hassibi}. However, simulations show that these thresholds are far from optimal, especially in the low rank region. In this paper we apply the recent analysis of Stojnic for compressed sensing \cite{mihailo} to the null space conditions of NNM. The resulting thresholds are significantly better, and in particular our weak threshold appears to match simulation results. Further, our curves suggest that for any rank growing linearly with the matrix size $n$, one needs only a factor of three oversampling (relative to the model complexity) for weak recovery. Similar to \cite{arxiv}, we analyze the conditions for weak, sectional and strong thresholds. Additionally, a separate analysis is given for the special case of positive semidefinite matrices. We conclude by discussing simulation results and future research directions.

Abstract:
This work considers recovery of signals that are sparse over two bases. For instance, a signal might be sparse in both time and frequency, or a matrix can be simultaneously low rank and sparse. To facilitate recovery, we consider minimizing the sum of the $\ell_1$-norms corresponding to each basis, which is a tractable convex approach. We find novel optimality conditions which indicate a gain over traditional approaches where $\ell_1$ minimization is done over only one basis. Next, we analyze these optimality conditions for the particular case of time-frequency bases. Denoting the sparsity in the first and second bases by $k_1$ and $k_2$ respectively, we show that, for a general class of signals, this approach requires as few as $O(\max\{k_1,k_2\}\log\log n)$ measurements for successful recovery, hence overcoming the classical requirement of $\Theta(\min\{k_1,k_2\}\log(\frac{n}{\min\{k_1,k_2\}}))$ for $\ell_1$ minimization when $k_1\approx k_2$. Extensive simulations show that our analysis is approximately tight.

Abstract:
Denoising is the problem of estimating a signal $x_0$ from its noisy observations $y=x_0+z$. In this paper, we focus on the "structured denoising problem", where the signal $x_0$ possesses a certain structure and $z$ has independent normally distributed entries with mean zero and variance $\sigma^2$. We employ a structure-inducing convex function $f(\cdot)$ and solve $\min_x\{\frac{1}{2}\|y-x\|_2^2+\sigma\lambda f(x)\}$ to estimate $x_0$, for some $\lambda>0$. Common choices for $f(\cdot)$ include the $\ell_1$ norm for sparse vectors, the $\ell_1-\ell_2$ norm for block-sparse signals and the nuclear norm for low-rank matrices. The metric we use to evaluate the performance of an estimate $x^*$ is the normalized mean-squared-error $\text{NMSE}(\sigma)=\frac{\mathbb{E}\|x^*-x_0\|_2^2}{\sigma^2}$. We show that NMSE is maximized as $\sigma\rightarrow 0$ and we find the \emph{exact} worst case NMSE, which has a simple geometric interpretation: the mean-squared-distance of a standard normal vector to the $\lambda$-scaled subdifferential $\lambda\partial f(x_0)$. When $\lambda$ is optimally tuned to minimize the worst-case NMSE, our results can be related to the constrained denoising problem $\min_{f(x)\leq f(x_0)}\{\|y-x\|_2\}$. The paper also connects these results to the generalized LASSO problem, in which one solves $\min_{f(x)\leq f(x_0)}\{\|y-Ax\|_2\}$ to estimate $x_0$ from noisy linear observations $y=Ax_0+z$. We show that certain properties of the LASSO problem are closely related to the denoising problem. In particular, we characterize the normalized LASSO cost and show that it exhibits a "phase transition" as a function of the number of observations. Our results are significant in two ways. First, we find a simple formula for the performance of a general convex estimator. Second, we establish a connection between the denoising and linear inverse problems.
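For the simplest structure-inducing choice, $f = \|\cdot\|_1$, the estimator $\min_x\{\frac{1}{2}\|y-x\|_2^2+\sigma\lambda\|x\|_1\}$ decouples over coordinates and has the well-known closed-form solution of soft-thresholding at level $\sigma\lambda$. A minimal sketch (the inputs are hypothetical, and the tuning of $\lambda$ to the worst-case NMSE from the abstract is not shown):

```python
def soft_threshold_denoise(y, sigma, lam):
    """Solve min_x 0.5*||y - x||_2^2 + sigma*lam*||x||_1 elementwise.

    For f = l1 norm the minimizer is soft-thresholding at t = sigma*lam:
    x_i = sign(y_i) * max(|y_i| - t, 0).
    """
    t = sigma * lam
    return [(abs(v) - t) * (1 if v > 0 else -1) if abs(v) > t else 0.0
            for v in y]

y = [3.0, -0.5, 0.2, -4.0]
print(soft_threshold_denoise(y, sigma=1.0, lam=1.0))
# -> [2.0, 0.0, 0.0, -3.0]
```

Entries smaller than the threshold are set exactly to zero, which is how the $\ell_1$ penalty promotes sparsity in the estimate.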

Abstract:
Finding "densely connected clusters" in a graph is in general an important and well studied problem in the literature \cite{Schaeffer}. It has various applications in pattern recognition, social networking and data mining \cite{Duda,Mishra}. Recently, Ames and Vavasis have suggested a novel method for finding cliques in a graph by using convex optimization over the adjacency matrix of the graph \cite{Ames, Ames2}. There have also been recent advances in decomposing a given matrix into its "low rank" and "sparse" components \cite{Candes, Chandra}. In this paper, inspired by these results, we view "densely connected clusters" as imperfect cliques, where the imperfections correspond to missing edges, which are relatively sparse. We analyze the problem in a probabilistic setting and aim to detect disjoint planted clusters. Our main result suggests that one can find \emph{dense} clusters in a graph as long as the clusters are sufficiently large. We conclude by discussing possible extensions and future research directions.
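The "imperfect clique" viewpoint can be made concrete with a small generator for the probabilistic setting: inside each planted cluster, edges appear with high probability (the few missing edges are the sparse "imperfections"), while edges elsewhere are rare. This is only an illustrative sketch of such a planted model, not the authors' exact generative setup, and all names and parameters are assumptions:

```python
import random

def planted_clusters(n, clusters, p_in, p_out, seed=0):
    """Adjacency matrix of a graph with disjoint planted dense clusters.

    clusters : list of disjoint index lists; edges inside a cluster
               appear with probability p_in (an imperfect clique),
               all other edges with probability p_out.
    """
    rng = random.Random(seed)
    label = [-1] * n                      # -1 = background vertex
    for c, idx in enumerate(clusters):
        for i in idx:
            label[i] = c
    A = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            same = label[i] != -1 and label[i] == label[j]
            if rng.random() < (p_in if same else p_out):
                A[i][j] = A[j][i] = 1     # undirected edge
    return A

# One planted cluster {0,1,2,3} in an 8-vertex graph
A = planted_clusters(8, [[0, 1, 2, 3]], p_in=0.9, p_out=0.1)
```

With the cluster indicator matrix being low rank and the missing/extra edges sparse, the adjacency matrix naturally fits the "low rank plus sparse" decompositions cited above.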

Abstract:
We study embedding a subset $K$ of the unit sphere into the Hamming cube $\{-1,+1\}^m$. We characterize the tradeoff between distortion and sample complexity $m$ in terms of the Gaussian width $\omega(K)$ of the set. For subspaces and several structured sets we show that Gaussian maps provide the optimal tradeoff $m\sim \delta^{-2}\omega^2(K)$; in particular, for distortion $\delta$ one needs $m\approx\delta^{-2}{d}$ where $d$ is the subspace dimension. For general sets, we provide sharp characterizations which reduce to $m\approx{\delta^{-4}}{\omega^2(K)}$ after simplification. We provide improved results for local embedding of points that are in close proximity to each other, which is related to locality sensitive hashing. We also discuss faster binary embedding, where one takes advantage of an initial sketching procedure based on the Fast Johnson-Lindenstrauss Transform. Finally, we list several numerical observations and discuss open problems.
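A binary embedding via a Gaussian map takes each coordinate of the code to be the sign of a random Gaussian projection; the normalized Hamming distance between two codes then concentrates around the angle between the points divided by $\pi$. The sketch below illustrates this construction on hypothetical points (the choice of $m$ and the data are assumptions, not from the abstract):

```python
import random

def binary_embed(points, m, seed=0):
    """Embed points on the unit sphere into {-1,+1}^m via a Gaussian map.

    Each output coordinate is sign(<g_i, x>) for an i.i.d. standard
    Gaussian vector g_i shared across all points.
    """
    rng = random.Random(seed)
    d = len(points[0])
    G = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(m)]

    def sign(v):
        return 1 if v >= 0 else -1

    return [[sign(sum(g[k] * x[k] for k in range(d))) for g in G]
            for x in points]

# Two orthogonal unit vectors: angle pi/2, so the normalized Hamming
# distance between their codes should concentrate near 1/2 as m grows.
codes = binary_embed([[1.0, 0.0], [0.0, 1.0]], m=64)
hamming = sum(a != b for a, b in zip(codes[0], codes[1])) / 64
```

In the notation of the abstract, increasing $m$ reduces the distortion $\delta$ of this angle estimate, at the rate governed by the Gaussian width of the set being embedded.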

Abstract:
Dimension reduction is the process of embedding high-dimensional data into a lower dimensional space to facilitate its analysis. In the Euclidean setting, one fundamental technique for dimension reduction is to apply a random linear map to the data. This dimension reduction procedure succeeds when it preserves certain geometric features of the set. The question is how large the embedding dimension must be to ensure that randomized dimension reduction succeeds with high probability. This paper studies a natural family of randomized dimension reduction maps and a large class of data sets. It proves that there is a phase transition in the success probability of the dimension reduction map as the embedding dimension increases. For a given data set, the location of the phase transition is the same for all maps in this family. Furthermore, each map has the same stability properties, as quantified through the restricted minimum singular value. These results can be viewed as new universality laws in high-dimensional stochastic geometry. Universality laws for randomized dimension reduction have many applications in applied mathematics, signal processing, and statistics. They yield design principles for numerical linear algebra algorithms, for compressed sensing measurement ensembles, and for random linear codes. Furthermore, these results have implications for the performance of statistical estimation methods under a large class of random experimental designs.