Abstract:
Post-traumatic stress disorder (PTSD) symptoms include behavioral avoidance, which is acquired and tends to increase with time. This avoidance may represent a general learning bias; indeed, individuals with PTSD are often faster than controls at acquiring conditioned responses based on physiologically aversive feedback. However, it is not clear whether this learning bias extends to cognitive feedback, or to learning from both reward and punishment. Here, male veterans with self-reported current, severe PTSD symptoms (PTSS group) or with few or no PTSD symptoms (control group) completed a probabilistic classification task that included both reward-based and punishment-based trials, where feedback could take the form of reward, punishment, or an ambiguous “no-feedback” outcome that could signal either successful avoidance of punishment or failure to obtain reward. The PTSS group outperformed the control group in total points obtained; specifically, the PTSS group performed better than the control group on reward-based trials, with no difference on punishment-based trials. To better understand possible mechanisms underlying the observed performance, we used a reinforcement learning model of the task and applied maximum likelihood estimation techniques to derive estimated parameters describing individual participants’ behavior. Estimates of the reinforcement value of the no-feedback outcome were significantly greater in the control group than in the PTSS group, suggesting that the control group was more likely to value this outcome as positively reinforcing (i.e., as signaling successful avoidance of punishment). This is consistent with the control group’s generally poorer performance on reward trials, where reward feedback was to be obtained in preference to the no-feedback outcome.
Differences in the interpretation of ambiguous feedback may contribute to the facilitated reinforcement learning often observed in PTSD patients, and may in turn provide new insight into how pathological behaviors are acquired and maintained in PTSD.
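The abstract does not give the model's implementation details; the following is a minimal illustrative sketch of the kind of delta-rule reinforcement learning model it describes, in which the reinforcement value of the ambiguous no-feedback outcome is a free parameter. The function names, the two-action coding, and the ±1 values for reward and punishment are assumptions for illustration, not the study's actual specification.

```python
import math

def softmax_p(q, beta, a):
    # probability of choosing action a under a softmax rule with inverse temperature beta
    z = [math.exp(beta * qi) for qi in q]
    return z[a] / sum(z)

def neg_log_lik(trials, alpha, beta, r_nofb):
    """Negative log-likelihood of a choice sequence under a delta-rule learner.
    trials: list of (stimulus, action, outcome) triples, with outcome in
    {'reward', 'punish', 'none'}; the reinforcement value of the ambiguous
    'none' (no-feedback) outcome is the free parameter r_nofb."""
    values = {'reward': 1.0, 'punish': -1.0}
    q, nll = {}, 0.0
    for stim, act, out in trials:
        qs = q.setdefault(stim, [0.0, 0.0])   # two actions per stimulus (assumed)
        nll -= math.log(softmax_p(qs, beta, act))
        r = r_nofb if out == 'none' else values[out]
        qs[act] += alpha * (r - qs[act])      # delta-rule update
    return nll
```

Minimizing `neg_log_lik` over `(alpha, beta, r_nofb)` per participant (e.g., by grid search or a numerical optimizer) yields individual parameter estimates; the sign of the fitted `r_nofb` indicates whether a participant treats the no-feedback outcome as closer to reward or to punishment.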

Abstract:
We propose a new concept of generalized differentiation of set-valued maps that captures first order information. This concept encompasses the standard notions of Fréchet differentiability, strict differentiability, calmness and Lipschitz continuity of single-valued maps, and the Aubin property and Lipschitz continuity of set-valued maps. We present calculus rules, sharpen the relationship between the Aubin property and coderivatives, and study how metric regularity and open covering can be refined to have a directional property similar to our concept of generalized differentiation. Finally, we discuss the relationship between the robust form of generalized differentiation and its one-sided counterpart.

Abstract:
We show that a first order problem can approximate solutions of a robust optimization problem when the uncertainty set is scaled, and explore further properties of this first order problem.

Abstract:
The problem of computing saddle points is important in certain problems in numerical partial differential equations and computational chemistry, and is often solved numerically as a minimization over a set of mountain passes. We propose an algorithm that finds saddle points of mountain pass type by locating the bottlenecks of optimal mountain passes. The key step is to minimize the distance between level sets using quadratic models on affine spaces, similar to the strategy in the conjugate gradient algorithm. We discuss parameter choices, convergence results, and how to extend the algorithm to a path-based method. Finally, we perform numerical experiments to test the convergence of our algorithm.

Abstract:
We study how the supporting hyperplanes produced by the projection process can complement the method of alternating projections and its variants for the convex set intersection problem. For the problem of finding the closest point in the intersection of closed convex sets, we propose an algorithm that, like Dykstra's algorithm, converges strongly in a Hilbert space. Moreover, this algorithm converges in finitely many iterations when the closed convex sets are cones in $\mathbb{R}^{n}$ satisfying an alignment condition. Next, we propose modifications of the alternating projection algorithm and prove their convergence. The modified algorithm converges superlinearly in $\mathbb{R}^{n}$ under suitable conditions, and under a conical condition the convergence can be finite. Lastly, we discuss the case where the intersection of the sets is empty.
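For readers unfamiliar with the baseline being compared against, a minimal sketch of Dykstra's algorithm for two convex sets follows. The sets here (a halfplane and the unit disk) are illustrative assumptions, not taken from the paper; unlike plain alternating projections, which find only some point of the intersection, Dykstra's correction terms steer the iterates to the *closest* point.

```python
import math

def proj_halfplane(p):
    # projection onto the halfplane {(x, y): x + y <= 1}
    x, y = p
    t = max(0.0, (x + y - 1.0) / 2.0)
    return (x - t, y - t)

def proj_disk(p):
    # projection onto the closed unit disk centered at the origin
    n = math.hypot(*p)
    return p if n <= 1.0 else (p[0] / n, p[1] / n)

def dykstra(p0, iters=200):
    """Dykstra's algorithm: converges to the projection of p0 onto the
    intersection of the two sets, maintaining a correction term per set."""
    x = p0
    inc_a = (0.0, 0.0)  # correction for the halfplane
    inc_b = (0.0, 0.0)  # correction for the disk
    for _ in range(iters):
        y = proj_halfplane((x[0] + inc_a[0], x[1] + inc_a[1]))
        inc_a = (x[0] + inc_a[0] - y[0], x[1] + inc_a[1] - y[1])
        x = proj_disk((y[0] + inc_b[0], y[1] + inc_b[1]))
        inc_b = (y[0] + inc_b[0] - x[0], y[1] + inc_b[1] - x[1])
    return x
```

For example, projecting `(2, 2)` onto this intersection returns `(0.5, 0.5)`, the nearest point of the halfplane boundary, which already lies inside the disk.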

Abstract:
The Set Intersection Problem (SIP) is the problem of finding a point in the intersection of convex sets. This problem is typically solved by the method of alternating projections. To accelerate the convergence, the idea of using Quadratic Programming (QP) to project a point onto the intersection of halfspaces generated by the projection process was discussed in earlier papers. This paper looks at how one can integrate projection algorithms with an active set QP algorithm. As a byproduct of our analysis, we show how to accelerate an SIP algorithm involving box constraints, and how to extend a version of the Algebraic Reconstruction Technique (ART) while preserving finite convergence. Lastly, we note that the warm-start property of active set QP algorithms is valuable for the problem of projecting onto the intersection of convex sets.

Abstract:
A known first order method for finding a feasible solution to a conic problem is an adapted von Neumann algorithm. We improve the distance reduction step there by projecting onto the convex hull of previously generated points using a primal active set quadratic programming (QP) algorithm. The convergence theory improves when the QPs are as large as possible. For problems in $\mathbb{R}^{2}$, we analyze our algorithm through epigraphs and the monotonicity of subdifferentials. Intuitively, the larger the set projected onto, the better the performance per iteration, and this is indeed seen in our numerical experiments.
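As background, the classical von Neumann algorithm that serves as the starting point can be sketched as follows. This is an illustrative baseline under simplifying assumptions (unit vectors, the feasibility question "is 0 in the convex hull of the columns?"), not the paper's improved method, which replaces the two-point line search below with a QP projection onto the convex hull of several past points.

```python
def von_neumann(cols, iters=100):
    """Classical von Neumann feasibility step: seek a convex combination of
    the given vectors with norm close to zero.  Returns (b, feasible),
    where b is the current combination and feasible is False when a
    separating hyperplane certifies that 0 is not in the convex hull."""
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    b = cols[0]                                  # start at an arbitrary column
    for _ in range(iters):
        a = min(cols, key=lambda c: dot(c, b))   # column with most obtuse angle to b
        if dot(a, b) > 0:                        # all columns on one side: infeasible
            return b, False
        # exact line search: minimize ||(1 - t) b + t a|| over t in [0, 1]
        denom = dot(b, b) - 2 * dot(a, b) + dot(a, a)
        t = max(0.0, min(1.0, (dot(b, b) - dot(a, b)) / denom)) if denom > 0 else 1.0
        b = tuple((1 - t) * bi + t * ai for bi, ai in zip(b, a))
    return b, True
```

Each iteration moves `b` toward the best single column; projecting onto the hull of many previous points, as the abstract proposes, makes larger per-iteration progress at the cost of solving a QP.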

Abstract:
For a real valued function, a point is critical if the derivative there is zero, and a critical point is a saddle point if it is not a local extremum. In this paper, we study algorithms to find saddle points of general Morse index. Our approach is motivated by the multidimensional mountain pass theorem, and extends our earlier level-set-based methods for finding saddle points of mountain pass type. We prove the convergence of our algorithms in the nonsmooth case, and the local superlinear convergence of another algorithm in the smooth finite dimensional case.

Abstract:
The von Neumann-Halperin method of alternating projections converges strongly to the projection of a given point onto the intersection of finitely many closed affine subspaces. We propose acceleration schemes making use of two ideas: firstly, each projection onto an affine subspace identifies a hyperplane of codimension one containing the intersection, and secondly, it is easy to project onto a finite intersection of such hyperplanes. We give conditions under which our accelerations converge strongly. Finally, we perform numerical experiments showing that these accelerations perform well for a matrix model updating problem.
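The two ideas can be illustrated with a small sketch, assuming lines through the origin in $\mathbb{R}^{3}$ as the affine subspaces (an assumption for illustration, not the paper's general setting). Each projection x → y onto a line identifies the hyperplane {z : ⟨x − y, z − y⟩ = 0}, which contains that line and hence the intersection; projecting the original point onto the intersection of the recorded hyperplanes is a small linear solve.

```python
def proj_line(x, d):
    # orthogonal projection of x onto the line span{d}
    s = sum(xi * di for xi, di in zip(x, d)) / sum(di * di for di in d)
    return tuple(s * di for di in d)

def solve(A, b):
    # Gaussian elimination with partial pivoting, for small dense systems
    n = len(b)
    M = [list(row) + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def accelerated_sweep(x0, dirs):
    """One sweep of alternating projections onto lines through the origin,
    recording one hyperplane per projection, then projecting x0 onto the
    intersection of the recorded hyperplanes (a small linear solve)."""
    x, normals, offsets = x0, [], []
    for d in dirs:
        y = proj_line(x, d)
        n = tuple(xi - yi for xi, yi in zip(x, y))   # hyperplane normal
        if any(abs(ni) > 1e-12 for ni in n):         # skip degenerate normals
            normals.append(n)
            offsets.append(sum(ni * yi for ni, yi in zip(n, y)))
        x = y
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    # project x0 onto {z : n_i . z = c_i}: z = x0 - N^T mu, (N N^T) mu = N x0 - c
    G = [[dot(ni, nj) for nj in normals] for ni in normals]
    rhs = [dot(ni, x0) - ci for ni, ci in zip(normals, offsets)]
    mu = solve(G, rhs)
    return tuple(x0j - sum(mi * ni[j] for mi, ni in zip(mu, normals))
                 for j, x0j in enumerate(x0))
```

For the x-axis and the line through (1, 1, 0), whose intersection is the origin, one accelerated sweep from (1, 2, 3) already lands closer to the origin than the plain two-projection sweep does.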

Abstract:
This paper improves the algorithms based on supporting halfspaces and quadratic programming for convex set intersection problems from our earlier paper in several directions. First, we give conditions under which much smaller quadratic programs (QPs), and approximate projections arising from partially solving the QPs, are sufficient for multiple-term superlinear convergence for nonsmooth problems. Second, we identify an additional regularity property, which we call the second order supporting hyperplane property (SOSH), that gives multiple-term quadratic convergence. Third, we show that these fast convergence results carry over to the convex inequality problem. Fourth, we show that infeasibility can be detected in finitely many operations. Lastly, we explain how the dual active set QP algorithm of Goldfarb and Idnani can be used to obtain useful iterates by solving the QPs partially, overcoming the problem of solving large QPs in our algorithms.