Abstract:
In this paper, new sufficient optimality theorems for a solution of a differentiable bilevel multiobjective optimization problem (BMOP) are established. We start with a discussion of solution concepts in bilevel multiobjective programming; a theorem giving necessary and sufficient conditions for a decision vector to be called a solution of the BMOP and a proposition giving the relations between four types of solutions of a BMOP are presented and proved. Then, under pseudoconvexity assumptions on the upper- and lower-level objective functions and quasiconvexity assumptions on the constraint functions, we establish and prove two new sufficient optimality theorems for a solution of a general BMOP with coupled upper-level constraints. Two corollaries of these theorems, for the case where the upper- and lower-level objective and constraint functions are convex, are also presented.
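For reference, the generalized convexity notions invoked here are the standard ones for a differentiable function (textbook definitions, not quoted from the paper):

```latex
% Pseudoconvexity of a differentiable objective f at u:
\nabla f(u)^{\top}(x-u) \ge 0 \;\Longrightarrow\; f(x) \ge f(u),
% Quasiconvexity of a differentiable constraint function g at u:
g(x) \le g(u) \;\Longrightarrow\; \nabla g(u)^{\top}(x-u) \le 0.
```

Every convex differentiable function satisfies both properties, which is why the convex case follows as a corollary.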

Abstract:
This paper is concerned with the optimal distributed control problem governed by the b-equation. We first investigate the existence and uniqueness of a weak solution for the controlled system with appropriate initial and boundary conditions. In contrast with our previous result, the proof does not rely on a viscous coefficient, which is a significant improvement. Secondly, based on the well-posedness result, we find a unique optimal control for the controlled system with a quadratic cost functional. Moreover, by means of optimal control theory, we obtain the necessary and sufficient optimality condition for an optimal control, which is another major novelty of this paper. Finally, we also present the optimality conditions corresponding to two physically meaningful distributed observation cases.

Abstract:
In this paper we derive a sufficiency theorem for an unconstrained fixed-endpoint problem of Lagrange which provides sufficient conditions for processes that do not satisfy the standard assumption of nonsingularity; that is, the new sufficiency theorem does not impose the strengthened condition of Legendre. The proof of the sufficiency result is direct in nature, since it explicitly uses the positivity of the second variation, in contrast with approaches based on generalizations of conjugate points, solutions of certain matrix Riccati equations, invariant integrals, or Hamilton-Jacobi theory.
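For context, the strengthened condition of Legendre that the theorem dispenses with is the classical one for a Lagrange problem with integrand L(t, x, ẋ) (a textbook statement, not taken from the paper):

```latex
% Legendre condition along an extremal (t, x^{*}(t), \dot x^{*}(t)):
L_{\dot x \dot x}\big(t, x^{*}(t), \dot x^{*}(t)\big) \succeq 0,
% Strengthened Legendre condition (nonsingularity), which is NOT assumed here:
L_{\dot x \dot x}\big(t, x^{*}(t), \dot x^{*}(t)\big) \succ 0.
```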

Sufficient Fritz John optimality conditions are obtained for a control problem in which the objective functional is pseudoconvex and the constraint functions are quasiconvex or semi-strictly quasiconvex. A dual to the control problem is formulated using Fritz John-type optimality criteria instead of Karush-Kuhn-Tucker optimality criteria, and hence does not require a regularity condition. Various duality results between the control problem and its proposed dual are validated under suitable generalized convexity requirements. The relationship of our duality results to those for a nonlinear programming problem is also briefly outlined.

Abstract:
In this paper, we introduce a new class of generalized α-univex functions where the involved functions are locally Lipschitz. We extend the concept of α-type I invexity [S. K. Mishra, J. S. Rautela, On nondifferentiable minimax fractional programming under generalized α-type I invexity, J. Appl. Math. Comput. 31 (2009) 317-334] to α-univexity, and an example is provided to show that there exist functions that are α-univex but not α-type I invex. Furthermore, Karush-Kuhn-Tucker-type sufficient optimality conditions and duality results for three different types of dual models are obtained for a nondifferentiable minimax fractional programming problem involving generalized α-univex functions. The results in this paper extend some known results in the literature.
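One common formulation of univexity for a locally Lipschitz f with Clarke subdifferential ∂f is sketched below (assumed notation for illustration; the paper's exact definition of α-univexity may differ in details):

```latex
% f is univex at u with respect to \eta, \Phi, b if, for all x and all \xi \in \partial f(u),
b(x,u)\,\Phi\big(f(x) - f(u)\big) \;\ge\; \langle \xi,\, \eta(x,u) \rangle;
% \alpha-univexity additionally scales the right-hand side by a factor \alpha(x,u) > 0:
b(x,u)\,\Phi\big(f(x) - f(u)\big) \;\ge\; \alpha(x,u)\,\langle \xi,\, \eta(x,u) \rangle.
```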

Abstract:
This short article shows that the functional equation for the equilibrium price function is more complicated than that considered by Lucas [1], and that a modification is required to complete the proof. Furthermore, we provide a sufficient condition that guarantees the uniqueness of the equilibrium price function.

Abstract:
We study a linear delay differential equation with a single positive and a single negative term. We find a necessary condition for the oscillation of all solutions. We also find sufficient conditions for oscillation, which improve the known conditions.
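A typical equation of the class described, with a single positive and a single negative term, has the following standard form (an illustrative sketch; the coefficients p, q and delays τ, σ are assumed notation, not taken from the abstract):

```latex
% Linear delay differential equation with one positive and one negative term:
x'(t) + p(t)\,x(t-\tau) - q(t)\,x(t-\sigma) = 0, \qquad t \ge t_0,
% with p, q \in C([t_0,\infty), [0,\infty)) and delays \tau, \sigma > 0.
```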

This paper provides a method for generalizing the integrator and the integral control action. This is achieved by defining two function sets that generalize the integrator and the integral control action, respectively, by resorting to a stabilizing controller, and by adopting the Lyapunov method to analyze the stability of the closed-loop system. By constructing a powerful Lyapunov function, a universal theorem ensuring regional as well as semi-global asymptotic stability is established under certain boundedness conditions. Consequently, two propositions on the generalization of the integrator and the integral control action are justified. Moreover, the conditions used to define the function sets can be viewed as a class of sufficient conditions for designing the integrator and the integral control action, respectively.

Abstract:
This paper isolates and studies a class of Markov chains with a special quasi-triangular form of the transition matrix (a so-called Δm,n(Δ′m,n)-matrix). Many discrete stochastic processes encountered in applications (queues, inventories, and dams) have transition matrices which are special cases of a Δm,n(Δ′m,n)-matrix. Necessary and sufficient conditions for the ergodicity of a Markov chain with a transition Δm,n(Δ′m,n)-matrix are determined in the article in two equivalent versions. In the first version, these conditions are expressed in terms of certain restrictions imposed on the generating functions Ai(x) of the elements of the i-th row of the transition matrix, i = 0, 1, 2, …; in the second version they are connected with the characterization of the roots of a certain associated function in the unit circle of the complex plane. The results obtained in the article generalize, complement, and refine similar results existing in the literature.

Abstract:
The optimal use of intervention strategies to mitigate the spread of Nipah virus (NiV) is studied in this paper using optimal control techniques. First, we formulate a dynamic model of NiV infection with a variable-size population and two control strategies, where creating awareness and treatment are considered as controls. We intend to find the optimal combination of these two control strategies that will minimize the cost of the two control measures while decreasing the number of infectious individuals. We establish the existence of the optimal controls, and Pontryagin's maximum principle is used to characterize them. Numerical simulation suggests that the optimal control technique is effective in minimizing both the number of infected individuals and the corresponding cost of the two controls. It is also observed that, in the case of a high contact rate, the controls have to act for a longer period of time to achieve the desired result. Numerical simulation further reveals that the spread of Nipah virus can be controlled effectively if the control strategy is applied at an early stage.
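A cost functional for such a two-control problem typically takes a quadratic form like the following (an illustrative sketch; the state I, weights A, B1, B2, control bounds, and horizon T are assumed notation, not taken from the abstract):

```latex
% Minimize infections plus quadratic control costs over admissible controls:
\min_{0 \le u_1(t),\, u_2(t) \le 1} \; J(u_1, u_2)
  = \int_{0}^{T} \Big( A\, I(t) + \tfrac{B_1}{2}\, u_1^{2}(t) + \tfrac{B_2}{2}\, u_2^{2}(t) \Big)\, dt,
% where I(t) is the infectious population, u_1 the awareness control,
% and u_2 the treatment control; Pontryagin's maximum principle then yields
% necessary conditions via the associated Hamiltonian and adjoint system.
```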