Abstract:
In this paper, we consider a fitness-level model of a non-elitist, mutation-only genetic algorithm (GA) with tournament selection. The model provides upper and lower bounds for the expected proportion of individuals with fitness above given thresholds. In the case of a GA with bitwise mutation and the OneMax fitness function, the lower bounds are tight when the population size equals one, while the upper bounds are asymptotically tight as the population size tends to infinity. The lower bounds on the expected proportions of sufficiently fit individuals may be obtained from the probability distribution of an appropriate Markov chain. This approach yields polynomial upper bounds on the runtime of an iterated version of the GA on the 2-SAT problem and on a family of Set Cover problems proposed by E. Balas.
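As an illustration of the kind of process analyzed here, a minimal non-elitist, mutation-only GA with tournament selection and bitwise mutation on OneMax might be sketched as follows (all parameter choices are illustrative defaults, not taken from the paper):

```python
import random

def onemax(x):
    """OneMax fitness: the number of 1-bits in the string."""
    return sum(x)

def tournament_select(pop, k=2):
    """Return the fitter of k individuals sampled uniformly without replacement."""
    return max(random.sample(pop, k), key=onemax)

def bitwise_mutate(x, rate=None):
    """Flip each bit independently; the standard rate is 1/n."""
    rate = rate if rate is not None else 1.0 / len(x)
    return [b ^ (random.random() < rate) for b in x]

def nonelitist_ga(n=20, pop_size=10, generations=200, seed=0):
    """Non-elitist GA: every individual of the next generation is a
    mutated tournament winner; no individual survives unchanged."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [bitwise_mutate(tournament_select(pop)) for _ in range(pop_size)]
    return max(onemax(x) for x in pop)
```

Note the non-elitism: the best individual of a generation is not copied over, so fitness can temporarily decrease, which is exactly why lower bounds on the proportion of fit individuals are of interest.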

Abstract:
The fitness-level technique is a simple and old way to derive upper bounds for the expected runtime of simple elitist evolutionary algorithms (EAs). Recently, the technique has been adapted to deduce the runtime of algorithms with non-elitist populations and unary variation operators. In this paper, we show that the restriction to unary variation operators can be removed. This gives rise to a much more general analytical tool which is applicable to a wide range of search processes. As introductory examples, we provide simple runtime analyses of many variants of the Genetic Algorithm on well-known benchmark functions, such as OneMax, LeadingOnes, and the sorting problem.
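The classical fitness-level bound for the elitist (1+1) EA on OneMax, which this line of work extends to non-elitist populations and non-unary operators, reduces to a simple sum (a sketch of the standard textbook argument, not the paper's generalized tool):

```python
import math

def fitness_level_bound(n):
    """Fitness-level upper bound for the (1+1) EA on OneMax.

    From a level with i one-bits, mutation improves with probability at
    least (n - i) * (1/n) * (1 - 1/n)^(n-1) >= (n - i) / (e * n), so the
    expected runtime is at most sum_i e*n/(n-i) = e*n*H_n = O(n log n).
    """
    return sum(math.e * n / (n - i) for i in range(n))
```

For n = 100 this evaluates to roughly e·100·H₁₀₀, consistent with the O(n log n) bound the technique is known for.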

Abstract:
Earlier work proposed a new vision: a heuristic based on biotechnology, a modern branch of biology defined as "any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use" [SCDB, 1992]. Individuals whose genes have been altered through biotechnology techniques are known as transgenic; they can be animals or plants. The changes applied to these individuals are made for a specific purpose, usually to enhance some characteristic the individual already exhibits [CNICE, 2001]. This work incorporates new elements into the first algorithm for transgenic improvement: homology, in both its positive and negative forms, is now used in the search for the fittest individuals.

Abstract:
Perhaps surprisingly, it is possible to predict how long an algorithm will take to run on a previously unseen input, using machine learning techniques to build a model of the algorithm's runtime as a function of problem-specific instance features. Such models have important applications to algorithm analysis, portfolio-based algorithm selection, and the automatic configuration of parameterized algorithms. Over the past decade, a wide variety of techniques have been studied for building such models. Here, we describe extensions and improvements of existing models, new families of models, and -- perhaps most importantly -- a much more thorough treatment of algorithm parameters as model inputs. We also comprehensively describe new and existing features for predicting algorithm runtime for propositional satisfiability (SAT), travelling salesperson (TSP) and mixed integer programming (MIP) problems. We evaluate these innovations through the largest empirical analysis of its kind, comparing to a wide range of runtime modelling techniques from the literature. Our experiments consider 11 algorithms and 35 instance distributions; they also span a very wide range of SAT, MIP, and TSP instances, with the least structured having been generated uniformly at random and the most structured having emerged from real industrial applications. Overall, we demonstrate that our new models yield substantially better runtime predictions than previous approaches in terms of their generalization to new problem instances, to new algorithms from a parameterized space, and to both simultaneously.
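A toy sketch of an empirical performance model in the spirit described here: fit log-runtime as a linear function of instance features via least squares (the paper's actual models, such as random forests over rich feature sets, are far more elaborate; all names and data below are hypothetical):

```python
import numpy as np

def fit_runtime_model(features, runtimes):
    """Fit log-runtime as a linear function of instance features.

    A deliberately simple stand-in for an empirical performance model:
    returns the least-squares weight vector, including a bias term.
    """
    X = np.column_stack([features, np.ones(len(features))])  # append bias column
    w, *_ = np.linalg.lstsq(X, np.log(runtimes), rcond=None)
    return w

def predict_runtime(w, feature_row):
    """Predict runtime for an unseen instance from its feature vector."""
    x = np.append(feature_row, 1.0)
    return float(np.exp(x @ w))
```

Modeling log-runtime rather than raw runtime is a common choice, since algorithm runtimes typically vary over several orders of magnitude across instances.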

Abstract:
Numerous machine learning algorithms contain pairwise statistical problems at their core---that is, tasks that require computations over all pairs of input points if implemented naively. Often, tree structures are used to solve these problems efficiently. Dual-tree algorithms can efficiently solve or approximate many of these problems. Using cover trees, rigorous worst-case runtime guarantees have been proven for some of these algorithms. In this paper, we present a problem-independent runtime guarantee for any dual-tree algorithm using the cover tree, separating out the problem-dependent and the problem-independent elements. This allows us to just plug in bounds for the problem-dependent elements to get runtime guarantees for dual-tree algorithms for any pairwise statistical problem without re-deriving the entire proof. We demonstrate this plug-and-play procedure for nearest-neighbor search and approximate kernel density estimation to get improved runtime guarantees. Under mild assumptions, we also present the first linear runtime guarantee for dual-tree based range search.
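For contrast with the tree-based methods discussed, the naive O(n²) baseline for one such pairwise problem, all-nearest-neighbors, looks like this (an illustrative sketch of the brute-force computation, not the dual-tree algorithm itself):

```python
import math

def nearest_neighbors(points):
    """Naive all-nearest-neighbors: examines every pair of points.

    Dual-tree algorithms answer the same query while pruning whole
    subtrees of point pairs at once, avoiding the quadratic scan.
    """
    result = []
    for i, p in enumerate(points):
        best_j, best_d = None, math.inf
        for j, q in enumerate(points):
            if i == j:
                continue
            d = math.dist(p, q)
            if d < best_d:
                best_j, best_d = j, d
        result.append(best_j)
    return result
```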

Abstract:
Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each one designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, in this work we consider how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. In particular, we analyze the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While its isolated memory-restricted as well as its ranking-based black-box complexity for bit strings of length $n$ is only of order $n/\log n$, the combined model does not allow for algorithms faster than linear in $n$, as can be seen by standard information-theoretic considerations. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the regarded models. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in $o(n \log n)$ iterations.

Abstract:
We prove that any tight frame in Hilbert space can be obtained by the Kaczmarz algorithm. An explicit way of constructing this correspondence is given. The uniqueness of the correspondence is determined.
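The Kaczmarz iteration underlying this correspondence can be sketched in its plain finite-dimensional form, successively projecting the estimate onto the hyperplane of each row of a linear system (the paper's frame-theoretic construction in Hilbert space is not reproduced here):

```python
import numpy as np

def kaczmarz(A, b, sweeps=100):
    """Cyclic Kaczmarz iteration for the linear system Ax = b.

    Each step projects the current estimate onto the hyperplane of
    one equation:  x <- x + (b_i - <a_i, x>) * a_i / ||a_i||^2.
    """
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x
```

When the rows are mutually orthogonal, one sweep already solves the system exactly; in general, the iterates converge to a solution under standard consistency assumptions.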

Abstract:
To address two problems of traditional genetic algorithms, excessively long runtime and a somewhat low rate of finding optimal results, a parallel genetic algorithm based on the cooperation of multiple agents is presented, built on the JADE development platform. Extending a simple genetic algorithm, this algorithm allows client containers to join the running system dynamically. The experimental results show that the algorithm improves on the efficiency of traditional genetic algorithms and increases the success rate of finding the optimal solution.

Abstract:
Twitter is a popular microblogging platform. When users send out messages, other users can forward these messages to their own subgraph of the network. Most research focuses on increasing retweetability from a node's perspective; here, we focus instead on improving message style to increase the chance of a message being forwarded. To this end, we simulate an artificial Twitter-like network in which nodes decide deterministically whether or not to retweet a message. A genetic algorithm is used to optimize message composition so that the reach of a message is increased. Analyzing the algorithm's runtime behavior across a set of different node types, we find that it consistently succeeds in significantly improving the retweetability of a message.

Abstract:
We present a method for reliably determining the lowest energy structure of an atomic cluster in an arbitrary model potential. The method is based on a genetic algorithm, which operates on a population of candidate structures to produce new candidates with lower energies. Our method dramatically outperforms simulated annealing, which we demonstrate by applying the genetic algorithm to a tight-binding model potential for carbon. With this potential, the algorithm efficiently finds fullerene cluster structures up to ${\rm C}_{60}$ starting from random atomic coordinates.