%0 Journal Article
%T Service Virtualization Using a Non-von Neumann Parallel, Distributed, and Scalable Computing Model
%A Rao Mikkilineni
%A Giovanni Morana
%A Daniele Zito
%A Marco Di Sano
%J Journal of Computer Networks and Communications
%D 2012
%I Hindawi Publishing Corporation
%R 10.1155/2012/604018
%X This paper describes a prototype implementing a high degree of transaction resilience in distributed software systems using a non-von Neumann computing model that exploits parallelism in computing nodes. The prototype incorporates fault, configuration, accounting, performance, and security (FCAPS) management using a signaling network overlay and allows the dynamic control of a set of distributed computing elements in a network. Each node is a computing entity endowed with self-management and signaling capabilities to collaborate with similar nodes in a network. The separation of parallel computing and management channels allows the end-to-end transaction management of computing tasks (provided by the autonomous distributed computing elements) to be implemented as network-level FCAPS management. While the new computing model is operating-system agnostic, a Linux, Apache, MySQL, PHP/Perl/Python (LAMP) based services architecture is implemented in a prototype to demonstrate end-to-end transaction management with auto-scaling, self-repair, dynamic performance management, and distributed transaction security assurance. The implementation is made possible by a non-von Neumann middleware library providing Linux process management through multi-threaded parallel execution of self-management and signaling abstractions. We did not use hypervisors, virtual machines, or layers of complex virtualization management systems in implementing this prototype.
1. Introduction
The advent of many-core servers with tens and even hundreds of computing cores and high-bandwidth communication among them makes the current generation of server, networking, and storage equipment and their management systems, which have evolved from server-centric and bandwidth-limited architectures, unsuited to efficient use in the next-generation computing infrastructure. It is hard to imagine replicating current TCP/IP-based socket communication, "isolate and fix" diagnostic procedures, and the multiple operating systems (which do not have end-to-end visibility or control of business transactions that span multiple cores, multiple chips, multiple servers, and multiple geographies) inside the next-generation many-core servers without addressing their shortcomings. The many-core servers and processors constitute a network where each node itself is a subnetwork with different bandwidths and protocols (socket-based low-bandwidth communication between servers, InfiniBand or PCI Express bus-based communication across processors in the same server, and shared memory-based low latency
%U http://www.hindawi.com/journals/jcnc/2012/604018/
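
The abstract describes each node as a computing entity whose computing channel and signaling (management) channel run as parallel threads, so that FCAPS-style control acts on the task from outside it. The following is a minimal illustrative sketch of that separation in Python; it is not code from the paper or its middleware library, and the ManagedNode class, the "pause"/"resume"/"stop" commands, and the threading layout are assumptions made purely for illustration.

    import threading
    import queue
    import time

    class ManagedNode:
        """Hypothetical node with separate computing and signaling channels."""

        def __init__(self, name):
            self.name = name
            self.signals = queue.Queue()   # signaling/management channel
            self.running = threading.Event()
            self.running.set()
            self.stopped = False

        def compute(self):
            """Computing channel: performs the node's task when not paused."""
            while not self.stopped:
                self.running.wait()        # blocks while the node is paused
                if self.stopped:
                    break
                time.sleep(0.1)            # placeholder for the actual work

        def manage(self):
            """Signaling channel: reacts to management commands in parallel."""
            while not self.stopped:
                cmd = self.signals.get()
                if cmd == "pause":
                    self.running.clear()
                elif cmd == "resume":
                    self.running.set()
                elif cmd == "stop":
                    self.stopped = True
                    self.running.set()     # unblock compute() so it can exit

        def start(self):
            threading.Thread(target=self.compute, daemon=True).start()
            threading.Thread(target=self.manage, daemon=True).start()

    node = ManagedNode("worker-1")
    node.start()
    node.signals.put("pause")              # management acts without touching the task code
    node.signals.put("resume")
    node.signals.put("stop")

The point of the sketch is only that management commands arrive on a channel independent of the computing workload, mirroring the separation of the signaling overlay from the computing network described in the abstract.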