Abstract:
Finding, counting, and listing triangles (three vertices pairwise linked by three edges) in large graphs are fundamental problems that have recently received much attention because of their importance in complex network analysis. We provide here a detailed and unified state of the art on these problems. We note that, until now, authors have paid surprisingly little attention to space complexity, despite its fundamental and practical interest. We give the space complexities of known algorithms and discuss their implications. We then propose improvements of a known algorithm, as well as a new algorithm, which are time optimal for triangle listing and beat previous algorithms concerning space complexity. They have the additional advantage of performing better on power-law graphs, which we also study. We finally show with an experimental study that these two algorithms perform very well in practice, making it possible to handle cases that were previously out of reach.
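To make the listing problem concrete, here is a minimal sketch of a standard time-optimal listing scheme in this line of work: orient every edge from lower to higher rank (by degree, ties broken by id) and intersect out-neighbourhoods. This is an illustration of the general technique, not the exact algorithms proposed in the paper; all names are hypothetical.

```python
from collections import defaultdict

def list_triangles(edges):
    """List each triangle exactly once.

    Orient each edge from its lower-ranked to its higher-ranked
    endpoint (rank = (degree, id)); every triangle then has a unique
    'source' vertex, so intersecting out-neighbour sets lists each
    triangle once.  Runs in O(m^{3/2}) time on m edges.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # Rank vertices by (degree, id); low-degree vertices come first.
    order = sorted(adj, key=lambda v: (len(adj[v]), v))
    rank = {v: i for i, v in enumerate(order)}
    # Keep only edges oriented towards higher rank.
    out = {v: {w for w in adj[v] if rank[w] > rank[v]} for v in adj}
    triangles = []
    for u in adj:
        for v in out[u]:
            for w in out[u] & out[v]:
                triangles.append((u, v, w))
    return triangles
```

For instance, the complete graph on four vertices yields its four triangles, each reported once.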

Abstract:
Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them, however, is generally expensive. We propose here a measure of similarity between vertices, based on random walks, which has several important advantages: it captures the community structure in a network well, it can be computed efficiently, and it can be used in an agglomerative algorithm to compute the community structure of a network efficiently. We propose such an algorithm, called Walktrap, which runs in time O(mn^2) and space O(n^2) in the worst case, and in time O(n^2 log n) and space O(n^2) in most real-world cases (n and m are respectively the number of vertices and of edges in the input graph). Extensive comparison tests show that our algorithm surpasses previously proposed ones concerning the quality of the obtained community structures, and that it stands among the best ones concerning running time.
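The core idea behind a random-walk similarity can be sketched as follows: two vertices in the same community tend to "see" the rest of the graph similarly, so one can compare the probability distributions of short random walks started at each of them, weighting each coordinate by the inverse degree. This is an illustrative pure-Python sketch of that distance, not the optimized computation used by Walktrap itself.

```python
def walk_distance(adj, t=3):
    """Return a distance function between vertices of a graph.

    adj maps each vertex in 0..n-1 to the set of its neighbours.
    The distance between i and j compares the distributions of
    t-step random walks started at i and at j, each coordinate
    weighted by the inverse degree of the target vertex.
    """
    n = len(adj)
    deg = [len(adj[v]) for v in range(n)]
    # One-step transition matrix: uniform move to a neighbour.
    P = [[(1.0 / deg[i]) if j in adj[i] else 0.0 for j in range(n)]
         for i in range(n)]
    # Pt = P^t, computed by repeated multiplication.
    Pt = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(t):
        Pt = [[sum(Pt[i][k] * P[k][j] for k in range(n))
               for j in range(n)] for i in range(n)]

    def dist(i, j):
        return sum((Pt[i][k] - Pt[j][k]) ** 2 / deg[k]
                   for k in range(n)) ** 0.5
    return dist
```

On two triangles joined by a single edge, the distance between two vertices of the same triangle is smaller than between vertices of different triangles, which is exactly the property an agglomerative algorithm exploits.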

Abstract:
In this note, we show that any distributive lattice is isomorphic to the set of reachable configurations of an Edge Firing Game. Together with a result of James Propp, which states that the set of reachable configurations of any Edge Firing Game is always a distributive lattice, this shows that the two concepts are equivalent.

Abstract:
Dense subgraphs of sparse graphs (communities), which appear in most real-world complex networks, play an important role in many contexts. Computing them, however, is generally expensive. We propose here a measure of similarity between vertices, based on random walks, which has several important advantages: it captures the community structure in a network well, it can be computed efficiently, it works at various scales, and it can be used in an agglomerative algorithm to compute the community structure of a network efficiently. We propose such an algorithm, which runs in time O(mn^2) and space O(n^2) in the worst case, and in time O(n^2 log n) and space O(n^2) in most real-world cases (n and m are respectively the number of vertices and of edges in the input graph). Experimental evaluation shows that our algorithm surpasses previously proposed ones concerning the quality of the obtained community structures, and that it stands among the best ones concerning running time. This is very promising because our algorithm can be improved in several ways, which we sketch at the end of the paper.

Abstract:
Complex networks, modeled as large graphs, have received much attention in recent years. However, data on such networks is only available through intricate measurement procedures. Until recently, most studies assumed that these procedures eventually lead to samples large enough to be representative of the whole, at least concerning some key properties. This has a crucial impact on network modeling and simulation, which rely on these properties. Recent contributions proved that this approach may be misleading, but no solution has been proposed. We provide here the first practical way to distinguish between cases where it is indeed misleading and cases where the observed properties may be trusted. It consists of studying how the properties of interest evolve when the sample grows, and in particular whether they reach a steady state or not. In order to illustrate this method and demonstrate its relevance, we apply it to datasets on complex network measurements that are representative of the ones commonly used. The obtained results show that the method fulfills its goals very well. We moreover identify some properties which seem easier to evaluate in practice, thus opening interesting perspectives.

Abstract:
Captures of IP traffic contain much information on very different kinds of activities, such as file transfers, users interacting with remote systems, automatic backups, or distributed computations. Identifying such activities is crucial for appropriate analysis, modeling, and monitoring of the traffic. We propose here a notion of density that captures both the temporal and the structural features of interactions, and that generalizes the classical notion of clustering coefficient. We use it to point out important differences between distinct parts of the traffic, and to identify interesting nodes and groups of nodes in terms of their roles in the network.
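For reference, the classical (purely structural) notion that the density generalizes is the local clustering coefficient: the fraction of pairs of a vertex's neighbours that are themselves linked. A minimal sketch (the paper's temporal density itself is not reproduced here):

```python
def clustering_coefficient(adj, v):
    """Local clustering coefficient of vertex v.

    adj maps each vertex to the set of its neighbours.  Returns the
    fraction of pairs of neighbours of v that are linked to each
    other (0.0 when v has fewer than two neighbours).
    """
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count linked pairs among the neighbours of v.
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))
```

For example, in a triangle with one pendant vertex attached, the triangle vertices without the pendant have coefficient 1, while the vertex carrying the pendant has coefficient 1/3.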

Abstract:
We address here the problem of generating random graphs uniformly from the set of simple connected graphs having a prescribed degree sequence. Our goal is to provide an algorithm designed for practical use, both because of its ability to generate very large graphs (efficiency) and because it is easy to implement (simplicity). We focus on a family of heuristics for which we prove optimality conditions, and we show how this optimality can be reached in practice. We then propose a different approach, specifically designed for typical real-world degree distributions, which outperforms the first one. Assuming a conjecture, which we state and support with rigorous arguments, we finally obtain a log-linear algorithm which, in spite of being very simple, improves on the best known complexity.
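The classical building block in this area is the double edge swap, which randomizes a graph while preserving every vertex degree: pick two edges (a,b) and (c,d) and replace them with (a,d) and (c,b) whenever this creates no self-loop or multi-edge. The sketch below shows this basic Markov-chain step only; it does not reproduce the paper's connectivity-preserving heuristics or its optimality analysis.

```python
import random

def edge_swap_shuffle(edges, swaps):
    """Shuffle a simple graph by double edge swaps.

    Each attempted swap picks two distinct edges (a, b) and (c, d)
    and replaces them with (a, d) and (c, b), unless doing so would
    create a self-loop or a multi-edge.  Every vertex keeps its
    degree, so the degree sequence is preserved exactly.
    """
    edges = [tuple(e) for e in edges]
    present = {frozenset(e) for e in edges}
    for _ in range(swaps):
        i, j = random.sample(range(len(edges)), 2)
        a, b = edges[i]
        c, d = edges[j]
        if len({a, b, c, d}) < 4:
            continue  # swap would create a self-loop
        if frozenset((a, d)) in present or frozenset((c, b)) in present:
            continue  # swap would create a multi-edge
        present -= {frozenset((a, b)), frozenset((c, d))}
        present |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges
```

After any number of swaps, the degree of every vertex is unchanged and the graph remains simple; preserving connectivity, which the paper requires, needs additional care.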

Abstract:
The degree distribution of the Internet topology is considered one of its main properties. However, it is only known through a measurement procedure which gives a biased estimate. This measurement may, to a first approximation, be modeled by a BFS (Breadth-First Search) tree. We explore here our ability to infer the type (Poisson or power-law) of the degree distribution from such limited knowledge. We design procedures which estimate the degree distribution of a graph from a BFS of it, and we show experimentally (on models and on real-world data) that this approach succeeds in distinguishing between Poisson and power-law degree distributions.
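The bias in question can be illustrated directly: a BFS tree keeps, for each vertex, only the edge through which it was first discovered, so degrees observed in the tree underestimate true degrees. A minimal sketch of extracting the tree degrees (the paper's estimation procedures, which correct this bias, are not reproduced here):

```python
from collections import Counter, defaultdict, deque

def bfs_tree_degrees(adj, root):
    """Histogram of degrees observed in the BFS tree from `root`.

    adj maps each vertex to a list of its neighbours.  Only the
    discovery edges are kept, so the returned degrees are biased
    downwards with respect to the true degrees.
    """
    tree = defaultdict(set)
    seen = {root}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                tree[u].add(v)   # keep only the discovery edge
                tree[v].add(u)
                queue.append(v)
    return Counter(len(tree[v]) for v in seen)
```

On a 4-cycle, for instance, the BFS tree has only three of the four edges, so two vertices appear with degree 1 although every true degree is 2.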

Abstract:
Many real-world complex networks are in fact bipartite: their nodes may be separated into two classes, with links between nodes of different classes only. Despite this, and despite the fact that many ad hoc tools have been designed for the study of special cases, very few exist to analyse (describe, extract relevant information from) such networks in a systematic way. We propose here an extension to the bipartite case of the most basic notions used nowadays to analyse classical complex networks. To achieve this, we introduce a set of simple statistics, which we discuss by comparing their values on a representative set of real-world networks and on their random versions.
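One simple statistic of this kind, often used for two nodes on the same side of a bipartite graph, is the overlap (Jaccard similarity) of their neighbourhoods on the other side; this plays a role analogous to the clustering coefficient of classical networks. A minimal sketch, not necessarily the exact statistics of the paper:

```python
def bipartite_overlap(adj, u, v):
    """Neighbourhood overlap of two same-side nodes of a bipartite graph.

    adj maps each node of one side to the set of its neighbours on
    the other side.  Returns |N(u) ∩ N(v)| / |N(u) ∪ N(v)|, i.e. 1.0
    when u and v have identical neighbourhoods and 0.0 when they
    share no neighbour.
    """
    nu, nv = set(adj[u]), set(adj[v])
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0
```

For instance, two "top" nodes linked to {1, 2, 3} and {2, 3, 4} respectively have overlap 2/4 = 0.5.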

Abstract:
It has recently become apparent that the classical random graph model used to represent complex networks does not capture their main properties (clustering, degree distribution). Since then, various attempts have been made to provide network models having these properties. We propose here the first model which meets the following challenges: it produces networks which have the three main desired properties, it is based on real-world observations, and it is simple enough to make it possible to prove its main properties. We first give an overview of the field by presenting the main models introduced until now; then we discuss some observations on complex networks which lead us to the definition of our model. We then show that the model has the expected properties and that it can actually be seen as a general model for complex networks.