Abstract:
Without observational or theoretical modifications, Newtonian gravitation and general relativity seem unable to explain the gravitational behavior of the large-scale structure of the universe. The assumption of dark matter solves this problem without modifying the theories, but it implies that most of the matter in the universe is unobserved. Another solution is to modify the laws of gravitation. In this article, we study a third way that modifies neither gravitation nor the distribution of matter, by using a new physical assumption about clusters. Compared with Newtonian gravitation, general relativity (in its linearized approximation) adds a new component without changing the gravity field. As already known, this component is too small for galaxies alone to explain dark matter. But we will see that galaxy clusters can generate a significant component that embeds large structures of the universe. We show that the magnitude of this embedding component is small enough to agree with current experimental results: undetectable at our scale, but detectable at the scale of galaxies, where it explains dark matter, in particular the rotation speed of galaxies, the rotation speed of dwarf satellite galaxies, the expected quantity of dark matter inside galaxies, and the expected experimental values of the dark-matter parameter $\Omega_{dm}$ measured in the CMB. This solution implies testable consequences that differentiate it from other theories: dark matter decreasing with the distance to the cluster's center; a large quantity of dark matter for galaxies close to the cluster's center; isolated galaxies without dark matter; movement of dwarf satellite galaxies in planes close to the supergalactic plane; close orientations of the spin vectors of two close clusters; orientation of nearly all the spin vectors of the galaxies of a same cluster in a same half-space; the existence of very rare galaxies with two portions of their disk rotating in opposite directions...
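As a rough numerical illustration of how a uniform embedding gravitic field would modify a rotation curve, the sketch below uses assumed round values (a point-mass galaxy of about 10^11 solar masses and an illustrative uniform gravitic field K1 ~ 3×10^-17 s^-1; neither number is taken from this article) and solves the circular-motion balance v²/r = GM/r² + K1·v, the form such a magnetic-like linearized-GR term would take for an orbit perpendicular to the field:

```python
import math

# Illustrative sketch (all values assumed, not from the article):
# circular-orbit balance with an extra uniform gravitic-field term,
#   v^2 / r = G*M / r^2 + K1 * v,
# solved as a quadratic in v:
#   v = (K1*r + sqrt(K1^2*r^2 + 4*G*M/r)) / 2.
G = 6.674e-11    # m^3 kg^-1 s^-2
M = 2.0e41       # kg, ~1e11 solar masses (assumed)
K1 = 3.0e-17     # s^-1, illustrative uniform gravitic field (assumed)

def v_newton(r):
    """Purely Newtonian circular speed."""
    return math.sqrt(G * M / r)

def v_with_gravitic(r):
    """Circular speed including the K1*v term."""
    return (K1 * r + math.sqrt((K1 * r) ** 2 + 4 * G * M / r)) / 2

for r_kpc in (5, 20, 50):
    r = r_kpc * 3.086e19  # kpc -> m
    print(r_kpc, round(v_newton(r) / 1e3), round(v_with_gravitic(r) / 1e3))
```

With these assumed numbers the Newtonian speed falls from roughly 294 to 93 km/s between 5 and 50 kpc, while the extra term keeps the speed near 120 km/s at 50 kpc; at still larger radii the K1 term dominates and the curve stays flat or rises, which is the qualitative behavior invoked in the abstract.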

A recent publication revealed unexpected observations about dark matter. In particular, the observed baryonic mass is probably sufficient to explain the observed rotation curves (i.e. without dark matter), and these observations gave an empirical relation for weak accelerations. The present work demonstrates that the equations of general relativity allow explaining the term of dark matter (without new matter) in agreement with the results of this publication, and allow retrieving this empirical relation (both the observed values and the characteristics of the correlation's curve). These observations drastically constrain the gravitational potential that can explain the term of dark matter in the frame of general relativity. This theoretical solution has already been studied, with several unexpected predictions that have recently been observed. For example, an article revealed that early galaxies (ten billion years ago) didn't have dark matter, and a more recent paper showed unlikely alignments of galaxies. Finally, the main prediction of this solution is recalled: the term of dark matter should appear as a Lense-Thirring effect, around the Earth, of between about 0.3 and 0.6 milliarcseconds per year.

In a previous paper, we demonstrated that linearized general relativity could explain dark matter (the rotation speed of galaxies, the rotation speed of dwarf satellite galaxies, the movement of dwarf satellite galaxies in a plane, the decreasing quantity of dark matter with the distance to the center of the galaxies' cluster, the expected quantity of dark matter inside galaxies, and the expected experimental values of the dark-matter parameter Ω_{dm} measured in the CMB). Compared with Newtonian gravitation, it leads to taking into account the second component of gravitation (the gravitic field, imposed by general relativity and also known as gravitomagnetism) without changing the gravity field. In this explanation, dark matter would be a uniform gravitic field, generated by the clusters, that embeds some very large areas of the universe. In this article we are going to see that this specific gravitic field, despite its weakness, could soon be detectable, allowing this explanation of dark matter to be tested. It should generate a slight discrepancy in the expected measure of the Lense-Thirring effect of the Earth. In this theoretical frame, the Lense-Thirring effect of the "dark matter" would, in the best case, be between around 0.3 and 0.6 milliarcseconds per year. In the LAGEOS and Gravity Probe B experiments, there was not enough precision (around 0.3% for the expected 6606 mas·yr^{-1} geodetic precession and around 19% for the expected 39 mas·yr^{-1} frame-dragging precession). In the GINGER experiment, there could be enough; the expected accuracy would be around 1%. If this discrepancy were verified, it would be the first direct measure of dark matter.
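The Gravity Probe B figures quoted above can be recovered from the standard de Sitter and Schiff formulas. The sketch below is an order-of-magnitude check only; the orbit radius, Earth's moment of inertia and spin rate are assumed round values, not taken from this article:

```python
import math

# Rough check of the Gravity Probe B predictions (assumed inputs):
# geodetic (de Sitter) precession:  Omega_geo = 3*G*M*v / (2*c^2*r^2)
# frame-dragging (Lense-Thirring), orbit-averaged over a circular
# polar orbit:                      Omega_fd  = G*J / (2*c^2*r^3)
G = 6.674e-11
c = 2.99792458e8
GM = 3.986004418e14           # m^3/s^2, Earth
r = 7027.4e3                  # m, GP-B semi-major axis (assumed)
J = 8.034e37 * 7.2921e-5      # kg m^2/s, Earth spin angular momentum
                              # (assumed moment of inertia and spin rate)

RAD_PER_S_TO_MAS_PER_YR = (180 / math.pi) * 3600e3 * 365.25 * 86400

v = math.sqrt(GM / r)         # circular orbital speed
geo = 3 * GM * v / (2 * c**2 * r**2) * RAD_PER_S_TO_MAS_PER_YR
fd = G * J / (2 * c**2 * r**3) * RAD_PER_S_TO_MAS_PER_YR
# With these assumed inputs: ~6605 and ~41 mas/yr
# (quoted predictions: 6606 and 39 mas/yr).
print(round(geo), round(fd, 1))
```

A 0.3–0.6 mas/yr extra drift from an embedding gravitic field would thus sit near the ~1% level of the 39 mas/yr frame-dragging signal, which is why a percent-level experiment such as GINGER could be decisive.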

In a previous paper, we demonstrated that linearized general relativity could explain dark energy (the experimental values of the parameter Ω_{Λ}, the cosmological constant, and the recent acceleration of the expansion of our Universe), offering an amazing image of our universe at an incredible scale. This explanation of dark energy relies on the assumption of the existence of a negative gravitational mass (always with a positive inertial mass), meaning that gravitation could be repulsive. This article demonstrates that this assumption is not only compliant with general relativity, but that the repulsive gravitational interaction is even inscribed in the equations of general relativity. The absence of negative gravitational mass should then be justified, because nothing forbids its existence and yet repulsive gravitation has never been observed. This natural possibility of general relativity must then be avoided by adding an ad hoc paradigm. In a way, the principle of equivalence of masses indirectly plays this role. We will show why this principle can be verified with great accuracy, but we also propose experiments that could violate it, allowing at the same time the rejection of its status as a theoretical principle. This frame of explanation (general relativity released from this ad hoc constraint) then opens the way to the negative gravitational mass with its natural corollary, the repulsive gravitational interaction, and to the following major predictions: antimatter should have a negative gravitational mass; the neutrino should not be a Majorana particle; the principle of equivalence of masses should be violated for antiprotonic helium; the apparent disappearance of antimatter could be explained. We recall some other consequences: an "initial" cosmic inflation would be unavoidable, and dark energy (or the cosmological constant) might not be constant in time (causing the accelerating universe). Several experiments are testing some of these predictions: the NEMO experiment tests whether the neutrino is a Majorana particle, and the AEgIS, ALPHA and GBAR experiments at CERN test the behavior of the gravitational interaction on antimatter and the sign of its gravitational mass. First results could be obtained in 2018. Experiments are proposed to test the violation of the principle of equivalence of the masses.

The star SO-2 at the galactic center will soon be at its closest distance to the supermassive black hole (SMBH), which will allow measuring relativistic effects. In [1], dark matter is explained by the second component (gravitic field) of general relativity generated by the clusters. In this theoretical frame, the gravitic field of the galaxies cannot explain dark matter at their ends, but it nevertheless seems possible that this gravitic field is in general underestimated. In the current paper, we study the component of SO-2's gravitational redshift due to the gravitic field of the galactic center (Z_{H}), compared to the expected gravitational redshift due to the gravity field (Z_{G}~3×10^{-4}). The value of the gravitic field of the SMBH is not known, but depending on its value, four cases (all in agreement with general relativity) can be obtained. If no discrepancy is measured on the gravitational redshift of SO-2 (Z_{H}≲10^{-5}), it will mean that the gravitic field at the center of the Galaxy is too weak to be measured and, as expected, that the gravity field dominates. If a discrepancy of around Z_{H}~10^{-5} is measured, the gravitic field at the Galaxy center will be greater than expected, but its effect will still be inferior to that of the gravity field. With a measure of around Z_{H}~10^{-4}, the discrepancy could still be explained in agreement with general relativity; it will mean that the effect of the gravitic field at the Galaxy center is greater than expected and can even be of the same order of magnitude as the effect of the gravity field, and the calculation of the mass could have to be revised. If a discrepancy of around Z_{H}~10^{-3} is measured, it could still be explained in agreement with general relativity; it will mean that the effect of the gravitic field at the Galaxy center is greater than expected and even greater than the effect of the gravity field, and the calculation of the mass will have to be revised.
In the three latter cases, this discrepancy will be a measure of the gravitic field of the SMBH and would be an important clue indirectly corroborating the explanation of dark matter as the effect of the gravitic-field term. Finally, if the discrepancy is larger still, it will be more difficult to explain in the frame of general relativity (even in the frame of the explanation of dark matter as the effect of the gravitic-field term).
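The order of magnitude Z_G ~ 3×10^{-4} quoted above can be checked from the standard weak-field gravitational redshift Z_G ≈ GM/(rc²). The sketch below uses assumed round values for the SMBH mass (~4×10^6 solar masses) and the SO-2 pericenter distance (~120 AU), neither taken from this article:

```python
# Order-of-magnitude check of the gravity-field redshift of SO-2
# at pericenter (assumed round values, not from this article):
#   Z_G ~ G*M / (r * c^2)
G = 6.674e-11
c = 2.99792458e8
M = 4.0e6 * 1.989e30     # kg, SMBH mass ~4e6 solar masses (assumed)
r = 120 * 1.496e11       # m, SO-2 pericenter ~120 AU (assumed)

Z_G = G * M / (r * c * c)
print(f"{Z_G:.1e}")      # ~3e-4, matching the quoted Z_G
```

A discrepancy Z_H at the 10^{-5}, 10^{-4} or 10^{-3} level would then be respectively about 3%, 30% or 300% of this gravity-field redshift, which is what separates the four cases above.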

Abstract:
May 1968 seems to be on everyone's lips and in everyone's minds these days. One can no longer keep count of the books published on the question, more or less opportunistic. The tone of the debates generally oscillates between two extremes: on one side, nostalgia for an era when hope carried the power of better futures, less stilted and rigid; on the other, the recognition of a set of phenomena whose historical existence is undeniable, but which is now bygone, or even to be liqu...

Abstract:
At a recent conference devoted to street-cleaning trades and the management of waste in public space, most of the speakers, if not all, shared the idea that working on such questions made it possible to study fundamental and fascinating problems, at the price of a certain risk of academic marginalization. From a symbolic point of view (and a spatial one, if we consider the dynamics of the academic field that this implies), the researcher would in fact pay his t...

Abstract:
Many authors have described and analyzed games as resistant or subversive work practices providing information about the kind of autonomy that a group will manifest in its dealings with a work organisation. Starting with this kind of thinking, embedded in two contrasting empirical fields (a street-sweeping depot, a call centre), the focus here is on the implications for employees of managers' use of "playful" mechanisms aimed at getting people more involved in their work.

Abstract:
Game theory is usually considered applied mathematics, but a few game-theoretic results, such as Borel determinacy, were developed by mathematicians for mathematics in a broad sense. These results usually state determinacy, i.e. the existence of a winning strategy in games that involve two players and two outcomes saying who wins. In a multi-outcome setting, the notion of winning strategy is irrelevant, yet it is usually faithfully replaced with the notion of (pure) Nash equilibrium. This article shows that every determinacy result over an arbitrary game structure, e.g. a tree, is transferable into the existence of a multi-outcome (pure) Nash equilibrium over the same game structure. The equilibrium-transfer theorem requires cardinal or order-theoretic conditions on the strategy sets and the preferences, respectively, whereas counter-examples show that every requirement is relevant, albeit possibly improvable. When the outcomes are finitely many, the proof provides an algorithm computing a Nash equilibrium without significant complexity loss compared to the two-outcome case. As examples of application, this article generalises Borel determinacy, positional determinacy of parity games, and finite-memory determinacy of Muller games.
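To illustrate how tree-based reasoning yields a (pure) Nash equilibrium with more than two outcomes, the toy sketch below (game tree and preferences are invented for illustration) runs backward induction on a finite two-player game tree; on finite trees the resulting play is subgame-perfect, hence a Nash equilibrium. The article's transfer theorem of course covers far more general structures, such as Borel games:

```python
def backward_induction(node, prefs):
    """node: ("leaf", outcome) or ("choice", player, [children]).
    prefs: player -> function ranking outcomes (higher = preferred).
    Returns (outcome, play): the induced outcome and chosen branches."""
    if node[0] == "leaf":
        return node[1], []
    _, player, children = node
    best = None
    for i, child in enumerate(children):
        outcome, play = backward_induction(child, prefs)
        if best is None or prefs[player](outcome) > prefs[player](best[0]):
            best = (outcome, [i] + play)
    return best

# Two players with opposed multi-outcome preferences (invented example).
tree = ("choice", 0, [("choice", 1, [("leaf", "a"), ("leaf", "c")]),
                      ("leaf", "b")])
prefs = {0: "cba".index,   # player 0 prefers a over b over c
         1: "abc".index}   # player 1 prefers c over b over a
print(backward_induction(tree, prefs))  # -> ('b', [1])
```

Anticipating that player 1 would steer the left subtree to the outcome c, player 0 takes the right branch to b instead; neither player can improve by deviating alone, which is exactly the Nash property.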

Abstract:
The quest for optimal/stable paths in graphs has gained attention in a few practical or theoretical areas. To take part in this quest, this chapter adopts an equilibrium-oriented approach that is abstract and general: it works with (quasi-arbitrary) arc-labelled digraphs, and it assumes very little about the structure of the sought paths and the definition of equilibrium, \textit{i.e.} optimality/stability. In this setting, the chapter presents a sufficient condition for equilibrium existence for every graph, as well as a necessary condition for equilibrium existence for every graph. The necessary condition does not imply the sufficient condition a priori. However, the chapter pinpoints their logical difference and thus identifies what work remains to be done. Moreover, the necessary and the sufficient conditions coincide when the definition of optimality relates to a total order, which provides a full-equivalence property. These results are applied to network routing.
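For the special case mentioned last, where optimality is induced by a total order (here: sum of non-negative arc weights, smaller is better), optimal paths always exist and are computable. The sketch below (graph and weights are invented, and this concrete algorithm is only one instance of the abstract setting) finds such a path with Dijkstra's algorithm on an arc-labelled digraph:

```python
import heapq

def best_path(arcs, src, dst):
    """arcs: dict node -> list of (neighbour, weight) pairs.
    Returns (total weight, path) for a minimum-weight src->dst path,
    or None if dst is unreachable."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in arcs.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return None

# Invented arc-labelled digraph.
arcs = {"s": [("a", 1), ("b", 4)], "a": [("b", 1), ("t", 5)],
        "b": [("t", 1)]}
print(best_path(arcs, "s", "t"))  # -> (3, ['s', 'a', 'b', 't'])
```

The total order on accumulated weights is what makes the greedy extraction from the priority queue sound; with only a partial order on labels, as in the chapter's general setting, existence of such equilibrium paths is precisely what the sufficient condition has to guarantee.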