A computer-aided interpretation approach is proposed to detect rheumatoid arthritis (RA) in optical tomographic images of human finger joints. The image interpretation method employs a multivariate signal detection analysis aided by a machine-learning classification algorithm called the Self-Organizing Map (SOM). Unlike previous studies, this approach allows multiple physical image parameters, such as minimum and maximum values of the absorption coefficient, to be combined for identifying affected and unaffected joints. Classification performance of the proposed method was evaluated in terms of sensitivity, specificity, Youden index, and mutual information. Different methods (i.e., clinical diagnostics, ultrasound imaging, magnetic resonance imaging, and visual inspection of optical tomographic images) were used as ground-truth benchmarks to determine the performance of the image interpretations. Using data from 100 finger joints, the findings suggest that some parameter combinations lead to higher sensitivities, while others lead to higher specificities, when compared with the single-parameter classifications employed in previous studies. Maximum performance was reached when combining the minimum/maximum ratio and image variance, with ultrasound as the benchmark; in this case, a sensitivity of 0.94 and a specificity of 0.96 were achieved. These values are much higher than results reported when (a) other classification techniques were applied or (b) single-parameter classifications were used, where sensitivities and specificities of 0.71 were achieved.
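The evaluation metrics named above can be illustrated with a short sketch (not the authors' code). It computes sensitivity, specificity, Youden index, and mutual information from a 2x2 confusion matrix; the example counts are hypothetical, chosen only to reproduce the 0.94/0.96 operating point on 100 joints.

```python
import math

def evaluate(tp, fn, fp, tn):
    """Return sensitivity, specificity, Youden index, and mutual
    information (in bits) for binary classification counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    youden = sensitivity + specificity - 1

    # Mutual information between true and predicted labels, estimated
    # from the joint distribution implied by the confusion matrix.
    n = tp + fn + fp + tn
    joint = [[tp / n, fn / n],            # row: truly affected
             [fp / n, tn / n]]            # row: truly unaffected
    p_true = [sum(row) for row in joint]            # marginal P(true label)
    p_pred = [sum(col) for col in zip(*joint)]      # marginal P(predicted label)
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i][j] > 0:
                mi += joint[i][j] * math.log2(joint[i][j] / (p_true[i] * p_pred[j]))
    return sensitivity, specificity, youden, mi

# Hypothetical counts for 100 joints: 47 true positives, 3 false
# negatives, 2 false positives, 48 true negatives.
sens, spec, youden, mi = evaluate(tp=47, fn=3, fp=2, tn=48)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"Youden={youden:.2f} MI={mi:.3f} bits")
```

A perfect classifier yields a Youden index of 1 and mutual information equal to the entropy of the true labels, while chance-level classification yields 0 for both, which is why the abstract reports them alongside sensitivity and specificity.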