The perception of emotions and the recognition of facial expressions play a critical role in human social interaction. Faces communicate a great deal of information, including dynamic features, such as an individual’s internal emotional state, and static features, such as a person’s identity. Two major views have emerged from the investigation of how facial expressions are perceived and processed: the discrete category view and the dimensional view. According to the discrete category view, basic facial expressions convey discrete and specific emotions: anger, happiness, surprise, fear, disgust, and sadness. Conversely, the dimensional view suggests that the mental representation of emotional space consists of continuous underlying dimensions, in which similar emotions cluster together while dissimilar ones lie far apart. Although both theories postulate that affective information is resistant to contextual influences, research on this topic has provided reasons to believe that the relationship between facial expressions and their contexts may play an important role in determining the perceived emotion. Similarly, studies examining the right hemisphere and the fusiform face area (FFA) have led researchers to suggest that factors other than the presence of faces, such as experience and training, can also activate the FFA. This review examines the role of facial expressions in everyday life and the two opposing theories of how facial expressions are perceived and processed in the brain. Specifically, it explores the malleability of emotion perception and face recognition and the brain regions involved in emotion.