Further analyses revealed crucial phonetic features potentially causing the effect of sound on meaning: For instance, words with short vowels, voiceless consonants, and hissing sibilants (as in ‘piss’) feel more arousing and negative. Our findings suggest that the process of meaning making is not solely determined by arbitrary mappings between formal aspects of words and concepts they refer to. Rather, even in silent reading, words’ acoustic profiles provide affective perceptual cues that language users may implicitly use to construct words’ overall meaning.
Why 'piss' is ruder than 'pee'? The role of sound in affective meaning making
https://www.mdpi.com/2076-3425/9/3/53
Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music
music is centripetal in directing the listener’s attention to the auditory material itself. Sound, therefore, can be considered as the meeting point between speech and music and the question can be raised as to the shared components between the interpretation of sound in the domain of speech and music.
“sonants and gruffs” and which may be considered as structural opposites of these arousal-increasing sounds [74]. Instead of being unpatterned and chaotic, they are tonal and harmonically rich, with a more diffuse, regularly patterned broadband spectral structure. Rather than having a direct impact on the listener’s arousal and affect, they seem to carry a less inherent affective force. Their richly structured spectra, moreover, make them well suited for revealing clear cues to the caller’s identity, since their individual idiosyncrasies impart individually distinctive voice cues that are associated either with the dynamic action of the vocal folds or with the resonance properties of the vocal tract cavities [86,87]. Chimpanzees, likewise, are able to intentionally use grunts as referential calls and to learn new calls from other individuals [54], which most probably represents an early stage of the evolution of lexical meaning (but see [88]).
This foundation can explain the universal tendency first observed by Köhler [116] (pp. 224-225) to associate pseudowords, such as takete or kiki, with spiky shapes whereas maluma or bouba are associated with round shapes [117]. It has been shown, moreover, that the communicative importance of the affective influence of vocal signals does not disappear when brains get larger and their potential for cognitive, evaluative control of behavior increases.
a relation between high front vowels, that is, vowels with a high value for formant dispersion, and an emotional tone that is characterized by a positive valence, a feeling of weakness or submissiveness, and an active or aroused state. In contrast, low back vowels, that is, vowels with a low formant dispersion, should be associated with a negative emotional valence, dominance, and calmness.
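For context, formant dispersion is commonly quantified (following Fitch) as the average spacing between adjacent formant frequencies, Df = (Fn − F1)/(n − 1); whether the paper uses exactly this formula is an assumption here. A minimal sketch of that calculation, with approximate textbook formant values rather than measurements from the study:

```python
import numpy as np

def formant_dispersion(formants_hz):
    """Average spacing between adjacent formants: Df = (Fn - F1) / (n - 1)."""
    f = np.sort(np.asarray(formants_hz, dtype=float))
    return (f[-1] - f[0]) / (len(f) - 1)

# Approximate textbook formant values (Hz), for illustration only:
high_front_i = [280, 2250, 2890]   # /i/ as in "pee": widely spread formants
low_back_o   = [570, 840, 2410]    # /ɔ/ as in "thought": tightly packed F1/F2

print(formant_dispersion(high_front_i))  # larger dispersion
print(formant_dispersion(low_back_o))    # smaller dispersion
```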
a strong right lateralization for affective prosody perception within the STC [7,15,17,25,44,46], which is consistent with the finding that the right hemisphere is more sensitive to the slowly varying acoustic profiles of emotions (e.g., tempo and pausing).
The processing of emotional vocalizations and music seemed to involve common neural mechanisms. Notation derived from the acoustic signals showed that emotionally negative stimuli tended to be in a minor key and positive stimuli in a major key, thus shedding some light on the brain’s ability to understand music.
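The excerpt mentions inferring major versus minor key from the acoustic signal. One standard way to do this (an assumption here, not necessarily the study’s method) is Krumhansl–Schmuckler key finding: correlate the piece’s 12-bin pitch-class (chroma) distribution with the 24 rotated major/minor key profiles and take the best match. A minimal sketch with a made-up chroma vector:

```python
import numpy as np

# Krumhansl-Kessler probe-tone profiles for C major and C minor.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def estimate_key(chroma):
    """Correlate a 12-bin pitch-class distribution with all 24 rotated
    key profiles and return the best-matching key (Krumhansl-Schmuckler)."""
    chroma = np.asarray(chroma, dtype=float)
    best_r, best_key = -np.inf, None
    for tonic in range(12):
        for name, profile in (("major", MAJOR), ("minor", MINOR)):
            r = np.corrcoef(chroma, np.roll(profile, tonic))[0, 1]
            if r > best_r:
                best_r, best_key = r, f"{NOTES[tonic]} {name}"
    return best_key

# Made-up chroma vector weighted towards C, Eb and G (a minor-tinged profile):
chroma = [0.20, 0.02, 0.10, 0.15, 0.03, 0.11,
          0.02, 0.18, 0.07, 0.03, 0.06, 0.03]
print(estimate_key(chroma))
```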
This study shows a causal role of dopamine in musical pleasure and indicates that dopaminergic transmission might play different or additive roles than the ones postulated in affective processing so far, particularly in abstract cognitive activities.
Listening to soothing music was found to increase oxytocin levels during post-surgery bed rest (Nilsson, 2009), and Kreutz (2014) found that, compared to dyadic chatting, group singing increased oxytocin levels, as well as significantly enhancing perceived psychological well-being.
https://www.frontiersin.org/articles/10.3389/fpsyg.2019.01514/full
I conclude that the comparative study of complex vocalizations and behaviors in various extant species can provide important insights into the adaptive function(s) of these traits in these species, as well as offer evidence-based speculations for the existence of “musilanguage” in our primate ancestors, and thus inform our understanding of the biology and evolution of human music and language.
This study found that the salivary oxytocin level and lnHF, which is an index of vagal activity, increased when a slow-tempo music sequence was presented, while the salivary cortisol level decreased and lnLF/lnHF, which is an index of the sympathovagal balance, increased when a fast-tempo music sequence was presented. Intriguingly, the increase in the oxytocin level correlated significantly with the increase in lnHF, the decrease in lnLF/lnHF and the decrease in HR. This indicates that oxytocin is related to the dominance of the parasympathetic nerve activity.
Our results, which show a significant increase in the salivary oxytocin and lnHF and a significant reduction in the HR induced by listening to the slow-tempo music sequence, are consistent with an earlier study demonstrating that soothing music listening enhances the plasma oxytocin level [35], reduces the HR [36], and increases the amplitude of respiratory sinus arrhythmia (RSA) (equal to the HF component of HRV) [37]. When we consider another study demonstrating that the application of oxytocin protects against the social stress-induced suppression of RSA [20], we can assume that oxytocin secretion induced by music listening is related to vagal nerve activity originating from the nucleus ambiguus (NA) [32].
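For reference, lnHF and lnLF/lnHF are frequency-domain HRV measures: the log-transformed HF (0.15–0.40 Hz) and LF (0.04–0.15 Hz) power of the interbeat-interval series. A rough sketch of how such indices can be computed; the standard band limits, the 4 Hz resampling, and the demo data are assumptions, and the ratio of log powers is taken literally from the excerpt (other papers report ln(LF/HF) instead):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_ln_indices(rr_ms, fs=4.0):
    """lnHF (vagal index) and lnLF/lnHF (sympathovagal-balance index)
    from RR intervals in milliseconds, using standard HRV bands:
    LF 0.04-0.15 Hz, HF 0.15-0.40 Hz."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0                   # beat times in seconds
    grid = np.arange(t[0], t[-1], 1.0 / fs)      # even resampling grid
    rr_even = interp1d(t, rr, kind="cubic")(grid)
    f, psd = welch(rr_even, fs=fs, nperseg=min(256, len(rr_even)))
    lf_band = (f >= 0.04) & (f < 0.15)
    hf_band = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_band], f[lf_band])      # LF power (ms^2)
    hf = np.trapz(psd[hf_band], f[hf_band])      # HF power (ms^2)
    return np.log(hf), np.log(lf) / np.log(hf)

# Hypothetical RR series (~70 bpm with 0.25 Hz respiratory modulation):
rng = np.random.default_rng(0)
n = 300
rr_demo = 860 + 40 * np.sin(2 * np.pi * 0.25 * np.arange(n) * 0.86) \
          + rng.normal(0, 15, n)
ln_hf, ln_lf_over_ln_hf = hrv_ln_indices(rr_demo)
print(ln_hf, ln_lf_over_ln_hf)
```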
https://www.frontiersin.org/articles/10.3389/fnhum.2020.00350/full
In Homo sapiens, it is conceivable that the unique prosocial, harmonizing activities of music and dance incorporated, perhaps even required, elements of this pre-existing oxytocinergic network. Music encourages affiliative interactions in infancy and adulthood, aids in the development of perceptual, cognitive, and motor skills, promotes trust and reduces a sense of social vulnerability, is rewarding and motivating, and has a beneficial effect on aspects of learning and memory. Music and its evolutionary partner dance (Richter and Ostovar, 2016) also promote synchrony and social interaction, contribute to cultural identity, and encourage the formation of cooperative networks.
Tonality-tracking analysis of BOLD data revealed that 5HT2A receptor signaling alters the neural response to music in brain regions supporting basic and higher-level musical and auditory processing, and areas involved in memory, emotion, and self-referential processing. This suggests a critical role of 5HT2A receptor signaling in supporting the neural tracking of dynamic tonal structure in music, as well as in supporting the associated increases in emotionality, connectedness, and meaningfulness in response to music that are commonly observed after the administration of LSD and other psychedelics.
https://academic.oup.com/cercor/article/28/11/3939/4259746?login=true
oxytocin stimulates neurogenesis - about 700 new hippocampal neurons a day is the average in adult humans, and oxytocin increases that rate!!
The present study investigated the differential effects of music-induced emotion on heart rate (HR) and its variability (HRV) while playing music on the piano and listening to a recording of the same piece of music. Sixteen pianists were monitored during tasks involving emotional piano performance, non-emotional piano performance, emotional perception, and non-emotional perception. It was found that emotional induction during both perception and performance modulated HR and HRV, and that such modulations were significantly greater during musical performance than during perception. The results confirmed that musical performance was far more effective in modulating emotion-related autonomic nerve activity than musical perception in musicians. The findings suggest the presence of a neural network of reward-emotion-associated autonomic nerve activity for musical performance that is independent of a neural network for musical perception.
https://www.sciencedirect.com/science/article/abs/pii/S0167876011001772
fantastic!
Davidson and Irwin (1999), Davidson (2000, 2004), and Davidson et al. (2000) have demonstrated that a left bias in frontal cortical activity is associated with positive affect. Broadly, a left-biased frontal asymmetry (FA) in the alpha band (8–13 Hz) has been associated with a positive affective style, higher levels of wellbeing and effective emotion regulation (Tomarken et al., 1992; Jackson et al., 2000). Interventions have been demonstrated to shift frontal electroencephalographic (EEG) activity to the left. An 8-week meditation training program significantly increased left-sided FA when compared to wait-list controls (Davidson et al., 2003). Blood et al. (1999) observed that left frontal brain areas were more likely to be activated by pleasant music than by unpleasant music.
https://www.frontiersin.org/articles/10.3389/fpsyg.2017.02044/full
In addition, we identified two specific patterns of chills: a decreased theta activity in the right central region, which could reflect supplementary motor area activation during chills and may be related to rhythmic anticipation processing, and a decreased theta activity in the right temporal region, which may be related to musical appreciation and could reflect right superior temporal gyrus activity. The alpha frontal/prefrontal asymmetry did not reflect the felt emotional pleasure, but the increased frontal beta-to-alpha ratio (a measure of arousal) corresponded to increased emotional ratings.
https://www.frontiersin.org/articles/10.3389/fnins.2020.565815/full
very fascinating.
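The two EEG measures in the excerpts above are easy to make concrete. Frontal alpha asymmetry is commonly computed as ln(right alpha power) − ln(left alpha power) at homologous frontal sites (e.g. F4/F3); because alpha power is inversely related to cortical activation, positive values indicate relatively greater left frontal activity. The beta-to-alpha ratio at frontal sites serves as a simple arousal index. A minimal sketch under those standard conventions; the channel names, band edges, and random demo signals are illustrative assumptions, not details from the studies:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Power of x in the [lo, hi) Hz band (Welch PSD integrated over the band)."""
    f, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    band = (f >= lo) & (f < hi)
    return np.trapz(psd[band], f[band])

def frontal_alpha_asymmetry(left_frontal, right_frontal, fs):
    """FA = ln(right alpha) - ln(left alpha), alpha = 8-13 Hz (e.g. F4 vs F3).
    Alpha is inversely related to activation, so FA > 0 means relatively
    greater left frontal activity (the pattern linked to positive affect)."""
    return (np.log(band_power(right_frontal, fs, 8, 13))
            - np.log(band_power(left_frontal, fs, 8, 13)))

def beta_alpha_arousal(frontal, fs):
    """Frontal beta (13-30 Hz) to alpha (8-13 Hz) power ratio as an arousal index."""
    return band_power(frontal, fs, 13, 30) / band_power(frontal, fs, 8, 13)

# Illustrative 60 s of noise standing in for F3/F4 recordings at 250 Hz:
fs = 250
rng = np.random.default_rng(0)
f3, f4 = rng.normal(size=60 * fs), rng.normal(size=60 * fs)
print(frontal_alpha_asymmetry(f3, f4, fs), beta_alpha_arousal(f4, fs))
```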