Mafa individuals in northern Cameroon accurately recognized emotions in Western
music designed to sound happy, sad, and fearful67. Similarly, German, Norwegian, Korean, and Indonesian individuals identified happy and sad instrumental performances by German
musicians68. In another example, Indian, Japanese, and Swedish listeners identified expressed emotions in each other’s traditions, as well as in Western music66,69. Finally, U.S. and rural Cambodian individuals tasked with creating music that expressed emotions like 'sad' or 'happy' created similar melodies70. The findings of these studies suggest broadly shared psychological mechanisms underlying the recognition of expressed emotions in music71.
https://osf.io/preprints/psyarxiv/cdftm_v1
Many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes.
Well, let's not dismiss the spiritual aspects!
Basic emotions (such as happiness and fear) appear to be recognized in music both earlier in development and, in some studies, more reliably within and across cultures relative to non-basic emotions (such as jealousy and solemnity)65,67,69,79,91.
As we have shown, shared features of human psychology indeed predispose humans to respond to music in similar ways. Such predispositions might result from human-specific adaptations, such as the physical limits of human auditory perception, or they might result from constraints that are shared across species115. Cultural evolution likely exploits these shared psychological predispositions to produce compelling performances, yielding reliable cross-cultural associations between musical form and emotional content66–68 or musical form and behavioural function1,5,119,12
Nevertheless, all participant groups favoured small integer ratios, indicating that discrete representations of rhythm were universal. As cultural traditions diverge and differences become canalized, music diversifies188–191, but it apparently always retains some universal properties.
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0244964
Harmonic organisation conveys both universal and culture-specific cues for emotional expression in music
Correlation between roughness and perceived anger across all groups is in line with previous research linking roughness to anger in speech perception [46]. This might point to a universal in music perception, though it is too early to draw firm conclusions from this finding.
Evidence for a universal association of auditory roughness with musical stability
We find an effect of harmonicity—a psychoacoustic property resulting from chords having a spectral structure resembling a single pitched tone (such as that produced by human vowel sounds).
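The harmonicity idea quoted above can be sketched in code. The function name, tolerance, and frequencies below are illustrative assumptions, not from the paper: the sketch simply measures what fraction of a chord's frequencies fall on integer multiples of a single candidate fundamental, which is the sense in which a chord can "resemble a single pitched tone".

```python
# Hypothetical sketch of harmonicity: how well a chord's frequencies fit
# the integer-multiple pattern of a single harmonic series (as in a vowel).
def fits_harmonic_series(freqs, fundamental, tol=0.02):
    """Fraction of frequencies lying within tol of an integer multiple of fundamental."""
    hits = 0
    for f in freqs:
        harmonic_number = round(f / fundamental)
        if harmonic_number >= 1 and abs(f / fundamental - harmonic_number) <= tol:
            hits += 1
    return hits / len(freqs)

# A just-intonation major triad (4:5:6) sits exactly on harmonics 4, 5, 6
# of a common 100 Hz fundamental...
major = [400.0, 500.0, 600.0]
print(fits_harmonic_series(major, 100.0))   # 1.0

# ...while an equal-tempered tritone dyad above the same root does not.
tritone = [400.0, 565.7]
print(fits_harmonic_series(tritone, 100.0))  # 0.5
```

Real harmonicity models operate on full spectra with partial amplitudes; this toy version only captures the integer-multiple intuition.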
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0291642
the presence of a distinct effect of roughness in even the most musically remote Papua New Guinea (PNG) community suggests a universal, or at least non-arbitrary, association between roughness and instability.
fast undulations of loudness (the defining feature of roughness)
This provides evidence of a non-arbitrary and cross-cultural association between roughness and dissonance. Such an association may arise because roughness is the result of acoustic elements (frequencies) that cannot be perceptually resolved, creating a sensory confusion that urges resolution.
the roughness of an acoustic signal is highly dependent on subtle aspects—such as the loudness of every partial—which are not necessarily reflected by conventional musical notation.
Roughness refers to the rapid beating (undulations of loudness) that occurs when an audio signal has frequency components that are close enough that they cannot be separately resolved by the auditory system [21, 22].
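The beating mechanism described above follows directly from the trigonometric sum-to-product identity: two tones close in frequency are mathematically identical to one carrier tone at the mean frequency whose loudness undulates at the difference frequency. A minimal numerical check (frequencies and sample rate chosen arbitrarily for illustration):

```python
import numpy as np

sr = 8000                  # sample rate (Hz), arbitrary for this demo
f1, f2 = 440.0, 470.0      # 30 Hz apart: close enough to beat
t = np.arange(0, 1.0, 1.0 / sr)

# Sum of two close pure tones...
two_tones = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# ...equals a carrier at the mean frequency times a slow cosine envelope
# (sin A + sin B = 2 sin((A+B)/2) cos((A-B)/2)).
carrier = np.sin(2 * np.pi * ((f1 + f2) / 2) * t)
envelope = 2 * np.cos(2 * np.pi * ((f1 - f2) / 2) * t)
assert np.allclose(two_tones, envelope * carrier)

# |envelope| peaks twice per cosine cycle, so loudness undulates at
# |f1 - f2| beats per second — the "rapid beating" quoted above.
print(abs(f1 - f2))  # 30.0
```

When |f1 − f2| falls within a critical band of the auditory system, these undulations are heard as roughness rather than as two separate tones.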
https://europepmc.org/article/ppr/ppr366208
Our results suggest a common feature of music cognition – discrete rhythm “categories” at small integer ratios. These discrete representations likely stabilize musical systems in the face of cultural transmission
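What a discrete rhythm "category" at a small integer ratio means can be sketched in a few lines. The category set and function name below are hypothetical illustrations, not the paper's method: a slightly imprecise performed ratio between adjacent note durations is snapped to the nearest small-integer prototype.

```python
from fractions import Fraction

# Hypothetical category set of small integer ratios between adjacent durations.
CATEGORIES = [Fraction(1, 1), Fraction(3, 2), Fraction(2, 1), Fraction(3, 1)]

def nearest_category(measured_ratio):
    """Snap a measured duration ratio to the closest small-integer category."""
    return min(CATEGORIES, key=lambda r: abs(float(r) - measured_ratio))

# A "sloppy" performed ratio of 2.13 is still heard as the 2:1 category,
# and near-equal durations (1.02) as 1:1.
print(nearest_category(2.13))  # 2
print(nearest_category(1.02))  # 1
```

This snapping behaviour is one way discrete representations could stabilize rhythms against the noise of repeated cultural transmission: small performance errors are absorbed back into the category rather than accumulating.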
https://royalsocietypublishing.org/doi/epdf/10.1098/rstb.2020.0391
this capacity originally evolved to aid parent–infant communication and bonding, and even today plays a role not only in music but also in IDS [infant-directed speech], as well as in some adult-directed speech contexts
these included the perception of contours (i.e. relational pitch and time features of music), scales composed of unequal steps, and a preference for small integer frequency ratios (i.e. consonances, such as the octave (2:1), perfect fifth (3:2) and perfect fourth (4:3)) over large integer ratios (dissonances, such as the tritone (45:32)). In addition, Trehub suggested the universality of a music genre for infants (e.g. lullabies and play songs). In fact, adults can recognize a lullaby as such, even when they are unfamiliar with the musical culture, and can identify with almost absolute precision when a song was sung to an infant [35–38].
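The consonance ordering of the intervals named above can be made concrete with a simple ratio-complexity measure. Tenney height (log2 of numerator times denominator) is one standard proxy for the simplicity of an interval's frequency ratio; using it here to rank the quoted intervals is my illustration, not a claim from the review.

```python
from fractions import Fraction
from math import log2

# The intervals quoted in the text, as frequency ratios.
intervals = {
    "octave": Fraction(2, 1),
    "perfect fifth": Fraction(3, 2),
    "perfect fourth": Fraction(4, 3),
    "tritone": Fraction(45, 32),
}

# Rank by Tenney height: log2(numerator * denominator). Smaller = simpler.
by_simplicity = sorted(
    intervals,
    key=lambda name: log2(intervals[name].numerator * intervals[name].denominator),
)
print(by_simplicity)
# ['octave', 'perfect fifth', 'perfect fourth', 'tritone']
```

The small-integer consonances rank strictly ahead of the tritone's 45:32, matching the preference ordering described in the text.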
In fact, IDS is associated with oxytocin levels and other neuropeptides involved in attachment mechanisms (e.g. [186–188]). Better parent–infant communication, and particularly mother–infant bonding, could facilitate social learning in infants, allowing them to acquire the necessary skills to survive [189]. Moreover, a developmental tool is necessary for language to evolve, and parent–infant communication is crucial in this respect (as seen today in IDS, [190]).
If parent–infant communication promotes infant survival and development, then selection could have acted on individual variation in musicality to the benefit of those with better ability. Moreover, because musicality seems to be at least partially hereditary (see [67,135]), adults with a good level of musicality could produce offspring better equipped to process this information and with the potential of being yet more successful parents, adding a new level to the selection pressure for musicality. In fact, there are primate precursors of guided vocal learning, at least in marmoset monkeys [191–193], providing grounds for selection to act upon.
In a society where basic forms of group chorusing (proto-songs?) start to appear in the context of social rhythmic and coordinated behaviours, the interaction between the voices of male adults and women or children would tend to create octaves and fifths [34], provided there is a perceptual preference for these consonances, which in fact does not seem to be unique to humans (for a review, see [194]). These group activities would start to promote not only social bonding but also group identity.
Toro JM, Crespo-Bojorque P. 2017. Consonance processing in the absence of relevant experience: evidence from nonhuman animals. Comp. Cogn. Behav. Rev. 12, 33–44. (doi:10.3819/CCBR.2017.120004)
the salience of pitch and the human preference for consonance may have their roots in the harmonic clarity within the human voice (Bowling & Purves, 2015; Bowling et al., 2018)
the results from Berg et al. may suggest that the harmonization would not be an octave, as Savage suggested, but a perfect fifth, the second most consonant interval. This is in line with the finding by Peter et al. (2015) that women imitate men at an interval of a perfect fifth. As such, human singing groups may be predisposed to harmonize at different consonant intervals. This can create the impression of sounds being merged into one louder, larger sound with richer timbre.
https://www.sciencedirect.com/science/article/pii/S0028393218302744
The occurrence in pre-verbal infanthood of a preference for consonance (Masataka 2006; Perani et al. 2010; Schellenberg and Trehub 1996; Trainor et al. 2002; Trainor and Heinmiller 1998; Trehub 2003; Zentner and Kagan 1996; 1998; but see Plantinga and Trehub 2014), as well as perception of octave equivalence (Demany and Armand 1984), suggests a potential biological basis for these phenomena.
Changes in a sequence of consonant intervals are rapidly processed independently of musical expertise, as revealed by a change-related mismatch negativity (MMN, a component of the ERPs triggered by an odd stimulus in a sequence of stimuli) elicited in both musicians and non-musicians. In contrast, changes in a sequence of dissonant intervals elicited a late MMN only in participants with prolonged musical training. These different neural responses might form the basis for the processing advantages observed for consonance over dissonance and provide information about how formal musical training modulates them.
Prevailing theories ascribe the perception of dissonance to a sensation of roughness that comes from rapid amplitude fluctuations (called “beats”) that are produced by the combination of tones with complex frequency ratios. The more beats contained within a sound, the rougher will be the sound, which leads to an increased perception of dissonance (Helmholtz, 1954, Krumhansl, 1990, Plomp and Levelt, 1965).
This pattern of different neural responses might underlie the processing advantages for consonance reported in behavioral studies. Moreover, results from the present study suggest that the processing benefits for consonance might be found already at an early stage of auditory processing and do not depend on attention.
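The Helmholtz/Plomp–Levelt account quoted above (more beats, more roughness, more dissonance) can be illustrated with a toy partial-counting sketch. The function, the 50 Hz "beating band", the 200 Hz root, and the six-partial cutoff are all my simplifying assumptions, not the cited models, which weight beating by distance within the critical band and by partial amplitude.

```python
# Toy sketch: count pairs of partials close enough to beat audibly
# (within an assumed ~50 Hz band) for two dyads built on a 200 Hz root.
def beating_pairs(f_low, f_high, n_partials=6, band_hz=50.0):
    """Number of non-coincident partial pairs within band_hz of each other."""
    partials = [f_low * k for k in range(1, n_partials + 1)] + \
               [f_high * k for k in range(1, n_partials + 1)]
    pairs = 0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            if 0 < abs(partials[i] - partials[j]) <= band_hz:
                pairs += 1
    return pairs

# Perfect fifth (3:2): partials either coincide exactly or sit far apart.
fifth = beating_pairs(200.0, 300.0)
# Tritone (45:32): several partials land close but not coincident, so they beat.
tritone = beating_pairs(200.0, 200.0 * 45 / 32)
print(fifth, tritone)  # 0 2
```

Even this crude count reproduces the predicted ordering: the consonant dyad yields no beating pairs, the dissonant one several, consistent with the roughness theory of dissonance described above.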