
The Melodic Code: Unveiling How Our Brains Decipher Music and Speech

Saturday 08 June 2024 - 12:00

In a crescendo of scientific inquiry, a recent study, published in PLOS Biology, illuminates the intricate mechanisms enabling our brains to seamlessly discern between the melodic strains of music and the rhythmic cadence of spoken language. Spearheaded by Andrew Chang of New York University and an international team of scientists, this groundbreaking research offers profound insights into the auditory processing prowess of the human mind.

While our ears act as the conduit to the auditory domain, the complex process of distinguishing between music and speech unfolds within the recesses of our cerebral cortex. As Chang explains, "Despite the myriad differences between music and speech, from pitch to sonic texture, our findings reveal that the auditory system relies on surprisingly simple acoustic parameters to make this distinction."

At the core of this auditory puzzle lie the foundational principles of amplitude and frequency modulation. Musical compositions exhibit relatively steady amplitude modulation, typically oscillating between 1 and 2 Hz, while speech fluctuates at higher rates, usually between 4 and 5 Hz. For instance, the rhythmic pulse of Stevie Wonder's "Superstition" hovers around 1.6 Hz, while Anna Karina's "Roller Girl" beats at approximately 2 Hz.
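To make the idea of an amplitude-modulation rate concrete, here is a minimal sketch of how stimuli of this kind can be synthesized: a white-noise carrier whose loudness envelope rises and falls at a chosen rate in Hz. This is only an illustration of the concept; the `am_noise` function, its parameters, and the sample rate are assumptions, not the study's actual stimulus-generation procedure.

```python
import numpy as np

def am_noise(mod_hz, duration_s=2.0, sr=16000, seed=0):
    """White-noise carrier whose amplitude envelope oscillates at mod_hz.

    A crude stand-in for amplitude-modulated stimuli; the study's real
    synthesis procedure is not reproduced here.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * sr)) / sr
    carrier = rng.standard_normal(t.size)
    # Envelope swings between 0 and 1, mod_hz times per second.
    envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * t))
    return envelope * carrier

music_like = am_noise(1.6)   # slow modulation, near "Superstition"'s pulse
speech_like = am_noise(4.5)  # fast modulation, a typical speech rate
```

The only difference between the two signals is how quickly the envelope oscillates; the underlying noise carrier is identical.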

To probe deeper into this phenomenon, Chang and his team conducted four experiments involving over 300 participants. In these trials, subjects were presented with synthetic sound segments mimicking either music or speech, with careful manipulation of speed and regularity of amplitude modulation. They were then tasked with identifying whether the auditory stimuli represented music or speech.

The results unveiled a compelling pattern: segments with slower and more regular modulations (< 2 Hz) were perceived as music, while faster and more irregular modulations (~4 Hz) were interpreted as speech. This led the researchers to conclude that our brains instinctively utilize these acoustic cues to categorize sounds, akin to the phenomenon of pareidolia – the tendency to perceive familiar shapes, often human faces, in random or unstructured visual stimuli.
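The categorization rule the participants appear to apply can be sketched as a simple heuristic: extract a sound's amplitude envelope, find its dominant modulation rate, and label slow modulation as music and fast modulation as speech. The code below is a minimal illustration of that heuristic under stated assumptions (full-wave rectification as the envelope, an FFT peak search, and a hypothetical 3 Hz cutoff); it is not the study's actual analysis pipeline.

```python
import numpy as np

def dominant_modulation_hz(signal, sr=16000):
    """Estimate the dominant amplitude-modulation rate of a signal."""
    # Rough amplitude envelope via full-wave rectification, DC removed.
    env = np.abs(signal) - np.abs(signal).mean()
    mags = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(env.size, d=1.0 / sr)
    # Search for the strongest modulation component below 20 Hz.
    band = (freqs >= 0.5) & (freqs <= 20.0)
    return float(freqs[band][np.argmax(mags[band])])

def label(signal, sr=16000, cutoff_hz=3.0):
    """Slow modulation -> 'music'; fast modulation -> 'speech'.

    The 3 Hz cutoff is a hypothetical midpoint between the ~1-2 Hz
    and ~4-5 Hz ranges described in the article.
    """
    return "music" if dominant_modulation_hz(signal, sr) < cutoff_hz else "speech"

# Synthetic amplitude-modulated noise standing in for the stimuli.
rng = np.random.default_rng(1)
sr = 16000
t = np.arange(2 * sr) / sr
noise = rng.standard_normal(t.size)
slow = 0.5 * (1 + np.sin(2 * np.pi * 1.5 * t)) * noise  # music-like rate
fast = 0.5 * (1 + np.sin(2 * np.pi * 4.5 * t)) * noise  # speech-like rate
```

Running `label(slow)` and `label(fast)` on these synthetic signals separates them by modulation rate alone, mirroring the acoustic cue the study says listeners rely on.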

Beyond mere scientific curiosity, this discovery carries profound implications for the treatment of language disorders such as aphasia, a condition marked by partial or complete loss of the ability to communicate. As the authors note, these findings could pave the way for more effective rehabilitation programs, potentially incorporating melodic intonation therapy (MIT).

MIT operates on the premise that music and singing can activate different brain regions involved in communication and language, including Broca's area, Wernicke's area, the auditory cortex, and the motor cortex. By singing phrases or words to simple melodies, individuals may learn to bypass damaged brain regions and access alternative pathways to restore communicative abilities. Armed with a deeper comprehension of the parallels and disparities in music and speech processing within the brain, researchers and therapists can craft more targeted interventions that harness patients' musical discernment to enhance verbal communication.

Supported by the National Institute on Deafness and Other Communication Disorders and the Leon Levy Neuroscience Fellowships, this study opens new vistas for innovation in communication therapies. By pinpointing the acoustic parameters exploited by our brains, scientists can now develop specialized exercises tailored to leverage patients' musical processing capacities, ultimately augmenting their verbal communication skills.

As the crescendo of scientific inquiry swells, this remarkable discovery reverberates as a harmonious symphony of knowledge, enriching our understanding of the intricate interplay between music, speech, and the extraordinary capabilities of the human brain.


