In October 2012, the seventh edition of the CONGres conference took place. The theme of this edition was ‘Sounds in Science’.
Our speakers:
Dr. Bob MacCallum
Dr. Hans Slabbekoorn
Prof. Dr. Marc Leman
Dr. Jeroen Goedkoop
Prof. Dr. Ir. Nico de Jong
Prof. Dr. Henkjan Honing
Dr. Peter Meijer
Dr. Bob MacCallum’s lecture:
DarwinTunes – Survival of the funkiest
Music is often seen as the product of a select group of revered and fêted musical geniuses. Yet music, like other cultural forms, must surely evolve under a process similar to Darwinian evolution in nature, with memes replacing genes, variants being created by chance or human design, and those variants being selectively transmitted through time. Using an evolutionary music engine called DarwinTunes, we set out to test a hypothesis: can pleasing music evolve under the action of audience selection alone? We ran a large-scale experiment to find out. In this presentation I will describe the experiment, its results and implications, and discuss future directions for the project.
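The core idea can be illustrated with a small sketch (mine, not the actual DarwinTunes code): an evolutionary loop in which the only selection pressure is a listener score. The genome representation and the `audience_rating` function are hypothetical placeholders.

```python
import random

# Minimal sketch of audience-driven musical evolution, in the spirit of
# DarwinTunes but not taken from it. A "genome" is just a list of numbers
# that some synthesizer would turn into a short loop; `audience_rating`
# is a hypothetical stand-in for listeners scoring each loop.

POPULATION_SIZE = 100
GENOME_LENGTH = 32

def random_genome():
    return [random.random() for _ in range(GENOME_LENGTH)]

def mutate(genome, rate=0.05):
    # Chance variation: each parameter is occasionally replaced.
    return [random.random() if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Recombine two parent genomes at a random split point.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def evolve(audience_rating, generations=100):
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(generations):
        # The only selection pressure is the audience's rating.
        ranked = sorted(population, key=audience_rating, reverse=True)
        survivors = ranked[:POPULATION_SIZE // 2]
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POPULATION_SIZE - len(survivors))]
        population = survivors + children
    return population
```

Everything here is chance variation and recombination; whether pleasing music emerges depends entirely on the ratings, which is exactly the hypothesis the experiment was designed to test.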
Dr. Hans Slabbekoorn’s lecture:
Fishy acoustics: use and abuse of underwater sound
The underwater world is often dark and water transparency can be very low. At the same time, sound transmits extremely well through water, and the world of aquatic animals is very much an acoustic one. Therefore, in the land of the blind a fish with ears would be king. Indeed, more and more is known about what fish hear, how, and why. They listen for predators, zoom in acoustically on prey, and also communicate with sounds in complex ways well beyond the archaic ‘blub-blub-blub’. The use of underwater sound by fish has been known for a while, but new studies are now appearing at an increasing rate. I will address some of the most intriguing insights and how they relate to what we know about the role of vocalizations elsewhere in the animal kingdom. There will be examples in the context of fighting and flirting, eyeballing and eavesdropping, highlighting sound science and annoying noise from our own activities in a world largely unheard-of.
Prof. Dr. Marc Leman’s lecture:
Action-perception couplings in the musical brain
In recent years, music research has been influenced by theories of embodied music cognition, which stress the role of the human body as a mediator for the encoding and decoding of musical expressiveness. More specifically, these theories state that while listening to music, human subjects decode the expressiveness of sounds by mirroring the sounds onto body movements, which they match against a previously acquired set of action patterns, the predicted outcomes of those action patterns, their associated expressive deployment, and their emotional rewards. It is furthermore stated that this mirroring process forms the basis for a possible conceptualization of expressiveness in terms of linguistic descriptors.
The paradigm of embodied music cognition is based on empirical research and computational modeling. Research at IPEM, Ghent University, aims to test and validate the basic claims of the theory, primarily through behavioral research and, where possible, in connection with brain research. This lecture will give an overview of some of the results obtained.
I will start by explaining the action-perception coupling engine that forms the basis of the mirroring process of the musical brain. Then I will focus on recent empirical work showing how the perception of music influences our movement and, conversely, how movement influences our perception. I will also present recent results from applying functional data analysis and machine learning techniques to our datasets.
Dr. Jeroen Goedkoop’s lecture:
How good vibrations set up classroom waves: the physics perspective on sound
At this ‘Sounds in Science’ edition of CONGres, the physics of sound deserves to be discussed. I will give an introduction to vibrations, waves, and a little acoustics, illustrated by a range of practical demonstrations including one- and two-dimensional cellos. Some audience participation is required, so prepare to stretch your legs and wave your arms. Of course I will stick to the adage: no formulas please, we are biologists!
Prof. Dr. Ir. Nico de Jong’s lecture:
Vibrating ultrasound and exploding bubbles
Ultrasound is the most widely used medical imaging modality. The majority of ultrasound systems operate at frequencies in the 1–5 MHz range and form images using a hand-held transducer that is external to the body. Ultrasound provides real-time information about tissue structure and blood flow in the heart and larger vessels, and, with an ultrasound contrast agent (UCA), it can determine perfusion in organs. Besides the use of UCAs for measuring perfusion, there is growing interest in the use of coated microbubbles for therapeutic applications. In this case the microbubbles can act as transport carriers for drug delivery or, when targeted, allow the targeted site to be imaged acoustically, which is referred to as molecular imaging. A third application for therapy is known as sonoporation. Here, the oscillation of ultrasound-driven microbubbles in close contact with a cell leads to increased permeability of the cell to macromolecules, and hence to an increased uptake of drugs or genes in the close vicinity of the cell.
The use of ultrasound contrast agents as local drug delivery systems continues to grow. Current limitations are the amount of drug that can be incorporated as well as the efficiency of drug release upon insonification. High-speed imaging at ~10 million frames per second showed that at low acoustic pressures microcapsules compressed but remained intact. At higher diagnostic pressures the microcapsules cracked, thereby releasing the encapsulated gas and the encapsulated drug. Bubbles vibrating near cells then cause an increased uptake of the drug. For molecular imaging with ultrasound contrast agents, targeted microbubbles are designed with specific ligands linked to the coated shell. In this way it is possible to obtain information at the cell level with diagnostic ultrasound, which by itself has a resolution of only 1–2 mm.
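As an aside (my back-of-the-envelope arithmetic, not part of the lecture): the millimetre-scale resolution mentioned above follows from the wavelength of ultrasound in soft tissue, assuming a typical sound speed of about 1540 m/s.

```python
# Wavelength of diagnostic ultrasound in soft tissue, which sets the
# scale of the achievable imaging resolution.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a typical value for soft tissue

def wavelength_mm(frequency_hz):
    return SPEED_OF_SOUND_TISSUE / frequency_hz * 1000.0  # in mm

for f_mhz in (1, 2, 5):
    print(f"{f_mhz} MHz -> wavelength {wavelength_mm(f_mhz * 1e6):.2f} mm")
# 1 MHz -> wavelength 1.54 mm
# 2 MHz -> wavelength 0.77 mm
# 5 MHz -> wavelength 0.31 mm
```

At 1–5 MHz the wavelength is roughly 0.3–1.5 mm, which is why ligand-targeted microbubbles are needed to report on structures far below the imaging resolution.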
Prof. Dr. Henkjan Honing’s lecture:
What makes us musical animals?
While most humans have little trouble tapping their foot to the beat of the music or hearing whether music speeds up or slows down, it has so far proved impossible to get species closely related to us (such as chimpanzees or bonobos) to clap or drum to the beat of the music. They seem to lack beat induction: the cognitive mechanism that supports the detection of a regular pulse in a varying rhythm. Certain species of bird, however – budgerigars and cockatoos, for instance – do seem to be able to perceive a beat. Should this indeed be the case, it makes the phenomenon even more intriguing, and its evolutionary implications more open for research. What traits do we share with these bird species (and not with other primates), and what can this teach us about the evolution of music?
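To make the notion of beat induction concrete, here is a toy sketch (my illustration, not a model from the lecture): given a list of onset times, it searches for the pulse period that best explains them, an inference a beat-inducing listener performs effortlessly.

```python
# Toy beat induction: find the period that best explains a list of
# onset times, i.e. extract a regular pulse from a varying rhythm.

def pulse_score(onsets, period):
    # How close does each onset fall to the nearest multiple of `period`?
    # A score of 1.0 means every onset sits exactly on the pulse grid.
    errors = [abs(t / period - round(t / period)) for t in onsets]
    return 1.0 - 2.0 * sum(errors) / len(errors)

def induce_beat(onsets, candidates=None):
    if candidates is None:
        candidates = [p / 100 for p in range(30, 121)]  # 0.30 s .. 1.20 s
    return max(candidates, key=lambda p: pulse_score(onsets, p))

# A syncopated rhythm with an underlying 0.5 s pulse:
onsets = [0.0, 0.5, 0.75, 1.0, 1.5, 2.25, 2.5, 3.0]
print(induce_beat(onsets))  # -> 0.5
```

The point, of course, is that humans perform this kind of inference spontaneously, whereas chimpanzees and bonobos apparently do not.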
Dr. Peter Meijer’s lecture:
Visual soundscapes from your augmented reality glasses
Rapid developments in mobile computing and sensing are opening up new opportunities for augmenting or mediating our reality with information and experiences that our biological senses could not directly provide. Apart from possible mass-market use of augmented reality glasses in the near future, new uses are also arising in niche markets such as assistive technology for the blind: the visual content of live camera views may be conveyed through sound or touch.
In my talk I will discuss how this brings together research on new man-machine interfaces, visual prostheses, computer vision, brain plasticity, synesthesia, esthetics, and even contemporary philosophy. It is also an area where progress in fundamental research (on brain plasticity) could quickly become socially relevant through software applications and training paradigms that are made globally available over the web, for use with widely available devices (smartphones, netbooks, and camera glasses). Over the past decade, neuroscience research has established that the visual cortex of blind people becomes responsive to sound and touch, with the visual cortex acting more like a “metamodal” processor of fine spatial information. This supports the biological plausibility of sensory substitution for the blind, as in seeing (or “seeing”) live camera views encoded in one-second soundscapes.
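To give a concrete sense of what a one-second soundscape is, here is a deliberately simplified sketch of an image-to-sound mapping in the spirit of sensory substitution systems such as The vOICe (my simplification; the sample rate, frequency range, and mapping details are assumptions, not the actual implementation): the image is scanned left to right over one second, vertical position maps to pitch, and pixel brightness maps to loudness.

```python
import numpy as np

SAMPLE_RATE = 22050            # assumed audio sample rate
DURATION = 1.0                 # one-second soundscape
F_LOW, F_HIGH = 500.0, 5000.0  # assumed pitch range

def image_to_soundscape(image):
    """image: 2D array of shape (rows, cols), values 0..1, row 0 at the top."""
    rows, cols = image.shape
    # Each row gets its own sine frequency; higher rows -> higher pitch.
    freqs = np.geomspace(F_HIGH, F_LOW, rows)
    samples_per_col = int(SAMPLE_RATE * DURATION / cols)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    columns = []
    for c in range(cols):  # left-to-right scan over one second
        # Sum one sinusoid per row, weighted by that pixel's brightness.
        tone = sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                   for r in range(rows))
        columns.append(tone / rows)
    return np.concatenate(columns)

# A bright diagonal line becomes a one-second falling frequency sweep:
wave = image_to_soundscape(np.eye(16))
```

Regularities like this sweep are what users learn, with training, to hear as shape and position.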