Posts tagged semantics

The effects of very early Alzheimer’s disease on the characteristics of writing by a renowned author
Iris Murdoch (I.M.) was among the most celebrated British writers of the post-war era. Her final novel, however, received a less than enthusiastic critical response on its publication in 1995. Not long afterwards, I.M. began to show signs of insidious cognitive decline, and received a diagnosis of Alzheimer’s disease, which was confirmed histologically after her death in 1999. Anecdotal evidence, as well as the natural history of the condition, would suggest that the changes of Alzheimer’s disease were already established in I.M. while she was writing her final work. The end product was unlikely, however, to have been influenced by the compensatory use of dictionaries or thesauri, let alone by later editorial interference. These facts present a unique opportunity to examine the effects of the early stages of Alzheimer’s disease on spontaneous written output from an individual with exceptional expertise in this area. Techniques of automated textual analysis were used to obtain detailed comparisons among three of her novels: her first published work, a work written during the prime of her creative life and the final novel. Whilst there were few disparities at the levels of overall structure and syntax, measures of lexical diversity and the lexical characteristics of these three texts varied markedly and in a consistent fashion. This unique set of findings is discussed in the context of the debate as to whether syntax and semantics decline separately or in parallel in patients with Alzheimer’s disease.
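One of the measures mentioned above, lexical diversity, can be sketched in a few lines. Below is a minimal illustration using the type-token ratio, one common lexical-diversity statistic; the study's own metrics are not specified here, so this is only an example of the general idea:

```python
# Type-token ratio: distinct word forms divided by total words.
# A text that keeps reusing the same words scores lower.

def type_token_ratio(text: str) -> float:
    """Ratio of distinct word types to total word tokens."""
    tokens = [w.lower() for w in text.split() if w.isalpha()]
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

sample = "the cat sat on the mat and the dog sat too"
print(round(type_token_ratio(sample), 2))  # 8 distinct types / 11 tokens = 0.73
```

In practice such ratios are computed over fixed-size windows, since the raw ratio falls as texts get longer; a drop in windowed lexical diversity across an author's career is the kind of signal the study describes.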
It is now possible to identify the meaning of words with multiple meanings, without using their semantic context

Two Brazilian physicists have devised a method to automatically elucidate the meaning of words with several senses, based solely on their patterns of connectivity with nearby words in a given sentence, and not on semantics. Thiago Silva and Diego Amancio from the University of São Paulo, Brazil, reveal, in a paper about to be published in EPJ B, how they modelled classic texts as complex networks in order to derive their meaning. This type of model plays a key role in several natural language processing tasks such as machine translation, information retrieval, content analysis and text processing.
In this study, the authors chose a set of ten so-called polysemous words (words with multiple meanings) such as bear, jam, just, rock or present. They then examined their patterns of connectivity with nearby words in the text of literary classics such as Jane Austen’s Pride and Prejudice. Specifically, they built a model consisting of a set of nodes representing words, connected by edges whenever the words are adjacent in the text.
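The network construction described here can be illustrated in a few lines. This is only a toy sketch of the adjacency idea, not the authors' full model (which goes on to characterise the network with a deterministic tourist walk):

```python
# Build a word co-occurrence network: each word is a node, and an edge
# connects two words whenever they appear next to each other in the text.
from collections import defaultdict

def build_adjacency_network(text: str) -> dict:
    tokens = text.lower().split()
    edges = defaultdict(set)
    for a, b in zip(tokens, tokens[1:]):  # all adjacent pairs
        edges[a].add(b)
        edges[b].add(a)
    return edges

net = build_adjacency_network("the bear ate the fish near the river")
print(sorted(net["the"]))  # ['ate', 'bear', 'fish', 'near', 'river']
```

The intuition is that an ambiguous word like "bear" acquires different neighbourhoods in this network depending on which sense is in play, so local connectivity alone carries a usable disambiguation signal.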
The authors then compared the results of their disambiguation exercise with the traditional semantics-based approach, and observed significant accuracy rates in identifying the suitable meanings with both techniques. The approach described in this study, based on a so-called deterministic tourist walk characterisation, can therefore be considered a complementary methodology for distinguishing between word senses. In future work, the authors plan to devise new measures that connect not only adjacent words but also words within a given interval, in order to enhance the model’s ability to grasp semantic factors. This approach is supported by another recent study by the same authors, showing that traditional complex network measures mainly depend on syntax.
(Source: springer.com)
The sound of small children chattering has always been considered cute – but not particularly sophisticated. However, research by a Newcastle University expert has shown their speech is far more advanced than previously understood.

Dr Cristina Dye, a lecturer in child language development, found that two- to three-year-olds are using grammar far sooner than expected.
She studied fifty French-speaking youngsters aged between 23 and 37 months, capturing tens of thousands of their utterances.
Dr Dye, who carried out the research while at Cornell University in the United States, found that the children were using ‘little words’ that form the skeleton of sentences, such as a, an, can and is, far sooner than previously thought.
Dr Dye and her team used advanced recording technology including highly sensitive microphones placed close to the children, to capture the precise sounds the children voiced. They spent years painstakingly analysing every minute sound made by the toddlers and the context in which it was produced.
They found a clear, yet previously undetected, pattern of sounds and puffs of air, which consistently replaced grammatical words in many of the children’s utterances.
Dr Dye said: “Many of the toddlers we studied made a small sound, a soft breath, or a pause, at exactly the place that a grammatical word would normally be uttered.”
“The fact that this sound was always produced in the correct place in the sentence leads us to believe that young children are knowledgeable of grammatical words. They are far more sophisticated in their grammatical competence than we ever understood.
“Despite the fact the toddlers we studied were acquiring French, our findings are expected to extend to other languages. I believe we should give toddlers more credit – they’re much more amazing than we realised.”
For decades the prevailing view among developmental specialists has been that children’s early word combinations are devoid of grammatical words. On this view, children then undergo a ‘tadpole to frog’ transformation in which, through some unknown mechanism, they start to produce grammar in their speech. Dye’s results now challenge that view.
Dr Dye said: “The research sheds light on a really important part of a child’s development. Language is one of the things that makes us human and understanding how we acquire it shows just how amazing children are.
“There are also implications for understanding language delay in children. When children don’t learn to speak normally it can lead to serious issues later in life; for example, children with language delay are more likely to suffer from mental illness or to be unemployed as adults. If we can understand what is ‘normal’ as early as possible, then we can intervene sooner to help those children.”
The research was originally published in the Journal of Linguistics.
(Source: ncl.ac.uk)

A team of cognitive neuroscientists has identified the areas of the brain responsible for processing specific word meanings, bringing us one step closer to developing multilingual mind-reading machines.
Presenting the findings at the Society for the Neurobiology of Language Conference in San Sebastián, Spain, Joao Correia of Maastricht University explained that his team decided to answer one central question: “how do we represent the meaning of words independent of the language we are listening to?”
Past studies have focused on identifying areas of the brain that generate and hear general terms or feelings. However, if we can locate where the actual concept of a word — which transcends language — is processed, we would be able to read the mind of any individual. The recent case of 39-year-old Scott Routley letting doctors know he is not in pain, just by thinking, is a prime example of where this could be extremely effective in the future. After not responding to any stimulation for more than a decade, Routley was thought to be in a persistent vegetative state. However, by studying fMRI scans in real time neurologists could identify that Routley was in fact responding to their questions — they asked him to think about playing tennis or walking around at home to indicate yes or no. These two actions are processed in different areas of the brain, so answers could be extracted by reading scans. With Correia’s approach, we would need no signifier for yes or no — we could go straight to the source where the processing of the meaning of positive and negative takes place; the “hub”, as he puts it.
"This fMRI study investigates the neural network of speech processing responsible for transforming sound to meaning, by exploring the semantic similarities between bilingual word pairs," explains an abstract of the study. To achieve this, they needed bilingual volunteers, and so worked with eight Dutch candidates, all fluent in English. First, the team monitored the volunteers’ neural activity while they listened to the words "bull", "horse", "shark" and "duck" in English. All the words chosen had one syllable, came from a similar category and were probably learned around the same period; this ensured that any differences in the evoked activity would relate specifically to meaning. Different brain-activity patterns appeared in the left anterior temporal cortex, and each of these was then fed into an algorithm so that it could flag up when one of the words was uttered again.
The hypothesis was that, if the algorithm could still correctly identify the words when they were spoken in Dutch, the patterns would reveal where word concepts are represented. The algorithm did exactly that, demonstrating that word meanings are encoded in the same way in the brain regardless of language.
There is one pretty major drawback to the process, which quashes any visions of a full-on real-time mind translation machine hitting stores anytime soon — the neural activity patterns differed slightly from person to person. Our neurons learn and identify in unique ways, and understanding these pathway patterns through machine learning would be a long process. “You would have to scan a person as they thought their way through a dictionary,” said Matt Davis of the MRC Cognition and Brain Sciences Unit in Cambridge. It would be difficult to translate a mind now without this concept map. However, we are only at the beginning of this line of study, and an algorithm could potentially be devised to aggregate hundreds of neural activity patterns to help indicate what the brain activity of an individual unable to communicate represents.
Training computers to understand the human brain
Understanding how the human brain categorizes information through signs and language is a key part of developing computers that can ‘think’ and ‘see’ in the same way as humans. Hiroyuki Akama at the Graduate School of Decision Science and Technology, Tokyo Institute of Technology, together with co-workers in Yokohama, the USA, Italy and the UK, have completed a study using fMRI datasets to train a computer to predict the semantic category of an image originally viewed by five different people.
The participants were asked to look at pictures of animals and hand tools together with an auditory or written (orthographic) description. They were asked to silently ‘label’ each pictured object with certain properties, whilst undergoing an fMRI brain scan. The resulting scans were analysed using algorithms that identified patterns relating to the two separate semantic groups (animal or tool).
After ‘training’ the algorithms in this way using some of the auditory session data, the computer correctly identified the remaining scans 80-90% of the time. Similar results were obtained with the orthographic session data. A cross-modal approach, namely training the computer using auditory data but testing it using orthographic data, reduced performance to 65-75%. Continued research in this area could lead to systems that allow people to speak through a computer simply by thinking about what they want to say.
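The cross-modal evaluation can be illustrated with a toy, numpy-only simulation (invented data, not the study's method): a nearest-centroid classifier is trained on "auditory" trials, then tested both within-modality and on "orthographic" trials that carry an extra modality-specific shift, which is one plausible reason transfer accuracy drops:

```python
# Toy cross-modal evaluation: the semantic signal (class prototype) is
# shared across modalities, but each modality adds its own offset, so a
# classifier trained on one modality transfers imperfectly to the other.
import numpy as np

rng = np.random.default_rng(1)
dim = 30
animal, tool = rng.normal(size=dim), rng.normal(size=dim)  # class prototypes
shift = rng.normal(scale=2.0, size=dim)                    # modality offset

def trials(proto, offset, n=50):
    """Simulate n noisy activity patterns for one class."""
    return proto + offset + rng.normal(size=(n, dim))

def accuracy(test_animal, test_tool, c_animal, c_tool):
    hits = sum(np.linalg.norm(x - c_animal) < np.linalg.norm(x - c_tool)
               for x in test_animal)
    hits += sum(np.linalg.norm(x - c_tool) < np.linalg.norm(x - c_animal)
                for x in test_tool)
    return hits / (len(test_animal) + len(test_tool))

# Train centroids on "auditory" trials (no shift).
c_animal = trials(animal, 0).mean(axis=0)
c_tool = trials(tool, 0).mean(axis=0)

within = accuracy(trials(animal, 0), trials(tool, 0), c_animal, c_tool)
cross = accuracy(trials(animal, shift), trials(tool, shift), c_animal, c_tool)
print(f"within-modality accuracy: {within:.2f}, cross-modal: {cross:.2f}")
```

The within-modality score stays high because test and training trials are drawn identically; the shifted trials move relative to the learned centroids, mirroring the 80-90% versus 65-75% gap reported above.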
Human beings have the ability to convert complex phenomena into a one-dimensional sequence of letters and put them down in writing. In this process, keywords serve to convey the content of the text. How letters and words correlate with the subject of a text is something Eduardo Altmann and his colleagues from the Max Planck Institute for the Physics of Complex Systems have studied with the help of statistical methods. They discovered that what marks a word as a keyword is not that it appears very frequently in a given text, but that it occurs in greater numbers only at certain points in the text. They also discovered that relationships exist between sections of text that are distant from each other, in the sense that they preferentially use the same words and letters.
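The clustering idea can be illustrated with a simple burstiness-style statistic: the coefficient of variation of the gaps between a word's successive occurrences. This is a rough stand-in for the paper's actual measures; the point is only that bursty (keyword-like) words have very uneven gaps, while evenly spread function words do not:

```python
# Compare how evenly two words' occurrences are spread through a token
# stream. A word concentrated in bursts has a mix of tiny and huge gaps,
# giving a high coefficient of variation (stdev / mean) of the gaps.
import statistics

def gap_cv(tokens, word):
    """Coefficient of variation of gaps between occurrences of `word`."""
    positions = [i for i, t in enumerate(tokens) if t == word]
    gaps = [b - a for a, b in zip(positions, positions[1:])]
    if len(gaps) < 2:
        return 0.0
    return statistics.stdev(gaps) / statistics.mean(gaps)

# "whale" appears in two bursts at the ends; "the" is spread evenly.
tokens = ["whale"] * 3 + ["the", "x", "y"] * 10 + ["whale"] * 3
print(round(gap_cv(tokens, "whale"), 2), round(gap_cv(tokens, "the"), 2))
# 1.92 0.0
```

A frequency count alone would not separate the two words; it is the uneven spacing, not the raw count, that flags "whale" as content-bearing here.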
Read more: In search of the key word: Bursts of certain words within a text are what make them keywords