A study of the effects of frequency and neighbourhood density on word recognition.
Abstract
The purpose of this research is to investigate the effects of frequency and neighbourhood density on access to the mental lexicon. This was done through a written version of the gating task, using PowerPoint to display the letters. Participants recorded their spontaneous guesses about the identity of the word on a response sheet. Once collected, the data were analysed using inferential statistics. There appeared to be a correlation between the number of guesses and the variables frequency and neighbourhood density: high frequency, high neighbourhood density (HFHN) words were recognised faster than low frequency, low neighbourhood density (LFLN) words. This study suggests that frequency and neighbourhood density are two considerable factors in word recognition. To extend this study, it would be worth testing speakers of other languages to determine whether the effect is specific to English or holds cross-linguistically.
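For concreteness, the incremental letter reveal at the heart of a written gating task can be sketched as follows. This is a minimal illustration only; the example word and procedure details are hypothetical, not the study's actual stimuli.

```python
def gates(word):
    """Return the successive 'gates' of a written gating task:
    the word revealed one letter at a time."""
    return [word[:i] for i in range(1, len(word) + 1)]

# After each reveal, the participant writes a spontaneous guess
# about the identity of the word.
for gate in gates("table"):
    print(gate)  # t, ta, tab, tabl, table
```

The number of gates needed before a correct guess can then be compared across the HFHN and LFLN conditions.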
Introduction
Frequency and neighbourhood density have received considerable attention because of their connection with word recognition. Investigating whether these aspects of language affect word recognition is important because it challenges lexical access models that assume serial comparison of a lexical access code with items in lexical
10), an embedded approach that investigates the complexity of relationships in graphophonic knowledge. Additional activities to support phonics instruction include building word lists based on common elements (Pinnell & Fountas, 1998, p. 157), segmenting words into onset and rime (Emmitt et al., 2013, p. 12), and introducing high-frequency or sight words through modelling and sight-word games such as flash cards, sentence strips, bingo, word shapes and extensive reading (Fellows & Oakley, 2010, p. 219), ensuring students reach a point of automaticity (Konza, 2016, p. 157). Sight words often feature sounds that contradict the rules for learning the 44 phonemes, and automaticity allows higher-level comprehension processes to occur due to available cognitive
Over the last half century, several theories have emerged regarding the best model of human memory. Each of these models proposes a specific way to help people recall words and
The Fry Sight-Word Inventory is an informal, criterion-referenced screener that measures high-frequency word achievement. Fry's Instant Words are the most common words used in English, ranked in order of frequency. Specifically, Fry found that twenty-five words make up approximately a third of all published material, one hundred words comprise almost half of the words found in publications, and three hundred words make up approximately sixty-five percent of all written material. The first three hundred words on Fry's list should be mastered by the end of the corresponding grade levels, and lists four through ten should be mastered between fourth and fifth grade. Each hundred words is broken down further into twenty-five words per list, according to difficulty and frequency, and should be assessed sequentially. The goal of progress monitoring high-frequency word mastery is to increase fluency on high-frequency words and thereby build automaticity in students' reading, which ultimately improves overall comprehension.
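The cumulative-coverage idea behind Fry's ranked lists can be illustrated with a short sketch. The mini-corpus below is invented for illustration and is not Fry's data; real coverage figures require a large text sample.

```python
from collections import Counter

def cumulative_coverage(tokens, k):
    """Fraction of all tokens accounted for by the k most frequent words."""
    counts = Counter(tokens)
    top = counts.most_common(k)
    return sum(n for _, n in top) / len(tokens)

# Hypothetical mini-corpus (illustrative only).
corpus = ("the cat sat on the mat and the dog sat on the rug "
          "the cat and the dog ran to the mat").split()

# Even here, the single most common word covers roughly a third
# of all tokens, echoing the pattern Fry reported at scale.
print(cumulative_coverage(corpus, 1))
print(cumulative_coverage(corpus, 5))
```

Ranking words by frequency and reading off cumulative coverage is exactly how a Fry-style ordered list is constructed.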
While there is reliable evidence for first-letter access, there is only weak evidence for syllable access: in both experiments, syllable reports did not differ across confidence levels, whereas first-letter reports coincided more closely at higher confidence levels (Brown and Burrows, 2013). A mnemonic is a device used to aid recall; it can be a phrase, a short song, or anything easily remembered, and it helps the individual retrieve something that is difficult to remember. For instance, the acronym PEMDAS ("Please Excuse My Dear Aunt Sally") stands for Parentheses, Exponents, Multiplication and Division, and Addition and Subtraction.
In each trial, the participants were presented with a sequence of words on the left side of the window. Each word was presented for one and a half seconds. After all the words had been presented, response buttons appeared on the right side of the window. These buttons were labeled with words from the sequence along with new distractor words that were not part of the sequence. The participants' goal was to click on the response buttons to identify all the words that had been part of the sequence. The independent variable for this study was the type of word presented on the test (response buttons); the dependent variable was the percentage of each type of item reported.
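The dependent measure described above, the percentage of each item type reported, could be scored per trial with a sketch like this. The function name and the trial data are hypothetical, not taken from the study.

```python
def percent_reported(presented, distractors, clicked):
    """Percentage of presented (old) items and of distractor (new)
    items that the participant clicked, by item type."""
    old_hits = sum(1 for w in clicked if w in presented)
    new_hits = sum(1 for w in clicked if w in distractors)
    return {
        "presented": 100 * old_hits / len(presented),
        "distractor": 100 * new_hits / len(distractors),
    }

# One hypothetical trial.
presented = ["table", "river", "candle", "garden"]
distractors = ["window", "bottle", "forest", "ladder"]
clicked = ["table", "river", "forest"]  # participant's responses

print(percent_reported(presented, distractors, clicked))
# {'presented': 50.0, 'distractor': 25.0}
```

Clicks on distractors give a false-alarm rate, which lets correct recognition of presented items be separated from mere willingness to respond.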
Some research, though, suggests that the processes behind word identification are not entirely automatic; they are to some extent avoidable. A study by Kahneman and Henik (1979) supports this: they found that interference was greatly reduced when the colour name appeared in a location adjacent to, rather than the same as, the colour that participants were asked to name. Even so, this reduction in interference is due to the placement of the distracting word, not to any effort by or ability of the participants.
The first, a list of 10 monosyllabic words that are phonologically similar but not semantically related (the A list), was adapted from (LS): "white, height, night, light, tight, write, might, quiet, bite, fight" (p. 30). The second list comprised 10 semantically related words (the B list), similar in length, word class and frequency to the phonologically related words: dear, sugar, savory, sweet, tasty, flavor, honey, dessert, candy, treat. The frequency of the words was determined using corpora (COCA and BNC). There are slight variations in the frequency of the B-list words. Nevertheless, the lists were presented orally and
Using paired word lists of nouns, Bower and Gordon demonstrated this in their 1970 experiment. In their study, they had undergraduate students learn paired word lists by one of four methods: rehearsal of the two words; reading a sentence in which one of the words acted upon the other (e.g., "The boy hit the ball."); creating a sentence that linked the two words (e.g., "Nancy threw her bag on the table."); or creating a mental image of the two words together (e.g., imagining a basket of flowers) (Bower & Gordon, 1970). Students who employed imagery recalled the word pairs better than the other groups, and those who relied on rehearsal had the lowest recall rate of the four (Bower & Gordon, 1970).
This provides a smooth transition to the discussion of studies on the TOT state. Multiple studies, with accompanying figures, are reviewed that measure the TOT state. Findings indicate that word recall and TOT resolution require the entire first syllable of the word to be uncovered, rather than just its first sound. Additional studies in the Phonology Is Everywhere section indicate that second-hand exposure to the first syllable of the "missing" word was mediated by a semantic connection, which may help resolve the TOT state. Additionally, another experiment showed that similar-sounding words from the same grammatical class may confuse the person in the TOT state instead of helping them produce the correct word; the section concludes that only cue words from a different part of speech help resolve the TOT state (Abrams, 2008). Although the results are clearly indicated, the conclusions would be better supported if Abrams summarized the statistical data before drawing inferences from it.
Word recognition involves an individual's ability to identify words independently, without relying on related words for contextual help. A widely examined topic in cognitive psychology, it concerns how printed letters are understood as a word stored in the lexicon. The word frequency effect is important in word recognition: words that appear more often in printed language are identified more easily, faster, and more accurately than words that appear less frequently. In their journal article, Howes and Solomon used the Thorndike-Lorge word count for word frequency and measured the threshold of recognition. They found correlation coefficients of -.68 to -.75 between word frequency and threshold duration.
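A correlation of that kind, between (log) word frequency and recognition threshold, can be computed with a standard Pearson coefficient. The sketch below uses invented data shaped to show the negative relationship; the values are not Howes and Solomon's.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: log word frequency vs. recognition threshold (ms).
# Higher-frequency words tend to have lower thresholds, so r is negative.
log_freq  = [1.2, 2.0, 2.8, 3.5, 4.1, 4.9]
threshold = [310, 280, 260, 230, 220, 200]

print(round(pearson_r(log_freq, threshold), 2))
```

The sign of r captures the frequency effect: as frequency rises, the exposure duration needed for recognition falls.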
"Word recognition has been typically examined using lexical decision tasks in which participants are required to discriminate between real words and nonword foils. Using this task, a word has been recognized when a familiar letter string can be accurately differentiated from a novel one" (Evans, Lambon Ralph, and Woollams, 2012).
One long-running approach to investigating how higher-level information influences speech perception focuses on testing whether lexical knowledge influences phoneme perception through top-down feedback, or
Increasing the number of words is not enough, because the speech recognition system is unable to differentiate homophones such as 'to' and 'two' or 'right' and 'write' (6, p. 98).
Whether it is the modern-day issue of texting and driving or simply studying for an exam in a noisy room, most people experience distraction every day. What people may not know is how those distractions cause interference when processing information; this underlying problem has long been tested through the Stroop experiment. From the original Stroop test, which examined the interference of colour-word association, to later variations using shapes, emotions, or spatial locations, we can still learn a great deal from Stroop experiments.
The observers reacted with GSRs (galvanic skin responses) of significantly greater magnitude during the pre-recognition presentation of the critical words than they did before recognizing the neutral words.