Memory is the ability of the mind to store and recall information that was previously acquired. Memory involves three fundamental processing stages: storage, encoding, and retrieval. Storage refers to the process of placing newly acquired information into memory, where it is modified in the brain for easier storage. Encoding this information makes retrieval easier for the brain, allowing it to be recalled and brought into conscious thinking. Modern memory psychology differentiates between two distinct types of memory storage: short-term memory and long-term memory. In addition, different memory models have suggested variations of short- and long-term memory to account for different ways of storing memory.
Short-term memory is encoded in auditory, visual, spatial, and tactile forms. Short-term memory is closely related to working memory. Baddeley suggested that information stored in short-term memory continuously deteriorates, which can eventually lead to forgetting in the absence of rehearsal. George A. Miller suggested in his paper that the capacity of short-term memory storage is approximately seven items, plus or minus two (also known as the magic number 7), but this number has been shown to be subject to considerable variability, depending on the size, similarity, and other properties of the chunks. Memory span varies; it is lower for multisyllabic words than for shorter words. In general, memory span for verbal contents (letters, words, and digits) depends on the time it takes to speak these contents aloud and on the degree of lexicality (how closely the contents relate to the words or vocabulary of a language, as distinct from its grammar and construction). Characteristics such as a longer spoken duration for each word, known as the word-length effect, or similarity between words lead to fewer words being recalled.
Chunking is the process of combining pieces of information to increase the limited amount of information that working memory can retain. Chunking involves a process by which a person organizes material into meaningful groups. This type of memory process is seen frequently with phone numbers, credit card numbers, house numbers, etc. With North American phone numbers, for example, people commonly chunk the digits into three serial groups: the three-digit area code, the next three digits, and the last four digits.
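The 3-3-4 grouping described above can be sketched in a few lines of Python; the function name and interface here are illustrative, not taken from any memory-modeling library:

```python
def chunk_phone_number(digits: str) -> list[str]:
    """Split a 10-digit North American phone number into the familiar
    3-3-4 chunks: area code, exchange, and line number."""
    assert len(digits) == 10 and digits.isdigit()
    return [digits[:3], digits[3:6], digits[6:]]

print(chunk_phone_number("8005551234"))  # ['800', '555', '1234']
```

Instead of ten independent digits, a person rehearsing the number now holds only three chunks, well within Miller's seven-plus-or-minus-two range.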
Rehearsal is the process by which information is retained in short-term memory through conscious repetition of a word, phrase, or number. If information has sufficient meaning to the person, or if it is repeated enough, it can be encoded into long-term memory. There are two types of rehearsal: maintenance rehearsal and elaborative rehearsal. Maintenance rehearsal consists of constantly repeating the word or phrase to be remembered; remembering a phone number is one of the best examples. Maintenance rehearsal mainly supports the short-term ability to recall information. Elaborative rehearsal involves associating new information with old.
In contrast to short-term memory, long-term memory refers to the ability to hold information for a prolonged time and is possibly the most complex component of the human memory system. The Atkinson–Shiffrin model of memory (Atkinson 1968) suggests that items stored in short-term memory move to long-term memory through repeated practice and use. Long-term storage may be similar to learning: the process by which information that may be needed again is stored for recall on demand. The process of locating this information and bringing it back into working memory is called retrieval. Knowledge that is easily recalled is explicit knowledge, whereas most long-term memory is implicit knowledge and is not readily retrievable. Scientists speculate that the hippocampus is involved in the creation of long-term memory. It is unclear where long-term memory is stored, although there is evidence that it is distributed across various parts of the nervous system. Long-term memory is relatively permanent. Recalling a memory, according to the dual-store memory search model, strengthens it in long-term memory. Forgetting may occur when the memory fails to be recalled on later occasions.
Several memory models have been proposed to account for different types of recall processes, including cued recall, free recall, and serial recall. To explain the recall process, a memory model must identify how an encoded memory can reside in memory storage for a prolonged period until it is accessed again during recall. Not all models use the terminology of short-term and long-term memory to explain memory storage: the dual-store theory and a modified version of the Atkinson–Shiffrin model of memory (Atkinson 1968) use both short- and long-term memory storage, but others do not.
The multi-trace distributed memory model suggests that memories being encoded are converted to vectors of values, with each scalar quantity of a vector representing a different attribute of the item to be encoded. This notion was first suggested by early theories of Hooke (1969) and Semon (1923). A single memory is distributed across multiple attributes, or features, so that each attribute represents one aspect of the memory being encoded. Such a vector of values is then added to the memory array, or matrix, composed of different traces or vectors of memory. Therefore, every time a new memory is encoded, it is converted to a vector, or trace, composed of scalar quantities representing a variety of attributes, which is then added to the pre-existing and ever-growing memory matrix composed of multiple traces; hence the name of the model.
Once memory traces corresponding to a specific memory are stored in the matrix, retrieving the memory for recall requires cueing the memory matrix with a specific probe, which is used to calculate the similarity between the test vector and the vectors stored in the memory matrix. Because the memory matrix constantly grows as new traces are added, one would have to perform a parallel search through all the traces present within the memory matrix to calculate the similarity; the result can be used to perform either associative recognition or, with a probabilistic choice rule, cued recall.
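A minimal sketch of this storage-and-probe scheme can be written in NumPy. The attribute vectors, function names, and cosine similarity measure here are illustrative assumptions; the model itself does not prescribe a specific similarity function:

```python
import numpy as np

# The ever-growing memory matrix: each row is one trace,
# a vector of scalar attribute values for one encoded item.
traces = []

def encode(attribute_vector):
    """Add a new trace (vector of attribute values) to the memory matrix."""
    traces.append(np.asarray(attribute_vector, dtype=float))

def recall(probe):
    """Parallel search: compare the probe against every stored trace
    at once and return the best-matching trace plus all similarities."""
    probe = np.asarray(probe, dtype=float)
    matrix = np.stack(traces)
    # Cosine similarity between the probe and every row of the matrix.
    sims = matrix @ probe / (
        np.linalg.norm(matrix, axis=1) * np.linalg.norm(probe)
    )
    return traces[int(np.argmax(sims))], sims

# Encode three memories; each scalar is one attribute of the item.
encode([1.0, 0.2, 0.0])
encode([0.0, 1.0, 0.3])
encode([0.4, 0.0, 1.0])

# Cue with a noisy version of the first trace.
best, sims = recall([0.9, 0.1, 0.0])
print(best)  # the first trace wins the similarity comparison
```

The vector of similarities could then feed associative recognition (is any similarity above a threshold?) or a probabilistic choice rule for cued recall, as the model describes.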
While it has been claimed that human memory is capable of storing a great amount of information (some had even thought an infinite amount), the presence of such an ever-growing matrix within human memory sounds implausible. In addition, the model suggests that performing recall requires a parallel search across every trace residing within the ever-growing matrix, which raises doubt about whether such computations can be done in a short amount of time. Such doubts, however, have been challenged by the findings of Gallistel and King, who present evidence of the brain's enormous computational abilities that could support such parallel processing.
The multi-trace model has two key limitations: first, the notion of an ever-growing matrix within human memory sounds implausible; and second, computational searches for similarity against the millions of traces that would be present in the memory matrix sound far beyond the scope of the human recall process. The neural network model overcomes these limitations while maintaining the useful features of the multi-trace model.
The neural network model assumes that neurons form a highly interconnected network with other neurons; each neuron is characterized by an activation value, and the connection between two neurons by a weight value. Interaction between neurons is characterized by the McCulloch–Pitts dynamical rule, and the change of weights and connections between neurons resulting from learning is represented by the Hebbian learning rule.
Anderson shows that the combination of the Hebbian learning rule and the McCulloch–Pitts dynamical rule allows a network to generate a weight matrix that can store associations between different memory patterns; such a matrix is the form of memory storage for the neural network model. The major difference between the matrix of the multiple-trace hypothesis and the neural network model is that while a new memory extends the existing matrix under the multiple-trace hypothesis, the weight matrix of the neural network model does not extend; rather, the weights are updated as new associations between neurons are introduced.
Using the weight matrix and the learning/dynamical rule, neurons cued with one value can retrieve a different value that is ideally a close approximation of the desired target memory vector.
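A minimal sketch of this kind of Hebbian associative storage is the linear associator: each cue-target pair adds an outer product to a fixed-size weight matrix, and cueing is a matrix-vector product. The patterns below are illustrative; with orthogonal cues the retrieval happens to be exact, while overlapping cues would give only the approximation the text describes:

```python
import numpy as np

n = 4
W = np.zeros((n, n))  # fixed-size weight matrix: it is updated, never extended

def learn(cue, target):
    """Hebbian update: add the outer product of target and cue to W."""
    global W
    W = W + np.outer(np.asarray(target), np.asarray(cue))

def retrieve(cue):
    """Cue the network; the output approximates the associated target."""
    return W @ np.asarray(cue)

# Store two associations using orthogonal cue patterns.
f1, g1 = np.array([1.0, 0, 0, 0]), np.array([0, 1.0, 0, 0])
f2, g2 = np.array([0, 1.0, 0, 0]), np.array([0, 0, 1.0, 0])
learn(f1, g1)
learn(f2, g2)

print(retrieve(f1))  # recovers g1, because f1 and f2 do not overlap
```

Note the contrast with the multi-trace model: storing a second association did not add a row to anything; the same 4x4 matrix simply changed its weights.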
Because Anderson's weight matrix will only retrieve an approximation of the target item when cued, a modified version of the model was sought that could recall the exact target memory when cued. The Hopfield net is currently the simplest and most popular neural network model of associative memory; it allows the recall of a clean target vector when cued with a partial or 'noisy' version of that vector.
The weight matrix of the Hopfield net, which stores the memory, closely resembles the weight matrix proposed by Anderson. Again, when a new association is introduced, the weight matrix is 'updated' to accommodate the new memory; it is stored until the matrix is cued by a different vector.
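The noisy-cue recall that distinguishes the Hopfield net can be sketched as follows; the specific patterns and the synchronous update schedule are illustrative choices (Hopfield's original formulation updates neurons asynchronously):

```python
import numpy as np

def train_hopfield(patterns):
    """Build the Hopfield weight matrix from rows of +/-1 patterns
    using the Hebbian outer-product rule, with no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # a neuron does not connect to itself
    return W

def recall(W, state, steps=10):
    """Repeatedly apply the threshold dynamics until the state settles."""
    state = np.array(state)
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

patterns = np.array([[1, 1, -1, -1, 1, -1],
                     [-1, 1, 1, -1, -1, 1]])
W = train_hopfield(patterns)

noisy = np.array([1, 1, -1, -1, -1, -1])  # first pattern, one bit flipped
print(recall(W, noisy))                    # settles on the first stored pattern
```

Unlike the linear associator, the network does not stop at an approximation: the dynamics pull the noisy cue all the way back to the exact stored vector.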
First developed by Atkinson and Shiffrin (1968), and refined by others, including Raaijmakers and Shiffrin, the dual-store memory search model, now referred to as SAM or the search of associative memory model, remains one of the most influential computational models of memory. The model uses both short-term memory, termed the short-term store (STS), and long-term memory, termed the long-term store (LTS) or episodic matrix, in its mechanism.
When an item is first encoded, it is introduced into the short-term store. While the item stays in the short-term store, vector representations in the long-term store go through a variety of associations. Items introduced into the short-term store go through three types of association: autoassociation (the item's self-association in the long-term store), heteroassociation (inter-item association in the long-term store), and context association (association between the item and its encoded context). The longer an item resides within the short-term store, the greater its association with itself, with other items that co-reside within the short-term store, and with its encoded context.
The size of the short-term store is defined by a parameter, r. When a new item is introduced into a short-term store that already holds its maximum number of items, an item already in the store probabilistically drops out to make room.
As items co-reside in the short-term store, their associations are constantly being updated in the long-term store matrix. The strength of association between two items depends on the amount of time the two memory items spend together within the short-term store, known as the contiguity effect. Two items that are contiguous have greater associative strength and are often recalled together from long-term storage.
Furthermore, the primacy effect, seen in memory recall paradigms, reveals that the first few items in a list have a greater chance of being recalled than others, while older items have a greater chance of dropping out of the STS. An item that manages to stay in the STS for an extended time will have formed stronger autoassociations, heteroassociations, and context associations than others, ultimately leading to greater associative strength and a higher chance of being recalled.
The recency effect in recall experiments refers to the finding that the last few items in a list are recalled exceptionally well compared with other items, and it can be explained by the short-term store. When the study of a list has finished, what resides in the short-term store is likely to be the last few items introduced. Because the short-term store is readily accessible, such items are recalled before any item stored within the long-term store. This recall accessibility also explains the fragile nature of the recency effect: even simple distractors can cause a person to forget the last few items in the list, since those items would not have had enough time to form meaningful associations within the long-term store. If the information is dropped out of the short-term store by distractors, the probability of the last items being recalled would be expected to be lower than even the pre-recency items in the middle of the list.
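The buffer mechanism behind these effects can be sketched as a small simulation. This is not the full SAM model, only an illustrative toy: a fixed-capacity buffer with random displacement, where time spent in the buffer stands in for long-term associative strength. The capacity value and displacement rule are assumptions for the sketch:

```python
import random

random.seed(42)

R = 4  # capacity of the short-term store (the parameter r in SAM)

def study(list_items, r=R):
    """Present items one at a time. A full buffer displaces a random
    occupant. Time spent in the buffer stands in for the associative
    strength each item builds up in the long-term store."""
    buffer, time_in_buffer = [], {item: 0 for item in list_items}
    for item in list_items:
        if len(buffer) == r:
            buffer.remove(random.choice(buffer))  # probabilistic drop-out
        buffer.append(item)
        for occupant in buffer:
            time_in_buffer[occupant] += 1  # all residents keep associating
    return buffer, time_in_buffer

items = list(range(1, 16))  # a 15-item study list
final_buffer, strength = study(items)

# Recency: the last items presented are still sitting in the buffer.
# Primacy: early items entered an empty buffer, so on average they
# accumulate more buffer time, and thus stronger associations.
print(final_buffer)
print(strength)
```

Recall from the final buffer is immediate (the recency effect), while recall from the long-term store would depend on the accumulated strengths; a distractor task that flushes the buffer before the test would wipe out the recency advantage, as described above.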
The dual-store SAM model also uses an additional memory store, the semantic matrix, which itself can be classified as a type of long-term storage. The long-term store in SAM represents episodic memory, which deals only with new associations formed during the study of an experimental list; pre-existing associations between list items, then, need to be represented in a different matrix, the semantic matrix. The semantic matrix remains a separate source of information that is not modified by the episodic associations formed during the experiment.
Thus, two types of memory storage, the short-term and long-term stores, are used in the SAM model. In the recall process, items residing in the short-term store are recalled first, followed by items residing in the long-term store, where the probability of recall is proportional to the strength of the association present within the long-term store. A further memory store, the semantic matrix, is used to explain the semantic effects associated with memory recall.