Brain hacking. Practice in NLP (recruitment). (Introduction, Part 0)
The more sophisticated the game, the more sophisticated the opponent. If the opponent is truly good, they will steer the victim into a situation they can control.
• If we break social engineering down into its components, we get an impressive list that includes psychology, profiling, and NLP. For those unaware, NLP (Neuro-Linguistic Programming) is a branch of practical psychology built on techniques of verbal and non-verbal human behavior.
• In simple terms, NLP promises to show how to influence others to your own benefit using certain speech techniques, gestures, and facial expressions.
• Incidentally, NLP is not recognized by the academic community. One wonders why. Perhaps because these methods are so effective? If we find out what truly motivates, inspires, and worries a person, we have a good chance to “hack” them and “pull out” the information we need. Everyone has their own key, but finding it is not enough: communicating with the victim is like a game of chess, in which there is always a risk of losing.
• In practice, most social engineers are not particularly strong in these areas. Nevertheless, NLP and related fields are among the most powerful tools in interpersonal communication, and they are actively used in social engineering wherever an individual approach is required.
How does memory generalize memories? (Part 1)
“It’s unrealistic to know everything in the world, but I, cherishing my dream, solved the problem brilliantly…”
Remember these words from a certain popular-science program? (Of course not.)
Perhaps they perfectly describe what happens to our brain when it must retain tons of information from the environment, most of it not even useful. The brain found an ingenious solution: no memory is encoded in isolation; instead, it is generalized together with similar memories into a pattern. Thus an animal remembers a safe path to a water source and then generalizes that knowledge so that in the future it can more easily find other paths to water. Memory and generalization are interrelated processes that help us predict the future and structure our behavior accordingly.
But how does the brain decide what should be generalized and what should not? What explains the selectivity of memory consolidation? What happens when incorrect information is generalized? The current theory of systems consolidation holds that consolidation is a slow process in which the hippocampus, having performed the initial encoding of information through its highly plastic synapses, transfers that information to permanent storage in the cerebral cortex (neocortex), whose synapses change more slowly.
Systems consolidation takes several days. During this time the hippocampus effectively teaches the neocortex the new information by replaying it, gradually handing the memory over.
Where does generalization come in? It actually happens in parallel with consolidation: the neocortex sorts memories by their similarity to others and generalizes the information. On this view, generalization is an integral part of consolidation and always occurs without any regulation. But can the brain generalize just any information? What if the information contains noise? What if the environment changes so quickly that generalization loses its purpose?
How does memory generalize memories? (Part 2)
Part 1
The theory of generalization-optimized systems consolidation
To answer these open questions in the theory of systems consolidation, scientists proposed using the formalism of artificial neural networks.
They compared the neocortex to a student and the environment to a teacher. A teacher can instruct a student directly; exactly what the teacher teaches is what gets remembered, whether it is relevant information or irrelevant noise. For the student to filter out the relevant information for later memorization, a helper is needed: a notebook. By writing down what the teacher says, the student can review and make sense of the material several times on their own.
At the mathematical level, each element of the teacher–notebook–student system is a neural network with certain parameters. The student can go over the material several times (the neocortex learns), which is reflected in changes to the network’s weights.
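The teacher–student picture above can be sketched in a few lines of Python. This is a minimal illustration, not the authors’ actual model: I assume a fixed linear “teacher” (the environment), a linear “student” that learns by gradient descent on replayed examples, and arbitrary dimensions, noise level, and learning rate chosen only for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the "teacher" (environment) is a fixed linear map;
# the "student" (neocortex) is a linear network whose weights are learned.
d_in, n_examples = 10, 40
teacher_w = rng.normal(size=d_in)            # ground-truth weights
noise_level = 0.3                            # irrelevant information ("noise")

x = rng.normal(size=(n_examples, d_in))      # sensory inputs
y = x @ teacher_w + noise_level * rng.normal(size=n_examples)

student_w = np.zeros(d_in)                   # the student starts knowing nothing
lr = 0.05
for _ in range(500):                         # repeated replay of the "notes"
    grad = x.T @ (x @ student_w - y) / n_examples
    student_w -= lr * grad                   # each weight update = learning

# after replay, the student's weights approximate the teacher's
print(np.linalg.norm(student_w - teacher_w))
```

With little noise, the student’s weights converge close to the teacher’s; raising `noise_level` moves them further away, which is the core intuition the posts build on.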
Unfortunately, the environment is not a perfect teacher: the information it transmits to the brain contains a great deal of noise. Returning to the animal looking for a place to drink: the relevant information is fog, swarms of midges, morning dew; the noise, or irrelevant information, is the brightness of the sunlight or, say, the coloring of birds flying by.
It is worth noting that noise makes the environment less predictable, thereby reducing the beneficial effect of generalization.
The proportion of noise versus relevant information depends on the mechanisms of covert and overt attention, on whether the environment — the teacher — provides enough information for the student to learn, and on how well matched the teacher is to the student overall (in mathematical terms, the student may be a linear neural network while the teacher is nonlinear). The real world is full of noise and complexity, and the brain’s ability to interface with it is limited.
According to systems consolidation theory, memories must be integrated into a generalization system in order to predict the future. On the other hand, the unpredictability and variability of the environment should not be written into our memory; that is, the “student” must separate the noise from the essential information. In mathematical language, this means the difference between the teacher’s output and the student’s predictions must be minimized.
The scientists suggested that learning begins when the hippocampus stores a number of examples from the environment and replays them in the form of neuronal activity. The “student” studies these examples with increasing precision, but every replayed example carries the error that was encoded with it at the start. Thus, the more noise there is in the environment, the larger the error and the worse the generalization.
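The noise–generalization link can be demonstrated numerically. This is a hedged sketch under my own assumptions: a linear student fit by least squares stands in for replay-based learning, and the helper `generalization_error`, the dimensions, and the noise levels are all invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def generalization_error(noise_level, n_train=30, d=8, n_test=500):
    """Train a linear student on noisy teacher examples; return its test error."""
    teacher_w = rng.normal(size=d)
    x_train = rng.normal(size=(n_train, d))
    y_train = x_train @ teacher_w + noise_level * rng.normal(size=n_train)
    # least-squares fit stands in for replay-based gradient learning
    student_w, *_ = np.linalg.lstsq(x_train, y_train, rcond=None)
    x_test = rng.normal(size=(n_test, d))
    y_test = x_test @ teacher_w              # noise-free ground truth
    return np.mean((x_test @ student_w - y_test) ** 2)

# more environmental noise -> larger generalization error
errs = [generalization_error(s) for s in (0.0, 0.5, 2.0)]
print(errs)
```

The measured test error grows with the noise level, matching the claim that noisier experience yields worse generalization.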
At what point does the brain realize it should stop generalizing — that any further generalization will distort the model it has already built to predict the future? How does it know when there is too much noise?
Standard systems consolidation theory posits that generalization occurs naturally during consolidation, but it does not account for the fact that generalization can also harm memorization: from its point of view, the teacher–notebook–student link is always ideal. Yet in a rapidly changing, hard-to-predict environment, too much consolidation degrades generalization.
The scientists proposed that systems consolidation stops at the point where further consolidation would harm generalization: the brain is able to detect the moment when the error between generalization and prediction begins to grow.
How?
One strategy is to use a simple heuristic: the initial learning speed correlates with the predictability of the environment, which makes it possible to use a time parameter to decide when to stop training.
#Psychology
How does memory generalize memories? (Part 3)
Part 1
Part 2
The strategy consists of splitting the information into a training set and a test set. This split happens in the hippocampus — the “notebook.” The first set is used to train the model that the neocortex encodes; the second is used to validate it for subsequent use. Training ends when the prediction error begins to increase.
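This train/validate strategy is essentially early stopping, and a toy version is easy to write down. The sketch below is my own illustration, not the paper’s model: the “notebook” is assumed to hold both training and validation examples from a noisy linear teacher, the split sizes and learning rate are arbitrary, and keeping the best-so-far weights approximates stopping when the prediction error starts to rise.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical notebook: 24 noisy examples from a linear "teacher" environment
d, n = 12, 24
teacher_w = rng.normal(size=d)
x = rng.normal(size=(n, d))
y = x @ teacher_w + 1.0 * rng.normal(size=n)   # heavily noisy experience

x_tr, y_tr = x[:16], y[:16]    # examples used to train the "student"
x_va, y_va = x[16:], y[16:]    # examples used to validate its predictions

w = np.zeros(d)
lr, best_err, stop_epoch = 0.02, np.inf, 0
for epoch in range(2000):      # replay epochs stand in for systems consolidation
    w -= lr * x_tr.T @ (x_tr @ w - y_tr) / len(y_tr)
    val_err = np.mean((x_va @ w - y_va) ** 2)
    if val_err < best_err:     # track the point of best generalization
        best_err, best_w, stop_epoch = val_err, w.copy(), epoch

# consolidation "stops" at the epoch where validation error was lowest;
# best_w is the memory the brain would keep
print(stop_epoch, best_err)
```

With scarce, noisy data the validation error is U-shaped: it falls, then rises as the student starts memorizing noise, and `stop_epoch` marks where consolidation should end.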
To demonstrate the idea, the scientists simulated hippocampal damage in their neural-network models, breaking the teacher–notebook–student link; systems consolidation stopped at that point. As expected, removing the “notebook” from the model always led to worse memory. The same effect occurred when the “teacher’s” level of unpredictability increased.
It is important to note the novelty of this work. Previous theories did not allow that systems consolidation could be harmful. The authors showed that unregulated consolidation can lead to memory deterioration and weak network predictions when the data from the environment are scarce or excessively noisy — all of which creates problems for generalization.
Thus, an important contribution of the theory of generalization-optimized systems consolidation is the idea that successful consolidation and generalization require experience that is sufficient and predictable. The theory says that what matters for memorization and generalization is not the detail of the experience, its frequency, or the salience of the stimulus, as many earlier studies claimed, but its predictability.
#Psychology
