Structural Analysis of the Neuropsychological Data for 6—9-year-old Children



Objective. A system of integral indices was developed for the qualitative and quantitative analysis of neuropsychological data. These indices cover executive functions, information processing, and the functions of activation that regulate tone, waking, and mental states. We aimed to check with structural equation modeling whether the assignment of the neuropsychological measures to the different integral indices was valid. Method. A total of 471 children aged 6-9 years (older preschoolers and elementary schoolchildren without developmental disorders) participated in the study. All children underwent a neuropsychological examination that included both traditional and computerized tests. Results. Two factorial models were constructed, wherein the measures of performance in both traditional and computerized tests were the observed variables and the cognitive functions were the latent factors. Confirmatory factor analysis showed that the models fit the empirical data well. Conclusions. The obtained results indicate that the developed integral indices of various groups of cognitive functions are valid and that the results of traditional and computerized neuropsychological examinations are compatible.

General Information

Keywords: neuropsychological examination, elementary schoolchildren, preschoolers, executive functions, auditory information processing, visuospatial information processing.

Journal rubric: Special Psychology

Article type: scientific article


Funding. The reported study was funded by the Russian Foundation for Basic Research (RFBR), project 19-013-00668.

Received: 20.12.2021


For citation: Bukinich A.M., Korneev A.A., Matveyeva E.Y., Akhutina T.V., Gusev A.N., Kremlev A.E. Structural Analysis of the Neuropsychological Data for 6—9-year-old Children. Kul'turno-istoricheskaya psikhologiya = Cultural-Historical Psychology, 2022. Vol. 18, no. 2, pp. 21–31. DOI: 10.17759/chp.2022180203.

Full text


One of the main principles of Lurian neuropsychology, a part of cultural-historical psychology [11; 33], is the systemic organization of higher mental functions (HMF). According to this principle, qualitative analysis of symptoms is essential to distinguish between primary and secondary (systemic) deficits [4, p. 274 and further]. Qualitative analysis has been successfully applied in studies of adult patients [6; 24]. It is combined with a simple ordinal three-point rating of symptom severity, where 0 indicates correct performance. However, the examination of cognitive functions in children requires a more detailed quantitative assessment to reflect the dynamics of HMF development. Therefore, a combination of qualitative and quantitative analyses is in demand.

In the modern world literature on neuropsychology, after decades of dominance of the psychometric, quantitative approach, there is an increasingly clear trend towards convergence of the quantitative and qualitative approaches [19]. The quantitative approach has its pros and cons, which are discussed for the neuropsychological examination of both adults [9] and children [2; 12]. The qualitative approach, apart from its obvious advantages, also has limitations. It provides a more comprehensive picture of the patient’s HMF but complicates formalization of the results: they amount to a unique expert judgement that is sometimes difficult to compare with others. Understanding the strengths and weaknesses of these approaches leads researchers to attempt to combine them; such a convergence of methods is seen in child neuropsychology [13; 36]. For instance, different schemes for the quantitative scoring of the qualitative neuropsychological examination of adults [3] and children [7; 5; 8] have been suggested in Russian neuropsychology. Such work requires the estimates of the qualitative neuropsychological examination to be more strictly formalized and converted into scales, and normative data to be accumulated (see the discussion of these issues in [12]).

In this study, we use the results of an extensive neuropsychological examination of preschoolers and elementary schoolchildren. The examination was developed within the framework of the qualitative approach, but task performance was assessed quantitatively: for example, the number of mistakes in a particular task was counted, or the presence of certain specific phenomena was assessed. Building on previous work, we use structural equation modelling to check whether the integral quantitative indices of diverse cognitive functions are composed correctly.

When developing such indices, it is useful to draw on the experience of neuropsychological assessment, which shows that precise evaluation of a participant requires not only general productivity scores (i.e., a ratio of correct answers to errors) but also analysis of specific errors. A system for the qualitative discrimination between such errors was developed for the neuropsychological examination of 6—9-year-old children [7].

Noteworthy, the way the integral indices that combine performance characteristics of different tests are constructed is also important. Indices of this type were developed in latent process analysis [25] within the quantitative approach. Addressing the component composition of executive functions, Miyake and his colleagues noted the task impurity problem: because human activity is complex, there are no tasks that load on only a single function. Therefore, to assess a particular cognitive component with more sensitivity, it is necessary to sum unidirectional measures from several tasks.

An equivalent technique, the addition of performance measures from different tasks to compose the indices, is also applied in child neuropsychology based on the Vygotsky-Luria theory [7]. A system of qualitative and quantitative neuropsychological assessment that applies such indices has been successfully tested in the diagnostics and correction of learning disabilities in children [10]. However, the sets of measures that the indices comprise also have to be refined statistically. We make such an attempt in this study.
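As a minimal illustration of this aggregation technique (a sketch only, not the authors' actual scoring procedure), unidirectional measures from several tasks can be z-standardized and summed into one composite score per participant:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize a list of raw scores to mean 0, SD 1 (sample SD)."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

def integral_index(*measure_columns):
    """Sum z-standardized, unidirectional measures across tasks into
    one composite score per participant (illustrative only)."""
    standardized = [z_scores(col) for col in measure_columns]
    return [sum(vals) for vals in zip(*standardized)]

# Toy data: error counts from two different tasks, five participants.
errors_task_a = [0, 2, 1, 4, 3]
errors_task_b = [1, 1, 0, 3, 5]
print(integral_index(errors_task_a, errors_task_b))
```

Standardizing first prevents a task with a wider raw-score range from dominating the composite.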

The abovementioned neuropsychological battery for 6—9-year-old children was applied in our work to assess the HMF. Performance on the battery’s tests is scored on numerous characteristics, which a neuropsychologist considers when evaluating a child’s cognitive functions and drawing a qualitative conclusion. The indices, in turn, are useful when it is necessary to compare the results of neuropsychological examinations in different samples of children (e.g., groups with different types of disabilities). Akhutina et al. [7] presented a set of integral indices that assess the components of HMF associated with the three functional brain units, according to Luria. In our study we address the most important ones: (1) executive functions (i.e., programming and control of voluntary activity), related to the third brain unit; (2) two indices for information processing (auditory verbal and visuospatial), related to the second brain unit; (3) two indices for the functions of activation (i.e., regulation of tone, waking, and mental states), related to the first brain unit, namely (3a) hyperactivity/impulsivity and (3b) fatigue/slowness of mental activity. From the perspective of the cognitive assessment of preschoolers and elementary schoolchildren, such a set of indices seems fairly complete: it covers the most important components of cognitive functions that are actively developing at this age and are important for effective learning [31; 35].

In addition to the traditionally applied neuropsychological tasks, computerized neuropsychological batteries are being increasingly developed nowadays. The CANTAB is the most popular among them [21]; however, there are many others, for instance, ANAM, ImPACT, CogState, CNS-VS, etc. (see the review [28]). We have designed and used a battery of tests aimed at assessing the abovementioned functions [20]. For a more precise and informed application of computerized assessment, it should be compared to the traditional examination, which we attempt below.

The indices listed above were constructed based on theoretical analysis and the experience of neuropsychological examination; however, their composition remains to be verified empirically and statistically. Confirmatory factor analysis (CFA) is widely used to test the structure of neuropsychological batteries [27; 22; 34].

Using the CFA, we are going to test the validity of the indices that characterize different cognitive functions in children and include various measures of performance in the traditional and computerized neuropsychological tasks. We aim to address the following research questions:

Is it possible to verify, with CFA on a large sample of 6—9-year-old children, the composition of the indices that was earlier developed based on the theory and practice of child neuropsychology?

Does the combined application of the traditional and computerized neuropsychological tasks increase the accuracy and reliability of the cognitive assessment?


Participants

The study comprised a total of 471 children. Parents of all the children gave informed consent to their participation in the study. The participants had diverse levels of academic performance but no diagnosed disorders of mental development or neurological disorders. Participant distribution by grade, sex, and age was as follows: (1) 139 preschoolers (mean age 6.53±0.61 years), of whom 63 were girls; (2) 90 first-graders (mean age 7.66±0.42), 56 girls; (3) 145 second-graders (mean age 8.66±0.42), 62 girls; (4) 97 third-graders (mean age 9.64±0.43), 45 girls.


Two groups of tasks were used, namely the traditional tasks for neuropsychological examination adapted for 6—9-year-old children [7] and the computerized diagnostic tests included in the battery for computerized neuropsychological examination [20].

Neuropsychological examination with quantitative assessment

  1. The Go/No-go task requires a child to automatize a stereotypical motor reaction to stimuli and then to change this stereotype. The analyzed measures included understanding of the instruction for the second subtest (a novel, conflicting program), the total number of errors, and the performance rate.
  2. The Counting task. In this task we assessed the child’s ability to count in the direct order (from 1 to 10), to count back (from 10 to 1), to count selectively (e.g., from 3 to 7 or from 8 to 4), and the total number of errors.
  3. The Verbal Fluency task. A child was asked to generate as many words as possible (any words, action names, and plant names for the first, second, and third subtests respectively) in one minute. The measures included in our analysis were the productivity (i.e., the number of correct answers) for the first and second subtests, the number of exact plant names in the third subtest (e.g., a birch, a maple, but not a tree or a flower), and the number of set-loss errors in the third subtest.
  4. The Odd one out task. Five series of five words each were presented aurally, and a child had to find the odd one and explain his or her choice. We assessed the productivity, the total score accounting for categorical and concrete answers, and the number of inadequate answers.
  5. The Three Positions Test, or “Fist-Edge-Palm”. A child had to understand and automatize the motor program. A measure of program understanding was included in our analysis.
  6. The Auditory verbal memory task. Two groups of three words each were presented aurally three times. After the presentation of each group, a child had to repeat the words; after the repetition of both groups, the child had to recall them. We assessed the productivity of the first repetition and of the third recall, the number of distortions (changes of two sounds), and the number of extra-list intrusion errors.
  7. The Visual perception task. A child was asked to recognize items depicted on the superimposed, crossed out, and unfinished images. The total number of verbal errors was analyzed.
  8. The Visuospatial memory task. Geometric shapes that are hard to describe verbally were presented three times; after each presentation a child had to draw them from memory. The productivity of the first and the third recall, the number of right-hemisphere and left-hemisphere errors, and the number of shape transformations into a sign were assessed.
  9. The Finger Position test included imitation of visually presented finger positions and reproduction of finger poses by proprioceptive memory without visual control. The sum of spatial errors was analyzed.
  10. The Copying of a three-dimensional picture of a house. Indications of the right-hemisphere (holistic) and the left-hemisphere (analytic) strategies were assessed.
  11. Five characteristics related to the functions of activation (as functions of the first brain unit, according to Luria) were evaluated based on the observation during the entire examination: fatigue, slow cognitive tempo, tendency to perseveration, hyperactivity, and impulsivity (for details see [7, p. 143—147]).

Computerized tests

  1. The “Dots” test [20]. In this test, a child had to respond to stimuli of two types, images of hearts and flowers, presented on the left or on the right side of a computer screen. When a child saw a heart, he or she had to press the button on the same side where the stimulus appeared; when a flower was presented, the child had to press the button on the opposite side. The task comprised three subtests: (1) a congruent one (only hearts were presented); (2) an incongruent one (only flowers were presented); and (3) a mixed one (hearts and flowers were intermixed).
  2. The “Understanding of Similar Sounding Words” test. A child was presented with a set of ten pictures of distinct objects; each object had a pair whose name differed in one sound (e.g., koza and kosa — a goat and a scythe). Then sequences of objects’ names were presented aurally, one sequence at a time, with gradual increase in their length. The child had to choose the corresponding pictures on the screen in the same order as heard.
  3. The Corsi Tapping Block test. Nine cubes were presented on the screen, and some of them were highlighted one by one, with an increasing sequence length (starting from a length of two elements). A child was asked to reproduce each sequence in the correct order. We analyzed the maximal length of a correctly reproduced sequence and the average time between responses within a sequence.
  4. The computerized version of the “Schulte Tables” in Gorbov’s modification [20] comprised five tables, each containing black and red numbers from 1 to 10. A child had to search for and indicate the numbers in a particular order. The first two subtests required pointing to the numbers from 1 to 10 colored either black (subtest 1) or red (subtest 2). The instructions in subtests 3 and 5 matched those of the first two subtests except that the numbers had to be indicated in descending order (from 10 to 1). In the most complex subtest 4, a child had to alternate between black and red numbers in ascending order (i.e., 1 black, 1 red, 2 black, 2 red, etc.).

The number of correct answers and errors as well as the reaction time were registered for each computerized test.
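For concreteness, the alternation rule of subtest 4 of the Schulte tables described above can be sketched as follows (an illustrative sketch only; the source does not describe the software's internal representation):

```python
# Target sequence for subtest 4 of the Gorbov modification of the
# Schulte tables: black and red numbers alternate in ascending order
# (1 black, 1 red, 2 black, 2 red, ...).
def alternating_sequence(max_number: int = 10):
    seq = []
    for n in range(1, max_number + 1):
        seq.append((n, "black"))
        seq.append((n, "red"))
    return seq

print(alternating_sequence()[:4])
# [(1, 'black'), (1, 'red'), (2, 'black'), (2, 'red')]
```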

Data Processing

The statistical analysis was carried out in the RStudio environment (version 2021.09.0+351; R version 4.1.1) with the lavaan package, version 0.6-9 [29]. Due to the presence of ordinal measures, we used the mean- and variance-adjusted weighted least squares estimator (WLSMV). As the model comprised measures from the same task, the correlations between them were also included in it. The quality of the models was estimated with the root mean square error of approximation (RMSEA; the model quality was considered high if RMSEA was less than 0.080), the comparative fit index (CFI), and the Tucker-Lewis index (TLI; CFI and TLI should exceed 0.900 for a good model [30]).
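As a sanity check on the fit statistics used here, the RMSEA point estimate can be recomputed from a model's chi-square, degrees of freedom, and sample size. The sketch below uses the standard formula (some packages divide by N rather than N - 1; both round to the same values for this sample):

```python
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Chi-square values reported in this study (N = 471 children):
print(round(rmsea(589.412, 293, 471), 3))   # Model 1 -> 0.046
print(round(rmsea(1183.845, 560, 471), 3))  # Model 2 -> 0.049
```

Both values reproduce the RMSEA figures reported for Models 1 and 2 and fall below the 0.080 threshold.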


Results

We constructed two models that clarify whether it is possible to assign particular neuropsychological measures to different groups of cognitive functions. Model 1 comprised only the results of the traditional neuropsychological examination, whereas the measures from the computerized tests were added in Model 2. Five factors corresponding to the following groups of functions were identified in the models: executive functions (EF); functions of auditory verbal information processing (AV); functions of visuospatial information processing (VS); fatigue and slowness of mental activity (FS), related to the concept of sluggish cognitive tempo [15]; and phenomena of hyperactivity/impulsivity (HI), reflecting symptoms of attention deficit hyperactivity disorder (ADHD).

Model 1 was based on the model described previously by Akhutina and co-authors [7, p. 171—179]. Model 1 differed from the original one as follows: first, the characteristics of the functions of activation were divided into the two abovementioned symptom groups (i.e., FS and HI), and second, some measures were removed from the integral indices according to the results of a preliminary analysis. Furthermore, correlations of the residuals not explained by the factors were added to the model for measures from the same task (the same procedure was applied, for instance, in [27]). Table 1 provides the list of measures included in the model.
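In lavaan, a measurement model of this kind is specified by assigning observed measures to latent factors with the `=~` operator and by freeing residual covariances with `~~`. The fragment below sketches the shape of Model 1 in lavaan-style syntax (held here as a Python string; the factor structure follows the paper, but the indicator names are illustrative placeholders, not the authors' actual variable names, and only a subset of measures and residual correlations is shown):

```python
# Sketch of the five-factor structure of Model 1 in lavaan-style syntax.
# Indicator names are illustrative placeholders, not the real dataset columns.
model1_desc = """
  EF =~ gng_instr2 + gng_errors + counting + vf_prod1 + vf_setloss
  AV =~ vp_verbal_errors + avm_rep1 + avm_rec3 + avm_distortions
  VS =~ vsm_rh_errors + vsm_rec1 + vsm_rec3 + copy_lh + copy_rh
  FS =~ fatigue + cognitive_tempo + gng_rate
  HI =~ hyperactivity + impulsivity

  # residual correlations between measures taken from the same task
  gng_errors ~~ gng_rate
  avm_rep1   ~~ avm_rec3
"""
factors = [line.split("=~")[0].strip()
           for line in model1_desc.splitlines() if "=~" in line]
print(factors)  # ['EF', 'AV', 'VS', 'FS', 'HI']
```

A description of this form can be fitted with lavaan's `cfa()` in R (with `estimator = "WLSMV"` and the ordinal measures declared as ordered), or with a comparable Python SEM package.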

The model’s quality was fairly high; therefore, it can be considered to fit the empirical data well: χ²(293) = 589.412, CFI = 0.925, TLI = 0.910, RMSEA = 0.046. The coefficients for each measure included in the model are presented in Table 1. Most latent factors were closely associated with one another; their correlations are given in Table 2.

Table 1

The coefficients of Model 1 (comprising measures of traditional examination only)
and Model 2 (comprising measures of computerized tasks additionally)



| Factor / measure: factor loadings (standard errors), significance | Model 1 | Model 2 |
| --- | --- | --- |
| Executive functions (EF) | | |
| Go/No-go: understanding of the instruction for the second subtest | 0.578 (0.060)* | 0.553 (0.058)* |
| Go/No-go: the total number of errors | 0.573 (0.027)* | 0.576 (0.016)* |
| Counting: ability to perform | 0.556 (0.032)* | 0.530 (0.021)* |
| Verbal Fluency: productivity in the first subtest (any words) | -0.489 (0.041)* | -0.544 (0.030)* |
| Verbal Fluency: productivity in the second subtest (actions) | 0.182 (0.047)* | -0.559 (0.029)* |
| Verbal Fluency: the number of set-loss errors in the third subtest (plants) | -0.510 (0.036)* | 0.164 (0.047)* |
| Odd one out: productivity | -0.443 (0.040)* | -0.532 (0.032)* |
| Odd one out: the total score | 0.348 (0.040)* | -0.486 (0.035)* |
| Odd one out: the number of inadequate answers | 0.448 (0.036)* | 0.357 (0.027)* |
| Three Positions Test (“Fist-Edge-Palm”): program understanding | 0.184 (0.043)* | 0.399 (0.031)* |
| Auditory verbal memory (recall): the number of extra-list intrusion errors | 0.118 (0.056), p = 0.037 | 0.133 (0.041), p = 0.001 |
| “Dots”: productivity (the number of correct responses) in the third (mixed) subtest | | -0.422 (0.041)* |
| Schulte tables: the number of errors in the fourth subtest | | 0.254 (0.042)* |
| Auditory verbal information processing (AV) | | |
| Visual perception: the total number of verbal errors | -0.604 (0.050)* | 0.118 (0.054), p = 0.030 |
| Verbal Fluency: the number of exact plant names in the third subtest | -0.438 (0.051)* | -0.522 (0.041)* |
| Auditory verbal memory (repetition): productivity of the first repetition | -0.440 (0.049)* | -0.402 (0.043)* |
| Auditory verbal memory (recall): productivity of the third recall | -0.611 (0.047)* | -0.616 (0.041)* |
| Auditory verbal memory (recall): the number of distortions | 0.212 (0.045)* | 0.199 (0.049)* |
| “Understanding of Similar Sounding Words”: the number of correct answers | | -0.591 (0.045)* |
| “Understanding of Similar Sounding Words”: proportion of substitutions by similar sounding words | | 0.345 (0.048)* |
| Visuospatial information processing (VS) | | |
| Visuospatial memory: the number of right-hemisphere errors | 0.354 (0.048)* | 0.287 (0.036)* |
| Visuospatial memory: productivity of the first recall | -0.459 (0.047)* | -0.400 (0.033)* |
| Visuospatial memory: productivity of the third recall | -0.557 (0.041)* | -0.500 (0.034)* |
| Copying of a three-dimensional picture of a house: indications of the left-hemisphere (analytic) strategy | 0.674 (0.037)* | 0.630 (0.034)* |
| Copying of a three-dimensional picture of a house: indications of the right-hemisphere (holistic) strategy | 0.741 (0.035)* | 0.633 (0.017)* |
| Corsi Tapping Block test: the maximal length of a correctly reproduced sequence | | -0.505 (0.034)* |
| Schulte tables: the average response time in the fourth subtest | | 0.746 (0.018)* |
| Fatigue and slowness of mental activity (FS) | | |
| Fatigue | 0.799 (0.043)* | 0.711 (0.042)* |
| Cognitive tempo | 0.554 (0.039)* | 0.473 (0.034)* |
| Tendency to perseveration | 0.764 (0.044)* | 0.604 (0.039)* |
| Go/No-go: performance rate | 0.570 (0.069)* | 0.487 (0.053)* |
| Schulte tables: the average response time in the first subtest | | 1.070 (0.095)* |
| “Dots”: the average time of correct response in the first subtest (hearts) | | 0.601 (0.073)* |
| Corsi Tapping Block test: the average time between responses within a sequence | | 0.495 (0.063)* |
| Hyperactivity/impulsivity (HI) | | |
| Hyperactivity | 0.798 (0.065)* | 0.719 (0.049)* |
| Impulsivity | 0.928 (0.072)* | 0.890 (0.060)* |
| Schulte tables: the average response time in the first subtest | | -0.938 (0.095)* |
| “Dots”: the average time of correct response in the first subtest (hearts) | | -0.599 (0.078)* |
| Corsi Tapping Block test: the average time between responses within a sequence | | -0.426 (0.067)* |


Table 2

Correlation coefficients between the factors
in Models 1 and 2



[The body of Table 2 did not survive extraction; only a few unlabeled cells remain (-0.043, p = 0.584; 0.099, p = 0.172; 0.212, p = 0.002; 0.002, p = 0.969). The recoverable correlations are discussed in the text.]

We then included measures from the “Dots” test, the Schulte tables, the Corsi Tapping Block test, and the “Understanding of Similar Sounding Words” test in the model and constructed Model 2. The complete list of measures is provided in the corresponding column of Table 1. Notably, FS and HI comprised an identical set of timing measures, which entered these two factors with opposite signs in Model 2.

The quality of Model 2, in which the measures of the computerized tests were additionally included, remained sufficient compared to Model 1: χ²(560) = 1183.845, CFI = 0.917, TLI = 0.907, RMSEA = 0.049. The associations between measures from the same task remained almost unchanged. The most pronounced change among the correlations between the latent factors concerned FS and HI (0.693; p < 0.001). The correlation between VS and HI became non-significant. The rest of the correlations remained significant and did not change substantially (see Table 2).


Discussion

The obtained results allowed us to verify and optimize the set of integral indices and their composition, which had been previously developed based on neuropsychological theory and practice. Constructing the initial model with the traditional composition of indices [7, p. 171—187] allowed us to identify uninformative measures: productivity measures with a ceiling effect and infrequent specific errors. Their removal from Model 1 improved its quality. In this model, the measures related to the functions of activation were divided into two separate indices, while the timing measures were included in both of them with opposite signs. This finding confirmed the validity of the separation between these indices. Notably, the presence of two indices for the functions of the first brain unit further improved the model’s quality. The measures from the computerized tests were added to Model 2. We suggest that its sufficient quality reflects the possibility and feasibility of combining the traditional and computerized data. In general, these results demonstrate that CFA is useful for evaluating the validity of such sets of indices and their composite measures.

Our results are noteworthy from several perspectives. First, they indicate that the chosen sets of cognitive measures can be consolidated into integral indices related to essential cognitive functions in older preschoolers and elementary schoolchildren. The proposed structure of these indices is consistent with the empirical data obtained from the neuropsychological examination. Second, the sufficient model quality after the addition of measures from the computerized tests means that the results of traditional and computerized neuropsychological assessments are complementary and can be used in combination.

It should be noted that the measures traditionally associated with the neurodynamic aspect of cognition were divided into two factors, namely fatigue and slowness of mental activity (FS) and hyperactivity/impulsivity (HI). Importantly, the same timing measures included in both factors had factor loadings with opposite signs, which points to the relevance of these factors to the behavioral phenomena of sluggish cognitive tempo and impulsivity. Groups of children with a predominance of one of these symptoms were previously described on the basis of neuropsychological assessment, which also illustrates the necessity of their separation [1]. The concept of sluggish cognitive tempo is considered in the literature beyond ADHD [15]. These symptom complexes have points of intersection, which in Russian neuropsychology are explained by deficits in both the neurodynamic and the executive aspects of cognition. There is evidence of associations between sluggish cognitive tempo and ADHD, especially regarding the phenomena of poor attention in ADHD [14; 18]. Furthermore, there are reports that some other cognitive functions are more strongly associated with sluggish cognitive tempo than with ADHD [16; 18; 23; 32], which is consistent with the results of our study.

The associations between the factors within the models should also be discussed. The smallest correlations were found between HI and all other factors. The rest of the correlation coefficients in Model 1 exceeded 0.5. When the measures from the computerized tests were added, the correlations between the factors, except for those involving HI, mainly increased (except for the correlation between AV and FS, which decreased to 0.6). The correlations between HI and all other factors, except for FS, decreased or remained close to zero. At the same time, the correlation between HI and FS increased substantially, which might be a technical artifact, as the same three variables were added to both factors. Associations between factors identified on the basis of psychological assessment are frequently reported to be high, varying between 0.4 and 0.9 [17; 25; 26]. CFA differentiates between factors while also accounting for the correlations between them. The correlations between the latent factors revealed in our study were expected, as the underlying cognitive functions are not independent in real activity, which is in line with the idea of the systemic structure of HMF in the Vygotsky-Luria neuropsychology.


Conclusions

We have constructed structural models of the relations between the neuropsychological measures and the factors corresponding to different groups of cognitive functions (i.e., HMF components). The CFA carried out on a sample of 6-9-year-old children demonstrated that the suggested structural models fit the empirical data well. Therefore, we can conclude that the structural validity of the proposed set of integral indices (EF; VS; AV; and two indices for the regulation of tone, waking, and mental states, namely FS and HI) is fairly high. According to our results, the structural model involving measures of the traditional face-to-face neuropsychological examination of children may be complemented with data derived from computerized tests. This finding indicates the consistency of the data obtained with these different methods and the possibility of their combined application to improve the reliability of cognitive assessment in older preschoolers and elementary schoolchildren.

In conclusion, it is important to emphasize that the qualitative approach to the neuropsychological examination may be integrated with statistically verifiable quantitative assessment, highlighting the high explanatory power of the Lurian neuropsychology based on the principles of cultural-historical psychology.


References

  1. Agris A.R., Akhutina T.V., Korneev A.A. Varianty defitsita funktsii I bloka mozga u detei s trudnostyami obucheniya (okonchanie) [Options of I brain block deficit in children with learning disabilities (ending)]. Vestnik Moskovskogo universiteta. Seriya 14: Psikhologiya = Bulletin of the Moscow University. Series 14. Psychology, 2014, no. 4, pp. 44—55. (In Russ.)
  2. Akhutina T.V., Melikyan Z.A. Neuropsychological Assessment: an overview of modern tendencies (dedicated to 110-th anniversary of A.R. Luria) [Neuropsychological testing: current trends review. To the 110th year from A.R. Luria birthday]. Klinicheskaia i spetsial’naia psikhologiia = Clinical Psychology and Special Education, 2012. Vol. 1, no. 2. Available at: (Accessed 17.12.2021) (In Russ.)
  3. Vasserman L.I., Dorofeeva S.A., Meerson Ya.A. Metody neiropsikhologicheskoi diagnostiki [Methods of neuropsychological diagnostics]. Saint Petersburg: Stroilespechat’, 1997. 360 p. (In Russ.)
  4. Vygotskii L.S. Sobranie sochinenii: v 6 t. T. 5. Diagnostika razvitiya i pedologicheskaya klinika trudnogo detstva [Collected Works: in 6 vol. Vol. 5. Development diagnostics and pedological clinics of difficult childhood]. Moscow: Pedagogika, 1983, pp. 257—321. (In Russ.)
  5. Glozman Zh.M. Neiropsikhologicheskoe obsledovanie: kachestvennaya i kolichestvennaya otsenka dannykh [Neuropsychological assessment: qualitative and quantitative data evaluation]. Moscow: Smysl, 2019. 264 p. (In Russ.)
  6. Luriya A.R. Vysshie korkovye funktsii cheloveka [Human higher mental functions]. Moscow: Publ. Moskovskogo universiteta, 1969. 504 p. (In Russ.).
  7. Akhutina T.V. et al. Metody neiropsikhologicheskogo obsledovaniya detei 6—9 let [Methods of neuropsychological assessment of children aged 6—9]. Moscow: V. Sekachev, 2016. 280 p. (In Russ.)
  8. Polonskaya N.N. Neiropsikhologicheskaya diagnostika detei mladshego shkol’nogo vozrasta [Neuropsychological diagnostics of young schoolers]. Moscow: Publ. tsentr «Akademiya», 2007. 186 p. (In Russ.)
  9. Rasskazova E.I., Kovyazina M.S., Varako N.A. Primenenie skriningovykh shkal v neiropsikhologicheskoi reabilitatsii: vozmozhnosti, trebovaniya i ogranicheniya [Possibilities, demands and limitations of using screening scales in neuropsychological rehabilitation]. Vestnik YuUrGU. Seriya «Psikhologiya» [Bulletin of the South Ural State University. Series “Psychology”]. 2016. Vol. 9, no. 3, pp. 5—15. DOI:10.14529/psy160301 (In Russ.)
  10. Akhutina T.V., Pylaeva N.M. Overcoming learning disabilities. Cambridge: Cambridge University Press, 2012. 299 p.
  11. Akhutina T.V., Shereshevsky G. Cultural-historical neuropsychological perspective on learning disability. In: A. Yasnitsky, Rene van der Veer, M. Ferrari (Eds.), The Cambridge handbook of cultural-historical psychology. Cambridge: Cambridge University Press, 2014, pp. 350—377. DOI:10.1017/CBO9781139028097.020
  12. Astaeva A.V., Berebin M.A. Comparative analysis of Russian and foreign systems for the neuropsychological diagnosis of children from the standpoint of the psychometric approach and its limitations when used in clinical practice. Psychology in Russia. State of the Art, 2012. Vol. 5, pp. 203—218. DOI:10.11621/pir.2012.0012
  13. Baron, I.S. Neuropsychological evaluation of the child. New York: Oxford University Press, 2004. 429 p.
  14. Becker S.P., Luebbe A.M., Fite P.J., Stoppelbein L., Greening, L. Sluggish cognitive tempo in psychiatrically hospitalized children: Factor structure and relations to internalizing symptoms, social problems, and observed behavioral dysregulation. Journal of abnormal child psychology, 2013. Vol. 42, no. 1, pp. 49—62. DOI:10.1007/s10802-013-9719-y
  15. Becker S.P., Leopold D.R., Burns G.L., Jarrett M.A., Langberg J.M., Marshall S.A., McBurnett K., Waschbusch D.A., Willcutt E.G. The internal, external, and diagnostic validity of sluggish cognitive tempo: A meta-analysis and critical review. Journal of the American Academy of Child & Adolescent Psychiatry, 2016. Vol. 55, no. 3, pp. 163—178. DOI:10.1016/j.jaac.2015.12.006
  16. Creque C.A., Willcutt E.G. Sluggish Cognitive Tempo and Neuropsychological Functioning. Research on Child and Adolescent Psychopathology, 2021. Vol. 49, pp. 1001—1013. DOI:10.1007/s10802-021-00810-3
  17. Deng C.P., Liu M., Wei W., Chan R.C., Das J.P. Latent factor structure of the Das-Naglieri Cognitive Assessment System: A confirmatory factor analysis in a Chinese setting. Research in Developmental Disabilities, 2011. Vol. 32, no. 5, pp. 1988—1997. DOI:10.1016/j.ridd.2011.04.005
  18. Hartman C.A., Willcutt E.G., Rhee S.H., Pennington B.F. The relation between sluggish cognitive tempo and DSM-IV ADHD. Journal of Abnormal Child Psychology, 2004. Vol. 32, no. 5, pp. 491—503.
  19. Hebben N., Milberg W. Essentials of neuropsychological assessment. New York: John Wiley & Sons, 2002. 264 p.
  20. Korneev A., Akhutina T., Gusev A., Kremlev A., Matveeva E. Computerized Neuropsychological Assessment in 6—9 Years-old Children. KnE Life Sciences, 2018. Vol. 4, no. 8, pp. 495—506. DOI:10.18502/kls.v4i8.3307
  21. Luciana M., Nelson C.A. Assessment of neuropsychological function through use of the Cambridge Neuropsychological Testing Automated Battery: performance in 4-to 12-year-old children. Developmental neuropsychology, 2002. Vol. 22, no. 3, pp. 595—624. DOI:10.1207/S15326942DN2203_3
  22. Masterson C.J., Tuttle J., Maerlender A. Confirmatory factor analysis of two computerized neuropsychological test batteries: Immediate post-concussion assessment and cognitive test (ImPACT) and C3 logix. Journal of Clinical and Experimental Neuropsychology, 2019. Vol. 41, no. 9, pp. 925—932. DOI:10.1080/13803395.2019.1641184
  23. McBurnett K., Villodas M., Burns G.L., Hinshaw S.P., Beaulieu A., Pfiffner L.J. Structure and validity of sluggish cognitive tempo using an expanded item pool in children with attention-deficit/hyperactivity disorder. Journal of Abnormal Child Psychology, 2014. Vol. 42, no. 1, pp. 37—48. DOI:10.1007/s10802-013-9801-5
  24. Mikadze Y.V., Ardila A., Akhutina T.V. A.R. Luria’s approach to neuropsychological assessment and rehabilitation. Archives of Clinical Neuropsychology, 2019. Vol. 34, no. 6, pp. 795—802. DOI:10.1093/arclin/acy095
  25. Miyake A., Emerson M.J., Friedman N.P. Assessment of executive functions in clinical settings: Problems and recommendations. Seminars in speech and language, 2000. Vol. 21, no. 2, pp. 169—183. DOI:10.1055/s-2000-7563
  26. Park L.Q., Gross A.L., McLaren D.G., Pa J., Johnson J.K., Mitchell M., Manly J.J. The Alzheimer’s Disease Neuroimaging Initiative. Confirmatory factor analysis of the ADNI neuropsychological battery. Brain Imaging and Behavior, 2012. Vol. 6, no. 4, pp. 528—539. DOI:10.1007/s11682-012-9190-3
  27. Parsons T.D. Neuropsychological assessment 2.0: Computer-automated assessments. In: Clinical Neuropsychology and Technology. New York: Springer, Cham, 2016, pp. 47—63. DOI:10.1007/978-3-319-31075-6_4
  28. Rosseel Y. lavaan: An R package for structural equation modeling and more. Version 0.5—12 (BETA). Journal of Statistical Software, 2012. Vol. 48, no. 2, pp. 1—36.
  29. Schumacker R.E., Lomax R.G. A beginner’s guide to structural equation modeling. Psychology Press, 2004. 513 p.
  30. Stiles J., Akshoomoff N.A., Haist F. The development of visuospatial processing. In: Neural Circuit and Cognitive Development. Academic Press, 2020, pp. 359—393. DOI:10.1016/B978-0-12-397267-5.00058-3
  31. Takeda T., Burns G.L., Jiang Y., Becker S.P., McBurnett K. Psychometric properties of a sluggish cognitive tempo scale in Japanese adults with and without ADHD. ADHD Attention Deficit and Hyperactivity Disorders, 2019. Vol. 11, no. 4, pp. 353—362. DOI:10.1007/s12402-019-00300-z
  32. Toomela A. There can be no cultural-historical psychology without neuropsychology. And vice versa. In: A. Yasnitsky, Rene van der Veer, M. Ferrari (Eds.), The Cambridge handbook of cultural-historical psychology. Cambridge: Cambridge University Press, 2014, pp. 315—349.
  33. Treviño M., Zhu X., Lu Y.Y., Scheuer L.S., Passell E., Huang G.C., Germine L.T., Horowitz T.S. How do we measure attention? Using factor analysis to establish construct validity of neuropsychological tests. Cognitive Research: Principles and Implications, 2021. Vol. 6, no. 1, pp. 1—26. DOI:10.1186/s41235-021-00313-1
  34. Vanvooren S., Poelmans H., De Vos A., Ghesquière P., Wouters J. Do prereaders’ auditory processing and speech perception predict later literacy? Research in developmental disabilities, 2017. Vol. 70, pp. 138—151. DOI:10.1016/j.ridd.2017.09.005
  35. Weiler M.D., Willis W.G., Kennedy M.L. Sources of error and meaning in the pediatric neuropsychological evaluation. In: Handbook of psychological assessment. New York: Academic Press, 2019, pp. 193—226. DOI:10.1016/B978-0-12-802203-0.00007-9

Information About the Authors

Aleksei M. Bukinich, Student of the Psychology Department, Lomonosov Moscow State University; Researcher, Psychological Institute of the Russian Academy of Education, Moscow, Russia, ORCID:, e-mail:

Alexey A. Korneev, PhD in Psychology, Senior Research Fellow, Laboratory of Neuropsychology, Department of Psychology, Lomonosov Moscow State University, Moscow, Russia, ORCID:, e-mail:

Ekaterina Y. Matveyeva, PhD in Psychology, Senior Researcher, Psychology Department, Lomonosov Moscow State University, Moscow, Russia, ORCID:, e-mail:

Tatiana V. Akhutina, Doctor of Psychology, Leading Researcher, Laboratory of Neuropsychology, Lomonosov Moscow State University, Moscow, Russia, ORCID:, e-mail:

Alexey N. Gusev, Doctor of Psychology, Professor, Chair of Personality Psychology, Department of Psychology, Lomonosov Moscow State University, Moscow, Russia, ORCID:, e-mail:

Alexander E. Kremlev, Engineer, Psychology Department, Lomonosov Moscow State University, Moscow, Russia, ORCID:, e-mail:
