Quantitative EEG (qEEG) involves computer-assisted imaging and statistical analysis of the EEG for detecting abnormalities, assisting the physician in making a diagnosis, and other purposes relating to patient care. Among the techniques of functional brain imaging, qEEG offers many advantages. It has an ideal temporal resolution in the millisecond time domain characteristic of neuronal information processing, employs no ionizing radiation, noninvasively images both excitatory and inhibitory cortical neuronal activity rather than secondary hemodynamic processes, and is relatively inexpensive and portable. Its formerly poor spatial resolution has increased dramatically as channel capacity has expanded from 20 a decade ago to 256 presently, with a 512-channel system expected for commercial release within the next year. Perhaps most importantly, several large qEEG normative (i.e., statistically representative) databases directly relevant to clinical psychiatry are available, and qEEG technology has advanced to the point where two systems have attained FDA approval.

Previous reviews of this area have lumped together two types of studies: those focusing on the direct clinical applicability of currently available qEEG systems and those involving more speculative areas of qEEG research. Consequently, it remains unclear whether qEEG is ready to be used as a standard laboratory test by practicing psychiatrists. A pivotal question remains unanswered concerning the actual clinical utility of qEEG and related electrophysiological methods: are the techniques sufficiently sensitive and specific to answer practical clinical questions about individual patients suffering from recognized psychiatric disorders? This article reviews briefly the types of assessments that comprise the realm of qEEG, the areas of controversy surrounding the techniques, and the published studies applying them to individual patients. The focus of this report is on whether presently available qEEG systems can tell the practicing psychiatrist anything of practical importance about the individual patient sitting across the desk from him. Its conclusions are less glowing than might be expected on the basis of previous reviews because, although qEEG can provide information of direct clinical relevance, even the most sophisticated qEEG systems now available are still very limited. We make specific recommendations regarding qEEG’s present clinical utility and areas in which additional research and development are needed.

METHOD

Selection of Literature

The focus of this review on the practical clinical utility of qEEG as a laboratory test in psychiatry requires the exclusion of a vast amount of tangentially related literature. EEG biofeedback (“neurotherapy”) is not included because it is a treatment rather than a laboratory test. qEEG drug development studies are excluded because they tend strongly to use group research designs that tell nothing about the individual clinical patient being treated. Psychiatric conditions thought to arise secondary to brain damage (e.g., stroke, traumatic brain injury) or systemic illness (e.g., systemic lupus erythematosus) are excluded, due to the difficulty of determining whether any subsequent psychiatric condition is primary or secondary. Also excluded are psychiatric disorders for which the qEEG literature is sparse, such as Axis II disorders and substance abuse/dependence.

The latter is a particularly messy issue. Substance abuse categories are poorly defined and the criteria are inconsistently applied. Since an alcohol discriminant is included with one qEEG system and has been tested and validated in the literature, it has been included in the Depression section. As for the other recreational drugs, most published studies either involve no predictive classification 1 , 2 or use psychiatric patients as subjects 3 or, most commonly, fail to control for other drug use or important lifestyle variables. When more and better studies have been completed and when appropriate discriminants are included in qEEG systems, it will be important to review them. But for now a review is premature, particularly since discriminants are not available for use by practicing psychiatrists who may wish to use qEEG. There is an additional aspect of this area that extends it beyond the usual realm of clinical psychiatry. Since most recreational drug use is illegal, and since Thatcher has convinced the courts that (at least for brain injury) qEEG meets admissibility standards, it is especially important to tread carefully through this minefield. Discriminants intended to pick out drug users from a population, where false positives involve legal as well as medical hazards, need to be held to the highest standards of validity and reliability before they are made available for general use.

It was also necessary to omit source localization. LORETA and VARETA (low resolution/variable resolution electromagnetic tomography) are extraordinarily important advances in qEEG. They can provide unique information regarding the neurophysiological underpinnings of psychiatric disorders, and they bring qEEG squarely into the realm of functional neuroimaging. Unfortunately, these techniques fail the practical utility test. To the practicing clinical psychiatrist it makes no difference whether the major depression in the patient sitting across from him is linked to disturbances in the right prefrontal area or the left prefrontal area. The treatment will be the same. As the field develops, particularly as the current DSM categories are parsed into more meaningful subcategories (by cluster analysis, etc.), it may well be the case that psychiatric disorders linked to abnormalities in specific brain areas will be found to respond to different treatments. Indeed, functional neuroimaging should be at the forefront of neuropsychiatry/behavioral neurology development. But for the time being, LORETA and VARETA are simply irrelevant to the day to day professional life of the average working psychiatrist. For more general reviews of the EEG in psychiatry the reader is directed to the work of Chabot et al., 4 Boutros, 5 Hughes, 6 Hughes and John, 7 and Small. 8

The following is not intended to provide a comprehensive review of the qEEG literature. Rather it identifies and discusses selected between-subjects studies that are designed to find either differences between an individual patient and a defined healthy group (for simple EEG abnormality detection) or similarities between an individual patient and a defined clinical group (for diagnostic or other classification). With few exceptions, studies of between-group differences rather than between-subjects differences, and work published in non-peer-reviewed sources, are excluded. Individual articles in the published literature were located via a literature search of the National Library of Medicine databases using medical subject headings that included {EEG, qEEG, Evoked Potentials, Event-related Potentials} and {Mental Disorders, Psychiatric Disorders, Depression, Schizophrenia, Anxiety Disorder, Mood Disorder, Bipolar Illness}. Additional studies were found in the bibliographies of the located articles. Major qEEG equipment manufacturers and companies offering qEEG services or products were contacted for information.

Diagnostic Terminology

Diagnostic nomenclature has evolved rapidly and can lead to confusion when articles published at different times are compared. A particular hazard involves “rediagnosing” patients in earlier studies by attempting to fit them into current diagnostic categories. For that reason, the authors’ original terminology for patient groups has been retained in the discussions below. In contrast, the authors’ original terminology for healthy individuals used for control purposes (“normal,” “control,” “nonpatient,” etc.) has been changed to “healthy” or “healthy subjects” for the sake of uniformity.

Types of Assessment

A useful nosology of qEEG and related techniques has been provided by Duffy et al. 9 and Nuwer. 10 Of the two major types of data, the debate has centered on qEEG. Evoked potentials (EPs), event-related potentials (ERPs) and their quantitative counterparts (qEPs and qERPs) have received very little attention. The issues are essentially the same, although clinical qEP/qERP development lags far behind qEEG. The following analysis sequence described by Duffy et al. 9 proceeds from least to most controversial aspects of qEEG.

Visual Analysis

Visual analysis of the ink-written EEG by a qualified electroencephalographer remains the gold standard and is the first step in any qEEG analysis. Several authors (e.g., Hughes and John 7 ) recommend routine visual EEG screening of newly presenting psychiatric patients, particularly if a complete neurological examination is not routinely performed. 5 The use of “paperless” digital EEG (dEEG) allowing modification of display parameters and electrode montages for visual analysis on a computer screen and easy storage of the EEG record in digital form is noncontroversial once certain minimal technical criteria are met. This lack of controversy is somewhat surprising since there appear to be no studies comparing ink-written to paperless EEG to determine optimal standards of screen resolution, presentation rate, or other display variables. These variables may well exert a strong influence on the detection rate of subtle EEG abnormalities. Even worse, the artifactual production of slow activity through the process of “aliasing” 11 , 12 is possible if high frequency activity in the EEG is sampled at too low a rate. Nevertheless, there is universal agreement that a traditional visual reading of the EEG constitutes an indispensable first step in qEEG analysis.
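
The aliasing hazard can be made concrete with a small numerical sketch (illustrative only; the sampling rate and frequencies are arbitrary choices, not parameters of any actual EEG system): fast activity sampled below the Nyquist rate is numerically indistinguishable from genuine slow activity.

```python
import numpy as np

fs_low = 50.0                 # deliberately inadequate digitization rate, Hz
f_signal = 45.0               # fast activity, Hz (above the Nyquist limit of 25 Hz)
f_alias = fs_low - f_signal   # 5 Hz: the spurious "theta" the reader would see

t = np.arange(0, 2, 1 / fs_low)                     # 2 s of undersampled record
undersampled = np.sin(2 * np.pi * f_signal * t)     # genuine 45 Hz activity
true_slow_wave = np.sin(2 * np.pi * f_alias * t)    # genuine 5 Hz activity

# The two digitized records are numerically indistinguishable (up to sign),
# so real 45 Hz activity masquerades as 5 Hz slowing in the stored EEG.
print(np.allclose(undersampled, -true_slow_wave, atol=1e-9))  # True
```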

Spectral Analysis

Conversion of the time domain EEG record (voltage plotted against time) to the frequency domain (amplitude or power plotted against frequency) using the fast Fourier transformation (FFT) has been widely used by researchers since the 1960s but is only now beginning to be employed by clinical EEG laboratories. The use of stand-alone frequency (spectral) analysis without reference to a normative database, as an adjunct to visual analysis, is relatively noncontroversial. However, here, too, there appear to be no studies comparing the clinical utility of the various analytic algorithms in use. Hanning versus sine versus a multitude of other techniques for handling window edge effects, minimum and maximum epoch lengths, and a host of other questions remain largely unaddressed. FFT spectra showing absolute measures will look very different from those showing relative measures, and spectra showing amplitude in microvolts will appear quite different from those showing power in microvolts squared. But no consideration seems to have been given to the possibility that these differing techniques could mislead the physician. The apparent presumption is that spectral analysis using any variant of the technique can call the physician’s attention to frequency domain characteristics of the ink-written or dEEG, which may aid in forming a clinical impression of the overall record.
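
As a concrete illustration of the kind of processing described above, the sketch below computes a Hann-windowed power spectrum for a single simulated channel and reduces it to absolute and relative band powers. The epoch length, overlap, and band boundaries are common conventions chosen for illustration, not the settings of any particular commercial system.

```python
import numpy as np
from scipy.signal import welch

fs = 256.0                                   # sampling rate, Hz (illustrative)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(60 * fs))      # stand-in for 60 s of one artifact-free channel

# Hann-windowed Welch spectrum, 2 s epochs (one way of handling window edge effects)
freqs, power = welch(eeg, fs=fs, window="hann", nperseg=int(2 * fs))
df = freqs[1] - freqs[0]

# One common set of band boundaries; definitions vary between systems
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

absolute = {}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    absolute[name] = power[mask].sum() * df          # absolute band power

total = sum(absolute.values())
relative = {name: p / total for name, p in absolute.items()}  # proportions of total power

print(absolute)
print(relative)
```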

Univariate Comparison to Normative Healthy Databases

Serious controversy begins when qEEG data recorded from a patient are compared statistically with normative databases, on the assumption that clinically significant psychiatric disturbances may be accompanied by statistically significant abnormalities in brain activity. Comparisons using single (univariate) spectral measures of the EEG (or single qEP/qERP amplitude measures) to compute z-scores reflecting the degree of statistical abnormality of the patient’s brain activity (e.g., Biologic’s Brain Atlas, Nicolet’s Brain Electrical Activity Mapping [BEAM] system) tend to be better accepted than the multivariate measures used for patient classification discussed below. But even univariate comparisons raise statistical issues. In order to achieve Gaussianity and avoid statistical bias, some qEEG systems include a log transformation of the FFT data. Also, artifact elimination from the raw data and concerns about the length of artifact-free data required for stable spectral estimates become important considerations at this level of analysis. Since the spectral composition of brain electrical activity changes systematically as a function of normal aging, the more capable qEEG systems use either age-stratified normative databases (e.g., Biologic’s Brain Atlas) or age regression (e.g., Neurometric Analysis System) to enhance sensitivity and specificity while avoiding age-related bias. Aside from aging effects, qEEG test-retest stability is remarkably high, even over several years. 13 For practical clinical applications, most head-to-head comparisons of visually analyzed to computer analyzed EEGs 14 find the computer to have the edge for detecting subtle frequency domain abnormalities. Such detection can then alert the clinician that a reevaluation of the EEG is advisable with attention to certain specific features.
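
The univariate logic described here reduces to a z-score per measure: log-transform the band value, look up the age-appropriate normative mean and standard deviation, and express the patient's value in standard-deviation units. A minimal sketch follows; the normative numbers are invented placeholders, not values drawn from any commercial database.

```python
import numpy as np

# Hypothetical age-stratified norms for log10(relative theta) at one electrode.
# These numbers are placeholders, not values from any actual normative database.
NORMS = {
    (45, 60): {"mean": -0.72, "sd": 0.10},
    (60, 75): {"mean": -0.65, "sd": 0.12},
}

def theta_z_score(relative_theta: float, age: float) -> float:
    """z-score of a patient's log-transformed relative theta against age norms."""
    log_theta = np.log10(relative_theta)      # log transform to approach Gaussianity
    for (lo, hi), ref in NORMS.items():
        if lo <= age < hi:
            return (log_theta - ref["mean"]) / ref["sd"]
    raise ValueError("no normative stratum for this age")

# |z| >= 1.96 corresponds to the conventional two-tailed p < .05 threshold
print(theta_z_score(relative_theta=0.31, age=67))
```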

The important epistemological difference between this level of qEEG analysis and conventional EEG, or for that matter techniques, such as positron emission tomography (PET) or single photon emission computed tomography (SPECT), is that conventional EEG does not involve quantitative comparisons with normative healthy or patient databases. It is the difference between having a professional opinion informed by a visual impression alone (EEG, PET, SPECT), and a professional opinion informed by a visual impression supplemented by quantitative information (qEEG). Without a reference database, the physician must rely on an impression. As psychiatry moves toward evidence-based medicine, greater reliance may be placed on quantitative analysis, but at the moment the normative databases simply do not exist for other imaging modalities. Univariate abnormality measures have the advantage of being easy to understand. When displayed as statistical probability maps (SPM; sometimes referred to as statistical parametric maps), they are a valuable aid in patient education since brain areas can be made to “light up” in proportion to the abnormality of their activity. They form vivid illustrations of the clinical point that a brain problem underlies a patient’s symptoms. This serves to destigmatize psychiatric disorders (the brain is malfunctioning just as any other organ can), bringing them into the realm of “real” medicine, and to motivate compliance with treatment. The patient may not understand theta band slowing over the left posterior parietal lobe, but he can see clearly the bright red area on his brain map.

Error checking is relatively easy since statistically abnormal univariate measures generally will correspond to visible features in the original EEG recording. However, normative databases differ in their composition and quality; a qEEG measure deemed abnormal by comparison with one may be normal when compared with another. Since most normative databases are proprietary products, they are difficult to compare systematically and generally have not had their details published in the open literature. For all such comparisons of a patient with a healthy control group, it is assumed that patients and controls differ only in the presence of abnormal brain activity underlying the patient’s disorder. Unfortunately, many patients do not match the often-stringent selection criteria for the normative healthy group (e.g., no history of neurological or psychiatric disorder, no first degree relatives with such disorders, no hypertension or diabetes, no psychoactive medications, etc.). Due to these selection criteria, controls tend to carry much less overall medical burden than do patients. It must be realized that statistically, such “hyper-healthy” controls are abnormal. Comparing a patient with a hyper-healthy control group involves two confounded components—the difference between the patient and the normative healthy population (i.e., the “street normal” population of average health, but excluding the specific disorder being investigated) and the difference between the normative healthy population and the hyper-healthy subjects. The use of hyper-healthy subjects as opposed to more carefully matched “street normal” controls inflates the type I (false positive) error rate. In many clinical applications maximizing sensitivity at the expense of specificity is defensible on the grounds that it is of overriding importance to avoid missing an abnormality (i.e., making a Type II error) and that false alarms can be weeded out by subsequent evaluations. 15 But there are costs to oversensitive screening. In addition to engendering fear and anxiety over a false positive result, subjecting patients to further diagnostic evaluation entails financial costs and a reasonable chance of additional harm in terms of discomfort, missed work, needle sticks, radiation, IV contrast, etc. Prichep and John 16 make the sensible suggestion that the threshold for clinical concern should be set with regard to the consequences of false negative and false positive results.
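
The confound described above can be illustrated with a short Monte Carlo sketch under frankly invented distributional assumptions: if the normative group is "hyper-healthy" (modeled here as a shifted, tighter distribution on one log-power measure), then ordinary "street normal" individuals without the disorder cross the nominal |z| >= 1.96 cutoff far more often than the advertised 5%.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Invented distributions for one log-power measure (illustrative assumptions only):
street_normal = rng.normal(loc=0.00, scale=1.00, size=n)   # average-health population
hyper_mean, hyper_sd = -0.30, 0.80                         # stricter, healthier norm group

# z-scores of healthy "street normal" people computed against the hyper-healthy norms
z = (street_normal - hyper_mean) / hyper_sd
false_positive_rate = np.mean(np.abs(z) >= 1.96)

print(f"nominal alpha: 0.05, observed false positive rate: {false_positive_rate:.3f}")
# With these assumptions the rate comes out well above 0.05, illustrating the
# inflated type I error the text attributes to hyper-healthy control groups.
```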

Multivariate Comparison to Normative Healthy and Clinical Databases

Although its use in clinical psychiatry is controversial, combining several individual (univariate) qEEG measures into a single multivariate measure may allow individual patients to be classified into categories of clinical interest. These often correspond to specific diagnostic categories (for which the classifiers are relatively well developed), but sometimes relate to more tenuously developed categories of medication responsiveness, clinical course, or other dimensions of psychiatric interest. Patient classifications are based on multivariate analysis of linear combinations of qEEG measures (discriminant functions, or “discriminants”), an approach often termed “neurometric” analysis. 16 (For legal purposes the generic term “neurometric” and its variants should be distinguished from the “Neurometric” and Neurometric Analysis System [NAS] trademarks pertaining to a widely used commercial system. For the didactic purposes of this article the distinction is trivial.) This approach extracts a large number of qEEG features and compares them with a reference database. It is assumed that the more statistically unusual the observation, the more likely it is that the underlying brain system is clinically abnormal. Although statistically significant findings are not pathognomonic, they are intended to draw the physician’s attention to features of the underlying EEG that may have been overlooked. At its most basic level this multivariate approach offers a broad post-hoc filter for determining whether the patient’s EEG is statistically normal or abnormal, much like the univariate approach described above.
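
A minimal sketch of the general statistical approach named here, a discriminant function formed as a linear combination of univariate qEEG features, using synthetic data and a standard linear discriminant; this illustrates the generic technique, not the proprietary discriminants of any FDA-approved system.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)

# Synthetic feature matrix: rows = subjects, columns = univariate z-scores
# (e.g., relative power, coherence, asymmetry at several sites) -- placeholders.
healthy = rng.normal(0.0, 1.0, size=(200, 12))
patients = rng.normal(0.6, 1.0, size=(200, 12))   # invented group shift

X = np.vstack([healthy, patients])
y = np.array([0] * 200 + [1] * 200)               # 0 = healthy, 1 = clinical group

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The fitted discriminant is a single multivariate measure: a weighted linear
# combination of the univariate features, used here to classify a new profile.
new_profile = rng.normal(0.6, 1.0, size=(1, 12))
print(lda.predict(new_profile), lda.predict_proba(new_profile))
```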

Even greater controversy occurs when multivariate methods are extended beyond simple EEG abnormality detection to classify individual patients on a “best fit” basis into specific clinically defined categories. A composite quantitative profile of the individual’s EEG can be statistically defined by the particular pattern of z-score values. Patients within a diagnostic category often have distinctive multivariate profiles that are different from those of patients in other diagnostic categories, suggesting that the descriptive symptomatic taxonomy of DSM-IV and ICD-10 may reflect a biological taxonomy of brain abnormalities, which in turn produces a statistical taxonomy of qEEG results. In principle, once the multivariate statistical profiles of different diagnostic categories have been established and validated, they can be used to help diagnose an individual patient on the basis of the similarity of the patient’s multivariate qEEG profile to the previously defined profiles of the diagnostic categories. Clinical qEEG proponents are quick to point out that matching a patient’s statistical profile to a normative profile most characteristic of a specific disorder is different from using the technique to automate the diagnostic process itself. FDA approval of the Neurometric Analysis System and the NeuroGuide Analysis System (presently the only two approved systems) is for the post-hoc analysis of the EEG, and its developers repeatedly stress the need for a conservative and cautious approach to the interpretation of results.

The unfamiliar nature of multivariate statistical procedures has led some to consider them “mysterious” and consequently to be distrustful of neurometrics and related approaches. However, the mathematics are standard techniques 17 and are clearly described in the open literature. 16 , 18–23 Multivariate procedures certainly are easier to understand than the mathematics underlying three-dimensional MRI image construction or, for that matter, the quantum mechanics underlying a simple transistor, though few would consider transistor radios to be mysterious and worthy of distrust. But the multivariate approach has its limitations. qEEG findings are not pathognomonic and are appropriately used only in conjunction with other clinical information rather than as stand-alone diagnostic classifiers. Additionally, due to its foundation in Bayesian statistics, for this type of multivariate comparison to be valid it is necessary to ensure that the patient belongs exclusively to one of a limited number of categories, usually healthy versus a specific disorder, but sometimes one specific disorder versus another specific disorder. Due to their non-zero false positive rates and the limited number of defined clinical categories, it is inappropriate to use these procedures as a general diagnostic screening test. Difficulties have arisen when naïve users have employed the procedures as a diagnostic filter, running a patient’s data against all possible diagnostic classifiers. Additionally, multivariate measures of pathology are more difficult for both doctors and patients to understand than their univariate components. They do not map well and therefore are of less use in patient education. They also are more difficult to check for errors since each univariate component of an abnormal multivariate measure need not in itself be abnormal.
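
The Bayesian restriction described above, and the reason running a patient's data against every available classifier is inappropriate, comes down to base rates. A small arithmetic sketch with invented sensitivity, specificity, and prevalence figures:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(disorder | positive test)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Invented but plausible discriminant performance figures:
sens, spec = 0.85, 0.90

# Used as intended -- the clinician has already narrowed the question to a
# two-way choice in which the disorder is common:
print(positive_predictive_value(sens, spec, prevalence=0.50))   # ~0.89

# Misused as a general screen over an unselected population:
print(positive_predictive_value(sens, spec, prevalence=0.02))   # ~0.15
```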

Advanced Techniques Holding Clinical Promise

qEEG has been reported to do more than simply assist the physician in detecting EEG abnormalities and forming a diagnosis. In a number of instances, qEEG cluster analysis, which groups individuals on the basis of qEEG features without a priori outcome information, has defined subtypes within a single diagnostic class, suggesting that markedly different pathophysiological processes may produce essentially the same clinical symptoms. 4 Sometimes it is found that individuals within different qEEG clusters respond differently to treatment. Two subtypes of attention deficit/learning disabled children have been found, only one of which responds well to methylphenidate. 24 Similarly, Prichep et al. 25 and Hansen et al. 26 have identified two subtypes of obsessive-compulsive disorder (OCD) patients showing differing responses to selective serotonin reuptake inhibitor (SSRI) medications. Although this aspect of qEEG has not been developed sufficiently for clinical application, a physiological method for predicting a patient’s response to a medication could have profound value for clinical care, helping to select the medication most likely to benefit the individual patient and thereby shorten unsuccessful medication trials. This is a developing area of qEEG research. Another technique holding clinical promise is LORETA, which back-projects surface recorded qEEG onto a realistic three-dimensional brain model, optionally the patient’s own MRI. A LORETA normative database has been described and validated recently 27 and its potential clinical utility has been demonstrated. 28 It remains to be seen whether LORETA can be developed into a useful clinical laboratory test in psychiatry.
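
The subtype-finding work described here is, at bottom, unsupervised clustering of qEEG features. The sketch below uses k-means on synthetic profiles purely for illustration; the cited studies may have used different clustering algorithms, and the feature values here are invented.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Synthetic qEEG feature profiles for one diagnostic group: two latent subtypes
# are built in for illustration (the labels are, of course, unknown in practice).
subtype_a = rng.normal([1.5, -0.5, 0.0], 0.5, size=(60, 3))   # e.g., theta excess
subtype_b = rng.normal([-0.5, 1.2, 0.3], 0.5, size=(60, 3))   # e.g., alpha excess
X = np.vstack([subtype_a, subtype_b])

# Cluster on the qEEG features alone, with no outcome information.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_

# In the cited work, treatment response was then compared between the
# qEEG-defined clusters; the clustering itself never sees that information.
print(np.bincount(labels))
```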

Review of the Present Controversy

The great fear seems to be that unsophisticated practitioners will attempt to use the classification ability of multivariate analysis to substitute for, rather than aid in, clinical diagnosis and treatment selection. 10 This fear is not without substance. During the 1980s one commercial vendor aggressively marketed a qEEG instrument incorporating an early version of the Neurometric system as virtually a stand-alone automatic diagnostic test. Marketing targeted psychiatrists, family practitioners, and other medical specialists unsophisticated in the use of clinical EEG. The system’s limitations—particularly those related to recording artifacts and the Bayesian structure of allowable comparisons—were ignored, and there was a very real possibility of harming patients by misinterpretation of the results. Experienced electroencephalographers of the neurological community were quick to voice their concerns 29 and have had a continuing chilling effect on qEEG.

In a 1994 paper on behalf of the American Medical EEG Association, Duffy et al. 9 assessed qEEG’s clinical efficacy and set minimum standards for its use. Central to these standards is the requirement that only specifically trained individuals should use this technology. Nuwer, 10 writing for the American Academy of Neurology and the American Clinical Neurophysiology Society, dismissed Duffy’s paper out of hand and damned the technique with faint praise. Replies by Hoffman et al. 30 representing the Association for Applied Psychophysiology and Biofeedback and the Society for the Study of Neuronal Regulation, and by Thatcher et al. 31 representing the EEG and Clinical Neuroscience Society, pointed to bias and sloppy scholarship in the Nuwer report. In 1999, a Texas court held that Nuwer’s criticisms of qEEG failed to meet acceptable scientific standards. 32 Chabot et al. 4 and Hughes and John 7 have provided more complete reviews of the qEEG literature while other authors 33 , 34 have addressed conceptual issues.

Group v. Individual Differences

Two basic approaches to the study of psychiatric illnesses may be discerned. One approach compares groups of patients to groups of healthy subjects employing research designs intended to find between-group differences attributable to the illness. An enormous research literature documents significant statistical differences between psychiatric patient groups and healthy control groups on a wide variety of qEEG measures. Such between-group designs yield a great deal of information about the workings of the normal brain and the functional alterations characteristic of psychiatric disorders. Examples include studies of the qEEG in schizophrenia, 35 dementia, 36 and depression. 37 The practical clinical problem is that even very significant between-group statistical differences on a measure do not necessarily mean that the measure is capable of classifying individuals into their respective groups with any useful degree of accuracy. 38 , 39 Unfortunately, much of the literature cited in support of clinical qEEG 4 , 7 is made up of papers such as these: good science with unclear clinical application.

A second approach focuses on the individual patient, using qEEG measures to detect abnormalities broadly, and more narrowly to help classify the patient into a specific diagnostic, prognostic, or treatment group. Several univariate measures, such as absolute and relative power, spectral ratios, phase, coherence, and symmetry may be linearly combined to form multivariate measures. 40 In doing so the measures are found to be complementary; they are additive for the detection of abnormality but yield different topographic distributions. 41 Multivariate techniques have the decided advantage of assessing the relative contributions of multiple univariate qEEG measures, thereby reducing the likelihood that important information will be overlooked. 19 This approach not only yields information about the disorder itself, but also in principle can be useful for guiding the clinical care of individual patients. For example, Prichep et al. 42 used multivariate qEEG to classify a mixed group of 54 unipolar and 23 bipolar depression patients into their correct diagnostic groups, achieving classification accuracies of 91% and 83%, respectively. To the extent that mania may ensue in a bipolar patient being treated as a unipolar patient, this might be important information for a clinician to have.
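
Two of the univariate building blocks listed above, interhemispheric coherence and a power asymmetry index, are sketched below on synthetic two-channel data. The formulas follow common conventions (magnitude-squared coherence; (R - L)/(R + L) asymmetry) and are not any one vendor's definitions.

```python
import numpy as np
from scipy.signal import coherence, welch

fs = 256.0
rng = np.random.default_rng(4)
shared = rng.standard_normal(int(30 * fs))                 # shared driving activity
left = shared + 0.8 * rng.standard_normal(shared.size)     # e.g., a left-hemisphere channel
right = shared + 0.8 * rng.standard_normal(shared.size)    # e.g., its right homologue

# Magnitude-squared coherence between the homologous channels
f, coh = coherence(left, right, fs=fs, nperseg=int(2 * fs))
alpha = (f >= 8) & (f < 13)
alpha_coherence = coh[alpha].mean()

# Power asymmetry index: (R - L) / (R + L) in the alpha band (one common convention)
_, p_left = welch(left, fs=fs, nperseg=int(2 * fs))
_, p_right = welch(right, fs=fs, nperseg=int(2 * fs))
asymmetry = (p_right[alpha].sum() - p_left[alpha].sum()) / (p_right[alpha].sum() + p_left[alpha].sum())

print(alpha_coherence, asymmetry)
```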

Technology Demonstrations v. Clinical Tools

The use of quantitative statistical procedures for EEG abnormality detection, particularly when employed on a post hoc basis to call attention to features that might have been overlooked by the electroencephalographer during an initial visual reading, is supported by a convincing literature (discussed below). The application of such procedures to assist in clinical diagnosis by classifying a patient into the “best fit” multivariate category is less well supported by the peer reviewed literature, but a reasonable case can be made for its cautious use. Unfortunately, qEEG proponents, such as Hughes and John, 7 go beyond these modest boundaries and cite studies using techniques, such as cluster analysis, that have no direct clinical application. Though cluster analysis is an important research tool and may lead to the development of clinically useful discriminants, the uncritical mixing of such studies with the more conservative citations undermines the credibility of their argument.

Another aspect of this general problem is that many qEEG studies in the literature bearing upon psychiatric problems use idiosyncratic methods of data recording and analysis involving ad hoc healthy and clinical normative groups. Such idiosyncratic research methods are of little help to the clinical psychiatrist who needs a standardized laboratory test. This problem is discussed in more depth later.

Commercial Interests

Nuwer 10 questions the veracity of reports published by authors having commercial interests in qEEG systems. However, there appears to be a wide range of professionalism among authors with commercial interests, paralleling the professionalism among academic authors. Both groups profit from their endeavors, whether through promotion/tenure/salary/consulting fees or through patent royalties/corporate profits. The academic who hires himself out as an expert witness testifying against qEEG has little to distinguish him from the commercially involved researcher promoting the technique. And although it is unfortunate that qEEG requires very large databases that are available only as commercial products, it would be difficult to name a medical test that does not involve a commercial vendor. Scientific quality is where one finds it and the gold standard must remain articles, particularly independent replications, published in peer-reviewed journals.

A more troubling aspect of the commercial interest problem is the advertising by individual physicians. For example, in the advertising material for a recent “antiaging” seminar for clinicians in Las Vegas, a well-known physician claimed that by using his “Brain Code, based on four electrical signals” derived from brain electrical activity mapping performed on a laptop computer in any doctor’s office, the attendees could treat “any brain disease” including dopaminergic brain dysfunctions (Parkinson’s, depression, dysthymic [sic], narcolepsy, chronic fatigue syndrome), acetyl-cholinergic [sic] brain dysfunction (Alzheimer’s, memory loss, dyslexia, attention deficit disorder [ADD], cognitive disorder, mild learning disability), gamma aminobutyric acid (GABA)-dominant brain disorder (anxiety, manic depression, headaches, migraine headaches, chronic pain), and serotonin brain disorder (social phobias, insomnia, dysthymia, mixed anxiety states, somatization, irritable bowel syndrome, fibromyalgia). But as disturbing as such claims may be in this era of evidence-based medicine, the behavior of individual physicians cannot reasonably be used as a criterion for the acceptance or rejection of a laboratory procedure. The facts must speak for themselves.

Resurrecting Moot Points

Many of the once-valid criticisms of qEEG have been addressed but continue to be raised by those opposing acceptance of the technique. Examples of such dead horse flogging 43 , 44 include standard EEG artifacts, recording errors, patient characteristics, and misapplication of techniques. It is certainly true that any of these can bias qEEG (especially multivariate) results in ways difficult to detect. Closely related are issues of technical competence and lab certification. However, these are essentially training and regulatory issues and have been dealt with through minimum practice standards, such as those proposed by Duffy et al. 9

qEEG has been criticized for employing too many statistical tests, 43–45 thereby generating spurious statistical significance. This continues to be true of some systems using univariate comparisons and SPM to call the electroencephalographer’s attention to possibly important features as discussed above. Duffy et al. 9 , 46 recommend replicating each clinical test and accepting as true only those abnormalities that replicate. The more capable qEEG systems employing multivariate comparisons use Principal Components Analysis (PCA) to reduce the large number of variables to a much smaller set of uncorrelated factors representing the intrinsic dimensionality of the data set. Either of these procedures answers the “too many statistical tests” criticism.
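
A minimal sketch of the dimensionality-reduction step just described: principal components analysis compresses a large set of correlated qEEG variables into a few uncorrelated factors before any statistical testing. The data are synthetic and the 95% variance criterion is an illustrative choice, not a published standard.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)

# Synthetic data: 300 subjects x 120 correlated qEEG variables
latent = rng.standard_normal((300, 6))              # a few true underlying factors
mixing = rng.standard_normal((6, 120))
X = latent @ mixing + 0.3 * rng.standard_normal((300, 120))

# Keep enough uncorrelated components to explain 95% of the variance
# (the threshold is an illustrative choice).
pca = PCA(n_components=0.95).fit(X)
factors = pca.transform(X)

print(X.shape[1], "variables reduced to", factors.shape[1], "uncorrelated factors")
```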

Another criticism 10 is that many qEEG findings are not replicated. The validity of this criticism may be judged by the reader (Tables 1 and 2). Caution should be exercised, however, in determining what constitutes a replication. Studies using discriminant analysis generally form the discriminant function from the first subject sample and test its accuracy on a second sample. In such cases the third sample would constitute the first true replication. However, the classification ability of the discriminant is assessed as early as the first sample, so it has become common practice in the qEEG literature to refer to the second sample as the first replication, and this terminology has been incorporated into the present paper.
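
The distinction drawn here between a jackknife ("leave one out") check on the original sample and a genuinely independent replication can be sketched as follows, with synthetic data and a generic linear discriminant standing in for whatever classifier a given study used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(6)

def sample(n_per_group, shift):
    """Synthetic two-group qEEG feature samples (10 invented features)."""
    X = np.vstack([rng.normal(0, 1, (n_per_group, 10)),
                   rng.normal(shift, 1, (n_per_group, 10))])
    y = np.array([0] * n_per_group + [1] * n_per_group)
    return X, y

X1, y1 = sample(40, 0.7)     # first sample: the discriminant is formed here
X2, y2 = sample(40, 0.7)     # independent second sample

lda = LinearDiscriminantAnalysis()

# Jackknife ("leave one out") accuracy on the original sample
loo_acc = cross_val_score(lda, X1, y1, cv=LeaveOneOut()).mean()

# Accuracy on the independent sample -- the stronger test of replication
independent_acc = lda.fit(X1, y1).score(X2, y2)

print(f"jackknife: {loo_acc:.2f}, independent sample: {independent_acc:.2f}")
```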

TABLE 1. Diagnostic Classification Analysis
TABLE 2. Drug Response Prediction Analyses

Yet another aspect of the problem is qEEG’s differential sensitivity. It may be sensitive to statistically significant but clinically trivial normal variants, such as an absence of a posterior resting rhythm, while being insensitive to clinically important patterns, such as fast transient epileptiform spike activity and slower triphasic waves. 41 , 47 Closely related is the criticism that computers cannot diagnose disorders. 43 , 44 To overcome these limitations the electroencephalographer’s trained eye is necessary. But qEEG does not remove the expert physician from the loop. For at least the past decade there has been universal agreement that the indispensable initial step in qEEG analysis is the “gold standard” of a clinical (visual) reading of the raw EEG by a trained electroencephalographer. qEEG is a post-hoc supplementary and complementary technique of data analysis that is specifically not intended to function as a stand-alone diagnostic instrument.

Standards of Evidence

The argument has been made that levels of specificity found in qEEG studies are often higher than those found in routinely used clinical tests, such as mammograms, cervical screenings, or CT or SPECT brain scans. 7 , 48 , 49 The unfortunate marketing history of qEEG in the 1980s has led to a situation today in which extraordinary evidence is required for its acceptance and endorsement by professional societies. Even FDA findings of safety and efficacy do not appear to be sufficient. The opponent camp championed by Nuwer maintains that additional evidence is needed, while the proponent camp championed by John counters that existing evidence is overlooked or misinterpreted. It is possible that clinical turf issues may play a role in this dispute. However, it has yet to be shown that any qEEG system available to the working clinical psychiatrist meets the methodological standards for diagnostic tests (spectrum composition, analysis of pertinent subgroups, avoidance of workup bias, avoidance of review bias, precision of results for test accuracy, presentation of indeterminate test results, test reproducibility) enumerated by Reid et al. 50

Information Availability

A major problem faced by the psychiatrist wishing to assess the practical clinical usefulness of commercial qEEG systems is that information about most systems’ capabilities is extremely difficult to obtain. The FDA has in the past placed severe restrictions on the information available to potential users, even forbidding a listing of the specific analyses available, and the ludicrous situation has arisen wherein, even after purchasing one major system, the buyer finds no such listing in the user manual. The situation may be changing since the most recently approved system is much better described. Lawsuits between commercial vendors similarly constrain the information they make available.

Applications to Specific Disorders

When reading the following sections, an important point must be kept in mind. Each section contains comparisons between standard, visually analyzed EEG and qEEG. Preceding parts of this article have stressed repeatedly that the indispensable first step in qEEG analysis is a standard visual reading of the raw EEG by a qualified electroencephalographer, and that qEEG is used responsibly only as a supplementary and complementary technique for post hoc analysis, serving to draw the physician’s attention to aspects of the original EEG that may have been overlooked. It is always the physician who performs the diagnosis or makes other relevant clinical decisions, not the machine. However, in the sections to follow, these fundamental dicta are consistently violated in order to assess the ability of the qEEG system to classify individuals independent of the clinical expertise of the user. In this manner the comparisons are artificially weighted against qEEG since the critical first step, evaluation of the raw EEG by an electroencephalographer, and the critical last step, the integration of all clinical information into the physician’s decision-making process, are omitted—omissions which would violate the most basic requirements of qEEG in actual clinical practice.

Learning Disorders, Attention Deficit Disorders

Nuwer, 10 in his AAN/ACNS position paper, gave a negative recommendation for qEEG’s clinical use in learning disabilities or attention disorders, basing his recommendation on “inconclusive or conflicting evidence from well designed clinical studies, such as case control, cohort studies, etc.” (p. 286). Hughes and John, 7 applying standards similar to those of Nuwer to a more extensive review of the literature, gave positive recommendations for qEEG in learning and attention disorders, citing many relevant (as well as some irrelevant) studies. More measured and focused reviews and evaluations of the technique and its underlying clinical literature have been provided subsequently by Chabot et al. 4 , 51

Although the taxonomy and diagnostic criteria for attention disorders and especially for the various types of learning disabilities are often problematic, it is clear that clinically significant EEG and statistically significant qEEG abnormalities increase in rough proportion to the severity of the problem. 52–60

Learning Disorders

One of the seminal works in the qEEG literature was John et al.’s 1977 Science paper 18 describing the Neurometric approach and applying it to, among other conditions, learning disorders. Applied to a mixed group of 118 healthy children and 57 children with learning disorders, the approach achieved an initial discriminant accuracy of 93% (versus 76% for standard psychometric evaluation), and a jackknife replication (or “leave one out” replication in which each individual of the original sample is classified according to a discriminant formed using all other members of the sample) produced classification accuracies of 77% and 71%, respectively. Another early attempt to use neurometrics to classify children with learning disorders (borderline normal intelligence with generalized learning disabilities) and children with specific learning disabilities (normal intelligence) 61 also showed promising results. The strong points of the report were the large sample sizes and the very low false positive rates for both healthy comparison groups, which made the 46% to 58% true positive rates for the clinical groups useful. A subsequent multivariate qEEG classification study found that children with learning disorders could be discriminated from healthy children with 72% sensitivity and 80% specificity, 15 using a multivariate discriminant function derived from (and optimized for) only those two types of children. An independent replication produced lower sensitivity at 65% but higher specificity at 87%. Broadening the discriminant function by including children suffering from a variety of neurological disorders decreased the sensitivity to learning disorders, but also detected children with specific learning disabilities whose learning problems stemmed from a much wider range of etiologies. Most of the classification accuracies shown in Table 1 arguably are high enough to have practical clinical utility, and replicate well with independent samples, as shown. Similarly, Lubar et al. 62 studied the qEEGs of children with learning disorders and healthy subjects using exploratory discriminant analyses to assess the ability of various frequency bands, scalp sites, and analysis procedures to classify the children into their respective groups. Although methodological problems are apparent, the authors were able to classify children with learning disorders with sensitivities as high as 79% and specificities as high as 81% for their healthy counterparts using discriminant analyses based on 20 variables. (Using 672 variables an overall classification accuracy of 98% was attained.)

Serious doubt was cast upon the utility of neurometric methods for learning disorder detection by Yingling et al., 63 and since the dispute is frequently cited by those opposing the clinical use of qEEG, it is reviewed briefly here. These authors assembled a group of very carefully screened children suffering from “pure dyslexia” without accompanying neurological abnormalities, and an equally well-screened group of healthy control children. They then used the neurometric methods described by Ahn et al. 61 to assess both groups. As expected, Yingling’s healthy group was classified as normal when compared with the neurometric normative database. However, Yingling’s pure dyslexia group also was found to be within normal limits (no attempt was made to classify individual subjects). Noting that Ahn et al.’s specific learning disabled group contained many subjects with coexisting neurological and/or sensory deficits, Yingling attributed Ahn’s findings to the presence of such deficits rather than to learning disorders per se. This view was supported by Fein et al. 64 who also found no differences between groups of dyslexic and healthy control children (again, no attempt was made to classify individual subjects). Diaz de Leon et al., 65 however, reported significant qEEG differences between learning disordered and healthy groups of children, all lacking neurological symptoms, paralleling Ahn’s findings and calling into question Yingling’s and Fein’s view that pure dyslexia is unaccompanied by qEEG abnormalities. In the Diaz de Leon et al. study, learning disorders were shown to exert effects independent of neurological risk factors, such as prolonged labor and perinatal asphyxia. Similarly, Flynn and Deering 66 published group qEEG evidence of distinct learning disorder subtypes and Harmony et al. 58 found that qEEG abnormalities increase in parallel with reading and writing difficulties.

To account for the Yingling/Fein results, John et al. 19 and Duffy et al. 9 noted that Ahn’s learning disorder and specific learning disabled groups contained a wider range of etiologies than did Yingling’s. Ahn’s learning disorder and specific learning disabled children did not meet the rigorous screening criteria applied by Yingling for pure dyslexia, so comparing the two studies directly is inappropriate. Operationally defined, pure dyslexia is neither a learning disorder nor a specific learning disability, and great care must be taken to ensure that group membership criteria and allowable comparisons are uniform across applications. That argument, however, is a double-edged sword. On the one hand, the work of Yingling serves as a caution against the cavalier application of neurometric and other techniques to inappropriate groups and places the responsibility on the user to ensure that the patient being evaluated meets the same selection criteria as the subjects in the clinical patient group used to form the discriminant. But on the other hand, the point of the Yingling study was not that the Neurometric learning disorder and specific learning disabled discriminants failed to classify purely dyslexic children accurately (no individual classification was attempted), but rather that the purely dyslexic children as a group fell within normal limits. An interesting parallel is found in a study by Matsuura et al. 67 These authors assembled a normative healthy qEEG database of children from Japan, China, and Korea. The healthy children from these countries fell within normal limits, and children diagnosed with attention deficit hyperactivity disorder (ADHD) fell outside these limits, as expected. However, children with “deviant behavior” (apparently a rough equivalent of conduct disorder), operationally defined by the Rutter Child Questionnaires, also fell within normal limits. The take-home message appears to be that some clinically defined disorders do not manifest strongly in the qEEG. That is a good reality check and a valuable caution when interpreting negative findings.

On the other hand, in order to be clinically useful the neurometric discriminants must be applicable to the range of patients seen in clinical practice, placing a responsibility on the commercial vendor to ensure that a reasonably wide clinical spectrum of a disorder is represented in the patient sample used to form the discriminant. If discrete subtypes exist, it will be important for future studies to identify and characterize them in terms of more focused discriminants. In any case, it is important to establish and make clear to the user the parameters limiting valid application of existing discriminants to individual patients.

Heuristically, it may make little difference whether the qEEG abnormalities detected by the studies of Ahn, John, and Lubar derive from learning disorders themselves or from associated neurological disorders. From the studies reviewed it appears that most children diagnosed with a learning disorder will be found to have abnormal qEEGs, and the vast majority of healthy children will have normal qEEGs. If the clinical question can be structured as a discrimination between healthy and learning disordered, then qEEG may aid the clinician in diagnosis. If other diagnoses, such as learning disorder versus ADHD versus healthy are considered, the question becomes more complex, as discussed below under Attentional Disorders.

Attentional Disorders

Regarding attention deficits, a study published by Mann et al. 68 recorded qEEG from boys manifesting ADHD of the inattentive type (without hyperactivity, conduct disorder, dyslexia, or specific learning disability). Discriminant function analysis allowed correct classification of 80% of the purely attention disordered ADHD subjects and 74% of healthy subjects. Expanding on this theme, Monastra et al. 69 conducted a broadly based study drawing 397 ADHD patients and 85 healthy subjects from eight study centers nationwide. The study sample was unusual in that it included ADHD patients of both the inattentive and hyperactive-combined type, spanning an age range of 6 to 30 years. Using a very simple neurometric measure (theta/beta ratio measured at the vertex) to classify subjects into healthy or attention disordered groups, the authors reported 86% sensitivity and 98% specificity. The overall positive predictive value of the measure was 99%, meaning that only 1% of the individuals testing positive did not have an attentional disorder.
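
A quick check of the predictive-value arithmetic using the figures reported above (86% sensitivity, 98% specificity, 397 ADHD and 85 healthy subjects). Note that positive predictive value reflects the proportion of affected individuals in the sample as well as the sensitivity and specificity of the measure, so the figure would be lower in a sample containing proportionally fewer affected individuals.

```python
# Figures reported by Monastra et al. for their study sample (as cited above)
sensitivity, specificity = 0.86, 0.98
n_adhd, n_healthy = 397, 85

true_positives = sensitivity * n_adhd             # ~341 ADHD subjects correctly flagged
false_positives = (1 - specificity) * n_healthy   # ~1.7 healthy subjects incorrectly flagged
ppv = true_positives / (true_positives + false_positives)

print(f"positive predictive value in the study sample: {ppv:.3f}")   # ~0.995, i.e., ~99%
```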

Casting an even wider clinical net, Chabot and Serfontein 70 recorded qEEG from 407 attention disordered children of various subtypes. A two-way discriminant analysis correctly classified healthy children and those with attentional problems (and normal IQs) into their respective groups with 93% sensitivity and 95% specificity (94% and 88% respectively upon independent replication). When this discriminant was applied to low IQ children with attentional problems it also classified them with 95% sensitivity. Similar two-way discriminant analyses differentiating normal IQ from low IQ children with attentional problems and differentiating attention disorder children with or without hyperactivity were less accurate, though clearly better than chance. These results are instructive because they show that a discriminant formed using one clinical group may be applicable to similar clinical groups, but that as the groups become increasingly dissimilar the discriminant accuracy also decreases. These results additionally illustrate a general finding that original patient samples tend to be more accurately classified than subsequent samples, pointing to the need for replications, particularly independent replications, of any discriminants offered for clinical use.

Later that same year Chabot et al. 71 published a further analysis of data originally published by Chabot and Serfontein 70 and by John et al. 15 Their primary aim was to assess the sensitivity and specificity of qEEG in the classification of children with attention deficit or specific developmental learning disorders. The secondary aim was to assess the ability of qEEG to predict treatment response of ADHD children at a 6-month follow-up. Results showed that a three-way discriminant classified healthy children, ADD/ADHD children, and children with specific developmental learning disorders with moderate accuracy. This discriminant also classified ADD/ADHD children with low IQs into the appropriate category but was less successful in classifying children with specific developmental learning disorders. However, when a two-way discriminant was used to distinguish between ADD/ADHD and children with learning disorders it produced excellent accuracy, and additionally was very accurate in classifying specific learning disabilities and ADD/ADHD low IQ children. Furthermore, 6-month responsiveness to dextroamphetamine or methylphenidate could be predicted with accuracies high enough to be useful in guiding initial medication decisions. Independent replication confirmed the original findings, as shown in the Table.

The researchers then published a third paper assessing qEEG prediction of treatment response in the same attention disordered children after a slightly longer treatment period of 6 to 15 months. 53 They found that pre-treatment qEEG successfully predicted a favorable medication response (collapsed across specific diagnoses and medications) with a sensitivity of 83% and a specificity of 88%. An independent replication using a split half design yielded identical results. Similarly high accuracies for predicting methylphenidate response had been reported earlier by Prichep, 72 though the replication used a jackknife procedure rather than an independent sample. Chabot et al. 53 view qEEG as a useful adjunct to behavioral testing and clinical evaluation in the differential diagnosis of ADD/ADHD and specific learning disabilities. They note that while (univariate) individual qEEG abnormalities lack sensitivity and specificity, discriminant functions based on (multivariate) combinations of qEEG features can distinguish individual patients from each other and from healthy children with a high degree of accuracy. They further argue that treatment selection may be aided by qEEG analysis and call for the development and validation of additional discriminant functions to incorporate a wider range of medications. An expanded discussion of these themes can be found in Chabot et al., 4 who recommend qEEG’s routine use as an aid to diagnosis and treatment evaluation of children suffering from learning and attention disorders.

Recommendations

On the basis of several large, independently replicated studies, qEEG has been shown to be capable of providing accurate probability estimates of the likelihood that a given patient is suffering from any of a variety of attentional or learning disabilities, though extraordinary care must be taken to ensure that the individual patient being assessed matches the selection criteria of the patient group used to form the discriminant. The work of Yingling et al. 63 and Fein et al. 64 serves as an instructive lesson in that regard. Applied conservatively, its demonstrated classification accuracies suggest that qEEG may be a useful adjunct to behavioral testing and clinical evaluation in the diagnosis of children suffering from learning or attentional problems. Furthermore, there is preliminary evidence that children’s medication responses can be predicted with accuracies sufficient to influence initial treatment decisions, although the caveat must be added that this aspect of qEEG is in its infancy. Future studies are needed to gather normative clinical data from carefully diagnosed patient groups suffering from a wider spectrum of specific subtypes of attention disorders and learning disabilities, as well as other conditions, such as mood or personality disorders, that can complicate the clinical picture. For example, the excellent studies of ADHD subtypes by Clarke et al. 54–57 appear to be ripe for extension from their present between-groups design to a predictive between-subjects design. Using normative clinical data, discriminant functions can be developed to better assist clinicians in the differential diagnosis of these disorders. Also badly needed are studies extending such normative clinical data into the adult years in order to assess the 60% to 70% of individuals with ADHD or a learning disorder who continue to present with some symptoms of these disorders in adulthood. Finally, the prediction of medication response is a promising and potentially very valuable use of qEEG that should be explored further.

Dementia

Nuwer’s review 10 of qEEG frequency analysis recommended it as possibly useful as an adjunct to routine EEG in dementia. Hughes and John 7 gave it a much stronger positive recommendation, citing a correspondingly much wider range of literature.

In the visually analyzed clinical EEG literature it is found generally that dementing illnesses are accompanied by increased rates of EEG abnormalities, though the details of such abnormalities differ between dementia etiologies. Alzheimer’s disease, the most common etiology, primarily affects broad posterior cortical regions of the temporal and parietal lobes and generally produces diffuse slowing of the EEG characterized by increased delta and theta, with slowing of the alpha rhythm and reduced beta (see Coburn et al. 73 for expanded discussion). These changes tend to be progressive and roughly parallel the clinical deterioration of the patient. 74 But with the exception of beta reduction, these changes are in the same direction as are those accompanying normal aging, making the identification of early Alzheimer’s disease from the visually analyzed EEG problematic. In multi-infarct vascular dementia, a common dementia etiology, more easily identified focal EEG abnormalities are sometimes present, following the distribution of discrete cortical lesions. But in cases of deep lesions or widely distributed white matter changes the EEG abnormalities may take the form of diffuse slowing similar to that seen in normal aging and Alzheimer’s disease. Complicating the picture still further, Alzheimer’s disease and multi-infarct vascular dementia not only can coexist, but do so more often than would be expected by chance. 75 , 76 Clinical diagnosis of these comorbid Alzheimer’s disease/multi-infarct vascular dementia patients is especially problematic. 77 Fronto-temporal dementias, such as Pick’s disease, involve frontal and anterior temporal cortical areas, but again the accompanying EEG abnormalities may be diffuse and difficult to characterize in the visually analyzed record. Indeed, a normal visually analyzed EEG is among the supportive criteria for fronto-temporal dementia diagnosis. 78 Although in principle fronto-temporal dementias, such as Pick’s disease, may be distinguished from Alzheimer’s disease by CT or MRI showing the distribution of structural changes, EEG showing the distribution of functional changes, and clinical presentation showing behavioral and personality changes in fronto-temporal dementia as opposed to memory and visuospatial changes in Alzheimer’s disease, in practice the vast majority of Pick’s patients are erroneously diagnosed as suffering from Alzheimer’s disease. 79

By virtue of its quantitative statistical comparisons, qEEG offers a solution to the problem of determining age-normal limits. John’s Neurometric Analysis System controls for normal aging by means of statistical age regression, while Thatcher’s system and most others use age-stratified normative data. Either quantitative method appears superior to the impressions gained by visual inspection of the raw EEG. The qEEG literature shows clearly that abnormalities tend to increase in parallel with the clinical stage of dementia in senile dementia of the Alzheimer’s type, 80 Alzheimer’s disease, 81 , 82 and cognitive decline. 83–85 For example, Prichep et al. 86 published a cross-sectional neurometric study of 319 subjects showing either normal aging or signs and symptoms compatible with dementia of the Alzheimer’s type. On the basis of their Global Deterioration Scale (GDS) level the subjects were divided into groups of 40 healthy (GDS 1), 91 with subjective cognitive deficits (GDS 2), 48 with subjective + objective deficits (GDS 3), 60 with mild dementia of the Alzheimer’s type (GDS 4), 55 with moderate dementia of the Alzheimer’s type (GDS 5), and 25 with moderately severe dementia of the Alzheimer’s type (GDS 6). Abnormalities (relative theta showed the clearest results) began in the GDS 2 group and increased in parallel with increasing deterioration through the GDS 3–6 groups. The authors make the point that the high sensitivity of neurometrics to the earliest presence of qEEG abnormalities in subjects with only subjective cognitive dysfunction suggests that the technique might be clinically useful in the initial evaluation of patients with suspected dementia. However, no sensitivity, specificity, or other similar accuracy measures were reported, so the applicability of these general results to individual patients is unclear. A more recent qEEG study 87 used equivalent dipole analysis to investigate mild cognitive impairment and Alzheimer’s disease, and found that although the technique could not discriminate mild cognitive impairment patients from healthy subjects, Alzheimer’s disease patients could be distinguished from both mild cognitive impairment patients (78% accuracy) and controls (84% accuracy). Perhaps more importantly, it was found that eight of the 10 mild cognitive impairment patients misclassified as Alzheimer’s disease went on to develop Alzheimer’s disease during the 48-month study. Retrospectively comparing those mild cognitive impairment patients who progressed with those who did not progress to Alzheimer’s disease, the authors reported a classification accuracy of 77%. All of these classification accuracies are compared with those obtained using traditional FFT (Table 1). qEEG, in contrast to visually analyzed conventional EEG, also has been reported to show abnormalities in fronto-temporal dementia that are clearly distinguishable from those in Alzheimer’s disease and which do not represent healthy aging. 78 Unfortunately, little work has been published in this area.
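To make the difference between these two normative strategies concrete, the following sketch in Python contrasts an age-regression z score with an age-stratified z score for a single relative theta measurement. The normative values, the patient values, and the regression slope are hypothetical illustrations only; they are not drawn from the Neurometric, Thatcher, or any other published database.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normative sample: age in years and relative theta power.
norm_age = rng.uniform(50, 90, 400)
norm_theta = 0.004 * norm_age + 0.10 + rng.normal(0, 0.02, norm_age.size)

patient_age, patient_theta = 72.0, 0.48   # hypothetical patient values

# (a) Age regression: regress the feature on age in the normative sample,
#     then express the patient's value as a z score of the residuals.
slope, intercept = np.polyfit(norm_age, norm_theta, 1)
residual_sd = (norm_theta - (slope * norm_age + intercept)).std(ddof=1)
z_regression = (patient_theta - (slope * patient_age + intercept)) / residual_sd

# (b) Age stratification: compare against the mean and SD of the normative
#     subjects falling within the patient's age band only.
band = (norm_age >= 70) & (norm_age < 75)
z_stratified = (patient_theta - norm_theta[band].mean()) / norm_theta[band].std(ddof=1)

print(f"z (age regression):     {z_regression:+.2f}")
print(f"z (age stratification): {z_stratified:+.2f}")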

Dementia Detection

In the evaluation of individual patients suspected of suffering from dementia, conventional or quantitative EEG can be used for several purposes, the most basic of which is detection of abnormal patterns of brain activity characteristic of dementing disorders. Since dementias of most etiologies are accompanied by EEG changes above and beyond those seen in healthy aging, EEG can be of practical utility if the clinical question can be structured as a choice between only two alternatives: either dementia or normal aging. For example, Prinz and Vitiello 88 used visually analyzed alpha slowing to distinguish between early stage Alzheimer’s disease patients and healthy subjects, attaining 71% sensitivity and 82% specificity. This accuracy is surprisingly high since the Alzheimer’s disease sample was restricted to early stage patients and included those with possible as well as probable Alzheimer’s disease diagnoses. Dementias of other etiologies also can be differentiated from healthy aging. Robinson et al. 89 visually analyzed EEGs from Alzheimer’s disease and comorbid Alzheimer’s disease and multi-infarct vascular dementia patients (all histopathologically confirmed), and from healthy subjects, achieving sensitivities of 87% for Alzheimer’s disease patients with 63% specificity, and 77% for Alzheimer’s disease and multi-infarct vascular dementia patients with 65% specificity. They also found a 36% false positive rate for healthy subjects, suggesting that specificity may have been sacrificed in the service of sensitivity, though nearly identical sensitivities of 89% for Alzheimer’s disease patients with 79% specificity, and 76% for multi-infarct vascular dementia patients with 79% specificity, were found by Sloan et al. 49 Yener et al. 90 reported a similar sensitivity of 85% for Alzheimer’s disease patients with 93% specificity and a more comforting 7% false positive rate for healthy subjects, and additionally found 69% sensitivity for fronto-temporal dementia patients with 93% specificity. These studies indicate that when the diagnostic question can be structured as a discrimination between healthy aging and a specific dementia (Alzheimer’s disease, multi-infarct vascular dementia, Alzheimer’s disease and multi-infarct vascular dementia, fronto-temporal dementia), then on the basis of the visually analyzed EEG moderate to high accuracies may be obtained.

A comparison between visual and quantitative EEG analysis was published by Mody et al. 91 They used both conventional and quantitative EEG from Alzheimer’s disease patients and scrupulously screened healthy elderly controls to investigate the changes accompanying Alzheimer’s disease and to classify patients and healthy subjects into their respective diagnostic categories. For diagnostic classification qEEG was found to have 98% sensitivity and 100% specificity, while a conventional reading of the same EEGs had 16% sensitivity and 100% specificity. However, in fairness it must be noted that while visual analysis found only 16% of the Alzheimer’s disease patients to show the specific constellation of changes defined by the authors as indicative of Alzheimer’s disease, fully 98% of the Alzheimer’s disease EEGs were clinically abnormal, with 76% showing generalized abnormalities. In this regard it is important to emphasize the complementary nature of visual and qEEG analysis, and to point out once again that a visual reading by a trained electroencephalographer is the first step in clinical qEEG analysis. Perhaps a better comparison between visual and quantitative analyses for dementia is provided by Yener et al., 90 who found that qEEG sensitivities and specificities were lower for Alzheimer’s disease but higher for fronto-temporal dementia.

Turning to purely quantitative techniques, an early study by John et al. 15 noted that certain qEEG abnormalities tend to increase in parallel with dementia symptoms, and assessed the discriminant classification accuracy for different stages of dementia. Although senile dementia patients with mild symptoms were difficult to distinguish from those with moderate/severe symptoms, overall demented patients could be separated from their healthy counterparts with about 75% sensitivity and 60% specificity, confirmed by a jackknife replication. A small study by Duffy et al., 92 published the following year, found that qEEG could discriminate senile dementia (age ≥65) and pre-senile dementia (age <65) from age-appropriate healthy subjects with sensitivities and specificities of 89% or better, and Besthorn et al. 93 attained 76% overall classification accuracy, with 72% sensitivity for Alzheimer’s disease patients and 81% specificity for healthy subjects. Schreiter-Gasser et al. 94 produced a slightly higher diagnostic classification sensitivity of 93% and specificity of 100% for Alzheimer’s disease patients versus healthy subjects. Although their sample sizes were rather small, the Schreiter-Gasser et al. study is noteworthy for using “street-normal” controls, most of whom complained of subjective memory problems although none showed frank dementia. In this manner their study closely resembles actual clinical practice.

Of course, discriminations can be weighted in favor of either sensitivity or specificity, 16 , 95 depending on the consequences of false negatives and false positives. Brenner et al. 96 used qEEG to classify Alzheimer’s disease patients and healthy subjects, weighting the discrimination to minimize false positives. Alzheimer’s disease patients could be distinguished from their healthy counterparts with a specificity of 93% accompanied by a sensitivity of 66%, and somewhat greater sensitivity was obtained for lower functioning (79%) than for higher functioning demented patients (36%). Restricting their patient group to those showing only mild dementia (CDR 1), Coben et al. 97 also found that patients and healthy subjects could be classified with low sensitivity (24%; 57% upon independent replication) but with 100% specificity. The authors note that although the low sensitivity puts constraints on the usefulness of qEEG for this application, even a low-sensitivity test could be useful in the mild stage of dementia, provided that prevalence and specificity were high and a high false negative rate were acceptable.
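The weighting described by these authors amounts to choosing where to place the cut-off on a discriminant score. The brief Python sketch below, which uses simulated discriminant scores rather than data from any of the cited studies, shows how moving that threshold raises specificity at the expense of sensitivity.

import numpy as np

rng = np.random.default_rng(1)
# Simulated discriminant scores; higher scores are more "dementia-like."
scores_patients = rng.normal(1.0, 1.0, 200)
scores_controls = rng.normal(-1.0, 1.0, 200)

for threshold in (-0.5, 0.0, 0.5, 1.0):
    sensitivity = np.mean(scores_patients >= threshold)   # true positive rate
    specificity = np.mean(scores_controls < threshold)    # true negative rate
    print(f"threshold {threshold:+.1f}: sensitivity {sensitivity:.0%}, "
          f"specificity {specificity:.0%}")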

Other dementing conditions also have been included in qEEG discriminations from normal aging. A small study by Leuchter et al. 98 separated a combined sample of dementia of the Alzheimer’s type and multi-infarct vascular dementia patients from healthy subjects with 83% sensitivity and 100% specificity, and in a larger study using more adequate samples of dementia of the Alzheimer’s type patients, multi-infarct vascular dementia patients, and healthy subjects, Leuchter et al. 99 found that the overall percentages (the only accuracy measure given) of correct classifications were 77% for dementia of the Alzheimer’s type versus healthy and 81% for multi-infarct vascular dementia versus healthy. Extending the range of dementia etiologies still further, Streletz et al. 100 used qEEG to classify Huntington’s dementia patients and age-equivalent healthy subjects with 70% sensitivity and 90% specificity, as well as separating dementia of the Alzheimer’s type patients and healthy elderly controls into their respective categories with 68% sensitivity and 90% specificity.

qEEG classification accuracies depend not only on the specific qEEG parameters assessed, but also on the analytic technique employed. Anderer et al. 101 analyzed data from 207 demented patients, 99 with senile dementia of the Alzheimer’s type and 108 with multi-infarct vascular dementia, versus 56 healthy subjects, comparing demented versus healthy classifications based on z scores, stepwise discriminant analysis, and neural networks. Receiver operating characteristic analysis was used to compare these methods, with classification performance being measured by the area under the ROC curve, as recommended by Swets. 102 Results showed neural networks (accuracies typically >90%) to be superior to stepwise discriminant analysis (87%), which in turn was superior to z scores (84%). Neural network classification of demented patients by subdiagnosis also yielded receiver operating characteristic areas of 89% for senile dementia of the Alzheimer’s type and 90% for multi-infarct vascular dementia. The same general findings were reported by Pritchard et al., 103 who compared various combinations of linear and nonlinear qEEG measures, and also linear and nonlinear analysis methods, for their abilities to classify 14 Alzheimer’s disease patients and 25 healthy subjects into their respective groups. These authors found that the addition of nonlinear to linear qEEG measures improved classification accuracy, and that nonlinear neural networks classified better than standard linear techniques of multivariate or nearest-neighbor discriminant analysis.
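The kind of comparison reported by Anderer et al. can be sketched as follows in Python: a single z-scored feature used directly as a classification score, a linear discriminant, and a small neural network, compared by area under the ROC curve (the latter two on cross-validated predictions). The features and group labels are simulated stand-ins for qEEG variables, so the printed values illustrate the procedure only and will not reproduce the published figures.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_per_group, n_features = 100, 8   # hypothetical qEEG feature set
X = np.vstack([rng.normal(0.0, 1.0, (n_per_group, n_features)),    # healthy
               rng.normal(0.6, 1.0, (n_per_group, n_features))])   # demented
y = np.r_[np.zeros(n_per_group), np.ones(n_per_group)]

# (a) A single z-scored feature used directly as a classification score.
auc_z = roc_auc_score(y, X[:, 0])

# (b) Linear discriminant analysis, cross-validated.
p_lda = cross_val_predict(LinearDiscriminantAnalysis(), X, y,
                          cv=10, method="predict_proba")[:, 1]

# (c) A small feed-forward neural network, cross-validated.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
p_net = cross_val_predict(net, X, y, cv=10, method="predict_proba")[:, 1]

print(f"AUC, single z-scored feature: {auc_z:.2f}")
print(f"AUC, linear discriminant:     {roc_auc_score(y, p_lda):.2f}")
print(f"AUC, neural network:          {roc_auc_score(y, p_net):.2f}")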

Despite their reasonably high accuracies (however measured), the clinical relevance of these classifications based on EEG or qEEG may be questioned since most abnormalities are characteristic of the dementing disorders themselves and do not lead to changes in treatment. Also it must be stressed once again that the above classification studies are relevant only under circumstances in which the clinical question is one of discriminating between a given dementia and normal aging. The discriminant functions on which those classifications are based are not intended to generalize to other conditions. Furthermore, the sensitivity of the discrimination appears to vary as a function of the patient’s clinical deterioration, with early stage patients being less accurately classified in most studies than later stage patients. It is highly questionable whether a clinician needs EEG or qEEG to determine that a moderately or severely demented individual is not healthy. The high abnormality rates of demented patients may assist in diagnosis, however, due to their high negative predictive value; in most studies, a normal EEG in a demented patient is strongly suggestive of a diagnosis other than Alzheimer’s disease. 89 For fronto-temporal dementia, in which the conventional EEG is often normal, the negative predictive value is lower. But identification of individual fronto-temporal dementia patients by qEEG appears to offer promise, as shown by Lindau et al. 78 and Yener et al. 90 and described below. Additionally, Robinson et al. 89 make the interesting point that conventional EEG may be superior to qEEG for such negative predictions because, while medications may influence the qEEG significantly, they are unlikely to produce changes visible to the eye.

Delirium Detection

Closely related to simple dementia detection and perhaps of greater clinical importance is the ability of EEG 5 or qEEG to assess the presence of delirium in patients presenting for dementia evaluation. Aside from the fact that delirium is commonly misdiagnosed as dementia, the toxic, metabolic, or structural encephalopathies underlying it carry the risk of serious or even life-threatening medical complications. A pilot study by Jacobson et al. 104 examined patients suffering from dementia of the Alzheimer’s type (including one with dementia of the Alzheimer’s type and multi-infarct vascular dementia), delirium, delirium + dementia (of various etiologies), and healthy subjects. Stepwise discriminant analysis was used to assess the relative diagnostic contributions of qualitative (visual EEG analysis), semiquantitative (visual analysis of topographic qEEG maps), and quantitative EEG measures. For dementia detection (all patients versus healthy subjects) qEEG achieved a sensitivity of 93% and a specificity of 86%. For differential classification (delirium with or without dementia versus nondelirious dementia) qEEG attained a sensitivity for delirium of only 61% with 56% specificity, but visual analysis was found to be 94% sensitive and 78% specific for the same classification. Although a small pilot study, this work explores the interface between qualitative and quantitative assessments for the presence of delirium and serves as an inviting framework for a larger and badly needed follow-up study.

Differential Diagnostic Classification

Another area of direct clinical relevance is the ability of conventional and quantitative EEG to aid in differential diagnosis by classifying patients into the most likely of several specific diagnostic groups. However, conditions outside the psychiatric nosology of DSM, such as mild cognitive impairment, and mixed conditions, such as dementia with depressive or psychotic features, have not been well studied.

Separation of demented individuals from their healthy counterparts may be comparatively easy for a clinical psychiatrist, even without the use of EEG or qEEG; but the differential diagnostic categorization of patients on the basis of discrete dementia etiologies or pseudo-dementing conditions is much more problematic. Though most such efforts involve qEEG, dementia studies in the conventional EEG literature indicate that when the electroencephalographer’s attention is focused on a limited range of diagnostic possibilities, the classification accuracy of visual analysis can be quite high. For example, Prinz and Vitiello 88 used visually analyzed alpha slowing to classify early stage Alzheimer’s disease patients and major depression patients, successfully classifying 66% of Alzheimer’s disease and 83% of depression patients into their respective diagnostic groups. As mentioned above in a different context, this accuracy is surprisingly high since the Alzheimer’s disease sample was restricted to early stage patients and included those with possible as well as probable Alzheimer’s disease diagnoses. Sloan et al. 49 also studied conventional EEGs recorded from Alzheimer’s disease patients and major depression patients, and extended the diagnostic range to multi-infarct vascular dementia patients, grouping the EEGs visually into an Alzheimer’s disease pattern, a multi-infarct vascular dementia pattern, or a “normal” (major depression) pattern. (The implications of a normal EEG in major depression are discussed further below.) Correct visual EEG classification of Alzheimer’s disease patients was 77%, multi-infarct vascular dementia 76%, and major depression 79%. The separation of depressed from demented patients, although unintentional in the Sloan et al. study, is important because depression itself can produce pseudodementia symptoms (see Depression, below). Extension of the categorization to multi-infarct vascular dementia shows that visual separation of the EEG into discretely defined categories by expert electroencephalographers may be helpful in differential diagnosis.

qEEG studies also yield high differential diagnostic classification accuracies for dementing and pseudo-dementing conditions in individual patients, and have the advantage of doing so by means of objective algorithms that can be applied across laboratories. An early contribution to this literature was the work of O’Connor et al., 105 who recorded qEEG from elderly patients suffering from “organic” dementia (either senile arteriosclerosis or senile dementia) or from depression. The demented patients could be separated from their depressed counterparts with 88% sensitivity and 100% specificity, and the senile dementia patients could be distinguished from arteriosclerotic patients with equally high accuracy. Brenner et al. 96 also used qEEG to classify Alzheimer’s disease patients and depressed patients. When the discrimination was weighted to minimize false Alzheimer’s disease classifications, a specificity of 100% was accompanied by a sensitivity of 49%, and the sensitivity was greater for lower functioning Alzheimer’s disease patients (58%) than for their higher functioning counterparts (27%). John et al. 15 assessed the discriminant classification accuracy for dementia and depression and extended the range of pseudodementing disorders to include alcoholism. Alcoholic and depressive patients could be classified with sensitivities of 61% and 74% respectively, versus 63% to 64% for dementia. A more recent classification study 106 found that Alzheimer’s disease patients could be separated from depressed patients manifesting mild cognitive impairment with 92% sensitivity for Alzheimer’s disease (92% on replication) and 90% sensitivity for depression (88% on replication). Although the literature is limited, these classification accuracies compare well with those deriving from the more expensive technique of SPECT. 107–109

Regarding dementia etiologies, a small pilot classification study of dementia of the Alzheimer’s type patients, multi-infarct vascular dementia patients, and healthy subjects was published by Leuchter et al. 98 As mentioned previously, the combined sample of demented patients could be distinguished from the healthy subjects with 83% sensitivity and 100% specificity. But additionally, a “high proportion” (exact percentages were not reported) of dementia of the Alzheimer’s type and multi-infarct vascular dementia patients evidently could be distinguished from each other, and a three-way classification of all subjects attained 92% accuracy. Leuchter et al. 99 classified dementia of the Alzheimer’s type and multi-infarct vascular dementia patients with 69% accuracy (the only accuracy measure given), but an earlier study of these same patients 110 had reported the percentage of correct dementia of the Alzheimer’s type versus multi-infarct vascular dementia classifications to be 76%. Turning to another dementia etiology, Yener et al. 90 assessed the ability of qEEG to dichotomously classify Alzheimer’s disease and fronto-temporal dementia patients into their correct diagnostic groups, achieving both a sensitivity and a specificity of 85% (81% and 85%, respectively, in a jackknife replication). More recently, Lindau et al. 78 assessed several different qEEG analytic models for their ability to classify Alzheimer’s disease, fronto-temporal dementia, and healthy control subjects. Using qEEG alone, Alzheimer’s disease patients could be separated from healthy subjects with 80% accuracy, fronto-temporal dementia patients could be separated from controls with 79% accuracy, and Alzheimer’s disease patients could be separated from fronto-temporal dementia patients with 71% accuracy. Regarding Alzheimer’s disease and fronto-temporal dementia specifically, neuropsychological testing alone could classify individual patients with 80% accuracy, and with 93% accuracy when combined with qEEG.

Prediction of Clinical Course

Another clinically valuable use of qEEG was explored by Soininen et al., 111 who used discriminant analyses to test the ability of different combinations of qEEG variables, age, and gender to predict the clinical course of 24 Alzheimer’s disease patients over a 3-year period. The authors found that a combination of four qEEG variables, age, and gender correctly (though retrospectively) predicted which patients would remain at home and which would be institutionalized or die, with 100% sensitivity and specificity. Similar findings were reported by Rodriguez et al. 112 These authors recorded qEEG from 31 consecutive Alzheimer’s disease outpatients, from which they were able to predict the timing of three clinically relevant events: loss of activities of daily living (dressing, eating, and bathing), incontinence, and death. Although reported as a pilot study and needing confirmation using a larger prospective patient sample and more complete data analysis, these results illustrate the range of clinically relevant information capable of being extracted from the qEEG record (see Berg et al. 113 for review of older studies in this area).

Recommendations

EEG and especially qEEG studies show reasonably high differential classification accuracies corresponding to discrete diagnostic categories of dementing disorders, such as Alzheimer’s disease, multi-infarct vascular dementia, and fronto-temporal dementia, and pseudo-dementing disorders, such as depression, alcoholism, and delirium. These appear to offer the practicing psychiatrist a potentially valuable source of information about individual patients whose diagnoses otherwise may be unclear. It is important to stress that neither EEG nor qEEG is a method of automated diagnosis; rather, these techniques constitute diagnostic adjuncts that, like any other laboratory test, inform clinical judgment. Most of the differential classifications have been replicated in this literature, though in contrast to the literature on children’s disorders the replications tend to be between rather than within studies. Regarding more speculative areas, the use of qEEG to predict the clinical course of individual demented patients is an inviting area for further research, since the dependent variables (dressing, eating, bathing, incontinence, institutionalization, and death) are events of direct and obvious importance.

Mood Disorders

Nuwer’s position paper 10 gave a negative recommendation for qEEG in mood disorders. Hughes and John 7 gave a positive recommendation based on evidence from well-designed clinical studies.

Depression

Conventional EEG studies have found a substantial proportion (typically 20% to 40%) of depression patients to have EEG abnormalities, with several characteristic and controversial patterns described. 7 In their review of the literature Holschneider and Leuchter 41 take the opposite viewpoint, noting that the majority of conventional EEGs are normal in depression, and that abnormalities are generally mild, such as a slowing of the posterior dominant rhythm. From this viewpoint they argue that a patient with severe cognitive impairment and a normal or nearly normal EEG may be suffering from a pseudodementia of depression, whereas a similarly impaired patient with severe EEG slowing is likely to be suffering from another disease process, such as Alzheimer’s disease. (Studies relevant to this argument are reviewed above under Dementia.) However, this distinction is not likely to be seen in early stages of Alzheimer’s disease, when the EEG is normal or only mildly abnormal, generally showing posterior slowing. Holschneider and Leuchter make the point that abnormal EEGs predict functional decline regardless of diagnostic group. They also argue that EEG may be more useful than neuropsychological tests for identifying pseudo-dementia since motivational and attentional problems are less likely to interfere with testing. For all of these reasons Holschneider and Leuchter maintain that “although an abnormal EEG in a depressed patient is not specific for dementia, it does identify the patients at greatest risk for functional decline, and therefore is a useful part of the evaluation.” Unfortunately, no indications of the accuracies (e.g., sensitivity, specificity) of these statements are presented, but the authors’ view probably reflects the informed clinical consensus when EEGs are visually analyzed.

qEEG studies of depression yield widely varying results depending primarily on the analytic technique employed. A decade of studies reviewed by Pollock and Schneider 114 revealed increased alpha and beta power in slightly more than half of them, which in principle might allow discrimination of depression from dementia with its decreased alpha and beta. But univariate approaches using single qEEG features fail to classify depressed patients in a clinically useful manner. In contrast, by considering several variables simultaneously, multivariate approaches appear to offer the ability to classify mood disorder patients in ways that are clinically useful. Hughes and John 7 note that numerous qEEG studies have reported increased power in the theta or alpha band and decreased coherence and asymmetry over frontal regions among unipolar depressed patients, which is essentially the opposite of the pattern of changes seen in schizophrenia (discussed below) and which may aid in the differential diagnosis of difficult cases. Similarly, unipolar and bipolar depression appear to have different patterns of qEEG changes, with schizophrenia-like alpha decreases and beta increases in the latter. Hughes and John suggest that this difference may serve to separate unipolar from bipolar patients presenting in a state of depression without a prior history of mania, but this distinction may be compromised by antidepressant medication, which tends to reduce the excessive alpha among unipolar depressed patients.
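The multivariate approaches described here operate on feature vectors built from measures such as band power, coherence, and asymmetry. As a rough illustration of how two such features might be computed, the Python sketch below derives relative band powers and a frontal alpha asymmetry index from two simulated frontal channels; the sampling rate, channel labels, band limits, and asymmetry formulation are assumptions made for illustration and are not those of any particular published system.

import numpy as np
from scipy.signal import welch

fs = 250                            # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)        # 60 s of simulated eyes-closed EEG
rng = np.random.default_rng(3)
f3 = 8 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, t.size)   # simulated "F3"
f4 = 6 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 5, t.size)   # simulated "F4"

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(signal):
    """Absolute and relative band power from a Welch power spectrum."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    df = freqs[1] - freqs[0]
    total = psd[(freqs >= 1) & (freqs < 30)].sum() * df
    absolute = {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
                for name, (lo, hi) in bands.items()}
    relative = {name: power / total for name, power in absolute.items()}
    return absolute, relative

abs_f3, rel_f3 = band_powers(f3)
abs_f4, rel_f4 = band_powers(f4)

# One common formulation of frontal alpha asymmetry: log right minus log left.
asymmetry = np.log(abs_f4["alpha"]) - np.log(abs_f3["alpha"])
print({name: round(value, 3) for name, value in rel_f3.items()})
print(f"frontal alpha asymmetry (log F4 - log F3): {asymmetry:+.3f}")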

Depression Detection

Using multivariate qEEG techniques, the accurate separation of depressed from healthy individuals has been demonstrated repeatedly and replicated in large samples. Prichep and John 115 attained 83% sensitivity and 89% specificity (jackknife replicated to 81% and 87%, respectively). A four-way classification (healthy, depression, alcoholism, and dementia) identified depressed individuals with 73% (jackknife replicated to 65%) sensitivity and 84% (jackknife replicated to 76%) specificity. A follow-up 19 using the same four-way classification identified depressed individuals with 72% sensitivity and 77% specificity (independently replicating to 85% and 75%, respectively). Still higher accuracies were reported by John et al. 24 who found that depressed individuals could be separated from their healthy counterparts using a two-way discriminant with 83% sensitivity and 86% specificity (independently replicated at 93% and 88% respectively). A three-way discriminant (healthy, depression, dementia) identified depressed patients with 84% sensitivity and specificity (independently replicated at 80% and 85% respectively), and a four-way discriminant (healthy, depression, alcoholism, dementia) identified them with 72% sensitivity and 77% specificity (independently replicated at 85% and 75% respectively).
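The jackknife replications cited here and throughout this review are leave-one-out procedures: each subject is classified by a discriminant trained on all of the remaining subjects. The Python sketch below illustrates the procedure using simulated feature vectors standing in for qEEG variables; the group sizes and resulting accuracies are arbitrary and are not those of the studies cited.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (60, 6)),    # simulated healthy subjects
               rng.normal(0.7, 1.0, (60, 6))])   # simulated depressed patients
y = np.r_[np.zeros(60), np.ones(60)]             # 1 = depressed

# Each subject is classified by a discriminant trained on all other subjects.
predicted = cross_val_predict(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())

sensitivity = np.mean(predicted[y == 1] == 1)    # depressed correctly identified
specificity = np.mean(predicted[y == 0] == 0)    # healthy correctly identified
print(f"jackknifed sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")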

These results also demonstrate the trade-off between the number of simultaneous discriminations (number of possible diagnostic categories into which a patient might be placed) and the accuracy (sensitivity and specificity) of the discrimination. This trade-off highlights the principle that qEEG does not take the physician out of the diagnostic loop. The more the physician knows about the patient, the more alternative diagnoses can be excluded a priori, and the more accurate the qEEG discrimination can be.

Differential Diagnostic Classification

Several replicated qEEG studies of differential diagnostic classifications of depression versus other disorders have been based on four-way discriminants. Prichep and John 115 used a four-way discriminant to identify depressed patients with 73% (jackknife replicated to 65%) sensitivity, with specificities of 73% (73%) versus dementia and 74% (64%) versus alcoholism. John et al. 19 , 24 used a four-way discriminant with independent replication and achieved sensitivities of 72% (85%) for depression, with specificities of 79% (77%) versus dementia and 80% (90%) versus alcoholism. The latter report also used a two-way discriminant with independent replication to categorize depressed patients with 84% (88%) sensitivity and 84% (85%) specificity versus schizophrenia. Similarly, a three-way discriminant categorized depressed patients with 84% (80%) sensitivity and a specificity of 84% (71%) versus dementia. Finally, a four-way discriminant with independent replication achieved 72% (85%) sensitivity for depression with specificities of 79% (77%) for dementia and 80% (80%) for alcoholism.

The crucial differentiation between unipolar and bipolar mood disorders has been assessed using multivariate techniques by Prichep and John, 115 who reported jackknife replicated unipolar classification sensitivities of 87% (87%) and specificities of 90% (85%) versus bipolar patients. John et al. 19 found nearly identical unipolar sensitivities of 85% (85%) and specificities of 85% (87%) versus bipolar, and John and Prichep 20 reported independently replicated unipolar sensitivities of 84% (87%) with specificities of 88% (94%) versus bipolar. Prichep et al. 42 similarly found independently replicated unipolar sensitivities of 91% (76%) with specificities of 83% (75%) versus bipolar patients. These already high accuracies could be boosted further by adding qEP data, giving independently replicated unipolar sensitivities of 98% (76%) and specificities of 91% (82%) versus bipolar.

Some caution must be exercised, however, in generalizing results from primary depression to the secondary depression seen so often in clinical practice. Prichep et al. 116 studied qEEG characteristics of crack cocaine dependence and noted that 28 patients (54% of the total sample) had a secondary diagnosis of major depression. When a previously used depression discriminant 19 was applied to this group, it successfully identified only eight of the 28 patients, for a sensitivity of 29%.

One of the suggested uses of qEEG is to predict the most effective treatment for a given patient, and one of the most frequently cited papers in this regard is that of Suffin and Emory. 117 These authors recorded unmedicated qEEG, treatment, and outcome data from 54 patients diagnosed with DSM-III-R “affective” (mood) disorders (major depression, bipolar disorder, depressive disorder not otherwise specified) and from 46 patients suffering from ADD/ADHD. Although actual treatments varied, affective disorder patients generally were treated with antidepressants, to which anticonvulsants or lithium were added in refractory cases, followed by stimulants in those cases still unresponsive. Attention deficit patients generally were treated initially with stimulants, then antidepressants, and finally anticonvulsants for increasingly refractory cases. Pre-treatment spectral analysis revealed significantly increased alpha in some patients and increased theta in others. It also revealed hypercoherence among some but not other patients. The authors “heuristically” divided the data into frontal alpha excess, frontal theta excess, and “other” groups, the last of which (N=19) was essentially dropped from further analysis or discussion. When treatment data from the remaining subjects were analyzed, patients with similar neurometric features were found to respond to the same classes of medications, despite their differing DSM-III-R diagnoses. For example, summarizing their findings they state, “The frontal theta excess group was 100% responsive to stimulants.” This is heady stuff. The ability of qEEG to predict treatment response would have immediate clinical utility and might further suggest the presence of an underlying electrophysiological taxonomy of psychiatric disorders not entirely congruent with DSM. Unfortunately, major flaws in design, analysis, and reporting plague the Suffin and Emory 117 study, and many of their conclusions appear to be overstatements of their findings. For example, the summary statement that the frontal theta excess group was 100% responsive to stimulants might lead the reader to think that all of the 21 patients found to have frontal theta excess also responded to stimulant medication. However, in the paper’s “Results” section they state, “The frontal theta excess/normocoherent subgroup seen in Table 4 appeared only in the attentionally disordered clinical population. In that population it was 100% responsive to stimulants.” Indeed, the data show only seven of these patients to have responded to stimulants, and all seven had attentional disorders. This and a host of similar problems render the conclusions somewhat less exciting than they at first appear. However, the paper does illustrate the importance of identifying subgroups in making treatment predictions, and in that regard it serves as a valuable addition to the literature.

Recommendations

Although conventional EEG is of dubious utility in depression, qEEG has been shown in well replicated studies (albeit from a single research group) to be capable of differentiating between healthy and depressed individuals, and further to distinguish depressed patients from their demented, schizophrenic, and alcoholic counterparts. Perhaps most importantly, there is solid evidence from independently replicated studies (from the same group), that qEEG has the ability to classify individual unipolar and bipolar patients with a clinically useful accuracy.

Anxiety, OCD, Panic

Hughes and John 7 group these disorders under the heading of “Mood Disorders,” but note that while EEG and qEEG abnormalities have been reported, consistent patterns have not yet been discerned. Evidently their positive recommendation given to mood disorders does not apply to this subgroup.

OCD

Perhaps the most interesting work in this area is by Prichep et al. 25 Noting that only about half of OCD patients respond to SSRI medications, and noting further that the older EEG literature described several different patterns of abnormality in patients with OCD, these authors investigated whether the OCD diagnostic category subsumes several different pathophysiologies possessing different responses to medication. Extending earlier pilot work into a prospective study of medication-free OCD patients, qEEG was subjected to multivariate cluster analysis, which defined two subgroups. Cluster 1 had qEEG characterized by excess frontal and frontotemporal relative theta power, and 80% of the patients were found subsequently to be nonresponders to SSRI medications. Cluster 2 was characterized by excess relative alpha power, and 82% of the patients were found to be SSRI responders. Although group sizes were small, the finding of distinct qEEG clusters supports the presence of pathophysiological subgroups sharing a common clinical expression in OCD, while the predictive validity of qEEG cluster membership in terms of treatment response implies that these pathophysiological subgroups have clinical relevance. Since SSRIs increase slow and fast EEG activity while reducing alpha, their effectiveness in treating the high-alpha Cluster 2 patients but not the high-theta Cluster 1 patients is understandable from the standpoint of normalizing brain activity. An independent prospective replication of this study by Hansen et al. 26 analyzed pre-treatment qEEG from 20 OCD patients using the subtyping algorithm developed by Prichep et al. 25 Eighteen of 20 patients were predictively classified as (Cluster 2) responders to the SSRI paroxetine, and when the blind was broken, 17 (94%) of them were rated as true responders. (Although some ambiguity exists in the criteria for clinical response, even the most uncharitable analysis still finds 83% sensitivity.) In principle, confirmation of these results by additional prospective studies of independent patient groups and formulation of a generally applicable discriminant that could be applied to individual patients could lead to a clinically valuable tool for determining which specific OCD patients would be likely to benefit from SSRI medications. In practice, however, the fact that SSRIs are effective medications for this disorder and the finding that 20% (Prichep) to 50% (Hansen) of the very small groups of Cluster 1 patients responded to these medications make it questionable whether clinicians would send a patient to the qEEG lab for the purpose of informing their initial medication choice. But at the very least, prospective identification of patients who are unlikely to benefit from SSRI medications would be a valuable research tool for the assessment of new treatments.
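The underlying logic, cluster on pre-treatment qEEG features first and only then examine treatment outcome by cluster, can be sketched in a few lines of Python. The feature values, the cluster structure, and the responder rates below are simulated for illustration and do not reproduce the published data.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Hypothetical pre-treatment features: [frontal relative theta, relative alpha].
features = np.vstack([rng.normal([0.30, 0.15], 0.03, (20, 2)),    # theta-excess
                      rng.normal([0.15, 0.30], 0.03, (22, 2))])   # alpha-excess
responded = np.r_[rng.random(20) < 0.2, rng.random(22) < 0.8]     # simulated outcomes

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for cluster in (0, 1):
    n = int(np.sum(labels == cluster))
    rate = responded[labels == cluster].mean()
    print(f"cluster {cluster}: n={n}, SSRI responder rate {rate:.0%}")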

Panic Disorder

Few published qEEG studies of panic disorder bear direct relevance to clinical psychiatric practice. Abraham and Duffy 118 were able to classify 17 panic disorder and 50 healthy subjects into their respective diagnostic groups with 93% overall accuracy based on qEEG, but serious shortcomings in their report limit the useful information that can be drawn from it. A more recent addition to the literature by Knott et al. 39 compared 34 panic disorder patients to 19 healthy subjects. qEEG discriminant analysis correctly classified patients and controls with 71% sensitivity and 84% specificity, a sensitivity level that the authors felt to be insufficient for individual clinical classifications.

Recommendations

The qEEG literature on anxiety disorders (OCD, panic) is small and unimpressive from the standpoint of clinical utility. There are indications that several distinct etiologies share a common expression in OCD, and qEEG may become an important research tool, and conceivably an important clinical tool, in the future.

Schizophrenia

Nuwer 10 gave qEEG a negative recommendation for schizophrenia. Hughes and John 7 also gave qEEG a negative recommendation for schizophrenia and based their recommendation on conflicting Class II (well-designed clinical studies) and Class III (expert opinion, nonrandomized studies using historical controls, case reports) types of evidence. However, they recommended conventional EEG as part of the initial workup following the first presentation of schizophrenia, and suggested that qEEG may aid in the sometimes-difficult differential diagnostic discrimination between schizophrenia and mood disorder (considered under that category, above). They also cited the potential future value of qEEG in predicting the most efficacious treatment for schizophrenia.

A very large literature examines EEG and qEEG changes in schizophrenia. Major considerations have centered on the rates and types of EEG abnormalities, their localization in the brain, and their relationship to clinical phenomena, such as symptoms, subtypes, and course. 119 EEG abnormalities have been reported in 5% to 80% of schizophrenia patients 121 and although many abnormal activity patterns have been characterized, none appear to be consistently related to clinical phenomena. Evidence from cluster analysis suggests that distinct qEEG subtypes of schizophrenia exist and that they show differential responses to medications. 4 , 122 But this aspect of qEEG has not yet yielded useful clinical tools.

A recent review of the qEEG literature 7 found broad agreement that schizophrenia patients tend to have increased slow activity in the delta and theta bands with increased interhemispheric coherence, particularly over frontal areas, reduced alpha power with a downward shift of the mean alpha frequency, and increased beta power. The decreased alpha tends to be normalized by antipsychotic medications. 123 The cluster analysis studies noted above 4 and the work of John et al. 122 suggest that qEEG subtypes exist, but their relation to clinical subtypes, medication response, and other important treatment variables remains unclear. There is a suggestion based on several studies 7 that the increased interhemispheric coherence over frontal regions may distinguish schizophrenia patients from those suffering from bipolar depression, who show decreased frontal coherence. However, this idea also does not appear to have been tested and replicated prospectively except by John and Prichep, 20 and requires further study.

The use of qEEG to study medication response among schizophrenia patients generally has involved research designs and analyses that preclude individual predictions. For example, Czobor and Volavka 124 published a study with the intriguing title “Pretreatment EEG predicts short-term response to haloperidol treatment.” Unfortunately, no prediction was involved. Rather, a retrospective analysis of pre-treatment qEEG and clinical treatment response showed that higher alpha values were associated with poorer response for the entire sample of 34 patients taken as a whole. There was no attempt to predict, either prospectively or retrospectively, the treatment response of individual patients. Without accuracy measures (sensitivity, specificity, etc.) the direct clinical utility of such work is unclear. Similarly, a later study by these authors 125 found that for a sample of nine schizophrenia patients, pre-treatment beta power and asymmetries in delta and theta were associated with overall clinical improvement. Again, no individual predictions were attempted. However, it must be acknowledged that such studies have laid the groundwork for future research in an area of great clinical importance.

There is some indication that a patient’s clinical response to an antipsychotic medication may be predicted from the qEEG changes induced by a test dose. Galderisi et al. 126 studied patients with schizophrenia, recording qEEG before and 6 hours after a test dose of haloperidol or clopenthixol. Clinical response was determined after 4 weeks of treatment. Responders and nonresponders differed trivially on baseline qEEG measures but markedly on their response to the medication test dose. Stepwise discriminant analysis showed that the qEEGs of responders differed from those of nonresponders on several measures (theta2 increased in responders, but was unchanged in nonresponders; alpha1 increased in responders, but decreased in nonresponders) and that the best response predictor was alpha1. Alpha1 change scores at C3 discriminated with an overall accuracy of 89%, with 94% of responders and 80% of nonresponders being correctly classified. These results bolster the authors’ view that only patients showing the same test dose response as healthy subjects (i.e., increased theta and alpha1 activity after high potency neuroleptics) will benefit clinically. Importantly, the authors also point out practical limitations of pharmaco-EEG: long washout periods are needed and patient cooperation is required, neither of which may be feasible in clinical practice.

Recommendations

qEEG has not been shown to be clinically useful in the diagnosis or treatment of schizophrenia patients, but may have limited utility in distinguishing between schizophrenia and depression. There are indications that distinct qEEG subtypes exist, perhaps corresponding to different etiologies or pathophysiologies within a broader schizophrenia spectrum, and possibly associated with differing responses to medication. Additional research is needed in this potentially important area, which thus far lacks direct clinical application.

Methodological Problems

From the studies reviewed above, several broad problems become apparent. Some of these problems have been framed usefully by Kaiser 34 as reflecting the difficulty of translating the methodological freedom of research into the uniform standardization necessary for clinical application. In the research literature different authors record electrical activity from different brain areas using different configurations of hardware. Artifacts are eliminated or reduced using different strategies and decision rules. From the “clean” brain activity, different signals are extracted using different configurations of software. These signals from the patient’s data are compared with those from (often ad hoc) normative databases of differing size and quality, recorded from individuals screened by different methods. The accuracy of test information is reported using any of a variety of metrics, and replications are conducted in several different manners. These and other differences make comparing research results from different laboratories problematic; robust findings from one lab may not be replicated by another. As a practical consequence, seemingly important findings in the research literature may be impossible to replicate in the clinic with the analysis software included in commercially available clinical qEEG packages.

Exemplifying many of these difficulties is a paper by Coutin-Churchman et al., 127 who built a qEEG system using off-the-shelf hardware and a combination of commercial and custom-written software components to assemble a stand-alone qEEG database using 100 (50 female) healthy volunteers, 18 to 55 years old. Against this healthy normative database they then assessed qEEG data from 67 healthy subjects and 340 patients, all within the same age range, the latter carrying a wide variety of psychiatric and neurological diagnoses. Analysis was restricted to the “intuitive variables” of spectral power measurements. Coherence was not examined, and multivariate classification methods were not utilized. Age regression was felt to be unnecessary. The results of this study are edifying on several levels. The sample of healthy subjects was determined to have a 12% abnormality rate, or about twice the rate expected by chance. This finding calls into question the statistical adequacy of the normative healthy subject database and of the specific statistical procedures used to determine abnormality. Certainly it would be ill-advised to use such a system for serious clinical work until the type I error rate is under control. However, the authors report that patients and controls were classified with 84% sensitivity and 91% specificity. The positive predictive value was 98% and the negative predictive value was 53%. Importantly, they also report that no specific qEEG patterns were found to be associated with specific diagnostic categories. They attribute this to the likely presence of subgroups within the diagnostic categories, but perhaps their rejection of multivariate classification methods also played a role. Furthermore, examination of their data reveals that most of their diagnostic groups contained too few cases to provide sufficient statistical power for adequate assessment. Of their 22 diagnostic categories only four contained 20 or more subjects, and nine categories contained fewer than 10. In an accompanying editorial Nuwer 43 makes much of the lack of a clear association between particular qEEG findings and specific diagnostic categories, calling it “an important finding in light of past claims that specific quantitative EEG features can diagnose [sic] psychiatric illnesses…” But the questionable adequacy of the healthy normative database, differences in analytic methods, and lack of statistical power make Nuwer’s argument less than compelling. Here we have an absence of evidence rather than evidence of absence.
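The reported predictive values follow arithmetically from the sensitivity, the specificity, and the heavy preponderance of patients over controls in the sample, as the short Python sketch below shows; the balanced 100-versus-100 comparison is a hypothetical contrast added to illustrate how strongly the base rate drives these figures.

def predictive_values(sensitivity, specificity, n_patients, n_controls):
    """Expected predictive values from a 2 x 2 table built out of the rates."""
    true_pos = sensitivity * n_patients
    false_neg = (1 - sensitivity) * n_patients
    true_neg = specificity * n_controls
    false_pos = (1 - specificity) * n_controls
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

for n_patients, n_controls in ((340, 67), (100, 100)):
    ppv, npv = predictive_values(0.84, 0.91, n_patients, n_controls)
    print(f"{n_patients} patients vs {n_controls} controls: "
          f"PPV {ppv:.0%}, NPV {npv:.0%}")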

One approach to the problem of establishing a uniform methodology for clinical testing is through minimum practice standards and tightly protocol-driven data processing steps, such as those proposed by Duffy et al. 9 These are particularly applicable to the stand-alone clinical EEG lab where both recording and analysis are done on site by highly trained individuals. But qEEG is labor-intensive. In a busy clinical laboratory, work pressures inevitably lead to procedural shortcuts, and seemingly trivial changes can produce serious distortions of the data and erroneous test results.

A variant approach is to centralize the data analysis by having the patient’s data recorded under specified conditions in the clinical EEG lab and then sent electronically to a commercial site for standardized analysis. This variant minimizes the technical competence and the commitment of time and effort required of the individual clinical laboratory but necessitates considerable faith in the training, judgment, and diligence of personnel at the centralized commercial vendor.

An important aspect of either of the above approaches is that the development of a clinically useful qEEG system is an enormously complex, costly, and time-consuming undertaking. 95 Pilot studies must be run and standardized recording protocols must be developed. Using those protocols, large normative databases encompassing both healthy and pathological groups must be compiled. Standardized analytic protocols must be developed for comparing a patient’s data to the databases. The validity and reliability of the entire system must be established in the peer-reviewed professional literature. Further evidence may be required for FDA approval. The system must be advertised and made available to potential clinical users. The necessary expenditures of time, effort, and money for these activities are considerable. Consequently very few clinical qEEG systems are commercially available. Of these few, most (e.g., Nicolet BEAM, Biologic Brain Atlas) incorporate only univariate comparisons to a healthy database and are configured to detect only qEEG abnormalities. Two commercially available qEEG systems employ multivariate discriminant functions to categorize patients into clinically meaningful groups. The Thatcher system incorporates age-stratified normative healthy data over an age range covering most of the normal lifespan, and its multivariate discriminants have been optimized for detecting brain damage following head injury. John’s Neurometric Analysis System also incorporates normative healthy data covering most of the lifespan but uses regression rather than stratification to control for normal age-related changes, and additionally includes normative data from clinically defined patient groups representing DSM diagnostic categories. Its multivariate discriminants have been optimized to detect and categorize neuropsychiatric disorders.

Problems arise when the developers of a qEEG system attempt to elaborate it. From a hardware standpoint, the channel capacity of commercial qEEG recording systems has increased steadily from 20 channels a decade ago to 256 today, with a 512-channel system soon to be marketed. In principle, this allows greatly improved spatial resolution (since spatial resolution increases as a function of electrode density), but in practice the available qEEG normative databases are still limited to about 20 channels, negating the improved spatial resolution. Since increasing the channel capacity of the normative database would entail discarding an enormous backlog of data collected over several decades, there is an understandable reluctance to do so.

Another problem is that as normative data are added and discriminants are refined, the system evolves away from the configuration that was validated in the literature. A partial solution would be to list each discriminant currently available and the specific literature references supporting it, but this has not been done. Consequently, the potential clinical user is left wondering whether, for example, the current Neurometric discriminant for classifying unipolar versus bipolar patients is the one developed by Prichep and John 115 based on 31 unipolar and 20 bipolar patients, the one developed by John et al. 19 using 34 unipolar and 18 bipolar patients (and independently replicated using 34 and 17 patients respectively), the one developed by Prichep et al. 42 using 54 unipolar and 23 bipolar patients (and independently replicated using 45 and 17 patients respectively), the one developed by John and Prichep 20 using 65 unipolar and 32 bipolar patients (evidently replicated using a split half design), or something else entirely.

Related to difficulties of recording and analysis methodology are problems of clinical diagnosis. The major diagnostic systems (DSM, ICD) change over time, producing questionable concordance between, for example, schizophrenia patients diagnosed under DSM-II criteria and those diagnosed under DSM-III or DSM-IV criteria. Newer is not always better. Small et al. 119 found that EEG-based predictions were more accurate when based on DSM-I and DSM-II criteria than when based on Research Diagnostic or Feighner criteria using the same patients. In principle, the problem might be mitigated by obtaining enough information about the patients in a normative clinical group that rediagnosis would be possible as the diagnostic system evolves. A better solution in principle would be to assemble new normative clinical databases with each DSM or ICD revision, but doing so would be a difficult and time-consuming task. As with the evolving discriminant problem, the potential clinical qEEG user is left wondering whether the diagnostic criteria used to form the normative clinical databases match those currently used in practice. Disorders diagnosed on the basis of categories outside psychiatry, such as mild cognitive impairment, typically are not included in the Neurometric or Thatcher databases, but may be approximated with unknown accuracy by using the nearest neurometric equivalent of mild dementia. 15 , 88 , 96 , 97

Regardless of the diagnostic system used, it is important to ensure that the patients comprising a normative clinical group actually have the nominal disorder. Since diagnostic accuracy tends to increase as symptoms develop, there is a tendency for clinical normative groups to contain patients in whom the disorder is relatively advanced compared with the often ambiguous or equivocal individual patient being assessed. Ideally, the full spectrum of the disorder should be represented in each clinical normative database. Similarly, the normative databases typically contain only “pure” cases of the various disorders. Mixed cases are excluded by design. When mixed cases are run against the Neurometric discriminants (see Disorders of Childhood), they generally are classified less accurately than pure cases.

Another dimension of the problem derives from the fact that most patients for whom qEEG would be clinically appropriate have substantial histories of exposure to therapeutic and recreational drugs. In the former case medication records may be available, but in the latter case there is little that can be ascertained with any confidence. In practice, medication history is largely ignored in normative clinical data, a judgment defensible on the grounds that (a) any additional variability introduced into the normative data by medications will tend to promote false negative rather than false positive errors, and (b) most individual patients presenting for qEEG testing will be medicated. But the finding that psychotherapeutic medications tend to normalize brain activity complicates the issue, and there appears to be no simple solution.

Recreational drug use among psychiatric patients and their healthy counterparts is common, cryptic, and seemingly intractable. Although drug or alcohol “abuse” or “addiction” is a common exclusion criterion for healthy normative subjects, there is generally no serious attempt to screen out recreational drug or alcohol users. This may be for the best, since it produces a normative healthy database that is arguably more representative of the population, but it introduces a confound that is guaranteed to remain until the qEEG effects of different recreational drugs are studied in patient as well as healthy groups. The best place to study such effects may be in nations like Turkey, where unmedicated and never-medicated psychiatric patients are common 120 and recreational drug use is less widespread than in the United States and Western Europe.

Availability of Large Databases

Developing multivariate measures able to aid in detecting EEG abnormalities across the entire age span, or to help classify the wide range of patients seen by clinical psychiatrists into different diagnostic groups, is obviously a major undertaking, requiring literally thousands of high-quality EEG records obtained under standardized conditions from a wide range of well-diagnosed patients and healthy subjects. The most serious attempts at this have been made by E. Roy John and Robert Thatcher. Both John’s and Thatcher’s systems offer abnormality detection analogous to the univariate and multivariate methods described above, and the normative databases of both systems are well described in the literature. As mentioned previously, the Thatcher system is optimized for detecting brain damage arising from head injury (though its most recent version includes a learning disabilities discriminant), and John’s system is optimized for neuropsychiatric disorders. John’s Neurometric database is composed of healthy individuals from the United States but has been validated against healthy groups from Barbados, Sweden, Germany, Cuba, Mexico, and Venezuela. 19 , 128–130 The finding that healthy individuals from these different countries fall within normal limits when evaluated against the Neurometric norms implies strongly that the latter are free from significant ethnic or cultural bias. An interesting and potentially valuable aspect of the Neurometric system, particularly when applied to disorders of childhood, is that it distinguishes between a developmental delay (i.e., an abnormal qEEG that would be normal in a child of a younger chronological age) and a frankly abnormal qEEG (one that would be abnormal at any age). This is made possible through the use of statistical regression to control for age-related changes. Other qEEG analysis systems or databases are described sporadically in the literature 67 , 127 , 131–133 and some also are offered as commercial products, but these tend to be poorly validated, to rest on scanty databases, or to form part of “neurotherapy” (EEG biofeedback) systems. Descriptions and comparisons of these databases, some of which are only tangentially related to clinical psychiatry, have been published. 132 , 134–137 One intriguing exception appears to be the Brain Resource International Database (www.brainresource.com), which is still in its developmental infancy. Arising as part of the integrative neuroscience movement 136 and described recently by Gordon and Konopka, 138 this rapidly growing database presently contains qEEG data as well as ERP, structural and functional MRI, skin conductance, heart rate, respiration, neuropsychological, personality, genomic, and demographic data from 1,248 adults and 607 children. Data are contributed by over 50 labs in the United States, United Kingdom, Holland, South Africa, Israel, and Australia using identical equipment and techniques, and data from neurological and psychiatric patients are included. It does not, however, appear to have reached the level of development required for use as a routine clinical test in psychiatry.
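
The regression-based logic that separates a developmental delay from a frank abnormality can be illustrated with a minimal sketch (Python, synthetic data; the feature, sample, and cutoff are hypothetical and are not the published Neurometric developmental equations). A value is flagged as a delay if it is abnormal for the child's age but would be within normal limits at some younger age, and as frankly abnormal if it falls outside normal limits at every age:

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
ages = rng.uniform(6, 16, 400)                      # synthetic healthy normative sample
theta = 40 - 1.5 * ages + rng.normal(0, 3, 400)     # hypothetical feature declining with age

norm = LinearRegression().fit(ages.reshape(-1, 1), theta)
resid_sd = np.std(theta - norm.predict(ages.reshape(-1, 1)))

def classify(age, value, z_crit=2.0):
    z = (value - norm.predict([[age]])[0]) / resid_sd
    if abs(z) <= z_crit:
        return z, "within normal limits"
    # abnormal for chronological age: is it normal for ANY younger age?
    younger = np.arange(6.0, age, 0.25)
    z_young = (value - norm.predict(younger.reshape(-1, 1))) / resid_sd
    if np.any(np.abs(z_young) <= z_crit):
        return z, "developmental delay (normal for a younger age)"
    return z, "frankly abnormal (abnormal at any age)"

print(classify(age=12, value=30))   # hypothetical patient values
print(classify(age=12, value=55))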

Recommendations for Future Research and Development

Recommendations pertaining to specific areas of clinical concern are discussed under the relevant diagnostic categories above. More general recommendations are noted below.

Refinement and Adoption of Minimum Professional Standards

This appears to be a necessary first step if research results are to be accepted as evidence of qEEG’s efficacy, particularly by the neurological community. Duffy 9 and others have made a fine start and even Nuwer 10 seems to accept the standards that have been proposed. Central to this issue is the requirement that qEEG recording, analysis, and interpretation be done by trained individuals under the supervision of a qualified electroencephalographer. The vulnerability of advanced qEEG systems to distortion by recording artifacts, drowsiness, and medication effects demands a high standard of training and scrupulous adherence to recording and analysis protocols.

Determination of Optimal Number of Channels

The continuing proliferation of qEEG recording channels (increasingly dense montages) provides corresponding improvement in spatial resolution but, as mentioned above, this is of little value presently since the available qEEG normative databases are limited to about 20 channels. It is presently unknown whether increasing the number of channels in the normative databases will lead to better patient classification. This could be tested by replicating one of the more robust findings in the literature (e.g., the predictive differential classification of unipolar and bipolar patients) using a high-density montage (e.g., 256 channels). Discriminants based on the standard 20 channels of the International 10–20 system should yield classification accuracies approximating those in the literature. Data from additional channels then could be added (32, 64, 128, 256) and the classification accuracy reassessed, to determine the functional relationship between number of channels and classification accuracy. Such information should be used in the construction of any future qEEG normative databases.
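
A minimal sketch of such a channel-count experiment is given below in Python, using synthetic data and an assumed shrinkage linear discriminant; the real test would use an established discriminant and actual recordings, so the channel counts, feature values, and accuracies here are purely illustrative:

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(2)
n_per_group, n_channels, n_bands = 60, 256, 4
X_full = rng.normal(0, 1, (2 * n_per_group, n_channels, n_bands))
y = np.repeat([0, 1], n_per_group)                 # e.g., unipolar vs bipolar
X_full[y == 1, :20, :] += 0.4                      # synthetic group difference on the first 20 channels

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
clf = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
for n_ch in (20, 32, 64, 128, 256):
    X = X_full[:, :n_ch, :].reshape(len(y), -1)    # flatten channels x bands into features
    acc = cross_val_score(clf, X, y, cv=cv).mean()
    print(f"{n_ch:3d} channels: cross-validated accuracy = {acc:.2f}")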

Construction and Validation of Large Normative Databases and Multivariate Discriminants

Normative data should be collected in large multicenter studies to avoid regional biasing, and should be capable of being linked to other large databases as described by Gordon and Konopka. 138 Disorders that pose diagnostic difficulties, such as mild cognitive impairment and frontotemporal dementia, should be well represented, as should the full spectrum of severity within each recognized disorder. Sufficient raw data meeting strict methodological standards should be collected and saved so that, as new analysis methods become available and new ideas from the research literature come to the forefront, the measures can be computed from the original database 139 (Prichep, personal communication). Ideally, each normative database, whether composed of healthy subjects or of patients suffering from a specific disorder, would be made generally available to other researchers and clinicians after its publication. Realistically, this might require commercial involvement. Even with adoption of minimum professional standards (above), quality control would be a problem. The validation of qEEG discriminants, and more generally the need to move interesting qEEG ideas from research into clinical applications, similarly would benefit from multicenter studies. Ideally, each study site would incorporate a patient sample large enough to constitute a simultaneous independent replication.

Demonstration of Value Added

Studies are badly needed to assess the value added by qEEG when used in conjunction with standard methods of diagnosis, treatment choice, and prognosis. One measure might be a decreased cost of medication trials. Another might be an increased clinical yield of correct diagnoses. 140 Head-to-head comparisons with other diagnostic aids, such as SPECT or PET, would be particularly relevant, especially since the latter has received the blessing of insurance reimbursement. Perhaps the best context for such comparisons would be along the normal aging–mild cognitive impairment–dementia spectrum. PET and SPECT are both used in this context, 107–109 , 141 , 142 as is qEEG, 143 and the literature is growing rapidly. For all of these imaging modalities, the time- and labor-intensive aspects of data acquisition and analysis, the necessity to provide feedback to the referring physician in a manner that is clinically useful, and the often spotty insurance coverage make such real-world assessments important.
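
One simple form such a value-added analysis could take is sketched below on invented outcomes: the incremental yield of correct diagnoses when a qEEG result is added to the standard clinical workup, with a McNemar test on the discordant pairs. The error rates are illustrative assumptions, not data:

import numpy as np

rng = np.random.default_rng(3)
n = 200
truth = rng.integers(0, 2, n)                                    # gold-standard diagnosis
clinical = np.where(rng.random(n) < 0.75, truth, 1 - truth)      # assumed 75% correct alone
combined = np.where(rng.random(n) < 0.85, truth, 1 - truth)      # assumed 85% correct with qEEG

correct_clin = clinical == truth
correct_comb = combined == truth
print(f"clinical alone : {correct_clin.mean():.2%} correct")
print(f"clinical + qEEG: {correct_comb.mean():.2%} correct")
print(f"added yield    : {(correct_comb.mean() - correct_clin.mean()):.2%}")

# McNemar test on discordant pairs (continuity-corrected chi-square, 1 df)
b = np.sum(correct_clin & ~correct_comb)   # right without qEEG, wrong with it
c = np.sum(~correct_clin & correct_comb)   # wrong without qEEG, right with it
chi2 = (abs(b - c) - 1) ** 2 / (b + c)
print(f"McNemar chi-square = {chi2:.2f} on {b + c} discordant pairs")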

Subdivision of DSM Categories

One of the potential strengths of qEEG is its ability, through techniques such as cluster and factor analysis, to distinguish subgroups within a population. These subgroups are defined by differences in brain activity that correspond to differences in treatment response, symptom progression, or other clinically relevant variables (in factor-analytic variants), or are defined without a priori reference to any clinical variables (in cluster analysis). Prospectively, brain activity then is used to predict the subgroup membership of new patients. Specific studies might focus on practical clinical matters (e.g., using brain activity to predict optimal treatment) as well as on more theoretical issues (e.g., using brain activity to construct a neurobiological taxonomy to parallel the phenomenological and descriptive approaches).
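
A minimal sketch of the cluster-analytic half of this strategy is given below on synthetic qEEG feature vectors; the features, cluster count, and values are hypothetical, and a real study would validate the clusters against clinical variables:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# synthetic qEEG feature vectors (e.g., relative power in four bands) for one
# diagnostic category containing two latent physiological subtypes
subtype_a = rng.normal([0.2, 0.3, 0.3, 0.2], 0.05, (80, 4))
subtype_b = rng.normal([0.4, 0.2, 0.2, 0.2], 0.05, (60, 4))
X = np.vstack([subtype_a, subtype_b])

# clusters defined with no a priori reference to clinical variables
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("patients per cluster:", np.bincount(km.labels_))

# prospective assignment of a new patient to a subgroup
new_patient = np.array([[0.38, 0.21, 0.21, 0.20]])
print("new patient assigned to cluster", int(km.predict(new_patient)[0]))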

Activation Studies

Present qEEG systems are based almost exclusively on the resting eyes-closed EEG. Although an impressive amount of clinically relevant information can be extracted from this condition, several authors have suggested augmenting it with activation procedures. (See Barcelo and Gale 144 and Gordon 33 for fine conceptual discussions.) The idea is to devise a set of specific activation procedures (e.g., a Continuous Performance Test for suspected ADHD patients) to be used during qEEG recording in order to enhance patient-control differences and increase the test’s sensitivity to subtle early signs of the disorder. Also, by choosing activation procedures that elicit progressive changes during the course of a disorder, the patient may be staged along a spectrum of severity. This research would conveniently include qERPs. 46 But activation procedures are difficult to administer uniformly and are exquisitely sensitive to procedural details and covert patient strategies. Until robust, tightly protocol-driven activation procedures are developed for specific clinical conditions, qEEG will perforce continue to be based on the eyes-closed resting state.

Closing the Loop

Perhaps the greatest clinical value of qEEG will be shown by studies focusing on the relationship between pre-treatment (baseline) qEEG measures and ultimate treatment outcomes. In addition to techniques based on factor analysis, for which the clinical outcome must be known, some such studies undoubtedly will involve the extensive use of techniques such as cluster analysis to define physiologically distinct subtypes within diagnostic categories. Some of these clusters will be clinically meaningless, but others may correspond to clinically distinct responses to treatment. The practical goal of such studies will be the development of discriminants that can be used to predict the treatment possessing the greatest likelihood of success for individual patients.
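
The final step can be sketched with invented numbers: once baseline qEEG clusters have been defined (as in the sketch above), each cluster's observed response rate to each candidate treatment is tabulated, and a new patient's cluster membership points to the treatment with the greatest historical likelihood of success. The treatments, clusters, and rates below are hypothetical:

import numpy as np

# rows = baseline qEEG cluster, columns = treatment; entries = observed
# proportion of responders in prior outcome studies (hypothetical values)
treatments = ["drug A", "drug B", "drug C"]
response_rate = np.array([
    [0.70, 0.35, 0.40],    # cluster 0
    [0.30, 0.65, 0.45],    # cluster 1
    [0.40, 0.40, 0.60],    # cluster 2
])

def recommend(cluster: int) -> str:
    """Treatment with the greatest historical success rate for this cluster."""
    return treatments[int(np.argmax(response_rate[cluster]))]

print("new patient in cluster 1 ->", recommend(1))   # drug B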

CONCLUSIONS

Used cautiously and with appropriate recognition of its limitations, 9 qEEG offers the clinician an accurate laboratory test to aid in the detection and differential diagnosis of several common neuropsychiatric disorders. These include both disorders of childhood, such as learning disabilities and attention-deficit disorders, and those occurring primarily during adulthood, such as depressive, bipolar, and dementing disorders. Additional uses of qEEG showing promise but not yet sufficiently developed for routine clinical application include the prediction of medication efficacy and the prediction of the clinical course of a disorder.

Sidebar

EP and ERP studies

By far, the largest application of EP and ERP methods to clinical psychiatry centers on dementing disorders. For that reason they are discussed briefly here, though the techniques lie on the periphery of qEEG. Compared with the work being done with conventional and quantitative EEG, clinical application of sensory EPs and cognitive ERPs in dementia is in its infancy. Such work tends to concentrate on two types of signals: the P300 and the P2.

As pointed out by Barrett, 145 it has been a quarter century since Goodin et al. 146 reported the potential utility of the P300 for the diagnosis of dementia, finding that about 80% of demented patients have delayed auditory oddball P300s. Although initially it was believed that short-term memory deficits were responsible for the delay, normal oddball P300s have been reported in patients suffering from severe short-term memory impairment, and delayed auditory oddball P300s have been found in nondemented, unmedicated schizophrenia patients. 120 , 147 P300 latency also can be increased among healthy young individuals by increasing their cognitive workload during an auditory oddball task, 148 indicating that the P300 may reflect attentional as well as memory functions. Conversely, latencies of both auditory and visual oddball P300s may be decreased by aerobic exercise, 149 pointing to an influence of physiological arousal as well. Indeed, the multiplicity of factors affecting P300 latency may explain its high variability in control as well as in patient data. Taken as a whole, however, the literature shows that oddball P300s tend to be delayed among demented patient groups regardless of etiology, and that the delay tends to increase over time in rough parallel with clinical deterioration. 150 But any clinical utility of such delays is compromised by a lack of sensitivity to the early stages of dementia. By the time the oddball P300 becomes significantly delayed, the patient is already showing clear dementia symptoms. Barrett 145 points out that P300 and other cognitive ERP studies have not led to accurate clinical predictions, in contrast to the long history of successful applications of sensory EPs in clinical diagnosis. Barrett cogently recommends that cognitive tasks showing early and marked impairment in dementing disorders be substituted for the oddball task to elicit P300s. Until it is possible to gain better experimental control over the P300 along the lines suggested by Polich 150 and Barrett, it is doubtful that the P300 will yield clinically useful information about individual patients.

Most studies of the flash visual evoked potential (VEP) in Alzheimer’s disease find that, compared to groups of age-appropriate controls, groups of Alzheimer’s disease subjects manifest a delayed P2 component. 151 , 152 The P2 delay generally is found to be selective, in that earlier components, such as the flash P1 or the pattern reversal P100, are unaffected. (For historical reasons flash VEP components are denoted by the older polarity-wave number convention, while pattern reversal VEP components are specified using the more current polarity-latency convention.) Both the flash P1 and the pattern reversal P100 have long been known to be generated by the sparsely cholinergic primary visual cortex, 153–155 which remains relatively intact in Alzheimer’s disease. The flash P2, by contrast, derives from the richly cholinergic circumstriate visual association cortex, 153 , 156–159 which undergoes marked progressive deterioration in this disease (see Coburn et al. 73 for fuller discussion). Groups of patients whose dementias stem from etiologies other than Alzheimer’s disease generally do not show the selective P2 delay. 160–162 However, among Alzheimer’s disease patients the delay increases over time in parallel with the severity of dementia symptoms, 163–165 and among healthy subjects it can be produced de novo by cholinergic suppression. 162 , 166 , 167 (See Coburn et al. 73 for additional examples.) More importantly, the selective P2 delay has been reported to be pathognomonic for Alzheimer’s disease, and several authors 152 , 168 have called for its evaluation as a diagnostic tool. Such an evaluation has recently been completed. 38 P2 data recorded from Alzheimer’s disease patients and healthy subjects were analyzed using several techniques, yielding highly significant between-group differences but individual diagnostic accuracies of only 62% (sensitivity 80%; specificity 53%; ROC area 0.659) to 68% (sensitivity 60%; specificity 75%; ROC area 0.694). These were felt to be too low to add meaningful information to the McKhann diagnostic process or to substitute for the complete diagnostic workup. However, it must be noted that the stimulation, measurement, and analytical techniques used were deliberately restricted to the brightest flash, the simplest latency measures, and univariate data analyses in order to be suitable for wide application in clinical laboratories. A follow-up study to find the optimal stimulus and recording parameters for the P2 has been completed recently, 169 but additional research is badly needed to identify the analytical technique yielding the best discrimination between individual subjects. Only then will it be known whether the robust between-group differences can be translated into clinically useful between-subject differences.
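
For readers unfamiliar with these accuracy metrics, the following minimal sketch (Python, synthetic P2 latencies; the distributions and cutoff are invented and are not the published data) shows how a single latency criterion yields sensitivity, specificity, and overall accuracy, and how the ROC area summarizes performance across all possible criteria:

import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
p2_patients = rng.normal(230, 25, 40)     # ms, synthetic Alzheimer's group (delayed)
p2_controls = rng.normal(205, 25, 60)     # ms, synthetic healthy group

latency = np.concatenate([p2_patients, p2_controls])
is_patient = np.concatenate([np.ones(40), np.zeros(60)])

cutoff = 220.0                            # hypothetical "delayed P2" criterion in ms
sensitivity = np.mean(p2_patients > cutoff)
specificity = np.mean(p2_controls <= cutoff)
accuracy = np.mean((latency > cutoff) == is_patient)
auc = roc_auc_score(is_patient, latency)  # longer latency treated as more abnormal

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"accuracy {accuracy:.2f}, ROC area {auc:.3f}")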

The P300 ERP and P2 EP components have not yet generated the evidence base necessary to be included among clinically applicable procedures, though the goal of a positive laboratory test for dementia in general (P300) or Alzheimer’s disease in particular (P2) continues to motivate research.

Received and accepted March 3, 2006. Dr. Coburn is affiliated with the Department of Psychiatry and Behavioral Sciences, Mercer University School of Medicine, Macon, Georgia. Dr. Lauterbach is affiliated with the Division of Adult and Geriatric Psychiatry, Mercer University School of Medicine, Macon, Georgia. Dr. Boutros is affiliated with the Department of Psychiatry and Neurology, Wayne State University, School of Medicine, Detroit, Michigan. Dr. Black is affiliated with the Department of Psychiatry, Neurology, and Radiology, Washington University School of Medicine, St. Louis, Missouri. Dr. Arciniegas is affiliated with the Department of Psychiatry and Neurology, University of Colorado School of Medicine, Denver, Colorado. Dr. Coffey is affiliated with the Henry Ford Health System, Detroit, Michigan. Address correspondence to Dr. Coburn, 655 First St., Macon, GA 31201; [email protected] (E-mail).

Copyright © 2006 American Psychiatric Publishing, Inc.

REFERENCES

1. Bauer L, Hesselbrock V: EEG autonomic and subjective correlates of the risk for alcoholism. J Stud Alcohol 1994; 54:577–589Google Scholar

2. Alper K, Chabot R, Kim A, et al: Quantitative EEG correlates of crack cocaine dependence. Psychiatr Res Neuroimaging 1990; 35:95–106Google Scholar

3. Struve F, Straumakis J, Patrick G: Persistent topographic quantitative EEG sequelae of chronic marijuana use: A replication study and initial discriminant function analysis. Clin Electroencephalogr 1994; 25:63–75Google Scholar

4. Chabot RJ, di Michele F, Prichep LS: The role of quantitative electroencephalography in child and adolescent psychiatric disorders. Child Adolesc Psychiatr Clin N Am 2005; 14:21–53Google Scholar

5. Boutros NN: A review of indications for routine EEG in clinical psychiatry. Hosp Community Psychiatry 1992; 43:716–719Google Scholar

6. Hughes J: The EEG in Psychiatry: an outline with summarized points and references. Clin Electroencephalogr 1995; 26:92–101Google Scholar

7. Hughes JR, John ER: Conventional and quantitative electroencephalography in psychiatry. J Neuropsychiatry Clin Neurosci 1999; 11:190–208Google Scholar

8. Small J: Psychiatric disorders and EEG, in Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Edited by Niedermeyer E, Lopes da Silva F. Baltimore, Williams and Wilkins, 1993, pp 581–596Google Scholar

9. Duffy FH, Hughes JR, Miranda F, et al: Status of quantitative EEG (qEEG) in clinical practice 1994. Clin Electroencephalogr 1994; 25: VI–XXIIGoogle Scholar

10. Nuwer M: Assessment of digital EEG, quantitative EEG, and EEG brain mapping: report of the American Academy of Neurology and the American Clinical Neurophysiology Society. Neurology 1997; 49:277–292Google Scholar

11. Coburn KL, Moreno MA: Facts and artifacts in brain electrical activity mapping. Brain Topogr 1988; 1:37–45Google Scholar

12. Moreno MA, Coburn KL: The aliasing artifact in topographic brain activity mapping. Midliner 1987; 9:3–4Google Scholar

13. Kondacs A, Szabo M: Long-term intra-individual variability of the background EEG in normals. Clin Neurophysiol 1999; 110:1708–1716Google Scholar

14. Silverman JS, Loychic SG: Brain-mapping abnormalities in a family with three obsessive compulsive children. J Neuropsychiatry Clin Neurosci 1990; 2:319–322Google Scholar

15. John ER, Prichep L, Ahn H, et al: Neurometric evaluation of cognitive dysfunctions and neurological disorders in children. Prog Neurobiol 1983; 21:239–290Google Scholar

16. Prichep LS, John ER: qEEG profiles of psychiatric disorders. Brain Topogr 1992; 4:249–257Google Scholar

17. Afifi A, Clark VA, May S: Computer-Aided Multivariate Analysis. Boca Raton, Chapman Hall/CRC, 2004Google Scholar

18. John ER, Karmel BZ, Corning WC, et al: Neurometrics: numerical taxonomy identifies different profiles of brain functions within groups of behaviorally similar people. Science 1977; 196:1383–1410Google Scholar

19. John E, Prichep L, Friedman J, et al: Neurometrics: computer-assisted differential diagnosis of brain dysfunctions. Science 1988; 239:162–169Google Scholar

20. John ER, Prichep LS: Principles of neurometric analysis of EEG and evoked potentials, in EEG: Basic Principles, Clinical Applications, and Related Fields. Edited by Niedermeyer E, Lopes da Silva F. Baltimore, Williams and Wilkins, 1993, pp 989–1003Google Scholar

21. Thatcher RW, Cantor DS, McAlaster R, et al: Comprehensive predictions of outcome in closed head-injured patients: the development of prognostic equations. Ann N Y Acad Sci 1983; 82–101Google Scholar

22. Thatcher RW, Walker RA, Gerson I, et al: EEG discriminant analyses of mild head trauma. Electroencephalogr Clin Neurophysiol 1989; 73:94–106Google Scholar

23. Thatcher RW, North DM, Curtin RT, et al: An EEG severity index of traumatic brain injury. J Neuropsychiatry Clin Neurosci 2001; 13:77–87Google Scholar

24. John E, Prichep L, Almas M: Toward a quantitative electrophysiological classification system in psychiatry, in Biological Psychiatry. Edited by Racaga G, Brunello N, Fukuda T. New York, Excerpta Medica 1991, pp 401–406Google Scholar

25. Prichep LS, Mas F, Hollander E, et al: Quantitative electroencephalographic subtyping of obsessive-compulsive disorder. Psychiatr Res 1993; 50:25–32Google Scholar

26. Hansen ES, Prichep LS, Bolwig TG, et al: Quantitative electroencephalography in OCD patients treated with paroxetine. Clin Electroencephalogr 2003; 34:70–74Google Scholar

27. Thatcher RW, North D, Biver C: Evaluation and validity of a LORETA normative EEG database. Clin EEG Neurosci 2005; 36:116–122Google Scholar

28. Saletu B, Anderer P, Saletu-Zyhlarz GM, et al: EEG mapping and low-resolution electromagnetic topography (LORETA) in diagnosis and therapy of psychiatric disorders: evidence for a key-lock principle. Clin EEG Neurosci 2005; 36:108–115Google Scholar

29. Fisch BL, Pedley TA: The role of quantitative topographic mapping or “neurometrics” in the diagnosis of psychiatric and neurological disorders: the cons. Electroencephalogr Clin Neurophysiol 1989; 73:5–9Google Scholar

30. Hoffman DA, Lubar JF, Thatcher RW, et al: Limitations of the American Academy of Neurology and American Clinical Neurophysiology Society paper on qEEG. J Neuropsychiatry Clin Neurosci 1999; 11:401–407Google Scholar

31. Thatcher RW, Moore N, John ER, et al: qEEG and traumatic brain injury: rebuttal of the American Academy of Neurology 1997 Report by the EEG and Clinical Neuroscience Society. Clin Electroencephalogr 1999; 30:94–98Google Scholar

32. Thatcher RW, Biver CJ, North DM: Quantitative EEG and the Frye and Daubert standards of admissibility. Clin Electroencephalogr 2003; 34:39–53Google Scholar

33. Gordon E: Brain imaging technologies: how, what, when, and why? Aust N Z J Psychiatry 1999; 33:187–196Google Scholar

34. Kaiser DA: QEEG: state of the art, or state of confusion. J Neurother 2001; 4:57–75Google Scholar

35. Gattaz WF, Mayer S, Ziegler P, et al: Hypofrontality on topographic EEG in schizophrenia: correlations with neuropsychological and psychopathological parameters. Eur Arch Psychiatry Clin Neurosci 1992; 241:328–332Google Scholar

36. Passero S, Rocchi R, Vatti G, et al: Quantitative EEG mapping, regional cerebral blood flow, and neuropsychological function in Alzheimer disease. Dementia 1995; 6:148–156Google Scholar

37. Nofzinger EA, Price JC, Meltzer CC, et al: Toward a neurobiology of dysfunctional arousal in depression: the relationship between beta EEG power and regional cerebral glucose metabolism during NREM sleep. Psychiat Res Neuroimag 2000; 98:71–91Google Scholar

38. Coburn KL, Arruda JE, Estes KM, et al: Diagnostic utility of VEP changes in Alzheimer’s Disease. J Neuropsychiat Clin Neurosci 2003; 15:175–179Google Scholar

39. Knott V, Bakish D, Lusk S, et al: Quantitative EEG correlates of panic disorder. Psychiat Res 1996; 68:31–39Google Scholar

40. Ford MR, Goethe JW, Dekker DK: EEG coherence and power in the discrimination of psychiatric disorders and medication effects. Biol Psychiatry 1986; 21:1175–1188Google Scholar

41. Holschneider DP, Leuchter AF: Clinical neurophysiology using electroencephalography in geriatric psychiatry: neurobiologic implications and clinical utility. J Geriatr Psychiatr Neurol 1999; 12:150–164Google Scholar

42. Prichep LS, John ER, Essig-Peppard T, et al: Neurometric subtyping of depressive disorders, in Plasticity and Morphology of the Central Nervous System. Edited by Cazzullo CL, Sacchetti E, Conte G, et al. London, Kluwer Academic Publishers, 1990, pp. 95–107Google Scholar

43. Nuwer MR: Clinical use of QEEG. Clin Neurophysiol 2003; 114:2225Google Scholar

44. Nuwer MR: Quantitative Electroencephalography, in Current Practice of Clinical Electroencephalography. Edited by Ebersole JS, Pedley TS. Philadelphia, Lippincott Williams Wilkins, 2003, pp 753–760Google Scholar

45. Abt K: Significance testing of many variables: problems and solutions. Neuropsychobiology 1983; 9:47–51Google Scholar

46. Duffy F: Long latency evoked potential database for clinical applications: justification and examples. Clin EEG Neurosci 2005; 36:88–98Google Scholar

47. Hooshmand H, Beckner E, Radfar F: Technical and clinical aspects of topographic brain mapping. Clin Electroencephalogr 1989; 20:235–247Google Scholar

48. Ritchlin CT, Chabot RJ, Alper K, et al: Quantitative electroencephalography: a new approach to the diagnosis of cerebral dysfunction in systemic lupus erythematosus. Arth Rheumat 1992; 35:1330–1342Google Scholar

49. Sloan EP, Fenton GW, Kennedy JSJ, et al: Electroencephalography and single photon emission computed tomography in dementia: a comparative study. Psychol Med 1995; 25:631–638Google Scholar

50. Reid MC, Lachs MS, Feinstein AR: Use of methodological standards in diagnostic test research: getting better but still not good. J Am Med Assoc 1995; 274:645–651Google Scholar

51. Chabot RJ, di Michele F, Prichep L, et al: The clinical role of computerized EEG in the evaluation and treatment of learning and attention disorders in children and adolescents. J Neuropsychiatry Clin Neurosci 2001; 13:171–186Google Scholar

52. Becker J, Velasco M, Harmony T: Electroencephalographic characteristics of children with learning disabilities. Clin Electroencephalogr 1987; 18:93–101Google Scholar

53. Chabot RJ, Orgill AA, Crawford G, et al: Behavioral and electrophysiologic predictors of treatment response to stimulants in children with attention disorders. J Child Neurol 1999; 14:343–351Google Scholar

54. Clarke AR, Barry RJ, McCarthy R, et al: EEG analysis in Attention-Deficit/Hyperactivity Disorder: a comparative study of two subtypes. Psychiat Res 1998; 81:19–29Google Scholar

55. Clarke AR, Barry RJ, McCarthy R, et al: EEG differences in two subtypes of Attention-Deficit/Hyperactivity Disorder. Psychophysiology 2001a; 38:212–221Google Scholar

56. Clarke AR, Barry RJ, McCarthy R, et al: Age and sex effects in the EEG: differences in two subtypes of attention-deficit/hyperactivity disorder. Clin Neurophysiol 2001b; 112:815–826Google Scholar

57. Clarke AR, Barry RJ, McCarthy R, et al: Excess beta activity in children with attention-deficit/hyperactivity disorder: an atypical electrophysiological group. Psychiat Res 2001c; 103:205–218Google Scholar

58. Harmony T, Hinojosa G, Marosi E, et al: Correlation between EEG spectral parameters and an educational evaluation. Int J Neurosci 1990; 54:147–155Google Scholar

59. Hughes JR: Electroencephalography of learning disabilities, in Progress in Learning Disabilities, vol 2. Edited by Myklebust HR. New York, Grune Stratton, 1971, pp. 18–55Google Scholar

60. Kaye H, John ER, Ahn H, et al: Neurometric evaluation of learning disabled children. Int J Neurosci 1981; 13:15–25Google Scholar

61. Ahn H, Prichep L, John ER, et al: Developmental equations reflect brain dysfunction. Science 1980; 210:1259–1262Google Scholar

62. Lubar JF, Bianchini KJ, Calhoun WH, et al: Spectral analysis of EEG differences between children with and without learning disabilities. J Learn Disabil 1985; 18:403–408Google Scholar

63. Yingling CD, Galin D, Fein G, et al: Neurometrics does not detect “pure” dyslexics. Electroencephalogr Clin Neurophysiol 1986; 63:426–430Google Scholar

64. Fein G, Galin D, Yingling CD, et al: EEG spectra in dyslexic and control boys during resting conditions. Electroencephalogr Clin Neurophysiol 1986; 63:87–97Google Scholar

65. Diaz de Leon AE, Harmony T, Marosi E, et al: Effects of different factors on EEG spectral parameters. Int J Neurosci 1988; 43 (1–2):123–131Google Scholar

66. Flynn JM, Deering WM: Subtypes of dyslexia: investigation of Boder’s system using quantitative neurophysiology. Develop Med Child Neurol 1989; 31:215–223Google Scholar

67. Matsuura M, Okubo Y, Toru M, et al: A cross-national EEG study of children with emotional and behavioral problems: a WHO collaborative study in the Western Pacific Region. Biol Psychiat 1993; 34:59–65Google Scholar

68. Mann CA, Lubar JF, Zimmermann AW, et al: Quantitative analysis of EEG in boys with attention-deficit hyperactivity disorder: controlled study with clinical implications. Pediatr Neurol 1992; 8:30–36Google Scholar

69. Monastra VJ, Lubar JF, Linden M, et al: Assessing attention deficit hyperactivity disorder via quantitative electroencephalography: an initial validation study. Neuropsychology 1999; 13:424–433Google Scholar

70. Chabot RJ, Serfontein G: Quantitative electroencephalographic profiles of children with attention deficit disorder. Biol Psychiatry 1996; 40:951–963Google Scholar

71. Chabot RJ, Merkin H, Wood LM, et al: Sensitivity and specificity of QEEG in children with attention deficit or specific developmental learning disorders. Clin Electroencephalogr 1996; 27:26–34Google Scholar

72. Prichep L: Neurometric studies of methylphenidate responders and non-responders, in Perspectives on Dyslexia, vol 1. Edited by Pavlidis G. New York, John Wiley and Sons 1990, pp 133–139Google Scholar

73. Coburn KL, Parks RW, Pritchard WS: Electrophysiological indexes of cortical deterioration and cognitive impairment in dementia, in Neuropsychology of Alzheimer’s Disease and Other Dementias. Edited by Parks RW, Zec RF, Wilson RS. New York, Oxford University Press, 1993, pp 511–533Google Scholar

74. Richards M, Folstein M, Albert M, et al: Multicenter study of predictors of disease course in Alzheimer disease (the “predictors study”), II: neurological, psychiatric and demographic influences on baseline measures of disease severity. Alzheimer Dis Assoc Disord 1993; 7:22–32Google Scholar

75. Ashford JW, Rosenblatt MJ, Bekian C, et al: The complete dementia evaluation: complications and complexities. Am J Alz Care Res 1987; 2:9–15Google Scholar

76. Snowdon DA, Greiner LH, Mortimer JA, et al: Brain infarction and the clinical expression of Alzheimer disease. The Nun Study. J Amer Med Assoc 1997; 277:813–817Google Scholar

77. Ettlin TM, Staehelin HB, Kischka U, et al: Computed tomography, electroencephalography, and clinical features in the differential diagnosis of senile dementia: a prospective clinicopathologic study. Arch Neurol 1989; 46:1217–1220Google Scholar

78. Lindau M, Jelic V, Johansson SE, et al: Quantitative EEG abnormalities and cognitive dysfunctions in frontotemporal dementia and Alzheimer disease. Dement Geriatr Cogn Disord 2003; 15:106–114Google Scholar

79. Mendez MF, Selwood A, Mastri AR, et al: Pick’s disease versus Alzheimer disease: a comparison of clinical characteristics. Neurology 1993; 43:289–292Google Scholar

80. Coben LA, Danziger W, Storandt M: A longitudinal EEG study of mild senile dementia of Alzheimer type: changes at 1 year and at 2.5 years. Electroencephalogr Clin Neurophysiol 1985; 61:101–112Google Scholar

81. Helkala EL, Laulumaa V, Soikkeli R, et al: Slow wave activity in the spectral analysis of the EEG is associated with cortical dysfunctions in patients with Alzheimer disease. Behav Neurosci 1991; 105:409–415Google Scholar

82. Hier DB, Mangone CA, Ganellen R, et al: Quantitative measurement of delta activity in Alzheimer disease. Clin Electroencephalogr 1991; 22:178–182Google Scholar

83. Williamson PC, Merskey H, Morrison S, et al: Quantitative electroencephalographic correlates of cognitive decline in normal elderly subjects. Arch Neurol 1990; 47:1185–1188Google Scholar

84. Hooijer C, Jonker C, Posthuma J, et al: Reliability, validity and follow-up of the EEG in senile dementia: sequelae of sequential measurement. Electroencephalogr Clin Neurophysiol 1990; 76:400–412Google Scholar

85. Rae-Grant A, Blume W, Lau C, et al: The EEG in Alzheimer-type dementia: a sequential study correlating the EEG with psychometric and quantitative pathologic data. Arch Neurol 1987; 44:50–54Google Scholar

86. Prichep LS, John ER, Ferris SH, et al: Quantitative EEG correlates of cognitive deterioration in the elderly. Neurobiol Aging 1994; 15:85–90Google Scholar

87. Huang C, Wahlund LO, Dierks T, et al: Discrimination of Alzheimer disease and mild cognitive impairment by equivalent EEG sources: a cross-sectional and longitudinal study. Clin Neurophysiol 2000; 111:1961–1967Google Scholar

88. Prinz PN, Vitiello MV: Dominant occipital (alpha) rhythm frequency in early stage Alzheimer disease and depression. Electroencephalogr Clin Neurophysiol 1989; 73:427–432Google Scholar

89. Robinson DJ, Merskey H, Blume WT, et al: Electroencephalography as an aid in exclusion of Alzheimer disease. Arch Neurol 1994; 51:280–284Google Scholar

90. Yener GG, Leuchter AF, Jenden D, et al: Quantitative EEG in frontotemporal dementia. Clin Electroencephalogr 1996; 27:61–68Google Scholar

91. Mody CK, McIntyre HB, Miller BL, et al: Computerized EEG frequency analysis and topographic brain mapping in Alzheimer disease. Ann N Y Acad Sci 1991; 620:45–56Google Scholar

92. Duffy FH, Albert MS, McAnulty G: Brain electrical activity in patients with presenile and senile dementia of the Alzheimer type. Ann Neurol 1984; 16:439–448Google Scholar

93. Besthorn C, Forstl H, Geiger-Kabisch C: EEG coherence in Alzheimer disease. Electroencephalogr Clin Neurophysiol 1994; 90:242–245Google Scholar

94. Schreiter-Gasser U, Gasser T, Ziegler P: Quantitative EEG analysis in early onset Alzheimer’s disease: a controlled study. Electroencephalogr Clin Neurophysiol 1993; 86:15–22Google Scholar

95. Prichep LS: Use of normative databases and statistical methods in demonstrating clinical utility of QEEG: importance and cautions. Clin Electroencephalogr Neurosci 2005; 36:82–87Google Scholar

96. Brenner RP, Ulrich RF, Spiker DG, et al: Computerized EEG spectral analysis in elderly normal, demented and depressed subjects. Electroencephalogr Clin Neurophysiol 1986; 64:483–492Google Scholar

97. Coben LA, Chi D, Snyder AZ, et al: Replication of a study of frequency of the resting awake EEG in mild probable Alzheimer disease. Electroencephalogr Clin Neurophysiol 1990; 75:148–154Google Scholar

98. Leuchter AF, Spar JE, Walter DO, et al: Electroencephalographic spectra and coherence in the diagnosis of Alzheimer’s type and multi-infarct dementia. Arch Gen Psychiatry 1987; 44:993–998Google Scholar

99. Leuchter AF, Cook IA, Newton TF, et al: Regional differences in brain electrical activity in dementia: use of spectral power and spectral ratio measures. Electroencephalogr Clin Neurophysiol 1993; 87:385–393Google Scholar

100. Streletz LJ, Reyes PF, Zolewska M, et al: Computer analysis of EEG activity in dementia of the Alzheimer type and Huntington’s disease. Neurobiol Aging 1990; 11:15–20Google Scholar

101. Anderer P, Saletu B, Kloppel B, et al: Discrimination between demented patients and normals based on topographic EEG slow wave activity: comparison between z statistics, discriminant analysis and artificial neural network classifiers. Electroencephalogr Clin Neurophysiol 1994; 91:108–117Google Scholar

102. Swets JA: Measuring the accuracy of diagnostic systems. Science 1988; 240:1285–1293Google Scholar

103. Pritchard WS, Duke DW, Coburn KL, et al: EEG-based, neural-net predictive classification of Alzheimer disease versus control subjects is augmented by nonlinear EEG measures. Electroencephalogr Clin Neurophysiol 1994; 91:118–130Google Scholar

104. Jacobson SA, Leuchter AF, Walter DO: Conventional and quantitative EEG in the diagnosis of delirium among the elderly. J Neurol Neurosurg Psychiatry 1993; 56:153–158Google Scholar

105. O’Connor KP, Shaw JC, Ongley CO: The EEG and differential diagnosis in psychogeriatrics. Br J Psychiatry 1979; 135:156–162Google Scholar

106. Deslandes A, Veiga H, Cagy M, et al: Quantitative electroencephalography (QEEG) to discriminate primary degenerative dementia from major depressive disorder (depression). Arq Neuropsiquiatr 2004; 62:44–50Google Scholar

107. Cabranes JA, De Juan R, Encinas M, et al: Relevance of functional neuroimaging in the progression of mild cognitive impairment. Neurol Res 2004; 26:496–501Google Scholar

108. Encinas M, De Juan R, Marcos A, et al: Regional cerebral blood flow assessed with technetium-99m-ECD SPET as a marker of progression of mild cognitive impairment to Alzheimer disease. Eur J Nucl Med Mol Imaging 2003; 30:1473–1480Google Scholar

109. Huang C, Wahlund LO, Svensson L, et al: Cingulate cortex hypoperfusion predicts Alzheimer disease in mild cognitive impairment. BMC Neurol 2002; 2:9Google Scholar

110. Leuchter AF, Newton TF, Cook IA, et al: Changes in brain functional connectivity in Alzheimer type and multi-infarct dementia. Brain 1992; 115:1543–1561Google Scholar

111. Soininen H, Partanen J, Laulumaa V, et al: Serial EEG in Alzheimer disease: 3 year follow-up and clinical outcome. Electroencephalogr Clin Neurophysiol 1991; 79:342–348Google Scholar

112. Rodriguez G, Nobili F, Arrigo A, et al: Prognostic significance of quantitative electroencephalography in Alzheimer patients: preliminary observations. Electroencephalogr Clin Neurophysiol 1996; 99:123–128Google Scholar

113. Berg L, Danziger WL, Storandt M, et al: Predictive features in mild senile dementia of the Alzheimer type. Neurology 1984; 34:563–569Google Scholar

114. Pollock VE, Schneider LS: Quantitative, waking EEG research on depression. Biol Psychiatry 1990; 27:757–780Google Scholar

115. Prichep LS, John ER: Neurometrics: clinical applications, in Clinical Applications of Computer Analysis of EEG and Other Neurophysiological Variables, vol 2. Handbook of Electroencephalography and Clinical Neurophysiology. Edited by Lopes da Silva FH, van Leeuwen WS, Remond A. Amsterdam, Elsevier, 1986, pp 153–170Google Scholar

116. Prichep LS, Alper KR, Kowalik S, et al: Quantitative electroencephalographic characteristics of crack cocaine dependence. Biol Psychiatry 1996; 40:986–993Google Scholar

117. Suffin SC, Emory WH: Neurometric subgroups in attentional and affective disorders and their association with pharmacotherapeutic outcome. Clin Electroencephalogr 1995; 26:76–83Google Scholar

118. Abraham HD, Duffy FH: Computed EEG abnormalities in panic disorder with and without premorbid drug abuse. Biol Psychiatry 1991; 29:687–690Google Scholar

119. Small JG, Milstein V, Sharpley PH, et al: Electroencephalographic findings in relation to diagnostic constructs in psychiatry. Biol Psychiatry 1984; 19:471–487Google Scholar

120. Gonul AS, Suer C, Coburn KL, et al: Effects of olanzapine on the auditory P300 in schizophrenia. Prog Neuropsychopharmacol Biol Psychiatry 2003; 27:173–177Google Scholar

121. Shagass C: Twisted thoughts, twisted brain waves? in Psychopathology and Brain Dysfunction. Edited by Shagass C, Gershon S, Friedhoff AJ. New York, Raven Press, 1977, pp 353–378Google Scholar

122. John ER, Prichep LS, Alper KR, et al: Quantitative electrophysiological characteristics and subtyping of schizophrenia. Biol Psychiatry 1994; 36:801–826Google Scholar

123. Moore NC, Tucker KA, Brin FB, et al: Positive symptoms of schizophrenia: response to haloperidol and remoxipride is associated with increased alpha EEG activity. Hum Psychopharmacol 1997; 12:75–80Google Scholar

124. Czobor P, Volavka J: Pretreatment EEG predicts short-term response to haloperidol treatment. Biol Psychiatry 1991; 30:927–942Google Scholar

125. Czobor P, Volavka J: Quantitative EEG effect of risperidone in schizophrenic patients. J Clin Psychopharmacol 1993; 13:332–342Google Scholar

126. Galderisi S, Maj M, Mucci A, et al: QEEG alpha 1 changes after a single dose of high-potency neuroleptics as a predictor of short-term response to treatment in schizophrenic patients. Biol Psychiatry 1994; 35:367–374Google Scholar

127. Coutin-Churchman P, Anez Y, Uzcategui M, et al: Quantitative spectral analysis of EEG in psychiatry revisited: drawing signs out of numbers in a clinical setting. Clin Neurophysiol 2003; 114:2294–2306Google Scholar

128. Alvarez A, Valdez P, Pascual R: EEG developmental equations confirmed for Cuban school children. Electroencephalogr Clin Neurophysiol 1987; 67:330–332Google Scholar

129. John ER, Ahn H, Prichep L, et al: Developmental equations for the EEG. Science 1980; 210:1255–1258Google Scholar

130. Harmony T, Alvarez A, Pascual R, et al: EEG maturation of children with different economic and psychosocial characteristics. Int J Neurosci 1988; 41:103–113Google Scholar

131. Johnstone J, Gunkelman J: Use of databases in QEEG evaluation. J Neurother 2003; 7:31–52Google Scholar

132. Lorensen TD, Dickson P: Quantitative EEG databases: a comparative investigation. J Neurother 2004; 8:53–68Google Scholar

133. Sterman MB: Basic concepts and clinical findings in the treatment of seizure disorders with EEG operant conditioning. Clin Electroencephalogr 2000; 31:45–55Google Scholar

134. Hunter M, Smith RLL, Hyslop W, et al: The Australian EEG database. Clin Electroencephalogr Neurosci 2005; 36:76–81Google Scholar

135. Johnstone J, Gunkelman J, Lunt J: Clinical database development: characterization of EEG phenotypes. Clin Electroencephalogr Neurosci 2005; 36:99–107Google Scholar

136. Koslow SH: Discovery and integrative neuroscience. Clin EEG Neurosci 2005; 36:55–63Google Scholar

137. Thatcher RW: Normative EEG databases and EEG biofeedback. J Neurother 1998; 4:(http://www.snr-jnt.org/journalnt/jnt(2-4)3.html)Google Scholar

138. Gordon E, Konopka LM: EEG databases in research and clinical practice: current status and future directions. Clin Electroencephalogr Neurosci 2005; 36:53–54Google Scholar

139. Blinowska KJ, Durka PJ: Efficient application of Internet databases for new signal processing methods. Clin EEG Neurosci 2005; 36:123–130Google Scholar

140. Jonkman EJ, Poortvliet DCJ, Veering MM, et al: The use of neurometrics in the study of patients with cerebral ischemia. Electroencephalogr Clin Neurophysiol 1985; 61:333–341Google Scholar

141. Chetelat G, Eustache F, Viader F, et al: FDG-PET measurement is more accurate than neuropsychological assessments to predict global cognitive deterioration in patients with mild cognitive impairment. Neurocase 2005; 11:14–25Google Scholar

142. Wolf H, Jelic V, Gertz HJ, et al: A critical discussion of the role of neuroimaging in mild cognitive impairment. Acta Neurolog Scand Suppl 2003; 108:68Google Scholar

143. Jelic V, Johansson SE, Almkvist O, et al: Quantitative electroencephalography in mild cognitive impairment: longitudinal changes and possible prediction of Alzheimer disease. Neurobiol Aging 2000; 21:533–540Google Scholar

144. Barcelo F, Gale A: Electrophysiological measures of cognition in biological psychiatry: some cautionary notes. Int J Neurosci 1997; 92:219–240Google Scholar

145. Barrett G: Clinical application of event-related potentials in dementing illness: issues and problems. Int J Psychophysiol 2000; 37:49–53Google Scholar

146. Goodin DS, Squires KC, Starr A: Long latency event-related components of the auditory evoked potential in dementia. Brain 1978; 101:635–648Google Scholar

147. Coburn KL, Shillcutt SD, Tucker KA, et al: P300 delay and attenuation in schizophrenia: reversal by neuroleptic medication. Biol Psychiatry 1998; 44:476–484Google Scholar

148. Jocoy EL, Arruda JE, Estes KM, et al: Concurrent visual task effects on evoked and emitted auditory P300 in adolescents. Int J Psychophysiol 1998; 30:319–328Google Scholar

149. Yagi Y, Coburn KL, Estes KM, et al: Effects of aerobic exercise and gender on visual and auditory P300, reaction time, and accuracy. Europ J Appl Physiol 1999; 80:402–408Google Scholar

150. Polich J: P300 clinical utility and control of variability. J Clin Neurophysiol 1998; 15:14–33Google Scholar

151. Ponomareva NV, Fokin VF, Selesneva ND, et al: Possible neurophysiological markers of genetic predisposition to Alzheimer disease. Dement Geriatr Cogn Disord 1998; 9:267–273Google Scholar

152. Moore NC, Tucker KA, Jann MW, et al: Flash P2 delay in primary degenerative dementia of the Alzheimer type. Prog Neuropsychopharmacol Biol Psychiatry 1995; 19:403–410Google Scholar

153. Givre SJ, Schroeder CE, Arezzo JC: Contribution of extrastriate Area V4 to the surface-recorded flash VEP in the awake macaque. Vision Res 1994; 34:415–438Google Scholar

154. Jeffreys DA, Axford JG: Source locations of pattern-specific components of human visual evoked potentials, I: component of striate cortical origin. Exper Brain Res 1972; 16:1–21Google Scholar

155. Michael WF, Halliday AM: Differences between occipital distribution of upper and lower field pattern-evoked response in man. Brain Res 1971; 32:311–324Google Scholar

156. Darcy TM, Ary JP, Fender DH: Spatio-temporal visually evoked scalp potentials in response to partial-field patterned stimulation. Electroencephalogr Clin Neurophysiol 1980; 50:348–355Google Scholar

157. Mielke R, Kessler J, Fink G, et al: Dysfunction of visual cortex contributes to disturbed processing of visual information in Alzheimer disease. Int J Neurosci 1995; 82:1–9Google Scholar

158. Rizzo JF, Cronin-Golomb A, Growdon JH, et al: Retinocalcarine function in Alzheimer disease: a clinical and electrophysiological study. Arch Neurol 1992; 49:93–101Google Scholar

159. Whittaker SC, Siegfried JB: Origin of wavelets in the visual evoked potential. Electroencephalogr Clin Neurophysiol 1983; 55:91–101Google Scholar

160. Coburn KL, Ashford JW, Moreno MA: Visual evoked potentials in dementia: selective delay of flash P2 in probable Alzheimer disease. J Neuropsychiatry Clin Neurosci 1991; 3:431–435Google Scholar

161. Coburn KL, Ashford JW, Moreno MA: Delayed late component of visual global field power in probable Alzheimer disease. J Geriatr Psychiatry Neurol 1993; 6:72–77Google Scholar

162. Daniels R, Harding GFA, Anderson SJ: Effects of dopamine and acetylcholine on the visual evoked potential. Int J Psychophysiol 1994; 16:251–261Google Scholar

163. Harding GFA, Wright CE, Orwin A: Primary presenile dementia: the use of the visual evoked potential as a diagnostic indicator. Br J Psychiatry 1985; 147:532–539Google Scholar

164. Orwin A, Wright CE, Harding GFA, et al: Serial visual evoked potential recordings in Alzheimer disease. Br Med J 1986; 293:9–10Google Scholar

165. Parks RW, Long DL, Levine DS, et al: Parallel distributed processing and neural networks: origins, methodology, and cognitive functions. Int J Neurosci 1991; 60:195–214Google Scholar

166. Bajalan AAA, Wright CE, Van Der Vliet VJ: Changes in the human visual evoked potential caused by the anticholinergic agent hyoscine hydrobromide: comparison with results in Alzheimer disease. J Neurol Neurosurg Psychiatry 1986; 49:175–182Google Scholar

167. Harding GFA, Daniels R, Panchal S, et al: Visual evoked potentials to flash and pattern reversal stimulation after administration of systemic or topical scopolamine. Doc Ophthal 1994; 86:311–322Google Scholar

168. Wright CE, Harding GFA, Orwin A: Presenile dementia: the use of the flash and pattern VEP in diagnosis. Electroencephalogr Clin Neurophysiol 1984; 57:405–415Google Scholar

169. Coburn KL, Amoss RT, Arruda JE, et al: Effects of flash mode and intensity on P2 component latency and amplitude. Int J Psychophysiol 2004; 55:323–331Google Scholar