In the federal budget language for Fiscal Year 1994, Congress directed the Central Intelligence Agency (CIA) to assume responsibility for a closely held program then managed by the Defense Intelligence Agency (DIA). Known as STAR GATE, the program was mandated to explore and exploit the reputed parapsychological phenomenon known as “remote viewing” in support of U.S. intelligence activities. STAR GATE’s mission was three-fold: Assess foreign programs in the field; contract for basic research into the existence and cause-and-effect of the phenomenon; and, most importantly, determine whether remote viewing might be a useful intelligence tool.

Before accepting responsibility, the CIA first insisted that the program be evaluated to determine whether it had any value. To this end, the Agency contracted with the American Institutes for Research (AIR), headquartered in the District of Columbia, to perform a scientific survey. Two heavily credentialed scientists, one a statistician and research specialist, the other a psychologist, were retained to assess the research portion of STAR GATE. Jessica Utts, the statistician, is a supporter of parapsychological research; the psychologist, Ray Hyman, a professor at the University of Oregon, is a prominent skeptic. A number of AIR employees and associates would evaluate the operations portion.

In the conclusions of the AIR report, Drs. Utts and Hyman agreed that the experimental portion of STAR GATE indicated that some sort of phenomenon existed, but disagreed on whether it had been proved psychic in origin. Utts thought it had been; Hyman offered no alternative explanation but would not accept that a psi effect had been demonstrated. As for the operational side of the survey, AIR’s evaluators concluded that remote viewing was not, and never had been, of operational use, and that STAR GATE was therefore not worth spending further money on.

This verdict was justification enough for the CIA to wash its hands of the Congressional requirement to pursue remote viewing, while at the same time allowing it to integrate the dozen or so personnel spaces it had acquired from STAR GATE into its own structure, a veritable windfall in an era of rampant governmental “downsizing.” But was the AIR survey truly the thorough and objective evaluation it purported to be? After my own assessment of the report, I can only conclude that it was not.

In fact, so skewed were the AIR report’s conclusions that I at first suspected a clever trick by the CIA to give the public the impression that it had dumped the program, while in reality burying it deep inside the Agency where it could continue to perk along quietly behind the scenes. Prepared to remain silent if a viable remote viewing effort really was still under wraps somewhere in the system, I made a few discreet inquiries among people who were in a position to find out. Alas, it now seems clear that the program, in any incarnation, is indeed deader than a doornail.

Because I know from long experience the value of a properly run remote viewing program, I was quite offended by the superficiality of the AIR study and the obtuseness of the CIA. The best antidote, it would seem, would be to expose the major faults of the review and let the public sort out what ought to happen next. Consequently, I will explore in this article and in one to follow how AIR arrived at its dubious conclusions.

THE STUDY

To accomplish its three-fold mission, STAR GATE incorporated two separate activities. One was an operational unit with government-employed remote viewers, whose purpose was to perform training and actual remote viewing intelligence-gathering sessions in support of customers in the U.S. intelligence community. The other was an ongoing research program, maintained separately from the operational unit. The research program resided for several years at SRI International; in the project’s later years it moved to another California-based defense contractor, Science Applications International Corporation (SAIC), under the directorship of Dr. Edwin May.

In evaluating the program, AIR obviously had to address both operational and research portions. On the research side, evaluators performed an exhaustive review of the reports from the ten most recent experiments Dr. May had conducted.

To evaluate the operational portion, the AIR personnel conducted interviews with STAR GATE’s project manager and viewers. Also, certain intelligence community activities were recruited to levy collection tasks on STAR GATE, then evaluate the resulting information. Finally, some of the research material that seemed to apply to operations was reviewed. In the interests of time and space, I will consider in this article only the operational portion of the AIR evaluation. The research portion will be examined at another time.

THE PROGRAM

To help understand how the AIR study erred in evaluating the operational side of the program, we must first briefly discuss the program’s history. STAR GATE traces its direct lineage to the formation of an Army program in 1977, originally created to explore what intelligence an enemy might be able to obtain about the U.S. by using remote viewing. The program’s indirect roots go back still farther, to the CIA’s flirtation with remote viewing under the SCANATE program in the early Seventies.

By 1978 the original Army program had been given a new mission: to experiment with remote viewing as an actual intelligence collection tool. At about the same time, the program moved under the administrative umbrella of the newly created GRILL FLAME project, a joint effort among several agencies, with DIA overseeing the overall program. Over the next fourteen years, the remote viewing program went through two more name changes, first in the early Eighties and then again in 1986 upon migrating to DIA, after a newly appointed commanding general of the Army’s Intelligence and Security Command was directed by his superiors to divest the Army of the program. In the early Nineties the program’s status was changed from that of a SAP (“Special Access Program”) to a LIMDIS (“limited dissemination”) program, and it was re-designated STAR GATE.

Altogether, over forty personnel served in the program under its various iterations, including both government civilians and members of the military. Of these, about 23 were remote viewers. At its most robust (during the mid-to-late Eighties), the remote viewing program boasted as many as seven full-time viewers assigned at one time, along with additional analytical, administrative, and support personnel.

From the early Eighties, two primary remote viewing disciplines were used: The SRI-developed coordinate remote viewing (CRV) method, and a hybrid relaxation/meditative-based method known to program personnel as “extended remote viewing,” or ERV. Both methods had been thoroughly evaluated and refined before being pressed into service on “live” intelligence collection missions.

In 1988 a new and (it turned out) less reliable method, known as WRV—for “written remote viewing”—was introduced. WRV was a hybrid of both channeling and automatic writing. Surprisingly, it was almost immediately adopted as an official method for performing actual intelligence missions—without the same period of careful evaluation that either CRV or ERV had enjoyed. Many of the personnel were dubious of the new method, and in fact a good deal of divisiveness and rancor developed within the unit because of it. Nevertheless, for a several-year period the organization’s management made WRV the method of choice. There were a number of reasons for this, which I lack space and time to consider here.

By the summer of 1990, attrition of quality remote viewers, through retirement, reassignment, or the departure of disenchanted personnel, was becoming a problem. Unfortunately, the higher echelons at DIA were for the most part uncomfortable with the program and chose not to replace departing employees. At the time of its transfer to the CIA in June 1995, STAR GATE was down to three viewers: two using WRV and one using CRV. Further, the program was led by a project manager who had no previous experience in the field and had been less than successful in gleaning insight from the program’s well-documented operational archives.

By 1995, after almost 20 years of operation, the remote viewing program in its various guises had conducted several hundred intelligence collection projects involving literally thousands of remote viewing sessions on behalf of nearly all of the major players in the U.S. Intelligence Community (including, despite its current vigorous disclaimers, the CIA). There were at one point more than a dozen four- and five-drawer security cabinets containing the documentation for these projects and the surrounding history of the program.

After all this, one would think that AIR had a great deal to evaluate before passing judgment on the operational value of the unit: Drawers and drawers of documents to examine, dozens of personnel and several former project managers to interview, and perhaps a score of intelligence consumers to poll. But that is not what happened. Instead, AIR chose to do only three things:

  • The few remaining viewers were interviewed for two hours as a group;
  • The project manager was interviewed once; and
  • Six intelligence customers were recruited to provide problems against which the remote viewers would be targeted; the results would then be evaluated by the agency that submitted the request. This operational test took place during an approximately one-year period near the end of STAR GATE’s tenure at DIA: a mere 12 months and six projects balanced against a roughly 240-month history and hundreds of collection projects, all well documented in STAR GATE’s files!

Regrettably, AIR had made the arbitrary decision at the very beginning not to evaluate any of the historic data predating the adoption of the “STAR GATE” project name.

On the surface it might seem that at least the operational test AIR devised would be a reasonable assessment of STAR GATE’s capability and potential. But we must remember that at the time the evaluation was made, only three remote viewers remained of the 23 who had belonged to the unit over the years. Two of the three used the less effective WRV protocols, one of them even resorting to tarot cards as a collection method. The third viewer admitted to being demoralized and cynical about the management and future of the program, which undoubtedly affected viewing accuracy. The program manager, who performed triple duty as tasker, analyst, and evaluator, was inexperienced and unqualified to fulfill any of those functions.

Indeed, by the time of the AIR evaluation, the tasking methodology had degenerated markedly from past practice. In previous years, to prevent contamination of the data, no “frontloading” (advance information about the target given to the viewer) was permitted. When further guidance proved necessary in the course of a session, great pains were taken to provide only the most neutral cuing possible, and then only after the viewer had demonstrated unequivocal site contact. Further, operational sessions were conducted as often as possible under double-blind conditions to prevent inadvertent cuing by monitor personnel.

At the time of the AIR investigation, however, viewers were allowed “substantial background information” before their sessions (p. C-12), which often led to viewers “chang[ing] the content of their reports” to coincide with their own preconceptions about the nature of the target and the expectations of the customer (pp. C-12, C-13). Complicating the matter still further, the AIR report indicates that the person providing the tasking, receiving the reports, and then providing further guidance was usually one and the same: the project manager, who was all the while fully informed of the mission and had access to whatever site-relevant details were available. This is poor practice for anyone hoping to maintain objective analysis and unbiased viewing results.

Sessions were conducted “solo” (i.e., with no monitoring personnel present), and the taskings provided to the viewer usually included the name of the tasking organization and a brief description of the target (p. C-15), a practice that further compounded the likelihood of contaminated results. It is no wonder that the tasking organizations, even those that were enthusiastic about remote viewing, found the results ultimately unhelpful.

One might argue that these were problems endemic to the unit, and that the AIR report fairly assessed the poor utility of the operational organization. However, AIR essentially guaranteed a negative conclusion from the very beginning by focusing on a narrow slice of time late in the program’s existence, when operational standards and morale were at their lowest ebb (brought on, by the way, by the ambivalence and even outright antipathy of the program’s parent organization). It would have been a major surprise had AIR come to any other conclusion. In a truly objective study, thorough, responsible evaluators would have recognized the situation, analyzed what was going on, and dug deeper.

It should be clear by now that this ostensibly “scientific” examination of the operational portion of the program was far too superficial and narrowly based to justify the conclusion that remote viewing had never been of intelligence use. In fact, there is plenty of evidence of collection missions in which remote viewing was of operational significance. Obvious sources would have been the veteran remote viewers (none of whom, as previously noted, was ever interviewed, though most are eager to talk about their involvement) and the final reports for closed-out projects. The historical files also contain a number of customer evaluations from the likes of the Secret Service, the NSA, the Military Services, the Joint Chiefs of Staff, and, ironically, the CIA, reporting (occasionally even in rather glowing terms) the usefulness of remote viewing as an intelligence tool.

To be sure, not all the evaluations are positive; it would have been very suspicious if they were. Remote viewing, like any other intelligence discipline (including, despite popular perceptions, satellite imagery), often falls flat on its face. But it was successful often enough to have gained, over several years, the interest of a number of otherwise hard-bitten intelligence agencies. Unfortunately, AIR, with all its resources, failed altogether to discover this on its own.

One might draw a fanciful analogy with the early days of radio. It is as if Marconi’s official trial were arranged as a make-or-break test to decide whether to pursue his new invention or to scrap it as a waste of effort and resources. Marconi himself, however, is not present for the event, and one of his less proficient operators is chosen to perform the test. The operator, suffering from a migraine, tunes to the wrong frequency and produces only static. The officials in attendance, impatient disbelievers from the very outset, make with great relief the immediate decision to scrap the whole thing and go back to something they know: the telegraph.


Continue to Part 2, “A Second Helping”