Organizing the Church

In a time when ‘experts’ are consulted for all sorts of decisions, it is not surprising that even the American Bishops should feel the need to rely on their own team of experts to lighten their workload and to seek counsel in carrying out their responsibilities. For this reason the National Conference of Catholic Bishops and the US Catholic Conference were established. These are religious organizations, but organizations all the same. In well-managed organizations the work of the staff is constantly monitored to ascertain whether they are indeed facilitating the decision-making of line managers or have claimed the latter's responsibilities for themselves, an occurrence frequent in organizations. Checks are also made to verify that the improvement in decisions attributable to the staff's work is significant enough to justify financing that staff. The Bishops seem reluctant to conduct such monitoring of their staff, lest it seem to cast doubt on the moral integrity of the fine people who work for them. Such reluctance is unfounded, given how much evidence we have on group dynamics in organizations. Here we will examine a single product of the ‘expertise’ of the Bishops' staff: the polling questionnaire distributed to the Bishops on the first day of the November 1982 Conference in Washington.

This questionnaire is entitled “Initial Reactions to the Second Draft of the Pastoral Letter on War and Peace.” A questionnaire like this one is a psychometric instrument: it purports to measure the psychological attitudes of a given population on a given matter. Any such instrument must be sound and satisfy basic requirements. One does not measure weight with a poorly adjusted balance, nor volume with a container whose markings are blurred.

A psychometric instrument must be designed with great care to avoid any ambiguity in the reading of the questions: each respondent must understand a question to mean the same thing, and the frame of reference of the question must be made explicit. Such instruments must also offer an exhaustive array of possible answers from which the respondent selects. The ubiquitous catch-all answer “none of the above” is meant precisely to fulfill this requirement of exhaustiveness; when it receives an unduly great number of checks, it serves as a signal that the questionnaire is poorly designed.
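The diagnostic role of the catch-all answer can be made concrete. The sketch below uses invented tallies and an illustrative 10% threshold (the essay gives no figures); it simply flags questions whose “none of the above” share is unduly high:

```python
# Flag questionnaire items whose catch-all answer ("none of the above")
# drew an unduly large share of responses -- a sign that the offered
# answers were not exhaustive. Tallies and threshold are hypothetical.

def flag_poor_items(tallies, threshold=0.10):
    """tallies: {question: {answer: count}}. Returns the questions whose
    'none of the above' share of total responses exceeds the threshold."""
    flagged = []
    for question, counts in tallies.items():
        total = sum(counts.values())
        nota = counts.get("none of the above", 0)
        if total and nota / total > threshold:
            flagged.append(question)
    return flagged

responses = {
    "A": {"agree": 120, "disagree": 60, "none of the above": 5},
    "B": {"agree": 40, "disagree": 45, "none of the above": 70},
}
print(flag_poor_items(responses))  # ['B']
```

On these invented numbers, Question B's answer set fails the exhaustiveness test: nearly half its respondents found no answer that fit.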

Another way to ensure that an exhaustive array of possible answers is offered to respondents is to ask for their attitude as measured on a continuum scale. This latter choice was apparently made for the NCCB questionnaire. However, the scale of the questionnaire has only three points: there are only three ways to answer each question. Three points are too few for this purpose, for a 3-point scale is much too coarse and loses information: it cannot capture the discriminating power of which respondents are capable.
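The information loss can be illustrated numerically. In this sketch (simulated attitudes on a 0-to-1 continuum, not real polling data), two respondents whose underlying positions differ markedly become indistinguishable once their answers are collapsed onto three points, while a 7-point scale still separates them:

```python
# Two hypothetical respondents with clearly different underlying
# attitudes give identical answers on a 3-point scale but distinct
# answers on a 7-point scale: the coarser scale discards information.

def discretize(attitude, points):
    """Map an attitude in [0, 1] to a position on a 1..points scale."""
    return min(int(attitude * points) + 1, points)

moderate_agreer = 0.70   # agrees, with some reservations
strong_agreer = 0.95     # agrees emphatically

for points in (3, 7):
    a = discretize(moderate_agreer, points)
    b = discretize(strong_agreer, points)
    print(f"{points}-point scale: {a} vs {b}")
# 3-point scale: 3 vs 3   -- indistinguishable
# 7-point scale: 5 vs 7   -- the difference survives
```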

The answer-statements, such as "I basically agree," are called "cues." Cues must help supplement and reinforce the definition of the continuum. The term "basically" at each end of the continuum weakens rather than reinforces it. The cues must also be symmetrical to avoid bias: if at one end the cue is "I basically agree," at the other end it must be "I basically disagree," not "I am in basic disagreement" as appears on the NCCB questionnaire, since the asymmetry confuses the respondent. Further, the terms "major" and "reservations" in the middle cues are vague and can be interpreted differently by different people.

There are other factors affecting the frame of reference of respondents. “Time perspective” is such a factor. Question A mentions “signs of our times.” Our times may be construed to be this Conference year, the immediate future, the long term future, the time-span since Russia attained nuclear parity, the period starting with the Hiroshima bombing, the Los Alamos test explosion, the early 1900s when Einstein conceived the theory of relativity, the beginning of the industrial age which allows mass production of weapons, or 1917 when Our Lady addressed the issue of Communism.

The question must also make clear whether it seeks a "descriptive" or an "evaluative" answer. Question A may be understood as referring to the quality of the draft as a political paper, or to its value as a formulation of Catholic doctrine. The first interpretation is descriptive, the second evaluative. The two types of question address different psychological attitudes which must not be confounded. Question B asks about citations from Scripture. Descriptively this may mean: are they properly reported, quoted in or out of context, exhaustive relative to the subject? Evaluatively, it may mean: are they well selected and presented with an eye to backing the position paper? The same distinctions may be made for the "theological principles" of Question C, the "strategies" of Question D, and the "tone," "style," "length" and "intended audience" of Question E.

Level of analysis is also an important factor affecting the frame of reference of respondents. Three levels are possible: (1) the individual level, (2) the level of relations among individuals, and (3) the level of collective phenomena. Question A does not say whether the document is to be judged sound at the level of each individual Catholic (or member of the general public, it being unclear whether this is a pastoral addressed to believers or a political position paper addressed to all), taken as a whole moral person facing other moral questions. Nor does it say whether one is asked to agree or disagree on the basis of the document's effect on the relationships of Catholics among themselves, of individual Catholics with individual non-Catholic Americans, of individual Americans with individual Russians, etc. Neither does Question A indicate whether it must be interpreted at the collective level, involving the U.S. Catholic Church, the U.S. Government, the Holy See, the whole U.S. population, the Soviets, Europeans, etc.

This methodological exegesis could have been shorter had one simply noted that lack of ambiguity is the prime quality of a good questionnaire. It is quite obvious that ambiguity was built into each of the questions; none of them is straightforward. Question A inquires into the social and the political: two different things. Question B inquires into Scripture, Tradition, and Teaching: three things. Question C inquires into both ethical precepts and moral conclusions. Question D mixes the issues of the nature of strategies and of their practicality. Question E manages to incorporate at least four questions in one (on tone, style, length and audience).

One does not add apples and oranges. A question involving apples on the one hand and oranges on the other cannot receive a single answer; should an answer be obtained, it cannot be said to measure anything. Yet answers were obtained by means of this poll, and they were used. On the basis of these answers, the NCCB staff decided that Questions E and C would be the first reexamined at the conclusion of the session, at the risk of not leaving adequate time to treat the other questions, which were taken to yield greater agreement among the Bishops. In view of the preceding analysis, this decision was purely arbitrary.
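The loss caused by a compound question can be shown with a toy model. In this sketch (invented respondents and a hypothetical scoring rule; the actual questionnaire prescribed no such rule), two respondents with opposite objections, one to tone and one to length, return the very same single answer to a question like E, so the tally cannot reveal which component drew the objection:

```python
# Two hypothetical respondents evaluate a compound question covering
# both "tone" and "length". Each objects to a different component, yet
# a single combined answer (here modeled as the harsher of the two
# judgments) records them identically, destroying the distinction.

def combined_answer(judgments):
    """Collapse per-component judgments (+1 agree, 0 reservations,
    -1 disagree) into one overall response: the harshest judgment."""
    return min(judgments.values())

objects_to_tone = {"tone": -1, "length": +1}
objects_to_length = {"tone": +1, "length": -1}

print(combined_answer(objects_to_tone))    # -1
print(combined_answer(objects_to_length))  # -1  -- indistinguishable
```

Whatever collapsing rule the respondent silently applies, the single recorded answer confounds the components; that is the sense in which the poll measures nothing.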

A theological analysis would be necessary to determine whether the lumping of different issues into a limited number of questions introduced a systematic doctrinal bias into the whole polling process. An organization methodologist can only raise the general question of whether all theological points were exhaustively covered and each accorded its proper significance.

On the limited matter of this questionnaire, it must be concluded that the NCCB/USCC staff did not do an expert job of facilitating the Bishops' decision-making or of managing the time of the Conference well. They introduced many possibilities of procedural error into the questionnaire. I leave moot the question of whether these errors resulted from sheer ignorance or from the vested interests, doctrinal or organizational, of the staff. The important thing is that they be avoided in the future. Such questionnaires should be designed scientifically: that is, by scientists, following approved scientific methods, and audited by independent scientists.

Jean-Francis Orsini, T.O.P., Ph.D. was at Wharton business school when he wrote this article. He currently teaches business training programs from Washington, DC.