
Data Collection | Weighted Estimates | Effects of Sampling Upon Prevalence Estimates | Sampling Methods Employed | Cautions

Data Collection

BRFSS interviews are conducted by telephone.

Beginning in January 1989, adults, aged 18 years or older, residing in Oregon households having a telephone were randomly selected for interview. At first, about 140 BRFSS interviews were conducted each month throughout Oregon. After 1989, the number of interviews was increased to approximately 240 to 280 per month. The annual totals for each year during the study period are given below for respondents of known race or ethnicity:

Racial or Ethnic Group             Year of Interview
                               1989  1990  1991  1992  1993  1994  Total

[label missing]                1601  3113  3121  3118  2723  2608  16284
[label missing] (all races)      49    80   109    95    89    94    516
[label missing]
  (incl. Pacific Islanders)      19    37    43    45    48    56    248
[label missing]                  19    44    46    52    37    33    231
[label missing]                  10    25    22    24    50    29    160
TOTAL                          1698  3299  3341  3334  2947  2820  17439

For purposes of this set of tables, responses to questionnaire items have been aggregated over the time period that a particular question was used. Aggregating data in this way created subsamples large enough to estimate risk levels within racial and ethnic minorities and to allow comparisons among them. Tables based on questions asked each year--e.g. the table regarding the prevalence of adult diabetes--include responses from more than 17,000 interviews. About six and one-half percent--1,152--of these were members of racial or ethnic minorities. By contrast, the table which reports the subjective assessments of respondents' level of health is based upon fewer than 6,000 interviews (including only 79 African Americans and 70 American Indians) because this question was added to the survey beginning in 1993.

These differences in sample size are important because the precision and reliability of sample-based estimates are contingent upon the number of respondents interviewed. Data users should pay close attention to the number of observations made and the probable range of values associated with the estimated parameters. Many of these estimates are highly useful for the comparisons made in health planning and assessment; others are less so.

The usefulness of aggregating data over a period of several years rests on the assumption that the mix of responses did not change greatly during the study period. For most variables employed in these tables, this assumption appears valid. However, because statewide legislation strongly affected seatbelt use, the baseline estimates for this indicator are based solely on responses obtained after the legislation went into effect.

The original questionnaire was based on a core set of questions developed by researchers at the Centers for Disease Control and Prevention of the U.S. Public Health Service. It was reviewed each year and limited revisions were made. Additional items were included to aid in understanding health-related conditions specific to Oregon. Core items generally remain unchanged to permit comparisons with other states and the analysis of trends. The data presented in these tables are those most useful in measuring Oregon Benchmarks established by the Oregon Progress Board or Healthy People 2000 goals set by the Centers for Disease Control. (See: Oregon Benchmarks: Standards for Measuring Statewide Progress and Institutional Performance. Report to the 1995 Legislature by the Oregon Progress Board; December, 1994; especially pages 36 and 79. U.S. Department of Health and Human Services, Healthy People 2000: National Health Promotion and Disease Prevention Objectives. Washington DC: U.S. Department of Health and Human Services, Public Health Service, 1991; DHHS publication no. (PHS) 91-50212.)

Weighted Estimates

A. Response weighting to achieve an equal probability sample:

Theoretically, BRFSS sampling methods ensure that every residential telephone number in Oregon has the same probability of being selected as part of the sample. It is this fact that makes it possible to generalize from a relatively small set of interviews to the state in its entirety or to any subpopulation within the state--a region, county, gender group, age group, racial or ethnic group, etc.--on the basis of observations made regarding the corresponding subset within the sample. However, a simple summary of the raw data can, at times, create misleading impressions, for two reasons:

1. Some households have more than one telephone: thus they are more likely to be selected for interview. Wealthy households, for example, tend to be overrepresented. Among the households selected for interview in 1989-94 about five percent had two or more telephone numbers.

2. The selection probabilities are not the same for all individuals--the unit about which we wish to generalize. That is, an adult in a four-adult household has a 25 percent chance of being selected as the respondent, whereas the only adult in another selected household has a 100 percent chance of being selected for interview.

By assigning inverse weights to responses associated with such factors it is possible to calculate unbiased estimates for geographic areas or demographic groupings. The combined weights ensure that statistical estimates are equivalent to those obtainable through simple random sampling. For example, the responses of someone living in a household with three telephones are given only one-third the weight of those from households which may be reached by only a single telephone number. Similarly, the responses of someone from a four-adult household would be given four times the weight of those of a respondent living as the sole adult resident of the household and twice the weight of responses obtained from members of two-adult households.
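As an illustrative sketch (not the official BRFSS weighting code), the combined design weight for a single respondent can be computed from these two factors:

```python
# Illustrative only: combined design weight for one respondent, based on the
# two selection-probability factors described above.
def design_weight(n_adults: int, n_phone_lines: int) -> float:
    """Inverse-probability weight: more phone lines make a household more
    likely to be sampled (downweight); more adults make any one of them
    less likely to be the chosen respondent (upweight)."""
    return n_adults / n_phone_lines

print(design_weight(4, 1))  # 4.0 -- four-adult, one-phone household
print(design_weight(1, 3))  # ~0.333 -- sole adult, three phone lines
```

A respondent from a two-adult, one-phone household would receive a weight of 2, half that of the four-adult example, matching the ratios described above.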

B. Post-stratification weights:

Within the framework of statistical theory it is clear that most randomly selected samples of a given size, drawn from the same population, would provide quite similar findings. These findings are generalizable to the population itself. It is just as clear that the single sample actually observed in a given study never provides perfect representation for the larger population. In other words, it is not an exact image of the sampled universe. To make sample-based estimates as nearly representative of the universe as possible, they are commonly adjusted by post-stratification weights.

This system of adjusting statistical estimates is useful because of the well-established fact that health-relevant behavior and beliefs display considerable similarity among persons within the same demographic classifications--age, gender, race and ethnicity, economic level, marital status, etc. Furthermore, thanks to the periodic census, the demographic composition of the population is already known. This makes it possible to determine how well the specific sample selected represents the population under study. If a particular demographic group is underrepresented in the sample, the responses of the interviewees with that characteristic may be given greater weight; as a result, the newly adjusted values become a more accurate representation of the population.

For example, 18-24 year-old males were typically underrepresented and females over 64 years of age were often overrepresented in the sample relative to the number of young men and older women known to live in Oregon. Whenever this occurred, each response from a young man was given increased weight, whereas the responses of the older women received less than average weight. As a final result, summary statistics used to generalize about the group as a whole more accurately reflect its true conditions--that is, what the findings would have been had every member of the racial or ethnic group been interviewed.

Estimates given in these tables employ post-stratification weights based on both gender and age. Operationally, post-stratification weights were calculated by first segmenting respondents into subclasses based on gender and six age categories (18-24, 25-34, 35-44, 45-54, 55-64, 65 and older)--a total of 12 subclasses. Next, population figures for these same subclasses, based on the 1990 U.S. Census and reflecting estimates for July 1, 1991, were obtained from the Oregon Center for Population Research and Census at Portland State University. Weights for each of the 12 cells were calculated by dividing the population estimate for each cell by the number of actual respondents in the cell. In effect, this determined the number of residents within the racial/ethnic group which each respondent represented. A separate set of post-stratification weights was employed for each of the five racial/ethnic categories.
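Operationally, each cell weight is a single division. The census figures and respondent counts below are invented for illustration; only the method (cell population divided by cell respondents, per gender-by-age cell) follows the description above.

```python
# Hypothetical post-stratification, using two of the twelve gender-by-age
# cells; the population and respondent counts are invented figures.
census = {("M", "18-24"): 140_000, ("F", "65+"): 180_000}  # assumed populations
respondents = {("M", "18-24"): 70, ("F", "65+"): 240}      # assumed counts

# Weight = number of residents each respondent "stands for" in that cell.
weights = {cell: census[cell] / respondents[cell] for cell in census}

print(weights[("M", "18-24")])  # 2000.0 -- underrepresented young men, high weight
print(weights[("F", "65+")])    # 750.0 -- overrepresented older women, low weight
```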

Because population estimates were treated as constants throughout the study period, racial or ethnic groups which experienced rapid or extreme shifts in population during that time are likely to produce somewhat inaccurate or misleading estimates.

Effects of Sampling Upon Prevalence Estimates

The validity, precision and reliability of sample-based estimates are contingent upon the method used in selecting respondents. Some form of 'chance' selection is essential to ensure valid--i.e. unbiased--prevalence estimates. At the same time, 'chance' selection ensures that the estimates obtained from the particular set of respondents actually chosen as the sample will be (at least slightly) different from estimates which would have been obtained had a different set of respondents been interviewed. That is, 'chance' selection also inescapably creates variability in potential empirical findings. And even though they all employ some form of randomness in selection, most methods of selecting a sample create greater sampling variability than simple random sampling does.

In obtaining the data used in these tables, one method of selecting the individuals to be interviewed was employed during 1989 through 1992; a second sampling plan was used in 1993 and 1994. To produce the prevalence estimates given in these tables, the data for all years were combined--regardless of the actual sampling method employed--and computations performed as if they constituted a simple random sample.

For many tables this would appear to have minor effects--e.g., those which estimate the prevalence of hypertension, based on data from 1989 through 1993--because only one-fifth of the responses were obtained using the second sampling plan. On the other hand, the effects are likely greater for tables which describe subjective health assessments, in which all respondents were selected by means of a multi-stage cluster sampling protocol.

Although statistical theory provides assurance that measures calculated on one randomly selected sample will be quite similar to those based on other samples obtained using the same procedures--at least, most of the time--they would not be precisely the same. The amount of variation to be expected from sample to sample is related to the degree of homogeneity within the population sampled. For diverse populations, the differences from one sample to the next would tend to be greater. Also, the more that selection procedures depart from those of a simple random sample, the more likely that the statistics would vary among samples. On the other hand, sampling variability decreases with an increase in the sample size.

Taking these factors into account, it is possible to estimate the amount of variability to be expected among samples; and sampling variability, in turn, determines the reliability of estimates based on a single sample. Although these tables provide prevalence estimates for racial and ethnic minorities, the reliability of these estimates varies considerably from one minority group to the next and from one table to another.

Sampling Methods Employed

In the method employed from 1989 through 1992, a list of valid residential telephone numbers was obtained from a large research corporation which provides sampling services for telephone surveys. Randomly selected telephone numbers were incremented upward by a fixed amount so that the sample would include unlisted as well as listed households. Chance selection produced a random sample of households with telephones. A single adult in each of these households was randomly chosen for interview based on the standard Kish technique (see: Kish, L. Survey Sampling. John Wiley & Sons, New York. 1965. Page 396f, especially section 11.3B). With proper weighting, this sampling plan provides data which may be analyzed as a simple random sample of individuals.
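A rough sketch of the two selection stages described above (the listed numbers, increment, and household size are invented; this is not the sampling vendor's actual procedure):

```python
import random

rng = random.Random(1)  # fixed seed for reproducibility

# Stage 1: draw a listed residential number at random, then add a fixed
# increment so that unlisted numbers can also enter the sample frame.
listed = [5035550100, 5035550123, 5035550187, 5035550240, 5035550301]
increment = 1
household = rng.choice(listed) + increment

# Stage 2: within the contacted household, pick one adult at random -- a
# simplified stand-in for the Kish selection table cited above.
n_adults = 3
respondent_index = rng.randrange(n_adults)  # 0-based index of the chosen adult
```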

In 1993, the Waksberg method of probability cluster sampling was implemented to select respondents (see: Waksberg, J. Sampling methods for random digit dialing. Journal of the American Statistical Association. 1978; 73:40-46). This method, too, yields a representative sample of households with telephones--it is not equivalent to a simple random sample, however. To compensate for the design effect of this sampling plan, appropriate adjustments are needed in formulas for calculating variances. As a rule, cluster sampling increases the amount of variability to be expected among sample estimates.
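A hypothetical illustration of how a design effect (deff) would enter the variance formulas; the prevalence, sample size, and deff values below are invented, not estimated from BRFSS data:

```python
import math

def srs_se(p: float, n: int) -> float:
    """Standard error of a prevalence estimate under simple random sampling."""
    return math.sqrt(p * (1 - p) / n)

def cluster_se(p: float, n: int, deff: float) -> float:
    """Cluster-sample SE: the SRS standard error inflated by sqrt(deff)."""
    return math.sqrt(deff) * srs_se(p, n)

# With deff = 2, a sample of 800 behaves like a simple random sample of 400:
print(round(cluster_se(0.2, 800, 2.0), 4))  # same value as srs_se(0.2, 400)
```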

Necessary Cautions In Interpreting These Data

Inadequate sample size. Statistics based upon too few respondents are unreliable. Generally, BRFSS prevalence estimates should be based upon 50 or, better still, 100 or more interviews in order to be published. If based upon a small number of cases, a point estimate should be treated as inexact: it is simply the midpoint of the range of values which the same statistic might be expected to take if a second sample of respondents were interviewed.
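As a hypothetical illustration of that range, the approximate 95 percent interval around a prevalence estimate can be computed with the usual normal approximation; the 20 percent prevalence and the sample sizes below are invented:

```python
import math

def ci_half_width(p: float, n: int) -> float:
    """Approximate 95% half-width for a prevalence estimate (normal approx.)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

print(round(ci_half_width(0.20, 50), 2))    # 0.11 -> roughly 9% to 31%
print(round(ci_half_width(0.20, 1000), 2))  # 0.02 -> roughly 18% to 22%
```

With only 50 interviews, the true value could plausibly lie anywhere from about 9 to 31 percent, which is why small-sample point estimates should be treated as inexact.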

General danger of an unrepresentative sample. It is always possible that a sample--selected by 'chance'--is not representative of the larger population from which the sample was selected. That is, a biased sample or set of atypical cases was selected. In that case, the statements based on observation of sample cases would provide a very poor description of the population of interest. And some statements might be a direct contradiction of the actual facts. To the extent that sample selection is determined by 'chance,' the larger the number of cases observed, the less the likelihood that the sample will be unrepresentative and the more likely that descriptive statements will be true--unless one of the following conditions occurs.

Lack of telephone coverage. With respect to these data on racial/ethnic health-related conditions and behaviors, one of the greatest dangers to representativeness is the fact that a disproportionate number of minority households lack telephones. Among groups such as Hispanics, American Indians and African Americans the proportion of households without phones may be as high as 10 to 15 percent; whereas, in the general population the corresponding figure is below five percent. This lack of telephone coverage may result in serious bias in estimating prevalence for certain health indicators. And estimates for Hispanics, Indians and Blacks are likely to be less accurate than estimates regarding the general population.

The lack of a telephone is probably much more closely associated with a family's economic level than with minority membership, per se. To the extent that prevalence measures correspond to economic conditions, BRFSS estimates may be somewhat misleading--especially for these groups. Individuals and families living below the poverty level are probably not well-represented in these data. This is, no doubt, also true of many new arrivals to the U.S.

Communication barriers. Recent immigrants who lack fluency in English may be unable or unwilling to participate in telephone interviews. It is also possible that the experiences associated with life in a particular subculture give the words used by an interviewer different meanings than those intended.

Mixed populations. Prevalence estimates are, in effect, an average of behaviors which characterize an entire group. If one segment of the group is extreme in one direction (a high rate of hypertensives) and another segment is extreme in the opposite direction (few individuals with high blood pressure) the prevalence for the combined group will be an intermediate value. The combined prevalence estimate may be indistinguishable from another group which is fairly homogeneous but moderate in regard to the risk of hypertension. Unless the data are carefully and critically evaluated, a user might draw the conclusion that the two groups were essentially alike; whereas, to optimize health, distinctively different public health policies are needed for the two groups.
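The arithmetic behind this caution is a simple weighted average; the subgroup sizes and hypertension rates below are invented:

```python
# Invented figures: two internally extreme segments combine into a moderate rate.
n_a, p_a = 400, 0.35  # segment with a high rate of hypertension
n_b, p_b = 600, 0.05  # segment with a low rate
combined = (n_a * p_a + n_b * p_b) / (n_a + n_b)
print(round(combined, 2))  # 0.17 -- indistinguishable from a uniform 17% group
```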

Genetic vs. cultural effects. Although these data may be quite useful in determining relative health levels or differences in behavior patterns between racial and ethnic groups, by themselves they are little help in determining the source of the difference. Some differences may have an important genetic component; others may be entirely cultural in causality. Without additional information or critical thinking, the segmentation of data by racial/ethnic group is of little use in distinguishing the relative importance of these broad etiological components--genetics and culturally-based behavior patterns--to health.

Demographic composition of group. Apparent differences between racial or ethnic groups may be due to differences in their demographic composition rather than differences in health behavior patterns or the quality of medical services available. For example, hypertension typically develops after middle age; thus a group with many older adults is likely to show a higher rate of members with high blood pressure than one consisting primarily of young adults. Other demographic characteristics may also have marked effects upon prevalence estimates for certain health variables.

Need to discuss matters with state and local health officials. To reduce the risk of unwarranted conclusions or inappropriate explanations, it is always wise to discuss interpretations of the data with state and local health officials. Frequently they are aware of factors which affect prevalence rates of different groups. For private citizens, community leaders or public officials wishing to develop programs to improve the health of minority groups, these data provide a starting point for discussion. They are also intended as one source for the baselines needed to measure improvement.