Changing Approaches Chapter Four


 

Chapter Four

 

Survey Research

 

Gary Dickinson

Adrian Blunt

 

Much of the substantive knowledge in the emerging discipline of adult education has been acquired in the past twenty years and has been concerned with the extent and nature of adult education as a field of social practice. An inevitable consequence of this emphasis has been the predominance of survey methods over other research methods. The purposes of this chapter are to describe the role of the survey method in adult education research, to identify its strengths and weaknesses, and to discuss the flow of activities together with problems typically encountered in planning and conducting surveys.

The first major survey in adult education was conducted in England in 1851 by Hudson, who felt it important that the public "be placed in possession of such facts as can be collected, to afford a just estimate of the nature and extent of the efforts which have been made, in behalf of adult education, and the effects it has produced" (Hudson, 1969, p. v). Accurate description remains a paramount goal in adult education research. Typical subjects for surveys include personnel and staffing, adults' learning needs and interests, program activities, and finance. Perhaps no other topic has been studied more extensively with the survey method than has participation in adult education.

Surveys constitute a major portion of both published nondegree research and unpublished degree research. As Dickinson and Rusnell (1971) point out, Adult Education, the principal journal of research and theory in the field, published 117 articles on empirical research in the twenty-year period ending in 1970, and of these, 86 percent reported the use of the survey method. Every empirical research article published during the first five years of the journal was a survey report, as were 65 percent of the articles in the last five-year period studied. Similarly, 72 percent of the doctoral studies listed in Adult Education Dissertation Abstracts (DeCrow and Loague, 1970; Grabowski and Loague, 1970) for the years 1963 through 1969 used descriptive methods, which included case studies as well as surveys. The use of such methods showed no tendency to decrease; indeed, the percentage of all dissertations using descriptive methods steadily increased from 67 percent in 1965 to 75 percent in 1969.

The survey method will probably continue to be the major means used. Even basic information about the extent and nature of present practice in adult education is far from complete, and the field is still expanding and changing; so there will be a continuing need to ensure that knowledge keeps pace with changes in practice. Similarly, the orderly development of a discipline of adult education depends on a steady flow of knowledge acquired through survey research. Knowles (1973) described six phases in the development of evolving disciplines of social practice (definition, differentiation, standard-setting, technological refinement, respectability and justification, and understanding the dynamics of the field) and listed the kinds of research that are needed in each. Survey research has a role in each phase and a crucial role in at least three. Thus, the need for surveys will undoubtedly continue in the future, although their dominance may diminish as other research methods become more appropriate for different developmental phases.

 

Nature and Purpose

 

The term survey research is applied to a type of social-scientific investigation that "studies large and small populations by selecting and studying samples chosen from the populations to discover the relative incidence, distribution, and interrelations of sociological and psychological variables" (Kerlinger, 1973, p. 410). Surveys vary greatly in complexity, sophistication, and cost, ranging from short-term, clerical studies for clarifying an immediate local problem to major national studies attempting to detect significant interrelationships among complex phenomena. In the literature of adult education, these extremes are represented by such studies as an investigation of the learning interests among members of one school board (Stroup, 1960) and a comprehensive survey of participation in adult education in the United States (Johnstone and Rivera, 1965).

In general, surveys in adult education have been conducted to determine the current status of a phenomenon in the field, rather than to probe deeply into causative factors. The usual purpose has been to acquire information for use in making decisions about programs. Such surveys, however, typically have contributed little to the development of adult education theory, as they have been univariate investigations of multivariate phenomena.

Surveys may be classified in several different ways, with descriptive and analytical categories being used most frequently. Descriptive surveys tend to use small samples, measures of central tendency, and percentage distributions of the variables studied. Some descriptive surveys test hypotheses, but they are not generally characterized by sophisticated data analyses. Analytic surveys tend to use large samples, confidence intervals, tests of significance, and multivariate data analyses. Survey designs also may be classified according to an experimental-model typology. At the simplest level, a preexperimental design known as the "one-shot case study" has features such as the collection of observations at one time only, no control over the effects of variables, and no control groups. Sophisticated survey designs may simulate higher levels of experimental design through the use of stratified random sampling, subsampling, multivariate analysis, and panels of respondents.

Surveys may also be classified by the types of variables studied, such as those sociological or psychological in nature, and by the method of obtaining information, such as the personal interview, telephone interview, mail questionnaire, or controlled observation. Reports of surveys published in Adult Education over a twenty-year period suggest that the emphasis in the field has been on descriptive, one-shot surveys studying sociological variables through the use of mailed questionnaires (Dickinson and Rusnell, 1971).

 

Advantages and Disadvantages

 

Some of the important strengths of survey research are listed below:

 

  1. Surveys can provide a fairly accurate description of a field or a phenomenon within a field at a given time.
  2. A great deal of information can be gathered from representatives of a large population, and the information is accurate within sampling error ranges.
  3. Successive surveys of the same population or phenomenon can determine trends over a period of time.
  4. Surveys assist in identifying areas where other types of research are needed by suggesting hypotheses and lines of inquiry.
  5. Surveys may attempt to simulate experimental designs through the use of multivariate data analysis or by comparing the status of two or more groups at two or more times.
  6. Surveys present information about specific, definable populations about which generalizations can be made.
  7. Measurements and observations are made in the natural setting.

 

At the same time, survey research has some obvious weaknesses:

 

1.      Surveys concentrate on describing the present without considering the past or the future.

2.      Surveys are not usually employed as part of a long-range, global research plan; it is difficult to relate a single survey to other surveys in different places or at different times.

3.      The information provided by surveys may be useful to administrators but is not very applicable to work with individuals.

4.      The reliability and validity of the responses to survey questions are difficult to establish, and subjects' failure to respond may affect the results.

5.      Conducting a survey requires skill in a wide variety of research techniques and procedures.

6.      Surveys are ponderous in that once a design is established, it is difficult to modify.

7.      A considerable amount of manpower, time, and money is required to do a survey.

8.      The variables are not usually controlled; so there are a large number of sources of potential error and bias in survey data.

9.      The cooperation of individuals is required, but it may not be given by the total sample.

 

The relatively unsophisticated nature of survey research in adult education has been inevitable in an emerging discipline which lacks a tradition of rigorous empirical research. The increasing number of persons who have earned doctorates in adult education is evidence that the field is beginning to develop a corps of specialists with interest, training, and experience in research procedures. The limitations inherent in the survey method are still quite obvious in much of the research produced in adult education, while the potential strengths, with rare exceptions, have not been fully realized.

The full gamut of complexity and sophistication that is possible in survey research can be observed in the body of studies related to participation in adult education; hence, a perusal of a selection of such studies would be valuable for those aspiring to conduct a survey. Relatively few studies have used complex research designs, and these are usually beyond the abilities and resources of a single researcher. Surveys of participation within a community or an institution are more feasible for the individual. Until recently, participation surveys generally employed simple forms of data analysis that rarely extended beyond frequency and percentage distributions, but some researchers are using more sophisticated types of statistical analyses to identify specific phenomena related to participation in adult education (Boshier, 1971; Dickinson, 1971; Kronus, 1973; Litchfield, 1965).

 

Elements of Survey Design

 

Designing a survey requires the selection of specific procedures and techniques from a wide range of options. Thus the selection process is a complex matter requiring considerable judgment, as there is no single design that is appropriate for every survey. Our intent here is to outline briefly the basic components of survey design and to identify some difficulties the researcher might encounter. Although these elements may not be chronologically discrete or considered in isolation, the major phases in survey design constitute a sequence which conforms to general models of educational research.

The Problem. Obviously, a first step is to choose and define the problem to be investigated. The initial search for a researchable problem may be stimulated by personal experience working in the field, by informal discussions with other researchers, and by critically reading the general literature and previous research in adult education. Although the selection of a specific problem is primarily the responsibility of the researcher, the decision is often influenced by other researchers, agencies offering adult education programs, and funding organizations. The problem selected should be interesting to the investigator personally as well as to others in the field, should promise to contribute to the body of knowledge in adult education, should be investigable with existing survey methods, and should promise to produce benefits worth the cost, time, and effort required. To meet these criteria, researchers cannot choose a problem until they have acquired experience in the field, have read widely in the literature, and have gained considerable knowledge about research methods and techniques so that they can judge whether a particular problem can be resolved with available resources and whether the problem is worth resolving. A negative answer in either case should result in the rejection of a problem.

A precise and explicit statement of the problem must be developed in order to define and limit the scope of the survey.

 



 

Early drafts of a problem statement may tend to be vague and general, but subsequent drafts should sharpen it. Refinement will assist researchers in focusing their efforts and will enable them to communicate the problem accurately to others who may be involved in the research. An imprecise statement of a problem can lead the researcher to develop unrealistic expectations for a study and may result in such a diffusion of energy that the original purpose of the study is forgotten.

The Hypotheses. Once the research problem is clearly defined, it must be presented so that data can be collected and analyzed to resolve the problem. The problem may be posed as a question or several questions that could be answered by collecting descriptive data or as hypotheses to be tested with survey data and accepted or rejected at a predetermined level of statistical significance. In a survey of dropouts from night-school courses, for example, the problem might be phrased as the question "What is the average age of night-school dropouts compared with those who do not drop out?" A hypothesis dealing with the same topic might be this: Those who drop out will have a lower average age than those who do not drop out.

The latter statement is considered a research hypothesis because it is a proposition supported by existing knowledge which describes an expected relationship between two variables. Such hypotheses give direction and precision to the development of a survey design and help to prevent the collection of extraneous data. At the appropriate time during data analysis, the research hypotheses may be restated as null hypotheses for statistical testing.

The most suitable point in a survey to formulate research hypotheses varies with the nature of the problem and the extent of existing knowledge about it. A survey may begin with precise hypotheses derived from previous research, or if previous research is lacking, it may begin with a series of questions and conclude with hypotheses developed from the data collected.

Every survey should build on the findings and methods of previous surveys and serve as a basis for future research. One of the first steps in designing a survey, therefore, is to establish links with existing research and theory so that a single survey is not conducted in isolation. A review and analysis of existing research and theory will help the researcher develop a perspective for the survey, establish its unique contribution to the discipline of adult education, and avoid any unwarranted replication of previous research.

The construction of an appropriate theoretical framework involves the identification, or creation, of relevant concepts and constructs which are generalizations of particular behaviors or conditions that can be observed and measured. The expected relationships among these concepts are stated as research hypotheses. In order to test those relationships, the concepts and constructs must be defined operationally so that empirical observations may be carried out and relevant data accumulated.

The Population and Sample. Another element is defining the population and selecting the sample. The population for a survey consists of the total number of units under consideration in the research problem. Because survey research attempts to estimate values in and make generalizations to given populations, Kish (1965) believes the relevant population must be specified exactly: its characteristics, the units to be studied, its geographic area, and the period in which it will be studied.

Selecting an adequate sample from a specified population is crucial to a successful survey; yet that phase is often poorly handled. Quite frequently, a sample is selected that does not adequately represent the population under study or is not large enough to yield precise analytical results. These inadequacies in sample designs are usually attributed to limitations of money or time. The dangers in compromising the sample size because of cost and time factors are that the sample may be biased and the results may not be sufficiently precise to enable the hypotheses to be tested. Furthermore, the accuracy of survey results is not determined by the proportion of the population included in the sample; rather, accuracy depends on the absolute size of the sample itself. Thus the determination of size should be governed by the desired precision of the survey results, the number of variables being investigated, and the distribution of those variables in the population. In general, descriptive surveys use smaller samples than do analytic surveys, as breakdowns of the sample into subsamples for purposes of multivariate analysis are not required.
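The independence of precision from the sampling fraction can be illustrated with the standard normal-approximation formula for estimating a proportion; the margin of error and confidence level below are illustrative choices, not prescriptions from the text:

```python
import math

def sample_size_for_proportion(margin_of_error, confidence_z=1.96, p=0.5):
    """Sample size needed to estimate a population proportion.

    Uses the conservative p = 0.5 and a normal approximation.
    Note that the population size does not appear in the formula
    at all: the same n serves a town or a nation.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# A margin of error of 5 points at 95 percent confidence needs
# about 385 respondents, whether the population is ten thousand
# or ten million.
print(sample_size_for_proportion(0.05))
```

Tightening the margin to 3 points raises the requirement to roughly 1,068 respondents, which shows how quickly precision becomes a cost question.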

A summary of the characteristics as well as the advantages and disadvantages of different types of sampling plans may be found in many basic research books. However, Selltiz and others (1959) suggest that any sampling design under consideration should be evaluated according to the following criteria:

 

1.      Goal orientation. The survey research objectives must be met at all levels by the sample design.

2.      Representativeness. The sample should accurately represent the population from which it is drawn.

3.      Measurability. The sample design must allow estimates of sampling variability and statistical tests to be computed.

4.      Practicality. The sample design must translate theoretical sampling models into clear, simple, practical, and complete instructions for the conduct of the survey.

5.      Sampling procedure. The means of drawing the sample should be in strict accordance with sampling theory.

6.      Economy. The sample design should allow the survey objectives to be achieved at minimum cost.
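As one sketch of how the representativeness criterion might be met in practice, a proportionate stratified random sample can be drawn from a known frame; the registry, field names, and sampling fraction below are all hypothetical:

```python
import random

def stratified_sample(population, strata_key, fraction, seed=42):
    """Draw a proportionate stratified random sample.

    population: list of dicts describing the sampling frame.
    strata_key: the field that defines the strata.
    Each stratum contributes the same sampling fraction, which
    keeps known subgroups represented in proportion.
    """
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(unit[strata_key], []).append(unit)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Hypothetical registry of 1,000 adult learners in three programs.
registry = [{"id": i, "program": ["basic", "vocational", "liberal"][i % 3]}
            for i in range(1000)]
picked = stratified_sample(registry, "program", 0.10)
print(len(picked))  # about 100, spread across all three programs
```

A fixed seed is used here so the draw is reproducible, which also serves the measurability criterion: the same design can be rerun and its variability estimated.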

 

The Instrument. Another important element of survey research is the construction of the data-collection instrument, whose function is to gather the data required to resolve the research problem. Most surveys use either an interview schedule or a mailed questionnaire, each having a number of advantages and disadvantages. The selection of an appropriate instrument should be guided by the nature of the research problem, the size and distribution of the sample, the research hypotheses, and the available time, money, and supportive services. The instrument chosen should be evaluated, suggest Hill and Kerber (1967), with respect to its reliability, validity, objectivity, and discriminatory power. The instrument should produce consistent results over repeated trials, measure what it is designed to measure, provide data that can be consistently evaluated and interpreted by other investigators, and differentiate among respondents to provide the variation required for statistical analysis. Wherever possible, existing instruments with known reliability and validity should be selected. If suitable scales or indexes are not available, they must be constructed following recognized scaling procedures, such as those developed by Likert, Thurstone, Guttman, and others (Fishbein, 1967).

The accuracy of the data collected in any survey depends on the willingness and ability of the respondents to provide the desired information, and this is influenced in part by the clarity of the instructions and items included in the instrument. The researcher should minimize the probability of error due to carelessness, ignorance, misunderstanding, or deception on the part of respondents in order to avoid jeopardizing the validity of the instrument. A pretest of the instrument will enable the researcher to assess the frequency of errors, and the accuracy of the data provided should be verified if possible through independent sources. Follow-up interviews can verify the reliability of responses to a mailed questionnaire. And teaching interviewers how to conduct interviews and to record and code verbal information can increase the objectivity and accuracy of data collected on interview schedules.
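The agreement between questionnaire responses and follow-up interview responses can be checked with a simple correlation; the scores below are invented for illustration, and a correlation is only one of several agreement measures that might be used:

```python
def pearson_r(xs, ys):
    """Pearson correlation, used here as a rough check of
    agreement between two waves of responses to the same item."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical item scores: mailed questionnaire vs. follow-up interview.
questionnaire = [4, 2, 5, 3, 4, 1, 5, 2]
interview = [4, 3, 5, 3, 4, 2, 4, 2]
r = pearson_r(questionnaire, interview)
print(round(r, 2))
```

A high coefficient suggests the mailed responses are stable; a low one would signal the kinds of carelessness or misunderstanding discussed above.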

Survey instruments must be developed with the proposed data analysis in mind so that appropriate levels of measurement are used. Four such levels are almost universally recognized in the social sciences: nominal, ordinal, interval, and ratio. These levels are described in almost every contemporary statistics textbook, but there is still some debate about which statistical tests are appropriate for each level of measurement. In general, it is advisable to collect data using the highest possible level of measurement, because translation to a lower level may be done after the data are collected, but conversion to a higher level may not. Whatever the levels of measurement selected, the instrument should be precoded to minimize time and error in data processing after the data are gathered.
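The one-way nature of translation between levels can be shown by collapsing interval-level ages into ordinal categories; the category bounds below are arbitrary choices for illustration:

```python
def to_ordinal(age, bounds=(25, 35, 45, 55)):
    """Collapse an interval-level age into an ordinal category code.

    Interval data can always be grouped downward like this, but the
    grouped codes can never be expanded back into exact ages; this
    is why data should be collected at the highest possible level.
    """
    for i, b in enumerate(bounds):
        if age < b:
            return i
    return len(bounds)

ages = [22, 31, 44, 58, 35]
print([to_ordinal(a) for a in ages])  # [0, 1, 2, 4, 2]
```

Precoding the categories on the instrument itself, as the paragraph above recommends, would make this translation a clerical step rather than an analytic decision.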

The Final Steps. When the survey instrument is judged to be complete, a pilot study of a small sample of respondents will indicate the probable nonresponse rate, the variability of the responses to specific items, the suitability of the questions, the adequacy of the instructions provided, the appropriateness of the format and sequence of questions, the adequacy of the coding plan, and the time and cost of data collection. The experience gained in a pilot study will usually result in changes in the instruments that will improve the quality of the data collected by the full survey and reduce the costs. Once the pilot study has been completed, the overall survey design should be reviewed and evaluated following a routine such as that suggested in the Research Profiling Flow Chart (Gephart and Bartos, 1969).

Before administering the survey instrument, the researcher should develop a plan for the data analysis to indicate how each variable will be used. The variables to be used in hypothesis testing should be identified with their respective hypotheses, while other variables thought to be useful for explanatory purposes in secondary analyses should be scrutinized closely and discarded if their utility is not readily apparent. In some cases, hypotheses may be revised to include such additional variables. The appropriate statistical tests should be selected and the levels of measurement assessed to ensure that the assumptions underlying the tests are met by the format of the data to be collected. Although texts can help the researcher choose the right statistical tests, guides using a "decision tree" approach, such as the one published by the Survey Research Center at the University of Michigan (Andrews and others, n.d.), provide a systematic means of selecting appropriate tests. Tests of significance are used frequently in survey research in adult education, whereas confidence intervals are rarely used; however, this practice is not supported by some statisticians, who contend that data presented in terms of confidence intervals are often more meaningful than those presented with tests of significance (Kish, 1959; Stanley, 1966). In addition to selecting the most suitable statistical tests, the researcher should prepare specimen tables to indicate the format and content of the data to be presented in the survey report.
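A confidence interval of the kind Kish and Stanley recommend can be computed directly; the participation figures below are hypothetical:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a proportion
    (z = 1.96 gives roughly 95 percent confidence)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Hypothetical result: 120 of 400 sampled adults report participating
# in adult education during the past year.
lo, hi = proportion_ci(120, 400)
print(f"{lo:.3f} to {hi:.3f}")
```

Reporting "between 25.5 and 34.5 percent" conveys both the estimate and its precision in one statement, which is the substantive advantage the statisticians cited above claim over a bare significance test.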

If the analysis plan is sufficiently detailed, hypothesis testing becomes a routine procedure. The analysis, however, should extend beyond the routine testing of hypotheses to probe, as Rosenberg (1968) suggests, the validity of alternative explanations. The logical probing of data can lead the researcher to identify new explanatory variables and formulate new hypotheses which must be tested in subsequent studies. The process of elaboration involving the analysis of subgroups within a sample is also a useful way of investigating relationships among survey data (Lazarsfeld and others, 1972; Rosenberg, 1968). Although the researcher cannot make detailed plans in advance for all secondary types of data analysis, he should give some general consideration to the procedures that will be followed in order to avoid the everything-against-everything approach that is facilitated by the availability of a computer. The purpose of data analysis is not to produce as many statistically significant results as possible, but rather to evaluate the magnitude and meaning of differences to identify those that are of substantive significance.
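The elaboration process of analyzing subgroups can be sketched as a breakdown of an outcome within categories of a control variable; the records and field names below are invented for illustration:

```python
from collections import Counter

def elaborate(records, outcome, control):
    """Break an outcome's distribution down within subgroups of a
    control variable, in the spirit of Lazarsfeld's elaboration."""
    tables = {}
    for r in records:
        tables.setdefault(r[control], Counter())[r[outcome]] += 1
    return tables

# Hypothetical survey records: does the dropout pattern differ by sex?
records = [
    {"sex": "F", "dropped": True}, {"sex": "F", "dropped": False},
    {"sex": "M", "dropped": True}, {"sex": "M", "dropped": True},
    {"sex": "F", "dropped": False}, {"sex": "M", "dropped": False},
]
for group, counts in sorted(elaborate(records, "dropped", "sex").items()):
    print(group, dict(counts))
```

If the relationship between two variables changes or disappears within subgroups, the researcher has found exactly the kind of alternative explanation the paragraph above urges probing for.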

A description of all the procedures and potential problems related to carrying out a survey in adult education from the design stage to completion is beyond the scope of this chapter. But, in brief, the major phases in implementing the survey design consist of collecting and processing the data, analyzing the data, and preparing the survey report. Each of these phases requires a degree of sophistication and skill in making appropriate decisions to ensure that the survey design is followed. The study of a variety of available works on research and actual experience in carrying out well-designed survey projects will contribute to the building of such abilities.

 

Problems of Survey Research

 

The discipline of adult education has acquired much of its substantive content by borrowing and reformulating knowledge from other disciplines, and the knowledge so acquired is expanding at an increasing rate. However, there has not been a concurrent borrowing and reformulation of knowledge pertaining to the research methods developed in other disciplines; survey research in education generally has been characterized as backward and unsophisticated (Cornell and McLoore, 1963; Trow, 1967).

One reason, unfortunately, is that the training in research currently received by adult educators does not do much to help them develop the skills required to conduct the kinds of survey research that might advance the discipline the most at the present time. Instead of giving preparation related to research design and analysis strategies and the logic of survey research, such training tends to emphasize either the derivation of statistics and precision in experimental design or the technology of conducting surveys. These emphases contribute, for example, to the widespread use of hypothesis testing as opposed to the use of confidence intervals. As a consequence, adult educators often search for statistical rather than substantive significance. The integration of empirical and theoretical elements found in other social sciences, such as sociology, which are more advanced than education with respect to the state of survey research, may be more appropriate to the needs of adult education.

The reliability and validity of survey research in adult education need to be increased by exerting greater control over the sources of error. As Deming (1970) suggests, the factors contributing to error are many, but their effects can be minimized through such steps as adhering rigorously to sampling procedures; determining acceptable levels of sampling error; using only reliable, pretested instruments; increasing accuracy in recording, processing, and reporting data; and recognizing the assumptions underlying the use of statistical tests and data manipulation. The validity of survey research in adult education could also be increased considerably through wider use of the "triangulation of methodologies" process described by Denzin (1970). In the "within-method" approach to triangulation, a single research method, such as the survey, uses several scales to measure the same phenomenon, while a "between-methods" approach uses several research methods to collect data about the same phenomenon. Research hypotheses tested with data from a series of complementary investigations or measures can attain results of higher validity than those attainable in a study using a single method or instrument.

Survey research in adult education tends to lack rigor and sophistication. For the most part, surveys have been conducted by individuals within a brief time, and the primary goal has been to complete graduate degree requirements rather than to make a significant contribution to the emerging discipline of adult education. Many important problems in the field remain to be resolved, but individuals with little time and money cannot be expected to resolve them. Sophisticated and potentially valuable research designs require a long-term commitment of resources and personnel and can be used only if teams of researchers can be formed and supported so that the complex tasks of survey research may be undertaken cooperatively.

 


January, 2005
