Aids Research Paper Introduction Paul

I.  General Rules

The function of your paper's conclusion is to restate the main argument. It reminds the reader of the strengths of your main argument(s) and reiterates the most important evidence supporting those argument(s). Do this by returning to the context, background, and necessity of pursuing the research problem you investigated in relation to an issue, controversy, or gap found in the literature. Make sure, however, that your conclusion is not simply a repetitive summary of the findings; that would reduce the impact of the argument(s) you have developed in your essay.

When writing the conclusion to your paper, follow these general rules:

  • State your conclusions in clear, simple language. Re-state the purpose of your study, then state how your findings differ from or support those of other studies and why [i.e., what were the unique or new contributions your study made to the overall research about your topic?].
  • Do not simply reiterate your results or the discussion of your results. Provide a synthesis of the arguments presented in the paper to show how they converge to address the research problem and the overall objectives of your study.
  • Indicate opportunities for future research if you haven't already done so in the discussion section of your paper. Highlighting the need for further research provides the reader with evidence that you have an in-depth awareness of the research problem.

Consider the following points to help ensure your conclusion is presented well:

  1. If the argument or purpose of your paper is complex, you may need to summarize the argument for your reader.
  2. If, prior to your conclusion, you have not yet explained the significance of your findings or if you are proceeding inductively, use the end of your paper to describe your main points and explain their significance.
  3. Move from a detailed to a general level of consideration that returns the topic to the context provided by the introduction or within a new context that emerges from the data.

The conclusion also provides a place for you to persuasively and succinctly restate your research problem, given that the reader has now been presented with all the information about the topic. Depending on the discipline you are writing in, the concluding paragraph may contain your reflections on the evidence presented, or on the essay's central research problem. However, the nature of being introspective about the research you have done will depend on the topic and whether your professor wants you to express your observations in this way.

NOTE: If asked to think introspectively about the topics, do not delve into idle speculation. Being introspective means looking within yourself as an author to try and understand an issue more deeply, not to guess at possible outcomes or make up scenarios not supported by evidence.


II.  Developing a Compelling Conclusion

Although an effective conclusion needs to be clear and succinct, it does not need to be written passively or lack a compelling narrative. To move beyond merely summarizing the key points of your research paper, consider any of the following strategies:

  1. If your essay deals with a contemporary problem, warn readers of the possible consequences of not attending to the problem.
  2. Recommend a specific course or courses of action that, if adopted, could address a specific problem in practice or in the development of new knowledge.
  3. Cite a relevant quotation or expert opinion already noted in your paper in order to lend authority to the conclusion you have reached [a good place to look is research from your literature review].
  4. Explain the consequences of your research in a way that elicits action or demonstrates urgency in seeking change.
  5. Restate a key statistic, fact, or visual image to emphasize the ultimate point of your paper.
  6. If your discipline encourages personal reflection, illustrate your concluding point with a relevant narrative drawn from your own life experiences.
  7. Return to an anecdote, an example, or a quotation that you presented in your introduction, but add further insight derived from the findings of your study; use your interpretation of results to recast it in new or important ways.
  8. Provide a "take-home" message in the form of a strong, succinct statement that you want the reader to remember about your study.

III. Problems to Avoid

Failure to be concise
Your conclusion section should be concise and to the point. Conclusions that are too lengthy often contain unnecessary information. The conclusion is not the place for details about your methodology or results. Although you should give a summary of what was learned from your research, this summary should be relatively brief, since the emphasis in the conclusion is on the implications, evaluations, insights, and other forms of analysis that you make.

Failure to comment on larger, more significant issues
In the introduction, your task was to move from the general [the field of study] to the specific [the research problem]. However, in the conclusion, your task is to move from a specific discussion [your research problem] back to a general discussion [i.e., how your research contributes new understanding or fills an important gap in the literature]. In short, the conclusion is where you should place your research within a larger context [visualize your paper as an hourglass--start with a broad introduction and review of the literature, move to the specific analysis and discussion, conclude with a broad summary of the study's implications and significance].

Failure to reveal problems and negative results
Negative aspects of the research process should never be ignored. Problems, drawbacks, and challenges encountered during your study should be summarized as a way of qualifying your overall conclusions. If you encountered negative or unintended results [i.e., findings that contradict your hypotheses or that you did not anticipate], you must report them in the results section and discuss their implications in the discussion section of your paper. In the conclusion, use your summary of the negative results as an opportunity to explain their possible significance and/or how they may form the basis for future research.

Failure to provide a clear summary of what was learned
In order to be able to discuss how your research fits back into your field of study [and possibly the world at large], you need to summarize briefly and succinctly how it contributes to new knowledge or a new understanding about the research problem. This element of your conclusion may be only a few sentences long.

Failure to match the objectives of your research
Often research objectives in the social sciences change while the research is being carried out. This is not a problem unless you forget to go back and refine the original objectives in your introduction. As these changes emerge, they must be documented so that they accurately reflect what you were trying to accomplish in your research [not what you thought you might accomplish when you began].

Resist the urge to apologize
If you've immersed yourself in studying the research problem, you presumably should know a good deal about it, perhaps even more than your professor! Nevertheless, by the time you have finished writing, you may be having some doubts about what you have produced. Repress those doubts!  Don't undermine your authority by saying something like, "This is just one approach to examining this problem; there may be other, much better approaches that...." The overall tone of your conclusion should convey confidence to the reader.



Abstract

It is clear that many sources of evidence have contributed to our grasp of what does and does not work in HIV/AIDS education. Despite this, there has recently been a distinct move to narrow the evidence of success in this field to experimental and comparative work, with randomized controlled trials positioned as the `gold standard'. Here we take up the question of what constitutes evidence in HIV/AIDS education. We explore the social and historical factors which `privilege' certain kinds of evidence above others and question whether there exists but one way of understanding what works best in HIV/AIDS education. We draw expressly upon earlier insights and experience in educational evaluation per se and put a case that evidence gleaned through a range of research methods is more useful than exclusive reliance on experimental and comparative work.

Introduction

In 15 years of responding to the HIV/AIDS epidemic, much has been learned about what does and does not work in HIV/AIDS education. Our knowledge of the characteristics that demonstrate effective and non-effective programming in this area comes from diverse sources. Some of the techniques that have been employed to determine whether a particular form of education works are less rigorous, though not necessarily less appropriate to a given set of circumstances, than others (Aggleton, 1997a). At times when it has been necessary to act with urgency—not uncommon throughout the epidemic—HIV educators have had to draw on personal experience, intuition and educated guesses to get education programmes under way. Over time, HIV educators have accumulated knowledge of what works in HIV/AIDS education through observation, especially observations that have been systematically applied. In other circumstances, assessments against the specific objectives of HIV/AIDS education programmes and demonstration projects have provided informative data on the outcomes of interventions. When there have been acceptable sites, participants and ethical considerations, as well as the necessary time and financial resources, randomized controlled trials (RCTs) and other experimental studies have contributed to our understanding of success in HIV/AIDS education.

Many sources of evidence, each with its corresponding strengths and limitations, have informed HIV/AIDS education and related public health policy. No single source has been sufficient to determine the appropriateness and effectiveness of different strategies. Successful approaches to HIV/AIDS education have been determined by bringing together evidence collected in different ways (Auerbach et al., 1994; Coates et al., 1996; Aggleton, 1997a). This is clearly evident in comprehensive lists of the characteristics of effective programming in HIV/AIDS education such as the compendium produced, for example, by the Washington State Department of Health (Washington State Department of Health, 1994). This lists some attributes of effective programming that are axiomatic (e.g. target the highest risk populations rather than the easiest to reach or teach, or the politically `fashionable'). Other features are based on the experiences and observations of educators (use the familiar language or vernacular of the targeted population, take into account cultural and social context, and ensure that programmes resonate with people's personal experiences). Some characteristics are designed to facilitate outcomes evaluation (e.g. have clearly defined objectives; specify what changes in behaviours are to be achieved and by whom). At times, the recommended strategies are supported by experimental results [e.g. use `natural leaders' among peers—see the work involving key opinion leaders (Kelly et al., 1991, 1992)].

Despite the fact that many sources of evidence have contributed to success in HIV/AIDS education (as measured through such outcomes as behaviour change or declining HIV infections), there has recently been a distinct move to narrow the evidence of success in this field. One way this has occurred is through effectiveness reviews that have privileged certain research methods rather than assessing the methods in terms of their appropriateness to the particular programme or intervention. There has been in the past few years a growing tendency to embrace experimental and comparative work—particularly RCTs as some sort of `gold standard'—as the only appropriate way to evaluate HIV/AIDS interventions. Correspondingly, other sources of evidence have been devalued. This push has come from several quarters, mainly in the UK (Oakley et al., 1995; Oakley and Fullerton, 1996) and in the US (Aral and Peterman, 1996; O'Leary et al., 1997).1

It is appropriate and timely to consider the question of what constitutes evidence in HIV/AIDS education, to consider the social and historical factors which `privilege' certain kinds of evidence above others, and to question whether there exists one way of understanding what works best in HIV/AIDS education. By doing so, we hope to explore some important issues concerning the research methods that are of greatest relevance to HIV/AIDS education, and how questions of evidence and research methods relate to the policy and practice of HIV/AIDS education. In this paper we examine these issues and draw expressly upon earlier insights and experience in educational evaluation per se to argue that evidence gleaned through a number of research methods is more useful than an exclusive reliance on experimental work. Our paper is a challenge to a certain prevailing orthodoxy, not a comprehensive review of successful HIV/AIDS education.

Evidence and health promotion

Without doubt, evidence is a basic requirement for rational decisions in all walks of life, in business, politics, law and medicine (O'Rourke, 1997). Nonetheless, there are many different and legitimate ways of understanding some things to be true, and of accumulating evidence of what works. Some people bring to their various professions useful characteristics such as business acumen, political nous, wisdom and even the healing touch. These traits may be enhanced through experience, observation and practice, not only through published data and accounts of controlled studies. So too in the HIV/AIDS arena. Educators and health policy analysts should be encouraged to draw upon the evidence which comes from many sources, some of it intuitive or experiential, some gleaned from systematic observation and some based on experimentation. Evidence and information derived from various means should be valued, carefully examined and filtered to see how applicable they are to a given set of circumstances. Likewise, it is important to acknowledge researcher expertise as well as practitioner expertise. Effective evaluation needs researchers skilled at understanding research methods in order to evaluate `success' appropriately.

The debate surrounding the movement towards evidence-based medicine points to the pitfalls of accepting the results of clinical research trials as the only `true' kind of medical knowledge (O'Rourke, 1997; Sweet, 1998). Not only does such a view fail to recognize the situated, historically and socially constructed nature of human understandings, it also omits the possibility that such understandings can and do change over time. The published evidence is highly selected, incomplete and often dated. The patients in RCTs are often not representative of the general patient population and trials reported in the literature rarely contain more than 50% of patients initially screened. RCTs usually test only one therapeutic agent, even though many patients with common conditions require multiple drugs. Many journals are still reluctant to publish negative or inconclusive outcomes so there is a publication bias for positive results. Major controlled trials take time to set up, conduct, analyse and report, so that by the time benefit can be ascribed to a particular treatment it may have been superseded by another with apparently greater efficacy and fewer side effects (O'Rourke, 1997). In one recent study of surgical operations, for example, only 39% of treatment evaluation questions could have been answered by an RCT in an ideal clinical research setting (Solomon and McLeod, 1995). Likewise, in the field of nursing, White (White, 1997) has argued that not all nursing problems are capable of being reduced to a clear issue that can be solved by controlled experimental work and many require artistry to find a solution. White has also pointed to the possibility that researchers applying for funding for non-experimental research may be disadvantaged, and the temptation to use experimental findings to justify restricting choice of interventions for both practitioner and patient. Moreover, there is the problem of the gap between research-based knowledge and its implementation in health-related practice (Berrow et al., 1997).

Each of these problems associated with clinical research trials in medicine has its parallel in, and is equally applicable to, experimental studies in the field of HIV/AIDS education. Apart from the difficulty of exercising experimental control in a multi-faceted, socially-based intervention, another major problem remains. If knowledge of what works in HIV/AIDS education were to be limited to experimental evidence, one would surely want to ensure that the evidence was not skewed in favour of particular interventions and populations. This is certainly not a tenable proposition. Some interventions lend themselves to experimentation (e.g. teacher-led sex education), others do not (e.g. community-based HIV/AIDS education). Likewise, some populations are relatively amenable to experimental work (e.g. children who attend the same school every day), others not at all (e.g. non-gay identifying men who have sex with other men who visit cruising spots on an irregular basis). It is a perverse notion indeed to admit as evidence only that which can be tailored to the experimental suit. We must be willing to be much more realistic and flexible in our approach to evaluation if we wish to capture that which is most relevant to the frequently complex world in which infections occur and may sometimes be avoided.

Gaining evidence

As indicated above, the position taken here is that evidence in HIV/AIDS education comes, by necessity and by design, from diverse sources. This position is supported by researchers with long-standing experience in HIV/AIDS. As Coates et al. [(Coates et al., 1996), p. 6] have recently remarked, `a variety of research methods (including observational studies, quasi-experimental designs, demonstration projects, and evaluations of interventions and societal changes) are needed to illuminate effective HIV prevention strategies'. Such a position finds tentative support among researchers from other social disciplines: `The realities of social work...render the use of RCTs difficult to realise routinely. Their cost, and the fairly loose relationship that often exists between our understanding of social problems and the responses we make to them...can mean it is not sensible to place all our evaluative eggs in the RCT basket' [(Macdonald, 1996), p. 43]. Indeed, one of the strongest present-day advocates for RCTs recommended in an earlier time, `This is not, of course, to say that the procedure of randomized controlled evaluation is the only means to reliable knowledge, is sufficient in itself, or is always the right approach' [(Oakley, 1990), p. 193, emphases in original], a view which we fully endorse.

The problems of affording RCTs the same status in social science as in biomedicine have been outlined numerous times, such as in a recent editorial for Health Education Research (Tones, 1997): random allocation of participants, except on a relatively small scale, is expensive and problematic, and so artificial that generalization to `normal' populations is rarely possible. It is almost impossible to stop `leakage' from experimental to control groups or to avoid competing inputs from other sources. Experimental designs may work when comparing a narrowly focussed intervention with no intervention, but educational interventions typically involve complex multifactorial inputs.

Despite the limitations of comparative and experimental designs in health promotion generally, and in HIV/AIDS education specifically, they continue to be enthusiastically embraced in some quarters. The structure of academia and the politics of research funding are contributing agents (Chapman, 1993; Aggleton, 1997b). Academic advancement is largely dependent on research output. Research units rely on grants for their continued existence and individual researchers on funds to generate output. To persons untrained in the social and educational sciences, evidence accumulated through experimental trials has greater face validity. Wherever such persons control the funding of research and they indicate, whether by commission or by omission, that a particular research method is to be employed or is to be given preference in the distribution of research grants, researchers are likely to oblige. In a regimen where, for example, only controlled experimental studies are funded, researchers are faced with the stark choice between abandoning their research endeavours or ensuring that their research proposals include both experimental and control groups.2 The latter option may be pursued irrespective of the merits of the research method or the overall importance of the research question to be explored (Speller et al., 1997).

Community-based HIV/AIDS education campaigns have been highly effective in bringing about rapid, large-scale risk behaviour change, at least initially.3 Bye (Bye, 1990), for example, has described the significant nature of behavioural change among gay men in San Francisco in the early 1980s where community-wide attitudinal and behavioural change was accomplished through planned change in gay community norms, values and institutions. Multiple strategies focussed not on individuals, mass media, and personal testing and counselling, but on full-scale community mobilization, the use of local gay media, interpersonal communication within informal peer networks, grass-roots outreach, building community, and participation and involvement through volunteer work. The effectiveness of the community mobilization strategies was documented through various assessments of the acceptance of the education messages, and high awareness and recall of the messages. A substantial decline in unsafe sexual behaviours among gay men in San Francisco and the decline in the number of new HIV infections among this group provided sound evidence of success. The efficacy of multi-level, community-based interventions in restricting the development of the epidemic among gay men has also been documented in London (King, 1993) and in Sydney (Kippax et al., 1993).

As Bye (Bye, 1990) correctly points out, research methods which might be used in the evaluation of individual-level behavioural change interventions may not be appropriate for the evaluation of community-level strategies that emphasize sociocultural change accomplished through a diffusion process. Carefully designed experiments are rarely possible and certainly not appropriate when the emphasis is on the mobilization of communities, largely informal social networks and peer-to-peer communication. Evidence of success here is best furnished through data on the acceptance and awareness of the educational messages, community change (e.g. the uptake of safer sexual practices), decreased morbidity (e.g. fewer new infections) and project replication.

For determining whether or not a particular HIV/AIDS education programme has been successful, the standard outcome measures have often been changes in individuals' knowledge, attitudes, beliefs, intentions, behavioural skills (such as condom use skills, refusal skills and negotiation skills) and behaviours (such as needle and syringe sharing, unprotected anal intercourse, and condom use). Self-report remains the most feasible procedure for obtaining information about AIDS risk behaviours in diverse populations, but it is useful to corroborate these data with reliable seroconversion data where possible (Auerbach et al., 1994) or valid population-level HIV incidence rates (Aral and Peterman, 1996). Low incidence and latency periods preclude the use of seroconversion rates as the outcome measure for experimental interventions, which usually have few participants and typically are of short duration. However, in situations where HIV testing is normative and frequent (e.g. among gay men in Australia), seroconversion and HIV incidence rates may provide a useful outcome measure of the impact of community and population level HIV/AIDS education.
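
To make the point about low incidence and small samples concrete, consider a rough back-of-the-envelope calculation. The sketch below is a minimal illustration only; the trial size, incidence rate and effect size are hypothetical assumptions, not figures drawn from the studies cited in this paper.

```python
# Back-of-the-envelope check: why seroconversion is rarely a usable endpoint
# in small, short experimental HIV-education trials.
# All parameter values below are hypothetical, chosen only for illustration.

def expected_seroconversions(n_participants: int, annual_incidence: float, years: float) -> float:
    """Expected number of new infections, assuming a constant annual incidence."""
    return n_participants * annual_incidence * years

# Hypothetical trial: 200 participants per arm, 1% annual incidence, 1-year
# follow-up, and an assumed 50% reduction in risk in the intervention arm.
control_events = expected_seroconversions(200, 0.010, 1.0)       # about 2 expected infections
intervention_events = expected_seroconversions(200, 0.005, 1.0)  # about 1 expected infection

print(f"Expected seroconversions - control: {control_events:.1f}, "
      f"intervention: {intervention_events:.1f}")
# With only a handful of expected events, even a large true effect is
# statistically undetectable, which is why self-reported behaviour
# (corroborated where possible) is usually the primary outcome in such studies.
```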

An alternative approach to understanding the effects of HIV/AIDS education might be to use mathematical modelling to provide evidence for how prevention activity may reduce HIV transmission. Multi-level models incorporating relevant demographic, epidemiological, risk behaviour and prevention parameters can be generated to show how a given educational programme may reduce infection (Auerbach et al., 1994). Furthermore, in a climate of fiscal restraint, cost–benefit analyses setting the cost of the intervention against the medical care and other savings that would be achieved through averting HIV infections could be invaluable for garnering support for a particular intervention. Both of these approaches offer ideal ways of engaging politicians and other decision makers in HIV work. Alongside RCTs, they have a distinctly political tone, for as has recently been suggested: `Perhaps the only real justification for the experimental design is `political', and derives from the need to provide `hard' evidence of effectiveness and efficiency in order to gain the funding needed to stay in business!' [(Tones, 1997), p. ii].
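
As a minimal sketch of the kind of cost–benefit arithmetic described above, the following example sets the cost of a hypothetical programme against the care costs saved by averted infections. Every parameter name and figure is an assumption introduced for illustration; none comes from the modelling literature cited here.

```python
# Illustrative cost-benefit sketch for an HIV prevention programme.
# Every figure below is a hypothetical assumption, not an empirical estimate.

population_reached = 10_000      # people reached by the programme
baseline_incidence = 0.01        # assumed annual infection risk without the programme
relative_risk_reduction = 0.30   # assumed reduction in risk attributable to the programme
programme_cost = 500_000         # assumed cost of delivering the programme (dollars)
lifetime_care_cost = 200_000     # assumed lifetime medical cost per HIV infection

expected_infections = population_reached * baseline_incidence
infections_averted = expected_infections * relative_risk_reduction
care_costs_saved = infections_averted * lifetime_care_cost
net_benefit = care_costs_saved - programme_cost

print(f"Infections averted: {infections_averted:.0f}")
print(f"Care costs saved:   ${care_costs_saved:,.0f}")
print(f"Net benefit:        ${net_benefit:,.0f}")
# A positive net benefit of this kind is the sort of 'hard' figure that can help
# engage decision makers, though it rests entirely on the assumed parameters.
```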

A feeling of déjà vu

Now we want to look back briefly to a pre-AIDS era, and a time when there was a marked shift away from quantitative and experimental evaluation towards alternative, non-experimental evaluation strategies among those working in another area of social intervention. A landmark in the development of educational evaluation was US domestic reaction to the launch of the Soviet Union's Sputnik I in 1957 and the subsequent launching of a person into space in 1961 (Haney, 1981; Glaser and Silver, 1994; McBride and Schostak, 1995). Questions were raised about the quality and content of American education, prompting federal support for school curriculum improvement in a number of subjects, notably mathematics and science. Large-scale federal and state-funded evaluations of teaching projects were implemented. The majority of these evaluations were quantitative and experimental in nature, and, in retrospect, yielded little by way of helpful information (McBride and Schostak, 1995). Following a great deal of debate in the educational literature throughout the 1960s and early 1970s, a new breed of evaluators and several different prototypes of evaluation gradually emerged. Experimental studies declined from a position as the pre-eminent research method to become by the mid-1970s just one among an array of techniques for gauging success (Jenkins, 1976).

This decline of experimental work in educational evaluation came about for many reasons. There is scope here [drawing on (Jenkins, 1976)] to mention only a few of the central issues of what was then a complex debate: the tradition of individual measurement associated with experimental studies attracted increasing criticism as being too narrowly focused. Evaluators were encouraged to take a broader view of what is eligible for collection as evaluation data. Various audiences have different information needs not all of which can be met through traditional experiments. `Formative' evaluations rather than `summative' ones were required to provide the data to improve ongoing courses. Education came to be seen as a dynamic process, subject to adaptation and change, with constant interaction between the end result and the means adopted to achieve it. It was considered bad evaluation practice to undervalue the experience of practitioners, in this case teachers in classrooms. It was recognized that all research is initially based on value judgements of some kind and so no research is totally `objective' or free from bias.

Other key players in the debate argued along similar lines. For example, Cronbach (Cronbach, 1963) called for evaluation that is formative and ongoing, deploying a wide battery of techniques necessary to describe and assess the outcomes of educational programmes. In his view, the greatest service evaluation can perform is to identify aspects of a course where revision is desirable. White (White, 1971) argued that there is no single process of evaluation: there are many different forms of evaluation, all of which should play a part in assessing whether or not a new curriculum is any good. Parlett (Parlett, 1975) criticized the experimental or `agricultural–botany' paradigm of evaluation with its reliance on testing, quantitative data and notions of objectivity, reliability and neutrality. In its place he called for a more social-anthropological paradigm with a wider information base and an emphasis on interpreting, explaining and discovering patterns of coherence and interconnectedness.

What then were some of the alternative paradigms to emerge in educational evaluation in the 1970s and how might HIV/AIDS education benefit from these earlier understandings of the evaluation process? Jenkins (Jenkins, 1976) summarized six prototypes of evaluation, each serving different purposes and employing different technologies and raising different problems. Here we briefly outline the various prototypes, adapting Jenkins' (Jenkins, 1976) curriculum evaluation typology to the perspective of the HIV/AIDS educator. Table I sets out the main characteristics, strengths and limitations of each prototype.

Admittedly, Table I oversimplifies the situation particularly in respect of important ways in which the alternative models supplement each other. The key message is that there is a variety of methods for evaluating programmes, serving a wide range of purposes and audiences. The conventional paradigm of educational research (i.e. experimentation) is complemented by other approaches. The objective, goal-based model determines the degree to which desired changes in knowledge, attitudes and behaviour are actually taking place. Goal-free evaluation attempts to escape the `tunnel vision' of goal-based evaluation by assessing the worth of programmes against a set of a priori checkpoints. Self-study or self-deliberation is an ongoing evaluative activity focussing on the process of educational change to gain knowledge critical in strengthening our efforts and those of our clients. Action research, conducted by practitioners and involving the clients or community in the change process, fits within the self-study paradigm. Illuminative evaluation [embraced recently in the broader health education literature (Tones, 1997)] is concerned to portray day-to-day aspects of the setting: to isolate its significant features; delineate cycles of cause and effect; and comprehend relationships between beliefs and practices, and between organizational patterns and the responses of individuals (Parlett and Hamilton, 1972). Decision-making models serve planning decisions at the various stages of programme development: at the outset when setting objectives to suit a particular context; when determining the procedure to follow; when evaluating how well the programme is proceeding; and at the conclusion when deciding whether to reject or recycle the project.

Here we encounter no fewer than six different prototypes and six different ways of looking at what works in HIV/AIDS education. As has been our thesis throughout, no single model supplants all others. Rather, there are useful ways in which the techniques supplement and complement each other. In so doing, and by accumulating evidence from a variety of sources, data triangulation may be achieved. If the findings from different sources point in the same direction, this is good evidence that a programme really works. So much the better if the data also contain rich information on the context and process so that the intervention may be replicated and improved.

Policy and practice

Given the urgency of the epidemic, and the need for effective prevention and care, it is imperative to examine the important relationship between knowledge and methods, on the one hand, and HIV/AIDS education policy and practice, on the other. The uncompromising position that the only useful data are those derived from RCTs fails to understand the rather loose association between evidence, public health policy and the idiosyncratic, at times contradictory, choices individuals make about their own health. A prime example is that of injecting drug use (IDU) and HIV transmission. There is clear evidence—some from RCTs, though most from comparative and case studies—that multiple prevention measures of education, needle/syringe exchange programmes, methadone maintenance and accepting injecting drug users as legitimate citizens are effective (Wodak, 1996). Some of the strongest evidence comes from data which show declining IDU HIV transmission in countries which have adopted such multiple prevention measures (e.g. Netherlands, Switzerland) compared with those which have not (e.g. US), but see also Stimson et al. (Stimson et al., 1998) concerning recent research in the developing world. Despite this evidence, there is still a reluctance on the part of politicians and policy makers in many countries to rethink their belief in the effectiveness of drug law enforcement and to implement proven measures such as needle/syringe exchanges. The key challenge for HIV prevention in this area is how to establish greater openness among political leaders and decision makers.

Experimental evaluations with their emphasis on behavioural change at the individual level have limited explanatory power, and tend to obscure key social and relational factors involved in behaviour, such as peer pressure, norms, obligations, emotions, cultural beliefs and organizational structures of communities (Auerbach et al., 1994; Aggleton, 1996b). For this reason, research involving ethnographies, social network analyses and community outreach evaluations is needed to illuminate the important linkages between social structure and individual behaviours. For example, it is imperative that we understand the social conditions which may contribute to HIV transmission in developing countries. Lack of access to universal health care, lack of funding for HIV/AIDS education, and inequalities between women and men are some of the many conditions which may fuel an epidemic. Likewise, an understanding of the social, political and cultural dimensions of IDU is needed if we are to deal with vulnerability to HIV among injecting drug users. Broadly, racial and ethnic discrimination and chronic unemployment are two factors that are integral to high rates of IDU and the subsequent transmission of HIV among certain segments of the population in Western industrialized countries. At the group level, initiation practices and notions of trust and friendship have to be understood and taken into account when developing HIV/AIDS education for injecting drug users (Auerbach et al., 1994; Loxley and Ovenden, 1995; Crofts et al., 1996).

One final point about research methods and HIV/AIDS educational practice is that successful interventions are of their essence likely to be multifaceted. In developed world gay communities, for example, where men have adopted safe sex practices and reduced HIV transmission, success has come about not through a single intervention that can be studied experimentally. On the contrary, change has been achieved through multiple approaches which operate synergistically—including community development, skills building, peer education, media campaigns, destigmatization, and counselling and testing (Bye, 1990; King, 1993; Kippax et al., 1993; Auerbach et al., 1994). Similarly, most interventions to reduce the transmission of HIV among injecting drug users contain many potentially active ingredients (Des Jarlais and Friedman, 1996). To gain real insight into successes in gay communities or among injecting drug users, illuminative rather than experimental evaluations are required. An RCT of this element or that may tell us something about an individual educational factor, but it reveals little about its relative contribution as part of a multifaceted programme. Of even greater concern, it may suggest to those who fund programmes that less expensive, singular interventions are sufficient to achieve programme success when clearly they are not.

Conclusions

We have argued against the hardline position that the only useful evidence in HIV/AIDS education comes from experimentation, including RCTs, and have called for a more encompassing view of what constitutes evidence in this field. The uncompromising proposition that we can only truly know which forms of education are effective after they have been judged against RCT criteria unnecessarily restricts our knowledge base, devalues useful alternative research methods, and compromises policy and practice. Experimental researchers are not the sole repository of knowledge, nor indeed are comparative evaluations the only way in which we have come to know about the world of HIV prevention and care. Much of what we know in HIV/AIDS education depends on insights from practitioners, programme managers, community members, theorists and scholars from non-experimental traditions. Some evidence has been gleaned from true experimental designs, other evidence from qualitative, illuminative and community-based work. Given the limited role that RCTs have in this area, putting a false premium on experimental and controlled studies will weaken rather than strengthen future efforts to minimize HIV transmission.

Looking back to a parallel debate about appropriate research methods for use in evaluating educational programmes more generally, we find that after some early cold war flirtation with experimental comparative designs, educational evaluators came later to adopt a more inclusive stance. Leading educational evaluators from the 1970s onwards have embraced multiple and diverse evaluation strategies. It is recognized that different technologies—goal-based evaluation, staff self-study, experimental control, illuminative evaluation and such-like—contribute worthwhile information according to the various purposes and audiences of the assessment. No single evaluation prototype can answer all relevant questions and none is without its limitations. The strongest evidence is that which is consistent and comes from a variety of investigators, employing dissimilar methods and taking different theoretical perspectives.

The recent focus on experimental studies and the call for more RCTs has one other hidden trap. It diverts our attention from important emerging issues in HIV/AIDS education. At the HIV Prevention Works Satellite Symposium of the XI International Conference on AIDS in Vancouver it was suggested that experimental procedures could contribute little to current priorities in HIV/AIDS education. Other techniques were needed to address priority areas for HIV prevention research, e.g. the effects of multiple interventions; how best to characterize and conceptualize interventions such as peer education, community mobilization and outreach work; how best to involve people living with HIV and AIDS in the design, implementation and evaluation of programmes; and how best to achieve commensurability and synergy between public sector and NGO/community-based responses (Aggleton, 1996b). If the current fashion for more RCTs and other experimental studies persists, there is a real danger that these and other important new directions for HIV/AIDS education will go quite unresearched.

Table I.

Prototypes of evaluation

Prototype: Objectives model
  Key emphasis: Objectives of the HIV/AIDS programme
  Purpose: To measure target group progress towards objectives
  Key activities: Specify the objectives; measure progress
  Key viewpoint used to delimit study: Programme manager; HIV/AIDS educator
  Outside experts needed: Measurement specialists
  HIV/AIDS educator involvement: Conceptualize objectives; collect and interpret data
  Risks: Oversimplify goals; ignore processes
  Pay-off: Ascertain target group progress towards specific objectives

Prototype: Self-study
  Key emphasis: Staff self-deliberation
  Purpose: To review content and procedures of the HIV/AIDS programme
  Key activities: Discuss HIV/AIDS programme; make professional judgements
  Key viewpoint used to delimit study: Programme manager; HIV/AIDS educator
  Outside experts needed: None, but possible outside authentication or technical help
  HIV/AIDS educator involvement: Committee discussions and decision making
  Risks: Exhaust staff; ignore values of outsiders
  Pay-off: Increase staff responsibility

Prototype: Illumination
  Key emphasis: Description and judgement data
  Purpose: To report the ways different people see the HIV/AIDS programme
  Key activities: Discover what stakeholders want to know; observe; gather opinions
  Key viewpoint used to delimit study: Audience of final report
  Outside experts needed: `Social anthropologists'
  HIV/AIDS educator involvement: Keep logs; give opinions
  Risks: Stir up value conflicts; ignore causes
  Pay-off: Broad picture of the nature of the HIV/AIDS programme

Prototype: Decision making
  Key emphasis: Decision making and accountability
  Purpose: To facilitate rational and continual decision making
  Key activities: Identify upcoming alternatives; study implications; set up quality control
  Key viewpoint used to delimit study: Administrator; programme manager
  Outside experts needed: Operations analysts
  HIV/AIDS educator involvement: Anticipate decisions, contingencies
  Risks: Overvalue efficiency; undervalue goals
  Pay-off: Programmes sensitive to feedback

Prototype: Experimentation
  Key emphasis: Cause and effect relationships
  Purpose: To seek simple but enduring explanation of what works in HIV/AIDS education
  Key activities: Exercise experimental control and systematic variation
  Key viewpoint used to delimit study: Theorist; HIV/AIDS researcher
  Outside experts needed: HIV/AIDS research; statistical analysts
  HIV/AIDS educator involvement: Tolerate experimental constraints
  Risks: Artificiality; lack of generalizability
  Pay-off: Rules for developing new HIV/AIDS programmes

Prototype: Goal-free evaluation
  Key emphasis: Goal-free checklist
  Purpose: To assess the HIV/AIDS programme's effects (irrespective of its goals)
  Key activities: Ignore proponent claims; follow checklist
  Key viewpoint used to delimit study: Evaluator
  Outside experts needed: Independent analyst
  HIV/AIDS educator involvement: Make the HIV/AIDS programme accessible
  Risks: Overvalue documents and record keeping
  Pay-off: Data on effects with little risk that the evaluator will be co-opted by proponents

Adapted from Jenkins (1976).


This paper was prepared when P. Van de Ven was on a Public Health Travelling Fellowship awarded by the Australian National Health and Medical Research Council. He had discussions with various researchers, policy analysts and practitioners in the Netherlands and the UK but the views herein are the authors' own.

References

Aggleton, P. (1996) Global priorities for HIV/AIDS intervention research. International Journal of STD and AIDS, 7 (Suppl. 2), 13–16.

Aggleton, P. (1996b) Research for prevention education: future directions. Keynote address at the HIV Prevention Works Satellite Symposium, XI International Conference on AIDS, Vancouver, Canada.

Aggleton, P. (1997a) Success in HIV Prevention: Some Strategies and Approaches. AIDS Education and Research Trust, Horsham, West Sussex.

Aggleton, P. (1997) Developing and sustaining interventions for prevention and care. National AIDS Bulletin, 11(4), 12–17.

Aral, S. O. and Peterman, T. A. (1996) Measuring outcomes of behavioural interventions for STD/HIV prevention. International Journal of STD and AIDS, 7 (Suppl. 2), 30–38.

Auerbach, J. D., Wypijewska, C. and Brodie, H. K. H. (1994) AIDS and Behavior: An Integrated Approach. National Academy Press, Washington, DC.

Berrow, D., Humphreys, C. and Hayward, J. (1997) Understanding the relationship between research and clinical policy: a study of clinicians' views. Quality in Health Care, 6, 181–186.

Bye, L. L. (1990) Moving beyond counselling and knowledge-enhancing interventions: a plea for community-level AIDS prevention strategies. In Ostrow, D. G. (ed.), Behavioral Aspects of AIDS. Plenum, New York, pp. 157–167.

Chapman, S. (1993) Unravelling gossamer with boxing gloves: problems in explaining the decline in smoking. British Medical Journal, 307, 429–432.

Coates, T. J., Chesney, M., Folkman, S., Hulley, S. B., Haynes-Sanstad, K., Lurie, P., VanOss Marin, B., Roos, L., Bunnett, V. and Du Wors, R. (1996) Designing behavioural and social science to impact practice and policy in HIV prevention and care. International Journal of STD and AIDS, 7 (Suppl. 2), 2–12.

Crofts, N., Louie, R., Rosenthal, D. and Jolley, D. (1996) The first hit: circumstances surrounding initiation into injecting. Addiction, 91, 1187–1196.

Notes

1. That said, we recognize that health education and health promotion, especially in the UK, encompasses a variety of positions and approaches. We also recognize that the views of individual researchers and the official stances of the organizations for which they work change over time.

2. One health promotion manager in England described an interesting approach received from a social research unit (personal communication with first author, 18 August 1997). A group of researchers had secured funding for an RCT of an HIV/AIDS intervention. The problem was that the research group did not have a specific intervention to evaluate and were thus turning to the health promotion manager to provide one. This anecdote draws attention to the absurdity of the methodological tail wagging the health promotion dog.

3. Whereas community-based approaches may have been successful initially, behaviour change in relation to gay men and HIV/AIDS may not have been sustained over time in all countries. This does not mean that the methods used to assess and validate the impact and effectiveness of early prevention measures were inappropriate, merely that there is an important difference between engendering an initial response and sustaining it over time. We recognize also that community-based strategies are usually most relevant to a particular time and place, and to a particular set of historical, social and medical circumstances.
