
Expert consensus on the desirable characteristics of review criteria for improvement of health care quality

    H M Hearnshaw, R M Harker, F M Cheater, R H Baker, G M Grimshaw

Abstract
Objectives To identify the desirable characteristics of review criteria for quality improvement and to determine how they should be selected.
Background Review criteria are the elements against which quality of care is assessed in quality improvement. Use of inappropriate criteria may impair the effectiveness of quality improvement activities and resources may be wasted in activities that fail to facilitate improved care.
Methods A two round modified Delphi process was used to generate consensus amongst an international panel of 38 experts. A list of 40 characteristics of review criteria, identified from literature searches, was distributed to the experts who were asked to rate the importance and feasibility of each characteristic. Comments and suggestions for characteristics not included in the list were also invited.
Results The Delphi process refined a comprehensive literature based list of 40 desirable characteristics of review criteria into a more precise list of 26 items. The expert consensus view is that review criteria should be developed through a well documented process involving consideration of valid research evidence, possibly combined with expert opinion, prioritisation according to health outcomes and strength of evidence, and pilot testing. Review criteria should also be accompanied by full clear information on how they might be used and how data might be collected and interpreted.
Conclusion The desirable characteristics for review criteria have been identified and will be of use in the development, evaluation, and selection of review criteria, thus improving the cost effectiveness of quality improvement activities in healthcare settings.
(Quality in Health Care 2001;10:173–178)

Keywords: review criteria; Delphi process; audit criteria; quality improvement

The issue of defining and assessing the quality of health care is central to improving clinical practice. In recent years quality improvement methods such as clinical audit and clinical utilisation review have been actively promoted by many healthcare providers and policy makers.1–3

The first stage of quality improvement for a given topic is to establish the review criteria to use. Review criteria are systematically developed statements that can be used to assess the appropriateness of specific health care decisions, services and outcomes.4 If appropriate review criteria are used, improvements in performance measured against these criteria should result in improved care. In contrast, if

Key messages
+ Review criteria are the elements against which quality of health care is assessed in quality improvement.
+ An expert consensus view was generated of 26 desirable characteristics of review criteria.
+ Review criteria should be developed through a well documented process involving consideration of valid research evidence, possibly combined with expert opinion, prioritisation according to health outcomes and strength of evidence, and pilot testing.
+ Review criteria should be accompanied by full and clear information on how they might be used and how data might be collected and interpreted.
+ Information on the characteristics of review criteria will enable rational selection of review criteria for quality improvement activities.

What is already known on the subject
Review criteria are systematically developed statements that can be used to assess the appropriateness of specific health care decisions, services and outcomes. There is no generally accepted method of defining appropriate review criteria. If quality of care is assessed against inappropriate criteria, resulting improvements in performance against these criteria may not effect any improvement in care, and resources may be wasted in ineffective quality improvement activities.

What this paper adds
An expert consensus on the desirable characteristics of review criteria has been generated and the criteria of quality for quality review criteria have been identified. Review criteria can now be effectively identified and presented so that data can be collected on justifiable, appropriate, and valid aspects of care.


Centre for Primary Health Care Studies, University of Warwick, Coventry CV4 7AL, UK
H M Hearnshaw, senior lecturer in primary care

National Children's Bureau, London, UK
R M Harker, senior research officer

School of Healthcare Studies, University of Leeds, Leeds, UK
F M Cheater, professor of public health nursing

Clinical Governance Research and Development Unit, Department of General Practice and Primary Health Care, University of Leicester, Leicester, UK
R H Baker, professor of quality in health care

Centre for Health Services Studies, University of Warwick, Coventry, UK
G M Grimshaw, senior research fellow

Correspondence to: Dr H M Hearnshaw [email protected]

Accepted 21 June 2001

    www.qualityhealthcare.com


quality of care is assessed against inappropriate or irrelevant criteria, then resulting improvements in performance against these criteria may not effect any improvement in care, and resources may be wasted in ineffective quality improvement activities.5

There is no generally accepted method of defining appropriate review criteria. A few authors have proposed what the desirable characteristics of review criteria are,4 6 but there is no clear indication of how appropriate criteria might be developed. Often review criteria have been generated from guidelines.7–10

Alternatively, instead of directly translating guidelines into review criteria, it has been argued that criteria should be based directly on high quality research evidence and prioritised according to strength of evidence and impact on health outcomes.11 12

Unfortunately, high quality research evidence is not readily available for all clinical topics,13 and expert opinion is often relied upon to develop criteria.14 15 Although some authors recommend that criteria should not be developed at all if research evidence is lacking,11 an alternative approach is to synthesise the two methods, drawing on expert opinion when there is no research evidence.16 17

Consensus methods are increasingly used to develop clinical guidelines18 19 and can provide a way of determining how review criteria should be selected and defining their desirable characteristics. The Delphi technique20 is a consensus method that gathers expert opinion through an iterative questionnaire process. The researchers communicate in writing with a panel of experts comprising between 10 and 50 members. Experts are anonymous to the extent that other panel members do not know their identity at the time of data collection.

It is recommended that the panel should include both advocates and referees.21 The expertise of advocates stems from participant involvement in the area under study, for example clinicians or quality managers. Referees have less direct involvement and their expertise is derived from study of the topic, for example academic researchers. Hence, we shall refer to advocates as practitioner experts and referees as academic experts.

Modifications to a pure Delphi process are common.22–24 The preparatory stage of formulating issues can be supplanted by reference to existing research25 and subsequent rounds can be used to develop, rather than directly reiterate, the concerns of previous rounds.20

This study aimed to determine both how review criteria should be selected and their desirable characteristics, using a modified Delphi technique. This information will inform both those who develop review criteria and those who select review criteria to make quality improvement in health care more effective.

Methods
A two round modified Delphi process was used to generate consensus amongst an international panel of experts. A decision was made to restrict the Delphi process to two rounds before inviting experts to participate, since the initial questionnaire was based upon a careful review of available literature. It was considered that two rounds would be enough to reach adequate consensus and would minimise the workload for participants.

    THE EXPERT PANEL

We identified an international group of experts in quality improvement in health care from a variety of professional disciplines. Three sources of information were used to identify experts: (1) publication record, (2) membership of quality improvement groups in relevant organisations such as the Royal Colleges in the UK, and (3) recommendations from researchers in the field. Forty nine experts were contacted, mostly by email, and asked to contribute to the study. The expert group was categorised by the researchers into 26 academic experts (referees) and 23 practitioner experts (advocates). Individuals for whom contact details were rapidly obtained were contacted first. Having received a sufficient number of positive responses from these experts, we ceased to seek contact details for further individuals. Although we acknowledge that our list of experts is not exhaustive and other individuals could have contributed to the study, the expert group provided a wide range of views and adequate representativeness.20

    ROUND 1

Medline and Embase databases were searched from 1990 to March 1999 using the topic headings "clinical audit", "medical audit", "clinical utilization", "quality assurance", and "guidelines" and text words "review criteria", "appropriateness criteria", "clinical indicators", and "performance indicators". The abstract of each citation was reviewed and all studies concerned with the development of review criteria were retrieved. In addition, publications of expert panel members on the development of clinical guidelines were reviewed. The literature review was used to compile a list of identified desirable characteristics of review criteria from which the questionnaire for round 1 of the Delphi process was constructed. The questionnaire contained 40 items in three sections:
(1) The process of developing review criteria (18 items).
(2) Attributes of review criteria (11 items).
(3) The usability of review criteria (11 items).

The experts were asked to rate importance and feasibility for each item using 7 point scales, anchored by "not at all important" and "very important", and "not at all feasible" and "very feasible". Free comments on each item and suggestions about items overlooked in the questionnaire were also invited. Questionnaires were distributed by email to 31 experts and by post to seven experts who did not have access to email. Experts were asked to complete the questionnaire within 2 weeks. Non-responders were sent reminders after 2 weeks and, where necessary, after a further 10 days. Round 1 was concluded 5 weeks after the distribution of the questionnaire.


Round 1 responses were aggregated and used to identify aspects to retain for round 2. A definition of disagreement based upon the RAND/UCLA appropriateness method16 was generated by the project team and used to exclude items from round 2 if their ratings were polarised to the extreme points of the scale, that is, if three or more experts gave a high rating of 6 or 7 whilst, in addition, three or more gave a low rating of 1 or 2. Cumulative percentage scores were then used to determine which of the remaining items met the inclusion criteria: firstly, at least 80% of the expert panel providing an importance rating of 5 or more and, secondly, a feasibility rating of 4 or more. Items excluded at this stage showed a lack of consensus on being rated at the top of the importance scale. Nevertheless, there was opportunity later for experts to request the exclusions to be revoked. Where experts provided comments, these were carefully considered in project team discussions. Some comments resulted in a refinement of item wording for round 2 of the Delphi process, others led to the inclusion of additional items where experts felt significant omissions arose. The aggregation of round 1 results and subsequent development of round 2 occurred over a 2 week period. The round 2 questionnaire was ready 7 weeks after the round 1 questionnaire was sent out.
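The screening rules above can be expressed compactly. The following sketch implements them as described in the text; the function and variable names are illustrative, not from the paper:

```python
# Sketch of the round 1 item screening rules: polarisation (RAND/UCLA-style
# disagreement) plus cumulative-percentage inclusion thresholds.

def is_polarised(ratings):
    """Disagreement: three or more experts rate an item at 6 or 7
    while three or more rate it at 1 or 2."""
    high = sum(1 for r in ratings if r >= 6)
    low = sum(1 for r in ratings if r <= 2)
    return high >= 3 and low >= 3

def meets_threshold(ratings, cutoff, proportion=0.8):
    """At least `proportion` of the panel rated the item at `cutoff` or more."""
    return sum(1 for r in ratings if r >= cutoff) >= proportion * len(ratings)

def retain_item(importance, feasibility):
    """Retain an item if neither scale is polarised, at least 80% of experts
    gave an importance rating of 5 or more, and at least 80% gave a
    feasibility rating of 4 or more."""
    if is_polarised(importance) or is_polarised(feasibility):
        return False
    return meets_threshold(importance, 5) and meets_threshold(feasibility, 4)
```

For example, an item rated `[7, 6, 6, 5, 5]` for importance and `[5, 4, 4, 6, 5]` for feasibility would be retained, while one with importance ratings of `[7, 7, 7, 1, 1, 2, 4]` would be excluded as polarised.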

    ROUND 2

The round 2 questionnaire informed the experts of the method used to identify items to be included or excluded for round 2. Experts were asked to re-rate each item for round 2 and to provide additional comments if they wished. The questionnaire reminded experts of their own round 1 rating for each item and presented the expert group's mean rating for that item. If the wording of items had been altered, ratings for the original item were provided and the initial wording was shown below the altered item.

Some new items were added to section 1 in response to expert comment. These were clearly labelled as being new. All excluded items were shown separately at the end of each section. Experts could alter their ratings for these items and comment on their exclusion, if they wished.
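The per-item feedback described above (each expert's own round 1 rating alongside the panel mean) is a simple aggregation; a minimal sketch, with illustrative names not taken from the paper:

```python
# Sketch of assembling round 2 feedback for one expert on one item.
from statistics import mean

def round2_feedback(item, panel_ratings, expert_id):
    """Return the feedback line shown to one expert for one item.
    `panel_ratings` maps expert identifiers to their round 1 ratings."""
    own = panel_ratings[expert_id]
    group_mean = mean(panel_ratings.values())
    return (f"{item}: your round 1 rating was {own}; "
            f"the group mean was {group_mean:.1f}")

ratings = {"expert_a": 6, "expert_b": 5, "expert_c": 7}
print(round2_feedback("Criteria are pilot tested", ratings, "expert_a"))
# prints "Criteria are pilot tested: your round 1 rating was 6; the group mean was 6.0"
```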

The same processes for distribution, reminding, and analysis were used in round 2 as in round 1. Items retained after round 2 identified the desirable characteristics of review criteria and the method of selecting them.

Members of the panel of experts were sent details of the outcomes of the Delphi process.

Results
Thirty eight of the 49 experts invited to take part agreed to do so. The number of experts responding to each round of the Delphi is shown in table 1. The table also gives details of the number of practitioner and academic experts included in each round. There were no significant differences in the proportion of practitioners and academics responding to the initial participation request (χ² = 0.3, p>0.05), nor did experts' status as a practitioner or academic relate to their likelihood of completing round 1 (χ² = 1.5, df = 1, p>0.05) or round 2 (χ² = 0.5, df = 1, p>0.05). Participating experts are listed in appendix 1.
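The first reported statistic can be reproduced from the participation counts in table 1 (26 academics, 21 agreeing; 23 practitioners, 17 agreeing). This sketch assumes an uncorrected Pearson test on the 2x2 table; the helper name is illustrative:

```python
# Hand-rolled Pearson chi-squared statistic for a 2x2 table [[a, b], [c, d]],
# without Yates' continuity correction.

def chi_squared_2x2(a, b, c, d):
    """Return the Pearson chi-squared statistic for the 2x2 table."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Academics: 21 of 26 agreed to participate; practitioners: 17 of 23.
chi2 = chi_squared_2x2(21, 5, 17, 6)
print(round(chi2, 1))  # prints 0.3, matching the reported value
```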

From a starting point of 40 items in round 1, 26 items qualified for inclusion after two rounds of the Delphi process, that is, 80% of the experts gave importance ratings of 5 or more and feasibility ratings of 4 or more for these 26 items and there was no polarisation of expert ratings. Table 2 shows the number of items in each round resulting in exclusion and inclusion.

In the final list of desirable characteristics, 13 items retained the original round 1 wording, 12 items were reworded for round 2, and one item was introduced in round 2. Table 3 shows the final list and the mean importance and feasibility rating associated with each characteristic.

The round 2 questionnaire also allowed experts to reconsider the 11 items excluded at round 1. The round 2 responses confirmed that all these items should be excluded. In addition, a further five items were excluded after round 2 (including one of the items newly introduced at round 2). All the 16 excluded items are shown in table 4 with the reasons for exclusion.

The final lists of included and excluded items (tables 3 and 4) were given to all expert participants for comment. There was no dissent to the list.

Discussion
It has been possible to define the desirable characteristics of review criteria by use of a modified Delphi process. This method has refined and validated the list of characteristics initially based on literature alone. The use of expert judgement has identified which of the literature based characteristics of review criteria are highly important and feasible. The final list of desirable characteristics can inform developers and users of review criteria and lead

Table 1  Experts involved in each stage of the Delphi process

                          Academics   Practitioners   Total
Invited to participate        26            23          49
Agreed to participate         21            17          38
Round 1 returns
  Without reminder            10             7          17
  After 1 reminder             5             3           8
  After 2 reminders            3             2           5
  Completed round 1           18            12          30
Round 2 returns
  Without reminder             7             6          13
  After 1 reminder             2             2           4
  After 2 reminders            6             3           7
  Completed round 2           15            10          25

Table 2  Items included/excluded after each round of the Delphi process

                            No of items   Inclusion   Exclusion
Round 1
  Section 1: Development         18           17           1
  Section 2: Attributes          11            7           4
  Section 3: Usability           11            5           6
  Total                          40           29          11
Round 2
  Section 1: Development     19 (2 new)       14           5
  Section 2: Attributes           7            7           0
  Section 3: Usability            5            5           0
  Total                          31           26           5


to more appropriate, less wasteful, use of review criteria in quality improvement activities in health care.

Our original list of important aspects of review criteria consisted mainly of items mentioned in publications by expert panel members. While the Delphi process confirmed the importance of most of these items, it also excluded some. For instance, although the process retained items such as "Criteria are based on a systematic review of research evidence" and "Expert opinion is included in the process of developing review criteria", it excluded specifying the search strategy used or the names of the experts involved. We feel this has demonstrated the effectiveness of the Delphi process in excluding unimportant or extreme views to arrive at a more central, accepted definition.

The literature based initial list of criteria included four items specifically related to patient issues. The items "Criteria include aspects of care that are relevant to patients" and "The collection of information for criteria based review is acceptable to those patients whose care is being reviewed" were retained in the final list. However, the item "The views of patients are included in the process of developing review criteria" was excluded as having low feasibility. Measuring the importance to patients of criteria could certainly be very costly in time and resources, making it infeasible, even though it was rated as important. The item "Criteria are prioritised according to their importance to patients" was rated as of low feasibility and low importance. The prioritisation of criteria was generally rated as of low importance, perhaps indicating that all criteria which met the other characteristics should be used, and prioritisation added nothing of importance. Thus, it was not specifically the patients' priorities that were being excluded, but prioritisation itself.

The external validity of the collective opinion produced by a Delphi method is dependent

Table 3  List of the desirable characteristics of review criteria and mean importance and feasibility ratings, ordered by importance rating

Characteristic  (mean importance, mean feasibility)

Criteria are described in unambiguous terms  6.7  5.6
Criteria are based on a systematic review of research evidence  6.6  5.0
The validity of identified research is rigorously appraised  6.5  5.5
Criteria include clear definitions of the variables to be measured  6.5  5.9
Criteria explicitly state the patient populations to which they apply  6.4  5.9
Criteria are capable of differentiating between appropriate and inappropriate care  6.3  5.0
Criteria are linked to improving health outcomes for the care being reviewed  6.3  4.7
Criteria explicitly state the clinical settings to which they apply  6.2  5.8
The collection of information required for criteria based review minimises demands on staff  6.2  6.4
The method of selecting criteria is described in enough detail to be repeated  6.1  5.8
Criteria are accompanied by clear instructions for their use in reviewing care  6.1  6.0
The systematic review used to guide the selection of criteria is up to date  6.0  5.4
Criteria are pilot tested for practical feasibility  6.0  5.1
Criteria include aspects of care that are relevant to patients  6.0  4.9
The collection of information for criteria based review is acceptable to those patients whose care is being reviewed  6.0  5.2
The bibliographic sources used to identify research evidence are specified  5.9  6.3
In selecting criteria, decisions on trade-offs between outcomes from different treatment options are stated  5.9  4.5
The collection of information required for criteria based review minimises demands on patients  5.9  5.3
The method of synthesising evidence and expert opinion is made explicit  5.8  5.3
Criteria are prioritised according to the quality of supporting evidence  5.8  4.9
Criteria are prioritised according to their impact on health outcomes  5.8  4.5
The criteria used to assess the validity of research are stated  5.6  5.8
Similar criteria should emerge if other groups review the same evidence  5.5  4.6
The collection of information for criteria based review is acceptable to those staff whose care is being reviewed  5.5  4.8
Expert opinion is included in the process of developing review criteria  5.3  5.5
Criteria used in previous quality reviews of the same clinical topic are considered for inclusion  5.3  5.8

Table 4  Excluded items not included in the final list of desirable characteristics

Characteristic   Reason for exclusion* (Low importance / Low feasibility / Polarisation)

Development of review criteria
  The views of patients are included in the process of developing review criteria  Y Y
  Criteria are prioritised according to the cost implications (of complying with the criteria for equal health outcomes)  Y Y
  The search strategy or keywords used to search the literature are stated  Y
  The experts involved in the process of developing review criteria are stated  Y
  Criteria are prioritised  Y
  Criteria are prioritised according to their importance to patients  Y

Attributes of review criteria
  Criteria are linked to lowering resources used in the care being reviewed  Y Y
  Criteria are presented in computer compatible format, where possible  Y Y
  All criteria that are necessary for a review of the topic of care are included  Y Y
  The same results should emerge if other people collect data for the criteria  Y

Usability of review criteria
  Criteria facilitate data collection over a reasonable time period which may include training in data collection  Y Y Y
  Criteria facilitate data collection from acceptable sample sizes  Y Y
  The average time range required to collect information for criteria based review is indicated  Y
  The individuals conducting the review are confident that the criteria are valid  Y
  The criteria produce ratings that are easy to interpret into appropriateness of care  Y
  The degree of professional expertise required to collect information for criteria based review is indicated  Y

*Low importance means that less than 80% of the experts gave importance ratings of 5 or more on the 7 point scale. Low feasibility means that less than 80% of the experts gave feasibility ratings of 4 or more. Polarisation means that more than two experts rated this item at 6 or 7 and more than two rated it at 2 or less on one of the scales.16


on the composition of the expert panel. This study aimed to include both referees (academic researchers) and advocates (quality improvement practitioners) to balance insight from theoretical understanding with practical experience. However, the eventual composition of our expert panel was marginally biased towards an academic research perspective as slightly more referees than advocates agreed to participate. This may have caused practical aspects of review criteria to be under-represented in the final definition, and could explain the exclusion of items reflecting resource allocation and patient considerations. Given the increase in the political importance of resource allocation and patient considerations, these exclusions could be deemed inappropriate.

The identification of desirable characteristics of review criteria has emerged from the expert consensus process used in this study. Although the problems associated with the nature of consensus and the structure of our expert panel should be acknowledged, the definition created here represents a considerable advance in our understanding of what appropriate review criteria are and how they might be developed. Previous published literature alone did not directly translate into a definitive list of the desirable characteristics of review criteria. Inconsistency was found in the literature on the relative importance which individual researchers assigned to different aspects of review criteria. The use of this international panel of experts from a variety of professional disciplines lends a universal relevance to this definitive list. It is not dependent on the views of professionals from one specific context or working within one particular healthcare system.

The modified Delphi process used here was very efficient in terms of time and resources spent in consultation with the experts involved. The process allowed experts from geographically disparate areas to be included at relatively low cost. Distributing questionnaires by email further minimised the resources needed and the time taken to complete each round of the Delphi process. The method proved a fast and low cost way of gathering the opinions of an international panel.

The knowledge gained from this study should be of relevance to all those involved in the development of review criteria for quality improvement. We have provided information on how review criteria can be effectively identified and presented so that data can be collected on justifiable, appropriate, and valid aspects of care. This can also be used to guide the assessment of review criteria for those who need to decide which criteria are appropriate for their particular quality assurance or quality improvement project. In short, the criteria of quality for quality review criteria have been identified. Future studies should be directed at establishing the value of this set of meta-criteria as a developmental tool to aid the selection of review criteria for quality improvement activities. The set is currently being used to develop an instrument to appraise the quality of review criteria. This instrument will have the potential to raise the standard of all quality reviews and thus improve the quality of health care.

In summary, the expert consensus view is that review criteria should be developed through a well documented process involving consideration of valid research evidence, possibly combined with expert opinion, prioritisation according to health outcomes and strength of evidence, and pilot testing. Review criteria should also be accompanied by full clear information on how they might be used and how data might be collected and interpreted.

Appendix 1: List of expert participants
Andrew Booth, University of Sheffield, UK
John Cape, British Psychological Society, UK
Alison Cooper, Fosse Healthcare NHS Trust, UK
Gregor Coster, Auckland University, New Zealand
Susan Dovey, University of Otago, New Zealand
Jeremy Grimshaw, Aberdeen University, UK
Gordon Guyatt, McMaster University, Canada
Gill Harvey, Royal College of Nursing, UK
Nick Hicks, Oxford Public Health, UK
Jaki Hunt, Kettering General Hospital, UK
Lyn Juby, Clinical Governance Research and Development Unit, UK
Kamlesh Khunti, Clinical Governance Research and Development Unit, UK
Beat Kuenzi, SGAM Research Group, Switzerland
Mayur Lakhani, Clinical Governance Research and Development Unit, UK
Philip Leech, National Health Service Executive, UK
Katherine Lohr, Research Triangle Institute, USA
Adrian Manhire, Royal College of Radiologists, UK
Karen Mills, Warwickshire Multi-disciplinary Audit Advisory Group, UK
Andrew Moore, Bandolier, UK
Mary Ann O'Brien, McMaster University, Canada
Frede Oleson, Aarhus University, Denmark
Barnaby Reeves, Royal College of Surgeons, UK
James Rimmer, Avon Primary Care Audit Group, UK
Tom Robinson, Leicester General Hospital, UK
Martin Roland, National Primary Care Research & Development Centre, UK
Charles Shaw, CASPE Research, UK
Paul Shekelle, RAND, USA
Chris Silagy, Flinders University, Australia
Tim van Zwanenburg, University of Newcastle, UK
Kieran Walshe, Birmingham University, UK
Geoff Woodward, Royal College of Optometrists, UK

The researchers are indebted to all the members of the expert panel who contributed their time and expertise so willingly. Acknowledgement is also given to Dr Mayur Lakhani, University of Leicester, UK, for his helpful comments on an earlier version of this paper, and to the anonymous referees of an earlier version.

This research was funded by the UK National Health Service Research and Development Health Technology Assessment programme.

The views and opinions expressed here are those of the authors and do not necessarily reflect those of the NHS Executive.

1 Department of Health. Working for patients. London: HMSO, 1989.

2 Department of Health. Working paper 6: medical audit. London: HMSO, 1989.

3 Department of Health and Human Services. The challenge and potential for assuring quality health care for the 21st century. Publication No. OM 98-0009. Prepared for the Domestic Policy Council, Washington DC, 17 June 1998. http://www.ahcpr.gov/qual/21stcena.htm.

4 Field MJ, Lohr KN, eds. Guidelines for clinical practice: from development to use. Washington: Institute of Medicine, 1992.

5 Naylor CD, Guyatt GH. Users' guides to the medical literature XI. How to use an article about a clinical utilization review. JAMA 1996;275:1435–9.

6 Lohr KN, ed. Medicare: a strategy for quality assurance. Volume I. Washington: National Academy, 1990.

7 Bozzette SA, Asch S. Developing quality review criteria from standards of care for HIV disease: a framework. J Acquir Immune Defic Syndr Hum Retrovirol 1995;8:S45–52.


8 Lanska D. A public/private partnership in the quest for quality: development of cerebrovascular disease practice guidelines and review criteria. Am J Med Qual 1995;10:100–6.

9 James PA, Cowan TM, Graham RP, et al. Using a clinical practice guideline to measure physician practice: translating a guideline for the management of heart failure. J Am Board Fam Pract 1997;10:206–12.

10 Hadorn DC, Baker RW, Kamberg CJ, et al. Phase II of the AHCPR-sponsored heart failure guideline: translating practice recommendations into review criteria. Jt Comm J Qual Improv 1996;22:265–76.

11 Fraser RC, Khunti K, Baker R, et al. Effective audit in general practice: a method for systematically developing audit protocols containing evidence-based review criteria. Br J Gen Pract 1997;47:743–6.

12 Ashton CM, Kuykendall DH, Johnson MS, et al. A method of developing and weighting explicit process of care criteria for quality assessment. Med Care 1994;32:755–70.

13 Shekelle PG, Kahan JP, Bernstein SJ, et al. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med 1998;338:1888–95.

14 Johnson N. Feasibility of developing and selecting criteria for the assessment of clinical performance. Br J Gen Pract 1993;43:499–502.

15 Bateman DN, Eccles M, Campbell M, et al. Setting standards of prescribing performance in primary care: use of a consensus group of general practitioners and application of standards to practices in the North of England. Br J Gen Pract 1996;46:20–5.

16 Brook RH, Chassin MR, Fink A, et al. A method for the detailed assessment of the appropriateness of medical strategies. Int J Technol Assess Health Care 1986;2:53–63.

17 Campbell SM, Roland MO, Shekelle PG, et al. Development of review criteria for assessing the quality of management of stable angina, adult asthma, and non-insulin dependent diabetes mellitus in general practice. Quality in Health Care 1999;8:6–15.

18 Eccles M, Clapp Z, Grimshaw J, et al. North of England evidence based guidelines development project: methods of guideline development. BMJ 1996;312:760–2.

19 Murphy MK, Black NA, Lamping DL, et al. Consensus development methods and their use in clinical guideline development. Health Technol Assess 1998;2:i–iv.

20 Turoff M. The policy Delphi. In: Linstone HA, Turoff M, eds. The Delphi method: techniques and applications. London: Addison-Wesley, 1975.

21 Critcher C, Gladstone B. Utilizing the Delphi technique in policy discussion: a case study of a privatized utility in Britain. Public Administration 1998;76:431–49.

22 McKnight JN, Edwards N, Pickard L, et al. The Delphi approach to strategic planning. Nurs Manage 1991;22:55–7.

23 Linstone HA, Turoff M, eds. The Delphi method: techniques and applications. London: Addison-Wesley, 1975.

24 Mitchell VW. Using Delphi to forecast in new technology industries. Market Intelligence and Planning 1992;10:4–9.

25 Guba ES, Lincoln YS. Fourth generation evaluation. London: Sage, 1989.
