
Cochrane Database of Systematic Reviews

Eliciting adverse effects data from participants in clinical trials

Overview of attention for article published in Cochrane database of systematic reviews, January 2018

About this Attention Score

  • In the top 25% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (89th percentile)
  • Above-average Attention Score compared to outputs of the same age and source (55th percentile)

Mentioned by

  • Twitter: 31 tweeters
  • Facebook: 2 Facebook pages

Citations

  • Dimensions: 13 citations

Readers on

  • Mendeley: 151 readers

Title: Eliciting adverse effects data from participants in clinical trials
Published in: Cochrane Database of Systematic Reviews, January 2018
DOI: 10.1002/14651858.mr000039.pub2
Authors

Elizabeth N Allen, Clare IR Chandler, Nyaradzo Mandimika, Cordelia Leisegang, Karen Barnes

Abstract

Background

Analysis of drug safety in clinical trials involves assessing adverse events (AEs) individually or by aggregate statistical synthesis to provide evidence of likely adverse drug reactions (ADRs). While some AEs may be ascertained from physical examinations or tests, there is heavy reliance on participant reports to detect subjective symptoms, for which the participant is often the only source of information. There is no consensus on how these reports should be elicited, although questioning methods are known to influence the extent and nature of the data detected. This leaves room for measurement error and undermines comparisons between studies and pooled analyses. This review investigated comparisons of methods used in trials to elicit participant-reported AEs, to contribute to knowledge about the methodological challenges and possible solutions for achieving better, or more consistent, AE ascertainment in trials.

Objectives

To systematically review the research that has compared methods used within clinical drug trials (or methods that would be specific to such trials) to elicit information about AEs defined in the protocol or in the planning for the trial.

Search methods

Databases (searched to March 2015 unless indicated otherwise) included: Embase; MEDLINE; MEDLINE In-Process and Other Non-Indexed Citations; Cochrane Methodology Register (July 2012); Cochrane Central Register of Controlled Trials (February 2015); Cochrane Database of Systematic Reviews; Database of Abstracts of Reviews of Effects (January 2015); Health Technology Assessment database (January 2015); CINAHL; CAB Abstracts; BIOSIS (July 2013); Science Citation Index; Social Science Citation Index; and Conference Proceedings Citation Index - Science. The search used thesaurus headings and synonyms for the following concepts: (A) adverse events AND measurement; (B) participants AND elicitation (plus other synonyms for extraction of information about adverse effects from people); (C) participants AND checklists (plus other synonyms as for B). Pragmatic limits were applied to the search results while trying to maintain sensitivity. There were no date or sample size restrictions, but only reports published in English were included in full, because of resource constraints on translation.

Selection criteria

Two types of studies were included: drug trials comparing two or more methods, within- or between-participants, to elicit participant-reported AEs; and research studies performed outside the context of a trial to compare methods that could be used in trials (evidenced by reference to such applicability). Primary outcome data included AEs elicited from participants taking part in any such clinical trial. We included any participant-reported data relevant to an assessment of drug-related harm, using the original authors' terminology (and definition, where available), with comment on whether the data were likely to be treatment-emergent AEs.

Data collection and analysis

Titles and abstracts were independently reviewed for eligibility. Full texts of potentially eligible citations were independently reviewed for final eligibility. Relevant data were extracted and subjected to a 100% check. Disagreements were resolved by discussion, involving a third author. Risk of bias was independently assessed by two authors. The Cochrane 'Risk of bias' tool was used for reports comparing outcomes between participants; for within-participant comparisons, each study was critically evaluated for the potential impact of its design and conduct on findings, using the framework of selection, performance, detection, attrition, reporting, and other biases. We attempted to contact authors to retrieve protocols or specific relevant missing information. Reports were not excluded on the basis of quality unless their outcome data were impossible to compare (e.g. where denominators differed). A narrative synthesis was conducted because differences in study design and presentation meant that a quantitative meta-analysis was not possible.

Main results

The 33 eligible studies largely compared open questions with checklist-type questions or rating scales; two included participant interviews. Despite differences in design, population, and the details of the questioning methods, the narrative review showed that more specific questioning of participants led to more AEs being detected than a more general enquiry. A subset of six studies suggested that more severe, bothersome, or otherwise clinically relevant AEs were reported when an initial open enquiry was used, while some less severe, bothersome, or clinically relevant AEs were reported only with a subsequent specific enquiry. However, two studies showed that quite severe or debilitating AEs were detected only by interview, while other studies found no difference in the nature of AEs between elicitation methods. No conclusions could be drawn regarding the impact of questioning method on the ability to detect a statistically significant difference between study groups. There was no common statistical rubric, but we were able to represent some effect measures as a risk ratio of the proportion of participants with at least one AE. This showed a lower level of reporting for open questions (O) compared with checklists (CL), with risk ratios ranging from 0.12 to 0.64.

Authors' conclusions

This review supports concerns that the methods used to elicit participant-reported AEs influence the detection of these data. There was a risk of under-detection of AEs in studies using a more general elicitation method compared with those using a comprehensive method; these AEs may be important from a clinical perspective or to patients, and their under-detection could compromise the ability to pool AE data. However, the impact of different methods on the nature of the AEs detected is unclear. The wide variety and low quality of the methods used to compare elicitation strategies limited this review. Future studies would be improved by using and reporting clear definitions and terminology for AEs (and other important variables), the frequency and time period over which they were ascertained, and how they were graded, assessed for a relationship to the study drug, coded, and tabulated/reported. While the many potential AE endpoints in a trial may preclude the development of general patient-reported outcome measurement instruments for AEs, much could be learnt from how such instruments employ both quantitative and qualitative methods to better understand the data elicited. Any chosen questioning method needs to be feasible for both staff and participants.
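The risk ratio effect measure used in the review can be illustrated with a minimal sketch. The counts below are hypothetical, chosen only so that the result falls inside the review's reported 0.12 to 0.64 range; they are not data from any included study.

```python
def risk_ratio(events_a, n_a, events_b, n_b):
    """Risk ratio: proportion with >=1 AE in group A divided by that in group B."""
    return (events_a / n_a) / (events_b / n_b)

# Hypothetical comparison: 12 of 100 participants report at least one AE
# under open questioning (O), versus 50 of 100 with a checklist (CL).
rr = risk_ratio(12, 100, 50, 100)
print(f"RR (open vs checklist) = {rr:.2f}")  # prints "RR (open vs checklist) = 0.24"
```

A risk ratio below 1 here means open questioning detected fewer participants with at least one AE than the checklist, which is the direction of under-detection the review describes.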

Twitter Demographics

The data shown below were collected from the profiles of 31 tweeters who shared this research output.

Mendeley readers

The data shown below were compiled from readership statistics for 151 Mendeley readers of this research output.

Geographical breakdown

Country    Count   As %
Unknown    151     100%

Demographic breakdown

Readers by professional status    Count   As %
Student > Master                  35      23%
Student > Bachelor                18      12%
Researcher                        15      10%
Student > Ph. D. Student          15      10%
Other                             12      8%
Other                             20      13%
Unknown                           36      24%

Readers by discipline                                  Count   As %
Medicine and Dentistry                                 37      25%
Nursing and Health Professions                         23      15%
Pharmacology, Toxicology and Pharmaceutical Science    11      7%
Social Sciences                                        9       6%
Psychology                                             7       5%
Other                                                  21      14%
Unknown                                                43      28%

Attention Score in Context

This research output has an Altmetric Attention Score of 18. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 March 2021.
All research outputs: #1,419,686 of 18,873,384 outputs
Outputs from Cochrane Database of Systematic Reviews: #3,489 of 11,881 outputs
Outputs of similar age: #40,976 of 382,833 outputs
Outputs of similar age from Cochrane Database of Systematic Reviews: #94 of 207 outputs
Altmetric has tracked 18,873,384 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 92nd percentile: it's in the top 10% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 11,881 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 26.6. This one has received more attention than average, scoring higher than 70% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 382,833 tracked outputs that were published within six weeks on either side of this one in any source. This one has done well, scoring higher than 89% of its contemporaries.
We're also able to compare this research output to 207 others from the same source that were published within six weeks on either side of this one. This one has received more attention than average, scoring higher than 55% of its contemporaries.