Investigating Drug Effectiveness Using Consumer-Generated Reviews



Pauline C Ng*, Genome Institute of Singapore, Singapore, Singapore
Ngak-Leng Sim, Genome Institute of Singapore, Singapore, Singapore
Qiangze Hoi, Genome Institute of Singapore, Singapore, Singapore
Gina Pan, Genome Institute of Singapore, Singapore, Singapore


Track: Research
Presentation Topic: Public (e-)health, population health technologies, surveillance
Presentation Type: Oral presentation
Submission Type: Single Presentation

Building: Mermaid
Room: Room 4 - Queenshithe
Date: 2013-09-24 11:30 AM – 01:00 PM
Last modified: 2013-09-25

Abstract


Background:
Head-to-head drug trials are prone to biases and tend to be inaccessible and incomprehensible to the general public. People increasingly turn to the Internet for health information and also participate by reviewing the drugs they have taken. As the number of consumer-generated reviews grows, these reviews can potentially provide information on drug effectiveness and satisfaction.

Objective:
To determine whether analysis of online consumer reviews can identify which drugs are more effective than their counterparts.

Methods:
We analyzed 96 drugs treating 76 conditions using consumer reviews from webmd.com. Using the Kruskal-Wallis test, we identified drugs that performed significantly differently from other drugs treating the same condition, and verified these findings against the published literature. In addition, we performed power calculations by subsampling reviews to estimate the minimum number of reviews needed to detect significant differences.
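The core comparison described above can be sketched as follows. This is not the study's code; the drug names and ratings are illustrative assumptions (a 1-5 satisfaction scale is assumed), and only the statistical step — a Kruskal-Wallis test across the rating distributions of drugs treating the same condition — follows the Methods.

```python
# Illustrative sketch of the per-condition comparison: a Kruskal-Wallis
# test on satisfaction ratings of drugs treating the same condition.
# The ratings below are made up for demonstration, not study data.
from scipy.stats import kruskal

# Hypothetical 1-5 star satisfaction ratings for three drugs
# treating one condition (the study used webmd.com reviews).
drug_a = [5, 4, 5, 4, 3, 5, 4]
drug_b = [3, 2, 3, 4, 2, 3, 3]
drug_c = [4, 3, 4, 4, 3, 4, 5]

# Kruskal-Wallis tests whether at least one drug's rating
# distribution differs from the others (nonparametric ANOVA).
stat, p = kruskal(drug_a, drug_b, drug_c)
print(f"H = {stat:.2f}, p = {p:.4f}")
```

A nonparametric test is a natural fit here because star ratings are ordinal and typically far from normally distributed.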

Results:
Eleven of the 76 conditions showed significantly different satisfaction ratings between drugs (p<0.05 with Bonferroni correction). Of these 11 conditions, six had published head-to-head comparisons in the peer-reviewed literature that we could use to confirm or invalidate our findings. Four of the six comparisons agreed with our analysis. Of the remaining two, one trial did not find a significant difference, and the other involved a controversy over natural versus synthetic ingredients. This suggests our analysis can inform patient decisions. For example, the sleep aid Rozerem has the lowest satisfaction rating among all sleeping aids reviewed; the drug was narrowly approved by the FDA and its effectiveness has been questioned. Drug manufacturers can also use this information to improve their products. For instance, customers complained that the brand-name ProAir inhaler is poorly designed compared with the generic inhaler, even though the two share the same active ingredient. Power calculations suggest that, on average, about 100 reviews per drug may be enough to compare drug performance, and the power to detect significant differences will increase as more reviews accumulate.
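The subsampling power calculation mentioned in Methods and Results can be illustrated as below. This is a minimal sketch under stated assumptions: the two review pools are synthetic, and a two-sample Mann-Whitney U test stands in for the per-condition comparison; the study's actual pools, test, and thresholds may differ.

```python
# Hedged sketch of a power calculation by subsampling: repeatedly draw
# n reviews per drug and record how often a significant difference is
# detected. Pools and the detection test are illustrative assumptions.
import random

from scipy.stats import mannwhitneyu

random.seed(0)

# Hypothetical full review pools (1-5 star ratings) for two drugs
# with a genuine difference in satisfaction.
pool_a = [random.choice([3, 4, 4, 5, 5]) for _ in range(1000)]
pool_b = [random.choice([1, 2, 3, 3, 4]) for _ in range(1000)]

def power_at(n, trials=200, alpha=0.05):
    """Estimate power: fraction of subsampled trials in which the
    two drugs' ratings differ significantly at level alpha."""
    hits = 0
    for _ in range(trials):
        sub_a = random.sample(pool_a, n)
        sub_b = random.sample(pool_b, n)
        _, p = mannwhitneyu(sub_a, sub_b)
        hits += p < alpha
    return hits / trials

# Power grows with the number of reviews sampled per drug.
for n in (10, 50, 100):
    print(f"n = {n:3d}  estimated power = {power_at(n):.2f}")
```

The smallest n at which estimated power crosses a chosen threshold (e.g. 0.8) gives the minimum number of reviews needed to detect the difference.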

Conclusions:
Online consumer-generated reviews can be used to find differences between drugs. This can be useful to pharmaceutical companies, medical professionals, and the public. We will display these differences at the website choosemydrug.com.




This work is licensed under a Creative Commons Attribution 3.0 License.