Comparing Drug-Disease Associations in Clinical Practice Guideline Recommendations and Drug Product Label Indications
Studies in health technology and informatics
2015; 216: 1039-?
Clinical practice guidelines (CPGs) and structured product labels (SPLs) are both intended to promote evidence-based medical practice and to guide clinicians' prescribing decisions. However, it is unclear how well CPG recommendations about pharmacologic therapies for certain diseases match SPL indications for the recommended drugs. In this study, we use publicly available data and text mining methods to examine drug-disease associations in CPG recommendations and SPL treatment indications for 15 common chronic conditions. Preliminary results suggest a mismatch between guideline-recommended pharmacologic therapies and SPL indications. Conflicting or inconsistent recommendations and indications may complicate clinical decision making and the implementation or measurement of best practices.
View details for PubMedID 26262338

Automating Identification of Multiple Chronic Conditions in Clinical Practice Guidelines
AMIA Joint Summits on Translational Science proceedings AMIA Summit on Translational Science
2015; 2015: 456-460
Many clinical practice guidelines (CPGs) are intended to provide evidence-based guidance to clinicians on a single disease, and they are frequently considered inadequate for caring for patients with multiple chronic conditions (MCC), defined as two or more chronic conditions. It is unclear to what degree disease-specific CPGs provide guidance about MCC. In this study, we develop a method for extracting knowledge from single-disease chronic condition CPGs to determine how frequently they mention commonly co-occurring chronic diseases. We focus on 15 highly prevalent chronic conditions. We use publicly available resources: a repository of guideline summaries from the National Guideline Clearinghouse to build a text corpus, a data dictionary of ICD-9 codes from the Medicare Chronic Conditions Data Warehouse (CCW) to construct an initial list of disease terms, and disease synonyms from the National Center for Biomedical Ontology to enrich that list. First, for each disease guideline, we determined the frequency of comorbid condition mentions (disease-comorbidity pairs) by exact matching of disease synonyms in the text corpus. Next, we developed an annotated reference standard from a sample subset of guidelines and used it to evaluate our approach. Finally, we compared the co-prevalence of common pairs of chronic conditions in Medicare CCW data to the frequency of disease-comorbidity pairs in CPGs. Our results show that some disease-comorbidity pairs occur far more frequently in CPGs than others. Sixty-one (29.0%) of 210 possible disease-comorbidity pairs never occurred; for example, no guideline on chronic kidney disease mentioned depression. At the other extreme, heart failure guidelines mentioned ischemic heart disease most frequently. Our method adequately identifies comorbid chronic conditions in CPG recommendations, with precision 0.82, recall 0.75, and F-measure 0.78.
Our work identifies knowledge currently embedded in the free text of clinical practice guideline recommendations and provides an initial view of the extent to which CPGs mention common comorbid conditions. Knowledge extracted from CPG text in this way may help identify gaps in guideline recommendations regarding MCC and thus point to opportunities for guideline improvement.
View details for PubMedID 26306285

Literacy and Retention of Information After a Multimedia Diabetes Education Program and Teach-Back
JOURNAL OF HEALTH COMMUNICATION
2011; 16: 89-102
Few studies have examined the effectiveness of teaching strategies to improve patients' recall and retention of information. As a next step in implementing a literacy-appropriate, multimedia diabetes education program (MDEP), the present study reports the results of two experiments designed to answer (a) how much knowledge is retained 2 weeks after viewing the MDEP, (b) whether knowledge retention differs across literacy levels, and (c) whether adding a teach-back protocol after the MDEP improves knowledge retention at 2 weeks' follow-up. In Experiment 1, adult primary care patients (n = 113) watched the MDEP and answered knowledge-based questions about diabetes before and after viewing it. Two weeks later, participants completed the knowledge assessment a third time. Methods and procedures for Experiment 2 (n = 58) were identical, except that if participants answered a question incorrectly after watching the MDEP, they received teach-back, wherein the information was reviewed and the question was asked again, up to two times. Two weeks later, Experiment 2 participants completed the knowledge assessment again. Literacy was measured using the S-TOFHLA. After 2 weeks, all participants, regardless of literacy level, had forgotten approximately half of the new information they had learned from the MDEP. In regression models, adding a teach-back protocol did not improve knowledge retention, and literacy was not associated with knowledge retention at 2 weeks. Health education interventions must incorporate strategies that can improve retention of health information and actively engage patients in long-term learning.
View details for DOI 10.1080/10810730.2011.604382
View details for Web of Science ID 000299952500010
View details for PubMedID 21951245
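
The comorbidity-mention extraction described in the multiple chronic conditions study above lends itself to a short sketch. The code below is an illustrative reconstruction, not the authors' implementation: the synonym lists and guideline text are toy stand-ins for the CCW/NCBO-derived disease dictionary and the National Guideline Clearinghouse corpus, and matching is simple case-insensitive, word-boundary exact matching, as the abstract describes. The F-measure helper shows how the reported precision and recall relate to the reported F of 0.78.

```python
import re

# Toy stand-ins for the study's real inputs. All terms and text here are
# illustrative examples, not data from the actual study resources.
SYNONYMS = {
    "heart failure": ["heart failure", "cardiac failure", "CHF"],
    "ischemic heart disease": ["ischemic heart disease", "coronary artery disease"],
    "depression": ["depression", "major depressive disorder"],
}

def mention_counts(guideline_disease, text, synonyms):
    """Count exact (case-insensitive, word-boundary) synonym matches in a
    guideline's text for each comorbid condition other than the guideline's
    own disease, yielding disease-comorbidity pair frequencies."""
    counts = {}
    for disease, terms in synonyms.items():
        if disease == guideline_disease:
            continue  # a guideline naturally mentions its own disease
        n = 0
        for term in terms:
            n += len(re.findall(r"\b" + re.escape(term) + r"\b",
                                text, re.IGNORECASE))
        counts[disease] = n
    return counts

def f_measure(precision, recall):
    """F1: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Hypothetical snippet of heart failure guideline text.
text = ("Patients with CHF and coronary artery disease should be assessed; "
        "screen for ischemic heart disease before initiating therapy.")
print(mention_counts("heart failure", text, SYNONYMS))
# → {'ischemic heart disease': 2, 'depression': 0}
print(round(f_measure(0.82, 0.75), 2))  # the reported 0.82/0.75 give 0.78
```

A real pipeline would iterate this over all 15 conditions' guideline summaries to fill the 15 x 14 = 210-cell disease-comorbidity matrix the abstract refers to.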