
Quality Reporting of Systematic Review and Meta-Analysis According to PRISMA 2020 Guidelines: Results from Recently Published Papers in the Korean Journal of Radiology
Ho Young Park,1 Chong Hyun Suh,1 Sungmin Woo,2 Pyeong Hwa Kim,1 and Kyung Won Kim1
1Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
2Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.

Corresponding author: Chong Hyun Suh, MD, PhD, Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Korea.

Received October 20, 2021; Revised November 26, 2021; Accepted December 21, 2021.

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.



Abstract


Objective

To evaluate the completeness of the reporting of systematic reviews and meta-analyses published in a general radiology journal using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines.

Materials and Methods

Twenty-four articles (systematic review and meta-analysis, n = 18; systematic review only, n = 6) published between August 2009 and September 2021 in the Korean Journal of Radiology were analyzed. Completeness of the reporting of main texts and abstracts was evaluated using the PRISMA 2020 statement. For each item in the statement, the proportion of studies that met the guidelines' recommendation was calculated, and items that were satisfied by fewer than 80% of the studies were identified. The review process was conducted by two independent reviewers.

Results

Of the 42 items (including sub-items) in the PRISMA 2020 statement for the main text, 24 were satisfied by fewer than 80% of the included articles. The 24 items were grouped into eight domains: 1) assessment of the eligibility of potential articles, 2) assessment of the risk of bias, 3) synthesis of results, 4) additional analysis of study heterogeneity, 5) assessment of non-reporting bias, 6) assessment of the certainty of evidence, 7) provision of limitations of the study, and 8) additional information, such as protocol registration. Of the 12 items in the abstract checklist, eight were incorporated in fewer than 80% of the included publications.

Conclusion

Several items included in the PRISMA 2020 checklist were overlooked in systematic review and meta-analysis articles published in the Korean Journal of Radiology. Based on these results, we propose a double-check list for improving the quality of systematic reviews and meta-analyses. Authors and reviewers should familiarize themselves with the PRISMA 2020 statement and check whether the recommended items are fully satisfied prior to publication.

Systematic reviews and meta-analyses have several strengths over individual studies because they provide estimated outcomes with higher precision, address questions that cannot be asked in individual studies, and provide evidence-based guidance from conflicting results [1]. As a result, an increasing number of systematic reviews and meta-analyses are published every year in various medical fields [2]. Accordingly, the quality of reporting has been emphasized in systematic reviews and meta-analyses in order to provide clarity and transparency regarding study conduct procedures [3]. This is especially important for systematic reviews and meta-analyses because the synthesized results are influenced by the results from individual studies and can therefore be misleading if the individual results are biased [4].

In 2009, the first Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement was published with the aim of improving the quality of reporting [3]. Since then, methodological approaches, such as result synthesis and risk of bias assessment, have advanced, thereby necessitating an update of the guidelines; thus, an updated version of the PRISMA statement was published in 2020 [5]. Despite the publication of the PRISMA statement, the quality of systematic reviews and meta-analyses still varies between individual articles and journals [6]. Moreover, there has been only a modest improvement in the quality of reporting in radiology articles since the publication of the PRISMA 2009 statement, suggesting that there is still room for further improvement [6].

To the best of our knowledge, few studies in the field of radiology have evaluated the quality of reporting in systematic reviews and meta-analyses using the PRISMA 2020 statement. Therefore, the goal of our study was to assess the reporting quality of recent publications in the Korean Journal of Radiology using the PRISMA 2020 statement. Based on this assessment, we aimed to provide suggestions for authors on how to improve the quality of their reports.

MATERIALS AND METHODS

Search Strategy and Study Selection

Using the MEDLINE database, we identified all potentially relevant systematic reviews, with or without meta-analysis, published in a single peer-reviewed journal, the Korean Journal of Radiology, between August 2009 and September 2021. Because the first PRISMA statement was published in July 2009 [3], we did not include studies published earlier than that date. The search terms were ("Korean Journal of Radiology"[Journal]) AND ((systematic review) OR (meta-analysis)). A total of 31 records (i.e., abstracts and titles) were retrieved from the MEDLINE database, and two reviewers evaluated the eligibility of each article. Among them, two records [7, 8] were removed before screening because they were published before 2009. Three records [9, 10, 11] were excluded during screening because they were guidelines for systematic reviews and meta-analyses. As a result, full texts from 26 publications were retrieved and assessed for eligibility, two of which [12, 13] were excluded because they were not systematic reviews. Finally, 24 publications were included in our analysis (Fig. 1) [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37].

Data Extraction

Data extraction from the included studies was performed by two reviewers (with 2 and 8 years of experience in systematic review and meta-analysis studies, respectively), as described in detail in the Supplement.

Major Changes in the PRISMA 2020 Statement

Several changes have been made in the PRISMA 2020 statement compared with the PRISMA 2009 statement. Although the number of main items in the checklists was unchanged (27 items), a large number of sub-items were added (42 in total, including sub-items) to provide more comprehensive guidelines. In addition, checklists for the abstracts (12 items) were included in the guidelines. Table 1 shows a brief summary of the major updates made in 2020 [5]. Two reviewers reviewed the checklists and reached a consensus for each item, as detailed in the Supplement.

Data Analysis

We extracted the PRISMA 2020 checklist items that were satisfied by fewer than 80% of the articles and grouped them into eight relevant domains. Suggestions for improving the quality of systematic reviews and meta-analyses were provided based on these domains. Evaluation of the adherence of the included articles to the PRISMA 2020 statement is described in the Supplement.

RESULTS

Characteristics of the Included Studies

The characteristics of the 24 included studies are summarized in Figure 2 and Supplementary Table 1. Briefly, 18 studies (14 univariate and 4 bivariate) were systematic reviews with meta-analyses [14, 15, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 31, 32, 33, 34, 35] and six studies were systematic reviews without meta-analyses [16, 17, 18, 30, 36, 37]. In terms of the type of data used for analyses, 13 studies used dichotomous data to measure the following outcomes: 1) efficacy or safety of an intervention (proportion of tumor response, recurrence, or treatment-related complications), 2) efficacy of a diagnostic test (proportion of technical failure and unreliable measurement), 3) imaging features in a certain disease (proportion of specific imaging findings), 4) evaluation of study quality or reporting quality (proportion of studies that met the specific criteria), and 5) diagnostic yield [14, 15, 18, 19, 22, 23, 24, 28, 32, 33, 34, 35, 36]; six studies used time-to-event data to calculate the efficacy of a new intervention or the relationship between overall survival and imaging surrogate markers [15, 22, 31, 32, 33, 34]; six studies used diagnostic test data to pool the diagnostic performance of index tests [16, 25, 26, 27, 29, 37]; two studies used continuous data to evaluate the agreement and reliability of measurements between imaging methods [20, 21]; one study used descriptive data from imaging protocols in randomized controlled trials of acute ischemic stroke [30]; and one study used qualitative and quantitative data to assess the health-related quality of life in patients with hepatocellular carcinoma [17]. The number of included studies ranged from 4 to 516, with the majority (83%, 20 out of 24) of the articles including more than 10 studies. The statistical methods used in the included articles are summarized in Supplementary Table 2.

Assessment Using PRISMA 2020 Checklists

Overall Results

Each item in the PRISMA 2020 checklist and abstract checklist was evaluated for the included articles (Tables 2, 3). Of the 12 items in the abstract checklist, eight were reported in fewer than 80% of the articles (Fig. 3). To generate abstracts of better quality, exclusion criteria, risk of bias assessment tools, statistical methods, and limitations of evidence should be included.

Of the 42 items (including sub-items) included in the guidelines for the main text, 24 were reported in fewer than 80% of the articles. While most studies satisfied the items in the Title, Introduction, and Discussion, incomplete reporting was frequently observed in the Methods and the Results, especially in result synthesis (Fig. 4). The 24 items were grouped into eight domains for further exploration: 1) assessment of the eligibility of potential articles (items #8, #16b), 2) assessment of the risk of bias (items #11, #18), 3) synthesis of results (items #13a, #13b, #13c, #13d, #20a), 4) additional analysis (items #13e, #13f, #20c, #20d), 5) assessment of the non-reporting bias (items #14, #21), 6) assessment of the certainty of evidence (items #15, #22), 7) provision of limitations of the study (item #23c), and 8) additional information (items #24–#27).

Assessment of the Eligibility of Potential Articles (Items #8, #16b)

Seven articles [16, 24, 27, 30, 31, 32, 36] did not report how many reviewers participated in the evaluation of study eligibility or whether they worked independently (item #8). Eighteen articles [16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 32, 34, 36, 37] did not cite the studies that seemed to meet the inclusion criteria but were excluded in the final stage, or did not explain the reason for exclusion (item #16b).

The PRISMA 2020 guidelines emphasize transparency in the study selection process. In addition, the newly added item #16b requires authors to provide the reasons for exclusion of potentially eligible studies [5].

Assessment of the Risk of Bias (Items #11, #18)

Four articles [24, 29, 30, 31] did not evaluate the risk of bias in the studies, and one article [33] did not report how many reviewers assessed the risk of bias. Various assessment tools were used in the remaining articles. For randomized controlled trials (RCTs), the Risk of Bias (RoB) tool or revised Jadad scale was implemented [32, 33, 34]. For non-RCTs, Quality Assessment of Diagnostic Accuracy Studies (QUADAS)-2 [16, 19, 20, 21, 25, 26, 28], QUADAS [37], the Risk of Bias Assessment tool for non-randomized studies [14, 35], Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) [17], the Newcastle-Ottawa Scale [15, 32, 34], and the National Institutes of Health (NIH) assessment tool [22, 23] were used. Among the articles that evaluated the risk of bias, only seven provided the full assessment results of the individual studies [17, 19, 22, 23, 26, 33, 34].

Evaluation of the risk of bias in studies is essential for authors to understand result synthesis or to search for possible heterogeneity among the included studies, as well as for readers to evaluate the transparency of pooled results [5]. We advise authors to provide a visual representation of the assessment results for each study, rather than only the overall results across studies. Among the various assessment tools available, the Cochrane guidelines [1] recommend the RoB, ROBINS-I, and QUADAS-2 as the preferred methods for assessing RCTs, non-RCTs on interventions, and diagnostic test accuracy (DTA) studies, respectively. Although there is no universally accepted tool for the evaluation of observational studies without interventions, the Newcastle-Ottawa Scale, the NIH assessment tool, and the Joanna Briggs Institute critical appraisal checklists may be suitable options [38].

Synthesis of Results (Items #13a, #13b, #13c, #13d, #20a)

Among the articles that reported multiple pooled results, five [17, 20, 21, 22, 35] did not clearly report which studies were included for each result synthesis (item #13a). Ten articles [14, 21, 24, 25, 26, 28, 29, 32, 34, 37] did not report how missing data were handled or how the data were converted for result synthesis (item #13b). Twelve articles [14, 15, 19, 22, 23, 24, 27, 32, 33, 34, 35, 37] did not mention the methods used for the visual representation of the results from individual studies and syntheses, although forest plots were presented in the Results in all but one article [37] (item #13c). Thirteen articles with meta-analysis [14, 15, 19, 20, 21, 22, 23, 25, 26, 27, 28, 31, 35] did not report the rationale for choosing a specific statistical model (e.g., fixed- vs. random-effects model) (item #13d). Three studies [32, 33, 34, 35] selected fixed- or random-effects models based on statistical values of study heterogeneity. Two studies [19, 20] used the DerSimonian-Laird random-effects model for pooling rare events (e.g., complication rate). In the Results section, none of the studies reported a brief summary of the study characteristics and the risk of bias for each synthesis (item #20a).

The PRISMA 2020 checklist has elaborated the "synthesis of results" item to provide a more comprehensive evaluation regarding data preparation (item #13b), data visualization (item #13c), and the statistical methods used for result synthesis (item #13d) [5]. When multiple results are pooled, authors are advised to cite the studies and report the number of included studies for each result synthesis (item #13a). When a meta-analysis is performed, authors should explain the rationale for choosing a statistical model. Choosing between fixed- and random-effects models should not be based on statistical tests for heterogeneity (i.e., Cochran's Q test or the Higgins inconsistency index) [1]; rather, it depends on the authors' decision of whether effect sizes are truly identical between studies [1]. Therefore, the random-effects model is recommended when there is heterogeneity in study designs, which is very common when performing meta-analyses in the field of radiology.
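To make the distinction concrete, the DerSimonian-Laird procedure computes Cochran's Q and the Higgins I² as descriptions of heterogeneity, while the pooling itself re-weights studies by the sum of within- and between-study variance. The Python sketch below is our own illustration (the function name and toy numbers are hypothetical; real analyses should use dedicated meta-analysis software):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimator.

    effects: per-study effect estimates; variances: their sampling variances.
    Returns (pooled_effect, pooled_se, cochran_q, i_squared_percent).
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                          # inverse-variance (fixed) weights
    fixed = np.sum(w * effects) / np.sum(w)      # fixed-effect pooled estimate
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q statistic
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance (method of moments)
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0  # Higgins I^2 (%)
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return float(pooled), float(se), float(q), float(i2)
```

For two sharply conflicting studies (effects 0.0 and 1.0, each with a sampling variance of 0.01), this yields a pooled effect of 0.5 with I² = 98%; a fixed-effect model would report the same point estimate but a misleadingly narrow standard error, which is why the model choice should rest on the authors' judgment about heterogeneity rather than on the Q or I² values alone.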

Currently, the Cochrane guideline does not recommend a consensus method for result synthesis [1]. However, inverse-variance methods (including the DerSimonian and Laird method) should be avoided in meta-analyses of rare events [39]. Because these methods are based on the assumption of a normal distribution of effect sizes, significant bias in pooled results may occur in meta-analyses of rare events [39, 40]. In such cases, other methods such as the Peto method, the Mantel-Haenszel method without zero-cell corrections, or generalized linear mixed models are preferred, although there is no generally accepted optimal method for dealing with rare events [39, 41, 42].
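The appeal of the Mantel-Haenszel method for rare events can be seen directly from its formula: the pooled odds ratio is Σ(aᵢdᵢ/nᵢ) / Σ(bᵢcᵢ/nᵢ), so a zero cell in one stratum contributes nothing to a sum rather than forcing a continuity correction. A minimal sketch, with our own hypothetical 2 × 2 data:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio from a list of 2x2 tables.

    Each table is (a, b, c, d): events/non-events in the exposed row (a, b)
    and the control row (c, d). No zero-cell continuity correction is applied,
    which is why the method is suited to meta-analyses of rare events.
    """
    num = 0.0
    den = 0.0
    for a, b, c, d in tables:
        n = a + b + c + d
        num += a * d / n  # weighted "concordant" term for this stratum
        den += b * c / n  # weighted "discordant" term for this stratum
    return num / den
```

A single table reproduces the ordinary odds ratio, and a table with a zero event count simply adds zero to the numerator sum instead of breaking the calculation.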

When reporting syntheses of multiple effect sizes, authors should consider within-study covariance (i.e., correlation between outcomes). However, none of the articles included in this study considered within-study covariance despite the evident risk of correlation (e.g., pooling overall survival at multiple time points) [15, 20, 21, 22, 31, 32, 34]. The potential risks of correlation in these studies are summarized in Supplementary Table 3. When multiple effect sizes are synthesized from data from the same participants, statistical dependency may occur and produce erroneous standard errors in the pooled results [43]. Suggestions for managing within-study covariance are provided in the Supplement.
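The standard-error distortion described above follows from the variance of an average of correlated estimates, Var((x₁ + x₂)/2) = (v₁ + v₂ + 2ρs₁s₂)/4. A small sketch of this arithmetic (our own illustration, not a method from the article):

```python
def averaged_se(se1, se2, rho=0.0):
    """SE of the simple average of two effect estimates from one study.

    se1, se2: standard errors of the two estimates; rho: their correlation.
    Assuming rho = 0 for genuinely correlated outcomes understates the SE.
    """
    var = (se1 ** 2 + se2 ** 2 + 2.0 * rho * se1 * se2) / 4.0
    return var ** 0.5
```

With two estimates whose SEs are both 0.2, assuming independence gives a pooled SE of about 0.14, while perfect correlation (ρ = 1) gives 0.2; treating correlated outcomes as independent therefore overstates the precision of the synthesis.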

Additional Analysis (Items #13e, #13f, #20c, #20d)

Five articles [16, 29, 32, 35, 37] did not perform subgroup analysis or meta-regression. One article [34] did not mention the method used to explore study heterogeneity in the Materials and Methods section, although subgroup analysis was provided in the Results section (items #13e, #20c). Thirteen articles [14, 15, 20, 21, 22, 23, 25, 28, 29, 31, 32, 34, 35] did not perform sensitivity analysis (items #13f, #20d).

The PRISMA 2020 guideline requires that authors perform subgroup analysis or meta-regression to evaluate the source of study heterogeneity, and sensitivity analysis to assess the robustness of the synthesized results.

Assessment of Non-Reporting Bias (Items #14, #21)

Seven articles [16, 18, 29, 31, 32, 35, 37] did not evaluate non-reporting bias. Among the 13 articles that used funnel plots, four [15, 21, 24, 27] did not further explore the source of bias, although asymmetry was observed. Two articles [33, 34] performed a statistical test for funnel plot asymmetry, although fewer than 10 studies were included.

Non-reporting bias refers to the fact that reporting of research findings is influenced by the p value and the magnitude or direction of the results [44]. Although non-reporting bias is a broad term encompassing publication bias, time-lag bias, and selective non-reporting bias, publication bias has long been the focus of interest [1]. Funnel plots and statistical tests for asymmetry are frequently performed to evaluate non-reporting bias; however, the test for asymmetry has low statistical power and thus should not be used when fewer than 10 studies are included [45, 46]. Moreover, it should be noted that asymmetry in funnel plots is not always due to non-reporting bias [46]. As a result, a contour-enhanced funnel plot may be preferred, because it may indicate whether the asymmetry is due to non-reporting bias or other factors [47]. Other potential sources of asymmetry include poor methodological quality in small-sized studies or true heterogeneity between studies [45]. For example, heterogeneity in the characteristics of the study population or the implementation of the intervention between small vs. large-sized studies may cause asymmetry in the funnel plots [46]. When asymmetry is observed, authors should search for its potential sources.
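A commonly used regression-based asymmetry test (in the style of Egger's test) regresses the standardized effect (effect/SE) on precision (1/SE); an intercept far from zero suggests small-study effects. The sketch below is illustrative only, assumes per-study effects and standard errors are at hand, and, per the caveat above, should not be applied to fewer than 10 studies:

```python
import numpy as np

def egger_intercept(effects, ses):
    """Regression-style funnel-plot asymmetry test (Egger-type).

    Regresses the standardized effect (effect/SE) on precision (1/SE).
    Returns (intercept, t_statistic); an intercept far from zero
    suggests small-study asymmetry.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    y = effects / ses                          # standardized effects
    x = 1.0 / ses                              # precisions
    X = np.column_stack([np.ones_like(x), x])  # intercept + slope design matrix
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    sigma2 = np.sum(resid ** 2) / (len(y) - 2)   # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)        # coefficient covariance
    t = coef[0] / np.sqrt(cov[0, 0])
    return float(coef[0]), float(t)
```

When effect sizes are unrelated to study size, the intercept is near zero; when small studies (large SEs) systematically show larger effects, the intercept moves away from zero, mirroring the funnel-plot asymmetry the text describes.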

Assessment of the Certainty of Evidence (Items #15, #22)

Only two studies [24, 33] used the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to evaluate the quality of evidence. The PRISMA 2020 guidelines included new items regarding the certainty of evidence for pooled results [5]. To evaluate the certainty of evidence, Cochrane has adopted the GRADE approach [1], which is composed of five domains: risk of bias, inconsistency, indirectness, imprecision, and publication bias [48]. By incorporating the evaluation for each domain, the final assessment for pooled results is classified into four categories: high, moderate, low, and very low quality. The certainty of evidence should be evaluated for each outcome, because the level of certainty often varies between outcomes [49]. Lastly, a "summary of findings" table should be presented, including the outcome of interest and its pooled result as well as the quality of evidence. Although the GRADE approach was first developed to evaluate studies on therapeutic interventions, it can be applied to DTA studies as well [50, 51]. Supplementary Table 4 is an example of a "summary of findings" table, which may be produced using the GRADEpro GDT software (www.gradepro.org).

Provision of Limitations of the Study (Item #23c)

The PRISMA 2020 guidelines require that authors provide limitations of not only the evidence but also the review process. Of all the included articles, 58% (14 out of 24) [16, 17, 19, 20, 22, 24, 26, 27, 30, 32, 33, 34, 36, 37] described the limitations of the review process in the Discussion section, which included: 1) limitation of search terms: "because we found HRQoL studies using the search term 'quality of life,' we might have missed studies using other terminology" [17]; 2) limitation in the study selection process: "we included studies that were available only in the abstract form, and the reported data may not be as accurate and complete as those reported in the corresponding full text publication" [33]; 3) limitations in data extraction: "there were limitations in extracting the exact survival data from the study regarding censored subjects and how these might affect the results" [22]; and 4) inability to perform a planned analysis due to lack of data: "because of the lack of sufficient data, we were unable to perform subgroup analyses to compare the effect of TACE plus RFA and surgical resection" [32].

Additional Information (Items #24–#27)

None of the studies reported registration information (item #24) or which data in the review were publicly available to readers (item #27). Eleven studies [14, 16, 17, 23, 24, 28, 29, 32, 33, 34, 35] did not report any financial or non-financial support (item #25), and nine studies [15, 18, 22, 27, 32, 33, 34, 36, 37] did not declare any competing interests for the authors (item #26).

The PRISMA 2020 guidelines require that authors provide registration information for the review (item #24a), a statement regarding accessibility of the registered protocol (item #24b), and any amendment made to the protocol (item #24c) [5]. PROSPERO is a database that authors can use to register their protocols [52]. Registering the protocol before conducting the systematic review enables readers to evaluate whether the article properly followed the protocol and to search for any differences between the pre-specified information and the finally reported data [5]. If the protocol was not registered, this should be stated, and we suggest that authors discuss the potential limitations of not doing so. In addition, authors should report any financial or non-financial support received during the study, as well as their competing interests. Public sharing of the data used in the review is encouraged but is not widely performed in medical research [53]. Currently, there are several public data sharing platforms, such as the Open Science Framework (https://osf.io) or the Systematic Review Data Repository (https://www.ahrq.gov/cpi/about/otherwebsites/srdr.ahrq.gov/index.html).

DISCUSSION

Our study demonstrated that a substantial number of published systematic reviews with or without meta-analysis required further improvements to satisfy the PRISMA 2020 guidelines. These areas for improvement could be divided into eight domains for which thorough explanations and suggestions can be made: 1) assessment of the eligibility of potential articles, 2) assessment of the risk of bias, 3) synthesis of results, 4) additional analysis to explain study heterogeneity, 5) assessment of the non-reporting bias, 6) assessment of the certainty of evidence, 7) provision of limitations of the study, and 8) additional information such as protocol registration. In addition, for better quality abstracts, authors should report the exclusion criteria, the assessment tool for the risk of bias, the statistical methods, and the limitations of the evidence.

Based on our results, we developed a double-check list consisting of the items in the PRISMA 2020 guidelines that had often been missed in published articles (Table 4). In the checklist, we made specific suggestions for each domain and provided further comments regarding the errors in statistical analyses identified in some published articles (e.g., determining the fixed- vs. random-effects model based on statistical values of study heterogeneity). To help authors properly utilize the statistical models and assessment tools, we summarized the recommended methods in Table 5. These recommended methods are mainly based on the Cochrane Handbook for Systematic Reviews of Interventions and previous guideline articles for DTA studies [1, 9, 11, 54, 55, 56].

A substantial proportion of meta-analyses in the field of radiology are DTA research. In 2018, an extension of the PRISMA 2009 statement was developed for systematic reviews of DTA studies (the PRISMA-DTA statement) [57]. Compared with the PRISMA 2020 statement, the PRISMA-DTA statement requires specific information regarding the index test, including the clinical role of the index test and 2 × 2 data (true positive, false positive, false negative, and true negative) for each study. However, the PRISMA 2020 statement provides more comprehensive checklists in the remaining fields, such as data extraction, data handling, statistical analysis, and result presentation. Thus, authors who conduct systematic reviews of DTA studies should follow the PRISMA 2020 statement in general and refer to the PRISMA-DTA statement for DTA-specific requirements [5].

There are several limitations to our study. First, the articles used in our study were from a single journal, which may impair extrapolation of the results. However, the Korean Journal of Radiology may serve as a proper representative sample given its reputation in the field of radiology, nuclear medicine, and imaging (rank: 36 out of 452 journals in Scopus) and its wide coverage of topics as a general journal of radiology. Second, detailed statistical background for each method in meta-analyses was not provided. In addition, we did not cover advanced techniques in meta-analysis, such as individual participant data meta-analysis or network meta-analysis [58, 59]. Third, while our study focused on the reporting quality of systematic reviews and meta-analyses, reporting quality does not necessarily indicate the quality of the study itself. Proper research questions based on the population, intervention, comparison, outcome (PICO) framework and the purpose of conducting the systematic review or meta-analysis should be well established beforehand [1]. Despite these limitations, our study clearly identified which items should be improved for high-quality systematic review articles. Authors and reviewers who are interested in systematic reviews or meta-analyses should be familiar with the PRISMA 2020 statement. Our checklists may help authors to identify which items of the PRISMA 2020 statement should be reinforced prior to submission.

Conflicts of Interest: Chong Hyun Suh, who is on the editorial board of the Korean Journal of Radiology, was not involved in the editorial evaluation or decision to publish this article. All remaining authors have declared no conflicts of interest.

Author Contributions:

  • Conceptualization: Chong Hyun Suh, Kyung Won Kim.

  • Data curation: Ho Young Park.

  • Formal analysis: Ho Young Park, Chong Hyun Suh.

  • Funding acquisition: Chong Hyun Suh.

  • Methodology: Ho Young Park, Pyeong Hwa Kim, Chong Hyun Suh.

  • Project administration: Sungmin Woo, Chong Hyun Suh.

  • Supervision: Kyung Won Kim.

  • Writing—original draft: Ho Young Park.

  • Writing—review & editing: Sungmin Woo, Pyeong Hwa Kim, Chong Hyun Suh, Kyung Won Kim.


Funding Statement: This research was supported by a grant of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (grant number: HI18C2383).

Availability of Data and Material

The datasets generated or analyzed during the study are available from the corresponding author on reasonable request.

Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ. Cochrane handbook for systematic reviews of interventions version 6.2 (updated February 2021). Training.cochrane.org Web site. [Accessed August 8, 2021].

Booth A, Clarke M, Ghersi D, Moher D, Petticrew M, Stewart L. An international registry of systematic-review protocols. Lancet 2010;377:108–109.

Moher D, Liberati A, Tetzlaff J, Altman DG. PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 2009;6:e1000097

Nawijn F, Ham WHW, Houwert RM, Groenwold RHH, Hietbrink F, Smeeing DPJ. Quality of reporting of systematic reviews and meta-analyses in emergency medicine based on the PRISMA statement. BMC Emerg Med 2019;19:19

Page MJ, Moher D, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. PRISMA 2020 explanation and elaboration: updated guidance and exemplars for reporting systematic reviews. BMJ 2021;372:n160

Tunis AS, McInnes MD, Hanna R, Esmail K. Association of study quality with completeness of reporting: have completeness of reporting and quality of systematic reviews and meta-analyses in major radiology journals changed since publication of the PRISMA statement? Radiology 2013;269:413–426.

Wang P, Guo YM, Liu M, Qiang YQ, Guo XJ, Zhang YL, et al. A meta-analysis of the accuracy of prostate cancer studies which use magnetic resonance spectroscopy as a diagnostic tool. Korean J Radiol 2008;9:432–438.

Yu SH, Kim CB, Park JW, Kim MS, Radosevich DM. Ultrasonography in the diagnosis of appendicitis: evaluation by meta-analysis. Korean J Radiol 2005;6:267–277.

Kim KW, Lee J, Choi SH, Huh J, Park SH. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: a practical review for clinical researchers-part I. general guidance and tips. Korean J Radiol 2015;16:1175–1187.

Lee J, Kim KW, Choi SH, Huh J, Park SH. Systematic review and meta-analysis of studies evaluating diagnostic test accuracy: a practical review for clinical researchers-part II. statistical methods of meta-analysis. Korean J Radiol 2015;16:1188–1196.

Suh CH, Park SH. Successful publication of systematic review and meta-analysis of studies evaluating diagnostic test accuracy. Korean J Radiol 2016;17:5–6.

Kim SY, Chung HW, Oh TS, Lee JS. Practical guidelines for ultrasound-guided core needle biopsy of soft-tissue lesions: transformation from beginner to specialist. Korean J Radiol 2017;18:361–369.

Oh SW, Cheon GJ. Prostate-specific membrane antigen PET imaging in prostate cancer: opportunities and challenges. Korean J Radiol 2018;19:819–831.

Cho SJ, Kim HS, Suh CH, Park JE. Radiological recurrence patterns after bevacizumab treatment of recurrent high-grade glioma: a systematic review and meta-analysis. Korean J Radiol 2020;21:908–918.

Choi SH, Kim JW, Kim JH, Kim KW. Efficacy and safety of microwave ablation for malignant renal tumors: an updated systematic review and meta-analysis of the literature since 2012. Korean J Radiol 2018;19:938–949.

Chung SR, Choi YJ, Suh CH, Lee JH, Baek JH. Diffusion-weighted magnetic resonance imaging for predicting response to chemoradiation therapy for head and neck squamous cell carcinoma: a systematic review. Korean J Radiol 2019;20:649–661.

Kang D, Shim S, Cho J, Lim HK. Systematic review of studies assessing the health-related quality of life of hepatocellular carcinoma patients from 2009 to 2018. Korean J Radiol 2020;21:633–646.

Kim DW, Jang HY, Kim KW, Shin Y, Park SH. Design characteristics of studies reporting the performance of artificial intelligence algorithms for diagnostic analysis of medical images: results from recently published papers. Korean J Radiol 2019;20:405–410.

Kim DW, Suh CH, Kim KW, Pyo J, Park C, Jung SC. Technical performance of two-dimensional shear wave elastography for measuring liver stiffness: a systematic review and meta-analysis. Korean J Radiol 2019;20:880–893.

Kim JY, Suh YJ, Han K, Choi BW. Reliability of coronary artery calcium severity assessment on non-electrocardiogram-gated CT: a meta-analysis. Korean J Radiol 2021;22:1034–1043.

Kim JY, Suh YJ, Han K, Kim YJ, Choi BW. Cardiac CT for measurement of right ventricular volume and function in comparison with cardiac MRI: a meta-analysis. Korean J Radiol 2020;21:450–461.

Kim PH, Choi SH, Kim JH, Park SH. Comparison of radioembolization and sorafenib for the treatment of hepatocellular carcinoma with portal vein tumor thrombosis: a systematic review and meta-analysis of safety and efficacy. Korean J Radiol 2019;20:385–398.

Kim PH, Kim M, Suh CH, Chung SR, Park JE, Kim SC, et al. Neuroimaging findings in patients with COVID-19: a systematic review and meta-analysis. Korean J Radiol 2021;22:1875–1885.

Kim PH, Suh CH, Kim HS, Kim KW, Kim DY, Lee EQ, et al. Immune checkpoint inhibitor with or without radiotherapy in melanoma patients with brain metastases: a systematic review and meta-analysis. Korean J Radiol 2021;22:584–595.

Kim TH, Woo S, Han S, Suh CH, Ghafoor S, Hricak H, et al. The diagnostic performance of the length of tumor capsular contact on MRI for detecting prostate cancer extraprostatic extension: a systematic review and meta-analysis. Korean J Radiol 2020;21:684–694.

Ko MJ, Park DA, Kim SH, Ko ES, Shin KH, Lim W, et al. Accuracy of digital breast tomosynthesis for detecting breast cancer in the diagnostic setting: a systematic review and meta-analysis. Korean J Radiol 2021;22:1240–1252.

Liao XL, Wei JB, Li YQ, Zhong JH, Liao CC, Wei CY. Functional magnetic resonance imaging in the diagnosis of locally recurrent prostate cancer: are all pulse sequences helpful? Korean J Radiol 2018;19:1110–1118.

Lim SJ, Kim M, Suh CH, Kim SY, Shim WH, Kim SJ. Diagnostic yield of diffusion-weighted brain magnetic resonance imaging in patients with transient global amnesia: a systematic review and meta-analysis. Korean J Radiol 2021;22:1680–1689.

Park SH, Cho SH, Choi SH, Jang JK, Kim MJ, Kim SH, et al. MRI assessment of complete response to preoperative chemoradiation therapy for rectal cancer: 2020 guide for practice from the Korean Society of Abdominal Radiology. Korean J Radiol 2020;21:812–828.

Suh CH, Jung SC, Kim B, Cho SJ, Woo DC, Oh WY, et al. Neuroimaging in randomized, multi-center clinical trials of endovascular treatment for acute ischemic stroke: a systematic review. Korean J Radiol 2020;21:42–57.

Suh CH, Kim HS, Jung SC, Choi CG, Kim SJ, Kim KW. Optimized image-based surrogate endpoints in targeted therapies for glioblastoma: a systematic review and meta-analysis of phase III randomized controlled trials. Korean J Radiol 2020;21:471–482.

Wang WD, Zhang LH, Ni JY, Jiang XY, Chen D, Chen YT, et al. Radiofrequency ablation combined with transcatheter arterial chemoembolization therapy versus surgical resection for hepatocellular carcinoma within the Milan criteria: a meta-analysis. Korean J Radiol 2018;19:613–622.

Wang X, Hu Y, Ren M, Lu X, Lu G, He S. Efficacy and safety of radiofrequency ablation combined with transcatheter arterial chemoembolization for hepatocellular carcinomas compared with radiofrequency ablation alone: a time-to-event meta-analysis. Korean J Radiol 2016;17:93–102.

Zhu ZX, Liao MH, Wang XX, Huang JW. Transcatheter arterial chemoembolization plus 131I-labelled metuximab versus transcatheter arterial chemoembolization alone in intermediate/advanced stage hepatocellular carcinoma: a systematic review and meta-analysis. Korean J Radiol 2016;17:882–892.

Kim HJ, Cho SJ, Baek JH. Comparison of thermal ablation and surgery for low-risk papillary thyroid microcarcinoma: a systematic review and meta-analysis. Korean J Radiol 2021;22:1730–1741.

Kang TW, Rhim H, Lee MW, Kim YS, Choi D, Lim HK. Terminology and reporting criteria for radiofrequency ablation of tumors in the scientific literature: systematic review of compliance with reporting standards. Korean J Radiol 2014;15:95–107.

Kim G, Cho YZ, Baik SK, Kim MY, Hong WK, Kwon SO. The accuracy of ultrasonography for the evaluation of portal hypertension in patients with cirrhosis: a systematic review. Korean J Radiol 2015;16:314–324.

Ma LL, Wang YY, Yang ZH, Huang D, Weng H, Zeng XT. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: what are they and which is better? Mil Med Res 2020;7:7.

Efthimiou O. Practical guide to the meta-analysis of rare events. Evid Based Ment Health 2018;21:72–76.

Thorlund K, Wetterslev J, Awad T, Thabane L, Gluud C. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses. Res Synth Methods 2011;2:238–253.

Bradburn MJ, Deeks JJ, Berlin JA, Russell Localio A. Much ado about nothing: a comparison of the performance of meta-analytical methods with rare events. Stat Med 2007;26:53–77.

Stijnen T, Hamza TH, Ozdemir P. Random effects meta-analysis of event outcome in the framework of the generalized linear mixed model with applications in sparse data. Stat Med 2010;29:3046–3067.

López-López JA, Page MJ, Lipsey MW, Higgins JPT. Dealing with effect size multiplicity in systematic reviews and meta-analyses. Res Synth Methods 2018;9:336–351.

McGauran N, Wieseler B, Kreis J, Schüler YB, Kölsch H, Kaiser T. Reporting bias in medical research - a narrative review. Trials 2010;11:37.

Egger M, Davey Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997;315:629–634.

Sterne JA, Sutton AJ, Ioannidis JP, Terrin N, Jones DR, Lau J, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011;343:d4002.

Peters JL, Sutton AJ, Jones DR, Abrams KR, Rushton L. Contour-enhanced meta-analysis funnel plots help distinguish publication bias from other causes of asymmetry. J Clin Epidemiol 2008;61:991–996.

Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, et al. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008;336:924–926.

Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol 2011;64:401–406.

Gopalakrishna G, Mustafa RA, Davenport C, Scholten RJ, Hyde C, Brozek J, et al. Applying Grading of Recommendations Assessment, Development and Evaluation (GRADE) to diagnostic tests was challenging but doable. J Clin Epidemiol 2014;67:760–768.

Schünemann HJ, Mustafa R, Brozek J, Santesso N, Alonso-Coello P, Guyatt G, et al. GRADE guidelines: 16. GRADE evidence to decision frameworks for tests in clinical practice and public health. J Clin Epidemiol 2016;76:89–98.

Booth A, Clarke M, Dooley G, Ghersi D, Moher D, Petticrew M, et al. The nuts and bolts of PROSPERO: an international prospective register of systematic reviews. Syst Rev 2012;1:2.

Naudet F, Sakarovitch C, Janiaud P, Cristea I, Fanelli D, Moher D, et al. Data sharing and reanalysis of randomized controlled trials in leading biomedical journals with a full data sharing policy: survey of studies published in the BMJ and PLOS Medicine. BMJ 2018;360:k400.

McGrath TA, Alabousi M, Skidmore B, Korevaar DA, Bossuyt PMM, Moher D, et al. Recommendations for reporting of systematic reviews and meta-analyses of diagnostic test accuracy: a systematic review. Syst Rev 2017;6:194.

McGrath TA, McInnes MD, Korevaar DA, Bossuyt PM. Meta-analyses of diagnostic accuracy in imaging journals: analysis of pooling techniques and their effect on summary estimates of diagnostic accuracy. Radiology 2016;281:78–85.

Takwoingi Y, Leeflang MM, Deeks JJ. Empirical evidence of the importance of comparative studies of diagnostic test accuracy. Ann Intern Med 2013;158:544–554.

McInnes MDF, Moher D, Thombs BD, McGrath TA, Bossuyt PM, Clifford T, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA 2018;319:388–396.

Owen RK, Cooper NJ, Quinn TJ, Lees R, Sutton AJ. Network meta-analysis of diagnostic test accuracy studies identifies and ranks the optimal diagnostic tests and thresholds for health care policy and decision-making. J Clin Epidemiol 2018;99:64–74.

Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: rationale, conduct, and reporting. BMJ 2010;340:c221.


Source: https://www.kjronline.org/DOIx.php?id=10.3348%2Fkjr.2021.0808
