Things I try to remember when editing medical articles


  • A few metrics to estimate how much weight to give a reference. Especially helpful when used in context (i.e., when comparing journals in the same field of study):
  1. H-index: an author-level metric that captures both the productivity and the citation impact of an author's publications (a minimal computation is sketched just after this list). [2]
  2. Impact factor: In general, I try to stick to an IF > 2 though this is not canon. These can usually be found with a simple Google search. I try to match the year of the IF to the year of the article I’m referencing.
  3. CiteScore: a metric based on Scopus data covering all indexed document types, including articles, letters, editorials and reviews. It's calculated by dividing the citations received over a four-year window by the number of documents indexed in that same window.
  4. SCImago Journal Rank: a weighted metric that accounts for both the number of citations a journal receives and the prestige of the journals those citations come from. An SJR > 1.0 is above average.
  5. Source Normalized Impact per Paper: weights citations based on the total number of citations in a subject field to provide a contextual, subject-specific metric. A SNIP over 1.0 is good.
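    For a sense of how mechanical the h-index is, here is a minimal sketch in base R (the citation counts are made up for illustration):

      # h-index: the largest h such that h papers have at least h citations each.
      h_index <- function(citations) {
        citations <- sort(citations, decreasing = TRUE)
        sum(citations >= seq_along(citations))
      }

      h_index(c(10, 8, 5, 4, 3))  # 4: four papers with at least four citations each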
  • The article should be indexed on PubMed, Google Scholar, Scopus, Web of Science, Embase or another reputable journal index with a Digital Object Identifier (DOI).
  • The evidence hierarchy — to help me see the forest for the trees.
    If I drop too far below the apex, I can make 1=2.
  • Statistical methods to detect publication bias:
    • Funnel-plot-based methods include visual examination of a funnel plot, regression and rank tests, and the nonparametric trim and fill method.
    • A small fail-safe N or an asymmetric funnel plot suggests bias due to suppressed research.
    • Begg’s rank test and Egger’s regression can be used alongside the funnel plot. Begg’s examines the correlation between effect sizes and their corresponding sampling variances; a strong correlation implies publication bias. Egger’s regresses standardized effect sizes on their precisions; in the absence of publication bias, the regression intercept is expected to be zero. The weighted regression is popular in meta-analyses because it directly links effect size to its standard error without requiring the standardization process (see the sketch just after this list).
    • Selection models use weight functions to adjust the overall effect-size estimate and are usually employed as sensitivity analyses to assess the potential impact of publication bias.
    • This might sound like esoteric egghead nonsense, but it’s really not. After working through the math a few times, it becomes much more intuitive. Like using a review checker plugin on an online store, it helped me realize how many “fake reviews” were out there. Here’s an article that goes into more detail: [3]
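    A minimal sketch of these funnel-plot-based tests, using the metafor package recommended further down. dat.bcg is an example dataset that ships with metafor; this is illustrative, not a full workflow:

      library(metafor)

      # Log risk ratios and sampling variances from the bundled BCG trials data.
      dat <- escalc(measure = "RR", ai = tpos, bi = tneg,
                    ci = cpos, di = cneg, data = dat.bcg)
      res <- rma(yi, vi, data = dat)  # random-effects summary model

      funnel(res)    # visual inspection for asymmetry
      ranktest(res)  # Begg's rank correlation test
      regtest(res)   # Egger's regression test
      trimfill(res)  # nonparametric trim-and-fill adjustment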
  • Logistical methods to detect publication bias:
    • Search for grey literature – unpublished or non-indexed trials from specific authors. If you have access to a large institutional library, many have local archives that are not indexed online. The university librarian should be able to help.
    • Look at edit patterns over time from naked IP addresses or hyper-niche editors.
    • Keep an ear out for marketing campaigns, public events, brigading from competitors and web traffic patterns. Much web data is public knowledge, though some is more difficult to access or restricted to paid services. This is a very complicated but important topic.
    • Review not just declarations at the end of the article but the authors’ online resumés, research histories, grants and paid lectures.
    • NIH-funded studies are preferred but can still have serious issues. Money, ego and prestige are insidious.
    • Retraction Watch – a list of the scientists with the most retracted papers, whether due to p-hacking, poor statistical methods or outright data fabrication. This list can be accessed here: [4]
  • Lies, damned lies, and statistics — the methods and results sections are crucial.
    • I usually start out by looking at diagrams/tables and carefully reading the captions because pictures are easier for my reptile brain to digest. Looking at p-values, confidence intervals and sample sizes gives me some sense of an idea’s sincerity.
    • I then read the first and last sentence of the introduction and the conclusion, and try to guess what the methods and results will look like. If the middle doesn’t match what I was anticipating based on the outside, either I didn’t understand something or the paper drew an erroneous conclusion. I focus on the parts that don’t match my expectations.
    • These two steps by themselves land me light years ahead of where I would have been just reading the abstract. It can be overwhelming at first, but gets easier and can be done relatively quickly with practice.
    • Bayesian analyses > frequentist inferences. The former is deductive and probabilistic; the latter inductive and binary. Combined Bayesian + frequentist analyses are better than either individually, with the truth often living where they meet.
    • Watch for overadjustment bias when conclusions emerge or disappear only after correction for confounding variables; the supposed confounder may sit on the causal path (a toy simulation appears just after this list). Cox proportional hazards models, in particular, are susceptible.
      • As an example: incorrect adjustment for blood pressure while studying the relationship between obesity and kidney failure. Obesity causes high blood pressure, which is its mechanism for destroying your kidneys. Correcting for hypertension obscures the mechanism and causes a Type II error. This method can also be inverted to cause Type I errors. Such mistakes induce bias instead of preventing it.
    • Cox models also assume (log-)linear covariate effects and falter with J- or U-shaped relationships.
    • Distribution of p-values in meta-analyses to distinguish Monte Carlo-type approaches from p-hacking.
    • The use of Hedges’ g instead of Cohen’s d is more appropriate for meta-analyses with small sample sizes. This can be calculated with Comprehensive Meta-Analysis software (or with metafor, as sketched below).
    • Summary analyses, likelihood of publication bias and heterogeneity tests can be computed using the metafor package for R. It’s a simple program with an awkward name that’s about as tricky as using TurboTax and more useful than heat vision goggles in a dark fog.
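    A sketch of that workflow: escalc()'s "SMD" measure applies the Hedges small-sample correction (i.e., Hedges' g), and rma() reports the pooled estimate alongside the Q test, I^2 and tau^2 heterogeneity statistics. dat.normand1999 is an example dataset bundled with the package:

      library(metafor)

      # Hedges' g (bias-corrected standardized mean difference) per study.
      dat <- escalc(measure = "SMD",
                    m1i = m1i, sd1i = sd1i, n1i = n1i,
                    m2i = m2i, sd2i = sd2i, n2i = n2i,
                    data = dat.normand1999)

      res <- rma(yi, vi, data = dat)  # random-effects summary model
      summary(res)                    # pooled g, Q test, I^2, tau^2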
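    And a toy simulation of the overadjustment trap above (exposure -> mediator -> outcome). All names and coefficients are invented; the point is only that "correcting" for the mediator makes a real effect vanish:

      set.seed(42)
      n       <- 5000
      obesity <- rnorm(n)
      bp      <- 0.8 * obesity + rnorm(n)  # obesity raises blood pressure
      kidney  <- 0.7 * bp + rnorm(n)       # blood pressure damages the kidneys

      coef(lm(kidney ~ obesity))["obesity"]       # ~0.56: the true, mediated effect
      coef(lm(kidney ~ obesity + bp))["obesity"]  # ~0: vanishes after "correction"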
  • If an article I want to read is behind a paywall, sometimes I try e-mailing the author a kind note to ask for a copy. This usually works, especially if I pack in a compliment or two. Researchers are like plants; they flourish with attention.
  • Images need to be CC BY or CC BY-SA. NC- and ND-licensed images can be uploaded to NC Commons.
  • Journal lists:
    • Abridged Index Medicus — a list of 114 journals that are generally gold standard. Another is the 2003 Brandon/Hill list, which includes 141 journals, though it is no longer maintained.
    • Beall’s list — a compilation of problematic journals, discussed comprehensively here: [5] It has not been updated in some time and has its limitations, but it is still a phenomenal open-source candle in the dark. Be cautious of hijacked and vanity “journals”. MDPI, Frontiers and Hindawi are some of the more frequent offenders.
    • CiteWatch — Wikipedia’s homage to Beall; an excellent resource that is updated twice monthly.
    • Cabells’ Predatory Reports — the successor to Beall’s; a comprehensive multidisciplinary update. Unfortunately provided by a paid subscription service only available to institutions, not individual researchers – [6]
    • Headbomb’s plug-in.

All heuristics are equal, but availability is more equal than others.

The One begets the Two. The Two begets the Three, and the Three begets the 10,000 things.

In a series of studies in 2005 and 2006, researchers at the University of Michigan found that when misinformed people, particularly political partisans, were exposed to corrected facts in news stories, they rarely changed their minds. In fact, they often became even more strongly set in their beliefs. Facts, they found, were not curing misinformation. Like an underpowered antibiotic, facts could actually make misinformation even stronger.

Arguing with an idiot is like playing chess with a pigeon. It’s just going to knock the pieces over, shit on the board, and then strut around like it won.

It is difficult to get a man to understand something when his salary depends on his not understanding it.

People would rather believe a simple lie than the complex truth.

The popularity of a scale rarely equates to its validity.

For the right brain


True humility is not thinking less of yourself. It is thinking of yourself less.

I never gave away anything without wishing I had kept it; nor kept it without wishing I had given it away.

When once a man is launched on such an adventure as this, he must bid farewell to hopes and fears, otherwise death or deliverance will both come too late to save his honour and his reason!

In this world, Elwood, you must be oh so smart, or oh so pleasant. Well, for years I was smart; I recommend pleasant. And you may quote me.

Frank Sinatra saved my life once. He said, “Okay, boys. That’s enough.”

If you want to go fast, go alone. If you want to go far, go together.

Always look on the bright side of life.

Please remember to enjoy every sandwich.
