How to Spot Good Research (and Avoid the Bad Ones) – Part 1

With so much research out there—from academic journals to preprints and blog summaries—it can be hard to tell what’s solid science and what’s… well, not. Whether you’re a student, journalist, policymaker, or just curious, knowing how to quickly assess the quality of a research paper is a valuable skill.

Here’s a practical guide with simple checkpoints and tips to help you differentiate good research from sloppy or misleading work.


1. Check if the Numbers Add Up

One of the easiest ways to spot sloppiness:

  • Do the totals in tables and charts make sense?
  • Are percentages consistent, and do they add up logically (e.g., to 100%)?
  • Are sample sizes consistent throughout the paper?

Quick trick: If the authors can’t get basic arithmetic right, that’s a red flag — it might signal carelessness that affects the rest of the study.
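The arithmetic checks above are easy to automate. Here is a minimal sketch (all numbers are made up for illustration) that verifies whether a results table's subgroup counts match the stated total and whether its reported percentages are internally consistent:

```python
# Sanity-check a (hypothetical) results table: do the subgroup counts
# match the stated total, and do the percentages sum to ~100%?
rows = [
    # (group, n, reported_percent) -- illustrative values only
    ("Responders",     58, 48.3),
    ("Non-responders", 41, 34.2),
    ("Dropped out",    21, 17.5),
]
stated_total = 120

n_total = sum(n for _, n, _ in rows)
pct_total = sum(p for _, _, p in rows)

print(f"Counts sum to {n_total} (paper says {stated_total})")
print(f"Percentages sum to {pct_total:.1f}%")

# Recompute each percentage from the raw counts and flag discrepancies
# beyond ordinary rounding to one decimal place.
for group, n, p in rows:
    recomputed = 100 * n / stated_total
    if abs(recomputed - p) > 0.1:
        print(f"Mismatch in '{group}': reported {p}%, recomputed {recomputed:.1f}%")
```

A few lines like this, run against the numbers you transcribe from a table, catch the kind of arithmetic slip that often signals deeper carelessness.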

2. Read the Abstract — and Then the Full Method Section

The abstract gives you the overview — but the methodology reveals how rigorous the research really is:

  • Is the sample size large enough?
  • Was there a control group?
  • Are the variables clearly defined?
  • Did they mention limitations?

Red flag: Vague methods, missing details, or conclusions that overpromise on weak evidence.
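"Is the sample size large enough?" can often be sanity-checked with a back-of-envelope power calculation. The sketch below uses the standard normal-approximation formula for comparing two proportions; the response rates and thresholds are hypothetical, and a real power analysis should use a dedicated tool:

```python
# Rough sample-size estimate (normal approximation, two proportions).
# Illustrative only -- not a substitute for a proper power analysis.
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect p1 vs p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided alpha
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting a 10-point improvement (40% -> 50% response rate)
print(n_per_group(0.40, 0.50))  # roughly 385 per arm under these assumptions
```

If a paper claims to detect a small effect with a few dozen participants per arm, a quick estimate like this tells you the study was likely underpowered.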

3. Look for Reproducibility

Science should be reproducible. A good study:

  • Describes the methods and tools in enough detail for replication
  • Shares datasets or mentions where to access them
  • Cites prior work it builds upon

Pro tip: Some journals now include a “reproducibility badge” or “open data” section — look for it.

4. Check the References

  • Are the sources recent and relevant?
  • Do they cite peer-reviewed papers or rely heavily on outdated or non-academic sources?
  • Do the citations support the claims being made?

Tip: A cluttered or unbalanced reference list (e.g. all self-citations or missing major studies in the field) can signal bias or poor scholarship.

5. Be Wary of Cherry-Picked Data

Sometimes, authors selectively report only the results that support their hypothesis.

  • Look for whether all outcomes are reported
  • Check for statistical significance vs. practical relevance

Tool tip: Use PubPeer to see if the paper has been discussed or flagged by other scientists.
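The "significance vs. relevance" distinction is worth seeing numerically. In this sketch (with hypothetical summary statistics for a blood-pressure study), a clinically trivial difference still produces a small p-value simply because the sample is large:

```python
# Statistical significance is not practical relevance: with enough
# participants, even a trivial difference yields p < 0.05.
from math import sqrt
from statistics import NormalDist

def two_sample_z(mean1, mean2, sd, n):
    """z-test for two equal-sized groups with a common SD (illustrative)."""
    se = sd * sqrt(2 / n)
    z = (mean1 - mean2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# A 1.5 mmHg reduction (SD 15 mmHg) is clinically negligible, yet with
# 2,000 patients per arm it comes out "statistically significant":
z, p = two_sample_z(140.0, 138.5, sd=15.0, n=2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p is well below 0.05
```

So whenever a paper leads with a p-value, look for the effect size and confidence interval alongside it before accepting the headline claim.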

6. Look for AI-Generated or Low-Effort Work

In the age of ChatGPT and AI summarizers, some papers read as too perfect, or suspiciously generic. Warning signs include:

  • Repetitive phrasing
  • Lack of deep discussion or insight
  • No evidence of real-world data collection

Tip: Google sections of the text to check for plagiarism or recycled content.
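Repetitive phrasing in particular is easy to screen for. Here is a toy heuristic (not a plagiarism detector) that counts word 5-grams appearing more than once in a passage; heavy repetition can hint at templated or low-effort writing:

```python
# Crude repetition check: find word 5-grams that occur more than once.
# A toy sketch for screening, not a real plagiarism or AI detector.
from collections import Counter

def repeated_ngrams(text, n=5):
    words = text.lower().split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return {g: c for g, c in Counter(grams).items() if c > 1}

sample = (
    "the results clearly demonstrate the importance of the findings and "
    "the results clearly demonstrate the importance of further research"
)
print(repeated_ngrams(sample))  # repeated 5-grams and their counts
```

For suspected recycled content, pasting a few distinctive sentences into a search engine remains the quickest manual check.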

7. Author and Journal Credibility

  • Authors: Do they have prior work in this field? Are they affiliated with a reputable institution?
  • Journals: Is the paper published in a peer-reviewed journal or a questionable “pay-to-publish” outlet?

Tool tip: Use the Directory of Open Access Journals (DOAJ) or SCImago Journal Rank to check journal quality.


🎓 Trust but Verify

At Briefio, we’re all about making research digestible and trustworthy. But even a well-summarized paper is only as good as the original source. With a few simple habits, you can avoid being misled and make sure you’re leaning on science that holds up under scrutiny.

Stay curious, and stay skeptical.
