Factor 2 Is Where Your Grant Actually Dies. Here Is How to Fix It.

Most investigators spend the majority of their energy on the significance of their research. They craft the opening of their Specific Aims page. They build the argument for why their question matters. And then they get to the approach section and something shifts. The writing gets denser. The logic moves faster. The assumptions multiply. And the reviewer starts losing confidence without being able to name exactly why.

Under the NIH 2025 Simplified Review Framework, this is where your score lives. Factor 2 — Rigor and Feasibility — encompasses what used to be scored as Approach, and it now also incorporates elements previously evaluated under Additional Review Criteria, including inclusion plans, study timelines for clinical trials, and plans for valid design and analysis. It is scored 1 to 9, and in my experience as a reviewer, it is the factor that most frequently drives the final impact score.

Factor 1 gets you in the door. It makes the reviewer believe the question is worth asking. But Factor 2 is where they decide whether they believe you can answer it. And that belief lives or dies in the details of your approach section.

Here is the single most common reason I see grants fail at Factor 2, and it is not what most people think.

But first, you need to understand something about how the scores actually work — because it changes the stakes of everything I am about to tell you.

The Scoring Reality Most Investigators Do Not Understand

Under the simplified framework, Factor 1 and Factor 2 are each scored 1 to 9. The overall impact score reflects the reviewer’s holistic judgment, but there is a practical reality that governs how that judgment plays out: your overall impact score can never be better than your Factor 1 score.

Read that again. It matters more than almost anything else I can tell you about the new framework.

Factor 1 — Importance of the Research — sets the ceiling for your application. If your Factor 1 is a 4, your overall impact score cannot be a 3, a 2, or a 1. It does not matter how elegant your study design is. It does not matter how rigorous your analytic plan is. It does not matter how perfect your preliminary data are. A flawless approach to a question the reviewer does not believe is sufficiently important or innovative will never score better than the Factor 1 score allows.

Factor 1 is the ceiling. Factor 2 determines whether you actually reach it.

And here is where it gets painful. While a strong Factor 2 cannot lift you above your Factor 1, a weak Factor 2 absolutely can drag you below it. If your Factor 1 is a 2 — a compelling, important, innovative question — but your Factor 2 is a 5, your overall impact score will land somewhere in that weaker range. All that significance, all that importance, wasted because the approach did not hold.

This means the two factors are not independent bets. Factor 1 defines the best possible outcome. Factor 2 determines whether you get there or fall short. You need both. But you need Factor 1 first, because without it, nothing else matters. And once you have it, Factor 2 becomes the sole determinant of whether that potential is realized.

Which brings me to the real problem. The place where applications with strong Factor 1 scores go to die.

The Assumption Problem

The most common thing that tanks a grant at Factor 2 is not a bad study design. It is not a flawed power calculation or a missing control group. It is something much more subtle and much more pervasive.

It is embedded assumptions.

The investigator makes a methodological choice — a study design, a recruitment strategy, a measurement tool, an analytic approach — and moves on to the next sentence without explaining why. Not because they are careless. Because the rationale is so obvious to them that it does not occur to them to state it. They live inside this work every day. The logic connecting one decision to the next is as natural to them as breathing. So they assume the reviewer sees it too.

The reviewer does not see it.

The reviewer is encountering your logic for the first time. They do not share your mental model. They do not know what you considered and rejected before arriving at this design. They do not know which pilot data informed which choice. They do not know why you selected this measure over three other validated options. All they know is what is on the page. And when the page asks them to accept a decision without evidence, without a citation, without a link to your prior work — they do not accept it. They flag it. And flagged decisions become weaknesses in the critique.

One embedded assumption is a minor concern. Three or four in sequence, and the reviewer's confidence starts to erode. By the time they reach your analytic plan, they have accumulated enough unanswered questions that the entire approach feels uncertain. Not because any single choice was wrong. Because no single choice was defended.

What the Reviewer Actually Experiences

I want you to understand this from the reviewer’s side, because it changes how you write.

When I am reading an approach section and I encounter a methodological choice without rationale, something happens that is more felt than articulated. I do not stop and write in my critique, "The applicant failed to justify this decision." Not at first. What happens first is that I feel less steady. I was following the logic of the application, and suddenly the ground is less firm. I do not know why this choice was made. Maybe it is a good choice. Maybe it is the only reasonable choice. But I do not know, because the applicant did not tell me.

And here is the problem: that feeling accumulates. Each unsupported choice adds a small weight. By the time I finish reading, I may not remember which specific decisions lacked rationale. But I have a general sense that the approach is underdeveloped, that the investigator has not thought things through, that feasibility is uncertain. And that general sense becomes my Factor 2 score.

Now imagine I have to present this application in study section. Someone asks me why the applicant chose a cluster-randomized design. I look at the page. It is not there. Someone asks why they selected a 12-month follow-up rather than 6 or 18 months. Not there. Someone asks about the choice of primary outcome measure. Not there. Each question I cannot answer is a moment where the room’s confidence in the application drops. And I am the one standing there with nothing to say.

That is what embedded assumptions do. They do not just weaken the written critique. They disarm your advocate in the room.

What Works: The Clear Arc

The approach sections that hold up in review — the ones where my confidence builds rather than erodes — have a specific quality. They build on the investigator’s own work in a way that makes every choice feel inevitable.

Here is what I mean. The strongest approach sections read like this: "We found X in our preliminary work. X told us Y about the problem. Y led us to conclude that the most effective approach would be Z. We chose Z specifically because our data showed it addresses the mechanism we identified. We will measure the outcome using this tool, which we validated in our pilot study and which demonstrated adequate sensitivity in this population."

Every decision points back to something. A finding from their preliminary data. A result from their pilot. A limitation they identified in prior work. A gap they discovered in the existing literature. The reviewer never has to wonder where a choice came from, because the trail is there on the page.

This is what I call the clear arc. It is not just an approach section. It is a lineage. The investigator shows you the path they walked to get here, and each step along that path makes the next step feel like the only logical option. By the time the reviewer finishes reading, they do not just understand what the investigator is proposing. They understand why it could not be any other way.

That is the application a reviewer can champion. Not because the science is necessarily more innovative. Because the logic is airtight and the reviewer never had to fill in a gap the applicant left open.

The Three Places Assumptions Hide

If you are writing an approach section right now, or revising one, here is where to look. These are the three places I most frequently find embedded assumptions that erode a Factor 2 score.

The study design choice.

You name your design — randomized controlled trial, mixed methods, longitudinal cohort, hybrid effectiveness-implementation — and move to the next section. But you never explain why this design and not another. Your reviewer is now wondering: did they consider alternatives? Is this the strongest design for this question, or simply the design they are most comfortable with? One or two sentences connecting your research question to the design, and explaining why it is the most appropriate choice given what your preliminary data showed, will answer the question before it is asked.

The measurement and outcomes selection.

You name your primary outcome measure and cite the original validation study. But you do not explain why this measure and not the three other validated options the reviewer knows about. Worse, you do not mention whether you have used this measure yourself, whether it has been tested in your population, or whether your preliminary data suggest it has adequate sensitivity to detect the change you are proposing. The reviewer is now constructing their own evaluation of your measurement strategy, and they may come to a different conclusion than you did. Give them your conclusion and the evidence behind it so they do not have to build their own.

The analytic plan.

You describe your statistical approach — mixed effects models, propensity score matching, thematic analysis, whatever the method — but you do not explain why this analytic approach is suited to the data structure your design will produce. You do not address how you will handle the problems the reviewer is already anticipating: missing data, clustering, multiple comparisons, potential confounders. Each of these unaddressed issues is an assumption that the reviewer has to accept on faith. And reviewers do not score on faith. They score on evidence.

A Note on What Has Changed Under the New Framework

Under the 2025 Simplified Review Framework, Factor 2 now includes elements that were previously evaluated separately under Additional Review Criteria — specifically, inclusion plans, study timelines for clinical trials, and plans for valid design and analysis. These are no longer treated as compliance checkboxes evaluated after the scored criteria. They are part of the scored factor.

It is still early. The framework is new, and reviewer behavior is still calibrating to the changes. I cannot tell you definitively how much weight these consolidated elements are carrying in the Factor 2 score relative to the core approach. But I can tell you this: they are in the room now. They are part of the conversation. And if your inclusion plan is an afterthought or your study timeline does not hold up under scrutiny, it is no longer a minor note at the bottom of the critique. It is a Factor 2 concern. Treat it accordingly.

How to Audit Your Own Approach Section

Before you submit, go through your approach section line by line and ask yourself one question at every methodological decision point: did I show why, or did I assume the reviewer already knows?

If you chose a design, did you explain why it is the right design for this question? If you selected a measure, did you connect it to your preliminary data or explain why it outperforms the alternatives in your population? If you proposed an analytic approach, did you explain how it addresses the specific data structure your design will produce? If you set a sample size, did you show the power calculation and explain the assumptions that went into it? If you chose a recruitment strategy, did you provide evidence that it will work — from your own prior studies, from collaborators’ experience, or from the literature?
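To make one of those questions concrete — "did you show the power calculation and explain the assumptions that went into it" — here is a minimal sketch of the standard normal-approximation sample-size formula for comparing two group means. This is an illustration, not part of the article's curriculum; the effect size, alpha, and power values below are assumptions you would replace with, and justify from, your own preliminary data.

```python
from statistics import NormalDist
from math import ceil


def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided, two-sample
    comparison of means, using the normal approximation:

        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2

    where d is the standardized effect size (Cohen's d).
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_power = NormalDist().inv_cdf(power)          # quantile for desired power
    return ceil(2 * ((z_alpha + z_power) / effect_size) ** 2)


# Illustrative assumptions: a medium effect (d = 0.5), alpha = 0.05, 80% power.
print(n_per_group(0.5))  # ≈ 63 participants per group
```

The point is not the arithmetic, which any statistician can reproduce. The point is that each input — the effect size your pilot suggested, the alpha level, the power target — is an assumption the reviewer will scrutinize, so each one needs a stated rationale on the page.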

Every time the answer is no, you have found an embedded assumption. And every embedded assumption is a place where your reviewer’s confidence will silently erode.

The fix is almost always one or two sentences. Not a paragraph. Not a defense. Just a clear, concise statement of why, supported by evidence. That is all it takes to turn a question mark into solid ground for your reviewer to stand on.

Remember: Factor 1 sets the ceiling. Your overall impact score can never be better than your Factor 1 score. But Factor 2 determines whether you reach that ceiling or fall far below it. A compelling question with a weak approach is a waste of a good idea. And that waste almost always comes down to the same thing — assumptions the investigator made that the reviewer could not see.

Show them. Every choice, every rationale, every link to your prior work. Do not let Factor 2 be the reason a question worth funding never gets funded.

Lisa Carter-Bawa, PhD, MPH, APRN, ANP-C, FAAN, FSBM

Creator, Lost in Translation Grantsmanship Curriculum | Soul to Soul Leadership LLC © 2026

Not sure where your grant is losing reviewers? Take the free Grant Translation Diagnostic — it takes about 10 minutes.

Try the Exercise That Started It All

The Lost in Translation Grantsmanship Curriculum teaches you how to identify and eliminate the assumptions that silently weaken your application — not through templates, but through a diagnostic framework you can apply to any grant you are writing right now. Module 1 of Lost in Translation is free. Try it yourself.

Start Module 1 for free →
