Week 6: Every Choice You Did Not Explain Is a Question Your Reviewer Cannot Answer.

Field Notes from the Grant Cycle — Week 6

You have six weeks until June 5. Your approach section should be drafted or close to it. And this week I want you to do something that takes ten minutes and may lead to the most important revision you make before you submit.

Print your approach section. Pick up a highlighter. Read it straight through, and every time you encounter a methodological choice without a "because," highlight it.

Why this design? Highlighted. Why this measure? Highlighted. Why this population? Why this analytic approach? Why this sample size? Why twelve months instead of six? Highlighted, highlighted, highlighted.

When you finish, count the highlights. That number is how many questions your reviewer will have that you did not answer.

Why This Matters Now

Under the 2025 Simplified Review Framework, Factor 2 — Rigor and Feasibility — is where most applications actually lose their score. Factor 1 sets the ceiling. Factor 2 determines whether you reach it. And the most common way Factor 2 erodes is not through a single fatal flaw. It is through the accumulation of unexplained choices that slowly drain a reviewer's confidence.

I have written about this at length in my earlier post on Factor 2. But at six weeks out, you do not need the theory. You need the audit.

Here is what happens when a reviewer encounters an unexplained choice. They do not stop and write a critique immediately. What happens is more subtle. They feel slightly less certain. The ground under the logic of your application shifts. One unexplained choice is minor. Two is noticeable. By the fourth or fifth, the reviewer has a general sense that the approach is underdeveloped — even if no single choice is wrong.

And here is the part that costs you: when that reviewer stands up in study section and someone asks why you chose a cluster-randomized design, or why you selected that particular outcome measure, or why your follow-up period is twelve months — the reviewer looks at the page and the answer is not there. Every question they cannot answer is a moment where the room's confidence drops. You are not in that room. Your words are. And your words have to hold.

The Three Places to Look

You do not need to audit every sentence. Focus on three places where embedded assumptions do the most damage.

Your study design. You named it — randomized controlled trial, mixed methods, hybrid effectiveness-implementation, whatever it is. But did you explain why this design is the right one for this question? Not why it is a good design in general. Why it is the right design for what your preliminary data told you and what your research question demands. One or two sentences connecting the question to the design can answer the objection before it is raised.

Your outcome measures. You cited the validation study. But did you explain why this measure over the other validated options your reviewer knows about? Did you mention whether you have used it yourself? Whether it has been tested in your population? Whether your pilot data suggest it can detect the change you are proposing? A measure that is well-validated in general but untested in your specific context is a question waiting to be asked.

Your analytic plan. You named your statistical approach. But did you explain why this approach is suited to the data structure your design will produce? Did you address missing data, clustering, multiple comparisons? Each unaddressed issue is an assumption the reviewer has to accept on faith. And reviewers do not score on faith.

The Fix

Here is the good news: the fix for an embedded assumption is almost never a paragraph. It is one or two sentences. "We selected X because our preliminary data showed Y." "We chose this measure because it demonstrated adequate sensitivity in our pilot study with this population." "We will use mixed effects models to account for the clustering inherent in our site-based recruitment design."

That is all it takes. One sentence that shows why, connected to evidence — your own prior work, your pilot data, the literature. The reviewer reads it and the ground is solid. They move to the next paragraph with confidence intact.

The absence of that sentence is what costs you. Not because the reviewer thinks you made the wrong choice. Because they do not know why you made the choice at all. And uncertainty, accumulated across an entire approach section, becomes a Factor 2 score that drags your application below the ceiling your Factor 1 earned.

Ten Minutes This Week

Print it. Highlight it. Count the gaps. Then spend the rest of the week filling them — one or two sentences each. That is the revision that separates an approach section a reviewer trusts from an approach section a reviewer questions.

You have six weeks. The architecture of your argument is set. Now make sure every joint is visible.

Next week: the document nobody thinks of as a trust document — and why your budget justification is telling reviewers more about your project than you realize.

Lisa Carter-Bawa, PhD, MPH, APRN, ANP-C, FAAN, FSBM

Creator, Lost in Translation Grantsmanship Curriculum | Soul to Soul Leadership LLC © 2026

Not sure where your grant is losing reviewers? Take the free Grant Translation Diagnostic — it takes about 10 minutes.

Go Deeper

The Lost in Translation Grantsmanship Curriculum teaches you how to find and close the gaps that silently weaken your application. Module 1 is free.

Start Module 1 for free →
