Before you begin your ABM programme, your foundation (account selection) has to be right. Many ABM programmes don’t fail because the tactics are wrong; the problem is more fundamental. If your account selection is wishful rather than calculated, you are starting on the wrong foot.
A big logo gets picked, everyone rallies, content gets built, and outreach starts. Then you find out the buying path is opaque, there isn’t enough repeatable work, and nobody can confidently explain why this account is worth the effort right now.
ABM is not a quick fix for revenue and profit; it requires commitment and consistent time investment across research, outreach, content, relationship-building, and internal coordination. Without a consistent way to choose accounts, ABM becomes a mix of activity noise and hope, which is the worst outcome for pipeline, speed to revenue, and client stickiness.
This article outlines a straightforward scoring method that I have used to identify the accounts most likely to succeed in an ABM programme, based on recurring/upsell/cross-sell revenue potential, speed to value, and realistic expansion runway. It’s designed to be quick enough to use in the real world, even with limited resources, and consistent enough to avoid internal debate.
Before you start: define your revenue goals
Before you score anything, be clear on what “success” looks like commercially. Scoring accounts in isolation can lead teams to chase numbers that don’t connect to a specific outcome.
A revenue forecast is simply the result you’re trying to produce and the kind of work you need to produce it. For example, you might decide your ABM programme is meant to prioritise accounts that can reach £10k MRR within 12 months, with a credible path to £20k MRR without requiring significant additional internal investment.
ABM is a focused investment. If your programme is built to generate recurring revenue, for example, your account selection needs to reflect recurring-revenue realities: repeatable work, reliable buying paths, urgency, and expansion potential.
Step 1: Choose a simple but solid scoring model
If your scoring process is too complex it will confuse people, and if it’s too vague it invites gaming. You are aiming for something that can be done quickly, creates consistency, and stays connected to commercial outcomes.
A practical model is a 0 - 20 total score made up of five components:
Volume or Cross-sell/Upsell Potential (0 - 5)
Budget Access + Decision Path Strength (0 - 4)
Urgency / Timing (0 - 4)
Trigger-Offer Fit (0 - 4)
Expansion Runway (0 - 3)
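If it helps to see the model concretely, the five components above can be sketched as a simple capped checklist. This is a minimal illustration in Python; the component caps follow the article, but the field names and the example account are my own, hypothetical choices:

```python
# Minimal sketch of the 0-20 ABM account score described above.
# Caps follow the five components in the article; names are illustrative.

COMPONENT_CAPS = {
    "volume_potential": 5,      # Volume or Cross-sell/Upsell Potential
    "budget_decision_path": 4,  # Budget Access + Decision Path Strength
    "urgency": 4,               # Urgency / Timing
    "trigger_offer_fit": 4,     # Trigger-Offer Fit
    "expansion_runway": 3,      # Expansion Runway
}

def total_score(scores: dict) -> int:
    """Sum the five component scores, validating each against its cap."""
    total = 0
    for name, cap in COMPONENT_CAPS.items():
        value = scores.get(name, 0)
        if not 0 <= value <= cap:
            raise ValueError(f"{name} must be between 0 and {cap}, got {value}")
        total += value
    return total

# A hypothetical first-pass score for an account:
acme = {
    "volume_potential": 4,
    "budget_decision_path": 3,
    "urgency": 2,
    "trigger_offer_fit": 3,
    "expansion_runway": 2,
}
print(total_score(acme))  # 14 out of a possible 20
```

In practice this lives in a spreadsheet or CRM field rather than code; the point is that the caps themselves encode the weighting.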
The weighting is intentional: volume/cross-sell/upsell potential carries the most weight because recurring or cross-sell/upsell revenue relies on recurring or new product/service/jurisdiction work. The other factors determine whether you can land, deliver fast enough, and expand without friction.
Think of the total score as a quick check: is this account realistically capable of hitting our ICP revenue profile? It’s not perfect, but it’s far more effective than choosing accounts based on brand recognition and sentiment.
Step 2: Gather enough signals to score
When teams hear “account scoring,” they often assume it means deep research and analysis paralysis. It doesn’t. You only need enough signal to make a smart first-pass decision, and then you improve the score as you learn more.
This is where the evidence score provides a simple way to track how strong your proof is.
At the first pass, you’re usually working with public signals: things you can see quickly and consistently. LinkedIn can tell you team size and structure. Job posts can hint at upcoming initiatives, pain, and maturity. Annual reports and press releases can reveal growth activity, transformation programmes, acquisitions, regulatory exposure, and operational complexity.
Then you’ve got network intel, which is often the best predictor of whether you’ll get traction. A credible internal contact, a warm intro route, prior panel status, or past work can dramatically change the probability of success, sometimes more than the account’s “fit” on paper.
Finally, there’s discovery confirmation, where you turn assumptions into facts from which you can confidently make a decision.
The key is to keep the first pass light. Score quickly based on what you know, then use the score to decide what you need to validate next.
Step 3: Score the five components (without overthinking)
Volume/Cross-sell/Upsell Potential (0 - 5)
Start with the most important question: is there likely to be enough repeatable/new work to justify recurring revenue or enough upsell/cross-sell opportunity?
A low score here usually means the work is sporadic, ad hoc, and unpredictable. Even if the company is large, that doesn’t automatically mean volume/opportunity in the area you sell. A mid-range score often suggests a few teams with steady needs, or moderate complexity spread across regions. A high score indicates environments with ongoing operational churn: regulated industries, data-heavy organisations, multi-entity structures, many vendors, procurement involvement, and the kind of compliance pressure that keeps work flowing.
If you’re aiming for something like £10k MRR or an average revenue per entity increase, this helps you sanity-check the idea of recurring or upsell/cross-sell work - can this account generate enough steady demand rather than isolated one-offs?
Budget Access + Decision Path Strength (0 - 4)
Next, look at how realistically you can get decisions made.
A low score means you don’t know who decides, procurement is unknown, and you have no route in to the people who can approve spend. A mid score suggests you have a route to stakeholders, and you can start to map the buying path even if it’s not fully clear. A high score means you can identify the decision owner/s, you understand the procurement route, and you have a credible way in through a warm intro, panel status, or existing relationship.
This factor matters because ABM without decision-path clarity can produce lots of “activity” without moving the opportunity forward.
Urgency / Timing (0 - 4)
Ask: is there a reason to act now?
ABM needs accounts that have momentum triggers and real pressure. Low urgency is steady-state: no deadlines, no pain, and no reason to allocate budget because there is no clear challenge. Medium urgency suggests some triggers: maybe hiring, a new initiative, or mild dissatisfaction. High urgency usually shows up when there’s a time-bound driver such as M&A, restructuring, a transformation programme, regulatory change, or commercial/competitor friction.
Trigger-Offer Fit (0 - 4)
Now ask how well your offer connects to a pain you can fix quickly.
If the fit is vague, you’ll struggle to get messaging right and fall back on generic content, and the account will stay “interested” without moving. A mid score suggests there’s a clear problem you can solve and a plausible early win. A top score means your offer maps clearly to their operational language, and you can describe outcomes in terms they actually care about.
You want to be able to create an offer that gets a foot in the door, proves value fast, and creates internal momentum.
Expansion Runway (0 - 3)
Finally, consider whether there’s room to grow beyond the initial scope.
A low score implies a narrow team or single use case. You can still win, but the growth opportunity beyond that is low. A medium score suggests multiple stakeholders, regions, or adjacent product types. A top score indicates multiple business units, repeatable or new work types, and an ongoing need where expansion is a logical next step rather than a brand-new sale each time.
Step 4: Keep your scorecard “honest”
First, assign an Evidence Level to the account. Keep it simple: Level 1 is based on public information, Level 2 includes credible network intel or relationship proof, and Level 3 is validated through discovery.
Second, assign Confidence (High/Medium/Low). High confidence means you’ve validated key assumptions in conversation and you can see a path to spend. Medium confidence means you have strong public signals and a decent route in. Low confidence means you’re mostly guessing.
These two fields guard against the most common scoring mistake, which is treating an optimistic first-pass score as if it’s truth. A high score with low confidence is not “wrong.” It’s simply a signal that you need further validation before you invest heavily.
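To make the interaction between score, Evidence Level, and Confidence tangible, here is a hedged sketch of a next-action rule. The labels come from this article, but the thresholds and the decision logic are illustrative assumptions, not a prescribed workflow:

```python
# Sketch: use Evidence Level (1-3) and Confidence to decide the next
# move for a scored account. Thresholds are illustrative, not definitive.

def next_action(score: int, evidence_level: int, confidence: str) -> str:
    """Suggest a next step given a 0-20 score plus evidence/confidence."""
    if score >= 14 and evidence_level >= 2 and confidence == "high":
        return "mobilise"       # strong and validated: start ABM motions
    if score >= 14:
        return "validate"       # strong but mostly guesswork: confirm first
    if score >= 8:
        return "nurture"        # mid score: lighter-touch engagement
    return "deprioritise"       # low score: park for now

print(next_action(16, 1, "low"))   # validate: high score, weak evidence
print(next_action(16, 3, "high"))  # mobilise
```

The useful property is that a high score alone never triggers heavy investment; evidence and confidence have to catch up first.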
Step 5: Can you service if you win it? (uncomfortable but important!)
ABM account selection is also about your ability to execute.
A perfect Tier 1 account becomes a poor ABM choice if you don’t have the people, time, or delivery bandwidth to follow through. After scoring, do a quick capacity review. Ask how many high-touch accounts you can genuinely service this quarter without lowering quality. Ask who owns the core motions: research, messaging, content, outreach, follow-up, sales enablement, and delivery coordination.
If naming owners is a challenge, you may not have the capacity and need to be realistic in your approach. A wise colleague once told me: undersell and over-deliver, don’t oversell and under-deliver. Coincidentally, it’s a strategy Apple follows too!
Step 6: Use the score to tier accounts into ABM strength
The point of scoring is to determine how much ABM effort an account deserves: strategic (1:1), scale (1:few), or programmatic (1:many).
A practical approach is to map accounts into tiers. High-scoring accounts with stronger evidence and confidence are candidates for 1:1 ABM. Mid-range accounts often suit 1:few clusters where you can share a core play and personalise around specific triggers and roles. Lower-scoring accounts might still be relevant, but they belong in 1:many campaigns or a lighter engagement approach.
This protects you from applying Tier 1 effort to too many accounts and then wondering why everything feels hard.
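As a sketch, the tier mapping might look like the following. The cut-offs are illustrative assumptions; calibrate them against your own scoring history rather than treating them as fixed:

```python
# Sketch: map a total score (0-20) plus confidence to an ABM tier.
# Thresholds are illustrative and should be calibrated to your own data.

def abm_tier(score: int, confidence: str) -> str:
    if score >= 15 and confidence == "high":
        return "1:1"     # strategic, full high-touch treatment
    if score >= 10:
        return "1:few"   # clustered plays, personalised around triggers
    return "1:many"      # programmatic, lighter engagement

print(abm_tier(17, "high"))  # 1:1
print(abm_tier(17, "low"))   # 1:few, strong score but unvalidated
print(abm_tier(7, "high"))   # 1:many
```

Note the deliberate choice: an unvalidated high scorer drops to 1:few until confidence improves, which is exactly the discipline Step 4 is there to enforce.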
Step 7: Re-score after discovery (scores are designed to evolve)
A scorecard is not a once-and-done exercise. It’s a living number, and its job is to improve.
After your first meaningful discovery call, revisit the score. This is often where the biggest shifts happen: volume assumptions get corrected, decision paths become clearer (or more complicated), urgency becomes real (or not!), and trigger-offer fit either sharpens or turns out to be misaligned.
A simple cadence works well: a first-pass score before outreach, a second score after first discovery, and a third score once procurement and scope are clearer. No need to over-engineer; just keep it reflective of current status.
Step 8: Calibrate scoring so it stays consistent
If you have more than one person scoring accounts, you’ll see variance. That’s to be expected: relationships and opinions vary from different viewpoints, which is why shared interpretation becomes important.
The easiest solution is a short calibration call to compare scores and agree what a “4” really means in your context. Document definitions from the start to reduce debate and increase consistency.
Step 9: Use the scorecard to make decisions
Your scorecard should sit in (or behind) your CRM with the current ABM score surfaced at the Account or ABM intelligence tab level.
It should be used to decide which accounts enter ABM now, what tier they sit in, what needs to be validated next, and what the next action is. If an account has a strong score but weak evidence, your next action is not to build content; it is to validate the assumptions quickly. If an account has a solid score and strong confidence, your next action is to mobilise and move via your designed outbound/inbound tactics.
For most businesses, ABM success comes from less is more: fewer accounts and clearer actions, with tighter execution leading to better conversion.
For a practical way to pick the right clients for your ABM programme, download our ABM Account Scoring template.
We’ve built a practical ABM Account Scoring Scorecard template based on the five factors described, including Evidence Level and Confidence fields. It’s designed to help you prioritise accounts based on commercial reality and team capacity, so your ABM programme is focused, winnable, and repeatable. Let us know if you'd like an introduction to ABM agencies that we'd recommend.