The Stakes of Agency Selection
Agency selection represents one of the highest-consequence decisions SEO stakeholders make. The right partnership compounds value over years. The wrong partnership wastes budget, creates opportunity cost, and may inflict lasting damage through misguided recommendations.
The challenge: genuinely excellent agencies and genuinely incompetent agencies often appear similar during the selection process. Both present compelling credentials, reference satisfied clients, and promise results. Surface-level evaluation cannot distinguish capability, leaving organizations vulnerable to selection errors.
Rigorous selection processes systematically surface capability differences that superficial evaluation misses. The investment in thorough agency vetting returns dividends across the engagement duration.
Pre-RFP Preparation
Effective RFP processes begin before writing the RFP document:
Internal alignment ensures stakeholders agree on objectives, constraints, and evaluation criteria before engaging agencies. Misalignment surfacing during agency selection creates confusion and compromises decisions. Key alignment questions include:
What outcomes does the organization seek from SEO?
What budget range is realistic?
What timeline expectations exist?
Who holds decision authority, and who provides input?
What deal-breakers or non-negotiables exist?
Current state documentation provides agencies the context they need for meaningful proposals. Documentation should cover:
Historical organic performance (traffic, rankings, conversions)
Previous SEO initiatives and their outcomes
Known technical issues or limitations
Competitive landscape understanding
Organizational structure and stakeholder map
Tech stack and implementation constraints
Candidate identification determines which agencies receive RFPs. Sending RFPs too broadly wastes time on obviously poor fits. Sending too narrowly limits options. Pre-qualification through preliminary research identifies appropriate candidates.
Research sources include industry reputation assessment (conference speakers, publication contributors, award recipients), referral solicitation from professional networks, and shortlist recommendations from third-party consultants or industry analysts.
RFP Document Structure
The RFP document should enable agencies to demonstrate capability while providing a consistent basis for evaluation:
Company and project overview section provides context:
Organization description, industry, and market position
Website scope (pages, domains, platforms, technologies)
Target audiences and business model
Current SEO status and historical performance
Known challenges and opportunities
Objectives and success criteria section specifies desired outcomes:
Quantitative goals (traffic, rankings, revenue)
Qualitative goals (brand visibility, thought leadership)
Timeline expectations
How success will be measured
Scope of work section details expected services:
Which SEO disciplines are in scope (technical, content, link building, local)
Which services are explicitly out of scope
Expected deliverables and cadences
Integration requirements with other marketing functions
Requirements and constraints section addresses logistics:
Budget range or ceiling
Contract duration expectations
Geographic or team composition requirements
Tool or methodology requirements
Reporting format and frequency expectations
Proposal instructions section guides agency responses:
Submission format and deadline
Response length limits or structure requirements
Questions for agency response
Contact for clarifying questions
Evaluation criteria section establishes assessment framework:
What factors will inform selection
How different factors will be weighted
What the selection timeline looks like
Questions That Reveal Capability
Generic RFP questions generate generic responses. Specific questions surface meaningful capability differences:
Strategy questions reveal thinking depth:
“Based on the limited information provided, what three SEO opportunities would you prioritize investigating? Why?”
Strong answers demonstrate analytical thinking and hypothesis formation from incomplete data. Weak answers make unsupported claims or deflect to “we need to audit first.”
“How would you approach keyword strategy differently for our business model compared to a competitor with [different model]?”
Strong answers show strategic adaptation to business context. Weak answers apply generic methodology regardless of business type.
Technical questions verify implementation capability:
“Describe your approach to JavaScript rendering issues. What would you need to know about our tech stack to assess risk?”
Strong answers demonstrate current technical understanding and systematic diagnostic thinking. Weak answers show outdated knowledge or superficial understanding.
“How do you diagnose and prioritize Core Web Vitals improvements?”
Strong answers reveal hands-on experience with specific metrics and optimization techniques. Weak answers recite metric definitions without practical insight.
Process questions illuminate working style:
“Walk us through your typical first 90 days with a new client of our size and complexity.”
Strong answers show structured onboarding, discovery processes, and realistic timeline expectations. Weak answers jump immediately to tactics without discovery.
“How do you handle situations where your SEO recommendations conflict with product or engineering priorities?”
Strong answers demonstrate stakeholder management skill and collaborative problem-solving. Weak answers suggest either capitulation or confrontation.
Results questions verify track record:
“Describe a specific client situation where your recommended approach failed. What happened and what did you learn?”
Strong answers demonstrate intellectual honesty and learning orientation. Weak answers claim no failures or shift blame externally.
“For a client similar to us, what results would you realistically expect in months 3, 6, and 12?”
Strong answers provide realistic projections with appropriate uncertainty acknowledgment. Weak answers promise outcomes without caveats.
Scoring and Evaluation Frameworks
Consistent scoring enables objective comparison across candidates:
Weighted criteria scoring assigns importance to evaluation factors:
Relevant experience (25%): demonstrated success with similar organizations
Strategic approach (20%): quality of proposed methodology and thinking
Team capability (15%): qualifications and availability of proposed team
Cultural fit (15%): working style alignment and communication quality
Pricing (15%): value for proposed scope and fee structure
References (10%): third-party validation of claimed capability
Rubric development for each criterion ensures consistent evaluation:
5 – Exceptional: significantly exceeds expectations
4 – Strong: exceeds expectations in meaningful ways
3 – Adequate: meets expectations satisfactorily
2 – Weak: partially meets expectations with concerns
1 – Unacceptable: fails to meet minimum expectations
Multiple evaluator consensus prevents individual bias from driving decisions. Three to five evaluators scoring independently with subsequent consensus discussion provides balanced assessment.
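The weighted-criteria scoring and evaluator averaging described above reduce to simple arithmetic. A minimal sketch follows; the weights mirror the percentages listed earlier, while the criterion names, function names, and sample scores are illustrative assumptions, not a prescribed tool.

```python
# Sketch of weighted-criteria scoring with multi-evaluator averaging.
# Weights mirror the percentages above; rubric scores use the 1-5 scale.

WEIGHTS = {
    "relevant_experience": 0.25,
    "strategic_approach": 0.20,
    "team_capability": 0.15,
    "cultural_fit": 0.15,
    "pricing": 0.15,
    "references": 0.10,
}

def weighted_score(rubric_scores: dict[str, float]) -> float:
    """Combine one evaluator's per-criterion rubric scores into a weighted total."""
    return sum(WEIGHTS[criterion] * rubric_scores[criterion] for criterion in WEIGHTS)

def consensus_score(evaluators: list[dict[str, float]]) -> float:
    """Average independently scored evaluations (three to five evaluators recommended)."""
    return sum(weighted_score(scores) for scores in evaluators) / len(evaluators)

# Hypothetical example: one evaluator's scores for a candidate agency.
agency_a = {
    "relevant_experience": 4,
    "strategic_approach": 5,
    "team_capability": 3,
    "cultural_fit": 4,
    "pricing": 3,
    "references": 4,
}
print(f"{weighted_score(agency_a):.2f}")  # 3.90
```

Scoring independently first and averaging afterward, rather than scoring by committee, is what prevents a single vocal evaluator from anchoring the group's numbers.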
Documentation discipline records evaluation rationale for later reference. If engagement subsequently fails, evaluation documentation informs post-mortem analysis.
Reference Checks That Matter
Reference checks verify agency claims against third-party experience:
Reference selection scrutiny recognizes agencies provide favorable references. Request references from specific categories: similar-sized clients, clients in similar industries, long-tenure clients, and if possible, clients who chose not to renew.
Structured reference conversations ensure consistent information gathering:
“On a scale of 1-10, how would you rate [agency’s] work? What would make it a 10?”
“What aspects of working with [agency] would you change if you could?”
“Has [agency’s] recommended approach ever caused problems for your organization?”
“If you were selecting an SEO agency today, would you choose [agency] again?”
Beyond provided references, independent research uncovers unsolicited perspectives. LinkedIn connections at agency clients, industry network inquiries, and case study verification provide additional data points.
Red Flags During Selection
Certain patterns predict engagement problems:
Results promises without caveats indicate either dishonesty or inexperience. Reputable agencies cannot guarantee rankings because they do not control algorithm changes or competitor actions.
Vague methodology descriptions suggest lack of systematic approach. Agencies should clearly articulate how they work even if specific tactics vary by client.
Unclear pricing structures with hidden fees or extensive exclusions foreshadow fee disputes later in the engagement.
High team turnover during the selection process (different people at each meeting) suggests unstable organization or bait-and-switch staffing.
Unwillingness to discuss failures demonstrates lack of intellectual honesty. Every experienced agency has encountered unsuccessful engagements.
Aggressive sales tactics or pressure for a rapid decision suggest desperation or poor pipeline management.
Generic proposals that do not address the specific situation indicate copy-paste approach or insufficient interest in the engagement.
Negotiation and Contracting
After selection, negotiation and contracting formalize the relationship:
Scope negotiation confirms mutual understanding of included and excluded work. Written scope prevents mid-engagement disputes.
Pricing structure negotiation may involve retainer versus project fees, minimum commitments, rate escalation provisions, and additional work pricing.
Performance provisions may include success-based compensation, minimum performance thresholds, or termination rights tied to results.
Term negotiation addresses duration, renewal provisions, and termination rights. Initial terms of 6-12 months with renewal options balance commitment against flexibility.
Key clauses requiring attention:
Termination provisions: notice period, cause definition, work product disposition
Non-compete or exclusivity provisions: restrictions on competitor work
Intellectual property: who owns work product, methodologies, and data
Confidentiality: protection of sensitive information
Liability limitations: responsibility for recommendation consequences
Legal review of proposed contracts protects organizational interests. Standard agency contracts favor the agency; negotiated modifications restore the balance.
Onboarding for Engagement Success
Selection success requires effective onboarding:
Kickoff meeting aligns agency team with internal stakeholders, confirms scope understanding, and establishes relationship dynamics.
Access provisioning provides agency with necessary tools and data: analytics, Search Console, rank tracking, CMS, and relevant internal systems.
Stakeholder introduction connects agency with cross-functional partners they will work with: engineering, content, product, design.
Communication protocols establish regular meeting cadence, reporting format, and day-to-day communication channels.
Success metric confirmation ensures agency and internal stakeholders share outcome expectations and measurement approaches.
Evaluation Throughout Engagement
Agency selection does not end at contract signature. Ongoing evaluation informs continuation decisions:
Quarterly reviews assess performance against objectives, relationship quality, and value delivery.
Annual evaluation considers renewal versus transition, scope adjustment, and rate renegotiation.
Market comparison periodically checks whether the current engagement represents competitive value. Alternative options should remain visible even during successful engagements.
The selection investment pays dividends proportional to engagement duration. Organizations that rush selection frequently churn agencies, compounding costs and disrupting strategy continuity. Those investing in rigorous selection build partnerships that compound value over time.