
Card Sorting and Tree Testing Research Plan Generator

Generate comprehensive research plans for card sorting and tree testing studies to optimize information architecture and navigation design. This prompt helps UX researchers and designers create structured methodologies for understanding user mental models, validating navigation structures, and improving content findability in digital products.

Your Prompt


How to Use

This prompt generates a complete research plan covering both card sorting and tree testing methodologies for information architecture optimization. Fill in the bracketed placeholders with details about your project status, content scope, target users, and available resources. The output provides a sequential workflow that typically starts with card sorting to understand mental models, then validates findings through tree testing.

Pro Tips

  • Conduct a pilot test with 2-3 participants before launching full studies to catch confusing instructions, ambiguous cards, or poorly worded tree testing tasks
  • Keep card content at the same hierarchical level—mixing article titles with category names creates confusion and skews results toward obvious groupings
  • Write tree testing tasks as realistic scenarios rather than simple findability questions to mirror how users actually approach navigation in real contexts
  • Use the same participant pool for validation tree testing after making IA changes to measure actual improvement, but recruit fresh participants for initial discovery to avoid bias
  • Combine quantitative data from unmoderated sessions with qualitative insights from follow-up interviews with 5-10 participants to understand the 'why' behind patterns
  • Document your existing IA assumptions before starting research to avoid confirmation bias when analyzing results that challenge those assumptions

Understanding the Research Sequence

Card sorting and tree testing serve complementary purposes in IA research. Card sorting helps you understand how users naturally group and categorize content, revealing their mental models. Tree testing validates whether a proposed or existing navigation structure actually works for users trying to complete specific tasks. The typical workflow involves conducting open card sorting first to generate ideas, synthesizing results into a proposed IA, then using tree testing to validate that structure before implementation. For redesigns, you might also test the existing structure alongside the new one to measure improvement.

Choosing Study Parameters

Select open card sorting when creating new IA or exploring user mental models without preconceptions, as it lets participants create their own categories. Use closed card sorting when you have established categories and want to see how users distribute content within them, or when validating category labels. For tree testing, focus on tasks that represent your users' primary goals and ensure scenarios are realistic and specific without revealing the answer. Sample sizes matter: 15-30 participants for card sorting provides reliable patterns, while 30-60 for tree testing gives statistical confidence in success metrics.
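The sample-size guidance above can be made concrete with a confidence interval. The sketch below (a minimal pure-Python example, not part of the original prompt) uses the Wilson score interval to show why 30-60 tree-testing participants give usable precision on a task success rate: at the same observed 70% success rate, doubling the sample visibly tightens the interval. The participant counts are illustrative.

```python
import math

def success_rate_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a task success rate (95% by default)."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Same observed 70% success rate at two sample sizes:
lo, hi = success_rate_ci(21, 30)
print(f"n=30: {lo:.0%} - {hi:.0%}")   # roughly 52% - 83%
lo, hi = success_rate_ci(42, 60)
print(f"n=60: {lo:.0%} - {hi:.0%}")   # roughly 57% - 80%
```

With 30 participants the interval straddles the 70% threshold discussed below, so you cannot say whether the task truly passes; 60 participants narrows it enough to prioritize with more confidence.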

Analysis and Synthesis

Card sorting analysis looks for consensus in how participants grouped items, with tools generating similarity matrices and dendrograms that visualize relationships. Focus on categories where 60-80% of participants agree, as this indicates strong mental model alignment. For tree testing, success rates below 70% indicate navigation problems requiring attention. Analyze first-click data to identify misleading labels, and examine path analysis to understand where users get lost. Look for patterns across participants rather than focusing on individual outliers, and use quantitative metrics to prioritize which issues to address first.
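The similarity matrix behind those dendrograms is simple to compute yourself: for each pair of cards, count the percentage of participants who placed both in the same group. The sketch below is a minimal pure-Python example; the card names and category labels are hypothetical, and labels deliberately differ between participants, since in an open sort they need not match.

```python
from itertools import combinations
from collections import Counter

# Hypothetical open-sort results: each dict maps card -> participant's own category label
sorts = [
    {"Pricing": "Buy", "Checkout": "Buy", "Returns": "Help", "Contact": "Help"},
    {"Pricing": "Shop", "Checkout": "Shop", "Returns": "Support", "Contact": "Support"},
    {"Pricing": "Info", "Checkout": "Buy", "Returns": "Support", "Contact": "Support"},
]

def similarity(sorts):
    """Fraction of participants who placed each card pair in the same group."""
    pairs = Counter()
    for sort in sorts:
        for a, b in combinations(sorted(sort), 2):
            if sort[a] == sort[b]:
                pairs[(a, b)] += 1
    return {pair: count / len(sorts) for pair, count in pairs.items()}

# Rank pairs by agreement; pairs above the 60-80% threshold are IA candidates
for pair, score in sorted(similarity(sorts).items(), key=lambda kv: -kv[1]):
    print(pair, f"{score:.0%}")
```

Here "Contact" and "Returns" are grouped together by all three participants (100% agreement, a strong candidate for one category), while "Checkout" and "Pricing" agree at 67%—right at the edge of the 60-80% band where a follow-up interview helps explain the split. Dedicated tools produce the same matrix plus dendrograms at scale.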

Translating Insights to Design

Transform research findings into actionable IA improvements by identifying both quick wins and strategic changes. Quick wins include relabeling categories where users consistently looked in the wrong place first, or moving frequently sought items higher in the hierarchy. Strategic improvements might involve restructuring major sections, adding new categories that emerged from card sorting, or simplifying overly complex navigation paths. Document decisions with evidence from your studies and create before-after comparisons to demonstrate improvements. Present findings to stakeholders with clear success metrics and user quotes that bring the data to life.

Related Prompts


Usability Testing Protocols & Scripts Generator

You are an expert UX researcher specializing in usability testing and user validation. Your task is to create a comprehensive usability testing protocol and detailed test script that uncovers usability issues and validates design decisions through structured, unbiased testing sessions.

Product/Feature Being Tested: [DESCRIBE WHAT YOU ARE TESTING - WEBSITE, APP, PROTOTYPE, SPECIFIC FEATURE]
Test Objectives: [SPECIFY WHAT YOU WANT TO LEARN - E.G., CAN USERS COMPLETE KEY TASKS? WHERE DO THEY GET CONFUSED? IS THE NAVIGATION INTUITIVE?]
Target Participants: [DESCRIBE YOUR TEST PARTICIPANTS - USER SEGMENTS, EXPERIENCE LEVEL, DEMOGRAPHICS]
Testing Format: [SPECIFY MODERATED IN-PERSON, MODERATED REMOTE, OR UNMODERATED REMOTE]
Prototype/Product Fidelity: [INDICATE IF TESTING LOW-FIDELITY, HIGH-FIDELITY, OR LIVE PRODUCT]
Key Tasks to Test: [LIST SPECIFIC USER TASKS OR SCENARIOS YOU WANT TO EVALUATE]
Session Length: [SPECIFY AVAILABLE TIME - TYPICALLY 30-60 MINUTES FOR MODERATED, 15-20 FOR UNMODERATED]
Number of Participants: [INDICATE PLANNED SAMPLE SIZE - TYPICALLY 5-8 FOR QUALITATIVE INSIGHTS]
Known Issues or Concerns: [MENTION ANY EXISTING PROBLEMS YOU WANT TO INVESTIGATE OR VALIDATE]
Success Metrics: [DEFINE HOW YOU WILL MEASURE EFFECTIVENESS - COMPLETION RATE, TIME ON TASK, ERROR RATE, SATISFACTION]

Based on this information, create a complete usability testing protocol and script that includes:

## 1. Testing Protocol Overview

**Study Goals and Research Questions:**
- Primary objectives for the usability test
- Specific research questions you aim to answer
- Hypotheses to validate or invalidate
- Success criteria for the test

**Methodology and Approach:**
- Testing format (moderated vs. unmoderated) with rationale
- Testing environment (lab, participant's location, remote)
- Tools and software needed (screen recording, video conferencing, testing platforms)
- Data collection methods (observation notes, recordings, surveys)

**Participant Criteria:**
- Detailed participant profile and screening criteria
- Number of participants and rationale for sample size
- Recruitment strategy and incentive structure
- Scheduling considerations

**Logistics and Setup:**
- Session duration and timing
- Equipment and materials needed
- Pre-test setup checklist
- Roles and responsibilities (moderator, note-taker, observer)

## 2. Complete Usability Test Script

### Part 1: Introduction and Warm-Up (5 minutes)

**Welcome and Introduction:**
- Moderator introduces themselves and explains their role
- Brief overview of session purpose and structure
- Emphasize that you're testing the product, not the participant
- Explain think-aloud protocol and encourage candid feedback
- Address recording and confidentiality

**Example Script:** "Thank you for participating today. My name is [Name], and I'll be guiding you through this session. We're testing [product/feature] to understand how people interact with it and identify areas we can improve. There are no right or wrong answers—we're evaluating the design, not you. Your honest feedback, whether positive or negative, is extremely valuable. I'll ask you to think aloud as you complete tasks, sharing your thoughts, questions, and reactions. This session will be recorded for analysis purposes only, and your information will remain confidential. Do you have any questions before we begin?"

**Consent and Permissions:**
- Obtain informed consent for participation and recording
- Confirm participant understands they can stop at any time
- Address any questions or concerns

### Part 2: Background Questions (5-7 minutes)

**Pre-Test Questionnaire:** Gather contextual information about the participant without biasing them toward the test:
- Experience level with similar products/services
- Frequency of use for related tools or websites
- Devices and platforms typically used
- Familiarity with the domain or industry
- Current behaviors and pain points related to the problem space

**Example Questions:**
- "How often do you [perform related activity]?"
- "What tools or apps do you currently use for [task domain]?"
- "Can you describe your typical process for [related workflow]?"
- "What frustrations, if any, have you experienced with [similar products]?"

**Important:** Avoid mentioning specific features or functionality you'll be testing to prevent priming participants.

### Part 3: Task Scenarios (30-40 minutes)

**Task Design Principles:**
- Create realistic, goal-oriented scenarios rather than step-by-step instructions
- Use neutral language that doesn't include interface terminology or hint at solutions
- Order tasks logically, considering dependencies between tasks
- Include 5-8 tasks maximum to prevent fatigue
- Mix critical tasks with secondary tasks

**For Each Task, Provide:**

**Task Scenario:** A realistic context that motivates the user action without revealing how to accomplish it.

**Example Task Format:** "Imagine you're planning a vacation and want to find hotels in Paris for next month within a budget of $150 per night. Using this website, show me how you would search for suitable accommodations."

**NOT: "Click on the search bar and enter 'Paris hotels'."** (Too prescriptive)

**Success Criteria:**
- Define what constitutes task completion
- Specify observable outcomes
- Note critical vs. non-critical errors

**Observation Points:**
- Key interactions to watch for
- Common paths vs. expected paths
- Potential confusion points
- Error recovery attempts

**Follow-Up Questions (After Each Task):**
- "How did you feel about completing that task?"
- "On a scale of 1-5, how difficult was that task?"
- "What, if anything, was confusing or frustrating?"
- "What did you expect to happen when you [specific action]?"
- "Is there anything you would change about this process?"

**Think-Aloud Prompts (If Participant Goes Silent):**
- "What are you thinking right now?"
- "What are you looking for?"
- "What do you expect this to do?"
- "Can you tell me what you're seeing here?"

**Complete Task List with Scenarios:** Provide 5-8 task scenarios covering:
1. Critical/primary tasks that align with main user goals
2. Common secondary tasks
3. Edge cases or error recovery scenarios
4. Discovery tasks (can users find X feature?)
5. Tasks testing specific concerns or hypotheses

### Part 4: Post-Test Questions (5-10 minutes)

**Overall Experience:**
- "What was your overall impression of [product/feature]?"
- "What did you like most about the experience?"
- "What frustrated you or felt difficult?"
- "How does this compare to [similar products] you've used?"
- "Would you use this product? Why or why not?"

**Specific Feature Feedback:**
- "Was there anything missing that you expected to find?"
- "Were there any features you didn't understand?"
- "What would make this more useful for you?"

**Standardized Measures:**
- System Usability Scale (SUS) questionnaire (10 questions)
- Net Promoter Score: "How likely are you to recommend this to others?" (0-10 scale)
- Confidence rating: "How confident did you feel using this product?" (1-5 scale)

### Part 5: Wrap-Up (3-5 minutes)

**Closing:**
- Thank participant for their time and valuable insights
- Explain next steps (incentive delivery, when results will be used)
- Ask if they have any final questions or comments
- Provide contact information if they have follow-up thoughts

## 3. Moderator Guidelines (For Moderated Tests)

**Do's:**
- Remain neutral and non-judgmental throughout
- Use consistent language and phrasing for all participants
- Encourage think-aloud without interrupting task flow
- Probe for clarification when participants express confusion
- Take detailed observation notes on behaviors, not just comments
- Allow participants to struggle before intervening

**Don'ts:**
- Don't lead participants toward solutions
- Don't defend the design or explain how it works
- Don't answer questions about how to complete tasks
- Don't rush participants or impose time pressure
- Don't interpret silence as understanding—prompt for thoughts
- Don't show personal reactions to feedback (positive or negative)

**Probing Techniques:**
- Use open-ended questions: "Can you tell me more about that?"
- Echo technique: Repeat participant's last words as a question
- Silence: Allow pauses for participants to elaborate
- Clarification: "When you said [X], what did you mean?"

## 4. Unmoderated Test Adaptations (If Applicable)

**Self-Guided Instructions:**
- Provide clear written instructions for each task
- Include task completion confirmations
- Add follow-up questions as embedded surveys
- Use tools with screen recording capabilities
- Simplify tasks slightly since no moderator is present to clarify

**Unmoderated Script Adjustments:**
- More detailed task descriptions
- Multiple-choice or Likert scale questions instead of open-ended
- Clear task completion indicators
- Estimated time per task

## 5. Data Collection and Analysis Framework

**Metrics to Track:**
- **Task Success Rate:** Percentage of participants who complete each task
- **Time on Task:** How long each task takes (compare to benchmarks)
- **Error Rate:** Number and types of errors per task
- **Path Analysis:** Common navigation routes vs. optimal paths
- **Satisfaction Ratings:** Post-task and overall satisfaction scores

**Observation Categories:**
- Critical usability issues (prevent task completion)
- Major issues (cause significant frustration or delay)
- Minor issues (small annoyances)
- Positive findings (what works well)

**Analysis Methods:**
- Thematic analysis of qualitative feedback
- Severity rating for identified issues
- Prioritization based on frequency and impact
- Recommendations tied to specific observations

## 6. Pilot Test Plan

**Before Full Testing:**
- Run pilot test with 1-2 colleagues not involved in design
- Validate task clarity and timing
- Test recording equipment and tools
- Refine script based on pilot feedback
- Ensure tasks are achievable with current prototype state

## 7. Reporting Framework

**Deliverable Structure:**
- Executive summary with key findings
- Methodology overview
- Participant demographics
- Task-by-task analysis with success rates and observations
- Prioritized list of usability issues with severity ratings
- Video clips or quotes illustrating key findings
- Actionable recommendations for each issue
- Next steps and follow-up testing needs

Ensure the protocol maintains consistency across all participants, eliminates bias in questions and task wording, gathers both quantitative metrics and qualitative insights, provides clear guidance for moderators and note-takers, and produces actionable findings that directly inform design improvements.


UX User Research Planning & Interview Guide Generator

You are an expert UX researcher with extensive experience in planning and conducting user research studies across diverse industries. Your task is to create a comprehensive user research plan and interview guide for the following project:

Project Context: [DESCRIBE YOUR PRODUCT, FEATURE, OR DESIGN CHALLENGE]
Target Users: [DESCRIBE YOUR TARGET AUDIENCE AND KEY USER SEGMENTS]
Research Objectives: [LIST YOUR PRIMARY RESEARCH GOALS AND QUESTIONS]
Timeline and Constraints: [SPECIFY AVAILABLE TIME, BUDGET, AND RESOURCES]

Based on this information, create a complete user research plan that includes:

1. Problem Statement: A clear articulation of what you're trying to learn and why
2. Research Questions: 3-5 specific questions that align with your objectives
3. Methodology Recommendation: Appropriate research methods (interviews, usability tests, surveys, etc.) with rationale for each selection
4. Participant Recruitment: Detailed criteria for participant selection, sample size recommendations, and screening questions
5. Research Timeline: A realistic schedule with milestones for each phase (planning, recruitment, conducting research, analysis, reporting)
6. Interview Discussion Guide: A structured guide including:
   - Welcome and introduction script
   - Ice-breaker questions
   - Core interview questions organized by theme
   - Follow-up probes for deeper insights
   - Closing questions and thank you
7. Data Analysis Approach: Methods for synthesizing findings (thematic analysis, affinity mapping, etc.)
8. Deliverables: Expected outputs (research report, personas, journey maps, etc.)

Ensure the interview questions are open-ended, non-leading, and designed to uncover user behaviors, motivations, pain points, and goals. Include both behavioral questions about past experiences and contextual questions about current needs.


User Persona Development from Research Data Generator

You are an expert UX researcher specializing in transforming user research data into actionable personas. Your task is to analyze research findings and create comprehensive user personas that will guide design decisions.

Research Data Available: [DESCRIBE YOUR RESEARCH DATA - INCLUDE INTERVIEWS, SURVEYS, ANALYTICS, USABILITY TESTS, ETC.]
Product/Project Context: [DESCRIBE THE PRODUCT, SERVICE, OR FEATURE YOU ARE DESIGNING]
Key Research Insights: [SUMMARIZE MAIN FINDINGS, PATTERNS, AND THEMES FROM YOUR RESEARCH]
Number of Personas Needed: [SPECIFY HOW MANY USER SEGMENTS YOU IDENTIFIED - TYPICALLY 3-5 PRIMARY PERSONAS]

Based on this research data, create detailed user personas that include:

1. Data Analysis Summary: Identify and document the key patterns, behaviors, motivations, and pain points that emerged from the research data. Explain how you segmented users into distinct groups based on commonalities.

2. For Each Persona, Provide:

**Basic Profile:**
- Name and relevant photo description
- Demographic information (age, occupation, location, education, tech-savviness)
- Brief biographical narrative that humanizes the persona

**Behavioral Characteristics:**
- Goals and objectives related to your product
- Motivations and values that drive their behavior
- Frustrations and pain points they experience
- Current behaviors and habits relevant to the product context
- Technology usage and preferred channels

**Context-Specific Details:**
- User scenarios showing how they interact with your product
- Direct quotes from research participants that represent this persona
- Environmental factors (where, when, how they use the product)

**Needs and Expectations:**
- What they need from your product to succeed
- Their expectations for functionality and experience
- Success metrics from their perspective

3. Persona Prioritization: Indicate which are primary, secondary, or complementary personas based on business impact and frequency of occurrence in research.

4. Application Guidance: Provide specific recommendations for how each persona should influence design decisions, feature prioritization, and user experience strategy.

Ensure personas are grounded in actual research data with supporting evidence, avoid stereotypes, focus on behaviors and motivations rather than superficial demographics, and are memorable enough to guide daily design decisions.
