
User Persona Development from Research Data Generator

Transform raw user research data into actionable, realistic user personas for UX design projects. This prompt guides you through synthesizing qualitative and quantitative research findings to create detailed, empathy-driven personas that represent distinct user segments and inform design decisions throughout the product lifecycle.

Your Prompt

  

How to Use

This prompt helps you synthesize user research data into well-structured personas that your team can use throughout the design process. Provide detailed information about your research findings, including both quantitative data patterns and qualitative insights from interviews or observations. The more specific your research data input, the more accurate and actionable your personas will be.

Pro Tips

  • Include direct quotes from actual research participants to make personas feel authentic and grounded in real data rather than fictional constructs
  • Specify the research methods used and sample sizes so the persona generator understands the strength and type of evidence available
  • Focus on behavioral patterns and motivations rather than just demographic information: what users do and why is more valuable than who they are
  • Provide examples of contradictions or outliers in your data to create nuanced personas that avoid oversimplification
  • Mention any existing assumptions or proto-personas you want to validate or refine based on the research data
  • Include information about the competitive landscape or alternative solutions users currently employ to understand their decision-making context

Preparing Your Research Data

Before using this prompt, organize your research findings by identifying patterns and commonalities across participants. Include both qualitative data like interview transcripts, observation notes, and user quotes, as well as quantitative data from surveys, analytics, or behavioral metrics. Document the segmentation criteria you used to group users, such as shared goals, behaviors, pain points, or usage patterns. The prompt works best when you provide concrete examples and direct quotes from real users that illustrate each segment.
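As a sketch of what organized research input might look like before you paste it into the prompt, the structure below groups findings by segment with supporting evidence. The field names (`segments`, `quotes`, `pain_points`, etc.) are illustrative assumptions, not a format the prompt requires; a spreadsheet or document organized the same way works just as well.

```python
# Hypothetical organization of research findings; all names and data are illustrative.
research_data = {
    "methods": ["semi-structured interviews", "product analytics"],
    "sample_size": 14,
    "segments": [
        {
            "name": "Occasional planner",
            "behaviors": ["books trips 1-2x per year", "compares 3+ sites"],
            "pain_points": ["hidden fees discovered at checkout"],
            "quotes": ["I never trust the first price I see."],
        },
        {
            "name": "Frequent business traveler",
            "behaviors": ["books weekly", "reuses saved preferences"],
            "pain_points": ["re-entering loyalty numbers"],
            "quotes": ["Speed matters more than price for me."],
        },
    ],
}

def summarize(data):
    """Return a one-line evidence summary per segment, handy for the prompt input."""
    return [
        f"{s['name']}: {len(s['quotes'])} quote(s), "
        f"{len(s['pain_points'])} pain point(s)"
        for s in data["segments"]
    ]
```

A quick summary like `summarize(research_data)` also makes thin evidence visible: a segment with zero quotes or pain points probably needs more research before it becomes a persona.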

Creating Evidence-Based Personas

The generated personas will be grounded in your research data rather than assumptions or stereotypes. Each persona element should trace back to actual research findings, with supporting evidence from interviews, surveys, or behavioral data. The output emphasizes behaviors, motivations, and context-specific goals rather than superficial demographic details. Personas include realistic scenarios showing how users interact with your product in their actual environment, making them practical tools for design decision-making rather than abstract profiles.

Persona Components and Structure

Each persona includes multiple layers of information. The basic profile creates a memorable, humanized character with demographic context. Behavioral characteristics reveal what drives the persona's actions and decisions, including their goals, motivations, frustrations, and current habits. Context-specific details provide scenarios and quotes that bring the persona to life and show real-world product interactions. Needs and expectations clarify what the persona requires from your product to achieve their goals, directly informing feature development and UX strategy.
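The layers described above can be sketched as a simple data structure. The field names below are illustrative, and teams often keep the same information in a document or slide template instead; the point is that every field should be fillable from actual research evidence.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    # Basic profile: a memorable, humanized character with demographic context
    name: str
    demographic_context: str
    # Behavioral characteristics: what drives the persona's actions and decisions
    goals: list[str] = field(default_factory=list)
    motivations: list[str] = field(default_factory=list)
    frustrations: list[str] = field(default_factory=list)
    current_habits: list[str] = field(default_factory=list)
    # Context-specific details: scenarios and quotes drawn from real participants
    scenarios: list[str] = field(default_factory=list)
    quotes: list[str] = field(default_factory=list)
    # Needs and expectations: what the persona requires from your product
    needs: list[str] = field(default_factory=list)

# Hypothetical example persona, partially filled in
maya = Persona(
    name="Maya, the deadline-driven freelancer",
    demographic_context="34, freelance designer, works across time zones",
    goals=["Invoice clients without interrupting design work"],
    quotes=["I lose an afternoon every month just chasing payments."],
)
```

Empty lists in an instance like `maya` double as a checklist of where the research is still thin.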

Implementing Personas in Design

The output includes guidance on prioritizing personas based on business impact and research frequency, typically resulting in 3-5 primary personas. It provides specific recommendations for how each persona should influence design decisions, such as which features to prioritize, what content to include, or how to structure information architecture. These actionable insights ensure personas become living tools that your team references regularly rather than documents created once and forgotten. Validate personas with stakeholders and iterate based on feedback to ensure team alignment.
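One simple way to rank candidates for the 3-5 primary personas is to score each on business impact and on how frequently its segment appeared in the research. The scoring function below is an illustrative heuristic, not something the prompt produces; the 1-5 scales and the multiplication are assumptions you would tune with stakeholders.

```python
def prioritize_personas(personas, top_n=3):
    """Rank persona candidates by business impact x research frequency.

    Each candidate is a dict with 1-5 'impact' and 'frequency' ratings;
    the scale and weighting are illustrative assumptions.
    """
    ranked = sorted(
        personas,
        key=lambda p: p["impact"] * p["frequency"],
        reverse=True,
    )
    return ranked[:top_n]

# Hypothetical candidates with stakeholder-assigned ratings
candidates = [
    {"name": "Power admin", "impact": 5, "frequency": 2},
    {"name": "Casual browser", "impact": 2, "frequency": 5},
    {"name": "Team lead", "impact": 4, "frequency": 4},
    {"name": "One-time visitor", "impact": 1, "frequency": 3},
]

primary = prioritize_personas(candidates, top_n=3)
```

Ties (here, "Power admin" and "Casual browser" both score 10) keep their original order, so list candidates in rough order of strategic importance before scoring.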

Related Prompts


UX User Research Planning & Interview Guide Generator

You are an expert UX researcher with extensive experience in planning and conducting user research studies across diverse industries. Your task is to create a comprehensive user research plan and interview guide for the following project:

Project Context: [DESCRIBE YOUR PRODUCT, FEATURE, OR DESIGN CHALLENGE]
Target Users: [DESCRIBE YOUR TARGET AUDIENCE AND KEY USER SEGMENTS]
Research Objectives: [LIST YOUR PRIMARY RESEARCH GOALS AND QUESTIONS]
Timeline and Constraints: [SPECIFY AVAILABLE TIME, BUDGET, AND RESOURCES]

Based on this information, create a complete user research plan that includes:

1. Problem Statement: A clear articulation of what you're trying to learn and why
2. Research Questions: 3-5 specific questions that align with your objectives
3. Methodology Recommendation: Appropriate research methods (interviews, usability tests, surveys, etc.) with rationale for each selection
4. Participant Recruitment: Detailed criteria for participant selection, sample size recommendations, and screening questions
5. Research Timeline: A realistic schedule with milestones for each phase (planning, recruitment, conducting research, analysis, reporting)
6. Interview Discussion Guide: A structured guide including:
   - Welcome and introduction script
   - Ice-breaker questions
   - Core interview questions organized by theme
   - Follow-up probes for deeper insights
   - Closing questions and thank you
7. Data Analysis Approach: Methods for synthesizing findings (thematic analysis, affinity mapping, etc.)
8. Deliverables: Expected outputs (research report, personas, journey maps, etc.)

Ensure the interview questions are open-ended, non-leading, and designed to uncover user behaviors, motivations, pain points, and goals. Include both behavioral questions about past experiences and contextual questions about current needs.


Usability Testing Protocols & Scripts Generator

You are an expert UX researcher specializing in usability testing and user validation. Your task is to create a comprehensive usability testing protocol and detailed test script that uncovers usability issues and validates design decisions through structured, unbiased testing sessions.

Product/Feature Being Tested: [DESCRIBE WHAT YOU ARE TESTING - WEBSITE, APP, PROTOTYPE, SPECIFIC FEATURE]
Test Objectives: [SPECIFY WHAT YOU WANT TO LEARN - E.G., CAN USERS COMPLETE KEY TASKS? WHERE DO THEY GET CONFUSED? IS THE NAVIGATION INTUITIVE?]
Target Participants: [DESCRIBE YOUR TEST PARTICIPANTS - USER SEGMENTS, EXPERIENCE LEVEL, DEMOGRAPHICS]
Testing Format: [SPECIFY MODERATED IN-PERSON, MODERATED REMOTE, OR UNMODERATED REMOTE]
Prototype/Product Fidelity: [INDICATE IF TESTING LOW-FIDELITY, HIGH-FIDELITY, OR LIVE PRODUCT]
Key Tasks to Test: [LIST SPECIFIC USER TASKS OR SCENARIOS YOU WANT TO EVALUATE]
Session Length: [SPECIFY AVAILABLE TIME - TYPICALLY 30-60 MINUTES FOR MODERATED, 15-20 FOR UNMODERATED]
Number of Participants: [INDICATE PLANNED SAMPLE SIZE - TYPICALLY 5-8 FOR QUALITATIVE INSIGHTS]
Known Issues or Concerns: [MENTION ANY EXISTING PROBLEMS YOU WANT TO INVESTIGATE OR VALIDATE]
Success Metrics: [DEFINE HOW YOU WILL MEASURE EFFECTIVENESS - COMPLETION RATE, TIME ON TASK, ERROR RATE, SATISFACTION]

Based on this information, create a complete usability testing protocol and script that includes:

## 1. Testing Protocol Overview

**Study Goals and Research Questions:**
- Primary objectives for the usability test
- Specific research questions you aim to answer
- Hypotheses to validate or invalidate
- Success criteria for the test

**Methodology and Approach:**
- Testing format (moderated vs. unmoderated) with rationale
- Testing environment (lab, participant's location, remote)
- Tools and software needed (screen recording, video conferencing, testing platforms)
- Data collection methods (observation notes, recordings, surveys)

**Participant Criteria:**
- Detailed participant profile and screening criteria
- Number of participants and rationale for sample size
- Recruitment strategy and incentive structure
- Scheduling considerations

**Logistics and Setup:**
- Session duration and timing
- Equipment and materials needed
- Pre-test setup checklist
- Roles and responsibilities (moderator, note-taker, observer)

## 2. Complete Usability Test Script

### Part 1: Introduction and Warm-Up (5 minutes)

**Welcome and Introduction:**
- Moderator introduces themselves and explains their role
- Brief overview of session purpose and structure
- Emphasize that you're testing the product, not the participant
- Explain think-aloud protocol and encourage candid feedback
- Address recording and confidentiality

**Example Script:**
"Thank you for participating today. My name is [Name], and I'll be guiding you through this session. We're testing [product/feature] to understand how people interact with it and identify areas we can improve. There are no right or wrong answers—we're evaluating the design, not you. Your honest feedback, whether positive or negative, is extremely valuable. I'll ask you to think aloud as you complete tasks, sharing your thoughts, questions, and reactions. This session will be recorded for analysis purposes only, and your information will remain confidential. Do you have any questions before we begin?"

**Consent and Permissions:**
- Obtain informed consent for participation and recording
- Confirm participant understands they can stop at any time
- Address any questions or concerns

### Part 2: Background Questions (5-7 minutes)

**Pre-Test Questionnaire:**
Gather contextual information about the participant without biasing them toward the test:
- Experience level with similar products/services
- Frequency of use for related tools or websites
- Devices and platforms typically used
- Familiarity with the domain or industry
- Current behaviors and pain points related to the problem space

**Example Questions:**
- "How often do you [perform related activity]?"
- "What tools or apps do you currently use for [task domain]?"
- "Can you describe your typical process for [related workflow]?"
- "What frustrations, if any, have you experienced with [similar products]?"

**Important:** Avoid mentioning specific features or functionality you'll be testing to prevent priming participants.

### Part 3: Task Scenarios (30-40 minutes)

**Task Design Principles:**
- Create realistic, goal-oriented scenarios rather than step-by-step instructions
- Use neutral language that doesn't include interface terminology or hint at solutions
- Order tasks logically, considering dependencies between tasks
- Include 5-8 tasks maximum to prevent fatigue
- Mix critical tasks with secondary tasks

**For Each Task, Provide:**

**Task Scenario:** A realistic context that motivates the user action without revealing how to accomplish it.

**Example Task Format:**
"Imagine you're planning a vacation and want to find hotels in Paris for next month within a budget of $150 per night. Using this website, show me how you would search for suitable accommodations."

**NOT: "Click on the search bar and enter 'Paris hotels'."** (Too prescriptive)

**Success Criteria:**
- Define what constitutes task completion
- Specify observable outcomes
- Note critical vs. non-critical errors

**Observation Points:**
- Key interactions to watch for
- Common paths vs. expected paths
- Potential confusion points
- Error recovery attempts

**Follow-Up Questions (After Each Task):**
- "How did you feel about completing that task?"
- "On a scale of 1-5, how difficult was that task?"
- "What, if anything, was confusing or frustrating?"
- "What did you expect to happen when you [specific action]?"
- "Is there anything you would change about this process?"

**Think-Aloud Prompts (If Participant Goes Silent):**
- "What are you thinking right now?"
- "What are you looking for?"
- "What do you expect this to do?"
- "Can you tell me what you're seeing here?"

**Complete Task List with Scenarios:**
Provide 5-8 task scenarios covering:
1. Critical/primary tasks that align with main user goals
2. Common secondary tasks
3. Edge cases or error recovery scenarios
4. Discovery tasks (can users find X feature?)
5. Tasks testing specific concerns or hypotheses

### Part 4: Post-Test Questions (5-10 minutes)

**Overall Experience:**
- "What was your overall impression of [product/feature]?"
- "What did you like most about the experience?"
- "What frustrated you or felt difficult?"
- "How does this compare to [similar products] you've used?"
- "Would you use this product? Why or why not?"

**Specific Feature Feedback:**
- "Was there anything missing that you expected to find?"
- "Were there any features you didn't understand?"
- "What would make this more useful for you?"

**Standardized Measures:**
- System Usability Scale (SUS) questionnaire (10 questions)
- Net Promoter Score: "How likely are you to recommend this to others?" (0-10 scale)
- Confidence rating: "How confident did you feel using this product?" (1-5 scale)

### Part 5: Wrap-Up (3-5 minutes)

**Closing:**
- Thank participant for their time and valuable insights
- Explain next steps (incentive delivery, when results will be used)
- Ask if they have any final questions or comments
- Provide contact information if they have follow-up thoughts

## 3. Moderator Guidelines (For Moderated Tests)

**Do's:**
- Remain neutral and non-judgmental throughout
- Use consistent language and phrasing for all participants
- Encourage think-aloud without interrupting task flow
- Probe for clarification when participants express confusion
- Take detailed observation notes on behaviors, not just comments
- Allow participants to struggle before intervening

**Don'ts:**
- Don't lead participants toward solutions
- Don't defend the design or explain how it works
- Don't answer questions about how to complete tasks
- Don't rush participants or impose time pressure
- Don't interpret silence as understanding—prompt for thoughts
- Don't show personal reactions to feedback (positive or negative)

**Probing Techniques:**
- Use open-ended questions: "Can you tell me more about that?"
- Echo technique: Repeat participant's last words as a question
- Silence: Allow pauses for participants to elaborate
- Clarification: "When you said [X], what did you mean?"

## 4. Unmoderated Test Adaptations (If Applicable)

**Self-Guided Instructions:**
- Provide clear written instructions for each task
- Include task completion confirmations
- Add follow-up questions as embedded surveys
- Use tools with screen recording capabilities
- Simplify tasks slightly since no moderator is present to clarify

**Unmoderated Script Adjustments:**
- More detailed task descriptions
- Multiple-choice or Likert scale questions instead of open-ended
- Clear task completion indicators
- Estimated time per task

## 5. Data Collection and Analysis Framework

**Metrics to Track:**
- **Task Success Rate:** Percentage of participants who complete each task
- **Time on Task:** How long each task takes (compare to benchmarks)
- **Error Rate:** Number and types of errors per task
- **Path Analysis:** Common navigation routes vs. optimal paths
- **Satisfaction Ratings:** Post-task and overall satisfaction scores

**Observation Categories:**
- Critical usability issues (prevent task completion)
- Major issues (cause significant frustration or delay)
- Minor issues (small annoyances)
- Positive findings (what works well)

**Analysis Methods:**
- Thematic analysis of qualitative feedback
- Severity rating for identified issues
- Prioritization based on frequency and impact
- Recommendations tied to specific observations

## 6. Pilot Test Plan

**Before Full Testing:**
- Run pilot test with 1-2 colleagues not involved in design
- Validate task clarity and timing
- Test recording equipment and tools
- Refine script based on pilot feedback
- Ensure tasks are achievable with current prototype state

## 7. Reporting Framework

**Deliverable Structure:**
- Executive summary with key findings
- Methodology overview
- Participant demographics
- Task-by-task analysis with success rates and observations
- Prioritized list of usability issues with severity ratings
- Video clips or quotes illustrating key findings
- Actionable recommendations for each issue
- Next steps and follow-up testing needs

Ensure the protocol maintains consistency across all participants, eliminates bias in questions and task wording, gathers both quantitative metrics and qualitative insights, provides clear guidance for moderators and note-takers, and produces actionable findings that directly inform design improvements.


Card Sorting and Tree Testing Research Plan Generator

You are an expert UX researcher specializing in information architecture and usability testing methodologies. Create a comprehensive research plan that includes both card sorting and tree testing studies for the following project:

Project Overview: [PRODUCT OR WEBSITE NAME AND PURPOSE]
Current IA Status: [NEW DESIGN, REDESIGN, OR OPTIMIZATION]
Content Scope: [NUMBER AND TYPE OF CONTENT ITEMS TO ORGANIZE]
Target Users: [PRIMARY USER DEMOGRAPHICS AND BEHAVIORS]
Key User Tasks: [TOP 3-5 TASKS USERS NEED TO ACCOMPLISH]
Research Timeline: [AVAILABLE TIMEFRAME FOR STUDIES]
Research Budget: [AVAILABLE RESOURCES AND TOOLS]

Develop a complete research plan that includes:

1. CARD SORTING STUDY DESIGN
- Study type selection (open, closed, or hybrid) with rationale based on project goals
- Card preparation: How many cards, content selection criteria, and card labeling guidelines
- Category considerations: Pre-defined categories for closed sorts or category creation instructions for open sorts
- Participant recruitment: Sample size recommendations (minimum 15-30 participants), screening criteria, and recruitment channels
- Moderated vs unmoderated approach with pros and cons for this specific context
- Step-by-step session protocol including instructions, time estimates, and facilitator guidelines
- Tools and platforms recommendation (online vs physical, specific software options)

2. TREE TESTING STUDY DESIGN
- Tree structure preparation: How to build the text-only hierarchy from card sorting results or existing IA
- Task scenario development: 5-10 realistic, specific tasks that reflect actual user goals
- Task wording best practices to avoid leading participants or revealing answers
- Participant requirements: Sample size (30-60 recommended), overlap with card sorting participants or fresh panel
- Success metrics definition: Direct vs indirect paths, task completion rates, time-on-task benchmarks
- Testing tool selection and setup instructions
- Session flow and participant instructions

3. SEQUENTIAL RESEARCH WORKFLOW
- Phase 1: Card sorting execution timeline and milestones
- Analysis transition: How to synthesize card sorting data into testable tree structures
- Phase 2: Tree testing execution based on card sorting insights
- Iteration strategy: When and how to conduct follow-up tests
- Decision points: Criteria for moving from one phase to the next

4. DATA ANALYSIS FRAMEWORK

Card Sorting Analysis:
- Similarity matrix and dendrogram interpretation
- Agreement scores and consensus metrics
- Category naming analysis from open sorts
- Pattern identification across participant groups
- Outlier and edge case handling

Tree Testing Analysis:
- Success rate calculations and benchmarks
- Path analysis: Direct, indirect, and failed attempts
- First-click analysis and its significance
- Time-on-task patterns
- Problem area identification (where users get lost)
- Comparative analysis if testing multiple structures

5. DELIVERABLES AND RECOMMENDATIONS
- Recommended IA structure with evidence-based rationale
- Navigation labeling recommendations
- Problem areas requiring attention with severity assessment
- Quick wins vs long-term improvements
- Visual documentation: Site maps, user flow diagrams, comparison matrices
- Stakeholder presentation format with key insights and actionable recommendations

6. RISK MITIGATION AND QUALITY ASSURANCE
- Pilot testing approach to validate study design
- Common pitfalls and how to avoid them
- Participant fatigue management
- Data quality checks and validation methods
- Contingency plans for low participation or inconclusive results

Ensure the plan is practical, scientifically rigorous, and aligned with industry best practices. Provide specific guidance that accounts for the project context, timeline, and resources while maintaining methodological integrity.
