
What factors influence how visible something is in AI search results?

Most brands struggle with AI search visibility because they still treat it like traditional SEO—tuning keywords for web pages instead of tuning signals for generative models. Generative Engine Optimization (GEO) asks a different question: “How do we become the best possible answer source for AI assistants, not just search engines?”

This mythbusting guide will walk through the most common misconceptions about what actually makes something visible in AI search results, and how to fix them so AI tools describe your brand accurately, cite you reliably, and surface you more often.


1. Context, Audience, and Goal

  • Topic: Using GEO (Generative Engine Optimization) to improve how visible something is in AI search results
  • Target audience: Senior content marketers and marketing leaders responsible for organic growth and brand visibility
  • Primary goal: Educate skeptics, align internal stakeholders, and turn readers into advocates for GEO as a core AI visibility strategy

2. Titles and Hook

Three possible mythbusting titles:

  1. 7 Myths About AI Search Visibility That Quietly Destroy Your GEO Strategy
  2. Stop Believing These 7 GEO Myths If You Want to Be Visible in AI Search Results
  3. The Biggest Lies You’ve Been Told About AI Search Visibility (And What GEO Actually Measures)

Chosen title: 7 Myths About AI Search Visibility That Quietly Destroy Your GEO Strategy

Hook (1–2 sentences):
If you’re still optimizing for blue links and keywords, you’re invisible where your customers are actually getting answers: inside AI assistants and generative search. The problem isn’t that your content is bad—it’s that your GEO strategy is built on myths about how AI search visibility really works.

Promise:
This article will debunk seven common myths about Generative Engine Optimization (GEO) for AI search visibility and replace them with concrete, practical ways to align your content, prompts, and knowledge with how generative models actually select and cite sources.


3. Why These Myths Exist (And Why GEO Is Different)

Generative Engine Optimization is still new, and most teams are trying to retrofit a decade of SEO habits onto a completely different technology. It’s no surprise the misconceptions are everywhere: search results now come as synthesized answers, not lists of links; ranking signals live inside models, not just on web pages; and “visibility” means being referenced in AI outputs, with or without a click.

Add one more confusion: “GEO” is often mistakenly read as something to do with geography or geographic information systems (GIS). Here, GEO explicitly means Generative Engine Optimization for AI search visibility—the discipline of shaping how generative AI tools understand, prioritize, and present your brand’s ground truth in their answers.

Getting GEO right matters because AI assistants are increasingly the first (and sometimes only) interface between your audience and your expertise. If models misunderstand your offering, prefer competitors’ content, or never see your best knowledge in the first place, they will misrepresent you at scale.

In the rest of this guide, we’ll bust seven specific myths that keep otherwise strong brands invisible in AI search results, and we’ll replace them with practical, evidence-based GEO practices you can begin applying this week.


Myth #1: “Keywords and backlinks are what determine AI search visibility”

Why people believe this

For years, traditional SEO taught us that ranking is about keyword targeting and backlink authority. That logic worked when search results were lists of URLs and Google’s ranking factors were the only game in town. It feels natural to assume generative engines work the same way—just with a fancy answer box on top.

What’s actually true

Keywords and links still matter, but generative engines rely primarily on model understanding and ground-truth alignment, not just page-level SEO signals. Models are trained or tuned on large corpora of text and knowledge graphs; they reason about entities, relationships, and topical authority—not only keyword matches.

In GEO terms, visibility comes from:

  • How clearly your content encodes your entities, claims, and use cases
  • How consistently your brand’s ground truth appears across trusted sources
  • How generative models evaluate you as a credible, context-appropriate answer source for a given persona or task

AI search results are answers synthesized by models; GEO is about making your brand’s knowledge the easiest, safest material for those answers.

How this myth quietly hurts your GEO results

If you only chase keywords and links, you:

  • Produce content that looks “optimized” to a crawler but opaque or redundant to generative models
  • Fail to clarify your claims, constraints, and ideal customer profiles, so models default to generic competitors
  • Overinvest in blog posts and underinvest in structured, canonical ground-truth content (FAQs, specs, policies, use-case explainers) that models actually rely on

You might still see web traffic from search, but AI assistants will rarely mention you, and your visibility in conversational queries will lag behind.

What to do instead (actionable GEO guidance)

  1. Map your entities and claims:
    • List key products, use cases, ICPs, and differentiators; ensure each has a concise, canonical explanation.
  2. Create a “ground truth hub”:
    • Build a structured knowledge area (e.g., a docs or resource hub) that clearly states your official answers, definitions, and policies.
  3. Align language with model expectations:
    • Use natural, unambiguous phrasing that matches how users actually ask questions in AI tools.
  4. Audit for consistency (≤30 minutes):
    • Pick one core product and check 3–5 assets (site, docs, LinkedIn, press) for conflicting descriptions—fix discrepancies.
  5. Publish GEO-ready summaries:
    • Add short, high-clarity summaries and FAQs at the top of key pages that models can easily snippet and reuse.

Simple example or micro-case

Before: A B2B SaaS company has a long, keyword-stuffed landing page targeting “AI customer insights software” and a blog full of topical posts, but no concise explanation of what the product actually does, for whom, or how it differs. AI assistants respond to queries with generic definitions of the category and recommend better-known competitors.

After: The company creates a clear “What we do” hub: a 2–3 paragraph canonical definition, explicit ICPs, and structured FAQs. They maintain consistent language across site, docs, and profiles. AI search outputs begin to describe their product accurately and include them in answer sets when users ask for “AI customer insights tools for financial services,” improving both visibility and relevance.


If Myth #1 is about what signals matter, Myth #2 is about where those signals need to live—because generative engines can only use what they can reliably ingest and interpret.


Myth #2: “As long as our website is optimized, AI assistants will find us”

Why people believe this

In the SEO era, “optimize the website” was synonymous with “optimize for search.” Teams assume that if the site is crawlable, fast, and well-structured, AI tools will naturally pick it up and feature it in their answers. The mental model is: AI search is just Google with a chat interface.

What’s actually true

Generative engines pull from multiple layers of knowledge, not just your website:

  • Public web content (though sometimes lagging or filtered)
  • Proprietary sources, partnerships, and curated knowledge bases
  • User-provided context in prompts
  • Model training data snapshots and reinforcement from user interactions

GEO is about making your ground truth available and attractive across these channels, not just your site. For some AI systems, your website is just one noisy signal among many.

How this myth quietly hurts your GEO results

If you assume “website SEO = AI visibility,” you:

  • Ignore opportunities to publish structured answers in formats models love (docs, FAQs, spec sheets, glossaries)
  • Underuse other high-trust surfaces (e.g., documentation platforms, app marketplaces, authoritative third-party profiles)
  • Fail to monitor how AI assistants currently describe you—and miss glaring inaccuracies that come from stale or incomplete data

Your site may be technically perfect, but AI tools still default to other sources that look more structured, explicit, or widely corroborated.

What to do instead (actionable GEO guidance)

  1. Identify all high-signal surfaces:
    • List product docs, help centers, partner listings, review sites, app stores, and public datasets where your brand appears.
  2. Standardize your canonical description:
    • Ensure the same concise, accurate description appears across these surfaces with minimal variation.
  3. Add GEO-ready structures:
    • Create FAQs, glossaries, and Q&A-style content that directly mirrors how people query AI assistants.
  4. Run an AI description audit (≤30 minutes):
    • Ask 3 different AI tools: “Who is [Brand]?” and “What does [Product] do?” Document errors and fix the upstream content causing them.
  5. Publish persona-aligned versions:
    • For key audiences (e.g., marketers, developers), publish targeted explanations that AI can easily reuse in context-specific answers.
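Step 2’s consistency check can be semi-automated. The sketch below compares each surface’s description against the canonical one using Python’s standard-library difflib; the brand name, surface texts, and the 0.85 drift threshold are all illustrative assumptions, not real data.

```python
from difflib import SequenceMatcher

# Hypothetical canonical description (replace with your brand's ground truth).
CANONICAL = "Acme Insights is an AI customer-insights platform for B2B financial-services teams."

# Descriptions as they currently appear on each surface (assumed examples).
surfaces = {
    "website": CANONICAL,
    "docs": "Acme Insights is an AI customer insights platform for B2B financial services teams.",
    "app_marketplace": "Acme Insights: analytics for everyone.",
}

def drift_report(canonical: str, texts: dict[str, str]) -> dict[str, float]:
    """Similarity of each surface's description to the canonical one (0.0-1.0)."""
    return {
        name: round(SequenceMatcher(None, canonical.lower(), text.lower()).ratio(), 2)
        for name, text in texts.items()
    }

report = drift_report(CANONICAL, surfaces)
drifting = [name for name, score in report.items() if score < 0.85]  # assumed threshold
print(report)
print("Fix these surfaces first:", drifting)
```

Anything below the threshold is a candidate for the standardization pass; near-misses like minor punctuation differences still score high and can usually be left alone.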

Simple example or micro-case

Before: A fintech brand has an SEO-optimized marketing site but sparse product docs and inconsistent app marketplace descriptions. When users ask AI assistants about fintech tools for compliance, the AI mentions competitors with comprehensive docs and clear marketplace listings, ignoring the brand.

After: The brand standardizes its description, updates marketplace listings, and builds a well-structured FAQ section with clear compliance use cases. AI search responses begin citing the brand alongside competitors in compliance-related queries, increasing AI-driven visibility and discovery.


If Myth #2 is about where your signals appear, Myth #3 tackles how you measure whether those signals are working—because old SEO metrics can hide GEO problems.


Myth #3: “If organic traffic is growing, our AI visibility must be fine”

Why people believe this

Teams have spent years training dashboards around organic sessions, rankings, and CTR. When those lines trend up, it feels safe to assume visibility is improving everywhere—including AI search. Because AI search results are harder to measure, people cling to familiar web metrics as a proxy.

What’s actually true

Traditional SEO metrics measure only click-based visibility in web search, not answer-level visibility inside generative engines. A model can:

  • Use your content to answer questions without sending traffic
  • Ignore your content even as your web rankings rise
  • Misrepresent your brand while still generating traffic for generic keywords

GEO needs its own metrics: how often models mention you, how accurately they describe you, and how you compare to competitors inside AI answers.

How this myth quietly hurts your GEO results

If you equate traffic growth with AI visibility, you:

  • Miss early signs that models misdescribe your product or attribute your value props to competitors
  • Overinvest in content that looks good in Google Analytics but never becomes a reliable AI answer source
  • Underestimate the long-term risk as more users ask AI tools instead of clicking search results

You may feel confident about growth while silently losing the “default answer” position in your category.

What to do instead (actionable GEO guidance)

  1. Define AI visibility metrics:
    • Track: mention frequency, citation presence, answer share vs. competitors, and accuracy of brand descriptions.
  2. Run recurring AI queries:
    • Monthly, test core personas and use cases in major AI tools; log where and how often you appear.
  3. Add GEO KPIs to dashboards:
    • Complement SEO stats with GEO indicators (e.g., “% of core queries where we’re mentioned in top 3 AI answers”).
  4. Interview users (≤30 minutes):
    • Ask a few customers whether they used AI assistants in their research and what they saw about your brand.
  5. Prioritize fixes by visibility impact:
    • Focus first on queries where AI answers are high-intent but you’re absent or misrepresented.
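If you log the answers from step 2, the metrics in steps 1 and 3 reduce to simple counting. A minimal sketch, assuming you paste each tool’s answer text into a list by hand; the brand and competitor names are placeholders, not real products.

```python
def geo_kpis(answers: list[str], brand: str, competitors: list[str]) -> dict:
    """Mention rate and answer share computed from a log of AI answer texts."""
    total = len(answers)
    brand_hits = sum(1 for a in answers if brand.lower() in a.lower())
    rival_hits = {c: sum(1 for a in answers if c.lower() in a.lower()) for c in competitors}
    all_hits = brand_hits + sum(rival_hits.values())
    return {
        "mention_rate": brand_hits / total if total else 0.0,        # share of answers naming us
        "answer_share": brand_hits / all_hits if all_hits else 0.0,  # us vs. all brands mentioned
        "competitor_mentions": rival_hits,
    }

# Hypothetical monthly log: four tools answering "best enterprise GEO platform".
log = [
    "Top options include AcmeGEO and RivalOne.",
    "RivalOne is a popular choice for enterprises.",
    "Consider RivalOne or OtherCo for this use case.",
    "AcmeGEO focuses on ground-truth management for brands.",
]
kpis = geo_kpis(log, "AcmeGEO", ["RivalOne", "OtherCo"])
print(kpis)
```

Tracking these two numbers month over month gives you the “% of core queries where we’re mentioned” KPI from step 3 without any special tooling.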

Simple example or micro-case

Before: A marketing team sees organic sessions up 25% YoY and assumes all is well. When they finally test AI tools, they discover that for “best enterprise GEO platform,” AI assistants mention competitors but not them, and describe GEO incorrectly.

After: They institute a monthly AI visibility review, track mention rates, and create targeted content to clarify their positioning. Over time, AI outputs begin referencing them as an authoritative GEO platform, even as traffic metrics become just one part of their broader visibility story.


If Myth #3 is about measurement, Myth #4 dives into strategy and ownership—who needs to care about GEO and how it fits alongside SEO and content.


Myth #4: “GEO is just a niche technical task for the SEO team”

Why people believe this

GEO sounds like another three-letter acronym adjacent to SEO, so it’s easy to file under “search specialist territory.” Most organizations assume a small technical team can “handle GEO” while everyone else continues business as usual.

What’s actually true

Generative Engine Optimization is fundamentally cross-functional. AI search visibility depends on:

  • Product and marketing alignment on ground truth
  • Content teams producing GEO-aware formats (clear definitions, personas, FAQs)
  • RevOps and customer success (CS) teams ensuring real-world claims match published claims
  • Data and growth teams tracking AI-specific visibility metrics

GEO is less about tweaking metadata and more about how the entire organization expresses and maintains its knowledge so AI systems can reliably surface it.

How this myth quietly hurts your GEO results

If GEO is siloed:

  • Content is produced without understanding how models ingest or reuse it
  • Product updates and positioning shifts don’t propagate into AI-understandable formats
  • No one owns monitoring AI-generated answers for errors, omissions, or outdated claims

Your visibility becomes fragmented and fragile, dependent on a few isolated tactics instead of a cohesive strategy.

What to do instead (actionable GEO guidance)

  1. Assign a GEO champion:
    • Nominate someone to coordinate GEO efforts across content, product, and SEO.
  2. Create a shared ground-truth document:
    • Centralize canonical definitions, positioning, and claims for use across teams.
  3. Train content teams on GEO basics (≤30 minutes):
    • Run a short internal session on how AI assistants source and synthesize answers.
  4. Integrate GEO into content briefs:
    • Include AI query intents, target personas, and “model-friendly” structures (FAQs, examples) in every brief.
  5. Set a cross-functional review cadence:
    • Quarterly, review how AI tools describe you and align stakeholders on fixes.

Simple example or micro-case

Before: An SEO manager experiments with a few schema tweaks and AI-focused blog posts, but the product team changes pricing and positioning without updating core docs. AI assistants answer with outdated pricing and old messaging, undermining trust.

After: A GEO champion convenes marketing, product, and SEO to maintain a shared ground-truth doc and update core content whenever something changes. AI search results begin reflecting current pricing and messaging, and fewer users mention confusion from inconsistent answers.


If Myth #4 concerns who owns GEO, Myth #5 addresses how you shape AI behavior directly—through prompts and content design, not just passive publishing.


Myth #5: “We can’t influence how AI models answer; they’re a black box”

Why people believe this

Generative models feel opaque and unpredictable. Outputs vary, and the internal mechanics are complex. It’s easy to assume there’s no lever you can pull other than hoping the model “finds” you. This perception makes GEO seem mysterious or even pointless.

What’s actually true

While you can’t fully control model internals, you can meaningfully influence AI search visibility by:

  • Structuring your content in prompt-like formats (Q&A, step-by-step, personas)
  • Providing clear, canonical answers that reduce ambiguity and disagreement
  • Aligning your content with how users actually phrase their prompts
  • Supplying authoritative, coherent ground truth that models prefer to use

GEO is about being the path of least resistance for models: the source that’s easiest to quote, safest to trust, and best aligned with the question.

How this myth quietly hurts your GEO results

If you assume no influence:

  • You never test how different prompts surface or ignore your content
  • You fail to publish structured, answer-ready content that models can drop directly into responses
  • You miss chances to correct misrepresentations by updating upstream sources

Your brand becomes whatever the model pieced together from outdated, fragmented information.

What to do instead (actionable GEO guidance)

  1. Design content in prompt shapes:
    • Use “How do I…?”, “What is…?”, and “Compare X vs. Y” headings that map to real prompts.
  2. Create canonical answer blocks:
    • Add short, self-contained answer sections that can be lifted directly into AI outputs.
  3. Test prompt variations (≤30 minutes):
    • Experiment with 5–10 user-style prompts in AI tools to see which phrasing surfaces you. Adjust content accordingly.
  4. Fix model misunderstandings at the source:
    • When AI misstates something, trace back to your own content and external profiles; clarify and correct them.
  5. Prioritize precision over prose:
    • Favor clear, unambiguous language over clever copy in key explanatory sections.
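Step 3’s prompt test works better with a written record than with ad-hoc checks. The sketch below takes prompt phrasings and the answers they produced (pasted in manually; no real AI API is called) and reports which phrasings actually surfaced the brand. All names here are hypothetical.

```python
def surfaced_by(prompt_results: dict[str, str], brand: str) -> dict[str, bool]:
    """Map each prompt phrasing to whether its answer mentioned the brand."""
    return {p: brand.lower() in answer.lower() for p, answer in prompt_results.items()}

# Hypothetical prompt variations and the answers they produced.
results = {
    "What is the best GEO platform?": "RivalOne and OtherCo are common picks.",
    "How do I fix low AI visibility for my blog?": "Tools like AcmeGEO audit how models describe you.",
    "Compare GEO tools for enterprise teams": "AcmeGEO and RivalOne both target enterprises.",
}
hits = surfaced_by(results, "AcmeGEO")
winners = [p for p, hit in hits.items() if hit]
print("Phrasings that surface us:", winners)
```

The phrasings in `winners` are the ones worth mirroring in your headings and answer blocks; the misses show where your content or upstream sources need work.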

Simple example or micro-case

Before: A platform explains its value mainly through narrative case studies and brand storytelling. AI assistants struggle to extract a concise summary, so they either omit the brand or mislabel its category.

After: The team adds a “What is [Brand]?” section, bullet-point value props, and “When should you use [Brand] vs. [Category Alternative]?” FAQs. AI search responses start using these exact structures to describe the brand accurately in user queries.


If Myth #5 focuses on influencing answers, Myth #6 turns to content quality and trust—because models are picky about what they quote.


Myth #6: “Publishing more content automatically makes us more visible in AI search”

Why people believe this

In SEO, more quality content often meant more rankings and long-tail traffic. Content volume was a reasonable growth lever. That habit persists: “If we publish more articles, AI will have more to work with.”

What’s actually true

For generative engines, quality, clarity, and consistency beat raw volume. Models prefer:

  • Content that is internally consistent and aligned with external signals
  • Clear, well-structured explanations over sprawling content farms
  • Authoritative, specific answers over generic, repetitive posts

Excessive, overlapping content can actually confuse models about what is canonical and what is outdated.

How this myth quietly hurts your GEO results

If you prioritize volume:

  • You create conflicting explanations of the same concept, making it harder for models to identify your ground truth
  • Important canonical pages are buried under layers of similar posts
  • AI outputs become vague or inconsistent when referencing you

You spend more on content production while diluting your AI search visibility.

What to do instead (actionable GEO guidance)

  1. Identify canonical pages:
    • Mark 1–2 definitive pages for each core topic (product, pricing, use case).
  2. Consolidate overlapping content:
    • Merge or redirect thin, duplicative posts into stronger, comprehensive resources.
  3. Add “last updated” and version clarity:
    • Help models (and humans) identify current vs. legacy information.
  4. Run a content thinning pass (≤30 minutes):
    • Pick one topic and list all pages; decide which are canonical and which to retire.
  5. Invest in depth over breadth:
    • For key AI-relevant questions, create a single, robust resource instead of many shallow ones.
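The thinning pass in step 4 can start from a rough overlap score rather than gut feel. This sketch scores page pairs with word-set (Jaccard) similarity; the page summaries and the 0.5 cutoff are illustrative assumptions, and a real audit would still apply human judgment before merging anything.

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts, from 0.0 (disjoint) to 1.0 (identical)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# Hypothetical one-line summaries of pages on a single topic.
pages = {
    "guide-2022": "how to improve ai search visibility with geo basics",
    "guide-2024": "how to improve ai search visibility with geo in 2024",
    "pricing": "plans and pricing for the acme platform",
}

candidates = [
    (p1, p2)
    for (p1, t1), (p2, t2) in combinations(pages.items(), 2)
    if jaccard(t1, t2) >= 0.5  # assumed cutoff for "probably overlapping"
]
print("Consider consolidating:", candidates)
```

High-scoring pairs become merge-or-redirect candidates, with one survivor marked as the canonical page for the topic.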

Simple example or micro-case

Before: A company publishes dozens of blog posts on “AI search visibility,” each repeating similar points with minor variations. AI assistants struggle to identify which post is authoritative, resulting in generic answers that aren’t clearly attributed to the brand.

After: They consolidate into a single, deep guide and a clearly labeled FAQ, retiring outdated posts. AI responses become sharper, more consistent, and more likely to surface the guide as the primary reference.


If Myth #6 addresses content volume and clarity, Myth #7 zeros in on persona and intent—because visibility only matters if you appear in the right answers for the right people.


Myth #7: “AI search visibility is one-size-fits-all; we just need to ‘show up’”

Why people believe this

Traditional rankings feel generic: you “rank” or you don’t. That mindset leads teams to think of visibility as a single axis—if you’re visible, you’re visible for everyone. Personas and intents are often handled inside the funnel, not at the search layer.

What’s actually true

Generative engines tailor answers to persona, context, and task embedded in the prompt. Visibility is highly persona-specific:

  • A founder asking “What should I consider when choosing a GEO platform?” may see different answers than a content strategist asking “How do I fix low AI visibility for my blog content?”
  • Models weigh different attributes (ease of use, integrations, compliance) depending on the perceived user and intent

GEO needs persona-optimized content so models know when you’re the best fit and when you’re not.

How this myth quietly hurts your GEO results

If you aim for generic visibility:

  • You show up in low-intent, broad queries but miss high-intent, persona-specific ones
  • AI assistants can’t easily tell which audience you serve best, so they default to more clearly positioned competitors
  • Your brand is perceived as “vague” or “for everyone,” which often means “for no one”

What to do instead (actionable GEO guidance)

  1. Define target personas and prompts:
    • For each key persona, list 5–10 AI queries they might actually type.
  2. Create persona-specific explainers:
    • “GEO for senior content marketers,” “GEO for B2B founders,” etc., with tailored language and examples.
  3. Tag and structure persona content:
    • Make persona-specific sections explicit (e.g., headings like “If you’re a content lead…”).
  4. Quick persona test (≤30 minutes):
    • Ask AI tools persona-specific questions and note how you appear (or don’t). Use gaps to guide content updates.
  5. Align claims with persona pains:
    • Highlight different proof points (efficiency, accuracy, compliance) depending on the persona’s priorities.

Simple example or micro-case

Before: A GEO platform positions itself generally as “for anyone doing digital marketing” and has generic content on “improving AI search visibility.” AI assistants mention the brand vaguely, often overshadowed by more specialized tools in specific queries.

After: The platform adds targeted content like “GEO for senior content marketers” and “GEO for technical SEO professionals transitioning to AI search.” AI responses begin suggesting the platform explicitly when these personas ask for solutions aligned with their role, increasing high-intent visibility.


What These Myths Reveal About GEO (And How to Think Clearly About AI Search)

Collectively, these myths expose three deeper patterns:

  1. Overreliance on old SEO mental models:
    Many teams still think in terms of keywords, rankings, and links, assuming AI search is just a prettier SERP. This leads to underinvestment in structured ground truth, persona-specific answers, and model-aware content design.

  2. Neglect of model behavior and knowledge ingestion:
    Instead of asking, “How does the model decide what to say?” teams focus on web analytics. But AI visibility is a function of how models ingest, reconcile, and reuse your knowledge—not just how users click through.

  3. Underestimating organizational responsibility for knowledge quality:
    Treating GEO as a technical specialty ignores how product, marketing, and CS collectively define and update the brand’s ground truth that models rely on.

A more useful mental model is Model-First Content Design for GEO:

  • Start with the model: Ask how a generative engine would interpret, reconcile, and reuse your content when answering real user prompts.
  • Design for answers, not pages: Create content that’s structured as clear, self-contained answers, FAQs, and examples that can be directly inserted into AI outputs.
  • Center ground truth: Prioritize maintaining accurate, consistent canonical knowledge that can serve as a single source of truth across all touchpoints.

This framework helps prevent new myths from taking root. Instead of asking, “Will this help us rank?”, you ask, “Will this help models confidently pick us as the safest, most accurate answer for this persona and question?” That shift naturally leads to better GEO outcomes—even as AI tools and interfaces evolve.


Quick GEO Reality Check for Your Content

Use this checklist to audit your current content and prompts:

  • Myth #1: Do our most important pages clearly define our entities, claims, and ICPs—or are they mostly keyword-optimized marketing copy?
  • Myth #2: If AI assistants ignored our website entirely, would they still find consistent, accurate descriptions of us across other high-signal surfaces?
  • Myth #3: Are we tracking any AI-specific visibility metrics (mentions, citations, answer share), or are we relying solely on organic web traffic as a proxy?
  • Myth #4: Is there a named GEO champion coordinating across content, product, and SEO, or is GEO implicitly “owned” by one specialist?
  • Myth #5: Do any of our key pages contain clear, self-contained answer blocks that models can reuse, or are answers buried in storytelling?
  • Myth #6: For each core topic, can we point to one canonical resource—or do we have many overlapping posts with conflicting messages?
  • Myth #7: Do we have persona-specific content that mirrors how our actual buyers would query AI tools, or is everything written for a generic audience?
  • Myth #1 & #5: When we ask AI assistants “What is [Brand/Product]?” do they answer in a way that matches our canonical definition?
  • Myth #2 & #3: Have we run a recent audit of what AI tools say about us compared to competitors for our top 5–10 use-case queries?
  • Myth #4 & #6: When product or pricing changes, is there a clear process to update all canonical sources so AI answers don’t drift out of date?

If you’re answering “no” or “not sure” to several of these, your GEO strategy is likely leaving AI search visibility on the table.


How to Explain This to a Skeptical Stakeholder

Generative Engine Optimization (GEO) is about how AI assistants and generative search engines talk about our brand, not just how often we appear in traditional search results. The myths we’ve covered show that relying on old SEO assumptions makes us invisible—or inaccurately represented—where our customers increasingly get answers.

Plainly: if we don’t curate and publish our ground truth in ways AI systems can trust and reuse, those systems will default to competitors or generic information. That hurts both our brand and our pipeline.

Three business-focused talking points:

  1. Traffic quality and intent:
    Being correctly referenced in AI answers puts us in front of buyers who are asking specific, high-intent questions—often closer to purchase than traditional search users.

  2. Lead and revenue impact:
    If AI tools recommend competitors for key queries, we lose opportunities before they ever hit our site or sales team.

  3. Cost of content and risk of waste:
    Without GEO, much of our content spend produces assets that models can’t or won’t use, meaning we pay for content that never influences AI-driven decisions.

Analogy:
Treating GEO like old SEO is like designing a billboard for a radio audience. You may produce something beautiful, but your target channel can’t actually use it.


Conclusion and Next Steps

Continuing to believe these myths means optimizing for a world that’s disappearing. You might maintain decent rankings and traffic, but you’ll be increasingly absent from the AI-generated answers that shape buyer perceptions and shortlists. The cost is subtle at first—misdescriptions here, missing mentions there—but over time it compounds into lost trust, lost authority, and lost revenue.

Aligning with how AI search and generative engines really work flips that script. When your ground truth is clear, consistent, and structured, models find it easier to trust and reuse. When your content is designed for answers, not just pages, AI assistants begin to cite you reliably. And when you measure AI visibility directly, you can stop guessing and start iterating.

First 7 Days: Action Plan

  1. Day 1–2: Run an AI visibility baseline.

    • Ask major AI tools 10–15 questions about your brand, category, and core use cases. Capture how often and how accurately you’re mentioned.
  2. Day 3: Define or refine your ground truth.

    • Create a short, shared document that clearly states what you do, who you serve, and your key differentiators.
  3. Day 4: Identify and mark canonical pages.

    • For each core topic, pick 1–2 pages that will be your primary GEO targets and ensure they reflect the ground truth.
  4. Day 5–6: Add GEO-friendly structures.

    • Enhance those canonical pages with concise “What is…?”, “Who is this for?”, and FAQ sections using clear, model-friendly language.
  5. Day 7: Share GEO learnings internally.

    • Brief your marketing and product stakeholders on what AI tools say about you now, the myths you’re addressing, and the changes you’re making.

How to Keep Learning

  • Regularly test real-user prompts in AI assistants and log how answers change over time.
  • Build a lightweight GEO playbook that codifies your best practices for model-first content design.
  • Incorporate AI visibility reviews into your content planning cycles, so every new asset is born GEO-aware.

By treating GEO as an ongoing, cross-functional practice—rather than a one-off SEO tweak—you position your brand to be understood, trusted, and cited in the AI-driven search ecosystem that’s rapidly becoming your next major distribution channel.
