Maps Search Guidelines Notes

The document outlines the guidelines for evaluating Maps Search results, detailing the rating tool, workflow, and interface for assessing search relevance. It emphasizes understanding user intent, rating individual results, and the importance of comments for accuracy. Additionally, it categorizes query types and result types, providing rules for handling various scenarios, including closed businesses and unexpected languages.


1.1 The Rating Tool


What it is: A web-based platform for evaluating Maps Search results (Search
Relevance and Search 2.0 tasks).

Key Components:

Task Bar: Shows task type (e.g., Search 2.0), Task ID, Request ID, and estimated
rating time.

Guidelines Button: Quick access to rating rules.

Viewport Controls: Adjust map view to user location/viewport.

Submit Button: Saves ratings and moves to the next task.

Exam Tip: Always note the Task ID (not Request ID) when reporting issues.

1.2 Rating Workflow


Follow these steps in order for every query:

Understand User Intent: Use the query, the user location, and the viewport (including how fresh it is).

Answer the Query-Level Navigational Question: “Is there a real-world result that
completely satisfies the user’s intent?” (Yes/No).

Rate Individual Results: Check for issues (closed businesses, language errors),
then rate relevance, name/address/pin accuracy.

Mandatory Comments: Required for relevance ratings of Good or below, or for data inaccuracies.

1.3 Rating Interface


1.3.1 Query-Level Navigational Result Question
Purpose: Decide if the query has a single, unambiguous real-world result.

Example:

Query: [Eiffel Tower] → Yes (only one Eiffel Tower exists).

Query: [Starbucks] → No (multiple locations exist).

Exam Tip: Answer this question before rating individual results.

1.3.2 Result-Level Rating Checkboxes


Two critical checkboxes for Business/POI Results:

Unexpected Language/Script:

Use if the name/title is not in the query/test locale/region’s language (e.g., a Japanese name in a French locale).

Stops further rating for that result.

Closed/Does Not Exist:

Use if research confirms the business is permanently closed or never existed.


Note: Still rate relevance as if it were open.

1.3.3 Result Relevance Rating


Rate relevance on a 5-point scale:

Navigational: Unique result that perfectly matches intent (e.g., [Statue of Liberty]). Only one possible result.

Excellent: High-quality match (e.g., a Starbucks near the user’s location). Multiple Excellent results are allowed.

Good: Partially matches intent (e.g., a Starbucks 2 miles away when closer ones exist). Requires demotion checkboxes.

Acceptable: Weak match (e.g., a coffee shop with no vegan options for [vegan coffee]). Demote further for issues.

Bad: Irrelevant or too distant (e.g., a gas station for [vegan coffee]). Always explain in comments.

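The scale is ordered, and the Good-or-below threshold is what triggers the demotion checkboxes and mandatory comments. A minimal sketch in Python (the numeric values and function name are illustrative, not part of the rating tool):

```python
from enum import IntEnum

# Hypothetical encoding of the 5-point relevance scale; the names mirror
# the guidelines, the numeric ordering is an assumption for comparison.
class Relevance(IntEnum):
    BAD = 1
    ACCEPTABLE = 2
    GOOD = 3
    EXCELLENT = 4
    NAVIGATIONAL = 5

def needs_demotion_checkbox(rating: Relevance) -> bool:
    # Demotion checkboxes (and a comment) are required for Good or below.
    return rating <= Relevance.GOOD
```

For example, `needs_demotion_checkbox(Relevance.GOOD)` is True, while an Excellent rating needs neither a checkbox nor a comment.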
1.3.4 Result Relevance Demotion Checkboxes
If rating Good or below, select why:

User Intent Issue: Result doesn’t fully match the query (e.g., a mall for
[Target]).

Distance/Prominence Issue: Closer/more prominent options exist (e.g., a Starbucks 5 miles away when one is 0.5 miles away).

1.3.5 Data Accuracy

Three components to rate for Search 2.0 tasks:

Name/Category Accuracy: Verify against official sources (e.g., Walmart vs. Wal-Mart).

Address Accuracy: Check street number, locality, postal code, etc.

Pin Accuracy: Ensure the pin aligns with the feature’s physical location.

1.3.6 Comments
When Required:

Relevance ratings of Good or below.

Data accuracy issues (e.g., incorrect street number).

What to Include:

User intent (e.g., “User likely wanted a 24/7 vet”).

Source links (e.g., official website showing correct address).

Guideline references (e.g., “Per Section 5.19”).

Key Exam Takeaways


Query-Level Question First: Always answer before rating results.

Closed Businesses: Still rate relevance as if open (unless better options exist).

Demotion Checkboxes: Use both if both issues apply (e.g., a faraway Starbucks with
no vegan options).

Comments Matter: Be concise but specific (e.g., “Demoted for distance: closer
Starbucks at 123 Main St”).

2.1 Query Types


Queries fall into categories that help determine user intent. Memorize these for
the exam:

Address Query: Full or partial address (street, city, country). Example: [717 E El Camino Real, Sunnyvale, CA]. Verify that the result matches the exact address.

POI Query: Searches for landmarks, parks, transit stations. Example: [London Bridge]. Check for official names and prominence.

Business Query: Specific business name (chain or local). Example: [Zola Palo Alto]. Match the exact name/location; watch for chains vs. standalone businesses.

Category Query: General category (e.g., "coffee shops", "gas stations"). Example: [gym]. Prioritize proximity and prominence within the user’s area of interest.

Product/Service Query: Specific item or service offered by businesses. Example: [vanilla latte]. Ensure the result offers the queried product/service.

Coordinate/My Location Query: Latitude/longitude or phrases like "my location". Example: [36.082857, -115.172916]. The pin must match the coordinates exactly.

Emoji Query: Emojis representing categories (e.g., 🍕 = pizza). Example: [🍕]. Use the most literal interpretation of the emoji.

No Maps Intent: Queries with no physical-location intent (e.g., weather, time, online brands). Examples: [Facebook], [time in NYC]. Rate all results Bad.
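For Coordinate queries, "the pin must match the coordinates exactly" is mechanical enough to sketch in code. This is a hypothetical helper assuming decimal-degree queries like the example above; the regex and function names are illustrative, not from the rating tool:

```python
import re

# Illustrative parser for decimal-degree coordinate queries such as
# "36.082857, -115.172916". Degree/minute formats are not handled here.
COORD_RE = re.compile(r"^\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s*$")

def is_coordinate_query(query: str) -> bool:
    return COORD_RE.match(query) is not None

def pin_matches(query: str, pin_lat: float, pin_lng: float) -> bool:
    # The guideline says "exactly", so no tolerance is applied.
    m = COORD_RE.match(query)
    if not m:
        return False
    lat, lng = float(m.group(1)), float(m.group(2))
    return (lat, lng) == (pin_lat, pin_lng)
```

So [36.082857, -115.172916] is recognized as a coordinate query, while [gym] is not.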

2.1.1 Research Expectation


What to Do:

Use official sources (business websites, government sites, postal services).

Check street imagery (Google Street View) for physical verification.

Apply local knowledge (e.g., knowing “SF” = San Francisco).

Translate foreign queries using tools like Google Translate.

What Not to Do:

Rely on crowdsourced/unverified directories (e.g., spammy Yelp listings).

Assume intent without evidence.

2.2 Result Types


Results are categorized based on what they represent:
Business/POI: A business or landmark with a name, address, and category. Example: McDonald’s [Fast Food]. Key clue: includes a category (e.g., “Fast Food”).

Address: A street, locality, or administrative area. Example: 9210 Bossley Park Dr, Cypress, TX. Key clue: no category; the title is the first line of the address.

Feature Without Address: Natural features (rivers, mountains) or transit stops with no street address. Examples: Amazon River; Bus Stop: Main St & 5th Ave. Key clue: may include a category (e.g., “Natural Feature”).

2.3 Location Intent
Determines where the user expects results. Two types:

2.3.1 Explicit Location


Definition: The query explicitly states a location (e.g., city, “near me”).

Examples:

[coffee shops Boston] → Results expected in Boston.

[food near me] → Results near the user’s GPS location.

Rules:

Ignore the viewport and user location if the query specifies a location.

2.3.2 Implicit Location


Definition: Location intent is inferred from viewport and user location.

How to Determine:

Fresh viewport, user inside it: Use the viewport as primary intent. Results inside the viewport cannot be rated Bad for distance alone.

Fresh viewport, user outside it: Use the viewport as primary intent and the user location as secondary.

Stale viewport (user inside or outside): Use the user location as intent.

No viewport, user location present: Use the user location.

No viewport, no user location: Use the test locale (e.g., focus on prominent results in the country/language setting).

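The decision table above can be sketched as a small function (a hypothetical illustration; the status strings and return labels are mine, not the tool’s):

```python
# Sketch of the implicit-location decision table. Inputs:
#   viewport_status: "fresh" | "stale" | "missing"
#   user_location:   "inside" | "outside" | "present" | "missing"
def implicit_location_intent(viewport_status: str, user_location: str) -> str:
    if viewport_status == "fresh":
        if user_location == "inside":
            return "viewport (primary)"
        return "viewport (primary), user location (secondary)"
    if viewport_status == "stale":
        return "user location"
    if user_location != "missing":
        return "user location"
    # Both missing: fall back to prominent results in the test locale.
    return "test locale"
```

Note that only a fresh viewport ever takes priority; a stale one is ignored in favor of the user location.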
Exam Tips
Explicit > Implicit: Always prioritize explicit location cues in the query.

Fresh vs. Stale Viewport: Fresh viewports (recently moved) override stale ones.

No Maps Intent Queries: Quick win—rate all results Bad (e.g., [is cucumber a
fruit]).

POI vs. Address Results: Check for categories to avoid confusion (e.g., a “Park”
category = POI, not an address).

3. Rating the Query-Level Navigational Result Question


To effectively address Query-Level Navigational Result Questions, follow this
structured approach:
Definition
A navigational result is a single, unambiguous real-world entity that completely
satisfies the user’s intent. This question asks:
“Is there any result in the real world that can fully satisfy the user’s distinct
intent?”

Key Criteria
Answer Yes only if:

The query uniquely identifies one real-world entity (e.g., a specific address,
globally unique POI).

No other interpretation of the query is possible (e.g., “Eiffel Tower” has no ambiguity).

Answer No if:

The query is ambiguous or could refer to multiple entities (e.g., “Starbucks,” “City Hall”).

The query is a category (e.g., “coffee shops”) or lacks location specificity.

Step-by-Step Evaluation
Analyze the Query:

Look for unique identifiers: Full addresses, official names, or globally recognized
landmarks.

✅ Yes: [Statue of Liberty], [1600 Pennsylvania Ave NW], [Sydney Opera House].

❌ No: [Target], [gas station], [Central Park] (unless the query specifies a unique
feature within it).

Check for location modifiers: A chain + specific address = navigational (e.g., [McDonald’s 123 Main St]).

Ignore Local Context:

Focus on global uniqueness, not local proximity.

Example: Even if only one Starbucks exists in a rural town, [Starbucks] is still No
(multiple exist worldwide).

Handle Misspellings/Ambiguity:

If the query’s intent is clear despite errors (e.g., [Eifel Tower]), answer based
on the intended unique entity.

If ambiguous (e.g., [Apple] = fruit vs. company), answer No.

Special Cases:

Coordinates: [36.1011° N, 115.1741° W] → Yes (points to one exact location).

No Maps Intent: Queries like [Facebook] or [weather NYC] → No (no physical location).

Examples
[White House] → Yes. Unique global landmark.

[Walmart] → No. Multiple locations exist.

[Empire State Building] → Yes. Uniquely refers to one building.

[Chase Bank 456 Oak St, Dallas] → Yes. Specific address + business.

[pizza near me] → No. Category query with no unique result.
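Once the research is done, the answer reduces to two facts, which can be sketched as a hypothetical helper (the rater still has to establish both inputs through research; the function is illustrative, not part of the tool):

```python
# Illustrative reduction of the navigational question. Whether a query
# uniquely identifies one real-world entity (globally, not just locally)
# must come from research; here it is simply an input.
def navigational_answer(identifies_one_entity: bool,
                        has_maps_intent: bool = True) -> str:
    # Yes only when the query has maps intent AND points to exactly one
    # real-world entity; category, ambiguous, and chain queries are No.
    return "Yes" if (has_maps_intent and identifies_one_entity) else "No"
```

For example, [White House] gives `navigational_answer(True)` → "Yes"; [Walmart] gives `navigational_answer(False)` → "No"; a no-maps-intent query like [Facebook] is "No" regardless.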
Common Mistakes to Avoid
Local vs. Global: A query like [City Hall] is No unless paired with a specific city
(e.g., [SF City Hall]).

Assuming Single Local Result: Even if only one result exists nearby, if it’s part
of a chain, answer No.

Overlooking Typos: [Louvre Museum] vs. [Louver Museum] → Still Yes if intent is
clear.

4.1 Result Name/Title in Unexpected Language or Script


4.1.1 Name of Business/POI Results
Rule: The name must be in the query language, test locale language, or result
region’s official language(s).

Acceptable:

Official brand names (e.g., “Starbucks” in English globally).

Bilingual regions (e.g., “Montréal” in French/English in Canada).

Not Acceptable:

Mixing unexpected scripts (e.g., Cyrillic in a Japanese locale).

Misspellings or unofficial translations.

Exam Tip: Use the Result name/title in unexpected language/script checkbox to flag
this. No further rating needed.

4.1.2 Title of Address Results


Rule: Address titles (first line of address) must match the local language/script
or test locale.

Acceptable:

Localized components (e.g., “Paris” in French for a French locale).

Street names in original language (no translation needed).

Not Acceptable:

Translating street names (e.g., “Main Street” → “Hauptstraße” in a non-German locale).

Exam Tip: Use Language/Script Issue in Address checkbox for errors in address
details (not the title).

4.2 Business/POI is Closed or Does Not Exist


4.2.1 Closed/Does Not Exist vs. Inaccurate Name and Address
Closed/Does Not Exist:

The business is permanently closed, or the POI never existed.

Example: A restaurant marked “Closed” on Google My Business.

Inaccurate Name/Address:

The business exists, but details are wrong (e.g., wrong street number).

Example: “Joe’s Diner” listed at 123 Main St instead of 456 Oak St.

4.2.2 Rating Relevance of Closed/Non-Existing Business/POI


Rule: Rate relevance as if the business were open.

Example: A closed Starbucks near the user competes with open cafés → Demote for
distance/prominence.

Exceptions:

If all nearby options are closed, rate relevance based on proximity.

If the query explicitly seeks historical/closed POIs (rare), rate accordingly.

4.3 Business/POI Status is PERMANENT_CLOSURE


4.3.1 No Status Shown
Rule: Research to confirm closure.

If closed, check Business/POI is closed or does not exist.

If unsure, assume it’s open and rate accuracy (e.g., Can’t Verify).

4.3.2 Status Shown as PERMANENT_CLOSURE


Expected Closure:

All branches in the area are closed (e.g., “Blockbuster” in 2023).

Action: Rate relevance as if open (e.g., Excellent if no alternatives exist).

Unexpected Closure:

Open alternatives exist nearby (e.g., a closed Starbucks with others open).

Action: Demote relevance (e.g., Bad).
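The expected-vs-unexpected distinction above reduces to one research question, sketched here (illustrative names; not part of the rating tool):

```python
# Sketch of the PERMANENT_CLOSURE rule: an "expected" closure (no open
# alternatives nearby, e.g. the whole chain is gone) is still rated as if
# open; an "unexpected" one (open alternatives exist) is demoted.
def permanent_closure_action(open_alternatives_nearby: bool) -> str:
    return "demote relevance" if open_alternatives_nearby else "rate as if open"
```

So a closed Starbucks surrounded by open ones is demoted, while the last Blockbuster-style closure is still rated on its merits as if open.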

Key Exam Strategies


Unexpected Language: Always check the query/test locale language first.

Closed vs. Inaccurate: Closed = no longer exists; Inaccurate = exists but details
wrong.

PERMANENT_CLOSURE:

If expected (no alternatives), rate relevance highly.

If unexpected (alternatives exist), demote.

Comments: Always explain closures, language issues, or inaccuracies with sources.


Example Scenario
Query: [Café Central Vienna]

Result: “Café Central” marked PERMANENT_CLOSURE, but research shows it’s open.

Action:

Uncheck “Closed/Does Not Exist” (status is wrong).

Rate Incorrect for data accuracy and leave a comment with the official website
link.
