Mike Ginley

Use ChatGPT To Simplify Google Patents


Everyone is going crazy for all the LLM tools out there, both for their accuracy and their inaccuracy (and lack of sources). While some argue they're going to take over every job in the world (SEO is dead again!), more realistic people are using them to improve their workflows in various ways.

One of the best uses of these tools is to simplify complex topics and break them down into easier-to-digest chunks. I thought I would give it a shot with some key Google patents related to search. Now, I am just doing this as a test. I'm going to give my take on each result as well, but I am mainly curious to see how ChatGPT does at breaking them down, and what rankings I can get with content mostly created by AI.

I used the same prompt every time: “quickly summarize this in a handful of points in terms like I am a fifth grader”.

For real human breakdowns of Google patents, look to some of the best in the industry.

There's no real rhyme or reason for how I chose these patents or what content I grabbed from the LLM. I'm mainly just curious to see how this performs and what people think. The amazing thing is how well ChatGPT can sum complex things up, so I recommend testing this out on other topics!

User-context-based search engine

My Take:

Context and relevancy are key here! I have seen plenty of websites where people were trying to rank for totally unrelated content, or for key terms that appeared nowhere on the site. Don't flood your page/site with keywords, but definitely cover them naturally, along with their related topics, to help the overall context.


These points summarize the method of analyzing and classifying information based on its context and using that information to search for relevant content in a database.

  1. The method is about figuring out the meaning and context of words, phrases, and combinations of words in encyclopedias or other bodies of knowledge.
  2. The analysis helps divide the universe of information into different areas or domains and identifies words or expressions unique to those areas to help classify information.
  3. A vocabulary list is created, and each word or phrase on the list has a “macro-context,” which is a way to describe its context using different subject matters and their importance.
  4. The method counts how many times each word from the vocabulary list appears in a given text.
  5. By adding up the macro-contexts of the words found in the text, a “macro-context” is calculated to represent the overall context of the text.
  6. A “micro-context” is then determined, which is a list of words from the vocabulary list that are closely related to the given text.
  7. The list of micro-context words can be further reduced by selecting the words that have macro-contexts most similar to the macro-context of the given text.
  8. The method can be used to classify or organize information based on its macro-context or micro-context.
  9. In the searching method, a database of information is analyzed to determine macro and micro-contexts for each element in the database.
  10. The database is indexed based on these contexts, and when a user enters a query, the system determines the macro and micro-contexts associated with the query.
  11. The system then looks for information in the database that has related contexts to the query and presents it to the user.
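
Just for fun, here's a tiny Python sketch of the macro-context math from points 3-5 above. The vocabulary, subject labels, and weights are all made up for illustration; the patent doesn't spell out real numbers or code.

```python
from collections import Counter

# Hypothetical toy vocabulary: each word maps to a "macro-context,"
# a weighting over broad subject areas (numbers invented for illustration).
VOCAB_MACRO_CONTEXTS = {
    "guitar": {"music": 0.9, "shopping": 0.1},
    "amplifier": {"music": 0.7, "electronics": 0.3},
    "symptom": {"health": 1.0},
}

def macro_context(text):
    """Sum the macro-contexts of every vocabulary word found in the text,
    weighted by how often each word appears (points 4-5 above)."""
    counts = Counter(w.strip(".,!?").lower() for w in text.split())
    combined = {}
    for word, ctx in VOCAB_MACRO_CONTEXTS.items():
        if counts[word] == 0:
            continue
        for subject, weight in ctx.items():
            combined[subject] = combined.get(subject, 0.0) + weight * counts[word]
    total = sum(combined.values()) or 1.0
    return {s: round(w / total, 2) for s, w in combined.items()}

print(macro_context("This guitar sounds great through a tube amplifier."))
# -> {'music': 0.8, 'shopping': 0.05, 'electronics': 0.15}
```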

Context sensitive ranking

My Take:

Pretty straightforward: different results for different searchers. You can point to this when a client or leader asks why you don't rank #1 for something, even though it's possible you actually are for some searchers.


This is a computer method that helps a search engine understand what you’re interested in when you use it. Here’s how it works:

  1. When you use a search engine, it looks at the web pages you’ve been visiting and figures out what kind of things you’re interested in based on that. For example, if you’ve been looking at news websites, the search engine might think you’re interested in news. If you’ve been looking at shopping websites, it might think you’re interested in shopping.
  2. The search engine also looks at what other people have been doing. It looks at their past search sessions and sees what web pages they’ve been visiting. It pays special attention to “hub pages,” which are pages that are related to a specific topic. For example, a hub page for news might be a page that collects news articles from different sources.
  3. Based on all this information, the search engine determines a “context” for your current search session. The context is like a category that describes what you’re interested in. It could be news, shopping, travel, or education, depending on what you’ve been looking at.
  4. Once the search engine knows your context, it can provide you with search results that are more relevant to what you’re interested in. It uses a “contextual click model” to rank the search results. This model takes into account which web pages people have clicked on in the past when they were in a similar context. It uses this information to figure out which search results are most likely to be helpful to you.

So, basically, this method helps the search engine understand your interests based on the web pages you’ve been visiting, and then it uses that information to give you better search results that match your interests.
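
To make that “contextual click model” idea concrete, here's a toy Python sketch. The context labels, URLs, and click counts are invented; this is my illustration of the ranking concept, not Google's actual implementation.

```python
# (context, url) -> (clicks, impressions) from past search sessions.
# All data here is made up for illustration.
HISTORICAL_CLICKS = {
    ("news", "times.example.com"): (900, 1000),
    ("news", "shop.example.com"): (50, 1000),
    ("shopping", "times.example.com"): (100, 1000),
    ("shopping", "shop.example.com"): (800, 1000),
}

def rank_results(urls, session_context):
    """Order candidate URLs by their click-through rate among past
    users whose sessions shared this context (point 4 above)."""
    def ctr(url):
        clicks, impressions = HISTORICAL_CLICKS.get((session_context, url), (0, 1))
        return clicks / impressions
    return sorted(urls, key=ctr, reverse=True)

candidates = ["times.example.com", "shop.example.com"]
print(rank_results(candidates, "news"))      # news site first
print(rank_results(candidates, "shopping"))  # shopping site first
```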

System and method for clustering content according to similarity

My Take:

Use logic and reasoning when you are creating your website and content structure. Articles or site pages scattered all over the place, unorganized, aren't helping anyone, especially the search engines. It is our job to make it easier for search engines to understand our websites, and clustering similar content helps with that.


Imagine you have a lot of different pieces of content, like documents or web pages, and you want to organize them based on how similar they are to each other. We want to create groups of similar content. To do that, we use a method called clustering.

  1. First, we look at the content and assign some tags or labels to each piece of content based on what it’s about. For example, if we have a document about dogs, we might assign it a tag like “animals” or “pets.” We do this for all the pieces of content.
  2. Next, we build a model that looks at all the tags and tries to figure out what topics the content is about. It does this by looking for patterns and connections between the tags. This model gives us a probability distribution, which tells us how likely each piece of content belongs to different topics.
  3. Now, we start clustering the content. We use a special algorithm that looks at the probability distribution and groups together pieces of content that are similar. It looks at how close they are to each other based on the topics they belong to.
  4. To help with the clustering, we create something called a distance matrix. This matrix tells us how different or similar each pair of content items is based on their probability distribution. Using this matrix, we can say whether two items of content should be in the same group or not.
  5. Once we have the initial clusters, we make them even better. We analyze the patterns within the clusters to see if there are any relationships or similarities that we missed. This helps us make the clusters more accurate and useful.
  6. In the end, we have a set of clusters where similar pieces of content are grouped together. We can then use this information to do different things, like searching for specific content, organizing it, or analyzing it further.

So, the method described in this text is about using tags and a topic model to cluster similar pieces of content together, based on their probability distribution. It involves creating clusters, analyzing them, and improving them for better organization and understanding.
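
If you want to play with the clustering idea yourself, here's a rough Python sketch using off-the-shelf pieces: scikit-learn's LDA for the topic distributions, a Jensen-Shannon distance matrix, and agglomerative clustering. The patent doesn't name these exact algorithms; this is just one plausible way to wire the steps together (and it assumes a recent scikit-learn, where AgglomerativeClustering takes metric="precomputed").

```python
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.cluster import AgglomerativeClustering

docs = [
    "dogs and cats make great pets",
    "how to train your puppy at home",
    "stock markets fell sharply this week",
    "investors watch interest rates closely",
]

# Step 2: topic model -- each document gets a probability distribution over topics.
counts = CountVectorizer().fit_transform(docs)
doc_topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# Step 4: distance matrix -- how similar each pair of documents is,
# measured between their topic distributions.
n = len(docs)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        dist[i, j] = jensenshannon(doc_topics[i], doc_topics[j])

# Step 3: cluster on the precomputed distance matrix.
labels = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average"
).fit_predict(dist)
print(labels)  # e.g. [0 0 1 1] -- pet docs in one cluster, finance in the other
```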

Document scoring based on document inception date

My Take:

Timeless – Evergreen – Content

So many times people hit publish and let content sit there forever. You definitely don't want to fix what isn't broken, but there should always be some room for improvement. What's really interesting here is that link growth over time is taken into consideration, so don't go buying a bunch of backlinks right after publishing. Improvement over time is the name of the game with this patent and with SEO as a whole.


Imagine you have a bunch of documents and you want to figure out which ones are more important or relevant. To do that, we can follow these steps:

  1. We start by looking at each document and counting how many links go to or from that document. These links can be like references or connections to other documents.
  2. Next, we check how much time has passed since the document was first created. This is called the “inception date” and it tells us how old the document is.
  3. Now, we calculate the average rate at which links are created for each document. To do this, we take the number of links and divide it by the time that has passed since the document was created.
  4. Based on this average rate, we assign a score to each document. A higher score means the document is more important or relevant.
  5. Finally, we compare the documents and rank them based on their scores. The document with the highest score will be ranked at the top of the list, and the others will follow in descending order.

So, this method helps us determine the importance of a document by considering the number of links it has, how long it has been around, and how quickly it gathers new links. Then we use these scores to create an ordered list of documents based on their relevance.
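
The scoring idea in points 3-4 really is a one-line formula: links divided by age. Here's a toy Python version with made-up documents and dates.

```python
from datetime import datetime

# Invented example documents: same total links, very different ages.
docs = [
    {"url": "a.example.com", "links": 300, "inception": datetime(2018, 1, 1)},
    {"url": "b.example.com", "links": 300, "inception": datetime(2023, 1, 1)},
]

def score(doc, now=datetime(2024, 1, 1)):
    """Average link-creation rate: links accumulated divided by days
    since inception. A newer doc earning links faster scores higher."""
    age_days = max((now - doc["inception"]).days, 1)
    return doc["links"] / age_days

for doc in sorted(docs, key=score, reverse=True):
    print(doc["url"], round(score(doc), 3))
# b.example.com 0.822  <- same link count, earned in a sixth of the time
# a.example.com 0.137
```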

Modifying search result ranking based on implicit user feedback

My Take:

This one is a hot topic (I think). Engagement after the click from organic matters. Think bounce rate and time on page.

High bounce rate + low time on page is not a winning combo. Create engaging content that satisfies the user intent; this should keep them on the site. It also seems like there are some small intricacies here with certain terms (“Is It Christmas Yet?”) that probably see low time on site and a high bounce/exit rate.


  1. To measure relevance, the search engine looks at two numbers: a first number and a second number.
  2. The first number is based on how long people spend looking at the document after they click on it from the search results. If they spend more time on it, it suggests that the document is useful and relevant.
  3. The second number is simply the total number of clicks on the document in the search results. It tells us how many times people clicked on it.
  4. The search engine calculates the measure of relevance by dividing the first number by the second number.
  5. This measure of relevance is then given to a ranking engine, which uses it to decide how high or low a document should be ranked in the search results for future searches on the same topic.
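
Here's a quick toy version of that first-number-over-second-number math in Python. The dwell-time thresholds and weights are completely invented; the patent only says that longer views suggest a more relevant document.

```python
def dwell_weight(seconds):
    """Longer visits count for more; quick bounces count for little.
    These cutoffs are made up for illustration."""
    if seconds < 10:
        return 0.1  # likely a bounce
    if seconds < 60:
        return 0.5  # a "medium" click
    return 1.0      # a long, satisfied click

def relevance(dwell_times_seconds):
    """Dwell-weighted sum (the first number) divided by the raw click
    count (the second number), per points 2-4 above."""
    clicks = len(dwell_times_seconds)
    if clicks == 0:
        return 0.0
    return sum(dwell_weight(t) for t in dwell_times_seconds) / clicks

print(relevance([5, 8, 4, 6]))        # mostly bounces -> 0.1
print(relevance([120, 95, 45, 300]))  # long engaged visits -> 0.875
```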

Natural language search results for intent queries

My Take:

This one really seems to move past the “just add keywords and rank” understanding of SEO. Google is getting genuinely intelligent here with its processing and understanding of more natural ways of searching. Think featured snippets here (and maybe Bard I guess, but that's a different mess).

It's often taking a combo of the title/header plus a snippet of content that may answer a query phrased as a more natural question. E-A-T is a big one here too, but I don't have enough time to express its overall importance, so just read Lily Ray's website for that :).


In summary, the system searches for information in natural language, analyzes documents, assigns categories, and provides relevant search results based on the queries. It uses templates to generate heading-text pairs and ranks them for better results.

  1. The computer system has a processor and memory. The system can search for information in natural language based on specific queries.
  2. It analyzes a document from a trusted source and extracts headings and the text underneath each heading.
  3. It assigns a topic and category to each heading-text pair and stores them in a database.
  4. When someone asks a question related to a specific topic and category, the system retrieves the corresponding heading-text pair and provides it as a search result.
  5. The text portion of a heading-text pair can be a paragraph or a list of items.
  6. The system can retrieve multiple heading-text pairs based on the topic and category and rank them to provide the most relevant results.
  7. It can also generate snippet-based search results by searching through documents and provide those alongside the natural language search results.
  8. The system uses templates to generate the heading-text pairs based on predefined patterns.
  9. The question category is determined by the template used.
  10. The system can determine the topic and question category for a query, retrieve the relevant heading-text pairs, and provide them as search results.
  11. It can handle different intent templates with non-variable and variable parts.
  12. The system generates intent templates based on frequently occurring patterns in questions from trusted sources.
  13. It selects the most common templates and stores them in memory.
  14. It can update the templates based on new questions from search records.
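
Here's a small Python sketch of the intent-template flow from the list above: match a query against a template, pull out the topic, and look up a stored heading-text pair. The templates, topics, and pairs are all invented placeholders.

```python
import re

# Invented intent templates: a non-variable part plus a variable slot
# (points 11-12 above), each tied to a question category.
INTENT_TEMPLATES = [
    (re.compile(r"how do i (.+)", re.I), "how-to"),
    (re.compile(r"what is (.+)", re.I), "definition"),
]

# Stand-in for the database of heading-text pairs extracted from
# trusted documents, keyed by (topic, question category).
HEADING_TEXT_PAIRS = {
    ("change a tire", "how-to"): (
        "How to Change a Tire",
        "Loosen the lug nuts, jack up the car, swap the wheel...",
    ),
    ("a lug nut", "definition"): (
        "Lug Nut",
        "A lug nut is a fastener used to secure a wheel to its hub.",
    ),
}

def answer(query):
    """Determine topic and question category for the query, then
    retrieve the matching heading-text pair (points 9-10 above)."""
    for pattern, category in INTENT_TEMPLATES:
        match = pattern.match(query.strip().rstrip("?"))
        if match:
            topic = match.group(1).lower()
            return HEADING_TEXT_PAIRS.get((topic, category))
    return None

print(answer("How do I change a tire?"))
# -> the "How to Change a Tire" heading-text pair
```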

Ranking documents based on user behavior and/or feature data

My Take:

Natural Internal Links!

Links are how search engines crawl the internet. They need them; it's what they crave.

Do it naturally and logically and you will be rewarded!

Links, it's what search engines crave

  1. There is a system that analyzes information about links on the internet and how users interact with them.
  2. The system stores data about different links, including the documents they come from (source documents) and the documents they lead to (target documents).
  3. It also collects information about how users navigate through these links and which ones they click on.
  4. Using this data, the system creates a model that predicts the likelihood of a user clicking on a particular link based on its features and user behavior.
  5. The model generates rules based on the features of links that users click on and those that they don’t click on.
  6. The system then applies this model to new links to predict the probability of users clicking on them.
  7. Based on these predictions, the system assigns a rank to each target document (the page the link leads to).
  8. The target documents are then ordered based on their rank, determining their position relative to other documents.

Additional points:

  1. The system can obtain data about user behavior from the devices users use to access the internet.
  2. The user behavior data can correspond to a single user or a group of users.
  3. The features of source documents include information like the address of the document, the website it belongs to, the number of links it has, and the presence of specific words.
  4. The features of links include attributes like font size, position in the document, and the words used in the link.
  5. The features of target documents include information about the document’s address, website, and its relationship to the source document.
  6. The system generates a feature vector for each link, which is a numerical representation of its features.
  7. The rules for the model can include general rules for a group of documents and specific rules for a particular document.
  8. The system periodically updates the model’s rules based on changes in user behavior data.
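
To picture the feature-vector-and-model idea, here's a toy Python sketch using a plain logistic regression from scikit-learn. The link features and training data are invented, and the patent describes a rule-based model rather than this exact estimator; the point is just features in, click probability out, rank targets by probability.

```python
from sklearn.linear_model import LogisticRegression

# Invented feature vectors per link (additional point 6):
# [font_size, position_from_top (0 = top of page), words_in_anchor]
X_train = [
    [18, 0.1, 4],  # big link near the top with a descriptive anchor
    [10, 0.9, 1],  # tiny footer link
    [16, 0.2, 3],
    [9, 0.8, 1],
]
y_train = [1, 0, 1, 0]  # 1 = users clicked it, 0 = they didn't

model = LogisticRegression().fit(X_train, y_train)

# Predict click probability for new links, then rank their target
# documents by it (points 6-8 in the first list).
new_links = {
    "docs.example.com": [17, 0.15, 4],
    "legal.example.com": [9, 0.95, 1],
}
ranked = sorted(
    new_links,
    key=lambda url: model.predict_proba([new_links[url]])[0][1],
    reverse=True,
)
print(ranked)  # the prominent, descriptive link's target ranks first
```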

Identifying local experts for local search

My Take:

A very interesting one that a lot of local businesses need to read.

One of the main reasons I am not a fan of doing local SEO is all the ways people try to scam it: fake listings with PO boxes, spammed reviews, and pages of nonsense content for every city they service.

Like everything, put in the effort to be natural and do quality work that positively represents your business.


To summarize, this process uses reviews from experts in a particular area and business category to rank and provide search results to users based on their queries.

  1. People write reviews about businesses in a specific area.
  2. We want to find experts among these reviewers who know a lot about a specific category of businesses in that area.
  3. When a user asks a question or searches for something, we look at the area and the type of business they’re interested in.
  4. We use the reviews written by experts in that area and category to rank the search results.
  5. The ranked search results are then sent back to the user, so they can see the most relevant information.
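
And a tiny Python sketch of the expert-weighting idea, with made-up reviewers, ratings, and a made-up threshold for what counts as an "expert" (the patent doesn't define it this simply).

```python
from collections import defaultdict

# Invented review data: (reviewer, area, category, business, rating).
REVIEWS = [
    ("ana", "austin", "bbq", "SmokeHouse", 5),
    ("ana", "austin", "bbq", "PitStop", 3),
    ("ana", "austin", "bbq", "BrisketBarn", 4),
    ("bob", "austin", "bbq", "PitStop", 5),  # bob's only bbq review
]

EXPERT_MIN = 3  # made-up bar: 3+ reviews in the area/category = expert

def expert_ranked(area, category):
    """Find expert reviewers for this area and category, then rank
    businesses by their average rating from those experts only."""
    review_counts = defaultdict(int)
    for reviewer, a, c, _, _ in REVIEWS:
        if (a, c) == (area, category):
            review_counts[reviewer] += 1
    experts = {r for r, n in review_counts.items() if n >= EXPERT_MIN}

    ratings = defaultdict(list)
    for reviewer, a, c, biz, rating in REVIEWS:
        if (a, c) == (area, category) and reviewer in experts:
            ratings[biz].append(rating)
    return sorted(ratings, key=lambda b: sum(ratings[b]) / len(ratings[b]), reverse=True)

print(expert_ranked("austin", "bbq"))
# -> ['SmokeHouse', 'BrisketBarn', 'PitStop'] (bob's 5-star doesn't move PitStop)
```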
