Want to know about Bing and search rankings? Here you go!

Few people talk about SEO for Bing because there is not much information about it. Interestingly, Bing has often adopted advanced technologies and techniques before Google did. Fabrice Canel, Bing’s Principal Program Manager, recently shared a great deal of information with Jason Barnard of Kalicube, not only about how Bing works but also about how search engines work in general.

Standards for indexing content

Fabrice is in charge of the Bingbot crawler, URL discovery and selection, document processing, and Bing Webmaster Tools. He is a good person to turn to for information about search engines, especially crawling and page selection.

Fabrice explains the crawling process here, and what I think is important is what he says about how pages are chosen for the Bing index.

Many people feel that every page of their site deserves a chance to rank. But Google and Bing do not index everything.

They leave certain types of pages behind.

The first feature Bing looks for in a page it indexes is usefulness.

Screenshot by Jason Barnard

Fabrice Canel explained:

“We are clearly customer-oriented to satisfy the end user, but we have to pick and choose.

We can’t crawl everything on the internet. There is an unlimited number of URLs.

You have calendar pages. You can go to the next day forever.

So it’s really about figuring out what will most satisfy the Microsoft Bing customer.”
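To make the calendar example concrete, here is a minimal sketch of one way a crawler might cap an unbounded URL space. The pattern rule and the threshold are illustrative assumptions, not anything Bing has described.

```python
import re
from collections import defaultdict

# Illustrative only: collapse each URL into a rough pattern and cap how
# many URLs per pattern get enqueued, so "next day forever" calendar
# pages cannot consume the crawl budget.
PER_PATTERN_CAP = 50  # made-up threshold, not a real Bing value

def url_pattern(url: str) -> str:
    """Replace digit runs so /calendar/2021/11/03 and /calendar/2021/11/04
    collapse into the same pattern."""
    return re.sub(r"\d+", "{n}", url)

class Frontier:
    def __init__(self):
        self.queue = []
        self.seen_per_pattern = defaultdict(int)

    def enqueue(self, url: str) -> bool:
        pattern = url_pattern(url)
        if self.seen_per_pattern[pattern] >= PER_PATTERN_CAP:
            return False  # likely a crawl trap; skip it
        self.seen_per_pattern[pattern] += 1
        self.queue.append(url)
        return True

frontier = Frontier()
print(frontier.enqueue("https://example.com/calendar/2021/11/03"))  # True
```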

Bing and key domains

Fabrice next talks about the concept of key domains and how Bing is guided by key pages on the internet that lead it to quality content.

This sounds like an algorithm built on a seed set of trusted sites, where the further away a site is from those key websites, the more likely it is to be spam or useless (link distance ranking algorithms).
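To illustrate the link distance idea (my reading, not Fabrice’s words), here is a toy sketch that measures how many link hops separate each site from a seed set of trusted sites. The graph and the domain names are made up for the example.

```python
from collections import deque

# Toy link graph: each site maps to the sites it links out to.
link_graph = {
    "trusted-news.example": ["blog-a.example", "blog-b.example"],
    "blog-a.example": ["blog-c.example"],
    "blog-b.example": [],
    "blog-c.example": ["spam-farm.example"],
    "spam-farm.example": [],
}
seed_set = {"trusted-news.example"}

def link_distances(graph, seeds):
    """BFS from all seeds at once; a shorter distance stands in for
    a higher likelihood of quality content."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        site = queue.popleft()
        for neighbor in graph.get(site, []):
            if neighbor not in dist:
                dist[neighbor] = dist[site] + 1
                queue.append(neighbor)
    return dist

print(link_distances(link_graph, seed_set))
# {'trusted-news.example': 0, 'blog-a.example': 1, 'blog-b.example': 1,
#  'blog-c.example': 2, 'spam-farm.example': 3}
```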

I don’t want to put words in Fabrice’s mouth; the above is just my observation.

I’ll let Fabrice speak for himself.

Jason asked:

“Would you say that most of the content on the web is not useful or is it an exaggeration?”

Fabrice replied:

“I think that’s a little exaggerated.

We are guided by the key pages that are important on the internet, and we follow the links to understand what comes next.

And if we really focus on those key domains (important pages), they lead us to quality content.

So our view of the internet is not to go deep and crawl useless content forever.

Clearly, the index needs to be kept fresh and comprehensive, including the most relevant content on the web.”

What makes Bing crawl deep into a website?

Jason next asks about which websites get crawled deeply. Obviously, it is important to get a search engine to index all the pages of a site.

Fabrice explains this process.

Jason asked:

“Okay. And then I think that’s the key. You prefer to go wide rather than deep.

So if I have a site at the top of the pile, would you focus more on me instead of trying to find new things you don’t already know about?”

Fabrice provided an important answer, reflecting the complex nature of what is chosen to crawl and index:

“It depends. If you have a site that is specialized and covers an interesting topic that customers care about, we can definitely go in-depth.”

The machines choose what to crawl.

We sometimes anthropomorphize search engines, saying things like, “The search engine doesn’t like my site.”

But in reality, there is nothing in the algorithm about liking or trusting.

Machines don’t like.

Machines don’t trust.

Search engines are machines that are primarily programmed with goals.

Fabrice explains how Bing chooses to crawl in depth:

“I’m not choosing where we go in depth or not in depth. Nor is my team.

It’s the machine.

Machine learning chooses to go deep or not based on what Bing considers important to the customer.”

That part about what is important to the customer is worth noting. The search engine, in this case Bing, is designed to identify pages that are important to users.

When writing an article or creating an ecommerce page, it can be helpful to look at the page and ask, “How can I make this page important to the people who come to it?”

Jason followed up with a question to tease out more about how pages that are important to site visitors are chosen.

Jason asked:

“Are you just giving the machine the goals you want it to achieve?”

Fabrice replied:

“Absolutely. Yes.

The key input we give to the machine learning algorithms is satisfying Bing users.

And so we look at different ways to satisfy Bing users.

Again, if you query for Facebook, you want the Facebook link in the top position. You don’t want some random blog talking about Facebook.”

Search crawling is broken and needs an update

Jason asks Fabrice why IndexNow is helpful.

Fabrice responds by explaining how crawling works today and why this way of discovering content to index, which is almost thirty years old, needs to be updated.

The old and current way of crawling is to go to the website and “pull” the website data, even if the web pages are the same and have not changed.

Search engines have to revisit the entire indexed web to check whether any new pages, phrases, or links have been added.
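As a rough illustration of that pull model, the sketch below revisits a URL with a conditional GET. The URL is a placeholder, and a real crawler tracks validators like ETag and Last-Modified for billions of URLs; the point is that the crawler must keep coming back just to learn that nothing changed.

```python
import requests  # third-party: pip install requests

def revisit(url, etag=None):
    """Re-fetch a page; an unchanged page can answer 304 Not Modified."""
    headers = {"If-None-Match": etag} if etag else {}
    response = requests.get(url, headers=headers, timeout=10)
    if response.status_code == 304:
        return etag, None  # page unchanged; the visit was wasted work
    return response.headers.get("ETag"), response.text

etag, body = revisit("https://example.com/")        # first pull
etag, body = revisit("https://example.com/", etag)  # later re-check, may be 304
```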

Fabrice claims that the way search engines crawl websites needs to change because there is a better way to go about it.

He explained the basic problem:

“So the crawling model today is really about learning, trying to figure out when things are changing.

When will Jason post again? Maybe we can model it. We can try to find out. But we don’t really know.

So what we’re doing is pulling and pulling and crawling to see if anything has changed.

This is the model of crawling today. We can learn from the links, but at the end of the day, we go back to the homepage to find out. So this model needs to change.”

Fabrice described the solution:

“We need input from the website owner. Jason can tell us through a simple API that the content of his website has changed. That notification about the change helps us discover it, send the crawlers, and get the latest content.

It’s an industry-wide transformation away from crawling and crawling and crawling just to discover whether something has changed.”

The current state of search

Google refers to the people who use its site as users. Bing frames the people who search as customers, and with that framing come all the little aphorisms implicit in a customer-first point of view: the customer is always right; give the customer what they want.

Steve Jobs had something to say about customer-led innovation that is relevant not only to Bing’s IndexNow but also to publishers:

“You can’t just ask customers what they want and then try to give that to them. By the time you get it built, they’ll want something new.”

Is push the future of search?

Bing has introduced a new push technology called IndexNow. It is a way for publishers to notify search engines so that they come and crawl new or updated web pages. It saves hosting and data center resources in the form of electricity and bandwidth. It also gives publishers the assurance that search engines will arrive and retrieve their fresh content faster than with the current crawl method.
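For publishers who want to try it, a submission follows the public IndexNow protocol (indexnow.org): POST a JSON payload listing the changed URLs, along with a key that you also host as a text file on your site so the search engine can verify ownership. The host, key, and URLs below are placeholders.

```python
import requests  # third-party: pip install requests

# Placeholder host, key, and URLs; substitute your own site's values.
payload = {
    "host": "www.example.com",
    "key": "your-indexnow-key",
    "keyLocation": "https://www.example.com/your-indexnow-key.txt",
    "urlList": [
        "https://www.example.com/updated-article",
        "https://www.example.com/new-product-page",
    ],
}

response = requests.post(
    "https://api.indexnow.org/indexnow",  # shared endpoint; participating engines share submissions
    json=payload,
    headers={"Content-Type": "application/json; charset=utf-8"},
    timeout=10,
)
print(response.status_code)  # 200 or 202 means the submission was accepted
```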

Reference

This is just one part of the discussion.

Watch the full interview with Fabrice Canel