Contrary to popular opinion,
search engine optimization (SEO) is not the practice of optimizing a website
for the commercial web search engines. In reality, SEO is optimizing a
website for the people who use search engines. Both information architects and SEO
professionals organize and label website content so that it is easy to use
and easy to find. But SEO professionals take the findability process one
step further. Not only do they ensure that search engines have easy access
to desired content; they also ensure that search engines have limited or no
access to undesirable content, undesirable from a searcher's point of view.
Unfortunately, many librarians and other information scientists do not
realize that their findability solutions pose major problems for search
engines – both web search engines and site search engines. Delivering
duplicate content to search engines can significantly decrease a document’s
search engine visibility. Additionally, duplicate content can often
decrease user confidence and negatively affect brand perception. What is
duplicate content from a search engine’s point of view? What kinds of
content and navigation systems deliver duplicate content to search engines?
How can we limit or prevent access to search engines and still deliver web
documents to those who are not searching?
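The announcement does not name specific techniques, but two widely used mechanisms for keeping duplicate or low-value pages out of search engines while still serving them to site visitors are a robots.txt disallow rule and a canonical link element. The paths and URL below are hypothetical, shown purely as an illustration:

```
# robots.txt (served at the site root)
# Asks compliant crawlers to skip printer-friendly duplicates.
# "/print/" is an example path, not from the announcement.
User-agent: *
Disallow: /print/

<!-- In the <head> of a duplicate page, a canonical link points
     search engines at the preferred version of the content.
     The URL is a placeholder. -->
<link rel="canonical" href="https://www.example.com/article">
```

Note that robots.txt blocks crawling by cooperating robots, while rel="canonical" consolidates duplicate URLs that are still crawled; the webinar's "known duplicate content solutions" bullet presumably covers trade-offs like these.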
In this webinar, SEO pioneer and veteran Shari Thurow will show you:
• How search engines “see” duplicate content (it’s not what you think)
• Types of duplicate content (personalization, tagged pages, faceted
classification, syndication, etc.)
• Three types of duplicate content filters
• Known duplicate content solutions
• Common myths & misconceptions