
Monday 4 June 2012

Relevancy factors that determine ranking


These are the most common factors by which search engines gauge the value of a link. Search engines have become more and more dependent on metrics about an entire domain rather than just its individual pages. That is why you will see new pages, or pages with very few links, ranking highly: simply because they sit on an important, trusted, well-linked-to domain.

1st factor. Internal vs External links



Search engines are likely to place more value on link XA than on links BA and CA, because the source page is on an external domain.
When search engines first began valuing links as a way to determine the popularity, importance and relevance of a document, they adopted the classic citation-based rule: what others say about you is far more important and trustworthy than what you say about yourself. Thus, while internal links (links that point from one page on your site to another) do carry some weight, links from external sites matter far more.

This doesn't mean it isn't important to have a good internal link structure, or to do all you can with your internal links. It just means that a site's performance depends heavily on how other sites on the web have cited it.



2nd factor. Anchor Text

The value of anchor text in search results:



Anchor text is a critical metric passed by links. Early anecdotal (not necessarily true or reliable) and test results suggest that link AB will actually pass more relevancy value than link XY for the keyword phrase "e-commerce organization".
As is obvious to anyone in the SEO business, anchor text is one of the biggest factors in overall ranking. An exact-match anchor text appears to be more beneficial than simply including the target keyword somewhere in the anchor phrase.



3rd factor. PageRank (PR)

Whether it is called StaticRank (Microsoft), WebRank (Yahoo!), PageRank (Google) or mozRank (Linkscape), some form of iterative, Markov-chain-based link analysis algorithm is part of every engine's ranking system. PageRank and the others use the analogy that links are votes, and that pages which receive more votes carry more influence with the votes they cast.

Let's start with some basic concepts of PageRank, which can be listed as follows:
i)   Every URL is assigned a quantity of PageRank.
ii)  If there are n links on a page, each link passes that page's PageRank divided by n. Thus, the more links, the lower the amount of PR each one flows.
iii) An iterative calculation that flows juice through the web's entire link graph, dozens of times, is used to calculate each URL's ranking score.
iv) Representations like Google's PageRank and SEOmoz's mozRank on a 0-10 scale are logarithmic (thus a PR or mozRank of 4 carries roughly 8-10 times the link importance of a PR/mR of 3).
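The iterative calculation described above can be sketched in a few lines of Python. This is a minimal illustration on a made-up three-page graph, not any engine's actual formula; the damping factor of 0.85 and the fixed 20 iterations are conventional textbook choices.

```python
DAMPING = 0.85     # conventional damping factor from the original PageRank paper
ITERATIONS = 20    # "dozens of times" -- plenty to converge on a tiny graph

def pagerank(links):
    """links: dict mapping each URL to the list of URLs it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # every URL starts with some PageRank
    for _ in range(ITERATIONS):
        new_rank = {p: (1.0 - DAMPING) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = rank[page] / len(outlinks)  # PR divided by n outgoing links
            for target in outlinks:
                new_rank[target] += DAMPING * share
        rank = new_rank
    return rank

# Hypothetical three-page graph: A and C both "vote" for B.
scores = pagerank({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
print(max(scores, key=scores.get))  # B earns the most votes
```

Because B receives both A's and C's full share while A and C each receive only half of B's, B ends up with the highest score, which is exactly the "links are votes" intuition.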



PageRank can be calculated on the page-level link graph, assigning PageRank scores to individual URLs, but it can also be applied to the domain-level graph, which is how metrics like Domain mozRank (DmR) are derived. By counting only links between domains, DmR can be used to estimate the importance of an entire site.

4th factor. Trust Rank (TR)

The basics of TrustRank are described in the Stanford paper "Combating Web Spam with TrustRank" (ilpubs.stanford.edu:8090/770). The basic tenet of TR is that the web's good, trustworthy pages tend to be linked together, and that spam is much more pervasive (spreading widely) outside this center. Thus, by calculating an iterative, PageRank-like metric that only flows juice from trusted seed sources, a metric like TR can be used to predict whether a site is likely to be high quality or spam.

While the engines don't expose any data points around this particular metric, it is likely that some form of "distance from trusted seeds" logic is applied in their algorithms. Another interesting point about TR: measuring who links to known spam sites is likely also part of the engines' metrics. As with PageRank, TrustRank can be calculated on both the page-level and domain-level link graphs; Linkscape uses this intuition to build mozTrust (mT) and Domain mozTrust (DmT). So the key point is: get links from highly trusted sites, and don't link out to potential spam. You can check TR on www.seomastering.com.
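The "juice flows only from trusted seeds" idea can be sketched as a PageRank variant whose teleportation mass is restricted to a trusted seed set, roughly in the spirit of the Stanford paper. The graph, the seed choice and the constants below are hypothetical illustrations, not engine internals.

```python
DAMPING = 0.85
ITERATIONS = 20

def trustrank(links, seeds):
    """PageRank-like iteration, except the baseline (teleportation) mass
    is given only to trusted seed pages instead of to every URL."""
    pages = list(links)
    trust = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    for _ in range(ITERATIONS):
        new_trust = {p: ((1.0 - DAMPING) / len(seeds) if p in seeds else 0.0)
                     for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue
            share = trust[page] / len(outlinks)
            for target in outlinks:
                new_trust[target] += DAMPING * share
        trust = new_trust
    return trust

# Hypothetical graph: the trusted neighborhood never links to "spam",
# so no trust ever reaches it, even though spam links into the center.
graph = {"good": ["shop"], "shop": ["good"], "spam": ["shop"]}
scores = trustrank(graph, seeds={"good"})
print(scores["spam"])  # 0.0 -- spam sits outside the trusted center
```

Note that trust only moves along outgoing links from trusted pages; linking *to* the trusted center, as "spam" does here, earns a site nothing.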



5th factor. Domain Authority.

Though the phrase "domain authority" is often discussed in the SEO world, a formal and universal definition doesn't exist. Most practitioners use it to describe a combination of popularity, importance and trustworthiness, calculated by the search engines and based largely on link data, though some also feel the engines may use the age of the site here as well. Search engines likely use scores about the authority of a domain when counting links. Thus, despite the fuzzy language, it's worth mentioning as a data point: the domain you earn links from is potentially just as important as the individual metrics of the page passing the link.


6th factor. Diversity of sources.

In analyses of correlation data, no single metric has a stronger positive correlation with high rankings than the number of root linking domains. This appears to be both a hard metric to manipulate for spam and a metric that indicates true, broad popularity and importance.
Although correlation is not causation, the experience of many SEOs, along with empirical data, suggests that the diversity of domains linking to your site or pages has a strong positive effect on rankings. By this logic it follows that earning a link from a site that has already linked to you in the past is not as valuable as getting a link from an entirely new domain. This also suggests that links from sites and pages which themselves have diverse link profiles may be more trusted and more valuable than those from low-diversity sources.
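The "root linking domains" count can be illustrated by deduplicating a backlink list by registered domain. The URL list below is made up, and the two-label domain extraction is a deliberate simplification; real link tools use the Public Suffix List to handle TLDs like .co.uk correctly.

```python
from urllib.parse import urlparse

def root_domain(url):
    """Naive root-domain extraction: keep the last two host labels.
    (Real crawlers consult the Public Suffix List instead.)"""
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-2:])

# Hypothetical backlink list: four links, but only two root domains.
backlinks = [
    "http://blog.example.com/post-1",
    "http://www.example.com/about",
    "http://news.sample.org/story",
    "http://sample.org/archive",
]
unique_domains = {root_domain(url) for url in backlinks}
print(len(unique_domains))  # 2 -- diversity counts domains, not raw links
```

Four raw links collapse to two root linking domains, which is why a second link from a site that already links to you moves this metric not at all.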

7th factor. Uniqueness of source+target

The engines have a number of ways to gauge and predict ownership of, and relationships between, websites. These can include...
i)   A large number of shared links.
ii)  Domain registration data.
iii) A shared hosting IP address.
iv) Relationship information.

If the engines determine that a pre-existing relationship of some kind could inhibit the editorial quality of a link passing between two sites, they may choose to discount or even ignore it. Anecdotal evidence that links shared between networks of websites pass little value is one point many in the organic search field cite in support of this.


8th factor. Location of the link on the page.



Note: Search engines may analyse a page's different sections and assign weight to content or links based on their location.

Microsoft was the first engine to reveal public data about its plans to do "block-level analysis". Since then, many SEOs have reported observing the impact of this kind of analysis from Google and Yahoo! as well. The internal links in the footer of a web page may not provide the same benefit that those same links would if placed in the top header navigation. Others have reported that one way the engines have been fighting pervasive link advertising is by diminishing the value that external links carry from the sidebar or footer of a web page.
SEOs tend to agree that links from within the content are the most valuable, both for the value the link passes for ranking and for the incidental click-throughs as well.



9th factor. Topical Relevance

There are numerous ways the engines can run topical analysis to determine whether two pages cover similar subject matter. Years ago, Google Labs featured an automatic classification tool that could predict, based on a URL, the category and sub-category of virtually any type of content (from medical to real estate, marketing, sports and dozens more). It's possible that engines may use this kind of automatic topical classification system to identify neighborhoods around particular topics, and count links more or less based on the behavior they see as accretive (contributing to growth) to the quality of their ranking results.




10th factor. Content and context assessment.

Though topical relevance can provide useful information to the engines about a linking relationship, possibly more useful in determining the value a link should pass from source to target is content/context analysis, in which the engines attempt to discern (recognize), in a machine-parseable way, why a link exists on a page.
When links are given editorially, certain patterns arise: they tend to be embedded in the content, link to relevant sources, and follow accepted norms for HTML structure, word use, phrasing, language, etc. Through detailed pattern matching, and potentially machine learning on large data sets, the engines may be able to distinguish legitimate, editorially given links that are intended as endorsements from those that were placed through hacking, those that result from content licensing (which carry little or no weight), and those that are paid placements.

11th factor. Geographical Location.

The geographic relevance of a link is highly dependent on the perceived location of its host. The engines have been getting increasingly sophisticated about employing data points to pinpoint the geographic relevance of a root domain, subdomain or subfolder. These can include...
i)   The host's IP address location.
ii)  The country-code TLD (top-level domain)
      {i.e. .de, .uk, .cu, .us, .pk, .in etc.}
iii) The language of the content.
iv) Registration with local search systems and regional directories.
v)  Association with a physical address.
vi) The geographic location of the links pointing to the site.



12th factor. Use of Rel="Nofollow"

Although in the SEO world it feels like nofollow has been around forever, it has actually only existed since January 2005, when Google announced it was adopting support for the new HTML attribute. Very simply, rel="nofollow", when attached to a link, tells the search engines not to ascribe any of the editorial endorsement, or "votes", that would otherwise boost a page's query-independent ranking metrics.

Some question exists in the SEO field as to whether, and how strictly, each individual engine follows the protocol; e.g. Google may still pass some citation quality through Wikipedia's external links despite their use of nofollow.
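As a sketch, a crawler-style check for the attribute can be written with Python's standard html.parser module; the HTML fragment fed to it below is made up for illustration.

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Collects links on a page, separating nofollowed links from
    ordinary (endorsement-passing) links."""
    def __init__(self):
        super().__init__()
        self.followed, self.nofollowed = [], []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        href = attrs.get("href")
        if not href:
            return
        rel_values = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel_values:
            self.nofollowed.append(href)
        else:
            self.followed.append(href)

# Hypothetical page fragment: one editorial link, one nofollowed ad link.
auditor = LinkAuditor()
auditor.feed('<a href="/about">About</a> '
             '<a rel="nofollow" href="http://example.com">ad</a>')
print(auditor.nofollowed)  # ['http://example.com']
```

Splitting the rel value on whitespace matters because rel can hold several tokens at once (e.g. rel="nofollow noopener").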




13th factor. Types of link.


Links can come in a variety of formats. The big three are:
i)  Straight HTML text links.
ii)  Image links.
iii) JavaScript links.


Note: Not all links are treated equally. In both anecdotal (not necessarily reliable) examples and testing, it appears that straight HTML links with standard anchor text pass the most value, followed by image links with keyword-rich alt text, and finally JavaScript links.




14th factor. Other link targets on the source page.


When a page links out externally, both the quality and the targets of the other links on that page may be taken into account by search engines when determining how much link juice should pass.
The PageRank-like algorithms of all the engines divide the amount of juice passed by any given page by the number of links on that page. In addition to this, the engines may also consider the quantity of external domains a page points to as a way to gauge the quality and value of those endorsements; e.g. a page that links to only a few external resources on a particular topic, spread out among the content, may be perceived differently than a long list of links pointing to many different external sites. One is not necessarily better or worse than the other, but it's possible that the engines pass greater endorsement through one model than the other.
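The dilution arithmetic is simple enough to show directly. The page value of 6.0 units and the 0.85 damping factor below are illustrative assumptions, not engine figures.

```python
def juice_per_link(page_rank, num_links, damping=0.85):
    """Each outgoing link passes the page's rank divided by the number
    of links on the page (scaled by a conventional damping factor)."""
    return damping * page_rank / num_links

# Hypothetical page worth 6.0 units of rank: the same total endorsement
# is split ever thinner as more links are added to the page.
print(round(juice_per_link(6.0, 3), 3))   # ~1.7 each from a short, focused list
print(round(juice_per_link(6.0, 60), 3))  # ~0.085 each from a long link list
```

Twenty times as many links means each one carries a twentieth of the endorsement, which is why a link from a focused resource list tends to be worth more than one buried in a huge blogroll.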

The engines are also likely looking at who else the linking page endorses. Having a link from a page that also links out to low-quality pages that may be considered spam is almost certainly less valuable than receiving a link from a page that endorses and links out to high-quality, reputable domains and URLs.


15th factor. Domain, page and link-specific penalties.

As nearly everyone in the SEO business is aware, search engines apply penalties to sites and pages, ranging from loss of the ability to pass link endorsement all the way up to a full ban from their indices. If a page or site has lost its ability to pass link endorsement, acquiring links from it provides no algorithmic value from the search engines. Beware that search engines sometimes announce penalties publicly but keep others hidden, so that systematic manipulators can't acquire solid data points about who gets hit and who does not.
