Monday, 24 December 2007

A Resounding Accolade to Openness!

-Akshay Ranganath

In The Economist, that hard-headed, economically oriented and prestigious magazine, Open Source has come in for a lot of praise. Read the article on predictions for 2008 with regard to the adoption of Linux-based machines by enterprises. Sample this: the Asus Eee comes at a nifty price of just $400, pre-loaded with Open Source tools and Linux as the OS.

Compare this to a $1,000 PC that comes with the Business edition of Windows Vista, plus the added costs of procuring Microsoft Office, Exchange et al., and the savings seem significant. The article concludes by saying:

"Pundits agree: neither Microsoft nor Apple can compete at the new price points being plumbed by companies looking to cut costs. With open-source software maturing fast, Linux, OpenOffice, Firefox, MySQL, Evolution, Pidgin and some 23,000 other Linux applications available for free seem more than ready to fill that gap. By some reckonings, Linux fans will soon outnumber Macintosh addicts. Linus Torvalds should be rightly proud."

Amen from FLOSS!

Reference:

Three fearless predictions - http://economist.com/daily/columns/techview/displaystory.cfm?story_id=10410912

Saturday, 8 December 2007

Malicious help on Linux - and how to avoid it!

Today, I read an intriguing article on dangerous or malicious code on Ubuntu. The article was quite a revelation in terms of how someone can ensure that your life with Linux is destroyed!

The post talks about some really malicious commands that were provided as solutions to questions on various Ubuntu forums. They were cleverly disguised as simple steps for solving daily computing issues, while the ulterior motive was simply to make life miserable for the poor, hapless soul using the forum to seek solutions. The post makes you aware of the danger of seeking online help from unknown entities. Although bad help on forums is not as bad as getting infected by a virus, it does play upon one main problem - the issue of trust.

Linux/Unix has traditionally been developed by communities and in open forums. Since no formal service agreements generally exist, help is often sought and obtained from communities. If some malicious users of those very same communities turn against other users, it does tend to become an issue. Of course, it is wrong to blame the entire community for a few malicious users, but it does point to one problem of the Open Source model. The issue being raised is: on average you get good solutions, but never trust everyone. The implied question is - how do you know whom to trust and whom not to?

As the very same post mentions, a simple solution is a wait-and-watch approach - wait for some time to see if anyone else responds with similar help or raises concerns before jumping in to make the change. Other solutions could be:
1. Wait for some posts to appear on reputed blogs - at least a person who maintains a long-running, well-visited blog is unlikely to post malicious help
2. Try and get help from the Linux Documentation Project or the web site of your distribution - be it Red Hat, Ubuntu, etc. These sites generally moderate their forums to ensure malicious content is not spread.
3. Post queries in multiple places and then see if you get similar responses. If something looks suspiciously different, raise a flag.

Of course, all of this is time consuming, but being safe is better than losing the entire hard drive! Maybe an idea can be borrowed from Microsoft's MSDN community here. If something similar were created, a really trustworthy site could be developed where users could be reasonably well assured of getting decent help.

Reference:
Ubuntu - Global Announcement: http://ubuntuforums.org/announcement.php?a=54

Why is resourcing difficult for Web Analytics?

Lately, I have had to conduct sessions on Web Analytics for multiple user groups. The types of questions that were asked and the focus of the groups opened my eyes to a problem that we'd never realized till now - identifying the right ‘fit' of people for Web Analytics.

Come to think of it: when you have a Java project, you look for a Java resource. When you have a .Net one, you get a .Net person. But when you have a Web Analytics implementation project, what do you do? The straightforward answer is - get a person who knows Web Analytics. The question is, what is meant by *knowing* Web Analytics?

Web Analytics - Congruence of Technology and Business
The main problem in narrowing down on a Web Analytics skill set is that it is inherently different from the traditional technologies or verticals. To implement an analytics solution, say for a client like Amazon.com, understanding just Amazon's business model would not be enough. On the other hand, understanding just the Web Analytics product is also not enough. Knowledge of any tool will just provide details on how the tool by itself works.

To actually understand and work with Web Analytics, we'd ideally need a candidate who:

  1. Is good with one of the Web Analytics tools
  2. Has a business tilt of mind, especially in identifying which things, if measured, add value to a client

The second point is nothing but the ability to identify the KPIs for the part of the client's business being driven by the web site. The KPIs could be anything from the number of people buying a book (Amazon) to the number of people signing up to download a new white paper (lead generation by news distribution sites). A person who can match the tools provided by the analytics vendor with the measurement requirements of the customer is what we should be looking for.

Current issues
The issue we have now at Cognizant is that we have three different sets of people, none of whom really fit the role. We have:

  1. Developers who are completely focused on the web analytics tool itself - the way it works and the parameters it provides
  2. Business Analysts who know the client's business but are unaware of what the analytics tools can and cannot do
  3. Architects who lose themselves in the intricacies of implementing the tool itself rather than providing customer solutions

(In one of the sessions, I was bombarded with questions about the implementation of the Omniture Web Analytics product by architects, when the session was all about how to use the tool!)

So, the problem that I foresee is that we have people who have a view of just one aspect of the solution, and none who can envisage the complete analytics package providing value to the customer. This is the need of the hour and something that needs to be addressed.

My Suggestion
To help overcome this issue, my suggestion would be:

  1. Train a batch of freshers and BAs on the Web Analytics products - focus initially on what the tool can do. Then, train the freshers on the implementation aspects.
  2. Try and identify people who can envisage solutions and train them on designing analytics solutions. Ideally, these are hands-on, experienced associates - people who have had a stint onsite and are in the position of tech lead for their projects. The reason for this category of people is two-fold:
    1. They have seen how customers think and understand how to think along the lines of the customer
    2. They are sufficiently hands-on and know what problem is being solved, rather than diluting the problem definition.

Well, I've not got a batch to train as of now in Bangalore, but I do hope to get this type of group that I can build up for future projects!

Sunday, 4 November 2007

How to design a website to be web analytics friendly?

-Akshay Ranganath

Last week, we received a few requests for implementing analytics code on over 1,000 pages of a client web site. It was while breaking our heads over the sheer amount of mundane work that we realized the potential for making this process a lot easier.

Background
Nowadays, almost everyone wants to have analytics code implemented on their site. Given that Google is offering it totally for free, who would not want to use it?

So, the first thing to do is start with the assumption that your site is going to have an analytics implementation at some point in time. If you look at the various products out there, the basic requirements of most of them are:


  1. a small JavaScript snippet asking for the insertion of a vendor-provided script. This will be s_code.js for Omniture SiteCatalyst; for Google Analytics, the file will be urchin.js

  2. another piece of code (or part of the same one) that follows this initial include line and captures the variables that need to be reported on. For this purpose, you have multiple options, depending on the product being used:

    1. Omniture SiteCatalyst: Use the custom variables

    2. Google Analytics: I am yet to learn this!

I am assuming that you want to implement a basic analytics solution on the web site. This is the case for most people. (I know it is a generalization, but since this industry is in a nascent stage, there is a lot of information that can be gleaned even with this level of implementation.)


To help accommodate this, the pages of the web site should have two important features:

  1. An include line at the top of the file, just after the <body> tag. This line in turn can insert the necessary JavaScript code needed for Google Analytics or Omniture SiteCatalyst.

  2. A shorter code snippet for pulling in the various custom parameters.



To explain this concept, I'll use the example of Omniture SiteCatalyst. Suppose you have a news publishing website and you need to measure the following:
1. Page name – s.pageName
2. Section name – s.channel
3. Headline, if applicable – s.prop1
4. Name of author, if applicable – s.prop2
(The parameters after '–' are the Omniture SiteCatalyst variables.)

What I would suggest is a simple implementation that places the necessary information in a DOM-accessible format on the rendered page. For example, the details could be laid out as follows:
1. <title></title> can contain the correct page name
2. the rest of the details can be put in <div></div> tags
So, my final rendered page would have the details in a format like:

<div id='section'>Latest news: Sports</div>
<div id='headline'>Man U win Championship League – yet again!</div>
<div id='author'>John Brown</div>

Then, we could write a generic JavaScript function to pull in the values. A possible implementation would look like:

function getParameterName(parameter) {
    // Look up the DOM element whose id matches the requested parameter,
    // e.g. 'section', 'headline' or 'author'
    var element = document.getElementById(parameter);
    if (element) {
        return element.innerHTML;
    }
    return '';
}

The final page could then populate the variables in a very generic way:

s.pageName = document.title;
s.channel = getParameterName('section');
s.prop1 = getParameterName('headline');
s.prop2 = getParameterName('author');

and so on..

Since this is a generic function, it can easily be added to all pages as-is. To ensure correct usage, it would be a good idea to insert this code just before the closing </body> tag, as in the sketch below.
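To make this concrete, here is a rough sketch of how such a rendered page might end up looking. It is only an illustration: the script path (/js/s_code.js) is an assumption, the variable mappings are the ones from this example, and s.t() is SiteCatalyst's standard call for recording the page view.

<html>
<head>
  <title>Man U win Championship League – yet again!</title>
</head>
<body>
  <!-- vendor-provided library, included just after <body> (path is illustrative) -->
  <script type="text/javascript" src="/js/s_code.js"></script>

  <div id='section'>Latest news: Sports</div>
  <div id='headline'>Man U win Championship League – yet again!</div>
  <div id='author'>John Brown</div>

  <!-- generic tagging snippet, included just before </body> -->
  <script type="text/javascript">
    function getParameterName(parameter) {
      var element = document.getElementById(parameter);
      return element ? element.innerHTML : '';
    }

    s.pageName = document.title;
    s.channel  = getParameterName('section');
    s.prop1    = getParameterName('headline');
    s.prop2    = getParameterName('author');
    s.t(); // record the page view
  </script>
</body>
</html>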

Implementation example
Suppose you are using .Net; the master page concept then fits nicely into the scheme of things. Add the line for including the JavaScript (s_code.js or urchin.js) in the master page.

Then, use this master page for all your ASPX pages. That should give you a foundation.

Then, design a short JavaScript file containing the four lines mentioned above and include it in all your pages just before the end of the body tag. In the worst case, the JavaScript could be empty and do nothing. Still, the pain of updating each page in the future is avoided.

The other option could be to use a master footer page. This option would make life a lot simpler.

Conclusion
So, you see, the implementation of analytics - at least a basic one - can be made a lot easier and more painless with a little bit of code designed into your system. Trying to do this after everything is done could be a bit of a pain and an unnecessary waste of effort.

Friday, 2 November 2007

7 Web Analytics Sins - White Paper by ClickTracks

-Akshay Ranganath


Read this nice article on the 7 Web Analytics Sins from ClickTracks. The 7 sins are:

Sin #1: Simple Visitor Counts
Learn the factors that can potentially skew visitor data.


Sin #2: Search Term Popularity
Understand why marketers must concentrate on the quality of visitors a keyword delivers, rather than the quantity.

Sin #3: The Linear Funnel
Learn the reasons why traditional sales funnels can lead to dangerous assumptions.

Sin #4: Data Overload
Know why it’s important to be able to separate interesting information from actionable information.

Sin #5: Relying on Absolute Numbers
Understand the reason why it’s more important to concentrate on trends instead of absolute numbers.

Sin #6: Relying on Top 10 Lists
Learn how getting stuck in your top 10 referrers can cost you long tail opportunities.

Sin #7: Technicolor Report
Understand the reason why the way that information is displayed can have a huge impact on ease of use and perception.

For more information, see the ClickTracks white paper at http://www.clicktracks.com/downloads/7-web-analytics-sins.pdf

Monday, 29 October 2007

Case Study - How to analyze a Web Analytics Report?

Akshay Ranganath



What to see in your Analytics Report?

Once you have put the analytics code onto the site, what do you start to measure? Here's a short article on it, using Google Analytics and a blog on Ubuntu Linux as an example.

The site used for recording is our own groupMAGNET blog, http://groupmagnet.blogspot.com/.

Reports from Dashboard

Visitor Count

The very first report on the site shows the number of visits to the web site. (See the article on the definition of a Visit.)

So, in the above screen shot, on the 16th of October, I received 89 visitors to my website. For the range from Sep 28th to Oct 28th, this is the highest number. For the same period, I've also got 213 pageviews.

The question that should come to mind now is: why such a sudden surge?

Content Analysis

When I saw that on October 16th there were so many visitors, I checked the Top Content report. It showed something like this:

This page shows that of the 213 page views, 128 were received by just one page. This page has a URL ending in “3-Months after Ubuntu”. So, this is the page that created such a huge surge in traffic.

Since I know that most people landed here, I now want to know if they actually found the page useful. To verify this, I invoke the report for the specific page by clicking on the first URL shown in the screen shot above. This results in a page of the following format:

So, this page is telling me that:

  • On average, people read this page for 3:32 minutes. This is a very good time considering that the article is really small.

  • But, it also tells me that 99.21% of the users bounced. This means that after reading this article, the visitors to my site navigated to some other web site. This means:


    • I am offering something that is of use to a lot of readers (the huge number of views) BUT

    • My site is not offering a range of solutions to keep the users hooked on.

Hence, if I were to run huge ad campaigns, etc. for some other customer, it would not be a big advantage.

The next question that comes to mind is: How did people land on my site?

How do people reach the site?

To answer this question, go back to the first page – the Dashboard – and look at the following report (Traffic Sources Overview):

This simple report has the details that show the mechanism by which people are landing on my site.

So, it says that the largest number of users landed on my blog via Referring Sites. A referring site is any site that has a link to this blog. (Sites like Google, etc. are treated as a special case and reported under Search Engines.)

Hence, my web site is popular not because a lot of people reached it through Google Search, but because some particularly important source is referring to my site. Who is this site? To see this detail, click on the view report link.

Here the details show that the top traffic sources are the UbuntuHQ.com and Digg.com sites.

Conclusion

From the above discussion, we see that the article “3-months after Ubuntu” has drawn a lot of viewers from the sites digg.com and ubuntuhq.com. Knowing the history of what had happened, I can now conclude that:

  • Digg.com and UbuntuHq.com attract good quality viewers for topics related to Ubuntu Linux

  • These sites (digg.com and ubuntuhq.com) also have the ability to target users who are specifically interested in a particular topic (Ubuntu Linux)

  • If I have anything to say on Ubuntu, it is probably a good idea to link the article from digg.com and ubuntuhq.com, since they give me viewers who actually read my material. (Coupled with the fact that I got around 10 comments, it also means that they actually read the contents and try to understand them too!)

PS: Google Analytics is a free tool. Anyone with a Gmail Id can get the necessary Javascript code for implementing Google Analytics.

Monday, 15 October 2007

3 Months after Ubuntu...

-Akshay Ranganath

It's been three months since I installed Ubuntu 7 (Feisty Fawn) on my laptop. Looking back at the way I've used it and the issues I've faced, here's my take on the system:

1. User Interface
The UI is definitely simple and easy to use. Quite nice, fast and robust. Maybe not as sleek as Windows, but it gets the job done. I did not have time to explore the various other themes - so maybe there are some better options out there.

1.a. Missing shortcuts
One thing that I definitely missed (or never managed to learn) was creating shortcuts. For example, in Windows, there is the option of "Send to Desktop". I could not find any such option easily.

This meant that every time I had to open specific directories, I had to go through the entire file structure to locate the folder that I wanted. A simple shortcut would have helped a lot.

1.b. Locked up menu-bars
This was a real irritant. All of a sudden, the menu-bars would become locked. You cannot resize or close windows from the menu-bar. The only way to close a window is to right-click on its tab in the taskbar and then choose the close option. For someone used to closing windows by clicking on the standard X button, this is an irritant.

Another area that really bugs me is the difficulty in re-sizing windows. Of course, if a window is locked, there is no way to do it. In cases where the window is not locked, the option is to re-size using the mouse but, somehow, it is not intuitive or easy to use. Sometimes it re-sizes nicely, sometimes it does not. Maybe this can be improved.

2. Application software
The default list of installed applications is quite helpful for getting most work done. The OpenOffice suite is surprisingly easy and nifty to use. Yet, there were some areas where things felt a bit jarring.

2.a. Document Viewer
This is the standard PDF viewer that comes along with Ubuntu 7. It is quite lightweight and does the job of displaying PDFs. The places where it caused problems were:
Selection of text - when text was selected and pasted, the formatting was lost, or things like spaces were simply gobbled up. So, words would be joined and I would have to go through the copied contents and re-format them to get the content in order.

The second area where I had a problem is that the document viewer does not allow you to select areas as images. So, if I like a graph in a PDF that I want to copy, I'd end up rebooting into Windows, opening Acrobat and then working with the PDF. I was not really happy to do this, but it was the only way to get the job done.

2.b. OpenOffice Writer
I am not sure if I explored it enough, but I could not find the document review features in OpenOffice. So, if I have to produce a collaborative document, I would still need to use Word on Windows. Maybe I need to search further but, this is a first impression.

Apart from this, I was quite surprised by the entire system. One area where I am really happy is the fast boot time. On average, Windows XP and Ubuntu take about the same time to boot. But, once booted, Windows would suddenly start all sorts of virus scans, updaters, etc. which would simply hog the memory and make my system slow. So, I would have to start my system and wait for almost 5 minutes before it got to a stage where it was usable.

With Ubuntu, if I want to get something real fast, I am able to get hold of it almost the moment the system completes booting. Way to go!

Well, that's about it for this edition of my review on the system. Please let me know your thoughts.

Wednesday, 15 August 2007

Why is del.icio.us more Web 2.0 than digg.com?

-Akshay Ranganath

Today, I was reading the book "The Long Tail" and I started to wonder which of the sites, digg.com or del.icio.us, offers more user-oriented behaviour. After all, Web 2.0 is about this freedom, right?

Where is del.icio.us better for targeting compared to digg.com?
So, here's my thoughts on it:
As per digg.com's story submission methodology, users choose a URL and add in the details. The final part is to assign a topic to the story. In this, users are completely limited to selecting a topic from amongst the ones that the digg.com site owners have decided on.

This is a fine methodology, but it has its limitations when your target audience is the Web 2.0 crowd, who want to catalogue all sorts of topics. For a simple example, whenever I write on the topic of Web Analytics, I need to assign it to programming/technology or business. The worst part is that digg allows you to choose only one topic. However, if I were to post the same on del.icio.us, I could choose tags like webanalytics, programming, sitecatalyst and so on.

Add to it, my other viewers can add their own tags, as per their understanding. The tagging mechanism creates a lot more semantic meaning for a single piece of writing. A similar piece on digg.com would, however, be associated with just one category.

However, this would mean that the stories could get into a mode of serving a very niche audience. If this were my primary motive, wouldn't that be a boon instead of a curse?

Where digg.com scores over del.icio.us?

Digg.com, though, is a fabulous option for targeting users if you have a blog/article that conforms to the topics specified on digg. For example, I had posted blogs on Ubuntu/Linux and digg sent an astounding 500+ visitors to my site, as compared to the normal 4-5 visitors who would have hit the site from all other referring methods. This was an improvement of a huge magnitude.

Yet, when I posted articles on Web Analytics, they could not gain the same audience, simply because of the lack of alignment between my blogs and the topic of "Programming".

So, if the guys at digg.com are reading this: Guys, you are amazing, but why don't you introduce tagging and tag cloud sort of a feature on your site? That would be a really useful feature!

Any comments??

Some comments I got from digg.com
http://digg.com/users/MarkDykeman/news/dugg said:
I'm no expert on Web 2.0, but I think the point that you are trying to make is that Digg is not designed to handle news, etc. on all topics and thus is less "flexible" or collaborative than a site like del.icio.us. I think, as you say, Digg's strength is its focus, plus the rankings. Digg is more for people who want to establish a reputation as a "maven" or respected source of information, whereas del.icio.us is probably a bit more altruistic and selfless. Having said that, there's a lot about del.icio.us's community structure that I don't understand yet.

Response
Thanks Mark for reading through the article.

Yes, what I said was that the *strict* list of topics on Digg.com is both a differentiator as well as a restricting force. It acts as a great targeting medium if the content fits nicely into a category but, can be lost if it does not.

On the other hand, del.icio.us provides unlimited opportunity for categorization - so you can cater to the "long tail" of readers who will be few and far between but who could turn out to be your niche and loyal customers.

As far as the other features of del.icio.us are concerned, I'd say its ability to share bookmarks with like-minded people is really interesting. For example, a group of us who run this blog use a shared tag for anything interesting to us and share it with each other. So, we actually know what another person read and found interesting. Check out the gadget on the left-hand side of the blog - it lists all the bookmarks we've made that we felt were interesting or related to the blog.

Sunday, 29 July 2007

Adsense Features

Let us look into some features provided by AdSense which make a publisher's life a whole lot easier.

1. Channels:
Channels are primarily used to determine which types of ads or which pages are earning the most AdSense revenue. Channels offer a deeper level of analysis than that provided by the overall revenue reports. Channels allow you to break down reporting to monitor the performance of sites, sections of sites or even individual ad units. Each time a channel is created, AdSense records impressions, CTR, CPM and earnings statistics for that specific page or ad unit. There are 2 types of channels that can be set up.

a. URL Channels: These can be used to track the performance of AdSense for content. URL channels cannot be used for AdSense for search. A URL channel can be added by simply giving the URL of the website on which you are using AdSense. The URL given can be a partial one or a complete one. Depending on the URL given, this is how Google decides what to track:
example.com – tracks all pages across all subdomains
sports.example.com – tracks only pages on the 'sports' subdomain
sports.example.com/widgets – tracks all pages below a specific directory
sports.example.com/index.html – tracks a specific page

It should be noted that entering your domain without the 'www' will allow you to track the performance of all subdomains, including www.domain.com and, for example, forums.domain.com. By entering www.domain.com as your domain channel, you will track all pages below www.domain.com only. Any activity on forums.domain.com, or on any other subdomain, will not be tracked with this domain channel.

Note that to track your website using URL channels, no change is required to the code placed on the website; just adding the website URL is sufficient.

b. Custom Channels: These can be used to track the performance of AdSense for content as well as AdSense for search. Custom channels can be used to answer questions like:

• Which ad colors, formats and placements are most effective on my site?
• Which ad units are the highest earners on my home page?
• Do my ads perform better with or without a border?
• How does revenue for each of my adsense for search boxes compare?
• Are ad units generating more revenue than linked units on my pages?


To get the most out of custom channels, it is best to track each and every ad unit. This way you have a core set of reports that you can use to aggregate data from all your ad units, or combine in different variations to gain deeper insight into your site's AdSense performance.
Another very important feature of custom channels is that one can make them available to advertisers for targeting (this feature is present only for AdSense for content). That is, once you make a custom channel available for targeting, advertisers will see your custom channel in the list of places on which to place ads. It is therefore very important to give proper names and descriptions to custom channels, as advertisers decide whether to choose a channel for placing their ads by looking at its name and description. Also, if the name of an existing custom channel is changed, any existing advertiser bids for that channel will be lost.

A total of 200 channels for AdSense for content (URL channels + custom channels), 200 custom channels for AdSense for search and 200 custom channels for referrals can be created. Note that to track your website using custom channels, code needs to be generated for each of the ad units (in the process of generating the code snippet, one can associate a channel name with it; a new attribute called ‘google_ad_channel’ is then present in the code, which is responsible for tracking that particular ad unit).
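For illustration, the AdSense for content snippet of that era carried the channel as one of the google_ad_* variables, roughly along these lines (the publisher ID, format and channel number below are placeholders, not real values):

<script type="text/javascript">
<!--
google_ad_client = "pub-0000000000000000";  // placeholder publisher ID
google_ad_width = 728;
google_ad_height = 90;
google_ad_format = "728x90_as";
google_ad_channel = "1234567890";           // the custom channel being tracked
//-->
</script>
<script type="text/javascript"
  src="http://pagead2.googlesyndication.com/pagead/show_ads.js">
</script>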

2. Competitive Ad Filter
This is a feature given by Google so that no web site owner loses traffic to a competitor's website. As my competitor's website will more or less have the same keywords as mine, there is a very high possibility of my competitor's ads being displayed on my website and vice versa. If I want to prevent my visitors from going to my competitor's website through an ad displayed on my website, I can specify my competitor's URL so that Google will not render ads for that URL on my website.

3. Site Authentication
AdSense basically works by determining the content on a website and rendering ads that are most suitable to that content. Determination of the content of a website is done by the AdSense crawler. But what if a website needs authentication? How will the crawler access the website and determine its content?
To address this issue, Google has introduced the 'Site Authentication' feature. Thus, website owners whose websites require authentication can also place ads on their websites. This is the method one has to follow to use the site authentication feature:

Enter these 4 attributes:
• Restricted directory or URL
• Authentication URL
• Authentication Method (Get, Post, Plain HTTP)
• Login and Password.

On saving, you will be redirected to Google's Webmaster Tools and prompted to verify that you are the owner of the website. This can be done in 2 ways: either by uploading a specific file to a specific location given by Google, or by adding a meta tag to your website. Once this verification is confirmed by Google, all the pages of your website that are login-protected will receive properly targeted ads, as the AdSense crawler will be able to access them.

Adsense Basics

Let's discuss a few things about Google AdSense. Once you get your account approved, you will have 3 different ways to make money:

1. AdSense for content
2. AdSense for search
3. Referrals

For all 3 types, Google gives a JavaScript code snippet that can be placed on any webpage, and ads will be rendered on the page immediately after placing the code. Let us learn something about these 3 ways:

1. Adsense for content


AdSense for content is one of the most widely used among the three. Here, Google renders ads on your web page according to the content present on the page. In AdSense for content, there are 2 different types of ads you can place:


Ad Unit: In an ad unit, ads are targeted to the site based on its content, and the ads can be placed on the site in various shapes. Google currently provides 11 different formats for displaying ad units. Thus, the size of the ad can be selected depending upon the layout of the web page. One can choose whether the ads displayed should be only text ads, only image ads or both. Clicking on these ads takes you directly to the advertiser's website. Up to 3 ad units can be placed on a single page.




Link Unit: In a link unit, ads are also targeted depending on the content of the web page, and they come in 6 different formats. A link unit displays 4 or 5 links without any description, unlike the ad unit, where the ad generally consists of a short description. A link unit differs from an ad unit in another important way: on clicking a link displayed in the link unit, the user is taken to a page that shows more related ads, instead of being taken directly to an advertiser's website. Up to 3 link units can be placed on a single page.


Advertisers bid for various keywords, and if these keywords are present on a website, ads will be rendered on that page. The revenue generated when someone clicks on an ad depends on how much the advertiser has bid. Most of the ads that are rendered work on a CPC basis. That is, the website owner gets paid only if the ad is clicked.
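As a purely hypothetical illustration of the CPC model (the numbers are made up, and the publisher's exact share of a bid is not something Google publishes): if an ad unit effectively earns $0.40 per click and receives 50 clicks in a day, it brings in roughly 50 x $0.40 = $20 for that day, while impressions that are never clicked earn nothing.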


Adsense for search:

AdSense for search simply means including Google search on your website. During setup itself, one can choose whether to include the option of searching the website. That is, the various pages Google has to search can be specified, and thus a user visiting the website will be able to perform a regular Google search or search just the website.




If a visitor to the website searches the web or the website using the Google search box placed on the website, then, along with the search results, sponsored links will be displayed at the top and bottom of the page. If the user clicks these sponsored links, the website owner gets paid for it. The ads being displayed and the amount of revenue generated all depend, again, on the same factors as for AdSense for content.


Referrals:

Referrals, as the name suggests, means referring users to products such as AdSense or AdWords. Currently, under referrals, Google offers products under various categories: Automotive, Business, Computers and Electronics, etc. Under these categories are various products which can be selected to be placed on the website. Up to 15 products can be selected, and these will rotate in the referral unit. The revenue generated for the publisher differs from product to product. For example, if the product is AdSense and a user has signed up for AdSense by clicking on the AdSense link on a website, that web site owner earns revenue in the following way:

When a publisher who signed up for Google Adsense through your referral earns US$5.00 within 180 days of sign-up, you will be credited with US$5.00. When that same publisher earns US$100.00 within 180 days of sign-up and is eligible for payment, you will be credited with an additional US$250.00. If, in any 180 day period, you refer 25 publishers who each earn more than US$100.00 and are all eligible for payment, you will be awarded a US$2,000.00 bonus.

Thus each product has its own set of rules for generating revenue.

Saturday, 28 July 2007

How to install packages on Ubuntu behind a Proxy Server?

-Akshay Ranganath

While working behind a proxy, the normal

sudo apt-get install 
may not work. This is because the command-line interface does not detect the proxy settings. To ensure a smooth installation without the unnecessary hassle of digging through the settings, the UI-based approach is a lot easier.

5 steps to set up the Synaptic Package Manager for a proxy-based network

  1. Open the Synaptic Package Manager by choosing System > Administration > Synaptic Package Manager in the menu.

  2. Provide password for the application to start

  3. Set the proxy settings by using the following menu: Settings > Preferences > Network

  4. Now provide the proxy details for both HTTP and FTP along with the port number.



  5. Click Apply and the Synaptic Package Manager is now ready to download and install packages.

5 steps to installing packages

Once the proxy setting is done, packages can be installed in a very simple manner.
  1. Click on Search button on top

  2. In the search box, enter the package to install. In the screen shot, I've entered Wordpress.



  3. Click OK and the results are displayed. Choose the application you are interested in by clicking on its checkbox. Here, you will get a drop-down with an option to Mark for Installation (you can even choose to uninstall/reinstall/upgrade). Choose this option. The package manager will automatically select all the dependencies to be installed along with the application.



  4. The Apply button at the top will now be enabled. You can continue to search for and choose as many other packages as you want to install.




  5. Click on the Apply button and the packages will be downloaded and installed.
And that's about it - you are on your way to installing and upgrading packages easily on Ubuntu!

How to create AJAX enabled web applications

- Chandra Nayana

We are here to take you through the development of AJAX-enabled web applications easily, from scratch, step by step.

Before going forward, have a look at the links below:
Click and learn AJAX
Click and know S/W installations for AJAX

ASP.Net AJAX server controls: These contain server and client code that integrate to produce AJAX-like behaviour.

Frequently used server controls are:

ScriptManager : Manages script resources for client components, partial page rendering, localization and custom user scripts.

UpdatePanel: Enables us to refresh selected parts of the page.

UpdateProgress: Provides status information about partial-page updates in UpdatePanel controls.

Timer: Allows us to perform postbacks at defined intervals.
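As a rough illustration of how these controls fit together on a page (this is a generic sketch, not the exact markup of the sample built below; the IDs and text are made up):

<asp:ScriptManager ID="ScriptManager1" runat="server" />

<asp:UpdatePanel ID="UpdatePanel1" runat="server">
  <ContentTemplate>
    <!-- only this region is refreshed during a partial-page update -->
    <asp:Label ID="Label1" runat="server" />
    <asp:Button ID="Button1" runat="server" Text="Time???" OnClick="Button1_Click" />
  </ContentTemplate>
</asp:UpdatePanel>

<asp:UpdateProgress ID="UpdateProgress1" runat="server"
    AssociatedUpdatePanelID="UpdatePanel1">
  <ProgressTemplate>
    Updating, please wait...
  </ProgressTemplate>
</asp:UpdateProgress>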

Now we will discuss how to create an AJAX-enabled website.

Start Visual Studio or Visual Web Developer Express Edition.

1. Go to File -> New -> Web Site, as shown:



2. A window will then open asking which type of website to create; select the AJAX-enabled website template, which will be under the Visual Studio templates.



3. Enter location and language and press OK.

4. A new web application will then be opened, as shown:



5. The AJAX Extensions available are as shown:



Now let us learn how to add and use the AJAX Control Toolkit.

1. In the Toolbox, right-click and select the 'Add Tab' option, as shown:



2. It will then add a new tab; name it AJAX Toolkit or any name of your choice.



3. To add items to the new tab, right-click on it and select the 'Choose Items' option.




4. A window will then open with '.NET Framework Components' and 'COM Components' tabs; go to the '.NET Framework Components' tab and click on Browse.



5. Then browse for the AJAX Toolkit .dll file in the bin folder of the toolkit that was downloaded.



6. All the elements in the toolkit will then be added to the tab that was just created.



7. The elements in the new Toolkit tab will appear as shown:



We can then use these controls as needed.

Now we will use the modal popup control, update panel, update progress bar and normal panel controls to create a sample AJAX-enabled application.

To add a theme to the website, right-click on the project in the Solution Explorer window and go to Add ASP.NET Folder -> Theme.



A theme will then be added, with Theme1 as the default, in the App_Themes folder. Then right-click on Theme1, add a new Style Sheet item and save it with a name of your choice.

Then add the following code into the .css file:

.modalBackground {
    /* semi-transparent grey overlay shown behind the popup */
    background-color:#c0c0c0;
    filter:alpha(opacity=70); /* opacity for older IE */
    opacity:0.7;
}

.modalPopup {
    /* the popup box itself */
    background-color:#ffffdd;
    border-width:3px;
    border-style:solid;
    border-color:Gray;
    padding:3px;
    width:250px;
}

The colours can be of your choice.

Then add a new folder called 'images' and copy images of your choice into that folder. Update the code given below to use those images, i.e. just change the names of the images, such as img4 for the progress indicator, etc.

Then add the different elements as shown, or just copy the code from the link given below.

Link for code to be pasted in source file




Add the following code to the .CS file of the page:

protected void Page_Load(object sender, EventArgs e)
{
    // Set the initial text of the label when the page loads
    Label1.Text = "Hi This is loaded for first time.......";
}

protected void Button1_Click(object sender, EventArgs e)
{
    // Clicking this button triggers an asynchronous postback;
    // nothing extra needs to happen here
}

protected void lnk2_Click(object sender, EventArgs e)
{
    // Simulate a slow operation so the UpdateProgress control becomes visible
    System.Threading.Thread.Sleep(4000);
    Label1.Text = "Hi time changed at " + DateTime.Now.ToString();
}

protected void lnk3_Click(object sender, EventArgs e)
{
    // Reload the page completely
    Response.Redirect("Default.aspx?");
}


In the web.config file, set the pages theme to Theme1 by updating the theme property on the pages tag, as shown:

<pages theme="Theme1"><controls></controls></pages>

If you now build and run the website, the first screen will be:



If you then click on the "Time???" button, the screen will be:



If you then click on the View Time button, the screen will be:



After the time is updated, the output will be:



If you click on the Cancel button, the screen will be updated as:

What installations and downloadables are required for developing AJAX applications?

-Chandra Nayana

Installations:


Click here and learn AJAX

This installation includes Microsoft ASP.NET 2.0 AJAX Extensions, which is a server framework, and the Microsoft AJAX Library, which consists of client script that runs on the browser.

Note: The installation process installs the ASP.NET AJAX assembly (System.Web.Extensions.dll) in the global assembly cache (GAC). Do not include the assembly in the Bin folder of your AJAX-enabled Web site.

You can install and use ASP.NET AJAX with Microsoft Visual Studio 2005 or Microsoft Visual Web Developer Express Edition. However, Visual Studio 2005 is not required to use ASP.NET AJAX to create ASP.NET Web applications.

You can install and use the Microsoft AJAX Library without the .NET Framework. You can also install it on non-Windows environments to create client-based Web applications for any browser that supports JavaScript.

Downloadables:

1. ASP.NET 2.0 AJAX Extensions 1.0: This will install the framework for developing and running AJAX-style applications with either server-centric or client-centric development models, and is fully supported by Microsoft.

Downloadable link:
http://www.microsoft.com/downloads/details.aspx?FamilyID=ca9d90fa-e8c9-42e3-aa19-08e2c027f5d6&displaylang=en

2. ASP.NET AJAX Control Toolkit: The ASP.NET AJAX Control Toolkit is a shared-source community project consisting of samples and components that make it easier than ever to work with AJAX-enabled controls and extenders. The Control Toolkit provides both ready-to-run samples and a powerful SDK to simplify creating custom ASP.NET AJAX controls and extenders.

Downloadable link:
http://www.codeplex.com/AtlasControlToolkit/Release/ProjectReleases.aspx?ReleaseId=4923

Prerequisites: ASP.NET AJAX 1.0 must be installed before installing this.

System requirements for installation:

Microsoft ASP.NET AJAX requires the following software and OS:

OS:
Windows Server 2003 / Windows XP / Windows Vista / any version that supports .NET Framework 2.0.

S/W:
.NET Framework 2.0 or higher, Internet Explorer 5.0 or higher.

Optional S/W:
Visual Studio 2005 or Visual web developer express edition.

What is AJAX and its importance?

- Chandra Nayana

AJAX stands for Asynchronous JavaScript And XML.
AJAX is not a new programming language, but a new way to use existing standards.
With AJAX you can create better, faster, and more user-friendly web applications.
AJAX is based on JavaScript and HTTP requests.

You can find this on any website.

But we will take you through a simple way of creating an AJAX-enabled website from scratch. To learn more, click the links below:

What are installations required for developing AJAX applications.
How to create AJAX enabled web applications.

Introduction:

ASP.Net AJAX enables you to create web pages that offer a rich user experience with responsive and familiar user interface elements.

Pluses of ASP.Net AJAX applications:

- Improves efficiency by processing only significant parts of web pages.
- Provides familiar UI elements such as progress indicators, tool tips, pop-up windows and many more.
- Allows partial page updates.
- Allows integration of data from different sources through calls to web services.
- Provides a framework that simplifies customization of server controls to include client capabilities.
- Supports all popular browsers.

Tuesday, 24 July 2007

Creating your own News Page

-Akshay Ranganath

For a long time, I used to wonder if it would be possible to collate news that I was interested in and then offer it on my blog. Say, for example, if I had a web site on Web 2.0, would it be possible to offer the latest news from some authentic source? The obvious reason would be to provide more material to my viewers and ensure that they are satisfied with the overall experience on the site.

Identifying an Architecture

One simple architecture that I could identify was this: why not try and subscribe to an RSS feed from some of the web sites that serve news content related to our web site and somehow display it? With this idea in mind, I went searching and found a site called Feed2JS. On this site, if you provide the URL of an RSS feed, it gives you a JavaScript snippet. If you embed this code into your site, the RSS contents can be displayed on the web site using any style sheet that you choose.

In fact, the Feed2JS web site also offers the ability to choose the style sheet. It is quite rudimentary but can definitely be used and then worked upon to provide a better style as needed by your web site.

Creating Feed – In Action

The whole process is very simple. Just go to the Feed2JS site and type in the URL of the feed that you want as JavaScript. Choose the preview option to see how it looks. Toggle the various settings to see how the feed can be made more attractive. Once satisfied, choose the Generate JavaScript button and create the feed.



Just take this code and paste it into your blog or HTML page between the <body> and </body> tags. That is it! The news feed is now active on your website. Check out the example on our blog in our RSS section. In short, this is all you need to do:
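For illustration only, the generated code boils down to a single script include along these lines. The actual URL and parameters are produced for you by Feed2JS, so treat everything in this snippet as a placeholder:

<!-- placeholder only: copy the real snippet from the Feed2JS "Generate JavaScript" step -->
<script type="text/javascript"
  src="http://feed2js.org/feed2js.php?src=http%3A%2F%2Fgroupmagnet.blogspot.com%2Ffeeds%2Fposts%2Fdefault&num=5">
</script>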


So, why is it so interesting?

Well, think about it this way: you are a web site offering news releases to everyone. There are sites who'd like to carry your releases, provided you give them the ability to change the look and feel so that it matches their web site.

In the pre-Web 2.0 world, you'd have to build mini web sites with a style sheet for each client. This would have been a maintenance nightmare. Now, all you need to do is provide an RSS feed that the client can consume and display wherever and however they want it! All you need to do is configure the RSS system to generate one more RSS feed.

Monday, 23 July 2007

Latest News


del.icio.us Bookmarks from Akshay Ranganath

Saturday, 21 July 2007

Web Analytics - Home

A lot has been said on Web Analytics. But, when I tried to find some resources that can be read and immediately used to start doing something, there wasn't much.

In most professional environments, the focus is on being actionable rather than going into too much theory. There is pressure from marketing departments to measure effectiveness, and IT is generally left hapless, without any clue on how to start the work.

Here is an attempt to help you get started with analytics with a more practical approach. A short amount of theory is included. If you need more details, you can always refer to the more complete and detailed resources online.

  1. Starting Web Analytics - Part 1
  2. Starting Web Analytics - Part 2
  3. Reporting on Traffic on Analytics
  4. Case Study on Web Analytics Implementation
  5. 7 Web Analytics Sins - White Paper by ClickTracks
  6. How to design your website to be Web Analytics friendly?
  7. A discussion on why resourcing is difficult for Web Analytics?
Hope you'll like it! Do let me know your thoughts!

References
  1. Google Analytics
  2. Omniture SiteCatalyst
  3. Avinash Kaushik's Blogs @ Occam's Razor

10 Simple steps to a faster Ubuntu booting.

-Akshay Ranganath

  1. Open the file /etc/fstab in gedit (Applications > Accessories > Text Editor)
  2. This file will have the partition details of the hard disk. For all the Windows partitions, it will have data in the following format:
    UUID=9877-489A  /media/sda1     vfat    defaults,utf8,umask=007,gid=46 0       1
  3. If the last value is 1, it means that the default setting is to scan your Windows partition every time the system boots. This is not necessary and, most importantly, a waste of time since it is a Windows partition anyway.
  4. Just set the value to 0. That is, for all those lines having the word vfat, set the sixth tab-separated value to zero to exclude checking.
  5. Save the file on your desktop as fstab.
  6. Open the terminal by using the option Application > Accessories > Terminal
  7. Make a copy of the fstab file for safety by executing the command:
    sudo cp /etc/fstab /etc/fstab.orig
  8. Copy the modified file to the /etc/ directory by giving the command:
    sudo cp /home/{username}/Desktop/fstab /etc/fstab
  9. This should copy the updated fstab
  10. Reboot and see a blazingly fast system!

Sunday, 15 July 2007

Traffic Reporting on Analytics

Unique Visitor

The unique visitor metric basically tries to measure the number of unique people who visited the site within a given period of time. A visitor is counted exactly once for the period. Generally, the unique visitor metric is calculated for a specific period of time. Some examples are:
  • Hourly unique visitors
  • Daily unique visitors
  • Monthly unique visitors and so on.

To clarify, if I visit this blog at 10:30 am and then 6:30 pm on the 3rd of a month, and then visit at 1 pm on the 15th of the month, the report would look as follows:

Hourly unique visitors for 3rd
10-11 am : 1
6-7 pm: 1

Hourly unique visitors for 15th
1-2 pm : 1

Daily Unique visitor
3 rd of July: 1
15th of July: 1

Monthly Unique visitor
July: 1

This metric is used to loosely identify the number of unique people who saw the site. It tries to map the usage of the site to an individual human – this could help in tracking individual behaviour.

If you have a database background, this is similar to saying “select distinct (visitor details)” for the time interval chosen.



Definition

SiteCatalyst & Google Analytics: Unique visitors represent the number of unduplicated (counted only once) visitors to your website over the course of a specified time period. A unique visitor is determined with cookies.

There are some caveats to using this metric. These are quite elegantly explained at Matt Belkin's blog.

Traffic Source Reports


This report shows the percentage of visitors who reached the site directly (by typing the URL into the browser), via referrers (from a link on some other page) and via search engines.

This report can be seen using the option Traffic Sources > Overview.



Referrer

A referrer is some other site from which a user navigated to your site. If there is a link from any page of, say, Site A to your site, then Site A refers to your site.

The referrer report can be used to identify the behaviour of the people who are landing on your page. So, for example, if you have a blog that is being referred to from two of your friends' blogs, Blog1 and Blog2, the report will show how many people were referred from each of the two sites. Say something like this:

Referrals
Blog1 - 10
Blog2 - 20



Definitions

SiteCatalyst: (Referrer) A domain or URL used outside of your defined domain to access your site. The Referring Domains Report and the Referrers Report break referrer data into domains and URLs so that you can view the instances that visitors access your site from a particular domain or URL. For example, if a visitor clicks a link from Site A and arrives at your site, Site A is the referrer if it is not defined as part of your domain. During SiteCatalyst implementation, your Omniture Implementation Consultant will help you to define the domains and URLs that are part of your web site.

Google Analytics: (Referrals) A referral occurs when any hyperlink is clicked on that takes a web surfer to any page or file in another website; it could be text, an image, or any other type of link. When a web surfer arrives at your site from another site, the server records the referral information in the hit log for every file requested by that surfer. If a search engine was used to obtain the link, the search engine name and any keywords used are recorded as well.

Referrer - The URL of an HTML page that refers visitors to a site.

This metric is especially useful for seeing the places from which people are arriving at a site. As the site grows older, this list tends to get bigger and bigger. Early on, it'll probably show very little, since most people would be accessing the site directly due to word-of-mouth marketing.

Generally, for the home page, there will be a lot of “Direct” hits if the site is well known. After that, pages are reached using some menu or links. These are not counted as referrals, since the “referring page” is also on your own site. So say you have the site http://groupmagnet.blogspot.com and an About Us page at http://groupmagnet.blogspot.com/2007/06/about-us.html. If a user first types the home page URL and then clicks on the “About Us” link, the traffic source report will show only one hit, for a “Direct” view. The navigation within the site is not recorded here.

Search Engines

People referred from search engines are (generally) not counted in the referrer reports. This is just to segregate the referrers from the search engines. For a search engine, the search keyword used can also be recorded. This, in turn, can be used for Search Engine Marketing.

For example, if a user hits the Group Magnet blog using the keyword GroupMagnet via Google, the site will have two reports, as shown below.

To view the Search Engine report, the navigation is Traffic Sources > Search Engines.


To view the keyword report, the navigation is Traffic Sources > Search Engines > Keyword.



The other option is Traffic Sources > Keywords.

Definition

Google Analytics: A keyword is a database index entry that identifies a specific record or document. Keyword searching is the most common form of text search on the web. Most search engines do their text query and retrieval using keywords. Unless the author of the web document specifies the keywords for her document (this is possible by using meta tags), it's up to the search engine to determine them. Essentially, this means that search engines pull out and index words that are believed to be significant. Words that are mentioned towards the top of a document and words that are repeated several times throughout the document are more likely to be deemed important.

Wednesday, 11 July 2007

PageRanking in SEO

This article is intended to provide a fair knowledge of PageRank in SEO. PageRank is one of the methods Google uses to determine a page’s relevance or importance. Before going into details, it’s better to mention the shorthand used in this article.



PR: Page Rank of the page.

Backlink: If page A links out to page B, then page B is said to have a “backlink” from page A.



What is a PageRank?


In short, PageRank is a “vote” by all the other pages on the Web about how important a page is. So a link to a page from any other page counts as a vote of support. If there is no link, it doesn’t mean a vote against the page; it just isn’t a supporting vote.


Quoting from the original Google paper, PageRank is defined like this:


Let’s assume page A has pages T1...Tn which point to it (i.e., which are backlinks to page A).

The parameter d is a damping factor which can be set between 0 and 1. Usually “d” is set to 0.85
C(A) is defined as the number of links going out of page A.



The PageRank of a page A is given as follows:
PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))


Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one.



Let us dig it deeper.


PR(Tn): Each page has a notion of its own self-importance. That’s “PR(T1)” for the first page on the web, all the way up to “PR(Tn)” for the last page.
C(Tn): Each page spreads its vote out evenly amongst all of its outgoing links. The count, or number, of outgoing links for page 1 is “C(T1)”, “C(Tn)” for page n, and so on for all pages.

PR(Tn)/C(Tn): so if our page (page A) has a backlink from page “n”, the share of the vote page A will get is “PR(Tn)/C(Tn)”.

d: All these fractions of votes are added together but, to stop the other pages having too much influence, this total vote is “damped down” by multiplying it by 0.85 (the factor “d”).

(1 - d): The (1 – d) bit at the beginning is a bit of probability maths magic so that “the sum of all web pages' PageRanks will be one”: it adds in the bit lost by the d. It also means that if a page has no links to it (no backlinks), it will still get a small PR of 0.15 (i.e. 1 – 0.85). (Aside: the Google paper says “the sum of all pages” but they mean “the normalised sum” – otherwise known as “the average”.)



How is PageRank Calculated?

It’s obvious from the formula that the PR of a page depends on the PR of the pages pointing to it. But we won’t know what PR those pages have until the pages pointing to them have their PR calculated, and so on... And when you consider that page links can form circles, it seems impossible to do this calculation!
It’s really not as difficult as it seems. According to the Google paper, we can just go ahead and calculate a page’s PR without knowing the final value of the PR of the other pages. That seems strange but, basically, each time we run the calculation we get a closer estimate of the final value. So all we need to do is remember each value we calculate and repeat the calculations lots of times until the numbers stop changing much.




Let’s take the simplest example network: two pages, each pointing to the other:





Each page has one outgoing link (the outgoing count is 1, i.e. C(A) = 1 and C(B) = 1).



Guess 1



We don’t know what their PR should be to begin with, so let’s take a guess at 1.0 and do some calculations:

PR(A) = 0.15 + 0.85 * 1.0 = 1.0
PR(B) = 0.15 + 0.85 * 1.0 = 1.0

The numbers aren’t changing at all! So it looks like we started out with a lucky guess – 1.0 is already the settled value.


Guess 2


Let’s start the guess at 0 instead and re-calculate:

PR(A) = 0.15 + 0.85 * 0
      = 0.15

PR(B) = 0.15 + 0.85 * 0.15
      = 0.2775


And again:

PR(A) = 0.15 + 0.85 * 0.2775
      = 0.385875

PR(B) = 0.15 + 0.85 * 0.385875
      = 0.47799375

And again:

PR(A) = 0.15 + 0.85 * 0.47799375
      = 0.5562946875

PR(B) = 0.15 + 0.85 * 0.5562946875
      = 0.622850484375
and so on. The numbers just keep going up. But will the numbers stop increasing when they get to 1.0? What if a calculation over-shoots and goes above 1.0?


Guess 3


Well let’s see. Let’s start the guess at 40 each and do a few cycles:


PR(A) = 40, PR(B) = 40

First calculation:

PR(A) = 0.15 + 0.85 * 40
      = 34.15

PR(B) = 0.15 + 0.85 * 34.15
      = 29.1775

And again:

PR(A) = 0.15 + 0.85 * 29.1775
      = 24.950875

PR(B) = 0.15 + 0.85 * 24.950875
      = 21.35824375
Clearly those numbers are heading down. It certainly looks as though the numbers will settle at 1.0 and stop.


Principle: it doesn’t matter where you start your guess; once the PageRank calculations have settled down, the “normalized probability distribution” (the average PageRank over all pages) will be 1.0.
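The short sketch below illustrates this principle for the two-page loop above: whichever starting guess we pick (including the three guesses tried here), repeatedly applying the formula drives both pages towards 1.0.

# Two pages, each the other's only backlink, so C(A) = C(B) = 1.
d = 0.85
for start in (0.0, 1.0, 40.0):             # the three guesses tried above
    pr_a = pr_b = start
    for _ in range(100):                    # iterate until the values settle
        pr_a = (1 - d) + d * pr_b
        pr_b = (1 - d) + d * pr_a
    print(start, round(pr_a, 6), round(pr_b, 6))   # every run ends at 1.0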



So let’s take a look at some examples and study how PageRank is affected in various scenarios. The PR values mentioned in the examples are calculated according to the formula given above.

Example 2
A simple hierarchy with some outgoing links





As you’d expect, the home page has the most PR – after all, it has the most incoming links. But something has gone wrong: the average PR of the pages is not 1.0, as we said it should be earlier.


Why is this so? Take a look at the “external site” pages – what’s happening to their PageRank? They’re not passing it on; they’re not voting for anyone; they’re simply wasting it.
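To see this “wasted vote” effect in isolation, here is a deliberately tiny sketch (not the actual network in the example above): page A links to page B, but B links out to nothing, so its PageRank is never passed on and the average falls well below 1.0.

# Page A links to B; B is a dead end with no outgoing links, so its vote is wasted.
d = 0.85
pr_a, pr_b = 1.0, 1.0
for _ in range(100):
    pr_a = (1 - d)                       # A has no backlinks at all
    pr_b = (1 - d) + d * pr_a / 1        # B's only backlink is A, and C(A) = 1
print(pr_a, pr_b, (pr_a + pr_b) / 2)     # 0.15, 0.2775, average 0.21375 – well below 1.0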

Example 3


Let’s link those external sites back into our home page just so we can see what happens to the average…




Look at the PR of our home page! All those incoming links sure make a difference.


Example 4


A simple hierarchy






Our home page has 2 and a half times as much PR as the child pages.


Observation: a hierarchy concentrates votes and PR into one page




Example 5


Looping





All the pages have the same number of incoming links, all pages are of equal importance to each other, all pages get the same PR of 1.0 (i.e. the “average” probability).


Example 6


Extensive Interlinking – or Fully Meshed




The results are the same as the Looping example above and for the same reasons.


Example 7


Hierarchical – but with a link in and one out.


We’ll assume there’s an external site that has lots of pages and links with the result that one of the pages has the average PR of 1.0. We’ll also assume that there’s just one link from that page and it’s pointing at our home page.





In example 4 the home page only had a PR of 1.92, but now it is 3.31!
Not only has site A contributed 0.85 PR to us, but the raised PR in the “About”, “Product” and “More” pages has had a lovely “feedback” effect, pushing up the home page’s PR even further! Principle: a well-structured site will amplify the effect of any contributed PR.

Example 8


Looping – but with a link in and a link out






Well, the PR of our home page has gone up a little, but what’s happened to the “More” page?
The vote of the “Product” page has been split evenly between it and the external site. We now value the external Site B equally with our “More” page. The “More” page is getting only half the vote it had before – this is good for Site B but very bad for us!


Example 9


Fully meshed – but with one vote in and one vote out




That’s much better. The “More” page is still getting a smaller share of the vote than in example 7, of course, but now the “Product” page has kept three quarters of its vote within our site – unlike example 8, where it was giving away fully half of its vote to the external site!
Keeping just this small extra fraction of the vote within our site has had a very nice effect on the Home Page too – PR of 2.28 compared with just 1.66 in example 8.


Observation: increasing the internal links in your site can minimize the damage to your PR when you give away votes by linking to external sites.


Principle: If a particular page is highly important – use a hierarchical structure with the important page at the “top”.
Where a group of pages may contain outward links – increase the number of internal links to retain as much PR as possible.
Where a group of pages do not contain outward links – the number of internal links in the site has no effect on the site’s average PR. You might as well use a link structure that gives the user the best navigational experience.

Site Maps


Site maps are useful in at least two ways:
1. If a user types in a bad URL, most websites return a really unhelpful “404 – page not found” error page. This can be discouraging. Why not configure your server to return a page that shows an error has been made, but also gives the site map? This can help the user enormously.
2. Linking to a site map on each page increases the number of internal links in the site, spreading the PR out and protecting you against your vote “donations”.


Example 10


A common web layout for long documentation is to split the document into many pages with a “Previous” and “Next” link on each plus a link back to the home page. The home page then only needs to point to the first page of the document.



In this simple example, where there’s only one document, the first page of the document has a higher PR than the Home Page! This is because page B is getting all of page A’s vote, while page A is only getting fractions of the votes from pages B, C and D.
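A rough way to check this is with the same kind of iteration as before. The link structure below is my assumption of the layout described above: Home points only at page B, and pages B, C and D each carry Previous/Next links plus a link back to Home.

# Assumed link structure for the documentation example (Home -> B -> C -> D chain).
links = {
    "Home": ["B"],                  # the home page points only at the first document page
    "B":    ["Home", "C"],          # first page: Home + Next
    "C":    ["Home", "B", "D"],     # middle page: Home + Previous + Next
    "D":    ["Home", "C"],          # last page: Home + Previous
}
d = 0.85
pr = {page: 1.0 for page in links}
for _ in range(100):
    pr = {page: (1 - d) + d * sum(pr[src] / len(links[src])
                                  for src in links if page in links[src])
          for page in links}
print(pr)   # with this layout, page B ends up with a higher PR than Home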


Principle: in order to give users of our site a good experience, we may have to take a hit against our PR. There’s nothing we can do about this - and neither should we try to or worry about it! If our site is a pleasure to use lots of other webmasters will link to it and we’ll get back much more PR than we lost.


Can we also see the trend between this and the previous example? As we add more internal links to a site, it gets closer to the Fully Meshed example, where every page gets the average PR for the mesh. Observation: as we add more internal links in our site, the PR will be spread out more evenly between the pages.


Example 11


Getting high PR the wrong way and the right way.
Just as an experiment, let’s see if we can get 1,000 pages pointing to our home page, but only have one link leaving it…








Those Spam pages are pretty worthless but they sure add up!


Observation: it doesn’t matter how many pages you have in your site, your average PR will always be 1.0 at best. But a hierarchical layout can strongly concentrate votes, and therefore the PR, into the home page!
This is a technique used by some disreputable sites (mostly adult content sites). If Google’s robots decide you’re doing this there’s a good chance you’ll be banned from Google!

On the other hand there are at least two right ways to do this:


1. Be a Mega-site


Mega-sites, like http://news.bbc.co.uk/, have tens or hundreds of editors writing new content – i.e. new pages – all day long! Each one of those pages has rich, worthwhile content of its own and a link back to its parent or the home page. That’s why the Home page Toolbar PR of these sites is 9/10 and the rest of us just get pushed lower and lower by comparison…
Principle: Content Is King! There really is no substitute for lots of good content…


2. Give away something useful


http://www.phpbb.com/ has a Toolbar PR of 8/10 and it has no big money or marketing behind it! How can this be?
What the group has done is write a very useful bulletin board system that is becoming very popular on many websites. And at the bottom of every page, in every installation, is this HTML code:
Powered by phpBB
The administrator of each installation can remove that link, but most don’t because they want to return the favour…
Imagine all those millions of pages giving a fraction of a vote to http://www.phpbb.com/?
Principle: Make it worth other people’s while to use your content or tools. If your give-away is good enough, other site admins will gladly give you a link back.
Principle: it’s probably better to get lots (perhaps thousands) of links from sites with small PR than to spend any time or money desperately trying to get just the one link from a high-PR page.



Finally


PageRank is, in fact, very simple. But when a simple calculation is applied hundreds (or billions) of times over, the results can seem complicated.



Reference:
This article is extracted from a paper written by Ian Rogers. He has been a Senior Research Fellow in User Interface Design and a consultant in Network Security and Database Backed Websites.
It was sponsored by IPR Computing Ltd – specialists in Secure Networks and Database Backed Websites.