Friday 22 June, 2007

About Us

What is MAGNET?


MAGNET is an initiative to gather knowledge on various new and upcoming areas of technology and general business practices in the Media and Publishing industry.

MAGNET stands for Media Action Group for New and Emerging Technologies.

What do we intend to do?
The core focus of the group is to concentrate on the following four areas of the industry:
1. Web Analytics
2. Online Marketing
3. Search Engine Optimization
4. Web 2.0 technologies

In each of these areas, we aim to develop expertise not just from a theoretical perspective, but to reach a stage where we can act as in-house consultants for other projects and industries.

What will be our activities?
Under the banner of MAGNET, we aim to accomplish two main tasks:
1. To learn and implement solutions in the areas mentioned above
2. To then present a tutorial or guide based on our learning so that others can follow.

The blog http://groupmagnet.blogpost.com will be our one-stop solution. It will be the focal point of all our experiments and it will also serve as our knowledge repository. The site will thus serve a dual purpose: a test bed and a knowledge center.

Monday 18 June, 2007

Adding Google Gadget to your Blog

- Akshay Ranganath

As an experiment, I tried to add a Google Gadget to our MAGNET blog. Before starting, let me explain what a Google Gadget is and why I wanted it on the blog.

What is a Google Gadget?
As per the Google Gadgets page (1), 'Google Gadgets are mini-applications that work with the Google homepage, Google Desktop, or any page on the web. They can range from simple HTML to complex applications, and can be a calendar, a weather globe, a media player, or anything else you can dream up.'

There are two types of Gadgets, Universal and Desktop. In short, a Universal gadget is an application that can be embedded on any HTML page that I own and works only when I am online. For more details, refer to the Google Gadgets page (1).

Why Google Gadget?
Well, since we are building a web site about the latest technological advances, we are trying to attract people who are interested in these technologies and who, many a time, would like to read the latest technology news. To handle this, I could either:
Update my blog from time to time with the latest news, OR
Feed the latest news directly into the blog itself.

In the era of Web 2.0, the second approach is what holds true. Sites like Digg.com, del.icio.us and the multitude of other social bookmarking sites do the same thing. The only difference is that on those sites any user can add feeds; on our blog, the feeds will be under the control of us, the people who own it.

How do I add a Google Gadget - the short version
The process is quite simple. In short, this is what you need to do:

  1. Log into the Google Gadgets page
  2. Identify the gadget that you want
  3. Take the code of the gadget
  4. Paste that code on the blog template and voila! The gadget is on your page.

How do I add a Google Gadget - longer version

For those who have never used sites like Google Gadgets, here is how you can proceed to add a gadget to the blog:

Log into the Google Gadget home page at the URL http://google.com/ig.
You can use your Gmail account credentials.


Once logged in, paste this URL into the same browser: http://www.google.com/ig/directory?syn

This will open a directory of gadgets that can be used for blogs.


In our case, I wanted a Technology related Gadget. So, I clicked on the Digg.com - Top in 24 hours link.



Then, customize the gadget for the height, width and color combination that you want. Once done, click on the Get Code button. This will display the script that needs to be inserted into your blog or any web page.

Copy the code displayed in the text box below. This is what will make the gadget appear on the web page or the blog.

Open the source code of the page/blog where you want the gadget to appear.

In the case of a blog, open the template for the blog. Assuming you are using Blogger.com, here is how to do it:
  • Log into Blogger.com
  • Click on the Layout tab of the blog
  • Choose the Edit HTML option
  • Paste the code copied above into the template. Note that the placement of the code determines where the gadget will appear on the page.


And behold - check the Digg.com gadget that appears on the left hand navigation of this site.

References:
  • http://code.google.com/apis/gadgets/
  • http://www.google.com/ig/directory?synd=open&cat=news

Introduction to the Working of a Search Engine

-Dileep Kumar

Introduction
This article will briefly present how a search engine works in general, with some examples of the famous search engines we have today.
Everyone knows today that if we want information about something, it will be available on the World Wide Web. There are millions of pages on an amazing variety of topics waiting on the web. But the bad news about the Internet is that there are hundreds of millions of pages available, most of them titled according to the whim of their author, almost all of them sitting on servers with cryptic names. When we need to know about a particular subject, how do we know which pages to read?

The answer is simple: use a search engine.


Internet search engines are special sites on the Web that are designed to help people find information stored on other sites. There are differences in the ways various search engines work, but they all perform three basic tasks:
· They search the Internet -- or select pieces of the Internet -- based on important words.
· They keep an index of the words they find, and where they find them.
· They allow users to look for words or combinations of words found in that index.

Early search engines held an index of a few hundred thousand pages and documents, and received maybe one or two thousand inquiries each day. Today, a top search engine will index hundreds of millions of pages, and respond to tens of millions of queries per day.
Looking at the Web

Before a search engine can tell you where a file or document is, it must be found. To find information on the hundreds of millions of Web pages that exist, a search engine employs special software robots, called spiders, to build lists of the words found on Web sites. When a spider is building its lists, the process is called Web crawling. In order to build and maintain a useful list of words, a search engine's spiders have to look at a lot of pages.
How does any spider start its travel over the Web? The usual starting points are lists of heavily used servers and very popular pages. The spider will begin with a popular site, indexing the words on its pages and following every link found within the site. In this way, the spidering system quickly begins to travel, spreading out across the most widely used portions of the Web.
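The crawl described above (start from popular seed pages, index the words on each page, follow every link found) can be sketched in Python. This is an illustrative breadth-first loop over a tiny in-memory "web"; the fetch_page callback and the example URLs are assumptions made for the demonstration, not any real engine's code:

```python
from collections import deque

def crawl(seed_urls, fetch_page, max_pages=100):
    """Breadth-first crawl: start from the seed pages, index the words
    on each page, and follow every link found within it."""
    queue = deque(seed_urls)
    visited = set()
    index = {}  # word -> set of URLs where the word appears
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        words, links = fetch_page(url)  # caller supplies the fetcher
        for word in words:
            index.setdefault(word.lower(), set()).add(url)
        queue.extend(link for link in links if link not in visited)
    return index

# A tiny in-memory "web" standing in for real HTTP fetches:
web = {
    "http://a.example": (["search", "engine"], ["http://b.example"]),
    "http://b.example": (["spider", "engine"], []),
}
index = crawl(["http://a.example"], lambda url: web[url])
```

In a real spider, fetch_page would download the page and parse out its words and hyperlinks; everything else about the traversal is the same idea.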


(Image illustrating how spiders crawl the Web, taken from http://www.howstuffworks.com/)

Did you know? Google built its initial system to use multiple spiders, usually three at one time. Each spider could keep about 300 connections to Web pages open at a time. At its peak performance, using four spiders, the system could crawl over 100 pages per second, generating around 600 kilobytes of data each second.
To keep everything running quickly, a system is required to feed the spiders with URLs. These come from a domain name server (DNS), which translates server names into addresses. One can either depend on an Internet service provider's DNS or run one's own; the major search engines run their own DNS to keep delays to a minimum.


When the Google spider looks at an HTML page, it takes note of two things:
· The words within the page
· Where the words are found

Words occurring in the title, subtitles, meta tags and other positions of relative importance will be noted for special consideration during a subsequent user search. The Google spider was built to index every significant word on a page, leaving out the articles "a," "an" and "the." Other spiders take different approaches.
These different approaches usually attempt to make the spider operate faster, allow users to search more efficiently, or both. For example, some spiders will keep track of the words in the title, sub-headings and links, along with the 100 most frequently used words on the page and each word in the first 20 lines of text. Lycos is said to use this approach to spidering the Web.
Other systems, such as AltaVista, go in the other direction, indexing every single word on a page, including "a," "an," "the" and other "insignificant" words. The push to completeness in this approach is matched by other systems in the attention given to the unseen portion of the Web page, the meta tags.
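The difference between the Google-style and AltaVista-style treatment of "insignificant" words can be illustrated with a small tokenizer sketch; the stop-word list here contains only the three articles mentioned above:

```python
STOP_WORDS = {"a", "an", "the"}

def significant_words(text, keep_stop_words=False):
    """Tokenize a page. A Google-style spider drops the articles
    'a', 'an' and 'the'; an AltaVista-style spider keeps every word."""
    words = text.lower().split()
    if keep_stop_words:
        return words
    return [w for w in words if w not in STOP_WORDS]
```

For example, significant_words("The spider crawls a page") keeps only "spider", "crawls" and "page", while passing keep_stop_words=True indexes all five words.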
Building the Index
Once the spiders have completed the task of finding information on Web pages (and we should note that this is a task that is never actually completed -- the constantly changing nature of the Web means that the spiders are always crawling), the search engine must store the information in a way that makes it useful. There are two key components involved in making the gathered data accessible to users:
· The information stored with the data
· The method by which the information is indexed
In the simplest case, a search engine could just store the word and the URL where it was found. In reality, this would make for an engine of limited use, since there would be no way of telling whether the word was used in an important or a trivial way on the page, whether the word was used once or many times or whether the page contained links to other pages containing the word. In other words, there would be no way of building the ranking list that tries to present the most useful pages at the top of the list of search results.

To make for more useful results, most search engines store more than just the word and URL. An engine might store the number of times that the word appears on a page. The engine might assign a weight to each entry, with increasing values assigned to words as they appear near the top of the document, in sub-headings, in links, in the meta tags or in the title of the page. Each commercial search engine has a different formula for assigning weight to the words in its index. This is one of the reasons that a search for the same word on different search engines will produce different lists, with the pages presented in different orders.
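A toy version of such an index can be sketched in Python. The weighting here (a flat boost for words appearing in the title) is purely illustrative; as noted above, each commercial engine keeps its own formula:

```python
def build_index(pages):
    """pages: {url: {"title": str, "body": str}}.
    For each word, store per URL the occurrence count and a simple
    weighted score that values title words more than body words."""
    index = {}
    for url, page in pages.items():
        for field, weight in (("title", 10), ("body", 1)):
            for word in page[field].lower().split():
                entry = index.setdefault(word, {}).setdefault(
                    url, {"count": 0, "score": 0})
                entry["count"] += 1
                entry["score"] += weight
    return index

pages = {
    "http://a.example": {
        "title": "search engine",
        "body": "a search engine finds pages",
    },
}
idx = build_index(pages)
```

A query can then rank URLs by the stored score rather than by raw occurrence counts alone.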

Regardless of the precise combination of additional pieces of information stored by a search engine, the data will be encoded to save storage space. For example, the original Google paper describes using 2 bytes, of 8 bits each, to store information on weighting -- whether the word was capitalized, its font size, position, and other information to help in ranking the hit. Each factor might take up 2 or 3 bits within the 2-byte grouping (8 bits = 1 byte). As a result, a great deal of information can be stored in a very compact form. After the information is compacted, it's ready for indexing.
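The 2-byte packing idea can be sketched as follows; the exact field layout (1 bit for capitalization, 3 bits for font size, 12 bits for position) is an illustrative assumption chosen to fill 2 bytes, not necessarily the layout from the Google paper:

```python
def pack_hit(capitalized, font_size, position):
    """Pack ranking hints into 2 bytes: 1 bit for capitalization,
    3 bits for font size (0-7), 12 bits for word position (0-4095)."""
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return ((capitalized & 1) << 15) | ((font_size & 0b111) << 12) | position

def unpack_hit(packed):
    """Recover the three fields from the 16-bit packed value."""
    return ((packed >> 15) & 1, (packed >> 12) & 0b111, packed & 0xFFF)
```

Packing millions of such "hits" into 2 bytes each, instead of storing them as separate numbers, is what makes the compact storage described above possible.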

An index has a single purpose: It allows information to be found as quickly as possible. There are quite a few ways for an index to be built, but one of the most effective ways is to build a hash table. In hashing, a formula is applied to attach a numerical value to each word. The formula is designed to evenly distribute the entries across a predetermined number of divisions. This numerical distribution is different from the distribution of words across the alphabet, and that is the key to a hash table's effectiveness.

In English, there are some letters that begin many words, while others begin fewer. You'll find, for example, that the "M" section of the dictionary is much thicker than the "X" section. This inequity means that finding a word beginning with a very "popular" letter could take much longer than finding a word that begins with a less popular one. Hashing evens out the difference, and reduces the average time it takes to find an entry. It also separates the index from the actual entry. The hash table contains the hashed number along with a pointer to the actual data, which can be sorted in whichever way allows it to be stored most efficiently. The combination of efficient indexing and effective storage makes it possible to get results quickly, even when the user creates a complicated search.
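A minimal illustration of hashing words into evenly distributed buckets, in Python; the multiply-by-31 formula is a common textbook choice, not the scheme any particular engine uses:

```python
def bucket_for(word, num_buckets=8):
    """Map a word to one of num_buckets divisions of the index.
    Unlike an alphabetical index, the resulting distribution does not
    depend on which letter the word starts with."""
    h = 0
    for ch in word:
        h = (h * 31 + ord(ch)) % (2 ** 32)
    return h % num_buckets
```

Words such as "many" and "much" sit side by side in a dictionary, but hashing scatters them across buckets, which is what evens out lookup times.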

Searching through an index involves a user building a query and submitting it through the search engine. The query can be quite simple, a single word at minimum. Building a more complex query requires the use of Boolean operators that allow you to refine and extend the terms of the search.

The Boolean operators most often seen are:

· AND - All the terms joined by "AND" must appear in the pages or documents. Some search engines substitute the operator "+" for the word AND.
· OR - At least one of the terms joined by "OR" must appear in the pages or documents.
· NOT - The term or terms following "NOT" must not appear in the pages or documents. Some search engines substitute the operator "-" for the word NOT.
· FOLLOWED BY - One of the terms must be directly followed by the other.
· NEAR - One of the terms must be within a specified number of words of the other.
· Quotation Marks - The words between the quotation marks are treated as a phrase, and that phrase must be found within the document or file.
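Against an inverted index (each word mapped to the set of URLs containing it), the first three operators above reduce to set operations. The sketch below handles only the simple two-term "term1 OP term2" form, an assumption made to keep the example short:

```python
def boolean_search(index, query):
    """Evaluate a query of the form 'term1 OP term2', where OP is
    AND, OR or NOT, against an index of word -> set of URLs."""
    left, op, right = query.lower().split()
    a = index.get(left, set())
    b = index.get(right, set())
    if op == "and":
        return a & b   # pages containing both terms
    if op == "or":
        return a | b   # pages containing either term
    if op == "not":
        return a - b   # pages with the first term but not the second
    raise ValueError("unsupported operator: " + op)

index = {"cat": {"u1", "u2"}, "dog": {"u2", "u3"}}
```

Here boolean_search(index, "cat AND dog") returns only the page containing both words. Operators like FOLLOWED BY and NEAR need word positions stored in the index, not just URL sets, so they are beyond this sketch.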

Future Search

The searches defined by Boolean operators are literal searches -- the engine looks for the words or phrases exactly as they are entered. This can be a problem when the entered words have multiple meanings. "Bed," for example, can be a place to sleep, a place where flowers are planted, the storage space of a truck or a place where fish lay their eggs. If you're interested in only one of these meanings, you might not want to see pages featuring all of the others. You can build a literal search that tries to eliminate unwanted meanings, but it's nice if the search engine itself can help out.
One of the areas of search engine research is concept-based searching. Some of this research involves using statistical analysis on pages containing the words or phrases you search for, in order to find other pages you might be interested in. Obviously, the information stored about each page is greater for a concept-based search engine, and far more processing is required for each search. Still, many groups are working to improve both results and performance of this type of search engine. Others have moved on to another area of research, called natural-language queries.
The idea behind natural-language queries is that you can type a question in the same way you would ask it of a person sitting beside you -- no need to keep track of Boolean operators or complex query structures. The most popular natural-language query site today is AskJeeves.com, which parses the query for keywords that it then applies to the index of sites it has built. It only works with simple queries, but competition is heavy to develop a natural-language query engine that can accept queries of great complexity.


References:

http://www.howstuffworks.com/