my soulmate

Tuesday, 26 July 2011

ISP (INTERNET SERVICE PROVIDER)

An Internet service provider (ISP) is a company that provides access to the Internet. Access ISPs connect customers to the Internet using copper, wireless or fiber connections. Hosting ISPs lease server space to smaller businesses and host other people's servers (colocation). Transit ISPs provide large amounts of bandwidth to connect hosting ISPs to access ISPs.

INTERNET

Our teacher has assigned us a project about computer technology, so today we will share some information about the Internet.

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks, of local to global scope, that are linked by a broad array of electronic, wireless and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.


EXTRANET

An extranet is a computer network that allows controlled access from the outside, for specific business or educational purposes. An extranet can be viewed as an extension of a company's intranet that is extended to users outside the company, usually partners, vendors, and suppliers. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with a selected set of other companies (business-to-business, B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) models involve known servers of one or more companies, communicating with previously unknown consumer users. An extranet is like a DMZ in that it provides access to needed services for channel partners, without granting access to an organization's entire network.

INTRANET

An intranet is a private computer network that uses Internet Protocol technology to securely share any part of an organization's information or network operating system within that organization. The term is used in contrast to internet, a network between organizations, and instead refers to a network within an organization. Sometimes the term refers only to the organization's internal website, but may be a more extensive part of the organization's information technology infrastructure. It may host multiple private websites and constitute an important component and focal point of internal communication and collaboration. Any of the well known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer protocol). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data.
An intranet can be understood as a private analog of the Internet, or as a private extension of the Internet confined to an organization. The first intranet websites and home pages began to appear in organizations in 1996-1997. Although not officially noted, the term intranet first became commonplace among early adopters, such as universities and technology corporations, in 1992.
Intranets have also been contrasted with extranets. While intranets are generally restricted to employees of the organization, extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for authentication, authorization and accounting (the AAA protocol).
Intranets may provide a gateway to the Internet by means of a network gateway with a firewall, shielding the intranet from unauthorized external access. The gateway often also implements user authentication, message encryption, and virtual private network (VPN) connectivity so that off-site employees can access company information, computing resources and internal communication.

Friday, 22 July 2011

FACTS

The human face can create 5,000 expressions. People can read even the slightest nuances and changes in facial expression, and their attitudes and responses are affected by those subtle changes.

A person can recognize a smile from 300 feet away. Smiling is the most easily recognizable facial expression that we have, and is seen in every culture worldwide. Smiling reduces stress, can lower blood pressure and even makes us appear younger and more attractive.

Japanese people 'slurp' their food, which is considered a sign that the food is tasty. If you don't do it in Japan, it may disappoint your host.
The deepest ocean in the world is the Pacific Ocean, with an average depth of 13,215 feet. The greatest known depth of the Pacific Ocean is 36,198 feet, found in the Mariana Trench in the Mariana Islands.
Banana plants are the largest plants on earth without a woody stem. They are actually giant herbs of the same family as lilies, orchids and palms.

Thursday, 21 July 2011

my favourite song...
huuuuuu
love it!

Wednesday, 20 July 2011

INTERNET RELAY CHAT


Internet Relay Chat (IRC) is a form of real-time Internet text messaging (chat) or synchronous conferencing.[1] It is mainly designed for group communication in discussion forums, called channels,[2] but also allows one-to-one communication via private message[3] as well as chat and data transfer,[4] including file sharing.[5]
IRC was created in 1988. Client software is now available for every major operating system that supports Internet access.[6] As of April 2011, the top 100 IRC networks served more than half a million users at a time,[7] with hundreds of thousands of channels[7] operating on a total of roughly 1,500 servers[7] out of roughly 3,200 servers worldwide.[8]
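The gist of the protocol fits in a few lines. Below is a minimal IRC client sketch in Python, assuming a hypothetical server, nickname and channel; NICK, USER, JOIN and PRIVMSG are the core commands from the original IRC specification (RFC 1459), and a real client would keep reading from the socket in a loop.

```python
# Minimal IRC client sketch using only the Python standard library.
# The server, port, nick and channel below are placeholder examples.
import socket

SERVER = "irc.example.net"   # hypothetical IRC server
PORT = 6667                  # conventional plaintext IRC port
NICK = "ictlstudent"         # hypothetical nickname
CHANNEL = "#ictl"            # hypothetical channel

def send(sock, line):
    """IRC is line-oriented: every command ends with CRLF."""
    sock.sendall((line + "\r\n").encode("utf-8"))

sock = socket.create_connection((SERVER, PORT))
send(sock, f"NICK {NICK}")                        # register a nickname
send(sock, f"USER {NICK} 0 * :ICTL project bot")  # identify the user
send(sock, f"JOIN {CHANNEL}")                     # join a group channel
send(sock, f"PRIVMSG {CHANNEL} :hello, world")    # message everyone in it

# Servers periodically send PING; a client must reply PONG to stay connected.
for line in sock.recv(4096).decode("utf-8", errors="replace").splitlines():
    if line.startswith("PING"):
        send(sock, "PONG " + line.split(" ", 1)[1])
```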

FUNCTION OF WEB SEARCH ENGINE


A search engine operates in the following order:
  1. Web crawling
  2. Indexing
  3. Searching
Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link on the site; exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries; a query can be as simple as a single word. The purpose of an index is to allow information to be found as quickly as possible. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the actual text that was indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the principle of least astonishment: the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.
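To make the crawling step concrete, here is a toy crawler sketch in Python using only the standard library. The start URL is a placeholder, and a real spider would also throttle its requests and cache robots.txt; note how the robots.txt exclusions mentioned above are checked before every fetch.

```python
# Toy web crawler sketch: fetch pages, respect robots.txt, follow links.
from html.parser import HTMLParser
from urllib import robotparser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href targets of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, max_pages=10):
    seen, queue, pages = set(), [start_url], {}
    while queue and len(pages) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        # Exclusions can be made with robots.txt, as noted above.
        root = "{0.scheme}://{0.netloc}".format(urlparse(url))
        rp = robotparser.RobotFileParser(root + "/robots.txt")
        rp.read()
        if not rp.can_fetch("*", url):
            continue
        html = urlopen(url).read().decode("utf-8", errors="replace")
        pages[url] = html                   # store the page for indexing
        extractor = LinkExtractor()
        extractor.feed(html)
        queue.extend(urljoin(url, link) for link in extractor.links)
    return pages

pages = crawl("http://example.com/")        # placeholder start page
```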
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. Unfortunately, there are currently no known public search engines that allow documents to be searched by date. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search; the engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the search involves using statistical analysis on pages containing the words or phrases you search for. As well, natural language queries allow the user to type a question in the same form one would ask it to a human; an example of such a site is ask.com.
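A minimal sketch of how these boolean operators behave, using a tiny made-up index in Python: each word maps to the set of page IDs containing it, so AND, OR and NOT become set intersection, union and difference.

```python
# Boolean retrieval sketch: an index entry maps a word to the set of
# page IDs containing it, so boolean operators become set operations.
index = {  # tiny made-up index for illustration
    "banana": {1, 3},
    "plant":  {1, 2, 3},
    "woody":  {2},
}
all_pages = {1, 2, 3}

def lookup(word):
    return index.get(word, set())

print(lookup("banana") & lookup("plant"))    # banana AND plant -> {1, 3}
print(lookup("banana") | lookup("woody"))    # banana OR woody  -> {1, 2, 3}
print(lookup("plant") - lookup("woody"))     # plant NOT woody  -> {1, 3}
print(all_pages - lookup("woody"))           # NOT woody        -> {1, 3}
```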
The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. There are two main types of search engine that have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively. The other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.
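A minimal sketch of that second approach in Python, assuming naive whitespace tokenisation and a few made-up documents: the inverted index maps every word found in the analysed texts back to the documents that contain it, which is exactly the structure the boolean queries above are answered from.

```python
# Inverted index sketch: analyse each document's text and record, for
# every word, which documents it appears in.
from collections import defaultdict

docs = {  # made-up documents for illustration
    1: "banana plants are giant herbs",
    2: "search engines build an inverted index",
    3: "an index lets information be found quickly",
}

inverted = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():   # naive whitespace tokenisation
        inverted[word].add(doc_id)

print(sorted(inverted["index"]))   # -> [2, 3]
print(sorted(inverted["banana"]))  # -> [1]
```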
Most Web search engines are commercial ventures supported by advertising revenue and, as a result, some employ the practice of allowing advertisers to pay money to have their listings ranked higher in search results. Those search engines which do not accept money for their search engine results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.

WEB SEARCH ENGINE (ICTL)

DEFINITION
A web search engine is designed to search for information on the World Wide Web and FTP servers. The search results are generally presented in a list of results and are often called hits. The information may consist of web pages, images, information and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained by human editors, search engines operate algorithmically or use a mixture of algorithmic and human input.


HISTORY

During the early development of the web, there was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One historical snapshot from 1992 remains.[3] As more webservers went online the central list could not keep up. On the NCSA site new servers were announced under the title "What's New!"[4]
The very first tool used for searching on the Internet was Archie.[5] The name stands for "archive" without the "v". It was created in 1990 by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites since the amount of data was so limited it could be readily searched manually.
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.
In the summer of 1993, no search engine existed yet for the web, though numerous specialized catalogues were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that would periodically mirror these pages and rewrite them into a standard format which formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993.[6]
In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called 'Wandex'. The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine Aliweb appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format.
JumpStation (released in December 1993[7]) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform on which it ran, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered.
One of the first "full text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it let users search for any word in any webpage, which has become the standard for all major search engines since. It was also the first one to be widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.
Soon after, many search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Yahoo! was among the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than full-text copies of web pages. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996, Netscape was looking to give a single search engine an exclusive deal to be the featured search engine on Netscape's web browser. There was so much interest that instead a deal was struck with Netscape by five of the major search engines, where for $5 million per year each search engine would be in a rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.[8][9]
Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s.[10] Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine, and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001.
Around 2000, Google's search engine rose to prominence. The company achieved better results for many searches with an innovation called PageRank. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link to them, on the premise that good or desirable pages are linked to more than others. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal.
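The idea behind the iteration can be sketched in a few lines of Python. This is a textbook-style simplification with a made-up three-page link graph and a conventional damping factor of 0.85, not Google's actual implementation: each page's rank is spread evenly over its outgoing links, and the ranks are recomputed until they settle.

```python
# Simplified PageRank sketch: a page's rank is distributed evenly over
# its outgoing links, and ranks are recomputed until they settle.
links = {  # made-up link graph: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
pages = list(links)
damping = 0.85                        # common textbook damping factor
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):                   # iterate until ranks settle
    new_rank = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q])
                       for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

for p in sorted(rank, key=rank.get, reverse=True):
    print(p, round(rank[p], 3))       # c ranks highest: two pages link to it
```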
By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! switched to Google's search engine, which it used until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999 the site began to display listings from Looksmart blended with results from Inktomi, except for a short time in 1999 when results from AltaVista were used instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot).
Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology.

Saturday, 16 July 2011

FACTS

A giraffe's tongue is about 50 cm long.

The mouth produces 1 litre of saliva every day.



FACTS

Julius Caesar died from 23 stab wounds.

The name of the Nissan car comes from the Japanese Ni (2) and San (3): Nissan = 23.

An ant can lift a load 50 times its own body weight.

FACTS

Sharks lose more than 6,000 teeth every year, and their new teeth grow in within just 24 hours.

FACTS

THE MOST WIDELY USED NAME

Muhammad

FACTS

THE TALLEST BUILDING IN THE WORLD

According to the latest statistics, it is the Burj Khalifa (formerly the Burj Dubai), standing 828 metres tall.
But before long another building now in the planning stage, the Nakheel Tower, will become the tallest building in Dubai at 1,000 metres.

Friday, 15 July 2011

FACTS

THE LARGEST CONTINENT IN THE WORLD
The continent of Asia, covering 44 million square kilometres
THE LARGEST OCEAN IN THE WORLD
The Pacific Ocean
THE LONGEST RIVER
The Nile in Egypt, at 6,650 km long
THE LARGEST COUNTRY IN THE WORLD
Russia

FACTS

THE FASTEST LAND ANIMAL IN THE WORLD
The cheetah, with a speed of 112 km/h
THE SLOWEST ANIMAL
The sloth, which takes a month to travel 1 km