
In-Service Training for Library Professionals

Jointly organized by

ADINET & INFLIBNET


June 20, 2009

Module 1

IT Skills Enhancement

Course Material
Contents

1. The Internet History
2. Internet Functions
3. Strategic Searching on the Web
4. Database Creation for Libraries based on Standards
5. Blogs
6. Free E-resources on the Internet


The Internet History
Compiled & abridged by
Nishtha Anilkumar1

The growth of the Internet has been phenomenal. Once the preserve of the
scientific and military communities, the Internet has now blossomed into a vehicle
of expression and research for the common person. No area has remained
untouched by the Internet - be it health, travel, banking or business.

Following commercialization and the introduction of privately run Internet service
providers (ISPs) in the 1980s, and its expansion into popular use in the 1990s, the
Internet has had a drastic impact on culture and commerce. This includes the
rise of near-instant communication by email, text-based discussion forums, and
the World Wide Web. Investor speculation in the new markets created by these
innovations led to the inflation and collapse of the dot-com bubble - a major
market collapse. But despite this, the Internet continues to grow. Let us take a
look at the genesis of the Internet.

1. In the Beginning

Some 45 years ago the search for knowledge was no less insatiable, but the
storage, collation, selection and retrieval technologies were rudimentary and the
expense enormous by today's standards. Sixty-five years ago, with World War II at an
end and the might, energy and focused intellect of nations no longer consumed by
waging war, the first computers were being built along with man-machine interfaces.
It was at this time that visionaries first hinted at the possibilities of extending
human intellect by automating mundane, repetitive processes, devolving them to
machines. One such man, Vannevar Bush, in his 1945 essay "As We May Think", envisaged a
time when a machine called a 'memex' might enhance human memory by the
storage and retrieval of documents linked by association, in much the same way
as the cognitive processes of the brain link and reinforce memories by association.

2. Post-War Development

Bush's contribution to computing science, although remarkable, was far less
critical than his efforts to unite the military and scientific communities
with business leaders, resulting in the birth of the National Defense Research
Committee (NDRC), which was later absorbed into the Office of Scientific Research
and Development (OSRD). In short, Bush galvanised research into technology as
the key determinant in winning the Second World War and established respect
for science within the military.

1
Nishtha Anilkumar, Physical Research Laboratory, Ahmedabad
Email: nishtha@prl.res.in

A few years after the war the National Science Foundation (NSF) was set up,
paving the way for subsequent government-backed scientific institutions and
ensuring the American nation's commitment to scientific research. Then in 1958,
perhaps in direct response to the Soviet launch of Sputnik, the Advanced
Research Projects Agency (ARPA) was created, and, in 1962, it employed a
psychologist by the name of Joseph Licklider. He built upon Bush's contributions
by presaging the development of the modern PC and computer networking.

Having acquired a computer from the US Air Force and taken charge of a couple of
research teams, Licklider initiated research contracts with leading computer institutions
and companies that would later go on to form the ARPANET and lay down the
foundations of the first networked computing group. Together they overcame the
problems of connecting computers from different manufacturers whose disparate
communications protocols made direct communication difficult, if not impossible.

It is interesting to note that Licklider (known as 'Lick') was not primarily a computer
man; he was a psychologist interested in the functioning of human thought, but his
considerations on the working of the human mind brought him into the fold of
computing as a natural extension of his interest.

3. Other Key Players

Another key player, Douglas Engelbart, entered web history at this point. After
gaining his Ph.D. in electrical engineering and an assistant professorship at
Berkeley, he set up a research laboratory - the Augmentation Research Center -
to examine the human interface and storage and retrieval systems. With ARPA
funding it produced NLS (the oNLine System), the first system to use hypertext
(a term coined by Ted Nelson in 1965) for the collation of documents, and
Engelbart is credited as the developer of the first mouse, or pointing device.

Credit must be given to another thinker too, Paul Baran, for conceiving the use of
packets - small chunks of a message which could be reconstituted at the destination -
upon which current Internet transmission and reception is based. Working at the
RAND Corporation, with funding from government grants into Cold War
technology, Baran examined the workings of data transmission systems,
specifically their survivability in the event of nuclear attack. He turned to the
idea of distributed networks comprising numerous interconnected nodes: should
one node fail, the remainder of the network would still function. Across this
network, packets of information would be routed and switched to take the
optimum route and reconstructed at their destination into the original whole
message. Modern-day packet switching is handled automatically by routers.

4. ARPANET

As computer hardware became available, the challenge of connecting machines to
make better use of the facilities became a focus of attention. ARPA engaged a
young networking specialist, Larry Roberts, to lead a team responsible for linking
computers via telephone lines. Four university and research sites would be
connected, and it was decided to build Interface Message Processors (IMPs,
devised by Wesley Clark) - smaller computers talking a common language,
dedicated to handling the interfacing between their hosts and the network. Thus
the first gateways were constructed, and the precursor to the Internet was born
under the name of ARPANET in 1969.

The '70s saw the emergence of the first networks. As the ARPANET grew, it
adopted the Network Control Protocol (NCP) on its host computers, and the File
Transfer Protocol (FTP) was released by the Network Working Group as a
user-transparent mechanism for sharing files between host computers.

And, significantly, the first Terminal Interface Processor (TIP) was implemented,
permitting computer terminals to connect directly to ARPANET. Users at various
sites could log on to the Network and request data from a number of host
computers.

5. Communications Protocols

In 1972 Vinton Cerf was called to the chairmanship of the newly formed Inter-
Networking Group (INWG), a team set up to develop standards for the
ARPANET. He and his team built upon the NCP communications system and
devised TCP (Transmission Control Protocol) in an effort to facilitate
communications between the ever-growing number of networks now appearing -
satellite, radio, ground-based like Ethernet, and so on.
They conceived of a protocol that could be adopted by all gateway computers
and hosts alike which would eliminate the tedious process of developing specific
interfaces to diverse systems. They envisaged an envelope of information, a
‘datagram’, whose contents would be immaterial to the transmission process,
being processed and routed until they reached their destination and only then
opened and read by the recipient host computer. In this way different networks
could be linked together to form a network of networks.

By the late '70s the final protocol had been developed - TCP/IP (Transmission
Control Protocol/Internet Protocol) - which would become the standard for
Internet communications.

Ethernet

One final piece of computer networking came together under Bob Metcalfe:
Ethernet. He submitted a dissertation on the ARPANET and packet-switching
networks for his Harvard doctorate but was disappointed to have the paper
rejected. After taking a position at Xerox's Palo Alto Research Center
(PARC) he read a paper on ALOHAnet, the University of Hawaii's radio network.
ALOHAnet was experiencing problems with packet collision (information was being
lost due to the nature of radio broadcasting). Metcalfe examined the problem,
refined the principles of handling packet collisions, adopted cable as the
communications medium, formed 3Com and marketed his invention as Ethernet.
The take-up was almost immediate, and the '80s witnessed the explosion of Local
Area Networks (LANs). First educational establishments, then businesses,
employed Ethernet as the communications networking standard, and once these
networks were connected through communications servers to the Internet, the
World Wide Web was just an initiative away.

6. Birth of the Browser

In fact, it was ready and waiting in the wings. Tim Berners-Lee (now Sir Tim)
wrote a program called ENQUIRE (named after the Victorian reference book
'Enquire Within Upon Everything') in 1980 whilst contracted to
CERN, the particle physics laboratory in Geneva. He needed some means to
collate his own and his colleagues’ information – notes, statistics, results, papers
– the plethora of output generated by the mass of scientists both at the institution
and located across the globe at various research centres. The seed was sown
and upon his return to CERN after other research, he set to work to resolve the
problems associated with diverse communities of scientists sharing data between
themselves, especially as many were reluctant to take on the additional workload
of structuring their output to accommodate CERN’s document architecture
format.

By 1989 the Internet was well established, LANs proliferated in business -
especially with the introduction of personal computers (PCs) - and the adoption of
Microsoft's ubiquitous Windows operating system meant a stable platform for
users to create, store and share information. Tim Berners-Lee submitted a paper
to CERN's board for evaluation, 'Information Management: A Proposal', wherein
he detailed and encouraged the adoption of hypertext as the means to manage
and collate the vast sum of information held by CERN and other scientific and
business establishments. Sadly, it sparked little interest, but he persevered and in
1990 wrote the Hypertext Transfer Protocol (HTTP) along with a way of
identifying unique document Internet addresses, the URI or Uniform Resource
Identifier. To view retrieved documents he wrote a browser, 'WorldWideWeb', and
to store and transmit them, the first web server.

The World Wide Web

CERN remained indifferent to his system, so Berners-Lee took the next logical step:
he distributed the web server and browser software on the Internet. The spontaneous
take-up by computer enthusiasts was immediate, and the World Wide Web came
into being.

The browser he created was tied to a specific make of computer, the NeXT; what
was required was a browser suited to different machines and operating systems -
Unix, the PC and the Mac - specifically so that businesses and governments,
who were increasingly using the Web to manage their public information, could
guarantee that their users could access it.

Soon browsers for different platforms started appearing, Erwise and Viola for
Unix, Samba for Macintosh and … Mosaic for Unix, Mac and PC, created by
Marc Andreessen whilst at the National Center for Supercomputing Applications
(NCSA).

Mosaic took off in popularity to such an extent that it made the front page of the
New York Times' technology section in late 1993, and soon CompuServe, AOL and
Prodigy began offering dial-up Internet access.

Andreessen and Jim Clark (founder of Silicon Graphics Inc.) decided to form a
new company, Mosaic Communications Corporation, to develop a successor to
Mosaic. Since the original program belonged to the University of Illinois and had been
built with its time and money, they had to start from scratch. Andreessen and Clark
set about assembling a team of developers drawn from NCSA, and Netscape
Navigator was born. By 1996, three-quarters of web surfers were using it.

7. Other Applications of the Internet

E-mail (electronic mail) is the exchange of computer-stored messages by
telecommunication. (Some publications spell it email; we prefer the currently
more established spelling of e-mail.) E-mail messages are usually encoded in
ASCII text. However, you can also send non-text files, such as graphic images
and sound files, as attachments sent in binary streams. E-mail was one of the
first uses of the Internet and is still the most popular use. A large percentage of
the total traffic over the Internet is e-mail. E-mail can also be exchanged between
online service provider users and in networks other than the Internet, both public
and private.

E-mail can be distributed to lists of people as well as to individuals. A shared
distribution list can be managed by using an e-mail reflector. Some mailing lists
allow you to subscribe by sending a request to the mailing list administrator. A
mailing list that is administered automatically is called a list server.

E-mail is supported by protocols in the Transmission Control Protocol/Internet
Protocol (TCP/IP) suite. A popular protocol for sending e-mail is the Simple Mail
Transfer Protocol (SMTP) and a popular protocol for receiving it is POP3. Both
Netscape and Microsoft include an e-mail utility with their Web browsers.
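As a small illustration, sending a message over SMTP takes only a few lines of Python (the addresses and server name below are placeholders, not real hosts):

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "librarian@example.org"      # placeholder addresses
    msg["To"] = "reader@example.org"
    msg["Subject"] = "New arrivals"
    msg.set_content("The June accession list is now available in the library.")

    # "mail.example.org" stands in for your institution's SMTP server.
    with smtplib.SMTP("mail.example.org", 25) as server:
        server.send_message(msg)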

References:

History of the Internet. Wikipedia. Accessible at http://en.wikipedia.org/wiki/History_of_the_Internet

Kristula, D. The History of the Internet. Accessible at http://www.davesite.com/webstation/net-history.shtml

Howe, W. A Brief History of the Internet. Accessible at http://www.walthowe.com/navnet/history.html

Internet Functions
Compiled & abridged by
Nishtha Anilkumar1

It is important to remember that the Internet is a network of computer networks
interconnected by communications lines of various compositions and speeds.
Interspersed across this immense network are routers, which either guide traffic
to specific destinations or keep it within well-defined areas. This vastness of
scale can be distilled into two basic actions: requests for information and the
servicing of such requests, which forms the relationship between the two types of
computer using the Internet: clients and servers. Whether connected to a local
area network (LAN) at a place of business or attached by cable modem from
home, computers requesting information across a network or the Web are
generally regarded as clients; machines supplying the information are servers. In
practice the distinction is less polarized, with many computers both requesting
and delivering information, but the premise forms the basis of the Internet.

Servers often perform specific duties: web servers hosting websites, email
servers forwarding and collecting email, FTP (File Transfer Protocol) servers
uploading and downloading files.

1. Web Access

Access to the Web for home users is achieved by dial-up modem, cable
(broadband or ADSL) or wireless connection to their ISP (Internet Service
Provider); business users will typically be connected to a local area network and
gain access via a communications server or gateway, which is again linked
through an ISP to the Web. ISPs themselves may be connected to larger ISPs,
leasing high speed fibre-optic communications lines. Each of these forms a
gateway to the Web with the largest maintaining the ‘backbones’ of the Web
through which run the international ‘pipes’ connecting the world’s networks.

2. Addressing the Web

TCP/IP (Transmission Control Protocol/Internet Protocol) is the governing set of
protocols used for transmitting information over the Internet. It establishes and
manages the connection between two computers and the packets of data sent
between them.
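A small Python sketch makes the idea concrete: open a TCP connection to a web server, send a minimal request, and read part of the reply (example.com is a domain reserved for documentation):

    import socket

    # Establish a TCP connection to port 80, the standard web port...
    with socket.create_connection(("example.com", 80)) as sock:
        # ...send a minimal HTTP request over it...
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        # ...and read the first part of the server's reply.
        print(sock.recv(1024).decode("ascii", errors="replace"))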

1
Nishtha Anilkumar, Physical Research Laboratory, Ahmedabad
Email: nishtha@prl.res.in

Each computer connected to the Internet has a unique IP address assigned to it,
either dynamically at the moment of connection or for a period of a day or so, or
(for all intents and purposes) a fixed or static address like that assigned to a web
or name server hosting websites. The current version of IP, version 4, allows for
4.3 billion unique addresses - thought more than adequate a few years ago but,
as there are now only a billion left, no longer sufficient to address not only the
volume of new users and hosts coming online but also the influx of new
technologies demanding their own IP addresses, such as smart Internet-enabled
machines like auto-ordering fridges, Pepsi dispensers, media centres and now
Internet phones. However, the shortfall is being remedied with the emergence of
IPv6, whose 128-bit addresses provide roughly 3.4 x 10^38 slots, which not only
guarantees practically limitless web access but also offers built-in support for
security encryption (IPsec).
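The arithmetic behind these figures is easy to verify (Python):

    print(2 ** 32)     # IPv4: 4,294,967,296 addresses (about 4.3 billion)
    print(2 ** 128)    # IPv6: about 3.4 x 10**38 addresses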

An IP address looks like 194.79.28.133, a cluster of four numbers known as
octets. People don't think of addresses in such a way - although they have been
forced to for some time with phone and cell numbers and their PINs for credit
cards - but, as with email, they use names as mnemonics. As the Internet grew, it
became obvious that users seeking specific machines would need some method of
identifying and recalling computers quite apart from IP addresses.

3. Domain Name System

The Domain Name System (DNS) was conceived in 1984 - basically a lookup
table translating human-readable names into machine-usable IP addresses.
Locating a website as www.yourbusiness.co.uk rather than entering 123.23.48.146
in the browser address bar makes eminently more sense. These translation
tables - name servers - are dotted across the Internet and contain specific
references to website/IP addresses on their own local list, pointers to other name
servers that may be able to locate the desired computer should it not be found
locally, and a cache (temporary list) of recently requested domain names.
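The translation itself can be exercised in a single call; a minimal Python sketch (www.inflibnet.ac.in is used purely as an illustration, and the address returned will vary):

    import socket

    # Ask the local resolver - and, behind it, the DNS - for the IP address
    # registered for a host name.
    print(socket.gethostbyname("www.inflibnet.ac.in"))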

Name servers are maintained and updated on a daily basis as IP addresses
change or are added when new websites come online. Millions of people and
automated systems maintain this distributed naming system worldwide, and it is
accessed by billions of surfers each day, requesting not just websites but email
addresses and FTP servers. It is the biggest and most active distributed
database in the world.

Obviously a single name server holding all Internet addresses would be
immediately brought to its knees, so there are several servers duplicating domain
addresses at various levels of the system - hundreds of thousands worldwide -
which, as well as speeding up the process of web access, serve as a layer of
built-in redundancy should local failure occur.

Not all name servers are updated immediately - which is why a new website is
not instantly visible across the Internet. Additions to name server lists take time
to propagate around the world but are usually complete within a day or two.

4. Domain Management

The country-specific organisations employ registrars, businesses accredited to
register and lease domain names to companies and individuals.

Nowadays the registration process is automated and remarkably simple: choose
a domain name, check that it is not already registered, select the lease period and
pay for it.

The domain is then added to the registrar's local domain name server and
propagated to the world's root name servers. Whether a website exists for the
domain is immaterial; its potential existence and location are described and
forwarded. Web hosting companies may or may not be registrars, which means a
domain may be registered with one company but hosted - made visible to the
Internet through a web server - by another. In this instance, the default name
server list held by the registrar must be changed to point to the name servers
owned by the hosting company.

5. Driving the Web

Visionaries in the scientific, military and business communities contributed to the
World Wide Web as we now know it. However, it is important to note that the
World Wide Web is not the Internet: it is a subset, designed specifically for the
universal interchange and dissemination of information, although the terms have
become synonymous to many users. To put it another way, all Internet users
have access to the Web, but not conversely, since some areas of the Internet are
restricted - many scientific, military, educational and business networks
require privileged access to non-public areas, areas often dedicated to research
and development.

6. Faster Communications

Bandwidth, the capacity of a network to carry information, depends on a number
of factors which are predetermined and usually hardware-limited. The transport
protocol managing the movement of data (packets) around the Web, TCP/IP, is a
variable factor. A user may well enjoy a high-speed broadband link to their ISP,
but from there outwards there is no guaranteeing the speed or capacity of
subsequent Internet connections to, say, the streaming video server you subscribe to.

TCP/IP was developed in the mid-'70s and governs all Internet communications.
It has remained largely unchanged. Its strength - and weakness - lies in its
ability to adjust data transmission to meet Internet conditions, namely congestion,
transmission urgency and quality. It does this by re-requesting information when
it does not receive confirmation of receipt within a certain time, but it doubles the
wait time after each re-request in response to its congestion-control algorithms.
This is often why file downloads may begin with a burst of activity and then
deteriorate to frustrating slowness.
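This doubling of the wait time is a form of exponential backoff. A simplified model of the behaviour (illustrative Python only; real TCP retransmission lives inside the operating system's network stack):

    import time

    def request_with_backoff(send_request, max_tries=5):
        # Re-request until receipt is confirmed, doubling the wait each time.
        wait = 1.0
        for _ in range(max_tries):
            if send_request():       # returns True once receipt is confirmed
                return True
            time.sleep(wait)         # back off to ease congestion
            wait *= 2                # double the wait after each re-request
        return False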

References:

Schuler, R. How Does the Internet Work? Accessible at http://www.theshulers.com/whitepapers/internet_whitepaper/index.html

Strategic Searching on the Web
Saroj Das1

0 Abstract:

Searching for pinpointed information is difficult given the sheer amount of information
available on the web. To search the web effectively, it is important to understand
search engines and searching techniques. This paper discusses some search
engines and search strategies that can reduce the effort of finding the required
information on the web, along with some website evaluation techniques.

Keywords: Search engine; Search Strategies; Site Evaluation

1. Introduction

Nowadays it is practically impossible to find anything on the web without
employing a search engine to assist us. The search engine's role in this process
is to narrow down the set of web pages that could contain the information
required and to provide alternative access points for users to initiate a
navigation session.

Search engines are currently the primary information gatekeepers of the web,
holding the precious key needed to unlock the web both for users who are
seeking information and for authors of web pages wishing to make their voice
heard.

As information gatekeepers, web search engines have the power to include and
exclude web sites and web pages from their indexes and to influence the ranking
of web pages on query result lists. Web search engines thus have a large
influence on what information users will associate with the web. It is therefore very
important, especially for library professionals, to thoroughly understand the
different aspects of search engines in order to retrieve precisely the desired
information.

2. Search Engine

NASA defines the term “Search” as, “A search is the organized pursuit of
information. Somewhere in a collection of documents, Web pages, and other
sources, there is information that you want to find, but you have no idea where it
is”.

1
Saroj Das, Institute for Plasma Research, Gandhinagar
Email: saroj@ipr.res.in

So, a search engine is the means for finding the information that you are looking
for.

According to Wikipedia, a search engine in computing terms is an information
retrieval system designed to help find information stored on a computer system.
Search engines help to minimize the time required to find information and the
amount of information which must be consulted, akin to other techniques for
managing information overload. The most common form is the web search
engine, which searches for information on the WWW.

A search engine is a searchable database of Internet files collected by computer
programs called spiders or crawlers. An index is created from the collected files,
recording such elements as title, URL, language and full text. Results are ranked
by relevance.

Different types of search engines work differently, but they all perform three
basic functions:
Search: they search the Internet based on the keywords provided
Index: they maintain an index of the terms found and their locations
Retrieve: they retrieve the search terms, or combinations of search terms, indexed
in the database

3. How a Search Engine Works

[Figure 1: How a search engine works - spiders comb the Internet for documents; the documents and their addresses are collected and sent to the search engine's indexing software, which extracts information and stores it in a database; when the user enters keywords, matching documents are found in the database and the results are listed as hypertext links on a web page.]

1. Search engines use software called spiders, which search the Internet for
documents and their web addresses
2. The documents and web addresses are collected and sent to the search
engine's indexing software
3. The indexing software extracts information from the documents and stores
it in a database. The kind of information indexed depends on the particular
search engine: some index every word in a document, while others index the
document title only.
4. When a search is performed by entering keywords, the database is
searched for documents that match
5. The search engine lists the results as hypertext links in the form of a web
page
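The indexing and retrieval steps can be pictured with a toy inverted index - a minimal Python sketch, not how any production engine is actually built:

    # Step 3: index - map each word to the set of documents containing it.
    documents = {
        "doc1": "library automation standards",
        "doc2": "internet search engine standards",
    }
    index = {}
    for doc_id, text in documents.items():
        for word in text.split():
            index.setdefault(word, set()).add(doc_id)

    # Steps 4-5: search the index for a keyword and list the matches.
    print(index.get("standards", set()))   # {'doc1', 'doc2'}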

4. Subject Directories

Subject directories, unlike search engines, are created and maintained by human
editors and not spiders or robots. On the basis of selection criteria, the editors
review and select sites for inclusion in their directories. Most directories provide
searching capabilities.

The terms 'search engine' and 'directory' are often used interchangeably, though
they are not the same thing.

There are different categories of search engines depending on the information
indexed and the way it is indexed, i.e. either automated (using spiders)
or human-indexed; search engines that incorporate both approaches are known
as hybrid search engines.

5. Types of Search Engines

There are various types of search engines:

• General Search Engines
• Meta Search Engines
• Concept Categorization Search Engines
• Vertical Search Engines

General Search Engines: these cover the web as a whole, using their own
spiders or crawlers to collect web pages for their indexes.
Examples:
• Google
• Yahoo!

Meta Search Engines: these search multiple search engines
from a single search page.
Examples:
• Dogpile

• Vivisimo
• Fasteagle

Concept Categorization Search Engines: these organize results
into topical categories derived from the contents of the
search results. Concept categories can also help users learn about a topic.
Examples:
• Cuil
• Kartoo

Vertical Search Engines: these are highly specific search engines that search a
particular topic, industry, type of content or geographical location. They can reach
content of the Deep Web (or Invisible Web), which is generally difficult to find
through general search engines.
Examples:
• Scitopia (Science Specific)
• BizNar (Business Specific)

6. Search Strategies

Building an effective search strategy is the most important aspect of searching.
The more care and thought you put into your search strategy, the more relevant
your search results will be.

A well-designed search strategy:

• saves time in the long run
• allows you to search for information in many different places
• helps you find a larger amount of relevant information

The steps in an effective search strategy may include:

• Analyzing the search topic
• Generating keywords
• Finding alternate terms
• Choosing an appropriate search engine
• Applying basic search techniques such as Boolean operators (AND/OR), truncation, phrase searching and limiting (illustrated after this list)
• Reviewing results and refining the search
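For example, a search on digital libraries in India might be refined step by step as follows (illustrative queries only; the exact syntax varies from one search engine to another):

    digital library India                     - plain keyword search
    "digital library" AND India               - phrase search combined with Boolean AND
    "digital library" AND (India OR Indian)   - OR gathers alternate terms
    digital librar* AND India                 - truncation matches library, libraries, librarian...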

7. Evaluating Web Information

The Web provides information and data from all over the world. So much of the
information available is effectively anonymous that it is necessary to develop
skills to evaluate what is found on the web. Anyone can write anything on the
web; there is a wide range of documents available, written by a wide range of
authors, and excellent resources may sit alongside the most dubious ones.

There are certain criteria for evaluating information found on the Internet:

• Authorship
• Publishing body
• Bias
• Referrals
• Accuracy
• Currency

Authorship
Authorship is a major criterion for evaluating information found on the web. The
author of the information has to be identified. The author could be an individual
expert in an area of specialization, or an organization.

Publishing body
The publishing body is critical in the evaluation of Internet information. Identification
of the domain name (.edu, .org, .ac) or the URL can tell us about the location of the
information and the publishing body.

Bias
It is important to examine who is providing the information and what their point of
view or bias might be. Generally every writer wants to prove his/her point and
uses the data and information that assist him/her in doing so.

Referrals
This criterion considers what the author knows about his/her discipline and
its practices, and what sources he/she referred to. It allows us to evaluate the
author's scholarship or knowledge of the specific area of discussion.

Accuracy
Accuracy, or verifiability, of information is an important part of the evaluation
process. The information provided should be reliable and error-free. It also needs
to be examined whether the information has been checked or verified by an editorial
team or other responsible authority.

Currency
Currency refers to the timeliness of information. It is a very critical factor in
evaluating web information. It is important to examine the regularity with which
the data or information is updated.

8. Conclusion

The World Wide Web is a great platform on which to explore and accomplish research
on any topic. Anybody can put anything on the web easily and for free. Most of the
information found on the web is unregulated and unmonitored, causing great
difficulty in finding appropriate information. The role of search engines has
become vital in locating the right information, and using search engines
effectively is even more critical.

The revolutionary advancement in search engine technology is bringing a sea
change in the way information is searched. It is therefore important, especially for
library and information professionals, to learn searching techniques and keep
current with technological advancements in order to serve the user community
efficiently.

References:

1] Levene, Mark. An Introduction to Search Engines and Web Navigation. England: Addison-Wesley, 2006
2] http://www.internettutorials.net
3] http://www.lib.berkeley.edu/TeachingLib/Guides/Internet/Evaluate.html
4] http://www.library.jhu.edu/researchhelp/general/evaluating

Database Creation for Libraries based on Standards
Compiled by
Yatrik Patel1

0 Introduction

Standards are documents, developed and adopted by a consensus process, that
contain criteria, measures of comparison, best practices and/or processes which,
if followed, produce an intended result.

The library and information community has adopted a range of standards which
facilitate the creation and interchange of library data, promote the
interoperability of library systems, and support the operation of national and
international networks of libraries. Adherence to standards plays an important
role in improving users' access to the information resources held in library
collections and in the collections of other cultural institutions, or accessible on
the World Wide Web.
In a nutshell, implementing standards in libraries leads to the following benefits:

• Uniformity in records
• Better resource sharing between different resource centres
• Seamless exchange of records without data loss
• Support for creating national/international bibliographic union databases
• Freedom to adopt any system, with platform interoperability

1. Global Scenario

The original library standards were set by the American Library Association in the
late 19th century. ALA created standards relating to cataloguing and the creation
of catalogs. Today ALA is still involved in the development of cataloguing rules,
but the development of library standards has been taken up by the National
Information Standards Organization (NISO). NISO is a formal standards
development organization that is accredited by the American National Standards
Institute (ANSI).

NISO owns the original MARC record standard, originally ANSI Z39.2 and now
ANSI/NISO Z39.2, and was the conduit to getting that standard certified at the
international level through ISO as ISO 2709. The organization has about two
dozen active standards, ranging from the management of libraries (International
Standard Serial Numbering (ISSN); Z39.18, Scientific and Technical Reports -
Preparation, Presentation and Preservation) to information retrieval (Z39.50,
Information Retrieval: Application Service Definition and Protocol Specification;
OpenURL). Yet the technology standard most used by libraries, the MARC21
standard for library cataloguing, is not a NISO standard. It is instead managed
by the Library of Congress.

1
Yatrik Patel, Scientist C, INFLIBNET Centre, Ahmedabad
Email: yatrik@inflibnet.ac.in

The Library of Congress was the force behind the development of the ANSI standard
that defined the structure of the Machine Readable Cataloguing (MARC) record
in the 1960s, which was needed to create a computer-driven print-on-demand
service for the Library of Congress card program. Using that structure, the
Library of Congress developed the fields and subfields that would be used to
encode the content of a library catalog record. While the record structure of the
MARC record has not changed, and is still defined by ANSI/NISO Z39.2, the
content of the record has been under constant evolution in the Library of
Congress's care.

In addition to the MARC21 standard, the Library of Congress is the maintenance
agency for some other standards. As maintenance agency, the Library is the
central information point for the standards and any documentation related to
them. Another notable library organization that has engaged in the development
of standards is OCLC, through its sponsorship of the Dublin Core Metadata
Initiative (DCMI). Dublin Core is especially interesting in that it was expressly
developed as a non-library standard. Although Dublin Core could be used by
libraries, the organizers of the effort wanted to create a light-weight resource
description language that could be used by organizations that do not have the
history or experience of libraries. DC is not associated with any particular set of
cataloguing rules, so any community that needs to describe documents can make
use of it. In fact, Dublin Core is used widely today both within and outside the
library community.

Another committee effort, but one with wide adoption, is that of the Anglo-
American Cataloging Rules. Although not strictly a technology standard, AACR
has a profound effect on the technology of libraries. The Joint Steering
Committee for Revision of the Anglo-American Cataloguing Rules has six
member organizations representing the Anglo-American library world.

2. National Scenario

At the national level, the INFLIBNET Centre works as a catalyst for the academic
community, helping in every area of its development; as a result, libraries are
gradually adopting new technology and keeping pace with the latest trends.
Further, through the Centre's human resource development activity, unskilled
manpower is being trained in different specialized areas. Most importantly, the
Centre has recently joined the National Information Standards Organization
(NISO, USA - http://www.niso.org), a not-for-profit association accredited by
ANSI which identifies, develops, maintains and publishes technical standards to
manage information in our changing and ever-more digital environment. This will
help keep us up to date in the standards arena, where INFLIBNET will also play a
major role in the development of global standards and bring an Indian point of
view to bear on them. INFLIBNET is also a representative on the Bureau of Indian
Standards (BIS) Technical Committee MSD5. This will help the Centre educate
the nation in the development of standards and their implementation in the
country.

3. Standards Used in Libraries

3.1 Standards for Descriptive Cataloguing

The use of common cataloguing standards is of major importance in supporting
consistent access to library catalogues by users, and in promoting the
international sharing of cataloguing data, which greatly improves the efficiency of
the cataloguing process.

3.1.1 Anglo-American Cataloguing Rules (AACR)

The Anglo-American Cataloguing Rules (AACR) are designed for use in the
construction of catalogues and other lists in general libraries of all sizes. The
rules cover the description of, and the provision of access points for, all library
materials commonly collected at the present time.

The current text is the Second Edition, 2002 Revision (with 2003, 2004, and 2005
updates) which incorporates all changes approved by the JSC through February
2005. The rules are published by:

• The American Library Association
• The Canadian Library Association
• CILIP: Chartered Institute of Library and Information Professionals

AACR2 exists in several print versions, as well as an online version. Print
versions are available from the publishers. The online version is available only
via Cataloguer's Desktop from the Library of Congress. Various translations are
also available from other sources.

Principles of AACR include cataloguing from the item 'in hand' rather than
inferring information from external sources and the concept of the 'chief source of
information' which is preferred where conflicts exist.

3.1.2 ISBD (International Standard Bibliographic Description)

The International Standard Bibliographic Description (ISBD) is a set of rules
produced by the International Federation of Library Associations and
Institutions (IFLA) to describe a wide range of library materials within the
context of a catalog. The consolidated edition of the ISBD was published in 2007.
It superseded earlier separate ISBDs that were published for monographs, older
monographic publications, cartographic materials, serials and other continuing
resources, electronic resources, non-book materials, and printed music. IFLA's
ISBD Review Group is responsible for maintaining the ISBD.

One of the original purposes of the ISBD was to provide a standard form of
bibliographic description that could be used to exchange records internationally.
This would support IFLA's program of universal bibliographic control.

3.1.3 FRBR

FRBR is a structured framework for relating the data recorded in bibliographic
records to the needs of the users of those records. It identifies bibliographic
entities (such as works, persons, events, places, etc.), their attributes and the
relationships between them, and maps these to user tasks. It also identifies a
basic level of functionality for records created by national bibliographic agencies.

FRBR was approved by an IFLA committee in 1997, and is now being used to
inform the future development of the ISBDs and AACR, in teaching cataloguing,
and in the development of databases for several projects worldwide.

FRBR offers us a fresh perspective on the structure and relationships of
bibliographic and authority records, and also a more precise vocabulary to help
future cataloguing rule makers and system designers in meeting user needs.
Before FRBR our cataloguing rules tended to be very unclear about using the
words "work," "edition," or "item." Even in everyday language, we tend to say a
"book" when we may actually mean several things. For example, when we say
"book" to describe a physical object that has paper pages and a binding and can
sometimes be used to prop open a door or hold up a table leg, FRBR calls this
an "item."

3.2. Bibliographic Standards

3.2.1 CCF (Common Communication Format)

The CCF was developed to facilitate the exchange of bibliographic data between
organisations; the first edition was published by UNESCO in 1984, and a second
edition in 1988. At the same time it was decided that the scope of the CCF would
be extended to incorporate provisions for data elements recording the factual
information most frequently used for referral purposes. The third edition of the
CCF was accordingly divided into two volumes: CCF/B for bibliographic
information and CCF/F for factual information. The CCF was designed to follow
these basic principles:

• The structure of the format conforms to the international standard ISO 2709
• The core record consists of a small number of mandatory data elements
essential to bibliographic description, identified in a standard manner
• The mandatory elements are augmented by additional optional data
elements, identified in a standard manner, and
• A standard technique is used for accommodating levels, relationships, and
links between bibliographic entities

3.2.2 MARC (Machine Readable Cataloguing)

The MARC standards consist of the MARC formats - standards for the
representation and communication of bibliographic and related information in
machine-readable form - and related documentation. MARC defines a
bibliographic data format that was developed by Henriette Avram at the Library
of Congress beginning in the 1960s. It provides the protocol by which computers
exchange, use, and interpret bibliographic information, and its data elements
make up the foundation of most library catalogs used today.

The record structure of MARC is an implementation of ISO 2709, also known as
ANSI/NISO Z39.2. MARC records are composed of three elements: the record
structure, the content designation, and the data content of the record. The record
structure implements national and international standards (e.g., Z39.2, ISO 2709).
The content designation is "the codes and conventions established to identify
explicitly and characterize data elements within a record" and support their
manipulation. The content of the data elements in MARC records is defined by
standards outside the formats, such as AACR2 and the Library of Congress
Subject Headings.

The future of the MARC formats is a matter of some debate in the worldwide
library science community. On the one hand, the storage formats are quite
complex and are based on outdated technology. On the other, there is no
alternative bibliographic format with an equivalent degree of granularity.
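How rigid that record structure is can be seen from the leader, the fixed 24-character block that opens every ISO 2709 record. A minimal Python sketch (assuming a raw MARC record is already in hand as bytes):

    def parse_leader(record):
        # The leader occupies the first 24 bytes of every ISO 2709 record.
        leader = record[:24]
        record_length = int(leader[0:5])    # positions 0-4: total record length
        base_address = int(leader[12:17])   # positions 12-16: where the data begins
        return record_length, base_address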

3.2.3. Dublin Core

The need to improve the effectiveness of searching for information resources on
the World Wide Web has prompted the development of simplified metadata
standards which can be used by authors or others at relatively low cost. (The
term "metadata" refers to data which relates to one or more information
resources, supporting their discovery or management.) Prominent among these
standards is the Dublin Core metadata set.

Dublin Core is being developed as a generic metadata standard for use by
libraries, archives, government and other publishers of online information. The
standard may be applied broadly to citation and full-text descriptions, and may
support interoperability between a range of schemas, including MARC 21.

The Dublin Core standard includes two levels: Simple and Qualified. Simple
Dublin Core comprises fifteen elements; Qualified Dublin Core includes three
additional elements (Audience, Provenance and Rights Holder), as well as a
group of element refinements (also called qualifiers) that refine the semantics of
the elements in ways that may be useful in resource discovery.

The Dublin Core standard is still under development, in relation both to its
semantic aspects (rules for the content of the fields) and its syntax (rules for
structuring and expressing the fields).
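As an illustration, a Simple Dublin Core description can be sketched as a set of element-value pairs (a Python dictionary here; the values are hypothetical):

    record = {
        "dc:title": "IT Skills Enhancement: Course Material",
        "dc:creator": "ADINET & INFLIBNET",
        "dc:subject": "Library and information science",
        "dc:date": "2009-06-20",
        "dc:format": "text",
        "dc:language": "en",
    }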

3.3. Item Identifiers

A key type of data element is an identifier for a book, serial, journal article,
electronic resource, or other type of information resource.

3.3.1 International Standard Book Number (ISBN)

The ISBN (International Standard Book Number) is a unique machine-readable
identification number which marks any book unmistakably. The number is
defined in ISO Standard 2108. It has now been in use for over 35 years and has
revolutionised the international book trade; 170 countries and territories are
officially ISBN members. The ISBN accompanies a publication from its
production onwards.
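The final digit of an ISBN is a check digit. In the 13-digit form used since 2007, the first twelve digits are weighted alternately 1 and 3, and the check digit brings the weighted sum up to a multiple of 10; a minimal sketch:

    def isbn13_check_digit(first12):
        # Weight the digits 1, 3, 1, 3, ... and take the sum modulo 10.
        total = sum(int(d) * (1 if i % 2 == 0 else 3)
                    for i, d in enumerate(first12))
        return (10 - total % 10) % 10

    print(isbn13_check_digit("978030640615"))   # -> 7, giving 978-0-306-40615-7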

3.3.2 International Standard Serial Number (ISSN)

The ISSN (International Standard Serial Number) is an eight-digit number which
identifies periodical publications as such, including electronic serials.

The ISSN is a numeric code which is used as an identifier: it has no signification
in itself and does not contain any information referring to the origin or contents
of the publication.

3.3.3 Digital Object Identifier System (DOI)

The Digital Object Identifier (DOI) System is for identifying content objects in the
digital environment. DOI names are assigned to any entity for use on digital
networks. They are used to provide current information, including where they (or
information about them) can be found on the Internet. Information about a digital
object may change over time, including where to find it, but its DOI name will not
change.
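In practice a DOI name is resolved through the DOI proxy server, which redirects to the object's current location; a minimal Python sketch (10.1000/182 is the DOI of the DOI Handbook itself):

    from urllib.request import urlopen

    # The proxy at dx.doi.org redirects the DOI name to its current URL.
    with urlopen("http://dx.doi.org/10.1000/182") as response:
        print(response.geturl())    # the location the DOI currently points to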

The DOI System provides a framework for persistent identification, managing
intellectual content, managing metadata, linking customers with content
suppliers, facilitating electronic commerce, and enabling automated management
of media. DOI names can be used for any form of management of any data,
whether commercial or non-commercial.

The system is managed by the International DOI Foundation, an open
membership consortium including both commercial and non-commercial
partners, and has recently been accepted for standardisation within ISO.
Approximately 40 million DOI names have been assigned by DOI
System Registration Agencies in the US, Australasia, and Europe.

Using DOI names as identifiers makes managing intellectual property in a
networked environment much easier and more convenient, and allows the
construction of automated services and transactions.

3.4 System Interconnection

The development of library networks over the next decade will be based on the
interconnection of distributed library systems and the use of client/server
technology. The implementation of certain key technical standards will allow
particular applications, such as searching and interlibrary loan, to be managed
cooperatively between two computer systems. The key standards are Z39.50
and the Open Archives Initiative Protocol for Metadata Harvesting.

3.4.1 Z39.50 (ISO 23950)

The Z39.50 standard specifies the structures and rules which allow a client
machine (such as a personal computer or workstation) to search a database on a
server machine (such as a library catalogue) and retrieve records that are
identified as a result of such a search. The rather arcane designation for this
standard derives from the fact that it was the 50th standard developed by a
committee known as "Z39", the committee of the American National Standards
Institute that has the responsibility for library automation standards. While
technically a US national standard (Version 3 of which was adopted in 1995),
Z39.50 has also been copied or "cloned" as an international standard, known as
ISO 23950. The standard has been of major importance in supporting access to
distributed library databases and catalogues. The Library of Congress
undertakes the role of maintenance agency for the standard.

3.4.2 Open Archives Initiative (OAI) Metadata Harvesting Protocol

OAI-PMH (the Open Archives Initiative Protocol for Metadata Harvesting) is a
protocol developed by the Open Archives Initiative. It is used to harvest (or
collect) the metadata descriptions of the records in an archive so that services
can be built using metadata from many archives. The protocol is usually just
referred to as the OAI Protocol.

OAI-PMH version 1.0 was introduced to the public in January 2001 at a
workshop in Washington D.C., and another in February in Berlin, Germany.
Subsequent modifications to the XML standard by the W3C required making
minor modifications to OAI-PMH resulting in version 1.1. The current version, 2.0,
was released in June 2002. It contained several technical changes and
enhancements and is not backward compatible.

OAI-PMH is based on a client-server architecture, in which "harvesters" request
information on updated records from "repositories". Requests for data can be
based on a datestamp range, and can be restricted to named sets defined by the
provider. Data providers are required to provide XML metadata in Dublin Core
format, and may also provide it in other XML formats.
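A harvesting request is just an HTTP GET carrying a verb parameter; a minimal Python sketch (the base URL below is a placeholder for a real repository):

    from urllib.request import urlopen

    base_url = "http://example.org/oai"   # placeholder repository base URL
    request = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
    with urlopen(request) as response:
        print(response.read()[:300])      # start of the XML reply carrying DC records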

4 Concluding Notes

For libraries in India it is very difficult to adhere strictly to any standard, because
libraries are not well recognized by their institutions and lack skilled manpower.
The financial crunch, combined with the shortage of skilled staff, threatens Indian
libraries' ability to keep pace with the latest technology and with newcomers to
the field such as computer professionals. But to survive in the field, one has to
follow the standards and control the quality of automation.

Reference (All Active as on June 10, 2009)

Coyle, Karen. Libraries and Standards. Preprint, published in The Journal of
Academic Librarianship, Volume 31, Number 4, pages 373-376.
http://www.kcoyle.net/jal-31-4.html

Chandrakar, Rajesh et al. Standards for Creating Bibliographic Databases in
Indian Academic Libraries under INFLIBNET Umbrella. 2nd Convention
PLANNER - 2004, Manipur University, Imphal, 4-5 November 2004, Ahmedabad.
http://ir.inflibnet.ac.in:8080/jspui/handle/1944/423

Web Resources
http://www.nla.gov.au/services/standards.html
http://www.inflibnet.ac.in/publication/other/webpagelink.pdf
http://en.wikipedia.org/wiki/International_Standard_Bibliographic_Description
http://www.loc.gov/cds/downloads/FRBR.PDF
http://en.wikipedia.org/wiki/MARC_standards
http://www.loc.gov/marc/umb/
http://dublincore.org/
http://en.wikipedia.org/wiki/OAI-PMH
http://www.isbn-international.org
http://www.issn.org
http://www.doi.org
http://www.openarchives.org/OAI/openarchivesprotocol.html

Blogs
Saroj Das1

0 Abstract:

Web 2.0 has made a significant impact on the way information is generated and
disseminated. The blog, a Web 2.0 technology, is fast becoming a popular
communication tool among different segments of our society. This article
discusses some aspects of blogs, their creation and uses, especially in the
library environment.

Keywords: Weblog; Blogging; Web 2.0 tool

1. Introduction

The Internet has made a profound contribution to modern life. Today the web has
hundreds of millions of users. It has eliminated the limitations of service
availability within a physical building with limited opening hours and, most
significantly, for many users the web appears to be almost totally free. With
Library 2.0 making headway, it has become imperative for librarians to use Web
2.0 technologies to reach their users. Blogs are fast becoming a popular mode of
communication, and their global proliferation has enormous implications for
libraries. By enabling the rapid production and consumption of Web-based
publications, blogs may indeed represent an even greater milestone in the history
of publishing than Web pages. For libraries, the most obvious implication is
that blogs represent another form of publication and need to be treated as such.

2. Blogs?

Wikipedia says, "A blog (a contraction of the term weblog) is a type of website,
usually maintained by an individual with regular entries of commentary,
descriptions of events, or other material such as graphics or video. Entries are
commonly displayed in reverse-chronological order. 'Blog' can also be used as a
verb, meaning to maintain or add content to a blog."

Blog, short for weblog, is a website that contains brief entries arranged in
reverse chronological order. Blogs are diverse, ranging from personal diaries to
news sites that monitor developments on almost anything. According to Evan
Williams, the creator of Blogger, the blog concept is about three things: frequency,
brevity and personality.

1
Saroj Das, Institute for Plasma Research, Gandhinagar
Email: saroj@ipr.res.in

Blog, Blogging, Blogger and Blogrolling

Blog is a website consisting of dated entries arranged in reverse chronological
order, i.e. the most recent post appears first.

Blogging is the act of creating and maintaining a blog.

Blogger is a person who maintains a blog.

Blogrolling is the act of moving from one blog to another.

3. Why are Blogs Created?

• Easy to publish content
• Saves a lot of time in publishing new content
• No programming skills are required to start a blog
• Older posts are archived automatically
• New posts are refreshed on the main page
• Posts are arranged automatically in reverse chronology; there is no need to manually organize the contents
• Easy linking to other sites and blogs
• Bloggers can get instant feedback on their posts

4. How to Create a Blog?

There is plenty of blogging software available free or at low cost, and one does
not need to install any software to start a blog. Blogs are easy to create; one of
the easiest ways is to create a free account by registering with a service like:
• Blogger (http://www.blogger.com)
• Wordpress (http://www.wordpress.com)
• LiveJournal (http://www.livejournal.com)
• Typepad (http://www.typepad.com)

Example of creating a blog with Blogger.com

Start blogging in three steps: 1. Create an Account; 2. Name Your Blog; 3. Choose a Template.

Step 1 - Create an Account: You need to create an account with Google; this
is a very simple process. If you already have an account, just log in.
Step 2 - Name your Blog: Naming your blog appropriately is important, as the
name reflects the content and drives traffic towards your blog.
Step 3 - Choose a Template: Choosing the right template is very important, as
it decides the layout of your blog. There are ready-made templates to
choose from, and contents will be organized as per the template design.

Before jumping into creating a blog, one should find answers to these
questions:

• What type of blog is to be created?
• What is the purpose of creating the blog?
• Who are the target audience of the blog?
• What will be the content and scope of the blog?
• What is the key message to be conveyed through the blog?

5. Downsides of Blogs

• Most blogs are created and maintained by individuals, so they may include biased or inaccurate information
• Sometimes blogs are created to air frustration, which may affect others
• Blogging can encourage inappropriate or unprofessional behavior, such as discussing co-workers and superiors
• Misuse of intellectual property rights is an area of concern
• Blogs are volatile; the blogger can edit or delete posts at his/her wish

6. Libraries and Blogs

Librarians have embraced blogs as a convenient and effective tool to keep
themselves informed, to share thoughts and ideas with each other, and to
communicate with their patrons.

Libraries and librarians have used blogs in a number of ways:

1. Information/promotion of library services and activities
2. Personal comments on professional issues
3. Conference blogging for a general audience or aimed at participants
4. Raising professional issues through an association/network
5. Collaborating with other libraries
6. Interacting with the user community

Jenny Levine is considered to be the first librarian to operate a library-related blog;
she started her Librarian's Site Du Jour (http://jennyscybrary.com/sitejour.html)
in 1995.

Some Library-Related Blogs:

• ADINET Blog (http://alibnet.blogspot.com)
Views and thoughts about the library profession and professional development.

• Reference at Newman Library (http://referencenewman.blogspot.com)
"News and tips by and for staff providing reference services at the Newman
Library, Baruch College (New York, NY)."

• ACRL Blog (http://www.acrlblog.org)
The official blog of the Association of College & Research Libraries,
authored by a group of academic librarians.

• Scholarly Electronic Publishing Weblog
(http://www.digitalscholarship.org/sepb/sepw/sepw.htm)
Highlights new resources added to the Scholarly Electronic Publishing
Bibliography.

• Catalogablog (http://catalogablog.blogspot.com)
A blog about library cataloging, classification, metadata, subject access
and related topics.

• LISNews (http://www.lisnews.org)
A collaborative weblog devoted to current events and news in the world of
Library and Information Science.

• Phil Bradley's Blog (http://www.philbradley.typepad.com)
"Where librarians and the Internet meet: Internet searching, Web 2.0
resources, search engines and their development."

• Library Stuff: A weblog by Steven M. Cohen (http://www.librarystuff.net)
"The library weblog dedicated to resources for keeping current and
professional development."

• Beyond the Job (http://www.beyondthejob.org)
"Articles, job-hunting advice, professional development opportunities, and
other news and ideas on how to further your library career. Compiled by the
Library Job People, Sarah Johnson and Rachel Singer Gordon."

Blog Search Engines

• IceRocket (http://www.icerocket.com)
• BlogScope (http://www.blogscope.net)
• Google Blog Search (http://blogsearch.google.com)
• Technorati (http://technorati.com)
• Blogdigger (http://www.blogdigger.com)

• Blogpulse (http://www.blogpulse.com)
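
Services like those listed above generally index blogs through their RSS/Atom
feeds rather than by scraping pages. As a hedged illustration, the Python
sketch below fetches one blog's Atom feed using only the standard library and
lists its entries. The feed URL follows Blogger's usual /feeds/posts/default
pattern applied to the ADINET blog mentioned earlier; that URL is an
assumption about the target site, not something guaranteed here.

import urllib.request
import xml.etree.ElementTree as ET

# Assumed feed location, based on Blogger's common feed-URL convention.
FEED_URL = "http://alibnet.blogspot.com/feeds/posts/default"
ATOM = "{http://www.w3.org/2005/Atom}"  # Atom XML namespace

with urllib.request.urlopen(FEED_URL) as response:
    tree = ET.parse(response)

# Each Atom <entry> element is one blog post; print its date and title.
for entry in tree.getroot().iter(ATOM + "entry"):
    published = entry.findtext(ATOM + "published", default="?")
    title = entry.findtext(ATOM + "title", default="(untitled)")
    print(published[:10], "-", title)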

7. Conclusion

Blogs are becoming increasingly popular, and libraries and librarians are not
left behind; in fact, info-savvy librarians are often more aware of and
interested in blogging than the general public. Blogs have the potential to
transform any organization by accelerating its information and communication
processes. It is important for information professionals to understand and
evaluate the role of blogging in their institutions and to introduce blogs as
a promotional tool and a way of informing users about the collection and
services. Keeping abreast of technologies such as blogging and social
networking, and putting them to work in real-world environments, would not
only create a niche for librarians but would also enhance the overall worth
of the library profession.

References:

1. Fichter, Darlene. Why and How to Use Blogs to Promote Your Library's
Services. Available at: http://www.infotoday.com/mls/nov03/fichter.html

2. Bhatt, Jay. Blogging as a Tool: Innovative Approaches to Information
Access. Library Hi Tech News, No. 9, pp. 28-32, 2005.

3. Ojala, Marydee. Blogging: For knowledge sharing, management and
dissemination. Business Information Review, 22(4), pp. 269-276, 2005.

4. http://www.blogger.com

5. http://en.wikipedia.org/wiki/Blog

Free E-resources available on the Internet
Compiled by
Nishtha Anilkumar1

1. Free E-Resources

The following is a select list of free resources on the net which I have
often referred to in order to provide valuable service to users; it is by no
means an exhaustive one:

1.1 Wikipedia (http://en.wikipedia.org)

Wikipedia is a multilingual, web-based, free-content encyclopedia. The name
"Wikipedia" is a combination of wiki (a type of collaborative web site) and
encyclopedia. Wikipedia's articles provide links to guide the user to related
pages with additional information.

Wikipedia is written collaboratively by volunteers from all around the world.
Anyone with internet access can make changes to Wikipedia articles. Since its
creation in 2001, Wikipedia has grown rapidly into one of the largest
reference web sites, attracting at least 684 million visitors yearly.

1.2 Internet Public Library (http://www.ipl.org)

The Internet Public Library is a public service organization and a
learning/teaching centre founded at the University of Michigan School of
Information and hosted by Drexel University's College of Information Science
& Technology.

The IPL began in a graduate seminar in the School of Information and Library
Studies at the University of Michigan in the Winter 1995 semester. The idea was
twofold: (1) to ask some interesting and important questions about the
interconnections of libraries, librarians, and librarianship with a distributed
networked environment, and (2) to learn a lot about these issues by actually
designing and building something called the Internet Public Library.

From a large pool of interested students, a group of 35 was selected to make
up the class. Work began on January 5, 1995, and the Library opened on March
17, 70 days later.

1.3 Library of Congress Online Catalog (http://www.loc.gov)

1
Nishtha Anilkumar, Physical Research Laboratory, Ahmedabad
Email: nishtha@prl.res.in

The Library of Congress is the USA's oldest federal cultural institution and
serves as the research arm of Congress. It is also the largest library in the
world, with millions of books, recordings, photographs, maps and manuscripts
in its collections.

1.4 World Digital Library (http://www.wdl.org)

The World Digital Library is a cooperative project of the Library of Congress,
the United Nations Educational, Scientific and Cultural Organization
(UNESCO), and partner libraries, archives, and educational and cultural
institutions from the United States and around the world. The project brings
together on a single website rare and unique documents – books, journals,
manuscripts, maps, prints and photographs, films, and sound recordings – that
tell the story of the world's cultures. The site is intended for general
users, students, teachers, and scholars. The WDL interface operates in Arabic,
Chinese, English, French, Portuguese, Russian, and Spanish. The actual
documents on the site are presented in their original languages.

1.5 Infolibrarian (http://www.infolibrarian.com)


Infolibrarian is an initiative by a group of professionals working in the
library and computer fields. The objective behind the project is to give
working library and information science professionals, teachers and students
maximum information at one place. An attempt is made to collect information
from different sources, with a brief description wherever possible.

1.6 UNESCO Libraries Portal: An international gateway to information for
librarians and library users (http://portal.unesco.org)

The UNESCO Libraries Portal gives access to the websites of library
institutions around the world. It serves as an international gateway to
information for librarians and library users and promotes international
co-operation in this area.

1.7 Nobel Prize.org (http://nobelprize.org/nobel_prizes)


Every year since 1901 the Nobel Prize has been awarded for achievements in
physics, chemistry, physiology or medicine, literature and for peace. The Nobel
Prize is an international award administered by the Nobel Foundation in
Stockholm, Sweden. Each prize consists of a medal, a personal diploma, and a
cash award. The site gives a complete listing of all the Nobel Prizes awarded,
along with the Nobel speeches delivered by the laureates at the award
functions.

1.8 Reference Desk (http://www.refdesk.com)


Refdesk is a free and family friendly web site that indexes and reviews quality,
credible, and current web-based reference resources.

32
1.9 Higher Education (http://www.heguide.org.uk)

This website is designed to help you make higher education decisions, from
choosing and applying for courses to cash facts, taking a gap year and life
after HE.

Most sections include useful weblinks and frequently asked questions to help you
find the information you need.

1.10 Health Guide (http://healthguide.co.uk)

Health Guide represents ten years of work to support students and staff with
materials that are mainly UK-focused. The site is indexed automatically every
week, and over 5000 links in 10 directories are checked regularly to ensure
that Health Guide remains what it was originally intended to be: a great
health-related internet resource.

1.11 World Tourism (http://www.unwto.org/aboutwto/index.php)

The World Tourism Organization (UNWTO/OMT) is a specialized agency of the
United Nations and the leading international organization in the field of
tourism. It serves as a global forum for tourism policy issues and a
practical source of tourism know-how.

1.12 Directory of Open Access Journals (DOAJ) (http://www.doaj.org)

The aim of the Directory of Open Access Journals is to increase the visibility and
ease of use of open access scientific and scholarly journals thereby promoting
their increased usage and impact. The Directory aims to be comprehensive and
cover all open access scientific and scholarly journals that use a quality control
system to guarantee the content.

2. Full-Text E-Resources

The following full-text resources are very useful for keeping oneself updated
on the latest developments in the field of Library & Information Science.

2.1 First Monday (http://www.firstmonday.org)

First Monday is one of the first openly accessible, peer–reviewed journals on the
Internet, solely devoted to the Internet. Since its start in May 1996, First Monday
has published 765 papers in 127 issues; these papers were written by 905
different authors. In addition, seven special issues have appeared. The most
recent special issue is entitled Command Lines: The Emergence of Governance
in Global Cyberspace and it was edited by Sandra Braman and Thomas M.
Malaby. First Monday is indexed in Communication Abstracts, Computer &
Communications Security Abstracts, DoIS, eGranary Digital Library, INSPEC,
Information Science & Technology Abstracts, LISA, PAIS, and other services.

2.2 Issues in Science & Technology Librarianship (ISTL) (http://www.istl.org)

ISTL publishes substantive material of interest to science and technology
librarians. It serves as a vehicle for sci-tech librarians to share details
of successful programs and materials for the delivery of information
services, to offer background information and opinions on topics of current
interest, and to publish research and bibliographies on issues in science and
technology libraries.

2.3 D-Lib Magazine (http://www.dlib.org)

D-Lib Magazine is a solely electronic publication with a primary focus on
digital library research and development, including but not limited to new
technologies, applications, and contextual social and economic issues. The
magazine is currently published six times a year. The full contents of the
magazine, including all back issues, are available free of charge at the
D-Lib web site as well as at multiple mirror sites around the world.

The primary goal of the magazine is timely and efficient information exchange for
the digital library community.

