Module 1
IT Skills Enhancement
Course Material
The growth of the Internet has been phenomenal. Once the preserve of the
scientific and military communities, the Internet has now blossomed into a vehicle
of expression and research for the common person. No area has remained
untouched by the Internet, be it health, travel, banking or business.
1. In the Beginning
Some 45 years ago the search for knowledge was no less insatiable, but the
storage, collation, selection and retrieval technologies were rudimentary and the
expense enormous by today's standards. Some 65 years ago, with World War II at an
end and the might, energy and focused intellect of the nations that had waged the
war turning to peacetime pursuits, the first computers were being built along with
man-machine interfaces. It was at this time that visionaries first hinted at the
possibilities of extending human intellect by automating mundane, repetitive
processes and devolving them to machines. One such man, Vannevar Bush, in his
1945 essay "As We May Think", envisaged a time when a machine called a 'memex'
might enhance human memory by the storage and retrieval of documents linked by
association, in much the same way as the cognitive processes of the brain link and
reinforce memories by association.
2. Post-War Development
A few years after the war the National Science Foundation (NSF) was set up,
paving the way for subsequent government-backed scientific institutions and
ensuring the American nation's commitment to scientific research. Then in 1958,
perhaps in direct response to the Soviet launch of Sputnik, the Advanced
Research Projects Agency (ARPA) was created; in 1962 it employed a
psychologist by the name of Joseph Licklider, who built upon Bush's contributions
by presaging the development of the modern PC and computer networking.
Having acquired a computer from the US Air Force and heading up a couple of
research teams, he initiated research contracts with the leading computer
institutions and companies that would later go on to form the ARPANET and lay
the foundations of the first networked computing group. Together they overcame
the problems of connecting computers from different manufacturers, whose
disparate communications protocols made direct communication unsustainable,
if not impossible.
It is interesting to note that Lick was not primarily a computer man; he was a
psychologist interested in the functionality of human thought, but his
considerations on the workings of the human mind brought him into the fold of
computing as a natural extension of that interest.
Another key player, Douglas Engelbart, entered web history at this point. After
gaining his Ph.D. in electrical engineering and an Assistant Professorship at
Berkeley, he set up a research laboratory – the Augmentation Research Center –
to examine the human interface and storage and retrieval systems. With ARPA
funding it produced NLS (oNLine System), the first system to use hypertext
(a term coined by Ted Nelson in 1965) for the collation of documents, and
Engelbart is also credited as the developer of the first mouse, or pointing device.
Credit must be given to another thinker too, Paul Baran, for conceiving the use of
packets – small chunks of a message which can be reconstituted at their
destination – upon which current internet transmission and reception is based.
Working at the RAND Corporation, with funding from government grants into
Cold War technology, Baran examined the workings of data transmission
systems, specifically their survivability in the event of nuclear attack. He turned
to the idea of distributed networks comprising numerous interconnected nodes:
should one node fail, the remainder of the network would still function. Across
this network, packets of information would be routed and switched along the
optimum route and reconstructed at their destination into the original whole
message. Modern-day packet switching is controlled automatically by such
routers.
4. ARPANET
The '70s saw the emergence of the first networks. As the ARPANET grew it
adopted the Network Control Protocol (NCP) on its host computers, and the File
Transfer Protocol (FTP) was released by the Network Working Group as a
user-transparent mechanism for sharing files between host computers.
And, significantly, the first Terminal Interface Processor (TIP) was implemented,
permitting computer terminals to connect directly to the ARPANET. Users at
various sites could log on to the network and request data from a number of host
computers.
5. Communications Protocols
In 1972 Vinton Cerf was called to the chairmanship of the newly formed Inter-
Networking Group (INWG), a team set up to develop standards for the
ARPANET. He and his team built upon the NCP communications system and
devised TCP (Transmission Control Protocol) in an effort to facilitate
communications between the ever-growing number of networks now appearing –
satellite, radio, ground-based like Ethernet, and so on.
They conceived of a protocol that could be adopted by all gateway computers
and hosts alike which would eliminate the tedious process of developing specific
interfaces to diverse systems. They envisaged an envelope of information, a
‘datagram’, whose contents would be immaterial to the transmission process,
being processed and routed until they reached their destination and only then
opened and read by the recipient host computer. In this way different networks
could be linked together to form a network of networks.
By the late '70s the final protocol had been developed – TCP/IP (Transmission
Control Protocol/Internet Protocol) – which would become the standard for
internet communications.
Ethernet
One final piece of computer networking came together under Bob Metcalfe:
Ethernet. Metcalfe had submitted a dissertation on the ARPANET and packet-
switching networks for his Harvard graduate degree but was disappointed to have
the paper rejected. After taking a position at Xerox's Palo Alto Research Center
(PARC) he read a paper on Alohanet, the University of Hawaii's radio network.
Alohanet was experiencing problems with packet collision (information was being
lost due to the nature of radio broadcasting). Metcalfe examined the problem,
refined the principles of packet collision, adopted cable as the communications
medium, formed 3Com and marketed his invention as Ethernet.
The take-up was almost immediate, and the '80s witnessed the explosion of Local
Area Networks (LANs). First educational establishments and then businesses
adopted Ethernet as the business communications networking standard; once
these networks were connected through communications servers to the Internet,
the World Wide Web was just an initiative away.
In fact, it was ready and waiting in the wings. Tim Berners-Lee (now Sir Tim)
wrote a program called 'Enquire', named after the Victorian almanac Enquire
Within Upon Everything, in 1980 whilst contracted to CERN, the particle physics
laboratory in Geneva. He needed some means to collate his own and his
colleagues' information – notes, statistics, results, papers – the plethora of output
generated by the mass of scientists both at the institution and located across the
globe at various research centres. The seed was sown, and upon his return to
CERN after other research he set to work to resolve the problems associated
with diverse communities of scientists sharing data between themselves,
especially as many were reluctant to take on the additional workload of
structuring their output to accommodate CERN's document architecture format.
CERN remained lukewarm about his system, so Berners-Lee took the next logical
step: he distributed web server and browser software on the Internet. The
spontaneous take-up by computer enthusiasts was immediate, and the World
Wide Web came into being.
The browser he created was tied to a specific make of computer, the NeXT; what
was required was a browser suited to different machines and operating systems –
Unix, the PC and the Mac – specifically so that businesses and governments,
which were increasingly using the Web to manage their public information, could
guarantee that their users could use it.
Soon browsers for different platforms started appearing: Erwise and Viola for
Unix, Samba for the Macintosh and … Mosaic for Unix, Mac and PC, created by
Marc Andreessen whilst at the National Center for Supercomputing Applications
(NCSA).
Mosaic took off in popularity to such an extent that it made the front page of the
New York Times' technical section in late 1993, and soon CompuServe, AOL and
Prodigy began offering dial-up internet access.
Andreessen and Jim Clark (founder of Silicon Graphics Inc.) decided to form a
new company, Mosaic Communications Corporation, to develop a successor to
Mosaic. Since the original program belonged to the University of Illinois and had
been built with the university's time and money, they had to start from scratch.
Andreessen and Clark set about assembling a team of developers drawn from
NCSA, and Netscape Navigator was born. By 1996, three-quarters of web surfers
were using it.
Internet functions
Compiled & abridged by
Nishtha Anilkumar, Physical Research Laboratory, Ahmedabad (nishtha@prl.res.in)
Servers often perform specific duties: web servers hosting websites, email
servers forwarding and collecting email, FTP (File Transfer Protocol) servers
uploading and downloading files.
1. Web Access
Access to the Web for home users is achieved by dial-up modem, broadband
(cable or ADSL) or wireless connection to an ISP (Internet Service Provider);
business users will typically be connected to a local area network and gain
access via a communications server or gateway, which is in turn linked through
an ISP to the Web. ISPs themselves may be connected to larger ISPs,
leasing high speed fibre-optic communications lines. Each of these forms a
gateway to the Web with the largest maintaining the ‘backbones’ of the Web
through which run the international ‘pipes’ connecting the world’s networks.
Each computer connected to the Internet has a unique IP address assigned to it:
either a dynamic address, assigned at the moment of connection or for a period of
a day or so, or (for all intents and purposes) a fixed or static address like that
assigned to a web or name server hosting websites. The current version of IP,
version 4, allows for about 4.3 billion unique addresses – thought more than
adequate a few years ago but, with only around a billion left, no longer sufficient
to accommodate the volume of new users and hosts coming online, let alone the
influx of new technologies demanding IP addresses of their own, such as smart
internet-enabled machines (auto-ordering fridges, Pepsi dispensers, media
centres) and now internet phones. However, the shortfall is being remedied by
the emergence of IPv6, whose 128-bit addresses provide around 3.4 × 10^38
address slots, which not only guarantees practically limitless web access but also
offers intrinsic support for IPsec security.
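
To make those address-space figures concrete, here is a minimal sketch using
Python's standard ipaddress module; the two addresses parsed are purely
illustrative (the 2001:db8:: range is reserved for documentation examples).

    import ipaddress

    # Address-space sizes: IPv4 addresses are 32 bits, IPv6 addresses are 128 bits.
    print(2 ** 32)    # 4294967296, i.e. about 4.3 billion IPv4 addresses
    print(2 ** 128)   # about 3.4e38 IPv6 addresses

    # Parsing both forms with the standard library (illustrative addresses).
    v4 = ipaddress.ip_address("123.23.48.146")
    v6 = ipaddress.ip_address("2001:db8::1")
    print(v4.version, v6.version)   # prints: 4 6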
The Domain Name System (DNS) was conceived in 1984. It is basically a lookup
translation table converting human-understandable names into machine-readable
IP addresses. Locating a website by its name, www.yourbusiness.co.uk, rather
than entering 123.23.48.146 in the browser address bar, makes eminently more
sense. These translation tables – name servers – are dotted across the Internet
and contain specific references to website/IP addresses on their own local list,
pointers to other name servers that may be able to locate the desired computer
should it not be found locally, and a cache (temporary list) of recently requested
domain names.
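
As a minimal illustration of this name-to-address translation, the sketch below
performs a DNS lookup with Python's standard socket module; the hostname is
illustrative and any real domain could be substituted.

    import socket

    # Forward lookup: translate a human-readable name into an IP address via DNS.
    hostname = "www.example.com"   # illustrative
    ip_address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {ip_address}")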
Not all name servers are updated immediately, which is why a new website is not
instantly visible across the Internet. Additions to name server lists take time to
propagate around the world, but propagation is usually complete within a day or two.
4. Domain Management
The domain is then added to the registrar's local domain name server and
propagated to the world's root name servers. Whether a website exists for the
domain is immaterial; its potential existence and location are described and
forwarded. Web hosting companies may or may not be registrars, which means a
domain may be registered with one company but hosted – made visible to the
Internet through a web server – by another. In this instance the domain will
remain registered with the first company, and a change must be made to the
default name server list to point to the set of name servers owned by the hosting
company.
6. Faster Communications
TCP/IP was developed in the mid-'70s and governs all Internet communications;
it has remained largely unchanged. Its strength – and weakness – lies in its
ability to adjust data transmission to internet conditions, namely congestion,
transmission urgency and quality. It does this by re-requesting information when
it does not receive confirmation of receipt within a certain time, doubling the wait
time after each re-request in response to its congestion-control algorithms. This
is why file downloads often begin with a burst of activity and then deteriorate to
frustrating slowness.
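
A toy sketch of that doubling behaviour (exponential backoff) follows; send() and
ack_received() are hypothetical placeholders standing in for a real transport
layer, not an actual socket API.

    import time

    def send_with_backoff(send, ack_received, base_timeout=1.0, max_retries=5):
        """Retry an unacknowledged send, doubling the wait after each attempt."""
        timeout = base_timeout
        for _ in range(max_retries):
            send()                 # transmit the data (placeholder)
            time.sleep(timeout)    # wait for an acknowledgement
            if ack_received():     # receipt confirmed in time (placeholder)
                return True
            timeout *= 2           # double the wait, as TCP does under congestion
        return False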
Strategic Searching on the Web
Saroj Das, Institute for Plasma Research, Gandhinagar (saroj@ipr.res.in)
0 Abstract:
Searching for pinpointed information is difficult given the sheer amount of
information available on the web. To search the web effectively, it is very
important to understand search engines and searching techniques. This paper
discusses some search engines and search strategies that can minimize the effort
of finding the required information on the web, along with some website
evaluation techniques.
1. Introduction
Search engines are currently the primary information gatekeepers of the web,
holding the precious key needed to unlock the web both for users who are
seeking information and for authors of web pages wishing to make their voice
heard.
As information gatekeepers, web search engines have the power to include and
exclude web sites and web pages from their indexes and to influence the ranking
of web pages on query result lists. Web search engines thus have a large
influence on what information users will associate with the web. It is therefore
very important, especially for library professionals, to understand the different
aspects of search engines thoroughly, so as to retrieve precisely the desired
information.
2. Search Engine
NASA defines the term “Search” as, “A search is the organized pursuit of
information. Somewhere in a collection of documents, Web pages, and other
sources, there is information that you want to find, but you have no idea where it
is”.
So, a search engine is the means for finding the information that you are looking
for.
Different types of search engines work differently, but they perform the following
three basic functions:
Search: they search the Internet based on the keywords provided
Index: they maintain an index of the terms found and their locations
Retrieve: they retrieve the documents matching the search terms or combinations
of search terms indexed in the database
Figure 1: The user enters keywords, the matching documents are searched in the
database, and the results are listed as hypertext links in the form of a web page.
1. Search engines use software called spiders, which search the Internet for
documents and their Web addresses.
2. The documents and Web addresses are then collected and sent to the search
engine's indexing software.
3. The indexing software extracts information from the documents and stores it
in a database. The kind of information indexed depends on the particular
search engine: some index every word in a document; others index the
document title only.
4. When a search is performed by entering keywords, the database is searched
for documents that match.
5. The search engine lists the results as hypertext links in the form of a web
page.
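
As a toy illustration of steps 3 to 5, the sketch below builds an inverted index
over two invented documents and answers a keyword query from the index rather
than from the raw text; no real engine is this simple.

    # Two invented documents standing in for crawled web pages.
    documents = {
        "page1.html": "library automation standards and cataloguing",
        "page2.html": "search engines index documents for fast retrieval",
    }

    # Indexing: map every word to the set of documents containing it.
    index = {}
    for url, text in documents.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)

    # Retrieval: answer queries from the index, not from the raw documents.
    def search(keyword):
        return sorted(index.get(keyword.lower(), set()))

    print(search("standards"))   # ['page1.html']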
4. Subject Directories
Subject directories, unlike search engines, are created and maintained by human
editors and not spiders or robots. On the basis of selection criteria, the editors
review and select sites for inclusion in their directories. Most directories provide
searching capabilities.
The terms 'search engine' and 'directory' are often used interchangeably, though
they are not the same thing.
General Search Engines: these cover the web at large, using their own spiders or
crawlers to collect web pages for their own index.
Examples:
• Google
• Yahoo!
Meta Search Engines: these search multiple search engines from a single search
page.
Examples:
• Dogpile
• Vivisimo
• Fasteagle
Vertical Search Engines: these are highly specific search engines that search a
particular topic, industry, type of content or geographical location. They reach
content of the Deep Web (or Invisible Web), which is generally difficult to find
through general search engines.
Examples:
• Scitopia (Science Specific)
• BizNar (Business Specific)
6. Search Strategies
• Generating Keywords
7. Evaluating Web Information
The Web provides information and data from all over the world. So much of the
information available appears fairly anonymous that it is necessary to develop
skills to evaluate what is found on the web. Anyone can write anything on the
web, and there is a wide range of documents available, written by a wide range of
authors. Excellent resources may reside alongside the most dubious ones.
There are certain criteria for evaluating information found on the Internet:
• Authorship
• Publishing body
• Bias
• Referrals
• Accuracy
• Currency

• Authorship
Authorship is a major criterion for evaluation of information found on the web.
The author of information has to be identified. The author could be an individual
expert in an area of specialization or an organization.
• Publishing body
The publishing body is critical in the evaluation of Internet information.
Identification of the domain name (.edu, .org, .ac) or the URL can tell us about
the location of the information and the publishing body.
• Bias
It is important to examine who is providing the information and what might be
their point of view or bias. Generally every writer wants to prove his/her point and
uses the data and information that assists him/her in doing so.
• Referrals
This evaluation criterion suggests what the author knows about his/her discipline
and its practices, and what sources he/she referred to. It allows the reader to
evaluate the author's scholarship or knowledge of the specific area under
discussion.
• Accuracy
Accuracy or verifiability of information is an important part of the evaluation
process. The information provided should be reliable and error-free. It also needs
to be examined whether the information is checked or verified by some editorial
team or any responsible authority.
• Currency
Currency refers to the timeliness of information. It is a very critical factor in
evaluating web information. It is important to examine the regularity with which
the data or information is updated.
8. Conclusion
The World Wide Web is a great platform on which to explore and accomplish
research on any topic. Anybody can put anything on the web easily and for free.
Most of the information found on the web is unregulated and unmonitored,
causing great difficulty in finding appropriate information. The role of search
engines has become vital in locating the right information, and using search
engines effectively is even more critical.
Database Creation for Libraries based on Standards
Compiled by
Yatrik Patel, Scientist C, INFLIBNET Centre, Ahmedabad (yatrik@inflibnet.ac.in)
0 Introduction
The library and information community has adopted a range of standards which
facilitate the creation and interchange of library data, promote the
interoperability of library systems, and support the operation of national and
international networks of libraries. Adherence to standards plays an important
role in improving users' access to the information resources held in library
collections, in the collections of other cultural institutions, or accessible on the
World Wide Web.
In a nutshell, implementing standards in libraries leads to the following benefits:
• Uniformity in records
• Better resource sharing between different resource centres
• Seamless exchange of records without data loss
• Support for creating national/international bibliographic union databases
• Freedom to adopt any system, with platform interoperability
1. Global Scenario
The original library standards were set by the American Library Association
(ALA) in the late 19th century. ALA created standards relating to cataloguing
and the creation of catalogs. Today ALA is still involved in the development of
cataloguing rules, but the development of library standards has been taken up by
the National Information Standards Organization (NISO), a formal standards
development organization accredited by the American National Standards
Institute (ANSI).
NISO owns the original MARC record standard, originally ANSI Z39.2 and now
ANSI/NISO Z39.2, and was the conduit for getting that standard certified at the
international level through ISO as ISO 2709. The organization has about two
dozen active standards, ranging from the management of libraries (International
Standard Serial Numbering, ISSN; Z39.18, Scientific and Technical Reports:
Preparation, Presentation and Preservation) to information retrieval (Z39.50,
Information Retrieval: Application Service Definition and Protocol Specification;
OpenURL). Yet the technology standard most used by libraries, the MARC 21
standard for library cataloguing, is not a NISO standard; it is instead managed by
the Library of Congress.
The Library of Congress was the force behind the development of the ANSI
standard that defined the structure of the Machine Readable Cataloguing (MARC)
record in the 1960s, a structure needed to create a computer-driven
print-on-demand service for the Library of Congress card program. Using that
structure, the Library of Congress developed the fields and subfields that would
be used to encode the content of a library catalog record. While the record
structure of the MARC record has not changed, and is still defined by ANSI/NISO
Z39.2, the content of the record has been under constant evolution in the Library
of Congress's care.
Another committee effort, but one with wide adoption, is the Anglo-American
Cataloguing Rules. Although not strictly a technology standard, AACR
has a profound effect on the technology of libraries. The Joint Steering
Committee for Revision of the Anglo-American Cataloguing Rules has six
member organizations representing the Anglo-American library world.
2. National Scenario
information in our changing and ever-more digital environment. This will help to
keep us up to date in the standards arena, where INFLIBNET will also play a
major role in the development of global standards and bring an Indian point of
view to bear on them. INFLIBNET is also a member of the Bureau of Indian
Standards (BIS) Technical Committee MSD5. This will help the Centre to educate
the nation in the development of standards and their implementation across the
country.
The Anglo-American Cataloguing Rules (AACR) are designed for use in the
construction of catalogues and other lists in general libraries of all sizes. The
rules cover the description of, and the provision of access points for, all library
materials commonly collected at the present time.
The current text is the Second Edition, 2002 Revision (with 2003, 2004, and 2005
updates) which incorporates all changes approved by the JSC through February
2005. The rules are published by:
Principles of AACR include cataloguing from the item 'in hand' rather than
inferring information from external sources and the concept of the 'chief source of
information' which is preferred where conflicts exist.
It superseded the earlier separate ISBDs that were published for monographs,
older monographic publications, cartographic materials, serials and other
continuing resources, electronic resources, non-book materials, and printed
music. IFLA's ISBD Review Group is responsible for maintaining the ISBD.
One of the original purposes of the ISBD was to provide a standard form of
bibliographic description that could be used to exchange records internationally.
This would support IFLA's program of universal bibliographic control.
3.1.3 FRBR
FRBR was approved by an IFLA committee in 1997, and is now being used to
inform the future development of the ISBDs and AACR, in the teaching of
cataloguing, and in the development of databases for several projects worldwide.
The CCF was developed in order to facilitate the exchange of bibliographic data
between organisations; its first edition was published by UNESCO in 1984 and a
second edition in 1988. At the same time it was decided that the scope of the
CCF would be extended to incorporate provisions for the data elements most
frequently used to record factual information for referral purposes. The third
edition of the CCF was accordingly divided into two volumes: CCF/B for
bibliographic information and CCF/F for factual information. The CCF was
designed to follow these basic principles:
• The structure of the format conforms to the international standard ISO 2709
• The core record consists of a small number of mandatory data elements
essential to bibliographic description, identified in a standard manner
• The mandatory elements are augmented by additional optional data
elements, identified in a standard manner, and
• A standard technique is used for accommodating levels, relationships, and
links between bibliographic entities
The MARC standards consist of the MARC formats – standards for the
representation and communication of bibliographic and related information in
machine-readable form – and related documentation. MARC defines a
bibliographic data format that was developed by Henriette Avram at the Library
of Congress beginning in the 1960s. It provides the protocol by which computers
exchange, use, and interpret bibliographic information, and its data elements
make up the foundation of most library catalogs used today.
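
As an illustration of this field/subfield content model (not the binary ISO 2709
encoding), here is a sketch of a bibliographic record as a plain Python mapping;
the ISBN shown is invented.

    # A MARC-style record: three-digit tags identify fields,
    # letter codes identify subfields within them.
    record = {
        "020": {"a": "0000000000"},                          # ISBN (invented)
        "100": {"a": "Avram, Henriette D."},                 # main entry: personal name
        "245": {"a": "MARC, its history and implications"},  # title statement
        "260": {"b": "Library of Congress", "c": "1975"},    # publication details
    }

    for tag, subfields in sorted(record.items()):
        content = " ".join(f"${code} {value}" for code, value in subfields.items())
        print(f"{tag}  {content}")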
The future of the MARC formats is a matter of some debate in the worldwide
library science community. On the one hand, the storage formats are quite
complex and are based on outdated technology. On the other, there is no
alternative bibliographic format with an equivalent degree of granularity.
The Dublin Core standard includes two levels: Simple and Qualified. Simple
Dublin Core comprises fifteen elements; Qualified Dublin Core adds three further
elements (Audience, Provenance and Rights Holder), as well as a group of
element refinements (also called qualifiers) that refine the semantics of the
elements in ways that may be useful in resource discovery.
The Dublin Core standard is still under development, in relation both to its
semantic aspects (rules for the content of the fields) and its syntax (rules for
structuring and expressing the fields).
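
For illustration, here is a Simple Dublin Core description serialized as XML from
a short Python snippet; the element values are invented, while the namespace URI
is the standard one for the fifteen-element set.

    # Simple Dublin Core metadata for a hypothetical web document.
    dc_xml = """<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>An Introduction to Library Standards</dc:title>
      <dc:creator>A. N. Author</dc:creator>
      <dc:subject>Library metadata</dc:subject>
      <dc:date>2010-01-01</dc:date>
      <dc:format>text/html</dc:format>
      <dc:identifier>http://example.org/docs/123</dc:identifier>
    </metadata>"""
    print(dc_xml)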
A key type of data element is an identifier for a book, serial, journal article,
electronic resource, or other type of information resource.
The Digital Object Identifier (DOI) System identifies content objects in the digital
environment. DOI names are assigned to entities for use on digital networks and
are used to provide current information about those entities, including where
they (or information about them) can be found on the Internet. Information about
a digital object may change over time, including where to find it, but its DOI
name will not change. DOI names can be applied to any form of media and used
for any form of management of any data, whether commercial or non-commercial.
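
A minimal sketch of DOI resolution follows: the doi.org proxy redirects a DOI
name to the object's current location, so the name stays stable even when the
location changes. The DOI used, 10.1000/182, is the published example
identifying the DOI Handbook itself.

    import urllib.request

    doi = "10.1000/182"   # the DOI Handbook's own DOI, a published example
    request = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    with urllib.request.urlopen(request) as response:
        # urllib follows the proxy's redirect; response.url is the
        # location the DOI currently resolves to.
        print(response.url)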
The development of library networks over the next decade will be based on the
interconnection of distributed library systems and the use of client/server
technology. The implementation of certain key technical standards will allow
particular applications, such as searching and interlibrary loan, to be managed
cooperatively between two computer systems. The key standards are Z39.50 and
the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH).
The Z39.50 standard specifies the structures and rules which allow a client
machine (such as a personal computer or workstation) to search a database on a
server machine (such as a library catalogue) and retrieve records that are
identified as a result of such a search. The rather arcane designation for this
standard derives from the fact that it was the 50th standard developed by a
committee known as "Z39", the committee of the American National Standards
Institute that has the responsibility for library automation standards. While
technically a US national standard (Version 3 of which was adopted in 1995),
Z39.50 has also been copied or "cloned" as an international standard, known as
ISO 23950. The standard has been of major importance in supporting access to
distributed library databases and catalogues. The Library of Congress
undertakes the role of maintenance agency for the standard.
OAI-PMH version 1.0 was introduced to the public in January 2001 at a
workshop in Washington, D.C., and another in February in Berlin, Germany.
Subsequent modifications to the XML standard by the W3C required minor
modifications to OAI-PMH, resulting in version 1.1. The current version, 2.0, was
released in June 2002; it contained several technical changes and enhancements
and is not backward compatible.
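
Because OAI-PMH is plain HTTP, with the verb carried as a query parameter and
XML returned, a harvesting request can be sketched in a few lines; the repository
base URL below is illustrative.

    import urllib.request

    base_url = "http://example.org/oai"   # illustrative OAI-PMH endpoint
    # oai_dc (unqualified Dublin Core) is the one metadata format that
    # every OAI-PMH repository is required to support.
    url = f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8"))   # the XML response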
4 Concluding Notes
For libraries in India it is very difficult to stick to any one standard, because
libraries are not well recognized by their institutions and lack skilled manpower.
A financial crunch combined with that lack of skilled manpower threatens Indian
libraries trying to keep pace with the latest technology, and with newcomers to
the field such as computer professionals. But to survive in the field, one has to
follow the standards and control the quality of automation.
Web Resources
http://www.nla.gov.au/services/standards.html
http://www.inflibnet.ac.in/publication/other/webpagelink.pdf
http://en.wikipedia.org/wiki/International_Standard_Bibliographic_Description
http://www.loc.gov/cds/downloads/FRBR.PDF
http://en.wikipedia.org/wiki/MARC_standards
http://www.loc.gov/marc/umb/
http://dublincore.org/
http://en.wikipedia.org/wiki/OAI-PMH
http://www.isbn-international.org
http://www.issn.org
http://www.doi.org
http://www.openarchives.org/OAI/openarchivesprotocol.html
Blogs
Saroj Das, Institute for Plasma Research, Gandhinagar (saroj@ipr.res.in)
0 Abstract:
Web 2.0 has made a significant impact on the way information is generated and
disseminated. The blog, a Web 2.0 technology, is fast becoming a popular
communication tool among different segments of our society. This article
discusses some aspects of blogs, their creation and their uses, especially in the
library environment.
1. Introduction
The Internet has made a profound contribution to modern life. Today the web
has hundreds of millions of users. It has eliminated the limitations of a service
available only within a physical building, with limited opening hours; most
significantly, for many the web appears to be almost totally free. With Library 2.0
making headway, it has become imperative for librarians to use Web 2.0
technologies to reach their users. Blogs are fast becoming a popular mode of
communication, and their global proliferation has enormous implications for
libraries. By enabling the rapid production and consumption of Web-based
publications, blogs may indeed represent an even greater milestone in the history
of publishing than do Web pages. For libraries the most obvious implication is
that blogs represent another form of publication and need to be treated as such.
2. Blogs?
A blog, short for weblog, is a web site that contains brief entries arranged in
reverse chronological order. Blogs are diverse, ranging from personal diaries to
news sites that monitor developments on virtually anything. According to Evan
Williams, the creator of Blogger, the blog concept is about three things:
frequency, brevity and personality.
Blog, Blogging, Blogger and Blogrolling
Much blogging software is available free or at low cost, and one does not need to
install anything to start a blog. Blogs are easy to create; one of the easiest ways
is to register for a free account with a service such as:
• Blogger (http://www.blogger.com)
• Wordpress (http://www.wordpress.com)
• LiveJournal (http://www.livejournal.com)
• Typepad (http://www.typepad.com)
Start blogging in three steps:
Step 1 Create an account: you need to create an account with Google; this is a
very simple process. If you already have an account, just log in.
Step 2 Name your blog: naming your blog appropriately is important, as the
name will reflect its content and drive traffic towards it.
Step 3 Choose a template: choosing the right template is very important, as it
will decide the layout of your blog. There are ready-made templates to
choose from, and content will be organized as per the template design.
Before jumping into creating a blog, one has to find answers to questions such as:
• What type of blog is to be created?
5. Downsides of Blogs
• Most blogs are created and maintained by individuals, so they may include
biased or inaccurate information
• Sometimes blogs are created to air frustration, which may affect others
• Blogs are volatile: the blogger can edit or delete posts at will
Views, thoughts about library profession and professional development
• Catalogablog (http://catalogablog.blogspot.com)
The blog is about Library cataloging, classification, metadata, subject access and
related topics.
• LISNews (http://www.lisnews.org)
It is a collaborative weblog devoted to current events and news in the world of
Library and Information Science.
• IceRocket (http://www.icerocket.com)
• BlogScope (http://www.blogscope.net)
• Technorati (http://technorati.com)
• Blogdigger (http://www.blogdigger.com)
• Blogpulse (http://www.blogpulse.com)
7. Conclusion
Blogs are increasingly popular among all kinds of communities, and libraries and
librarians are not left behind; in fact, info-savvy librarians are more aware of and
interested in blogging than the general public. Blogs have the potential to
transform any organization by accelerating its information and communication
processes. It is important for information professionals to understand and
evaluate the role of blogging in their institutions and to introduce blogs as a
promotional tool and a way of informing users about collections and services.
Keeping abreast of technologies such as blogging and social networking, and
putting them to use in real-world environments, would not only create a niche for
librarians but would also enhance the overall worth of the library profession.
Free E-resources available on the Internet
Compiled by
Nishtha Anilkumar, Physical Research Laboratory, Ahmedabad (nishtha@prl.res.in)
1. Free E-Resources
The following is a list of selected free resources on the net which I have often
referred to in giving valuable service to users; it is by no means exhaustive:
The IPL began in a graduate seminar in the School of Information and Library
Studies at the University of Michigan in the Winter 1995 semester. The idea was
twofold: (1) to ask some interesting and important questions about the
interconnections of libraries, librarians, and librarianship with a distributed
networked environment, and (2) to learn a lot about these issues by actually
designing and building something called the Internet Public Library.
The Library of Congress is the USA's oldest federal cultural institution and serves
as the research arm of Congress. It is also the largest library in the world, with
millions of books, recordings, photographs, maps and manuscripts in its
collections.
The World Digital Library is a cooperative project of the Library of Congress, the
United Nations Educational Scientific and Cultural Organization (UNESCO), and
partner libraries, archives, and educational and cultural institutions from the
United States and around the world. The project brings together on a single
website rare and unique documents – books, journals, manuscripts, maps, prints
and photographs, films, and sound recordings – that tell the story of the world’s
cultures. The site is intended for general users, students, teachers, and scholars.
The WDL interface operates in Arabic, Chinese, English, French, Portuguese,
Russian, and Spanish. The actual documents on the site are presented in their
original languages.
1.9 Higher Education (http://www.heguide.org.uk)
This website is designed to help make higher education decisions, from choosing
and applying for courses, to cash facts, taking a gap year and life after HE.
Most sections include useful weblinks and frequently asked questions to help you
find the information you need.
Health Guide represents the sum of over 10 years' work to support students and
staff with materials that are mainly UK-focused. The site is indexed automatically
every week, and over 5000 links in 10 directories are checked regularly to ensure
that Health Guide remains, as originally intended, a great health-related internet
resource.
The aim of the Directory of Open Access Journals is to increase the visibility and
ease of use of open access scientific and scholarly journals thereby promoting
their increased usage and impact. The Directory aims to be comprehensive and
cover all open access scientific and scholarly journals that use a quality control
system to guarantee the content.
2. Full-Text E-Resources
The following are full-text resources which are very useful for keeping up to date
with the latest developments in the field of Library & Information Science.
First Monday is one of the first openly accessible, peer–reviewed journals on the
Internet, solely devoted to the Internet. Since its start in May 1996, First Monday
has published 765 papers in 127 issues; these papers were written by 905
different authors. In addition, seven special issues have appeared. The most
recent special issue is entitled Command Lines: The Emergence of Governance
in Global Cyberspace and it was edited by Sandra Braman and Thomas M.
Malaby. First Monday is indexed in Communication Abstracts, Computer &
Communications Security Abstracts, DoIS, eGranary Digital Library, INSPEC,
Information Science & Technology Abstracts, LISA, PAIS, and other services.
The primary goal of the magazine is timely and efficient information exchange for
the digital library community.