
What is an Intranet

An intranet is a private network that is contained within an enterprise. It may consist of
many interlinked local area networks and also use leased lines in the wide area network.
Typically, an intranet includes connections through one or more gateway computers to
the outside Internet. The main purpose of an intranet is to share company information and
computing resources among employees. An intranet can also be used to facilitate working
in groups and for teleconferences.

An intranet uses TCP/IP, HTTP, and other Internet protocols and in general looks like a
private version of the Internet. With tunneling, companies can send private messages
through the public network, using the public network with special encryption/decryption
and other security safeguards to connect one part of their intranet to another.

Typically, larger enterprises allow users within their intranet to access the public Internet
through firewall servers that have the ability to screen messages in both directions so that
company security is maintained. When part of an intranet is made accessible to
customers, partners, suppliers, or others outside the company, that part becomes part of
an extranet.

Advantages of Intranet
1. Workforce productivity: Intranets can help users to locate and view
information faster and use applications relevant to their roles and responsibilities.
With the help of a web browser interface, users can access data held in any
database the organization wants to make available, anytime and - subject to
security provisions - from anywhere within the company's workstations, increasing
employees' ability to perform their jobs faster, more accurately, and with
confidence that they have the right information. It also helps to improve the
services provided to the users.
2. Time: With intranets, organizations can make more information available to
employees on a "pull" basis (i.e., employees can link to relevant information at a
time which suits them) rather than being deluged indiscriminately by emails.
3. Communication: Intranets can serve as powerful tools for communication within
an organization, vertically and horizontally. From a communications standpoint,
intranets are useful to communicate strategic initiatives that have a global reach
throughout the organization. The type of information that can easily be
conveyed includes the purpose of the initiative, what the initiative is aiming to
achieve, who is driving the initiative, results achieved to date, and who to speak to
for more information. By providing this information on the intranet, staff have the
opportunity to keep up-to-date with the strategic focus of the organization. Some
examples of communication would be chat, email, and blogs. A great real
world example of where an intranet helped a company communicate is when
Nestle had a number of food processing plants in Scandinavia. Their central
support system had to deal with a number of queries every day (McGovern,
Gerry). When Nestle decided to invest in an intranet, they quickly realized the
savings. McGovern says the savings from the reduction in query calls was
substantially greater than the investment in the intranet.
4. Web publishing: Allows 'cumbersome' corporate knowledge to be maintained
and easily accessed throughout the company using hypermedia and Web
technologies. Examples include employee manuals, benefits documents,
company policies, business standards, newsfeeds, and even training materials,
all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI
applications). Because each business unit can update the online copy of a
document, the most recent version is always available to employees using the
intranet.
5. Business operations and management: Intranets are also being used as a
platform for developing and deploying applications to support business operations
and decisions across the internetworked enterprise.
6. Cost-effective: Users can view information and data via web-browser rather than
maintaining physical documents such as procedure manuals, internal phone list
and requisition forms. This can potentially save the business money on printing,
duplicating documents, and the environment as well as document maintenance
overhead. "PeopleSoft, a large software company, has derived significant cost
savings by shifting HR processes to the intranet". Gerry McGovern goes on to say
the manual cost of enrolling in benefits was found to be USD109.48 per
enrollment. "Shifting this process to the intranet reduced the cost per enrollment
to $21.79; a saving of 80 percent". PeopleSoft also saved some money when they
received requests for mailing address change. "For an individual to request a
change to their mailing address, the manual cost was USD17.77. The intranet
reduced this cost to USD4.87, a saving of 73 percent". PeopleSoft was just one of
the many companies that saved money by using an intranet. Another company
that saved a lot of money on expense reports was Cisco. "In 1996, Cisco
processed 54,000 reports and the amount of dollars processed was USD19
million".
7. Promote common corporate culture: Every user is viewing the same
information within the Intranet.
8. Enhance Collaboration: With information easily accessible by all authorised
users, teamwork is enabled.
9. Cross-platform Capability: Standards-compliant web browsers are available for
Windows, Mac, and UNIX.
10. Built for One Audience: Many companies dictate computer specifications,
which, in turn, may allow Intranet developers to write applications that only have
to work on one browser (no cross-browser compatibility issues).
11. Knowledge of your Audience: Being able to specifically address your "viewer"
is a great advantage. Since Intranets are user specific (requiring
database/network authentication prior to access), you know exactly who you are
interfacing with. So, you can personalize your Intranet based on role (job title,
department) or individual ("Congratulations Jane, on your 3rd year with our
company!").
12. Immediate Updates: When dealing with the public in any capacity,
laws/specifications/parameters can change. Because an Intranet can provide your
audience with "live" changes, they are never out of date, which can limit a
company's liability.
13. Supports a distributed computing architecture: The intranet can also be linked
to a company’s management information system, for example a time keeping
system.

Disadvantages of Intranets
Management concerns
 Management fears loss of control
 Hidden or unknown complexity and costs
 Potential for chaos
Security concerns
 Unauthorized access
 Abuse of access
 Denial of service
 Packet sniffing
Productivity concerns
 Overabundance of information
 Information overload lowers productivity
 Users set up own web pages

Planning and creating an intranet


Most organizations devote considerable resources to the planning and implementation
of their intranet, as it is of strategic importance to the organization's success. Some of the
planning would include topics such as:

• The purpose and goals of the intranet
• Persons or departments responsible for implementation and management
• Functional plans, information architecture, page layouts, design
• Implementation schedules and phase-out of existing systems
• Defining and implementing security of the intranet
• How to ensure it is within legal boundaries and other constraints
• Level of interactivity (e.g., wikis, on-line forms) desired.
• Is the input of new data and updating of existing data to be centrally controlled or
devolved?
These are in addition to the hardware and software decisions (like content management
systems), participation issues (like good taste, harassment, confidentiality), and features
to be supported.

The actual implementation would include steps such as:

• Securing senior management support and funding.
• Business requirements analysis.
• Setting up web server access using a TCP/IP network.
• Installing required user applications on computers.
• Creation of a document framework for the content to be hosted.
• User involvement in testing and promoting use of the intranet.
• Ongoing measurement and evaluation, including through benchmarking against
other intranets.

Content is King: A successful Intranet project engages its viewers and provides them
with immense corporate value by:

• Feeding the Intranet: Key personnel must be assigned and committed to feeding
Intranet consumers. The alternative is for your project to become the "yellow-pages"
(a tool that is used only as a last resort).
• Keep it current: Information that is current, relevant, informative, and useful to
the end-user is the only way to keep them coming back for more.
• Interact or "Listen": Allow your users to create content. Social networking must
be an integral part of any Intranet project, if a company is serious about providing
information to and receiving information from their employees.
• Feedback: Allow a specific forum for users to tell you what they want and what
they do not like.

Act on Feedback: The users of your Intranet are typically the employees of the company
with their finger on the pulse of your industry. Those that are in the trenches on a daily
basis will be able to tell "corporate" what trends are happening in the marketplace before
any news source. This two-way communication is critical for any successful Intranet.
Company executives must read the input and create responses based on the company's
direction. Otherwise, what is the point of any employee taking the time to respond? If an
employee submits their opinion or their observation, they need to feel that they have been
heard. This is accomplished by:

• Require management to review intranet posts on a daily basis and respond to the
poster. Let them know that their post has already been addressed, is being
reviewed, or is being referred to a department head. This assures the poster that
their post has been read and is being acted upon accordingly. If they do not
receive feedback, they will discontinue posting.
• Broadcast feedback: The ideas that make it into the "this is a great idea" bucket,
should become "news-worthy". This makes the poster feel useful and encourages
others to follow.
• Log feedback by users: This information can be useful when considering an
applicant for promotion/transfer, etc. It will also let you know who is focused on
the company's benefit and not just "filling a position".
• Require executives to provide daily/weekly content: Everyone wants to hear from
the person(s) they are working for. The Executive Team needs to lead the way in
communicating the company's vision to their associates on a frequent basis
(daily if possible, and no less than weekly).

Requirements and Recommendations for Intranet

1. Network Service--the basic communication service upon which other distributed
services are built.
2. Directory Service--the service by which information about system resources is
located on demand. Resources include people, files, servers, databases and
printers.
3. Security Service--a general mechanism to provide proof of identity for both
people and servers, authorization to access all resources based upon a single user
identity, and secure encrypted communication.
4. Messaging Service--electronic mail, bulletin board, and real-time notification
services.
5. Application Service--served applications that can be downloaded or run on
demand at individual workstations. Freeware, shareware and site-licensed
applications will be delivered throughout the campus. Proprietary applications
will be available through a floating license service to authorized users.
6. File Service--a community-wide shared file service.
7. Database Service--a mechanism for collecting, serving and querying institutional
data.
8. Cal Poly Pomona Web--a mechanism for providing global hyperaccess and
control of all infrastructure services. The Cal Poly Pomona Web is currently
providing delivery of publications; we should implement additional graphical
Web interfaces to the directory, security, messaging, application, file and database
services, and demote arcane keyboard-bound terminal interfaces.
Types of Intranet
1. The collaboration platform. This type is very big on two-way publishing. Users
publish just as much as they consume. This type of intranet emphasizes discussion
forums and other ways for people to connect with each other. Information tends to
be less formal, more conversational.
2. The internal Web site. This type is based on one-way publishing. People who
interact with it are divided into two groups: consumers and publishers.

There is a defined “admin side” to which comparatively few people have access.
Information is reviewed before it’s published, and it’s often subject to workflow
and approvals. The intranet is structured just like a public Web site; it just
happens to be behind the firewall.

3. The distributed intranet. In larger organizations, your intranet very quickly
becomes decentralized. You end up not with a single, definable “intranet,” but
with dozens or even hundreds of small applications (e.g., a phone directory, an
announcements system, a document library) that you group around common
infrastructure, like a centralized user database (often LDAP or Active Directory)
and a centralized store of design elements so all the mini-applications can look the
same.

Intranet Architecture
Before discussing the architecture of Intranets, a few background concepts need to be
introduced.

Three Sources of Information


At least three sources of content quickly emerge on enterprise Intranets: formal,
project/group, and informal.

The formal information is the officially sanctioned and commissioned information of
the enterprise. It usually has been reviewed for accuracy, currency, confidentiality,
liability and commitment. This is the information with which the formal management
infrastructure is most concerned.

Project/group information is intended for use within a specific group. It may be used to
communicate and share ideas, coordinate activities or manage the development and
approval of content that eventually will become formal. Project/Group information
generally is not listed in the enterprise-wide directories and may be protected by
passwords or other restrictions if general access might create problems.

Informal information begins to appear on the Intranet when authors and users discover
how easy it is to publish within the existing infrastructure. Informal information is not
necessarily the same thing as personal home pages. A personal folder or directory on an
Intranet server can serve as a repository for white papers, notes and concepts that may be
shared with others in the enterprise to further common interests, for the solicitation of
comments or for some other reason. Instead of making copies, the URL can be given to
the interested parties, and the latest version can be read and tracked as it changes. This
type of informal information can become a powerful stimulus for the collaborative
development of new concepts and ideas.

Two Types of Pages


There are two basic types of pages: content pages and broker pages. Content pages
contain the information of value required by a user. Broker pages help users find the
content pages appropriate for their current requirements.

Content pages can take many forms. They may be static pages, like the ones you are
reading here, or they may be active pages where the page content is generated "on the
fly" from a database or other repository of information. Content pages generally are
owned by an individual. Over time, expect the "form and sense" of content pages to
change as more experience is gained in the areas of non-linear documents (hyperlinking),
multimedia, modular content, and the integration of content and logic using applets.

Broker pages also come in more than one form, but all have the same function, to help
users find relevant information. Good broker pages serve an explicitly defined audience
or function. Many of the pages with which we already are familiar are broker pages. A
hyperlink broker page contains links to other pages, in context. It also may have a short
description of the content to which it is pointing to help the user evaluate the possibilities.
On the other hand, a search oriented broker page is not restricted to the author's scope,
but it also does not provide the same level of context to help the user formulate the
appropriate question.

Combination search and hyperlink broker pages are common today. Search engines
return the "hits" as a hyperlink broker page with weightings and first lines for context,
and hyperlink broker pages sometimes end in a specific category that is refined by
searching that defined space. It is unlikely that hyperlink broker pages ever will be
generated entirely by search engines and agents, because the context that an expert broker
provides often contains subjective or expert value in its own right. After all, not all
content is of equal quality or value for specific purposes, and even context sensitive word
searches cannot provide these qualitative assessments. As the amount of raw content
increases, we will continue to need reviewers to screen which competing content is most
useful, or the official source, for workers in our enterprise.

A special use of broker pages is for assisting with the management of web content. There
are several specific instances of these management pages. We call one instance the
"Enterprise Map" because collectively these broker pages form a hyperlinked map of all
the formal content in the organization. Other sets are used for project management,
functional management and to support content review cycles. The use of broker pages for
each of these management functions is discussed in more detail in the next section.
The Enterprise Map
A structured set of broker pages can be very useful for managing the life cycle of
published content. We call this the Enterprise Map, and while the primary audience for
this set of broker pages is management, we have discovered that end users frequently find
the Enterprise Map useful for browsing or to find content when their other broker pages
have failed them.

With the exception of the content pages at the bottom of the map, the Enterprise Map
pages consist only of links. Each page corresponds to an organization committed to the
creation and quality of a set of content pages. In today's organizations, commitments tend
to aggregate into a hierarchical pyramid, but the mapping technique also could be applied
to most any organizational model. The Enterprise Map also does not have to be based on
organization. It could be a logical map where the top level is the mission, the next level
the major focuses required to accomplish the mission, and so on, down to the content
level. Since most large organizations are starting from a pyramidal accountability
structure, that is the form of the example that follows.

Using the terminology from the previous chapter, the Enterprise Map begins with a top
page, owned by the CIO and/or CEO (with responsibility usually delegated to the Web
Administrator). This page consists of a link to the Map Page of each line of business and
major support organization in the enterprise. The Map pages at this next level are owned
by the publisher for each organization. The Publisher Pages, in turn, consist of links to
each of their Editor's Pages. The Editor's Pages may have additional pages or structure
below them created and maintained by the editor that help organize the content, but
ultimately these pages point to the formal content pages.
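To make the structure concrete, here is a purely illustrative sketch of a small Enterprise Map; the organization and page names are invented, and each indented entry represents a link on the page above it:

Top Page (CIO/CEO, delegated to the Web Administrator)
    Publisher Page: Consumer Products division
        Editor Page: Product documentation
            Content pages: product manuals, datasheets
        Editor Page: Marketing collateral
    Publisher Page: Corporate Services
        Editor Page: HR policies
            Content pages: benefits documents, employee handbook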

This model can scale to governments or large diversified companies. In a government
organization, the Administrator's Page would point to all the Agencies, and the map
would follow each agency structure to the content level. Since each agency may be a
large organization, each may have its own Administrator and Web Council. A major
advantage of this mapping architecture is its flexibility. It can originate from the top
down or the bottom up. If several government agencies developed their Intranets
independently, with this type of Enterprise Mapping structure, they can be linked together
at any time in the future by creating the next level map page. None of the existing Maps
need to be changed. This flexibility is a result of the distributed decision-making,
central-coordination model on which the architecture is built.

The Map provides a commitment (or accountability) view of all the formal content in the
enterprise. Management can start at their point in the map and follow the links to all the
content which supports the functions for which they are responsible. They also can look
at what other organizations provide and how well it integrates. Experience predicts that
when a Management Map is first implemented, and managers get involved, they are
shocked by the quality and incompleteness of the information for which they are
responsible. The reason is that they have never been able to easily browse all the
information and create multiple, contextual views of their own when the information was
on paper or in rigid electronic formats. The Intranet gives them this ability. Handled
properly, demonstrating this ability to managers is a great opportunity to show the
strengths of an Intranet for improving not just accessibility but information quality.

An Enterprise Map has several interesting characteristics. Once it is in place, authors and
editors can self publish, and the information automatically shows up in a logical
structure. Also, content categories and even editor level functions generally are not
affected by reorganizations, because major product lines and service areas generally are
not added or deleted. Most reorganizations shift responsibilities at higher levels in the
Map. This means that when a reorganization does occur, the Map can be adjusted
quickly, by the managers affected, by changing one or a few links. Content does not need
to be moved around. The result is a very low maintenance path to all the formal
enterprise content, without forcing publishing through a central authority that can quickly
become a bottleneck.

Shadow Maps
The Enterprise Map provides a management path to all the formally published content.
However, management also has a need to see work in progress, formal content that is not
yet completed. This is the realm of project and departmental information. A Shadow Map
can be constructed for this purpose. The Shadow Map works the same way as the
Enterprise Map, but it is not generally advertised and can be protected by passwords or
other access controls. The Shadow Map can be enhanced with a few additional Broker
Pages to assist with the management of content development.

A Shadow Map continues down to the author level. In this model, the author maintains an
Index Page that is divided into two sections, work commitments and work completed.
When the first draft of committed content is created, the author places it in his web
directory and links the item line on his Index Page to the file. As revisions are made, the
author places the latest version in the same directory with the same name so the Index
automatically points to the latest version. This does not preclude keeping back versions if
they are required. The previous version is copied and numbered as it is moved out of the
current version status. When the content completes review and goes into "production" the
author moves the item from the committed section to the completed section and redirects
the link to the permanent address of the published item. Note that this can work for the
development of non-web content as well, by configuring MIME types and having the
browser automatically start up the appropriate application on the client when the link is
activated.
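As a hypothetical illustration (author name and paths invented), an author's Index Page under this model might look like the following; because drafts are always saved under the same file name, the committed links point to the latest revision without any maintenance:

Index Page - J. Smith
Work committed:
    Q3 network upgrade plan --> /jsmith/drafts/netplan.html (latest draft)
Work completed:
    VPN configuration guide --> /pub/it/vpnguide.html (permanent published address)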

A second Broker Page that can be added is a Project Page. This page is created by the
Project Manager and contains item lines for all the project deliverables. When the author
creates the first draft, she not only links the content file to her Index Page; she also
notifies the Project Manager of the location so the content can be linked to the
appropriate line item on the Project Page. Like the Index Page, as the content is revised
the Project Page always points to the most current version, without additional
maintenance.

In a matrix organization a third Broker Page can be created by the Functional Manager.
This page consists of links to the Index Page for each employee reporting to the
Functional Manager. This provides a quick path to the work, both in progress and
completed, of all her employees. Once again, after the structure is set up, it takes little
maintenance, with each person keeping his own information up to date.

Finally, Reviewer Pages can be created when the content is ready for review. Each
reviewer has a "Review Page," which consists of links to all the content in their review
queue. When the Editor (or whoever is responsible for managing the review process)
places the content into formal review, it is added to each reviewer's page. The reviewers
access their page when they are ready to do reviews, and by selecting a link can retrieve
and view the content. There are numerous ways the comments and comment resolution
could be handled using Internet technology. One is to funnel comments into a threaded-
discussion-group format. Automated email messages can be used to notify or remind
reviewers of deadlines and status.

The various Broker Pages discussed above are meant to create a model of the basic
management functions and how they can be structured. Whether or not the specific model
described here is used, the most effective process for managing Intranet content will use
Intranet tools and approaches.

When we first conceived of this model, there were no higher level tools to help create and
manage the pages for a process like this. Today several tools are emerging to help
manage functional sets of pages, and they can be configured to support these processes.
Some are message-based, others are centralized, shared-database models with Web front-
ends. Over time, we anticipate that a variety of vendors will offer improved tools, based
on Intranet paradigms, that are specifically tuned to support the distributed, message-
based management model. Whatever tools are chosen, the most effective are those that
help the functional managers use the Intranet to manage the development of the content
for which they are responsible, without requiring technical specialists in between. In the
beginning, many managers may find a simple static page implementation of this logical
structure more approachable than a more sophisticated automated tool.

General Brokering
Brokers are the main way users find information on an Intranet. A broker may serve
many functions. He may provide information to users in the context of specific processes,
providing structure for efficiency and consistency. He may screen large pools of content
for material relevant to a large number of employees so each one does not have to
duplicate the process. He may identify which information is considered official. Or, he
may provide interpretation of general information in the context of the organization.

Most knowledge worker jobs today involve some form of information brokering. In the
paper world the broker output often is formally sanctioned by the organization and may
be the worker's main responsibility. The same kinds of roles will evolve in the Intranet
world, and ideally the people in the role today will evolve into the electronic version of
their role. These types of formally managed broker pages can be treated as content in the
map structure described above.

Most organizations also have informal broker pages that spring up. An individual may
start the page for herself, and it gains a following, or she may identify an unfilled need
and consciously fill it. These pages can be a valuable way to identify and quickly meet
new requirements. However, until these pages are in a formal commitment (or
accountability) structure, there is no guarantee that the content is verified or that the
author will keep the content current.

The Broker Directory


An Enterprise Broker Directory, sometimes called a "Yellow-Pages," organized by
subject, can help users find the broker that they need. The Broker Directory generally is
maintained by either the Web Administrator or the Web Grandmaster. Because informal
broker pages are included in the Broker Directory, some mechanism needs to be included
to keep the directory from filling up with outdated and abandoned pages. As with other
Intranet functions, the challenge becomes providing the centralized view without
imposing a central implementation bottleneck.
One way to handle this is to create a "Sunset" provision for all pages not officially
managed by an organizational entity. Any broker can list their page by submitting a web
form or email to the Web Administrator or an automated script. However, informal pages
are only listed for 60 days. If the broker does not renew the request in 60 days, the page is
removed from the Broker Directory. This allows informal brokers to "self-publish," and
protects the directory from becoming a repository of links to abandoned pages.

The Enterprise Index


The Enterprise Index provides users with another way to find information. This
frequently is tied to the Search Engine. Keeping with a distributed decision-making
model, the Index and Search Engine should not require pages to be published on a
specific system or managed by specific management software. The Index and Search
Engine should be fed by a discovery agent (Web Crawler or Spider) that regularly
searches the Intranet and catalogs the content. This is consistent with the coordination
versus control model and also protects the enterprise from major conversion efforts
(proprietary locks) if an alternative product or upgrade is desired in the future. The
Enterprise Index provides yet another way for users to find the content they require.

Brokering Summary
Three distinct discovery paths need to be provided by the Intranet Infrastructure:

• The Enterprise Map
• The Broker Directory
• The Index and Search Engine
Workflow Management
Workflow management is a relatively new focus for the Intranet. Historically, a number
of Internet/Web tools have been available to help with this process. Email, threaded-mail
discussion groups and news groups provide forums for discussion and resolution of
issues. The HTML "mailto:" function has been used to provide reviewers with easy
connections through their browser to these forums.

What has been missing are packages that integrate the functionality of the independent
tools, add routing and tracking, and provide the user with an interface that is easy to
configure. This appears to be changing with the appearance of companies like MKS,
Action Technologies, WebFlow and Netmosphere who now offer web-enabled and
web-based products that support groupware, reviewer comments, routing, sign-off,
checkout-checkin and project management functionality in an open, web environment.

Access to Database Information


Discrete, structured information still is managed best by a database management system.
However, the quest for a universal user interface has led to the requirement for access to
existing database information through a web browser. Three models of access can be
identified:

• Automatic tailoring of page content
• User specified database requests
• User initiated database updates

From a technical standpoint, there are a number of ways these interfaces can be created.
What is important is that access be provided to the authors (knowledge workers) in a way
that supports the distributed decision-making, enabling model rather than the centralized
expertise model. This means that authors who are relatively naive technically need to be
able to incorporate database managed data into their pages.

A number of tools are beginning to emerge that move in this direction. Most of the
database vendors and several other application vendors are pushing the use of their
databases to manage all the content in the web. The advantage is unique pages can be
generated automatically and easily. These are the tools that support the first model
identified above, automatic tailoring of page content. The disadvantage is that much of
the "distributed decision making" and "do for yourself" paradigm is violated. Experts are
still needed to manage and change the database schemas for innovation to occur.

A more promising approach combines a library of scripts (CGI, Java, Active-X, etc.)
residing on the hosting web-server with templates, wizards and "bots" incorporated into
WYSIWYG authoring packages (e.g., Microsoft's FrontPage). Another set of tools,
coming from the database side, automatically converts existing database schemas into
hyperlinked web pages that allow users to browse and access the data from their web
browser (e.g. Netscheme). When applications that merge these two functional approaches
begin to appear, very powerful packages will be available to content providers who need
to incorporate database information into their pages.

This approach satisfies both the "distributed decision making" and the "do for yourself"
paradigms. At the current time, these approaches do contain a "proprietary lock." The
authoring tool and web server extensions are tightly coupled and not interchangeable with
other packages. However, at this point the proprietary nature is not unduly restrictive.
First, the client remains independent of the authoring tool and server extensions. Second,
individual authors can choose to use different tools than those used by their peers, as long
as a server with their tool’s extension set is available. Third, this technology is still in the
early innovative stages, where a significant amount of knowledge needs to be gained.
This is the appropriate stage for non-standard solutions. As more knowledge is gained,
one hopes that the authoring tools will become increasingly independent either through
standardization of the script libraries or through standardization of object linking
technology.

The development of object linking standards and the availability of tools that conform to
these standards will increase the power of Intranet technology. These tools, in
conjunction with previously mentioned software that uses agents to discover and create
organized views of distributed objects, provide a promising base for supporting the
distributed decision making and implementation model. A major trend one can expect to
see is a move away from the use of database technology (or other structured technologies
like SGML) for integrating content enterprise-wide. Instead these tools will be used to
manage local content, and integration will take place as needed by linking the content
objects through Intranet standard pages.

Designing an Intranet (Building a Corporate-Wide Web)


Points to keep in mind while designing the Intranet
Integrating Information Design
Don’t Overlook Design
Implementation of Tasks rather than Documents
Organize tasks into larger processes
Virtual Workgroups
Reflection of Intranet

Integrating Information Design


It is essential to integrate all the information collected in the organization when
developing the intranet. All the information must align with business needs and
business planning. Focusing on processes rather than departments is a widely-hailed
business trend. An intranet should help employees collaborate on business processes
such as product development or order fulfillment.

Don’t Overlook Design

An intranet needs to be carefully designed to help employees access information and
collaborate effectively. The design should not present any irrelevant company
information. There should be an organization chart to represent the company's
structure, both to outsiders and to its employees.

Implementation of Tasks rather than Documents

An intranet is not just a collection of documents; rather, it is a collection of information.
Intranet users actually use documents to complete tasks. These tasks can be organized
in a way that ensures every process is carried out accurately. Finally, on the basis of
these tasks, employees perform different functions as required.

Organize tasks into larger processes

Isolated tasks should be collected together into larger processes. The most important
processes in a company are those that create value for a customer. Processes can be
relatively distinct, such as developing or selling products. All processes should be
organized so that they are easy for intranet users to perform.

Virtual Workgroups

Intranet users need virtual workgroups in which to work together. An intranet can also
bring together employees and partners who are geographically isolated to work on
common problems. By bringing people together, they can give their best to a single
task. Central to the value of an intranet is the design of virtual spaces, which promotes
new forms of collaboration but tends to receive less attention.

Reflection of Intranet

An intranet is actually a reflection of the company. By looking at a company's intranet,
people can form an impression of what the company is like. An intranet that reflects
the culture of its company will make employees feel more at home. For the intranet to
be successful, it must provide ways of empowering all employees.

HTTP Protocols
HTTP stands for Hypertext Transfer Protocol. It is a TCP/IP-based communication protocol used to deliver
virtually all files and other data, collectively called resources, on the World Wide Web. These resources can be HTML files,
image files, query results, or anything else.

A browser works as an HTTP client because it sends requests to an HTTP server, which is called a Web server. The Web
server then sends responses back to the client. The standard and default port for HTTP servers to listen on is 80, but it can be
changed to any other port, such as 8080.

There are three important things about HTTP of which you should be aware:

• HTTP is connectionless: After a request is made, the client disconnects from the server and waits for a response.
The server must re-establish the connection after it processes the request.
• HTTP is media independent: Any type of data can be sent by HTTP as long as both the client and the server know
how to handle the data content. How content is handled is determined by the MIME specification.
• HTTP is stateless: This is a direct result of HTTP being connectionless. The server and client are aware of each
other only during a request. Afterwards, each forgets the other. For this reason, neither the client nor the server can
retain information between different requests across web pages.

(Diagram omitted: where the HTTP protocol fits in the communication stack.)

Like most network protocols, HTTP uses the client-server model: An HTTP client opens a connection and sends a request
message to an HTTP server; the server then returns a response message, usually containing the resource that was requested.
After delivering the response, the server closes the connection.

The formats of the request and response messages are similar, and both have the following structure:

• An initial line, ending in CRLF
• Zero or more header lines, each ending in CRLF
• A blank line (i.e., a CRLF by itself)
• An optional message body, such as a file, query data, or query output

Initial lines and headers should end in CRLF, though you should gracefully handle lines ending in just LF. (More exactly, CR
and LF here mean ASCII values 13 and 10.)
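Putting the four parts together, a hypothetical exchange (host, path and content invented for illustration) might look like the following; the first block is the client's request, the second is the server's response, each line ends in CRLF, and the blank line separates the headers from the body:

GET /hello.html HTTP/1.0
User-agent: Mozilla/3.0Gold

HTTP/1.0 200 OK
Content-Type: text/html
Content-Length: 42

<html><body><h1>Welcome</h1></body></html>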

Initial Line : Request


The initial line is different for the request than for the response. A request line has three parts, separated by spaces:
• An HTTP Method Name
• The local path of the requested resource.
• The version of HTTP being used.

Here is an example of the initial line of a Request message.

GET /path/to/file/index.html HTTP/1.0

• GET is the most common HTTP method. Other methods could be POST, HEAD etc.
• The path is the part of the URL after the host name. This path is also called the request Uniform Resource Identifier
(URI). A URI is like a URL, but more general.
• The HTTP version always takes the form "HTTP/x.x", uppercase.

Initial Line : Response


The initial response line, called the status line, also has three parts separated by spaces:

• The version of HTTP being used.
• A response status code that gives the result of the request.
• An English reason phrase describing the status code.

Here is an example of the initial line of a Response message.

HTTP/1.0 200 OK

or

HTTP/1.0 404 Not Found

Header Lines
Header lines provide information about the request or response, or about the object sent in the message body.

The header lines are in the usual text header format, which is: one line per header, of the form "Header-Name: value", ending
with CRLF. It's the same format used for email and news postings, defined in RFC 822.

• A header line should end in CRLF, but you should gracefully handle lines ending in just LF.
• The header name is not case-sensitive.
• Any number of spaces or tabs may be between the ":" and the value.
• Header lines beginning with space or tab are actually part of the previous header line, folded into multiple lines for
easy reading.

Here is an example of one header line:

User-agent: Mozilla/3.0Gold

or

Last-Modified: Fri, 31 Dec 1999 23:59:59 GMT
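A folded header line, as described in the last point above, might look like this (the header name and value are invented for illustration); the leading whitespace on the second line marks it as a continuation of the same header:

X-Project-Status: phase one complete,
    phase two scheduled for review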


The Message Body
An HTTP message may have a body of data sent after the header lines. In a response, this is where the requested resource is
returned to the client (the most common use of the message body), or perhaps explanatory text if there's an error. In a request,
this is where user-entered data or uploaded files are sent to the server.

If an HTTP message includes a body, there are usually header lines in the message that describe the body. In particular:

• The Content-Type: header gives the MIME-type of the data in the body, such as text/html or image/gif.
• The Content-Length: header gives the number of bytes in the body.
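For illustration, here is a minimal response whose headers describe its body (the content is invented); the Content-Length value of 13 is the exact byte count of the body text:

HTTP/1.0 200 OK
Content-Type: text/plain
Content-Length: 13

Hello, world!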

The request message consists of the following:

• Request line, such as GET /images/logo.gif HTTP/1.1, which requests a resource called /images/logo.gif from the server
• Headers, such as Accept-Language: en
• An empty line
• An optional message body

The request line and headers must all end with <CR><LF> (that is, a carriage return
followed by a line feed). The empty line must consist of only <CR><LF> and no other
whitespace. In the HTTP/1.1 protocol, all headers except Host are optional.

A request line containing only the path name is accepted by servers to maintain
compatibility with HTTP clients from before the HTTP/1.0 specification (RFC 1945).
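Combining these parts, a minimal complete HTTP/1.1 request (host name invented) looks like the following; note the mandatory Host header and the empty line that terminates the request:

GET /images/logo.gif HTTP/1.1
Host: intranet.example.com
Accept-Language: en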

9.3 GET

The GET method means retrieve whatever information (in the form of an entity) is
identified by the Request-URI. If the Request-URI refers to a data-producing process, it
is the produced data which shall be returned as the entity in the response and not the
source text of the process, unless that text happens to be the output of the process.

The semantics of the GET method change to a "conditional GET" if the request message
includes an If-Modified-Since, If-Unmodified-Since, If-Match, If-None-Match, or If-
Range header field. A conditional GET method requests that the entity be transferred
only under the circumstances described by the conditional header field(s). The
conditional GET method is intended to reduce unnecessary network usage by allowing
cached entities to be refreshed without requiring multiple requests or transferring data
already held by the client.

The semantics of the GET method change to a "partial GET" if the request message
includes a Range header field. The partial GET method is intended to reduce unnecessary
network usage by allowing partially-retrieved entities to be completed without
transferring data already held by the client.
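As a sketch (host, paths and date invented), a conditional GET and a partial GET simply add the corresponding header fields to an ordinary request; the first typically draws a 304 (Not Modified) response if the cached copy is still current, and the second a 206 (Partial Content) response carrying only the requested byte range:

GET /reports/q3.html HTTP/1.1
Host: intranet.example.com
If-Modified-Since: Fri, 31 Dec 1999 23:59:59 GMT

GET /files/manual.pdf HTTP/1.1
Host: intranet.example.com
Range: bytes=500-999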
9.4 HEAD

The HEAD method is identical to GET except that the server MUST NOT return a
message-body in the response. The metainformation contained in the HTTP headers in
response to a HEAD request SHOULD be identical to the information sent in response to
a GET request. This method can be used for obtaining metainformation about the entity
implied by the request without transferring the entity-body itself. This method is often
used for testing hypertext links for validity, accessibility, and recent modification.

The response to a HEAD request MAY be cacheable in the sense that the information
contained in the response MAY be used to update a previously cached entity from that
resource. If the new field values indicate that the cached entity differs from the current
entity (as would be indicated by a change in Content-Length, Content-MD5, ETag or
Last-Modified), then the cache MUST treat the cache entry as stale.
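For illustration (host and header values invented), a HEAD request and its response might look like this; the headers match what a GET would return, but no message body follows:

HEAD /index.html HTTP/1.1
Host: intranet.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 4205
Last-Modified: Fri, 31 Dec 1999 23:59:59 GMT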

9.5 POST

The POST method is used to request that the origin server accept the entity enclosed in
the request as a new subordinate of the resource identified by the Request-URI in the
Request-Line. POST is designed to allow a uniform method to cover the following
functions:

- Annotation of existing resources;
- Posting a message to a bulletin board, newsgroup, mailing list, or similar group of articles;
- Providing a block of data, such as the result of submitting a form, to a data-handling process;
- Extending a database through an append operation.

The actual function performed by the POST method is determined by the server and is
usually dependent on the Request-URI. The posted entity is subordinate to that URI in
the same way that a file is subordinate to a directory containing it, a news article is
subordinate to a newsgroup to which it is posted, or a record is subordinate to a database.

The action performed by the POST method might not result in a resource that can be
identified by a URI. In this case, either 200 (OK) or 204 (No Content) is the appropriate
response status, depending on whether or not the response includes an entity that
describes the result.

If a resource has been created on the origin server, the response SHOULD be 201
(Created) and contain an entity which describes the status of the request and refers to the
new resource, and a Location header. Responses to this method are not cacheable, unless
the response includes appropriate Cache-Control or Expires header fields. However, the
303 (See Other) response can be used to direct the user agent to retrieve a cacheable resource.
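A sketch of a form submission via POST (host, path and field names invented); the body carries the user-entered data, and the Content-Type and Content-Length headers describe it:

POST /cgi-bin/enroll HTTP/1.1
Host: intranet.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

name=Jane+Doe&plan=dental&opt=1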

9.6 PUT

The PUT method requests that the enclosed entity be stored under the supplied Request-
URI. If the Request-URI refers to an already existing resource, the enclosed entity
SHOULD be considered as a modified version of the one residing on the origin server. If
the Request-URI does not point to an existing resource, and that URI is capable of being
defined as a new resource by the requesting user agent, the origin server can create the
resource with that URI. If a new resource is created, the origin server MUST inform the
user agent via the 201 (Created) response. If an existing resource is modified, either the
200 (OK) or 204 (No Content) response codes SHOULD be sent to indicate successful
completion of the request. If the resource could not be created or modified with the
Request-URI, an appropriate error response SHOULD be given that reflects the nature of
the problem. The recipient of the entity MUST NOT ignore any Content-* (e.g. Content-
Range) headers that it does not understand or implement and MUST return a 501 (Not
Implemented) response in such cases.

If the request passes through a cache and the Request-URI identifies one or more
currently cached entities, those entries SHOULD be treated as stale. Responses to this
method are not cacheable.

The fundamental difference between the POST and PUT requests is reflected in the
different meaning of the Request-URI. The URI in a POST request identifies the resource
that will handle the enclosed entity. That resource might be a data-accepting process, a
gateway to some other protocol, or a separate entity that accepts annotations. In contrast,
the URI in a PUT request identifies the entity enclosed with the request -- the user agent
knows what URI is intended and the server MUST NOT attempt to apply the request to
some other resource. If the server desires that the request be applied to a different URI,
it MUST send a 301 (Moved Permanently) response; the user agent MAY then make its
own decision regarding whether or not to redirect the request.

A single resource MAY be identified by many different URIs. For example, an article
might have a URI for identifying "the current version" which is separate from the URI
identifying each particular version. In this case, a PUT request on a general URI might
result in several other URIs being defined by the origin server.

Unless otherwise specified for a particular entity-header, the entity-headers in the PUT
request SHOULD be applied to the resource created or modified by the PUT.
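As a sketch (host and path invented), a PUT request stores the enclosed entity under the supplied Request-URI; if this creates a new resource, the server replies with 201 (Created):

PUT /docs/policy.html HTTP/1.1
Host: intranet.example.com
Content-Type: text/html
Content-Length: 25

<html><p>Draft</p></html>

HTTP/1.1 201 Created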

9.7 DELETE

The DELETE method requests that the origin server delete the resource identified by the
Request-URI. This method MAY be overridden by human intervention (or other means)
on the origin server. The client cannot be guaranteed that the operation has been carried
out, even if the status code returned from the origin server indicates that the action has
been completed successfully. However, the server SHOULD NOT indicate success
unless, at the time the response is given, it intends to delete the resource or move it to an
inaccessible location.

A successful response SHOULD be 200 (OK) if the response includes an entity
describing the status, 202 (Accepted) if the action has not yet been enacted, or 204 (No
Content) if the action has been enacted but the response does not include an entity.

If the request passes through a cache and the Request-URI identifies one or more
currently cached entities, those entries SHOULD be treated as stale. Responses to this
method are not cacheable.
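A hypothetical DELETE exchange (host and path invented); the 204 (No Content) response indicates the action has been enacted and no entity is returned:

DELETE /docs/old-policy.html HTTP/1.1
Host: intranet.example.com

HTTP/1.1 204 No Content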

9.8 TRACE

The TRACE method is used to invoke a remote, application-layer loopback of the
request message. The final recipient of the request SHOULD reflect the message received
back to the client as the entity-body of a 200 (OK) response. The final recipient is either
the origin server or the first proxy or gateway to receive a Max-Forwards value of zero (0) in
the request (see section 14.31). A TRACE request MUST NOT include an entity.

TRACE allows the client to see what is being received at the other end of the request
chain and use that data for testing or diagnostic information. The value of the Via header
field (section 14.45) is of particular interest, since it acts as a trace of the request chain.
Use of the Max-Forwards header field allows the client to limit the length of the request
chain, which is useful for testing a chain of proxies forwarding messages in an infinite
loop.

If the request is valid, the response SHOULD contain the entire request message in the
entity-body, with a Content-Type of "message/http". Responses to this method MUST
NOT be cached.
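For illustration (host invented), a TRACE request and its response; the server echoes the request it received back to the client as a message/http entity:

TRACE /index.html HTTP/1.1
Host: intranet.example.com
Max-Forwards: 0

HTTP/1.1 200 OK
Content-Type: message/http

TRACE /index.html HTTP/1.1
Host: intranet.example.com
Max-Forwards: 0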

9.9 CONNECT

This specification reserves the method name CONNECT for use with a proxy that can
dynamically switch to being a tunnel (e.g., SSL tunneling).

9.2 OPTIONS

The OPTIONS method represents a request for information about the communication
options available on the request/response chain identified by the Request-URI. This
method allows the client to determine the options and/or requirements associated with a
resource, or the capabilities of a server, without implying a resource action or initiating a
resource retrieval.
Responses to this method are not cacheable.

If the OPTIONS request includes an entity-body (as indicated by the presence of
Content-Length or Transfer-Encoding), then the media type MUST be indicated by a
Content-Type field. Although this specification does not define any use for such a body,
future extensions to HTTP might use the OPTIONS body to make more detailed queries
on the server. A server that does not support such an extension MAY discard the request
body.

If the Request-URI is an asterisk ("*"), the OPTIONS request is intended to apply to the
server in general rather than to a specific resource. Since a server's communication
options typically depend on the resource, the "*" request is only useful as a "ping" or "no-
op" type of method; it does nothing beyond allowing the client to test the capabilities of
the server. For example, this can be used to test a proxy for HTTP/1.1 compliance (or
lack thereof).

If the Request-URI is not an asterisk, the OPTIONS request applies only to the options
that are available when communicating with that resource.

A 200 response SHOULD include any header fields that indicate optional features
implemented by the server and applicable to that resource (e.g., Allow), possibly
including extensions not defined by this specification. The response body, if any,
SHOULD also include information about the communication options. The format for
such a body is not defined by this specification, but might be defined by future extensions to
HTTP. Content negotiation MAY be used to select the appropriate response format. If no
response body is included, the response MUST include a Content-Length field with a
field-value of "0".

The Max-Forwards request-header field MAY be used to target a specific proxy in the
request chain. When a proxy receives an OPTIONS request on an absoluteURI for which
request forwarding is permitted, the proxy MUST check for a Max-Forwards field. If the
Max-Forwards field-value is zero ("0"), the proxy MUST NOT forward the message;
instead, the proxy SHOULD respond with its own communication options. If the Max-
Forwards field-value is an integer greater than zero, the proxy MUST decrement the
field-value when it forwards the request. If no Max-Forwards field is present in the
request, then the forwarded request MUST NOT include a Max-Forwards field.
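A sketch of a server-wide OPTIONS request (host invented); since the response carries no body, it includes a Content-Length of "0", and the Allow header advertises the methods the server implements:

OPTIONS * HTTP/1.1
Host: intranet.example.com

HTTP/1.1 200 OK
Allow: GET, HEAD, POST, OPTIONS, TRACE
Content-Length: 0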

9.1.1 Safe Methods

Implementors should be aware that the software represents the user in their interactions
over the Internet, and should be careful to allow the user to be aware of any actions they
might take which may have an unexpected significance to themselves or others.

In particular, the convention has been established that the GET and HEAD methods
SHOULD NOT have the significance of taking an action other than retrieval. These
methods ought to be considered "safe". This allows user agents to represent other
methods, such as POST, PUT and DELETE, in a special way, so that the user is made
aware of the fact that a possibly unsafe action is being requested.

Naturally, it is not possible to ensure that the server does not generate side-effects as a
result of performing a GET request; in fact, some dynamic resources consider that a
feature. The important distinction here is that the user did not request the side-effects, so
therefore cannot be held accountable for them.

9.1.2 Idempotent Methods

Methods can also have the property of "idempotence" in that (aside from error or
expiration issues) the side-effects of N > 0 identical requests is the same as for a single
request. The methods GET, HEAD, PUT and DELETE share this property. Also, the
methods OPTIONS and TRACE SHOULD NOT have side effects, and so are inherently
idempotent.

However, it is possible that a sequence of several requests is non-idempotent, even if all
of the methods executed in that sequence are idempotent. (A sequence is idempotent if a
single execution of the entire sequence always yields a result that is not changed by a
reexecution of all, or part, of that sequence.) For example, a sequence is non-idempotent
if its result depends on a value that is later modified in the same sequence.

A sequence that never has side effects is idempotent, by definition (provided that no
concurrent operations are being executed on the same set of resources).

Status codes
The values of the numeric status codes for HTTP requests are as follows. The data sections
of Error, Forward and Redirection response messages may be used to contain human-
readable diagnostic information.

Success 2xx

These codes indicate success. The body section, if present, is the object returned by the
request. It is in MIME format, and may only be in text/plain, text/html or one of the
formats specified as acceptable in the request.

OK 200

The request was fulfilled.

CREATED 201

Following a POST command, this indicates success, but the textual part of the response
line indicates the URI by which the newly created document should be known.

Accepted 202

The request has been accepted for processing, but the processing has not been completed.
The request may or may not eventually be acted upon, as it may be disallowed when
processing actually takes place. There is no facility for status returns from asynchronous
operations such as this.

Partial Information 203

When received in the response to a GET command, this indicates that the returned
metainformation is not a definitive set of the object from a server with a copy of the
object, but is from a private overlaid web. This may include annotation information about
the object, for example.

No Response 204

Server has received the request but there is no information to send back, and the client
should stay in the same document view. This is mainly to allow input for scripts without
changing the document at the same time.

Error 4xx, 5xx

The 4xx codes are intended for cases in which the client seems to have erred, and the 5xx
codes for cases in which the server is aware that it has erred. It is impossible to
distinguish these cases in general, so the difference is only informational.

The body section may contain a document describing the error in human-readable form.
The document is in MIME format, and may only be in text/plain, text/html or one of the
formats specified as acceptable in the request.

Bad request 400

The request had bad syntax or was inherently impossible to be satisfied.

Unauthorized 401

The parameter to this message gives a specification of authorization schemes which are
acceptable. The client should retry the request with a suitable Authorization header.

PaymentRequired 402

The parameter to this message gives a specification of charging schemes acceptable. The
client may retry the request with a suitable ChargeTo header.

Forbidden 403

The request is for something forbidden. Authorization will not help.

Not found 404

The server has not found anything matching the URI given.

Internal Error 500

The server encountered an unexpected condition which prevented it from fulfilling the
request.

Not implemented 501

The server does not support the facility required.

Service temporarily overloaded 502 (TO BE DISCUSSED)

The server cannot process the request due to a high load (whether HTTP servicing or
other requests). The implication is that this is a temporary condition which may be
alleviated at other times.

Gateway timeout 503 (TO BE DISCUSSED)

This is equivalent to Internal Error 500, but in the case of a server which is in turn
accessing some other service, this indicates that the response from the other service did
not return within a time that the gateway was prepared to wait. As, from the point of view
of the client and the HTTP transaction, the other service is hidden within the server, this
may be treated identically to Internal Error 500, but it has more diagnostic value.

Redirection 3xx

The codes in this section indicate action to be taken (normally automatically) by the
client in order to fulfill the request.

Moved 301

The data requested has been assigned a new URI; the change is permanent. (N.B. this is
an optimisation, which must, pragmatically, be included in this definition. Browsers with
link editing capability should automatically relink to the new reference, where possible.)

The response contains one or more header lines of the form

URI: <url> String CrLf


which specify alternative addresses for the object in question. The String is an optional
comment field. If the response is to indicate a set of variants which each correspond to
the requested URI, then the multipart/alternative wrapping may be used to distinguish
different sets.

Found 302

The data requested actually resides under a different URL; however, the redirection may
be altered on occasion (when making links to these kinds of document, the browser
should default to using the URI of the redirection document, but have the option of
linking to the final document), as for "Forward".

Method 303
Method: <method> <url>
body-section

Note: This status code is to be specified in more detail. For the moment it is for
discussion only.

Like the found response, this suggests that the client go try another network address. In
this case, a different method may be used too, rather than GET.

The body-section contains the parameters to be used for the method. This allows a
document to be a pointer to a complex query operation.

The body may be preceded by the following additional fields, as listed.

Not Modified 304

If the client has done a conditional GET and access is allowed, but the document has not
been modified since the date and time specified in If-Modified-Since field, the server
responds with a 304 status code and does not send the document body to the client.

Response headers are as if the client had sent a HEAD request, but limited to only those
headers which make sense in this context. This means only headers that are relevant to
cache managers and which may have changed independently of the document's Last-
Modified date. Examples include Date, Server and Expires.

The purpose of this feature is to allow efficient updates of local cache information
(including relevant metainformation) without requiring the overhead of multiple HTTP
requests (e.g. a HEAD followed by a GET) and minimizing the transmittal of information
already known by the requesting client (usually a caching proxy).
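A conditional GET can be sketched as follows (Python's standard http.client; the host and date are hypothetical). If the document is unchanged, the server answers 304 with no body:

import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/index.html", headers={
    # Only send the body if the document changed after this date.
    "If-Modified-Since": "Sat, 29 Oct 1994 19:43:31 GMT",
})
resp = conn.getresponse()
if resp.status == 304:
    print("Not modified; the cached copy is still valid.")
else:
    body = resp.read()  # 200: a fresh copy of the document
conn.close()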
What is HTTP Persistent Connections?
HTTP persistent connections, also called HTTP keep-alive or HTTP connection reuse,
refer to the idea of using the same TCP connection to send and receive multiple HTTP
requests/responses, as opposed to opening a new one for every single request/response
pair. Using persistent connections is very important for improving HTTP performance.

There are several advantages of using persistent connections, including:

• Network friendly. Less network traffic, due to less setting up and tearing down
of TCP connections.
• Reduced latency on subsequent requests, due to avoidance of the initial TCP
handshake.
• Long-lasting connections allow TCP sufficient time to determine the
congestion state of the network, and thus to react appropriately.

The advantages are even more obvious with HTTPS, or HTTP over SSL/TLS. There,
persistent connections may reduce the number of costly SSL/TLS handshakes used to
establish security associations, in addition to the initial TCP connection setup.

In HTTP/1.1, persistent connections are the default behavior of any connection. That is,
unless otherwise indicated, the client SHOULD assume that the server will maintain a
persistent connection, even after error responses from the server. However, the protocol
provides means for a client and a server to signal the closing of a TCP connection.
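A minimal sketch of connection reuse with Python's standard http.client (the host is hypothetical); both requests travel over the same TCP connection:

import http.client

# One TCP connection, reused for several request/response pairs.
conn = http.client.HTTPConnection("www.example.com")

conn.request("GET", "/page1.html")
page1 = conn.getresponse().read()

# No new TCP (or TLS) handshake is needed for the second request.
conn.request("GET", "/page2.html")
page2 = conn.getresponse().read()

conn.close()  # explicitly signal the end of the persistent connection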

SESSION STATE
Session state is a server-side tool for managing state. Every time your web application
posts back to the server for the next request, the server has to know how much of the
last web page needs to be "remembered" when the new information is sent to the
web page. The process of keeping track of the values of controls and variables is known
as state management.

When a page postback occurs, ASP.Net has many techniques to remember state
information. Some of these state management information methods are on the client
side and others are on the server side. Client side methods for maintaining state
include query strings, cookies, hidden fields and view state.

Most client side state management modes can be read by users and other programs,
meaning that user ids and passwords can be stolen. But session state sits on the
server and the ability for other users to capture this information is reduced and in
some cases eliminated.

Session State is Server Side


Session state is server side. In session state, a special session id is generated on the
server and assigned to the calling browser; it identifies that browser's session with a
specific ASP.Net application.

The importance of this method is that the server, especially in a web farm, can know
whether a particular user is a new user or has already visited this web page. Imagine a
web farm, where you have multiple servers serving the same web page. How do the
servers recognize unique visitors? It is through the session id. Even if server one
gets the initial request, server two and server three can recognize user A as already
having a session in process.

Now the server can store session specific information about the current user. Is there
highly critical sensitive information about the user that needs to be remembered?
Like credit card information or name, address and phone number? This information
can be kept out of the prying eyes of internet identity thieves with session state.
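A framework-neutral sketch of the idea (plain Python; the session-id cookie name and the shared store are hypothetical stand-ins for what ASP.Net manages for you):

import uuid

# A shared, server-side store keyed by session id. In a web farm this
# would live in a state server or database reachable by every server.
session_store = {}

def get_session(request_cookies):
    """Return (session_id, session_data) for the calling browser."""
    sid = request_cookies.get("SESSION_ID")
    if sid is None or sid not in session_store:
        sid = uuid.uuid4().hex        # new visitor: issue a session id
        session_store[sid] = {}       # empty per-user state on the server
    return sid, session_store[sid]

# Sensitive values stay on the server; only the opaque id travels to
# the browser as a cookie.
sid, data = get_session({})           # first request: no cookie yet
data["card_number"] = "4111 1111 1111 1111"
sid2, data2 = get_session({"SESSION_ID": sid})
assert data2["card_number"] == "4111 1111 1111 1111"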

TCP/IP MODEL
TCP/IP stands for Transmission Control Protocol/Internet Protocol, which is a widely
accepted and used communications protocol. TCP/IP has only four layers, which
roughly correspond to groups of the OSI model. The Internet, many internal business
networks and some home networks use TCP/IP. TCP (Transmission Control
Protocol) is responsible for reliable delivery of data. IP (Internet Protocol) provides
addressing and routing information.

TCP/IP Layers
The four layers in TCP/IP are:
• Application Layer
• Transport Layer
• Internet Layer
• Network Interface Layer

Network Interface Layer
This layer provides the physical interface for the transmission of information. It covers
all mechanical, electrical, functional and procedural aspects of physical communication.
This layer attempts to provide reliable communication over the physical layer interface.
It supports point-to-point as well as broadcast communication, and simplex, half-duplex
or full-duplex communication.

Internet Layer
It implements routing of frames (packets) through the network. It defines the optimal
path the packet should take from the source to the destination. This layer also handles
congestion in the network. The network layer also defines how to fragment a packet into
smaller packets to accommodate different media.
Transport Layer
The purpose of this layer is to provide a reliable mechanism for the exchange of data
between two processes in different computers. It ensures that the data units are delivered
error free, and that there is no loss or duplication of data units. It provides connection
management, and allows multiple connections to be multiplexed over a single channel.

Application Layer
The application layer interacts with application programs and is the highest level of the
TCP/IP model. It contains management functions to support distributed applications.
Examples of application layer services are file transfer, electronic mail, remote login,
etc.
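To make the division of labour concrete, here is a minimal sketch (Python; the host and port are hypothetical) in which an application-layer program simply hands its data to the transport layer, and TCP, IP and the network interface beneath it handle reliability, routing and physical transmission:

import socket

# SOCK_STREAM selects TCP at the transport layer; the application
# never sees segments, IP datagrams or frames.
with socket.create_connection(("server.example.com", 7000)) as sock:
    sock.sendall(b"hello from the application layer\n")
    reply = sock.recv(4096)  # TCP delivers the bytes reliably, in order
    print(reply.decode())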

What are the issues in Intranet Security

The scenario is all too familiar: computer systems within an enterprise, previously
thought to be isolated from the outside world, become accessible through carelessness
and inadvertently introduced back doors. Your company develops a major new product
in secret using its intranet; hackers creep in and sell the details to the competition, or
blackmail the enterprise.

Security has long been seen as a major sticking point in the adoption of Internet
technology in the enterprise. As networks have grown and connected to the Internet, the
spectre of the hacker has haunted managers responsible for both delivering information
within the enterprise and to its partners, and protecting it from unauthorised outsiders.

In fact, the security capabilities of the latest Internet and intranet technologies enable
companies to control the availability of information and the authenticity of that
information better than ever before. The increasing sophistication of both server and
client software means that this unprecedented level of security can be provided without
requiring users to undergo complex and bureaucratic procedures to gain legitimate access
to sites.

Firewalls

For intranet developers, restricting access to the site has been the primary security
concern. The simplest way to achieve this is to position the internal site where it cannot
be seen or accessed from the Internet at large: behind a firewall. At their simplest,
firewalls consist of software which blocks access to internal networks from the Internet.
While legitimate traffic such as email is allowed in to the mail server, programs such as
search engine spiders or FTP clients cannot access machines inside the safe boundary of
the firewall.
Firewalls also offer some protection to users venturing out from the network to the
Internet, acting as proxies to fetch web pages so that the name and IP number of
machines on the network are not revealed to the web sites that they visit, preventing
hackers from learning details of the structure of the network.

While the basic firewall remains a fundamental of Internet and intranet security,
increasing levels of sophistication are required by many users as access to the corporate
intranet needs to be widened beyond those physically present on the same network.
Allowing users dial-up access behind the firewall violates basic security principles;
restricting them to the same access offered to the rest of the Internet in front of the
firewall denies them valuable services.

Web server security

Intranets and extranets are often constructed using Web servers to deliver information to
users in a now-familiar form. Username/password authentication has long been used as a
mechanism for restricting access to web sites. But because these character strings are
themselves passed as clear text, capable of being intercepted and read with simple
network management tools, basic passwords do not adequately secure communications.

A significant improvement can be achieved by encrypting communications between a
browser and server. The most common way of doing this is to establish a secure
connection by running HTTP (the standard web protocol) over the Secure Sockets
Layer (SSL). Increasingly, commercial web sites are using SSL to guarantee the
authenticity of the server and integrity of the data delivered to web site users, and to
protect visitors' responses to interactive elements on the site. Whenever you point your
browser to a URL that begins with https://, you are using SSL.
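A short sketch of what the browser does for you (Python's standard ssl module; the host is hypothetical): it establishes an encrypted connection and verifies the server's certificate before any HTTP traffic flows:

import socket, ssl

context = ssl.create_default_context()  # verifies the server certificate
with socket.create_connection(("www.example.com", 443)) as tcp:
    with context.wrap_socket(tcp, server_hostname="www.example.com") as tls:
        print("Negotiated:", tls.version())          # e.g. TLSv1.3
        print("Server cert subject:", tls.getpeercert()["subject"])
        tls.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))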

SSL has become fundamental to the spread of Internet commerce, and is being used for
an increasing range of transactions across the Internet. However, by default most SSL
implementations in web servers do not authenticate the client web browser. In its raw
form, therefore, SSL is best suited to the largely anonymous requirements of retailing.

Virtual private networks

One option for widening access is to set up a virtual private network (VPN) using the
Internet. A VPN uses software or hardware to encrypt all the traffic that travels over the
Internet between two predetermined end-points. This is an ideal solution where limited
access to an intranet is required, for example between two sites of the same company
requiring access to the same corporate information, or suppliers and customers
integrating their supply chains.

A potential weakness of VPN solutions is their relative inflexibility. VPNs work well for
creating fixed tunnels from one known point to another, but they are less well suited to
situations where access needs to be given on-the-fly to groups of people not necessarily
known at the outset, or who need to gain access from a variety of locations. VPN
technology at present works best for encrypting traffic between two known points that are
accepted as valid destinations for traffic: once a link has been established, the technology
is used to encrypt the information which is sent, not for establishing the validity of the
destination to which it is being sent.

As more flexible VPN access is required, the prime issue becomes that of authenticating
potential visitors to the site and the credentials that they present. Are they who they say
they are, or an impostor? With this capability it is possible to open up the system to
provide access to a wider range of partners, customers or suppliers.

Certification authorities

One solution is to use a digital certificate-based solution. Users are given access based
on their possession of certificates signed or authorised for access by or on behalf of the
server to which they wish to gain access. The certificate acts as evidence of their digital
identity. Certificates can also be combined with other access control mechanisms, such
as tokens (identification hardware carried by users), or by only accepting visitors from
certain authenticated addresses.

At the moment this option is most easily achieved with a custom solution combined with
a certification authority (CA) server or external CA service, which can issue and revoke
certificates and authenticate any certificates presented in order to gain access. This can
involve a simple implementation of a public key infrastructure (PKI), a system which
establishes a hierarchy of authority for the issuance and authentication of certificates and
users presenting them.
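As a hedged illustration of what a CA does at its core, the sketch below (Python, using the third-party cryptography package; all names are made up) has a root key sign a certificate binding a name to a user's public key:

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa

root_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
user_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Root CA")])
subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "alice@example.com")])

now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(user_key.public_key())          # the key being certified
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(root_key, hashes.SHA256())            # the CA's signature
)
# The server later verifies this signature against the root's public key
# before granting access.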

Digital certificates can provide a sophisticated means of controlling and monitoring
access. The certificate itself acts as a token for access control: the user must present it in
order to gain access. In many implementations this can be done automatically: in some
implementations the certificate is stored on a separate token such as a smart card which
the user has to present to the local client in order for it to pass it to the server to gain
access.

Public key infrastructure solutions

The use of public-key based security systems requires considerable care in system design
and management. The security of the entire system is ultimately guaranteed by the
security of the key used for signing certificates at the top (commonly called the root) of
the public key infrastructure. Here specialized hardware can play a useful role.

Normally, all keys that are accessed by the server are held at some point in the main
memory of the server, where they are potentially vulnerable to attack (for example, in a
server core dump). A higher degree of protection is desirable for the most valuable keys.

A specialized hardware cryptographic module for storing and protecting the signing keys
provides an answer. The keys are stored in a strongly encrypted format. When loaded for
signing, the keys are decrypted and loaded into the memory of the secure cryptographic
module, which then performs all the signing operations on behalf of the server. The keys
are never revealed in their unencrypted form to the server, so even if an intruder manages
to access the network, the keys will remain safe. Security is further assisted by physical
design features of the module; tamper-resistant enclosures and advanced manufacturing
techniques protect the keys from physical attack.

The signing of digital certificates is also a computation-intensive process, so it makes
sense to consider combining some kind of hardware acceleration of cryptography within
the key storage module. This way, keys are rapidly handled within a secure environment
and no processing bottleneck is introduced, even when a high transaction throughput is
required.

Future of Intranet
Intranet trends follow closely on the heels of the latest Internet trends. The biggest
Internet buzzword right now is Web 2.0. Web 2.0 is all about social media and user-
generated content as opposed to the static, read-only nature of Web 1.0.

Many of the most trafficked Web sites are fueled by Web 2.0 principles. It explains the
explosion of blogs, the pre-eminence of Wikipedia and the tremendous popularity of
online social networking sites like MySpace, Facebook and LinkedIn.

Corporate intranets are getting an upgrade now that Net generation students are entering
the workplace. The Net Generation grew up in a world steeped in communications
technology. Many of them don't remember life before they had a MySpace account, and
they'd be lost without their cell phones.

Net Generation employees expect their employers to think and communicate the same
way they do. E-mail is just a start. They want to have their own company blogs and
subscribe to RSS (Really Simple Syndication) feeds from the blogs of their bosses and
coworkers. They want to help build a company Wiki and hook up with friends on a
company-wide social network.

Only recently have businesses woken up to the necessity of so-called intranet 2.0 to
attract and maintain talented young employees. According to a recent survey of chief
information officers, only 18 percent of American businesses host blogs on their intranet
and only 13 percent have launched corporate Wikis. However, 40 percent said they have
such programs in the development and testing stages [source: Prescient Digital].

Corporate intranets will take on increasing importance as more and more businesses turn
to Web-based applications to manage core business systems like SAP and PeopleSoft.
Companies are learning that on-demand Web services are cheaper to maintain and easier
to use than hosting software on their own systems. All of these Web-based applications
can be bundled into the corporate intranet where they can be accessed securely with one
network password.

Cost of Intranet
A corporate intranet can cost very little (from $3,000 to $4,000) if it is done with existing
hardware and free software that can be downloaded from the Internet. Most corporate
intranets however cost between $50,000 and $150,000 to get started. The corporation
must also budget for maintaining the intranet and this will usually cost more than what
was spent on start-up as it will involve salaries for new staff and possibly more hardware
and software as the intranet grows.

Return-on-investment can be quite substantial. Conservative figures place the payback at
a low of 23% to a high of 88%, over 1 to 2 years. Costs will be reduced in paper
dissemination and printing, but the greatest benefits realized will relate to information
flow.

Protocols used for Communications

1. HTTP
2. TCP/IP
3. SMTP
4. NNTP
5. FTP
6. SOAP
7. UDP

SMTP
SMTP is short for Simple Mail Transfer Protocol, and it is used to transfer e-mail
messages between computers. It is a text-based protocol in which the message text is
specified along with the recipients of the message. Simple Mail Transfer Protocol is a
'push' protocol and cannot be used to 'pull' messages from the server. A procedure
of queries and responses is used to send the message between the client and the server.
An end user's e-mail client or a relaying server's Mail Transfer Agent can act as an
SMTP client, which initiates a TCP connection to port 25 of the server. SMTP is used
to send the message from the mail client to the mail server, and an e-mail client using
POP or IMAP is used to retrieve the message from the server.
SMTP Functions

An SMTP server performs the following two functions:

1. It verifies the configuration and grants permission to a computer that is making an
attempt to send a message.
2. It sends the message to the specified destination and tracks it to see whether it is
delivered successfully. If it is not delivered successfully, an error message is sent to
the sender.

There is one limitation to SMTP: its inability to authenticate senders, which results in
e-mail spamming. An enhanced version of SMTP also exists, called Extended SMTP
(ESMTP). ESMTP is used to send e-mails that include graphics and other attachments.
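A minimal sketch of an SMTP client pushing a message to a server (Python's standard smtplib; the host and addresses are hypothetical):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Hello over SMTP"
msg.set_content("Sent with a plain SMTP 'push' over TCP port 25.")

# Open a TCP connection to port 25 of the (hypothetical) mail server and
# push the message; retrieval happens later, via POP or IMAP.
with smtplib.SMTP("mail.example.com", 25) as server:
    server.send_message(msg)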

NNTP
NNTP (Network News Transfer Protocol) is the predominant protocol used by computer
clients and servers for managing the notes posted on Usenet newsgroups. NNTP replaced
the original Usenet protocol, the UNIX-to-UNIX Copy Protocol (UUCP), some time ago.
NNTP servers manage the global network of collected Usenet newsgroups and include
the server at your Internet access provider. An NNTP client is included as part of
Netscape, Internet Explorer, Opera and other Web browsers, or you may use a separate
client program called a newsreader.

FTP
FTP (File Transfer Protocol) is the generic term for a group of computer programs aimed
at facilitating the transfer of files or data from one computer to another. It originated at
the Massachusetts Institute of Technology (MIT) in the early 1970s, when mainframes,
dumb terminals and time-sharing were the standard.

Traditionally, when communication speeds were low (ranging from the then-standard
9.8 kbps to the "fast" 16.8 kbps, unlike today's broadband 1 Mbps standard), FTP was
the method of choice for downloading large files from various websites. Although FTP
programs have been improved and updated over time, the basic concepts and definitions
remain the same and are still in use today.

FTP Concepts and Definitions


The key definition to remember is the term "protocol", which means a set of rules or
standards that govern the interactions between computers. It is a key component in many
terms that are now taken for granted: Transmission Control Protocol/Internet Protocol or
TCP/IP, the governing standards for internet communications; Hyper Text Transfer
Protocol or HTTP, which established the benchmarks for internet addresses and
communications between two computers on the internet; and File Transfer Protocol
(FTP) which, as has been said, sets the rules for transferring files between computers.

The primary objective in the formulation of the File Transfer Protocol was to make file
transfers uncomplicated and to relieve the user of the burden of learning the details of
how the transfer is actually accomplished. The result of all these standards and rules can
be seen in today's web interactions, where pointing and clicking (with a mouse) initiates
a series of actions that the typical internet user does not see or even remotely understand.

Differences between FTP and HTTP


The major difference between FTP and HTTP is that FTP is a two-way system: it can be
used to copy or move files from a server to a client computer as well as to upload or
transfer files from a client to a server. HTTP, on the other hand, is primarily one-way in
typical use: "transferring" text, pictures and other data (formulated into a web page) from
the server to a client computer, which uses a web browser to view the data.
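A sketch of FTP's two-way nature using Python's standard ftplib (the host, credentials and file names are hypothetical):

from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login("alice", "secret")  # hypothetical credentials

    # Download: copy a file from the server to the client's disk.
    with open("report.pdf", "wb") as f:
        ftp.retrbinary("RETR report.pdf", f.write)

    # Upload: transfer a local file back to the server.
    with open("notes.txt", "rb") as f:
        ftp.storbinary("STOR notes.txt", f)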

Another point to bear in mind is that file transfer in FTP means exactly that: files are
copied or moved from a file server to a client computer's hard drive, and vice versa. On
the other hand, files in an HTTP transfer are merely viewed, and can 'disappear' when
the browser is closed unless the user explicitly saves the data to the computer's disk.

Another major difference between the two systems lies in the manner in which the data
is encoded and transmitted. FTP systems generally encode and transmit their data in
binary, which allows for faster data transfer; HTTP systems encode their data in MIME
format, which is larger and more complex. Note that when attaching files to emails, the
size of the file is usually larger than the original because of the additional encoding
involved.

SOAP
SOAP (Simple Object Access Protocol) is a way for a program running in one kind of
operating system (such as Windows 2000) to communicate with a program in the same or
another kind of operating system (such as Linux) by using the World Wide Web's
Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the
mechanisms for information exchange. Since Web protocols are installed and available
for use by all major operating system platforms, HTTP and XML provide an already at-
hand solution to the problem of how programs running under different operating systems
in a network can communicate with each other. SOAP specifies exactly how to encode an
HTTP header and an XML file so that a program in one computer can call a program in
another computer and pass it information. It also specifies how the called program can
return a response.
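As a hedged sketch, the snippet below (Python's standard http.client; the endpoint, SOAPAction and message schema are made up) posts a minimal SOAP envelope over HTTP:

import http.client

envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetPrice xmlns="http://example.com/stock">
      <Symbol>ACME</Symbol>
    </GetPrice>
  </soap:Body>
</soap:Envelope>"""

conn = http.client.HTTPConnection("www.example.com")
conn.request("POST", "/soap", body=envelope, headers={
    "Content-Type": "text/xml; charset=utf-8",
    "SOAPAction": "http://example.com/stock/GetPrice",  # hypothetical
})
print(conn.getresponse().read().decode())  # the XML response envelope
conn.close()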
SOAP was developed by Microsoft, DevelopMentor, and Userland Software and has
been proposed as a standard interface to the Internet Engineering Task Force (IETF). It is
somewhat similar to the Internet Inter-ORB Protocol (IIOP), a protocol that is part of the
Common Object Request Broker Architecture (CORBA). Sun Microsystems' Remote
Method Invocation (RMI) is a similar client/server interprogram protocol between
programs written in Java.

An advantage of SOAP is that program calls are much more likely to get through firewall
servers that screen out requests other than those for known applications (through the
designated port mechanism). Since HTTP requests are usually allowed through firewalls,
programs using SOAP to communicate can be sure that they can communicate with
programs anywhere.

UDP
UDP (User Datagram Protocol) is a communications protocol that offers a limited
amount of service when messages are exchanged between computers in a network that
uses the Internet Protocol (IP). UDP is an alternative to the Transmission Control
Protocol (TCP) and, together with IP, is sometimes referred to as UDP/IP. Like the
Transmission Control Protocol, UDP uses the Internet Protocol to actually get a data unit
(called a datagram) from one computer to another. Unlike TCP, however, UDP does not
provide the service of dividing a message into packets (datagrams) and reassembling it at
the other end. Specifically, UDP doesn't provide sequencing of the packets that the data
arrives in. This means that the application program that uses UDP must be able to make
sure that the entire message has arrived and is in the right order. Network applications
that want to save processing time because they have very small data units to exchange
(and therefore very little message reassembling to do) may prefer UDP to TCP. The
Trivial File Transfer Protocol (TFTP) uses UDP instead of TCP.

UDP provides two services not provided by the IP layer. It provides port numbers to help
distinguish different user requests and, optionally, a checksum capability to verify that
the data arrived intact.
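A minimal UDP exchange in Python (the server address is hypothetical): the application supplies a complete datagram and must itself cope with loss, duplication and reordering:

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP
sock.settimeout(2.0)  # datagrams may simply be lost

# Send one self-contained datagram; there is no connection setup.
sock.sendto(b"ping", ("server.example.com", 9999))

try:
    data, addr = sock.recvfrom(4096)
    print("reply from", addr, ":", data)
except socket.timeout:
    print("no reply; the application must retry if it cares")
sock.close()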
