
Database (Data) Tier: At this tier, only the database resides. The database, along with its query processing languages, sits in layer 3 of the 3-tier architecture. It also contains all relations and their constraints.

Application (Middle) Tier: At this tier reside the application server and the programs that access the database. For a user, this application tier presents an abstracted view of the database; users are unaware of any database existing beyond the application. For the database tier, the application tier is its user, and the database tier is not aware of any other user beyond it. The application tier thus works as a mediator between the two.

User (Presentation) Tier: The end user sits at this tier. From a user's perspective, this tier is everything; he or she does not know about any existence or form of the database beyond this layer. At this layer, multiple views of the database can be provided by the application, all generated by programs residing in the application tier.
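
To make the separation concrete, here is a minimal Python sketch (the table and class names, and the use of SQLite as the database, are illustrative assumptions, not part of the original description): the presentation tier never touches SQL, and the database tier sees no user other than the application tier.

```python
import sqlite3

# --- Database (data) tier: only the database and its constraints live here ---
def create_database():
    conn = sqlite3.connect(":memory:")  # illustrative in-memory database
    conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
    conn.execute("INSERT INTO student (name) VALUES ('Alice'), ('Bob')")
    return conn

# --- Application (middle) tier: the only component that talks to the database ---
class StudentService:
    def __init__(self, conn):
        self._conn = conn  # the database tier sees no user beyond this tier

    def list_students(self):
        # Return plain values, so callers never see tables, SQL, or connections.
        return [row[0] for row in self._conn.execute("SELECT name FROM student")]

# --- User (presentation) tier: sees only the views the application generates ---
def show_students(service):
    for name in service.list_students():
        print("Student:", name)

show_students(StudentService(create_database()))
```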

END

Using a Three-Tier Architecture Model

The three-tier architecture model, which is the fundamental framework for the logical design model,
segments an application's components into three tiers of services. These tiers do not necessarily
correspond to physical locations on various computers on a network, but rather to logical layers of the
application. How the pieces of an application are distributed in a physical topology can change, depending
on the system requirements.
Following are brief descriptions of the services allocated to each tier:
The presentation tier, or user services layer, gives a user access to the application. This
layer presents data to the user and optionally permits data manipulation and data entry. The two main
types of user interface for this layer are the traditional application and the Web-based application. Web-based applications now often contain most of the data manipulation features that traditional applications
use. This is accomplished through use of Dynamic HTML and client-side data sources and data cursors.
Note: In a three-tiered application, the client-side application will be skinnier than a client-server
application because it will not contain the service components now located in the middle tier. This results
in less overhead for the user, but more network traffic for the system because components are distributed
among different machines.
The middle tier, or business services layer, consists of business and data rules. Also
referred to as the business logic tier, the middle tier is where COM+ developers can solve mission-critical
business problems and achieve major productivity advantages. The components that make up this layer
can exist on a server machine, to assist in resource sharing. These components can be used to enforce
business rules, such as business algorithms and legal or governmental regulations, and data rules, which
are designed to keep the data structures consistent within either specific or multiple
databases. Because these middle-tier components are not tied to a specific client, they can be used by all
applications and can be moved to different locations, as response time and other rules require. For
example, simple edits can be placed on the client side to minimize network round-trips, or data rules can
be placed in stored procedures.

The data tier, or data services layer, interacts with persistent data usually stored in a
database or in permanent storage. This is the actual DBMS access layer. It can be accessed through
the business services layer and on occasion by the user services layer. This layer consists of data access
components (rather than raw DBMS connections) to aid in resource sharing and to allow clients to be
configured without installing the DBMS libraries and ODBC drivers on each client.
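
As a rough illustration of such a data access component (the class name and the use of Python with SQLite are assumptions made for this sketch), the business services layer might call an object like this instead of holding raw DBMS connections itself:

```python
import sqlite3

class CustomerDataAccess:
    """Hypothetical data services component: callers share this object
    instead of opening raw DBMS connections of their own."""

    def __init__(self, dsn=":memory:"):
        self._conn = sqlite3.connect(dsn)  # the only place DBMS libraries are touched
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS customer (id INTEGER PRIMARY KEY, name TEXT)")

    def add_customer(self, name):
        self._conn.execute("INSERT INTO customer (name) VALUES (?)", (name,))
        self._conn.commit()

    def find_customer(self, customer_id):
        row = self._conn.execute(
            "SELECT id, name FROM customer WHERE id = ?", (customer_id,)).fetchone()
        return row  # clients receive plain tuples, not cursors or connections
```

Because clients receive plain values rather than live cursors or connections, no DBMS libraries or ODBC drivers need to be installed on each client.
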
During an application's life cycle, the three-tier approach provides benefits such as reusability, flexibility,
manageability, maintainability, and scalability. You can share and reuse the components and services you
create, and you can distribute them across a network of computers as needed. You can divide large and
complex projects into simpler projects and assign them to different programmers or programming teams.
You can also deploy components and services on a server to help keep up with changes, and you can redeploy them as the application's user base, data, and transaction volume grow.
END

What is a 3-tier architecture?


Three-tier (layer) is a client-server architecture in which the user interface, the business process (business rules), and data storage and access are developed and maintained as independent modules, most often on separate platforms. Basically, there are three layers: tier 1 (presentation tier, GUI tier), tier 2 (business objects, business logic tier) and tier 3 (data access tier). These tiers can be developed and tested separately.
What is the need for dividing the code in 3-tiers? Separation of the user interface from business logic
and database access has many advantages. Some of the advantages are as follows:

Reusability of the business logic component results in quick development. Let's say we have a
module that handles adding, updating, deleting and finding customers in the system. As this
component is developed and tested, we can use it in any other project that might involve
maintaining customers.

Transformation of the system is easy. Since the business logic is separate from the data access layer, changing the data access layer won't affect the business logic module much. Let's say we are moving from SQL Server data storage to Oracle: there shouldn't be any changes required in the business layer component or in the GUI component (see the sketch after this list).

Change management of the system is easy. Let's say there is a minor change in the business logic; we don't have to install the entire system on individual users' PCs. E.g. if GST (tax) is changed from 10% to 15%, we only need to update the business logic component, without affecting the users and without any downtime.

Having separate functionality servers allows for parallel development of individual tiers by
application specialists.

Provides more flexible resource allocation, and can reduce network traffic by having the functionality servers strip data down to the precise structure needed before sending it to the clients.
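
The two advantages flagged above, a swappable data access layer and a single place to change a business rule such as the GST rate, can be sketched as follows (all class and function names here are invented for illustration):

```python
class CustomerStore:
    """Abstract data access layer; the business tier codes against this only."""
    def find_customer(self, customer_id):
        raise NotImplementedError

class SqlServerCustomerStore(CustomerStore):
    def find_customer(self, customer_id):
        # A real implementation would query SQL Server here.
        return {"id": customer_id, "source": "SQL Server"}

class OracleCustomerStore(CustomerStore):
    def find_customer(self, customer_id):
        # A real implementation would query Oracle here.
        return {"id": customer_id, "source": "Oracle"}

GST_RATE = 0.15  # changing the rate here updates every client at once

def invoice_total(store: CustomerStore, customer_id, net_amount):
    customer = store.find_customer(customer_id)  # business logic is unchanged
    return {"customer": customer, "total": net_amount * (1 + GST_RATE)}

# Swapping storage back ends is one line; the business layer never changes:
print(invoice_total(SqlServerCustomerStore(), 1, 100.0))
print(invoice_total(OracleCustomerStore(), 1, 100.0))
```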

END

What are the advantages and disadvantages of using a DBMS instead of a file processing System?
The database approach offers a number of potential advantages compared to traditional file
processing systems.

1. Program-Data Independence: The separation of data descriptions from the application programs that use the data is called data independence. With the database approach, data descriptions are stored in a central location called the repository. This property of database systems allows an organization's data to change without changing the application programs that process the data.
2. Data Redundancy and Inconsistency: In a file-processing system, files with different formats and application programs may be created by different programmers, and different programs may be written in several programming languages. The same information may be placed in different files, which causes redundancy and inconsistency and, consequently, higher storage and access costs. For example, the address and telephone number of a person may exist in two files, one containing savings account records and one containing checking account records. A change in the person's address may then be reflected in the savings account records but nowhere else in the system, resulting in data inconsistency. One solution is to avoid keeping multiple copies of the same information: the address and telephone number are physically stored in just one place while remaining accessible to all applications. A DBMS can handle data redundancy and inconsistency in this way.
3. Difficulty in Accessing Data: In classical file organization, the data is stored in files. Whenever data has to be retrieved to meet a new requirement, a new application program has to be written, which is a tedious process.
4. Data Isolation: Since data is scattered across various files, and files may be in different formats, it is difficult to write new application programs to retrieve the appropriate data.
5. Concurrent Access: There is no central control of data in classical file organization, so concurrent access to data by many users is difficult to implement.
6. Security Problems: Since there is no centralized control of data in classical file organization, security enforcement is difficult in a file-processing system.
7. Integrity Problem: - The data values stored in the database must satisfy certain types of
consistency constraints. For example, the balance of a bank account may never fall below a
prescribed amount. These constraints are enforced in the system by adding appropriate code in the
various application programs. However, when new constraints are added, it is difficult to change the
programs to enforce them. The problem is compounded when constraints involve several data items
from different files.
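
By contrast, a DBMS lets such a constraint be declared once in the schema and enforced automatically for every program that touches the data. A minimal sketch, assuming SQLite purely for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The consistency rule lives in the database itself, not in application code:
conn.execute("""
    CREATE TABLE account (
        id      INTEGER PRIMARY KEY,
        balance REAL NOT NULL CHECK (balance >= 0)  -- never below the prescribed amount
    )
""")
conn.execute("INSERT INTO account (balance) VALUES (100.0)")  # accepted

try:
    # Every program that writes to the table is checked automatically:
    conn.execute("UPDATE account SET balance = -50.0 WHERE id = 1")
except sqlite3.IntegrityError as exc:
    print("rejected by the DBMS:", exc)
```
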
8. Improved Data Sharing: - A database is designed as a shared corporate resource.
Authorized internal and external users are granted permission to use the database, and each user is
provided one or more user views to facilitate this use. A user view is a logical description of some
portion of the database that is required by a user to perform some task.
9. Increased Productivity of Application Development: A major advantage of the database approach is that it greatly reduces the cost and time of developing new business applications.
There are two important reasons that database applications can often be developed much more rapidly than conventional file applications.
a) Assuming that the database and the related data capture and maintenance applications have
already been designed and implemented, the programmer can concentrate on the specific functions
required for the new application, without having to worry about file design or low-level
implementation details.

b) The database management system provides a number of high-level productivity tools, such as form and report generators and high-level languages, that automate some of the activities of database design and implementation.
END

The principal advantages of DBMS over file processing system:

Flexibility: Because programs and data are independent, programs do not have to be modified when
types of unrelated data are added to or deleted from the database, or when physical storage changes.
Fast response to information requests: Because data is integrated into a single database, complex
requests can be handled much more rapidly than locating data separately. In many businesses, faster
response means better customer service.
Multiple access: Database software allows data to be accessed in a variety of ways (through various
key fields), by using several programming languages (both 3GL and nonprocedural 4GL programs).
Lower user training costs: Users often find it easier to learn such systems and training costs may be
reduced. Also, the total time taken to process requests may be less, which would increase user
productivity.
Less storage: Theoretically, all occurrences of data items need be stored only once, thereby eliminating
the storage of redundant data. System developers and database designers often use data normalization to
minimize data redundancy.

Here are some disadvantages:

A DBMS exposes the business to the risk of losing critical data held in electronic format, and such data can be more readily stolen without proper security.
The cost of a DBMS can be prohibitive for small enterprises as they struggle to justify the cost of investing in the infrastructure.
Improper use of the DBMS can lead to incorrect decision making, as people take the presented data for granted as accurate.
Data can be stolen if the password security policy is weak.

END

DSS
A Decision Support System (DSS) is a computer-based information system that supports business or
organizational decision-making activities. DSSs serve the management, operations, and planning levels of an
organization (usually mid and higher management) and help to make decisions, which may be rapidly changing and not
easily specified in advance (unstructured and semi-structured decision problems). Decision support systems can be fully computerized, human-powered, or a combination of both.
While academics have perceived DSS as a tool to support the decision-making process, DSS users see DSS as a tool to facilitate organizational processes.[1] Some authors have extended the definition of DSS to include any system that might support decision making.[2] Sprague (1980) defines DSS by its characteristics:
1. DSS tends to be aimed at the less well structured, underspecified problem that upper level managers typically
face;
2. DSS attempts to combine the use of models or analytic techniques with traditional data access and retrieval
functions;
3. DSS specifically focuses on features which make them easy to use by noncomputer people in an interactive
mode; and
4. DSS emphasizes flexibility and adaptability to accommodate changes in the environment and the decision
making approach of the user.
DSSs include knowledge-based systems. A properly designed DSS is an interactive software-based system intended to
help decision makers compile useful information from a combination of raw data, documents, and personal knowledge, or
business models to identify and solve problems and make decisions.
Typical information that a decision support application might gather and present includes:

inventories of information assets (including legacy and relational data sources, cubes, data warehouses, and data
marts),

comparative sales figures between one period and the next,

projected revenue figures based on product sales assumptions.

History
The concept of decision support has evolved from two main areas of research: the theoretical studies of organizational decision making done at the Carnegie Institute of Technology during the late 1950s and early 1960s, and the technical work on interactive computer systems, mainly carried out at the Massachusetts Institute of Technology, in the 1960s.[3] DSS became an area of research of its own in the middle of the 1970s, before gaining
in intensity during the 1980s. In the middle and late 1980s, executive information systems (EIS), group decision support
systems (GDSS), and organizational decision support systems (ODSS) evolved from the single user and model-oriented
DSS.

According to Sol (1987)[4] the definition and scope of DSS has been migrating over the years. In the 1970s DSS was
described as "a computer-based system to aid decision making". In the late 1970s the DSS movement started focusing on
"interactive computer-based systems which help decision-makers utilize data bases and models to solve ill-structured
problems". In the 1980s DSS should provide systems "using suitable and available technology to improve effectiveness of
managerial and professional activities", and towards the end of 1980s DSS faced a new challenge towards the design of
intelligent workstations.[4]
In 1987, Texas Instruments completed development of the Gate Assignment Display System (GADS) for United Airlines.
This decision support system is credited with significantly reducing travel delays by aiding the management of ground
operations at various airports, beginning with O'Hare International Airport in Chicago and Stapleton Airport in Denver, Colorado.[5] Beginning in about 1990, data warehousing and on-line analytical processing (OLAP) began
broadening the realm of DSS. As the turn of the millennium approached, new Web-based analytical applications were
introduced.
The advent of better and better reporting technologies has seen DSS start to emerge as a critical component
of management design. Examples of this can be seen in the intense amount of discussion of DSS in the education
environment.
DSS also have a weak connection to the user interface paradigm of hypertext. Both the University of
Vermont PROMIS system (for medical decision making) and the Carnegie Mellon ZOG/KMS system (for military and
business decision making) were decision support systems which also were major breakthroughs in user interface
research. Furthermore, although hypertext researchers have generally been concerned with information overload, certain researchers, notably Douglas Engelbart, have been focused on decision makers in particular.

Taxonomies
Using the relationship with the user as the criterion, Haettenschwiler [6] differentiates passive, active, and cooperative DSS.
A passive DSS is a system that aids the process of decision making, but that cannot bring out explicit decision
suggestions or solutions. An active DSS can bring out such decision suggestions or solutions. A cooperative DSS allows
the decision maker (or its advisor) to modify, complete, or refine the decision suggestions provided by the system, before
sending them back to the system for validation. The system again improves, completes, and refines the suggestions of
the decision maker and sends them back to them for validation. The whole process then starts again, until a consolidated
solution is generated.
Another taxonomy for DSS has been created by Daniel Power. Using the mode of assistance as the criterion, Power differentiates communication-driven DSS, data-driven DSS, document-driven DSS, knowledge-driven DSS, and model-driven DSS.[7]

A communication-driven DSS supports more than one person working on a shared task; examples include
integrated tools like Google Docs or Groove[8]

A data-driven DSS or data-oriented DSS emphasizes access to and manipulation of a time series of internal
company data and, sometimes, external data.

A document-driven DSS manages, retrieves, and manipulates unstructured information in a variety of electronic
formats.

A knowledge-driven DSS provides specialized problem-solving expertise stored as facts, rules, procedures, or in
similar structures.[7]

A model-driven DSS emphasizes access to and manipulation of a statistical, financial, optimization, or simulation model. Model-driven DSS use data and parameters provided by users to assist decision makers in analyzing a situation; they are not necessarily data-intensive. Dicodess is an example of an open source model-driven DSS generator.[9]

Using scope as the criterion, Power[10] differentiates enterprise-wide DSS and desktop DSS. An enterprise-wide DSS is
linked to large data warehouses and serves many managers in the company. A desktop, single-user DSS is a small
system that runs on an individual manager's PC.

Components
Three fundamental components of a DSS architecture are:[6][7][11][12][13]
1. the database (or knowledge base),
2. the model (i.e., the decision context and user criteria), and
3. the user interface.
The users themselves are also important components of the architecture.[6][13]
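
To make the three components tangible, here is a deliberately tiny model-driven sketch in Python (all data, names, and the growth-rate assumption are invented): a database of past sales, a projection model, and a user interface that presents comparative and projected figures like those listed earlier.

```python
# 1. The database (or knowledge base): historical sales per period.
sales_db = {"2013": 120_000, "2014": 135_000}

# 2. The model: the decision context and user criteria (here, projected
#    revenue from a growth-rate assumption supplied by the user).
def project_revenue(db, base_year, growth_rate):
    return db[base_year] * (1 + growth_rate)

# 3. The user interface: presents the comparison and the projection.
def run_dss():
    growth = 0.10  # user-supplied assumption
    print("Sales 2013 vs 2014:", sales_db["2013"], sales_db["2014"])
    print("Projected 2015 revenue at 10% growth:",
          project_revenue(sales_db, "2014", growth))

run_dss()
```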

Development frameworks
DSS systems are not entirely different from other systems and require a structured approach. Such a framework includes
people, technology, and the development approach. [11]
The early framework of decision support systems consists of four phases:
Intelligence: Searching for conditions that call for a decision.
Design: Developing and analyzing possible alternative courses of action.
Choice: Selecting a course of action among those developed.
Implementation: Adopting the selected course of action in the decision situation.
DSS technology levels (of hardware and software) may include:
1. The actual application that will be used by the user. This is the part of the application that allows the decision
maker to make decisions in a particular problem area. The user can act upon that particular problem.
2. The generator: a hardware/software environment that allows people to easily develop specific DSS applications. This level makes use of CASE tools or systems such as Crystal, Analytica and iThink.
3. Tools: lower-level hardware/software on which DSS generators are built, including special languages, function libraries and linking modules.
An iterative developmental approach allows for the DSS to be changed and redesigned at various intervals. Once the
system is designed, it will need to be tested and revised where necessary for the desired outcome.

Classification
There are several ways to classify DSS applications. Not every DSS fits neatly into one of the categories, but may be a
mix of two or more architectures.
Holsapple and Whinston[14] classify DSS into the following six frameworks: text-oriented DSS, database-oriented DSS,
spreadsheet-oriented DSS, solver-oriented DSS, rule-oriented DSS, and compound DSS.
A compound DSS is the most popular classification for a DSS. It is a hybrid system that includes two or more of the five
basic structures described by Holsapple and Whinston. [14]
The support given by DSS can be separated into three distinct, interrelated categories: [15] Personal Support, Group
Support, and Organizational Support.
DSS components may be classified as:
1. Inputs: Factors, numbers, and characteristics to analyze
2. User Knowledge and Expertise: Inputs requiring manual analysis by the user
3. Outputs: Transformed data from which DSS "decisions" are generated
4. Decisions: Results generated by the DSS based on user criteria
DSSs which perform selected cognitive decision-making functions and are based on artificial intelligence or intelligent agent technologies are called Intelligent Decision Support Systems (IDSS).[16]
The nascent field of Decision engineering treats the decision itself as an engineered object, and applies engineering
principles such as Design and Quality assurance to an explicit representation of the elements that make up a decision.

Applications
As mentioned above, there are theoretical possibilities of building such systems in any knowledge domain.
One is the clinical decision support system for medical diagnosis. There are four stages in the evolution of clinical decision
support system (CDSS). The primitive version is standalone which does not support integration. The second generation of
CDSS supports integration with other medical systems. The third generation is standard-based while the fourth is service
model-based.[17]
Other examples include a bank loan officer verifying the credit of a loan applicant or an engineering firm that has bids on
several projects and wants to know if they can be competitive with their costs.
DSS is extensively used in business and management. Executive dashboard and other business performance software
allow faster decision making, identification of negative trends, and better allocation of business resources. With a DSS, all of an organization's information can be presented in summarized form, as charts and graphs, which helps management make strategic decisions.
A growing area of DSS application, concepts, principles, and techniques is in agricultural production, marketing
for sustainable development. For example, the DSSAT4 package,[18][19] developed through financial support
of USAID during the 80s and 90s, has allowed rapid assessment of several agricultural production systems around the
world to facilitate decision-making at the farm and policy levels. There are, however, many constraints to the successful adoption of DSS in agriculture.[20]

DSS are also prevalent in forest management, where the long planning horizon and the spatial dimension of planning problems demand specific requirements. All aspects of forest management, from log transportation and harvest scheduling to sustainability and ecosystem protection, have been addressed by modern DSSs. In this context, DSSs must consider single or multiple management objectives related to the provision of goods and services, traded or non-traded, that are often subject to resource constraints and decision problems. The Community of Practice of Forest Management Decision Support Systems provides a large repository of knowledge about the construction and use of forest decision support systems.[21]
A specific example concerns the Canadian National Railway system, which tests its equipment on a regular basis using a
decision support system. A problem faced by any railroad is worn-out or defective rails, which can result in hundreds
of derailments per year. Under a DSS, CN managed to decrease the incidence of derailments at the same time other
companies were experiencing an increase.

Benefits
1. Improves personal efficiency
2. Speeds up the process of decision making
3. Increases organizational control
4. Encourages exploration and discovery on the part of the decision maker
5. Speeds up problem solving in an organization
6. Facilitates interpersonal communication
7. Promotes learning or training
8. Generates new evidence in support of a decision
9. Creates a competitive advantage over competition
10. Reveals new approaches to thinking about the problem space
11. Helps automate managerial processes
12. Generates innovative ideas to speed up performance

Features
1. Solve semi-structured and unstructured problems
2. Support managers at all levels
3. Support individuals and groups
4. Interdependence and sequence of decisions
5. Support Intelligence, Design, Choice
6. Adaptable and flexible
7. Interactive and easy to use
8. Interactive and efficient
9. Human control of the process
10. Ease of development by end user
11. Modeling and analysis
12. Data access
13. Standalone and web-based integration
14. Support varieties of decision processes
15. Support varieties of decision trees
16. Quick response

END

Systems analyst
From Wikipedia, the free encyclopedia

A systems analyst is an IT professional who specializes in analyzing, designing and implementing information systems.
System analysts assess the suitability of information systems in terms of their intended outcomes and liaise with end
users, software vendors and programmers in order to achieve these outcomes. [1] A systems analyst is a person who uses
analysis and design techniques to solve business problems using information technology.[2] Systems analysts may serve
as change agents who identify the organizational improvements needed, design systems to implement those changes,
and train and motivate others to use the systems.
Although they may be familiar with a variety of programming languages, operating systems, and computer
hardware platforms, they do not normally involve themselves in the actual hardware or software development. They may
be responsible for developing cost analysis, design considerations, staff impact amelioration, and implementation
timelines.
A systems analyst is typically confined to an assigned or given system and will often work in conjunction with a business
analyst. These roles, although having some overlap, are not the same. A business analyst will evaluate the business need
and identify the appropriate solution and, to some degree, design a solution without diving too deep into the technical
nature of such. They will often rely on a systems analyst to do so. A systems analyst will often evaluate code, review
scripting and, possibly, even modify such to some extent.
Some dedicated professionals possess practical knowledge in both areas (business and system analysis) and manage to successfully combine both of these occupations, effectively blurring the line between business analyst and systems analyst.

Roles
A systems analyst may:

Identify, understand and plan for organizational and human impacts of planned systems, and ensure that new
technical requirements are properly integrated with existing processes and skill sets.
Plan a system flow from the ground up.
Interact with internal users and customers to learn and document requirements that are then used to produce
business requirements documents.

Write technical requirements from a critical phase.

Interact with designers to understand software limitations.

Help programmers during system development, e.g. provide use cases, flowcharts or even database design.

Perform system testing.

Deploy the completed system.

Document requirements or contribute to user manuals.

Whenever a development process is conducted, the system analyst is responsible for designing components and
providing that information to the developer.

System development life cycle


The system development life cycle (SDLC) is the traditional system development method that organizations use for large-scale IT projects. The SDLC is a structured framework that consists of sequential processes by which information systems are developed.
1. System Investigation
2. System Analysis
3. System Design
4. Programming and Testing
5. Implementation
6. Operation and Maintenance
Once a development project has the necessary approvals from all participants, the systems analysis stage begins.
System analysis is the examination of the business problem that organizations plan to solve with an information system.
The main purpose of the systems analysis stage is to gather information about the existing system in order to determine
the requirements for an enhanced system or a new system. The end product of this stage, known as the deliverable, is a
set of system requirements.
Perhaps the most difficult task in system analysis is identifying the specific requirements that the system must satisfy.
These requirements often are called user requirements because users provide them. When the system developers have
accumulated the user requirements for the new system, they proceed to the system design stage.

A computer systems analyst is an occupation in the field of information technology. A computer systems analyst works to solve problems related to computer technology. Many analysts set up new computer systems, both the hardware and software, and add new software applications to increase computer productivity. Others act as system developers or system architects, but most analysts specialize in a specific type of system such as business systems, accounting systems, financial systems, or scientific systems.
As of 2011, the sectors employing the greatest numbers of computer systems analysts were state government, insurance,
computer system design, professional and commercial equipment, and company and enterprise management. The
number of jobs in this field is projected to grow from 487,000 as of 2009 to 650,000 by 2016.
This job ranked third best in a 2010 survey,[3] fifth best in the 2011 survey, ninth best in the 2012 survey and tenth best in the 2013 survey.[4]

In popular culture

The American humor publication The Onion consistently features a systems analyst on its (fake) panel of
"American Voices," a spoof of man-on-the-street journalism. The systems analyst's take on the issue at hand is
typically given next to two other individuals with absurd professions (such as "spelunking instructor").

In an episode of The Simpsons, "Separate Vocations", elementary school student Martin Prince is told that his
future career will be that of a systems analyst.

In the television show King of the Hill, Kahn Souphanousinphone is a systems analyst.

END

Systems Analyst/Developer


This position reports to the Chief Information Technology Officer, and will work closely with the
Enterprise Systems Architect and Network Support group, as well as 3rd party development
consultants.
The Information Technology Department manages the technology and computer infrastructure that drives the organization's business systems.
The IT department manages a network infrastructure that supports the national office and 7 regional offices.
The Systems Analyst/Developer position requires strong business skills and is responsible for reviewing, analyzing and occasionally modifying systems, including encoding, testing, debugging and installation, to support application systems.
The incumbent will consult with users to identify current operating procedures and to clarify program
objectives.
The incumbent will also be responsible for writing documentation describing custom configuration of applications and operating procedures, and for liaising with 3rd party application development consultants.
The position requires at least 7 years of experience in the field or in a related area. You must have a
working knowledge of relational databases, web and client-server concepts, and be able to rely on
experience and judgment to plan and accomplish goals.

Responsibilities:

Provide technical expertise and recommendations in assessing new IT software projects and initiatives to support and enhance our existing Microsoft-based systems.

Make recommendations on custom applications which include a number of MS-Access data
capture systems for Stewardship and other databases which need to be moved into a central SQL
repository.
Identify opportunities that can improve efficiency of business processes.
Investigate and resolve application functionality issues and provide first-level support and troubleshooting of our Financial Edge and Raiser's Edge systems.
Coordinate application development for multiple projects.
Assist in troubleshooting software application issues.
Assist in managing an outsource relationship for 3rd party application development and
programming consultants.
Assist network administrator with application installation and testing.
Troubleshoot technical issues and identify modifications needed in existing applications to meet
changing user requirements.
Analyze data contained in the corporate database and identify data integrity issues with existing
and proposed systems and implement solutions.
Provide assistance and advice to business users in the effective use of applications and information technology.
Provide minor programming for some in-house IT projects.
Provide SQL administration in live and test environments.
Write technical procedures and documentation for the applications including operations, user
guide, etc.
Produce technical documentation for new and existing applications.
Verify database and data integrity.
Participate in weekly meetings with the IT network team to discuss progress and issues to be
resolved, and report progress on a weekly basis to the CIO.
Participate on IT project steering committees and be involved in the design phase of any new IT
software development projects.
Assist in the creation of the system design and functional specifications for all new development
projects.
Serve as a liaison and facilitator between all business units to assist in addressing and resolving
IT software issues.
Qualifications:

Should have a minimum of 7 years of technology experience, with at least 5 years in hands-on technical roles in the field, relying on experience and judgment to plan and accomplish goals.
Extensive knowledge of data processing, hardware platforms, and enterprise software
applications.
Technical experience with systems networking, databases, Web development, and user support.
Good background in database design in Microsoft SQL Server and Access.
Background in Microsoft .NET, Visual Basic, Excel, Word, Outlook and HTML.
Good working knowledge skills with Microsoft Office Products, Microsoft Visio, and Microsoft
Project.
Working knowledge of Citrix MetaFrame XP, Blackbaud Raiser's Edge and Financial Edge would be an asset.
Strong project management skills with an effective results focus within an information systems environment.
Strong analytical and problem solving skills.
Experience in the development and implementation of standards, procedures and guidelines to
support operational processes.
Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities.
Proven ability to be flexible and work hard, both independently and in a team environment, in a
high pressure on-call environment with changing priorities.
Willingness to work occasionally outside of normal business hours.
Excellent English oral and written communication skills.
Post-secondary degree in computer science or a related field, or a combination of related experience and education.
A results-oriented individual who thrives working in a fast-paced environment.

END

Database
From Wikipedia, the free encyclopedia

A database is an organized collection of data.[1] The data is typically organized to model aspects of reality in a way that supports processes requiring information, for example, modelling the availability of rooms in hotels in a way that supports finding a hotel with vacancies.
Database management systems are computer software applications that interact with the user, other applications, and
the database itself to capture and analyze data. A general-purpose DBMS is designed to allow the definition, creation,
querying, update, and administration of databases. Well-known DBMSs include MySQL, PostgreSQL, Microsoft SQL
Server, Oracle, Sybase and IBM DB2. A database is not generally portable across different DBMSs, but different DBMS
can interoperate by using standards such as SQL and ODBC or JDBC to allow a single application to work with more than
one DBMS. Database management systems are often classified according to the database model that they support; the
most popular database systems since the 1980s have all supported the relational model as represented by
the SQL language. Sometimes a DBMS is loosely referred to as a 'database'.

Terminology and overview


Formally, a "database" refers to a set of related data and the way it is structured or organized. Access to this data is
usually provided by a "database management system" (DBMS) consisting of an integrated set of computer software that
allows users to interact with one or more databases and provides access to all of the data contained in the database
(although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry,
storage and retrieval of large quantities of information as well as provide ways to manage how that information is
organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database
and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used to refer to any collection of
related data (such as a spreadsheet or a card index). This article is concerned only with databases where the size and
usage requirements necessitate use of a database management system.[2]
Existing DBMSs provide various functions that allow management of a database and its data which can be classified into
four main functional groups:

Data definition: Creation, modification and removal of definitions that define the organization of the data.

Update: Insertion, modification, and deletion of the actual data.[3]

Retrieval: Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.[4]

Administration: Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.[5]
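
The first three groups can be sketched with Python's built-in sqlite3 module (the hotel-room table is invented, echoing the example above); administration is omitted because an embedded engine such as SQLite exposes little of it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stands in for a full DBMS for illustration

# Data definition: create (and later alter or drop) the organization of the data.
conn.execute("CREATE TABLE room (hotel TEXT, number INTEGER, vacant INTEGER)")

# Update: insert, modify, and delete the actual data.
conn.execute("INSERT INTO room VALUES ('Grand', 101, 1), ('Grand', 102, 1)")
conn.execute("UPDATE room SET vacant = 0 WHERE number = 101")

# Retrieval: return stored data directly, or in a new combined/derived form.
vacancies = conn.execute(
    "SELECT hotel, COUNT(*) FROM room WHERE vacant = 1 GROUP BY hotel").fetchall()
print(vacancies)  # [('Grand', 1)]
```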

Both a database and its DBMS conform to the principles of a particular database model.[6] "Database system" refers
collectively to the database model, database management system, and database. [7]
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related
software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for
stable storage. RAID is used for recovery of data if any of the disks fail. Hardware database accelerators, connected to
one or more servers via a high-speed channel, are also used in large volume transaction processing environments.
DBMSs are found at the heart of most database applications. DBMSs may be built around a
custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating
system to provide these functions. Since DBMSs comprise a significant economic market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or
XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to
access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability,
resilience, and security.

Applications
Databases are used to support internal operations of organizations and to underpin online interactions with customers
and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic
models. Examples of database applications include computerized library systems, flight reservation systems and
computerized parts inventory systems.
Application areas of DBMS
1. Banking: For customer information, accounts, loans, and banking transactions.
2. Airlines: For reservations and schedule information. Airlines were among the first to use databases in a geographically
distributed manner - terminals situated around the world accessed the central database system through phone lines and
other data networks.
3. Universities: For student information, course registrations, and grades.
4. Credit card transactions: For purchases on credit cards and generation of monthly statements.
5. Telecommunication: For keeping records of calls made, generating monthly bills, maintaining balances on prepaid
calling cards, and storing information about the communication networks.
6. Finance: For storing information about holdings, sales, and purchases of financial instruments such as stocks and
bonds.
7. Sales: For customer, product, and purchase information.
8. Manufacturing: For management of supply chain and for tracking production of items in factories, inventories of items in
warehouses / stores, and orders for items.
9. Human resources: For information about employees, salaries, payroll taxes and benefits, and for generation of
paychecks.

General-purpose and special-purpose DBMSs

A DBMS has evolved into a complex software system and its development typically requires thousands of person-years of
development effort.[8] Some general-purpose DBMSs such as Adabas, Oracle and DB2 have been undergoing upgrades
since the 1970s. General-purpose DBMSs aim to meet the needs of as many applications as possible, which adds to the
complexity. However, the fact that their development cost can be spread over a large number of users means that they are often the most cost-effective approach. Still, a general-purpose DBMS is not always the optimal solution: in some cases it may introduce unnecessary overhead. Therefore, there are many examples of systems that use special-purpose databases. A common example is an email system that performs many of the functions of a general-purpose DBMS, such as the insertion and deletion of messages composed of various items of data or associating messages with a particular email address; but these functions are limited to what is required to handle email and don't provide the user with all of the functionality that would be available using a general-purpose DBMS.
Many databases have application software that accesses the database on behalf of end-users, without exposing the
DBMS interface directly. Application programmers may use a wire protocol directly, or more likely through an application
programming interface. Database designers and database administrators interact with the DBMS through dedicated
interfaces to build and maintain the applications' databases, and thus need some more knowledge and understanding
about how DBMSs operate and the DBMSs' external interfaces and tuning parameters.

History
Following the technology progress in the areas of processors, computer memory, computer storage and computer
networks, the sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of
magnitude. The development of database technology can be divided into three eras based on data model or
structure: navigational,[9] SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model, epitomized by IBM's IMS system, and
the CODASYL model (network model), implemented in a number of products such as IDMS.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications
should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables,
each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow
the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems
dominated in all large-scale data processing applications, and as of 2015 they remain
dominant: Oracle, MySQL and SQL Server are the top DBMSs.[10] The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object-relational impedance mismatch,
which led to the coining of the term "post-relational" and also the development of hybrid object-relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing
fast key-value stores and document-oriented databases. A competing "next generation" known as NewSQL databases
attempted new implementations that retained the relational/SQL model while aiming to match the high performance of
NoSQL compared to commercially available relational DBMSs.

1960s, navigational DBMS


Further information: Navigational database
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the
mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive
use rather than daily batch processing. The Oxford English dictionary cites[11] a 1962 report by the System Development
Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s
a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman,
author of one such product, the Integrated Data Store (IDS), founded the "Database Task Group" within CODASYL, the
group responsible for the creation and standardization of COBOL. In 1971 the Database Task Group delivered their
standard, which generally became known as the "CODASYL approach", and soon a number of commercial products
based on this approach entered the market.

The CODASYL approach relied on the "manual" navigation of a linked data set which was formed into a large network.
Applications could find records by one of three methods:
1. Use of a primary key (known as a CALC key, typically implemented by hashing)
2. Navigating relationships (called sets) from one record to another
3. Scanning all the records in a sequential order
Later systems added B-Trees to provide alternate access paths. Many CODASYL databases also added a very
straightforward query language. However, in the final tally, CODASYL was very complex and required significant training
and effort to produce useful applications.
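
The three access methods map onto ordinary data structures. A toy Python sketch (record contents invented): a hash lookup stands in for the CALC key, stored links for set navigation, and iteration for the sequential scan.

```python
# Toy records linked into a CODASYL-style network (names invented).
records = {
    "C1": {"name": "Acme", "orders": ["O1", "O2"]},   # customer record
    "O1": {"item": "bolts", "owner": "C1"},
    "O2": {"item": "nuts",  "owner": "C1"},
}

# 1. Access by primary (CALC) key: a hash lookup, as with Python's dict.
acme = records["C1"]

# 2. Navigating a set from one record to another, following stored links.
acme_orders = [records[key] for key in acme["orders"]]

# 3. Scanning all the records in sequential order.
order_items = [rec["item"] for rec in records.values() if "item" in rec]

print(acme["name"], [o["item"] for o in acme_orders], order_items)
```
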
IBM also had their own DBMS in 1968, known as Information Management System (IMS). IMS was a development of
software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a
strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known
as navigational databases due to the way data was accessed, and Bachman's 1973 Turing Award presentation was The
Programmer as Navigator. IMS is classified as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use as of 2014.[12]

1970s, relational DBMS


Edgar Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the
development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the
lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction
that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.[13]
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in
some sort of linked list of free-form records as in CODASYL, Codd's idea was to use a "table" of fixed-length records, with
each table used for a different type of entity. A linked-list system would be very inefficient when storing "sparse" databases
where some of the data for any one record could be left empty. The relational model solved this by splitting the data into a
series of normalized tables (or relations), with optional elements being moved out of the main table to where they would
take up room only if needed. Data may be freely inserted, deleted and edited in these tables, with the DBMS doing
whatever maintenance is needed to present a table view to the application/user.
The relational model also allowed the content of the database to evolve without constant rewriting of links and pointers.
The relational part comes from entities referencing other entities in what is known as a one-to-many relationship, like a traditional hierarchical model, and a many-to-many relationship, like a navigational (network) model. Thus, a relational
model can express both hierarchical and navigational models, as well as its native tabular model, allowing for pure or
combined modeling in terms of these three models, as the application requires.
For instance, a common use of a database system is to track information about users, their name, login information,
various addresses and phone numbers. In the navigational approach all of these data would be placed in a single record,
and unused items would simply not be placed in the database. In the relational approach, the data would
be normalized into a user table, an address table and a phone number table (for instance). Records would be created in
these optional tables only if the address or phone numbers were actually provided.
Linking the information back together is the key to this system. In the relational model, some bit of information was used
as a "key", uniquely defining a particular record. When information was being collected about a user, information stored in
the optional tables would be found by searching for this key. For instance, if the login name of a user is unique, addresses
and phone numbers for that user would be recorded with the login name as its key. This simple "re-linking" of related data
back into a single collection is something that traditional computer languages are not designed for.
Just as the navigational approach would require programs to loop in order to collect records, the relational approach
would require loops to collect information about any one record. Codd's solution to the necessary looping was a set-oriented language, a suggestion that would later spawn the ubiquitous SQL. Using a branch of mathematics known
as tuple calculus, he demonstrated that such a system could support all the operations of normal databases (inserting,
updating etc.) as well as providing a simple system for finding and returning sets of data in a single operation.
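
In modern SQL terms, the key-based re-linking described above is a single set-oriented join rather than a program loop. A small sketch, assuming SQLite and invented tables:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Normalized tables: optional phone data lives in its own table,
# linked back to the user by the login name acting as the key.
conn.executescript("""
    CREATE TABLE user  (login TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE phone (login TEXT REFERENCES user(login), number TEXT);
    INSERT INTO user  VALUES ('jsmith', 'J. Smith');
    INSERT INTO phone VALUES ('jsmith', '555-0100'), ('jsmith', '555-0199');
""")

# One set-oriented statement "re-links" the related rows; no application loop.
rows = conn.execute("""
    SELECT user.name, phone.number
    FROM user JOIN phone ON phone.login = user.login
""").fetchall()
print(rows)  # [('J. Smith', '555-0100'), ('J. Smith', '555-0199')]
```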

Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project
known as INGRES using funding that had already been allocated for a geographical database project and student
programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for
widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data
access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both
now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System[14] based on D.L.
Childs' Set-Theoretic Data model.[15][16][17] Micro was used to manage very large data sets by the US Department of Labor,
the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan,
and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System.[18] The system
remained in production until 1998.

Integrated approach
Main article: Database machine
In the 1970s and 1980s attempts were made to build database systems with integrated hardware and software. The
underlying philosophy was that such integration would provide higher performance at lower cost. Examples were
IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller
with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized
database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus
most database systems nowadays are software systems running on general-purpose hardware, using general-purpose
computer data storage. However this idea is still pursued for certain applications by some companies like Netezza and
Oracle (Exadata).

Late 1970s, SQL DBMS


IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first
version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of
the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user
versions were tested by customers in 1978 and 1979, by which time a standardized query language, SQL, had
been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to
develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
Larry Ellison's Oracle started from a different chain, based on IBM's papers on System R, and beat IBM to market when
the first version was released in 1978.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as
PostgreSQL. PostgreSQL is often used for global mission critical applications (the .org and .info domain name registries
use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In
1984, this project was consolidated into an independent enterprise. In the early 1980s, Mimer introduced transaction
handling for high robustness in applications, an idea that was subsequently implemented on most other DBMSs.
Another data model, the entity-relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity-relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.

1980s, on the desktop


The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets
like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user
to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like
BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by
dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty

details of opening, reading, and closing files, and managing space allocation." [19] dBASE was one of the top selling
software titles in the 1980s and early 1990s.

1980s, object-oriented[edit]
The 1980s, along with a rise in object-oriented programming, saw a growth in how data in various databases were
handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a
person's data were in a database, that person's attributes, such as their address, phone number, and age, were now
considered to belong to that person instead of being extraneous data. This allows for relations between data to be
relations to objects and their attributes and not to individual fields. [20] The term "object-relational impedance mismatch"
described the inconvenience of translating between programmed objects and database tables. Object
databases and object-relational databases attempt to solve this problem by providing an object-oriented language
(sometimes as extensions to SQL) that programmers can use as alternative to purely relational SQL. On the programming
side, libraries known as object-relational mappings (ORMs) attempt to solve the same problem.

2000s, NoSQL and NewSQL[edit]


Main articles: NoSQL and NewSQL
The next generation of post-relational databases in the 2000s became known as NoSQL databases, including fast key-value stores and document-oriented databases.
XML databases are a type of structured document-oriented database that allows querying based on XML document
attributes. XML databases are mostly used in enterprise database management, where XML is being used as the
machine-to-machine data interoperability standard. XML database management systems include commercial
software MarkLogic and Oracle Berkeley DB XML, and the free-to-use software Clusterpoint Distributed XML/JSON Database.
All are enterprise software database platforms and support industry-standard ACID-compliant transaction processing with
strong database consistency characteristics and a high level of database security.[21][22][23]
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by
storing denormalized data, and are designed to scale horizontally. The most popular NoSQL systems
include MongoDB, Couchbase, Riak, Memcached, Redis, CouchDB, Hazelcast, Apache Cassandra and HBase,[24] which
are all open-source software products.
In recent years there has been a high demand for massively distributed databases with high partition tolerance, but according to
the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability and partition
tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For
that reason many NoSQL databases are using what is called eventual consistency to provide both availability and partition
tolerance guarantees with a reduced level of data consistency.
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL
systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID
guarantees of a traditional database system. Such databases
include ScaleBase, Clustrix, EnterpriseDB, MemSQL, NuoDB[25] and VoltDB.

Research[edit]
Database technology has been an active research topic since the 1960s, both in academia and in the research and
development groups of companies (for example IBM Research). Research activity includes theory and development
of prototypes. Notable research topics have included models, the atomic transaction concept and related concurrency
control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database
Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD,
ACM PODS, VLDB, IEEE ICDE).

Examples[edit]
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or
multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies,
banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface
type. This section lists a few of the adjectives used to characterize different kinds of databases.

An in-memory database is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main-memory databases are faster than disk databases, and so are often used where
response time is critical, such as in telecommunications network equipment.[26] SAP HANA is a prominent in-memory
database platform. By May 2012, HANA was able to run on servers with 100 TB of main memory, powered by IBM.
The company's co-founder claimed that the system was big enough to run the eight largest SAP customers.

An active database includes an event-driven architecture which can respond to conditions both inside and outside
the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many
databases provide active database features in the form of database triggers.
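As a minimal sketch of such a trigger (SQLite-flavored SQL; the table and trigger names are hypothetical), here is an active rule that records every balance change for later auditing:

CREATE TABLE accounts (
    id      INTEGER PRIMARY KEY,
    balance REAL NOT NULL
);
CREATE TABLE balance_audit (
    account_id  INTEGER,
    old_balance REAL,
    new_balance REAL,
    changed_at  TEXT
);
-- Fires automatically after any update of accounts.balance,
-- an event-driven response inside the database itself.
CREATE TRIGGER log_balance_change
AFTER UPDATE OF balance ON accounts
BEGIN
    INSERT INTO balance_audit
    VALUES (OLD.id, OLD.balance, NEW.balance, datetime('now'));
END;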

A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the
cloud", while its applications are developed by programmers and later maintained and used by the application's
end-users through a web browser and open APIs.

Data warehouses archive data from operational databases and often from external sources such as market
research firms. The warehouse becomes the central source of data for use by managers and other end-users who
may not have access to operational data. For example, sales data might be aggregated to weekly totals and
converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic
and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading
and managing data so as to make them available for further use.

A deductive database combines logic programming with a relational database, for example by using
the Datalog language.
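Datalog syntax is beyond the scope of this list, but the classic deductive example, deriving an "ancestor" relation from a stored "parent" relation, can be approximated in standard SQL with a recursive query (the table and column names are hypothetical):

-- Datalog would state:  ancestor(X, Y) :- parent(X, Y).
--                       ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
-- The same recursive inference in SQL:
WITH RECURSIVE ancestor(person, anc) AS (
    SELECT child, parent FROM parent_of            -- base facts
    UNION
    SELECT a.person, p.parent                      -- derived facts
    FROM ancestor a JOIN parent_of p ON p.child = a.anc
)
SELECT anc FROM ancestor WHERE person = 'alice';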

A distributed database is one in which both the data and the DBMS span multiple computers.

A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi-structured, information. Document-oriented databases are one of the main categories of NoSQL databases.

An embedded database system is a DBMS which is tightly integrated with application software that requires
access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no
ongoing maintenance.[27]

End-user databases consist of data developed by individual end-users. Examples of these are collections of
documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such
databases. Some of them are much simpler than full-fledged DBMSs, with more elementary DBMS functionality.

A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a
single database by a federated database management system (FDBMS), which transparently integrates multiple
autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system),
and provides them with an integrated conceptual view.

Sometimes the term multi-database is used as a synonym for federated database, though it may refer to a less
integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single
application. In this case typically middleware is used for distribution, which typically includes an atomic commit
protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating
databases.
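As a hedged sketch of how an atomic commit protocol surfaces at the SQL level, PostgreSQL exposes two-phase commit commands (the transaction identifier and table below are hypothetical, and a real deployment needs an external coordinator plus the max_prepared_transactions setting enabled):

-- Phase 1, on each participating database: do the work, then vote.
BEGIN;
UPDATE inventory SET qty = qty - 1 WHERE part_id = 42;
PREPARE TRANSACTION 'order-1001';  -- transaction is now durable but undecided
-- Phase 2, once every participant has prepared successfully:
COMMIT PREPARED 'order-1001';
-- If any participant failed to prepare, the coordinator instead issues:
-- ROLLBACK PREPARED 'order-1001';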

A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to
represent and store information. General graph databases that can store any graph are distinct from specialized
graph databases such as triplestores and network databases.

An array DBMS is a kind of NoSQL DBMS that allows one to model, store, and retrieve (usually large) multi-dimensional arrays such as satellite images and climate simulation output.

In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of
text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for
organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias,
where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext
database.

A knowledge base (abbreviated KB or kb)[28][29] is a special kind of database for knowledge management,
providing the means for the computerized collection, organization, and retrieval of knowledge. It can also be a
collection of data representing problems with their solutions and related experiences.

A mobile database can be carried on or synchronized from a mobile computing device.

Operational databases store detailed data about the operations of an organization. They typically process
relatively high volumes of updates using transactions. Examples include customer databases that record contact,
credit, and demographic information about a business' customers, personnel databases that hold information such as
salary, benefits, skills data about employees, enterprise resource planning systems that record details about product
components, parts inventory, and financial databases that keep track of the organization's money, accounting and
financial dealings.

A parallel database seeks to improve performance through parallelization for tasks such as loading data, building
indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:

Shared memory architecture, where multiple processors share the main memory space, as well as other
data storage.

Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its
own main memory, but all units share the other storage.

Shared nothing architecture, where each processing unit has its own main memory and other storage.

Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.

Real-time databases process transactions fast enough for the result to come back and be acted on right away.

A spatial database can store data with multidimensional features. Queries on such data include location-based queries, like "Where is the closest hotel in my area?".
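A minimal sketch of such a location-based query, in PostGIS-flavored SQL (the hotels table and the coordinates are hypothetical):

-- Nearest hotel to a given point (longitude, latitude),
-- using the PostGIS nearest-neighbour distance operator <->.
SELECT name
FROM hotels
ORDER BY geom <-> ST_SetSRID(ST_MakePoint(-73.99, 40.73), 4326)
LIMIT 1;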

A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL.
More specifically the temporal aspects usually include valid-time and transaction-time.
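A minimal sketch of a transaction-time query in SQL:2011 style (supported with variations by DB2, SQL Server, and others; the table is hypothetical):

-- Read the row as it stood on 1 January 2015, not as it stands now.
SELECT *
FROM employees FOR SYSTEM_TIME AS OF TIMESTAMP '2015-01-01 00:00:00'
WHERE emp_id = 7;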

A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.

An unstructured data database is intended to store in a manageable and protected way diverse objects that do not
fit naturally and conveniently in common databases. It may include email messages, documents, journals,
multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the
entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now
support unstructured data in various ways, and new dedicated DBMSs are emerging.

Design and modeling[edit]


Main article: Database design
The first task of a database designer is to produce a conceptual data model that reflects the structure of the
information to be held in the database. A common approach to this is to develop an entity-relationship model, often
with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model
will accurately reflect the possible state of the external world being modeled: for example, if people can have more
than one phone number, it will allow this information to be captured. Designing a good conceptual data model
requires a good understanding of the application domain; it typically involves asking deep questions about the things
of interest to an organisation, like "can a customer also be a supplier?", or "if a product is sold with two different forms
of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via
Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the
terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis
of workflow in the organization. This can help to establish what information is needed in the database, and what can
be left out. For example, it can help when deciding whether the database needs to hold historic data as well as
current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into
a schema that implements the relevant data structures within the database. This process is often called logical
database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual
data model is (in theory at least) independent of the choice of database technology, the logical data model will be
expressed in terms of a particular database model supported by the chosen DBMS. (The terms data
model and database model are often used interchangeably, but in this article we use data model for the design of a
specific database, and database model for the modelling notation used to express that design.)
The most popular database model for general-purpose databases is the relational model, or more precisely, the
relational model as represented by the SQL language. The process of creating a logical database design using this
model uses a methodical approach known as normalization. The goal of normalization is to ensure that each
elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain
consistency.
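As a minimal sketch of what normalization produces (standard SQL; the tables are hypothetical): instead of repeating customer details on every order row, each fact is stored once and referenced by key, so updating a phone number touches exactly one row.

-- Before: orders(order_id, customer_name, customer_phone, item, ...)
-- repeats the customer's details on every order.
-- After normalization:
CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name        VARCHAR(100) NOT NULL,
    phone       VARCHAR(20)
);
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customers(customer_id),
    order_date  DATE
);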
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and
the like. This is often called physical database design. A key goal during this stage is data independence, meaning
that the decisions made for performance optimization purposes should be invisible to end-users and applications.
Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected
workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as
well as defining security levels and methods for the data itself.

Models[edit]
Main article: Database model
A database model is a type of data model that determines the logical structure of a database and fundamentally
determines in which manner data can be stored, organized, and manipulated. The most popular example of a
database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:

Hierarchical database model

Network model

Relational model

Entity–relationship model

Enhanced entity–relationship model

Object model

Document model

Entity–attribute–value model

Star schema

An object-relational database combines the two related structures.


Physical data models include:

Inverted index

Flat file

Other models include:

Associative model

Multidimensional model

Array model

Multivalue model

Semantic model

XML database

External, conceptual, and internal views[edit]



A database management system provides three views of the database data:

The external level defines how each group of end-users sees the organization of data in the database. A single
database can have any number of views at the external level.

The conceptual level unifies the various external views into a compatible global view.[31] It provides the synthesis
of all the external views. It is out of the scope of the various database end-users, and is rather of interest to
database application developers and database administrators.

The internal level (or physical level) is the internal organization of data inside a DBMS (see Implementation
section below). It is concerned with cost, performance, scalability and other operational matters. It deals with
storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it
stores data of individual views (materialized views), computed from generic data, if performance justification
exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in
an attempt to optimize overall performance across all activities.

While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any
number of different external views. This allows users to see database information in a more business-related way
rather than from a technical, processing viewpoint. For example, the financial department of a company needs the
payment details of all employees as part of the company's expenses, but does not need details about employees that
are of interest to the human resources department. Thus different departments need different views of the
company's database.
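A minimal sketch of such a departmental external view, in standard SQL (the tables, columns, and role are hypothetical):

-- The finance department sees payment-related columns only;
-- HR-specific columns of the underlying table stay hidden.
CREATE VIEW finance_employees AS
SELECT emp_id, name, salary, bank_account
FROM employees;
GRANT SELECT ON finance_employees TO finance_dept;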
The three-level database architecture relates to the concept of data independence which was one of the major initial
driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a
higher level. For example, changes in the internal level do not affect application programs written using conceptual
level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On one hand it provides a common
view of the database, independent of different external view structures, and on the other hand it abstracts away
details of how the data is stored or managed (internal level). In principle every level, and even every external view,
can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the
external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and
depends on its implementation (see Implementation section below), requires a different level of detail and uses its
own types of data structures.
Separating the external, conceptual and internal levels was a major feature of the relational database model
implementations that dominate 21st century databases.[31]

Languages[edit]
Database languages are special-purpose languages, which do one or more of the following:

Data definition language defines data types and the relationships among them

Data manipulation language performs tasks such as inserting, updating, or deleting data occurrences

Query language allows searching for information and computing derived information

Database languages are specific to a particular data model. Notable examples include:

SQL combines the roles of data definition, data manipulation, and query in a single language (see the example
after this list). It was one of the first commercial languages for the relational model, although it departs in some
respects from the relational model as described by Codd (for example, the rows and columns of a table can be
ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the
International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since
and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.[32][33]

OQL is an object model language standard (from the Object Data Management Group). It has influenced the
design of some of the newer query languages like JDOQL and EJB QL.

XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist,
by relational databases with XML capability such as Oracle and DB2, and also by in-memory XML processors
such as Saxon.

SQL/XML combines XQuery with SQL.[34]
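The following minimal sketch shows SQL playing all three roles named above, definition, manipulation, and query, in one script (standard SQL; the table is hypothetical):

-- Data definition, including a simple domain constraint:
CREATE TABLE cars (
    car_id      INTEGER PRIMARY KEY,
    model       VARCHAR(50) NOT NULL,
    engine_type VARCHAR(10) NOT NULL
        CHECK (engine_type IN ('petrol', 'diesel', 'electric'))
);
-- Data manipulation:
INSERT INTO cars (car_id, model, engine_type)
VALUES (1, 'Roadster', 'electric');
-- Query with a derived computation:
SELECT engine_type, COUNT(*) AS how_many
FROM cars
GROUP BY engine_type;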

A database language may also incorporate features like:

DBMS-specific configuration and storage engine management

Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing

Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)

Application programming interface version of the query language, for programmer convenience

Performance, security, and availability[edit]


Because of the critical importance of database technology to the smooth running of an enterprise, database systems
include complex mechanisms to deliver the required performance, security, and availability, and allow database
administrators to control the use of these features.

Storage[edit]
Main articles: Computer data storage and Database engine
Database storage is the container of the physical materialization of a database. It comprises
the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata,
"data about the data", and internal data structures) to reconstruct the conceptual level and external level from the
internal level when needed. Putting data into permanent storage is generally the responsibility of the database
engine a.k.a. "storage engine". Though typically accessed by a DBMS through the underlying operating system (and
often utilizing the operating system's file systems as intermediates for storage layout), storage properties and
configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained
by database administrators. A DBMS, while in operation, always has its database residing in several types of storage
(e.g., memory and external storage). The database data and the additional needed information, possibly in very large
amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the
way the data look in the conceptual and external levels, but in ways that attempt to optimize these levels'
reconstruction as well as possible when needed by users and programs, as well as the computation of additional
types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be
used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be
written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional
storage is row-oriented, but there are also column-oriented and correlation databases.
Materialized views[edit]
Main article: Materialized view
Often storage redundancy is employed to increase performance. A common example is storing materialized views,
which consist of frequently needed external views or query results. Storing such views saves the expensive
computing of them each time they are needed. The downsides of materialized views are the overhead incurred when
updating them to keep them synchronized with their original updated database data, and the cost of storage
redundancy.
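A minimal PostgreSQL-flavored sketch (the sales table is hypothetical):

-- Cache an expensive weekly aggregate once, instead of recomputing it per query.
CREATE MATERIALIZED VIEW weekly_sales AS
SELECT date_trunc('week', sold_at) AS week, SUM(amount) AS total
FROM sales
GROUP BY 1;
-- Reads are now cheap:
SELECT * FROM weekly_sales WHERE week = DATE '2015-06-01';
-- The trade-off: the view must be refreshed to track the base table.
REFRESH MATERIALIZED VIEW weekly_sales;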
Replication[edit]
Main article: Database replication
Occasionally a database employs storage redundancy by database objects replication (with one or more copies) to
increase data availability (both to improve performance of simultaneous multiple end-user accesses to a same
database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a
replicated object need to be synchronized across the object copies. In many cases the entire database is replicated.

Security[edit]
Database security deals with all various aspects of protecting the database content, its owners, and its users. It
ranges from protection from intentional unauthorized database uses to unintentional database accesses by
unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access
what information in the database. The information may comprise specific database objects (e.g., record types, specific
records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or utilizing
specific access paths to the former (e.g., using specific indexes or other data structures to access information).

Database access controls are set by personnel specially authorized by the database owner, using dedicated
protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or
(in the most elaborate models) through the assignment of individuals and groups to roles which are then granted
entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords,
users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee
database can contain all the data about an individual employee, but one group of users may be authorized to view
only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way
to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal
databases.
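A minimal sketch of the payroll/medical split described above, using role-based grants (PostgreSQL-flavored SQL; all table, column, role, and user names are hypothetical):

CREATE ROLE payroll_clerk;
CREATE ROLE hr_staff;
-- Column-level privileges carve "subschemas" out of one table:
GRANT SELECT (emp_id, name, salary)                 ON employees TO payroll_clerk;
GRANT SELECT (emp_id, name, work_history, medical)  ON employees TO hr_staff;
-- Individuals acquire privileges through role membership:
CREATE ROLE alice LOGIN;
GRANT payroll_clerk TO alice;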
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption,
destruction, or removal; e.g., see physical security) and in terms of their interpretation, preventing parts of the data
from being turned into meaningful information (e.g., by looking at the strings of bits that they comprise and
concluding specific valid credit-card numbers; e.g., see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed.
Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes.
Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can
be set up to attempt to detect security breaches.

Transactions and concurrency[edit]


Further information: Concurrency control
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from
a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database
(e.g., reading a database object, writing, acquiring a lock, etc.), an abstraction supported in databases and also in other
systems. Each transaction has well defined boundaries in terms of which program/code executions are included in
that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: Atomicity, Consistency, Isolation,
and Durability.
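A minimal sketch of a transaction's well-defined boundaries in SQL (the accounts table is hypothetical): the transfer below either happens entirely or not at all (atomicity), and is durable once COMMIT returns.

BEGIN;                                                     -- transaction starts
UPDATE accounts SET balance = balance - 100 WHERE id = 1;  -- debit
UPDATE accounts SET balance = balance + 100 WHERE id = 2;  -- credit
COMMIT;                                                    -- both changes become permanent together
-- A crash or ROLLBACK before COMMIT leaves both rows unchanged.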

Migration[edit]
See also section Database migration in article Data migration
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in
some situations it is desirable to move (migrate) a database from one DBMS to another. The reasons are primarily
economic (different DBMSs may have different total costs of ownership, or TCOs), functional, and operational
(different DBMSs may have different capabilities). The migration involves the database's transformation from one
DBMS type to another. The transformation should maintain (if possible) the database related application (i.e., all
related application programs) intact. Thus, the database's conceptual and external architectural levels should be
maintained in the transformation. It may also be desirable to maintain some aspects of the internal architectural
level. A complex or large database migration may be a complicated and costly (one-time) project by itself,
which should be factored into the decision to migrate. This is true even though tools may exist to help
migration between specific DBMSs. Typically a DBMS vendor provides tools to help import databases from
other popular DBMSs.

Building, maintaining, and tuning[edit]


Main article: Database tuning
After designing a database for an application, the next stage is building the database. Typically an
appropriate general-purpose DBMS can be selected to be utilized for this purpose. A DBMS provides the
needed user interfaces to be utilized by database administrators to define the needed application's data
structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS
parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically
populated with the application's initial data (database initialization, which is typically a distinct project, in many
cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases the
database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialised, and populated, it needs to be maintained. Various database parameters
may need changing and the database may need tuning for better performance; the application's data
structures may be changed or added to, new related application programs may be written to add to the application's
functionality, etc.

Backup and restore[edit]


Main article: Backup
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the
database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve
this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values
of its data and their embedding in database's data structures) is kept within dedicated backup files (many
techniques exist to do this effectively). When this state is needed, i.e., when it is decided by a database
administrator to bring the database back to this state (e.g., by specifying this state by a desired point in time when
the database was in this state), these files are utilized to restore that state.

Other[edit]
Other DBMS features might include:

Database logs

Graphics component for producing graphs and charts, especially in a data warehouse system

Query optimizer – performs query optimization on every query to choose the most efficient query
plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a
particular storage engine.
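Most SQL DBMSs expose the optimizer's chosen plan for inspection; a minimal PostgreSQL-flavored sketch (the tables are hypothetical):

-- EXPLAIN shows the plan (join order, index vs. sequential scans)
-- the optimizer selected for this query, without executing it.
EXPLAIN
SELECT o.order_id, c.name
FROM orders o
JOIN customers c ON c.customer_id = o.customer_id
WHERE c.name = 'Acme';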

Tools or hooks for database design, application programming, application program maintenance, database
performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a
DBMS and related database may span computers, networks, and storage units) and related database
mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage
migration, etc.

End
Feasibility study
From Wikipedia, the free encyclopedia
For other uses, see Feasibility study (disambiguation).

The feasibility study is an evaluation and analysis of the potential of a proposed project. It
is based on extensive investigation and research to support the process of decision making.
Contents

1 Overview

2 Feasibility study topics

o 2.1 Common factors


o 2.2 Other feasibility factors
o 2.3 Market research study and analysis

3 See also

4 References

5 Further reading

6 External links

Overview[edit]
Feasibility studies aim to objectively and rationally uncover the strengths and weaknesses of
an existing business or proposed venture, opportunities and threats present in
the environment, the resources required to carry through, and ultimately the prospects for
success.[1][2] In its simplest terms, the two criteria to judge feasibility are cost required
and value to be attained.[3]
A well-designed feasibility study should provide a historical background of the business or
project, a description of the product or service, accounting statements, details of
the operations and management, marketing research and policies, financial data, legal
requirements and tax obligations.[1] Generally, feasibility studies precede technical
development and project implementation.
A feasibility study evaluates the project's potential for success; therefore, perceived
objectivity is an important factor in the credibility of the study for potential investors and
lending institutions.[citation needed] It must therefore be conducted with an objective, unbiased
approach to provide information upon which decisions can be based.[citation needed]
Feasibility study topics[edit]
Common factors[edit]
The acronym TELOS refers to the five areas of feasibility - Technical, Economic, Legal,
Operational, and Scheduling.
Technology and system feasibility
The assessment is based on an outline design of system requirements, to determine whether
the company has the technical expertise to handle completion of the project. When writing a
feasibility report, the following should be taken into consideration:

A brief description of the business to assess more possible factors which could affect
the study

The part of the business being examined

The human and economic factor

The possible solutions to the problem

At this level, the concern is whether the proposal is both technically and legally feasible
(assuming moderate cost).
Legal Feasibility
Determines whether the proposed system conflicts with legal requirements, e.g. a data
processing system must comply with the local Data Protection Acts.
Operational Feasibility
Operational feasibility is a measure of how well a proposed system solves the problems, and
takes advantage of the opportunities identified during scope definition and how it satisfies
the requirements identified in the requirements analysis phase of system development.[4]
The operational feasibility assessment focuses on the degree to which the proposed
development project fits in with the existing business environment and objectives with
regard to development schedule, delivery date, corporate culture, and existing business
processes.
To ensure success, desired operational outcomes must be imparted during design and
development. These include design-dependent parameters such as reliability,
maintainability, supportability, usability, producibility, disposability, sustainability,
affordability and others. These parameters are required to be considered at the early stages
of design if desired operational behaviors are to be realized. A system design and
development requires appropriate and timely application of engineering and management
efforts to meet the previously mentioned parameters. A system may serve its intended
purpose most effectively when its technical and operating characteristics are engineered into
the design. Therefore operational feasibility is a critical aspect of systems engineering that
needs to be an integral part of the early design phases.[5]
Economic Feasibility
The purpose of the economic feasibility assessment is to determine the positive economic
benefits to the organization that the proposed system will provide. It includes quantification
and identification of all the benefits expected. This assessment typically involves a cost/
benefits analysis.
Technical Feasibility
The technical feasibility assessment is focused on gaining an understanding of the present
technical resources of the organization and their applicability to the expected needs of the
proposed system. It is an evaluation of the hardware and software and how they meet the needs
of the proposed system.[6]
Schedule Feasibility
A project will fail if it takes too long to complete and is no longer useful by the time it is
delivered. Typically this means estimating how long the system will take to develop, and
whether it can be completed in a given time period, using methods like the payback period.
Schedule feasibility is a measure of how reasonable the project timetable is. Given the
available technical expertise, are the project deadlines reasonable? Some projects are
initiated with specific deadlines. It is necessary to
determine whether the deadlines are mandatory or desirable.
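As a brief illustration of the payback-period calculation mentioned above (the figures are hypothetical):

payback period = initial investment / annual net cash inflow

For a system costing $100,000 that saves $25,000 per year, the payback period is 100,000 / 25,000 = 4 years.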
Other feasibility factors[edit]
Market and real estate feasibility
Market feasibility studies typically involve testing geographic locations for a real estate
development project, and usually involve parcels of real estate land. Developers often
conduct market studies to determine the best location within a jurisdiction, and to test
alternative land uses for given parcels. Jurisdictions often require developers to complete
feasibility studies before they will approve a permit application for a retail, commercial,
industrial, manufacturing, housing, office or mixed-use project. Market feasibility takes into
account the importance of the business in the selected area.
Resource feasibility
This involves questions such as how much time is available to build the new system, when it
can be built, whether it interferes with normal business operations, type and amount of
resources required, dependencies, and developmental procedures with company revenue
prospectus.
Cultural feasibility
In this stage, the project's alternatives are evaluated for their impact on the local and
general culture. For example, environmental factors need to be considered and these factors
are to be well known. Further an enterprise's own culture can clash with the results of the
project.
Financial feasibility study
In case of a new project, financial viability can be judged on the following parameters:

Total estimated cost of the project

Financing of the project in terms of its capital structure, debt equity ratio and
promoter's share of total cost

Existing investment by the promoter in any other business

Projected cash flow and profitability

The financial viability of a project should provide the following information: [citation needed]

Full details of the assets to be financed and how liquid those assets are.

Rate of conversion to cash-liquidity (i.e. how easily can the various assets be converted
to cash?).

Project's funding potential and repayment terms.

Sensitivity in the repayments capability to the following factors:

Time delays.

Mild slowing of sales.

Acute reduction/slowing of sales.

Small increase in cost.

Large increase in cost.

Adverse economic conditions.

Market research study and analysis[edit]


This is one of the most important sections of the feasibility study as it examines the
marketability of the product or services and convinces readers that there is a potential
market for the product or services.[citation needed] If a significant market for the product or
services cannot be established, then there is no project.
Typically, market studies will assess the potential sales of the product, absorption and market
capture rates and the project's timing.[citation needed]
The feasibility study outputs the feasibility study report, a report detailing the evaluation
criteria, the study findings, and the recommendations.[7]
End
Factors Contributing to Failures
Many times an MIS is a failure. The common factors which are responsible for this are listed as follows:

The MIS is conceived as a data processing system and not as an information processing system.

The MIS does not provide the information needed by the managers; it tends to provide the information
that the function generally calls for. The MIS then becomes an impersonal system.

Underestimating the complexity in the business systems and not recognizing it in the MIS design leads to problems in
the successful implementation.

Adequate attention is not given to the quality control aspects of the inputs, the process and the outputs leading to
insufficient checks and controls in the MIS.

The MIS is developed without streamlining the transaction processing systems in the organization.

Lack of training and appreciation that the users of the information and the generators of the data are different, and
they have to play an important responsible role in the MIS.

The MIS does not meet certain critical and key needs of its users, such as responsiveness to queries on the database,
an inability to get the processing done in a particular manner, the lack of a user-friendly system, and dependence on
system personnel.

A belief that the computerized MIS can solve all the management problems of planning and control of the business.

Lack of administrative discipline in following the standardized systems and procedures, wrong coding and deviating
from the system specifications result in incomplete and incorrect information.

The MIS does not give perfect information to all the users in the organization.

End
PERT CHART
What it is:
A PERT chart is a graphic representation of a project's schedule, showing the sequence of
tasks, which tasks can be performed
simultaneously, and the critical path of tasks that must be completed on time in order for the
project to meet its completion deadline. The
chart can be constructed with a variety of attributes, such as earliest and latest start dates
for each task, earliest and latest finish dates for
each task, and slack time between tasks. A PERT chart can document an entire project or a
key phase of a project. The chart allows a
team to avoid unrealistic timetables and schedule expectations, to help identify and shorten
tasks that are bottlenecks, and to focus
attention on most critical tasks.
When to use it:
Because it is primarily a project-management tool, a PERT chart is most useful for planning
and tracking entire projects or for
scheduling and tracking the implementation phase of a planning or improvement effort.
How to use it:
Identify all tasks or project components. Make sure the team includes people with
firsthand knowledge of the project so that
during the brainstorming session all component tasks needed to complete the project are
captured. Document the tasks on small note
cards.
Identify the first task that must be completed. Place the appropriate card at the
extreme left of the working surface.
Identify any other tasks that can be started simultaneously with task #1. Align
these tasks either above or below task #1
on the working surface.
Identify the next task that must be completed. Select a task that must wait to begin
until task #1 (or a task that starts simultaneously with task #1) is completed. Place the
appropriate card to the right of the card showing the preceding task.
Identify any other tasks that can be started simultaneously with task #2. Align
these tasks either above or below task #2
on the working surface.
Continue this process until all component tasks are sequenced.
End

PERT
Complex projects require a series of activities, some of which must be performed sequentially
and others that can be performed in parallel with other activities. This collection of series and
parallel tasks can be modeled as a network.
In 1957 the Critical Path Method (CPM) was developed as a network model for project
management. CPM is a deterministic method that uses a fixed time estimate for each activity.
While CPM is easy to understand and use, it does not consider the time variations that can
have a great impact on the completion time of a complex project.
The Program Evaluation and Review Technique (PERT) is a network model that allows for
randomness in activity completion times. PERT was developed in the late 1950s for the U.S.
Navy's Polaris project, which had thousands of contractors. It has the potential to reduce both the
time and cost required to complete a project.
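As a brief illustration of how PERT models that randomness (the classic three-point estimate based on the beta distribution; the numbers below are hypothetical): each activity gets an optimistic (o), most likely (m), and pessimistic (p) time estimate, combined as

expected time       t_e = (o + 4m + p) / 6
standard deviation  s   = (p - o) / 6

For an activity with o = 2, m = 4, and p = 12 days, t_e = (2 + 16 + 12) / 6 = 5 days, so the schedule allows 5 days rather than the most likely 4.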
End

DEFINITION OF 'PERT CHART'


A project management tool that provides a graphical representation of a project's timeline.
PERT, or Program Evaluation Review Technique, was developed by the United States Navy for
the Polaris submarine missile program in the 1950s. PERT charts allow the tasks in a
particular project to be analyzed, with particular attention to the time required to complete
each task, and the minimum time required to finish the entire project.

INVESTOPEDIA EXPLAINS 'PERT CHART'


A PERT chart is a graph that represents all of the tasks necessary to a project's completion,
and the order in which they must be completed along with the corresponding time
requirements. Certain tasks are dependent on serial tasks, which must be completed in a
certain sequence. Tasks that are not dependent on the completion of other tasks are called
parallel or concurrent tasks and can generally be worked on simultaneously. PERT charts are
preferable to Gantt charts because they more clearly identify task dependencies; however,

the PERT chart is often more challenging to interpret. As such, project managers frequently
employ both methodologies.
End
What is a Gantt Chart?
A Gantt chart is a timeline view that makes it easy to see how a project is tracking. You can
visualize project tasks and see how they relate to each other as projects progress over time.
Use this tool to simplify your tasks and details with a visual project timeline by transforming
task names, start dates, durations, and end dates into cascading horizontal bar charts.
With a Gantt you can plan out all of your tasks, so complex projects are manageable and
easy to tackle. You can use a Gantt to figure out the minimum delivery time for your project
and to schedule the right people when they're available to get your project finished
efficiently.
End
Gantt Chart

A Gantt chart is a vital tool for any project manager. It helps you create a schedule for your
project and track the status of each task. There are hundreds of tools for creating gantt
charts, some far more complex than others. If you have Excel, you can create a project
schedule with almost no learning curve by downloading Vertex42's free Gantt Chart
Template.
For complicated project management activities, you may need a tool such as Microsoft
Project. But, if you want to create a simple project schedule quickly and easily, you only
need basic Excel skills to use this template (such as knowing how to copy and insert rows).
End
What is a Gantt chart?
A Gantt chart, commonly used in project management, is one of the most popular and useful
ways of showing activities (tasks or events) displayed against time. On the left of the chart is
a list of the activities and along the top is a suitable time scale. Each activity is represented
by a bar; the position and length of the bar reflects the start date, duration and end date of
the activity. This allows you to see at a glance:

What the various activities are

When each activity begins and ends

How long each activity is scheduled to last

Where activities overlap with other activities, and by how much

The start and end date of the whole project

To summarize, a Gantt chart shows you what has to be done (the activities) and when (the
schedule).
End
A Gantt chart is a type of bar chart, first developed by Karol Adamiecki in 1896, and
independently by Henry Gantt in the 1910s, that illustrates a project schedule. Gantt charts
illustrate the start and finish dates of the terminal elements and summary elements of
a project. Terminal elements and summary elements comprise the work breakdown
structure of the project. Modern Gantt charts also show the dependency (i.e., precedence
network) relationships between activities. Gantt charts can be used to show current schedule
status using percent-complete shadings and a vertical "TODAY" line.
Although now regarded as a common charting technique, Gantt charts were considered
revolutionary when first introduced.[1] This chart is also used in information technology to
represent data that have been collected.
End
Intranet
This is a network that is not available to the world outside of the Intranet. If the Intranet
network is connected to the Internet, the Intranet will reside behind a firewall and, if it allows
access from the Internet, will be an Extranet. The firewall helps to control access between the
Intranet and Internet to permit access to the Intranet only to people who are members of the
same company or organisation.
In its simplest form, an Intranet can be set up on a networked PC without any PC on the
network having access via the Intranet network to the Internet.
For example, consider an office with a few PCs and a few printers all networked together. The
network would not be connected to the outside world. On one of the drives of one of the PCs
there would be a directory of web pages that comprise the Intranet. Other PCs on the
network could access this Intranet by pointing their browser (Netscape or Internet Explorer)
to this directory - for example
U:\inet\index.htm.
From then onwards they would navigate around the Intranet in the same way as they would
get around the Internet.
Extranet
An Extranet is actually an Intranet that is partially accessible to authorised outsiders. The
actual server (the computer that serves up the web pages) will reside behind a firewall. The
firewall helps to control access between the Intranet and Internet permitting access to the
Intranet only to people who are suitably authorised. The level of access can be set to
different levels for individuals or groups of outside users. The access can be based on a
username and password or an IP address (a unique set of numbers such as 209.33.27.100
that defines the computer that the user is on).

End
EXTRANET
An extranet is an extension of the information system of the company to its partners located
outside of the network.
Access to the extranet must be secured to the extent that the same provides access to the
information system for persons located outside of the enterprise.
This might involve simple authentication (authentication via user name and password) or
strong authentication (authentication via a certificate). It is recommended to use HTTPS for
all web pages that are consulted from the outside to secure the transport of HTTP queries and
answers and to prevent, in particular, the open transfer of the password on the network.
An extranet is therefore neither an Intranet nor an Internet site. It is rather a supplementary
system providing, for example, the clients of an enterprise, its partners or its subsidiaries
with privileged access to certain computer resources of the enterprise via a Web interface.

Intranet
An intranet is a set of Internet services (for example a web server) inside a local network, i.e.
only accessible from workstations of a local network, or rather a set of well-defined networks
that are invisible (or inaccessible) from the outside. It involves the use of Internet client-server
standards and protocols (using TCP/IP), such as Web browsers (HTTP protocol-based clients)
and Web servers (HTTP protocol), to create an information system inside an organization or enterprise.

Intranet/extranet system
An intranet is generally based on a three-tier architecture, comprising:

clients (generally Web browsers);

one or several application servers (middleware): a web server which makes it possible to
interpret CGI, PHP, ASP or other scripts and translate them into SQL queries to query a
database;

a database server.
End
EXTRANET
An extranet is a computer network that allows controlled access from outside of an
organization's intranet. Extranets are used for specific use cases including business-to-business
(B2B). In a business-to-business context, an extranet can be viewed as an extension
of an organization's intranet that is extended to users outside the organization, usually
partners, vendors and suppliers, in isolation from all other Internet users. It is in the context of
that isolation that an extranet is different from an intranet or the Internet. In contrast,
business-to-consumer (B2C) models involve known servers of one or more companies, communicating
with previously unknown consumer users. An extranet is similar to a DMZ in that it provides
access to needed services for channel partners, without granting access to an organization's
entire network.
Relationship to an intranet An extranet could be understood as an intranet mapped onto the
public Internet or some other transmission system not accessible to the general public, but
managed by more than one company's administrator(s). For example, military networks of
different security levels may map onto a common military radio transmission system that
never connects to the Internet. Any private network mapped onto a public one is a virtual
private network (VPN), often using special security protocols.
For decades, institutions have been interconnecting to each other to create private networks
for sharing information. One of the differences that characterizes an extranet, however, is
that its interconnections are over a shared network rather than through dedicated physical
lines. With respect to Internet Protocol networks, RFC 4364 states "If all the sites in a VPN are
owned by the same enterprise, the VPN is a corporate intranet. If the various sites in a VPN
are owned by different enterprises, the VPN is an extranet. A site can be in more than one
VPN; e.g. in an intranet and several extranets. We regard both intranets and extranets as
VPNs. In general, when we use the term VPN we will not be distinguishing between intranets
and extranets." Even if this argument is valid, the term "extranet" is still applied and can be
used to eliminate the use of the above description.
In the quote above from RFC 4364, the term "site" refers to a distinct networked
environment. Two sites connected to each other across the public Internet backbone
comprise a VPN. The term "site" does not mean "website." Thus, a small company in a single
building can have an "intranet," but to have a VPN, they would need to provide tunneled
access to that network for geographically distributed employees.
Similarly, for smaller, geographically united organizations, "extranet" is a useful term to
describe selective access to intranet systems granted to suppliers, customers, or other
companies. Such access does not involve tunneling, but rather simply an authentication
mechanism to a web server. In this sense, an "extranet" designates the "private part" of a
website, where "registered users" can navigate, enabled by authentication mechanisms on a
"login page".
An extranet requires network security. These can include firewalls, server management, the
issuance and use of digital certificates or similar means of user authentication, encryption of
messages and the use of virtual private networks (VPNs) that tunnel through the public
network.
Many technical specifications describe methods of implementing extranets, but often never
explicitly define an extranet. RFC 3457 [1] presents requirements for remote access to
extranets. RFC 2709 [2] discusses extranet implementation using IPsec and advanced
network address translation (NAT).

INTRANET
An intranet is a computer network that uses Internet Protocol technology to share
information, operational systems, or computing services within an organization. This term is
used in contrast to extranet, a network between organizations, and instead refers to a
network within an organization. Sometimes, the term refers only to the organization's internal
website, but may be a more extensive part of the organization's information technology
infrastructure, and may be composed of multiple local area networks. The objective is to
organize each individual's desktop with minimal cost, time and effort to be more productive,
cost efficient, timely, and competitive.
An intranet may host multiple private websites and constitute an important component and
focal point of internal communication and collaboration. Any of the well-known Internet
protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP
(file transfer protocol). Internet technologies are often deployed to provide modern interfaces
to legacy information systems hosting corporate data.
An intranet can be understood as a private analog of the Internet, or as a private extension of
the Internet confined to an organization. The first intranet websites and home pages were
published in 1991,[1][2] and began to appear in non-educational organizations in 1994.[3]
Intranets are sometimes contrasted to extranets. While intranets are generally restricted to
employees of the organization, extranets may also be accessed by customers, suppliers, or
other approved parties.[4] Extranets extend a private network onto the Internet with special
provisions for authentication, authorization and accounting (AAA protocol).
In many organizations, intranets are protected from unauthorized external access by means
of a network gateway and firewall. For smaller companies, intranets may be created simply
by using private IP address ranges. In these cases, the intranet can only be directly accessed
from a computer in the local network; however, companies may provide access to off-site
employees by using a virtual private network, or by other access methods, requiring user
authentication and encryption.
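As a small illustration of such private addressing, the RFC 1918 ranges typically used for intranets (10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16) can be recognised with Python's standard ipaddress module; note that is_private is also true for a few other reserved ranges, such as loopback addresses.

# Test whether addresses fall in a private (intranet-only) range.
# is_private is True for the RFC 1918 blocks 10.0.0.0/8, 172.16.0.0/12
# and 192.168.0.0/16 (and for some other reserved ranges, e.g. loopback).
import ipaddress

for addr in ["10.1.2.3", "172.20.0.7", "192.168.0.10", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True / 172.20.0.7 True / 192.168.0.10 True / 8.8.8.8 False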
End

Introduction

F.M
The financial manager plays a dynamic role in a modern company's development. This has
not always been the case. Until around the first half of the 1900s, financial managers
primarily raised funds and managed their firms' cash positions, and that was pretty much it.
In the 1950s, the increasing acceptance of present value concepts encouraged financial
managers to expand their responsibilities and to become concerned with the selection of
capital investment projects.
Today, external factors have an increasing impact on the financial manager. Heightened
corporate competition, technological change, volatility in inflation and interest rates,
worldwide economic uncertainty, fluctuating exchange rates, tax law changes, and ethical
concerns over certain financial dealings must be dealt with almost daily. As a result, finance
is required to play an ever more vital strategic role within the corporation. The financial
manager has emerged as a team player in the overall effort of a company to create value.
The old ways of doing things simply are not good enough in a world where old ways quickly
become obsolete. Thus today's financial manager must have the flexibility to adapt to the
changing external environment if his or her firm is to survive.
The successful financial manager of tomorrow will need to supplement the traditional
metrics of performance with new methods that encourage a greater role for uncertainty
and multiple assumptions. These new methods will seek to value the flexibility inherent in
initiatives, that is, the way in which taking one step offers you the option to stop or continue
down one or more paths. In short, a correct decision may involve doing something today
that in itself has small value, but gives you the option to do something of greater value in
the future.
If you become a financial manager, your ability to adapt to change, raise funds, invest in
assets, and manage wisely will affect the success of your firm and, ultimately, the overall
economy as well. To the extent that funds are misallocated, the growth of the economy will
be slowed. When economic wants are unfulfilled, this misallocation of funds may work to the
detriment of society. In an economy, efficient allocation of resources is vital to optimal growth
in that economy; it is also vital to ensuring that individuals obtain satisfaction of their highest
levels of personal wants. Thus, through efficiently acquiring, financing, and managing assets,
the financial manager contributes to the firm and to the vitality and growth of the economy
as a whole.
What Is Financial Management?
Financial management is concerned with the acquisition, financing, and management of
assets with some overall goal in mind. Thus the decision function of financial management
can be broken down into three major areas: the investment, financing, and asset
management decisions.
Investment Decision
The investment decision is the most important of the firm's three major decisions when it
comes to value creation. It begins with a determination of the total amount of assets needed
to be held by the firm. Picture the firm's balance sheet in your mind for a moment. Imagine
liabilities and owners' equity being listed on the right side of the balance sheet and its assets
on the left. The financial manager needs to determine the dollar amount that appears above
the double lines on the left-hand side of the balance sheet, that is, the size of the firm. Even
when this number is known, the composition of the assets must still be decided. For example,
how much of the firm's total assets should be devoted to cash or to inventory? Also, the flip
side of investment, disinvestment, must not be ignored. Assets that can no longer be
economically justified may need to be reduced, eliminated, or replaced.
Financing Decision
The second major decision of the firm is the financing decision. Here the financial manager
is concerned with the makeup of the right-hand side of the balance sheet. If you look at the
mix of financing for firms across industries, you will see marked differences. Some firms
have relatively large amounts of debt, whereas others are almost debt free. Does the type of
financing employed make a difference? If so, why? And, in some sense, can a certain mix
of financing be thought of as best?
In addition, dividend policy must be viewed as an integral part of the firm's financing
decision. The dividend-payout ratio determines the amount of earnings that can be retained
in the firm. Retaining a greater amount of current earnings in the firm means that fewer
dollars will be available for current dividend payments. The value of the dividends paid to
stockholders must therefore be balanced against the opportunity cost of retained earnings
lost as a means of equity financing.
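A short numerical sketch of this trade-off, using invented figures, makes the point: whatever fraction of earnings is paid out as dividends is no longer available as internally generated equity financing.

# Invented figures: how the dividend-payout ratio splits current earnings
# between dividends and retained earnings (internal equity financing).
earnings = 1_000_000.00  # hypothetical current earnings
payout_ratio = 0.40      # fraction of earnings paid out as dividends

dividends = earnings * payout_ratio       # 400,000.00 to stockholders
retained = earnings * (1 - payout_ratio)  # 600,000.00 kept for financing

print(f"Dividends paid:    {dividends:,.2f}")
print(f"Earnings retained: {retained:,.2f}")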
Once the mix of financing has been decided, the financial manager must still determine
how best to physically acquire the needed funds. The mechanics of getting a short-term loan,
entering into a long-term lease arrangement, or negotiating a sale of bonds or stock must be
understood.
Asset Management Decision
The third important decision of the firm is the asset management decision. Once assets
have been acquired and appropriate financing provided, these assets must still be managed
efficiently. The financial manager is charged with varying degrees of operating responsibility
over existing assets. These responsibilities require that the financial manager be more
concerned with the management of current assets than with that of fixed assets. A large
share of the responsibility for the management of fixed assets would reside with the
operating managers who employ these assets.
End

MBR
Researcher Obligations
As the individual responsible for the implementation of the research, the principal
investigator bears ultimate responsibility for protecting every research subject. Each
investigator is obliged to be personally certain that each subject is adequately informed and
freely consents to participate in the investigator's research. The investigator must personally
assure that every reasonable precaution is taken to minimize any risk to the subject. The
investigator also assumes responsibility for compliance with all Federal, State, and
institutional rules and regulations related to research involving human subjects and human-subject-derived information and materials.
As a principal investigator you must:

Accept responsibility for the scientific and ethical conduct of this research study

Obtain prior review from the IRB before implementing any protocol amendments and/or
changes to approved research, except where necessary to eliminate apparent
immediate hazards to the study subjects

Immediately report to the IRB any serious adverse reactions and/or unanticipated
effects on subjects which occur as a result of this study

Submit a progress report prior to expiration of specified approval period

Report any significant changes to the study site and significant deviations from the
research protocol

Report all deaths of subjects enrolled in the research project


Upon completion, termination, or non-renewal of the project, submit a protocol
termination form

Train study personnel in the proper conduct of human subject research

Prepare and maintain adequate and accurate case histories that record all observations
and other data pertinent to the investigation on each individual administered the
investigational drug/device or employed as a control in the investigation
o Case histories include the case report forms and supporting data such as signed
and dated consent forms and medical records containing progress notes of the
physician, the individual's hospital chart(s), and nurses' notes. The case history
must have documented evidence from each individual that informed consent was
obtained prior to participation in the study

Assure peer review of the protocol

Submit an indemnification agreement for the protocol for industry sponsored protocols
taking place in Mount Carmel facilities

Assure coordination with Mount Carmel departments, including, but not limited to:
o Staff in-servicing with the appropriate nursing units
o Coordination of drug accountability with the pharmacy
o Coordination with radiology department, pathology department, laboratory
department, and/or other departments as applicable

It is the responsibility of the investigator to assure that only individuals who are licensed or
otherwise qualified perform procedures in a study and that those procedures are performed
with the appropriate level of supervision under the laws of the state of Ohio and the policies
of Mount Carmel. The principal investigator may delegate some tasks and responsibilities, but
retains ultimate responsibility for the ethical conduct of the research.
Per Federal regulations, the investigator obtaining informed consent must note in the source
documents, including the hospital medical record, the subject's willingness to participate and
the volunteering of informed consent prior to any study related procedures. This is required
for all studies. The continuing informed consent should also be noted. The documentation of
informed consent must be maintained in compliance with institutional policies, FDA
regulations, OHRP regulations, and ICH regulations or contractual obligations as applicable.
End
Your Rights as a Research Participant
If you are asked to consent to be a subject in a research study, you have the following rights:

To have enough time to decide whether or not to be in the research study, and to make that
decision without any pressure from the people who are conducting the research.

To refuse to be in the study at all, or to stop participating at any time after you begin the study. If you
decide to stop participating in the study, you have a right to continued, necessary medical
treatment.

To be told what the study is trying to find out, what will happen to you, what drug/device will be used
in the study, and what you will be asked to do if you are in the study.

To be told about the reasonably foreseeable risks of being in the study.

To be told about the possible benefits of being in the study.

To be told whether there are any costs associated with being in the study and whether you will be
compensated for participating in the study.

To be told who will have access to information collected about you and how your confidentiality will
be protected.

To be told whom to contact with questions about the research, about research-related injury, and
about your rights as a research subject.

If the study involves treatment or therapy:


o To be told about the other non-research treatment choices you have.
o To be told where treatment is available should you have a research-related injury, and who
will pay for research-related injury treatment.

To receive a copy of the consent form that you will sign.

To ask any questions you may have.

Your Responsibilities as a Research Participant

Completely read the consent form and ask the Principal Investigator (PI) any questions you may
have. You should understand what will happen to you during the study before you agree to
participate.

Know the dates when your study participation starts and ends.

Carefully weigh the possible benefits (if any) and risks of being in the study.

Talk to the Principal Investigator (PI; the person in charge of the study) if you want to stop being part
of the research study.

Contact the PI and/or the Creighton University Institutional Review Board (IRB) with complaints or
concerns about your participation in the study.

Report to the PI immediately any and all problems you may be having with the study
drug/procedure/device.

Fulfill the responsibilities of participation as described on the consent forms unless you are stopping
your participation in the study.

Tell the PI or the person you are working with on the study when you have received the
compensation you were promised for participating in the study.

Ask for the results of the study, if you want them.

Keep a copy of the consent form for your records.

The Investigator's Responsibilities


The PI is the individual who is responsible for a research study. The PI is required to:

Follow the Creighton University IRB-approved research study plan.

Obtain informed consent from all study participants.

Maintain the confidentiality of study participants.

Quickly respond to all participant concerns and questions.

Tell participants about changes to the risks or benefits of the study.

Get approval from the Creighton University IRB for any changes to the study.

Promptly report all unanticipated problems or research-related injuries to the IRB.

Per policy, keep research records for seven (7) years after the study is over.

Effectively train/mentor student researchers in ethical conduct of research.

Comply with all Creighton University procedures for the ethical conduct of human subject research.
End

1. Introduction
This policy outlines faculty members' rights and responsibilities in the conduct of research at
Stanford.

2. Rights of Faculty Members


To carry out Stanford's research mission effectively, scholars are guaranteed certain
freedoms. Faculty have the right to academic freedom in the pursuit and support of research
as defined in the statement of Principles Concerning Research, found in the Research Policy
Handbook. Faculty have the right to disseminate the results and findings of their research
without suppression or modification from external sponsors beyond those provisions
explicitly stated in the policy on Openness in Research. Members of the Academic Council
have the right to engage in external consulting activities, subject to the University's, and in
some cases their School's, limitations. It is important that faculty adhere to both the spirit
and the letter of the policy.
Along with these freedoms come corresponding responsibilities:
3. Responsibilities of Faculty to Staff and Students
Faculty members must be aware of their obligations to staff and students working as part of
the research team. It is particularly important that, at least annually, each faculty member
review intellectual and tangible property rights and responsibilities (for management
of data in all media, for proper authorship attribution, etc.), with all members of the group
under his or her direction, including staff, students, postdocs, and visiting scholars. Each
member has the right to know who is sponsoring the research and supporting his or her
salary or stipend.
On an individual level, the best interests of each staff member and student should be of
particular concern. The University is committed to demonstrating support and appreciation for
its staff. To that end, faculty members are encouraged to provide staff development
opportunities and, if possible, a mentor relationship for those in their group.
Health and Safety
Each faculty member is responsible for training members of his or her team in appropriate
health and safety procedures for that particular research area, and for management of those
procedures in his or her laboratory or other workplace. PIs are also responsible for assuring the
periodic inspection of lab facilities, and to cooperate in any inspections by Stanford personnel
or by external agencies.
Consulting by Academic Staff - Research
On an exception basis, members of the Academic Staff-Research occasionally may be
permitted to engage in outside consulting activities under conditions outlined in the
RPH, Conflict of Commitment and Interest for Academic Staff and Other Teaching Staff.
4. Responsibilities to Sponsors
Fiscal Obligations

Although the legal agreement funding a sponsored project is between the sponsor and the
Stanford University Board of Trustees, the overall responsibility for management of a
sponsored project within funding limitations rests with the PI. Funds must be expended within
the restrictions of the contract or grant, and if any overdraft should occur, it is the
responsibility of the PI to clear the overdraft by transferring charges to an appropriate
account.
Equipment Control
The control of both Stanford and Government-owned equipment is mandatory under
Stanford's externally sponsored contracts and grants as well as under University policy. PIs
are responsible for securing necessary approvals for the purchase of the equipment, proper
tagging, inventory, utilization of equipment and disposal once equipment becomes excess.
For specific guidance, please refer to the Property Administration Manual.
Proposal Preparation
The cost of proposal preparation activities in support of new directions in research may not
be charged to sponsored projects. Department Chairs and School Deans must ensure that
non-sponsored project funds are available to offset the portion of the investigator's and his or
her staff's salaries from sponsored projects for effort spent preparing proposals to support
new directions in research. The cost of proposal preparation efforts for continuing research is
appropriately charged to current projects. Also, should there be questions on which direct
costs are subject to indirect costs as proposal budgets are prepared, please refer to the
appropriate documents in the Research Policy Handbook.
Certification of Salaries and Expenses to Sponsored Projects
Sponsored project and cost sharing accounts must be reviewed and certified by the PI
quarterly. It is the responsibility of each department chair and school dean to see that a
system is in place to ensure that the PIs in their areas fulfill the requirement for review and
certification of salaries and other expenditures, and to assure that salaries charged to
sponsored projects correspond to effort expended on those projects, within the appropriate
limitation for their school.
Technical and Invention Reports
Principal Investigators are responsible for submitting sponsor-required reports through the
Office of Sponsored Research or the Research Management Group (OSR or RMG) on a timely
basis. The reports must be sent directly to the project monitor, with a copy to OSR or RMG
at the same time so that contract and grant files are complete.
Patents and Copyrights
All participating researchers, including postdocs, students, and visiting scholars, must sign
Stanford's Patent and Copyright Agreement (SU-18) before the commencement of any
research activities.
5. Other Responsibilities

A. Conflict of Interest
The key to Stanford's policy pertaining to conflict of interest is the trust in the integrity of the
individual faculty member to disclose any situation that could lead to real or apparent conflict
of interest. Stanford policy requires an annual certification of compliance and disclosure of
potentially conflicting relationships. In addition, situations which arise during the year in
which outside obligations have the potential for conflict with the faculty member's allegiance
and responsibility to the University require a prompt ad hoc disclosure.

B. Research Protocols
Faculty members also need to ensure that approved research protocols for the use of human
and animal subjects in research are obtained and followed.
End

Principles of Good Research Conduct


i. Excellence
Researchers are expected to strive for excellence when conducting their research; aiming to
design, conduct, produce and disseminate work of the highest quality and ethical standards.
ii. Honesty
Researchers must be honest in respect of their own actions and in their responses to the actions
of others. This applies to the whole range of research activity including:
applying for funding;
experimental and protocol design;
generating, recording, analysing and interpreting data;
publishing and exploiting results;
acknowledging the direct and indirect contributions of colleagues, collaborators and others;
and reporting cases of suspected misconduct in a responsible and appropriate manner.
iii. Openness
Researchers must be open when conducting and communicating their research (subject to the
terms and conditions of any research contracts and the protection of intellectual property and
commercial exploitation). This includes:
the disclosure of any conflicts of interest;
the reporting of research data collection methods;
the analysis and interpretation of data;
making all research findings widely available (including sharing negative results as
appropriate);
disseminating research in a way that will have the widest impact;
and promoting public engagement/involvement in research.

iv. Rigour
Researchers must be thorough and meticulous in performing their research. Care must be taken:
to use the appropriate methods;
to adhere to an agreed protocol (where appropriate);
when drawing interpretations and conclusions from the research;
and when communicating the results.
v. Safety
All research should be conducted in a manner which, so far as is reasonably practicable, is safe
for researchers, participants, the University and the environment. Researchers must familiarise
themselves with, and comply with, the obligations set down by the University in its policies and
guidelines, as well as relevant legislation and regulatory practice in this area.
vi. Ethical responsibility
Researchers should have respect for all participants in, and subjects of, research including
humans, animals, the environment and cultural objects. The University expects all researchers to
consider the ethical implications of their research and to be aware of their responsibilities to
society, the environment, their profession, the University, research participants and the
organisation(s) funding the research.
vii. Responsible management
Established researchers are responsible for nurturing researchers of the future; fostering a
constructive and supportive environment without undue pressure and ensuring that appropriate
supervision, mentoring and training are provided.
viii. Regulatory compliance
Researchers are expected to make themselves aware of, and comply with, any legislation or
regulations that govern their research. This includes, but is not limited to:
Data Protection
Clinical Trials Regulations
Human Tissue Act
ix. Professional standards
Researchers should observe the standards of practice set out in guidelines published by
professional societies, funding agencies and other relevant bodies, where appropriate and
available. They must ensure that they have the necessary skills and training to conduct the
research.
x. Report research misconduct
Researchers should be aware of the extreme seriousness of research misconduct. Staff and
students of the University have an obligation to report suspected research misconduct in
accordance with the University's Code of Practice for Investigating Concerns about the Conduct of
Research.
End
Principles of Good Research

All research is different but the following factors are common to all good pieces of research involving social care service
users, their families and carers and staff working in this area.

There is a clear statement of research aims, which defines the research question.
There is an information sheet for participants, which sets out clearly what the research is about and what it
will involve, and consent is obtained in writing on a consent form before the research begins.

The methodology is appropriate to the research question. So, if the research is into people's
perceptions, a more qualitative, unstructured interview may be appropriate. If the research
aims to identify the scale of a problem or need, a more quantitative, randomised, statistical
sample survey may be more appropriate. Good research can often use a combination of
methodologies, which complement one another.

The research should be carried out in an unbiased fashion. As far as possible the researcher should not influence the
results of the research in any way. If this is likely, it needs to be addressed explicitly and systematically.

From the beginning, the research should have appropriate and sufficient resources in terms of people, time,
transport, money etc. allocated to it.

The people conducting the research should be trained in research and research methods and this training should
provide:

Knowledge around appropriate information gathering techniques,

An understanding of research issues,
An understanding of the research area,
An understanding of the issues around dealing with vulnerable social care clients and housing clients, especially
regarding risk, privacy and sensitivity and the possible need for support.

Those involved in designing, conducting, analysing and supervising the research should have a full understanding
of the subject area.

In some instances, it helps if the researcher has experience of working in the area. However, this can also be a negative
factor, as sometimes research benefits from the fresh eyes and ears of an outsider, which may lead to less bias.

If applicable, the information generated from the research will inform the policy-making process.
All research should be ethical and not harmful in any way to the participants.
End

What Are the Main Features of a Formal Business Report?

A formal business report will show the results of projects discussed in the report.
A formal business report focuses on reporting the results or findings of any given project,
such as budget changes or customer service feedback surveys. The business report will focus
on reporting the purpose of the project, the strategies and the findings, so a reader who has no
prior knowledge of the project will understand the purpose and results.
Introduction and Conclusion

o A formal business report will include a short introduction and a short conclusion.
The introduction should be the first thing the reader encounters, while the
conclusion should be the last. Both should be written in third person to keep the
business report formal, as should the entire report. While the introduction
explains the primary purpose of the project in question, the conclusion should
focus on the results and whether any other changes need to be made.
Purpose / Overview
o The overview comes directly after the introduction. The overview is also known as
the purpose section, as it explains the reasons why the project is being
completed. For example, if the project is about hearing customer feedback, the
reason could be that the company has implemented new services or products
and wants feedback to see if the products are better than the old ones. When
writing this section, refer to previous examples of product implementations, if
possible.

Methods and Strategies


o Another feature of a formal business report is a presentation of the methods or
strategies used to get the information or research for the report. To expand on
the previous example, the company may have used phone interviews or online
questionnaires to get the feedback from the customers. It is important that you
mention the methods used to get the information, in case the results prove
unreliable. Then the executives can go back and analyze the
method to see what needs to change in order to get the research they desire.
Findings and Results
o The findings or results part of the report is often the most important aspect of the
business report. The findings will reveal the answers to the questions posed in
the interviews or questionnaires, for example. Present the overall results in
chart or graph format. These results will be used to improve customer service
programs, refine product development or make budget cuts to save money.
The results are essentially used to solve problems within the business, improve
the budgets, improve customer relationships and increase overall sales in order
to make more money.
End
Essential Characteristics or Features of a Good Report
A report provides factual information on which decisions are based. So every care
should be taken to ensure that a report has all the essential qualities which turn it into a good
report. A good report must have the following qualities:
1. Precision
In a good report, the report writer is very clear about the exact and definite purpose of
writing the report. His investigation, analysis, recommendations and others are directed by
this central purpose. Precision gives the report unity and makes it a valuable document for
best usage.
2. Accuracy of Facts
Information contained in a report must be based on accurate facts. Since decisions are taken
on the basis of report information, any inaccurate information or statistics will lead to wrong
decisions and will hamper the achievement of the organizational goal.
3. Relevancy
The facts presented in a report should be not only accurate but also relevant. Irrelevant
facts make a report confusing and are likely to mislead proper decision-making.
4. Reader-Orientation
While drafting any report, it is necessary to keep in mind the person who is going to
read it. That's why a good report is always reader oriented. The reader's knowledge and level of
understanding should be considered by the writer of the report. Reader-oriented
information helps qualify a report as a good one.
5. Simple Language
This is just another essential feature of a good report. A good report is written in simple
language, avoiding vague and unclear words. The language of the report should not be
influenced by the writer's emotion or goal. The message of a good report should be self-explanatory.
6. Conciseness
A good report should be concise, but this does not mean that a report can never be long. Rather,
it means that a good report, or a business report, is one that transmits maximum information
with minimum words. It avoids unnecessary detail and includes everything which is
significant and necessary to present proper information.
7. Grammatical Accuracy
A good report is free from errors. Any faulty construction of a sentence may make its
meaning different in the reader's mind, and sometimes it may become confusing or
ambiguous.
8. Unbiased Recommendation
Recommendations in a report usually influence the reader's mind. So if recommendations
are made at the end of a report, they must be impartial and objective. They should come as
the logical conclusion of investigation and analysis.
9. Clarity
Clarity depends on proper arrangement of facts. A good report is absolutely clear. The reporter
should make his purpose clear, define his sources, state his findings and finally make the
necessary recommendations. To communicate effectively, a report must be clear and easy
to understand.

10. Attractive Presentation


Presentation of a report is also a factor which should be considered for a good report. A good
report provides a catchy and smart look and attracts the reader's attention. Structure,
content, language, typing and presentation style of a good report should be attractive to
make a clear impression in the mind of its reader.
End
What are Features or Characteristics of Report ?

1. Complete and Compact Document : Report is a complete and compact written
document giving updated information about a specific problem.
2. Systematic Presentation of Facts : Report is a systematic presentation of facts,
figures, conclusions and recommendations. Report writers closely study the problem
under investigation and prepare a report after analyzing all relevant information
regarding the problem. Report is supported by facts and evidence. There is no scope for
imagination in a report which is basically a factual document.
3. Prepared in Writing : Reports are usually in writing. Written reports are useful for
reference purposes. A report serves as a complete, compact and self-explanatory document
over a long period. Oral reporting is possible in the case of secret and confidential matters.
4. Provides Information and Guidance : Report is a valuable document which gives
information and guidance to the management while framing future policies. It
facilitates planning and decision making. Reports are also useful for solving problems
faced by a business enterprise.
5. Self-explanatory Document : Report is a comprehensive document and covers all
aspects of the subject matter of study. It is a self-explanatory and complete document
by itself.
6. Acts as a Tool of Internal Communication : Report is an effective tool of
communication between top executives and subordinate staff working in an
organization. It provides feedback to employees and to executives for decision making.
Reports are generally submitted to higher authorities. It is an example of upward
communication. Similarly, reports are also sent by company executives to the lower
levels of management. This is treated as downward communication. In addition, reports
are also sent to shareholders and others connected with the company. It may be
pointed out that report writing / preparation acts as a backbone of any system of
communication.
7. Acts as Permanent Record : A report serves as a permanent record relating to
certain business matter. It is useful for future reference and guidance.

8. Time Consuming and Costly Activity : Report writing is a time consuming, lengthy
and costly activity as it involves collection of facts, drawing conclusion and making
recommendations.
End
Quantitative and Qualitative Data collection methods
Quantitative data collection methods rely on random sampling and structured data
collection instruments that fit diverse experiences into predetermined response categories.
They produce results that are easy to summarize, compare, and generalize.
Quantitative research is concerned with testing hypotheses derived from theory and/or being
able to estimate the size of a phenomenon of interest. Depending on the research question,
participants may be randomly assigned to different treatments. If this is not feasible, the
researcher may collect data on participant and situational characteristics in order to
statistically control for their influence on the dependent, or outcome, variable. If the intent is
to generalize from the research participants to a larger population, the researcher will employ
probability sampling to select participants.
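For illustration, a minimal sketch of simple random sampling, one common form of probability sampling, drawn from a hypothetical participant list using Python's standard library:

# Simple random sampling: every member of the sampling frame has an
# equal, known chance of selection, which supports generalization.
import random

frame = [f"participant_{i}" for i in range(1, 501)]  # hypothetical frame
random.seed(42)                      # fixed seed so the draw is repeatable
sample = random.sample(frame, k=50)  # a 10% simple random sample
print(sample[:5])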
Typical quantitative data gathering strategies include:

Experiments/clinical trials.

Observing and recording well-defined events (e.g., counting the number of patients
waiting in emergency at specified times of the day).

Obtaining relevant data from management information systems.

Administering surveys with closed-ended questions (e.g., face-to-face and telephone
interviews, questionnaires, etc.). (http://www.achrn.org/quantitative_methods.htm)

Interviews
In Quantitative research (survey research), interviews are more structured than in Qualitative
research. (http://www.stat.ncsu.edu/info/srms/survpamphlet.html)
In a structured interview, the researcher asks a standard set of questions and nothing more.
(Leedy and Ormrod, 2001)
Face-to-face interviews have a distinct advantage of enabling the researcher to establish
rapport with potential participants and therefore gain their cooperation. These interviews
yield the highest response rates in survey research. They also allow the researcher to clarify
ambiguous answers and, when appropriate, seek follow-up information. Disadvantages
include being impractical when large samples are involved, as well as time consuming and
expensive. (Leedy and Ormrod, 2001)
Telephone interviews are less time consuming and less expensive, and the researcher has
ready access to anyone on the planet who has a telephone. Disadvantages are that the
response rate is not as high as with the face-to-face interview, but considerably higher than
for the mailed questionnaire. The sample may be biased to the extent that people without
phones are part of the population about whom the researcher wants to draw inferences.

Computer Assisted Personal Interviewing (CAPI) is a form of personal interviewing, but
instead of completing a questionnaire, the interviewer brings along a laptop or hand-held
computer to enter the information directly into the database. This method saves time
involved in processing the data, as well as saving the interviewer from carrying around
hundreds of questionnaires. However, this type of data collection method can be expensive
to set up and requires that interviewers have computer and typing skills.
Questionnaires
Paper-pencil questionnaires can be sent to a large number of people and save the
researcher time and money. People are more truthful while responding to questionnaires
regarding controversial issues in particular, due to the fact that their responses are
anonymous. But they also have drawbacks. The majority of the people who receive questionnaires
don't return them, and those who do might not be representative of the originally selected
sample. (Leedy and Ormrod, 2001)
Web based questionnaires : A new and inevitably growing methodology is the use of
Internet based research. This would mean receiving an e-mail on which you would click on an
address that would take you to a secure web-site to fill in a questionnaire. This type of
research is often quicker and less detailed. Some disadvantages of this method include the
exclusion of people who do not have a computer or are unable to access a computer. Also, the
validity of such surveys is in question as people might be in a hurry to complete them and so
might not give accurate responses.
(http://www.statcan.ca/english/edu/power/ch2/methods/methods.htm)
Questionnaires often make use of checklists and rating scales. These devices help simplify and
quantify people's behaviors and attitudes. A checklist is a list of behaviors, characteristics, or
other entities that the researcher is looking for. Either the researcher or survey participant
simply checks whether each item on the list is observed, present or true, or vice
versa. A rating scale is more useful when a behavior needs to be evaluated on a
continuum. They are also known as Likert scales. (Leedy and Ormrod, 2001)
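A toy illustration of summarising such ratings, assuming made-up responses coded 1 ("strongly disagree") through 5 ("strongly agree"):

# Summarising made-up Likert-scale responses (1 = strongly disagree,
# 5 = strongly agree) with a frequency distribution and a mean rating.
from collections import Counter
from statistics import mean

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]  # hypothetical ratings
print(Counter(responses))                    # counts per scale point
print(round(mean(responses), 2))             # mean rating: 3.9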

Qualitative data collection methods play an important role in impact evaluation by
providing information useful to understand the processes behind observed results and assess
changes in people's perceptions of their well-being. Furthermore, qualitative methods can
be used to improve the quality of survey-based quantitative evaluations by helping generate
evaluation hypotheses, strengthening the design of survey questionnaires, and expanding or
clarifying quantitative evaluation findings. These methods are characterized by the following
attributes:

they tend to be open-ended and have less structured protocols (i.e., researchers may
change the data collection strategy by adding, refining, or dropping techniques or
informants)

they rely more heavily on interactive interviews; respondents may be interviewed
several times to follow up on a particular issue, clarify concepts or check the reliability
of data

they use triangulation to increase the credibility of their findings (i.e., researchers rely
on multiple data collection methods to check the authenticity of their results)

generally their findings are not generalizable to any specific population, rather each
case study produces a single piece of evidence that can be used to seek general
patterns among different studies of the same issue

Regardless of the kinds of data involved, data collection in a qualitative study takes a great
deal of time. The researcher needs to record any potentially useful data
thoroughly, accurately, and systematically, using field notes, sketches, audiotapes, photographs
and other suitable means. The data collection methods must observe the ethical principles of
research.
The qualitative methods most commonly used in evaluation can be classified in three broad
categories:

in-depth interview

observation methods

document review

The following link provides more information on the above three methods.
http://www.worldbank.org/poverty/impact/methods/qualitative.htm#indepth
Different ways of collecting evaluation data are useful for different purposes, and each has
advantages and disadvantages. Various factors will influence your choice of a data collection
method: the questions you want to investigate, resources available to you, your timeline, and
more. (http://dmc.umn.edu/evaluation/data.shtml)
End
6.3 DATA COLLECTION METHODS

6.3.1 Registration
6.3.2 Questionnaires
6.3.3 Interviews
6.3.4 Direct observations
6.3.5 Reporting

6.3.1 Registration
A register is a depository of information on fishing vessels, companies, gear, licences or
individual fishers. It can be used to obtain complete enumeration through a legal
requirement. Registers are implemented when there is a need for accurate knowledge of the
size and type of the fishing fleet and for closer monitoring of fishing activities to ensure
compliance with fishery regulations. They may also incorporate information related to fiscal
purposes (e.g. issuance or renewal of fishing licences). Although registers are usually
implemented for purposes other than to collect data, they can be very useful in the design
and implementation of a statistical system, provided that the data they contain are reliable,
timely and complete.
6.3.1.1 Registration data types
In most countries, vessels, especially commercial fishing vessels, and chartered or contract
fishing vessels are registered with the fisheries authorities. Data on vessel type, size, gear
type, country of origin, fish holding capacity, number of fishers and engine horsepower
should be made available for the registry.
Companies dealing with fisheries agencies are registered for various purposes. These
companies may not only include fishing companies, but also other types of companies
involved in processing and marketing fishery products. Data, such as the number of vessels,
gear type and vessel size of registered fishing companies, should be recorded during such
registration. Processing companies should provide basic data on the type of processing, type
of raw material, capacity of processing, and even the source of material.
Fishing vessels and fishing gears may often be required to hold a valid fishing licence.
Unlike vessel registers, licences tend to be issued for access to specific fisheries over a set
period of time. Because licences may have to be periodically renewed, they can be a useful
way to update information on vessel and gear characteristics.
6.3.1.2 Registry design
A registry must not only capture new records, but be able to indicate that a particular record
is inactive (e.g. a company has ceased operations) or record changes in operations (e.g. a
company's processing capacity has increased). If licences must be renewed each year, data
collected from licensing is particularly useful, as records are updated on an annual basis.
Registry data also contain criteria for the classification of fishing units into strata. These
classifications are usually based on assumptions and a priori knowledge regarding differences
on catch rates, species composition and species selectivity.
In general, vessel registers are complex systems requiring well-established administrative
procedures supported by effective data communications, data storage and processing
components. As such, they predominantly deal with only certain types and size of fishing
units, most often belonging to industrial and semi-industrial fleets. Small-scale and
subsistence fisheries involving large numbers of fishing units are often not part of a register
system or, if registered, are not easily traced so as to allow validation or updating.
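As an illustration of the record-keeping described in this subsection, the following sketch (with hypothetical field names) shows a register record that captures new registrations, can be marked inactive, and keeps a trail of changes in operations:

# Hypothetical vessel-register record: supports new registrations,
# marking records inactive, and recording changes in characteristics.
from dataclasses import dataclass, field

@dataclass
class VesselRecord:
    registration_no: str
    vessel_type: str
    gear_type: str
    length_m: float
    engine_hp: float
    active: bool = True
    history: list = field(default_factory=list)  # log of applied changes

    def update(self, **changes):
        self.history.append(dict(changes))  # keep the change traceable
        for key, value in changes.items():
            setattr(self, key, value)

    def deactivate(self):
        self.active = False  # e.g. vessel scrapped or company ceased operations

v = VesselRecord("FV-0001", "trawler", "bottom trawl", 18.5, 240)
v.update(engine_hp=300)  # re-engined: record the change, keep the trail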
6.3.2 Questionnaires
In contrast with interviews, where an enumerator poses questions directly, questionnaires
refer to forms filled in by respondents alone. Questionnaires can be handed out or sent by
mail and later collected or returned by stamped addressed envelope. This method can be
adopted for the entire population or sampled sectors.
Questionnaires may be used to collect regular or infrequent routine data, and data for
specialised studies. While the information in this section applies to questionnaires for all
these uses, examples will concern only routine data, whether regular or infrequent. Some of
the data often obtained through questionnaires include demographic characteristics, fishing
practices, opinions of stakeholders on fisheries issues or management, general information
on fishers and household food budgets.
A questionnaire requires respondents to fill out the form themselves, and so requires a high
level of literacy. Where multiple languages are common, questionnaires should be prepared
using the major languages of the target group. Special care needs to be taken in these cases
to ensure accurate translations.
In order to maximise return rates, questionnaires should be designed to be as simple and
clear as possible, with targeted sections and questions. Most importantly, questionnaires
should also be as short as possible. If the questionnaire is being given to a sample
population, then it may be preferable to prepare several smaller, more targeted
questionnaires, each provided to a sub-sample. If the questionnaire is used for a complete
enumeration, then special care needs to be taken to avoid overburdening the respondent. If,
for instance, several agencies require the same data, attempts should be made to coordinate its collection to avoid duplication.
The information that can be obtained through questionnaires consists of almost any data
variable. For example, catch or landing information can be collected through questionnaire
from fishers, market middle-persons, market sellers and buyers, processors etc. Likewise,
socio-economic data can also be obtained through questionnaires from a variety of sources.
However, in all cases variables obtained are an opinion and not a direct measurement, and so
may be subject to serious errors. Using direct observations (6.3.4) or reporting systems
(6.3.5) for these sorts of data is more reliable.
Questionnaires, like interviews, can contain either structured questions with blanks to be
filled in, multiple choice questions, or they can contain open-ended questions where the
respondent is encouraged to reply at length and choose their own focus to some extent.
To facilitate filling out forms and data entry in a structured format, the form should ideally be
machine-readable, or at least laid out with data fields clearly identifiable and responses pre-coded. In general, writing should be reduced to a minimum (e.g. tick boxes, multiple choices),
preferably being limited to numerals. In an open-ended format, keywords and other
structuring procedures should be imposed later to facilitate database entry and analysis, if
necessary.
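A small sketch of such pre-coding, using a hypothetical codebook: each answer is restricted to a fixed set of numeric codes, so a completed form can be validated automatically on data entry.

# Hypothetical pre-coded questionnaire: every field allows only a fixed
# set of numeric codes, so responses can be validated on entry.
CODEBOOK = {
    "q1_fishing_practice": {1: "gillnet", 2: "handline", 3: "trap"},
    "q2_trips_per_week":   {1: "0-2", 2: "3-5", 3: "6+"},
}

def validate(response: dict) -> list:
    """Return the fields whose codes are not in the codebook."""
    return [q for q, code in response.items()
            if code not in CODEBOOK.get(q, {})]

print(validate({"q1_fishing_practice": 2, "q2_trips_per_week": 9}))
# ['q2_trips_per_week'] -> code 9 is not a valid pre-coded answer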
6.3.3 Interviews
In interviews information is obtained through inquiry and recorded by enumerators.
Structured interviews are performed by using survey forms, whereas open interviews are
notes taken while talking with respondents. The notes are subsequently structured
(interpreted) for further analysis. Open-ended interviews, which need to be interpreted and
analysed even during the interview, have to be carried out by well-trained observers and/or
enumerators.
As in preparing a questionnaire, it is important to pilot test forms designed for the interviews.
The best attempt to clarify and focus by the designer cannot anticipate all possible
respondent interpretations. A small-scale test prior to actual use for data collection will
assure better data and avoid wasting time and money.
Although structured interviews can be used to obtain almost any information, as with
questionnaires, information is based on personal opinion. Data on variables such as catch or
effort are potentially subject to large errors, due to poor estimates or intentional errors of
sensitive information.
6.3.3.1 Open-ended interviews
Open-ended interviews cover a variety of data-gathering activities, including a number of
social science research methods.
Focus groups are small (5-15 individuals) and composed of representative members of a
group whose beliefs, practises or opinions are sought. By asking initial questions and
structuring the subsequent discussion, the facilitator/interviewer can obtain, for example,
information on common gear use practices, responses to management regulations or
opinions about fishing.
Panel surveys involve the random selection of a small number of representative individuals
from a group, who agree to be available over an extended period - often one to three years.
During that period, they serve as a stratified random sample of people from whom data can
be elicited on a variety of topics.
6.3.3.2 Structured interview
Generally, structured interviews are conducted with a well-designed form already
established. Forms are filled in by researchers instead of respondents, and in that respect they
differ from questionnaires. While this approach is more expensive, more complicated questions can
be asked and data can be validated as they are collected, improving data quality. Interviews can
be undertaken with a variety of data sources (fishers to consumers), and through alternative
media, such as by telephone or in person.
Structured interviews form the basis for much of the data collection in small-scale fisheries.
In an interview approach for sample catch, effort and prices, the enumerators work
according to a schedule of landing site visits to record data. Enumerators can be mobile (that
is sites are visited on a rotational basis) or resident at a specific sampling site. Their job is to
sample vessels, obtaining data on landings, effort and prices from all boat/gear types that are
expected to operate during the sampling day. The sample should be as representative as
possible of fleet activities. Some additional data related to fishing operations may be required
for certain types of fishing units, such as beach seines or boats making multiple fishing trips
in one day. For these, the interview may cover planned activities as well as activities already
completed.
In an interview approach for boat/gear activities, the enumerators work according to a
schedule of homeport visits to record data on boat/gear activities. Enumerators can be
mobile (that is homeports are visited on a rotational basis) or resident at a specific sampling
site. In either case, their job is to determine the total number of fishing units (and if feasible,
fishing gears) for all boat/gear types based at that homeport and the number of those that have
been fishing during the sampling day.
There are several ways of recording boat/gear activities. In many cases, they combine the
interview method with direct observations. Direct observations can be used to identify
inactive fishing units by observing those that are moored or beached, where the total number
of vessels based at the homeport is already known, perhaps from a frame survey or
register. Often enumerators will still have to verify that vessels are fishing as opposed to
other activities by using interviews during the visit.
The pure interview approach can be used in those cases where a pre-determined sub-set of
the fishing units has been selected. The enumerator's job is to trace all fishers on the list and,
by means of interviewing, find out those that had been active during the sampling day. For
sites involving a workable number of fishing units (e.g. not larger than 20), the interview may
involve all fishing units.
Sometimes it is possible to ask questions on fishing activity which refer to the previous day or
even to two days back. This extra information increases the sample size significantly with
little extra cost, ultimately resulting in better estimates of total fishing effort. Experience has
shown that most of the variability in boat/gear activity is in time rather than space.
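A rough numerical sketch, with invented figures, of how such activity data feed an effort estimate: sampled boat-days give an activity coefficient, which is then raised to the whole fleet and period.

# Invented numbers: estimate total effort (boat-days) from an activity
# sample. Asking about the previous day as well roughly doubles the
# sample of boat-days observed at little extra cost.
total_boats = 120        # from the register or frame survey
sampled_boat_days = 60   # boat-days asked about during site visits
active_boat_days = 42    # of those, boat-days actually spent fishing

bac = active_boat_days / sampled_boat_days  # boat activity coefficient, 0.70
days_in_month = 30
effort = bac * total_boats * days_in_month  # estimated boat-days fished
print(round(effort))  # 2520 boat-days in the month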
6.3.4 Direct observations
6.3.4.1 Observers
Observers can make direct measurements on the fishing vessels, at landing sites, processing
plants, or in markets. The variables that enumerators can collect include catch (landing and
discards), effort, vessel/gears, operations, environmental variables (e.g. sea state,
temperature), biological variables (e.g. length, weight, age), the values and quantities of
landings and sales.
In practice, observers do not only make direct measurements (observations), but also
conduct interviews and surveys using questionnaires. They might also be involved in data
processing and analysis. The tasks of an observer are difficult and adequate training and
supervision are therefore essential.
Clear decisions need to be made on the nature and extent of data collected during any one
trip. Often, the amount of data and frequency of collection can be established analytically
with preliminary data.
Preferably, observers should only collect data, not carry out other activities, such as
enforcement, licensing or tax collection. This should help to minimise bias by reducing the
incentives to lie. Problems in terms of conflicts between data collection and law enforcement,
for example, can be reduced by clear demarcation, separating activities by location or time.
This becomes a necessity for at-sea observers. Their positions on fishing vessels and the
tasks that they perform depend significantly on a good working relationship with the captain
and crew, which can be lost if they are perceived as enforcement personnel.
The major data obtained through at-sea observers are catch and effort data, which are
often used for cross checking fishing logs. At the same time, the at-sea observers can collect
extra biological (fish size, maturity, and sex), by-catch and environmental data, as well as
other information on the gears, fishing operations etc. Frequently, discards data can only be
collected by at-sea observers.
The main data obtained from observers at landing sites, processing plants and
markets include landing (amount, quality, value and price), biological (size, maturity), and
effort (how many hauls, hours fishing) data. For the large-scale fishery where a logbook
system is used, data collected at landing sites could be used to crosscheck data recorded in
logbooks. Data collected from processing plants include quantities by species and, especially
in modern factory practices, the batch number of raw materials, which can sometimes be
traced back to fishing vessels. These data, if collected, can be used to validate landing data.
Collecting data to estimate raising factors for converting landed processed fish weight to the
whole weight equivalent may be necessary. By sampling fish before and after processing,
conversion factors may be improved. Potential seasonal, life history stage and other
variations in body/gut weight ratios suggest date, species, sex and size should be recorded in
samples.
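For example, a sketch with invented figures of estimating such a raising (conversion) factor from paired before/after processing samples and applying it to a landed weight:

# Invented figures: estimate a raising factor from paired before/after
# processing weights, then convert a landed processed weight back to
# its whole-weight equivalent.
whole_kg =     [10.4, 12.1, 9.8, 11.5]  # sampled fish before processing
processed_kg = [ 6.9,  8.0, 6.5,  7.7]  # same fish after gutting/filleting

raising_factor = sum(whole_kg) / sum(processed_kg)  # about 1.51

landed_processed = 2_400.0  # kg of processed product landed
whole_equivalent = landed_processed * raising_factor
print(round(raising_factor, 2), round(whole_equivalent))  # 1.51  3612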
Economic and demographic data at each level (e.g. input and output of various products to
and from market and processors) are usually obtained by interview and questionnaire.
However, the data directly collected by enumerators can also be the major source as well as
supporting data for those collected through other methods.
While product data in processing plants can be collected through questionnaire (6.3.2) or
interview (6.3.3), enumerators can directly collect many physical variables (weight, number,
size etc.) more accurately. Automatic scales, through which a continuous stream of fish
passes, can record the weight of fish mechanically or through computerised sensors.
Similarly, mechanical or automatic weighing bins for whole frozen or defrosted fish, prior to
entry to a processing line or cold store, can be used to record weights for each batch.
Otherwise, boxes need to be counted and sub-sampled to ensure their fish contents are
correctly identified and weighed.
Fish is often landed in bulk together with non-fish materials (e.g. ice, brine slurry, packing
material and pallets). It can be very difficult to estimate the total fish weight, let alone weight
by species, product and size grade. Methods need to be established to record whether non-fish material is included in any weighing process (e.g. are scales set to automatically subtract
pallet weight?). In the case of processed fish in sealed boxes, it may be that sampling to
determine an average weight and then box or pallet counting is sufficient. Alternatively, each
box or pallet is weighed and a note taken whether box and pallet weight should be
subtracted at a later date when processing the data.
Complete landing of all catch in relation to a vessel's trip (i.e. emptying of holds) is preferred,
since records can then be matched against logsheets. However, in some circumstances off-loading in harbours, at the dock or at sea may only be partial, some catch being retained on board
until the next off-loading. In this case, records should be maintained of both catch landed and
retained on board.
6.3.4.2 Inspectors

Inspectors are a kind of enumerator involved in law enforcement and surveillance (for fishing
regulations, sanitary inspections, labour control, etc.). They may work at sea on surveillance
vessels, at landing sites on shore, at processing factories and at markets. In general,
scientific data are better collected by enumerators who are not directly involved in law
enforcement. Nevertheless, many variables collected by the inspectors are very useful, and
include landings, operational information, effort, landing price, processing procedure and
values of product to the market and processors. Inspectors are also useful in collecting
employment data.
Inspectors may play an important role in verification. In many cases, reports can be
physically checked with observations. For example, random samples of boxes can be taken to
check box contents (species, product type and size grade) against box identification marks.
Inspectors need to be skilled in such sampling strategies.
As with enumerators/observers, inspector data should be treated with caution because of the
high chance of sampling bias. This potential bias of data collected by law-enforcement
officers should be considered in analyses.
6.3.4.3 Scientific research
Ecological research methods can be undertaken independent of commercial fishing
operations to measure variables related to fish populations or the environment. Such
research can be carried out by institutional research vessels or by industry or institutions
using commercial fishing vessels. The objective is to obtain observations on biological (e.g.
stock abundance or spatial distribution and fish size, maturity and spawning activities) and
environmental (e.g. salinity and temperature) variables. It is important that this type of
research is carried out periodically in order to obtain time sequential data.
Similarly, socio-cultural research methods can be used to obtain specific information useful to
management. Although these methods may not often be considered routine, they provide
important data and should be considered for infrequent data collection where possible.
Key informants are individuals with specialised knowledge on a particular topic. They may
include academic specialists, community leaders, or especially skilled fishers. Interviews are
usually begun with a set of baseline questions, but the interviewer expects to elicit new and
perhaps unexpected information by requesting that the key informant expand on his or her
answers to these initial questions. This method is ideal for obtaining in-depth descriptive data
on beliefs and practices, including historical practices.
Participant-observation is a technique whereby the researcher spends an extended period
of time (from weeks to years, depending on the objective and the context) living with a target
community, both observing their behaviour and participating in their practices. During this
time, the researcher will be conducting formal and informal open-ended interviewing on a
variety of topics. This is a good method for learning about the actual processes of decision-making, as opposed to the formal procedures. Cultural and institutional rules are rarely
followed to the letter, and there are usually informal standards for an acceptable leeway.
However, information on these standards can often only be obtained through participant-observation.

6.3.4.4 Data logging


Automatic Location Communicators (ALC) automatically log data through positioning and
communications technology. They allow remote observation through recording of fishing
activities at sea, and could replace logbooks and observers/inspectors on the bridges of
fishing vessels. However, ALCs will be deficient in one simple respect: entry of data on the
catch remains the responsibility of the captain.
Many data on fishing operations can be automatically recorded from bridge instrumentation.
Position, speed, heading, deployment of gear through links to electronic instruments are
likely to become more common in future. Once gathered, such data may be automatically
transmitted to databases through satellite or ground communications.
The technology that combines vessel position and a catch assessment for management
authorities through remote means is generally known as a Vessel Monitoring System (VMS).
Confidentiality is the key to the widespread acceptance of VMS, as information on current
fishing grounds, and therefore security of position information, is a major concern.
However, reporting of vessel positions, activities and catch through these systems, directly to
databases and thence to reports that either aggregate data or remove vessel identifiers, is
becoming possible. Since it will be relatively simple to check remotely sensed positions
against recorded positions, logsheet records should become more representative of real vessel
activities at sea.
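A minimal sketch of such a cross-check, assuming hypothetical record formats (timestamped latitude/longitude pairs) and an arbitrary 10 km tolerance, might look like this in Python:

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(a))

    def flag_discrepancies(logsheet, vms, max_km=10):
        """Yield operations whose reported position is far from the VMS fix
        recorded at the same time (records matched by timestamp key)."""
        for ts, (lat, lon) in logsheet.items():
            if ts in vms:
                d = haversine_km(lat, lon, *vms[ts])
                if d > max_km:
                    yield ts, round(d, 1)

    # Invented example records: one logsheet entry and the matching VMS fix.
    logsheet = {"2024-05-01T06:00": (-12.50, 45.10)}
    vms = {"2024-05-01T06:00": (-12.95, 45.60)}
    print(list(flag_discrepancies(logsheet, vms)))  # e.g. [('2024-05-01T06:00', 73.8)]

A real system would also have to handle clock drift, missing fixes and vessels with multiple operations per day; this only shows the basic comparison.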
6.3.5 Reporting
In most complete enumeration approaches, fisheries staff do not directly undertake data
collection, but use external data sources. Most commonly, these sources are data forms
completed by the fishing companies themselves, middle persons, market operators,
processors and even trading companies and custom offices. Such methods are almost
exclusively used for semi-industrial and industrial fisheries and institutions.
Fishing companies are often a good source of information regarding basic data on catches
and fishing effort. Regular submission of basic data is a part of the fishing licensing process.
Data submitted by companies are often in the form of logbooks or landings declarations.
Logbooks should contain detailed information on individual fishing operations, including
fishing grounds, type and duration of operation, catch by species and other types of data
relating to weather and sea conditions. Landings declarations usually deal with grouped data
presented as summaries of fishing trips and catch by species.
The advantage of using reports is that data are compiled by agents other than fisheries staff
and sometimes can be made available in pre-processed computerised format directly from
the company's records, thereby reducing administration costs. Confidentiality of information
(such as fishing grounds and catch rates) should be part of the agreement for data
submission, and statistical outputs of the survey should not contain information related to
individual fishing vessels or companies. However, there are also risks of under-reporting or of
deliberate distortion of data, especially fishing ground, catch and revenue related
information.
6.3.5.1 Harvest

The collection of data from all vessels within a fishery sector is sometimes needed, usually
for large-scale fisheries. Normally, each vessel is required to record its catch and
effort data for every trip in a specially designed logbook. Because it is a painstaking task,
usually only essential data are required. For various reasons, the data collected by this
method could be inaccurate, and thus validation from time to time by inspectors is important.
6.3.5.2 Post harvest
Data from post harvest operations are often used for obtaining information on landings,
biology, markets, costs and earnings. Where logsheets, landings records and market reports
are not available, reliable information can often only be obtained from processing factories.
Reports by the processors generally include quantities and value of fish received and the
resulting products. Additional information may include the origin of catch (fishing and
transport vessels) and size categories of fish.
Monitoring off-loaded catch in processed or whole round form requires considerable attention
to detail and much depends on the relationship between the fishery authority and vessel
captains or companies. It may be that sufficient trust has been developed to allow vessel or
company off-loading records to be used directly, perhaps with random spot checks.
In some circumstances, off-loading may proceed directly to a processing factory or cold store
(particularly by conveyor of bulk fish such as small pelagics, tuna etc.). Detailed landings can
still be recorded as long as each batch is marked with its source (vessel name and trip
identifier).
Most factories will maintain records of fish (by species, product type and size grade) that
enter processing directly or cold store. They will also maintain information on their output
and sales, including destination and price, although such data may be much more difficult or
impossible to obtain unless legally required. Data forms will need to be customised to the
type of processing and the factory management system.
6.3.5.3 Sale
Market transaction records may form a feasible way of collecting landings with complete
enumeration, particularly in large fleets of small-scale vessels that land in central locations.
All invoices, sales slips or sales tallies should be designed with care as to content, style and
availability to ensure completeness of coverage. Given the potential volume of paper work,
simplicity and brevity will often be the most important criteria.
The primary identifier on records should be the name of the vessel (including all carrier
vessels unloading from more distant fleets) that sold the catch, and the date or trip number,
since vessels may make more than one sale from one landing. Total weight by species or
commercial group, and price should be collected. Ideally, further data should be obtained on
fishing ground and level of fishing effort, although often this is not possible.
In similar fashion to logsheets and landings sheets, sales records should be prepared in
appropriately identified forms in multiple copies as required. Copies are likely to be required
for the market administration (if necessary), the seller, the buyer and the fishery authority.

General sales records, such as volume of sales and prices by product type, provide useful
information for bio-economic analyses and a source of data on catch and landings when all
other avenues for data collection are unavailable. Three information sources on general sales
are usually available: market, processing factory and export data. However, these data must
always be treated with care. The further away the data sources are from the primary source,
the more errors will be introduced, and the more details (e.g. fishing ground, fishing effort)
will be lost.
In addition to these, direct surveys of fishing companies may provide vital details upon which
overall fisheries management and administration can be based. Annual fisheries statistical
surveys can be voluntary or compulsory. If voluntary, responses will depend on the level of
co-operation between the private sector and the authorities. If compulsory, legislation is
required and can be drafted in various forms, such as Companies or Statistics Acts.
6.3.5.4 Trade
Trade data refers to information from customs or similar sources on trade. These data are
used in socio-economic indicators and, in some exceptional cases, support landings data.
Information on exports and imports is published in most countries. It is particularly important
where export or import taxes are payable, or export incentives given. Of course, export and
import data are of limited use in estimating the total production of fish unless there are also
means to establish the proportion of catch that is used in domestic consumption. However, in
some particular cases, the trade data are the main source for estimating landings (e.g. shark,
tunas). If trade data are used for validating or estimating landings, the quantities will usually
need converting to whole weight.
The lack of detail in export data can be a problem simply because of the form in which they
are collected. Export categories recorded by the authorities (not usually in co-operation with
fishery authorities) can mask much of the information required. Canned fish, frozen fish, fresh
fish, dried fish and fishmeal may be the only relevant categories for export authorities.
Together with accurate raising factors, these data can be used to estimate total fish production. This
method of estimation is fairly accurate when there is a small local market. However, unless
they are broken down by species and linked back directly to sources of data closer to the
harvest sector, they provide little value for fishery management purposes.
End

Questionnaire
A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of
gathering information from respondents. Although they are often designed for statistical analysis of the responses, this is
not always the case. The questionnaire was invented by Sir Francis Galton.
Questionnaires have advantages over some other types of surveys in that they are cheap, do not require as much effort
from the questioner as verbal or telephone surveys, and often have standardized answers that make it simple to compile
data. However, such standardized answers may frustrate users. Questionnaires are also sharply limited by the fact that
respondents must be able to read the questions and respond to them. Thus, for some demographic groups conducting a
survey by questionnaire may not be practical.

As a type of survey, questionnaires also have many of the same problems relating to question construction and wording
that exist in other types of opinion polls.

Types
A distinction can be made between questionnaires with questions that measure separate variables, and questionnaires
with questions that are aggregated into either a scale or index.[1] Questionnaires within the former category are commonly
part of surveys, whereas questionnaires in the latter category are commonly part of tests.
Questionnaires with questions that measure separate variables could, for instance, include questions on:

preferences (e.g. political party)

behaviors (e.g. food consumption)

facts (e.g. gender)

Questionnaires with questions that are aggregated into either a scale or index include, for instance, questions that
measure:

latent traits (e.g. personality traits such as extroversion)

attitudes (e.g. towards immigration)

an index (e.g. Social Economic Status)

Examples

A food frequency questionnaire (FFQ) is a questionnaire used to assess the type of diet people consume, and may
be used as a research instrument. Examples of usages include assessment of intake of vitamins or toxins such
as acrylamide.[2][3]

Questionnaire construction
Main article: Questionnaire construction

Question types
Usually, a questionnaire consists of a number of questions that the respondent has to answer in a set format. A distinction
is made between open-ended and closed-ended questions. An open-ended question asks the respondent to formulate his
own answer, whereas a closed-ended question has the respondent pick an answer from a given number of options. The
response options for a closed-ended question should be exhaustive and mutually exclusive. Four types of response
scales for closed-ended questions are distinguished:

Dichotomous, where the respondent has two options

Nominal-polytomous, where the respondent has more than two unordered options

Ordinal-polytomous, where the respondent has more than two ordered options

(Bounded) continuous, where the respondent is presented with a continuous scale

A respondent's answer to an open-ended question is coded into a response scale afterwards. An example of an open-ended question is a question where the testee has to complete a sentence (sentence completion item).[1]
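Purely as an illustration, the four response scales above could be represented in code roughly as follows (a Python sketch; the class and field names are our own invention, not part of any standard):

    from dataclasses import dataclass, field
    from enum import Enum, auto

    class ScaleType(Enum):
        DICHOTOMOUS = auto()         # two options, e.g. yes/no
        NOMINAL_POLYTOMOUS = auto()  # more than two unordered options
        ORDINAL_POLYTOMOUS = auto()  # more than two ordered options
        BOUNDED_CONTINUOUS = auto()  # a continuous scale, e.g. 0 to 100

    @dataclass
    class ClosedQuestion:
        text: str
        scale: ScaleType
        options: list = field(default_factory=list)  # left empty for continuous scales

    q1 = ClosedQuestion("Do you own a car?", ScaleType.DICHOTOMOUS, ["yes", "no"])
    q2 = ClosedQuestion("How satisfied are you?", ScaleType.ORDINAL_POLYTOMOUS,
                        ["very dissatisfied", "neutral", "very satisfied"])
    print(q1.scale.name, q1.options)

Making the scale type explicit like this is what later lets the analysis decide which statistics are valid (counts for nominal data, ranks for ordinal data, and so on).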

Question sequence
In general, questions should flow logically from one to the next. To achieve the best response rates, questions should flow
from the least sensitive to the most sensitive, from the factual and behavioural to the attitudinal, and from the more
general to the more specific.
There is typically a flow that should be followed when constructing a questionnaire with regard to the order in which
the questions are asked. The order is as follows:
1. Screens
2. Warm-ups
3. Transitions
4. Skips
5. Difficult
6. Changing Formula
Screens are used as a screening method to find out early whether or not someone should complete the
questionnaire. Warm-ups are simple to answer, help capture interest in the survey, and may not even pertain to research
objectives. Transition questions are used to make different areas flow well together. Skips include questions similar to "If
yes, then answer question 3. If no, then continue to question 5." Difficult questions come towards the end because the
respondent is by then in "response mode"; when completing an online questionnaire, a progress bar also lets
respondents know they are almost done, making them more willing to answer the remaining difficult questions.
Classification, or demographic, questions should be at the end because they typically feel like personal questions that
can make respondents uncomfortable and unwilling to finish the survey.[4]

Basic rules for questionnaire item construction

Use statements which are interpreted in the same way by members of different subpopulations of the population
of interest.

Use statements where persons that have different opinions or traits will give different answers.

Think of having an "open" answer category after a list of possible answers.

Use only one aspect of the construct you are interested in per item.

Use positive statements and avoid negatives or double negatives.

Do not make assumptions about the respondent.

Use clear and comprehensible wording, easily understandable for all educational levels

Use correct spelling, grammar and punctuation.

Avoid items that contain more than one question per item (e.g. Do you like strawberries and potatoes?).

Questions should not be biased or lead the participant towards an answer.

Questionnaire administration modes


Main modes of questionnaire administration are:[1]

Face-to-face questionnaire administration, where an interviewer presents the items orally.

Paper-and-pencil questionnaire administration, where the items are presented on paper.

Computerized questionnaire administration, where the items are presented on the computer.

Adaptive computerized questionnaire administration, where a selection of items is presented on the computer, and
based on the answers on those items, the computer selects following items optimized for the testee's estimated ability
or trait.

Concerns with questionnaires


While questionnaires are inexpensive, quick, and easy to analyze, often the questionnaire can have more problems than
benefits. For example, unlike interviews, the people conducting the research may never know if the respondent
understood the question that was being asked. Also, because the questions are so specific to what the researchers are
asking, the information gained can be minimal.[5] Often, questionnaires such as the Myers-Briggs Type Indicator, give too
few options to answer; respondents can answer either option but must choose only one response. Questionnaires also
produce very low return rates, whether they are mail or online questionnaires. The other problem associated with return
rates is that often the people that do return the questionnaire are those that have a really positive or a really negative
viewpoint and want their opinion heard. The people that are most likely unbiased either way typically don't respond
because it is not worth their time.

Some questionnaires have questions addressing the participant's gender. Seeing someone as male or female is
something we all do unconsciously; we don't give much importance to one's sex or gender, as most people use the terms
sex and gender interchangeably, unaware that they are not synonyms.[6] Gender is a term to exemplify the attributes that
a society or culture constitutes as masculine or feminine. Although your sex as male or female stands as a biological fact
that is identical in any culture, what that specific sex means in reference to your gender role as a woman or man in
society varies cross-culturally according to what things are considered to be masculine or feminine. The survey question
should really be "What is your sex?". Sex is traditionally split into two categories, which we typically don't have control over:
you were either born a girl or born a boy, and that's decided by nature.[7] There is also the intersex population, which is
disregarded in North American society as a sex. Not many questionnaires have a box for people that fall under
intersex.[6] These are some small things that can be misinterpreted or ignored in questionnaires.

Technology
There are a number of software solutions to create or process questionnaires. They include SPSS[8] and
QuickTapSurvey.[9] Dooblo's SurveyToGo software works offline and is used to conduct market research questionnaires
in the developing world where internet connectivity is limited.[10] Survey Monkey is an on-line solution.

End

Questionnaire construction
Questionnaire construction concerns the design of questionnaires: a questionnaire is a series of questions asked to
individuals to obtain statistically useful information about a given topic.[1] When properly constructed and responsibly
administered, questionnaires become a vital instrument by which statements can be made about specific groups, people,
or entire populations.

Questionnaires
Questionnaires are frequently used in quantitative marketing research and social research. They are a valuable method of
collecting a wide range of information from a large number of individuals, often referred to as respondents.
Adequate questionnaire construction is critical to the success of a survey. Inappropriate questions, incorrect ordering of
questions, incorrect scaling, or bad questionnaire format can make the survey valueless, as it may not accurately reflect
the views and opinions of the participants. A useful method for checking a questionnaire and making sure it is accurately
capturing the intended information is to pretest among a smaller subset of target respondents.

Questionnaire construction issues

Know how (and whether) you will use the results of your research before you start. If, for example, the results
won't influence your decision or you can't afford to implement the findings or the cost of the research outweighs its
usefulness, then save your time and money; don't bother doing the research.

The research objectives and frame of reference should be defined beforehand, including the questionnaire's
context of time, budget, manpower, intrusion and privacy.

How (randomly or not) and from where (your sampling frame) you select the respondents will determine whether
you will be able to generalize your findings to the larger population.

The nature of the expected responses should be defined and retained for interpretation of the responses, be it
preferences (of products or services), facts, beliefs, feelings, descriptions of past behavior, or standards of action.

Unneeded questions are an expense to the researcher and an unwelcome imposition on the respondents. All
questions should contribute to the objective(s) of the research.

If you "research backwards" and determine what you want to say in the report (i.e., Package A is more/less
preferred by X% of the sample vs. Package B, and y% compared to Package C) then even though you don't know the
exact answers yet, you will be certain to ask all the questions you need - and only the ones you need - in such a way
(metrics) to write your report.

The topics should fit the respondents' frame of reference. Their background may affect their interpretation of the
questions. Respondents should have enough information or expertise to answer the questions truthfully.

The type of scale, index, or typology to be used shall be determined.

The level of measurement you use will determine what you can do with and conclude from the data. If the
response option is yes/no then you will only know how many or what percent of your sample answered yes/no. You
cannot, however, conclude what the average respondent answered.

The types of questions (closed, multiple-choice, open) should fit the statistical data analysis techniques available
and your goals.

Questions and prepared responses to choose from should be neutral as to intended outcome. A biased question
or questionnaire encourages respondents to answer one way rather than another.[2] Even questions without bias may
leave respondents with expectations.

The order or "natural" grouping of questions is often relevant. Prior previous questions may bias later questions.

The wording should be kept simple: no technical or specialized words.

The meaning should be clear. Ambiguous words, equivocal sentence structures and negatives may cause
misunderstanding, possibly invalidating questionnaire results. Double negatives should be reworded as positives.

If a survey question actually contains more than one issue, the researcher will not know which one the respondent
is answering. Care should be taken to ask one question at a time.

The list of possible responses should be collectively exhaustive. Respondents should not find themselves with no
category that fits their situation. One solution is to use a final category for "other ________".

The possible responses should also be mutually exclusive. Categories should not overlap. Respondents should
not find themselves in more than one category, for example in both the "married" category and the "single" category; there may be a need for separate questions on marital status and living situation.

Writing style should be conversational, yet concise and accurate and appropriate to the target audience.
Many people will not answer personal or intimate questions. For this reason, questions about age, income, marital
status, etc. are generally placed at the end of the survey. This way, even if the respondent refuses to answer these
"personal" questions, he/she will have already answered the research questions.

"Loaded" questions evoke emotional responses and may skew results.


Presentation of the questions on the page (or computer screen) and use of white space, colors, pictures, charts,
or other graphics may affect respondent's interest or distract from the questions.

Numbering of questions may be helpful.


Questionnaires can be administered by research staff, by volunteers or self-administered by the respondents.
Clear, detailed instructions are needed in either case, matching the needs of each audience.

Methods of collection
Main article: Survey data collection
Method: Postal
- Low cost-per-response.
- Mail is subject to postal delays, which can be substantial when posting to remote areas or during unpredictable
events such as natural disasters.
- Survey participants can choose to remain anonymous.
- It is not labour-intensive.

Method: Telephone
- Questionnaires can be conducted swiftly.
- Rapport with respondents.
- High response rate.
- Be careful that your sampling frame (i.e., where you get the phone numbers from) doesn't skew your sample. For
example, if you select the phone numbers from a phone book, you are necessarily excluding people who only have a
mobile phone, those who requested an unpublished phone number, and individuals who have recently moved to the
area, because none of these people will be in the book.
- Telephone interviews are more prone to social desirability biases than other modes, so they are generally not
suitable for sensitive topics.[3][4]

Method: Electronic
- This method has a low ongoing cost, and on most surveys costs nothing for the participants and little for the
surveyors. However, initial set-up costs can be high for a customised design due to the effort required in developing
the back-end system or programming the questionnaire itself.
- Questionnaires can be conducted swiftly, without postal delays.
- Survey participants can choose to remain anonymous, though they risk being tracked through cookies, unique links
and other technology.
- It is not labour-intensive.
- Questions can be more detailed, as opposed to the limits of paper or telephones. {Respicius, Rwehumbiza (2010)}
- This method works well if your survey contains several branching questions. Help or instructions can be dynamically
displayed with the question as needed, and automatic sequencing means the computer can determine the next
question, rather than relying on respondents to correctly follow skip instructions.
- Not all of the sample may be able to access the electronic form, and therefore results may not be representative of
the target population.

Method: Personally administered
- Questions can be more detailed and obtain a lot of comprehensive information, as opposed to the limits of paper or
telephones. However, respondents are often limited by their working memory: specially designed visual cues (such as
prompt cards) may help in some cases.
- Rapport with respondents is generally higher than in other modes.
- Typically a higher response rate than other modes.
- Can be extremely expensive and time-consuming to train and maintain an interviewer panel. Each interview also has
a marginal cost associated with collecting the data.
- Usually a convenience (vs. a statistical or representative) sample, so you cannot generalize your results. However,
use of rigorous selection methods (e.g. those used by national statistical organisations) can result in a much more
representative sample.

Types of questions
1. Contingency questions - A question that is answered only if the respondent gives a particular response to a
previous question. This avoids asking questions of people that do not apply to them (for example, asking men if
they have ever been pregnant).

2. Matrix questions - Identical response categories are assigned to multiple questions. The questions are placed
one under the other, forming a matrix with response categories along the top and a list of questions down the
side. This is an efficient use of page space and respondents' time.
3. Closed ended questions - Respondents' answers are limited to a fixed set of responses. Most scales are closed
ended. Other types of closed ended questions include:

Yes/no questions - The respondent answers with a "yes" or a "no".

Multiple choice - The respondent has several options from which to choose.

Scaled questions - Responses are graded on a continuum (example : rate the appearance of the product
on a scale from 1 to 10, with 10 being the most preferred appearance). Examples of types of scales include
the Likert scale, semantic differential scale, and rank-order scale (See scale for a complete list of scaling
techniques.).

4. Open ended questions - No options or predefined categories are suggested. The respondent supplies their own
answer without being constrained by a fixed set of possible responses. Examples of types of open ended
questions include:

Completely unstructured - For example, "What is your opinion on questionnaires?"

Word association - Words are presented and the respondent mentions the first word that comes to mind.

Sentence completion - Respondents complete an incomplete sentence. For example, "The most
important consideration in my decision to buy a new house is . . ."

Story completion - Respondents complete an incomplete story.

Picture completion - Respondents fill in an empty conversation balloon.

Thematic apperception test - Respondents explain a picture or make up a story about what they think is
happening in the picture
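As a hypothetical sketch of how contingency questions can be implemented, the Python fragment below routes each respondent only to the questions that apply to them; all question texts and routing rules are invented for illustration:

    # Each entry maps a question id to its text and an answer-to-next-question
    # routing table; "any" is a catch-all route, and None ends the questionnaire.
    questions = {
        "q1": ("Have you ever been pregnant?", {"yes": "q2", "no": "q3"}),
        "q2": ("How many times?", {"any": "q3"}),
        "q3": ("What is your age group?", {"any": None}),
    }

    def run(answers):
        """Walk the questionnaire for a scripted set of answers."""
        current, asked = "q1", []
        while current:
            text, routes = questions[current]
            asked.append(text)
            answer = answers.get(current, "any")
            current = routes.get(answer, routes.get("any"))
        return asked

    print(run({"q1": "no"}))   # the contingency question q2 is skipped entirely

This is the same idea as the automatic sequencing mentioned for electronic questionnaires: the computer, not the respondent, follows the skip instructions.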

Question sequence

Questions should flow logically from one to the next.

The researcher must ensure that the answer to a question is not influenced by previous questions.

Questions should flow from the more general to the more specific.

Questions should flow from the least sensitive to the most sensitive.

Questions should flow from factual and behavioral questions to attitudinal and opinion questions.

Questions should flow from unaided to aided questions.

According to the three stage theory (also called the sandwich theory), initial questions should be screening and
rapport questions. Then in the second stage you ask all the product-specific questions. In the last stage you
ask demographic questions.

See also

Computer-assisted telephone interviewing

Computer-assisted personal interviewing

Automated computer telephone interviewing

Official statistics

Bureau of Labor Statistics

Questionnaires

Questionnaire construction

Paid survey

Data Mining

NIPO Software

DIY research

SPSS

Marketing

Marketing Research

Scale

Statistical survey

Quantitative marketing research

End

Questionnaires

Questionnaires provide a relatively cheap, quick and efficient way of obtaining large amounts
of information from a large sample of people. Data can be collected relatively quickly
because the researcher would not need to be present when the questionnaires were
completed. This is useful for large populations when interviews would be impractical.

However, a problem with questionnaires is that respondents may lie due to social desirability.
Most people want to present a positive image of themselves and so may lie or bend the truth
to look good, e.g. pupils would exaggerate revision duration.
Also the language of a questionnaire should be appropriate to the vocabulary of the group of
people being studied. For example, the researcher must change the language of questions to
match the social background of respondents' age / educational level / social class / ethnicity
etc.
Questionnaires can be an effective means of measuring the behaviour, attitudes,
preferences, opinions and intentions of relatively large numbers of subjects more cheaply and
quickly than other methods. An important distinction is between open-ended and closed
questions.
Often a questionnaire uses both open and closed questions to collect data. This is beneficial
as it means both quantitative and qualitative data can be obtained.
Closed Questions
Closed questions structure the answer by allowing only answers which fit into categories that
have been decided in advance by the researcher. Data that can be placed into a category is
called nominal data.
The options can be restricted to as few as two (e.g. 'yes' or 'no', 'male' or 'female'), or include
quite complex lists of alternatives from which the respondent can choose.
The respondent provides information which can be easily converted into quantitative data
(e.g. count the number of 'yes' or 'no' answers).
Closed questions can also provide ordinal data (which can be ranked). This often involves
using a rating scale to measure the strength of attitudes or emotions.
For example, strongly agree/agree/neutral/disagree/strongly disagree/unable to answer
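As an illustration (with invented answers, in Python), closed-question responses convert directly into quantitative data: counts for nominal data, and numeric codes for an ordinal rating scale such as the one above:

    from collections import Counter

    # Nominal data: count category frequencies.
    yes_no_answers = ["yes", "no", "yes", "yes", "no"]
    print(Counter(yes_no_answers))           # Counter({'yes': 3, 'no': 2})

    # Ordinal data: code the rating scale numerically, preserving rank order
    # (the "unable to answer" option would simply be excluded from scoring).
    LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
              "agree": 4, "strongly agree": 5}
    ratings = ["agree", "neutral", "strongly agree", "agree"]
    scores = [LIKERT[r] for r in ratings]    # [4, 3, 5, 4]
    print(sum(scores) / len(scores))         # mean attitude score, here 4.0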
Strengths

They can be economical. This means they can provide large amounts of research data for
relatively low costs.

The data can be quickly obtained as closed questions are easy to answer (usually just
ticking a box). This means a large sample size can be obtained which should be
representative of the population, which a researcher can then generalize from.

The questions are standardised. All respondents are asked exactly the same questions
in the same order. This means a questionnaire can be replicated easily to check for
reliability. Therefore, a second researcher can use the questionnaire to check that the
results are consistent.

Limitations

They lack detail. Because the responses are fixed, there is less scope for respondents to
supply answers which reflect their true feelings on a topic.

Open Questions
Open questions allow people to express what they think in their own words.
Open-ended questions enable the respondent to answer in as much detail as she likes in her
own words. For example: can you tell me how happy you feel right now?
Lawrence Kohlberg presented his subjects with moral dilemmas. One of the most famous
concerns a character called Heinz who is faced with the choice between watching his wife die
of cancer or stealing the only drug that could help her. Subjects are asked whether Heinz
should steal the drug or not and, more importantly, for their reasons why upholding or
breaking the law is right.
Open ended questions provide a rich source of qualitative information (i.e. more descriptive
than numerical) as there is no restriction to the response. However, they are harder to
analyse and make comparisons from.
If you want to gather more in-depth answers from your respondents, then open questions will
work better. These give no pre-set answer options and instead allow the respondents to put
down exactly what they like in their own words.
Open questions are often used for complex questions that cannot be answered in a few
simple categories but require more detail and discussion.
Strengths

Rich qualitative data is obtained as open questions allow the respondent to elaborate
on their answer. This means the researcher can find out why people hold a certain
attitude.


Limitations

Time consuming to collect the data. It takes longer for the respondent to complete
open questions. This is a problem as a smaller sample size may be obtained.

Time consuming to analyze the data. It takes longer for the researcher to analyze
qualitative data, as they have to read the answers and try to put them into categories by
coding, which is often subjective and difficult. However, Smith (1992) has devoted an
entire book to the issues of thematic content analysis that includes 14 different scoring
systems for open-ended questions.

Not suitable for less educated respondents as open questions require superior writing
skills and a better ability to express one's feelings verbally.

Designing a Questionnaire

With some questionnaires suffering from a response rate as low as 5% it is essential that a
questionnaire is well designed. There are a number of important factors in questionnaire
design.

Aims: Make sure that any questions asked address the aims of the research.

Length: The longer the questionnaire, the less likely people will complete it. Questions
should be short, clear and to the point; any unnecessary questions should be
omitted. Two sides of A4 is usually an ideal length.

Pilot Study: Run a small scale practice study to ensure people understand the
questions. People will also be able to give detailed honest feedback on the
questionnaire design.

Question order: Easy questions first progressing to more difficult questions.

Terminology: There should be a minimum of technical jargon.

Question Formation: Questions should be simple, to the point and easy to understand.

Presentation: Make sure it looks professional, and include clear and concise instructions. If
sent through the post, make sure the envelope does not signify junk mail.

Ethical Issues
The researcher must ensure that the information provided by the respondent is kept
confidential, e.g. name, address etc.
This means questionnaires are good for researching sensitive topics as respondents will be
more honest when they cannot be identified. Keeping the questionnaire confidential should
also reduce the likelihood of any psychological harm, such as embarrassment.
Participants must provide informed consent prior to completing the questionnaire, and must
be aware that they have the right to withdraw their information at any time during the
survey/ study.
Problems with Postal Questionnaires
The data might not be valid (i.e. truthful) as we can never be sure that the right person
actually completed the postal questionnaire.
Also, postal questionnaires may not be representative of the population they are studying.

This is because some questionnaires may be lost in the post reducing the sample size.

The questionnaire may be completed by someone who is not a member of the research
population.

Those with strong views on the questionnaire's subject are more likely to complete it
than those with no interest in it.

Benefits of a Pilot Study

A pilot study is a practice / small-scale study conducted before the main study.
It allows the researcher to try out the study with a few participants so that adjustments can
be made before the main study, so saving time and money.
It is important to conduct a questionnaire pilot study for the following reasons:

Check that respondents understand the terminology used in the questionnaire

Check that emotive questions have not been used as they make people defensive and
could invalidate their answers.

Check that leading questions have not been used as they could bias the respondent's
answer

Ensure the questionnaire can be completed in an appropriate time frame (i.e. it's not
too long).

End

Ten Steps Towards Designing a Questionnaire


Market research is all about reducing your business risks through the smart use of
information. It is often cited that 'knowledge is power', and through market research you will
have the power to discover new business opportunities, closely monitor your competitors,
effectively develop products and services, and target your customers in the most cost-efficient way.
However in order to get useful results you need to make sure you are asking the right
questions to the right people and in the right way. The following tips are designed to help you
avoid some of the common pitfalls when designing a market research questionnaire.
1. What are you trying to find out?
A good questionnaire is designed so that your results will tell you what you want to find out.
Start by writing down what you are trying to do in a few clear sentences, and design your
questionnaire around this.
2. How are you going to use the information?
There is no point conducting research if the results aren't going to be used - make sure you
know why you are asking the questions in the first place.
Make sure you cover everything you will need when it comes to analysing the answers. For example,
maybe you want to compare answers given by men and women; you can only do this if
you've remembered to record the gender of each respondent on each questionnaire.
3. Telephone, Postal, Web, Face-to-Face?
There are many methods used to ask questions, and each has its good and bad points. For
example, postal surveys can be cheap but responses can be low and can take a long time to
receive, face-to-face can be expensive but will generate the fullest responses, web surveys

can be cost-effective but hit and miss on response rates, and telephone can be costly, but
will often generate high response rates, give fast turnaround and will allow for probing.
4. Qualitative or Quantitative?
Do you want to focus on the numbers (e.g. 87% of respondents thought this), or are you more
interested in interpreting feedback from respondents to bring out common themes?
The method used will generally be determined by the subject matter you are researching and
the types of respondents you will be contacting.
5. Keep it short. In fact, quite often the shorter the better.
We are all busy, and as a general rule people are less likely to answer a long questionnaire
than a short one.
If you are going to be asking your customers to answer your questionnaire in-store, make
sure the interview is no longer than 10 minutes maximum (this will be about 10 to 15
questions).
If your questionnaire is too long, try to remove some questions. Read each question and ask,
"How am I going to use this information?" If you dont know, dont include it!
6. Use simple and direct language.
The questions must be clearly understood by the respondent. The wording of a question
should be simple and to the point. Do not use uncommon words or long sentences.
7. Start with something general.
Respondents will be put off and may even refuse to complete your questionnaire if you ask
questions that are too personal at the start (e.g. questions about financial matters, age, even
whether or not they are married).
8. Place the most important questions in the first half of the questionnaire.
Respondents sometimes only complete part of a questionnaire. By putting the most
important items near the beginning, the partially completed questionnaires will still contain
important information.
9. Leave enough space to record the answers.
If you are going to include questions which may require a long answer e.g. ask someone why
they do a particular thing, then make sure you leave enough room to write in the possible
answers. It sounds obvious, but it's so often overlooked!
10. Test your questionnaire on your colleagues.
No matter how much time and effort you put into designing your questionnaire, there is no
substitute for testing it. Complete some interviews with your colleagues BEFORE you ask the
real respondents. This will allow you to time your questionnaire, make any final changes, and
get feedback from your colleagues.

End

Ten Steps to Design a Questionnaire


Designing a questionnaire involves 10 main steps:
1. Write a study protocol

This involves getting acquainted with the subject, reviewing the literature, deciding on
objectives, formulating a hypothesis, and defining the main information needed to test the
hypothesis.
2. Draw a plan of analysis

This step determines how the information defined in step 1 should be analysed. The plan of
analysis should contain the measures of association and the statistical tests that you intend
to use. In addition, you should draw dummy tables with the information of interest. The plan
of analysis will help you to determine which type of results you want to obtain. An example of
a dummy table is shown below.
Exposure       | nr Cases (%) | Total | Attack Rate | RR (CI95%)
Tomato salad   |              |       |             |
Chicken breast |              |       |             |
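To make the dummy table concrete, here is a small worked sketch in Python with invented counts; it computes the attack rates and the risk ratio (RR) for one exposure:

    def attack_rate(cases, total):
        """Proportion of people in a group who became cases."""
        return cases / total

    def risk_ratio(cases_exp, total_exp, cases_unexp, total_unexp):
        """RR = attack rate among the exposed / attack rate among the unexposed."""
        return attack_rate(cases_exp, total_exp) / attack_rate(cases_unexp, total_unexp)

    # Invented counts: 40 of 60 who ate tomato salad fell ill; 10 of 50 who did not.
    ar_exposed = attack_rate(40, 60)      # 0.67
    ar_unexposed = attack_rate(10, 50)    # 0.20
    rr = risk_ratio(40, 60, 10, 50)       # 3.33
    print(f"AR exposed = {ar_exposed:.2f}, AR unexposed = {ar_unexposed:.2f}, RR = {rr:.2f}")

The confidence-interval column of the dummy table would be filled in from the same counts, using whichever method the plan of analysis specifies.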
3. Draw a list of the information needed

From the plan of analysis you can draw a list of the information you need to collect from
participants. In this step you should determine the type and format of variables needed.
4. Design different parts of the questionnaire

You can now start designing different parts of the questionnaire using this list of needed
information.
5. Write the questions

Knowing the education and occupation levels of the study population, their ethnic or migration
background, language knowledge and special sensitivities is crucial at this stage.
Please keep in mind that the questionnaire needs to be adapted to your study population.
Please see the "Format of Questions" section for more details.
6. Decide on the order of the questions asked

You should start from easy, general and factual to difficult, particular or abstract questions.
Please consider carefully where to place the most sensitive questions; they should rather be
placed in the middle or towards the end of the questionnaire. Make sure, however, not to put
the most important item last, since some people might not complete the interview.
7. Complete the questionnaire

Add instructions for the interviewers and definitions of key words for participants. Ensure a
smooth flow from one topic to the next (e.g. "and now I will ask you some questions about
your own health..."). Insert jumps between questions if some questions are only targeted at a
subgroup of the respondents.
8. Verify the content and style of the questions

Verify that each question addresses one of the objectives and that all your objectives are covered
by the questions asked. Delete questions that are not directly related to your objectives.
Make sure that each question is clear, unambiguous, simple and short. Check the logical
order and flow of the questions. Make sure the questionnaire is easy to read and has a clear
layout. Please see the Hints to Design a good Questionnaire section for more details.
9. Conduct a pilot study

You should always conduct a pilot study among the intended population before starting the
study. Please see the Piloting Questionnaires section for more details.
10. Refine your questionnaire
Depending on the results of the pilot study, you will need to amend the questionnaire before
the main survey starts.
End
Research Methodology

Key Concepts of the Scientific Method

There are several important aspects to research methodology. This is a summary of the key
concepts in scientific research and an attempt to erase some common misconceptions in science.
Steps of the scientific method are shaped like an hourglass - starting from general questions,
narrowing down to focus on one specific aspect, and designing research where we
can observe and analyze this aspect. Finally, we conclude and generalize to the real world.
Formulating a Research Problem
Researchers organize their research by formulating and defining a research problem. This
helps them focus the research process so that they can draw conclusions reflecting the real
world in the best possible way.

Hypothesis
In research, a hypothesis is a suggested explanation of a phenomenon.
A null hypothesis is a hypothesis which a researcher tries to disprove. Normally, the null
hypothesis represents the current view/explanation of an aspect of the world that the
researcher wants to challenge.
Research methodology involves the researcher providing an alternative hypothesis, a research
hypothesis, as an alternate way to explain the phenomenon.
The researcher tests the hypothesis to disprove the null hypothesis, not because he/she loves
the research hypothesis, but because it would mean coming closer to finding an answer to a
specific problem. The research hypothesis is often based on observations that evoke suspicion
that the null hypothesis is not always correct.
In the Stanley Milgram Experiment, the null hypothesis was that the personality determined
whether a person would hurt another person, while the research hypothesis was that the role,
instructions and orders were much more important in determining whether people would hurt
others.

Variables
A variable is something that changes. It changes according to different factors. Some
variables change easily, like the stock-exchange value, while other variables are almost
constant, like the name of someone. Researchers are often seeking to measure variables.
The variable can be a number, a name, or anything where the value can change.
An example of a variable is temperature. The temperature varies according to other variables
and factors. You can measure different temperatures inside and outside. If it is a sunny day,

chances are that the temperature will be higher than if it's cloudy. Another thing that can
make the temperature change is whether something has been done to manipulate the
temperature, like lighting a fire in the chimney.
In research, you typically define variables according to what you're measuring.
The independent variable is the variable which the researcher manipulates (the
cause), while the dependent variable is the effect (or assumed effect), dependent on the
independent variable. These variables are often stated in experimental research, in
a hypothesis, e.g. "what is the effect of personality on helping behavior?"
In explorative research methodology, e.g. in some qualitative research, the independent and
the dependent variables might not be identified beforehand. They might not be stated
because the researcher does not have a clear idea yet on what is really going on.
Confounding variables are variables with a significant effect on the dependent variable that
the researcher failed to control or eliminate - sometimes because the researcher is not aware
of the effect of the confounding variable. The key is to identify possible confounding variables
and somehow try to eliminate or control them.
Operationalization
Operationalization is to take a fuzzy concept (conceptual variables), such as 'helping
behavior', and try to measure it by specific observations, e.g. how likely are people to help a
stranger with problems.

See also:
Conceptual Variables
Choosing the Research Method

The selection of the research method is crucial for what conclusions you can make about a
phenomenon. It affects what you can say about the cause and factors influencing the
phenomenon.
It is also important to choose a research method which is within the limits of what the
researcher can do. Time, money, feasibility, ethics and availability to measure the
phenomenon correctly are examples of issues constraining the research.
Choosing the Measurement
Choosing the scientific measurements is also crucial for reaching the correct conclusion.
Some measurements might not reflect the real world, because they do not measure the
phenomenon as they should.
Results
Significance Test
To test a hypothesis, quantitative research uses significance tests to determine which
hypothesis is right.
The significance test can show whether the null hypothesis is more likely correct than the
research hypothesis. Research methodology in a number of areas like social sciences
depends heavily on significance tests.
A significance test may even drive the research process in a whole new direction, based on
the findings.
The t-test (also called the Student's T-Test) is one of many statistical significance tests, which
compares two supposedly equal sets of data to see if they really are alike or not. The t-test
helps the researcher conclude whether a hypothesis is supported or not.
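A minimal sketch of such a test, assuming the SciPy library is available (the two data sets are invented):

    from scipy import stats

    # Two supposedly comparable sets of measurements.
    group_a = [5.1, 4.9, 6.2, 5.8, 5.5]
    group_b = [4.2, 4.8, 4.4, 5.0, 4.3]

    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    # A small p-value (conventionally below 0.05) suggests the two sets are
    # unlikely to be alike, i.e. the null hypothesis of equal means is rejected.
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

The same pattern applies to other significance tests: compute a test statistic, then judge the null hypothesis by the associated p-value.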
Drawing Conclusions
Drawing a conclusion is based on several factors of the research process, not just because
the researcher got the expected result. It has to be based on the validity and reliability of the
measurement, how good the measurement was to reflect the real world and what more could
have affected the results.
The observations are often referred to as 'empirical evidence' and the logic/thinking leads to
the conclusions. Anyone should be able to check the observation and logic, to see if they also
reach the same conclusions.
Errors of the observations may stem from measurement-problems, misinterpretations,
unlikely random events etc.
A common error is to think that correlation implies a causal relationship. This is not
necessarily true.
Generalization

Generalization is the extent to which the research and its conclusions apply to
the real world. Good research does not always reflect the real world, since we can
only measure a small portion of the population at a time.

Validity and Reliability


Validity refers to the degree to which the research reflects the given research problem, while
reliability refers to how consistent a set of measurements is.

Types of validity:

External Validity

Population Validity

Ecological Validity

Internal Validity

Content Validity

Face Validity

Construct Validity

Convergent and Discriminant Validity

Test Validity

Criterion Validity

Concurrent Validity

Predictive Validity

A definition of reliability may be "yielding the same or compatible results in different clinical
experiments or statistical trials" (The Free Dictionary). Research methodology lacking
reliability cannot be trusted. Replication studies are a way to test reliability.
Types of Reliability:

Test-Retest Reliability

Interrater Reliability

Internal Consistency Reliability

Instrument Reliability

Statistical Reliability

Reproducibility

Both validity and reliability are important aspects of the research methodology to get better
explanations of the world.
Errors in Research
Logically, there are two types of errors when drawing conclusions in research:
A Type 1 error (a false positive) is when we accept the research hypothesis, i.e. reject the null
hypothesis, even though the null hypothesis is in fact correct.
A Type 2 error (a false negative) is when we reject the research hypothesis even though the null
hypothesis is wrong.
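
A small simulation can make the Type 1 error concrete. In the Python sketch below (illustrative only), both groups are drawn from the same population, so the null hypothesis is true by construction; at a 0.05 significance level, roughly 5% of tests will nevertheless come out "significant", and each of those is a Type 1 error:

    # Illustrative simulation of the Type 1 error rate. Both groups are
    # drawn from the SAME population, so the null hypothesis is true by
    # construction; about 5% of tests still come out "significant".
    import random
    from scipy import stats

    random.seed(1)
    alpha = 0.05
    trials = 1000
    false_positives = 0

    for _ in range(trials):
        a = [random.gauss(100, 15) for _ in range(30)]
        b = [random.gauss(100, 15) for _ in range(30)]
        if stats.ttest_ind(a, b).pvalue < alpha:
            false_positives += 1

    print(f"Type 1 error rate: {false_positives / trials:.3f} (expected about {alpha})")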
End

CHOOSING APPROPRIATE
RESEARCH METHODOLOGIES
It is vital you pick appropriate research methodologies and methods for your thesis - your research, after all, is what your
whole dissertation will rest on.

Choosing qualitative or quantitative research methodologies
Your research will dictate the kinds of research methodologies you use to underpin your work and the methods you
use in order to collect data. If you wish to collect quantitative data you are probably measuring variables and
verifying existing theories or hypotheses, or questioning them. Data is often used to generate new hypotheses
based on the results of data collected about different variables. One's colleagues are often much happier about
the ability to verify quantitative data, as many people feel safe only with numbers and statistics.
However, often collections of statistics and number crunching are not the answer to understanding meanings,
beliefs and experience, which are better understood through qualitative data. And quantitative data, it must be
remembered, are also collected in accordance with certain research vehicles and underlying research
questions. Even the production of numbers is guided by the kinds of questions asked of the subjects, so is
essentially subjective, although it appears less so than qualitative research data.

Qualitative research
This is carried out when we wish to understand meanings, and to look at, describe and understand experience, ideas,
beliefs and values: intangibles such as these. Example: an area of study that would benefit from qualitative
research would be that of students' learning styles and approaches to study, which are described and
understood subjectively by students.

Using quantitative and qualitative research methods together
This is a common approach and helps you to 'triangulate', i.e. to back up one set of findings from one method of
data collection, underpinned by one methodology, with another very different method underpinned by another
methodology - for example, you might give out a questionnaire (normally quantitative) to gather statistical data
about responses, and then back this up and research in more depth by interviewing (normally qualitative)
selected members of your questionnaire sample.

For further information see Chapter 8 of The Postgraduate Research Handbook by Gina Wisker.

Research methods in brief


Look at the very brief outlines of different methods below. Consider which you intend using and whether you
could also find it more useful to combine the quantitative with the qualitative. You will be familiar with many of
these methods from your work and from MA, MSc or BA study already.

Qualitative research methods

Interviews

Interviews enable face-to-face discussion with human subjects. If you are going to use interviews you will have
to decide whether you will take notes (distracting), tape the interview (accurate but time-consuming), rely on
your memory (foolish) or write in their answers (can lead to closed questioning for time's sake). If you decide to
interview you will need to draw up an interview schedule of questions which can be either closed or open
questions, or a mixture of these. Closed questions tend to be used for asking for and receiving answers about
fixed facts such as name, numbers, and so on. They do not require speculation and they tend to produce short
answers. With closed questions you could even give your interviewees a small selection of possible answers
from which to choose. If you do this you will be able to manage the data and quantify the responses quite
easily. The Household Survey and Census ask closed questions, and often market researchers who stop you in
the street do too. You might ask them to indicate how true for them a certain statement was felt to be, and this
too can provide both a closed response, and one which can be quantified (30% of those asked said they never
ate rice, while 45% said they did so regularly at least once a week... and so on).
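Tallying such closed responses into percentages is trivial to automate. The Python sketch below uses invented answers to a hypothetical rice question of the kind just described:

    # Sketch of tallying closed-question responses into percentages.
    # The answers below are invented for the example.
    from collections import Counter

    responses = ["never", "weekly", "weekly", "daily", "never",
                 "weekly", "daily", "weekly", "never", "weekly"]

    counts = Counter(responses)
    total = len(responses)
    for answer, n in counts.most_common():
        print(f"{n / total:.0%} of those asked answered '{answer}'")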
The problem with closed questions is that they limit the response the interviewee can give and do not enable
them to think deeply or test their real feelings or values.
If you ask open questions such as 'What do you think about the increase in traffic?' you could elicit an almost
endless number of responses. This would give you a very good idea of the variety of ideas and feelings people
have; it would enable them to think and talk for longer and so show their feelings and views more fully. But it is
very difficult to quantify these results. You will find that you will need to read all the comments through and to
categorise them after you have received them, or merely report them in their diversity and make general
statements, or pick out particular comments if they seem to fit your purpose. If you decide to use interviews:

Identify your sample.

Draw up a set of questions that seem appropriate to what you need to find out.

Do start with some basic closed questions (name etc.).

Don't ask leading questions.

Try them out with a colleague.

Pilot them, then refine the questions so that they are genuinely engaged with your research object.

Contact your interviewees and ask permission, explain the interview and its use.

Carry out interviews and keep notes/tape.

Transcribe.

Thematically analyse results and relate these findings to others from your other research methods.

For further information see Chapters 11 and 16 of The Postgraduate Research Handbook by Gina Wisker.

Quantitative research methods

Questionnaires

Questionnaires often seem a logical and easy option as a way of collecting information from people. They are
actually rather difficult to design and because of the frequency of their use in all contexts in the modern world,
the response rate is nearly always going to be a problem (low) unless you have ways of making people
complete them and hand them in on the spot (and this of course limits your sample, how long the questionnaire
can be and the kinds of questions asked). As with interviews, you can decide to use closed or open questions,
and can also offer respondents multiple choice questions from which to choose the statement which most
nearly describes their response to a statement or item. Their layout is an art form in itself because in poorly laid
out questionnaires respondents tend, for example, to repeat their ticking of boxes in the same pattern. If given a
choice of response on a scale 1-5, they will usually opt for the middle point, and often tend to miss out
subsections to questions. You need to take expert advice in setting up a questionnaire, ensure that all the
information about the respondents which you need is included and filled in, and ensure that you actually get
them returned. Expecting people to pay to return postal questionnaires is sheer folly, and drawing up a really
lengthy questionnaire will also inhibit response rates. You will need to ensure that questions are clear, and that
you have reliable ways of collecting and managing the data. Setting up a questionnaire that can be read by an
optical mark reader is an excellent idea if you wish to collect large numbers of responses and analyse them
statistically rather than reading each questionnaire and entering data manually.
You would find it useful to consult the range of full and excellent research books available. These will deal in
much greater depth with the reasons for, processes of holding, and processes of analysing data from the variety
of research methods available to you.
Developing and using a questionnaire - some tips:

Identify your research questions

Identify your sample

Draw up a list of appropriate questions and try them out with a colleague

Pilot them

Ensure questions are well laid out and it is clear how to 'score them' (tick, circle, delete)

Ensure questions are not leading and confusing

Code up the questionnaire so you can analyse it afterwards (see the sketch after this list)

Gain permission to use questionnaires from your sample

Ensure they put their names or numbers on so you can identify them but keep real names confidential

Hand them out/post them with reply paid envelopes

Ensure you collect in as many as possible

Follow up if you get a small return

Analyse statistically if possible and/or thematically
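
As referenced in the coding tip above, the following Python sketch shows one way coded questionnaire returns might be stored and analysed; the respondent numbers, questions and answers are all hypothetical, and Q1 is assumed to be coded on a 1-5 scale:

    # Sketch of analysing coded questionnaire returns (hypothetical data).
    from statistics import mean, stdev

    returns = {  # respondent number -> coded answers (real names kept confidential)
        101: {"Q1": 4, "Q2": "yes"},
        102: {"Q1": 2, "Q2": "no"},
        103: {"Q1": 5, "Q2": "yes"},
        104: {"Q1": 3, "Q2": "yes"},
    }

    sent_out = 20
    print(f"Response rate: {len(returns) / sent_out:.0%}")  # follow up if small

    q1_scores = [answers["Q1"] for answers in returns.values()]
    print(f"Q1: mean = {mean(q1_scores):.2f}, sd = {stdev(q1_scores):.2f}")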

Activity
What kind of research methods are you going to use? Are they mostly:

Quantitative, or qualitative, or a mixture of both?

What do you think your methods will enable you to discover?

What might they prevent you from discovering?

What kinds of research methods would be best suited to the kind of research you are undertaking and
the research questions you are pursuing?

What sort of problems do you envisage in setting up these methods?

What are their benefits?

What will you need to do to ensure they gather useful data?

End

Research Methods/Types of Research

1 TYPOLOGY OF RESEARCH
1.1 BASIC RESEARCH
1.2 APPLIED RESEARCH
1.3 QUANTITATIVE RESEARCH
1.4 QUALITATIVE RESEARCH

TYPOLOGY OF RESEARCH
Research can be classified in many different ways: on the basis of the methodology of research, the knowledge it creates,
the user group, the research problem it investigates, etc.

BASIC RESEARCH
Basic research is done for knowledge enhancement and does not have immediate commercial potential; it is done for
human welfare, animal welfare and the welfare of the plant kingdom. It is called basic, pure or fundamental research.
The main motivation is to expand man's knowledge, not to create or invent something, and there is no obvious
commercial value to the discoveries that result from basic research. Basic research lays down the foundation for
applied research. Dr. G. Smoot says people cannot foresee the future well enough to predict what is going to develop
from basic research. E.g.: How did the universe begin?

APPLIED RESEARCH
Applied research is designed to solve practical problems of the modern world, rather than to acquire knowledge for
knowledge's sake. The goal of applied research is to improve the human condition. It focuses on analysing and solving
social and real-life problems. This research is generally conducted on a large-scale basis and is expensive; as such, it is
often conducted with the support of some financing agency like a government, a public corporation, the World Bank,
UNICEF, the UGC, etc. According to Hunt, applied research is an investigation of ways of using scientific knowledge to
solve practical problems, for example: improving agricultural crop production, treating or curing a specific disease,
improving the energy efficiency of homes and offices, or improving communication among workers in large companies.
Applied research can be further classified as problem-oriented and problem-solving research.
Problem-oriented research: research done by an industry apex body to sort out problems faced by all the companies.
E.g.: the WTO does problem-oriented research for developing countries; in India, the Agricultural and Processed Food
Products Export Development Authority (APEDA) conducts regular research for the benefit of the agri-industry.
Problem-solving research: this type of research is done by an individual company for a problem faced by it. Marketing
research and market research are applied research. E.g.: if Videocon International conducts research to study customer
satisfaction levels, that is problem-solving research. In short, the main aim of applied research is to discover a
solution to some pressing practical problem.

QUANTITATIVE RESEARCH
This research is based on numeric figures or numbers. Quantitative research aims to measure a quantity or amount,
compare it with past records and project it for a future period. In social sciences, quantitative research refers to the
systematic empirical investigation of quantitative properties and phenomena and their relationships. The objective of
quantitative research is to develop and employ mathematical models, theories or hypotheses pertaining to phenomena.
The process of measurement is central to quantitative research because it provides the fundamental connection between
empirical observation and the mathematical expression of quantitative relationships. Statistics is the most widely used
branch of mathematics in quantitative research, and statistical methods are used extensively within fields such as
economics and commerce.
Quantitative research involves the use of structured questions, where the response options have been pre-determined
and a large number of respondents is involved. E.g.: the total sales of the soap industry, in terms of rupees crores and/or
quantity in terms of lakh tonnes for a particular year, say 2008, could be researched, compared with the past 5 years and
then projected for 2009.

QUALITATIVE RESEARCH
Qualitative research presents a non-quantitative type of analysis. It means collecting, analyzing and interpreting data by
observing what people do and say, and it refers to the meanings, definitions, characteristics, symbols, metaphors and
descriptions of things. Qualitative research is much more subjective and uses very different methods of collecting
information, mainly individual in-depth interviews and focus groups.
The nature of this type of research is exploratory and open-ended: a small number of people are interviewed in depth
and/or a relatively small number of focus groups are conducted. Qualitative research can be further classified into the
following types.
I. Phenomenology: a form of research in which the researcher attempts to understand how one or more individuals
experience a phenomenon. E.g.: we might interview 20 victims of the Bhopal tragedy.
II. Ethnography: this type of research focuses on describing the culture of a group of people. A culture is the shared
attributes, values, norms, practices, language and material things of a group of people. E.g.: the researcher might decide
to go and live with a tribe in the Andaman Islands and study their culture and educational practices.
III. Case study: a form of qualitative research that is focused on providing a detailed account of one or more cases.
E.g.: we may study a classroom that was given a new curriculum for technology use.
IV. Grounded theory: an inductive type of research, based or grounded in the observations of the data from which it was
developed; it uses a variety of data sources, including quantitative data, review of records, interviews, observation and
surveys.
V. Historical research: it allows one to discuss past and present events in the context of the present condition, and allows
one to reflect on and provide possible answers to current issues and problems. E.g.: the lending pattern of business in the
19th century.
In addition to the above, there is also descriptive research, and fundamental research, which is based on establishing
various theories.
Research is also classified into: 1. Descriptive research 2. Analytical research 3. Fundamental research 4. Conceptual
research 5. Empirical research 6. One-time or longitudinal research 7. Field-setting, laboratory or simulation research
8. Clinical or diagnostic research 9. Exploratory research 10. Historical research 11. Conclusion-oriented research
End

Population vs Sample
The main difference between a population and a sample has to do with how observations are assigned to the data
set.

A population includes all of the elements from a set of data.

A sample consists of one or more observations from the population.

Depending on the sampling method, a sample can have fewer observations than the population, the same
number of observations, or even more observations (when sampling with replacement, the same element can be
drawn repeatedly). More than one sample can be derived from the same population.
Other differences have to do with nomenclature, notation, and computations. For example,

A measurable characteristic of a population, such as a mean or standard deviation, is called
a parameter; but a measurable characteristic of a sample is called a statistic.

We will see in future lessons that the mean of a population is denoted by the symbol μ; but the mean of
a sample is denoted by the symbol x̄.

We will also learn in future lessons that the formula for the standard deviation of a population is different
from the formula for the standard deviation of a sample.
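
To make the distinction concrete, the Python sketch below draws a sample from a population and computes both kinds of mean and standard deviation. Python's statistics module provides separate population functions (pstdev divides by N) and sample functions (stdev divides by n - 1), mirroring the difference in formulas just mentioned; the data here are randomly generated for illustration:

    # Sketch of the parameter/statistic distinction (random illustration data).
    import random
    import statistics

    random.seed(0)
    population = [random.gauss(50, 10) for _ in range(10_000)]

    sample = random.sample(population, 100)          # fewer observations than N
    resample = random.choices(population, k=20_000)  # with replacement: can exceed N
    print(len(population), len(sample), len(resample))  # 10000 100 20000

    print(f"population mean (parameter) = {statistics.mean(population):.2f}")
    print(f"sample mean (statistic)     = {statistics.mean(sample):.2f}")
    print(f"population sd = {statistics.pstdev(population):.2f}")  # divides by N
    print(f"sample sd     = {statistics.stdev(sample):.2f}")       # divides by n - 1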

End

Structured Face-to-Face Interviews

Advantages. Face-to-face interviews may be quicker to conduct than questionnaire surveys because it is
not necessary to add time to account for mail delivery and for the respondents to turn their attention to the
questionnaire. A major advantage is that they allow more opportunity to assess the respondent's
understanding and interpretation of the questions and to clarify any confusion that arises about the
meaning of the question or the response. They also allow the opportunity to present material to
respondents and obtain their reactions. For example, face-to-face interviews have been used to assess the
meaning that non-literate subjects attach to symbols. For these reasons, face-to-face interviews are useful
for pilot-testing mail-out questionnaires.
Face-to-face interviews can be useful in dealing with certain situations that pose challenges for mail-out
questionnaires. They are generally better suited than mail or electronic questionnaires for respondents
whose reading and writing skills may not be adequate for the questions being asked. They may also be
helpful when sensitive information is being sought. Interviewers may be able to establish a relationship of
trust with the respondent and be better able to solicit answers to questions which respondents may
otherwise be reluctant to answer or to answer truthfully.

Where less is known about the way in which respondents think about an issue or about the range of
possible answers to a question, structured interviews create the opportunity for interviewers to ask
supplementary questions, when needed to obtain adequate answers.
Disadvantages. However, interviews also create the potential for an interviewer to intentionally or
unintentionally influence results and violate consistency in measurement. Survey respondents will be
sensitive to cues given by the interviewer's verbal and non-verbal behavior. As well, an interviewer may
have to ask additional questions or provide clarifications, and in doing so may unduly influence responses.
Consequently, adequate interviewer training is essential. Training is needed to ensure that interviewers
understand the ways in which they could inadvertently influence responses, the importance of not doing
so, and the proper techniques that can be used to assist the respondent or elicit needed information
without affecting the integrity of the interview.
Although they may be quicker to conduct than mail questionnaire surveys, face-to-face interviews are
costly due to the amount of staff time required to conduct interviews and to the cost of travel.
End

Personal Interview Survey


The Face-to-Face Method
A personal interview survey, also called a face-to-face survey, is a survey method that is
utilized when a specific target population is involved. The purpose of conducting a personal
interview survey is to explore the responses of the people in order to gather more and deeper
information.
Personal interview surveys are used to probe the answers of the respondents and at the same time, to observe the
behavior of the respondents, either individually or as a group. The personal interview method is preferred by researchers
for a couple of advantages. But before choosing this method for your own survey, you also have to read about the
disadvantages of conducting personal interview surveys. In addition, you must be able to understand the types of
personal or face-to-face surveys.

Advantages of Personal Interview Survey


1. High Response Rates
One of the main reasons why researchers achieve good response rates through this method is the face-to-face nature of
the personal interview survey. Unlike with administered questionnaires, people are more likely to readily answer live
questions about the subject (for instance, a product), simply because they can actually see, touch, feel or even taste the product.

2. Tolerance for Longer Interviews

If you wish to probe the answers of the respondents, you may do so using a personal interview approach. Open-ended
questions are better tolerated in interviews because respondents find it more comfortable to express their long answers
orally than in writing.

3. Better Observation of Behavior


Market researchers can benefit from a personal interview survey because it presents a greater opportunity to observe the
attitude and behavior of the respondents / consumers toward a product.

Disadvantages of Personal Interview Survey


1. High Costs
Face-to-face interview surveys are considerably more expensive than paper-and-pencil questionnaire surveys, online
surveys and other types of surveys.

2. Time-consuming
Personal interview surveys are not usually time-bounded, so the gathering of data from the respondents can take a long
time. Another thing that makes this method time-consuming is the need to travel and meet the respondents at one or
several locations.
End

Advantages and Disadvantages of Face-to-Face


Data Collection
As with any research project, data collection is incredibly important. However, several
aspects come into play in the data collection process. The three most crucial aspects
include: the cost of the selected data collection method; the accuracy of the data collected; and
the efficiency of data collection.
Despite the rise in popularity of online and mobile surveys, face-to-face (in-person)
interviews still remain a popular data collection method. A face-to-face interview method
provides advantages over other data collection methods. They include:

Accurate screening. Face-to-face interviews help with more accurate screening. The individual being
interviewed is unable to provide false information during screening questions such as gender, age, or race. It is
possible to get around screening questions in online and mobile surveys. Online and mobile surveys that offer
incentives may actually encourage answer falsification. Individuals may enter incorrect demographic information
so they are able to complete the survey and gain the incentive. The answers the individual provides may all be
truthful, but for the purpose of data analysis, the data will be inaccurate and misleading.

Capture verbal and non-verbal cues. A face-to-face interview captures verbal cues, but it also affords the
capture of non-verbal cues, including body language, which can indicate a level of discomfort with the questions.
Conversely, it can also indicate a level of enthusiasm for the topics being discussed in the interview. Consider an
employee job interview, for example: capturing non-verbal cues may make the difference in selecting a candidate
who is less skilled but displays a tremendous amount of enthusiasm for the position. Capturing non-verbal cues
is not possible in online or mobile surveys.
Keep focus. The interviewer is the one that has control over the interview and can keep the interviewee
focused and on track to completion. Online and mobile surveys are often completed during time convenient for
the respondent, but are often in the midst of other distractions such as texting, reading and answering emails,
video streaming, web surfing, social sharing, and more. Face-to-face interviews are in-the-moment, free from
technological distractions.
Capture emotions and behaviors. Face-to-face interviews can no doubt capture an interviewee's emotions
and behaviors. Just as they cannot capture verbal and non-verbal cues, online and mobile surveys also
cannot capture raw emotions and behavior.

As with any data collection method, face-to-face interviews also provide some disadvantages
over other data collection methods. They include:

Cost. Cost is a major disadvantage for face-to-face interviews. They require a staff of people to conduct the
interviews, which means there will be personnel costs. Personnel are the highest cost a business can incur. It's
difficult to keep costs low when personnel are needed.
Quality of data by interviewer. The quality of data you receive will often depend on the ability of the
interviewer. Some people have the natural ability to conduct an interview and gather data well. The likelihood of
the entire interviewing staff having those skills is low. Some interviewers may also have their own biases that
could impact the way they input responses. This is likely to happen in hot-topic opinion polls.
Manual data entry. If the interview is administered on paper, the data collected will need to be entered
manually, or scanned if a scannable interview questionnaire is created. Data entry and scanning
of paper questionnaires can significantly increase the cost of the project. A staff of data entry personnel will need
to be hired, and data entry can prolong the analysis process. Mobile surveys on iPads, tablets, or other
mobile devices can cut down on manual data entry costs, and the information is immediately ready for analysis.
Limit sample size. The size of the sample is limited to the size of your interviewing staff, the area in which
the interviews are conducted, and the number of qualified respondents within that area. It may be necessary to
conduct several interviews over multiple areas, which again can increase costs.

End

Advantages

You have unlimited control over the setting and the environment of the interview.

You are able to have prepared notes, a resume, and your cover letter in front of you to reference at any time.

You have the comfort of familiar surroundings.

You do not have to travel to the interview.

Disadvantages

You cannot see or respond to the interviewer's non-verbal cues, which are often important in interpreting how to respond properly.

The interviewer cannot see or respond to your non-verbal cues, which limits the capacity to demonstrate your interpersonal skills.

You have to sell yourself using only words and the tone of your voice.
End

What are the advantages and


disadvantages of telephone
interviews?
A disadvantage of using this method would be that the interviewee
could end the interview without warning or explanation, simply by
hanging up the phone. This is understandable given the
numerous marketing calls people are bombarded with on a daily
basis. To minimize this type of non-response problem, it would be
advisable to call the interviewee ahead of time to request
involvement in the survey, giving an estimated idea of how long the
interview would last and setting up a mutually convenient time. In
doing this, interviewees are shown a courtesy which they tend to
appreciate, making them more likely to cooperate.
What are the advantages of telephone interviewing? There are a
number of advantages of conducting employment interviews by
telephone:

Telephone interviews are simpler to arrange, and the process itself takes much less time than face-to-face interview sessions.

When using this method as an initial screening process, the cost of interviewing a large number of candidates is much lower than if they were interviewed in person.

Telephone interviewing also cuts costs when candidates live far away, since most businesses reimburse interviewee travel expenses. Using the telephone to screen out unsuitable candidates can greatly reduce these costs.

This format is an ideal way to assess a candidate's telephone manner. This is particularly helpful if the job requires telephone communication skills or is heavily customer-service based.

For automated interviews, the list of questions can be completely standardized. This facilitates more objective decisions based entirely on core criteria, removing personal perceptions or biases from the process.
Are there any disadvantages to telephone interviewing? Although
telephone interviews can be very useful, there are limitations. These
include:

Candidates may be unfamiliar with the format or uncomfortable using the telephone, which could make them nervous and/or provoke uncharacteristic responses.

It is difficult to make a thorough assessment of a candidate over the telephone. Non-verbal behavior or body language, both of which are important in forming an opinion of people, cannot be gauged over the telephone.

Telephone interview candidates learn less about your business than those who visit your premises and meet potential colleagues in person. The on-site experience helps candidates decide whether they wish to pursue the interviewing process. It is important to remember that the recruitment process works both ways, providing an opportunity for candidates to assess your business as it allows you to assess them.
End

Telephone Survey
A telephone survey is one of the survey methods used in collecting data either
from the general population or from a specific target population. Telephone
numbers are utilized by trained interviewers to contact and gather information
from possible respondents.
The telephone survey approach is usually utilized when there is a need to collect information via
public opinion polling. In other words, phone surveys are ideal for data gathering that takes anyone
from the general population as a potential respondent. This means that the contacted people
become included in the sample once they agree to participate in the phone survey.
Let us see the different advantages and disadvantages of the telephone survey method.

Advantages of Telephone Survey


1. High Accessibility
Market researchers can benefit from conducting a telephone survey because of the large scale
accessibility associated with it. Over 95% of the American population has a phone at their respective
homes. People who do not have access to the Internet such as those who live in remote areas can
still become respondents through their telephones.

2. Good Quality Control


Trained interviewers can ask the questions to the respondents in a uniform manner, promoting
accuracy and precision in eliciting responses. The phone interviews are also recorded, which means
that the analyst has an opportunity to observe and analyze the behavior or attitude of the
respondents toward controversial issues (e.g. state disputes, preferred presidential candidates, etc.)
or new concepts (new products, laws to be passed, etc.).

3. Anonymous Respondents
The telephone survey approach provides perhaps the highest level of anonymity for respondents
who wish to keep their opinions confidential. This facilitates accuracy in responses, especially on
controversial topics.

4. Quick Data Processing and Handling


The emergence of the computer-assisted telephone interviewing or CATI has led to a faster manner
of processing, handling and storing the data gathered from phone interviews. Both real-time data
and past data can be rapidly analyzed using CATI.

Disadvantages of Telephone Survey


1. Time-Constrained Interviews
Since telephone surveys may interrupt the personal time of the respondents, interviews via phone
should be conducted no longer than 15 minutes. This may call for a single open-ended question needing
a lengthy answer to be changed into a few closed-ended questions.

2. Hard-to-Reach Respondents
Many people use call screening to accept only calls that they are expecting. These people include
credit-challenged ones who screen not only the calls from their creditors, but also those calls from
unknown numbers. Also, extremely busy people often screen calls to accept only those from their
business partners or family members and significant others.

3. Unseen Product
In market research, it is more ideal to conduct a face-to-face interview survey rather than a
telephone survey because better responses can be elicited when the participants could see, feel or
taste the product.

End

Advantages and Disadvantages of Online Surveys


Over the past decade, the use of online methods for market research has skyrocketed. Due to
ever-increasing technological advances, it has become possible for do-it-yourself researchers to design,
conduct and analyze their own surveys for literally a fraction of the cost and time it would have taken in
the past.
But are there any drawbacks compared to traditional methods (such as mail, telephone and personal
interviewing)? Today I'll provide a list of several main advantages and disadvantages of conducting
market research surveys over the internet. While the choice of mode is entirely dependent on your
specific topic, purpose and goals, internet questionnaires are a great option in many instances.
Advantages

Low costs. Due to drastically lower overhead, collecting data does not have to cost you thousands of dollars.

Automation and real-time access. Respondents input their own data, and it is automatically stored electronically. Analysis thus becomes easier and can be streamlined, and is available immediately.

Less time. Rapid deployment and return times are possible with online surveys that cannot be attained by traditional methods. If you have bad contact information for some respondents, you'll know it almost right after you've sent out your surveys.

Convenience for respondents. They can answer questions on their schedule, at their pace, and can even start a survey at one time, stop, and complete it later.

Design flexibility. Surveys can be programmed even if they are very complex. Intricate skip patterns and logic can be employed seamlessly (a sketch of such skip logic follows this list). You can also require that respondents provide only one response to single-choice questions, which cuts down on error.

No interviewer. Respondents may be more willing to share personal information because they're not disclosing it directly to another person. Interviewers can also influence responses in some cases.
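
As promised in the design-flexibility point above, here is a toy Python sketch of skip-pattern logic: which questions are shown depends on earlier answers. All question wording is hypothetical, and real survey tools implement this declaratively rather than in code:

    # Toy sketch of survey skip logic: question flow depends on answers.
    def run_survey(ask):
        answers = {"owns_car": ask("Do you own a car? (yes/no) ")}
        if answers["owns_car"] == "yes":
            answers["fuel"] = ask("Is it petrol, diesel or electric? ")
        else:
            # Skip pattern: car questions are never shown to non-owners.
            answers["transport"] = ask("How do you usually travel? ")
        answers["age"] = ask("What is your age? ")
        return answers

    if __name__ == "__main__":
        print(run_survey(input))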
Disadvantages

Limited sampling and respondent availability. Certain populations are less likely to have internet access and to respond to online questionnaires. It is also harder to draw probability samples based on e-mail addresses or website visitations.

Possible cooperation problems. Although online surveys in many fields can attain response rates equal to or slightly higher than those of traditional modes, internet users today are constantly bombarded by messages and can easily delete your advances.

No interviewer. A lack of a trained interviewer to clarify and probe can possibly lead to less reliable data.
Though the list is not exhaustive, you can see that the benefits may outweigh the drawbacks for
researchers in most situations, especially for shorter, simpler projects. Get started creating your own
online surveys today with a free trial from Cvent!

End

Advantages of an online survey


Design, distribution, completion and data collection are done through an online (web based)
application. One of the advantages of an online survey tool is also automatic processing of the
surveys.

Low demands on time in online survey


The speed of design and survey processing are typical and desirable attributes. Within a short
time frame anybody, including an inexperienced person, can manage to design a survey,
distribute it and receive responses. Everything takes place in real time.

Preliminary results and questionnaire analysis

Online systems for designing surveys constantly process respondents' answers and
therefore allow the researcher to immediately browse and analyse the results obtained.
Automatic processing of responses, which every modern survey application can efficiently
transfer into graphs and charts, saves money, especially for less experienced users. It also
saves researchers a lot of time and errors.

Multimedia in questionnaires
Applying and using multimedia in an online questionnaire is very simple. Thanks to
information technology and common computer programs (such as web browsers), multimedia
elements (pictures, animation, video, music, etc.) can be easily incorporated into online
surveys.

Survey processing costs

Online surveys are the cheapest surveying method. There are no costs for printing the
questionnaires, no costs for interviewers or telephone interviews, and none of the other
complicated procedures associated with classic answer collection.

You also don't have to pay any workers to do the primary data processing (that is, to
unify and summarize the individual data obtained from the questionnaires).

There are no costs for workers who manually transcribe the data and draw charts with the
answers, for example in Excel. Online forms and their automated saving also eliminate the
possibility of human errors during manual transcription.

Distribution and answers collection

Thanks to the online environment, distribution of the questionnaires is easy and fast.
All you have to do is e-mail the invitations to complete a survey, post a link on your website
or personal blog, or use social networks.

Options for distributing questionnaires online are plentiful. You can use paid PPC
advertising, purchase respondents in various panels, etc. After distribution you can just sit
back and wait to collect answers from the addressed respondents.

Export the answers into Excel, SPSS, etc.

Thanks to machine-readable processing, the answers can be converted into formats intended for
processing in statistical software or charting programs (such as Microsoft Excel, OpenOffice Calc,
SPSS, Statistica, etc.). These programs are then used for secondary analysis of the data obtained.
Typical formats of exported files are XLS, CSV (for SPSS), HTML and XML.
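
As a rough illustration of such an export, the Python sketch below writes a handful of hypothetical coded answers to a CSV file using only the standard library; real survey tools perform the equivalent step automatically:

    # Sketch of exporting coded answers to CSV for secondary analysis
    # in Excel, SPSS, etc. Field names and data are hypothetical.
    import csv

    answers = [
        {"respondent": 1, "q1": 5, "q2": "yes"},
        {"respondent": 2, "q1": 3, "q2": "no"},
    ]

    with open("survey_export.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["respondent", "q1", "q2"])
        writer.writeheader()
        writer.writerows(answers)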

Simple to manage
With an interactive environment, online surveys are easy to manage even for beginners with
no experience. Respondents don't have to go anywhere; they fill in the questionnaire whenever
they want to and have a greater sense of anonymity. All this leads to higher willingness and a
higher response rate.

Disadvantages of online surveys /


questionnaires
The identity of the respondent
Online surveys are distributed on the Internet, mainly by e-mail. They are then filled in by
respondents using various computer or mobile devices without any control from the
interviewers. It is very difficult to check whether the survey is completed by the right person.
However, some tools such as Survio offer the possibility of identification.

Response rate
If your online surveys are not distributed properly and they don't reach the right target group,
their response rate will be low. Therefore think about your distribution channels. Are they
powerful enough to get you the number of responses you need? A significant element affecting
the response rate of any questionnaire is the motivation to complete it: gifts, money, rewards,
vouchers, etc.

Questions complexity issues


Questions that are too complex, complicated or contextual should not be used. The respondent
may not understand them properly, and because he/she has nobody to ask for a thorough
explanation, he/she may become demotivated and not complete the questionnaire. The solution is to
ask simple, clear and intelligible questions.

End

Online Surveys
One of the most widely utilized survey methods, an online survey is the
systematic gathering of data from the target audience characterized by the
invitation of the respondents and the completion of the questionnaire over the
World Wide Web.
For the past few years, the Internet has been used by many companies for conducting all sorts of
studies all over the world. Whether it is market or scientific research, the online survey has been a
faster way of collecting data from the respondents compared to other survey methods, such as the
paper-and-pencil method and personal interviews. Other than this advantage, the web-based survey
also presents other pros and benefits for anyone who wishes to conduct a survey. However, one
should also consider the drawbacks and disadvantages of an online survey method.
See also: Web Survey Tools.

Advantages of Online Survey


1. Ease of Data Gathering
The Internet is a vast virtual world that connects all kinds of people from around the globe. For this
reason, a survey that requires a hundred or more respondents can be conducted faster via the
Internet. The survey questionnaire can be rapidly deployed and completed by the respondents,
especially if there's an incentive given after their participation.

2. Minimal Costs
Traditional survey methods often require you to spend thousands of dollars to achieve the optimal
results. On the other hand, studies show that conducting an Internet survey facilitates low-cost and
fast data collection from the target population. Sending email questionnaires and other online
questionnaires is more affordable than the face-to-face method.

3. Automation in Data Input and Handling


With online surveys, the respondents are able to answer the questionnaire by means of inputting
their answers while connected to the Internet. Then, the responses are automatically stored in a
survey database, providing hassle-free handling of data and a smaller possibility of data errors.

4. Increase in Response Rates


Online survey provides the highest level of convenience for the respondents because they can
answer the questionnaire according to their own pace, chosen time, and preferences.

5. Flexibility of Design
Complex types of surveys can be easily conducted through the Internet. The questionnaire may
include more than one type of response format, arranged in such a way that the respondents are not
discouraged by the changes in the manner in which they answer the questions.

Disadvantages of Online Survey

1. Absence of Interviewer
An online survey is not suitable for surveys which ask open-ended questions because there is no
trained interviewer to explore the answers of the respondents.

2. Inability to Reach Challenging Population


This method is not applicable for surveys that require respondents who do not have access to the
Internet. Some examples of these respondents include the elderly and people who reside in remote
areas.

3. Survey Fraud
Survey fraud is probably the heaviest disadvantage of an online survey. There are people who
answer online surveys for the sake of getting the incentive (usually in the form of money) after they
have completed the survey, not with a desire to contribute to the advancement of the study.
End

1. Closed-Ended Questions
Closed-ended questions limit the answers of the respondents to response options provided on the
questionnaire.

Advantages: time-efficient; responses are easy to code and interpret; ideal for quantitative type of research

Disadvantages: respondents are required to choose a response that does not exactly reflect their answer; the researcher cannot further explore the meaning of the responses

Some examples of closed-ended questions are:

a. Dichotomous or two-point questions (e.g. Yes or No, Unsatisfied or Satisfied)

b. Multiple choice questions (e.g. A, B, C or D)

c. Scaled questions that make use of rating scales such as the Likert scale (i.e. a type of five-point scale), three-point scales, semantic differential scales, and seven-point scales (a coding sketch follows below)
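
As a small illustration of how such scaled responses are coded for analysis (the coding sketch referenced above), the Python snippet below maps one common five-point Likert wording, assumed here purely for the example, onto the numbers 1-5:

    # Sketch of coding Likert-type responses as numbers for analysis.
    # The five labels below are one common wording, assumed here.
    LIKERT = {
        "strongly disagree": 1,
        "disagree": 2,
        "neutral": 3,
        "agree": 4,
        "strongly agree": 5,
    }

    raw = ["agree", "neutral", "strongly agree", "agree", "disagree"]
    coded = [LIKERT[r] for r in raw]
    print(coded)                    # [4, 3, 5, 4, 2]
    print(sum(coded) / len(coded))  # mean response: 3.6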
2. Open-Ended Questions

In open-ended questions, there are no predefined options or categories included. The participants should supply their own answers.

Advantages: participants can respond to the questions exactly as they would like to answer them; the researcher can investigate the meaning of the responses; ideal for qualitative type of research

Disadvantages: time-consuming; responses are difficult to code and interpret

Some examples of open-ended questions include:

a. Completely unstructured questions - openly ask the opinion or view of the respondent

b. Word association questions - the participant states the first word that pops into his mind once a series of words is presented

c. Thematic Apperception Test - a picture is presented to the respondent, which he explains from his own point of view

d. Sentence, story or picture completion - the respondent continues an incomplete sentence or story, or writes in empty conversation balloons in a picture

End

Open and Closed Questions


These are two types of questions you can use that are very different in character and usage.

Closed questions
Definition
There are two definitions that are used to describe closed questions. A common definition is:
A closed question can be answered with either a single word or a short phrase.
Thus 'How old are you?' and 'Where do you live?' are closed questions. A more limiting
definition that is sometimes used is:
A closed question can be answered with either 'yes' or 'no'.
By this definition 'Are you happy?' and 'Is that a knife I see before me?' are closed questions,
whilst 'What time is it?' and 'How old are you?' are not. This causes a problem of how to
classify the short-answer non-yes-or-no questions, which do not fit well with the definition for
open questions. A way of handling this is to define 'yes-no' as a sub-class of the short-answer
closed question.

Using closed questions


Closed questions have the following characteristics:
They give you facts.
They are easy to answer.
They are quick to answer.

They keep control of the conversation with the questioner.


This makes closed questions useful in the following situations:

Usage: As opening questions in a conversation, as it makes it easy for the other person to answer, and doesn't force them to reveal too much about themselves.
Example: "It's great weather, isn't it?" "Where do you live?" "What time is it?"

Usage: For testing their understanding (asking yes/no questions). This is also a great way to break into a long ramble.
Example: "So, you want to move into our apartment, with your own bedroom and bathroom -- true?"

Usage: For setting up a desired positive or negative frame of mind in them (asking successive questions with obvious answers of either yes or no).
Example: "Are you happy with your current supplier?" "Do they give you all that you need?" "Would you like to find a better supplier?"

Usage: For achieving closure of a persuasion (seeking yes to the big question).
Example: "If I can deliver this tomorrow, will you sign for it now?"

Note how you can turn any opinion into a closed question that forces a yes or no by adding tag
questions, such as "isn't it?", "don't you?" or "can't they?", to any statement.
The first word of a question sets up the dynamic of the closed question and signals the easy
answer ahead. Note how these are words like: do, would, are, will, if.

Open questions
Definition
An open question can be defined thus:
An open question is likely to receive a long answer.
Although any question can receive a long answer, open questions deliberately seek longer
answers, and are the opposite of closed questions.

Using open questions


Open questions have the following characteristics:
They ask the respondent to think and reflect.
They will give you opinions and feelings.
They hand control of the conversation to the respondent.
This makes open questions useful in the following situations:

Usage: As a follow-on from closed questions, to develop a conversation and open up someone who is rather quiet.
Example: "What did you do on your holidays?" "How do you keep focused on your work?"

Usage: To find out more about a person, their wants, needs, problems, and so on.
Example: "What's keeping you awake these days?" "Why is that so important to you?"

Usage: To get people to realize the extent of their problems (to which, of course, you have the solution).
Example: "I wonder what would happen if your customers complained even more?" "Rob Jones used to go out late. What happened to him?"

Usage: To get them to feel good about you by asking after their health or otherwise demonstrating human concern about them.
Example: "How have you been after your operation?" "You're looking down. What's up?"

Open questions begin with words such as: what, why, how, describe.
Using open questions can be scary, as they seem to hand the baton of control over to the other
person. However, well-placed questions do leave you in control as you steer their interest and
engage them where you want them.
When opening conversations, a good balance is around three closed questions to one open
question. The closed questions start the conversation and summarize progress, whilst the open
question gets the other person thinking and continuing to give you useful information about
them.
A neat trick is to get them to ask you open questions. This then gives you the floor to talk
about what you want. The way to achieve this is to intrigue them with an incomplete story or
benefit.

End
