
By Gurpreet Singh (http://gsbprogramming.blogspot.in/)

ARTIFICIAL INTELLIGENCE (BTCS-701)

Assignment 3

Very short questions


Q1: What are agents?


A: In artificial intelligence, an agent is an autonomous entity that observes its environment through sensors and acts upon it using actuators, directing its activity towards achieving goals (i.e. it is "rational", as defined in economics). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex: a reflex machine such as a thermostat is an intelligent agent, as is a human being, as is a community of human beings working together towards a goal.
Q2: What is predicate logic?
A: In mathematical logic, predicate logic is the generic term for symbolic formal systems such as first-order logic, second-order logic, many-sorted logic, or infinitary logic. These formal systems are distinguished from other systems in that their formulae contain variables which can be quantified. Two common quantifiers are the existential ("there exists") and universal ("for all") quantifiers. The variables could be elements in the universe under discussion, or perhaps relations or functions over that universe. For instance, an existential quantifier over a function symbol would be interpreted as the modifier "there is a function".
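For illustration, here are two such formulas written in LaTeX notation (generic textbook-style examples, not drawn from any particular text): the first quantifies over elements of the universe, the second over a function symbol, as described above.

    \forall x\, (\mathrm{Human}(x) \rightarrow \mathrm{Mortal}(x))   % "every human is mortal"
    \exists f\, \forall x\, (f(x) > x)                               % "there is a function whose value always exceeds its argument"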

Q3: What is certainty factor?


A: A certainty factor is an integer in the range from -100 (representing certain falsehood) to +100
(representing certain truth). The purpose of a certainty factor is to quantify the reliability of (or
degree of confidence in) a rule or proposition. A certainty factor of 0 represents neutral feelings,
'unknown' or 'don't know'.
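A minimal sketch in Python (the rule names and certainty values below are invented for illustration) of attaching certainty factors in the -100 to +100 range to rules and reading them back:

    # Sketch: certainty factors in the range -100..+100 attached to rules.
    # The rules and values are illustrative assumptions, not from any shell.
    rules = {
        "if fever and cough then flu": 70,   # fairly strong confidence
        "if headache then flu": 20,          # weak confidence
        "if rash then flu": -60,             # evidence against the conclusion
        "if sneezing then flu": 0,           # unknown / no opinion
    }

    def interpret(cf):
        """Map a certainty factor to a rough verbal reading."""
        if cf == 0:
            return "unknown"
        if cf > 0:
            return f"believed true (confidence {cf}/100)"
        return f"believed false (confidence {abs(cf)}/100)"

    for rule, cf in rules.items():
        print(f"{rule}: {interpret(cf)}")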

Q4: Define supervised learning.


A: Supervised learning is the machine learning task of inferring a function from labeled training
data. The training data consist of a set of training examples. In supervised learning, each example
is a pair consisting of an input object (typically a vector) and a desired output value (also called
the supervisory signal).
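A minimal sketch in Python (toy data and a simple 1-nearest-neighbour rule, chosen purely for illustration) of inferring outputs from labeled input/output pairs:

    # Each training example is a pair (input vector, desired output label).
    training_data = [
        ((1.0, 1.2), "small"),
        ((1.1, 0.9), "small"),
        ((5.0, 5.3), "large"),
        ((4.8, 5.1), "large"),
    ]

    def predict(x):
        """Return the label of the training example closest to x (1-NN)."""
        def dist2(a, b):
            return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        _, label = min(training_data, key=lambda pair: dist2(pair[0], x))
        return label

    print(predict((1.05, 1.0)))  # -> "small"
    print(predict((4.9, 5.0)))   # -> "large"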

Q5: Define branch and bound algorithm.


A: Branch and bound (BB or B&B) is an algorithm design paradigm for discrete and combinatorial optimization problems, as well as general real-valued problems. A branch-and-bound algorithm
consists of a systematic enumeration of candidate solutions by means of state space search: the
set of candidate solutions is thought of as forming a rooted tree with the full set at the root. The
algorithm explores branches of this tree, which represent subsets of the solution set. Before


enumerating the candidate solutions of a branch, the branch is checked against upper and lower
estimated bounds on the optimal solution, and is discarded if it cannot produce a better solution
than the best one found so far by the algorithm.

The method was first proposed by A. H. Land and A. G. Doig in 1960 for discrete programming,
and has become the most commonly used tool for solving NP-hard optimization problems. The
name "branch and bound" first occurred in the work of Little et al. on the traveling salesman
problem.
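A minimal sketch of the idea in Python, applied to a small 0/1 knapsack instance (the item values and weights are made up, and the bound used is the standard fractional-knapsack relaxation, given here only as one possible choice):

    # Branch and bound for a small 0/1 knapsack instance (illustrative data).
    # Branch on "take item i" vs "skip item i"; bound each branch with the
    # fractional relaxation and prune branches that cannot beat the best so far.
    items = [(60, 10), (100, 20), (120, 30)]   # (value, weight), assumed data
    capacity = 50

    # Sort by value density so the fractional bound is easy to compute.
    items.sort(key=lambda vw: vw[0] / vw[1], reverse=True)

    def bound(i, value, weight):
        """Upper bound: greedily add remaining items, allowing a fraction of one."""
        for v, w in items[i:]:
            if weight + w <= capacity:
                value += v
                weight += w
            else:
                value += v * (capacity - weight) / w
                break
        return value

    best = 0

    def search(i, value, weight):
        global best
        if weight > capacity:
            return                      # infeasible branch
        best = max(best, value)
        if i == len(items):
            return
        if bound(i, value, weight) <= best:
            return                      # prune: cannot improve on best so far
        search(i + 1, value + items[i][0], weight + items[i][1])  # take item i
        search(i + 1, value, weight)                              # skip item i

    search(0, 0, 0)
    print(best)   # 220 for this instance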

Q6: What is LISP?

A: Lisp (historically, LISP) is a family of computer programming languages with a long history and
a distinctive, fully parenthesized Polish prefix notation. The name LISP derives from "LISt
Processing". Linked lists are one of Lisp language's major data structures, and Lisp source code is
itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure,
giving rise to the macro systems that allow programmers to create new syntax or new domain-
specific languages embedded in Lisp.

Q7: What are facts?

A: A fact is something that has really occurred or is actually the case. The usual test for a
statement of fact is verifiability, that is, whether it can be demonstrated to correspond to
experience. Standard reference works are often used to check facts. Scientific facts are verified
by repeatable careful observation or measurement (by experiments or other means).

Q8: What is probability reasoning?

A: The aim of a probabilistic logic (also probability logic and probabilistic reasoning) is to combine
the capacity of probability theory to handle uncertainty with the capacity of deductive logic to
exploit structure. The result is a richer and more expressive formalism with a broad range of
possible application areas. Probabilistic logics attempt to find a natural extension of traditional
logic truth tables: the results they define are derived through probabilistic expressions instead.
A difficulty with probabilistic logics is that they tend to multiply the computational complexities
of their probabilistic and logical components. Other difficulties include the possibility of counter-
intuitive results, such as those of Dempster-Shafer theory. The need to deal with a broad variety
of contexts and issues has led to many different proposals.
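As a very rough illustration of extending truth tables to probabilities (the independence assumption below is a simplification made for this sketch; real probabilistic logics do not require it):

    # Illustrative only: replacing two-valued truth tables with probabilities.
    def p_not(pa):
        return 1 - pa

    def p_and(pa, pb):
        return pa * pb                 # valid only if A and B are independent

    def p_or(pa, pb):
        return pa + pb - pa * pb       # likewise assumes independence

    pa, pb = 0.7, 0.4
    print(p_not(pa))      # 0.3
    print(p_and(pa, pb))  # 0.28
    print(p_or(pa, pb))   # 0.82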


Q9: What is reasoning under uncertainty?

A: Three ways of handling uncertainty:

Probabilistic reasoning
Certainty factors
Dempster-Shafer theory

Q10: Give examples for heuristic search

A: Common examples of heuristic search include greedy best-first search, A* search, hill climbing, simulated annealing, and beam search. Each of these uses a heuristic evaluation function to estimate how promising a node or candidate is, instead of exploring the search space exhaustively.

Short Questions

Q1: Explain Best First Search with example.

A: Best-first search is a search algorithm which explores a graph by expanding the most promising node chosen according to a specified rule.

Judea Pearl described best-first search as estimating the promise of node n by a "heuristic evaluation function f(n) which, in general, may depend on the description of n, the description of the goal, the information gathered by the search up to that point, and most important, on any extra knowledge about the problem domain."


Some authors have used "best-first search" to refer specifically to a search with a heuristic that attempts to predict how close the end of a path is to a solution, so that paths which are judged to be closer to a solution are extended first. This specific type of search is called greedy best-first search.

A* uses a best-first search and finds a least-cost path from a given initial node
to one goal node (out of one or more possible goals). As A* traverses the
graph, it follows a path of the lowest expected total cost or distance, keeping
a sorted priority queue of alternate path segments along the way.

It uses a knowledge-plus-heuristic cost function of node x (usually denoted f(x)) to determine the order in which the search visits nodes in the tree. The cost function is the sum of two functions, f(x) = g(x) + h(x):

the past path-cost function g(x), which is the known distance from the starting node to the current node x
a future path-cost function h(x), which is an admissible "heuristic estimate" of the distance from x to the goal.

Q2: Explain and prove Bayes Theorem. What is conditional probability?

A: In probability theory and statistics, Bayes' theorem (alternatively Bayes' law or Bayes' rule)
relates current probability to prior probability. It is important in the mathematical manipulation
of conditional probabilities.

When applied, the probabilities involved in Bayes' theorem may have different interpretations. In one
of these interpretations, the theorem is used directly as part of a particular approach to statistical
inference. In particular, with the Bayesian interpretation of probability, the theorem expresses
how a subjective degree of belief should rationally change to account for evidence: this is
Bayesian inference, which is fundamental to Bayesian statistics. However, Bayes' theorem has
applications in a wide range of calculations involving probabilities, not just in Bayesian inference.


Bayes' theorem is stated mathematically as the following equation:

P(A|B) = P(B|A) P(A) / P(B)

where A and B are events and P(B) is not zero.

P(A) and P(B) are the probabilities of observing A and B independently of each other.
P(A|B), a conditional probability, is the probability of A given that B is true.
P(B|A) is the probability of B given that A is true.

The theorem follows directly from the definition of conditional probability: P(A|B) P(B) = P(A and B) = P(B|A) P(A), and dividing both sides by P(B) gives the equation above.

In probability theory, a conditional probability measures the probability of an event given that (by
assumption, presumption, assertion or evidence) another event has occurred.

For example, the probability that any given person has a cough on any given day may be only 5%. But if
we know or assume that the person has a cold, then they are much more likely to be coughing. The conditional probability of coughing given that the person has a cold might be a much higher 75%.

If the event of interest is A and the event B is known or assumed to have occurred, "the conditional
probability of A given B", or "the probability of A under the condition B", is usually written as P(A|B), or
sometimes PB(A).

The concept of conditional probability is one of the most fundamental and one of the most important
concepts in probability theory. But conditional probabilities can be quite slippery and require careful
interpretation. For example, there need not be a causal or temporal relationship between A and B.

In general P(A|B) is not equal to P(B|A). For example, if you have cancer you might have a 90% chance of testing positive for cancer, but if you test positive for cancer you might have only a 10% chance of actually having cancer, because cancer is very rare. Falsely equating the two probabilities causes various errors of
reasoning such as the base rate fallacy. Conditional probabilities can be correctly reversed using Bayes'
Theorem.
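As a worked illustration of that reversal in Python (the 90% sensitivity comes from the example above; the prevalence and false-positive rate are assumed numbers):

    # P(positive | cancer) = 0.90 as in the example; the other two are assumed.
    p_cancer = 0.01                # assumed prior P(cancer)
    p_pos_given_cancer = 0.90      # from the example
    p_pos_given_healthy = 0.08     # assumed false-positive rate

    # Total probability of testing positive.
    p_pos = (p_pos_given_cancer * p_cancer
             + p_pos_given_healthy * (1 - p_cancer))

    # Bayes' theorem: P(cancer | positive) = P(positive | cancer) * P(cancer) / P(positive)
    p_cancer_given_pos = p_pos_given_cancer * p_cancer / p_pos
    print(round(p_cancer_given_pos, 3))   # 0.102: roughly 10%, despite the 90% sensitivity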

P(A|B) (the conditional probability of A given B) may or may not be equal to P(A) (the unconditional
probability of A). If P(A|B) = P(A), A and B are said to be independent.

Q3: What are the steps in natural language processing? List and explain them briefly

A: Natural language processing (NLP) can be defined as the automatic (or semi-automatic)
processing of human language. The term NLP is sometimes used rather more narrowly than
that, often excluding information retrieval and sometimes even excluding machine translation.
NLP is sometimes contrasted with computational linguistics, with NLP being thought of as more
applied.


1) Morphological Analysis: Individual words are analyzed into their components, and non-word tokens, such as punctuation, are separated from the words. Words are separated into individual morphemes and the class of each morpheme is identified. The difficulty of this task depends greatly on the complexity of the morphology, i.e. the structure of words, of the language being considered.

2) Syntactic Analysis: Linear sequences of words are transformed into structures that show how the words relate to each other. Some word sequences may be rejected if they violate the language's rules for how words may be combined.

3) Semantic Analysis: The structures created by the syntactic analyzer are assigned meanings.

4) Discourse Integration: The meaning of an individual sentence may depend on the sentences that precede it and may influence the meanings of the sentences that follow it.

5) Pragmatic Analysis: The structure representing what was said is reinterpreted to determine what was actually meant. For example, the sentence "Do you know what time it is?" should be interpreted as a request to be told the time.
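A minimal sketch of the pipeline in Python (the function bodies are deliberately toy placeholders; the single-sentence example means discourse integration is omitted):

    # Toy pipeline: each stage is a placeholder for the step described above.
    def morphological_analysis(text):
        # Split off punctuation and tokenize; a real analyser would also break
        # words into morphemes (e.g. "cats" -> "cat" + plural "-s").
        return text.replace("?", " ?").replace(".", " .").split()

    def syntactic_analysis(tokens):
        # Stand-in for parsing: wrap the tokens in a flat "parse tree".
        return ("S", tokens)

    def semantic_analysis(parse_tree):
        # Stand-in for assigning a meaning to the parsed structure.
        return {"predicate": "know_time", "type": "question"}

    def pragmatic_analysis(meaning):
        # Reinterpret what was said to get what was meant: a question about
        # knowing the time is really a request to be told the time.
        if meaning == {"predicate": "know_time", "type": "question"}:
            return "REQUEST: tell the current time"
        return meaning

    tokens = morphological_analysis("Do you know what time it is?")
    meaning = semantic_analysis(syntactic_analysis(tokens))
    print(pragmatic_analysis(meaning))   # REQUEST: tell the current time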

Q4: Explain partial order planning.

A: Partial-order planning is an approach to automated planning that leaves decisions about the
ordering of actions as open as possible. It contrasts with total-order planning, which produces an
exact ordering of actions. Given a problem in which some sequence of actions is required in order
to achieve a goal, a partial-order plan specifies all actions that need to be taken, but specifies an
ordering of the actions only where necessary.

Consider the following situation: a person must get from point A to point B. In between points A
and B, there is an obstacle course. In a partial order plan, the specific path that this person will
take to get from point A to point B will not be conceived of all at once. Instead the person will
navigate the obstacle course by deciding which obstacles to master one at a time. Partial-order
planning exhibits the Principle of Least Commitment, which contributes to the efficiency of this
planning system as a whole. Often there are many possible plans for a problem which only differ
in the order of the actions. Many traditional automated planners search for plans in the full
search space containing all possible orders. In addition to the smaller search space for partial-
order planning, it may also be advantageous to leave the option about the order of the actions
open for later. An important distinction to be made is between ordering steps of an action and
conceptualizing those steps. Partial-order planning doesn't sequence actions until it is absolutely necessary; however, these actions are conceived of well before they are sequenced. This type
of planning system is simply a relation structure between actions. It is not the mechanism by
which these actions mentally come to fruition.


A partial-order plan or partial plan is a plan which specifies all actions that need to be taken,
but does not specify an exact order for the actions when the order does not matter. It is the result
of a partial-order planner. A partial-order plan consists of four components:

A set of actions (also known as operators).
A partial order for the actions. It specifies the conditions about the order of some actions.
A set of causal links. It specifies which actions meet which preconditions of other actions. Alternatively, a set of bindings between the variables in actions.
A set of open preconditions. It specifies which preconditions are not fulfilled by any action in the partial-order plan.

In order to keep the possible orders of the actions as open as possible, the set of order conditions
and causal links must be as small as possible.

A plan is a solution if the set of open preconditions is empty.

A linearization of a partial-order plan is a total-order plan derived from that particular partial-order plan.
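A minimal sketch of that four-component structure in Python (the action and precondition names are invented; this shows only the bookkeeping, not a planning algorithm):

    from dataclasses import dataclass

    @dataclass
    class PartialOrderPlan:
        actions: set              # the operators in the plan
        ordering: set             # pairs (a, b) meaning "a must come before b"
        causal_links: set         # triples (a, precondition, b): a supplies b's precondition
        open_preconditions: set   # pairs (precondition, action) not yet supplied

        def is_solution(self):
            # A plan is a solution when no precondition is left open.
            return not self.open_preconditions

    # Illustrative fragment of a plan for getting from A to B past one obstacle;
    # only the orderings that are actually necessary are recorded.
    plan = PartialOrderPlan(
        actions={"Start", "ClimbObstacle", "WalkToB", "Finish"},
        ordering={("Start", "ClimbObstacle"), ("WalkToB", "Finish")},
        causal_links={("ClimbObstacle", "past_obstacle", "WalkToB")},
        open_preconditions={("at_B", "Finish")},
    )
    print(plan.is_solution())   # False: the precondition "at_B" is still open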

Q5: Explain Neural Expert System

A: In machine learning and cognitive science, artificial neural networks (ANNs) are a family of
statistical learning algorithms inspired by biological neural networks (the central nervous systems
of animals, in particular the brain) and are used to estimate or approximate functions that can
depend on a large number of inputs and are generally unknown. Artificial neural networks are
generally presented as systems of interconnected "neurons" which can compute values from
inputs, and are capable of machine learning as well as pattern recognition thanks to their
adaptive nature.

For example, a neural network for handwriting recognition is defined by a set of input neurons
which may be activated by the pixels of an input image. After being weighted and transformed
by a function (determined by the network's designer), the activations of these neurons are then
passed on to other neurons. This process is repeated until finally, an output neuron is activated.
This determines which character was read.

Like other machine learning methods - systems that learn from data - neural networks have been
used to solve a wide variety of tasks that are hard to solve using ordinary rule-based
programming, including computer vision and speech recognition.

There is no single formal definition of what an artificial neural network is. However, a class of
statistical models may commonly be called "Neural" if they possess the following characteristics:


1. consist of sets of adaptive weights, i.e. numerical parameters that are tuned by a learning
algorithm, and
2. are capable of approximating non-linear functions of their inputs.

The adaptive weights are conceptually connection strengths between neurons, which are
activated during training and prediction.
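A minimal sketch in Python/NumPy of those two characteristics, training a tiny two-layer network on XOR with plain gradient descent (the layer sizes, learning rate and iteration count are arbitrary choices for illustration):

    import numpy as np

    # XOR data: not linearly separable, so a non-linear model is required.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # adaptive weights,
    W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # tuned by the loop below

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 1.0
    for _ in range(10000):               # the "learning algorithm": gradient descent
        h = sigmoid(X @ W1 + b1)         # hidden activations
        out = sigmoid(h @ W2 + b2)       # network output
        # Backpropagate the squared error and adjust the weights.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    # Predictions typically approach [[0], [1], [1], [0]] after training.
    print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))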

Neural networks are similar to biological neural networks in performing functions collectively
and in parallel by the units, rather than there being a clear delineation of subtasks to which
various units are assigned. The term "neural network" usually refers to models employed in
statistics, cognitive psychology and artificial intelligence. Neural network models which emulate
the central nervous system are part of theoretical neuroscience and computational neuroscience.

