
August 2009

Master of Science in Information Technology (MScIT-NEW) – Semester 3
MT0047 – Advanced Software Engineering – 4 Credits
(Book ID: B0808 & B0809)
Assignment Set – 1 (60 Marks)

Each question carries six marks: 10 x 6 = 60

Book ID: B0808


1. Explain the different Software Applications.

Ans:- Systems software is the "middleware" that runs the infrastructure supporting the application software. Most people never notice or interact with this software unless it breaks, and then they get frustrated because they cannot do their normal job or transaction.

Application software is the software that most end users are familiar with and where the "work" of the average person is done, e.g. order entry, billing, data searching, etc.

2. What are the limitations of the linear sequential model?

Ans:- Description: This is the most frequently used model. The development team specifies the business functionality of the software prior to developing the system, then breaks the complex mission of development into several logical steps, starting with requirements gathering and ending with release. The development process moves through analysis, design, coding, testing, and then support in a strictly linear fashion. This leads to a formal project management approach, with sign-offs and one deliverable per phase.

Limitations/Problems: Not robust to change. Does not aid in using information gathered after coding has begun, such as a better understanding of the requirements. The deliverable may not fully satisfy the customer.
3. Explain the Iterative Development Model in detail.

Ans:- Software Planner is an award-winning application lifecycle management (ALM) and iterative development tool that helps Information Technology (IT) departments manage all components of software development, including customer requirements, project deliverables, test cases, defects, and support tickets.

Coupled with collaborative tools such as document sharing, team calendars, interactive dashboards, knowledge bases, and threaded discussions, teams begin communicating more effectively and start delivering solutions quickly and with high quality.


4. What are the major technical and non-technical factors which militate against widespread software reuse?

Ans:- Predicting the evolution of software engineering technology is, at best, a dubious proposition; more typically, it is a frustrating exercise in disappointment and anxiety. It is not difficult to see why: the evolution of software technology is fast paced and is determined by a dizzying array of factors, many of them outside the arena of software engineering, and most of them cannot be identified, let alone predicted, with any significant advance notice. In this paper, we briefly discuss our first ventures in this domain, and some (very) preliminary conclusions and resolutions.

5. What are the advantages of software development with reuse?

Ans:- There’s a time for custom software development, and there’s a time for out-of-the-box solutions. Knowing when to use each is critical for project success – and it happens to be something we excel at.

Even when Itteco does custom development, we often use libraries of reusable code developed by us or by other commercial or community organizations. If your project calls for a content management system, for example, we may use an existing one because there are so many good ones already. Using one instead of building one lets us focus on developing the more unique components of your project.

Software reuse keeps companies focused on their core development strengths and avoids wasting valuable project time on something that has already been done. Software reuse lets us help our customers achieve their project goals in less time for less money.
Important advantages of software reuse include:

• Earlier time-to-market, because the amount of coding required is reduced.
• Higher product quality, because the development community extensively tests reusable code.
• Access to more current technologies, because early adopters of cutting-edge technology routinely make their reusable code available to the public.
• Faster turnaround times for software updates, because quality reusable code typically requires fewer updates and fixes.
• Lower cost of ownership, because reusable code reduces coding time.
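
To make the "reduced coding time" point concrete, here is a small, hypothetical Java sketch (the data values are invented): reusing the JDK's extensively tested sorting code instead of hand-writing and separately debugging the equivalent routine.

import java.util.Arrays;

public class ReuseDemo {
    public static void main(String[] args) {
        // Reuse: the JDK's tuned, widely tested sort replaces a
        // hand-written sorting routine.
        int[] scores = {42, 7, 19, 88, 3};
        Arrays.sort(scores);
        System.out.println(Arrays.toString(scores)); // [3, 7, 19, 42, 88]
    }
}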

How it works:

1. Gather requirements. We gather your project requirements, assess your individual business needs, and record them for you in a Project Vision document.
2. Evaluate applications. We evaluate the best out-of-the-box software applications
against your project requirements to narrow down the best candidates.
3. Create application comparison matrix. This step takes our reuse service to the next
level. We create a comparison matrix of the best reusable applications to meet your
project goals. That way, you can be fully confident that the reusable application you
select is the best choice for your project, no question. If we believe your project can
benefit from multiple out-of-the-box applications, we will let you know so you can
make a fully informed decision.
4. Implement custom requirements. We implement any custom modifications to the
application to ensure it meets the specific requirements of your project and provide any
needed customized source code.
5. Deploy and maintain the application. When the selected out-of-the-box application
has been modified to your project specifications, we deploy it and provide maintenance
parameters to ensure it continues to serve your needs.

Book ID: B0809

6. What are the advantages of Incremental Models?

Ans:- There are many variations on this theme. One of the main advantages of the incremental
models is their ability to cope with change during the development of the system. As an
example, consider an error in the requirements. With the waterfall model, the error may not be
noticed until acceptance testing, when it is probably too late to correct it. (Note that the client
probably does not see the software running until the acceptance tests.)

In the incremental model, there is a good chance that a requirements error will be recognized as
soon as the corresponding software is incorporated into the system. It is then not a big deal to
correct it. The waterfall model relies on careful review of documents to avoid errors. Once a phase has been completed, there is limited provision for stepping back. It is difficult to verify documents precisely, and this is, again, a weakness of the waterfall model.

There is a wide array of software development processes tagged as incremental models. They all take their origin in the above-mentioned deficiency of the Waterfall model, i.e. its inability to cope with change. In incremental models, software is built, not written. Software is constructed step by step in the same way a building is constructed. The product is designed, implemented, integrated, and tested as a series of incremental builds, where a build consists of code pieces from various modules interacting together to provide a specific functional capability, testable as a whole. If the product is broken into too many small builds, the process degenerates into a build-and-fix model. The requirements, specifications, and architectural design must be completed before the implementation of the various builds commences. A typical product will consist of 10-50 builds.

One of the main reasons why the Waterfall model is not appropriate in most cases is the accumulation of too much unstable information at all stages. For example, a complete list of 500 requirements is extremely likely to change, no matter how confident the client is in the quality of these requirements at this point. Inevitably, the later design and implementation phases will uncover flaws in these requirements, raising the need for the update and re-verification of the requirements as a whole each time major flaws are uncovered. A better approach is thus to limit the accumulation of unstable information by concentrating on the definition and implementation of only a subset of the requirements at a time. Such an approach has the benefit of distributing the feedback on the quality of the accumulated information. In the Waterfall model, most of the relevant feedback is received towards the end of the development cycle, where the programming is concentrated. By distributing the development efforts throughout the development cycle, incremental models also achieve distribution of feedback, thus increasing the stability of the accumulated artifacts.

Advantages

• Delivers an operational quality product at each stage, but one that satisfies only a subset of the client's requirements.
• A relatively small number of programmers/developers may be used.
• From the delivery of the first build, the client is able to perform useful work (portions of the complete product may be available to customers in weeks, instead of waiting for the final product as in the waterfall or rapid prototyping models, which deliver only when the complete product is finished).
• Reduces the traumatic effect of imposing a completely new product on the client organization by providing a gradual introduction.
• There is a working system at all times.
• Clients can see the system and provide feedback.
• Progress is visible, rather than being buried in documents.

• Most importantly, it breaks down the problem into sub-problems, dealing with reduced complexity, and reduces the ripple effect of changes by limiting the scope to only a part of the problem at a time.
• Distributes feedback throughout the whole development cycle, leading to more stable artifacts.
7. Write an overview on the Rational Unified Process.

Ans:- During my employment at Page Technologies, I had the opportunity to introduce the
concept of the RUP® (the Rational Unified Process®) to a variety of individuals in a variety of
scenarios across a variety of business domains. In some scenarios, I was in the role of project
manager, in some I had a sales role, in others I was conducting a seminar. In each case, the
challenge was how to effectively communicate the robust capabilities that the RUP offers.

In a nutshell, I find that it's helpful to compare the RUP to a restaurant buffet. That's right.

Before I explain, though, I'd like to suggest that you might find this analogy useful if your job
involves trying to sell or introduce RUP to an organization. The buffet comparison is most
helpful in communicating the basic concepts of RUP: the four phases, iterative development,
tailoring RUP to meet the business need, and so on. You may find it useful if:

1. You are a project manager trying to convince your team members or management that
your organization should adopt RUP.
2. You are part of a process group that wants the broader development organization to
adopt RUP.
3. You are trying to change the opinion of an organization or individuals who think RUP
is an inflexible monolith that can't compete with processes such as XP, RAD, etc.
4. You are trying to convey the value of RUP to an organization or individuals who think
process is just a make-work effort geared at large artifact thwomp factors (i.e., how loud
of a sound a document makes when it hits the desk).

The goal for this article is not to provide ROI analysis or empirical data that will allow you to
justify implementing RUP. Rather, it is to provide an analogy that will help your audience
understand how RUP should be implemented.

The Basic Analogy

Let me start off by stating that I love buffets. I enjoy the periodic lunchtime jaunts to the local
Chinese, Indian, Thai, and Mexican buffets that my development teams and I have taken over
the past many years. It was after one of these forays to a local Minneapolis restaurant a few
years ago that it occurred to me: The RUP is not unlike a visit to one of these buffets. You have
requirements, and you have a potential solution. The requirements are driven by a need
(hunger) and the solution (food) is provided by the buffet. How you deliver this solution to
your stomach is the process. Chances are, if you are like me, that you will make several trips to the buffet (iterations) and use several tools and activities to assemble different foods from different courses (phases). Table 1 details this analogy.

8. Write the merits & demerits of the prototype model.

Ans:- A prototype is a model that describes the structure of the functions we are going to use in our C and C++ programs. Suppose we are going to use one function called add, with two parameters, to add two numbers; then the prototype of the function is int add(int a, int b). Simply put, a prototype in C and C++ is the function declaration.

The prototype of a function tells the compiler that a function of this type is defined in the program.

The word prototype comes from the Greek words protos, meaning first or original, and typos, meaning form or model. In a non-technical context, a prototype is an especially representative example of a given category.

In prototype-based programming, a prototype is an original object; new objects are created by copying the prototype.
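
As a small illustration of that last sentence, here is a hedged Java sketch of prototype-style object creation (the Shape class and its field are invented for the example):

public class Shape implements Cloneable {
    private String color = "red";

    public void setColor(String c) { color = c; }

    @Override
    public Shape clone() {
        try {
            // Field-by-field copy of the prototype object.
            return (Shape) super.clone();
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e); // cannot happen: we implement Cloneable
        }
    }

    public static void main(String[] args) {
        Shape prototype = new Shape();  // the original object
        Shape copy = prototype.clone(); // a new object, created by copying
        copy.setColor("blue");          // the copy then varies independently
    }
}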

9. Explain the round-trip problem solving approach.

Ans:- The round-trip theorem checks that an inverse function sends each output back to the input it came from; hence the name "round trip". The book has a pretty good explanation of it. Basically, it says that:

g(f(y)) = y for all y in the domain of f, and f(g(x)) = x for all x in the domain of g.

It is just a way of determining whether two functions are inverses of each other.
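
A quick worked check (functions chosen purely for illustration): let f(x) = 2x + 3 and g(x) = (x - 3)/2. Then f(g(x)) = 2((x - 3)/2) + 3 = x, and g(f(y)) = ((2y + 3) - 3)/2 = y, so the round trip succeeds in both directions and the two functions are inverses of each other.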


10. Explain the various phases of the software development life cycle.

Ans:- Summary: As in any other engineering discipline, software engineering also has some structured models for software development. This document will provide you with a generic overview of the different software development methodologies adopted by contemporary software firms. Read on to know more about the Software Development Life Cycle (SDLC) in detail.

Like any other set of engineering products, software products are also oriented towards the customer. The software is either market driven or it drives the market. Customer Satisfaction was the buzzword of the '80s, Customer Delight is today's buzzword, and Customer Ecstasy is the buzzword of the new millennium. Products that are not customer- or user-friendly have no place in the market, even if they are engineered using the best technology. The interface of the product is as crucial as the internal technology of the product.

A market study is made to identify a potential customer's need. This process is also known as market research. Here, the already existing needs and the possible and potential needs available in a segment of society are studied carefully. The market study is done based on a lot of assumptions. Assumptions are the crucial factors in the development or inception of a product. Unrealistic assumptions can cause the entire venture to nosedive. Though assumptions are abstract, there should be a move to develop tangible assumptions to come up with a successful product.

Once the market research is carried out, the customer's need is given to the Research & Development division (R&D) to conceptualize a cost-effective system that could potentially solve the customer's needs in a manner better than the one adopted by the competitors at present. Once the conceptual system is developed and tested in a hypothetical environment, the development team takes control of it. The development team adopts one of the software development methodologies given below, develops the proposed system, and gives it to the customer.

August 2009
Master of Science in Information Technology (MScIT-NEW) – Semester 3
MT0047 – Advanced Software Engineering – 4 Credits
(Book ID: B0808 & B0809)
Assignment Set – 2 (60 Marks)

Each question carries six marks: 10 x 6 = 60


Book ID: B0808
1. Explain the Iterative Development Model in detail.

Ans:- The basic idea behind iterative enhancement is to develop a software system
incrementally, allowing the developer to take advantage of what was being learned during
the development of earlier, incremental, deliverable versions of the system. Learning
comes from both the development and use of the system, where possible. Key steps in the
process were to start with a simple implementation of a subset of the software
requirements and iteratively enhance the evolving sequence of versions until the full
system is implemented. At each iteration, design modifications are made and new
functional capabilities are added.

The Procedure itself consists of the Initialization step, the Iteration step, and the Project
Control List. The initialization step creates a base version of the system. The goal for this
initial implementation is to create a product to which the user can react. It should offer a
sampling of the key aspects of the problem and provide a solution that is simple enough to
understand and implement easily. To guide the iteration process, a project control list is
created that contains a record of all tasks that need to be performed. It includes such
items as new features to be implemented and areas of redesign of the existing solution.
The control list is constantly being revised as a result of the analysis phase.

The iteration step involves the redesign and implementation of a task from the project control list, and the analysis of the current version of the system. The goal for the design and implementation of any iteration is to be simple, straightforward, and modular, supporting redesign at that stage or as a task added to the project control list. The level of design detail is not dictated by the iterative approach. In a lightweight iterative project the code may represent the major source of documentation of the system; however, in a mission-critical iterative project a formal Software Design Document may be used. The analysis of an iteration is based upon user feedback and the program analysis facilities available. It involves analysis of the structure, modularity, usability, reliability, efficiency, and achievement of goals. The project control list is modified in light of the analysis results.
Iterative development slices the deliverable business value (system functionality) into iterations. In each iteration a slice of functionality is delivered through cross-discipline work, starting from the model/requirements through to the testing/deployment. The unified process groups iterations into phases: inception, elaboration, construction, and transition.

Inception identifies project scope, risks, and requirements (functional and non-functional)
at a high level but in enough detail that work can be estimated.

Elaboration delivers a working architecture that mitigates the top risks and fulfills the non-
functional requirements.

Construction incrementally fills in the architecture with production-ready code produced from analysis, design, implementation, and testing of the functional requirements.

Transition delivers the system into the production operating environment.

Each of the phases may be divided into 1 or more iterations, which are usually time-boxed
rather than feature-boxed. Architects and analysts work one iteration ahead of developers and
testers to keep their work-product backlog full.
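
As a toy illustration of the initialization step, the project control list, and the iteration step described above (all class names, task strings, and the iteration cap are invented for the sketch):

import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeDevelopmentSketch {
    public static void main(String[] args) {
        // Initialization step: a base version exists, and a project control
        // list records every task that still needs to be performed.
        Deque<String> projectControlList = new ArrayDeque<>();
        projectControlList.add("Implement login feature");
        projectControlList.add("Redesign reporting module");

        // Iteration step: redesign/implement one task, then analyze the
        // current version; analysis may add new tasks to the control list.
        int iteration = 0;
        while (!projectControlList.isEmpty() && iteration < 10) {
            String task = projectControlList.poll();
            System.out.println("Iteration " + (++iteration) + ": " + task);
            if (task.contains("login")) {
                // Feedback from analysis revises the control list.
                projectControlList.add("Add password-reset flow");
            }
        }
    }
}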

2. Mention the characteristics of object-oriented design.

Ans:-

Abstract

The metrication of object-oriented software systems is still an underdeveloped part of the object paradigm's domain. An empirical investigation, aimed at finding appropriate measures and establishing simple yet usable and cost-effective models for the estimation and control of object-oriented system projects, was undertaken on a set of object-oriented projects implemented in a stable environment. First, the available measures were screened for possible correlations; then, the models suitable for estimation were derived and discussed. Effort was found to correlate well with the total number of classes and the total number of methods, both of which are known at the end of the design phase. A number of other models for estimation of the source code complexity were also defined.

Author Keywords: Software measurement; Software estimation; Object-oriented software

3. What is software reengineering? Explain.

Ans:- Reengineering was described by Chikofsky and Cross in their 1990 paper as "the examination and alteration of a system to reconstitute it in a new form". Less formally, reengineering is the modification of a software system that takes place after it has been reverse engineered, generally to add new functionality or to correct errors.

This entire process is often erroneously referred to as reverse engineering; however, it is more accurate to say that reverse engineering is the initial examination of the system, and reengineering is the subsequent modification.

4. Explain the parallel or concurrent development model.

Ans:- The Dual Vee Model, like the V-Model, is a systems development model designed to simplify the understanding of the complexity associated with developing systems.[1][2][3] In systems engineering it is used to define a uniform procedure for product or project development.

The model addresses the necessary concurrent development of a system's architecture with the entities of that architecture, and illuminates the necessary interactions and sequences recommended for the orderly maturation of a system and of systems of systems. This article explains the power of the Dual Vee Model when applied as a reminder model for the development of complex systems. Thomas Kuhn observed in his noteworthy The Structure of Scientific Revolutions:[4]

"The power of science seems quite generally to increase with the number of symbolic
generalizations its practitioners have at their disposal."

Models simplify our lives by clarifying the complex and by explaining the difficult to understand. They also help us explain to others the principles we hold dear. In systems engineering we model just about everything. We routinely make user requirements understanding models, technical feasibility models, and physical fit models, and we model complex operational scenarios to gain the required comprehension; we then hone the models to achieve even better solutions.

When systems engineers develop systems they frequently rely on current models to guide
their way. However, the most prevalent models, waterfall, spiral, and Vee, have not been sufficiently explicit regarding the concurrent development of the system architecture and
the entities of the same system.

Waterfall Model[5] -- A software development method defined by Dr. Winston W. Royce in 1970 to promote a sequentially phased software development process. The model promotes knowing the requirements before designing, and designing before coding, etc. The objective was to provide a repeatable process for the then undisciplined (generally ad hoc) software development environment. While the model is accurate for what is depicted, it is silent on user involvement and risk management and, most importantly, it is a single-solution model that fails to address the architecture development that is inherent in all multiple-entity systems. Although designed for software, the model can be applied to hardware and system development.

Spiral Model[6] -- A software development method defined by Dr. Barry Boehm in 1986 to promote the management of recognized risks prior to attempting traditional phased software development. The model promotes resolving requirements, feasibility, and operational risks prior to proceeding with the traditional waterfall phases. The objective is to involve users and stakeholders in resolving recognized software development issues preceding concept development and design. This model is accurate for what is depicted, but it also fails to address the problems of architecture development and management. Although designed for software, the spiral model can also be applied to hardware and system development.

Vee Model[7][8][9][10][11] -- A system development method defined and elaborated by Dr. Kevin Forsberg and Hal Mooz from 1987 to 2005 to address the system development issues of decomposition, definition, integration, and verification. The model also includes user/stakeholder participation, concurrent opportunity and risk management, and verification problem resolution.

5. Explain the white-box and black-box models.

Ans:- A white-box framework requires the framework user to understand the internals of the framework in order to use it effectively. In a white-box framework, you usually extend behavior by creating subclasses, taking advantage of inheritance. A white-box framework often comes with source code.

A black-box framework does not require a deep understanding of the framework’s implementation in order to use it. Behavior is extended by composing objects together and delegating behavior between objects.

A framework can be both white-box and black-box at the same time. Your perception of how
“transparent” a framework is may depend on non-code aspects such as documentation or tools.

Frameworks tend to change over their lifetime. (See [Johnson & Roberts].) When a framework is new, it tends to be white-box: you change things by subclassing, and you have to peek at source code to get things done. As it evolves, it becomes more black-box, and you find yourself composing structures of smaller objects. Johnson and Roberts point out that frameworks can evolve beyond black-box, perhaps becoming visual programming environments, where programs can be created by interconnecting components selected from a palette. (JavaBeans is an effort in that direction.)

Visual environments and even black-box frameworks sound so much easier to use than white-
box frameworks - why would we ever bother creating white-box frameworks in the first place?
The basic answer is “cost”. To develop a black-box framework, we need a sense of which
objects change the most, so we can know where the flexibility is most needed. To develop a
visual environment, we need even more information: we need to know how objects are
typically connected together. Discovering this costs time. White-box frameworks are easier to
create and have more flexibility.

White-Box Frameworks
The most common sign that a framework is white-box is heavy use of inheritance. When you
use the framework by extending a number of abstract (or even concrete) classes, you are
dealing with a white-box framework. Inheritance is a closer coupling than composition; an
inherited class has more context it must be aware of and maintain. This is visible even in Java’s
protection scheme: a subclass has access to the public and protected parts of the class, while a
separate object only sees the public parts. Furthermore, a subclass can potentially “break” a
superclass even in methods it doesn’t override, for example by changing a protected field in an
unexpected way.
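
A contrived sketch of that last point (class names and the invariant are invented): the subclass overrides nothing, yet still breaks the superclass by writing a protected field directly.

// The superclass assumes balance never goes negative.
class Account {
    protected int balance = 100;

    public void withdraw(int amount) {
        if (amount <= balance) {
            balance -= amount;
        }
    }
}

// No method is overridden, but the superclass's assumption is violated.
class SloppyAccount extends Account {
    public void forceOverdraft() {
        balance = -50; // breaks withdraw()'s invariant without touching it
    }
}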

What are the effects of an inheritance-based approach?

• We need to understand how the subclass and superclass work together.
• We have access to both the protected and the public parts of the class.
• To provide functionality, we can override existing methods and implement abstract methods.
• We have access to the parent’s methods (by calling super.method()).

Example: A simple applet

import java.applet.Applet;
import java.awt.Graphics;

public class MyApplet extends Applet {
    public void paint(Graphics g) { /* drawing code; TBD in the original */ }
}

Notice how we depend directly on the superclass, even using its methods freely.

A subclass is coupled to its parents, and we deal with the benefits and costs of that fact.

Black-Box Frameworks
Black-box frameworks are based on composition and delegation rather than inheritance.
Delegation is the idea that instead of an object doing something itself, it gives another object
the task. When you delegate, the object you delegate to has a protocol or interface it supports,
that the main object can rely on.

In black-box frameworks, objects tend to be smaller, and there tend to be more of them. The intelligence of the system comes as much from how these objects are connected together as from what they do in themselves.

Composition tends to be more flexible than inheritance. Consider an object that uses
inheritance versus one that delegates. With inheritance, the object basically has two choices: it
can do the work itself, or it can call on the parent class to do it. In a language like Java, the
parent class is fixed at compile time.

With delegation, an object can do the work itself (or perhaps in its parent), or it can give the
work to another object. This object can be of any useful class (rather than only the parent), but
it can also change over time. (You can delegate to an object, change the delegate to be another
object, and use the new object as the delegate next time.)

Example: TBD

[TBD compare to cutting grass - children vs. lawn service?]
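
Filling in that grass-cutting placeholder with a hedged sketch (all names invented): the Lawn delegates mowing to whichever helper is currently configured, and the delegate can be swapped at run time.

interface Mower {
    void mow();
}

class Child implements Mower {
    public void mow() { System.out.println("A child mows the lawn."); }
}

class LawnService implements Mower {
    public void mow() { System.out.println("The lawn service mows the lawn."); }
}

public class Lawn {
    private Mower mower; // the delegate

    public Lawn(Mower m) { mower = m; }

    public void setMower(Mower m) { mower = m; } // swap the delegate at run time

    public void cutGrass() { mower.mow(); } // delegate the work

    public static void main(String[] args) {
        Lawn lawn = new Lawn(new Child());
        lawn.cutGrass();                  // "A child mows the lawn."
        lawn.setMower(new LawnService());
        lawn.cutGrass();                  // "The lawn service mows the lawn."
    }
}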

Converting Inheritance to Composition


In “Designing Reusable Classes,” Johnson and Foote identify this rule:

Rule 12. Send messages to components instead of to self. An inheritance-based framework can be converted into a component-based, black-box framework structure by replacing over-ridden methods with message sends to components.

Let’s apply their advice to an example. Suppose we’ve got a class that uses the Template Method design pattern like this:
public abstract class ListProcessor {
    final public void processList() {
        doSetup();
        while (hasMore()) {
            doProcess1();
        }
        doTeardown();
    }

    protected void doSetup() {}
    protected void doTeardown() {}
    abstract protected boolean hasMore();
    abstract protected void doProcess1();
    // …
}

with this as a typical subclass:

public class TestProcessor extends ListProcessor {
    int i = 0;
    protected boolean hasMore() { return i < 5; }
    protected void doProcess1() { System.out.println(i++); } // i++ so the loop terminates
}

In the simplest form of the transformation, we can take each method designed to be subclassed,
and whenever it is called, replace it by a message to the delegate. We’ll also add some code to
set up the delegate.

public class ListProcessor {
    ListProcessor delegate;

    public ListProcessor(ListProcessor lp) { delegate = lp; }

    // No-argument constructor, so subclasses such as TestProcessor can
    // still be instantiated to serve as the delegate.
    protected ListProcessor() {}

    final public void processList() {
        delegate.doSetup();
        while (delegate.hasMore()) {
            delegate.doProcess1();
        }
        delegate.doTeardown();
    }

    protected void doSetup() {}
    protected void doTeardown() {}
    protected boolean hasMore() { return false; } // placeholder; the delegate subclass overrides these
    protected void doProcess1() {}
    // …
}

We can extend this like before. [TBD] You might create it like this:

ListProcessor lp = new ListProcessor(new TestProcessor());

Compare how these two situations look at run-time:

Inheritance:  [LP] <- [Test]            =>  [:Test]

Delegation:   [LP] --delegate--> [LP]   =>  [:LP] --delegate--> [:Test]
                       ^
                    [Test]

So far, this doesn’t seem to be worth the trouble: ListProcessor already has a copy of
“processList()”; it doesn’t need the one in Test.

The next step is to introduce a new class for the delegate, restricted to just the capabilities it needs. The methods called on the delegate define its protocol. We could handle this via an abstract class, but I prefer to use a Java interface:

public interface ListProcessorAction {
    void doSetup();
    boolean hasMore();
    void doProcess1();
    void doTeardown();
}

These are the methods intended to be over-ridden.

Now the main class can use the ListProcessorAction for its delegate. Furthermore, as
ListProcessor is no longer intended to be subclassed, it no longer has any need for those
protected methods:

We could make our class depend on this interface:

public class ListProcessor {
    ListProcessorAction delegate;

    public ListProcessor(ListProcessorAction a) { delegate = a; }

    final public void processList() {
        delegate.doSetup();
        while (delegate.hasMore()) {
            delegate.doProcess1();
        }
        delegate.doTeardown();
    }

    // …
}

with this concrete implementation of the action:

public class ConcreteAction implements ListProcessorAction {
    int i = 0;
    public void doSetup() {}
    public boolean hasMore() { return i < 5; }
    public void doProcess1() { System.out.println(i++); } // i++ so the loop terminates
    public void doTeardown() {}
}

[TBD: Typically when these involve abstract methods, you might create an abstract class
version, which will be extended by the end class. Or if the protocol is small and completely
abstract, you don’t need concrete classes.]

The structure looks like this:

[LP] --delegate--> [<<interface>> LPA]   =>   [:LP] --delegate--> [:CA]
                            ^
                   [ConcreteAction]

This runtime structure is similar to the previous one, but now ListProcessor and
ConcreteAction are separate classes.

We have split one big object, that knew both the algorithm and the individual steps, into two
classes, one for each concern. Look at the tradeoffs. In the initial version, everything was in one
place. To trace the new version, you have to understand the delegation structure and how it can
vary at runtime. When you write a new action, it’s easier to focus on it in isolation, but harder
to see how it fits into the big picture.

See how the design has changed: [TBD]

[Diagram: ListProcessor becomes ListProcessor --delegate--> [<<interface>> ListProcessorAction], extended by YourListProcessor and YourListProcessorAction respectively.]

[//TBD]

Step by Step
This is a systematic description of how to convert

[Base]   to   [Base] --delegate--> [<<int>> Action] <- [ConcreteAction]

Cautions: [TBD]
• Calls to super(). (Need to work through ramifications.)
• Recursion. (Need to eliminate or understand.)

This approach follows a re-factoring style, moving in small steps and letting the compiler do
the work. (See [Fowler].)

1. Create a new interface:

public interface Action { }

(name it appropriately).

2. Create a new class implementing this interface:

public class ConcreteAction implements Action {}

3. In Base, add a delegate field, and modify each constructor to take an Action as a parameter; use this to set the delegate:

protected Action delegate;

public Base(Action a) {
    delegate = a;
    // rest as before
}

4. Each protected method in Base is presumably called in Base. For each such method:
• Move the signature to Action and change it to “public”.
• Move the routine itself to ConcreteAction (and make it public there as well).
• Replace each call to “method()” with “delegate.method()”.

For example,

Base { …
    protected void method() { /* impl */ }
    … method(); …
}

becomes

Base { … delegate.method(); … }
Action { … public void method(); … }
ConcreteAction { … public void method() { /* impl */ } … }

Note: You can find the call sites by temporarily changing the method name to “xxx” and seeing what breaks in Base.

Moving protected methods may force you to pull over some private methods as well, or
perhaps maintain a reference back to Base. Unfortunately, this is not a fully mechanical
process. Similarly, if Base’s methods involve recursion or calls to super(), you will need insight
into how the class works and how you want to split it.

5. Check whether methods in Base call any of Base’s public methods. If so:
• Copy the signature to Action.
• Copy the method to ConcreteAction.
• Replace the method body in Base with “delegate.method()”.

Again, be aware of private methods, recursion, and super().

6. Polish things up: get Base, Action, and ConcreteAction to compile properly.

7. Check out any subclasses of Base. Figure out whether each should remain a subclass of Base, become a subclass of ConcreteAction, or become an independent action implementing the Action interface. (The class may need to be split.) Distribute the subclass’ behavior where it should go. As usual, be careful about recursion and super().

8. Find each call to a constructor for Base or its subclasses. (Let the compiler tell you where they are.) Add a new parameter to the constructor, “new ConcreteAction()”, where Base wants it.

Conflicting Class Hierarchies
Java only supports single inheritance, but sometimes you find yourself wanting multiple
inheritance. You can use interfaces and delegation to help in many such situations.

Look at java.util.Observable. In some ways, it could be a basis for an implementation of the Observer design pattern. However, the fact that it is a class and not an interface is a flaw.

Suppose you have a Vector, and you’d like it to be Observable (perhaps so a listbox widget can watch it for changes). Because Vector and Observable are both already classes, you’d like this:

[Vector] <- [ObservableVector] -> [Observable]   // multiple inheritance: not legal Java

Suppose Observable were an interface instead. A class can implement as many interfaces as it needs to, so we could do this:

[Vector] <- [ObservableVector] - - -> [<<int>> Observable]   // legal Java, but not JDK 1.x

[TBD - double-check against JDK]

There’s a reason Observable is a class, though: it maintains the machinery for notification. If we had a convenience implementation, we could delegate to it:

[Vector] <- [ObservableVector] - - -> [<<int>> Observable]
                     |
                  --delegate--> [ObservableHelper]

If Java had multiple inheritance, one class could cover both the Vector behavior and the Observable behavior. Without multiple inheritance, we can get the same effect by connecting together a pair of classes.

[TBD: How close can we get to the 1.1 Listener event model?]

The Swing library designers faced this problem. Their solution is to ignore Observable, and
instead create a ListModel that models the basics of vector handling. In Swing, there is an
AbstractListModel, that handles notification, and can delegate to a concrete vector or list class.
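
A minimal sketch of that Swing approach (it relies on the standard javax.swing.AbstractListModel API; the backing list and the addName method are illustrative): AbstractListModel supplies the listener and notification machinery, while storage is delegated to an ordinary List.

import java.util.ArrayList;
import java.util.List;
import javax.swing.AbstractListModel;

public class NamesModel extends AbstractListModel<String> {
    private final List<String> names = new ArrayList<>();

    public int getSize() { return names.size(); }

    public String getElementAt(int index) { return names.get(index); }

    public void addName(String name) {
        names.add(name);
        int last = names.size() - 1;
        fireIntervalAdded(this, last, last); // notify observers, e.g. a JList
    }
}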

Inner Classes for Multiple Inheritance


Sometimes a class implements several interfaces when it would be better served by using Java's
inner classes. Consider this example:

import java.awt.event.*;
import javax.swing.*;

public class MyPanel extends JPanel implements ActionListener {
    JButton b1 = new JButton("First");
    JButton b2 = new JButton("Second");

    public MyPanel() {
        b1.addActionListener(this);
        add(b1);
        b2.addActionListener(this);
        add(b2);
    }

    public void actionPerformed(ActionEvent e) {
        if (e.getSource() == b1) {
            System.out.println("b1 action");
        } else { // assume b2
            System.out.println("b2 action");
        }
    }
}

Here, MyPanel acts both as a JPanel and as a listener. Notice the code for actionPerformed(): it’s got an ugly “if” statement that’s practically a case statement. Such a construct is a sign that we’re not as OO as we could be, and that we can move intelligence around.

We’ll use a pair of inner classes, cleaning up MyPanel a bit. This keeps it from receiving
unnecessary notifications, and avoids the need for a test to see which button was clicked.

import java.awt.event.*;
import javax.swing.*;

public class MyPanel extends JPanel {
    JButton b1 = new JButton("First");
    JButton b2 = new JButton("Second");

    public class FirstListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            System.out.println("b1 " + e);
        }
    }

    public class SecondListener implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            System.out.println("b2 " + System.currentTimeMillis());
        }
    }

    public MyPanel() {
        b1.addActionListener(new FirstListener());
        add(b1);
        b2.addActionListener(new SecondListener());
        add(b2);
    }
}

The first version was more in the style of JDK 1.0.2, where the event detection hierarchy had to
match the container hierarchy. The second version is more in JDK 1.1 style.

You can carry this a step further to use anonymous inner classes:
public class MyPanel extends JPanel {
    JButton b1 = new JButton("First");
    JButton b2 = new JButton("Second");

    public MyPanel() {
        b1.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                System.out.println("b1 " + e);
            }
        });
        add(b1);

        b2.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                System.out.println("b2 " + System.currentTimeMillis());
            }
        });
        add(b2);
    }
}

Many people might find this on the edge of readability - some on the near side and some on the
far side.

“Rule 6. The top of the class hierarchy should be abstract.”

Book ID: B0809


6. What are the various strategies for organizing the process of software development in an organization?
Ans:- This workshop is designed for IS professionals at all levels of experience who desire an understanding of how to define, collect, analyze, and present a variety of software measurement information. It is appropriate for those organizations that wish to establish a measurement program or enhance an existing one. The workshop focuses on the definition of software measures and provides guidelines for analysis, interpretation, and presentation.

The workshop will provide participants with the methods and techniques to define, analyze, and report the appropriate measurement data to various audiences, ranging from project staff to senior management. The workshop approach will allow the participants to develop usable measurement reports based on actual data collected in your organization.

Course Topics
The following topics are addressed by this training:

• The Uses and Benefits of Measurement
• Evaluating Measurement Requirements
• Data Sources and Collection Points
• Roles and Responsibilities
• Metrics Definition
• Data Analysis
• Measurement Reporting
• Identifying Improvement Opportunities

This course is designed for all professionals who have a need to effectively estimate effort,
schedule and cost for software projects. The course is built on the understanding that the
most successful estimating is based on accurately quantifying customer requirements
using software measurement techniques. This includes potential scope creep,
understanding project attributes and, most important, the size of the product to be
delivered.

Software size is the one factor that has the greatest impact on estimates. Function Point Analysis is widely recognized as an effective method to size software deliverables. It is a proven, reliable method for measuring software development work-products, where the resulting size is expressed in terms of functionality as seen from the customer's perspective. Therefore, Function Point Analysis is extremely useful in communicating customer needs as well as in estimating projects, measuring productivity, and managing change of scope.

This course demonstrates how Function Point Analysis techniques, in conjunction with
reliable and repeatable estimating methods, provide an effective approach to predict
project and application performance. This course focuses on leveraging Function Point
information to forecast performance, while highlighting specific opportunities that can
increase efficiency and effectiveness.

Course Topics
The following topics are addressed:

• Introduction to Estimating
• Benefits and Uses
• The Fundamentals of Estimating with Function Points
• Overview of the Process
• Software Estimation Models and Techniques
• Understanding Function Point Based Productivity Rates
• Using Productivity Rates to Estimate Effort
• Estimating Cost and Schedule
• Understanding Influencing Attributes on Projects
• Adjusting Estimates Based on Attributes
• Documenting and Communicating the Estimate
• Implement Estimating Techniques
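
To make the estimating arithmetic concrete, here is a small worked example with invented numbers (not from the course material): effort = size / productivity. If an application is sized at 400 function points and the team's historical delivery rate is 0.1 function points per staff-hour (that is, 10 staff-hours per function point), the base effort estimate is 400 / 0.1 = 4,000 staff-hours; at roughly 140 staff-hours per staff-month, that is about 29 staff-months, before adjusting for influencing attributes such as scope creep.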
7. What are the main characteristics of successful teams?

Ans:- Team goals are derived from critical farm problems that influence whether the business will exist in ten or twenty years, such as the following:

• 30 percent increase in milk sales.
• Change in management styles from stall barn to milking parlors.
• Sale or transfer of the business.
• Arranging a new partnership for the business.
• Specialization in milk production only.
• New ventures.
• Managing non-family labor.
• Addressing complex unresolved management problems.

Teams should avoid farm problems that don't require the skills, experience, and judgment
of off-farm advisers. This is a misuse of valuable resources and will eventually lead to
dissatisfaction and dissolution of the advisory team.

Team members and team problems should be well matched. As the team sets new goals,
the composition of the team should be re-evaluated. Having a crop consultant or
veterinarian on an intergenerational farm transfer team is likely to underutilize the crop
consultant or veterinarian's abilities. An estate planner or attorney might be a better
choice.

Outstanding team members should have unique skills, experiences, and judgment not
resident on the farm staff. They should also be team players and believe in the team
process. Team members that have cross-purposes or hidden agendas can destroy a
team's effectiveness and will have to be removed from the team.

A team meeting is not a committee meeting but a highly creative process that benefits from locations that foster thinking and orderly discussion. Teams should meet in an environment similar to a boardroom: comfortable and away from interruptions and distractions.

Complex problems rarely have simple solutions. Using processes for making decisions
can clarify solutions, but solutions often need refinement over time. By frequently
tracking progress toward goals and using measuring techniques, the team can monitor
the degree of success and evaluate when to intercede. Also, the monitoring process
helps advisers see progress and assess their time commitment. Without a measured
benefit advisers cannot continue to justify their commitment as team members.
8. What are the various types of prototyping models?

Ans:- Software prototyping, an activity during certain software development, is the creation of prototypes, i.e., incomplete versions of the software program being developed.

A prototype typically simulates only a few aspects of the features of the eventual program,
and may be completely different from the eventual implementation.

The conventional purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out,
rather than having to interpret and evaluate the design based on descriptions. Prototyping
can also be used by end users to describe and prove requirements that developers have
not considered, so "controlling the prototype" can be a key factor in the commercial
relationship between solution providers and their clients.

Prototyping has several benefits: the software designer and implementer can obtain feedback from the users early in the project, and the client and the contractor can check whether the software being built matches the software specification according to which the program is being developed. It also allows the software engineer some insight into the accuracy of initial project estimates and into whether the proposed deadlines and milestones can be successfully met. The degree of completeness and the techniques used in prototyping have been in development and debate since the approach was proposed in the early 1970s.[6]

This process is in contrast with the 1960s and 1970s monolithic development cycle of
building the entire program first and then working out any inconsistencies between design
and implementation, which led to higher software costs and poor estimates of time and
cost. The monolithic approach has been dubbed the "Slaying the (software) Dragon"
technique, since it assumes that the software designer and developer is a single hero who
has to slay the entire dragon alone. Prototyping can also avoid the great expense and
difficulty of changing a finished software product.

Throwaway prototyping, also called close-ended prototyping: Throwaway or Rapid Prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented into a finished system.

Rapid Prototyping involves creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this has been achieved, the prototype model is 'thrown away', and the system is formally developed based on the identified requirements.[7]

The most obvious reason for using Throwaway Prototyping is that it can be done quickly. If the users can get quick feedback on their requirements, they may be able to refine them early in the development of the software. Making changes early in the development lifecycle is extremely cost effective, since there is nothing at that point to redo. If a project is changed after considerable work has been done, then small changes can require large efforts to implement, since software systems have many dependencies. Speed is crucial in implementing a throwaway prototype, since with a limited budget of time and money little can be expended on a prototype that will be discarded.

Another strength of Throwaway Prototyping is its ability to construct interfaces that the
users can test. The user interface is what the user sees as the system, and by seeing it in
front of them, it is much easier to grasp how the system will work.

…it is asserted that revolutionary rapid prototyping is a more effective manner in which to deal with user requirements-related issues, and therefore a greater
enhancement to software productivity overall. Requirements can be identified,
simulated, and tested far more quickly and cheaply when issues of evolvability,
maintainability, and software structure are ignored. This, in turn, leads to the
accurate specification of requirements, and the subsequent construction of a valid
and usable system from the user's perspective via conventional software
development models. [8]

Prototypes can be classified according to the fidelity with which they resemble the actual
product in terms of appearance, interaction and timing. One method of creating a low
fidelity Throwaway Prototype is Paper Prototyping. The prototype is implemented using
paper and pencil, and thus mimics the function of the actual product, but does not look at
all like it. Another method to easily build high fidelity Throwaway Prototypes is to use a
GUI Builder and create a click dummy, a prototype that looks like the goal system, but
does not provide any functionality.
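
As a hedged illustration of the click-dummy idea (a hand-written sketch; in practice a GUI builder would generate something similar, and the window's fields are invented), here is a tiny Swing mock-up whose controls deliberately do nothing:

import javax.swing.*;

// A throwaway "click dummy": it looks like the goal system's screen,
// but the buttons are intentionally not wired to any functionality.
public class OrderEntryDummy {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Order Entry (prototype)");
            JPanel panel = new JPanel();
            panel.add(new JLabel("Customer:"));
            panel.add(new JTextField(15));
            panel.add(new JButton("Save"));   // no ActionListener on purpose
            panel.add(new JButton("Cancel")); // no ActionListener on purpose
            frame.add(panel);
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}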

Not exactly the same as Throwaway Prototyping, but certainly in the same family, is the
usage of storyboards, animatics or drawings. These are non-functional implementations
but show how the system will look.

SUMMARY:- In this approach the prototype is constructed with the idea that it will be discarded and the final system will be built from scratch. The steps in this approach are: gather preliminary requirements; quickly build a working prototype; have the users exercise it and refine their requirements; discard the prototype; and formally develop the final system from the identified requirements.

9. What are the essential best practices which are followed in the Rational Unified Process?

Ans:- The IBM Rational Unified Process® (RUP®) is a complete software-development process framework that comes with several out-of-the-box instances. Processes derived from RUP vary from lightweight processes, addressing the needs of small projects with short product cycles, to more comprehensive processes addressing the broader needs of large, possibly distributed project teams. Projects of all types and sizes have successfully used RUP. This white paper describes how to apply RUP in a lightweight manner to small projects. We describe how to effectively apply Extreme Programming (XP) techniques within the broader context of a complete project.

Inception

Inception is significant for new development efforts, where you must address important business and requirement risks before the project can proceed. For projects focused on enhancements to an existing system, the Inception phase is shorter, but is still focused on ensuring that the project is both worth doing and possible. During Inception, you make the business case for building the software. The Vision is a key artifact produced during Inception. It is a high-level description of the system. It tells everyone what the system is, and may also tell who will use it, why it will be used, what features must be present, and what constraints exist.[1] The Vision may be very short, perhaps only a paragraph or two. Often the Vision contains the critical features the software must provide to the customer.

[1] XP defines three phases: Exploration, Commitment, and Steering. These do not map well to RUP phases, so we use the four RUP phases to describe the process.

Four essential Inception activities specified in RUP are:

• Formulate the scope of the project.

• Plan and prepare the business case.

• Synthesize candidate architecture.

• Prepare the project environment.

Elaboration

The goal of the Elaboration phase is to baseline the architecture of the system to provide a
stable basis for the bulk of the design and implementation effort in the Construction phase.
The architecture evolves out of a consideration of the most significant requirements (those
that have a great impact on the architecture of the system) and an assessment of risk. The
stability of the architecture is evaluated through one or more architectural prototypes.

In RUP, design activities focus on the notion of system architecture and, for software-
intensive systems, software architecture. Using component architectures is one of the six
best practices of software development embodied in RUP, which recommends spending
time developing and maintaining the architecture. The time spent on this effort mitigates
the risks associated with a brittle and inflexible system. XP replaces the notion of
architecture by “metaphor.” The metaphor captures part of the architecture, whereas the
rest of the architecture evolves as a natural result of code development. XP assumes that
architecture emerges from producing the simplest design and continually refactoring the
code.

On any project, you should do at least these three activities during Elaboration:

• Define, validate, and baseline the architecture

• Refine the Vision.

• Create and baseline iteration plans for the Construction phase.

Construction

The goal of Construction is to complete the development of the system. The Construction
phase is, in some sense, a manufacturing process, where you emphasize managing
resources and controlling operations to optimize costs, schedules, and quality. In this
sense, the management mindset undergoes a transition from the development of
intellectual property during Inception and Elaboration, to the development of deployable
products during Construction and Transition.

Each Construction iteration has three essential activities:

• Manage resources and control process.

• Develop and test components.

• Assess the iteration.

Transition

The focus of Transition is to ensure that software is available for its end users. The
Transition phase includes testing the product in preparation for release and making minor adjustments based on user feedback. At this point in the lifecycle, user feedback needs to
focus mainly on fine-tuning the product, configuring, installing, and usability issues.

The essential Transition activities are the following:

• Finalize end-user support material.

• Test the product deliverable in a customer environment.

• Fine tune the product based upon customer feedback.

• Deliver the final product to the end user.

You can produce several artifacts during the Transition phase. If your product is one that
will have future releases (and how many do not?), you will have begun identifying features
and defect fixes for the next release.

The essential artifacts for any project are:

• Deployment Plan

• Release Notes

• Training Materials and Documentation.

Digest

Building software is more than writing code. A software development process must focus
on all activities necessary to deliver quality to your customers. A complete process does
not have to be heavy. We have shown how you can have a small, yet complete, process
by focusing on the essential activities and artifacts for your project. Perform an activity or
produce an artifact if it helps mitigate risk on your project. Use as much, or as little,
process and formality as you need for your project team and your organization. RUP and
XP are not necessarily exclusive. By incorporating techniques from both methods, you can
arrive at a process that helps you deliver better-quality software quicker than you do today. Robert Martin describes a process called the dX process, which he claims to be RUP-compliant.[8] It is an instance of a process built from the RUP framework.

10. What are the advantages of the rapid application development model?
Ans:- The practice of rapid application development was developed by James Martin in 1991. It is a frequently adopted method in the development of high-end software. The aim of the process is to develop a complete software solution in as little time as possible. It makes use of various structural procedures, CASE (Computer-Aided Software Engineering) tools, and prototyping for describing processes, to increase the pace at which software is developed. If a company is developing a graphical interface for gaming software, rapid development tools facilitate speedy development of code by integrating all the basic parameters into the prototype tools. A developer would simply use the tools, instead of writing a separate section of code for that procedure. Sometimes, some features of a program are compromised in order to generate the end product in less time.

An astounding 65% of the budget of large firms is spent on maintaining and upgrading their operating systems. These systems were designed only a certain time back, but given the nature and frequency of changes, much software requires modification. Quite often, the end users can satisfactorily meet all their requirements even without some seemingly essential components of the software. It is the task of the software development team to identify all such potential areas of operation which can be left out, or encapsulated within a broader heading, to save time, effort, and cost. In other cases, the business which has ordered the software can negotiate on certain parts which can be done away with before the software is fully developed. This again may save valuable cost and time before implementing the software.

RAD is a combination of entities: a well-defined methodology, a dedicated and trained staff, proper and efficient management practices, and efficient manual and computer tools. The entire system of operation can be summarized as a software development mechanism providing higher quality in less time, using:

• Software essentials restructuring
• Prototyping and verification designs
• Integrating all changes in the current model
• Minimizing the effort of reviewing, testing and other such steps

Advantages of using RAD

There are many advantages of using RAD and it can solve many concerns of the user as
well as the developers.

• Conventional software development methods take, on average, almost 20% more time than the RAD procedure. Using RAD can enable quicker visualization of the end design and hence allow rapid software testing and rectification. The customer is able to have a faster look at the design and add valuable inputs, in order to make the design more user-friendly.
• The current competitive scenario demands frequently upgraded software in order to effectively satisfy customers' requirements. RAD enables a faster and updated version to reach the end user, by systematically eliminating redundant steps or using prototype methods.
• Avoiding cost overruns and meeting time constraints are further advantages, though not a big consideration in high-end uses.
• RAD makes the development process more credible by giving the customer scope to actively provide inputs during development. It can also serve as a feasibility check from the developer's point of view.
• It protects the current project from the variations in the market.

Rapid Application Development is an efficient methodology which can assist faster software development and, at the same time, ensure maximum quality of the project.
