
What Is Windows DNA?

Microsoft® Windows® Distributed interNet Applications Architecture (Windows DNA) is the
application development model for the Windows platform. Windows DNA specifies how to: develop
robust, scalable, distributed applications using the Windows platform; extend existing data and
external applications to support the Internet; and support a wide range of client devices,
maximizing the reach of an application. Because Windows DNA relies on a comprehensive and
integrated set of services provided by the Windows platform, developers are freed from the burden
of building or assembling the required infrastructure for distributed applications and can focus on
delivering business solutions.
Windows DNA addresses requirements at all tiers of modern distributed applications:
presentation, business logic, and data. Like the familiar PC environment, Windows DNA enables
developers to build tightly integrated applications by accessing a rich set of application services in
the Windows platform using a wide range of familiar tools. These services are exposed in a unified
way through the Component Object Model (COM). Windows DNA provides customers with a
roadmap for creating successful solutions that build on their existing computing investments and
will take them into the future. Using Windows DNA, any developer will be able to build new
applications or extend existing ones to combine the power and richness of the PC, the robustness
of client/server computing, and the universal reach and global communications capabilities of the
Internet.

Q - What is an interface?
A - An interface is nothing more than a defined set of public methods and properties (which are
really methods, too) that are supported by an object. More than one type of object can support
the same interface (e.g., TextBoxes and ComboBoxes are both Controls), and an object can
support more than one interface (a TextBox is both a TextBox and a Control). Interfaces serve as
a contract between the creator of an object and the user of that object. Any object that supports
an interface is required to provide each and every property and method that makes up that
interface.

Q - What is inheritance?
A - Inheritance is one of the three pillars of object-oriented theory. Inheritance refers to the
concept of deriving new objects from existing ones. This creates "is a type of" relationships
between the objects. Generally this works along the lines of specific object A is a type of general
object B (e.g., a "Poodle" object is a type of "Dog" object). This classifying of objects into base
types and sub types is also referred to as abstraction. Inheritance can take on different meanings.
There is implementation inheritance (sometimes called "true" inheritance), where a derived object
inherits the behavior (methods) of its parent(s). The derived object may use this behavior or in
some cases may define its own behavior for selected methods. Interface inheritance, on the other
hand, involves a derived object sharing the same interface as its parent. With interface
inheritance, the derived class is responsible for providing its own implementation for each
inherited method.

Q - What is Encapsulation?
A - Encapsulation is one of the three pillars of object-oriented theory. It refers to hiding or
protecting implementation details (such as data) within an object, only exposing that which is
necessary, and controlling access (for example, providing functions to get and set data values,
rather than providing direct access to variables). Essentially, proper encapsulation makes an
object a "black box". The user of the object provides the required input and receives an expected
output, but has no "under the hood" knowledge of how the object works.

Q - What is polymorphism?
A - Polymorphism is one of the three pillars of object-oriented theory. In essence, it means that
objects can be more than one type of thing. We see this every day in life: a two-seat convertible
is a type of car, which is a type of vehicle. The same concept that applies to physical objects can
also be applied to software objects. Code that expects a parameter of type car should work
equally well with our two-seater or a station wagon. This allows us to write code that is either
specific or abstract, depending on our needs.
Put simply, polymorphism is the ability of a derived class to override the behavior of its base
class.

Q - What is an hWnd and what can it be used for?
A - An hWnd is a Handle to a Window. A handle is a long integer generated by the operating
system so it can keep track of all the objects it manages (a form, a command button, etc.). You
can't set an hWnd at design time or runtime, and the value of the handle changes each time the
form is opened. Handles are used when you make calls to API functions; the function needs to
know the handle of the window, plus other arguments depending on what the API does.

Declare Function GetWindowText Lib "user32" Alias _
    "GetWindowTextA" (ByVal hWnd As Long, ByVal lpString As String, _
    ByVal cch As Long) As Long

Q. What is the difference between non-virtual and virtual functions?


A. The behavior of a non-virtual function is known at compile time, while the behavior of a virtual
function is not known until run time.

Q. What is a pure virtual function?


A. It is a member function that is declared but given no implementation (in C++, marked with
"= 0"); concrete derived classes must provide one.

Q. What is an abstract base class?


A. It is a class that has one or more pure virtual functions. Such a class cannot be instantiated
directly.

Q. What is the difference between function overloading and function overriding?


A. Overloading allows defining multiple member functions with the same name but different
signatures. The compiler will pick the correct function based on the signature. Overriding allows a
derived class to redefine the behavior of member functions it inherits from a base class. The
signatures of the base class member function and the derived class member function are the
same; however, the implementation and, therefore, the behavior will differ.

Q. What is a function's signature?


A. A function's signature is its name plus the number and types of the parameters it accepts.

# A thread is a sequence of one or more instructions given by a task (application) to a CPU. In a
sense, every time you run an application, you can think of that application instance running in
memory as a thread.

# Concurrency is the existence of more than one request for the same object at the same time.
When object or component concurrency occurs, a locking mechanism is necessary to serialize
requests. Serializing requests means that the object or component handles only one request at a
time.

# A class is an abstract entity (a “thing” that carries out a subset of your user’s requirements)
with behaviors (a set of functions or methods) and attributes (variables or properties that identify
the class). A class defines the behavior and identity of objects. Some classes can also define the
behavior and identity of other classes; such classes are called base classes. Base classes that
can’t be instantiated into concrete objects are called abstract classes.

SDLC
The life cycle begins when an application is first conceived and ends when it is no longer in use. It
includes aspects such as initial concept, requirements analysis, functional design, internal design,
documentation planning, test planning, coding, document preparation, integration, testing,
maintenance, updates, retesting, phase-out, and other aspects. Small to medium database
software projects are generally broken down into six stages:
1. Project Planning
2. Requirement Definitions
3. Design
4. Development
5. Integration and Test
6. Installation and Acceptance
The relationship of each stage to the others can be roughly described as a waterfall, where the
outputs from one stage serve as the initial inputs for the following stage. During each stage,
additional information is gathered or developed, combined with those inputs, and used to produce
the stage deliverables.

Software Testing

What is 'Software Quality Assurance'?


Software QA involves the entire software development PROCESS - monitoring and improving the
process, making sure that any agreed-upon standards and procedures are followed, and ensuring
that problems are found and dealt with. It is oriented to 'prevention'.

What is 'Software Testing'?


Testing involves operation of a system or application under controlled conditions and evaluating
the results. Testing is the process of examining an application to ensure it fulfills the requirements
for which it was designed and meets quality expectations. More importantly, testing ensures the
application meets customer expectations. Testing accomplishes a variety of things, but most
importantly it measures the quality of the software you are developing.

Glass Box Testing (also known as Structural Testing, Clear Box Testing, or White Box Testing)
focuses on the inside of the “box”, relying on internal knowledge of the system as a method
for testing. Knowledge of the code is therefore required during glass box testing.
Glass box testing has several variations, one of which is static and dynamic analysis, where
techniques are used to examine the software product both without executing it (static) and while
it runs (dynamic). Dynamic analysis is the testing portion that involves running the system; for
example, it could be applied to a software product such as a database of college students
tracking the courses they are registered for, academic status, gender, social security number,
address, etc. There are several different types of coverage techniques. Statement Coverage
testing is performed by executing every statement at least once. Branch Coverage testing is
performed by running a series of tests or test cases to ensure that all branches of a test
requirement or software component are exercised at least once. Path Coverage testing is
performed by testing all the various paths through each test requirement or software component.

There are several advantages of Glass Box testing such as:


• It forces the tester to use reason when testing the software. It approximates the
partitioning performed by execution equivalence. It helps reveal errors in “hidden” code.
• It reveals optimizations.

The disadvantages of Glass box testing are:


• It is expensive.
• A tester may miss cases omitted in the code.

Black Box Testing (also known as Behavioral, Functional, Closed Box, or Opaque Box testing)
focuses on testing the functionality of the system, using functional requirements if available.
Black box testing treats the system as a “black box” and is based solely on system requirements.
This type of testing can be used when testing an order tracking system, such as one for
purchasing a product over the Internet, which involves creating a user account, selecting items to
purchase, entering personal information, and entering payment information. All functionality
stated in the requirements has to be met before the software product is ready for production; for
example, one requirement would be that a user has to be able to select an item to purchase and
have the ability to select additional items to purchase or delete the items currently in the
shopping cart.

Unit Testing takes the smallest piece of testable software in the application, isolates it from the
remainder of the code, and determines whether it behaves exactly as you expect. Each unit is
tested separately before the units are integrated into modules to test the interfaces between
modules.
Unit testing has proven its value in that a large percentage of defects are identified during its use.
The most common approach to unit testing requires drivers and stubs to be written. The driver
simulates a calling unit and the stub simulates a called unit. The investment of developer time in
this activity sometimes results in demoting unit testing to a lower level of priority and that is
almost always a mistake. Even though the drivers and stubs cost time and money, unit testing
provides some undeniable advantages. It allows for automation of the testing process, reduces
difficulties of discovering errors contained in more complex pieces of the application, and test
coverage is often enhanced because attention is given to each unit.

Integration Testing is a logical extension of unit testing. In its simplest form, two units that
have already been tested are combined into a component and the interface between them is
tested. A component, in this sense, refers to an integrated aggregate of more than one unit. In a
realistic scenario, many units are combined into components, which are in turn aggregated into
even larger parts of the program. The idea is to test combinations of pieces and eventually
expand the process to test your modules with those of other groups. Eventually all the modules
making up a process are tested together. Beyond that, if the program is composed of more than
one process, they should be tested in pairs rather than all at once.
Integration testing identifies problems that occur when units are combined. By using a test plan
that requires you to test each unit and ensure the viability of each before combining units, you
know that any errors discovered when combining units are likely related to the interface between
units. This method reduces the number of possibilities to a far simpler level of analysis.

Regression Testing ensures that no anomalies appear in the system due to changes made. This
involves testing any modifications made to the system to ensure no new anomalies are
introduced. An example of this type of testing would be to test the shopping cart feature of an
order tracking system by purchasing additional products as an existing user and purchasing
products as a new user, ensuring that all the key functionality of the system is still exercised.

System Testing allows the tester to prove that the system meets all objectives and
requirements. System testing is a sort of verification process and provides an external view of the
system. This type of testing should be approached the way a user would use the system, since
users are not concerned with how the system works, only with the system's responses and its
interface. The user only wishes to know that the system functions properly.
System testing verifies the system in a non-live test environment using non-live test data, and
could also be referred to as verification testing. An IT group consisting of testers, developers, end
users, and operations staff usually performs system testing. This type of testing would ensure
that the software product operates across various operating systems, Internet browsers, Internet
speeds, and screen resolutions.

CMM

What is the CMM?


It is a model that describes how software engineering practices in an organization evolve under
certain conditions:

1. The work performed is organized and viewed as a process


2. The evolution of the process is managed systematically

What are the four principles underlying the CMM?


1. Evolution is possible and takes time
2. Process maturity can be defined in distinguishable stages
3. Process evolution implies that some things must be done before others
4. Maturity will erode unless it is sustained

The Capability Maturity Model (CMM) is a methodology used to develop and refine an
organization's software development process. The model describes a five-level evolutionary path
of increasingly organized and systematically more mature processes. CMM was developed and is
promoted by the Software Engineering Institute (SEI), a research and development center
sponsored by the U.S. Department of Defense (DoD). SEI was founded in 1984 to address
software engineering issues and, in a broad sense, to advance software engineering
methodologies. More specifically, SEI was established to optimize the process of developing,
acquiring, and maintaining heavily software-reliant systems for the DoD. Because the processes
involved are equally applicable to the software industry as a whole, SEI advocates industry-wide
adoption of the CMM.
The CMM is similar to ISO 9001, one of the ISO 9000 series of standards specified by the
International Organization for Standardization (ISO). The ISO 9000 standards specify an effective
quality system for manufacturing and service industries; ISO 9001 deals specifically with software
development and maintenance. The main difference between the two systems lies in their
respective purposes: ISO 9001 specifies a minimal acceptable quality level for software
processes, while the CMM establishes a framework for continuous process improvement and is
more explicit than the ISO standard in defining the means to be employed to that end.

CMM's Five Maturity Levels of Software Processes


• At the initial level, processes are disorganized, even chaotic. The software process is
characterized as ad hoc. Few processes are defined, and success depends on individual
effort and heroics; it is not considered repeatable, because processes are not sufficiently
defined and documented to allow them to be replicated.
• At the repeatable level, basic project management techniques are established, and
successes can be repeated, because the requisite processes have been established,
defined, and documented. Basic project management processes are in place to track cost,
schedule, and functionality, and the necessary process discipline exists to repeat earlier
successes on projects with similar applications.
• At the defined level, an organization has developed its own standard software process
through greater attention to documentation, standardization, and integration. The software
process for both management and engineering activities is documented, standardized, and
integrated into a standard software process for the organization. All projects use an
approved, tailored version of the organization's standard software process for developing
and maintaining software.
• At the managed level, an organization monitors and controls its own processes through
data collection and analysis. Detailed measures of the software process and product
quality are collected. Both the software process and products are quantitatively understood
and controlled.
• At the optimizing level, processes are constantly being improved through monitoring
feedback from current processes and introducing innovative processes to better serve the
organization's particular needs.
