
Dynamic linking

The traditional way of linking programs, in which all object modules are collected into a single
executable, is called static linking, because all references between modules are resolved
statically, at link time. This approach has certain disadvantages:

The resulting program always contains all the code and data that may be required during
execution, and all of it is loaded into memory when the program starts. In a particular
environment, however, significant portions of that code and data are never used and simply
waste memory.

When any of the object modules are modified, the entire executable has to be relinked.

If an application suite contains several executables, each of them contains a copy of the
run-time library. Other code is often duplicated as well.

Views of Linking and Execution

How Dynamic Linking Works


A dynamic link library has a table containing the names of entities (procedures and variables)
that are exported, that is, made available to other executable components, along with
information about their location within the DLL. Table entries also have numbers associated with
them, so-called ordinal numbers.
In turn, an executable file (EXE or DLL) that references the DLL contains, for each reference, a
record with the DLL name and the name or ordinal number of an exported entity. When the
operating system loads the executable, it also loads all the DLLs it references and patches the
executable's code so that it calls DLL procedures and accesses DLL variables correctly. This is
called load-time dynamic linking.

Alternatively, an application may explicitly load a DLL at run time using an API call. Another
call retrieves the address of a procedure or variable exported from the loaded DLL, given its
name or ordinal number. This is called run-time dynamic linking.

Dynamic Linking: Advantages


Dynamic linking offers several advantages over static linking: code sharing, automatic updates,
and security.
Code Sharing
With dynamic linking, programs can share identical code instead of owning individual copies of
the same library. Think of the standard C or C++ libraries. They are both huge and ubiquitous.
Every C or C++ program uses at least a portion of these libraries for I/O, date and time
manipulation, memory allocation, string processing, and so on. If distinct copies of these
libraries were statically linked into every executable file, even tiny programs would occupy
dozens of megabytes.
Worse yet, whenever a new version of these libraries is released, every executable file would
have to be relinked to reflect the change. Fortunately, these libraries are usually implemented
as shared dynamic libraries that are loaded into the program at run time.
Automatic Updates
Whenever a new version of a dynamically linked library is installed, it automatically supersedes
the previous version. When you run a program, it automatically picks up the most up-to-date
version without forcing the user to relink.
Security
If you're concerned about protecting your intellectual property, splitting an application into
several linkage units makes it harder for crackers to disassemble and decompile an executable
file (at least in theory).

Dynamic Linking: Disadvantages


Considering dynamic linking's prominent advantages, you might assume that it is superior to
static linking in every respect. This is not the case, though. Paradoxically, every advantage of
dynamic linking can become a disadvantage under certain conditions. Let's look at these
disadvantages more closely.
DLL Hell
While it is usually a good idea to share a single instance of a library rather than duplicate it
in every individual program, this sharing can become a serious problem. Think of two different
Web browsers that reside on the same machine. Both browsers use the same shared libraries to
access the system's modem, graphics card, and I/O routines. Now, suppose that you recently had
to install an update of one of these shared libraries because of a security loophole found in the
original version. Alas, the new version causes one of the browsers to crash. You can either wait
for a new patch or revert to the previous version of the shared library.
None of these workarounds is ideal. Had the library been statically linked into each browser
application, this mess would have been avoided. Unfortunately, such conflicts are reconciled
only when you explicitly enable multiple versions of the same shared library to coexist on the
same machine. However, this workaround complicates system management and forces users to
become experts. In the Windows world this problem is known as "DLL hell", although it exists
in the POSIX world too.
Aliasing
Dynamic linking enables you to share identical code, which is usually a good idea. However,
sharing the same data is rarely desirable.
If, for example, two processes call localtime(), each must get its own copy of the static tm
structure that the function returns. Designers of dynamically linked libraries were aware of this
necessity and devised various techniques to cope with it. However, programmers must explicitly
state which components of a shared library are truly shared and which ones aren't. The result is
a larger and slower shared library, as well as cluttered source files.
Code Cracking
Windows programmers who wish to thwart crackers' attempts to decompile their code often split
an application into multiple DLLs. Ironically, this makes a cracker's job much easier, because
DLLs (and shared libraries in general) must contain metadata that describes the code. This
metadata (reminiscent of debug information) tells the runtime linker where each function
and object is stored in the file, what their undecorated names are, and so on. A cracker who
knows how this information is encoded can decompile the code with less effort than is required
for decompiling an optimized, statically linked executable.
