SEPTEMBER 2014 | OPEN SOURCE FOR YOU | VOLUME: 02 | ISSUE: 12
DEVELOPERS
26 Improve Python Code by
Using a Profiler
30 Understanding the
Document Object Model
(DOM) in Mozilla
40 Introducing AngularJS
45 Use Bugzilla to Manage
Defects in Software
48 An Introduction to Device
Drivers in the Linux Kernel
52 Creating Dynamic Web
Portals Using Joomla and
WordPress
56 Compile a GPIO Control
Application and Test It On
the Raspberry Pi
ADMIN
59 Use Pound on RHEL to
Balance the Load on Web
Servers
67 Boost the Performance of
CloudStack with Varnish
74 Use Wireshark to
Detect ARP Spoofing
77 Make Your Own PBX with Asterisk
OPEN GURUS
80 How to Make Your USB Boot
with Multiple ISOs
86 Contiki OS Connecting
Microcontrollers to the
Internet of Things
Contents
REGULAR FEATURES
08 You Said It...
09 Offers of the Month
10 New Products
13 FOSSBytes
25 Editorial Calendar
100 Tips & Tricks
105 FOSS Jobs
35 Experimenting with More Functions in Haskell
63 Why We Need to Handle Bounced Emails
4 | SEPTEMBER 2014 | OPEN SOURCE FOR YOU | www.OpenSourceForU.com
YOU SAID IT
Online access to old issues
I want all the issues of OSFY from 2011, right up to the
current issue. How can I get these online, and what would be
the cost?
c kiran kumar;
kirru.chappidi@gmail.com
ED: It feels great to know that we have such valuable readers.
Thank you, Kiran, for bringing this request to us. You can avail of
all the back issues of Open Source For You in e-zine format at
www.ezines.efyindia.com
Request for a sample issue
I am with a company called Relia-Tech, which is a brick-
and-mortar computer service company. We are interested in
subscribing to your magazine. Would you be willing to send us a
magazine to check out before we commit to anything?
Lindsay Steele;
lsteele@relia-tech.net
ED: Thanks for your mail. You can visit our website
www.ezine.lfymag.com and access our sample issue.
A thank-you and a request for more help
I began reading your magazine in my college library and
thought of offering some feedback.
I was facing a problem with Oracle VirtualBox, but after
reading an article on the topic in OSFY, the task became so easy.
Thanks for the wonderful help. I am also trying to set up
my local (LAN-based) Git server, but I have no idea how to
set it up. I have worked a little with GitHub. I do wish your
magazine would feature content on this topic in upcoming
editions.
Abhinav Ambure;
adambure21@gmail.com
ED: Thank you so much for your valuable feedback. We
really value our readers and are glad that our content proves
helpful to them. We will surely look into your request and
try to include the topic you have asked for in upcoming
issues. Keep reading OSFY and continue sending us your
feedback!

Share Your Feedback
Please send your comments or suggestions to:
The Editor, Open Source For You,
D-87/1, Okhla Industrial Area, Phase I,
New Delhi 110020; Phone: 011-26810601/02/03;
Fax: 011-26817563; Email: osfyedit@efy.in
Annual subscription
I've bought the July 2014 issue of OSFY and I loved
it. I want the latest version of Ubuntu 14.04 LTS and the
programming tools (JDK and other tools for C, C++, Java and
Python). Also, how can I subscribe to your magazine for one
year, and can I get it at my village (address enclosed)?
Parveen Kumar;
parveen199214@gmail.com
ED: Thank you for the compliments. We're glad to know that
you enjoy reading our magazine. We will definitely look into
your request. Also, I am forwarding your query regarding
subscribing to the magazine to the concerned team. Please
feel free to get back to us in case of any other suggestions or
questions. We're always happy to help.
Availability of OSFY in your city
I want to purchase Open Source For You for the
library in my organisation, but I am unable to find copies
in the city I live in (Jabalpur in Madhya Pradesh). Nor can
I opt for a subscription. Please give me the name
of the distributor or dealer in my city through whom I can
purchase the magazine.
Gaurav Singh;
gaurav_kumar_singh@hotmail.com
ED: We have a website where you can locate the nearest store
in your city that supplies Open Source For You. Do log on
to http://ezine.lfymag.com/listwholeseller.asp. You will find
there are two dealers of the magazine in your city: Sahu News
Agency (Sanjay Sahu, Ph: 09301201157) and Janta News
Agency (Harish, Ph: 09039675118). They can ensure regular
supply of the magazine to your organisation.
OFFERS OF THE MONTH
www.space2host.com
Get 10%
discount
Free Dedicated hosting/VPS for one
month. Subscribe for annual package
of Dedicated hosting/VPS and get
one month FREE
Reseller package special offer !
Contact us at 09841073179
or Write to sales@space2host.com
Rs 2,000 coupon: no conditions attached for a trial of our
cloud platform (free trial coupon).
Enjoy, and please share feedback at
sales@cloudoye.com
For more information, call us on
1800-212-2022 / +91-120-666-7718
www.cloudoye.com www.esds.co.in
Hurry! Offer valid till 30th September 2014!
Free Dedicated Server Hosting
for one month
For more information, call us
on 1800-209-3006/ +91-253-6636500
One
month
free
Subscribe for our Annual Package of Dedicated
Server Hosting & enjoy one month free service
Subscribe for the Annual Packages of
Dedicated Server Hosting & Enjoy Next
12 Months Free Services
Get
12 Months
Free
For more information, call us on
1800-212-2022 / +91-120-666-7777
www.goforhosting.com
Hurry! Offer valid till 30th September 2014!
Pay Annually & get 12 Month Free
Services on Dedicated Server Hosting
Get 35% off on course fees and if you appear
for two Red Hat exams, the second shot is free
35%
off & more
Contact us @ 98409 82184/85 or
Write to enquiry@vectratech.in
www.vectratech.in
Hurry! Offer valid till 30th September 2014!
Do not wait! Be a part of
the winning team
www.prox.packwebhosting.com
Contact us at 98769-44977 or
Write to support@packwebhosting.com
Get 25%
Off
Considering VPS or a Dedicated
Server? Save Big !!! And go
with our ProX Plans
Hurry! Offer valid till 30th September 2014!
PACKWEB ProX
PACK WEB HOSTING
Time to go PRO now
25% Off on ProX Plans - Ideal for running
High Traffic or E-Commerce Website
Coupon Code : OSFY2014
Contact us at +91-98453-65845 or
Write to babu_krishnamurthy@yahoo.com
Pay the most
competitive
Fee
EMBEDDED SOFTWARE DEVELOPMENT
COURSES AND WORKSHOPS
Faculty: Mr. Babu Krishnamurthy
Visiting Faculty / CDAC/ ACTS with 18 years
of Industry and Faculty Experience
Date: 20-21 Sept 2014 (2-day programme)
Embedded RTOS -Architecture, Internals
and Programming - on ARM platform
To advertise here, contact
Omar on +91-995 888 1862 or
011-26810601/02/03 or
Write to omar.farooq@efy.in
www.opensourceforu.com
COURSE FEE: Rs 5,620 (all inclusive)
Ubuntu 14.04.1 LTS is out
Ubuntu 14.04 LTS has been around for quite some
time now and most people must have upgraded to it.
Another smaller update, 14.04.1, is now ready. Canonical
has announced that this Ubuntu update fixes many
bugs and includes security updates. There is also a list of
bugs and other updates in Ubuntu 14.04.1 that you might
want to have a look at, in order to see the scope of this
update. If you haven't upgraded to 14.04.1 yet, do so as
soon as possible. It is a worthy upgrade if you use an older
version of Ubuntu.
Android Device Manager makes it easier to
search for lost phones!
Google has released an update to Android Device
Manager that will help improve security for device
users. The latest version is 1.3.8. It lets the owner
add a phone number to the remote locking screen,
and the lock screen password can also be changed.
An optional message can also be set up. If a phone
number is added, then a big green button will appear
on the lock screen saying 'Call owner'. If the lost phone
is found by someone, the owner can then be easily
contacted. Earlier, only a message could be added by
users. The call-back number can be set up through the
Android Device Manager app as well as the Web interface,
if another Android device is not at hand. Both the
message and call-back features are optional, though.
But it's highly recommended that these features are used, so that a lost
phone can be easily found.
Ubuntu's Amazon shopping feature complies
with UK Data Protection Act
The independent body investigating
the implementation of Ubuntu's Unity
Shopping Lens feature and its compliance
with the UK Data Protection Act (DPA) of
1998 has found no instances of Canonical
being in breach of the act. Ubuntu's
controversial Amazon shopping feature
has been found to be compliant with
relevant data protection and privacy laws
in the UK, something that was checked in response to a complaint filed by blogger
Luis de Sousa last year. Notably, the feature sends queries made in the Dash to an
intermediary Canonical server, which forwards them to Amazon. The e-commerce
giant then returns product suggestions matching the query back to the Dash. The
feature also sends out non-identifiable location data in the process.
FOSSBYTES
VLC 2.1.5 has been
released
VideoLAN has announced the
release of the final update in the
2.1.x series of its popular open
source, cross-platform media player
and streaming media server: the
VLC media player. VLC 2.1.5 is
now available for download and
installation on Windows, Mac and
Linux operating systems. Notably,
the next big release for the VLC
media player will be that of the
2.2.x branch. A careful look at the
change log reveals that although the
VLC 2.1.5 update has been released
across multiple platforms, the most
noticeable improvements are for OS
X users. Others could consider it a
minor update.
For OS X users, VLC 2.1.5
brings additional stability
to the Qtsound capture module as
well as improved support for Reti.
Other notable changes (for the OS
X platform) include compilation
fixes for OS/2 operating systems.
Also, MP3 file conversions will no
longer be renamed .raw under the
Qt interface following the update. A
few decoder fixes will now benefit
DxVA2 sample decoding, MAD
resistance to broken MP3 streams
and PGS alignment tweaks for MKV.
In terms of security, the new release
comes with fixes for GnuTLS and
libpng as well. One should remember
that VLC is a portable, free and open
source, cross-platform media player
and streaming media server written by
the VideoLAN project that supports
many audio and video compression
methods and file formats. It comes
with a large number of free decoding
and encoding libraries, thereby
eliminating the need to find or
calibrate proprietary plugins.
Powered by www.efytimes.com
CALENDAR OF FORTHCOMING EVENTS

4th Annual Datacenter Dynamics Converged; September 18, 2014; Bengaluru
Description: The event aims to assist the community in the data centre domain by
exchanging ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: Praveen.nair@datacenterdynamics.com;
Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa
Description: CIOs and senior IT executives from across the world will gather at this
event, which offers talks and workshops on new ideas and strategies in the IT industry.
Contact: Website: http://www.gartner.com

Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru
Description: Asia's premier open source conference, which aims to nurture and
promote the open source ecosystem across the sub-continent.
Contact: Omar Farooq; Email: omar.farooq@efy.in; Ph: 09958881862;
Website: http://www.osidays.com

CeBIT; November 12-14, 2014; BIEC, Bengaluru
Description: This is one of the world's leading business IT events, and offers a
combination of services and benefits that will strengthen the Indian IT and ITES markets.
Contact: Website: http://www.cebit-india.com/

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh
Description: The event aims to assist the community in the data centre domain by
exchanging ideas, accessing market knowledge and launching new initiatives.
Contact: Praveen Nair; Email: Praveen.nair@datacenterdynamics.com;
Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

HostingCon India; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai
Description: This event will be attended by Web hosting companies, Web design
companies, domain and hosting resellers, ISPs and SMBs from across the world.
Contact: Website: http://www.hostingcon.com/contact-us/
According to Sousa, the Shopping Lens implementation contravened a
1995 EU Directive on the protection of users' personal data. Sousa had provided
a number of instances to put forward his point. Initially, Sousa began by reaching
out to Canonical for clarification, but to no avail. He was finally forced to file a
complaint with the Information Commissioner's Office regarding his security
concerns. Finally, the ICO responded to Sousa's request for clarification by clearly
stating that the Shopping Lens feature complies with the DPA (Data Protection Act)
and in no way breaches users' privacy.
Oracle launches Solaris 11.2 with OpenStack support
Oracle Corp recently launched the latest
version of its Solaris enterprise UNIX
platform: Solaris 11.2. Notably, this new
version had been in beta since April. The
latest release comes with several key
enhancements: support for OpenStack
as well as software-defined networking
(SDN). Additionally, there are various
security, performance and compliance
enhancements introduced in Oracle's
new release. Solaris 11.2 comes with OpenStack integration, which is perhaps its
most crucial enhancement. The latest version runs the most recent version of the
popular toolbox for building clouds: OpenStack Havana. Meanwhile, the inclusion
of software-defined networking (SDN) support is seen as part of Oracle's ongoing
effort to transform its Exalogic Elastic Cloud into one-stop data centres. Until now,
Exalogic boxes were being increasingly used as massive servers or for transaction
processing. They were therefore not fulfilling their real purpose, which is to work
Here's what's new in Linux 3.16
The founder of Linux, Linus Torvalds,
recently announced the release of the stable
build of Linux 3.16. For developers, this
version is known as 'Shuffling Zombie Juror'.
There are a host of improvements and new
features in this new stable build of Linux.
These include new and improved drivers,
and some complex integral improvements
like a unified control hierarchy. This new
Linux 3.16 stable version will be ideal for
the Ubuntu 14.10 kernel. LTS version
users will get this update once the 14.10
kernel is released.
Shutter 0.92 for Linux released
and fixes a number of bugs
Users have had some trouble using the
popular Shutter screenshot tool for Linux,
owing to the many irritating bugs and
stability issues that came along with it. But
they are in for a pleasant surprise, as
developers have now released a new bug-fix
version of the tool that aims to address some
of its more prominent issues. The new bug-fix
release, Shutter 0.92, is now available for
download for the Linux platform, and a
number of stability issues have been dealt
with for good.
Open source community irked
by broken Linux kernel patches
One of the many fine threads that bind the
open source community is the avid participation
and cooperation between developers across
the globe, with the common goal of improving
the Linux kernel. However, not everyone out
there is actually trying to help, as recent
happenings suggest. Trolls exist even in the
Linux community, and one who has managed
to make a big impression is Nick Krause.
Krause's recent antics have led to significant
bouts of frustration among Linux kernel
maintainers. Krause continuously tries to get
broken patches past the maintainers, though
his goals are not very clear at the moment.
Many developers believe that Krause aims to
damage the Linux kernel. While that might
be a distant dream for him (at least for now),
he has managed to irk quite a lot of people,
slowing down the whole development process
because of the need to keep fixing the broken
patches he introduces.
as cloud-hosting systems. However, with SDN support added, Oracle is aiming
to change all this. Oracle plans to directly take on network equipment makers
like Cisco, Hewlett-Packard and Brocade with the introduction of Solaris 11.2.
Enterprises using Solaris can now simply purchase a handful of Solaris boxes and
run their mission-critical clouds. In addition, they can also use bits of OpenStack
without acquiring additional hardware.
Canonical launches Ubuntu 12.04.5 LTS
Marking its fifth point release, Canonical has announced that Ubuntu 12.04.5 LTS
is available for download and installation. Ubuntu 12.04
LTS was first released back in April 2012. Canonical
will continue supporting the LTS until 2017, with regular
updates from time to time. Also, this is the first major
release from Canonical since the debut of Ubuntu 14.04
LTS earlier this year. The most notable improvement
in the new release is the inclusion of an updated kernel
(3.13) and X.org stack, both of which have been backported
from Ubuntu 14.04 LTS. The new release is out now for
desktop, server, cloud and core products, as well as other
flavours of Ubuntu with long-term support. In addition, the new release also comes
with security updates and corrections for other high-impact bugs, with a focus
on maintaining stability and compatibility with Ubuntu 12.04 LTS. Meanwhile,
Kubuntu 12.04.5 LTS, Edubuntu 12.04.5 LTS and Ubuntu Studio 12.04.5 LTS are
also available for download and installation.
Storm Energy's SunSniffer charmed by Raspberry Pi!
The humble Raspberry Pi single board computer is indeed going places, receiving critical
acclaim for, well, being downright awesome. The latest to be smitten by it is the German
company Storm Energy, which builds products like SunSniffer, a
solar plant monitoring system. The SunSniffer system is designed
to monitor photovoltaic (PV) solar power installations of varied
sizes. The company has now upgraded the system to a Linux-
based platform running on a Raspberry Pi. In addition to this, the
latest SunSniffer version also comes with a custom expansion
board and a customised Linux OS. The SunSniffer is IP65-rated,
and the new Connection Box's custom Raspberry Pi expansion
board comes with five RS-485 ports and eight analogue/digital
I/O interfaces to help simultaneously monitor a wide variety
of solar inverters (Refusol, Huawei and Kostal, among others). In short, the new system
can remotely control solar inverters via a radio ripple control receiver, as against earlier
versions, with which users could only monitor their data.
The Raspberry Pi-laden SunSniffer also offers SSL encryption and optional
integrated anti-theft protection.
Italian city of Turin switching to open source technology
In a recent development, the Italian city of Turin is considering ditching all
Microsoft products in favour of open source alternatives. The move is directly
aimed at cutting government costs, while not compromising on functionality. If
Turin does get rid of all proprietary software, it will go on to become one of the first
Italian open source cities and save itself at least a whopping six million Euros. A
report suggests that as many as 8,300 computers of the local administration in Turin
will soon have Ubuntu under the hood and will be shipped with the Mozilla Firefox
Android-x86 4.4 R1
Linux distro available for
download and testing
The team behind Android-x86
recently launched version 4.4 R1 of
the port of the Android OS designed
specifically for the x86 platform.
Android-x86 4.4 KitKat is now
available for download and testing
on the Linux platform for your
PC. Android is actually based on a
modified Linux kernel, with many
believing it to be a standalone Linux
distribution in its own right. That
said, developers have managed to
tweak Android to port it to the
PC for x86 platforms; that's what
Android-x86 is really all about.
Linux Mint Debian edition
to switch from snapshot
cycle to Debian stable
package base
The team behind Linux Mint has
decided to let go of the current
snapshot cycle in the Debian edition
of the Linux distribution and instead
switch over to a Debian stable
package base. The current Linux
Mint editions are based on Ubuntu,
and the team is most likely to stick
to that for at least a couple of years.
The team recently launched the
latest iteration of Linux Mint, a.k.a.
Qiana. Both the Cinnamon and
MATE versions are now available for
download, with the KDE and Xfce
versions expected to come out soon.
Meanwhile, it has been announced
that the next three Linux Mint
releases would also, in all probability,
be based on Ubuntu 14.04 LTS.
Web browser and OpenOffice, the two joys of the open source world. The local
government has argued that a large amount of money is spent on licences for
proprietary software, wasting a lot of local taxpayers' money. Therefore,
a decision to drop Microsoft in favour of cost-effective open source alternatives
seems to be a viable option.
LibreOffice coming to Android
LibreOffice needs no introduction. The Document Foundation's popular open
source office suite is widely used by millions of people across the globe. Therefore,
news that the suite could soon be
launched on Android is something to
watch out for. You heard that right! A
new report by Tech Republic suggests
that the Document Foundation is
currently working rigorously to
make this happen. However, as things
stand, there is still some time before that happens for real. Even as the Document
Foundation came out with the first Release Candidate (RC) version of the upcoming
LibreOffice 4.2.5 recently (it has been quite consistent in updating its stable version
on a timely basis), work is on to make LibreOffice available for Google's much
loved Android platform as well, the report says. The buzz is that developers
are currently talking about (and working at) getting the file size right, that is,
something well below the Google limit. Until they are able to do that, LibreOffice
for Android remains a distant dream, sadly.
However, as and when this happens, LibreOffice would be in direct competition
with Google Docs. Since there is a genuine need for Open Document Format (ODF)
support in Android, the release might just be what the doctor ordered for many users.
This is more of a rumour at the moment, and things will get clearer in time. There is
no official word from either Google or the Document Foundation about this, but we
will keep you posted on developments. The recent release, the LibreOffice 4.2.5
RC1, meanwhile fixes many key bugs that plagued the last 4.2.4 final release.
This, in turn, has improved its usability and stability to a significant extent.
RHEL 6.6 beta is released; draws major inspiration from RHEL 7
Just so RHEL 6.x users (who wish to continue with this branch of the distribution for
a bit longer) don't feel left out, Red Hat has launched a beta release of its Red Hat
Enterprise Linux 6.6 (RHEL 6.6) platform. Taking much of its inspiration from the
recently released RHEL 7, the move is directed towards RHEL 6.x users, so that they
benefit from new platform features. At the same time, it comes with some really cool
features that are quite independent of RHEL 7 and which make the 6.6 beta stand out
on its own merits. Red Hat offers Application Binary Interface (ABI) compatibility
for RHEL for a period of ten years, so technically speaking, it cannot drastically
change major elements of an in-production release. Quite simply put, it can't and
won't change an in-production release in a way that could alter stability or existing
compatibility. This means that the new release on offer cannot go
much against the tide with respect to RHEL 6. Although the feature list for the RHEL
6.6 beta ties in closely with the feature list of the major release (6.0), this doesn't
mean RHEL 6.6 beta is simply old wine served in a new bottle. It does manage to
introduce some key improvements for RHEL 6.x users. To begin with, RHEL 6.6
beta includes some features that were first introduced with RHEL 7, the most notable
being Performance Co-Pilot (PCP). The new beta release will also offer RHEL 6.x
users more integrated Remote Direct Memory Access (RDMA) capabilities.
Khronos releases OpenGL NG
The Khronos Group recently announced
the latest iteration of OpenGL (the
oldest high-level 3D graphics API
still in popular use). Although OpenGL
4.5 is a noteworthy release in its own
right, it is the Group's second major
release, the next generation OpenGL
initiative, that is garnering widespread
appreciation. While OpenGL 4.5 is what
some might call a fairly standard annual
OpenGL update, OpenGL NG is a
complete rebuild of the OpenGL API,
designed with the idea of building an
entirely new version of OpenGL. This new
version will have a significantly reduced
overhead, owing to the removal of a lot
of abstraction. Also, it will do away with
the major inefficiencies of older versions
when working at a low level with the bare
metal GPU hardware.
Being a very high-level API, earlier
versions of OpenGL made it hard to
efficiently run code on the GPU directly.
While this didn't matter so much earlier,
things have now changed. Fuelled by
more mature GPUs, developers today
tend to ask for graphics APIs that allow
them to get much closer to the bare
metal. The next generation OpenGL
initiative is directed at developers who
are looking to improve performance and
reduce overhead.
Dropbox's updated Android
app offers improved features
A major update has been announced
by Dropbox for its official Android
app, and is available on Google Play.
This new update carries version
number 2.4.3 and comes with a lot
of improved features. As the Google
Play listing suggests, this new
Dropbox version supports in-app
previews of Word, PowerPoint and
PDF files. A better search experience
is also offered in this new version,
which keeps track of recent queries
and displays suggestions. One can
also now search in specific folders.
CPU socket
The central processing unit is the key component of a motherboard,
and the board's performance is primarily determined by the kind of
processor it is designed to hold. The CPU socket can be defined
as an electrical component that attaches to the
motherboard and is designed to house a microprocessor. So,
when you're buying a motherboard, you should look for a CPU
socket that is compatible with the CPU you plan to use.
Most of the time, motherboards use one of the following five
sockets: LGA1155, LGA2011, AM3, AM3+ and FM1. Some
of the sockets are backward compatible and some of the chips
are interchangeable. Once you opt for a motherboard, you will be
limited to using processors that fit its socket.
Form factor
A motherboard's capabilities are broadly determined by its
shape, size and how much it can be expanded; these aspects
are known as its form factor. Although there is no fixed design or
form for motherboards, and they are available in many variations,
two form factors have always been the favourites: ATX and
microATX. An ATX motherboard measures around 30.5cm
x 23cm (12 x 9 inches) and offers the highest number of
expansion slots, RAM bays and data connectors. MicroATX
motherboards measure 24.38cm x 24.38cm (9.6 x 9.6 inches) and
have fewer expansion slots, RAM bays and other components.
The form factor of a motherboard can be decided according to
the purpose the motherboard is expected to serve.
RAM bays
Random access memory (RAM) is considered the most important
workspace of a motherboard, where data is held while being
processed after being read from the hard disk drive or solid state drive. The
efficiency of your PC directly depends on the speed and size of your
RAM. The more space you have in your RAM, the more efficient
your computing will be. But it's no use having RAM with a greater
capability than your motherboard can support, as the extra
potential will just be wasted. Neither can you have RAM with a lesser
capability than the motherboard supports, as then the PC will not work well,
due to the bottlenecks caused by mismatched capabilities. Choosing
a motherboard that supports just the right RAM is vital.
Apart from these factors, there are many others to consider before
selecting a motherboard. These include the audio system, display,
LAN support, expansion capabilities and peripheral interfaces.
If you are a gamer, or like to customise your PC and build it from scratch, the motherboard is
what you require to link all the important and key components together. Let's find out how to
select the best desktop motherboards.
The central processing unit (CPU) can be considered to
be the brain of a system or a PC, in layman's language,
but it still needs a nervous system to connect it
with all the other components in your PC. A motherboard
plays this role, as all the components are attached to it and
to each other with the help of this board. It can be defined
as a PCB (printed circuit board) with the capability of
expansion. As the name suggests, a motherboard is believed
to be the mother of all the components attached to it,
including network cards, sound cards, hard drives, TV tuner
cards, slots, etc. It holds the most significant sub-systems:
the processor, along with other important components. A
motherboard is found in all electronic devices, like TVs,
washing machines and other embedded systems. Since it
provides the electrical connections through which the other
components are connected and linked with each other, it needs
the most attention. Unlike a backplane, it hosts other devices
and sub-systems and also contains the central processing unit.
There are quite a lot of companies that deal in
motherboards, and Simmtronics is among the leading players.
According to Dr Inderjeet Sabbrawal, chairman, Simmtronics,
"Simmtronics has been one of the exclusive manufacturers of
motherboards in the hardware industry over the last 20 years. We
strongly believe in creativity, innovation and R&D. Currently, we
are fulfilling our commitment to provide the latest mainstream
motherboards. At Simmtronics, the quality of the motherboards
is strictly controlled. At present, the market is not growing.
India still has a varied market for older generation models as well
as the latest models of motherboards."
Factors to consider while buying a motherboard
In a desktop, several essential units and components
are attached directly to the motherboard, such as the
microprocessor, main memory, etc. Other components, such
as the external storage, controllers for sound and video display,
and various peripheral devices, are attached to it through
slots, plug-in cards or cables. There are a number of factors to
keep in mind while buying a motherboard, and these depend
on your specific requirements. Linux is slowly taking over the
PC world and, hence, people now look for Linux-supported
motherboards. As a result, almost every motherboard now
supports Linux. The main factors to keep in mind when
buying a Linux-supported motherboard are discussed below.
Buyers Guide
Motherboards
The Lifeline of Your Desktop
A few desktop motherboards with the latest chipsets

Intel DZ87KLT-75K motherboard
Supported CPU: Fourth generation Intel Core i7 processor, Intel Core i5
processor and other Intel processors in the LGA1150 package
Memory supported: 32GB of system memory, dual channel DDR3
2400+ MHz, DDR3 1600/1333 MHz
Form factor: ATX

Simmtronics SIMM-INT H61 (V3) motherboard
CPU supported: Intel 2nd and 3rd Generation Core i7/i5/i3/Pentium/Celeron
Main memory supported: Dual channel DDR3 1333/1066
BIOS: 132MB Flash ROM
Connectors: 14-pin ATX 12V power connector
Chipset: Intel H61 (B3 Version)

Asus Z87-K motherboard
Supported CPU: Fourth generation Intel Core i7 processor, Intel Core i5
processor and other Intel processors
Memory supported: Dual channel memory architecture; supports Intel XMP
Form factor: ATX

Gigabyte Technology GA-Z87X-OC motherboard
CPU supported: Fourth generation Intel Core i7 processor, Intel Core i5
processor and other Intel processors
Memory supported: Supports DDR3 3000
Form factor: MicroATX

By: Manvi Saxena
The author is a part of the editorial team at EFY.
CodeSport
Sandya Mannarswamy
For the past few months, we have been discussing
information retrieval and natural language processing,
as well as the algorithms associated with them. This
month, we continue our discussion on natural language
processing (NLP) and look at how NLP can be applied
in the field of software engineering. Given one or many
text documents, NLP techniques can be applied to extract
information from them. The software engineering (SE)
lifecycle gives rise to a number of textual documents, to
which NLP can be applied.
So what are the software artifacts that arise in SE?
During the requirements phase, a requirements document
is an important textual artifact. This specifies the expected
behaviour of the software product being designed, in terms
of its functionality, user interface, performance, etc. It is
important that the requirements being specified are clear
and unambiguous, since during product delivery, customers
would like to confirm that the delivered product meets all
their specified requirements.
Having vague, ambiguous requirements can hamper
requirement verification. So text analysis techniques can
be applied to the requirements document to determine
whether there are any ambiguous or vague statements.
For instance, consider a statement like, "Servicing of user
requests should be fast, and request waiting time should
be low." This statement is ambiguous since it is not clear
what exactly the customer's expectations of fast service
or low waiting time may be. NLP tools can detect such
ambiguous requirements. It is also important that there are
no logical inconsistencies in the requirements. For instance,
a requirement that "Login names should allow a maximum
of 16 characters" and one that "The login database will have
a field for login names which is 8 characters wide" conflict
with each other. While the user interface allows up to a
maximum of 16 characters, the backend login database
will support fewer characters, which is inconsistent with
the earlier requirement. Though currently such inconsistent
requirements are flagged by human inspection, it is possible
to design text analysis tools to detect them.
The software design phase also produces a number of
SE artifacts such as the design document, design models
in the form of UML documents, etc, which can also be
mined for information. Design documents can be analysed
to generate automatic test cases in order to test the final
product. During the development and maintenance phases,
a number of textual artifacts are generated. Source code
itself can be considered a textual document. Apart from
source code, source code control system logs such as SVN/
GIT logs, Bugzilla defect reports, developers' mailing lists,
field reports, crash reports, etc, are the various SE artifacts to
which text mining can be applied.
Various types of text analysis techniques can be applied
to SE artifacts. One popular method is duplicate or similar
document detection. This technique can be applied to
find duplicate bug reports in bug tracking systems. A
variation of this technique can be applied to detect code
clones and copy-and-paste snippets.
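As a rough sketch of the idea (not how any particular bug tracker actually implements it), two reports can be compared with bag-of-words cosine similarity using only the Python standard library; real duplicate detectors layer TF-IDF weighting, stemming and tuned thresholds on top of this:

```python
import math
import re
from collections import Counter

def cosine_similarity(a, b):
    """Cosine similarity between the bag-of-words vectors of two texts."""
    wa = Counter(re.findall(r'\w+', a.lower()))
    wb = Counter(re.findall(r'\w+', b.lower()))
    dot = sum(wa[w] * wb[w] for w in set(wa) & set(wb))
    norm = (math.sqrt(sum(v * v for v in wa.values())) *
            math.sqrt(sum(v * v for v in wb.values())))
    return dot / norm if norm else 0.0

# Invented example reports: a and b describe the same defect, c does not.
report_a = "Crash when saving file to network drive"
report_b = "Application crash on saving a file to a network drive"
report_c = "Font rendering is blurry on HiDPI screens"

print(cosine_similarity(report_a, report_b))  # high: likely duplicates
print(cosine_similarity(report_a, report_c))  # low: unrelated reports
```

A deployed system would flag pairs whose similarity exceeds some empirically chosen threshold for human review.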
Automatic summarisation is another popular technique
in NLP. These techniques try to generate a summary of a
given document by looking for the key points contained in it.
There are two approaches to automatic summarisation. One
is known as extractive summarisation, using which key
phrases and sentences in the given document are extracted
and put back together to provide a summary of the document.
The other is the abstractive summarisation technique, which
is used to build an internal semantic representation of the
given document, from which key concepts are extracted, and
a summary generated using natural language understanding.
The abstractive summarisation technique is close to how
humans would summarise a given document. Typically, we
would proceed by building a knowledge representation of
the document in our minds and then using our own words
to provide a summary of the key concepts. Abstractive
summarisation is obviously more complex than extractive
summarisation, but yields better summaries.
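A frequency-based extractive summariser can be sketched in a few lines of Python. The scoring used here (average corpus frequency of a sentence's words) is a deliberately simple stand-in for the ranking functions real summarisers use:

```python
import re
from collections import Counter

def extractive_summary(text, k=1):
    """Return the k highest-scoring sentences, kept in original order.
    A sentence scores by the mean corpus frequency of its words."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(sentence):
        words = re.findall(r'\w+', sentence.lower())
        return sum(freq[w] for w in words) / (len(words) or 1)

    top = sorted(sentences, key=score, reverse=True)[:k]
    return ' '.join(s for s in sentences if s in top)

# Invented miniature "bug report"; the off-topic sentence should be dropped.
text = ("The parser crashes on empty input. "
        "The crash happens because the parser reads past the buffer. "
        "Lunch was good today.")
print(extractive_summary(text, k=1))
```

The sentence sharing the most vocabulary with the rest of the document wins, which is exactly the extractive intuition described above.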
Coming to SE artifacts, automatic summarisation
techniques can be applied to generate summaries of large bug
reports. They can also be applied to generate high-level
comments for the methods contained in source code. In this case, each method
can be treated as an independent document, and the high-level
comment associated with that method or function is nothing but a
short summary of the method.
Another popular text analysis technique involves the use of
language models, which enables predicting what the next word
would be in a particular sentence. This technique is typically used in
optical character recognition (OCR) generated documents, where due
to OCR errors, the next word is not visible or gets lost and hence the
tool needs to make a best case estimate of the word that may appear
there. A similar need also arises in the case of speech recognition
systems. In case of poor speech quality, when a sentence is being
transcribed by the speech recognition tool, a particular word may
not be clear or could get lost in transmission. In such a case, the tool
needs to predict what the missing word is and add it automatically.
Language modelling techniques can also be applied in intelligent
development environments (IDE) to provide auto-completion
suggestions to the developers. Note that in this case, the source code
itself is being treated as text and is analysed.
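The simplest language model of this kind is a bigram model: count which word follows which in a training corpus, and predict the most frequent successor. A toy sketch (the corpus below is invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word bigrams across a list of training sentences."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Most likely next word after `word`, or None if the word is unseen."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = [
    "open the file",
    "open the socket",
    "close the file",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # 'file' follows 'the' twice, 'socket' once
```

The same counting idea, scaled up to n-grams over millions of tokens (or over tokenised source code, for IDE auto-completion), underlies the OCR and speech-recognition uses described above.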
Classifying a set of documents into specific categories is another
well-known text analysis technique. Consider a large number of news
articles that need to be categorised based on their topic or genre, such
as politics, business, sports, etc. A number of well-known text analysis
techniques are available for document classification. Document
classification techniques can also be applied to defect reports in SE to
determine the category to which a defect belongs. For instance, security-related
bug reports need to be prioritised. While people currently
inspect bug reports, or search for specific keywords in a bug category
field in Bugzilla reports in order to classify them, more robust
and automated techniques are needed to classify defect reports in large-scale
open source projects. Text analysis techniques for document
classification can be employed in such cases.
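One classical technique for this is multinomial Naive Bayes. The sketch below trains on invented one-line defect reports and sorts new reports into hypothetical 'security' and 'ui' categories; a real classifier would need far more training data and feature engineering:

```python
import math
import re
from collections import Counter, defaultdict

class NaiveBayes:
    """Multinomial Naive Bayes text classifier with add-one smoothing."""

    def fit(self, docs):
        self.word_counts = defaultdict(Counter)  # per-class word counts
        self.class_counts = Counter()            # per-class document counts
        self.vocab = set()
        for text, label in docs:
            words = re.findall(r'\w+', text.lower())
            self.word_counts[label].update(words)
            self.class_counts[label] += 1
            self.vocab.update(words)
        return self

    def predict(self, text):
        words = re.findall(r'\w+', text.lower())
        total_docs = sum(self.class_counts.values())
        best, best_lp = None, float('-inf')
        for label in self.class_counts:
            # log prior + smoothed log likelihood of each word
            lp = math.log(self.class_counts[label] / total_docs)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in words:
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best

train = [
    ("buffer overflow allows remote code execution", "security"),
    ("XSS injection in the login form", "security"),
    ("button misaligned on the settings page", "ui"),
    ("wrong font colour in the dialog", "ui"),
]
clf = NaiveBayes().fit(train)
print(clf.predict("possible overflow in the parser"))
```

A triage bot built this way could, for example, route reports predicted as 'security' straight to a prioritised queue.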
Another important need in the SE lifecycle is to trace source
code to its origin in the requirements document. If a feature X
is present in the source code, what is the requirement Y in the
requirements document which necessitated the development
of this feature? This is known as traceability of source code to
requirements. As source code evolves over time, maintaining
traceability links automatically through tools is essential to
scale out large software projects. Text analysis techniques
can be employed to connect a particular requirement from the
requirements document to a feature in the source code and hence
automatically generate the traceability links.
We have now covered automatic summarisation techniques
for generating summaries of bug reports and generating header
level comments for methods. Another possible use for such
techniques in SE artifacts is to enable the automatic generation
of user documentation associated with that software project.
A number of text mining techniques have been employed to
mine Stack Overflow and mailing lists to generate automatic user
documentation or FAQ documents for different software projects.
Regarding the identification of inconsistencies in the
requirements document, inconsistency detection techniques
can be applied to source code comments also. It is a general
expectation that source code comments express the programmer's
intent. Hence, the code written by the developer and the comment
associated with that piece of code should be consistent with each
other. Consider the simple code sample shown below:
/* linux/drivers/scsi/in2000.c: */

/* caller must hold instance lock */
static int reset_hardware()
{
	...
}

static int in2000_bus_reset()
{
	...
	reset_hardware();
}
In the above code snippet, the developer has expressed, as a
code comment, the intention that instance_lock must be held
before the function reset_hardware is called. However, in the
actual source code, the lock is not acquired before the call to
reset_hardware is made. This is a logical inconsistency, which can
arise due to either: (a) comments being outdated with respect to the
source code; or (b) incorrect code. Hence, flagging such errors is
useful to the developer, who can fix either the comment or the code,
depending on which is incorrect.
My must-read book for this month
This month's book suggestion comes from one of our readers,
Sharada, and her recommendation is very appropriate to the
current column. She recommends an excellent resource for natural
language processing: a book called 'Speech and Language
Processing: An Introduction to Natural Language Processing' by
Jurafsky and Martin. The book describes different algorithms for
NLP techniques and can be used as an introduction to the subject.
Thank you, Sharada, for your valuable recommendation.
If you have a favourite programming book or article that you
think is a must-read for every programmer, please do send me
a note with the book's name, and a short write-up on why you
think it is useful, so I can mention it in the column. This would
help many readers who want to improve their software skills.
If you have any favourite programming questions/software
topics that you would like to discuss on this forum, please
send them to me, along with your solutions and feedback, at
sandyasm_AT_yahoo_DOT_com. Till we meet again next
month, happy programming!
The author is an expert in systems software and is currently working
with Hewlett Packard India Ltd. Her interests include compilers,
multi-core and storage systems. If you are preparing for systems
software interviews, you may find it useful to visit Sandya's LinkedIn
group 'Computer Science Interview Training India' at
http://www.linkedin.com/groups?home=&gid=2339182
By: Sandya Mannarswamy
Guest Column: Exploring Software
Hadoop is a large scale, open source storage and processing
framework for data sets. In this article, the author sets up Hadoop
on a single node, takes the reader through testing it, and later
tests it on multiple nodes.
Exploring Big Data on a Desktop
Getting Started with Hadoop
Fedora 20 makes it easy to install Hadoop. Version 2.2
is packaged and available in the standard repositories.
It will place the configuration files in /etc/hadoop,
with reasonable defaults so that you can get started easily. As
you may expect, managing the various Hadoop services is
integrated with systemd.
Setting up a single node
First, start an instance, with name h-mstr, in OpenStack
using a Fedora Cloud image (http://fedoraproject.
org/get-fedora#clouds). You may get an IP like
192.168.32.2. You will need to choose at least the
m1.small flavour, i.e., 2GB RAM and 20GB disk. Add
an entry in /etc/hosts for convenience:
192.168.32.2 h-mstr
Now, install and test the Hadoop packages on the virtual
machine by following the article, http://fedoraproject.org/
wiki/Changes/Hadoop:
$ ssh fedora@h-mstr
$ sudo yum install hadoop-common hadoop-common-native hadoop-hdfs \
  hadoop-mapreduce hadoop-mapreduce-examples hadoop-yarn
It will download over 200MB of packages and take about
500MB of disk space.
Create an entry in the /etc/hosts file for h-mstr using the
name in /etc/hostname, e.g.:
192.168.32.2 h-mstr h-mstr.novalocal
Now, you can test the installation. First, run a script to
create the needed hdfs directories:
$ sudo hdfs-create-dirs
Then, start the Hadoop services using systemctl:
$ sudo systemctl start hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager
You can find out the hdfs directories created as
follows. The command may look complex, but you are
running the hadoop fs command in a shell as Hadoop's
internal user, hdfs:
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs -ls /"
Found 3 items
drwxrwxrwt - hdfs supergroup 0 2014-07-15 13:21 /tmp
drwxr-xr-x - hdfs supergroup 0 2014-07-15 14:18 /user
drwxr-xr-x - hdfs supergroup 0 2014-07-15 13:22 /var
Testing the single node
Create a directory with the right permissions for the user,
fedora, to be able to run the test scripts:
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-chown fedora /user/fedora"
Disable the firewall and iptables and run a
mapreduce example. You can monitor the progress at
http://h-mstr:8088/. Figure 1 shows an example running
on three nodes.
The first test is to calculate pi using 10 maps and
1,000,000 samples. It took about 90 seconds to estimate the
value of pi to be 3.1415844.
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar pi 10 1000000
In the next test, you create 10 million records of 100
bytes each, that is, 1GB of data (~1 min). Then, sort it (~8
min) and, finally, verify it (~1 min). You may want to clean
up the directories created in the process:
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teragen 10000000 gendata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar terasort gendata sortdata
$ hadoop jar /usr/share/java/hadoop/hadoop-mapreduce-examples.jar teravalidate sortdata reportdata
$ hadoop fs -rm -r gendata sortdata reportdata
Stop the Hadoop services before creating and working
with multiple data nodes, and clean up the data directories:
$ sudo systemctl stop hadoop-namenode hadoop-datanode \
hadoop-nodemanager hadoop-resourcemanager
$ sudo rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*
Testing with multiple nodes
The following steps simplify creation of multiple instances:
Generate ssh keys for password-less log in from any node
to any other node.
$ ssh-keygen
$ cat .ssh/id_rsa.pub >> .ssh/authorized_keys
In /etc/ssh/ssh_config, add the following to ensure that
ssh does not prompt for authenticating a new host the first
time you try to log in.
StrictHostKeyChecking no
In /etc/hosts, add entries for slave nodes yet to be created:
192.168.32.2 h-mstr h-mstr.novalocal
192.168.32.3 h-slv1 h-slv1.novalocal
192.168.32.4 h-slv2 h-slv2.novalocal
Now, modify the configuration files located in /etc/hadoop.
Edit core-site.xml and modify the value of fs.default.name
by replacing localhost with h-mstr:
<property>
<name>fs.default.name</name>
<value>hdfs://h-mstr:8020</value>
</property>
Edit mapred-site.xml and modify the value of mapred.job.tracker
by replacing localhost with h-mstr:
<property>
<name>mapred.job.tracker</name>
<value>h-mstr:8021</value>
</property>
Delete the following lines from hdfs-site.xml:
<!-- Immediately exit safemode as soon as one DataNode
checks in.
On a multi-node cluster, these configurations must be
removed. -->
<property>
<name>dfs.safemode.extension</name>
<value>0</value>
</property>
<property>
<name>dfs.safemode.min.datanodes</name>
<value>1</value>
</property>
Edit or create, if needed, slaves with the host names of the
data nodes:
[fedora@h-mstr hadoop]$ cat slaves
h-slv1
h-slv2
Add the following lines to yarn-site.xml so that multiple
node managers can be run:
<property>
<name>yarn.resourcemanager.hostname</name>
<value>h-mstr</value>
</property>
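If you are scripting this setup across several nodes, the property edits above can be automated. The following is a minimal sketch using Python's xml.etree, demonstrated on a throwaway sample file rather than the live /etc/hadoop configuration (the commented-out call shows how it would apply to core-site.xml as edited by hand above):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def set_hadoop_property(path, name, value):
    """Set (or add) a <property> in a Hadoop *-site.xml configuration file."""
    tree = ET.parse(path)
    root = tree.getroot()
    for prop in root.findall('property'):
        if prop.findtext('name') == name:
            prop.find('value').text = value
            break
    else:
        # Property not present yet: append a new <property> element.
        prop = ET.SubElement(root, 'property')
        ET.SubElement(prop, 'name').text = name
        ET.SubElement(prop, 'value').text = value
    tree.write(path)

# Demo on a throwaway copy of a minimal core-site.xml:
sample = ('<configuration><property><name>fs.default.name</name>'
          '<value>hdfs://localhost:8020</value></property></configuration>')
path = os.path.join(tempfile.mkdtemp(), 'core-site.xml')
with open(path, 'w') as f:
    f.write(sample)

set_hadoop_property(path, 'fs.default.name', 'hdfs://h-mstr:8020')
print(ET.parse(path).findtext('property/value'))

# Against a real node it would be, e.g.:
# set_hadoop_property('/etc/hadoop/core-site.xml',
#                     'fs.default.name', 'hdfs://h-mstr:8020')
```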
Now, create a snapshot, Hadoop-Base. Its creation will
take time. It may not give you an indication of an error if it
runs out of disk space!
Figure 1: OpenStack-Hadoop
Launch instances h-slv1 and h-slv2 serially using
Hadoop-Base as the instance boot source. Launching the
first instance from a snapshot is pretty slow. In case the IP
addresses are not the same as your guess in /etc/hosts, edit
/etc/hosts on each of the three nodes to the correct value. For
your convenience, you may want to make entries for h-slv1
and h-slv2 in the desktop's /etc/hosts file as well.
The following commands should be run from Fedora on
h-mstr. Reformat the namenode to make sure that the single
node tests are not causing any unexpected issues:
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop
namenode -format"
Start the hadoop services on h-mstr.
$ sudo systemctl start hadoop-namenode hadoop-datanode
hadoop-nodemanager hadoop-resourcemanager
Start the datanode and yarn services on the slave nodes:
$ ssh -t fedora@h-slv1 sudo systemctl start hadoop-datanode
hadoop-nodemanager
$ ssh -t fedora@h-slv2 sudo systemctl start hadoop-datanode
hadoop-nodemanager
Create the hdfs directories and a directory for user fedora
as on a single node:
$ sudo hdfs-create-dirs
The author has earned the right to do what interests him.
You can find him online at http://sethanil.com and http://sethanil.blogspot.com,
and reach him via email at anil@sethanil.com
By: Dr Anil Seth
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-mkdir /user/fedora"
$ sudo runuser hdfs -s /bin/bash /bin/bash -c "hadoop fs
-chown fedora /user/fedora"
You can run the same tests again. Although you are using
three nodes, the improvement in the performance compared to
the single node is not expected to be noticeable as the nodes
are running on a single desktop.
The pi example took about one minute on the three nodes,
compared to the 90 seconds taken earlier. Terasort took 7
minutes instead of 8.
Note: I used an AMD Phenom II X4 965 with 16GB
RAM to arrive at the timings. All virtual machines and their
data were on a single physical disk.
Both OpenStack and MapReduce are collections of
interrelated services working together. Diagnosing problems,
especially in the beginning, is tough, as each service has its
own log files. It takes a while to learn where to
look. However, once these are working, it is incredible how
easy they make distributed processing!
OSFY Magazine Attractions During 2014-15

MONTH           THEME                                 FEATURED LIST                        BUYERS GUIDE
March 2014      Network Monitoring                    Security                             -------------------
April 2014      Android Special                       Anti Virus                           Wi-Fi Hotspot Devices
May 2014        Backup and Data Storage               Certification                        External Storage
June 2014       Open Source on Windows                Mobile Apps                          UTMs for SMEs
July 2014       Firewall and Network Security         Web Hosting Solutions Providers      MFD Printers for SMEs
August 2014     Kernel Development                    Big Data Solution Providers          SSDs for Servers
September 2014  Open Source for Start-ups             Cloud                                Android Devices
October 2014    Mobile App Development                Training on Programming Languages    Projectors
November 2014   Cloud Special                         Virtualisation Solutions Providers   Network Switches and Routers
December 2014   Web Development                       Leading Ecommerce Sites              AV Conferencing
January 2015    Programming Languages                 IT Consultancy Service Providers     Laser Printers for SMEs
February 2015   Top 10 of Everything on Open Source   Storage Solutions Providers          Wireless Routers
Developers Insight
Have you ever wondered which module is slowing
down your Python program and how to optimise
it? Well, there are profilers that can come to
your rescue.
Profiling, in simple terms, is the analysis of a program
to measure the memory used by a certain module, the
frequency and duration of function calls, and the time
complexity of the same. Such profiling tools are termed
profilers. This article will discuss line_profiler for
Python.
Installation
Installing pre-requisites: Before installing line_profiler,
make sure you install these pre-requisites:
a) For Ubuntu/Debian-based systems (recent versions):

sudo apt-get install mercurial python python3 python-pip
python3-pip cython cython3

b) For Fedora systems:

sudo yum install -y mercurial python python3 python-pip

Note: 1. I have used the -y argument so that yum
automatically installs the packages without prompting for
confirmation.
2. Mac users can use Homebrew to install these packages.
Cython is a pre-requisite because the source releases
require a C compiler. If the Cython package is not found or is
too old in your current Linux distribution version, install it by
running the following command in a terminal:
sudo pip install Cython
Note: Mac OS X users can install Cython using pip.
Improve Python Code
by Using a Profiler
The line_profiler gives
a line-by-line analysis
of the Python code
and can thus identify
bottlenecks that slow
down the execution of
a program. By making
modifications to the
code based on the
results of this profiler,
developers can
improve the code and
refine the program.
Figure 1: line_profiler output
Cloning line_profiler: Let us begin
by cloning the line_profiler source code
from Bitbucket. To do so, run the following
command in a terminal:

hg clone https://bitbucket.org/robertkern/line_profiler

The above repository is the official
line_profiler repository, with support for
Python 2.4 - 2.7.x.
For Python 3.x support, we will
need to clone a fork of the official
source code that provides Python 3.x
compatibility for line_profiler and kernprof:

hg clone https://bitbucket.org/kmike/line_profiler
Installing line_profiler: Navigate to the cloned
repository by running the following command in a terminal:

cd line_profiler

To build and install line_profiler on your system, run the
following command:
a) For the official source (Python 2.4 - 2.7.x):

sudo python setup.py install

b) For the forked source (Python 3.x):

sudo python3 setup.py install
Using line_profiler
Adding the profiler to your code: Since line_profiler is
designed to be used as a decorator, we need to decorate the
function of interest with the @profile decorator. We can do so
by adding an extra line before the function, as follows:

@profile
def foo(bar):
    .....
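Note that @profile is injected into the script's namespace by kernprof at run time, so running the file directly with plain python raises a NameError. A common workaround is a no-op fallback, so the same file runs both ways (foo below is just an illustrative function, not from the article's case study):

```python
# @profile exists only when the script is run under kernprof; define a
# no-op fallback so the same file also runs with a plain `python example.py`.
try:
    profile
except NameError:
    def profile(func):
        return func

@profile
def foo(bar):
    # stand-in workload; the function you actually want to profile goes here
    return bar * 2

print(foo(21))  # prints 42
```

Under kernprof the real decorator records per-line timings; without it, the fallback simply returns the function unchanged.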
Running line_profiler: Once the slow module has been
decorated, the next step is to run line_profiler, which
will give a line-by-line breakdown of the code within the
profiled function.
Open a terminal, navigate to the folder where the .py file
is located and type the following command:

kernprof.py -l example.py; python3 -m line_profiler example.py.lprof

Note: I have combined both commands in a single
line, separated by a semicolon (;), to immediately show the
profiled results.
You can run the two commands separately, or run
kernprof.py with the -v argument to view the formatted result in
the terminal.
kernprof.py -l profiles the decorated function in
example.py line by line; the -l argument stores
the result in a binary file with a .lprof extension (here,
example.py.lprof).
We then run line_profiler on this binary file by using the
-m line_profiler argument. Here, -m is followed by the
module name, i.e., line_profiler.
Case study: We will use the Gnome-Music source code
for our case study. There is a module named _connect_view
in the view.py file, which handles the different views (artists,
albums, playlists, etc) within the music player. This module is
reportedly running slow because a variable is initialised each
time the view is changed.
By profiling the source code, we get the following result:
Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000627 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
211                                  @profile
212                                  def _connect_view(self):
213     4     205   51.2     32.7        vadjustment = self.view.get_vadjustment()
214     4     98    24.5     15.6        self._adjustmentValueId = vadjustment.connect(
215     4     79    19.8     12.6            'value-changed',
216     4     245   61.2     39.1            self._on_scrolled_win_change)

References
[1] http://pythonhosted.org/line_profiler/
[2] http://jacksonisaac.wordpress.com/2013/09/08/using-line_profiler-with-python/
[3] https://pypi.python.org/pypi/line_profiler
[4] https://bitbucket.org/robertkern/line_profiler
[5] https://bitbucket.org/kmike/line_profiler
In the above output, line number 213, vadjustment = self.view.get_vadjustment(),
is called too many times,
which makes the process slower than
expected. After caching (initialising) it
in the init function, we get the following
result, tested under the same conditions.
You can see that there is a significant
improvement in the results (Figure 2).
Wrote profile results to gnome-music.lprof
Timer unit: 1e-06 s

File: ./gnomemusic/view.py
Function: _connect_view at line 211
Total time: 0.000466 s

Line #  Hits  Time  Per Hit  % Time  Line Contents
==================================================
211                                  @profile
212                                  def _connect_view(self):
213     4     86    21.5     18.5        self._adjustmentValueId = vadjustment.connect(
214     4     161   40.2     34.5            'value-changed',
215     4     219   54.8     47.0            self._on_scrolled_win_change)
Figure 2: Optimised code line_profiler output

By: Jackson Isaac
The author is an active open source contributor to projects
like gnome-music, Mozilla Firefox and Mozillians. Follow
him at jacksonisaac.wordpress.com or email him at
jacksonisaac2008@gmail.com

Understanding the output
Here is an analysis of the output shown in the above snippet.
Function: Displays the name of the function that is
profiled, and its line number.
Line #: The line number of the code in the respective file.
Hits: The number of times the code in the corresponding
line was executed.
Time: The total amount of time spent in executing the line,
in timer units (i.e., 1e-06s here). This may vary from
system to system.
Per hit: The average amount of time spent in executing
the line once, in timer units.
% time: The percentage of time spent on a line with respect
to the total amount of recorded time spent in the function.
Line contents: Displays the actual source code.
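These columns are related by simple arithmetic. Taking line 213 of the first profiler run above (4 hits, 205 timer units, out of a 627-unit function total), we can reproduce the Per Hit and % Time figures:

```python
# Figures for line 213 from the first profiling run above.
hits, time_units, total_units = 4, 205, 627

per_hit = time_units / hits           # 51.25 timer units, displayed as 51.2
pct = 100 * time_units / total_units  # about 32.7 % of the function's time

print(per_hit, round(pct, 1))
```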
Note: If you make changes in the source code, you
need to run kernprof and line_profiler again in order to
profile the updated code and get the latest results.
Advantages
line_profiler helps us to profile our code line by line,
giving the number of hits, the time taken for each hit and
the % time. This helps us understand which part of our code
is running slow. It also helps in testing large projects, by
measuring the time spent by modules in executing a particular function.
Using this data, we can make changes and improve our
code to build faster and better programs.
Understanding the Document Object Model (DOM) in Mozilla
This article is an introduction to the DOM programming interface and the DOM inspector,
which is a tool that can be used to inspect and edit the live DOM of any Web document or
XUL application.

The Document Object Model (DOM) is a programming
interface for HTML and XML documents. It provides
a structured representation of a document and it
defines a way in which the structure can be accessed from
programs so that they can change the document's structure,
style and content. The DOM provides a representation of the
document as a structured group of nodes and objects that have
properties and methods. Essentially, it connects Web pages to
scripts or programming languages.
A Web page is a document that can either be displayed in
the browser window or as the HTML source of that same
document. The DOM provides another way to represent, store
and manipulate that same document. In simple terms, we can
say that the DOM is a fully object-oriented representation of a
Web page, which can be modified by any scripting language.
The W3C DOM standard forms the basis of the DOM
implementation in most modern browsers. Many browsers
offer extensions beyond the W3C standard.
All the properties, methods and events available for
manipulating and creating Web pages are organised into
objects. For example, the document object represents the
document itself, the table object implements the special
HTMLTableElement DOM interface to access HTML
tables, and so forth.

Why is DOM important?
Dynamic HTML (DHTML) is a term used by some vendors
to describe the combination of HTML, style sheets and
scripts that allows documents to be animated. The W3C DOM
working group is aiming to make sure interoperable and
language-neutral solutions are agreed upon.
As Mozilla claims the title of 'Web Application Platform',
support for the DOM is one of the most requested features; in
fact, it is a necessity if Mozilla wants to be a viable alternative
to the other browsers. The user interface of Mozilla (also
Firefox and Thunderbird) is built using XUL, and uses the DOM to
manipulate its own user interface.

How do I access the DOM?
You don't have to do anything special to begin using the
DOM. Different browsers have different implementations of
it, which exhibit varying degrees of conformity to the actual
DOM standard, but every browser uses some DOM to make
Web pages accessible to scripts.
When you create a script, whether it's inline in a
script element or included in the Web page by means of
a script-loading instruction, you can immediately begin
using the API for the document or window objects.
This is to manipulate the document itself, or to get at the
children of that document, which are the various elements
in the Web page.
Your DOM programming may be something as simple as
the following, which displays an alert message by using the
alert() function from the window object, or it may use more
sophisticated DOM methods to actually create content, as in the
longer example that follows:

<body onload="window.alert('Welcome to my home page!');">
Aside from the script element in which the JavaScript is
defined, this JavaScript sets a function to run when the
document is loaded. This function creates a new H1 element,
adds text to that element, and then adds the H1 to the tree for this
document, as shown below:
<html>
<head>
<script>
// run this function when the document is loaded
window.onload = function() {
    // create a couple of elements
    // in an otherwise empty HTML page
    heading = document.createElement('h1');
    heading_text = document.createTextNode('Big Head!');
    heading.appendChild(heading_text);
    document.body.appendChild(heading);
}
</script>
</head>
<body>
</body>
</html>
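The W3C DOM interfaces are language-neutral, and implementations exist outside browsers too; Python, for instance, ships one in xml.dom.minidom. As a sketch (not browser code), the createElement/appendChild sequence above can be reproduced like this:

```python
from xml.dom.minidom import getDOMImplementation

# Build an empty document, then add <h1>Big Head!</h1> to its body,
# mirroring the JavaScript example above.
impl = getDOMImplementation()
doc = impl.createDocument(None, 'html', None)

body = doc.createElement('body')
doc.documentElement.appendChild(body)

heading = doc.createElement('h1')
heading.appendChild(doc.createTextNode('Big Head!'))
body.appendChild(heading)

print(doc.documentElement.toxml())
```

The method names (createElement, createTextNode, appendChild) are identical to the browser's, because both implement the same W3C DOM Core interfaces.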
DOM interfaces
These interfaces just give you an idea about the actual things
that you can use to manipulate the DOM hierarchy. The object
representing the HTMLFormElement gets its name property
from the HTMLFormElement interface but its className
property from the HTMLElement interface. In both cases, the
property you want is simply in the form object.
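As a small sketch of this (the id, name and class values here are illustrative), both properties are read straight off the same form object:

```html
<!-- Illustrative page: name comes from the HTMLFormElement
     interface, className from the inherited HTMLElement interface -->
<form id="userForm" name="signup" class="wide">
</form>
<script>
  var form = document.getElementById("userForm");
  window.alert(form.name);      // "signup" (HTMLFormElement)
  window.alert(form.className); // "wide"   (HTMLElement)
</script>
```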
Interfaces and objects
Many objects borrow from several different interfaces. The
table object, for example, implements a specialised HTML
table element interface, which includes such methods as
createCaption and insertRow. Since an HTML element is
also, as far as the DOM is concerned, a node in the tree of
nodes that makes up the object model for a Web page or an
XML page, the table element also implements the more basic
node interface, from which the element derives.
When you get a reference to a table object, as in
the following example, you routinely use all three of
these interfaces interchangeably on the object, perhaps
unknowingly:
var table = document.getElementById("table");
var tableAttrs = table.attributes; // Node/Element interface
for (var i = 0; i < tableAttrs.length; i++) {
// HTMLTableElement interface: border attribute
if (tableAttrs[i].nodeName.toLowerCase() == "border")
table.border = "1";
}
// HTMLTableElement interface: summary attribute
table.summary = "note: increased border";
Core interfaces in the DOM
These are some of the most important and commonly
used interfaces in the DOM. These common APIs are
used in the longer DOM examples, and you will often
see the following methods and properties when you
use the DOM.
The interfaces of document and window objects are
generally used most often in DOM programming. In
simple terms, the window object represents something
like the browser, and the document object is the root of
the document itself. The element inherits from the generic
node interface and, together, these two interfaces provide
many of the methods and properties you use on individual
elements. These elements may also have specific interfaces
for dealing with the kind of data those elements hold, as in
the table object example.
Figure 1: DOM inspector
Figure 2: Inspecting content documents
The following are a few common APIs in XML and Web
page scripting that show the use of DOM:
document.getElementById (id)
element.getElementsByTagName (name)
document.createElement (name)
parentNode.appendChild (node)
element.innerHTML
element.style.left
element.setAttribute
element.getAttribute
element.addEventListener
window.content
window.onload
window.dump
window.scrollTo
Testing the DOM API
Here, you will be provided samples for every interface
that you can use in Web development. In some cases, the
samples are complete HTML pages, with the DOM access
in a <script> element, the interface (e.g., buttons) necessary
to fire up the script in a form, and the HTML elements upon
which the DOM operates listed as well. When this is the
case, you can cut and paste the example into a new HTML
document, save it, and run the example from the browser.
There are some cases, however, when the examples are
more concise. To run examples that only demonstrate the
basic relationship of the interface to the HTML elements,
you may want to set up a test page in which interfaces can be
easily accessed from scripts.
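Such a test page might look like the following sketch; paste the interface calls you want to try into the button's handler (the names here are illustrative):

```html
<!-- Minimal DOM test page (illustrative) -->
<html>
<head>
  <script>
    function runTest() {
      // replace the body of this function with the
      // interface calls you want to experiment with
      var para = document.createElement("p");
      para.appendChild(document.createTextNode("It works!"));
      document.body.appendChild(para);
    }
  </script>
</head>
<body>
  <button onclick="runTest();">Run test</button>
</body>
</html>
```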
An introduction to the DOM inspector
The DOM inspector is a Mozilla extension that you can
access from the Tools -> Web Development menu in
SeaMonkey, or by selecting the DOM inspector menu item
from the Tools menu in Firefox and Thunderbird or by
using Ctrl/Cmd+Shift+I in either application. The DOM
inspector is a standalone extension; it supports all toolkit
applications, and it's possible to embed it in your own
XULRunner app. The DOM inspector can serve as a sanity
check to verify the state of the DOM, or it can be used to
manipulate the DOM manually, if desired.
When you first start the DOM inspector, you are presented
with a two-pane application window that looks a little like the
main Mozilla browser. Like the browser, the DOM inspector
includes an address bar and some of the same menus. In
SeaMonkey, additional global menus are available.
Using the DOM inspector
Once you've opened the document for the page you are
interested in, you'll see that it loads the DOM nodes
viewer in the document pane and the DOM node viewer in
the object pane. In the DOM nodes viewer, there should be a
structured, hierarchical view of the DOM.
By clicking around in the document pane, you'll see
that the viewers are linked; whenever you select a new node
from the DOM nodes viewer, the DOM node viewer is
automatically updated to reflect the information for that node.
Linked viewers are the first major aspect to understand when
learning how to use the DOM inspector.
Inspecting a document
When the DOM inspector opens, it may or may not load an
associated document, depending on the host application. If it
doesn't automatically load a document, or loads a document
other than the one you'd like to inspect, you can select the
desired document in a few different ways.
Figure 3: Inspecting Chrome documents
Figure 4: Inspecting arbitrary URLs
Figure 5: Inspecting a Web page
There are three ways of inspecting any document, which
are described below.
Inspecting content documents: The Inspect Content
Document menu popup can be accessed from the File menu,
and it will list the currently loaded content documents. In
the Firefox and SeaMonkey browsers, these will be the
Web pages you have opened in tabs. For Thunderbird and
SeaMonkey Mail and News, any messages you're viewing
will be listed here.
Inspecting Chrome documents: The Inspect Chrome
Document menu popup can be accessed from the File menu,
and it will contain the list of currently loaded Chrome
windows and sub-documents. A browser window and the
DOM inspector are likely to already be open and displayed
in this list. The DOM inspector keeps track of all the
windows that are open, so to inspect the DOM of a particular
window in the DOM inspector, simply access that window
as you would normally do and then choose its title from this
dynamically updated menu list.
Inspecting arbitrary URLs: We can also inspect the
DOM of arbitrary URLs by using the Inspect a URL menu
item in the File menu, or by just entering a URL into the
DOM inspector's address bar and clicking Inspect or pressing
Enter. We should not use this approach to inspect Chrome
documents, but instead ensure that the Chrome document
loads normally, and use the Inspect Chrome Document menu
popup to inspect the document.
When you inspect a Web page by this method, a browser
pane at the bottom of the DOM inspector window will open
up, displaying the Web page. This allows you to use the DOM
inspector without having to use a separate browser window,
or without embedding a browser in your application at all. If
you find that the browser pane takes up too much space, you
may close it, but you will not be able to visually observe any
of the consequences of your actions.
DOM inspector viewers
You can use the DOM nodes viewer in the document pane
of the DOM inspector to find and inspect the nodes you
are interested in. One of the biggest and most immediate
advantages that this brings to your Web and application
development is that it makes it possible to find the mark-up
and the nodes in which the interesting parts of a page or a
piece of the user interface are defined.
One common use of the DOM inspector is to find the
name and location of a particular icon being used in the
Figure 6: Finding app content
Figure 7: Search on Click
user interface, which is not an easy task otherwise. If you're
inspecting a Chrome document, as you select nodes in the
DOM nodes viewer, the rendered versions of those nodes are
highlighted in the user interface itself. Note that there are
bugs that currently prevent the flasher in the DOM inspector
APIs from working on certain platforms.
If you inspect the main browser window, for example,
and select nodes in the DOM nodes viewer, you will see the
various parts of the browser interface being highlighted with
a blinking red border. You can traverse the structure and go
from the topmost parts of the DOM tree to lower level nodes,
such as the search-go-button icon that lets users perform a
query using the selected search engine.
The list of viewers available from the viewer menu gives
you some idea of how extensive the DOM inspector's
capabilities are. The following descriptions provide an
overview of these viewers' capabilities:
1. The DOM nodes viewer shows attributes of nodes that can
take them, or the text content of text nodes, comments and
processing instructions. The attributes and text contents
may also be edited.
2. The Box Model viewer gives various metrics about XUL
and HTML elements, including placement and size.
3. The XBL Bindings viewer lists the XBL bindings attached
to elements. If a binding extends another binding, the
binding menu list shows the bindings in descending order,
down to the root binding.
4. The CSS Rules viewer shows the CSS rules that
are applied to the node. Alternatively, when used in
conjunction with the Style Sheets viewer, the CSS Rules
viewer lists all recognised rules from that style sheet.
Properties may also be edited. Rules applying to pseudo-
elements do not appear.
5. The JavaScript Object viewer gives a hierarchical tree of
the object pane's subject. It also allows JavaScript to be
evaluated by selecting the appropriate menu item in the
context menu.
Three basic actions of DOM node viewers are
described below.
Selecting elements by clicking: A powerful interactive
feature of the DOM inspector is that when you have it open
and have enabled this functionality by choosing Edit >
Select Element by Click (or by clicking the little magnifying
glass icon in the upper left portion of the DOM inspector
application), you can click anywhere in a loaded Web page or
inspected Chrome document. The element you click is then
shown in the DOM nodes viewer in the document pane, and
its information is displayed in the object pane.
Searching for nodes in the DOM: Another way to inspect
the DOM is to search for particular elements you're interested in
by ID, class or attribute. When you select Edit > Find Nodes...
or press Ctrl + F, the DOM inspector displays a Find dialogue
that lets you find elements in various ways, and that gives you
incremental searching by way of the <F3> shortcut key.
Updating the DOM dynamically: Another feature
worth mentioning is the ability the DOM inspector gives
you to dynamically update information reflected in the DOM
about Web pages, the user interface and other elements. Note
that when the DOM inspector displays information about a
particular node or sub-tree, it presents individual nodes and
their values in an active list. You can perform actions on the
individual items in this list from the Context menu and the
Edit menu, both of which contain menu items that allow you
to edit the values of those attributes.
This interactivity allows you to shrink and grow the
element size, change icons, and do other layout-tweaking
updates, all without actually changing the DOM as it is
defined in the file on disk.
References
[1] https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model
[2] https://developer.mozilla.org/en/docs/Web/API/Document
By: Anup Allamsetty
The author is an active contributor to Mozilla and GNOME. He
blogs at https://anup07.wordpress.com/ and you can email him
at allamsetty.anup@gmail.com.
A function in Haskell has the function name
followed by arguments. An infix operator function
has operands on either side of it. A simple infix
add operation is shown below:
*Main> 3 + 5
8
If you wish to convert an infix function to a prefix
function, it must be enclosed within parentheses:
*Main> (+) 3 5
8
Similarly, if you wish to convert a prefix function
into an infix function, you must enclose the function
name within backquotes (`). The elem function takes an
element and a list, and returns True if the element is a
member of the list:
*Main> 3 `elem` [1, 2, 3]
True
*Main> 4 `elem` [1, 2, 3]
False
Functions can also be partially applied in Haskell. A function that
subtracts a given number from ten can be defined as:
diffTen :: Integer -> Integer
diffTen = (10 -)
Loading the file in GHCi and passing three as an argument yields:
*Main> diffTen 3
7
Haskell exhibits polymorphism. A type variable in a function
is said to be polymorphic if it can take any type. Consider the last
function, which returns the last element of a list. Its type signature is:
Experimenting with
More Functions in Haskell
We continue our exploration of the open source, advanced and purely functional
programming language, Haskell. In the third article in the series, we will focus on more
Haskell functions, conditional constructs and their usage.
Developers Let's Try
*Main> :t last
last :: [a] -> a
The a in the above snippet refers to a type variable and
can represent any type. Thus, the last function can operate on a
list of integers or characters (string):
*Main> last [1, 2, 3, 4, 5]
5
*Main> last "Hello, World"
'd'
You can use a where clause for local definitions inside a
function, as shown in the following example, which computes
the area of a circle:
areaOfCircle :: Float -> Float
areaOfCircle radius = pi * radius * radius
where pi = 3.1415
Loading it in GHCi and computing the area for radius
1 gives:
*Main> areaOfCircle 1
3.1415
You can also use the let expression with the in statement to
compute the area of a circle:
areaOfCircle :: Float -> Float
areaOfCircle radius = let pi = 3.1415 in pi * radius * radius
Executing the above with input radius 1 gives:
*Main> areaOfCircle 1
3.1415
Indentation is very important in Haskell, as it helps code
readability; the compiler will emit errors otherwise. You must
use white spaces instead of tabs when aligning code. If
the let and in constructs in a function span multiple lines, they
must be aligned vertically, as shown below:
compute :: Integer -> Integer -> Integer
compute x y =
let a = x + 1
b = y + 2
in
a * b
Loading the example with GHCi, you get the following output:
*Main> compute 1 2
8
Similarly, the if and else constructs must be neatly aligned.
The else statement is mandatory in Haskell. For example:
sign :: Integer -> String
sign x =
if x > 0
then "Positive"
else
if x < 0
then "Negative"
else "Zero"
Running the example with GHCi, you get:
*Main> sign 0
"Zero"
*Main> sign 1
"Positive"
*Main> sign (-1)
"Negative"
The case construct can be used for pattern matching
against possible expression values. It needs to be combined
with the of keyword. The different values need to be aligned,
and the resulting action must be specified after the ->
symbol for each case. For example:
sign :: Integer -> String
sign x =
case compare x 0 of
LT -> "Negative"
GT -> "Positive"
EQ -> "Zero"
The compare function compares two arguments and
returns LT if the first argument is less than the second, GT
if the first argument is greater than the second, and EQ if both
are equal. Executing the above example, you get:
*Main> sign 2
"Positive"
*Main> sign 0
"Zero"
*Main> sign (-2)
"Negative"
The sign function can also be expressed using guards
(|) for readability. The action for a matching guard must be
specified after the = sign. You can use a default guard with
the otherwise keyword:
sign :: Integer -> String
sign x
| x > 0 = "Positive"
| x < 0 = "Negative"
| otherwise = "Zero"
The guards have to be neatly aligned:
*Main> sign 0
"Zero"
*Main> sign 3
"Positive"
*Main> sign (-3)
"Negative"
There are three very important higher order functions in
Haskell: map, filter and fold.
The map function takes a function and a list, and applies
the function to each and every element of the list. Its type
signature is:
*Main> :t map
map :: (a -> b) -> [a] -> [b]
The first function argument accepts an element of type a
and returns an element of type b. An example of adding two
to every element in a list can be implemented using map:
*Main> map (+ 2) [1, 2, 3, 4, 5]
[3,4,5,6,7]
The filter function accepts a predicate function for
evaluation, and a list, and returns the list of those elements
that satisfy the predicate. For example:
*Main> filter (> 0) [-2, -1, 0, 1, 2]
[1,2]
Its type signature is:
filter :: (a -> Bool) -> [a] -> [a]
The predicate function for filter takes as its first argument
an element of type a and returns True or False.
The fold function performs a cumulative operation on a
list. It takes as arguments a function, an accumulator (starting
with an initial value) and a list. It cumulatively aggregates the
computation of the function on the accumulator value as well
as each member of the list. There are two types of folds: the
left fold and the right fold.
*Main> foldl (+) 0 [1, 2, 3, 4, 5]
15
*Main> foldr (+) 0 [1, 2, 3, 4, 5]
15
Their type signatures are, respectively:
*Main> :t foldl
foldl :: (a -> b -> a) -> a -> [b] -> a
*Main> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
The way the fold is evaluated among the two types is
different and is demonstrated below:
*Main> foldl (+) 0 [1, 2, 3]
6
*Main> foldl (+) 1 [2, 3]
6
*Main> foldl (+) 3 [3]
6
It can be represented as f (f (f a b1) b2) b3, where f is the
function, a is the accumulator value, and b1, b2 and b3
are the elements of the list. The parentheses accumulate on
the left for a left fold. The computation looks like this:
*Main> (+) ((+) ((+) 0 1) 2) 3
6
*Main> (+) 0 1
1
*Main> (+) ((+) 0 1) 2
3
*Main> (+) ((+) ((+) 0 1) 2) 3
6
With the recursion, the expression is constructed first and
evaluated only when it is finally formed. It can thus cause a
stack overflow, or never complete when working with infinite
lists. The foldr evaluation looks like this:
*Main> foldr (+) 0 [1, 2, 3]
6
*Main> foldr (+) 0 [1, 2] + 3
6
*Main> foldr (+) 0 [1] + 2 + 3
6
It can be represented as f b1 (f b2 (f b3 a)), where f is the
function, a is the accumulator value, and b1, b2 and b3
are the elements of the list. The computation looks like this:
*Main> (+) 1 ((+) 2 ((+) 3 0))
6
*Main> (+) 3 0
3
*Main> (+) 2 ((+) 3 0)
5
*Main> (+) 1 ((+) 2 ((+) 3 0))
6
To be continued on page.... 44
Introducing
AngularJS
AngularJS is an open source Web application framework maintained by Google and the
community, which helps to build Single Page Applications (SPA). Let's get to know it better.
AngularJS can be introduced as a front-end
framework capable of incorporating the
dynamicity of JavaScript with HTML. The self-
proclaimed "super-heroic JavaScript MVW (Model View
Whatever) framework" is maintained by Google and many
other developers at GitHub. This open source framework
works its magic on Web applications of the Single Page
Application (SPA) category. The logic behind an SPA is
that an initial page is loaded at the start of an application
from the server. When an action is performed, the
application fetches the required resources from the server
and adds them to the initial page. The key point here is
that an SPA makes just one server round trip, providing
you with the initial page. This makes your applications
very responsive.
Why AngularJS?
AngularJS brings out the beauty in Web development.
It is extremely simple to understand and code. If you're
familiar with HTML and JavaScript, you can write
the Hello World program in minutes. With the help of
Angular, the combined power of HTML and JavaScript can
be put to maximum use. One of the prominent features of
Angular is that it is extremely easy to test, and that makes
it very suitable for creating large-scale applications. Also,
the Angular community, comprising Google's developers
primarily, is very active in the development process.
Google Trends gives assuring proof of Angular's future in
the field of Web development (Figure 1).
Core features
Before getting into the basics of AngularJS, you need to
understand two key terms: templates and models. The
HTML page that is rendered out to you is pretty much the
template. So basically, your template has HTML, Angular
entities (directives, filters, model variables, etc) and CSS (if
necessary). The example code given below for data binding
is a template.
In an SPA, the data and presentation of data are separated
by a model layer that handles data and a view layer that reads
from models. This helps an SPA in redrawing any part of the
UI without requiring a server round trip to retrieve HTML.
When the data is updated, its view is notified and the altered
data is produced in the view.
Data binding
AngularJS provides you with two-way binding between
model variables and HTML elements. One-way binding
would mean a one-way relation between the two: when the
model variables are updated, so are the values in the HTML
elements, but not the other way around. Let's understand
two-way binding by looking at an example:
<html ng-app >
<head>
<script src="http://ajax.googleapis.com/ajax/libs/
angularjs/1.0.7/angular.min.js">
</script>
</head>
<body ng-init="yourtext = 'Data binding is cool!'">
Enter your text: <input type="text" ng-model =
"yourtext" />
<strong>You entered :</strong> {{yourtext}}
</body>
</html>
The model variable yourtext is bound to the HTML input
element. Whenever you change the value in the input box,
yourtext gets updated. Also, the value of the HTML input box
is initialised to that of the yourtext variable.
Directives
In the above example, many words like ng-app, ng-init
and ng-model may have struck you as odd. Well, these
are attributes that represent directives: ngApp, ngInit and
ngModel, respectively. As described in the official AngularJS
developer guide, "Directives are markers on a DOM element
(such as an attribute, element name, comment or CSS class)
that tell AngularJS's HTML compiler ($compile) to attach a
specified behaviour to that DOM element." Let's look into
the purpose of some common directives.
ngApp: This directive bootstraps your Angular
application and considers the HTML element in which the
attribute is specified to be the root element of Angular.
In the above example, the entire HTML page becomes an
Angular application, since the ng-app attribute is given
to the <html> tag. If it was given to the <body> tag,
the body alone becomes the root element. Or you could
create your own Angular module and let that be the root
of your application. An AngularJS module might consist
of controllers, services, directives, etc. To create a new
module, use the following commands:
var moduleName = angular.module('moduleName', [ ]);
// The array is a list of modules our module depends on
Also, remember to initialise your ng-app attribute to
moduleName. For instance:
<html ng-app="moduleName">
ngModel: The purpose of this directive is to bind the
view with the model. For instance,
<input type = "text" ng-model = "sometext" />
<p> Your text: {{ sometext }}</p>
Here, the model sometext is bound (two-way) to the
view. The double curly braces will notify Angular to put the
value of sometext in its place.
ngClick: This directive functions much like the
onclick event of JavaScript.
<button ng-click="mul = mul * 2" ng-init="mul = 1"> Multiply
with 2 </button>
After multiplying : {{mul}}
Whenever the button is clicked, mul gets multiplied by
two.
Filters
A filter helps you in modifying the output to your view. You
can subject your expression to any kind of constraints to give
out the desired output. The format is:
{{ expression | filter }}
You can filter the output of filter1 again with filter2, using
the following format:
{{ expression | filter1 | filter2 }}
The following code filters the members of the people
array using the name as the criterion:
Figure 1: Report from Google Trends showing interest over time in the search terms angularjs, emberjs, knockoutjs and backbonejs (2005-2013)
<body ng-init=" people=[{name:'Tony',branch:'CSE'},
{name:'Santhosh', branch:'EEE'},
{name:'Manisha', branch:'ECE'}];">
Name: <input type="text" ng-model="name"/>
<li ng-repeat="person in people | filter: name"> {{person.
name }} - {{person.branch}}
</li>
</body>
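Conceptually, what the filter does in this example can be approximated in plain JavaScript with Array.prototype.filter: keep the people whose properties contain the typed text as a substring. This is only an illustrative sketch (filterByText is a made-up name), not Angular's actual implementation:

```javascript
// Plain-JavaScript sketch of substring-based filtering,
// roughly mimicking the behaviour of "people | filter: name".
var people = [
  { name: 'Tony',     branch: 'CSE' },
  { name: 'Santhosh', branch: 'EEE' },
  { name: 'Manisha',  branch: 'ECE' }
];

function filterByText(list, text) {
  var needle = String(text).toLowerCase();
  return list.filter(function (person) {
    // keep the person if any property value contains the needle
    return Object.keys(person).some(function (key) {
      return String(person[key]).toLowerCase().indexOf(needle) !== -1;
    });
  });
}

console.log(filterByText(people, 'san')); // only Santhosh matches
```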
Advanced features
Controllers: To bring some more action to our app, we
need controllers. These are JavaScript functions that add
behaviour to our app. Let's make use of the ngController
directive to bind the controller to the DOM:
<body ng-controller="ContactController">
<input type="text" ng-model="name"/>
<button ng-click="disp()">Alert !</button>
<script type="text/javascript">
function ContactController($scope) {
$scope.disp = function( ){
alert("Hey " + $scope.name);
};
}
</script>
</body>
One term to be explained here is $scope. To quote
from the developer guide: "Scope is an object that
refers to the application model." With the help of scope,
the model variables can be initialised and accessed.
In the above example, when the button is clicked,
disp( ) comes into play, i.e., the scope is assigned
a behaviour. Inside disp( ), the model variable name is
accessed using scope.
Views and routes: In any usual application, we
navigate to different pages. In an SPA, instead of pages, we
have views. So, you can use views to load different parts
of your application. Switching to different views is done
through routing. For routing, we make use of the ngRoute
and ngView directives:
var miniApp = angular.module( 'miniApp', ['ngRoute'] );
miniApp.config(function( $routeProvider ){
$routeProvider.when( '/home', { templateUrl:
'partials/home.html' } );
$routeProvider.when( '/animal', {templateUrl:
'partials/animals.html' } );
$routeProvider.otherwise( { redirectTo: '/home' } );
});
ngRoute enables routing in applications, and
$routeProvider is used to configure the routes. home.
html and animals.html are examples of partials; these are
files that will be loaded into your view, depending on the
URL passed. For example, you could have an app that has
icons, and whenever an icon is clicked, a link is passed.
Depending on the link, the corresponding partial is loaded
into the view. This is how you pass links:
<a href='#/home'><img src='partials/home.jpg' /></a>
<a href='#/animal'><img src='partials/animals.jpg' /></a>
Don't forget to add the ng-view attribute to the HTML
component of your choice. That component will act as a
placeholder for your views.
<div ng-view=""></div>
Services: According to the official documentation of
AngularJS, Angular services are "substitutable objects
that are wired together using dependency injection (DI)".
You can use services to organise and share code across
your app. With DI, every component will receive
a reference to the service. Angular provides useful
services like $http, $window and $location. In order to
use these services in controllers, you can add them as
dependencies. As in:
var testapp = angular.module('testapp', [ ]);
testapp.controller('testcont', function( $window ) {
//body of controller
});
To define a custom service, write the following:
testapp.factory('serviceName', function( ) {
var obj;
return obj; // the returned object will be injected into
// the component that has called the service
});
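As an illustrative sketch (greetService and greetCont are made-up names, and this assumes the testapp module from above), a controller consumes a custom service simply by declaring it as a parameter:

```javascript
// Hypothetical service and consuming controller (illustrative names)
testapp.factory('greetService', function () {
  return {
    greet: function (name) { return 'Hey ' + name; }
  };
});

testapp.controller('greetCont', function ($scope, greetService) {
  // the injector supplies greetService automatically
  $scope.message = greetService.greet('Angular');
});
```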
Testing
Testing is done to correct your code on the go and avoid
ending up with a pile of errors on completing your app's
development. Testing can get complicated when your
app grows in size and APIs start to get tangled up, but
Angular has its own well-defined testing schemes. Usually,
two kinds of testing are employed: unit and end-to-end
(E2E) testing. Unit testing is used to test individual API
components, while in E2E testing, the working of a set of
components is tested.
The usual components of unit testing are describe( ),
beforeEach( ) and it( ). You have to load the angular module
before testing and beforeEach( ) does this. Also, this function
References
[1] http://singlepageappbook.com/goal.html
[2] https://github.com/angular/angular.js
[3] https://docs.angularjs.org/guide/
[4] http://karma-runner.github.io/0.12/index.html
[5] http://viralpatel.net/blogs/angularjs-introduction-hello-world-tutorial/
[6] https://builtwith.angularjs.org/
By: Tina Johnson
The author is a FOSS enthusiast who has contributed to
Mediawiki and Mozilla's Bugzilla. She is also working on a project
to build a browser (using AngularJS) for autistic children.
makes use of the injector method to inject dependencies.
The test to be conducted is given in it( ). The test suite is
describe( ), and both beforeEach( ) and it( ) come inside
it. E2E testing makes use of all the above functions.
One other function used is expect( ). This creates
expectations, which verify if the particular application's
state (value of a variable or URL) is the same as the
expected values.
Recommended frameworks for unit testing are
Jasmine and Karma, and for E2E testing, Protractor is the
one to go with.
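As a sketch of the shape of a Jasmine unit test (this hypothetically assumes the ContactController from earlier has been registered on a module named 'testapp', and that angular-mocks provides module( ) and inject( ); it runs under a Karma/Jasmine setup, not standalone):

```javascript
// Illustrative Jasmine unit test for the controller sketched earlier
describe('ContactController', function () {
  var scope;

  // load the module and inject dependencies before each test
  beforeEach(module('testapp'));
  beforeEach(inject(function ($rootScope, $controller) {
    scope = $rootScope.$new();
    $controller('ContactController', { $scope: scope });
  }));

  it('should put a disp() function on the scope', function () {
    expect(typeof scope.disp).toBe('function');
  });
});
```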
Who uses AngularJs?
Some of the following corporate giants use AngularJS:
Google
Sony (YouTube on PS3)
Virgin America
Nike
msnbc (msnbc.com)
You can find a lot of interesting and innovative apps on
the Built with AngularJS page.
Competing technologies
Features Ember.js AngularJS Backbone.js
Routing Yes Yes Yes
Views Yes Yes Yes
Two-way binding Yes Yes No
The chart above covers only the core features of the three
frameworks. Angular is the oldest of the lot and has the
biggest community.
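As a conceptual sketch of the unit-testing vocabulary described above — these are not Jasmine's real internals, and GreetCtrl is a hypothetical controller standing in for real Angular code — the pieces relate like this:

```javascript
// Conceptual sketch only -- NOT Jasmine's real implementation. Minimal
// describe/beforeEach/it/expect shims showing how the pieces of an
// AngularJS unit test relate. Real tests run under Karma with Jasmine,
// where beforeEach(module(...)) and inject(...) come from angular-mocks.
var hooks = [];
var results = [];

function describe(name, suite) { suite(); }          // the test suite
function beforeEach(fn) { hooks.push(fn); }          // set-up run before every it()
function it(name, test) {
  hooks.forEach(function (h) { h(); });              // module loading/injection step
  test();                                            // the test itself
  results.push(name + ': ok');
}
function expect(actual) {
  return {
    toBe: function (expected) {                      // verify application state
      if (actual !== expected) throw new Error(actual + ' !== ' + expected);
    }
  };
}

// A suite for a hypothetical controller: beforeEach prepares the scope
// (in Angular it would load the module and inject dependencies), it()
// holds one test, and expect() checks the resulting state.
var scope;
describe('GreetCtrl', function () {
  beforeEach(function () { scope = { greeting: 'Hello' }; });
  it('sets a default greeting', function () {
    expect(scope.greeting).toBe('Hello');
  });
});
console.log(results.join('\n'));
```

Under Karma, beforeEach(module('app')) and inject() from angular-mocks would replace the hand-rolled set-up shown here.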
Continued from page 37
There are some operations, such as condition checking, where f b1 can be computed even without requiring the subsequent arguments; hence, the foldr function can work with infinite lists. There is also a strict version of foldl (foldl') that forces the computation before proceeding with the recursion.
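Both behaviours can be seen in a short sketch (assuming GHC; the threshold predicate is just an illustration):

```haskell
import Data.List (foldl')   -- the strict left fold

main :: IO ()
main = do
  -- (||) never needs its second argument once the first is True, so
  -- foldr can stop early even though the input list is infinite.
  print (foldr (\x acc -> x > 3 || acc) False [1 ..])   -- True
  -- foldl' forces the accumulator at each step, avoiding thunk build-up.
  print (foldl' (+) 0 [1 .. 100])                       -- 5050
```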
If you want a reference to a matched pattern, you can use
the as pattern syntax. The tail function accepts an input list
and returns everything except the head of the list. You can
write a tailString function that accepts a string as input and
returns the string with the first character removed:
tailString :: String -> String
tailString "" = ""
tailString input@(x:xs) = "Tail of " ++ input ++ " is " ++ xs
The entire matched pattern is represented by input in the
above code snippet.
Functions can be chained to create other functions. This is
called composing functions. The mathematical definition is
as under:
(f o g)(x) = f(g(x))
In Haskell, composition is written with the dot (.) operator, which
has a very high precedence and is right-associative. If you want to
force an evaluation, you can use the function application operator
($), which has the lowest precedence and is also right-associative. For example:
*Main> (reverse ((++) "yrruC " (unwords ["skoorB",
"lleksaH"])))
"Haskell Brooks Curry"
You can rewrite the above using the function application
operator that is right-associative:
Prelude> reverse $ (++) "yrruC " $ unwords ["skoorB",
"lleksaH"]
"Haskell Brooks Curry"
You can also use the dot notation to make it even more
readable, but the final argument needs to be evaluated first;
hence, you need to use the function application operator for it:
*Main> reverse . (++) "yrruC " . unwords $ ["skoorB",
"lleksaH"]
"Haskell Brooks Curry"
By: Shakthi Kannan
The author is a free software enthusiast and blogs
at shakthimaan.com.
Use Bugzilla
to Manage Defects in Software
In the quest for excellence in software products, developers have to go through the process of
defect management. The tool of choice for defect containment is Mozilla's Bugzilla. Learn how to
install, configure and use it to file a bug report and act on it.
In any project, defect management and various types of
testing play key roles in ensuring quality. Defects need
to be logged, tracked and closed to ensure the project
meets quality expectations. Generating defect trends also
helps project managers to take informed decisions and make
the appropriate course corrections while the project is being
executed. Bugzilla is one of the most popular open source
defect management tools and helps project managers to track
the complete lifecycle of a defect.
Installation and configuration of Bugzilla
Step 1: Getting the source code
Bugzilla is part of the Mozilla Foundation. Its latest releases
are available from the official website. This article will
be covering the installation of Bugzilla version 4.4.2.
The steps mentioned here should apply to later releases
as well. However, for version-specific releases, check the
appropriate release notes. Here is the URL for downloading
Bugzilla version 4.4.2 on a Linux system: http://www.bugzilla.org/releases/4.4.2/
Pre-requisites for Bugzilla include a CGI-enabled Web
server (such as the Apache HTTP server), a database engine (MySQL,
PostgreSQL, etc) and the latest Perl modules. Ensure all of
them are on your Linux system before proceeding with the
installation. This specific installation covers MySQL as the
backend database.
Step 2: User and database creation
Before proceeding with the installation, the user and database
need to be created by following the steps mentioned below.
The names used here for the database and the users are
specific to this installation, and can change between
installations.
Start the MySQL service by issuing the following command:
$/etc/rc.d/init.d/mysql start
Open the MySQL prompt by issuing the following command (you
will be asked for the root password, so ensure you keep it
handy):
$mysql -u root -p
Use the following statements at the MySQL
prompt to create a user and a database for Bugzilla:
mysql > CREATE USER 'bugzilla'@'localhost' IDENTIFIED BY
'password';
> GRANT ALL PRIVILEGES ON *.* TO 'bugzilla'@'localhost';
> FLUSH PRIVILEGES;
mysql > CREATE DATABASE bugzilla_db CHARACTER SET utf8;
> GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,ALTER,CREATE,DROP,
REFERENCES ON bugzilla_db.* TO 'bugzilla'@'localhost'
IDENTIFIED BY 'cspasswd';
> FLUSH PRIVILEGES;
> QUIT
Use the following commands to connect to the database as the new user:
$mysql -u bugzilla -p bugzilla_db
mysql > use bugzilla_db
Step 3: Bugzilla installation and configuration
After downloading the Bugzilla archive from the URL
mentioned above, untar the package into the /var/www
directory. All the configuration-related information can
be modified in the localconfig file. To start with, set the
variable $webservergroup to 'www' and set the other items as
mentioned in Figure 1.
Following the configuration, the installation can be
completed by executing the following Perl script. Ensure this
script is executed with root privileges:
$ ./checksetup.pl
Step 4: Integrating Bugzilla with Apache
Insert the following lines in the Apache server configuration
file (httpd.conf) to integrate Bugzilla into it. The bugzilla
directory is placed inside /var/www in our set-up:
<Directory /var/www/bugzilla>
AddHandler cgi-script .cgi
Options +ExecCGI
DirectoryIndex index.cgi index.html
AllowOverride Limit FileInfo Indexes Options
</Directory>
Our set-up is now ready. Let's hit the address in the
browser to see the home page of our freshly deployed Web
application (http://localhost/bugzilla).
Defect lifecycle management
The main purpose of Bugzilla is to manage the defect
lifecycle. Defects are created and logged in various phases of
the project (e.g., functional testing), where they are created by
the test engineer and assigned to development engineers for
resolution. Along with that, managers or team members need
to be aware of the change in the state of the defect to ensure
that there is a good amount of traceability of the defects.
When a defect is created, it is given the 'new' state, after
which it is assigned to a development engineer for resolution.
Subsequently, it gets resolved and is eventually moved
to the 'closed' state.
Step 1: User account creation
To start using Bugzilla, various user accounts have to be
created. In this example, Bugzilla is deployed in a server
named hydrogen. On the home page, click the New
Account link available in the header/footer of the pages (refer
to Figure 4). You will be asked for your email address; enter
it and click the Send button. After registration is accepted,
you should receive an email at the address you provided
confirming your registration. Now all you need to do is to
click the Log in link in the header/footer at the bottom of
the page in your browser, enter your email address and the
password you just chose into the login form, and click on the
Log in button. You will be redirected to the Bugzilla home
page for defect interfacing.
Step 2: Reporting a new bug
1. Click the New link available in the header/footer of the
pages, or the File a bug option displayed on the home
page of the Bugzilla installation, as shown in Figure 5.
2. Select the product in which you found a bug. Please note
that the administrator will be able to create an appropriate
product and corresponding versions from his account,
which is not demonstrated here.
3. You now see a form on which you can specify the
component, the version of the program you were using, the
operating system and platform your program is running on,
and the severity of the bug, as shown in Figure 5.
4. If there is any attachment, like a screenshot of the bug,
attach it using the Add an attachment option shown at
the bottom of the page; else, click on Submit Bug.
Step 3: Defect resolution and closure
Once the bug is filed, the assignees (typically, developers)
get an email. When the developers fix the bug successfully,
they add details like a bug-fixing summary, mark the status
as resolved, and route the defect back to the tester or to the
development team leader for further review. This can be easily
done by changing the assignee field of the defect and filling
it with an appropriate email ID. When the developers complete
fixing the defect, it can be marked as shown in Figure 6.
When the test engineers receive the resolved defect report,
they can verify it and mark the status as closed. At every
step, notes from each individual are to be captured and logged
along with the time-stamp. This helps in backtracking the
defect in case any clarifications are required.
Step 4: Reports and dashboards
Typically, in large-scale projects, there could be thousands of
defects logged and fixed by hundreds of development and test
engineers. To monitor the project at various phases, the generation of
reports and dashboards becomes very important. Bugzilla offers
simple but very powerful search and reporting features with
which all the necessary information can be obtained immediately.
By exploring the Search and Reports options, one can easily
figure out ways to generate reports. A couple of simple examples
are provided in Figure 7 (search) and Figure 8 (reports). Outputs
can be exported to formats like CSV for further analysis.
Bugzilla is a very simple but powerful open source tool
that helps in complete defect management in projects. Along
with the features described above, Bugzilla also exposes its
source code, which can be explored for further scripting and
programming. This helps to make Bugzilla a super-customised
defect-tracking tool for effectively managing defects.
Figure 1: Configuring Bugzilla by changing the localconfig file
Figure 2: Bugzilla main page
Figure 3: Defect lifecycle
Figure 4: New account creation
Figure 5: New defect creation
Figure 6: Defect resolution
Figure 7: Simple search
Figure 8: Simple dashboard of defects
By: Satyanarayana Sampangi
Satyanarayana Sampangi is a Member - Embedded Software at
Emertxe Information Technologies (http://www.emertxe.com). His
area of interest lies in embedded C programming combined with
data structures and micro-controllers. He likes to experiment with
C programming and open source tools in his spare time to explore
new horizons. He can be reached at satya@emertxe.com
An Introduction to
Device Drivers in the Linux Kernel
In the article An Introduction to the Linux Kernel in the August 2014 issue of OSFY, we wrote and
compiled a kernel module. In the second article in this series, we move on to device drivers.
Have you ever wondered how a computer
plays audio or shows video? The
answer is: by using device drivers.
A few years ago we would always install
audio or video drivers after installing MS
Windows XP; only then were we able
to listen to the audio. Let us explore device
drivers in this column.
A device driver (often referred to as a
'driver') is a piece of software that controls
a particular type of device which is
connected to the computer system.
It provides a software interface to
the hardware device, and enables
access to it for the operating system
and other applications. There are
various types of drivers present
in GNU/Linux, such as character,
block, network and USB
drivers. In this column,
we will explore only
character drivers.
Character drivers
are the most common
drivers. They provide
unbuffered, direct access to hardware
devices. One can think of a character device as a
long sequence of bytes -- the same as a regular file, but
accessible only in sequential order. Character drivers support
at least the open(), close(), read() and write() operations. The
text console, i.e., /dev/console, serial consoles /dev/stty*, and
audio/video drivers fall under this category.
To make a device usable, there must be a driver present
for it. So let us understand how an application accesses data
from a device with the help of a driver. We will discuss the
following four major entities.
User-space application: This can be any simple utility
like echo, or any complex application.
Device file: This is a special file that provides an interface
for the driver. It is present in the file system as an ordinary
file. The application can perform all supported operations on
it, just like on an ordinary file: it can move, copy, delete,
rename, read and write these device files.
Device driver: This is the software interface for the device
and resides in the kernel space.
Device: This can be the actual device present at the
hardware level, or a pseudo device.
Let us take an example where a user-space
application sends data to a character device.
Instead of using an actual device, we are going to
use a pseudo device. As the name suggests, this
device is not a physical device. In GNU/Linux,
/dev/null is the most commonly used pseudo
device. This device accepts any kind of data
(i.e., input) and simply discards it. And it
doesn't produce any output.
Let us send some data to the /dev/null
pseudo device:
[mickey]$ echo -n 'a' > /dev/null
In the above example, echo is a user-space
application and null is a special
file present in the /dev directory. There
is a null driver present in the kernel to
control the pseudo device.
To send or receive data to and
from the device, the application
uses the corresponding device
file that is connected to the driver
through the Virtual File System (VFS)
layer. Whenever an application wants to perform any
operation on the actual device, it performs this on the
device file. The VFS layer redirects those operations to
the appropriate functions that are implemented inside the
driver. This means that whenever an application performs
the open() operation on a device file, in reality the open()
function from the driver is invoked, and the same concept
applies to the other functions. The implementation of these
operations is device-specific.
Major and minor numbers
We have seen that the echo command directly sends data to
the device file. Hence, it is clear that to send or receive data to
and from the device, the application uses special device files.
But how does communication between the device file and the
driver take place? It happens via a pair of numbers referred to
as the major and minor numbers.
The command below lists the major and minor numbers
associated with a character device file:
[bash]$ ls -l /dev/null
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
In the above output there are two numbers separated by a
comma (1 and 3). Here, 1 is the major and 3 is the minor
number. The major number identifies the driver associated
with the device, i.e., which driver is to be used. The minor
number is used by the kernel to determine exactly which
device is being referred to. For instance, a hard disk may
have three partitions. Each partition will have a separate
minor number but only one major number, because the same
storage driver is used for all the partitions.
Older kernels used to have a separate major number
for each driver. But modern Linux kernels allow multiple
drivers to share the same major number. For instance, /dev/full,
/dev/null, /dev/random and /dev/zero use the same
major number but different minor numbers. The output
below illustrates this:
[bash]$ ls -l /dev/full /dev/null /dev/random /dev/zero
crw-rw-rw- 1 root root 1, 7 Jul 11 20:47 /dev/full
crw-rw-rw- 1 root root 1, 3 Jul 11 20:47 /dev/null
crw-rw-rw- 1 root root 1, 8 Jul 11 20:47 /dev/random
crw-rw-rw- 1 root root 1, 5 Jul 11 20:47 /dev/zero
The kernel uses the dev_t type to store major and minor
numbers. The dev_t type is defined in the <linux/types.h> header
file. Given below is the representation of the dev_t type from the
header file:
#ifndef _LINUX_TYPES_H
#define _LINUX_TYPES_H
#define __EXPORTED_HEADERS__
#include <uapi/linux/types.h>
typedef __u32 __kernel_dev_t;
typedef __kernel_dev_t dev_t;
dev_t is an unsigned 32-bit integer, where 12 bits are used
to store the major number and the remaining 20 bits are used to
store the minor number. But don't try to extract the major and
minor numbers directly. Instead, the kernel provides the MAJOR
and MINOR macros that can be used to extract the major and
minor numbers. The definition of the MAJOR and MINOR
macros from the <linux/kdev_t.h> header file is given below:
#ifndef _LINUX_KDEV_T_H
#define _LINUX_KDEV_T_H
#include <uapi/linux/kdev_t.h>
#define MINORBITS 20
#define MINORMASK ((1U << MINORBITS) - 1)
#define MAJOR(dev) ((unsigned int) ((dev) >> MINORBITS))
#define MINOR(dev) ((unsigned int) ((dev) & MINORMASK))
If you have the major and minor numbers and you want to
combine them into the dev_t type, the MKDEV macro will do
the needful. The definition of the MKDEV macro from the
<linux/kdev_t.h> header file is given below:
#define MKDEV(ma,mi) (((ma) << MINORBITS) | (mi))
We now know what major and minor numbers are and the
role they play. Let us see how we can allocate major numbers.
Here is the prototype of register_chrdev():
int register_chrdev(unsigned int major, const char *name,
struct file_operations *fops);
This function registers a major number for character
devices. The arguments of this function are self-explanatory. The
major argument implies the major number of interest, name
is the name of the driver and appears in the /proc/devices area
and, finally, fops is the pointer to the file_operations structure.
Certain major numbers are reserved for special drivers;
hence, one should exclude those and use dynamically allocated
major numbers. To allocate a major number dynamically, provide
the value zero for the first argument, i.e., major == 0. This
function will dynamically allocate and return a major number.
To deallocate an allocated major number use the
unregister_chrdev() function. The prototype is given below
and the parameters of the function are self-explanatory:
void unregister_chrdev(unsigned int major, const char *name)
The values of the major and name parameters must be
the same as those passed to the register_chrdev() function;
otherwise, the call will fail.
File operations
So we know how to allocate/deallocate the major number, but
we haven't yet connected any of our driver's operations to the
major number. To set up this connection, we are going to use
the file_operations structure. This structure is defined in the
<linux/fs.h> header file.
Each field in the structure must point to the function in the
driver that implements a specific operation, or be left NULL for
unsupported operations. The example given below illustrates this.
Without discussing lengthy theory, let us write our first
null driver, which mimics the functionality of the /dev/null
pseudo device. Given below is the complete working code for
the null driver.
Open a file using your favourite text editor and save the
code given below as null_driver.c:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>
static int major;
static char *name = "null_driver";
static int null_open(struct inode *i, struct file *f)
{
printk(KERN_INFO "Calling: %s\n", __func__);
return 0;
}
static int null_release(struct inode *i, struct file *f)
{
printk(KERN_INFO "Calling: %s\n", __func__);
return 0;
}
static ssize_t null_read(struct file *f, char __user *buf,
size_t len, loff_t *off)
{
printk(KERN_INFO "Calling: %s\n", __func__);
return 0;
}
static ssize_t null_write(struct file *f, const char __user
*buf, size_t len, loff_t *off)
{
printk(KERN_INFO "Calling: %s\n", __func__);
return len;
}
static struct file_operations null_ops =
{
.owner = THIS_MODULE,
.open = null_open,
.release = null_release,
.read = null_read,
.write = null_write
};
static int __init null_init(void)
{
major = register_chrdev(0, name, &null_ops);
if (major < 0) {
printk(KERN_INFO "Failed to register driver.");
return -1;
}
printk(KERN_INFO "Device registered successfully.\n");
return 0;
}
static void __exit null_exit(void)
{
unregister_chrdev(major, name);
printk(KERN_INFO "Device unregistered successfully.\n");
}
module_init(null_init);
module_exit(null_exit);
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Null driver");
Our driver code is ready. Let us compile and insert the
module. In last month's article, we learnt how to write a
Makefile for kernel modules:
[mickey]$ make
[root]# insmod ./null_driver.ko
We are now going to create a device file for our driver.
But for this we need a major number, and we know that
our driver's call to register_chrdev() allocates the
major number dynamically. Let us find this dynamically
allocated major number from /proc/devices, which lists the
currently registered character and block devices:
[root]# grep "null_driver" /proc/devices
248 null_driver
From the above output, we are going to use 248 as the
major number for our driver. We are only interested in the
major number, and the minor number can be anything within
a valid range; I'll use 0 as the minor number. To create the
character device file, use the mknod utility. Please note that to
create the device file you must have superuser privileges:
[root]# mknod /dev/null_driver c 248 0
Now it's time for the action. Let us send some data to the
pseudo device using the echo command and check the output
of the dmesg command:
[root]# echo "Hello" > /dev/null_driver
[root]# dmesg
Device registered successfully.
Calling: null_open
Calling: null_write
Calling: null_release
Yes! We got the expected output. When open, write and close
operations are performed on the device file, the appropriate
functions from our driver's code get called. Let us perform the
read operation and check the output of the dmesg command:
[root]# cat /dev/null_driver
[root]# dmesg
Calling: null_open
Calling: null_read
Calling: null_release
To keep things simple, I have used printk() statements in
every function. If we remove these statements, then /dev/null_
driver will behave exactly the same as the /dev/null pseudo
device. Our code is working as expected. Let us understand
the details of our character driver.
First, take a look at the driver's functions. Given below are the
prototypes of a few functions from the file_operations structure:
int (*open)(struct inode *i, struct file *f);
int (*release)(struct inode *i, struct file *f);
ssize_t (*read)(struct file *f, char __user *buf, size_t len,
loff_t *off);
ssize_t (*write)(struct file *f, const char __user *buf,
size_t len, loff_t *off);
The prototypes of the open() and release() functions are
exactly the same. These functions accept two parameters: the first
is a pointer to the inode structure. All file-related information,
such as the size, owner and access permissions of the file, file creation
timestamps, number of hard links, etc, is represented by the
inode structure. Each open file, in turn, is represented internally by
the file structure. The open() function is responsible for opening
the device and allocating the required resources. The release()
function does exactly the reverse job: it closes the device
and deallocates the resources.
As the name suggests, the read() function reads data from the
device and sends it to the application. The first parameter of this
function is a pointer to the file structure. The second parameter
is the user-space buffer. The third parameter is the size, which
implies the number of bytes to be transferred to the user-space
buffer. And, finally, the fourth parameter is the file offset, which
updates the current file position. Whenever the read() operation
is performed on a device file, the driver should copy len bytes
of data from the device to the user-space buffer buf and update
the file offset off accordingly. This function returns the number
of bytes read successfully. Our null driver doesn't read anything;
that is why the return value is always zero, i.e., EOF.
The driver's write() function accepts data from the
user-space application. The first parameter of this function is a
pointer to the file structure. The second parameter is the user-space
buffer, which holds the data received from the application.
The third parameter is len, which is the size of the data. The
fourth parameter is the file offset. Whenever the write() operation
is performed on a device file, the driver should transfer len bytes
of data to the device and update the file offset off accordingly.
Our null driver accepts input of any length; hence, the return value is
always len, i.e., all the bytes are written successfully.
In the next step, we initialised the file_operations
structure with the appropriate driver functions. The initialisation
function performs the registration-related job, and the cleanup
function deregisters the character device.
Implementation of the full pseudo driver
Let us implement one more pseudo device, namely, full. Any write
operation on this device fails and gives the ENOSPC error. This
can be used to test how a program handles disk-full errors. Given
below is the complete working code of the full driver:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/fs.h>
#include <linux/kdev_t.h>
static int major;
static char *name = "full_driver";
static int full_open(struct inode *i, struct file *f)
{
return 0;
}
static int full_release(struct inode *i, struct file *f)
{
return 0;
}
static ssize_t full_read(struct file *f, char __user *buf,
size_t len, loff_t *off)
{
return 0;
}
static ssize_t full_write(struct file *f, const char __user
*buf, size_t len, loff_t *off)
{
return -ENOSPC;
}
static struct file_operations full_ops =
{
.owner = THIS_MODULE,
.open = full_open,
.release = full_release,
.read = full_read,
.write = full_write
};
To be continued on page 55
Creating Dynamic Web Portals
Using Joomla and WordPress
Joomla and WordPress are popular
Web content management
systems, which provide authoring,
collaboration and administration
tools designed to allow amateurs
to create and manage websites
with ease.
Nowadays, every organisation wishes to have an online
presence for maximum visibility as well as reach.
Industries from across different sectors have their
own websites with detailed portfolios so that marketing as
well as broadcasting can be integrated very effectively.
Web 2.0 applications are quite popular in the global market.
With Web 2.0, the applications developed are fully dynamic,
so that the website can provide customised results or output to
the client. Traditionally, core coding using different
programming or scripting languages like CGI, Perl, Python,
Java, PHP, ASP and many others has been in vogue. But today
excellent applications can be developed within very little
time. The major factor behind the implementation of RAD
frameworks is re-usability. By making changes to the existing
code or by merely reusing the applications, development has
now become very fast and easy.
Software frameworks
Software frameworks and content management systems
(CMSs) are entirely different concepts. In the case of CMSs, the
reusable modules, plugins and related components are provided
with the source code, and all that is required is to plug them in or
out. Frameworks, on the other hand, need to be installed and imported on
the host machine, and then their functions are called. This means
that the framework, with its different classes and functions, needs to
be called by the programmer depending upon the module and
feature required in the application. As far as user-friendliness is
concerned, CMSs are very easy to use. CMS products can
be used and deployed even by those who do not have very good
programming skills.
A framework can be considered as a model, a structure
or simply a programming template that provides classes,
events and methods to develop an application. Generally,
a software framework is a real or conceptual structure of
software intended to serve as a support or guide for building
something that expands the structure into something useful.
The software framework can be seen as a layered structure,
indicating which kinds of programs can or should be built and
the way they interrelate.
Content Management Systems (CMSs)
Digital repositories and CMSs have a lot of feature-overlap,
but both systems are unique in terms of their
underlying purposes and the functions they fulfill.
A CMS for developing Web applications is an integrated
application that is used to create, deploy, manage and store
content on Web pages. The Web content includes plain or
formatted text, embedded graphics in multiple formats,
photos, video and audio, as well as code such as third-party
APIs for interaction with the user.
Digital repositories
An institutional repository refers to the online archive or
library for collecting, preserving and disseminating digital
copies of the intellectual output of the institution, particularly
in the field of research.
For any academic institution like a university, it also
includes digital content such as academic journal articles. It
covers both pre-prints and post-prints, articles undergoing
peer review, as well as digital versions of theses and
dissertations. It even includes some other digital assets
generated in an institution such as administrative documents,
course notes or learning objectives. Depositing material in
an institutional repository is sometimes mandated by some
institutions.
Joomla CMS
Joomla is an award-winning open source CMS written in
PHP. It enables the building of websites and powerful online
applications. Many aspects, including its user-friendliness and
extensible nature, make Joomla the most popular Web-based
software development CMS. Joomla is built on the model-
view-controller (MVC) Web application framework, which
can be used independently of the CMS.
Joomla can store data in a MySQL, MS SQL or
PostgreSQL database, and includes features like page caching,
RSS feeds, printable versions of pages, news flashes, blogs,
polls, search and support for language internationalisation.
According to reports by Market Wire, New York, as of
February 2014, Joomla had been downloaded over 50 million
times. Over 7,700 free and commercial extensions are available
from the official Joomla Extension Directory, and more are
available from other sources. It is supposedly the second most
used CMS on the Internet, after WordPress. Many websites
provide information on installing and maintaining Joomla sites.
Joomla is used across the globe to power websites of all
types and sizes:
Corporate websites or portals
Corporate intranets and extranets
Online magazines, newspapers and publications
E-commerce and online reservation sites
Sites offering government applications
Websites of small businesses and NGOs
Community-based portals
School and church websites
Personal or family home pages
Joomla's user base includes:
The military - http://www.militaryadvice.org/
US Army Corps of Engineers - http://www.spl.usace.army.mil/cms/index.php
MTV Networks Quizilla (social networking) - http://www.quizilla.com
New Hampshire National Guard - https://www.nh.ngb.army.mil/
United Nations Regional Information Centre - http://www.unric.org
IHOP (a restaurant chain) - http://www.ihop.com
Harvard University - http://gsas.harvard.edu
and many others
The essential features of Joomla are:
User management
Media manager
Language manager
Banner management
Contact management
Polls
Search
Web link management
Content management
Syndication and newsfeed management
Menu manager
Template management
Integrated help system
System features
Web services
Powerful extensibility
Joomla extensions
Joomla extensions are used to extend the functionality of
Joomla-based Web applications. Extensions in many
categories, covering a wide range of services, can be downloaded from
http://extensions.joomla.org.
PHP-based open source CMSs
Joomla
Drupal
WordPress
Typo3
Mambo
PHP-based open source frameworks
Laravel
Phalcon
Symfony
CodeIgniter
Prado
Seagull
Yii
CakePHP
Figure 1: Joomla extensions
Installing and working with Joomla
To install Joomla on a Web server, whether local or hosted,
we need to download the Joomla installation package, which
ought to be done from the official website, Joomla.org. If Joomla
is downloaded from any other website, there is a
risk of viruses or malicious code in the set-up files.
Once you click the Download button for the latest stable
Joomla version, the installation package will be saved to the local
hard disk. Extract it so that it is ready for deployment.
Next, upload the extracted files and folders
to the Web server. The easiest and safest way to upload the
Joomla installation files is via FTP.
If Joomla is to be installed live on a specific
domain, upload the extracted files to the public_html folder
in the domain's online file manager. If Joomla is
needed in a sub-folder of a domain (www.mydomain.com/myjoomla),
upload it to the appropriate sub-directory (public_html/myjoomla/).
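Where shell access is available alongside FTP, the extract step can be scripted. The sketch below simulates the downloaded package with a placeholder archive (the real file name depends on the Joomla version you download), and shows the upload as a commented-out lftp mirror command with a hypothetical host, user and target path:

```shell
set -e
# Work in a throwaway directory; on a real machine this would be your download folder
rm -rf /tmp/joomla_demo && mkdir -p /tmp/joomla_demo && cd /tmp/joomla_demo

# Placeholder standing in for the package downloaded from Joomla.org
mkdir -p pkg && touch pkg/index.php
tar -czf joomla.tar.gz -C pkg .

# Extract the package into a folder ready for upload
mkdir -p site && tar -xzf joomla.tar.gz -C site

# Upload the extracted tree over FTP (hypothetical host, credentials and path):
# lftp -u ftpuser,ftppass ftp.mydomain.com -e 'mirror -R site public_html/myjoomla; quit'

ls site
```

On a real host, replace the placeholder archive with the package from Joomla.org and uncomment the upload line with your own FTP details.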
After this step, create a blank MySQL database and assign
a user to it with full permissions. The database is left blank
because Joomla will automatically create its tables inside
it. Once you have created the MySQL database
and user, note down the database name, user name and
password, because you will be asked for these credentials
during the Joomla installation.
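The same database step can be done from the command line instead of a hosting panel. The sketch below uses hypothetical names (joomla_db, joomla_user) and a placeholder password; the script only writes the SQL to a file, and the line that applies it against a running MySQL server is left commented out:

```shell
# Hypothetical database name, user and password - substitute your own values
cat > /tmp/create_joomla_db.sql <<'SQL'
CREATE DATABASE joomla_db CHARACTER SET utf8;
CREATE USER 'joomla_user'@'localhost' IDENTIFIED BY 'ChangeMe123';
GRANT ALL PRIVILEGES ON joomla_db.* TO 'joomla_user'@'localhost';
FLUSH PRIVILEGES;
SQL

# Apply it with MySQL root credentials (requires a running server):
# mysql -u root -p < /tmp/create_joomla_db.sql
```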
After uploading the installation files, open a Web
browser and navigate to the main domain (http://www.mysite.com),
or to the appropriate sub-directory (http://www.mysite.com/joomla),
depending on where the Joomla
installation package was uploaded. The first screen
of the Joomla Web Installer will then open.
Once you fill in all the required fields, press the Next button
to proceed with the installation. On the next screen, you will have
to enter the necessary information for your MySQL database.
After all the necessary information has been filled in at every
stage, press the Next button to proceed. You will be taken
to the last page of the installation process. On this page, specify
whether you want any sample data installed on your server.
The second part of the page shows the pre-installation
checks. The Web hosting server verifies that all Joomla
requirements and prerequisites have been met; you should
see a green check mark against each line.
Finally, click the Install button to start the actual Joomla
installation. In a few moments, you will be redirected to the last
screen of the Joomla Web Installer. On this screen, press
the 'Remove installation folder' button.
This is required for security reasons; otherwise, the
installer could be run again at any time. Joomla is now ready to be used.
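If the button fails or is unavailable, the installation folder can also be deleted by hand over SSH or through the file manager. The sketch below simulates a site root under /tmp; on a real host the path would be something like public_html:

```shell
set -e
# Simulated site root standing in for public_html on a real host
rm -rf /tmp/joomla_root && mkdir -p /tmp/joomla_root/installation
cd /tmp/joomla_root

# Delete the installer so it cannot be re-run
rm -rf installation
```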
Creating articles and linking them with the menu
After installation, the administrator panel used to control the
Joomla website is displayed. Here, different modules, plugins
and components, along with the HTML content, can be
added or modified.
WordPress CMS
WordPress is another free and open source blogging CMS
based on PHP and MySQL. Its features
include a plugin architecture and a template
system. WordPress is the most popular blogging system in use
on the Web, used by more than 60 million websites. It was
initially released in 2003 with the objective of providing an
easy-to-use CMS for multiple domains.
The installation steps for most CMSs are much the same:
the compressed file is extracted and deployed to the public_html
folder of the Web server, a blank
database is created, and its credentials are supplied during the
installation steps.
According to WordPress's official figures, this
CMS powers more than 17 per cent of the Web, and the figure
is rising every day.
Figure 2: Creating a MySQL user in a Web hosting panel