
Fog Computing: Introduction to a new Cloud evolution

Introduction to Fog Computing and future study in Cloud evolution.


Jonathan Bar-Magen Numhauser
Universidad de Alcala

In the following article we introduce the reader to a line of research that has been gaining interest in recent months, and which may define the origins and future expansion of Fog Computing. Cloud Computing has steadily grown in importance during the last decade 1. Yet Cloud Computing is essentially a new name for a well-known telecommunication structure: the Server-Client system.

A Server-Client relationship, in which Clients rely on Servers both to view and to store information, has been the core philosophy of the Internet and the W3C (World Wide Web Consortium) 2 since its public release.

Yet a clear evolution has taken place over the last two decades, mainly in the speed at which content transactions occur. This evolution can be traced through the latest communication algorithms, which show that from the Internet's conception to the present day, communication speed has undergone a significant number of changes 3.

From such study we can deduce another issue: the exponential growth in the size of what is known as the Internet Universe 4. With greater speed comes greater demand from users or clients for better, more interactive and entertaining communication 5. This element proved to be the keystone of our investigation, as it instigates a new phenomenon in telecommunication history: the ever-growing dependency of the Client side on the Internet Universe-Server side.

Such dependency has grown over the last decade, and with it the Server-Client relationship was renamed Cloud Computing. Under this model, any entity, from a single person to a large company, can store all its information on a third-party

1. Shacklett, Mary. 2011. Cloud computing. World Trade, WT 100 24 (1).
2. World Wide Web Consortium, http://www.w3.org/
3. Light, J. 2009. An efficient wireless communication protocol for secured transmission of content-sensitive multimedia data. IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM) 2009: 1.
4. Buyya, Rajkumar. 2011. Cloud Computing: Principles and Paradigms.
5. Angeli, Daniele. 2012. A cost-effective cloud computing framework for accelerating multimedia communication simulations. Journal of Parallel & Distributed Computing 72 (10): 1373-85.

server, and so access it from any place at any time 6. A concept reinvented, although it existed before.

In 1998, as a result of the growth of the Internet Universe, the most popular search engine of that time, AltaVista, could no longer meet its clients' search needs at an acceptable speed. The Google search engine was then introduced with better latency results, and its popularity and success grew. The evolution of search engines is a perfect image of how fast the Internet Universe grows: as information banks increased in size, stronger and better-optimized search engines were needed to access that information 7.

If we analyze the creation of such tools, we also find many other services that exploit the massive size of the Internet Universe, and its immense number of servers, to promote their business. During the past decade we can observe the emergence of a new phenomenon in the Internet Universe: social networking 8.

Still, this makes us wonder whether Cloud Computing was more a name change for marketing purposes than the creation of a new methodology and organizational technique; perhaps the same concept had already been applied in other cases, such as social networking.

First of all, the Internet itself was always a social tool; the introduction of personalized web-based profiles did not turn it into one. ICQ 9 and GeoCities 10 were only two of the many tools created in the 1990s that offered a rich social experience to their users, from communication to the creation of personalized user profiles and friend connections.

Yet technological advances, mainly in data transfer speeds and in media sharing, images in particular, turned tools like Facebook and Twitter into successful centers of social interaction 11. This brings us back to the fact that speed and technological improvements in everyday devices, such as mobile phones, shaped how these tools came to be used.

Such improvement in data transfer speed, together with the increased dependency on third-party data storage, including social network tools and mail services, brings us to the core issue of our investigation: the hazards of the massive usage of
6. Brandon, John. 2008. Living in the cloud. PC Magazine 27 (8): 19-20.
7. Anderson, Mary Alice. 2012. Google literacy lesson plans: Way beyond 'just google it'. Internet@Schools 19 (4): 20-2.
8. Google Inc files patent application for multi-community content sharing in online social networks. 2010. Indian Patents News.
9. http://www.icq.com
10. http://geocities.yahoo.com/ or http://en.wikipedia.org/wiki/GeoCities
11. Aggarwal, Charu C. 2011. An Introduction to Social Network Data Analytics.

Cloud Computing 12, and the resulting definition of an Internet Universe afflicted by this usage, known to us as Fog Computing.

Next we will present the first steps of our investigation: how we came to recognize the possible harm that Cloud Computing could inflict on the social structure of the Internet. Following this brief summary, we will present statistical information that demonstrates the Internet's fast growth at the present time and how Cloud Computing has contributed to this situation.

Finally, we will explain how Fog Computing fits into this entire schema and present our research results to date. Practical and theoretical concepts will also be presented to clarify how the investigation is being carried out and which problematic issues we are working to solve.

Preexisting context
Before we start analyzing the subject of massive Cloud Computing and the Fog Computing that results from it, I would like to present a number of basic concepts; in other words, how we got to this point.

The Internet system, together with the HTTP protocol that the W3C certifies and manages, was introduced in October 1994. Since then, constant improvement in data transfer speed and content richness has played a dominant role. Connection speed doubled every one to two years, as did content complexity, evolving from simple text-based content to richer video- and animation-based content 13.

Towards the end of the 90's, Internet users could interact with other users in various ways: from product acquisition through Amazon and eBay, to text chat offered by messenger systems like mIRC and ICQ, to voice chat with programs like MediaRing or Roger Wilco 14. Nor should we forget communication for gaming purposes, which opened the window to massive user interaction through gaming experiences, from basic ones on the Internet Gaming Zone to more complex ones in games such as StarCraft, EA Sports titles, and Heroes of Might and Magic 3. All these features were already present before the turn of the century.

To give users better control over the content being sent back and forth over the Internet, the Google Search Engine 15 was introduced as a rival to the AltaVista Search
12. Brandon, John. 2008. Living in the cloud. PC Magazine 27 (8): 19-20.
13. Light, J. 2009. An efficient wireless communication protocol for secured transmission of content-sensitive multimedia data. IEEE WoWMoM 2009: 1.
14. Moch, Chrissy. 1999. MediaRing breaks out of PC mold. Telephony 236 (13): 16.
15. http://www.google.com

Engine 16, and by offering notably fast search results it allowed users to access information almost instantaneously.

And so the first strong dependency of clients on a particular virtual service was born. As the Internet grew, there was a greater need to depend on a search engine offering fast and easy access to its information. In a short period of time, Google became the strongest and largest content manager of the Internet Universe. Yet, looking at present numbers, Google indexes only 0.0004% of the Internet's content, which gives us a small idea of the Internet's tremendous size (Fig 1).

Google soon started offering its first Client-Server, or Cloud Computing, service: Gmail 17. Gmail's most significant feature was the introduction of 1 GB of storage space, so that users no longer had to worry about the size of their inbox. The service was first introduced in 2004, in parallel with a smaller company known as Walla 18. Both Google and Walla offered a considerable amount of storage on their servers, in what may be considered the first Cloud Computing service.
Soon enough Google's competitors followed, and a new market was born: third-party storage, also known as Cloud Computing.

The first decade of the twenty-first century saw an increase in Server storage size, and with it an increase in data transfer volume and content complexity. If at the end of the 90's we could observe services such as Amazon 19, chat communication such as ICQ, and even gaming experiences, now, thanks to the increase in data transfer speeds, servers could not only store a larger amount of information but also receive a larger amount of data, allowing the virtual world to mimic the real one: storing social information and reflecting it in a social structure, thus creating what would later be known as Social Networks 20.

MySpace 21, Friendster 22, Facebook 23, Twitter 24 and the like were the result of the evolution in data transfer speed and the increase in content complexity. That evolution produced greater interaction between user and virtual world, between Client and Server. And so a dependency that originated largely with the introduction of the Google search engine in the 90's was

16. http://www.altavista.com
17. http://www.gmail.com
18. http://friends.walla.co.il
19. http://www.amazon.com
20. Aggarwal, Charu C. 2011. An Introduction to Social Network Data Analytics.
21. http://www.myspace.com
22. http://www.friendster.com/
23. http://www.facebook.com
24. http://www.twitter.com

translated into an even greater one on the social tools. Such was the dependency that users willingly uploaded and enriched their profiles in these social tools. Finally, as a result of the third-party nature of cloud storage, this information passed from the User's ownership to the service provider's ownership.

Of all those tools, Facebook proved the most successful. Under its privacy policies, information uploaded by users passes into the complete ownership of the Facebook company, and Facebook now holds more than half a billion client or user profiles 25. Even if all users stopped using Facebook, their profiles would remain under Facebook's control, making Facebook the largest bank of personal profiles in the world, used for many third-party purposes as well as governmental interests.

Other services that share their user databases with third-party institutions, specifically US government institutions, are Yahoo, Microsoft, Apple and Google 26. Under the US anti-terrorism acts, the government has broad authority to access those data banks.

With the evolution of social, and later entity, networks, Cloud Computing was finally defined as an independent concept, even though it had existed since the creation of the Internet. The most significant services that Cloud Computing storage companies tend to offer are the centralization of all information for later access from different devices, a backup system for that information, and the assurance that it will always be up to date.
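These promises, one central copy reachable from any device, backed up and always current, can be illustrated with a small sketch. The store, function names and hash check below are purely illustrative, not any provider's real API:

```python
import hashlib

# Toy model of the Cloud promise: one central copy per path, checked for
# freshness by content hash (all names here are illustrative)
cloud_store = {}  # path -> (content, sha256 digest)

def upload(path, content):
    """Store a file's bytes centrally, together with its content hash."""
    digest = hashlib.sha256(content).hexdigest()
    cloud_store[path] = (content, digest)

def is_up_to_date(path, local_content):
    """A device compares its local copy's hash with the central one."""
    stored = cloud_store.get(path)
    return stored is not None and stored[1] == hashlib.sha256(local_content).hexdigest()

upload("notes.txt", b"first draft")
print(is_up_to_date("notes.txt", b"first draft"))   # True: device copy matches
print(is_up_to_date("notes.txt", b"edited draft"))  # False: needs syncing
```

A real provider adds authentication, versioning and conflict resolution, but the hash comparison captures the "always up to date" guarantee in miniature.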

The previous contextual study has established the factors that led us to the present moment, in which we decided to study the overall structure of Cloud Computing and mass content transaction. We will now continue by explaining the issues in the present-day Internet Universe: Cloud Computing and the mass usage of content transactions.

Present Internet
With the preexisting variables established, we will proceed to explain the current variables that motivated us to define the Fog Computing phenomenon.

Data content sharing

The base pillar of the Internet Universe is data sharing. Since its inception, the network was designed to allow a number of nodes, or participants, to share
25. Hane, Paula J. 2012. Facebook in the spotlight. Information Today 29 (7).
26. Cyber Intelligence Sharing and Protection Act (CISPA) and the US Patriot Act, http://www.fincen.gov/statutes_regs/patriot/index.html

digital information 27. Such information could only be viewed on machines, and if we wished to change its format from digital to, for example, physical, we needed a machine to transform one format into another, the printer being one of them.
Content sharing, then, was always part of the global network. Considering this, it is logical that with the increase in connection speed we perceived an increase in data sharing transactions, as shown in the following statistics.

But the evolution of digital media also introduced a variety of new format types, which implied new content types and, in some measure, an increase in complexity. Video and music streaming was no trivial matter at the beginning of the 90's, but by the first decade of the 21st century the MP3 format had gained popularity, and soon movie formats such as AVI and MPEG allowed movies to be shared easily as well.

The increase in transaction speed implied not only that more content would be shared between network entities, but also that the number of user or client nodes in the network would grow.
One implication of the introduction of social tools into the virtual world was an increase in online users who came to rely more on that world than on the real one.
Before the introduction of such tools the number of Internet users stood at 360,985,492; after those changes it reached 2,267,233,742 28. So the public that had not been interested in a virtual world directed mainly towards communication, business transactions and gaming now found a new and attractive use for the Internet.
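As a quick worked check on those figures, the implied growth factor is:

```python
# Internet user counts quoted above (source: internetworldstats.com)
users_before = 360_985_492     # before the rise of social tools
users_after = 2_267_233_742    # after

growth = users_after / users_before
print(f"growth factor: {growth:.2f}x")  # prints "growth factor: 6.28x"
```

That is, roughly a six-fold increase in the online population over the period.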

We cannot forget that the increase in media sharing capabilities also affected the number of new online users, which can be considered the second reason for this situation. Music and video sharing on the Internet allowed many new users to reach entertaining content more easily and quickly.

By now, the following data lets us see the different uses that users make of the Internet, divided into percentages.

Content sharing being the modus operandi of the Internet 29, we have to start considering a number of issues that may limit the freedom and convenience of the user community in this matter. Sharing data on a large scale is presently possible without

27. Nelson, Michael R. 2009. The cloud, the crowd, and public policy. Issues in Science & Technology 25 (4): 71-6.
28. http://www.internetworldstats.com/stats.htm
29. Ekanayake, J. 2011. A scalable communication runtime for clouds. IEEE International Conference on Cloud Computing (CLOUD) 2011: 211.

significant limitation, and connection speed only keeps rising 30. Yet one of the issues that drives the Fog Computing phenomenon is the excess of content in the virtual world. This world grows faster than anything seen to date, and the social knowledge reflected in it is of great value to some, having proved useful as leverage in delicate situations.

Algorithms are constantly being created to facilitate access to such information, including search engine algorithms, e.g. Google's. "Putting some order to the mess" is one way of looking at it, as search engines try to create access channels through which users can view the information in the virtual world. Yet Google indexes only 0.0004% of the content in the virtual world, which points to a possible phenomenon of Silenced Writings or Silenced Content. When an intermediary like Google, with its commercial implications, accesses such a low percentage of content, we have to ask ourselves whether it is useful as well as impartial.

As a possible solution, many research teams in the area seek to create new Data Mining algorithms 31. Such algorithms, as their name reflects, gather data from the virtual world and build smarter systems that allow easier and more complete access to the network.
The main drawback of such systems is a matter of space expansion. As time passes, the virtual world expands, and it does so faster than the algorithms that permit access to its content can be created. If we also consider the evolution of content complexity, we must add further variables to the equation, content size, complexity and speed, which eventually outpace the creation of access and ordering algorithms.

As a result we can conclude that data content sharing in the virtual world, most of which takes place over Server-Client services rather than Peer-to-Peer architectures, is reaching a critical point in size and speed that in the next few years will create a crisis in content reliability. Since the Server-Client service is equivalent to Cloud Computing, from now on we will refer to it as Cloud Computing.

Privacy Policies and components

When we consider Data Mining we are forced to examine privacy policies. For Data Mining algorithms, and any kind of content-based

30. Angeli, Daniele. 2012. A cost-effective cloud computing framework for accelerating multimedia communication simulations. Journal of Parallel & Distributed Computing 72.
31. Aleman-Meza, Boanerges. 2006. Semantic analytics on social networks: Experiences in addressing the problem of conflict of interest detection. WWW '06: Proceedings of the 15th International Conference on World Wide Web.

algorithms, to work properly, there must be access to virtual content. Accessing content is governed by legal policies, known in the market as Privacy Policies 32.

These policies were initially created as a line of defense for the protection of private information. Acts and protocols were passed in many countries to offer citizens the highest level of content protection.
Yet these policies are no more than contracts between users and service providers, and such contracts can differ from one provider to the next.

Before we analyze some existing examples, we would like to define a number of terms. First is the Cookie. A Cookie is a piece of information, or content, saved on the user's local machine when he accesses content in the virtual world. It was first introduced as part of an optimization plan to reduce the machine's accesses to the network and thus the transaction size. By keeping a certain amount of temporary content on the local machine, a second visit to the virtual world can spare some content from being transferred again: the local Client does not need to ask the remote Server for the same content, reducing the data transfer size.

The motivation for this method appeared when the first web browsers were created: connections were not as fast as today's, and any reduction in traffic was a significant improvement in functionality.
Cookies were introduced under those terms, but slowly changed into pieces of information that could be accessed in a bidirectional system.
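The mechanism can be made concrete with Python's standard `http.cookies` module; the cookie name and value below are purely illustrative:

```python
from http.cookies import SimpleCookie

# Server side: issue a cookie that identifies the client on later visits
cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # illustrative name and value
cookie["session_id"]["max-age"] = 3600   # keep it for one hour
print(cookie.output())  # the Set-Cookie header the server would send

# Client side: the browser echoes the value back on the next request,
# letting the server recognize the client without re-sending its state
returned = SimpleCookie()
returned.load("session_id=abc123")
print(returned["session_id"].value)
```

This is the bidirectional channel described above: the server writes the cookie, and every later request reads it back.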
Communication between Client and Server can be bidirectional, meaning that either the client can access the server or the server can access the client. Server access to the Client was initially uncommon, yet many service providers use information stored on Clients' machines, mainly in Cookies, to learn more about the Clients. This element prompted the definition of many privacy policies, and at present some jurisdictions are trying to outlaw the use of Cookies, for example the EU through the EU Cookie Law 33.

The main problem with Cookies is that many users are not aware of their existence, and even those who are find them hard to turn off. Since the beginning of the "Cookie Hunt", many service providers have created alternative systems that allow them to keep accessing user information. One example is Google's attempt, in Apple's

32. Anthonysamy, P. 2011. Do the privacy policies reflect the privacy controls on social networks? IEEE Third International Conference on Privacy, Security, Risk and Trust (PASSAT) and Third International Conference on Social Computing (SocialCom) 2011: 1155.
33. ICO, Information Commissioner's Office: http://www.ico.gov.uk/for_organisations/privacy_and_electronic_communications/the_guide/cookies.aspx

Safari browser, to inject Cookie reads and writes for its users 34, and thereby learn the users' interests and personal information, as well as their navigation history.

The second concept I want to define is the Client Profile. When we consider a Client in the Cloud, or Server-Client relation, we consider a node that may be a computer with a number of users and entities. When a user connects to online services, such as web mail (Gmail, Yahoo, Hotmail, etc.) or social tools (Facebook, Twitter, etc.), he interacts with a number of Servers that constantly collect information about his usage of their services. We define a Client Profile as any profile created on such Servers, which may or may not be under the control of the user or client 35.

Depending on the privacy policy signed between the Client and the Service Provider (the Servers), his control over his own information and content can vary significantly. Client profiles are generally not open to client control, and in most cases the Service Provider has complete authority over, and ownership of, that information.
In the case of Gmail, Google uses the content of the mails to decipher the user's interests and orients both its ad offerings and its search results towards them 36. Google's algorithms are among the most complex and elaborate, and demonstrate the great importance the company attaches to this field of research.

For a data mining algorithm to work properly, the service provider must access either the Client Profiles in the Cloud (Servers) or the cookies on the Client's local machine. As cookies are being outlawed, interest in Cloud Computing grows: clients upload ever more information to the Cloud, and in some cases unknowingly allow the service providers to use their profiles for productivity and marketing purposes 37.

Privacy policies are considered one of the main variables behind the emergence of Fog Computing. Even though such policies are believed to be good and worth enforcing, the companies that, as stated before, depend on content transactions and Data Mining for profit will find alternative ways to maintain control over Client information, and so they mount a strong campaign for Cloud Computing.

It is surprising to see that, when it comes to information control policies, most of these companies unite under the leadership of the W3C, which is the greatest

34. Bartz, Diane. 2012. FTC backs $22.5 million Google settlement over Safari. Reuters, Jul 31, 2012.
35. Stolfo, S. J. 2012. Fog computing: Mitigating insider data theft attacks in the cloud. IEEE Symposium on Security and Privacy Workshops (SPW) 2012: 125.
36. Yang, Yanwu. 2012. A budget optimization framework for search advertisements across markets. IEEE Transactions on Systems, Man & Cybernetics: Part A 42 (5): 1141-51.
37. The Economics of Cloud Computing: An Overview for Decision Makers. 2012.

beneficiary of the current content transaction structure, and work together to find ways around content access limitation policies. Many alternatives to the HTML protocols the W3C promulgates have been created in recent years, alternatives that gave users significant control over their private content. Eventually the W3C and its collaborating companies hunted those alternatives down, as they created a possible content channel outside their scope of control. The most notorious case is the HTML5 vs. Flash conflict, which resulted in Apple and Google removing Flash from part of their products, uniting under the W3C's interest in preserving a hypertext protocol more amenable to content harvesting and data mining 38. In the case of Flash, it must be said that there is still much support for it, and it remains unclear how it will evolve in the future.

Cloud Computing

With most of the variables laid out, I can finally proceed to the subject on which this study is based: the evolution of Cloud Computing towards Fog Computing.

As expressed in the previous sections, we refer to Cloud Computing as the content-based structure on which the Internet rests: the Client-Server relation and the services that storage providers offer in the virtual world.
The Cloud, then, refers to the combination of all existing Servers that store content, content which together forms the virtual world of the Internet.

In the last few years, increased advertising, backed by profound market study, has given rise to a new work methodology based on Cloud Computing 39. Fast communication channels and complex content data resulted in a richer virtual world, in which most of our digital activities can be stored, something that was not possible a couple of decades ago.

Many companies have been promoting the use of this technology, even incorporating it into their trademarks, like Apple with its iCloud. In other services we can observe a slow but determined transition towards a more Cloud-based structure. Google has introduced Cloud services in most of its tools: Google Docs, Gmail, Google+, etc. Microsoft has not fallen behind, and together with its Bing search engine and Facebook works on new algorithms for content search using the Client Profiles in their Clouds. Microsoft is also about to introduce its new Office, which will work almost completely in the Cloud, in other words on remote Servers.

38. Factbox: Adobe vs Apple on Flash technology. Reuters, 2012.
39. Shacklett, Mary. 2011. Cloud computing. World Trade, WT 100 24 (1).

So, with the inevitable approach of a Cloud-based daily life, we have to ask ourselves: shouldn't we analyze the past behavior of the Internet over the last 20 years and look for the drawbacks that tend to appear in a Server-Client, or Cloud, architecture? This question was just one of the many we formulated.

As test subjects we may take, for example, blog services and video upload services. For blog services we did not confine our study to a specific service but took a general overview, while for video upload we studied YouTube and Google Video.

Considering both services, we have noticed a certain level of chaos in their usage. There are countless blogs in the virtual world 40. Many of them are inactive, but those that are active offer information that is constantly uploaded to the Cloud. Blogs are mostly textual and do not consume a large share of the Cloud's storage capacity. A Facebook profile can equally be considered a blog, as it reflects the activity of a user or client and stores that information in Facebook's Cloud.
Video upload is somewhat different: while similar to the other services in most respects, it has a unique characteristic, namely that the content itself is in video format, making it a large element and forcing the service providers to maintain a good, optimized Cloud to store it all.
Another characteristic of video content is its incompatibility with data mining algorithms: to date there is no efficient algorithm that lets users identify video content directly, so tags or keywords must be associated with each video to allow better indexation of the element itself.

Data mining algorithms are keyword-based: to produce a correct result they read these keywords, as well as external textual content, to decipher their relevance to the search key.
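A minimal sketch of such keyword-based indexing, with made-up video identifiers and tags, might look like this:

```python
from collections import defaultdict

# Tags associated with each video (identifiers and tags are illustrative)
documents = {
    "video1": ["music", "concert", "live"],
    "video2": ["tutorial", "cloud", "computing"],
    "video3": ["cloud", "timelapse", "music"],
}

# Build an inverted index: keyword -> set of items carrying that tag
index = defaultdict(set)
for doc_id, tags in documents.items():
    for tag in tags:
        index[tag].add(doc_id)

def search(keyword):
    """Return every item whose tags contain the search key."""
    return sorted(index.get(keyword, set()))

print(search("cloud"))  # prints "['video2', 'video3']"
```

Real engines weight keywords by frequency and surrounding text rather than matching them exactly, but the dependence on textual tags, not on the video data itself, is the point made above.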

The Cloud keeps growing at considerable speed, mainly because it is in the service providers' interest to obtain as much information as possible for future trading, and this drive pushes the structure of the Internet towards chaos. The extreme increase in size, combined with the providers' private interests, is seriously affecting the reliability of the Internet and fuelling a war of publicity and information positioning within it.

Private interests: marketing dominance over Cloud Computing

As stated previously, a commercially driven conflict is growing on the web. The conflict springs from the idea that the service holding the greatest

40. Hart, M. 2009. Usable privacy controls for blogs. International Conference on Computational Science and Engineering (CSE '09): 401.

amount of information about Clients will have the greatest impact on the markets, and will eventually hold the key to successful product placement.

On these concepts services like Facebook and Google thrive; their capacity to reach a large number of Clients and to store a considerable number of profiles in their Clouds has given rise to a speculative industry that deserves study in its own right. In this study we did not emphasize the economic impact of this growing industry, yet we believe it is an important issue, especially given the market events surrounding services of this kind.

Apart from commercial interest, the use of user profiles by government agencies, both to improve their ability to assess potential security threats and to control the daily lives of their subjects in a variety of ways, has been growing 41. While governmental institutions use Facebook and its image identification algorithms to locate profiles in the Cloud and acquire large amounts of personal data on potential subjects, private companies use the same tools to reward or punish their employees or future employees 42. The client's virtual life is becoming ever more important, as clients seek to upload their entire reality to the Cloud.

In all cases, public or private, one usage is common: brand placement and the control of public opinion. With the right algorithms, the Cloud's banks of client profiles give access to elements on users' local machines, and public entities as well as private companies use them to sway popular opinion towards accepting or rejecting given issues. By studying a user's profile with such algorithms, companies can not only orient their products towards potential clients, and let clients discard information of no interest to them, but can also manipulate their interests by knowing their general tendencies 43.
This element falls within the field of social studies, and we did not invest enough time in its analysis; yet we can state that brand placement is directed at making clients believe the companies are shielding them from unimportant matters, while at the same time building an ever more accurate profile of their personality and, in many cases, trying to force the client to see only elements in fields that already interest him.

Ultimately, the promotion of Cloud computing in the markets serves mainly to allow entities to profit from these profile data banks, and to hand them another tool capable of manipulating public opinion.
41, 42 Anthonysamy, P. 2011. Do the privacy policies reflect the privacy controls on social networks? Privacy, Security, Risk and Trust (PASSAT), 2011 IEEE Third International Conference on and 2011 IEEE Third International Conference on Social Computing (SocialCom): 1155
43 Yü. 2010. An approach for protecting privacy on social networks. Systems and Networks Communications (ICSNC), 2010 Fifth International Conference on: 154

If we add to the equation the strong competition between these entities to acquire the largest number of client profiles for their Clouds, we must consider a scenario of open war between a variety of entities.
One such scenario unfolded a few months ago. Megaupload was another Cloud service provider: its creators allowed users to upload content to their servers and charged a certain amount of money to access part of that content. The service, which is at the moment closed and under investigation, was allegedly operating like many other existing content-sharing services. Aside from the legal implications of such activity, and considering that the members of Megaupload are being prosecuted at the moment of writing this paper, we can affirm that the shutdown of this group by US authorities is no coincidence44.

As stated before, the competition for control of information, and for the creation of the most complete Clouds of profile data banks, has created a situation in which a certain service provider, whose Cloud servers lay outside US jurisdiction, had to be closed down because of its sheer size. Such importance is attached to Clouds with profile banks that the competition between information-holding entities is becoming a matter of concern for governments, which use their force to reduce such competition.

On the other hand, this kind of competition creates another prerequisite for the existence of Fog Computing: the duplication of profiles. In the attempt to control as many client profiles as possible, many service providers do not work on sharing information or existing profiles with other providers; instead they duplicate that information, creating a situation in which one person may exist in a variety of services. The growth in services means that the client is no longer able to keep track of all the profiles he has created, and some of that information can be leaked to third parties without his knowledge.
In the case of Facebook, profiles are shared across its entire Cloud, allowing third parties to access user profiles rather easily.
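The duplication described above can be pictured with a short sketch. All service and field names here are hypothetical, and Python is used only for illustration: the same person is stored independently by several unconnected providers, and no single provider's view enumerates all of her profiles.

```python
# Illustrative sketch: one person duplicated across unconnected Cloud services.
# Every service and field name below is invented for the example.
services = {
    "social_net": {"email": "ana@example.com", "name": "Ana", "interests": ["music"]},
    "web_mail":   {"email": "ana@example.com", "name": "Ana P."},
    "bank":       {"email": "ana@example.com", "name": "Ana Perez", "card": "****1234"},
}

def profiles_for(email, services):
    """Return every independent copy of a person's profile across services."""
    return {svc: data for svc, data in services.items() if data["email"] == email}

copies = profiles_for("ana@example.com", services)
print(len(copies))  # the same person exists three separate times
```

Because each copy evolves independently, none of the three records is authoritative, which is exactly the loss of control over one's own profiles that the text describes.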

The growth in client profiles, the lack of privacy policies, improvements in connection speed, the increasing complexity of content formats, open competition for control of information, and the establishment of a Cloud network architecture are only some of the variables that resulted in the creation of Fog Computing.

Fog Computing

Since the beginning of this article I have been preparing the reader to understand the grounds on which this investigation was based. I now proceed to introduce the steps that led me to define this phenomenon, why it is called Fog Computing, and how it should be considered for further investigation in a variety of fields. Finally, I will give a brief explanation of future work, including the first practical solution for this situation.

44 Megaupload site wants assets back, to fight charges, Jeremy Pelofsky, January 20th 2012, Reuters

What is fog computing?

As the name implies, Fog Computing has a strong relation to Cloud Computing. We consider Fog Computing the next step in the evolution of the information society, which relies heavily on the virtual world generated by the Internet. The protocols and tools that make such a world possible have been described briefly in the previous sections, and led us to understand the importance of each variable in the greater equation.

The rapid growth in storage size, in combination with political and social behavior and a strongly market-based philosophy, has driven the Internet into a situation of mass information that, instead of facilitating the functioning of our information society, ends up clouding its judgment and causes a level of chaos that powerful entities and service providers exploit to monopolize information transactions and limit social learning45.

The virtual world, whether called the Internet, the World Wide Web, or any other name associated with the physical existence of a Server-Client network, has undergone a series of changes since its inception. As part of that evolution, we arrived at a network of information of unmeasured size, which results in a lack of freedom in accessing that source of information.

By promoting dependency on Cloud Computing over the last 10 years, and reducing the data stored on local Client machines, the network has been growing at an overwhelming speed, generating what we define as Fog Computing.

When access to the network of information is available only to a certain number of entities, the Client is forced to use those intermediate entities to reach that information; and when the Client is unable by himself to navigate this sea of information, his dependence on those entities turns them into a necessity for working with the Internet. The result is a situation of blinding Fog, a Fog that can act in any number of ways for and against the common User without his knowledge.

45 Boritz, E. 2009. A gap in perceived importance of privacy policies between individuals and companies. Privacy, Security, Trust and the Management of e-Business, 2009. CONGRESS '09. World Congress on: 181

The intermediate entities exploit the fact that the common Client does not have the equipment to access this volume of information, as well as strong economic pressure to dominate the information market, and so mold the resulting access to information to benefit a number of private interests.

As mentioned before, the intermediate entities claim to guide users toward information, but they are more interested in delivering economic results to their paying clients; and so they force the common user to see a certain set of information, making him believe it reflects his true interests, while reducing his capacity for critical thought and turning him into a statistic.

One of the main objectives of the entities creating the Fog is to forge a more accurate statistical picture of the potential market, and thereby offer their paying clients, in other words private companies, an accurate impact figure for their investment46. Consider the following example: Facebook promotes its advertising system by underlining the fact that every ad may reach half a billion users, improving the chance of the client company to access a wider market. The intention is even clearer in analytic tools such as Google's, which can offer, across all of Google's Internet tools, accurate statistics that allow paying clients to view their impact on the potential crowd. Such statistics could only be achieved by applying Data Mining algorithms and accessing users' profiles in the cloud, as well as local information on users' machines.
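A minimal sketch of the kind of aggregate statistic described here, assuming entirely hypothetical profile records (the field names and the interest-counting rule are ours, not those of any real platform):

```python
from collections import Counter

# Hypothetical profile records harvested into a Cloud profile bank.
profiles = [
    {"user": "u1", "interests": ["sports", "cars"]},
    {"user": "u2", "interests": ["sports", "music"]},
    {"user": "u3", "interests": ["music"]},
]

# Count how many users an ad targeted at each interest would reach.
reach = Counter(tag for p in profiles for tag in p["interests"])
print(reach["sports"])  # 2 users would see a sports-related ad
```

Even this toy aggregation shows why profile banks are valuable: the paying client gets a per-segment reach figure without ever seeing an individual profile.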

The dependency on the highly advanced technology needed to make such algorithmic calculations helps ensure that only a few entities are able to "order and access" the information on the Internet. Depending on their privacy policies, which are often dictated by local authorities (in the case of Google and Facebook, the US government), common users will navigate a Foggy virtual world, unable to exist without the intermediate entity that helps them access specific information. This eventually produces an unbreakable dependency on such services, allowing them to dictate access rights and resulting in potential content censorship and silenced information.

Chain reaction in the fog

Working in a Foggy virtual reality, in which content transactions occur at the speed of light, has to be treated accordingly. To study one of the situations that result from working in such an environment, our investigation group decided to analyze the existence of users' profiles in the virtual world.
46 Boritz, E. 2009. A gap in perceived importance of privacy policies between individuals and companies. Privacy, Security, Trust and the Management of e-Business, 2009. CONGRESS '09. World Congress on: 181

By now many Internet users possess some web mail account: Gmail, Yahoo, Hotmail, and so on. From this we can assume that most Internet users, if not all, have had some experience with Cloud Computing.

The next step in our analysis was to check the number of users with another account in some other web tool, from bank web profiles to social network profiles. Most users confirmed that they had more than one user profile on the web.

This is the keystone of Fog Computing: when a user possesses more than one profile in different, unconnected Cloud services, it generates a lack of synchronization between the service providers, as well as a lack of safety in controlling the information being moved across the Internet.

Finally, we checked the number of users who received a variety of unwanted communications, and found that their profile information had been used by a third party for publishing. We were surprised to find that, even though Spam mail is a common reality resulting from the nature of the Foggy Internet, the number of users who at some point in their Internet experience discovered an unauthorized use of their profile was high. At least 43% of them reported a misuse of their profile, ranging from the association of products and personalized advertisements to direct use of their user name and profile information in other services.

Two well-known information leaks of user profiles were the continued theft of Facebook profiles over the past years47, and the Sony PlayStation account profile theft that occurred a couple of years ago48.
These are not the only cases of profile leaks. Such leaks end up creating a chain reaction, as a profile is reused in a variety of ways and in a variety of information circles. As a result of the Fog, a common user is not capable of reaching all corners of the virtual world to search for his stolen profile; he relies instead on services like Google, which itself does not have access to all the information circles in the virtual world. And so, in the near future, a user may find himself accessing areas he never accessed before, where the entities that govern those areas may already hold a complete profile of him, violating his right to privacy.
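The chain reaction can be pictured as reachability in a graph of information circles. The graph and the circle names below are invented for illustration: once a profile leaks into one circle, every circle linked to it can eventually hold a copy.

```python
from collections import deque

# Hypothetical links between information circles that re-share leaked profiles.
links = {
    "social_net":    ["ad_broker"],
    "ad_broker":     ["data_reseller", "analytics"],
    "data_reseller": ["closed_region"],
    "analytics":     [],
    "closed_region": [],
}

def circles_reached(start, links):
    """Breadth-first search: every circle a leaked profile can propagate to."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in links[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

leaked_to = circles_reached("social_net", links)
print(len(leaked_to))  # 5: the leak eventually reaches every linked circle
```

The user sees only the circle he leaked from; the breadth-first closure, which no single search engine covers, is what makes the stolen profile effectively irretrievable.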

A more real-life example can be based on the following case. Chinese Internet service is significantly separated from the worldwide virtual Internet49. A company that operates in China's closed virtual world may thus be behind the acquisition of many thousands of user profiles. As a result of the difficulty of accessing that region of the virtual world, users outside it may not be aware that their profiles are being harvested.

47 Facebook profile access 'leaked' claims Symantec, 11 May 2011, BBC News
48 PlayStation Network users fear identity theft after major data leak, Charles Arthur, Keith Stuart, 27 April 2011, The Guardian
49 Zittrain, J., and B. Edelman. 2003. Internet filtering in China. Internet Computing, IEEE 7 (2): 70-7.
Then one day that user travels to China and registers at a hotel. The hotel may operate in a perfectly legal manner, and the client's personal information is added to its computer system. At that moment, a direct link is established between the profile acquired in the past and the information entered during his visit. Depending on the profile acquired, the hotel may now know this user rather well: if it was a social network account, they may even know his family structure, political ideology, and tendencies; if it was a profile linked to a credit card account, they may know more about their client's financial situation.

In any case, we can observe that a leak of profile information may set off a chain reaction in space and time, which shows the serious implications of a Foggy information society in which a user has no control over his personal data.

A variety of studies inside the fog

Fog Computing is a field in the making. Even though it may sound like an extreme case of mass information in Cloud Computing, Fog Computing covers a vast number of cases in which misuse of information results in hazardous consequences for users on the net.

The study of Fog Computing can easily be adapted to the study of Fog Databases, in which incongruence in data may lead to an increase in data computation, creating the need for optimized systems that reduce the impact of such duplication, unlinked records, and false data.
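A sketch of the kind of incongruence check a Fog Database would need, assuming two duplicated records of the same person (the field names are invented for the example):

```python
# Two independent copies of the same person's record, as duplicated by
# unconnected services; every field name here is hypothetical.
copy_a = {"email": "ana@example.com", "city": "Madrid", "phone": "555-0100"}
copy_b = {"email": "ana@example.com", "city": "Alcala", "phone": "555-0100"}

def incongruences(a, b):
    """Fields present in both copies but holding conflicting values."""
    return {k for k in a.keys() & b.keys() if a[k] != b[k]}

conflicts = incongruences(copy_a, copy_b)
print(conflicts)  # {'city'}: the duplicated data has drifted apart
```

At scale, every conflicting field forces extra computation to decide which copy is true, which is precisely the cost of duplication the text attributes to Fog Databases.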

Another study that may derive from Fog Computing, and which part of our group has undertaken, is the impact of such an information structure on the Information Society. Clearly falling within the field of social studies, the impact that Fog Computing can have on a variety of aspects of social behavior may be of great interest in future studies.

A most interesting study, also adopted by our investigation, concerns the Time and Space dimensions in the Cloud and the Fog. Information existing in the cloud, and linked by the fog, can give rise to a unified neural structure of data that, even if not accessed by a central system (search engines and the like), constitutes a network of data interconnected at practically small intervals of time.
The relation between material information, the growth of the space it occupies, and the time it takes to access such data is being studied, and will soon yield fascinating new results.

These are only a few of the many fields to which Fog Computing is related. On another note, our main line of investigation is the establishment of a network architecture that will reduce the probability of Fog Computing reaching new heights. The fact that the Internet is now at a critical point, and that the growth of Cloud Computing usage only reinforces the Fog structure, forces us to search at high speed for alternatives to resolve this situation.

Future investigation

As mentioned in the previous section, many fields can derive from the investigation of Fog Computing. We therefore decided to study a number of fields that will show us the possible impact of this phenomenon in each of them.

The study of Fog Computing in the Information Society is a first approach to determining the impact of the Internet's structure and Foggy data on social behavior. In such studies we may find the misuse of laws and standards, economic interests, political interests, and so on. From these interests we will obtain a study of the level of impact that Fog Computing has on society, and of how the stated interests result in a profound social restructuring.

A second study, into which we are investing our main efforts, lies in the field of communications and technological solutions. Using the latest technologies and newly available components, we aim to find an alternative to Cloud Computing, and thereby halt the advance of Fog Computing. Such a solution can be obtained through the creation and establishment of new architectures that ensure a more balanced and equitable data network structure.

Finally, as a first approach, we aim to address an interesting issue that has been posed as a more physical question: the existence of a variety of dimensions, specifically the Space and Time dimensions, in the Fog. Fog Computing is an existing phenomenon, based on the physical space in which data is stored as well as the time in which data is accessed. It is on these grounds that we began an investigation into the dimensional properties that Fog Computing has within the overall structure of the Internet.

Conclusions

In this work we have introduced the reader to a new field of investigation in telecommunications and informatics: Fog Computing. This field resulted from the study of a number of elements and variables that create the phenomenon. A few of those variables were the nature of the Internet and its definition from the moment it was created, and the latest developments in data storage expansion driven by what is now called Cloud Computing.

We established that Fog Computing is a direct result of those variables, and of the interest of a number of entities, from the private and public sectors, in simultaneously luring as many users as possible to their data storage centers and acquiring as much control as possible over that information.
Regardless of the existing struggles between private and public entities, most of them align themselves under a number of standards and consortiums, such as the W3C, and ensure that access to the cloud can only pass through their channels. If we add the fact that each entity acts under privacy policies that grant it complete control over the information users upload to third-party service providers, we can conclude that there is much interest in maintaining this Foggy structure in the virtual world, implementing a veil over the data and resulting in Silenced Information.

Having defined Fog Computing, we proceeded to define a number of fields of study, as well as future works that we intend to address and from which we expect to offer new results in the near future.

Fig1. The size of the Internet
