(Post 16/10/2005)
In the last few years, hardware and software have
evolved side by side. Both have become relatively cheaper even as their
performance has improved. With more and more processing power
being made available on the desktop, applications that were formerly
designed for mainframes can now run on desktop PCs. The design
of the software itself has also undergone a sea change. In this article
we shall look at how software has changed, and what it is now.
Years ago (in what can be considered the Jurassic
era for computers!), applications were built from scratch to meet the
client’s requirements. Analysis and design were done once the client’s
requirements were understood, and the application was then developed. Indeed,
this is still done now, but with a difference that will become clearer
as we proceed. Back then, however, developing an application was like an artist
starting off with a blank canvas every time. The colors had to be mixed
in the palette once again to get that right shade!
With time, however, this kind of application development
left both the clients and the developers frustrated. For
the clients, the project was rarely (if ever) implemented
on time, and the project deadlines almost always went awry. The developers,
on the other hand, had to code similar functionality across projects,
and therefore rewrote similar code for every application. This meant
lower productivity as well as code redundancy.
Over time, as though in answer to this, a new paradigm
in software development evolved, where software ‘components’ were developed.
These components packaged the often-used functionality.
Consider, for example, a user authentication module.
This could be a standard piece of code that could be used across any kind
of application. Such a module could be considered as a component and stored
in a library, for reuse across applications. This meant that a category
of developers—the component developers—focused only on developing such
components that could be reused while developing applications.
A component could be a code component or a user interface
component. The user authentication module we spoke of above would require
two input boxes, one for the user name, and the other for the password,
as well as the ‘Submit’ and ‘Reset’ buttons. These four interface components
could be grouped together into one component and made available to the
user. This is an example of a user interface component. Similarly, the
code used for authenticating the user could also be stored in a code library
and used across applications. Note that the user interface component could
also include the code component so as to build the necessary functionality
along with the interface.
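To make the idea concrete, here is a minimal sketch, in present-day Python, of what such a reusable code component might look like. The class and method names are invented purely for illustration; they are not from any particular component library.

```python
# A minimal sketch of a reusable 'code component' for user
# authentication. All names here are hypothetical illustrations.

class AuthComponent:
    """Reusable authentication logic, kept in a shared library
    and plugged into any application that needs it."""

    def __init__(self):
        self._users = {}  # username -> password store

    def register(self, username, password):
        """Add a user to the store."""
        self._users[username] = password

    def authenticate(self, username, password):
        """Return True if the credentials match a registered user."""
        return self._users.get(username) == password


# Any application can reuse the component instead of rewriting it:
auth = AuthComponent()
auth.register("alice", "secret")
print(auth.authenticate("alice", "secret"))   # True
print(auth.authenticate("alice", "wrong"))    # False
```

The point is that the application developer imports and configures this component rather than writing the authentication logic afresh each time.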
Thus the ‘component’ technology was born, where ‘components’
(like their industrial counterparts) were built and kept. When software
had to be ‘produced’, these components were plugged in together and the
necessary code to ‘glue’ these together was written... and voila! An application
was ready!
This kind of software development had obvious advantages.
Firstly, the components were ‘ready-made’, and the developers had to just
bother about adding the extra functionality as well as gluing these components
together rather than rewriting code from scratch. This meant that the
developer could focus more on adding the unique business value to the
software rather than concentrating on providing the already available
functionality. Secondly, the time to market was shorter: since the software
did not have to be developed from scratch, it could be built and marketed
much more quickly, and deadlines were met. The colors in the artist’s palette
were already mixed to achieve that right hue; the painting had just to begin.
The developers were happy and the clients were happier!
This worked fine as long as the components (whether
the code variety or the user interface variety) were present on the system
on which the developer was developing the application.
However, on another front, the network started gaining
prominence, with PCs being connected together to form PC-networks. On
a larger scale, the Internet was being conceived. In the years that followed,
the computing world witnessed a revolution in the form of the Internet.
The way people computed changed, the kind of hardware changed, and even
the kind of applications being developed changed. Applications developed
in this new era, therefore, had to be designed keeping this new medium—the
Internet—at the core. These were web-enabled applications that needed
to work across any platform.
At such a time, the need was felt for the component technology
to be extended such that a component could be accessed across the network.
It is then that technologies like Microsoft’s DCOM, Java’s RMI, and OMG’s
CORBA were developed to meet this ever-growing need. While these provided
access to components across an intranet, accessing the same components
across the Internet required the developer to write a lot more ‘plumbing’
code. This tended to make the entire code look very ugly. The approach
somehow did not look right! To add to it, the same technology needed to
be used on the server side as well as the client side. This meant that
DCOM worked fine as long as you worked with Microsoft products on the
client as well as the server. Using RMI, we could access components across
the Internet as well, provided the components were Java-based.
So, while the need was met to a certain extent, it was
obvious that vendor independence had certainly not been achieved. The developer community
was happy, and then again, not so happy. They needed some way in which
they could use this same component technology across the Internet, without
needing to write too much of ‘plumbing code’, and without being limited
to a particular vendor or technology. Of course, it was a time when Linux
and the concept of Open Source were being embraced by the developer community
as being the right approach.
And then came the concept of Web Services!
Web Services is nothing but the implementation of component
programming over the web! And that too in a vendor-independent, platform-independent
manner! Here, each application that is developed, and present on a network
has the potential to be a service. That is, every application can be considered
as a component, having within it the required functionality to act as
a unit, and exposing the necessary methods that other applications can
call in order to make use of a part or all of that functionality.
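The shape of such a call can be sketched as follows. In this hypothetical Python example, a travel application exposes one of its methods to outside callers; the JSON request string stands in for a message travelling over the web (in practice this would be an HTTP/SOAP exchange), and all names and data are invented for illustration.

```python
import json

# A sketch of an application exposing part of its functionality
# as callable methods. The JSON string stands in for a message
# sent over the web; all names and data are illustrative.

class TravelService:
    _exposed = {"list_destinations"}  # methods visible to outside callers

    def list_destinations(self):
        return ["Mauritius", "Fiji", "Goa"]

    def handle_request(self, raw):
        """Dispatch a serialized request to an exposed method."""
        request = json.loads(raw)
        method = request["method"]
        if method not in self._exposed:
            return json.dumps({"error": "method not exposed"})
        result = getattr(self, method)()
        return json.dumps({"result": result})


# Another application 'calls' the service by sending a request:
service = TravelService()
reply = service.handle_request('{"method": "list_destinations"}')
print(reply)   # {"result": ["Mauritius", "Fiji", "Goa"]}
```

Note that the caller only sees the exposed method and the serialized reply; everything else inside the service remains private to it.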
Imagine, therefore, three sites: one offering travel
and tour destinations, another for hotel bookings, and the third one offering
flight ticket services. In the current scenario, the three different sites
would be owned and run by three different firms, each of which would,
through the site, provide its own business to the customer. Each of these
sites would therefore exist as solitary islands. However, to the customer
who would like to go on a holiday to some place, the same sites would
appear to be related.
The person who would like to go to, say, Mauritius, would
need to first go to one site to check out the list of destinations, then
browse to the next site to get room bookings, and finally a third site
for making his travel arrangements. All the information provided by each
of the sites would need to be collected and managed by the customer himself.
This is illustrated in the figure below.
Note that in such a scenario, the customer would need
to actively browse through the different sites and gather relevant information,
and hence come to a decision after weighing all the information gathered
from the different sites.
Here, it is obvious that what the customer actually needs
is a collaborative experience, in which through one site he can get all
that information, so that he can decide on a holiday destination, book
the hotel room as well as the flight tickets in one go! And this is just
where the concept of Web Services fits in.
With the concept of web services, each web site is built
like a component: it exists as an application in its own right, but also
exposes some of its functionality in the form of methods or functions.
Another web site can then call these functions in order to get some information
from this site. To understand this better, let us go back to the example
we considered earlier. Each of the sites in our ‘holiday to Mauritius’
example would continue to exist as separate entities but would expose
their functionality so that the other websites could utilize it. Therefore,
the ‘Travels and Tours’ site would perhaps expose a method/function, which
would display a list of the travel destinations in the different countries,
and the specialties of each of these locations along with images. Similarly,
the ‘Hotel’ site would expose methods displaying all the hotels in the
different locations, their rates, as well as the availability. Finally,
the ‘Flight Bookings’ site would expose methods that provide information
on the flights that ply to the different locations, their ticket costs,
and the availability.
Each of these sites, or indeed a fourth site (call it all-in-one.com),
could then utilize the methods exposed by the others in building an application
that harnesses all this functionality. This would result in one site that
offers various services to the customer under one virtual roof, so that
the customer can check out the holiday destinations and decide on any
one, view the hotel accommodation available, and book the flight tickets
after checking out the rates. All at one go, and on one web site! This
would make it appear to the customer as though all these functionalities
were offered by one site. In the background, however, all the three web
sites would be involved in the processing. Therefore, when the customer
visits the travel and tours site, he will be presented with a list of
locations from which he can select a location that he would like to go
to. The location that he selects will be sent to the ‘Hotel’ and ‘Flight
bookings’ web sites. These would then return information about the hotels
in that location, as well as the flights plying to and from it.
This communication with the other two sites remains transparent
to the user. At the end of this, the customer not only chooses a location,
but also makes flight and hotel bookings. The customer, therefore, does
not need to manually remember all the information that he has picked up
at each individual site. The site itself takes care of this, thus providing
the user a rich, seamless, and collaborative experience.
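The aggregation described above can be sketched in Python as follows. The three services and the all-in-one site are hypothetical stand-ins (names, hotels, and flight numbers are all invented); in a real deployment each call would go to a remote site over the web rather than to a local object.

```python
# A sketch of the 'all-in-one' idea: three hypothetical services
# each expose a method, and an aggregator calls all three so the
# customer deals with one site only. All names and data are invented.

class ToursService:
    def get_destinations(self):
        return ["Mauritius", "Fiji"]

class HotelService:
    def get_hotels(self, location):
        return {"Mauritius": ["Hotel Lagoon"], "Fiji": ["Reef Inn"]}[location]

class FlightService:
    def get_flights(self, location):
        return {"Mauritius": ["MU101"], "Fiji": ["FJ202"]}[location]

class AllInOneSite:
    """Pulls in the methods exposed by the three sites."""

    def __init__(self):
        self.tours = ToursService()
        self.hotels = HotelService()
        self.flights = FlightService()

    def plan_holiday(self, location):
        # The calls to the other sites stay transparent to the customer.
        return {
            "destination": location,
            "hotels": self.hotels.get_hotels(location),
            "flights": self.flights.get_flights(location),
        }


site = AllInOneSite()
print(site.plan_holiday("Mauritius"))
```

From the customer’s point of view there is one call, `plan_holiday`, even though three separate services answer it behind the scenes.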
The figures below illustrate two scenarios. The first
one shows a fourth site (all-in-one.com) having components of the other
three sites, implying that it pulls in the methods exposed by the other
three. The green portion indicates the added unique functionality that
the site itself provides. This might be an intelligent search feature
using which the customer can enter information such as the number of days
he wants to spend on a holiday, his budget, and the kind of holiday spot he
is looking for. Based on this information, the site could perhaps pull
in the relevant details from the other three sites to provide him the
destinations, hotel rooms, and flight ticket rates as per his requirements
and present these to him as options from which he can choose. The second
figure illustrates the ‘Travel and Tours’ site pulling in the functionality
offered by the other two sites such that once the customer selects the
location he wants to go to, he could be presented the hotels available
in that location, as well as the relevant flight details.
Note that in either scenario, what the customer gets
is an experience wherein he needs to just select from the choices he
sees in front of him. There is no need for him to go to multiple sites
to collect the information and then collate it, as he had to do earlier.
This is what web services do—expose methods over the net in a vendor-independent,
platform-independent manner, using which third parties can offer services
to the customer. Of course, it should be noted that the functionality
exposed in the form of methods can only be pulled in if the web
site exposing them is willing!
With this new approach, as in component programming,
delivery time is reduced, as existing functionality is integrated into
the site. Moreover, since each of the services being used would have been
previously tested for errors within the parent site, the likelihood of
errors in the host site would be reduced.
Therefore, it can be seen that over the years, a paradigm
shift has taken place in the way software has been perceived and developed.
From being an application developed from scratch, uniquely designed
for a particular customer, to being a ‘service’ whose functionality can
be accessed in a vendor- and platform-independent manner across the Internet...
software development has come a long way!