‘Cloud Computing’: An Old Concept on the Verge of Maturity

Written by Zeeshan Naseh

Cloud Computing Challenges

The term “cloud computing” is a relatively new buzzword that describes a decades-old approach to sharing computing resources.

In a nutshell, cloud computing refers to making software, number crunching and storage technology available through a network of servers that are located somewhere else. Broadly speaking, cloud computing allows users to employ somebody else’s servers for storing, retrieving and changing data.
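To make that idea concrete, here is a minimal sketch in Python of storing, retrieving and changing a small record on somebody else’s server. The endpoint URL and the record are purely hypothetical placeholders, and a real cloud storage service would also require credentials; the point is only the shape of the interaction.

    import requests  # a common Python HTTP client

    # Hypothetical object-storage endpoint standing in for "somebody else's servers".
    BASE_URL = "https://storage.example.com/my-bucket"

    # Store: push a small record over the network.
    record = {"customer": "Acme Corp", "balance": 1250}
    requests.put(f"{BASE_URL}/acme.json", json=record)

    # Retrieve: pull the same record back from the remote server.
    fetched = requests.get(f"{BASE_URL}/acme.json").json()

    # Change: update it and write the new version back.
    fetched["balance"] += 100
    requests.put(f"{BASE_URL}/acme.json", json=fetched)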

What we call cloud computing today has its roots in the 1950s, when computers were primarily big mainframes the size of a room.

Because those machines were so expensive, using them efficiently was essential, so the concept of “time sharing” emerged. Using machines that had no computing power of their own – called “thin clients” – different parties could tap the mainframe’s computing power at the same time.

A handful of visionaries helped expand the idea of computers that could talk to each other, a notion that ultimately led to the modern web. In 1961, John McCarthy articulated his vision of computing sold as a public utility, much as water or electricity is made available to the world. J.C.R. Licklider helped create the Advanced Research Projects Agency Network, or ARPANET, generally regarded as the predecessor to the Internet.

Along the way, writers such as Nicholas Carr have helped define web-based “cloud computing” while shaping our thinking about the widespread implications of unlimited computing capacity.

Fast Forward to AWS

Fast forward to the last decade, when Amazon.com launched Amazon Web Services, which, among other things, allowed companies to rent time on computers to run applications and other software. That not only helped popularize the notion of cloud computing, but also made it available to small users of computing capacity.
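As a rough sketch of what “renting time on a computer” looks like today, the snippet below uses AWS’s Python SDK, boto3, to start a single small virtual server and then hand it back. The machine image ID, region and instance size are placeholders rather than recommendations, and credentials and networking details are omitted.

    import boto3  # AWS SDK for Python

    # Connect to the EC2 compute service in one region (placeholder choice).
    ec2 = boto3.client("ec2", region_name="us-east-1")

    # "Rent" one small virtual server; the image ID below is a placeholder.
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Rented server {instance_id} is starting up")

    # Hand the server back when the work is done, and stop paying for it.
    ec2.terminate_instances(InstanceIds=[instance_id])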

But as with anything, cloud computing has its challenges. When using a public cloud, the person or company using the service has no ownership of the hardware that provides the computing capacity. The bottom line: when things go haywire, the user has no control over what’s happening. Further, if your telecom carrier, your data center operator and your cloud provider are all different companies, there is no single “throat to choke” when problems come up.

In addition, if you have computing systems of your own, they must be integrated with those of your cloud provider so that the different computers can talk to each other and play well together. Making that happen can be a challenge.

Private Clouds Are Better, Especially for Large Organizations

All of which is why, for some users, private clouds can make more sense. You have control over the hardware and software alike. You also can act quickly and effectively when things break, and you can ensure that nobody leaves a virtual door open for cyber villains to enter your cloud and make mischief.

The trouble is, building a cloud system of your own can be a costly and cumbersome process. Setting up a cloud can take months, and the existing vendors you’d need to tap for technology and related services may be expensive.

What to do? Answers are starting to emerge in the form of what Gartner is calling cloud management platforms, services that marry computing power with networking and storage in a way that is both more affordable and fast enough to construct literally overnight.

Stay tuned. In future blogs, we’ll continue to explore cloud computing and the new ways companies are finding to build secure and quickly scalable solutions to their business needs.