
The Race Toward Clean Cloud Computing

Imagine a world where most of the functions of our personal computers – running applications, communicating, and storing data – do not take place on those computers but rather at massive computer server farms located in remote locations and linked through high-speed networks.  This is not the stuff of science fiction but rather describes "cloud computing,"  one of the hottest Internet and computing trends and the subject of my weekly technology law column (Toronto Star version, Vancouver Sun, homepage version).

Despite limited public attention, cloud computing has already woven its way into the fabric of the Internet.  Web-based applications allow users to word process, create presentations, and manipulate spreadsheets online; Internet-based data backup services offer the chance to store mirror images of our computer hard drives; and every day hundreds of millions of people use Internet services such as web-based email, photo sharing sites, or Facebook applications where the significant computing power resides elsewhere (in the "cloud" of the Internet).

Critics argue that the benefits of cloud computing – greater computing efficiencies and the accessibility of data and applications from anywhere – are offset by the privacy implications of lost control over our personal data.  Moreover, a growing number of environmental groups note with alarm the enormous energy requirements to power (and keep cool) hundreds of thousands of computer servers.

While cloud computing is an international trend, Canada may enjoy a global competitive advantage that would address some of the critics' concerns.  Indeed, led by Bill St. Arnaud, Senior Director of Advanced Networks at CANARIE, the organization devoted to advanced networks in Canada, there are mounting efforts to position the Canadian north as the ideal home for cloud computing.

The starting point is to establish high-speed optical networks that run north-south between the Canadian arctic and the major Canadian urban centres.  Connecting these two regions by optical network would use minimal energy while making it possible to transfer huge amounts of data almost instantly.

Locating the server farms in the Canadian north offers several environmental advantages.  These include easy access to clean energy sources such as wind and geo-thermal energy and, given the colder climate, decreased energy requirements to cool the computer server farms.  In fact, St. Arnaud argues that the heat generated by the computers can be captured and used to heat nearby buildings, leading to zero-carbon data centres.

In addition to the environmental considerations, locating computer server farms in Canada would offer Canadians better privacy protection since their data would never leave the country and would be subject to national privacy laws.

St. Arnaud is quick to point out, however, that Canada is not alone in competing for cloud computing installations.  As companies increasingly factor environmental and legal considerations into their decision-making processes, other countries are trying to position themselves as the ideal hosts.  For example, Iceland recently announced the creation of a high-speed link with the United States, as it seeks to parlay its geographic position and abundant geo-thermal energy into a competitive advantage.

Canada already has much of the technical and privacy infrastructure in place to become a global player.  It is a recognized fibre-optic network leader in close proximity to the United States, and its privacy legislation meets international standards, thereby removing a potential impediment to data transfers into the country.

A significant barrier, however, may be the failure to address several legal issues that increase the risk of storing data in Canada.  These include the absence of legal protections for Internet intermediaries (such as Internet service providers) for content they host but over which they have neither knowledge nor control.  Moreover, the absence of a "fair use" provision under Canadian law increases the potential liability for innovative business models that rely on cloud computing infrastructure.

Many of the world's leading technology companies, including IBM, Google, and Microsoft, are moving rapidly toward the cloud computing model.  If Canada responds, it could emerge as a leader and in the process address mounting concerns over cloud computing's effect on personal privacy and the environment.

8 Comments

  1. IBM has been in the business for years
    At least a form of it. It was called the mainframe.

    Great for system administrators, since there are so few machines to manage. Potentially terrible for productivity.

    Network is overloaded = less productivity.
    Network is down = no productivity.

    Server is overloaded = reduced productivity.
    Server is down = no productivity.

    The effect is felt not just by one user but by all users of the resource. (Based on personal experience with the kind of shared resources this concept advocates.)

  2. Although the network concern could still be valid, your server view is a little outdated. For that very reason there is no longer any such thing as ONE server machine. Today the machines that SERVE an application are many. Did you ever see Amazon going down? (See CLUSTER COMPUTING, LOAD BALANCING, and the like.) The age of a single MAIN computer is gone. Even the MAINFRAMES sold now are big boxes of MANY modules that mirror and back up each other, including the power supply.

  3. R. Bassett Jr. says:

    I marvel at how often in human society very old ideas are reborn as the next “leading edge” thing. More amazing is how people actually buy into the hype, which only shows how close humanity remains to stepping back into the dark ages with its collective short-term memory. It was my father who tipped me off to this phenomenon when the front-loading washing machine made a comeback, but was hailed as new. Same with the electric vibrating disposable razor, the hydrogen-powered vehicle, “thin clients”, etc., ad nauseam.

    “Cloud Computing” is nothing more than a new term for Client/Server computing that has been with us [in widespread everyday use] for 50 or so years. From a technical perspective there’s nothing to see here, move along…

    As for environmental impact, I sure hope they don’t plan on treading all over the permafrost to create these server farms and windmills, as once the surface is broken the ecosystem is irrevocably changed. However, putting the centers in the polar regions beyond the permafrost would be an excellent way for Canada to exert its sovereignty, and I imagine that does play a large role in the desire to build centers there. It’s certainly not a matter of practicality, unless they plan to string fiber optics along the pipeline and purchase shipping container data centers [ link ], as this approach would require the least amount of initial investment while minimizing the environmental impact of building the system…

  4. ENO: That is why I said a form of it. One of the benefits that has been touted for “cloud” computing is that there is less work for the system administrators. Certainly there is less; however, you need to set up clusters, which does not magically happen and has its own set of challenges. Otherwise, you end up with mainframe operation. So, if you assume that you can do the work with 5 machines, you need to increase that number to allow for transparent failover, to say 7 or even 8 machines. Theoretically the desktops can be relatively inexpensive (they only need to be powerful enough to run the OS in question).

    The concept reminds me of the graphical workstation that I used 12 – 15 years ago. It ran the X-server locally, but all of the programs ran remotely on one of a number of machines (in this particular case, the X Window Manager as well). But, it was network intensive. Setting up the clusters was very expensive. In the end, it comes down to how much money you are willing and able to spend. To do it properly is expensive.

    Does it have its benefits? Yes.
    Does it have situations where it is likely to be useful? Yes.
    Is it the silver bullet that I’ve seen some proposing it is? No.

    For me, the technology is most useful where there are very resource-intensive applications that are used only occasionally. Where the application is used often, many people are using the same application simultaneously, or the expense of recovering lost work is too great, I don’t believe it is as useful as distributing the workload. What I mean by recovery of lost work is this: if the network connection goes down, will you lose all unsaved changes? Much of this is related to the application software that is executing… not all software provides a good means to recover from a crash.

  5. @ENO
    “Did you ever see Amazon going down?”

    Yes! It just did. Check out current IT news. Also, network connections go down quite frequently. After a recent storm here, internet connectivity for everyone in my area was down for most of a day. A few pings told me it was not my ISP’s fault, so obviously networking technology in Western Canada is not sufficiently advanced for a business to depend on fog, eh, I mean cloud computing just yet. Maybe elsewhere, but I doubt it. The internet still depends on too few lines, which are all too easily hooked by some anchor. We recently saw Asia, the Middle East and North Africa severely crippled for many days. There’s about 50 of those shipping incidents a year. Again, check the news. lol

  6. I never denied “network” problems could occur. What I said is that denial of service because of machine failure is very unlikely, since if a center is well architected there isn’t a machine that would stop without some kind of backup kicking in. I work from home, remotely accessing machines in another country. Every morning I wake up and work with those machines for 8 hours. I have not had a problem in 5 years. The only problem I had was last month when my laptop died. Since I did not lose any data (it is all stored remotely), all I had to do was buy another laptop, install my VPN client, and I was back in business in a blink. Cloud computing is already happening. Nothing ever happens out of the blue. If Canada has this opportunity, it is better off grabbing it than losing it.

  7. Todd Sieling says:

    > At least a form of it. It was called the mainframe.

    It’s not quite the same thing, as the environment has changed a fair bit, but there are similarities. The mainframe and the client-server model largely assume a completely dumb terminal/client (the part that the end user works with to access the large, shared resource). Modern clients have appreciable offline storage that hasn’t yet been leveraged much by web-based apps, as well as their own processing power and memory that *are* moderately utilized, particularly in the browser.

    In the historic IBM model, little if anything beyond displaying data actually happened on the terminal machine, so if the network or the mainframe had trouble, productivity necessarily suffered. With cloud apps that take advantage of client-side storage and greater advantage of client-side processing, productivity doesn’t necessarily suffer with outages or disconnection (a rough sketch of this client-side caching idea appears after the comments). Another difference is on the computing side, where the monolithic, all-seeing, all-doing single giant box mainframe has given way to the server farm, where machines shift their workload between each other and route around ailing machines with ease. Expanding the computing capacity of a cloud service doesn’t require one large box to be upgraded (downtime!) or replaced (downtime plus spin-up time), but just dropping in more relatively cheap boxes and bringing them online in less than a day, without disruption to regular operations. The idea of a large resource shared by many remote users is still at the heart, but too much has changed to talk about it as ‘the same old idea back in fashion.’

    Disdaining cloud computing misses a lot of what’s going on, as millions of people have already been using it for years in limited ways, mostly email services like Hotmail (over a decade old) and Gmail (a few years old, but with an interface so fugly it could be 20 years old). The outages in Amazon’s computing services translated into minutes of disruption, or none at all, for end users thanks to customer-side caching, and you can be sure that Amazon is going to be looking at why they can only claim 99.9% uptime right now, which is about where their performance record stands.

  8. Todd: Please see my response to ENO (2008-02-15 16:08:41). If I gave the impression that I have a disdain for “Cloud Computing”, that was not my intention. Far too often, in my years in the IT business (I’ve been doing this since ’91), I’ve seen old ideas rebranded, sometimes with minor changes to the way they work. This whole concept of the “cloud” strikes me that way… the change was a move towards smaller, desktop-like platforms (today my desktop has more power than the large VAX cluster I worked on in ’91) to implement the cluster. Otherwise, it strikes me as being very similar.

    The later point I made to ENO was that I don’t think it will fix all your problems. It is a tool, and should be treated as such. Certainly having the data in a central location that is easily backed up is a good thing (I could do that fairly easily with a Unix or Linux platform), but it needs to be balanced against a number of factors: availability (primarily outages due to network or storage failures, rather than processor failures) and security (if the farm isn’t your company’s own, would you put proprietary or classified data on it?), among others. For some applications, it makes sense to do it; for others, a true client/server model would work better; for still others, all work is best done locally at the desktop. Does IT go through “fashions”? Most certainly it does.

    I am not so sure about locating the farms in the north, however. For the reasons given (cooling, local power generation, etc.) it makes a lot of sense to me. What I am not sure of is the economics of the high-capacity links that would be required. Would they have to be run from, say, Edmonton, or is there sufficient preexisting capacity to places such as Yellowknife that the required links could be run from there?
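
As a side note on the client-side storage point raised in comment 7, here is a minimal, hypothetical TypeScript sketch of how a web application might cache a user's work in the browser so that a brief network or server outage does not cost unsaved changes. The storage key and the /api/save and /api/load endpoints are invented for illustration and do not refer to any particular service.

```typescript
// Hypothetical sketch: keep a local copy of the user's draft so that a brief
// network or server outage does not lose unsaved work.

const DRAFT_KEY = "document-draft"; // illustrative storage key

// Save locally first, then attempt a best-effort sync to the server.
async function saveDraft(text: string): Promise<void> {
  window.localStorage.setItem(DRAFT_KEY, text); // survives disconnection and reloads
  try {
    await fetch("/api/save", { method: "POST", body: text }); // illustrative endpoint
  } catch {
    // Server or network is down: the local copy remains and can be
    // re-synced the next time saveDraft succeeds.
  }
}

// On startup, prefer the server copy but fall back to the local draft.
async function loadDraft(): Promise<string> {
  try {
    const response = await fetch("/api/load"); // illustrative endpoint
    if (!response.ok) throw new Error(`server returned ${response.status}`);
    return await response.text();
  } catch {
    return window.localStorage.getItem(DRAFT_KEY) ?? "";
  }
}
```

The design choice is simply to treat the local copy as the fallback of last resort: writes always land in localStorage first, and the server copy is best-effort until connectivity returns.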