• OpenECP, An Enomaly ECP Fork

     
    Image: Enomaly Inc (via CrunchBase)

    I am a strong supporter of open source software and a proponent of its importance in cloud computing. It is my strong opinion that open source empowers customers by giving them access to the software even after the company behind the product goes out of business. In this regard, I have even called open source the SaaS endgame. Even though open source plays a predominant role in empowering customers, some vendors use it as a pure marketing ploy. These vendors use open source to entice users to their product and, as soon as they gain reasonable traction, stop supporting the open source version.

    In the early SaaS days, we saw how Activecollab abused the open source spirit. In recent days, the cloud vendor Enomaly Inc. has been facing the community's wrath for walking away from the open source version of their Elastic Computing Platform (Enomaly ECP). Unlike Activecollab and a few others who unabashedly abused the spirit of open source, Reuven Cohen, Founder and CTO of Enomaly Inc., has been vocal about their involvement in open source. In fact, I have found him to be one of the strongest voices in support of open source and open cloud efforts. Recently, one of the Clouderati, Sam Johnston, noticed that Enomaly had stopped supporting the open source version of their platform. As Sam put it:

    It doesn’t help my sentiment either that every last trace of the Open Source ECP Community Edition was recently scrubbed from the Internet without notice, leaving angry customers high and dry, purportedly pending the “rejigging [of their] OSS strategy”. While my previous attempts to fork the product as Freenomalism failed when we were unable to get the daemon to start, having the code in any condition is better than not having it at all. In my opinion this is little more than blatantly (and successfully I might add) taking advantage of the Open Source community for as long as necessary to get the product into the limelight. Had they not filled this void others would certainly have done so, and the Open Cloud would be better off today as a result.

    Today, Sam Johnston startled everyone in the Clouderati with the announcement that he is forking the community version of Enomaly ECP with some additional security fixes. The new project, called OpenECP, is released under the Affero General Public License v3 (or similar). The Open Elastic Computing Platform (OpenECP) is currently at version 4.0 Alpha and has been provisionally tested on Debian GNU/Linux 5.0. In short, Sam Johnston has done to Enomaly what CentOS did to Red Hat Enterprise Linux: take the source code, strip out the trademarks, and release it under a different name.

    My initial reaction to this news was that Sam is mad, but this is a brilliant move. Frankly speaking, this fork is the result of some personal bickering between Sam Johnston and Reuven Cohen (as evidenced by their public tweets). Personally, I know both of them “equally well” through Twitter and have absolutely no comment on the tensions between them. I am approaching this issue from the point of view of open source licensing and what this move by Sam means to customers. From what I gathered on Twitter and various online forums, Enomaly has decided not to work on the open source edition for tactical reasons related to their business strategy. Even though they have promised to bring the open source edition back in a few months, there is absolutely no guarantee. Under such circumstances, this fork ensures that the community edition continues to live on even if Enomaly decides not to revive it, and that the current users of the community edition are not left in the lurch. However, there is no guarantee that development of the open source software will continue beyond this point, either. On a related note, I strongly recommend you read the post on 451 Group’s CAOS Theory and my response to it.

    To be fair to Enomaly, I would like to point out that the current proprietary version of Enomaly ECP has some additional features such as VLAN support, metering, and so on. Plus, it is directed towards service providers who want to buy it straight from the software vendor. Though forking is fun and easy, the hardest job was ultimately done by the original vendor, who spent thousands of hours on the product. But this episode should serve as a warning to software vendors who want to take the open source route: if they take the path of open source, they should stick with it all the way through. If they instead try to use it merely as a shortcut for gaining users, they will lose control of their product and may even risk losing their business.

  • Cloud Computing And Green

     
    Image: IBM Smarter Planet Initiative (via Wikipedia)

    One of the buzzwords we hear in the marketing campaigns of this cloud era is “Green.” Some cloud providers target our guilt to sell their services. They clearly understand that most of us are worried about the impact of global climate change and are willing to do everything possible to stop or reduce it. So every single cloud provider uses the idea of going green in its marketing campaigns, giving the impression that anything cloud computing is green. In this post, let us cut through the hype and separate the wheat from the chaff.

    There are many ways in which we can make IT environmentally friendly, and chief among them are the efficient use of compute resources and the reduction of the environmental impact of power and cooling. The former can be achieved through the effective use of virtualization and automation. The latter can be achieved by adding efficiency to power generation and cooling and also by tapping into non-conventional energy sources. An example of this approach is the new datacenter opened by IBM last week at their Research Triangle Park campus in North Carolina. The data center currently uses about 60,000 square feet of raised floor space and consumes 6 megawatts of power, with the capacity to grow to 100,000 square feet and 15 megawatts. At full capacity, the facility will be able to handle the computing needs of 40 to 50 clients. This datacenter could save 15% in energy costs, achieved by increasing the efficiency with which it is operated.

    IBM’s Smarter Planet initiative is designed to incorporate greater intelligence into infrastructures—from buildings, transportation systems and utilities to businesses and even cities—to make them run more efficiently. Along those lines, IBM has put in more than 8,000 branch circuit monitoring points that keep an eye on the systems, more than 2,000 sensors that gather temperature, pressure, humidity and air flow data from air conditioners, and more than 30,000 utility and environmental sensors that interconnect with IBM software tools. Data from these sensors can be analyzed to help with future planning for the building and for energy conservation.

    Technically, you don’t have to be a cloud provider to do this and even traditional IT can embrace these strategies to reduce the impact on the environment.

    However, cloud providers are uniquely positioned to be more effective in achieving Green IT. By the very definition of cloud computing, they have

    • Multi-tenancy
    • Cloud Economics

    incorporated into their business strategy. The consolidation of multiple customers through multi-tenancy leads to lower energy use and a positive impact on the environment. The very presence of cloud economics, where providers offer compute resources for literally pennies, forces them to be more efficient in their IT and to cut costs in every possible way. This means that cloud providers will find ways to cut power and cooling costs drastically, leading to a greener IT.

    In reality, none of the big cloud providers (Amazon, Google, Microsoft, etc.) offers any raw data to show how energy efficient they are with respect to the utilization of compute resources. Some players like Amazon employ ideas like Spot Instances, which give us some insight into their strategy of maximizing resource usage. Still, there is no hard evidence to show that these cloud providers are much greener than traditional IT vendors who employ a good mix of virtualization and automation. If we add the fact that many SaaS vendors don’t use cloud infrastructure providers at all, and instead run their own datacenters or turn to traditional managed hosting providers, the green claims get even foggier.

    There is too much hand waving going on when it comes to Cloud Computing and Green IT. There is no hard data available, and unless the cloud vendors come forward with complete information about their energy efficiency, there is no way to verify these claims. However, the following points are clear:

    • Even traditional vendors can be highly energy efficient with proper use of virtualization and automation
    • Cloud Computing offers a great opportunity to cut energy costs tremendously
    • More importantly, the cloud computing era and the associated awareness of the environmental impact of IT have kick-started a realization that we need not spend ever more money on running IT. This, in turn, has pushed enterprises of all shapes and sizes to optimize their IT towards Green IT.

    I would like to use this post to call upon the cloud providers to come forward and offer customers some insight by sharing raw numbers that explain their Green strategy. Such voluntary steps from vendors will go a long way in shaping sustainable, socially responsible capitalism.

  • Should Scientists Use Microsoft’s Free Cloud Services Offerings?

     
    Image: Science icon (via Wikipedia)

    I am a strong proponent of using cloud computing for scientific research. Some of my posts on the topic are listed below.

    If you read these posts, you can understand how strongly I feel about the use of cloud computing in academic research in general and scientific research in particular. Cloud computing can empower scientists and help them accelerate their research while cutting down on expenses. This is especially true in a country like the US, where scientific funding has been seriously curtailed since 2001. Using cloud computing is possibly the most efficient and cost-effective way to do scientific research.

    In fact, Microsoft Research has a featured story about how cloud computing can help scientists because of its economies of scale, including an interesting comment by Prof. David Patterson, a professor of computer science at the University of California, Berkeley. According to Prof. Patterson, the potential impact of cloud computing is comparable to that of the invention of the microprocessor. An absolutely fantastic comparison, in my humble opinion.

    Patterson adds that the economies of scale possible with the cloud are just as much about performance as cost. The most exciting part of cloud computing, he says, is the ability to “scale up” the processing power dedicated to a task in an instant.

    Even though I am happy to see Microsoft echoing some of the ideas I have been advocating on this blog for more than a year, I am deeply disturbed by one aspect of this article: Microsoft’s attempt to push its S+S (Software plus Services) approach on scientists. Here is a comment by Dan Reed, corporate vice president of Microsoft’s Technology Policy and Strategy and eXtreme Computing Group:

    There is a large community of researchers — social scientists, life scientists, physicists —running many computations on massive amounts of data. To use an example many people can understand — how can we enable researchers to run an Excel spreadsheet that involves billions of rows and columns, and takes thousands of hours to compute, but still give them the answer in 10 minutes, and maintain the desktop experience? Client plus cloud computing offers that kind of sweet spot.

    It appears the National Science Foundation has already signed an agreement with Microsoft to offer American scientists free access to Microsoft’s cloud computing services.

    The National Science Foundation and the Microsoft Corporation have agreed to offer American scientific researchers free access to the company’s new cloud computing service. A goal of the three-year project is to give scientists the computing power to cope with exploding amounts of research data. It uses Microsoft’s Windows Azure computing system.

    Even though I am fully convinced of the impact cloud computing can have on scientific research, I don’t see Microsoft’s S+S strategy serving the interests of science. Actually, I am against this agreement for two reasons.

    • In my opinion, science has to be completely open. Any attempt to lock scientific results into proprietary platforms, applications, or data formats goes against the very spirit of openness in science. By locking scientific results into Microsoft’s platform, we force the entire scientific community to depend on Microsoft, affecting the advancement of science itself. This is morally wrong and very bad for the scientific community in the long term.
    • Secondly, the S+S approach is promoted by Microsoft to protect its cash cow, and it creates problems for the scientific community in two ways. First, it is not cost effective and, more importantly, it is not an efficient way to do science either. The exorbitant licensing costs for the traditional software used to access these cloud services will put a huge dent in the funding of scientific projects; in my opinion, the large amount of money spent on such traditional software packages could be better utilized elsewhere. Then there is the need for beefier machines to run this bulky software. These resource requirements not only make the S+S approach inefficient, they also add to the cost of computing. What is the point in saving money on traditional infrastructure only to spend part of it on extra-powerful desktop (or laptop) computers?

    Another important point to note from the NYT story is that this agreement between the NSF and Microsoft is for three years. In the absence of an indefinite free-use agreement, this is just a marketing ploy. After three years, these research projects will be forced to spend quite a bit of money on licensing fees for the traditional software plus the cloud services offered by Microsoft. Not only that, any researcher who wants to extend these projects or use their results in new projects may end up paying part of their funding money to Microsoft.

    In short, this agreement between the NSF and Microsoft is very shortsighted, and it may well go against the very spirit of science. I have no problem with Microsoft pushing its S+S strategy in the market; eventually, economics and value-add will determine whether S+S or pure SaaS is the ultimate winner. However, when it comes to public-interest endeavors like science, where public money is heavily involved and the impact on the public is significant, it is not a good idea to push a strategy that serves a particular company’s self-interest alone. This move is good neither for science nor for cloud computing.

  • Cloud Pricing War Begins

     
    Image: Diagram showing the economics of cloud computing (via Wikipedia)

    Finally, the cloud pricing war has begun. I have been complaining about AWS pricing here at CloudAve for some time. In my September 2009 post, I argued that Amazon needs to price aggressively to capture more market share.

    However, I would like to use this post to once again voice my concern about Amazon’s EC2 pricing. For example, if I set up a small on-demand Linux instance and do not send ANY traffic to or from it, I would still have to pay $72.00. In my opinion, this is pretty expensive, and I am hoping that competition will eventually drive down the prices. In fact, Amazon has cut the prices of reserved instances by 30%, but that is not very appealing to me because on-demand pricing, which is at the very heart of cloud computing, is still expensive. In the case of reserved instances, I am left with traditional hosting economics, not cloud economics. If Amazon is serious about getting more SMBs and even enterprises, they have to price their EC2 offering aggressively.
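
    For context, that $72 figure is simple instance-hour arithmetic. The sketch below reproduces it assuming the then-current on-demand rate of $0.10 per hour for a small Linux instance and a 720-hour month; both the rate and the month length are my assumptions rather than figures from the quoted post.

        # Back-of-envelope EC2 cost for an idle on-demand instance.
        # Assumed rate: $0.10/hour for a small Linux instance (2009-era list price).
        # Assumed billing period: a 720-hour (30-day) month.
        HOURLY_RATE = 0.10          # $/hour, assumption
        HOURS_PER_MONTH = 24 * 30   # 720 hours, assumption

        monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
        print(f"Idle small instance for a month: ${monthly_cost:.2f}")  # $72.00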

    Finally, with the official launch of Windows Azure on Monday, the competition heated up. Microsoft priced its cloud offering very aggressively compared with what Amazon was offering at the time. For example, Windows Azure pricing was as follows:

    • Compute = $0.12 / hour
    • Storage = $0.15 / GB stored / month
    • Storage transactions = $0.01 / 10K
    • Data transfers = $0.10 in / $0.15 out / GB

    In November 2009, Amazon cut its prices by 15% across all on-demand instance families and sizes. Today, Amazon countered the news cycle around Azure’s official availability with a further reduction in AWS data transfer prices. Data-transfer-out pricing has been cut by 2 cents per GB across the board: the first 10 TB drops from $0.17 to $0.15 per GB, the next 40 TB from $0.13 to $0.11 per GB, and so on. Amazon has also reduced CloudFront pricing by 2 cents per GB.
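
    To make the tier arithmetic concrete, here is a small sketch that prices a hypothetical month of outbound transfer under the old and new tiers quoted above. Only the two tiers mentioned in this post are modeled, and the 25 TB example workload is my own, so treat it as an illustration rather than a complete AWS price sheet.

        # Tiered data-transfer-out pricing, before and after the cut described above.
        # Each tuple is (tier size in GB, $/GB); only the first two tiers are modeled.
        OLD_TIERS = [(10_000, 0.17), (40_000, 0.13)]
        NEW_TIERS = [(10_000, 0.15), (40_000, 0.11)]

        def transfer_cost(gb, tiers):
            """Charge each GB at the rate of the tier it falls into."""
            cost, remaining = 0.0, gb
            for size, rate in tiers:
                used = min(remaining, size)
                cost += used * rate
                remaining -= used
                if remaining <= 0:
                    break
            return cost

        monthly_out_gb = 25_000  # hypothetical workload: 25 TB out per month
        print("old:", transfer_cost(monthly_out_gb, OLD_TIERS))  # 10 TB at $0.17 + 15 TB at $0.13
        print("new:", transfer_cost(monthly_out_gb, NEW_TIERS))  # 2 cents less on every GB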

    According to ChannelWeb, Microsoft is also running an Azure promotion for customers who sign up for a six-month subscription.

    For $59.95 per month, developers can get 750 hours of Azure compute time, 10 GB of storage, and one million storage transactions, along with 7 GB of inbound data transfers and 14 GB of outbound data.
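
    As a rough sanity check, the sketch below prices that same monthly bundle at the Azure pay-as-you-go rates listed earlier in this post. The mapping of the bundle items onto those list rates is my own back-of-envelope reading, so take the exact total with a grain of salt.

        # Price the promotional bundle at Azure's published pay-as-you-go rates
        # (rates taken from the list earlier in this post; mapping is my estimate).
        compute = 750 * 0.12                        # 750 compute hours at $0.12/hour
        storage = 10 * 0.15                         # 10 GB stored for a month at $0.15/GB
        transactions = (1_000_000 / 10_000) * 0.01  # 1M transactions at $0.01 per 10K
        data_in = 7 * 0.10                          # 7 GB inbound at $0.10/GB
        data_out = 14 * 0.15                        # 14 GB outbound at $0.15/GB

        list_price = compute + storage + transactions + data_in + data_out
        print(f"Pay-as-you-go equivalent: ${list_price:.2f} vs. $59.95 promo")  # ~$95.30

    If the arithmetic holds, the $59.95 promotion undercuts the equivalent list price by roughly a third, which underlines how aggressively Microsoft is courting early developers.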

    The competition is really getting interesting, and we can expect further price reductions from all of these providers, leading to an all-out pricing war. With Amazon’s innovative spot instances increasing the efficiency with which its resources are used, further price reductions are possible in the future. Microsoft, with all its cash reserves and a strong desire to win the cloud game, will hit back with its own cuts. With all this back-and-forth, the ultimate winner will be the customer. Long live price wars.

  • In The Era Of Mashups, MashSSL Could Be A Savior

     
    Image: A tag cloud with terms related to Web 2.0 (via Wikipedia)

    From the Web 2.0 era to the current SaaS era, we have seen a proliferation of mashups, not just in the consumer space but also in the enterprise. The idea of mashing up data from two or more data sources or applications is not unique to these times; we saw such mashups even in the traditional computing era. What makes them attractive now is their availability over the web, consumed through web browsers or Rich Internet Applications. For example, when you check the weather on a website by entering a zip code, the site picks up weather data from one application and map data from another, mashes them up, and presents the result to the user through the browser. This ready availability of mashups over the web poses some security problems, and one such problem is the topic of this post. First, let me describe the problem and then talk about one of the solutions put forward by an industry consortium.

    In the more traditional web era, which some people dub Web 1.0, the security and integrity of data moving from the data source (server) to the browser (client) was handled by Secure Sockets Layer, popularly known as SSL. The SSL protocol helps us establish a secure channel between two entities, but it doesn’t help when more than two entities are in play, as in mashups. Even though there are reports about the possibility of compromising SSL to attack such two-party web communications, it has served us pretty well so far. SSL prevents man-in-the-middle attacks by running over TCP and using public/private key encryption to provide a reliable, end-to-end, authenticated connection between two points on the internet. SSL uses public and private keys to establish trust through a handshake between the two entities. Once the handshake is complete, these entities can transfer data securely without any worries.
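
    For reference, here is a minimal sketch of that two-party handshake using Python’s standard ssl module; the hostname is just a stand-in for any HTTPS server. Notice that the handshake authenticates exactly one server to one client, which is precisely the limitation the rest of this post is about.

        import socket
        import ssl

        # Classic two-party SSL/TLS: the client verifies the server's certificate
        # during the handshake, and the result is a private channel between
        # exactly two endpoints.
        hostname = "www.example.com"            # stand-in for any HTTPS server
        context = ssl.create_default_context()  # trusted CA certificates

        with socket.create_connection((hostname, 443)) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                print("negotiated protocol:", tls.version())
                print("peer certificate subject:", tls.getpeercert()["subject"])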

    However, SSL doesn’t scale to mashups and other SaaS interoperability use cases. In a mashup (and, of course, in SaaS interoperability scenarios), two or more applications communicate with each other through the user’s browser, and there is no standard way for those applications to authenticate each other and establish a secure communication channel. From consumer SaaS applications to Enterprise 2.0 applications, we now see some kind of mashup of data sources from different applications. When two applications connect with each other through the user’s browser, how can they be sure there is no man in the middle sitting there to grab the data or inject “bad” data, or a malware-infected browser capturing important data and sending it into the wrong hands? Since mashups happen at the application layer, there is no easy way to authenticate and establish trust. SSL doesn’t help in these multi-party situations because, by definition, it is designed to prevent multi-party situations such as man-in-the-middle attacks. Moreover, SSL works at the transport level, directly on top of TCP, and cannot help much in the case of mashups (security gurus, feel free to point out any situation where SSL could be tapped to meet mashup needs, but I haven’t come across one).

    To solve this problem, we could develop a new protocol and standardize it, but that is time consuming. In an era when users, especially enterprises, are adopting such technologies quickly, there is a need for an alternative. The solution should be:

    • Simple, with no need for new cryptographic techniques or protocols. New cryptography delays adoption, because trust is not something that can be gained quickly
    • RESTful, so that it is lightweight and can sit on top of ubiquitous HTTP
    • Free of any changes to the browser, because those would also delay adoption
    • More importantly, able to scale well in this cloud-based world
    • Definitely open and, preferably, under one of the OSI-approved licenses

    Enter MashSSL and the alliance formed around it in November 2009 by a consortium of leading technology companies: the SSL certificate vendors Comodo, DigiCert, Entrust and VeriSign; the security technology and services providers Arcot, Cenzic, ChosenSecurity, Denim Group, OneHealthPort, QuoVadis, SafeMashups and Venafi; the security research institutions Institute for Cyber Security (UTSA), MIT Kerberos Consortium and Secure Business Austria; along with noted security experts.

    MashSSL is a new multi-party protocol expressly built on top of SSL so that it can take advantage of the trust SSL already enjoys. It is based on a unique insight: deliberately introduce trusted man-in-the-middle entities that may manipulate the messages, but have the effects of those manipulations eventually cancel out, so that the two applications always receive exactly the data they would have received had the man in the middle not been there. This whitepaper explains the idea very well with some neat case studies.

    MashSSL was first developed by a company called SafeMashups and has since become an open specification with an open source reference implementation; it is now in the process of being standardized. Essentially, MashSSL repurposes SSL to create a secure application-layer pipe through which open protocols like OAuth, OpenID and OpenAJAX, as well as proprietary interfaces such as payment provider APIs, can flow more securely while leveraging the existing trust and credential infrastructure.
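
    MashSSL defines its own message flow, so the toy sketch below is not MashSSL itself. It only illustrates the underlying idea of completing an SSL handshake over an application-layer channel that an intermediary merely relays, using Python’s in-memory TLS objects; the certificate file names and the “app-b.example” identity are hypothetical placeholders.

        import ssl

        # App B plays the server role and proves its identity with a certificate;
        # App A plays the client role and verifies that certificate. The cert/key
        # paths are hypothetical placeholders for a certificate issued to
        # "app-b.example".
        server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        server_ctx.load_cert_chain("app_b_cert.pem", "app_b_key.pem")
        client_ctx = ssl.create_default_context(cafile="app_b_cert.pem")

        # MemoryBIO pairs stand in for handshake messages carried through the
        # user's browser instead of over a direct TCP connection.
        a_in, a_out = ssl.MemoryBIO(), ssl.MemoryBIO()
        b_in, b_out = ssl.MemoryBIO(), ssl.MemoryBIO()
        app_a = client_ctx.wrap_bio(a_in, a_out, server_hostname="app-b.example")
        app_b = server_ctx.wrap_bio(b_in, b_out, server_side=True)

        def browser_relay():
            """The 'browser' shuttles opaque TLS records between the two apps.
            It sees the bytes, but cannot forge a handshake both ends accept."""
            b_in.write(a_out.read())
            a_in.write(b_out.read())

        a_done = b_done = False
        while not (a_done and b_done):
            if not a_done:
                try:
                    app_a.do_handshake()
                    a_done = True
                except ssl.SSLWantReadError:
                    pass  # still waiting for relayed records
            browser_relay()
            if not b_done:
                try:
                    app_b.do_handshake()
                    b_done = True
                except ssl.SSLWantReadError:
                    pass
            browser_relay()

        print("handshake complete: the apps now share keys the relay never saw")

    The point of the toy is that the relay, like the browser in a mashup, handles every byte of the handshake yet ends up knowing none of the resulting secrets.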

    As I concluded in one of my recent posts,

    With Web 2.0 and SaaS, we are mostly seeing adoption by geeks and pundits. There is no widespread adoption from mainstream consumers yet and only a small segment of businesses are using them. With more and more adoption of these technologies, such attacks are only going to increase. If these providers don’t have the security (infrastructure, application, people, etc) correct, we are going to see large scale attacks and chaos.

    Mashup security will become crucial as adoption grows in both the consumer and enterprise spaces. Especially in the enterprise, where critical data is mashed up to gain valuable business intelligence, security between the various data sources and applications becomes one of the most important issues; it should be giving CIOs and CSOs nightmares. With further refinement of the MashSSL proposal and its standardization, they could mitigate a big chunk of the risks involved.

    PS: This is my attempt to simplify a complex security issue for our readers. If I have missed any crucial information, feel free to jump in and add your comments. This post was motivated by a note posted by Christofer Hoff on his blog.

  • iPad And SaaS Junkies

     
    Image: Apple Inc. (via Wikipedia)

    When Apple announced the iPad earlier this week, all hell broke loose in the tech blogosphere, including here at CloudAve. On a personal level, I am put off by the lack of a camera and by Apple’s arrogance in trying to control people’s buying habits. However, the idea of the iPad excites me on a different level, especially as a SaaS power user and evangelist.

    Just as I was planning to write a post on the topic, I came across a guest post at GigaOm by Joe Hewitt, the developer who originally built the Facebook iPhone application. Even though he approaches the issue from a different angle, some of the statements he makes in that post capture exactly how I feel about the iPad.

    I felt strongly that all Apple needed to do to revolutionize computing was simply to make an iPhone with a large screen. Anyone who feels underwhelmed by that doesn’t understand how much of the iPhone OS’s potential is still untapped.

    He ends the post with how he feels about the iPad as a developer:

    So, in the end, what it comes down to is that iPad offers new metaphors that will let users engage with their computers with dramatically less friction. That gives me, as a developer, a sense of power and potency and creativity like no other. It makes the software market feel wide open again, like no one’s hegemony is safe. How anyone can feel underwhelmed by that is beyond me.

    I am looking at it from a completely different perspective. As a heavy SaaS user, it excites me to have access to my applications from a mobile device that is reasonably bigger than a mobile phone yet without the disadvantages of netbooks. The iPhone changed the way I use business apps; coupled with SaaS, my productivity has increased many-fold. Most SaaS vendors offer access through mobile phones in one form or another. Some, like Mindmeister and Remember The Milk, offer native iPhone applications, whereas many others, fed up with the Apple approval process, offer mobile web applications. In fact, SaaS providers like Google and Zoho (disclaimer: Zoho is the exclusive sponsor of this blog, but this is my independent opinion) offer mobile web apps that almost mimic the full web experience.

    However, my experience with SaaS apps on the iPhone left a lot to be desired. I found the iPhone screen too small for a strain-free experience, I wanted the keyboard to be a bit bigger to suit my fingers, and at times I wanted more processing power for a smoother experience. With the iPad, I get all of this and more. It is a perfect mobile companion for heavy SaaS users, without the clunkiness associated with netbooks. In short, the iPad is a great device for any SaaS junkie and, in some ways, magical.

  • Cloud Computing And Wall Street

     

    Cloud computing is slowly creeping into many different industries. It fits the needs of Main Street very well, but will it fit Wall Street too? Here is a video in which Kevin McPartland, a senior analyst at The TABB Group, talks about that scenario. (Video link from Datacenter Knowledge)

  • RIP: Sun Cloud

     

    In March of last year, the then-independent Sun Microsystems announced its plans for cloud computing. Some people dismissed the announcement outright, and many more were skeptical of Sun’s plans. I was in the minority that was pretty excited about it because of its potential to keep the clouds open. In the post I wrote immediately after the announcement, I went gaga about what the Sun Cloud could do for the cloud ecosystem in general.

    In fact, when Sun made their announcement last week, the first thing that struck me was it was a good step in the right direction. With their emphasis on openness and interoperability, they are helping the idea of Federated Clouds. If we do not push for this idea of Open Federated Clouds, we will end up with a monopoly of one or two providers in the infrastructure space. Such a monopoly goes against the open federated structure of the internet. The very foundation of Cloud Computing is on top of the internet and it is only natural to take the same open federated structure to Cloud Computing also. In this sense, the announcement of Sun Microsystems is exciting and I hope they follow through on their promise. This announcement should serve as a wake up call to other vendors too. If they don’t embrace the idea of openness, they will end up losing in this new world where the idea of interoperability and dataportability are already intertwined with the consciousness of the users.

    After that announcement, Oracle declared its intention to acquire Sun Microsystems, and Sun went silent on its cloud plans. My attempts to elicit information from them fell flat until I got a chance to talk to some Sun folks during the Structure ‘09 conference. Here is what I wrote at the time:

    When I was at the Structure ’09 conference last week, I had a chance to talk to some folks from Sun and I asked them what happened to their cloud computing plans. Their response intrigued me. They told me that Sun is rethinking its cloud strategy and is now discussing whether to continue with its plans for a public cloud offering or to focus only on what are known as private clouds. However, they took great pains to emphasize that their engineers are still working on the promised public cloud offerings and that there is no final decision yet on the path Sun will take. I asked them if it had anything to do with Oracle, and they replied in the negative.

    Then the Oracle-Sun merger ran into problems with the European Union, and everything on Sun’s cloud side went silent except for some good whitepapers. In my year-end post on the winners and losers of 2009, I put Sun Microsystems in the losers category. I was pretty convinced that the game was over for the Sun public cloud.

    Now it is officially over. According to The Register, Oracle executives have shot down the plans unequivocally.

    It took a major acquisition to finally deliver a dose of reality, but Sun Microsystems’ me-too Amazon-style cloud is finally dead.

    On Wednesday, Edward Screven, Oracle’s chief corporate architect, said unequivocally that the database giant would not be offering Sun’s long-planned and highly-vaunted compute resource service.

    It looks like only the private cloud option I heard about from the Sun folks during the Structure ‘09 conference might stay on their roadmap.

    “We don’t plan to be in the rent-by-minute computer business,” Screven said. “We plan to provide technology for others that are in the rent-by-minute computer business and lots of other business you might call cloud computing.”

    With Larry Ellison publicly ridiculing the very idea of cloud computing, this announcement was expected. It is sad to see a giant of yesteryear go down like this, and sad for open cloud evangelists. RIP, Sun Cloud.

  • Thinking About Security Is Old School? – A Dangerous Trend

     

    Recently, I was listening to a podcast in which analysts were debating public and private clouds. During the discussion, one of the participants, a SaaS vendor, made a comment that disturbed me a bit. I…

    Read more