• Google Buzz And Their Social Strategy

     

    Last week Google released Google Buzz, a social network integrated with Gmail that taps into users’ existing social graph. As expected, the tech blogosphere went through wild swings of emotion, from overwhelmingly positive amazement to criticism of the product’s perceived privacy flaws. I waited for the hysteria to die down before offering my thoughts on the product and Google’s strategy.

    Google has long been criticized as being clueless about their social strategy. In my opinion, Google is well aware of what it wants to do in the social space and is slowly laying the necessary foundation. Forget Orkut; Google is not even serious about that kind of walled-garden approach. Google is out to build something grander, mimicking the very nature of the underlying internet platform. Their approach to social is more open and distributed than that of the other major social networking sites. Facebook may have more than 400 million users, but it doesn’t mimic the diverse world we live in. An online social network can never mimic our real-life social network with a walled-garden approach. Period. Let us take a step back and dig a bit deeper into this topic and what Google might be trying to do.

    In real life, we socialize in many different environments: at home, in coffee shops, in bars, at book clubs and even, sometimes, while lying on the beds of emergency rooms. We take a compartmentalized approach to our social life. As we move from the grounded world to the web-based world, the various social tools we use on the internet platform must mimic the nature of the interactions we have in real life. In real life, I go to a local book club to talk about a book that interests me, hit a sports bar with friends to watch the Super Bowl, and so on. I go to different places for different interests, and I meet different sets of people for each of those interests. I am free to take a friend from one group, say the book club, to another group as I please, as long as my friend is okay with it. Any online social network should allow me the same kind of social interaction. In the real world, I don’t go to the same clubhouse for all of my social activities. Similarly, my social network on the web should be open and distributed, not held inside the walled gardens of any single social network. Any attempt to shut my social life inside walled gardens goes against my liberty, and I call it the big brother approach.

    Having made my thoughts on the nature of online social interactions clear, let us take a look at how Google is approaching the problem. Even though Google built Orkut early on, I always felt they were not serious about taking it to the next level. Their announcement of OpenSocial gave some hints about their social strategy. Instead of trying to keep users inside their properties, Google is letting them socialize on whatever web app they are already comfortable with, while giving them a platform to manage their social graph. Google has also introduced social features into many of their web properties individually. For example, I can socialize with one group of people inside Google Reader and another group inside Picasaweb. In short, Google is letting me socialize in whatever place (app) I want with whomever I want, just as I do in the real world. I am not restricted to Google’s properties alone; I can do this on any third-party service supporting OpenSocial. Thus, Google is taking a distributed path to social.

    With Google Buzz, Google is adding a social layer with real-time updates inside Gmail. With this move, they are trying to achieve the following:

    • Bring real-time social conversation inside Gmail
    • Keep Gmail relevant even as email fades in this world of tweets and other social-networking-based messages
    • More importantly, prepare Gmail as a dashboard for various distributed social properties

    Eventually, Google can tie all the different services, their own properties as well as other OpenSocial applications, together inside Gmail, giving users a unified interface. It is important to understand the difference between this approach and the Facebook approach. While Facebook wants all the players to dance on its platform, thereby keeping them inside the walled garden, Google wants to let users dance wherever they want as long as they come back to sleep on Google’s properties. By doing this, Google is letting us have the same kind of social experience we have in real life while, at the same time, keeping control over our online activities.

    Google Buzz is just a beginning, and it may not even be their main social strategy. They are taking a distributed approach to social with the hope that one day they can tie it all together without drastically disrupting the user experience. The main advantages of their strategy are the distributed approach to social and the promise to be open (with a huge emphasis on data liberation). Essentially, in my opinion, we haven’t really seen the core of Google’s social strategy yet and, as Google usually does, they are slowly going to make us embrace their tools to manage our online social interactions. As I have said many times, Google’s main goal is to collect all of the world’s information. They don’t want to do it by forcing users to do everything on their properties; they clearly understand that this will not work for a goal like theirs. Instead, they are letting us do whatever we want wherever we want and, by keeping control over our social graph, they are accumulating the wealth of information they set out to collect. In my opinion, this is Google’s strategy, and they are executing it very well (whether that is good or bad is entirely a different topic).


  • OpenECP, An Enomaly ECP Fork

     

    I am a strong supporter of Open Source software and a proponent of the importance of open source in cloud computing. It is my strong opinion that open source empowers customers by giving them access to the software even after the company behind the product goes out of business. In this regard, I have even called Open Source a SaaS endgame. Even though Open Source plays a predominant role in empowering customers, there are some vendors who use open source as a pure marketing ploy. These vendors use open source to entice users to their product and, as soon as they gain reasonable traction, stop supporting the open source version of the product.

    In the early SaaS days, we saw how Activecollab abused the open source spirit. In recent days, the cloud vendor Enomaly Inc. has been facing the community’s wrath for walking away from the open source version of their Elastic Compute Platform (Enomaly ECP). Unlike Activecollab and a few others who unabashedly abused the spirit of open source, Reuven Cohen, Founder and CTO of Enomaly Inc., has been vocal about their involvement in open source. In fact, I have found him to be one of the strongest voices in support of open source and open cloud efforts. Recently, one of the Clouderati, Sam Johnston, noticed that Enomaly had stopped supporting the open source version of their platform.

    It doesn’t help my sentiment either that every last trace of the Open Source ECP Community Edition was recently scrubbed from the Internet without notice, leaving angry customers high and dry, purportedly pending the “rejigging [of their] OSS strategy”. While my previous attempts to fork the product as Freenomalism failed when we were unable to get the daemon to start, having the code in any condition is better than not having it at all. In my opinion this is little more than blatantly (and successfully I might add) taking advantage of the Open Source community for as long as necessary to get the product into the limelight. Had they not filled this void others would certainly have done so, and the Open Cloud would be better off today as a result.

    Today, Sam Johnston startled everyone in the Clouderati with an announcement that he is forking the community version of Enomaly ECP with some additional security fixes. The new project, called OpenECP, is released under the Affero General Public License v3 or similar. The Open Elastic Computing Platform (OpenECP) is currently at Version 4.0 Alpha and provisionally tested on Debian GNU/Linux 5.0. In short, Sam Johnston has done what CentOS did to Red Hat Enterprise Linux: take the source code, strip out the trademarks, and release it under a different name.

    My initial reaction to this news was that Sam is mad, but this is a brilliant move. Frankly speaking, this fork is the result of some personal bickering between Sam Johnston and Reuven Cohen (as evidenced by their public tweets). Personally, I know both of them “equally well” through Twitter and I have absolutely no comment on the tensions between them. I am approaching this issue from the point of view of open source licensing and what this move by Sam means to customers. From what I gathered from Twitter and various online forums, Enomaly has decided not to work on the open source edition for tactical reasons from their business strategy point of view. Even though they have promised to bring the open source edition back in a few months, there is absolutely no guarantee. Under such circumstances, this fork ensures that the community edition continues to live on even if Enomaly decides not to revive it. It ensures that the current users of the community edition are not left in the lurch. However, there is no guarantee that development of the open source software will continue beyond this point. On a related note, I strongly recommend you read the post on 451 Group’s CAOS Theory blog and my response to it.

    To be fair to Enomaly, I would like to point out that the current proprietary version of Enomaly ECP has some additional features like VLAN support, metering, etc. Plus, it is directed towards service providers who want to buy straight from the software vendor. Though forking is fun and easy, ultimately the hardest job was done by the original vendor, who spent thousands of hours on the product. But this episode should serve as a warning to software vendors who want to take the open source route. If they take the path of open source, they should stick to it all the way through. If they instead try to use it just as a shortcut for gaining users, they will lose control of their product and may even face the danger of losing their business.


  • Cloud Computing And Green

     

    One of the buzzwords we hear in the marketing campaigns of this cloud era is the concept of Green. Some cloud providers target our guilt to sell their services. They clearly understand that most of us are very worried about the impact of global climate change and are willing to do everything possible to stop or reduce it. So, every single cloud provider uses the idea of going green in their marketing campaigns, giving the impression that anything cloud computing is automatically green. In this post, let us dig through the hype and cut through the chaff.

    There are many ways in which we can make IT environmentally friendly, and chief among them are the efficient use of compute resources and the reduction of the environmental impact of power and cooling. The former can be achieved by the effective use of virtualization and automation. The latter can be achieved by adding efficiency to power generation and cooling and, also, by tapping into non-conventional energy resources. An example of this approach is the new datacenter opened by IBM last week at their Research Triangle Park campus in North Carolina. The data center currently uses about 60,000 square feet of raised floor space and consumes 6 megawatts of power, with the capacity to grow to 100,000 square feet and 15 megawatts. At full capacity, the facility will be able to handle the computing needs of 40 to 50 clients. This datacenter could save 15% in energy costs, and it does so by increasing the efficiency of how the datacenter is operated.

    IBM’s Smarter Planet initiative is designed to incorporate greater intelligence into infrastructures—from buildings, transportation systems and utilities to businesses and even cities—to make them run more efficiently. Along those lines, IBM has put in more than 8,000 branch circuit monitoring points that keep an eye on the systems, more than 2,000 sensors that gather temperature, pressure, humidity and air flow data from air conditioners, and more than 30,000 utility and environmental sensors that interconnect with IBM software tools. Data from these sensors can be analyzed to help with future planning for the building and for energy conservation.

    Technically, you don’t have to be a cloud provider to do this and even traditional IT can embrace these strategies to reduce the impact on the environment.

    However, cloud providers are uniquely positioned to be more effective at achieving Green IT. By the very definition of cloud computing, they have

    • Multi-tenancy
    • Cloud Economics

    incorporated into their business strategy. The consolidation of multiple customers through multi-tenancy leads to less use of energy resources and a positive impact on the environment. The very presence of cloud economics, where cloud providers offer compute resources for literally pennies, forces the providers to be more efficient in their IT and to cut costs in every possible way. This means that cloud providers will find ways to cut down drastically on power and cooling costs, leading to a greener IT.

    In reality, none of the big cloud providers (Amazon, Google, Microsoft, etc.) offers any raw data to show how energy efficient they are with respect to the utilization of compute resources. Some players like Amazon employ ideas like Spot Instances, which give us some understanding of their strategy to maximize resource usage. Still, there is no hard evidence available to show that these cloud providers are much greener than the traditional IT vendors who employ a good mix of virtualization and automation. Now, if we add the fact that many SaaS vendors don’t use the cloud infrastructure providers for their infrastructure needs, instead using their own datacenters or resorting to traditional managed hosting providers, the green claims get more and more foggy.

    There is too much hand waving going on when it comes to Cloud Computing and Green IT. There is no hard data available, and unless cloud vendors come forward with complete information about their energy efficiency, there is no way we can verify these claims. However, the following points are clear:

    • Even the traditional vendors can be highly energy efficient with a proper use of virtualization and automation
    • Cloud Computing offers us a great opportunity to cut down tremendously on energy costs
    • More importantly, the cloud computing era and the associated awareness of the environmental impact of IT have kick-started the realization that we need not spend so much money on running IT. This, in turn, has forced enterprises of all sizes and shapes to optimize their IT towards Green IT.

    I will use this post to call upon cloud providers to come forward and offer customers some insight by sharing raw numbers that explain their Green strategy. Such voluntary steps from vendors will go a long way in shaping sustainable, socially responsible capitalism.


  • Online Finance – Rigid Segmentation Doesn’t Work

     

    Recently ReadWriteWeb started a series taking a very high-level look at online finance. One of the posts discussed the evolving online finance ecosystem. In the post, RWW editor Richard MacManus interviewed Rod Drury, CEO of Xero (see disclosure)…

    Read more

  • Should Scientists Use Microsoft’s Free Cloud Services Offerings?

     

    I am a strong proponent of using cloud computing for scientific research. Some of my posts on the topic are listed below.

    If you read these posts, you can understand how strongly I feel about the use of cloud computing in academic research in general, and scientific research in particular. Cloud computing can empower scientists and help them accelerate their research while cutting down on expenses. This is especially true in a country like the US, where scientific funding has been seriously curtailed since 2001. Using cloud computing is possibly the most efficient and cost-effective way to do scientific research.

    In fact, Microsoft Research has a featured story talking about how cloud computing can help scientists through economies of scale, along with an interesting comment by Prof. David Patterson, a professor of computer science at the University of California, Berkeley. According to Prof. Patterson, the potential impact of cloud computing is comparable to that of the invention of microprocessors. An absolutely fantastic comparison, in my humble opinion.

    Patterson adds that the economies of scale possible with the cloud are just as much about performance as cost. The most exciting part of cloud computing, he says, is the ability to “scale up” the processing power dedicated to a task in an instant.

    Even though I am happy to see Microsoft echoing some of the ideas I have been advocating on this blog for more than a year, I am deeply disturbed by one aspect of this article: Microsoft’s attempt to push their S+S (Software plus Services) approach onto scientists. Here is a comment by Dan Reed, corporate vice president of Microsoft’s Technology Policy and Strategy and eXtreme Computing Group:

    There is a large community of researchers — social scientists, life scientists, physicists —running many computations on massive amounts of data. To use an example many people can understand — how can we enable researchers to run an Excel spreadsheet that involves billions of rows and columns, and takes thousands of hours to compute, but still give them the answer in 10 minutes, and maintain the desktop experience? Client plus cloud computing offers that kind of sweet spot.

    It appears the National Science Foundation has already signed an agreement with Microsoft to offer American scientists free access to Microsoft’s cloud computing services.

    The National Science Foundation and the Microsoft Corporation have agreed to offer American scientific researchers free access to the company’s new cloud computing service. A goal of the three-year project is to give scientists the computing power to cope with exploding amounts of research data. It uses Microsoft’s Windows Azure computing system.

    Even though I am fully convinced about the impact of cloud computing on scientific research, I don’t see their S+S strategy serving the interests of Science. Well, actually, I am against this agreement for two reasons.

    • In my opinion, Science has to be completely open. Any attempt to lock scientific results into proprietary platforms, applications, or data formats goes against the very spirit of openness in science. By locking scientific results into Microsoft’s platform, we are forcing the entire scientific community to depend on Microsoft, affecting the very advancement of science itself. This is morally wrong and very bad for the scientific community in the long term.
    • Secondly, the S+S approach is promoted by Microsoft to protect their cash cow, and it creates problems for the scientific community in two ways. First, it is not cost-effective and, more importantly, it is not an efficient way to do science either. The exorbitant licensing costs for the traditional software used to access these cloud services will put a huge dent in the funding for scientific projects. In my opinion, the large sums spent on such traditional software packages can be better utilized elsewhere. Then there is the need for beefier machines to run this bulky software. These resource needs not only make the S+S approach inefficient, they also add to the cost of computing. What is the point in saving money on traditional infrastructure expenditure and then spending part of it on extra-powerful desktop (or laptop) computers?

    Another important point to note from the NYT story is that this agreement between NSF and Microsoft is for three years. In the absence of an indefinite free use agreement, this is just a pure marketing ploy. After three years, these research projects will be forced to spend quite a bit of money on the licensing fees for the traditional software plus the cloud services offered by Microsoft. Not only that, any researcher who wants to either extend these projects or use their results on new projects may end up paying part of their funding money to Microsoft.

    In short, this agreement between NSF and Microsoft is very shortsighted and it may as well go against the very spirit of science. I have no problem with Microsoft pushing their S+S strategy in the market. Eventually, the economics and the value add will determine whether S+S or pure SaaS will be the ultimate winner. However, when it comes to altruistic issues like Science where public money is heavily involved and its impact on the public is very significant, it is not a good idea to push a strategy that serves a particular company’s self interest alone. This move is neither good for Science nor for Cloud Computing.


  • Cloud Pricing War Begins

     

    Finally, the cloud pricing war has begun. I have been complaining about AWS pricing here at CloudAve for some time. In my September 2009 post, I argued that Amazon needs to price aggressively to capture more market share.

    However, I would like to use this post to once again voice my concern about Amazon’s EC2 pricing. For example, if I set up a small on-demand Linux instance and do not send ANY traffic to or from it, I would still have to pay $72.00 a month. In my opinion, this is pretty expensive, and I am hoping that competition will eventually drive down the prices. In fact, Amazon has cut the prices of reserved instances by 30%, but that is not very appealing to me because on-demand pricing, which is at the very heart of cloud computing, is still expensive. In the case of reserved instances, I am left with traditional hosting economics, not cloud economics. If Amazon is serious about getting more SMBs and, even, enterprises, they have to price their EC2 offering aggressively.
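    To make the arithmetic behind that $72.00 figure explicit, here is a minimal back-of-the-envelope sketch. The $0.10-per-hour rate is my assumption, implied by $72.00 over a 720-hour month; it is not quoted in the post itself.

    ```python
    # Back-of-the-envelope EC2 cost for an idle small on-demand Linux instance.
    # The $0.10/hour rate is an assumption implied by the $72.00 monthly figure.
    HOURLY_RATE = 0.10         # USD per instance-hour (assumed)
    HOURS_PER_MONTH = 24 * 30  # 720 hours in a 30-day month

    idle_monthly_cost = HOURLY_RATE * HOURS_PER_MONTH
    print(f"Idle small instance for a month: ${idle_monthly_cost:.2f}")  # $72.00
    ```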

    Finally, with the official launch of Windows Azure on Monday, the competition heated up. Microsoft priced its cloud offering very aggressively compared to what Amazon was offering at the time. For example, Windows Azure pricing was as follows:

    • Compute = $0.12 / hour
    • Storage = $0.15 / GB stored / month
    • Storage transactions = $0.01 / 10K
    • Data transfers = $0.10 in / $0.15 out / GB

    In November 2009, Amazon cut their prices by 15% across all on-demand instance families and sizes. Today, Amazon countered the news cycle around Azure’s official availability with a further reduction in their AWS data transfer prices. The pricing for data out has been reduced by 2 cents per GB: the first 10 TB has been reduced to $0.15 per GB from $0.17 per GB, the next 40 TB to $0.11 per GB from $0.13 per GB, and so on. Similarly, they have reduced Amazon CloudFront pricing by 2 cents per GB as well.
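    For readers who want to see how those outbound tiers add up, here is a minimal sketch of a tiered data-transfer calculation using the reduced rates quoted above; only the two tiers mentioned in the post are covered, and everything else (free allowances, rounding, tiers beyond 50 TB) is ignored.

    ```python
    def aws_data_out_cost(gb_out: float) -> float:
        """Outbound data-transfer cost under the reduced tiers quoted above:
        $0.15/GB for the first 10 TB, $0.11/GB for the next 40 TB.
        Tiers beyond 50 TB (and any free allowance) are ignored here."""
        TB = 1024  # GB per TB
        tiers = [(10 * TB, 0.15), (40 * TB, 0.11)]  # (tier size in GB, USD per GB)
        cost, remaining = 0.0, gb_out
        for size, rate in tiers:
            billed = min(remaining, size)
            cost += billed * rate
            remaining -= billed
            if remaining <= 0:
                break
        return cost

    # 15 TB out: 10 TB at $0.15/GB plus 5 TB at $0.11/GB
    print(f"${aws_data_out_cost(15 * 1024):,.2f}")
    ```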

    According to ChannelWeb, Microsoft is also running an Azure promotion for customers who sign up for a six-month subscription.

    For $59.95 per month, developers can get 750 hours of Azure compute time, 10 GB of storage, and one million storage transactions, along with 7 GB of inbound data transfers and 14 GB of outbound data.
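    Out of curiosity, it is worth comparing that bundle against the pay-as-you-go rates listed earlier. The rough comparison below treats the quoted rates as flat and ignores free tiers and overage rules, which the post does not cover, so take it only as an illustration of the relative numbers.

    ```python
    # Rough comparison of the 6-month Azure promo bundle against the
    # pay-as-you-go rates quoted earlier in this post (illustration only).
    compute_hours, storage_gb, transactions = 750, 10, 1_000_000
    data_in_gb, data_out_gb = 7, 14

    list_price = (
        compute_hours * 0.12              # compute at $0.12/hour
        + storage_gb * 0.15               # storage at $0.15/GB-month
        + (transactions / 10_000) * 0.01  # $0.01 per 10K storage transactions
        + data_in_gb * 0.10               # inbound data at $0.10/GB
        + data_out_gb * 0.15              # outbound data at $0.15/GB
    )
    print(f"Pay-as-you-go equivalent: ${list_price:.2f} vs. promo price $59.95")
    ```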

    The competition is really getting interesting, and we can expect further price reductions from all these providers, leading to an all-out pricing war. With Amazon coming up with the innovative idea of Spot Instances to increase the efficiency of their resource usage, further price reductions are possible in the future. Microsoft, with all its cash reserves and a strong desire to win the cloud game, will hit back with its own price cuts. With all this back-and-forth, the ultimate winner will be the customer. Long live price wars.


  • In The Era Of Mashups, MashSSL Could Be A Savior

     

    From the Web 2.0 era to the current SaaS era, we are seeing a proliferation of mashups, not just in the consumer space but also in the enterprise space. The idea of mashing up data from two or more data sources or applications is not unique to these times; we saw such mashups even during the traditional computing era. What makes them attractive now is their availability over the web, consumed through web browsers or Rich Internet Applications. For example, when you check the weather on a website by entering your zip code, the site picks up weather data from one application and map data from another, mashes them up, and presents the result to you in the browser. This ready availability of mashups over the web poses some security problems, and one such problem is the topic of this post. First, let me describe the problem and then talk about one of the solutions being considered by an industry consortium.

    In the more traditional web era, which some people dub Web 1.0, the security and integrity of data moving from the data source (server) to the browser (client) was mediated by the Secure Sockets Layer, popularly known as SSL. The SSL protocol helps us establish a secure channel between two entities, but it doesn’t help when more than two entities are in play, as in mashups. Even though there are reports about the possibility of compromising SSL to attack such two-party web communications, it has served us pretty well so far. SSL prevents man-in-the-middle attacks by running over TCP and using public/private key encryption to provide a reliable, authenticated, end-to-end secure connection between two points over the internet. SSL uses public and private keys to establish trust through a handshake between two entities. Once the handshake is completed, these entities can transfer data securely without any worries.
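    To ground the “two entities” point, here is a minimal Python sketch of an ordinary two-party TLS/SSL connection. The client authenticates exactly one peer, the server named in the handshake, which is precisely the model that breaks down once a third application joins the conversation. The hostname is just a placeholder for illustration.

    ```python
    import socket
    import ssl

    # A plain two-party SSL/TLS connection: the client verifies one server's
    # certificate and establishes one encrypted channel. Nothing in this
    # handshake authenticates a third application joining the exchange.
    hostname = "example.com"  # placeholder host, for illustration only
    context = ssl.create_default_context()  # loads the trusted CA certificates

    with socket.create_connection((hostname, 443)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            print("Negotiated protocol:", tls.version())
            print("Peer certificate subject:", tls.getpeercert().get("subject"))
    ```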

    However, SSL doesn’t scale to mashups and other SaaS interoperability use cases. In a mashup (and, of course, in SaaS interoperability scenarios), two or more applications communicate with each other through the user’s browser. There is no standard way for these applications to authenticate each other and establish a secure communication channel. From consumer SaaS applications to Enterprise 2.0 applications, we now see some kind of mashing up of data from different applications. When two applications connect with each other through the user’s browser, how can they be sure there is no man in the middle waiting to grab the data or inject “bad” data, or a malware-infected browser capturing important data and sending it into “bad hands”? Since mashups happen at the application layer, there is no easy way to authenticate and establish trust. SSL doesn’t help in multi-party situations because, by definition, it is supposed to prevent multi-party situations like man-in-the-middle attacks. Moreover, SSL works just above the TCP layer and cannot help much in the case of mashups (security gurus, feel free to point out any situation where SSL could be tapped to meet these mashup needs, but I haven’t come across one).

    To solve this problem, we could go ahead and develop a new protocol and standardize it, but that is time-consuming. In this era of fast adoption of such technologies by users, especially enterprises, there is a need for an alternative solution. The solution should be:

    • Simple, with no need for complex new cryptographic techniques or protocols, since such new technologies delay adoption (trust is not something that can be gained quickly)
    • RESTful, so that it is lightweight and can sit on top of ubiquitous HTTP
    • Free of any changes to the browser, because that would also delay adoption
    • More importantly, able to scale well in this cloud-based world
    • Definitely open and, preferably, under one of the OSI-approved licenses

    Enter MashSSL, backed by an alliance formed in November 2009 by a consortium of leading technology companies: the leading SSL certificate vendors Comodo, DigiCert, Entrust and VeriSign; providers of security technology and services Arcot, Cenzic, ChosenSecurity, Denim Group, OneHealthPort, QuoVadis, SafeMashups and Venafi; and leading security research institutions, including the Institute for Cyber Security at UTSA, the MIT Kerberos Consortium and Secure Business Austria, along with noted security experts.

    MashSSL is a new multi-party protocol built expressly on top of SSL so that it can take advantage of the trust SSL already enjoys. It is based on a unique insight: deliberately introduce trusted man-in-the-middle entities that may manipulate the messages, but have the effect of those manipulations eventually cancel out, so that the two applications always receive exactly the data they would have received in the absence of such intermediaries. This whitepaper explains it very well with some neat case studies.

    MashSSL was first developed by a company called SafeMashups; it has now become an open specification with an open source reference implementation and is in the process of being standardized. Essentially, MashSSL repurposes SSL to create a secure application-layer pipe through which open protocols like OAuth, OpenID and OpenAJAX, as well as proprietary applications like payment-provider interfaces, can flow in a more secure fashion while leveraging the existing trust and credential infrastructure.

    As I concluded in one of my recent posts,

    With Web 2.0 and SaaS, we are mostly seeing adoption by geeks and pundits. There is no widespread adoption from mainstream consumers yet and only a small segment of businesses are using them. With more and more adoption of these technologies, such attacks are only going to increase. If these providers don’t have the security (infrastructure, application, people, etc) correct, we are going to see large scale attacks and chaos.

    Mashup security will become crucial with further adoption in both the consumer and enterprise spaces. Especially in the case of enterprises, where critical data is mashed up to gain valuable business intelligence, security between the various data sources and/or applications becomes one of the most important issues. This issue should be giving CIOs and CSOs nightmares. With further tweaking of the MashSSL proposal, and with standardization, they could mitigate a big chunk of the risks involved.

    PS: This is my attempt to simplify a complex security issue for our readers. If I have missed any crucial information, feel free to jump in and add your comments. This post was motivated by a note posted by Christofer Hoff on his blog.


  • Multi-Currency – The Maelstrom Continues

     

    Disclosure – one of my specialties is accounting software. I quote half a dozen vendors in the post below, some of whom I have past or current consulting relationships with. Full disclosure of all my industry affiliations can be seen here.

    A little while ago I published a post discussing the recent introduction of multi-currency functionality by several accounting software vendors. My post received a quick and somewhat defensive reply penned by David Turner from FinancialForce. In his reply, Turner is fairly critical of other SaaS players, alleging that:

    it’s dawned on them that even SMBs need to handle more than one currency in today’s globalized market….it could be dangerous to treat vital functions such as multicurrency as an afterthought

    Without going into the technical details of Turner’s argument, suffice it to say he brought up a few examples of where what he deemed to be poorly executed multi-currency functionality could cause problems for organizations. In his reply I believe Turner ignored context – the companies I originally wrote about are servicing the micro to small market, not big business. As such, their customers desire visibility above all and are prepared to forgo 100% accuracy in order to gain that visibility at a price point far below that of Turner’s company’s offering.

    I replied to Turner’s post, commenting that:

    I guess there is some context to consider here. FAC (and FreshBooks for that matter) are aimed at micro to small businesses – for them it’s all about giving visibility of their position – that’s more important than to-the-nth-degree accuracy… Which brings me back to a conversation I was having with someone today who mentioned the apocryphal accountant who spends all day in pursuit of a rogue 22 cents missing from the ledger – at the end of the day he’d found the 22 cents but missed the entire point of “value add” in the process.

    While I have to admit my comment was a little flippant, and foreign exchange discrepancies should not be completely ignored, I still contend that there is a context issue in all of this.
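    To make those foreign exchange discrepancies concrete, here is a minimal, purely hypothetical illustration (the currencies and rates are made up for the example): an invoice raised in one currency and settled after the rate has moved produces a small realized gain or loss, which is exactly the kind of detail the simpler tools gloss over and the more complete suites track explicitly.

    ```python
    # Hypothetical example: a USD-based business invoices a client 1,000 EUR.
    # The exchange rates below are illustrative only, not real market data.
    invoice_eur = 1000.00
    rate_at_invoice = 1.47   # USD per EUR when the invoice is raised (assumed)
    rate_at_payment = 1.43   # USD per EUR when the payment arrives (assumed)

    booked_usd = invoice_eur * rate_at_invoice    # revenue recorded in the ledger
    received_usd = invoice_eur * rate_at_payment  # cash actually received
    realized_fx_loss = booked_usd - received_usd  # the "discrepancy" in question

    print(f"Booked ${booked_usd:.2f}, received ${received_usd:.2f}, "
          f"realized FX loss ${realized_fx_loss:.2f}")
    ```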

    Deciding to elicit the help of other accounting vendors, I reached out to a number of people to get their opinions on the exchange. I was particularly interested to talk to the traditional desktop accounting vendors – servicing many millions of customers, with either little or no multi-currency functionality. Julian Smith, General Manager of MYOB (an on-premise vendor with over a million users), saw multi-currency as less than critical:

    the multi-currency feature is really only relevant to a minority of SMEs – our savvier users are avoiding any currency issues with their online businesses, by using their payment gateway to do the work for them… and they account in one currency…

    It’s important to note that QuickBooks only just introduced multi-currency capability – and managed to get to around five million customers without it – so the criticality or otherwise of multi-currency is debatable, a contention that Heather Villa, CEO of IAC-EZ and a CPA herself, concurred with:

    for the majority of small business the ability to say invoice and accept payment in multiple currency, may be a beneficial feature, but not one I see high on demand. As an accountant I currently have a client base of over 200 clients, and only 3 of them invoice in multi currency

    That said, most vendors agree that multi-currency, while still relevant to only a small number of SMBs, is becoming more important. From Daniel Druker, SVP at Intacct:

    more and more and smaller and smaller firms are transacting in multiple currencies – globalization and the internet is pushing down the barriers to working globally – both from a client and a vendor perspective – so more and more firms will benefit from some level of multi-currency capabilities.

    Interestingly MYOB’s perspective is that multi-currency introduces an inappropriate level of complexity into accounting products to be used by SMBs and is best handled by their accountants:

    There are significant complexities in multicurrency and for users who struggle with the fundamental principles of accounting, adding exchange gains and losses etc can be too much, many users therefore rely on their accountants assistance in this area. Effective usage of multi-currency solutions required significant financial and accounting experience to manage properly – regardless of the solution used. If foreign currency conversion  is a big part of business, often the client will pre-purchase currency notes to get a more favourable exchange rate, thereby making currency conversion data integration (from XE.com or others) inappropriate for their business.

    Which, at least to a certain extent, conflicts with their statement that:

    Despite the complexities of multi-currency processes, their management and integration into MYOB’s desktop and SaaS solutions is a significant part of our R&D roadmap into the future.

    And from Rod Drury, CEO of Xero:

    …SMBs that do need multi-currency is still a large and significant market possibly with a higher affinity for modern solutions that improve productivity.

    Ed Molyneux, CEO of FreeAgentCentral:

    We believe there’s a small but important minority of our potential users at the micro (<10employees) end of the market who need multi-currency invoicing (and a somewhat smaller number who also need multi-currency banking)

    But it’s still about context – the complexity of the solution needs to be tailored for the needs of the organization. Druker again:

    multi-currency capabilities need to be matched to the size and capabilities of the target buyer… if you have simple requirements simple solutions can be very helpful… it’s very silly to ascribe the most complex requirements to very small businesses – they value simplicity and ease of use…Larger business have more complex requirements – they need automation to be efficient, they need flexibility to match their business process, they need exceptions and overrides for when non-standard things happen. But most importantly they are larger, their numbers are more material, they have people on staff who care about being precise, they have audit requirements

    And from Molyneux (quoting many of the assertions made in Turner’s critical post):

    our users couldn’t care less about ‘GL clutter’, ‘control of unrealized and realized losses and gains being in their hands’, and our software being ‘multi-everything international from the ground up’

    And one more from Drury:

    We want to give the business owner the information they need to make decisions, not think about accounting

    SO… where does that leave us at the end of the debate? Clearly in a position of needing to understand the world the users of this software work in – micro businesses are in a daily struggle for survival and to manage their time – and tools that give them a degree of visibility into their financial position, regardless of the absolute accuracy of that view, will win in the end.

    I believe that the multi-currency functionality rolled out by FreshBooks, Xero and FreeAgentCentral over the past few months is appropriately designed for their target customers – and that, after all, is what is important.


  • Thinking About Security Is Old School? – A Dangerous Trend

     

    Recently, I was listening to a podcast in which analysts were debating public and private clouds. During the course of the discussion, one of the participants, a SaaS vendor, made a comment that disturbed me a bit. I…

    Read more
