• Apigee And Heroku Partner To Make App Development Child's Play

     
    Apigee, the free API tools platform from Sonoa Systems (see our previous coverage of Apigee here and here), today partnered with Heroku, the Ruby PaaS provider (see our previous coverage of Heroku here and here), to offer easy access to the Twitter platform.

    Apigee has been providing API tools for developers with a focus on high performance, simplicity, security and superior analytics. At the Chirp conference more than a month back, Apigee announced a tie-up with Twitter to simplify access to the Twitter API with enhanced rate limits and better performance. This made it easy for Twitter app developers to access the Twitter API without having to read through pages and pages of documentation.

    We all know that Heroku is on a mission to simplify application deployment by offering a platform service that makes it super easy for anyone to deploy their application with a few clicks. Their Ruby-based platform is very attractive for developers who build apps for social services like Twitter, Facebook, etc.

    A marriage between these two companies is quite natural and can greatly empower developers. In my posts on Heroku, I had mentioned how Heroku’s add-on architecture is removing one of the objections developers have towards PaaS. In another post, I wrote about Northscale and how easy it is for Heroku developers to use memcached with their applications. Today, Apigee announced the “Apigee for Twitter Heroku addon”, which will allow Heroku developers to quickly access the Twitter API through Apigee with a few clicks, very much like the other add-ons.

    Now Ruby developers on the Heroku platform can develop Twitter apps with enhanced API rate limits, OAuth authentication, etc. This is a win-win for Twitter app developers. On one hand, they can take advantage of the multi-tenant, highly scalable Heroku platform for their app and, on the other hand, they can take advantage of the simplified access Apigee offers to the Twitter platform, with some enhanced features that are otherwise unavailable to them.
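    Conceptually, the proxied access works by swapping Twitter's API host for an Apigee endpoint while leaving the rest of the request untouched. Here is a minimal Ruby sketch of that idea; the host name and environment variable below are hypothetical placeholders, not the add-on's actual values:

```ruby
require "uri"

# Rewrite a Twitter API URL to go through an Apigee proxy endpoint.
# "twitterapi.apigee.com" and the env var name are assumed placeholders --
# the real add-on would supply the actual endpoint to the app's config.
APIGEE_HOST = ENV.fetch("APIGEE_TWITTER_API_HOST", "twitterapi.apigee.com")

def via_apigee(twitter_url)
  uri = URI.parse(twitter_url)
  uri.host = APIGEE_HOST   # same path and query string, different host
  uri.to_s
end

puts via_apigee("https://api.twitter.com/1/statuses/user_timeline.json?screen_name=cloudave")
```

    The app's existing Twitter client code stays the same; only the endpoint it talks to changes, which is why the integration can be a few clicks.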

    Platforms are the future of cloud services and APIs are the oxygen for that future. Heroku and Apigee are getting it right to be major players in the PaaSy future.

    CloudAve is exclusively sponsored by


  • Relevance Of Open Source In A Cloud Based World: Boto And Google Storage

     
    Picture credit: seduc.pa.gov.br
    The Clouderati are still busy debating whether open source has any relevance in the cloud based world. This video is one such debate which you might be interested in watching. As I have said in the past, there are two schools of thought. One, led by folks like Tim O’Reilly, advocates the importance of open architecture, open formats, etc. over licensing, while the other insists that open source is equally important. Even though open source, by itself, cannot prevent vendor lock-in in some cases, it is equally important in ensuring an open federated cloud ecosystem. In fact, I would even argue that open source is extremely important in accelerating the evolution of the open federated cloud ecosystem some of us are fantasizing about. Having said that, I am going to take an example of open source software and argue that it plays a significant role in creating an interoperable cloud ecosystem.
    Last week during the Google I/O event, Google announced Google Storage for Developers, a RESTful storage service for developers to store their data on the highly replicated Google infrastructure. In addition, they announced gsutil, an open source command-line tool to store, share and manage data.
    Using this RESTful API, developers can easily connect their applications to fast, reliable storage replicated across several US data centers. The service offers multiple authentication methods, SSL support and convenient access controls for sharing with individuals and groups. It is highly scalable – supporting read-after-write data consistency, objects of hundreds of gigabytes in size per request, and a domain-scoped namespace. In addition, developers can manage their storage from a web-based interface and use gsutil, an open-source command-line tool and library.
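    The URL-style identifiers mentioned here are easy to picture: the scheme selects the storage provider, and the rest names the bucket and object. A rough sketch of such a parser (in Ruby for consistency with the rest of this series; gsutil itself is Python built on Boto):

```ruby
# Minimal sketch of URL-style storage identifiers: the scheme picks the
# provider, the remainder names the bucket and the object within it.
def parse_storage_url(url)
  m = url.match(%r{\A(gs|s3)://([^/]+)(?:/(.*))?\z})
  raise ArgumentError, "not a storage URL: #{url}" unless m
  { provider: m[1] == "gs" ? "Google Storage" : "Amazon S3",
    bucket: m[2],
    object: m[3] }
end

p parse_storage_url("gs://mybucket/photos/cat.jpg")
p parse_storage_url("s3://mybucket/photos/cat.jpg")
```

    Because the provider is just a scheme prefix, adding another storage service to such a tool is a matter of teaching it one more scheme — which is exactly the extensibility argument made below.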
    Apart from the impact of this announcement on the market and speculations about Google’s motives, there is something that is more interesting here and it has been highlighted by Mitch Garnaat in this blog post.
    What was even cooler to me personally was that gsutil leverages boto for API-level communication with S3 and GS.  In addition, Google engineers have extended boto with a higher-level abstraction of storage services that implements the URL-style identifiers.  The command line tools are then built on top of this layer.
    As an open source developer, it is very satisfying when other developers use your code to do something interesting and this is certainly no exception.
    With hardware under the complete control of proprietary vendors, and source code having less direct relevance when open source software is hosted on third party servers without any need to share back to the community, our expectations of the role of open source should also change. We cannot use our worldview from the desktop world and then argue that open source is irrelevant in a cloud based world. When there is a complete shift in how we use computing resources, we should see open source’s role through a different lens than the one we used during the desktop era. As seen in the above example of Google tapping into the Boto project to develop gsutil, the open source licensing of Boto was the main reason Google could develop software that lets users access files in Amazon S3, Google Storage or even their own storage system using URL-style identifiers. What are the chances of Google extending a proprietary tool built for Amazon services to work with their own, thus leading to interoperability between Google and Amazon storage services? If you listen to Mitch Garnaat, it is possible to extend Boto to other storage services too. The fact that it is an open source tool makes it highly likely that someone with a need (itch) will extend Boto/gsutil to other services in the future. With a proprietary license, there is no chance for a developer with a need to extend the tool to meet it.
    In short, open source still is important even in the cloud based world. In spite of all the arguments given against the relevance of open source by pundits, I don’t see it going anywhere. If anything, it only goes on to accelerate the trend towards an interoperable federated cloud ecosystem. Having said that, we need to see the role of open source from a completely different perspective than its role in the desktop dominated world.

  • PaaS Is The Future Of Cloud Services: Northscale Wants To Help Extend Platform Services

     
    This is the 4th post in the PaaS is the future of Cloud Services series. In this post, I am going to talk about the importance of extensibility for any platform and highlight one of the vendors in the space, Northscale. Yesterday Northscale announced Series B funding worth $10M led by Mayfield Fund. Series A investors included Accel Partners and NorthBridge Venture Partners. They also announced the addition of industry veteran Bob Wiederhold as their CEO. I got a chance to talk to Northscale during the Under the Radar event and, again, over a telephone call. I am pretty impressed with what they are doing. In this post, I will talk a little bit about the importance of extensibility in any platform service and then dig a bit into Northscale and what they do to extend some of the well known Platform Services vendors.

    Some of the biggest concerns about platform services are possible vendor lock-in and a lack of options to meet the diverse needs of customers. The former can be solved by making the platform open in the sense of open protocols, support for open formats, etc. The latter problem can be solved by making the platform extensible. The world is diverse and its needs are diverse too. It is impossible for a single vendor to satisfy the diverse needs of the marketplace. In fact, if any vendor tries to do it, it is foolish in my opinion. Any smart vendor will make their platform extensible so that third party developers can build various products/services around the platform. Even in the traditional software world, extensibility was very important for the success of a platform.

    In the world of cloud computing, IaaS has the maximum extensibility and SaaS has the least; PaaS lies somewhere in between. In fact, the biggest attraction of IaaS in the early days of cloud computing was mostly the fine grained control developers get in customizing the platform stack. But the flip side of this is the burden on developers to manage automatic scaling of the platform and to ensure its security. This made PaaS more attractive, but developers wanted more extensible platforms that could cater to their varying needs. Vendors like Heroku and Engine Yard jumped in to fill the gap. Heroku offers what are called “Add-ons” and Engine Yard offers something called “extensible configurations”. These are extensible platforms that allow developers to customize the platform to match their application needs.

    Such extensible platforms encourage third party developers and other vendors to offer services around the platform, leading to a vibrant marketplace with a wide variety of products and services. Some of these marketplaces are centralized and others are somewhat decentralized, without the platform vendor forcing their hands on the developers. In the case of Heroku, they made it very easy for third party vendors to offer services around their platform. Any developer or vendor wanting to offer their services to Heroku customers can easily hook up with the platform by sending a small configuration file. In fact, Heroku handles everything else including billing, thereby making it very easy for third party developers. From the users’ side, they can sign up for the services and use them in their applications with a few clicks. Successful platforms are the ones that make extensibility part of their core DNA, making it easy for developers and users.

    Extensible platforms are just one part of the story. There should be a vibrant ecosystem around the platform to make it really useful. For platform services to be successful, these developers/vendors are important. They are the ones who offer services that are otherwise unavailable to developers. The role of such providers becomes all the more important for the PaaSy future we have been talking about. Northscale is one such vendor, offering memcached-based data management solutions that help developers scale their applications seamlessly without much effort. Even though their core business is virtual appliance based distribution of memcached with some enhanced features, their offering around the Heroku platform is essentially memcached as a service. Heroku platform users can buy slices of their service based on their application needs and pay only for what they use, in typical cloud computing style.

    Let us take a look at why Northscale’s technology is important in today’s world and how it is core to the success of platform services. In the era of web apps and SaaS, the amount of data produced and stored increases at an exponential rate. In the traditional world, the need for additional data resources was handled in a scale-up manner, and RDBMS played a crucial role in handling all this data. For the kind of data we are dealing with in today’s world, the traditional scale-up approach will not work; we need a scale-out approach to handling data. RDBMS fails big time at scaling out. As an alternative, NoSQL gained steam offering scale-out solutions to the data management problems we face today. However, organizations that are deep-rooted in the RDBMS world are skeptical about taking the plunge into the NoSQL approach because jumping directly into alternative approaches to data management is a discontinuity. Ideally, these organizations would want to take a step-by-step approach to the transition. This is where Northscale comes in. They offer solutions for organizations to take a gradual path from the erstwhile scale-up technologies to the newer scale-out technologies.
     
    Northscale solves this problem by offering a seamless, stepwise path from this starting point to an alternative database model that scales out, thereby matching the scaling strategy employed at the application layer of a modern web application, using the well tested memcached solution. Northscale’s Memcached Server, a directly-addressed, distributed (scale-out), in-memory, key-value cache, can be used with existing RDBMS implementations, caching frequently used data and thereby reducing the number of queries a database server must perform for web servers delivering a web/SaaS application. In fact, according to Northscale, their memcached offering is the only solution that offers a non-disruptive move to a full featured scale-out data management solution.
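    The read path being described is the classic cache-aside pattern: check the cache first, and only fall through to the database on a miss. A toy Ruby sketch, with a plain Hash standing in for the memcached client (a real app would use a client gem such as dalli with the same get/set pattern):

```ruby
# Cache-aside sketch: consult the cache before hitting the RDBMS.
# A plain Hash stands in for memcached here; the lambda stands in for
# a database query. Both are simplifications for illustration.
class CachedStore
  attr_reader :hits, :misses

  def initialize(db)
    @db = db          # stand-in for the relational database query
    @cache = {}       # stand-in for the memcached client
    @hits = @misses = 0
  end

  def fetch(key)
    if @cache.key?(key)
      @hits += 1
      @cache[key]                  # served from cache, no DB query
    else
      @misses += 1
      @cache[key] = @db.call(key)  # query the DB, cache for next time
    end
  end
end

store = CachedStore.new(->(key) { "row-for-#{key}" })  # fake DB lookup
store.fetch(42)   # miss: goes to the database
store.fetch(42)   # hit: served from cache
```

    Because only the read path changes, the RDBMS stays in place untouched — which is the "non-disruptive, gradual" transition Northscale is pitching.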

    As platform services gain steam and as more and more businesses start using PaaS for their application needs, they will find solutions like Northscale’s very useful. By making their platform extensible, PaaS providers can let application developers use technology like the one offered by Northscale. Even though they are only offering their services for the Heroku platform right now, they are open to doing it for other platforms in the future. They are ready and waiting to help extend the platform services of the future.

  • PaaS Is The Future Of Cloud Services: Heroku Is Ready To Be There

     
    Image via CrunchBase

    This is the third post in the PaaS is the future of cloud services series, but a long overdue one. The focus of the series is to highlight the fact that PaaS, not IaaS, is going to play an important role in the future of cloud services because of the value it brings to any organization interested in embracing cloud computing. In my last post, I highlighted how VMware realizes this future and partnered with Salesforce.com to offer the VMforce platform. Venture capitalists are also seeing such a PaaSy future, and it was evident from the news that came out on Monday. Heroku, a Ruby based PaaS provider, has secured $10 Million in Series B funding led by Ignition along with existing investors Redpoint Ventures, Baseline Ventures, and Harrison Metal Capital. This is going to push this Y-Combinator kid further into the PaaS game and position them as a leading player in the PaaSy future. They already have more than 60,000 apps deployed on their platform and the additional funding will help them expand further. In this post, I am going to dig a bit deeper into Heroku’s offerings.

    Simple Workflow: Heroku greatly simplifies the life of developers, making it easy for anyone to create, deploy and manage apps on the cloud. Let me briefly discuss how it is done and then dig a little into their architecture. With Heroku, creating an app is very simple. For example,

    $ sudo gem install heroku

    $ heroku create yourappname

    Created http://yourappname.heroku.com/ | git@heroku.com:yourappname.git

    $ git push heroku master

    Once created, the deployment process is the same as what many developers are already familiar with: a Git workflow. Deployment is a push to the repo inside Heroku and, bingo, the app is ready. Everything else needed for the app can be managed through their API.

    Simplified Architecture: PaaS is exciting because it completely abstracts away all the underlying complexity for developers. They just don’t have to worry about failing hardware, network issues, scaling, etc. That is exactly what Heroku offers developers. They abstract away everything from the hardware to virtual machines to middleware to framework. Developers have to write the code, push it into the Heroku platform and watch it scale based on how exciting their app is for users. It can’t get any easier than that for developers. As is the case with any aaS offering, the Heroku platform is a multi-tenant platform offering high performance and scalability. Let me now discuss the underlying nuts and bolts of the Heroku platform briefly. Even though the advantage of PaaS is not having to worry about the underlying nuts and bolts, it is important that we understand the underlying dynamics before investing in a platform service.

    From the application users’ point of view, the platform has an entry layer comprising an HTTP reverse proxy, an HTTP cache and a routing mesh. The HTTP reverse proxy handles HTTP-level processing such as SSL and gzip compression before requests are passed on to the HTTP cache. Most modern web applications have a caching mechanism built in to offer much higher performance. As a request comes into the cache, it responds immediately if it already has the content cached. If not, it passes the request on to the routing mesh, which balances requests by routing them intelligently to the available computing resources. The computing resources include a highly scalable dyno grid, scalable databases and a memory based caching system.
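    A toy model of that entry layer may help make the flow concrete: a cache that answers immediately on a hit, and a routing mesh that otherwise round-robins the request across available dynos. This is a conceptual sketch only (a real HTTP cache honors cache-control headers and expiry, which this ignores, and Heroku's actual routing is more sophisticated):

```ruby
# Toy entry layer: cache hit -> immediate response; cache miss ->
# round-robin route to a dyno, then cache the rendered result.
class EntryLayer
  def initialize(dynos)
    @cache = {}     # stand-in for the HTTP cache
    @dynos = dynos  # stand-ins for dynos (callables that render a path)
    @next = 0
  end

  def handle(path)
    return @cache[path] if @cache.key?(path)  # cache hit: immediate
    dyno = @dynos[@next % @dynos.size]        # routing mesh: round-robin
    @next += 1
    @cache[path] = dyno.call(path)            # dyno renders; cache it
  end
end

layer = EntryLayer.new([->(p) { "dyno1:#{p}" }, ->(p) { "dyno2:#{p}" }])
layer.handle("/home")   # routed to a dyno, then cached
layer.handle("/home")   # served from the cache, no dyno involved
```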

    From the application developers’ point of view, when they push code into the Heroku platform, it compiles the code into what is called a slug, a self-contained, read-only version of the app including all the required libraries. This compiled code is then run on dynos, each a single process running the Ruby code on the servers in the Heroku grid. A dyno is similar to the Mongrel application server, but it can run on multiple servers. The servers in the Heroku grid are Amazon EC2 instances running in Amazon datacenters. Based on the traffic for the app, developers can increase the number of dynos to scale. Since the app is already compiled into ready-to-run “containers”, dynos can be started in 2 seconds, allowing seamless scaling. Each dyno contains the app along with the framework (Rails or Sinatra, depending on whether it is a full app or a lightweight app), middleware, app server, Ruby VM and a POSIX environment. By taking the POSIX route, they are layering the battle tested Unix permissions system over a security model running inside the Ruby VM. This ensures that apps run in a secure environment completely isolated from other tenants in the system.

    Simple Extensibility: The high point of the Heroku platform is their Add-ons offering. These are third party solutions like Amazon RDS, WebSolr, Memcache, etc. and other value added functionalities like Cron processes, SSL, etc., which developers can easily integrate into their apps. I spoke with both an add-on provider, Northscale, offering Memcache, and a developer. The Northscale folks told me that it was extremely easy for them to plug their services into the Heroku platform. They said the Heroku platform offered them faster access to the market without worrying about nitty-gritty tasks like billing, reporting, etc. They just have to send a small file containing some config information. As simple as that. From the developers’ side, they can add any of the add-ons with a few clicks. They have to select the “size” of the add-on, pay with a credit card, and the add-on is ready to be integrated into their app with a slight modification of their code. The add-on marketplace is going to be one of the biggest factors in Heroku’s success. In fact, for any PaaS offering to be successful, it is important that the platform is extensible like Heroku’s. In some future posts in this series, I am going to dig deeper into the extensibility of platforms and also into some of the service providers offering their services through such add-on marketplaces.
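    To illustrate how small that "small file containing some config information" can be, here is a made-up add-on manifest built and printed in Ruby. The field names and values are hypothetical, invented for illustration — they are not Heroku's actual manifest schema:

```ruby
require "json"

# Illustrative only: a made-up add-on manifest of the kind a provider
# might send Heroku. Every field name here is a hypothetical example,
# not the platform's real schema.
manifest = {
  "id"    => "northscale-memcached",
  "plans" => [
    { "name" => "100mb", "price_cents" => 0 },
    { "name" => "1gb",   "price_cents" => 2000 }
  ],
  "api"   => { "provision_url" => "https://example.com/heroku/provision" }
}

puts JSON.pretty_generate(manifest)
```

    The point is the shape of the integration: the provider declares its plans and a provisioning endpoint, and the platform takes care of signup, billing and reporting around it.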

    Conclusion: Heroku has set a bar for the kind of platform services that can be offered in the marketplace. Of course, not everyone likes such high levels of abstraction. Heroku competitors like Engine Yard are filling those gaps (I will write about Engine Yard in this space one day). But by making app development child’s play, Heroku has pushed the envelope further. Google App Engine and Microsoft Azure (well, Microsoft is planning to open up VM level access and it may not entirely fit in the mold of Heroku) are doing the same, but Heroku has the momentum. It will be interesting to see the evolution of Heroku now that they have the necessary financial backing.


  • Why I think Facebook Has Gone Rogue

     
    Picture Courtesy: Sparked.biz
    The last few days were filled with blog posts about Facebook going rogue. Many pundits are upset that Facebook treats users’ privacy with complete disdain. There are a few who feel that Facebook has done nothing wrong and that those who don’t like the new policies should leave and go elsewhere. After all, we live in a world dominated by free markets and there is nothing stopping us from making a move elsewhere. This post is not about Facebook’s privacy issues but about the point of view that users who don’t like Facebook’s privacy policies should go elsewhere. I am going to argue that this argument doesn’t apply in the case of Facebook and offer two compelling reasons to highlight why.
    To begin with, let us take a moment and understand the philosophy that drives the arguments of these folks. Their argument is not complete nonsense. It is the core of how we do business in a free market system. In a free market system, a product vendor or service provider offers value to users in the form of a product (say, traditional shrink wrapped software) or a service (say, SaaS or other cloud services). The users pay for these products/services either with their money or with eyeballs (in the case of advertisement supported products and services). In short, the vendor/provider creates the full value for the users in exchange for their money/attention. If a user doesn’t like the product or service, they are free to leave and use another product/service that fits their needs. The only requirement from the user side is that the vendor/provider support data portability standards that help them take their data with them to another product/service. This model fits very well with the argument that if someone doesn’t like a service, they are free to move elsewhere, precisely because the value is entirely created by the vendor/provider and the user is just a user. The user doesn’t add any value to the vendor’s platform or product except, maybe, some word of mouth marketing.
    In the case of Facebook, it is my opinion that the above kind of arguments will not apply. I strongly feel that Facebook has gone rogue and I offer the following two arguments to counter those who support Facebook’s rights to change the terms the way they like.
    • Unlike the other products and services we use in the real world, where the vendor is solely responsible for value creation, social networks belong to a completely different category. Even though the company behind a social network is solely responsible for the creation of the underlying platform, the value of the social networking platform is added by what is known as the network effect. Without users, the platform is of no value even if it is technologically very sophisticated and innovative. In the case of the Facebook platform, there is absolutely no value to the platform unless a large number of users join the service. This very fact forces almost every user to do their part in bringing their friends and family into the service. A user with no friends on the Facebook platform sees no value in the platform. Therefore, the value in the Facebook platform (for that matter, any social networking platform) is added by the users who join the service. At this point, I am pretty sure many people would want to counter me with the argument that the value added by users is similar to the licensing fees or subscription or attention given by users of other services. Nope. They are not one and the same. Facebook monetizes the platform with ads. Not only do users offer their attention for this monetization process, they are bringing in more eyeballs to monetize. There is no equivalent to the network effect in the traditional product/service category. No, word of mouth marketing by users of those products/services is not equal to the network effect in social networking platforms. Word of mouth marketing only helps the vendors/providers of the product/service, whereas the network effect in social networking platforms helps users add more value to the platform even as it brings more eyeballs to the vendor’s monetization strategy. This is clearly unique to social networking kinds of services.
This is exactly the reason why we cannot apply the argument “if you don’t like the service of a provider, you are free to go to another one” to social networking services. If we go out of our way to rationalize this argument, criticisms that Facebook is exploiting its users also hold true. In short, I have added quite a bit of value to the Facebook platform through my participation alone, and asking me to leave if I don’t like their newly introduced terms is not reasonable.
    • All of us here at CloudAve advise potential SaaS/other cloud services users to check if the service provider offers data portability options for their service. Otherwise, we warn them to look for another service that supports data portability. It is our belief that complete ownership of users’ data rests with the user, and they should be able to take it out of a service if they intend to leave the service at any time. It is the same with the Facebook platform. Any data I create, whether it is a photo I post or a message I leave on my friend’s wall or a personal message I send to my friend, is my own property. This also includes the information my friends willingly share with me. If I have to leave the service for any reason, I should be able to take this data out in one of the open formats. Unfortunately, data portability is not fully supported by Facebook. There is no way for my friends to let me know if I can take their information with me when I leave the service, and even if they are willing to let me take their information with me, there is no way for me to do so. They are my friends and I should be able to take them with me when I leave. Let me give a real world example to highlight this point. In the real world, I could take my friends to a bar for regular chats, and if I don’t want to go to that bar anymore and, instead, want to go to a coffee shop, I should be able to take my friends and the information we shared along with me. If the bar owner told me that I cannot take my friends or the information we shared to the coffee shop, it would be considered ridiculous. It is the same with online social networks. They have no reason not to support data portability in full. Yet Facebook doesn’t support complete data portability. Under such a scenario, asking me to leave the service now is not valid. I have spent my valuable time storing data and forming relationships on that platform.
In exchange for offering me a platform to socialize, Facebook has monetized my eyeballs as well as my friends’ eyeballs. Asking me to leave my data behind and go elsewhere now is no different from someone who puts a gun to my head and asks me to leave my wallet with him/her. The data is my property and the platform cannot refuse to allow me to take it with me when I leave. In the absence of any such option for complete data portability, the argument that one should leave the service if they cannot agree to the newly introduced terms is not reasonable.
    I am sure there are many who will disagree with me. I would like to hear from you about your stance on the issue and whether or not you agree with me.
    Update: Inside Facebook has a detailed analysis of Facebook’s privacy issues for anyone who is interested in digging in more.

  • VMOps Reboots As Cloud.com

     
    Regular readers of CloudAve know that I am a sucker for federated clouds and I have been pushing the idea of regional clouds hard. In this context, I have been talking about enablers like VMOps and regional cloud providers like ReliaCloud and ScaleUp Technologies. Today, VMOps announced that they are rebranding as Cloud.com, along with a few other interesting announcements. The name VMOps has been confusing to many people. Initially, I also thought that they were somehow affiliated with VMware. I guess others, including their potential customers, had the same problem, and they decided to rebrand. Cloud.com should be a great name for any company in the cloud computing space, but I am not sure if it really solves the confusion.
    Let me first summarize their recent announcements including the ones made today and then dig a little deeper to get a better perspective.
    • First and foremost, they are now Cloud.com
    • Second, but most important from my perspective, their core platform is now open source
    • They have another round of funding worth $11 Million, with Index Ventures leading the Series B round. The total funding to date is $17.6 Million.
    • Version 2 of their platform is out and is more powerful than the previous version. Their platform now has comprehensive support for major cloud APIs and initiatives, including the Amazon Web Services API, Citrix Cloud Center™ (C3) and VMware’s vCloud
    • Many more customers added since the last time I spoke with them
    From my point of view, the two announcements that interest me the most are the release of their new platform and its release under an open source license. I will dig a bit deeper into these two themes and explain why they interest me the most. I am going to harp on the theme of federated clouds again and then argue how this announcement plays a significant role in the emergence of such an ecosystem.
    Their platform, now called CloudStack and now at version 2, helps in the deployment, management and configuration of multi-tier and multi-tenant private and public cloud services. The CloudStack platform offers an easy way to provision virtual machines and scale up and down instantaneously, like many of the other cloud providers. Some of the significant features of the CloudStack platform are:
    • Easy provisioning of virtual machines of any “compute size”, with complete automation of the distribution of compute, network and storage while adhering to defined policies on load balancing, data security and compliance. Their powerful management tool helps in defining, metering, deploying and managing services to be consumed within the existing cloud or IT infrastructure
    • An integrated billing system that will help public providers with billing of their customers and private cloud users with chargeback
    • It is easily integrated with Amazon and works well with Citrix Cloud Center API and vCloud API
    • The fact that it is open source means that anyone can add functionalities to suit their existing IT environment
    • They have done a good job on the security front too. They offer isolation at the network level, which could come in handy for any organization that has to deal with regulatory issues
    • Their built-in reporting system makes billing and compliance easy for both service providers and enterprises
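    The metering-and-chargeback idea behind the billing and reporting features above can be sketched in a few lines of Ruby. This is a generic illustration with assumed instance sizes and prices, not CloudStack's actual billing API:

```ruby
# Generic usage-metering sketch: record VM instance-hours per tenant
# and compute a chargeback total. The sizes and hourly rates are
# invented for illustration.
class Meter
  RATE_PER_HOUR = { "small" => 0.10, "large" => 0.40 }  # assumed prices

  def initialize
    @usage = Hash.new(0.0)  # (tenant, size) -> accumulated hours
  end

  def record(tenant, size, hours)
    @usage[[tenant, size]] += hours
  end

  def bill(tenant)
    @usage.select { |(t, _), _| t == tenant }
          .sum { |(_, size), hours| RATE_PER_HOUR[size] * hours }
  end
end

m = Meter.new
m.record("acme", "small", 10)  # 10 small-instance hours
m.record("acme", "large", 2)   # 2 large-instance hours
m.bill("acme")                 # 10*0.10 + 2*0.40 dollars
```

    A public provider would turn that total into a customer invoice; an enterprise running a private cloud would use the same numbers for internal chargeback.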
    Even though we only heard about service provider deployments during their initial stages of existence (ReliaCloud and Cloud Central in Australia), they are now focusing on the enterprise side too. Their CloudStack platform comes in three flavors.
    • Cloud.com CloudStack Platform Community Edition
    • Cloud.com CloudStack Platform Enterprise Edition
    • Cloud.com CloudStack Platform Service Provider Edition
    The Community Edition is the open source core platform, which is free to download. It is released under the GPLv3 license and is available for download from this site. The Enterprise and Service Provider Editions have some proprietary components, available with subscription-based support. The Service Provider Edition will help public cloud providers offer an Amazon Web Services-like service without any big financial or labor investment, and includes core management functions like end user self-service, administration, cloud administration, billing and reporting. The Enterprise Edition has similar features but is more suitable for building a private cloud inside the enterprise.
    The other interesting part of the announcement, to me, was the release of their open source edition. This is a win-win for both Cloud.com and the users of the platform. Cloud.com gets an opportunity to tap into the distributed world of open source talent, and the open licensing model will help them get contributions from users. The world is full of users with diverse needs, and just a handful of big cloud providers cannot satisfy them all. The “my way or the highway” approach of the big providers may not be palatable to many. Our world is diverse, and any computing ecosystem that satisfies its needs will be a diverse one. Clearly, we are heading towards a future with a federated ecosystem of diverse cloud providers, and releasing the core platform as open source is a clever way for Cloud.com to gain traction in such a marketplace.
    There are a few people who are skeptical about the idea of “regional clouds” because they can’t scale like Amazon or Google. While it is true that no single regional player can scale like Amazon or Google, an open federated ecosystem implies that they can achieve scale by coming together on an open platform underneath their offerings. Let me try to explain it better. When I wrote a post about ScaleUp Technologies, I talked about their partnership with XSeed Co. Ltd., a Japan-based cloud provider also built on top of 3tera’s Applogic platform. Because both clouds run on Applogic, federation is possible, and the partnership allows ScaleUp to let their customers tap into XSeed’s infrastructure (and vice versa) right from their UI, offering scaling and geographical redundancy to end users. This is a perfect example of a federated cloud ecosystem in action, albeit a small one.
    When we have clouds built on top of open platforms with open standards, interfaces and formats, we can have an open federated cloud ecosystem that scales like the big players such as Amazon and Google, and offers even better geographical redundancy. Such an ecosystem can support not just the diverse needs of end users but also the regulatory requirements put forward by their governments. In this regard, the open sourcing of the CloudStack platform is significant and could play a role in establishing the open federated cloud ecosystem I am dreaming of. Plus, it is a very good marketing tool for them. Clearly, the market is skewed towards some of the big infrastructure service providers, and there is heavy competition in the private cloud market too. Open source makes it easy for Cloud.com to gain significant traction in this competitive landscape.
    The folks at VMOps, oops, Cloud.com were successful in keeping my mouth shut for a long time with their embargo and, finally, I got a chance to write about what I feel about their move. I think it is a great one and could play a major role in establishing an open federated cloud ecosystem. This post has already become too long and I didn’t get a chance to write about how this move could have major implications for the private cloud enterprise market. Well, I guess I will have to keep that for another day.
    CloudAve is exclusively sponsored by


  • Riptano, Cloudera For Cassandra

     

    Riptano, a new company launched recently, can be considered the Cloudera of the Cassandra project. The company was started by two ex-Rackspace employees (disclaimer: Rackspace’s Email Division is a client of Diversity Analysis) to provide support services for Cassandra, much as Cloudera was started to offer support services for Apache Hadoop. When I wrote about Cloudera just before its launch, I compared it to Redhat

    Cloudera is planning to do for Cloud Computing what Redhat did for Linux more than a decade back. Redhat took the open source Linux operating system, repackaged it and offered it along with paid technical support. They were essentially making money out of free software (free as in beer) by using what was, at that time, a new and innovative business model. Enterprises were skeptical about Linux till then, and Redhat’s model helped speed up the adoption of Linux by enterprises. Enterprise adoption of Cloud Computing is in the same situation Linux was in more than a decade ago.

    In fact, Riptano takes an approach similar to Cloudera’s, catering to the needs of businesses that are willing to pay for support from people who know the nuts and bolts of the open source software. Like Cloudera, Riptano also plans to offer some proprietary components for Cassandra.

    Cassandra is now an Apache project, under the name Apache Cassandra. It was originally developed at Facebook, following the distributed design of Amazon’s Dynamo and a data model similar to Google’s Bigtable. In 2008, Facebook open sourced the software, and it is currently under the Apache Software Foundation. Of late, Rackspace has become an enthusiastic supporter of the Cassandra project and has had three of the project’s most active committers on its payroll. Some of the well known users of Cassandra include Digg, Facebook, Twitter, Reddit, Rackspace, Cloudkick, Cisco, SimpleGeo, Ooyala, OpenX, and many others.
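    Cassandra’s Bigtable-style data model — keyspaces containing column families, each row holding columns sorted by name — can be illustrated with plain Python structures. This is a conceptual sketch only (the keyspace, rows and column values are made up for illustration), not a client library talking to a real cluster:

    ```python
    # Conceptual sketch of Cassandra's data model: a keyspace maps column
    # families (like Bigtable tables) to rows, and each row is a set of
    # (name, value) columns kept sorted by column name. A real application
    # would use a Cassandra client against a running cluster instead.

    keyspace = {
        "Users": {                      # column family
            "jellis": {                 # row key
                "name": "Jonathan Ellis",
                "employer": "Riptano",
            },
            "mpfeil": {
                "name": "Matt Pfeil",
                "employer": "Riptano",
            },
        }
    }

    def get_columns(cf, row_key):
        """Return a row's columns sorted by column name, as Cassandra does."""
        row = keyspace[cf].get(row_key, {})
        return sorted(row.items())

    cols = get_columns("Users", "jellis")
    ```

    The sorted-columns-per-row layout is what lets Cassandra serve efficient slice queries over a range of column names within a single row.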

    The minds behind Riptano are two former Rackers and active Cassandra committers, Jonathan Ellis and Matt Pfeil. They founded the company with support from Rackspace. Cassandra is very durable, and the interest among enterprises is due to its scalability, support for multiple datacenters and Hadoop analytics.

    The launch of Riptano doesn’t mean a fork of Cassandra; according to the founders, there is no need to fork because the mainline development team is very active. However, they might offer a custom distribution of Cassandra, much as Cloudera has done with Hadoop. They also plan to offer some proprietary tools that could extend the functionality of Cassandra.

    According to Charles Babcock of InformationWeek, Riptano will offer training and technical support for Cassandra at three levels.

    Riptano will offer training in Cassandra, consulting and technical support, said Pfeil, summing up the new company’s business plan. Support will come in Bronze at $1,000 a year per node, Silver at $2,000 a year per node or Gold at $4,000 a year per node. Cassandra typically runs on a server cluster and Cassandra clusters can be expanded to an unlimited number of nodes, according to current users.

    The difference between bronze and gold is a 48-hour response time versus a four-hour response.
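    Since support is priced per node per year, the cost of a cluster scales linearly with its size. A quick back-of-the-envelope sketch using the tier prices from the quote above (the 10-node cluster is just an illustrative example):

    ```python
    # Per-node annual support prices, from the reported Riptano tiers.
    TIERS = {"bronze": 1000, "silver": 2000, "gold": 4000}

    def annual_support_cost(tier, nodes):
        """Yearly support cost for a cluster of the given size."""
        return TIERS[tier] * nodes

    # e.g. a hypothetical 10-node cluster on Gold support
    cost = annual_support_cost("gold", 10)
    ```

    Because Cassandra clusters can grow to a large number of nodes, the per-node model means the support bill grows with the deployment, which is presumably the point of the pricing strategy.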

    This is a good strategy from their point of view. Let us see how they are received in the market; I will keep tabs on the company and report back after some time.

    Update: Apologies for spelling the name wrong. I have corrected it. 


  • Red Hat Takes Another Step Towards Cloud Computing

     
    Picture Source: Arstechnica
    Redhat, the poster child of open source and maker of the most popular Linux distribution in the enterprise market, took another step into the cloudy future. Redhat recently released version 5.5 of its popular Enterprise Linux distribution and followed it up with an announcement focused mainly on the hybrid nature of enterprise cloud adoption in the immediate future.
    Redhat announced what it calls Red Hat Cloud Access, which allows enterprise customers to use their existing subscriptions on Amazon Web Services, EC2 to be more specific. With this, Amazon Web Services becomes the first Red Hat Premier Certified Cloud Provider. With Red Hat Cloud Access, eligible Redhat customers can move their Red Hat Enterprise Linux subscriptions between traditional on-premise servers and Amazon Web Services. This lets customers select the appropriate computing resources for their needs without the need for new business or support models. It is important to note that not all customers can move their licenses to AWS; only enterprise customers with at least 25 subscriptions are allowed to move back and forth. Check their website for further eligibility requirements.
    Red Hat is also introducing new features designed to continue allowing enterprises to leverage the benefits of Amazon Web Services:
    • The latest versions of Red Hat Enterprise Linux will be available on Amazon EC2 at the same time as the release for traditional on-premise deployments, in an effort to provide consistency between on and off-premise usage. This includes the features in the recently released Red Hat Enterprise Linux 5.5.
    • Standardized, secure 32-bit and 64-bit Red Hat Enterprise Linux images, which include cloud-specific content like image manifests and certificates and are secured using SELinux and firewall protections.
    • Continuous delivery of updates to Red Hat Enterprise Linux within Amazon Web Services, offering delivery of security errata and feature enhancements.
    Well, this has long been expected from Redhat, with Canonical making it damn simple for enterprises to use cloud computing. Unlike Redhat, Canonical doesn’t charge a subscription for its distribution; rather, it charges for support. Such a business model allows Canonical customers (well, there aren’t many like Redhat’s on the enterprise side) to easily tap into Amazon Web Services without worrying about subscription terms. Plus, Canonical has taken a major leap into cloud computing by tightly integrating Eucalyptus into its Ubuntu Enterprise Cloud edition. The rave reviews about UEC have put enormous pressure on Redhat to do something, as more and more enterprises are warming up to cloud computing, both public and private. This easing of subscription terms is an important step in ensuring that enterprises have the necessary flexibility to move from on-premise to cloud. There is a long way to go before Redhat becomes an important player in the cloud market. We will have to wait and see how it shapes up.

  • PaaS Is The Future Of Cloud Services: VMForce – A Marriage Of Convenience

     

    This is the second post in my series titled “PaaS Is The Future Of Cloud Services”. I was planning to write about Heroku but since the VMForce news was a good fit for the topic, I am pushing the…


  • Standards Or Openness?

     
    Lew Moorman, writing on GigaOm (warning: two-week-old post), suggests that openness is more important than standards because it gives more options to consumers.
    Many suggest that standards are the key to encouraging broader adoption of cloud computing. I disagree; I think the key is openness. What’s the difference? In the standards approach, a cloud would look and work just like any other. Open clouds, on the other hand, could come in many different flavors, but they would share one essential feature: all of the services they’d offer could run outside of them.
    I completely agree with his take: if we have to decide between openness and standards, I feel the same way. I would love to open this up for debate and want to hear from both practitioners and users on what they think. Feel free to jump in.
    Disclaimer: Rackspace is a client of Diversity Analysis.
