• Protect Your Facebook Privacy

     
    I have been critical of Facebook’s privacy policies on this blog. In my opinion, Facebook has really gone rogue. It treats its users with the same disdain Microsoft showed back when it had monopoly-like power in the market. Having said that, the movement to delete Facebook accounts is, at best, hysterical. I think we should take a more pragmatic approach than emotionally deleting our accounts in protest. Speaking for myself, I am not going to delete my account even though I am terribly upset about Facebook’s attitude. My entire family and most of my friends are on it, and I cannot afford to delete my Facebook account (at least at this time). I am more inclined to take a pragmatic approach: continue to voice my protest through Facebook and other online forums, and take steps to protect my privacy as much as I can.
    Now, there is an option. The good folks at ReclaimPrivacy.org have released a bookmarklet that helps you tighten your privacy settings on Facebook. It is a scanner that inspects your Facebook privacy settings and warns you about settings that might be unexpectedly public. All you have to do is drag the bookmarklet to your browser toolbar, log into your Facebook account and click the bookmarklet you just installed. It scans your settings and tells you whether your privacy is in good shape; if not, it gives you an option to fix it with a click. A pretty nifty tool for people like me who want to stay on Facebook but maintain some level of privacy.
    I wish these folks had put up a nice “About” section giving some background on the effort. It would have given more confidence to non-technical users (and there are more than 450 million of them on Facebook) who want to use the tool to fix their Facebook privacy issues. Technical users can see that it is an open source tool with the source code available, which is good enough to assure us that we can trust it. Anyhow, if you are worried about the privacy of your Facebook account, I strongly urge you to use this tool to fix it.
    Disclaimer: This is a publicly available open source tool which I used to fix the privacy issues in my own account. Your mileage may vary, and I am not responsible for anything that happens through your use of the tool. As a blogger, I am writing about a tool that could potentially help millions of Facebook users. Check out the Techmeme discussion about this tool.

    Read more

  • Living In The Clouds – iPad Edition

     

    It has been some time since I wrote the last post in the Living In The Cloud series. With an iPad in my hand, I thought it was time to revisit the series and talk briefly about how the iPad fits into a life on the clouds. When the iPad was announced, I wrote a post…

    Read more

  • Facebook May Not Care About Your Privacy But It Definitely Cares About Your Security

     
    Picture Courtesy: Allfacebook.com
    Facebook may not give a damn about your privacy, and it may have gone rogue, but it is serious about ensuring the security of your account. Today, it announced steps to prevent unauthorized access to your Facebook account. Even though the effort is highly laudable, it feels somewhat hypocritical without a strong respect for users’ privacy.
    According to a Facebook blog post, the two significant safety measures are:
    • Login notifications: You will be notified instantly by email and, optionally, by SMS whenever your account is accessed from a new device. You will be asked to register all the devices you use, giving each one a name. When you log in from a new device, you will be asked to name it, and a notification is immediately sent to the email address and mobile phone on file (as per your settings).
    • Blocking suspicious logins: When the system detects suspicious login activity, it asks the user to answer some simple questions that only the real user should be able to answer (like their birth date or the name of a friend in a photo). You are allowed to log in after correctly identifying yourself. There is also an option to review your login history and reset your password if something suspicious is found.
    The first option is a no-brainer, but there could be some issues with the second one. First, it could get annoying if you are someone who logs in from many different places, including libraries, friends’ machines, etc. Second, and most importantly for networkers like Robert Scoble, if you log in from an unknown location and Facebook’s system deems it suspicious, the system could ask you to identify a Facebook friend’s photograph, and there is a high likelihood that you will not know the name of that person among several thousand “friends”, at which point your account will get locked. It is not clear if it locks you out completely or still allows access from trusted devices. Even though it is a good measure, it could get annoying at times from a convenience perspective.

    Read more

  • PaaS Is The Future Of The Cloud Services: Heroku Is Ready To Be There

     
    Image via CrunchBase

    This is the third post in the PaaS is the future of cloud services series, albeit a long overdue one. The focus of the series is to highlight the fact that PaaS, not IaaS, is going to play the more important role in the future of cloud services because of the value it brings to any organization interested in embracing cloud computing. In my last post, I highlighted how VMware recognizes this future and partnered with Salesforce.com to offer the VMforce platform. Venture capitalists are also seeing such a PaaSy future, and it was evident from the news that came out on Monday. Heroku, a Ruby-based PaaS provider, has secured $10 million in Series B funding led by Ignition, along with existing investors Redpoint Ventures, Baseline Ventures, and Harrison Metal Capital. This is going to push this Y Combinator kid further into the PaaS game and position it as a leading player in that PaaSy future. They already have more than 60,000 apps deployed on their platform, and the additional funding will help them expand further. In this post, I am going to dig a bit deeper into Heroku’s offerings.

    Simple Workflow: Heroku greatly simplifies the life of developers, making it easy for anyone to create, deploy and manage apps on the cloud. Let me briefly discuss how it is done and then dig a little into their architecture. With Heroku, creating an app is very simple. For example,

    $ sudo gem install heroku        # install the Heroku client gem

    $ heroku create yourappname      # create the app; Heroku prints its URL and git repository

    Created http://yourappname.heroku.com/ | git@heroku.com:yourappname.git

    Once created, the deployment process is the same Git workflow many developers are already familiar with. Deployment is simply a push to the app’s repository inside Heroku and, bingo, the app is ready. Everything else the app needs can be managed through their API.
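
    A rough sketch of that push-to-deploy step (assuming the app is already a local Git repository and that the “heroku” remote is the one the create command sets up):

    $ git add .
    $ git commit -m "first Heroku deploy"
    $ git push heroku master         # pushing the master branch triggers the deploy
    $ heroku open                    # open the deployed app in the browser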

    Simplified Architecture: PaaS is exciting because it completely abstracts away all the underlying complexity for developers. They simply don’t have to worry about failing hardware, network issues, scaling, etc. That is exactly what Heroku offers. They abstract away everything from the hardware to the virtual machines to the middleware to the framework. Developers just write the code, push it into the Heroku platform and watch it scale based on how exciting their app is for users. It can’t get any easier than that. As is the case with any *aaS offering, the Heroku platform is a multi-tenant platform offering high performance and scalability. Let me now briefly discuss the underlying nuts and bolts of the Heroku platform. Even though the advantage of PaaS is not having to worry about those nuts and bolts, it is important to understand the underlying dynamics before investing in a platform service.

    From the application user’s point of view, the platform has an entry layer comprising an HTTP reverse proxy, an HTTP cache and a routing mesh. The HTTP reverse proxy handles HTTP-level processing, such as SSL and gzip compression, before requests are passed on to the HTTP cache. Most modern web applications have a caching mechanism built in to deliver much higher performance. As a request comes into the cache, the cache responds immediately if it already has the content; if not, it passes the request on to the routing mesh, which balances the load by routing requests intelligently to the available computing resources. Those computing resources include a highly scalable dyno grid, scalable databases and a memory-based caching system.
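
    A simple way to poke at that entry layer (just an illustration; the exact headers Heroku’s cache and proxy add are not documented here) is to inspect the response headers of a deployed app. Standard HTTP caching headers such as Age or Via, when present, can hint at whether the cache or a dyno answered the request:

    $ curl -I http://yourappname.heroku.com/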

    From the application developer’s point of view, when they push code into the Heroku platform, it is compiled into what is called a slug, a self-contained, read-only version of the app including all the required libraries. This compiled code is then run on dynos; a dyno is a single process running the Ruby code on a server in the Heroku grid. It is similar to a Mongrel application server, but it can run across multiple servers. The servers in the Heroku grid are Amazon EC2 instances running in Amazon datacenters. Based on the traffic to the app, developers can increase the number of dynos to scale. Since the app is already compiled into a ready-to-run “container”, a dyno can be started in about two seconds, allowing seamless scaling. Each dyno contains the app along with the framework (Rails or Sinatra, depending on whether it is a full app or a lightweight one), middleware, an app server, the Ruby VM and a POSIX environment. By taking the POSIX route, they layer the battle-tested Unix permissions system on top of a security model running inside the Ruby VM. This ensures that apps run in a secure environment, completely isolated from other tenants in the system.
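
    The scaling mentioned above is a one-line operation from the developer’s side. On the Heroku client gem of that era the commands looked roughly like the following (recalled from memory; exact syntax may differ between client versions):

    $ heroku dynos 4        # run the web app on four dynos
    $ heroku workers 2      # scale background workers, if the app uses any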

    Simple Extensibility: The high point of the Heroku platform is its add-ons offering. These are third-party solutions like Amazon RDS, WebSolr, Memcache, etc., along with value-added features like cron processes, SSL, etc., which developers can easily integrate into their apps. I spoke with both an add-on provider, Northscale (which offers Memcache), and a developer. The Northscale folks told me it was extremely easy for them to plug their service into the Heroku platform. They said the platform offered them faster access to the market without worrying about nitty-gritty tasks like billing, reporting, etc.; they just had to send a small file containing some configuration information. As simple as that. On the developer side, any of the add-ons can be added with a few clicks: select the “size” of the add-on, pay with a credit card, and the add-on is ready to be integrated into the app with a slight modification of the code. The add-on marketplace is going to be one of the biggest factors in Heroku’s success. In fact, for any PaaS offering to be successful, it is important that the platform is extensible the way Heroku’s is. In future posts in this series, I am going to dig deeper into the extensibility of these platforms and into some of the service providers offering their services through such add-on marketplaces.
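
    On the command line, attaching an add-on was meant to be just as simple. The sketch below uses a placeholder add-on name, since actual names, sizes and billing steps depend on the provider:

    $ heroku addons                          # list add-ons already attached to this app
    $ heroku addons:add some-addon-name      # attach an add-on (placeholder name)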

    Conclusion: Heroku has set a bar for the kind of platform services that can be offered in the marketplace. Of course, not everyone likes such a high level of abstraction, and Heroku competitors like Engine Yard are filling those gaps (I will write about Engine Yard in this space one day). But by making app development child’s play, Heroku has pushed the envelope further. Google App Engine and Microsoft Azure are doing the same (well, Microsoft is planning to open up VM-level access, so it may not entirely fit the Heroku mold), but Heroku has the momentum. It will be interesting to watch Heroku’s evolution now that they have the necessary financial backing.


    Read more

  • Why I think Facebook Has Gone Rogue

     
    Picture Courtesy: Sparked.biz
    The last few days have been filled with blog posts about Facebook going rogue. Many pundits are upset that Facebook treats users’ privacy with complete disdain. There are a few who feel that Facebook has done nothing wrong and that those who don’t like the new policies should leave and go elsewhere; after all, we live in a world dominated by free markets, and there is nothing stopping us from moving elsewhere. This post is not about Facebook’s privacy issues but about that point of view, that users who don’t like Facebook’s privacy policies should simply go elsewhere. I am going to argue that this reasoning doesn’t apply in the case of Facebook and offer two compelling reasons why.
    To begin with, let us take a moment to understand the philosophy that drives these folks’ arguments. Their argument is not complete nonsense; it is the core of how we do business in a free market system. In a free market, a product vendor or service provider offers value to users in the form of a product (say, traditional shrink-wrapped software) or a service (say, SaaS or other cloud services). Users pay for these products or services either with their money or with their eyeballs (in the case of advertising-supported products and services). In short, the vendor or provider creates the full value for the users in exchange for their money or attention. If users don’t like the product or service, they are free to leave and use another one that fits their needs. The only requirement on the vendor’s side is support for data portability standards that let users take their data with them to another product or service. This model fits very well with the argument that if someone doesn’t like a service they are free to move elsewhere, precisely because the value is entirely created by the vendor or provider and the user is just a user. The user doesn’t add any value to the vendor’s platform or product except, maybe, some word-of-mouth marketing.
    In the case of Facebook, it is my opinion that this kind of argument does not apply. I strongly feel that Facebook has gone rogue, and I offer the following two arguments to counter those who support Facebook’s right to change the terms any way it likes.
    • Unlike other products and services we use in the real world, where the vendor is solely responsible for value creation, social networks belong to a completely different category. Even though the company behind a social network is solely responsible for creating the underlying platform, the value of the platform comes from what is known as the network effect. Without users, the platform has no value, even if it is technologically very sophisticated and innovative. In the case of the Facebook platform, there is absolutely no value unless a large number of users join the service. This very fact pushes almost every user to do their part in bringing their friends and family into the service; a user with no friends on Facebook sees no value in the platform. Therefore, the value in the Facebook platform (for that matter, any social networking platform) is added by the users who join the service. At this point, I am pretty sure many people will want to counter me by arguing that the value added by users is similar to the licensing fees, subscriptions or attention given by users of other services. Nope, they are not the same. Facebook monetizes the platform with ads: not only do users offer their attention to this monetization process, they also bring in more eyeballs to monetize. There is no equivalent of the network effect in the traditional product and service category. And no, word-of-mouth marketing by the users of those products and services is not equal to the network effect in social networking platforms. Word-of-mouth marketing only helps the vendor or provider, whereas the network effect in social networking platforms has users adding more value to the platform even as it brings more eyeballs to the vendor’s monetization strategy. This is clearly unique to social-networking-style services, and it is exactly why we cannot apply the argument “if you don’t like a provider’s service, you are free to go to another one” to social networking services. If we go out of our way to rationalize that argument, the criticism that Facebook is exploiting its users also holds true. In short, I have added quite a bit of value to the Facebook platform through my participation alone, and asking me to leave if I don’t like the newly introduced terms is not reasonable.
    • All of us here at CloudAve advise potential SaaS and other cloud service users to check whether the service provider offers data portability options; otherwise, we warn them to look for another service that does. It is our belief that complete ownership of users’ data rests with the user, and users should be able to take their data out of a service if they intend to leave at any time. It is the same with the Facebook platform. Any data I create, whether it is a photo I post, a wall message I leave on my friend’s wall or a personal message I send to a friend, is my own property. This also includes the information my friends willingly share with me. If I have to leave the service for any reason, I should be able to take this data with me in an open format. Unfortunately, data portability is not fully supported by Facebook. There is no way for my friends to tell me whether I can take their information with me when I leave the service, and even if they are willing, there is no way for me to take it. They are my friends and I should be able to take them with me when I leave. A real-world example highlights this point: I could take my friends to a bar for regular chats, and if I no longer want to go to that bar and instead want to go to a coffee shop, I should be able to take my friends and the information we shared along with me. If the bar owner told me I could not take my friends or the information we shared to the coffee shop, it would be considered ridiculous. It is the same with online social networks; they have no good reason not to support data portability in full, and yet Facebook does not. Under such a scenario, asking me to leave the service now is not valid. I have spent my valuable time storing data and forming relationships on that platform, and in exchange for offering me a platform to socialize, Facebook has monetized my eyeballs as well as my friends’ eyeballs. Asking me to leave my data behind and go elsewhere is no different from someone putting a gun to my head and asking me to leave my wallet behind. The data is my property, and the platform cannot refuse to let me take it with me when I leave. In the absence of any option for complete data portability, the argument that one should leave the service if one cannot agree to the newly introduced terms is not a reasonable argument.
    I am sure there are many who will disagree with me. I would like to hear from you about your stance on the issue and whether or not you agree with me.
    Update: Inside Facebook has a detailed analysis of Facebook’s privacy issues for anyone who is interested in digging deeper.

    Read more

  • Know Thy Art Of Defence

     

    When we talk about cloud computing and security in the same sentence, we immediately think about infrastructure security, and a debate kicks off around the topic. Yes, infrastructure security is important, and it is the headache of the IaaS provider. But as a developer running a web app on top of IaaS, or a startup building a SaaS application on the cloud or on a managed provider’s infrastructure, one also needs to worry about application security. In this post, I will briefly discuss the state of application security in the cloud and introduce an interesting company in this space, Art of Defence.

    In fact, it is my gut feeling that many startups offering web 2.0-ish applications or SaaS applications are completely ignoring application security, and it is just a matter of time before things blow up in their faces. The IBM X-Force annual report for 2008 clearly showed an 8x increase in the number of web application vulnerabilities (an exponential increase from 2004 to 2008), and at the end of 2008, 74% of those vulnerabilities were left unpatched. Most users have absolutely no idea what is in store for them when the web applications they use are severely vulnerable. It is like a bomb waiting to explode, and the cost of any attack exploiting these vulnerabilities could be devastating to both the vendors and their users.

    In the traditional web application hosting era, we used web application firewalls, which add a layer around the web server, fending off attacks based on the rules we add to the firewall’s configuration. ModSecurity is one such example for the Apache web server, and in my previous avatar as a system admin, I used ModSecurity extensively to fend off attacks on PHP scripts running on our servers. Such web application firewalls served the purpose to a reasonable extent, protecting web applications from attacks exploiting known and, sometimes, unknown vulnerabilities (via just-in-time rules).
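
    To give a flavor of what such rules look like, here is a minimal ModSecurity-style sketch (not a production rule set; real deployments rely on curated rule sets rather than one-off rules like this):

    # In the Apache configuration, with the ModSecurity module loaded
    SecRuleEngine On
    # Reject any request whose parameters match a crude SQL injection pattern
    SecRule ARGS "union\s+select" "t:lowercase,deny,status:403,log,msg:'possible SQL injection'"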

    As web applications moved from the traditional deployment model to a SaaS model, things got pretty complex. For one, cloud providers host more than one tenant on a single piece of hardware, so the traditional web application firewall approach does not work here. Not only are these firewalls tied to the hardware, adding to the complexity, they also consume quite a bit of resources, which makes them a poor fit for a cloud-based scenario. A better way would be to build the security measures into the applications themselves so that security scales along with the cloud, but that is not happening anytime soon, and we need a different kind of solution to handle this requirement. Enter the dWAF, the distributed web application firewall. A dWAF comes in the form of a plugin, or even as a SaaS service, and integrates seamlessly with many cloud environments. These firewalls detect vulnerabilities and protect against attacks in a seamless way without consuming many resources.

    Art of Defence, founded in Germany with a recently opened office in San Francisco, has done great work developing such a distributed firewall. Its flagship product, the distributed web application firewall (dWAF) Hyperguard, offers comprehensive application security for the cloud era. They have partnered with Amazon Web Services and GoGrid to offer the firewall as a SaaS: AWS customers can access the Hyperguard SaaS by simply adding a small software plugin to an existing web server Amazon Machine Image (AMI), or by using Art of Defence’s custom AMI, and GoGrid customers can do the same.

    Hyperguard has three components:

    • The Enforcer, a small plugin that can be plugged into a web server, a network firewall or a load balancer. The Enforcer sends request and response data to a component called the Decider and also modifies requests and responses if needed. Essentially, the Enforcer is the adapter through which Hyperguard gets the data it needs to enforce policy.
    • The Decider, the core policy engine, which receives the request data from the Enforcer, decides what to do and returns a verdict.
    • The admin interface, the UI that lets administrators set policies, monitor traffic and track alerts.

    Art of Defence has recently partnered with Santa Clara-based WhiteHat Security, a company that helps businesses with website risk management and compliance. With this partnership, Art of Defence’s Hyperguard is tightly integrated with the WhiteHat Sentinel website vulnerability management service. Art of Defence used WhiteHat Security’s open XML API to enable Hyperguard to transform WhiteHat Sentinel’s verified website vulnerability assessment results into viable rule-set suggestions for Hyperguard’s security policy management. Now, companies that use both solutions will be able to take advantage of “virtual patching” functionality and mitigate website vulnerabilities quickly, limiting exposure to exploits.

    Some of the top folks at Art of Defence are also heavily involved in the Cloud Security Alliance’s efforts to promote cloud security best practices. They played a major role in the application security part (Domain 10) of the Security Guidance for Critical Areas of Focus in Cloud Computing report. It is an interesting company to keep a tab on for anyone who follows cloud security closely.


    Read more

  • Gone Google or Google Gone?

     
    Google launched its Google Apps campaign, called Gone Google, with much fanfare. Now there seems to be some backlash against the offering. There were rumors that Yale is delaying its switch to Google Apps due to security concerns, and a Techcrunch post covered concerns raised by City of Los Angeles bureaucrats about their Google Apps deployment. Today, Information Week has an exclusive article about the University of California, Davis decision to end its Google pilot, citing privacy and security fears.
    Many faculty “expressed concerns that our campus’s commitment to protecting the privacy of their communications is not demonstrated by Google and that the appropriate safeguards are neither in place at this time nor planned for in the near future,” the letter said

    Along with concerns about storing their data with third-party providers, UC Davis officials also pointed to the University of California Electronic Communications Policy.

    This is interesting on two fronts: one is Google’s effectiveness in luring customers to its online office suite offerings, and the second is the very idea of convincing users to put their data on third-party servers. It clearly highlights the work cut out for us, the cloud computing evangelists, and for the vendors. On one side, it is important for vendors to go all out to ensure the highest security levels possible in their offerings; on the other, evangelists like us should take it upon ourselves to convince people about cloud-based services. We need to do a better job of educating users that things are not all that bad in the cloud world, and some regulations may even need an overhaul to keep up with advances in technology.

    Read more

  • VMOps Reboots As Cloud.com

     
    Regular readers of CloudAve know that I am a sucker for federated clouds and that I have been pushing the idea of regional clouds hard. In this context, I have been talking about enablers like VMOps and regional cloud providers like ReliaCloud and ScaleUp Technologies. Today, VMOps announced that it is rebranding as Cloud.com, along with a few other interesting announcements. The name VMOps has been confusing to many people; initially, I also thought they were somehow affiliated with VMware. I guess others, including their potential customers, had the same problem, and they decided to rebrand. Cloud.com should be a great name for any company in the cloud computing space, but I am not sure it really solves the confusion.
    Let me first summarize their recent announcements including the ones made today and then dig a little deeper to get a better perspective.
    • First and foremost, they are now Cloud.com
    • Second, but most important from my perspective, their core platform is now open source
    • They have secured another round of funding worth $11 million, with Index Ventures leading the Series B round. The total funding to date is $17.6 million.
    • Version 2 of their platform is out and is more powerful than the previous version. The platform now has comprehensive support for major cloud APIs, including the Amazon Web Services API, Citrix Cloud Center™ (C3) and VMware’s vCloud initiative
    • Many more customers have been added since the last time I spoke with them
    From my point of view, the two announcements that interest me the most are the release of the new version of their platform and its release under an open source license. I will dig a bit deeper into these two themes and explain why they interest me. I am going to harp on the theme of federated clouds again and then argue how this announcement plays a significant role in the emergence of such an ecosystem.
    Their platform, now called CloudStack and currently at version 2, helps in the deployment, management and configuration of multi-tier, multi-tenant private and public cloud services. The CloudStack platform offers an easy way to provision virtual machines and scale up and down instantaneously, much like many other cloud platforms. Some of the significant features of the CloudStack platform are:
    • Easy provisioning of virtual machines of any “compute size”, with complete automation of the distribution of compute, network and storage while adhering to defined policies on load balancing, data security and compliance. Their management tool helps in defining, metering, deploying and managing services to be consumed within the existing cloud or IT infrastructure
    • An integrated billing system that will help public providers with billing of their customers and private cloud users with chargeback
    • It is easily integrated with Amazon and works well with Citrix Cloud Center API and vCloud API
    • The fact that it is open source means that anyone can add functionalities to suit their existing IT environment
    • They have done a good job on the security front too. They offer isolation at the network level, which could come in handy for any organization that has to deal with regulatory issues
    • Their built in reporting system makes billing and compliance easy for both service providers and enterprises
    Even though we only heard about service provider deployments during their initial stages of existence (ReliaCloud, and Cloud Central in Australia), they are now focusing on the enterprise side too. The CloudStack platform comes in three flavors:
    • Cloud.com CloudStack Platform Community Edition
    • Cloud.com CloudStack Platform Enterprise Edition
    • Cloud.com CloudStack Platform Service Provider Edition
    The Community Edition is the open source core platform, which is free to download. It is released under the GPLv3 license and is available for download from this site. The Enterprise and Service Provider Editions have some proprietary components and come with subscription-based support. The Service Provider Edition will help public cloud providers offer Amazon Web Services-like services without any big financial or labor investment; it includes core management functions like end-user self-service administration, management, cloud administration, billing and reporting. The Enterprise Edition has similar features but is more suitable for building a private cloud inside the enterprise.
    The other interesting part of the announcements, to me, was the release of the open source edition. This is a win-win for both Cloud.com and the users of the platform. Cloud.com gets an opportunity to tap into the distributed world of open source talent, and the open licensing model will help them get contributions from users. The world is full of users with diverse needs; just a handful of big cloud providers cannot satisfy them all, and the “my way or the highway” approach of the big providers may not be palatable to many. Our world is diverse, and any computing ecosystem that satisfies its needs will be a diverse one. Clearly, we are heading towards a future with a federated ecosystem of diverse cloud providers, and releasing the core platform as open source is a clever way for Cloud.com to gain traction in such a marketplace.
    There are a few people who are skeptical about the idea of “regional clouds” because no regional provider can scale like Amazon or Google. While it is true that no single regional player can match that scale, an open federated ecosystem means they can achieve scale by coming together on top of an open platform. Let me try to explain with an example. When I wrote about ScaleUp Technologies, I talked about their partnership with XSeed Co. Ltd., a Japan-based cloud provider that, like ScaleUp, is built on top of 3tera’s Applogic platform. Because both clouds run on the same platform, ScaleUp can let its customers tap into XSeed’s infrastructure (and vice versa) right from their own UI, giving end users extra scale and geographical redundancy. This is a perfect example of a federated cloud ecosystem in action, albeit a small one.
    When we have clouds built on top of open platforms with open standards, interfaces and formats, we can have an open federated cloud ecosystem that scales as well as the big players like Amazon and Google and offers even better geographical redundancy. Such an ecosystem can support not just the diverse needs of end users but also the regulatory requirements put forward by their governments. In this regard, the open sourcing of the CloudStack platform is significant and could play a role in establishing the open federated cloud ecosystem I am dreaming of. Plus, it is a very good marketing tool for them. Clearly, the market is skewed towards some of the big infrastructure service providers, and there is heavy competition in the private cloud market too. Open source makes it easier for them to gain significant traction in this competitive landscape.
    The folks at VMOps, oops, Cloud.com were successful in keeping my mouth shut for a long time with their embargo, and I finally got a chance to write about what I think of their move. I think it is a great move and could play a major role in establishing an open federated cloud ecosystem. This post has already become too long, and I didn’t get a chance to write about how this move could have major implications for the private cloud enterprise market. Well, I guess I have to keep that for another day.

    Read more

  • Riptano, Cloudera For Cassandra

     

    Riptano, a new company launched recently, can be considered the Cloudera of the Cassandra project. The company was started by two ex-Rackspace employees (disclaimer: Rackspace’s Email Division is a client of Diversity Analysis) to provide support services for Cassandra, much like Cloudera was started to offer support services for Apache Hadoop. When I wrote about Cloudera just before its launch, I compared it to Redhat:

    Cloudera is planning to do for Cloud Computing what Redhat did for Linux more than a decade back. Redhat took the Open Source Linux operating system, repackaged it and offered it along with paid technical support. They were essentially making money out of a free software (as in beer) by using what was a new and innovative business model at that time. Enterprises were skeptical about Linux till then and Redhat’s model helped in a faster adoption of Linux by the enterprises. Enterprise adoption of Cloud Computing is in the same situation where Linux was more than a decade ago.

    In fact, Riptano is taking a similar approach to Cloudera’s, catering to the needs of businesses that are willing to pay for support from people who know the nuts and bolts of the open source software. Similar to Cloudera, Riptano also plans to offer some proprietary components around Cassandra.

    Cassandra is now under Apache as the Apache Cassandra project. Cassandra was originally developed by Facebook, following the distributed design of Amazon’s Dynamo and a data model similar to Google’s Bigtable. In 2008, Facebook open sourced the software, and it is currently under the Apache Software Foundation. Of late, Rackspace has become an enthusiastic supporter of the Cassandra project and has three of the project’s most active committers on its payroll. Some of the well-known users of Cassandra include Digg, Facebook, Twitter, Reddit, Rackspace, Cloudkick, Cisco, SimpleGeo, Ooyala, OpenX, and many others.
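
    For readers unfamiliar with that Bigtable-like data model, here is a rough sketch using the command-line client bundled with the 0.6-era releases (the keyspace and column family names are the samples from the default configuration; exact syntax varies by version):

    $ bin/cassandra-cli --host localhost --port 9160
    cassandra> set Keyspace1.Standard2['jsmith']['first'] = 'John'
    cassandra> set Keyspace1.Standard2['jsmith']['last'] = 'Smith'
    cassandra> get Keyspace1.Standard2['jsmith']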

    The minds behind Riptano are two former Rackers and active Cassandra committers, Jonathan Ellis and Matt Pfeil. They founded the company with support from Rackspace. Cassandra is very durable, and the interest among enterprises stems from its scalability, support for multiple datacenters and Hadoop analytics.

    The start of Riptano does not mean a fork of Cassandra. According to the founders, they see no need to fork because the mainline development team is very active, so there is no need for a third party to fork it for additional development. However, they might offer a custom distribution of Cassandra, like what Cloudera has done with Hadoop. They also plan to offer some proprietary tools that extend Cassandra’s functionality.

    According to Charles Babcock of Information Week, Riptano will offer training and technical support for Cassandra at three levels.

    Riptano will offer training in Cassandra, consulting and technical support, said Pfeil, summing up the new company’s business plan. Support will come in Bronze at $1,000 a year per node, Silver at $2,000 a year per node or Gold at $4,000 a year per node. Cassandra typically runs on a server cluster and Cassandra clusters can be expanded to an unlimited number of nodes, according to current users.

    The difference between bronze and gold is a 48-hour response time versus a four-hour response.

    This is a good strategy from their point of view. Let us see how they are received in the market; I will keep a tab on the company and report back after some time.

    Update: Apologies for spelling the name wrong. I have corrected it. 


    Read more

  • Video: Can Cloud And On Premise Storage Co-Exist

     
    From time to time, we showcase videos presenting different vendors’ perspectives on cloud computing. As part of this tradition, today we offer the perspective of i365, a Seagate Company. If the embedded video is not available for you, use this link.


    Read more
