A month or so ago I moderated an interesting roundtable on focus.com that looked at the API economy. Along with panelists Sam Ramji from Apigee, Mike Maney from Alcatel-Lucent and Delyn Simons from Mashery, we talked a bunch about the risks and rewards and where the value lies within API marketplaces. If you haven’t heard the roundtable, you can do so here.

Off the back of that conversation I was approached by 3scale, another API management provider. 3scale wanted to discuss what they see as some issues with API management and, in particular, the flexibility that they believe customers really want.

The crux of 3scale’s argument is that, whereas other vendors approach API management with a proxy, routing all of the API provider’s traffic through their own platform, 3scale is a proxy-less player. What this means is that API providers integrate with 3scale using plugins based on 3scale’s own API. In this way 3scale believes it delivers more flexibility and scalability than other API management platforms.
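To make the distinction concrete, here is a minimal sketch of the proxy-less pattern: the provider’s own servers handle the API traffic, and only lightweight authorize/report calls go out to the management layer. Everything here is hypothetical for illustration — `ManagementClient` is a stand-in, not 3scale’s actual API.

```python
class ManagementClient:
    """Stand-in for a remote API management service (hypothetical)."""
    def __init__(self, valid_keys):
        self.valid_keys = set(valid_keys)
        self.usage = {}

    def authorize(self, api_key):
        # In a real plugin this would be a small HTTP call, often cached locally.
        return api_key in self.valid_keys

    def report(self, api_key, hits=1):
        # Usage reporting can happen asynchronously, off the request path.
        self.usage[api_key] = self.usage.get(api_key, 0) + hits


def handle_request(mgmt, api_key, payload):
    """Provider-side handler: the API traffic itself never leaves the provider's stack."""
    if not mgmt.authorize(api_key):
        return {"status": 403, "body": "invalid key"}
    mgmt.report(api_key)
    # The provider's own business logic runs here, on its own infrastructure.
    return {"status": 200, "body": payload.upper()}
```

In the proxy model, by contrast, `handle_request` itself would live on the vendor’s platform and every request and response would make the round trip through it.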

That’s a valid argument, and one which 3scale could use against their competitors. Until today…

You see, yesterday Apigee announced some changes that answer the concerns about a proxy service. As detailed on ReadWriteWeb, the API Delivery Network addresses issues around both response time and availability – while still maintaining the advantages of a proxy service. Apigee is promising:

  • A 10x decrease in API response time
  • Improved API reliability and consistency, with infrastructure availability raised to 99.99%
  • Better scalability at a much lower cost

Steven Willmott from 3scale raised some concerns about traffic still going through Apigee’s infrastructure for 100% of clients’ content – that seems a little bit of a red herring coming from a company that, after all, is all about abstracting non-core functions away from organizations.

Anyway – I took the opportunity to sit down and interview Sam Ramji from Apigee about the announcement – see the video below:


Ben Kepes

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.

3 Comments
  • Good post Ben and great to get some discussion going. First of all props to Apigee for the product release – always great to see new things coming out and new services becoming available.

    However, as you might imagine we don’t see things quite as you put them 🙂 – 3scale’s architecture allows our customers to connect APIs to infrastructure wherever they are, and traffic reaches their infrastructure directly with no additional roundtrips to anywhere. That provides a lot of flexibility – including integrating with existing CDNs and caches, or serving content from multiple data centers customers already have.

    Apigee’s new product certainly helps take some of the sting out of a centralized cloud proxy model, but it still retains many of the problems: cost, inflexibility in what is monitored/tracked, data privacy, and deep dependence on a single provider. (For example, compare what solutions we’re able to offer for free, even at high traffic volumes, with cloud proxy based solutions – simply because the cost math is so different.) Abstracting and moving to the cloud is certainly a key goal of many API deployments, but this is a far cry from routing all traffic via a 3rd party.

    Again – props to the Apigee team for the release, and it’s great to see solutions evolving. There won’t be a one-size-fits-all solution for a long time, so it’s great to see different options available!

  • Ben, thanks for covering the API Management space and hosting this conversation.  What Apigee announced yesterday we couldn’t agree with more.  Mashery has been deployed in this fashion from the beginning, and it’s one of the primary reasons we designed our service from the ground up to be a cloud-based content distribution system rather than an in-house appliance like others have done.  Even before Amazon Web Services was in more than one geography, we ran on AWS as well as our private global network of POPs, which helped our customers distribute their API content quickly and with more availability than you can accomplish with a single-homed data center serving content. This strategy is what allowed us to grow as quickly as we did.

    Some details and recommendations based on our 4 years of experience running in this fashion.

    1) Speed improvement is absolutely there for cacheable content.  
    Simple truth. Network latency is usually the biggest hurdle when looking at performance of an API.  If you can get your content close to the requesting application, you can achieve the 10x performance improvement that we have offered our customers (e.g. New York Times and USA Today) and what Apigee can start to offer now that they are out of the datacenter.  This simple hurdle is what the biggest CDNs built their businesses on after all.  

    2) Most APIs are only marginally cacheable.  
    You can’t cache a write request, obviously.  Some, like Fred Wilson, even think an API without the ability to write isn’t an API at all (http://www.avc.com/a_vc/2009/11/apis-in-the-late-afternoon.html).  User-specific read requests such as social media streams or private account data might be cacheable, but the speed benefits of caching are far less apparent than for open static data.  So when speed is king, an API Management service also needs to be in lots of locations, or in the datacenter, so as not to add too much latency to what are likely the most valuable aspects of an API.

    3) Reliability and availability of an API is a function of many things.  
    a) Cache your content. If you have a read only API and you can cache your content out to the edge, awesome.  Your backend content server can fail but you can still be serving responses.  API Management companies with caching ability help you here.  Of course, an API Provider could try to build their own network of cache servers, but that seems like a distraction from focusing on their core business.  After all, very few companies try to build their own website CDN.
    b) Build resiliency into your API servers.  Again, cacheable content in an API is only a subset of most people’s API. In the end, you still must consider your origin servers.  You will want to do the right thing with redundancy in your data centers and possibly even geographic redundancy if it’s really important.  
    c) If you choose to work with an API Management service provider, you need to make sure they can support your deployment topology, and that they have their own SLA around availability with the ability to hot fail over between regions.  In other words, if one of the provider’s data centers fails, route to a secondary data center on the fly.  If an API Management service region fails, a la Amazon Web Services East a couple of weeks ago, all of the API Provider’s configuration, policies, developer partner keys, etc. need to be available across the rest of their network so your API continues to operate.
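The cacheability distinction in points 1 and 2 above can be sketched in a few lines. This is a toy model, not any vendor’s implementation: a tiny edge cache that only helps anonymous GETs, while writes and per-user reads always go back to origin.

```python
import time

class EdgeCache:
    """Toy edge cache: only public, anonymous GETs are cacheable."""
    def __init__(self, ttl=60):
        self.ttl = ttl
        self.store = {}        # url -> (expires_at, body)
        self.origin_hits = 0   # how often the backend actually gets hit

    def origin(self, method, url, user=None):
        # Stand-in for the API provider's backend servers.
        self.origin_hits += 1
        return f"origin response for {method} {url}"

    def request(self, method, url, user=None):
        cacheable = method == "GET" and user is None
        if cacheable and url in self.store:
            expires_at, body = self.store[url]
            if time.time() < expires_at:
                return body  # served from the edge; origin untouched
        body = self.origin(method, url, user=user)
        if cacheable:
            self.store[url] = (time.time() + self.ttl, body)
        return body
```

Repeated anonymous GETs hit the edge and the origin stays idle (the 10x case), while every POST or user-specific read still pays the full trip to origin, which is why latency to the datacenter still matters for the most valuable parts of an API.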

    Again, we couldn’t agree more with what Apigee launched yesterday.  It’s an important part of what any API Management company needs to provide. It should continue to be an interesting space to watch for people interested in the cloud and the trend towards connected devices and APIs. 
