I had a few comments about my post the other day regarding SaaS, Web 2.0 and the intersection of the two. The comments revolved around finding technical breakthroughs that would bring us to some point of artificial intelligence (AI) or automatic contextualisation, which was seen as the tool for finding the sweet spot for the web and its delivery.

I thought it wise to clarify my views in relation to this.

I need to state, first and foremost, that I am a humanist who sees technology as a tool to enable us to achieve what we desire. In one of my roles I am involved with an organisation that designs, implements and integrates Design (capitalisation intentional) into all aspects of the organisations it works with. Quite simply, Design is defined by that organisation as “Formulating the best-fit service, product or system that serves both consumer and producer”.

This same ethos applies in the technology companies that I help/observe/comment upon. They cannot and must not forget that their service needs to fulfil their customers’ needs at all times. How, you ask, does this relate to AI? Let me answer a question with a question… would IBM ever consider replacing their board or CEO with a few Deep Blue supercomputers connected together to form some sort of quasi-sentient being? Answer: no. Reason: human context. I contend that it is not practically possible (and I am sure Falafulu will disagree with me here!) to replace the human experience, context, emotion and being with technology.

This, I believe, is the chasm that truly successful IT companies have to bridge – the chasm between technical feasibility and human requirement.

Google is a lesson in bridging this chasm – their development is based around human fulfilment, not technical possibility. Sure, they are slowly moving society closer towards the technical possibilities, but meanwhile the technical possibilities themselves keep moving on – thus the chasm remains.

There is thus a model to be built that plots SaaS, Web 2.0, human context and technical possibility on four different axes. The intersection (or point of closest fit) of all of these is nirvana – it’s where great, as opposed to merely good, businesses will be built and it’s where paradigms will be shifted.

Ben Kepes

Ben Kepes is a technology evangelist, an investor, a commentator and a business adviser. Ben covers the convergence of technology, mobile, ubiquity and agility, all enabled by the Cloud. His areas of interest extend to enterprise software, software integration, financial/accounting software, platforms and infrastructure as well as articulating technology simply for everyday users.

5 Comments
  • Falafulu Fisi |

    Ben, I will make a comment later on this subject of AI as I’ve got to run. If you like what Google offers you, then you already love AI.

  • Falafulu Fisi |

    I am gonna make a series of comments on this topic, since it is Friday night and I am not sure how many Steinlagers I can consume in front of my computer screen before my concentration on the topic of discussion deteriorates.

    Ben said…
    “Formulating the best-fit service, product or system that serves both consumer and producer”. This same ethos applies in the technology companies that I help/observe/comment upon. They cannot and must not forget that their service needs to fulfil their customers’ needs at all times. How, you ask, does this relate to AI?

    Not everything can be done via AI. There are some tasks that suit the traditional procedural computing paradigm, and some (especially the complex & difficult ones) that require declarative paradigms (goal seeking), which is how AI is supposed to work: state the goal, and it doesn’t matter how one reaches it, as long as the goal is achieved. So, if adopting the declarative paradigm would improve performance on some task over the procedural way, then the obvious choice is to flip to the declarative model. One would lose one’s competitive advantage by not doing so. And in the world of business today, competition in every sector is so fierce that no vendor wants to get left behind, watching competitors who have adopted the technology sail further & further away.
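
    To make that concrete, here is a minimal Python sketch of the two styles; the loan task, the figures and the function names are all invented for illustration. The procedural version spells out how a loan balance evolves step by step; the declarative version just states the goal (a payment that clears the balance) and hands the “how” to a generic solver.

        # Procedural style: spell out *how* the balance evolves, step by step.
        def balance_after(payment, principal=10_000.0, rate=0.01, months=36):
            balance = principal
            for _ in range(months):
                balance = balance * (1 + rate) - payment
            return balance

        # Declarative style: state *what* we want (a final balance of zero)
        # and let a generic root-finding solver figure out how to get there.
        def solve_goal(goal, lo, hi, tol=1e-6):
            """Bisection: find x in [lo, hi] where goal(x) crosses zero."""
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if goal(lo) * goal(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return (lo + hi) / 2

        payment = solve_goal(lambda p: balance_after(p), lo=0.0, hi=1_000.0)
        print(f"Payment that clears the loan in 36 months: {payment:.2f}")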

    In reality, AI has already replaced humans in a variety of industries, and this trend will only increase over time whether we like it or not; it is real.

    I would like to mention some examples here:

    #1) Medical Diagnosis Expert System (ES)
    ——————————————-
    Mycin was an experimental medical diagnosis system developed in the 1970s to help physicians make quick decisions on certain infectious blood diseases, with the system recommending antibiotics and treatments. Mycin did outperform non-specialist physicians, but its performance was still below that of the specialists. Mycin was never used in a commercial environment because of fears over whom a patient might sue if the system made a wrong diagnosis or prescribed a wrong treatment.
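
    For readers unfamiliar with how systems of this kind work, here is a toy Python sketch of the rule-plus-certainty-factor style of inference that Mycin made famous. Mycin itself chained backwards from goals; this toy chains forwards, and the rules, findings and numbers are invented for illustration, not medical fact.

        # Each rule: (set of required findings, conclusion, certainty factor).
        RULES = [
            ({"gram_negative", "rod_shaped"}, "organism_x_suspected", 0.7),
            ({"organism_x_suspected", "culture_positive"}, "recommend_drug_y", 0.8),
        ]

        def infer(findings):
            """Fire each rule once; combine evidence Mycin-style: new = old + s*(1-old)."""
            beliefs = {f: 1.0 for f in findings}   # observed findings are certain
            fired, changed = set(), True
            while changed:
                changed = False
                for i, (needed, conclusion, cf) in enumerate(RULES):
                    if i in fired or not needed <= beliefs.keys():
                        continue
                    strength = min(beliefs[f] for f in needed) * cf   # weakest premise
                    old = beliefs.get(conclusion, 0.0)
                    beliefs[conclusion] = old + strength * (1 - old)
                    fired.add(i)
                    changed = True
            return beliefs

        print(infer({"gram_negative", "rod_shaped", "culture_positive"}))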

    Now, there are commercial medical diagnosis expert systems in use today at clinics & hospitals around the world, and they indeed outperform the diagnoses of some specialist physicians in certain medical domains. Even the human specialists themselves tend to adopt the recommendations of the expert system and shove aside their own diagnostic opinions. This is an area in which I am interested in developing software products. I already have the mathematical library developed; I just need a second phase that uses the library to build a fully functional system. I am targeting an MRI medical image expert system, an EEG expert system and maybe more. What I am doing is not new; perhaps it sounds new to some readers, but these things are well researched by academics & industry, who publish peer-reviewed papers in international journals. One of the few that I frequently browse through is the Artificial Intelligence in Medicine journal, which is available (all yearly volumes) at the Auckland Uni library.

    AI is not there to replace humans; these systems are being used to help humans simplify complex tasks. But as civilisation advances, more & more tasks are delegated to machines, as their decisions are perceived to be better than the ones humans make.

    #2) Finance Expert System
    —————————
    I had posted this article link in the blogosphere before, but I am posting it here again, just to demonstrate how AI is slowly overtaking humans.

    Robots beat human commodity trader

    Now, is there any reader here who would want to challenge the software agent robots described in the article above, by putting up your own money and trading against them to see who comes out on top? I wouldn’t, but I would love to use the bots to do the trading on my behalf.

    Humans made redundant as super-trader does the sums

    Human stockbrokers will be out of a job by around 2015, according to the article. This area is also an interest of mine because I am developing something similar. Again, it is not new, even though some readers might think it is. The types of algorithms described in the article are also available in the economics & finance literature, such as this, this, this and a few others.
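
    To give a flavour of how rule-driven trading bots work, here is a toy Python sketch of a classic textbook strategy, a moving-average crossover. The prices are invented, and this is not one of the algorithms from the articles above, just the simplest illustration of trading rules encoded in software.

        def moving_average(prices, window):
            """Mean of the most recent `window` prices."""
            return sum(prices[-window:]) / window

        def signal(prices, short=3, long=6):
            """Buy when the short-term average rises above the long-term one."""
            if len(prices) < long:
                return "hold"
            if moving_average(prices, short) > moving_average(prices, long):
                return "buy"
            return "sell"

        prices = [100, 101, 99, 102, 104, 107, 110]   # invented closing prices
        for t in range(6, len(prices) + 1):
            print(t, signal(prices[:t]))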

  • How many Steinlagers was that, Falafulu?????

  • Falafulu Fisi |

    Ben said…
    How many Steinlagers was that, Falafulu?????

    A dozen at home, although I didn’t count how many Heinekens I had at my local bar along Ponsonby Rd last night.

    Ben said…
    Let me answer a question with a question… would IBM ever consider replacing their board or CEO with a few Deep Blue supercomputers connected together to form some sort of quasi-sentient being? Answer: no. Reason: human context.

    Perhaps IBM would consider such a decision in the future, maybe 20 to 40 years from now. The rise in adoption of Business Intelligence (BI) software today indicates that corporate managers are already relying on the software’s recommendations of the best way to run their business. The software crunches the corporate data, then gives the best possible scenarios of what to do. Today, corporate managers hardly overrule the advice produced by the BI software (what-if scenarios); OK, they do sometimes, on rare occasions. One may want to browse BI-related articles at websites such as DMReview to read how corporate managers are adopting BI technology in massive numbers. They do this for one reason only, and that is business advantage over their competitors.

    One simple example is the use of sophisticated forecasting algorithms by managers. Managers don’t just dream up (foresee via psychic ability) what might happen in the future in relation to the financial operation of the business; they use software to give projections (forecasting). Software developers are scouring the forecasting literature, such as the International Journal of Forecasting (my favourite), to implement the latest algorithms, which have fewer errors (are more robust and accurate) than the currently available ones. Forecasting algorithms are not a magic bullet, but they do outperform the pure intuition of an expert human.
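
    As a taste of what the simplest of those forecasting algorithms looks like, here is a minimal Python sketch of single exponential smoothing; the sales figures and the smoothing constant are invented for illustration.

        def exponential_smoothing(series, alpha=0.3):
            """One-step-ahead forecasts: f[t+1] = alpha*y[t] + (1-alpha)*f[t]."""
            forecast = series[0]            # seed with the first observation
            forecasts = [forecast]
            for y in series[1:]:
                forecast = alpha * y + (1 - alpha) * forecast
                forecasts.append(forecast)
            return forecasts

        monthly_sales = [120, 132, 101, 134, 190, 170, 160]   # invented data
        print(exponential_smoothing(monthly_sales))   # last value = next month's forecast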

    Ben said…
    I contend that it is not practically possible (and I am sure Falafulu will disagree with me here!) to replace the human experience, context, emotion and being with technology.

    About 60 years ago, no one could foresee that computers, which used to fill a warehouse-sized space, would shrink to the size of today’s laptop, which one can carry around. It might not be practically possible now, but since the trend of adopting intelligent decision-support software is increasing at a phenomenal rate, there is no doubt in my mind that there will be a stage where technology takes us to that point.

    Now, time for some Steinlagers and to get ready to cheer on the All Blacks.

  • My understanding is that O’Reilly’s take on the “architecture of participation” revolves primarily around systems that invite user participation to build something, for example open source software development. But it also alludes to something deeper, albeit less tangible.

    Without wishing to get into a philosophical debate, it seems to me that knowledge resides within user communities, not in artifices such as books and software code. It is the conversations and interactions of community participants that actually create rich new knowledge and generate real value.

    Here’s an idea – form a developer community for a specific project that has commercial potential and let them actually become stakeholders in the outcome.

    But I’m sure someone has thought of it already.
