Monday, August 2, 2010

The Physical divide!

'Digital divide' refers to the gap between people with effective access to digital and information technology and those with very limited or no access at all. Many theses and books have been written on this topic. I want instead to talk of the 'physical divide', which I would define as the gap between people living in a physical world enhanced by effective access to digital technology and those living in a purely physical world. The difference between the two is subtle and needs to be explained. It is not that the digital divide ignores the physical world; it is that when I speak of the physical divide I want to talk about the real world itself, enhanced by digital technology, rather than about digital technology alone.

The so-called physical divide is addressed through a branch of computer vision and image processing generally referred to as 'augmented reality'. Augmented reality is the technology of combining a real scene viewed by a user with a virtual scene generated by a computer, so that the real scene is augmented with additional information. Mobile augmented reality, which brings augmented reality experiences to the mobile phone, is one of the fastest-developing and fastest-evolving technologies today, and has been described in separate studies by MIT and Gartner as one of the top five disruptive technologies that will take centre stage in the next five years.

We have all heard of virtual reality. Without getting into the philosophy of 'reality', augmented reality stands in contrast to it. Virtual reality refers to computer-generated simulation environments that can simulate places in the real (or imaginary) world, whereas augmented reality refers to blending real-world scenes with additional computer-generated information. On the spectrum between virtual reality, which creates fully immersive, computer-generated environments, and the real world, augmented reality sits closer to the real world. This is why I chose to call it a technology that addresses the physical divide. Augmented reality adds graphics, sounds, haptic feedback and even smell to the natural world as it exists. Both video games and cell phones are driving its development. To take this to the horizon, let me give two examples of what it could mean in future. Think of a computer game that lets you drive a car on an F1 track: that is virtual reality. Now imagine instead an actual recording of an F1 event, augmented with a car that you are driving in that real F1 environment. Many of us know the Nintendo Wii, which was a path-breaker in many ways. Playing a game of tennis on the Wii is still virtual reality; but imagine playing a Wimbledon match against Roger Federer. If you could ever do that, it would be thanks to augmented reality.

As you can imagine, the applications are many. Augmented reality has extensive applications in medical science, entertainment, military training, engineering design, robotics, manufacturing, maintenance and repair, consumer design, and hazard detection, amongst others.

There are two commonly accepted definitions of augmented reality (AR) today.
  1. Azuma's definition says that AR has three characteristics: it combines the real and the virtual, it is interactive in real time, and it is registered in 3D.
  2. Milgram and Kishino, in 1994, defined what they called the reality-virtuality continuum. The continuum extends from a purely real world to a purely virtual world, with the region of mixed reality in between covering augmented reality and augmented virtuality.
To combine the physical and virtual worlds we need precise models, locations and optical properties of the viewer (or camera) and of the display. The calibration of all the devices must be precise, and we need a mechanism to combine the local co-ordinate systems centred on the devices and on the objects in the scene into one global co-ordinate system.
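As a rough sketch of what combining local co-ordinate systems into a global one means in practice, the snippet below (plain Python with NumPy; the poses are made-up numbers, not from any real system) composes 4x4 homogeneous transforms so that a virtual object defined in the world frame can be expressed in the camera's frame for rendering.

    import numpy as np

    def make_pose(R, t):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Assumed example poses: the camera and a virtual object, each expressed
    # relative to a common world (global) frame.
    T_world_cam = make_pose(np.eye(3), np.array([0.0, 0.0, 2.0]))   # camera 2 m along world z
    T_world_obj = make_pose(np.eye(3), np.array([0.5, 0.0, 5.0]))   # a virtual object

    # To render the object from the camera's point of view, express it in the
    # camera frame: T_cam_obj = inv(T_world_cam) @ T_world_obj
    T_cam_obj = np.linalg.inv(T_world_cam) @ T_world_obj
    print(T_cam_obj[:3, 3])   # object position as seen from the camera: [0.5, 0.0, 3.0]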

The biggest challenge for AR is that it requires a very detailed description of the physical scene. Today AR struggles to find optimal methods for registering two distinct sets of images and keeping them registered in real time. Computer vision supplies some of the algorithms in this area, and AR also needs displays that can merge the two images.
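For the registration problem itself, a common computer-vision building block is to match local features between a reference image of the scene and the live camera frame and estimate a homography between them. The sketch below uses OpenCV's ORB features; the file names are placeholders, it assumes a roughly planar scene, and it is only meant to illustrate the kind of algorithm AR borrows from computer vision.

    import cv2
    import numpy as np

    # Load the reference view of the scene and the current camera frame.
    ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_ref, des_ref = orb.detectAndCompute(ref, None)
    kp_frm, des_frm = orb.detectAndCompute(frame, None)

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_ref, des_frm), key=lambda m: m.distance)[:200]

    # Estimate a homography mapping reference coordinates into the live frame;
    # virtual annotations defined on the reference image can then be warped with it.
    src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("registration homography:\n", H)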

Augmented reality systems are expected to run in real time, so that the user can move freely in the environment, and to show properly rendered augmented images. This implies that the two primary performance issues with AR today are the update frequency at which the augmented image is generated and the accuracy of the registration between the real and virtual images. These are easily stated, but the challenges come from today's technology barriers for both registration and generation of augmented images.

Failure in proper registration and/or rendering leads to highly undesirable artifacts in the augmented scene and can make the real scene look less real and more virtual. AR requires that the augmentation have a positive effect, and that is a huge challenge today, though not an insurmountable one given the human drive to innovate.

There are many AR solutions available today. The SixthSense augmented reality system lets you project a phone pad onto your palm and fingers and phone a friend without removing the phone from your pocket or purse. Some of the other AR solutions include Arhrrrr (an augmented reality shooter), ARIS (mobile media learning games), ARsights, eTreasure, LearnAR, SciMorph and the Wikitude World Browser.

Wireless applications are increasingly driving this technology into the mobile space, where it offers a great deal of promise. Initially, AR required unwieldy headsets and kept users largely tethered to their desktop computers. The camera and screen embedded in smartphones and other mobile devices now serve as the means to combine real-world data with virtual data: using GPS, image recognition and a compass, AR applications can work out where the mobile's camera is pointing and overlay relevant information at appropriate points on the screen.
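A rough illustration of the GPS-and-compass part of this: given the user's location, a point of interest and the compass heading of the camera, the sketch below works out the bearing to the POI and where it would sit horizontally on the screen. The coordinates, the 60-degree field of view and the 480-pixel screen width are all assumed, illustrative values.

    import math

    def bearing_deg(lat1, lon1, lat2, lon2):
        """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dlon = math.radians(lon2 - lon1)
        y = math.sin(dlon) * math.cos(phi2)
        x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
        return math.degrees(math.atan2(y, x)) % 360.0

    def screen_x(poi_bearing, heading, fov_deg=60.0, screen_w=480):
        """Horizontal pixel position for a POI, or None if it is outside the camera's field of view."""
        offset = (poi_bearing - heading + 180.0) % 360.0 - 180.0   # signed angle to the POI
        if abs(offset) > fov_deg / 2:
            return None
        return int((offset / fov_deg + 0.5) * screen_w)

    # Illustrative numbers only: a user in Mumbai looking roughly north.
    user = (18.9220, 72.8347)
    poi = (18.9398, 72.8355)       # an assumed nearby landmark
    heading = 10.0                 # compass heading of the camera, degrees from north
    b = bearing_deg(*user, *poi)
    print(round(b, 1), screen_x(b, heading))   # bearing ~2.4 deg; POI lands a little left of centre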

For example, the Wikitude World Browser is an augmented reality (AR) browser for the Android platform based on location-based Wikipedia and Qype content. It is a handy application for planning a trip or finding out about landmarks in your surroundings; 350,000 world-wide points of interest can be searched by GPS or by address and displayed in a list view, a map view or an "Augmented Reality" camera view. The latest version of the Wikitude World Browser includes an Augmented Reality Photo feature, which allows you to capture and share the AR camera view you experience through your mobile.


While I do not know when in the future, if at all, we will be able to play a tennis match with Roger Federer or Steffi Graf, AR is here to stay and has already, in many small ways, helped many application areas. Outside of entertainment, in essential services such as medicine and surgery, there are already applications that give surgeons a better view of the patient. Mobile AR will create many applications that will alter tomorrow's horizon. As with the digital divide, whether the physical divide continues to exist we may never be able to tell, but chances are that the line between the physical and virtual worlds will continue to blur.

Sunday, July 18, 2010

I Write Like Isaac Asimov!

"I Write Like" is an online tool that helps you find your inner author. The website "I Write Like" (http://iwl.me) has erupted online and scores of writers are tempted to go and check it online to see just who they write like. I Write Like is both entertainment and education. I have read Charles Dickens a lot in my life and he may have influenced a writing style subconsciously. So I was determined to find who I write like. The way the site works is simple. You go to the website and cut-and-paste your writings and press "analyze" button. And the website, without any explanations, tells you, that you write like ABC or XYZ. I pasted one of my older blog articles and the analysis had it that I write like "Arthur C Clarke". Hmm.. I thought I wrote some serious thought provoking proses and not science fiction! So I submitted a few of my other paragraphs from other older articles. The analysis indicated that I wrote, at times, like Isaac Asimov, at other times like Dan Brown and still at other times like Stephen King!

Who would not like to be Arthur C Clarke, Isaac Asimov and Dan Brown all in one! I would not mind a bit ;-) But then, being the curious one, I started looking for a pattern, and it was obvious: not once did the IWL analysis say I wrote like a famous English literary figure. I was never anywhere close to Charles Dickens for sure, never close to Ernest Hemingway, not D H Lawrence, not Forsyth, not even Robin Cook. The pattern started emerging. All of my blog articles are about technology and science, and maybe that is why names like Arthur C Clarke and Isaac Asimov sprang up. Just to test this notion, I pasted a paragraph from a letter I had written to my parents some time back (not about technology and science) and, lo and behold, it said I wrote like Charles Dickens!

So much for entertainment. Surely the concept is catchy and provides interesting insights for anyone curious enough. Equally surely, it cannot be an exact science, and it is not. But the simple idea of an algorithm that can trace influences in writing has proven wildly popular.

Who is behind IWL? Though the site might seem the idle dalliance of an English professor on summer break, it was created by Dmitry Chestnykh, a 27-year-old Russian software programmer currently living in Montenegro. Though he speaks English reasonably well, it is his second language. In his own words, Dmitry wanted the site to be educational. Chestnykh modelled it on software for e-mail spam filters, which means the site's text analysis is largely keyword based. Even if you write in short, declarative, Hemingwayesque sentences, it is your word choice that may determine your comparison. Most writers will tell you, though, that the most telling signs of influence come from punctuation, rhythm and structure. I Write Like does account for some elements of style, such as the number of words per sentence.

Chestnykh says “Actually, the algorithm is not a rocket science, and you can find it on every computer today. It’s a Bayesian classifier, which is widely used to fight spam on the Internet. Take for example the “Mark as spam” button in Gmail or Outlook. When you receive a message that you think is spam, you click this button, and the internal database gets trained to recognize future messages similar to this one as spam. This is basically how “I Write Like” works on my side: I feed it with “Frankenstein” and tell it, “This is Mary Shelley. Recognize works similar to this as Mary Shelley.” Of course, the algorithm is slightly different from the one used to detect spam, because it takes into account more stylistic features of the text, such as the number of words in sentences, the number of commas, semicolons, and whether the sentence is a direct speech or a quotation.”
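To make the idea concrete, here is a minimal naive Bayes author classifier of the same general kind, written with scikit-learn. The training snippets and author labels below are invented stand-ins for the whole books the real site is trained on; this is a toy sketch of the technique, not the site's actual code.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Tiny made-up training corpus: in the real site each author is represented
    # by entire books, not single sentences.
    texts = [
        "The robot pondered the three laws as the starship drifted past Saturn.",
        "Positronic brains and galactic empires filled the engineer's notebooks.",
        "It was the best of times, it was the worst of times in the grim city.",
        "The orphan trudged through fog and soot towards the workhouse gates.",
    ]
    authors = ["Isaac Asimov", "Isaac Asimov", "Charles Dickens", "Charles Dickens"]

    # Bag-of-words features plus a multinomial naive Bayes classifier,
    # essentially the same machinery as a spam filter.
    model = make_pipeline(CountVectorizer(lowercase=True), MultinomialNB())
    model.fit(texts, authors)

    sample = "The fog crept over the cobbled streets where the urchins begged."
    print(model.predict([sample])[0])                                    # most likely author
    print(dict(zip(model.classes_, model.predict_proba([sample])[0])))   # class probabilities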

Chestnykh has uploaded works by about 50 authors, three books for each, he said. That, too, explains some of the site's shortcomings: Melville, for example, isn't in the system. Chestnykh never expected the site's sudden success, and he plans to improve its accuracy by including more books and adding a probability percentage to each result. He hopes it can eventually be profitable.

Whatever the deficiencies of I Write Like, it does exude a love of writing and its many techniques. The site's blog updates with inspiring quotations from writers, and Chestnykh — whose company, Coding Robots, is also working on blog editing and diary writing software — shows a love of literature. He counts Gabriel Garcia Marquez and Agatha Christie among his favorites.

Whatever the strengths and weaknesses of IWL, the algorithm does work, and works reasonably well, for almost any writing you submit: it analyses the text and, with a certain probability, brackets you with a well-known author. Each article we write naturally has a somewhat different style, so what is really needed is a meta-level algorithm that takes several articles from an author and, rather than saying that one reads like Arthur C Clarke, another like Isaac Asimov and a third like Dan Brown, says that your body of writing as a whole reads like Isaac Asimov (I would like to hear it that way ;-)
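A minimal sketch of that meta-level idea, assuming all we have are the per-article labels the site returns today (the article results below are hypothetical):

    from collections import Counter

    # Hypothetical per-article results of the kind IWL returns today.
    per_article = ["Arthur C Clarke", "Isaac Asimov", "Dan Brown",
                   "Isaac Asimov", "Stephen King", "Isaac Asimov"]

    # The simplest possible meta-level aggregation: a majority vote across articles.
    # A probability-weighted average over each article's class scores would be a
    # natural refinement if the underlying classifier exposed them.
    overall, votes = Counter(per_article).most_common(1)[0]
    print(f"Across {len(per_article)} articles, you write most like {overall} ({votes} votes).")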

Be that as it may, the educational value is there. This is by far the best-known example of Bayesian classification I have come across, and another point in the case for making the teaching of quantitative methods in probability and statistics more interesting than it is!


Monday, July 5, 2010

Theory and practice

The FIFA World Cup and schools reopening in India after summer were both partially responsible for the slump in the frequency of my blogs over the last month. Coming out of hibernation of sorts, I felt this time I should touch upon a topic that spans all of my technology domain areas. While I have written earlier about the role of innovation, this time I want to focus on whether, in any domain, theory really precedes practice. That is, for any technology, are the theoretical foundations worked out first, before they are put into practice? This is a highly debatable question, all the more reason I thought I should share my viewpoint.

When computer graphics, as an area, was still evolving and in its early days, I happened to read a column titled "Jim Blinn's Corner" that used to appear in IEEE Computer Graphics and Applications in the early 1980s. Jim Blinn was considered a father figure in the field, having worked on simulations for NASA JPL's Voyager project and on the 3-D animations for Carl Sagan's TV series Cosmos, as well as for his research into many areas of computer graphics algorithms, including shading models.

In one of his articles (I don't recall specifically which one), he discussed the topic in the title of this post: whether theory should be developed first and only then the algorithms. Considering that rasterization, and the implications of moving from the continuous domain to the discrete one, were not fully understood at the time, his primary goal was to solve the problem at hand, which meant carrying out some simulation or other successfully. That required him to experiment a lot, and developing theory first was not really an option for him. His explanation, that one should experiment a lot and, once happy with an algorithm, use all the governing laws and principles of the area to explain why it should work anyway, had an impact on me that has shaped my later years. It runs counter to the premise that theory precedes its applications; it deliberately puts the cart before the horse and argues that even the theoretical development of a domain is aided when it is supplemented by practical products in the area.

While Jim Blinn was talking about graphics in that era, his comment is clearly a generic one that applies to all evolving domains that need practical solutions. Let us look at some of the areas I am working on and see how it applies:
  1. Computer vision is much like computer graphics and derives many of its first principles from there, so much of the algorithmic development in image processing and computer vision can happen first, followed by a theoretical explanation of why it should work anyway.
  2. Mobile handsets are another area. In the era of the Apple iPhone, Android phones and many other intuitive designs, it is difficult to evolve the theory first. Solutions are built, and then theory is used to explain why they work anyway.
  3. I talked about harnessing solar energy (and other renewable forms such as wind) in my last article, and also about why research in the area is incomplete. There is a case for developing products, intuitive or counter-intuitive, first, and then using our knowledge of physics and semiconductors to explain why they should work anyway.
While I am completely aware that theoretical physicists frown upon their experimental counterparts and are least likely to be impressed by the thesis of this article, the idea really is to take the debate beyond the boundaries of theory versus experimentation, to a point where it simply helps solve a problem. More often than not, innovation operates in technological domains where the groundwork of theoretical concepts is still in its infancy, and one needs some approach to developing the domain. Computer graphics is richer because of Jim Blinn's thinking back then, and many areas will benefit similarly if we come out of the traditional thought process.

Technology, by definition, is about applying concepts from science and engineering to day-to-day use in such a way that the human race benefits overall. In such a scenario, for a technology to succeed, solving people's problems becomes the stated problem. That problem can be solved either by developing theory first (if we are lucky) or by developing products first and then explaining, in theory, why they should work anyway.

In the larger scheme of things, theory and practice are both mere tools, and they need to be used intelligently and judiciously. It can then be left as a matter of personal opinion whether one approach is better than the other.

Tuesday, June 8, 2010

Fathomless sun!

As a source of energy, nothing matches the Sun. It out-powers anything that human technology could ever produce. Only a small fraction of the sun's power output strikes the Earth, but even that provides 10,000 times as much energy as all the commercial energy that humans use on the planet. If one goes by the big bang theory, the sun has been around at least as long as the Earth has, roughly 4.5 billion years, and ever since humans stepped onto this planet they have witnessed the daily cycle of day and night. The sun has been accepted as a great source of energy since ancient times, yet it keeps glowing away in glory, showcasing the human limitations in harnessing that energy. There has been limited success, certainly, but largely there are only gaps in the technological solution.

Let us look at this in perspective. The earth receives about 174 petawatts (PW) of incoming solar radiation at the upper layer of the atmosphere. (For scale: 1 'unit' of electricity is 1 kWh, and 1 PWh is 10^12 units.) Of this, approximately 30 percent is reflected back into space, while the rest is absorbed by the clouds, oceans and land masses. So there is plenty to harness, but technological limitations are brought to the fore, and to me they essentially expose the limitations of the otherwise celebrated human brain. Sure, there are many achievements the human race can be proud of, but the bar is set by the Sun. The problem is there and known, the resource in the form of solar energy is plentiful, but the solutions for harnessing it are far fewer. Solar power has the potential to provide over 1,000 times the total world energy consumption, yet today it provides only around 0.02% of the total.
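A rough back-of-the-envelope check of these numbers, taking the 174 PW and 30% reflection figures above and assuming world primary energy consumption of roughly 140,000 TWh a year (an approximate figure for around 2010):

    # All figures are approximate; the consumption number is an assumption.
    incoming_pw = 174.0                      # solar power intercepted by Earth, petawatts (PW)
    absorbed_pw = incoming_pw * 0.70         # ~30% is reflected straight back to space
    hours_per_year = 24 * 365

    solar_pwh_per_year = absorbed_pw * hours_per_year     # absorbed solar energy, PWh/year
    consumption_pwh_per_year = 140_000 / 1000             # ~140,000 TWh is about 140 PWh

    print(f"absorbed solar energy : {solar_pwh_per_year:,.0f} PWh/year")
    print(f"human consumption     : {consumption_pwh_per_year:,.0f} PWh/year")
    print(f"ratio                 : {solar_pwh_per_year / consumption_pwh_per_year:,.0f}x")
    # several thousand times over with these inputs, consistent with the ballpark claims above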

India's power sector has a total installed capacity of approximately 1,46,753 megawatts (MW), of which 54% is coal-based, 25% hydro, 8% renewables, and the balance gas- and nuclear-based. Power shortages are estimated at about 11% of total energy and 15% of peak capacity requirements, and are likely to increase in the coming years. In the next 10 years, another 10,000 MW of capacity and an investment of about Rs 24 lakh crore are required.

Fortunately, India lies in the sunny regions of the world. Most parts of India receive 4-7 kWh of solar radiation per square metre per day, with 250-300 sunny days in a year. India has abundant solar resources, receiving about 3,000 hours of sunshine every year, equivalent to over 5,000 trillion kWh, so it can readily utilise solar energy. Today the contribution of solar power, with an installed capacity of 9.84 MW, is a fraction (less than 0.1 percent) of the total renewable-energy installed capacity of 13,242.41 MW (as of 31 October 2008, according to MNRE, the Ministry of New and Renewable Energy). Solar power generation has lagged behind other sources like wind, small hydropower and biomass. India is only an example, by the way; the scenario is no different world-wide. In the US, the percentage is only around 1%.

Solar power is a term largely related to the generation of electricity from sunlight. This can be direct, as with photovoltaics, or indirect, as with concentrated solar power (CSP), where the sun's energy is used to boil water, which is then used to generate electricity. Photovoltaic materials convert light energy to electricity using semiconductor materials such as silicon: when certain semiconducting materials are exposed to sunlight, they release small amounts of electricity. This is closely related to the photoelectric effect, the emission, or ejection, of electrons from the surface of a metal in response to light; the analogous process inside a semiconductor, usually called the photovoltaic effect, is the basic physical process by which a solar electric or photovoltaic (PV) cell converts sunlight to electricity.

A typical PV system is made up of several components: PV modules (groups of PV cells), commonly called PV panels; one or more batteries; a charge regulator or controller for a stand-alone system; an inverter for a utility-grid-connected system or wherever alternating current (AC) rather than direct current (DC) is required; wiring; and mounting hardware or a framework.
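To get a feel for the numbers, here is a minimal sizing sketch for a small rooftop PV array, using the 4-7 kWh per square metre per day insolation figure quoted earlier and assuming a 15% module efficiency and a 25% system loss; all values are illustrative.

    insolation_kwh_m2_day = 5.0      # middle of the 4-7 kWh/m^2/day range above
    module_efficiency = 0.15         # within the 10-20% range discussed below (assumed)
    system_losses = 0.75             # derate for wiring, inverter, temperature, dust (assumed)
    panel_area_m2 = 10.0             # roughly six standard panels (assumed)

    daily_output_kwh = insolation_kwh_m2_day * panel_area_m2 * module_efficiency * system_losses
    print(f"~{daily_output_kwh:.1f} kWh/day, ~{daily_output_kwh * 365:.0f} kWh/year")
    # about 5-6 kWh/day with these inputs: a useful fraction of a typical household's consumption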

Concentrated Solar Power, on the other hand, uses the concept of focused sunlight. CSP plants generate electric power by using mirrors to concentrate (focus) the sun's energy and convert it into high-temperature heat. That heat is then channeled through a conventional generator. The plants consist of two parts: one that collects solar energy and converts it to heat, and another that converts the heat energy to electricity. Within the United States, over 350MW of CSP capacity exists and these plants have been operating reliably for more than 15 years. The amount of power generated by a concentrating solar power plant depends on the amount of direct sunlight at the site. CSP technologies make use of only direct-beam (rather than diffuse) sunlight.

CSP can use fields of flat mirrors (heliostats), parabolic troughs, or power towers to concentrate the sunlight.

The problem with human technology in harnessing solar energy has been the efficiency of conversion, which is typically no better than 10-20%. Given their manufacturing costs, modules built from today's cells and incorporated into the power grid would produce electricity at a cost roughly 3 to 6 times higher than current prices, or 18-30 cents per kilowatt-hour. To make solar economically competitive, engineers must find ways to improve the efficiency of the cells and to lower their manufacturing costs.

Prospects for improving solar efficiency are promising. Current standard cells have a theoretical maximum efficiency of around 30 percent because of the electronic properties of the silicon material. But new materials, arranged in novel ways, can evade that limit, with some multilayer cells reaching 34 percent efficiency; experimental cells have exceeded 40 percent.

Another idea for enhancing efficiency involves developments in nanotechnology, the engineering of structures on sizes comparable to those of atoms and molecules, measured in nanometers (one nanometer is a billionth of a meter). Recent experiments have reported intriguing advances in the use of nanocrystals made from the elements lead and selenium. In standard cells, the impact of a particle of light (a photon) releases an electron to carry electric charge, but it also produces some useless excess heat. Lead-selenium nanocrystals enhance the chance of releasing a second electron rather than the heat, boosting the electric current output. Other experiments suggest this phenomenon can occur in silicon as well.

Theoretically, the nanocrystal approach could reach efficiencies of 60 percent or higher, though the practical figure will be smaller. At the core lies a problem of engineering, and in turn of our inability so far to harness the Sun: advances will be needed to integrate such nanocrystal cells into a system that can deliver the energy into a circuit.

Even if the engineering challenges are overcome and advanced solar cells become available for generating electricity cheaply and efficiently, a major barrier to widespread use of the sun's energy remains: the need for storage. Sunlight is available for only roughly half of the daily cycle, so using solar energy at night requires storing the energy captured during the day, and cloudy weather interrupts its availability further. At times and locations where sunlight is plentiful, its energy must be captured and stored for use at other times and places.

Many technologies offer mass-storage opportunities, but none has been perfected yet. Pumping water uphill (for recovery as hydroelectric power) and large banks of batteries are proven methods of energy storage, but they face serious problems when scaled up to power-grid proportions. New materials could greatly enhance the effectiveness of capacitors, superconducting magnets or flywheels, all of which could provide convenient power storage in many applications.

Another possible solution to the storage problem would mimic the biological capture of sunshine by photosynthesis in plants, which stores the sun’s energy in the chemical bonds of molecules that can be used as food. The plant’s way of using sunlight to produce food could be duplicated by people to produce fuel.

For example, sunlight could power the electrolysis of water, generating hydrogen as a fuel. Hydrogen could then power fuel cells, electricity-generating devices that produce virtually no polluting byproducts, as the hydrogen combines with oxygen to produce water again. But splitting water efficiently will require advances in chemical reaction efficiencies, perhaps through engineering new catalysts. Nature’s catalysts, enzymes, can produce hydrogen from water with a much higher efficiency than current industrial catalysts. Developing catalysts that can match those found in living cells would dramatically enhance the attractiveness of a solar production-fuel cell storage system for a solar energy economy.

Fuel cells have other advantages. They could be distributed widely, avoiding the vulnerabilities of centralized power generation.

If the engineering challenges can be met for improving solar cells, reducing their costs, and providing efficient ways to use their electricity to create storable fuel, solar power will assert its superiority to fossil fuels as a sustainable motive force for civilization’s continued prosperity.

So what I have outlined above are the challenges in harnessing solar energy. Generally, technological innovation is needed when tapping scarce resources; here is a case of plenty. There is so much to tap, and unfortunately the gap only exposes the limitations of the human mind.

It is possible that, historically, the places where technological innovation happened most, that is the western world, did not focus much on this subject because they do not get much sunlight; but tropical countries like India are offered this opportunity on a platter.

There is a need: the reserves of coal are limited, nuclear energy can generate only so much, and the demand will never cease. In times of increased pressure to deploy green technologies, what better opportunity than tapping the Sun to meet the requirements? Here is an opportunity for India and similar countries to showcase their innovation capabilities by coming up with solutions that the world can then follow!

Wednesday, June 2, 2010

Emotion recognition

For centuries, art lovers have wondered about the Mona Lisa's mysterious smile, and what she may have been thinking or feeling at the time she was painted. Now, scientists in the Netherlands have used emotion-recognition software to estimate the Mona Lisa's emotions while she sat for her portrait by Leonardo da Vinci.

Computer vision is an extremely difficult subject because it tries to mimic human cognitive faculties. Technically, computer vision is about mimicking the human visual system: not just the seeing, for the camera is akin to the eye, but the interpretation of what we see through the eye, the way we relate it to our surroundings and use our past knowledge of situations, associations and so on to understand what we saw. That interpretation is the hardest part to put into an algorithm. Strictly speaking, emotion recognition is not part of computer vision, but it is closely related. The difference is that here we are not merely measuring the geometry of her face to ascertain whether she was smiling; we are using deeper knowledge of the relationship between that geometry and emotions to figure out which emotions were being displayed. This background was necessary to convey the complexity and uniqueness of this experiment, which is surely one of a kind.

Why this is unique is that the chances of error are very high. Robustness is another issue: the software cannot just work on the Mona Lisa; it should work on many, or all, other human faces.

Coming back to emotion recognition: the recognition part is easy to understand, but 'emotion' is really a colloquial term and needs a formal treatment. Research in psychology has shown that human emotions can be classified into six archetypal emotions: surprise, fear, disgust, anger, happiness and sadness. Facial motion plays an integral part in expressing these emotions. The other part that completes the expression is speech, but that is outside our scope here, for no one quite has the Mona Lisa's audio tapes!

An interesting line of research in psychology has been to understand the roles that speech and facial motion play in conveying each of these emotions. The findings showed that sadness and fear can be made out from speech data, whereas video, that is the facial cues, provides clues to anger and happiness.

There have been countless theories from fans of The Da Vinci Code, who know that the Mona Lisa's smile is not the only mystery associated with Leonardo's masterpiece. In 1509 he illustrated a book on the golden ratio by the mathematician Fra Luca Pacioli, which drew on the work of the artist Piero della Francesca: if a line is divided into two unequal lengths in such a way that the ratio of the longer segment to the shorter segment is the same as the ratio of the whole line to the longer segment, the resulting number is close to 1.618. Some art historians say that, within the painting, the relationship between the Mona Lisa's right shoulder and cheek and her left shoulder and cheek forms a golden triangle whose shortest sides are in divine proportion to its base.
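Written out as an equation, with a the longer segment and b the shorter one, the proportion described above is

    \frac{a+b}{a} \;=\; \frac{a}{b} \;=\; \varphi, \qquad \varphi = \frac{1+\sqrt{5}}{2} \approx 1.618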

Now, advances in computer vision have enabled a whole new generation of software. A case in point is an algorithm that can map a person's face onto a mesh computer model and estimate facial expressions from facial points such as lip curvature, eyebrow position and cheek contraction. Its developers claim it detects happiness, disgust, fear, anger, surprise and sadness with 85 percent accuracy, but the researchers do not yet have the technology to detect more subtle emotions.
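To give a flavour of what "facial points such as lip curvature" means computationally, the toy sketch below computes two simple geometric features from 2D facial landmarks. The landmark coordinates are made up, and a real system would feed many such features into a trained classifier; this is not the researchers' actual algorithm.

    import numpy as np

    # Assumed 2D facial landmarks (pixel coordinates) from a face tracker.
    landmarks = {
        "mouth_left":  np.array([120.0, 210.0]),
        "mouth_right": np.array([180.0, 212.0]),
        "upper_lip":   np.array([150.0, 200.0]),
        "lower_lip":   np.array([150.0, 225.0]),
    }

    def lip_curvature(lm):
        """Positive when the mouth corners sit above the lip midline (a smile-like shape)."""
        corners_y = (lm["mouth_left"][1] + lm["mouth_right"][1]) / 2.0
        midline_y = (lm["upper_lip"][1] + lm["lower_lip"][1]) / 2.0
        width = np.linalg.norm(lm["mouth_right"] - lm["mouth_left"])
        return (midline_y - corners_y) / width      # note: image y grows downward

    def mouth_openness(lm):
        """Vertical lip separation, normalised by mouth width."""
        width = np.linalg.norm(lm["mouth_right"] - lm["mouth_left"])
        return np.linalg.norm(lm["lower_lip"] - lm["upper_lip"]) / width

    features = [lip_curvature(landmarks), mouth_openness(landmarks)]
    print(features)   # such features would feed a trained emotion classifier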

So, any guesses about what happened when the algorithm was pointed at the famed Mona Lisa? It analysed the painting and found that the Mona Lisa's expression is 83 percent happy, 9 percent disgusted, 6 percent fearful and 2 percent angry!

The researchers also found that George Bush was feeling surprise, fear and sadness during a speech regarding the war in Iraq, and that Michael Jackson was 33 percent fearful in his mug shot, and angry and disgusted as the press snapped pictures after his trial.

Any invention has to lead to practical use, and this algorithm can become an innovation if used appropriately. For example, emotion-recognition technology may be used to detect that a driver is getting sleepy at the wheel and raise an alert, or to detect how you feel about certain items while you are shopping. Other applications might be to flag terror suspects on the basis of their emotions, not just their physical characteristics.

The inventors of this same program were hired by Unilever, the food and consumer goods giant, to work on a project that could well change the face of marketing. Around 300 women in six European cities were willingly photographed to capture their facial expressions while tasting five food types: vanilla ice cream, chocolate, cereal bars, yogurt and apples. Not surprisingly, ice cream and chocolate produced the most happy expressions.

Also unsurprisingly, the software registered fewer smiley faces for healthy foods. Apples produced 87 percent neutral expressions, with Italians and Swedes registering disappointment when eating them; yogurt did not fare much better, evoking "sad" expressions from 28 percent of Europeans.

This is not necessarily new research, but it has been picking up in the last 3-4 years. Why it was interesting to report is the 'fun' and 'educational' element in it. Serious research here can be quite a challenge, and innovation in applying it to areas hitherto unexplored may be even trickier and more challenging.

Thursday, May 27, 2010

The innovation gene

I have been focusing for a while on specific topics of my interest in my articles. Today I want to abstract out to a meta level, and because the company, Innotomy, is about the science of exploring innovations, let us focus on innovation.

A few questions we should explore to get a better understanding are:
  1. What is innovation?
  2. Why is innovation important?
  3. What traits should the innovation gene have, if there is one? What qualities do innovators possess that make them different from the rest?
  4. Is this innovation gene present in all Homo sapiens?

What is innovation? It is loosely defined as an act in which the thought process is modified, either to do something totally new or to do new things that are more useful. An important aspect of the definition stems from comparing innovation with invention. Invention is about new ideas, whereas innovation is about putting those ideas into practice. An invention necessarily has to be brand new and unique, but an innovation need not always be unique; it must, however, be sufficiently and substantially different to count as innovative.

Arguably the most important invention of the 20th century is attributed to John Bardeen, William Shockley and Walter Brattain, who invented the transistor in 1947. Earlier in the same century, in 1906, Lee De Forest had invented the vacuum tube triode; the transistor was far superior to the vacuum tube, but that is not the point here. Both were great inventions, and they had a cascading effect: personal computers, then the Internet, then the web, and now social networking sites such as Twitter, MySpace and Facebook are all inventions. When Subway or McDonald's use Twitter or the Internet to advertise and connect with consumers in new ways to increase their customer base, that is innovation. So inventions are unique and rare, but along the timeline each usually spawns a series of innovations that benefit individuals, businesses, the community, or society at large.

Train ticketing systems in the western world are another example. While the system itself may be inventive in nature, its usage can be innovative. In Europe and the US, the gates are usually closed and open only after a ticket is swiped, whereas in Japan the gates are always open and close only when a ticket is not swiped. This is because the density of people entering the gates in Japan is much higher than in Europe and the US. It is a simple alteration of the invention to suit geographical needs, and it qualifies as an innovation because it is new, it is different, and it puts into practice a system that benefits many!

The examples are varied and many; the point, hopefully, is clear that inventions and innovations are different, and understanding that difference is key to understanding innovation. Why is it important? History is replete with examples of continuous innovation, and because innovations bring a significant difference in quality of life, they are naturally sought after. The impact of globalisation, migration, and the technology and knowledge revolutions makes it imperative for individuals and businesses to keep focusing on innovations that can carve out a niche and make them more competitive. Research shows that competition combined with strong demand is a major driver of innovation; the intensity of competition is a determinant of innovation and productivity. Innovation, besides products and services, also includes new processes, new business systems and new methods of management, which have a significant impact on productivity and growth.

We have thus far talked about what innovation is and why it is important. But who are the people behind innovations? While there may be only a few Newtons and Einsteins, who were primarily inventors, there can be many innovators. In fact, I would argue that all of us can be innovators. But what does it take to become one? There are some qualities. What are they? Is there such a thing as an innovation gene that brings these qualities to people? The answers may lie in what is generally described as disruptive innovation.

Disruptive innovation, as coined by Clayton Christensen, describes a process by which a product or service takes root initially in simple applications at the bottom of a market and then relentlessly moves 'up market', eventually displacing established competitors. A disruptive innovation gives a whole new population of consumers access to a product or service that was historically accessible only to consumers with a lot of money or a lot of skill. At INSEAD in France, Hal Gregersen has published interesting results of a study he and his team conducted over the last decade. To be innovative, he says, one should possess the following skills:
  1. Associating - creative people 'connect the dots', many times leading to unexpected connections
  2. Observing - you must be a very keen observer
  3. Experimenting - one may not know the solution, but it is important to keep trying; call it trial and error, or call it being an experimenter.
  4. Questioning - observations and associations can be rationalized through questioning; not asking questions may not take you much further down the road of innovation.
  5. Networking - This is not social networking that can land one a better job, but as Gregersen puts it, "Innovators are intentional about finding diverse people who are just the opposites of who they are, that they talk to, to get ideas that seriously challenge their own".
It is not necessary to be great at all of these traits to be a good innovator, but it is important to exercise the ones you are better at. According to Gregersen, Steve Jobs is good at associating, Scott Cook of Intuit was good at observing, and Jeff Bezos of Amazon is an experimenter.

So that is the innovation gene, and it is present in all human beings. All of us are capable of associating, observing, experimenting, questioning and networking, which means all of us can be innovators; yet all of us obviously are not. Why so? Because it is not easy for adults to exercise all of these innovation habits, and trying them all together can feel very counter-intuitive. That is the effect of human conditioning over the years; inventors tend to be at their prime in their late twenties.

That is why it is often said that, in the absence of this conditioning, children are the best innovators. My daughter every now and then comes up with a lateral, out-of-the-box approach to simple day-to-day problems. It is not that she is thinking out of the box; it is that we are so completely boxed in. So please grow up and be a child again!

Thursday, May 20, 2010

Electricity no cheaper!

Apologies for a gap of over two weeks since my last blog; I was on a family vacation in the US. Although the idea was to keep the blogs going, somehow the visits to Universal Studios, sea worlds, zoos, river walks, city towers and, of course, the fabled Disneyland with my family ensured that I could not quite keep the resolve. I returned to India two or three days ago. A month in the US was quite educational for my primary area of interest, the conservation of limited resources such as energy and water. What I learnt there was that even when areas and states are declared drought-hit, no home gets any less supply of water, and there is never any load shedding or lack of power supply, whether for homes or for industry.

Coming home from there, I hit upon a news clip a couple of hours ago quoting the Minister of Power, Government of India, suggesting that the cost of electricity across the country is likely to go up by Re 1 per kWh. This got me thinking. The price normally goes up either when the resources are scarce or when the limited resources are made available at an extra cost (a toll). This hike is because of the former, meaning the natural resources themselves have become dearer and rarer: the Cabinet hiked the price of gas sold to power, fertilizer and city gas projects from Rs 3,200 per thousand cubic metres ($1.79 per million British thermal units) to Rs 6,818 per thousand cubic metres ($3.818 per mmBtu). So the cost of electricity goes up! It is sad to note that although there are compelling reasons why cabinets have to take such steps from time to time, this hardly addresses India's basic requirement for more energy. If a plan for that had been announced alongside the Re 1 per kWh increase, it would have made more sense. From an end user's point of view, all it means is that you pay more for the same energy; the scheduled and unscheduled load shedding will continue, with no promise otherwise. If your electricity bill for a consumption of 50,000 units (kWh) was INR 700,000, you now start paying around INR 750,000 for no additional promise!
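The arithmetic behind that last sentence, using only the figures quoted in this post:

    units_per_period = 50_000          # consumption in kWh ("units")
    old_bill_inr = 700_000             # stated bill before the hike
    hike_per_unit_inr = 1.0            # the proposed Re 1 per kWh increase

    new_bill_inr = old_bill_inr + units_per_period * hike_per_unit_inr
    print(new_bill_inr)                # 750,000: an extra Rs 50,000 for the same energy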


There is an interesting facet to this, though. What it drives home is that it is even more imperative for users to conserve, for that is the only way they can avoid the additional financial burden. The interesting part is that, in a rather convoluted way, the hike actually instils the spirit of conservation and helps consumers see the benefit right away. No indirect maths is required: what you use is what you pay for, and the less you use, the less you pay. For companies offering energy management solutions, this means their business case is even stronger. Without doing anything additional to their portfolio, they can now claim that with their systems the savings would be that much greater, since the power tariff has generally gone up.


The Central Government of India enacted the Electricity Act 2003 primarily to reduce the cost of power. But power tariffs have increased manifold since then, the power deficit has grown, and only power generators and traders have benefited. Prolonged shortfalls in power supply throughout the country have given India the dubious distinction of being the country with the highest cost per unit of electricity in the world. During the first half of the 11th five-year plan, the cost rose to Rs 5.9 per unit for the 59,000 crore units flowing through the sector's various mechanisms for inter-state trading, according to a Planning Commission report. And remember that this would come close to Rs 7 with the proposed increase!

With short supply causing the price of power to rise, states with excess energy have made a profitable business out of selling power, while managing to keep their own consumer tariffs low. Unfortunately, states with power deficits are dictating power exchange trends, resulting in frequent unscheduled interchanges (UI).

Each day, regional load dispatch centres prepare for the next day's power consumption by asking states to declare how much power they will supply to the grid and how much they will require. When states are unable to keep their word and end up withholding power from the grid or withdrawing excess energy, they pay an unscheduled interchange surcharge ranging from 12 paise per kilowatt-hour (kWh) to 735 paise per kWh, depending on the fluctuation in frequency.
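To show the shape of this incentive, the sketch below simply interpolates between the two surcharge endpoints quoted above over an assumed grid-frequency band; the real UI schedule is a regulator-defined step table, so both the band and the linear shape here are illustrative assumptions only.

    def ui_rate_paise_per_kwh(freq_hz, f_high=50.2, f_low=49.5,
                              rate_at_high=12.0, rate_at_low=735.0):
        """Cheap UI power when grid frequency is high (surplus), punitive when it is low (deficit).
        The 50.2-49.5 Hz band and the linear interpolation are assumptions for illustration."""
        f = min(max(freq_hz, f_low), f_high)
        frac = (f_high - f) / (f_high - f_low)
        return rate_at_high + frac * (rate_at_low - rate_at_high)

    for f in (50.2, 50.0, 49.8, 49.5):
        print(f"{f:.1f} Hz -> {ui_rate_paise_per_kwh(f):.0f} paise/kWh")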

Along with supply shortfalls, these UIs have caused massive inflation in rates, which is then passed on to consumers. And although these fluctuations in grid frequency place an additional financial burden on both deficit and surplus states, approximately 41 per cent of the volume of power trading comes from UI.

More than 48 per cent of power traded has come through bilateral exchanges while just less than 11 per cent is exchanged on India’s two power exchanges.

In an earlier blog article, I mentioned that the energy generation scenario in India is not very good. Now it is clear that whatever little energy is generated is also not cheap and is getting costlier by the day. One just cannot shy away from the need for energy conservation in India, be it in homes or in industries.