Friday, April 30, 2010

Degree of openness

This week, I read about some interesting advice Steve Jobs of Apple had for Adobe. Apple was responding to questions about why Flash is not part of Apple products (iPhone, iPad, iPods etc). Apple's argument was a reply to Adobe's charge that Apple's software is closed and proprietary and needs to open up. In return, Steve Jobs argues that the reverse is true. That article has inspired this one.

To me, the two most closed software entities (for reasons justified or otherwise) are the products from Adobe and Apple. Maybe it is so because Apple held a 20% share in Adobe in its early days, and Adobe's thought process may have been influenced by Apple's. Be that as it may, it was plain hilarious to read about Adobe and Apple charging and counter-charging, each claiming that its own software is open and the other's is not.

Let us try to develop a scale that shows the degree of openness. Let it be a scale of 1 to 10, with 1 for closed and 10 for open. While the definition of closed is well and truly understood (the iron cage), it may be prudent to define what open means. An open system should provide free access to source code, should allow its free redistribution, should allow free distribution of derived work, and its license should be technology neutral and not product specific. On such a scale, I would be tempted to rate both Adobe and Apple at "1".
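
To make the scale slightly more concrete, here is a toy sketch of how such a score might be tallied; the criteria list and the equal weighting are purely my own illustration, not any formal open-source definition.

```python
# A toy openness score: one point per criterion met, mapped onto the 1-10 scale
# used above. The criteria and equal weighting are illustrative, not a standard.
CRITERIA = [
    "source code freely accessible",
    "free redistribution allowed",
    "derived works may be distributed freely",
    "licence is technology neutral",
    "licence is not product specific",
]

def openness_score(criteria_met):
    """Map the number of satisfied criteria (0..5) onto a 1..10 openness score."""
    met = sum(1 for c in CRITERIA if c in criteria_met)
    return 1 + round(9 * met / len(CRITERIA))

print(openness_score(set()))          # the "iron cage": 1
print(openness_score(set(CRITERIA)))  # truly open: 10
```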

But in reality, it is not that bad. Both Apple and Adobe, while closely guarding their respective software, have opened up to the extent of providing interface APIs so that developers from the open community can build applications on top of their products. Today, there is a huge developer and third-party application community that develops unique and interesting apps for both Apple and Adobe platforms, apps that are not shipped with the product. So they are a bit open. But they still are not open in the true sense of the word. The main engines are still closely guarded. So maybe let me rank both at "2".

Let us look at the other side of the spectrum. Is there anything truly open? That is, is there any software that can be ranked at "10"? Well, there are many. My favourite has to be Linux. When Linus Torvalds announced Linux in 1991 with the intention of tapping many volunteer programmers, the end product was a complete OS that one can build from scratch. The entire OS source code was available, and it was truly an open system. What is remarkable, nearly 20 years on, is that Linux continues to remain open and at the forefront of the list of products that are truly open. I don't think anyone will doubt my intentions in ranking it with a perfect score of "10". Of course, there are many other open-source products that score a perfect "10". The Apache HTTP web server, for example.

A couple of years ago, Google vowed openness and announced the launch of the Linux-based Android platform for mobile phones, which looked like a relevant alternative to Apple's iPhone OS. It wowed enthusiastic amateurs and developers alike. On the same scale of openness, I would be tempted to rate Android a perfect "10".

But wait! Soon it became apparent that Android was not so open after all. There have been frustrations about the lack of SDK updates. New versions of the SDK were being released under NDA. Non-disclosure agreements and selective access to development tools are hardly emblematic of an open-source ecosystem, and they did not strike the right chord with the developer community. Unlike Linux release management, there was a lack of transparency in Android's development process. What was promised as a truly open OS has not been so after all. So a perfect "10" becomes an imperfect "10" for Android. Maybe an "8".

User experience is most important. Steve Jobs says it is documented that the integration of Flash brings down Mac OS. What open software allows is third-party investigation; with one black box resident on another (Flash on Mac OS, for example), it is difficult to judge. Flash may certainly have issues with Apple's OS, but Flash has also been a reasonable success outside of Apple products. There are many users of Apple products and many users of Flash.

In summary then, what is open is not so open, and what is closed is not so closed. There are shades of gray. But there are also shining examples of products that are truly open, and of products that are truly closed! It is a completely different matter, and a matter of opinion, whether Apple can and should be allowed to take the moral high ground in labelling Adobe products as "closed and proprietary"!

Tuesday, April 20, 2010

Humans not so superior?

It has always been believed that, with evolution and an advanced brain, human beings are generally good at intelligence and in fact far better at it than the rest of the animal kingdom. The statement is true, with stress on the word 'generally'. Many experiments have proven that intelligence is not necessarily only a human virtue. Be it monkeys, dolphins, penguins or whales, many animal species have proven that they have intelligence too. There is some more 'uncomfortable' evidence now coming through which suggests that humans are not necessarily the most intelligent species. In controlled environments, other species have actually proven that they solve some mathematical problems in a more optimal, faster way than humans do! Take pigeons, for example.

In an interesting research paper titled "Are birds smarter than mathematicians? Pigeons (Columba livia) perform optimally on a version of the Monty Hall Dilemma", by Herbranson, Walter T. and Schroeder, Julia, which appeared in the Journal of Comparative Psychology (Vol 124(1), Feb 2010, 1-13), the authors claim that when pigeons were used as subjects to solve the Monty Hall problem, the results showed that pigeons are better at it than humans (and, interestingly, perhaps a layman is better at it than a PhD).

What is the Monty Hall problem? It is a probability problem based on a popular American TV game show, "Let's Make a Deal", and named after its host, Monty Hall. The result appears absurd but can be demonstrated to be true, so it is at times also called the Monty Hall Paradox. Here is the problem statement:

Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?

As the player cannot be certain which of the two remaining unopened doors hides the car, most humans believe that each door has an equal chance, conclude that switching does not alter their chance of winning, and stay with their initial guess. The optimal strategy is actually to switch, as switching doubles your probability of winning from 1/3 to 2/3!
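
If the 2/3 result is hard to believe, a quick Monte Carlo simulation settles it. Here is a minimal sketch in Python (the function name and the number of trials are my own, purely illustrative):

```python
import random

def monty_hall(trials=100_000, switch=True):
    """Monte Carlo estimate of the win rate for the stay vs. switch strategies."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay  :", monty_hall(switch=False))  # roughly 0.33
print("switch:", monty_hall(switch=True))   # roughly 0.67
```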

A variation of the Monty Hall problem was tried on pigeons, and it was found that the birds reached the optimal strategy, going from switching roughly 36% of the time on day 1 to 96% on day 30. If that was not interesting enough, here is more evidence about humans: 12 undergraduate student volunteers failed to adopt the best strategy with a similar apparatus, even after 200 trials of practice each.

The Monty Hall problem, in one of its common formulations, is mathematically equivalent to the earlier Three Prisoners problem published in Martin Gardner's Mathematical Games column in Scientific American in 1959, and both bear some similarity to the much older Bertrand's box paradox. These and other problems involving unequal distributions of probability are notoriously difficult for people to solve correctly, and have led to numerous psychological studies that address how the problems are perceived. Even when given a completely unambiguous statement of the Monty Hall problem, along with explanations, simulations, and formal mathematical proofs, many people still meet the correct answer with disbelief. The fact that people do badly at this problem holds across cultures, including Brazil, China, Sweden and the United States.

The difference between these two behaviours may be explained by the fact that, most of the time, humans use the classical interpretation of probability, whereas the pigeons used an empirical interpretation, one based on experiential learning.

So are humans not superior? Actually they are, generally speaking; it is just that they are worse at certain types of problems than some other animal species!

Wednesday, April 14, 2010

Hot air

Is global warming literally hot air? Or only figuratively? The term 'hot air' signifies many things. Physics-wise, it is the air around us that becomes (or is) hot. The term also has another interesting connotation: it refers to things, ideas and realms that are closer to fantasy yet are floated as though real. Here, I want to talk about the first definition, not literally hot air but the more encompassing concept called global warming. Interestingly enough, it is also a topic that fits my second definition. This is an essay on the topic and is not meant to be a scientific study.

Is climate change a myth? Is global warming a myth? Is anthropogenic global warming a myth? What is myth and what is not? Unfortunately, we have landed in many such questions related to planet Earth due to diabolical studies that seem to have had some vested interest (or were just poor research). What was once considered undisputed has now been drawn into dispute. What was once considered uncontroversial is now controversial. So we need to set the clock back and start all over again: start from the notions that cannot be disputed and work upwards to the final conclusion (whatever that may be).

Is climate change real? How is climate change defined? Climate change is basically seasonal variation at a given location. Of course, the variations have to be outside of tolerances, so they have to be drastic in some sense. Maybe we could call them 'extreme variations' in climate? The places which used to have the highest rainfall are no longer that way. Summer temperatures never went beyond 30-35 degrees Centigrade in the place where I grew up; they are upwards of 40 now. Places which used to be cold have become unbearably cold. There are torrential rains in places where rainfall was modest not so long ago. There are areas of increased snowfall around the globe (why did Europe get so much snow this winter?) compared to earlier times. Yesterday Bangalore recorded its highest summer temperature in 25 years! There are droughts in some places and not in others; there are floods in some places and not in others. Two conclusions can be drawn. One is that, at a given location, extreme variations in weather are a given; these are observed phenomena that cannot be disputed. The other is that they clearly indicate a departure from the 'norm', if there was one, and hence climate change is real.

Is global warming for real? We have to be a bit careful with what we mean by global warming. I explained climate change as a local phenomenon. When we observe a series of such local phenomena and transcend them with a grand unifying theory, we call it global warming. Is global warming for real, then? It depends, statistically speaking, on how many local phenomena in the observed series pass that criterion. If enough of them do, statistically, we can talk of global warming.

Is anthropogenic global warming a myth? What is anthropogenic global warming? Man-made, or man-induced. Have humans contributed to this? This is where we start entering scientific twilight. It was undisputed till a few months back, but no more. Clearly there is a lot of data to be made sense of, and clearly a debate over the conclusions is required before we can decisively conclude that global warming is man-made. So until proven guilty, let us say it is a myth or a fantasy. Basic climate study will tell you that the globe warms by a degree or two every 250 years. What is now disputed is whether this is part of the larger cycle of nature (of which humans are an insignificant part) or whether humans have attained superhuman scale and altered the cycle of nature to a point beyond reversibility. It is possible, distinctly possible, but not proven yet.

What is worrying is that every article written on the topic is seen through the lens of vested interests, including this one. Someone can easily say that I have been 'bought' by the anti-GW lobby to write an article like this. We cannot each go into the other's court to clarify our versions. Somewhere we need to break this vicious cycle. Harm was done by some inaccurate research; the harm needs to be undone without leaving an angle of doubt.

The problem with these kinds of debates is also that they get very passionate. People on both sides are passionate. That is why it would take some out-of-the-box thinking to get us out of this muddle. If we humans don't get our act together then, like the dinosaurs, we too may soon become an object of fancy for some alien out there. And if the existence of extraterrestrial intelligence is also disputed, then there won't even be any admirers of us left. Did I say climate change is anthropogenic?

Sunday, April 11, 2010

Irked by randomness

The title of this article is inspired by the famous book 'Fooled by Randomness' by Nassim Taleb. While his focus was the effect of randomness on the world of economics, my focus is the subject of 'randomness' itself: probability and statistics. Why, generally speaking, is probability a thorn in our student life? Why does it send shivers down our spines, be it living through the lectures over a period of 3-6 months or appearing for the final examination? After all, the subject is so fair. It teaches us that anything is possible. To any outcome of any experiment, it only attaches a number (a probability) for its occurrence. Even impossible events have a number, and so do possible events! So while the subject should actually make us feel great about life, since anything can be explained with it, it actually scares us. Let us find out why, if we can.

I was no different, and I quite disliked the subject when I was a student. Unfortunately for me, my academic path required that one day or another I face it head-on and put the fear and anxiety aside. My father was a master at mathematics (and also a Master of Mathematics), and my elder brother had done quite well in the subject at school. But as we all agree, pedigree is of no great relevance in a subject like this, where either one gets it or one does not; more often than not it is the latter. They tried hard to get me initiated, but I must admit I was an average student in the subject then.

During my doctoral days at IIT Bombay, I was a teaching assistant to my guide, Prof SC Sahasrabudhe, on a course titled Communication Theory that was really about probability. Having learned, great men around can only have the desired effect in the end, albeit a little late sometimes. It was the clarity that my father, and later my professors, had on the subject that slowly started driving the fear out of me and consequently putting the fundamentals in place, so that I could start enjoying the subject.

I have now graduated to a point where I have taught this course in a business school, with its application to business decision making. This life cycle of being a student first, then a teaching assistant, and eventually a teacher has achieved two things: first, it ensured I remained exposed to the subject for many years rather than just one or two; and second, destiny had it that I was to be around some celebrated names in this field, and I eventually picked up the pieces.

The experience as a teacher was the most fulfilling, though. It was a vindication of a belief I have held for long: that the subject is simply not introduced and taught appropriately. It takes good teachers to get the topics right, there are obviously fewer such teachers around, and it then depends entirely on one's luck whether one stumbles upon one or more of them in one's student days.

For a start, introducing the concepts through the toss of a coin, the throw of a die or the drawing of one or more cards from a deck, as happens every time the subject is taught, is monotonous. While these may be simple experiments (with finite and small sample spaces) for teaching the concepts, they are also a quick put-off. I have realised that student psychology works this way: they remain interested until these experiments are mentioned; then most of them think probability is only this and that they know it already, and disengagement with the subject starts taking shape, to the extent that they again fail to get the concepts until they appear for another advanced or similar course a year later, and the same cycle repeats. This cycle ensures that they remain fearful of the subject irrespective of the number of courses they take.

Let me give you an example of how I introduced a similar concept in my class. I tried to explain the binomial distribution through the example of two evenly matched tennis players, Federer and Nadal, and worked out the probability of one of them winning a grand slam final in 3 sets, 4 sets or 5 sets. The results themselves were interesting enough, but what was good to see was the connection this example made with the students, far more than an experiment of tossing a coin 10 times!
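
For the curious, here is a minimal sketch of that calculation, assuming evenly matched players (each set is an independent trial won with probability 0.5) and a best-of-five final; the function name is my own:

```python
from math import comb

def p_win_in(n_sets, p=0.5):
    """Probability that a given player wins a best-of-5 final in exactly n_sets sets.

    The player must take the last set plus exactly 2 of the previous n_sets - 1 sets
    (negative-binomial reasoning on independent sets won with probability p).
    """
    return comb(n_sets - 1, 2) * p**3 * (1 - p)**(n_sets - 3)

for n in (3, 4, 5):
    print(f"{n} sets: {p_win_in(n):.4f}")
# For one player: 0.1250, 0.1875, 0.1875. Doubling (either player can win) gives
# the chance the final ends in 3, 4 or 5 sets: 25%, 37.5% and 37.5%.
```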

So let's look at the larger picture, and let me confine myself to India, where the problems mentioned above have been noticed. India boasts around 500 universities and around 20,000 colleges. India's Human Resource Development minister, Mr Kapil Sibal, who has initiated revolutionary changes to the Indian education system, mentions that in the next 12 years India needs another 600 universities and another 35,000 colleges to meet the expected double-digit GDP growth.

Obviously, the colleges and universities required to create the next wave of graduates in India will need to turn out graduates who are business savvy and adept at quantitative methods and analysis. What is bothering me is this: as it is today, there is a scarcity of good teachers who can introduce these concepts innovatively and with today's examples rather than yesterday's. A decade from now, there will be more students starving for such good teachers, whose number as a percentage of the total number of teachers will dwindle further. This leads me to believe that for years to come the fear factor will remain in students' psyche, and we will continue to produce graduates who are less skilled in this most important branch of science, the one they will use most in their later lives.

Thursday, April 8, 2010

Fancy of mobile phones

Are mobile phones supposed to be just phones, or should they have glamour value? Mobile phones today are gadgets that are communication devices to some and status symbols to others. They come in all shapes and sizes and have come a long way since the early 1980s, when they used to be merely "mobile" and not necessarily portable! The first mobile phones of the early 1980s were the size of a typewriter and weighed around 5 kg: mobile all right, but you needed a backpack to carry one! In just three decades, they have become far sleeker, far more functional and surely part of a person's style statement.

Why would mobile phones undergo such a metamorphosis? Maybe because it is a device-in-need, a must-have gadget due to its primary purpose of being a phone. But why are we so obsessed with the megapixel specification of its image sensor? Why are we so obsessed with its touch screen, and just why are we bowled over by its looks? After all, there are many other devices-in-need. Wrist watches, for example. But they did not evolve so much. We never quite wanted anything else in a watch (other than gold and diamonds, maybe), but we would rather have a clock in our phone!

Consumerism is the answer. All of this has evolved from markets driven by consumers. I belong to a geography and a generation where I learnt the subjects first and used the devices later. I did a course on telephony before I actually used a telephone to make a call (not joking), and I was quite scared of using it! I did a course on TV before TV penetration became big in India. I did a course on wireless communication before I used a mobile phone. I worked on 3G phone internals before I ever saw a physical 3G phone. In the late 90s it used to be kind of funny working with some of the clients: not having the experience of using a phone, yet working on requirements specifications for a mobile phone. An application would be called, for example, 'Music jukebox'. We had neither used such a phone nor used a jukebox, so we could not quite comprehend what was expected of this feature!

As the years passed by, I learnt that this was a two-way phenomenon and not necessarily one inflicted upon the 'third world'. Emerging markets across Asia, Africa and the Middle East became centres of growth for mobile usage, and soon mobile phone makers needed to work on specifications targeted at these regions. It gave me immense pleasure when my peers from the developed world (North America and western Europe) struggled to understand just why a mobile phone should have a lantern or an FM radio!

So, coming back to consumerism, geography and social needs dictate the requirements, and then only the sky is the limit. So we see mobiles in all shapes and forms. Folks from lower socio-economic strata still want that elusive talk-only phone (a dream for them, really), while the more initiated get hooked on advanced products such as the iPhone and the Blackberry. Between the ultra-low-cost GSM phone and the Blackberrys of the world there is a rainbow of products in terms of features, hardware and style.

What elements go into the design of a typical mobile phone? Here is a rough checklist (a small sketch of how it might be captured in code follows the list):
  1. Market inputs for requirements (e.g. lantern!)
  2. Market inputs for style (youth segment, farmers, initiated advanced users?)
  3. Geographical needs
  4. Wireless coverage (GSM, GPRS, EDGE, 3G)
  5. Socio-economic index (to decide handset price - folks should buy the phone)
  6. Hardware elements (e.g. QWERTY keyboard, camera, WiFi)
  7. Size of display and its type (non-touch or touch - resistive or capacitive)
  8. The hardware baseband platform (chipset vendors)
  9. The OS or the software platform (LiMo, Maemo, Windows Mobile, Android or other)
  10. Language selection
  11. Application suite
  12. One or more SIM slots with SIM card size
  13. Battery life and size of battery (in terms of mAh)
  14. and so on...
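
Just to make the checklist concrete, here is a minimal sketch of how such a requirements record might be captured; the type name, field names and example values are my own illustration, not any manufacturer's actual specification format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PhoneSpec:
    """A toy requirements record mirroring the checklist above (illustrative only)."""
    target_segment: str              # youth, farmer, advanced user...
    regions: List[str]               # geographical needs
    radio: List[str]                 # GSM, GPRS, EDGE, 3G
    target_price_usd: float          # tied to the socio-economic index
    hardware: List[str]              # QWERTY keyboard, camera, WiFi, lantern...
    display: str                     # non-touch, resistive touch, capacitive touch
    chipset: str                     # hardware baseband platform
    os: str                          # LiMo, Maemo, Windows Mobile, Android...
    languages: List[str]
    apps: List[str] = field(default_factory=list)
    sim_slots: int = 1
    battery_mah: int = 1000

# A hypothetical low-cost, emerging-market handset:
basic = PhoneSpec(
    target_segment="first-time user",
    regions=["India", "Africa"],
    radio=["GSM", "GPRS"],
    target_price_usd=25.0,
    hardware=["lantern", "FM radio"],
    display="non-touch",
    chipset="entry-level baseband",
    os="proprietary RTOS",
    languages=["Hindi", "English"],
    sim_slots=2,
    battery_mah=1800,
)
```
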
I can speak only for myself, but I have this observation: till now, I have not found a phone model that meets all of my requirements. Some are good in some areas, whereas others do better in other areas. The primary reason for this is human greed. In the limiting case, we would like a gadget that offers everything at no cost! Technology has advanced so much that it is almost possible to meet this asymptotic greed. Some want the phone to guide them through an unknown city with digital maps and GPS, some want it to be a music box, while some want it to be a gaming device. That is probably also why many users switch or upgrade their phones continuously, chasing that elusive phone that will be their dream gadget!

Sunday, April 4, 2010

Renaissance of computer vision

Before I get into the renaissance, I should explain what computer vision is all about and why either the field itself or its renaissance interests me. Here is a simple reason: computer vision is close to me because it was the topic of my doctoral thesis. I did my PhD in computer vision and image processing through the best part of the 1990s. But what is computer vision? Well, very loosely speaking, it is the field of science concerned with getting computers to do everything that our eyes do. There is an element of all of it: seeing, observing, analysing, relating, all for improving overall understanding. They say "a picture is worth a thousand words", but really only for humans. When the maxim is applied to computers trying to do what humans do, even today we may often be better off with the words ;-)

Humans are always fascinated by the exploration of the unknown, and almost always there are certain innovations that enable a plethora of research on a particular topic around that time epoch.

Let's look at the timeline. Getting computers to do what human eyes do could not, for a start, have been possible without computers. So this field could not have existed or flourished before the 1970s, when computers did not exist in abundance. The bursting of personal computers onto the scene around the 1980s paved the way for computer vision. Also, in the 1980s, a certain David Marr at MIT applied cognitive theory and neurological study to understand how computers could be used to mimic the human visual system. David Marr and his contemporaries were primarily from established fields of science, exploring whether computers could be applied to understand their fields better.

The 1990s were, then, that romantic time for computer vision research, a time when a fundamental theory for understanding the human visual system was in place and was offering enormous challenges and opportunities for refining those theories and exploring new ones. In that way, I was lucky to have belonged to the right time! We are talking of a span of only around 10-20 years that saw rapid strides in the field of computer vision. By the time I joined my PhD program it was a "happening" field, but by the time I passed out, although it still was "happening", a lot of research was already in place and practitioners had started talking of 'saturation' of the field. There was very little left to innovate from an algorithmic point of view, or so it was believed then.

Whenever we talk of a particular discipline, it is important to talk of the related disciplines that either enhance or hinder its progress. In the case of computer vision, hardware was one such limitation then. Real-time replication of a human eye required both hardware capable of seeing as fast as human eyes do and a computer that could run software capable of thinking as fast as human brains do. Progress is generally only as fast as the weakest link, and computer vision took a (relatively speaking) back seat at the frontier of science.

While inter-disciplinary dependencies define the growth of a field, I would also like to draw your attention to my earlier comment about new innovations preceding a huge spurt in research in related fields. There are two major revolutions that we should look at in this context.

First, let us not forget that 1991 was the year of the Internet and 1994 was the year of the Web. These two fascinations, which had escaped even the science fiction writers up to the 1970s, impacted human society worldwide in a big way. Many new innovations happened around the web, and computer vision benefited from this, just as all other fields did.

Second, around 2000, telecommunication got a big boost through mobile phones. Mobiles reached many consumers who had previously been deprived of telecommunications, and that enabled research around mobile phones. Mobile phone features are mostly driven by consumer interest, and music and video remained the top two (beyond it first being a phone).

Semiconductor research boomed around the same time and enabled the manufacture of mobile phones that could offer extremely sleek, high-resolution CCD image capture devices on something as portable as a mobile phone, something that 10 years earlier was difficult to manufacture in volume, irrespective of size! Today it is estimated that mobile phones constitute almost 80% of the total CCD image sensor market, and almost 50% of image sensor revenue in the next 5 years is likely to be enabled by mobile phones!

Now let me come to the renaissance. Computer vision is back, and back to benefit a large part of human civilisation, through devices that carry in abundance both the hardware and the software capability I talked of before, all on a small device. What is then required is only a market that comes up with requirements to showcase these capabilities. Gaming is a huge market in telecommunications today, and whether it is motion-based sensors such as accelerometers or 3-D games, these are fed by 30-40 years of computer vision research. Whether it is open-source Android-based phones, or closed-source Apple iPhones, or even the Google App Engine in the cloud, all are enabling a huge interest in machine or computer vision.

The field considered saturated 10 years ago is back in demand, maybe not for its research potential, but for its potential to put that research to use for humans. Either way, computer vision as a field stands to benefit and be richer.

I was part of the golden years of technology evolution through the 1990s and continue to be so in the early part of the 21st century. I benefited from a field that evolved at that time around computers, semiconductors, the Internet and the global sharing of knowledge. I have benefited from having worked on mobile phones and wireless communication through their evolution. Hopefully, I can now benefit the community at large through this renaissance!