Sunday, April 24, 2011

Not so foolish anymore..

Chaucer’s celebrated ‘Canterbury Tales’, published in 1392, makes reference to April 1st and its association with April Fool or All Fools’ Day. This is that month of the year, and the tradition shows no respite when friends and relatives pull tricks, play jokes and make fools of us on that day. When these tricks reach the level where one trick makes a fool of many, it becomes even more interesting, and one of the leaders in this field has been Google. Did you hear about Gmail Motion? If not, here is a brief description below. Google published this article on their blog at 12:01 AM on 1st April, and it was truly their April Fools' joke on their network of users. Google wrote thus:

“In 1874 the QWERTY keyboard was invented. In 1963, the world was introduced to the mouse. Some 50 years later, we’ve seen the advent of microprocessors, high resolution webcams, and spatial tracking technology. But all the while we’ve continued to use outdated technology to interact with devices. Why?

This is a question that we’ve been thinking about a lot at Google, and we’re excited to introduce our first attempts at next generation human computer interaction: Gmail Motion. Gmail Motion allows you to control Gmail — composing and replying to messages — using your body.

To use Gmail Motion, you’ll need a computer with a built-in webcam. Once you enable Gmail Motion from the Settings page, Gmail will enable your webcam when you sign in and automatically recognize any one of the detected movements via a spatial tracking algorithm. We designed the movements to be easy and intuitive to perform and consulted with top experts in kinestetics and body movement in devising them.

We’ve been testing Gmail Motion with Googlers over the last few months and have been really excited about the feedback we’ve been hearing. We’ve also done some internal tests to measure productivity improvements and found an average 14% increase in email composition speed and 12% reduction in average time in inbox. With Gmail Motion, Googlers were able to get more done and get in and out of their inboxes more quickly.

To use Gmail Motion, you’ll need the latest version of Google Chrome or Firefox 3.5+ and a built-in webcam. If it’s not already enabled on your account, sit tight — we’ll be making it available to everyone over the next day or so.”

You might ask: what is different this year? Google and other technology companies play such pranks every 1st of April, so what relevance does this have to the title ‘not so foolish’? They had, after all, fooled the world through their in-depth video showing that swinging a fist backhand in the air would allow you to reply to a message, swinging two fists would do a reply-all, and licking your hand and tapping the knee would send the email.

What is not so foolish? You mean, can this not be done? Well, here is some development. Inspired by the Google blog, hackers at the University of Southern California Institute for Creative Technologies wanted to make it a reality! Towards this, a group of developers took a Microsoft Kinect sensor and some software they had built for previous projects, and tied them together to create a fully working prototype of Gmail Motion! This was their response to the Google blog:

“This morning, Google introduced Gmail Motion, allowing users to control Gmail using gestures and body movement. However, for whatever reason, their application doesn’t appear to work. So, we demonstrate our solution — the Software Library Optimizing Obligatory Waving (SLOOW) — and show how it can be used with a Microsoft Kinect sensor to control Gmail using the gestures described by Google.”
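
Their SLOOW library is not something I have seen published, but the basic idea of gesture-controlled email is easy to sketch. Below is a purely illustrative Python outline (my own, not their code); the gesture names follow Google's video, and detect_gesture is a hypothetical stand-in for whatever a Kinect-style skeleton-tracking pipeline would report.

    # Illustrative sketch only: maps recognized body gestures to email actions.
    # detect_gesture() is a hypothetical placeholder for a depth-sensor gesture
    # recognizer; a real prototype would get these events from a Kinect SDK.
    import random

    GESTURE_ACTIONS = {
        "fist_backhand": "reply",        # swing a fist backhand in the air
        "two_fists": "reply_all",        # swing two fists
        "lick_hand_tap_knee": "send",    # lick your hand and tap the knee
    }

    def detect_gesture():
        # Placeholder: pretend the sensor reported one of the known gestures.
        return random.choice(list(GESTURE_ACTIONS))

    def handle_email(action):
        # In a real prototype this would drive the mail client; here we just log it.
        print(f"Performing email action: {action}")

    if __name__ == "__main__":
        for _ in range(3):
            gesture = detect_gesture()
            handle_email(GESTURE_ACTIONS[gesture])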

While this whole episode was funny (or foolish), what it brings out is how far sensors and image processing have advanced: what we think of as fantasy can become real in no time. So the bar for being creative has been raised significantly if an idea is to remain a fantasy for a while; otherwise, folks, watch out, technology will catch up in no time!

Sunday, April 10, 2011

Days of "Altruism"

Last week India was upbeat with Anna Hazare’s social cause. It can be classically defined as a truly altruistic endeavour. But was it altruism? What is altruism? In the pure sense, it is selfless concern for the welfare of others. It is a traditional virtue in many cultures. There is no expectation of reward. Is pure altruism possible, though? Social evolution, the discipline concerned with social behaviours, i.e. behaviours that have fitness consequences for individuals other than the actor, does classify altruism as one of the accepted social behaviours. Social behaviours were categorized by W. D. Hamilton in the 1960s as follows:

  1. Mutually beneficial - a behavior that increases the direct fitness of both the actor and the recipient
  2. Selfish - a behavior that increases the direct fitness of the actor, but the recipient suffers a loss
  3. Altruistic - a behavior that increases the direct fitness of the recipient, but the actor suffers a loss
  4. Spiteful - a behavior that decreases the direct fitness of both the actor and the recipient

Hamilton proposed the above classification saying that Darwin’s natural selection favoured mutually beneficial or selfish behaviours, while kin selection could explain altruism and spite. The closest we come to understanding altruism scientifically is by understanding biological altruism. In evolutionary biology, an organism is said to behave altruistically when its behaviour benefits other organisms, at a cost to itself. The costs and benefits are measured in terms of reproductive fitness, or expected number of offspring. So by behaving altruistically, an organism reduces the number of offspring it is likely to produce itself, but boosts the number that other organisms are likely to produce. This biological notion of altruism is not identical to the everyday concept. In everyday parlance, an action would only be called ‘altruistic’ if it was done with the conscious intention of helping another. But in the biological sense there is no such requirement. Indeed, some of the most interesting examples of biological altruism are found among creatures that are (presumably) not capable of conscious thought at all, e.g. insects. For the biologist, it is the consequences of an action for reproductive fitness that determine whether the action counts as altruistic, not the intentions, if any, with which the action is performed.

For decades, selflessness - as exhibited in eusocial (true social) insect colonies where workers sacrifice themselves for the greater good – has been explained in terms of genetic relatedness. Called kin selection, it was a neat solution to the conundrum of selflessness. The dominant evolutionary theory and its influence on human altruism are now under attack.
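
The kin-selection argument is usually compressed into Hamilton's rule (my addition here; standard textbook material, not quoted in the pieces above): an altruistic behaviour can be favoured by natural selection whenever

    r \, b > c

where r is the genetic relatedness between actor and recipient, b is the reproductive benefit to the recipient, and c is the reproductive cost to the actor. For full siblings r = 0.5, so the benefit must be more than twice the cost for the altruistic trait to spread.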

On the face of it, self-serving humans are nothing like paper wasps, which along with their relatives, ants, bees and termites, are defined as eusocial, creatures that display the highest levels of social organization. Famed Harvard biologist and author Edward O. Wilson, who gave eusociality its first clear meaning, refers to such behaviour as “civilization by instinct”.

The evolutionary theories, in particular kin selection, go a long way towards reconciling the existence of altruism in nature with Darwinian principles. However, some people have felt these theories in a way devalue altruism, and that the behaviours they explain are not ‘really’ altruistic. The grounds for this view are easy to see. Ordinarily we think of altruistic actions as disinterested, done with the interests of the recipient, rather than our own interests, in mind. But kin selection theory explains altruistic behaviour as a clever strategy devised by selfish genes as a way of increasing their representation in the gene-pool, at the expense of other genes. Surely this means that the behaviours in question are only ‘apparently’ altruistic, for they are ultimately the result of genic self-interest? Reciprocal altruism theory also seems to ‘take the altruism out of altruism’. Behaving nicely to someone in order to procure return benefits from them in the future seems in a way the antithesis of ‘real’ altruism — it is just delayed self-interest.

To some extent, the idea that kin-directed altruism is not ‘real’ altruism has been fostered by the use of the ‘selfish gene’ terminology of Dawkins (1976). As we have seen, the gene's-eye perspective is heuristically useful for understanding the evolution of altruistic behaviours, especially those that evolve by kin selection. But talking about ‘selfish’ genes trying to increase their representation in the gene-pool is of course just a metaphor (as Dawkins fully admits); there is no literal sense in which genes ‘try’ to do anything. Any evolutionary explanation of how a phenotypic trait evolves must ultimately show that the trait leads to an increase in frequency of the genes that code for it (presuming the trait is transmitted genetically.) Therefore, a ‘selfish gene’ story can by definition be told about any trait, including a behavioural trait, that evolves by Darwinian natural selection. To say that kin selection interprets altruistic behaviour as a strategy designed by ‘selfish’ genes to aid their propagation is not wrong; but it is just another way of saying that a Darwinian explanation for the evolution of altruism has been found. As Sober and Wilson (1998) note, if one insists on saying that behaviours which evolve by kin selection / donor-recipient correlation are ‘really selfish’, one ends up reserving the word ‘altruistic’ for behaviours which cannot evolve by natural selection at all.

“For the past four decades kin selection theory has been the major theoretical attempt to explain the evolution of eusociality,” write Wilson and Harvard theoretical biologists Martin Nowak and Corina Tarnita in an Aug. 25, 2010 Nature paper. “Here we show the limitations of its approach.”

According to the standard metric of reproductive fitness, insects that altruistically contribute to their community’s welfare but don’t themselves reproduce score a zero. They shouldn’t exist, except as aberrations — but they’re common, and their colonies are fabulously successful. Just 2 percent of insects are eusocial, but they account for two-thirds of all insect biomass.

Kin selection made sense of this by targeting evolution at shared genes, and portraying individuals and groups as mere vessels for those genes. Before long, kin selection was a cornerstone of evolutionary biology. It was invoked to help explain social and cooperative behavior across the animal kingdom, even in humans.

But according to Wilson, Nowak and Tarnita, the great limitation of kin selection is that it simply doesn’t fit the data. Wilson et al. claim that looking at a worker ant and asking why it is altruistic is the wrong level of analysis. The important unit is the colony.

Their new theory of eusociality may be useful in understanding, for example, how single-celled organisms gave rise to multi-celled organisms. Human selflessness and cooperation involve the interaction of culture and sentience, not just genes and genetics. As claimed in the paper, ‘there are certain things we can learn from ants. It’s easier to think about ants, but people are complicated’.

I am not proposing any scientific evidence for human altruism, if indeed it exists. Most definitely not for last week’s event, which drew me to read more about it. Was it pure altruism, or apparent altruism? Or kin selection? Or plain selfishness?

Sunday, March 27, 2011

Crowd behaviour

Last week, one of my friends pointed out to me that many of my recent blog articles have been only on the energy efficiency topic and are not as unpredictable and/or interesting as the earlier ones. So this time I am making a conscious attempt at not writing about energy. There is a lot of construction activity in Bangalore and in the area where I live. The construction is not just restricted to residences; the increased number of residences puts pressure on the municipal corporation to provide more water and handle sewage. The road outside where we live has been dug up to lay water and all kinds of other pipes. That means the only road that gets us out of the layout is closed and, as is commonplace, a new temporary road has been found. That road cannot handle the pressure of the traffic. I was thinking about this scenario and it gave me the idea for today’s topic. Is there a deterministic (non-random) way of assessing crowd traffic that leads to a better understanding of crowd behaviour and improved design of the built infrastructure? ‘Crowd’ is being used in a generic sense: although it usually means a group of people, here it means whatever you would find anywhere in India, that is, a collection of groups of people, herds of cows and goats, a group of auto rickshaws, a group of water tankers in summer and, in general, a group of vehicles that move in all possible directions even though the road may be straight! If you leave in time, what, in a scientific way, is the probability of reaching your destination in a fixed time? Does crowd monitoring help? Let us explore.

Although crowds are made up of independent individuals or entities (remember not to leave aside the cows and buffaloes, and even the vehicles that are driven by individuals), each with their own objectives, intelligence and behaviour patterns, the behaviour of crowds is widely understood to have collective characteristics which can be described in general terms. Since Roman times, the notion of mob rule or mob mentality has implied that a crowd is something other than the sum of its individual parts, and that it possesses behaviour patterns which differ from the behaviour expected individually from its participants. It would seem that if there is any scientific basis for the study of crowd behaviour, it must belong to the realm of the social sciences and psychology, and that the mere mortals of the physical sciences and engineering have limited or no business getting involved with such studies. But I came across an article a few years ago that was interesting. It said that an understanding of field theory and flow dynamics is good enough to get started on a solution to crowd monitoring, and may offer technology-based solutions to control crowd behaviour using developments in image processing and image understanding.

The article I mentioned above was one of the IEE publications; I do not recall which one. But the thought process left an impression. It said our knowledge of the study of gases can provide insights into understanding crowd behaviour. After all, a gas is made up of individual molecules, moving about more-or-less independently of each other, with differing velocities and directions. The ideal-gas theory provides a reasonably accurate basis for predicting the properties and behaviour of gases over a wide range of conditions, without considering the behaviour of individual molecules. This was a major breakthrough, and something impossible to conceive if the notion had prevailed that the equations of motion for each individual molecule had to be solved in order to predict the overall behaviour of a gas in any particular direction. What it also echoed was the observation about mob rule, that the overall behaviour is something other than the sum total of its parts.
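
For reference (my addition, not from the IEE article), the ideal-gas law is the classic example of such a description: it relates only bulk quantities,

    P V = n R T

where P is pressure, V is volume, n the amount of gas in moles, R the universal gas constant and T the absolute temperature, with no reference to any individual molecule.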

Now where does this similarity end? Surely the molecules of a gas are different from cows and buffaloes and individuals and vehicles. The latter are far more complex and have a mind of their own; the theory of gases does not attribute intelligence to molecules. Yet a possessed crowd that moves in a particular direction in mindless pursuit is akin to charged particles moving under the influence of an electric field. When you have a temporary road that is bi-directional, you not only have a crowd moving in one direction but in both, capable of inducing collisions, like particles of opposite charges.

In recent years I have come across many image-processing techniques that use these well-established ideas for monitoring and collecting data on crowd behaviour. A key factor in the solutions is the use of techniques where inferences can be drawn by rising above individual pixels or objects, a notion akin to rising above the molecules and individuals that make up the spaces.
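
As a purely illustrative example of 'rising above individual pixels' (my own sketch, not from any particular publication, assuming OpenCV is installed and two consecutive webcam frames are saved as frame1.png and frame2.png), dense optical flow can be averaged into a single aggregate motion vector for the whole scene:

    # Sketch: estimate aggregate crowd motion from two frames using dense optical flow.
    # No individual is tracked; the whole field of motion is summarized, much like
    # describing a gas without following any one molecule.
    import cv2
    import numpy as np

    prev_frame = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # hypothetical filenames
    curr_frame = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
    if prev_frame is None or curr_frame is None:
        raise SystemExit("Provide two consecutive grayscale frames to analyse.")

    # Farneback dense optical flow: one (dx, dy) displacement per pixel.
    flow = cv2.calcOpticalFlowFarneback(prev_frame, curr_frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    mean_dx = float(flow[..., 0].mean())
    mean_dy = float(flow[..., 1].mean())
    speed = np.hypot(mean_dx, mean_dy)
    direction_deg = np.degrees(np.arctan2(mean_dy, mean_dx))

    print(f"Aggregate crowd motion: {speed:.2f} pixels/frame at {direction_deg:.1f} degrees")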

Whether all of this can lead me to predict a fixed time of arrival at my destination is anybody’s guess. But it does provide insights into crowd behaviour, and probably an interesting application of science that can make your journey to the destination enjoyable.

Sunday, March 6, 2011

The Green Rebound

What is a rebound effect? In the traditional sense, it is used in medicine to describe the tendency of a medication, when discontinued, to cause the symptoms being treated to return in a more pronounced form than before. So what has ‘green’ got to do with the rebound effect? Well, a couple of weeks ago there was an article in Nature News that rekindled interest in this topic, which, I must confess, has been a point of discussion for many days now. The green rebound, as I call it, is the rebound effect as applied to energy conservation. I have been emphasizing through many earlier articles the need to be energy-prudent, to be energy-conscious and hence to do things which conserve energy. But just what happens when you save?

The green rebound, which is the application of the rebound effect to energy conservation, is a term that describes the effect that the lower cost of energy services, due to increased energy efficiency, has on consumer behaviour. It generally indicates either an increase in the number of hours of energy use or an increase in the quality of energy use, thereby creating a situation where you end up using more than you save, and hence a kind of paradox.

For instance, if an 18W compact fluorescent bulb replaces a 75W incandescent bulb, the energy saving should be 76%. However, it seldom is. Consumers, realizing that the lighting now costs less per hour to run, are often less concerned with switching it off; in fact, they may intentionally leave it on all night. Thus, they ‘take back’ some of the energy savings in the form of higher levels of energy service (more hours of light). This is particularly the case where the past level of energy services, such as heating or cooling, was considered inadequate.
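
A quick back-of-the-envelope illustration of this 'take back' (the wattages are from the example above; the usage hours are my own, purely hypothetical numbers):

    # Rebound illustration: a 75 W incandescent bulb replaced by an 18 W CFL.
    # Usage hours are hypothetical: 4 h/day before, 6 h/day after the switch
    # (the cheaper-to-run light gets left on longer).
    old_watts, new_watts = 75, 18
    old_hours, new_hours = 4.0, 6.0

    baseline = old_watts * old_hours                 # 300 Wh/day
    expected = baseline - new_watts * old_hours      # expected saving: 228 Wh/day (76%)
    actual = baseline - new_watts * new_hours        # actual saving: 192 Wh/day

    rebound = (expected - actual) / expected * 100   # share of the saving 'taken back'
    print(f"Expected saving: {expected:.0f} Wh/day, actual: {actual:.0f} Wh/day, "
          f"rebound: {rebound:.0f}%")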

What is not debated is whether the effect exists; you may be surprised to know that it does. What is being debated is the extent of this rebound. Like all other economic models, this one too tends to overstate reality. Three outcomes are possible:

1. The actual resource savings are higher than expected – the rebound effect is negative. This is unusual, and can only occur in certain specific situations (e.g. if the government mandates the use of more resource efficient technologies that are also more costly to use).

2. The actual savings are less than expected savings – the rebound effect is between 0% and 100%. This is sometimes known as 'take-back', and is the most common result of empirical studies on individual markets.

3. The actual resource savings are negative – the rebound effect is higher than 100%. This situation is commonly known as the Jevons paradox, and is sometimes referred to as 'back-fire'.

The rebound effect is a phenomenon based on economic theory and long-term historical studies, but as with all economic observations its magnitude is a matter of considerable dispute. Its significance for energy policy has increased over the last two decades, with the claim by energy analysts in the 1970s, and later by environmentalists in the late 1980s, that increasing energy efficiency would lead to reduced national energy consumption, and hence lower greenhouse gas emissions. Whether this claim is feasible depends crucially on the extent of the rebound effect: if it is small (less than 100%) then energy efficiency improvements will lead to lower energy consumption; if it is large (greater than 100%) then energy consumption will be higher. Note the use of the relative terms ‘lower’ and ‘higher’: what exactly they are relative to has often been left unstated and has been a cause of much confusion in energy policy debates. Sometimes it refers to current energy consumption, at other times to a reduction in the future rate of growth in energy consumption.

The claim that increasing energy efficiency would lead to reduced national energy consumption was first challenged by Len Brookes in 1979, in his review of Leach's pioneering work, A Low Energy Strategy for the UK, when he criticized Leach's approach to estimating national energy savings because of its failure to consider macroeconomic factors. This was followed in the early 1980s by similar criticism by Daniel Khazzoom of the work of Amory Lovins. The criticism of Brookes and Khazzoom was given the name of the Khazzoom-Brookes (KB) postulate by the economist Harry Saunders in 1992. The KB postulate may be described as: those energy efficiency improvements that, on the broadest considerations, are economically justified at the microlevel lead to higher levels of energy consumption at the macrolevel than in the absence of such improvements.

This work provided a theoretical grounding for empirical studies and played an important role in framing the problem of the rebound effect. It also reinforced an emerging ideological divide among energy economists on the extent of the yet-to-be-named effect. The two tightly held positions are:

1. Technological improvements in energy efficiency enable economic growth that was otherwise impossible without the improvement; as such, energy efficiency improvements will usually back-fire in the long term.

2. Technological improvements in energy efficiency may result in a small take-back. However, even in the long term, energy efficiency improvements usually result in large overall energy savings.

Even though many studies have been undertaken in this area, neither position has yet claimed a consensus view in the academic literature. Recent studies have demonstrated that direct rebound effects are significant (about 30% for energy), but that there is not enough information about indirect effects to know whether or how often back-fire occurs. Economists tend to the first position, but most governments, businesses, and environmental groups adhere to the second.

The Nature News article mentions that a report from the Breakthrough Institute, an advocacy group based in Oakland, California, that is pushing for huge government investment in clean-energy technologies, suggests that various types of rebound effect could negate much, if not all, of the emissions reductions gained through efficiency. Energy efficiency should still be pursued vigorously as a way to bolster the economy and free up money for investments in low-carbon technologies, the institute says, but rosy assumptions about emissions reductions should be treated with caution.

Should there be alarm over such reports that you may come across? Well, no. Every coin has two sides, and if anyone assumes that this report makes a case against energy efficiency, that is far-fetched. It only means that as we start conserving, we need to be more careful in terms of usage, and hence I believe that monitoring your energy resources not just once in a while but on a continuous basis will ensure the rebound does not take place. So monitoring is like that medicine which, once withdrawn, can have a rebound effect.

Sunday, February 20, 2011

Internet of Things

I often wonder just where the management schools and management gurus were before the watershed year of 1991. I will most likely research that topic some day and write about it too. But for now let us see what was special about 1991. I call 1991 a watershed year because it was called the ‘Year of the Internet’: the year when the TCP/IP protocol suite made its way out of ARPANET and MIT/UCLA and started reaching out to the masses at large. It is my conjecture that the great management thought processes, and the schools of thought that continuously generate and/or evolve alternative revenue streams (of which we have an excess these days), also germinated in that year.

In a way, I agree with Malcolm Gladwell’s thought process in Outliers, a classy book published a couple of years ago, in which he argues that the main secret of success is the advantage (or just luck) of being born at the right time. He says that many of today’s successful men were born between 1953 and 1956 and hence were of the right age by 1975 to take advantage of the personal computer revolution. He cites many examples, including greats such as Paul Allen (1953), Bill Joy (1954), Scott McNealy (1954), Steve Jobs (1955), Eric Schmidt (1955), Bill Gates (1955), and Steve Ballmer (1956). Be that as it may, I believe in this theory because I was mid-way through my life around 1991 and have seen both worlds, the Internet-free and the Internet-infested, and have honestly enjoyed both. But the fact is that if I had not been born at the right time to experiment with the Internet at university, I would have missed out on a great learning concept.

Coming back to the “Year of the Internet” and the birth of management catch-phrases (introduced by you-know-who), the juggernaut has rolled along. 1994, like 1991, changed the face of the world, being tagged the ‘Year of the Web’ when the then clumsy-looking HTTP protocol made its appearance on the world stage for the first time outside the CERN premises. Since 1994, each year has been tagged the year of something or the other. The trivialization, howsoever metaphorical, has led us to 2011, where the year is actually tagged the ‘Year of the Internet of Things’. We have passed through eras of advertising, search, mobile commerce and gaming, where each has been reduced to a commodity, thus waiting for a new innovation each time. Just what the ‘Internet of Things’ is and why it is interesting is what I will explain. Like a true neutral observer, I will detail in the next couple of paragraphs the benefits it will bring and likewise the challenges it will bring in. I will not forecast the future, since each and every forecast made since 1991 has faded away in comparison to reality.

Technically speaking, the ‘Internet of Things’ describes a world in which trillions of devices interconnect and communicate. It will integrate a ubiquitous communication layer, pervasive computing including cloud computing, and ambient intelligence (wondering what that is?). The Internet of Things is a vision where ‘things’, that is, everyday objects such as home appliances, are readable, recognizable, addressable, locatable and controllable via the Internet.

If the Internet revolution connected billions of people world-wide through computers and mobile phones, the Internet of Things would connect the trillions of devices those billions of people use. Imagine if all the objects in the world had all the information that they needed to function optimally. Buildings would adjust themselves according to the temperature. Ovens would cook things for exactly the right time. The handles of umbrellas would glow when it was about to rain. We long ago inserted "intelligence" into objects in the form of thermostats and the like; the Internet of Things will extend this principle exponentially, giving us unprecedented control over the objects that surround us.

Energy monitoring, infrastructure security and transport safety mechanisms are just some of the envisioned applications that will get a tremendous boost from the Internet of Things. It is being enabled by a technology revolution that includes the miniaturization of devices, the emergence of IPv6 to resolve finite address-space issues, mobile phones as data-capturing devices, and the availability of low-power, energy-neutral devices.
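
To put the address-space point in perspective (standard figures, not from the original post), IPv4's 32-bit addresses versus IPv6's 128-bit addresses give

    2^{32} \approx 4.3 \times 10^{9} \qquad \text{versus} \qquad 2^{128} \approx 3.4 \times 10^{38}

so IPv4 cannot even give one address to every person on the planet, let alone trillions of devices, while IPv6 comfortably can.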

The vision is great but the challenges are plentiful. It is just a vision and its roadmap has many hurdles. Its acceptance would depend primarily upon the progress of machine-to-machine interfaces and protocols of electronic communication, sensors, RFID, actuators etc.

As I see it today, the challenges extend to robustness, responsiveness and privacy, among other things, which have no clear-cut answers today. Why should you know how long my oven takes to bake a cake? But then, what is the problem if your oven can learn from mine, if I baked one a few minutes ago, and use that learning to do a perfect bake for you?

As an article on the topic in ‘The Economist’ summarized a month or two ago, it may just turn out to be the ‘Year of Internet of Hype’.

Saturday, February 5, 2011

Energy Intensity - why & wherefore

In these days of heightened awareness about global warming and factors that impact the planet adversely, one of the key terms you would have seen used in the news is ‘energy intensity’. What exactly is energy intensity? Broadly, it is a measure of the energy efficiency of a nation’s economy and is generally defined as the number of units of energy per unit of GDP. So, high energy intensity indicates a higher price of converting energy into GDP and vice versa.

Many factors influence an economy’s overall energy efficiency. It reflects the general standard of living and weather conditions. It is common for particularly cold or hot climates to require more energy for heating and cooling respectively. Generally, a nation that is highly economically productive, with mild and temperate weather, will have lower energy intensity than a nation that is less productive with extreme weather conditions.

Energy efficiency improves when a given level of service is provided with reduced amounts of energy input, or when services are enhanced for a given amount of energy input. Energy intensity, in turn, measures how much energy an economy uses to produce its products. Energy efficiency refers to the activity or product that can be produced with a given amount of energy; for example, the number of tons of steel that can be melted with a megawatt hour of electricity. At the level of a specific technology, the difference between efficiency and energy intensity is insignificant: one is simply the inverse of the other. In this example, energy intensity is the number of megawatt hours used to melt one ton of steel.
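
As a worked illustration of that inverse relationship (the half-megawatt-hour figure below is purely hypothetical):

    \text{efficiency} = \frac{1 \text{ ton of steel}}{0.5 \text{ MWh}} = 2 \text{ tons/MWh}, \qquad \text{intensity} = \frac{0.5 \text{ MWh}}{1 \text{ ton}} = 0.5 \text{ MWh/ton} = \frac{1}{\text{efficiency}}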

At the level of the aggregate economy (or even at the level of an end-use sector), energy efficiency is not a meaningful concept because of the heterogeneous nature of the output. The production of a huge number of goods, the mixing of the transport of freight and people, and the variety of housing and climates make an aggregate energy intensity number based on Gross Domestic Product (GDP) one that disguises rather than illuminates. A simple intensity measure can be calculated (as Energy/GDP), but this number has little information content without the underlying sector detail.

The distinction between energy intensity and energy efficiency is important when multiple technologies or multiple products underlie what is being compared. While it would not be sensible to compare the energy efficiency of steel production with the energy efficiency of ethanol production, it is possible to examine the energy intensity of all manufacturing.

An inverse way of looking at the issue would be an 'economic energy efficiency', or the economic rate of return on the consumption of energy: how many units of GDP are produced by the consumption of a unit of energy.

In India today, the energy requirement is in the range of 450-500 million metric tonnes of oil equivalent (1 metric tonne of oil equivalent = 11,630 kWh). I cannot guarantee the correctness of these numbers, which are probably a bit off, but the overall message is not affected by that. This requirement is expected to roughly triple in the next 20 years, to an estimated 1,500 million metric tonnes of oil equivalent. If the economy also has to grow at 8+% GDP, then it is imperative that energy intensity should reduce. Today, the energy intensity in India is around 0.02 kg of oil equivalent per rupee of GDP. In the absence of a focus on reducing energy intensity, in the ‘business-as-usual’ scenario, the intensity is only expected to fall to around 0.018 (from 0.02) over the next 20 years. But it can be reduced significantly, to around 0.010 in the same period, if India adopts a hybrid high-growth, high-efficiency path.
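
Translating those figures into percentages (my arithmetic, using the numbers quoted above):

    # Energy-intensity arithmetic using the figures quoted in the post.
    current = 0.020        # kg of oil equivalent per rupee of GDP today
    business_as_usual = 0.018
    high_efficiency = 0.010

    bau_cut = (current - business_as_usual) / current * 100      # ~10% over 20 years
    ambitious_cut = (current - high_efficiency) / current * 100  # ~50% over 20 years

    print(f"Business-as-usual reduction: {bau_cut:.0f}%")
    print(f"High-growth, high-efficiency reduction: {ambitious_cut:.0f}%")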

India has pledged to reduce energy intensity by 20% in the next 10 years. To achieve this target, India has announced the Perform, Achieve and Trade (PAT) scheme under the National Mission for Enhanced Energy Efficiency (NMEEE) programme. Sectors that consume the most energy will be assigned efficiency targets in April. The aim is to save 10 million tonnes of oil equivalent (mtoe) by 2014. NMEEE is a part of the National Action Plan on Climate Change (NAPCC) launched in June 2008. Just how effective the scheme will be remains to be seen, as it is expected that the targets are too soft to achieve the ambitious reduction in energy intensity that India has promised at international fora.

Just to leave you with a thought. It is interesting to look at where and how countries get placed and bracketed in a GDP versus energy efficiency graph. India is a fairly low-productivity country at around 3,000 USD GDP per capita (yes, despite the recent growth) compared to the US, Hong Kong, Canada and Australia, whose GDP per capita is in excess of 30,000 USD. Its energy efficiency, however, is moderate at around 250 USD of GDP per million Btu, compared to highly efficient economies like the Philippines and Bangladesh (at around 500 USD). The US and other highly productive countries are known to be quite energy efficient. So India is neither yet fully developed (in terms of GDP per capita) nor energy efficient, and has got bracketed with the likes of Viet Nam and Pakistan. Surely, India’s challenges are varied considering its size, population, geography and climate. The challenge remains, and it is something to watch out for: how India, once committed, will meet its energy intensity reduction targets while maintaining growth.

Saturday, January 22, 2011

why smart grid?

The title makes a presumption that the reader is already aware of what a smart grid is. But I will take a couple of paragraphs to explain just what it is, why it was conceived and how it will help the world at large, if at all.

The term 'smart grid' was introduced in 2005 and, like the elephant story, it means different things to different people. That has also been the main source of confusion around its acceptance. I met a senior expert in the field last week, and he said they have been doing automation-related work for decades now; no one noticed it till the catch-phrase 'smart grid' was introduced. But it is not that grids were non-existent, or even dumb, before that time. So we will see what is so smart about it and what makes it different from earlier grids. But let us first understand what a grid really is. Let us trace the journey of power as it is generated in a power plant till it reaches you for use. Power plants can be of many types: thermal (such as coal-fired), hydro or nuclear. Be that as it may, the generated power from the power plant is fed through power transformers. From the power transformers, it is received by the transmission substation. The journey of the transmission grid terminates here. From here, the power is fed into the distribution substation. From the distribution substation, the power is transmitted to distribution transformers and on to residences or commercial or industrial establishments.

Transmission lines mostly use three phase alternating current (AC), although single phase AC is sometimes used in railway electrification systems. High-voltage direct current (HVDC) technology is used only for very long distances (typically greater than 400 miles, or 600 km); submarine power cables (typically longer than 30 miles, or 50 km); or for connecting two AC networks that are not synchronized.

Electricity is transmitted at high voltages to reduce the energy lost in long-distance transmission. Power is usually transmitted through overhead power lines. Underground power transmission has a significantly higher cost and greater operational limitations, but is sometimes used in urban areas or sensitive locations. Power transformers are step-up transformers, raising the voltage from typically 12 kV to as much as 380 kV. Through distribution substations this is stepped down to 220 kV, 110 kV, 25 kV or 20 kV, and eventually to the 230 V that most of us use.

A key limitation in the distribution of electricity is that, with minor exceptions, electrical energy cannot be stored, and therefore it must be generated as it is needed. A sophisticated system of control is therefore required to ensure that electricity generation very closely matches demand. If supply and demand are not in balance, generation plants and transmission equipment can shut down, which, in the worst cases, can lead to a major regional blackout, something we are used to in India. To reduce the risk of such failures, electric transmission networks are interconnected into regional, national or continent-wide networks, thereby providing multiple redundant alternate routes for power to flow should (weather or equipment) failures occur. Much analysis is done by transmission companies to determine the maximum reliable capacity of each line, which is mostly less than its physical or thermal limit, to ensure spare capacity is available should there be any such failure in another part of the network.

Energy demand is expected to grow by 55% by 2030, with CO2 emissions growing faster than energy demand. The energy chain is inefficient, with about two-thirds of primary energy lost, mostly in power conversion, and between 7 and 15% of the electricity generated lost on the networks.

Four major power regions of the country, namely the North-Eastern, Eastern, Western and Northern regions, are now operating as one synchronous grid (same frequency). The Southern Regional grid is connected to this synchronous grid through HVDC links. For overall improvement and better grid management in the country, Power Grid Corporation of India has modernized all the Regional Load Dispatch Centers (RLDCs). The RLDCs correspond to the regions, namely the southern, northern, western, eastern and north-eastern RLDCs, besides the national LDC. These modernized RLDCs are contributing greatly to bringing quality and economy to the operation of the power system, besides improving data availability, visibility and transparency. With the adoption of state-of-the-art operational practices, proactive preventive maintenance, implementation of availability-based tariff, the modernization of RLDCs coupled with training and deployment of expert manpower, and a round-the-clock vigil over grid management, no major grid disturbances have been encountered in the country for the last 6½ years. Further, tripping of lines and minor grid disturbances in regional grids have come down so significantly that it can be reckoned a benchmark achievement. For overall co-ordination, the National Load Dispatch Center (NLDC) at Delhi, with a backup at Kolkata, has been successfully commissioned. Power Grid has also spearheaded the implementation of Availability Based Tariff (ABT) across the country, which has a built-in commercial mechanism to reward proper grid behaviour. This has significantly stabilized vital grid parameters, i.e. voltage and frequency, thereby improving the quality of power.

So much for what it is. Why the need to make it smart? In its present form, is it dumb? No, it is not. But maybe the answer lies in the meaning of the word smart. As per the dictionary, when used as an adjective it primarily means being 'capable of quick and prompt action'. That is the crux of the function of a smart grid. A 'quick and prompt action' requires a quick and prompt input on which to act. The smart grid enables this through two-way communication between the distribution utility and its customers. A smart grid is a form of electricity network utilizing digital technology. It delivers electricity from suppliers to consumers using two-way digital communications to control appliances at consumers' homes; this saves energy, reduces costs and increases reliability and transparency. It overlays the ordinary electrical grid with an information and net metering system that includes smart meters. Smart grids are being promoted by many governments as a way of addressing energy independence, global warming and emergency resilience issues. A smart grid is made possible by applying sensing, measurement and control devices with two-way communications to the electricity production, transmission, distribution and consumption parts of the power grid; these communicate information about grid conditions to system users, operators and automated devices, making it possible to respond dynamically to changes in grid condition. Of course, I have given a definition which stems primarily from the distribution requirements of the transmitted power. You will hear a totally different definition of smart grid from transmission experts, and it is important to be aware of the distinction.

So just why is it important to have a smart grid, and why is its existence more inevitable today than ever before? The reasons are many:

  1. Response to many supply-demand conditions: A smart grid can respond to events which occur anywhere in the power generation, distribution and demand chain. Events may occur generally in the environment, e.g., clouds blocking the sun and reducing the amount of solar power, or a very hot day requiring increased use of air conditioning. They could occur commercially in the power supply market, e.g., customers change their use of energy as prices are set to reduce energy use during high peak demand. Events might also occur locally on the distribution grid, e.g., an MV transformer fails, requiring a temporary shutdown of one distribution line. Finally, these events might occur in the home, e.g., everyone leaves for work, putting various devices into hibernation, and data ceases to flow to an IPTV. Each event motivates a change to power flow. Latency of the data flow is a major concern, with some early smart meter architectures allowing as much as a 24-hour delay in receiving the data, preventing any possible reaction by either supplying or demanding devices.
  2. Smart energy demand - Smart energy demand describes the energy-user component of the smart grid. It goes beyond, and means much more than, even energy efficiency and demand response combined. Smart energy demand is what delivers the majority of smart meter and smart grid benefits. Smart energy demand is a broad concept. It includes any energy-user actions to enhance reliability, reduce peak demand, shift usage to off-peak hours, lower total energy consumption, actively manage electric vehicle charging, actively manage other usage in response to solar, wind and other renewable resources, and buy more efficient appliances and equipment over time based on a better understanding of how energy is used by each appliance or item of equipment. All of these actions minimize adverse impacts on electricity grids and maximize consumer savings. Smart energy demand mechanisms and tactics include: smart meters, dynamic pricing, smart thermostats and smart appliances, automated control of equipment, real-time and next-day energy information feedback to electricity users, usage-by-appliance data, and scheduling and control of loads such as electric vehicle chargers, home area networks (HANs), and others.
  3. Peak load management and time-of-day pricing - To reduce demand during high-cost peak usage periods, communications and metering technologies inform smart devices in the home and business when energy demand is high, and track how much electricity is used and when it is used. To motivate consumers to cut back use and perform what is called peak curtailment or peak levelling, prices of electricity are increased during high-demand periods and decreased during low-demand periods. It is thought that consumers and businesses will tend to consume less during high-demand periods if it is possible for them and their devices to be aware of the high price premium for using electricity at peak periods. This could mean making trade-offs such as cooking dinner at 9pm instead of 5pm. When businesses and consumers see a direct economic benefit of using energy at off-peak times and of becoming more energy efficient, the theory is that they will factor the energy cost of operation into their consumer-device and building-construction decisions. (A minimal sketch of such time-of-day scheduling logic follows this list.)
  4. Real-time monitoring of grid performance will improve grid reliability and utilization, reduce blackouts, and increase financial returns on investments in the grid.
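
As referenced in point 3 above, here is a minimal sketch (my own illustration; the tariff values and hours are made up) of the kind of time-of-day decision a smart appliance or home controller might make:

    # Illustrative time-of-day pricing decision for a deferrable load (e.g. a washer).
    # Tariff values and hours are hypothetical, chosen only to show the logic.
    PEAK_HOURS = range(17, 22)        # 5 pm to 10 pm
    PEAK_PRICE = 8.0                  # price per kWh during peak (arbitrary units)
    OFFPEAK_PRICE = 3.0               # price per kWh off-peak

    def price_at(hour):
        return PEAK_PRICE if hour in PEAK_HOURS else OFFPEAK_PRICE

    def schedule_load(requested_hour, energy_kwh):
        """Run now, or defer to the cheapest upcoming hour if the request falls in peak."""
        if requested_hour not in PEAK_HOURS:
            return requested_hour, price_at(requested_hour) * energy_kwh
        best_hour = min(range(requested_hour, requested_hour + 24),
                        key=lambda h: price_at(h % 24))
        return best_hour % 24, price_at(best_hour % 24) * energy_kwh

    if __name__ == "__main__":
        hour, cost = schedule_load(requested_hour=18, energy_kwh=2.0)
        print(f"Deferred to {hour}:00, estimated cost {cost:.1f} units "
              f"(vs {price_at(18) * 2.0:.1f} units at 6 pm)")
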
Finally, it boils down to whether we have sufficient electricity to meet demand. The answer is an emphatic no, and hence the focus has shifted to optimal use of electricity. Starting from simple things like switching off lights, fans and AC when not required, to shifting to CFLs and LEDs, each one of us can contribute a lot. From the utility side, the smart grid will offer similar benefits to the end-user while providing a platform for the utility to manage demand-side requirements. The business case for the smart grid is clear, and many Indian utilities are now looking at moving into smart grid implementations, not just because it makes the grid smart, but because it makes the business smarter too.


Saturday, January 1, 2011

Time for resolve

After a hiatus of around 3 months, it is time for me to get back to the Innotomy blog articles again. It is completely inconsequential, and surely unsolicited on my part, to get into the reasons for this break. There can be many, depending upon the degree of confidence that I can attach to each, starting from something as absurd as "there was no time" to saying "I got busy" (the two statements have different connotations although a similar meaning!). Be that as it may, one can fit any theory to a set of observations; theories can be speculative, but observations never are. The fact is that I could not write, and the fact is that I intend to do a lot better than the last time I was forced into a break. And what better time to choose than the start of a new year! A title like "Time for Resolve" has a lot more forceful intent than, say, "Resolution Time", which sounds almost rhetorical and almost implicitly gives you an exit clause from the resolution.

There is another reason why this should be a time for resolve, rather than just a resolution time, and that is because there is more to 2011 than it just being another new year! It happens to be the year before 2012, the much (in)famed doomsday year. Why not be resolute now, for you may never get a chance to be so again! There is zero scientific evidence that anything at all will happen in 2012, but there are many things that could happen and easily threaten homo sapiens in that year. Why 2012? These could even happen in 2011. There could be a geomagnetic reversal, there could be an asteroid strike, or a supernova. Or just a bad bout of H1N1, or a flu pandemic of another hitherto unknown viral strain, or even something of a destructive nuclear dimension. Either way, the time window for doing good things is shrinking, and it should be seen and used as a time for definite resolve!

So the time is good and opportune, alright, but what to resolve? For a start, and maybe for an end, as far as this space is concerned, the resolve is definitely to continue to contribute to this space at least twice a month. If I can do better than that in a month, it will be a bonus, but resolution time should be about setting realistic goals that can be measured, and two a month for the next 12 months is a fairly concrete and achievable task.

What does 2011 have in store for us? Many things can happen. Here is a laundry list (actually only a sample) of what they could be, from many domains:
  1. India's 'green revolution' has other meanings too. While the one in the early 1970s was focused on transforming the agricultural industry, the one in 2011 is focused more on technologies that make the planet green. The focus on energy efficiency and India's baby steps in bringing about an energy revolution will be debated, alright, but will bear significant fruit in making India a role model. India's energy revolution is replete with stories of innovation and self-discipline. India's generation capacity cannot match its consumption capacity, and in a sense the revolution is thrust upon us.
  2. While energy is being focused on because of both compelling international and domestic pressures, what India will learn is to focus on its other hugely mismanaged natural resource, and that is water. While clean drinking water is a proverbial necessity for most Indians even today, the management of our fresh water, whether through rain-water harvesting, better management or self-discipline, will become the need of the hour.
  3. Both the energy and water revolutions will need plenty of funds, and the economic recovery and India getting back to 8-9% GDP growth are good signs. The bull market returns and generally brings lots of positive sentiment to the markets.
  4. Venture capital returns. The energy sector, smartphone apps and social networking are amongst the leading domains that will attract VCs. Hopefully in 2011, VCs will be the blue-sky optimists.
  5. The German management model returns. The Rhineland model of capitalism will start looking very appealing across the Indian landscape: lots of mid-sized companies, with huge technical expertise, low debt and skilled workforces, exporting niche products to the whole world. This model will guide the entrepreneurs in India through 2011.
  6. The Apple backlash starts. It started showing signs with 4G in 2010 itself, but Apple will soon learn, the hard way, the downside of being a monopolist and will start facing the backlash that the Microsofts and IBMs have witnessed in the past for precisely those reasons.
  7. While Apple rides a trough, Android will ride a crest. Apple has graduated to being a monopoly, but Android is still in the early infancy of a cool start-up idea and will continue to ride the wave and offer a compelling alternative to Apple. The Android developer community will have a field day in 2011.
  8. Iceland teaches the world a lesson, and no, not through its green marketing but, interestingly enough, through a role-model status that was thrust upon it. Unlike other countries, it could not afford to bail out its banks during the economic slowdown. What resulted was a lot of pain for the country, but it is showing signs of business recovery. The only conclusion being: it is not necessary to bail out banks for recovery.
  9. Russia comes into force on the world stage, so that the world will talk more of the BRIC than of the BASIC group. A raw-materials supplier with a dictatorial government will soon be a technology powerhouse that the world cannot live without. The brains of Russia were never in doubt and have long been known to be scientifically advanced. If they can leverage that in today's global economy, they can make BRIC look like bRic.
  10. And many other miscellaneous things that may not necessarily have to do with this space but could bring joy to all of us - maybe an Indian victory in the Cricket World Cup.
Hopefully, each time I write this year, I will be able to write on each of the above. For now, it feels good to have started, and I take this opportunity to wish you and your family a very happy, wealthy and healthy new year.