Sunday, December 26, 2021

Back to the future 3 – the James Webb Space Telescope

At 5:49 pm (IST) on Christmas Day of 2021, NASA successfully launched the largest and most powerful telescope ever produced – the James Webb Space Telescope (JWST)! Carl Sagan was said to have been proud of the “Back to the Future” movies, remarking that they depicted their science pretty well. Time has progressed and so has technology. So the proverbial movie “Back to the Future 3” happened on Dec 25, 2021, just a week ago – in real life! The JWST was launched on an Ariane 5 from French Guiana. It was not just another launch; it was special, and a first in many ways. Carl Sagan also once said, “We can judge our progress by the courage of our questions and the depth of our answers, our willingness to embrace what is true rather than what feels good.” I am not privy to the context in which he was speaking, but we can relate the quote to the launch of JWST, because it is going to do precisely that: ask better questions and maybe redefine our understanding of the field if the answers are inconvenient. In science, we do that all the time!

You can Google about the JWST and find all the fun facts and trivia and indeed the serious science surrounding it. The point of this article is not to repeat it yet again, but instead to offer a different perspective.

Before we begin, let’s first understand what we are talking about. Why is the James Webb Space Telescope named after James Webb? Who was he? Well, he wasn’t an astronomer like Hubble (the Hubble Space Telescope has been serving us spectacular imagery for the past 30 years or so). James Webb was a government officer! He held together the fledgling NASA space program between 1961 and 1968 and worked to ensure the Apollo moon mission went ahead. The space telescope has been named after him to honour his singular contribution to asking difficult questions and accepting uncomfortable truths in pursuit of the unknown. So what is JWST? And what is the big deal?

In layman’s terms, JWST is as long and wide as a tennis court and as high as a 3-storied building! Even the mighty Ariane 5 could not have taken off with the telescope in this form! So the telescope was designed to be folded, to be unfurled in space once it reaches its designated position. The idea of building a better telescope than Hubble had begun even as the Hubble telescope was being launched. JWST is a multinational effort spanning over 25 years of push and pull. Ten billion dollars later, we had the moment we witnessed on Dec 25th!

The successful launch was only the beginning of a complicated mission. There are 344 single points of failure in its journey over the next six months, and anything can still go wrong! The telescope is now hurtling through space to its designated point, the second Lagrange point (L2), about 1.5 million km from Earth, where the gravitational pulls of the Earth and the Sun balance out. It will settle into its orbit there about 28 days after launch. It won’t be until summer, though, that the first pictures from JWST are received.
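As a rough sanity check on that 1.5 million km figure, the distance to L2 can be estimated with the standard first-order approximation r ≈ R·(M_earth/3M_sun)^(1/3). This is a back-of-the-envelope sketch with textbook constants, not an exact orbital computation:

```python
# Back-of-the-envelope estimate of the Earth-Sun L2 distance using the
# first-order approximation r ≈ R * (M_earth / (3 * M_sun))**(1/3).
R_km = 1.496e8        # mean Earth-Sun distance (1 AU), in km
M_earth = 5.972e24    # mass of the Earth, kg
M_sun = 1.989e30      # mass of the Sun, kg

r_L2_km = R_km * (M_earth / (3 * M_sun)) ** (1 / 3)
print(f"L2 is roughly {r_L2_km:,.0f} km from Earth")  # ~1.5 million km
```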

So what’s different about James Webb compared to Hubble?

  1. Size. Hubble’s mirror is approximately 8 feet in diameter, whereas JWST’s mirror is about 21 feet. Hubble is the size of a bus; JWST is the size of a tennis court!
  2. The light itself that JWST will see is different from Hubble’s. Hubble uses visible light, so it sees what we would see if we were in space at that location! JWST will only see the orange/red end of the spectrum that we see, and then the infrared light beyond the red. The idea is to peer deeper into space, and it turns out that the deeper you peer, the more ‘red-shifted’ the light becomes. Put differently, what was emitted billions of years ago as normal visible light now appears in the infrared.
  3. Hubble orbits the Earth; JWST will orbit the Sun. The infrared instruments on JWST need to be maintained at a very cold temperature (around -266 deg C), and the Lagrange point L2 offers the conditions to achieve this. In fact, the sunshield of JWST is so effective that of the kilowatts of solar radiation striking its hot side, well under a watt of heat makes it through to the cold side.
As a fun fact, it is said that JWST is so sensitive that it could detect the heat signature of a bumblebee from the distance of the Moon, about 384,000 km away!
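The red-shift arithmetic in point 2 above is simple: an observed wavelength is the emitted wavelength stretched by a factor (1 + z), where z is the redshift. A quick sketch (the example wavelength and redshift are my own choices for illustration):

```python
def observed_wavelength_um(emitted_um: float, z: float) -> float:
    """Stretch a rest-frame wavelength (in micrometres) by redshift z."""
    return emitted_um * (1 + z)

# Green visible light (0.5 um) emitted at redshift z = 10 arrives at 5.5 um,
# squarely in the infrared band JWST is designed to observe.
print(observed_wavelength_um(0.5, 10))  # 5.5
```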

How did JWST move out of sight of humanity?

I will summarize the rather elaborate process by which JWST detached from the Ariane 5 upper stage and continued its journey – alone in the vast expanse of the universe – only to bring us more data to test our theories or hypothesize new ones.

First came the moment the JWST detached from the upper stage of the Ariane 5 launcher, 27 minutes after launch: the moment that set JWST free!

Then came the rather spectacular views of the JWST from atop the upper stage as it drifted away into space. Fig (b) shows the back side of the JWST after detachment. Fig (c) shows the solar panels being unfurled in a carefully maneuvered, time-critical operation. This was critical for the JWST to start receiving power and not become a piece of debris!

The solar panels were fully lit by the sun’s rays, signaling that all had gone well so far and that solar power was charging the electronics on board!

Then came the final view by humanity of the lit-up JWST as it hurtled away into space towards its designated Lagrange point (L2). After this, we will never see the JWST again! We will see the images it captures over the next 5-10 years, but never the telescope itself!

As the final images like the one in fig (e) were being flashed, I was comprehending the magnitude of the moment. This was a picture for posterity. So many philosophical essays or science fiction stories could be written just around this moment in time! Many artists will come forth with their art forms to capture it. This moment has changed the human quest for understanding the universe. Those who stare at the night sky (Bangalore clouds willing :) will have had these thoughts cross their minds at some time: just where does this infinity begin and where does it end? The methodical answers to such questions so far agree with the Big Bang. From the pictures of JWST, we will soon know what the earliest galaxies were like, what they were composed of, and so on. I called it the proverbial “Back to the Future 3” because it will really start detecting the faintest of signals that may have originated close to the time the Big Bang occurred, when the universe had just come into being. These are fascinating times for the scientific pursuit, and we all look forward to the treasure trove that JWST will share with us!

Finally, what new science can we expect? 

NASA, ESA and Canada spent around 10 billion dollars for a few top-level goals:

  1. To study light from the first stars and galaxies after the Big Bang.

  2. To study the formation and evolution of these galaxies.

  3. To understand the formation of stars and planetary systems.

  4. To study planetary systems and origin of life!

As Ken Sembach, Director of the Space Telescope Science Institute in Baltimore, said, “Science won’t be the same after today. Webb is more than a telescope – it is a gift to everyone who contemplates the vastness of the universe”. And a gift it is. Hopefully, by the time of refleXion’s next issue, the JWST will be at L2, and an issue later, we would have the first pictures.

Tuesday, December 14, 2021

AI enabled medical devices by US FDA

In India, it is difficult to track regulatory approvals for many products; for AI/ML-enabled algorithms, it is much worse. Around the world, things are not great either. However, the US FDA recently decided to publish a list of the AI/ML-enabled medical devices (or algorithms) it has approved, by category. The list is interesting to browse, as some patterns emerge.

Interest in medical devices incorporating ML functionality has increased in recent years. Over the past decade, the FDA has reviewed and authorized a growing number of devices legally marketed (via 510(k) clearance, granted De Novo request, or approved PMA) with ML across many different fields of medicine—and expects this trend to continue.

The FDA is providing this initial list of AI/ML-enabled medical devices marketed in the United States as a resource to the public about these devices and the FDA’s work in this area.

On October 14, 2021, FDA’s Digital Health Center of Excellence (DHCoE) held a public workshop on the transparency of artificial intelligence/machine learning-enabled medical devices. The workshop followed the recently published list of nearly 350 AI/ML-enabled medical devices that have received regulatory approval since 1997. The workshop was aimed at moving forward the objectives of FDA’s DHCoE to “empower stakeholders to advance healthcare by fostering responsible and high-quality digital health innovation.” The DHCoE was established in 2020 within FDA’s Center for Devices and Radiological Health (CDRH) under Bakul Patel.

This initial list contains publicly available information on AI/ML-enabled devices. The FDA assembled this list by searching FDA’s publicly-facing information, as well as by reviewing information in the publicly available resources cited below and in other publicly available materials published by the specific manufacturers.

This list is not meant to be an exhaustive or comprehensive resource of AI/ML-enabled medical devices. Rather, it is a list of AI/ML-enabled devices across medical disciplines, based on publicly available information.

If grouped by category, this is what we see.

Radiology 241
Cardiovascular 41
Hematology 13
Neurology 12
Ophthalmic 6
Chemistry 5
Surgery 5
Microbiology 5
Anesthesia 4
GI-Urology 4
Hospital 3
Dental 1
Ob/Gyn 1
Orthopedic 1
Pathology 1


Radiology, with almost a 70% share of the listed devices, is no surprise: most AI work in healthcare, and in medical imaging in particular, has centred on modalities like chest X-rays, and many algorithms and solutions are available. What is surprising is the last entry on the list: pathology! Considering that it too is in some ways imaging-based (whole-slide scans, for example), it is intriguing that it does not list as many devices as it should.
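The shares quoted above are easy to recompute from the listed counts:

```python
# Device counts by category, as listed above.
counts = {
    "Radiology": 241, "Cardiovascular": 41, "Hematology": 13, "Neurology": 12,
    "Ophthalmic": 6, "Chemistry": 5, "Surgery": 5, "Microbiology": 5,
    "Anesthesia": 4, "GI-Urology": 4, "Hospital": 3, "Dental": 1,
    "Ob/Gyn": 1, "Orthopedic": 1, "Pathology": 1,
}
total = sum(counts.values())
for name, n in counts.items():
    print(f"{name:15s} {n:4d}  {100 * n / total:5.1f}%")
# Radiology alone accounts for 241 of 343 devices, about 70%.
```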

What is also visible from the list is that, other than radiology, there are not many solutions in the other areas. Radiology is, so to speak, the low-hanging fruit of AI in healthcare and imaging.

There is so much left to do in healthcare. The need of the hour is for the computer science community to engage with the medical fraternity and help deploy some of these algorithms, not to replace practitioners, but to aid them in making decisions: the proverbial second opinion. It does no harm. Can it bias the practitioner to simply go with the AI prediction? It may, but where there is uncertainty, practitioners face a dilemma anyway.

Given the scale and scarcity of resources we have in India, and a population so widely spread geographically, it is time for such solutions: they can only help provide better healthcare. How to achieve that is a different question, though.

Wednesday, December 8, 2021

My PhD Thesis Title..

Yesterday, I posted an AI-generated artwork (on Twitter and on LinkedIn). The image was generated by providing my PhD thesis title as input (the title itself is irrelevant for this post). Today, I will share the story of the “AI” software that generated that stunning image.

If you are on Twitter, you will have lately seen a deluge of such AI-generated images all over your timeline. These pictures are being generated using a new app called Dream (wombo.art), which lets anyone create an AI-generated artistic image by simply typing a brief description of what they would like the image to depict. If you search Twitter, you will see many examples of what people have already generated using this app. Many academics on Twitter have been doing what I eventually did too: they provided their PhD thesis titles to generate their own art and shared it. It has become something of a craze – and I fell for it too.

This type of image-generating software is not totally new, though. There have been DALL-E and VQGAN+CLIP algorithms before. The Dream app takes things further with its speed, quality, ease of use and probably tweaks to the algorithm itself. It is available as a mobile app on Android and iOS and also on the web. The app is developed by a Canadian startup, Wombo.

The algorithm behind wombo.art could still be VQGAN+CLIP, which stands for the verbose “Vector Quantized Generative Adversarial Network” plus “Contrastive Language–Image Pretraining”. Explained to a layman or someone not in the field, it is simply a piece of software that takes words as input and generates pictures, based on the datasets it was trained on.

VQGAN+CLIP, as the “+” indicates, is a combination of two deep learning models, both released earlier this year. VQGAN is a type of generative adversarial network (GAN) to which you can pass a vector (a code), and it outputs an image!

VQGAN has a continuous, traversable latent space, which means that vectors with similar values will generate similar images, and following a smooth path from one vector to another will produce a smooth interpolation from one image to another.

CLIP is a model released by OpenAI that can be used to measure similarity between the input text and the image.

So in VQGAN+CLIP, we start with an initial image generated by VQGAN from a random vector, plus the input text presented by the user (e.g. my PhD title!). CLIP then provides a similarity measure between the input text and the generated image. Through optimization (typically gradient ascent on this similarity), the algorithm iteratively adjusts the image to maximize the CLIP score.

So CLIP guides the initial image towards a nuanced version of itself, one that is as “close” to the input text as possible.
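That text-guided optimization loop can be caricatured in a few lines. To be clear, this toy uses random vectors in place of VQGAN’s latent and CLIP’s text embedding, and plain cosine similarity in place of CLIP’s learned score; only the gradient-ascent structure of the loop is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

text_emb = rng.normal(size=64)   # stand-in for CLIP's text embedding
latent = rng.normal(size=64)     # stand-in for VQGAN's random initial latent

for step in range(500):
    # Closed-form gradient of cosine similarity with respect to the latent.
    na, nb = np.linalg.norm(latent), np.linalg.norm(text_emb)
    grad = text_emb / (na * nb) - (latent @ text_emb) * latent / (na**3 * nb)
    latent = latent + 1.0 * grad   # gradient *ascent*: increase the similarity

print(f"final similarity: {cosine(latent, text_emb):.3f}")  # approaches 1.0
```

In the real system, the gradient flows through VQGAN’s decoder and CLIP’s image encoder via backpropagation rather than a closed form, but the loop shape – score, gradient, update – is the same.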

Of course, Wombo has not specified that they are using the VQGAN+CLIP algorithm specifically, and they have clearly added a few bells and whistles, but the basic concept remains the same.

So, try inputting any text, your PhD thesis title, your paper title, your dream destination and let wombo.art generate some aesthetic art for you!

Tuesday, December 7, 2021

How complex is a single biological neuron?

For an audience well versed with machine learning and deep learning these days, the complexity of a single artificial neuron is familiar from building complex architectures. A single neuron typically comprises a linear block and a non-linear activation. The linear block simply computes a weighted linear combination of its inputs, and the non-linear block computes the output using the defined activation function: a sigmoid, a tanh, a softmax, a ReLU, a leaky ReLU or any other. It is often said that artificial neural networks were inspired by the brain. So just how complex is a typical biological neuron – the one which inspired us all – in comparison to an artificial neuron?
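For concreteness, the artificial neuron just described fits in a few lines (the inputs, weights and bias below are arbitrary illustrative numbers):

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: linear block followed by a sigmoid activation."""
    z = np.dot(w, x) + b              # linear block: weighted combination + bias
    return 1.0 / (1.0 + np.exp(-z))   # non-linear block: sigmoid activation

x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.1, -0.6])   # weights (arbitrary, for illustration)
b = 0.2                          # bias
print(neuron(x, w, b))           # a value squashed into (0, 1)
```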

Let us first pin down the notion of “complexity”, at least as it was used in the work that David Beniaguev, Idan Segev and Michael London, at the Hebrew University of Jerusalem, carried out. They trained an artificial deep neural network to mimic the computations of a simulated biological neuron, and published the work as “Single cortical neurons as deep artificial neural networks” (https://www.sciencedirect.com/science/article/abs/pii/S0896627321005018).

They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of one single biological neuron.

The paper says that “This study provides a unified characterization of the computational complexity of single neurons and suggests that cortical networks therefore have a unique architecture, potentially supporting their computational power.”

The authors also hope that their result will change the present state-of-the-art deep network architecture in AI. “We call for the replacement of the deep network technology to make it closer to how the brain works by replacing each simple unit in the deep network today with a unit that represents a neuron, which is already—on its own—deep,” said Segev. In this replacement scenario, AI researchers and engineers could plug in a five-layer deep network as a “mini network” to replace every artificial neuron.
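That replacement scenario is easy to sketch: wherever a standard network uses one scalar unit, plug in a small multi-layer network with a single output. The layer count and widths below are my own illustrative choices (the paper’s surrogate networks are temporal-convolutional and far larger), and the weights are random and untrained; only the structure is the point.

```python
import numpy as np

rng = np.random.default_rng(42)

def deep_neuron(x, n_layers=5, width=32):
    """A small 5-layer MLP standing in for ONE 'neuron': many inputs -> one scalar."""
    h = x
    for _ in range(n_layers):
        W = rng.normal(scale=1.0 / np.sqrt(h.size), size=(width, h.size))
        h = np.maximum(0.0, W @ h)    # ReLU hidden layer
    w_out = rng.normal(scale=1.0 / np.sqrt(width), size=width)
    return float(w_out @ h)           # single output, like one neuron

synaptic_inputs = rng.normal(size=200)   # stand-in for a neuron's synaptic inputs
print(deep_neuron(synaptic_inputs))
```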

This might provide insights when comparing architectures to real brains, especially on image classification tasks. If, say, 100 artificial neurons are equivalent to just 20 biological neurons, then a network far smaller than we might expect is all the brain requires to complete a classification task!

So, I guess, it is okay to claim that the brain (especially the visual cortex) inspired artificial neural network architectures, but unfair to say that the two are equivalent!

Data and code availability

As mentioned in the paper cited above, all data and pre-trained networks that were used in this work are available on Kaggle datasets platform (https://doi.org/10.34740/kaggle/ds/417817) at the following link:

https://www.kaggle.com/selfishgene/single-neurons-as-deep-nets-nmda-test-data

Additionally, the dataset was deposited to Mendeley Data (https://doi.org/10.17632/xjvsp3dhzf.2) at the link:

https://data.mendeley.com/datasets/xjvsp3dhzf/2

A github repository of all simulation, fitting and evaluation code can be found in the following link:

https://github.com/SelfishGene/neuron_as_deep_net.

Additionally, the authors provide a python script that loads a pretrained artificial network and makes a prediction on the entire NMDA test set, replicating the main result of the paper (Figure 2):

https://www.kaggle.com/selfishgene/single-neuron-as-deep-net-replicating-key-result.

Also, a python script that loads the data and explores the dataset (Figure S1) can be found in the following link: https://www.kaggle.com/selfishgene/exploring-a-single-cortical-neuron.

Wednesday, December 1, 2021

New year - new resolution

There is a month to go in this year and I thought of a new resolution for the coming year. Of course I have 31 days to change this resolution to another one, should there be any need :)

I have lately been working on a topic that can broadly be called 'machine learning in healthcare', but the scope is much wider. I have been working on applications of machine learning to tasks in healthcare. By machine learning, I mean everything from traditional statistical inference to deep learning, self-supervised learning and even reinforcement learning, while also picking up expertise in graph neural networks and the overarching geometric deep learning. By healthcare, I mean topics related to critical care data, and applications as varied as histology, pathology, radiology, dermatology, speech and many others. I have also been keeping up with the newest tools, so I am working with TF+Keras, PyTorch, PyTorch Lightning, Python, Julia and even Swift.

I have started feeling lately that the glorified machine learning is really all about finding patterns in data. Sure, the algorithms do better than humans and have tons of applications, but there is a fallacy in the fundamental assumption that all answers lie in the data. To circumvent this, I have started forays into causal inference and causal discovery, especially on observational data. The inputs from causality theory will, hopefully, enhance the predictions coming out of machine learning on average. It has been fascinating reading about "lineages" in causal theory, and indeed there are "lineages" in statistics itself. Judea Pearl vs Donald Rubin vs Jamie Robins... and the fights go on when there should be none.

So now on to the resolution, and what all this has got to do with it. The idea is to write up a recent event in the "AI in healthcare" space, on which I will shed some light with "my own research" (in quotes because of recent abuse of the term). The "event" itself could be a paper published in JAMA or NEJM, a policy directive from somewhere around the world, or even some interesting findings someone shares that I feel like commenting on. The "event" could even be a book review.


So welcome to a new resolution, made uncharacteristically on the first day of the last month of the year. The posts themselves will be published each weekend, starting in 3 days' time. Happy reading!

Thursday, April 1, 2021

Turing award winners - 2021

Yesterday, I heard that Profs Aho and Ullman were awarded this year's Turing Award for their work on compilers. I have never met Aho or Ullman, nor have I directly spoken with them. However, they were heroes of my student life. I will explain how and why.

I joined my PhD in the year 1991 at IIT Bombay. I had completed my Bachelors in Electronics in 1987 and my Masters in 1989. This was an era when the Internet was not yet a public craze and computers were mostly very basic: the PC XTs and ATs of the time, based on Intel 286 and 386 chipsets! I had no professional training in computer science, yet computers fascinated me.

1991 was also the year when Linux was born. Outside of Linus's home, one of the first installations of v0.1 was probably in our lab at IIT Bombay! I had developed an interest in software engineering through the use of Linux and work with many other server systems that we had recently procured for our lab. That, within months, led me to an interest in programming languages and how compilers are built, and I came across Aho and Ullman's red book, Principles of Compiler Design. I bought my personal copy then, and it remains on my bookshelf even today!

The concept of compilers fascinated me, given I had no formal training in CS. So I started experimenting with concepts from the Aho-Ullman book and began writing my own experimental language parsers. I bought and read a book on the Backus-Naur Form (BNF) of notation, wrote a toy programming language in BNF, and then compiled it with 'cc' on one of our Sun Solaris servers. I wrote my first language parser and compiler autodidactically, by reading Aho and Ullman's 'red book'. Since then, of course, my interests have diversified and I haven't worked much on compilers for a couple of decades, but the mention of Aho and Ullman as rightful winners of the Turing Award took me back to the nostalgia that was the 90s.

Congratulations to Aho and Ullman on winning the Turing Award – richly deserved. And you will never know just how many lives you touched with your book and your work.

Monday, March 15, 2021

mRNA vaccines: A brief history of time


The Covid-19 pandemic ravaged the world throughout 2020. Amidst public health measures that varied across the globe, from full compliance to calling Covid-19 a hoax, the scientific community was quietly working on the development of vaccines. Here is a summary of vaccines for SARS-CoV-2, both approved and in development.

Vaccines Approved              Vaccines in Development
Pfizer-BioNTech’s BNT162b2     Bharat Biotech’s Covaxin
Moderna’s mRNA-1273            Univ of Oxford-AstraZeneca’s AZD1222
Sinovac’s CoronaVac            ...
Russia’s Sputnik-V             ...
Russia’s EpiVacCorona
China’s BBIBP-CorV

mRNA vaccines give our immune system genetic instructions to recognize the virus, without at any point introducing the virus itself (dead, alive, weakened, or in part)! An mRNA sequence is synthesized for the virus’s spike protein (S-protein), and this sequence is introduced into our cellular machinery to allow our cells to make the spike protein and thereby induce an immune response. The synthetic mRNA is packaged in a lipid nanoparticle that delivers the instructions into the cells; once inside, the cellular machinery follows the mRNA instructions to produce the viral spike protein. This is not science fiction – this is real life! We insert instructions into our cells to make a protein that looks like a virus protein. The body then builds an immune response to that protein, so if you are ever exposed to the virus in future, the immune system recognizes the spike protein on the virus and destroys it before it can enter the cells!

I decided to put together a timeline history of the two most popular vaccines today, Moderna’s and Pfizer’s, which are both mRNA-based.

What you may have heard or read:

Time       Description
64 days    Time it took Moderna to develop their vaccine and launch the phase I trial
Jul 2020   Phase III trials began
Dec 2020   The vaccines were ready for deployment, and around 3 million people worldwide had already been vaccinated


While everything you have heard and read is correct, the devil is in the details: there is a need to understand and appreciate the background work carried out by many scientists, and particularly by Katalin Kariko, under the most adverse conditions. In any scientific endeavour, there are hundreds and thousands of researchers who give their everything and largely go unnamed, but at the heart of the success of both Moderna and Pfizer is one individual – Katalin Kariko!

I have tried to take the descriptive narration out and summarize the history in tabular form, captured as a timeline. Hopefully this is useful, readable and informative.

Time          Description
1961          Messenger RNA (mRNA for short) was discovered by 9 scientists including Francis Crick (of Crick-Watson double helix fame), Jacob, Brenner and Meselson (of the famous Meselson-Stahl experiment)
1976          Kariko first came to know mRNA in detail after attending a seminar in Hungary and became inspired to use it for therapeutics
1985          She moved to the US from Hungary and joined Temple University as faculty
1990          Kariko moved to the University of Pennsylvania (UPenn) following a dispute with her boss at Temple University, who threatened to have her deported
1995          Through the early nineties, she continued her work on using mRNA for drugs and therapeutics, but was not able to generate funding: all her grant applications were rejected. Eventually, UPenn gave her 2 options – either leave or prepare for demotion
1995          The same year, she was diagnosed with cancer – so given her circumstances and her desire to pursue mRNA therapeutics, she decided to stay on and take the humiliation of a demotion at UPenn
1997          In front of a largely dysfunctional copier machine, she met Drew Weissman, who had recently joined UPenn and had approved grants. He became interested in her work, decided to partly fund her experiments, and a partnership began
2005          Kariko and Weissman published a paper announcing a modified form of mRNA, congenial to easy acceptance by the immune system. Normally, we know of 4 bases in DNA, namely A, T, C and G; in RNA, the T is replaced by a U. Their paper talked of replacing the U with 1-methyl-3’-pseudouridylyl in synthetically created mRNA, generally denoted by the Greek letter Ψ. For the next 5 years, no additional funding came and not much interest was generated
2010          Derrick Rossi, inspired by her paper, founded Moderna
2010          Kariko and Weissman licensed their technology to the small German company BioNTech
2012          UPenn refused to renew her faculty contract (after the demotion) and told her "she was not faculty quality"
2013          Kariko accepted a senior VP role at BioNTech
2017          Moderna began developing a Zika virus vaccine based on mRNA
2018          BioNTech and Pfizer started working together on an mRNA vaccine for influenza. The landmark 2005 paper and the use of Ψ are an integral part of Pfizer’s vaccine
Jan 2020      Within weeks, Chinese scientists had sequenced the SARS-CoV-2 virus, and a synthetic mRNA sequence corresponding to the spike protein was derived. Vaccine trick #1: a clever lipid packaging system delivers this synthetic mRNA into our cells
Feb 2020      Pfizer’s vaccine development revolved around the use of Ψ in the mRNA sequence. Vaccine trick #2: cells are extremely unenthusiastic about foreign RNA and try hard to destroy it before it does anything, but the vaccine needs to get past the immune system. The use of Ψ placates the immune system, and yet, interestingly, it is treated as a normal U by the relevant parts of the cell
Mar 2020      In 64 days, Moderna had completed the development of their mRNA vaccine, and BioNTech had reached a similar stage
Jul 2020      Phase III trials began
Nov/Dec 2020  The world was ready for vaccination with two leading mRNA vaccines that are ~95% efficacious!


In scientific pursuit, never follow only the news in the media – it serves us better to find the whole truth. The vaccines were not developed in one year, as claimed; they are largely the result of the tireless pursuit of one woman over 3 decades, along with other inspired scientists who ensured that the recipe was ready come Jan 2020. Technically speaking, we had waited 59 years, since 1961, for this day! Yes, the arrival of the SARS-CoV-2 virus definitely fast-tracked the later development.

Katalin Kariko is directly responsible for the Pfizer vaccine, and was the inspiration for Rossi and the reason Moderna was created! Remember the name: she might just feature in the news as a future Nobel Laureate!

Monday, March 8, 2021

FC1 and FC2 - a tale of two genes

This short article is largely a fictional essay. When the scientific community first put together the human genome in the early 2000s and subsequently published it, we suddenly discovered that there are more than 20,000 genes in our genome. We are still discovering new genes and their functions through rigorous scientific protocol. These genes are named based on either their location or their function, and appended with a number if there are more genes doing the same thing with a small difference. Metaphorically speaking, humans are also discovering newer genes based on human behaviour. While there is a MAGA gene doing the rounds lately in the other half of the world, in our own backyard two genes were unearthed during the last 6 months of the pandemic. Based on location and function, I choose to call them FC1 and FC2; their resemblance to our WhatsApp group names is purely coincidental. Other than location and function, these 2 genes also have another attribute – behaviour!

Some of the most fundamental questions concerning our evolutionary origins, our social relations, and the organization of society are centred around the issues of altruism and selfishness. Experimental evidence indicates that human altruism is a powerful force and is unique in the animal world. However, there is much individual heterogeneity and the interaction between altruists and selfish individuals is vital to human cooperation. Depending on the environment and circumstances, a minority of altruists can force a majority of selfish individuals to cooperate or, conversely, a few egoists can induce a large number of altruists to defect. Current gene-based evolutionary theories cannot explain important patterns of human altruism, pointing towards the importance of both theories of cultural evolution as well as gene–culture co-evolution.

In evolutionary biology, an organism is said to behave altruistically when its behaviour benefits other organisms at a cost to itself. Altruistic behaviour is largely considered the rarer and nobler kind, whereas its opposite, selfish behaviour, is more common in the animal world. In everyday parlance, an action would only be called ‘altruistic’ if it was done with the conscious intention of helping another, but in the biological sense there is no such requirement. Indeed, some of the most interesting examples of biological altruism are found among creatures that are (presumably) not capable of conscious thought at all, e.g. insects.

Altruistic behaviour is common throughout the animal kingdom, particularly in species with complex social structures. There are plenty of examples – vampire bats, vervet monkeys, helper birds, meerkats – of groups volunteering an individual to watch out for predators, essentially putting its own life at risk. Such behaviour is maximally altruistic. From a Darwinian viewpoint, the existence of altruism is puzzling, as natural selection leads us to expect animals to behave in ways that increase their own chances of survival.

Human societies represent a huge anomaly in the animal world. They are based on a detailed division of labour and cooperation between genetically unrelated individuals in large groups. This is obviously true for modern societies like ours. Why are humans so unusual among animals in this respect? Human altruism goes far beyond that which has been observed in the animal world. Among animals, fitness-reducing acts that confer fitness benefits on other individuals are largely restricted to kin groups. On the other hand, humans have the unique ability to form and cooperate within large social groups, which include many genetic strangers. For example, humans invest time and energy in helping other members in their neighborhood and make frequent donations to charity. They come to each other’s rescue in crises and disasters. They respond to appeals to sacrifice for their country during a war, and they put their lives at risk by helping complete strangers in an emergency.

Plato argues in his treatise ‘The Republic’ that the soul comprises three parts: the rational, the appetitive and the spirited. For a community to be just, he says, each part has to perform its role to the best of its ability. He built on the concept of the soul as defined by Socrates and Pythagoras before him.

Sigmund Freud presented an alternative theory of the ego, superego and id. The id is trying to get you to do things, the superego is trying to get you to make good decisions and be an upstanding person, and so the two are always fighting with each other while the ego steps in between them.

Both of the above abstractions try to explain human behaviour at the individual and community levels.

In the literature, two eminent theories of altruism are discussed, both mathematically founded and backed by overwhelming empirical evidence: the ‘kin selection theory’ and the ‘reciprocal altruism theory’. Kin selection theory says that natural selection favours behaviours that benefit organisms sharing the actor’s genes, e.g. close kin. Reciprocal altruism, on the other hand, involves altruism between neighbours as a reciprocal act of kindness, returned either immediately or at some point in the future. The ‘competitive altruism theory’ explains forms of altruism that cannot be explained by these two, for example acts of volunteering and charity towards non-kin groups.
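Both theories have a simple formal core, and a toy sketch can make them concrete. Hamilton's rule for kin selection says an altruistic act is favoured when r × B > C (r is genetic relatedness, B the benefit to the recipient, C the cost to the actor), while reciprocal altruism is often modelled as tit-for-tat in repeated interactions. The function names and the numbers below are purely illustrative, not taken from any particular study:

```python
# Hamilton's rule: kin-directed altruism is favoured when r * B > C.
def hamilton_favours(r: float, benefit: float, cost: float) -> bool:
    """True if kin selection favours the altruistic act (r * B > C)."""
    return r * benefit > cost

# A full sibling has r = 0.5, so paying cost 1 to give benefit 3 is favoured.
assert hamilton_favours(0.5, 3.0, 1.0)        # 1.5 > 1.0
# For a cousin (r = 0.125) the very same act is not favoured.
assert not hamilton_favours(0.125, 3.0, 1.0)  # 0.375 < 1.0

# Reciprocal altruism: helping a non-relative pays off when favours are
# returned. Tit-for-tat cooperates first, then mirrors the partner's last move.
def tit_for_tat(opponent_history: list) -> str:
    return "C" if not opponent_history else opponent_history[-1]

a_hist, b_hist = [], []
for _ in range(5):
    a_move = tit_for_tat(b_hist)
    b_move = tit_for_tat(a_hist)
    a_hist.append(a_move)
    b_hist.append(b_move)

# Two reciprocators settle into sustained mutual cooperation.
print(a_hist)
```

The point of the sketch is only that both theories make altruism conditional: on relatedness in the first case, and on the prospect of reciprocation in the second.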

To some extent, the idea that kin-directed altruism is not ‘real’ has been fostered by the ‘selfish gene’ terminology used by Richard Dawkins in his famous book of the same name. A ‘selfish gene’ story can, by definition, be told about any trait, including a behavioural trait.

The origin of “The Selfish Gene” is intriguing. Dawkins revealed in the first volume of his memoirs, “An Appetite for Wonder”, that the idea of selfish genes was born ten years before the book was published. The Dutch biologist Niko Tinbergen asked Dawkins, then a research assistant with a new doctorate in animal behaviour, to give some lectures in his stead. Inspired by Hamilton, Dawkins wrote in his notes (reproduced in An Appetite for Wonder): “Genes are in a sense immortal. They pass through the generations, reshuffling themselves each time they pass from parent to offspring ... Natural selection will favour those genes which build themselves a body which is most likely to succeed in handing down safely to the next generation a large number of replicas of those genes ... our basic expectation on the basis of the orthodox, neo-Darwinian theory of evolution is that Genes will be 'selfish'.”

As an example of how the book changed science as well as explained it, a throwaway remark by Dawkins led to an entirely new theory in genomics. In the third chapter, he raised the then-new conundrum of excess DNA. It was dawning on molecular biologists that humans possessed 30–50 times more DNA than they needed for protein-coding genes; some species, such as lungfish, had even more. About the usefulness of this “apparently surplus DNA”, Dawkins wrote that “from the point of view of the selfish genes themselves there is no paradox. The true 'purpose' of DNA is to survive, no more and no less. The simplest way to explain the surplus DNA is to suppose that it is a parasite.” Four years later, two pairs of scientists published papers in the journal Nature formally setting out this theory of “selfish DNA”.

So, as a corollary to the competitive altruism theory, I can think of a theory that explains selfishness rather than altruism, which I can best describe as ‘competitive selfishness theory’.

I think we carry these altruist and selfish genes together in our DNA, and they get expressed depending on the environment and circumstances. Let’s say FC1 is the altruist gene. It gets expressed routinely for our near and dear ones and those in the immediate family (as per all the theories mentioned above: kin selection, reciprocal altruism and competitive altruism). FC2, let’s say, is the selfish gene. It gets expressed routinely for community-related issues. Clearly, we have both of them; it is just that their level of expression depends on the environment and context. Largely, the empirical evidence witnessed during the last 6 months of the pandemic speaks volumes about which gene is expressed more. It is also true that we have seen both get expressed simultaneously. What else explains the behaviour of an individual who is altruistic in his/her own environment but clearly selfish about the same issue when it comes to the community?

If humans are the most evolved form in the animal kingdom and the only one capable of cognitive thinking, then we need to do much better than expressing the FC2 gene most of the time. That is behaviour genetically coded in us by virtue of our being a living species of the animal kingdom; every other species does it too. While a noble, just, fair and charitable community is clearly a Utopian idea, should the FC1 in us not get expressed at least as much as FC2, if not more? I think it should. What do you think?

Sunday, March 7, 2021

New beginning

March 2021 is not just a new month or year: it’s a year after the pandemic set foot on this planet and changed all of us, whether we liked it or not, whether we agreed with it or not.

So, my thought too is to revive my blog series. You will see that I have made many attempts before (since 2010) and have largely got caught up in the following dilemmas:

  1. Should I publish only after I write a meaningful post? The problem with this is that it can’t be periodic, but I have control over what I publish.
  2. Should I publish periodically with not-so-great content? The problem with this is obvious.

I have decided to find the middle ground: start publishing periodically, and not wait for a long article, but publish one-liners, paragraphs, pages or full articles as they appear important to me at the time. So, here is going to be another start, on the lines of Hugh Prather's "Notes to Myself".