Friday, October 8, 2021

COMMONSENSE THINKING ALGORITHM -



There are 2 kinds of commonsense links - commonsense knowledge links and commonsense thinking links. What are they?


When I go by the commonsense that ‘if you have an upset stomach that means you have eaten outside food’ (this is an Indian phenomenon, not in western developed countries), I am stating the link : upset stomach => outside food. This is what I call a commonsense knowledge link - commonsense because it's obvious / known to everyone and knowledge because it essentially comes from a piece of knowledge or rule in the mind that the typical cause of an upset stomach is outside food; you aren't really “thinking” or “thinking afresh” as such. Hence I call it a commonsense knowledge link.


What is a commonsense thinking link? Suppose someone has an upset stomach, and you are thinking what could be the cause. Now suppose the outside food possibility is eliminated by some reasoning. Then some deeper commonsense thinking tells you - someone might have given him bad food with a malicious intent. This follows the pathway : upset stomach => ate bad food => that bad food came to him from somewhere => home food / outside food / someone gave him bad food at home with intent (from amongst which the second option is eliminated by the reasoning, and the first option is naturally eliminated). The commonsense link : ‘ate bad food => that bad food came to him from somewhere’ is what I call a commonsense thinking link. Commonsense because it is commonsense - something very obvious. And thinking because it is clearly an instance of a thinking connection and not something plucked out of knowledge. We don't consciously know rules or pieces like ‘if something exists then there is an existential condition’. It's thinking.


Note : The rule ‘upset stomach => outside food’ is also a piece of someone’s thinking - someone thought of that first, inferred it first - but then now it has become a common piece of knowledge or rule stored as a piece of memory in our minds. It's an instance of a thought becoming knowledge.


Critical 2-step GENERAL mental algorithm for Commonsense Thinking links - 

  1. If there is X, X exists, and if X exists, what existential condition does that imply?

  2. If that implies Y, then what does Y bring along with it (/its parts)? 


My claim is that most (perhaps all) commonsense thinking links contain their implied parts in the above algorithm.
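The two steps above can be sketched as a toy slot-filling program. All the dictionary entries below are illustrative assumptions of mine, not a real knowledge base:

```python
# A minimal sketch of the 2-step commonsense-thinking algorithm as slot-filling.
# The entries (food, walking, greeting, lock) are illustrative assumptions.

# Step 1: if X exists, what existential condition does that imply?
EXISTENTIAL_CONDITION = {
    "food": "the food came to the person from somewhere",
    "walking": "a foot touched the ground",
    "greeting": "a mouth was involved",
    "opened lock": "a key was used",
}

# Step 2: if the condition implies Y, what does Y bring along with it?
BRINGS_ALONG = {
    "foot": ["shoe", "sole"],
    "mouth": ["face"],
    "key": ["key-holder"],
}

def commonsense_thinking_link(x):
    """Apply step 1, then step 2 on every entity named in the condition."""
    condition = EXISTENTIAL_CONDITION.get(x)
    if condition is None:
        return None
    companions = [part
                  for entity, parts in BRINGS_ALONG.items()
                  if entity in condition
                  for part in parts]
    return condition, companions

print(commonsense_thinking_link("walking"))
# -> ('a foot touched the ground', ['shoe', 'sole'])
```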


Examples : 


  1. Consider the above example first. 


The CST link is : ate food => food came to him from somewhere

This is clearly step 1 of the algorithm. For food to exist, the existential condition is that it has to come to him from somewhere.


  2. If you are walking, the sole of your shoes touches the ground.


For walking to exist, there has to be “foot touching the ground” and “foot in the air” alternately. The first part implies, by step 1 of the algorithm, that the foot touched the ground. Step 2 of the algorithm implies that the foot brings along with it the shoe + its sole.


  3. John attended Jim’s wedding. This means he saw Jim’s wife.


The CSK link here is : attended wedding => went up to the stage to greet the bride and groom.


The (further) CST link is : greeted bride => saw the wife.


Applying the algorithm, 

‘Greeting’ existing implies there was a hand and a mouth, which bring along with them a face. (It doesn't imply other body parts existing, since the wife may be crippled or have some other body part damaged. But the head/face has to exist.)


  4. Lock was opened => someone went inside.

'Lock OPENING' existing implies there was a key. The key brings along with it a key-holder.
--------

COMMENTS : 


- The algorithm can obviously be applied in loops.


- The suggestion is to build commonsense knowledge bases inspired from this algorithm i.e. oriented to fill these slots connected to entities/concepts.


ANALOGY BETWEEN ‘CHEMISTRY’, AND ‘KNOWLEDGE REPRESENTATION IN ARTIFICIAL INTELLIGENCE’ -



Consider this sentence - 


A gave a ball to B.


There are 6 words. Words are like atoms. They connect with each other to form a molecule called sentence.


There are 2 kinds of words/atoms - stable (independent) and unstable (dependent).

This difference is on the basis of their meanings. 

An unstable word depends upon attachment to other entities for the complete expression of its meaning. A stable word doesn't need that. So unstable atoms form bonds with other atoms in the molecule for stability.


For example, ‘ball’ is stable because you can define ‘ball’ independently of any situation or scenario. The meaning of a ball is ‘a spherical solid object used in games’, something like that. But when one has to define ‘gave’, one has to take recourse to creating a background and invoking other entities; one says - when one transfers the possession of something to someone. Here, ‘when’, ‘something’, ‘someone’ are things upon which the meaning of ‘gave’ DEPENDS for complete expression. These ‘some-X’ words don't come in while defining a ball or a pen or a dance, but do come in while defining ‘give’ or ‘preparation’.


When unstable words get into a sentence (molecule) they form bonds with other words. The stable ones don't form bonds with other ones FROM THEIR OWN SIDE. You could think of this as a co-ordinate bond - a directed bond. (Of course the stable words are INVOLVED in the bonds WITH the unstable words).


In the case of the above sentence, ‘A’, ‘ball’ and ‘B’ are the stable atoms. ‘Gave’, ‘a’ and ‘to’ are the unstable ones.


Let's see the bonds - 


  1. A - gave

  2. Gave - (a ball)

  3. Gave - (to B)

  4. A - ball

  5. To - B


These are the bonds in the molecule - ‘A gave a ball to B’. In each bond, the stable atoms are A, ball and B.


Now, every direct bond (we will come to indirect bonds later) indicates a question-answer pair.



The corresponding Q-A pairs for the list of bonds above are - 

  1. Who gave? A

  2. Gave what? A ball

  3. Gave to whom? To B

  4. A what? As in, there was one (“a”) of what? Ball

  5. To whom? B
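As a sketch, the five bonds and their Q-A pairs can be stored and queried directly. The tuple representation below is my own assumption, not a proposed standard format:

```python
# Sketch: each direct bond in the molecule "A gave a ball to B", paired with
# the question-answer it indicates. A bond is (atom1, atom2, question, answer).
bonds = [
    ("A", "gave", "Who gave?", "A"),
    ("gave", "a ball", "Gave what?", "a ball"),
    ("gave", "to B", "Gave to whom?", "to B"),
    ("a", "ball", "A what?", "ball"),
    ("to", "B", "To whom?", "B"),
]

def answer(question):
    """A direct bond indicates a Q-A pair; look the question up in the bonds."""
    for _, _, q, a in bonds:
        if q.lower() == question.lower():
            return a
    return None  # no bond carries this question

print(answer("Gave what?"))  # -> a ball
```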



Indirect bonds help in answering questions in general about the entities present in the sentence. How?

Bonds like ‘A - gave - a ball’ i.e. the bond between ‘A’ and ‘a ball’ are indirect bonds or ‘pathways’.

Now, any question will involve 2 atoms typically (X and Y) and some blah blah blah in it. Mostly the pathway between the atoms X and Y (i.e. the indirect bond) will be the answer to the question.

For example, ‘What did A do to the ball?’ The pathway or the indirect bond says - A GAVE a ball.

Whereas if someone asks - Did the ball give? Here there is no pathway / indirect bond between ‘ball’ and ‘give’. So the answer doesn't exist, which is indeed true. There is nothing like “the ball giving” mentioned here.
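A minimal sketch of indirect bonds, assuming role labels on the directed bonds; this is one possible reading of why 'Did the ball give?' has no answer even though 'ball' and 'gave' share a bond:

```python
# Sketch: direct bonds carry a role, and an indirect bond (pathway) is a
# chain of direct bonds through the verb. The role labels are assumptions.
bonds = {
    ("A", "gave"): "agent",        # Who gave? A
    ("gave", "ball"): "object",    # Gave what? a ball
    ("gave", "B"): "recipient",    # Gave to whom? B
}

def pathway(x, y):
    """Return the indirect bond x - verb - y, if both direct bonds exist."""
    for (a, verb) in bonds:
        for (v2, b) in bonds:
            if a == x and v2 == verb and b == y:
                return [x, verb, y]
    return None

def acts_as_agent(x, verb):
    """Is x bonded to the verb in the agent role, i.e. did x do the verb?"""
    return bonds.get((x, verb)) == "agent"

print(pathway("A", "ball"))          # -> ['A', 'gave', 'ball'] : "A GAVE a ball"
print(acts_as_agent("ball", "gave")) # -> False : nothing like "the ball giving"
```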


Inspired by the above 2 points - direct and indirect bonds - one can say that all the “sub-semantics” of the sentence (i.e. all the sub-information contained in the sentence) can be represented by the BONDS.



We can define Isomers also. Molecules with the same words but different bonds. That is, ‘A ball was given by A to B’ is an isomer of ‘A gave a ball to B’.



All the coordinate bonds have an electron-pair. This “unsaid content” of every co-ordinate bond (the electron-pair) is the commonsense about that linkage that can be placed above the bond. For example, for the bond : ‘A - gave’, what can be written above that bond hyphen is ‘with his hands’ i.e. the hidden unsaid invisible part of the bond.

If the sentence was - A gave a house to B, then on the bond ‘A - gave’ can be placed the commonsense ‘by transferring the papers’ & on the bond ‘gave - (a house)’ can be placed the commonsense ‘all the contents of the house’. 



What happens when molecules come close to each other? They react with each other - their atoms react. Similarly, when 2 sentences follow each other, their atoms bear relationships with each other (ones from each sentence with the others). Such analysis can be done. (Terms like “reactivity”, “breakage of bonds”, “formation of new bonds” can be invoked…. !!)

For example, the applicability of a pronoun in the second sentence to the atoms in the first sentence can be seen in this light.



This work can be extended by drawing further analogies between the 2 concept-domains.



Finally, a picture of the molecule - 



COMMONSENSE-ANALYSIS -



A general, rough algo for ROUGHLY JUDGING via analysis, given a pair of events in consecutive sentences.
It is also akin to the mind's processing and the possible cognitive confusions and misleading possibilities it brings along while data-processing.

Algo : 
  1. See the second event/action.
  2. See who did it upon whom (who are the Doer and Doee)
  3. See who all (agents) are there in the whole scene.
  4. Assign (/re-assign) agents in the scene, in the slots (/ to the entries in the slots) of Doer and Doee, using (simple mathematical) possibilities.
  5. Apply commonsense from KB. 
* This is an Analysis toolkit. 
   This can lead to various useful commonsensical "leads", by evaluating and enumerating interpretations / (uncommonsensical) misinterpretations, of a pair of events in general.
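Steps 3-5 can be sketched on the scene of Example 1 below ("John was playing the guitar. Suddenly he received a call."). The `can_act` set is a stand-in assumption for the commonsense KB of step 5:

```python
from itertools import product

# Sketch of steps 3-5: enumerate all (Doer, Doee) assignments over the
# agents in the scene, then filter with a stand-in commonsense KB.
agents = ["John", "guitar", "call"]   # step 3: agents in the scene
can_act = {"John"}                    # KB assumption: guitars and calls can't act

def assignments():
    """Step 4: every (Doer, Doee) assignment over the agents."""
    return list(product(agents, repeat=2))

def commonsensical(doer, doee):
    """Step 5: keep only assignments the KB allows; also drop self-assignments
    like 'John called John', flagged in the text as cruelly misleading."""
    return doer in can_act and doer != doee

valid = [pair for pair in assignments() if commonsensical(*pair)]
print(valid)  # -> [('John', 'guitar'), ('John', 'call')]
```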
   
EXAMPLES -

1) John was playing the guitar. Suddenly he received a call. 

1. received a call
2. Doer - ? Doee - He
3. John, Guitar, call
4. i) He received a call from John ii) He received a call from the guitar iii) He received a call from the call iv) John = He (understood by the program) v) Guitar = He (NA) vi) Call = He (NA)
(Above are 6 of the 9 possibilities. The remaining 3 aren't considered since they all involve assigning He to the 3 agents, which is rejected as shown above).
'He' is John, is understood by the program.
5. i) John called John. Cruelly misleading! ii) Guitar cannot call John. There are no phones in guitars. So John didn't receive a call from the guitar iii) A call is not capable of acting on its own. 


2) John called the waiter. He asked him to get french fries.

1. asked him to get french fries
2. Doer - He (?) , Doee - him (?)
3. John, waiter, french fries
4 & 5. i) John asked the waiter to get french fries ii) Waiter asked John to get french fries (invalid, from KB) iii) 'John asked FF', 'FF asked John', 'Waiter asked FF', 'FF asked waiter' are eliminated by the program knowing that He and Him cannot apply to FF. iv) Since the word 'himself' is absent, John telling John & Waiter telling waiter are rejected

The first possibility is the valid pronoun assignment.


3) John was watching TV. He remembered a chore.
1. remembered a chore
2. Doer - He (John), Doee - chore
3. John, TV, chore
4 & 5. (He) John = John, chore = John  -> John remembered John  -> John remembered himself.
       (He) John = John, chore = TV    -> John remembered TV    -> John remembered something about the TV.
       (He) John = John, chore = chore -> John remembered chore -> Normal; as given.
       (He) John = TV, chore = John    -> TV remembered John    -> TV showed John the chore; TV showed something which reminded John of the chore.
       (He) John = TV, chore = TV      -> TV remembered TV      -> Absurd.
       (He) John = TV, chore = chore   -> TV remembered chore   -> TV had chores programmed in it as reminders for the viewer.
       (He) John = chore, chore = John -> chore remembered John -> Remembering requires a mental agent, which a chore isn't.
       (He) John = chore, chore = TV   -> chore remembered TV   -> Remembering requires a mental agent, which a chore isn't.
       (He) John = chore, chore = chore -> chore remembered chore -> Remembering requires a mental agent, which a chore isn't.
    


First law of Commonsense -



Something is "allowed" if it is LIKE something else which is allowed.

Consider this piece of commonsense knowledge - Roads cannot fly.

Now I ask you - why cannot roads fly? One response would be that they don't have wings to fly. 
Then I would say - even human beings fly; we say "I flew to New York".
Then you would say - well, then they would need to be carried in an airplane like humans are. 
I would say - carry them.
Then you would say - well, then they would have to be cut/sliced from the earth/ground. Then that sheet would have to be placed on the floor of a big airplane and flown.

Now, notice what you have done at every stage. First, you said - they don't have wings. Then you said they would need to be carried in an airplane like humans are. And then you said they would need to be sliced and then placed on the floor of a big airplane to be flown. At every stage, you have LIKENED the road to something else in whose case the possibility of the proposition was knowingly allowed. First to a bird, then to being carried in an airplane like humans are, and then sliced like, say, a layer of cream from cake and then to something like cargo being placed on the floor of an airplane for them to be "flown". 
(You have never considered the road, just on its own, as to be able to be done with those propositions on, so to speak).

This is how commonsense thinking works - you liken a new proposition to something else in whose case the proposition is valid/allowable. In one way, this would seem obvious, because ultimately the proposition is something "in the external world" which is "coming at you" and all you have to "deal with it" is your brain, in which is stored valid knowledge, with whose elements you liken the constituents of the proposition, for considering the latter's allowability/possibility. 
We automatically seek recourse in that for which we already have evidence/knowledge.
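The law can be sketched as a recursive likeness check. The `allowed` facts and `likened_to` pairs below are illustrative assumptions drawn from the road example:

```python
# Sketch of the first law: a proposition about X is tentatively "allowed" if
# X is LIKE some Y for which the proposition is already known to be allowed.
allowed = {("bird", "fly"), ("cargo", "fly")}
likened_to = {
    "human": ["cargo"],   # humans are carried in airplanes, like cargo is
    "road": ["cargo"],    # a sliced road sheet is like cargo on a plane floor
}

def is_allowed(entity, proposition, seen=None):
    """Allowed directly, or by likeness to something that is allowed."""
    seen = seen or set()
    if (entity, proposition) in allowed:
        return True
    return any(is_allowed(like, proposition, seen | {entity})
               for like in likened_to.get(entity, []) if like not in seen)

print(is_allowed("road", "fly"))    # -> True, via likeness to cargo
print(is_allowed("statue", "fly"))  # -> False: nothing here to liken it to
```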


LEVELS OF DATA -



Any data can be seen at many levels. Consider this data - 


John kicked the ball.


How other elements present in the data relate to the subject of consideration (mostly the action) leads to a new level of viewing the data at. Each such relation leads to a kind of inference.


Illustration : 


Consider the action ‘kicked’. So, the subject of consideration is the action i.e. kicked.


The following are 2 of the different levels of viewing the data at - 


LEVEL A - You see this as an instance of application of force. 

What led to seeing things at that level - THE ACTION TOUCHED THE BALL. That is, relating the subject to the ball.

Inference drawn from seeing things at that level - You think that the ball must have moved away.


LEVEL B - You see this as an act requiring energy.

What led to seeing things at that level - HUMAN DID THE ACTION. That is, relating the subject to the human doer - John.

Inference drawn from seeing things at that level - You think that John must have been a bit tired.
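The two levels can be sketched as a lookup from relation to (level, inference). The entries simply mirror Levels A and B above; this is a data layout, not a general mechanism:

```python
# Sketch: each relation between the subject of consideration (the action
# 'kicked') and another element of the data opens a level of viewing, and
# each level carries its own inference. Entries mirror Levels A and B.
levels = {
    "the action touched the ball": (
        "an instance of application of force",
        "the ball must have moved away"),
    "a human did the action": (
        "an act requiring energy",
        "John must have been a bit tired"),
}

def view(relation):
    """Return the level opened by a relation, with the inference it yields."""
    level, inference = levels[relation]
    return f"seen as {level}; inference: {inference}"

print(view("the action touched the ball"))
```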



The Psychological Mechanism of Deduction -



The following are the kinds of relevant, intelligent things that can come in one's mind upon being exposed to some data.

The ones not covered are mostly the variants, in one way or the other, of these.

How the mind works in each case is mentioned, followed by a way to exploit the same on a machine.



  1. Suppose someone teaches you ‘If you shoot someone, he will be killed’. Then, one of the kinds of intelligent things that can come to your mind is - So if I want to kill someone, one of the ways is to shoot him! How does this work in the mind? How did this come to your mind?


Well, when you hear the part - he will be killed - you identify it to be a GOAL - say, your goal or someone else’s goal you know of. So this is simply a case of seeing one of the goals you know / have to be a part of some data which is presented to you. So this is partly luck also. Seeing the goal in your mind in some implication you come across, makes you reverse the link of implication and gives you one way to achieve the goal present in your mind. This is the mechanism of the thought occurring to you.


This can be exploited mechanically on a machine. Every ‘A (implies) B’ link implies that one of the ways to achieve goal B is to do A !
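This reversal can be sketched in a few lines; the implication links below are illustrative assumptions:

```python
# Sketch: every stored 'A => B' link, read backwards, offers A as one way
# to achieve goal B. The links are illustrative assumptions.
implications = [
    ("shoot someone", "he is killed"),
    ("drop an object", "it falls down"),
    ("poison someone", "he is killed"),
]

def ways_to_achieve(goal):
    """Reverse every implication link whose consequent matches the goal."""
    return [antecedent for antecedent, consequent in implications
            if consequent == goal]

print(ways_to_achieve("he is killed"))  # -> ['shoot someone', 'poison someone']
```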



  2. Suppose you hear on TV that India and Pakistan have started a war with each other. One relevant thought that comes to your mind is - oh ! so now there will be missiles onto my house. How did this effect of the given data come to your mind? 


The key was to identify yourself to be a part of India, (which has engaged in a war with Pakistan). This was about identifying something in a data to be the same as, or a part of, or relating to, something and then turning the effects of that data (that there will be missiles fired onto each other) onto that ‘something’.


This can be exploited on a machine - given any data, retrieve all that you know about every part of that data (from Wikipedia etc.), combine the two (apply / plug in each information you retrieved, to the data) and you have a deduction.



  3. Why + Generalisation - You see someone pull out a gun. You think - Now he is going to kill the other fellow in front of him. How did this come to mind?


Well, the key here was to question, i.e. ask a WHY to a part of the data you saw. You asked a ‘why’ to the pulling out of the gun, the generalizing of which, and applying the result to the specific case again, led to that thought.


Even this can be exploited on a machine. Ask a ‘why’ to parts of data (from the Web - wikipedia etc.) and combine the general answer re-applied specifically, with the original data.



COMMON SENSE v/s COMMON FACTORS -



We say that if you drop something, it will fall down. Now, this is not mainly because you dropped it. Of course, if you didn't drop it, it wouldn't have fallen, mostly. But the stronger reason is gravity. The reasons gravity is the stronger reason are that 1. it is more importantly responsible for the phenomenon - the effect (the falling down) - than the dropping is (since in space, dropping won't lead to falling down) and 2. it is present everywhere on earth. Let me elaborate on the second reason more. There are ‘common factors’ everywhere during one's life - like gravity and people and buildings etc. They will be present in descriptions of most scenes and events, in general. So why repeat them again and again? Don't mention them. Assume them. That's how they become commonsense. Gravity is commonsense for most of us for all of our lifetimes. So the actual and relevant piece of commonsense or “common factor” here (i.e. when we are talking of objects falling when dropped) is, instead, that ‘gravity, which pulls things down, is everywhere’. This is the common, repeating, assumed part, everywhere and every time, during one’s life (since it is rare that someone will go to a non-gravity zone in life). For someone who makes frequent trips to space and spends a significant portion of his life there, it wouldn't be so much of a “common” sense that ‘if you drop things they will fall down’. 

Similarly we say that - "it is commonsense that if you cut a finger, it will start bleeding. But if it is the finger of a statue, it won't". Here the relevant actual piece of commonsense is that ‘most fingers are human fingers’. This is the common “FACTOR” ruling everywhere and every time, which is responsible for the piece of commonsense of ‘bleeding if you cut a finger’ existing. It is also responsible for that piece of commonsense being uttered and mentioned the way it is. Note : you didn't mention gravity while saying the piece of commonsense that 'if you drop something it will fall down'!


PRECURSOR THOUGHT-FRAGMENTS TO SPEECH -



Here is a small theory about what part of a sentence comes to mind before it is uttered (or framed just before it is uttered). 


Suppose you and I are travelling in a car and we happen to pass by a school. I say to you - 

“This was my school till grade 5”. 


Now, there are 2 parts to this sentence - the Understanding-intensive part and the Knowledge/Fact-intensive part. The factual parts are ‘This’ (signifying it's this building), ‘was’ (signifying that what I am telling was in the past), ‘my’ (signifying it was mine), ‘school’ (signifying it to be a place of learning we all go to), ‘grade 5’ (signifying the 5th level of the learning gradation standard). The understanding or reason-intensive part is the word ‘till’.


The ‘till’ is the stitch. The rest are just static items, which might as well have been something else also. The stitch is the connector of the 2 sets of chunks (on the left and right of it, here). 


Always, in sentences, the “small-words” like till, until, upon, at, around, about, onto, on etc. are the understanding-intensive parts. They are the stitches - the connectors. They are the zones in the sentence where the critical understanding part of the sentence lies. As said before, the rest of the chunks could be anything. The sentence might as well have been “Don't move till I signal”. Here the critical understanding part still remains to be the “till”. This is the thinking component in the cognition of the sentence.

When we are generating something in the mind to speak, the first part that happens is the reason-intensive part. That's what's linked to the thought that inspires what's to be said. 

That engages the brain-resources the most - more than the factual elements (knowledge-parts). 

Here, the first part that comes to mind is “till grade 5”. Note that the stitch is accompanied by one of the sets of chunks (‘grade 5’). That's the key part, in my view, of what's conceptualised, as something to be said.


When you say “Today I am going to sleep without the blanket”, the inception of the thought corresponds to “Without the blanket”, which is the first-generated part, before the speech. The ‘without’ is the stitch - the connector - the reason-intensive part.


Another consequence of these stitches is that the “mostly/typically-following-up-parts” to sentences i.e. the kinds of things that are mostly/typically spoken as a response by the other person to an uttered sentence are inspired by these stitches. We talked about speaking above. Responding is similar. We process sentences as being composed of 2 kinds of parts - knowledge-intensive parts and reason-intensive parts, when words are thrown at us. The reason-intensive part is what is picked up as the centre of cognition around which the understanding of the sentence is built. The response follows / is inspired by this reason-intensive part. In the case of the grade 5 sentence, the typical follow-up is - “and after grade 5?” In the case of the blanket sentence, the most conspicuously noticed part would be “without the blanket” and the response would be focussed on that “why without the blanket?”
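A toy sketch of picking out the stitch and generating the typical follow-up; the stitch list and the 'why …?' response template are my assumptions:

```python
# Sketch: split a sentence into knowledge-intensive chunks and the
# reason-intensive "stitch", and build the typical follow-up response
# from the stitch plus the chunk accompanying it.
STITCHES = {"till", "until", "without", "upon", "onto", "about", "around"}

def stitch_part(sentence):
    """Return the stitch with its accompanying chunk, e.g. 'till grade 5'."""
    words = sentence.lower().rstrip(".").split()
    for i, w in enumerate(words):
        if w in STITCHES:
            return " ".join(words[i:])
    return None  # no stitch found

def follow_up(sentence):
    """The typical response is inspired by the reason-intensive part."""
    part = stitch_part(sentence)
    return f"why {part}?" if part else None

print(stitch_part("This was my school till grade 5"))
# -> till grade 5
print(follow_up("Today I am going to sleep without the blanket"))
# -> why without the blanket?
```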



WORDS AND ACTIONS



The ‘Physical’ and the ‘Semantic’, seen via a problem-solving strategy for making sense of something - 


Here is a problem-solving strategy : “Hold it tightly, upright and steady, in one place, and build other things around it with reference to it.” 

This is both physical and Semantic. Lets see how. 


Consider this sentence - Bill is a Physics undergraduate.


Now, “a Physics undergraduate” (which could also be described as “a Physics-undergraduate”) is “invented” English, after undergraduate programs in universities began in the world. We neither say “undergraduate of Physics” nor do we say “undergraduate of the subject Physics” nor “Physics’s undergraduate” which would be the logical constructions of the same. What have we done to solve this problem of naming when elements like ‘Physics’ and ‘undergraduate’ were floating around in the mind begging to be arranged in a meaningful form as a name? We held Physics tightly upright steady in one place - asserted the word Physics. And then we attached the others - undergraduate (in this case) - to it. We built the world around the word Physics with the rest being arranged with reference to it around it - attached to it. Even a physical scenario is similar. When we have to arrange some physical items and are confused, what do we do? We begin; we take a first step - we fix one of them tightly upright and steady in one place, and then arrange the rest around it with reference to it (on the basis of some relative rationale as regards to it). 

The problem solving strategy - in the Semantic and physical realms - is the same ! 


This shows that in some senses, our ways to deal with words have been inspired by those to deal with physical objects. More importantly, our Semantic and physical realms imitate each other.



THE FORMULA FOR NAMES OF WORDS



Naming of words is mostly on the basis of the actions or causes or effects, related to those entities/phenomena (whose naming is done by those words), in our personal/sensory experiences with those entities/phenomena.


Introduction to anything (i.e. to meaning of anything) begins by “sensing” it in some way - in reality or in imagination. 


What does ‘liquid’ mean to us? That “wet”, “loose” thing we feel on our hands or bodies. That's the sensory meaning of liquid for us. But that meaning has to be converted into a word (for the mind). So we call it ‘liquid’, which is its STATE. The state relates to the sensory experience by way of being the CAUSE of the same (i.e. the cause of the wetness).


What does ‘slippery’ mean to us? That “quick-moving” thing upon touch. Slippery comes from ‘slip’, which we do upon stepping on a slippery surface. So here the naming is done on the basis of the EFFECT of the entity upon us in our experience with that entity/phenomenon.


‘Hard’ is called hard on the basis of the breakability of the entity upon exertion of force on it, which relates to / is an ACTION done ON the entity. Note - here the action is not done by the entity, but ON the entity (by us). 


Why is ‘Long’ called long? It represents the extent of something (the length) of say, a stick. This extent is in a sense a manifested action of the entity (here, a stick) on the world around it, by virtue of its existence. So here we have an ACTION done by the entity on our perceptual world.



GENERATING A CONTEXT IN THE MIND -



Whenever we are presented with a sentence, we tend to automatically imagine/construct a background scenario / context for it.


If I tell you that 'John killed the elephant', it would be very easy to imagine that this must have been in some jungle somewhere where John was probably a hunter.

But if I say, slightly modifying the sentence as ‘John planned to kill the elephant’ it's a bit more difficult to imagine the context than before. The (planned + elephant) out of “planned to kill the elephant” makes it difficult. That bracket is the rare part. That is the less heard, encountered, experienced part. (Also, note that here a certain effect is operating. After presenting the first sentence and talking about a possible context it is easy to modify it and imagine the context in this next case. If this sentence was presented fresh, at first, things would have been different!)


It is the rare parts which make the conceptualization/imagination of a contextual scenario difficult. We will come to this later.


We begin thinking from the rare / conspicuous parts. We don't start from ‘killing’ (which is the common part), we start from ‘elephant’, (which is a bit rare an element by itself since we don't encounter elephants commonly in some form/medium in life, and is also more "concrete" an entity than the abstract 'planned'), which leads to the 'jungle'. Then there is also the part ‘planned’ which has to be fitted in and hence (planned + jungle) is a bit hard. 


But it's not just the individual words whose rarity or commonness decide things.

Suppose I say, ‘We killed the polar bear’.

Here the combination (we + polar bear) in the sentence ‘we killed the polar bear’ is the rare part. Killing is fine (common). But who could this ‘we’ be? And of course, more importantly, a polar bear?


Some different combinations of words from among those in the presented sentence happen / juggle in the mind, till the mind settles/fixates at the rarest combination, to start generating the imaginary context from.
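The fixation on the rarest combination can be sketched with stand-in co-occurrence counts; real corpus statistics are assumed to exist but are not used here:

```python
from itertools import combinations

# Sketch: the mind juggles combinations of the sentence's words and fixates
# on the rarest one as the seed for imagining a context. The co-occurrence
# counts are stand-in assumptions for real corpus statistics.
cooccurrence = {
    ("we", "killed"): 120,          # common enough: people kill things
    ("killed", "polar bear"): 40,   # hunters do kill polar bears
    ("we", "polar bear"): 2,        # rare: an ordinary 'we' with a polar bear
}

def rarest_combination(words):
    """Pick the pair with the lowest co-occurrence count."""
    pairs = [p for p in combinations(words, 2) if p in cooccurrence]
    return min(pairs, key=lambda p: cooccurrence[p])

print(rarest_combination(["we", "killed", "polar bear"]))
# -> ('we', 'polar bear'), the part context-generation starts from
```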


Why hasn’t someone tried this simple idea till now?



We want relevant commonsensical (or otherwise) inferences to be generated automatically from sentences.

Here is a way to get some basic inferences from a sentence.


The key idea is this - With a word at the centre, branch out from it all of its DEFINITIVE PROPERTIES parsed from the dictionary meaning of the word.

For example, consider the word - ‘book’.

The dictionary meaning of book is - a written or printed work consisting of pages glued or sewn together along one side and bound in covers.

This gets parsed as - 

written or printed work, consisting of glued or sewn pages, and bound in covers


Now, write book as - 





Now, consider the sentence - A book dropped from the table.


The key words are - book, dropped and table.


STEPS - 

  1. Generate the definitive properties of each word

  2. Combine every definitive property of every word with that of each other in sets of 2 or 3 at a time.

  3. Each such combination will be a “chunk of a sentence”.

  4. Apply all above steps repeatedly over each chunk of a sentence generated.


This process repeated till a loop-depth of say 3 or 4 would enumerate a lot of relevant commonsensical and uncommonsensical inferences from the given sentence.
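Steps 1-3 can be sketched as follows; the hand-written property lists stand in for the parsed dictionary definitions of step 1:

```python
from itertools import combinations, product

# Sketch of steps 1-3: branch the definitive properties out of each key word
# and combine one property from each of 2 words into candidate "chunks of a
# sentence". The property lists are hand-written assumptions.
properties = {
    "book": ["printed work", "glued or sewn pages", "bound in covers"],
    "dropped": ["fell under gravity", "left someone's hold"],
    "table": ["flat raised surface", "has legs"],
}

def chunks(words, size=2):
    """Step 2: every combination of one definitive property per chosen word."""
    out = []
    for combo in combinations(words, size):
        for props in product(*(properties[w] for w in combo)):
            out.append(" + ".join(props))
    return out

candidates = chunks(["book", "dropped", "table"])
print(len(candidates))  # 3*2 + 3*2 + 2*2 = 16 pairwise chunks
```

Step 4 would then feed each chunk back through the same process, up to the loop-depth of 3 or 4 suggested above.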



Principle used


Suppose I tell you - John gave a book to Jack.

Now there is no general magic formula to draw relevant inferences from any given sentence because, say, it could be that Jack is an extremely non-studious boy who spends all his time in sports and movies such that he hardly has touched a book. In that case, a relevant inference from this sentence would be - “a book given ! And that too to Jack! Good heavens!” meaning it is like giving a cricket bat to play cricket to a man who has never touched one in his whole life!


Now, 2 things happened here - 


  1. Certain relevant sub-parts of the sentence combined to give something relevant. (John didn't turn out to be relevant (say, he is a regular guy)).

  2. It is the specific definitive property of Jack that counted in this inference being drawn. 


So any definitive property of any of the words might combine with that of the other(s) to give some RELEVANT inference.


This principle is exploited.
