Saturday, March 6, 2021

GENERATING COMMONSENSE INFERENCES

What is a 'word'? What is a 'gun'? What is a 'wedding'? Simply speaking, a word is a label language assigns to an instance of reality. When a certain instance of reality meets a certain number 'n' of criteria, it is assigned the corresponding word 'W'. When there is a ceremony with a bride and a groom, with attendees, with a function, etc., all of these are collectively called a 'wedding'. Similarly, when there is a pipe, a trigger and a handle, they are collectively called a 'gun'. So when these enlisted criteria are met, the respective word is ascribed to that piece of reality.
Now, when someone describes a piece of experienced reality with the word 'gun' in the description, it means a gun has been perceived. That means a pipe, a trigger and a handle have been collectively perceived. This set, collectively, is a gun. These three are the key features - identifiers - that satisfy the criteria for an object to be a gun. These identifiers don't have any "language": they are sensory inputs into the brain - mostly visual and auditory signals. Now, in the very perception of a gun, i.e., in the identification of the features which satisfy the criteria for an object to be a gun, there is commonsense involved. This is because the rest of the object isn't talked about and could be anything. But commonsense tells us that if these criteria are met, more often than not the reality in question corresponds to the word associated with that set of criteria. (In some cases, the perceived features might even be fewer than the number required for the criteria for ascription of the word to be met.)
So when we talk of 'commonsense applied to NLP', we have to bear in mind that in the very perception of the words which make up the knowledge in a sentence, there is commonsense involved. Let's exploit this commonsense.

So, a gun = pipe + trigger + handle
Or rather, gun = visual symbol (pipe) + VS (trigger) + VS (handle), where VS = visual symbol.

Similarly, fall = visual symbol (object at top initially) + visual clip (rapid vertical descent) + visual symbol (object toward the bottom)

Make a database of ALL the words in the dictionary in terms of the ACTUAL SYMBOLS and CLIPS (not in terms of the words written in the brackets above) that make up the criteria for a certain instance of reality to be ascribed that word.

So, we will have an entry in the database like, W = S1 + S2 + ...
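In code, such a database entry could be sketched as follows. This is a minimal illustration only: the symbol names are hypothetical string stand-ins, whereas the real entries would be the actual visual symbols and clips themselves, not words.

```python
# Hypothetical symbol database: each word W maps to the set of perceptual
# symbols S1, S2, ... whose joint presence satisfies the criteria for
# ascribing that word. Sets (not lists) are used because the criteria
# have no inherent order.
SYMBOL_DB = {
    "gun": frozenset({"VS_pipe", "VS_trigger", "VS_handle"}),
    "fall": frozenset({"VS_at_top", "CLIP_vertical_descent", "VS_at_bottom"}),
    "wedding": frozenset({"VS_bride", "VS_groom", "VS_attendees", "VS_ceremony"}),
}
```

In a real system, the values would be perceptual representations (images, clips, or embeddings of them) rather than strings.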

Now, write any sentence - The gun fell down.
Break each word into its constituent symbols. You will then have a string of symbols - S1, S2, S3, ..., Sn - collectively for all the words in the sentence.
Now comes the key trick. Generate all possible combinatorial sub-sets of this set of symbols.
Now,
1) Check each such sub-set against the database. If a word in our database matches a sub-set, it is very likely that the matched word is an aspect of the 'commonsense-inferential-space' of that sentence!
2) Check whether a sub-set stands for another new sentence altogether. Very likely it is, as a whole, a direct commonsense inference from the original sentence! How do we check for such a sentence match? Well, we would need a ready database of all the combinations of all the symbols involved in all the words in the dictionary, with a sentence written down - in case a meaningful one exists - corresponding to each such combination.
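The procedure above (pool the symbols of a sentence's words, enumerate sub-sets, match them back against the database) can be sketched as follows. This is a toy illustration with stand-in string symbols; the 'weapon' entry is invented purely so that a sub-set match exists to demonstrate step 1.

```python
from itertools import combinations

# Toy symbol database (hypothetical string stand-ins for visual symbols/clips).
# The "weapon" entry is an assumption made for illustration: it matches a
# strict sub-set of the symbols of "gun".
SYMBOL_DB = {
    "gun": frozenset({"VS_pipe", "VS_trigger", "VS_handle"}),
    "fall": frozenset({"VS_at_top", "CLIP_vertical_descent", "VS_at_bottom"}),
    "weapon": frozenset({"VS_pipe", "VS_trigger"}),
}

def decompose(sentence_words):
    """Pool the constituent symbols of every word in the sentence."""
    pooled = set()
    for w in sentence_words:
        pooled |= SYMBOL_DB.get(w, frozenset())
    return pooled

def commonsense_inferences(sentence_words):
    """Match every sub-set of the pooled symbols against the database;
    words matched this way (other than the originals) are candidate
    members of the sentence's commonsense-inferential space."""
    pooled = sorted(decompose(sentence_words))
    matches = set()
    for r in range(1, len(pooled) + 1):
        for subset in combinations(pooled, r):
            s = frozenset(subset)
            for word, criteria in SYMBOL_DB.items():
                if s == criteria and word not in sentence_words:
                    matches.add(word)
    return matches

# For "The gun fell down" (content words: gun, fall), the sub-set
# {VS_pipe, VS_trigger} matches the toy "weapon" entry.
print(commonsense_inferences(["gun", "fall"]))
```

Note that exhaustive sub-set enumeration is exponential in the number of pooled symbols, so a practical system would instead index the database by symbol and look up only the criteria-sets whose symbols all occur in the pool.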
