Another attempt at LSI

Time: 2021-05-25 04:16:15

Last week I was at the Natural Language Processing meetup in NZ.

Someone asked a question: is there any existing library or solution that can extract certain information from a natural language dataset on a specific topic, for example, to find objective facts and subjective sentiment in a bunch of customer complaints?

I have been doing something similar in this realm, and I think LSI or Word2Vec is currently the best solution to this problem. But my poor spoken English could not match either my confidence or LSI's power; my explanation of LSI was so terrible that someone gave me a few hints, very politely and cryptically, and I could tell some of you were far from convinced.

I felt sorry for LSI; it was not its fault that I didn't present it in a proper way. Thanks to Alyona and everyone at the meetup, I appreciate the opportunity to meet nice people and interesting ideas there. So, here are my two cents.

We have been trying to build different models of natural language. Querying a natural language dataset is the problem of building a model that can interpret a query into results. The question raised at the last meetup was how to build a model that can extract information from a natural language dataset based on a specific syntactic structure and some specific semantics.

We have plenty of natural language parsers now, so I guess syntactic structure is not the problem here.
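
For instance, a dependency parser already gives us the syntactic structure for free. Here is a minimal sketch with spaCy, assuming the library and its small English model (en_core_web_sm) are installed; the sample sentence is an invented complaint:

```python
# A minimal sketch, assuming spaCy and en_core_web_sm are installed:
#   pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The delivery was late and the package arrived damaged.")

# Print each token with its dependency relation and its head,
# which exposes the syntactic structure of the complaint.
for token in doc:
    print(token.text, token.dep_, token.head.text)
```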

As for semantics, there are some rule-based approaches, such as WordNet and FrameNet. However, I found it difficult to map a word to semantically similar words, because a word usually has multiple meanings, and with these approaches there is no way to pick the right sense of a word in a specific context. You could end up with different results for different meanings of a word. Furthermore, there is no threshold that determines how far you can go in the WordNet graph, at least not in a mathematically decidable way.
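
To see the problem concretely, here is a small sketch with NLTK's WordNet interface, assuming the wordnet corpus has been downloaded via nltk.download("wordnet"); the word "bank" and the two senses picked out below are just illustrative:

```python
from nltk.corpus import wordnet as wn

# "bank" has many unrelated senses; nothing in WordNet itself
# tells us which one a given complaint is using.
for synset in wn.synsets("bank"):
    print(synset.name(), "-", synset.definition())

# Similarity depends entirely on which senses we pick, and there
# is no principled cutoff for "similar enough" in the graph.
river = wn.synset("bank.n.01")   # sloping land beside water
money = wn.synset("bank.n.02")   # financial institution
print(river.path_similarity(money))
```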

LSI is a tool for finding similarity among documents; it stems from singular value decomposition (SVD). After mapping all documents into a latent space, we can find similarities between documents along different dimensions, and of course we can get a distance between any two words in a specific dimension.
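
To make the idea concrete, here is a toy sketch of the SVD step behind LSI using plain numpy; the term-document matrix and the rank k are invented for illustration:

```python
import numpy as np

# Rows are terms, columns are documents (raw counts).
terms = ["refund", "broken", "late", "happy"]
A = np.array([
    [2, 0, 1, 0],
    [1, 2, 0, 0],
    [0, 1, 2, 0],
    [0, 0, 0, 3],
], dtype=float)

# Truncated SVD: keep only the k strongest latent dimensions.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]   # each term as a k-dim vector

# Cosine similarity between two terms in the latent space.
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(term_vecs[0], term_vecs[1]))  # refund vs broken
print(cos(term_vecs[0], term_vecs[3]))  # refund vs happy
```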

Word2Vec is based on the same idea of embedding words in a vector space, but the algorithm is different: it trains a shallow neural network to predict context words rather than factorizing a term-document matrix.
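
A minimal sketch with gensim's Word2Vec (using the gensim >= 4.0 API, where the parameter is vector_size rather than size); the toy sentences stand in for a real complaints corpus:

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "delivery", "was", "late"],
    ["the", "package", "arrived", "broken"],
    ["i", "want", "a", "refund", "for", "the", "broken", "item"],
]

model = Word2Vec(sentences, vector_size=50, window=3, min_count=1)

# Distance between two words as learned from this tiny corpus.
print(model.wv.similarity("late", "broken"))
```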

With these tools, by training a model on a dataset about some topic, we can determine the distance between any two words in that dataset (or topic). Then we can map a query to a set of sentences using this similarity or distance with respect to the specific topic.
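
Here is a sketch of that querying step with gensim's LsiModel; the corpus and the query are invented examples, and num_topics is an arbitrary choice for such a tiny corpus:

```python
from gensim import corpora, models, similarities

docs = [
    ["delivery", "was", "late", "again"],
    ["package", "arrived", "broken"],
    ["support", "was", "friendly", "and", "helpful"],
]

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

# Train a 2-dimensional LSI model on the topic-specific corpus.
lsi = models.LsiModel(bow_corpus, id2word=dictionary, num_topics=2)
index = similarities.MatrixSimilarity(lsi[bow_corpus])

# Map a query into the same latent space and rank the documents,
# i.e., retrieve the sentences closest to the query in this topic.
query = dictionary.doc2bow(["late", "delivery"])
for doc, score in zip(docs, index[lsi[query]]):
    print(round(float(score), 3), " ".join(doc))
```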

LSI, or a similar algorithm, is really just another piece of the puzzle.

'''Note'''
We still have to deal with the problem of how to model the multiple meanings of a word.