Text Mining and NLP: From R to Python

时间:2022-10-17 15:50:53

First of all, I should say that I am new to Python. At the moment I am "translating" a lot of R code into Python and learning along the way. This question relates to an earlier one on replicating R in Python (where the answers actually suggest wrapping the R code with rpy2, which I would like to avoid for learning purposes).

In my case, rather than exactly replicating R in Python, I would actually like to learn a "Pythonic" way of doing what I describe here:

I have a long vector (40000 elements) in which each element is a piece of text, for example:

> descr
[1] "dress Silver Grey Printed Jersey Dress 100% cotton"
[2] "dress Printed Silk Dress 100% Silk Effortless style."                                                                                                                                                                                    
[3] "dress Rust Belted Kimono Dress 100% Silk relaxed silhouette, mini length" 

I then preprocess it as, for example:

# libraries used: tm, data.table, and amap (for Dist)
# customized function to remove repeated patterns in strings, used later within tm_map
rmRepeatPatterns <- function(str) gsub('\\b(\\S+?)\\1\\S*\\b', '', str,
                                       perl = TRUE)

# process the corpus (custom functions need content_transformer in recent tm)
pCorp <- Corpus(VectorSource(descr))
pCorp <- tm_map(pCorp, content_transformer(tolower))
pCorp <- tm_map(pCorp, content_transformer(rmRepeatPatterns))
pCorp <- tm_map(pCorp, removeWords, stopwords("english"))
pCorp <- tm_map(pCorp, removePunctuation)
pCorp <- tm_map(pCorp, removeNumbers)
pCorp <- tm_map(pCorp, stripWhitespace)
pCorp <- tm_map(pCorp, PlainTextDocument)

# create a term-document matrix (control functions can also be passed here)
# and a table: word - freq
Tdm1 <- TermDocumentMatrix(pCorp)
freq1 <- rowSums(as.matrix(Tdm1))
dt <- data.table(terms = names(freq1), freq = freq1)

# and perhaps even calculate a distance matrix
# (transpose because Dist operates on a row basis)
D <- Dist(t(as.matrix(Tdm1)))
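
For reference, the term-document-matrix, word-frequency, and distance steps at the end have a rough Python counterpart. Below is a minimal pure-Python sketch (in practice one would likely reach for scikit-learn's `CountVectorizer` and scipy's distance functions); the token lists are assumptions standing in for a preprocessed `descr`:

```python
import math
from collections import Counter

# assumed stand-in for a preprocessed, tokenised descr vector
docs = [["silver", "grey", "jersey", "dress"],
        ["printed", "silk", "dress"]]

# the vocabulary gives the rows of the term-document matrix
vocab = sorted({t for doc in docs for t in doc})
counts = [Counter(doc) for doc in docs]
# one column per document, as in TermDocumentMatrix
tdm = [[c[t] for c in counts] for t in vocab]

# word - freq table, the rowSums() equivalent
freq = {t: sum(row) for t, row in zip(vocab, tdm)}

# Euclidean distance between the two document columns
# (transposing, as Dist(t(...)) does in the R code)
cols = list(zip(*tdm))
dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(cols[0], cols[1])))
```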

Overall, I would like to know an adequate way of doing this in python, mainly the text processing.

For example, I could remove stopwords and numbers as described here: get rid of StopWords and Numbers (although it seems like a lot of work for such a simple task). But all the options I see imply processing each text itself rather than mapping over the whole corpus. In other words, they imply "looping" through the descr vector.

Anyway, any help would be really appreciated. Also, I have a bunch of customised functions like rmRepeatPatterns, so learning how to map these would be extremely useful.

Thanks in advance for your time.

1 solution

#1


Looks like "doing this" involves making some regexp substitutions to a list of strings. Python offers a lot more power than R in this domain. Here's how I'd apply your rmRepeatedPatterns substitution, using a list comprehension:

import re

pCorp = [ re.sub(r'\b(\S+?)\1\S*\b', '', line) for line in pCorp ]

If you wish to wrap this in a function:

def rmRepeatedPatterns(line):
    return re.sub(r'\b(\S+?)\1\S*\b', '', line)

pCorp = [ rmRepeatedPatterns(line) for line in pCorp ]

Python also has a built-in map function that you could use with your function:

pCorp = list(map(rmRepeatedPatterns, pCorp))  # in Python 3, map returns a lazy iterator

But list comprehensions are more powerful, expressive and flexible; as you see you can apply simple substitutions without burying them in a function.

Additional notes:

  1. If your datasets are large, you can also learn about using generators instead of list comprehensions; essentially they let you generate your elements on demand, instead of creating a lot of intermediate lists.

  2. Python has some functional tools like map, but if you'll be doing a lot of matrix manipulations you should read about numpy, which offers a more R-like experience.

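
To make the first note concrete, here is the same substitution written as a generator expression; a small sketch, with the input list assumed:

```python
import re

# assumed sample input, standing in for the real corpus
pCorp = ["dress Silver Grey Printed Jersey Dress",
         "say hehehe now"]

# parentheses instead of brackets: nothing is computed until you iterate
cleaned = (re.sub(r'\b(\S+?)\1\S*\b', '', line) for line in pCorp)

for line in cleaned:
    print(line)
```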
Edit: Having looked again at your sample R script, here's how I'd do the rest of the clean-up, i.e. take your list of lines, convert to lower case, drop punctuation and digits (specifically: everything that's not an English letter), and remove stopwords.

import re
import nltk  # needs the stopword corpus: run nltk.download("stopwords") once

# Lower-case, split into words, discard everything that's not a letter
tok_lines = [ re.split(r"[^a-z]+", line.lower()) for line in pCorp ]
# tok_lines is now a list of lists of words

stopwordlist = nltk.corpus.stopwords.words("english") # or any other list
stopwords = set(w.lower() for w in stopwordlist)
# "if t" also drops the empty strings re.split leaves at line edges
cleantoks = [ [ t for t in line if t and t not in stopwords ]
                                        for line in tok_lines ]
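
To finish mirroring the R script, the word - freq table can then be built from cleantoks with collections.Counter; a sketch, with the token lists assumed:

```python
from collections import Counter

# assumed output of the clean-up step above
cleantoks = [["silver", "grey", "jersey", "dress"],
             ["printed", "silk", "dress"]]

# flatten the token lists and count, like the terms/freq data.table
freq = Counter(t for line in cleantoks for t in line)
top = freq.most_common(2)
```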

I wouldn't advise using either of the proposed solutions in the question you link to. Looking up things in a set is a lot faster than looking them up in a large list, and I would use a comprehension instead of filter().
