This post walks through a Python implementation that counts word occurrences in text files. It is shared here for reference; the details follow.
The task is this: you have a directory holding a month's worth of diary entries, all plain .txt files. To sidestep word-segmentation issues, assume the contents are English, and report the word you consider most important in each entry.
In essence, that means finding the most frequent word in each file while skipping common conjunctions, prepositions, auxiliary verbs, and the like. The code:
# coding=utf-8
import collections
import os
import re

# Common words to skip when picking the "most important" word
USELESS_WORDS = ('the', 'a', 'an', 'and', 'by', 'of', 'in', 'on', 'is', 'to')

def get_important_word(path):
    word_counter = collections.Counter()
    with open(path) as f:
        for line in f:
            word_counter.update(re.findall(r'\w+', line.lower()))
    if not word_counter:
        return
    # Walk down the frequency ranking until we get past the stop words
    most_important_word = word_counter.most_common(1)[0][0]
    count = 2
    while most_important_word in USELESS_WORDS:
        most_important_word = word_counter.most_common(count)[count - 1][0]
        count += 1
    num = word_counter[most_important_word]  # frequency of the chosen word
    print('the most important word in %s is %s, it appears %d times'
          % (path, most_important_word, num))

if __name__ == '__main__':
    filepath = '.'
    for dirpath, dirnames, dirfiles in os.walk(filepath):
        for filename in dirfiles:
            if os.path.splitext(filename)[1] == '.txt':
                abspath = os.path.join(dirpath, filename)
                if os.path.isfile(abspath):
                    get_important_word(abspath)
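The same ranking walk can be done more directly by filtering stop words out before counting, so the top of `most_common()` is already the answer. A minimal sketch of that variant (the function name, `STOP_WORDS` set, and sample sentence are illustrative, not from the original post):

```python
import collections
import re

STOP_WORDS = {'the', 'a', 'an', 'and', 'by', 'of', 'in', 'on', 'is', 'to'}

def most_important_word(text):
    # Count only the words that are not common stop words
    words = (w for w in re.findall(r'\w+', text.lower())
             if w not in STOP_WORDS)
    counter = collections.Counter(words)
    if not counter:
        return None  # text contained nothing but stop words
    word, num = counter.most_common(1)[0]
    return word, num

# "diary" appears twice; "the", "of", "a", "by" are ignored
print(most_important_word('The diary of a diary by the sea'))  # -> ('diary', 2)
```

This also avoids the repeated `most_common(count)` calls, since the counter never contains a stop word in the first place.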
Notes:
The collections module is part of Python's standard library and provides a number of useful container classes. Here we use the Counter class and its most_common() method.
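For reference, a quick illustration of how Counter and most_common() behave (the sample word list is made up):

```python
from collections import Counter

# Counter maps each element to the number of times it occurs
c = Counter(['to', 'be', 'or', 'not', 'to', 'be'])
print(c['to'])           # -> 2
print(c.most_common(2))  # -> [('to', 2), ('be', 2)]
```

most_common(n) returns the n highest-count (word, count) pairs in descending order of count, which is exactly what the script above walks through.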
I hope this post is helpful to readers programming in Python.
Original article: https://blog.csdn.net/qq_20817327/article/details/77655399