So I have this text file made up of numbers and words, for example like this: 09807754 18 n 03 aristocrat 0 blue_blood 0 patrician
and I want to split it so that each word or number comes up on a new line.
A whitespace separator would be ideal, as I would like the words with the underscores (like blue_blood) to stay connected.
This is what I have so far:
f = open('words.txt', 'r')
for word in f:
    print(word)
Not really sure how to go from here, but I would like this to be the output:
09807754
18
n
03
aristocrat
...
5 Answers
#1 (86 votes)
If you do not have quotes around your data and you just want one word at a time (ignoring the meaning of spaces vs line breaks in the file):
with open('words.txt', 'r') as f:
    for line in f:
        for word in line.split():
            print(word)
If you want a nested list of the words in each line of the file (for example, to create a matrix of rows and columns from a file):
with open("words.txt") as f:
    lines = [line.split() for line in f]  # one list of words per line
Or, if you want to flatten the file into a single flat list of words, you might do something like this:
with open('words.txt') as f:
    words = [word for line in f for word in line.split()]
If you want a regex solution:
import re

with open("words.txt") as f:
    for line in f:
        for word in re.findall(r'\w+', line):
            print(word)  # handle each word here
Or, if you want that to be a line-by-line generator with a regex:
with open("words.txt") as f:
    words = (word for line in f for word in re.findall(r'\w+', line))
    # note: consume the generator while the file is still open
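As a quick check against the sample line from the question, both a plain split() and the \w+ pattern keep blue_blood together, since \w matches underscores as well as letters and digits:

import re

line = "09807754 18 n 03 aristocrat 0 blue_blood 0 patrician"
print(line.split())
# ['09807754', '18', 'n', '03', 'aristocrat', '0', 'blue_blood', '0', 'patrician']
print(re.findall(r'\w+', line))
# same list: \w matches [A-Za-z0-9_], so the underscore survives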
#2 (14 votes)
f = open('words.txt')
for word in f.read().split():
    print(word)
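Note that read() pulls the whole file into memory at once, which is fine for small files. A minor variant of the same idea that also closes the file automatically:

with open('words.txt') as f:
    for word in f.read().split():
        print(word)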
#3 (8 votes)
As a supplement, if you are reading a very large file and you don't want to read all of the content into memory at once, you might consider using a buffer and then returning each word with yield:
def read_words(inputfile):
    with open(inputfile, 'r') as f:
        while True:
            buf = f.read(10240)
            if not buf:
                break
            # make sure we end on a space (word boundary)
            while not str.isspace(buf[-1]):
                ch = f.read(1)
                if not ch:
                    break
                buf += ch
            words = buf.split()
            for word in words:
                yield word
    yield ''  # handle the case where the file is empty (note this trailing '' is also yielded for non-empty files)
if __name__ == "__main__":
    for word in read_words('./very_large_file.txt'):
        process(word)  # process() stands in for whatever you do with each word
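A minimal sketch of consuming the generator with the question's words.txt, filtering out the trailing empty string from the final yield:

for word in read_words('words.txt'):
    if word:  # skip the '' yielded at the end
        print(word)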
#4 (1 vote)
What you can do is use nltk to tokenize the words and then store all of them in a list; here's what I did. If you don't know nltk, it stands for Natural Language Toolkit and is used to process natural language. Here is a resource if you want to get started: http://www.nltk.org/book/
import nltk
from nltk.tokenize import word_tokenize

file = open("abc.txt", newline='')
result = file.read()
words = word_tokenize(result)
for i in words:
    print(i)
The output will be this:
09807754
18
n
03
aristocrat
0
blue_blood
0
patrician
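Note that word_tokenize relies on nltk's punkt tokenizer models; if they are not already installed, a one-time download is needed first:

import nltk
nltk.download('punkt')  # one-time download of the tokenizer models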
#5 (0 votes)
Here is my totally functional approach, which avoids having to read and split lines. It makes use of the itertools module:
Note: for Python 3, replace itertools.imap with map.
import itertools

def readwords(mfile):
    # read one character at a time until EOF, then group runs of
    # whitespace and non-whitespace characters
    byte_stream = itertools.groupby(
        itertools.takewhile(lambda c: bool(c),
                            itertools.imap(mfile.read,
                                           itertools.repeat(1))),
        str.isspace)
    # join each non-whitespace run back into a word
    return ("".join(group) for pred, group in byte_stream if not pred)
Sample usage:
>>> import sys
>>> for w in readwords(sys.stdin):
...     print(w)
...
I really love this new method of reading words in python
I
really
love
this
new
method
of
reading
words
in
python
It's soo very Functional!
It's
soo
very
Functional!
>>>
I guess in your case, this would be the way to use the function:
with open('words.txt', 'r') as f:
    for word in readwords(f):
        print(word)
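For Python 3, following the note above, a minimal variant (named readwords_py3 here just for illustration) swaps itertools.imap for the built-in map, which is already lazy:

import itertools

def readwords_py3(mfile):
    byte_stream = itertools.groupby(
        itertools.takewhile(bool, map(mfile.read, itertools.repeat(1))),
        str.isspace)
    return ("".join(group) for pred, group in byte_stream if not pred)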