Question: How to check if a word is an English word with Python?
I want to check in a Python program if a word is in the English dictionary.

I believe the NLTK WordNet interface might be the way to go, but I have no clue how to use it for such a simple task.

def is_english_word(word):
    pass  # how do I implement is_english_word?

is_english_word(token.lower())

In the future, I might want to check if the singular form of a word is in the dictionary (e.g., properties -> property -> English word). How would I achieve that?
Answer 0
For (much) more power and flexibility, use a dedicated spellchecking library like PyEnchant. There's a tutorial, or you could just dive straight in:
>>> import enchant
>>> d = enchant.Dict("en_US")
>>> d.check("Hello")
True
>>> d.check("Helo")
False
>>> d.suggest("Helo")
['He lo', 'He-lo', 'Hello', 'Helot', 'Help', 'Halo', 'Hell', 'Held', 'Helm', 'Hero', "He'll"]
>>>
PyEnchant comes with a few dictionaries (en_GB, en_US, de_DE, fr_FR), but can use any of the OpenOffice ones if you want more languages.

There appears to be a pluralisation library called inflect, but I've no idea whether it's any good.
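If you also need the plural-to-singular fallback from the question, a minimal sketch (my own wiring, assuming both pyenchant and inflect are installed; it is not part of the original answer) could look like this:

import enchant
import inflect

d = enchant.Dict("en_US")
p = inflect.engine()

def is_english_word(word):
    if d.check(word):
        return True
    # inflect's singular_noun() returns the singular form, or False
    # when the word is not a plural it recognises
    # (e.g. "properties" -> "property").
    singular = p.singular_noun(word)
    return bool(singular) and d.check(singular)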
Answer 1
It won't work well with WordNet, because WordNet does not contain all English words. Another possibility based on NLTK, without enchant, is NLTK's words corpus:
>>> from nltk.corpus import words
>>> "would" in words.words()
True
>>> "could" in words.words()
True
>>> "should" in words.words()
True
>>> "I" in words.words()
True
>>> "you" in words.words()
True
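To see why WordNet alone falls short, note that such function words have no synsets at all (a quick illustration of mine; exact output can vary with the corpus version):

>>> from nltk.corpus import wordnet
>>> wordnet.synsets("would")
[]
>>> wordnet.synsets("dog")[:1]
[Synset('dog.n.01')]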
Answer 2
Using NLTK:
from nltk.corpus import wordnet

if not wordnet.synsets(word_to_test):
    # not an English word
    pass
else:
    # English word
    pass
You should refer to this article if you have trouble installing wordnet or want to try other approaches.
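Wrapped into the is_english_word() helper from the question, a minimal sketch of this approach (assuming the WordNet corpus has been downloaded with nltk.download('wordnet')) could be:

from nltk.corpus import wordnet

def is_english_word(word):
    # synsets() returns an empty list for words WordNet doesn't know
    return bool(wordnet.synsets(word))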
Answer 3
Using a set to store the word list, because looking words up in a set is faster:

with open("english_words.txt") as word_file:
    english_words = set(word.strip().lower() for word in word_file)

def is_english_word(word):
    return word.lower() in english_words

print(is_english_word("ham"))  # should be True if you have a good english_words.txt
To answer the second part of the question, the plurals would already be in a good word list, but if you wanted to specifically exclude those from the list for some reason, you could indeed write a function to handle it. But English pluralization rules are tricky enough that I’d just include the plurals in the word list to begin with.
As to where to find English word lists, I found several just by Googling "English word list". Here is one: http://www.sil.org/linguistics/wordlists/english/wordlist/wordsEn.txt
You could Google for British or American English if you specifically want one of those dialects.
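On the plurals point above: if you do want to map a plural back to its singular instead of listing both forms, one option (my suggestion, not part of the original answer) is NLTK's WordNetLemmatizer, which needs nltk.download('wordnet') and reuses the english_words set built above:

from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def is_english_word_or_plural(word):
    word = word.lower()
    if word in english_words:
        return True
    # pos="n" lemmatizes nouns, e.g. "properties" -> "property"
    return lemmatizer.lemmatize(word, pos="n") in english_words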
Answer 4
For a faster NLTK-based solution, you could hash the set of words to avoid a linear search.
from nltk.corpus import words as nltk_words

# Creation of this dictionary is done outside of the
# function because you only need to do it once.
dictionary = dict.fromkeys(nltk_words.words(), None)

def is_english_word(word):
    try:
        dictionary[word]
        return True
    except KeyError:
        return False
Answer 5
I find that there are 3 package-based solutions to this problem: pyenchant, wordnet, and a corpus (self-defined or from nltk). pyenchant couldn't be installed easily on Win64 with Python 3. wordnet doesn't work very well because its corpus isn't complete. So for me, I chose the solution answered by @Sadik, and used set(words.words()) to speed it up.
First:
pip3 install nltk
python3
import nltk
nltk.download('words')
Then:
from nltk.corpus import words
setofwords = set(words.words())
print("hello" in setofwords)
>>True
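One caveat: this lookup is case-sensitive, so "Hello" would come back False even though "hello" is in the corpus. A small variant (my own tweak, assuming lowercase-only matching is acceptable) normalises both sides:

from nltk.corpus import words

setofwords = set(w.lower() for w in words.words())

def is_english_word(word):
    return word.lower() in setofwords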
Answer 6
With pyenchant.checker's SpellChecker:
from enchant.checker import SpellChecker

def is_in_english(quote):
    d = SpellChecker("en_US")
    d.set_text(quote)
    errors = [err.word for err in d]
    # Treat the text as English if it has at least 3 words
    # and no more than 4 spelling errors.
    return len(errors) <= 4 and len(quote.split()) >= 3
print(is_in_english('“办理美国加州州立大学圣贝纳迪诺分校高仿成绩单Q/V2166384296加州州立大学圣贝纳迪诺分校学历学位认证'))
print(is_in_english('“Two things are infinite: the universe and human stupidity; and I\'m not sure about the universe.”'))
> False
> True
Answer 7
For a semantic web approach, you could run a SPARQL query against WordNet in RDF format. Basically, just use the urllib module to issue a GET request and return the results in JSON format, then parse them with Python's json module. If it's not an English word, you'll get no results.
As another idea, you could query Wiktionary’s API.
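A minimal sketch of the Wiktionary idea using only the standard library (the User-Agent string is a placeholder of mine, and note that a Wiktionary page can exist for non-English words too, so this is a looser check than a real dictionary lookup):

import json
import urllib.parse
import urllib.request

def is_in_wiktionary(word):
    # The MediaWiki API reports a missing page under the key "-1".
    params = urllib.parse.urlencode({
        "action": "query",
        "titles": word,
        "format": "json",
    })
    url = "https://en.wiktionary.org/w/api.php?" + params
    request = urllib.request.Request(url, headers={"User-Agent": "word-check-example/0.1"})
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    return "-1" not in data["query"]["pages"]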
Answer 8
For All Linux/Unix Users

If your OS uses the Linux kernel, there is a simple way to get all the words from the English/American dictionary. In the directory /usr/share/dict you have a words file. There are also more specific american-english and british-english files. These contain all of the words in that specific language. You can access them from any programming language, which is why I thought you might want to know about this.

Now, for Python-specific users, the Python code below loads every single word into a set called words:
import re

with open("/usr/share/dict/words") as word_file:
    # Lowercase the file and split on non-word characters; a set
    # makes each lookup O(1) instead of a linear scan of a list.
    words = set(re.sub(r"[^\w]", " ", word_file.read().lower()).split())

def is_word(word):
    return word.lower() in words

is_word("tarts")            ## Returns True
is_word("jwiefjiojrfiorj")  ## Returns False
Hope this helps!!!