Simple Digit Recognition OCR in OpenCV-Python

Question: Simple Digit Recognition OCR in OpenCV-Python

I am trying to implement a “Digit Recognition OCR” in OpenCV-Python (cv2). It is just for learning purposes. I would like to learn both KNearest and SVM features in OpenCV.

I have 100 samples (i.e. images) of each digit. I would like to train with them.

There is a sample letter_recog.py that comes with the OpenCV samples. But I still couldn't figure out how to use it. I don't understand what the samples, responses, etc. are. Also, it loads a txt file first, which I didn't understand at first.

Later, after searching a bit, I found a letter_recognition.data file in the cpp samples. I used it and wrote some code for cv2.KNearest, modeled on letter_recog.py (just for testing):

import numpy as np
import cv2

fn = 'letter-recognition.data'
# Each row: the letter (converted to 0-25) followed by its 16 features.
a = np.loadtxt(fn, np.float32, delimiter=',', converters={ 0 : lambda ch : ord(ch)-ord('A') })
samples, responses = a[:,1:], a[:,0]

model = cv2.KNearest()
retval = model.train(samples,responses)
retval, results, neigh_resp, dists = model.find_nearest(samples, k = 10)
print(results.ravel())

It gave me an array of size 20000; I don't understand what it is.

Questions:

1) What is the letter_recognition.data file? How can I build such a file from my own data set?

2) What does results.ravel() denote?

3) How can we write a simple digit recognition tool using the letter_recognition.data file (with either KNearest or SVM)?


Answer 0

Well, I decided to work it out myself to solve the above problem. What I wanted was to implement a simple OCR using the KNearest or SVM features in OpenCV. Below is what I did and how. (It is just for learning how to use KNearest for simple OCR purposes.)

1) My first question was about the letter_recognition.data file that comes with the OpenCV samples. I wanted to know what is inside that file.

It contains a letter, along with 16 features of that letter.

This Stack Overflow post helped me find it. The 16 features are explained in the paper Letter Recognition Using Holland-Style Adaptive Classifiers. (Although I didn't understand some of the features at the end.)

2) Since I knew that, without understanding all those features, it would be difficult to use that method, I tried some other papers, but they were all a little difficult for a beginner.

So I just decided to take all the pixel values as my features. (I was not worried about accuracy or performance; I just wanted it to work, even with minimal accuracy.)

I took the image below for my training data:

(I know the amount of training data is small. But since all the letters are of the same font and size, I decided to try with it.)

To prepare the data for training, I wrote a small program in OpenCV. It does the following:

  1. It loads the image.
  2. Selects the digits (by contour finding, applying constraints on the area and height of the letters to avoid false detections).
  3. Draws a bounding rectangle around one letter and waits for a key press. We press the digit key corresponding to the letter in the box ourselves.
  4. Once the corresponding digit key is pressed, it resizes the box to 10×10 and saves the 100 pixel values in one array (here, samples) and the manually entered digit in another array (here, responses).
  5. Then it saves both arrays in separate txt files.

At the end of the manual classification, all the digits in the training data (train.png) are labeled by ourselves; the image will look like the one below:

Below is the code I used for the above purpose (of course, not so clean):

import sys

import numpy as np
import cv2

im = cv2.imread('pitrain.png')
im3 = im.copy()

gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)

#################      Now finding Contours         ###################

contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

samples =  np.empty((0,100))
responses = []
keys = [i for i in range(48,58)]  # ASCII codes for the keys '0'-'9'

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)

        if  h>28:
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,0,255),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            cv2.imshow('norm',im)
            key = cv2.waitKey(0)

            if key == 27:  # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1,100))
                samples = np.append(samples,sample,0)

responses = np.array(responses,np.float32)
responses = responses.reshape((responses.size,1))
print "training complete"

np.savetxt('generalsamples.data',samples)
np.savetxt('generalresponses.data',responses)

Now we come to the training and testing part.

For the testing part I used the image below, which has the same type of letters I used for training.

For training we do as follows:

  1. Load the txt files we saved earlier
  2. Create an instance of the classifier we are using (here, KNearest)
  3. Then we use the KNearest.train function to train on the data

For testing purposes, we do as follows:

  1. We load the image used for testing
  2. Process the image as before and extract each digit using contour methods
  3. Draw a bounding box around each digit, resize it to 10×10, and store its pixel values in an array as done earlier.
  4. Then we use the KNearest.find_nearest() function to find the item nearest to the one we gave. (If lucky, it recognizes the correct digit.)

I included the last two steps (training and testing) in the single program below:

import cv2
import numpy as np

#######   training part    ############### 
samples = np.loadtxt('generalsamples.data',np.float32)
responses = np.loadtxt('generalresponses.data',np.float32)
responses = responses.reshape((responses.size,1))

model = cv2.KNearest()
model.train(samples,responses)

############################# testing part  #########################

im = cv2.imread('pi.png')
out = np.zeros(im.shape,np.uint8)
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)

contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)
        if  h>28:
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            roismall = roismall.reshape((1,100))
            roismall = np.float32(roismall)
            retval, results, neigh_resp, dists = model.find_nearest(roismall, k = 1)
            string = str(int(results[0][0]))
            cv2.putText(out,string,(x,y+h),0,1,(0,255,0))

cv2.imshow('im',im)
cv2.imshow('out',out)
cv2.waitKey(0)

And it worked; below is the result I got:


Here it worked with 100% accuracy. I assume this is because all the digits are of the same kind and the same size.
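
Since training and testing above run on the same data, that 100% figure is optimistic. As a quick sanity check you can hold out part of the samples; below is a minimal sketch, assuming the generalsamples.data / generalresponses.data files produced earlier and a modern OpenCV build (3.x/4.x), where the KNearest API lives in the cv2.ml module:

import numpy as np
import cv2

# Load the arrays saved by the labelling script above.
samples = np.loadtxt('generalsamples.data', np.float32)
responses = np.loadtxt('generalresponses.data', np.float32).reshape(-1, 1)

# Shuffle and hold out 20% of the samples for testing.
rng = np.random.default_rng(42)
idx = rng.permutation(len(samples))
split = int(0.8 * len(samples))
train_idx, test_idx = idx[:split], idx[split:]

# In OpenCV 3.x/4.x the KNearest API moved into the cv2.ml module.
model = cv2.ml.KNearest_create()
model.train(samples[train_idx], cv2.ml.ROW_SAMPLE, responses[train_idx])

ret, results, neighbours, dists = model.findNearest(samples[test_idx], k=1)
accuracy = (results.ravel() == responses[test_idx].ravel()).mean()
print("held-out accuracy: {:.1%}".format(accuracy))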

But anyway, this is a good start for beginners (I hope so).


Answer 1

Those who are interested in a C++ version can refer to the code below. Thanks to Abid Rahman for the nice explanation.

The procedure is the same as above, but the contour finding uses only the first hierarchy level, so that the algorithm uses only the outer contour of each digit.

Code for creating the sample and label data

//Process image to extract contour
Mat thr,gray,con;
Mat src=imread("digit.png",1);
cvtColor(src,gray,CV_BGR2GRAY);
threshold(gray,thr,200,255,THRESH_BINARY_INV); //Threshold to find contour
thr.copyTo(con);

// Create sample and label data
vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;
Mat sample;
Mat response_array;  
findContours( con, contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); //Find contour

for( int i = 0; i< contours.size(); i=hierarchy[i][0] ) // iterate through first hierarchy level contours
{
    Rect r= boundingRect(contours[i]); //Find bounding rect for each contour
    rectangle(src,Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,0,255),2,8,0);
    Mat ROI = thr(r); //Crop the image
    Mat tmp1, tmp2;
    resize(ROI,tmp1, Size(10,10), 0,0,INTER_LINEAR ); //resize to 10X10
    tmp1.convertTo(tmp2,CV_32FC1); //convert to float
    sample.push_back(tmp2.reshape(1,1)); // Store  sample data
    imshow("src",src);
    int c=waitKey(0); // Read the corresponding label for the contour from the keyboard
    c-=0x30;     // Convert the ASCII code to an integer value
    response_array.push_back(c); // Store label to a mat
    rectangle(src,Point(r.x,r.y), Point(r.x+r.width,r.y+r.height), Scalar(0,255,0),2,8,0);    
}

// Store the data to file
Mat response,tmp;
tmp=response_array.reshape(1,1); //make continuous
tmp.convertTo(response,CV_32FC1); // Convert  to float

FileStorage Data("TrainingData.yml",FileStorage::WRITE); // Store the sample data in a file
Data << "data" << sample;
Data.release();

FileStorage Label("LabelData.yml",FileStorage::WRITE); // Store the label data in a file
Label << "label" << response;
Label.release();
cout<<"Training and Label data created successfully....!! "<<endl;

imshow("src",src);
waitKey();

Code for training and testing

Mat thr,gray,con;
Mat src=imread("dig.png",1);
cvtColor(src,gray,CV_BGR2GRAY);
threshold(gray,thr,200,255,THRESH_BINARY_INV); // Threshold to create input
thr.copyTo(con);


// Read stored sample and label for training
Mat sample;
Mat response,tmp;
FileStorage Data("TrainingData.yml",FileStorage::READ); // Read traing data to a Mat
Data["data"] >> sample;
Data.release();

FileStorage Label("LabelData.yml",FileStorage::READ); // Read label data to a Mat
Label["label"] >> response;
Label.release();


KNearest knn;
knn.train(sample,response); // Train with sample and responses
cout<<"Training compleated.....!!"<<endl;

vector< vector <Point> > contours; // Vector for storing contour
vector< Vec4i > hierarchy;

//Create input sample by contour finding and cropping
findContours( con, contours, hierarchy,CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );
Mat dst(src.rows,src.cols,CV_8UC3,Scalar::all(0));

for( int i = 0; i< contours.size(); i=hierarchy[i][0] ) // iterate through first-hierarchy-level contours
{
    Rect r= boundingRect(contours[i]);
    Mat ROI = thr(r);
    Mat tmp1, tmp2;
    resize(ROI,tmp1, Size(10,10), 0,0,INTER_LINEAR );
    tmp1.convertTo(tmp2,CV_32FC1);
    float p=knn.find_nearest(tmp2.reshape(1,1), 1);
    char name[4];
    sprintf(name,"%d",(int)p);
    putText( dst,name,Point(r.x,r.y+r.height) ,0,1, Scalar(0, 255, 0), 2, 8 );
}

imshow("src",src);
imshow("dst",dst);
imwrite("dest.jpg",dst);
waitKey();

Result

In the result, the dot in the first line is detected as an 8, since we haven't trained for dots. Also, I am treating every contour in the first hierarchy level as a sample input; the user can avoid this by filtering on contour area.


Answer 2

If you are interested in the state of the art in machine learning, you should look into deep learning. You will want a CUDA-capable GPU, or you can alternatively use a GPU on Amazon Web Services.

Google's Udacity course has a nice tutorial on this using TensorFlow. The tutorial teaches you how to train your own classifier on handwritten digits. Using convolutional networks, I got an accuracy of over 97% on the test set.
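
As a rough illustration of the kind of model such a tutorial builds (not its exact code), here is a minimal convolutional-network sketch, assuming TensorFlow 2.x and its bundled MNIST loader:

import tensorflow as tf

# MNIST: 60k training and 10k test images of handwritten digits, 28x28 grayscale.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = (x_train / 255.0).astype("float32")[..., None]  # add channel dim, scale to [0, 1]
x_test = (x_test / 255.0).astype("float32")[..., None]

# A small convolutional network, roughly the shape used in such tutorials.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=3, validation_data=(x_test, y_test))

Even a few epochs of this small network typically land in the high 90s on the MNIST test set, in the same ballpark as the figure quoted above.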


EasyOCR: Recognize Text in Any Language in an Image with 3 Lines of Code

Today I'd like to introduce a super simple yet powerful OCR text-recognition tool: easyocr.

This module provides ready-to-use OCR for more than 70 languages, including Chinese, Japanese, Korean, and Thai.

Below is a hands-on tutorial for the module.

1. Preparation

Before you begin, make sure Python and pip are installed on your computer. If not, see this article for installation: 超详细Python安装指南.

(Optional 1) If you use Python mainly for data analysis, you can install Anaconda directly (see Python数据分析与挖掘好帮手—Anaconda); it bundles Python and pip.

(Optional 2) I also recommend the VSCode editor for writing small Python projects: Python 编程的最好搭档—VSCode 详细指南.

On Windows, open Cmd (Start – Run – CMD); on macOS, open Terminal (Command + Space, then type Terminal). Then run the following command to install the dependency:

pip install easyocr

This installs every dependency except the model files, which are downloaded the first time you run the code.

On Windows, if installing Torch or Torchvision fails, first install them following the official instructions at https://pytorch.org.

On the PyTorch site, make sure you select the correct CUDA version. If you only plan to run in CPU mode, choose CUDA = None.

2. Hands-on Tutorial

This module is really simple to use; three lines of code and you're done:

import easyocr
reader = easyocr.Reader(['ch_sim','en'])
result = reader.readtext('test.png')

While running, it will download the required model files, like this:

However, the download is very slow and often fails, so here is a second solution: download the model files in advance and place them where easyocr expects them.

If the download is too slow, reply 'easyocr' in the backend of the Python实用宝典 official account to get the text-detection model (CRAFT) and the Simplified Chinese model package I uploaded to Weiyun.

After downloading the models, put the files in the following location:

Windows: C:\Users\<username>\.EasyOCR\model
Linux: ~/.EasyOCR/model

Re-running the script will no longer prompt you to download the models:

import easyocr
reader = easyocr.Reader(['ch_sim'])
result = reader.readtext('test.png')
print(result)
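
By the way, if you would rather keep the model files in a custom folder instead of the default ~/.EasyOCR/model, easyocr's Reader also takes a model_storage_directory parameter in the versions I have seen (the folder name below is a placeholder; check your installed version's documentation):

import easyocr

# Load models from a custom folder instead of the default ~/.EasyOCR/model.
# './easyocr_models' is a placeholder path.
reader = easyocr.Reader(['ch_sim'], model_storage_directory='./easyocr_models')
result = reader.readtext('test.png')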

I grabbed a random screenshot of live-stream bullet comments and saved it in the script's folder as test.png:

The result is as follows:

Basically all the text that should be recognized was recognized; the results are very good.

You can also see that the output is a list: each item contains the bounding box of a piece of text, the recognized string, and a confidence score.
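
For example, here is a minimal sketch of walking over that list (field order as described above):

import easyocr

reader = easyocr.Reader(['ch_sim'])
result = reader.readtext('test.png')

# Each item is (bounding box, recognized text, confidence score).
for bbox, text, confidence in result:
    print(f"{confidence:.2f}  {text}  {bbox}")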

The module can also handle multilingual images:

I named this image test2.jpg and changed the image filename in the code accordingly:

import easyocr
reader = easyocr.Reader(['ch_sim','en'])
result = reader.readtext('test2.jpg')
print(result)

The result:

This image is complex, with Chinese and English mixed together, yet apart from the watermark in the top-left corner the model recognized nearly all of the text in the image. Some characters were recognized incorrectly, but the result is still within an acceptable range.

Note, however, that although many languages can be recognized in one pass, not all languages can be combined: usually a common language plus one specific, compatible language can be recognized together, for example English and Japanese.

If your computer has no GPU, or its GPU memory is insufficient, add the gpu=False parameter to run on the CPU only:

reader = easyocr.Reader(['ch_sim','en'], gpu = False)

In addition, the module can be run directly from the command line, which is quite convenient; give it a try:

easyocr -l ch_sim en -f test.png --detail=1 --gpu=True

That's all for this article. If you enjoyed today's Python tutorial, keep following Python实用宝典.


Python Automatic Image Text Recognition: A Hands-on OCR Tutorial

OCR (Optical Character Recognition) is the process of analyzing and recognizing image files of text material to extract the characters and layout information.

Readers have long been asking in the official account's backend for an OCR article. Today I tried cnocr and tesseract; below I'll cover how to use each module and how well each works.

1. Preparation

Before you begin, make sure Python and pip are installed on your computer. If not, see this article for installation: 超详细Python安装指南.

(Optional 1) If you use Python mainly for data analysis, you can install Anaconda directly (see Python数据分析与挖掘好帮手—Anaconda); it bundles Python and pip.

(Optional 2) I also recommend the VSCode editor for writing small Python projects: Python 编程的最好搭档—VSCode 详细指南.

On Windows, open Cmd (Start – Run – CMD); on macOS, open Terminal (Command + Space, then type Terminal). Then run the command to install the dependency:

(Option 1) Install cnocr:

pip install cnocr

If you see 'Successfully installed xxx', the installation succeeded.

If you only want to recognize Chinese text in images, cnocr is a good choice; you only need to install the cnocr package.

But if you want to try OCR for other languages, Tesseract is the better choice.

(Option 2) Install pytesseract:

First, whether you are on Windows or macOS, you need to install pytesseract:

pip install pytesseract

Second, you also need to install Tesseract itself.

(macOS) On macOS, Tesseract can be installed with brew:

brew install tesseract

Very convenient: a single command completes the installation.

(Windows) Installing Tesseract

You first need to download and install the tesseract program, and then download the pretrained Simplified Chinese model package (although this tutorial will not use it, it is provided for you anyway).

You can reply 'tesseract' in the backend of the Python实用宝典 official account to download the bundle.

After downloading, run tesseract-ocr-setup-4.00.00dev.exe and install it into a Tesseract-OCR directory, then copy that directory's path and add it to your Path environment variable:

Finally, put the trained model file chi_sim.traineddata into that directory, and the installation is complete.
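
One more note for Windows: if pytesseract later complains that it cannot find the tesseract executable, you can point it at the binary explicitly. A minimal sketch (the install path below is an assumption; replace it with your actual Tesseract-OCR directory):

import pytesseract

# Tell pytesseract where the tesseract executable lives.
# The path below is a placeholder; adjust it to your installation.
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'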

2. Recognizing Chinese in Images with cnocr

cnocr mainly targets images of printed text with simple layouts, such as screenshots and scanned documents. Its built-in text-detection and line-segmentation modules currently cannot handle complex text layouts.

Although it provides both a single-line recognition function and a multi-line recognition function, in my tests the single-line function performed very poorly, or rather its input requirements are extremely strict: it basically could not even recognize text from screenshots.

The multi-line function, however, works reasonably well; a sketch of the single-line API follows the result below. The code using the multi-line function is as follows:

from cnocr import CnOcr
ocr = CnOcr()
res = ocr.ocr('test.png')
print("Predicted Chars:", res)

This is used to recognize the text in this image:

The result:

If you are not too picky, results like this are already quite good.
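
For completeness, the single-line function mentioned above is called ocr_for_single_line in the cnocr versions I have seen (treat the file name below as a placeholder and check your installed version's API):

from cnocr import CnOcr

ocr = CnOcr()
# Recognize an image that is assumed to contain exactly one line of text.
# 'single_line.png' is a placeholder file name.
res = ocr.ocr_for_single_line('single_line.png')
print("Predicted Chars:", res)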

3. Recognizing English in Images with pytesseract

If your OCR target is English rather than Chinese, you need a different model. Here I'd like to introduce Tesseract-OCR, an open-source OCR engine originally developed by HP Labs and now maintained by Google.

Tesseract-OCR is highly extensible; you can train your own OCR model on top of it.

Now let's look at how it does on English text. The code is as follows:

import pytesseract
from PIL import Image

image = Image.open('test2.png')
code = pytesseract.image_to_string(image, lang='eng')
print(code)

The image to recognize:

The result:

The English results are really quite good; the officially pretrained Chinese model, on the other hand, is fairly mediocre.

If you want to try recognizing Chinese with Tesseract, just change eng in the code to chi_sim, as sketched below.
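
That is, the only change is the lang argument (this assumes chi_sim.traineddata is installed as described in section 1):

import pytesseract
from PIL import Image

image = Image.open('test2.png')
# 'chi_sim' selects the Simplified Chinese model instead of the English one.
code = pytesseract.image_to_string(image, lang='chi_sim')
print(code)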

That's the end of our article. If you enjoyed today's hands-on Python tutorial, please keep following us. If you have any questions, leave them in the comments below and we will answer patiently!
