Updated 2023-04-26:

1. Data Preparation

`pip install -U scikit-learn`
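The later steps assume three pre-tokenized text lists (`all_text`, `train_texts`, `test_texts`) plus label lists, none of which are shown here. A minimal sketch of that layout with hypothetical toy data (in practice each entry would be a whitespace-joined, pre-segmented document, e.g. Chinese text tokenized with jieba):

```python
# Minimal sketch of the data layout the later steps assume.
# All texts and labels below are toy placeholders.
train_texts = [
    "machine learning is fun",
    "deep learning needs data",
    "the weather is sunny today",
    "rain is expected tomorrow",
]
train_labels = [1, 1, 0, 0]          # hypothetical: 1 = tech, 0 = weather
test_texts = ["learning from data", "sunny weather tomorrow"]
test_labels = [1, 0]
all_text = train_texts + test_texts  # used to build the shared vocabulary
```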

2. Data Preprocessing

```python
print('(2) doc to var...')
import joblib
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

# CountVectorizer counts how often each word appears in the text,
# producing a count matrix
count_v0 = CountVectorizer(analyzer='word', token_pattern=r'\w{1,}')
counts_all = count_v0.fit_transform(all_text)
print("the shape of all is " + repr(counts_all.shape))

# Reuse the vocabulary fitted on all_text so the train and test
# matrices share the same columns
count_v1 = CountVectorizer(vocabulary=count_v0.vocabulary_)
counts_train = count_v1.fit_transform(train_texts)
print("the shape of train is " + repr(counts_train.shape))
count_v2 = CountVectorizer(vocabulary=count_v0.vocabulary_)
counts_test = count_v2.fit_transform(test_texts)
print("the shape of test is " + repr(counts_test.shape))

# Save the fitted vocabulary
joblib.dump(count_v0.vocabulary_, "model/die_svm_20191110_vocab.m")

# Convert the count matrices to normalized tf-idf form. Fit the
# transformer on the training counts only, then apply it to the other
# splits, so every split uses the same idf weights
tfidftransformer = TfidfTransformer().fit(counts_train)
train_data = tfidftransformer.transform(counts_train)
test_data = tfidftransformer.transform(counts_test)
all_data = tfidftransformer.transform(counts_all)
```
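The vocabulary-sharing step above can be sketched end-to-end on a toy corpus (the corpus, and therefore the shapes, are illustrative only). The key point is that the test matrix is built with the training vocabulary, so both matrices share the same column space:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

corpus_train = ["cat sat on mat", "dog sat on log"]
corpus_test = ["cat on log"]

# Fit the vocabulary on the training text, then reuse it for the test split
cv = CountVectorizer(analyzer='word', token_pattern=r'\w{1,}')
counts_train = cv.fit_transform(corpus_train)
counts_test = CountVectorizer(vocabulary=cv.vocabulary_).transform(corpus_test)

# Fit idf weights on the training counts, then transform both splits
tfidf = TfidfTransformer().fit(counts_train)
train_data = tfidf.transform(counts_train)
test_data = tfidf.transform(counts_test)
print(train_data.shape, test_data.shape)  # (2, 6) (1, 6)
```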

3. Training

```python
print('(3) SVM...')
import joblib
from sklearn.svm import SVC

# Linear-kernel SVM classifier with probability estimates enabled
# (predict_proba then returns per-class probabilities such as
# [0.12983359 0.87016641])
svclf = SVC(kernel='linear', probability=True)

# Train on the tf-idf features (x_train = train_data, y_train = the labels)
svclf.fit(x_train, y_train)
# Save the model
joblib.dump(svclf, "model/die_svm_20191110.m")
```
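A model saved with `joblib.dump` can later be restored with `joblib.load` and used exactly like the in-memory object. A minimal round-trip sketch on hypothetical toy features (the features, labels, and file path are illustrative only):

```python
import os
import tempfile

import joblib
from sklearn.svm import SVC

# Hypothetical 2-D toy features: five points per class
X = [[i / 10, i / 10] for i in range(5)] + [[1 + i / 10, 1 + i / 10] for i in range(5)]
y = [0] * 5 + [1] * 5

clf = SVC(kernel='linear', probability=True).fit(X, y)

# Round-trip the model through joblib, as the training step does
path = os.path.join(tempfile.mkdtemp(), "svm_demo.m")
joblib.dump(clf, path)
restored = joblib.load(path)

print(restored.predict([[1.2, 1.2]]))  # [1]
```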

4. Testing

```python
from sklearn.metrics import classification_report

# Evaluate on the test set
preds = svclf.predict(x_test)
y_preds = svclf.predict_proba(x_test)

preds = preds.tolist()
for i, pred in enumerate(preds):
    # Show the misclassified posts
    if int(pred) != int(y_test[i]):
        try:
            print(origin_eval_text[i], ':', test_texts[i], pred, y_test[i], y_preds[i])
        except Exception as e:
            print(e)

# Precision, recall and F1 for each of the two classes
print(classification_report(y_test, preds))
```
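On a toy problem (the features and labels below are hypothetical), `predict_proba` returns one column per class with each row summing to 1, and `classification_report` summarizes per-class precision, recall, and F1:

```python
from sklearn.metrics import classification_report
from sklearn.svm import SVC

# Hypothetical 1-D toy features: two well-separated clusters
X_train = [[i / 10] for i in range(5)] + [[1 + i / 10] for i in range(5)]
y_train = [0] * 5 + [1] * 5
X_test = [[0.05], [1.25]]
y_test = [0, 1]

clf = SVC(kernel='linear', probability=True).fit(X_train, y_train)
preds = clf.predict(X_test).tolist()
probs = clf.predict_proba(X_test)

# One probability per class; each row sums to 1
print(probs.shape, probs.sum(axis=1))
print(classification_report(y_test, preds))
```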

Python实用宝典 (pythondict.com)