A Classic Approach to Probabilistic Classification
Introduction
Naive Bayes is a probabilistic classifier based on Bayes' theorem. "Naive" refers to its assumption that the features are conditionally independent of one another. Although this assumption is strong, the method performs remarkably well on tasks such as text classification.
Reviewing Bayes' Theorem
Basic Form
\[P(A|B) = \frac{P(B|A)P(A)}{P(B)}\]
Application to Classification
Given a feature vector $\mathbf{x}$, predict the class $c$:
\[P(c|\mathbf{x}) = \frac{P(\mathbf{x}|c)P(c)}{P(\mathbf{x})}\]
- $P(c|\mathbf{x})$: the posterior probability (what we want)
- $P(\mathbf{x}|c)$: the likelihood (the probability that class $c$ generates the features $\mathbf{x}$)
- $P(c)$: the prior probability
- $P(\mathbf{x})$: the evidence (a normalizing constant)
Decision Rule
\[\hat{c} = \arg\max_c P(c|\mathbf{x}) = \arg\max_c P(\mathbf{x}|c)P(c)\]
The denominator $P(\mathbf{x})$ is the same for every class and can be ignored.
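To make this concrete, here is a tiny numeric sketch with made-up priors and likelihoods: dividing by the evidence $P(\mathbf{x})$ rescales every class score by the same constant, so the winning class is unchanged.

```python
import numpy as np

# Hypothetical two-class setup: priors P(c) and class-conditional
# likelihoods P(x|c) for one observed feature vector x (made-up numbers).
prior = np.array([0.6, 0.4])         # P(c=0), P(c=1)
likelihood = np.array([0.02, 0.05])  # P(x|c=0), P(x|c=1)

# Unnormalized scores P(x|c) * P(c)
scores = likelihood * prior

# The full posterior divides by the evidence P(x) = sum_c P(x|c) P(c)
evidence = scores.sum()
posterior = scores / evidence

print(posterior)                                 # [0.375 0.625]
print(np.argmax(scores), np.argmax(posterior))   # same class either way
```

Normalization only matters if you need calibrated probabilities; for picking a class, the unnormalized scores suffice.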
The Naive Bayes Assumption
Conditional Independence
Assume the features are mutually independent given the class $c$:
\[P(\mathbf{x}|c) = P(x_1, x_2, ..., x_d|c) = \prod_{i=1}^{d} P(x_i|c)\]
Classification Formula
\[\hat{c} = \arg\max_c P(c)\prod_{i=1}^{d} P(x_i|c)\]
Taking logarithms avoids numerical underflow:
\[\hat{c} = \arg\max_c \left[\log P(c) + \sum_{i=1}^{d} \log P(x_i|c)\right]\]
Three Naive Bayes Variants
Gaussian Naive Bayes (Continuous Features)
Assume each feature follows a Gaussian distribution within each class:
\[P(x_i|c) = \frac{1}{\sqrt{2\pi\sigma_{c,i}^2}} \exp\left(-\frac{(x_i - \mu_{c,i})^2}{2\sigma_{c,i}^2}\right)\]

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Generate data
np.random.seed(42)
X, y = make_classification(n_samples=200, n_features=2, n_informative=2,
                           n_redundant=0, n_clusters_per_class=1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train Gaussian Naive Bayes
gnb = GaussianNB()
gnb.fit(X_train, y_train)
print(f"Train accuracy: {gnb.score(X_train, y_train):.4f}")
print(f"Test accuracy: {gnb.score(X_test, y_test):.4f}")

# Visualize the decision boundary
xx, yy = np.meshgrid(np.linspace(X[:, 0].min()-1, X[:, 0].max()+1, 100),
                     np.linspace(X[:, 1].min()-1, X[:, 1].max()+1, 100))
Z = gnb.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
Z = Z.reshape(xx.shape)
plt.figure(figsize=(10, 8))
plt.contourf(xx, yy, Z, levels=np.linspace(0, 1, 11), cmap='RdYlBu_r', alpha=0.8)
plt.colorbar(label='P(y=1)')
plt.scatter(X[y==0, 0], X[y==0, 1], c='blue', edgecolors='k', label='Class 0')
plt.scatter(X[y==1, 0], X[y==1, 1], c='red', edgecolors='k', label='Class 1')
plt.contour(xx, yy, Z, levels=[0.5], colors='k', linewidths=2)
plt.title('Gaussian Naive Bayes Decision Boundary')
plt.legend()
plt.show()

# Inspect the learned parameters
print(f"\nClass priors: {gnb.class_prior_}")
print(f"Per-class means:\n{gnb.theta_}")
print(f"Per-class variances:\n{gnb.var_}")
```
Multinomial Naive Bayes (Discrete Count Features)
Suitable for count features such as word frequencies:
\[P(x_i|c) = \frac{N_{ci} + \alpha}{N_c + \alpha d}\]
- $N_{ci}$: total count of feature $i$ in class $c$
- $N_c$: total count of all features in class $c$
- $\alpha$: smoothing parameter (usually 1, i.e. Laplace smoothing)
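The smoothed estimate is easy to compute by hand. A minimal sketch with made-up counts for one class over a 3-word vocabulary; note how the zero-count word receives a small but nonzero probability:

```python
import numpy as np

# Toy word counts for one class over a 3-word vocabulary; "word 1"
# never appears, so the unsmoothed estimate would be exactly 0.
counts = np.array([3, 0, 1])
alpha = 1.0            # Laplace smoothing
d = len(counts)        # vocabulary size

p_unsmoothed = counts / counts.sum()
p_smoothed = (counts + alpha) / (counts.sum() + alpha * d)

print(p_unsmoothed)  # [0.75 0.   0.25]       -- zero probability for word 1
print(p_smoothed)    # [4/7, 1/7, 2/7]        -- no zeros, still sums to 1
```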
```python
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer

# Text classification example
texts = [
    "I love this movie",
    "This movie is great",
    "Best film ever",
    "I hate this movie",
    "Terrible film",
    "Worst movie ever",
    "Really bad movie",
    "Amazing story and acting"
]
labels = [1, 1, 1, 0, 0, 0, 0, 1]  # 1=positive, 0=negative

# Vectorize the text
vectorizer = CountVectorizer()
X_text = vectorizer.fit_transform(texts)
print(f"Vocabulary: {vectorizer.get_feature_names_out()}")
print(f"Feature matrix:\n{X_text.toarray()}")

# Train Multinomial Naive Bayes
mnb = MultinomialNB(alpha=1.0)  # Laplace smoothing
mnb.fit(X_text, labels)

# Predict on new text
new_texts = ["I love this great movie", "This is terrible"]
X_new = vectorizer.transform(new_texts)
predictions = mnb.predict(X_new)
probas = mnb.predict_proba(X_new)
for text, pred, proba in zip(new_texts, predictions, probas):
    sentiment = "Positive" if pred == 1 else "Negative"
    print(f"'{text}' -> {sentiment} (P={proba[pred]:.3f})")
```
Bernoulli Naive Bayes (Binary Features)
Suitable for 0/1 features:
\[P(x_i|c) = P(i|c)x_i + (1 - P(i|c))(1 - x_i)\]

```python
from sklearn.naive_bayes import BernoulliNB
from sklearn.feature_extraction.text import CountVectorizer

# Binarize the text features
vectorizer_binary = CountVectorizer(binary=True)
X_binary = vectorizer_binary.fit_transform(texts)
bnb = BernoulliNB()
bnb.fit(X_binary, labels)
X_new_binary = vectorizer_binary.transform(new_texts)
predictions = bnb.predict(X_new_binary)
for text, pred in zip(new_texts, predictions):
    sentiment = "Positive" if pred == 1 else "Negative"
    print(f"'{text}' -> {sentiment}")
```
Implementing Gaussian Naive Bayes from Scratch

```python
class GaussianNBScratch:
    def __init__(self):
        self.classes = None
        self.mean = {}
        self.var = {}
        self.prior = {}

    def fit(self, X, y):
        self.classes = np.unique(y)
        n_samples = len(y)
        for c in self.classes:
            X_c = X[y == c]
            self.mean[c] = np.mean(X_c, axis=0)
            self.var[c] = np.var(X_c, axis=0)
            self.prior[c] = len(X_c) / n_samples
        return self

    def _gaussian_likelihood(self, x, mean, var):
        """Gaussian probability density."""
        eps = 1e-10  # avoid division by zero
        coef = 1 / np.sqrt(2 * np.pi * (var + eps))
        exponent = np.exp(-(x - mean) ** 2 / (2 * (var + eps)))
        return coef * exponent

    def _predict_single(self, x):
        """Predict a single sample."""
        posteriors = []
        for c in self.classes:
            prior = np.log(self.prior[c])
            likelihood = np.sum(np.log(
                self._gaussian_likelihood(x, self.mean[c], self.var[c])
            ))
            posteriors.append(prior + likelihood)
        return self.classes[np.argmax(posteriors)]

    def predict(self, X):
        return np.array([self._predict_single(x) for x in X])

    def predict_proba(self, X):
        """Predicted class probabilities."""
        probas = []
        for x in X:
            log_posteriors = []
            for c in self.classes:
                prior = np.log(self.prior[c])
                likelihood = np.sum(np.log(
                    self._gaussian_likelihood(x, self.mean[c], self.var[c])
                ))
                log_posteriors.append(prior + likelihood)
            # log-sum-exp trick to avoid numerical issues
            max_log = np.max(log_posteriors)
            posteriors = np.exp(log_posteriors - max_log)
            posteriors = posteriors / np.sum(posteriors)
            probas.append(posteriors)
        return np.array(probas)

    def score(self, X, y):
        return np.mean(self.predict(X) == y)

# Test
gnb_scratch = GaussianNBScratch()
gnb_scratch.fit(X_train, y_train)
print(f"Scratch - train accuracy: {gnb_scratch.score(X_train, y_train):.4f}")
print(f"Scratch - test accuracy: {gnb_scratch.score(X_test, y_test):.4f}")

# Compare against sklearn
print(f"sklearn - test accuracy: {gnb.score(X_test, y_test):.4f}")
```
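The implementation above sums log-probabilities instead of multiplying raw likelihoods for a concrete numerical reason: the product of many small per-feature likelihoods underflows float64 to exactly zero, while the log-domain sum stays finite. A quick demonstration:

```python
import numpy as np

# 1000 per-feature likelihoods of 0.01 each: the true product is 1e-2000,
# far below the smallest positive float64 (~1e-308), so it underflows to 0.
probs = np.full(1000, 0.01)

raw_product = np.prod(probs)
log_sum = np.sum(np.log(probs))

print(raw_product)  # 0.0 (underflow)
print(log_sum)      # about -4605.17 (= 1000 * ln(0.01)), perfectly usable
```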
Laplace Smoothing
The Zero-Probability Problem
If a feature never appears with some class in the training set, then $P(x_i|c) = 0$, which drives the entire posterior to 0.
Solution
Add smoothing:
\[P(x_i|c) = \frac{count(x_i, c) + \alpha}{count(c) + \alpha |V|}\]
- $\alpha$: smoothing parameter (usually 1)
- $|V|$: number of possible values the feature can take
```python
# Effect of the smoothing parameter
alphas = [0.01, 0.1, 1.0, 10.0]
print("Effect of different smoothing parameters:")
for alpha in alphas:
    mnb = MultinomialNB(alpha=alpha)
    mnb.fit(X_text, labels)
    # The dataset is tiny, so we simply look at training accuracy here;
    # with more data, cross-validation would be the right tool.
    acc = mnb.score(X_text, labels)
    print(f"alpha={alpha:5.2f}: accuracy={acc:.4f}")
```
Text Classification in Practice

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report, confusion_matrix
import seaborn as sns

# Load the data (4 categories)
categories = ['alt.atheism', 'soc.religion.christian', 'comp.graphics', 'sci.med']
train_data = fetch_20newsgroups(subset='train', categories=categories,
                                shuffle=True, random_state=42)
test_data = fetch_20newsgroups(subset='test', categories=categories,
                               shuffle=True, random_state=42)
print(f"Training samples: {len(train_data.data)}")
print(f"Test samples: {len(test_data.data)}")
print(f"Categories: {train_data.target_names}")

# TF-IDF features
tfidf = TfidfVectorizer(max_features=5000, stop_words='english')
X_train_tfidf = tfidf.fit_transform(train_data.data)
X_test_tfidf = tfidf.transform(test_data.data)

# Train Naive Bayes
mnb_news = MultinomialNB(alpha=0.1)
mnb_news.fit(X_train_tfidf, train_data.target)

# Evaluate
y_pred = mnb_news.predict(X_test_tfidf)
print(f"\nTest accuracy: {mnb_news.score(X_test_tfidf, test_data.target):.4f}")
print("\nClassification report:")
print(classification_report(test_data.target, y_pred, target_names=train_data.target_names))

# Confusion matrix
plt.figure(figsize=(10, 8))
cm = confusion_matrix(test_data.target, y_pred)
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues',
            xticklabels=train_data.target_names,
            yticklabels=train_data.target_names)
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.title('News Classification Confusion Matrix')
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()

# Most important features per class
feature_names = np.array(tfidf.get_feature_names_out())
print("\nTop 10 features per class:")
for i, category in enumerate(train_data.target_names):
    top_features = np.argsort(mnb_news.feature_log_prob_[i])[-10:]
    print(f"\n{category}:")
    print(", ".join(feature_names[top_features]))
```
Spam Classification

```python
# A simple spam-classification example
spam_data = [
    ("Free money now!!!", 1),
    ("Win a lottery today", 1),
    ("Cheap medicine available", 1),
    ("You have won $1000000", 1),
    ("Meeting at 3pm tomorrow", 0),
    ("Project report attached", 0),
    ("Can you review this document?", 0),
    ("Team lunch on Friday", 0),
    ("Buy now, limited offer!", 1),
    ("Conference call reminder", 0),
    ("Get rich quick scheme", 1),
    ("Quarterly review meeting", 0),
]
texts, labels = zip(*spam_data)
labels = np.array(labels)

# Vectorize
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)

# Train
mnb_spam = MultinomialNB()
mnb_spam.fit(X, labels)

# Predict on new emails
new_emails = [
    "Free lottery tickets for you!",
    "Please review the attached report",
    "Cheap pills available now"
]
X_new = vectorizer.transform(new_emails)
predictions = mnb_spam.predict(X_new)
probas = mnb_spam.predict_proba(X_new)
print("Spam detection:")
for email, pred, proba in zip(new_emails, predictions, probas):
    label = "Spam" if pred == 1 else "Ham"
    confidence = proba[pred]
    print(f"  '{email[:30]}...' -> {label} (confidence: {confidence:.3f})")
```
Naive Bayes vs. Other Classifiers

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
import time

# Compare on the 20newsgroups data
classifiers = {
    'Naive Bayes': MultinomialNB(alpha=0.1),
    'Logistic Regression': LogisticRegression(max_iter=1000),
    'Linear SVM': LinearSVC(),
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=42)
}
results = []
for name, clf in classifiers.items():
    start = time.time()
    clf.fit(X_train_tfidf, train_data.target)
    train_time = time.time() - start
    start = time.time()
    accuracy = clf.score(X_test_tfidf, test_data.target)
    pred_time = time.time() - start
    results.append({
        'Classifier': name,
        'Accuracy': accuracy,
        'Train Time': train_time,
        'Predict Time': pred_time
    })
    print(f"{name}: Acc={accuracy:.4f}, Train={train_time:.3f}s, Predict={pred_time:.3f}s")

# Results table
import pandas as pd
print(pd.DataFrame(results).set_index('Classifier'))
```
FAQ
Q1: Is the independence assumption realistic?
Rarely, but Naive Bayes performs well even when the assumption fails, because:
- Classification only needs the correct ranking of classes, not exact probabilities
- Errors in the per-feature estimates tend to cancel out
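One way to see this with made-up numbers: duplicating a feature violates independence as badly as possible, yet under equal priors only the confidence changes, never the decision. The probability estimates become overconfident while the ranking of classes is preserved.

```python
# Hypothetical per-feature likelihoods for two classes (made-up numbers):
# a single feature favors class 1 with likelihood 0.6 vs 0.4 for class 0.
p1, p0 = 0.6, 0.4

def posterior_class1(copies):
    """Posterior P(c=1|x) under equal priors when the 'independent'
    features are actually `copies` exact duplicates of one feature."""
    s1, s0 = p1 ** copies, p0 ** copies
    return s1 / (s1 + s0)

for d in [1, 2, 5, 20]:
    print(d, posterior_class1(d))
# The predicted class (class 1) never changes, but the probability
# estimate grows ever more extreme as duplicates are added.
```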
Q2: Which Naive Bayes variant should I use?
| Variant | Suitable For |
|---|---|
| Gaussian | continuous numeric features |
| Multinomial | word counts, TF-IDF |
| Bernoulli | binary features, short text |
Q3: What are the pros and cons of Naive Bayes?
| Pros | Cons |
|---|---|
| Fast to train and predict | Strong independence assumption |
| Works with little data | Poorly calibrated probability estimates |
| Easy to interpret | Degrades when features are correlated |
| Handles multi-class naturally | Zero-probability problem |
Q4: How can Naive Bayes be improved?
- Feature selection to reduce correlated features
- Tuning the smoothing parameter
- Using TF-IDF instead of raw counts
- Combining with ensemble methods
Summary
| Concept | Notes |
|---|---|
| Core rule | $\hat{c} = \arg\max_c P(c)\prod_i P(x_i \mid c)$ |
| Key assumption | conditional independence of features |
| Gaussian | continuous features, assumes a normal distribution |
| Multinomial | count features, multinomial distribution |
| Bernoulli | binary features |
| Smoothing | Laplace smoothing fixes zero probabilities |
References
- Hang Li, *Statistical Learning Methods*, Chapter 4
- Zhihua Zhou, *Machine Learning*, Chapter 7
- scikit-learn documentation: Naive Bayes
Copyright notice: Unless otherwise stated, this article is copyright sshipanoo; please credit this article's link when reposting.
(Licensed under CC BY-NC-SA 4.0)
Title: "Machine Learning Basics Series: Naive Bayes"
Link: http://localhost:3015/ai/%E6%9C%B4%E7%B4%A0%E8%B4%9D%E5%8F%B6%E6%96%AF.html