Stopwords
Stopwords are words (or symbols) that occur in large numbers in a corpus but are unrelated to its topical content, e.g. punctuation marks, measure words, and transitional words. Many ready-made stopword lists are available online and can be downloaded directly.
Typically, to extract the topical content of a passage, we first remove from it every token that appears in a stopword list.
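A minimal filtering sketch, assuming a tokenized sentence and a tiny inline stopword set (a real project would load a full stopword file, as the code later in this article does):
# Tiny illustrative stopword set, not a real stopword file.
stopword_set = {'的', '了', ',', '。'}
tokens = ['我', '的', '猫', '喜欢', '看', '电视', '。']
filtered = [tok for tok in tokens if tok not in stopword_set]
print(filtered)  # ['我', '猫', '喜欢', '看', '电视']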
TF-IDF Keyword Extraction
TF-IDF is a commonly used algorithm for keyword extraction, i.e., for pulling out the topical content of a passage. If a word is rare in general but appears many times in a particular article, it very likely reflects what that article is about, and it is exactly the kind of keyword we want to extract.
- TF (Term Frequency): how often a term appears in the current document
- IDF (Inverse Document Frequency): how rare the term is across all documents
The relevant formulas (using a common smoothed form of IDF):
$$\mathrm{TF}(t,d) = \frac{\text{occurrences of } t \text{ in } d}{\text{total terms in } d}, \qquad \mathrm{IDF}(t) = \log\frac{N}{n_t + 1}$$
where $N$ is the total number of documents and $n_t$ is the number of documents containing $t$. The final score is:
$$\text{TF-IDF}(t,d) = \mathrm{TF}(t,d) \times \mathrm{IDF}(t)$$
The larger this value, the more important the word is to the document and the more likely it is a keyword.
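As a concrete illustration, here is a from-scratch sketch of the formulas above on an invented three-document corpus (a real pipeline would use jieba.analyse or sklearn's TfidfVectorizer, both of which appear later in this article):
import math

# Invented toy corpus: each document is a list of tokens.
docs = [
    ['中国', '蜜蜂', '养殖', '蜜蜂'],
    ['中国', '经济', '发展'],
    ['中国', '旅游', '经济'],
]

def tf(term, doc):
    return doc.count(term) / len(doc)

def idf(term, docs):
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / (n_containing + 1))

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

print(tf_idf('蜜蜂', docs[0], docs))  # ≈ 0.203: frequent here, rare elsewhere
print(tf_idf('中国', docs[0], docs))  # ≈ -0.072: appears in every document, so it is penalized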
Similarity
Similarity measures how alike two sentences are.
For example:
Sentence A: 我喜欢看电视,不喜欢看电影 (I like watching TV, but I don't like watching movies)
Sentence B: 我不喜欢看电视,也不喜欢看电影 (I don't like watching TV, and I don't like watching movies either)
Step 1: Word segmentation
Sentence A: 我/喜欢/看/电视,不/喜欢/看/电影
Sentence B: 我/不/喜欢/看/电视,也/不/喜欢/看/电影
Step 2: Build the vocabulary
Vocabulary: 我, 喜欢, 看, 电视, 电影, 不, 也
Step 3: Count word frequencies
Sentence A: 我 1, 喜欢 2, 看 2, 电视 1, 电影 1, 不 1, 也 0
Sentence B: 我 1, 喜欢 2, 看 2, 电视 1, 电影 1, 不 2, 也 1
Step 4: Build word-frequency vectors
There are many ways to build vectors. Here we use raw word frequencies, whose drawback is that they discard contextual information, so in practice word embeddings such as word2vec (e.g., via the Gensim library; see the short sketch after the vectors below) are usually preferred. For this walkthrough we stick with word frequencies.
Sentence A: (1, 2, 2, 1, 1, 1, 0)
Sentence B: (1, 2, 2, 1, 1, 2, 1)
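As a hedged aside, training word embeddings with Gensim looks roughly like this (parameter names follow Gensim 4.x, where `size` was renamed `vector_size`; two sentences are of course far too little data for meaningful vectors):
from gensim.models import Word2Vec

# The tokenized sentences from Step 1 (toy-sized corpus, illustration only).
sentences = [
    ['我', '喜欢', '看', '电视', '不', '喜欢', '看', '电影'],
    ['我', '不', '喜欢', '看', '电视', '也', '不', '喜欢', '看', '电影'],
]

model = Word2Vec(sentences, vector_size=50, min_count=1, window=3)
print(model.wv['电视'])                    # the learned embedding for 电视
print(model.wv.similarity('电视', '电影'))  # cosine similarity of two embeddings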
Step 5: Compute the similarity
There are many similarity measures; here we only cover the most common one, cosine similarity:
$$\cos\theta = \frac{A \cdot B}{\lVert A \rVert \, \lVert B \rVert}$$
For sentences A and B above:
$$\cos\theta = \frac{1 \times 1 + 2 \times 2 + 2 \times 2 + 1 \times 1 + 1 \times 1 + 1 \times 2 + 0 \times 1}{\sqrt{1^2+2^2+2^2+1^2+1^2+1^2+0^2} \times \sqrt{1^2+2^2+2^2+1^2+1^2+2^2+1^2}} = \frac{13}{\sqrt{12} \times \sqrt{16}} \approx 0.94$$
The closer the value is to 1, the more similar the two sentences are.
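The same computation takes only a few lines of NumPy, using the vectors from Step 4:
import numpy as np

# Word-frequency vectors from Step 4 (vocabulary order: 我, 喜欢, 看, 电视, 电影, 不, 也).
a = np.array([1, 2, 2, 1, 1, 1, 0])
b = np.array([1, 2, 2, 1, 1, 2, 1])

cos_sim = a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cos_sim)  # ≈ 0.938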
LDA Topic Modeling
LDA is an unsupervised topic-modeling algorithm; as with K-Means, the number of clusters (topics) must be specified in advance.
The code is as follows.
For the final classification step, the example demonstrates two approaches: a naive Bayes classifier built on word-frequency vectors, and one built on TF-IDF keyword vectors.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import jieba
%matplotlib inline
df_news = pd.read_table('./data/val.txt',names=['category','theme','URL','content'],encoding='utf-8')
df_news = df_news.dropna()
df_news.head()
df_news.shape
content=df_news.content.values.tolist()
print(content[1000])
content_S = []
for line in content:
    current_segment = jieba.lcut(line)  # segment each article with jieba
    if len(current_segment) > 1 and current_segment != '\r\n':
        content_S.append(current_segment)
content_S[1000]
df_content = pd.DataFrame({'content_S': content_S})
df_content.head()
stopwords = pd.read_csv('stopwords.txt',index_col=False,sep='\t',quoting=3,names=['stopword'], encoding='utf-8')
stopwords.head(20)
def drop_stopwords(contents, stopwords):
    # Remove stopwords from every segmented line; also collect the kept
    # words into a flat list for the word-frequency statistics below.
    contents_clean = []
    all_words = []
    for line in contents:
        line_clean = []
        for word in line:
            if word in stopwords:
                continue
            line_clean.append(word)
            all_words.append(str(word))
        contents_clean.append(line_clean)
    return contents_clean, all_words
contents = df_content.content_S.values.tolist()
stopwords = stopwords.stopword.values.tolist()
contents_clean,all_words = drop_stopwords(contents,stopwords)
df_content = pd.DataFrame({'contents_clean':contents_clean})
df_content.head()
df_all_words = pd.DataFrame({'all_words':all_words})
df_all_words.head()
words_count = df_all_words.groupby('all_words')['all_words'].agg(count='size')
words_count = words_count.reset_index().sort_values(by='count', ascending=False)
words_count.head()
from wordcloud import WordCloud
plt.rcParams['figure.figsize'] = (10.0, 5.0)
wordcloud=WordCloud(font_path="./data/simhei.ttf",background_color="white",max_font_size=80)
word_frequence = {x[0]:x[1] for x in words_count.head(100).values}
wordcloud=wordcloud.fit_words(word_frequence)
plt.imshow(wordcloud)
import jieba.analyse
index=1000
print(df_news['content'][index])
content_S_str = ' '.join(content_S[index])
print(' '.join(jieba.analyse.extract_tags(content_S_str, topK=5, withWeight=False)))
from gensim import corpora, models, similarities
import gensim
#http://radimrehurek.com/gensim/
#Build the word-to-id mapping (essentially a bag-of-words dictionary)
dictionary = corpora.Dictionary(contents_clean)
corpus = [dictionary.doc2bow(sentence) for sentence in contents_clean]
lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=20) #like K-Means, the number of topics is chosen by hand
#Result for topic no. 1
print(lda.print_topic(1, topn=5))
for topic in lda.print_topics(num_topics=20, num_words=5):
    print(topic[1])
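Once the model is trained, gensim can also report the topic mixture of an individual document; a brief usage sketch (the document index 0 is arbitrary):
# List of (topic_id, probability) pairs for the first preprocessed document.
print(lda.get_document_topics(corpus[0]))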
df_train=pd.DataFrame({'contents_clean':contents_clean,'label':df_news['category']})
df_train.tail()
df_train.label.unique()
label_mapping = {"汽车": 1, "财经": 2, "科技": 3, "健康": 4, "体育":5, "教育": 6,"文化": 7,"军事": 8,"娱乐": 9,"时尚": 0}
df_train['label'] = df_train['label'].map(label_mapping)
df_train.head()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(df_train['contents_clean'].values, df_train['label'].values, random_state=1)
x_train[0][1]
words = []
for line_index in range(len(x_train)):
    try:
        # Join each token list back into one space-separated string for the vectorizer.
        words.append(' '.join(x_train[line_index]))
    except Exception:
        print(line_index)
words[0]
print (len(words))
from sklearn.feature_extraction.text import CountVectorizer
texts=["dog cat fish","dog cat cat","fish bird", 'bird']
cv = CountVectorizer()
cv_fit=cv.fit_transform(texts)
print(cv.get_feature_names_out())
print(cv_fit.toarray())
print(cv_fit.toarray().sum(axis=0))
from sklearn.feature_extraction.text import CountVectorizer
texts=["dog cat fish","dog cat cat","fish bird", 'bird']
cv = CountVectorizer(ngram_range=(1,4))
cv_fit=cv.fit_transform(texts)
print(cv.get_feature_names_out())
print(cv_fit.toarray())
print(cv_fit.toarray().sum(axis=0))
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer(analyzer='word', max_features=4000, lowercase = False)
vec.fit(words)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(vec.transform(words), y_train)
test_words = []
for line_index in range(len(x_test)):
    try:
        test_words.append(' '.join(x_test[line_index]))
    except Exception:
        print(line_index)
test_words[0]
classifier.score(vec.transform(test_words), y_test)
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(analyzer='word', max_features=4000, lowercase = False)
vectorizer.fit(words)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(vectorizer.transform(words), y_train)
classifier.score(vectorizer.transform(test_words), y_test)
The two classifiers reach accuracies of 0.804 (word-frequency features) and 0.8152 (TF-IDF features) respectively, so the TF-IDF representation performs slightly better here.