2 Answers
You can do this in a straightforward way using OneHotEncoder() and np.dot():
Turn each element in the dataframe into a string
Use the one-hot encoder to convert the dataframe into a one-hot matrix over the unique vocabulary of the categorical elements
Take the dot product of the one-hot matrix with itself to compute the co-occurrences
Recreate the dataframe from the co-occurrence matrix and the one-hot encoder's feature_names
#assuming this is your dataset
0 1 2 3
0 (-1.774, 1.145] (-3.21, 0.533] (0.0166, 2.007] (2.0, 3.997]
1 (-1.774, 1.145] (-3.21, 0.533] (2.007, 3.993] (2.0, 3.997]
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

df = df.astype(str)  #turn each element into a string

#get the one-hot representation of the dataframe
l = OneHotEncoder()
data = l.fit_transform(df.values)

#get the co-occurrence matrix using a dot product
co_occurrence = np.dot(data.T, data)

#get the vocab (columns and indexes) for the co-occurrence matrix
#get_feature_names() adds a column prefix (e.g. "x0_") which is stripped here for readability
vocab = [i[3:] for i in l.get_feature_names()]

#create the co-occurrence dataframe
ddf = pd.DataFrame(co_occurrence.todense(), columns=vocab, index=vocab)
print(ddf)
                 (-1.774, 1.145]  (-3.21, 0.533]  (0.0166, 2.007]  (2.007, 3.993]  (2.0, 3.997]
(-1.774, 1.145]              2.0             2.0              1.0             1.0           2.0
(-3.21, 0.533]               2.0             2.0              1.0             1.0           2.0
(0.0166, 2.007]              1.0             1.0              1.0             0.0           1.0
(2.007, 3.993]               1.0             1.0              0.0             1.0           1.0
(2.0, 3.997]                 2.0             2.0              1.0             1.0           2.0
As you can verify from the output above, this is exactly what the co-occurrence matrix should be: for example, (-1.774, 1.145] and (-3.21, 0.533] appear together in both rows, so their entry is 2.0, while (2.007, 3.993] only appears in the second row, so its entries with the other values are 1.0 (and 0.0 for (0.0166, 2.007], which only appears in the first row).
The advantage of this approach is that you can scale it using the transform method of the one-hot encoder object, and most of the processing happens on sparse matrices until the final step of creating the dataframe, so it is memory efficient.
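As a minimal sketch of that scaling idea (new_df is a hypothetical dataframe with the same columns, containing only categories seen during fitting; the fitted encoder l and vocab are reused from above):

#sketch: reuse the already-fitted encoder on new rows instead of re-fitting
new_data = l.transform(new_df.astype(str).values)  #stays sparse
new_co_occurrence = np.dot(new_data.T, new_data)   #same dot-product trick as above
new_ddf = pd.DataFrame(new_co_occurrence.todense(), columns=vocab, index=vocab)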
Assume your data is in a dataframe df.
Then you can run two loops over the rows of the dataframe, and within them two loops over the elements of each row, like this:
from collections import defaultdict

#count how often two values appear together, comparing each row against all later rows
co_occurrence = defaultdict(int)
for index, row in df.iterrows():
    for index2, row2 in df.iloc[index + 1:].iterrows():
        for row_index, feature in enumerate(row):
            for feature2 in row2[row_index + 1:]:
                co_occurrence[feature, feature2] += 1
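If you want those pairwise counts laid out as a matrix like in the first answer, one possible follow-up (just a sketch, assuming pandas is available and co_occurrence has been filled as above) is:

import pandas as pd

#sketch: pivot the (feature, feature2) -> count dictionary into a square dataframe
labels = sorted({value for pair in co_occurrence for value in pair})
matrix = pd.DataFrame(0, index=labels, columns=labels)
for (a, b), count in co_occurrence.items():
    matrix.loc[a, b] += count
    if a != b:
        matrix.loc[b, a] += count  #mirror off-diagonal entries to keep the matrix symmetric
print(matrix)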