
Bidirectional LSTM gives a loss of NaN

慕容3067478 2023-04-11 15:11:23
I am using a Twitter emotion dataset to classify emotions. To do this I wrote the code below, but when I train the model I get a loss of NaN. Although I managed to find a solution to the problem, I don't understand why the problem occurs in the first place. Code:

import pandas as pd
import numpy as np
import re

cols = ["id", "text", "emotion", "intensity"]
anger_df_train = pd.read_csv("D:/Dataset/twitter_emotion/train/anger.csv", delimiter='\t', names=cols)
fear_df_train = pd.read_csv("D:/Dataset/twitter_emotion/train/fear.csv", delimiter='\t', names=cols)
joy_df_train = pd.read_csv("D:/Dataset/twitter_emotion/train/joy.csv", delimiter='\t', names=cols)
sadness_df_train = pd.read_csv("D:/Dataset/twitter_emotion/train/sadness.csv", delimiter='\t', names=cols)
df_train = pd.concat([anger_df_train, fear_df_train, joy_df_train, sadness_df_train])

import spacy
nlp = spacy.load('en_core_web_md')
doc = nlp("The big grey dog ate all of the chocolate, but fortunately he wasn't sick!")

def spacy_tokenizer(sentence):
    emails = '[A-Za-z0-9]+@[a-zA-Z].[a-zA-Z]+'
    websites = '(http[s]*:[/][/])[a-zA-Z0-9]'
    mentions = '@[A-Za-z0-9]+'
    sentence = re.sub(emails, '', sentence)
    sentence = re.sub(websites, '', sentence)
    sentence = re.sub(mentions, '', sentence)
    sentence_list = [word.lemma_ for word in nlp(sentence)
                     if not (word.is_stop or word.is_space or word.like_num or len(word) == 1)]
    return ' '.join(sentence_list)

import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

df_train['new_text'] = df_train['text'].apply(spacy_tokenizer)
tokenizer = Tokenizer(num_words=10000)
tokenizer.fit_on_texts(df_train['new_text'].values)
sequences = tokenizer.texts_to_sequences(df_train['new_text'].values)
word_index = tokenizer.word_index  # this assignment was missing in the original snippet

text_embedding = np.zeros((len(word_index) + 1, 300))
for word, i in word_index.items():
    text_embedding[i] = nlp(word).vector

labels = df_train['emotion'].unique()
label_tokenizer = Tokenizer()
label_tokenizer.fit_on_texts(labels)

1 Answer

蝴蝶刀刀

Contributed 1801 experience points · earned 8+ upvotes

The NaN is caused by sparse_categorical_crossentropy because the labels were tokenized with the Tokenizer, which generates a train-labels array like this:

array([[1],
       [2],
       [3],
       [4],
       [1],
       [2],
       [3]])
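The 1-based indexing is easy to reproduce without TensorFlow: Keras' Tokenizer reserves index 0 for padding and numbers words from 1, so with four emotion labels the highest index is 4, which falls outside the valid range 0–3 for a 4-class sparse_categorical_crossentropy. A minimal sketch (the dict below mimics `tokenizer.word_index`; the label order is illustrative):

```python
# Sketch: mimic what Keras' Tokenizer does to the four emotion labels.
# The Tokenizer reserves index 0 for padding, so word indices start at 1.
labels = ['anger', 'fear', 'joy', 'sadness']
word_index = {w: i + 1 for i, w in enumerate(labels)}  # like tokenizer.word_index

print(word_index)                # indices run 1..4, never 0
print(max(word_index.values()))  # 4 -- out of range for 4 classes (valid: 0..3)
```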

However, if the sparse_categorical_crossentropy loss is to be applied, the train-labels array should look like this:

array([0, 1, 2, 3, 0, 1, 2])
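If you prefer to keep the Tokenizer-generated labels, an equivalent fix (a sketch, not part of the original answer) is simply to shift them down by one so they become 0-based:

```python
import numpy as np

# The 1-based labels produced by tokenizing the emotion strings, as shown above.
tokenized_labels = np.array([[1], [2], [3], [4], [1], [2], [3]])

# Flatten and shift into the 0-based range that sparse_categorical_crossentropy expects.
zero_based = tokenized_labels.flatten() - 1
print(zero_based)  # [0 1 2 3 0 1 2]
```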

So you can make your code work with the sparse_categorical_crossentropy loss using the following code:

from sklearn.model_selection import train_test_split

label_map = {'anger': 0, 'fear': 1, 'joy': 2, 'sadness': 3}
df_train['labels'] = df_train['emotion'].map(label_map)  # 0-based integer labels

sparse_categorical_labels = df_train['labels'].values

# train_padd is the padded sequence array from your preprocessing
X_train, X_test, y_train, y_test = train_test_split(train_padd, sparse_categorical_labels, test_size=0.2, shuffle=True)
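To see the mapping step in isolation, here is a self-contained sketch using a toy frame with the question's `emotion` column (the column name and values follow the question; the rows are illustrative):

```python
import pandas as pd

# Toy frame with the same 'emotion' column as the question's df_train.
df_train = pd.DataFrame({'emotion': ['anger', 'fear', 'joy', 'sadness', 'anger']})

label_map = {'anger': 0, 'fear': 1, 'joy': 2, 'sadness': 3}
sparse_categorical_labels = df_train['emotion'].map(label_map).values

print(sparse_categorical_labels)  # [0 1 2 3 0] -- 0-based, safe for the loss
```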


