You can do this with the TimeDistributed layer wrapper:
from keras.layers import Input, Dense, TimeDistributed
from keras.models import Model

N = None  # Use a fixed value if you do not want a variable input size
K = 20

def small_model():
    inputs = Input(shape=(K,))
    # Define the small model
    # Here it is just a single dense layer
    outputs = Dense(K, activation='relu')(inputs)
    return Model(inputs=inputs, outputs=outputs)

def large_model():
    inputs = Input(shape=(N, K))
    # Define the large model
    # Just a single neuron here
    outputs = Dense(1, activation='relu')(inputs)
    return Model(inputs=inputs, outputs=outputs)

def combined_model():
    inputs = Input(shape=(N, K))
    # The TimeDistributed layer applies the given model
    # to every input across dimension 1 (N)
    small_model_out = TimeDistributed(small_model())(inputs)
    # Apply the large model
    outputs = large_model()(small_model_out)
    return Model(inputs=inputs, outputs=outputs)

model = combined_model()
model.compile(loss='mean_squared_error', optimizer='sgd')
model.summary()
Output:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         (None, None, 20)          0
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 20)          420
_________________________________________________________________
model_2 (Model)              (None, None, 1)           21
=================================================================
Total params: 441
Trainable params: 441
Non-trainable params: 0
_________________________________________________________________
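A quick way to verify the variable input size is to feed the compiled model batches with different N. This is a minimal sketch using random arrays (not part of the original answer), just to show that the first dimension may differ between calls:

import numpy as np

# Hypothetical random data, only to demonstrate the variable N dimension
x_short = np.random.rand(4, 7, K)    # batch of 4 sequences, N = 7
x_long  = np.random.rand(4, 15, K)   # batch of 4 sequences, N = 15

print(model.predict(x_short).shape)  # (4, 7, 1)
print(model.predict(x_long).shape)   # (4, 15, 1)

Within a single batch all sequences must share the same N; only across batches can it vary.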