How do you save the network weights and architecture of a fine-tuned BERT model?
Background: saving the full model fails with the error "Custom mask layers require a config and must override get_config ..."
1.) If the pre-trained BERT model is only used to produce sentence embeddings, then save only the downstream network you attach after it; there is no need to save the whole pipeline (i.e., the BERT model need not be included), so loading the pre-trained BERT model can be skipped. This brings you back to the traditional model.save() / load_model('xxxx.h5') flow; a minimal sketch follows.
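A minimal sketch for case 1, assuming the BERT sentence embeddings (here a 768-dim vector, the usual BERT hidden size) are computed beforehand and fed in as plain features; the layer sizes and file name are illustrative assumptions. The point is that a model with no custom BERT layers saves and loads the traditional way:

from tensorflow import keras

# Downstream classifier only; BERT embeddings are precomputed features.
model = keras.Sequential([
    keras.layers.Dense(128, activation='relu', input_shape=(768,)),  # 768 = assumed BERT embedding size
    keras.layers.Dense(2, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(bert_embeddings, labels, ...)   # embeddings produced separately by BERT

model.save('downstream.h5')                          # no custom layers inside, so this works
restored = keras.models.load_model('downstream.h5')  # traditional load_model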
2.) If the pre-trained BERT model is chained to your own network architecture and the two are trained together, use method 2:
Save only the weights (model.save_weights), use your original model-building code to create an empty new_model, then call new_model.load_weights. A fuller sketch follows the snippet below.
model.save_weights('my_model_weights.h5')
# ... later: rebuild the same architecture, then load the weights back in
new_model = build_model()   # hypothetical: your own model-building function
new_model.load_weights('my_model_weights.h5')
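A fuller sketch of the weights-only pattern. The BERT layer is illustrated here with HuggingFace's TFBertModel as an assumption (the original post may use a different BERT implementation, e.g. keras-bert); build_model() and the checkpoint name are hypothetical. The key point is that the same building code runs twice, so the architecture never needs to be serialized and get_config is never called:

import tensorflow as tf
from tensorflow import keras
from transformers import TFBertModel  # assumption: HuggingFace Transformers BERT

def build_model(max_len=128):
    # Rebuilds the exact same architecture every time; only weights are saved to disk.
    input_ids = keras.Input(shape=(max_len,), dtype=tf.int32, name='input_ids')
    attention_mask = keras.Input(shape=(max_len,), dtype=tf.int32, name='attention_mask')
    bert = TFBertModel.from_pretrained('bert-base-chinese')  # assumed checkpoint
    pooled = bert(input_ids, attention_mask=attention_mask).pooler_output
    outputs = keras.layers.Dense(2, activation='softmax')(pooled)
    model = keras.Model(inputs=[input_ids, attention_mask], outputs=outputs)
    model.compile(optimizer=keras.optimizers.Adam(2e-5),
                  loss='sparse_categorical_crossentropy')
    return model

model = build_model()
# model.fit(...)                                # fine-tune BERT and your head together
model.save_weights('my_model_weights.h5')       # weights only; architecture not serialized

new_model = build_model()                       # empty model with the same architecture
new_model.load_weights('my_model_weights.h5')   # weights slot back in layer by layer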