Much like fine-tuning an LLM, fine-tuning an embedding model aims to improve performance on your own data domain. Sentence Transformers is essentially a convenience wrapper around transformers, so it is quite concise to use.

Semantic Textual Similarity (STS)

Using this task as an example, training breaks down into the following steps. PS: there is no need to copy the code; just download the attachment (which includes the logging setup, argparse arguments, and the full script). The code below is only there to help readers follow along with the explanation.

1. Load the model

from sentence_transformers import SentenceTransformer

model = SentenceTransformer(model_name_or_path)

2. Load the dataset

from datasets import load_dataset

ds = load_dataset("sentence-transformers/stsb")
trainset = ds['train']
evalset = ds['validation']
testset = ds['test']

This dataset is a standard benchmark for the semantic textual similarity task. It consists of sentence pairs with a float score, where the score indicates how semantically similar sentence1 and sentence2 are.

# print(ds)
DatasetDict({
    train: Dataset({
        features: ['sentence1', 'sentence2', 'score'],
        num_rows: 5749
    })
    validation: Dataset({
        features: ['sentence1', 'sentence2', 'score'],
        num_rows: 1500
    })
    test: Dataset({
        features: ['sentence1', 'sentence2', 'score'],
        num_rows: 1379
    })
})

# print(ds['train'][0])
{'sentence1': 'A plane is taking off.',
 'sentence2': 'An air plane is taking off.',
 'score': 1.0}

3. Define the loss, which will later be passed to the trainer

from sentence_transformers import losses

""" Options
For this task, CoSENTLoss generally works better:
loss_fn = losses.CoSENTLoss(model=model)
"""
loss_fn = losses.CosineSimilarityLoss(model=model)

The loss functions suited to this task are CosineSimilarityLoss and CoSENTLoss; the latter usually performs better. Note that you must pass the model in, not just name the loss function.
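For intuition, CosineSimilarityLoss is essentially a mean squared error between the cosine similarity of the two sentence embeddings and the gold score. A minimal pure-Python sketch, with toy vectors standing in for real model outputs:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cosine_similarity_loss(pairs, scores):
    # Mean squared error between the predicted cosine similarity
    # and the gold score for each sentence pair.
    errors = [(cosine(u, v) - s) ** 2 for (u, v), s in zip(pairs, scores)]
    return sum(errors) / len(errors)

# Toy embeddings (hypothetical values, not real model outputs).
pairs = [([1.0, 0.0], [1.0, 0.0]), ([1.0, 0.0], [0.0, 1.0])]
scores = [1.0, 0.0]
print(cosine_similarity_loss(pairs, scores))  # 0.0: predictions match gold
```

Training then pushes the embeddings so that their cosine similarity approaches the annotated score.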

4. Define the evaluator; it is also passed to the trainer and will evaluate the model periodically according to your settings

from sentence_transformers import SimilarityFunction
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

eval_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=evalset["sentence1"],
    sentences2=evalset["sentence2"],
    scores=evalset["score"],
    main_similarity=SimilarityFunction.COSINE,  # similarity function
)

5. Define the training arguments

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    # Required parameter:
    output_dir=output_path,
    # Optional training parameters:
    num_train_epochs=num_epochs,
    per_device_train_batch_size=train_batch_size,
    per_device_eval_batch_size=train_batch_size,
    warmup_ratio=0.1,
    fp16=True,  # Set to False if you get an error that your GPU can't run on FP16
    bf16=False,  # Set to True if you have a GPU that supports BF16
    # Optional tracking/debugging parameters:
    evaluation_strategy="steps",
    eval_steps=200,
    save_strategy="steps",
    save_steps=200,
    save_total_limit=2,
    logging_steps=200,
    # run_name="sts",  # Will be used in W&B if `wandb` is installed
)

These are very similar to the transformers TrainingArguments, so this article will not go through them.

6. Define the trainer

from sentence_transformers import SentenceTransformerTrainer

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=trainset,
    eval_dataset=evalset,
    loss=loss_fn,
    evaluator=eval_evaluator,
)
trainer.train()

Here we simply pass everything defined earlier into the trainer. Note the two evaluation-related parameters, eval_dataset and evaluator: the evaluator is what actually performs the evaluation, and eval_dataset can be omitted when an evaluator is present, because the evaluator's definition already specifies which data it uses. Official docs: https://sbert.net/docs/package_reference/sentence_transformer/trainer.html

PS: You can also use multiple evaluators at once. Just wrap the evaluators you have already defined in a SequentialEvaluator, and the result can be passed to the trainer like a normal evaluator.

from sentence_transformers.evaluation import SequentialEvaluator

seq_evaluator = SequentialEvaluator([eval_evaluator_pair, eval_evaluator_triplet])

7. Evaluate

test_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=testset["sentence1"],
    sentences2=testset["sentence2"],
    scores=testset["score"],
    main_similarity=SimilarityFunction.COSINE,
)

test_evaluator(model)

8. Save the model

model.save(final_output_dir)

Training on triplet data (anchor, positive, negative)

Triplet data provides both a positively related and a negatively related example for each sentence; as of this writing, the official documentation does not point to a ready-made dataset in this format.

I used data built at my company (slightly different from standard triplets: each sentence has multiple positive and multiple negative examples), so I cannot release it, but here is the corresponding conversion code.

import json

# load a jsonl file into (anchor, positive, negative) triplets
def json2triplet(filepath:str):
    l_anchor = []
    l_positive = []
    l_negative = []
    with open(filepath) as file:
        for line in file:
            tmp_dict = json.loads(line)
            anchor = tmp_dict['query']
            for pos in tmp_dict['pos']:
                for neg in tmp_dict['neg']:
                    l_anchor.append(anchor)
                    l_positive.append(pos)
                    l_negative.append(neg)
                    
    data = {
        "anchor": l_anchor,
        "positive": l_positive,
        "negative": l_negative,
    }
    return data

# load json file into (sentence1, sentence2) pairs with float score
def json2pairswithscore(filepath:str):
    
    l_sentences_1 = []
    l_sentences_2 = []
    l_score = []
    
    with open(filepath) as file:
        for line in file:
            tmp_dict = json.loads(line)
            anchor = tmp_dict['query']
            for pos in tmp_dict['pos']:
                l_sentences_1.append(anchor)
                l_sentences_2.append(pos)
                l_score.append(1)
            for neg in tmp_dict['neg']:
                l_sentences_1.append(anchor)
                l_sentences_2.append(neg)
                l_score.append(-1)
                
    data = {
        "sentence1": l_sentences_1,
        "sentence2": l_sentences_2,
        "score": l_score,
    }
    return data

from datasets import Dataset

# Option A: evaluate on (sentence1, sentence2, score) pairs
pairswithscore = json2pairswithscore(args.valid_data_path)
ds_pairswithscore = Dataset.from_dict(pairswithscore)
ds_eval = ds_pairswithscore

# Option B: evaluate on (anchor, positive, negative) triplets
triplets = json2triplet(args.valid_data_path)
ds_triplets = Dataset.from_dict(triplets)
ds_eval = ds_triplets

Once the data is converted into the correct format, Dataset.from_dict() turns it into a dataset.
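As a quick sanity check of the conversion logic, here is the triplet expansion run on one synthetic jsonl record (the query/pos/neg fields mirror the format assumed above; the record contents are made up):

```python
import json
import tempfile

def json2triplet(filepath: str):
    # Expand each (query, pos, neg) record into the full cross product
    # of (anchor, positive, negative) triplets.
    l_anchor, l_positive, l_negative = [], [], []
    with open(filepath) as file:
        for line in file:
            record = json.loads(line)
            for pos in record["pos"]:
                for neg in record["neg"]:
                    l_anchor.append(record["query"])
                    l_positive.append(pos)
                    l_negative.append(neg)
    return {"anchor": l_anchor, "positive": l_positive, "negative": l_negative}

# One synthetic record: 2 positives x 2 negatives -> 4 triplets.
record = {"query": "q", "pos": ["p1", "p2"], "neg": ["n1", "n2"]}
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write(json.dumps(record) + "\n")
    path = f.name

data = json2triplet(path)
print(len(data["anchor"]))  # 4
```

Note that the cross product grows quickly: a record with m positives and n negatives yields m*n triplets.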

Beyond that, only the following parts differ from the semantic textual similarity task. Note that the dataset layout must match the loss.

# dataset
Dataset({
    features: ['anchor', 'positive', 'negative'],
    num_rows: 10868
})

# loss function
loss_fn = losses.TripletLoss(model=model)


# evaluator
from sentence_transformers.evaluation import TripletEvaluator

eval_evaluator_triplet = TripletEvaluator(
    anchors=ds_eval["anchor"],
    positives=ds_eval["positive"],
    negatives=ds_eval["negative"],
)
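TripletLoss follows the standard hinge formulation: the positive must sit closer to the anchor than the negative by at least a margin. A minimal sketch (the Euclidean distance and margin of 5 match sentence-transformers' defaults to the best of my knowledge; verify against the library docs):

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=5.0):
    # Hinge loss: zero once the negative is at least `margin`
    # farther from the anchor than the positive is.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy vectors: the positive is near the anchor, the negative far away.
a, p, n = [0.0, 0.0], [1.0, 0.0], [10.0, 0.0]
print(triplet_loss(a, p, n))  # d(a,p)=1, d(a,n)=10 -> max(0, 1-10+5) = 0.0
```

Training therefore only penalizes triplets where the negative is not yet sufficiently separated from the anchor.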

Evaluation notes

Here I will only cover EmbeddingSimilarityEvaluator and TripletEvaluator, the two evaluators used in the code I provided.

EmbeddingSimilarityEvaluator produces output like this:

EmbeddingSimilarityEvaluator: Evaluating the model on the  dataset:
Cosine-Similarity :       Pearson: 0.7874 Spearman: 0.8004
Manhattan-Distance:       Pearson: 0.7823 Spearman: 0.7827
Euclidean-Distance:       Pearson: 0.7824 Spearman: 0.7827
Dot-Product-Similarity:   Pearson: 0.7192 Spearman: 0.7126

Explanation: the first column lists the different similarity functions. Pearson and Spearman both measure the correlation between the similarity scores computed by the model and the ground-truth scores in the dataset, but they capture different kinds of relationship (linear vs. rank-order). Both range over [-1, 1]: the closer to 1, the stronger the correlation, meaning the embedding-based similarity tracks the given scores well; values near -1 indicate poor agreement.
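The difference between the two metrics can be sketched in a few lines of pure Python: Pearson correlates the raw values, while Spearman is just Pearson applied to their ranks (ties are not handled here, which is fine for this toy example):

```python
def pearson(x, y):
    # Linear correlation between two score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Rank correlation: Pearson applied to the ranks of the values.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

gold = [0.1, 0.5, 0.9, 0.3]
pred = [0.2, 0.6, 1.0, 0.4]   # same ordering, linear shift of gold
print(round(pearson(gold, pred), 4))   # 1.0: perfectly linear relationship
print(round(spearman(gold, pred), 4))  # 1.0: identical ranking
```

A model whose scores preserve the ordering but are nonlinearly distorted would still get Spearman 1.0 while Pearson drops below 1.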

TripletEvaluator produces output like this:

TripletEvaluator: Evaluating the model on the  dataset:
Accuracy Cosine Distance:   	92.63
Accuracy Dot Product:       	7.30
Accuracy Manhattan Distance:	92.91
Accuracy Euclidean Distance:	92.63

Explanation: the first column names the similarity function (e.g. cosine, dot product), and the value is the accuracy of correctly ranking positive over negative examples. PS: why is the dot product so much worse? Because when I built the evaluation data, the scores were assigned with cosine in mind, i.e. each sentence has score -1 with its negatives and 1 with its positives; that scoring scheme may simply not suit the dot product.
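The underlying issue is that the dot product is sensitive to vector magnitude while cosine is not, so a longer vector can win on raw dot product even when its direction is less aligned. A minimal sketch with made-up toy vectors:

```python
import math

def cosine(u, v):
    # Direction-only similarity: magnitude is normalized away.
    dot_uv = sum(a * b for a, b in zip(u, v))
    return dot_uv / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def dot(u, v):
    # Raw dot product: grows with vector magnitude.
    return sum(a * b for a, b in zip(u, v))

anchor   = [1.0, 0.0]
positive = [0.9, 0.1]   # nearly the same direction, small magnitude
negative = [5.0, 5.0]   # 45 degrees away, but a much longer vector

# Cosine ranks the positive first, as intended:
print(cosine(anchor, positive) > cosine(anchor, negative))  # True
# The unnormalized dot product is fooled by the negative's magnitude:
print(dot(anchor, positive) > dot(anchor, negative))        # False
```

This is why a model trained or scored with cosine in mind can look fine under cosine accuracy yet score poorly under dot-product accuracy.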

References:

https://www.sbert.net/docs
