1. Introduction to the Attention Model
0x1: What is an AM?
The attention model (AM) in deep learning mimics the attention mechanism of the human brain. For example, when we look at a painting we can take in the whole picture, but as soon as we examine it closely our eyes actually focus on only a small patch, and at that moment the brain concentrates mainly on that small region. In other words, the brain's attention over the whole image is not evenly distributed; different regions carry different weights. This is the core idea behind the attention model in deep learning.
AM was first applied in the image domain, where it achieved very good results; later, researchers began studying how to bring the AM idea into NLP. The paper generally credited with first proposing the attention idea for NLP is "Neural Machine Translation by Jointly Learning to Align and Translate", which introduced the Soft Attention Model and applied it to machine translation.
0x2: AM in Machine Translation
The Encoder-Decoder Model
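In the encoder-decoder (seq2seq) framework of the paper above, the decoder no longer has to squeeze the whole source sentence into one fixed vector: at each decoding step it computes an alignment score between its previous state and every encoder hidden state, normalizes the scores with a softmax, and takes the weighted average of the encoder states as a context vector. Below is a minimal numpy sketch of that weighting step only; the names encoder_states, decoder_state, W_a, U_a and v_a are hypothetical placeholders for parameters that a real NMT model would learn jointly with the rest of the network.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy dimensions: 5 source tokens, hidden size 8.
T, H = 5, 8
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(T, H))   # h_1..h_T from the encoder
decoder_state = rng.normal(size=(H,))      # s_{i-1}, the previous decoder state

# Hypothetical parameters of an additive (Bahdanau-style) score function.
W_a = rng.normal(size=(H, H))
U_a = rng.normal(size=(H, H))
v_a = rng.normal(size=(H,))

# e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j): how well source position j matches the current step.
scores = np.tanh(decoder_state @ W_a + encoder_states @ U_a) @ v_a
alpha = softmax(scores)                    # attention weights over the source positions
context = alpha @ encoder_states           # weighted average of the encoder states

print("weights:", np.round(alpha, 3), "sum =", alpha.sum())
print("context vector shape:", context.shape)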
Relevant Link:
https://blog.csdn.net/mpk_no1/article/details/72862348
https://machinelearningmastery.com/encoder-decoder-attention-sequence-to-sequence-prediction-keras/
https://arxiv.org/pdf/1409.0473.pdf
https://www.zhihu.com/question/36591394
0x3: A Taxonomy of Attention Mechanisms
1. Hard Attention and Soft Attention
Briefly, soft attention assigns a weight to every element (dimension or position) of the input according to its importance and takes a softmax-normalized weighted average, so it is fully differentiable and can be trained with ordinary backpropagation.
Hard attention, by contrast, commits to a single input element, for example by sampling from the weights or taking their argmax; it is not differentiable and is usually trained with techniques such as reinforcement learning. A minimal sketch contrasting the two is shown below.
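A minimal numpy sketch of the contrast, under the simplifying assumption that the unnormalized relevance scores have already been computed somewhere upstream:

import numpy as np

rng = np.random.default_rng(1)
values = rng.normal(size=(6, 4))      # 6 input elements, each a 4-dim vector
scores = rng.normal(size=(6,))        # hypothetical relevance score per element

weights = np.exp(scores) / np.exp(scores).sum()   # softmax normalization

# Soft attention: every element contributes, weighted by its score (differentiable).
soft_output = weights @ values

# Hard attention: commit to a single element, e.g. the argmax (or a sample drawn from `weights`).
hard_index = int(np.argmax(weights))
hard_output = values[hard_index]

print("soft:", np.round(soft_output, 3))
print("hard: element", hard_index, "->", np.round(hard_output, 3))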
2. Global Attention and Local Attention
3. Self Attention
Self attention is quite different from the traditional attention mechanism:
Traditional attention is computed from the hidden states of the source side and the target side, and the result captures the dependency between each source word and each target word.
Self attention is different: it is computed on the source side and on the target side separately, using only the source input or the target input itself, and therefore captures the dependencies among the words within the source side or within the target side. The self attention obtained on the source side is then combined with the attention on the target side, so that the source-target word dependencies are captured as well.
This is one of the main reasons self attention tends to outperform the traditional attention mechanism: traditional attention ignores the dependencies between words within the source sentence and within the target sentence, whereas self attention captures not only the source-target dependencies but also the word-to-word dependencies inside the source or target sentence itself, as the sketch below illustrates.
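As a concrete illustration, here is a minimal numpy sketch of scaled dot-product self attention in the style of "Attention Is All You Need" (see the arXiv link in the TODO section); the projection matrices W_q, W_k and W_v are hypothetical stand-ins for learned parameters.

import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, d_model, d_k = 4, 8, 8          # 4 tokens in one sentence, model width 8
rng = np.random.default_rng(2)
X = rng.normal(size=(T, d_model))  # embeddings of a sentence attending to itself

# Hypothetical learned projections for queries, keys and values.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Every token attends to every token of the same sentence, including itself.
attn = softmax(Q @ K.T / np.sqrt(d_k))   # (T, T) word-to-word dependency weights
output = attn @ V

print(np.round(attn, 2))   # row i: how strongly token i depends on each other token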
Relevant Link:
https://zhuanlan.zhihu.com/p/31547842
https://blog.csdn.net/jteng/article/details/52864401
2. Understanding the Principles of the Attention Model Through a Simple Example
It is important to understand that AM is not a specific algorithm or model. AM is above all an idea; in my view it is essentially a more reasonable way of designing deep neural network structures, combined with a strategy for adjusting feature weights.
0x1: Dense Layer - Adding a Soft Attention Mechanism to a DNN Hidden Layer
In this section we illustrate the AM idea with a simple DNN.
Suppose we have a 32-dimensional input vector and we are designing a DNN to classify it.
Before writing any code, we inspect the data distribution and notice something interesting: one dimension of the feature vector plays a decisive role in determining the target. The input data look like this:
testing_inputs_1 (10 test samples, 32 dimensions each; note that dimension index 1 is always exactly 0.0 or 1.0, while every other dimension is random noise):
[[-7.03187310e-01  1.00000000e+00 -3.21814330e-01 ... -9.57614629e-01]
 [ 1.32900811e+00  0.00000000e+00  4.71557202e-01 ... -2.45641783e+00]
 [ 2.52307022e-01  1.00000000e+00 -1.58345465e+00 ... -3.15740222e-02]
 ...
 [-1.89902418e+00  0.00000000e+00 -2.82853143e-01 ... -3.52497951e-01]]
testing_outputs: the corresponding 10 binary labels
From this printout we can see:
1. A vector v of 32 values is the input to the model (a simple feedforward neural network).
2. v[1] = target.
3. The target is binary (either 0 or 1).
4. All the other values of the vector v (v[0] and v[2:]) are purely random and do not contribute to the target.
Following a rule-based or decision-tree approach, a model could achieve excellent performance simply by branching on this single feature.
The problem is that I do not want to use a decision tree: a decision tree is too "hard" and throws away much of the distributional information in the input, whereas the complex non-linear combinations of a deep DNN can fit the probability distribution in a "softer" way.
So how can we inject this prior knowledge into the model more effectively, i.e. force the model to pay more attention to the decisive feature dimension while relatively ignoring the others?
The answer is the attention model idea.
inputs = Input(shape=(input_dim,))
# ATTENTION PART STARTS HERE
attention_probs = Dense(input_dim, activation='softmax', name='attention_vec')(inputs)
attention_mul = Multiply()([inputs, attention_probs])
# ATTENTION PART FINISHES HERE
attention_mul = Dense(64)(attention_mul)
output = Dense(1, activation='sigmoid')(attention_mul)
model = Model(inputs=[inputs], outputs=output)
After the input layer we add a Dense layer with a softmax activation whose number of units equals the input dimensionality. The core purpose of this layer is to let the softmax pick out, from the input, the dimension that contributes most to the target.
The output of this attention layer is then merged (element-wise multiplied) with the input layer, and a DNN hidden layer makes the final decision on the combined result.
After training with backpropagation, the weights of the attention layer concentrate on the decisive dimension. The full script is as follows:
import numpy as np

from attention_utils import get_activations, get_data

np.random.seed(1337)  # for reproducibility

from keras.models import *
from keras.layers import Input, Dense, Multiply

input_dim = 32

def build_model():
    inputs = Input(shape=(input_dim,))
    # ATTENTION PART STARTS HERE
    attention_probs = Dense(input_dim, activation='softmax', name='attention_vec')(inputs)
    attention_mul = Multiply()([inputs, attention_probs])
    # ATTENTION PART FINISHES HERE
    attention_mul = Dense(64)(attention_mul)
    output = Dense(1, activation='sigmoid')(attention_mul)
    model = Model(inputs=[inputs], outputs=output)
    return model

def main():
    N = 10000
    inputs_1, outputs = get_data(N, input_dim)

    m = build_model()
    m.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    print(m.summary())
    m.fit([inputs_1], outputs, epochs=20, batch_size=64, validation_split=0.5)

    testing_inputs_1, testing_outputs = get_data(10, input_dim)
    print("testing_inputs_1", testing_inputs_1)
    print("testing_outputs", testing_outputs)

    # Attention vector corresponds to the second matrix.
    # The first one is the Inputs output.
    attention_vector = get_activations(m, testing_inputs_1,
                                       print_shape_only=True,
                                       layer_name='attention_vec')[0].flatten()
    print('attention =', attention_vector)

    # plot part.
    import matplotlib.pyplot as plt
    import pandas as pd
    pd.DataFrame(attention_vector, columns=['attention (%)']).plot(
        kind='bar', title='Attention Mechanism as a function of input dimensions.')
    plt.show()

if __name__ == '__main__':
    main()
As the resulting bar plot of the attention vector shows, v[1] receives the overwhelmingly dominant weight.
0x2: LSTM/GRU Layer
In this section we compare the attention weights obtained when the attention model layer is inserted before versus after an LSTM layer.
import numpy as np

from keras import backend as K
from keras.layers import Input, Multiply
from keras.layers.core import *
from keras.layers.recurrent import LSTM
from keras.models import *

from attention_utils import get_activations, get_data_recurrent

INPUT_DIM = 2
TIME_STEPS = 20
# if True, the attention vector is shared across the input_dimensions where the attention is applied.
SINGLE_ATTENTION_VECTOR = False
APPLY_ATTENTION_BEFORE_LSTM = False

def attention_3d_block(inputs):
    # inputs.shape = (batch_size, time_steps, input_dim)
    input_dim = int(inputs.shape[2])
    a = Permute((2, 1))(inputs)
    a = Reshape((input_dim, TIME_STEPS))(a)  # this line is not useful. It's just to know which dimension is what.
    a = Dense(TIME_STEPS, activation='softmax')(a)
    if SINGLE_ATTENTION_VECTOR:
        a = Lambda(lambda x: K.mean(x, axis=1), name='dim_reduction')(a)
        a = RepeatVector(input_dim)(a)
    a_probs = Permute((2, 1), name='attention_vec')(a)
    output_attention_mul = Multiply()([inputs, a_probs])
    return output_attention_mul

def model_attention_applied_after_lstm():
    inputs = Input(shape=(TIME_STEPS, INPUT_DIM,))
    lstm_units = 32
    lstm_out = LSTM(lstm_units, return_sequences=True)(inputs)
    attention_mul = attention_3d_block(lstm_out)
    attention_mul = Flatten()(attention_mul)
    output = Dense(1, activation='sigmoid')(attention_mul)
    model = Model(inputs=[inputs], outputs=output)
    return model

def model_attention_applied_before_lstm():
    inputs = Input(shape=(TIME_STEPS, INPUT_DIM,))
    attention_mul = attention_3d_block(inputs)
    lstm_units = 32
    attention_mul = LSTM(lstm_units, return_sequences=False)(attention_mul)
    output = Dense(1, activation='sigmoid')(attention_mul)
    model = Model(inputs=[inputs], outputs=output)
    return model

if __name__ == '__main__':
    N = 300000
    # N = 300 -> too few = no training
    inputs_1, outputs = get_data_recurrent(N, TIME_STEPS, INPUT_DIM)

    if APPLY_ATTENTION_BEFORE_LSTM:
        m = model_attention_applied_before_lstm()
    else:
        m = model_attention_applied_after_lstm()

    m.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    print(m.summary())
    m.fit([inputs_1], outputs, epochs=1, batch_size=64, validation_split=0.1)

    attention_vectors = []
    for i in range(300):
        testing_inputs_1, testing_outputs = get_data_recurrent(1, TIME_STEPS, INPUT_DIM)
        attention_vector = np.mean(get_activations(m,
                                                   testing_inputs_1,
                                                   print_shape_only=True,
                                                   layer_name='attention_vec')[0], axis=2).squeeze()
        print('attention =', attention_vector)
        assert (np.sum(attention_vector) - 1.0) < 1e-5
        attention_vectors.append(attention_vector)

    attention_vector_final = np.mean(np.array(attention_vectors), axis=0)

    # plot part.
    import matplotlib.pyplot as plt
    import pandas as pd
    pd.DataFrame(attention_vector_final, columns=['attention (%)']).plot(
        kind='bar', title='Attention Mechanism as a function of input dimensions.')
    plt.show()
1. Directly on the inputs (same as the Dense example above): APPLY_ATTENTION_BEFORE_LSTM = True
Applying attention directly to the input layer gives us an interpretable view of how important each dimension of the input feature space is.
2. After the LSTM layer: APPLY_ATTENTION_BEFORE_LSTM = False
Placing the attention layer after the LSTM lets the model's final decision focus more sharply, assigning most of the decision weight to the feature dimensions that genuinely help the final classification. However, at that point the dimensions fed into the attention layer belong to a feature space already abstracted by the LSTM, so their interpretability is comparatively poor.
Relevant Link:
https://github.com/philipperemy/keras-attention-mechanism
3. What Can the Attention Model Do for Security?
My understanding of the principles behind this model is still not very deep and I am still exploring it in practice. Here I describe some scenarios that have already been validated on large datasets in real projects. Corrections are welcome if anything is wrong.
0x1: Normal Files Containing Malicious Instructions
A very common scenario in offensive and defensive security is that malware or an attacker automatically injects malicious shellcode or malicious script code into an otherwise normal file. This adversarial technique raises several issues:
1. Traditional signature-based detection may not be affected, because it can still match the injected malicious code.
2. Anomaly-behaviour-based detection (e.g. sandbox replay) may be bypassed, because the API call sequence over the whole program's runtime can still present a normal pattern.
3. Deep-learning-based detection is challenged; a CNN may not be affected, but the required number and variety of training samples goes up.
4. TODO
attention
https://arxiv.org/abs/1706.03762
https://*.com/questions/42918446/how-to-add-an-attention-mechanism-in-keras?answertab=votes#tab-top
https://github.com/philipperemy/keras-attention-mechanism
https://gist.github.com/mbollmann/ccc735366221e4dba9f89d2aab86da1e