Friends, part three of the hands-on speech recognition series is here. Starting with this part we walk through building the network, again step by step, so don't miss it.

1. Reading the data

I won't expand on this part here; it was covered in the earlier data-processing article. This time we start from the format of the loaded data and go straight to feeding it into the network. Since the network is large and has many layers, we have to go through it layer by layer, and in this post we cover the embedding layer inside the encoder, i.e. the word-embedding layer. You can think of an embedding as a lookup: a table that maps id -> embedding. Given a token id, you can look up the corresponding character or word in vocab.txt (a, b, c, d, 你, 我, and so on), and you can also fetch that token's embedding vector, which lets you compute relationships between tokens such as distance or semantic similarity.
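
As a quick illustration of the id -> embedding lookup idea, here is a minimal, self-contained sketch with a made-up three-token vocabulary (the real table lives in vocab.txt and has 5537 entries, as the decoder summary below shows):

import paddle

# Hypothetical toy vocabulary: token -> id (the real mapping lives in vocab.txt).
vocab = {"你": 0, "我": 1, "好": 2}
embed = paddle.nn.Embedding(num_embeddings=len(vocab), embedding_dim=4)

ids = paddle.to_tensor([vocab["你"], vocab["我"]])    # look tokens up by id
vectors = embed(ids)                                  # shape: [2, 4]

# With the vectors in hand you can measure relations, e.g. cosine similarity.
sim = paddle.nn.functional.cosine_similarity(vectors[0:1], vectors[1:2])
print(vectors.shape, sim.numpy())

With that picture in mind, back to loading the audio: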

import soundfile

# Read the wav as 16-bit PCM; always_2d=True keeps a (samples, channels) shape (raw string avoids "\U" escapes).
audio, audio_sample_rate = soundfile.read(r"C:\Users\Desktop\asr16.wav", dtype="int16", always_2d=True)


import numpy as np

# Downmix to mono by averaging the channels, staying in int16.
audio = audio.mean(axis=1, dtype=np.int16)


def pcm16to32(audio):
    """Convert int16 PCM samples to float32 in [-1, 1)."""
    assert audio.dtype == np.int16
    audio = audio.astype("float32")
    bits = np.iinfo(np.int16).bits
    audio = audio / (2**(bits - 1))
    return audio

def pcm32to16(audio):
    """Convert float32 samples in [-1, 1) back to int16 PCM."""
    assert audio.dtype == np.float32
    bits = np.iinfo(np.int16).bits
    audio = audio * (2**(bits - 1))
    audio = np.round(audio).astype("int16")
    return audio

audio = pcm16to32(audio)
audio = pcm32to16(audio)
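
As a quick sanity check that the two conversions round-trip cleanly, here is a small stand-alone sketch on synthetic samples (it assumes nothing about the real recording):

import numpy as np

fake = np.array([-32768, -1, 0, 1, 32767], dtype=np.int16)   # extreme int16 sample values
restored = pcm32to16(pcm16to32(fake))
assert np.array_equal(fake, restored)                        # int16 -> float32 -> int16 is lossless here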


with open("transform.pkl", "rb") as tf:    
    preprocessing = pickle.load(tf)
audio = preprocessing(audio,**{"train":False}
audio = paddle.to_tensor(audio, dtype='float32').unsqueeze(axis=0)


2. Into the model

The data loading above is done, and the transform, i.e. the preprocessing, has been applied, so the features are in good shape to use. Which operations that pipeline performs was covered in an earlier article; flip back to it if you have forgotten. Next we feed the network. Before we do, note that audio now has shape (1, 363, 80). As mentioned before, that 363 matters, so keep it in mind for a moment; after the many reshapes that follow, the 363 will no longer appear as-is.
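
To see where the 363 goes, you can work the subsampling arithmetic out by hand. The two Conv2D layers in the summary shown in the next section use kernel 3 and stride 2 with Paddle's default zero padding, so each one roughly halves both the time and the feature axis. A small sketch of that calculation (the (1, 363, 80) shape is from this example; the kernel/stride values are read off the summary):

def conv_out(length, kernel=3, stride=2, padding=0):
    # Standard output-length formula for one convolution axis.
    return (length + 2 * padding - kernel) // stride + 1

t, f = 363, 80
for _ in range(2):                 # two stride-2 Conv2D layers
    t, f = conv_out(t), conv_out(f)

print(t, f)                        # 90, 19 -> the 363 time frames become 90
print(512 * f)                     # 9728   -> matches Linear(in_features=9728) in the summary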

(1) Inspecting the network

Before building anything, we should see what the network looks like. Loading the model was covered in the earlier post on network loading; here is the printed summary of its layers.

U2Model(
  (encoder): ConformerEncoder(
    (embed): Conv2dSubsampling4(
      (pos_enc): RelPositionalEncoding(
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
      )
      (conv): Sequential(
        (0): Conv2D(1, 512, kernel_size=[3, 3], stride=[2, 2], data_format=NCHW)
        (1): ReLU()
        (2): Conv2D(512, 512, kernel_size=[3, 3], stride=[2, 2], data_format=NCHW)
        (3): ReLU()
      )
      (out): Sequential(
        (0): Linear(in_features=9728, out_features=512, dtype=float32)
      )
    )
    (after_norm): LayerNorm(normalized_shape=[512], epsilon=1e-12)
    (encoders): LayerList(
      (0): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (1): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (2): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (3): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (4): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (5): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (6): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (7): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (8): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (9): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (10): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (11): ConformerEncoderLayer(
        (self_attn): RelPositionMultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
          (linear_pos): Linear(in_features=512, out_features=512, dtype=float32)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (feed_forward_macaron): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): Swish()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (conv_module): ConvolutionModule(
          (pointwise_conv1): Conv1D(512, 1024, kernel_size=[1], data_format=NCL)
          (depthwise_conv): Conv1D(512, 512, kernel_size=[15], padding=7, groups=512, data_format=NCL)
          (norm): LayerNorm(normalized_shape=[512], epsilon=1e-05)
          (pointwise_conv2): Conv1D(512, 512, kernel_size=[1], data_format=NCL)
          (activation): Swish()
        )
        (norm_ff): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_mha): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_ff_macaron): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_conv): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm_final): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear): Linear(in_features=1024, out_features=512, dtype=float32)
      )
    )
  )
  (decoder): TransformerDecoder(
    (embed): Sequential(
      (0): Embedding(5537, 512, sparse=False)
      (1): PositionalEncoding(
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
      )
    )
    (after_norm): LayerNorm(normalized_shape=[512], epsilon=1e-12)
    (output_layer): Linear(in_features=512, out_features=5537, dtype=float32)
    (decoders): LayerList(
      (0): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (src_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): ReLU()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (norm1): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm2): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm3): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear1): Linear(in_features=1024, out_features=512, dtype=float32)
        (concat_linear2): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (1): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (src_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): ReLU()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (norm1): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm2): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm3): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear1): Linear(in_features=1024, out_features=512, dtype=float32)
        (concat_linear2): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (2): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (src_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): ReLU()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (norm1): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm2): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm3): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear1): Linear(in_features=1024, out_features=512, dtype=float32)
        (concat_linear2): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (3): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (src_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): ReLU()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (norm1): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm2): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm3): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear1): Linear(in_features=1024, out_features=512, dtype=float32)
        (concat_linear2): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (4): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (src_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): ReLU()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (norm1): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm2): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm3): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear1): Linear(in_features=1024, out_features=512, dtype=float32)
        (concat_linear2): Linear(in_features=1024, out_features=512, dtype=float32)
      )
      (5): DecoderLayer(
        (self_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (src_attn): MultiHeadedAttention(
          (linear_q): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_k): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_v): Linear(in_features=512, out_features=512, dtype=float32)
          (linear_out): Linear(in_features=512, out_features=512, dtype=float32)
          (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
        )
        (feed_forward): PositionwiseFeedForward(
          (w_1): Linear(in_features=512, out_features=2048, dtype=float32)
          (activation): ReLU()
          (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
          (w_2): Linear(in_features=2048, out_features=512, dtype=float32)
        )
        (norm1): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm2): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (norm3): LayerNorm(normalized_shape=[512], epsilon=1e-12)
        (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
        (concat_linear1): Linear(in_features=1024, out_features=512, dtype=float32)
        (concat_linear2): Linear(in_features=1024, out_features=512, dtype=float32)
      )
    )
  )
  (ctc): CTCDecoderBase(
    (dropout): Dropout(p=0.0, axis=None, mode=upscale_in_train)
    (ctc_lo): Linear(in_features=512, out_features=5537, dtype=float32)
    (criterion): CTCLoss(
      (loss): CTCLoss()
    )
  )
  (criterion_att): LabelSmoothingLoss(
    (criterion): KLDivLoss()
  )
)
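
For reference, a layer tree like the one above is simply what Paddle prints for an nn.Layer, so assuming the U2Model has been loaded into a variable called model as in the earlier article (the variable name here is just illustrative), one line is enough:

# `model` is assumed to hold the loaded U2Model from the earlier post.
print(model)   # paddle.nn.Layer's repr prints the nested layer tree shown above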

This network is very long, so we have to break it apart and go through it piece by piece. Below we start with the first layer, Conv2dSubsampling4. It has two main parts, a convolution stack and a Linear layer, and we will implement it by hand.


Conv2dSubsampling4(
  (pos_enc): RelPositionalEncoding(
    (dropout): Dropout(p=0.1, axis=None, mode=upscale_in_train)
  )
  (conv): Sequential(
    (0): Conv2D(1, 512, kernel_size=[3, 3], stride=[2, 2], data_format=NCHW)
    (1): ReLU()
    (2): Conv2D(512, 512, kernel_size=[3, 3], stride=[2, 2], data_format=NCHW)
    (3): ReLU()
  )
  (out): Sequential(
    (0): Linear(in_features=9728, out_features=512, dtype=float32)
  )
)

(2) Building the network

import paddle

class Model(paddle.nn.Layer):
    def __init__(self):
        super(Model, self).__init__()
        # Two stride-2 convolutions, matching Conv2dSubsampling4 in the summary.
        self.conv2d_1 = paddle.nn.Conv2D(1, 512, kernel_size=[3, 3], stride=[2, 2], data_format="NCHW")
        self.conv2d_2 = paddle.nn.Conv2D(512, 512, kernel_size=[3, 3], stride=[2, 2], data_format="NCHW")
        # 9728 = 512 channels * 19 remaining feature bins after the two convolutions.
        self.lineone = paddle.nn.Linear(9728, 512)

    def forward(self, inputs):
        y = self.conv2d_1(inputs)
        y = paddle.nn.functional.relu(y)
        y = self.conv2d_2(y)
        y = paddle.nn.functional.relu(y)
        # (b, c, t, f) -> (b, t, c*f): flatten channels and features so each time step is one vector.
        b, c, t, f = paddle.shape(y)
        y = y.transpose([0, 2, 1, 3]).reshape([b, t, c * f])
        y = self.lineone(y)
        print(y.shape)
        print(y)
        return y

The code above builds the network with the Paddle framework, following the final summary. Why it is wired this way comes straight from the original Conv2dSubsampling4 implementation.


In the original forward pass, the order is clear: the convolutions run first, then the result is transposed and reshaped, and only then is it pushed into the Linear layer, so our hand-built version has to do the same.


model = Model()
audio1 = audio.unsqueeze(1) # (b, c=1, t, f)


x = model(audio1)


At this point, the network we wrote by hand lines up with the one in the source code, and the output shape is the same.
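
As a quick check of that shape claim (a small hedged sketch using the model and audio1 defined above; the expected numbers follow from the subsampling arithmetic worked out earlier):

print(x.shape)                          # expect [1, 90, 512]
assert list(x.shape) == [1, 90, 512]    # 363 -> 181 -> 90 time frames; 512*19 = 9728 in, 512 out
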
(3) Embedding

This part really is just a lookup table, so I won't write code for it; read the source directly. The main thing here is the positional embedding (the pos_enc, i.e. RelPositionalEncoding), and in the end several results are returned together.
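
To make "several results returned together" concrete, here is a simplified sketch of what a positional-encoding module of this kind typically does: scale the features and return them together with a sinusoidal position tensor of matching length. This is an illustrative approximation under those assumptions, not the exact PaddleSpeech RelPositionalEncoding code:

import math
import paddle

def rel_positional_encoding_sketch(x, d_model=512):
    """Illustrative sketch: return (scaled features, sinusoidal position embedding).

    The real module also applies the Dropout(p=0.1) shown in the summary,
    which is omitted here because it is a no-op at inference time.
    """
    t = x.shape[1]
    pos = paddle.arange(0, t, dtype='float32').unsqueeze(1)                   # (t, 1)
    div = paddle.exp(paddle.arange(0, d_model, 2, dtype='float32')
                     * (-math.log(10000.0) / d_model))                        # (d_model/2,)
    sin, cos = paddle.sin(pos * div), paddle.cos(pos * div)                   # (t, d_model/2) each
    pos_emb = paddle.stack([sin, cos], axis=-1).reshape([1, t, d_model])      # interleave -> (1, t, d_model)
    return x * math.sqrt(d_model), pos_emb                                    # two tensors, returned together

# e.g. feats, pos_emb = rel_positional_encoding_sketch(x)   # x is the (1, 90, 512) tensor from above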


After the positional embedding there is still quite a bit of processing left, but that's for next time.

3. Summary

All right, that was a lot, yet it only covers the first part of the network. There is much more to come, and we'll continue in the next installment.
