Background


Google has just released MoveNet, a model that detects human poses, along with a corresponding TensorFlow.js API. Officially, MoveNet detects 17 keypoints of the human body very quickly and accurately; in addition, through a partnership with IncludeHealth, the company will explore whether MoveNet can help deliver remote care to patients.

Full article (domestic mirror): Google Research releases the MoveNet motion-detection tool and a TensorFlow.js API

Full article (original): Google Research releases the MoveNet motion-detection tool and a TensorFlow.js API

Steps


  1. Import the TensorFlow third-party plugin
  2. Configure package.json with the required dependencies
  3. Build npm and pull in the dependency packages
  4. Download the model from its overseas host, put it on your own server, and whitelist the domain
  5. Use MoveNet with valid input to recognize the video stream locally, get back the 17 keypoints, and render them with canvas
  6. Finally, hand the keypoints to a Python server for recognition

1. Third-party plugin

First, go to the WeChat Mini Program admin platform and add the TensorFlowJS plugin (AppID: wx6afed118d9e81df9) to your Mini Program.
Steps: Settings -> Third-party settings -> Plugin management -> Add plugin -> enter the AppID
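
The plugin also has to be declared in app.json. A minimal sketch (the version number here is an assumption; use the latest version the console shows):

// app.json (excerpt)
"plugins": {
  "tfjsPlugin": {
    "version": "0.2.0",
    "provider": "wx6afed118d9e81df9"
  }
}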



2. Package configuration

The core packages are the TensorFlow.js trio tfjs-core, tfjs-converter, and tfjs-backend-webgl, plus fetch-wechat for networking, pose-detection for the model itself (MoveNet lives in this library), and @mediapipe/pose (a dependency of pose-detection).


{
  "name": "ai-action",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@mediapipe/pose": "^0.3.1621277220",
    "@tensorflow-models/pose-detection": "0.0.3",
    "@tensorflow/tfjs-backend-webgl": "^3.6.0",
    "@tensorflow/tfjs-converter": "^3.6.0",
    "@tensorflow/tfjs-core": "^3.6.0",
    "fetch-wechat": "0.0.3"
  }
}

Note: it is best to create package.json and install the dependencies from the command line, since doing it by hand can easily go wrong.
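
For example, one possible command sequence (versions matching the package.json above):

npm init -y
npm install @mediapipe/pose@0.3.1621277220 @tensorflow-models/pose-detection@0.0.3 @tensorflow/tfjs-core@3.6.0 @tensorflow/tfjs-converter@3.6.0 @tensorflow/tfjs-backend-webgl@3.6.0 fetch-wechat@0.0.3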

3. Build npm

Once package.json is in place, install the dependencies with npm or yarn:


  npm install

Once node_modules is ready, enable npm module support for the project in WeChat DevTools and build npm, as shown below.
(screenshot: enabling "Use npm modules")
(screenshot: running "Build npm")

4. Download the model and whitelist the domain


The model links inside the dependency packages point to hosts outside China and need a VPN to reach, so the only workable option is to download the model and host it on your own server.
Fortunately the TensorFlow site has a dedicated model download page. The download count there is surprisingly high while public references are almost nonexistent, which suggests everyone is quietly building pose features.

Download link for the movenet/singlepose/lightning single-person pose model (VPN required)


The download is an archive; unpacked it is roughly 10 MB: one model.json file and three .bin files.

Note: when you put the files on your server, do not rename the .bin files; model.json can be named anything, because it is model.json that references the .bin weight shards by filename.
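
For reference, model.json points at the shards roughly like this (the shard names below are illustrative; keep whatever names your download contains):

// model.json (excerpt)
"weightsManifest": [{
  "paths": ["group1-shard1of3.bin", "group1-shard2of3.bin", "group1-shard3of3.bin"],
  "weights": [ /* tensor metadata */ ]
}]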


After the model has been downloaded and uploaded to your server, add that domain to the Mini Program's request whitelist (合法域名) in the admin console.

5. Using MoveNet on the front end

1. First, register the TensorFlow packages

// app.js
var fetchWechat = require('fetch-wechat');
var tf = require('@tensorflow/tfjs-core');
var webgl = require('@tensorflow/tfjs-backend-webgl');
var plugin = requirePlugin('tfjsPlugin');
App({
  onLaunch() {
    plugin.configPlugin({ // register tfjs-core, the WebGL backend, and the fetch polyfill with the plugin
      fetchFunc: fetchWechat.fetchFunc(),
      tf,
      webgl,
      canvas: wx.createOffscreenCanvas(),
      backendName: 'wechat-webgl-' + Date.now()
    });
  },
  globalData: {
    movenet: null, // the detector is created once and shared across pages
  }
})

2. Loading the model on the page

var app = getApp()
var poseDetection = require('@tensorflow-models/pose-detection')

loadMoveNet() {
  if (app.globalData.movenet) return false // already created
  var modelUrl = 'https://oss.lanniuh.com/actionRecognition/movenet/lightning/model.json', // the self-hosted model from step 4
      detectorConfig = {
        modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING,
        modelUrl: modelUrl,
      };
  poseDetection.createDetector(poseDetection.SupportedModels.MoveNet, detectorConfig).then(function (detector) {
    app.globalData.movenet = detector // share the detector app-wide
  }).catch(function (err) {
    console.log(err)
  })
},

3. Detecting and drawing the keypoints on the page

  <camera id="camera" class="camera" flash="off" device-position="front" resolution="medium" frame-size="medium"></camera>
  <canvas class="camera canvas" type="2d" id="myCanvas"></canvas>
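
The canvas is stacked on top of the camera preview so the skeleton can be drawn over the live image. A possible WXSS sketch (the original post does not include the stylesheet, so the layout below is an assumption):

/* page .wxss sketch: stack the canvas over the camera preview */
.camera {
  position: absolute;
  top: 0;
  left: 0;
  width: 480px;
  height: 640px;
}
.canvas {
  z-index: 1; /* drawn above the camera; type="2d" canvas supports same-layer rendering */
}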
cameraFrame() { // consume the camera video stream
    var that = this,
        store = [],
        startTime = new Date(),
        camera = wx.createCameraContext()
    that.listener = camera.onCameraFrame(function (frame) {
        if (frame && app.globalData.movenet) { // buffer frames only once the model is ready
            store.push(frame)
        }
    })
    that.listener.start({
        success: function () {
            that.flagTimer && clearInterval(that.flagTimer)
            that.flagTimer = setInterval(function () { // frame-rate control: process only the newest buffered frame
                if (store.length == 0) return;
                var object = {
                    data: new Uint8Array(store[store.length - 1].data),
                    height: Number(store[store.length - 1].height),
                    width: Number(store[store.length - 1].width)
                }
                that.actionSend(object)
                store = []
                that.setData({
                    resultFps: that.data.resultFps + 1,
                    fpstime: parseInt((that.data.resultFps + 1) * 1000 / (new Date().getTime() - startTime)) // average detection FPS so far
                })
            }, 1000 / that.data.fps)
        },
    })
},
  
actionSend(object) { // run detection on one frame and draw the result
    var that = this
    app.globalData.movenet.estimatePoses(object).then(function (res) {
        if (!res || !res.length) return // no pose detected in this frame
        var ctx = that.ctx,
            keypoints = res[0].keypoints
        ctx.clearRect(0, 0, that.canvas.width, that.canvas.height)
        that.drawSkeleton(keypoints)
        that.drawKeypoints(keypoints)
    }).catch(function (err) {
        console.log(err)
    });
},

drawSkeleton(keypoints) { // connect the keypoints into a skeleton
    // head
    this.drawSegment(keypoints[0], keypoints[1]);
    this.drawSegment(keypoints[0], keypoints[2]);
    this.drawSegment(keypoints[1], keypoints[3]);
    this.drawSegment(keypoints[2], keypoints[4]);
    // arms
    this.drawSegment(keypoints[10], keypoints[8]);
    this.drawSegment(keypoints[8], keypoints[6]);
    this.drawSegment(keypoints[6], keypoints[5]);
    this.drawSegment(keypoints[5], keypoints[7]);
    this.drawSegment(keypoints[7], keypoints[9]);
    // torso and legs
    this.drawSegment(keypoints[6], keypoints[12]);
    this.drawSegment(keypoints[12], keypoints[11]);
    this.drawSegment(keypoints[11], keypoints[5]);
    this.drawSegment(keypoints[12], keypoints[14]);
    this.drawSegment(keypoints[14], keypoints[16]);
    this.drawSegment(keypoints[11], keypoints[13]);
    this.drawSegment(keypoints[13], keypoints[15]);
},
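
The numeric indices above follow MoveNet's fixed keypoint order, listed here for reference:

// MoveNet's 17 keypoints, in index order
var KEYPOINT_NAMES = [
  'nose', 'left_eye', 'right_eye', 'left_ear', 'right_ear',
  'left_shoulder', 'right_shoulder', 'left_elbow', 'right_elbow',
  'left_wrist', 'right_wrist', 'left_hip', 'right_hip',
  'left_knee', 'right_knee', 'left_ankle', 'right_ankle'
]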

drawSegment(a, b) { // draw a line between two keypoints
     // pose-detection returns each keypoint as a {x, y, score, name} object
     this.ctx.beginPath();
     this.ctx.moveTo(a.x, a.y);
     this.ctx.lineTo(b.x, b.y);
     this.ctx.lineWidth = 3;
     this.ctx.strokeStyle = '#ffffff';
     this.ctx.stroke();
 },
 
 drawKeypoints(keypoints) { // draw every keypoint
     for (var i = 0; i < keypoints.length; i++) {
         var keypoint = keypoints[i];
         this.drawPoint(keypoint.y, keypoint.x);
     }
 },

drawPoint(y, x) { // draw one dot on the canvas
     this.ctx.beginPath();
     this.ctx.arc(x, y, 4, 0, 2 * Math.PI, false);
     this.ctx.lineWidth = 2
     this.ctx.strokeStyle = '#ffffff'
     this.ctx.fillStyle = '#00ff66'
     this.ctx.fill();
     this.ctx.stroke()
 },
 
 canvasInit() { // grab the canvas node and scale for the device pixel ratio
   var that = this
    wx.createSelectorQuery().select('#myCanvas')
        .fields({ node: true, size: true })
        .exec(function (res) {
            var canvas = res[0].node
            var ctx = canvas.getContext('2d')
            var dpr = wx.getSystemInfoSync().pixelRatio
            canvas.width = 480 * dpr
            canvas.height = 640 * dpr
            ctx.scale(dpr, dpr)
            that.ctx = ctx
            that.canvas = canvas
            that.res0 = res[0]
        })
 },
  
onLoad: function (options) {
    this.canvasInit()
    this.loadMoveNet() // kick off model loading (see step 2 above); without this the detector is never created
    this.cameraFrame()
},

6. Python-side recognition

Finally, send the raw 17 keypoints to an endpoint on the Python server, take the returned values, and do the corresponding processing; that is enough to build real features on top.

For example, the code below implements exercise-rep counting:

apiRead(keypoints) {
    var that = this
    network.ajax_ai('/train/upload_landmarks', { // network.ajax_ai is this project's own request wrapper
        user_id: app.globalData.userInfo.id,
        engine: "movenet",
        data: {
            keypoints: keypoints
        },
    }, function (res) {
        that.setData({
            poseCount: res.data.count || 0,
            poseScore: res.data.score || 0,
        })
        if (res.data.state == 'finish') {
            that.endPractise()
        }
    }, function (error) {
        console.log(error)
    })
},
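
For reference, the request body and response implied by the code above look roughly like this (field values are illustrative; the actual contract is whatever your Python service defines):

// request body sent to /train/upload_landmarks
{
  "user_id": 42,
  "engine": "movenet",
  "data": {
    "keypoints": [ { "x": 231.5, "y": 118.2, "score": 0.87, "name": "nose" } /* ...17 points total */ ]
  }
}

// expected response (res.data); 'state' becomes 'finish' when the set is complete
{ "count": 3, "score": 95, "state": "running" }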

Summary


Compared with its predecessor PoseNet, Google's new MoveNet is noticeably more accurate and smoother.

PoseNet treats each inference independently, frame by frame, i.e., it works at the image level, so its accuracy can fall short.

MoveNet, by contrast, makes symmetry-based guesses and caches earlier frames to inform predictions for the frames that follow, i.e., it works at the video level, so its accuracy is comparatively strong.

MoveNet has only been out for about a month, so there are still plenty of pitfalls. Crucially, there is no domestic mirror of the model, so you have to download and host it yourself... I will leave it at that.

Open issues


  1. The MoveNet model failed to compile on Huawei phones, which broke recognition. Resolved (those devices currently fall back to PoseNet).
  2. The model is over 10 MB, so the Mini Program cannot cache it and every load is a little slow. (The latest model is 5 MB; a caching sketch follows below.)
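
For issue 2, once the model fits within the storage limit, one possible mitigation is caching it locally with the tfjs plugin's fileStorageIO handler. A rough sketch, with two assumptions to verify against your plugin version: that the plugin exposes fileStorageIO, and that detectorConfig.modelUrl accepts a tf.io handler in place of a URL (as tf.loadGraphModel does):

var plugin = requirePlugin('tfjsPlugin');
var modelUrl = 'https://oss.lanniuh.com/actionRecognition/movenet/lightning/model.json';
// an IO handler backed by the Mini Program file system (assumption: fileStorageIO exists in your plugin version)
var handler = plugin.fileStorageIO('movenet-lightning', wx.getFileSystemManager());

// first run: fetch from the network, then persist locally
tf.loadGraphModel(modelUrl).then(function (model) {
  return model.save(handler);
});

// later runs: hand the cached model to the detector
// (assumption: detectorConfig.modelUrl also accepts an IOHandler)
poseDetection.createDetector(poseDetection.SupportedModels.MoveNet, {
  modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING,
  modelUrl: handler
});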