1. Something Doesn't Add Up
Following on from Part 1: it was noted there that when calling the C neural network model API generated by cube.AI, the input size is 24 and the output size is 4. However, Part 1 assumed we were collecting the x/y/z values of a three-axis accelerometer, with two .csv datasets recorded for two activities, Jogging and Walking, so in theory the input size should be 3 (one x/y/z sample) and the output size should be 2 (the Jogging and Walking classes).

Since the model API is generated by cube.AI from the trained neural network model, the discrepancy most likely comes from the parameter settings used when training the model, so let's go back and review the HAR training project.

Go to the "STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR" directory and run python .\RunMe.py -h to see the help for the parameter settings. Both --seqLength and --stepSize relate to the input and default to 24, so we can be sure that the input size of 24 seen when calling the API comes from here.
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR> python3 .\RunMe.py -h
Using TensorFlow backend.
usage: RunMe.py [-h] [--model MODEL] [--dataset DATASET] [--dataDir DATADIR]
                [--seqLength SEQLENGTH] [--stepSize STEPSIZE] [-m MERGE]
                [--preprocessing PREPROCESSING] [--trainSplit TRAINSPLIT]
                [--validSplit VALIDSPLIT] [--epochs N] [--lr LR]
                [--decay DECAY] [--batchSize N] [--verbose N]
                [--nrSamplesPostValid NRSAMPLESPOSTVALID]

Human Activity Recognition (HAR) in Keras with Tensorflow as backend on WISDM
and WISDM + self logged datasets

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         choose one of the two availavle choices, IGN or GMP, (
                        default = IGN )
  --dataset DATASET     choose a dataset to use out of two choices, WISDM or
                        AST, ( default = WISDM )
  --dataDir DATADIR     path to new data collected using STM32 IoT board
                        recorded at 26Hz as sampling rate, (default = )
  --seqLength SEQLENGTH
                        input sequence lenght (default:24)
  --stepSize STEPSIZE   step size while creating segments (default:24, equal
                        to seqLen)
  -m MERGE, --merge MERGE
                        if to merge activities (default: True)
  --preprocessing PREPROCESSING
                        gravity rotation filter application (default = True)
  --trainSplit TRAINSPLIT
                        train and test split (default = 0.6 (60 precent for
                        train and 40 precent for test))
  --validSplit VALIDSPLIT
                        train and validation data split (default = 0.7 (70
                        percent for train and 30 precent for validation))
  --epochs N            number of total epochs to run (default: 20)
  --lr LR               initial learning rate
  --decay DECAY         decay in learning rate, (default = 1e-6)
  --batchSize N         mini-batch size (default: 64)
  --verbose N           verbosity of training and test functions in keras, 0,
                        1, or 2. Verbosity mode. 0 = silent, 1 = progress bar,
                        2 = one line per epoch (default: 1)
  --nrSamplesPostValid NRSAMPLESPOSTVALID
                        Number of samples to save from every class for post
                        training and CubeAI conversion validation. (default =
                        2)
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>
As for the output size of 4, tracing the source code shows that it comes from PrepareDataset.py (the dataset preprocessing source file). Since the --dataset parameter defaults to WISDM, the classification outputs are 'Jogging', 'Stationary', 'Stairs', 'Walking', i.e. an output size of 4.
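These four labels map index for index onto the model's 4-element output vector, which is exactly the 4 floats returned by the C API. A minimal Python sketch of that idea (illustrative only, not the actual PrepareDataset.py code; the helper name is made up):

import numpy as np

# class order assumed to follow PrepareDataset.py for the WISDM dataset
CLASSES = ['Jogging', 'Stationary', 'Stairs', 'Walking']

def label_to_one_hot(label):
    # build one 4-element one-hot vector per activity label
    vec = np.zeros(len(CLASSES), dtype=np.float32)
    vec[CLASSES.index(label)] = 1.0
    return vec

print(label_to_one_hot('Jogging'))   # [1. 0. 0. 0.] -- same layout as the 4 outputs of the C model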
Let's now look more closely at why the HAR (Human Activity Recognition) example is set up this way.
2. Analysis of the HAR Training Project

Part 1 only collected data for the Jogging and Walking activities, but Stationary is usually the more common, default posture, so following the method from Part 1 we collect an additional set of Stationary log data.
Copy it into the HAR/Log_data directory in the same way; the data looks like this:
In PrepareDataset.py, read_dataset( self ) and preprocess_data( self, data ) handle the WISDM dataset, while get_data_from_file( self, fileName, preparedDataFileName ) and prepare_self_logged_data( self ) handle the self-collected data.

read_dataset reads the WISDM dataset file datasets/WISDM_ar_v1.1_raw.txt.

This data has already been converted and preprocessed; see the part underlined in red in the figure below.
When get_data_from_file( self, fileName, preparedDataFileName ) reads the self-collected dataset (.csv) files, it applies the same conversion and preprocessing to keep them consistent with the WISDM dataset.
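Conceptually, each board log has to end up in the same x/y/z plus Activity_Label layout as WISDM. The sketch below only illustrates that idea and is not the SENSING1 code: the column names of the logged .csv files and the label mapping are assumptions that must be checked against the actual log files.

import pandas as pd

def prepare_logged_files(file_label_pairs):
    # file_label_pairs: list of (csv path, activity label) tuples -- purely illustrative
    frames = []
    for file_name, label in file_label_pairs:
        df = pd.read_csv(file_name)
        # assumed raw column names; the real logs may differ
        df = df.rename(columns={'AccX': 'x', 'AccY': 'y', 'AccZ': 'z'})
        df['Activity_Label'] = label
        frames.append(df[['x', 'y', 'z', 'Activity_Label']])
    return pd.concat(frames, ignore_index=True)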
The project also writes the preprocessed self-collected dataset out in csv format alongside the trained model, in a file named after the source data directory, e.g. Log_data.csv.

Next comes the input size. In the FP-AI-SENSING1 example project, data is sampled continuously in real time at a fixed and very short interval (millisecond level).
So the get_segment_indices and get_data_segments functions turn the continuously collected data into input windows of length seqLength: each model input is a segment of 24 samples (each sample being 3 values, x/y/z), which matches the input buffer defined earlier, static ai_float in_data[AI_HAR_IGN_IN_1_SIZE = 24*3*1];, i.e. 72 float values in total. Clearly, the HAR project's approach of feeding the model a window of consecutive samples captures the continuity of human activity much better than a single sample would, and is closer to reality.
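The windowing itself can be pictured with a few lines of Python. This is a simplified sketch of the idea behind get_segment_indices/get_data_segments, not the SENSING1 implementation:

import numpy as np

def make_segments(samples, seq_length=24, step_size=24):
    # samples: (N, 3) array of x/y/z rows -> (nSegments, seq_length, 3) windows
    starts = range(0, len(samples) - seq_length + 1, step_size)
    return np.stack([samples[s:s + seq_length] for s in starts])

data = np.random.randn(2644, 3)   # stand-in for the logged x/y/z rows
segments = make_segments(data)    # shape (110, 24, 3): each window is the 24*3 = 72 floats fed to the model
print(segments.shape)

With seqLength = stepSize = 24 the windows do not overlap; a smaller stepSize would create overlapping windows and therefore more training segments from the same recording.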
Add a statement to read_dataset that prints the WISDM dataset it has just read.
Add a statement to prepare_self_logged_data that prints the self-collected dataset it has just read.
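The exact statements are not shown here; they are simply along these lines (the variable names are placeholders for whatever DataFrame each function builds):

# inside read_dataset(self), after the WISDM file has been parsed:
print(dataset)       # placeholder name -- dump the WISDM DataFrame that was just read

# inside prepare_self_logged_data(self), after the Log_data .csv files have been merged:
print(my_dataset)    # placeholder name -- dump the merged self-logged DataFrame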
With these print statements in place, the run will also show how the data is segmented according to the input length. Keeping seqLength and stepSize at their default value of 24 for now, run python3 .\RunMe.py --dataDir=Log_data:
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR> python3 .\RunMe.py --dataDir=Log_data
Using TensorFlow backend.
Running HAR on WISDM dataset, with following variables
merge = True
modelName = IGN,
segmentLength = 24
stepSize = 24
preprocessing = True
trainTestSplit = 0.6
trainValidationSplit = 0.7
nEpochs = 20
learningRate = 0.0005
decay =1e-06
batchSize = 64
verbosity = 1
dataDir = Log_data
nrSamplesPostValid = 2
         User Activity_Label     Arrival_Time         x          y           z
0          33        Jogging   49105962326000 -0.694638  12.680544  0.50395286;
1          33        Jogging   49106062271000  5.012288  11.264028  0.95342433;
2          33        Jogging   49106112167000  4.903325  10.882658 -0.08172209;
3          33        Jogging   49106222305000 -0.612916  18.496431   3.0237172;
4          33        Jogging   49106332290000 -1.184970  12.108489    7.205164;
...       ...            ...              ...       ...        ...         ...
1098199    19        Sitting  131623331483000  9.000000  -1.570000        1.69;
1098200    19        Sitting  131623371431000  9.040000  -1.460000        1.73;
1098201    19        Sitting  131623411592000  9.080000  -1.380000        1.69;
1098202    19        Sitting  131623491487000  9.000000  -1.460000        1.73;
1098203    19        Sitting  131623531465000  8.880000  -1.330000        1.61;

[1098204 rows x 6 columns]
Segmenting Train data
Segments built : 100%|███████████████████████████████████████████████████| 27456/27456 [00:28<00:00, 954.67 segments/s]
Segmenting Test data
Segments built : 100%|██████████████████████████████████████████████████| 18304/18304 [00:14<00:00, 1298.22 segments/s]
Segmentation finished!
preparing data file from all the files in directory Log_data
parsing data from IoT01-MemsAnn_11_Jan_23_16h_57m_17s.csv
parsing data from IoT01-MemsAnn_11_Jan_23_16h_57m_53s.csv
parsing data from IoT01-MemsAnn_26_Jan_23_15h_51m_01s.csv
             x         y         z Activity_Label
0    -1.965414 -0.143890  9.367359        Walking
1    -1.629783  0.664754  9.618931        Walking
2    -1.720833  0.384629  9.492079        Walking
3    -1.681419  0.534637  9.648173        Walking
4    -1.729849  0.421650  9.557259        Walking
...        ...       ...       ...            ...
2639 -1.171046  0.033572  9.746819     Stationary
2640 -1.212873  0.007256  9.759376     Stationary
2641 -1.212011  0.019485  9.753982     Stationary
2642 -1.172311 -0.004511  9.734770     Stationary
2643 -1.175431  0.035787  9.753447     Stationary

[2644 rows x 4 columns]
Segmenting the AI logged Train data
Segments built : 100%|████████████████████████████████████████████████████████| 67/67 [00:00<00:00, 2795.31 segments/s]
Segmenting the AI logged Test data
Segments built : 100%|████████████████████████████████████████████████████████| 45/45 [00:00<00:00, 2370.94 segments/s]
Segmentation finished!
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 9, 3, 24)          408
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 3, 3, 24)          0
_________________________________________________________________
flatten_1 (Flatten)          (None, 216)               0
_________________________________________________________________
dense_1 (Dense)              (None, 12)                2604
_________________________________________________________________
dropout_1 (Dropout)          (None, 12)                0
_________________________________________________________________
dense_2 (Dense)              (None, 4)                 52
=================================================================
Total params: 3,064
Trainable params: 3,064
Non-trainable params: 0
_________________________________________________________________
Train on 19288 samples, validate on 8233 samples
Epoch 1/20
2023-01-26 17:12:56.882726: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
19288/19288 [==============================] - 1s 54us/step - loss: 1.2022 - acc: 0.5290 - val_loss: 0.7089 - val_acc: 0.7409
Epoch 2/20
19288/19288 [==============================] - 1s 41us/step - loss: 0.7520 - acc: 0.7017 - val_loss: 0.5342 - val_acc: 0.7985
Epoch 3/20
19288/19288 [==============================] - 1s 41us/step - loss: 0.6079 - acc: 0.7571 - val_loss: 0.4573 - val_acc: 0.8153
Epoch 4/20
... ...
19288/19288 [==============================] - 1s 39us/step - loss: 0.3306 - acc: 0.8899 - val_loss: 0.2669 - val_acc: 0.9113
Epoch 20/20
19288/19288 [==============================] - 1s 40us/step - loss: 0.3194 - acc: 0.8913 - val_loss: 0.2646 - val_acc: 0.9168
12831/12831 [==============================] - 0s 22us/step
Accuracy for each class is given below.
Jogging    : 97.51 %
Stationary : 98.63 %
Stairs     : 70.94 %
Walking    : 81.55 %
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>
' G. x1 @! |+ d" i' ] 读取到的WISDM数据集大小是1098204≈24*(27456+18304),而27456+18304是按--trainSplit参数默认值0.6:0.4比例切分的。读取到的自行采集数据大小是2644≈24*(67+45)。; m; E0 p, T# l5 \ F& c
" O7 u) w; e3 W' T+ U/ n* O
/ B$ ?( u+ }" _$ L* p0 r
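A quick sanity check of those numbers (approximate, because incomplete windows at the end of each block are dropped):

seq_length = 24
train_split = 0.6

wisdm_rows = 1098204
print(wisdm_rows * train_split / seq_length)        # ~27455 -> 27456 train segments in the log
print(wisdm_rows * (1 - train_split) / seq_length)  # ~18303 -> 18304 test segments in the log

log_rows = 2644
print(log_rows * train_split / seq_length)          # ~66 -> 67 train segments in the log
print(log_rows * (1 - train_split) / seq_length)    # ~44 -> 45 test segments in the log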
3. Project Adjustments

From the analysis above, keeping the input size at the default of 24 is not really a problem (readers can adjust it and observe the effect for themselves), and since the WISDM dataset takes part in training, keeping the four output classes 'Jogging', 'Stationary', 'Stairs', 'Walking' is also fine. Note, however, that the input data consists of preprocessed, converted acceleration values, not the sensor's raw output values.
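In other words, the model expects physical acceleration in m/s² (after the conversion and gravity-rotation preprocessing), not raw sensor counts. Purely to illustrate that distinction, a raw-count to m/s² conversion might look like the sketch below; the sensitivity value is an assumption (a typical ±2 g MEMS setting), not taken from the SENSING1 sources, so use the value from your sensor's datasheet.

G = 9.80665                      # m/s^2 per g
SENSITIVITY_MG_PER_LSB = 0.061   # assumed +/-2 g full-scale setting, check the datasheet

def raw_to_ms2(raw_count):
    # raw accelerometer count -> physical acceleration in m/s^2
    return raw_count * SENSITIVITY_MG_PER_LSB / 1000.0 * G

print(raw_to_ms2(16384))         # a reading near +1 g comes out around 9.8 m/s^2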
Since the self-collected data now includes a Stationary data file, regenerate the C neural network model in cubeMX from the newly trained model file.

Then adjust the input data and the preprocessing function acquire_and_process_data to test with real data.
The function originally filled the input buffer with synthetic values:
int acquire_and_process_data(void *in_data, int factor)
{
    printf("in_data:");
    for (int i = 0; i < AI_HAR_IGN_IN_1_SIZE; i++)
    {
        switch (i % 3) {
        case 0:
            ((ai_float*)in_data)[i] = -175 + (ai_float)(i * factor * 1.2) / 10.0;
            break;
        case 1:
            ((ai_float*)in_data)[i] = 50 + (ai_float)(i * factor * 0.6) / 100.0;
            break;
        case 2:
            ((ai_float*)in_data)[i] = 975 - (ai_float)(i * factor * 1.8) / 100.0;
            break;
        default:
            break;
        }
        printf("%.4f ", ((ai_float*)in_data)[i]);
    }
    printf("\n");

    return 0;
}
Change it to:
ai_float in_buf[] =
{
    -1.9654135467252831,-0.14388957575400915,9.36735860765576,
    -1.6297827960727935,0.6647544204312931,9.618930851170278,
    -1.7208332169161387,0.38462856310059845,9.492079217583663,
    -1.6814190066807724,0.5346365723749872,9.648173176699613,
    -1.7298486510592452,0.42164964981080416,9.557259289818587,
    -1.7618787694384546,0.45864558999786653,9.653153776935605,
    -1.7410197123193858,0.4236369742675384,9.55486293595946,
    -1.7600076822930908,0.46214612481362705,9.594426626710453,
    -1.5761631958773263,0.3715109598910308,9.436853714636964,
    -1.5920827364351244,0.37070313540523914,9.66189484448469,
    -1.6178308849438598,0.37500917334673567,9.695719226290015,
    -1.4388296833472143,0.6108605310285585,9.464814699883437,
    -1.5651621282887258,0.5691273914891515,9.513897717476588,
    -1.4637992479412343,0.5105873209777632,9.501636895304161,
    -0.6794677157685166,0.5024637601753793,8.96404801376064,
    0.2600149177042748,0.6699546179356337,8.903349009412763,
    1.0712686735261918,1.4889662656074603,9.520348132500752,
    0.3914123345764725,1.4210706041563634,10.557387805652848,
    1.0779003359396493,1.0582703827741018,10.454469820960814,
    0.12433283758079197,-0.27273511643713033,10.328552286632643,
    -0.010219096051988997,0.2961821896002729,9.483084545625971,
    -1.6910112286007235,-0.2898761724876157,9.704755735796937,
    -2.693651827312974,-0.41126025575408387,9.825328217800239,
    -2.8416981790648177,-0.14586229740441406,9.880552703938179
};

int acquire_and_process_data(void *in_data, int factor)
{
    printf("in_data:");
    for (int i = 0; i < AI_HAR_IGN_IN_1_SIZE; i++)
    {
        ((ai_float*)in_data)[i] = in_buf[i];
        printf("%.4f ", ((ai_float*)in_data)[i]);
    }
    printf("\n");

    return 0;
}
Rebuild and download to the board, open the serial terminal and enter test*. The result below clearly differs a lot from what we expected:
4. Troubleshooting the Incorrect Test Result

The result above does not seem to match the classes defined in the HAR project's PrepareDataset.py, so we need to analyse how those classes map onto the generated C model, and whether the data used for testing actually meets the model's requirements.
To do this, go back to the HAR training project and append the following statements to RunMe.py:
print("TestX:")
print(TestX[0:1])
print("TestY:")
print(TestY[0:1])
This gives us real sample data for testing. Run python3 .\RunMe.py --dataDir=Log_data again; the final part of the output is as follows:
TestX:
[[[[-3.64077855e-16]
   [-4.69612372e-16]
   [ 3.68092310e-10]]

  [[ 1.02756571e+01]
   [-1.14305305e+01]
   [ 2.61872125e+01]]

  [[-2.84181689e+00]
   [-3.54747048e+00]
   [-5.51206446e+00]]

  [[-3.82102513e+00]
   [-1.41233186e+01]
   [-4.59900586e+00]]

  [[ 6.68010824e+00]
   [ 9.39457601e+00]
   [-2.96397789e+00]]

  [[-1.71771782e+01]
   [ 1.19374149e+01]
   [ 3.05770680e+00]]

  [[-6.65782005e+00]
   [ 2.39062819e+00]
   [ 3.22844912e+00]]

  [[ 4.59021292e+00]
   [-6.27548028e+00]
   [-4.92783556e+00]]

  [[ 8.03018658e+00]
   [-2.72208600e+00]
   [-6.35796053e+00]]

  [[ 7.73164454e+00]
   [-6.31879160e+00]
   [-5.90723810e+00]]

  [[ 8.53803514e-01]
   [-9.75763211e+00]
   [ 1.02466115e+01]]

  [[ 1.11299171e+01]
   [-1.70658346e+01]
   [ 2.18511283e+01]]

  [[ 3.92044994e-01]
   [ 5.94768181e+00]
   [ 4.30131750e+00]]

  [[-5.61807988e+00]
   [ 1.97310400e+01]
   [-2.22512540e+00]]

  [[ 3.86836548e+00]
   [ 1.71617325e+00]
   [-5.86292387e+00]]

  [[ 7.65913325e+00]
   [-7.19628424e+00]
   [ 2.01628025e+00]]

  [[-7.52357836e+00]
   [ 3.68102584e+00]
   [-1.22753233e+01]]

  [[-5.12351958e+00]
   [ 1.23941669e+01]
   [-1.77385540e+00]]

  [[-4.86155823e-01]
   [ 1.26333902e+01]
   [ 5.93595914e+00]]

  [[-1.96569165e+01]
   [ 1.00467317e+01]
   [ 9.47374003e+00]]

  [[-4.34050581e+00]
   [ 5.16311148e-01]
   [-5.63004156e-01]]

  [[-3.57974669e+00]
   [ 4.87240857e-01]
   [-9.38271247e-01]]

  [[ 6.11930536e+00]
   [ 5.99067573e+00]
   [-7.68834262e+00]]

  [[ 1.12153409e+01]
   [ 2.37168199e+00]
   [-7.40963357e+00]]]]
TestY:
[[1. 0. 0. 0.]]
PS D:\tools\arm_tool\STM32CubeIDE\STM32CubeFunctionPack_SENSING1_V4.0.3\Utilities\AI_Ressources\Training Scripts\HAR>
So for this one 24×3 input segment the expected output is [1. 0. 0. 0.]. Now use the same data to test the C neural network model's API call.
ai_float in_buf[] =
{
    -3.64077855e-16,-4.69612372e-16,3.68092310e-10,
    1.02756571e+01,-1.14305305e+01,2.61872125e+01,
    -2.84181689e+00,-3.54747048e+00,-5.51206446e+00,
    -3.82102513e+00,-1.41233186e+01,-4.59900586e+00,
    6.68010824e+00, 9.39457601e+00,-2.96397789e+00,
    -1.71771782e+01,1.19374149e+01,3.05770680e+00,
    -6.65782005e+00,2.39062819e+00,3.22844912e+00,
    4.59021292e+00,-6.27548028e+00,-4.92783556e+00,
    8.03018658e+00,-2.72208600e+00,-6.35796053e+00,
    7.73164454e+00,-6.31879160e+00,-5.90723810e+00,
    8.53803514e-01,-9.75763211e+00, 1.02466115e+01,
    1.11299171e+01,-1.70658346e+01, 2.18511283e+01,
    3.92044994e-01, 5.94768181e+00, 4.30131750e+00,
    -5.61807988e+00, 1.97310400e+01,-2.22512540e+00,
    3.86836548e+00, 1.71617325e+00,-5.86292387e+00,
    7.65913325e+00,-7.19628424e+00, 2.01628025e+00,
    -7.52357836e+00, 3.68102584e+00,-1.22753233e+01,
    -5.12351958e+00, 1.23941669e+01,-1.77385540e+00,
    -4.86155823e-01, 1.26333902e+01, 5.93595914e+00,
    -1.96569165e+01, 1.00467317e+01, 9.47374003e+00,
    -4.34050581e+00, 5.16311148e-01,-5.63004156e-01,
    -3.57974669e+00, 4.87240857e-01,-9.38271247e-01,
    6.11930536e+00, 5.99067573e+00,-7.68834262e+00,
    1.12153409e+01, 2.37168199e+00,-7.40963357e+00
};

int acquire_and_process_data(void *in_data, int factor)
{
    printf("in_data:");
    for (int i = 0; i < AI_HAR_IGN_IN_1_SIZE; i++)
    {
        ((ai_float*)in_data)[i] = in_buf[i];
        printf("%.4f ", ((ai_float*)in_data)[i]);
    }
    printf("\n");

    return 0;
}
Compile and download the program again and test via the serial terminal. The output is [0.943687 0.000000 0.000294 0.056019], while the reference output is [1. 0. 0. 0.]; they clearly correspond, the difference is only a matter of precision:
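A quick way to check that both vectors agree on the class even though the probabilities are not exactly 1 and 0 (the class order below is assumed to match PrepareDataset.py):

import numpy as np

classes = ['Jogging', 'Stationary', 'Stairs', 'Walking']
board_out = np.array([0.943687, 0.000000, 0.000294, 0.056019])  # printed by the firmware
reference = np.array([1.0, 0.0, 0.0, 0.0])                      # TestY[0] from RunMe.py

print(classes[int(board_out.argmax())], classes[int(reference.argmax())])  # Jogging Jogging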
5. Precision Issues

Back in the HAR training project: as mentioned earlier, each training run writes its results into a directory named after the run time:
It outputs the training acc/loss curves and the confusion matrix. Looking at them, the training data is still dominated by the WISDM dataset, and the confusion matrix also shows why the earlier test produced [0.943687 0.000000 0.000294 0.056019]: during training, a portion of the 'Jogging' samples is classified as 'Stairs' or 'Walking'.
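The per-class accuracies in the training log are simply the diagonal of a row-normalised confusion matrix. The matrix below uses made-up numbers (it is not the one from this run), but it shows how 'Jogging' rows leaking into the 'Stairs' and 'Walking' columns produce exactly this kind of soft output:

import numpy as np

classes = ['Jogging', 'Stationary', 'Stairs', 'Walking']
cm = np.array([[975,   0,   5,  20],    # true Jogging, partly predicted as Stairs/Walking
               [  2, 986,   4,   8],    # true Stationary
               [ 60,  10, 710, 220],    # true Stairs
               [120,   5,  60, 815]])   # true Walking (illustrative numbers only)

per_class_acc = cm.diagonal() / cm.sum(axis=1)
for name, acc in zip(classes, per_class_acc):
    print("%-10s : %.2f %%" % (name, 100 * acc))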
The training log shows:
Modify RunMe.py once more, adding the following print statements to check the sizes of the training, validation and test sets:
print("TrainX:"+str(len(TrainX)))
print("TrainY:"+str(len(TrainY)))
print("ValidationX:"+str(len(ValidationX)))
print("ValidationY:"+str(len(ValidationY)))
print("TestX:"+str(len(TestX)))
print("TestY:"+str(len(TestY)))
Run python3 .\RunMe.py --dataDir=Log_data again:

In real projects, continuously refining the algorithm and model is usually the main lever, and ST and its partners have presumably already done a good enough job on the model itself. The reasons why this article's test still shows such a large precision gap are most likely the following:

1) The datasets used here for training were not collected in sufficient quantity from the same device, in the same batch, with a unified collection method.
2) The amount of self-collected data is small, and its collection method may differ from WISDM's, introducing accuracy problems; mixing it into the training data can still degrade the trained model's accuracy (compared with using the WISDM dataset alone).
3) Only the default parameters suggested by the HAR training project were used. In a real project we may need to keep trying different parameter settings, based on the dataset and the deployment environment, when training and using the neural network model.
4) Although cube.AI's conversion of the Keras-trained neural network model into a C model loses very little, a certain amount of precision is still lost.
Parts 1 and 2 used the FP-AI-SENSING1 example and its companion B-L475E-IOT01A board to work with the cube.AI package. When using cube.AI in practice, we need to start from our project's actual hardware platform: preprocess the data collected on that platform, build, train, validate and test the neural network model, then convert the trained model into a C-supported AI model with cubeMX and cube.AI, and load it from the embedded program (AI model + cube.AI) to run neural network inference on the device or at the edge. Please read Part 3.
————————————————
Copyright: py_free-物联智能
If there is any infringement, please contact us for removal.