{"id":178,"date":"2016-11-08T02:56:17","date_gmt":"2016-11-08T02:56:17","guid":{"rendered":"http:\/\/imalogic.com\/blog\/?p=178"},"modified":"2016-11-08T04:15:27","modified_gmt":"2016-11-08T04:15:27","slug":"reconnaissance-vocale-etude-dun-engine-monolocuteur-approche-globale","status":"publish","type":"post","link":"https:\/\/imalogic.com\/blog\/2016\/11\/08\/reconnaissance-vocale-etude-dun-engine-monolocuteur-approche-globale\/","title":{"rendered":"Speech recognition: Study of a single-speaker engine &#8211; Global approach"},"content":{"rendered":"<body><p><\/p>\n<h1>Study of a single-speaker engine \u2013 Global approach<\/h1>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/PRINCIPE.gif?ssl=1\"><img class=\"alignnone size-full wp-image-140\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/PRINCIPE.gif?resize=800%2C375&#038;ssl=1\" alt=\"PRINCIPE\" width=\"800\" height=\"375\"><\/a><\/p>\n<p>In general, four fundamental modules can be distinguished in a speech-processing system.<\/p>\n<ul>\n<li><strong>The speech-analysis module (front end)<\/strong>, which extracts the characteristics of the speech signal as it is produced, never as it is understood; that role is reserved for recognition.<\/li>\n<li><strong>The training module<\/strong>, which builds the machine's dictionary of acoustic references. In the analytic approach, the computer asks the user to utter sentences that are often devoid of meaning but have the advantage of containing particular successions of phonemes. In the global approach, it is words that must be uttered to build the dictionary. In a multi-speaker system this phase does not exist; that is the main difference (a predefined dictionary of acoustic references, assumed representative of the target speakers, is used instead).<\/li>\n<li><strong>The recognition module<\/strong>, whose task is to decode the information carried by the speech signal from the data supplied by the analysis. Two kinds of recognition are fundamentally distinguished, depending on the information to be extracted from the signal: speaker recognition, whose goal is to identify who is speaking, and speech recognition, which aims instead to recognise what is being said. This module returns a list of \u201cbest solutions\u201d in terms of recognised words, each solution characterised by a \u201cscore\u201d.<\/li>\n<li><strong>The rejection module<\/strong>, which in some cases discards the solution(s) supplied by the recognition module. A solution may be classed as \u201cgarbage\u201d, as too ambiguous when the score of the first solution is too close to the score of the second, or as unsatisfactory when its score is simply too low.<\/li>\n<\/ul>\n<h1><strong>The speech-analysis module \u2013 Front end \u2013 Feature extraction<\/strong><\/h1>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/analyse.jpg?ssl=1\"><img class=\"alignnone wp-image-112\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/analyse.jpg?resize=468%2C374&#038;ssl=1\" alt=\"analyse\" width=\"468\" height=\"374\" \/><\/a><\/p>\n<p>The front end of a speech engine is generally composed of several sub-modules that transform, under the best possible conditions, a chunk of audio samples into acoustic fingerprints whose main property is that they are easier to identify. The following sub-modules can be distinguished:<\/p>\n<p><strong>Pre-emphasis:<\/strong> Because of the microphone, amplifiers and analogue filters, the signal contains additional noise that we want to remove.<\/p>\n<p>A pre-emphasis filter is generally applied; it flattens the low frequencies without touching the high ones, and does so linearly.<\/p>\n<p><strong>Hamming:<\/strong> The Hamming window is used to minimise the FFT edge effects caused by cutting the audio source into frames.<\/p>\n<p>When the signal has discontinuities (a square wave, for example), limiting the number of terms produces overshoots at the transitions: this is the <strong>Gibbs phenomenon<\/strong>. It can be attenuated by increasing the number of terms of the decomposition, or by using a windowing technique. 
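As an illustration of the pre-emphasis and windowing steps above, here is a minimal Python sketch. The 0.97 filter coefficient, the 256-sample frames and the time-domain form of the Hamming window, 0.54 - 0.46*cos(2*pi*k/(N-1)), are common textbook choices, not values prescribed by the engine described here.

```python
import math

def preemphasis(x, a=0.97):
    # y[n] = x[n] - a*x[n-1]: relatively boosts the high frequencies
    return [x[0]] + [x[n] - a * x[n - 1] for n in range(1, len(x))]

def hamming(n_samples):
    # Time-domain Hamming window, tapering the frame edges
    return [0.54 - 0.46 * math.cos(2 * math.pi * k / (n_samples - 1))
            for k in range(n_samples)]

def frame_and_window(x, frame_len=256, hop=128):
    # Cut the signal into overlapping frames and apply the window to each
    w = hamming(frame_len)
    frames = []
    for start in range(0, len(x) - frame_len + 1, hop):
        frames.append([x[start + i] * w[i] for i in range(frame_len)])
    return frames
```

Each windowed frame would then be passed to the FFT stage described below.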
The N coefficients of the truncated decomposition are then weighted by a coefficient that varies with the type of window used.<\/p>\n<ul>\n<li>A simple window to apply is <strong>Fejer<\/strong>'s: harmonic k is simply multiplied by the coefficient <strong>(N-k)\/N<\/strong>.<\/li>\n<li>Another very widely used window is <strong>Hamming<\/strong>'s, whose coefficient is <strong>0.54 + 0.46\u00b7cos(k\u03c0\/N)<\/strong>.<\/li>\n<\/ul>\n<p><strong>FFT \u2013 Fast Fourier Transform:<\/strong> Application of the optimised discrete Fourier transform algorithm (conversion to the spectral domain).<\/p>\n<p><strong>Mel-scale triangular filters:<\/strong><\/p>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/BARK.jpg?ssl=1\"><img class=\"alignnone size-medium wp-image-121\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/BARK.jpg?resize=300%2C150&#038;ssl=1\" alt=\"BARK\" width=\"300\" height=\"150\" \/><\/a><\/p>\n<p>This sub-module combines two functions. The first reshapes the spectrum obtained from the FFT so as to separate certain frequencies better (a scaling function). The Bark scale, for example, \u201czooms in\u201d on the low frequencies and \u201czooms out\u201d on the high ones, as the figure shows. The reason for this reshaping is that the human ear is more sensitive to variations in the low frequencies than in the high ones. Its use has been shown empirically to increase the recognition rate.<\/p>\n<p>The second function acts as a \u201ccompactor\u201d: it gathers series of values into packets, each packet then being represented by an average value.<\/p>\n<p>In our example the Mel-scale filter bank, on top of the Bark scaling, groups series of values through a bank of filters (going, say, from 128 values down to 16 averages). Each filter represents the average value of the zone it covers. 
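The compaction idea (for example 128 spectral values reduced to 16 band averages) can be sketched as follows. A real front end would use overlapping triangular filters spaced on a Mel or Bark axis; this illustrative version uses equal-width rectangular bands to keep the sketch short.

```python
def filterbank_compact(spectrum, n_bands=16):
    # Collapse the magnitude spectrum into n_bands band averages.
    # Real engines weight the bins with overlapping triangular filters
    # on a perceptual (Mel/Bark) axis; equal bands illustrate the idea.
    band = len(spectrum) // n_bands
    return [sum(spectrum[i * band:(i + 1) * band]) / band
            for i in range(n_bands)]
```

Applied to a 128-bin spectrum, this returns the 16 average values that are passed on to the log and DCT stages.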
Using average values means passing far fewer values on to the DCT and the log stage (16 instead of 128).<\/p>\n<p><strong>Log:<\/strong> Looking at the general speech-analysis diagram, the logarithmic scale brings out components (2) that on a linear scale would be hidden by the main component (1).<\/p>\n<p><strong>DCT:<\/strong> Conversion to the \u201ccepstral domain\u201d. Comparing the two graphs, two people pronouncing the same word generate different curves whose overall shapes are nevertheless quite similar. We therefore look for a system able to determine, from a signal, its general trend. There are several ways to do this; one of them is a decomposition into a series of functions in which only the first components are kept, so that only the general shape remains. These first components describe only the slow variations of the curve, which in our case are the first components of the DCT of our spectral shape. This function giving the general shape of the spectrum is what speech-recognition specialists call the \u201ccepstrum\u201d.<\/p>\n<p><strong>Derivative:<\/strong> Dynamic information about the evolution of the spectrum is added to the cepstra by computing the first and second derivatives. 
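The derivative step just described can be sketched as a simple frame-to-frame difference. Production front-ends usually estimate the derivatives with a regression over a few neighbouring frames, so this is only the idea.

```python
def delta(frames):
    # First-order difference between consecutive cepstral vectors
    return [[b - a for a, b in zip(prev, cur)]
            for prev, cur in zip(frames, frames[1:])]

def add_dynamics(cepstra):
    d1 = delta(cepstra)   # first derivative ("velocity")
    d2 = delta(d1)        # second derivative ("acceleration")
    # Align lengths and concatenate static + dynamic coefficients per frame
    n = len(d2)
    return [cepstra[i] + d1[i] + d2[i] for i in range(n)]
```

Concatenating the static cepstra with both derivative streams is what produces the roughly 40-element feature vector mentioned below.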
This yields the acoustic reference, or features.<\/p>\n<p>In summary, features = cepstrum + \u0394 cepstrum + \u0394\u0394 cepstrum + energy + \u0394 energy + \u0394\u0394 energy = a vector of roughly 40 elements.<\/p>\n<h1><strong>The training module \u2013 Training<\/strong><\/h1>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/PRINCIPE_AP.jpg?ssl=1\"><img class=\"alignnone wp-image-142\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/PRINCIPE_AP.jpg?resize=486%2C228&#038;ssl=1\" alt=\"PRINCIPE_AP\" width=\"486\" height=\"228\" \/><\/a><\/p>\n<p>Training is divided into two phases: the first creates the entries of the acoustic dictionary; the second adjusts that data so as to obtain an average value representative of the ways a word may be pronounced. This phase is a crucial element of a recognition engine: if training is carried out badly, the engine cannot give good results. That is why users should be trained to run these \u201ctraining campaigns\u201d as well as possible.<\/p>\n<h1><strong>The recognition module \u2013 Recognition \u2013 Pattern matching<\/strong><\/h1>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/Energy.jpg?ssl=1\"><img class=\"alignnone size-full wp-image-126\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/Energy.jpg?resize=279%2C79&#038;ssl=1\" alt=\"Energy\" width=\"279\" height=\"79\"><\/a><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/SpeechEnergy.jpg?ssl=1\"><img class=\"alignnone wp-image-146\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/SpeechEnergy.jpg?resize=262%2C131&#038;ssl=1\" alt=\"SpeechEnergy\" width=\"262\" height=\"131\"><\/a><\/p>\n<p>First of all: how do we determine that the speaker is\u2026 speaking? The boundaries of a speech signal are computed from the instantaneous variation of the speech energy. 
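A minimal sketch of such an energy-based detector follows; the 160-sample frame (20 ms at 8 kHz) and the threshold value are illustrative assumptions, not parameters of the engine described here.

```python
def frame_energy(samples, frame_len=160):
    # Energy of each frame = sum of the squared samples in that frame
    return [sum(s * s for s in samples[i:i + frame_len])
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def speech_frames(samples, threshold, frame_len=160):
    # Flag a frame as "speech" when its energy crosses the threshold
    return [e > threshold for e in frame_energy(samples, frame_len)]
```

Runs of flagged frames give the start and end boundaries of an utterance.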
The simplest way to compute the instantaneous energy of the speech is to compute the energy of each frame by summing the squares of its samples. This module is often called the VAD (Voice Activity Detection).<\/p>\n<p>Now that we know the speaker has just spoken, the task is to recognise what was said.<\/p>\n<p>Principle: once parameterised, the speech signal produced by the user can be compared, as acoustic images, with the words of the reference dictionary (cf. the training module). The recognition algorithm chooses the most similar word by computing a similarity measure (in the sense of a distance) between the uttered word and the various references.<\/p>\n<p>This computation is not simple, even for a single speaker, because the words, and hence the patterns, to be compared have different durations and rhythms. Even a practised speaker cannot utter the same vocal sequence several times with exactly the same rhythm and duration. The time scales of two occurrences of the same word therefore do not coincide, and the acoustic patterns produced by the parameterisation step cannot simply be compared point by point.<\/p>\n<p>One technique for resolving the time-scale problem would be a linear mapping (resizing) before comparison. But problems remain there too: time is elastic. A \u201cheeello\u201d and a \u201chellooooo\u201d can still be considered different even though they have the same length. 
In that case, \u201cDynamic Time Warping\u201d algorithms (dynamic comparison algorithms) are used to put the time scales of the two words into optimal correspondence.<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/LTW.jpg?ssl=1\"><img class=\"wp-image-138 alignleft\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/LTW.jpg?resize=252%2C173&#038;ssl=1\" alt=\"LTW\" width=\"252\" height=\"173\" \/><\/a><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/NLTW.jpg?ssl=1\"><img class=\"wp-image-139 alignleft\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/NLTW.jpg?resize=250%2C177&#038;ssl=1\" alt=\"NLTW\" width=\"250\" height=\"177\" \/><\/a><br>\nStochastic modelling, in particular in the form of Markov models, can also be used. In this approach, each word of the vocabulary is represented by a Markov source capable of emitting the speech signal corresponding to that word. The parameters of this source underlying the emission process of a word are adjusted in a prior training phase on very large speech corpora. 
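The dynamic time-warping comparison introduced above can be sketched with the classic dynamic-programming recurrence. Here `dist` is any local distance between two acoustic vectors (a Euclidean distance on cepstral vectors, for instance), and this sketch returns only the cumulative score, not the alignment path.

```python
def dtw_distance(a, b, dist):
    # cost[i][j] = best cumulative distance aligning a[:i] with b[:j];
    # allowed moves: stretch a, stretch b, or step both forward together.
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(a[i - 1], b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # a stretched
                                 cost[i][j - 1],      # b stretched
                                 cost[i - 1][j - 1])  # step together
    return cost[n][m]
```

Trying every warping explicitly would be combinatorial; the dynamic-programming table finds the minimum-weight path in O(n*m).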
La reconnaissance d\u2019un mot inconnu consiste \u00e0 d\u00e9terminer la source ayant la probabilit\u00e9 la plus forte d\u2019avoir \u00e9mis ce mot.D<a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/DTW.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"125\" data-permalink=\"https:\/\/imalogic.com\/blog\/2016\/11\/08\/reconnaisance-vocale-generalites\/dtw\/\" data-orig-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/DTW.jpg?fit=441%2C304&amp;ssl=1\" data-orig-size=\"441,304\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"DTW\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/DTW.jpg?fit=441%2C304&amp;ssl=1\" class=\"size-medium wp-image-125 alignleft\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/DTW.jpg?resize=300%2C207&#038;ssl=1\" alt=\"DTW\" width=\"300\" height=\"207\" loading=\"lazy\" srcset=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/DTW.jpg?resize=300%2C207&amp;ssl=1 300w, https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/DTW.jpg?w=441&amp;ssl=1 441w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a>TW = fa\u00e7on d\u2019optimiser le calcule d\u2019un \u201cnon linear time warping\u201d permettant de d\u00e9terminer le plus court chemin\u2026 (on pourrait essayer toute les possibilit\u00e9s mais ce serait tr\u00e8s long ).On va calculer les distances entre les vecteurs 
acoustiques Xk et Yk, le but \u00e9tant de tester toute la suite de vecteurs acoustiques. Le probl\u00e8me c\u2019est qu\u2019il faut trouver le meilleur chemin (de poids minimum) pour comparer ces vecteurs. Le meilleur chemin donnant la moins mauvaise note en totalit\u00e9. Chaque mod\u00e8le sera compar\u00e9 de cette fa\u00e7on, et les r\u00e9sultats obtenus seront transmis au module de rejet.<\/p>\n<p><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/etat.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"127\" data-permalink=\"https:\/\/imalogic.com\/blog\/2016\/11\/08\/reconnaisance-vocale-generalites\/etat\/\" data-orig-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/etat.jpg?fit=320%2C100&amp;ssl=1\" data-orig-size=\"320,100\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"etat\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/etat.jpg?fit=320%2C100&amp;ssl=1\" class=\"alignnone size-medium wp-image-127\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/etat.jpg?resize=300%2C94&#038;ssl=1\" alt=\"etat\" width=\"300\" height=\"94\" loading=\"lazy\" srcset=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/etat.jpg?resize=300%2C94&amp;ssl=1 300w, https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/etat.jpg?w=320&amp;ssl=1 320w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" 
\/><\/a><\/p>\n<h1><strong>Le module de rejet \u2013 Rejection<\/strong><\/h1>\n<p>Ce module de rejet va interpr\u00e9ter les r\u00e9sultats obtenus par le module de reconnaissance. C\u2019est lui qui va d\u00e9terminer si un mot a \u00e9t\u00e9 reconnu ou s\u2019il doit \u00eatre \u201crejet\u00e9\u201d<\/p>\n<p>\u2013 Si un mot obtient un bon r\u00e9sultat (80%) par exemple, et que les autres mots n\u2019obtiennent pas de r\u00e9sultats probants (&lt;60%), le mot sera consid\u00e9r\u00e9 comme reconnu. Si deux, voir plusieurs mots obtiennent des r\u00e9sultats probant, un ambigu\u00eft\u00e9 subsiste : le mot ne sera donc pas reconnu.<\/p>\n<p>\u2013 L\u2019utilisation de mots poubelles (garbage), permet au syst\u00e8me de ne pas se d\u00e9clencher sur des bruits involontaires (bruits d\u2019environnement, sonnerie de t\u00e9l\u00e9phone,\u2026). Si la reconnaissance renvoie une \u201cbonne note\u201d pour l\u2019un de ces bruits, le module de rejet ne tiendra donc pas compte de celui-ci (sachant son attributs \u201cgarbage\u201d).<\/p>\n<h1><strong>Performance\u00a0de l\u2019engine<\/strong><\/h1>\n<h1><strong><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/10MOTS.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"107\" data-permalink=\"https:\/\/imalogic.com\/blog\/2016\/11\/08\/reconnaisance-vocale-generalites\/10mots\/\" data-orig-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/10MOTS.jpg?fit=699%2C436&amp;ssl=1\" data-orig-size=\"699,436\" data-comments-opened=\"0\" 
data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" data-image-title=\"10MOTS\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/10MOTS.jpg?fit=699%2C436&amp;ssl=1\" class=\"wp-image-107 alignnone\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/10MOTS.jpg?resize=265%2C165&#038;ssl=1\" alt=\"10MOTS\" width=\"265\" height=\"165\" loading=\"lazy\" srcset=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/10MOTS.jpg?resize=300%2C187&amp;ssl=1 300w, https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/10MOTS.jpg?w=699&amp;ssl=1 699w\" sizes=\"auto, (max-width: 265px) 100vw, 265px\" \/><\/a><a href=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/20MOTS.jpg?ssl=1\"><img data-recalc-dims=\"1\" decoding=\"async\" data-attachment-id=\"108\" data-permalink=\"https:\/\/imalogic.com\/blog\/2016\/11\/08\/reconnaisance-vocale-generalites\/20mots\/\" data-orig-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/20MOTS.jpg?fit=696%2C439&amp;ssl=1\" data-orig-size=\"696,439\" data-comments-opened=\"0\" data-image-meta=\"{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}\" 
data-image-title=\"20MOTS\" data-image-description=\"\" data-image-caption=\"\" data-large-file=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/20MOTS.jpg?fit=696%2C439&amp;ssl=1\" class=\"wp-image-108 alignnone\" src=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/20MOTS.jpg?resize=262%2C165&#038;ssl=1\" alt=\"20MOTS\" width=\"262\" height=\"165\" loading=\"lazy\" srcset=\"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/20MOTS.jpg?resize=300%2C189&amp;ssl=1 300w, https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/04\/20MOTS.jpg?w=696&amp;ssl=1 696w\" sizes=\"auto, (max-width: 262px) 100vw, 262px\" \/><\/a><br>\nRobustness Issues<\/strong><\/h1>\n<p>As one would expect, these voice recognition engines are affected by the outside environment, which can take many forms. Several techniques can be used to raise the recognition rate in such environments:<\/p>\n<p>\u2013 Echo cancellation<\/p>\n<p>\u2013 An acoustic dictionary containing \u201cgarbage words\u201d<\/p>\n<\/body>","protected":false},"excerpt":{"rendered":"<p>Etude d\u2019un engine Monolocuteur \u2013 Approche globale En g\u00e9n\u00e9rale, on peut distinguer quatre modules fondamentaux pour les syst\u00e8mes de 
traitement<\/p>\n","protected":false},"author":1,"featured_media":190,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[6],"tags":[],"class_list":["post-178","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-signal-processing"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/i0.wp.com\/imalogic.com\/blog\/wp-content\/uploads\/2016\/11\/05105916-photo-logo-reconnaissance-vocale-google.png?fit=512%2C512&ssl=1","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p8J21V-2S","jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts\/178","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/comments?post=178"}],"version-history":[{"count":2,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts\/178\/revisions"}],"predecessor-version":[{"id":182,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/posts\/178\/revisions\/182"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/media\/190"}],"wp:
attachment":[{"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/media?parent=178"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/categories?post=178"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/imalogic.com\/blog\/wp-json\/wp\/v2\/tags?post=178"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
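The rejection logic described in the post (accept the best-scoring word only when it clears a high threshold, reject on ambiguity, and ignore garbage-tagged entries) can be sketched as follows. This is a minimal illustration, not the post's actual engine: the function and field names are hypothetical, and the 80% / 60% thresholds are the illustrative values given in the text.

```python
# Sketch of a rejection module for a word recognizer.
# Hypothetical interface: the recognizer returns a list of
# (word, score, is_garbage) candidates, scores in [0, 1].

ACCEPT_THRESHOLD = 0.80     # best candidate must score at least this
AMBIGUITY_THRESHOLD = 0.60  # any rival at or above this creates ambiguity

def reject_or_accept(candidates):
    """Return the accepted word, or None if the utterance is rejected."""
    # Garbage entries absorb environment noise (phone ringing, etc.);
    # they can win the scoring but must never trigger a recognition.
    scored = [(word, score) for word, score, garbage in candidates
              if not garbage]
    if not scored:
        return None
    scored.sort(key=lambda ws: ws[1], reverse=True)
    best_word, best_score = scored[0]
    if best_score < ACCEPT_THRESHOLD:
        return None  # no convincing candidate at all
    # Ambiguity check: a second convincing candidate forces rejection.
    if len(scored) > 1 and scored[1][1] >= AMBIGUITY_THRESHOLD:
        return None
    return best_word
```

For example, `reject_or_accept([("yes", 0.85, False), ("no", 0.40, False)])` accepts "yes", while the same call with "no" at 0.70 is rejected as ambiguous, and a high-scoring garbage entry is simply filtered out before the decision.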