The differences between funasr-1.x.x and funasr-0.x.x #1319
Replies: 2 comments
For me, the previous Mandarin-only model (aishell2-vocab5212) was better for non-standard Mandarin, and it is also 4-5x faster during inference. Example: generated from the AISHELL-3 test set (SSB06930005.txt -- please rename the file to .wav to listen). I forced all models to produce Chinese by setting decoder_out = -∞ for non-Hanzi tokens.
However, the old pipeline method does not work anymore, so I have mapped the configs over: just unzip and copy it into the
Hello, can anyone help me out here? I'm in Iran. Can you write here which VPN Gate server location is online for me? Thank you, guys.
FunASR
To run without errors, the versions of modelscope, funasr and the model params should match.

We recommend the usage of AutoModel (recommended); more examples can be found in the docs.

If you still want to use the pipeline of modelscope (legacy, not recommended): the old version is no longer maintained.

3.1. In the latest version of funasr>=1.0.3 and modelscope>=1.11.1, you can download the model params by:

a. automatic download by funasr (default):
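The code example that originally sat here is missing from this page. As a hedged sketch (the model name "paraformer-zh" and revision "v2.0.4" are illustrative values, not taken from this thread), the automatic download in funasr>=1.0.3 looks roughly like:

```python
# Hedged sketch: assumes funasr>=1.0.3 is installed; "paraformer-zh" and
# "v2.0.4" are illustrative model name / revision values.
def transcribe(wav_path):
    from funasr import AutoModel  # imported lazily so the sketch loads without funasr
    # Passing a zoo model name triggers the automatic download of the params.
    model = AutoModel(model="paraformer-zh", model_revision="v2.0.4")
    return model.generate(input=wav_path)
```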
(Notes: both the latest and the old version are supported. In the latest version (funasr>=1.0.3) you should add model_revision, and in the old version (funasr-0.8.8) you must not add it, otherwise it will run with errors.)

When you run the code above, it checks whether model is a local path or a model name. If model is a local path, it skips the download. If model is a model name from the model zoo, it automatically downloads the model params from the zoo.
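That local-path-or-zoo-name check can be sketched in plain Python. `resolve_model` and the stubbed behavior below are illustrative, not funasr internals:

```python
import os

def resolve_model(model):
    """Mimics the described dispatch: an existing local path skips the
    download, anything else is treated as a model-zoo name (download stubbed)."""
    if os.path.exists(model):
        return ("local", model)        # skip downloading
    return ("download", model)         # would fetch the params from the zoo

print(resolve_model("."))              # an existing path -> ("local", ".")
print(resolve_model("paraformer-zh"))  # a zoo name -> ("download", "paraformer-zh")
```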
b. git clone manually (only in the latest version):

Notes: only use git clone in the latest version (funasr>=1.0.3). If your version is funasr-0.8.3, it will run with errors.

You can download the model params by git clone, for example:
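The concrete clone command was lost from this page; the following is a hedged example that assumes the standard ModelScope git URL scheme and uses the large paraformer repo as a placeholder -- substitute the model you actually need:

```shell
# Illustrative only: URL scheme and repo name assumed, not quoted from this thread.
git clone https://www.modelscope.cn/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch.git
```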
Then you can set model to the local path you downloaded.

3.2. In the latest version of funasr>=1.0.3 and modelscope>=1.11.1, the input name is input:
But in the old version, the input name is audio_in:
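The renamed keyword can be shown with plain-Python stand-ins (no funasr required; the stub classes and return values below are made up for illustration):

```python
# Stand-ins for the two APIs: only the keyword argument name is the point here.
class LatestModel:
    def generate(self, input):           # funasr>=1.0.3: keyword is `input`
        return [{"key": "demo", "text": "..."}]

def old_pipeline(audio_in):              # funasr-0.8.x: keyword was `audio_in`
    return {"text": "..."}

latest_out = LatestModel().generate(input="asr_example.wav")
old_out = old_pipeline(audio_in="asr_example.wav")
```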
3.3. In the latest version of funasr>=1.0.3 and modelscope>=1.11.1, the output result is a list; but in the old version, the output result is a dict:
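The shape difference can be illustrated with made-up values; `get_text` is a hypothetical helper for code that must handle both versions:

```python
# Example values are invented; only the container shapes match the description.
latest_result = [  # funasr>=1.0.3: a list, one dict per input file
    {"key": "asr_example", "text": "hello funasr"},
]
old_result = {     # funasr-0.8.x: a single dict
    "text": "hello funasr",
}

def get_text(result):
    """Normalize either output shape to a plain transcript string."""
    if isinstance(result, list):
        return result[0]["text"]
    return result["text"]

assert get_text(latest_result) == get_text(old_result)
```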
3.4. In the latest version of funasr>=1.0.3 and modelscope>=1.11.1, the meaning of batch_size changes.

If you run inference without a vad_model, batch_size refers to the number of audio files (Notes: both the latest and the old version support this).

If you run inference with a vad_model, batch_size_s refers to the total duration of the audio files in seconds (s).

But in the old version, it is batch_size_token:
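The two batching rules above can be sketched in plain Python (this is an illustration of the stated semantics, not funasr's actual batching code):

```python
def batch_by_count(files, batch_size):
    """Without vad_model: batch_size = number of audio files per batch."""
    return [files[i:i + batch_size] for i in range(0, len(files), batch_size)]

def batch_by_seconds(files_with_dur, batch_size_s):
    """With vad_model: batch_size_s = total audio duration (seconds) per batch."""
    batches, cur, cur_s = [], [], 0.0
    for name, dur in files_with_dur:
        if cur and cur_s + dur > batch_size_s:
            batches.append(cur)       # close the current batch before overflowing
            cur, cur_s = [], 0.0
        cur.append(name)
        cur_s += dur
    if cur:
        batches.append(cur)
    return batches

print(batch_by_count(["a.wav", "b.wav", "c.wav"], 2))
# -> [['a.wav', 'b.wav'], ['c.wav']]
print(batch_by_seconds([("a.wav", 30.0), ("b.wav", 20.0), ("c.wav", 50.0)], 60))
# -> [['a.wav', 'b.wav'], ['c.wav']]
```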