My sports team participated in a contest hosted by a local radio station that went on for a couple of weeks. If the name of our team was mentioned on-air, we had a couple of minutes to call the radio station in order to win a prize. This was my alternative to listening to shitty music.

The tool does the following:

- Transcribe an audio stream in almost real time using OpenAI-Whisper.
- Monitor the transcribed text for specific terms using fuzzy matching.
- Trigger an alarm via the Signal messenger when your terms are mentioned.

Run `pip install -r requirements.txt` to resolve dependencies.

First, mp3 files are recorded in chunks of 30 sec from the audio stream. transcribe.py transcribes each incoming audio chunk using OpenAI-Whisper. It then uses fuzzy matching to monitor the spoken word for specific terms. On a match, it calls msg_group_via_signal.sh, which relays the alarm message to the signal-cli tool, which in turn messages a group on the Signal messenger.

Take a look at the files to configure them. For example, you can set the durations during which the stream is monitored.

Per default, the small OpenAI-Whisper model is used, so the transcription quality is decent but not perfect. Hence, we use fuzzy matching to monitor for our terms, which reduces false-negative (but increases false-positive) alarms. The benefit of the small model is that it works in almost real time even on a CPU-only machine with mediocre specs (on AWS, a c5a.large EC2 instance is sufficient). There will be some delay because of the 30 sec chunks and because inference takes some time, but it is fast enough to process all audio without falling behind. With better specs / a GPU, you can increase the model size for better-quality transcriptions or reduce latency.
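The recording step can be sketched by shelling out to ffmpeg's segment muxer, which splits incoming audio into fixed-length files. This is a minimal sketch, not the project's actual recorder; the stream URL is a placeholder and `build_ffmpeg_cmd` is a hypothetical helper:

```python
import subprocess

STREAM_URL = "https://example.com/stream"  # placeholder, use your station's stream URL

def build_ffmpeg_cmd(url: str, seconds: int = 30) -> list[str]:
    """Build an ffmpeg command that writes the stream as fixed-length mp3 chunks."""
    return [
        "ffmpeg", "-i", url,
        "-f", "segment",              # segment muxer splits output into chunks
        "-segment_time", str(seconds),
        "-c:a", "libmp3lame",         # encode chunks as mp3
        "chunk_%03d.mp3",             # chunk_000.mp3, chunk_001.mp3, ...
    ]

# subprocess.run(build_ffmpeg_cmd(STREAM_URL))  # uncomment to start recording
```

A watcher can then pick up each finished `chunk_*.mp3` file and hand it to the transcription step.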
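The fuzzy-matching idea can be illustrated with Python's standard library: slide a window over the transcript and accept approximate matches above a similarity threshold. This is a simplified sketch, not the project's implementation; the 0.8 threshold is an assumed value:

```python
from difflib import SequenceMatcher

def fuzzy_hit(transcript: str, terms: list[str], threshold: float = 0.8) -> bool:
    """Return True if any search term approximately matches a span of the transcript."""
    words = transcript.lower().split()
    for term in terms:
        t = term.lower()
        n = len(t.split())  # window size in words, matching the term's length
        for i in range(len(words) - n + 1):
            window = " ".join(words[i:i + n])
            # ratio() is 1.0 for identical strings; a lower threshold tolerates
            # transcription errors at the cost of more false positives
            if SequenceMatcher(None, t, window).ratio() >= threshold:
                return True
    return False
```

For example, a misheard "tigres" would still trigger for the term "tigers", which is exactly the trade-off described above: fewer missed mentions, more spurious alarms.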
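The alerting step boils down to invoking signal-cli's `send` subcommand with a group target. A minimal sketch of what msg_group_via_signal.sh does, assuming a registered signal-cli account; the account number and group id below are placeholders:

```python
import subprocess

def build_signal_cmd(account: str, group_id: str, message: str) -> list[str]:
    """Build a signal-cli invocation that sends a message to a Signal group."""
    # `-a` selects the registered account, `send -g` targets a group chat
    return ["signal-cli", "-a", account, "send", "-g", group_id, "-m", message]

# Example with placeholder credentials (do not run as-is):
# subprocess.run(build_signal_cmd("+4912345678", "yourGroupId==", "Term heard on air!"))
```

The group id can be looked up with signal-cli's `listGroups` subcommand on the registered account.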