- A way to show the result to the user that just spoke

After researching for a while, I discovered that the voice recording and translation-to-text parts were already done by the Web Speech API that's available in Google Chrome. It has exactly what we need in the SpeechRecognition interface.

As for text scoring, I found AFINN, which is a list of words that are already scored. It has a limited scope with "only" 2477 words, but that's more than enough for our project.

Since we're already using the browser, we can show a different emoji with HTML, JavaScript and CSS depending on the result.

Now that we know what we're going to use, we can sum it up:

1. The browser listens to the user and returns some text using the Web Speech API.
2. It makes a request to our Node.js server with the text.
3. The server evaluates the text using AFINN's list and returns the score.
4. The browser shows a different emoji depending on the score.

Note: If you're familiar with project setup, you can mostly skip the "project files and setup" section below.

Our project folder and files structure will be as follows:

src/
|-public // folder with the content that we will feed to the browser
  |-css // optional folder, we have only one obvious file

On the front end side of things, our index.html file will include the JS and CSS.

The recognition.js file will be wrapped in the DOMContentLoaded event so we make sure that the page has loaded before executing our JS:

document.addEventListener('DOMContentLoaded', speechToEmotion, false)

Web Speech API section code will be added here

In our folder, we will run npm init, which will create package.json. For now, we will need to install two packages to make our life easier.
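To preview the scoring step from the summary above, here is a minimal sketch of how the server side could evaluate a sentence against an AFINN-style word list. The `scores` object below is a tiny hand-picked subset for illustration (the real AFINN list has 2477 words), and `scoreText` is a hypothetical helper name, not the article's actual implementation:

```javascript
// Illustrative subset of AFINN-style scores; the real list has 2477 words.
const scores = {
  good: 3,
  happy: 3,
  love: 3,
  bad: -3,
  sad: -2,
  hate: -3,
};

// Sum the score of every known word in the sentence; unknown words count as 0.
function scoreText(text) {
  return text
    .toLowerCase()
    .split(/\W+/)
    .reduce((total, word) => total + (scores[word] || 0), 0);
}

console.log(scoreText('I love this, it is so good')); // 6
console.log(scoreText('I hate rainy days'));          // -3
```

A positive total means the spoken sentence leaned positive, a negative total means it leaned negative, and zero means neutral — which is all the browser needs in order to pick an emoji.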