So this week has been a really busy one. I have been working on implementing the Watson API in Unity. The goal is to build a VR conference room where a speech-to-text translator drives a virtual avatar.
This week I was able to get speech-to-text working, so that was a step in the right direction. The second part of the project, making the translator work from the newly transcribed speech, was harder. It took a few days to figure out how to reference the two different services that IBM Watson provides. After spending a couple of hours in the lab today, I finally got both parts working together (a rough sketch of the wiring is below).
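For anyone trying the same thing, here is roughly how the two services end up chained together. This is only a minimal sketch: it assumes the IBM Watson Unity SDK's service classes (SpeechToTextService, LanguageTranslatorService, IamAuthenticator), whose exact names and signatures vary between SDK versions, and the API keys and the AudioClipToWav helper are placeholders, not real values.

```csharp
using System.Collections.Generic;
using UnityEngine;
using IBM.Watson.SpeechToText.V1;
using IBM.Watson.LanguageTranslator.V3;
using IBM.Cloud.SDK.Authentication.Iam;

public class SpeechTranslator : MonoBehaviour
{
    private SpeechToTextService speechToText;
    private LanguageTranslatorService translator;

    void Start()
    {
        // Each Watson service is authenticated and referenced separately.
        speechToText = new SpeechToTextService(
            new IamAuthenticator(apikey: "STT_APIKEY"));           // placeholder key
        translator = new LanguageTranslatorService(
            "2018-05-01",                                          // API version date
            new IamAuthenticator(apikey: "TRANSLATOR_APIKEY"));    // placeholder key
    }

    // Step 1: send recorded audio to Speech to Text.
    public void RecognizeClip(AudioClip clip)
    {
        byte[] wav = AudioClipToWav(clip); // hypothetical AudioClip -> WAV helper
        speechToText.Recognize(
            callback: (response, error) =>
            {
                var results = response.Result.Results;
                if (results != null && results.Count > 0)
                {
                    // Step 2: hand the transcript straight to the translator.
                    Translate(results[0].Alternatives[0].Transcript);
                }
            },
            audio: wav,
            contentType: "audio/wav");
    }

    private void Translate(string text)
    {
        translator.Translate(
            callback: (response, error) =>
                Debug.Log(response.Response), // raw JSON containing the translation
            text: new List<string> { text },
            modelId: "en-es"); // English -> Spanish; swap for your target language
    }

    private byte[] AudioClipToWav(AudioClip clip)
    {
        // WAV encoding omitted; several Unity helper scripts exist for this.
        throw new System.NotImplementedException();
    }
}
```

The thing that tripped me up is that each service needs its own credentials and endpoint. Once they are separate objects, chaining them is just calling one from inside the other's callback.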

The biggest problem I have run into is the amount of lag between the computer running Unity and the Watson API. I was originally running the program on my laptop over WiFi (and note that my laptop does not have a very hefty GPU/CPU), so I am now trying to run it on one of the computers in the VR lab.
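On the lag: to tell whether the delay is the network or the machine, one cheap check is to time each request's round trip. This is a minimal sketch using only standard Unity calls (Time.realtimeSinceStartup); hook MarkSent and MarkReceived around whatever Watson call you make.

```csharp
using UnityEngine;

public class LatencyProbe : MonoBehaviour
{
    private float requestSentAt;

    // Call this right before sending a request to Watson.
    public void MarkSent()
    {
        requestSentAt = Time.realtimeSinceStartup;
    }

    // Call this from inside the response callback.
    public void MarkReceived()
    {
        float elapsed = Time.realtimeSinceStartup - requestSentAt;
        Debug.Log($"Watson round trip: {elapsed:F2}s");
    }
}
```

If the round trip stays just as high on the wired lab machine, the bottleneck is the service or the network rather than my laptop's hardware.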

The next problem I ran into, and had not thought of, was that the lab computer has no mic, nor could I find one anywhere. So I may have to come back and work later tonight to see if I can push the project further. As of now, it is doing better than I expected, but I would like to add a few more things. The project currently has no graphics, which is a little boring; I wish I could add a little more so that the user has something better to look at.
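On the mic problem: before hunting around the lab for hardware, Unity can at least report what capture devices the OS sees. Microphone.devices is a standard UnityEngine API; an empty array means no mic at all.

```csharp
using UnityEngine;

public class MicCheck : MonoBehaviour
{
    void Start()
    {
        // Unity exposes every OS-level capture device here;
        // an empty array means no mic is plugged in (or drivers are missing).
        if (Microphone.devices.Length == 0)
        {
            Debug.LogWarning("No microphone detected on this machine.");
            return;
        }
        foreach (string device in Microphone.devices)
            Debug.Log("Found mic: " + device);
    }
}
```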
