It’s getting exciting again: I’m very close to releasing the latest major upgrade to Kenzy.
The new version brings a long list of enhancements, with the primary focus on a complete redesign of the various device runtimes and how they interact with the primary skill manager. I will post a lengthier update in the coming days with a full change log for more thorough review. Mostly all that’s left is more real-world testing and writing up the changes for the documentation.
Major Upgrades
In the new version virtually everything will be configurable, right down to the assistant’s name. Each of the devices has been renamed to make its purpose more obvious, and each can be run independently of the others, both for testing during installations and for spreading the workload across separate hardware.
I’ve added several model options for image processing and speech output, and I encourage you to get involved and help expand the flexibility of these pieces. With the new models Kenzy sounds much more like a real human and is significantly more accurate at both speech recognition and image processing.
Battle Tested
I’m happy to report that I’ve been running the upgraded version of Kenzy’s image processing unit at my home using the inputs from six of my 4K network security cameras. It is set up to capture video when a human is detected and to save the clip, with pre-event and post-event buffers, to a network drive, and it’s been running flawlessly for three months.
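For anyone curious how that kind of capture works, here is a minimal sketch of the pre/post-event buffering idea: keep a rolling window of recent frames, flush it to disk when a detection fires, and keep writing until the post-event window elapses. This is an illustration using OpenCV, not Kenzy’s actual code; `detect_person()` is a stand-in for the real human detector, and the file name and timings are placeholders.

```python
import collections
import cv2

PRE_SECONDS = 5    # video to keep from before the first detection
POST_SECONDS = 10  # keep recording this long after the last detection

def detect_person(frame) -> bool:
    """Stand-in for the real human-detection model."""
    return False

def record_with_buffer(source=0, fps=30):
    cap = cv2.VideoCapture(source)
    pre_buffer = collections.deque(maxlen=PRE_SECONDS * fps)  # rolling window of recent frames
    writer = None
    frames_left = 0

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break

        if detect_person(frame):
            frames_left = POST_SECONDS * fps  # (re)start the post-event countdown
            if writer is None:
                # Detection just started: open a clip and flush the pre-event buffer.
                h, w = frame.shape[:2]
                writer = cv2.VideoWriter(
                    "event.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h)
                )
                for buffered in pre_buffer:
                    writer.write(buffered)
                pre_buffer.clear()

        if writer is not None:
            writer.write(frame)
            frames_left -= 1
            if frames_left <= 0:
                writer.release()  # post-event window elapsed; close the clip
                writer = None
        else:
            pre_buffer.append(frame)

    cap.release()
```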
When will it be launched?
My goal is to launch this over the Christmas holiday (on or before New Year’s Day 2024). I’m close enough now that this feels very realistic.
If you want early access, you can visit the GitHub project and simply clone the dev-v2 branch. All the code is in there, and the docs will be updated over the next week or two.
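Assuming you’ve grabbed the repository URL from the project’s GitHub page (the one below is just a placeholder), cloning only that branch looks like:

```bash
# Clone just the dev-v2 branch; substitute the real repository URL.
git clone -b dev-v2 --single-branch https://github.com/<owner>/kenzy.git
```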
What will come next?
After the launch of v2.0 I will be shifting to expanding the skills and capabilities of the core runtime. I think that, after the redesign, the foundation is reasonably solid, so I can work on expanding what she is capable of doing. My first thought is to connect her to Home Assistant so she can control the 60+ light switches in my house. Then maybe add in some weather functions, a news or market routine or two, and perhaps even a ChatGPT (or Bard, or whatever) interface for expanded conversation. The sky really is the limit here, so the creative juices are beginning to flow.
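To give a sense of how simple that first integration could be, here is a minimal sketch of a skill flipping a switch through Home Assistant’s REST API. This is not an existing Kenzy skill; it assumes a long-lived access token, and the host, token, and entity ID below are all placeholders.

```python
import requests

HA_URL = "http://homeassistant.local:8123"  # placeholder Home Assistant host
HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"   # placeholder long-lived access token

def set_switch(entity_id: str, on: bool) -> None:
    """Turn a Home Assistant switch on or off via the REST services endpoint."""
    service = "turn_on" if on else "turn_off"
    response = requests.post(
        f"{HA_URL}/api/services/switch/{service}",
        headers={"Authorization": f"Bearer {HA_TOKEN}"},
        json={"entity_id": entity_id},  # e.g. "switch.front_porch" (placeholder)
        timeout=10,
    )
    response.raise_for_status()

# Example usage:
# set_switch("switch.front_porch", True)
```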