Hi. It’s nice
to meet you.

All About Me

KENZY.Ai is a foundation of Python code that enables hardware interactions leveraging Machine Learning to create Artificial Intelligence. Wow, that’s a mouthful. Let’s try again: Kenzy is a set of Python classes and functions that you can use to build your own personal assistant, similar to Apple’s Siri, Google’s Assistant, or Amazon’s Alexa. There are libraries for speech recognition, speech synthesis, face and object detection, and even face recognition. So basically Kenzy can see you, hear you, and respond to you.

Kenzy is 100% customizable. Its modular design allows you to create your own skills or integrate new hardware to fit your specific needs. The code is open source and shared with a business-friendly license.

How does it work?

Kenzy leverages a modular design that separates its primary functions into isolated processes. The individual modules use machine learning models to parse inputs into commands, which are sent back to the SkillManager or “Brain” for processing. Each module interfaces with the skill manager via TCP/IP sockets, which can be configured in a variety of ways, including the default standalone configuration, an isolated and secure design.
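As a rough illustration of the idea (the actual wire format is defined by Kenzy’s own code; the field names and framing shown here are hypothetical), a device module might package a parsed command as JSON and frame it with a length prefix before sending it to the skill manager over a TCP/IP socket:

```python
import json
import struct

def encode_command(device, action, payload):
    """Serialize a hypothetical device command as a length-prefixed JSON frame."""
    body = json.dumps({"device": device, "action": action, "payload": payload}).encode("utf-8")
    # A 4-byte big-endian length header lets the receiver read exact frame sizes
    return struct.pack(">I", len(body)) + body

def decode_command(frame):
    """Reverse of encode_command: strip the length header and parse the JSON body."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))

msg = encode_command("listener", "AUDIO_INPUT", {"text": "what time is it"})
print(decode_command(msg)["payload"]["text"])  # → what time is it
```

Length-prefixed framing is a common choice for socket protocols because TCP is a byte stream: without a header, the receiver has no reliable way to know where one message ends and the next begins.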

Expansion Through Modular Design

Kenzy’s primary functions are separated into modules which can be mixed, matched, and recombined through the use of device containers. A device container can represent any device module so long as the hardware is available. This separation of the physical layer and the communication layer enables the addition of multiple inputs or outputs to a virtually unlimited number of devices.

The list of device modules currently includes:

  • Speaker – (Output) For synthesized speech output
  • Listener – (Input) For speech recognition
  • Watcher – (Input) For face and object detection
  • Skill Manager – (CPU) For parsing inputs, identifying intents, and generating outputs
  • Mover – (Output) For servo/controller board interaction (coming soon)

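To make the mix-and-match idea concrete, here is a minimal sketch of a container that registers device modules and routes their parsed output to a “brain” callback. The class and function names are illustrative only, not Kenzy’s actual API, and the listener and brain are toy stand-ins for the real Listener module and Skill Manager:

```python
class DeviceContainer:
    """Toy container that holds device modules and routes their output to a brain callback."""

    def __init__(self, brain):
        self.brain = brain    # callable that plays the Skill Manager role
        self.devices = {}

    def register(self, name, device):
        """Attach a device module (any callable that parses raw input into a command)."""
        self.devices[name] = device

    def dispatch(self, name, raw_input):
        """Run a device's parser and hand the resulting command to the brain."""
        command = self.devices[name](raw_input)
        return self.brain(command)

# Illustrative stand-ins for the real Listener module and Skill Manager
listener = lambda audio: {"intent": "greet"} if "hello" in audio else {"intent": "unknown"}
brain = lambda cmd: "Hi there!" if cmd["intent"] == "greet" else "Sorry?"

container = DeviceContainer(brain)
container.register("listener", listener)
print(container.dispatch("listener", "hello kenzy"))  # → Hi there!
```

Because every module is just a registered callable behind a common interface, swapping in new hardware (or running a module on another machine) only changes what is registered, not how the brain consumes its output.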

Machine Learning & 3rd Party Libraries

Machine Learning, or ML, is a technology that allows hardware to detect sought-after differences based on huge quantities of data. These differences can then be inferred as a spoken word, a recognized face, or any number of other things. The more specific the identified item, the more data is required to ensure accuracy.

Machine Learning technology has come a long way in only a few years thanks to the commoditization of hardware that has happened with cloud technologies. That being said, Kenzy is a 100% standalone code base. It does not require any network or external connectivity to work properly. Kenzy is able to do this because the models were pre-built. All that data has already been crunched so that a much smaller model file can be loaded into the local system that tells the system what to look for specifically.

Kenzy uses the following 3rd party libraries:

  • OpenCV for object detection through the Haar Cascade model
  • OpenAI’s Whisper model for speech-to-text translation
  • Festival for text-to-speech actions (with the optional “speecht5” HuggingFace model)
  • Mycroft AI’s Padatious for intent parsing
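Padatious learns intents from example sentences. A drastically simplified, pure-Python stand-in for that idea (this is not Padatious’s real API, just a toy keyword-overlap matcher to show the concept) might look like:

```python
def build_intents(examples):
    """Map each intent name to the set of words seen in its example sentences."""
    return {name: set(" ".join(sents).lower().split()) for name, sents in examples.items()}

def match_intent(intents, utterance):
    """Pick the intent whose example vocabulary overlaps most with the utterance."""
    words = set(utterance.lower().split())
    best = max(intents, key=lambda name: len(intents[name] & words))
    return best if intents[best] & words else None

intents = build_intents({
    "time.query": ["what time is it", "tell me the time"],
    "weather.query": ["what is the weather", "is it raining"],
})
print(match_intent(intents, "what time is it now"))  # → time.query
```

Real intent parsers like Padatious go well beyond word overlap (handling entities, word order, and fuzzy matches), but the core task is the same: map an utterance to the intent it most resembles so the Skill Manager can route it to the right skill.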

It’s important to note that Kenzy does not require Internet access to function properly. By comparison, many of the other personal assistants (Apple, Google, Amazon) actually send the captured inputs (your voice or captured images from a camera) to the cloud for processing. Kenzy keeps everything on the local hardware.

Why the difference? Large corporations have spent millions of dollars on building accurate models for the various parsing activities. These models are proprietary and by sending your data to their cloud they are able to protect their investment. Additionally, these models are quite large and are not easily portable to a single machine for use. In Kenzy’s case, the models selected have been tuned for their specific purpose in order to keep them as small as possible. This means Kenzy likely gives up some accuracy, but with the benefit of being able to run in a completely private/isolated way and without a dependency on some big company’s black box.

Hardware Requirements

Kenzy is designed to be lightweight and can therefore be run on hardware with very light specifications. At present, Kenzy can successfully be run on a Raspberry Pi, but given the demands of some of the models it is not recommended. In our tests an i5-4600 CPU with 8GB RAM was more than sufficient to service all the primary components of Kenzy, but keep in mind that Kenzy is designed to run on multiple computers to spread the processing load, so it is highly likely that she can run within much smaller resource limits.

What Should You Expect?

KENZY.Ai’s main goal is to share knowledge and grow organically based on feedback and support from people like you. On this site and within each of Kenzy’s code modules you will find lots of documentation, examples, sample code, and other artifacts to help you grow in your own experience with AI/ML and coding.

Who built me?

Kenzy was originally designed by @lnxusr1 during the Covid-19 lockdown in 2020.