GlyphNet is an experiment in using neural networks to generate a visual language. The project is inspired by Joel Simon's Dimensions of Dialogue. At its core, GlyphNet is an autoencoder whose latent space is trained to be human-readable. During training, the latent representation is subjected to increasingly severe visual noise, such as rotation and static, and the autoencoder must learn to produce representations that are robust to this noise.
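The noise step can be sketched as follows. This is a minimal illustration, not the project's actual code: it assumes a 32×32 grayscale glyph, and `corrupt_glyph` and `static_level` are hypothetical names. Rotation is limited to 90-degree steps here for simplicity.

```python
import numpy as np

def corrupt_glyph(glyph, rng, static_level=0.1):
    """Apply the kind of noise the decoder must survive:
    a random 90-degree rotation plus additive static.
    `static_level` is an illustrative knob; the real project
    ramps the noise up over the course of training."""
    rotated = np.rot90(glyph, k=rng.integers(0, 4))
    static = rng.normal(0.0, static_level, size=glyph.shape)
    return np.clip(rotated + static, 0.0, 1.0)

rng = np.random.default_rng(0)
glyph = rng.random((32, 32))      # stand-in for an encoder's latent image
noisy = corrupt_glyph(glyph, rng)
print(noisy.shape)                # the corrupted glyph keeps its shape
```

Because the corruption is applied between the encoder and decoder, gradients pressure the encoder toward glyphs that stay legible under these transformations.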
The motivation is to explore the design space of visual notations that neural networks can produce. If you're interested in more work like this, check out Ryan Murdock's post, which describes this class of neural networks as Cooperative Communication Networks.
The Digit Glove was an attempt at designing a tactile representation of language similar to Braille, but better suited to presenting digital information (it can be programmed). The prototype is a wired glove with a vibration motor on each finger. The goal mirrored that of Braille: represent each character as 6 bits.