Kagura, making music with your body

Brian Eno once said, “The problem with computers is that there is not enough Africa in them.” The interfaces we use to interact with computers are too digital, not fuzzy enough to sense analog inputs. We’re stuck with mouse and keyboard.

Kagura is a game that runs on a laptop and uses its camera to detect players' movements as they interact with musical instruments projected on the screen in front of them, letting them play along with or riff on a musical track.

Part Dance Dance Revolution and part Guitar Hero, the game has a UI that's intuitive, easy, and fun to pick up. All that's required is a Windows laptop (a Mac version is coming later), and the team launched a Kickstarter today to fund final development and an August release.

Shunsuke Nakamura, the inventor of the game, stopped by the SmartNews offices on Friday to show us how it works. He's been working on the concept of using your body to make music for 14 years, but only now has technology reached the point where his dream can be realized.

We truly live in amazing times.

SmartNews TV commercials featuring Tamori

SmartNews (where I work) is running a series of TV commercials in Japan featuring the Japanese celebrity Tamori. The tagline for the campaign is “禁断のニュースアプリ,” which roughly translates as “The forbidden news app,” as in it’s so addictive that you binge on it when you’ve got time alone.

News junkie, are you? Check out the US Edition.

SmartNews shoutout on Google Play

The crew running the @googleplay account gave SmartNews a nice shoutout this morning. Thanks, Google!

The cool animated GIF and tagline were all them. Love it! Posting here for posterity.

AI is only human

I’m so glad that The New York Times ran this op-ed (Artificial Intelligence’s White Guy Problem) about the inherent biases in Artificial Intelligence algorithms. Popular culture and much media coverage of AI tend to mystify how it works, neglecting to point out that any machine learning algorithm is only as good as the training data that goes into its creation.

Delip Rao, a machine learning consultant, thinks long and hard about the bias problem. He recently gave a fascinating talk at a machine learning meetup where he implored a room of machine learning engineers to be vigilant in making sure their algorithms were not encoding any hidden bias.

The slides from his talk are posted online, but Delip’s final takeaway lessons have stuck with me, and they’re good to keep in mind whenever you read stories of algorithms taking on a mind of their own.

Delip Rao takeaways

It is still very early days: many embarrassing mistakes have been made, and more will be made in the future. Our assumption should be that every automated system is fallible and that each mistake is an opportunity to make things better (both ourselves and the algorithm), not an indictment of the technology.