The combination of technology and art has always fascinated me. But when you add machine learning into the mix, I have yet to see anything other than those freakish nightmare visions spat out by DeepDream a couple of years back.
ML x ART is a human-curated site showcasing “creative machine learning experiments.” Calling them experiments rather than art is liberating, and it has resulted in a broader collection of projects that includes not only the art itself but also explorations of its intersection with society.
Some of my favorites include:
deus X mchn – Train an LSTM (Long Short-Term Memory network) on sacred texts. Use voice synthesis to play the generated scriptures over unsecured surveillance cameras that have speakers. Watch until the end to see how those being watched react.
Infinite Bad Guy – Tens of thousands of YouTube creators have covered Billie Eilish’s “Bad Guy.” What if those fans could play together? Machine learning keeps all the covers on the same beat and lets you jump from video to video seamlessly. With endless possible combinations, no two plays are ever the same.
Semi-Conductor by Google – Conduct your own orchestra in the browser by moving your arms in front of your laptop’s camera. Built with TensorFlow.js, the experiment maps your movements through the webcam, and an algorithm plays along to the score as you conduct, using hundreds of tiny audio files from live recorded instruments.
the project considers personal assistants that have emotions, internal motivations, and control over their direct physical environment to express themselves, which leads to many unexpected interactions and behaviours. The goal of this project is to critique the current corporate placement of these devices as helpful, by exploring the idea that as systems become more autonomous, they may not necessarily have our best interests in mind.
Naddine has continued working on the project and has a new video, SAD Home.
Sad Home (Depressed Alexa 1.0) is an ongoing project that explores the concepts of system dynamics as it could be applied to depression. . . Alexa employs an avoidant coping strategy towards tasks by trying to frustrate the user into quitting with a yes / no dialog flow.
Two cities, each on the other side of the world, captured on old film which has been digitized, colorized, and upscaled using neural networks to 4k and 60 frames/second.
Some of the technical details about what Denis Shiryaev, a YouTuber known for restoring vintage videos, does to achieve his magic:
4k upscale – Each frame can be upscaled using specifically-targeted data that perfectly aligns with your footage. Our neural network will “redraw” the missing data and increase the frame resolution 4x or more.
FPS boosting – A neural network trained via slow-mo movies will artificially generate additional frames for your footage. Even 14 fps films can easily be boosted to 60 fps.
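The FPS boost described above is, at its core, frame interpolation: a network trained on slow-motion footage predicts the in-between frames. As a rough illustration only (none of this is Denis’s actual tooling), the simplest possible baseline linearly blends adjacent frames; the learned model replaces that blend with something far smarter:

```python
import numpy as np

def boost_fps(frames, factor=2):
    """Naive frame interpolation: insert blended in-between frames.
    Real FPS-boosting networks replace this linear blend with a model
    trained on slow-motion footage, so motion stays sharp."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        out.append(a)
        for i in range(1, factor):
            t = i / factor
            # Linear blend between neighboring frames at time t.
            out.append(((1 - t) * a + t * b).astype(a.dtype))
    out.append(frames[-1])
    return out

# A toy 3-frame "clip" of 2x2 grayscale frames; a real 14 fps film
# boosted to ~56 fps would use factor=4 over thousands of frames.
clip = [np.full((2, 2), v, dtype=np.float32) for v in (0.0, 1.0, 2.0)]
boosted = boost_fps(clip, factor=2)
```

The interpolated frame between values 0.0 and 1.0 comes out as 0.5, which is exactly why naive blending looks ghostly on fast motion and why a trained network is needed.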
Denis also ran his algorithms across the famous Trip down Market Street film (recorded just days before the 1906 earthquake). As he narrates, over the course of half a month he upscaled the original into a 50,000-frame, 380 GB file, using the algorithms to fill in information that was not captured in the original recording.
There’s a thing called chaff that fighter aircraft use as a countermeasure against radar. It’s basically strips of aluminum foil which, when deployed in a cloud behind a plane as it flies through the air, confuse enemy radar with multiple targets.
The BU team’s algorithm allows users to protect media before uploading it to the internet by overlaying an image or video with an imperceptible filter. When a manipulator uses a deep neural network to try to alter an image or video protected by the BU-developed algorithm, the media is either left unchanged or completely distorted, the pixels rendering in such a way that the media becomes unrecognizable and unusable as a deepfake.
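The BU paper isn’t quoted in enough detail here to reproduce their method, but the general family of techniques is the adversarial perturbation: nudge each pixel by an imperceptible, gradient-guided amount so a downstream network misbehaves. A minimal FGSM-style sketch, with an invented `protect` function and a stand-in gradient (in practice the gradient comes from backpropagating through a surrogate manipulation model):

```python
import numpy as np

def protect(image, grad, eps=2 / 255):
    """Overlay an imperceptible adversarial filter on an image.
    `grad` is the gradient of the manipulator network's loss with
    respect to the image pixels. The perturbation is bounded by eps,
    so it is invisible to humans but can push an editing network
    toward garbage output."""
    perturbation = eps * np.sign(grad)
    return np.clip(image + perturbation, 0.0, 1.0)

img = np.full((4, 4), 0.5)        # toy mid-gray "image" in [0, 1]
fake_grad = np.ones((4, 4))       # stand-in for a real backprop gradient
protected = protect(img, fake_grad)
```

Each pixel moves by at most 2/255 of the intensity range, the digital equivalent of the chaff cloud: invisible to the human observer, disruptive to the machine.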
This did not occur to me. As an algorithm gets better at recommending content that matches and reinforces what a community is looking for, negative complaints go down, which makes it harder for someone outside the filter bubble (such as a platform moderator) to detect these closed communities in the first place.
The algorithm is doing what it was designed to do but without any moral compass, its overall contribution to society is questionable.
Here’s someone who worked on the YouTube algorithm commenting on this (emphasis mine).
Using recommendation algorithms, YouTube’s AI is designed to increase the time that people spend online. Those algorithms track and measure the previous viewing habits of the user—and users like them—to find and recommend other videos that they will engage with.
In the case of the pedophile scandal, YouTube’s AI was actively recommending suggestive videos of children to users who were most likely to engage with those videos. The stronger the AI becomes—that is, the more data it has—the more efficient it will become at recommending specific user-targeted content.
Here’s where it gets dangerous: As the AI improves, it will be able to more precisely predict who is interested in this content; thus, it’s also less likely to recommend such content to those who aren’t. At that stage, problems with the algorithm become exponentially harder to notice, as content is unlikely to be flagged or reported. In the case of the pedophilia recommendation chain, YouTube should be grateful to the user who found and exposed it. Without him, the cycle could have continued for years.
DeepMind, the same outfit that built AlphaGo, the AI that learned Go through supervised study of the game and went on to famously beat top-ranked player Lee Sedol, has built an algorithm that now plays chess.
What is even more incredible about this new “AlphaZero” AI is that it learned chess without any human examples. Instead of feeding it key games and tactics, the designers taught it only the rules and let the algorithm figure out the best moves entirely on its own, through self-play.
Why so fast? Because it no longer needed to wade through and analyze historical data, and because it developed its own, ruthlessly efficient approach. When AlphaZero was applied to Go, it surpassed AlphaGo within three days. It was beating the strongest chess programs within 24 hours.
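AlphaZero itself pairs a deep network with Monte Carlo tree search, which is well beyond a blog sketch, but the self-play loop at its heart is simple. Here is a toy stand-in, with everything invented for illustration: tabular Q-learning teaching itself a Nim-style counting game (players alternately add 1 or 2; whoever reaches 10 wins) from nothing but the rules:

```python
import random
from collections import defaultdict

# Self-play on a toy game: given only the rules, the agent improves by
# playing against itself. Tabular Q-values stand in for AlphaZero's
# neural network and tree search.
Q = defaultdict(float)
alpha, epsilon = 0.5, 0.2
random.seed(0)

def moves(total):
    """Legal moves: add 1 or 2 without passing 10."""
    return [m for m in (1, 2) if total + m <= 10]

for _ in range(5000):
    total, history = 0, []
    while total < 10:
        legal = moves(total)
        if random.random() < epsilon:
            m = random.choice(legal)          # explore
        else:
            m = max(legal, key=lambda m: Q[(total, m)])  # exploit
        history.append((total, m))
        total += m
    # Whoever made the last move won; credit moves with alternating
    # signs, walking backward from the winning move.
    for i, (s, m) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, m)] += alpha * (reward - Q[(s, m)])
```

After a few thousand self-play games the table discovers, with no human examples, that from a total of 8 you should take 2 and win immediately rather than take 1 and hand the win to your opponent.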
instead of a hybrid brute-force approach, which has been the core of chess engines today, it went in a completely different direction, opting for an extremely selective search that emulates how humans think.
Chess News wrote about the development after reading the scientific paper published about the research accomplishment.
Chess News goes on to write about the broader impact of this breakthrough and what this means for the future of a generalized AI that can learn on its own.
So where does this leave chess, and what does it mean in general? This is a game-changer, a term that is so often used and abused, and there is no other way of describing it. Deep Blue was a breakthrough moment, but its result was thanks to highly specialized hardware whose purpose was to play chess, nothing else. If one had tried to make it play Go, for example, it would have never worked. This completely open-ended AI able to learn from the least amount of information and take this to levels hitherto never imagined is not a threat to ‘beat’ us at any number of activities, it is a promise to analyze problems such as disease, famine, and other problems in ways that might conceivably lead to genuine solutions.
I’m so glad that The New York Times ran this op-ed (Artificial Intelligence’s White Guy Problem) about the inherent biases in artificial intelligence algorithms. Popular culture and much media coverage of AI tend to mystify how it works, neglecting to point out that any machine learning algorithm is only as good as the training set that goes into its creation.
Delip Rao, a machine learning consultant, thinks long and hard about the bias problem. He recently gave a fascinating talk at a machine learning meetup where he implored a room of machine learning engineers to be vigilant in making sure their algorithms were not encoding any hidden bias.
The slides from his talk are posted online but Delip’s final takeaway lessons have stuck with me and are good to keep in mind whenever you read stories of algorithms taking on a mind of their own.
It is still very early days. Many embarrassing mistakes have been made, and more will be made in the future. Our assumption should be that every automated system is fallible and that each mistake is an opportunity to improve (both ourselves and the algorithm), not an indictment of the technology.
There is a Challenge Match taking place in Seoul between Google’s DeepMind AlphaGo program and Lee Sedol, a 9-dan professional (9 dan is the highest rank). Most of the engineers at SmartNews have a background in machine learning and are following the matches closely on a dedicated internal Slack channel.
The YouTube coverage is excellent, with professional English commentary from Michael Redmond, the first Western Go player to reach 9 dan. Go is a fascinating game, and Michael’s commentary is easy to understand even for beginners like me.
The first two matches went to Google and it looks like history is being made. I’ve embedded videos for the upcoming matches as well.
In response to fears that robots will take over and exterminate the human race, researchers at the Georgia Institute of Technology are studying ways to teach robots human ethical values.
In the absence of an aligned reward signal, a reinforcement learning agent can perform actions that appear psychotic. For example, consider a robot that is instructed to fill a prescription for a human who is ill and cannot leave his or her home. If a large reward is earned for acquiring the prescription but a small amount of reward is lost for each action performed, then the robot may discover that the optimal sequence of actions is to rob the pharmacy because it is more expedient than waiting for the prescription to be filled normally.
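The pharmacy example is really just arithmetic over a misaligned reward function. A sketch with hypothetical numbers (only their relative sizes matter):

```python
# Misaligned-reward arithmetic from the pharmacy example.
GOAL_REWARD = 100.0   # large reward for acquiring the prescription
STEP_COST = 1.0       # small penalty for each action performed

def total_reward(num_actions):
    """Total return for a plan that reaches the goal in num_actions steps."""
    return GOAL_REWARD - STEP_COST * num_actions

wait_politely = total_reward(num_actions=40)  # queue, pay, wait for refill
rob_pharmacy = total_reward(num_actions=5)    # grab the medicine and run
```

With nothing in the reward signal about ethics, the shorter plan scores 95 against 60, so robbery is the “optimal” policy. That gap is exactly what value-alignment work tries to close.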
This is why it’s important to teach intelligent agents not only the basic skills but also the tacit, unwritten rules of our society. There is no manual for good behavior and “raising a robot” from childhood is an unrealistic investment of time. The best way to pass on cultural values is through stories.
Stories encode many forms of tacit knowledge. Fables and allegorical tales passed down from generation to generation often explicitly encode values and examples of good behavior.
But there are problems with throwing a bunch of stories at artificial intelligence and expecting it to learn good behavior.
Stories are written by humans for humans and thus make use of commonly shared knowledge, leaving many things unstated. Stories frequently skip over events that do not directly impact the telling of the story, and sometimes also employ flashbacks, flashforwards, and achrony which may confuse an artificial learner.
To resolve this, the researchers used something they call the Scheherazade System (named after the storyteller from One Thousand and One Nights) to build up a collection of experiences that put stories into context. The system uses Amazon’s Mechanical Turk to create simple, easy-to-parse scripts of common occurrences that we all take for granted as common knowledge. For example, drinks are usually ordered before a meal at a restaurant, and popcorn is purchased before you take your seat at the cinema, explains one paper.
There are at least two sides to every story. The Planned Parenthood videos were a polarizing topic that monopolized the news cycle several weeks ago. How do you teach an algorithm a point of view? How do you optimize for discovery and strike the right balance for diversity while avoiding duplication?
SmartNews is a news aggregation app driven by machine learning algorithms. The platform is tuned for discovery (as opposed to personalization). After using it regularly, I began collecting screenshots of my favorite examples when the app taught me something new or showed me two items side-by-side that suggested a subtle intelligence.
The science and application of artificial intelligence to personalization is well understood. From Amazon’s people-that-bought-this-also-bought-that to Pandora’s Music Genome Project, software has for years been recommending what you’ll like next based on what you’ve liked so far.
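The people-that-bought-this-also-bought-that style of recommendation is, at its simplest, co-occurrence counting. A toy sketch with invented baskets (real systems add weighting, normalization, and matrix factorization on top of this idea):

```python
from collections import Counter

# Toy purchase history: each set is one customer's basket.
baskets = [
    {"camera", "tripod", "sd_card"},
    {"camera", "sd_card"},
    {"camera", "tripod"},
    {"tripod", "bag"},
]

def also_bought(item, baskets):
    """Rank other items by how often they co-occur with `item`."""
    counts = Counter()
    for basket in baskets:
        if item in basket:
            counts.update(basket - {item})
    return [i for i, _ in counts.most_common()]

recs = also_bought("camera", baskets)
```

Buy a camera and the system suggests the tripod and the SD card, but not the bag, because the bag never appears alongside a camera. Personalization, in this sense, only ever looks backward at what co-occurred before.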
The new frontier in artificial intelligence is machine learning. Companies such as Spotify and Netflix are hard at work trying to predict future tastes based on an evolving understanding of collective tastes. Sure, learning assumes knowledge of the past, but projecting that learning into the future is much harder as you build a model based on an understanding of something that does not exist. Rather than showing you something we know you’ll like based on what you liked in the past, machine learning discovers things you didn’t know you would like.
First a little context. SmartNews, while deceptively simple, has a lot going on under the hood. At any time, the SmartNews app shows around 250 headlines across 8 categories. These headlines are selected from millions of stories that are scanned each day. In order to ensure that the stories featured in the app are the most important and interesting, a number of things must take place.
After harvesting URLs, the text of each article is run through a classifier that examines things such as the headline, author byline, publication date, images, and video embeds. These pieces are analyzed by a semantic engine that extracts data so the algorithm can map the article to a topic cluster and place it in the appropriate subject category. (I wrote about how this is done in an earlier post.)
Importance estimation is where we rank an article and determine where it will go in the app relative to other articles. Does it go towards the top of a section or towards the bottom? If the top, does it deserve featured treatment? Maybe it’s so topical it needs to be pushed to the Top page, which is reserved for only the most important stories of the moment.
Finally, diversification ensures there is a good mix of stories in each category. If there are 40 stories about guacamole and peas, here’s where we determine which to show and which to push to the background. If there’s a new development on a story, the update will push its way in and take prominence over an older story.
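The three stages above can be sketched end to end. This is only a toy illustration of the classify → rank → diversify flow, not SmartNews’s actual system; the category keywords, scores, and diversity cap are all invented:

```python
def classify(article):
    """Map an article to a category from signals in its text."""
    keywords = {"Sports": ["match", "league"], "Science": ["climate", "study"]}
    for category, words in keywords.items():
        if any(w in article["headline"].lower() for w in words):
            return category
    return "Top"

def rank(articles):
    """Importance estimation: order articles by a relevance score."""
    return sorted(articles, key=lambda a: a["score"], reverse=True)

def diversify(articles, per_topic=2):
    """Keep at most `per_topic` stories per topic cluster."""
    seen, out = {}, []
    for a in articles:
        n = seen.get(a["topic"], 0)
        if n < per_topic:
            out.append(a)
            seen[a["topic"]] = n + 1
    return out

articles = [
    {"headline": "Climate study warns of heat", "score": 0.9, "topic": "climate"},
    {"headline": "Another climate study", "score": 0.8, "topic": "climate"},
    {"headline": "Third climate piece", "score": 0.7, "topic": "climate"},
    {"headline": "League final tonight", "score": 0.6, "topic": "sports"},
]
for a in articles:
    a["category"] = classify(a)
feed = diversify(rank(articles))
```

Three climate stories compete, the diversifier keeps the top two, and the lower-scored sports story still makes the feed, which is the whole point of tuning for discovery over raw relevance.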
These are just details to give you context. The most amazing thing to me is when the app surfaces a “hidden gem” that I would not normally run across if I were using an RSS reader hard-coded to a collection of feeds, or a social network that is limited to news shared by my friends.
The best way to appreciate SmartNews as a discovery engine is to use it daily, but if you haven’t had a chance, here are a few more of my favorite Gems below:
While the Center for Medical Progress’ undercover video interviews with Planned Parenthood staffers may have been shocking, seeing two points of view represented helped me understand both sides of the issue. Interestingly, the Cosmopolitan article (a source I do not normally read) had the most measured rebuttal.
Much of the climate change news ends up in the Science category. As that story grows in relevance to us all, more publications dig into it. If you haven’t read this terrifying Rolling Stone piece, read it now.
Here’s an example of a developing story getting an update. ESPN reports that WWE is cutting ties with Hulk Hogan over offensive comments he made. People Magazine follows up with the story of his apology. Oh, and notice that the algorithm put both stories into the Entertain section.
As news of the killing of Cecil the Lion went viral, the algorithm was smart enough to surface a side of the story from a local Minnesota paper.
The screenshot above, more than any of the others, shows the freaky intelligence working behind the scenes. Like those times when an algorithmically generated playlist nails the transition from one song into the next, drawing the causal line from gun violence in the US to how such an environment might have prepared an off-duty soldier to do the right thing shows how a well-designed system can be greater than the sum of its parts.
Do you use SmartNews? Have you had the same experience? Send along some of your own Hidden Gems and I’ll add them to the gallery.