Deep Learning – Hyper Construction

walle

“Text is linear…
Text is unilinear when written on paper…
Digital text is different…
Digital text is above all … hyper…
Digital ethnography hypermedia anthropology
When we post and then tag pictures…
We are teaching the machine…
We teach it an idea”

– Michael Wesch, The Machine is Us/ing Us

Teaching machines the workings of our brains is the current hot topic, and has been for the past couple of decades. Michael Wesch, an associate professor of cultural anthropology, noticed this trend back in 2007.[1] With the formation of the web and its tagging capabilities, the vast online database is able to capture the multitude of human understandings of an idea – defined by a collective, rather than an individual.

Just about eight years before the Web 2.0 video was launched, Will Glaser and Tim Westergren had started the Music Genome Project.[2] I remember asking someone what that meant when I first used Pandora in 2008 – it seemed pretty neat that I could create a radio station that has (1) a mix of songs that I enjoy, and (2) the randomness of a radio (because who wants to listen to the same recorded tracks over and over again!). Brilliant!

The duo had figured out a way to teach the music player the patterns of genres and beats by allowing the human input of “like” and “dislike.” From these two very simple responses, the system is able to track and predict the characteristics of the music we are, and may be, attracted to.
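
To make that concrete, here is a minimal sketch, assuming each song has been hand-annotated with a handful of attribute scores and that each thumbs up or down becomes a binary training label for a simple taste model. The real Music Genome Project attributes and algorithms are proprietary; every attribute name and number below is invented for illustration.

```python
# A minimal sketch of learning a listener's taste from thumbs-up/thumbs-down
# feedback. The Music Genome Project's real attributes and scoring are
# proprietary; the attribute names and numbers below are made up.
import numpy as np

# Each song is a vector of hand-annotated attributes (0..1), e.g.
# [acoustic sonority, syncopation, minor tonality, vocal-centric, tempo].
songs = np.array([
    [0.9, 0.2, 0.1, 0.8, 0.3],   # mellow acoustic ballad
    [0.1, 0.9, 0.3, 0.2, 0.9],   # fast electronic track
    [0.8, 0.3, 0.7, 0.9, 0.2],   # slow minor-key folk song
    [0.2, 0.8, 0.2, 0.1, 0.8],   # upbeat dance track
])
thumbs = np.array([1, 0, 1, 0])  # 1 = "like", 0 = "dislike"

# Logistic regression trained by gradient descent: learn which attributes
# predict a thumbs-up for this particular listener.
w, b = np.zeros(songs.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(songs @ w + b)))   # predicted like-probability
    grad = p - thumbs                            # error for each song
    w -= 0.1 * songs.T @ grad / len(songs)
    b -= 0.1 * grad.mean()

# Score an unheard song: a high probability means it is worth adding to the station.
new_song = np.array([0.85, 0.25, 0.6, 0.7, 0.3])
print("predicted like-probability:", 1.0 / (1.0 + np.exp(-(new_song @ w + b))))
```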

Not long after the Music Genome Project came the visual recognition research project – ImageNet – pioneered by Professor Fei-Fei Li.[5] It feeds images and corresponding tags to the computer so that it may formulate associations between image and text. In a way, this is similar to how children learn about the world. I’ve witnessed children say, “there is a rainbow reflected on the ice because the ice is white because white contains all the colors of the rainbow” – of course this is not why … But they are able to connect bits of information, (1) white = all colors and (2) ice = white, and formulate a statement to try to make sense of the world.
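
As a toy illustration of how labeled examples teach a machine an image–text association, here is a sketch that only stands in for the real thing: models trained on ImageNet use deep neural networks over millions of photographs, whereas this example invents three-number color summaries for a few hand-tagged images and matches a new image to the nearest tag.

```python
# A toy sketch of associating images with tags, in the spirit of supervised
# training on a labeled set like ImageNet. Real systems learn features with
# deep networks; here each "image" is just a made-up 3-number color summary
# and we use a nearest-centroid rule, purely for illustration.
import numpy as np

# (mean red, mean green, mean blue) for a handful of labeled example images
labeled = {
    "rainbow": [[0.8, 0.6, 0.7], [0.9, 0.7, 0.8]],
    "ice":     [[0.9, 0.95, 1.0], [0.85, 0.9, 0.95]],
    "grass":   [[0.2, 0.8, 0.3], [0.3, 0.7, 0.2]],
}

# "Training": store the average feature vector (centroid) for each tag.
centroids = {tag: np.mean(vecs, axis=0) for tag, vecs in labeled.items()}

def predict_tag(image_features):
    """Return the tag whose centroid is closest to this image."""
    x = np.asarray(image_features)
    return min(centroids, key=lambda tag: np.linalg.norm(x - centroids[tag]))

print(predict_tag([0.88, 0.92, 0.97]))   # -> "ice"
```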

Google, as part of its Self-Driving Car research, also embarks on a similar trajectory – teaching a computer to identify whether an object is a car, pedestrian, bicyclist, or obstacle.[3] We are teaching the computer to scan and understand the world, just as children do. Machines can now use data and actively affect the way things move in the playing field – Google cars on highways.

With Nest, heating and cooling systems can now learn from day-to-day patterns and smartly set the temperature of each room (based on the time of day, personal comfort, and external factors). The concept of the smart house is no longer simply science fiction – it is happening.[4]
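
Nest’s actual algorithms are proprietary, but the basic idea of inferring a daily schedule from the setpoints a person dials in by hand can be sketched in a few lines; all of the hours and temperatures below are made up.

```python
# A minimal sketch of a "learning thermostat": Nest's real algorithms are
# proprietary, so this only illustrates the idea of inferring a daily
# schedule from the setpoints an occupant chooses by hand.
from collections import defaultdict

manual_adjustments = [
    # (hour of day, temperature the occupant chose, in °C) - invented data
    (7, 21), (8, 21), (9, 17),     # warm for the morning, cooler when leaving
    (18, 22), (19, 22), (23, 18),  # warm in the evening, cool overnight
]

# "Learning": average the chosen setpoint for each hour that has history.
history = defaultdict(list)
for hour, temp in manual_adjustments:
    history[hour].append(temp)
schedule = {hour: sum(t) / len(t) for hour, t in history.items()}

def target_temperature(hour, default=19.0):
    """Use the learned setpoint for this hour, falling back to a default."""
    return schedule.get(hour, default)

print(target_temperature(18))  # 22.0 - learned from the evening pattern
print(target_temperature(13))  # 19.0 - no history yet, use the default
```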

Now the question extends further – can machines learn our methods simply through scanning, and recreate what we have produced? Can we teach the tools, rather than merely use them? Currently, with most digital fabrication, we input a string of instructions such as “take this form, contour it into a waffle construction, lay the pieces flat, and CNC each piece.” Instead, can we ask a scanner to view the way we have constructed buildings and learn the method on its own, without a manual?
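
The contrast is easier to see in code. The sketch below spells out that kind of explicit instruction chain; every function name is a hypothetical placeholder standing in for a CAD/CAM operation, and the point is simply that the machine executes the recipe rather than learning the method.

```python
# A sketch of the explicit, step-by-step instruction chain we feed a
# fabrication pipeline today. Every function here is a hypothetical
# placeholder for a CAD/CAM operation; the machine only executes,
# it does not learn the method.

def contour_into_waffle(form, spacing):
    """Slice the form into two intersecting sets of planar ribs."""
    return [f"rib_{axis}_{i}" for axis in ("x", "y") for i in range(spacing)]

def lay_flat(ribs):
    """Unroll each rib onto the sheet-material plane for cutting."""
    return [{"piece": rib, "nested": True} for rib in ribs]

def cnc_cut(sheets):
    """Emit one toolpath per nested piece."""
    return [f"toolpath for {s['piece']}" for s in sheets]

# "Take this form, contour it into a waffle construction,
#  lay the pieces flat, and CNC each piece."
form = "doubly-curved canopy"
toolpaths = cnc_cut(lay_flat(contour_into_waffle(form, spacing=4)))
print(len(toolpaths), "pieces to cut")
```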

Recently, a group of engineers, technicians, and historians in the Netherlands conducted a full analysis of Rembrandt’s body of work in an attempt to have a machine create another “Rembrandt.” The machine (1) scanned each painting, (2) analyzed the facial features, (3) analyzed the height map of the paint thickness, and finally (4) was able to 3D print (in a paint medium) a painting that Rembrandt himself might seemingly have produced.
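
As a toy illustration of steps (3) and (4), here is a sketch of turning an analyzed paint-thickness height map into discrete passes for a paint-based 3D print. The team’s real pipeline is far richer; every number below, including the layer height, is invented.

```python
# A toy sketch of the height-map step: converting analyzed paint thickness
# into discrete layers for a paint-based 3D print. All values are invented
# for illustration.
import numpy as np

# Paint thickness (mm) sampled over a small patch of canvas, e.g. from scans.
thickness = np.array([
    [0.05, 0.10, 0.20],
    [0.10, 0.35, 0.25],
    [0.05, 0.15, 0.10],
])

layer_height = 0.05  # mm of paint deposited per printing pass (assumed)
layers_needed = np.ceil(thickness / layer_height).astype(int)

# Each pass prints wherever at least that many layers are still required,
# gradually building up the impasto relief.
for pass_number in range(1, layers_needed.max() + 1):
    mask = layers_needed >= pass_number
    print(f"pass {pass_number}: deposit paint at {int(mask.sum())} cells")
```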

Machines will be able to learn and produce without needing our step-by-step directions.

“Construction is linear…
Construction can be … hyper…
Machines can scan and record
We are teaching the machine a method…
Hyper-construction is here”

– Anesta Iwan

 

[1] Wesch, Michael. “Web 2.0 … The Machine is Us/ing Us.” Online video clip. YouTube, January 31, 2007. https://www.youtube.com/watch?v=6gmP4nk0EOE. Accessed April 9, 2016.

[2] “The Music Genome Project.” Pandora.com, 2016. https://www.pandora.com/about/mgp.

[3] “Google Self-Driving Car Project.” Google, 2016. https://www.google.com/selfdrivingcar/.

[4] “Meet the Nest Learning Thermostat.” Nest, 2016. https://nest.com/thermostat/meet-nest-thermostat/?alt=3.

[5] Li, Fei-Fei. “How We’re Teaching Computers to Understand Pictures.” Online video clip. TED.com, December 21, 2010. http://www.ted.com/talks/fei_fei_li_how_we_re_teaching_computers_to_unders
