I have always loved IQ-based games, since they revolve around systematic thinking – understanding patterns and commands. One of my first experiences with the very basics of coding was an early 3D computer game called Logic Quest. Some of you may remember it from the nostalgic '90s… Released in 1996, it was one of the first computer games to feature a first-person walk-through of a virtual world.
Logic Quest CD-ROM, Learning Company, 1996
(image: Anesta Iwan)
The premise of the game is to venture through a three-dimensional maze, locking and unlocking doors along the way to reveal parts of a virtual robot. At the end of each level, the player approaches a specific gated room. Here lies the fully assembled robot (built from all the collected pieces) along with one golden key; typically, two to four partition walls are also placed as obstacles throughout the room. The goal is to program the robot through a string of very simple commands (such as “walk forward until I hit something,” “turn left,” “pick up key,” “use key,” etc.) to reach the key and use it to unlock the gate. It was very crude, but to me it was fantastic!
But all that happened twenty years ago!
And today, understanding logic through an analytical (rather than intuitive) approach is more critical than ever. With three billion more people arriving on top of the current seven billion, we can no longer rely on the methodology of manual analysis and (construction) drawings. Instead, we need systems that can analyze, act, and react in real time (sensor-build).
As Nish mentioned in a recent post (Zoe & Me – a Robotic Relationship), we were given a chance to remotely control the robot at ST Robotics. After several simple one-liner commands, such as “move 0 0 1800,” “grip,” and “ungrip,” we figured out how to teach the robot to remember specific locations and to run a string of commands from memory. Coordinates – that is the universal language. As long as there is a coordinate input, the robot has a destination. Comparing 1996 with 2016, the systems and commands are still remarkably similar – simple toddler speak.
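To make the idea of that command session concrete, here is a minimal toy sketch of a controller that accepts one-liner commands, remembers taught positions, and replays them. The command names (“move,” “grip,” “teach,” “goto”) are illustrative stand-ins, not the actual ST Robotics command syntax; a real session would send such strings over a serial link to the arm’s controller.

```python
class ToyArm:
    """Minimal stand-in for a robot arm that remembers taught positions.
    Not the real ST Robotics interface - a hypothetical sketch only."""

    def __init__(self):
        self.position = (0, 0, 0)   # x, y, z in controller units
        self.gripping = False
        self.memory = {}            # name -> saved coordinate

    def send(self, line):
        parts = line.split()
        cmd, args = parts[0], parts[1:]
        if cmd == "move":           # absolute move to x y z
            self.position = tuple(int(a) for a in args)
        elif cmd == "grip":
            self.gripping = True
        elif cmd == "ungrip":
            self.gripping = False
        elif cmd == "teach":        # remember the current position by name
            self.memory[args[0]] = self.position
        elif cmd == "goto":         # replay a remembered position
            self.position = self.memory[args[0]]

arm = ToyArm()
for line in ["move 0 0 1800", "grip", "teach pickup",
             "move 500 0 900", "goto pickup"]:
    arm.send(line)
```

The point of the sketch is the “teach and replay” pattern: once a coordinate is stored under a name, a whole choreography can be expressed as a string of remembered destinations.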
Left: ST Robotics Interface (image: Anesta Iwan + Nish Kothari)
Right: Logic Quest Interface (image: https://www.youtube.com/watch?v=v5GTUcvScnQ)
And as we look for ways to build the 27 steps pavilion, we ask ourselves: can we code its construction? Can the robot generate its own coordinates? From the basic geometry, the robot can assign A, B, C, and D as the primary coordinates. We can then ask the computer to reconstruct the basic geometry through those four points – forming edges AB, BC, CD, and DA. Secondary vertices can be generated by subdividing each edge into n segments, forming AB-1, AB-2, and so on.
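The subdivision step above can be sketched in a few lines. The corner coordinates below are made-up placeholder values; the naming follows the text, with edge AB subdivided into AB-1, AB-2, … (interior points only).

```python
def subdivide(p, q, n):
    """Return the n-1 interior points that split segment pq into n equal parts."""
    return [tuple(p[i] + (q[i] - p[i]) * k / n for i in range(3))
            for k in range(1, n)]

# Primary coordinates A, B, C, D (placeholder values, in millimetres)
corners = {
    "A": (0.0, 0.0, 0.0),
    "B": (1000.0, 0.0, 0.0),
    "C": (1000.0, 1000.0, 0.0),
    "D": (0.0, 1000.0, 0.0),
}
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]

# Generate the secondary vertices: each edge split into n = 4 segments
vertices = {}
for p, q in edges:
    for k, point in enumerate(subdivide(corners[p], corners[q], 4), start=1):
        vertices[f"{p}{q}-{k}"] = point   # e.g. "AB-1", "AB-2", "AB-3"
```

Each named vertex is exactly the kind of coordinate input the robot needs as a destination.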
(image: Anesta Iwan + Nish Kothari)
Now that we have the complete list of vertices, we can – like in a gumball machine – add a random generator to select the next vertex the robot will move to. Of course, it does not make sense to move from AB-1 to AB-3, since both lie on the same branch, so we can null out coordinates belonging to the same branch as the current position. Essentially, this random generator runs at each current location, updated in real time. Through this process, the robot can generate its own path and “continue weaving until the end of the spool” or “continue until 70% coverage density is achieved.”
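The gumball-machine selection can be sketched as follows. The vertex names encode their branch before the dash, and the stop rule here is a simple fixed step count standing in for the “end of spool” or density check.

```python
import random

def branch(name):
    """Extract the branch (edge) a vertex belongs to, e.g. "AB-3" -> "AB"."""
    return name.split("-")[0]

def next_vertex(current, vertices, rng):
    # Null out vertices on the same branch as the current position,
    # then pick the next destination at random from what remains.
    candidates = [v for v in vertices if branch(v) != branch(current)]
    return rng.choice(candidates)

# Secondary vertices on the four edges of the base geometry
vertices = [f"{e}-{k}" for e in ("AB", "BC", "CD", "DA") for k in (1, 2, 3)]

# Generate a path by re-running the selector at each new current location.
rng = random.Random(0)          # seeded for a repeatable example
path = ["AB-1"]
for _ in range(10):             # stand-in for the spool / density stop rule
    path.append(next_vertex(path[-1], vertices, rng))
```

Because the selector is re-run at every step, the exclusion list always reflects the robot’s current branch, which is what makes the path a weave rather than a walk along one edge.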
The next step is to apply this logic to any basic geometry. Can we 3D scan any given geometric frame (with Autodesk ReCap 360) and have the computer recognize its edges and vertices? From there, the same series of commands could run similar weaving patterns. That is the 2016 quest for logic!