I had to wipe the grin off my own face. I thought I was about to watch some dated, unintentionally hilarious demo, but it turned out to be a really impressive and well-thought-out concept that we still haven't managed to create yet.
At 8:00: "We envision a future in which most of the user interface code will be generated by interface designers using sketching tool rather than programmers writing the code."
In that one sentence, he says the words envision (InVision) and sketch (Sketch) as if it were prophecy. Not a good sign for Figma if we're going by Nostradamus standards.
This is a delight to watch. I found a paper with more information on how it works (worked?) http://prior.sigchi.org/chi95/Electronic/documnts/papers/jal1bdy.htm
Thanks for the link!
I'm sure Airbnb took some inspiration from this when building their own image-recognition prototyping tool... Video ( https://youtu.be/z5XxgxBz3Fo )
21 years later and we still don't have tools that fulfil that promise. Airbnb's recent tool looks close (with a few added machine-learning buzzwords), but there's still nothing that can go from sketch > prototype > working code in a few simple steps.
Even with the latest tools like Sketch/InVision Studio/Figma etc., we're still shuffling objects around pages: the basic concepts of designing for web and apps aren't much different from print design, even though the end medium is so fundamentally different.
There's still too much of a disconnect from how we design (pages, symbols, pixels) to the final product (data, systems, components, relationships).
Software like Framer and Origami is interesting, but at best it's a simulation of the end result, not a step on the way to achieving it.
Concepts are cyclical. Often there is a reason they never come to fruition.
Adobe Comp's gesture system works very similarly to this. The foundation of Principle's animation model is the same. There are lots of great concepts in here which actually came to be.
This is wild.