image generated by DALL·E 2

First taste of DALL·E 2

Experimenting with images generated from text

Mark Ryan
5 min read · Jul 16, 2022


I’ve had access to OpenAI’s GPT-3 for almost two years, and access to Codex for about a year. I’m grateful for the opportunity to experiment with these models, and I have blogged and created videos to record my experiences. After a wait of several months, this week I got access to DALL·E 2, OpenAI’s system for generating images from text descriptions. In this article I’ll share my first impressions of DALL·E 2.

Seeing what never existed — anachronistic tech

Many examples of DALL·E 2's output depict things that don't exist and have never been imagined before. In this spirit, I prompted DALL·E 2 to render contemporary technology in the design language of periods before that technology existed. For example, what would an iPhone from 50 years ago look like? You could call this anachronistic tech.

Here’s an example of what DALL·E 2 generated when prompted with “1960’s iphone”:

image generated by DALL·E 2

Here’s “1970’s iphone”:

--

Mark Ryan

Technical writing manager at Google. Opinions expressed are my own.