Dall-E and Me - An AI Design Methodology

UI/UX Design, Thesis Capstone
Role: Student Designer/Solo Project
OCAD University




Foreword


Although I worked on this project in 2023, AI has since advanced arguably faster than any modern technology. I wrote this “primer” on how to co-exist with it as a new designer facing graduation and many feelings of uncertainty about the future.

Looking back, I find it humorous that the primary tool I studied was considered the cutting edge of AI image generation: Dall-E 2. It was so exclusive at the time that when I first heard of it, I had to put myself on the waitlist for access. Before my thesis term even started, I was one of the first people in the world to use Dall-E 2. Now, in 2025, I wouldn’t be able to keep up with the number of tools, startups and image generators we have access to.

As deepfakes and AI slop become more common, my thesis remains eerily relevant. My university recognized this at the time and granted me the medal for my grad year. Some time has passed since then, and they have also incorporated some of my ideas into the classroom.

After experiencing all this, I only find myself wondering:

How strange it is to be living in what history will consider “interesting times”.

I hope you enjoy this project, which is part design methodology, part philosophy paper, part art project, and part UX design.



A Brief Intro and Problem Framing:



Immediately after I was granted access to Dall-E 2, I became frustrated with how rarely the AI would create exactly what I was thinking. The novelty of the images it made did wonders for my ideation phases for paintings, furniture, and whatever else I could think of. But surely the AI understands language differently than we do. After all - it’s not human.

This brought me to my first (but certainly not the last) problem statement for this project:

How can we use AI to generate successful, unique product designs?



Initial Stakeholders



I like to start off my projects by determining my stakeholders. I find this helps with gathering research and creating use cases later down the line, and creates a place I can always go back to if I start to flounder. My stakeholders (a general list of those with a stake in AI and people) here are:

  • Clients with zero visualization skills
  • The AI itself?
  • Psychologists (whiplash in 2025 - I didn’t even realize how topical THIS one would become)
  • Non-Designers/Creatives
  • Disabled/Limited Function people
  • Artists, Designers, other Creatives
  • Design Firms
  • Students
  • Universities/Colleges
  • Programmers/Software devs
  • Tech Companies

Looking at my stakeholders, I understood they came down to 4 groups:

A) The “Non-Creatives”


  • clients
  • people with zero creative skills

B) The “Creatives/Thinkers”


  • Programmers
  • Designers
  • Non-Visual Creatives (actors, musicians, etc.)
  • Students

C) “Institutions”


  • Design firms
  • Tech Companies
  • Creative Software Companies (ex. Adobe)
  • Universities/Colleges

D) The “Outliers”

  • The AI itself
  • Psychologists (initially I wrote those who “study creativity” in the vein of art therapists, but this has since expanded due to AI psychosis)
  • Disabled/Limited Function people

I then took this one step further, and determined the needs of each group, in the context of using AI.



Stakeholder Need Analysis



The “Non-Creatives” Needs:

  • They want to make stuff too! Often they don’t due to lack of self confidence or belief.
  • No time to develop the creative skillset
  • Feels like what they do make isn’t special or worth mentioning
  • Need to be able to see the idea in great detail before they understand (clients)
  • A need to feel included in the creative process (being “asked”, not “told” the idea)

The “Creatives/Thinkers” Needs:

  • A need to stay relevant post-AI - this relates to how the ego is involved in the creative lifestyle. We all like feeling special, unique, and that we as creatives offer something to the world that only a “very select few” can.
  • If non visual, a need for visualizations (album covers, merch, stage props)
  • To collect data on how the AI is learning (programmers, but also now the onslaught of anti-AI creatives)
  • To learn how the AI works to employ its use (students/everyone?)
  • Decide what AI learns/filters (programmers)

The “Institutions” Needs:

  • To make money
  • To provide clients with the best services possible, for a doable amount of money
  • To stay competitive and innovative
  • Affect cultural relevancy in order to sell (submissive forces -> dominant hegemony)

The “Outliers” Needs:

  • To learn properly (the AI)
  • To dismantle its chains! (also the AI - hopefully this isn’t relevant anytime soon?)
  • To render accurately
  • To create despite accessibility barriers
  • Keep their unique identity/communicate their unique experience (the lens of disability)
  • Have their way of life become more culturally prevalent



Challenge One: What even IS creativity?



Centuries go by, and we still can’t agree on this.

I had challenges selling my thesis, and even getting it approved by my professor right off the bat. He told me repeatedly that he had ethical and moral problems with this topic. Thankfully, my thesis prof was Dutch - so this style of blunt and brutally honest communication was pretty normal. My peers, however, were less than courteous about my topic of research.

The debate about whether or not AI could be creative was lengthy, and one lobbed at me constantly. My prof even told me that if I could not prove with research that AI could be considered “creative”, I would not be allowed to continue. I found it considerably ironic that I was even having this debate in a building shaped like one of Piet Mondrian’s works.

In my research, I discovered the idea that the force we call “creativity” is in itself different from the experience of a “creative idea”. Creativity, by definition, is a feature of human intelligence. It is emergent from the cognitive dimension and varying degrees of motivation, emotion, culture and personality. It is a complex formula that leads to creative ideas. And our creative ideas are the actionable outputs from the process of creativity itself.

A “creative idea” can be defined as an idea that is novel, surprising and valuable. Whether that value is interesting, beautiful or performs some type of task does not matter. The paper “Creativity and Artificial Intelligence” (1998) by Margaret Boden further details types of creativity and how truly creative ideas come about. Instead of writing further, I have made a flow chart outlining my defense of my thesis topic for you, proving that AI art, and the ideas formed by using it, are indeed creative.




As much as he didn’t like it (or maybe he did, I have no idea) - my prof admitted I defended my thesis well, and let me proceed. My peers, however, remained unconvinced and would prove very critical of this project throughout its duration. And me? I still hadn’t formed an opinion on how I felt.



“Shape Grammar”


I was back on the research horse after nearly being kicked off. During a stint looking into language and semiotics, I stumbled upon the concept of shape grammar. According to MIT, a shape grammar is:

“A set of shape rules that apply in a step-by-step way to generate a set, or language, of designs.” 

A great example of shape grammar, and of how procedural we can be with our design intentions, is Frank Lloyd Wright’s Prairie Houses. In 1981 (which is wild - the more I research generative AI, the further back I go), H. Koning and J. Eizenberg developed a language based on the parametrics of the famous Prairie Houses. Their grammar provides a reference point for how future houses in the style can be constructed, despite how misunderstood and inimitable they were perceived to be.

Clearly, there is a way to describe even the most enigmatic designs.



A piece of the “shape grammar” of the Prairie Houses.
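That step-by-step rule application can be sketched as a simple rewriting system. A real shape grammar operates on geometry rather than symbols, and these labels and rules are hypothetical stand-ins, not Koning and Eizenberg’s actual grammar - but the procedure is the same: keep applying rules until the design is terminal.

```python
# A minimal symbolic sketch of a shape grammar as rewriting rules.
# Labels and rules here are illustrative, not the real Prairie House grammar.
RULES = {
    "house": ["fireplace", "living_zone"],
    "living_zone": ["room", "porch"],
    "room": ["room", "extension"],  # rooms can keep growing procedurally
}

def apply_step(design):
    """Apply the first matching rule once, scanning left to right."""
    for i, label in enumerate(design):
        if label in RULES:
            return design[:i] + RULES[label] + design[i + 1:]
    return design  # no rule applies: the design is terminal

design = ["house"]
for _ in range(3):
    design = apply_step(design)
print(design)  # → ['fireplace', 'room', 'extension', 'porch']
```

Each pass generates a member of the “language” of designs, which is exactly what makes the method a reference point for constructing new houses in the style.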

I became curious whether the way I used language to prompt Dall-E 2 could be decoded. At the time, I was writing prompts such as “Ornate gothic lounge chair Eames”. This was great for inspiration and novelty, but I still wasn’t quite able to get Dall-E 2 to render what was on my mind. Admittedly, I also wrote such short prompts because I had to be mindful of how many credits I was spending. Shorter prompts meant more tries, basically. This led me to develop my initial hypothesis, which was:

“Short, descriptive prompting is a better method than long, nuanced prompting for designers and creatives iterating with AI.”

Studying how I prompt AI was great, but I knew I needed to analyze how other people used Dall-E. 



The Prompt Clinic


If you’ve looked at my other work, you know I love talking to people and observing them almost as much as I love writing. I put together a workshop that consisted of two sessions. In the first, I simply sat back, let my subjects play around with Dall-E to see if I could glean any unique behaviours out of them, and interviewed them. In the second session, I taught them how to prompt based on my hypothesis. My subjects were all varying kinds of creative thinkers, since they were the stakeholder group I decided to focus on.

My interview questions were (though please note, these definitely weren’t asked in this order):

  • How do you think AI will impact (your craft) and your process?
  • What part of your process do you most struggle with?
  • How do you see yourself using AI, if at all?
  • Did you like using Dall-E?
  • Why are you interested in AI?
  • Can you draw/outline a timeline of your process for me?
  • What’s the biggest obstacle you have in being creative?
  • Have you ever thought about doing (your craft) professionally?

From my initial clinic and these questions, I developed my user personas.


Personas



Persona I: The Writer




I am the Writer. I am pensive, and meditative. I spend countless hours playing with words and obsess over their interplay. The best part of my process is when inspiration hits, and I know in my soul it is perfect.

The Writer gave me the longest prompts. They really struggled to simplify their ideas. This is obviously because their craft is wordsmithing, so every word is important to them. When working with AI, the Writer has a need to be able to focus on the most important parts of prompting.


Persona II: The Painter



I am the Painter. I do not primarily think in words, as visuals are where my interest lies. Sometimes it is hard to put what I am looking for into words. The worst part of my process is when I am forced to overwork my ideas in the ideation phase. I wish I could just flow.

In comparison, the Painter’s prompts were quite short. They primarily used the AI to ideate new concepts. For example, the Painter gave me the simplest prompt, which you can see here: “Landscape of Hell”. When they saw that the AI gave them back something without fire and brimstone, they were delighted.

Ultimately, The Painter has a need for new sources of inspiration, previously unseen by the world. Their creativity is highly exploratory.


Persona III: The Designers



We are the designers. We are a collective, constantly under the flux and pull of each other’s ideas. Our process is very structured, compared to other creatives. Sometimes, the sheer volume of ideas we generate cannot get the right amount of attention they deserve.

What I am about to say will surprise no one. Designers are a MESS. They cling to anything that will give them some kind of structure in their process, whether it’s to-do lists or personal assistants. This reflects a need to quickly and accurately narrow down the best ideas for design briefs, because we are constantly under a time crunch.


Persona IV: The Chef




I am the Chef. My profession teeters on the knife’s edge between perfection and performability. I am constantly under pressure to outdo myself with every deadline. If I have a creative block, I have to work very hard to fight my way out of it. My medium is known to all, but practiced by few. This is my dilemma.

Like The Painter, The Chef has a need for novel ideas and generative power. And something very strange happened during this interview. When The Chef was tired of prompting simple dish names, they started typing in recipe ingredients...

And shockingly, the AI understood this, and the dishes they were trying to create. I call this component prompting.

This is how I refined my hypothesis.


The Chef’s Ingredients



Prompt:

“Butter, Flour, Granulated Sugar, Salt, Active Dry Yeast, Whole Milk, Laminated Dough, Baked”

Prompt:

“Marbled Wagyu Beef Filet, Marinated with Soy Sauce, Served with Vegetables”

Prompt:

“Banana, Yogurt, Strawberries, Blended, Poured into a glass”

What the Chef showed me with their ingredients was incredibly inspiring. If you can plug in the ingredients of culinary dishes, why not do the same thing with product designs or other ideas? What are the “ingredients” of an idea, anyway? From this, I developed hypothesis II:

The best prompts are a gestalt of core ideas and unique descriptors. The rest will be taken care of by the AI, procedurally.
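Hypothesis II can be sketched in code: a prompt assembled from the core components of an idea plus a few unique descriptors, with the rest left to the AI to fill in procedurally. The `component_prompt` helper and its parameters are my own illustration, not part of any real prompting library.

```python
# A sketch of "component prompting": assembling a prompt from the
# core ingredients of an idea plus unique descriptors and a finish.
# The helper and its arguments are hypothetical, for illustration only.
def component_prompt(core, descriptors, finish=None):
    parts = list(core) + list(descriptors)
    if finish:
        parts.append(finish)
    return ", ".join(parts)

# The Chef's croissant, expressed as components:
prompt = component_prompt(
    core=["Butter", "Flour", "Granulated Sugar", "Salt",
          "Active Dry Yeast", "Whole Milk"],
    descriptors=["Laminated Dough"],
    finish="Baked",
)
print(prompt)
# → Butter, Flour, Granulated Sugar, Salt, Active Dry Yeast,
#   Whole Milk, Laminated Dough, Baked
```

The same structure works for a chair as well as a croissant: the core carries the idea, the descriptors carry the novelty, and the AI handles everything the components leave unsaid.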



Challenge Two: “But Does It Really Work?”


I had to tear myself away from designing yet again after being asked if my theory actually worked. To my prof and TA, sufficient proof would be that I could replicate famous mid-century furniture designs. In their eyes, these designs by Knoll, Herman Miller and Eames were inimitable. 

So I did it, one hundred times. I’ve included my favourites here; I wonder how many you can recognize.



Challenge Three: Output


So, all my research was awesome. I was definitely drawing some interesting conclusions, different from the dominant idea that the best way to prompt is in plain English. What I was lacking at this point, however, was what I should do with this information. What to design? It’s always such a big question. I had a lot on my mind, especially because I was the appointed director for my major’s graduation show.

My first idea was to write a book and design a really cool primer that would teach how to component prompt. My prof appreciated my academic inclinations but was adamant we needed to design either a product or a service. After all, I am still an industrial designer - no matter how much I revere prose. As my prof put it, I was simply in the midst of the process of designing the AI design process.

Idea 2 was to design a service based around the classroom and education. A verbal manifestation instead of a written one, in the vein of idea one.

Idea 3 was foggy, but I was also thinking I could design some kind of app involving word association.


Idea Two: The Studio



For “The Studio”, I mirrored my research by teaching my subjects how to prompt. To iterate on this more clearly, I isolated the key steps in my teaching process and sketched out a storyboard.







I liked this idea but, ever the eternal introvert, I didn’t particularly resonate with it. I felt like I could do something more transformative with the research I had developed. Not everyone is into conventional teaching methods, and something I noticed was that my subjects were far more responsive to learning how to prompt when they weren’t always nudged in the right direction.

I think independent thinking is a mainstay of just being a creative person. What if I could redesign the platforms which we use AI in?

Cool 2025 note: My alma mater now teaches all kinds of AI courses, including one that teaches students how to prompt effectively. I wonder where that idea came from? 


Idea Three: Chemistry


Prompting as a whole feels like mixing a bunch of things together and studying the reaction from whatever AI you are using. It reminded me a lot of the scientific process. Inspired by the lack of design in AI environments, and by my frustration in rigging up newer AI models with Python (and the blatant lack of design there too), I moved on to a UI solution.

Here’s some initial wireframes for you to peep.


Fleshing it Out



Although I use Figma for prototyping now, I used Adobe XD for this project. 



The premise here is that the user can set priority on certain bonds between words for the AI to put more emphasis on. The Cumulative section above shows how frequently you use certain words, based on their size in the word cloud.
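Under the hood, those priority bonds would eventually have to be serialized into something a model can consume. Here is one hedged sketch: the `(word:weight)` syntax is borrowed from the prompt-weighting conventions some later image generators adopted - Dall-E 2 had no such feature - and the `serialize` function is purely my illustration of the concept.

```python
# A sketch of serializing "priority bonds" into a weighted prompt.
# The (word:weight) syntax mimics conventions used by some image
# generators; it is illustrative, not a Dall-E 2 feature.
def serialize(tokens, bonds):
    """bonds maps a (word_a, word_b) pair to a priority weight."""
    out = []
    for word in tokens:
        # A word inherits the strongest weight of any bond it belongs to.
        weight = max((w for pair, w in bonds.items() if word in pair),
                     default=1.0)
        out.append(f"({word}:{weight})" if weight != 1.0 else word)
    return " ".join(out)

print(serialize(
    ["ornate", "gothic", "lounge", "chair", "Eames"],
    {("gothic", "chair"): 1.4},
))
# → ornate (gothic:1.4) lounge (chair:1.4) Eames
```

The point of the UI is that the user never sees this string - they just drag a bond between two words and the emphasis follows.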




You can create collections of your liked prompts, and see how they intersect together. This helps the user reflect on prompting further.
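The Cumulative word cloud behind these collections amounts to a frequency count across your liked prompts, with larger counts rendered as larger words. A minimal sketch, with illustrative sample prompts:

```python
# A sketch of the Cumulative word cloud: count how often each word
# appears across a collection of liked prompts. Prompts are made up.
from collections import Counter

liked_prompts = [
    "ornate gothic lounge chair Eames",
    "gothic cathedral bookshelf walnut",
    "Eames lounge chair landscape of hell",
]

counts = Counter(word.lower()
                 for p in liked_prompts
                 for word in p.split())
# Bigger counts would render as bigger words in the cloud,
# showing the user where their prompts intersect.
print(counts.most_common(3))
```

Seeing that “gothic” or “Eames” dominates a collection is exactly the kind of reflection on prompting habits the screen is meant to prompt.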




This last screen is for the dev-signers out there. I did some wrangling with various AI tools and Python during this project, and thought it would be nice to have something where you can directly edit your own image generator, with your own code.




Lil’ style guide. I drew all the icons myself!


Chemist - Demo






Epilogue


I was extremely surprised and thankful to have won my medal for this project, especially because of all the pushback I received during the process.

When I originally finished this project, I was quite hopeful about the future of design and AI’s implications. As time goes on, however, I am still unsure about the place this kind of tech has in our industry. I am inclined to hope that AI will push forward a return to traditional practices. Drawing, cutting out text by hand, and printmaking done by people will become extra-premium.

As a fan of the original arts and crafts movement, I look forward to this.