DALL-E Can Now Turn Water into Wine

DALL-E, the image generation model by OpenAI, now enables users to modify specific elements within images.

Today’s Equation

OpenAI has launched a new version of the DALL-E editor, the AI image generator included in ChatGPT’s paid tiers, based on the advanced DALL-E 3 model.

The updated editor introduces an inpainting feature that allows users to edit specific areas of an image by selecting the region and describing the desired changes.

This new feature gives users more control over their creative vision, enabling them to refine generated images with precision.

But how does this compare to Midjourney’s “Vary Region” feature?

Let’s see if it all adds up.

  • What is DALL-E Inpainting
  • So, What? AI-Generated Images Edited by AI
  • Now, What’s in it for u⁺

DALL-E Inpainting

Inpainting allows users to erase or edit portions of an image, and DALL-E will fill in the blanks or make alterations based on a natural language description while maintaining the context of the original image.

This is different from “outpainting,” where DALL-E extends an image beyond its original borders to create a larger composition.

Inpainting takes into account the image’s existing visual elements, such as shadows, reflections, and textures, to maintain consistency when making edits.

Users can erase parts of their images and have DALL-E fill in the erased areas with surprising elements or place the subject in a new environment.

Just take a look at how Reddit user u/m703324 asked DALL-E to turn this glass of water into a glass of wine.

The original poster even dubbed it miraculous, a nod to the story of Jesus Christ turning water into wine.
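The contract behind inpainting can be sketched in a few lines: pixels outside the selected region are preserved exactly, and only the masked area is regenerated based on your prompt. Here is a toy illustration of that idea (the `generate` function is a trivial stand-in for the model, which in reality conditions its fill on the surrounding context):

```python
def inpaint(image, mask, generate):
    """Toy inpainting contract.

    image: 2D list of pixel values
    mask:  2D list of booleans (True = regenerate this pixel)
    generate: stand-in for the model's fill function
    Pixels where mask is False are kept byte-for-byte.
    """
    return [
        [generate(x, y) if mask[y][x] else image[y][x]
         for x in range(len(image[0]))]
        for y in range(len(image))
    ]

# A 3x3 "glass of water" with a masked column down the middle:
water = [["glass"] * 3 for _ in range(3)]
mask = [
    [False, True, False],
    [False, True, False],
    [False, False, False],
]
result = inpaint(water, mask, lambda x, y: "wine")
# Only the masked pixels change; everything else is untouched.
```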

That said, OpenAI isn’t the first to offer this kind of editing: Midjourney has been a pioneer in editing AI-generated images.

In August 2023, Midjourney introduced its “Vary Region” feature.

This feature allows users to select and regenerate specific parts of an upscaled image. Variations are guided by the content in the original image and the area selected by the user.

For instance, in the example below, users can carefully select a specific region of the image and alter the results.

By simply selecting the region using the freeform or rectangle tool, you can input a new prompt and regenerate just that area, keeping the rest of the image the same.

With these editing features emerging in the market, what does it mean for the users?

AI-Generated Images Edited by AI

So, how does it work?

Log in to ChatGPT (the DALL-E editor is available on the paid tiers) and input your prompt. In this case, my prompt is “A dog with a party hat licking an ice cream on a sunny day.”

Wait for the image to generate.

Then, using the paintbrush tool in the upper-right corner, select the region you want to edit.

Here, I’m selecting the ice cream.

Once you’ve selected the area you want to edit, prompt ChatGPT with the changes you want to implement. For this example, I wanted the ice cream to be strawberry-flavored.

Wait for the image to generate.

And, don’t forget to thank ChatGPT!

I’m really happy with how it turned out, so I’ll stop here.

BUT, if you feel like it needs more revisions, just repeat the process.
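If you’d rather reproduce this workflow programmatically, the same idea — an RGBA mask whose fully transparent pixels mark the region to regenerate — is what OpenAI’s Images edit endpoint expects (at the time of writing, that API route targets DALL-E 2 rather than DALL-E 3). Below is a minimal, stdlib-only sketch that builds such a mask as a valid PNG; the SDK call in the trailing comment is illustrative, with placeholder file names and coordinates:

```python
import struct
import zlib

def _chunk(tag, data):
    """One PNG chunk: 4-byte length, tag, data, CRC over tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def transparent_mask_png(width, height, box):
    """RGBA PNG that is opaque white everywhere except `box`
    (x0, y0, x1, y1), which is fully transparent — the convention
    the Images edit endpoint uses to mark the area to regenerate."""
    x0, y0, x1, y1 = box
    raw = bytearray()
    for y in range(height):
        raw.append(0)  # filter type 0 (None) for this scanline
        for x in range(width):
            alpha = 0 if (x0 <= x < x1 and y0 <= y < y1) else 255
            raw += bytes((255, 255, 255, alpha))
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 6, 0, 0, 0)
    return (b"\x89PNG\r\n\x1a\n"
            + _chunk(b"IHDR", ihdr)
            + _chunk(b"IDAT", zlib.compress(bytes(raw)))
            + _chunk(b"IEND", b""))

# Illustrative (untested) use with the OpenAI Python SDK — file name,
# coordinates, and prompt are placeholders:
# client.images.edit(
#     model="dall-e-2",
#     image=open("dog.png", "rb"),
#     mask=io.BytesIO(transparent_mask_png(1024, 1024, (400, 300, 700, 600))),
#     prompt="a dog with a party hat licking a strawberry ice cream",
# )
```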

What’s in it for u⁺

As these technologies emerge, I believe that these three key factors will further influence AI and human collaboration:

Refined Creative Control

With the ability to edit specific areas of an AI-generated image, you now have more precise control over the final output. This will allow you to refine the image according to your creative vision.

For example, you could generate an image of a living room, then select the artwork on the wall and replace it with a different painting that better suits your preferences.

Iterative Design Process

The editing feature enables you to treat DALL-E generated images as a starting point rather than a final product. Now, you can quickly generate a base image, then make targeted edits to improve or customize it.

This iterative approach can streamline the design process for tasks such as creating marketing visuals, product mockups, or concept art. With this update, you can now rapidly prototype and refine ideas without starting from scratch each time.

Enhanced Photorealism

DALL-E’s inpainting capability takes into account the existing visual elements of an image, such as shadows, reflections, and textures, when making edits.

This means that you can make localized changes that blend seamlessly with the original image, maintaining a high degree of realism. For instance, you could add objects to a generated photo and the AI will automatically adjust the lighting and shadows to make the new elements look natural.

The image editing features in DALL-E empower users with greater creative control, enable a more efficient iterative design workflow, and allow for realistic modifications to generated images.

These capabilities unlock new possibilities for applications like graphic design, content creation, and visual prototyping.

In partnership with

Top-Up Your Toolbox

Sheetgo • Turn your Google Sheets into forms.

Intellectia • Invest smarter with AI-driven insights.

Maven • Online course that will help you build with AI.

Namebase • Name your startup, secure a domain, and brand it!

Salina • Create better and learn faster with customized AI agents.

Do you want to stay current with the latest AI news?
At a.i. + u, we deliver fresh, engaging, and digestible AI updates.

Stay tuned for more exciting developments!
Let’s see what stories we can bring to life next.

See you next addition!