DALL-E Mini, the internet’s favorite AI meme machine: the image-generation app helps people understand how AI can distort reality

Hugging Face, a company that hosts open source artificial intelligence projects, has seen traffic surge to an AI image-generation tool called DALL-E Mini. The app, created over a year ago by an independent developer, generates nine images in response to any typed text prompt.

DALL-E Mini was inspired by a more powerful AI imaging tool called DALL-E, created by the artificial intelligence research group OpenAI. OpenAI has since built a new version of its text-to-image program: DALL-E 2 is a higher-resolution, lower-latency version of the original system, producing images that match the descriptions written by users. It also includes new features, such as editing an existing image.

Named after the surrealist artist Salvador Dalí and Pixar’s robot character WALL-E, the original DALL-E was launched last year. The program can create images in different artistic styles when guided by text input: it generates images of whatever you describe. Ask for an anatomically realistic heart, or a cartoon of a baby daikon radish in a tutu walking a dog, and it will do its best to create a matching image.

DALL-E is more powerful than DALL-E Mini, but it was not made publicly available for fear of abuse. The DALL-E 2 version is said to be more versatile, capable of rendering captions at higher resolution, and adds new capabilities. The original was a limited but compelling test of AI’s ability to visually represent concepts, whether a plain depiction of a mannequin in a flannel shirt, a giraffe shaped like a turtle, or an illustration of a radish walking a dog.

It has become common for breakthroughs in AI research to be quickly replicated elsewhere, often within months, and DALL-E was no exception. Boris Dayma, a machine learning consultant in Houston, Texas, says he was intrigued by the original DALL-E research paper. Although OpenAI did not release any code, he was able to build the first version of DALL-E Mini during a hackathon organized by Hugging Face and Google in July 2021.

The first version produced poor-quality images that were often hard to recognize, but Dayma has kept improving the model since then. Last week, he renamed his project Craiyon, after OpenAI asked him to change the name to avoid confusion with the original DALL-E project. The new site displays advertisements, and Dayma plans to release a premium version of his image generator.

DALL-E Mini’s AI model generates images using statistical patterns extracted from nearly 30 million captioned images, learning connections between words and pixels. Dayma assembled this training data from several public image datasets collected from the web, including one released by OpenAI.
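The core idea of extracting connections between words and pixels from captioned images can be illustrated with a toy sketch. This is nothing like the real model, which learns dense neural representations over millions of images; here the captions and the "visual features" (coarse dominant-color labels) are entirely hypothetical, and the "model" is just a co-occurrence count:

```python
from collections import Counter, defaultdict

# Hypothetical (caption, dominant color) pairs standing in for a real
# dataset of captioned images.
dataset = [
    ("a red apple on a table", "red"),
    ("a red fire truck", "red"),
    ("a green field of grass", "green"),
    ("a green leaf", "green"),
    ("a red rose in a vase", "red"),
]

# word -> Counter of visual features (colors) it appeared alongside
associations = defaultdict(Counter)
for caption, color in dataset:
    for word in caption.lower().split():
        associations[word][color] += 1

def most_likely_color(word):
    """Return the color most strongly associated with a word, if any."""
    counts = associations.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_color("truck"))  # → red
print(most_likely_color("grass"))  # → green
```

A real text-to-image model replaces these counts with learned embeddings and a generative network, but the principle is the same: statistical association between language and visual content, harvested from captioned data.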

[Screenshot: the Craiyon interface — “Generate an image from text. What would you like to see?” — with an example prompt of an avocado armchair flying in space.]

The system can make mistakes, in part because it doesn’t really understand how objects behave in the physical world. Short text prompts are often ambiguous, and AI models don’t grasp their meaning the way people do. Still, Dayma has been amazed by what people have coaxed out of his creation over the past few weeks. He says his most creative prompt was the Eiffel Tower on the Moon; now, he adds, people are trying far crazier things, and it works.

However, some of these prompts have taken DALL-E Mini in questionable directions. The system was not trained on explicit content and is designed to block certain keywords. Despite this, users have shared images from prompts depicting war crimes, school shootings, and the attack on the World Trade Center.

AI-assisted image manipulation, including so-called “deepfake” falsified photos of real people, has become a concern for AI researchers, lawmakers, and nonprofits fighting online harassment. Advances in machine learning could enable many useful applications for AI-generated images, but also malicious ones, such as spreading lies or hate.

OpenAI has granted access to DALL-E and DALL-E 2 only to select users, including artists and computer scientists who are required to follow strict rules; researchers can apply online for a preview of the system. OpenAI hopes to later make it available for use in third-party applications, an approach the company says will let it gradually expand the capabilities and boundaries of the technology.

Other companies are building their own image-generation tools at a rapid pace. In May, Google announced a model called Imagen, which it claims can create images of a quality comparable to DALL-E 2; last week it announced another model, Parti, which uses a different technical approach. Neither system is publicly accessible.

In May, Google unveiled its text-to-image AI, Imagen, claiming it outperforms OpenAI’s DALL-E 2. Google introduced Imagen as a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen combines the power of large transformer language models for understanding text with the power of diffusion models for generating high-resolution images. According to Google, Imagen is not suitable for general use at the moment; the company said it plans to develop a way to measure social and cultural biases in future work and to test future iterations against it.

Don Allen Stevenson III, one of the artists with access to OpenAI’s more powerful DALL-E 2, uses it to find ideas and speed up the creation of new artwork, including augmented reality content such as Snapchat filters that turn a person into a cartoon crab. “I feel like I’m learning a whole new way to be creative,” he says. The tool lets him take more risks with his ideas and experiment with more complex designs, because it supports rapid iteration.

Stevenson says he has run into the limits OpenAI programmed in to prevent the creation of certain content. Sometimes he forgets the guardrails are there, he says, and is reminded by warnings in the app that his access could be revoked. But he doesn’t see this as a constraint on his creativity, since DALL-E 2 is still a research project.

Clément Delangue of Hugging Face thinks it is a good thing that DALL-E Mini’s output is cruder than DALL-E 2’s, because the flaws make clear that the images aren’t real and were created by artificial intelligence. He argues this has allowed DALL-E Mini to teach people first-hand about new AI image-manipulation capabilities that have often been kept out of public view. “Machine learning is becoming the default new way to build technology, but there is this disconnect with companies that build these tools behind closed doors,” he says.

Some of this damage may become increasingly difficult to contain. Dayma, DALL-E Mini’s creator, concedes it is only a matter of time before widely available tools like his can create far more photorealistic images. But he thinks the AI-generated memes circulating in recent weeks may have helped prepare people for that possibility. You know this will happen, Dayma says; he hopes DALL-E Mini makes people realize that when they see an image, it isn’t necessarily true.

While DALL-E 2 can be tested by authorized partners subject to certain conditions, users are prohibited from uploading or creating images that could cause harm, including anything involving hate symbols, nudity, obscene gestures, or major geopolitical events and current affairs. They must also disclose the role of artificial intelligence in creating the images, and may not serve generated images to other people through an app or website.

The constant stream of DALL-E Mini content has also helped the company troubleshoot the system, with users reporting issues such as sexually explicit output or bias in the results. Trained on images from the web, the system may, for example, be more likely to show one gender than the other in certain roles, reflecting deep-rooted social biases. Asked to depict a “doctor”, DALL-E Mini shows figures that appear to be men; asked to draw a “nurse”, its images tend to show women.
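Probing for this kind of skew can be done systematically: generate a batch of images per role prompt and tally the apparent gender of the people depicted. The sketch below is a hypothetical audit harness, not any tool Hugging Face actually uses; the generator and classifier are stubs that merely reproduce the skew described above (a real audit would call the image model and use human annotators or a classifier):

```python
from collections import Counter

def generate_images(prompt, n=9):
    # Stub: a real call would invoke the image model nine times;
    # here we fake the bias the article describes.
    skew = {"a doctor": "man", "a nurse": "woman"}
    return [skew.get(prompt, "unknown")] * n

def apparent_gender(image):
    # Stub: in practice, a human annotator or trained classifier
    # would label the apparent gender of the person in the image.
    return image

def audit(prompts, n=9):
    """Return, per prompt, how often each apparent gender appears."""
    return {p: Counter(apparent_gender(img) for img in generate_images(p, n))
            for p in prompts}

report = audit(["a doctor", "a nurse"])
print(report)  # e.g. {'a doctor': Counter({'man': 9}), 'a nurse': Counter({'woman': 9})}
```

Comparing these counts across many role prompts gives a crude but quantifiable measure of the social bias absorbed from web training data.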

The influx of DALL-E Mini memes drove home the importance of developing tools that can detect or measure social biases in these new kinds of AI models, says Sasha Luccioni, a research scientist working on AI ethics at Hugging Face. “I definitely see ways it can be both harmful and beneficial,” she says.

And you?

Do you use image generators?

What do you think of the DALL-E Mini?

Do you see potential abuse?

What do you think of the idea that a viral image-generation app like DALL-E Mini is just harmless entertainment?

See also:

DALL-E 2: the AI image generator developed by OpenAI can produce a wide range of images from just a few words

OpenAI’s DALL-E AI image generator can now edit images, and researchers can sign up to test it

OpenAI introduces DALL-E, a GPT-3-like model that creates images from text for a wide range of concepts expressible in natural language
