NVIDIA has a generative art system that uses AI to turn words into visually spectacular artwork. This isn't the first time a concept like this has been shown off, or even built. It is, however, the first time we've seen such a system work with this kind of speed and precision.
You can head over to OpenAI to see a project called DALL·E, a system that produces images from text and is built on GPT-3; the research behind it is hosted by Cornell University's arXiv. You can start making wild style interpretations with Deep Dream Generator, or read up on the foundations of the NVIDIA Research project we're looking at today by learning about the generative adversarial network (GAN) behind GauGAN.
The NVIDIA GauGAN2 project builds on what company researchers created with NVIDIA Canvas. That application, currently in beta, is powered by the first GauGAN model. With that AI, anyone can produce reasonably realistic-looking artwork from inputs that are little more than finger paintings: rough blobs of color that the model reads as a map of sky, water, mountains, and so on.
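To make the "finger painting" idea concrete, here is a minimal sketch, assuming a GauGAN-style workflow, of what such an input really is under the hood: a 2-D map of semantic class labels rather than a finished drawing. The class IDs and the generate_image() call are purely hypothetical; only the label-map shape of the input reflects how these models are conditioned.

```python
# Illustrative sketch only: how a "finger painting" maps to the kind of
# semantic segmentation map a GauGAN-style model consumes. The class IDs
# and the generate_image() call are hypothetical, not NVIDIA's actual API.
import numpy as np

# Hypothetical label IDs for a few landscape classes.
SKY, MOUNTAIN, WATER = 0, 1, 2

# A 256x256 label map: top half painted "sky", bottom half "water",
# with a rough triangle of "mountain" across the horizon -- crude,
# finger-paint-level input.
label_map = np.full((256, 256), WATER, dtype=np.uint8)
label_map[:128, :] = SKY
for row in range(96, 160):
    width = (row - 96) * 2
    label_map[row, 128 - width // 2 : 128 + width // 2] = MOUNTAIN

# A conditional generator would then turn this label map into a
# photorealistic landscape, e.g.:
# image = generate_image(label_map)  # hypothetical call
```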
With GauGAN2, NVIDIA researchers expand what's possible with simple input and AI interpretation of that input. The model was trained on roughly 10 million high-quality landscape images, which serve as its visual knowledge bank, and it draws on that bank to decide what your words should look like as artwork.
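As a rough illustration of the "words in, conditioning out" step, the toy sketch below tokenizes a prompt and pools it into a single embedding vector a generator could consume. The tiny vocabulary, the mean-pooling, and the 512-dimension size are assumptions standing in for whatever text encoder the real model uses.

```python
# Toy sketch of turning a text prompt into one conditioning vector.
# The vocabulary and pooling here are illustrative stand-ins, not the
# actual GauGAN2 text encoder.
import torch
import torch.nn as nn

vocab = {"<unk>": 0, "ocean": 1, "waves": 2, "hitting": 3,
         "rocks": 4, "on": 5, "the": 6, "beach": 7}
embedding = nn.Embedding(num_embeddings=len(vocab), embedding_dim=512)

def encode_prompt(prompt: str) -> torch.Tensor:
    """Map a text prompt to a single 512-dim conditioning vector."""
    ids = torch.tensor([vocab.get(word, 0) for word in prompt.lower().split()])
    return embedding(ids).mean(dim=0, keepdim=True)  # shape: (1, 512)

text_emb = encode_prompt("ocean waves hitting rocks on the beach")
print(text_emb.shape)  # torch.Size([1, 512])
```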
A single framework in GauGAN2 covers several modalities; NVIDIA points to text, semantic segmentation, sketch, and style. Below you'll see a demonstration of the new text input element in an interface that is essentially an extension of NVIDIA Canvas.
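The sketch below illustrates, in PyTorch, the general idea of one generator conditioned on all of those modalities at once: a segmentation map, a sketch, a text embedding (like the one produced above), and a style code. The MultiModalGenerator class, its layer sizes, and the simple concatenate-and-decode design are illustrative assumptions, not the actual GauGAN2 architecture.

```python
# Minimal sketch of a single generator conditioned on several modalities.
# All layer choices here are illustrative, not NVIDIA's real model.
import torch
import torch.nn as nn

class MultiModalGenerator(nn.Module):
    def __init__(self, text_dim=512, style_dim=256, num_classes=32):
        super().__init__()
        # One conv stem per spatial modality: segmentation map and sketch.
        self.seg_enc = nn.Conv2d(num_classes, 64, kernel_size=3, padding=1)
        self.sketch_enc = nn.Conv2d(1, 64, kernel_size=3, padding=1)
        # Project the text embedding and style code to per-pixel features.
        self.text_proj = nn.Linear(text_dim, 64)
        self.style_proj = nn.Linear(style_dim, 64)
        # A toy "decoder" standing in for the real GAN generator.
        self.decoder = nn.Sequential(
            nn.Conv2d(64 * 4, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, seg_map, sketch, text_emb, style_code):
        b, _, h, w = seg_map.shape
        feats = [
            self.seg_enc(seg_map),
            self.sketch_enc(sketch),
            # Broadcast the 1-D text and style vectors across every pixel.
            self.text_proj(text_emb).view(b, 64, 1, 1).expand(b, 64, h, w),
            self.style_proj(style_code).view(b, 64, 1, 1).expand(b, 64, h, w),
        ]
        return self.decoder(torch.cat(feats, dim=1))

# Example forward pass with dummy inputs (batch of 1, 256x256 canvas).
gen = MultiModalGenerator()
image = gen(
    torch.zeros(1, 32, 256, 256),   # one-hot segmentation map
    torch.zeros(1, 1, 256, 256),    # sketch strokes
    torch.zeros(1, 512),            # text embedding
    torch.zeros(1, 256),            # style code
)
print(image.shape)  # torch.Size([1, 3, 256, 256])
```

Concatenating per-modality features is just the simplest way to show the idea; a missing modality can be fed in as zeros, which loosely mirrors the mix-and-match flexibility NVIDIA describes for these inputs.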
The demonstration itself is far less important than what it represents. Smartphones can already remove elements from a photo almost miraculously. If you use a system like Google Photos, AI is already part of your life, getting smarter as you feed it more images captured with your phone.
The next wave is here with NVIDIA's demonstration, which shows us a machine that not only knows how to identify elements in photos, it knows how to produce an image based on what it has learned from the pictures it has been fed. NVIDIA has a model here that effectively shows us that graphics processing power and the right code can produce a convincing representation of what we recognize as reality.