How an artist uses AI to create decidedly offline art

In recent issues, I’ve looked at why artists and creatives are pushing back against generative AI. In short, they worry that their work will be used to train these models without their consent, that their work will be devalued, and that there will be fewer opportunities for human creators in a world where AI can be faster and cheaper. But even the more than 200 prominent music artists who signed an open letter last week raising similar concerns stressed that they believe that “when used responsibly, AI has enormous potential to enhance creativity.”

We are still very much at the beginning of these issues, and of generative AI in general, and there are no easy answers yet for how to balance it all. But as it all unfolds, some artists are experimenting with how they can use generative AI as a tool to enhance, not hinder, their creative work.

One such artist is Sidney Swisher, an Illinois-based painter and photographer. I’ve been following Swisher for a while. Her unique works involve painting on fabric, immersing a scene (a whimsical window sill, a dresser adorned with jewelry and photographs, a dreamy bedroom) into the pattern, usually floral, of a piece of fabric, merging the two into one. The results are always beautiful, textured, and often huge, and everything about them feels wonderfully offline. So I was shocked to learn that between sourcing old fabrics and brushing on paint, Swisher uses generative AI to help bring her visions to life.

After choosing a fabric to serve as the basis for a painting, Swisher selects photos she’s taken to use as reference images, brings them into Midjourney, and adds quick captions describing a memory the fabric seems to evoke. She then refines the prompts, adds more reference images, and plays with how the images and the fabric interact. After choosing one of the images Midjourney creates, she edits it further in Photoshop to bring it closer to the specific memory she’s channeling, producing the final reference image she will paint from.

“I discovered DALL-E and was blown away by its ability to create the scene I had in mind. I found Midjourney around April of last year and was blown away by the quality of the images and the ability to edit specific details,” she told me, adding that the quality of Midjourney’s images won her over and that she also likes being able to feed it reference images herself, including old film stills from her childhood.

Much of her art, Swisher said in a TikTok video describing her process, is rooted in the way machine intelligence visually interprets descriptions of memories.

“I want my work to be a reflection of memory. In my mind, I have very vague visual references to places that evoke strong nostalgia and emotion,” she said. “I’m interested in exploring the attempt to fully go back to a moment in time.”

She continued: “When you focus on trying to pin down every detail of a memory, pieces will always be blurry and you start to doubt yourself. Which parts are correct? Did I dream this, or did it really happen? The first time I saw a generated image from DALL-E, I felt it captured that feeling perfectly. To me, it really looked like a picture from a memory or a dream.”

In another example of creatives trying to play nice with generative AI, filmmaker Paul Trillo talked on the Hard Fork podcast about his experience previewing Sora, OpenAI’s new text-to-video model that recently sent both Hollywood executives and much of the general public into a panic. Trillo’s account is one of the first hands-on reviews of Sora, and he describes how he tried to test what the model could do and discovered that it has its own sense of editing and pacing.

“I was shocked. I was blown away. I was confused. I was a little disturbed because, hell, it’s doing things that I didn’t know it was capable of,” he said when asked about his initial emotional reaction the first time he entered a prompt and got a video back.

Trillo said the video he created in days with Sora would have taken months with traditional tools, and that the model “can give us some experimental wild, bold weird things that might be hard to achieve with other tools.” Still, he thinks the model is just a supplement to human video production. Like Swisher, he sees it as a new part of the process he can use to achieve his vision.

“I’m just going to focus on this tool to get everything out of my head,” Trillo said.

And with that, here’s more AI news.

Sage Lazzaro
[email protected]
sagellazzaro.com

Today’s edition of Eye on AI was curated by Sharon Goldman.

AI IN THE NEWS

Amazon CEO Andy Jassy tells shareholders he’s “optimistic” about generative AI. Amazon may have a reputation as a laggard in the race to be the generative AI leader, but in his annual shareholder letter published Thursday, CEO Andy Jassy said he is “optimistic that much of this world-changing AI will be built on AWS.” CNBC noted that under Jassy, “Amazon has become a leaner version of itself as slowing sales and a challenging economy have caused the company to move away from the relentless growth of the Bezos years.” Still, Jassy said he believes generative AI will become Amazon’s next “pillar,” after retail, Prime, and cloud computing.

Intel reveals details of new AI chip in battle with Nvidia. Nvidia holds more than 80% of the market for AI chips, but that’s not stopping Intel from making a move to challenge its dominance. On Tuesday, at its Vision event, Intel shared details about its latest Gaudi 3 chip, which Reuters reported is “capable of training specific large language models 50% faster than Nvidia’s previous generation H100 processor.” Intel also said the chip can compute generative AI outputs, known as “inference,” faster than Nvidia’s H100s. Of course, given that Nvidia CEO Jensen Huang recently announced a new “big, big GPU” called Blackwell at the company’s GTC conference, the race is certainly still on.

Adobe is reportedly buying videos for $3 a minute to build a new AI model. OpenAI’s text-to-video model, Sora, which is currently only available as a demo, has made headlines over the past two months and made clear that other tech companies may find it hard to catch up. Even Google DeepMind CEO Demis Hassabis has reportedly acknowledged that it might be hard to match Sora. But according to Bloomberg, Adobe is paying its network of photographers and artists “to submit videos of people engaged in everyday actions such as walking or expressing emotions, including joy and anger,” in order to source assets for training its own AI model. The report says the pay averages about $2.62 per minute of submitted video, although it can go as high as $7.25 per minute.

Arm’s CEO says AI’s “voracious” energy needs are unsustainable. According to the Wall Street Journal, Rene Haas, CEO of UK-based smartphone chip design company Arm, says AI models like OpenAI’s ChatGPT “are just voracious in terms of their thirst” for electricity. The comments came ahead of Tuesday’s announcement that the US and Japan are funding a $110 million program for joint AI research. “The more information they collect, the smarter they are, but the more information they collect to get smarter, the more energy they need,” he said. Without intervention to achieve greater efficiency, AI data centers could consume 20% to 25% of all US energy requirements by the end of the decade, he said. “Today it’s probably 4% or less.”

FORTUNE ON AI

London-based AI firm V7 is expanding from image data labeling to workplace automation – by Jeremy Kahn

AlphaSense, a Goldman Sachs-backed AI research startup valued at $2.5 billion, prepares for IPO as it surpasses $200 million in annual recurring revenue – by Jessica Matthews

Vinod Khosla is betting on a former Tesla Autopilot engineer who left to create tiny AI models that can reason – by Sharon Goldman

AI-driven rally ‘not a one-way street’ but tech earnings set to jump 18% this year as demand booms, says UBS Global Wealth Management – by Will Daniel

AI CALENDAR

April 15-16: Fortune Brainstorm AI London (Sign up here)

May 7-11: International Conference on Learning Representations (ICLR) in Vienna

May 21-23: Microsoft Build in Seattle

June 5: FedScoop’s FedTalks 2024 in Washington, DC

June 25-27: 2024 IEEE Conference on Artificial Intelligence in Singapore

July 15-17: Fortune Brainstorm Tech in Park City, Utah (Sign up here)

August 12-14: Ai4 2024 in Las Vegas

EYE ON AI NUMBERS

39%

That’s the percentage of CEOs who are scaling their generative AI efforts and moving from pilots to industrialization across multiple functions or business units, according to a new CEO survey from consulting firm KPMG. Another 29% remain focused on specific use cases in specific functions or teams, while 15% say they are expanding the adoption of generative AI tools across their workforce.

The survey of 100 CEOs of organizations with more than $500 million in revenue found that CEOs “see generative AI as fundamental to gaining competitive advantage and are working to rapidly deploy the technology in their enterprises in a responsible manner.” However, while 41% of CEOs plan to increase their investment in generative AI, another 56% expect their investments to remain unchanged.
