• @Eccitaze
    37 months ago

    🙄 And right on cue, here comes the techbros with the same exact arguments I’ve heard dozens of times…

    The problem with “AI as a tool” theory is that it abstracts away so much of the work of creating something, that what little meaning the AI “author” puts into the work is drowned out by the AI itself. The author puts in a sentence, maybe a few words, and magically gets multiple paragraphs, or an image that would take them hours to make on their own (assuming they had the skill). Even if they spend hours learning how to “engineer” a prompt, the effort they put in to generate a result that’s similar (but still inferior) to what actual artists can make is infinitesimal–a matter of a few days at most, versus the multiple years an artist will spend, along with the literal thousands of practice drawings an artist will create to improve their skill.

    The entire point of LLMs and generative AI is to reduce the work needed to get a result to almost nothing; if using AI required as much effort as creating something yourself, nobody would ever bother using it and we wouldn’t be having this discussion in the first place. But the drawback of reducing the effort you put in is that you reduce the amount of control you have over the result. So-called “AI artists” have no ability to control the output of an image at the level of the brush or stroke style; they can only control the result of their “work” at the macro level. In much the same way that Steve Jobs claimed credit for creating the iPhone when it was really the hundreds of talented engineers working at Apple who did the work, AI “artists” claim the credit for something they had no hand in creating beyond vague directions.

    This also creates a conundrum where there’s little-to-no ability to demonstrate skill in AI art–to an external viewer, there’s very little real difference between the quality of a one-sentence image prompt and one fine-tuned over several hours. The only “skill” in creating AI art is being able to cajole the LLM into creating something that more closely matches what you were thinking of, and it’s impossible for a neutral observer to discern the difference between the creator’s vision and the actual result, because that would require reading the creator’s mind. And since AI “artists,” by the nature of how AI art works, have precious little control over how something is composed, AI “art” has no rules or conventions–and this means that one cannot choose to deliberately break those rules or conventions to make a statement and add more meaning to one’s work. Even photographers, the favorite poster-child of AI techbros grasping at straws to defend the pink slime they call “art,” can play with things like focus, shutter speed, exposure length, color saturation, and overall photo composition to alter an image and add meaning to an otherwise ordinary shot.

    And all of the above assumes the best-case scenario of someone who actually takes the time to fine-tune the AI’s output, fix all the glaring errors and melting hands, and correct the hallucinations and falsehoods. In practice, 99% of what an AI creates goes straight onto the Internet without any editing or oversight, because the intent behind the human telling the AI to create something isn’t to contribute something meaningful; it’s to make money by farming clicks and follows for ad dollars, driving traffic from Google search results using SEO, and scamming gullible people.

    • @[email protected]
      17 months ago

      Always good to start an argument with name calling. Did you actually want a discussion or did you want to post your opinion and pretend it’s the only true one in a topic as subjective as the definition of art?

      Just because someone isn’t able to discern whether more effort was put into something doesn’t mean the person who made it lacks skill. A lot of people would not be able to tell some modern art from a child’s art; that doesn’t invalidate the artist’s skill. There are some people, though, who can look at the modern painting and recognize the different decisions the artist made, just as someone can tell whether effort and meaning were put into a piece of art generated by AI. There are always the obvious deformities and errors it may spit out, but there are also other tendencies the AI has that most people aren’t aware of, which someone who has studied it can recognize, along with the way the author may or may not have handled them.

      Back to the camera argument: all those options and choices you listed (saturation, focus, shutter speed, and so on) can also be put into an AI prompt to get the desired effect. Do those choices have any less meaning because someone entered a word instead of turning a dial on a camera? Does a prompt maker convey less of a feeling of isolation in the subject by entering “shallow depth of field” than a photographer who configured their camera to do so? Could someone looking at the piece not recognize that choice and realize the meaning that was conveyed?

      As for your final argument about it being used to farm clicks and ad dollars, that’s just art under capitalism. Most art these days is made for the same purposes of advertising and marketing, because artists need to eat, and corporations looking to sell products are one of the few places willing to pay them for their work. If anything, AI art allows more messages that are less friendly to consumerism to flourish, since it’s less constrained by the cash needed to pay a highly technically trained artist.

      It’s obviously horrible in the near term for artists, but in the long term, if they realize their position, they may come out ahead. AI needs human-originated art to keep going or it may devolve into a self-referential mess. If artists recognize their common interests and are able to control the use of their art, they may be able to regain an economic position and sell it to the AI training companies. The companies would probably want more differentiated art to freshen the collapsing median that AI tends towards, so artists get to make the avant-garde art while the ad drivel and click farming is left to the AI. This would require a lot of government regulation, but it’s the only positive outcome I can see from this, so anything on that path, like the watermarking, would be good.

      • @[email protected]
        17 months ago

        It’s obviously horrible in the near term for artists, but in the long term, if they realize their position, they may come out ahead. AI needs human-originated art to keep going or it may devolve into a self-referential mess. If artists recognize their common interests and are able to control the use of their art, they may be able to regain an economic position and sell it to the AI training companies. The companies would probably want more differentiated art to freshen the collapsing median that AI tends towards, so artists get to make the avant-garde art while the ad drivel and click farming is left to the AI. This would require a lot of government regulation, but it’s the only positive outcome I can see from this, so anything on that path, like the watermarking, would be good.

        Are you seriously suggesting artists should pivot to selling their work to the LLM masters?

        The same masters that stole everything, paid for nothing, and decimated their industry?