Will Deepfakes Be Used To “Show” That Computers Can Now Think?

On Friday at Evolution News, design theorist William Dembski offered some thoughts on the growing problem of deepfakes. He acknowledged something we don’t often hear: there are harmless, even constructive uses for deepfakes, such as bringing historical figures to life in educational settings.

But he also raised concerns that aren’t often heard:

Whether AI in time develops so powerfully that it eventually becomes artificial general intelligence is another story (though if my previous claims hold true, that prospect is a pipe dream). But what if AI is instead used to make it appear that AGI has been or is being achieved? That would be like fabricating research findings, only for it to come out later that the results were faked and the papers touting them had to be retracted. We have witnessed this kind of thing in academia (see the J. Hendrik Schön case, discussed in my article “Academic Steroids: Plagiarism and Data Falsification as Performance Enhancers in Higher Education”).

William A. Dembski, “Deepfakes and the Propaganda of Artificial General Intelligence,” Evolution News, February 9, 2024

Those who are absolutely convinced that the rise of artificial general intelligence is inevitable may not see themselves as deceiving anyone if they merely anticipate it with convincing deepfakes.

But deepfakes create a whole new layer of complexity. For example, if we have never met Dembski, how can we know that he even exists? He, his presumed background, and his views could all be an elaborate deepfake. Late last year, Sports Illustrated was caught publishing AI-generated articles under completely fake author photos and bios.

The future of Sports Illustrated itself is now in doubt. But that is not the future of deepfakes.

The Taylor Swift amplification effect

The whole topic gained even more attention after pornographic deepfake images of famed singer-songwriter Taylor Swift surfaced on social media late last month.

To illustrate how complicated all of this can get, in a 2021 Pennsylvania case a woman was arrested and tried for allegedly creating an invasive deepfake to discredit one of her daughter’s cheerleading rivals. But, unexpectedly, experts testified that the video was not a deepfake at all, though some 6,000 media outlets had reported the story before it all collapsed. Apparently, “several deepfake experts say the vaping video is so complex and nuanced that even Silicon Valley’s most advanced AI models could not have created it, much less any tool available to a mother in suburban Pennsylvania” (Gizmodo).

It’s reassuring to know that experts could still tell the difference between deepfakes and reality. At least, back then.

Tech magazine Gizmodo notes that deepfake technology is evolving very rapidly and that most service providers are not really ready for what’s to come:

OpenAI this week introduced watermarks to DALL-E images, both visually and embedded in the photos’ metadata. However, the company also acknowledged that this can easily be avoided by taking a screenshot. It felt less like a solution and more like the company saying, “Well, at least we tried!” …

These solutions alone are not enough. The problem is that deepfake detection technology is new and hasn’t caught on as quickly as generative AI. Platforms like Meta, X, and even telcos should embrace deepfake detection. These companies are making headlines for all their new AI capabilities, but what about their AI detection capabilities?

Maxwell Zeff, “The AI deepfake problem is only getting worse, and it can’t be stopped,” Gizmodo, February 9, 2024
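
A side note on the screenshot loophole Zeff mentions: watermarks that live in a file’s metadata travel with the file, not with the pixels, so anything that re-encodes only the pixels (a screenshot, a crop, a re-save) silently drops them. Here is a minimal sketch of that idea using Pillow; it is an illustrative assumption only, inspecting whatever generic metadata Pillow exposes rather than OpenAI’s actual C2PA manifest, and the file names are hypothetical.

```python
# Toy illustration: metadata-based provenance marks do not survive a
# pixels-only re-encode (roughly what a screenshot does).
# Assumption: Pillow is installed; this is NOT OpenAI's C2PA tooling.
from PIL import Image

def has_any_metadata(path: str) -> bool:
    """Return True if Pillow sees any metadata fields on the image file."""
    with Image.open(path) as img:
        return bool(img.info)  # text chunks / EXIF-style fields, if present

def reencode_pixels_only(src: str, dst: str) -> None:
    """Copy only the pixel data to a new file, leaving all metadata behind."""
    with Image.open(src) as img:
        bare = Image.new(img.mode, img.size)
        bare.putdata(list(img.getdata()))
        bare.save(dst)  # written with no metadata attached

# Hypothetical usage:
# has_any_metadata("dalle_output.png")                # likely True
# reencode_pixels_only("dalle_output.png", "copy.png")
# has_any_metadata("copy.png")                        # False: the mark is gone
```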

Perhaps that is what multi-million-dollar lawsuits will be for. But who wants to be the plaintiff? Something like data poisoning may be needed. Stay tuned.

You may also wish to read: Artists fight back! New tool uses AI to “poison” pirated images. Developed by Ben Zhao, a computer science professor at the University of Chicago, Nightshade can make an AI generator keep returning cats when it is asked for a dog. The poison is embedded in the pixels, invisible to humans and detectable only by the AI. All in all, Nightshade may prove more beneficial to artists than litigation.
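
For readers wondering what “embedded in the pixels and invisible to humans” can mean in practice, here is a toy sketch using Pillow and NumPy (both assumptions; the article names neither). It only shows that pixel values can be nudged without a viewer noticing; it is emphatically not Nightshade’s actual poisoning method, which carefully optimizes the perturbation to shift a model’s concept associations rather than adding random noise.

```python
# Toy sketch of an imperceptible pixel perturbation -- NOT Nightshade itself.
import numpy as np
from PIL import Image

def nudge_pixels(src: str, dst: str, strength: int = 2) -> None:
    """Add tiny +/- `strength` noise to every channel of every pixel."""
    img = np.asarray(Image.open(src).convert("RGB"), dtype=np.int16)
    rng = np.random.default_rng(0)
    noise = rng.integers(-strength, strength + 1, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(dst)

# Hypothetical usage: the two files look identical to a human viewer,
# yet their pixel values differ almost everywhere.
# nudge_pixels("artwork.png", "artwork_nudged.png")
```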