Create a 7-string Guitar with Midjourney AI

BMFan30

SS.org Regular
Joined
Jan 28, 2021
Messages
1,309
Reaction score
970
I feel that these palpably grotesque caricatures are in some way emblematic of advanced technology's potential effect on humanity.

Or maybe it simply reveals a truth we don't want to see... :lol:
Are you saying that the AI is saying that in the future we will all be saying "DeetDeeDee" whilst trying to lick a penny out of our left shirt pocket?
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,322
Reaction score
13,380
Location
St. Johnsbury, VT USA
Oh, I totally forgot that I had changed my background/banner picture to an AI-generated metal band. It would have been timely to mention that at the start of this thread.

I find it interesting that images generated by AI all look frightening and unsettling, like everyone is made of raw meat or is the Elephant Man. I can see this getting better a few years from now, but I can also see people wanting AI images generated this way, because they all have that similar look about them that will forever pin them to this time.

Like how you look at that image and, without a word of description, you know that it's John Petrucci, but he looks almost exactly as he does in your recurring nightmares of him sneaking into your house at night to eat your pets, and nothing like he looks when he's up on stage playing Under a Glass Moon. AI, get out of my dreams; get into my car (and drive it for me)!
 

BlackMastodon

\m/ (゚Д゚) \m/
Contributor
Joined
Sep 26, 2010
Messages
7,636
Reaction score
3,629
Location
Windsor, ON
I'm gonna use this thread as a soapbox instead of making another one since we're on the topic, but forgive me if there's another thread on this in the Politics subforum or somewhere else.

I view AI art and deepfake videos the same way: incredibly dangerous once the technology becomes advanced enough to be indistinguishable from actual images or videos, and @narad, by all means, educate me if this isn't the case.

Stuff like the reports from a few years ago about an algorithm that could determine, with surprisingly good results, whether someone was homosexual just based on a mugshot would have terrifying impacts in places that still have barbaric laws against homosexuality. But this could also be extrapolated to attacking political opponents or activists by creating fake scandals. We already have a huge issue with misinformation spreading through memes and social media, but could you imagine if it took less than a minute to generate an image of <disliked political opposition> <performing lewd/illegal sex act> against <children/animals/anyone but a consenting partner>?

It was obvious like 10 years ago that lawmakers were way too under-educated and slow on the uptake when it comes to how technology and the Internet work, and technology has only pulled further ahead since, with things like machine learning advancing with little regard for ethical implications.

Shit is scary from the perspective of the boring dystopia we currently live in, not even considering the absolute hellscape it could turn into.
 
Last edited:

gabito

Stay at home musician
Joined
Aug 1, 2010
Messages
533
Reaction score
614
Location
Argentina
Present and future AI could, like other technologies, be a lot of things: dangerous, fun, useful, a waste of resources, legal, illegal, nightmarish, beautiful, etc.

But I think only one thing is certain about AI: there's no stopping it. In a few years the world will adapt, and nobody will remember these discussions.
 

Albake21

Ibanez Nerd
Joined
Jul 19, 2017
Messages
3,424
Reaction score
3,544
Location
Chicago, IL
I'm gonna use this thread as a soapbox instead of making another one since we're on the topic, but forgive me if there's another thread on this in the Politics subforum or somewhere else.

I view AI art and deepfake videos the same way: incredibly dangerous once the technology becomes advanced enough to be indistinguishable from actual images or videos, and @narad, by all means, educate me if this isn't the case.

Stuff like the reports from a few years ago about an algorithm that could determine, with surprisingly good results, whether someone was homosexual just based on a mugshot would have terrifying impacts in places that still have barbaric laws against homosexuality. But this could also be extrapolated to attacking political opponents or activists by creating fake scandals. We already have a huge issue with misinformation spreading through memes and social media, but could you imagine if it took less than a minute to generate an image of <disliked political opposition> <performing lewd/illegal sex act> against <children/animals/anyone but a consenting partner>?

It was obvious like 10 years ago that lawmakers were way too under-educated and slow on the uptake when it comes to how technology and the Internet work, and technology has only pulled further ahead since, with things like machine learning advancing with little regard for ethical implications.

Shit is scary from the perspective of the boring dystopia we currently live in, not even considering the absolute hellscape it could turn into.
I've been thinking about this a lot lately, and as much as it pains me to say this... I think this might be where blockchain technology finally plays a role in our society. As much as I absolutely hate all of the crypto/blockchain bullshit, it's really the only time I can actually see it being useful. Once AI image and video generation hits the point of being indistinguishable from the real thing, I truly believe this is where receipts to prove that something is real will come in.

It's also when I'll probably quit using the internet, move into a cabin in the woods, and live out life uninterrupted by anyone :lol:
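The "receipts" idea can be sketched in a few lines. This is a toy illustration only (all names and fields here are made up): the hard parts a real system needs, a tamper-evident public ledger and trusted capture devices, are assumed away, and the ledger is just a dict:

```python
import hashlib
import time

def make_receipt(content: bytes, creator: str) -> dict:
    """Record a fingerprint of a file at creation time.

    In a real system this record would be appended to a public,
    append-only ledger so it can't be backdated; here it's a dict.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "timestamp": time.time(),
    }

def verify(content: bytes, receipt: dict) -> bool:
    """True iff the file is byte-for-byte what was originally recorded."""
    return hashlib.sha256(content).hexdigest() == receipt["sha256"]

photo = b"\x89PNG...raw image bytes..."
receipt = make_receipt(photo, creator="camera-serial-1234")

print(verify(photo, receipt))                # True: unmodified file
print(verify(photo + b"tampered", receipt))  # False: any edit breaks the hash
```

Note what this does and doesn't buy you: it proves a file existed in this exact form at recording time, but says nothing about whether the recorded content was real to begin with; that part still depends on trusting whoever made the record.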
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,322
Reaction score
13,380
Location
St. Johnsbury, VT USA
I'm gonna use this thread as a soapbox instead of making another one since we're on the topic, but forgive me if there's another thread on this in the Politics subforum or somewhere else.

I view AI art and deepfake videos the same way: incredibly dangerous once the technology becomes advanced enough to be indistinguishable from actual images or videos, and @narad, by all means, educate me if this isn't the case.

Stuff like the reports from a few years ago about an algorithm that could determine, with surprisingly good results, whether someone was homosexual just based on a mugshot would have terrifying impacts in places that still have barbaric laws against homosexuality. But this could also be extrapolated to attacking political opponents or activists by creating fake scandals. We already have a huge issue with misinformation spreading through memes and social media, but could you imagine if it took less than a minute to generate an image of <disliked political opposition> <performing lewd/illegal sex act> against <children/animals/anyone but a consenting partner>?

It was obvious like 10 years ago that lawmakers were way too under-educated and slow on the uptake when it comes to how technology and the Internet work, and technology has only pulled further ahead since, with things like machine learning advancing with little regard for ethical implications.

Shit is scary from the perspective of the boring dystopia we currently live in, not even considering the absolute hellscape it could turn into.
Some new revolutionary tool is an existential threat to us? Kind of like the internet, nuclear weapons, communism, birth control, the automobile, machine guns, guerrilla warfare, Napoleon, the revolution, the musket, gunpowder, the New World, the barbarians, the Romans, the Greeks, the Egyptians, fire, the wheel, etc.?

Lawmakers are ignorant about AI? Kind of like COVID, WMDs in Iraq, the internet, Vietnam, communism, Napoleon, the revolution, etc.?

Yes, it's going to turn the world into a hellscape, just like everything else has, but there's nothing you nor I can do to stop it from happening; we can only prepare for it and try not to become obsolete. Eventually, though, we all become obsolete... AI might even determine who is and who isn't. But, for now, it looks like AI is capable of kicking our asses at board games, designing really nightmarish art, and maybe driving cars that sometimes run over homeless people, though less often than actual humans run over homeless people...
 

BlackMastodon

\m/ (゚Д゚) \m/
Contributor
Joined
Sep 26, 2010
Messages
7,636
Reaction score
3,629
Location
Windsor, ON
Some new revolutionary tool is an existential threat to us? Kind of like the internet, nuclear weapons, communism, birth control, the automobile, machine guns, guerrilla warfare, Napoleon, the revolution, the musket, gunpowder, the New World, the barbarians, the Romans, the Greeks, the Egyptians, fire, the wheel, etc.?
I see what you're saying, but a few of those are unquestionably dangerous in that they were designed to kill as many people as quickly as possible.

I guess it is part of the human cycle, but this is a weird new step in that it makes us question the reality of what we're seeing. I just saw a video making the rounds last week of a white guy doing a Morgan Freeman impression with a deepfake digital overlay, side by side, to make it look like Morgan Freeman was doing the talking, and I hate it. I feel like the practical and ethical benefits of that kind of technology are outweighed by the implications of how it can be used. Unless we do implement some sort of blockchain digital watermark like Albake is saying, one that says "nothing about this is real, this is bullshit made for lols, carry on."
 

bostjan

MicroMetal
Contributor
Joined
Dec 7, 2005
Messages
21,322
Reaction score
13,380
Location
St. Johnsbury, VT USA
I see what you're saying, but a few of those are unquestionably dangerous in that they were designed to kill as many people as quickly as possible.

I guess it is part of the human cycle, but this is a weird new step in that it makes us question the reality of what we're seeing. I just saw a video making the rounds last week of a white guy doing a Morgan Freeman impression with a deepfake digital overlay, side by side, to make it look like Morgan Freeman was doing the talking, and I hate it. I feel like the practical and ethical benefits of that kind of technology are outweighed by the implications of how it can be used. Unless we do implement some sort of blockchain digital watermark like Albake is saying, one that says "nothing about this is real, this is bullshit made for lols, carry on."
You're afraid that someone will try to deceive you? Honestly, it'd take way less effort to track down a real photo of someone, start a Facebook account with their name and photo, and then defame them using that.

I'm 100% certain that someone out there will use AI to fuck over someone else; I guess where we differ is that I'm 100% certain that, in the absence of AI, that same person would have fucked over that same other person, just some other way.

And, as we speak, Russia is feverishly trying to develop some sort of military AI to help it win the war in Ukraine, and, presumably, Ukraine, the USA, or the UK, or maybe all three, are working on some sort of countermeasure to stop it from working.

AI is a tool that can be used for some horrible things, but maybe the good will outweigh the bad. At the very least, it's being introduced, so far, in a way that everyone can access it, so at least it won't be something exclusively available to the bad guys.
 

BlackMastodon

\m/ (゚Д゚) \m/
Contributor
Joined
Sep 26, 2010
Messages
7,636
Reaction score
3,629
Location
Windsor, ON
And, as we speak, Russia is feverishly trying to develop some sort of military AI to help them win the war in Ukraine, and, ostensibly, either Ukraine or the USA or the UK or maybe all three are working on some sort of counter-measure to stop it from working.
All conflicts should be fought with no items, Fox only, Final Destination.
 

wheresthefbomb

SS.org Regular
Joined
Jul 30, 2013
Messages
4,428
Reaction score
7,213
Location
Planet Claire
@BlackMastodon and other nerds re: deepfakes, have you heard of Holly+? Holly Herndon taught an AI to sing in her voice and then made it free for anyone to use. Hard to say where exactly this will lead but it's a bold, proactive solution to the coming era of deepfakes in creative IP. It's not perfect as you'll hear in the video, but it's damn close.

 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
14,221
Reaction score
24,833
Location
Tokyo
I'm gonna use this thread as a soapbox instead of making another one since we're on the topic, but forgive me if there's another thread on this in the Politics subforum or somewhere else.

I view AI art and deepfake videos the same way: incredibly dangerous once the technology becomes advanced enough to be indistinguishable from actual images or videos, and @narad, by all means, educate me if this isn't the case.

Stuff like the reports from a few years ago about an algorithm that could determine, with surprisingly good results, whether someone was homosexual just based on a mugshot would have terrifying impacts in places that still have barbaric laws against homosexuality. But this could also be extrapolated to attacking political opponents or activists by creating fake scandals. We already have a huge issue with misinformation spreading through memes and social media, but could you imagine if it took less than a minute to generate an image of <disliked political opposition> <performing lewd/illegal sex act> against <children/animals/anyone but a consenting partner>?

It was obvious like 10 years ago that lawmakers were way too under-educated and slow on the uptake when it comes to how technology and the Internet work, and technology has only pulled further ahead since, with things like machine learning advancing with little regard for ethical implications.

Shit is scary from the perspective of the boring dystopia we currently live in, not even considering the absolute hellscape it could turn into.

Nah, that's certainly the case. It's hard to know how quickly the video-based methods will scale up -- I've seen short clips that are basically at the level of where image generation was ~3 years ago, so if it follows a similar trajectory, we're looking at a world where you can't trust your eyes or ears when it comes to anything you see on the internet. It's incredibly scary, because if you live in a world where forces push narratives to control the behavior of the masses, they're now going to have the perfect tool to do it. Can you imagine all the idiots that would have gotten riled up if there were actual video of Hillary eating a baby? Or a video given to Russians showing Nazi Ukrainians killing Russian citizens?

On the other hand, people really had no qualms with believing that Hillary eats babies, or that Russia is invading Ukraine to rid them of nazis anyway. The technology is going to get there for sure, but what sort of world that actually creates is anyone's guess. I'd like to think that shattering your ability to trust the images and videos you see online will cause everyone to have a heightened BS detector, and that mainstream outlets will again be more trusted than a random guy with a podcast. Wishful thinking? I don't see how the blockchain mentioned by @Albake21 could really help that much though, because it's about who the trusted sources are. I don't often have trouble finding a source for some important image or video. But often these days that just leads back to some random person, and I don't know whether to trust them. Having a society where everyone has some level of trustworthiness score starts to sound even more Black Mirror than the AI aspect, and that I also find scary.

But yeah, the only thing I think is certain, as @gabito said, is that there's no stopping it. I followed the source of one of the posts the artist here shared, and it led to a coalition of artists with demands they want to bring to Washington. Among the demands was something like companies only being allowed to employ X% AI art tools. That's laughable. It reminds me of when Beyoncé wanted all the unflattering Super Bowl photos of her removed from the internet. You can't place artificial controls on the use of the technology. If anything, it's worth remembering that America is just one place, and we can send photos instantly anywhere in the world. If you want to cost a company a huge amount of money by employing artists who work slowly and artificially limiting how much the more powerful tools can be used (again, not really in line with any existing policy in any field), congratulations, you have now been outsourced.

One thing worth clarifying, though, is that the homosexual detector was, predictably, total bullshit. It's a good cautionary tale in bad ML and one of the only examples I can think of where an academic paper should have been rejected on ethical grounds. It turned out that the homo/hetero partitions in the dataset were constructed in different ways, so the model wasn't picking up on sexuality at all: the people in each partition also shared particular lighting, clothing, and so on, and those confounds are what it learned. With image models, though, it's nice that you can basically visualize where the model's attention is focused when making a prediction, so it's easier to sanity-check than the current models we're talking about. Maybe I'm confusing it with different work, but I do remember one such homosexual detector turning out to be basically a facial hair detector :D So yeah, it was fake science, but it made the media rounds before better ML people had time to look into it.
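The sanity check described above, seeing which parts of an image actually drive a prediction, can be illustrated with a toy occlusion test. This is a generic sketch, not the method from any particular paper: the "model" is a made-up linear scorer that only looks at one corner of the image, standing in for a classifier that latched onto a confound like lighting or clothing:

```python
import numpy as np

# Toy "classifier": scores an 8x8 grayscale image with fixed weights
# that only care about the top-left 4x4 corner (our stand-in for a
# model that learned a confound instead of the intended signal).
weights = np.zeros((8, 8))
weights[:4, :4] = 1.0

def score(img: np.ndarray) -> float:
    return float((img * weights).sum())

def occlusion_map(img: np.ndarray, patch: int = 2) -> np.ndarray:
    """Slide a blank patch over the image and record how much the
    score drops; big drops mean the model relies on that region."""
    base = score(img)
    h, w = img.shape
    sens = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = img.copy()
            occluded[i:i+patch, j:j+patch] = 0.0  # blank this patch
            sens[i // patch, j // patch] = base - score(occluded)
    return sens

rng = np.random.default_rng(0)
img = rng.uniform(0.5, 1.0, size=(8, 8))
sens = occlusion_map(img)
# Sensitivity is concentrated where the weights are: the top-left corner.
print(np.round(sens, 2))
```

Real versions of this idea (occlusion maps, gradient-based saliency such as Grad-CAM) run against trained networks, but the logic is the same: if the "signal" lives in background, lighting, or grooming cues rather than the face, the map will say so.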
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
14,221
Reaction score
24,833
Location
Tokyo
@BlackMastodon and other nerds re: deepfakes, have you heard of Holly+? Holly Herndon taught an AI to sing in her voice and then made it free for anyone to use. Hard to say where exactly this will lead but it's a bold, proactive solution to the coming era of deepfakes in creative IP. It's not perfect as you'll hear in the video, but it's damn close.



I don't know -- it doesn't seem like a bold/proactive solution to just give up your voice for anyone to use for free. It's kind of a full surrender, which is probably the best course of action, but it doesn't seem very empowering. Then again, as a singer, anyone could just take your voice and train a model in the same way, so you might as well get a TED talk out of it. And "taught" already implies a lot of agency or involvement that isn't there: singing goes into the black box, a voice-to-voice model comes out. As far as Holly goes, it's just giving a bunch of her singing to a company (or at most, re-recording a number of set phrases, if the model doesn't learn a phonemic transcription and alignment automatically, which most do these days).
 

Alsvartr

SS.org Regular
Joined
Sep 12, 2008
Messages
6,244
Reaction score
6,431
"Stole" in the sense that it did the ML analog of looking at them. Which is what humans do. If you want to attempt to actually make an argument, you need to get clear on how the processing the model does is any different to what the human artist does to the extent that you want to call one theft and the other inspiration.
When Ran makes an Explorer it's theft. When Ken Lawrence makes an Explorer it's inspiration.
 

narad

Progressive metal and politics
Joined
Feb 15, 2009
Messages
14,221
Reaction score
24,833
Location
Tokyo
When Ken makes an Explorer it's like a preview of the singularity :D
 