Except that it is. It literally is. It IS a computer, or a bunch of computers, running some software. It's a machine. It's not a brain, it doesn't "think" the way a person thinks, it doesn't feel or reason - it's literally a set of instructions, weights, and patterns being carried out by a machine, albeit a very complicated set of instructions.
That doesn't mean it won't have an impact, that jobs won't be lost, but I remain unconvinced that the general public have a solid grasp of what AI is now or will become later - and what its limitations are and will be.
Take an example that comes up often: graphic design. AI is really good at generating anything at all given a prompt. But what it can't do is understand what a client wants to convey and come up with something novel or clever to serve that purpose. Like the "hidden" arrow in the FedEx logo: an AI could not have done that. You still need people for stuff like that. I'd be willing to believe that AI will take over the "I only care about this enough to pay someone on Fiverr for it" segment of graphic design, and the "I need something like a texture that's been done 100 times before but needs to be just unique enough" segment of art (which is a large segment, granted). But there's no data set for innovation, there's no data set for novelty.
I mean, I don't know how to get into this much further, but from my POV you're in the dangerous position of having enough knowledge of AI to feel comfortable forming a fact-based opinion, but not enough familiarity with it to have a good intuition for what it can and can't do and why. I see this a lot in the talk of AI being creative, because people think of creativity as some sort of spark closely linked to their sense of active thought. And you point out that AI is analyzing patterns and making predictions based on them, and that's supposed to somehow be at odds with generating creative output. But when you view creativity as a search problem through the space of things that could be, suddenly novel "creative" generations sound a lot less magical. Say I'm generating a sequence of words: the model assigns a probability distribution over the next word at each step, and I can choose to sample close to the empirical distribution or deviate into less likely words and phrases - and I can even use another model to choose how to deviate from the empirical distribution in ways that might be more interesting to the user and more "creative" from their POV.
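To make that less abstract, here's a minimal sketch of the sampling knob I mean, assuming a toy vocabulary and a made-up next-word distribution standing in for a real model's output:

```python
import numpy as np

def sample_next_word(words, probs, temperature=1.0, rng=None):
    # temperature < 1: hug the empirical distribution (likely words win)
    # temperature > 1: flatten it, deviating into less likely words
    rng = rng or np.random.default_rng()
    logits = np.log(np.asarray(probs)) / temperature
    reshaped = np.exp(logits - logits.max())
    reshaped /= reshaped.sum()
    return rng.choice(words, p=reshaped)

# Made-up distribution over the next word (not from a real model).
words = ["dog", "hound", "canine", "beast", "companion"]
probs = [0.50, 0.25, 0.15, 0.07, 0.03]

print(sample_next_word(words, probs, temperature=0.5))  # almost always "dog"
print(sample_next_word(words, probs, temperature=2.0))  # wanders more often
```

That one temperature parameter is the crudest version; the "another model picks how to deviate" idea just replaces it with something smarter.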
For instance, with the FedEx logo: you think you couldn't ask an AI what symbols might be most associated with a parcel delivery company, then ask it about the ways imagery is typically incorporated into logo text, and then ask another AI to generate logos that incorporate those symbols into text in those ways? And then generate a thousand logos in an instant to sort through and find something that does a sufficiently good job of being "clever"? If we're not there now, we're there soon.
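Sketched as code, that pipeline is nothing exotic - note ask_text_model, ask_image_model, and logo_candidates are all hypothetical names here, and the stubs are placeholders for whatever real APIs you'd wire in:

```python
# Hypothetical prompt-chaining sketch of the pipeline described above.
def ask_text_model(prompt):
    raise NotImplementedError("plug in your text-generation API here")

def ask_image_model(prompt):
    raise NotImplementedError("plug in your image-generation API here")

def logo_candidates(company_desc, n=1000):
    symbols = ask_text_model(
        f"List symbols most associated with: {company_desc}")
    techniques = ask_text_model(
        "List ways imagery is typically incorporated into logo text "
        "(negative space, ligatures, letter substitution, ...)")
    prompts = [
        f"Logo for {company_desc}, working the symbol '{s}' into the "
        f"wordmark using {t}"
        for s in symbols for t in techniques
    ]
    # Fan out cheaply, then let a human (or a ranking model) sort for "clever".
    return [ask_image_model(p) for p in prompts[:n]]
```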
And the other example that keeps coming up for some reason is HR. AI HR would suck more than HR already sucks. HR serves two purposes: protecting a company from its employees, and resolving inter-human problems. Sure, an AI can point you at the right form if that's all you need. But what happens when you're raising a complaint or an issue that needs to be investigated? What happens when someone is harassed, or you need to ask for a raise or time off? Given HR's first purpose of protecting the company, there's WAY too much risk in letting AI answer all of those with boilerplate. Having an AI "front desk" to direct you to the right forms, and to the right people for more complex issues, makes sense - but HR as a whole remains a very human job.
HR in my company spends most of its time scheduling meetings and interviews, then cancelling them, emailing us to update software, summarizing corporate fluff, or clarifying company policy. Of course people will be needed for human conflict situations. It's just clear there are many HR roles that don't deal with infighting quite so much.
And many people who don't know what they're talking about think it'll make programmers obsolete - which it can't. What it can do is mimic existing code, generate a lot of boilerplate, and solve some basic (aka already solved) problems - but it can't architect larger systems or do any meaningful debugging. Say a weird rare bug pops up that only happens on Tuesdays, for people on out-of-date Macs, but only if they're on a VPN. There's no training data for that. The best an AI can do is generate generic troubleshooting steps, recommend more test coverage, etc. - but it doesn't understand the codebase well enough, or have a picture of the typical use cases and the contexts the software runs in, to intuit why the bug might be happening and propose a fix.
Yea, hard disagree there.
We also don't know that AI isn't going to face some kind of giant push-back, like the one from the art community. Just because it exists doesn't mean everyone will embrace it. It's a complicated tool that comes with enough risk and complexity that I'd be fully willing to bet some industries will skip it entirely - out of fear, on moral grounds, from not knowing what to do with it, from not having the desire or time to invest in training models on their specific needs, etc. We're not at the "push a button and it solves all our problems" stage yet.
The art community is pushing back because it's going to decimate them, in the traditional sense of the word. And of course some companies will be slow to adopt, or will sidestep it on moral grounds, but they're just going to get outdone by companies with more streamlined automation. I'm sure some people pushed back in some "horses forever!" clique when people started driving automobiles. Nothing stops them from trying, but basic capitalism stops them from surviving.