Anyone on here particularly religious?

Discussion in 'Politics & Current Events' started by Hollowway, Jul 24, 2017.

  1. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,970
    Likes Received:
    1,139
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    1. I read your paper. I quoted your paper back to you multiple times. I posted my own links to papers I read, and quoted those to you. Stop saying I don't read the literature; at this point you are insulting yourself as well as me.
    2. My library link is 100% relevant to this discussion. The paper you keep saying I didn't read is just about road and lane identification and steering controls. Why don't you start acting like you read your own link?
    3. I don't care if you don't care what my stance is. Your stance is that you are somehow all-knowing in this field, when you are clearly misunderstanding some of the things you are posting, and you seem to think it makes you look cool to shit on bostjan. Yet I've made valid points, and your response is to say that I'm "whining" (which didn't even make sense in context) or that I don't read things, while you don't challenge anything specific I've said.
     
  2. narad

    narad SS.org Regular

    Messages:
    4,816
    Likes Received:
    1,177
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    That's not "the literature." When it comes to how much behavior you can learn without hand-coding things, it's not just a couple of self-driving car papers. The DeepMind papers are a great start. The fundamental learning paradigm of those papers and the way that self-driving cars can be trained (in simulation) are the same. I already discussed why this isn't done for self-driving systems that rely solely on learning by example from real-world driving scenarios. At the end of the day, it's up to you to decide whether driving around Grand Theft Auto, avoiding cars and getting to a goal, is sufficiently similar to driving a real car to the grocery store, as it pertains to exhibiting intelligent behavior.
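
    To make that concrete, here's a toy sketch of the kind of setup I mean. This is not DeepMind's actual code; the five-state "road" and the reward numbers are made up, and a real system would use a deep network over pixels instead of a table. But the principle is the same: no rule anywhere says "stay in lane" -- the behavior falls out of the reward signal alone.

        import numpy as np

        # A 1-D toy road: states are lane positions 0..4, position 2 is the centre.
        # Actions: 0 = steer left, 1 = hold, 2 = steer right.
        n_states, n_actions = 5, 3
        Q = np.zeros((n_states, n_actions))    # learned action values
        alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

        def step(state, action):
            nxt = int(np.clip(state + action - 1, 0, n_states - 1))
            reward = 1.0 if nxt == 2 else -1.0  # reward staying centred, nothing else
            return nxt, reward

        rng = np.random.default_rng(0)
        state = 0
        for _ in range(5000):
            # epsilon-greedy: mostly exploit current value estimates, sometimes explore
            if rng.random() < epsilon:
                action = int(rng.integers(n_actions))
            else:
                action = int(Q[state].argmax())
            nxt, reward = step(state, action)
            # standard Q-learning update
            Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
            state = nxt

        print(Q.argmax(axis=1))  # learned policy steers toward the centre from every state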

    It was not at all relevant to that particular Nvidia car. That car does not use those libraries. End of discussion. You were free to bring it up, but trying to make it relevant to my post (with examples) was obviously wrong.

    I guess you're so well-informed that you can properly assess my well-informedness. But you throw out your points like they are critical counterexamples. Does the Nvidia system use image normalization? Sure. That doesn't affect my own assessment of what's being learned, because other systems have already learned such normalization. If that's too brittle for you, fine.

    AI can be a weird thing to argue about. First we could find a system that doesn't use normalization. Then it would be about whether it could avoid a kid running into the street. Then it wouldn't be AI until it could make an ethical decision about whom it must run over when multiple people of all ages and social classes run into the street. Then it wouldn't be AI until it was emotionally distressed over the ethical decision. When you wind up arguing with always-right guys like you, this is how it goes. When really I was just trying to show some cool complex behavior without if-else rules and broaden some people's minds about how current AI works :-/
     
  3. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,970
    Likes Received:
    1,139
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    I brought it up before you posted that example.

    :shrug:

    Funny, that's the way I thought you were coming off to me. :lol: I mean, I agree that I have made counterexamples to your points, but, to be fair, my counterpoints specifically address your points.

    I guess the internet is a weird place to try to have a discussion. :/

    I mean, sure, but you kept beating the drum that no hand-coding of anything was being used in the example, when the second sentence of the very paper you posted as an example admitted the opposite. In fact, I am certain that an AI to do image normalization would be totally possible. It's not exactly a trivial task, but it's super easy compared to the things we are discussing here about driving a car.
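
    For what it's worth, learned normalization is already a standard building block -- batch normalization learns its own scale and shift parameters by gradient descent. A minimal numpy sketch of the forward pass (toy numbers, not anyone's production code):

        import numpy as np

        # Batch normalization: the network standardizes its own inputs and *learns*
        # the scale (gamma) and shift (beta), so no constants are hand-coded.
        def batch_norm(x, gamma, beta, eps=1e-5):
            x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
            return gamma * x_hat + beta  # learned rescaling

        rng = np.random.default_rng(0)
        images = rng.uniform(0, 255, size=(32, 4))  # a batch of raw pixel features
        gamma, beta = np.ones(4), np.zeros(4)       # trainable; updated during training
        print(batch_norm(images, gamma, beta).mean(axis=0))  # ~0 after normalization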

    Obviously, the point of disagreement here is how advanced these AIs are. They are advanced as hell, but they are not as advanced as you have stated in several posts. I think we've covered that. The paper you keep coming back to states that the AI can determine the outline of the road without any explicit coding telling it such. I don't see where anyone claimed AI could not do that.
     
  4. TedEH

    TedEH Cromulent

    Messages:
    3,946
    Likes Received:
    473
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    I don't think anyone is saying this isn't AI. The distinction being made is that AI is not the same as "real" or "human" intelligence. It's impressive for sure, but it's IMO nowhere near truly understanding what it's doing or "learning".
     
  5. narad

    narad SS.org Regular

    Messages:
    4,816
    Likes Received:
    1,177
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    I think the point is that if it can determine the road without any explicit coding telling it such, can it not determine people, dogs, deer, etc.? That would run very counter to the opinions expressed earlier in this thread that these need to be fed in, either by hard-coding or by pre-training some object recognition system on labels ("this is a person," "this is a deer," etc.).

    I mean, it is clear that this is the case when you extrapolate from the Atari game work and the ability of that system to learn similar behavior from similar input. You just need an objective function that rewards/penalizes something relevant to these objects ("hitting people is bad," or even "hitting people means cops come, and going to jail drastically slows estimated time to destination").
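
    Something like this hypothetical reward function -- every field name here is made up for illustration, but it's the shape of objective I'm describing:

        # Nothing tells the agent what a person *is*; the objective just makes
        # outcomes involving people extremely costly, and the learned policy
        # absorbs the distinction on its own.
        def reward(state):
            r = 0.0
            if state["reached_goal"]:
                r += 100.0
            r -= 0.1 * state["elapsed_seconds"]  # mild pressure to make progress
            if state["collision"] == "vehicle":
                r -= 500.0
            if state["collision"] == "pedestrian":
                r -= 10000.0                     # "hitting people is bad"
            return r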

    With regards to the second sentence of the paper contradicting something I said, I'm not sure what you mean. I guess you mean this (third sentence)?: "With minimum training data from humans the system learns to drive in traffic on local roads with or without lane markings and on highways." Well, its objective is derived from human steering, so that's natural. That is a learning-by-example paper, where the model itself does not contain hand-coding. See deep reinforcement learning if you want to ditch the human.
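
    Roughly, the learning-by-example setup looks like this -- a minimal sketch, not the paper's actual architecture, and the layer sizes are made up:

        import torch
        import torch.nn as nn

        # Pixels in, steering angle out. The model itself contains no hand-coded
        # rules; the only human input is the recorded steering used as the target.
        model = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # single output: a steering angle
        )
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        def training_step(camera_frames, human_steering):
            # camera_frames: (batch, 3, H, W); human_steering: (batch, 1)
            predicted = model(camera_frames)
            loss = nn.functional.mse_loss(predicted, human_steering)  # imitate the human
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()

        # e.g. training_step(torch.randn(8, 3, 66, 200), torch.randn(8, 1))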

    But yea, I don't think we're close to human intelligence. But as to the points expressed earlier -- that AI is mostly tricks to appear intelligent -- I strongly disagree. I don't see any indication that deep RL is so far removed from the process of human intelligence as to be considered a qualitatively different type of thing. The biggest obstacles seem to be training strong predictive models, so that if you're in a situation you can imagine future outcomes based on the actions you take; composing small actions into conceptually larger ones, so that an RL reward applies more to concepts than to super tiny actions; and then grounding such an agent in a multitask world. But there are people working on all of these problems -- the remaining problem seems to be one of scale.
     
  6. bostjan

    bostjan MicroMetal Contributor

    Messages:
    12,970
    Likes Received:
    1,139
    Joined:
    Dec 7, 2005
    Location:
    St. Johnsbury, VT USA
    Determining: "Road/NOT Road" -> "Drive here/NOT Drive here" is not trivially developed into "This is an object that could potentially get in your way within a safe distance X, so slow down on approach."

    Agree/Disagree?
     
  7. narad

    narad SS.org Regular

    Messages:
    4,816
    Likes Received:
    1,177
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Disagree. The model has a functional representation of the road: in the presence of this curve, align the steering wheel in this manner. This is not so different from a functional representation of a child: in the presence of this object, reduce speed 20-30% (because this is what all humans do when this object enters the field of the camera, i.e., there is a strong association between this type of phenomenon and the need to slow down in the objective function).
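
    In code terms, the point is that there is no branch like "if child: slow down" anywhere -- the same forward pass handles the curve and the kid, and any slowdown lives in the learned weights. A deliberately tiny stand-in for a trained network, with made-up weights:

        import numpy as np

        # One learned function from pixels to controls; the code path is identical
        # whatever the frame contains. No object-type rules anywhere.
        def control_step(policy_weights, camera_frame):
            features = camera_frame.reshape(-1)             # stand-in for a deep net
            steering, throttle = policy_weights @ features  # single learned mapping
            return steering, throttle

        rng = np.random.default_rng(0)
        W = rng.normal(size=(2, 3 * 8 * 8))  # hypothetical trained weights
        frame = rng.random((3, 8, 8))        # toy camera frame
        print(control_step(W, frame))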

    It's not exhibiting the same rationale for reducing speed that you mention, but functionally it is equivalent (and it would lead back to Chinese room talk to say that these are fundamentally different).
     
  8. TedEH

    TedEH Cromulent

    Messages:
    3,946
    Likes Received:
    473
    Joined:
    Jun 8, 2007
    Location:
    Gatineau, Quebec
    I'm not sure I'd call any part of that process "rationale" at all. It's not like the machine thinks to itself "oh no, there's something here, I should move out of the way" - it's just giving the result that best matches the pattern it was trained on. There's no reasoning involved. I imagine it would respond to any foreign shape the same way, regardless of whether or not it's safe or OK to drive over it.

    I think it's safe to say we've strayed super far from the original topic, though. Maybe time for a new thread? The AI thread? Edit: Maybe we should poke a mod to move this to its own discussion.
     
    bostjan likes this.
  9. narad

    narad SS.org Regular

    Messages:
    4,816
    Likes Received:
    1,177
    Joined:
    Feb 15, 2009
    Location:
    Cambridge, UK
    Well, exactly -- I know you wouldn't call it a rationale. But ultimately, if a machine makes all the same decisions as an informed human on some complex task that would require a chain of reasoning from us, then one has to consider that the function the model has learned embodies an implicit understanding of the discrete logical steps you would cite when making the same decision.

    But ya, also in favor of a thread split.
     
