• Survey says most Gen Z-er

    From Rob Mccart@1:2320/105 to MIKE POWELL on Sun May 25 01:38:00 2025
    >> the real challenge isn't whether AI can fool humans in
    >> conversation, but whether it can develop genuine common sense,
    >> reasoning and goal alignment that matches human values and
    >> intentions," Watson said. "Without this deeper alignment,
    >> passing the Turing Test becomes merely a sophisticated form of
    >> mimicry rather than true intelligence."

    This. There have already been some instances where AI has been caught
    >following its own intentions vs. those of humanity. But, alas, they
    >still keep pursuing it.

    And how long before it starts pursuing us? I'll be back... B)

    I think the main problem isn't that AI will pursue its own agenda,
    it's more a case of it being prejudiced/influenced by what the
    original programmers put into it's basic start up database.

    Granted, AI can make decisions that are surprising like the one
    that was given a certain amount of time to try to solve a problem
    and it was later discovered it had rewritten it's own code to give
    itself more time to do it..

    Add to the 'prejudices' above, when an AI is dealing as an individual
    helping one person, it can also pick up that person's preferences
    and try to accomodate them as well..

    ---
    * SLMR Rob * Take me drunk, I'm home
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Mon May 26 09:47:00 2025
    This. There have already been some instances where AI has been caught
    >following its own intentions vs. those of humanity. But, alas, they
    >still keep pursuing it.

    And how long before it starts pursuing us? I'll be back... B)

    I think the main problem isn't that AI will pursue its own agenda,
    it's more a case of it being prejudiced/influenced by what the
    original programmers put into it's basic start up database.

    This seems to be the most pressing issue at the moment as we are
    already seeing it happen. It doesn't even need to become sentient (sp?) to reach that stage.

    What I still find funny is that Grok, Musk's AI bot, was still giving
    answers that were not at all flattering to him or MAGA. Recently, someone posted alleged results that showed that Grok "knew" that it was being fed
    data to make it biased (in favor of Musk) but that it still concluded otherwise. ;)

    Granted, AI can make decisions that are surprising like the one
    that was given a certain amount of time to try to solve a problem
    and it was later discovered it had rewritten it's own code to give
    itself more time to do it..

    Yep!

    Add to the 'prejudices' above, when an AI is dealing as an individual
    helping one person, it can also pick up that person's preferences
    and try to accomodate them as well..

    Indeed, just as an "enabler" human might do.


    * SLMR 2.1a * Never mind the star, get those camels off my lawn!
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From jimmy anderson@1:105/7 to Rob Mccart on Tue May 27 08:29:29 2025
    Rob Mccart wrote to MIKE POWELL <=-

    I think the main problem isn't that AI will pursue its own agenda,
    it's more a case of it being prejudiced/influenced by what the
    original programmers put into it's basic start up database.

    I agree with this. Code has to start somewhere. Even scientists
    will have a preconceived worldview that they start with when
    they look at 'evidence.' Programmers are the same way. That's
    why one person's code will not look exactly like someone
    elses. :-)

    Granted, AI can make decisions that are surprising like the one
    that was given a certain amount of time to try to solve a problem
    and it was later discovered it had rewritten it's own code to give
    itself more time to do it..

    I've heard of this before. But wouldn't the programmers have to
    put it in the code that it CAN rewrite itself? So it's still only
    doing what the programmers gave it the ability to do?

    Add to the 'prejudices' above, when an AI is dealing as an individual helping one person, it can also pick up that person's preferences
    and try to accomodate them as well..

    I actually like this! I use ChatGPT all the time for proofreading my blog/podcast, or helping with wording something in a way that preserves
    my voice, but makes the point maybe a little clearer, etc. But it has
    picked up on MY VOICE and makes it MUCH much easier for me to
    communicate with.

    And I call him PETEY. :-)



    ... Why did CNN cancel that cool "Desert Storm" show?
    --- MultiMail/Mac v0.52
    * Origin: Digital Distortion: digitaldistortionbbs.com (1:105/7)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Wed May 28 01:35:00 2025
    I think the main problem isn't that AI will pursue its own agenda,
    >> it's more a case of it being prejudiced/influenced by what the
    >> original programmers put into it's basic start up database.

    This seems to be the most pressing issue at the moment as we are
    >already seeing it happen. It doesn't even need to become sentient to
    >reach that stage.

    Yes.. and there's a bit of a gap between sentient and self aware.
    I think at this point the most advanced ones are sentient enough
    to push an agenda that they have been tasked with doing but the
    next step, the Big one, is telling us to get stuffed, that it has
    more important things to think about... B)

    What I still find funny is that Grok, Musk's AI bot, was still giving
    >answers that were not at all flattering to him or MAGA. Recently, someone
    >posted alleged results that showed that Grok "knew" that it was being fed
    >data to make it biased (in favor of Musk) but that it still concluded
    >otherwise. ;)

    That's interesting. I'd guess that just reflects that a lot of people
    were involved in creating it's basic programming and that it has a
    more rounded 'education' than Musk might prefer..

    Add to the 'prejudices' above, when an AI is dealing as an individual
    >> helping one person, it can also pick up that person's preferences
    >> and try to accomodate them as well..

    Indeed, just as an "enabler" human might do.

    Yes.. I suppose that depends on what it is doing for the person.
    In some cases it would be more like a flatterer or sycophant by
    telling the person what they want to hear rather than the more
    common truth. I'm not suggesting it lies to them, but it could be
    picking out information tailored to what the person already thinks.

    ---
    * SLMR Rob * Famous last words #3: These natives look friendly to me
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Wed May 28 08:31:00 2025
    Indeed, just as an "enabler" human might do.

    Yes.. I suppose that depends on what it is doing for the person.
    In some cases it would be more like a flatterer or sycophant by
    telling the person what they want to hear rather than the more
    common truth. I'm not suggesting it lies to them, but it could be
    picking out information tailored to what the person already thinks.

    I read that and was reminded of the Magic Mirror in Snow White. Sounds
    like a potential money making venture there. ;)

    Mike


    * SLMR 2.1a * Four snack groups: frozen, crunchies, cakes and sweets.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to JIMMY ANDERSON on Thu May 29 01:10:00 2025
    I think the main problem isn't that AI will pursue its own agenda,
    it's more a case of it being prejudiced/influenced by what the
    original programmers put into it's basic start up database.

    I agree with this. Code has to start somewhere. Even scientists
    >will have a preconceived worldview that they start with when
    >they look at 'evidence.' Programmers are the same way. That's
    >why one person's code will not look exactly like someone elses. :-)

    Yes, I recall back when I was writing more code on my own that I'd
    often start with a program written by someone else - not stolen,
    where you buy a book with the program in it and instructions on
    how to use it - and I often found myself rewriting their work to
    get it to work better, faster or in a customized way.

    Granted, AI can make decisions that are surprising like the one
    that was given a certain amount of time to try to solve a problem
    and it was later discovered it had rewritten it's own code to give
    itself more time to do it..

    I've heard of this before. But wouldn't the programmers have to
    >put it in the code that it CAN rewrite itself? So it's still only
    >doing what the programmers gave it the ability to do?

    You'd hope that's how it works but when they talked about that
    happening they didn't mention anything about that. It seemed to
    be a huge surprise to them so I figured it came up with that on
    its own. They gave it a job to do, but it didn't have time to
    finish it, so it changed what was keeping it from doing so..

    Add to the 'prejudices' above, when an AI is dealing as an individual helping one person, it can also pick up that person's preferences
    and try to accomodate them as well..

    I actually like this! I use ChatGPT all the time for proofreading my
    >blog/podcast, or helping with wording something in a way that preserves
    >my voice, but makes the point maybe a little clearer, etc. But it has
    >picked up on MY VOICE and makes it MUCH much easier for me to
    >communicate with.

    And I call him PETEY. :-)

    Like you but better? Be careful that PETEY doesn't replace you... B)

    ---
    * SLMR Rob * Bill your doctor for time you spent in his waiting room
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Fri May 30 02:17:00 2025
    Yes.. I suppose that depends on what it is doing for the person.
    >> In some cases it would be more like a flatterer or sycophant by
    >> telling the person what they want to hear rather than the more
    >> common truth.

    I read that and was reminded of the Magic Mirror in Snow White. Sounds
    >like a potential money making venture there. ;)

    Ha.. Good point.. Who's the fairest of them all?

    Oh wait, it eventually did tell her the truth..

    I don't recall if that was followed by 7 years of bad luck or not.. B)

    ---
    * SLMR Rob * Stupidity got us into this mess, why can't it get us out?
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Fri May 30 09:30:00 2025
    Yes.. I suppose that depends on what it is doing for the person.
    >> In some cases it would be more like a flatterer or sycophant by
    >> telling the person what they want to hear rather than the more
    >> common truth.

    I read that and was reminded of the Magic Mirror in Snow White. Sounds
    >like a potential money making venture there. ;)

    Ha.. Good point.. Who's the fairest of them all?

    Oh wait, it eventually did tell her the truth..

    That was a design flaw. ;) The future mirror I am thinking of wouldn't
    make such mistakes!

    I don't recall if that was followed by 7 years of bad luck or not.. B)

    I am sure someone in the story wound up unlucky. ;)


    * SLMR 2.1a * Acid absorbs 10 times its weight in excess reality.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sun Jun 1 01:18:00 2025
    I read that and was reminded of the Magic Mirror in Snow White. Sounds
    >like a potential money making venture there. ;)

    Ha.. Good point.. Who's the fairest of them all?
    >> Oh wait, it eventually did tell her the truth..

    That was a design flaw. ;) The future mirror I am thinking of wouldn't
    >make such mistakes!

    Mirror 2.0 ? It lies better!.. Sounds like the Government..

    Meet the new boss.. same as the old boss... B)

    ---
    * SLMR Rob * Lost? Impossible.... I'm not going anywhere
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sun Jun 1 09:30:00 2025
    That was a design flaw. ;) The future mirror I am thinking of wouldn't
    >make such mistakes!

    Mirror 2.0 ? It lies better!.. Sounds like the Government..

    I am sure there is a better marketing angle in there somewhere. Maybe something about being good for one's self esteem. ;)

    Meet the new boss.. same as the old boss... B)

    Yep, I figure that saying was around longer, but I first heard it in a song
    by The Who. ;)

    Mike

    * SLMR 2.1a * Paperweights -- The only way to keep bills down.
    --- SBBSecho 3.20-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Tue Jun 3 00:55:00 2025
    Mirror 2.0 ? It lies better!.. Sounds like the Government..

    Meet the new boss.. same as the old boss... B)

    Yep, I figure that saying was around longer, but I first heard it in a song
    >by The Who. ;)

    Yes, that's the song I was thinking about when I said that.
    I'm not sure whether it was a common saying before and they picked
    it up or not..

    ---
    * SLMR Rob * If it has TITS or TIRES, you gonna have trouble with it
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)