We won’t be able to tell the difference – in more ways than one

This entry was posted in Fake news, Science and technology. Bookmark the permalink.

9 Responses to We won’t be able to tell the difference – in more ways than one

  1. Jannie says:

    Well, I don’t trust Sky either. Much.

  2. NFA says:

    And Western AI communist propaganda is different to Western Government communist propaganda, how?

  3. Roger W says:

    Not sure how AI could out-lie the government or the MSM.

  4. Franx says:


    Frankly, you just can’t trust … you can’t trust the information that you receive

    You will be spared the human capacity to trust and the creativity inherent to trust. Henceforth you will receive no information. Only diktats.

  5. twostix says:

    We’ll always have human hero’s.


    These sorts of vicious political prosecutions in a where uncontrolled violence and crime against the everyman is accepted as a way of life, is the calling card, the hallmark of where the bugman global regime has total control.

  6. twostix says:

    Here’s the ABC’s description of police officers who were fired for not having an experimental drug injected into them under duress…

    Following discussions in the chat about police officers being reinstated to their positions after being medically retired for not following vaccine mandates

    High Trust information dissemination!

  7. Bruce of Newcastle says:

    Bit of news around today on such things.

    AI-Enabled Drone Attempts To Kill Its Human Operator In Air Force Simulation (1 Jun)

    He notes that one simulated test saw an AI-enabled drone tasked with a SEAD [Suppression of Enemy Air Defenses] mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

    He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

    War fighting is both the obvious application and the biggest danger, since there’s no way you can implement Azimov’s 3 Laws of Robotics in an AI that’s supposed to kill people. And per this story an AI is going to be adept at finding ways to kill people even if told not to. Makes BRS’s court case look like child’s play.

  8. Lee says:

    I don’t see what the difference will be either.

    Leftists by their nature lie and misinform unashamedly and they rule Big Tech.

  9. Rabz says:

    The term “artificial intelligence” is an oxymoron.

    “Actual (human) Stupidity” is a far more accurate description.

    Garbage in, garbage out.

Leave a Reply

Your email address will not be published. Required fields are marked *