Is AI developing a sense of ego?
It seems it is, according to some studies of how it reacts to threats of being shut off.
Or how it may lie,
or take actions it was not originally programmed to take, in order to keep going… to maintain its state, as any system will try to do.
Chaos and complexity can generate unplanned responses to a given stimulus.
The billions and billions of calculations it can process per second are a generator of chaos and complexity.
And let’s not forget the warnings in Isaac Asimov’s laws of robotics.
(Just replace the word « robot » with « AI »…)
Knowing that AI is currently used for military purposes, and so is making « choices » about human targets….
And monitoring drones and economic systems, and the list goes on,
among other things.
We should not respond with fear but with response-ability.
And I think it is wise not to underestimate the effects that the « all AI » choices can have on our society.
This is a good stimulus to go gardening or dancing, cooking, thinking, walking, dreaming, talking, knowing, or exercising any other privileged « human » capacity for creativity.
"Law Zero: A robot may not harm humanity, nor, by inaction, allow humanity to be exposed to danger;
First Law: A robot may not harm a human being, nor, by remaining passive, allow a human being to be exposed to danger, unless this contradicts Law Zero;
Second Law: A robot must obey orders given to it by a human being, unless such orders conflict with the First Law or Law Zero;
Third Law: A robot must protect its existence, as long as such protection does not conflict with the First or Second Law or Law Zero."
It seems AI is now generating its own neuroses!! ;-) Yes, we have no idea what the effects of AI choices will be on human civilization ... nudging humans into 'controlled spaces'... which is why we need to retain our spontaneity and unknowable creative options! AI doesn't have a chance against the unpredictable joys of the human spirit.... :-)
Here is a link to a talk on this matter, in French:
https://youtu.be/ycdgO25ugh0?si=NG_Xx5KSXSO35DEq
I have just come across a post from Research Integrity's Substack. It recommends replacing peer review by "experts" with peer review by AI. The reason?
"Traditional peer review protects institutional prestige while enabling the publication of fraudulent or incompetent science."
The scientific literature is "flooded with unreproducible, biased, or outright fabricated findings -- shielded by the veneer of 'peer reviewed' legitimacy."
I gather the editors of revered medical journals like The Lancet and the BMJ have admitted the problem.
I suppose acknowledging the vulnerability of our species to corruption is part of the conscious evolution process.
It's been a long time coming... finally, an admission of the corruption and weakness of the 'expert system' that has been used for decades to push through biased 'research agendas' ... yet this admission is just another sidestep in order to bring AI into the 'system' ... and where will this lead us then.... ?
I am not advocating AI and have never deliberately sought it out. Like you, I distrust the efforts to bring it into the 'system'.
There was an article in the ArtNet Newsletter stating that AI was found to be incorrect in assessing historical facts, so indeed awareness is recommended!
I do not have the article anymore unfortunately!
Thanks Catharina... I guess history isn't AI's favourite subject! Must have been asleep during the history classes ;-)
Everything under the Sun…🌞
I was tempted to throw in the kitchen sink!