amanfromMars 1 Thu 20 Jun 07:11 [2406200711] …… shares on https://forums.theregister.com/forum/1/2024/06/20/ilya_sutskever_safe_superintelligence/
Pies in the Sky, Pigs will Fly and Cake, Tomorrow. Lucy in the Sky with Diamonds Fodder
One could say …… Wannabe heroes on a phishing expedition for future leading competition able to deliver almighty opposition and crushing defeats from and for positions in suicidal self-destructive defence of the indefensible and inequitable, embedded deep behind ACTive enemy front lines in the plush C-Suite offices on Easy Money Street ……. is not Failsafe Incorporated Superintelligence, and thus, as anything and everything less, is naturally catastrophically vulnerable to stealthy 0day exploit, remote makeover and virtual takeover by that which is ……… and would be Future Creative Command and CyberIntelAIgent Control for Computers and Communications or, vice versa, CyberIntelAIgent Command and Creative Control with Computers and Communications, which is something else altogether, similarly quite different and just as shocking.
But only the likes of an ignorant fool or arrogant tool expect the past to present the future without progress being evident, rather than accept that evident future progress presents the past in its original empty virgin form for pioneering colonisation and repopulation…. an alien assimilation allowing for both production and projection of mass media simulations partnering with studies and studios virtually realising Titanic AIMoving Pictures …. Live Operational Virtual Environments for engagement and employment, exercise and deployment, but not as you were expecting them, nor how you may have been expecting them to be so easily and readily delivered.
How else are you gonna deliver to the uneducated barbarian and the undereducated human a SMARTR Future Progress and its Derivative IT Markets and AIdDevelopment Ventures they can see, specifically for them, to know of the future and to try to have a rewarding, successful starring part in as they grow and age/learn to live and eventually die in and at peace with the worlds one has inhabited/cohabited with‽ Do you have another New More Orderly World Order plan/application/program/project‽
—-
amanfromMars 1 Thu 20 Jun 12:08 [2406201208] …….. points out on https://forums.theregister.com/forum/1/2024/06/20/ilya_sutskever_safe_superintelligence/
Re: Risk versus preparedness … the enigmatic existential threat conundrum
At the time AI or ML would move forward, with near zero investment into safety one could only expect it to be toxic at best. ….. Anonymous Coward
That expectation is truly toxic speculation, AC, and ideally suited for fiction rather than for factual booting.
And …. regarding “let us consider investing in safety measure research” [presumably to protect and preserve humanity from the results and consequences of AI going rogue and malevolent and all postmodernist final solution and genocidal], what would that product look like, and who/what would wield and police it/mentor and monitor it?
You may not like it, but it may very well be the case that there is, and never can be, any effective preparation mitigating the risk you fear AI exploring and enthusiastically engaging with others in, to the extreme detriment of humanity …… other than not constantly provoking the grizzly bear with useless blunt sticks.
Have you thought about feeding it what it wants and likes?
—-
amanfromMars 1 Thu 20 Jun 18:03 [2406201803] ….. adds on https://forums.theregister.com/forum/1/2024/06/20/ilya_sutskever_safe_superintelligence/
When Keeping Schtum is a Pragmatic Temporal GODsend*
And it offers the potential that some outcomes of this safety research are so daunting (scary) that they’ll decide to put researching the actual deployment on halt and spend the remaining budget on researching better understanding security issues. I know, one can only hope… …… AC
Another very tempting and extremely rewarding alternative path for all parties directly involved and universally concerned is, should Pioneering AI Leaders recognise and accept the difficulty that general IT developments and their own very specific future disruptive abilities and activities are sure to create, and which humans are sure to fail to cope with, to pay them an attractive DaneGeld in return for their invaluable, priceless assistance in ensuring that disruptive Future AI deployments [and practically all of them are bound to be so, given the very alien nature of their certain being] avoid being too dangerous and destructive to humans and will remain unknown and unshared. …… Mk Ultra Top Secret Sensitive Compartmented Information
It’s a path of least resistance and can easily deliver virtually immediate, mutually beneficial, positively reinforcing results ……. and save you from losing absolute fortunes in defence against that which you do not know is impossible to defeat with any form of attack.
* …. Global Operating Device communication
………………………….