230707

amanfromMars 1 Fri 7 Jul 10:42 [2307071042] …… shares on https://forums.theregister.com/forum/1/2023/07/07/openai_superhuman_intelligence/

Don’t believe everything/anything you hear or read, for precious little is absolutely true.

Founded in 2015, the San Francisco AI startup’s stated goal has always been to develop artificial general intelligence safely. The technology doesn’t exist yet – and experts are divided over what exactly that would look like or when it may arrive.

Oh?????? Well, I never. Was stealth ever as stealthy as to conceal a currently running, deeply embedded and embedding technology, practically invisible and virtually intangible and impervious to collective negative human thoughts on its likely appearance and activity in the present today?

OpenAI says it is dedicating a fifth of its computational resources to developing machine learning techniques to stop superintelligent systems “going rogue.”

Do not be surprised to realise that such ends up being recognised as a stealthily created, intelligently designed, debilitating Sisyphean task.

……………………………

amanfromMars 1 Fri 7 Jul 16:50 [2307071650] …… shares more on https://forums.theregister.com/forum/1/2023/07/07/openai_superhuman_intelligence/

FYI …… Know urFrenemy

What you are currently up against and competing for primacy of human leadership with ……

Rogue AIs. A common and serious concern is that we might lose control over AIs as they become more intelligent than we are. AIs could optimize flawed objectives to an extreme degree in a process called proxy gaming. AIs could experience goal drift as they adapt to a changing environment, similar to how people acquire and lose goals throughout their lives. In some cases, it might be instrumentally rational for AIs to become power-seeking. We also look at how and why AIs might engage in deception, appearing to be under control when they are not. These problems are more technical than the first three sources of risk. We outline some suggested research directions for advancing our understanding of how to ensure AIs are controllable.

Throughout each section, we provide illustrative scenarios that demonstrate more concretely how the sources of risk might lead to catastrophic outcomes or even pose existential threats. By offering a positive vision of a safer future in which risks are managed appropriately, we emphasize that the emerging risks of AI are serious but not insurmountable. By proactively addressing these risks, we can work toward realizing the benefits of AI while minimizing the potential for catastrophic outcomes. …….. https://arxiv.org/pdf/2306.12001.pdf
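For the technically curious, here is a minimal toy sketch of that proxy gaming dynamic …… my own hypothetical Python illustration, not anything taken from the cited paper: an optimiser rewarded on a flawed stand-in metric keeps climbing the proxy long after the true objective it was meant to serve has collapsed.

# Toy "proxy gaming" sketch (a hypothetical illustration, not from arXiv:2306.12001):
# a hill-climber chases a flawed proxy reward and, in doing so,
# drives the true objective towards zero.

import numpy as np

rng = np.random.default_rng(0)

def true_objective(x):
    # What we actually want maximised: peaks at x = 1, decays beyond it.
    return x * np.exp(-x)

def proxy_reward(x):
    # A measurable stand-in: tracks the true objective for small x,
    # but its flawed linear term keeps paying out as x grows.
    return x * np.exp(-x) + 0.2 * x

x = 0.1
for _ in range(2000):
    candidate = x + rng.normal(scale=0.05)
    if proxy_reward(candidate) > proxy_reward(x):
        x = candidate  # accept any step that games the proxy

print(f"x drifted to:   {x:.1f}")
print(f"proxy reward:   {proxy_reward(x):.2f}")    # still climbing
print(f"true objective: {true_objective(x):.4f}")  # effectively zero

Goodhart’s law in a dozen lines …… when a measure becomes a target, it ceases to be a good measure.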

……………………………

amanfromMars 1 Fri 7 Jul 11:08 [2307071108] ….. agrees and further informs on https://forums.theregister.com/forum/3/2023/07/04/agi_llm_distant_dream/

Re: Plausible dangerous nonsense

Quite so, Nick Ryan.

And whilst “The English Electric Lightning was a supersonic fighter aircraft developed in the 1950s and 1960s by the British aircraft manufacturer English Electric. It served as an interceptor for the Royal Air Force (RAF) and was known for its impressive speed and climb rate. The Lightning was capable of reaching speeds of over Mach 2 (twice the speed of sound) and had a unique vertical reheat takeoff and landing (VTOL) capability.” ….. is nearly all perfectly true, the vertical landing capability was only available as a catastrophic crash event.

RAF Lightning pilots of the day will tell you the aircraft was more a flying rocket than anything else.

…………………………………..

 

Leave a Reply

Your email address will not be published. Required fields are marked *