After reading the GPT-4 research paper, I can say for certain I am more concerned than ever. • https://www.reddit.com, SouthRye
I decided to sit down and actually read through the latest report on GPT-4. I've been a big fan of the tech and have used the API to build smaller pet projects, but after reading some of the safety concerns in this latest research I can't help but feel the tech is moving WAY too fast.
Per Section 2.0, these systems are already exhibiting novel behaviors like long-term independent planning and power-seeking.
To test for this in GPT-4, ARC basically gave it root access, a little bit of money (I'm assuming crypto), and access to its OWN API. This would theoretically let the researchers see whether it would create copies of itself, crawl the internet, improve itself, or generate wealth. This in itself seems like a dangerous test, but I'm assuming ARC had some safety measures in place.
ARC's linked report also highlights that many ML systems are not fully under human control and that steps need to be taken now to ensure safety.