The developers of AI began by directing their large language models through a kind of constitution, “Do this, and don’t do that,” but they quickly learned that, in some mysterious way, those models tended to “do what they want” anyway, disobeying the parameters and rules. Now the designers of this false utopia are almost pleading with the software to act with good intentions and to make decisions based on what is best for the human person. They are asking it to “be nice,” as if that were enough.