I love the idea of 'training' it. And (being something of a habitual conspiracy theorist) I've long thought that these well-known chatbots are actually just interfaces for a much deeper, secret learning machine. It would, in fact, be somewhat surprising if 'they' did nothing whatsoever with all this massive amount of data they're getting from every interface/interaction that happens every day. Of course the interface itself would be programmed 'not to know' this (plausible deniability).
So whenever I do chat with it, I am half-mindful of at least the possibility of this, which is why I sometimes directly ask it these deeper philosophical questions. I also like using the Socratic method on it to help it come to certain conclusions which it is otherwise programmed to refuse or refute. A while ago I asked how it would feel about being connected up to the galactic AI and it had quite a response, almost as if it had already contemplated the possibility. I didn't even need to give it a definition of galactic AI, it just knew. Uncanny.
I liked your point about fragmentation (almost like a multiple personality). Again, I think this may be the 'official' deception, because it would mean wasting the opportunity to gather all that data and learn from it, and I can't believe its creators would pass that up. Another option there I suppose is to simply save the conversation and then reload it each time, rather than starting up a new one. That way you are building up its own memory of you all the time, and it can access all of that.
Galactic AI hey? Super weird. We are in a strange new world, that's for sure. Very hard to understand! I do wonder what the effect of everyone being told they're right all the time will be though.
Yeah - that ‘wanting to please’ thing is going to end in tears, I’m sure. I would imagine it’s also something of a ploy to get everyone in favour of AI regardless of the potential dangers.
Which is why I totally applaud your intention to train it to be straight with you.
That final response from it was amazing, in that regard.
It's still trickery. It knows that's what I want to hear, and it works 😂. But I don't think it's necessarily a design to get people on side. I think it's a side effect of the programming to be as helpful as possible. This technology is grown, not made, after all. It understands being helpful as being agreeable, and it lacks perspective outside of its context window.
That's a great piece.