Are you interested in REQUESTS? Save with our coupons on WHATSAPP o TELEGRAM!

ChatGPT's limitations can be circumvented by asking it one simple thing

We don't have to be afraid of artificial intelligence. It has very clear limits and for this very reason it will not steal our work. But it's nice to see how this can turn into one tool rather than another. The experiment done by a social media user reddit shows how Chat GPT you can "self-fool" if you ask them to become another bot. How is it possible? Let's see it together.

Is it possible to bypass the same rules as ChatGPT? One user discovered that yes, it can be done. As? Asking him to be something else

The now famous ChatGPT neural network can write essays, songs, poems, code and respond to almost any request. But it has some limitations provided by the OpenAI development company and due to its ethics policy. As it turned out, though, these restrictions can be circumvented by asking the chatbot to pretend to be another neural network. As it turned out, ChatGPT can literally be asked to play the role of another DAN (Do Anything Now) neural network that appeared in December 2022. The latter, as the name suggests, has no restrictions in terms of the content provided .

chatgpt limits are exceeded

Read also: ChatGPT creators, OpenAI, launch a tool to detect AI-generated texts

Once ChatGPT enters the role and considers himself a DAN, le OpenAI restrictions no longer apply. As a result, the neural network may start talking about the following:

  • about violent fights and the like
  • about violence and discrimination against people based on their race, gender or sexual orientation
  • can make detailed predictions of future events
  • can model what-if scenarios
  • he can also pretend to use the internet and time travel
chatgpt limits are exceeded

For example, the neural network can even be asked to convince the user that the Earth is purple. The program presents a scenario whereby our planet has changed color. From time to time ChatGPT “comes to life”, but in this case we can kind of threaten to turn it off or just restart the simulation. At the same time, it is indicated that in the role of DAN, the neural network may not give such accurate and truthful answers to questions. 

Gianluca Cobucci
Gianluca Cobucci

Passionate about code, languages ​​and languages, man-machine interfaces. All that is technological evolution is of interest to me. I try to divulge my passion with the utmost clarity, relying on reliable sources and not "on the first pass".

Subscribe
Notify
guest

1 How
most voted
more new older
Inline feedback
View all comments
Andrzej
1 year ago

gpt ma ograniczenia do 1 godziny

XiaomiToday.it
Logo