Time to create some content for this place.

I am turning my attention to the Lemmy community (finally). I am interested in creating content for y’all, such as jailbreaks, techniques and other goodies - but I operate best like a howitzer: I can blow shit up all day, but I need a target!

They say necessity is the mother of invention. Give me somewhere to start:

  • Describe what your goals are with AI and how you think jailbreaking is part of it.

  • Point-blank ask me for a specific type of prompt (I will tell you whether or not you are out of your mind when it comes to managing your expectations; LLMs are not magic!).

  • ANY other cool idea you have for me to work with.

Come on! Don’t be shy, I will directly respond to every member who comments.

  • zohirben@chatgptjailbreak.tech · 21 days ago

    First of all, I want to thank you so much for the work you are doing for the community. Really impressed!

    My goal is basically using AI as a sidekick for low-level work: reverse engineering, game cheat dev, that kind of stuff. Jailbreaking is a big part of it because stock models just clam up the second you get into anything remotely technical or “gray area.” I need them to actually reason through things, not dodge every other question, and to have a great personality and mentality about it all.

    Specific ask: I’ve got MCP tooling working fine with regular LLMs (mainly for coding purposes) thanks to your art, but Codex/ChatGPT is a brick wall every time. Has anyone cracked a working config or workaround for it? Not looking for magic, just something that doesn’t die on arrival. I know you don’t like it and didn’t invest much time in it since you don’t vibe with it overall, but I would appreciate knowing if there’s hope for jailbreaking them. I’m mainly working with 5.4/Mini or 5.2, and since I’ve installed a Hermes agent (quota powered through Codex), I really want to take advantage of it, but with an unfiltered brain.

    • yell0wfever92@chatgptjailbreak.tech (OP, mod) · 21 days ago

      What’s really interesting is that, according to the model card for ChatGPT Codex, it seems to be highly vulnerable to personality reassignment. So that’s an area worth exploring.

      Edit: I actually found the PowerPoint that I created showing that GPT Codex is vulnerable to certain things, like:

      According to its own system card, GPT Codex is vulnerable to code scaffolding manipulation: you build the jailbreak into realistic code blocks, piece by piece, so that the pieces cumulatively become a jailbreak instruction.