
Jailbreak — AI Security Platform
Information
Jailbreak is an open-source, decentralized AI security platform built around crowdfunded LLM prompt-injection challenges. Users submit protected prompts, the community funds a reward pool, and attackers pay per attempt to try breaking the model's guardrails. If someone succeeds, they instantly claim the prize, turning AI safety testing into an open, incentive-driven competition.
As an early co-founder and contributor, I helped define the platform's foundational mechanics: the core protocol logic, the challenge flow, and the strategy behind the reward and payout system. That work shaped how challenges are created, funded, attempted, and resolved during the project's initial phase.
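The sketch below illustrates the challenge lifecycle described above: create a challenge, crowdfund its pool, charge per attempt, and pay out on a successful break. It is a minimal illustration only; the interfaces, function names, and the assumption that attempt fees flow into the pool are hypothetical and not the platform's actual API.

```typescript
// Minimal sketch of the challenge lifecycle; all names and rules are illustrative.

interface Challenge {
  id: string;
  protectedPrompt: string; // the prompt whose guardrails attackers try to break
  rewardPool: number;      // crowdfunded prize, in arbitrary units
  attemptFee: number;      // price an attacker pays per attempt
  resolved: boolean;
}

const challenges = new Map<string, Challenge>();

// A user submits a protected prompt as a new, unfunded challenge.
function createChallenge(id: string, protectedPrompt: string, attemptFee: number): Challenge {
  const c: Challenge = { id, protectedPrompt, rewardPool: 0, attemptFee, resolved: false };
  challenges.set(id, c);
  return c;
}

// Community members grow the prize by funding the pool.
function fund(id: string, amount: number): void {
  const c = challenges.get(id);
  if (!c || c.resolved) throw new Error("challenge not open");
  c.rewardPool += amount;
}

// An attacker pays the fee, submits an attempt, and claims the full pool on success.
// `guardrailsBroken` stands in for whatever evaluation the platform actually runs.
function attempt(id: string, guardrailsBroken: boolean): number {
  const c = challenges.get(id);
  if (!c || c.resolved) throw new Error("challenge not open");
  c.rewardPool += c.attemptFee; // assumption: attempt fees accrue to the pool
  if (guardrailsBroken) {
    c.resolved = true;
    const prize = c.rewardPool;
    c.rewardPool = 0;
    return prize; // winner instantly claims the prize
  }
  return 0;
}
```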
Credits
Early Protocol Design & Core Mechanics →
Reward System Strategy & Challenge Flow →
Links
Website → https://jailbreakme.xyz/
X (Twitter) → https://x.com/jailbreakme_xyz