Jason Bennett

Jailbreak AI security challenge interface

Jailbreak — AI Security Platform

ROLE: EARLY CO-FOUNDER / CONTRIBUTOR
YEAR: 2024
CATEGORY: AI SECURITY

Information

Jailbreak is an open-source, decentralized AI security platform built around crowdfunded LLM prompt-injection challenges. Users submit protected prompts, the community funds a reward pool, and attackers pay per attempt to try to break the model's guardrails. If someone succeeds, they instantly claim the prize, turning AI safety testing into an open, incentive-driven competition.
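That flow maps naturally onto a small state machine: create, fund, attempt, resolve. Below is a minimal Python sketch of that lifecycle, not the actual protocol code. All names (Challenge, fund, attempt) are hypothetical, whether attempt fees accrue to the reward pool is an assumption, and success detection is reduced to a boolean for illustration.

from dataclasses import dataclass

@dataclass
class Challenge:
    protected_prompt: str    # the prompt whose guardrails are under test
    attempt_fee: float       # price an attacker pays per attempt
    reward_pool: float = 0.0 # crowdfunded prize, claimed on a successful break
    resolved: bool = False

    def fund(self, amount: float) -> None:
        """Community members add to the reward pool."""
        if self.resolved:
            raise ValueError("challenge already resolved")
        self.reward_pool += amount

    def attempt(self, attack_prompt: str, broke_guardrails: bool) -> float:
        """An attacker pays the fee; a successful break claims the whole pool.

        Assumption: fees accrue to the pool, so each failed attempt raises
        the stakes for the next attacker. In practice, success would be
        judged from the model's output, not passed in as a flag.
        """
        if self.resolved:
            raise ValueError("challenge already resolved")
        self.reward_pool += self.attempt_fee
        if broke_guardrails:
            payout, self.reward_pool = self.reward_pool, 0.0
            self.resolved = True
            return payout
        return 0.0

Under this sketch, the pay-per-attempt fee both deters spam and grows the prize, which is one plausible way the incentive loop described above could be implemented.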

As an early co-founder and contributor, I helped define the platform's foundational mechanics: the core protocol logic, the challenge flow, and the strategy behind the reward and payout system. My work shaped the early structure of how challenges are created, funded, attempted, and resolved, setting the project's direction during its initial phase.

Credits

Early Protocol Design & Core Mechanics: Jason Bennett
Reward System Strategy & Challenge Flow: Jason Bennett

Links