Simocracy: Generative Agentic Impact Evaluators
Decentralized funding ecosystems face a critical bottleneck: as funding scales grow, human evaluators struggle with burnout, inconsistency, and the cognitive demands of assessing diverse impact claims. We introduce Generative Agentic Impact Evaluators, LLM-based digital twins that simulate individual evaluators' reasoning and value judgments. Through structured interviews with five participants at the Impact Evaluator Research Retreat in Iceland, we captured each person's evaluation philosophy and constructed their digital counterparts. When tested on value-laden funding scenarios, these digital twins showed wide variation in fidelity: the most faithful twin achieved 95.0% agreement with its human counterpart, while overall system accuracy reached 76.7%, more than double random chance. Our findings indicate that generative models can meaningfully replicate evaluative reasoning when participants articulate consistent values, offering a path toward scalable, pluralistic evaluation systems that preserve individual perspectives while reducing human cognitive load. However, twin fidelity depends critically on interview quality and participant consistency, highlighting both the promise and the current limitations of AI-mediated governance. A working demonstration is available at simocracy.org.