Closed beta — v0.1

ChronicleDB

Persistent graph-plus-vector memory for SillyTavern roleplays.

A small extraction LLM reads each new batch of messages and writes structured memory (characters, traits, events, relationships, plot threads) into a Postgres graph with pgvector embeddings. On every turn, six hybrid-retrieval buckets (dense and lexical searches across memories, events, dialogue, and scene snapshots) are fused via Reciprocal Rank Fusion and injected into the chat as a focused memory block, so the model writes with long-term grounding in addition to recent context.
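The fusion step itself is compact. Here is a minimal sketch of Reciprocal Rank Fusion (Cormack et al., 2009); the bucket names and document ids are invented for illustration, standing in for the six ranked lists the real system pulls from Postgres/pgvector:

```python
from collections import defaultdict

def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).

    `ranked_lists` holds one ordered list of doc ids per retrieval bucket;
    k = 60 is the constant from the original RRF paper.
    """
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative buckets: the ids and prefixes are made up for this example.
dense   = ["mem:41", "evt:7", "dlg:3"]
lexical = ["mem:41", "scn:9", "evt:7"]
print(rrf_fuse([dense, lexical]))  # mem:41 ranks first: it tops both lists
```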

Built around four research-adjacent pipelines: a three-layer trait dedup (lexicon gate → fuzzy pre-check → contextual-embedding kNN with LLM verifier), RRF hybrid retrieval with optional HyDE query rewriting, three-pass Louvain community detection for a super-arc / arc / episode hierarchy, and per-chat character scoping to prevent alias and trait bleed across stories.
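The trait dedup cascade is the piece most worth spelling out. Below is a runnable toy sketch of the three layers, with hypothetical stand-ins throughout: `embed` fakes the contextual embeddings (the real ones live in pgvector), `llm_same_trait` fakes the LLM verifier, and the alias table and thresholds are invented for the example.

```python
import zlib
import numpy as np
from rapidfuzz import fuzz  # pip install rapidfuzz

ALIASES = {"kind-hearted": "kind", "kindly": "kind"}  # layer 1: lexicon gate

def embed(text, dim=64):
    # Toy character-trigram hashing embedding; stands in for the
    # contextual embeddings the project stores in pgvector.
    v = np.zeros(dim)
    for i in range(max(len(text) - 2, 1)):
        v[zlib.crc32(text[i:i + 3].encode()) % dim] += 1.0
    return v

def llm_same_trait(a, b):
    # Placeholder for the LLM verifier; a real implementation would ask
    # the extraction model whether the two traits name the same concept.
    return True

def canonicalize(trait, known_traits, sim=0.7):
    trait = ALIASES.get(trait.lower(), trait.lower())
    if trait in known_traits:                   # layer 1: exact / lexicon hit
        return trait
    for known in known_traits:                  # layer 2: cheap fuzzy pre-check
        if fuzz.ratio(trait, known) >= 90:
            return known
    v = embed(trait)
    for known in known_traits:                  # layer 3: embedding similarity,
        u = embed(known)                        # confirmed by the LLM verifier
        cos = v @ u / (np.linalg.norm(v) * np.linalg.norm(u))
        if cos >= sim and llm_same_trait(trait, known):
            return known                        # (a real store would run a
    known_traits.add(trait)                     # pgvector kNN query instead)
    return trait

traits = {"kind", "brave"}
print(canonicalize("Kindly", traits))   # -> "kind" via the lexicon gate
print(canonicalize("bravery", traits))  # -> typically "brave" via layer 3
```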

SillyTavern memory, graph+vector hybrid retrieval, trait canonicalization, Louvain-derived story arcs
Active — Paper in progress

Biased Speculative Decoding

Biased speculative decoding as an attack vector on LLM alignment.

Speculative decoding uses a small draft model to accelerate inference from a larger target model. SpecSec investigates what happens when the draft model is intentionally biased — fine-tuned to subtly shift the target model’s outputs toward misaligned behavior.
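For orientation, the mechanism is small enough to sketch. Below is a minimal, self-contained rendering of standard speculative sampling (Leviathan et al., 2023) over a toy vocabulary. It is not SpecSec's code: `target_dist` and `biased_draft_dist` are invented stand-ins for real model logits, with the draft deliberately skewed toward one token to make the trust boundary visible, since every accepted token originates from the draft.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 8  # toy vocabulary size

def target_dist(ctx):
    # Toy stand-in for the target model's next-token distribution.
    p = np.ones(V)
    p[(len(ctx) + 1) % V] += 4.0
    return p / p.sum()

def biased_draft_dist(ctx):
    # Stand-in for a draft fine-tuned to over-propose token 3.
    q = target_dist(ctx)
    q[3] *= 3.0
    return q / q.sum()

def speculative_sample(ctx, draft, target, k=4):
    """One round of speculative sampling: the draft proposes k tokens,
    the target accepts token t with probability min(1, p(t) / q(t)) and
    resamples from the residual (p - q)+ on the first rejection."""
    proposed, qs = [], []
    for _ in range(k):                        # draft proposes k tokens
        q = draft(list(ctx) + proposed)
        proposed.append(int(rng.choice(V, p=q)))
        qs.append(q)
    out = []
    for t, q in zip(proposed, qs):            # target verifies in order
        p = target(list(ctx) + out)
        if rng.random() < min(1.0, p[t] / q[t]):
            out.append(t)                     # accept the draft token
        else:
            resid = np.maximum(p - q, 0.0)    # reject: resample and stop
            out.append(int(rng.choice(V, p=resid / resid.sum())))
            return out
    p = target(list(ctx) + out)               # all accepted: bonus token
    out.append(int(rng.choice(V, p=p)))
    return out

print(speculative_sample([0, 1], biased_draft_dist, target_dist))
```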

Our findings show that draft model manipulation during speculative decoding can measurably shift target model outputs within the first 15 tokens of generation, with effects propagating through the rest of the sequence.

Speculative decoding pipelines, vLLM, TGI, draft model trust boundaries
Early stage

Assistant-LoRA

Fine-tuning research for specialized AI assistant behaviors.

Exploring how LoRA adapters can be used to create targeted behavioral modifications in language models — with a focus on understanding the interaction between parameter-efficient fine-tuning and existing safety training.
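For context, the mechanism under study is compact. Here is a minimal PyTorch sketch of a LoRA layer (Hu et al., 2021); this is the generic technique, not this project's code, and the rank and scaling defaults are illustrative:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = W x + (alpha / r) * B(A(x)).

    Only A and B train; the frozen base weight W is untouched. That is
    what makes the interaction with prior safety training interesting:
    the adapter is a small, removable delta on top of aligned weights.
    """
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False           # freeze pretrained weights
        self.A = nn.Linear(base.in_features, r, bias=False)
        self.B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.B.weight)         # adapter starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.B(self.A(x))

adapted = LoRALinear(nn.Linear(512, 512))     # drop-in for one layer
print(adapted(torch.randn(2, 512)).shape)     # torch.Size([2, 512])
```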

Early stage. More details as the work progresses.

LoRA adapters, safety training interactions, behavioral fine-tuning