Weave combines LLMs and domain-specific ML to quantify engineering output and quality.
Engineers and engineering leaders have always wanted to know how good they are, yet they've never been able to quantify it. Historically, people relied on metrics like lines of code (correlation with effort: 0.34), number of PRs, or story points (slightly better, at ~0.35). These metrics are, frankly, terrible proxies for productivity.
We've developed a custom model that directly analyzes code and its impact, achieving a far stronger 0.94 correlation. The result is a standardized engineering output metric that doesn't reward vanity.