Hey HN, I'm Dimittri and we're building Sonarly (https://sonarly.com), an AI engineer for production. It connects to your observability tools (Sentry, Datadog) and user feedback channels, triages issues, and fixes them to cut your resolution time. Here's a demo: https://www.youtube.com/watch?v=rr3VHv0eRdw.

Sonarly is really about removing the noise from production alerts: it groups duplicates and returns a root cause analysis to save on-call engineers time and cut your MTTR.

Before starting this company, my co-founder and I ran a B2C edtech app that, on some days, had thousands of active users. We shipped several times a day, relying on user feedback to catch problems. Then we set up Sentry; it caught a lot of bugs, but we were getting up to 50 alerts a day, which is a lot for two people. We spent a lot of time filtering out the noise to find the real signal and decide which bug to focus on.

At the same time, we saw how important it is to fix a bug fast once it hits users. In the worst case a bug means churn; at best, a frustrated user. And there are always bugs in production: code errors, database mismatches, infrastructure overload, and many issues tied to a specific user behavior. You can't catch all of these beforehand, even with E2E tests or AI code reviews (which catch a lot of bugs but obviously not all, and take time to run on every deployment). This is even more true with vibe coding (or agentic engineering).

We started Sonarly with that in mind: more software than ever is being built, and users deserve the best possible experience on every product. The core goal of Sonarly is to reduce MTTR (Mean Time To Repair).

We started by recreating a Sentry-like tool but without the noise, using only text and session replays as the interface. We built our own frontend tracker (based on the open-source rrweb) and used the open-source backend Sentry SDK. Companies could just drop another tracker into their frontend and add our DSN to their Sentry config to send data to us in addition to Sentry.
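For illustration only (this isn't necessarily how our integration was wired), sending events to a second, Sentry-compatible endpoint can be done with a standalone client in the Sentry JS SDK; the Sonarly DSN below is a placeholder:

    import * as Sentry from "@sentry/browser";

    // Main Sentry client, configured as usual.
    Sentry.init({ dsn: "https://<key>@o0.ingest.sentry.io/<project>" });

    // A second, standalone client pointed at a Sentry-compatible endpoint
    // (placeholder DSN), so events can be sent there as well.
    const sonarlyClient = new Sentry.BrowserClient({
      dsn: "https://<key>@ingest.sonarly.example/<project>",
      transport: Sentry.makeFetchTransport,
      stackParser: Sentry.defaultStackParser,
      integrations: [],
    });

    // Forward uncaught errors to the second client too.
    window.addEventListener("error", (event) => {
      sonarlyClient.captureException(event.error);
    });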

We wanted to build an interface where you don't need to check logs, dashboards, traces, metrics, and code yourself: the agent does it for you and explains, in plain English, the "what," the "why," and the "how do I fix it."

We quickly realized companies don't want to add a new tracker or change their monitoring stack, because these platforms already do the job they're supposed to do. So we decided to build on top of them. Now we connect to tools like Sentry, Datadog, Slack user-feedback channels, and other integrations.

Claude Code is very good at writing code, but handling runtime issues requires more than raw coding ability. It demands deep runtime context, immediate reactivity, and intelligent triage; you can't simply pipe every alert directly into an agent. That's why our first step is converting noise into signal. We group duplicates and filter false positives to isolate clear issues. Once we have a confirmed signal, we trigger Claude Code with the exact context it needs, like the specific Sentry issue and relevant logs fetched via MCP (mostly grep over Datadog/Grafana). Things get much harder, however, with multi-repo and multi-service architectures.
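As a rough sketch of that hand-off (not our actual implementation; the issue fields, repo path, and prompt are made up), triggering Claude Code in its non-interactive print mode could look like this:

    import { execFile } from "node:child_process";

    // Hypothetical shape of a confirmed signal after dedup and false-positive filtering.
    interface TriagedIssue {
      title: string;        // e.g. the Sentry issue title
      stackTrace: string;   // representative stack trace for the group
      relevantLogs: string; // log lines pulled from Datadog/Grafana
    }

    // Hand the confirmed signal to Claude Code in headless ("print") mode,
    // scoped to the repo the issue points at.
    function triggerFix(issue: TriagedIssue, repoPath: string): void {
      const prompt = [
        `Production issue: ${issue.title}`,
        `Stack trace:\n${issue.stackTrace}`,
        `Relevant logs:\n${issue.relevantLogs}`,
        "Find the root cause and propose a fix as a patch.",
      ].join("\n\n");

      execFile("claude", ["-p", prompt], { cwd: repoPath }, (err, stdout) => {
        if (err) throw err;
        console.log(stdout); // the agent's analysis / proposed fix
      });
    }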

So we built an internal map of the production system, essentially a .md file that we update dynamically. It shows every link between services, logs, and metrics so that Claude Code can understand the issue faster.
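To make that concrete, such a map might look something like this (all service names and links below are invented for illustration):

    # Production map (auto-updated)

    ## checkout-api (repo: org/checkout-api)
    - Calls: payments-service (gRPC), Postgres `orders` DB
    - Logs: Datadog, service:checkout-api
    - Metrics: Grafana dashboard "Checkout latency"
    - Sentry project: checkout-api

    ## payments-service (repo: org/payments)
    - Called by: checkout-api
    - Logs: Datadog, service:payments
    - Known failure mode: provider timeouts surface as 502s in checkout-api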

One of our users on Sentry was receiving ~180 alerts/day. Here is what their workflow looked like:

- Receive the alert

- Either 1) break focus from their current task (or wake up), or 2) not look at the alert at all (most of the time)

- Check dashboards to find the root cause (for infra issues) or read the stack trace, events, etc.

- Try to figure out whether it was a false positive or a real problem (or a known problem already in the fix pipeline)

- Then fix it by giving Claude Code the right context

We started by cutting the noise: grouping issues took them from 180 alerts/day to 50/day, and assigning a severity based on user/infra impact brought that down to about 5 issues to focus on each day. Triage happens in 3 steps: deduplicating before triggering a coding agent, gathering a root cause for each alert, and re-grouping by RCA.
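A simplified sketch of the grouping and severity-scoring ideas (the fingerprint, fields, and thresholds here are illustrative, not the ones we actually use):

    // Group raw alerts by a fingerprint (error type + top stack frame),
    // then score each group by how many distinct users it affects.
    interface Alert {
      errorType: string;
      topFrame: string;
      userId: string;
    }

    type Severity = "low" | "medium" | "high";

    interface IssueGroup {
      count: number;
      users: Set<string>;
      severity: Severity;
    }

    function triage(alerts: Alert[]): Map<string, IssueGroup> {
      const groups = new Map<string, IssueGroup>();
      for (const a of alerts) {
        const fingerprint = `${a.errorType}:${a.topFrame}`;
        const g = groups.get(fingerprint) ?? { count: 0, users: new Set<string>(), severity: "low" as Severity };
        g.count += 1;
        g.users.add(a.userId);
        // Illustrative thresholds: more affected users => higher severity.
        g.severity = g.users.size > 100 ? "high" : g.users.size > 10 ? "medium" : "low";
        groups.set(fingerprint, g);
      }
      return groups;
    }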

We launched self-serve (https://sonarly.com) and would love feedback from engineers. We're especially curious about your current workflow when you receive an alert from channels like Sentry (error tracking), Datadog (APM), or user feedback. How do you assign who should fix it? Where do you get the context to fix the issue? Do you have any automated workflow for fixing bugs, and do you currently use anything to filter the noise out of alerts?

We have a large free tier as we mainly want feedback. You can self-serve in under 2 minutes. I'll be in the thread with my co-founder to answer your questions, give more technical details, and take your feedback: positive, negative, brutal, everything's constructive!


• jefflinwood 3 hours ago

I tried the onboarding, but I think it timed out on the Analyzing screen because it couldn't find any issues in my Sentry environment. So I couldn't get too much further.

EDIT: It did let me in, but I don't know why it took so long.

I've worked on teams where there's been one person on rotation every sprint to catch and field issues like these, so taking that job and giving it to an AI agent seems like a reasonable approach.

I think I'd be most concerned about having a separate development process outside of the main issue queue, where agents aren't necessarily integrated into the main workstream.

• Dimittri an hour ago

hey, thanks for the feedback! After onboarding, we process your most recent issues to show you the triage and analysis; it only works if you have past alerts. Do you have alerts in Sentry?

We also have a Slack bot feature so teams stay inside their existing workflow and don't have to go check the dashboard.