
Profiling — introduction

Profile slow code in your app, view a flame graph in the mobile dashboard, and automatically detect performance regressions between releases. Profiling follows a transaction-scoped model: you pick which paths to instrument instead of paying for a continuous sampler.

Crash monitoring tells you where things crash. Profiling tells you where it’s slow:

  • A screen that takes 3 seconds to mount but never crashes → invisible to crash monitoring, visible to the profiler
  • A performance regression after a refactor → the cross-release graph flags the guilty function
  • A backend endpoint eating 80% of CPU under load → the flame graph points at the function

It’s the performance counterpart to crash monitoring.

How it works:

  1. The SDK collects stack samples at a fixed interval (1 ms by default on Hermes/V8)
  2. At the end of each profiled transaction, the SDK POSTs the raw blob to /api/profiles
  3. Samples are aggregated into P50/P95/P99 per function × release, feeding the flame graph and the cross-release diff
```js
import { Pionne } from '@pionne/react-native';

// Sugar — wrap an entire transaction
await Pionne.profile('CheckoutFlow', async () => {
  await fetchCart();
  await applyDiscount();
  await submitOrder();
}, { route: '/checkout' });

// Or manual, for more control
Pionne.startProfile('HomeScreenMount', { route: '/home' });
// … render path …
const profileId = await Pionne.stopProfile();
console.log('Profile shipped:', profileId);
```

If the engine doesn’t expose the sampler (JSC, some older Hermes versions), startProfile silently returns false and the call is a no-op. Your code stays engine-agnostic — there is no runtime check to write.

Pionne.profile(name, fn, meta?) is the recommended sugar: it guarantees stopProfile is called even if fn throws, and returns whatever fn returns.
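That guarantee boils down to a try/finally around the wrapped function. A minimal sketch of the idea — the `fakeSdk` object below is a hypothetical stand-in with the same shape as the SDK's API, not Pionne’s actual implementation:

```js
// Sketch of the profile() sugar: start a profile, run fn, and
// guarantee stopProfile runs whether fn resolves or throws.
async function profile(sdk, name, fn, meta) {
  sdk.startProfile(name, meta);
  try {
    // Whatever fn resolves to is passed through to the caller.
    return await fn();
  } finally {
    // Runs on success *and* on throw, so the profile is always closed.
    await sdk.stopProfile();
  }
}

// Tiny fake SDK to demonstrate the guarantee.
const calls = [];
const fakeSdk = {
  startProfile: (name) => calls.push(`start:${name}`),
  stopProfile: async () => { calls.push('stop'); return 'profile-1'; },
};

// Even when the wrapped function throws, stopProfile is still called.
profile(fakeSdk, 'CheckoutFlow', async () => { throw new Error('boom'); })
  .catch(() => console.log(calls.join(','))); // start:CheckoutFlow,stop
```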

You can also POST directly to /api/profiles if you want to profile a platform that doesn’t have a native SDK implementation yet. The backend accepts two formats:

Chrome Trace Event format — produced by Hermes, V8 (`--cpu-prof`), and Chrome DevTools:

```json
{
  "name": "CheckoutFlow",
  "platform": "react_native",
  "release": "1.4.2",
  "environment": "production",
  "route": "/checkout",
  "duration_ms": 2840,
  "samples_count": 2840,
  "sample_interval_us": 1000,
  "samples": {
    "traceEvents": [
      { "ph": "X", "name": "App.render", "ts": 0, "dur": 1200 },
      { "ph": "X", "name": "Cart.fetch", "ts": 1200, "dur": 980 },
      { "ph": "X", "name": "Order.submit", "ts": 2180, "dur": 660 }
    ]
  }
}
```

The aggregator only reads events with ph: "X" (complete events with an explicit dur). B/E (begin/end) pairs are ignored in the MVP.
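To illustrate the filter, here is a sketch of the single-profile step — keeping only complete events and summing durations per function name. This is illustrative, not the backend’s actual code (cross-profile percentiles are computed server-side):

```js
// Keep only ph === "X" complete events and sum their durations
// per function name. B/E pairs are skipped, as in the MVP.
function sumCompleteEvents(traceEvents) {
  const totals = new Map();
  for (const ev of traceEvents) {
    if (ev.ph !== 'X') continue; // begin/end pairs are ignored
    totals.set(ev.name, (totals.get(ev.name) || 0) + ev.dur);
  }
  return totals;
}

const events = [
  { ph: 'X', name: 'App.render', ts: 0, dur: 1200 },
  { ph: 'B', name: 'Legacy.begin', ts: 0 },          // ignored
  { ph: 'X', name: 'Cart.fetch', ts: 1200, dur: 980 },
  { ph: 'X', name: 'App.render', ts: 2200, dur: 300 },
];
console.log(sumCompleteEvents(events).get('App.render')); // 1500
```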

Auth: header X-Pionne-Token: pio_live_… (same as for /ingest).

Response:

```json
{ "ok": true, "profile_id": 42 }
```
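Putting the pieces together, a sketch of a direct upload. The payload fields mirror the example above; `PIONNE_HOST` and the helper name are placeholders, not part of the SDK:

```js
// Build a payload in the shape the endpoint expects.
function buildProfilePayload({ name, release, route, durationMs, traceEvents }) {
  return {
    name,
    platform: 'react_native',
    release,
    environment: 'production',
    route,
    duration_ms: durationMs,
    sample_interval_us: 1000,
    samples: { traceEvents },
  };
}

// Usage (not executed here — requires a live endpoint and token):
// await fetch(`${PIONNE_HOST}/api/profiles`, {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     'X-Pionne-Token': 'pio_live_…', // same token as for /ingest
//   },
//   body: JSON.stringify(buildProfilePayload({ /* … */ })),
// });
```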
| SDK | Native profiler | Overhead during capture |
| --- | --- | --- |
| RN (Hermes) | HermesInternal.enableSamplingProfiler | 1–3% CPU |
| Web | Performance.profile() (Chrome only) | 2–5% CPU |
| Node | V8 inspector | 3–8% CPU |
| PHP (Excimer) | wall-clock sampler | ~1% wall-clock |
| Flutter | Dart VM Profiler | 2–5% CPU |

When the profiler isn’t active, overhead is 0%. That’s the big win of the transaction-scoped model: you only pay the cost on the paths you instrument.