---
version: "1.0.0"
name: telemetry-essentials
description: MANDATORY for ALL telemetry, logging, and observability work. Invoke before writing telemetry handlers, Logger calls, or metrics code.
file_patterns:
  - "**/*.ex"
  - "**/*.exs"
auto_suggest: true
---
# Telemetry Essentials
## RULES — Follow these with no exceptions
- Use structured logging (`Logger.info("action", key: value)`) — never string interpolation in log messages; structured logs are searchable and parseable
- Attach telemetry handlers in `Application.start/2` — not in modules that may restart; handler attachment is not idempotent
- Use `Ecto.Repo` telemetry events for query monitoring — don't wrap every query manually; Ecto already emits events
- Use `Phoenix.LiveDashboard` in dev/staging — it's free observability with zero code
- Tag telemetry events with metadata (user_id, request_id) — without correlation IDs, distributed traces are useless
- Never log at `:debug` level in production — it includes query parameters and PII
## Structured Logging
Structured logs can be filtered, searched, and aggregated. String-interpolated logs cannot.
Bad:

```elixir
# String interpolation — unsearchable, inconsistent format
Logger.info("User #{user.id} created order #{order.id} for $#{order.total}")
Logger.error("Failed to process payment for user #{user.id}: #{inspect(reason)}")
```

Good:

```elixir
# Structured logging — searchable, parseable by log aggregators
Logger.info("Order created", user_id: user.id, order_id: order.id, total: order.total)
Logger.error("Payment failed", user_id: user.id, reason: inspect(reason))
```
### Logger Metadata
Set metadata once per request — it's automatically included in all subsequent log calls.
```elixir
# In a Plug (added to your endpoint or router pipeline)
defmodule MyAppWeb.Plugs.RequestMetadata do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    Logger.metadata(
      request_id: conn.assigns[:request_id] || Ecto.UUID.generate(),
      remote_ip: to_string(:inet.ntoa(conn.remote_ip))
    )

    conn
  end
end

# In a LiveView mount
@impl true
def mount(_params, _session, socket) do
  if connected?(socket) do
    Logger.metadata(user_id: socket.assigns.current_user.id)
  end

  {:ok, socket}
end
```
### JSON Logging for Production
```elixir
# config/prod.exs
config :logger, :console,
  format: {LogfmtEx, :format}, # Or a Jason-based formatter
  metadata: [:request_id, :user_id, :module, :function]
```
## `:telemetry` Basics
The `:telemetry` library is the standard for metrics in the BEAM ecosystem. Libraries (Ecto, Phoenix, Oban) emit events — you attach handlers.
### Event Structure
```elixir
# An event has: name (list of atoms), measurements (map), metadata (map)
:telemetry.execute(
  [:my_app, :orders, :created],     # event name
  %{count: 1, total_cents: 4999},   # measurements
  %{user_id: user.id, source: :web} # metadata
)
```
### Attaching Handlers
Always attach in `Application.start/2` — handler attachment is not idempotent, and modules may restart.
Bad:

```elixir
# In a GenServer init — if the GenServer restarts, handlers are attached again
defmodule MyApp.MetricsServer do
  def init(_) do
    :telemetry.attach("order-handler", [:my_app, :orders, :created], &handle/4, nil)
    {:ok, %{}}
  end
end
```
Good:

```elixir
# In application.ex — runs once at boot
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    MyApp.Telemetry.attach_handlers()

    children = [
      MyApp.Repo,
      MyAppWeb.Endpoint
    ]

    Supervisor.start_link(children, strategy: :one_for_one)
  end
end

# lib/my_app/telemetry.ex
defmodule MyApp.Telemetry do
  require Logger

  def attach_handlers do
    :telemetry.attach_many(
      "my-app-handlers",
      [
        [:my_app, :orders, :created],
        [:my_app, :payments, :processed],
        [:my_app, :payments, :failed]
      ],
      &handle_event/4,
      nil
    )
  end

  def handle_event([:my_app, :orders, :created], measurements, metadata, _config) do
    Logger.info("Order created",
      total_cents: measurements.total_cents,
      user_id: metadata.user_id
    )
  end

  def handle_event([:my_app, :payments, :failed], _measurements, metadata, _config) do
    Logger.error("Payment failed",
      user_id: metadata.user_id,
      reason: metadata.reason
    )
  end

  # Catch-all so events without a dedicated clause (e.g. [:my_app, :payments,
  # :processed]) don't raise FunctionClauseError and get the handler detached
  def handle_event(_event, _measurements, _metadata, _config), do: :ok
end
```
### Telemetry Spans
For timing operations:
```elixir
def process_order(order) do
  :telemetry.span([:my_app, :orders, :process], %{order_id: order.id}, fn ->
    result = do_process(order)
    {result, %{order_id: order.id, status: :completed}}
  end)
end

# On success, emits two events:
# [:my_app, :orders, :process, :start] — measurements: %{system_time: ...}
# [:my_app, :orders, :process, :stop]  — measurements: %{duration: ...}
# If the function raises, :stop is replaced by:
# [:my_app, :orders, :process, :exception] — measurements: %{duration: ...}
```
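Span events are consumed like any other event. A minimal handler sketch for the `:stop` and `:exception` events of the span above — the handler id and `MyApp.OrderSpanHandler` module name are illustrative, and attachment should happen from `Application.start/2` as described earlier:

```elixir
defmodule MyApp.OrderSpanHandler do
  require Logger

  def attach do
    :telemetry.attach_many(
      "order-process-span",
      [
        [:my_app, :orders, :process, :stop],
        [:my_app, :orders, :process, :exception]
      ],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  def handle_event([_, _, _, :stop], %{duration: duration}, metadata, _config) do
    Logger.info("Order processed",
      duration_ms: System.convert_time_unit(duration, :native, :millisecond),
      order_id: metadata.order_id
    )
  end

  def handle_event([_, _, _, :exception], %{duration: duration}, metadata, _config) do
    # :exception metadata carries :kind, :reason, and :stacktrace
    Logger.error("Order processing raised",
      duration_ms: System.convert_time_unit(duration, :native, :millisecond),
      kind: metadata.kind,
      reason: inspect(metadata.reason)
    )
  end
end
```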
## Ecto Telemetry Events
Ecto automatically emits telemetry events for every query. You don't need to instrument queries manually.
### Built-in Events
```elixir
# Ecto emits: [:my_app, :repo, :query]
#
# Measurements:
# %{
#   total_time: integer,  # Total time, in native units
#   decode_time: integer, # Time spent decoding results
#   query_time: integer,  # Time spent executing the query
#   queue_time: integer,  # Time spent waiting for a connection
#   idle_time: integer    # Time the connection was idle
# }
#
# Metadata:
# %{
#   query: "SELECT ...",
#   source: "users",
#   repo: MyApp.Repo,
#   result: {:ok, %Postgrex.Result{}} | {:error, ...}
# }
```
### Monitoring Slow Queries
```elixir
defmodule MyApp.Telemetry do
  require Logger

  def attach_handlers do
    :telemetry.attach(
      "ecto-slow-query",
      [:my_app, :repo, :query],
      &handle_slow_query/4,
      %{threshold_ms: 100}
    )
  end

  def handle_slow_query(_event, measurements, metadata, %{threshold_ms: threshold}) do
    duration_ms =
      System.convert_time_unit(measurements.total_time, :native, :millisecond)

    if duration_ms > threshold do
      Logger.warning("Slow query",
        duration_ms: duration_ms,
        source: metadata.source,
        query: metadata.query
      )
    end
  end
end
```
## Phoenix Telemetry Events
Phoenix emits events for the request lifecycle.
```elixir
# Request events:
# [:phoenix, :endpoint, :start]
# [:phoenix, :endpoint, :stop]
# [:phoenix, :router_dispatch, :start]
# [:phoenix, :router_dispatch, :stop]

# LiveView events:
# [:phoenix, :live_view, :mount, :start]
# [:phoenix, :live_view, :mount, :stop]
# [:phoenix, :live_view, :handle_event, :start]
# [:phoenix, :live_view, :handle_event, :stop]

# Channel events:
# [:phoenix, :channel_joined]
# [:phoenix, :channel_handled_in]
```
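These events can be handled directly. For example, a sketch that logs the duration and status of every completed request — the `MyApp.PhoenixRequestLogger` name and handler id are illustrative, and attachment belongs in `Application.start/2` as described above:

```elixir
defmodule MyApp.PhoenixRequestLogger do
  require Logger

  def attach do
    :telemetry.attach(
      "phoenix-request-logger",
      [:phoenix, :endpoint, :stop],
      &__MODULE__.handle_event/4,
      nil
    )
  end

  # [:phoenix, :endpoint, :stop] carries the Plug.Conn in metadata.conn
  def handle_event([:phoenix, :endpoint, :stop], %{duration: duration}, %{conn: conn}, _config) do
    Logger.info("Request completed",
      duration_ms: System.convert_time_unit(duration, :native, :millisecond),
      status: conn.status,
      path: conn.request_path
    )
  end
end
```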
## LiveDashboard Setup
`Phoenix.LiveDashboard` provides free observability with zero code.
```elixir
# mix.exs — add the dependency (already included in new Phoenix projects)
{:phoenix_live_dashboard, "~> 0.8"}

# router.ex
import Phoenix.LiveDashboard.Router

scope "/" do
  pipe_through :browser

  # Only in dev/staging — never expose in production without auth
  live_dashboard "/dashboard",
    metrics: MyAppWeb.Telemetry,
    ecto_repos: [MyApp.Repo],
    ecto_psql_extras_options: [long_running_queries: [threshold: "200 milliseconds"]]
end
```
### Custom Metrics for LiveDashboard
```elixir
# lib/my_app_web/telemetry.ex
defmodule MyAppWeb.Telemetry do
  use Supervisor
  import Telemetry.Metrics

  def start_link(arg) do
    Supervisor.start_link(__MODULE__, arg, name: __MODULE__)
  end

  @impl true
  def init(_arg) do
    children = [
      {:telemetry_poller, measurements: periodic_measurements(), period: 10_000}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end

  def metrics do
    [
      # Phoenix metrics
      summary("phoenix.endpoint.stop.duration", unit: {:native, :millisecond}),
      summary("phoenix.router_dispatch.stop.duration", unit: {:native, :millisecond}),

      # Ecto metrics
      summary("my_app.repo.query.total_time", unit: {:native, :millisecond}),
      summary("my_app.repo.query.queue_time", unit: {:native, :millisecond}),

      # VM metrics
      summary("vm.memory.total", unit: :byte),
      summary("vm.total_run_queue_lengths.total"),

      # Custom business metrics
      counter("my_app.orders.created.count"),
      summary("my_app.payments.processed.duration", unit: {:native, :millisecond})
    ]
  end

  defp periodic_measurements do
    [
      {MyApp.Metrics, :dispatch_queue_depth, []}
    ]
  end
end
```
## Custom Business Metrics
Emit telemetry events from your contexts for important business operations.
```elixir
defmodule MyApp.Orders do
  def create_order(attrs) do
    :telemetry.span([:my_app, :orders, :create], %{}, fn ->
      case %Order{}
           |> Order.changeset(attrs)
           |> Repo.insert() do
        {:ok, order} ->
          :telemetry.execute(
            [:my_app, :orders, :created],
            %{count: 1, total_cents: order.total_cents},
            %{user_id: order.user_id}
          )

          {{:ok, order}, %{status: :ok}}

        {:error, changeset} ->
          {{:error, changeset}, %{status: :error}}
      end
    end)
  end
end
```
## External Tool Integration
### Prometheus
```elixir
# mix.exs
{:telemetry_metrics_prometheus, "~> 1.1"}

# application.ex children
TelemetryMetricsPrometheus.child_spec(metrics: MyAppWeb.Telemetry.metrics())

# Exposes a /metrics endpoint for Prometheus to scrape
```
### StatsD / Datadog
```elixir
# mix.exs
{:telemetry_metrics_statsd, "~> 0.7"}

# application.ex children
{TelemetryMetricsStatsd, metrics: MyAppWeb.Telemetry.metrics()}
```
## Production Log Levels
```elixir
# config/prod.exs — default to :info
config :logger, level: :info

# config/runtime.exs — allow override for debugging
if config_env() == :prod do
  if log_level = System.get_env("LOG_LEVEL") do
    config :logger, level: String.to_existing_atom(log_level)
  end
end
```
What each level logs:
| Level | Includes | Production Use |
|---|---|---|
| `:debug` | SQL params, internal state | Never (PII risk) |
| `:info` | Requests, business events | Default |
| `:warning` | Recoverable issues | Always |
| `:error` | Failures needing attention | Always |
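As a complement to keeping `:debug` out of production, Phoenix can scrub sensitive request parameters from its own request logs via the `:filter_parameters` setting (only `"password"` is filtered by default; the extra keys below are illustrative):

```elixir
# config/config.exs — matching parameter values are logged as "[FILTERED]"
config :phoenix, :filter_parameters, ["password", "secret", "token"]
```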
See the deployment-gotchas skill for production configuration patterns, the security-essentials skill for rules on logging sensitive data, and the otp-essentials skill for process monitoring patterns.