
Conversation

GrzegorzDrozd
Contributor

  • add duration tracking
  • add returned rows tracking
  • add pending operations metric
  • add active connections metric
  • minor code fixes
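
For readers unfamiliar with the pattern, duration tracking in this kind of hook-based instrumentation typically looks like the sketch below. This is illustrative only, not the PR's actual code: the instrumentation name, the target call, and the attribute values are assumptions; the metric name follows the database semantic conventions, and CachedInstrumentation comes from the opentelemetry-php API package.

<?php
// Minimal sketch (not this PR's code): time an operation in pre/post
// hooks and record it on a db.client.operation.duration histogram.
use OpenTelemetry\API\Instrumentation\CachedInstrumentation;

$instrumentation = new CachedInstrumentation('io.opentelemetry.contrib.php.pdo'); // name is illustrative

$duration = $instrumentation->meter()->createHistogram(
    'db.client.operation.duration', // semantic-conventions metric name
    's',                            // unit: seconds
    'Duration of database client operations.',
);

$start = hrtime(true);   // pre hook: remember the start time in nanoseconds
$statement->execute();   // the instrumented call (assume a PDOStatement)
$duration->record(
    (hrtime(true) - $start) / 1e9, // post hook: elapsed time in seconds
    ['db.system.name' => 'mysql', 'db.operation.name' => 'SELECT'], // illustrative attributes
);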


codecov bot commented Jun 16, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.37%. Comparing base (1d5099d) to head (a22da72).

Additional details and impacted files


@@             Coverage Diff              @@
##               main     #391      +/-   ##
============================================
- Coverage     82.82%   82.37%   -0.46%     
+ Complexity     1948     1898      -50     
============================================
  Files           142      140       -2     
  Lines          8142     7831     -311     
============================================
- Hits           6744     6451     -293     
+ Misses         1398     1380      -18     
Flag Coverage Δ
Aws 92.59% <ø> (ø)
Context/Swoole 0.00% <ø> (ø)
Instrumentation/AwsSdk 81.13% <ø> (ø)
Instrumentation/CakePHP 20.40% <ø> (ø)
Instrumentation/CodeIgniter 73.55% <ø> (ø)
Instrumentation/Curl 90.42% <ø> (ø)
Instrumentation/Doctrine 92.72% <ø> (ø)
Instrumentation/ExtAmqp 88.48% <ø> (ø)
Instrumentation/ExtRdKafka 86.11% <ø> (ø)
Instrumentation/Guzzle 75.58% <ø> (ø)
Instrumentation/HttpAsyncClient 78.04% <ø> (ø)
Instrumentation/IO 70.68% <ø> (ø)
Instrumentation/Laravel 63.91% <ø> (ø)
Instrumentation/MongoDB 74.28% <ø> (ø)
Instrumentation/MySqli 95.81% <ø> (ø)
Instrumentation/OpenAIPHP 87.21% <ø> (ø)
Instrumentation/PDO ?
Instrumentation/Psr14 76.47% <ø> (ø)
Instrumentation/Psr15 89.15% <ø> (ø)
Instrumentation/Psr16 97.50% <ø> (ø)
Instrumentation/Psr18 77.46% <ø> (ø)
Instrumentation/Psr3 67.01% <ø> (ø)
Instrumentation/Psr6 97.61% <ø> (ø)
Instrumentation/ReactPHP 99.45% <ø> (ø)
Instrumentation/Slim 86.11% <ø> (ø)
Instrumentation/Symfony 84.74% <ø> (ø)
Instrumentation/Yii 77.50% <ø> (ø)
Logs/Monolog 100.00% <ø> (ø)
Propagation/Instana 98.11% <ø> (ø)
Propagation/ServerTiming 100.00% <ø> (ø)
Propagation/TraceResponse 100.00% <ø> (ø)
ResourceDetectors/Azure 91.66% <ø> (ø)
ResourceDetectors/Container 93.02% <ø> (ø)
ResourceDetectors/DigitalOcean 100.00% <ø> (ø)
Sampler/RuleBased 33.51% <ø> (ø)
Shims/OpenTracing 92.45% <ø> (ø)
Symfony 87.81% <ø> (ø)
Utils/Test 87.53% <ø> (ø)

Flags with carried forward coverage won't be shown.

see 2 files with indirect coverage changes



Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@brettmc
Contributor

brettmc commented Jun 27, 2025

@GrzegorzDrozd I've finally had some time to look at metrics generation. I started down the path of tracking everything in pre and post hooks, in #396 - when I got something working I started to realise that I was duplicating a lot of effort from tracing (and some attributes were really hard to get for a metric since you can start a span and add to it while it's in progress, whereas a metric is a single call).

Since in my case (an HTTP server span) everything I need seems to be already stored in a span, I tried a different approach (I guess loosely based on the span metrics connector) and implemented a Span Processor to emit metrics: https://github.com/open-telemetry/opentelemetry-php/pull/1651/files#diff-9ed562697c2c477790ec9394f926eeb261f2433f39fb3eeca3e7999ee7c390dbR54

Looking at your implementation here, I see a similar pattern:

  1. start a span
  2. start some timers and track some objects
  3. end a span
  4. calculate duration(s) and counts from the timers and trackers, gather attributes, emit metric(s)

The metrics are closely related to the spans generated nearby.

Given that duration and most of the attributes you're interested in are already in a span, do you think you could achieve your goals with a span processor?
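
A minimal sketch of that span-processor idea, for readers following along (this is illustrative, not the code from the linked PR; the class name is made up, and it assumes the SDK's SpanProcessorInterface and SpanDataInterface, deriving the duration from the finished span's timestamps):

<?php
// Illustrative sketch (not the code from opentelemetry-php#1651): emit a
// duration metric from each finished span instead of keeping separate timers.
use OpenTelemetry\API\Metrics\HistogramInterface;
use OpenTelemetry\Context\ContextInterface;
use OpenTelemetry\SDK\Common\Future\CancellationInterface;
use OpenTelemetry\SDK\Trace\ReadableSpanInterface;
use OpenTelemetry\SDK\Trace\ReadWriteSpanInterface;
use OpenTelemetry\SDK\Trace\SpanProcessorInterface;

final class MetricsEmittingSpanProcessor implements SpanProcessorInterface
{
    public function __construct(private readonly HistogramInterface $duration)
    {
    }

    public function onStart(ReadWriteSpanInterface $span, ContextInterface $parentContext): void
    {
        // Nothing to do here: everything we need is available when the span ends.
    }

    public function onEnd(ReadableSpanInterface $span): void
    {
        $data = $span->toSpanData();
        $seconds = ($data->getEndEpochNanos() - $data->getStartEpochNanos()) / 1e9;
        // Reuse the span's attributes as the metric dimensions.
        $this->duration->record($seconds, $data->getAttributes()->toArray());
    }

    public function forceFlush(?CancellationInterface $cancellation = null): bool
    {
        return true;
    }

    public function shutdown(?CancellationInterface $cancellation = null): bool
    {
        return true;
    }
}

In practice you would probably filter on span kind or name before recording, and watch the attribute cardinality when copying span attributes onto a metric.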

@GrzegorzDrozd
Contributor Author

GrzegorzDrozd commented Jul 20, 2025

> @GrzegorzDrozd I've finally had some time to look at metrics generation. […]
>
> Given that duration and most of the attributes you're interested in are already in a span, do you think you could achieve your goals with a span processor?

But that would force someone to use spans together with metrics? What if they only want metrics? What about sampling? By that argument, I think I should move my code out of the span hooks and into separate code, so that metrics can be used without spans ...
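
To make that alternative concrete, here is a minimal sketch of a metrics-only hook that never touches a span (illustrative only: hook() is the entry point provided by the opentelemetry PHP extension, while the target method, instrumentation name, and the shared-variable bookkeeping are assumptions for the example):

<?php
// Illustrative sketch: record a duration metric from pre/post hooks
// without creating any span, so the metric works even when tracing is
// disabled or the span would be sampled out.
use OpenTelemetry\API\Instrumentation\CachedInstrumentation;
use function OpenTelemetry\Instrumentation\hook;

$instrumentation = new CachedInstrumentation('io.opentelemetry.contrib.php.pdo'); // name is illustrative
$duration = $instrumentation->meter()->createHistogram('db.client.operation.duration', 's');

$start = 0;
hook(
    PDOStatement::class,
    'execute',
    pre: static function () use (&$start): void {
        // A per-call stack (or a WeakMap keyed by the statement) would
        // handle re-entrancy; a single shared variable keeps the sketch short.
        $start = hrtime(true);
    },
    post: static function () use (&$start, $duration): void {
        $duration->record((hrtime(true) - $start) / 1e9);
    },
);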

@brettmc
Contributor

brettmc commented Jul 21, 2025

> But that would force someone to use spans together with metrics? What if they only want metrics? What about sampling? […]

Yes, it would... but we're creating spans anyway (even if they're no-op). Anyway, I'm not saying we have to do it that way; I just want to make sure we actually need the extra tracking/timing classes before accepting the ongoing maintenance burden.

$parent = Context::getCurrent();

$instrumentation->meter()
    ->createUpDownCounter('db.client.connection.count', '1')
Contributor

Should we be creating a new instrument each time, or one that is reused?
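
One way to reuse it, sketched below (illustrative; the ConnectionMetrics class and its method are made up for the example), is to memoize the instrument rather than calling createUpDownCounter() on every hook invocation:

<?php
// Illustrative sketch: create the instrument once and reuse it, instead
// of calling createUpDownCounter() on every hook invocation.
use OpenTelemetry\API\Instrumentation\CachedInstrumentation;
use OpenTelemetry\API\Metrics\UpDownCounterInterface;

final class ConnectionMetrics
{
    private static ?UpDownCounterInterface $connectionCount = null;

    public static function connectionCount(CachedInstrumentation $instrumentation): UpDownCounterInterface
    {
        return self::$connectionCount ??= $instrumentation->meter()
            ->createUpDownCounter('db.client.connection.count', '1');
    }
}

// In the hooks:
// ConnectionMetrics::connectionCount($instrumentation)->add(1);  // connection opened
// ConnectionMetrics::connectionCount($instrumentation)->add(-1); // connection closed

Whether repeated createUpDownCounter() calls with the same name and unit return the same underlying instrument depends on the SDK's duplicate-registration handling; memoizing sidesteps that question and avoids the per-call lookup.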
