Advanced Metrics Configuration

Once you've chosen your first metrics and enabled them with default thresholds, you'll likely need to adapt the configuration to your project's specific needs.

This guide covers how to configure path-specific thresholds, non-ignorable metrics, data collection without enforcement, effort configuration, and combining metrics with architecture rules.

Ways to Use Metrics

Metrics usually serve one of two goals: enforcing quality gates in CI or collecting data for trend analysis in Dashboards.

This guide focuses on configuration patterns. In the problems below, some setups are mainly for CI, some are mainly for Dashboards, and some work well for both. For platform-specific setup, see CI Integrations and Integrating DCM Dashboards.

Problem 1: Legacy Code Has Hundreds of Violations

You enable cyclomatic-complexity with the recommended threshold of 15. CI fails immediately with 847 violations, all in code written years ago by people who've left the company.

You can't fix 847 functions before your next release. But you also can't just disable the metric; new code would regress to the same state.

The solution: Path-specific thresholds.

Use entries to apply stricter standards to new code while allowing legacy code to exist (for now):

analysis_options.yaml
dcm:
  metrics:
    cyclomatic-complexity:
      threshold: 25 # Default: catches the worst offenders
      entries:
        - threshold: 15 # New features must meet modern standards
          paths:
            - '.*lib/features/.*'
        - threshold: 40 # Legacy code gets a temporary ceiling
          paths:
            - '.*lib/legacy/.*'
            - '.*lib/old_modules/.*'
📂 Example Project & DCM Output

View example on GitHub →

legacy_code_thresholds/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── features/
    │   └── user_service.dart # Threshold: 15
    ├── legacy/
    │   └── old_processor.dart # Threshold: 40
    └── old_modules/
        └── old_data_handler.dart # Threshold: 40
$ dcm calculate-metrics lib --reporter=console
lib/features/user_service.dart:
• method UserService.getUserStatus (1 entry):
HIGH This method has cyclomatic complexity of 16, which exceeds
the threshold of 15.

Scanned files: 3 | Scanned classes: 3
cyclomatic-complexity Min: 11.0 Max: 16.0 Avg: 12.7

Notice: The same complexity (16) passes in lib/legacy/ (threshold 40) but fails in lib/features/ (threshold 15).

The top-level threshold is required; it's the fallback for any file not matching an entries pattern. Once your thresholds are set, you can enforce them progressively in CI using platform-specific configuration. If you're also uploading results to DCM Dashboards, this setup helps you track whether legacy areas are improving over time without blocking every change in them.
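To make the matching behavior concrete, here's a small Python sketch, a simplified model for illustration rather than DCM's actual implementation: each entries pattern is a regular expression matched against the file path, and the top-level threshold applies when nothing matches.

```python
import re

# Simplified model of path-based threshold resolution (illustrative only):
# the first entry whose pattern matches the file path wins; otherwise the
# top-level threshold is the fallback.
default_threshold = 25
entries = [
    (15, [r".*lib/features/.*"]),                         # new code
    (40, [r".*lib/legacy/.*", r".*lib/old_modules/.*"]),  # legacy code
]

def threshold_for(path):
    for threshold, patterns in entries:
        if any(re.match(p, path) for p in patterns):
            return threshold
    return default_threshold

print(threshold_for("lib/features/user_service.dart"))  # 15
print(threshold_for("lib/legacy/old_processor.dart"))   # 40
print(threshold_for("lib/main.dart"))                   # 25 (fallback)
```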

Problem 2: Widget Code vs Non-Widget Code

Flutter apps contain fundamentally different kinds of code. Widget code builds UI: it's visual, compositional, and often deeply nested. Non-widget code handles business logic, data transformations, and coordination between parts of the app. The two need different metrics and different thresholds.

A build method with 8 levels of widget nesting might be normal. A domain function with 8 levels of control flow nesting is a problem.

The solution: Separate thresholds for widget and non-widget code.

  1. Widget-Specific Metrics: Some metrics only make sense for widget code:
analysis_options.yaml
dcm:
  metrics:
    # Widget-specific: only applies to build methods
    widgets-nesting-level:
      threshold: 6
      entries:
        - threshold: 4 # Design system widgets should be flat
          paths:
            - '.*lib/design_system/.*'
            - '.*lib/ui/components/.*'

    # Widget-specific: tracks widget count in build methods
    number-of-used-widgets:
      threshold: 15
      entries:
        - threshold: 8 # Reusable components should be focused
          paths:
            - '.*lib/design_system/.*'
        - threshold: 20 # Screens can compose more widgets
          paths:
            - '.*lib/presentation/screens/.*'
📂 Example Project & DCM Output

View example on GitHub →

widget_specific_metrics/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── design_system/
    │   └── button.dart # widgets-nesting-level threshold: 4
    └── presentation/
        └── screens/
            └── home_screen.dart # number-of-used-widgets threshold: 20
$ dcm calculate-metrics lib --reporter=console
lib/design_system/button.dart:
• method DesignSystemButton.build (1 entry):
HIGH This method has widgets nesting level of 7, which exceeds
the threshold of 4.

lib/presentation/screens/home_screen.dart:
• method HomeScreen.build (1 entry):
VERY HIGH This method uses 48 different widgets, which exceeds the
threshold of 20.

Scanned files: 2 | Scanned classes: 2
widgets-nesting-level Min: 5.0 Max: 7.0 Avg: 6.0
number-of-used-widgets Min: 8.0 Max: 48.0 Avg: 28.0
  2. Non-Widget Code Metrics

Code that connects different parts of the app (repositories, services, use cases) should be simple and loosely coupled:

analysis_options.yaml
dcm:
  metrics:
    # Control flow complexity: stricter for non-widget code
    cyclomatic-complexity:
      threshold: 15
      entries:
        - threshold: 10 # Business logic should be simple
          paths:
            - '.*lib/domain/.*'
            - '.*lib/application/.*'
        - threshold: 20 # Widget code can have more branches (platform checks, etc.)
          paths:
            - '.*lib/presentation/.*'

    # Coupling: critical for code connecting app parts
    coupling-between-object-classes:
      threshold: 12
      entries:
        - threshold: 6 # Services should be loosely coupled
          paths:
            - '.*lib/services/.*'
            - '.*lib/repositories/.*'
        - threshold: 15 # Widgets often depend on more types
          paths:
            - '.*lib/presentation/.*'

    # Nesting: stricter for logic, relaxed for widgets
    maximum-nesting-level:
      threshold: 4
      entries:
        - threshold: 3 # Business logic: keep it flat
          paths:
            - '.*lib/domain/.*'
        - threshold: 6 # Widget builders can nest deeper
          paths:
            - '.*lib/presentation/.*'
📂 Example Project & DCM Output

View example on GitHub →

widget_vs_nonwidget/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── domain/
    │   └── order_calculator.dart # Strict: complexity 10, nesting 3
    └── presentation/
        └── product_screen.dart # Relaxed: complexity 20, nesting 6
$ dcm calculate-metrics lib --reporter=console
lib/domain/order_calculator.dart:
• method OrderCalculator.calculateTotal (1 entry):
HIGH This method has a nesting level of 6, which exceeds the
threshold of 3.

Scanned files: 2 | Scanned classes: 2
maximum-nesting-level Min: 0.0 Max: 6.0 Avg: 3.7
cyclomatic-complexity Min: 1.0 Max: 9.0 Avg: 6.3

Notice: The same nesting level (6) passes in presentation but fails in domain.

Here's a comprehensive example combining widget and non-widget patterns:

analysis_options.yaml
dcm:
  metrics:
    cyclomatic-complexity:
      threshold: 15
      entries:
        - threshold: 8 # Domain logic: simple and testable
          paths:
            - '.*lib/domain/.*'
        - threshold: 12 # Services connecting app parts
          paths:
            - '.*lib/services/.*'
            - '.*lib/repositories/.*'
        - threshold: 20 # Widgets: platform checks, null handling
          paths:
            - '.*lib/presentation/.*'
            - '.*lib/ui/.*'

    source-lines-of-code:
      threshold: 50
      entries:
        - threshold: 30 # Utility functions: concise
          paths:
            - '.*lib/utils/.*'
            - '.*lib/extensions/.*'
        - threshold: 40 # Domain: focused use cases
          paths:
            - '.*lib/domain/.*'
        - threshold: 80 # Screens: can be longer
          paths:
            - '.*lib/presentation/screens/.*'

    widgets-nesting-level:
      threshold: 6
      entries:
        - threshold: 4 # Reusable components: flat
          paths:
            - '.*lib/design_system/.*'
        - threshold: 8 # Complex screens: more flexibility
          paths:
            - '.*lib/presentation/screens/.*'
📂 Example Project & DCM Output

View example on GitHub →

comprehensive_metrics/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── design_system/
    │   └── card.dart # widgets-nesting-level threshold: 4
    ├── domain/
    │   └── order_use_case.dart # cyclomatic-complexity threshold: 8
    └── utils/
        └── string_utils.dart # source-lines-of-code threshold: 30
$ dcm calculate-metrics lib --reporter=console
lib/design_system/card.dart:
• method DesignSystemCard.build (1 entry):
HIGH This method has widgets nesting level of 8, which exceeds
the threshold of 4.

Scanned files: 3 | Scanned classes: 3
widgets-nesting-level Min: 0.0 Max: 8.0 Avg: 2.7
source-lines-of-code Min: 1.0 Max: 34.0 Avg: 5.4
cyclomatic-complexity Min: 1.0 Max: 6.0 Avg: 1.8

Problem 3: Different Teams Have Different Standards

Your monorepo has a platform team maintaining core libraries, feature teams shipping product, and a design system team building UI components. They have different maturity levels and different needs.

The platform team wants strict coupling limits because their code is foundational. Feature teams need more flexibility because they're shipping fast. The design system team wants extremely strict widget metrics because their components are used everywhere.

The solution: Per-team threshold configuration.

analysis_options.yaml
dcm:
  metrics:
    coupling-between-object-classes:
      threshold: 15
      entries:
        - threshold: 8 # Platform: foundation must be loosely coupled
          paths:
            - '.*packages/core/.*'
            - '.*packages/networking/.*'
        - threshold: 20 # Features: more flexibility for shipping
          paths:
            - '.*packages/features/.*'
        - threshold: 6 # Design system: components used everywhere
          paths:
            - '.*packages/design_system/.*'

    widgets-nesting-level:
      threshold: 8
      entries:
        - threshold: 4 # Design system: flat, composable widgets
          paths:
            - '.*packages/design_system/.*'

    source-lines-of-code:
      threshold: 50
      entries:
        - threshold: 30 # Core utilities should be concise
          paths:
            - '.*packages/core/.*'
        - threshold: 80 # Feature screens get more room
          paths:
            - '.*packages/features/.*/screens/.*'
📂 Example Project & DCM Output

View example on GitHub →

team_thresholds/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    └── packages/
        ├── core/
        │   └── cache_manager.dart # coupling threshold: 8
        ├── design_system/
        │   └── fancy_button.dart # coupling threshold: 6, nesting: 4
        └── features/
            └── checkout/
                └── screens/
                    └── cart_screen.dart # SLOC threshold: 80
$ dcm calculate-metrics lib --reporter=console
Scanned files: 3 | Scanned classes: 3
widgets-nesting-level Min: 0.0 Max: 3.0 Avg: 1.0
source-lines-of-code Min: 1.0 Max: 17.0 Avg: 6.4
coupling-between-object-classes Min: 0.0 Max: 1.0 Avg: 0.3

Each team can evolve their thresholds independently as they mature. If you're using DCM Dashboards in a monorepo, this kind of per-team configuration also makes trends easier to interpret because each package or area is measured against expectations that match its role.

Problem 4: Critical Code Needs Extra Protection

Your payment processing module handles money. Your authentication module handles security. Your data migration code runs once and must work correctly.

These aren't the places to allow someone to add // ignore: high-cyclomatic-complexity and move on.

The solution: Non-ignorable metrics for critical paths.

analysis_options.yaml
dcm:
  metrics:
    cyclomatic-complexity:
      threshold: 20
      ignorable: false # Cannot suppress with // ignore:
      entries:
        - threshold: 10 # Extra strict for critical code
          paths:
            - '.*lib/payments/.*'
            - '.*lib/auth/.*'
            - '.*lib/migrations/.*'

    maximum-nesting-level:
      threshold: 5
      ignorable: false
      entries:
        - threshold: 3
          paths:
            - '.*lib/payments/.*'
📂 Example Project & DCM Output

View example on GitHub →

critical_code_protection/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── auth/
    │   └── token_validator.dart # complexity threshold: 10
    ├── migrations/
    │   └── data_migration.dart # complexity threshold: 10
    └── payments/
        └── payment_processor.dart # complexity: 10, nesting: 3
$ dcm calculate-metrics lib --reporter=console
lib/migrations/data_migration.dart:
• method DataMigration.migrate:
HIGH This method has a cyclomatic complexity of 12, which exceeds
the threshold of 10.

lib/payments/payment_processor.dart:
• method PaymentProcessor.processPayment:
HIGH This method has a maximum nesting level of 6, which exceeds
the threshold of 3.

Scanned files: 3 | Scanned classes: 3
cyclomatic-complexity Min: 1.0 Max: 12.0 Avg: 3.7
maximum-nesting-level Min: 0.0 Max: 6.0 Avg: 2.0

With ignorable: false, there's no escape hatch: the code must meet the threshold, period. Use it sparingly; reserve it for code where complexity creates genuine risk.

Problem 5: Tracking Without Enforcing

Sometimes you want to collect metric data without triggering violations. Perhaps you're establishing a baseline before setting thresholds. Or you want comprehensive data in dashboards while only enforcing a subset in CI.

The solution: Use threshold: unset for data collection.

Setting threshold: unset collects metric values without comparing them to any threshold. The data appears in reports and dashboards, but never triggers violations.

analysis_options.yaml
dcm:
  metrics:
    # Enforced: these will fail CI if exceeded
    cyclomatic-complexity:
      threshold: 20
    source-lines-of-code:
      threshold: 60

    # Tracked only: collect data, no violations
    coupling-between-object-classes:
      threshold: unset
    maximum-nesting-level:
      threshold: unset
📂 Example Project & DCM Output

View example on GitHub →

tracking_without_enforcing/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── complex_service.dart # cyclomatic-complexity enforced (threshold: 20)
    └── coupled_class.dart # coupling-between-object-classes tracked (unset)
$ dcm calculate-metrics lib --reporter=console
✔ no metric violations found!

Scanned folders: 1
Scanned files: 2
Scanned classes: 2

Name Min Max Avg Sum
source-lines-of-code 0 1.0 13.0 4.3 17.0
maximum-nesting-level 0 0.0 3.0 0.6
cyclomatic-complexity 0 1.0 6.0 2.0
coupling-between-object-classes 0 0.0 0.0 0.0

Note: Metrics with threshold: unset appear in reports but never trigger violations.

dcm calculate-metrics lib --reporter=console --report-all
✔ Calculation is completed. Preparing the results: 33ms

lib/complex_service.dart:
• class ComplexService (1 entry):
BELOW This class is coupled with 0 other classes.

• method ComplexService.processData (3 entries):
BELOW This method has cyclomatic complexity of 6.

BELOW This method has a nesting level of 3.

BELOW This method has 13 source lines of code.


lib/coupled_class.dart:
• class CoupledClass (1 entry):
BELOW This class is coupled with 0 other classes.

• constructor CoupledClass.CoupledClass (2 entries):
BELOW This constructor has cyclomatic complexity of 1.

BELOW This constructor has a nesting level of 0.


• getter CoupledClass.stream (3 entries):
BELOW This getter has cyclomatic complexity of 1.

BELOW This getter has a nesting level of 0.

BELOW This getter has 1 source line of code.


• method CoupledClass.addItem (3 entries):
BELOW This method has cyclomatic complexity of 1.

BELOW This method has a nesting level of 0.

BELOW This method has 2 source lines of code.


• method CoupledClass.dispose (3 entries):
BELOW This method has cyclomatic complexity of 1.

BELOW This method has a nesting level of 0.

BELOW This method has 1 source line of code.


Scanned folders: 1
Scanned files: 2
Scanned classes: 2

Name Min Max Avg Sum
source-lines-of-code 0 1.0 13.0 4.3 17.0
maximum-nesting-level 0 0.0 3.0 0.6
cyclomatic-complexity 0 1.0 6.0 2.0
coupling-between-object-classes 0 0.0 0.0 0.0

You can also use threshold: unset in entries for specific paths:

analysis_options.yaml
dcm:
  metrics:
    coupling-between-object-classes:
      threshold: 15
      entries:
        - threshold: unset # Collect data for DI setup code, no violations
          paths:
            - '.*lib/di/.*'
            - '.*lib/injection/.*'
📂 Example Project & DCM Output

View example on GitHub →

tracking_with_entries/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── di/
    │   └── injection_container.dart # threshold: unset (tracked only)
    └── services/
        └── api_service.dart # threshold: 15 (enforced)
$ dcm calculate-metrics lib --reporter=console --report-all
lib/di/injection_container.dart:
• class InjectionContainer (1 entry):
BELOW This class is coupled with 0 other classes.

lib/services/api_service.dart:
• class ApiService (1 entry):
BELOW This class is coupled with 0 other classes.

Scanned folders: 2
Scanned files: 2
Scanned classes: 2

Name Min Max Avg
coupling-between-object-classes 0 0.0 0.0 0.0

Note: Files matching .*lib/di/.* show "BELOW" status - data is collected but no violations are triggered.

This is different from excluding files entirely. Excluded files don't appear in reports at all. Files with threshold: unset appear in reports with their values; you just won't see violations for them.

This approach pairs well with DCM Dashboards: upload data collected with threshold: unset so you can visualize values over time without blocking CI. For a complete walkthrough on uploading this tracked data from your pipeline, see the Integrating DCM Dashboards guide.

Problem 6: Leadership Wants Trend Reporting

"Is code quality improving?" Your manager asks this quarterly. You need actual data!

The solution: Dashboard integration with comprehensive tracking.

Before setting thresholds, understand where you are. Configure all metrics with threshold: unset to collect data without enforcement:

analysis_options.yaml
dcm:
  metrics:
    cyclomatic-complexity:
      threshold: unset
    coupling-between-object-classes:
      threshold: unset
    weighted-methods-per-class:
      threshold: unset
    source-lines-of-code:
      threshold: unset
    maximum-nesting-level:
      threshold: unset
    widgets-nesting-level:
      threshold: unset
📂 Example Project & DCM Output

View example on GitHub →

progress_metrics/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── complex_logic.dart # cyclomatic-complexity tracked
    ├── coupled_service.dart # coupling-between-object-classes tracked
    └── nested_widget.dart # widgets-nesting-level tracked
$ dcm calculate-metrics lib --reporter=console --report-all
✔ Calculation is completed. Preparing the results: 0.4s

lib/complex_logic.dart:
• class ComplexLogic (2 entries):
BELOW This class is coupled with 0 other classes.

BELOW This class has the total methods complexity of 4.

• method ComplexLogic.processData (3 entries):
BELOW This method has cyclomatic complexity of 4.

BELOW This method has a nesting level of 3.

BELOW This method has 10 source lines of code.


lib/coupled_service.dart:
• class CoupledService (2 entries):
BELOW This class is coupled with 0 other classes.

BELOW This class has the total methods complexity of 3.

• getter CoupledService.stream (3 entries):
BELOW This getter has cyclomatic complexity of 1.

BELOW This getter has a nesting level of 0.

BELOW This getter has 1 source line of code.


• method CoupledService.addItem (3 entries):
BELOW This method has cyclomatic complexity of 1.

BELOW This method has a nesting level of 0.

BELOW This method has 3 source lines of code.


• method CoupledService.dispose (3 entries):
BELOW This method has cyclomatic complexity of 1.

BELOW This method has a nesting level of 0.

BELOW This method has 1 source line of code.


lib/nested_widget.dart:
• class NestedWidget (2 entries):
BELOW This class is coupled with 10 other classes.

BELOW This class has the total methods complexity of 2.

• constructor NestedWidget.NestedWidget (2 entries):
BELOW This constructor has cyclomatic complexity of 1.

BELOW This constructor has a nesting level of 0.


• method NestedWidget.build (4 entries):
BELOW This method has cyclomatic complexity of 1.

BELOW This method has a nesting level of 0.

BELOW This method has 15 source lines of code.

BELOW This method has widgets nesting level of 7.


Scanned files: 3 | Scanned classes: 3
widgets-nesting-level Min: 0.0 Max: 5.0 Avg: 1.7
source-lines-of-code Min: 1.0 Max: 28.0 Avg: 6.9
weighted-methods-per-class Min: 0.0 Max: 7.0 Avg: 2.7
maximum-nesting-level Min: 0.0 Max: 3.0 Avg: 1.0
cyclomatic-complexity Min: 1.0 Max: 4.0 Avg: 1.5
coupling-between-object-classes Min: 0.0 Max: 2.0 Avg: 1.0

All metrics show "BELOW" status - values are collected for dashboards without triggering any violations.

Run with --report-all to capture all values:

dcm calculate-metrics lib --report-all --reporter=json --output-to=baseline.json

After a few weeks of data, you'll have evidence-based thresholds instead of guesses.
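One way to turn that baseline into numbers is a percentile heuristic, sketched here in Python (our own heuristic for illustration, not a DCM feature): pick roughly the 90th percentile of observed values, so only the worst slice of existing code starts out in violation.

```python
# Heuristic sketch (not a DCM feature): derive a candidate threshold from
# baseline metric values by taking roughly the 90th percentile, so only the
# worst ~10% of declarations would start out as violations.
def suggest_threshold(values, percentile=0.9):
    ordered = sorted(values)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[index]

# Hypothetical cyclomatic-complexity values extracted from baseline.json:
complexities = [1, 1, 2, 2, 3, 3, 4, 5, 6, 9, 12, 16]
print(suggest_threshold(complexities))  # 12
```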

Uploading to Dashboards

DCM Dashboards visualize trends over time. Upload data from CI:

dcm run lib --all --upload --project=PROJECT_KEY --email=LICENSE_EMAIL

In practice, teams usually add --upload to the same CI job that already runs dcm run, so every merge, scheduled build, or release build contributes a new data point automatically.
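As an illustration, a minimal GitHub Actions job might look like the sketch below. This is hypothetical: the setup step and secret names are assumptions; only the dcm run command comes from this guide.

```yaml
# Hypothetical CI sketch: installation step and secret names are assumptions.
jobs:
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install Flutter/Dart and DCM here using your existing setup steps.
      - name: Analyze and upload metrics to DCM Dashboards
        run: dcm run lib --all --upload --project=${{ secrets.DCM_PROJECT_KEY }} --email=${{ secrets.DCM_LICENSE_EMAIL }}
```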

Dashboard uploads capture everything: metrics, rules, anti-patterns. Over time, you'll see whether complexity is increasing or decreasing, which modules need attention, and whether refactoring efforts are paying off.

Problem 7: Estimated Fix Times Are Wrong

DCM estimates remediation effort for each violation: how long it takes to fix. But the defaults assume generic code, and your team's experience likely differs.

Reducing cyclomatic complexity in your domain logic takes longer than the default 15 minutes. Decoupling classes in your legacy module is a multi-hour endeavor.

The solution: Calibrate effort estimates to match reality.

analysis_options.yaml
dcm:
  metrics:
    cyclomatic-complexity:
      threshold: 5
      effort: 30 # Your team averages 30 min to refactor complex functions

    coupling-between-object-classes:
      threshold: 2
      effort: 90 # Decoupling is architectural work

    weighted-methods-per-class:
      threshold: 10
      effort: 60 # Class refactoring takes time
📂 Example Project & DCM Output

View example on GitHub →

calibrated_effort/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── complex_function.dart # effort: 30 min per violation
    ├── coupled_class.dart # effort: 90 min per violation
    └── heavy_class.dart # effort: 60 min per violation

Use dcm init metrics-preview to see effort estimates for violations:

$ dcm init metrics-preview lib --format=console --only-enabled
calibrated_effort:
1 cyclomatic-complexity - min: 1.0 - max: 6.0 - avg: 1.3 - 30.0m
1 weighted-methods-per-class - min: 3.0 - max: 13.0 - avg: 7.3 - 1.0h
0 coupling-between-object-classes - min: 0.0 - max: 0.0 - avg: 0.0

✔ total applied metrics: 3

The effort column shows your custom values: 30.0m for cyclomatic-complexity and 1.0h (60 min) for weighted-methods-per-class violations.

Calibrated effort values aggregate into accurate technical debt totals. When dashboards show "47 hours of technical debt," that number actually means something.
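As a quick arithmetic check on how calibrated effort rolls up, here's a sketch in Python; the violation counts are made up for illustration, while the per-violation minutes match the effort values configured above.

```python
# Illustrative only: hypothetical violation counts combined with the
# calibrated effort values (minutes per fix) from the config above.
violations = {
    "cyclomatic-complexity": (38, 30),            # (count, minutes per fix)
    "coupling-between-object-classes": (14, 90),
    "weighted-methods-per-class": (7, 60),
}
total_minutes = sum(count * minutes for count, minutes in violations.values())
print(f"{total_minutes} min = {total_minutes / 60:.1f} h of technical debt")
# 2820 min = 47.0 h of technical debt
```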

Problem 8: Metrics and Architecture Rules Work Together

You've set up architecture rules to enforce layer boundaries. Now you're adding metrics. They seem related (high coupling often indicates boundary violations), but how do they work together?

The solution: Use metrics to detect what rules prevent.

Metrics are observational: they measure what exists. Rules are prescriptive: they prevent specific violations. Use them together:

analysis_options.yaml
dcm:
  metrics:
    # Detect: presentation layer has high coupling
    coupling-between-object-classes:
      threshold: 15
      entries:
        - threshold: 0 # Presentation should be loosely coupled; any coupling triggers a warning
          paths:
            - 'lib/presentation/.*'

  rules:
    # Prevent: presentation importing data layer directly
    - avoid-banned-imports:
        entries:
          - paths: ['lib/presentation/.*\.dart']
            deny:
              - 'data/'
            message: 'Presentation cannot import data layer directly.'
📂 Example Project & DCM Output

View example on GitHub →

metrics_with_rules/
├── analysis_options.yaml
├── pubspec.yaml
└── lib/
    ├── data/
    │   └── user_repository.dart # Data layer
    └── presentation/
        └── user_screen.dart # Coupling threshold: 0 (strict)
$ dcm calculate-metrics lib --reporter=console
lib/presentation/user_screen.dart:
• class UserScreen (1 entry):
VERY HIGH This class is coupled with 1 other class, which exceeds the threshold of 0.

Scanned files: 2 | Scanned classes: 2
coupling-between-object-classes Min: 0.0 Max: 1.0 Avg: 0.5

$ dcm analyze lib --reporter=console

lib/presentation/user_screen.dart (1 issue):
WARNING This import is not allowed. Presentation cannot import data layer directly.
at lib/presentation/user_screen.dart:13:1
avoid-banned-imports

Scanned files: 2
warning issues: 1

Metrics measure the coupling level, while rules (like avoid-banned-imports) prevent specific architectural violations. Use dcm analyze lib to check rules alongside metrics.

When coupling violations correlate with import rule violations, you've found a structural issue. The metric tells you something's wrong; the rule tells you what specifically.

Advanced Automation: Discovering Hidden Patterns

Metrics give you raw data, but custom automation reveals patterns your team might never notice manually. While dashboards show individual violations, some problems are invisible until you look at them the right way.

For example: a file that violates several metrics at once (high coupling, high complexity, deep nesting) is usually signaling a deeper structural issue. These architectural hotspots are exactly where refactoring delivers the most value, but they rarely stand out in standard reports.

JSON output enables you to build detectors for complex scenarios like this:

dcm calculate-metrics lib --reporter=json --output-to=metrics.json
JSON Output Format (expand for schema)

The JSON format provides structured data for building custom automation:

{
  "formatVersion": 12,
  "timestamp": "2026-01-10 00:07:25.000",
  "summary": [
    { "title": "Total metric high severity violations", "value": 9 },
    { "title": "Scanned files", "value": 47 },
    {
      "title": "CBO",
      "value": 5,
      "avg": 4.67,
      "max": 32,
      "min": 0,
      "sum": 869
    },
    {
      "title": "CYCLO",
      "value": 0,
      "avg": 1.22,
      "max": 6,
      "min": 0,
      "sum": 376
    }
  ],
  "metricResults": [
    {
      "path": "lib/widgets/drawing_pad.dart",
      "issues": [
        {
          "id": "coupling-between-object-classes",
          "message": "This class is coupled with 32 other classes...",
          "level": "high",
          "threshold": 20,
          "value": 32,
          "effortInMinutes": 15,
          "declarationName": "_DrawingPadScreenState",
          "declarationType": "class",
          "location": { "startLine": 29, "endLine": 154 }
        }
      ]
    }
  ]
}

Key fields available for automation:

  • level: Filter by severity (high, very-high) to prioritize issues
  • effortInMinutes: Aggregate across issues for technical debt estimates
  • declarationName: Identify specific functions or classes needing attention
  • summary: Track aggregate metrics (average, min, max, sum) over time

Example: Detecting Architectural Hotspots

Here's a practical detector that finds the files your team should focus on: the ones with multiple violations that signal structural problems:

import 'dart:convert';
import 'dart:io';

void main() async {
  final json = jsonDecode(await File('metrics.json').readAsString());
  final results = json['metricResults'] as List;

  // Find architectural hotspots: files with multiple violations,
  // sorted by total estimated effort (descending).
  final hotspots = results
      .where((r) => (r['issues'] as List).length > 1)
      .map((r) => {
            'path': r['path'],
            'violations': (r['issues'] as List).length,
            'effort': (r['issues'] as List)
                .fold<int>(0, (sum, i) => sum + (i['effortInMinutes'] as int)),
          })
      .toList()
    ..sort((a, b) => (b['effort'] as int).compareTo(a['effort'] as int));

  print('Architectural hotspots (multiple violations = structural issues):');
  for (final file in hotspots) {
    print('  ${file['path']}: ${file['violations']} metric violations, '
        '${file['effort']} min effort to fix');
  }

  // Calculate total technical debt across all files
  final totalEffort = results.fold<int>(0, (sum, r) {
    return sum +
        (r['issues'] as List)
            .fold<int>(0, (s, i) => s + (i['effortInMinutes'] as int));
  });
  print('\nTotal estimated effort: $totalEffort minutes');
}

Output:

Architectural hotspots (multiple violations = structural issues):
lib/widgets/drawing_pad.dart: 4 metric violations, 180 min effort to fix
lib/services/sync_engine.dart: 3 metric violations, 120 min effort to fix

Total estimated effort: 485 minutes

Files with multiple violations rarely stand out in normal dashboards, but they're exactly where architectural refactoring delivers the most value. Automate their detection and they become impossible to ignore; you've turned raw metrics into actionable insights.

Configuration Reference

For complete documentation on configuration options, see:

What's Next?